Differentiable programming is getting a little exciting nowadays. The core idea is that you can build a language from the ground up that simultaneously supports both (a) computing direct numerical results and (b) cheaply computing the derivative of those results. The most popular implementation of this idea is TensorFlow. TensorFlow is, by most accounts, a tool for constructing neural networks. The principal way you build a neural network is by designing its architecture as a series of interacting “layers” that compute a forward prediction from input data; you then train that architecture through a process known as _gradient descent_, which essentially requires computing the derivative of the forward prediction over and over and over.
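To make the "cheap derivative" idea concrete, here is a minimal sketch of forward-mode automatic differentiation using dual numbers (this is an illustration of the general technique, not how TensorFlow is implemented; TensorFlow primarily uses reverse-mode differentiation on a computation graph). Each value carries a pair `(value, derivative)`, so evaluating a function also yields its derivative at the same point for roughly the same cost:

```python
# Sketch of forward-mode automatic differentiation with dual numbers.
# Each Dual carries a value and the derivative of that value with
# respect to the input, propagated by the ordinary calculus rules.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def f(x):
    # f(x) = 3x^2 + 2x + 1, so f'(x) = 6x + 2
    return 3 * x * x + 2 * x + 1


y = f(Dual(2.0, 1.0))  # seed the derivative dx/dx = 1
print(y.val, y.der)    # f(2) = 17.0, f'(2) = 14.0

# The derivative is exactly what gradient descent consumes:
# repeatedly step against f' to walk toward a minimum of f.
x = 5.0
for _ in range(100):
    x -= 0.05 * f(Dual(x, 1.0)).der
print(x)  # converges near the minimizer x = -1/3
```

The same two-in-one evaluation is what a differentiable language provides at scale: write the forward computation once, and the derivative needed for each gradient-descent step comes along nearly for free.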