A visual proof that neural nets can compute any function

One of the most striking facts about neural networks is that they can compute any function at all. That is, suppose someone hands you some complicated, wiggly function, $f(x)$.

No matter what the function, there is guaranteed to be a neural network such that for every possible input $x$, the value $f(x)$ (or some close approximation) is output from the network.

This result holds even if the function has many inputs, $f = f(x_1, \ldots, x_m)$, and many outputs. Summing up, a more precise statement of the universality theorem is that neural networks with a single hidden layer can be used to approximate any continuous function to any desired precision. To understand why the universality theorem is true, let's start by understanding how to construct a neural network which approximates a function with just one input and one output.
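To make the flavor of that construction concrete, here is a minimal NumPy sketch (not the chapter's interactive figures): pairs of steep sigmoid neurons in a single hidden layer behave like near-step functions, and their weighted differences form "bumps" that tile the interval. The wiggly target below is the example function used in the chapter; the weight value, bump count, and function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings at the very steep weights used below.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500.0, 500.0)))

def f(x):
    # The complicated, wiggly example function from the chapter.
    return 0.2 + 0.4 * x**2 + 0.3 * x * np.sin(15 * x) + 0.05 * np.cos(50 * x)

def hidden_layer_approximation(x, n_bumps=200, w=10_000.0):
    """Approximate f on [0, 1] with one hidden layer of sigmoid neurons.

    Each sub-interval gets a pair of neurons sharing a very large weight w,
    so each sigmoid is nearly a step function. The difference of the pair
    is a "bump" of height f(midpoint); summing the bumps gives a
    step-function approximation to f. (Illustrative values of w and
    n_bumps; the chapter develops this construction visually.)
    """
    edges = np.linspace(-0.05, 1.05, n_bumps + 1)  # pad so steps saturate on [0, 1]
    out = np.zeros_like(x, dtype=float)
    for left, right in zip(edges[:-1], edges[1:]):
        height = f((left + right) / 2.0)
        # One neuron steps up at `left`, its partner steps down at `right`.
        out += height * (sigmoid(w * (x - left)) - sigmoid(w * (x - right)))
    return out

xs = np.linspace(0.0, 1.0, 1000)
max_err = np.max(np.abs(hidden_layer_approximation(xs) - f(xs)))
print(f"max |network output - f(x)| on [0, 1]: {max_err:.3f}")
```

Making $w$ larger sharpens the steps, and adding more bumps shrinks the per-interval error, which is the sense in which the approximation can be pushed to any desired precision in this one-input, one-output case.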

It turns out that this is the core of the problem of universality.

Source: neuralnetworksanddeeplearning.com