Feedforward neural networks with ReLU activation functions are linear splines
In this thesis, the approximation properties of feedforward artificial neural networks with one hidden layer and ReLU activation functions are examined. It is shown that functions of this kind are linear splines and that the number of spline knots depends on the number of nodes in the network; in fact, an upper bound on the number of knots can be derived. Furthermore, the positioning of the knots depends on
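The claim can be illustrated with a small numerical sketch (not taken from the thesis; all weights below are made-up values). A one-hidden-layer network in one dimension, f(x) = Σᵢ vᵢ · max(0, wᵢx + bᵢ) + c, changes slope only where a hidden unit switches between its zero and linear regime, i.e. at x = −bᵢ/wᵢ. With three hidden nodes the result is therefore a linear spline with at most three knots:

```python
import numpy as np

# Hypothetical 1-D network with one hidden layer of 3 ReLU units:
#   f(x) = sum_i v_i * max(0, w_i * x + b_i) + c
w = np.array([1.0, -2.0, 0.5])   # hidden-layer weights (assumed values)
b = np.array([-1.0, 1.0, 2.0])   # hidden-layer biases
v = np.array([2.0, 1.0, -3.0])   # output-layer weights
c = 0.5                          # output bias

def f(x):
    # Evaluate all hidden units via broadcasting, then combine them.
    return np.maximum(0.0, np.outer(x, w) + b) @ v + c

# Each ReLU unit kinks at x = -b_i / w_i, so there are at most
# as many spline knots as hidden nodes (here: 3).
knots = np.sort(-b / w)
print(knots)

# Between two adjacent knots every unit stays in a fixed regime,
# so f is affine there: the second difference on an equispaced
# grid inside one segment vanishes.
xs = np.array([0.6, 0.7, 0.8])   # all inside the segment (0.5, 1.0)
ys = f(xs)
print((ys[2] - ys[1]) - (ys[1] - ys[0]))   # ~0: f is linear here
```

The second-difference check makes the spline structure concrete: moving a weight wᵢ or bias bᵢ moves the corresponding knot −bᵢ/wᵢ, which is the dependence of knot positions on the network parameters that the abstract refers to.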