Cross Validated
Asked by ofow on January 3, 2022
Let $m \geq 1$ be an integer and let $F \in \mathbb{R}[x_1, \dots, x_m]$ be a polynomial. I want to approximate $F$ on the unit hypercube $[0, 1]^m$ by a (possibly multilayer) feedforward neural network, with $\tanh$ as the activation function at every hidden unit.
Let $\varepsilon > 0$ be a real number. If I want the approximation to deviate from $F$ by less than $\varepsilon$ in the $L^2([0,1]^m)$ norm, what is the smallest possible number of non-zero weights?
It is kind of stupid to approximate a function that is known to be a polynomial by a neural network, but I just wanted to get more quantitative insight into the universal approximation theorem (and polynomials seem to be the most accessible class of functions).
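For concreteness, here is a minimal sketch of the setup I have in mind. Everything specific in it is my own illustrative choice, not part of the question: the example polynomial $F(x_1, x_2) = x_1^2 x_2 + 3x_1$, the single hidden layer of width 32, and the use of PyTorch. It fits the $\tanh$ network by least squares on uniform samples from the cube and estimates the $L^2$ error by Monte Carlo.

```python
# Minimal sketch (assumptions: PyTorch, an example polynomial, one hidden
# tanh layer of width H). Fits the network by least squares on uniform
# samples from [0, 1]^m and estimates the L^2([0,1]^m) error by Monte Carlo.
import torch

torch.manual_seed(0)
m, H = 2, 32                       # input dimension, hidden width (assumed)

def F(x):                          # example polynomial (hypothetical choice)
    return (x[:, 0]**2 * x[:, 1] + 3 * x[:, 0]).unsqueeze(1)

net = torch.nn.Sequential(         # one hidden layer, tanh activations
    torch.nn.Linear(m, H),
    torch.nn.Tanh(),
    torch.nn.Linear(H, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(5000):
    x = torch.rand(256, m)         # uniform samples from the unit cube
    loss = torch.mean((net(x) - F(x))**2)  # empirical squared L^2 error
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x = torch.rand(100_000, m)     # Monte Carlo estimate of ||net - F||_{L^2}
    l2_err = torch.sqrt(torch.mean((net(x) - F(x))**2)).item()
    nnz = sum(int((p != 0).sum()) for p in net.parameters())

print(f"estimated L2 error: {l2_err:.4f}, non-zero weights: {nnz}")
```

Note that the non-zero-weight count at the end is only meaningful after some explicit sparsification (e.g. pruning small weights to exactly zero); a densely trained network will have essentially all weights non-zero, so this sketch only probes the question empirically rather than answering it.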