Sample Essay on Universal Approximation Theorem
Posted: Aug 09, 2014
The universal approximation theorem is a mathematical result commonly cited in the study of artificial neural networks. It states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of Rn, under mild assumptions on the activation function.
This implies that a simple neural network can represent a wide range of interesting functions when it is given appropriate parameters. However, the theorem says nothing about whether those parameters can be learned algorithmically.
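The theorem can be illustrated with a small numerical sketch. The example below is my own illustration, not from the article: it builds a one-hidden-layer sigmoid network on a compact interval and fits only the linear output layer (by least squares, with randomly fixed hidden weights) to approximate sin(x). The target function, layer sizes, and random scales are all assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 200
x = np.linspace(-np.pi, np.pi, 400)[:, None]   # a compact subset of R^1
target = np.sin(x).ravel()                     # continuous function to approximate

W = rng.normal(scale=5.0, size=(1, n_hidden))  # fixed random hidden weights
b = rng.normal(scale=5.0, size=n_hidden)       # fixed random hidden biases
H = sigmoid(x @ W + b)                         # hidden-layer activations

# Linear output unit, matching the theorem's assumption of linear outputs.
c, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ c

print("max abs error:", np.abs(approx - target).max())
```

With enough hidden units the maximum error on the interval becomes very small, which is exactly the kind of approximation the theorem guarantees exists; note that fitting only the output layer sidesteps the (harder) question of learning the hidden weights.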
George Cybenko proved one version of this theorem in 1989 for sigmoid activation functions. In 1991, Kurt Hornik showed that it is not the specific choice of activation function but the multilayer feed-forward architecture itself that gives neural networks the potential to be universal approximators. The output units are assumed to be linear.
Hornik and his colleagues proved that a neural network with a single hidden layer can approximate any continuous function on a compact domain over the reals to arbitrary precision. This convinced many people that neural networks could work for all sorts of applications, provided the network could be made to learn the desired function.
However, some experts note that this reading was misleading. Because the theorem shows that a single hidden layer suffices to approximate a function, it insinuates that a single-hidden-layer network is suitable for virtually any application; yet the theorem is silent about the efficiency of such a representation. The universal approximation construction works by allocating a neuron to each small volume of the input space and learning the correct answer for that volume.
The problem is that the number of these volumes grows exponentially with the dimensionality of the input space. This implies that the construction behind Hornik's result is exponentially inefficient, which makes it practically useless. Nevertheless, it is important to note that deep neural networks should not be regarded merely as universal approximators: they can efficiently represent functions that a small shallow network cannot.
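The exponential blow-up described above is easy to make concrete. The following is my own toy calculation, not from the article: if the neuron-per-volume construction covers the unit cube [0, 1]^d with cells of side 0.1, it needs 10**d cells, and hence on the order of 10**d hidden units.

```python
# Toy illustration of the curse of dimensionality in the
# neuron-per-volume universal approximation construction:
# cells needed to tile [0, 1]^d at resolution 0.1 per axis.
bins_per_dim = 10

for d in (1, 2, 3, 10, 100):
    cells = bins_per_dim ** d        # grows exponentially in d
    print(f"d={d:>3}: {cells} cells (roughly that many hidden units)")
```

Already at d = 100, a modest input dimension by modern standards, the required number of units (10**100) exceeds the number of atoms in the observable universe, which is why this construction is considered useless in practice.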
This issue has led many researchers to question how much the universal approximation theorem really establishes. Some experts have argued that Hornik and his team missed the most significant feature of neural networks: depth. Deep neural networks can represent functions that are computed in several sequential steps, whereas a single-hidden-layer network is analogous to a constant-depth threshold circuit, and constant-depth circuits are known to be unable to compute some interesting functions efficiently.
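A standard way to illustrate the power of depth (my own example, not from the article) is function composition: iterating the piecewise-linear tent map t(x) = 2*min(x, 1 - x) k times yields a sawtooth with 2**(k-1) peaks. A depth-k network needs only a constant number of units per layer to compute each composition step, while a single-hidden-layer network needs a number of units proportional to the number of linear pieces, i.e. exponential in k.

```python
import numpy as np

def tent(x):
    # Two linear pieces: rises on [0, 0.5], falls on [0.5, 1].
    return 2.0 * np.minimum(x, 1.0 - x)

def iterated_tent(x, k):
    # Each composition is one "step" of computation (one extra layer).
    for _ in range(k):
        x = tent(x)
    return x

x = np.linspace(0.0, 1.0, 100001)
for k in (1, 2, 3, 8):
    y = iterated_tent(x, k)
    # Count strict local maxima of the sampled piecewise-linear curve.
    peaks = int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))
    print(f"depth k={k}: {peaks} oscillation peaks")
```

The number of oscillations doubles with each added composition, so the function's complexity grows exponentially in depth while the description length grows only linearly, which is the essence of the "computation in several steps" argument.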
Order an essay on universal approximation theorem online
Are you worried about beating the submission deadline for your universal approximation theorem essay? Do you need the help of a professional essay writer to submit a superior-quality essay within the deadline set by your lecturer or college teacher? Then place an order for your essay at HelpInWriting.com and we will help you with it.
We guarantee that once you order your universal approximation theorem essay with us, we will deliver a superior-quality essay within your timeline.
Sources
http://en.wikipedia.org/wiki/Universal_approximation_theorem
http://theneural.wordpress.com/2013/01/07/universal-approximation-and-depth/
https://www.hindawi.com/journals/afs/2013/136214/
I am a writer at HomeworkWriters.net, where we offer credible help in academic research and assignment writing.