How are Artificial Neural Networks and the Biological Neural Networks similar and different?

Artificial Intelligence Asked by Andreas Storvik Strauman on January 20, 2021

I’ve heard multiple times that “Neural Networks are the best approximation we have to model the human brain”, and I think it is commonly known that Neural Networks are modelled after our brain.

I strongly suspect that this model has been simplified, but how much?

How much does, say, the vanilla NN differ from what we know about the human brain? Do we even know?

3 Answers

We all know that artificial neural networks (ANNs) are inspired by biological neural networks (BNNs), but most ANNs are only loosely based on them.

We can analyze the differences and similarities between ANNs and BNNs in terms of the following components.

Neurons

The following diagram illustrates a biological neuron (screenshot of an image from this book).

[image: diagram of a biological neuron]

The following one illustrates a typical artificial neuron of an ANN (screenshot of figure 1.14 of this book).

[image: diagram of a typical artificial neuron]

Initialization

In the case of an ANN, the initial state and the weights are assigned randomly. In a BNN, by contrast, neither the strengths nor the structure of the connections between neurons starts out random: the initial state is genetically determined, a byproduct of evolution.
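
As a minimal sketch of what "assigned randomly" means in practice (the layer sizes here are hypothetical, and the He-style scaling is just one common choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 784-128-10 network: every weight starts as pure noise.
layer_sizes = [784, 128, 10]
weights = [rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

# Nothing task-specific is encoded yet -- unlike a BNN, whose initial
# wiring is shaped by the genome before any learning takes place.
print([w.shape for w in weights])   # [(128, 784), (10, 128)]
```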

Learning

In BNNs, learning comes from the interconnections between myriad neurons in the brain. These interconnections change configuration when the brain experiences new stimuli: new connections form, existing connections strengthen, and old, unused ones are removed.
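
A common computational abstraction of this strengthening is Hebbian learning ("cells that fire together wire together"); the toy update below illustrates that idea only, not anything the brain literally computes:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    # Strengthen each connection in proportion to correlated activity:
    # "cells that fire together wire together".
    return w + lr * np.outer(post, pre)

w = np.zeros((3, 4))                   # connection strengths, initially zero
pre = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity
post = np.array([1.0, 1.0, 0.0])       # postsynaptic activity
print(hebbian_update(w, pre, post))    # only co-active pairs have grown
```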

ANNs are usually trained from scratch with a fixed topology (recall that the topology changes in BNNs), although the topology of an ANN can also change (see, for example, NEAT or some continual learning techniques), depending on the problem being solved. The weights of an ANN are randomly initialized and then adjusted by an optimization algorithm.
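
In its simplest form, that optimization is plain gradient descent; a minimal sketch with a single weight and a single made-up training example:

```python
# Learn y = 2x with one weight and a squared-error loss.
w = -1.0                          # stand-in for a random initial weight
x, y = 3.0, 6.0                   # one hypothetical training example

for _ in range(100):
    y_hat = w * x
    grad = 2 * (y_hat - y) * x    # d/dw of (y_hat - y)^2
    w -= 0.01 * grad              # the optimization step
print(round(w, 3))                # ~2.0
```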

Number of neurons

Another difference (although the gap keeps shrinking) is the number of neurons in the network. A typical ANN consists of hundreds, thousands, or millions of neurons, and exceptional models (e.g. GPT-3, if one counts its parameters) reach into the billions. The BNN of the human brain consists of tens of billions of neurons (roughly 86 billion, by common estimates), and this number varies from animal to animal.

Further reading

You can find more information here or here.

Correct answer by Ugnes on January 20, 2021

The common statement that Artificial Neural Networks are inspired by the neural structure of brains is only partially true.

It is true that Norbert Wiener, Claude Shannon, John von Neumann, and others began the path toward practical AI by developing what they then called the electronic brain. It is also true that

  • artificial networks have functions called activations,
  • are wired in many-to-many relationships like biological neurons, and
  • are designed to learn an optimal behavior,

but that is the extent of the similarity. Cells in artificial networks such as MLPs (multilayer perceptrons) or RNNs (recurrent neural networks) are not like cells in brain networks.

The perceptron, the first software stab at arrays of things that activate, was not an array of neurons. It was an application of basic feedback involving gradients, which had been in common use in engineering ever since James Watt's centrifugal governor was mathematically modeled by James Clerk Maxwell. Successive approximation, a principle that had been in use for centuries, was employed to incrementally update an attenuation matrix. The matrix was multiplied by the vector feeding an array of identical activation functions to produce the output. That's it.
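
That scheme is small enough to write out; a sketch following the answer's own description (the function names are mine, and the hard threshold is one classic choice of activation):

```python
import numpy as np

def perceptron_output(W, x):
    # The attenuation matrix multiplied by the input vector, fed through
    # an array of identical activation functions (a hard threshold here).
    return (W @ x > 0).astype(float)

def perceptron_update(W, x, target, lr=0.1):
    # Successive approximation: nudge the matrix in proportion to the error.
    error = target - perceptron_output(W, x)
    return W + lr * np.outer(error, x)

W = np.zeros((2, 3))    # 3 inputs, 2 outputs
W = perceptron_update(W, np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0]))
```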

The projection into a second dimension, a multi-layer topology, was made possible by the realization that the Jacobian could be used to produce a corrective signal which, when distributed appropriately as negative feedback to the layers, could tune the attenuation matrices of a sequence of perceptrons so that the network as a whole converged on satisfactory behavior. In such a sequence of perceptrons, each element is called a layer. The feedback mechanism is now called backpropagation.
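
A minimal two-layer sketch of that corrective signal being distributed backward (a toy example, not any particular library's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
x = rng.standard_normal(3)
target = np.array([1.0, 0.0])

# Forward pass through two perceptrons in sequence (two layers).
h = np.tanh(W1 @ x)
y = W2 @ h

# Backward pass: the chain rule distributes the error layer by layer.
dy = y - target                      # gradient of 0.5 * ||y - target||^2
dW2 = np.outer(dy, h)
dh = W2.T @ dy                       # corrective signal for the lower layer
dW1 = np.outer(dh * (1 - h**2), x)   # tanh'(z) = 1 - tanh(z)^2

W1 -= 0.1 * dW1                      # negative feedback tunes both matrices
W2 -= 0.1 * dW2
```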

The mathematics used to correct the network is called gradient descent because it resembles a dehydrated blind man using the gradient of the terrain to find water, and the pitfalls are similar too: he might settle into a local minimum (a low point) before he finds fresh water, converging on death rather than hydration.
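
The blind man's predicament is easy to reproduce; on the made-up terrain below, gradient descent settles into the shallower of two valleys and never finds the deeper one:

```python
# A nonconvex "terrain" with two valleys; the right one is shallower.
f  = lambda x: x**4 - 3 * x**2 + x
df = lambda x: 4 * x**3 - 6 * x + 1

x = 0.5                  # where the blind man starts matters
for _ in range(500):
    x -= 0.01 * df(x)
print(round(x, 2))       # ~1.13, the local minimum; the deeper valley
                         # near x = -1.30 is never reached
```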

The newer topologies are additions of already-existing work. Convolution, long used in digital image restoration, mail sorting, and graphics applications, was stacked into networks to create the CNN family of topologies, and the ingenious use of something like the chemical equilibrium of first-year chemistry to pit optimization criteria against each other created the GAN family.
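
The convolution at the heart of the CNN family is itself only a few lines; a sketch (technically cross-correlation, which is what CNN layers actually compute):

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image and sum the elementwise products.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5)); image[:, 2] = 1.0   # a vertical stripe
edge = np.array([[1.0, 0.0, -1.0]] * 3)       # a simple edge detector
print(conv2d(image, edge))                    # responds at the stripe's edges
```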

Deep is simply a synonym for numerous in most AI contexts. It sometimes implies complexity in the higher-level topology (above the vector-matrix products, the activations, and the convolutions).

Active research is ongoing among those who are aware of how different these deep networks are from what neuroscientists discovered decades ago in mammalian brain tissue, and more differentiators are being found today as the brain's learning circuitry and neurochemistry are investigated from the genomic perspective.

  • Neural plasticity ... changes in circuit topology due to dendrite and axon growth, death, redirection, and other morphing
  • Topological complexity ... large numbers of axons crisscross without interacting and are deliberately shielded from cross-talk (kept independent), most likely because letting them connect would be disadvantageous [note 1]
  • Chemical signaling ... mammalian brains have dozens of neurotransmitter and neuroregulation compounds that have regional effects on circuitry [note 2]
  • Organelles ... living cells have many substructures, and several types are known to have complex relationships with signal transmission in neurons
  • An entirely different form of activation ... activations in common artificial neural nets are simply functions with ordinal scalars for both range and domain, whereas mammalian neurons operate as a function of both the amplitude and the relative temporal proximity of incoming signals (see the sketch after the notes) [note 3]

[1] Topology is, ironically, both a subset of architecture (in the fields of building design, network provisioning, WWW analysis, and semantic networks) and yet, far more than architecture, at the radical center of both AI mathematics and effective actualization in control systems.

[2] The role of chemistry may be essential to learning social and reproductive behavior that interrelates with DNA information propagation, linking learning at the level of an ecosystem and learning in the brain in complex ways. Furthermore, the split between long-term and short-term learning divides the brain's learning into two distinct capabilities.

[3] The impact of the timing of incoming signals on biological neuron activation is understood to some degree, but it may affect much more than neuron output. It may affect plasticity and chemistry too, and the organelles may play a role in that.
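
The contrast in the last bullet can be made concrete with a leaky integrate-and-fire neuron, a standard computational-neuroscience abstraction (the parameters below are arbitrary):

```python
import math

def lif_neuron(spike_times, weight=1.0, tau=20.0, threshold=1.5, t_max=100):
    # Toy leaky integrate-and-fire neuron (1 ms steps): whether it fires
    # depends on *when* inputs arrive, not just on their summed strength.
    v, fired = 0.0, []
    for t in range(t_max):
        v *= math.exp(-1.0 / tau)   # membrane potential leaks away
        if t in spike_times:
            v += weight             # an incoming spike adds charge
        if v >= threshold:
            fired.append(t)
            v = 0.0                 # reset after a spike
    return fired

print(lif_neuron({10, 12}))   # inputs close in time -> the neuron fires
print(lif_neuron({10, 90}))   # the same two inputs, far apart -> silent
```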

Summary

What machine learning libraries do is as much simulating the human brain as Barbie and Ken dolls simulate a real couple.

Nonetheless, remarkable things are arising in the field of deep learning, and it would not surprise me if autonomous vehicles become fully autonomous in our lifetimes. Nor would I recommend that students become developers: computers will probably code much better than humans, orders of magnitude faster, and possibly soon. Some tasks are not of the kind that biology has evolved to do, and for those, computers can exceed human capabilities after only a few decades of research, eventually surpassing human performance by several orders of magnitude.

Answered by Douglas Daseeco on January 20, 2021

They are not close, not anymore!

[Artificial] neural nets were vaguely inspired by the connections we had observed between the neurons of a brain. Initially, there probably was an intention to develop ANNs that approximate biological brains. However, the modern working ANNs whose applications we see across various tasks are not designed to provide a functional model of an animal brain. As far as I know, no study has claimed to find something new about a biological brain by looking into the connections and weight distributions of, say, a CNN or RNN model.

Answered by Borhan Kazimipour on January 20, 2021
