Background of Neural Networks

A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:

– Knowledge is acquired by the network from its environment through a learning process.

– The interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

The primary significance of the neural network is its ability to learn from its environment and to improve its performance through learning. It learns about its environment through an interactive process of adjustments applied to its synaptic weights and biases. The network becomes more knowledgeable about its environment after each iteration of the learning process. The definition of the learning process implies the following sequence of events:

• The neural network is stimulated by an environment.

• The neural network undergoes changes in its parameters as a result of this stimulation.

• The neural network responds in a new way to the environment because of the changes that have occurred in its internal structure.
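As a concrete illustration of this stimulus–adjustment–response cycle, the sketch below trains a single neuron with a perceptron-style learning rule. The AND-gate data, the 0.5 threshold, and the learning rate are assumptions made for the example; they are not taken from the text above:

```python
# Stimulus -> parameter change -> new response, repeated over iterations.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]   # environment (an AND gate, assumed)
targets = [0, 0, 0, 1]
w = [0.0, 0.0]   # synaptic weights
b = 0.0          # bias
rate = 0.1       # learning rate (assumed)

def respond(x):
    # The network's current response to a stimulus x (threshold unit).
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0.5 else 0

for epoch in range(20):               # iterations of the learning process
    for x, t in zip(inputs, targets):
        error = t - respond(x)        # mismatch with the environment
        w[0] += rate * error * x[0]   # adjust synaptic weights ...
        w[1] += rate * error * x[1]
        b    += rate * error          # ... and the bias

print([respond(x) for x in inputs])  # after learning: [0, 0, 0, 1]
```

After training, the same stimuli produce new responses, which is precisely the three-step sequence listed above.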

The fundamental processing element of a neural network is a neuron. This building block of human awareness encompasses a few general capabilities. Basically, a biological neuron receives inputs from other sources, combines them in some way, performs a generally nonlinear operation on the result, and then outputs the final result. Figure 3.9 shows the relationship of these four parts.

Within humans there are many variations on this basic type of neuron, further complicating man’s attempts at electrically replicating the process of thinking. Yet, all natural neurons have the same four basic components. These components are known by their biological names – dendrites, soma, axon, and synapses. Dendrites are hair-like extensions of the soma, which act like input channels. These input channels receive their input through the synapses of other neurons. The soma then processes these incoming signals over time. The soma then turns that processed value into an output, which is sent out to other neurons through the axon and the synapses.

But currently, the goal of artificial neural networks is not the grandiose recreation of the brain. On the contrary, neural network researchers are seeking an understanding of nature's capabilities for which people can engineer solutions to problems that have not been solved by traditional computing. To do this, the basic unit of neural networks, the artificial neuron, simulates the four basic functions of natural neurons. Figure 3.10 shows a fundamental representation of an artificial neuron.

Fig. 3.10 A basic artificial neuron: summation I = Σᵢ wᵢxᵢ, followed by a transfer function Y = f(I)

In Fig. 3.10, the various inputs to the network are represented by the mathematical symbol x(n). Each of these inputs is multiplied by a connection weight; these weights are represented by w(n). In the simplest case, the products are simply summed, fed through a transfer function to generate a result, and then output. This process lends itself to physical implementation on a large scale in a small package. Such an electronic implementation is still possible with other network structures, which utilize different summing functions as well as different transfer functions.
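The computation of Fig. 3.10 can be sketched in a few lines of Python. The step transfer function and the particular weights and inputs below are illustrative assumptions, not values from the figure:

```python
# A minimal artificial neuron: weighted summation followed by a transfer function.
def neuron(inputs, weights, transfer):
    # Summation: I = sum over n of w(n) * x(n)
    I = sum(w * x for w, x in zip(weights, inputs))
    # Transfer: Y = f(I)
    return transfer(I)

# Illustrative transfer function: a simple threshold (step) unit (assumed).
def step(I, threshold=0.5):
    return 1 if I >= threshold else 0

y = neuron([1, 0, 1], [0.4, 0.9, 0.2], step)  # I = 0.4 + 0.2 = 0.6
print(y)  # 1, since 0.6 >= 0.5
```

Swapping in a different summing or transfer function (for instance, a smooth sigmoid in place of the step) changes the neuron's behavior without altering this basic structure, which is the flexibility the paragraph above refers to.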

Updated: June 16, 2015 — 2:35 pm