Biological and artificial neurons

Before going ahead, first we will explore what neurons are and how the neurons in our brain actually work, and then we will learn about artificial neurons.

A neuron can be defined as the basic computational unit of the human brain. Neurons are the fundamental units of our brain and nervous system. Our brain encompasses approximately 100 billion neurons. Neurons are connected to one another through structures called synapses, and they are responsible for receiving input from the external environment through our sensory organs, for sending motor instructions to our muscles, and for performing other activities.

A neuron can also receive inputs from other neurons through branchlike structures called dendrites. These inputs are strengthened or weakened; that is, they are weighted according to their importance, and then they are summed together in the cell body, called the soma. From the cell body, these summed inputs are processed and travel along the axon to be sent on to other neurons.

The basic single biological neuron is shown in the following diagram:

Now, let's see how artificial neurons work. Let's suppose we have three inputs $x_1$, $x_2$, and $x_3$, to predict output $y$. These inputs are multiplied by weights $w_1$, $w_2$, and $w_3$ and are summed together as follows:

$z = x_1 \cdot w_1 + x_2 \cdot w_2 + x_3 \cdot w_3$

But why are we multiplying these inputs by weights? Because all of the inputs are not equally important in calculating the output $y$. Let's say that $x_1$ is more important in calculating the output compared to the other two inputs. Then, we assign a higher value to $w_1$ than to the other two weights. So, upon multiplying weights with inputs, $x_1 \cdot w_1$ will have a higher value than the other two terms. In simple terms, weights are used for strengthening the inputs. After multiplying the inputs with the weights, we sum them together and add a value called bias, $b$:

$z = (x_1 \cdot w_1 + x_2 \cdot w_2 + x_3 \cdot w_3) + b$
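To make this concrete, here is a minimal sketch of the weighted sum plus bias in NumPy; the input, weight, and bias values are made up purely for illustration:

```python
import numpy as np

# three example inputs and their weights (arbitrary values, for illustration only)
x = np.array([0.5, 0.2, 0.1])   # inputs x1, x2, x3
w = np.array([0.9, 0.3, 0.4])   # weights w1, w2, w3
b = 0.1                         # bias

# weighted sum plus bias: z = x1*w1 + x2*w2 + x3*w3 + b
z = np.dot(x, w) + b
print(z)  # ~0.65
```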

If you look at the preceding equation closely, it may look familiar. Doesn't it look like the equation of linear regression? Isn't it just the equation of a straight line? We know that the equation of a straight line is given as:

$z = mx + b$

Here, $m$ is the weight (coefficient), $x$ is the input, and $b$ is the bias (intercept).

Well, yes. Then, what is the difference between neurons and linear regression? In neurons, we introduce non-linearity to the result, $z$, by applying a function $f(\cdot)$ called the activation or transfer function. Thus, our output becomes:

$\hat{y} = f(z)$

A single artificial neuron is shown in the following diagram:

So, a neuron takes the input, $x$, multiplies it by weights, $w$, and adds bias, $b$, to form $z = xw + b$, and then we apply the activation function on $z$ and get the output, $\hat{y} = f(z)$.
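Putting the pieces together, the following is a small sketch of a single artificial neuron's forward pass. It assumes the sigmoid function as the activation and uses arbitrary example values; any other activation function could be used in its place:

```python
import numpy as np

def sigmoid(z):
    """Example activation function: squashes z into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """Single artificial neuron: weighted sum plus bias, then activation."""
    z = np.dot(x, w) + b   # z = x1*w1 + x2*w2 + x3*w3 + b
    return sigmoid(z)      # y_hat = f(z)

# arbitrary example values, for illustration only
x = np.array([0.5, 0.2, 0.1])
w = np.array([0.9, 0.3, 0.4])
b = 0.1

print(neuron(x, w, b))  # ~0.657
```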
