What are Artificial Neural Networks, what types are there, and how do they work?

Artificial Neural Networks

Artificial intelligence is spreading through the world around us, reaching into every field we know as well as the new ones that appear every day. As demand for it grows, we expect it to become more intelligent and to resemble humans and human intelligence more and more, which has driven scientists and AI engineers to try every possible way to make it smarter than it is now.


Artificial Neural Networks (ANNs), or Neural Networks (NNs) for short, were the starting point of the AI revolution and remain central to its development. Working on them is the path to a more intelligent AI, one that can deal with the world around it the way the human mind does.


What are Artificial Neural Networks? How do they interact with the world around them like the human mind does? How did they start, and where are they now? These are the questions this article will answer.

What are Artificial Neural Networks?

The conventional programming we know is the process of giving a computer or machine a set of steps or tasks that it must perform exactly in order to complete a task; this set of steps is called an algorithm.

In this case the computer is, in a sense, unintelligent. In other words, it does not understand what it is doing or why it is doing it; it simply carries out exactly what is asked of it, receiving inputs it does not understand and producing outputs whose usefulness it does not know.

This usually serves the purpose and more. The browser you are reading this article in, for example, does not automatically scroll to the bottom of the article without you doing so, and it cannot bookmark this article as a favorite unless you press the required buttons.

But scientists are not satisfied with this. They want to take software to another level, where programs can understand and make decisions on their own with minimal or no human intervention.

Netflix, for example, uses artificial intelligence to filter series and movies based on your watch history, to ensure that you will enjoy and watch the show it suggests to you. So how does it do that?

If you tried to do this with conventional programming, you would need to write extremely long code telling the algorithm: if the viewer chooses show A, recommend show B; if they like show C, suggest show D.

And so on for hundreds of thousands of shows, movies, and series, making it almost impossible to do. It would also be full of flaws, because it bases each suggestion on just one show or series, not on the user's activity and viewing history as a whole.

Here comes the role of neural networks and the smart algorithms built on them, which can understand, independently and without human intervention, what a person would like to watch from their viewing history and ratings, plus their similarity to other users' viewing patterns. All of this runs over more than 7,500 show categories to make recommendations for more than 100 million users on the platform.

Neural networks, such as the Convolutional Neural Network (CNN) that Netflix uses in its filtering system, can, by being fed data, understand the viewer's taste well enough to make an appropriate suggestion, one that keeps them watching shows and renewing their monthly subscription.

Scientists came up with the idea of neural networks by trying to understand the human mind, so psychologists and neuroscientists played a major role in the development of neural networks and artificial intelligence until recently. By understanding how a person sees, hears, or understands words and their meanings, we can create a computer simulation that can do the same.

This simulation relies on data and numbers, or more precisely, on statistical and predictive models, in order to reach the most accurate and correct results possible.

Through several mathematical methods, such as gradient descent and backpropagation, we can make these neural networks produce sensible results with acceptable error rates.
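To give a taste of the idea behind gradient descent (independent of any particular network), here is a minimal sketch: we repeatedly nudge a parameter a small step against the slope of a function until we land near its minimum. The function, step size, and step count here are illustrative choices of mine.

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Training a neural network applies the same idea, only with thousands or millions of weights instead of one variable, and with backpropagation computing the gradients.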

So we can describe neural networks as data-driven, and by understanding this we will understand how these networks simulate the human mind, and how they can do all the things we will review in this article.


Artificial neural networks mimic the neurons of the human body

In a complex experiment known as the neural rewiring experiment, scientists were able to repurpose neurons. For example, they connected the neurons responsible for hearing to the eye in an animal, and through observation they found that the animal's brain was able to train those neurons to perform the visual process effectively.

One of the most prominent results of this line of work was that scientists were able, with the help of a computer, to restore a paralyzed monkey's ability to move its wrists. They connected its wrists to its brain through a computer that translates brain activity into electrical signals the muscles understand, delivering them to the wrist muscles and moving them smoothly.


The results of this experiment gave artificial intelligence scientists the insight that human neurons learn to work efficiently through experience, and that they have great flexibility that can be shaped through training. This led them to simulate the property in a mathematical, computational way, in what are called the artificial neural networks this article discusses.

This is only a recent experiment that helped in the development of neural networks. At the beginning of this science, as neuroscience advanced and discovered how cells and biological neural networks work, computer scientists imitated those biological networks in order to create the Artificial Neural Networks (ANNs) we are talking about.


How does a biological neuron work?


The neuron works through a number of basic parts, which work together in harmony to achieve the results we hope for, namely:

  • Dendrites: branches whose primary function is to receive the inputs, which arrive in the form of electrical charges.
  • Cell Body (Soma): this part processes the values and charges coming from the dendrites and determines whether the signal travels onward through the axon or not.
  • Axon: this part transmits the nerve signal along the neuron.
  • Synapses: this part passes the cell's output onward, and determines the strength and effect of the signal on the cell to which it is transmitted.

Thus, the interconnection of many neurons creates a biological neural network that can perform a specific function with great speed and accuracy, by receiving and processing many inputs and then producing outputs in response.


In the next section, we will see how Artificial Neural Networks (ANNs) simulate this.

How does an Artificial Neuron work?


As we can see in the artificial neuron in the image, it likewise receives several inputs (labeled X in the image), which are scaled by weights (labeled W in the image), which we will talk about shortly.

Their weighted sum is then passed through a mathematical function called the sigmoid function (labeled σ in the image) to produce a value between zero and one; this value constitutes the cell's output (labeled Y in the image).

Through an assembly of tens, hundreds, or even thousands of these artificial neurons, what we know as an Artificial Neural Network (ANN) is formed.
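To make this concrete, here is a minimal sketch of a single artificial neuron; the input, weight, and bias numbers are made up for illustration. Inputs X are multiplied by weights W, a bias is added, and the sigmoid squashes the sum into a value between 0 and 1:

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through the sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Two inputs, two (made-up) weights, one bias -> one output Y in (0, 1).
y = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
```

A full network is nothing more than many of these neurons arranged in layers, each layer's outputs becoming the next layer's inputs.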

Comparison of Artificial Neural Networks (ANN) and Biological Neural Networks (BNN)

There are 3 main areas in which we can compare Biological Neural Networks (BNNs) and Artificial Neural Networks (ANNs).

These areas are:

1. Speed

Artificial neural networks (ANNs) are superior to biological neural networks (BNNs) on the speed factor: their response time is measured in nanoseconds, while that of biological networks is measured in milliseconds.

2. Processing

Biological Neural Networks (BNNs) have a huge processing capacity and a parallel pattern of processing; that is, they can process a large number of different inputs at the same time (Parallel Processing). Artificial Neural Networks (ANNs), by contrast, process their inputs serially (Serial Processing): the inputs pass through a number of successive steps.

3. Complexity

Biological neural networks (BNNs) are far more complex than artificial neural networks (ANNs).


History of Artificial Neural Networks (ANN)

The history of artificial neural networks is long and quite complex, and it includes the efforts of many scientists across disciplines who contributed to the highly intelligent, sophisticated neural networks we know today.

We will condense this long history into a number of milestones, for convenience, and see how the field developed:

1. In 1943, neurophysiologist Warren McCulloch and his colleague, the mathematician Walter Pitts, modeled a primitive artificial neuron for the first time.

It is worth noting that at that time artificial neurons were built in order to clarify and understand the workings of biological neurons, an approach known, given the knowledge of the era, as Connectionism.

2. In 1954, researchers finally built a computer implementation of these mathematical models, after many strenuous attempts in the forties and early fifties.

3. In 1958, the famous psychologist Frank Rosenblatt introduced the so-called Perceptron, an artificial neuron, building on the 1943 model mentioned above. What distinguished Rosenblatt's model is that it contained the idea of weights and was able to determine them successfully.

4. In 1959, the Stanford University researchers Bernard Widrow and Marcian Hoff built the first artificial neural network to be used in real life: a network designed to reduce noise on phone lines, which is still in use today.

5. In 1969, the book Perceptrons by the famous AI scientist Marvin Minsky (with Seymour Papert) came out. It demonstrated the limitations of the single-layer perceptron and cast doubt on training multi-layer networks, which set back neural networks, deep learning, and artificial intelligence in general for about a decade or more.

6. In 1982, this stagnation, part of what is known as the AI winter, ended through a paper presented by the eminent scientist John Joseph Hopfield, which described what came to be called the Hopfield Neural Network.


7. In 1985, the American Institute of Physics launched its annual "Neural Networks in Computing" meeting, followed in 1987 by the first annual conference on neural networks from the Institute of Electrical and Electronics Engineers (IEEE).

8. Thus interest in neural networks and artificial intelligence returned. Without it, our world would not look the way it does now, and we would not have seen such tremendous development in just three decades.


The most important terminology of artificial neural networks

Before moving on to the heart of the article and getting acquainted with how artificial neural networks work (from now on we will simply call them neural networks), we must familiarize ourselves with a number of terms we will be using. This will make the explanation of how they work, and of their types, easy and smooth, and make it easy to come back to these terms if you forget them or mix them up.

I do not advise you to memorize these terms; just read them until their turn comes. I do not expect you to understand much of them yet. Rather, they will be like a lamp that lights up in your mind when I mention them later, so do not worry if you do not understand them deeply from the beginning. The terms are:

1. Inputs

The inputs are the values received by the neurons. In order for neural networks to deal with the problems or phenomena we face, we must convert them into a numerical form the network can handle and interact with. A neural network receives at least two inputs, and the number of inputs may reach into the thousands or more.

For example, if we want to make a neural network see a black-and-white image, we make it treat each pixel separately.

If the image size is 100 × 100 pixels, it deals with about 10,000 pixels, and we give each pixel a value between zero and one according to its shade between white and black, so that a pixel of 0 is white, 0.5 is gray, and 1 is black.

But if we want to feed a color image to the neural network, we use the RGB model, which treats the colors as a mixture of Red, Green, and Blue, so that each pixel in the image carries three values describing how these three colors combine in it.
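A minimal sketch of this conversion (the pixel values below are made up): 8-bit grayscale values in 0–255 are scaled into the 0–1 range, and an RGB pixel becomes three such values. Whether 0 means white or black is just a convention:

```python
def normalize_gray(pixel):
    """Map an 8-bit grayscale value (0-255) to the range 0-1."""
    return pixel / 255

def normalize_rgb(pixel):
    """Map an (R, G, B) triple of 8-bit values to three 0-1 inputs."""
    r, g, b = pixel
    return (r / 255, g / 255, b / 255)

# A tiny 2x2 grayscale "image" becomes 4 input values for the network.
image = [0, 128, 255, 64]
inputs = [normalize_gray(p) for p in image]
```

A 100 × 100 grayscale image would yield 10,000 such inputs, and the same image in color would yield 30,000, which is why input layers grow so quickly with image size.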


2. Activation Value

The activation value is the output each cell of the neural network produces after it finishes its processing (including, for example, passing through a sigmoid function); it replaces the initial input in the layers that follow.

3. Outputs

The outputs are the final results of the neural network. For example, a neural network that recognizes handwritten digits will have ten output cells, one for each digit from 0 to 9.

When we feed it the image of any handwritten digit, the output is the recognized digit. If it is 7, for example, we find that the cell in the output layer that indicates the digit 7 is activated.

4. Weights

Weights are among the most important terms you must know to understand how neural networks work. Weights are the values by which the inputs are multiplied in order to determine their value, or importance, to the functioning of the neural network.

They are initially chosen randomly, using certain algorithms and mathematical methods. During the model's training phase, the weight values are refined so that the neural network's outputs become more accurate and reliable.

For example, assume a neuron has only two inputs. After training the model and reaching an acceptable error rate, each input is multiplied by its own weight, so the input multiplied by the larger weight is the more important one, which the neural network should pay attention to, unlike the input multiplied by the smaller weight, which the network partially disregards.

5. Bias value

The bias value is a constant, much like the intercept constant in the equation of a straight line. Through this value we can easily shift the shape of our results, so that applying the sigmoid function to them gives better outcomes.

6. Sigmoid Function

A sigmoid function is a mathematical function or equation by which we can convert activation values into Z values, which lets us squash groups of activation values into the range between 0 and 1.

7. Values

The values denoted by the symbol Z are the outputs after applying the sigmoid function to the activation values, so that they fall between zero and one and are therefore easy to deal with and interpret.
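The squashing behavior described in the last two terms can be sketched numerically: whatever the activation value, the sigmoid maps it into (0, 1), with large negative values landing near 0 and large positive values landing near 1.

```python
import math

def sigmoid(z):
    """The sigmoid: maps any real number into the open interval (0, 1)."""
    return 1 / (1 + math.exp(-z))

# Large negative -> near 0, zero -> exactly 0.5, large positive -> near 1.
values = [sigmoid(z) for z in (-10, -1, 0, 1, 10)]
```

This is why the Z values are "easy to deal with": no matter how wildly the weighted sums vary, the layer's outputs all live on the same 0-to-1 scale.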

8. Layer

The layer is one of the most important terms we must deal with when talking about neural networks, and it is the reason deep learning is called "deep": deep-learning neural networks have many layers, and those layers give them depth. A layer is a group of cells that share the same level in a neural network.

9. Inputs Layer

The input layer is the first layer in any neural network, and it is how we give the network its inputs. In the image example, it is the layer containing a number of cells equal to the number of pixels in the image, where each cell holds the pixel's grayscale value in the case of black-and-white images, or its RGB values in the case of color images.

10. Output Layer

The output layer is the last layer in any neural network, and it is made up of the cells that indicate the result of the network's processing of the data.

11. Hidden Layers

The hidden layers are the layer or layers between the input layer and the output layer. Through them the network processes the data to reach output values and make decisions. The more of these layers there are, the smarter and more accurate the neural network becomes, but it also becomes slower, more complex, and harder to understand.

Increasing the number of hidden layers in deep-learning neural networks is what gave "depth" its name, because in this way the network gains enormous capacity for data processing and more powerful models, but it also requires much more computing power.


12. Error Cost

When neural networks run, especially during the training phase, they often produce illogical and incorrect results. Through mathematical models and algorithms we can measure the error in their outputs, and by applying these error values we can steer them toward the desired, reliable results. To explain this concept, I will give an example:

Take a multi-class classification network: we want it to determine, from the images of handwritten digits we give it as input, whether the digit is 0, 1, 2, and so on up to 9. We expect it, in the end, to identify the digit the image represents by setting that digit's cell to the value 1 and the other nine cells to the value 0.

But during training we are surprised to find it putting graded values in several cells of the output layer, which is illogical, since an image of a single digit cannot be more than one digit at once. So we run it through algorithms that compute its error values, in order to finally make it put the value 1 in the correct cell of the output layer and 0 in the other nine cells.
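A minimal sketch of this idea, using mean squared error as the cost (one common choice among several): we compare the network's ten output values against the "one-hot" target in which only the correct digit's cell is 1. The two output vectors below are made up.

```python
def error_cost(predicted, target):
    """Mean squared error between the network's outputs and the desired ones."""
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(target)

# Target for the digit 7: cell 7 is 1, the other nine cells are 0.
target = [1.0 if i == 7 else 0.0 for i in range(10)]

# A confused network spreads value over several cells -> high cost...
confused = [0.1, 0.0, 0.3, 0.0, 0.0, 0.4, 0.0, 0.5, 0.2, 0.0]
# ...while a confident, correct network has a cost near zero.
confident = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.99, 0.0, 0.0]
```

Training consists of adjusting the weights so that this cost shrinks toward zero across all the training examples.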


How do Artificial Neural Networks work?

Neural networks are, at their core, statistical mathematical models, so many of the terms and concepts we have just introduced are mathematical functions and equations used to make the artificial network able to understand, handle, and process data.

The network then produces the correct results we want and expect, so that we can rely on them and make decisions through them. We will not dig into the mathematical details in this explanation, but treat them at a high level for the sake of understanding.

We can explain how neural networks work easily and clearly through 7 main steps:

1. Data entry

In the first step we feed in the training data, for which we already know the correct output; it is then processed through the hidden layers of the neural network.

2. Data processing by neurons (weights)

First, the input values are multiplied by the weights and then passed to the cells of the hidden layers. It is worth noting that each cell in a layer is connected to all the cells in the next layer.

3. Adding the bias value to the activation values

After the multiplication by the weights is done for each cell individually, a constant called the bias value is added to these values, the same bias for all of them.

4. Convert the values ​​into the activation function

After this, the activation values are passed to a function called the activation function, which determines whether the cell in the next layer lights up or not.

5. Repeat the process with the other hidden layers

These operations are repeated with the other layers in the neural network until the last layer, which is the output layer.

6. Determine the output at the end

The output values are ultimately determined by the type and function of the neural network, so we take the results and compare them with the ones we expect.

7. Adjust the weights to get more accurate results by comparing them with the training data inputs

The neural network re-adjusts the values of the weights in order to get more accurate results than it got the previous time. This step occurs only while training the model; once the model is trained, the process stops at the previous step.
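The steps above can be sketched as a single forward pass through a tiny network. The layer sizes, weights, and biases below are made-up illustrations, and step 7's weight adjustment is left to training algorithms such as backpropagation:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """Steps 2-4: weighted sums plus bias, then the activation function."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, network):
    """Step 5: repeat the layer computation until the output layer."""
    values = inputs
    for weights, biases in network:
        values = layer(values, weights, biases)
    return values

# A made-up network: 2 inputs -> 3 hidden cells -> 1 output cell.
network = [
    ([[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]], [0.1, 0.0, -0.1]),  # hidden layer
    ([[0.5, -0.4, 0.9]], [0.2]),                                  # output layer
]
output = forward([0.6, 0.3], network)  # step 6: the final output values
```

Each inner list of weights belongs to one cell and has one entry per cell in the previous layer, which is the "every cell connects to every cell in the next layer" wiring described in step 2.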

This is how the neural network works, put somewhat abstractly so that the theory is clear. In the next lines I will walk through a specific example: a neural network that reads images of handwritten digits and identifies those digits in its output:


In the beginning, the input will be the pixels of the handwritten digit. Let's say the image has a resolution of 28 × 28, so the number of pixels will be 784, which means the number of cells in the input layer will be 784.

The input values will be each pixel's shade between black and white; for the image we fed in, say 0 for black, 1 for white, and 0.5 for gray.

After this, these inputs are multiplied by their respective weights and the bias value is added, so that the first hidden layer can work on them. Each of the 784 input cells connects to every cell of the first hidden layer.

After the values are processed by the first hidden layer, the neural network again multiplies them by weights and adds a bias value, as all the cells of the first hidden layer are connected to the cells of the second hidden layer.

Let's imagine that the function of this layer is to combine pixel patterns and identify the basic parts of digits: the circle in the digits 9 and 6, the horizontal line in the digit 4, the vertical line in the digit 1, and so on.

Each cell of the second layer is connected to the ten cells of the output layer (the digits from zero to nine), again multiplied by weights with a bias value added, so that the basic parts identified in the second layer are combined into whole digits in the output layer. The firing, or active, cell then indicates the digit that was written, and the neural network has fulfilled its role.

If the neural network is in the training phase, the weights in all layers start out random, which causes many errors.

For example, we may find the neural network activating several cells in the output layer, which is impossible, since the written digit cannot be 3, 6, and 8 at the same time.

So, by comparing the output with the results we already know, we can compute the error values and adjust the weights in the neural network to correct the results. Think of this as nudging the neural network to correct the mistakes it made, which is done through mathematical and statistical models.

After this training phase, the neural network is tested using test data it has not dealt with before, and based on the network's accuracy we either rely on it or re-tune it through mathematical models and algorithms so that it becomes more accurate and capable.
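The compare-and-adjust loop described above can be sketched for a single neuron learning the logical OR function, a toy stand-in for digit images. The learning rate and iteration count are arbitrary choices, and the update rule is the standard logistic-regression gradient step rather than full multi-layer backpropagation:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Training data for logical OR: inputs and the outputs we already know.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias = [0.0, 0.0], 0.0
for _ in range(5000):
    for inputs, target in data:
        output = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        error = output - target          # compare with the known result
        for i, x in enumerate(inputs):   # nudge each weight against the error
            weights[i] -= 0.5 * error * x
        bias -= 0.5 * error

predictions = [round(sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias))
               for inputs, _ in data]
```

The weights start at arbitrary values and end up wherever the repeated error corrections push them, which is the whole story of training in miniature.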


Types of artificial neural networks

There are many neural networks, each with specific functions and capabilities that suit them to the tasks required of them, and you can see some of them in this image:


Networks differ from each other in general in the number of layers, the direction of data flow, and speed. We will review the 6 most important and most widely used neural networks at the moment, which are:

  1. Feed Forward Neural Networks
  2. Convolutional Neural Network (CNN)
  3. Recurrent Neural Networks (RNN)
  4. Radial Basis Function Neural Networks
  5. Self-Organizing Map Neural Network
  6. Modular Neural Network (MNN)

First: Feed Forward Neural Networks


Feed-forward networks are among the simplest and smallest neural networks. They are distinguished by their simplicity and speed, though for the same reason their accuracy is not the best among neural networks. They are called "feed-forward" because the data flows in one direction only, and the backpropagation algorithm is not used in them.

They may contain one hidden layer or none at all, and we use the sigmoid function in this network. They are used in particular to identify and distinguish sounds, in computer vision, in self-driving cars, and sometimes in simple classification.

It is worth noting that there is an extension of this network known as the Multi-Layer Feed-Forward Neural Network (MLFFNN).


Second: Convolutional Neural Network (CNN)


Convolutional neural networks are the most widely used neural networks; we almost certainly use at least one of their applications every day. They are famous for their use in image and video recognition, but they are also used in many recommendation algorithms, such as the Netflix algorithm we talked about before.


These networks are known for depending on fewer parameters than other networks, but on the other hand they are slow, complex, and difficult to design and modify.

Despite the complexity of this architecture, we illustrated the general approach in the handwritten-digit recognition example above. It is used in image and video analysis, face recognition, computer vision, voice recognition, and a large number of medical applications.
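The "convolution" these networks are named for can be sketched as sliding a small filter over an image. Here a hypothetical 3×3 vertical-edge filter runs over a tiny made-up grayscale image; real CNNs learn their filter values during training rather than having them hand-picked like this:

```python
def convolve(image, kernel):
    """Slide a square kernel over the image (no padding), summing products."""
    k = len(kernel)
    out_h = len(image) - k + 1
    out_w = len(image[0]) - k + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical edge: dark left half (0), bright right half (1).
image = [[0, 0, 0, 1, 1, 1]] * 4
edge_filter = [[-1, 0, 1]] * 3   # responds where brightness rises left-to-right
feature_map = convolve(image, edge_filter)
```

The output, the feature map, is large exactly where the edge sits and zero in the flat regions, which is how early CNN layers turn raw pixels into "parts of shapes".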

Third: Recurrent Neural Networks (RNN)


These recurrent networks are among the most used neural networks, especially for prediction tasks, since they feed their own outputs back in as inputs. They are used in applications such as Facebook friend suggestions, text autocompletion, translation, and speech-to-text.
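The feedback these networks rely on can be sketched as a hidden state carried from one step of a sequence to the next; the weight values below are arbitrary illustrations, and a real RNN would use weight matrices rather than single numbers:

```python
import math

def rnn_step(x, h, w_in, w_rec, bias):
    """One recurrent step: mix the new input with the previous hidden state."""
    return math.tanh(w_in * x + w_rec * h + bias)

def run_sequence(sequence, w_in=0.8, w_rec=0.5, bias=0.0):
    """Feed a sequence one value at a time, carrying the hidden state along."""
    h = 0.0
    for x in sequence:
        h = rnn_step(x, h, w_in, w_rec, bias)
    return h

# The final state depends on the whole sequence, not just the last value.
state_a = run_sequence([1.0, 0.0, 0.0])
state_b = run_sequence([0.0, 0.0, 0.0])
```

Because earlier inputs linger in the hidden state, the network can complete a sentence or continue a prediction using context from everything it has seen so far.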

Fourth: Radial Basis Function Neural Networks


These networks are very similar to feed-forward networks, but instead of a sigmoid function they use a radial basis function (RBF), which is based on the distance between points and a center. They are used in classification, time-series problems, and machine control.
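The radial basis function itself can be sketched as a Gaussian bump: its output depends only on the distance between a point and the center, peaking at 1 on the center and falling toward 0 with distance. The width parameter and the sample points are arbitrary choices:

```python
import math

def rbf(point, center, width=1.0):
    """Gaussian radial basis function: output depends only on distance."""
    distance_sq = sum((p - c) ** 2 for p, c in zip(point, center))
    return math.exp(-distance_sq / (2 * width ** 2))

on_center = rbf([2.0, 3.0], [2.0, 3.0])   # exactly 1 at the center
nearby = rbf([2.5, 3.0], [2.0, 3.0])      # high, but below 1
far_away = rbf([9.0, 9.0], [2.0, 3.0])    # close to 0
```

Contrast this with the sigmoid, which depends on a weighted sum: an RBF cell answers "how close is this input to my center?", which makes it natural for classification by similarity.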

Fifth: Self-Organizing Map Neural Network


These networks are also called Kohonen networks, after the Finnish scientist who invented them. They receive a large amount of input and then divide and distribute it into clusters, which is why they are used in unsupervised learning. They operate in 3 steps: construction, training, and mapping.

For example, if I have the characteristics of a million people and I want to group, or cluster, them, I use this neural network, and it does all the work and produces the appropriate output. This network is used in navigation, mapping, water and petroleum exploration, and massive data analysis.
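A single training step of such a map can be sketched as: find the node whose weights are closest to the input (the "best matching unit"), then pull its weights toward that input. Neighborhood updates and learning-rate decay are omitted for brevity, and all numbers are illustrative:

```python
def distance_sq(a, b):
    """Squared Euclidean distance between two weight/input vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def som_step(nodes, sample, learning_rate=0.5):
    """Find the best matching unit and move it toward the input sample."""
    best = min(range(len(nodes)), key=lambda i: distance_sq(nodes[i], sample))
    nodes[best] = [w + learning_rate * (x - w)
                   for w, x in zip(nodes[best], sample)]
    return best

# Two map nodes and one input sample with made-up coordinates.
nodes = [[0.0, 0.0], [1.0, 1.0]]
winner = som_step(nodes, [0.9, 0.8])
```

Repeated over many samples, each node drifts toward the middle of one cluster of inputs, which is how the map "self-organizes" without any labeled answers.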

Sixth: Modular Neural Network (MNN)


This network is built on a very clever idea: combining more than one neural network module. A number of neural networks work in parallel with each other and finally merge, aggregating the processed values so that a single output is produced.

These parallel networks may differ from one another: one handles text, one handles audio, and one handles video.

These networks are very fast relative to the amount of data they process; they are used in financial and economic analyses and have many uses in biology.


Problems and shortcomings of artificial neural networks

Neural networks, like everything else, are not entirely rosy; there are many problems and shortcomings that limit their use or stand as a major obstacle to applying them in every aspect of our lives. I will summarize the 4 most important weaknesses of neural networks, which are:

Data requirements for artificial neural networks

Neural networks need a great deal of data to feed them. Although such data is sometimes available, for some applications collecting it is very cumbersome or even impossible, because you must gather the information and then label it in order to train the neural network model. This data problem also contributes to making the network very time-consuming to build and operate.


Another concern is that neural networks need ever more data in order to be effective, especially in deep-learning applications, which obliges us to provide the largest possible amount of data just to make the network's effectiveness acceptable or better.

Artificial neural networks require enormous computing power

Just as neural networks need vast amounts of data to process, they also need enormous computing power to process that data, capabilities usually not available in the ordinary devices we use daily, and sometimes not even in more advanced and powerful computers. The CPUs and GPUs required are quite formidable.

Besides this, neural networks are also costly in time. Even when the necessary computing power is available, processing this huge amount of data can take a long time, perhaps months, to train the model and reach an acceptable, reliable level of performance.

Difficulty and blindness of artificial neural networks

The difficulty of artificial neural networks is that they are complex: designing new networks with stronger capabilities is hard, because it requires a deep grasp of mathematics and of how many pieces link together, so the number of neural network architectures and models in existence is rather limited.

As for blindness, it shows up in complex neural networks, where we cannot see how the network processes the data; we can only see the results and judge whether they are correct. So we do not know how the network "thinks" about what it is doing, which could cause a catastrophe later on.

For example, there was a model that distinguished dogs from wolves, and it produced very good results; then, when previously unseen images were presented to it, it made terrible mistakes and its effectiveness was very poor.

After much time and effort, the research team discovered that the neural network was deciding whether an object was a wolf or a dog from the background, because the wolf images presented to it in training all had snowy backgrounds.

You may not think this is a serious mistake, but it would be if this were a model for diagnosing cancers, controlling a nuclear reactor, supervising military missiles, or any other military application.

Long-term effectiveness of artificial neural networks

Many researchers believe that neural networks have limited prospects in the long run. This does not mean they are not useful, but that they are constrained and suffer from severe shortcomings. To reach the kind of artificial general intelligence we imagine, intelligence similar or superior to human intelligence, neural networks in their current form are a long way from the required degree of complexity and sophistication.


Final words and important resources for learning artificial neural networks

This article was the most demanding one I have written in my entire life (though it was also great fun). In preparation I went through courses and lessons totaling more than ten hours.

I also read dozens of articles, without exaggeration, and spent about 9 and a half hours writing, editing, and adding or deleting information, so that the article would come out in this smooth, gradual form that lets someone who knows nothing about neural networks understand the basics.

I came out of this pleasant experience with one important piece of advice: you do not have to understand everything the first time, so do not despair. To grasp a concept or application you may need to listen to a lesson or read an article several times, and it is natural that some points still elude you.

So if you do not understand any of the points I explained in this article, read it once or twice more, search for it on YouTube and watch other lessons that explain it, or search for it on Google and dive into the various articles about it.

One of the sources I recommend for learning artificial neural networks is the fifth section, on neural networks, of the machine learning course by the well-known AI engineer Hisham Assem, which I relied on for many of my explanations. It is, in my opinion, the strongest Arabic-language source for explaining artificial neural networks, and it is also excellent at explaining machine learning, deep learning, and natural language processing in general.

Also, the Neural Networks series from 3Blue1Brown's YouTube channel, with its 4 videos, is one of the best visual explanations I have ever learned from. Its creator, Grant Sanderson, is one of the best at explaining mathematical concepts in an easy and simple way, so I highly recommend watching it if your English is strong.

In the end, I hope you liked the article. If you have any question or inquiry, ask me in the comments and I will answer you as soon as possible.
