ELI5 : Neural networks and their memory
Jan 19, 2018
3 minutes read

What are neural networks?

Neural networks are special computer programs. They can learn to recognize things they have seen before. For example, we know a dog when we see one because we have seen many dogs before. Neural networks can do just that, but inside a computer. They were created to copy the way our brains work.

How do they learn?

For a very long time, when you wanted your computer to do something, you had to tell it exactly what to do, step by step, to get a result. What is new is that you don’t tell the computer how it should work. You show it many examples along with the result you want, and the computer figures out by itself how to get there. In the dog example, the old way would be to write down what a dog face looks like (it has two eyes, a nose, pointy ears, a certain color, …) so that the computer knows everything about it. In the new way, you show the computer many pictures of animals, and it might first learn what a line looks like, from there what a face is, and finally how a dog face is different from other faces. It learns the rules without us writing them down.
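For readers curious about what this looks like in code, here is a toy Python sketch of the two approaches. It is not a real neural network, and all the names and numbers are made up for illustration: we pretend each picture is summarized by two numbers, and "learning" is just trying many cut-off values and keeping the one that matches the examples best.

```python
# Old way: a human writes the rule down explicitly.
def is_dog_handwritten(ear_pointiness, snout_length):
    return ear_pointiness > 0.5 and snout_length > 0.3

# New way: show labeled examples and let the program find a rule by itself.
# Each example is ((ear_pointiness, snout_length), label), label 1 = dog.
examples = [((0.9, 0.8), 1), ((0.7, 0.6), 1),   # dogs
            ((0.2, 0.1), 0), ((0.3, 0.2), 0)]   # not dogs

def learn_threshold(examples):
    # Try many cut-off values on the first number and keep the one
    # that classifies the most examples correctly.
    best_t, best_correct = 0.0, -1
    for i in range(101):
        t = i / 100
        correct = sum((x[0] > t) == bool(label) for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

t = learn_threshold(examples)
```

The rule was never written by hand; the program found a dividing line between the two groups just by looking at the examples.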

Are all the networks the same? Can they only ‘see’ pictures?

Neural networks can have very different structures for different goals. One type of neural network is called a recurrent neural network (RNN). RNNs are often used to learn about things that come in a certain order and to guess what happens next. A simple example is creating text: given the start of a sentence, try to fill in the end of it. The program is shown many full sentences, it learns how the words are linked, and then it can make up the end of a sentence when shown its start.
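The idea of "learning how words are linked" can be sketched in a few lines of Python. This toy version is not an RNN; it just counts which word tends to follow each word in some made-up sentences, then continues a sentence start with the most common follower:

```python
from collections import Counter, defaultdict

# Made-up training sentences for illustration.
sentences = ["the dog chases the ball",
             "the cat chases the mouse",
             "the dog eats the bone"]

# Count which word follows each word.
follows = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def complete(start, n=3):
    # Extend the sentence start n times with the most common next word.
    words = start.split()
    for _ in range(n):
        last = words[-1]
        if last not in follows:
            break
        words.append(follows[last].most_common(1)[0][0])
    return " ".join(words)
```

A real RNN does something much richer (it keeps a running memory of everything it has read so far, not just the last word), but the spirit is the same: full sentences go in, and the ability to continue a sentence comes out.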

How can you learn the order of the words? The network has to know what is important and what is not. Imagine that you have one whiteboard to write down the many things you would like to learn, and a pen with ink you can’t erase. As you keep writing, everything gets messier and messier. This is one problem with recurrent neural networks: older memories get fuzzy. Now, if instead you use a marker you can erase, you can choose what to keep and what to forget. This is the idea behind a newer type of recurrent neural network: the long short-term memory network (LSTM). You are shown one example, decide what to write down and what to erase, and start again. In this way, you keep the memory under control and remember only what you need.
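The erase-and-write idea can be mimicked with a few lines of Python. This is only an analogy, not a real LSTM (a real one uses learned "gates" made of numbers, not a dictionary); the `memory` dictionary and the function name are invented for illustration:

```python
def step(memory, new_fact, forget_key=None):
    # "Erase": drop something we decided is no longer needed.
    if forget_key is not None:
        memory.pop(forget_key, None)
    # "Write": store the new piece of information.
    memory.update(new_fact)
    return memory

memory = {}
step(memory, {"subject": "the dog"})   # write: remember the subject
step(memory, {"verb": "chases"})       # write: remember the verb
# Erase the old subject, write the new one: the whiteboard stays tidy.
step(memory, {"subject": "the cat"}, forget_key="subject")
print(memory)  # {'verb': 'chases', 'subject': 'the cat'}
```

The whiteboard never fills up, because at every step something can be wiped away to make room for what matters now.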

Don’t forget, you are learning and memorizing every day. Neural networks have learned from many examples and made their own rules, to identify dogs for example. What happens if one day you show one a picture of a very strange dog with pink hair and three eyes? Or the start of a sentence with words it has never seen?

NB: ELI5 stands for ‘Explain Like I’m 5’.




