
Does our brain use deep learning to make sense of the world?

When Dr. Blake Richards first heard about deep learning, he realized he was looking at more than a method that would revolutionize artificial intelligence. He realized he was looking at something fundamental about the human brain. It was the early 2000s, and Richards was taking a course taught by Geoff Hinton at the University of Toronto. Hinton, one of the originators of the algorithm that would conquer the world, was offering an introductory course on his learning method, inspired by the human brain.

The key words here are "inspired by the brain". Despite Richards's conviction, the bet seemed to play against him. The human brain, as it turned out, lacks an important function that is programmed into deep learning algorithms. On the surface, these algorithms seemed to violate basic biological facts already established by neurobiologists.

But what if deep learning and the brain are actually compatible?

And so, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. The neocortex, the outer layer of the cerebral cortex, is home to higher cognitive functions such as reasoning, prediction, and flexible thinking.

The team connected its artificial neurons into a multilayer network and gave it a classic computer vision task: recognizing handwritten digits.

The new algorithm performed well. But what matters is that it analyzed the training examples the way deep learning algorithms do while being built entirely on the fundamental biology of the brain.

"Deep learning is possible within a biological structure," the scientists concluded.

Since the model is so far only a computer simulation, Richards hopes to pass the baton to experimental neurobiologists, who could test whether such an algorithm works in a real brain.

If it does, the data can be handed to computer scientists to develop the massively parallel and efficient algorithms our machines will run on. This is the first step toward merging the two fields into a "virtuous circle" of discovery and innovation.

The search for a scapegoat

Although you have probably heard that artificial intelligence recently beat the best of the best at Go, you hardly know how the algorithms at the heart of that AI actually work.

In a nutshell, deep learning is based on an artificial neural network with virtual "neurons". Like a tall skyscraper, the network is structured as a hierarchy: low-level neurons process the input, for example the horizontal and vertical strokes that make up the digit 4, while high-level neurons handle the more abstract aspects of the digit.
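To make the "skyscraper" picture concrete, here is a minimal sketch of such a hierarchy in Python. All the sizes and weights are illustrative assumptions of mine, not values from the study: 784 pixels feed a low-level layer that reacts to strokes, whose output feeds a high-level layer that scores each digit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 784 pixels (a 28x28 image) -> 64 "low-level" units
# -> 10 "high-level" units, one score per digit 0..9.
W1 = rng.normal(0.0, 0.1, (64, 784))  # low level: reacts to strokes and edges
W2 = rng.normal(0.0, 0.1, (10, 64))   # high level: abstract aspects of a digit

def forward(x):
    h = np.tanh(W1 @ x)  # low-level features extracted from the image
    y = W2 @ h           # high-level scores, one per digit class
    return h, y

x = rng.random(784)      # a fake 784-pixel "image"
h, y = forward(x)
print(h.shape, y.shape)  # (64,) (10,)
```

Untrained, these random weights produce meaningless scores; the point is only the layered flow of the signal from pixels upward.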


To train the network, you give it examples of what you are looking for. The signal spreads through the network (climbing the floors of the building), and each neuron tries to see something fundamental in the makeup of the digit four.

As with children learning something new, the network does not do very well at first. It outputs everything that, in its opinion, looks like the digit four, and the images come out in the spirit of Picasso.

But this is how learning proceeds: the algorithm compares the output with the ideal and calculates the difference between them (read: the error). The errors then propagate back through the network, teaching each neuron, in effect: this is not what you are looking for, look harder.

After millions of examples and repetitions, the network begins to work impeccably.
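The compare-and-correct loop described above can be sketched with generic backpropagation. Below is a toy example on the XOR problem; the network sizes, learning rate, and seed are my own choices for illustration, not from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the XOR function, a classic task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    return np.mean((Y - T) ** 2)

loss_before = loss()
lr = 0.5
for _ in range(10000):
    # forward pass: the signal climbs the network
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # backward pass: the error "re-spreads" layer by layer
    dY = (Y - T) * Y * (1 - Y)      # blame assigned at the output
    dH = (dY @ W2.T) * H * (1 - H)  # blame passed down to the hidden layer
    # each layer adjusts its weights according to its share of the blame
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

loss_after = loss()
print(loss_before > loss_after)  # the error shrinks with training
```

The key line is `dY @ W2.T`: the output error travels backward through the very same weights used on the way up, which is exactly the assumption the next section questions.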

The error signal is extremely important for learning. Without effective "backpropagation of error", the network would not know which of its neurons got things wrong. In its search for a scapegoat, the artificial intelligence improves itself.

The brain does this too. But how? We have no idea.

A biological dead end

One thing seemed obvious: the deep learning solution does not work in the brain.

Backpropagation of error is an extremely important function, and it requires a certain infrastructure to work properly.

First, every neuron in the network needs to receive the error signal. But in the brain, neurons are connected to only a few downstream partners (if any). For backpropagation to work in the brain, the neurons in the first layers would have to receive information from billions of connections in descending pathways, which is biologically impossible.

And although some deep learning algorithms adapt a local form of error backpropagation, essentially passed between neighboring neurons, it requires the forward and backward connections to be symmetric. In the brain's synapses this almost never happens.

More recent algorithms adopt a somewhat different strategy, implementing a separate feedback path that helps neurons find errors locally. Although this is more biologically feasible, the brain has no separate computing network dedicated to the search for scapegoats.

But it does have neurons with complex structures, unlike the homogeneous "blobs" currently used in deep learning.

Branching networks

The scientists drew inspiration from the pyramidal cells that fill the human cerebral cortex.

"Most of these neurons are tree-shaped: their 'roots' go deep into the brain, and their 'branches' reach toward the surface," Richards said. "What is remarkable is that the roots receive one set of inputs and the branches receive a different one."

Curiously, the structure of neurons often turns out to be "just right" for efficiently solving a computational problem. Take sensory processing: the bottoms of pyramidal neurons sit exactly where they need to be to receive sensory input, while their tops are conveniently placed to pass errors along via feedback.

Could this complex structure be an evolutionary solution for handling the error signal?

The scientists created a multilayer neural network based on earlier algorithms. But instead of homogeneous neurons, they gave its middle layers, sandwiched between the input and the output, neurons resembling real ones. Trained on handwritten digits, the algorithm performed much better than a single-layer network, despite lacking classic error backpropagation. The cell structures themselves could determine the error. Then, at the right moment, each neuron combined both sources of information to find the best solution.
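A toy illustration of the idea, heavily simplified by me and not the study's actual model: a hidden unit with two compartments, a "basal" one receiving feedforward input at its roots and an "apical" one receiving top-down feedback at its branches, which lets the cell compute a teaching signal locally.

```python
import numpy as np

rng = np.random.default_rng(3)

def segregated_unit(x, feedback, w_basal, w_apical):
    """One pyramidal-style unit with two segregated input compartments."""
    basal = w_basal @ x           # "roots": feedforward sensory drive
    apical = w_apical @ feedback  # "branches": top-down feedback
    rate = np.tanh(basal)         # firing is driven by the basal input
    # The mismatch between what the feedback "expects" and what the basal
    # input provides acts as a locally computable teaching signal.
    local_error = apical - basal
    return rate, local_error

x = rng.random(8)                 # feedforward input
fb = rng.random(3)                # feedback from a higher layer
w_b = rng.normal(0, 0.5, (1, 8))
w_a = rng.normal(0, 0.5, (1, 3))

rate, err = segregated_unit(x, fb, w_b, w_a)
# sanity check: if the feedback perfectly predicts the basal drive,
# the local error vanishes and nothing needs to change
_, err_zero = segregated_unit(x, w_b @ x, w_b, np.eye(1))
```

The point of the sketch is only the segregation: no error needs to be shipped in from outside, because the two compartments let the cell compare prediction and drive on its own.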

There is a biological basis for this: neurobiologists have long known that a neuron's input branches perform local computations that can be integrated with error signals propagating back from the output branches. But we do not know whether the brain really works this way, which is why Richards is handing the question to neuroscientists to find out.

Moreover, this network handles the problem in a way similar to the traditional deep learning method: it uses its multilayer structure to extract progressively more abstract ideas about each digit.

"This is the hallmark of deep learning," the authors explain.

Deep Learning Brain

No doubt this story will take more unexpected turns as computer scientists bring more and more biological detail into AI algorithms. Richards and his team are considering a top-down predictive function, in which signals from higher levels directly affect how lower levels respond to input.

Feedback from the upper levels not only improves the error signaling; it can also encourage lower-level neurons to perform "better" in real time, Richards says. So far the network has not outperformed other, non-biological deep learning networks. But that does not matter.

"Deep learning has had a tremendous impact on AI, but until now its impact on neuroscience has been limited," the study's authors say. Now neurobiologists have a reason to run experimental tests and find out whether neuron structure underlies a natural deep learning algorithm. Perhaps within the next ten years a mutually beneficial exchange of data between neurobiologists and artificial intelligence researchers will begin.

This article is based on material from https://hi-news.ru/research-development/ispolzuet-li-nash-mozg-glubokoe-obucheniya-dlya-osmysleniya-mira.html.
