
Artificial intelligence and the genes of living beings learn in very similar ways

Gene networks in animals resemble the networks of neurons in our brains: they, too, can "learn" on the go. In 1996, a young graduate student named Richard Watson came across a paper on evolution. It was provocative, and it touched on an old problem in evolutionary biology: we do not understand how organisms manage to adapt to their environment so successfully.

Living beings accumulate changes, or mutations, in their genes over the generations, yet the resulting adaptations do not look random at all. Instead, organisms actually seem to "improve" their ability to adapt. This ability appears to be explained by more than natural selection alone, the process by which the most successful organisms pass on their best traits.

So the paper's authors, Guenter Wagner of Yale University and Lee Altenberg of the Hawaii Institute of Geophysics and Planetology in Honolulu, looked for answers in an unexpected place: computer science.

Watson, a computer scientist, was hooked. In the 20 years since he read that paper, he has developed a theory based on the ideas it put forward. It could help explain why animals evolve so well, a trait known as "evolvability". It could also help resolve some long-standing puzzles in evolutionary biology.

Many people are familiar with the idea that genes are passed from parents to offspring, and that genes which help their carriers survive and reproduce are more likely to be passed on. This is the essence of evolution by natural selection.

But that is not the whole story, because genes often work together. They form "gene networks", and these networks can sometimes be passed on intact for several generations.

"The fact that organisms have gene networks, and that these are inherited from one generation to the next, is far from new," says Watson, now at the University of Southampton in the UK. His contribution concerns how natural selection acts on these networks.

He believes that selection is not just a partial barrier, letting some adaptations through and blocking others. Instead, this filtering lets the gene networks of animals actually "learn", over time, what works and what does not. In this way they can improve their performance, in much the same way that the artificial neural networks used by computer scientists "learn" to solve problems.

"Gene networks evolve like neural networks learn," he says. "That's what's really new."

Watson's claim rests on the idea that the connections between genes can be strengthened or weakened as species evolve and change, and that it is the strength of these links in gene networks that allows organisms to adapt.

This process is similar to how artificial neural networks work on computers.

Today these systems are used for a wide variety of tasks. They can recognize people's faces in photos or videos, and even analyse footage of football matches to work out which team's tactics are proving more effective and why. How do computers manage this?

Artificial neural networks are modelled on biological ones, chiefly the brain. Each network is a collection of simulated "neurons" connected in a particular way, like stations and lines on a metro map.

Networks like these take input data (say, the word "hello" written on a page) and match it against an output (in this case, the word "hello" stored in the computer's memory). This is much like the way children learn to read and write.

Like a child, a neural network cannot make the connection instantly; it must be trained over time. The training is complex, but it essentially involves changing the strength of the links between virtual neurons. Each change that improves the result is kept, until the whole network reliably produces the desired answer: in our example, the squiggles on the page ("hello") map to the word "hello" in memory. Now the computer knows what you wrote.
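The training process described above can be sketched in a few lines. This is a minimal toy, not any model from Watson's work: a single layer of connection weights is nudged toward two target outputs, with each nudge proportional to the remaining error (the classic delta rule; all names and parameters here are illustrative).

```python
def train(pairs, lr=0.1, epochs=200):
    """Learn a linear input-to-output map by repeatedly nudging
    connection weights in the direction that reduces the error."""
    n = len(pairs[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        for x, target in pairs:
            out = sum(w * xi for w, xi in zip(weights, x))
            err = target - out
            # strengthen or weaken each link in proportion to
            # its contribution to the error
            for i in range(n):
                weights[i] += lr * err * x[i]
    return weights

# two input patterns, each with a desired output
pairs = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]
w = train(pairs)
# after training, the weights reliably reproduce both targets
```

Each pass shrinks the error geometrically, which is why "each time this improves the result" eventually yields a network that answers reliably.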


Watson believes something similar happens in nature: an evolvable species becomes "tuned" to its particular environment.

There are different ways of training neural networks. The one Watson focused on, "Hebbian learning", is a good analogue of what happens in biological gene networks.

In Hebbian learning, the connection between adjacent neurons that produce similar outputs is strengthened over time. In short: "neurons that fire together wire together." The network "learns" by building strong links within itself.
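The Hebbian rule itself is one line of arithmetic: the link between two units grows in proportion to the product of their activities. The sketch below is an illustrative toy (function name and parameters are my own, not from Watson's papers): two units that repeatedly fire together end up strongly linked, while links to a silent unit stay at zero.

```python
def hebbian_step(weights, activity, lr=0.1):
    """One Hebbian update on a symmetric weight matrix:
    w[i][j] grows with the product of activities i and j.
    Diagonal (self) links are kept at zero."""
    n = len(activity)
    return [[0.0 if i == j
             else weights[i][j] + lr * activity[i] * activity[j]
             for j in range(n)]
            for i in range(n)]

# units 0 and 1 repeatedly fire together; unit 2 stays silent
w = [[0.0] * 3 for _ in range(3)]
for _ in range(10):
    w = hebbian_step(w, [1.0, 1.0, 0.0])
# the 0-1 link has strengthened; links to unit 2 have not
```

"Fire together, wire together" falls straight out of the product term: it is nonzero only when both activities are nonzero.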

If an organism has certain genes that work together in this way, and it is successful enough to reproduce, then its offspring do not just inherit the useful genes themselves, Watson says. They also inherit the links between those genes.

A particular advantage of Hebbian learning is that such networks can develop "modular" functions. One group of genes, for example, might determine whether an animal has hind legs, or eyes, or fingers. Similarly, a handful of related adaptations, like a fish's ability to cope with heat and salinity, can be linked and inherited as a whole, in one gene network.

"If there is one creature with a slightly stronger regulatory link between these genes than the others, it will be favoured," says Watson. "It will be picked out by natural selection. So over evolutionary time, the strength of the connections between these genes will increase."
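The selection process Watson describes can be illustrated with a toy simulation (this is my own sketch, not his model; the population size, mutation scale and fitness rule are all invented for illustration). Each individual carries a single "regulatory link strength"; stronger links win selection tournaments, and offspring inherit the parent's strength plus a small mutation, so the average link strength in the population rises over the generations.

```python
import random

def evolve_link_strength(pop_size=100, generations=50, seed=1):
    """Toy illustration of selection on a regulatory link:
    individuals with a stronger link between two co-adapted
    genes out-reproduce the rest, so the population mean rises."""
    rng = random.Random(seed)
    # start with uniformly weak links
    pop = [rng.uniform(0.0, 0.1) for _ in range(pop_size)]
    for _ in range(generations):
        # tournament selection: the stronger of two random
        # individuals gets to reproduce
        parents = [max(rng.sample(pop, 2)) for _ in range(pop_size)]
        # offspring inherit the link strength with a small mutation,
        # clamped to the range [0, 1]
        pop = [min(1.0, max(0.0, p + rng.gauss(0.0, 0.02)))
               for p in parents]
    return pop

final = evolve_link_strength()
mean = sum(final) / len(final)
# the mean link strength has grown far beyond its initial ~0.05
```

The point of the toy is only the direction of travel: selection steadily strengthens the inherited link, exactly the quantity that Hebbian learning strengthens in a neural network.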

For Watson, this helps sidestep a sticky problem in evolutionary theory.

Imagine for a moment that an organism's genome is computer code. A novice programmer might update the code bit by bit, trying to make improvements, testing whether a different sequence of commands makes the program run a little more efficiently.

This trial-and-error process can work quite well at first. But over time, updating the code in this way makes it cumbersome. The code starts to look messy, which makes it hard to predict the consequences of any one change. This happens in real programming too; the result is called "spaghetti code".

If organisms really evolved this way, Watson says, their evolvability, the ability to adapt to new stresses or environments, "would not be great". But in fact "the ability of natural organisms to adapt to selective environments or problems is simply amazing."

Watson also suggests that gene networks can carry "memories" of previous adaptations, shaped by past environmental demands.

For example, some groups of organisms might evolve quickly to eat food that is harmful to other members of the same species, because their ancestors had already coped with such a diet. In the past, the structure of their gene regulation could have shifted, making certain triggers of gene expression easier to pull. This "bias" would ultimately help their descendants digest difficult food.

One of Watson's real-world examples is the stickleback. These fish repeatedly evolved tolerance to fresh water, then to salt water, then back again, depending on what their current environment demanded.

Watson's idea implies that organisms should come primed with a variety of ready options for adaptation.

It also means that gene networks in all animals have evolved to fit the natural world of the Earth. That is why organisms respond so well to their environment: the stresses and strains of Earth's environments have been imprinted in the regulatory relationships between genes over millions of years.

"I think there has always been great potential in exploring the parallels between machine learning and evolution, but no one has done it with the rigour of Richard Watson," says Kevin Laland of the University of St Andrews in the UK, who has worked with Watson on a large-scale project.

The big problem for Watson's hypothesis, however, is whether any empirical evidence for it can be found in nature.

So far, all of Watson's ideas rest on computational experiments in the laboratory. These experiments appear to produce results resembling real organisms, but the corresponding processes have not yet been observed in nature.

"That is the $64 million question," Watson concedes.

But Watson and Laland believe there are other ways to test the theory. Watson suggests analysing how the gene networks of microbes evolving in the laboratory change. Since microbes such as bacteria reproduce quickly, several generations of adaptation can be observed within days.

"If you want to test the theory rigorously, you should ask whether you can make new predictions that are not yet in the literature," Laland says.

For example, a computer system based on Watson's ideas could be built to predict how organisms will evolve in the wild under certain known conditions. If such a system proved accurate, that would certainly strengthen the theory.

Gene networks already show several features that lend support to Watson's approach. A mini-network of genes determining a specific adaptation, like one of the modules mentioned above, can sometimes be switched on or off by a single activator gene.

Examples can be found in nature, says Watson. Among them are evolutionary throwbacks: organisms with adaptations that were thought to have been lost in their ancestors. This is called "atavism".

A well-known example is chicken teeth. Chickens are genetically capable of growing teeth, but do not normally do so, in the wild or in captivity. Tooth growth can, however, be switched on in the laboratory using molecular biology.

Sometimes atavistic traits appear in natural populations. One recent possible case is a whale found on a beach in Australia in February 2016. It had tusk-like teeth, which are not usually seen in such whales. These may be a leftover from ancestors that also had tusk-like teeth, millions of years ago.

Another relevant phenomenon is "convergent evolution", in which unrelated species living in completely different habitats somehow arrive at the same adaptation. Examples include particular wing patterns in butterflies, and strikingly similar fish living in separate lakes in Africa, says Laland.

"The same forms, the same patterns appear again and again," he says. "Perhaps certain kinds of fish are simply easier to build than others. Some forms may show up more often over the generations."

Evolvability of the kind Watson describes could explain this. Gene networks, he says, have gradually learned to react in similar ways to similar situations. Modular features, such as a butterfly's wing pattern, may simply be more likely solutions for a learning system than others.

In other words, given the same necessary conditions, evolution will perform the same tricks again and again.

And here philosophical questions arise. If evolution functions like a vast natural computer, does evolvability mean that life is, in some sense, programmed to improve, at least at the genetic level? Some biologists recoil at the idea, but if organisms' ability to adapt improves over time, if evolution learns, is that really so far-fetched?

Watson thinks not.

"Only if you have a system with the appropriate variability, selection and inheritance can you make evolution work. And without evolvability it is impossible to imagine."

This article is based on material from https://hi-news.ru/research-development/iskusstvennyj-intellekt-i-genetika-zhivyx-sushhestv-obuchayutsya-ochen-poxozhe.html.
