When will we have real artificial intelligence?

The field of artificial intelligence research has come a long way, but many believe it was officially born in the summer of 1956, when a group of scientists gathered at Dartmouth College. In the preceding years, computers had improved dramatically; they could already perform computations far faster than humans. Given this remarkable progress, the scientists' optimism was understandable. A few years earlier, the brilliant computer scientist Alan Turing had suggested that thinking machines would appear, and researchers arrived at a simple idea: intelligence, in essence, is just a mathematical process. The human brain is, to a degree, a machine. Isolate the process of thinking, and a machine can simulate it.

At the time, the problem did not seem particularly difficult. The Dartmouth scholars wrote: "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." This proposal, incidentally, contained one of the earliest uses of the term "artificial intelligence". There were many ideas: perhaps a scheme imitating the action of neurons in the brain could teach machines the abstract rules of human language.

The scientists were optimistic, and their efforts were rewarded. They built programs that seemed to understand human language and could solve algebra problems. People confidently predicted that human-level machine intelligence would appear within about twenty years.

It so happens that the business of forecasting when we will have human-level artificial intelligence was born around the same time as the field of AI itself. It all goes back to Turing's first article on "thinking machines", in which he predicted that the Turing test - in which a machine must convince a person that it, too, is human - would be passed within 50 years, by 2000. Today, of course, people still predict it will happen within the next 20 years; Ray Kurzweil is among the famous "prophets". There are so many opinions and forecasts that it sometimes seems AI researchers have put the following message on their answering machines: "I have already predicted what your question will be, but no, I cannot predict it exactly."


The problem with trying to predict an exact date for human-level AI is that we do not know how far we have to go. It is not like Moore's law. Moore's law - the doubling of computing power every couple of years - makes a concrete prediction about a specific phenomenon. We roughly understand how to move forward - by improving silicon chip technology - and we know that, in principle, our current approach is not limited (until we start working with chips at the atomic scale). The same cannot be said of artificial intelligence.
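What makes Moore's law so unusual as a forecast is that it reduces to simple arithmetic: pick a starting point and a doubling period, and the projection follows mechanically. The sketch below illustrates this; the 2,300-transistor figure for the Intel 4004 in 1971 is a well-known historical number, but the function itself is just the doubling formula, not any official model.

```python
def moores_law(start_count: float, start_year: int, year: int,
               doubling_period: float = 2.0) -> float:
    """Project a quantity forward, assuming it doubles every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Intel 4004 (1971): roughly 2,300 transistors. Fifteen doublings later:
count_2001 = moores_law(2300, 1971, 2001)
print(f"{count_2001:,.0f}")  # 2300 * 2**15 = 75,366,400
```

The article's point is precisely that AI progress has no analogous quantity to plug into such a formula: there is no agreed-upon unit of "intelligence" whose doubling we can track.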

Common errors

Stuart Armstrong has studied trends in these forecasts. In particular, he looked for two basic cognitive biases. The first is the idea that experts in the field predict AI will arrive (and make them immortal) just before they die. This is the "rapture of the nerds" criticism often levelled at Kurzweil - that his predictions are motivated by fear of death and a desire for immortality, and are fundamentally irrational. The creation of a superintelligence becomes almost an object of worship. Such criticism usually comes from people who work in AI and know firsthand the disappointments and limitations of modern systems.

The second idea is that people always choose a horizon of 15-20 years. That is near enough to convince others they are working on something revolutionary (efforts that will only bear fruit in centuries are far less attractive), but not so near that they will immediately be proved embarrassingly wrong. People are happy to predict the arrival of AI within their lifetimes, but preferably not tomorrow and not next year - rather in 15-20 years.

Measuring progress

Armstrong notes that if you want to assess the reliability of a specific forecast, there are many parameters to look at. For example, the idea that human-level intelligence will be achieved by modeling the human brain at least gives you a clear framework for assessing progress. Each time we produce a more detailed map of the brain, or successfully imitate some part of it, we advance toward a specific goal that, presumably, will result in human-level AI. Maybe twenty years will not be enough to reach that goal, but at least progress can be assessed scientifically.

Now compare that with the approach of those who say that AI, or something conscious, will "emerge" once a network is sufficiently complex and has sufficient computing power. Perhaps this is how we imagine human intellect and consciousness arose in the course of evolution - though evolution took billions of years, not decades. The problem is that we have no empirical data: we have never seen consciousness arise from a complex network. Not only do we not know whether it is possible, we cannot know when to expect it, because we cannot measure progress along this path.

There is a tremendous difficulty in understanding which tasks are genuinely hard, and it has haunted us from the birth of AI to this day. Understanding human language, randomness and creativity, self-improvement - achieving all of it at once is simply impossible. We have learned to process natural speech, but do our computers understand what they are processing? We have built AI that seems "creative", but is there any real creativity in its actions? Exponential self-improvement leading to a singularity seems altogether beyond the clouds.

We ourselves do not understand what intelligence is. For example, experts in the field consistently underestimated AI's ability to play Go. In 2015, many thought AI would not learn to play Go until 2027. But it took only two years, not a dozen. Does this mean that in a few years AI will write the greatest novel? Understand the world conceptually? Approach human-level intelligence? Unknown.

Not human, but smarter than humans

Perhaps we have misunderstood the problem. The Turing test, for example, has not yet been passed in the sense that an AI can convince a person in conversation that it is human; but AI's computational capabilities, and its abilities at recognizing patterns and driving cars, already far exceed human levels. The more decisions "weak" AI algorithms make, the more the Internet of Things grows, the more data is fed to neural networks - and the greater the influence of this "artificial intelligence".

Perhaps we do not yet know how to create human-level intelligence, but equally we do not know how far the current generation of algorithms can take us. So far they bear little resemblance to the terrifying algorithms of science fiction that undermine the social order - a sort of vague superintelligence. By the same token, this does not mean we should cling to optimistic forecasts. We will have to make sure that algorithms always encode the value of human life and human morality, so that they do not become entirely inhuman.

Any forecast should be taken with a grain of salt. Do not forget that at the dawn of AI it seemed success would come very quickly - and today we still think so. Sixty years have passed since the scientists gathered at Dartmouth in 1956 to "create intelligence in twenty years", and we are still continuing their work.

The article is based on materials from https://hi-news.ru/research-development/kogda-u-nas-budet-nastoyashhij-iskusstvennyj-intellekt.html.