The Max Planck Institute has conducted a study on the prospects for AI development. Its authors believe that we will soon be unable to control superintelligent machines. Trends asked experts what they think of the potential threat.
The Max Planck Society is a network of research organizations headquartered in Munich. It includes more than 80 institutes and centers around the world. Among the members of the society are 20 Nobel laureates.
What the study is about
The study's authors, developers and technofuturists, discuss superintelligent machines: robots and AI-based algorithms that will surpass the human brain. Based on theoretical calculations, they suggest that once this happens, we will no longer be able to fully control artificial intelligence.
“A superintelligent machine that rules the world sounds like science fiction. But there are already machines that perform certain important tasks on their own, and programmers do not fully understand how they learned to do so. Therefore, the question arises: could this process at some point become uncontrollable and dangerous for humanity?” said Manuel Cebrian, study co-author and head of the Digital Mobilization Group at the Center for Humans and Machines at the Max Planck Institute.
In their work, the authors modeled two scenarios.
“If we look at the problem from the standpoint of basic computer science, it turns out that an algorithm meant to stop an AI from destructive actions could end up halting its own operations. When this happens, you won’t even know whether the algorithm is still analyzing the threat or whether it has stopped while trying to contain the malicious AI. In effect, this renders such an algorithm unusable,” said Iyad Rahwan, director of the Center for Humans and Machines at the Max Planck Institute.
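Rahwan's argument rests on a classic result of computability theory: no general procedure can decide, for every program and input, whether that program will ever stop (and, by extension, whether it will ever perform a harmful action). The sketch below is only an informal illustration of that barrier, not code from the study; the names would_halt and contrarian are hypothetical.

```python
# Informal illustration of the halting-problem barrier behind the
# containment argument described above. All names are hypothetical.

def would_halt(program_source: str, argument: str) -> bool:
    """A supposed perfect analyzer: returns True if running
    program_source on argument would eventually stop.
    The construction below shows no such analyzer can exist."""
    raise NotImplementedError("no general algorithm can decide this")

CONTRARIAN_SOURCE = '''
def contrarian(own_source):
    # Ask the analyzer about ourselves, then do the opposite.
    if would_halt(own_source, own_source):
        while True:   # analyzer predicts "halts" -> loop forever
            pass
    # analyzer predicts "runs forever" -> stop immediately
'''

# Whatever would_halt answers about contrarian run on its own source
# turns out to be wrong, so a perfect would_halt is impossible.
# A containment check that must decide whether an arbitrary AI program
# will ever act harmfully faces the same obstacle by reduction,
# which is the point Rahwan makes.
```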
Thus, the experts do not yet see any technical way to keep AI from malicious actions. Moreover, they argue that we might not even notice when machines approach this critical point in their development. That leaves us with another task: learning to recognize that moment.
Should we fear superintelligent machines in the foreseeable future? Here’s what the experts think.
Sergey Markov: “Humans don’t need AI to bring about an apocalypse”
About the speaker: machine learning specialist, founder of the popular science portal 22century.ru.
– In general, scenarios in which some general AI system gets out of control, with catastrophic consequences, seem extremely unlikely to me. For such a scenario to play out, a huge number of utterly improbable circumstances would have to coincide.
Most supposed apocalyptic scenarios involving AI actually include the AI system only as a kind of “icing on the cake”. Bostrom’s paperclip-maximizing machine can grind all of humanity into paper clips only if it wields a destructive technology that other AI systems cannot stop. Such a machine does not need an “operator” in the form of a superintelligence or a human-level AI system: given such a technology, people themselves could bring about a catastrophe without any trouble.
Some interpretations of this theory suggest that the AI system will somehow manipulate humans thanks to its intellectual superiority. In reality, such superiority will most likely never arise, since there are fundamental physical limits on the speed of computation (Bremermann’s limit, Landauer’s principle). Building an optimal strategy in an environment resembling the real world is an EXPTIME-complete problem. Simply put, an exponential increase in computing speed yields only a linear increase in “intelligence”.
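To make the last point concrete, here is a rough back-of-the-envelope sketch (an illustration under an assumed cost model, not taken from the interview): if finding optimal play costs on the order of 2^n operations for a problem of size n, then even a thousandfold increase in compute extends the tractable problem size by only about ten.

```python
import math

def max_tractable_size(ops_budget: float, base: float = 2.0) -> float:
    """Largest problem size n such that base**n <= ops_budget,
    assuming (hypothetically) that optimal play costs ~base**n operations."""
    return math.log(ops_budget, base)

budget = 1e18                                         # arbitrary compute budget
print(round(max_tractable_size(budget), 1))           # ~59.8
print(round(max_tractable_size(1000 * budget), 1))    # ~69.8: 1000x compute, only ~+10 in size
```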
Of course, AGI systems (artificial general intelligence, or strong AI) will at some point begin to outperform humans in a wide range of intellectual tasks, just as weak AI systems today outperform us in the tasks they are designed for. But such systems are tightly woven into the fabric of social relations and are used, above all, to augment human intellectual capabilities. In the future, the degree of integration between people and AI systems will only increase.
You can certainly find stories in which an ordinary car navigation system provoked an accident. But it is important to understand that the risks of using “smart” systems must be weighed against the risks of not using them, not against a hypothetical situation with no risks at all.
Gerd Leonhard, a futurist and one of the leading skeptics about technology’s impact on humanity, devoted an entire chapter to artificial intelligence in his book Technology vs. Humanity. Here is what he writes:
“Imagine a society where technology, especially AI, can solve every major problem facing humanity, from disease, aging and death to climate change, global warming, energy and food production, and even terrorism. Imagine machine intelligence that can process more information than we will ever be able to comprehend and read all the information in the world in real time, anywhere, anytime. This device (and its owners and developers) would become something like a global brain, incredibly powerful, beyond human understanding. Is this where companies like DeepMind and Google want to lead us? How will we be able to preserve our human qualities in such a scenario?”
Alexander Krainov: “Let’s wait until the robot can at least make coffee in an unfamiliar kitchen”
About the speaker: Head of the Machine Intelligence Laboratory at Yandex.
– I don’t think anything like that will ever happen. At least, there is no hint of it now. Let’s wait at least until a machine passes the Wozniak test and makes a cup of coffee in an unfamiliar kitchen. That is still a very long way off. All modern systems can solve only very narrow problems; they cannot operate under uncertainty toward a goal that is only vaguely formulated (in mathematical terms).
If we talk about AI in general, in the near future we expect significant progress in text analysis and generation, as well as in medicine. The development of RL (reinforcement learning) will bring major progress in applying AI to robotic manipulators, traffic management, and recommender systems.
Any decision-making system makes a certain percentage of errors. But such systems are introduced precisely because, with them, that percentage is much lower than without them.
Andrey Neznamov: “The potential harm from AI and robots is not comparable to the human factor”
About the speaker: Executive Director for the regulation of AI technologies and related areas at PJSC Sberbank (sber.ai), PhD in Law, author of books and publications on AI regulation.
– There is a view that the moment we stop controlling strong AI will be a turning point. From a legal standpoint, the main danger is that we know nothing about that moment: neither when it will come, nor by what signs to recognize it, nor even who the creator will be. In the absence of any real information, all attempts to regulate AI hang in the air.
In the future, we will depend on AI exactly as much as we now depend on all modern technologies and the realities of the information society. Robots and AI are, first and foremost, tools. Any tool can cause harm, and robots are no exception. Yes, there are examples of industrial robots causing harm, some even with fatal outcomes. Cases where drones have caused harm are so far rare. But in each of these cases, the harm is not comparable to what a human would cause in similar circumstances. The human factor is to blame in 90% of accidents.