Skoltech professor Maxim Fedorov told Trends where the uncontrolled use of artificial intelligence can lead, and why the plot of the film “Her”, about falling in love with a machine, could well repeat itself in real life.
About the expert: Maxim Fedorov, Skoltech Vice President for Artificial Intelligence and Mathematical Modeling, professor, and member of the UNESCO expert group drafting recommendations on ethical principles for the development and use of artificial intelligence.
— Let’s start by defining what artificial intelligence (AI) is. Some experts believe AI is a marketing ploy: today’s computers cannot think and have no intelligence. What is your view on this?
— There are over a hundred definitions of AI. For me, artificial intelligence is a system of several components that collects, analyzes and processes data, makes decisions based on that data, and executes those decisions. Such systems can be called artificial intelligence.
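To make that component view concrete, here is a minimal sketch using a hypothetical thermostat as the toy domain; the class, method names and thresholds are illustrative assumptions, not anything from the interview.

```python
class SimpleAgent:
    """Collect -> analyze -> decide -> act, in one pipeline."""

    def __init__(self, target_temp: float = 21.0):
        self.target_temp = target_temp

    def collect(self, sensor_reading: float) -> float:
        # Data collection: a real system would poll a sensor here.
        return sensor_reading

    def analyze(self, temp: float) -> float:
        # Data processing: compute the deviation from the target.
        return temp - self.target_temp

    def decide(self, deviation: float) -> str:
        # Decision-making based on the processed data.
        if deviation > 0.5:
            return "cool"
        if deviation < -0.5:
            return "heat"
        return "idle"

    def act(self, action: str) -> None:
        # Execution of the decision.
        print(f"actuator -> {action}")


agent = SimpleAgent()
for reading in (19.0, 21.2, 23.5):
    agent.act(agent.decide(agent.analyze(agent.collect(reading))))
```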
The term comes from the English word “intelligence”, which has two meanings: the gathering and processing of information, and intellect itself. When the term was coined in the 1950s, it was used precisely in the first sense, an artificial system for collecting and processing data. Nothing related to human consciousness or cognitive function was implied.
The confusion around the term is recent; journalists have turned up the heat. In the technical community, it is customary to divide artificial intelligence into strong and weak.
Strong AI refers to systems that can reproduce, and exceed, the entire range of human cognitive abilities.
Weak AI refers to the simple algorithms we encounter daily: voice assistants, face recognition, chess programs and so on. Essentially all the hype we are now seeing is about weak AI. Strong artificial intelligence does not exist yet, and it is unlikely to appear in the foreseeable future.
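To show how simple such “weak AI” can be, here is a minimal sketch of minimax, the search idea behind classic chess programs, applied to a tiny hand-made game tree. The tree and its values are invented for the example.

```python
def minimax(node, maximizing: bool) -> int:
    # Leaves are plain integers: the position's value for the maximizer.
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a position the opponent moves from; leaves are
# terminal evaluations. The numbers are made up for illustration.
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # -> 3, the best guaranteed outcome
```

Real chess engines add pruning and heuristics, but the decision procedure is still this kind of search, with no understanding behind it.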
However, even weak artificial intelligence can pose certain threats.
— What threats are you talking about?
— It is important to understand that AI technology is an object, a piece of hardware, like a calculator or a hammer. Any danger lies in how it is used. The object itself should not be credited with any inner essence. Why do people, consciously or unconsciously, endow AI systems with personal qualities? Because it is a human trait: we tend to anthropomorphize everything around us. We give names to animals and to machines. But anthropomorphizing artificial intelligence systems is not justified. We do not equate dolls with people, though outwardly they are very similar.
— But on the other hand, if you are a member of the UNESCO group developing global recommendations on the ethics of artificial intelligence, then you admit there is still difficulty in how AI is interpreted, in how we interact with it?
— You have touched on a very interesting topic. Within the working group, we discussed at length that its name was not chosen quite correctly, because AI ethics as such cannot exist. Ethics is a human characteristic. Rather, we are talking about the ethics of using AI.
— That is, how people should use artificial intelligence properly?
— Yes, exactly. You can drive a nail with a hammer, or you can do physical harm to a person with it; either way, the hammer has no ethics. Ethics must reside in the one who wields it. It is the same with artificial intelligence.
Moreover, in the discussion we tried to exclude all possible references to the subjectivity of artificial intelligence. There have been many proposals from a number of countries to give artificial intelligence the status of a quasi-member of society. I think this is a very harmful direction. It can lead to technoracism, since behind every thing stands its creator or owner. If I harm someone with a hammer (by hitting them, say), it is unequivocally clear that I am to blame. But if you grant the hammer subjectivity, room for doubt appears: maybe the hammer itself struck the blow, so let us punish the hammer, take it apart, and so on. And I, it would seem, am no longer to blame, although I struck the blow. The same applies to complex systems.
So far, in the legal field it is more or less clear what to do when harm is done. But not everyone likes this: there are stakeholder groups who want the benefits without bearing liability for the harms. So they attempt a cunning move: give artificial intelligence rights, and legally reward or punish it.
To technical specialists this seems like nonsense, yet the idea is being actively promoted on international platforms. Why? Because the manufacturer profits from selling the device, the owner profits from using it, and if the device causes harm, no person is to blame. The story is very complicated and, I would say, politicized. There is a lot of money in it.
— You mentioned strong AI. What would it look like?
— Actually, it is not clear what it would be. One can approach it from different systemic angles.
If we take strong artificial intelligence to be a machine with human-level consciousness and feelings, then the questions arise: what is a person? What is consciousness? What are emotions? There is still no single answer. Personally, I doubt strong AI can be created at all: if we do not understand who we are, how can we create it?
On the other hand, there is another concept: non-humanoid AI. Indeed, who said consciousness must be anthropomorphic? One can imagine many forms of consciousness different from the existing ones, or even more powerful. But then the question of measurement arises: is there even a ruler for measuring essential human qualities? Within the working group, we try to separate people clearly from technology, which should be made for people’s benefit. And if a technology’s development infringes on human rights, it must be stopped. That was the case with human cloning: because of contentious ethical issues, after long discussion it was decided to ban it.
— What ethical issues do you discuss within the working group? Are you trying, for example, to regulate the gradual displacement of people from their jobs by artificial intelligence?
— Yes, a great deal of attention is paid to this issue, or rather to preserving jobs and to the digital divide: unequal access to technology at the level of individuals, companies or states. This is especially visible on the UNESCO platform, because it brings together representatives of various states, including North Africa and the Middle East. Their view of the problem differs from that of representatives of technologically advanced states, among which I include Russia. They say: how can there be an AI issue when in many places we have no Internet and not enough textbooks in schools? The problem of unequal access to technology is not limited to artificial intelligence. Similar questions arise in agricultural technology: in many countries, agricultural corporations have forced out local producers, leading to a host of environmental, social and other problems.
— So technology inevitably leads to a digital divide?
— Every spurt of technological progress has changed the landscape: jobs, the economy and so on. But one needs to understand how AI differs from a number of other technologies.
The first difference is distribution. Many compare the risks of artificial intelligence to the risks of nuclear weapons. To an extent I can agree: the threats may indeed be comparable in their effects on the economy and on people.
But there is one major difference. Nuclear weapons are localized, while AI is delocalized by definition. It is a whole stack of technologies, a combination of software products and hardware, distributed around the world via the Internet. Artificial intelligence can therefore propagate decisions across the planet almost instantly, and a person may simply have no time to respond. The speeds at which artificial intelligence technologies operate are far higher than the speed of the human brain; a person is simply unable to oversee processes that artificial intelligence completes in tiny fractions of a second. But how ready are we to hand over more and more functions to AI, and how safe is that? It is a rather dangerous trend. Finally, there is denationalization: with AI it is hard to tell who owns it and whose interests the technology serves.
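A rough illustration of that speed gap: the 250 ms figure below is a commonly cited ballpark for simple human reaction time, and the automated “decision” is a stand-in rule, not a real system.

```python
import time

HUMAN_REACTION_S = 0.25  # assumed typical simple human reaction time

start = time.perf_counter()
decision = "block" if 0.97 > 0.9 else "allow"  # stand-in automated rule
machine_s = max(time.perf_counter() - start, 1e-9)  # guard against timer granularity

print(f"machine decision ({decision!r}): {machine_s * 1e6:.1f} microseconds")
print(f"human reaction time: {HUMAN_REACTION_S * 1e3:.0f} ms")
print(f"machine is roughly {HUMAN_REACTION_S / machine_s:,.0f}x faster")
```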
— Will there come a time when maintaining human control over these systems and algorithms becomes problematic?
— We need to look at this from a different angle. Today no system is reliable and free of flaws; many algorithms are hacked by ordinary schoolchildren. We are carrying water in a leaky sieve, and the water is human life. The technical risks are enormous. Yet on a wave of enthusiasm we keep shifting functions to artificial intelligence, out of laziness and because for now it seems profitable. This leads to huge losses that are simply not talked about. As one example: what do you know about hacker attacks on the chemical industry?
— Almost nothing.
— Because it is not talked about much. In fact, many chemical plants around the world have been hacked in recent years; criminals extorted money and got it. It is one thing when a bank’s system is hacked: unpleasant, bad, but it will not end in an explosion. It is quite another when attackers take over a chemical reactor running a sulfuric-acid reaction that releases heat and toxic gases. Over-automation of production brings many such problems.
— But there are statistics showing that automation reduces the number of accidents caused by human error.
— Do you know why I strongly oppose putting autonomous vehicles on the streets en masse? I am told, like everyone else, that people drive badly and driverless cars will reduce the number of accidents. My usual answer is this: an individual driver who goes mad or loses consciousness can cause a lot of trouble locally. But what if, say, a million cars in Moscow were hacked? That is already a global catastrophe. When people think of an autonomous car, they picture a passenger car. Now imagine it is a hacked fuel truck. This is a completely unjustified transfer of risk.
— How do you propose to regulate this?
— I have a position: within the working group we need to focus on the ethics of application. Human-centeredness should run through everything as a common thread. In effect, we are returning to Isaac Asimov’s Three Laws of Robotics, now half-forgotten. He actually formulated everything well.
Isaac Asimov’s Three Laws of Robotics: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
There is another point. Most of the planet lacks access to technology and to development. In our country, for example, there is Yandex, while many countries, even developed ones such as Germany, have no good search engine of their own; they sit on American technology. Many countries thus have no alternative: they will use whatever they are given. But how ethical is market monopolization? Or data monopolization? Facebook, for example, collects data that it does not share. More broadly, there are many issues around a person’s digital footprint. When a person dies, who owns their digital assets? Their Facebook account? How ethical is it to keep using that account? Digital technologies raise a great many new ethical questions.
— Are there real-life cases resembling film plots, where characters begin to feel something for a robot?
— Many ethical issues surround the empathy evoked by the new social robots: robots designed specifically to care for the elderly and the sick. How human-like should their characteristics be, how much empathy should they evoke? Or should they, on the contrary, be made deliberately non-human so as not to arouse sympathy? For many people this becomes a personal tragedy. There was such a case in my practice: a person talked with an artificial intelligence in a dialogue system, and they “became friends”. But robots cannot be friends; they merely answer requests. Then the owner came, took the robot away and closed the account, and the man felt he had lost a friend. Incidentally, there are already cases of money being extorted for continued use of such a machine.
— It is like the Tamagotchi of the late 1990s. There were cases when a Tamagotchi died and children harmed themselves out of grief, because they had grown attached to it.
— Yes, exactly. Except that the stickiness of the new technologies (there is a professional term, “sticky technologies”) is an order of magnitude greater than the Tamagotchi’s, because with voice and intonation the illusion is complete. Do you see the problem? It is all an illusion: technologies have no consciousness, they cannot suffer or love, yet they can evoke emotions in people, and technology owners can use that to manipulate them. This is what conversations about the ethics of artificial intelligence should focus on. Technology should not be sticky. It should make clear that you are not talking to a person.
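One way to make “not sticky” concrete is a dialogue system that announces, and periodically repeats, that it is a program. A minimal sketch with placeholder reply logic; the disclosure wording and cadence are illustrative assumptions.

```python
DISCLOSURE = "Reminder: I am an automated program, not a person."

def reply(user_message: str, turn: int) -> str:
    # Placeholder logic; a real system would generate an answer here.
    answer = f"(canned answer to: {user_message!r})"
    # Disclose on the first turn and re-disclose every fifth turn after that.
    if turn % 5 == 0:
        return f"{DISCLOSURE}\n{answer}"
    return answer

for turn, message in enumerate(["hello", "how are you?", "are you real?"]):
    print(reply(message, turn))
```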
— Yuval Noah Harari, in his book 21 Lessons for the 21st Century, says that technology can give rise to new dictatorships. Do you agree?
— Yes, that is where I started when I spoke about technoracism. Harari is disingenuous on a number of points, but he nevertheless has worthy ideas; he is a genuinely interesting philosopher. He is right about the digital elite. Look at where the owners of Microsoft, Amazon and Facebook are investing now and everything becomes clear: treatment of Alzheimer’s disease, life extension and so on. It is interesting, of course, how society will develop once it stratifies; it is already partially stratified. If biohacking takes off, some people really will become smarter, faster and longer-lived than others. For now we are one human species, but what Harari describes is a division into super-races and “the rest”. Technoracism could well take that form. How to deal with it? I believe access to technology must be addressed comprehensively, perhaps using it to tackle a number of related issues: social inequality, access to benefits and so on.
— Is there a limit to the development of artificial intelligence?
— The development of artificial intelligence is constrained by the large amount of electricity these systems consume. Some hopes are pinned on quantum technologies, but as far as semiconductors go we have nearly reached a plateau: there is simply not enough energy for further growth. We are very close to the limit. I would say we will soon see a slowdown within the current paradigm, though that does not mean the risks disappear. The atomic bomb was invented over 70 years ago, yet it has not become any less dangerous. The same applies here. Forecasts for AI differ, but I think something else will be developing actively.
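To make the energy constraint concrete, a back-of-envelope sketch; every number in it is a hypothetical assumption for illustration, not a figure from the interview.

```python
# All figures below are illustrative assumptions, not measurements.
accelerators = 1_000          # hypothetical cluster size
power_per_device_kw = 0.3     # assumed ~300 W per accelerator
hours = 30 * 24               # assumed 30-day training run

energy_mwh = accelerators * power_per_device_kw * hours / 1_000
household_kwh_per_day = 10    # rough average household consumption
household_days = energy_mwh * 1_000 / household_kwh_per_day

print(f"training energy: {energy_mwh:.0f} MWh")
print(f"equivalent to about {household_days:,.0f} household-days of electricity")
```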
— For example?
— Biotechnology and medicine. While everyone was busy with artificial intelligence, the coronavirus was missed.
— Does that mean we need to invest in science?
— Yes, in science, and not to rely on artificial intelligence alone. Even now the simplest protein structure, the coronavirus, is a nightmare for all of humanity. And that is just one example. We need to study the world around us: it holds many mysteries that artificial intelligence cannot solve on its own. Biology, physics, astronomy, the social sciences all need to be explored, and AI, as a tool, can help with that.