Ethics of artificial intelligence: why it is impossible to humanize robots

Speculation about how machines can change our lives has been a staple of science fiction for decades. But what are the risks behind the anthropomorphization of robots?

In the 1940s, when widespread interaction between humans and artificial intelligence still seemed a distant prospect, the American science fiction writer Isaac Asimov formulated the famous three laws of robotics, which were designed to ensure that robots do not harm humans:

  • a robot may not harm a human being or, through inaction, allow a human being to come to harm;
  • a robot must obey all orders given by humans, except where such orders conflict with the first law;
  • a robot must protect its own existence, as long as this does not conflict with the first or second laws.

In 1938, before these laws were even formulated, Lester del Rey published the story “Helen O’Loy,” about a robotic woman who falls in love with her creator and becomes his ideal wife. And in 1939, Eando Binder published the story “I, Robot,” about the fate of the robot Adam Link, who is driven by love and a sense of honor.

In real life, such displays of feeling are impossible, says Maxim Fedorov, rector of Sirius University of Science and Technology and the Russian Federation’s representative in the UNESCO working group developing recommendations on the ethics of AI. He emphasizes that artificial intelligence is just a set of algorithms.

A far more pressing issue is anthropomorphization, that is, people’s tendency to humanize robots, computers, and machines.

“People tend to anthropomorphize everything around them. In Japan, for example, the humanization of objects sometimes goes beyond all reason: even houses and trees are given names. Animating things is part of human nature. But is it a good thing? Anthropomorphism is an illusion, because people stop communicating with people and start communicating with robots, with algorithms, and this leads to all sorts of social problems. We have already begun to prefer chats in social networks and messengers to live communication. The quality of communication is changing completely, and this cannot help but affect demographics and social and family ties,” says Maxim Fedorov.

Practical psychologist Nina Volontei agrees with him: “Excessive empathy toward robots and their humanization can leave people unwilling to invest in building relationships in real life. Relationships with living people can sometimes be difficult and painful, but replacing them with artificial intelligence can breed depression.”

Robots among us

They already talk like people, move like people, and work like people. Robots are getting smarter and more human-like. The US already has the American Society for the Prevention of Cruelty to Robots (ASPCR), and neuroscientists argue that robots will eventually have social rights, much like animals. This is not science fiction: our coexistence with machines has already begun, and so has the ethical debate.

People very easily attribute mental states to objects and empathize with them, especially when those objects resemble living beings. The danger is that this can be exploited for manipulation: to push people into attributing more intellectual or even emotional value to robots and artificial intelligence systems than they deserve.

One of the most famous roboticists working on humanoid machines is Osaka University professor Hiroshi Ishiguro. In 2018, he created Ibuki, a robot in the form of a ten-year-old boy. Its facial expressions convey not only emotions but also “involuntary” movements that add to the android’s realism: Ibuki can blink and move its head and eyes.

The gynoid Sophia, developed by the Hong Kong company Hanson Robotics, has also gained worldwide fame. A gynoid is a variety of android with an emphatically female appearance.

In 2017, Sophia even became a citizen of Saudi Arabia, the first robot to receive citizenship of any country. The news put an end to the ASPCR’s debate over robots’ right to legal personhood: its members reasoned that in the future, machines and robots would have to pay taxes and make social security contributions, shifting society toward an economic relationship with machines that could compensate, for example, for lost jobs.

Human cruelty towards robots

A thornier issue is the social rights of robots, comparable to animal rights and, ultimately, human rights, but only if robots develop some degree of consciousness. Then again, the degree of consciousness of people themselves still raises questions. In 2015 in the United States, for example, the robot HitchBOT was destroyed by an unknown vandal, who tore off its arms and beheaded it; the head was never found. News reports of the incident on local TV channels caused outrage all over the world. The robot had been built by two Canadian researchers as a hitchhiking machine for a social experiment analyzing how humans respond to androids. HitchBOT stood by the roadside, thumb always up, until a driver dared to take it as a passenger. The face of the metal hitchhiker, displayed on a screen of red pixels, was always smiling.

HitchBOT was designed to be a lovable travel companion that could hold a simple conversation and take photos of the journey. But HitchBOT’s own journey ended in Philadelphia, where it arrived after traveling through Canada, the Netherlands, and Germany. On a dark, deserted street, an unknown attacker smashed the robot with particular cruelty. The experiment was over, and humans had shown the android their worst side.

The attack on HitchBOT is not an isolated incident. YouTube has plenty of videos of people, including children, taking out their anger on helpless robots. According to psychologists, a person who harms a robot is testing not only the limits of the machine’s capabilities but also their own limits and those of the people around them, in order to understand what is acceptable and what is not.

And yet, it would seem, if you want to destroy a robot, you could simply cut its cables. But no: beating the machine is itself anthropomorphism. Psychologists believe that violent behavior toward androids is a way of creating distance, of marking them as different, as outsiders. This may mean that people harbor a certain fear that robots may one day join their social group, even though anthropomorphic robots were originally meant to evoke sympathy.

“Robots are apparently made in the likeness of humans in order to inspire people’s trust. External similarity evokes pleasant feelings,” says practical psychologist Nina Volontei. “If artificial intelligence looks like some abstract object, I think it will arouse less feeling and less interest. After all, few of us feel love for, say, our computer or vacuum cleaner. We perceive such devices functionally.”

Psychologist Oksana Kozyreva does not agree with this point of view: “Since the Stone Age, it has been natural for people to endow objects and even weather phenomena with features resembling their own. The point was to bring the phenomenon closer, to make it more understandable. As soon as a person gave a name to the Sun or the wind and began to talk to it, imagining it as some kind of deity, it became clearer and, for example, no longer had to be feared. The same goes for robots. There is a temptation to humanize them, give them human names, and project human relationships onto them. But at the same time, a robot that is too human-like seems creepy. ‘The Uncanny’ is the title of an essay by Sigmund Freud. Its main idea is that what terrifies us is not the encounter with something unknown and alien but, on the contrary, the encounter with what is familiar to us.”

In 1919, the Austrian psychoanalyst Sigmund Freud published his essay “The Uncanny.” In it he drew on an article by the psychologist Ernst Jentsch, who had analyzed E. T. A. Hoffmann’s story “The Sandman,” with its lifelike doll Olympia. Analyzing the literary text, Freud singled out the main devices that create the “uncanny” effect: dolls, wax figures, doubles, ghosts. All these objects share one feature: we cannot tell whether they are alive or dead. All of them, one way or another, remind us of a person, yet something in them betrays the “inhuman” or “inanimate.” Almost the entire horror film industry is built on this effect. Freud even quotes Friedrich Schelling: “The uncanny is that which ought to have remained hidden but has come to light.”

Uncanny Valley effect

The “uncanny valley” effect is a hypothesis formulated by the Japanese roboticist and engineer Masahiro Mori. It holds that a robot or other object that looks or acts almost, but not exactly, like a real person evokes unease and revulsion. In 1970, Mori found that a machine too similar to a human is no longer perceived as a piece of technology: an anthropomorphic robot begins to seem like a sick or lifeless person, a reanimated corpse, stirring a fear of death.

Maxim Fedorov: “A humanoid robot is perceived as undead. No matter how well it copies human facial expressions and movements, the human brain cannot perceive the robot as a living being. By and large, making robots look like us is expensive and pointless; their essence is functionality. In Japan, for example, social robotics has long been developed. Robots work at airports, carrying passengers’ luggage or patrolling, and robotic nurses and orderlies are used in hospitals. Most of them are not humanoid. Anthropomorphization is not needed here; the main thing is that the assistant robot performs its function.”

Dangerous experiments

Psychologists and sociologists are sure that our interaction with robots can change human nature itself, and perhaps not for the better. It is clear that the appearance and abilities of the various forms of artificial intelligence largely determine our attitude toward them. If a robot can move, we perceive it as a creature with some kind of intention or purpose, presumably reflecting an inner mind; humans are wired to think and respond that way. Perhaps that is why voice assistants seem harmless to us: they are confined to their smart speakers and capsules.

But the inquisitive human mind wants to go further, and many experiments are genuinely frightening. In early 2022, news emerged about Alexander Osipovich, a programmer from Perm Krai who built a copy of the Terminator on a 3D printer and “implanted” it with the consciousness of his deceased grandfather. He posted a video of the experiment on his YouTube channel and told reporters that the technology could be useful to people grieving the loss of a loved one.

Psychologists are skeptical of this way of coping with the death of a loved one.

Oksana Kozyreva is sure that integrating the consciousness of dead people into robots is a dangerous undertaking, above all for the user. Everyone at some point loses someone close. “Of course, you want to clutch at any straw in order to see or hear a dear person again, but it is all deception and illusion. Loss must be lived through naturally, even if that means choking on tears and grief, even depression. All of it will pass, and such inventions will only drag the process out. It is like artificially keeping alive a patient whose brain has already died but whose heart is still beating. Distorting reality is very harmful; it can drive you mad, as the psyche simply begins to split reality in two. Psychology knows of severe cases in which a person cannot come to terms with a loss even decades later, which leads to serious health problems. To create such a robot is to manipulate and exploit people at the very moment when they are heartbroken.”

Scientists insist that a great deal must be weighed before such inventions go into production. Maxim Fedorov adds: “Long-term scientific research is needed here; it is difficult to give an immediate moral assessment. After the loss of a loved one, people sometimes talk to photographs of the dead, to their belongings. But photographs do not answer back. The question is who created the algorithm that will respond to the user, and who will later be responsible for what that algorithm produces. Chatbots can be cited as an analogue of this invention from the Perm region: you can hold a dialogue with them too. These are marketing, so-called ‘sticky’ technologies that keep your attention on a particular product or service, which is also a kind of manipulation. So as artificial intelligence technologies develop, a great many questions arise, not only philosophical and ethical but also regulatory.”

Code of Ethics

The regulation of artificial intelligence is being actively discussed around the world. “Our country was one of the first to adopt a code of ethics in this area; it has been signed by the leading Russian organizations working in the field,” says Ivan Oseledets, director of the Center for Artificial Intelligence Technologies at the Skolkovo Institute of Science and Technology. “There are discussions in the European Union and the US as well. Certain areas, such as video surveillance, are already regulated at various levels; for example, some American cities have banned the use of CCTV cameras to detect violations. The use of artificial intelligence is closely tied to the use of data, and the European General Data Protection Regulation (GDPR) also affects the development of AI technologies. But it is the ethical issues that are key. At the moment it is very difficult to develop a unified position on the development and application of artificial intelligence methods that would, on the one hand, reduce risks and, on the other, not hinder the development of the technology.”

Heated discussions take place in the international group of UNESCO experts preparing the first global recommendations on the ethics of artificial intelligence, a group that includes Maxim Fedorov and Ivan Oseledets. Russia was the only one of 178 countries to state clearly that decisions on the ethics of artificial intelligence require a strictly scientific approach. Russian experts proposed using computing power and various software products to simulate millions of event scenarios, on the basis of which conclusions can be drawn and decisions made. At the end of 2021, the Russian recommendations on the ethics of artificial intelligence were approved by UNESCO.

Digital divide

It seems clear that artificial intelligence and robotics will bring significant productivity gains and therefore a positive impact on overall well-being. The drive to increase productivity has always been a feature of the economy. However, higher productivity through automation usually means that fewer people are needed to produce the same output. Major labor market shocks have happened before: in 1800, agriculture employed more than 60% of the labor force in Europe and North America; by 2010 it employed about 5% in the EU, and even less in the richest countries of the world.

Classical automation replaced human muscle, whereas digital automation replaces human thinking and information processing, and that could mean far more radical changes in the labor market. Trends has already discussed a similar ethical question with Maxim Fedorov in an earlier article. The so-called digital divide refers to unequal access to technology at the level of individuals, companies, or states.

“The digital divide is especially noticeable on the UNESCO platform, because it brings together representatives of very different states, including those of North Africa and the Middle East,” Maxim Fedorov emphasizes. “Their views on the problem differ from those of representatives of technologically advanced countries, among which I include Russia. They say: how can we even discuss issues related to AI when in many places we have no Internet and not enough textbooks in schools? The problem of unequal access to technology is not limited to artificial intelligence; similar questions arise in agricultural technology. It is well known that in many countries agricultural corporations have displaced local producers, leading to a host of environmental, social, and other problems.”
