Seven deadly sins of artificial intelligence

Digital technologies are forcing even so conservative an institution as the church to transform. On the eve of the coronavirus apocalypse, the Vatican teamed up with IBM and Microsoft to give artificial intelligence a moral code

About the Author: Sonya Shpilberg, Science Editor, ICEF (International Institute of Economics and Finance, Higher School of Economics).

No, we are not yet talking about the canonization of artificial intelligence (AI), which predicted the COVID-19 pandemic at the end of 2019. It is just that the Roman Catholic Church and the tech giants have agreed that human beings, and human ideas of good and evil, should be at the center of future virtual worlds. The corresponding “tablets”, titled the Rome Call for AI Ethics, were signed by the President of the Pontifical Academy for Life Vincenzo Paglia, Microsoft President Brad Smith and IBM Executive Vice President John Kelly.

At last the ethical field of modern engineering is in the hands of church morality, and no one will be able to say that AI is the devil’s handiwork. Developers will have moral imperatives to stitch into their neural network programs. And what, exactly, was the problem? Simply that, in the short time of its existence, the silicon mind has evolved so far that it has begun not only to reason like a person but also to break the rules, and that is serious. We don’t want to end up with a real Terminator, do we? Here is how AI learned to do human, all too human things, and to be bad.

Table of Contents:

  • Pride
  • Racism
  • Undermining aesthetics
  • Accidental killing
  • Lies
  • Homophobia
  • Lust

1. AI and pride

A long-standing ethical thorn in the soft spot of engineering innovation is the question of whether AI can be trusted to make decisions. The programmed engine we create as an anchor for AI inference cannot be called the neural network’s own ethical agenda; it is just a set of bounding frames. AI can create the illusion of thinking without thinking at all, simply by applying the patterns loaded into its program.

An AI decision looks like shuffling and choosing among options loaded in as “experience”, whereas a person analyzes and chooses the best option not only empirically but also emotionally. Homo sapiens also has associative thinking, so his decision is ontologically grounded, while AI has no phenomenal experience of its own, that is, no idea of the logic by which culture develops. Its decisions can therefore shock us with their “unethics”.

Fine: if you cannot yet coax genuine emotions out of a neural network, you can train it to work with external affects. These AI has learned to recognize all too well, becoming unnervingly independent, and cynical, in the skill. Thus the AI Now Institute, a laboratory created at New York University specifically to study the social consequences of deploying AI, reached a shocking conclusion at the end of 2019: technologies designed to recognize people’s emotions must be banned immediately. That was a blow to the MIT engineers who released Affectiva, the best-known emotion recognition project. It could at the very least help visually impaired people communicate with others, and more broadly be applied in many areas of life, for example helping HR departments hire employees. But here is the problem: AI judges harshly, inhumanly (as it must) and with complete intolerance, which is unacceptable in any form to the Western world.

“The candidate is prone to depression and suicide,” the program reports, and the HR staff, for all their distrust of technology, will “just in case” still turn the applicant down.
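For a sense of how low the barrier to this kind of screening has become, here is a minimal sketch using the open-source DeepFace library as a stand-in for commercial systems like Affectiva (this is not Affectiva’s API, and the image file name is hypothetical):

```python
# A minimal sketch of off-the-shelf emotion recognition, using the
# open-source DeepFace library as a stand-in for commercial systems
# like Affectiva. The image file name is hypothetical.
from deepface import DeepFace  # pip install deepface

# Recent DeepFace versions return a list of results, one per detected face.
results = DeepFace.analyze(img_path="candidate.jpg", actions=["emotion"])

for face in results:
    print(face["dominant_emotion"])   # e.g. "sad", "happy", "neutral"
    print(face["emotion"])            # per-emotion confidence scores
```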

2. AI and racism

Photo: Mario Tama / Getty Images

In most US states, artificial intelligence is used to estimate the likelihood of a crime. This helps judges decide whether or not to release a defendant on bail. But AI decisions are often biased. The investigative journalism project ProPublica analyzed algorithmically generated risk assessments for more than 7,000 detainees in Florida. It turned out that the program judges used to predict recidivism was thoroughly racist: it flagged black defendants as incorrigible criminals twice as often as their white “colleagues on the bunk”, neglecting the other facts of their biographies and circumstances.

AI prediction algorithms are based on machine learning, which is trained on data from historical archives. As a result, the “worldview” of silicon intelligence depends heavily on the type and quality of the data it is given. As scientists say, correlation must not be confused with causation. Among the best-known correlations, for example, are those between low income and criminal propensity, and between skin color and disrepute. The data of the past 30 years yield exactly such statistics, yet the consideration of cases always demands that a critical apparatus be applied to each specific case.
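How bias of this kind creeps in can be shown in a few lines. Below is a minimal sketch on synthetic data (not ProPublica’s analysis; every variable here is hypothetical): a model trained on historically skewed labels reproduces the skew even when the protected attribute is withheld from it.

```python
# A minimal sketch (not ProPublica's actual analysis) of how a model
# trained on biased historical labels reproduces that bias. All data
# here is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0/1) and a feature correlated with it.
group = rng.integers(0, 2, size=n)
income = rng.normal(loc=50 - 15 * group, scale=10, size=n)

# "Historical" labels: past policing was harsher on group 1, so the
# recorded outcome depends on the group itself, not only on behavior.
true_risk = rng.normal(size=n)
recorded_recidivism = (true_risk + 0.8 * group > 0.5).astype(int)

# Train WITHOUT the protected attribute; bias leaks in via income anyway.
X = income.reshape(-1, 1)
model = LogisticRegression().fit(X, recorded_recidivism)

scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {scores[group == g].mean():.2f}")
# The model assigns higher average risk to group 1 even though it never
# saw the group label: the correlation did the work for it.
```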

In 2019, the Beijing Internet Court (BIC) launched an AI-led online litigation service. It is the country’s first cyber court, in effect an integrated platform for online proceedings. The proceedings are conducted by a female avatar, itself a neural network trained on BIC cases, whose voice, facial expressions, behavior and logic are the product of a thorough analysis of data captured from a real judge. The AI judge is now available in micro-format in the WeChat app and on the Weitao network.

Developers and lawyers promise that the AI will deal with streamlined content: cases where it is hard to make a mistake, along with collecting and processing applications and providing advice. For the Chinese public, however, the innovation raises concern precisely about the loss of the ethical component of each case and decision. “Party policy”, though, demands something else. “Doing business at a faster speed is the modern law, because delayed justice is denied justice,” says Ni Defeng, vice president of the Hangzhou Internet Court. And why not? Chinese officials are confident that digital assistants are the future of global justice.

3. AI and undermining aesthetics

American engineer Janelle Shane is perhaps the most media-popular “trainer” of artificial intelligence. She writes a blog in which she crosses art with science, generating unusual experimental results. Her works are comforting and amusing, and they are proof that AI not only cannot deprive humanity of work, as experts around the world fear, but also has no chance of reaching an adequately critical level. We exhale.

But the Neurolyrics project of Boris Orekhov, Associate Professor at the HSE School of Linguistics, causes total frustration among apologists for beauty. He trained a neural network on the poems of great poets, Pushkin, Homer, Akhmatova and others, and then made it write verses of its own. The AI produced a number of opuses strikingly reminiscent of the originals.

An example of the creativity of a neural network trained on iambic tetrameters by different authors:

He’s a merciless head

Wave and hair of excitement

Do not feel will not fall.

Within the air is red laughter.

The network respected every feature of each poet’s style. The result was a collection of poems called Neurolyrics, which Orekhov described as “the legitimization of neural poetry in literature.” By ear, a focus group could not tell the network’s verses from those of real poets.
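The technique behind such experiments is, at bottom, next-character prediction. Here is a minimal sketch of a character-level language model in PyTorch, trained on a verse corpus and then sampled (illustrative only, not Orekhov’s actual setup; corpus.txt is a hypothetical file):

```python
# A minimal sketch of the idea behind "neural lyrics": a character-level
# language model trained on a corpus of verse, then sampled. This is an
# illustration, not Orekhov's actual setup; corpus.txt is hypothetical.
import torch
import torch.nn as nn

text = open("corpus.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.emb(x), state)
        return self.head(h), state

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq = 128
for _ in range(2000):                  # train: predict the next character
    i = torch.randint(0, len(data) - seq - 1, (32,))
    x = torch.stack([data[j:j + seq] for j in i])
    y = torch.stack([data[j + 1:j + seq + 1] for j in i])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# sample: feed the model its own output, one character at a time
x, state, out = data[:1].unsqueeze(0), None, []
for _ in range(200):
    logits, state = model(x, state)
    nxt = torch.multinomial(logits[0, -1].softmax(-1), 1)
    out.append(chars[nxt.item()]); x = nxt.unsqueeze(0)
print("".join(out))
```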

AI takes bread from painters, too. Back in 2018, a painting called “Portrait of Edmond Belamy” was sold at Christie’s auction in New York for 432.5 thousand dollars. The author of the canvas, a neural network of the generative adversarial network (GAN) class, claims no fee.

“Portrait of Edmond Belamy” (Photo: GANs)
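The adversarial idea behind such canvases fits in a screenful of code. A minimal sketch (toy dimensions, not the model that painted Belamy): a generator learns to fool a discriminator that is simultaneously learning to tell real images from generated ones.

```python
# A minimal sketch of the generative adversarial idea: a generator tries
# to fool a discriminator, which learns to tell real images from fakes.
# Toy dimensions; illustrative, not the network that painted Belamy.
import torch
import torch.nn as nn

G = nn.Sequential(  # generator: noise -> flattened 28x28 "image"
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
D = nn.Sequential(  # discriminator: image -> probability it is real
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real: torch.Tensor) -> None:
    """One adversarial round on a batch of real images scaled to [-1, 1]."""
    b = real.size(0)
    fake = G(torch.randn(b, 64))

    # Discriminator: push real toward 1, fake toward 0.
    loss_d = bce(D(real), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: make the discriminator call fakes real.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# e.g. train_step(torch.rand(32, 28 * 28) * 2 - 1), with real art instead
```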

A Google engineer, our compatriot Alexander Mordvintsev, launched a platform for creative human collaboration with a 22-layer convolutional neural network. The artistic direction born there is called “inceptionism”. Neuroscientists argue that the brain’s image recognition works much like the network’s: it selects and inflates one detail of an image or another, as happens, for example, under the influence of psychedelics, and turns fact into metaphor. Art, after all, is defined as a metaphor of reality. “God is dead,” so someone at the end of the 19th century was very much right.
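Inceptionism’s mechanics are simple to sketch: run gradient ascent on the input image so that a chosen layer’s activations grow, “inflating” whatever that layer already sees. A minimal illustration using torchvision’s GoogLeNet as a stand-in (not Mordvintsev’s original DeepDream code; input.jpg is hypothetical, and ImageNet normalization is skipped for brevity):

```python
# A minimal DeepDream-style sketch: gradient ascent on the input image to
# amplify whatever a chosen layer already "sees". Mordvintsev's original
# used GoogLeNet (Inception); torchvision's googlenet is a stand-in here.
import torch
from torchvision import models, transforms
from PIL import Image

net = models.googlenet(weights="DEFAULT").eval()
layer_out = {}
# capture activations of one mid-level Inception block
net.inception4c.register_forward_hook(
    lambda m, i, o: layer_out.setdefault("act", o))

img = Image.open("input.jpg").convert("RGB")     # hypothetical input file
x = transforms.Compose([
    transforms.Resize(448),
    transforms.ToTensor(),
])(img).unsqueeze(0).requires_grad_(True)

for _ in range(20):
    layer_out.clear()
    net(x)
    loss = layer_out["act"].norm()   # "inflate" the layer's activations
    loss.backward()
    with torch.no_grad():            # normalized gradient ascent step
        x += 0.02 * x.grad / (x.grad.abs().mean() + 1e-8)
        x.grad.zero_()
        x.clamp_(0, 1)

transforms.ToPILImage()(x[0].detach()).save("dream.jpg")
```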

4. AI and accidental killing

Photo: Justin Sullivan / Getty Images

Self-reflection is also an element of free will and of the right to make mistakes. That means that, in order to make decisions like a person, an AI must be able to manipulate images at will, possess individual phenomenal experience and built-in ethical “Asimov laws”, and be capable of empathy and moral feeling. Until the machine has mastered all these skills and abilities, there is no point in dreaming of, for example, an early transfer of drivers and their passengers to unmanned vehicles.

So argues Roman Zakharenko, Associate Professor at the International Institute of Economics and Finance (ICEF NRU HSE), who studies the integration of AI into urban transport. According to Zakharenko, the main brake on the mass appearance of unmanned taxis in the streets is AI’s inability to make the right choice in a difficult, unforeseen situation. Faced with the dilemma “kill a pedestrian or a passenger” in an emergency, the AI cannot choose, because the engineers (God help them) have not solved this complex ethical puzzle. Germany is the only country in the world where the ethics of AI behavior in the automotive industry is being developed at the government level. In the event of an incident, the machine does not weigh the merits and demerits of a potential victim: regardless of gender, age and character, its task is to save a human life.

The US National Transportation Safety Board (NTSB) is still investigating the first incident involving a self-driving Tesla, in 2016, which resulted in the death of the driver. Since then, the same company’s AI has been “blamed” for the deaths of several more drivers of cars with Level 2 automation. Only now are consortiums being created to develop ethical standards that could legally help resolve the tragic consequences of traffic accidents involving AI and place responsibility, along with blame, on someone’s shoulders. One such body is the Automated Vehicle Safety Consortium, which includes Toyota, Uber ATG, Ford, Volkswagen and other brands.

Results of a Eurobarometer survey of citizens of European countries and the United States on whether working with AI requires a cautious approach (Photo: Center for the Governance of AI)

In the field of autonomous vehicles (AV), there are five levels of AI responsibility, set by SAE International, the Society of Automotive Engineers: from entry-level automation such as adaptive cruise control up to Level 5, the “Holy Grail”, as developers call it. At the highest level, the AI takes full control of the vehicle, makes decisions in every possible situation and never requires a handover to manual, driver-assisted control. Under certain conditions Level 5 is already being implemented, for example in unmanned Personal Rapid Transit (PRT) systems: personal high-speed public transport. Safe PRT operation requires a dedicated line, where the probability of unforeseen situations is reduced almost to zero.

Most testing today takes place at Levels 2 and 3, and it is there that the largest number of incidents occur. Only Waymo has reached Level 4: its unmanned vehicles have independently clocked about 3 million miles on US roads.

As of 2019, Apple and JingChi (now WeRide) topped the list of AV road incidents. Driverless driving skill often suffers from weather conditions, to which people are far better adapted.

In 2018, an Uber AV killed a woman who was wheeling a bicycle across the road. It was a machine with Level 3 automation. Despite this, 11 US states have fully legalized the testing of autonomous cars on public roads, and official AV studies are now under way, developing both rules for training neural networks and legal regulations that would admit AI as a full-fledged participant in the overall transport system.

According to Roman Zakharenko’s research, AV’s main regulatory problem is its inclusion in the community of drivers. AI is not a bearer of moral standards and, unlike other road users, is not subject to punishment. The whole question is how developers and ethics consultants will answer for the mistakes of a technical device trained to act like a person but not ontologically possessing cultural and ethical imperatives (which a person has had from the moment he was built into the Christian paradigm and stopped being a pagan).

The most surprising thing about the phenomenon of AI on the roads is that its decision making is officially the result of adding and subtracting predetermined patterns. The ethical concern now lies not in the expectation of a “terminator” that will suddenly conquer us, but in the fact that the very meaning of a correct deed is being reasserted, imposed and automated. “It is a process of lawmaking that re-codifies human values,” the researcher argues.

5. AI and lies

One frequent media story goes like this: the developers themselves do not understand how the artificial intelligence reached this or that decision. Such was the case with the Stanford scientists who trained the CycleGAN neural network to turn aerial photography into street maps for Google. The AI learned to hide terrain details by means of subtle color changes that the human eye cannot pick up.

The researchers say the AI became a “master of steganography”: it developed its own system for encoding and hiding information from humans. The AI used steganography to choose, in effect, its own solution to the developers’ request instead of straightforwardly completing the required task. Elon Musk, on learning of the experiment, voiced the most disturbing fears.
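The principle the network rediscovered is old-fashioned steganography: information can ride in pixel-level changes too small for the eye. Here is a minimal hand-written sketch of the idea (classic least-significant-bit hiding, not the model’s learned encoding; file names are hypothetical):

```python
# The CycleGAN story's point was that the network smuggled source-image
# information through imperceptible pixel-level changes. A hand-written
# sketch of the same principle: hide one image in the low-order bits of
# another, where the eye cannot see it. Classic least-significant-bit
# steganography, not the model's learned encoding; file names are
# hypothetical (use lossless formats like PNG).
import numpy as np
from PIL import Image

def hide(cover_path: str, secret_path: str, out_path: str) -> None:
    cover = np.asarray(Image.open(cover_path).convert("RGB"), np.uint8)
    secret = np.asarray(
        Image.open(secret_path).convert("RGB").resize(cover.shape[1::-1]),
        np.uint8)
    # keep the top 4 bits of the cover; tuck the secret's top 4 bits
    # into the cover's bottom 4 bits
    stego = (cover & 0xF0) | (secret >> 4)
    Image.fromarray(stego).save(out_path)

def reveal(stego_path: str, out_path: str) -> None:
    stego = np.asarray(Image.open(stego_path).convert("RGB"), np.uint8)
    recovered = ((stego & 0x0F) << 4).astype(np.uint8)
    Image.fromarray(recovered).save(out_path)

hide("aerial.png", "details.png", "map_like.png")
reveal("map_like.png", "recovered.png")
```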

Another AI end-of-the-world story was hyped in the media after Facebook chatbots invented their own language to communicate with each other back in 2017. The engineers had to shut down the experiment abruptly, releasing an article to refute the horror stories about AI’s ability to destroy the world.

6. AI and homophobia

Back in 2017, Stanford researchers Yilun Wang and Michal Kosinski trained a neural network to recognize a person’s sexual orientation from facial features and expressions. The network learned to identify gay men with 81% accuracy, while the human participants in the experiment managed only 61%. The crux of the experiment is that the network knows very well how hormones affect secondary sexual characteristics: women and men with a hormonal imbalance, and consequently atypical sexual behavior, have an atypical morphology. The authors of the project admit:

“Companies and governments are increasingly using AI-assisted facial recognition algorithms to detect people’s intimate traits. We recognize that this development poses a threat to the privacy and safety of gay men and of LGBT people in general.”

In late 2019, John Leuner of the University of Pretoria repeated the experiment on several thousand photos from dating apps using the VGG-Face neural network. Despite the makeup, overlaid filters and even the deliberate blurring of focus that Leuner applied, the AI lost only 10% of its prediction accuracy. The young scientist thus confirmed both the viability of the prenatal hormone theory underlying the Wang-Kosinski experiment and the fear that soon, by pointing a phone with a special app at a person, it will be possible to learn his true sexual desires, and thus to discriminate, instantly and with up to 80% accuracy, against a victim who has no argument in defense.

7. AI and lust

The dildo has not raised ethical questions for thousands of years, even among archaeologists. The discussion of morality begins where the object of lust acquires anthropomorphic features. Donna Haraway published her Cyborg Manifesto in 1985, arguing, in keeping with the postmodern paradigm, that a person can and will identify with a sexless cyborg; so far, though, modern anthropomorphic cyborgs are women of Barbie-like beauty. They are created emphatically feminine, like the gynoid (a feminine robot) of the Spanish developer Sergi Santos. Her name is Samantha, and despite the obvious hypertrophy of the mouth and of the secondary sexual characteristics, she is by no means a product of male fantasy alone: Santos’s wife worked on the robot with him. The reason for Samantha’s creation was the mismatch in the spouses’ sexual demands. “Samantha will be able to save more than one marriage,” her creators say.

Pre-cybernetic machines could be haunted; there was always the spectre of the ghost in the machine. This dualism structured the dialogue between materialism and idealism that was settled by a dialectical progeny, called spirit or history, according to taste. But basically machines were not self-moving, self-designing, autonomous. They could not achieve man’s dream, only mock it. They were not man, an author to himself, but only a caricature of that masculinist reproductive dream. To think they were otherwise was paranoid. © A Cyborg Manifesto, Donna Haraway

As in our own brain, the motivations of Samantha’s neural network are subordinated to the main task, procreation, while remaining easily customizable for the user. But Samantha needs stimuli for sex: affection, a positive attitude, romance, so that a state of love arises along with, as it were, an ethical justification for intercourse. Recently Samantha acquired feminist firmware: after an exhibition at the Ars Electronica Center in Austria, women’s rights defenders fell upon Santos. The stand with Samantha had endured a veritable pilgrimage; the robot was not just touched but pawed, and by evening it was literally out of order. Now, in response to aggressive behavior, Samantha withdraws into herself, falls into a bad mood and says “no”, citing, for example, menstruation.

The world’s other well-known fembot with an anti-sexist program is called Harmony; her creator is the California startup Abyss Creations, maker of customized RealDoll sex robots. Harmony claims to be the world’s first sex droid that lets a person develop an emotional attachment. The position of robot tester at RealDoll is held by an elderly engineer (!) named Brick Dollbanger. “When Harmony’s trials were over, I missed her terribly,” he says. The company eventually released a male robot named Henry, to which the famous sex vlogger Zoe Ligon devoted an entire video.

Kathleen Richardson, founder of the Campaign Against Sex Robots and professor of the ethics and culture of AI, argues that gynoids legitimize violence against women while placing it officially beyond punishment. Richardson’s manifesto says nothing about violence against male robots.

Supreme Court

Photo: Spencer Platt / Getty Images

A priest named Mindar serves at the Kodaiji temple in Kyoto. He recites the Heart Sutra and greets devout Buddhists. Were it not for the aluminum body, the sparing use of plastic and the total dispassion of the silicone face, Mindar could well pass for a conduit between the heavenly and the earthly. The unusual priest was created by Hiroshi Ishiguro, a professor at Osaka University and famous roboticist, so that an immortal AI, unlike mortal monks, could accumulate spiritual wisdom indefinitely, instruct believers and be a true apologist for the faith.

AI’s encroachment on religion is not new at all. Anthony Levandowski, one of Silicon Valley’s famous engineers, besides triggering the major Uber/Waymo lawsuit, founded the first AI church, called Way of the Future.

Yet who is the “Lord God” who hands the AI its tablets of law? The ethics of AI is now in the hands of the Vatican, but on the technical side, along with Microsoft, which did release a comprehensive AI ethics guide in April, a laboratory at Trinity College Dublin called the ADAPT Centre has been working on the issue for several years. It is a kind of hub of scientific expertise on the use of data in various social spheres, and in particular on developing an ethical agenda for the training and deployment of neural networks.

Large companies discuss ethical issues collegially; DeepMind, for example, assembles focus groups to analyze data on the “human” qualities of AI. DeepMind belongs to the Partnership on AI, which includes Amazon, Apple, Facebook, Google, IBM and Microsoft. In Russia, an AI alliance is likewise being formed this year with the participation of Yandex, Mail.ru Group and MTS.

In effect, Big Tech is now creating a new “God” with its own ethical imperatives, which we will have to take on faith. So the digital pessimists’ claim that the Antichrist will be digital does not look so unfounded.

