How to build trust between humans and artificial intelligence

As artificial intelligence takes on more and more decisions and roles in society, distrust of it grows too: how can we be sure that it will reflect our values and will not turn against us?

The “humanity” of artificial intelligence depends on people

Advanced technologies can not only make life easier but also pose a serious threat: they could replace people in many industries, become far smarter than we are, and begin to decide for themselves what is “good” and “bad” according to internal criteria that may not match human moral principles.

To avoid this, we need a new approach to the relationship between people and artificial intelligence, one based on empathy and trust.

As Rana el Kaliouby, CEO and co-founder of Affectiva, has said, that trust must be mutual. Why? Consider semi-autonomous vehicles. These cars still assume that the driver is ready to take back control of the wheel if the AI cannot safely handle a situation on its own. But how can the system be sure that the driver is not busy, distracted, or drowsy at that moment? AI systems need to genuinely understand our emotional and cognitive state in the moment in order to make the right decisions. Empathy, according to el Kaliouby, is thus the key to understanding and trust between AI and humans. Emotional intelligence also needs to be cultivated within AI teams themselves, especially among the data scientists who build the models: the “humanity” of artificial intelligence depends on the people on the other side of the screen.
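
To make the idea concrete, here is a minimal sketch of such a handover check. Everything in it is hypothetical: the DriverState fields, the thresholds, and the scores themselves stand in for whatever a real camera-based driver-monitoring model would actually provide.

```python
from dataclasses import dataclass

# Hypothetical driver-monitoring output; a real system would derive these
# scores from camera-based gaze, head-pose, and blink analysis.
@dataclass
class DriverState:
    attention: float   # 0.0 (fully distracted) .. 1.0 (eyes on the road)
    drowsiness: float  # 0.0 (alert) .. 1.0 (asleep)

ATTENTION_MIN = 0.7   # illustrative thresholds, not tuned values
DROWSINESS_MAX = 0.3

def can_hand_over_control(state: DriverState) -> bool:
    """Return True only if the driver looks ready to take the wheel."""
    return state.attention >= ATTENTION_MIN and state.drowsiness <= DROWSINESS_MAX

# If the driver is not ready, the vehicle should fall back to a safe
# maneuver (slow down, pull over) instead of forcing a handover.
if not can_hand_over_control(DriverState(attention=0.4, drowsiness=0.6)):
    print("Driver not ready: initiating safe fallback instead of handover")
```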

Unbiased artificial intelligence

Meanwhile, AI can in some cases exhibit bias. Most often this stems from the data used to train it: if that data was in any way discriminatory toward people of certain races, genders, or nationalities, the AI will be too. Development teams should therefore be made up of a diverse range of people: as Grady Booch, chief scientist for software engineering at IBM Research, has stated, one reason AI still exhibits bias is that the people who build such systems are still mostly “young white males.”
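
One common way to surface this kind of bias is a simple fairness audit of model outcomes across groups. The sketch below computes a demographic-parity gap (the difference in positive-prediction rates between two groups); the predictions and group labels are invented for illustration.

```python
# Minimal fairness audit: compare positive-outcome rates across groups.
# The predictions and group labels here are made up for illustration.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]          # model decisions
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(preds, grps, group):
    selected = [p for p, g in zip(preds, grps) if g == group]
    return sum(selected) / len(selected)

rate_a = positive_rate(predictions, groups, "a")
rate_b = positive_rate(predictions, groups, "b")

# Demographic-parity gap: 0.0 means both groups receive positive outcomes
# at the same rate; a large gap is a signal to inspect the training data.
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```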

AI systems should benefit people

To do this, they need to be aligned with human principles and values. Many organizations are working on standards intended to guarantee that AI is used for good purposes, among them MIT and Harvard, the Partnership on AI, and the European Commission, but there is still no single, universally recognized ethical framework for AI. Practitioners are experimenting with several methods for instilling the desired principles into systems, and reinforcement learning has shown the strongest results so far: it allows systems to observe how we behave in different situations and learn to make decisions in accordance with our values.
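
As a rough illustration of the idea, here is a toy reinforcement-learning loop in which the reward signal stands in for human approval of the agent’s choices. The actions and the human_feedback function are invented; real value-alignment work (for example, learning from human feedback) is far more involved. The point is only the feedback loop: behavior that people approve of is gradually reinforced.

```python
import random

# Toy setup: the agent repeatedly picks one of two actions, and a stand-in
# "human" rewards the action that matches human preferences.
ACTIONS = ["helpful", "harmful"]

def human_feedback(action: str) -> float:
    """Hypothetical reward: people approve of helpful behavior."""
    return 1.0 if action == "helpful" else -1.0

q_values = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate

for step in range(500):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    reward = human_feedback(action)
    # Move the action's value estimate toward the observed reward.
    q_values[action] += alpha * (reward - q_values[action])

print(q_values)  # the "helpful" action ends up with the higher value
```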

Relationships built on trust

Industry leaders point out that transparency is a key factor in building trust between humans and AI. End users often do not understand why a system made a particular decision, which raises doubts about the accuracy of its results. To trust the technology, people need to understand how the system reaches its conclusions and makes its recommendations.
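
One simple form of such an explanation is reporting how much each input contributed to a decision. The sketch below does this for a linear scoring model, where the contribution of each feature is just its weight times its value; the feature names, weights, and applicant data are invented for illustration.

```python
# For a linear model, each feature's contribution to a score is simply
# weight * value, which makes the decision directly explainable.
weights   = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.9, "years_employed": 1.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"decision score: {score:.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>15}: {c:+.2f}")  # ranked by influence on the outcome
```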

Developers should also be open about what a system actually does when it interacts with us: collecting information about us, watching our facial expressions through the camera to read our emotions, and so on. Users need to be informed of this, and they should be able to disable some of these features.
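
In practice this often comes down to explicit, user-controllable settings. Below is a minimal sketch of such a consent model; the feature names and the FeatureConsent class are hypothetical, not any particular product’s API.

```python
from dataclasses import dataclass, field

# Hypothetical per-user consent settings: every data-collecting feature
# is off by default and can be toggled individually.
@dataclass
class FeatureConsent:
    settings: dict = field(default_factory=lambda: {
        "usage_analytics": False,
        "emotion_detection": False,  # camera-based emotion reading
        "voice_recording": False,
    })

    def enable(self, feature: str) -> None:
        self.settings[feature] = True

    def allowed(self, feature: str) -> bool:
        return self.settings.get(feature, False)

consent = FeatureConsent()
consent.enable("usage_analytics")

# The application checks consent before activating any such feature.
if consent.allowed("emotion_detection"):
    print("starting camera-based emotion detection")
else:
    print("emotion detection disabled by the user")
```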

Another important principle that helps establish human confidence in artificial intelligence is reproducibility: we need to be able to trace every result an AI system generates. A result can be affected by algorithms, artifacts, system parameters, different code versions, and different datasets, so ensuring reproducibility can be extremely challenging.
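
A common first step toward this kind of traceability is recording, alongside every result, the exact ingredients that produced it. The sketch below logs a random seed, a dataset hash, and a code version; the fields and the way they are gathered are illustrative, not a standard.

```python
import hashlib
import json
import random

def run_record(dataset: bytes, code_version: str, seed: int) -> dict:
    """Capture everything needed to re-run and trace a result later."""
    random.seed(seed)         # fix the randomness so the run can be repeated
    result = random.random()  # stand-in for the actual model output
    return {
        "seed": seed,
        "dataset_sha256": hashlib.sha256(dataset).hexdigest(),
        "code_version": code_version,
        "result": result,
    }

# Illustrative values; in practice the code version would come from the
# VCS (e.g., a git commit hash) and the dataset from real training data.
record = run_record(b"training data", code_version="v1.4.2", seed=42)
print(json.dumps(record, indent=2))

# Re-running with the same seed, data, and code reproduces the result.
assert run_record(b"training data", "v1.4.2", 42)["result"] == record["result"]
```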

Some experts emphasize that to reduce people’s mistrust of the technology, it is necessary to convey the idea that AI is designed not to replace humans but to help them.

“AI will not be successful unless it reduces people’s stress, calms them down, and allows them to enjoy the experience,” John Roese, president and CTO of products and operations at Dell Technologies, told Fast Company.

We need to prepare people for the fact that they will be using AI’s capabilities and should not be afraid of it: the technology will take over routine work and help us focus on more important tasks.

And finally, we must not forget that people stand behind all of this. Companies that develop and deploy AI should bring the scientists and technologists behind these processes out of the shadows more often, writes Fast Company. The image of real specialists who make sure the technology works for people’s benefit and who are accountable for the results will help “humanize” faceless algorithms and make them less frightening.

