The technology for creating deepfakes used to be available only to experts in artificial intelligence and special effects. The latest software and the spread of machine learning have made it much easier to create convincing, and therefore dangerous, fake videos.
The word deepfake appeared several years ago. It combines two concepts: deep learning (the training of deep neural networks) and fake. Deepfake technology is based on AI synthesis of human images: the algorithm takes many photos of a person with different facial expressions, learns how that person looks and moves, and generates video from them, according to the American edition of Forbes.
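To make the "deep learning" part of the word concrete, here is a minimal, hypothetical sketch (in PyTorch) of the kind of component classic face-swap tools are built from: a small convolutional autoencoder that learns to reconstruct one person's face from many photos. The folder name, image size, and network dimensions are illustrative assumptions, not a real tool's settings.

# Minimal sketch of the autoencoder idea behind classic face-swap deepfakes.
# Assumptions: 64x64 RGB face crops under ./faces_person_a/<class>/ (hypothetical
# path), PyTorch and torchvision installed. An illustration, not a real tool.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compresses a 64x64 face into a small latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
        )
        # Decoder: reconstructs the face of this particular person from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(data_dir="faces_person_a", epochs=5):
    tfm = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
    loader = DataLoader(datasets.ImageFolder(data_dir, tfm), batch_size=32, shuffle=True)
    model = FaceAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):
        for images, _ in loader:
            recon = model(images)
            loss = loss_fn(recon, images)  # learn to reproduce this person's face
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: reconstruction loss {loss.item():.4f}")
    return model

if __name__ == "__main__":
    train()

In the classic face-swap setup, two such networks are trained with a shared encoder and separate decoders, one per person; pushing person A's face through person B's decoder produces the swapped face, and repeating this frame by frame yields the fake video.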
With the spread of deepfakes, there have been cases of public figures, whose images are abundant in the public domain, being discredited. For example, in the spring of 2019 a doctored video of Nancy Pelosi, Speaker of the US House of Representatives, was published on the Web. The author of the video slowed down and altered Pelosi's speech so that she appeared to slur her words, and users who watched the clip concluded that the politician was drunk. The situation turned into a loud scandal, and only after some time was it shown that the footage had been manipulated.
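As a hypothetical sketch of how little machinery such an edit requires (not the actual method used on that clip), the Python snippet below slows a local video file with ffmpeg; the file names and the 0.75 speed factor are illustrative assumptions.

# Hedged sketch: a simple slowdown of the kind described above.
# Assumes ffmpeg is installed and input.mp4 is a local file (hypothetical names).
import subprocess

def slow_down(src="input.mp4", dst="slowed.mp4", factor=0.75):
    """Slow video and audio to `factor` of the original speed."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            # setpts stretches the video timestamps; atempo slows the audio
            # without changing its pitch.
            "-filter_complex",
            f"[0:v]setpts=PTS/{factor}[v];[0:a]atempo={factor}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    slow_down()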
This fake was produced quite simply. But there are also more complex algorithms: they do not just change the timbre of a voice or slow down speech, they generate videos in which a person does something they have never done or says something they have never said. Over time, deepfakes will become more and more realistic, Forbes writes. The technology already has the potential to influence politics, and in the future such videos may even become a threat to national security.