Deepfakes: How Content Copyright Is Changing

The number of deepfakes on the internet is rising. Is there any technology to detect them? And how should the use of publicly available personal data be handled – is this area regulated by copyright law at all?

Mass-market deepfakes: a trend toward synthetic content

Deepfake technologies are developing faster than both the tools to detect them and the legal framework to regulate their creation. AI will soon reach a point where it is almost impossible to distinguish genuine recordings from audio and video of people saying things they never said.

A deepfake (a blend of the English words “deep learning” and “fake”) is synthetic content in which a person in an existing photo, audio, or video is replaced with someone else. Deepfakes can use any format – your photo, your video, or your voice. They are most commonly used in advertisements, porn films, revenge porn, fake news, and financial fraud.
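The classic face-swap technique behind many video deepfakes trains one shared encoder (which learns pose and expression common to both people) and two identity-specific decoders; the swap happens by decoding one person's encoded face with the other person's decoder. The sketch below illustrates only that architecture with toy dimensions and untrained random weights – the layer sizes and function names are illustrative assumptions, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random-weight linear layer (a stand-in for a trained network)."""
    return rng.normal(scale=0.1, size=(n_in, n_out))

# One SHARED encoder, TWO identity-specific decoders (toy sizes).
FACE_DIM, LATENT_DIM = 64 * 64, 128

encoder   = layer(FACE_DIM, LATENT_DIM)   # shared: captures pose/expression
decoder_a = layer(LATENT_DIM, FACE_DIM)   # reconstructs identity A
decoder_b = layer(LATENT_DIM, FACE_DIM)   # reconstructs identity B

def encode(face):
    return np.tanh(face @ encoder)

def decode(latent, decoder):
    return latent @ decoder

# The "swap": encode a face of person A, but decode it with B's decoder,
# so B's identity is rendered with A's pose and expression.
face_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(face_a), decoder_b)

assert swapped.shape == (FACE_DIM,)
```

In a real system the encoder and decoders are deep convolutional networks trained on thousands of face crops of each person; the swap step, however, is exactly this decoder substitution.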

Deepfake technology is not an absolute evil; it could revolutionize the film industry, for example. With it, you can artificially rejuvenate or age actors, make body doubles look more like the actors they stand in for, synchronize lip movements when dubbing a translation, or even shoot a film with the image of an actor who has suddenly died or withdrawn from production. As a rule, such uses are legally impossible without the consent of the actors whose images appear in the final material – or of their heirs, if the person has died.

“There are already many projects for the commercial use of deepfakes in the world (Synthesia, WPP, Rosebud, Rephrase.ai, Canny AI). Deepfake technology opens up the possibility of creating completely synthetic identities – images and voices of people who never existed. The use of such images almost completely removes the dependence of business on models and actors, including the need to “clear” copyright and related rights and sign various releases,” believes Vadim Perevalov, Senior Associate at the international law firm Baker McKenzie.

Audio fakes: is there copyright in a voice?

Voice deepfakes are the biggest problem, because a person's voice is not recognized as property in any country in the world – unless the person's name has been registered as a trademark.

In 2020, however, a precedent was set. The YouTube channel Vocal Synthesis makes humorous deepfakes using the voices of politicians and celebrities. The channel posted several generated recordings of the American rap star Jay-Z, with no commercial benefit and with every video clearly labeled as speech synthesis.

Nevertheless, Roc Nation, the entertainment company owned by Jay-Z, filed a copyright infringement complaint demanding that the videos be removed, arguing that AI had been used unlawfully to imitate the musician's voice.

Only two of the four Jay-Z videos were removed; it was acknowledged that the resulting audio was a derivative work that had nothing in common with any of the rapper's songs. In the US, not every commercial use of someone else's voice is against the law.

“A promising direction for deepfakes is using the images or voices of celebrities to produce content without their participation. In Russia, for example, using someone else's speech to create a ‘similar’ voice is not explicitly prohibited by law. Voice imitation – by telephone pranksters, say – is not a violation in itself either; it depends rather on the content of the prank, which may be criminal. And the use of deepfake technologies in advertising must not mislead consumers into believing, for example, that a famous person endorses a product. Beyond that, it remains debatable whether creating such a ‘voice cast’ violates copyright and the related rights in phonograms and performances,” says Vadim Perevalov.

In March 2019, an audio-fake incident occurred in the UK: AI-based software was used to imitate the voice of the chief executive of a British energy company and instruct an employee to transfer €220,000 to third parties. The employee believed he was talking to his boss on the phone and sent the money to the scammers. Investigators have not identified the suspects.

Copyright protection of deepfake sources

The main problem with deepfakes is that no country in the world has yet created legislation that reaches both the creators of deepfakes and the procedure for removing them. Copyright law could serve as an effective means of regulating deepfakes, but it needs to be updated to do so. The question of protecting the rights of the deceased (film actors, for example) with respect to the use of their voice and image also remains open.

  • Russian legislation

Under Russian law, a deepfake should probably be viewed through the prism of a derivative work, in which case using the original work without the consent of its copyright holder would be illegal.

“The exclusive right to the result of intellectual activity arises initially with the author (or co-authors), and can then be transferred by the author to another person under an agreement (Article 1228 of the Civil Code of the Russian Federation). Russian law also provides that no registration or other formalities are required for the creation, exercise, and protection of copyright. This is the main approach of the modern international system of copyright protection, which also allows authorship to be confirmed via the presumption of authorship (Article 15 of the Berne Convention for the Protection of Literary and Artistic Works),” says Dmitry Ignatenko, Head of Legal at Rubytech.

If we consider authorship of a deepfake as a result of intellectual activity created by a program, then under Russian law only a citizen whose creative labor produced that result can be its author (Article 1228 of the Civil Code of the Russian Federation). And the right holder can only be a natural or legal person holding the exclusive right to the result of intellectual activity or to a means of individualization (Article 1229 of the Civil Code of the Russian Federation). So a natural or legal person must stand behind any machine; otherwise no object of copyright is created.

  • Social media policies

Technologies for improving deepfakes are developing faster than the laws in this area. Twitter, Facebook, and China's TikTok have each tried to regulate the distribution of deepfakes in their own way.

Twitter introduced rules on deepfakes and media manipulation that focus mainly on flagging tweets and warning users about altered material rather than deleting it. According to the company, tweets containing falsified or misleading material will only be removed if they could cause harm. The definition of harm includes any threat to the privacy of an individual or group, or to their right to express themselves freely. This means the policy covers stalking and harassment, voter suppression or intimidation, and content containing phrases designed to silence someone.

Facebook announced a policy to remove deepfakes in early 2020. Posts will be deleted if they meet the following criteria:

  • the content has been edited or synthesized (other than improving clarity or quality) in a way that is not obvious to the average person and is likely to mislead someone into thinking that the subject of the video said words that they did not actually say;
  • it is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear authentic.

TikTok’s new policy prohibits any synthetic or manipulated content that misleads users, distorts the truth about events, and causes harm. Rather than banning a specific AI-based technology, the policy aims to discourage the use of any kind of deceptive video to vilify political opponents online.

The main problem for social networks has been the lack of reliable technology for detecting deepfakes. In 2019, Facebook, together with Microsoft and other partners, launched the Deepfake Detection Challenge. The most successful entry was a model by the Belarusian developer Selim Seferbekov, which achieved an accuracy of 65.18% on the test datasets. Third place went to a model from the Russian company NTechLab.
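Detection models of this kind typically score individual video frames and then aggregate those scores into a single video-level verdict, from which metrics such as accuracy are computed. The sketch below shows that evaluation pattern on toy data; the aggregation-by-averaging and the 0.5 threshold are common simple choices, not a description of any specific challenge entry:

```python
import numpy as np

def video_score(frame_probs):
    """Aggregate per-frame 'fake' probabilities into one video-level score.
    Averaging is a simple, common choice; real systems vary."""
    return float(np.mean(frame_probs))

def accuracy(scores, labels, threshold=0.5):
    """Fraction of videos classified correctly at a given threshold.
    labels: 1 = fake, 0 = real."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy data: per-frame detector outputs for four videos.
videos = [
    [0.9, 0.8, 0.95],  # clearly fake
    [0.1, 0.2, 0.05],  # clearly real
    [0.6, 0.4, 0.55],  # borderline, leans fake
    [0.3, 0.2, 0.4],   # real-ish
]
labels = [1, 0, 1, 0]

scores = [video_score(v) for v in videos]
print(accuracy(scores, labels))  # 1.0 on this toy data
```

A figure like the 65.18% above is the same computation run over thousands of held-out videos, where borderline cases dominate and drag the number down.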

Tech leaders — Yandex, TikTok, Microsoft, IBM, Kaspersky Lab, Reface, FaceApp, Ntechlab, and others — refused to comment on deepfake and copyright trends.


