Analyzing the Commoditization of Deepfakes

By: Robert Volkert, VP of Threat Investigation at Nisos, and Henry Ajder, Head of Threat Intelligence at Deeptracelabs

February 27, 2020

Introduction

Since deepfakes first emerged in December 2017, the phenomenon has evolved both in terms of its technological sophistication and online presence. Both of these aspects of the development of deepfakes have been subject to intense scrutiny, with most commentators focusing on two key concerns:

1) The technology for creating deepfakes is becoming increasingly accessible and commoditized.

2) The threats posed by nefarious and criminal uses of deepfakes are becoming increasingly tangible.

In this blog, we present the key findings from our research investigating exactly how deepfakes are being created, shared, and sold online. We also assess the extent to which deepfakes are currently being used for nefarious and criminal purposes, as well as how new deepfake-specific laws apply to these use cases. This research aims to provide insights into the different ways deepfakes are currently being used online, in order to help understand the legitimate and illicit economies which have developed around this technology.

How Are Deepfakes Being Commoditized?

The term deepfakes was originally coined by Reddit user “u/deepfakes,” who created a subreddit page of the same name in November 2017. At the time, the term referred exclusively to the process of swapping celebrities’ faces into pornographic videos using open source deep learning software. However, it is now commonly used to refer to various forms of AI-generated synthetic media, including synthetic voice audio and generated text.

Following the shutdown of the original deepfakes subreddit in February 2018, a large number of deepfake communities, tools, and services have emerged. Most of this activity migrated down two general paths – research-focused outlets such as GitHub, Discord, and Reddit (for hobbyist experimentation and information sharing) and “underground” outlets such as Voat, Telegram, and a variety of other deep web closed forums (for experimentation and discussion focused on pornographic applications). Today, the vast majority of deepfake activity online continues to focus on the original technique of face swapping in pornographic videos. However, we also identified several highly active communities of hobbyists and deepfake YouTubers who are creating “safe for work (SFW)” deepfakes, both for research and entertainment purposes.

Our research into the commoditization of deepfakes focused primarily on understanding how deepfakes are being sold and shared online. This involved analyzing hundreds of deepfake marketplaces, forums, and chat rooms across the surface, deep, and dark web. From this research, we identified three main approaches to creating and selling deepfakes: open source tools, service platforms, and marketplace sellers.

Open Source Tools

The primary driving force behind deepfakes’ commoditization is open source software similar to the kind used by the creator of the original deepfakes subreddit. This software is public and free to download, with most deepfake projects hosted on the popular open source platform GitHub. While the software is not monetized at the download source, many project creators request user donations via Patreon, PayPal, or Bitcoin. The largest and most popular of these projects focus on face swapping capabilities, with a much smaller subset providing software for generating rudimentary synthetic voice audio.

Most of these open source tools require some knowledge of programming and a powerful graphics processor to operate effectively, making them inaccessible to many amateur users. However, we found that several of the more popular tools are accompanied by detailed tutorials and discussion groups on chat platforms such as Discord, where amateur users can request assistance and advice on how to create deepfakes using the tools. These open source tools will continue to serve as the foundation of the commoditization of deepfakes, with service platforms and marketplace sellers relying on them to create deepfakes for a fee.
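For readers unfamiliar with what these tools do internally, the sketch below illustrates the shared-encoder, dual-decoder autoencoder design that underpins most open source face swapping software. It is a minimal PyTorch illustration, not any specific tool’s implementation; the layer sizes, the 64x64 input resolution, and the training step are assumptions made for brevity.

# Minimal sketch of the autoencoder face-swap technique: one shared encoder
# learns a common facial representation, and each identity gets its own
# decoder. The "swap" encodes a face from person A and decodes it with
# person B's decoder. Dimensions here are illustrative assumptions only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # 64x64 input -> 8x8 feature map
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training alternates between identities: reconstruct A's faces through
# decoder_a and B's faces through decoder_b, so the shared encoder learns
# identity-agnostic facial structure (pose, expression, lighting).
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a cropped, aligned face
loss = nn.L1Loss()(decoder_a(encoder(face_a)), face_a)

# The face swap itself: encode a frame of person A, decode with B's decoder.
swapped = decoder_b(encoder(face_a))

This design also helps explain the hardware and skill barrier noted above: every new target identity requires training a fresh decoder on hundreds or thousands of aligned face crops, which is slow without a capable graphics processor.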

Service Platforms

We characterize service platforms as websites that appear to the user to automate the process of creating deepfakes through a graphical user interface (GUI). With service platforms, users are typically required to upload training data (photos or videos) of their chosen subjects and receive the deepfake video once it has been processed. In some cases, it is unclear if elements of the process are automated, or whether the website’s owner or employees are manually operating open source software with the training data uploaded through the GUI. Regardless, service platforms are presented as professional online businesses, where the user pays to outsource the deepfake creation process. 
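To make this workflow concrete, the following is a hypothetical client-side sketch of the upload-and-wait interaction these platforms present. The host, endpoint paths, and JSON fields are invented for illustration and do not document any real platform’s API.

# Hypothetical client-side view of a service platform workflow: upload
# training data, poll until processing finishes, download the result.
# The host, endpoints, and JSON fields below are invented placeholders.
import time
import requests

BASE = "https://service-platform.example/api"  # placeholder host

# 1. Upload training data (photos or videos) of the chosen subjects.
with open("training_data.zip", "rb") as f:
    job = requests.post(f"{BASE}/jobs", files={"training_data": f}).json()

# 2. Wait while the platform processes the job, whether automated or,
#    as noted above, possibly operated manually behind the GUI.
while requests.get(f"{BASE}/jobs/{job['id']}").json()["state"] != "done":
    time.sleep(30)

# 3. Download the finished deepfake video.
with open("output.mp4", "wb") as f:
    f.write(requests.get(f"{BASE}/jobs/{job['id']}/result").content)

From the customer’s perspective, the entire creation process described earlier is hidden behind this simple upload-and-download exchange, which is precisely what makes such platforms accessible to non-technical users.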

We found that several of these service platforms are explicitly advertised in the context of deepfake pornography, while others cloned a popular app for synthetically removing clothes from pictures of women. The service platforms that did not explicitly advertise deepfake pornography specifically prohibited such content in their user terms, along with numerous other uses such as impersonating or harassing others. The platform owners we identified were geographically dispersed around the world, including a likely Japan-based university professor, a Russia-based technology innovator, and a China-based technology hobbyist. Given that these owners operated on the open web and made little effort to obscure their identities, our research suggests that some of these platforms exist to showcase deepfake technology as much as to generate business revenue.

Marketplace Sellers

Marketplace sellers are private individuals who advertise custom-made deepfakes on forums or online marketplaces. This category can be further defined in terms of “Safe For Work (SFW)” and “Not Safe For Work (NSFW)” marketplace sellers depending on the deepfake content they advertise.

The SFW sellers we identified are mostly YouTubers and hobbyists who sell deepfakes on SFW forums and online marketplaces such as Fiverr. The majority of these sellers clearly state that they will not make pornographic, or what they consider to be malicious, content. Conversely, NSFW marketplace sellers are typically found on message board websites such as Voat and 4chan, as well as messaging apps such as Telegram, and openly advertise their services for creating deepfake pornography. This illicit activity is also prominent on the forums of deepfake pornography websites, where some creators share their videos to attract new customers. We found that the pricing of marketplace services varied greatly, and many sellers moved discussions to a platform’s private messaging system to negotiate the terms of sale.

For Now, the Dark Web is Not a Hotspot for Selling Deepfakes

To understand the state of deepfake activity on the deep and dark web, we examined content from over 200 marketplaces, forums, and communications channels. We primarily examined globally marketed content in English, as “deepfake” is the most commonly advertised term and lacks an equivalent in most other languages. Through this research, we identified one notable dark web entity advertising deepfakes for a fee, in addition to creating deep “nudes” for a lower cost. However, this was an outlier case, with our research indicating a significant lack of sellers on these underground sites overall. From these findings, we conclude that the demand for custom deepfake videos on the dark web is currently very low.

This conclusion was reinforced by our analysis of deepfake discussions on dark web forums. The vast majority of these discussions centered on instructing users in creating deepfakes themselves and on where software or tutorials could be obtained. Dark web marketplaces exist precisely to sell goods and services, and the lack of a deepfake presence on these sites indicates that the demand has yet to materialize. It is possible that deepfake video creation services are being sold entirely on private and encrypted channels, but this would not be conducive to large and recurring profits, as it offers few options for marketing to a wider audience.

The Vast Majority of Deepfakes Being Created and Sold are Pornographic

Our findings confirm previous Deeptrace research that found the vast majority of deepfake activity on the surface and dark web is pornographic. Given deepfakes’ origins in the pornographic face swapping of celebrities, it is unsurprising (but no less disturbing) that an extensive ecosystem of communities, services, and tools has continued to grow around this use of the technology.

We found that the majority of deepfake activity centers on dedicated deepfake pornography platforms, where communities of users have created and uploaded thousands of deepfake pornography videos, typically using popular open source tools. These videos consistently attract millions of views, with some of the websites featuring polls where users can vote for who they want to see targeted next. Most of these platforms also feature an affiliated forum or chat group where thousands of active members discuss, request, and sell deepfake pornography. This includes helping users learn to create their own videos by sharing pre-trained models and face sets for generating deepfake pornography of specific celebrities. The presence of banner advertising, membership fees, and donation buttons also suggests that the owners are generating revenue from the millions of video views and the consistent traffic the websites attract.

In addition to dedicated deepfake pornography platforms, we also identified a range of independent deepfake pornography forums on encrypted messaging apps, message board websites, and dark web locations. These forums featured similar activity to dedicated deepfake pornography websites, but typically showed increased activity regarding the creation and solicitation of deepfake pornography by private individuals. Some of these forums used Russian or Chinese as the default posting language, illustrating global demand for and interest in deepfake pornography.

We also found that one of the most popular open source deepfake creation tools is explicitly advertised as the best way to create deepfake pornography. This included the creator linking directly to one of the most active deepfake pornography websites on the tool’s GitHub page and engaging with users on the website’s forum. The tool appears to have been developed by “forking,” or misappropriating, the code of another open source face swapping project, and the creator actively requests Bitcoin donations from the tool’s users.

Aside from Nonconsensual Pornography, the Current Public Market Demand for Criminal Deepfakes is Low

Based on our findings both on the surface and dark web, we assess that deepfakes are not being widely bought or sold for criminal or disinformation purposes as of early February 2020. One possible reason is that at the current stage of the commoditization of deepfakes, the outputs generated by open source tools are low quality and could not be effectively deployed for criminal purposes. Techniques being developed by academic and industry leaders have arguably reached the required quality for criminal uses, but these techniques are not currently publicly accessible and will take time to be translated into stable, user-friendly implementations. As a result, traditional techniques for conducting social engineering and election interference are likely more viable options at this time.

Additionally, publicly advertising or buying criminal deepfake services would likely generate increased visibility and law enforcement interest that criminal actors seek to avoid. While there are still numerous sellers and buyers of nonconsensual deepfake pornography, many engaging in this activity may not view it as criminal, even if they are aware of its unethical nature.

These two explanations are not mutually exclusive, and it is likely that our findings result from a combination of the two. Our research suggests that any current criminal activity surrounding deepfakes either is not being conducted in the public domain or is very well hidden. Given the reasons outlined above, we believe the former possibility is more likely.

How Would New U.S. State Laws be Applied to Our Findings?

2019 saw three U.S. states – Virginia, Texas, and California – pass the first non-federal laws that impose criminal penalties on deepfakes created with various intents. Virginia became the first state to make the distribution of nonconsensual, “falsely created,” explicit images and videos (deepfakes) a Class 1 misdemeanor. In September, Texas passed legislation prohibiting the “creation or distribution of deepfake videos intended to harm candidates for public office or influence elections,” classifying violations as a Class A misdemeanor. Texas defines a deepfake video as one “created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.” Finally, California passed two laws in September that collectively allow victims of nonconsensual deepfake pornography to sue for damages and give candidates for public office the ability to sue organizations that distribute, “with actual malice,” election-related deepfakes without warning labels near Election Day.

While our research did not find any distributors of deepfake videos that would explicitly violate the Texas law or platform policies on political election manipulation, we did identify numerous entities selling or distributing deepfake videos of nonconsensual pornography that would likely be punishable under the California and Virginia laws. The vast majority of pornographic deepfakes involved face swapping (mostly of celebrities) and were nonconsensual. We also identified one dark web site where members recommended child actors for deepfake child pornography purposes; this site, which was referred to law enforcement, presented a far more straightforward case of criminal intent. However, most of the deepfake videos we identified appeared to involve adult subjects.

In addition to state legislation, three of the largest social media companies announced proactive policies to counter malicious deepfakes on their platforms. In January 2020, Facebook announced a new manipulated media policy, stating that it would remove media that “has been edited or synthesized… in ways that aren’t apparent to an average person,” or that is the “product of artificial intelligence or machine learning” and appears authentic – conditions that specifically target deepfakes. In early February 2020, Twitter announced a similar policy on manipulated media, and YouTube instituted a ban on “content that has been technically manipulated or doctored in a way that misleads users.”

However, it is unclear how well these policies, in particular Facebook’s, will account for cases where deepfakes are used for satirical purposes, or “shallowfake” cases where audiovisual media has been crudely manipulated using a variety of manual ‘click by click’ editing techniques.

Conclusions

The clear challenge for enforcing deepfake legislation is identifying and attributing the deepfake sellers and distributors themselves. While most of the surface web site administrators we identified did not appear to be obfuscating their identities, it was impossible to determine whether a customer had the consent of the individuals depicted in the videos, or to establish intent (both requirements of the U.S. state laws discussed above). Although illicit activity by deep and dark web actors was more easily identifiable given their distribution of nonconsensual pornographic videos, all of these actors obscured their identities to varying degrees and would require additional investigation or subpoena actions to fully attribute.

We anticipate that as deepfakes become more believable and the technology continues to proliferate, they will increasingly be used for criminal purposes. For example, instead of using a phone call in a social engineering cyber attack, an attacker could deploy deepfake synthetic voice audio in real time, or present it as evidence of a company executive’s decision. If the end result of these criminal operations were extortion or theft, then the laws above would likely apply. Additionally, we anticipate that more sophisticated nation-state actors will use deepfakes for information operations and espionage. However, these use cases might fall under federal cybersecurity legislation designed to protect election integrity or other issues of national security.


Robert Volkert, VP of Threat Investigation at Nisos

Henry Ajder, Head of Threat Intelligence at Deeptracelabs

Suggested Citation: Robert Volkert & Henry Ajder, Analyzing the Commoditization of Deepfakes, N.Y.U. J. Legis. & Pub. Pol’y Quorum (2020).
