Episode 1: An Introduction to Deepfakes

April 9, 2020

Hosted by Alex Rose (JD ‘20); Produced by Rachel Cohen (JD ‘20), Alex Rose (JD ‘20), and Dani Schulkin (JD ‘20); Edited by Michael Quinn (JD ‘20); Published by Katie McFarlane (JD ‘20)

Alex Rose, NYU Law ‘20, sits down with Britt Paris, a professor and critical informatics scholar at Rutgers University, to discuss the topic of “deepfakes” in light of the DEEP FAKES Accountability Act currently before Congress.

Image and bio c/o Britt Paris and Rutgers University

Britt S. Paris is an Assistant Professor of Library and Information Science at Rutgers University. She is a critical informatics scholar using methods from discourse analysis and qualitative social science to study how groups build, use, and understand information systems according to their values, and how these systems influence evidentiary standards and political action.  

Major Takeaways

Editor’s Note: Questions and answers have been edited for clarity and brevity.

Alex Rose: What is a deepfake?

Britt Paris: In recent months, headlines have marveled at how computer scientists at research universities have built neural networks and machine learning models that turn audio or audio-visual clips into realistic, but completely fabricated, video or audio, commonly referred to as deepfakes. Free applications and consumer-grade software used by amateur communities pop up often on pornography sites and in various creative spaces online. These are consumer-grade technologies that let people make realistic videos of others doing and saying things that never actually happened.

AR: How simple is it to create a deepfake? Can you just compile several different images and voice recordings of somebody into an algorithm and produce this product?

BP: Essentially, at a very basic level, that is what’s happening. But there are very sophisticated things you must do to graft that face onto the moving image appropriately and have the face move the same way. So it’s not that simple, but it’s becoming easier with these open source technologies. Even though [open source technologies] proliferate and get shut down, the models still exist, and you can build off of them, especially if you know where they are and how to manipulate them.
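To make the "building off of these models" point concrete, below is a minimal sketch of the shared-encoder, two-decoder architecture commonly described for face-swap deepfakes: one encoder learns a facial representation common to both people, and each person gets their own decoder, so a face can be encoded from person A and decoded as person B. This is an illustrative outline, not Paris’s description or any specific tool’s implementation; the layer sizes, image dimensions, and stand-in data are all hypothetical.

```python
# A minimal sketch (in PyTorch) of the shared-encoder / two-decoder
# face-swap architecture. All layer sizes, image dimensions, and data
# here are hypothetical placeholders for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Learns a facial representation shared across both identities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs faces of one specific person from the shared code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training sketch: each person's aligned face crops are reconstructed
# through the shared encoder and that person's own decoder (random
# tensors stand in for real face crops here).
faces_a = torch.rand(8, 3, 64, 64)
loss = nn.MSELoss()(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's face, then decode with B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The remaining work Paris alludes to, aligning face crops before training and blending the swapped face back into each video frame, is what separates a crude result from a convincing one.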

AR: Are there any uses for deepfakes that you are particularly concerned about?

BP: The thing that is most concerning to me is when these videos are taken up as evidence of something. When they manifest in pornography, they often take the form of revenge porn. Ninety-six percent of the deepfakes that exist are pornographic. It’s increasingly easy to take images of anyone that exist online and make deepfakes using these open source technologies with, I think, just a few hundred images of a person. So anyone with an online profile is fair game to be faked.

We know from history the ways in which manipulated images have been used in these sorts of contexts. They were primarily directed against LGBTQ individuals, people of color, and women across all of those categories. People speaking out against the status quo or the traditional structural hierarchies of the time were targeted with manipulated images. This is very much a continuation of that old practice, but in new technological dressing.

There are some pretty serious issues around these for democracy in America, but there are grounded ways to think about them. These people are public figures, and whatever they do, whatever they post, whatever is posted about them bears a higher level of scrutiny. Everybody’s writing about these cheap fake videos, and platforms have said they will allow them to stay up because having this type of image manipulation available and on the record is useful to the citizenry. Public figures have the press and public scrutiny on their side. Generally, they have the economic resources and the time to refute these videos in court and force takedowns. People who are targeted but lack those resources, whose cases wouldn’t make any headlines if somebody posted revenge porn about them, are the people who I think need the most protection, and that is what worries me the most.

AR: Do you think that the DEEP FAKES Accountability Act (currently before Congress) adequately goes after the people that need to be regulated?

BP: It’s very difficult to go after platforms for spreading this type of disinformation, and particularly for making money off of spreading it. So I think there is certainly a shortcoming here.

I think the bill is a nice first step in terms of getting something on the books and in front of legislators for regulating disinformation and disingenuous content that circulates on social media. The focus on punishing individuals can certainly deter some people from doing more nefarious things they otherwise wouldn’t be discouraged from doing. But we need to think of better and more effective ways of punishing the people who hold power in this situation and the people who profit from it. Holding these platforms more accountable through a number of methods is important: legislation is one, public pushback is another. This issue of manipulated images and videos is a microcosm of the larger problem that these large systems have been allowed pretty much free rein over communication media in general and have not been regulated whatsoever.


For more information on deepfakes, check out “Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence” by Britt Paris and Joan Donovan