About the Author

This website represents the culmination of my time at the University of Texas at Austin, where I earned my master's degree in Media Studies.
​
I first began to take an interest in machine-human interactions and relationships in Dr. Sharon Strover's "Communication, Technology, and Culture" class, where I was introduced to the digital influencer Lil Miquela. Miquela was the start of my thinking about the ways that we integrate machines and technology into our daily lives. In following this line of inquiry, I found myself contemplating how taking a recuperative stance toward technology, and looking at our images of machines and tech through a queer lens, can reveal new modes of being in the world. I found myself transfixed by the ways that these creations frequently serve as the backdrop that allows humans to imagine a different world and a different self. More often than not, these imaginings take on a distinctly counter-hegemonic sensibility.
Halfway through "Communication, Technology, and Culture" (Spring 2020), the COVID-19 pandemic forced us into remote online courses. I'm writing this section on the one-year anniversary of the World Health Organization declaring COVID-19 a pandemic. Since then, I've completed the majority of my master's remotely.
What are Deepfakes?
Intro to a Deep Learning course at MIT
How do Deepfakes Work?
PCMag.com has an excellent article explaining deepfakes. I've summarized the main points below, but for the full article, click here.
​
Deepfakes commonly work in one of two ways: either by directly mapping an actor's facial movements onto a video created for that purpose (known as a target video), or by mapping a face onto existing videos.
​
Deepfakes are created using machine learning, specifically "deep learning" built on neural networks. Neural networks are algorithms that find patterns within massive amounts of data and visual examples. They are designed to mimic the way the human brain works, which makes them adept at finding patterns and building on their discoveries.
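To make the idea of "finding patterns by adjusting based on prior performance" concrete, here is a minimal sketch of a tiny neural network learning a simple nonlinear pattern (XOR) from examples. This is not a deepfake model; it only illustrates, at toy scale, the same learn-from-error principle that lets far larger networks learn the patterns of a face. All sizes and settings here are arbitrary choices for the sketch.

```python
import numpy as np

# Toy two-layer neural network learning the XOR pattern from four examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: a nonlinear pattern

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5  # learning rate
for step in range(5000):
    # Forward pass: compute the network's current guess.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: measure the error and adjust every weight to
    # reduce it -- "learning from experience and prior performance".
    grad_p = 2 * (p - y) / len(X) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"loss before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

After a few thousand adjustments the network's error on the pattern has dropped; deepfake systems apply the same loop to millions of facial images instead of four number pairs.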
​
Deepfake technology rests on the same premise as CGI (computer-generated imagery), which was previously available only to big-budget filmmakers. While creating deepfakes still takes a lot of time and processing power, the technology has never been cheaper or more accessible.
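Explanations of early face-swap deepfakes commonly describe an autoencoder design: one shared encoder that learns pose and expression, plus one decoder per identity. The sketch below is only an illustration of that wiring, not a working model; the "faces" are random vectors, the "layers" are untrained linear maps, and all names and sizes are hypothetical.

```python
import numpy as np

# Toy wiring of the shared-encoder / per-identity-decoder face-swap idea.
rng = np.random.default_rng(1)
FACE_DIM, CODE_DIM = 64, 8  # hypothetical sizes for the sketch

# One shared encoder maps any face to a compact pose/expression code.
encoder = rng.normal(size=(FACE_DIM, CODE_DIM))
# One decoder per identity rebuilds that specific person's face.
decoder_a = rng.normal(size=(CODE_DIM, FACE_DIM))
decoder_b = rng.normal(size=(CODE_DIM, FACE_DIM))

def encode(face):
    return face @ encoder      # face -> shared "expression" code

def decode(code, decoder):
    return code @ decoder      # code -> that identity's face

# Training (omitted here) would teach encode+decode_a to reconstruct
# person A's images and encode+decode_b to reconstruct person B's,
# forcing the single encoder to capture what the two share.

# The swap itself: encode a frame of person B, then decode it with
# person A's decoder -- person A's face wearing B's expression and pose.
frame_of_b = rng.normal(size=FACE_DIM)
swapped = decode(encode(frame_of_b), decoder_a)
print(swapped.shape)  # (64,)
```

The key design point is that the encoder is shared: because it must serve both decoders, it learns identity-independent features (angle, lighting, expression), which is exactly what makes the swap transfer an actor's movements onto another face.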
To read more on the concealing of queerness, click here.
To read more on queer activism in Welcome to Chechnya, click here.
A "deepfake" refers to a video in which someone's face has been swapped with someone else's using machine learning. Deepfakes first appeared in 2018 as the result of hobbyists experimenting with neural networks; there are now a number of apps and open-source software tools that can swap one person's face onto another with little difficulty.
A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
Example of mapping a face onto other videos
The Dangers of Deepfakes
Deepfakes pose many dangers in our current climate, especially considering the sheer amount of information that people have access to at any moment. The dangers of deepfakes range from swapping celebrities' or politicians' faces onto pornographic videos to videos of public figures spewing mis- and disinformation. This false information can sway elections or public opinion about causes. Deepfakes are notoriously hard to catch, as the AIs "continuously improve their performance by learning from experience and adjusting their behaviour based on prior performance and new inputs" (Maras and Alexandrou). Combined with the recommendation systems inherent in the majority of social media platforms, deepfakes have the potential to spread quickly and be seen by millions. As Aaron Smith writes for the Pew Research Center, "nearly all the content people see on social media is chosen not by human editors but rather by computer programs…" (20). These algorithms analyze massive amounts of data, as well as the habits of users, to prioritize content that a user might find engaging. To put it simply, a snowball effect is created: the more engagement a post gets, the more it spreads.
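The snowball effect can be sketched with a deliberately simplified, hypothetical feed model: each round, impressions are handed out with a super-linear boost for posts that are already popular, and a fixed fraction of viewers engage, feeding the loop. Real platform ranking is far more complex; this only illustrates the rich-get-richer dynamic described above, and every number here is an arbitrary choice for the sketch.

```python
def run_feed(engagement, rounds=10, impressions_per_round=100,
             engagement_rate=0.1):
    """Toy feed: impressions are weighted by engagement squared, a
    stand-in for ranking boosts that favor already-popular content."""
    leader_shares = []
    for _ in range(rounds):
        weights = [e ** 2 for e in engagement]   # popular posts rank higher
        total = sum(weights)
        new_views = [impressions_per_round * w / total for w in weights]
        # A fixed fraction of new viewers engage, growing the post's rank.
        engagement = [e + engagement_rate * v
                      for e, v in zip(engagement, new_views)]
        leader_shares.append(engagement[0] / sum(engagement))
    return engagement, leader_shares

# Three posts; post 0 starts with only a modest head start in engagement.
final, shares = run_feed([4.0, 3.0, 3.0])
print(f"leader's share of all engagement: {shares[0]:.2f} -> {shares[-1]:.2f}")
```

Even a small early lead compounds: the head-start post claims a growing share of all engagement each round, which is why a convincing deepfake that gets early traction can saturate a feed before fact-checkers catch up.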
​
As Fallis writes, direct visual perception has always been a "source of information that we can simply trust without a lot of verifying" (1). Basically, people tend to trust what they see, and when one cannot be in the right place at the right time, video becomes the next best thing. Deepfakes exploit our trust in our own visual perception.
Potential Positive Uses of Deepfakes
Deepfakes and Welcome to Chechnya were included in this project specifically because studying the film's deepfakes offers an engaging counter-reading to the preceding arguments about their societal dangers. Welcome to Chechnya is generative in a number of respects. Its deepfakes are used to counteract and protect against a hyper-disciplined surveillance state. Aside from showing a non-hierarchical queer network of activists smuggling survivors out of the Republic, the film also utilized a similar network in its postproduction. Queer activists, found primarily through Instagram, volunteered to have their faces mapped for the AI. These volunteers offered up their agency and identities to allow the Chechen survivors their own.
​
The use of deepfakes in the documentary performed the dual role of protecting survivors' identities while not sacrificing the audience's emotional attachment to the survivors. Bill Nichols, an innovator in documentary studies and professor emeritus at San Francisco State University, attested to the effectiveness of the deepfakes: "I am seeing him—him being his ‘face’—bare his soul."
​
Deepfakes in Welcome to Chechnya give viewers a chance to think not only about the possible positive uses of deepfake technology, but also about how queerness is surveilled and masked in modern society, and how queer activism manifests in the digital world.