A “deepfake” video purportedly depicting Ukrainian President Volodymyr Zelenskyy calling on his forces to surrender to the Russian army circulated online Wednesday – including on Facebook, which later removed the misinformation.
University of Virginia professors Danielle Citron, a Law School expert on deepfakes and digital privacy, and David Nemer, an assistant professor of media studies who researches online misinformation, talked to UVA Today about the future of malign actors using digital tools to create false impressions for their own ends – and what that portends for democracies and their security.
Q. Is this the pivotal moment experts have been fearing for deepfakes?
Citron: Alas, yes. When Bobby Chesney [a professor at the University of Texas at Austin] and I started writing about deepfakes in early 2018, the national security implications were largely hypothetical. Then, as now, the problem was very real in the case of deepfake sex videos, in which mostly women’s faces were inserted into porn in a fairly realistic way. Chesney and I warned about the national security implications, but at the time we saw only glimmers of a well-timed, destructive fake. Now we see our warnings coming to fruition. It is a bummer to say, “I told you so.”
Nemer: The recent video was poorly edited – the accent was off, and Zelenskyy’s head and voice did not appear authentic on close inspection. The deepfake brought nothing new and was quickly debunked by people outside Ukraine and Russia.
Although the deepfake video itself was not that dangerous, the scenario into which it was inserted is what makes it extremely dangerous. People outside Ukraine and Russia were able to debunk it so easily because they had steady internet connections and quick access to multiple sources of information – which is not the case for Ukrainians and Russians.
The Russian government controls the local news media, and it has blocked access to every major social media platform, such as Facebook, Instagram, Twitter, TikTok and Telegram. That makes it very hard for the average Russian citizen to verify whether the video was real.
As for Ukrainians, internet connectivity in Ukraine has been spotty and unreliable due to constant attacks on its telecommunications infrastructure by Russian troops. Ukrainians are primarily concerned with surviving and don’t have the mental bandwidth to stop and verify information online – the war doesn’t allow them to engage with information as critically as they would in peaceful times.
Q. Is there a way to assess the damage from such videos?
Citron: One part of the damage is the way a well-timed deepfake can turn the tide in a war and inspire and justify physical attacks.
Another crucial part of the damage is what Chesney and I call the “liar’s dividend”: the idea of fakery can be invoked to dismiss real evidence of destruction. The liar’s dividend is what Putin and other masters of disinformation mine to great effect, suggesting that the only truths or videos that are real are the ones they say are real.
Nemer: It is hard to assess the damage of deepfakes because of how easily and widely they spread and how convincing they can be. They can cause both short- and long-term social harms. Deepfakes can accelerate the already declining trust in media, and that erosion can contribute to a culture of factual relativism, fraying the increasingly strained social fabric of civil society.
Q. Would fear of responding too early to something that could be a deepfake give the perpetrator an advantage?
Citron: That is always the case. Deepfakes are highly effective because they spread like wildfire and are taken as true before they can be debunked. As my colleague Dr. Wael Abd-Almageed of the University of Southern California’s Visual Image Lab tweeted in response to the Zelenskyy video, the problem is time – computer scientists are not afforded the time to authenticate or debunk videos before they are shared and believed.