Deepfakes will challenge public trust in what’s real. Here’s how to defuse them.
- categories: Defusing Disinfo
By Sam Gregory

The panic around the threat of ‘deepfakes’ began in earnest in 2018. The ability to impersonate the voice or face of a person strikes fear into the hearts of Senators, as well as the vulnerable human rights activists and journalists with whom I work at the human rights organization WITNESS.

So far, deepfakes have been used in a few instances to attack the credibility of journalists, and there are thousands of examples of non-consensual use of the faces of celebrities and public figures on pornographic websites and elsewhere. We have not, however, seen the other potential malicious uses: undermining national security, conducting broad attacks on public trust, targeting influential newsrooms, or widespread integration into influence campaigns.

Mainstream products from the major commercial players already use AI-driven approaches, like Portrait Mode on the iPhone, Night Sight on the Pixel, or filters and augmented reality in Snap. But the most dangerous functions have not been productized yet. More subtle forms of ‘synthetic media’ manipulation are emerging, like the ability to alter the background of a video, remove an object, or insert a person. These synthetic media threaten the integrity of news journalism, human rights documentation, and investigative reporting. And the weaponization of the idea that we cannot believe any image (which is simply not true in the near term for most images we will encounter) will be a boon to authoritarians and totalitarians worldwide.

We are in the calm before the storm, and that is an opportunity to be seized. We can be proactive and pragmatic in addressing this threat to the public sphere and our information ecosystem. We can prepare, not panic. In this context, I’ve been leading the work at WITNESS on this emerging threat.
At WITNESS, we began our exploration of this problem by convening the first expert meeting to connect technologists, industry insiders, researchers, human rights investigators, and journalists, and to shape the range of pragmatic, partial solutions I outline below. We have followed this by leading a series of threat-modelling workshops to understand how other constituencies, including journalists and people working in the misinformation space, perceive the risks.

The need for WITNESS to lead on this is clear. As a human rights network, we are focused on the power of video and technology as tools for transparency, accountability, and rights, both in the US and internationally. We’ve been a key player in supporting many millions of people around the world in using the explosion of mobile phones, social media, and the Internet ethically and persuasively to share their ground-truthed realities and then advocate for change.

For many people, taking a video of an act of injustice is their first response to seeing it in front of them. Snapchat and Instagram form the lingua franca of young people; for many, YouTube is the go-to place to learn a new skill. We know the power of these media when used for good. It’s in the compelling evidence of war crimes and atrocities that courageous activists and civic journalists in Syria and Myanmar have shared from the ground. It’s in the movement organizing seen here in the US, in the coalescence of a more powerful Movement for Black Lives around visual evidence of police violence.