The election interference tactics originally deployed by Russia against the United States and Europe are now global. Hackers across the democratic world have exploited weaknesses in campaign email servers; probed electronic voting machines for vulnerabilities; set up troll farms to spread highly partisan narratives; and employed armies of bots to distort the truth online. Tech experts in countries such as Iran and Venezuela have borrowed these tactics and joined efforts toward the same goals: to erode confidence in electoral processes and in democratic governance itself.
Aggressive monitoring by social media companies seems to have blunted some of the disinformation campaigns aimed at US voters ahead of the November 6 congressional midterm elections. But these disinformation tactics are just the beginning of global vote meddling. As we look to the future beyond the US midterms - especially the 2020 US presidential election - there will be a far more dangerous interference tool, one available not only to malign governments but to individual actors as well: deepfake video.
Put simply, deepfake technology will enable users anywhere to fabricate video of virtually anyone, doing and saying anything. With the help of artificial intelligence (AI), the technology allows users to combine and superimpose images on to existing source video. It goes a step beyond the crude technique, first seen on Reddit, of making it seem that celebrities had appeared in porn movies by placing images of their heads on to the bodies of adult film actors - making facial and voice manipulation convincing in ways never before possible.
Currently, creating a credible deepfake video is not easy, and only a few countries have both the advanced technology and the specialists capable of using it. However, video and audio manipulation technologies are progressing quickly, becoming easier to use and thus giving less tech-savvy actors access to deepfake tools. By the next US presidential election, these tools will likely have become so widespread that anyone with a little technical knowledge will - from the comfort of their home - be able to make a video of any person, doing and saying whatever they want.
Deepfake videos have the potential to do tremendous harm. If they can be easily fabricated to show candidates making inflammatory statements, or simply looking inept, civic discourse will further degrade and public trust will plummet to new depths. Deepfake technology could take a bogus conspiracy theory - like "pizzagate," which falsely claimed a Washington, DC pizzeria was the centre of a child sex ring operated by Hillary Clinton and her campaign chairman - and make it appear even more credible, confusing the media and voters alike and sowing further discord among political parties, constituents and even families. This means deepfakes could transform not just election interference, but politics and geopolitics as we know it.
There are several measures that everyone who cares about democracy should take to prepare for this next wave of disinformation and election interference.
First, we must use AI to identify and stop deepfake videos before they spread. AI can use machine learning classifiers (similar to those used to detect spam emails) to sniff out imperfections in manipulated video that are invisible to the human eye. It can also detect otherwise invisible watermarking algorithms and metadata built into authentic video that tell machines whether the video has been tampered with. Governments and tech companies need to work together to continue developing this detection technology and make it widely available.
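To make the spam-filter analogy concrete, here is a minimal illustrative sketch, not drawn from the authors' work or any specific detection system: a simple classifier trained to separate authentic frames from manipulated ones. The feature names and data below are invented placeholders; a real detector would extract signals such as blink patterns, face-boundary artefacts or compression inconsistencies from actual video frames.

```python
# Illustrative sketch only: a spam-filter-style binary classifier applied to
# hypothetical per-frame features. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-frame features:
# [blink_rate, face_boundary_artefact_score, noise_inconsistency]
authentic = rng.normal(loc=[0.30, 0.10, 0.10], scale=0.05, size=(500, 3))
manipulated = rng.normal(loc=[0.10, 0.40, 0.35], scale=0.05, size=(500, 3))

X = np.vstack([authentic, manipulated])
y = np.array([0] * 500 + [1] * 500)  # 0 = authentic, 1 = manipulated

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train the classifier and report how well it separates the two classes
clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is simply that detection reduces to a classification problem: given measurable traces that manipulation leaves behind, a model can be trained to flag suspect video automatically, much as spam filters flag suspect email.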
Second, social media and digital information companies in the private sector should turn their research and development (R&D) fire on this threat before it spirals out of control on their own platforms. Facebook has recently built a machine-learning model to detect potentially fake material, which it then sends to fact-checkers, who use a variety of techniques to verify the content. All social media platforms should embrace this challenge and treat it as a shared public interest priority.
But even action from social media companies will not be enough to contain deepfakes. Everyone working on election integrity has a responsibility to educate the public about the existence of these fraudulent videos - and how to spot them and prevent them from spreading.
Without greater public awareness of the danger, deepfake technology has the potential to cause electoral chaos and geopolitical instability. Our leaders and innovators must get ahead of this new weapon. Developing detection technology and educating the public about deepfake disinformation must be priorities if we hope to face down this threat to democracy.
Michael Chertoff was US Secretary of Homeland Security from 2005 to 2009. Eileen Donahoe is Executive Director of the Global Digital Policy Incubator at Stanford's Center on Democracy, Development and the Rule of Law. Both are members of the Transatlantic Commission on Election Integrity, which unites political, tech, media and business leaders to tackle election interference across the democratic world.
The views expressed in this article are not those of Reuters News.
— Reuters