Dell Technologies Realize magazine Issue 3 | Page 41

Twitter as a tool to spread false information. Another difference, he points out, will be how people win or lose the game. “Instead of gaining followers and losing credibility, they will lose ‘lives’ when someone in their network reports or blocks them,” he adds. The team will work with the India-based Digital Empowerment Foundation to translate the game, adapt it to the local cultural context, and test it in rural areas. Linden expects testing to be complete by the end of this year and the game to be released early next year.

FIGHT FIRE WITH FIRE

Several researchers are trying another intriguing tactic in the war against fake news: make more of it. One such example is Grover, an AI model created by computer scientists at the University of Washington and the Allen Institute for Artificial Intelligence. They claim that their neural network is extremely good at generating fake and misleading news articles in the style of actual human journalists, and equally good at spotting AI-written online propaganda.

The idea of using AI to both generate and identify fake news is not new. AI research company OpenAI stirred controversy earlier this year when it decided that its text-generating natural language model, GPT-2, was too dangerous to release publicly. But Grover’s creators believe theirs is the best tool against AI-generated propaganda. “Our work on Grover demonstrates that the best models for detecting disinformation are the best models at generating it,” said University of Washington professor and research paper co-author Yejin Choi in a press release.
“The fact that participants in our study found Grover’s fake news stories to be more trustworthy than the ones written by their fellow humans illustrates how far natural language generation has evolved—and why we need to try and get ahead of this threat.”

WHEN SEEING IS NO LONGER BELIEVING

These new technologies have accelerated ongoing discussions about the potential dangers of AI-generated content, especially deepfakes: AI systems that manipulate audio, pictures, and videos to make people appear to say and do things they never did. By creating realistic representations of events that never happened, deepfakes threaten to take the war of disinformation to another level.

“Everyone is worried about deepfakes. We were at the European Commission a few months ago, and the first question they asked was about deepfakes,” says Linden. “It is on our to-do list.” Linden and his team now plan to enhance Bad News by adding deepfakes to the round where players are asked to impersonate an authority figure. “We will upgrade our impersonation badge to include tricks to spot fake videos, such as [the] fake Obama or Mark Zuckerberg videos that went viral recently,” he says.

Deepfakes may alarm everyone today, but do researchers like Linden have a silver-bullet solution for the crisis? Probably not. Instead, they plan to step up their game as malicious actors adopt newer, more vicious propaganda techniques. “It’s just like the flu vaccine,” he says. “We need to adapt proactively every season as the virus changes.” ■