
A new AI early warning system to combat disinformation and prevent violence previewed in the Bulletin of the Atomic Scientists

Hannah Heinzekehr • March 16, 2020

Categories: Press Release

In a new article published in the Bulletin of the Atomic Scientists, three University of Notre Dame researchers preview the development of an Artificial Intelligence (AI) early warning system meant to monitor the ways manipulated content online (i.e., altered photos, misleading memes, edited videos) can lead to violent conflict and societal instability, and interfere with democratic elections. The article includes research on the 2019 Indonesian election as a prime example of the ways online disinformation campaigns can have real-world consequences.

The piece was co-authored by Michael Yankoski, doctoral candidate in theology and peace studies at the Kroc Institute for International Peace Studies, Walter Scheirer, assistant professor in the Department of Computer Science and Engineering, and Tim Weninger, associate professor in the Department of Computer Science and Engineering. The three scholars met while participating in a Notre Dame panel discussion on the ethics of AI, and started to think about the possible intersections between computer science and peace studies.

“As an ethicist and a scholar of peace studies, I tend to orient my work toward anticipating future threats,” said Yankoski. “I'm fascinated by the questions: what challenges will scholars and practitioners of peace studies face in the next 25 years and how might we be as prepared as possible for what lies ahead? While there is a lot of concern that AI will be deployed for purposes that undermine human well-being, I'm convinced that AI can also be designed for good ends and to help build a more peaceful and flourishing world.”

Together, the three researchers are developing AI tools that can validate the integrity of online photos and videos, and provide an early warning system for “emerging malicious trends” on social media. They hope the tool will be used to combat disinformation online, preserve peace and security, and provide real-time monitoring of the possibility of emerging mass violence.

While memes and other digitally manipulated images and videos are often shared for laughs, the authors warn that online disinformation has the potential for devastating impacts.

In 2019, Prabowo Subianto contested the results of the Indonesian general election, which he lost to Joko Widodo, and widespread protests by Subianto’s followers followed in Jakarta. Online information about these protests was rife with disinformation, leading the Indonesian government to block access to social media in hopes of curbing the spread of misleading content.

In the lead-up to the election, Scheirer and Weninger used their AI early warning system to monitor images from Twitter and Instagram over a one-year period. The system revealed that supporters of both candidates shared content that was full of disinformation and meant to manipulate. Examples included memes shared by Widodo’s supporters that portrayed the opposition as particularly prone to violence, pro-Subianto memes depicting anti-Islam sentiment among secular politicians, and fake news stories edited to include the CNN International logo.

“The broad reach of social media platforms – their incredible ability to sway opinions, shape perspectives, and incite action – means that attempts to exploit and manipulate the users of these platforms are likely to increase,” they write in the Bulletin article. “But given the massive amount of data generated on the internet every minute, AI systems are the only tools capable of identifying and analyzing the trends and potential threats arising from disinformation in real time. Our AI early warning system will allow people to understand manipulative online content in order to limit its potential to disrupt elections or incite violence.”

The computer science expertise required to build the AI system is complemented by insights from peace studies, which can help identify regions or societies where disinformation is highly likely to impact elections or result in violence. Peace and security studies researchers can also make recommendations about how AI insights are used and shared, and about the implications of findings for policymakers.

“Peace research has a long tradition of developing early warning systems – from weapons monitoring and hostility indicators to genocide prediction,” says George A. Lopez, the Rev. Theodore M. Hesburgh, C.S.C., Professor Emeritus of Peace Studies and Chair of the Bulletin’s Board of Directors from 1998-2003. “In applying the latest AI techniques to identify manipulated disinformation and malicious social media content, these authors provide essential tools for election monitors, journalists and peacebuilders.”

The development and refinement of this early warning system is still underway, and the researchers emphasize the importance of creating not a one-size-fits-all approach, but rather a “suite of technologies” that will be nimble and able to evolve. They are currently working to identify key institutional partners, funding channels, and other support mechanisms to ensure the sustainability of the project. They also presented their research at the Kroc Institute’s Building Sustainable Peace conference in November 2019 and received valuable feedback that they are integrating into their research and development processes.

Publishing their initial work in the Bulletin, one of the most renowned sources investigating the relationship among science, technology, and society, also represents a substantial step forward for the project.

“The Bulletin has been a signature venue in the peace and security studies community for 75 years, and was thus an ideal location to demonstrate this new partnership between scholars of computer science and peace studies at Notre Dame,” said Yankoski. “We hoped to publish our piece both to further establish this intersection as an urgent space requiring more focus and attention from the larger peace studies community, and also in order to identify key conversation partners and potential collaborators in the field.”

This is the second time in the last two years that peace studies doctoral students have had their research published in the Bulletin. In fall 2018, Kristina Hook and Richard (Drew) Marcantonio published an article on a possible war-related environmental disaster in Ukraine.

The Ph.D. Program in Peace Studies is administered by the University of Notre Dame’s Kroc Institute for International Peace Studies, part of the Keough School of Global Affairs. The program is a partnership with the Departments of Anthropology, History, Political Science, Sociology, Psychology, and Theology.

— Hannah Heinzekehr, Kroc Institute for International Peace Studies
