Fighting Misinformation: AI Predicts Disinformation on X

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

The rise of social media use is impacting society—and not always in a good way, with increasing instances of malicious behavior online, such as coordinated campaigns to spread disinformation. To address this issue, a group of researchers in Europe created a new machine learning algorithm that can predict future malicious activity on X (formerly known as Twitter).

In their study, published 12 July in IEEE Transactions on Computational Social Systems, the researchers tested their model on three real-world datasets where malicious behavior took place—in China, Iran, and Russia. They found that the machine-learning model outperforms a conventional state-of-the-art prediction model by 40 percent.

Malicious behavior on social media can have profoundly negative effects, for example by spreading disinformation, discord, and hate. Rubén Sánchez-Corcuera, an engineering professor at the University of Deusto, in Spain, who was involved in the study, says he sees the need for social networks that allow people to communicate or stay informed without being subject to attacks.

“Personally, I believe that by reducing hate and idea induction that can occur through social networks, we can reduce the levels of polarization, hatred, and violence in society,” he says. “This can have a positive impact not only on digital platforms but also on people’s overall well-being.”

This prompted him and his colleagues to develop their novel prediction model. They took an existing type of model named Jointly Optimizing Dynamics and Interactions for Embeddings (JODIE), which predicts future interactions on social media, and incorporated additional machine learning algorithms to predict whether a user would be malicious over increments of time.

“This is achieved by applying a recurrent neural network that considers the user’s past interactions and the time elapsed between interactions,” explains Sánchez-Corcuera. “The model leverages time-sensitive features, making it highly suitable for environments where user behavior changes frequently.”
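The general idea of such a time-aware recurrent update can be sketched as follows. This is an illustrative toy, not the researchers' actual JODIE-based model: the dimensions, weights, and the log-scaled time-gap term are all assumptions made here for clarity, and the parameters are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding size; the real model's dimensions are not given.
EMBED_DIM = 8

# Illustrative parameters (randomly initialized here; learned in practice).
W_h = rng.normal(scale=0.1, size=(EMBED_DIM, EMBED_DIM))  # recurrent weights
W_x = rng.normal(scale=0.1, size=(EMBED_DIM, EMBED_DIM))  # interaction weights
w_t = rng.normal(scale=0.1, size=EMBED_DIM)               # time-gap weights
w_out = rng.normal(scale=0.1, size=EMBED_DIM)             # classifier weights

def update_embedding(h, interaction, dt):
    """One recurrent step: fold the latest interaction and the time
    elapsed since the previous one (dt, e.g. in hours) into the
    user's embedding."""
    return np.tanh(W_h @ h + W_x @ interaction + w_t * np.log1p(dt))

def malicious_probability(h):
    """Logistic read-out: probability the user behaves maliciously."""
    return 1.0 / (1.0 + np.exp(-(w_out @ h)))

# Replay a toy interaction history for one user.
h = np.zeros(EMBED_DIM)
history = [(rng.normal(size=EMBED_DIM), dt) for dt in (0.5, 2.0, 24.0)]
for interaction, dt in history:
    h = update_embedding(h, interaction, dt)

p = malicious_probability(h)
print(f"Predicted probability of malicious behavior: {p:.3f}")
```

The key design point the quote describes is that the time gap `dt` enters the update directly, so the same sequence of interactions produces different embeddings depending on how quickly they occur.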

In their study, they used three different datasets comprising millions of tweets. The three datasets included 936 accounts linked to the People’s Republic of China that aimed to spur political unrest during the Hong Kong Protests in 2019; 1,666 Twitter accounts linked to the Iranian government, publishing biased tweets that favored Iran’s diplomatic and strategic perspectives on global news in 2019; and 1,152 Twitter accounts active in 2020 that were associated with a media website called Current Policy, which engages in state-backed political propaganda within Russia.

They found that their model was fairly accurate at predicting who would go on to engage in malicious behavior. For example, it was able to accurately predict 75 percent of malicious users by analyzing only 40 percent of interactions in the Iranian dataset. When they…

The post “Fighting Misinformation: AI Predicts Disinformation on X” by Michelle Hampson was published on 09/18/2024 by spectrum.ieee.org