This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
The rise of social media is impacting society, and not always in a good way: malicious behavior is increasingly common online, including coordinated campaigns to spread disinformation. To address this issue, a group of researchers in Europe created a new machine learning algorithm that can predict future malicious activity on X (formerly known as Twitter).
In their study, published 12 July in IEEE Transactions on Computational Social Systems, the researchers tested their model on three real-world datasets of malicious behavior originating in China, Iran, and Russia. They found that the model outperforms a state-of-the-art prediction model by 40 percent.
Malicious behavior on social media can have profoundly negative effects, for example by spreading disinformation and sowing discord and hate. Rubén Sánchez-Corcuera, an engineering professor at the University of Deusto, in Spain, who was involved in the study, says he sees the need for social networks that allow people to communicate or stay informed without being subject to attacks.
“Personally, I believe that by reducing hate and idea induction that can occur through social networks, we can reduce the levels of polarization, hatred, and violence in society,” he says. “This can have a positive impact not only on digital platforms but also on people’s overall well-being.”
This prompted him and his colleagues to develop their novel prediction model. They took an existing type of model called Jointly Optimizing Dynamics and Interactions for Embeddings (JODIE), which predicts future interactions on social media, and incorporated additional machine learning algorithms to predict whether a user would become malicious over increments of time.
“This is achieved by applying a recurrent neural network that considers the user’s past interactions and the time elapsed between interactions,” explains Sánchez-Corcuera. “The model leverages time-sensitive features, making it highly suitable for environments where user behavior changes frequently.”
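In broad strokes, that architecture can be sketched in a few lines of PyTorch: a recurrent unit consumes each interaction's features together with the time elapsed since the previous interaction, maintaining an evolving user embedding that a classifier turns into a maliciousness score. The sketch below is illustrative, not the authors' code; the layer sizes, feature dimensions, and names are assumptions.

```python
# Minimal sketch (not the authors' implementation) of a time-aware
# recurrent model: each interaction's features are concatenated with the
# time delta since the previous interaction, a GRU keeps a running user
# embedding, and a linear head scores the user as malicious or benign.
import torch
import torch.nn as nn

class TemporalUserModel(nn.Module):
    def __init__(self, feat_dim=32, hidden_dim=64):
        super().__init__()
        # GRU input: per-interaction features plus one time-delta value
        self.rnn = nn.GRU(feat_dim + 1, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)  # malicious-vs-benign

    def forward(self, interactions, time_deltas):
        # interactions: (batch, seq_len, feat_dim) features per interaction
        # time_deltas:  (batch, seq_len) time since previous interaction
        x = torch.cat([interactions, time_deltas.unsqueeze(-1)], dim=-1)
        _, h = self.rnn(x)                    # h: (1, batch, hidden_dim)
        return torch.sigmoid(self.classifier(h.squeeze(0)))  # P(malicious)

# Toy usage: score 4 users, each with a history of 10 interactions.
model = TemporalUserModel()
feats = torch.randn(4, 10, 32)
deltas = torch.rand(4, 10)
print(model(feats, deltas))  # one probability per user, in [0, 1]
```

The key design point, per Sánchez-Corcuera's description, is feeding the elapsed time into the recurrent update so the embedding reflects not just what a user did but how their activity is paced over time.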
In their study, they used three datasets comprising millions of tweets: 936 accounts linked to the People’s Republic of China that aimed to stoke political unrest during the 2019 Hong Kong protests; 1,666 Twitter accounts linked to the Iranian government that published biased tweets favoring Iran’s diplomatic and strategic perspectives on global news in 2019; and 1,152 Twitter accounts, active in 2020, associated with a media website called Current Policy, which engages in state-backed political propaganda within Russia.
They found that their model was fairly accurate at predicting who would go on to engage in malicious behavior. For example, it accurately identified 75 percent of malicious users in the Iranian dataset after analyzing only 40 percent of their interactions. When the researchers compared their model with another state-of-the-art prediction model, theirs outperformed it by 40 percent. Curiously, the new model was less accurate at identifying malicious users in the Russian dataset, although the reasons for this disparity are unclear.
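As a rough illustration of that early-detection measurement (the paper's exact protocol may differ), one can truncate each user's interaction history to its first 40 percent, score it with a model like the sketch above, and count the fraction of known malicious users that get flagged. The early_recall helper and the 0.5 threshold below are hypothetical.

```python
# Hedged sketch of an early-detection evaluation: score each known
# malicious user after seeing only the first `frac` of their interactions
# and report the fraction already flagged. Works with any model that maps
# (features, time_deltas) to a probability, e.g. TemporalUserModel above.
import torch

def early_recall(model, users, frac=0.4, threshold=0.5):
    """users: list of (features, deltas, is_malicious) tuples per user."""
    flagged, total = 0, 0
    for feats, deltas, is_malicious in users:
        if not is_malicious:
            continue
        total += 1
        k = max(1, int(len(deltas) * frac))  # first 40% of interactions
        p = model(feats[:k].unsqueeze(0), deltas[:k].unsqueeze(0))
        flagged += int(p.item() > threshold)
    return flagged / max(total, 1)  # e.g. 0.75 on the Iranian dataset
```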
Sánchez-Corcuera says their approach to predicting malicious behavior on social media could apply to networks with text and comments, like X, but that applying it to multimedia-based networks like TikTok or Instagram may require a different approach.
Regardless of which platform these types of models are applied to, Sánchez-Corcuera sees value in them. “Creating a model that can predict malicious activities before they happen would allow for preventive action, protecting users and maintaining a safer and more constructive online space,” he says.