Uyen Trang Nguyen, an associate professor in the Department of Electrical Engineering & Computer Science at York University’s Lassonde School of Engineering, is developing artificial intelligence (AI) systems to detect clickbait and Twitter bots, two techniques commonly used to spread fraudulent content online.
“I was inspired to start this work because I see the issues that are caused by false information on the internet,” says Nguyen.
Fraudulent content online, such as misinformation and marketing scams, can have major global and personal consequences, ranging from financial and political damage to cultural and personal divides.
The systems Nguyen has created to combat them are built with a subfield of AI called machine learning (ML), which trains computers to extract patterns and knowledge from data and learn from them, much as humans read an instruction manual before attempting an unfamiliar task. Nguyen’s AI detects each target, clickbait and Twitter bots, in its own way.
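As a rough illustration of that idea, the sketch below shows a classifier learning from a handful of labelled headlines and then applying what it learned to text it has never seen. The toy data and the scikit-learn pipeline are assumptions made for illustration, not Nguyen’s actual models or training data.

```python
# Minimal sketch of the machine-learning idea (illustrative only): the computer is shown
# labelled examples, extracts patterns from them, and applies what it learned to new text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "You won't believe what happened next",         # clickbait
    "Ten secrets doctors don't want you to know",   # clickbait
    "City council approves 2024 transit budget",    # legitimate
    "Researchers publish study on lake pollution",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = clickbait, 0 = legitimate

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(headlines, labels)                              # "reading the instruction manual"

print(model.predict(["This one trick will shock you"]))   # learned pattern applied to unseen text
```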
To detect clickbait, Nguyen’s system analyzes the relationships between words in an article or webpage. It combines two methods that have not previously been used together for clickbait detection: a neural network that mimics the brain’s ability to recognize patterns and regularities in data, and human semantic knowledge of language that captures how words relate to one another. While analyzing an article or webpage, the system consults a graph representing the semantic relationships between words and uses this information to compare the title with the content; if the two do not match, the piece is labelled as clickbait.
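A highly simplified sketch of that title-versus-content comparison follows. The tiny hand-made word vectors and the similarity threshold are stand-ins for the semantic word graph and trained neural network described above, not the published system.

```python
# Illustrative sketch (not Nguyen's actual model): flag an article as possible clickbait
# when its title and body are semantically far apart. The word vectors below are a toy
# stand-in for a real semantic resource such as pretrained embeddings or a word graph.
import numpy as np

word_vectors = {                     # hypothetical 3-dimensional "semantic" vectors
    "shocking": np.array([0.9, 0.1, 0.0]),
    "secret":   np.array([0.8, 0.2, 0.1]),
    "budget":   np.array([0.1, 0.9, 0.3]),
    "vote":     np.array([0.0, 0.8, 0.5]),
    "council":  np.array([0.1, 0.7, 0.6]),
}

def text_vector(text):
    """Average the vectors of known words; unknown words are ignored."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def is_clickbait(title, body, threshold=0.6):
    """Label as clickbait when the cosine similarity of title and body falls below a threshold."""
    t, b = text_vector(title), text_vector(body)
    denom = np.linalg.norm(t) * np.linalg.norm(b)
    similarity = float(t @ b / denom) if denom else 0.0
    return similarity < threshold

print(is_clickbait("Shocking secret", "Council vote on budget"))  # True: title doesn't match content
```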
To detect Twitter bots, Nguyen’s system combines natural language processing with a recurrent neural network. Working together to analyze tweet content, the natural language processing component allows the system to interpret text much as humans do, while the recurrent neural network picks up the language patterns that bots tend to use. Using these methods, the system can distinguish a Twitter bot from a legitimate Twitter account.
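A minimal sketch of the recurrent-network idea, assuming a PyTorch LSTM classifier with made-up vocabulary size and hyperparameters, is shown below; it is an illustration of the general technique rather than Nguyen’s actual architecture.

```python
# Illustrative sketch (not Nguyen's published model): an LSTM-based tweet classifier.
# Vocabulary size, dimensions, and labels are assumptions for demonstration.
import torch
import torch.nn as nn

class BotDetector(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)               # token ids -> vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # reads the tweet word by word
        self.classify = nn.Linear(hidden_dim, 2)                       # 2 classes: human vs. bot

    def forward(self, token_ids):
        embedded = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)    # final hidden state summarizes the tweet
        return self.classify(hidden[-1])        # logits over {human, bot}

# Toy usage: a batch of two "tweets", each encoded as 10 token ids from a 5,000-word vocabulary.
model = BotDetector(vocab_size=5000)
fake_batch = torch.randint(0, 5000, (2, 10))
logits = model(fake_batch)
predictions = logits.argmax(dim=1)              # 0 = human account, 1 = bot
```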
With these proposed systems for detecting clickbait and Twitter bots, network administrators at companies such as Google or Twitter could slow or stop the spread of fraudulent content before it reaches more internet users. Nguyen is also developing explainability for these systems, a feature that allows them to explain the reasoning behind their decisions. “It’s hard for people to trust artificial intelligence – it’s a computer, not a person,” says Nguyen. “I want to make sure these systems can explain what they are doing, so we can build trust in AI.”
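One common way to make such a decision explainable, sketched below with an assumed linear model and the same toy headlines as in the earlier sketch, is to report which words pushed the classifier toward the “clickbait” label; the article does not specify Nguyen’s actual explanation mechanism.

```python
# Illustrative sketch of explainability: after a linear model classifies headlines,
# list the words whose learned weights point most strongly toward "clickbait",
# so a human can see why something was flagged. Data and model are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

headlines = [
    "You won't believe what happened next",
    "Ten secrets doctors don't want you to know",
    "City council approves 2024 transit budget",
    "Researchers publish study on lake pollution",
]
labels = [1, 1, 0, 0]  # 1 = clickbait, 0 = legitimate

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(headlines)
model = LogisticRegression().fit(features, labels)

# Pair each vocabulary word with its learned weight; positive weights point toward clickbait.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
top_words = sorted(weights, key=weights.get, reverse=True)[:5]
print("Words most indicative of clickbait:", top_words)
```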
Nguyen is working on additional improvements to her AI systems, including a feature that will allow her Twitter-bot detection system to distinguish between harmful and harmless bots. She is also applying machine-learning methods to develop a system that can support financial institutions by detecting money-laundering transactions.