Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate. Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more.
Matti Aksela, VP of Artificial Intelligence at F-Secure, commented: “As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse.
“Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results. Secure AI is the foundation of trustworthy AI.”
Sophisticated disinformation efforts – such as those organised by Russia’s infamous “troll farms” – have spread dangerous lies about COVID-19 vaccines, immigration, and high-profile figures.
Andy Patel, Researcher at F-Secure’s Artificial Intelligence Center of Excellence, said: “Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information.
“Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.”
Legitimate and reliable information is needed more than ever. Scepticism is healthy, but people are beginning to either trust nothing or believe everything. Both are problematic.
According to a Pew Research Center survey from late 2020, 53 per cent of Americans get their news from social media. Younger respondents, aged 18 to 29, reported that social media was their main source of news.
No person or media outlet gets everything right, but a track record of credibility must be taken into account, which tools such as NewsGuard help to assess. Even so, almost any mainstream media outlet has more credibility than a random social media user who may not even be who they claim to be.
In 2018, an investigation found that Twitter posts containing falsehoods are 70 per cent more likely to be reshared than truthful ones. This ripple effect of resharing without fact-checking is why disinformation can spread so far within minutes. For some topics, such as COVID-19 vaccines, Facebook has at least started prompting users to consider whether information is accurate before they share it.
Patel trained collaborative filtering models, a type of machine learning used in recommendation systems to encode similarities between users and content based on previous interactions, using data collected from Twitter. As part of his experiments, Patel “poisoned” the data by injecting additional retweets, then retrained the model to see how the recommendations changed.
The findings showed that even a very small number of retweets was enough to manipulate the recommendation engine into promoting the accounts whose content was shared through the injected retweets.
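Patel’s exact models and dataset aren’t public, but the sketch below shows what this kind of poisoning experiment can look like in miniature, assuming a toy matrix-factorisation collaborative filter trained on a user-by-account retweet matrix. The account indices, factor sizes, and attack parameters are all illustrative rather than drawn from the research.

```python
# A minimal sketch of a retweet-poisoning experiment against a toy
# collaborative filter. All numbers here are illustrative assumptions,
# not F-Secure's actual setup.
import numpy as np

rng = np.random.default_rng(0)

def train_mf(R, k=8, epochs=200, lr=0.05, reg=0.01):
    """Factorise the interaction matrix R (users x accounts) into user
    factors U and account factors V with plain SGD on observed entries."""
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    rows, cols = R.nonzero()
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]
            grad_u = err * V[i] - reg * U[u]
            grad_v = err * U[u] - reg * V[i]
            U[u] += lr * grad_u
            V[i] += lr * grad_v
    return U, V

def recommended(U, V, user, seen, k=3):
    """Return the k highest-scoring accounts the user has not interacted with."""
    scores = U[user] @ V.T
    scores[seen] = -np.inf
    return np.argsort(scores)[::-1][:k]

# Clean data: 30 users and 20 accounts with sparse retweet interactions.
R = (rng.random((30, 20)) < 0.15).astype(float)

TARGET = 19  # the account the attacker wants the system to promote

U, V = train_mf(R)
before = sum(TARGET in recommended(U, V, u, R[u] > 0) for u in range(30))

# Poisoning: a few attacker-controlled users retweet the target account
# plus the three most popular accounts, manufacturing a similarity
# between the target and accounts ordinary users already engage with.
R_poisoned = R.copy()
popular = np.argsort(R.sum(axis=0))[::-1][:3]
for attacker in rng.choice(30, size=4, replace=False):
    R_poisoned[attacker, TARGET] = 1.0
    R_poisoned[attacker, popular] = 1.0

U2, V2 = train_mf(R_poisoned)
after = sum(TARGET in recommended(U2, V2, u, R_poisoned[u] > 0) for u in range(30))

print(f"users recommended the target before poisoning: {before}/30")
print(f"users recommended the target after poisoning:  {after}/30")
```

Even in a toy setting like this, a handful of injected interactions can shift which accounts the model surfaces, which is the same basic mechanism the research describes at larger scale.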
“We performed tests against simplified models to learn more about how the real attacks might actually work,” said Patel.
“I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organisations to be certain this is what’s happening because they’ll only see the result, not how it works.”