In 2018 the California-based cybersecurity firm FireEye tipped Facebook and Google off to a network of fake social media accounts from Iran that was conducting campaigns to influence people in the United States.
In response, Google and Facebook, using backend data to determine that a branch of the Iranian government was responsible, removed dozens of YouTube channels, a score of Google+ accounts and a handful of blogs.
Lee Foster, manager of information operations at FireEye, was at the forefront of the firm’s investigation. “Right now, you know something’s automated just by the sheer volume of content pushing out,” he says. “It’s not possible for a human to do this, so it’s clearly not organically created. Often you’ll see automated retweeting by some list of accounts that serves just to boost out a message.”
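The volume heuristic Foster describes can be sketched as a simple rate check. This is a hypothetical illustration only; the threshold, function names and data shape here are assumptions for the sake of the example, not FireEye's actual method:

```python
from datetime import datetime, timedelta

# Hypothetical volume-based bot heuristic: flag accounts whose posting
# rate exceeds what a human could plausibly sustain.
HUMAN_POSTS_PER_DAY = 72  # assumed cap: roughly one post every 20 waking minutes

def looks_automated(post_timestamps):
    """Return True if the account's average daily volume is implausibly high."""
    if len(post_timestamps) < 2:
        return False
    span = max(post_timestamps) - min(post_timestamps)
    days = max(span.total_seconds() / 86400, 1.0)  # avoid division by tiny spans
    return len(post_timestamps) / days > HUMAN_POSTS_PER_DAY

# A burst of 500 posts inside a single day trips the heuristic
start = datetime(2018, 8, 1)
burst = [start + timedelta(minutes=2 * i) for i in range(500)]
print(looks_automated(burst))  # True: 500 posts in a day far exceeds the cap
```

Real detection pipelines combine many such signals (posting cadence, account age, coordinated retweet graphs), which is exactly what AI-generated "organic" content is poised to evade.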
But the landscape is about to change, he says, as artificial intelligence comes online that can mask its automated roots.
“Imagine having a capability out there that can automate the organic creation of original content effectively enough that it looks real, but you don’t even have to have it operate or touch it,” Foster says.
His fears are shared by other analysts. A recent Brookings Institution report outlined some of the changes that are in store. “In the very near term, the evolution of AI and machine learning, combined with the increasing availability of big data, will begin to transform human communication and interaction in the digital space,” the report, The Future of Political Warfare, predicts. “It will become more difficult for humans and social media platforms themselves to detect automated and fake accounts, which will become increasingly sophisticated at mimicking human behaviour.”
The days of AI catfishing are fast approaching. A sophisticated AI could gather information about people, determine who is susceptible to a particular message, and tailor the interaction as if the AI were a person. Brookings says AI will “micro-target citizens with deeply personalized messaging. They will be able to exploit human emotions to elicit specific responses. They will be able to do this faster and more effectively than any human actor.”
So what’s the solution? Artificial intelligence that can match the volume of manipulated photos, articles and social media messages, and the analysis it will take to detect them. It will take an AI to catch an AI, the two duelling each other to determine what’s real.
“I suspect that may well be the case,” Foster says of this future. “The thing that the AI brings to this is sheer volume. You’re not going to have enough human talent in place to be able to catch all of that. It’s going to have to be a very capable mix of human intelligence and talent, combined with the kind of AI tools that can detect these automated campaigns.”
The big data behind social media is a key front in this struggle. Facebook, Google and Twitter use AI to determine what content and ads appear in search results, newsfeeds and timelines. These same bits of information, in nefarious hands, can be used to target more sinister messages.
Last week Facebook vowed to use staff and automation to weed out fake news, including video and photos. “The same false claim can appear as an article headline, as text over a photo or as audio in the background of a video,” Facebook product manager Tessa Lyons said in a statement. “In order to fight misinformation, we have to be able to fact-check it across all of these different content types.”
But while these companies can also use algorithms to detect disinformation, the results may not do much good. “Social media companies can tweak their algorithms to better detect disinformation campaigns or other forms of manipulation (and they have begun to do so), but the underlying systems and revenue models are likely to stay the same,” Brookings says.