Two reports released this month warn that China is using artificial intelligence, or AI, to influence public opinion in the democratic world, with next year's elections in Taiwan and the United States seen as the most likely targets.
A RAND policy researcher told reporters last week, at the release of the organization's study, that "logically" Taiwan's presidential elections in January 2024 would be the first major target of China's AI misinformation campaigns.
AI makes it possible for an actor such as the People’s Liberation Army to create opinions that appear to be widely held, manufacturing a false sense of reality, a RAND director said.
Heather Williams, associate director for the international security and defense policy program at RAND, told reporters that the false sense of reality “might be hyped projections about future disorder or disaster.”
For Taiwanese voters, in other words, that might mean being told to vote for a candidate promising a deal with China or face possible annihilation.
In the case of the U.S., AI would likely help deepen divisions among American voters on issues they are already divided on, a Microsoft threat analyst said.
AI, which many may know mainly as a "chat" tool, can become a powerful instrument of malign persuasion in the hands of "bad actors." RAND and Microsoft are warning that democracies are vulnerable to such operations ahead of key elections.
"We have observed China-affiliated actors leveraging AI-generated visual media in a broad campaign that largely focuses on politically divisive topics, such as gun violence, and denigrating U.S. political figures and symbols," Clint Watts, general manager of Microsoft's Threat Analysis Center, wrote in a blog post subsequent to the release of the Microsoft report.
Spreading the word
The Microsoft and RAND Corporation reports say that China is already using AI to produce persuasive language and imagery to spread disinformation and foment debate on divisive political issues that are freely discussed in the West but would likely be off-limits in China.
AI likely makes disinformation campaigns “more plausible, harder to detect, and more attractive to more malign actors,” the RAND report argues. This is because it dramatically “lowers the cost of creating inauthentic (fake) media that is of a sufficient quality to fool users’ reliance on their senses to decide what is true about the world.”
In short, the report says, China is harnessing highly capable software built on large language models trained in the West and steering it to push Beijing's narrative, not ours.
Actors "affiliated" with the Chinese Communist Party have been using innumerable social media accounts based outside China on various internet platforms to propagate messaging and images that create misleading narratives, both reports say.
According to Microsoft, what it calls the "threat landscape" is expanding as China's cyber and influence operations grow more sophisticated.
RAND warns that AI is making it possible for China to harness armies of credible bots that can swamp apparently legitimate online news sources globally to create a mass disinformation campaign in China’s interests.
Microsoft added that China's campaigns use multiple websites to spread memes, videos, and messages in multiple languages, positioning Chinese state media as the authoritative voice on international discourse about China.
A wildfire
When wildfires devastated Maui recently, China's AI machine went into action, the New York Times reported Monday. Posts spreading virally across the internet pushed a narrative that the fires were the product of a secret U.S. "weather weapon."
The images that made that story credible, the Times reported, were likely generated by AI, “making them among the first to use these new tools to bolster the aura of authenticity of a disinformation campaign.”
The U.S. is simultaneously trying to engage with China and push back against Chinese misinformation ahead of what may be a rematch between President Joe Biden and former President Donald Trump.
According to CNN, 69% of Republicans and Republican-leaners still say Biden’s 2020 win was not legitimate.
China, the Microsoft report says, has further expanded the scale of its influence operations in 2023 by reaching audiences in new languages and on new platforms.
“These operations combine a highly controlled overt state media apparatus with covert or obfuscated social media assets, including bots, that launder and amplify the CCP’s preferred narratives,” the report says.
“These accounts messaged in new languages (Dutch, Greek, Indonesian, Swedish, Turkish, Uyghur, and more) and on new platforms (including Fandango, Rotten Tomatoes, Medium, Chess.com, and VK, among others).”
Edited by Mike Firn and Elaine Chan.