In a YouTube video, an English-language voice-over announces that China has developed its own ultra-thin 1-nanometer chip, a staggering claim given that such chips aren't expected in commercial devices for another decade.
"Recent news from China has sent ripples of excitement and astonishment across the globe," gushes the voice-over on the China Charged YouTube channel. "This revolutionary breakthrough is more than a technological marvel; it is a game-changer that will redefine the global tech landscape."
"Prepare to have your mind blown," says another video, this time on the channel Unbelievable Projects. "Welcome to today's video, in which we'll discover why America remains behind China in infrastructure development."
These voices and their “good news” about China are evidence that the Chinese Communist Party and its overseas proxies are using artificial intelligence to flood YouTube with propaganda videos, according to a new report that describes a "coordinated inauthentic influence campaign" on the platform.
The videos appear on at least 30 channels that researchers at the Australian Strategic Policy Institute have identified as part of the "Shadow Play" network, which promotes pro-China and anti-U.S. narratives.
YouTube has taken down at least 19 of them.
The campaign used "entities and voice-overs generated by artificial intelligence" to put out content in line with the Chinese Communist Party's narratives, the institute said in its report, which is also titled "Shadow Play."
Aiming to influence opinion
The propaganda campaign, which began in mid-2022, appears to be part of a bid to "shift English-speaking audiences’ views" of China, including of Beijing's efforts to upgrade its technology despite U.S. sanctions. It has garnered nearly 120 million views and 730,000 subscribers so far, the report said.
"The campaign’s ability to amass and access such a large global audience – and its potential to covertly influence public opinion on these topics – should be cause for concern," the authors warned, adding that YouTube had responded to the report by taking down a number of channels for "coordinated inauthentic behavior" and for spam.
British artificial intelligence company Synthesia had also disabled an account linked to one of the channels for violating its media policies as of Dec. 14, according to the institute.
"The operator of this network could be a commercial actor operating under some degree of state direction, funding or encouragement," the report said. "This could suggest that some patriotic companies increasingly operate China-linked campaigns alongside government actors."
State-directed campaign
Report author Jacinta Keast said researchers had traced much of the channels' content to stories that originated in China's heavily controlled Chinese-language news media.
"A lot of those stories were often stories that only really appeared and most likely first appeared in the mainland Chinese media ecosystem," she said.
"We think at this stage that it's probably a Chinese state-directed and/or supported campaign that's being delivered by a corporate contractor or perhaps a patriotic Chinese company that's been encouraged to undertake this campaign," Keast told Radio Free Asia in a recent interview.
While one channel had partially monetized its content, the degree of monetization wasn't enough to suggest a commercial operation, she said.
"We would expect a purely commercial actor to really make more effort to fully monetize their operations," she said.
Some channels even drew comments from viewers noting that the voices sounded "artificial," yet there had been little attempt to improve the quality of the content to maximize revenue, Keast added.
Some of the YouTube content, including a report that Iran had switched on its China-provided BeiDou satellite system, also appeared on X, formerly Twitter, and other social media platforms, where it gained traction within a few hours of its release on YouTube, the report found.
It called for greater information sharing about Chinese influence operations among Five Eyes nations and their allies, as well as tighter rules requiring social media users to disclose the use of generative AI in audio, video or image content.
‘Tell good stories about China’
U.S.-based political commentator Wu Jianmin said such channels are responding to Chinese leader Xi Jinping's call to "tell good stories about China" and seek to "infiltrate the ideological systems of Western countries."
Taiwan National Defense University doctoral researcher Gao Cheng-pu said similar content is deployed through hard-to-trace intermediaries to target the democratic island of Taiwan, which Beijing claims as its territory despite never having controlled it.
"Usually after Beijing gives an order, it will contact Taiwanese businesspeople via a cooperative businessperson in Hong Kong, and have them hire online public relations companies to run [the operation]," Gao told Radio Free Asia.
The use of AI saves time and money, while making it much harder to trace the source of the content or the intermediaries, he said.
'Poisoned'
Keast said social media companies need to get wise to such operations.
"It's important for social media platforms to care about this because users go online expecting to engage with other users who are genuine and they don't want to be influenced, they don't want to have the online space poisoned by foreign influence actors," she said.
"We'd like to see them continue the work we've done in this investigation and have a look at any other content on our network that matches the indicators we provided in this report and ... take those down promptly."
But she warned that AI technology is so sophisticated that it's often hard to detect in audio, video or text.
And social media platforms aren't always well-informed about Chinese influence.
In October, YouTube deleted a second channel that produced satirical videos featuring Chinese Communist Party leader Xi Jinping, sparking renewed concerns over whether the Chinese government or its proxies are exploiting the social media giant's copyright rules to censor satirical or dissident content.
Keast said greater transparency from governments would help.
"We need governments to come out and say this is what we're seeing, this is who's doing it to strengthen democratic resilience against this threat," she said.
Translated by Luisetta Mudie. Edited by Malcolm Foster.