TAIPEI, Taiwan — Meta compromised U.S. national security and freedom of speech to do business with China, a company whistleblower testified before U.S. senators.
Sarah Wynn-Williams, a former global policy director at Facebook, told the U.S. Senate on Wednesday that Meta founder Mark Zuckerberg personally designed and implemented a content review tool for Facebook that was used in Hong Kong and Taiwan.
The tool, according to her, would automatically submit a Facebook post for review by a “chief editor” whenever it received over 10,000 views.
“One thing the Chinese Communist Party and Mark Zuckerberg share is that they want to silence their critics. I can say that from personal experience,” Wynn-Williams said at the congressional hearing.
The tool was operational in both self-ruled Taiwan and China-controlled Hong Kong, where the Chinese Communist Party has been expanding its united front efforts.
China’s united front work combines influence, interference, and intelligence operations to shape political environments in Beijing’s favor. The country’s United Front Work Department is involved in activities ranging from controlling the Chinese diaspora and silencing dissent to gathering intelligence, promoting investment, and enabling technology transfer.

Meta has disputed the claims by Wynn-Williams. Spokesperson Andy Stone told the AFP news agency that Wynn-Williams’ claims were “detached from reality and full of false allegations.”
“We [Meta] currently do not offer any services in China,” he said.
However, even though Meta’s platforms are banned in China, the company still makes a significant amount of revenue from Chinese businesses that advertise to global audiences. Meta’s financial filings indicate that China is one of its biggest sources of ad revenue outside the U.S.
Wynn-Williams also disclosed that Meta once considered building a data center in China – an action she warned could have endangered the personal information of American users. She added that Meta employees had briefed Chinese officials on Meta’s AI technologies.
The so-called “chief editor,” she said, was to oversee content posted from Chinese-speaking regions such as China, Hong Kong, and Taiwan.
The editor had the power not only to review viral content but also to shut down Facebook services entirely in specific regions, such as Xinjiang, or on sensitive dates such as the anniversary of the Tiananmen Square crackdown.
According to Wynn-Williams, Chinese officials had tested the tool and even offered suggestions for its “optimization.”
“We must ensure you can block or filter images we don’t want people to see,” she said, quoting Communist Party officials’ feedback on Facebook’s content moderation.
Facebook has a troubling track record on content moderation, according to Ethan Tu, founder of Taiwan AI Labs, a non-governmental organization specializing in research on artificial intelligence and information warfare in Asia.
“During the COVID-19 pandemic, our lab noticed that many posts highlighting Taiwan’s pandemic success were censored on Facebook,” Tu told Radio Free Asia.
“However, false information about the U.S.’ COVID situation written in Chinese was not taken down.”
He stressed that shadow banning on Facebook is a real issue, noting that posts he had made about Huawei and cybersecurity received zero reach, which he said pointed to invisible suppression.
“During the Hong Kong anti-extradition protests in 2019, we also observed that posts related to the movement or democratic activism started disappearing all of a sudden. It seemed as if someone was deliberately censoring them,” he said.
Former Facebook staffer Wynn-Williams said the social network had made hundreds of content moderation decisions related to China even before 2009. By 2018, the platform had already been in direct discussions with the Chinese government for four years.
This contradicts Zuckerberg’s 2018 congressional testimony, in which he claimed that because Facebook had been banned in China since 2009, “the company couldn’t be certain how Chinese laws would be applied to its content.” Wynn-Williams called the statement “inaccurate.”
“This is a man who wears many different costumes,” Wynn-Williams said.
“We don’t know what the next costume’s going to be, but it’ll be something different. It’s whatever gets him closest to power.”
Tu said Taiwan AI Labs’ research showed that only 1.6% of content takedowns were related to disinformation or hate speech.
“The majority were tied to politically sensitive topics,” he said.
“What was once believed to be content moderation for stopping misinformation or hate speech turned out to be mostly about political sensitivity.”
Following Wednesday’s hearing, Senator Josh Hawley said he would further investigate whether Meta misled Congress in previous testimony and would review additional internal documents provided by Wynn-Williams.
“This is just the beginning. We are going to get the truth,” Hawley said.
Edited by Mike Firn and Stephen Wright.