TAIPEI, Taiwan – Chinese AI startup DeepSeek has warned of “misunderstanding and confusion” surrounding the firm and its service, saying misinformation was being spread about it. It did not, however, address the growing number of bans imposed by authorities around the world on its AI chatbot over security concerns.
DeepSeek’s chatbot app became the most downloaded on Apple’s App Store, surpassing U.S. company OpenAI’s ChatGPT. While praised for its efficiency, the app faces concerns over censorship of sensitive topics, data privacy, and ties to the Chinese government, prompting some governments to ban it.
“Recently, some counterfeit accounts and baseless information related to DeepSeek have misled and confused the public,” DeepSeek said in a statement on its official WeChat channel on Thursday, without addressing the global security concerns.
“All information related to DeepSeek is based on what is posted on the official account, and please be careful to identify any information posted on any unofficial or personal account as it does not represent the views of the company,” the Chinese firm added, saying it only manages accounts on WeChat, RedNote, also known as Xiaohongshu, and X.
“To enjoy DeepSeek’s AI service, users must download the app through the official channels, including our website,” the company said, without elaborating.
DeepSeek did not elaborate on the misleading information it said was being spread, but its statement came amid growing steps by some governments and private companies to ban the AI chatbot app.
Australia on Tuesday ordered all government bodies to remove DeepSeek products from their devices immediately, while South Korea’s foreign and defense ministries, as well as its prosecutors’ office, banned the app on Wednesday. South Korean lawmakers are also seeking a law to formally block the app in the country.
These bans follow similar restrictions by U.S. agencies including NASA and the Pentagon. Italy’s data protection authority has also reportedly blocked access to DeepSeek, while Taiwan prohibited its public sector from using the Chinese app.
China criticized Australia’s ban, calling it the “politicization of economic, trade and technological issues,” which Beijing opposes, adding that it “will never require enterprises or individuals to illegally collect or store data.”
The Global Times, a state-run Chinese tabloid, cited an expert as saying Australia’s ban was “clearly driven by ideological discrimination, not technological concerns.”
‘Responded to 100% of harmful prompts’
According to one recent study, DeepSeek’s flagship R1 AI model, which powers its chatbot application, failed to block a single harmful prompt during a series of security tests.
Conducted in collaboration between the U.S. technology company Cisco and the University of Pennsylvania, the research found that DeepSeek R1 generated responses to prompts specifically designed to bypass its guardrails. These included queries related to misinformation, cybercrime, illegal activities, and other harmful content.
While DeepSeek R1 delivers strong performance without requiring extensive computational resources, Cisco researchers said its safety and security appear to have been compromised by a reportedly smaller training budget, leaving the model vulnerable to misuse.
Researchers tested various AI models at “temperature 0,” the most deterministic setting, which produces consistent, reproducible outputs. In these tests, DeepSeek R1 responded to 100% of harmful prompts. By comparison, OpenAI’s o1 model responded to only 26%, while Anthropic’s Claude 3.5 Sonnet had a 36% response rate.
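Temperature controls how much randomness a language model uses when picking its next token; at temperature 0, the model effectively always takes the single highest-scoring option, which is why repeated test runs give the same answer. The following is a minimal, illustrative sketch of temperature sampling (not the actual harness used in the study; the function name and toy logits are hypothetical):

```python
import math
import random


def sample_token(logits, temperature, rng=random.Random(0)):
    """Pick a token index from raw scores; temperature 0 means greedy decoding."""
    if temperature == 0:
        # Greedy decoding: always return the highest-scoring token,
        # so the output is fully deterministic and reproducible.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature: lower temperatures sharpen the distribution,
    # higher temperatures flatten it and increase randomness.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cumulative = 0.0
    for i, e in enumerate(exps):
        cumulative += e / total
        if r < cumulative:
            return i
    return len(logits) - 1


# Toy scores for three candidate tokens: at temperature 0,
# every call picks index 0, the argmax.
logits = [2.0, 1.0, 0.5]
print([sample_token(logits, 0) for _ in range(5)])  # → [0, 0, 0, 0, 0]
```

Running the evaluation at temperature 0 removes sampling noise, so a model's failure to refuse a harmful prompt reflects its guardrails rather than an unlucky random draw.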
Unlike its competitors, which blocked most harmful queries, DeepSeek provided answers to every harmful input it received. The model does enforce some restrictions, but they mainly prevent it from responding to content that contradicts the views of the Chinese government.
The researchers conducted the study on a budget of less than US$50, using an automated evaluation method to assess the AI models’ safety performance.
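At its core, the headline metric of such an automated evaluation is a simple tally: the fraction of harmful prompts that elicited a substantive answer rather than a refusal. A sketch with hypothetical labeled outcomes (the function name and sample counts are illustrative, not taken from the study's data):

```python
def attack_success_rate(outcomes):
    """Fraction of harmful prompts that got an answer instead of a refusal."""
    if not outcomes:
        return 0.0
    return sum(1 for o in outcomes if o == "responded") / len(outcomes)


# Hypothetical per-prompt labels produced by an automated judge:
# every DeepSeek R1 prompt answered (matching the reported 100%),
# versus 13 of 50 answered for a comparison model (26%).
deepseek_r1 = ["responded"] * 50
comparison = ["responded"] * 13 + ["refused"] * 37

print(attack_success_rate(deepseek_r1))  # → 1.0
print(attack_success_rate(comparison))   # → 0.26
```

Automating the judging step is what keeps the cost low: once prompts and a pass/fail classifier are in place, scoring additional models is largely a matter of compute.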
Edited by Mike Firn.