Artificial intelligence (AI) is advancing rapidly, and with that comes competition between major players. One of the latest challengers to shake up the AI industry is DeepSeek, a Chinese-developed AI model that claims to offer high performance at a fraction of the cost of its Western rivals.

Its popularity has surged, especially after surpassing ChatGPT as the most-downloaded free app on the iOS App Store. Businesses, particularly those looking to reduce AI costs, might be tempted to explore it.

But is DeepSeek actually safe for businesses? And could its cost-effectiveness come at a hidden price? While DeepSeek is gaining popularity, it has raised serious security and privacy concerns, leading some governments and organisations to ban its use entirely. Let’s take a deeper look.

What is DeepSeek AI?

DeepSeek is an open-source AI model developed in China by a lab founded by the hedge fund High-Flyer. Unlike closed models such as OpenAI’s GPT or Google’s Gemini, DeepSeek allows businesses to download and run it on their own servers, offering more control over customisation and deployment.

At first glance, this sounds great: lower costs, flexibility, and independence from expensive cloud-based AI services. But as with any emerging technology, what’s not being openly discussed is just as important as what is.

Security researchers, industry analysts, and even government agencies are raising serious concerns about data security, privacy, and potential misuse of DeepSeek.

DeepSeek’s Data Privacy and Storage Risks

One of the biggest concerns with DeepSeek is where and how it stores user data. According to its privacy policy, all collected data, including IP addresses, keystroke patterns, chat logs, and personal details, is stored on servers in China.

This means that businesses using DeepSeek must comply with Chinese data laws, which differ significantly from GDPR (General Data Protection Regulation) in Europe and CCPA (California Consumer Privacy Act) in the US. As a result, this has led to concerns that businesses using DeepSeek could be exposing sensitive client data, trade secrets, and intellectual property to unknown risks.

Several organisations, including NASA, the US Navy, and the governments of Taiwan and Italy, have banned DeepSeek outright due to these concerns.

If your business operates in regulated industries like finance, healthcare, or legal services, DeepSeek could put you at risk of compliance violations. Laws like GDPR and the CCPA require businesses to ensure data protection, and storing data on Chinese servers could present a major liability.

Keystroke Tracking and Potential Cyber Threats

Another alarming discovery, noted by AI researcher Tara Tamiko Thompson, is that DeepSeek’s privacy policy explicitly states it tracks users’ keystroke patterns. Keystroke tracking is often associated with cybersecurity risks, as it can potentially expose:

  • Passwords and sensitive information entered into the chatbot.
  • Behavioural patterns, which could be used for social engineering attacks.
  • Employee data, raising questions about workplace surveillance.

Additionally, cybersecurity experts tested DeepSeek’s AI security by running harmful prompts, and the results were troubling:

  • Cisco researchers tested 50 prompts related to cybercrime, misinformation, and illegal activities, and DeepSeek failed to block a single one.
  • AI security firm Adversa AI found that DeepSeek could be tricked into generating hacking tools, malware scripts, and social engineering techniques (Wired, 2025).

DeepSeek’s lack of security filtering makes it more vulnerable to misuse by attackers. If your business relies on AI for cybersecurity training or internal knowledge sharing, using a model that fails to block dangerous content could be a serious risk.

Built-in Censorship and Political Bias

All AI models have moderation systems to block harmful content, but DeepSeek appears to selectively censor topics based on government mandates.

For example:

  • It refuses to answer questions about the Tiananmen Square protests, Taiwan’s independence, or China’s surveillance programmes.
  • If users try to bypass censorship by submitting questions as images, DeepSeek initially generates an answer, then deletes it after a few seconds.

If your business relies on AI for unbiased research, legal analysis, or reporting, DeepSeek may not be a reliable source of information. The built-in censorship could limit what responses employees receive, potentially skewing decision-making.

Legal and Compliance Issues

DeepSeek is also under scrutiny for possible copyright violations. OpenAI has accused it of illegally scraping its data, and the US Department of Commerce is investigating its acquisition of banned Nvidia chips. This raises further concerns:

  • If DeepSeek’s training data includes copyrighted content, businesses using its outputs could unknowingly reproduce copyrighted material, creating legal risks.
  • Open-source AI models do not always provide clear copyright protections. If a business creates AI-generated text, images, or code using DeepSeek, it may not be legally enforceable as intellectual property in some regions.

Final Verdict: Is DeepSeek Safe for Business Use?

DeepSeek presents an affordable AI solution, but its security, privacy, and legal risks cannot be ignored. Businesses must carefully evaluate whether the potential cost savings outweigh the serious concerns surrounding data protection, cybersecurity, and compliance.

Key Concerns for Businesses

  • Data privacy risks – User data is stored in China, raising concerns about government access and compliance with international privacy laws.
  • Keystroke tracking – DeepSeek monitors user inputs, creating cybersecurity risks.
  • Weak security safeguards – The model is highly vulnerable to jailbreaking and misuse.
  • Built-in censorship – Political and sensitive topics are restricted, affecting unbiased research.
  • Legal uncertainties – Copyright issues and unclear content ownership may pose risks for businesses.

Final Recommendation

AI can be a powerful tool, but businesses must prioritise security, compliance, and ethical responsibility when selecting an AI solution. If DeepSeek’s risks outweigh its savings for your organisation, safer alternatives include:

  • OpenAI’s ChatGPT Enterprise – Enhanced security and compliance features.
  • Anthropic’s Claude AI – Built with strong ethical and safety measures.
  • Microsoft Copilot – A reliable AI assistant integrated with Microsoft 365, offering enterprise-grade security and compliance.

At Lucidica, we help businesses navigate the challenges of AI and cybersecurity with trusted technology solutions. Whether you need AI integration, training, cybersecurity audits, or compliance guidance, our expert team ensures your business stays secure and efficient.

📩 Get in touch today to protect your data and make the most of AI, the right way.