Is DeepSeek Safe for My Company’s Systems?

by Chris Lasecki

China’s new DeepSeek AI engine has ushered in a new era of fast-turn, low-cost AI tools. But are the risks worth the rewards for US companies?

Key Takeaways:

  • China’s DeepSeek has been hailed as the nimble new competitor to US large language AI models—an alternative reportedly developed for only $5.6 million that uses fewer advanced AI chips and trains on smaller datasets. But so far, the reality isn’t living up to the hype.
  • DeepSeek has suffered major breaches, ranking poorly in security tests and exposing user data.
  • The US Government is moving to ban DeepSeek and investments in US-China development projects involving AI, with many states and federal agencies already banning its use.
  • Tightened US restrictions on AI chips may impact DeepSeek’s future viability.
  • For now, we are advising our clients to avoid DeepSeek in any capacity, sticking instead to US-developed AI products on major platforms.
  • We recommend companies of all sizes take steps to secure all the AI products their employees may be using, officially or unofficially. That means implementing AI Fair Use Policies, vetting AI tools for cybersecurity risk before they’re downloaded, and hardening your overall cybersecurity posture against AI-related threats.

In one breathtaking launch, DeepSeek cast itself as the monopoly-breaking, nimble AI platform of the future, ready to usher in a new era of cheap, efficient AI that anyone can access.

The world has been taking them up on it. Soon after its introduction, DeepSeek surged to become the most downloaded AI app on Apple’s App Store, proving people like the idea of using an advanced large language model without a pricey subscription. Now you can even find smaller, localized models of DeepSeek on Microsoft’s Azure marketplace, packaged and ready for download into the guarded environment of your company’s systems. But just because you can download DeepSeek AI…should you?

For many reasons, the answer is no. And the dangers go far beyond just DeepSeek. Here’s my take on the risks and impact of some of these new tools developed outside the US, and what you can do to build AI into your infrastructure—safely.

The Inherent Risks of DeepSeek AI Systems

As a new release, DeepSeek is still considered a bit of a research project—albeit one with a growing international user base. But as a headline-grabbing proof of concept, it’s not living up to expectations, especially when it comes to cybersecurity.

Consider these facts:

  • The platform scores very poorly on tests for script injection vulnerabilities, putting its security infrastructure at the bottom of the LLMs available today. Numerous system vulnerabilities have already been found that could allow an attacker to pollute search results, steal data, and more. The company is scrambling to fix the problems, but the number and severity of issues this early on point to a system that was rushed to market.
  • DeepSeek is leaking user data at an alarming rate. Wiz Research recently uncovered a publicly accessible ClickHouse database containing over a million DeepSeek log entries. The leak exposed chat histories, backend details, API secrets, and sensitive operational information. The firm says it found the trove with a simple public search and was able to view proprietary data, extract plaintext passwords, and access local files stored on DeepSeek’s servers. Even worse, there appeared to be no authentication mechanisms in place at all. Even after a fix, this kind of sloppy development raises serious questions about whether DeepSeek is ready for prime time, in any country. (A quick way to check your own infrastructure for the same class of exposure is sketched just after this list.)
  • Skepticism abounds about DeepSeek’s claims. More people are beginning to wonder whether DeepSeek really developed its technology as cheaply as it says it did. The company reportedly stockpiled Nvidia A100 chips to create the platform, even if it used only a fraction of the chips other engines, such as ChatGPT, have required. If the US tightens its grip even further on the AI chip market, how will non-US-based AI engines like DeepSeek grow? What will the alternatives be? As more developers look under the hood, questions about the structure of the system are mounting.
  • User data collected in DeepSeek is not protected, even in the localized models that are not linked to the internet. Under Chinese law, the government has the right to pull data from any company based in China, and the DeepSeek user agreement reflects this. All your queries and results are vulnerable, as are any company documents and information you may have trained it on. According to a recent ABC News report, DeepSeek’s code is capable of transferring users’ data directly to the Chinese government.
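
That unsecured database is a failure mode worth checking for in your own environment. Below is a minimal sketch, in Python, of the kind of self-audit a security team might run: it probes hosts for a ClickHouse HTTP interface (default port 8123) that answers queries without credentials, the same class of exposure Wiz found. The host names are placeholders, and this is an illustration of the concept, not a substitute for a real vulnerability scan; probe only systems you own and are authorized to test.

```python
# Minimal sketch: probe your own hosts for an unauthenticated ClickHouse
# HTTP interface (default port 8123). Host names below are placeholders;
# scan only infrastructure you own and are authorized to test.
import urllib.request

HOSTS = ["db1.internal.example.com", "db2.internal.example.com"]  # placeholders

def clickhouse_exposed(host: str, port: int = 8123, timeout: float = 3.0) -> bool:
    """Return True if the host answers a ClickHouse query with no credentials."""
    url = f"http://{host}:{port}/?query=SELECT%201"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # An unauthenticated ClickHouse server answers SELECT 1 with "1\n".
            return resp.status == 200 and resp.read().strip() == b"1"
    except OSError:
        return False  # closed port, auth required, or no ClickHouse at all

if __name__ == "__main__":
    for host in HOSTS:
        if clickhouse_exposed(host):
            print(f"[!] {host}: ClickHouse answered without authentication")
        else:
            print(f"[ok] {host}: no unauthenticated ClickHouse response")
```

The same logic applies to any datastore: if a default port answers queries from outside your network without authentication, treat it as a breach in progress, not a theoretical risk.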

It’s that last point that has lawmakers so concerned, in much the same way they were concerned about the Chinese government’s access to TikTok user data. Government action against DeepSeek could be swift, bipartisan, and decisive—and that may be the biggest risk of using Chinese-developed AI models to date.

The US Government Response to DeepSeek—and Implications for Chinese-Developed AI in the US

Not surprisingly, both federal and state governments have banned DeepSeek for their employees and servers. The Pentagon has moved swiftly to ban DeepSeek on all Defense Department servers—a directive that will soon affect all companies contracting with the DoD, too. On January 24, the US Navy issued a directive to all personnel prohibiting the use of DeepSeek technology “in any capacity” because of “potential security and ethical concerns associated with the model’s origin and usage.” NASA, the US Congress, and many state governments have issued similar warnings to their employees and elected officials.

In one of the first bills of the 119th Congress, Senator Josh Hawley (R-MO) introduced S.321, the US-China AI Decoupling Bill, which, if passed, would enforce wide-ranging prohibitions on U.S. imports and exports of generative AI technology and halt AI research and development collaboration with China. It would also ban U.S. investments in AI technology developed or produced in China.

More Executive Action to Come

On February 3, Senator Elizabeth Warren (D-MA) penned a joint letter with Hawley to the Commerce Secretary, urging the Commerce Department to “update and enforce our export controls” in response to DeepSeek. It is one of a flurry of letters circulating around Capitol Hill urging agencies at every level to rewrite their tech acquisition and fair use policies to ban the use of Chinese-developed AI tech.

As with any new bills or government directives, they will face challenges and modifications before they’re passed and adopted. The speed of innovation, however, seems to have put some political tailwinds behind these efforts. For now, we’re advising all our clients to stick to US-based AI models, to ensure your company’s innovation efforts don’t get compromised or derailed.

US Market Dominance and the New Small Data Race in AI Development

The US government has long-standing export controls on the computer chips needed to power AI. These controls prohibit the sale of the latest AI chips to China and mete out the amount of lower-grade AI chips Chinese buyers can purchase from the US. These regulations tightened significantly in January 2025, when the Biden administration announced new rules restricting AI chip flow to China and heavily favoring AI chip access for the US and its allies. The Trump administration has yet to comment on these new rules, but most industry watchers expect them to stand or be tightened further. This gives American-based AI platforms a significant market advantage and ready access to the world’s best tech. In a world where AI is powered by big data, this hamstrings the competition quite effectively.

In response, DeepSeek has directly challenged the assumption that large language models need massive computing power and big data to run well—and that is perhaps its greatest accomplishment. To put this into perspective, ChatGPT was developed on more than 10,000 Nvidia GPUs, primarily the latest, top-of-the-line H100 chips. By contrast, DeepSeek claims it trained its model with only 2,000 of Nvidia’s less-powerful H800 chips.

DeepSeek is not only chip-efficient, it’s data-efficient. DeepSeek engineers say they curated a training set of just 800,000 examples, 600,000 of which were reasoning-related answers. For comparison, the latest GPT-4 model is estimated to run on roughly 1.7 trillion parameters—a measure of model size rather than training data, but a rough indication of the difference in scale.
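
For a rough sense of scale, here is the back-of-the-envelope arithmetic on those claims, using only the figures cited above (all of them public claims rather than audited numbers):

```python
# Back-of-the-envelope comparison of the training claims cited above.
# All figures are public claims, not audited numbers.
chatgpt_gpus = 10_000         # reported Nvidia GPU count behind ChatGPT (mostly H100s)
deepseek_gpus = 2_000         # DeepSeek's claimed count of less-powerful H800s

examples_total = 800_000      # DeepSeek's claimed curated training set
examples_reasoning = 600_000  # of which: reasoning-related answers

print(f"DeepSeek's claimed GPU count: {deepseek_gpus / chatgpt_gpus:.0%} of ChatGPT's")
print(f"Reasoning share of the training set: {examples_reasoning / examples_total:.0%}")
```

In other words, DeepSeek claims to have used a fifth of the GPUs, and weaker ones at that, which is why its training-cost claims have drawn so much attention and so much skepticism.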

This innovation could very well kick off a global “small data” race in AI development. Shortly after DeepSeek’s release, a Hong Kong University of Science and Technology team announced it had replicated the DeepSeek model with only 8,000 examples. These innovations sent Nvidia’s stock plummeting, wiping $589 billion off the company’s market cap, though the losses seem to have leveled off since.

This new way of thinking about AI infrastructure may very well democratize AI development and kick off a worldwide tech boom. Regardless of where your latest AI tools are made, you’ll need a new set of protocols to make sure they’re safe to use.

How Can I Make Sure My Company Is Using AI Safely?

Over the next few years, we’ll see an enormous amount of innovation that will both revolutionize and disrupt the way our work is done. It’s critical that your IT operations gear up to meet these challenges. The first step is developing systems to vet, monitor, and properly implement the new tools coming onto your systems. We recommend putting the following safeguards in place.

  • Develop an AI Fair Use Policy instructing employees not to download Chinese AI tools on their work devices, or on any personal device, such as a cell phone, that may also be running your company’s apps or used to view or share company information. Want to know how to successfully launch an AI usage policy at your company? Check out our blog on the subject, and download a free copy today.
  • Advise employees not to download AI tools of any kind on their work computers or phones without your IT team reviewing them first. Many free AI tools, such as photo/video editors or writing assistants, may seem like a harmless download, but they are often designed to be Trojan horses riddled with spyware and malware. (A simple monitoring sketch for spotting unsanctioned AI use follows this list.)
  • Create a system for vetting new AI tools. Your IT MSP can help a great deal with this. They can run vulnerability assessments that suss out whether an AI tool returns good information, is prone to logic flaws or injection attacks, or is riddled with bad code, malware, or spyware.
  • Prioritize AI tools that can be run in a “walled garden.” AI tools that run from within your system, such as M365 Copilot, are far safer than having employees work in tools housed on the open internet. These protected platforms ensure that your data, queries, and results are all stored within your system and not searchable by the AI provider. This usually requires an investment in monthly subscriptions, but the security benefits are worth it.
  • Run new AI tools in a controlled environment at first. Because these tools are so new, they don’t always perform as expected. When introducing a new AI-enabled tool, run controlled beta tests with a small, core group of users first, and contain it within a portion of your system so that any bugs you encounter don’t affect the rest of your IT infrastructure.
  • Harden your defenses with Responsible IT Architecture. With so much new technology flooding the market, it’s more important than ever to shore up your cybersecurity defenses. At Integris, we recommend all our clients have a responsible IT architecture that includes important safeguards such as endpoint detection and response, employee cybersecurity training, and a zero-trust authentication environment, among others. These cybersecurity tools are more than just best practices. They can be your first line of defense, helping you catch AI-induced breaches or attacks before they can cause any damage.
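
Policies only help if you can see what employees are actually using. As a starting point for the monitoring mentioned in the list above, here is a minimal Python sketch that scans an exported DNS query log for lookups of unapproved AI services. The log path, log format, and domain list are placeholder assumptions; most firewalls and DNS filters can export something similar, and your IT team or MSP would maintain the real blocklist.

```python
# Minimal sketch: flag outbound DNS queries to unapproved AI services.
# Assumes a plain-text DNS log with the queried domain in the last column;
# the path, format, and domain list are placeholders to adapt to your tools.
from pathlib import Path

UNAPPROVED_AI_DOMAINS = {
    "deepseek.com",  # example entry; your IT team maintains the real list
}

LOG_FILE = Path("/var/log/dns/queries.log")  # placeholder path

def flag_unapproved(line: str) -> str | None:
    """Return the queried domain if it matches (or is a subdomain of) the blocklist."""
    domain = line.split()[-1].rstrip(".").lower()
    for blocked in UNAPPROVED_AI_DOMAINS:
        if domain == blocked or domain.endswith("." + blocked):
            return domain
    return None

if __name__ == "__main__":
    for line in LOG_FILE.read_text().splitlines():
        if line.strip() and (hit := flag_unapproved(line)):
            print(f"[!] unapproved AI service queried: {hit}")
```

In practice, you would feed these hits into your SIEM or have your DNS filter enforce the blocklist outright; the goal is simply to make shadow AI visible before it becomes a policy problem.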


DeepSeek Is Just the Beginning

Right now, the most common AI engines are designed like ChatGPT: they search the internet to provide intelligent, integrated answers to your questions. More localized AI models, such as Copilot for M365, can do the same thing while also parsing and searching your own company’s internal data. Other tools use AI to do one particular task well, such as creating a video or image from a text prompt.

In the future, however, new “multi-modal” AI will be able to work across internet sites and your password-protected accounts at once. AI will become your own personal agent, much like having an assistant that lives in your computer. Imagine a world, for instance, where you can ask an AI engine to research, book, and pay for your next vacation. Or ask it to write a report, send it out for approval to your executive team, and make revisions based on their comments. Look for a future where AI agent capabilities build on each other, and talk to each other in their own language, as you can see in this video.

This world is already here, with new tools like Alibaba’s Qwen and Moonshot AI’s Kimi leading the pack of multi-modal, agentic AI models that can talk to each other and independently manage entire workflows. The business opportunities—and security risks—are obvious. Your cybersecurity infrastructure and IT processes will need to be up to the challenge.

Need Help with Your AI Strategy? Integris Can Help.

Our team of virtual Chief Information Security Officers (vCISOs) is CISSP-certified and ready to help you vet and implement the latest AI tech. Available fractionally on a retainer basis, they can provide the scalable, affordable help you need to keep your cybersecurity ahead of the curve. Contact us today for a free consultation.

Chris Lasecki, CISSP, is a vCISO at Integris. Chris has over 30 years of experience in the IT field, with a recent focus on cyber threat hunting and consultation.
