Microsoft Prohibits Employees from Using Controversial DeepSeek AI App Due to Security Risks
Brad Smith, Microsoft Corp's President, recently outlined the company's stance on the DeepSeek AI application, stating that it is off-limits for Microsoft employees. This decision, disclosed during a Senate hearing, is driven by serious concerns about data security and the potential manipulation of information ("propaganda concerns") arising from the app's operational framework.
"At Microsoft, we don't allow our staff to engage with DeepSeek," Smith noted, elaborating that the prohibition stems from the risks of data storage in China and the possibility of influence by Chinese propaganda. Notably, DeepSeek's privacy policy indicates that user data is stored on servers in China and is subject to Chinese law, which requires cooperation with the country's intelligence agencies. This raises significant red flags for many organizations and public bodies, drawing attention to issues of privacy and security.
This public declaration marks the first time Microsoft has explicitly acknowledged its ban on the DeepSeek app, against a global backdrop in which several countries have already restricted it. By declining to list DeepSeek in its app store, Microsoft appears to be reinforcing its position amid growing scrutiny.
DeepSeek has faced criticism for heavily censoring discussions of sensitive topics in line with Chinese government priorities, sparking concerns about censorship and data integrity. Despite these issues, Microsoft has a complicated relationship with DeepSeek: it offered the DeepSeek R1 model through its Azure cloud service earlier this year, a move that also drew scrutiny given the model's controversial associations.
The situation is further complicated by the competitive nature of the AI market. Microsoft's own Copilot, an AI-driven search and chat assistant, competes directly with DeepSeek's offerings. With other chat applications, such as Perplexity, still available to users, questions of fairness in the app ecosystem arise.
Smith touched briefly on Microsoft's direct engagement with DeepSeek's AI technology, claiming that the company intervened to mitigate "harmful side effects" of the model. He provided no detailed insight, however, into the modification process or the exact nature of any changes made.
As AI technology continues to evolve, the debate over its governance, ethical use, and competitive practices remains heated in both public and corporate sectors. As Microsoft deepens its own AI capabilities, the implications for privacy, security, and corporate responsibility merit ongoing attention.