
Microsoft brings a DeepSeek model to its cloud

OpenAI, Microsoft's close partner and collaborator, has suggested that DeepSeek stole its IP and violated its terms of use. But Microsoft still wants DeepSeek's shiny new models on its cloud platform.

Microsoft announced today that R1, DeepSeek's so-called reasoning model, is available on Azure AI Foundry, the Microsoft platform that brings together a number of AI services for enterprises under a single banner. In a blog post, Microsoft said that the version of R1 on Azure AI Foundry has "undergone rigorous red teaming and safety evaluations," including "automated assessments of model behavior and extensive security reviews to mitigate potential risks."

According to Microsoft, customers will soon be able to run "distilled" flavors of R1 locally on Copilot+ PCs, Microsoft's brand of Windows hardware that meets certain AI readiness requirements.

"As we continue expanding the model catalog in Azure AI Foundry, we're excited to see how developers and enterprises (…) leverage R1 to tackle real-world challenges and deliver transformative experiences," Microsoft continued in the post.

The addition of R1 to Microsoft's cloud services is a curious one, considering that Microsoft has reportedly launched an investigation into DeepSeek's potential abuse of its and OpenAI's services. According to security researchers working for Microsoft, DeepSeek may have exfiltrated a large amount of data using OpenAI's API in the fall of 2024. Microsoft, which is also OpenAI's largest shareholder, notified OpenAI of the suspicious activity, per Bloomberg.

But R1 is the talk of the town, and Microsoft may have been persuaded to bring it into its cloud fold while the model still holds appeal.

It is unclear whether Microsoft made any modifications to the model to improve its accuracy and to counter its censorship. According to a test by the information reliability organization NewsGuard, R1 provides inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refused to answer 85% of prompts related to China, possibly a consequence of the government censorship to which AI models developed in the country are subject.

