AI-related insider controversies hit both Microsoft and Google

This week, two insider situations involving tech giants Microsoft and Google raised questions about the responsible development of AI systems and the management of intellectual property.

First, at Microsoft, Shane Jones, a principal software engineering manager with six years of experience, has been independently testing the AI image generator Copilot Designer in his free time. 

Jones told CNBC that he was deeply troubled by the violent, sexual, and copyrighted images the tool was able to generate. “It was an eye-opening moment,” Jones said. “That’s when I first realized, wow, this is really not a safe model.”

Since November 2022, Jones has been actively testing the product for vulnerabilities, a practice referred to as red-teaming. 

He discovered that Copilot Designer could create images depicting “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.”

Despite reporting his findings to Microsoft in December, Jones said the company has been reluctant to remove the product from the market.

Microsoft’s Copilot has acted strangely on occasion, including adopting a “god mode” that saw it vow to pursue world domination.

In a letter addressed to Federal Trade Commission Chair Lina Khan, Jones wrote, “Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place.”

He added that since Microsoft has “refused that suggestion,” he urges the company to add clear disclosures to the product and change its rating on Google’s Android app to indicate that it is only suitable for mature audiences.

Copilot Designer has reportedly been easy to coax into bypassing its guardrails and was responsible for the notorious explicit images of Taylor Swift that recently circulated to millions across social media.

As Jones argues, the ability of AI systems to generate disturbing and potentially harmful images raises serious questions about the efficacy of safety features and how easy they are to subvert.

Insider controversy at Google

Meanwhile, Google is grappling with its own AI-related controversy.

Linwei Ding, also known as Leon Ding, a former Google software engineer, was indicted in California on four charges related to allegedly stealing AI trade secrets while secretly working for two Chinese companies.

The Chinese national is accused of stealing over 500 confidential files related to the infrastructure of Google’s supercomputing data centers, which host and train large AI models.

According to the indictment, Google hired Ding in 2019, and he began uploading sensitive data from Google’s network to his personal Google account in May 2022.

These uploads continued periodically for a year, during which Ding spent a few months in China working for Beijing Rongshu Lianzhi Technology. The start-up tech company had approached him and offered a monthly salary of $14,800 for him to serve as its Chief Technology Officer (CTO).

Ding also allegedly founded his own AI company, Shanghai Zhisuan Technology.

US Attorney General Merrick Garland stated, “The Justice Department will not tolerate the theft of artificial intelligence and other advanced technologies that could put our national security at risk.” FBI Director Christopher Wray added that Ding’s alleged actions “are the latest illustration of the lengths” companies in China will go to “to steal American innovation.”

As the world grapples with AI’s transformative potential, insider controversies at tech companies threaten to stir up public dissent.

Cases at Microsoft and Google highlight the importance of fostering a culture of responsible innovation, including trust and transparency within the company itself.

AI is a technology that demands trust, and tech companies need to provide more assurance. That hasn’t always been forthcoming.

For instance, a group of 100+ tech experts recently co-signed a letter pleading with AI companies to open their doors to independent testing.

They argued that tech companies are too secretive about their products except when their hand is forced, as we saw when Google pulled Gemini’s image generation model for creating bizarre, historically inaccurate images.

Right now, it seems that AI’s exceptional pace of development often leaves trust and safety in its wake.
