Elon Musk, the billionaire entrepreneur behind Tesla and SpaceX, said Monday he would ban Apple devices from his companies if the iPhone maker integrates OpenAI's artificial intelligence at the operating system level. The threat, posted on Musk's social media platform X.com (formerly known as Twitter), came hours after Apple announced a broad partnership with OpenAI at its annual Worldwide Developers Conference.
“This is an unacceptable breach of security,” Musk wrote in an X post, referring to Apple's plans to build OpenAI's powerful language models and other AI features into the core of its iOS, iPadOS and macOS operating systems. “And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage,” he added, apparently referring to a shielded enclosure that blocks electromagnetic signals.
Escalating rivalry among the tech giants
Musk's broadside against Apple and OpenAI underscores the escalating rivalries and tensions among the tech giants battling for dominance in the booming generative AI market. The Tesla CEO has been an outspoken critic of OpenAI, a company he co-founded as a nonprofit in 2015 before an acrimonious split, and is now positioning his own AI startup, xAI, as a direct competitor to Apple, OpenAI and other major players.
But Musk is not alone in raising concerns about the security implications of Apple's tight integration with OpenAI's technology, which allows developers across the iOS ecosystem to leverage the startup's powerful language models for applications such as natural language processing, image generation and more. Pliny the Prompter, a pseudonymous but widely respected cybersecurity researcher known for jailbreaking OpenAI's ChatGPT model, called the move a “bold” but potentially dangerous one given the current state of AI security.
Security concerns are high
“Time will tell! Given the current state of LLM security, an integration of this magnitude is a bold move,” Pliny posted on X, using the acronym for large language models such as OpenAI's GPT series. In recent months, Pliny and other researchers have shown that it is possible to bypass the safety safeguards of ChatGPT and other AI models, causing them to generate malicious content or reveal sensitive information used in their training data.
The technology industry has struggled with data leaks, cyberattacks and the theft of sensitive user information in recent years. For Apple, the stakes are now higher as the company opens up its operating systems to third-party AI. Although Apple has long championed user privacy and insists that OpenAI adhere to its strict privacy policies, some security experts worry that the partnership could create new vulnerabilities that could be exploited by malicious actors.
From our perspective, Apple is essentially installing a black box at the heart of its operating system, trusting that OpenAI's systems and security are robust enough to keep users safe. But even the most advanced AI models today are prone to errors, bias and potential misuse. This is a calculated risk for Apple.
Musk's turbulent history with OpenAI
Apple and OpenAI both insist that the AI systems built into iOS run entirely on users' devices instead of sending sensitive data to the cloud, and that developers who use Apple Intelligence tools will be subject to strict guidelines to prevent abuse. But details are still scarce, and some fear that the lure of user data from Apple's 1.5 billion active devices could tempt OpenAI to bend its own rules.
Musk's past at OpenAI was turbulent. He was one of the company's early backers and served as its board chairman before leaving in 2018 over disagreements about the company's direction. Musk has since criticized OpenAI for transforming from a nonprofit research lab into a for-profit behemoth, accusing the company of abandoning its original mission of developing safe and beneficial AI for humanity.
Now, with his xAI startup riding a wave of hype and fresh off a $6 billion funding round, Musk seems eager to fuel the narrative of an epic AI battle for the ages. By threatening to ban Apple devices from his companies' offices, factories and facilities worldwide, the tech magnate is signaling that he sees the coming competition as a no-holds-barred, zero-sum contest.
Whether Musk will follow through with a comprehensive Apple ban at Tesla, SpaceX and his other companies remains to be seen. As Meta's chief AI scientist recently pointed out, Musk often makes “obviously incorrect predictions” in the press. The logistical and security challenges alone of enforcing such a policy across tens of thousands of employees would be enormous. Some also question whether Musk, as CEO, even has the right to unilaterally ban employees' personal devices.
But the episode highlights the strange alliances and hostilities that develop in Silicon Valley's AI gold rush, where yesterday's partners can quickly become today's rivals and vice versa. With tech superpowers like Apple, Microsoft, Google and Amazon now closely aligned with OpenAI or developing their own advanced AI in-house, the battle lines are being drawn for a showdown over the future of computing.
As the stakes rise and the saber-rattling intensifies, cybersecurity researchers like Pliny the Prompter will be looking for and investigating signs of vulnerabilities that could harm consumers in the meantime. “We're going to have fun, Pliny!” joked Comed, another prominent AI safety tester, in a playful but ominous exchange on X on Monday. Fun, it seems, is one word for it.