Dozens of protesters gathered outside OpenAI's San Francisco headquarters on Monday evening as employees left work, demonstrating against the company's development of artificial intelligence (AI).
The demonstration was organized by two groups – Pause AI and No AGI – who publicly called on OpenAI engineers to stop working on advanced AI systems like the chatbot ChatGPT.
The groups' message was clear from the start: stop the development of artificial intelligence that could lead to a future in which machines surpass human intelligence, known as artificial general intelligence (AGI), and forgo further military affiliations.
The event was organized partly in response to OpenAI deleting language from its usage policy last month that had banned the use of AI for military purposes. Days after the policy change, it was reported that OpenAI had taken on the Pentagon as a customer.
“On (February 12), we will demand that OpenAI end its relationship with the Pentagon and stop accepting military customers,” the event description read. “If their ethical and safety boundaries can be revised for convenience, they cannot be trusted.”
VentureBeat spoke with both protest organizers to learn more about what they hoped to achieve with the demonstration and what success would look like from each organization's perspective.
“The goal of No AGI is to raise awareness that we shouldn't be building AGI in the first place,” Sam Kirchener, head of No AGI, told VentureBeat. “Instead, we should be pursuing things like whole-brain emulation, which keeps human thought at the forefront of intelligence.”
Holly Elmore, the lead organizer of Pause AI (USA), told VentureBeat that her group wants “a global, indefinite pause on frontier AGI development until it's safe.” She added: “I would be so happy if they would end their relationship with the military. That seems like a really important boundary.”
Growing distrust of AI development
The protest comes at a critical moment in the public discourse on the ethical implications of AI. OpenAI's decision to change its usage policies and work with the Pentagon has sparked debate about the militarization of AI and its potential consequences.
The protesters' fears are primarily rooted in the concept of AGI – an intelligence that could perform any intellectual task a human can, but at potentially unimaginable speed and scale. The concern is not just about job losses or autonomous warfare; it is about fundamentally changing power dynamics and decision-making in society.
“If we build AGI, there is a risk that we will lose a lot of meaning in a post-AGI world due to the so-called psychological threat, where AGI does everything for everyone. People won't need jobs. And in our society today, people derive great meaning from their work,” Kirchener told VentureBeat.
“Self-governance isn't enough for these companies; there really needs to be external regulation,” Elmore added, pointing to how OpenAI has repeatedly gone back on its promises. “In June, Sam Altman bragged that the board could fire him, but in November they couldn't fire him. Now we're seeing the same thing with the usage policy (and military contract)…What's the point of those policies if they don't actually stop OpenAI from doing whatever it wants?”
Both Pause AI and No AGI share the goal of stopping AGI, but their methods differ. Pause AI is open to the idea of AGI if it can be developed safely, whereas No AGI strongly opposes its creation, emphasizing the potential psychological threats and loss of meaning in human life.
Both groups say this likely won't be their last protest.
Others concerned about AI risk can get involved through the groups' websites and social media. But for now, Silicon Valley is marching forward, racing toward an unknown AI future.