February 12, 2024 8:47 PM
Image Credit: VentureBeat / Michael Nunez
Dozens of protesters gathered outside the OpenAI office on Monday evening as employees were leaving work for the day, rallying against the company’s development of artificial intelligence.
The demonstration was organized by two groups — Pause AI and No AGI — who were openly pleading with OpenAI engineers to quit their work on advanced AI systems like the chatbot ChatGPT.
The collective’s message was clear from the outset: halt the development of artificial intelligence that could lead to a future where machines surpass human intelligence, known as artificial general intelligence (AGI), and refrain from any further military affiliations.
The event was organized in part as a response to OpenAI deleting language from its usage policy last month that prohibited using AI for military purposes. Days after the usage policy was altered, it was reported that OpenAI had taken on the Pentagon as a client.
“On [February 12], we will demand that OpenAI end its relationship with the Pentagon and not take any military clients,” the event description said. “If their ethical and safety boundaries can be revised out of convenience, they cannot be trusted.”
VentureBeat spoke with both protest organizers to learn more about what they hoped to accomplish with the demonstration, and what success would look like from each organization’s perspective.
“The goal for No AGI is to spread awareness that we really shouldn’t be building AGI in the first place,” Sam Kirchener, head of No AGI, told VentureBeat. “Instead we should be looking at things like whole brain emulation that keeps human thought at the forefront of intelligence.”
Holly Elmore, the lead organizer of Pause AI (U.S.), told VentureBeat her group wants “a global, indefinite pause on frontier development of AGI until it’s safe.” She added, “I would be so happy if they ended their relationship with the military. That seems like a really important boundary.”
Growing distrust around AI development
The protest comes at a critical juncture in the public discourse on the ethical implications of AI. OpenAI’s decision to amend its usage policy and engage with the Pentagon has sparked a debate on the militarization of AI and its potential consequences.
The protesters’ fears are primarily rooted in the concept of AGI — an intelligence that could perform any intellectual task that a human can, but with potentially incomprehensible speed and scale. The concern isn’t just about the loss of jobs or the autonomy of warfare; it is about the fundamental alteration of power dynamics and decision-making in society.
“If we build AGI, there’s the risk that in a post-AGI world, we’ll lose a lot of meaning from what’s called the psychological threat, where AGI does everything for everyone. People won’t need jobs. And in our current society, people derive a lot of meaning from their work,” Kirchener told VentureBeat.
“Self-governance is not enough for these companies; there really needs to be external regulation,” Elmore added, highlighting how frequently OpenAI has rescinded promises it made. “In June, Sam Altman was bragging that the board could fire him, then in November, they couldn’t fire him. Now we see a similar thing going on with the usage policy [and military contract]… What is the point of these policies, if they don’t actually restrict OpenAI from doing anything they want?”
Both Pause AI and No AGI share the goal of halting AGI, but their methods diverge. Pause AI is open to the idea of AGI if it can be developed safely, whereas No AGI staunchly opposes its creation, emphasizing the potential psychological threats and the loss of meaning in human lives.
Both groups say this likely won’t be their last protest.