OpenAI Staffers Responsible for Safety Are Jumping Ship

Image for article titled OpenAI Staffers Responsible for Safety Are Jumping Ship

Photo: Justin Sullivan (Getty Images)

OpenAI launched its Superalignment team almost a year ago with the ultimate goal of controlling hypothetical super-intelligent AI systems and preventing them from turning against humans. Naturally, many people were concerned—why did a team like this need to exist in the first place? Now, something more concerning has occurred: the team’s leaders, Ilya Sutskever and Jan Leike, just quit OpenAI.

The resignation of Superalignment’s leadership is the latest in a series of notable departures from the company, some of which came from within Sutskever and Leike’s safety-focused team. Back in November of 2023, Sutskever and OpenAI’s board led a failed effort to oust CEO Sam Altman. In the six months since, several OpenAI staff members who were either outspoken about AI safety or worked on key safety teams have left the company.

Sutskever ended up apologizing for the coup (my bad, dude!) and signed a letter alongside 738 OpenAI employees (out of 770 total) asking to reinstate Altman and President Greg Brockman. However, according to a copy of the letter obtained by The New York Times with 702 signatures (the most complete public copy Gizmodo could find), several staffers who have now quit either did not sign the show of support for OpenAI’s leadership or were slow to do so.

The names of Superalignment team members Jan Leike, Leopold Aschenbrenner, and William Saunders—who have since quit—do not appear alongside more than 700 other OpenAI staffers showing support for Altman and Brockman in the Times’ copy. World-renowned AI researcher Andrej Karpathy and former OpenAI staffers Daniel Kokotajlo and Cullen O’Keefe also do not appear in this early version of the letter and have since left OpenAI. These individuals may have signed the later version of the letter to signal support, but if so, they seem to have been the last to do it.

Gizmodo has reached out to OpenAI for comment on who will be leading the Superalignment team from here on out, but we did not immediately hear back.

More broadly, safety at OpenAI has always been a divisive issue. That’s what caused Dario and Daniela Amodei in 2021 to start their own AI company, Anthropic, alongside nine other former OpenAI staffers. The safety concerns were also what reportedly led OpenAI’s nonprofit board members to oust Altman and Brockman. These board members were replaced with some infamous Silicon Valley entrepreneurs.

OpenAI still has a lot of people working on safety at the company. After all, the startup’s stated mission is to safely create AGI that benefits humanity! That said, here is Gizmodo’s running list of notable AI safety advocates who have left OpenAI since Altman’s ousting.

Photo: JACK GUEZ / AFP (Getty Images), Screenshot: X (Getty Images)

The former leaders of OpenAI’s Superalignment team simultaneously quit this week, one day after the company released its impressive GPT-4 Omni model. The goal of Superalignment, outlined during its July 2023 launch, was to develop methods for “steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

At the time, OpenAI noted it was trying to build these superintelligent models but did not have a solution for controlling them. It’s unclear if the departure of Leike and Sutskever was related to safety concerns, but their absence certainly leaves some room for speculation.

Photo: Michael Macor/The San Francisco Chronicle (Getty Images)

Karpathy, a founding member of OpenAI, left the company for the second time in Feb. 2024. He remarked at the time that “nothing happened,” though his departure came roughly one year after he left Tesla to rejoin OpenAI. Karpathy is widely regarded as one of the most influential and respected minds in artificial intelligence. He worked at Stanford under the “godmother of AI” Fei-Fei Li, who is herself an outspoken AI safety advocate.

Photo: Jerod Harris (Getty Images)

Helen Toner and Tasha McCauley were the first casualties of Sam Altman’s return to power: both were removed from OpenAI’s board when he came back.

At the time, Toner said the decision to fire Altman was about “the board’s ability to effectively supervise the company.” While the remark was somewhat ominous, Toner said she would continue her work focusing on AI policy, safety, and security. McCauley said even less at the time, though she has ties to the effective altruism movement, which claims to prioritize addressing the dangers of AI over short-term profits.

Screenshot: The Information

Aschenbrenner was a known ally of Sutskever and a member of the Superalignment team. He was fired in April 2024 for allegedly leaking information to journalists, according to The Information. He also has ties to the effective altruism movement.

Izmailov was another staffer fired for leaking information to journalists in April 2024. He worked on the reasoning team but also spent time on the safety side of things.

Photo: Joan Cros/NurPhoto (Getty Images)

Saunders, an OpenAI staffer on Sutskever and Leike’s Superalignment team, resigned in Feb. 2024, according to Business Insider. It’s unclear why exactly he resigned.

Photo: Joan Cros/NurPhoto (Getty Images)

Kokotajlo resigned from OpenAI in April 2024 after working on the company’s governance team, as also reported by Business Insider. On Kokotajlo’s LessWrong page, he writes that he “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.”

Screenshot: YouTube

O’Keefe appears to have resigned from OpenAI in April 2024 after over four years on the company’s governance team, according to his LinkedIn. O’Keefe frequently blogs about AI safety, and says in a post he continues to pursue “safe and beneficial AGI, but now in an independent role.”
