
Sutskever strikes AI gold with billion-dollar backing for superintelligent AI

adventures in alignment —

Top venture firms back SSI to develop "safe" AI with teams in Palo Alto and Tel Aviv.

Ilya Sutskever, OpenAI Chief Scientist, speaks at Tel Aviv University on June 5, 2023.

On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in funding. The 3-month-old company plans to focus on developing what it calls "safe" AI systems that surpass human capabilities.

The fundraising effort shows that even amid growing skepticism around massive investments in AI tech that so far have failed to be profitable, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms like Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round.

SSI aims to use the new funds to acquire computing power and attract talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto, California, and Tel Aviv, Reuters reported.

While SSI did not officially disclose its valuation, sources told Reuters it was valued at $5 billion—a stunningly large figure just three months after the company's founding, with no publicly known products yet developed.

Son of OpenAI

OpenAI Chief Scientist Ilya Sutskever speaks at TED AI 2023.

Benj Edwards

Much like Anthropic before it, SSI formed as a breakaway company founded in part by former OpenAI employees. Sutskever, 37, cofounded SSI with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.

Sutskever's departure from OpenAI followed a rough period at the company. He was reportedly disenchanted that OpenAI management did not devote proper resources to his "superalignment" research team, and he was involved in the brief ouster of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would "pursue safe superintelligence in a straight shot, with one focus, one goal, and one product."

Superintelligence, as we've noted previously, is a nebulous term for a hypothetical technology that would far surpass human intelligence. There is no guarantee that Sutskever will succeed in his mission (and skeptics abound), but the star power he gained from his academic bona fides and being a key cofounder of OpenAI has made rapid fundraising for his new company relatively easy.

The company plans to spend a couple of years on research and development before bringing a product to market, and its self-proclaimed focus on "AI safety" stems from the belief that powerful AI systems that can cause existential risks to humanity are on the horizon.

The "AI safety" topic has sparked debate within the tech industry, with companies and AI experts taking different stances on proposed safety regulations, including California's controversial SB-1047, which may soon become law. Since the topic of existential risk from AI is still hypothetical and frequently guided by personal opinion rather than science, that particular controversy is unlikely to die down anytime soon.

Source: arstechnica.com
