
What’s Ahead for OpenAI? Project Strawberry, Orion, and GPT Next

OpenAI is on the cusp of releasing two groundbreaking models that could redefine the landscape of machine learning. Codenamed Strawberry and Orion, these projects aim to push AI capabilities beyond current limits—particularly in reasoning, problem-solving, and language processing, taking us one step closer to artificial general intelligence (AGI).

Strawberry, previously known as Q* or Q-Star, appears to be more than just a chatbot; it is meant to demonstrate a significant leap in AI reasoning abilities. Sources familiar with the project have told outlets including Reuters and The Information that it has shown remarkable proficiency in solving complex mathematical problems and performing logical analysis.

Orion, meanwhile, is positioned as OpenAI’s next flagship language model, potentially succeeding GPT-4. It's designed to outperform its predecessor in language understanding and generation, with the added ability to handle multimodal inputs, including text, images, and videos.

Both projects have garnered attention from U.S. national security officials, underscoring their potential strategic importance. This development comes as OpenAI continues to raise capital despite substantial revenue growth, likely due to the high costs associated with developing and training these advanced models.

Strawberry and reasoning power

Despite an unending flurry of speculation online, OpenAI has not said anything officially about Project Strawberry. Purported leaks, however, gravitate toward its capabilities for sophisticated reasoning.

Unlike traditional models that provide rapid responses, Strawberry is said to employ what researchers call "System 2 thinking": taking time to deliberate and reason through a problem rather than simply predicting the next tokens to complete a response. This approach has reportedly yielded impressive results, with the model scoring over 90 percent on the MATH benchmark—a collection of advanced mathematical problems—according to Reuters.
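
OpenAI has not described how this deliberation works under the hood. Purely as an illustration of the "fast answer" versus "reasoned answer" contrast, the sketch below prompts the same publicly available model twice with the standard openai Python client, once demanding an immediate reply and once asking it to reason step by step first. The model name and prompts are placeholders; this is not Strawberry's actual mechanism.

```python
# Illustrative only: contrasting a "fast" answer with a deliberate, step-by-step
# one using the standard openai Python client. This is NOT how Strawberry works
# internally; it only approximates the System 1 / System 2 distinction.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "A train leaves at 3:40 pm and arrives at 6:05 pm. How long is the trip?"

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# "System 1": demand an immediate answer.
fast = ask(QUESTION + " Reply with only the duration.")

# "System 2": ask the model to reason through the problem before committing.
deliberate = ask(QUESTION + " Think through the problem step by step, "
                            "then give the final duration on its own line.")

print("Fast:", fast)
print("Deliberate:", deliberate)
```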

Another key innovation anticipated from Strawberry is its ability to generate high-quality synthetic training data. This addresses a critical challenge across AI development: the scarcity of diverse, high-quality data for training models. If true, Strawberry would not only enhance its own capabilities but also pave the way for more advanced models like Orion.

Considering how much data OpenAI has already scraped, and the growing privacy movement among users unwilling to hand their data over to AI trainers, this capability could play an important role in the quality of future AI models—much as some users today train their own custom models on images generated by Stable Diffusion.
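
OpenAI has not published details of Strawberry's data pipeline, so the snippet below is only a minimal, hypothetical sketch of the general concept: asking an existing model (a placeholder name here) to produce problem and solution pairs that could later be filtered and used as training examples.

```python
# A minimal, hypothetical sketch of generating synthetic training pairs with an
# existing model via the standard openai Python client. Strawberry's real data
# pipeline is not public; this only illustrates the general idea.
import json
from openai import OpenAI

client = OpenAI()

TOPICS = ["modular arithmetic", "probability", "plane geometry"]

def generate_pair(topic: str) -> dict:
    """Ask the model for one practice problem plus a worked solution."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder teacher model
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                f"Write one {topic} problem and a step-by-step solution. "
                "Return JSON with keys 'problem' and 'solution'."
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)

# In practice these examples would be filtered and verified before training.
synthetic_dataset = [generate_pair(topic) for topic in TOPICS]
print(f"Generated {len(synthetic_dataset)} synthetic examples")
```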

However, Strawberry's deliberate processing approach may present challenges for real-time applications. OpenAI researchers are reportedly working on "distilling" Strawberry's capabilities into smaller, faster models—trading away some quality so that consumers can run massive numbers of inferences at low computing cost.
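
Reports do not say how that distillation would be done. The textbook version of the technique, knowledge distillation, trains a smaller student model to match a larger teacher's output distribution; the snippet below is a minimal PyTorch sketch of that standard loss, not a description of OpenAI's internal process.

```python
# A minimal sketch of classic soft-target knowledge distillation in PyTorch.
# This is the textbook technique, not OpenAI's actual method for Strawberry.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # T^2 scaling keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy usage: random logits standing in for a batch of teacher/student outputs.
teacher_out = torch.randn(4, 32000)  # e.g. vocabulary-sized logits
student_out = torch.randn(4, 32000, requires_grad=True)
loss = distillation_loss(student_out, teacher_out)
loss.backward()
```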

Even so, integrating Strawberry's technology into consumer-facing products like ChatGPT could mark a significant upgrade. It is also possible, however, that OpenAI will use Strawberry primarily as a foundation for training new models rather than making it widely available to consumers.

Project Orion or GPT Next

Project Orion stands as OpenAI's ambitious successor to GPT-4o, aiming to set new standards in language AI. A recent presentation by Tadao Nagasaki, CEO of OpenAI Japan, suggests that it could be named GPT Next. Leveraging advancements from Project Strawberry, Orion is designed to excel in natural language processing while expanding into multimodal capabilities.

And OpenAI claims the leap will not be incremental.

“The upcoming AI model, likely to be called ‘GPT Next,’ will evolve nearly 100 times more than its predecessors, judging by past performance,” Nagasaki said at the KDDI SUMMIT 2024 in Japan, as reported by IT Media. “Unlike traditional software, AI technology grows exponentially. Therefore, we want to support the creation of a world where AI is integrated as soon as possible.”

'GPT Next’ to Achieve 3 OOMs Boost. Great insights from the #KDDISummit. Tadao Nagasaki of @OpenAI Japan unveiled plans for ‘GPT Next,’ promising an Orders of Magnitude (OOMs) leap. ⚡️ This AI model aims for 100x more computational volume than GPT-4, using similar resources but… pic.twitter.com/fMopHeW5ww

— Shaun Ralston (@shaunralston) September 3, 2024

Training Orion on data produced by Strawberry would represent a technical advantage for OpenAI. However, the technique must be used with caution: researchers have already shown that models start to degrade after being trained on too much synthetic data, so finding the sweet spot at which Strawberry's data makes Orion more powerful without hurting its accuracy seems key to OpenAI remaining competitive.
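
The reports do not say how OpenAI would control that balance. One common-sense approach, shown below purely as an illustration, is to cap the share of synthetic examples in the training mix; the 30 percent ceiling here is an arbitrary assumption, not a known OpenAI setting.

```python
# Illustrative only: capping the proportion of synthetic examples in a training
# mix to reduce the risk of degradation ("model collapse"). The 0.3 ceiling is
# an arbitrary assumption, not anything reported about Orion's training.
import random

def build_training_mix(real_examples: list,
                       synthetic_examples: list,
                       max_synthetic_ratio: float = 0.3,
                       seed: int = 0) -> list:
    """Return a shuffled dataset in which synthetic data is at most the given share."""
    rng = random.Random(seed)
    # Solve synth / (real + synth) <= ratio for the allowed synthetic count.
    max_synth = int(len(real_examples) * max_synthetic_ratio / (1 - max_synthetic_ratio))
    synth_sample = rng.sample(synthetic_examples,
                              min(max_synth, len(synthetic_examples)))
    mix = list(real_examples) + synth_sample
    rng.shuffle(mix)
    return mix

# Toy usage with placeholder records.
real = [{"text": f"real example {i}"} for i in range(100)]
synthetic = [{"text": f"synthetic example {i}"} for i in range(500)]
dataset = build_training_mix(real, synthetic)
print(len(dataset), "examples,", len(dataset) - len(real), "of them synthetic")
```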

Orion's native multimodal capabilities will also represent a significant advancement. The model is being developed to seamlessly integrate text, image, and even video inputs and outputs, as reported by The Information, opening up new possibilities for ChatGPT users and putting the company in direct competition against Google’s Gemini—which can process up to 2 hours of video input.

This is the model that users will interact with when they use ChatGPT or OpenAI’s API Playground.

The development of Orion aligns with OpenAI's broader strategy to maintain its competitive edge in an increasingly crowded AI landscape. With open-source models like Meta's LLaMA-3.1 and state-of-the-art proprietary models like Claude and Gemini making rapid progress, Orion is essentially OpenAI's bid to stay ahead of the curve.


Source: decrypt.co
