New algorithm promises to slash AI power consumption by 95 percent


A hot potato: As more companies jump on the AI bandwagon, the energy consumption of AI models is becoming an urgent concern. While the most prominent players – Nvidia, Microsoft, and OpenAI – have downplayed the situation, one company claims it has come up with the solution.

Researchers at BitEnergy AI have developed a technique that could dramatically reduce AI power consumption without meaningfully sacrificing accuracy or speed. The study claims the method could cut energy usage by up to 95 percent. The team calls the breakthrough Linear-Complexity Multiplication, or L-Mul for short. It replaces the floating-point multiplications that dominate AI workloads with integer additions, which require far less energy and fewer operations.

Floating-point numbers are used extensively in AI computations to handle very large and very small values. They work like scientific notation in binary form, letting AI systems execute complex calculations precisely. However, that precision comes at a cost.
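To make the "scientific notation in binary" analogy concrete, here is a minimal Python sketch (an illustration, not code from the study) that pulls apart an IEEE-754 single-precision float into its sign, exponent, and mantissa fields and reassembles the value:

```python
import struct

# Reinterpret the float 6.25 as its raw 32-bit pattern.
bits = struct.unpack("<I", struct.pack("<f", 6.25))[0]

sign = bits >> 31                 # 1 bit
exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF        # 23 bits, fraction of an implicit 1.x

# Reconstruct: (-1)^sign * 1.mantissa * 2^(exponent - 127)
value = (-1) ** sign * (1 + mantissa / 2**23) * 2 ** (exponent - 127)
print(value)  # 6.25, i.e. 1.5625 * 2^2
```

Multiplying two such numbers means adding the exponents but multiplying the mantissas, and it is that mantissa multiplication that burns most of the energy.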

The growing energy demands of the AI boom have reached a concerning level, with some models requiring vast amounts of electricity. For example, ChatGPT uses electricity equivalent to 18,000 US homes (564 MWh daily). Analysts at the Cambridge Centre for Alternative Finance estimate that the AI industry could consume between 85 and 134 TWh annually by 2027.

The L-Mul algorithm addresses this energy drain by approximating costly floating-point multiplications with simpler integer additions. In testing, AI models maintained accuracy while cutting energy consumption by 95 percent for element-wise tensor multiplications and 80 percent for dot products.
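The flavor of the trick can be shown with a close relative of L-Mul: Mitchell's classic logarithm-based approximation. Because a float's bit pattern is roughly a fixed-point logarithm of its value, adding the raw bit patterns of two positive floats (and subtracting the exponent bias once) approximates their product with a single integer addition. This is a simplified Python sketch of that idea, not the paper's exact algorithm:

```python
import struct

BIAS = 127 << 23  # float32 exponent bias, shifted into the exponent field

def f2i(x: float) -> int:
    """Reinterpret a float32 as its raw 32-bit integer pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def i2f(n: int) -> float:
    """Reinterpret a 32-bit integer pattern as a float32."""
    return struct.unpack("<f", struct.pack("<I", n & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    # Adding bit patterns adds the exponents exactly and the
    # mantissas approximately (log domain); subtracting BIAS
    # removes the doubled exponent offset.
    # Assumes positive, normal floats.
    return i2f(f2i(a) + f2i(b) - BIAS)

print(approx_mul(3.0, 5.0))  # 14.0 -- exact answer is 15.0
print(approx_mul(2.0, 4.0))  # 8.0  -- exact for powers of two
```

The approximation is within about 11 percent of the true product at worst and exact for powers of two; L-Mul refines this style of addition-only multiplication to keep the error small enough for neural-network workloads.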

The L-Mul technique also holds up on precision. The algorithm exceeds current 8-bit floating-point standards, achieving higher precision with fewer bit-level operations. Tests covering various AI tasks, including natural language processing and machine vision, showed an average performance decrease of only 0.07 percent, a small tradeoff weighed against the energy savings.

Transformer-based models, like GPT, stand to benefit the most from L-Mul, as the algorithm integrates seamlessly into the attention mechanism, a crucial yet energy-intensive component of these systems. Tests on popular AI models, such as Llama and Mistral, even showed improved accuracy on some tasks. However, there is good news and bad news.

The bad news is that L-Mul currently requires specialized hardware, as contemporary AI accelerators are not optimized to take advantage of the technique. The good news is that specialized hardware and programming APIs are already in development, paving the way for more energy-efficient AI within a reasonable timeframe.

The other obstacle is the genuine possibility that companies, notably Nvidia, could hamper adoption. The GPU manufacturer has built a reputation as the go-to hardware developer for AI applications, and it is doubtful it will cede ground to more energy-efficient alternatives while it holds the lion's share of the market.

For those who live for complex mathematical solutions, a preprint version of the study is available on arXiv.

Source: techspot.com
