
Nvidia works with Accenture to pioneer custom Llama large language models

Accenture Plc on Tuesday announced the launch of the Accenture AI Refinery framework, built on Nvidia Corp.’s new AI Foundry service. The offering is designed to help clients build custom large language models using Llama 3.1, letting enterprises refine and personalize those models with their own data and processes to create domain-specific generative AI solutions.

The generative AI journey to Nvidia AI Foundry

In a briefing, Kari Briski, vice president of AI software at Nvidia, said she’s often asked about the buzz surrounding generative AI.

“It’s been a journey,” she said. “Yes, generative AI has been a big investment. And enterprises ask, ‘Why should we do it? What are the use cases?’ When you think about employee productivity, have you ever wished that you had more hours in the day? I know that I do. Maybe if there were 10 of you, you could get more things done. And that’s what generative AI helps — automate repetitive, mundane tasks, things like summarization, best practices and next steps.”

AI Foundry: a comprehensive infrastructure

“Nvidia AI Foundry is a service that enables enterprises to use accelerated computing and software tools combined with our expertise to create and deploy custom models that can be supercharged for enterprises’ generative AI applications,” Briski said.

The AI Foundry platform offers an infrastructure for developing and deploying custom AI models. It includes:

  • Foundation models: A suite of Nvidia and community models, including Llama 3.1.
  • Accelerated computing: DGX Cloud provides scalable compute resources essential for large-scale AI projects.
  • Expert support: Nvidia AI Enterprise experts assist in the development, fine-tuning and deployment of AI models.
  • Partner ecosystem: Collaborations with partners like Accenture offer consulting services and solutions for AI-driven transformation projects.

Briski said that once companies customize the model, they must evaluate it. This is where some customers get stuck, she noted. She recounted some of the things she’s heard from customers: “‘How well is my model doing? I just customized it. Is it doing the things that I need?’ So the NeMo customers are offered many ways to evaluate: with academic benchmarks, you can upload your own custom evaluation benchmarks, you can connect to a third-party ecosystem of human evaluators, and then you can also use an LLM as a judge.”
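To make the LLM-as-a-judge option concrete, here is a minimal sketch of the pattern: a judge model scores a customized model’s answers against a simple rubric. The endpoint, judge model name and scoring prompt below are illustrative assumptions, not the NeMo evaluation API.

```python
# Minimal LLM-as-a-judge sketch. The endpoint, model name and rubric are
# illustrative placeholders, not the NeMo evaluation interface.
from openai import OpenAI

# Any OpenAI-compatible endpoint works here; a locally hosted judge model is assumed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

JUDGE_PROMPT = """You are grading a customized model's answer.
Question: {question}
Answer: {answer}
Reply with a single integer from 1 (poor) to 5 (excellent)."""

def judge(question: str, answer: str) -> int:
    """Ask the judge model to score one question/answer pair."""
    response = client.chat.completions.create(
        model="meta/llama-3.1-70b-instruct",  # placeholder judge model
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0.0,
    )
    return int(response.choices[0].message.content.strip())

# Example: score a small custom evaluation set.
eval_set = [("What does NIM stand for?", "Nvidia Inference Microservice")]
scores = [judge(q, a) for q, a in eval_set]
print(f"mean score: {sum(scores) / len(scores):.2f}")
```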

Industry adoption

As Briski indicated in the briefing, several companies are already using AI Foundry, including Amdocs, Capital One and ServiceNow. According to Nvidia, these three are integrating AI Foundry into their workflows and have gained a competitive edge by developing custom models that incorporate industry-specific knowledge.

The advantages of Nvidia NIM

Nvidia’s NIM has some unique advantages that Briski discussed.

“NIM is a customized model and container accessed by a standard API,” she explained. “And this is the culmination of years of work and research that we’ve done.” She said she has been at Nvidia for eight years and that the company has been working on NIM for at least that long.

“It’s on a cloud-native stack, it runs out-of-the-box on any GPU,” she said. “That’s across our 100 million-plus installed base of Nvidia GPUs. Once you have NIM, you can customize and add models very quickly.”

She added that NIM now supports Llama 3.1, including Llama 3.1 8B NIM (a single-GPU LLM), Llama 3.1 70B NIM (for high-accuracy generation) and Llama 3.1 405B NIM (for synthetic data generation).
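Because each NIM is a container fronted by a standard API, calling a deployed one looks much like any other HTTP inference service. The sketch below assumes a Llama 3.1 8B NIM already running locally; the host, port and model identifier are typical defaults for such a deployment, not values confirmed in the article.

```python
# Calling a locally deployed Llama 3.1 8B NIM over its standard HTTP API.
# The host/port and model identifier are assumptions for a typical single-GPU setup.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # OpenAI-compatible endpoint
    json={
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Summarize our Q3 support tickets."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```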

Deploying custom LLMs

In addition, Accenture announced it worked with Nvidia on the AI Refinery framework, which runs on AI Foundry. Accenture said the framework advances the field of generative AI for enterprises. Integrated with Accenture’s foundation model services, it promises to help businesses develop and deploy custom LLMs tailored to their requirements. According to both companies, the framework includes four key elements:

  • Domain model customization and training: This lets enterprises refine LLMs using their own data and processes, enhancing the relevance and value of the models for specific business needs. The customization runs on AI Foundry, which should result in robust and efficient model training.
  • Switchboard Platform: This enables users to select and combine models based on specific business contexts or criteria such as cost and accuracy.
  • Enterprise Cognitive Brain: This component scans and vectorizes corporate data and knowledge, creating an enterprise-wide index that enhances the capabilities of generative AI systems (a minimal sketch of that vectorize-and-index step follows this list).
  • Agentic architecture: Designed to enable AI systems to operate autonomously, this architecture supports responsible AI behavior with minimal human oversight.

Strategic importance and impact

Accenture’s AI Refinery framework has an opportunity to change enterprise functions, starting with marketing and expanding to other areas. The ability to quickly create and deploy generative AI applications tailored to specific business needs underscores Accenture’s commitment to innovation and transformation. By applying the framework internally before offering it to clients, Accenture is signaling how much potential it sees in the approach.

Reinventing enterprises

In the announcement, Julie Sweet, chair and chief executive officer of Accenture, highlighted the transformative potential of generative AI in reinventing enterprises. She emphasized the importance of deploying applications powered by custom models to meet business priorities and drive industry-wide innovation.

In addition, Jensen Huang, founder and CEO of Nvidia, noted that Accenture’s AI Refinery would provide the necessary expertise and resources to help businesses create custom Llama LLMs.

Some final thoughts

Accenture’s launch of the AI Refinery framework could be pivotal in the adoption and deployment of generative AI in enterprises. By employing the Llama 3.1 models, which Briski applauded in the briefing, and the capabilities of AI Foundry, Accenture enables businesses to create highly customized and effective AI solutions.

As enterprises continue to explore the potential of generative AI, frameworks such as Accenture’s AI Refinery will play a crucial role in turning that exploration into deployed, domain-specific solutions.

The collaboration between Accenture and Nvidia promises to drive further advancements in AI technology, offering businesses avenues for growth and innovation. It also underscores that all AI roads lead to Nvidia.

 Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Source: siliconangle.com
