
AI Is Evolving Faster Than Experts, Including Bill Gates, Imagined

When Bill Gates — the man who co-founded Microsoft and created the software that helped turn personal computers into an everyday appliance — describes AI as the "biggest technical advancement in my lifetime," it's kind of hard not to stop and say, "Whoa."

Gates, in an ABC-TV interview with Oprah Winfrey earlier this month, shared his thoughts on just how big a deal he expects generative AI systems to be, saying he sees the tech enhancing myriad aspects of society. It'll affect the state of health care, for instance, by serving as the "third person" sitting in on your medical appointments to offer real-time translations and summaries of what health care professionals are telling you. And it'll become an educational assistant that can offer every student a personal tutor "who's always available."   

But the comment from Gates that really got my attention was about how quickly gen AI tools, which were introduced to the world almost two years ago when OpenAI released ChatGPT, have advanced. 

"This is the first technology that is happening faster than even the insiders expected," Gates told Winfrey. Even with all the good that AI might lead to, he added, "I have significant fears about the risks."

He's not the only one. Former Google CEO Eric Schmidt said something similar last year, noting that "people are not going to be able to adapt" to a world with AI.  

As for Gates, he believes the speed of development means companies must work with governments to create regulations aimed at ensuring AI doesn't undermine our economy, among other things. (The United Nations last week also shared its thoughts on AI governance in a new report, called Governing AI for Humanity.)

Gates isn't the only tech notable who thinks government regulation will be needed to mitigate the risks of fast-developing systems. OpenAI CEO Sam Altman, speaking with Winfrey in that same special, also noted that "there's been a pretty steep rate of improvement" in AI systems. His suggestion is that AI makers will need to work with the government "to start figuring out how to do safety testing of these systems ... like we do for aircraft or new medicines or things like that." 

After that's done, Altman said, "We'll have an easier time figuring out the regulatory framework later."

Given that adage about history being doomed to repeat itself — new tech gets introduced (like social media networks); government scrambles to figure out how to regulate it after it causes harms — I wonder whether the conversations that Altman told Winfrey he's now having with people from the government "every few days" should've happened before the launch of ChatGPT.  

Here are the other doings in AI worth your attention.

Oprah talks AI but misses opportunity with OpenAI's Altman

Speaking of that Winfrey special, called AI and the Future of Us and now streaming on Hulu, I said last week that I'd offer up a review. The above takeaways from Altman and Gates aside, I'll just say I was disappointed by what Winfrey didn't ask Altman. 

Specifically, she didn't ask when, or whether, he'll share details about what's in his popular chatbot's training data. Why do we want to know? In part because OpenAI and one of its funders, Microsoft, are being sued by The New York Times for allegedly scraping the NYT's content library, without permission, attribution or compensation, to train the large language model, or LLM, that powers ChatGPT. 

Lawyers and legal scholars say the suit is the "first big test for AI in the copyright space." 

Though OpenAI hasn't said what's in its training data, it has argued that whatever copyrighted content the company has copied from the NYT and other content creators to build its for-profit chatbot would be covered under the "fair use" doctrine.  

I don't know who will prevail in the suit. But given that Winfrey is one of the most powerful content creators on the planet, and given that notable authors, artists and publishers have expressed concerns and filed suits arguing that their intellectual property is being stolen as training data by AI companies, you'd think she might've asked Altman something about it. 

I guess we'll just have to wait for the next special.   

The 'godmother of AI' aims to help you build new worlds 

If you follow AI news regularly, you'll hear mention of the godfathers of AI — computer scientists Yoshua Bengio, Geoffrey Hinton and Yann LeCun, who've made headlines with their thoughts on the risks, opportunities and pace of development of AI. Last week, it was Fei-Fei Li's turn to make news. Li, an AI researcher and Stanford University professor who has also worked at Google, is considered the godmother of AI. And she's launched a new AI company, World Labs, after raising $230 million. 

World Labs says it's building AI models focused on "spatial intelligence," which it describes as the ability to "perceive, generate, and interact with the 3D world."

What does that mean? Longtime tech reporter Steven Levy, writing in Wired, said World Labs' goal is to teach "AI systems deep knowledge of physical reality" so that artists, designers, game developers, movie studios and engineers using those AI engines can all be "world builders." 

World Labs' first product is expected in 2025, just another sign of how fast AI is developing. Optimism about what Li can accomplish is high, with her startup already valued at more than $1 billion.

How much power, water does it take for AI to write a short email? 

We know that computing comes at an environmental price. There are costs to powering and cooling the server farms housing the processors, software, computers, networking gear and other technologies that deliver the internet and online services to us every day.

So what's the environmental cost of a chatbot query? The Washington Post decided to find out, with researchers at the University of California, Riverside. They learned that a single, 100-word email created by a chatbot using OpenAI's GPT-4 model, which powers ChatGPT, requires 519 milliliters of water — a little more than a bottleful. That same email consumes 0.14 kilowatt hours of electricity, or "14 LED light bulbs for 1 hour." 

It's worth reading through their study to see how these costs add up when you consider that, according to the Pew Research Center, about a quarter of Americans have used ChatGPT since its debut.
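To get a feel for how those per-email figures scale, here's a minimal back-of-envelope sketch. The per-email constants come from the Washington Post / UC Riverside estimate cited above; the scenario at the bottom (26 million people each generating one 100-word email per week) is a hypothetical illustration, not a figure from the study.

```python
# Per-email costs reported by the Washington Post / UC Riverside estimate
# for a 100-word email generated with GPT-4.
WATER_ML_PER_EMAIL = 519      # roughly one bottle of water
ENERGY_KWH_PER_EMAIL = 0.14   # enough to run 14 LED bulbs for an hour


def weekly_footprint(users: int, emails_per_user: int) -> tuple[float, float]:
    """Return (liters of water, kWh of electricity) for one week of
    chatbot-written emails across a population of users."""
    emails = users * emails_per_user
    liters = emails * WATER_ML_PER_EMAIL / 1000  # convert mL to L
    kwh = emails * ENERGY_KWH_PER_EMAIL
    return liters, kwh


# Hypothetical scenario: 26 million people (roughly 1 in 10 US adults)
# each ask a chatbot for one 100-word email in a week.
liters, kwh = weekly_footprint(26_000_000, 1)
print(f"{liters:,.0f} L of water, {kwh:,.0f} kWh per week")
```

Even this modest scenario works out to millions of liters of water and millions of kilowatt-hours per week, which is the point the study is making about aggregate cost.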

Public libraries can help counter AI-generated misinformation

The Urban Libraries Council has published a worthwhile brief on how public libraries can tap into their position as community spaces to encourage people to gather face-to-face — not only to help them overcome feelings of social isolation in an increasingly digital world, but also to offer tools and workshops to teach people how to spot the misinformation and disinformation served up by digital platforms. 

"Various studies indicate that misinformation and disinformation are more likely to thrive in societies that are either severely polarized or in communities with low levels of social connectedness," the council wrote in the 10-page brief, entitled The Role of Libraries as Public Spaces in Countering Misinformation, Disinformation, and Social Isolation in the Age of Generative AI.

Among the library programs that've already been successful, the council highlighted the Boston Public Library for hosting a workshop in August aimed at countering misinformation by teaching digital literacy skills and offering tools that help people "identify accurate information on the internet."

FYI, there are more than 123,000 libraries of all types in the US, according to the American Library Association. 

My lessons in AI vocabulary 

Subscribers to the newsletter version of this column get an additional bit of insight from me each week in the form of AI vocabulary terms they should know (you can subscribe at CNET's AI Atlas consumer hub for all things AI).

If you just want a few quick refreshers, though, I've also started creating short TikTok vocab lessons. You can find the one on gen AI, chatbots and LLMs here. And you'll find a super quick recap on hallucinations and training data here. 

The videos have been and will be created and presented entirely by a human.   

Source: cnet.com
