California AI bill passes State Assembly, pushing AI fight to Newsom

SAN FRANCISCO — The California State Assembly passed a bill Wednesday that would enact the nation’s strictest regulations on artificial intelligence companies, pushing the fierce fight over how to regulate AI toward Gov. Gavin Newsom’s desk.

The proposed law would require companies working on AI to test their technology for “catastrophic” risks, such as the ability to instruct users in how to conduct cyberattacks or build biological weapons, before selling it. Under the proposed law, if companies fail to conduct the tests and their tech is used to harm people, they could be sued by California’s attorney general. The bill applies only to companies training very large and expensive AI models, and its author, Democratic state Sen. Scott Wiener, has insisted it will not impact smaller startups seeking to compete with Big Tech companies.

The bill, which passed with a vote of 41 to 9, will now return to the state Senate, where it was first introduced, and is expected to quickly pass on to Newsom. That would put the high-profile governor in a position to enact or veto sweeping and contentious tech regulation at a time when prospects for Congress passing federal AI legislation dim as lawmakers focus on the presidential election.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, better known by its bill number “1047,” has deepened a rift in the world of AI researchers, developers and entrepreneurs over how serious the potential risks of the technology are. On one side are AI researchers associated with the effective altruism movement, which advocates for stricter limits on AI development to prevent the technology from being used to conduct cyberattacks or build dangerous weapons, or even from becoming sentient on its own. On the other are AI researchers who say the tech is nowhere near good enough for those dangers to be realistic, and tech founders, executives and investors who say strict government limits could stifle AI innovation or allow other countries to leapfrog the United States when it comes to tech prowess.

Dan Hendrycks, the head of the Center for AI Safety, a think tank that has received funding from effective altruism leader Dustin Moskovitz, helped consult on early versions of the bill and has testified in support of it before the California Legislature. The bill also has support from AI research legends Geoffrey Hinton and Yoshua Bengio, and earlier this week, X owner Elon Musk, who has long warned about the dangers of super-intelligent AI, also threw his support behind the bill.

Industry surrogates have mounted an extensive lobbying campaign against the bill, including the launch of a website that generates letters calling on California policymakers to vote against the legislation. Representatives from tech trade groups, including the Software Alliance, have been meeting with Newsom’s office in recent months to raise concerns about the legislation.

“If it reaches his desk, we will be sending a veto letter,” said Todd O’Boyle, tech policy lead at Chamber of Progress, a center-left group that receives funding from Google, Apple and Amazon.

National politicians have taken the unusual step of weighing in on the fight in Sacramento. Earlier this month, Rep. Nancy Pelosi (D-Calif.) said in a statement that the bill was “well-intentioned but ill informed,” joining other prominent federal politicians from California in opposing it.

Tech companies, including Google and Meta, have written letters opposing the bill, and prominent voices from the AI community have said the bill is shortsighted because it regulates broad AI technology that has potentially unlimited uses, rather than specific harmful applications of AI.

“This proposed law makes a fundamental mistake of regulating AI technology instead of AI applications,” Andrew Ng, an AI startup CEO who previously ran AI teams at Google and Chinese tech giant Baidu, said last month on X. Dozens of other startup founders have also opposed the bill.

Since OpenAI launched ChatGPT in November 2022, setting off a new AI arms race among tech companies, regulators have been studying the issue and debating whether the new tech should be regulated. Executives from the most powerful AI companies have all called for regulation, including OpenAI CEO Sam Altman, who testified before Congress in 2023 and suggested the government form a new agency to regulate AI.

But despite multiple hearings and proposed bills, lawmakers in Washington have not passed any AI-focused legislation. Wiener and other California politicians have said that means they need to step in to fill that gap, and serve as the nation’s de facto tech regulators, as they did when it came to laws on privacy and online safety.

SB 1047 would codify many of the directives in President Joe Biden’s 2023 AI executive order, which former president Donald Trump has promised to revoke and replace if he is re-elected. Critics of the bill are wary of a single state wielding so much power over the future of the nascent technology and have raised concerns that California is overstepping its jurisdiction by pursuing initiatives that would address the national security risks of artificial intelligence.

“It’s another important step,” Wiener said of the Assembly vote. “When it comes to AI, innovation and safety are not mutually exclusive,” he said in an interview.

Complicating the picture further, Wiener is seen by Democratic Party watchers as in the running to contest Pelosi’s congressional seat if she retires.

Wiener has argued for months that the bill won’t criminalize AI development or impose onerous new restrictions that might stifle innovation. The bill seeks to hold tech companies accountable to practices that they’ve already adopted, and is meant to increase public trust in AI at a time when people are generally skeptical of the tech industry, he has said.

Google, Meta, Microsoft, OpenAI and Anthropic all conduct internal testing to see if their AI chatbots display racist and sexist biases, encourage people to harm themselves or spout falsehoods. The companies have also signed onto voluntary commitments, and their executives frequently talk about how it’s important to develop new AI products responsibly.

But the bill has gone too far for many in the AI community by introducing potential legal liabilities for AI companies. According to the proposed law, if AI companies fail to test their tools and they’re used for harmful purposes, the attorney general could bring a civil action against them.

Since the early years of the web, tech companies have been shielded from liability for actions committed on their platforms by users, an important legal framework that tech leaders say has allowed the internet to flourish.

Some AI experts say the new technology should be treated differently, and that companies that make chatbots, image-generators and other AI products should be regulated in the same way automakers or pharmaceutical companies are and penalized if they create tools that harm people.

Source: washingtonpost.com
