
AI firms and civil society groups plead for federal AI law

More than 60 commercial orgs, non-profits, and academic institutions have asked Congress to pass legislation authorizing the creation of the US AI Safety Institute within the National Institute of Standards and Technology (NIST).

Bills introduced previously in the US Senate and the House of Representatives – S 4178, the Future of AI Innovation Act, and HR 9497, the AI Advancement and Reliability Act – call for a NIST-run AI center focused on research, standards development, and public-private partnerships to advance artificial intelligence technology.

The Senate bill, backed by senators Maria Cantwell (D-Wash.), Todd Young (R-Ind.), John Hickenlooper (D-Colo.), Marsha Blackburn (R-Tenn.), Ben Ray Luján (D-N.M.), Roger Wicker (R-Miss.) and Kyrsten Sinema (I-Ariz.), would formally establish the US AI Safety Institute, which already operates within NIST.

The House bill, sponsored by Jay Obernolte (R-CA-23), Ted Lieu (D-CA-36), and Frank Lucas (R-OK-3), describes the NIST-based group as the Center for AI Advancement and Reliability.

If approved by both legislative bodies, the two bills would be reconciled into a single piece of legislation for President Biden to sign. Whether this might happen at a time of historic Congressional inaction, amid a particularly consequential election cycle, is anyone's guess.

The "Do Nothing" 118th Congress, which commenced on January 3, 2023 and will conclude on January 3, 2025, has been exceptionally unproductive – enacting just 320 pieces of legislation to date compared to an average of about 782. That's the smallest number of laws enacted in the past 50 years, which is as far as the records go at GovTrack.us.

Undaunted, the aforementioned coalition, led by Americans for Responsible Innovation (ARI) and the Information Technology Industry Council (ITI), published an open letter [PDF] on Tuesday urging lawmakers to support NIST's efforts to address AI safety for the sake of national security and competitiveness.


"As other governments quickly move ahead, Members of Congress can ensure that the US does not get left behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing US AI innovation and adoption," declared ITI president and CEO Jason Oxman in a statement. "We urge Congress to heed today's call to action from industry, civil society, and academia to pass necessary bipartisan legislation before the end of the year."

Signatories of the letter include: AI-focused platform providers like Amazon, Google, and Microsoft; defense contractors like Lockheed Martin and Palantir; model makers like Anthropic and OpenAI; advocacy groups like Public Knowledge; and academic institutions like Carnegie Mellon University.

So this call to action has more to do with national policy goals and frameworks for assessing AI systems than with creating enforceable limits. That sets it apart from California's SB 1047, which met resistance from the tech industry and was vetoed by state governor Gavin Newsom last month over concerns about the bill's effect on the state economy.

Both the Senate and House bills call for the formulation of voluntary best practices. That sets them apart from SB 1047, which envisioned enforceable obligations to promote AI safety.

California senator Scott Wiener, who introduced SB 1047, responded to Newsom's veto by saying, "While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way."

In the 70 days remaining in 2024, perhaps lawmakers will find a way to unite and pass federal AI legislation that tech companies themselves have endorsed. But probably not. ®

Source: theregister.com
