On Wednesday, California lawmakers passed a bill that aims to prevent catastrophic damage caused by artificial intelligence software. The legislation, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, requires certain AI companies doing business in California to adopt safety evaluations and other measures meant to avert massive loss of life or damages surpassing US $500 million.
Most of the world’s largest AI companies are based in the Golden State, and nearly all that aren’t will still do business there. As a result, the bill will have far-reaching—perhaps even global—effects if Governor Gavin Newsom signs it into law in the coming weeks. Having passed both the state assembly and the state senate, the bill now awaits his signature.
The bill was hotly debated, passing only after nine rounds of amendments produced by back-and-forth between lawmakers and the AI industry. It also prompted disagreement within the industry itself, with some companies backing the bill, even if hesitantly, and others saying it would stifle innovation and deter smaller companies and investors from developing AI products. Open source advocates also expressed concern that the bill would put onerous requirements on those who publish AI models for others to build on freely.
Yoshua Bengio, a computer scientist and a so-called “godfather of AI,” said the potential for both extremely good and extremely bad consequences calls for balance. Speaking at a press conference held Monday by California State Senator Scott Wiener, the bill’s sponsor, Bengio said foreseeable risks call for action.
“We should aim for a path that enables innovation but also keeps us safe in the plausible scenarios identified by scientists,” Bengio, who supports the bill, said.
Regulating technology with unknown power
At stake is the future of a technology with revolutionary potential. As programmers create software that replicates aspects of human intelligence, the potential to automate and significantly speed up tasks that require advanced cognition grows.
The possibilities inherent in AI mean that governments should adopt a “moonshot mentality” in supporting the tech’s development, Fei-Fei Li wrote in an essay for Fortune. Li—a computer scientist often referred to as a “godmother of AI”—also wrote that an earlier version of the bill faltered by holding the original developer of AI software liable for misuse by a third party (the bill also holds the third party accountable). Following Li’s remarks, Wiener went through multiple rounds of amendments that aimed to lessen the burdens on original programmers.
The consequences of AI for business, military, and government sectors are difficult to predict, but both boosters and concerned watchdogs agree that the widespread use of the technology will be transformative.
Concerns over AI include doomsday scenarios like the creation of a biological weapon, as well as the amplification of more mundane horrors like identity theft (think of hackers getting much faster at stealing and selling your personal information). Then there’s the specter of human biases becoming supercharged in software programs that approve mortgages, offer job interviews, or decide whether someone charged with a crime should receive bail.
Wednesday’s bill looks to cap the most catastrophic outcomes from AI models trained with more computational power than any current model and at a cost of more than US $100 million. It allows the California attorney general to seek a court injunction against companies offering software that doesn’t meet the bill’s safety requirements, and allows the office to sue if the AI leads to large numbers of deaths or to cyberattacks on infrastructure causing US $500 million or more in damages.
Why California’s law affects the entire AI industry
As a state that often puts itself at the forefront of emerging policy issues, California is in a unique position to put guardrails on AI. Its laws have a history of influencing regulations throughout the United States, sometimes by serving as a proof of concept, but also by defining how companies must operate if they want to do business in the state.
For example, egg farmers anywhere in the world must keep their chickens in cage-free systems if they want to sell their products to California’s market of more than 39 million consumers. In the tech realm, companies must give California residents a certain level of control over their personal data. Many firms said they’d extend those rights to all U.S. users when the privacy regulations went into effect, because it’s costly and complicated to offer users two different levels of control depending on where they live. It’s also not always possible to know whether a user is a California resident logging in from somewhere else.
Some lawmakers, including Rep. Nancy Pelosi, joined AI companies in calling for a federal solution, fearing that a state-by-state approach would create a complicated patchwork of regulation. But State Senator Wiener said the state has an imperative to act. With no regulations coming out of the U.S. Congress, it’s up to California, he said, to turn voluntary commitments by AI companies into legal requirements.
Wiener said in a press conference Monday that the risks presented by AI require action. “We should try to get ahead of those risks,” he said, “instead of playing catch up.”
Open source concerns
Some advocates for the open source community say the bill threatens to discourage programmers from openly releasing AI software, despite amendments meant to address their concerns. Ben Brooks, an incoming fellow at the Berkman Klein Center for Internet & Society, said he’s concerned that the updated bill still requires original programmers to track what their models do once in the hands of other users.
These requirements, he says, are “simply not compatible with the open release of this technology.”
Wiener has argued that the bill’s amendments keep enforcement focused on the user of a given AI model.
Geoffrey Hinton, another so-called godparent of AI, said in a statement Wednesday that the bill balances critics’ concerns with the need to protect humanity from misuse.
“I am still passionate about the potential for AI to save lives through improvements in science and medicine,” he said, “but it’s critical that we have legislation with real teeth to address the risks.”