Concerns that artificial intelligence (AI) systems pose serious risks for public safety have caused legislators and other policymakers around the world to propose legislation and other policy initiatives to address those risks. One bold initiative in this vein was the California legislature’s enactment of SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, in late August 2024.
Lobbying for and against SB 1047 was so intense that California’s governor Gavin Newsom observed that the bill “had created its own weather system.” In the end, the governor vetoed the bill for reasons I explain here. After a brief review of the main features of SB 1047, this column points out key differences between SB 1047 and the EU’s AI Act, identifies key supporters and opponents of SB 1047, and discusses arguments for and against this bill. It also explains Governor Newsom’s reasons for vetoing that legislation and considers whether national or state governments should decide what AI regulations are necessary to ensure safe development and deployment of AI technologies.
Key Features of SB 1047
Under SB 1047, developers of very large frontier models (defined as models trained with computing power greater than 10²⁶ integer or floating-point operations or costing more than $100 million at the start of training) and those who fine-tune large frontier models (also measured by compute requirements and/or training costs) would be responsible for ensuring that these models do not cause “critical harms.”
The bill identifies four categories of critical harms:
Creation or use of chemical, biological, radiological, or nuclear weapons causing mass casualties;
Mass casualties or more than $500 million in damage caused by cyberattacks on critical infrastructure;
Mass casualties or more than $500 million in damages from bodily injury or property damage resulting from conduct that would be a crime if committed by a human; and
Other comparably grave harms to public safety and security.
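To make the covered-model threshold concrete, here is a minimal sketch of how the compute-and-cost test described above might be expressed. The constant and function names are hypothetical, and the test paraphrases this column’s description of the bill rather than the statutory text.

```python
# Illustrative sketch only: the names are hypothetical, and the test paraphrases
# this column's description of SB 1047's covered-model threshold.

COMPUTE_THRESHOLD_OPS = 10**26              # integer or floating-point operations used in training
TRAINING_COST_THRESHOLD_USD = 100_000_000   # estimated cost at the start of training

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model meets the size/cost threshold as described above."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            or training_cost_usd > TRAINING_COST_THRESHOLD_USD)

# Example: a model trained with 3e26 operations at an estimated cost of $80 million
print(is_covered_model(3e26, 80_000_000))   # True: the compute threshold is exceeded
```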
Under this bill, developers of large frontier models would be required to take numerous steps at three phases of development: some before training, some before use of such a model or making it available, and some during uses of covered models. Among the required steps would be installing a “kill switch” at the pre-training stage, taking reasonable measures to prevent models from posing unreasonable risks, and publishing redacted copies of the developers’ safety and security protocols. (A “kill switch” would enable humans to stop an AI system from becoming an autonomous actor capable of inflicting critical harms.)
Developers would also be required to hire independent third-party auditors to ensure compliance with the law’s requirements and to submit these audits, along with a statement of compliance, annually to a state agency. They would further be responsible for reporting any safety incident to that agency within 72 hours of learning about it.
The legislation authorized the California Attorney General to file lawsuits against frontier model developers who violated the law’s requirements, seeking penalties of up to 10% of the initial cost of model development for a first violation and up to 30% of that cost for subsequent violations. Whistleblowers who called attention to unreasonable risks that frontier models pose of causing critical harms would be protected against retaliation.
In addition, SB 1047 would authorize establishment of a new California agency to publish implementing guidelines for compliance with the Act. This agency would receive the required audit and compliance reports, oversee model development, and propose amendments as needed (including updates to the compute thresholds).
Comparing SB 1047 to the EU’s AI Act
SB 1047 and the European Union’s AI Act both focus on safety issues posed by advanced AI systems and risks that AI systems could cause substantial societal harms. Both require the development of safety protocols, pre-deployment testing to ensure systems are safe and secure, and reporting requirements, including auditing by independent third parties and compliance reports. Both would impose substantial fines for developers’ failure to comply with the acts’ safety requirements.
There are, however, significant differences between SB 1047 and the EU AI Act. First, SB 1047 focused its safety requirements mainly on the developers of large frontier models rather than on deployers. Second, the California bill also covered those who fine-tune large frontier models, not just initial developers; the AI Act does not address fine-tuning.
Third, SB 1047 would require developers to install a “kill switch” so that the models can be turned off if the risks of critical harms are too great. The EU’s AI Act does not require this. Fourth, the California bill assumed that the largest models are those that pose the most risks for society, whereas the AI Act does not focus on model size. Fifth, SB 1047 was intended to guard against those four specific types of critical harms, whereas the EU’s AI Act has a broader conception of harms and risks that AI developers and deployers should design to avoid.
Proponents of SB 1047
Anthropic was the most prominent of the AI model developers to have endorsed SB 1047. (Its support came after the bill was amended to drop a criminal penalty provision and to substitute a “reasonable care” standard for the original “reasonable assurance” standard governing the duty of care expected of large frontier model developers.) Thirty-seven employees of leading AI developers expressed support for SB 1047 as well.
Yoshua Bengio, Geoff Hinton, Stuart Russell, Bin Yu, and Larry Lessig are among the prominent proponents of SB 1047 as a “bare minimum effective regulation.” They believe that making developers of advanced frontier models responsible for averting critical harms is sound because these developers are in the best position to prevent such harms.
Proponents consider SB 1047 to be “light touch” regulation because it does not try to control design decisions or impose specific protocols on developers. They believe that the public will not be adequately protected if malicious actors are the only persons or entities that society can hold responsible for grave harms.
The AI Policy Institute reported that 65% of Californians support SB 1047 and more than 80% agree that advanced AI system developers should have to embed safety measures in their systems and should be accountable for catastrophic harms. Proponents further believe that SB 1047 will spur significant research and advance the state of the art in the safety and security of AI models.
Without this new regulatory regime, moreover, proponents believe developers who are willing to invest in safety and security will be at a competitive disadvantage to firms that cut corners on safety and security design and testing to get to market faster.
Opponents of SB 1047
Google, Meta, and OpenAI, along with associations of technology companies, as well as Marc Andreessen and Ben Horowitz, opposed SB 1047 in part because it focused on the development of models instead of on harmful uses of such models. These opponents were concerned that the bill would impede innovation and American competitiveness in AI industries.
OpenAI argued that because SB 1047 heavily emphasizes national security harms and risks, it should be for the U.S. Congress, not the California legislature, to regulate AI systems to address these kinds of harms.
Among SB 1047’s opponents are many AI researchers, including notably Professors Fei-Fei Li of Stanford and Jennifer Chayes of UC Berkeley. These researchers are concerned about the bill’s impacts on the availability of advanced open models and weights to which researchers want access and on which they want to build.
San Francisco Mayor London Breed and Congresswomen Nancy Pelosi and Zoe Lofgren were among the other prominent critics of SB 1047. Lofgren, who serves on a House subcommittee focused on science and technology issues, wrote an especially powerful letter to Governor Newsom expressing her reasons for opposing that bill. Among other things, Lofgren said that AI regulations should be based on demonstrated harms (such as deep fakes, misinformation, and discrimination), not hypothetical ones (such as those for which kill switches might be needed).
The science of AI safety, noted Lofgren, is in its very early stages. The technical requirements that SB 1047 would impose on developers of large frontier models are thus premature. While the National Institute of Standards and Technology aims to develop needed safety protocols and testing procedures, these measures are not yet in place. Nor are voluntary industry guidelines yet fully developed.
Lofgren also questioned SB 1047’s “kill switch” requirement. Although this might sound reasonable in theory, such a requirement would undermine the development of ecosystems around open models. She agreed with a report of the National Telecommunications and Information Administration that there is insufficient evidence of heightened risks from open models to justify banning them.
Lofgren also expressed concern about innovation arbitrage. If California regulates AI industries too heavily or in inappropriate ways, it might lose its early leadership in this nascent industry sector. And U.S. competitiveness would be undermined.
Governor Newsom’s Reactions
Governor Gavin Newsom issued a statement explaining his reasons for vetoing SB 1047. He pointed out that California is home to 32 of the world’s 50 leading AI companies and worried that this law would harm innovation in California’s AI industries. Regulation should, he believes, be based on empirical evidence and science.
Newsom questioned whether the cost and amount of computing power needed for AI model training is the right regulatory threshold. He suggested it might be better to evaluate risks based on the ecosystems in which AI systems are deployed or on uses of sensitive data. He warned that the bill’s focus on very large models could give the public a false sense of security because smaller models may be just as dangerous as, or more dangerous than, the ones SB 1047 would regulate. While recognizing the need for AI regulations to protect the public, Newsom observed that the AI technology industry is still in its early stages and that regulations need to be balanced and adaptable as the industry matures.
The governor agreed with SB 1047’s sponsors that it would be unwise to wait for a catastrophe to protect the public from AI risks and that AI firms should be held accountable for harms to which they have contributed. But SB 1047, in his view, was just not the right law at the right time for California.
To demonstrate his commitment to ensuring proper attention to public safety, Governor Newsom appointed an expert committee of thought leaders to advise him further about how California can achieve the delicate policy balance between promoting the growth of AI industries and research communities and protecting the public against unreasonable risks of harm. Joining Fei-Fei Li and Jennifer Chayes on this committee is Tino Cuellar, a former Stanford Law professor and former California Supreme Court Justice who is now President of the Carnegie Endowment for International Peace.
Despite vetoing SB 1047, the governor signed into law 19 other AI-related bills passed by the California legislature this year. Two of them regulate deep fakes, one obliges developers to make disclosures about AI training data, and one requires provenance data for AI-generated outputs.
Conclusion
The sponsors of SB 1047 seem to have carefully listened to and heeded the warnings of some prominent computer scientists who are deeply and sincerely worried about AI systems causing critical harms to humankind. However, there is no consensus among scientists about AI public safety risks.
Concerns that advanced AI systems, such as HAL in 2001: A Space Odyssey, will take over and humans will not be able to stop them because their developers failed to install kill switches seem implausible. Legislation to regulate AI technologies should be based on empirical evidence of actual or imminent harms, not conjecture.
In any event, regulation of AI systems that pose risks of national security harms would optimally be done at the national, not state, level. But the Trump Administration is less likely than the Biden Administration to focus on systemic risks of AI, so maybe the state of California should lead the way in formulating a regulatory regime to address these risks.