California's vetoed AI safety bill sparks debates over regulation

A woman experiences AI 3D technology at the exhibition area of French company Dassault Systèmes during the 2023 Consumer Electronics Show (CES) in Las Vegas, the United States, January 8, 2023. (Photo by Zeng Hui/Xinhua)

by Wen Tsui

SACRAMENTO, United States, Oct. 1 (Xinhua) — California Governor Gavin Newsom's recent veto of an artificial intelligence (AI) safety bill has sparked a national debate over how to balance innovation and safety in governing the rapidly evolving technology.

On Sunday, Newsom vetoed SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, saying the bill was not “the best approach to protecting the public from real threats posed by technology.”

In his veto message, the governor said the bill “magnifies” the potential threats and risks “limiting” innovations that drive technological development.

The vetoed bill, introduced by California State Senator Scott Wiener, passed the California legislature with overwhelming support. It would have been one of the first laws in the country to establish mandatory safety protocols for AI developers.

Had it been enshrined in law, it would have held developers liable for serious harm caused by their models. The bill aimed to prevent “catastrophic” harm from AI and would have applied to all large-scale models costing at least $100 million to train, regardless of their potential for harm.

Before training begins, the bill would have required AI developers to publicly disclose the methods used to test a model's likelihood of causing critical harm, as well as the conditions under which the model could be fully shut down.

Violations would have been punishable by the California Attorney General with a civil penalty of up to 10 percent of the cost of the computing power used to train the model for a first violation, and up to 30 percent for each subsequent violation.

This photo taken on April 20, 2024 shows a competition robot designed by two 11th grade students at the AI Robotics Academy, an after-school club, in Plano, Texas, the United States. (Photo by Lin Li/Xinhua)

According to an analysis by Pillsbury Winthrop Shaw Pittman LLP, a law firm specializing in technology, the bill could have a “significant” impact on large AI developers and create “significant testing, governance and reporting hurdles” for these companies.

The bill's broad scope has sparked debate over whose behavior should be regulated – the developers or operators of AI models.

Some in the tech industry are urging lawmakers to focus on the contexts and use cases of AI rather than the technology itself.

Lav Varshney, an associate professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, told technology website VentureBeat that the defeated bill would have unfairly penalized the original developers for the actions of those using the technology.

He argued for “shared responsibility” between the original developers and those who optimize the AI for specific applications.

Many experts expressed concerns about the bill's potential “chilling effect” on open source AI, a collaborative approach to AI development that allows developers to access, modify and share AI technologies.

Andrew Ng, co-founder of Coursera, a US online course provider, praised Newsom's veto as “pro-innovation” in a social media post and said it would protect open source development.

In response to the veto, Anja Manuel, managing director of Aspen Strategy Group, said in a statement that she advocated for “limited pre-deployment testing, focusing only on the largest models.”

She pointed to the lack of “mandatory, independent and rigorous” testing to prevent AI from causing harm, which she called a “glaring gap” in the current approach to AI safety.

Manuel drew parallels with the Food and Drug Administration's regulation of the pharmaceutical industry, arguing that AI, like pharmaceuticals, should be released to the public only after thorough testing for safety and effectiveness.

Following the veto, Governor Newsom outlined alternative measures for AI regulation, calling for a more targeted regulatory framework that addresses specific risks and applications of AI, rather than general rules that could affect even low-risk AI functions.

The outcome of California's AI regulatory efforts is likely to have far-reaching implications, considering California's leadership in technology legislation such as data privacy.

“What happens in California doesn’t just stay in California; it often creates the basis for nationwide standards,” the Pillsbury analysis said.

An artificial intelligence (AI) expert delivers a speech at the San Francisco AI Summit in San Francisco, USA, on September 25, 2019. (Xinhua/Wu Xiaoling)

Given the rapidly evolving regulatory landscape around the world, Pillsbury advises companies, whether they develop or operate AI systems, to build a comprehensive compliance strategy and take a proactive approach.

“Safe and responsible AI is essential to California’s dynamic innovation ecosystem,” said Fei-Fei Li, a professor in the Computer Science Department at Stanford University and co-director of the Human-Centered AI Institute at Stanford. “To effectively manage this powerful technology, we must rely on science to determine how best to promote innovation and mitigate risk.”