California not ready for heavy-handed AI regulation
The recently quashed California AI Safety Bill may not have made it into law, but that doesn’t mean industry observers aren’t still worried about the implications of a lack of AI safety. The bill, vetoed by Governor Newsom, would have imposed relatively heavy-handed regulations on AI developers, such as mandatory safety testing, the inclusion of kill switches in applications, and requirements for greater oversight of frontier models.
That doesn’t mean the debate over stricter AI guidelines is over. The now-nullified bill has “sparked an essential conversation about how to regulate AI effectively,” said Paul McDonagh-Smith, senior lecturer of IT at MIT Sloan Executive Education. “While ensuring safety is crucial, especially for frontier AI models, there is also a need to strike a balance where AI is a catalyst for innovation without putting our organizations and broader society at risk.”
What’s at stake? “Key core issues in AI safety include unintended consequences and behaviors, bias and fairness, transparency and accountability, and malicious use,” McDonagh-Smith said.
Broad, sweeping regulation is difficult to put into effect from a political standpoint. Narrower, point-by-point regulations may be a path of less resistance as issues are identified. “Regulation of digital replicas and watermarking requirements have the most traction because these are the easiest points of regulation to achieve consensus,” said Monique (Nikki) Bhargava, co-leader of the artificial intelligence practice at global law firm Reed Smith. “This can be seen in laws such as the ELVIS Act [Ensuring Likeness Voice and Image Security Act, signed into law in Tennessee], digital replica laws in Illinois and California, as well as California’s recently passed SB 942 [California AI Transparency Act].”
It may even be too early to start regulating AI, which is still a relatively immature technology. “Many AI technologies still seem to be very much in the innovation phase,” said Andy Thurai, principal analyst with Constellation Research. “Granted, some of the old-school AI and machine-learning technologies are more mature – hence, they have security, governance and even compliance built in. Much of the newer tech, including generative AI, is still trying to find its footing. As such, most vendors are bypassing safety and security measures now, as it is in the experimentation phase still.”
There are fears that imposing too much regulation, especially in the early stages of AI, may stifle innovation in this emerging and promising space. This requires a delicate balancing act. “As we move further forward and deeper into the AI era, we will need to calibrate — and no doubt regularly re-calibrate — AI safety measures,” said McDonagh-Smith. “We need to ensure they are delivering the required results whilst optimizing the innovation and invention quotient of the increasingly powerful AI models being designed and released.”
AI safety should “focus on models and the data used to train them,” said Joel Krooswyk, Federal CTO at GitLab. “Without transparency and disclosure, models become black boxes, raising questions about bias, safety, accuracy, and even secrets. Government agencies should prioritize partnering with vendors that are accountable and transparent about their data management solutions, development processes, and data used to train their AI models.”
The key is to keep AI safe and ethical, while also keeping AI developers and creators free to move forward with the technology. “From ethical leadership and risk management to maintaining oversight and accelerating innovation, human capabilities including curiosity, creativity, critical thinking, collaboration, and consilience are essential in creating and sustaining AI systems that benefit business and society, without compromising safety,” McDonagh-Smith said.
Bhargava expressed concern that the costs and delays imposed by regulations could slow down the pace of innovation. Overregulation may “increase the cost of compliance in a way where smaller entities cannot afford to enter the market due to the administrative cost that may be imposed by AI regulation,” she explained.
“These costs could stem from setting up internal compliance processes, such as registrations, government reporting, as well as complex assessments and audits,” she added. “These costs may be reasonable where risks posed by certain AI systems are high, but where certain systems pose minimal risk the costs can be overly burdensome. Costs can also stem from access to data, particularly from companies that have not generated their own extensive datasets.”
There are also related concerns about the “protracted nature of the AI regulation process, including the supplementing of statutes with additional guidance and rules which could continuously move the goal posts for compliance,” said Bhargava. “Finally, there is concern about the lack of harmonization. Particularly in the US, we are seeing similar but different AI regulation proposals. A patchwork approach will complicate the AI regulation process and increase costs.”
This balancing act – between speed and security – “is a delicate act, often hindered by regulatory burdens,” said Krooswyk. “While automation can streamline compliance, inconsistent regulations across jurisdictions can slow down innovation. To ensure both public safety and technological progress, we must advocate for a unified approach to AI governance that fosters transparency, accountability, and responsible development. Organizations must take a duty-of-care approach to securing code and delivering secure software for end users.”
Ultimately, McDonagh-Smith said, “I think there is a critical paradox at the core of the AI safety debate. This paradox relates to what I see as a requirement for human capabilities to lead the way in determining the safety of today’s, and tomorrow’s, AI models. While it’s true that AI systems are impressively powerful, humans play, and will continue to play, a pivotal role in ensuring their safe and responsible development and application.”
Joe McKendrick