California’s governor blocked landmark AI safety laws. Here’s why it’s such a key ruling for the future of AI worldwide

By Simon Osuji
October 23, 2024
in Artificial Intelligence

Credit: CC0 Public Domain

In a world where artificial intelligence is rapidly shaping the future, California has found itself at a critical juncture. The US state’s governor, Gavin Newsom, recently blocked a key AI safety bill aimed at tightening regulations on generative AI development.


The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) was seen by many as a necessary safeguard on the technology’s development. Generative AI covers systems that produce new content in text, video, images and music—often in response to a user’s questions, or “prompts”.

But Newsom said the bill risked “curtailing the very innovation that fuels advancement in favor of the public good”. While agreeing the public needs to be protected from threats posed by the technology, he argued that SB 1047 was not “the best approach”.

What happens in California is so important because it is the home of Silicon Valley. Of the world’s top 50 AI companies, 32 are currently headquartered within the state. California’s legislature therefore has a unique role in efforts to ensure the safety of AI-based technology.

But Newsom’s decision also reflects a deeper question: can innovation and safety truly coexist, or do we have to sacrifice one to advance the other?

California’s tech industry contributes billions of dollars to the state’s economy and generates thousands of jobs. Newsom, along with prominent tech investors such as Marc Andreessen, believes too many regulations could slow down AI’s growth. Andreessen praised the veto, saying it supports “economic growth and freedom” over excessive caution.

However, rapidly advancing AI technologies could bring serious risks, from spreading disinformation to enabling sophisticated cyberattacks that could harm society. One of the significant challenges is understanding just how powerful today’s AI systems have become.

Generative AI models, like OpenAI’s GPT-4, are capable of complex reasoning and can produce human-like text. AI can also create incredibly realistic fake images and videos, known as deepfakes, which have the potential to undermine trust in the media and disrupt elections. For example, deepfake videos of public figures could be used to spread disinformation, leading to confusion and mistrust.

AI-generated misinformation could also be used to manipulate financial markets or incite social unrest. The unsettling part is that no one knows exactly what’s coming next. These technologies open doors for innovation—but without proper regulation, AI tools could be misused in ways that are difficult to predict or control.

Traditional methods of testing and regulating software fall short when it comes to generative AI tools that can create artificial images or video. These systems evolve in ways that even their creators can’t fully anticipate, especially after being trained on vast amounts of data from interactions with millions of people, such as ChatGPT.

SB 1047 sought to address this concern by requiring companies to implement “kill switches” in their AI software that can deactivate the technology in the event of a problem. The law would also have required them to create detailed safety plans for any AI project with a budget over US$100 million (£77.2m).
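The bill did not prescribe how such a “kill switch” should be built, but the basic idea is a deactivation mechanism that operators can trigger to halt a model immediately. As a rough illustration only—the class and function names below are hypothetical, not from SB 1047 or any real serving framework—the pattern might look like gating every generation request on a shared flag:

```python
# Hypothetical sketch of a "kill switch" pattern: model serving is gated
# on a flag that operators can flip to refuse all further generation.
# KillSwitch and ModelServer are illustrative names, not a real API.
import threading


class KillSwitch:
    def __init__(self):
        self._halted = threading.Event()  # thread-safe on/off flag

    def activate(self):
        # Operators trip the switch when a problem is detected.
        self._halted.set()

    @property
    def halted(self):
        return self._halted.is_set()


class ModelServer:
    def __init__(self, switch):
        self.switch = switch

    def generate(self, prompt):
        # Every request checks the switch before doing any work.
        if self.switch.halted:
            raise RuntimeError("Model deactivated by kill switch")
        # Placeholder for the real model call.
        return f"response to: {prompt}"


switch = KillSwitch()
server = ModelServer(switch)
server.generate("hello")  # served normally while the switch is off
switch.activate()
# From here on, generate() raises instead of producing output.
```

In practice a real deactivation mechanism would have to reach training runs and deployed copies too, which is part of why the bill’s critics and supporters disagreed over how workable the requirement was.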

Critics said the bill was too broad, meaning it could affect even lower-risk projects. But its main goal was to set up basic protections in an industry that’s arguably moving faster than lawmakers can keep up with.

California as a global leader

What California decides could affect the world. As a global tech leader, the state’s approach to regulating AI could set a standard for other countries, as it has done in the past. For example, California’s leadership in setting stringent vehicle emissions standards, its data privacy protections under the California Consumer Privacy Act (CCPA), and its early regulation of self-driving cars have influenced other states and countries to adopt similar measures.

But by vetoing SB 1047, California may have sent a message that it’s not ready to lead the way in AI regulation. This could leave room for other countries to step in—countries that may not care as much as the US about ethics and public safety.

Tesla’s CEO, Elon Musk, had cautiously supported the bill, acknowledging that while it was a “tough call”, it was probably a good idea. His stance shows that even tech insiders recognize the risks AI poses. This might be a sign the industry is ready to work with policymakers on how best to regulate this new breed of technology.

The notion that regulation automatically stifles innovation is misleading. Effective laws can create a framework that not only protects people, but allows AI to grow sustainably. For example, regulations can help ensure that AI systems are developed responsibly, with considerations for privacy, fairness and transparency. This can build public trust, which is essential for the widespread adoption of AI technologies.

The future of AI doesn’t have to be a choice between innovation and safety. By implementing reasonable safeguards, we can unlock the full potential of AI while keeping society safe. Public engagement is crucial in this process. People need to be informed about AI’s capabilities and risks to participate in shaping policies that reflect society’s values.

The stakes are high and AI is advancing rapidly. It’s time for proactive action to ensure we reap the benefits of AI without compromising our safety. But California’s killing of the AI bill also raises a wider question about the increasing power and influence of tech companies, given that their objections helped lead to its veto.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
California’s governor blocked landmark AI safety laws. Here’s why it’s such a key ruling for the future of AI worldwide (2024, October 23)
retrieved 23 October 2024
from https://techxplore.com/news/2024-10-california-governor-blocked-landmark-ai.html

