Can speed and safety truly coexist in the AI race?

By Simon Osuji
July 18, 2025 | Artificial Intelligence


A public criticism of a rival's AI safety practices, levelled by an OpenAI researcher, opened a window onto the industry's real struggle: a battle against itself.

It started with a warning from Boaz Barak, a Harvard professor currently on leave and working on safety at OpenAI. He called the launch of xAI’s Grok model “completely irresponsible,” not because of its headline-grabbing antics, but because of what was missing: a public system card, detailed safety evaluations, the basic artefacts of transparency that have become the fragile norm.

It was a clear and necessary call. But a candid reflection from ex-OpenAI engineer Calvin French-Owen, posted just three weeks after he left the company, shows us the other half of the story.

French-Owen’s account suggests a large number of people at OpenAI are indeed working on safety, focusing on very real threats like hate speech, bio-weapons, and self-harm. Yet he delivers the crucial insight: “Most of the work which is done isn’t published,” he wrote, adding that OpenAI “really should do more to get it out there.”

Here, the simple narrative of a good actor scolding a bad one collapses. In its place, we see the real, industry-wide dilemma laid bare. The whole AI industry is caught in the ‘Safety-Velocity Paradox,’ a deep, structural conflict between the need to move at breakneck speed to compete and the moral need to move with caution to keep us safe.

French-Owen suggests that OpenAI is in a state of controlled chaos, having tripled its headcount to over 3,000 in a single year, where “everything breaks when you scale that quickly.” This chaotic energy is channelled by the immense pressure of a “three-horse race” to AGI against Google and Anthropic. The result is a culture of incredible speed, but also one of secrecy.

Consider the creation of Codex, OpenAI’s coding agent. French-Owen calls the project a “mad-dash sprint,” where a small team built a revolutionary product from scratch in just seven weeks.

This is a textbook example of velocity. French-Owen describes working until midnight most nights and through weekends to make it happen; that is the human cost of such speed. In an environment moving this fast, is it any wonder that the slow, methodical work of publishing AI safety research feels like a distraction from the race?

This paradox isn’t born of malice, but of a set of powerful, interlocking forces.

There is the obvious competitive pressure to be first. There is also the cultural DNA of these labs, which began as loose groups of “scientists and tinkerers” and still prize breakthroughs over methodical processes. And there is a simple problem of measurement: it is easy to quantify speed and performance, but exceptionally difficult to quantify a disaster that was successfully prevented.

In today’s boardrooms, the visible metrics of velocity will almost always shout louder than the invisible successes of safety. Moving forward, however, cannot be about pointing fingers; it must be about changing the fundamental rules of the game.

We need to redefine what it means to ship a product, making the publication of a safety case as integral as the code itself. We need industry-wide standards that prevent any single company from being competitively punished for its diligence, turning safety from a feature into a shared, non-negotiable foundation.

Most of all, we need to cultivate a culture within AI labs where every engineer – not just the safety department – feels a sense of responsibility.

The race to create AGI is not about who gets there first; it is about how we arrive. The true winner will not be the company that is merely the fastest, but the one that proves to a watching world that ambition and responsibility can, and must, move forward together.

(Photo by Olu Olamigoke Jr.)

See also: Military AI contracts awarded to Anthropic, OpenAI, Google, and xAI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.