Chatbots won’t help anyone make weapons of mass destruction, but other AI systems might

By Simon Osuji
December 6, 2024
in Artificial Intelligence


Image credit: Pixabay/CC0 Public Domain

Over the past two years, we have seen much written about the “promise and peril” of artificial intelligence (AI). Some have suggested AI systems might aid in the construction of chemical or biological weapons.


How realistic are these concerns? As researchers in the field of bioterrorism and health intelligence, we have been trying to separate the genuine risks from the online hype.

The exact implications for “chem bio” weapons are still uncertain. However, it is very clear that regulations are not keeping pace with technological developments.

Assessing the risks

Assessing the risk an AI model presents is not easy. What’s more, there is no consistent and widely followed way to do it.

Take the case of large language models (LLMs). These are the AI engines behind chatbots such as ChatGPT, Claude and Gemini.

In September, OpenAI released an LLM called o1 (nicknamed “Strawberry”). Upon its release, the developers claimed the new system had a “medium” level risk of helping someone create a biological weapon.

This assessment might sound alarming. However, a closer reading of the o1 system card reveals more trivial security risks.

The model might, for example, help an untrained individual navigate a public database of genetic information about viruses more quickly. Such assistance is unlikely to have much material impact on biosecurity.

Despite this, media quickly reported that the new model “meaningfully contributed” to weaponization risks.

Beyond chatbots

When the first wave of LLM chatbots launched in late 2022, there were widely reported fears that these systems could help untrained individuals unleash a pandemic.

However, these chatbots are based on already-existing data and are unlikely to come up with anything genuinely new. They might help a bioterrorism enterprise come up with some ideas and establish an initial direction, but that’s about it.

Rather than chatbots, AI systems with applications in the life sciences are of more genuine concern. Many of these, such as the AlphaFold series, will aid researchers fighting diseases and seeking new therapeutic drugs.

Some systems, however, may have the capacity for misuse. Any AI that is really useful for science is likely to be a double-edged sword: a technology that may have great benefit to humanity, while also posing risks.

AI systems like these are prime examples of what is called “dual-use research of concern.”

Prions and pandemics

Dual-use research of concern in itself is nothing new. People working on biosecurity and nuclear non-proliferation have been worrying about it for a long time. Many tools and techniques in chemistry and synthetic biology could be used for malicious ends.

In the field of protein science, for example, there has been concern for more than a decade that new computational platforms might help in the synthesis of the potentially deadly misfolded proteins called prions, or in the construction of novel toxin weapons. New AI tools such as AlphaFold may bring this scenario closer to reality.

However, while prions and toxins may be deadly to relatively small groups of people, neither can cause a pandemic that could wreak true havoc. In the study of bioterrorism, our main concern is with agents that have pandemic potential.

Historically, bioterrorism planning has focused on Yersinia pestis, the bacterium that causes plague, and variola virus, which causes smallpox.

The main question is whether new AI systems make any tangible difference to an untrained individual or group seeking to obtain pathogens such as these, or to create something from scratch.

Right now, we simply do not know.

Rules to assess and regulate AI systems

Nobody yet has a definitive answer to the question of how to assess the new landscape of AI-powered biological weapons risk. The most advanced planning has been produced by the outgoing Biden administration in the United States, via an executive order on AI development issued in October 2023.

A key provision of the executive order tasks several US agencies with establishing standards to assess the impact new AI systems may have on the proliferation of chemical, biological, radiological or nuclear weapons. Experts often group these together under the heading of “CBRN,” but the new dynamic we call CBRN+AI is still uncertain.

The executive order also established new processes for regulating the hardware and software needed for gene synthesis. This is the machinery for turning the digital ideas produced by an AI system into the physical reality of biological life.

The US Department of Energy is soon due to release guidance on managing biological risks that might be generated by new AI systems. This will provide a pathway for understanding how AI might affect biosecurity in the coming years.

Political pressure

These nascent regulations are already coming under political pressure. The incoming Trump administration in the US has promised to repeal Biden’s executive order on AI, concerned it is based on “radical leftist ideas.” This stance is informed by irrelevant disputes in American identity politics that have no bearing on biosecurity.

While it is imperfect, the executive order is the best blueprint for helping us comprehend how AI will impact proliferation of chemical and biological threats in the coming years. To repeal it would be a great disservice to the US national interest, and global human security at large.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Chatbots won’t help anyone make weapons of mass destruction, but other AI systems might (2024, December 5)
retrieved 5 December 2024
from https://techxplore.com/news/2024-12-chatbots-wont-weapons-mass-destruction.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

