AI should be better understood and managed, new research warns

By Simon Osuji
November 2, 2023
in Artificial Intelligence


Credit: CC0 Public Domain

Artificial intelligence (AI) and algorithms can be, and are being, used to radicalize, polarize, and spread racism and political instability, says a Lancaster University academic.


Joe Burton, Professor of International Security at Lancaster University, argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can themselves contribute to polarization, radicalism and political violence, posing a threat to national security.

Further to this, he says, securitization processes (presenting the technology as an existential threat) have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated.

Professor Burton’s article, “Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence,” is published in Technology in Society.

“AI is often framed as a tool to be used to counter violent extremism,” says Professor Burton. “Here is the other side of the debate.”

The paper looks at how AI has been securitized throughout its history and in media and popular culture depictions, and explores modern examples of AI having polarizing, radicalizing effects that have contributed to political violence.

The article cites the classic film series The Terminator, which depicted a holocaust committed by a 'sophisticated and malignant' artificial intelligence, as doing more than anything else to frame popular awareness of AI and the fear that machine consciousness could lead to devastating consequences for humanity, in this case a nuclear war and a deliberate attempt to exterminate a species.

“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” writes Professor Burton.

Sophisticated drones, such as those being used in the war in Ukraine, are, says Professor Burton, now capable of full autonomy, including functions such as target identification and recognition.

And while there has been a broad and influential campaign, including debate at the UN, to ban 'killer robots' and to keep a human in the loop in life-or-death decision-making, the acceleration of AI and its integration into armed drones has, he says, continued apace.

In cyber security (the security of computers and computer networks), AI is being used in a major way, with the most prevalent area being (dis)information and online psychological warfare.

The actions of Putin’s government against US electoral processes in 2016 and the ensuing Cambridge Analytica scandal showed the potential for AI to be combined with big data (including social media) to create political effects centered on polarization, the encouragement of radical beliefs and the manipulation of identity groups. Together, they demonstrated the power and the potential of AI to divide societies.

And during the pandemic, AI was seen as a positive tool for tracking and tracing the virus, but it also led to concerns over privacy and human rights.

The article examines AI technology itself, arguing that problems exist in the design of AI, in the data it relies on, in how it is used, and in its outcomes and impacts.

The paper concludes with a strong message to researchers working in cyber security and International Relations.

“AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed,” writes Professor Burton, an expert in cyber conflict and emerging technologies who is part of the University’s Security and Protection Science initiative.

“Understanding the divisive effects of the technology at all stages of its development and use is clearly vital.”

“Scholars working in cyber security and International Relations have an opportunity to build these factors into the emerging AI research agenda and avoid treating AI as a politically neutral technology.”

“In other words, the security of AI systems, and how they are used in international, geopolitical struggles, should not override concerns about their social effects.”

More information:
Joe Burton, "Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence," Technology in Society (2023). DOI: 10.1016/j.techsoc.2023.102262

Provided by
Lancaster University

Citation:
AI should be better understood and managed, new research warns (2023, November 2)
retrieved 2 November 2023
from https://techxplore.com/news/2023-11-ai-understood.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




