Meta now allows military agencies to access its AI software. It poses a moral dilemma for everybody who uses it

by Simon Osuji
November 12, 2024
in Artificial Intelligence


Credit: Pixabay/CC0 Public Domain

Meta will make its generative artificial intelligence (AI) models available to the United States’ government, the tech giant has announced, in a controversial move that raises a moral dilemma for everyone who uses the software.

Meta last week revealed it would make the models, known as Llama, available to government agencies, “including those that are working on defense and national security applications, and private sector partners supporting their work.”

The decision appears to contravene Meta’s own policy which lists a range of prohibited uses for Llama, including “[m]ilitary, warfare, nuclear industries or applications” as well as espionage, terrorism, human trafficking and exploitation or harm to children.

Meta’s exception also reportedly applies to similar national security agencies in the United Kingdom, Canada, Australia and New Zealand. It came just three days after Reuters revealed that China had reworked Llama for its own military purposes.

The situation highlights the increasing fragility of open source AI software. It also means users of Facebook, Instagram, WhatsApp and Messenger—some versions of which use Llama—may inadvertently be contributing to military programs around the world.

What is Llama?

Llama is a collection of large language models—similar to the models behind ChatGPT—and large multimodal models that handle data other than text, such as audio and images.

Meta, the parent company of Facebook, released Llama in response to OpenAI’s ChatGPT. The key difference between the two is that all Llama models are marketed as open source and free to use. This means anyone can download the source code of a Llama model, and run and modify it themselves (if they have the right hardware). On the other hand, ChatGPT can only be accessed via OpenAI.
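
To make this concrete, the snippet below is a minimal sketch of what “downloading and running” a Llama model typically looks like, using the Hugging Face transformers library. The specific model ID, prompt and generation settings are illustrative assumptions, and Meta’s licence must be accepted on Hugging Face before the weights can be downloaded.

```python
# Minimal sketch: running a Llama model locally with Hugging Face transformers.
# The model ID below is an assumed example; access to Meta's weights is gated
# behind acceptance of Meta's Llama licence on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumption, for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto"  # requires the accelerate package
)

prompt = "Explain what an open source AI model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; sampling parameters are arbitrary defaults.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights run on local hardware, anyone with suitable equipment—including, now, defense agencies—can modify or fine-tune them for their own purposes.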

The Open Source Initiative, an authority that defines open source software, recently released a standard setting out what open source AI should entail. The standard outlines “four freedoms” an AI model must grant in order to be classified as open source:

  • use the system for any purpose and without having to ask for permission
  • study how the system works and inspect its components
  • modify the system for any purpose, including to change its output
  • share the system for others to use with or without modifications, for any purpose.

Meta’s Llama fails to meet these requirements because of its limits on commercial use, its list of prohibited activities that may be deemed harmful or illegal, and a lack of transparency about its training data.

Despite this, Meta still describes Llama as open source.

The intersection of the tech industry and the military

Meta is not the only commercial technology company branching out to military applications of AI. In the past week, Anthropic also announced it is teaming up with Palantir—a data analytics firm—and Amazon Web Services to provide US intelligence and defense agencies access to its AI models.

Meta has defended its decision to allow US national security agencies and defense contractors to use Llama. The company claims these uses are “responsible and ethical” and “support the prosperity and security of the United States.”

Meta has not been transparent about the data it uses to train Llama. But companies that develop generative AI models often utilize user input data to further train their models, and people share plenty of personal information when using these tools.

ChatGPT and DALL-E provide options for opting out of having your data collected. However, it is unclear whether Llama offers the same.

The option to opt out is not made explicit when users sign up for these services. This places the onus on users to inform themselves—and most may not be aware of where or how Llama is being used.

For example, the latest version of Llama powers AI tools in Facebook, Instagram, WhatsApp and Messenger. When using the AI functions on these platforms—such as creating reels or suggesting captions—users are using Llama.

The fragility of open source

The benefits of open source include open participation and collaboration on software. However, this can also lead to fragile systems that are easily manipulated. For example, following Russia’s invasion of Ukraine in 2022, members of the public made changes to open source software to express their support for Ukraine.

These changes included anti-war messages and the deletion of system files on Russian and Belarusian computers. This movement came to be known as “protestware.”

The intersection of open source AI and military applications will likely exacerbate this fragility, because the robustness of open source software depends on the public community. Large language models such as Llama rely on public use and engagement, since the models are designed to improve over time through a feedback loop between users and the AI system.

The mutual use of open source AI tools marries two parties—the public and the military—who have historically held separate needs and goals. This shift will expose unique challenges for both parties.

For the military, open access means the finer details of how an AI tool operates can easily be sourced, potentially leading to security and vulnerability issues. For the general public, the lack of transparency in how user data is being utilized by the military can lead to a serious moral and ethical dilemma.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Meta now allows military agencies to access its AI software. It poses a moral dilemma for everybody who uses it (2024, November 12)
retrieved 12 November 2024
from https://techxplore.com/news/2024-11-meta-military-agencies-access-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




