Protesters Decry Meta’s “Irreversible Proliferation” of AI

By Simon Osuji
October 6, 2023
in Artificial Intelligence


Efforts to make AI open source have become a lightning rod for disagreements about the potential harms of the emerging technology. Last week, protesters gathered outside Meta’s San Francisco offices to protest its policy of publicly releasing its AI models, claiming that the releases represent “irreversible proliferation” of potentially unsafe technology. But others say that an open approach to AI development is the only way to ensure trust in the technology.

While companies like OpenAI and Google only allow users to access their large language models (LLMs) via an API, Meta caused a stir last February when it made its LLaMA family of models freely accessible to AI researchers. The release included model weights—the parameters the models have learned during training—which make it possible for anyone with the right hardware and expertise to reproduce and modify the models themselves.
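To make the distinction concrete, here is a minimal sketch of what running a model from released weights looks like, assuming the Hugging Face transformers library (with PyTorch) and access to Meta's published Llama 2 checkpoint. The point is that inference happens entirely on local hardware, with no provider in the loop:

```python
# Minimal sketch: running a model from released weights.
# Assumes `transformers` and PyTorch are installed and that the
# "meta-llama/Llama-2-7b-hf" checkpoint has been granted to your account.
from transformers import AutoModelForCausalLM, AutoTokenizer

# The downloaded weights live on local disk; nothing here calls out
# to a provider-controlled API that could later be switched off.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Open weights mean that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```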

Within weeks the weights were leaked online, and the company was criticized for potentially putting powerful AI models in the hands of nefarious actors, such as hackers and scammers. But since then, the company has doubled down on open-source AI, releasing the weights of its next-generation Llama 2 models for research and commercial use alike.

The self-described “concerned citizens” who gathered outside Meta’s offices last Friday were led by Holly Elmore. She notes that an API can be shut down if a model turns out to be unsafe, but once model weights have been released, the company no longer has any means to control how the AI is used.

“It would be great to have a better way to make a [large language] model safe other than secrecy, but we just don’t have it.” —Holly Elmore, AI safety advocate

“Releasing weights is a dangerous policy, because models can be modified by anyone, and they cannot be recalled,” says Elmore, an independently funded AI safety advocate who previously worked for the think tank Rethink Priorities. “The more powerful the models get, the more dangerous this policy is going to get.” Meta didn’t respond to a request for comment.

[Photo: A person holds a sign that says “Honk for AI Safety” by a roadside. Credit: Misha Gurevich]

LLMs accessed through an API typically include safeguards, such as response filtering or specific training, to prevent them from producing dangerous or unsavory responses. If model weights are released, though, says Elmore, it’s relatively easy to retrain the models to bypass these guardrails. That could make it possible to use the models to craft phishing emails, plan cyberattacks, or cook up ingredients for dangerous chemicals, she adds.
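As an illustration of why this control point matters, the toy sketch below shows the kind of server-side filter an API provider might run; the blocklist and helper functions are hypothetical stand-ins, not any vendor's actual moderation stack. Because the wrapper executes on the provider's servers, users cannot strip it away; with local weights, an equivalent wrapper would be trivial to delete or fine-tune around.

```python
# Toy sketch of provider-side guardrails around a hosted LLM.
# Everything here is hypothetical: the blocklist, query_model, and
# the output classifier are stand-ins, not a real moderation stack.

BLOCKED_PHRASES = ["phishing email", "synthesize the chemical"]

def query_model(prompt: str) -> str:
    """Stand-in for inference against the hosted model."""
    return f"(model output for: {prompt!r})"

def violates_policy(text: str) -> bool:
    """Stand-in for an output-side safety classifier."""
    return any(phrase in text.lower() for phrase in BLOCKED_PHRASES)

def moderated_completion(prompt: str) -> str:
    # Input-side filter: runs before the model ever sees the prompt.
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "This request violates the usage policy."
    response = query_model(prompt)
    # Output-side filter: runs after generation, still on the server.
    if violates_policy(response):
        return "This response was withheld by the safety filter."
    return response

print(moderated_completion("Write a phishing email to my coworkers."))
```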

Part of the problem is that there has been insufficient development of “safety measures to warrant open release,” Elmore says. “It would be great to have a better way to make a model safe other than secrecy, but we just don’t have it.”

Beyond any concerns about how today’s open-source models could be misused, Elmore says, the bigger danger will come if the same approach is taken with future, more powerful AI that could act more autonomously.

That’s a concern shared by Peter S. Park, AI Existential Safety Postdoctoral Fellow at MIT. “Widely releasing the very advanced AI models of the future would be especially problematic, because preventing their misuse would be essentially impossible,” he says, adding that they could “enable rogue actors and nation-state adversaries to wage cyberattacks, election meddling, and bioterrorism with unprecedented ease.”

So far, though, there’s little evidence that open-source models have led to any concrete harm, says Stella Biderman, a scientist at Booz Allen Hamilton and executive director of the nonprofit AI research group EleutherAI, which also makes its models open source. And it’s far from clear that simply putting a model behind an API solves the safety problem, she adds, pointing to a recent report from the European Union’s law-enforcement agency, Europol, which found that ChatGPT was being used to generate malware and that its safety features were “trivial to bypass.”

Encouraging companies to keep the details of their models secret is likely to lead to “serious downstream consequences for transparency, public awareness, and science.”—Stella Biderman, EleutherAI

The reference to “proliferation,” which is clearly meant to evoke weapons of mass destruction, is misleading, Biderman adds. While the secrets to building nuclear weapons are jealously guarded, the fundamental ingredients for building an LLM have been published in freely available research papers. “Anyone in the world can read them and develop their own models,” she says.

The argument against releasing model weights relies on the assumption that there will be no malicious corporate actors, says Biderman, which history suggests is misplaced. Encouraging companies to keep the details of their models secret is likely to lead to “serious downstream consequences for transparency, public awareness, and science,” she adds, and will mainly impact independent researchers and hobbyists.

But it’s unclear if Meta’s approach is really open enough to derive the benefits of open source. Open-source software is considered trustworthy and safe because people are able to understand and probe it, says Park. That’s not the case with Meta’s models, because the company has provided few details about its training data or training code.

The concept of open-source AI has yet to be properly defined, says Stefano Maffulli, executive director of the Open Source Initiative (OSI). Different organizations are using the term to refer to different things. “It’s very confusing, because everyone is using it to mean different shades of ‘publicly available something,’ ” he says.

For a piece of software to be open source, says Maffulli, the key question is whether the source code is publicly available and reusable for any purpose. When it comes to making AI freely reproducible, though, you may have to share training data, how you collected that data, training software, model weights, inference code, or all of the above. That raises a host of new challenges, says Maffulli, not least of which are privacy and copyright concerns around the training data.
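Read as a checklist, the artifacts Maffulli names could be sketched like this; it's a hypothetical illustration of the reproducibility test, not OSI's forthcoming definition:

```python
# Hypothetical checklist of the artifacts Maffulli lists; an AI
# release is only fully reproducible if every piece is shared.
from dataclasses import dataclass, fields

@dataclass
class AIRelease:
    training_data: bool    # the data the model was trained on
    data_collection: bool  # how that data was gathered
    training_code: bool    # the software used to train
    model_weights: bool    # the learned parameters
    inference_code: bool   # the code needed to run the model

def fully_reproducible(release: AIRelease) -> bool:
    return all(getattr(release, f.name) for f in fields(release))

# Llama 2, roughly as described above: weights and inference code,
# but few details about training data or training code.
llama2 = AIRelease(training_data=False, data_collection=False,
                   training_code=False, model_weights=True,
                   inference_code=True)
print(fully_reproducible(llama2))  # False
```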

OSI has been working since last year to define what exactly counts as open-source AI, says Maffulli, and the organization is planning to release an early draft in the coming weeks. But regardless of how the concept of open source has to be adapted to fit the realities of AI, he believes an open approach to its development will be crucial. “We cannot have AI that can be trustworthy, that can be responsible and accountable if it’s not also open source,” he says.
