Meta Opens Its AI Model for the U.S. Military

By Simon Osuji
November 17, 2024

Meta’s open large language model family, Llama, isn’t “open-source” in a traditional sense, but it’s freely available to download and build on—and national defense agencies are among those putting it to use.

A recent Reuters report detailed how Chinese researchers fine-tuned a Llama model on military records to create a tool for analyzing military intelligence. Meta’s director of public policy called the use “unauthorized.” But three days later, Nick Clegg, Meta’s president of global affairs, announced that Meta will allow use of Llama for U.S. national security.


“It shows that a lot of the guardrails that are put around these models are fluid,” said Ben Brooks, a fellow at Harvard’s Berkman Klein Center for Internet and Society.

Meta isn’t alone in the rush to support U.S. defense

The Reuters investigation found that researchers from China’s Academy of Military Science used the 13-billion-parameter version of Meta’s Llama large language model to develop ChatBIT, an AI tool for military intelligence analysis and decision-making. It’s the first clear evidence of the People’s Liberation Army adapting open-source AI models for defense purposes.

Meta told Reuters that ChatBIT violated the company’s acceptable use policy, which prohibits use of Llama for (among other things) military, warfare, espionage, and nuclear industries or applications. Three days later, however, Clegg touted Meta’s support of the U.S. defense industry.

It was an odd turn of events, as use of Llama by any military would seem to violate the model’s acceptable use policy. While Meta has no way to enforce that policy (its models don’t require authorization or authentication to use), the company had, until now, publicly opposed military use.

That’s still true today, but only for militaries outside the U.S. A Meta spokesperson told IEEE Spectrum that Llama’s terms haven’t changed; instead, the company is “waiving the military use policy for the U.S. government and the companies supporting their work.”

Meta isn’t alone in finding a sudden need to support U.S. defense. Anthropic’s Claude 3 and Claude 3.5 models will be used by defense contractor Palantir to sift through secret government data. OpenAI, meanwhile, recently hired former Palantir CISO Dane Stuckey and appointed retired U.S. Army General Paul M. Nakasone to its board of directors.

“All the [major AI companies] are eagerly showing their commitment to U.S. national security, so there’s nothing surprising about Meta’s response. And I think it would’ve been a curious outcome if open AI models were available to potential adversaries while [domestically] having strict national security or defense restrictions,” says Brooks.

What’s next for AI, defense, and regulation?

While Meta’s decision to make Llama available to the U.S. government could help approved military contractors adopt it, it doesn’t put the open AI genie back in the bottle. As the Reuters report shows, Llama models are already being put to use by militaries, authorized or otherwise. Now the question becomes: What, if anything, will regulators do about it?

“By choosing not to secure their cutting-edge technology, Meta is single-handedly fueling a global AI arms race.” —David Evan Harris, California Initiative for Technology and Democracy

David Evan Harris, senior policy advisor to the California Initiative for Technology and Democracy, urged a stronger stance against open models. In 2023, IEEE Spectrum published an article by Harris about open AI’s dangers.

“By choosing not to secure their cutting-edge technology, Meta is single-handedly fueling a global AI arms race,” says Harris. “It’s not just the top unsecured model that comes from Meta. It’s the top three.” He likened Meta’s decision to make its models freely available to Lockheed Martin giving sophisticated military technology away to U.S. adversaries.

Brooks took the opposite view. He said open models are more transparent and easier to evaluate for opportunities or vulnerabilities. Brooks compared Llama to other popular open-source software, like Linux, which many companies and government agencies build on for custom-tailored applications. “I think the open-source community expects that open is the way forward for sensitive and regulated AI applications,” he said.

While Harris and Brooks have opposite views on regulating open AI, they agreed on one thing: Trump’s election victory is a wild card.

President-elect Trump’s position on AI isn’t yet clear, but Elon Musk—who appeared on stage with Trump several times during his presidential campaign and reportedly wields sizable influence with Trump—is emblematic of the uncertainty around the incoming administration’s position.

“The election results could reset the conversation in unusual ways.” —Ben Brooks, Berkman Klein Center for Internet and Society

Musk owns the AI company xAI, maker of the Grok chatbot, and believes AI will be smarter than humans by the end of the decade, yet he spoke in favor of California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which sought broad restrictions on AI research (but was ultimately vetoed by Governor Gavin Newsom). And if that weren’t confusing enough: While Musk supports AI regulation, he prefers open AI models and has a pending lawsuit against OpenAI over (among other claims) the company’s decision to close access to its models.

“The election results could reset the conversation in unusual ways,” said Brooks. “The effective accelerationist culture is going to clash with stop-AI culture in this administration, and that will be very interesting to watch.”
