LBNN
Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content

By Simon Osuji
March 20, 2024
in Artificial Intelligence


In 2023, OpenAI told the UK parliament that it was “impossible” to train leading AI models without using copyrighted materials. It’s a popular stance in the AI world, where OpenAI and other leading players have used materials slurped up online to train the models powering chatbots and image generators, triggering a wave of lawsuits alleging copyright infringement.

Two announcements Wednesday offer evidence that large language models can in fact be trained without the permissionless use of copyrighted materials.

A group of researchers backed by the French government has released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. And the nonprofit Fairly Trained announced that it has awarded its first certification to a large language model built without copyright infringement, showing that technology like that behind ChatGPT can be built differently from the AI industry’s contentious norm.

“There’s no fundamental reason why someone couldn’t train an LLM fairly,” says Ed Newton-Rex, CEO of Fairly Trained. He founded the nonprofit in January 2024 after quitting his executive role at image generation startup Stability AI because he disagreed with its policy of scraping content without permission.

Fairly Trained offers a certification to companies willing to prove that they’ve trained their AI models on data that they own, that they have licensed, or that is in the public domain. When the nonprofit launched, some critics pointed out that it hadn’t yet identified a large language model that met those requirements.

Today, Fairly Trained announced it has certified its first large language model. It’s called KL3M and was developed by Chicago-based legal tech consultancy startup 273 Ventures, using a curated training dataset of legal, financial, and regulatory documents.

The company’s cofounder Jillian Bommarito says the decision to train KL3M in this way stemmed from the company’s “risk-averse” clients like law firms. “They’re concerned about the provenance, and they need to know that output is not based on tainted data,” she says. “We’re not relying on fair use.” The clients were interested in using generative AI for tasks like summarizing legal documents and drafting contracts, but didn’t want to get dragged into lawsuits about intellectual property as OpenAI, Stability AI, and others have been.

Bommarito says that 273 Ventures hadn’t worked on a large language model before but decided to train one as an experiment. “Our test to see if it was even possible,” she says. The company has created its own training dataset, the Kelvin Legal DataPack, which includes thousands of legal documents reviewed to comply with copyright law.

Although the dataset is tiny (around 350 billion tokens, or units of data) compared to those compiled by OpenAI and others that have scraped the internet en masse, Bommarito says the KL3M model performed far better than expected, something she attributes to how carefully the data had been vetted beforehand. “Having clean, high-quality data may mean that you don’t have to make the model so big,” she says. Curating a dataset can also help specialize a finished AI model for the task it’s designed for. 273 Ventures is now offering waitlist spots to clients who want to purchase access to this data.

Clean Sheet

Companies looking to emulate KL3M may have more help in the future in the form of freely available infringement-free datasets. On Wednesday, researchers released what they claim is the largest available AI dataset for language models composed purely of public domain content. Common Corpus, as it is called, is a collection of text roughly the same size as the data used to train OpenAI’s GPT-3 text generation model and has been posted to the open source AI platform Hugging Face.

The dataset was built from sources like public domain newspapers digitized by the US Library of Congress and the National Library of France. Pierre-Carl Langlais, project coordinator for Common Corpus, calls it a “big enough corpus to train a state-of-the-art LLM.” In the lingo of big AI, the dataset contains 500 million tokens; OpenAI’s most capable model is widely believed to have been trained on several trillion.
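For a sense of scale, the token counts quoted in this story can be compared directly. The sketch below is purely illustrative: the dataset sizes come from the article itself, while the four-characters-per-token figure is a common rule of thumb for English text, not how either dataset was actually tokenized.

```python
# Dataset sizes quoted in the article, measured in tokens (subword units).
COMMON_CORPUS_TOKENS = 500_000_000      # Common Corpus (public domain text)
KL3M_DATASET_TOKENS = 350_000_000_000   # Kelvin Legal DataPack used for KL3M

# KL3M's curated dataset is several hundred times larger than Common Corpus,
# and frontier models are believed to use trillions of tokens on top of that.
print(KL3M_DATASET_TOKENS // COMMON_CORPUS_TOKENS)  # -> 700

def estimate_tokens(text: str) -> int:
    """Crude token estimate using the ~4 characters/token rule of thumb
    for English text. A real tokenizer (e.g. BPE) would differ."""
    return max(1, len(text) // 4)

print(estimate_tokens("a big enough corpus to train a state-of-the-art LLM"))
```

The heuristic only conveys rough magnitudes; actual counts depend on the tokenizer each project uses.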



