LBNN
Think you can cheat with AI? Researcher creates watermarks to detect AI-generated writing

by Simon Osuji
February 20, 2025
in Artificial Intelligence


[Image: robot writing. Credit: Pixabay/CC0 Public Domain]

Artificial intelligence is putting instructors and employers in an awkward position when it comes to accepting written work, leaving them wondering: Who wrote this? A human or AI?

But imagine a digital watermark that could remove the guesswork and actually flag AI-generated text whenever someone submits their writing. A University of Florida engineering professor is developing this technology right now.

“If I’m a student and I’m writing my homework with ChatGPT, I don’t want my professor to detect that,” said Yuheng Bu, Ph.D., an assistant professor in the Department of Electrical and Computer Engineering in the Herbert Wertheim College of Engineering.

Using UF’s supercomputer HiPerGator, Bu and his team are working on an invisible watermarking method for large language models designed to reliably detect AI-generated content, even after it has been altered or paraphrased, while maintaining writing quality.

Navigating the AI landscape

Large language models, such as Google’s Gemini, are AI systems capable of generating human-like text. Writers can feed prompts into these models, which complete the assignment by drawing on patterns learned from enormous training datasets. This creates a significant problem in academic and professional settings.

To address this, Peter Scarfe, Ph.D., and other researchers from the University of Reading in the United Kingdom tested AI detection rates in classrooms last year. They created fake student profiles and submitted assignments generated with standard AI platforms.

“Overall, AI submissions verged on being undetectable, with 94% not being detected,” that study noted. “Our 6% detection rate likely overestimates our ability to detect real-world use of AI to cheat on exams.”

The low performance is due to the continuous advancement of large language models, making AI-generated text increasingly indistinguishable from human-written content. As a result, detection becomes progressively more difficult and may eventually become impossible, Bu said.

Watermarking offers an effective alternative: proactively embedding specially designed, invisible signals into AI-generated text. These signals serve as verifiable evidence of AI generation, enabling reliable detection.

Specifically, Bu’s work focuses on two key aspects: maintaining the quality of text generated by large language models after watermarking, and ensuring the watermark’s robustness against various modifications. The proposed adaptive method keeps the embedded watermark imperceptible to human readers, preserving the natural flow of the writing relative to unwatermarked model output.
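To make the general idea concrete, here is a minimal, self-contained sketch of statistical text watermarking in the style of published “green list” schemes (Kirchenbauer et al., 2023). This is not Bu’s adaptive method; the vocabulary, key, and scoring rule are all toy stand-ins. A secret key pseudorandomly splits the vocabulary at each step, the generator favors the “green” half, and a detector who knows the key counts green tokens and computes a z-score against the 50% rate expected by chance.

```python
import hashlib
import math
import random

VOCAB = [f"w{i}" for i in range(1000)]   # stand-in vocabulary
KEY = b"secret-watermark-key"            # hypothetical secret key

def green_list(prev_token: str) -> set:
    """Key-seeded pseudorandom half of the vocabulary, given the context."""
    seed = hashlib.sha256(KEY + prev_token.encode()).digest()
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(length: int) -> list:
    """Toy 'model' that always samples from the green list (maximal bias)."""
    rng, tokens = random.Random(0), ["w0"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detect(tokens: list) -> float:
    """z-score of the green-token count; large values indicate the watermark."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

print(detect(generate_watermarked(100)))                 # 10.0: watermarked
rng = random.Random(1)
print(detect([rng.choice(VOCAB) for _ in range(101)]))   # near 0: unmarked
```

A real scheme biases sampling only softly, so the text stays natural; Bu’s adaptive approach additionally watermarks only a subset of the text, as described below.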

Streamlining the detection process

Some tech companies are already developing watermarks for AI-generated text. Researchers at Google DeepMind, for example, created a text watermarking scheme last year and deployed it to millions of chatbot users.

Asked about the difference between those watermarks and his project, Bu said UF’s method “applies watermarks to only a subset of text during generation, so we believe it achieves better text quality and greater robustness against removal attacks.”

Additionally, Bu’s work strengthens the system against common text modifications, such as synonym replacement and paraphrasing, which often render AI detection tools ineffective. Even if a user completely rewrites the watermarked text, the watermark remains detectable with high probability as long as the semantics remain unchanged. The watermark key, meanwhile, is held by the platform that applies it.

“The entity that applies the watermark also holds the key required for detection. If text is watermarked by ChatGPT, OpenAI would possess the corresponding key needed to verify the watermark,” Bu said. “End users seeking to verify a watermark must obtain the key from the watermarking entity. Our approach employs a private key mechanism, meaning only the key holder can detect and validate the watermark.”
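The private-key property Bu describes can be illustrated with a small self-contained sketch: only the holder of the embedding key sees a statistical signal, while anyone scoring with a different key sees noise. The scheme below (a keyed hash assigning each word a pseudorandom bit over a toy vocabulary) is invented for illustration, not Bu’s actual mechanism.

```python
import hashlib
import math
import random

WORDS = [f"tok{i}" for i in range(500)]  # toy vocabulary

def is_green(key: bytes, word: str) -> bool:
    # Keyed hash gives each word a pseudorandom bit; about half are "green".
    return hashlib.sha256(key + word.encode()).digest()[0] % 2 == 0

def embed(key: bytes, n: int) -> list:
    # A watermarking generator prefers green words (here: always, for clarity).
    rng = random.Random(42)
    green_words = [w for w in WORDS if is_green(key, w)]
    return [rng.choice(green_words) for _ in range(n)]

def z_score(key: bytes, text: list) -> float:
    # Compare the green-word count against the 50% expected without the key.
    hits = sum(is_green(key, w) for w in text)
    n = len(text)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

doc = embed(b"vendor-private-key", 200)
print(z_score(b"vendor-private-key", doc))  # large: the key holder detects it
print(z_score(b"wrong-key", doc))           # near zero: everyone else sees noise
```

This asymmetry is why, in the framework Bu describes, a professor cannot verify a watermark independently: detection requires the key, or access to a detection service run by the key holder.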

The primary issue now, Bu said, is how end users obtain that watermark key. In the current framework, a professor must contact the entity that embedded the watermark to obtain the key, or use an application programming interface provided by that entity to check for the watermark. The question of who holds the key, and with it the ability to claim intellectual property, is critical in the development of large language model watermarking.

“A crucial next step is to establish a comprehensive ecosystem that enforces watermarking usage and key distribution or develops more advanced techniques that do not rely on a secret key,” Bu said.

Bu has written multiple papers on AI watermarks, including “Adaptive Text Watermark for Large Language Models” for the International Conference on Machine Learning (ICML 2024), posted to the arXiv preprint server last year, and “Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach,” also available on arXiv.

“Watermarks have the potential to become a crucial tool for trust and authenticity in the era of generative AI,” Bu said. “I see them seamlessly integrated into schools to verify academic materials and across digital platforms to distinguish genuine content from misinformation. My hope is that widespread adoption will streamline verification and enhance confidence in the information we rely on every day.”

More information:
Yepeng Liu et al, Adaptive Text Watermark for Large Language Models, arXiv (2024). DOI: 10.48550/arxiv.2401.13927

Haiyun He et al, Theoretically Grounded Framework for LLM Watermarking: A Distribution-Adaptive Approach, arXiv (2024). DOI: 10.48550/arxiv.2410.02890

Provided by
University of Florida

Citation:
Think you can cheat with AI? Researcher creates watermarks to detect AI-generated writing (2025, February 20)
retrieved 20 February 2025
from https://techxplore.com/news/2025-02-ai-watermarks-generated.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




