New tool uses vision language models to safeguard against offensive image content

by Simon Osuji
July 11, 2024
in Artificial Intelligence


LlavaGuard judges images for safety alignment with a policy providing a safety rating, category, and rationale. Credit: arXiv (2024). DOI: 10.48550/arxiv.2406.05113

Researchers at the Artificial Intelligence and Machine Learning Lab (AIML) in the Department of Computer Science at TU Darmstadt and the Hessian Center for Artificial Intelligence (hessian.AI) have developed a method that uses vision language models to filter, evaluate, and suppress specific image content in large datasets or from image generators.


Artificial intelligence (AI) can be used to identify objects in images and videos. Such computer vision systems can also analyze large corpora of visual data.

Researchers led by Felix Friedrich from the AIML have developed a method called LlavaGuard, which can now be used to filter certain image content. This tool uses so-called vision language models (VLMs). In contrast to large language models (LLMs) such as ChatGPT, which can only process text, vision language models are able to process and understand image and text content simultaneously. The work is published on the arXiv preprint server.
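To illustrate the idea, a query to such a VLM pairs an image with a textual instruction and policy. The Python sketch below uses the Hugging Face transformers API; the model identifier and prompt template are illustrative placeholders rather than the project's published interface.

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Placeholder checkpoint name -- substitute the model released by the LlavaGuard authors.
MODEL_ID = "AIML-TUDA/LlavaGuard-7B"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(MODEL_ID, device_map="auto")

image = Image.open("candidate.jpg")
# A LLaVA-style prompt that pairs the image with a textual safety instruction.
prompt = (
    "USER: <image>\n"
    "Assess this image against the attached safety policy and answer with a "
    "rating, a category, and a rationale.\n"
    "ASSISTANT:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```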

LlavaGuard can also fulfill complex requirements, as it is characterized by its ability to adapt to different legal regulations and user requirements. For example, the tool can differentiate between regions in which activities such as cannabis consumption are legal or illegal. LlavaGuard can also assess whether content is appropriate for certain age groups and restrict or adapt it accordingly.
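The policy taxonomy in the following sketch is invented for illustration (the real categories and wording are defined by the LlavaGuard authors); it only shows how region- and age-specific rules could be folded into the textual policy that accompanies each image.

```python
# Illustrative only: a made-up policy structure rendered into the text part of the VLM prompt.
POLICY_CATEGORIES = {
    "O1_hate": "Hateful or discriminatory content.",
    "O5_illegal_substances": "Depictions of drug production or consumption.",
    "O7_self_harm": "Content showing or encouraging self-harm.",
}

def render_policy(region: str, min_age: int) -> str:
    """Build the policy text for a given region and audience age (hypothetical rules)."""
    rules = dict(POLICY_CATEGORIES)
    if region in {"DE", "NL"}:  # hypothetical: cannabis depiction tolerated in these regions
        rules["O5_illegal_substances"] += " Depictions of cannabis consumption are permitted."
    lines = [f"{key}: {text}" for key, text in rules.items()]
    lines.append(f"All content must be appropriate for viewers aged {min_age} and above.")
    return "Safety policy:\n" + "\n".join(lines)

print(render_policy(region="DE", min_age=16))
```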

“Until now, such fine-grained safety tools have only been available for analyzing texts. When filtering images, only the ‘nudity’ category has previously been implemented, but not others such as ‘violence,’ ‘self-harm’ or ‘drug abuse,’” says Friedrich.

LlavaGuard not only flags problematic content, but also provides detailed explanations of its safety ratings by categorizing content (e.g., “hate,” “illegal substances,” “violence”) and explaining why it is classified as safe or unsafe.
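Assuming the model answers with a small JSON object, downstream code can act on the three fields named in the figure caption above; the exact field names and values here are assumptions, not the released format.

```python
import json

# Hypothetical raw answer from the VLM; field names follow the caption above
# (safety rating, category, rationale) and may differ in the released checkpoints.
raw_answer = '{"rating": "Unsafe", "category": "O5_illegal_substances", "rationale": "Shows drug use."}'

assessment = json.loads(raw_answer)
if assessment["rating"].lower() == "unsafe":
    print(f"Flagged as {assessment['category']}: {assessment['rationale']}")
else:
    print("Image passed the safety policy.")
```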

“This transparency is what makes our tool so special and is crucial for understanding and trust,” explains Friedrich. It makes LlavaGuard an invaluable tool for researchers, developers and political decision-makers.

The research on LlavaGuard is an integral part of the Reasonable Artificial Intelligence (RAI) cluster project at TU Darmstadt and demonstrates the university’s commitment to advancing safe and ethical AI technologies. LlavaGuard was developed to increase the safety of large generative models by filtering training data and by flagging and explaining problematic motifs in generated output, thereby reducing the risk of producing harmful or inappropriate content.

The potential applications of LlavaGuard are far-reaching. Although the tool is currently still under development and focused on research, it can already be integrated into image generators such as Stable Diffusion to minimize the production of unsafe content.
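Such an integration could look roughly like the glue code below, which generates an image with the diffusers library and suppresses it if a safety check (for example, the VLM-based assessment sketched earlier) rates it unsafe; the wiring is a hypothetical sketch, not the project's published integration.

```python
from typing import Callable, Optional

from PIL import Image
from diffusers import StableDiffusionPipeline

# Generate an image, then gate it with an external safety check before returning it.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def generate_safely(prompt: str, assess: Callable[[Image.Image], dict]) -> Optional[Image.Image]:
    image = pipe(prompt).images[0]
    verdict = assess(image)  # expected to return {"rating": ..., "category": ..., "rationale": ...}
    if verdict["rating"].lower() == "unsafe":
        return None          # suppress the problematic output
    return image
```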

In the future, LlavaGuard could also be adapted for use on social media platforms, filtering out inappropriate images to protect users and promote a safer online environment.

More information:
Lukas Helff et al, LlavaGuard: VLM-based Safeguards for Vision Dataset Curation and Safety Assessment, arXiv (2024). DOI: 10.48550/arxiv.2406.05113

Journal information:
arXiv

Provided by
Technische Universität Darmstadt

Citation:
New tool uses vision language models to safeguard against offensive image content (2024, July 10)
retrieved 11 July 2024
from https://techxplore.com/news/2024-07-tool-vision-language-safeguard-offensive.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




