Cohere claims its new Aya Vision AI model is best-in-class

By Simon Osuji
March 5, 2025
Creator Economy

Cohere For AI, AI startup Cohere’s nonprofit research lab, this week released Aya Vision, a multimodal “open” AI model that the lab claims is best-in-class.

Aya Vision can perform tasks like writing image captions, answering questions about photos, translating text, and generating summaries in 23 major languages. Cohere, which is also making Aya Vision available for free through WhatsApp, called it “a significant step towards making technical breakthroughs accessible to researchers worldwide.”

“While AI has made significant progress, there is still a big gap in how well models perform across different languages — one that becomes even more noticeable in multimodal tasks that involve both text and images,” Cohere wrote in a blog post. “Aya Vision aims to explicitly help close that gap.”

Aya Vision comes in a couple of flavors: Aya Vision 32B and Aya Vision 8B. The more sophisticated of the two, Aya Vision 32B, sets a “new frontier,” Cohere said, outperforming models 2x its size, including Meta’s Llama-3.2 90B Vision, on certain visual understanding benchmarks. Meanwhile, Aya Vision 8B scores better on some evaluations than models 10x its size, according to Cohere.

Both models are available from AI dev platform Hugging Face under a Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license with Cohere’s acceptable use addendum, meaning they can’t be used for commercial applications.
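For readers who want to try the weights, a minimal sketch of loading one of the checkpoints with Hugging Face’s transformers library is below. The repository ID, the chat-message format, and the AutoModelForImageTextToText class are assumptions based on how comparable vision-language releases are typically published, not details confirmed by the article, so check Cohere’s Hugging Face page before relying on them.

```python
# Minimal sketch (assumptions noted): querying an Aya Vision 8B checkpoint
# through Hugging Face transformers. The model ID and message schema are
# placeholders based on typical vision-language model cards, not confirmed.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "CohereForAI/aya-vision-8b"  # assumed repository name

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

# One user turn containing an image and a question about it.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this photo in French."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens, not the prompt.
print(processor.tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```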

Cohere said that Aya Vision was trained using a “diverse pool” of English datasets, which the lab translated and used to create synthetic annotations. Annotations, also known as tags or labels, help models understand and interpret data during the training process. For example, annotations to train an image recognition model might take the form of markings around objects or captions referring to each person, place, or object depicted in an image.
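To make that concrete, an image annotation in a multilingual training set is often stored as a simple structured record. The sketch below is an invented, illustrative record only; it is not Cohere’s actual data format, and the field names are placeholders.

```python
# Illustrative only: a made-up record showing what a translated, synthetically
# annotated image-caption pair might look like. Not Cohere's actual schema.
annotation = {
    "image_id": "img_000123",
    "objects": [
        # Bounding boxes as (x, y, width, height) in pixels, plus a label.
        {"label": "bicycle", "box": (34, 80, 210, 145)},
        {"label": "person", "box": (120, 20, 90, 260)},
    ],
    # Original English caption from the source dataset.
    "caption_en": "A person standing next to a bicycle on a city street.",
    # The caption translated into one of the 23 target languages.
    "caption_fr": "Une personne debout à côté d'un vélo dans une rue de la ville.",
    # Flag marking that this annotation was generated or rewritten by a model.
    "synthetic": True,
}
```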

Image: Cohere’s Aya Vision model can perform a range of visual understanding tasks. (Image credits: Cohere)

Cohere’s use of synthetic annotations — that is, annotations generated by AI — is on trend. Despite its potential downsides, rivals including OpenAI are increasingly leveraging synthetic data to train models as the well of real-world data dries up. Research firm Gartner estimates that 60% of the data used for AI and analytics projects last year was synthetically created.

According to Cohere, training Aya Vision on synthetic annotations enabled the lab to use fewer resources while achieving competitive performance.

“This showcases our critical focus on efficiency and [doing] more using less compute,” Cohere wrote in its blog. “This also enables greater support for the research community, who often have more limited access to compute resources.”

Together with Aya Vision, Cohere also released a new benchmark suite, AyaVisionBench, designed to probe a model’s skills in “vision-language” tasks like identifying differences between two images and converting screenshots to code.

The AI industry is in the midst of what some have called an “evaluation crisis,” a consequence of the popularization of benchmarks whose aggregate scores correlate poorly with proficiency on the tasks most AI users actually care about. Cohere asserts that AyaVisionBench is a step toward rectifying this, providing a “broad and challenging” framework for assessing a model’s cross-lingual and multimodal understanding.

With any luck, that’s indeed the case.

“[T]he dataset serves as a robust benchmark for evaluating vision-language models in multilingual and real-world settings,” Cohere researchers wrote in a post on Hugging Face. “We make this evaluation set available to the research community to push forward multilingual multimodal evaluations.”
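If the evaluation set is published as a standard Hugging Face dataset, it can presumably be pulled down with the datasets library, as in the short sketch below; the repository ID is an assumption, so verify the actual name on Cohere’s Hugging Face page.

```python
# Sketch: loading the AyaVisionBench evaluation set with the Hugging Face
# `datasets` library. The dataset ID below is an assumed placeholder.
from datasets import load_dataset

bench = load_dataset("CohereForAI/AyaVisionBench")
print(bench)                  # list the available splits
first_split = next(iter(bench))
print(bench[first_split][0])  # inspect the first example in the first split
```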

