
Approach improves how new skills are taught to large language models

by Simon Osuji
July 7, 2025
in Artificial Intelligence


[Image: ChatGPT. Credit: Unsplash/CC0 Public Domain]

Researchers have developed a technique that significantly improves the performance of large language models without increasing the computational power necessary to fine-tune the models. The researchers demonstrated that their technique improves the performance of these models over previous techniques in tasks including commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

Large language models are artificial intelligence systems that are pretrained on huge data sets. After pretraining, these models respond to user queries by predicting which words should follow one another. However, because pretraining is nonspecific, there is ample room to improve these models when user queries focus on specific tasks, such as answering a math question or writing computer code.

“In order to improve a model’s ability to perform more specific tasks, you need to fine-tune the model,” says Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of computer engineering at North Carolina State University.

“However, these models are so large that it is not feasible to re-train the entire model. Instead, you want to determine the smallest number of changes necessary to improve the model’s performance. We’ve developed a technique, called WeGeFT (pronounced wee-gift), that represents a significant advance for fine-tuning these large models.”

The big breakthrough in fine-tuning these large models was LoRA (low-rank adaptation), which came out in 2022. LoRA uses mathematical tools to identify a small set of key parameters that are most likely to improve a model's performance on a specific task.
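The core idea behind LoRA can be sketched in a few lines of NumPy. The dimensions and rank below are illustrative values chosen for this sketch, not numbers from the paper: rather than retraining a full weight matrix, LoRA trains a small low-rank correction on top of the frozen weights.

```python
import numpy as np

# Minimal sketch of the LoRA idea (illustrative shapes and values):
# instead of updating a full weight matrix W (d_out x d_in), train a
# low-rank correction B @ A of rank r, leaving W frozen.

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

def forward(x):
    # The adapted layer adds the low-rank correction to the frozen output;
    # with B initialized to zero, it starts out identical to the base model.
    return W @ x + B @ (A @ x)

full_params = d_in * d_out        # 262,144 parameters to retrain the layer fully
lora_params = r * (d_in + d_out)  # 8,192 trainable parameters with LoRA
```

The parameter count shows why this matters: at rank 8, the trainable parameters drop from 262,144 to 8,192 for this one layer, a roughly 32x reduction, which is what makes fine-tuning feasible without retraining the whole model.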

There have been many attempts to improve upon LoRA, but Wu and his collaborators found these previous efforts either required significantly more computational power to improve performance, or used the same amount of computing power without improving performance.

“WeGeFT builds on LoRA, but incorporates additional mathematical tools that allow us to determine which of the key parameters the model is already familiar with and which parameters the model would need to ‘learn,'” says Wu. “By placing more weight on the truly novel parameters, we are able to improve model performance compared to LoRA without incorporating significant new computational demands.”
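The paper itself is the authority on how WeGeFT separates "familiar" from "novel" parameters. Purely as a conceptual illustration of that idea, and emphatically not the authors' actual method, one could split a candidate weight update into the component lying in the frozen weight's row space (directions the model already uses) and the orthogonal remainder, then upweight the remainder:

```python
import numpy as np

# Conceptual sketch only -- NOT the WeGeFT algorithm. It illustrates the
# idea of separating "familiar" from "novel" directions: split a candidate
# update dW into its projection onto the frozen weight's row space and the
# orthogonal remainder, then emphasize the remainder. The rank, sizes,
# and alpha are made-up values for this illustration.

rng = np.random.default_rng(1)
d, k = 64, 16
W = rng.standard_normal((d, k)) @ rng.standard_normal((k, d))  # rank-k frozen weight
dW = rng.standard_normal((d, d))                               # candidate update

# Orthonormal basis for W's row space from its right singular vectors
_, s, Vt = np.linalg.svd(W)
V = Vt[: np.sum(s > 1e-8)].T    # (d, k) basis columns

familiar = dW @ V @ V.T         # component in directions W already spans
novel = dW - familiar           # orthogonal, "novel" component
alpha = 2.0                     # hypothetical emphasis on novelty
weighted_update = familiar + alpha * novel
```

The two components are orthogonal by construction, so upweighting `novel` shifts the update toward directions the frozen weights do not already cover without disturbing the familiar part.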

In proof-of-concept testing, the researchers found that WeGeFT performed as well as or better than LoRA and its many variants across a variety of downstream tasks: commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

“We think this is a valuable step forward,” Wu says. “We are now exploring ways that WeGeFT could also be used to identify elements of the model that are responsible for harmful outputs, with the goal of improving AI alignment and ‘surgery’ to improve model safety and outputs. We expect that work to be forthcoming.”

The paper, “WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models,” will be presented July 17 at the International Conference on Machine Learning, being held in Vancouver, Canada. Co-corresponding author of the paper is Chinmay Savadikar, a Ph.D. student at NC State. The paper was co-authored by Xi Song, an independent researcher.

More information:
“WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models,” Chinmay Savadikar and Tianfu Wu, North Carolina State University; Xi Song, independent researcher. Presented: July 13-19, International Conference on Machine Learning, Vancouver, Canada. icml.cc/virtual/2025/poster/45660

Provided by
North Carolina State University

Citation:
Approach improves how new skills are taught to large language models (2025, July 7)
retrieved 7 July 2025
from https://techxplore.com/news/2025-07-approach-skills-taught-large-language.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.





