Amplifying AI’s impact by making it understandable

By Simon Osuji
August 29, 2025
in Artificial Intelligence


Datasets used to train AI algorithms may underrepresent older people. Credit: Pixabay/CC0 Public Domain

As AI becomes a ubiquitous part of everyday life, fewer and fewer people understand how it works. Whereas traditional computer programs operate on clear logic ("If A happens, then do B"), AI models, especially neural networks, make decisions that are difficult to trace back to any single rule or line of code. As a result, traditional program analysis techniques such as code review are ineffective in addressing neural networks' vulnerabilities.


Yang Zhou, a winner of the SMU Research Staff Excellence Award for 2024, is part of SMU Professor of Computer Science Sun Jun’s MOE Tier 3 project titled “The Science of Certified AI Systems” that will look into the issue. As listed in the proposal, the first aim of the project is:

“First, we will develop a scientific foundation for analyzing AI systems, with a focus on three fundamental concepts: abstraction, causality and interpretability. Note that these concepts are fundamental for program analysis and yet must be re-invented formally due to the difference between programs and neural networks. This scientific foundation will be the basis of developing systematic analysis tools.”

Abstraction, causality and interpretability are core concepts in AI and computer science. Abstraction refers to the hidden machinery by which a program or model produces an output, such as a "calculate_area" function in a computer program that uses pi and the radius internally without the user ever seeing them.

An AI model, by contrast, learns to identify what a "circle" is through repeated training and learns to measure its area, but nobody can point to a single line of code and say that is where or when it learnt to do so.
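The contrast can be sketched in a few lines of Python. In a conventional program the abstraction is explicit and inspectable, as in this minimal version of the "calculate_area" example above:

```python
import math

def calculate_area(radius: float) -> float:
    # The caller never sees pi or the formula; the abstraction hides them.
    # Yet anyone can open the source and point to this exact line,
    # which is precisely what a trained neural network does not allow.
    return math.pi * radius ** 2

print(calculate_area(2.0))  # prints 12.566370614359172
```

In a neural network, the "formula" is instead smeared across millions of learned weights, with no single line to inspect.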

Causality is simpler to understand. In programming it is an explicit if-then relationship, e.g., if the water level exceeds 2m, sound the alarm. It is less clear-cut in AI, where a car loan application could be rejected based on patterns and correlations. For example, one applicant over 50 years old might have a loan application approved while another 50-year-old is rejected.

The screening model might have picked up on other factors, such as a history of hospitalization at an eye hospital or a recently issued speeding ticket. In other words, AI systems learn correlations, but not necessarily causes.
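In code, the water-level rule above is a one-line causal statement that can be read directly from the source (a minimal sketch; the names are hypothetical):

```python
ALARM_THRESHOLD_M = 2.0  # the 2m threshold from the example above

def should_sound_alarm(water_level_m: float) -> bool:
    # Explicit if-then causality: the cause of an alarm is always traceable
    # to this single comparison, unlike a learned model's correlations.
    return water_level_m > ALARM_THRESHOLD_M

print(should_sound_alarm(2.5))  # True
print(should_sound_alarm(1.2))  # False
```

A loan-screening neural network has no such line; its "rule" is an emergent statistical pattern.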

Interpretability, simply put, asks: do you understand how the software arrived at its final output or decision? AI output can be opaque, and special tools are sometimes needed to make its decisions intelligible.

Once that is done, the following will be developed:

  • A set of effective tools for analyzing and repairing neural networks, including testing engines, debuggers and verifiers.
  • Certification standards that provide actionable guidelines for achieving different levels of quality control.
  • Principled processes for developing AI systems, with best practices and guidelines from researchers as well as practitioners.

“This project is a huge one, and the research group under each Co-PI works on a subset of the problems above,” Yang explains. “I work with [UOB Chair Professor of Computer Science] David Lo, and our responsibility is to understand the concerns and challenges developers face when developing AI-enabled systems in practice, as well as to extract the best practices and guidelines from AI researchers and practitioners.”

The impact

Examples of AI-enabled systems include autonomous driving, image recognition, and smart traffic lights. “My research in this project focuses on an important phase of AI: How AI is integrated into software in practice and what the challenges, solutions, concerns, and practices are in this important phase,” Yang tells the Office of Research Governance & Administration (ORGA).

“For example, we suggest that it is important to write well-structured documentation for an open-source model to be more easily adapted in other software.”

The real-world impact of Yang's work is substantial. Clear and comprehensive documentation could help smooth deployment by listing hardware requirements and alternatives for cases where software fails to work on certain devices.

Proper documentation also facilitates faster adoption by showing developers how to plug AI models into systems, be they for autonomous driving, supply chain optimization, or smart assistants such as Amazon’s Alexa and Google Assistant.

Yang’s work on the project ties in with some of his other collaborations, one of which involves interviewing AI practitioners from the industry to understand the challenges and solutions to ensure the quality of AI systems, and validating findings by conducting surveys to collect the opinions and practices of AI developers.

More research, more impact

Yang also recently published a paper titled “Unveiling Memorization in Code Models” that looked at AI models trained to understand and generate computer code. As written in the paper, these models “automate a series of critical tasks such as defect prediction, code review, code generation and software questions analysis.”

While code models make it easier to write and maintain code, they do so by being trained on a lot of data, so much so that they memorize frequently occurring code.

“Generally, language models are trained on a large corpus of code, aiming to learn ‘given a piece of code, what are the next tokens/code snippets,'” explains Yang. “There exist many code clones (identical code) in the training data, and the code models will learn such information very well, just like memorizing some training data.

“Code models may memorize the information belonging to one developer and expose the information to another, which may cause some concerns,” he adds.

These include security breaches (models leaking passwords and database credentials), intellectual property theft (proprietary algorithms and licensed code being exposed), vulnerability propagation (insecure code patterns spreading to new applications), and privacy violations (personal information and sensitive business data being exposed).

How does Yang's work address this issue? "We prompt the model to generate a large number of code snippets and identify those that can also be found in the training data via a technique called 'code clone detection,'" says Yang.

“In the paper, we aim to expose the problem of memorization and not to address it. We have recently published another paper on mitigating privacy information leakage in code models.”
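The paper's pipeline is considerably more sophisticated, but the core idea of exact-match clone detection, flagging generated snippets that also occur in the training data, can be illustrated with a small sketch (all names and data here are hypothetical):

```python
def normalize(snippet: str) -> str:
    # Collapse whitespace so trivially reformatted clones still match exactly.
    return " ".join(snippet.split())

def find_memorized(generated: list[str], training_corpus: list[str]) -> list[str]:
    # Flag generated snippets that also appear (verbatim, modulo whitespace)
    # in the training data: the simplest form of code clone detection.
    corpus = {normalize(s) for s in training_corpus}
    return [s for s in generated if normalize(s) in corpus]

training = ["def add(a, b): return a + b"]
outputs = ["def  add(a, b):   return a + b",  # a memorized clone
           "def mul(a, b): return a * b"]     # genuinely new code
print(find_memorized(outputs, training))      # only the clone is flagged
```

Real clone detectors also catch near-clones with renamed variables or reordered statements, which a pure exact match like this would miss.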

The impact of this particular piece of research lies in better preserving the privacy of developers in the era of large language models. He explains, “Specifically, we design a new ‘machine unlearning’ method to guide the model to ‘forget’ the privacy information while preserving its general knowledge. When the new model is deployed, it can still generate the correct code upon user request, but will use a placeholder when privacy information is likely to be involved.”
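The unlearning method itself is a training-time technique and cannot be captured in a few lines. Purely to illustrate the intended runtime behavior described above (a placeholder emitted where privacy information would otherwise appear), here is a hypothetical post-processing sketch; it is not the paper's method, and the patterns are invented for illustration:

```python
import re

# Hypothetical pattern: a deployed system would rely on the unlearned model
# itself to avoid emitting secrets, not on a regex filter like this one.
SECRET_RE = re.compile(r'(?i)\b(password|api_key|secret)(\s*=\s*)"[^"]*"')

def redact(code: str) -> str:
    # Keep the variable name and assignment, swap the value for a placeholder.
    return SECRET_RE.sub(r'\1\2"<PLACEHOLDER>"', code)

print(redact('password = "hunter2"'))  # password = "<PLACEHOLDER>"
print(redact('x = 1'))                 # unchanged: x = 1
```

The "correct code on request, placeholder for secrets" behavior Yang describes is the same idea, achieved inside the model rather than by filtering its output.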

Provided by
Singapore Management University

Citation:
Amplifying AI’s impact by making it understandable (2025, August 29)
retrieved 29 August 2025
from https://techxplore.com/news/2025-08-amplifying-ai-impact.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.




