AI efficiency advances with spintronic memory chip that combines storage and processing

By Simon Osuji
October 30, 2025, in Artificial Intelligence


New digital compute-in-memory chip could boost speed and energy efficiency of AI
[Figure] The experimental and measurement platform that the team used to evaluate the nvDCIM chips. Credit: Nature Electronics (2025). DOI: 10.1038/s41928-025-01479-y

To make accurate predictions and reliably complete desired tasks, most artificial intelligence (AI) systems need to rapidly analyze large amounts of data. This currently entails the transfer of data between processing and memory units, which are separate in existing electronic devices.
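The cost of shuttling data between separate memory and processing units (the so-called "memory wall") can be sketched with a toy energy model. The per-operation energy figures below are rough, illustrative orders of magnitude drawn from commonly cited hardware estimates, not measurements from the chip described in this article:

```python
# Toy model of the "memory wall": for a matrix-vector multiply, estimate how
# much energy goes to moving weights from off-chip DRAM versus doing the
# arithmetic itself. Both energy constants are illustrative assumptions.

E_DRAM_ACCESS_PJ = 640.0   # ~pJ to fetch 32 bits from off-chip DRAM (assumed)
E_MAC_PJ = 3.1             # ~pJ for one 32-bit multiply-accumulate (assumed)

def energy_breakdown(rows: int, cols: int) -> tuple[float, float]:
    """Energy (pJ) to fetch every weight once vs. to compute the full MVM."""
    n_weights = rows * cols
    move = n_weights * E_DRAM_ACCESS_PJ   # each weight crosses the memory bus
    compute = n_weights * E_MAC_PJ        # one MAC per weight
    return move, compute

move, compute = energy_breakdown(1024, 1024)
print(f"data movement: {move / 1e6:.1f} uJ, compute: {compute / 1e6:.1f} uJ")
print(f"movement costs about {move / compute:.0f}x the compute energy")
```

Under these assumed numbers, data movement dominates by roughly two orders of magnitude, which is the gap compute-in-memory designs aim to close.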


Over the past few years, many engineers have been trying to develop new hardware, known as compute-in-memory (CIM) systems, that could run AI algorithms more efficiently. CIM systems are electronic components that can both perform computations and store information, typically serving as both processors and non-volatile memories. "Non-volatile" means that they retain data even when they are powered off.

Most previously introduced CIM designs rely on analog computing approaches, which perform calculations by manipulating continuous electrical quantities such as current. Despite their good energy efficiency, analog computing techniques are significantly less precise than digital methods and often fail to reliably handle large AI models or vast amounts of data.

Researchers at Southern University of Science and Technology, Xi’an Jiaotong University and other institutes recently developed a promising new CIM chip that could help to run AI models faster and more energy efficiently.

Their proposed system, outlined in a paper published in Nature Electronics, is based on a so-called spin-transfer torque magnetic-random access memory (STT-MRAM), a spintronic device that can store binary units of information (i.e., 0s and 1s) in the magnetic orientation of one of its underlying layers.

Using spintronics to run AI more efficiently

STT-MRAM devices, like the one employed by this research team, essentially consist of a tiny structure known as a magnetic tunnel junction (MTJ). This structure has three layers: a magnetic layer with a fixed ("pinned") orientation, a magnetic ("free") layer whose orientation can be switched, and a thin insulating barrier that separates the other two layers.

When the two magnetic layers have parallel magnetic orientations, electrons can tunnel easily through the device, but when they are antiparallel, the resistance increases and the flow of electrons becomes more difficult. STT-MRAM devices leverage these two distinct resistance states to store binary data.
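The mapping from the MTJ's two resistance states to a stored bit can be sketched with a few lines of code. The resistance and TMR (tunnel magnetoresistance) values below are illustrative assumptions of typical orders of magnitude, not parameters from the paper:

```python
# Sketch of how an MTJ's two resistance states encode one bit. The TMR ratio
# relates the antiparallel and parallel resistances: R_AP = R_P * (1 + TMR).
# All numeric values here are illustrative assumptions.

R_P = 5_000.0             # ohms, parallel (low-resistance) state -> logical 0
TMR = 1.5                 # 150% TMR ratio (assumed, typical order for STT-MRAM)
R_AP = R_P * (1.0 + TMR)  # antiparallel (high-resistance) state -> logical 1

def read_bit(resistance: float) -> int:
    """Read the stored bit by comparing the cell's resistance to a midpoint reference."""
    r_ref = (R_P + R_AP) / 2.0
    return 1 if resistance > r_ref else 0

assert read_bit(R_P) == 0 and read_bit(R_AP) == 1
```

A larger TMR ratio widens the gap between the two states, which makes the read comparison more robust against device-to-device variation.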

“Non-volatile CIM macros (i.e., pre-designed functional modules inside a chip that can both process and store data) can reduce data transfer between processing and memory units, providing fast and energy-efficient artificial intelligence computations,” wrote Humiao Li, Zheng Chai and their colleagues in their paper.

“However, the non-volatile CIM architecture typically relies on analog computing, which is limited in terms of accuracy, scalability and robustness. We report a 64-kb non-volatile digital compute-in-memory macro based on 40-nm STT-MRAM technology.”

A step toward more scalable AI hardware

The STT-MRAM-based module introduced by the researchers can reliably perform computations and store bits, all within a single device. In initial tests, it ran two distinct types of neural networks with remarkable speed and accuracy.

“Our macro features in situ multiplication and digitization at the bitcell level, precision-reconfigurable digital addition and accumulation at the macro level and a toggle-rate-aware training scheme at the algorithm level,” wrote the authors. “The macro supports lossless matrix–vector multiplications with flexible input and weight precisions (4, 8, 12 and 16 bits), and can achieve a software-equivalent inference accuracy for a residual network at 8-bit precision and physics-informed neural networks at 16-bit precision.

“Our macro has computation latencies of 7.4–29.6 ns and energy efficiencies of 7.02–112.3 tera-operations per second per watt for fully parallel matrix–vector multiplications across precision configurations ranging from 4 to 16 bits.”
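The "lossless" property of a fully digital CIM macro can be illustrated with a bit-serial matrix-vector multiply: each input bit-plane is combined with the stored weights and the partial products are accumulated with digital shift-and-add, so the result is bit-exact, unlike analog current summation. This is a minimal sketch of the general bit-serial technique, not the macro's actual circuit-level implementation; the precisions chosen are illustrative:

```python
import numpy as np

# Bit-serial digital MVM: process the input vector one bit-plane at a time
# and accumulate shifted partial products. Because every step is integer
# arithmetic, the result exactly matches a conventional MVM ("lossless").

def bit_serial_mvm(W: np.ndarray, x: np.ndarray, in_bits: int = 8) -> np.ndarray:
    """Exact integer matrix-vector product computed bit-plane by bit-plane."""
    acc = np.zeros(W.shape[0], dtype=np.int64)
    for b in range(in_bits):          # loop over input bit-planes, LSB first
        plane = (x >> b) & 1          # 0/1 vector: the b-th bit of each input
        acc += (W @ plane) << b       # digital shift-and-add accumulation
    return acc

rng = np.random.default_rng(0)
W = rng.integers(-8, 8, size=(4, 16)).astype(np.int64)   # 4-bit signed weights
x = rng.integers(0, 256, size=16).astype(np.int64)       # 8-bit unsigned inputs
assert np.array_equal(bit_serial_mvm(W, x, in_bits=8), W @ x)  # bit-exact
```

Because the loop runs once per input bit, latency scales with the chosen precision, which is consistent with the macro reporting a range of latencies across its 4- to 16-bit configurations.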

In the future, the team’s newly developed CIM module could contribute to the energy-efficient deployment of AI directly on portable devices, without having to rely on large datacenters. Over the next few years, it could also inspire the development of similar CIM systems based on STT-MRAMs or other spintronic devices.

Written by Ingrid Fadelli, edited by Lisa Lock, and fact-checked and reviewed by Robert Egan.

More information:
Humiao Li et al, A lossless and fully parallel spintronic compute-in-memory macro for artificial intelligence chips, Nature Electronics (2025). DOI: 10.1038/s41928-025-01479-y

© 2025 Science X Network

Citation:
AI efficiency advances with spintronic memory chip that combines storage and processing (2025, October 29)
retrieved 29 October 2025
from https://techxplore.com/news/2025-10-ai-efficiency-advances-spintronic-memory.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




