
Circumventing a long-time frustration in neural computing

By Simon Osuji
December 18, 2024
in Artificial Intelligence


Illustration depicting the method of random noise training and its effects. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.16731

The human brain begins learning through spontaneous random activity even before it receives sensory information from the external world. A new technique developed by a KAIST research team mimics this: by pre-learning random information, a brain-mimicking artificial neural network learns much faster and more accurately when it is later exposed to actual data. The work is expected to be a breakthrough for the development of brain-based artificial intelligence and neuromorphic computing.


Professor Se-Bum Paik's research team in the Department of Brain and Cognitive Sciences solved the weight transport problem, a long-standing challenge in neural network learning, and in doing so explained the principles that enable resource-efficient learning in biological neural networks. The findings are posted to the arXiv preprint server.

Over the past several decades, the development of artificial intelligence has been based on error backpropagation learning, proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation was thought to be impossible in biological brains, because it requires the unrealistic assumption that each individual neuron knows all of the connection weights across multiple layers in order to compute its error signal for learning.
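The obstacle is easiest to see in the backward pass itself. Below is a minimal sketch (a toy two-layer network with a mean-squared-error loss; all shapes and names are illustrative assumptions, not the paper's model) of where backpropagation demands weight transport: the error signal reaching the hidden layer is routed through the transpose of the very same forward weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10)           # input vector
W1 = rng.standard_normal((20, 10))    # input -> hidden weights
W2 = rng.standard_normal((5, 20))     # hidden -> output weights

# Forward pass
h = np.tanh(W1 @ x)                   # hidden activity
y = W2 @ h                            # network output
target = rng.standard_normal(5)
e = y - target                        # output error (gradient of the MSE loss)

# Backward pass: the hidden-layer error signal is computed with W2.T,
# so each hidden neuron would need to know all of its outgoing weights.
# This requirement is the weight transport problem.
delta_h = (W2.T @ e) * (1 - h**2)     # weight transport happens here
grad_W2 = np.outer(e, h)
grad_W1 = np.outer(delta_h, x)
```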

This difficulty, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, soon after error backpropagation learning was proposed by Hinton in 1986. Since then, it has been regarded as the reason the operating principles of natural and artificial neural networks would forever remain fundamentally different.

Illustration depicting the meta-learning effect of random noise training. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.16731

At the boundary between artificial intelligence and neuroscience, researchers including Hinton have continued attempting to create biologically plausible models that implement the learning principles of the brain by solving the weight transport problem.

In 2016, a joint research team from Oxford University and DeepMind in the U.K. first showed that error backpropagation learning is possible without weight transport, drawing attention from the academic world. However, biologically plausible error backpropagation without weight transport remained inefficient, with slow learning and low accuracy, making it difficult to apply in practice.
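A minimal sketch of that 2016 idea, widely known as feedback alignment, using the same toy setup as above: the backward pass routes the error through a fixed random matrix B instead of the transpose of the forward weights, so no neuron needs knowledge of its downstream connections. The class name and hyperparameters here are assumptions for illustration.

```python
import numpy as np

class FANet:
    """Toy two-layer network trained with feedback alignment (no weight transport)."""
    def __init__(self, seed=1):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((20, 10)) * 0.1   # input -> hidden
        self.W2 = rng.standard_normal((5, 20)) * 0.1    # hidden -> output
        self.B = rng.standard_normal((20, 5)) * 0.1     # fixed random feedback path

    def train_step(self, x, target, lr=0.01):
        h = np.tanh(self.W1 @ x)
        y = self.W2 @ h
        e = y - target
        delta_h = (self.B @ e) * (1 - h**2)   # B replaces W2.T: no weight transport
        self.W2 -= lr * np.outer(e, h)
        self.W1 -= lr * np.outer(delta_h, x)
        return float(e @ e)                   # squared error, for monitoring
```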

The KAIST research team noted that the biological brain begins learning through internal, spontaneous random neural activity even before it has any sensory experience of the external world. To mimic this, the team pre-trained a biologically plausible neural network, one without weight transport, on meaningless random information (random noise).
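Translated into the feedback-alignment sketch above, the recipe looks roughly like this. The noise distribution, dimensions, and number of pretraining steps are illustrative assumptions; the paper's actual protocol is in the arXiv preprint.

```python
rng = np.random.default_rng(2)
net = FANet()

# Random-noise pretraining: the network learns from meaningless
# input/target pairs before it ever sees real data.
for _ in range(5000):
    x_noise = rng.standard_normal(10)   # random "sensory" input
    t_noise = rng.standard_normal(5)    # random, meaningless target
    net.train_step(x_noise, t_noise)
```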

Illustration depicting research on understanding the brain’s operating principles through artificial neural networks. Credit: arXiv (2024). DOI: 10.48550/arxiv.2405.16731

As a result, they showed that the symmetry between the forward and backward connections of the neural network, an essential condition for error backpropagation learning, can emerge on its own. In other words, learning without weight transport becomes possible through random pre-training.
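In the sketch above, that emergent symmetry can be checked directly: after noise pretraining, the transpose of the forward weights W2 tends to align with the fixed feedback matrix B, measured here as a cosine similarity. This is the forward/backward agreement that standard backpropagation would otherwise obtain by weight transport.

```python
# Cosine similarity between the transposed forward weights and the fixed
# feedback weights; values well above 0 indicate emergent alignment.
alignment = float(np.sum(net.W2.T * net.B)
                  / (np.linalg.norm(net.W2) * np.linalg.norm(net.B)))
print(f"alignment(W2.T, B) = {alignment:.2f}")
```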

The research team found that learning random information before learning actual data confers a meta-learning property, that of "learning how to learn." Neural networks pre-trained on random noise learned much faster and more accurately when exposed to actual data, achieving high learning efficiency without weight transport.
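A toy way to probe that claim with the sketches above: stand in for "actual data" with a fixed random linear teacher (purely an assumption for illustration; the paper's benchmarks are different) and compare a noise-pretrained network against a freshly initialized one.

```python
teacher = np.random.default_rng(3).standard_normal((5, 10))

def final_loss(model, n_steps=500, seed=4):
    """Train on the teacher task and report the average loss late in training."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_steps):
        x = rng.standard_normal(10)
        losses.append(model.train_step(x, teacher @ x))
    return float(np.mean(losses[-100:]))

fresh = FANet(seed=5)                          # no pretraining
print("fresh network:   ", final_loss(fresh))
print("noise-pretrained:", final_loss(net))    # `net` was noise-pretrained above
```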

Professor Se-Bum Paik said, "It breaks the conventional understanding in machine learning that only learning from data matters, and provides a new perspective that focuses on the neuroscientific principle of creating appropriate conditions before learning.

“It is significant in that it solves important problems in artificial neural network learning through clues from developmental neuroscience, and at the same time provides insight into the brain’s learning principles through artificial neural network models.”

More information:
Jeonghwan Cheon et al, Pretraining with Random Noise for Fast and Robust Learning without Weight Transport, arXiv (2024). DOI: 10.48550/arxiv.2405.16731

Journal information: arXiv

Provided by The Korea Advanced Institute of Science and Technology (KAIST)

Citation: Circumventing a long-time frustration in neural computing (2024, December 18), retrieved 18 December 2024 from https://techxplore.com/news/2024-12-circumventing-frustration-neural.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.





