In a sense, each of us begins life ready for action. Many animals perform amazing feats soon after they’re born. Spiders spin webs. Whales swim. But where do these innate abilities come from? Obviously, the brain plays a key role as it contains the trillions of neural connections needed to control complex behaviors.
However, the genome has space for only a small fraction of that information. This paradox has stumped scientists for decades. Now, Cold Spring Harbor Laboratory (CSHL) Professors Anthony Zador and Alexei Koulakov have devised a potential solution using artificial intelligence.
When Zador first encounters this problem, he puts a new spin on it. “What if the genome’s limited capacity is the very thing that makes us so smart?” he wonders. “What if it’s a feature, not a bug?”
In other words, maybe we can act intelligently and learn quickly because the genome’s limits force us to adapt. This is a big, bold idea—tough to demonstrate. After all, we can’t stretch lab experiments across billions of years of evolution. That’s where the idea of the genomic bottleneck algorithm emerges.
In AI, generations don’t span decades. New models are born with the push of a button. Zador, Koulakov, and CSHL postdocs Divyansha Lachi and Sergey Shuvaev set out to develop a computer algorithm that folds heaps of data into a neat package—much like our genome might compress the information needed to form functional brain circuits. They then test this algorithm against AI networks that undergo multiple training rounds.
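The general idea behind such a compression scheme can be sketched in a few lines. The following is a simplified illustration, not the authors' actual algorithm: a tiny "genome" network predicts each synaptic weight from the identities of the pre- and post-synaptic neurons, so only the genome network's parameters need to be stored. All sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 256x256 weight matrix (65,536 weights)
# regenerated from a "genome" network with far fewer parameters.
n_neurons = 256
code_bits = 8   # each neuron gets an 8-bit binary identity code
hidden = 16     # hidden width of the genome network

# The "genome": a tiny two-layer MLP. Its parameters are the only
# thing that would need to be stored (or inherited).
W1 = rng.standard_normal((2 * code_bits, hidden))
w2 = rng.standard_normal(hidden)

def neuron_code(i):
    """Binary identity code for neuron i."""
    return np.array([(i >> b) & 1 for b in range(code_bits)], dtype=float)

def genome_weight(pre, post):
    """The genome network predicts the weight for one (pre, post) pair."""
    x = np.concatenate([neuron_code(pre), neuron_code(post)])
    return np.tanh(x @ W1) @ w2

# "Unfold" the full connectivity matrix from the genome.
W = np.array([[genome_weight(i, j) for j in range(n_neurons)]
              for i in range(n_neurons)])

genome_params = W1.size + w2.size
print(f"weights generated: {W.size}, genome parameters: {genome_params}, "
      f"compression: {W.size / genome_params:.0f}x")
```

In this toy setup, 65,536 weights are regenerated from 272 genome parameters, a roughly 240-fold compression; the real algorithm is trained so that the unfolded network actually performs well.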
The study is published in the journal Proceedings of the National Academy of Sciences.
Amazingly, they find the new, untrained algorithm performs tasks like image recognition almost as effectively as state-of-the-art AI. Their algorithm even holds its own in video games like Space Invaders. It’s as if it innately understands how to play.
Does this mean AI will soon replicate our natural abilities? “We haven’t reached that level,” says Koulakov. “The brain’s cortical architecture can fit about 280 terabytes of information—32 years of high-definition video. Our genomes accommodate about one hour. This implies a 400,000-fold compression, a level that technology cannot yet match.”
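Koulakov's figures can be checked with back-of-the-envelope arithmetic. The HD bitrate below is an assumed value chosen to make the "one hour of video" comparison concrete, not a number from the study:

```python
# Back-of-the-envelope check of the compression figure quoted above.
cortex_bytes = 280e12    # ~280 terabytes of cortical wiring information
compression = 400_000    # the quoted compression factor

genome_bytes = cortex_bytes / compression
print(f"implied genome capacity: {genome_bytes / 1e9:.1f} GB")

# Sanity check: ~0.7 GB is on the order of one hour of HD video
# at a modest streaming bitrate (assumed ~1.5 megabits per second).
hd_bitrate_bps = 1.5e6
one_hour_bytes = hd_bitrate_bps * 3600 / 8
print(f"one hour of HD video at 1.5 Mbps: {one_hour_bytes / 1e9:.2f} GB")
```

The implied genome capacity of about 0.7 gigabytes is consistent with the "one hour of video" comparison, and with the roughly 750 megabytes needed to store the human genome's ~3 billion base pairs.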
Nevertheless, the algorithm allows for compression levels thus far unseen in AI. That feature could have impressive uses in tech. Shuvaev, the study’s lead author, explains, “For example, if you wanted to run a large language model on a cell phone, one way [the algorithm] could be used is to unfold your model layer by layer on the hardware.”
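Shuvaev's layer-by-layer unfolding idea can be sketched as follows. This is a hypothetical illustration of the memory-saving pattern he describes, not code from the study: each layer's weights are regenerated on demand from a compact per-layer representation (here a random seed stands in for a learned decompressor), so only one layer's full weights ever occupy memory at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compressed model: three layers, each stored only as a
# small "seed" rather than as a full weight matrix.
layer_shapes = [(64, 128), (128, 128), (128, 10)]
layer_seeds = [11, 22, 33]   # stand-ins for per-layer compressed parameters

def unfold_layer(seed, shape):
    """Regenerate one layer's weights from its compressed form.
    A seeded random matrix stands in for a learned decompressor."""
    return np.random.default_rng(seed).standard_normal(shape) * 0.1

def forward(x):
    """Run inference, materializing one layer's weights at a time."""
    for seed, shape in zip(layer_seeds, layer_shapes):
        W = unfold_layer(seed, shape)  # unfold this layer onto the hardware
        x = np.tanh(x @ W)
        del W                          # free it before unfolding the next
    return x

out = forward(rng.standard_normal(64))
print(out.shape)  # (10,)
```

Peak memory here is bounded by the largest single layer rather than by the whole model, which is what would let a heavily compressed model run on memory-constrained hardware such as a phone.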
Such applications could mean more evolved AI with faster runtimes. And to think, it only took 3.5 billion years of evolution to get here.
More information:
Sergey Shuvaev et al, Encoding innate ability through a genomic bottleneck, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2409160121
Cold Spring Harbor Laboratory
Citation:
The next evolution of AI begins with ours: Neuroscientists devise a potential explanation for innate ability (2024, November 25)
retrieved 25 November 2024
from https://techxplore.com/news/2024-11-evolution-ai-neuroscientists-potential-explanation.html