
Turns out, training artificial intelligence systems is not unlike raising a child. That’s why some AI researchers have begun mimicking the way children naturally acquire knowledge and learn about the world around them—through exploration, curiosity, gradual learning, and positive reinforcement.
“A lot of problems with AI algorithms today could be addressed by taking ideas from neuroscience and child development,” says Christopher Kanan, an associate professor in the Department of Computer Science at the University of Rochester, and an expert in artificial intelligence, continual learning, vision, and brain-inspired algorithms.
Of course, learning and being able to reason like a human—just faster and possibly better—opens up questions about how best to keep humans safe from ever-advancing AI systems. That’s why Kanan says all AI systems need to have guardrails built in, and that waiting until the very end of development to add them is too late. “It shouldn’t be the last step, otherwise we can unleash a monster.”
What is artificial general intelligence and how does it differ from other types of AI?
AI involves creating computer systems that can perform tasks that typically require human intelligence, such as perception, reasoning, decision-making, and problem-solving. Traditionally, much of AI research has focused on building systems designed for specific tasks—so-called artificial narrow intelligence (ANI). Examples include systems for image recognition, voice assistants, or playing strategic games, all of which can perform their tasks exceptionally well, often surpassing humans.
Then there is artificial general intelligence (AGI), which aims to build systems capable of understanding, reasoning, and learning across a wide range of tasks, much like humans do. Achieving AGI remains a major goal in AI research but has not yet been accomplished. Beyond AGI lies artificial superintelligence (ASI)—a form of AI vastly exceeding human intelligence in virtually every domain, which remains speculative and is currently confined to science fiction.
In my lab, we’re particularly interested in moving closer to artificial general intelligence by drawing inspiration from neuroscience and child development, enabling AI systems to learn and adapt continually, much like human children do.
What are some of the ways that AI can ‘learn’?
ANI owes its success largely to deep learning, which since about 2014 has been used to train these systems on large amounts of human-annotated data. Deep learning involves training large artificial neural networks composed of many interconnected layers. Today, deep learning underpins most modern AI applications, from computer vision and natural language processing to robotics and biomedical research. These systems excel at tasks like image recognition, language translation, playing complex games such as Go and chess, and generating text, images, and even code.
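To make that recipe concrete, here is a minimal sketch of a deep network trained on human-labeled data, assuming PyTorch and an invented 10-class image-labeling task; it illustrates the general pattern rather than any particular system mentioned above:

```python
# Minimal sketch: a small deep network trained on human-annotated examples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),             # flatten 28x28 images into vectors
    nn.Linear(28 * 28, 128),  # first learned layer
    nn.ReLU(),                # nonlinearity between layers
    nn.Linear(128, 10),       # output: one score per class label
)

loss_fn = nn.CrossEntropyLoss()  # compares predictions with human labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a synthetic stand-in for an annotated batch.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()   # backpropagation adjusts every layer at once
optimizer.step()
```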
A large language model (LLM) like OpenAI’s GPT-4 is trained on enormous amounts of text using self-supervised learning. This means the model learns by predicting the next word or phrase from existing text, without explicit human guidance or labels. These models are typically trained on trillions of words—essentially the entirety of human writing available online, including books, articles, and websites. To put this in perspective, if a human attempted to read all this text, it would take tens of thousands of lifetimes.
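A toy illustration of that idea (a sketch of the objective only, not OpenAI’s actual training code): the targets are simply the same token sequence shifted by one position, so the text supervises itself. A small recurrent network stands in here for a real transformer:

```python
# Self-supervised next-token prediction: the labels come from the text itself.
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
embed = nn.Embedding(vocab_size, dim)
encoder = nn.LSTM(dim, dim, batch_first=True)  # stand-in for a transformer
head = nn.Linear(dim, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))   # a tokenized sentence
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token

hidden, _ = encoder(embed(inputs))
logits = head(hidden)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # no human annotation needed anywhere in this loop
```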
Following this extensive initial training, the model undergoes supervised fine-tuning, where humans provide examples of preferred outputs, guiding the model toward generating responses that align closely with human preferences. Lastly, techniques such as reinforcement learning with human feedback (RLHF) are applied to shape the model’s behavior by defining acceptable boundaries for what it can or cannot generate.
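The preference-modeling step behind RLHF can be sketched as follows; this is a simplified illustration with made-up response embeddings, not a production pipeline. A reward model is trained so that the response humans preferred scores higher than the one they rejected, and that learned reward then steers the language model:

```python
# Simplified RLHF preference step: learn a reward from human comparisons.
import torch
import torch.nn as nn

reward_model = nn.Linear(64, 1)  # stand-in: response embedding -> scalar score

# Hypothetical embeddings of two responses to the same prompt,
# where annotators preferred the first.
preferred = torch.randn(8, 64)
rejected = torch.randn(8, 64)

r_pref = reward_model(preferred)
r_rej = reward_model(rejected)

# Bradley-Terry-style loss: push preferred rewards above rejected ones.
loss = -nn.functional.logsigmoid(r_pref - r_rej).mean()
loss.backward()
# The fitted reward model then guides reinforcement learning on the LLM,
# shaping what it will and will not generate.
```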
What are AIs really good at?
They are excellent at tasks involving human languages, including translation, essay writing, text editing, providing feedback, and acting as personalized writing tutors.
They can pass standardized tests. For example, OpenAI’s GPT-4 achieves top-tier scores on challenging exams such as the Bar Exam (90th percentile), LSAT (88th percentile), GRE Quantitative (80th percentile), GRE Verbal (99th percentile), USMLE, and several Advanced Placement tests. They even excel on Ph.D.-level math exams. Surprisingly, some studies suggest they score higher than humans on measures of emotional intelligence.
Beyond tests, LLMs can serve as co-scientists, assisting researchers in generating novel hypotheses, drafting research proposals, and synthesizing complex scientific literature. They’re increasingly being incorporated into multimodal systems designed for vision-language tasks, robotics, and real-world action planning.
What are some of the current limitations of generative AI tools?
LLMs can still “hallucinate,” which means they confidently produce plausible-sounding but incorrect information. Their reasoning and planning capabilities, while rapidly improving, are still limited compared to human-level flexibility and depth. And they don’t continually learn from experience; their knowledge is effectively frozen after training, meaning they lack awareness of recent developments or ongoing changes in the world.
Current generative AI systems also lack metacognition, which means they typically don’t know what they don’t know, and they rarely ask clarifying questions when faced with uncertainty or ambiguous prompts. This absence of self-awareness limits their effectiveness in real-world interactions.
Humans excel at continual learning, where early-acquired skills serve as the basis for increasingly complex abilities. For instance, infants must first master basic motor control before progressing to walking, running, or even gymnastics. Today’s LLMs neither demonstrate nor are effectively evaluated on this type of cumulative, forward-transfer learning. Addressing this limitation is a primary goal of my lab’s research.
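One schematic way to picture that evaluation gap (an illustrative sketch, not the lab’s actual benchmark): forward transfer can be measured by comparing accuracy on each new task after sequential training against accuracy when training on that task from scratch:

```python
# Schematic forward-transfer measurement for a sequence of tasks.
def forward_transfer(acc_after_sequence, acc_from_scratch):
    """Positive values mean earlier tasks gave a head start on later ones."""
    return [round(seq - scratch, 2)
            for seq, scratch in zip(acc_after_sequence, acc_from_scratch)]

# Hypothetical accuracies on tasks 2-4 of a curriculum.
print(forward_transfer([0.82, 0.77, 0.71], [0.75, 0.74, 0.70]))
# -> [0.07, 0.03, 0.01]: cumulative learning helped on every new task
```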
What main challenges and risks does AI pose?
Generative AI is already significantly transforming the workplace. It’s particularly disruptive for white-collar roles—positions that traditionally require specialized education or expertise—because AI copilots let individual workers substantially increase their productivity; they can elevate novices to a level of performance closer to that of experts. This increased productivity means companies could operate effectively with significantly fewer employees, raising the possibility of large-scale reductions in white-collar roles across many industries.
In contrast, jobs requiring human dexterity, creativity, leadership, and direct physical interaction, such as skilled trades, health care positions involving direct patient care, or craftsmanship, are unlikely to be replaced by AI anytime soon.
While scenarios like Nick Bostrom’s famous “Paperclip Maximizer,” in which AGI inadvertently destroys humanity, are commonly discussed, I think the greater immediate risk comes from humans who may deliberately use advanced AI for catastrophic purposes. Efforts should focus on international cooperation, responsible development, and investment in academic AI safety research.
To ensure AI is developed and used safely, we need regulation around specific applications. Interestingly, the people asking for government regulation now are the ones who run the AI companies. But personally, I’m also worried about regulation that could eliminate open-source AI efforts, stifle innovation, and concentrate the benefits of AI among the few.
What are the chances of achieving artificial general intelligence (AGI)?
The three “godfathers” of modern AI and Turing Award winners—Yoshua Bengio, Geoffrey Hinton, and Yann LeCun—all agree that achieving AGI is possible. Recently, Bengio and Hinton have expressed significant concern, cautioning that AGI could potentially pose an existential risk to humanity. Nevertheless, I don’t think any of them—or I—believe that today’s LLM architectures alone will be sufficient to achieve true AGI.
LLMs inherently reason using language, whereas for humans, language primarily serves as a means of communication rather than a primary medium for thought itself. This reliance on language constrains the ability of LLMs to engage in abstract reasoning or visualization, limiting their potential for broader, human-like intelligence.