The Nobel Committee for Physics caught the academic community off guard by awarding the 2024 prize to John J. Hopfield and Geoffrey E. Hinton for their foundational work on neural networks.
The pair won the prize for their seminal papers, both published in the 1980s, that described rudimentary neural networks. Though much simpler than the networks used for modern generative AI like ChatGPT or Stable Diffusion, their ideas laid the foundations on which later research built.
Even Hopfield and Hinton didn’t believe they’d win, with the latter telling The Associated Press he was “flabbergasted.” After all, AI isn’t what comes to mind when most people think of physics. However, the committee took a broader view, in part because the researchers based their neural networks on “fundamental concepts and methods from physics.”
“Initially, I was surprised, given it’s the Nobel Prize in Physics, and their work was in AI and machine learning,” says Padhraic Smyth, a distinguished professor at the University of California, Irvine. “But thinking about it a bit more, it was clearer to me why [the Nobel Prize Committee] did this.” He added that physicists in statistical mechanics have “long thought” about systems that display emergent behavior.
Hopfield first explored these ideas in a 1982 paper. He described a type of neural network, later called a Hopfield network, formed by a single layer of interconnected neurons. The paper, originally categorized under biophysics, showed that such a network could retrieve entire “memories” from “any reasonably sized subpart.”
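That retrieval-from-a-subpart behavior can be illustrated with a toy sketch (not the 1982 paper’s notation; the pattern and the Hebbian storage rule here are illustrative choices): store a pattern of ±1 states in a symmetric weight matrix, then repeatedly update each neuron toward the sign of its weighted input, and a corrupted probe settles back onto the stored memory.

```python
# Toy Hopfield network: a single layer of fully connected binary
# neurons that stores a pattern with the Hebbian rule and recovers
# the stored "memory" from a corrupted copy.

def train(patterns):
    """Build a symmetric weight matrix (zero diagonal) via Hebbian learning."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Synchronously update every neuron to the sign of its weighted input."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

memory = [1, 1, 1, 1, -1, -1, -1, -1]   # stored pattern
w = train([memory])
probe = [1, -1, 1, 1, -1, -1, 1, -1]    # same pattern with two bits flipped
print(recall(w, probe) == memory)       # the corrupted probe settles on the memory
```

Each update lowers an energy function over the network’s states, which is exactly where the physics analogy enters: stored memories sit at the minima, and a partial or noisy pattern rolls downhill to the nearest one.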
Hinton expanded on that work to conceptualize the Boltzmann machine, a more complex neural network described in a 1985 paper Hinton co-authored with David H. Ackley and Terrence J. Sejnowski. They introduced the concept of “hidden units,” additional neurons that sit between the input and output layers of a network without being directly tied to either. Hidden units make it possible to handle tasks that require a more generalized understanding, such as classifying images.
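The mechanics behind a Boltzmann machine can be sketched in a few lines (a toy illustration with made-up weights, not the 1985 paper’s formulation): the network assigns an energy to every configuration of its binary units, and each unit switches on with a probability that favors low-energy states; hidden units are simply the units never clamped to data.

```python
import math
import random

def energy(w, b, s):
    """Energy of state s: E = -sum_{i<j} w[i][j]*s_i*s_j - sum_i b[i]*s_i.
    w is a full symmetric weight matrix; b holds the unit biases."""
    n = len(s)
    e = -sum(b[i] * s[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= w[i][j] * s[i] * s[j]
    return e

def gibbs_step(w, b, s, i, temperature=1.0):
    """Resample binary unit i given the others; units that lower the
    energy when switched on are more likely to turn on."""
    gap = b[i] + sum(w[i][j] * s[j] for j in range(len(s)) if j != i)
    p_on = 1.0 / (1.0 + math.exp(-gap / temperature))
    s[i] = 1 if random.random() < p_on else 0
    return s

# Two units joined by a positive weight "want" to be on together:
w = [[0, 2, 0], [2, 0, 0], [0, 0, 0]]
b = [0, 0, 0]
print(energy(w, b, [1, 1, 0]))  # lower energy than [1, 0, 0] or [0, 1, 0]
```

Repeated `gibbs_step` sweeps make the network sample configurations with probability governed by their energy, the Boltzmann distribution of statistical mechanics, which is the direct link to physics the prize committee cited.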
So, what’s the connection to physics?
Hopfield’s paper references the concept of a “spin glass,” a material in which disordered magnetic particles lead to complex interactions. Hinton and his co-authors drew on statistical mechanics, a field of physics that uses statistics to describe the behavior of particles in a system. They even named their network in honor of Ludwig Boltzmann, the physicist whose work formed the foundation of statistical mechanics.
And the connection between neural networks and physics isn’t a one-way street. Machine learning was crucial to the discovery of the Higgs boson, helping sift through the data generated by billions of proton collisions. This year’s Nobel Prize in Chemistry further underscored machine learning’s importance in research, as the award went to a trio of scientists recognized in part for an AI model that predicts the structures of proteins.
While Hopfield and Hinton authored influential papers, their contributions to machine learning were cemented by their continued work, and both won multiple awards before the Nobel Prize. Among others, Hopfield won the Boltzmann Medal in 2022; Hinton received the IEEE Frank Rosenblatt Award in 2014, the IEEE James Clerk Maxwell Medal in 2016, and the Turing Award in 2018 (that last one alongside Yann LeCun and Yoshua Bengio).
Smyth saw Hopfield’s efforts first-hand as a student at the California Institute of Technology. “Hopfield was able to bring together mathematicians, engineers, computer scientists, and physicists. He got them in the same room, got them excited about modeling the brain, doing pattern recognition and machine learning, unified by mathematical theories he brought in from physics.”
In 2012, Hinton co-founded a company called DNNresearch with two of his students: Ilya Sutskever, who later co-founded OpenAI, and Alex Krizhevsky. Together, the trio collaborated on AlexNet, a hugely influential neural network for computer vision. Hinton also taught at the University of Toronto, where he continued to champion machine learning.
Navdeep Jaitly, now a deep learning researcher at Apple, said Hinton inspired new generations of engineers and researchers. In Jaitly’s case, the influence was direct; Jaitly studied under Hinton at the University of Toronto.
“I came in with experience in statistical modeling,” says Jaitly, “but Hinton still managed to entirely change how I think about problem solving. In terms of his contributions to machine learning, his methods are central to almost everything we do.”