3 Big Things To Know About Geoff Hinton's Nobel Prize Winning Research



It's one of the stranger phenomena of our time: someone wins the esteemed Nobel Prize for science, and you find out they are actually deeply concerned about their own work.

That's the story with the work of Geoffrey Hinton, who became notable in the past generation for his work on neural networks and related technologies.

Hinton, of the University of Toronto, has been something of a household name in computer science; this year, he was recognized by the Nobel Prize committee for his work on neural network models and their contribution to the massive advances in technology over the past decade or so.

Here are three things you should know about this groundbreaking work, and how it's been honored by what's arguably the top award in the international science world.

The Background - Hinton and Boltzmann Machines

The first important thing to know about Hinton's research is that in delving into creating new types of neural networks, it used a lot of classical mathematics and science.

The term for Hinton's groundbreaking network model is the "restricted Boltzmann machine," or RBM. The name makes more sense once you learn that Hinton drew on the work of Austrian physicist Ludwig Boltzmann, who developed foundational ideas in statistical mechanics while studying thermodynamics and energy states.

Fast-forward to the 21st century, and scientists were drawing on these kinds of models, Markov chains, and the mathematics surrounding thermodynamics in trying to figure out how to create the models that would run machine learning and artificial intelligence programs.

Now, it's important to note that Hinton's work was derived not from actual research on the states of gases in a system, but from the Boltzmann distribution, the mathematics underlying it.
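To make that connection concrete, here is a minimal sketch (purely illustrative, not drawn from Hinton's papers) of the Boltzmann distribution itself: the probability of a state falls off exponentially with its energy, normalized by a sum over all states known as the partition function.

```python
import math

def boltzmann_probabilities(energies, temperature=1.0):
    """Probability of each state under the Boltzmann distribution:
    p(s) is proportional to exp(-E(s) / T)."""
    weights = [math.exp(-e / temperature) for e in energies]
    z = sum(weights)  # partition function (normalizing constant)
    return [w / z for w in weights]

# Lower-energy states come out exponentially more probable.
probs = boltzmann_probabilities([0.0, 1.0, 2.0])
```

In a Boltzmann machine, the "energy" of a configuration of network units plays the same role the physical energy plays here, which is exactly the bridge from thermodynamics to learning.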

"RBMs are usually trained using the contrastive divergence learning procedure," Hinton wrote in a 2010 paper on the research. "This requires a certain amount of practical experience to decide how to set the values of numerical meta-parameters such as the learning rate, the momentum, the weight-cost, the sparsity target, the initial values of the weights, the number of hidden units and the size of each mini-batch."

In any case, Hinton's work moved the ball forward, building on models developed by John Hopfield, who shared the prize, and the restricted Boltzmann machine became fundamental as a way to learn the weights between network nodes and layers.

Hinton's Network Models and Backpropagation

The second thing to know is that Hinton's work was, according to the Nobel committee, also consequential in developing a technique called backpropagation, in which a network's output error is propagated backward through its layers so that each weight can be adjusted in proportion to its contribution to that error. Techniques like backpropagation produced major advances in network systems that led to our modern use of large language models for everything from medical diagnosis to environmental research.
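As a rough illustration of the idea (not of Hinton's original formulation), here is a tiny two-layer network trained by backpropagation: the output error flows backward through the layers via the chain rule, giving a gradient for every weight. All names and the learning rate here are illustrative assumptions.

```python
import numpy as np

def backprop_step(x, y, W1, W2, lr=0.01):
    """One gradient step for a tiny two-layer network with sigmoid
    hidden units and squared-error loss, trained by backpropagation."""
    # Forward pass.
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))   # hidden activations
    y_hat = h @ W2                        # linear output layer

    # Backward pass: propagate the error back through each layer.
    err = y_hat - y                       # output-layer error signal
    grad_W2 = h.T @ err
    err_h = (err @ W2.T) * h * (1 - h)    # chain rule through the sigmoid
    grad_W1 = x.T @ err_h

    # Weight updates move against the gradient.
    return W1 - lr * grad_W1, W2 - lr * grad_W2
```

Repeating this step drives the network's outputs toward the targets; the "backward" flow of the error term is what gives the method its name.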

The Nobel committee, in its notice of Hinton's work, also credits researchers like Yann LeCun, who took this research forward with convolutional neural networks (I wrote about LeCun's presentation at a recent IIA event here). It also points to one of the earliest practical applications: image recognition of handwritten letters and numbers, which translated to ATMs and other equipment reading human handwriting, a pretty neat trick you can see at any local bank.

So that's a little bit about how this research led us to where we are now.

A Warning to Humanity

Here's the third thing to know about Hinton's work, and it's one of the more profoundly fascinating elements of how all of us are moving through the AI era.

Essentially, as widely reported in the news media, Hinton is now famously concerned about the applications of his own work.

"It's going to be wonderful in many respects," Hinton reportedly told news media in a recent interview, "It'll mean huge improvements in productivity. But we also have to worry about a number of possible bad consequences ... I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."

He has previously mentioned the possibility of new network models "taking over" and "becoming more intelligent than people," and has voiced concerns about how that will affect our societies.

With that in mind, we should all be deliberate about how we use technology, and how we see the future, given that the people behind this research are thinking a lot about it themselves.

In other words, Hinton's words stand as a stark reminder that we're not just creating new tools; we're creating a new reality, and we really have to think about how we navigate that reality in the years to come. Given Hinton's renown in the field, you have to ask yourself: was his biggest contribution his work on neural networks, his warning to the population at large, or both?
