Note: I am trying to narrow down the scope of this blog to technology, products, books and movies. It is an attempt to double down on my interests and get better in these areas.
Key Takeaways
I tend to listen to interesting podcasts while driving. Of late, I have been exploring Lex Fridman’s earlier episodes, where he focused purely on technology-related topics. I recently completed an episode about Gödel machines. (I do feel a sense of excitement when I see the two dots on the ‘o’ in Gödel.)
The episode is about a self-learning AI machine that can learn concepts and get better at learning over time, just like humans. It’s fascinating to hear about this concept from the person who invented the idea in the first place. Before delving into my own thoughts, I would love to summarise my key takeaways from the episode.
What are Gödel Machines?
Gödel Machines are a concept in theoretical computer science and artificial intelligence, introduced by Jürgen Schmidhuber in 2003. They are self-improving, general-purpose problem-solving systems that can theoretically achieve optimal performance.
Named after the logician Kurt Gödel, these machines leverage principles from Gödel’s incompleteness theorems and the idea of formal self-reference to make themselves better over time.
Self-improvement: A Gödel Machine has the ability to rewrite its own code, including the rules that govern how it rewrites itself, which allows it to improve its problem-solving abilities.
Optimality: The machine aims to achieve optimal performance in problem-solving tasks. It does this by proving, within its own formal system, that a modification to its code will be beneficial.
Proof-based rewrites: The machine only changes its own behavior if it can mathematically prove that a new version will lead to better performance. This ensures that it doesn't degrade over time.
Recursion and self-reference: It uses a formal system that includes a description of itself. By reasoning about its own structure and performance, it can make theoretically sound decisions about how to improve.
Source: ChatGPT
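To make that loop a bit more concrete, here is a minimal, purely illustrative Python sketch of how the pieces above fit together. Every name in it is a placeholder of my own (including the random stand-in for the proof searcher), not anything from Schmidhuber’s actual construction; the genuinely hard part, searching for a formal proof that a rewrite helps, is exactly what the toy `search_for_proof` glosses over.

```python
import random

# Illustrative sketch of the Godel Machine control loop (not a real implementation).
# In the real proposal, the proof searcher must find a *formal proof*, within the
# machine's own axiom system, that the rewritten code yields higher expected utility;
# here it is faked with a random check so only the loop structure is visible.


def run_policy(problem, code):
    """Placeholder: execute the machine's current problem-solving code."""
    return f"solved {problem} with a {len(code)}-character policy"


def propose_rewrite(code):
    """Placeholder: generate a candidate modification of the machine's own code."""
    return code + "\n# candidate self-modification"


def search_for_proof(old_code, new_code):
    """Placeholder for the proof searcher: returns True only if a proof of
    improvement is found (simulated randomly here)."""
    return random.random() < 0.05


def godel_machine(problems, code):
    for problem in problems:
        run_policy(problem, code)            # keep working on the task at hand
        candidate = propose_rewrite(code)    # self-reference: reason about its own code
        if search_for_proof(code, candidate):
            code = candidate                 # proof-based rewrite: change behaviour
                                             # only when provably beneficial
    return code


if __name__ == "__main__":
    print(godel_machine(range(100), "initial policy code"))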
Having this kind of machine for learning could drastically reduce computation costs and eliminate the need for huge amounts of data to create useful AI models for various applications.
Compressible Knowledge
Before listening to this episode, I felt that science and research were aimed at expanding our knowledge of the universe, and that any technological innovation was merely a side effect of scientific research.
But Schmidhuber gives a wildly different, or rather more expansive, view of the process of human scientific endeavours.
The history of science, the history of humanity, our civilization and life on Earth, is some kind of path towards greater and greater compression. What does that mean?
Hundreds of years ago, there was an astronomer whose name was Kepler, and he looked at the data points that he got by watching planets move. He had all these data points, and suddenly it turned out that he could greatly compress the data by predicting it through an ellipse.
So it turns out that all these data points are more or less on ellipses around the sun. And another guy came along whose name was Newton, and before him Hooke, and they said the same thing that is making these planets move like that is what makes the apples fall down.
And it also holds for stones and for all kinds of other objects. Suddenly, many, many of these observations became much more compressible, because as long as you can predict the next thing, given what you have seen so far, you can compress it; you don't have to store the data itself. This is called predictive coding.
And then there was still something wrong with that theory of the universe; there were deviations from the predictions of the theory. 300 years later, another guy came along whose name was Einstein, and he was able to explain away all these deviations from the predictions of the old theory through a new theory, which was called the general theory of relativity, and which at first glance looks a little bit more complicated.
You have to warp space and time, but you can phrase it within one single sentence: no matter how fast you accelerate and how hard you decelerate, and no matter what the gravity is in your local framework, light speed always looks the same.
Paraphrased from the Podcast Transcripts
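The predictive-coding idea is easy to demonstrate with a toy example (my own illustration, not something from the episode): instead of storing a smooth stream of observations directly, store a simple predictor plus the errors it makes. Because the predictor captures the regularity, the residual errors are tiny and compress far better than the raw data.

```python
import zlib
import numpy as np

# Toy illustration of predictive coding: a smooth signal (think of one coordinate of
# an orbiting planet) is stored as "first value + prediction errors" instead of as
# raw samples. The predictor here is deliberately crude: "the next value equals the
# previous one".

t = np.arange(0, 1000)
signal = np.round(1000 * np.sin(2 * np.pi * t / 365)).astype(np.int64)

residuals = np.diff(signal)  # prediction errors of the crude predictor

# We can reconstruct the original exactly from the first value plus the residuals.
reconstructed = np.concatenate([[signal[0]], signal[0] + np.cumsum(residuals)])
assert np.array_equal(reconstructed, signal)

# The residuals compress much better than the raw samples.
print("compressed raw samples:", len(zlib.compress(signal.tobytes())))
print("compressed residuals:  ", len(zlib.compress(residuals.tobytes())))
```

Kepler’s ellipse is the same trick on a grander scale: once you have the model, the deviations of the planets from it are far smaller, and therefore far cheaper to store, than the raw observations themselves.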
This explanation gives a new insight into how the human race makes progress: not so much by creating new knowledge or theories as by compressing data into small, usable pieces of knowledge.
Can Humans become a kind of Gödel Machine?
Though it is a purely theoretical idea aimed at building a self-learning machine, I feel even humans operate like a Gödel Machine. If not all of us, at least those who want to grow and be better do operate in a manner similar to Schmidhuber’s machines.
Let us revisit the characteristics of a Gödel Machine.
It rewrites its own code.
Tries to solve problems as optimally as possible.
It tries to change its ways if it thinks it can do better.
Introspects and improves continuously.
Isn’t it so elegant and beautiful? This is the manner in which people ought to function if they want to get better. How can one move forward while repeating the same mistakes and patterns again and again? Can you expect a better result by repeating the same mistakes? Of course not!
Whether we can build a Gödel machine or not, we can surely try to imitate one, along with all our flaws and emotions working in sync.