Pedro Domingos breaks down what's still missing in our race toward Artificial General Intelligence (AGI), and why the path forward requires a radical unification of AI's five foundational paradigms: the symbolists, connectionists, Bayesians, evolutionaries, and analogizers.


Pedro: 0:00

All of these five tribes are 70 years old. They were there from the beginning. This is actually remarkable. In the world of AI, where change is perpetual and ever more rapid, certain things are remarkably constant, one of which is these five paradigms. They haven't changed. They were all invented in the 50s, all of them in one way or another, and they're still the same ones today. When I talk to people about the five schools, people who are not AI experts, the one that immediately resonates with them is reasoning by analogy. Everybody can understand it, because we do it all the time.

Craig: 0:30

Create an oasis with Thuma, a modern design company that specializes in furniture and home goods. By stripping away everything but the essential, Thuma makes elevated beds with premium materials and intentional details. I'm in the process of reorganizing my house, and I'm giving Thuma a serious look for help in renovating and redesigning. Thuma combines the perfect balance of form, craftsmanship and functionality. With over 17,000 five-star reviews, the Thuma Bed Collection is proof that simplicity is the truest form of sophistication. Using the technique of Japanese joinery, pieces are crafted from solid wood and precision cut for a silent, stable foundation. With clean lines, subtle curves and minimalist style, the Thuma Bed Collection is available in four signature finishes to match any design aesthetic. Headboard upgrades are available for customization as desired. To get $100 toward your first bed purchase, go to Thuma.co/eyeonai. That's T-H-U-M-A dot C-O, slash E-Y-E-O-N-A-I.

Pedro: 2:23

Hi, I'm Pedro Domingos. I'm a professor of computer science at the University of Washington and an AI researcher. A few years ago, I wrote a book called The Master Algorithm. That was an introduction to what I call the five tribes of machine learning, the five main paradigms, and we've been going through them one at a time. We've covered the symbolists, who traditionally dominated AI, the connectionists, or deep learning, who dominate today, and the Bayesians, who have always been there and always will be. And today we're going to do the last two, which is the evolutionaries, who do AI inspired by evolution, and the analogizers, who do reasoning by analogy. Okay, well, why don't we start with the evolutionaries, or the other way around, whichever is more current?

Pedro: 3:14

I mean, they're both current. Maybe the evolutionaries are a nice segue from the connectionists, because they have something very important in common, which is they are also inspired by biology. The evolutionaries and the connectionists both believe in doing AI inspired by biology. The others don't. The others think that's a silly idea, because biology is a mess and suboptimal and blah blah. But the connectionists are inspired by the brain, the architecture of the brain. If we can reproduce that in hardware, then we're on our way. But the evolutionaries say, well, wait a second, that's really not the whole problem. Where did that architecture come from? You're only tweaking some weights. Big deal, right. What you want to know is, how do you create a brain in the first place? And, you know, nature has an algorithm for creating brains and robots and a lot of other things, and that's evolution. And you truly can think of evolution as an algorithm. In fact, people in the 19th century were already saying the equivalent thing. I forget who it was that said it. Maybe it was George Boole, right, who invented the logic that computers are based on. He said words to the effect that God does not create animals and plants. He creates the algorithm by which animals and plants are made. So what we're going to do is be a little god on the computer and mimic the evolutionary process. That's what these so-called genetic algorithms do, and they're called genetic because they're inspired by genetics.

Pedro: 4:52

And there's also this whole field of genetic programming, where you evolve programs. And it's interesting how your basic genetic algorithm is really a very literal translation to the computer of our basic understanding of evolution. There's a population of strings, literally bit strings, and then there's a series of generations. You start out with random ones. This is the amazing thing. You start out with random strings, and after a while you're doing amazing things. Like, for example, there's this guy, Hod Lipson, who was at Cornell, or maybe somewhere else now. He has this lab where they literally evolve robot insects from scratch. They start by doing it in simulation, and after a while they just manufacture them, like they 3D print them or something, and they start crawling and flying in the real world. So you start with these random strings, and then you mutate them, you cross them over, right?

Pedro: 5:42

That is the key thing in evolutionary computing that is not present in, say, gradient descent or things like that. You measure the fitness of your systems, or programs, or whatever is built from that string, at the task. There's, as usual, a reward, an objective function of some kind, and then the best-performing ones get to mate. Literally, it's like sex on a computer. And then the offspring are the new generation, and those in turn will be evaluated, and so on. So it's this very basic mimicking of evolution that works to a surprising degree. Now we can go into some of the history and why they kind of diverged from the rest of machine learning, and some people are very skeptical about it and whether they're truly capturing the important things about evolution and what the future of the whole thing is. But there's certainly a lot of interesting things there.
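To make the loop Pedro describes concrete, here is a minimal genetic-algorithm sketch in Python. The fitness function is a toy stand-in (count the one bits), not anything from the episode; in a real application it would score whatever design the string encodes.

```python
# Minimal genetic algorithm: random bit strings, fitness, mating,
# crossover, and mutation, as described above. Toy fitness function.
import random

STRING_LEN, POP_SIZE, GENERATIONS = 32, 100, 50

def fitness(s):
    return sum(s)  # toy objective: maximize the number of 1 bits

def crossover(mom, dad):
    cut = random.randrange(1, STRING_LEN)  # single-point crossover
    return mom[:cut] + dad[cut:]

def mutate(s, rate=0.01):
    return [1 - b if random.random() < rate else b for b in s]

# start with a population of random strings
population = [[random.randint(0, 1) for _ in range(STRING_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # the best-performing strings get to mate; their offspring
    # form the next generation
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(fitness(s) for s in population))
```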

Craig: 6:35

Let me just ask, can you define, when you say these strings, are you talking about functions, or what are you talking about when you say strings?

Pedro: 6:45

Sorry, yeah, I lapsed into technical jargon there. A string in computer science is just a series of bits. Okay, or it can be a series of characters. For example, a sentence in English, for a computer scientist, is a string of characters, because it looks like a string, right? It's a long linear thing. And the genome, right, DNA, is a string of letters. That's the amazing thing. Everything about how you and I are made is encoded in a bunch of these strings of A, T, G, C, et cetera. It's kind of mind-boggling, right? But there it all is. If we know how to manipulate those strings, maybe we can get far. So it's not a function. A string is just about the simplest thing you can possibly have in computer science.

Craig: 7:32

Yeah, and you say you start with random strings. How then do they evolve? Do you have an algorithm operating on them?

Pedro: 7:47

Again, the idea is kind of mind-bogglingly simple, and it shouldn't work, but it does, and, you know, the proof is real-life evolution, right? So let's suppose that I want to, what's a simple example, and this is a real example: I want to build a radio, right? A radio is a bunch of electronic components put together: transistors, resistors, capacitors, blah, blah, blah. You need to tune it to whatever frequency. We know how to do that. But maybe there are better, more efficient ways to make a radio.

Pedro: 8:22

So the way a genetic algorithm does this is that you start out with a bunch of random strings. But what you have, and often this is very simple, but it needn't be, is a rule by which that string is a specification of how to build a radio. For example, if there's a one in a certain position, it means that this transistor is connected to that resistor. So it's like I have a pile of transistors and resistors, and my string just specifies who's connected to whom, right? If you have all the necessary components, and we know what they are, and you allow all possible connections, one of those strings is your radio, right? And another string is potentially an even better radio. So now you take a random string, you build the corresponding radio, and in practice you only have to simulate it, right?

Pedro: 9:07

There are packages to do that, that electrical engineers use. And then it's probably a terrible radio. Like, your first generation of a thousand things, they're all terrible radios, right? But one of them kind of picks up a little bit of something, just randomly. Some will be better than others. So then you take those strings and you randomly mutate them, because some of evolution is driven by random mutations, meaning you just flip some of the bits, which means, oh, this transistor was connected to this resistor, but now let me connect it to that capacitor instead, at random. And then there's the other operation, which in principle is more powerful: crossover.

Pedro: 9:42

And this is really where the interesting part is. You say, well, here are two strings that actually seem to be better than random at describing a radio. Let me do a crossover between them, like you do in evolution, which is, I'm going to take half of the string from one side and half from the other, the mom and the dad, if you will, and I have a new string that's a new radio. And maybe that string is actually more garbage than the previous ones were. But if you think about it, if this string had something good here, and that string had something good there, and you pick this part and that part, the new string is actually better than either of the previous ones. And if you do that for 100 generations, lo and behold, you actually have a fantastic radio.

Craig: 10:28

And driving the mutations and choosing the leaders, or the winners, out of the offspring, is that reinforcement learning? Or what is the assessment algorithm?

Pedro: 10:46

It's not reinforcement learning, but in fact you can think of reinforcement learning as being sped-up evolution. Some people have actually formalized this, and there are interesting lessons. Reinforcement learning is what animals do, right, and it's discovering how to do something properly, which, prior to there being reinforcement learning, could only be done by evolution. So it's actually the other way around. You can think of reinforcement learning as a more efficient way to evolve, and in fact you can think of us people having ideas as an even more efficient way to evolve, and things keep speeding up. But evolution, or at least this basic version of evolution, which is really a cartoon understanding of evolution, is really just this: how does evolution evaluate animals? It throws them out in the world and sees if they survive and reproduce. That's what it is. And there's a mathematical theory of evolution at this point, which of course Darwin didn't have, but which is part of the so-called modern synthesis.

Pedro: 11:46

And there's this notion of a fitness function, which is really the equivalent of the reward function in reinforcement learning. The reward function is, you know, you touch the stove, you get pain. You eat an ice cream, you get pleasure. The fitness function in evolution is essentially how many offspring you have. But how many offspring you have is driven by how well adapted to the environment you are. So if you're a bird, then if you have a better wing, or a lighter skeleton, you're able to fly better, faster, farther, et cetera, et cetera. So that's your fitness function. There's always somewhere in machine learning where the whole art goes, and in genetic algorithms it's in defining the fitness function.

Pedro: 12:27

Yeah, right. For example, in the case of the radio that I was talking about, your fitness function can literally be, there's this thing called SPICE, for example, a software package that will simulate any electronic circuit. You define your circuit, as I described, using a random string, or at some point a not-random string. But then you create that circuit in software, you put it through SPICE, and you test it as a radio. You go, okay, let me try to listen to FM 101. Does that work when I do that? So you have this battery of tests, which is the genetic algorithm's way of throwing the circuit out into the real world and seeing if it actually does what it's supposed to be fit for, which is let you listen to the radio.
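As a sketch of what that string-to-circuit encoding and test-battery fitness might look like: `simulate_radio` below is a hypothetical placeholder for a real circuit simulator such as SPICE, and the component counts and frequencies are made up; only the shape of the idea comes from the conversation.

```python
# Hedged sketch: decode a bit string into a circuit, then score it
# with a battery of tests. simulate_radio is a hypothetical stand-in
# for a real simulator such as SPICE.
N_COMPONENTS = 8
TEST_FREQUENCIES_MHZ = [88.5, 101.0, 104.3]  # stations to try tuning in

def string_to_connections(bits):
    # bit i*N+j set means component i is wired to component j
    return {(i, j)
            for i in range(N_COMPONENTS) for j in range(N_COMPONENTS)
            if bits[i * N_COMPONENTS + j]}

def simulate_radio(circuit, freq_mhz):
    # placeholder: a real version would run the circuit through a
    # simulator and return the signal quality at this frequency
    return 0.0

def radio_fitness(bits):
    circuit = string_to_connections(bits)
    # fitness is how well the circuit passes the whole test battery
    return sum(simulate_radio(circuit, f) for f in TEST_FREQUENCIES_MHZ)
```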

Craig: 13:12

And this process, you said you end up 10,000 generations down and you have something. Is it all automated, or does it require intervention?

Pedro: 13:29

No, I mean, there are all sorts of variations, but the basic version does not require any intervention, and it works surprisingly well, and it's often not even 10,000 generations. I remember, a long time ago, I was seeing this demo of something that was very famous then called the Connection Machine. It was one of the first massively parallel computers, and it did the following thing, this was at some graphics exhibition or something: it showed you 10 random images, and you picked the one that you like best. Why not, like, I mean, abstract art? And you do this half a dozen times, and even after that small number of times, literally while other people are standing in line, after those, let's say, 10 generations, it's actually generating really amazing images. So yeah, evolution is a shockingly effective learning method.

Craig: 14:20

Yeah, are there any practical applications out there that use this method? I mean, you talked about the insects. Was that at CMU, you said?

Pedro: 14:30

The insects were at Cornell. It's this guy called Hod Lipson. I think he's moved to NYU or somewhere else, but anyway. So, as I mentioned, there have been real radios and amplifiers and whatnot designed this way. The genetic algorithms folks have a whole list of things they claim are successes of genetic algorithms, like creating a robot soccer-playing team, et cetera, et cetera.

Pedro: 15:03

This is very controversial, however, because the other people in machine learning said, nah, those applications aren't really real, and you could do them with, you know, just greedy search. And there was actually at one point a very famous bust-up, which we could go into, and the consensus in machine learning is that that stuff is useless. In fact, many of the machine learning people, when they saw my book, said, why did you even write a chapter about that? On the other hand, you could have said that about neural networks circa 2000. Like, why did you even write a chapter about that? That's crap, it doesn't work. So, you know, place your bets. Okay.

Craig: 15:46

And then the analogizers. So let's talk about the analogizers.

Pedro: 15:51

So, the analogizers say: we're going to do machine learning and AI based on reasoning by analogy. So, first of all, what is reasoning by analogy? It's, I have a problem to solve, and what I do is I retrieve from my memory similar problems that I solved before. I don't solve it from scratch. Typically, in the other paradigms, you build up your solution one little piece at a time, which is incredibly expensive and inefficient, and you do chain-of-thought prompting and blah, blah, blah. And the analogizers go, oh my God, that's such a headache. What you and I do, right, is we do this automatically every day, from the smallest things to the biggest ones. We retrieve from memory similar episodes, similar problems, and then we adapt the solution to that problem to the new one.

Pedro: 16:41

This is an incredibly powerful thing to do, and in fact, the term analogizer was coined by a famous guy, Douglas Hofstadter, the writer of Gödel, Escher, Bach. His most recent book is called Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, and it's basically 600 pages proving, according to him, that every single thing, from the simplest word usage to the highest achievements of the Einsteins and whatnot, is all reasoning by analogy and nothing else. So he really does think that analogy is the master algorithm, and I think he's gone a little too far, of course, because, again, analogy solves some problems, but not all of them. But there's no denying that this argument has a lot of force, and I would even say that reasoning by analogy is the most unfairly ignored school in AI. However, right, so, interesting point: when I talk to people about the five schools, people who are not AI experts, the one that immediately resonates with them is reasoning by analogy.

Pedro: 17:54

You know, Bayesian learning, what is that? Like, Bayes' theorem, symbolic AI, all of that doesn't resonate. Neural networks? Yeah, the brain, but it's a pile of numbers. But reasoning by analogy, everybody gets, because we do it all the time. So there's a lot of intuitive appeal in it. And, moreover, and very importantly, in psychology, in cognitive science, there is a literature going back decades, thousands of papers, showing how you do all these things by analogy, often in different ways than what Douglas Hofstadter says. Like, there's a thing called structure mapping, which has gotten a lot of play. It was invented by Dedre Gentner, et cetera, et cetera.

Pedro: 18:38

So that's one aspect, but this was influential in AI maybe, I don't know, decades ago. What is more relevant is that until fairly recently, until the AlexNet explosion, the dominant paradigm in machine learning was kernel machines. Everybody did everything using kernel machines, including vision. The state of the art in vision was a so-called support vector machine, which is a simple form of kernel machine. That's what people believed was the right thing to do. They don't call themselves analogizers, unlike people like Hofstadter, but kernel machines really are a primitive form of reasoning by analogy.

Pedro: 19:22

A kernel is a similarity function.

Craig: 19:25

Yeah. And just for listeners, I did an episode on support vector machines, but can you describe, define for listeners, what a kernel machine is?

Pedro: 19:39

Yeah. So what is a kernel machine, right? First of all, what is a kernel? A kernel is just a clunky mathematical term for a function that measures the similarity of two objects. You give it two objects, you give it you and me, and it gives us a score for how similar we are, using some set of attributes, right? You know, like how tall you are, how smart, what your job is, or whatever. Or how similar two images are, based on the pixels or something more sophisticated.

Pedro: 20:05

Again, in kernel machines, the secret sauce is how you design the kernel, and it could be learned and whatnot. But then what it does is, for every pair of examples, it spits out a number saying you're very similar or not so similar, right? And then the kernel machine, in addition to the kernel, what it has is a bunch of examples that it saw in the past, from your so-called training data, as in all of machine learning. It throws out most of them, but it saves some key ones, and those are the support vectors. Support vector machine comes from the term support vector, and a vector is just an example, right? It's a series of values, of pixels or whatever. It stores those support vectors because they're the ones that are going to support the decisions that you make. And then, when a new example comes along, it's actually very simple.

Pedro: 20:48

Like, I'm a doctor, I want to diagnose my new patient. Let's say I don't know anything about medicine, right? And what do I have? I have a file of past patients. I have a new patient in front of me. I ask her, what are your symptoms? Tell me. I fill out this vector for her, and then I go in my file system and I look for the patient with the most similar symptoms, and I say, oh, this patient has, whatever, COVID. You have COVID too. Which sounds incredibly dumb and simplistic, but there's a mathematical proof that if you do this with enough examples, you can learn any function. In fact, nearest neighbor, which we talked about before, is the simplest similarity-based algorithm there is, about as simple as it can get, and kernel machines are really just a more sophisticated version of the nearest neighbor algorithm.
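A toy sketch of both ideas, with made-up patient data: nearest neighbor copies the answer of the single most similar stored case, and a kernel machine generalizes that to a similarity-weighted vote over stored examples. A real support vector machine also learns a weight per stored example, which is omitted here.

```python
# Similarity-based prediction in miniature, on made-up data.
import math

patients = [  # (symptom vector, diagnosis): toy training data
    ([1, 0, 1, 1], "flu"),
    ([0, 1, 0, 1], "covid"),
    ([1, 1, 1, 0], "cold"),
]

def rbf_kernel(x, y, gamma=1.0):
    # a standard kernel: near 1 for similar vectors, near 0 for distant ones
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def nearest_neighbor(query):
    # copy the diagnosis of the most similar past patient
    _, label = max(patients, key=lambda p: rbf_kernel(query, p[0]))
    return label

def kernel_vote(query):
    # every stored example votes, weighted by its similarity to the query
    scores = {}
    for x, label in patients:
        scores[label] = scores.get(label, 0.0) + rbf_kernel(query, x)
    return max(scores, key=scores.get)

print(nearest_neighbor([0, 1, 0, 0]), kernel_vote([0, 1, 0, 0]))
```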

Craig: 21:43

Yeah, so, I mean, beyond support vector machines, how do the analogizers develop new architectures or algorithms to do new things?

Pedro: 21:59

Let me give you an example. Actually, first of all, let me tell you what structure mapping is. Structure mapping is probably the key concept in reasoning by analogy, and the idea in structure mapping is that every problem, every domain, has a structure. And what I do to solve a new problem is map the structure from a problem that I've seen. So, for example, a famous example of structure mapping, of reasoning by analogy: Niels Bohr's model of the atom.

Pedro: 22:23

In his time, people discovered that, hey, you shoot these alpha particles at an atom, and most of them go right through, but a few of them bounce right back. They knew that there were electrons with negative charge around somewhere, and now, clearly, the positive charge seemed to be concentrated in a small thing in the middle. Most of it is vacuum. What does this remind you of? It reminds you of the solar system, with the nucleus as the sun and the electrons as the planets. And so Niels Bohr was like, aha, I'm going to do a model of the atom by analogy with the solar system, which historically turned out not to be that accurate, but it was a key step in the development of quantum mechanics. So what did he do? He noticed a similarity between the atom and the solar system. And then he mapped the structure of the solar system, with the sun in the middle and the planets revolving around it at various distances, onto the atom, which in many ways is a remarkably accurate picture, with the different shells, which are electrons at different distances, roughly speaking, and so on. So mapping the structure gives you such a leap in problem solving. You're not lost in the woods anymore. Now you're like, okay, this is how I'm going to try to solve the problem, and now let me adjust it. To give a more modern and more relevant example:

Pedro: 23:46

One form of analogy-based AI is called case-based reasoning, which is reasoning by cases, and it has actually been very popular for decades in call centers and help centers, and they work just like this.

Pedro: 24:02

You know, you're Microsoft, you have a help desk for people who are having problems with, whatever, Windows, right? And you call up and say, hey, my printer isn't working, it's spewing out garbage, help me. And then what the system does, and again, this works really well in a large fraction of cases, is it asks you a few stock questions, and it goes, aha, what are the problems in my database, or knowledge base, that have these characteristics? And let me not just suggest the same solution, but now tweak it. Oh, you have a different version of Windows, so this part changes, and your printer is Epson instead of HP, so this part changes. But you take that solution and you adapt it to the new customer, and a shocking amount of the time this works. This solves the problem at, of course, much less cost than asking a human, or having some big symbolic AI, or a large language model spending a ton of money, get to the same conclusion.
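A hedged sketch of the retrieve-and-adapt loop behind that kind of help desk, with a made-up case base and an adaptation rule that simply swaps the mismatched attributes into the retrieved fix.

```python
# Case-based reasoning in miniature: retrieve the most similar past
# case, then adapt its solution to the new situation. All data made up.
cases = [
    {"os": "Windows 10", "printer": "Epson", "symptom": "garbage output",
     "fix": "reinstall the Epson driver for Windows 10"},
    {"os": "Windows 11", "printer": "HP", "symptom": "nothing prints",
     "fix": "clear the HP print queue on Windows 11"},
]

def similarity(case, query):
    # crude similarity: count matching attributes
    return sum(case[k] == query[k] for k in ("os", "printer", "symptom"))

def solve(query):
    best = max(cases, key=lambda c: similarity(c, query))  # retrieve
    fix = best["fix"]
    for slot in ("printer", "os"):                         # adapt: tweak
        fix = fix.replace(best[slot], query[slot])         # what differs
    return fix

print(solve({"os": "Windows 11", "printer": "Epson",
             "symptom": "garbage output"}))
# -> "reinstall the Epson driver for Windows 11"
```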

Craig: 24:54

Yeah, but that to me sounds like a kind of search, not a way to solve more complex problems. Or are there?

Pedro: 25:06

So, nearest neighbor and kernel machines, effectively, you could say they don't do any search. There are versions of them where, well, let me refine that, and we'll do this in a few stages. The basic nearest neighbor algorithm doesn't do any search. It remembers all the examples, and it just spits out the answer: you have, whatever, breast cancer, or you don't, et cetera. Now, there are versions of nearest neighbor that try to cleverly select the best examples to remember, so they don't have to remember all of them, and it's more efficient. And that's what a support vector machine is. It's actually a clever way to search for the best examples. So there's already some search going on there.

Pedro: 25:43

Now, what is the advantage of structure mapping or case-based reasoning? You're right, there is still search going on here. But the key thing is that this search is way more efficient than trying to find your solution one step at a time. It retrieves a whole chunk, literally, in psychology they call this a chunk, that is relevant to solving your problem, that you then only have to tweak. So, you're right. I mean, on an ideal day there is no search: you found the answer, you give it to me, my problem is exactly the same as the one that somebody else had. That actually happens a good chunk of the time. But more generally there will be search, but the number of steps in that search might be a dozen instead of a million.

Craig: 26:28

Yeah, okay. So I want to spend the last half hour talking about these schools and where they stand today. I remember talking to you a few years ago about the master algorithm in the book, and you were saying that, well, the master algorithm is probably not a single algorithm, it's a system of algorithms, or a family of algorithms, that work on different parts of any problem. Certainly, reinforcement learning is being blended with generative AI, with transformer architectures, but how much blending or working together is going on these days among these different schools? Or do they remain fairly siloed?

Pedro: 27:38

No, they don't. So, in fact, there's a very interesting history in all of this, the first point of which is that all of these five tribes are 70 years old. They were there from the beginning. This is actually remarkable. In the world of AI, where change is perpetual and ever more rapid, certain things are remarkably constant, one of which is these five paradigms. They haven't changed. They were all invented in the 50s, all of them in one way or another, and they're still the same ones today. And another very interesting point is that every decade, a different one dominates.

Pedro: 28:19

In the 60s, neural networks dominated, and then in the 70s it was symbolic AI, et cetera. The 90s were the Bayesian decade, the 2000s were the kernel decade, and then neural networks came back. Now you could say, well, this time is different, neural networks are going to dominate forever and the others are now irrelevant. A lot of people think that. Or you could say, well, you know, extrapolating... Right. Now, it used to be that these paradigms were fairly separate, and the people actually had a somewhat antagonistic relationship. So in 1990, the symbolists would say neural networks are a bunch of garbage, and the neural network people would say symbolic AI is a bunch of garbage, and so on. And then the Bayesians came along and said, no, you guys are both a bunch of garbage. I remember I used to go to ICML, which was the symbolic machine learning conference, and to NIPS, which was the neural conference, and there were like half a dozen people in the world that actually went to both of them. But then they actually merged, and this was largely brought about by support vector machines, which first took over the neural network community and then took over the symbolic AI community, and then people started publishing in one or the other indiscriminately. At this point today, there is no difference between ICML and NeurIPS, so at that level things have completely merged.

Pedro: 29:46

Now, there are still people who identifiably work in these paradigms, but there are also a lot of people, including myself, who've done a lot of work combining them. For example, there's this whole area called neurosymbolic AI, whose whole agenda is to combine neural AI and symbolic AI, and in fact this was popular in the 80s. Geoff Hinton, circa 1990, was doing what they called connectionist symbol processing. And here's the thing: it comes and goes, but actually right now, this very moment, combining symbolic and neural AI is the thing, right? Not necessarily by that name, but what you see in the papers is getting these neural models to reason, which, of course, is what symbolic AI is for. And what people have been doing, sometimes consciously, sometimes unconsciously reinventing the wheel, is bringing techniques from symbolic AI over. So, for example, what is o1, ChatGPT's o1? It's a combination of an LLM, which is a neural system, with symbolic search, in ways that they haven't made public, but in one way or another, this is what's going on. So this agenda of combining the paradigms is very much alive and, I would say, gaining power. Although, again, this waxes and wanes. There was a previous wave a few years ago that petered out, and maybe this one will peter out as well. My bet is that you will need to combine these, and not just ideas from two of them, but from all of them. How long that will take to happen is an open question.

Pedro: 31:11

Now, you mentioned that I say there isn't necessarily one master algorithm. What I mean by that is not that you're going to have, and a lot of people do this, a symbolic subroutine that I call when I have a symbolic problem. Retrieval-augmented generation, for those of you who are familiar with it, is really a symbolic subroutine in a neural system. That's a very shallow combination of the two, and I spent some time in the book explaining why that is not the answer. I really think you need the deep unification of the two. And a good analogy to this, you know, reasoning by analogy, is electromagnetism. Maxwell didn't say the physical world is a program that has an electricity subroutine that it calls some of the time, and other times it calls the magnetism subroutine. That's not it at all. What he actually showed, brilliantly, is that they are the same force. It's a unification, not a combination, and I believe this is actually what's needed, and when AI is mature, as in any mature science, this is what we'll have. Now, there's an important point here, which is that there's no single form that this algorithm has to take. And again, a good analogy here is Turing machines. Alan Turing discovered, or invented, the concept of a Turing machine, which is a machine that can do anything, which at the time was a very strange idea, right? Like, a sewing machine would sew and a typewriter would type.

Pedro: 32:42

But what's a machine that both sews and types? A computer. This is the essence of a computer: a machine that does anything. But there are hundreds or probably even thousands of different designs that are what are called Turing-complete, or Turing-equivalent. His actual machine nobody uses. We have von Neumann computers. What you have in your cell phone, what we all have, are von Neumann computers, which is an architecture. So the real Turing machine, in practice, is the von Neumann architecture, but conceptually they're the same, and there will continue to be new ones. So my point is that we need to arrive at the first form of the master algorithm. In some ways it doesn't even matter what it is. Then there will be many variations that are good for different things. But in machine learning, which is inductive reasoning, we don't even have what they have for deductive reasoning, which is the concept of a Turing machine, and that's the first thing we need to get to.

Craig: 33:29

Yeah, and to get to these, you know, these reasoning models, whether they're unified or whether they're two systems, one calling the other: as they get more powerful, do you think that they will help solve this problem of how to unify the various tribes? I mean, do you have any confidence in reasoning models' ability to advance scientific research?

Pedro: 34:09

Yeah, so there are many paths to the master algorithm, and people are following them. You could start with any two of these things, combine them, unify them, and then bring in the third. That's largely what I've done over the last 20 years. And so there are many ways to get there. The one that is most popular right now is to start with a connectionist system, with a deep learning system, and then bolt on symbolic reasoning capabilities. The way that has been done so far is mostly very shallow, and I don't think it's going to survive the test of time, but I do think, in very concrete ways, that there is a deep combination of them to be had.

Pedro: 34:50

But to take maybe a better example, think of transformers. A transformer is a type of neural network, but it's much more powerful than a multilayer perceptron, which was the architecture that preceded it. There have been many attempts to understand what it's doing, but I would say that at least my best understanding is that it actually has some of the capabilities that symbolic AI has that previous neural networks didn't have. So, in a way, a transformer is, in a way that we don't fully understand yet, a combination of connectionist and symbolic features, and that's what makes it so powerful. In fact, the closest thing to the master algorithm that we have today is a transformer. And if you think about it, a lot of people were skeptical when the book came out, because it was before transformers came out, but transformers today are one algorithm that does all of these things that you see in the media every day. It's remarkable.

Craig: 35:44

Yeah, but on the idea of using AI to advance research, you know, basically transformer-based systems: do you see promise in that, or do you think that the AI itself won't be able to advance thought, that we're still going to need human intuition and creative thought and all of that? I mean, at some point it seems like these models are becoming powerful enough, and there's enough human knowledge encoded out there that they can ingest, that there'll be some creativity.

Pedro: 36:40

So I think you have a number of questions there, so let me try this one part at a time. Creativity, right. People used to think that creativity was something that computers would never have. You know about Moravec's paradox, which is this notion that the things that are easy for us are hard for computers, and vice versa, because the things that are easy for humans are easy for humans because evolution spent 500 million years evolving us. And I have this slide that I've used in various talks, that is, Moravec's paradox, easy versus hard. And one of the lines that I have in there is: easy, creativity; hard, reliability. And I would say to people, reliability is hard, creativity is easy, and they'd be like, what are you talking about? What are you smoking? Even a couple of years ago, right. And these days I just rest my case. Creativity? Well, use whatever, DALL-E or Midjourney, generate videos, generate poems, generate music, just like that. Creativity is easy. Reliability? No one knows how to make an LLM reliable today, and that's the problem.

Pedro: 37:49

Creativity, intuition, there's nothing magical about them, right? I mean, I used to be a musician and write songs, and people have this notion that writing a song is some kind of magical inspiration that comes to you from the muse. It's not. In fact, anybody can write an okay song. Writing a hit is really hard, right. But if you spend, you know, whatever, a year learning to play guitar or piano and start playing, you will write okay songs. So creativity is not magic. So I don't think there's anything that humans do that, at the end of the day, AI can't, and most people in the community, this is what they believe. It might be very hard, it might take a long time, but, unless you have some mystical belief about what goes on in the brain, it's a bunch of atoms. And, in fact, if you believe in reductionism, then the master algorithm must exist, because it's the one running in your head right now. So I think at that level, there's no doubt that we'll get there.

Pedro: 38:40

Now, where are we in terms of AI being able to, for example, do real scientific discovery? Could an AI, for example, come up with general relativity, or solve the problem of unifying it with the standard model? And the answer to that is, today we're nowhere close. And this is very interesting. In fact, people have remarked on this: the application of AI across the sciences is progressing very rapidly, physics, economics, biology, et cetera, they're full of AI these days, but it's AI that does lower-level stuff.

Pedro: 39:14

It doesn't do the really creative things that people like Newton and Einstein and whatnot did. And it's interesting, because the LLMs have a bigger knowledge base. They've read every paper that's ever been written. So, come on, where are the discoveries? A human being with that knowledge base would be doing amazing things every day now. So clearly something is still missing, and our job, the researchers', is to discover that thing that's still missing. And I have this long-running argument going on with Yann LeCun, because Yann thinks backpropagation is the master algorithm. He thinks machine learning will evolve and blah, blah, blah, but at the end of the day, the solution is still going to be gradient descent. He's like a fundamentalist connectionist in that regard. And I ask him this question that he has no answer to: okay, how did Einstein come up with general relativity by gradient descent? There's no answer to that question. So clearly something is missing.

Craig: 40:16

And do you think what's missing is in one of these schools that you've defined?

Pedro: 40:25

Exactly, that is the right question. So, for example, Douglas Hofstadter, in his book that I mentioned, gives general relativity as one of his examples of something that was discovered by analogy. So, clearly, reasoning by analogy is important. And again, it's very interesting that Geoff Hinton, who's really the godfather of deep learning, has been saying forever that neural networks are better than symbolic AI because they reason by analogy. But, Geoff, where is the reasoning by analogy? Explain to me where it is. Now, I can tell you where I think the reasoning by analogy is happening in neural networks. And at the end of the day, we're going to have a single algorithm that, in a way, looks like a neural network, but does reasoning by analogy and does symbolic reasoning, and we could get into the weeds there. But this is, I think, where the solution is.

Craig: 41:26

Is there active research on that, on building reasoning by analogy into these systems?

Pedro: 41:35

So, for example, there's a long-standing active area of research on what is called automated discovery, and it started in the 70s, with people like Pat Langley doing theses where they showed, look, this system rediscovers Kepler's laws, or Boyle's law, or simple laws like that. And actually, recently that's picked up again, and people have all this work on discovering differential equations and discovering how different systems work using AI.

Pedro: 41:55

It's still, I think, at the level of Kepler, not at the level of Newton. The level of Newton requires, I think, some of this reasoning by analogy, and again, there are people in psychology and cognitive science who have looked at how that might work. So there's a lot written on this. I don't think anyone has solved it. I also think that, disappointingly, in mainstream AI research, there's a lot of stuff going on, but not this. There aren't a lot of people explicitly asking, how can we get a neural network to reason by analogy and therefore do scientific discovery? Which to me is a scandal, right? Some fraction of people should be doing this instead of doing more tweaks on LLMs.

Craig: 42:36

Yeah, and also the evolutionary tribe, or school, sounds very promising. I mean, you don't have to start with random strings. You can start with a system that's already very advanced and go from there. Are there people doing that?

Pedro: 43:01

I mean, there are. You make a very good point, which is that you probably don't want to start with random strings. Unfortunately, a lot of machine learning people, and the connectionists and evolutionaries are both great examples of this, have this fundamentalist machine learning attitude. They're like, no, we want to learn everything from scratch. If you put in knowledge, you're cheating. The Bayesians and the symbolists don't have that problem at all. On the contrary, the Bayesians are all about priors, which is literally putting in your prior knowledge, and the symbolists are all about combining learning with knowledge-based AI, because that's what symbolic AI traditionally was.

Pedro: 43:37

You start with a knowledge base, and then you refine it, which I think is an excellent idea, right? So why throw that away? And you could think of what LLMs are doing today as acquiring a knowledge base from text. That's what they're doing, in a very convoluted way, and then they're flexible about how they reason with that knowledge base. It's very opaque. And then, clearly, what's missing is the ability to reason on top of that text, which really is what things like o1 and DeepSeek are trying to do. So there is a way to look at all this and say, yeah, in one way or another, things are moving in the right direction, and we will eventually get there.

Craig: 44:19

Yeah. It just seems that there is enough research out there now, in all these different disciplines or schools, that these AI reasoning models could, or should, be able to go across all of it and find analogies, or find opportunities for evolutionary algorithms to advance what's already there.

Pedro: 45:10

It's interesting, because, to take the evolutionaries to begin with, they are, at this point, the tribe that is most distant from the others. The other four, the support vector machine people and all these people, they're mixing it up at this point. With the evolutionaries there's very little, but there are a few things, like generative adversarial networks, as you very sharply pointed out last time. There is a flavor of co-evolution to that, so there is one path by which they could come in. There's also this whole area of multi-agent systems. And there's a type of reinforcement learning that is very close to ideas from genetic algorithms. OpenAI, at one point, before the whole ChatGPT thing, had these papers showing, look, there are a lot of problems for which, surprisingly, if instead of reinforcement learning we just use a simple genetic algorithm, it actually gets to the solution faster. So a lot of this is happening. Now, unfortunately, the problem is that there's more AI research than ever before today, by an order of magnitude or two, but most of it is along a very narrow front. And one common view, I would even say probably the prevailing view, is that we're just going to keep pounding on this, and we're going to do so many things that eventually we'll solve the problem.

Pedro: 46:27

I'm not so sure, because, you know the saying, nine women can't make a baby in one month. You don't get, for example, to general relativity by having a thousand random physicists just do what they do for a century. It doesn't work that way. Or, one way that I often put this is, solving AI is not a sprint, it's a marathon. I really think somebody needs to go really deep, and I know there are people trying to do that, but I think not enough. And then, once we do that, we will see how all these people were just spinning their wheels, and all of that, unfortunately, is going to go in the garbage can of history, and I know which part of it I want to be in.

Craig: 47:14

You know, I've been talking to people about quantum computing in the past week, which is advancing, and the timeline is looking more real, or realistic, I should say. It's looking shorter to getting to a practical quantum computer. Do you think that will advance any of this? I mean, not just for large problems that classical computing struggles with, but presumably with quantum computers you'll begin to understand where quantum physics and Newtonian physics meet, like what happens at the quantum level when a virus attaches to a protein. Right now it's very Newtonian, the way people think about it. It's shapes. But anyway, do you track what's happening in quantum and think about how that may address some of these issues?

Pedro: 48:41

So the promise of quantum computing is that it can solve problems exponentially faster than classical computing, and if that ever comes to pass, boy, can we ever use it in AI. We will be the biggest consumers of quantum computing in the universe, no questions asked. Okay, so that is the number one promise of quantum computing. Now, having said that, there are a lot of caveats here. One of them is that if you talk to the people who are serious and knowledgeable about quantum computing, as opposed to the people hyping various things, they will tell you that it's exceedingly unlikely at this point that there will ever be a general-purpose quantum computer, in the same way that there are general-purpose classical computers. It doesn't look like it's going to happen. There may be quantum computers for specific problems, that is the hope, and if those problems are important enough, then great. And some of those problems are in AI. In fact, there's one type of quantum computing that is about finding global optima by tunneling out of the local ones. There's this company called D-Wave that claims to have done this, and yeah, we could totally use that. So that's the promise.

Pedro: 49:52

Now, will that ever happen, and how soon will that be?

Pedro: 49:59

I roughly track, not very closely, what's happening in quantum computing, just out of curiosity more than anything else. I mean, honestly, I think it's a hard bet to lay, because quantum computing is such a hard problem. And there's this whole notion, like, well, there's all this computing in superposition, and that's where the magic happens, but the error correction really kills you. The whole thing is so fragile, and making it robust is so expensive. You need something like a thousand physical qubits to actually have one robust qubit, and it goes on from there. You need super, super low temperatures. And, you know, the first real quantum computer that does something useful is years away, maybe decades away. And you could also make arguments, which some people do, about why this is never going to happen, that all of quantum computing is really a misunderstanding and a misconception and they're just deluding themselves, or that it's a very nice idea at a theoretical level, which is where it began, but it'll never be practical.

Pedro: 51:00

So, you know, as an AI person, my attitude to this is, I wish them great success. I don't think we in AI will depend on that success. In a way, AI is about a different path to doing exponentially faster computation. It's about being smart about your classical algorithms. And in fact, one suspicion that some people have, and Demis Hassabis was talking about it just the other day, is that maybe, and I think this is a real possibility, we in AI, or in computer science more generally, will discover algorithms that are smart enough that you don't actually need the quantum computing anymore. That exponential gain, we can already have it in other ways that just use classical computers. So we'll see.

Pedro: 51:41

It's an interesting space, but, I mean, there are papers on quantum machine learning and blah, blah, blah. I should say there's another, intriguing way in which quantum computing might be relevant to AI, which is that there are ideas. This is often how fields of research wind up having an impact: it's not in what they were trying to do, because that failed, but because people came up with ideas that were then useful somewhere else. And it could very well be that quantum computing comes up with ideas that, in the end, will actually be useful in machine learning. There's almost nothing in this world that isn't potentially useful in machine learning, so I could see that happening with quantum computing. But so far, on either that front or the front of practical computers, I haven't really seen anything that I think people in AI need to be paying attention to.

Pedro: 52:31

Having said that, I've heard rumors that this is what Ilya Sutskever is doing in his new company: quantum computing for AI. So we'll see.

Craig: 52:41

Yeah. You know, since you wrote the book, AI has advanced dramatically and very quickly, and it seems to not be slowing down. There was talk, like two or three months ago, that everything was slowing down, and, you know, I don't see it slowing down. So do you have in your head kind of a timeline for how close we're getting to a master algorithm?

Pedro: 53:15

Great question, and my answer is, we could be almost there, or it could be very far away. Nobody really knows. Anybody who gives you a precise prediction is making it up or deluding themselves. Now, here's why. Technology progresses in S-curves: first slow progress, then fast progress, and then it slows down again into a plateau. The early part of an S-curve looks mathematically like an exponential, but what people forget is that the slowdown is coming. You have an initial phase of increasing returns, the exciting one, and then a phase of decreasing returns, which is where most technologies are stuck for most of their existence, like cars and planes and TVs and whatnot.
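For readers who want the math behind that point, here is a minimal sketch of why the early part of an S-curve is indistinguishable from an exponential; the logistic curve is assumed as the standard S-curve model, and the parameter values are illustrative only.

```python
# The logistic S-curve f(t) = L / (1 + exp(-k (t - t0))).
# For t well below the midpoint t0, the denominator is dominated by
# the exponential term, so f(t) ~ L * exp(k (t - t0)): pure exponential
# growth that only later bends over into a plateau at L.
import math

def s_curve(t, L=1.0, k=1.0, t0=10.0):
    return L / (1 + math.exp(-k * (t - t0)))

for t in range(0, 21, 4):
    # grows by roughly a constant factor per step early on, then flattens
    print(t, round(s_curve(t), 5))
```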

Pedro: 54:01

Now, the thing that has happened in AI in the last 10 years: clearly, we've been on that upward curve, and every few years people go, oh, things are slowing down now. In fact, I forget if you mentioned this already, but I had this conversation with Ilya Sutskever at ICLR in 2017, where he said, oh, you know, deep learning is slowing down, there's no more progress. I was like, well, not so fast, new things will come up. And, you know, a month later the transformers paper came out. So, and there are people today, and I could actually argue both sides of this. You could look at today and say, well, things are slowing down. In particular, the way you see things slowing down is that it takes exponentially more resources to produce the same amount of progress. In fact, the folks at, you know, OpenAI and Anthropic are like, yeah, that's the way it is. You gave me a billion dollars, and now you have to give me 10 billion, and tomorrow it'll be a trillion. Pony up, right? Which to me is alarming. So, by that standard, things are slowing down. But again, you know, it's normal, right?

Pedro: 55:02

The question is, will there be a new idea that gives us another boost? And, for example, AlexNet was one such idea. It was really just doing things on GPUs, but that's fair game. And transformers were another idea. And in many ways, you could say there hasn't been a big idea since. GANs, maybe, were such an idea; again, it plateaued.

Pedro: 55:21

But some people say there hasn't been any major progress in AI since transformers came out, which is almost 10 years ago. Or attention, right? Attention is 10 years old now. So who knows? It really depends. This is not preordained. It's not like we're on some deterministic curve. It's that we, the researchers, have to come up with ideas, and if we do, the exponential will keep going until finally it saturates somewhere. The question is, is it about to saturate? And will it saturate for a year, or 10, or 100? My hope is that no. If I had to lay down my bets, I'd say, no, we are not about to plateau. This fast progress of the last 10 years is going to look slow compared to the next 10. Get your head around that. But this is not going to happen by magic. It's really going to require major new ideas, of which, with all due respect to the guys doing o1 and DeepSeek, those are not. Those are tweaks. They're nice, they're perfectly good work, but those are not the thing that's going to give us the next phase of the exponential.

Craig: 56:24

Yeah. Although, you know, and granted, Sam Altman has a proprietary model that he's trying to protect and generate excitement around, but when you listen to him, it does feel like this reasoning is going to continue to develop. When you listen to Geoff Hinton, I mean, he already thinks these generative pre-trained models are conscious at some level. It's incredible, the stuff he interpolates, or sees, in these models. And then you have, you know, Rich Sutton's team, I mean Ilya's team, and all of these people are starting to jump ahead to ASI, to artificial superintelligence. Like, don't even worry about general intelligence.

Craig: 57:44

Well, I don't know. I'm a journalist, and I've seen what the field has done since I started paying attention in 2017, when I first met you. And, you know, if we get to superintelligence, then basically we have the master algorithm, right?

Pedro: 58:08

Well, once we have the master algorithm, we have AGI. By definition; otherwise it's not the master algorithm. And, as people never tire of pointing out, once we have that, then just by scaling it up we can have 10 or 1,000 or a million times the intelligence of a human being, right? So that part is easy. The hard part is getting to the master algorithm. But you mentioned people like Sam Altman, Geoff Hinton, et cetera. It's interesting, because these are all different cases, and they have different reasons to say what they do, and it's good to understand a little bit of that. Sam Altman is a very smart guy, but he's a VC, a great hustler. He's great at raising capital and sniffing out opportunities, and at persuading people, and so on. But his technical knowledge, I think he himself would admit, is not very deep. And I remember him saying, in an interview with, you know, Hoffman years ago: yeah, Transformers aren't going to do it. He doesn't say that now, because it's convenient not to. He said something else is going to have to come along, but he didn't know what it was, and I think that's true. It's just not what you will hear him say today. I think it's easy for a lot of people, and I think he has partly fallen into this: you see what these algorithms are doing, and you get very excited, and go like, oh, superintelligence is almost here. But you have to remember, and this is a lesson you have to learn in machine learning, that the algorithm always seems to be doing a lot more than it really is. You find examples where it's amazing, and you're like, wow, superintelligence. But then there are other examples where it does stupid things that a child wouldn't. And that problem is still with us. So you have to not get too carried away with that. There's also the sales pitch, but let's ignore that.

Pedro: 59:58

Now, Geoff Hinton is a guy who is perennially over-optimistic about what neural networks can do, and, you know, more power to him, because that's what kept him going for 40 years, whereas others gave up. But let me put it this way: Geoff does believe in the master algorithm, as does Rich Sutton. In fact, I asked a bunch of people at the time I was writing the book, and the two strongest yeses I got were from Geoff and Rich. Of course it's different for each of them. But what Geoff thinks of as the master algorithm, and how the brain works, is too simple. I think Geoff massively underestimates the true complexity of the human brain, or even of a simplified algorithm that would do the same things.

Pedro: 1:00:46

So every decade, and in fact this is a well-known joke in the field, and he will say it himself in a self-deprecating way, he thinks: yeah, I've just figured out how the brain works, we're on the verge of consciousness, et cetera, et cetera.

Pedro: 1:00:59

So in a way, his latest statements about, like, ChatGPT being conscious are totally consistent with Geoff. But unfortunately, they're also totally consistent with him seeing more than is there, and underestimating the length and the difficulty of the path to human-level intelligence.

Pedro: 1:01:18

And Rich Sutton, in his own way, with reinforcement learning instead of neural networks, is also prone to that. Although, to oversimplify: Geoff has been very successful and Rich hasn't, so in a way Rich has learned the famous bitter lesson, and I think he's a little wiser now, if you will. But at the end of the day, let me put it this way: the founding fathers of AI were crazy. In the 1950s, they were saying we'll have human-level intelligence in 10 years. They were madmen. But thank God for those madmen, because they started the field, right? So in a way, it's good to have those madmen, those optimistic people. But if you were an investor deciding what to invest in, I would take what people like Geoff and Rich and Sam say with a large grain of salt.

Craig: 1:02:12

Yeah, although, you know, you have models now, and they're not single models, they're assemblies of models, but they can handle audio or speech, they can handle text, they can handle imagery, they can handle whatever other modalities are out there, and they can generate answers in all these different modalities. I mean, that's certainly more general than things were in the 90s or the early 2000s. So AGI is not going to be a moment; there's a spectrum, and it seems to me we're in the early part of that spectrum, where you have these models that are multimodal and now have a certain amount of reasoning. So if you look at it that way, there's a progression, and we're moving along that spectrum.

Pedro: 1:03:29

Absolutely. So let me even make that point more strongly before making some caveats. These aren't even assemblies of models: you have some of these transformer-based models where one model, the same model, does audio and video and text and speech, all of that. It's remarkable. This really is the closest to the master algorithm we've ever gotten, and it is a completely different place from where we were in the 90s; it just doesn't compare. In the 90s, I was doing a PhD thesis on data sets with, you know, 500 examples, trying to learn to do medical diagnosis and credit assessment and things like that. We're in a completely different place now.

Pedro: 1:04:06

Having said all that, you made the key point there, which is that AGI is not a point. Human intelligence is a whole bunch of different abilities. In some of them, computers are already way better than humans: they can add two numbers a lot faster than you, and they can play chess better. But in other ways, they're still far behind. So there's not going to be a point at which you reach AGI, or a point at which you reach superintelligence. You need to think at a finer level: in each of these dimensions, how far along are we? And in some of those dimensions, we are far along.

Pedro: 1:04:38

However, and this is the big caveat: where's my housebot? My housebot is nowhere in sight. We're nowhere near AGI, if you define it as, you've reached AGI once you beat humans at every single one of those things. To take the other end of the spectrum, we're nowhere near there on producing Einsteins. We have these systems that know more than anybody ever could, but no Einstein. Or a maid: a maid is an incredibly sophisticated system that no AI can mimic right now. Making the beds, loading the dishwasher, we don't have that. I talk to robotics people; I'm not a roboticist, but I know what's happening. Despite what you might hear in the media, no one currently has a path to having a housebot in your house anytime in the next, whatever, five or ten years.

Craig: 1:05:35

Right, right. Okay, well, we're up to over an hour. Let's leave it here. I really enjoyed this series, and I'm hoping listeners have learned a lot. Are you working on a new book?

Pedro: 1:05:53

So I do have a couple of books that I want to write and that I'm making notes toward. But I did recently publish a book, 2040: A Silicon Valley Satire. The main focus of my work right now is doing research. I have something that I think will make a big difference, and I want to get that ready and release it and see where that goes. After that, I probably will be writing my next book.

