Jamie Metzl, a renowned futurist and author, provides a deep dive into the superconvergence of artificial intelligence, genomics, and exponential technologies currently reshaping our world.
JAMIE:
I absolutely believe that the future of humans will include the genome editing of pre-implanted human embryos that will be taken to term. So now what we need to be doing is laying the foundation, not just of the science, which isn't quite there yet, but also the legal, regulatory, governance, social norms, all the other pieces of societal infrastructure that are required to do this responsibly and not have it just turn into some kind of Nuremberg-style human experimentation, even if the people who are getting these procedures aren't prisoners, but they're just kind of rich people trying to give their kids a leg up.
CRAIG:
Build the future of multi-agent software with agency. That's AGNTCY. Now an open source Linux Foundation project. Agency is building the Internet of Agents, a collaborative layer where AI agents can discover, connect, and work across any framework. All the pieces engineers need to deploy multi-agent systems now belong to everyone who builds on agency, including robust identity and access management that ensures every agent is authenticated and trusted before interacting. Agency also provides open standardized tools for agent discovery, seamless protocols for agent-to-agent communication, and modular components for scalable workflows. Collaborate with developers from Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and more than 75 other supporting companies to build next generation AI infrastructure together. Agency is dropping code, specs, and services, no strings attached. Visit agency.org to contribute. That's agntcy.org. How did you get to writing about the future?
JAMIE:
Well, thanks so much, Craig. Really a pleasure to be with you. My story in this stuff starts about 30 years ago. I was working on the National Security Council for then-President Bill Clinton, and my boss at the time and current friend was Richard Clarke, who at that time was obscure and later became very well known after 9/11 as kind of the Cassandra who had warned about everything but hadn't been able to change the course of events. And Dick always used to say that if everyone in Washington is focusing on one thing, you can be sure there's something much more important that's being missed. The key to efficacy in Washington, and in life, is to try to solve problems that other people don't see. For him at the time, it was terrorism and cyber. And for me, 30 years ago, looking around the world, this convergence, what I now call superconvergence, of these incredible computer tools of machine learning, which we now call AI, and the tools of biotechnology, at that time the nascent genetics revolution, were all going to come together with profound implications for a whole lot of things. So I became very obsessive in my self-learning about all of these issues. When I was ready, I started writing articles about this issue, what it meant, and what America could do to prepare. Those articles got a lot of attention, and a member of Congress, Brad Sherman, called me up and said that he'd read them and thought it was really important. He wanted to hold hearings based on what I'd written and asked if I'd be the lead witness and help him organize the hearings, which I did. So I was doing a lot of speaking and writing on this topic, but I felt like I wasn't breaking through with the message to the broader audience that I was trying to reach. Earlier in my life, I had done my PhD on why the world failed to respond to the Cambodian genocide, and that was published as my first book. My first novel explored those same issues, but as stories, so that people could absorb this history in a more personal, intimate way. So with this other issue of these converging technology revolutions, I decided that I wanted to write sci-fi novels exploring all of these issues, but doing it in the context of stories. That was the genesis of my two near-term sci-fi novels, Genesis Code and Eternal Sonata. Genesis Code deals with human genetic engineering, and Eternal Sonata with the science of hacking aging. When I was on my book tours for those books, and I would explain the science to people in my way, as someone who taught it all to myself, I could just see in people's eyes that they were suddenly getting that there was a very meaningful story that would have big implications for their lives. Suddenly they saw it; they saw themselves inside of those stories. That was what made me feel I had to write a popular nonfiction book about the past, present, and future of humanity, and certainly genetic modification and biotechnology and what that would mean for humans. That led to Hacking Darwin: Genetic Engineering and the Future of Humanity, which became a pretty big international bestseller. And when I was on my book tours for that book, I'd give a lot of talks, and I never use notes. For that reason, my talks just change every time I give one; I'll incorporate new things.
And I found my talks about human genetic engineering were shifting, because I would say, well, here's the story of human genetic engineering, but that's just a little piece of this much bigger story at the confluence of these exponential technological revolutions, and here's the even bigger story. That was what led me to start developing the thesis for what became my latest book, Superconvergence. Along the way, after Hacking Darwin was written, where I had predicted that the first CRISPR babies would be born in China, I had a list of five genes that would most likely be mutated, and one of the genes on my list was the one that the Chinese biophysicist He Jiankui altered in 2018. After that, I was invited by Dr. Tedros, the Director-General of the WHO, to be a member of the World Health Organization Expert Advisory Committee on Human Genome Editing. And now, after Superconvergence, I've become a commissioner of the Lancet Commission on Precision Medicine, which is going to be working over the next three years to come up with as much of a blueprint as possible for the future of healthcare at the intersection of these technologies. Precision medicine is kind of a shorthand, but it's really: how do we think differently about the application of technology to healthcare? So basically, my view is that there's so much happening, and to understand it in a deep way, we really have to come at it from different angles. And certainly, for me, thinking like a science fiction writer is really important, because increasingly we're living in a world that is moving at the pace of science fiction and feels to most of us like it has significant elements of science fiction.
CRAIG:
Yeah, and within biotech you focus on gene editing, on genomics, but the superconvergence is much broader than that. I mean, it's affecting everything from materials science to fusion energy production to quantum mechanics. How wide is it, and why did you decide to focus on genomics?
JAMIE:
My net is pretty broad. I mean, my basic thesis is that the big story of this moment of life on Earth, not just human life, is that this is the moment when one species suddenly developed the increasing ability to engineer novel intelligence and re-engineer life. And that's going to have profound implications really across the board, for everything. My particular focus was on the story of re-engineering life. That has implications beyond just the life sciences, but it's obviously transformative for human health and healthcare, obviously transformative for agriculture, for energy, now with a new age of biofuels, for advanced materials, where we have new ways of developing the industrial products that we need, and for data storage. People think that there's this big division between computing and genetics, but DNA, of course, is the greatest information storage mechanism in history. It's much, much better and more efficient in really every way than any kind of computing infrastructure that we have today. And you could keep going broader and broader, because even issues like quantum, which may feel to some people totally dissociated from the life sciences, aren't dissociated at all. Quantum computers will be another tool to help us understand the world around us, including the living systems around us, more deeply. That's what Richard Feynman said: essentially, the world isn't binary, the world is quantum, and so we need quantum analytics to better understand quantum systems. So I really see it as all of one piece. But when writing a book, I couldn't call it the everything book, and so I tried to bound it as the intersection of AI, genetics, and biotech.
CRAIG:
And there are so many ways we could go in this conversation, but let's focus on the CRISPR babies, because that seems to have come back. I mean, MIT hosted He Jiankui for a while and wrote a lot about those interactions, and then his supposed wife, it's a little unclear, Cathy Tie, has a startup now that's focused on gene editing to prevent hereditary or genetic diseases. Why do you think it's coming back? And you mentioned in the book that there was a long debate about IVF, in vitro fertilization, and it took a long time for people to accept that as ethical and normal. Do you think there'll be that same trajectory with gene editing of embryos?
JAMIE:
I absolutely do, but that doesn't mean that anything goes, or should go, right now. The reason I've called He Jiankui every bad name in the book, a rogue, a lout, a scoundrel, is because I think that what he did was so phenomenally unethical. And not only that: by racing to be first without any regard to the basic underlying ethics, or to what's good for humanity or science, solely for financial and nationalist reasons as far as I can tell, he basically undermined this entire field. So I absolutely believe that the future of humans will include the genome editing of pre-implanted human embryos that will be taken to term. I believe that will happen in the future. But, as I mentioned, I was a member of the WHO Expert Advisory Committee on Human Genome Editing, and I 100% stand by our position that we humans aren't ready to do that now. So now what we need to be doing is laying the foundation, not just of the science, which isn't quite there yet, but also the legal, regulatory, governance, and social norms, all the other pieces of societal infrastructure that are required to do this responsibly and not have it just turn into some kind of Nuremberg-style human experimentation, even if the people who are getting these procedures aren't prisoners, but are just kind of rich people trying to give their kids a leg up.
CRAIG:
Yeah. But again, on the IVF debate, it was a little different, because the outcome or the future wasn't debated as much as the fact that religious and societal norms were being broken. With gene editing, there's a real question: once you have heritable mutations or edits in the genome, what could that mean further down the line? As we all know, genes are not as discrete as you would like to think. So editing a gene may have one positive effect, but you don't see what negative effects it may have.
JAMIE:
Yeah, that's exactly right. We understand such a tiny percentage of the full complexity of human biology, and that's why, when we're thinking about making one very specific change, changing a single mutated nucleotide from abnormal to more normal, especially if the abnormal mutation has the potential to be highly deadly, at least that's a little clearer. If you know that somebody has a mutation that's going to cause Huntington's disease or some other deadly disorder, you can change that. Of all the ways of addressing it, one option is to wait until a child is born and treat it somatically, another option is fetal surgery, and a third option, in very limited cases, may be editing the pre-implanted embryo. In that third option, if we're saying we're going to make this single change that we're pretty confident is going to turn a painful five-year life into a relatively normal 90- or 100-year life, then the cost-benefit analysis may be justified. What He Jiankui was doing had nothing to do with treating some kind of disease. What he was trying to do was give these future children enhanced resistance to HIV at some point later in life. It was totally unnecessary and unethical, and the risk clearly wasn't worth the reward.
CRAIG:
I thought in He Jiankui's case, the intent was to develop a therapy for the children of HIV-positive parents.
JAMIE:
No, it wasn't a therapy. It was for children who had one parent who was already infected with HIV. But there are very, very simple things to do; for example, when it's the father, there's something called sperm washing. It's very easy to make sure that a father with HIV won't pass it to their children. If that first case had been carried out very openly, very transparently, very collaboratively, and it had addressed a single-gene mutation disorder that otherwise would have been deadly to the child and for which there were no other alternatives, that would have been a great use case. This was very different from that.
CRAIG:
Yeah. Although, you know, in this kind of stuff, there's always a maverick who upsets a lot of people but does kind of break open a pathway that others then follow.
JAMIE:
Yeah, but this was the opposite, because I think that if the first step is a responsible, collaborative, constructive step, then that opens the door. This Chinese effort, I think, actually did more harm than good to the idea that at some point we would be able to edit the pre-implanted embryos of potential future humans. That's why I think this was an anti-maverick, even though I'm a big believer in mavericks. I should say that He Jiankui's big heroes were Steptoe and Edwards, the guys who had pioneered IVF itself. What he was saying is, I'm going to bring glory to China and win the Nobel Prize just like these guys. So, you know, I'm a big believer in mavericks, but that doesn't mean everyone doing something and calling themselves a maverick is doing the right thing. And I very much think that He Jiankui was not doing the right thing. Could I imagine a different kind of maverick doing this at some point in the future? I can.
CRAIG:
Yeah. Why do you think it's come back? I mean, with Cathy Tie, and I've forgotten the name, there's a sort of nod to the Manhattan Project in there. And there are some other startups I've seen. I mean, suddenly it's in the wind again. Why do you think that is?
JAMIE:
Yeah, you're exactly right, it's in the wind again. There's, I think his name's Brian Armstrong, and he has a thing called the Gattaca stack, which just highlights what's so, I don't know, we can swear on this podcast, fucked up about all of this. We're talking about life. And I think that, you know, humans are a naturally hubristic species, coming up with all this crazy stuff. I mean, squirrels haven't come up with space travel. Humans are nuts by definition. That's why we're here, because we have this diversity and we have all these risk takers. I don't know if you saw the film Free Solo, about Alex Honnold, the guy who climbed El Capitan without any ropes. They did a brain scan, and it turned out his brain just didn't light up for fear. So clinging to the side of a cliff, whatever, 600 feet in the air, was just like you and I being here on this podcast. So we have a lot of diversity in humans, and it's great that we have these kinds of risk takers, but there's a reason we have societies and cultures: to try to say, well, here are the risks that we want, and here are the risks that we don't want. We don't want some individual person saying, I'm going to start a war with some adversary just on my own, or I'm going to do experiments in my basement that have the potential to cause a global pandemic killing billions of people. So it's always a balance, but I definitely think that we're in this hubristic moment. Definitely the message to the tech community has been that this is a no-holds-barred moment, which is a real shift between what was happening with the Biden administration here in the United States and what's happening with the Trump administration. And it's an exciting moment. I get, and frankly share, the optimism that all of these technologies can and will be applied beneficially to humans. But I just think that a lot of the people who are involved in this don't appreciate the full complexity of biology. To a lot of people who have been socialized in AI and computer tech, it just seems that biology is simply another system. Maybe in some ultimate cosmic way it is, but biology is much more than that; it's a system of systems. It's not just genetics; humans aren't just genetics on stilts. We have many different dynamic, interactive systems, and we understand them very, very poorly. I'd say if understanding the full complexity of human biology is 100 and understanding nothing is zero, we're at three or four, meaning three or four percent understanding of all of biology, even including our own. And so it just seems like: change this, change that, and you'll get this outcome. Change this one gene, and maybe you'll have a smarter kid. We have no clue how many genes say something about intelligence. Or, as you said before, there's just the complexity. It's not one gene, one task. This is a very, very complex system, which doesn't mean we can't manipulate it. We've been manipulating complex systems for the last million years at least. It just means that we need to balance our hubris with some element of humility and some sense of societal interests.
CRAIG:
Yeah, and it's interesting when you say that people who have been socialized with AI, even though it's not deterministic, are comfortable with it and don't understand the complexities of biology. I see that a lot in the discussions of whether AI could become conscious. I had Stuart Hameroff on, the guy who wrote the book with Roger Penrose about the potential for quantum effects to be the link between biology and consciousness, which he sees, in this kind of panpsychic view, as a property of the universe. He points out, and a lot of people don't understand this, he calls the AI view "cartoon neurons," these neurons that are either on or off. But with consciousness, there are all kinds of other things happening: there are different brain structures, there are hormones, there's everything else that contributes. So AI has made people look at complex systems in a different way, and that's normal.
JAMIE:
It's like a lot of our technological history, certainly since the dawn of industrialization: with every technological innovation, we always think, oh, humans are just like that. This Netflix film Frankenstein reminds me of the early days of industrialization, when humans are electricity, and if you just plug in the electricity and channel lightning, you can make a human. With every one of these, we have a metaphor. Humans are locomotives, or humans are whatever. That's why I write both fiction and non-fiction, because I think fiction forces you to stay plugged in to a recognition of the full and unknown complexity of humans and of life.
CRAIG:
But when you say that gene editing, for example, is something to look forward to, but that all of these regulatory and ethical structures have to be built up before we get to that point: that's not how the world works.
JAMIE:
I mean, yeah, I don't think it all needs to be built up beforehand, but basically the science isn't ready. The science isn't at a level of efficacy to do this responsibly enough that we could feel justified taking a risk with another human, having it not be, as I said before, Nuremberg-level human experimentation. I think we'll get there. The science is advancing: now we have base editing and prime editing and less aggressive, more focused ways of doing this, so there's a lot of progress being made. I'm not saying that we have to have everything in place before we move forward, because that's obviously never going to happen. But while we're waiting for the science to advance, we really need to be investing similar amounts of energy, not just in regulation, but in governance. And governance is what happens at all levels of society, not just at the level of government.
CRAIG:
Yeah. Where do you think the progress is being made? I haven't looked at this as closely. Is it at university research labs, or do startups like Cathy Tie's and these others play a role in pushing the field forward?
JAMIE:
Well, certainly both do. Let me differentiate. There are university research labs across the board, and then there are the gene editing companies that aren't doing pre-implanted human embryos. That's a tiny, tiny fraction of the field, but there are a lot of companies doing great work on gene therapies. We're at the beginning of a new age of gene therapies. Look at the mRNA vaccines, which were thinking differently about how to deliver instructions to the cell, using mRNA carried to our cells through lipid nanoparticles. There's huge progress being made in those areas. In the field of editing pre-implanted human embryos, there are a lot of gonzo startups, certainly the ones who are doing the editing. Then there are other fields. I'm on the scientific advisory board of a company called Genomic Prediction, which is doing polygenic risk scoring for pre-implanted embryos. That's a little bit different. That's saying, well, you have 10 different pre-implanted embryos, and you want to pick which one to implant in the mother. The way it's done traditionally, meaning most of the time now, is you have an embryologist who looks under a microscope and says, yeah, that one looks good, that one looks healthy, whatever it is. And there's no real way of testing those other nine. So it's all just: well, I have an embryologist who seems like a nice person and has a lot of experience. And maybe that's a good way. I was the keynote speaker for their national embryologist association, so I've had a lot of conversations, and I actually do think it's meaningful to have an experienced embryologist who's looking under a microscope and says, based on 30 years of doing this, I feel this embryo looks pretty healthy. That's not nothing, as we say in Missouri, where I'm originally from. But by being able to extract a few cells from an early-stage pre-implanted embryo after about five or six days, and then sequence those cells for the 10 different embryos, you can actually get a lot of potentially highly relevant information. So that's one thing. I'm much more optimistic in the nearer term about screening embryos smartly during IVF than I am about editing those embryos, even though I do think that in the future we will both screen and then do relatively small numbers of edits. There are people who I respect, like George Church, who are saying we can really scale up editing, but again, we run into the complexity problem. But if we did screening, and we had one or two embryos that seemed optimal based on that, and there were one or two very discrete changes, like changing a likely disease state to a likely non-disease state, I think that is probably where we're going.
CRAIG:
Yeah. You know, I've spoken to a number of people on this podcast about longevity studies and longevity strategies, and one thing I always ask, and it applies to this too: if we optimize human life so that we all live much longer collectively, not only individually, how does the world feed and clothe and provide power to that increased population? You're younger than me, but I'm sure you remember Lester Brown, who was predicting in the '60s and '70s that with population growth we were going to run out of food and there were going to be wars over water and all this stuff. And in fact, you know...
JAMIE:
We may get our wars over water, but it's a great question. You know, I write about this topic extensively, and I give tons of keynotes, whether to the health and medicine associations or even to the biohackers, although I don't consider myself a biohacker. And what I will say is, you framed it right, because the right answer is: how do we increase average lifespan? And we don't need any new science in order to do that. I think that's the ethical question, because we know people in Japan are living on average to about 85. Here in the United States, we're living on average to about 80. And there are people in places in Africa and elsewhere who are living until their mid-40s. And there are people in our own societies, we're both in the greater New York area, there are people in our community who have much shorter expected lifespans than either you or I. So everybody should be saying: how do we increase average lifespan? If we do that, we will unlock tremendous potential everywhere. If everybody in the United States was healthier, if everybody was living a life geared toward living until they're 80 or 85, which means they're eating healthy diets and exercising and they aren't exposed to all kinds of environmental toxins, that would be fantastic. And there are a lot of people, a lot of people who have their own podcasts, who are deeply involved with the science of aging and who have very high-level credentials from well-established institutions. And a lot of these people are total bullshitters selling snake oil, because the things they are selling are interventions that may work in mice and roundworms and fruit flies, but we have no idea whether they would work in humans, or whether they could even cause harm. And so there's this whole field of influencers like Bryan Johnson. I like Bryan, but this Don't Die movement is preposterous idiocy. Everybody is going to die. It's just part of the natural order of things, and suggesting otherwise is just lying. I can't imagine anybody believes that humans are not going to die. If you want to live forever, order a bunch of plasticware from Amazon and just keep throwing it out the window. That can be your legacy. Or maybe write a book, or have kids and have them have kids down the line. But I do think there will be huge benefits to humans from extending our health span. It will save a lot of money. It's not like there's some limited amount of innovation that's possible for us. We have unlimited amounts of innovation, but we're throwing away human capital, whether we're throwing it away because some previously healthy person in their 60s, who has 20 more years of ideas and love and babysitting or whatever to contribute, can't, because they don't have health, or because we have billions of what we're functionally calling throwaway people around the world who are just living lives of total deprivation. And we see that when people from the places that Donald Trump calls shithole countries come here and have the tiniest bit of opportunity, they become more successful in many cases than those of us who were just born into this opportunity. Look at the recent immigrants from Africa. They are killing it. They're incredibly successful, incredibly hardworking, incredibly educated.
So I'm not worried at all about overpopulation. But right now we have 8 billion people, and apparently we're moving towards 10, and then there's some debate about whether that's going to happen and whether we're going to drop off after that. But even so, to feed and clothe everybody, especially if people are becoming wealthier and more people want to live lives like ours, we just can't do things the way we're doing them now. That's one of the main theses of Superconvergence: for healthcare, we need to distribute healthcare and empower individuals so that we're not overloading our healthcare system. For agriculture, we need to continue to enhance agricultural productivity with fewer inputs of the raw materials of land and fertilizer and water. We need to find ways of storing data, possibly using the lessons learned from DNA, so that we don't have to build data centers that will cover the entire surface of the earth. So I'm not worried about that, but I think we have the potential to unlock human capacities that can decrease the footprint and allow us greater productivity and efficiency in how we use natural resources.
CRAIG:
Yeah. You know, intuitively, that sounds right. But then, you know, I had Nick Bostrom also on the program, and he has a book out called Deep Utopia.
JAMIE:
Yeah, I read it.
CRAIG:
Yeah, yeah.
JAMIE:
But a hard book to read, I have to say. I write books to make it easy on the reader. He was not making things easy on the reader.
CRAIG:
Yeah. And I'm not sure the novelistic elements work, but it's about humans in a technologically solved world, and where does meaning lie, and all of that. But in his book, and in a lot of this talk about longevity and about making it available so that it's not just the wealthy who benefit, the fact is that, largely due to poorly distributed education and to human nature, which for most people doesn't embrace work and effort, and even in your example of recent immigrants, it's still a fraction of people who become really productive. Actually, that may not be right; maybe it's not a fraction, but there is a large fraction who are not really productive. And as technology increasingly solves all the survival problems that people face, you end up with, you know, bored teenagers doing bad things. So I'm not sure that increasing longevity, the average lifespan or health span, necessarily leads to a better world, even if it's evenly distributed.
JAMIE:
No, I completely agree. I just think that it doesn't necessarily lead to a resource crunch. My basic point with everything is that the future is not set. I'm not some kind of Marxist who believes that everything is inevitable. What I think, and the focus of my work, is that there are different possible futures available to us; some are better and some are worse. And if we want those better futures, we need to be talking about them and saying, well, what are the things that we need to do, what are the values that we need to weave into our decisions now, to increase the odds of that kind of outcome? That's why I keep coming back to this issue of governance, which again is broader than government regulation: how do we think about managing our societies, about managing our technologies, about what incentives we want to create for things that we want and disincentives for things that we don't want? And that's why I'm very critical of these technology utopianists who act as if, or maybe believe, these technologies are going to unleash this better world on their own. I remember Bill Clinton, as everybody else does, saying you can't control the internet, it's like nailing Jell-O to the wall. Well, there's a lot of Jell-O nailed to the wall these days. It turns out Jell-O is pretty well suited to being nailed to the wall, based on a bunch of decisions that we've collectively made. So for me, it's really all about values. It's all about what we are trying to achieve. I give a lot of talks to big tech conferences and corporate conferences, and I always tell people: if the first question you're asking is how do we use these technologies, you're going to be screwed from the start. It's the wrong question. The right question is: who are we, what do we stand for, what are we trying to achieve, and what's the right mix of humans and technologies to help us get there in the smartest, most efficient, and most productive manner possible?
CRAIG:
Yeah. I mean, the first time we spoke, talking about managing society and governance: we're developing these artificial intelligence systems that are able to hold larger and larger pools of data in their context, or their memory, so you would think at some point there would be an AI system that could provide optimal solutions to very difficult problems. I think we talked about the Russia-Ukraine war as an example. To me, that possibility is exciting. There are a lot of people who think it can't happen, a little bit like what you're saying with genomics, because the systems are so complex and there are so many interactions within them that you can't see, so it's hopeless to expect to come up with a solution that won't have adverse implications somewhere down the line. What do you think about that?
JAMIE:
Yeah. I do not believe that there's going to be some AI that, and I'll give some variations on this after I answer the question the first time, maybe you could get a system that could articulate, hey, here's an optimal thing to do. But I can tell you right now what's the optimal thing to do for Russia-Ukraine, which is that Russia should leave Ukraine completely, and then we should say, well, how do we build the kind of regional and global infrastructure that is more collaborative and less hostile? That's the right answer. And so then you say, all right, have our AI system make it happen. Well, then you have to give the AI the ability to pull all the levers that we humans have, like, oh, maybe we're going to bomb this, or we're going to give long-range weapons to the Ukrainians. And when you get there, I mean, human societies are just so messy and so complex. Could you, today, go on GPT-5 or whatever and say, write me a 10-page memo providing an optimal outcome for the Russia-Ukraine war that prioritizes mutual respect and whatever? You could get something, and it would be great. But actually doing it would be very difficult. So I think that, yes, AI systems can provide insights. As I mentioned to you in our prep call, I have a book that's coming out in April called The AI Ten Commandments, and in it I describe my process of working with GPT-5 to articulate an alternate, and frankly much better, ten commandments than the deeply flawed ten commandments that Moses purportedly received at the top of Mount Sinai. And in the book I talk about my dialogue with GPT-5 about, well, how can we make it happen? There are some great ideas, but I've been involved in these kinds of ideas for my entire life; I was founder of the global interdependence movement One Shared World. It's hard to do stuff within the messy complexity of our real lives. Do I think that AI systems have the ability to help us see different possibilities for ourselves? Absolutely. Do I think that AI systems can help usher in a period of greater abundance? Yes. Do I think that these systems can facilitate the greater distribution of that abundance, so it's not winner-take-all and you don't have just a few people capturing it? Take the example of Uber. Uber isn't responsible for 99.9% of the innovations that made Uber possible, and yet there's the 0.1% of what they did in setting up their app, and we societally have said, well, they are the people who've created this wealth, not society or human cultural history or all of these things. So if we set up winner-take-all systems, we're going to have this vast inequality that frankly is going to undermine everything we're talking about. I wish we could say, oh, AI is going to do this. It's not. The future doesn't belong to AIs alone, and it doesn't belong to humans alone. It's humans and our technologies finding optimal ways to work together toward our best possible outcomes. And because our human perceptions of those best outcomes are varied, it's just going to be messy for the entire foreseeable future.
CRAIG:
If you increase, on average, everyone's health, which presumably would increase their lifespan, but those people are not educated, you're only going to increase many of the problems that society already faces because of the lack of education. To me, education is the prior before all this other stuff. I was in India for a month recently, talking to AI people, and when you're in the research institutions, you're talking about all the great things that this technology can do for society, and then you go out on the street and you've got this chaotic environment with a very large population that's undereducated. All of the ease, everything, only increases some of the bad effects. If people are no longer worried about feeding and clothing themselves but they don't have any education, they're going to be easily manipulated by populists, or they're going to be attracted to drugs or other things that make the misery of a meaningless life palatable. So what are your thoughts about that?
JAMIE:
Yeah, no, I completely agree. All these issues go hand in hand; you can't just do one and not the others. That's why human flourishing is the broad category, and it has different cases. Traditionally, the places and communities that have developed greater lifespans and health spans have tended to be more educated anyway, and to have stronger communal bonds. So I absolutely think that, whether for education or for extending health span, it can't be serial, like we're going to pick one and then two and then three. In every society, in every community, there are ways of expanding human flourishing, and people will need to be more educated in order to benefit from these things. It's not like you're going to get one shot. I mean, there are much bigger wins to be had in extending health span through smarter lifestyle decisions than anything that can be delivered by AI or medical systems. So these things certainly have to go hand in hand.
CRAIG:
Yeah. And in your book you talk about humanity being kind of on the brink of this new world because of the superconvergence. The effects of that convergence are going to multiply each other across domains, so there'll be this exponential progress, one would presume. And again, thinking of Nick Bostrom's work and all of the negative effects, besides the positive effects, that came from the internet, and particularly social media: are you optimistic that we'll be able to guide society through this new future in a predominantly positive way, or do you worry that it could go very bad? I guess that's why you talk about...
JAMIE:
And my answer, of course, is yes. I mean, I'm an optimist. I certainly think that we have tremendous opportunities to do very positive things with these technologies, and we have tremendous opportunities to do terrible things with these technologies. That's why I keep saying the technologies themselves don't come with a built-in value system; it's up to us to infuse our best values into our processes of developing these almost godlike capabilities.
So I am optimistic, because I think we have these great tools that can allow us to do all of the things we've discussed: expand human flourishing, education, healthcare, adequate food supplies, and solve a lot of difficult problems. And we have the opportunity to do much greater mischief at a much greater scale. That's the same with every technology we've ever had, agriculture, the stirrup, any technology you come up with: you could say, well, here are some really positive things, and here are some really horrible things that happen as a result of those capabilities. That's certainly the case here; there will be terrible things. Do I believe that humans are the new Neanderthals and AIs are going to take us over and kill us all? I don't think that's a very likely scenario at all. Do I believe that humans will continue to use these capabilities to do harm to each other? It's a certainty. Look at what's happening in Ukraine: the only reason that both sides aren't deploying fully autonomous killer robots is that the partly autonomous killer robots are more effective at killing than the fully autonomous ones. The second the fully autonomous become better killers is the second that everybody in Ukraine is going to flip that switch. Look at societies around the world. You mentioned social media a moment ago; I certainly share your concern that social media is a cesspit, and you look at how these companies and the algorithms they've created are just pitting people against each other. Here in the United States, we have the crazy right and the crazy left, who are mirror images of each other, each using the exact same strategies, the exact same dynamics in our attention economy. But again, even with these terrible social media platforms, the issue is that they aren't governed. We as a society decided, well, let's just let these things go on their own, and now these companies, and they and crypto, have become so powerful that they're able to flood our political class with money, and so they become unregulatable. That's why I just think that societies have a role, and it's very difficult in our democracies, because not everybody is thinking about long-term risks and benefits. Most people are just thinking, yeah, well, here's a new tool, and this is great.
And that's why we have governments in the first place. But I worry, certainly, like everybody else should, that these wonderful technologies have the possibility to do very significant harms. Everything then leads back to: what are our values now, and what are the things we're doing now to realize our values and weave our values into our systems?
CRAIG:
Yeah. How do you see it? I mean, you say you're an optimist, and you're a science fiction writer at times. Can you, in a few sentences, paint a picture of the world at some point in the future when the superconvergence takes hold? And then I'll counter with what I see.
JAMIE:
Okay, sure. Well, there's no one world. I mean, that's what I've been saying: there's no one inevitable world, there are many different worlds. Certainly, with these capabilities, we will have the potential to have radically improved health and healthcare, radically improved and more productive agriculture, radically decentralized education so that people even in poor places can have access to world-class capabilities, and greater levels of abundance. And, as I said before, we're going to have all these capabilities that will allow us to do some really horrible things, like the ones we've described. So if you want to know what superconvergence looks like, just think of what our pre-industrial ancestors would think if they visited us today. This would feel like superconvergence to them, with such a big jump between how they lived and how we live. We're going to have similar, I think even bigger, jumps, and they're going to happen in a faster period of time. But again, there's no one inevitable future. It is inevitable, though, that our godlike capabilities are going to become even more godlike, if we think of the word God as how our ancestors perceived these deities that could negotiate life and death and change the world around them.
CRAIG:
Yeah. My primary concern, the older I get, is the human search for meaning, in a universe where, personally, I think there is no meaning. I mean, there is meaning in interpersonal relationships, but beyond that there's no sort of greater meaning or greater purpose. Richard Dawkins, in his book The Selfish Gene, talks about how we're just these dumb hosts for genes that want to propagate, and so we're stumbling around. We are symbionts with our own biology, with all the microbiomes that are getting a lift on us, so we're part of a complex ecosystem, right? But we've developed consciousness in these brains, and we're trying to figure out why, what is our purpose, where is the meaning, and there isn't any. That creates a lot of human misery and a lot of anger and frustration. And I don't see any of these technologies addressing that.
JAMIE:
Well, no, we do have a technology that can address it: it's called Buddhism. I mean, I think that we have these brains that have evolved in collaboration with our technologies, like controlling fire and even agriculture, and we have different human systems to try to address these questions. That's why, when people think about technology as this total other, and it's either going to give us a sense of meaning or not give us a sense of meaning, I think that's just preposterous. We humans have been struggling with this exact question for all of our recorded history, certainly, and obviously much longer than that. And I'm not saying we have any complete answers, but Buddhism is an answer to that. For people who believe in their various gods, that's an answer. I'm an atheist, but there are humanist traditions where just caring for other people in this life and living an ethical life, that's enough. So I don't think that technology or anything else is going to give us an absolute answer, but there's nothing new in that question that really changes as a result of our relationship with these technologies.
CRAIG:
Right, these are the same questions. Except, and this is what fascinated me about Bostrom's Deep Utopia: since the dawn of man, there's been this steady technological improvement, which is designed to make life easier, more comfortable, longer. But this fundamental question about the meaning of life has always been there. Earlier, when people were occupied with feeding themselves and clothing themselves and creating shelter and getting warm, the question was in the background. As that gets taken care of, the question becomes larger, and that leads to a lot of criminality, a lot of drug use.
JAMIE:
Yeah, it certainly can, but those are societal choices.
I mean, it's funny. So many of our ancestors lived in these little shitty villages somewhere, but it seems like a great life: living in a little village, being a hunter-gatherer, having these strong communal bonds, and the life cycle with its ups and downs. That seems pretty great to me. Certainly our societies have become so big, and we've become so atomized, but I do think that humans can recreate that. You know, we live in New York. There are lots of people here who have these wonderful urban herds. As I said, I'm an atheist, but I do stuff with Central Synagogue here in New York, and it's a wonderful community of people who are coming together and finding meaning, not just through community, but through good acts of repair, tikkun olam, repairing the world. I just think that there are many different possible answers. There are some countries, when you look at the happiness index, countries like Estonia or Israel or many of the Scandinavian countries, where there's a sense of purpose, a sense of community, a sense of belonging. So I just don't think any of this stuff is inevitable. But I absolutely feel that in places like the United States, where we've underinvested in community, it's the bowling alone thesis, there's a real cost to that, because humans are inherently social animals. If we don't have those kinds of structures and infrastructures and even stories, absolutely we'll get these kinds of outcomes. And if the technology story is an us-versus-them story of a small number of people who are the big beneficiaries, who are controlling the stories and even manipulating everybody else, which is the case with Elon Musk and Twitter, then I think it's going to be very toxic and dangerous. So none of this is inevitable. The question is just: how are we going to do our best to try to make the best possible story?
CRAIG:
The way I see the world trending, and the way it's been for a long time actually, is that power and resources are becoming increasingly concentrated, and technology is allowing the people who control it to keep the masses more or less occupied. I imagine a future where there is this educated elite that lives a wonderful life, and then the rest of the population is basically drugged. You know, governments legalize certain drugs, there are clinics where you go in and get your fix, and then these people lie around with their VR headsets on, or whatever, and pass the time.
JAMIE:
It could be. I mean, it's not inevitable, but that's certainly one possible future. And look at what's happening now, where in the United States the Trump administration is engaged in wholesale looting of the generational assets of the country through this crypto scam. In your scenario it's drugs; in our case it's feeding this story of nationalism and nativism. The people who are getting looted, whose country is taking on debt that is being transferred to the Trump family, are fully on board and excited about this, not recognizing that they are the ones who are actually going to have to pay this debt, whether through higher taxes or bad schools or bad roads or bad healthcare or no Social Security. So the world that you're describing is a world that partly already exists. And what I'm saying is, if that's not the world we want, we've got to fight for the world that we do want. Certainly it's hard, and certainly these inequalities seem to be growing. But for me, that's why I do the work that I do. That's why in my communications, whether my books or interviews like this or whatever, what I'm trying to say is: this is a transformational story for humanity, and it needs to feel intimate and personal to every single person. Every single person needs to say, here's a future that I'd like to see, and here's what I'm going to do, at whatever level, my own family or community or government or internationally, to try to move toward that future. The best scenarios are possible, the worst scenarios are possible, and everything in between is possible.