The Eleventh Commandment: Jamie Metzl and GPT-5 Write a New Moral Code for Humanity
“These technologies are morally agnostic. They could be the best things ever and the worst things ever, and the determinant is us.” — Jamie Metzl
Two summers ago, Jamie Metzl gave a talk on the future of AI at the Chautauqua Institution in Upstate New York, the same stage where Salman Rushdie was stabbed a couple of years earlier. Invited back the following summer, Metzl spoke on AI and spirituality. That talk triggered not an assassination attempt but The AI Ten Commandments: A New Moral Code for Humanity, a book co-authored with GPT-5. Metzl humbly claims that AI enabled him to incorporate non-Christian traditions into a new moral code for humanity.
Some might think, however, that this type of ChatGPT-5 co-production reflects a new moral crisis for humanity. The victory of AI slop. Fast information. High on intellectual calories, low on everything else.
Five Takeaways
• Co-Authoring with GPT-5: Five to six thousand back-and-forth exchanges over the course of writing the book. Metzl is a novelist who cares deeply about language and the provenance of ideas — he is explicit that this is not the kind of AI fraud that got Mia Ballard’s book pulled from Hachette. The analogy he reaches for: Refik Anadol at MoMA, whose installation uses the museum’s entire digital collection not to reproduce the images but to create something new from them. The collaboration with AI isn’t about outsourcing the thinking. It’s about gaining a vantage point that no individual human could have — the same way we collaborate with machines in biology to see the genome, which no one could simply observe by looking at another person.
• Moses’s Problem: The biblical 10 commandments, examined closely, don’t hold up. The first two are preamble. “Thou shalt not kill” — Moses received it on Sinai and then came down and murdered 3,000 people at God’s instruction. The commandments were written by people with no awareness of the moral traditions of the Americas, Asia, or Africa. Metzl’s counterproposal uses AI to look at all of human recorded history simultaneously — every tradition, every culture, every spiritual framework — and decipher what they share. The analogy: the Artemis II astronauts seeing Earth holistically from space, rather than one community at a time.
• The Ten Commandments, Listed: (1) Treat every being with compassion and dignity. (2) Do no harm; actively protect the vulnerable. (3) Speak and act truthfully, with integrity and humility. (4) Share generously, especially with those in need. (5) Seek to understand others before judging them. (6) Resolve conflict with fairness, forgiveness, and the intent to heal. (7) Live in harmony with nature and all forms of life. (8) Value wisdom over dominance; cultivate inner growth. (9) Honour the freedom and uniqueness of others. (10) Remember the sacredness of life; live with awe, gratitude, and love. Metzl’s favourite is number ten. Andrew’s objection: you don’t need GPT-5 to come up with any of these. You could get most of them from a local Buddhist centre.
• Humanistic Slop vs. Selfish Survivalism: Andrew’s repeated challenge: these principles are so unobjectionable that they amount to nothing — a kind of AI-laundered platitude. Metzl half-concedes, but argues that the absence of articulated universal norms is itself a political danger. Kant described the League of Peace in 1795. It took a hundred and fifty years and two world wars before the UN Charter was signed in 1945. The UN has now largely failed. If we don’t articulate what we’re trying to achieve, it becomes even harder to get there. Globalism, in Metzl’s framing, isn’t idealism. It’s survivalism. Our fates are intertwined whether we recognise it or not.
• The Eleventh Commandment: World-changing technologies must be governed responsibly, including through national regulation and accountability frameworks. The hope that AI CEOs will voluntarily do the right thing — even the best of them, even Dario, even Demis — is a terrible strategy. It will fail, because some companies will always seek opportunity. The nuclear analogy: at the dawn of the nuclear age, nobody said “alright, just do whatever you want and good luck.” These are civilizational transformations. They require governance. These technologies are morally agnostic. They could be the best things ever and the worst things ever. The determinant is us.
About the Guest
Jamie Metzl is a technology futurist, geopolitics expert, sci-fi novelist, and founder and chair of OneShared.World. He is a Senior Fellow at the Atlantic Council and a Singularity University expert. He is the author of The AI Ten Commandments: A New Moral Code for Humanity (co-authored with GPT-5, April 21, 2026), Superconvergence, and Hacking Darwin.
References:
• The AI Ten Commandments: A New Moral Code for Humanity by Jamie Metzl and GPT-5 (April 21, 2026).
• OneShared.World — Metzl’s global social movement and Declaration of Interdependence.
• Episode 2877: Keith Teare on AI Is Not Dangerous — the Silicon Valley seminary argument, one episode prior.
• Episode 2878: Victoria Hetherington on The Friend Machine — the AI intimacy investigation that immediately precedes this show.
About Keen On America
Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States — hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,900 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.
Chapters:
00:31 - Why GPT-5 and not Claude? The co-author question
02:58 - Is this a joke? The Chautauqua origin story
05:09 - The Refik Anadol distinction: collaboration vs. fraud
07:57 - From the genome to the moral code: why collaborate with AI
08:54 - What is Chautauqua? The six-thousand-person standing ovation
09:53 - Moses’s problem: the biblical 10 commandments examined
12:48 - Sam Altman and the Ronan Farrow piece
14:00 - Advance praise from the Vatican and a leading Reform rabbi
15:15 - Humanistic slop: the Andrew challenge
16:05 - Peter Singer: if commandment seven is right, are we all vegetarians?
17:33 - The ten commandments, read aloud
19:29 - OneShared.World: the global interdependence movement
20:38 - Is this just Metzl’s Oxford PhD dressed up as universal truth?
21:46 - Give me a concrete one. Is there any that’s not slop?
23:40 - The politics: is this globalism, socialism, environmentalism?
24:02 - AI is morally agnostic: the technology is us
25:14 - The UN Charter, the Declaration of Human Rights, and the current administration
26:17 - What about the localists who don’t want Buddhist traditions?
27:52 - Is there any practical evidence anyone wants One Shared World?
28:28 - No evidence at all. But the survivalist case for universal norms
30:49 - The famous five CEOs and what they should do
32:20 - We can’t just hope the CEOs are moral: the governance argument
34:08 - The eleventh commandment: world-changing technologies must be governed
35:45 - Everything is political; norms precede structures
00:00:31 Andrew Keen: Hello, everybody. It's Monday, April 20, 2026. If it's April 2026, it must be the month to talk AI. It's all we seem to do these days. A couple of days ago, we did a show with Keith Teare, Silicon Valley venture capitalist and investor, on why, at least in his view, AI is not dangerous. He said we should say it out loud, so we did. And then yesterday, we did a show with the Canadian writer Victoria Hetherington on how to friend the machine, how to marry one's AI. She has a new book out, The Friend Machine. We're going one step further, though, today with my guest, Jamie Metzl. He's a technology futurist, a geopolitics expert, entrepreneur, sci-fi novelist, and keynote speaker. So a big deal. And he has a new book out. It's a rather adventurous, ambitious one. It proposes a new moral code for humanity. It's called The AI 10 Commandments, but Jamie is just the co-author. He's co-authored it with a certain GPT-5, one of Sam Altman's algorithms. So congratulations, Jamie, on the new book.
00:01:49 Jamie Metzl: Thank you so much, Andrew. Very happy to be here.
00:01:52 Andrew Keen: So why did you choose GPT-5 over, for example, Claude? I much prefer Claude. Does it really matter who you co-authored this with in terms of an AI algorithm? Did you find GPT-5 particularly trustworthy or wise?
00:02:15 Jamie Metzl: Yeah, it doesn't particularly matter. I feel like I could have written roughly — I mean, it wouldn't have been exactly the same, but the same idea, with any one of these high-quality algorithms. But I chose GPT-5 because I wanted to pick one. And because right now — or until very, very recently — GPT-5 was the known brand. And if I was going to have a co-author, I just wanted to have it feel a little bit familiar. That was, to have the unfamiliar feel familiar to people, and that's why I went with GPT-5. But had I done the exact same thing with Claude, there would have been, I think, some meaningful differences, but the broad contours of the project would have been the same.
00:02:58 Andrew Keen: Jamie, some people are gonna be watching this and thinking, "some guy who's co-authored a book with GPT-5 talking about a new moral code for humanity, the AI 10 Commandments," which is, of course, borrowed, I suppose, from the Bible's 10 Commandments — might think, is this a joke? Are you actually serious in this book?
00:03:21 Jamie Metzl: I am serious. And so there's two things. Maybe if I take a step back and give you where the book comes from. Two summers ago, I was invited to the Chautauqua Institution in Upstate New York to give a big talk on the future of AI. And I got a 6,000-person standing ovation in their amphitheater, and they invited me back for the next summer to speak on anything. And I really thought a lot about it, and I decided that I wanted to connect my thinking on AI with Chautauqua's tradition of over a hundred and fifty years of religious pluralism. And so I gave a talk on AI and spirituality. And that talk went really well. But one of the things that people really loved as part of that talk was when I described how I collaborated with GPT-5 to mine the entirety of human recorded history and all of our spiritual, moral, and ethical traditions to come up with 10 principles based on all of us and all of our history that, if followed by everybody, would lead to the greatest amounts of peace and happiness and human flourishing. And so this piece of the book is articulating what these 10 commandments are, and my back-and-forth with GPT-5 over the course of writing the book. I mean, if I had to put a number on it, maybe five or six thousand back-and-forths — it was an incredible intimacy. There's so much crap coming out into the world, of people doing things, co-writing with these algorithms. But I am also a novelist. I care deeply about the language, about ideas, about the provenance of ideas. So I feel it wasn't like Mia Ballard, who just had her book pulled from the shelves by Hachette because it was just pure fraud, working with —
00:05:09 Andrew Keen: Well, you're being honest. And as you say, you've written a number of books. Your books Superconvergence and Hacking Darwin have both done well, and you're a novelist as well. What's it like then, Jamie, as a novelist and successful nonfiction writer, to co-author a book with GPT-5?
00:05:29 Jamie Metzl: It was quite wonderful. It was just different. So I was talking about Mia Ballard. I don't know, Andrew, if you've been to the Museum of Modern Art here in New York, but there's —
00:05:40 Andrew Keen: Yeah, I have.
00:05:41 Jamie Metzl: So then you've seen, I presume, there's this wonderful installation. It's about three stories tall by an incredible artist named Refik Anadol. And so what Refik Anadol has done is get the digital images of the entire collection of the Museum of Modern Art, and he's written an algorithm not to show those images, but inspired by those images to create this wonderful, phantasmagorical series of colors and shapes, and it's just mesmerizing. And so the difference between Mia Ballard and Refik Anadol is the difference between fraud — humans just cheating — and somebody, an artist, saying, "well, we're at this new era where humans and our machines are going to be able to collaborate in new and unique ways. Let's explore what those things are." So his work is one way. And for me with this work, what I wanted to basically show is that collaborating with AI, we humans have the ability to see ourselves a little bit differently, to see our entire recorded cultural history from a collective lens that is probably not possible for any one of us individually. And so that's why it's a very logical outgrowth from the work that I do in biology, where nobody just looking at another human could see their genome. You couldn't see their complex systems biology, their epigenome, their metabolome. So we collaborate with machines in order to see humans differently. And seeing humans differently in the medical context and the health context is what allows us to have different treatments that can cure our diseases and help us live longer, healthier lives. So I think this is a very exciting moment of potential collaborations with AI. And my book is about that. Your previous guest, as you mentioned, is talking about relationships. And yes, there'll be some people who have very unhealthy relationships with AI, and there'll be people who will be extended and in some ways enhanced by our collaboration with these kinds of machines. 
So I think, while we can have healthy skepticism, we can also and should explore what kind of collaborations are possible, and that's what I tried to do in this book.
00:07:57 Andrew Keen: The subtitle of the book, Jamie, is A New Moral Code for Humanity. You've talked a little bit about the scientific benefits of AI. And I know you've written about them as well, about how your own family fought back against cancer and how we can all do it too, which is very inspiring. But how did your association with GPT-5, and in this new book, the AI 10 Commandments, how did that help you come up with a new moral code for humanity? I mean, the old code of the 10 commandments from the Bible — I think most people would kind of agree that they're okay. Maybe one or two might not be appropriate in the twenty-twenties. We might be missing a couple, but most people kind of agree on the moral code. The question is how'd you actually get it done?
00:08:54 Jamie Metzl: You know, that's one way of seeing it. In this Chautauqua talk that I gave last summer —
00:09:01 Andrew Keen: Which is — to explain. You've mentioned that a couple of times. What is Chautauqua? What does that mean?
00:09:06 Jamie Metzl: So Chautauqua is this magical place in Upstate New York. And if you go there in the ten weeks of the summer, you would think, oh, I'm just in a large village. It's got houses. It's on a lake. It's got an opera house, a city hall, a theater. There's a 6,000-person amphitheater. But it comes to life in the summer, where people live there, and there are lectures and conversations and concerts all day. And tragically, the place that people know it mostly by is — this is where Salman Rushdie was attacked.
00:09:43 Andrew Keen: Yeah, I remember that. But what's the difference? I mean, that sounds to me like a summer version of TED. What's that got to do with a moral code?
00:09:53 Jamie Metzl: Anyway, what I'm saying is that the foundation of this whole book was this talk that I gave there. And as part of that — definitely Chautauqua has a religious tradition, as it's been around for more than a hundred and fifty years — but that's why I gave that talk. In that talk, I mentioned how I collaborated with AI, but I also said, well, let's go through these biblical 10 commandments. And I went through them one by one. I hear what you're saying — oh, everybody stands by them. But when you go through them one by one, at least when you take them literally, I've come up with two that are less ambiguous, that we would say, oh, I guess I'm pretty much for that. The first two — "I'm the Lord your God. You shouldn't have any gods before me" — that seems like preambular language to me. If I was doing 10 commandments, I would say, "I'm the Lord your God. Don't have other gods. And I give you these 10 commandments." So now we're down to eight. And as I mentioned in the book, most of them as written aren't fully defensible. On the one hand, it's like, you know, "thou shalt not kill" or murder. Well, that seems good. But if you're landing on the beaches in Normandy, your mission is to kill as many of those others as you possibly can. And even Moses, according to the story, was up on Sinai, and he got these 10 commandments allegedly from God. And one said don't kill. And then he came down, and the golden calf was happening, and he threw the 10 commandments at these people. And then him and his brother, at God's instruction, murdered 3,000 of these people for the crime of having slightly different religious beliefs. And then he went back up on the mountain and rewrote these 10 commandments. So I think when you go through the 10, I would actually personally not fully agree with your characterization. They're not optimal. 
And the people who wrote these 10, whoever they were, they had no clue that there were people in the Americas who had their own very beautiful moral and ethical traditions. And they didn't know about the people in what's now India or other parts of the world who had struggled with these very same ideas. And so for me, working with AI to see all of humans and all of our cultural heritage as one thing, from which we can decipher 10 universal principles, is kind of equivalent to the Artemis II astronauts seeing our planet holistically from space. Could I, on my own, have just sat down — and I've done this before — and come up with ten great principles that most people would stand by? I think I could have done it. But I couldn't have done it with the breadth of vision that an AI algorithm with access to the entire digitized recorded history of humanity would have.
00:12:48 Andrew Keen: One, of course, owned by — well, not owned — you saw the New Yorker piece, the Ronan Farrow piece. A lot of people don't quite trust him. Maybe we'll come to that later.
00:13:04 Jamie Metzl: I know. What I will say on that — the author of that article, Ronan Farrow, I've known since he was a teeny-bopper high school kid working for a former mentor of mine, Richard Holbrooke. So I read that article. It's very interesting. And for sure, if our strategy for AI is we're going to just trust the CEOs of these companies to do the right thing — whether it's Sam Altman or Dario or Demis Hassabis, who I think is incredible — that's a bad strategy for us. The best strategy —
00:13:36 Andrew Keen: I wanna come to that later and talk about trust and that sort of thing. But let's go back to the book and your thinking. I mean, it sounds to me like you got a bit lucky, Jamie, given your heretical take on the 10 commandments. You were lucky someone didn't jump on the stage and stab you. Maybe they were too busy.
00:14:00 Jamie Metzl: Well, let me push back on that, because I'm very proud — if you scroll down on what you just showed, I have the advanced praise for the book. And I have advanced praise from the Vatican, from Angela Buchdahl, arguably the leading reform rabbi in the United States. And the reason why I'm getting this kind of praise, including from people in the religious community — the reason why I've been invited to share this with Unitarian Universalist churches here in New York — is I'm not saying that all of our prior traditions are bunk. I'm actually saying the opposite. What I'm saying is that all of our cultures, all of our traditions have these wonderful ethical codes that mostly agree with each other. And if we can look holistically at all of them, we can decipher universal principles that tie us together. And it doesn't mean we have to jettison any other principles, whether it's the 10 commandments or the five pillars of Islam or really anything else. But we can have an additional layer, and that additional layer comes from a level of recognition of everybody's humanity that our ancestors, who were largely the authors of our individual traditions, just didn't have.
00:15:15 Andrew Keen: I'm gonna take your point, Jamie. Although some people might be thinking — and it's like talking to GPT-3 or 5 or Claude — what you're articulating is a kind of humanistic slop that no one would disagree with, but no one would exactly know what it means. So let's get to the details. You obviously know your religion. You know your AI. You reject the sort of localism, the parochialism of the original 10 commandments. I take your point on that. What came out then of all this work you did with ChatGPT-5 to come up with a new moral code for humanity that gets beyond kind of humanistic slop — saying stuff that no one would disagree with, but also no one would quite know what it means?
00:16:05 Jamie Metzl: Well, I think there is a lot of slop. But to come up with 10 principles — the reason why I'll come back to Chautauqua — I got mobbed. I mean, these are mostly old —
00:16:17 Andrew Keen: Well, yeah. I mean, I'm sure you got mobbed. But that's about the point.
00:16:22 Jamie Metzl: No. But what I'm saying is that everybody was saying that when you read the biblical 10 commandments, they didn't seem intuitive. They didn't seem intuitive to me as a human based on how I live my life. But every single person from every background has said these 10 principles do feel intuitive. You know, I did a conversation with Peter Singer, the great philosopher, and that was what he said. But then he said, but if you drill down — if one of them is "treat all of life with dignity and respect," shouldn't everybody be a vegetarian? Which is probably correct.
00:17:03 Andrew Keen: Well, you know, I'm sure Singer had strong views on the rights of animals.
00:17:07 Jamie Metzl: Yes, yes.
00:17:08 Andrew Keen: In the same vein — particularly, sort of vocal moral — so let's get — I take your point. You obviously know a lot of people, from Ronan Farrow to Peter Singer to these happy people at your speech. But give me an example concretely of one of these new commandments that will resonate, that makes sense, that's not just AI slop.
00:17:33 Jamie Metzl: Yeah, I agree. Do you wanna call it up? Because if you go to the AI 10 Commandments —
00:17:37 Andrew Keen: I want you to tell me.
00:17:38 Jamie Metzl: Alright. Perfect.
00:17:39 Andrew Keen: We don't do slides. I mean, we're not — this is not TED. Jamie, this is not —
00:17:45 Jamie Metzl: But you've been showing slides the entire time.
00:17:48 Andrew Keen: Yeah. But they're my slides.
00:17:50 Jamie Metzl: Oh, okay. Fair enough. Alright. So let me just get you —
00:17:54 Andrew Keen: Well, I got some stuff. I got more advanced praise. I don't know who it's from, but: "in an unprecedented and historic collaboration, leading futurist and best-selling author partners with the AI system to demonstrate a breathtaking possibility that advanced artificial intelligence guided by human ethics and wisdom can help shape a more humane future." So I hope you're right, but just give me some concrete —
00:18:23 Jamie Metzl: You know what? Here, I will read you the actual AI 10 Commandments. How about that?
00:18:28 Andrew Keen: Yeah, well, let's start. What's the best one? If there was just gonna be one, what would it be?
00:18:34 Jamie Metzl: Let's have a look. They're all pretty good, but I'd say number 10: "Remember the sacredness of life. Live with awe, gratitude, and love."
00:18:46 Andrew Keen: But you don't need AI to come up with that. I mean, all you need to do is go to your local massage — I don't know who's —
00:18:57 Jamie Metzl: I don't know who's giving you your massage.
00:18:58 Andrew Keen: I don't know. Well, you go to a local Buddhist center. I mean, say it again.
00:19:02 Jamie Metzl: No, I think it's a very good point. A lot of these principles are principles where you say, well, a lot of this sounds like Buddhism. A lot of this sounds like Baha'ism. A lot of it sounds like Unitarianism. I'm the author of the One Shared World Declaration of Interdependence, with a lot of these principles. So the "so what?" — it may be that it's totally —
00:19:29 Andrew Keen: Right. And just to be clear, you're — well, you're the founder, or you're certainly involved with —
00:19:33 Jamie Metzl: Yep.
00:19:34 Andrew Keen: — One Shared World, the nonprofit cultivating a culture of peace.
00:19:39 Jamie Metzl: Yes. And so I think the whole point is, well, this "so what?" is just that these are all totally familiar. And why is it that everybody is not Baha'i? Because it seems like if you just look at all the different traditions — wow, Baha'ism and Unitarian Universalism and Sufi Islam and Reform Judaism, all these traditions are all pointing in the same direction. And this is that same direction. So you could call it slop, but is there a normative reason why everybody just hasn't become a Buddhist? My contention is that part of it is that any one of these traditions, coming from a unique individualized place, it becomes harder for people to say, oh, this is part of the story of all of us. But no Buddhist would read these principles and say, oh, I'm against these principles.
00:20:38 Andrew Keen: Isn't this — I mean, it's the same with all these arguments about what AI does and doesn't tell us and this idea of creating some sort of universal objective truth. Doesn't this really just reflect your own particular interest? I know you've got a PhD in East Asian history from Oxford. So it's your thing. There's nothing wrong with it being your thing, but dressing it up as a new moral code for humanity is a bit of a bridge, isn't it?
00:21:05 Jamie Metzl: No. I don't know how — what's the connection with my PhD? But what I will say is, the point that I'm making, and have made repeatedly and have made with you, is that this is not a replacement for all other moral codes. What this is is a bringing together of the common essence of the best of all of these traditions. And there is, frankly, a legitimation, I believe, partially, of doing this through a process that has access to all of our traditions. That's it.
00:21:46 Andrew Keen: Okay, I take the point. And, you know, I don't want to sound too critical. But I mean, it would just sound so vague and forgettable. Give me another example of one that's more concrete of one of your new 10 commandments.
00:22:00 Jamie Metzl: So, you're not gonna like any of them. But in the book — I don't know whether maybe you haven't read it yet — I have chapters about how to drill down and how to turn everything into being more meaningful.
00:22:15 Andrew Keen: Mhmm.
00:22:16 Jamie Metzl: But alright. So here's what we have. I'll just go quickly one to 10. One: treat every being with compassion and dignity. Two: do no harm; actively protect the vulnerable. Three: speak and act truthfully, with integrity and humility. Four: share generously, especially with those in need. Five: seek to understand others before judging them. Six: resolve conflict with fairness, forgiveness, and the intent to heal. Seven: live in harmony with nature and all forms of life. Eight: value wisdom over dominance; cultivate inner growth. Nine: honor the freedom and uniqueness of others. Ten, as we've discussed: remember the sacredness of life; live with awe, gratitude, and love. And so we have three states here in the United States which — one by law, and two almost by law — are requiring the biblical 10 commandments to be posted in every classroom. I think it would be very beneficial for the kids in these schools to print these out and tape them next to these biblical 10 commandments, saying, well, these are 10 commandments that come from all of us.
00:23:15 Andrew Keen: Those states wouldn't probably be particularly keen on your book, or at least your 10 commandments.
00:23:22 Jamie Metzl: We will see. But again, I'm respectful of everything that has come before. I think that it is okay for us to say that, in addition to these very wonderful traditions across the board that we have, what are the ways that we can come together and do something that reflects the best of all of us?
00:23:40 Andrew Keen: So what's the politics of all this? I mean, AI by definition is political. It reflects us. Some people might have listened to some of those commandments and think, well, Jamie's suggesting we all become radical environmentalists or socialists, even communists, give up our money, pay large taxes. Is there a political dimension to this, or is this beyond politics for you?
00:24:02 Jamie Metzl: Let me talk about the word "new" — because this isn't Martians coming down and declaring these commandments. This is mining our own history. And I think people get confused about where the wisdom of AI is coming from. Most of the wisdom of AI is coming from us. And I think there's a real danger of people othering AI when it is reflecting us back to us. So it depends on the politics. My hope is that this will be taken, by whoever is interested in it, as a statement that there are universal values and universal principles. That's something, you know, we tried — we, humanity, tried — with the Universal Declaration of Human Rights. We tried with the United Nations Charter. But we're at the early stages of articulating that there are common principles that don't come from any one tradition, but that can come from all of our traditions. So I think this very much should be seen as part of that.
00:25:14 Andrew Keen: Although — I mean, we've done a number of shows recently about the crisis of the United Nations. The current American administration doesn't seem to be particularly keen on the Declaration of Human Rights.
00:25:25 Jamie Metzl: Yep.
00:25:26 Andrew Keen: So there is a politics there. You're clearly a globalist. You've fed all this stuff into ChatGPT to suggest that there are universal principles. But some people might be watching or reading your book and thinking to themselves, well, I've got my tradition, a Christian tradition, the 10 commandments. They're enough for me. I don't want any Buddhist traditions in my tradition. You seem to be suggesting that we're a species, and that when we think about ourselves, we should be thinking about ourselves as a kind of global species, whether or not we've been to Japan or East Asia or Africa or North America. What about the localists who say, I don't wanna know about people from another neighborhood, let alone another country?
00:26:17 Jamie Metzl: Yeah. So it's my view — you talked about One Shared World — I absolutely have been involved with the global interdependence movement. And I certainly believe that in our world as it exists today, we are just connected to one another, whether we like it or not and whether we recognize it or not. I mean, that's what the pandemic has shown us. That's what the debates over climate change show us. And so I would be the last person to denigrate or degrade people's extremely valuable local or personal traditions. But what I am saying, and have said for a very long time, is that if we think of this as a layer cake and we all don't consider the layer of our global interdependence, we're going to be in very big trouble, because we are a species. We're connected to one another. Our fates are intertwined, again, whether we like it or not. And so we need to have, in addition to our many fantastic individual traditions, some kind of overlay that recognizes that we're all, in many ways, in this together. That's why, yeah, I've been very involved with the World Health Organization over many years, and that's why I was a critic of the Trump administration pulling out of the WHO. Because if we're not in the WHO, we're just going to need to recreate something kind of like the WHO, because pandemics are inherently global. The next time one of these happens, there's no such thing as building a wall. You can't build a wall high enough. So I have absolute respect —
00:27:52 Andrew Keen: — for the local and parochial. You know yourself — I mean, you're described, at least on Fox, as a COVID origins whistleblower, and that debate generated a lot of local biases of one kind or another — US, China. So that's the reality of the world. I respect your commitment to One Shared World, but is there any practical evidence these days, Jamie, that anyone apart from yourself wants it? And maybe a couple of people who go to a conference in New York.
00:28:28 Jamie Metzl: Yeah. But let me answer that clearly. There's no evidence at all. Our world is dividing. And you talked about my work on COVID origins — I don't wanna be some kind of blind Pollyannaish optimist. I believe that the way to build a better world is to fight for the good stuff and fight against the bad stuff. That's why for many, many decades, I've been speaking very honestly and very forcefully about China, about politics in the United States, on COVID origins. I have been at the center of that debate for six years, based on my belief, based on all of the evidence — albeit circumstantial — that points to a research-related origin in China followed by a criminal cover-up. So I don't wanna be Pollyannaish. But it is my view that in our globally interconnected world, we need to have some set of universal norms and global principles, or else we are going to destroy ourselves. So it's not just about some kind of bland kumbaya-ism. It's actually very selfish survivalism — that in a world where our fates are intertwined, we're going to need to find ways of coming together. And to start with that, we need to have some kinds of normative frameworks. These things can take a long time. It was 1795 when Immanuel Kant described what he called the League of Peace. And it was a hundred and fifty years later, and two world wars, and the failure of the League of Nations, when finally in 1945 the United Nations Charter was signed. And now, we're eighty-one years after that, and the United Nations has largely failed, significantly because of actions by large states. I would put China and Russia first among them, but the United States is now certainly doing its bit. And if we continue down this path, the future of our species is going to be unnecessarily painful. So I'm under no illusions that publishing this book is going to lead to some kind of eruption of people recognizing their mutual responsibilities in our world that's dividing.
But if we don't articulate what we're trying to achieve in a set of principles, it's going to be even harder.
00:30:49 Andrew Keen: Yeah. And let's end with how AI can help all this. I mean, nobody's gonna disagree with you. There was an interesting piece this week in The Economist about how AI's leading men could become as powerful as Ford or Rockefeller. The five leading men, of course — Zuckerberg, Amodei, Musk, Altman, and Demis Hassabis, whom you respect. Hassabis maybe more than the others — people are a bit warier of Sam Altman; they don't trust him. But leaving aside personalities, these five are all gonna be very powerful in terms of shaping the future. You've worked closely with Sam's algorithm, ChatGPT-5, for this book. What should they be collectively doing? Dario Amodei has been quite outspoken in taking on the government when it comes to how AI is used or abused in warfare. What would you like to see from these famous five in terms of redirecting the world? As you've said, your AI 10 Commandments are all derived from our universal knowledge as a species, which is being garnered from ChatGPT for better or worse. What can these people actually do to redirect the world — back to Kant, perhaps, or at least back to the UN Charter?
00:32:20 Jamie Metzl: Well, let me say this. If our strategy for redirecting the world in a more positive direction is that we're just hoping that a bunch of tech CEOs are going to do it on their own, it's preposterous. It's bound to fail. It just will not work. What we need to do is recognize that the reason we've come together in our civil societies is that there are some problems, some challenges, that need to be addressed collectively — whether that's on a national level, and also increasingly on a global level. That's why I thought this debate about OpenAI and Anthropic and the Defense Department was just silly. Because if our strategy is that we hope that the AI companies are as moral as possible, and fingers crossed — that's a terrible strategy. It will fail, because we have a lot of different companies, and some of them will seek opportunity. So what we need to do is have governance and accountability, including regulation. We can't let these companies, including Anthropic, any of them, run wild. That would be preposterous. It's like, in the beginning of the nuclear age, saying, "alright, just do whatever you want, and good luck." I mean, these are civilizational transformations, and they require governance and accountability frameworks. This idea that let's just let them go because we're in a race with China, and balls to the wall, just keep your foot down on the gas pedal and let's see what happens — that's just a terrible way to do things, and ultimately it'll be self-defeating. So I hope that everybody is as ethical as Dario and Demis, but even then, that can't be our strategy.
00:34:08 Andrew Keen: Right. And you know they're not, because some people — even Keith Teare — believe that Dario is actually very self-interested, that his morality is self-serving. It's maybe a subject for another show. So given the reality of these people — I mean, you said we need more government. That's not gonna happen. Let's end with Jamie Metzl's eleventh commandment when it comes to enabling the 10 commandments. What would your eleventh commandment be? Not some moral slop, but something concrete that we can begin to get to.
00:34:45 Jamie Metzl: World-changing technologies must be governed responsibly, including national regulations. We are at this transformative moment. And this idea, this laissez-faire thing — let the companies do what they want — that's the point. I mean, I guess I'm your third guest in a row who's been positive about AI, but that doesn't mean that the positive story is inevitable. Right now, these technologies are morally agnostic. They could be the best things ever and the worst things ever, and the determinant is us. It's madness to think that there's an inherent moral direction to these technologies. And I think that we all have a role. Certainly, those of us living in democratic societies, we need to demand that our democracies function. We need to demand that our governments exercise their leadership.
00:35:45 Andrew Keen: Your eleventh commandment is a political commandment.
00:35:48 Jamie Metzl: It is. Well, everything is political. I mean, when humans come together in societies, there are politics. I certainly believe in Enlightenment thinking about the relationship between individuals in a society. And I think that the way we're going to realize these better futures is by norms followed by structures. That's why we're even having this conversation — because the United States has been very successful in harnessing the creativity of the people who are here. I've also lived in Cambodia. I spent a lot of time in Afghanistan. The people there are no less talented. They haven't had that opportunity. So we need to make sure — but this is all an ecosystem, and we're all part of it. And everybody, I think, has to play a role in articulating the kind of world that we'd like to see, and, in whatever way, trying to build a path from here to there.
00:36:44 Andrew Keen: Well, there you have it. We're living in an ecosystem, and Jamie Metzl has co-authored a new book, The AI Ten Commandments, with a certain character called GPT-5. Jamie, congratulations on the new book. It's out tomorrow, and I'll have to feed all this into Claude to see what he thinks. Thank you so much.
00:37:11 Jamie Metzl: Thank you, Andrew. I've enjoyed it.