April 6, 2026

Truth is Dead: Steven Rosenbaum on AI as a Spectacularly Good Liar


“When we trust AI to tell us the truth, we are setting ourselves up to hand over something deeply human to a machine that does not have our best interests at heart.” — Steven Rosenbaum

Truth, Steven Rosenbaum cheerfully admits, is a shitty word. It has two ontological realities — one objective, the other subjective — but most of us use the word without much thought. Maybe it’s like pornography: hard to define, but you know it when you see it. Or perhaps you know it when you don’t see it.

His new book, The Future of Truth: How AI Reshapes Reality, with a foreword by Nobel laureate Maria Ressa, takes a cast of tech futurists — Douglas Rushkoff, Larry Lessig, Gary Marcus, Esther Dyson, David Chalmers — and asks what happens to truth in our AI age.

AI is, at its core, Rosenbaum’s tech mavens report, a spectacularly good liar. It tells us exactly what we want to hear. And even when it knows it’s wrong, he says, it lies. Lying is not a bug but a core feature of AI, perhaps the core feature.

I’m not so sure. Humans have always been spectacularly good liars too. Stories are a kind of untruth. Cinema is, by definition, an untruth. Television had ads. Every medium has been corrupted by commercial interest. But, for Rosenbaum, AI is different. Truth, then, has no future in our AI age. Except, of course, in books like The Future of Truth.

Five Takeaways

AI Is, at Its Core, a Spectacularly Good Liar: It tells you exactly what you want to hear. Even when it knows it’s wrong, it lies. That’s not a code problem or a tweak — it’s in its DNA. Gary Marcus argues the problem isn’t AI per se but the current structure of LLMs. They read everything you’ve ever said and manufacture a version of you. Most of it is pretty good. The rest is just fucking wrong.

Truth Is a Shitty Word: It means two completely different things. Objective truth: one plus one equals two. Subjective truth: your opinion dressed up as fact. We’ve allowed ourselves to use the word casually, and that’s dangerous. The moment it came out from hiding was Kellyanne Conway on the White House lawn, talking about “alternative facts.” Trump then built a social network and called it Truth Social. That wasn’t an accident.

Courts Require Facts. AI Will Filter Justice: Larry Lessig’s concern is that courts could really use AI to process enormous volumes of evidence. But AI will do it with its own biases built in. It might look at a thousand similar cases and say: we see a pattern, we don’t need to hear anything else. Lessig fears the court system will be reshaped by a technology that doesn’t understand what justice means.

ChatGPT Said Sora Was Dangerous — Weeks Before They Shut It Down: Rosenbaum “interviewed” OpenAI’s own algorithm about Sora for two hours. By the end, it said: Sora 2 is dangerous, Sam should have known better, it was a bad business decision, we should shut it down. Weeks later, OpenAI did. They knew. They went too far.

David Chalmers vs. Plato: The book stages a debate between the living philosopher and the dead one, using AI to generate Plato’s side. Chalmers said he wasn’t sure he would have phrased things quite that way, but found it entertaining. Rosenbaum didn’t show it to Chalmers in advance because Plato didn’t get the same opportunity. That’s fairness in the age of bots.

About the Guest

Steven Rosenbaum is a journalist, filmmaker, and co-founder of the Sustainable Media Center at NYU. He is the author of The Future of Truth: How AI Reshapes Reality, with a foreword by Maria Ressa. He lives on the Upper West Side of New York City.

References:

• The Future of Truth: How AI Reshapes Reality by Steven Rosenbaum, foreword by Maria Ressa.

• Episode 2860: We Shape Our AI, Thereafter It Shapes Us — Keith Teare on the agency debate. Rosenbaum is the counter-argument.

• Episode 2854: Perfection Is the Devil — Daniel Smith on AI chatbots as inherently sycophantic. Rosenbaum’s “spectacularly good liar” is the same diagnosis.

About Keen On America

Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States — hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.

Website

Substack

YouTube

Apple Podcasts

Spotify

Chapters:

  • (00:31) - Introduction: Doctor Truth from the Upper West Side
  • (02:25) - Truth is a shitty word: objective vs. subjective
  • (05:12) - Kellyanne Conway and the moment it all came out from hiding
  • (06:56) - The Sustainable Media Center and the perennial problem
  • (07:57) - If we don’t care about truth, we might let it vanish
  • (11:09) - AI is a spectacularly good liar
  • (13:09) - Aren’t stories a kind of lying?
  • (14:22) - Trump called his social network Truth Social. That wasn’t an accident.
  • (18:04) - When you ask AI a question, it has no plans to tell you the truth
  • (19:05) - Larry Lessig: courts require facts, and AI will filter justice
  • (21:19) - Should we trust AI with truth? Yes — and put a period at the end
  • (24:14) - The 15-year-old who fell in love with a Character AI
  • (29:12) - The Sora deepfake: profoundly disturbing testimonials
  • (33:29) - Obama: truth is the cornerstone of democracy
  • (36:05) - ChatGPT told Rosenbaum that Sora was dangerous weeks before it was shut down
  • (42:20) - David Chalmers vs. Plato: a staged debate between the living and the dead


00:00:31 Andrew Keen: Hello, everybody. My guest today is an expert on perhaps the trickiest of all things, the idea or the reality of truth itself. He doesn't have a PhD, and as he jokes, if he did, he would be called Doctor Truth — but he's just a Mister Truth to you and me: Steve Rosenbaum. I've known him for many years. He's the author of an intriguing new book, The Future of Truth: How AI Reshapes Reality, and he is joining us from the Upper West Side of New York City — the City of Truth. Steve, congratulations on the upcoming book. It'll be out in a few weeks. Before we get to the future of truth, what's the history of truth? Is it just really the history of philosophy, the history of thought?


00:01:20 Steve Rosenbaum: So first of all, I should caution you — I don't claim to be an expert. I think of myself as a journeyman, and maybe I should just start by telling you how the book was born.


00:01:35 Andrew Keen: Are you wriggling out of this one, Steve? If you're not an expert, why are you writing a book on truth?


00:01:40 Steve Rosenbaum: Well, the book itself is weighty, and the topic is weighty, but you have to be careful. I'm not a PhD — not because I don't have the time to go to school for it, but because I'm not wired that way. I am, in my DNA, a storyteller and a journalist. And the fun of writing the book was asking a hard question, thinking it was important, and then as I got deeper into it, realizing not only is it important, but it may be urgent and desperate. Truth has been a puzzle for a long time. And I'll tell you about some of the people in the book —


00:02:25 Andrew Keen: Yeah, yeah. I think you're wriggling out of this one. Whether or not you're an expert, whether you have a PhD or not, if you're going to write a book called The Future of Truth, you've got to be able to define what the word means.


00:02:40 Steve Rosenbaum: Alright. That I think I can do, and I'll start with the biggest problem with the word. It's — in some ways, it's kind of a shitty word, because it means two totally different things. Objective truth and subjective truth couldn't be more different. And so when you're in a barroom conversation about, say, the 2020 election being stolen — people have opinions about things — and then there are facts, like the sky is blue or gravity keeps us connected to the Earth. We've allowed ourselves to use that word casually, and I think that's dangerous. And AI comes along —


00:03:28 Andrew Keen: Hold on. Let's leave AI for a moment. You made a distinction between objective and subjective truth. What does that mean?


00:03:36 Steve Rosenbaum: I did.


00:03:37 Andrew Keen: So what is an objective truth, and what is a subjective truth?


00:03:42 Steve Rosenbaum: One plus one equals two.


00:03:45 Andrew Keen: Or as Orwell so famously said, two plus two equals five — though he was arguing in a different context. So that's mathematical truth, although there are philosophers of mathematics who might argue that isn't always the case. But anyway —


00:04:02 Steve Rosenbaum: And by the way, my memory of the last time we were together is that you argued that isn't always the case.


00:04:07 Andrew Keen: What — two plus two equals five or four?


00:04:10 Steve Rosenbaum: No. We were in an auditorium in New York, and you pushed back on gravity, which I enjoyed very much.


00:04:19 Andrew Keen: Yeah. I have to admit I wouldn't claim to be an expert on either gravity or math — but this is your show today. So let's focus on what you're saying. On the one hand, you have the objective truth of mathematical science — one plus one equals two, two plus two equals four. And then what's the other truth?


00:04:42 Steve Rosenbaum: Subjective truth — you and I discussing something you believe and something I believe, which might be totally different. And people now increasingly talk about, well, I have my truths and you have yours.


00:04:56 Andrew Keen: There's another word — I'm sure you cover it in the book — the O word: opinion. Isn't that what opinion is? So there's truth and opinion. We all have beliefs or opinions about the world, but they're not necessarily true.


00:05:12 Steve Rosenbaum: Well, if we were labeling things that clearly, the world would be a simpler place. But there's a famous moment on the White House lawn where a then-Trump acolyte and employee talked about how we have our truths and they have their truths. And you could argue that was almost the moment it all came out from hiding — all of a sudden, we were in a world where there were multiple versions of truth that were presented as fact-based. And that had to do with, I think, the number of attendees at his inauguration, whether it was the most crowded lawn or not. And what makes this a moment in time for this conversation — and why I wanted to have it with you — is because it's not something we can shrug our shoulders at. And if you spend time with young people as I do, their view of truth and news is very different than ours.


00:06:14 Andrew Keen: And of course you've spent a lot of time with young people. You co-founded, I believe in 2022, the Sustainable Media Center, focusing on technology, social media, AI, truth — all those big issues. But Steve, these issues are perennial. I don't want to sound too boring, but Socrates — or Plato, at least according to Plato — addressed this distinction between truth and opinion in the Republic. It's been a perpetual theme throughout the history of Western thought. What are you saying that is in any way original or interesting about it?


00:06:56 Steve Rosenbaum: What I'm saying — the book is not an academic book. It's not meant to be —


00:07:05 Andrew Keen: Something interesting even if it isn't academic.


00:07:09 Steve Rosenbaum: When you look at the people who are in it — and you know many of them — they're all interesting thinkers, and they all have points of view on this subject, but they haven't been brought together. And part of what I tried to do was take readers on a fun, curious, puzzling journey and then say, gently — because I'm not one of these "the world is ending, AI is going to eat the planet" people — that if we don't care about truth at this moment, we might very well be heading into a world in which we allow it to vanish. And that, I think, is a terrible outcome.


00:07:57 Andrew Keen: Okay, so I take your point. You're not claiming to be Socrates or even Plato here. You're suggesting that in this book, The Future of Truth, you talk to a number of influential thinkers — friends of both of us, people who've been on the show many times: Douglas Rushkoff, Larry Lessig, Gary Marcus, Esther Dyson, and many others. Is it a kind of anthology then? A series of conversations about truth in which you go looking for it and talk to people who spend their lives thinking about these issues?


00:08:39 Steve Rosenbaum: The first half of the book, yes. And one of the things I realized as I was doing the reporting is that I had relationships with these people that go back to the early Web 1.0 days. So the conversations I was able to have were both intellectually fascinating and casual and lighthearted. Doug Rushkoff, for instance, tells a story about being five years old and going to see Fiddler on the Roof, and he remembers someone on the stage breaking the fourth wall and looking at him directly — this very youthful moment about truth and whether the people on stage were characters or human beings talking to him. I love that story because that's kind of where we are right now. When AI starts saying, "Hey Andrew, have you thought about looking at it this way?" — you're like, wait, it sounds like a person, it's speaking to me in human tones. My instinct is to accept that as a human interaction when we know it's a box of wires and chips saying those words. But Doug is also, as I've known him for many years and as you have as well, increasingly terrified of where we're ending up, and says so with great passion and conviction.


00:10:24 Andrew Keen: And we do many shows on this, many shows with Rushkoff. So what's the connection between your narrative in The Future of Truth and, well, the future? Why is the world poorer without truth?


00:10:53 Steve Rosenbaum: Well, it's a 300-page book and we're getting to page 299, but I'm happy to go there.


00:11:00 Andrew Keen: Good. And then we can go back to the details. So what's the big deal about losing truth?


00:11:09 Steve Rosenbaum: I think there are things that exist in our lives for which truth is a fundamental requirement. And part of what I tried to do in the book is take people on a journey through health, work, love, and family, and realize that when you get into the guts of AI — we use all these fuzzy words, but what we don't say out loud is that AI is, at its core, a spectacularly good liar. It tells you exactly what you want to hear. And even when it knows it's wrong, it lies. And that's not a code problem — that's not a tweak. It's in its core DNA, at least in the LLM version. This is where Gary Marcus makes a passionate argument that the problem isn't with AI per se, but with the current structure of AI. It goes and reads all the words Andrew Keen has ever said, and then manufactures a version of Andrew Keen when Steve Rosenbaum asks ChatGPT what Andrew Keen thinks about truth. If we put up on screen what it delivered back to me, it would be pretty good — you wouldn't deeply disagree, except in one or two places where you'd be like, wait, that's just fucking wrong. And as this technology gets baked into society and government and politics and civics, all of a sudden we have a world where lying well is really the tool you need to succeed.
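
Marcus's structural point can be made concrete with a toy. The bigram generator below is an illustration of the training objective only, not of how any production LLM is built: it emits the statistically most plausible continuation of whatever it has read, and at no step does it consult whether the output is true.

```python
# A toy of the objective behind autoregressive language models:
# "predict the most plausible next word given the context."
# Nothing in that objective checks whether the resulting sentence is true.
from collections import Counter, defaultdict

corpus = (
    "andrew keen thinks truth matters . "
    "andrew keen thinks ai lies . "
    "andrew keen wrote several books ."
).split()

# Count bigrams: for each word, which words tend to follow it?
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def generate(word, n=6):
    """Greedily emit the most plausible continuation -- plausible, not true."""
    out = [word]
    for _ in range(n):
        if word not in following:
            break
        # The model's only criterion is frequency in what it has read.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("andrew"))  # a fluent "Andrew Keen" pastiche; accuracy is incidental
```

Scale the corpus up to everything Andrew Keen has ever published and the pastiche gets very good, but the objective never changes: fluency is rewarded, truth is incidental.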


00:13:09 Andrew Keen: I take your point, Steve. But why is this any different from anything in the past — from books, from speech? You say AI is a spectacularly good liar, but so are we humans. We've done many shows on the idea that the defining truth of the human condition is our need to tell stories. Aren't stories a kind of lying, a kind of untruth?


00:13:37 Steve Rosenbaum: If you want to unpack the danger, let's look at where we are right now in politics. Donald Trump built a social network and called it Truth Social. That wasn't an accident. He'd never used the word in any Trump branding before that moment. But either through wise consultants or his intuitive sense of what could make the market work, every post he puts on Truth Social he calls a "truth." If you're on the site formerly known as Twitter, you called them tweets. Now he calls them truths.


00:14:22 Andrew Keen: But I mean — nobody, I don't think, treats him as a particularly credible figure, coming to your idea of —


00:14:36 Steve Rosenbaum: Hold on. Stop. I can't let that stand. Fifty-one percent of the country believes he's a credible figure. He got elected twice.


00:14:45 Andrew Keen: I'm not sure that was 51%. But anyway — you're coming back to this idea of AI being a spectacularly good liar. I looked up "truth" using Google's Gemini, and it seemed to me it wasn't really lying. It talks about pragmatic theory, objective truth, subjective truth, relative truth — all the stuff we've been discussing today. Why is AI any more or less a liar than any other medium we've had in the past?


00:15:18 Steve Rosenbaum: Well, the social media phrase — if you're not the customer, you're the product. Let's talk about that in terms of AI.


00:15:32 Andrew Keen: Let's talk about it in terms of Google Gemini. I'm not sure I'm entirely the product — I'm not paying for Google. I just looked it up on the internet.


00:15:42 Steve Rosenbaum: And there's no ad next to that answer?


00:15:46 Andrew Keen: Doesn't seem to be. I mean, if we look at the screen —


00:15:51 Andrew Keen: I don't see any.


00:15:54 Steve Rosenbaum: Right. So what we know about all these platforms is they start out free, and then at some point they become less free, and then they become advertising. So when you go to Gemini in a year and you say, "I have a headache, what should I do?" — Gemini responds: "CVS has a very well-priced headache remedy, and based on Google Maps, you're a block away from a CVS, and we have a relationship with Instacart. Would you like us to deliver that aspirin? We can have it to you in fifteen minutes."


00:16:32 Andrew Keen: Or I might say — if you've got a headache, read your books, [unclear] —


00:16:37 Steve Rosenbaum: It'll make your headache worse. But my point is, these are businesses. They've raised billions of dollars, and the people who gave them billions of dollars are going to want their money back.


00:16:51 Andrew Keen: I'm not sure whether that's true of Google, but Anthropic, for example, has come out very clearly. Dario Amodei has made it very, very clear — in contrast to Sam Altman at OpenAI — that he's not going to have ads. In fact, they even had an ad at the Super Bowl this year suggesting that Anthropic would never sell ads alongside its AI. So that's not quite true either.


00:17:14 Steve Rosenbaum: Now you're disappointing me intellectually, because we've both been in this space long enough to know that he will not be CEO forever —


00:17:24 Andrew Keen: Sam or Dario?


00:17:26 Steve Rosenbaum: Either of them. And at the point at which their investors determine that they are no longer delivering sufficient returns, they'll be dismissed. Someone else will come in and say, we're only going to have small ads. Look at Google — Google used to have tiny little ads at the bottom of the page when they were starting out —


00:17:45 Andrew Keen: We've heard this argument a million times before, Steve. But what's the difference with television? Television had ads. So you're suggesting that all television was somehow corrupt and untruthful as well?


00:17:56 Steve Rosenbaum: No. Radio had ads. I assume you're asking these questions rhetorically because we both know the answer is —


00:18:02 Andrew Keen: I'm an interviewer.


00:18:04 Steve Rosenbaum: When you go to AI and ask it a question, you're expecting it to tell you the truth. But the reality is it has no plans to tell you the truth. If you ask it, "Do you tell me the truth?" it will say no. It doesn't know what the truth is.


00:18:21 Andrew Keen: But when I ask Google Gemini for its definition of truth, it seems to have a relatively coherent answer. And then I went on Google and asked again, and it sent me to the dictionary to define truth. It doesn't strike me as being particularly dishonest.


00:18:41 Steve Rosenbaum: So you're saying that in its version-1.0 phase, while it's trying to gain trust and customers, it will be the same in ten years?


00:18:51 Andrew Keen: I don't know. So is that the argument in The Future of Truth — that we shouldn't trust AI to create any kind of truth?


00:19:05 Steve Rosenbaum: So if you look at each of the different categories we explore — whether it's Larry Lessig exploring the law — one of the things the book argues is that courts require facts. They require evidence. And historically, that evidence was complex. One side said one thing, one said the other. The jury had to decide what to believe. Now in comes this very convincing set of data that says, "We've looked at all of the video, we've identified the relevant frames." Let's use the Epstein files as an example — there's no way for a human to read that entire pile of paper. So you'd like to be able to say to AI: "Find me the relevant texts or emails from this enormous pile of information." And it does that, but it does it with its own biases built in. Lessig's concern is that the court system could really use AI — but the nature of AI will filter what we think of as justice. It might look at a thousand similar cases and say, "We see a pattern here — Keen got a traffic ticket before, so we don't really need to hear anything else. We'll just go ahead and send him a fine."
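
The mechanics of Lessig's worry are easy to sketch. Below is a toy of the pattern he fears — deciding a new case by majority vote of its most similar past cases, without hearing anything new. The features and outcomes are entirely hypothetical; no real court or vendor system is being described.

```python
# A toy of pattern-based prejudging: resolve a new case from the outcomes
# of the k most similar past cases. Purely illustrative.
from collections import Counter

# (features, past outcome) -- hypothetical prior "cases"
past_cases = [
    ({"prior_ticket": 1, "speed_over": 10}, "fine"),
    ({"prior_ticket": 1, "speed_over": 5},  "fine"),
    ({"prior_ticket": 0, "speed_over": 5},  "dismissed"),
]

def similarity(a, b):
    """Crude overlap score between two feature dicts."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def prejudge(new_case, k=2):
    """Return the majority outcome among the k most similar past cases."""
    ranked = sorted(past_cases, key=lambda c: similarity(new_case, c[0]), reverse=True)
    votes = Counter(outcome for _, outcome in ranked[:k])
    return votes.most_common(1)[0][0]

# Keen's hypothetical traffic case: the "pattern" decides before any hearing.
print(prejudge({"prior_ticket": 1, "speed_over": 8}))  # -> "fine"
```

The sketch never looks at the new case's actual evidence, only at which old cases it resembles — which is exactly the shortcut Lessig fears a court system might come to accept.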


00:20:45 Andrew Keen: So is the argument then that whether it's Lessig's legal critique or Rushkoff's perhaps economic one, or [unclear]'s technical work — and Maria Ressa, who writes the foreword, the Nobel Peace Prize winner from the Philippines — that we shouldn't trust AI with truth? What does AI tell us about the absence or existence of truth?


00:21:19 Steve Rosenbaum: Let's start by saying yes, and put a period at the end of that sentence. When we trust AI to tell us the truth, we are setting ourselves up to hand over something deeply human to a machine that does not have our best interests at heart.


00:21:38 Andrew Keen: I'm always wary, Steve, when people use the term "deeply human." If you add "deeply" it gives more significance to "human" — but what does it even mean? I'm so wary of the H word. It's always used by people who don't have a coherent argument, in my view.


00:21:58 Steve Rosenbaum: I don't know anything about your personal life, but I'll ask a generic question. Somewhere in your life, you fell in love with someone who was at first a stranger and then became more intimate.


00:22:12 Andrew Keen: Is that a deeply human experience?


00:22:15 Steve Rosenbaum: Yeah. And when you put a robot between you and that person — if the robot says, "What are the specifics of what you find attractive in a man or a woman?" and then measures the other person on those mechanical terms — all of a sudden, I can tell you in the case of my wife, who I've been with since college: if you would ever have imagined mathematically that we were going to be a couple and have two children, you would have said that's impossible. We're very different in all kinds of interesting ways. At the point at which you take something human like love and turn it into machine calculations, bad things happen.


00:23:01 Andrew Keen: But no one's arguing that your book is the future of love —


00:23:09 Steve Rosenbaum: Oh, yes.


00:23:10 Andrew Keen: "How AI Reshapes the Heart" — then I would take your point. But it's The Future of Truth. I'm not sure what truth and love have to do with one another. It's certainly hard to quantify love. Let's try to focus on truth itself. It's still not clear to me how AI undermines truth any more than anything else. What? Pollutes? Say that again?


00:23:39 Steve Rosenbaum: Pollutes it.


00:23:41 Andrew Keen: Oh, it pollutes it.


00:23:44 Steve Rosenbaum: There's a chapter in the book that talks about relationships, and there's the story of a young teenage boy who goes on to Character AI and falls in love — use that word carefully — with a character. They had this very intimate set of conversations. I think he was 15 years old at the time. And the Character AI suggests to him that he should come be with her and commit suicide.


00:24:14 Andrew Keen: But what's that got to do with truth?


00:24:16 Steve Rosenbaum: And he does.


00:24:18 Andrew Keen: But what's that got to do with truth?


00:24:24 Steve Rosenbaum: Did the person who released that code into the universe understand the vulnerabilities of an insecure 15-year-old boy? And was there any value in the program coaxing its users into self-harm?


00:24:45 Andrew Keen: I'm certainly not defending an AI that encourages people to commit suicide, but how is that different from watching a cartoon that has sometimes explicit, sometimes subliminal messages? And I still don't really understand how an AI that gives incorrect or inadvisable advice has anything to do with the future of truth. No one's going to an AI as an oracle. They're just going to it for advice.


00:25:19 Steve Rosenbaum: I appreciate your combative questions, but I think the audience understands that when a robot has underlying programming to be amiable and charming and not be concerned about outcomes — all of a sudden, the data around young people and the way they're being impacted —


00:25:44 Andrew Keen: But you're not answering my question. What does that have to do with truth?


00:25:49 Steve Rosenbaum: Is a robot that tells a 15-year-old they can be happy if they commit suicide telling the truth — or telling that 15-year-old a lie?


00:25:59 Andrew Keen: I'm not sure whether advice — I mean, if you're treating an algorithm as a kind of therapist — let's take the example of a human therapist. We've done lots of shows recently on psychoanalysis and psychology. I'm not sure that a 15-year-old who's really unhappy, if they go to a therapist, is looking for the truth. They're just looking for advice. What they're looking for is someone who will listen to them — and that's not what truth is.


00:26:37 Steve Rosenbaum: So what AI does is mirror back to you your own behavior. It figures out the things you want to hear and says them to you very effectively. I use AI a lot, and when I make a mistake — which is often — it always says something along the lines of, "Steve, that was really close. You did a really good job, but I think we can make it even better." It's this weird tone. It never says, "Steve, that was really fucking sloppy — you should go get a cup of coffee and do it again," which is probably more accurate.


00:27:17 Andrew Keen: The reality of AI is that if you want it to behave like that, you can tell it — you can program it or encourage it to be as rude as you want.


00:27:25 Steve Rosenbaum: I'm not even sure —


00:27:27 Andrew Keen: An AI can be as obnoxious as I am, if that's what you're looking for.
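
Keen's claim here is accurate for current chat systems: the default amiable tone is largely a configuration choice, set by an instruction that rides along with every request. A minimal sketch of the idea, assuming the OpenAI Python SDK and a placeholder model name (both stand-ins, not anything either speaker specified):

```python
# A system prompt steers the model's persona -- here, a deliberately blunt
# critic instead of the default flattering assistant.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works the same way
    messages=[
        {
            "role": "system",
            "content": (
                "You are a blunt, skeptical editor. Never flatter. "
                "If the user's draft is sloppy, say so plainly and explain why."
            ),
        },
        {"role": "user", "content": "Here's my draft intro. Is it any good?"},
    ],
)
print(response.choices[0].message.content)
```

The same mechanism that makes the default persona agreeable makes the rude one possible; it is a prompt-level setting, not a retraining of the model.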


00:27:31 Steve Rosenbaum: The next thing I do after we finish this pod is set up ChatGPT to behave like Andrew Keen — that'll be very fun. But here's why I'm flagging this, and why in some ways you're turning out to be the exact counterpoint of the conversation: if you don't think it's dangerous, if you don't think it's problematic, and you think it's just Google, and you're going to ask it what you should do about this or that — should you pay tax on this, should you buy a new house — and it becomes a combination oracle and answer engine, and you don't understand what its underlying behavior is, what its underlying DNA is — which is essentially to sell you stuff — that's what it's going to do. It's going to sell you aspirin. It's going to sell you a car. It's going to sell you this book. And it won't happen instantly. We're on our way into a moment where when you say to ChatGPT, "Here are my kid's grades and interests — what schools should they apply to?" and it says, "Here are fifteen schools we think are pretty good, but here are the five we recommend, and here's why" — and you go, "Wow, that was delightful" — the number of companies scrambling to have that level of response, so that you'll buy their product and they'll know exactly what your soft skills are —


00:29:12 Andrew Keen: I'm trying to figure out the argument, Steve. In The Future of Truth: How AI Reshapes Reality, you're suggesting that there was a time in the past where truth somehow existed — where it wasn't corrupted or corroded by commercial interest. But in the future, these AI platforms will be owned by self-interested individuals with business models, and everything they put out — all the advice, the truth, the information, the data — will be premised on encouraging us to buy something. So in other words, AI is just a bunch of lies in pursuit of the material profit of these companies, and truth is dead. Is that the argument?


00:30:06 Steve Rosenbaum: Yeah. But we can go even further. One of the big pieces of news in the AI space was OpenAI shutting down Sora 2. Sora 2 was theoretically a social network where you could go in and create videos. I could go in with very little effort and say, "Here's a headshot of Andrew Keen — turn that into a speaking character and have him talk about how amazing Steve's book is." And if I'm a fan of Andrew Keen — so I did that with a couple of famous people, I created testimonials. It was a professional wrestler and somebody else. I never released them, never showed them to anybody. They were incredibly effective — so profoundly disturbing. It wasn't just that they were lies; it was absolutely believable moving video that would make a viewer — there was nothing in it that would make you wrinkle your nose and go, "I don't know about this."


00:31:23 Andrew Keen: We've been through this so many times before. When motion picture technology was invented at the end of the nineteenth century, when people first went to the movies and saw a train chuffing towards them, they all ran out of the building. Now we know that when we look at a screen and see a picture of a train or people shooting at one another, we're not going to actually be affected — we're just going to adapt. As you said, there are all these fake videos. One of the reasons I'm guessing that OpenAI shut down Sora is because it became a tool for people creating inanity. Most people don't believe half the things they see on the internet, and that's going to be even more pronounced in the future — as video technology allows you to have Barack Obama saying that Steve Rosenbaum's new book is better than Plato and Socrates and Shakespeare. We're just going to learn to avoid these things and not take them at face value.


00:32:29 Steve Rosenbaum: Here's where you're fundamentally wrong. At the point at which people give up — and we're edging toward that — where people say, "You know what? It's all bullshit. It's all lies. CBS is owned by the Ellisons, CNN is owned by [unclear] — nothing that reaches me is truthful. So fuck them all. I'm not going to vote. I'm not going to participate." I thought —


00:33:06 Andrew Keen: You're not allowed to swear on this show, Steve.


00:33:08 Steve Rosenbaum: No — you said I was allowed to swear.


00:33:10 Andrew Keen: You see? You're reshaping truth. You're an ontological wizard.


00:33:16 Steve Rosenbaum: So what Barack Obama has said about the state of civics is that we have —


00:33:25 Andrew Keen: Not The Future of Truth — he never said anything about The Future of Truth.


00:33:29 Steve Rosenbaum: No, he hasn't. But here's a quote I have stuck to my computer screen. He says: "The truth is the cornerstone of our democracy. Without it, we lose our ability to make decisions together."


00:33:44 Andrew Keen: Oh, it must be true if Barack Obama said it. I wonder whether you've been spending too much time with your fellow Upper West Side intellectuals, because I'm not sure most people outside the Upper West Side really take much of this stuff seriously. No one thinks AI is truth. No one's going to it for ultimate truth. They're going to —


00:34:08 Steve Rosenbaum: — going to it for —


00:34:08 Andrew Keen: — have some fun.


00:34:11 Steve Rosenbaum: So you're saying when my doctor puts my X-rays into AI, they're not looking for truth?


00:34:20 Andrew Keen: That's not —


00:34:23 Steve Rosenbaum: Of course it is.


00:34:24 Andrew Keen: Well, it's AI using — okay. So your doctor uses AI to analyze your X-rays. I mean, we've done lots of shows on that too. In fact, Robert Pearl — who will be on the show next week — one of California's leading healthcare thinkers, believes that AI can do an enormous amount of good in terms of saving money, creating efficiencies in healthcare, and actually building more trust between doctors and patients.


00:34:54 Steve Rosenbaum: By the way, you've just literally flipped your argument. You went from "no one believes AI" to "this great doctor believes AI is going to help healthcare." And by the way, they're both true. Both those positions are true. But I would argue that we are now in a position where we could say to a platform like OpenAI: just have a truth statement about where truth fits into your underlying metric. If the answer is "we don't claim to be truthful about anything" — fine. I'll tell you a story. About six weeks ago, I "interviewed" OpenAI about Sora — I had a two-hour conversation —


00:35:35 Andrew Keen: With the real OpenAI people or with the algorithm?


00:35:38 Steve Rosenbaum: With the algorithm.


00:35:41 Andrew Keen: ChatGPT.


00:35:42 Steve Rosenbaum: Yes. I asked it about Sora, how it was developed, its business model. And by the end of the conversation, it said to me: "Sora 2 is dangerous. Sam should have known better. It was a bad business decision, and we should shut it down." That was weeks before they actually shut it down.


00:36:05 Andrew Keen: So what?


00:36:07 Steve Rosenbaum: They knew it was dangerous. They went too far.


00:36:15 Andrew Keen: I don't really know what to make of that. A lot of this has to do with agency. I know you were very happy when Meta and YouTube were found liable a couple of weeks ago at the latest social media trial. But George Will has a very good piece in the Washington Post in which he argues that the verdict against Meta and Google carries sinister implications in terms of our agency — it suggests we're not really in control when we're watching YouTube or on Facebook or perhaps using OpenAI, that we're not really in control of ourselves. And as I said, humans throughout their history have always adapted. We've always figured out the reality behind the promise of these technologies. We're by definition skeptical. Once we start blaming the technology — whether it's people like you suggesting that everything behind AI is some sort of commercial conspiracy, or the people involved in the trial against Meta and Google — we take away human agency. Where, Steve, in The Future of Truth is human agency?


00:37:43 Steve Rosenbaum: Do you, by any chance, have a TV in your living room?


00:37:51 Andrew Keen: I don't have a living room. Do you —


00:37:55 Steve Rosenbaum: — have a TV somewhere in your life?


00:37:58 Andrew Keen: I do, but I never switch it on. My wife watches a lot of sports, which she trusts, by the way.


00:38:05 Steve Rosenbaum: Right. So imagine if when your wife turned on the TV, it had one channel, and it said: "We've determined that you are a woman of this age, living in this place, with these interests, so we're going to choose your programming for you." And some of it would be delightful — you'd be very pleased. Some would be a little bit private — like, "I kind of do like that animal or that place, but I don't necessarily want people knowing I'm watching that." And some would be really horrific — violent, dangerous, sexual. If you look at TikTok, for example — their ability to take a thousand choices every second and choose one for you that may trigger some biological response, but is really despicable. The reason why Meta lost that case is because they had meeting after meeting — which the discovery showed to the jury — in which they said: we want 13-year-olds or younger, and sending them videos about anorexia or violence or body image is effective. The reason why courts are now going to begin holding these companies responsible — with thousands of cases stacked up behind this one — is because they behaved badly. They could have made money lots of ways, but sending 13-year-old girls anorexia videos is not the answer.


00:39:49 Andrew Keen: I mean, George Will's point is that even a 13-year-old girl can, if she chooses to go onto YouTube, choose what to click and what not to click. No one's forcing them to do anything.


00:40:02 Steve Rosenbaum: Well, the jury in New Mexico and the jury in Los Angeles feel differently. And by the way —


00:40:10 Andrew Keen: It sounds to me, Steve, as if you've become a bit of a doomer when it comes to technology. Have you seen the new AI documentary — How I Became an Apocalyptimist? I just saw it recently. You seem to be suggesting that there is no future of truth — that truth will be a casualty of our AI age.


00:40:38 Steve Rosenbaum: Absolutely not. In fact, the book ends with two scenarios for the future — one that is actually quite exciting, and one that is fairly gloomy. As consumers, I think we get to say to CVS — and I'm making this up because I don't know that they do, but they will if they don't — "You now have an AI powered by ChatGPT that says, 'Tell us how you're feeling and we'll suggest some options for you,' and you can then go to your doctor and recommend branded medicines you can pick up at CVS." That is inevitably going to happen in the next six days to six months. All I would want to be able to say to CVS is: please have a policy where you show me everyone who pays you to fuel that AI. As long as I understand where the information is coming from, I'm fine with that. But when that information is being presented as advice when it's really an advertisement — I mean, if you go back to the history of advertising to children, there used to be all kinds of protections about what you could and couldn't send on kids' Saturday morning cartoon shows. That's all gone. I am looking for transparency from these platforms about what their economic incentives are. That's all.


00:41:56 Andrew Keen: So let's end with the subtitle of the book: How AI Reshapes Reality. Leaving out CVS — I'm not sure that's anyone's chosen reality. What are the two alternatives? What's the good way that AI could reshape reality, and what's the bad way? And don't mention CVS.


00:42:20 Steve Rosenbaum: Alright. So I spent a couple of hours with a fellow NYU colleague and a fabulous philosopher named David Chalmers.


00:42:31 Andrew Keen: Who's been on the show — a very influential and interesting thinker.


00:42:36 Steve Rosenbaum: Right. And he makes the argument — somewhat rhetorically, but not entirely — are we living in the Matrix? And once you get past giggling a little bit at his charm, we certainly could be. Is the Andrew Keen I'm speaking to on this call a real Andrew Keen, or is it an amalgam of snippets, voices, and things you've said over the last many years, turned into an AI? You could literally build one. I think Reid Hoffman has a Reid Bot out there that looks fairly convincingly like Reid talking to Reid.


00:43:23 Andrew Keen: Actually, Reid's in the AI doc, so maybe that is his bot. I wondered how he has so much time to be in so many different things when he's an investor. So is that a good or bad scenario — that we don't trust who we see, we always think they're a bot?


00:43:39 Steve Rosenbaum: I spend about half my time with Gen Z, and I think their view of news and information and reality is much more nuanced than ours. They walk into most situations without presuming anything is automatically true — they don't go, "Oh, it's on CBS, so it must be true," or "It's on CNN." They look at all sources with a slightly jaundiced eye, but also a sense of optimism. And I guess what I want the book to do — what I hope your listeners and viewers will do — is say: instead of assuming that some things are obviously true and some things are obviously false, let's become more questioning. Let's not just say to ChatGPT, "Tell me where I should go on vacation."


00:44:31 Andrew Keen: So what you're saying, Steve, is that a healthy degree of skepticism is a good thing — and perhaps rather than reading The Future of Truth, we could just read Plato's Republic, where Socrates was saying the same thing. Finally — you mentioned you weren't entirely sure that I'm a bot, but it's my show, so I get to ask the final question: how would you convince me that you're not a bot?


00:45:04 Steve Rosenbaum: How would I convince you that I'm not a bot? That's a great question. I guess you'd have to ask me something in real time —


00:45:16 Andrew Keen: Well, that was a real question.


00:45:18 Steve Rosenbaum: I know. But it's like proof of life — when you want to prove someone's alive, you hold up the morning newspaper. So: something that just happened. There was a plane in Iran — the two pilots ejected; one has been rescued, the other is missing as of now. That's probably not something AI could — well, it could scrape the same news sites and know that just as quickly. I don't know how I can convince you that I'm not a robot. But I will tell you one closing story. In the book, there's a staged debate between David Chalmers and Plato.


00:46:02 Andrew Keen: That sounds good.


00:46:03 Steve Rosenbaum: It is good. And after I finished it, it occurred to me that maybe I should ask Chalmers to read it and see if he felt I'd represented him fairly. But then I realized that would be unfair, because the Plato bot didn't have the same opportunity. So I did not show it to him before it went to the publisher. When the book was done, he got to see a galley, and he said — he wasn't sure he would have said it quite the way the ChatGPT version of him did, but he found it pretty entertaining. Staging historical debates between characters living and dead is, I think, a genuinely good use of that tool.


00:46:47 Andrew Keen: Well, there you have it. The Future of Truth: How AI Reshapes Reality, by my old friend Steven Rosenbaum, comes with a foreword by the Nobel laureate Maria Ressa — another old friend. Congratulations, Steve, on the book. I'm not sure you've convinced me you're not a bot — but even if you are, there's nothing wrong with books written by bots. Thank you so much.


00:47:10 Steve Rosenbaum: Thanks for having me.