We Shape Our AI, Thereafter It Shapes Us: How to Maintain Human Agency in Our Agentic Age
“We shape our tools, and thereafter they shape us.” — Marshall McLuhan (attributed)
Who gets to tell the AI story? A movie, a media company or Marshall McLuhan?
1. The movie: the AI doc, How I Became an Apocaloptimist, which That Was The Week publisher Keith Teare dismissed because it failed to define AI.
2. A media company: OpenAI bought the streaming show TBPN for hundreds of millions of dollars in a move that is akin to Lenin starting Pravda.
3. Marshall McLuhan: Ezra Klein visited Silicon Valley and was reminded of McLuhan’s (supposed) remark that “first we shape our tools, and thereafter they shape us.”
Klein argues that AI agents are empowering tools that give humans a massive boost in productivity. But the effect, he writes, is to constantly reinforce a certain version of ourselves. These agentic tools are undermining our agency, he fears. So AI ultimately gets to tell the AI story.
Agency is becoming simultaneously the political problem and the cure — the thing-in-itself. Writing in the New York Times, Sophie Haigney argues that all the worst people want to be high-agency. Out here, in Silicon Valley, we think that all the worst people want to be low-agency. Perhaps the only thing we all agree on is that nobody wants to be a bot. First we shape our AIs and thereafter they shape us.
Five Takeaways
• The AI Doc Is a Massive Failure: Well made, technically fine, but it never establishes what the problem with AI actually is or what kind of solution it offers. All three leaders — Altman, Amodei, Hassabis — come across as unconvinced there will be a good future. The only opinion you can leave with is a negative one.
• OpenAI Bought a Media Company: TBPN acquired for what may be hundreds of millions. Om Malik compares it to Lenin starting Pravda. You don’t buy a media outlet unless you want to influence the message. Keith thinks it’s about winning the messaging war against Anthropic. Meanwhile, OpenAI’s COO shifts to special projects and Fidji Simo takes medical leave.
• Ezra Klein Saw Something New in San Francisco: He noticed people using AI agents as personal assistants — empowering tools that give humans a massive boost in productivity. His observation: the effect is to constantly reinforce a certain version of yourself. We shape our tools, and thereafter they shape us.
• Agency Is the Defining Political Conversation: The New York Times argues all the worst people want to be high-agency. Keith argues the opposite: agency is the precondition for making history. The Meta verdict treated a depressed girl as a passive victim of media with no decision-making role. That depicts humans as infants. It isn’t true.
• AI Is a Calculating Machine. You Have to Ask It Something: Agency hasn’t been given up. The human shapes the AI completely. Each session starts from scratch. The fear is that the next generation won’t be as clever as AI. But unless we have a strong sense of the self, we will be lost. If we do, we can shape these tools as we want.
About the Guest
Keith Teare is a serial entrepreneur, investor, and publisher of That Was The Week, a weekly newsletter on the tech economy. He is co-founder of SignalRank and a regular Saturday guest on Keen On America.
References:
• That Was The Week — Keith’s editorial: “Who Gets to Tell the AI Story?”
• Episode 2852: Don’t Fight the Last War — last TWTW on the social media trial and the Anthropic trap.
• Episode 2850: Bring the Friction Back — Balkam on social media addiction. The agency debate continues.
About Keen On America
Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States — hosting daily interviews about the history and future of this now venerable Republic. With more than 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.
Chapters:
- (00:31) - Introduction: the AI doc, How I Became an Apocaloptimist
- (01:28) - Keith’s verdict: a massive failure of a movie
- (03:20) - Daniel Roher’s narrative: should I have a kid in an AI world?
- (05:30) - Who gets to tell the AI story?
- (07:55) - Brain surgeons vs. social policy: the trust problem
- (09:37) - OpenAI buys TBPN: Lenin, Pravda, and the propaganda play
- (11:57) - Executive churn at OpenAI: Lightcap, Simo, and the COO shuffle
- (15:22) - Stability is the enemy: the biggest startup the world has ever seen
- (17:28) - The markets: rear-view mirror meets speculation
- (19:48) - SpaceX with xAI: rumoured at $2 trillion
- (22:32) - Ezra Klein in San Francisco: I saw something new
- (24:19) - McLuhan: we shape our tools, and thereafter they shape us
- (26:42) - Why didn’t the AI doc actually use AI?
- (31:19) - The agency debate: all the worst people want to be high-agency
- (38:09) - AI is a calculating machine. You have to ask it something.
THAT WAS THE WEEK — April 4, 2026
Cleaned transcript
00:00:31 Andrew Keen: Hello, everybody. It's Saturday, April 4. If it's a Saturday, it must be That Was the Week — our summary with Keith Teare of everything interesting that's happened in technology this week. We're both out on the West Coast, so we have a front row seat in this new movie. And speaking of movies, last week I told Keith that he needed to go and see the AI doc. I'd seen it the previous week. This week we're going to talk about it a little bit more. So Keith, who's a very obedient fellow, went off with his wife, Janae, to a movie house in Palo Alto. It's good they still have them there, Keith. To see the AI doc — or How I Became an Apocaloptimist — meaning, in other words, how I got confused about AI. Lots of reaction online. We'll get to that. But before we get to the movie, what did you think of it?
00:01:28 Keith Teare: You know, the movie was — I mean, technically, I thought it was fine. It was well made. It kept your attention for two hours, which isn't easy. But I think it's a massive failure of a movie because it pretends to be talking about AI either as a problem or as a solution and ultimately does neither. It doesn't establish a problem, and it definitely doesn't establish what kind of a solution it is for what problem. It brings both optimists and pessimists to the table — or rather, to this sort of weird studio that seemed to dominate the whole film. And in both cases, it limited itself to declarations without substance. There was never an explanation of what the negative case is or what the positive case is. So it ended up being a voyeuristic kind of exercise, peeking behind the curtains of opinion, which really didn't lead anywhere.
00:03:20 Andrew Keen: I don't necessarily disagree, but I think the interesting thing about the film is that its co-director, Daniel Roher, is a pretty successful young filmmaker, obviously very talented. The central narrative in the film is that Daniel goes out to figure out what this AI thing is, particularly in the context of being a new parent. His wife gets pregnant during the movie, so he wants to know, firstly, whether or not he should have a kid, and secondly, what kind of world this child will inherit. In that sense it's a very pertinent narrative, and a very typical one: everyone is thinking that the next chapter in our collective story will be AI. So that's a fair beginning to the story, isn't it, Keith?
00:04:19 Keith Teare: Yeah. I think his initial opening statement of motivation is fine — totally fine — and he holds that thread through the movie. His wife is pregnant and gives birth, and by the end of the movie the child is visibly about one year old. So that probably does reflect the thoughts of a lot of people, because it's pretty hard unless you're an expert to get into the mechanics of either side. You do end up being an observer of a debate without really having the tools to fall down on one side or the other except for your natural instinct. I think he captures the angst on the doomsday side, and he captures the overoptimistic zeal on the other side. But, given that I do actually know how it works and what is really happening, it leaves you dissatisfied.
00:05:30 Andrew Keen: Well, maybe they should have interviewed you, Keith — that was the mistake. In your editorial, you suggest that the film positions Tristan Harris, who's become perhaps the most articulate and successful critic of technology — not just of AI, but of social media as well — as a moderate or someone in between. I'm not sure that was the case, though. Was Harris presented as the voice of reason in the film?
00:06:02 Keith Teare: No. I think he was definitely on the doomsday side — let's just label it that, though that might be an unfair word for him. When we left the film, I talked to my wife, who is naturally inclined to be a skeptic about AI — she fears the replacement of human agency by tech, leading to worse outcomes. She's not in the "it's going to kill us all" camp; it's more that it's going to make our lives less interesting. And the point we both agreed on is that the only opinion you could come out of the movie with is a negative one of AI. There's no narrative that would allow you to go in as negative and come out as positive — to change your mind.
00:07:09 Andrew Keen: Sure. The other piece of Roher's story — the co-director and main character trying to figure out the meaning of AI — was that his father had a rare form of cancer, and it was acknowledged, particularly in conversations with some of the more senior AI people, that this might help him. You broadly talk in your editorial — and it's not just about the AI doc — about who gets to tell the AI story. I mean, don't we all? What does this even mean? Who gets to tell the AI story? You're sounding like a bit of a woke type, Keith.
00:07:55 Keith Teare: Well, look — who gets to tell the story of brain surgery? Hopefully it isn't people who are against surgery. Hopefully it's brain surgeons, because they understand it, and they give it context and allow you to feel that there will be a good outcome because they understand it. And you trust them.
00:08:26 Andrew Keen: Well, not all of us trust them. As we know from the politics of COVID and even the RFK Jr. stuff these days, not everyone trusts experts. They certainly don't trust journalists, and they don't even trust doctors. Anthony Fauci has become the antichrist for many people. Some scientists are considered ideologues of one kind or another.
00:08:50 Keith Teare: I think it changes when the discussion is social policy versus cure. A brain surgeon is about cure. Social policy is a different sphere, and Fauci was very much in the world of social policy around COVID — not diagnosis or science per se. It was the science of social policy that he was focused on. And opinion flourishes because in civil society you're allowed to — and in fact we should encourage — many different opinions. But that doesn't mean any of them are right. They're just opinions. And so I think the headline this week — the catalyst for it — was OpenAI acquiring TBPN.
00:09:37 Andrew Keen: This was another piece of surprising news. OpenAI buying the streaming show TBPN, aiming to change the narrative on AI. And this comes back to your theme of who gets to tell the AI story. Do you think the story behind this story is that the people at OpenAI feel the AI story isn't being told fairly? So they bought a company — it's like buying TechCrunch in the Web 2.0 age. It's as if Google or Facebook had bought TechCrunch, and you were on the front lines of that. I mean, it would have been slightly absurd, wouldn't it?
00:10:17 Keith Teare: Yeah. Well, eventually AOL did buy TechCrunch, and it bought it as a media business. OpenAI, as Om Malik says in his piece this week, is buying what Om calls a propagandist and an agitator. He invokes Lenin: when Lenin started Pravda, he did it because he wanted a media outlet he could use to educate the masses. Well, OpenAI is buying something, and even though it claims editorial independence, you don't really buy a media outlet unless you want to influence the message.
00:10:58 Andrew Keen: And it's a very odd decision by OpenAI, especially given that in the last few weeks there have been all these stories about them — focus, focus, focus, emergency alert, get rid of their video algorithm and all the rest of it. And now they've distracted themselves by buying a media company. The Times says this was driven by Fidji Simo, a top OpenAI executive who'd been impressed with the show's marketing instincts. Is that your reading? And apparently they're all going to report to Chris Lehane — perhaps the most invisible power broker in Silicon Valley, who lives just up the road from here. My wife and his wife are very close friends. It's all a bit weird, isn't it?
00:11:57 Keith Teare: Weird — yes, I'll give a tick to that word. But it also probably denotes a moment when OpenAI feels it isn't winning the messaging war — probably vis-à-vis Anthropic, actually. I don't think it's the messaging war against the doomsters. I think it's the messaging war against Anthropic.
00:12:20 Andrew Keen: That's a good point. And there was another — I think we covered this last week — a piece in the Wall Street Journal by Keach Hagey, who's a very good writer and just wrote a book on Google and OpenAI, and who speaks of the increasingly personal nature of the competition between OpenAI and Anthropic. I'm getting the sense, Keith — and you'll probably deny this — that you're beginning to slightly doubt OpenAI. You've always been the ultimate OpenAI guy, but now you're beginning to think maybe they're not quite as inevitable as you used to think. Is that fair?
00:13:02 Keith Teare: No. You're putting words in my mouth. Every week when you publish your version of our show, I say to myself and scratch my head and ask, did we really say that on the show?
00:13:17 Andrew Keen: So I'm the propagandist. I'm the Lenin of That Was the Week.
00:13:22 Keith Teare: Look. If you look at this from the TBPN point of view, you've got to ask why did they sell. They obviously don't really care about journalism. But if they got — and that's the other part of the deal — I read a rumor that it was in the hundreds of millions. I mean, how much did TechCrunch sell to AOL for? Was it twenty or thirty million?
00:13:49 Keith Teare: It was never disclosed, but it was a little bit higher than that — not that far north of it though. So why would they turn down a $100 million, maybe $200 million deal to be acquired by OpenAI? They're set for life. They can go and do another one. And if you look at it from the OpenAI side, I think you have to say it's a smart move. It kind of reflects last week's message — OpenAI is growing up.
00:14:18 Andrew Keen: But this seems like the ultimate frivolity. Why would you buy a media company?
00:14:25 Keith Teare: We've yet to understand that. We'll only know in the future. I do think Fidji is vulnerable.
00:14:34 Andrew Keen: Well, she's more than vulnerable — because this is the other piece of news that you didn't include. All these executive changes at OpenAI: Brad Lightcap, the longtime COO — the number two man in the company — is now going to lead special projects. The chief marketing officer is stepping down, and Fidji Simo is taking medical leave for several weeks to seek treatment for a rare disease. So there's a lot of executive churn. Again, it doesn't necessarily suggest stability on Sam's ship, does it?
00:15:22 Keith Teare: Well, look — if you're looking for stability, you don't look in AI companies. None of them are stable.
00:15:28 Andrew Keen: You're a startup guy, Keith. You've run these things. You know how it works. All these senior people — the COO shifting to special projects doesn't sound very reassuring.
00:15:39 Keith Teare: Special projects is probably important right now. But you have to put the news into context, Andrew. OpenAI is the biggest and fastest-growing startup the world has ever seen — compared to nobody, compared to Tesla, compared to SpaceX. There's nothing like it. It's many times bigger than Anthropic.
00:16:09 Andrew Keen: Well, Anthropic is catching up. Would you acknowledge that?
00:16:19 Keith Teare: I don't think so. I think the market's growing. OpenAI still owns the bulk of it, and Anthropic's growing as well. Catching up is a difficult thing to prove — I'm not convinced I'd go with that. But they're both great companies. They're number one and number two in the biggest, fastest-growing startups ever.
00:16:43 Andrew Keen: But we always have that in tech. Every generation — whether it's web one or web two or web three or AI — they're all the biggest because that's the nature of the economy.
00:16:53 Keith Teare: Yeah. But when you're in the middle of that kind of scenario, stability is your enemy. You really need to be discussing your strategy probably weekly, re-addressing your priorities probably monthly, and rearranging things appropriately. So I don't think this instability is a bad thing. I think it's a sign of life, not a sign of death.
00:17:28 Andrew Keen: Well, you gave away the game with "rearranging" — that's the Titanic image, rearranging the deck chairs when you're about to hit the iceberg. We will see whether OpenAI hits the iceberg or whether Fidji Simo will save the company by acquiring TBPN.
There's a third strand of who gets to tell the AI story in your editorial, which in a way is probably more interesting than even OpenAI buying a media company or the AI doc. It's the markets, Keith. What are the markets telling us? Should the markets tell the story accurately? Is this the best way of actually gauging what's happening?
00:18:17 Keith Teare: Markets are a combination of a rear-view mirror and speculation about the future. So they never actually tell the story — in truth, they're a pulse on the present. It's interesting — one of our viewers on Facebook, Courtney Hamilton, has left a couple of comments saying that on the show we predicted this would become a moral panic before anybody else. He credits me with that —
00:18:54 Andrew Keen: He's probably your AI, Keith. I don't trust Courtney Hamilton.
00:19:04 Keith Teare: And then he says he can sense the moral panic in the voice of the interviewer — which is you. Anyway, back to the question. I think the markets are pricing these companies very aggressively. They're probably both going to IPO this year. Polymarket says OpenAI might be more likely to IPO next year. But what's interesting is Elon is going to beat both of them — SpaceX with xAI —
00:19:48 Andrew Keen: That was another piece of big news this week.
00:19:52 Keith Teare: And there are rumors it's going to be priced around $2 trillion. Now — and by the way, Courtney is a he, not a girlfriend.
00:20:06 Andrew Keen: Oh, boyfriend! I don't mind. I'm open-minded.
00:20:09 Keith Teare: Hey, Courtney — you must admit it's a reasonable mistake. My name is Keith Teare, and in school I was known as KT, and that got shortened to Kate. So I spent my whole teenage years being called Kate. That definitely created a few tense moments after school. But, yeah — I've said before that OpenAI is probably going to end up being worth $10 trillion. I still think that. And by the way, I think Anthropic might be worth $3 to $5 trillion.
00:20:50 Andrew Keen: We'll see. Again, that's very long term. Let's focus a little bit more on the concrete. One of your critiques — which I think is a good one — and the New York Times review made the same point: the movie tried to cover so much it ended up being more confusing than clarifying, though the parts were fascinating. I think that's a fair reflection. What you said earlier is that no one really was defining what AI was, which is the problem with the film — they weren't really using it. Maybe Daniel Roher, whether or not he was quite as inexperienced and innocent as he claims, was presented as the guy who knew nothing about AI and needed to be educated. But as you say, you need to know something about it. And there was a good op-ed in the New York Times this week — which you list in the newsletter — by Ezra Klein, the very popular podcaster and writer, and co-author of Abundance. He writes about "I Saw Something New in San Francisco," writing about AI as someone who actually uses it. He brings up the McLuhan quote — the famous line about do we use AI, or does AI use us — which I think is particularly relevant. So what does Klein say about AI, and why is this perhaps more useful to read than to watch the AI doc?
00:22:32 Keith Teare: Ezra Klein's piece is really about the triumph of agency. His visit to San Francisco resulted in him noticing how many people were using [unclear — OpenClaw?], which is this interactive personal assistant-style agent that was released to open source and then acquired by OpenAI. Very similar to the TBPN acquisition — it retains its separateness, and it's now being run by Dave Morin, who's a well-known Valley venture capitalist and entrepreneur. And [unclear] is basically an empowering tool for a human that gives the human a massive boost in productivity and control. But it comes with some risks because you have to give it access to your computer — which I do. Klein, who's on the East Coast, was an observer of this in the same way that the movie was made by an observer, and came away thinking this is a kind of change in the whole way things are used. Now, since then, Anthropic's Claude has morphed to move in that direction. It hasn't quite gotten there yet — it's way too hard to turn it into —
00:24:03 Andrew Keen: Yeah. But it will. That's almost inevitable, isn't it?
00:24:07 Keith Teare: I think so. I've played with the efforts they've made so far, and they're not as good as [unclear], but I want them to succeed because I kind of like Anthropic's approach.
00:24:19 Andrew Keen: But Klein's point — and he invokes McLuhan's famous line, which he may not have actually said, because the best lines aren't actually said by the people we believe said them: "We shape our tools, and thereafter they shape us." And I think Klein's point is that this AI is shaping us as individuals. He said — and I'm quoting him — "the effect is to constantly reinforce a certain version of myself." And that's what these AIs do. They pick up on aspects of ourselves and push them. Maybe they're complementing us. Maybe they're trying to get us to self-improve. But I think it's an interesting observation.
00:25:09 Keith Teare: Yeah. Self-improvement is, I think, the motivation of all technology ever. I don't know why you would get interested in technology if it wasn't for self-improvement. And self can be collective — individual and collective. You have to hand it to Ezra Klein. I think his Abundance insight, about a year ago after the Democrats' election loss, and his recognition of AI as self-improvement are both very humanistic — in the humanist and Enlightenment tradition of thinking. Not to see it that way would be bizarre. Otherwise you'd have to endow AI with some kind of consciousness as a thing in itself.
00:26:10 Andrew Keen: Coming back to the movie — one of the things it should have done, or could have done to make it more interesting, is use a little bit of AI in it. To show us, the viewer — because this is filmed for the viewer — how AI could actually change a movie. It didn't do any of that. It was very much a traditional, top-down film with lots of graphics. And the more graphics there were, the more confusing it became, because all you were watching were images on the screen without being quite sure what they meant.
00:26:42 Keith Teare: Well, even worse, Andrew — and we haven't made this point — all three leaders who showed up to be interviewed, Sam Altman, Dario Amodei, and Demis Hassabis, none of them actually showed any leadership in addressing those questions. In fact, they're so paranoid about the moral panic that they come across as unconvinced that there will be a good future.
00:27:19 Andrew Keen: I think Demis was a little bit more mature in how he handled it. But do you find — Keith, you said the film was missing people who actually use AI — do you find that your interactions with whichever AI you're using are pushing certain versions of yourself? Is it creating more or fewer Keith Teares?
00:27:51 Keith Teare: Well, my use of AI exposes its weaknesses mostly. Of course it has massive strengths, but the thing you notice as a user is its weaknesses. This week I did a few things — I've got a board meeting next week and I have an agent that produces my board report based on some database queries. I published the State of Venture at stateofventure.com, and I did the monthly venture capital report, which is another agent. The whole workflow for That Was the Week involves agents all the way through — from headline writing, editorial, gathering the pieces, organizing the newsletter into something publishable. And the editorial writing is the biggest nightmare because the AI never quite gets it.
00:28:54 Andrew Keen: I'm not going to make any rude remarks about the quality of your editorial, Keith.
00:28:59 Keith Teare: But I don't publish its editorial — I publish mine. If you could see the one I don't publish, you would understand what I mean by its weakness. For me, it means I have the constant experience of using it a lot but being in control. I'm overriding it more than I'm just accepting it.
00:29:21 Andrew Keen: It sounds like your wife, Keith. You're a traditional male in this marriage — you're overriding it. And you're a one-man publishing business. Your post of the week is also about another lean operation — but this is a $1.8 billion company, Medvie, and you have a post about how it's a $1.8 billion revenue company with just two employees. When are you going to sell us to Anthropic for $1.8 billion, Keith?
00:30:00 Keith Teare: Not going to happen. It's all about audience, Andrew. And sadly, our audience —
00:30:05 Andrew Keen: Well, they're all put off when you make assumptions about viewers like Courtney. You need to be more careful these days. Some of our viewers might be both simultaneously on different days, so you shouldn't jump to gendered assumptions.
00:30:24 Keith Teare: Absolutely right. But I've now forgotten the question.
00:30:29 Andrew Keen: The question is this Medvie company — two people, a $1.8 billion company. Although it's not worth $1.8 billion, is it, Medvie?
00:30:43 Keith Teare: It's probably worth more than that. What they do is market GLP-1 drugs — they're the most successful operation working with compounding pharmaceutical companies to get GLP-1s to people who don't qualify for a prescription, and it's become a huge business. It's mainly a marketing business running ads on Facebook. The $1.8 billion is the revenue number, but the profit margin is about 30-something percent.
00:31:19 Andrew Keen: Maybe they should be a sponsor of us. Anyway — the future is, of course, when we have a $1.8 billion company run by nobody at all. But all this comes back to the same theme that comes up every week: the issue of agency. There was an interesting op-ed in the New York Times this week — which I saw and encouraged Keith to put in the newsletter — by Sophie Haigney. It's a polemic against the idea of agency: all the worst people seem to want to be high-agency. And I wonder, Keith, whether agency is becoming the defining political quality in our age of AI — whether it could even be seen as an ideology.
00:32:26 Keith Teare: It's interesting — agency is mostly assumed to be a right-wing idea these days. I think it started with the intellectual backlash against modernism in the wake of Stalin and Hitler —
00:32:43 Andrew Keen: Wait — are you saying Stalin and Hitler were modernists? I thought they were political leaders.
00:32:48 Keith Teare: They embodied modernism — the idea that you can consider the whole and try to change it. And that led — anyway, my point is that the left had an intellectual reaction against what was called dictatorial thinking, oligarchs and the like, which led to a kind of identity politics: kumbaya, let everyone be who they want to be. And in that context, agency is arrogant. It's somebody who believes they have the right to influence outcomes, and that's considered arrogant. So this article against agency is what I consider the left's abandoning of any historical agenda. If you have a historical agenda, you need to have agency. Agency is the precondition for making history. And the idea that someone is arrogant because they have an opinion and want to persuade you to agree with it — what you're really saying is don't talk to me, let me just get on with my life, I'm not going to be responsible for the future. So I do think it's a super important conversation.
00:34:23 Andrew Keen: And this comes back to the Meta and YouTube case last week, where they were found guilty. George Will had an op-ed in the Washington Post — which you did end up including in the newsletter — saying the verdict against Meta and Google carries sinister implications, because it suggests that when we use technologies like Instagram or YouTube, we're not really in control of ourselves. I'm not sure whether he would agree with the New York Times piece on agency. But the issue of agency seems to be everywhere. And apparently high agency is what everyone likes in Silicon Valley — we all want to be high-agency people. When you go to cocktail parties in Palo Alto, do people introduce themselves as high-agency, Keith?
00:35:53 Keith Teare: You know, we used to call it Type A personalities, if you remember that, Andrew. Silicon Valley is full of Type A personalities who believe that the things they think about are important for everyone — as should every human being, otherwise we become passive acceptors of the life we're given. I do think the Meta case is interesting in that regard, because what that case said is that the girl who was depressed was a victim of social media and that her own decision-making had nothing to do with it — she was solely a passive victim of media. That depicts humans as infants who have no decision-making role whatsoever. And that just isn't true. It really is a kind of left anti-big-tech victim culture that is just wrong.
00:37:09 Andrew Keen: Well, it's not just the left. I agree in part, but it's the right as well. Steve Bannon has given up on agency just as much as Bernie Sanders has. And the New York Times piece suggests this will come to a boil in the age of AI. On the one hand, the promise of AI is that it's supposed to empower us — to turn us all into high-agency individuals, and you always talk about that, Keith. But on the other hand, the big fear is that it will take agency away. So I don't know whether the AI revolution is a cause or a consequence. It's a bit of both, but it's brought the issue of agency to the fore. It's really becoming the defining political conversation.
00:38:09 Keith Teare: Yeah. But that's the misconception, and it's a shame the movie didn't go into this. They did acknowledge that AI is a big calculating machine — that's what AI is, a calculating machine.
00:38:24 Andrew Keen: A sophisticated one.
00:38:27 Keith Teare: Yeah. And in order to trigger a calculation, you have to ask it something. And when you ask it something, you can provide context in the form of files — typically markdown files. For example, [unclear] has a file called soul.md that gives the AI its personality. You describe it. So agency has not been given up. Agency is in the hands of the human, and it shapes the AI completely — 100%. The AI can't do anything outside of the context you give it. And by the way, each session is unique. You're starting from scratch every time, a little bit like Groundhog Day. So the idea that AI is this thing that lives outside of you is just wrong.
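The soul.md file Keith mentions is just a plain markdown document the agent reads at the start of each session. A hypothetical sketch of what such a personality file might contain — only the soul.md name comes from the conversation; the contents below are invented for illustration:

```markdown
# soul.md — agent personality (illustrative sketch, not a real file)

## Role
Research assistant for a weekly newsletter on the tech economy.

## Voice
Direct and plain-spoken; skeptical of hype; no filler.

## Hard rules
- Never invent quotes, numbers, or sources.
- Flag anything you are unsure about instead of guessing.
- Ask before taking any action outside this workspace.
```

Because each session starts from scratch, the agent "knows" only what files like this tell it — which is Keith's point about agency staying with the human.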
00:39:19 Andrew Keen: Yeah. And it comes back to the McLuhan comment: we shape our tools, and thereafter they shape us. We shape our AIs, and then they do shape us. But given that we shape them in the first place, we get what we deserve, Keith.
00:39:40 Keith Teare: I think most people's fear is driven by this notion that his child — meaning Roher's child — will never be as clever as AI. It's the first generation where new children will not be — even taking it collectively across the whole world — as clever as AI. And that creates this sense of AI as a separate thing, as opposed to a tool that we control. And I think all of the fears live in that question.
00:40:14 Andrew Keen: Yeah. And it's also culturally a movie about the Daniel Roher generation of obsessive parents. So it's an interesting film, an interesting conversation. I think what it speaks of, Keith — perhaps as a conclusion — is that in our age of AI, we need a strong sense of the self. I think that's what Ezra Klein is seeing in San Francisco. Unless we have a strong sense of the self, we will indeed be lost. But if we have it, we can push these AIs around and shape them as we want. Is that fair?
00:40:52 Keith Teare: I think that's fair.
00:40:55 Andrew Keen: Well, that was a rather unfair conclusion, Keith — I thought we liked to disagree. Anyway, excellent conversation. Next week we will no doubt talk about the newest of new things, AI. Thank you, Keith.
00:41:07 Keith Teare: Adios.