March 7, 2026

No AI Good Guys? Andrew & Keith Ask If Altman, Amodei, & Hegseth Have All Failed the Leadership Test


“They’re both naughty boys in the playground, leveraging the absence of clarity to their own advantage. Neither one of them is an authoritative leader of opinion with the interests of everyone at heart.” — Keith Teare

What a difference a week makes. Last Saturday, Keith Teare was arguing that Anthropic was wrong to push back against the US government’s use of AI in warfare. This week his editorial is entitled “No Good Guys.” He’s used AI to put images of Sam Altman, Dario Amodei, and Pete Hegseth around the same table—and found all three guilty of poor leadership. According to Keith, Amodei is “ideologically” (whatever that means) driven, Altman is commercially driven, and Hegseth is just following orders. None of them is asking the all-important questions about AI policy. And the man who should be—Trump’s AI czar David Sacks—is absent without leave. All four should be court-martialed.

Yes, a lot has happened in seven days. Altman publicly supported Amodei’s position on surveillance and autonomous weapons—then pulled a classic Sam U-turn and signed a contract with the Department of War. Amodei’s internal memo was leaked to The Information, revealing that he’d interpreted the government’s “no unlawful use” language as meaning there is no law. And the US military used Claude in the Iran war anyway. As Keith puts it: they’re all naughty boys in the playground, leveraging the gaps to their own self-advantage.

The only problem, of course, is that this isn’t a playground game. And that these men are all shaping the lives (and deaths) of countless people around the world.

Meanwhile, Om Malik’s “Post of the Week” offers a devastating contrast between Xi’s China and Trump’s America. China, Om argues, has published a five-year AI plan built on open-source software and bottom-up adoption. America, in contrast, has AI theater. No strategy, no policy, no leadership—just contracts, leaks, and perpetual spin. Then there’s the Startup of the Week, Jobright, which hit $5 million in annual recurring revenue with nine people, suggesting that the companies of the future may not need humans at all. Keith’s own SignalRank has four people and claims to be going public. We seem to be heading for post-human companies before we’ve figured out who’s managing the humans.

Maybe we should court-martial everyone. What a difference a week makes.

 

Five Takeaways

•       No Good Guys: Keith Teare’s editorial puts Sam Altman, Dario Amodei, and Pete Hegseth in the same room—and finds all three guilty of bad leadership. Amodei is ideologically driven, Altman is commercially driven, and Hegseth is just doing his job. None of them is asking the big questions about AI policy. The real culprit may be the invisible AI czar, David Sacks.

•       Altman Said One Thing, Then Did Another: Last week Altman publicly supported Amodei’s position on surveillance and autonomous weapons. This week he signed a contract with the Department of War. The contract uses “no unlawful use” language—which, as Amodei’s leaked memo points out, effectively means there is no law.

•       The US Used Claude in Iran Anyway: Despite the very public dispute between Anthropic and the government, the US military used Claude in the Iran operation. The government doesn’t need your permission to use your product. It just needs an API key and a credit card.

•       China Has a Plan. America Has Theater: Om Malik’s “Post of the Week” contrasts China’s published five-year AI strategy—built on open-source software and bottom-up adoption—with America’s complete absence of AI policy. The Chinese approach is more inclusive and practical than anything coming out of Washington or Silicon Valley.

•       The Future Company Has Nine Employees: Startup of the Week Jobright hit $5 million in annual recurring revenue with just nine people. Keith’s own company, SignalRank, has four people and is going public. The implication: the companies of the future will be run mostly by software agents, not humans. We’re heading for post-human companies.

 

About the Guest

Keith Teare is the publisher of That Was The Week, founder and CEO of SignalRank, and a recurring sparring partner on Keen On America. A serial entrepreneur and investor, he is the co-founder of TechCrunch and RealNames. He joins the show every Saturday for the weekly tech roundup.

References

Essays, posts, and interviews referenced:

•       Keith Teare, “No Good Guys” — That Was The Week editorial

•       Om Malik, “The Great AI Game versus AI Theater” — Post of the Week

•       Ross Douthat, “If AI Is a Weapon, Who Should Control It?” — New York Times

•       Ben Thompson, Stratechery — on “no unlawful use” and the absence of international law

•       Paul Krugman on the economics of technological change — technology, jobs, wages, and monopolies

•       Tim O’Reilly, “How We Bet Against the Bitter Lesson” — skills and the future knowledge economy

•       Yascha Mounk and Danielle Allen on participatory democracy and AI governance

•       Previous Keen On episodes: Tom Wells on the Kissinger tapes; Michael Ellsberg on Daniel Ellsberg and the Pentagon Papers

•       Startup of the Week: Jobright — $5M ARR with nine employees

About Keen On America

Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States—hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.

Website

Substack

YouTube

Apple Podcasts

Spotify

 

Chapters:


00:00 - Introduction: What a difference a week makes

01:14 - “No Good Guys”: Keith’s editorial and Om Malik’s wake-up call

02:30 - Amodei, Altman, Hegseth: three self-interested players

04:02 - How the Iran invasion changed the AI debate

05:28 - “No unlawful use”: a meaningless phrase in a lawless context

06:50 - The US used Claude in Iran despite the Anthropic dispute

08:15 - Naughty boys in the playground: spinning vs. leadership

09:31 - Bobby Kennedy, the Cuban Missile Crisis, and the myth of good leadership

11:34 - Amodei’s leaked memo: ideologically driven or Machiavellian?

16:21 - If AI is a weapon, who should control it?

19:35 - Participatory democracy and real-time AI governance

21:35 - An open letter to David Sacks: the invisible AI czar

23:09 - Krugman on jobs, wages, and monopolies after AI

25:34 - Tim O’Reilly and the bitter lesson

28:50 - Startup of the week: Jobright—$5M ARR with nine people

31:38 - Post of the week: Om Malik on China’s AI strategy vs. US AI theater

Keen On America: Full Transcript with Timecodes (March 7, 2026)
00:00 - 00:23
Andrew Keen: Hello. My name is Andrew Keen. Welcome to Keen on America, the daily interview show about the United States. Hello everybody. It is Saturday, March 7th, 2026. What a difference a week makes, as Harold Wilson once so famously said—especially when it comes to technology.


00:23 - 00:52
Andrew Keen: Last week on That Was The Week, our weekly roundup of tech news, Keith Teare was asking whether Anthropic was wrong in terms of its pushback against the government. He argued it was, but we were arguing it very much in a vacuum, out of the context of the American invasion of Iran. Keith believed that Anthropic was wrong, or perhaps is wrong. I wasn't so sure; I saw Anthropic's pushback in the context of the unusual political situation in the United States.


00:52 - 01:14
Andrew Keen: A week later, Keith seems to have changed his mind. This week, his editorial on That Was The Week is entitled "No Good Guys". He's used AI to put Sam Altman, Dario Amodei, and Pete Hegseth in the same room—I'm not sure the three of them ever would or could ever be in the same room. They all look rather miserable, Keith. So, there are no good guys. Does that mean you've changed your mind, that you may, Keith Teare, once in your life, have been wrong?


01:14 - 01:17
Keith Teare: What do you think?


01:17 - 01:23
Andrew Keen: Well, it was a leading question. I want you to admit you were wrong, not me. I always tell you you're wrong.


01:23 - 01:45
Keith Teare: Well, the first third of my editorial is reaffirming that I believe I was right, and I still think that Anthropic was wrong. But they weren't the only wrong actors. So, in a different way, was OpenAI, and in yet a different way again, was the US government. And the collective failure is equal to the complete absence of leadership over AI, which I just want to credit Om Malik with putting that thought in my head. He had an excellent piece which is this week's "Post of the Week."


01:45 - 01:54
Andrew Keen: And this piece—we'll come to it at the end—is "The Great AI Game versus AI Theater". Of course, America is the stage of all sorts of theaters, not just AI theater, but AI's piece of it.


01:54 - 02:04
Keith Teare: So the theme this week really is a complete absence of leadership, of which Anthropic, you know, is one of the players.


02:04 - 02:30
Andrew Keen: So again, let's step back a bit. "No Good Guys." So, Amodei isn't a leader, Sam Altman isn't a leader, Pete Hegseth certainly isn't a leader. You've left out the guy who claims to be leading the US—some people wonder whether he's capable of it. But isn't the whole point, Keith, of your argument last week about Anthropic being wrong, is that AI companies shouldn't be leaders? That they're just providers...


02:30 - 02:34
Keith Teare: No, they shouldn't set policy. Let's be specific.


02:34 - 02:35
Andrew Keen: Right, so they shouldn't be leaders.


02:35 - 02:40
Keith Teare: No, you can be a leader in an opinion of what policy should be, but you don't get to set it.


02:40 - 02:47
Andrew Keen: Well, I don't really see the difference. If you have a view and you want to set it, you might not succeed...


02:47 - 02:54
Keith Teare: Hold on, you're getting confused. You can't become the legislature, but you can be a voice in civil society with an opinion.


02:54 - 03:00
Andrew Keen: No one's going to argue with that. No one's going to argue that Dario Amodei or Sam Altman shouldn't have an opinion.


03:00 - 03:22
Keith Teare: Right. So, leadership—now here's the thing: they don't seem to have an opinion about the big question of AI policy and the future. They have lots of opinions about contracts that they're negotiating right now, as does the government. But none of the three of them are standing back and asking the big questions about AI. It's almost as if the US is on remote control when it comes to AI policy, and who's the real culprit? It's probably David Sacks, who is the "Czar of AI," who is this invisible...


03:22 - 03:28
Andrew Keen: I mean, invisible czar. I don't even have a slide for David Sacks this week; he's such an invisible czar.


03:28 - 04:02
Andrew Keen: So, let's step back, Keith. I said that a week was a long time; Wilson famously said a week was a long time in politics. Certainly, a long time when it comes to international politics and war. Since we last talked this time last week, there's been this huge war in Iran—a joint US-Israeli invasion, or an attempt to invade, or at least bomb Iran back into the Stone Age. How has that changed the debate? What's happened in the last seven days in terms of AI policy and the relations between that and the current war in the Middle East?


04:02 - 04:22
Keith Teare: Yeah, good question. A few things changed. The first is that when we spoke last week, Altman had gone on the record supporting Amodei's instincts around surveillance and autonomous weapons. Since then, he's signed a contract with the Department of War, which clearly had been in negotiation as well.


04:22 - 04:36
Andrew Keen: Isn't that, Keith, classic Sam? Or at least for those of us who aren't great fans of Sam Altman, it's part of his classic playbook of saying one thing and then doing something quite different.


04:36 - 05:01
Keith Teare: Well, the letter of the actual event is that he agreed to the US's "no unlawful use" language, which, on the face of it, is fairly reasonable. "We'll only use this within the law". Now, the second thing that then happened is Amodei released an internal memo to his team which got leaked in The Information, that made clear that Amodei's belief is that "no unlawful use" is the same thing as saying there is no law.


05:01 - 05:28
Andrew Keen: Whereas one of the essays this week, which I thought was really good—Stratechery by Ben Thompson, one of your favorite sets of essays—he makes the important argument that there's no such thing mostly as law when it comes to a lot of this stuff in international context. So when we're talking about "unlawful," that itself is a meaningless word. I mean, the very US-Israeli invasion of Iran, or this attempt to bomb the country back into the Stone Age and assassinate all its leaders, that's by definition against all forms of international law. So is there any value in even using this word "unlawful" in this discussion?


05:28 - 05:40
Keith Teare: Well, I think that we shouldn't get into the wordsmithing of it, but what does it mean?


05:40 - 05:42
Andrew Keen: Well, you're the one who's talking about "unlawful."


05:42 - 05:54
Keith Teare: No, I'm not. I'm describing what was signed. If you ask me my opinion, I'll give it to you, but so far you've asked me what has happened, and I'm saying two things happened. Then a third thing that happened is the US did actually use Claude in the Iran operation, despite the conflict with Claude, with Anthropic.


05:54 - 06:06
Andrew Keen: You and I, before we went live, Keith, we were talking about—I've got the $100 a month Claude, you've got the $200. I'm assuming the US has its $200 seat.


06:06 - 06:12
Keith Teare: They're probably using the API and paying for token use, which is more expensive than we pay.


06:12 - 06:23
Andrew Keen: I mean, we shouldn't be laughing, but it is in many ways completely absurd, this whole thing, isn't it?


06:23 - 06:50
Keith Teare: Well, yeah, but I think we've got to use it as a prism to properly analyze the challenges faced by AI in the US, because it is a magnifying glass into the present moment that reveals what the role of each of the actors is. And that has big implications for the near future and what happens next. By the way, Google has been—Google and Microsoft have been silent on the edges, but it did materialize yesterday that the Department of War has a contract for use of OpenAI with Microsoft.


06:50 - 06:59
Andrew Keen: Yeah, and I don't want to give away any marital secrets, but I can guarantee you one thing: Google is not silent behind its doors on this stuff.


06:59 - 07:29
Andrew Keen: I mean, in terms of your three guys—these "no good guys"—Amodei, Sam, and Hegseth, shouldn't we be combining—you put this fake photo together, they didn't really exist in the same room, or fake video. Should we even be distinguishing between Sam and Amodei? I mean, they're different kinds of characters, slightly different companies, but basically, there's not that much fundamental difference between OpenAI and Anthropic. But of course, when you compare them with Hegseth and the US government, you do have a fundamental division.


07:29 - 07:38
Keith Teare: You've got three self-interested players, and as you said, in the absence of clear law, the government gets to decide what to do more than any company would get to decide.


07:38 - 07:54
Andrew Keen: Well, but wouldn't Amodei say that's not true? And in the absence of any clear law, especially since this complete avoidance or rejection of international law by the current administration, it actually—and this is what you and I talked about last week—it actually makes the role of an Amodei or a Sam actually much more important because they're players here. When the government does stuff that is in complete denial of the existence of law, doesn't it give some degree of, if not legal, moral authority to private companies?


07:54 - 08:15
Keith Teare: No, what they become is naughty boys in the playground, leveraging the gaps to their own self-advantage. And they're both doing that. Amodei seems to be ideologically driven in that regard; Altman seems to be mainly commercially driven in that regard. But they're both naughty boys in the playground, leveraging the absence of clarity to their own advantage. Neither one of them is an authoritative leader of opinion with the interests of everyone at heart. Neither one. And by the way, obviously Hegseth is not that either.


08:15 - 08:33
Andrew Keen: But does that ever exist? I mean, you've read enough political philosophy, Keith. Every political philosopher from Socrates to Rousseau to Marx has claimed that authority, but one wonders whether there's ever been a government that can claim to speak for all people.


08:33 - 08:58
Keith Teare: No, there are in history, repetitively, opinion leaders with moral good and societal good on their side. You think of the thousands of examples, but let's just pick the civil rights movement in America and the role of JFK; also in the Cold War with the Cuban Missile Crisis. There are leaders who think ahead, big picture, and execute against it. But in the world of AI today...


08:58 - 09:31
Andrew Keen: Well, let me just call you on that, because that's wrong. I'm currently, as you know, I'm in the process of trying to write a book about Bobby Kennedy, and I've done a lot of reading about the Cuban Missile Crisis. He claims in some ways, probably correctly, that he and his brother saved the world from this terrible war with the Soviet Union because of the Soviet establishment of missiles on Cuba. But the truth is much more complicated. They never really tell the truth about what happened in the back channels and the deal they did with the Soviets. So this idea that there was a time when government acted for the people and was trusted and now that's no longer true is also wrong.


09:31 - 09:38
Keith Teare: What we're trying to do here, Andrew, just to tether it to the previous discussion, is define what good leadership looks like.


09:38 - 09:44
Andrew Keen: And I take your point. So we're not talking about perfect leadership; we're talking about the difference between good and bad leadership.


09:44 - 09:56
Keith Teare: Yeah, and I think it is pretty definitive that on all three of their shoulders, pinning the label "bad leadership" is not unreasonable.


09:56 - 10:14
Andrew Keen: Again, I would disagree. I'm not a fan of Hegseth or Trump, but I mean, would it be fair to say—and this is where your photo is actually pretty accurate—that Sam sitting at the table with his hand on his head and Amodei looking just as miserable, they've been put in an impossible situation because of the behavior of the US government.


10:14 - 10:30
Keith Teare: Yeah. So if, as you probably know, I'm a great believer in the right of nations to self-determination, and I do consider what the US did in Iran—even though, you know, I'm not going to cry any tears over the result—was absolutely outside of any international rule.


10:30 - 10:38
Andrew Keen: Well, results—I mean, there are lots of results. I mean, it's one thing the results of the assassination of their leadership, another of what's happening now. But that's another issue.


10:38 - 11:00
Keith Teare: Yeah, there's a lot one could talk about there. But that said, in the world we live in, the nation-state, especially a powerful one like America, does get to set rules even if it's breaking rules when it does it, and companies don't. And so Hegseth probably is the least culpable of the three because he's just doing his job and he does run the Department of War, as it's now...


11:00 - 11:05
Andrew Keen: Okay, so you're saying that Hegseth is less culpable than Sam or Amodei? What have they done wrong?


11:05 - 11:11
Keith Teare: They've all done different things wrong, including Hegseth.


11:11 - 11:34
Andrew Keen: Okay, well let's leave Hegseth out. What have Sam and—I include Sam in this—what have Sam and Amodei done wrong?


11:34 - 11:58
Keith Teare: I don't know if you highlighted it, but if you look at the quote from Amodei's memo that leaked, he has clearly stated an ideological preference, has quoted that "no unlawful use" phrase, meaning that he's well aware that the government was intending to act, quote, "within the law." And he then interpreted that wrongly as meaning there is no law—it's written in his own words—and said in the absence of law, which of course isn't true, there are laws, we're going to decide policy. He says it in his own words, so he's ideologically driven, which is fine as long as you don't try to set state policy.


11:58 - 12:12
Andrew Keen: Well, but in the context—and we talked about this last week—this is all in the context of the current Trump administration, which many, including myself and certainly Amodei, think are behaving not only illegally but immorally.


12:12 - 12:18
Keith Teare: Yeah, but you don't get to change that, sadly, unless you win an election.


12:18 - 12:41
Andrew Keen: But in all fairness to Amodei, you seem to be suggesting that he's somehow Machiavellian here, that he knew what he was doing. I mean, doesn't he have a right to take a moral position?


12:41 - 12:51
Keith Teare: Well, read his words. His memo, which is by the way 1,600 words, demonstrates he totally knew what he was doing.


12:51 - 13:01
Andrew Keen: So he's Machiavellian here, rather than MLK or JFK. He's—he knows what he's doing. In other words, he's manipulating us morally for the purposes of Anthropic.


13:01 - 13:14
Keith Teare: He's doing what we all accuse Trump of doing, yeah.


13:14 - 13:21
Andrew Keen: And of course Sam—and this is the story of Sam—he's doing the same thing. Although he's less—he's less prone to making moral arguments of...


13:21 - 13:24
Keith Teare: They're both spinning, as we say in England. Spinning is the...


13:24 - 13:46
Andrew Keen: Well, are you saying, Keith—and this is surprising, and you've done a lot of reading around this—are you saying that Amodei is purely Machiavellian? That he doesn't care at all, that he's purely using the outrage over this war, one kind of outrage or another, to pursue the interests of Anthropic?


13:46 - 14:13
Keith Teare: I obviously can't read his mind, so the true answer is I don't know. But here's what I do know: he has an ideological disposition which, by the way, I would broadly agree with—you probably would too—he was in the middle of a negotiation where that disposition dominated his thinking about what the right thing to do was in the contract. It wasn't a commercial set of decisions; it was an ideological set of decisions. Again, you or I may have tried to do the same thing. And it got exposed because the government refused to agree, and now his statement this week, internal statement, was an after-the-fact—it wasn't a mea culpa, it was a "here's why I did it and I would do it all over again." And it exposes that he's ideologically driven, which is fine as long as you don't try to set state policy.


14:13 - 14:38
Andrew Keen: Well, but isn't the whole foundation of Anthropic—I don't even like this word "ideological" because I don't know what that word means, and there's a sort of pejorative sense here that if we're ideological we're doing something wrong. Wasn't Anthropic created as a response to perhaps the lack of, or a sense of the lack of, morality in OpenAI? I mean, Amodei was with OpenAI and he split.


14:38 - 14:48
Keith Teare: Yeah, I think that's accurate. Most of my friends are on his side of that split, morally speaking and intellectually. I probably would be too, but in the cold light of economic reality, OpenAI's winning by far. So, but being caught up, to give Anthropic credit where it's due, and Gemini for that...


14:48 - 14:55
Andrew Keen: At least you're beginning to acknowledge that. You used to say that they couldn't be caught up, OpenAI.


14:55 - 15:00
Keith Teare: Well, I didn't say the board wouldn't change, but I said OpenAI is ultimately the winner.


15:00 - 15:23
Andrew Keen: And one of the ironies of this—and I think this is purely unintended—is clearly this dispute, this very public dispute, has benefited Anthropic. I mean, at the beginning of the war, regular users of Anthropic including myself, of course, we couldn't even use it because it came down because so many people were using it.


15:23 - 15:53
Keith Teare: Yeah, the week's been interesting because I think as every day went by, Amodei looked worse and worse, especially with the leak of his statement. That was a kind of a killer. And Altman is pretty much the same as he was a week ago—nothing's changed there. And Hegseth—I think your opinion of Hegseth will correlate directly to your opinion of the Iran conflict. And it's interesting, I watched Real Time with Bill Maher last night, which, for those who are not American, is a comedy show called Real Time with Bill Maher, and Maher is a Democrat, and everyone on the show including Democrats had to acknowledge they liked what they're doing in Iran. So there's really not much of an outcry about what's happening in Iran, which is astounding given that they've...


15:53 - 16:21
Andrew Keen: Yeah, I agree. And I—I mean, I consider myself on the left, I'm certainly outraged. But so let's move on a little bit, because we could spend the whole show, and this is not a politics show, this is a tech show. One of the pieces that you cite this week is by the New York Times columnist Ross Douthat, "If AI is a weapon, who should control it?" That seems to be the core issue here. Who is, firstly, who is in charge of AI when it comes to its use when it comes to the government in war, and who should be?


16:21 - 16:40
Keith Teare: Well, I think the answer is the same as who's in charge of battleships. But sadly with battleships, there's a plan and a process. With AI, it's so new, there isn't. And so the use of AI is pragmatic; it's day-to-day, it's based on situational complexities that we don't know about, and the government—probably we want this to be true—tries to make a rational decision in every single moment what its role is.


16:40 - 17:09
Andrew Keen: Yeah, but you compare it to battleships. There isn't a single company out there—I'm no expert on the battleship economy or battleship economics—but there's no equivalent to OpenAI or Anthropic when it comes to battleships. Isn't this why this is a different issue? Because—I mean, one of the things that came out of this was that when Amodei supposedly—and you say it wasn't entirely true—when Amodei pushed back against government, he was doing it because he had a degree of power because the government needed Anthropic technology. I mean, if Raytheon said to the government, "We don't agree with what you're doing with our battleships, we're not going to give you our technology," the government would say, "Fine, go and find another vendor."


17:09 - 17:35
Keith Teare: So SignalRank's an investor in a company called Saronic that produces autonomous battleships—well, ships in general, but military. And it's quite clear if you're producing a military ship that it's going to do military operations, and it's autonomous.


17:35 - 17:39
Andrew Keen: You've ambushed me on this one, Keith. You knew we were going to talk battleships and I'm completely, so to speak, out of my depth.


17:39 - 18:08
Keith Teare: Well, it is AI, but it's AI embodied in a ship. And when the company gets a contract to deliver a ship, it knows because that's the whole purpose of the ship, that it's going to be autonomous. So autonomous doesn't mean no human in the loop, like drones are autonomous and they're clearly AI in drones to do with navigational and other characteristics. But there's a human in the loop. Eventually, I think we can all agree that we're going to get to the point where there isn't a human in the loop, probably. It seems very likely. So that question that you put on the screen...


18:08 - 18:33
Andrew Keen: Who—right, so the question is, if AI is a weapon, which it clearly is in some ways or can be used in terms of war, who should control it?


18:33 - 19:00
Keith Teare: Yeah. And the answer's got to be the same as the answer to who controls anything in democracy. And it isn't the vendor. I mean, who shouldn't control it? The vendor. Who should control it is the authoritative user. In a democracy, that's the government, and even in a dictatorship, that's the government. So the Chinese government make decisions, the Russian government make decisions, the American and British governments make decisions, the French government make decisions, and no one would ever believe that that decision should sit anywhere else.


19:00 - 19:35
Andrew Keen: Yeah, but you—one of the other pieces you linked to this week is a very interesting conversation between Yascha Mounk and Danielle Allen on—both of them are prominent political thinkers—on this kind of crisis of traditional top-down liberal politics, and an increasing focus of people like Mounk and Allen, many others as well—I've had them on my show—for what we call participatory democracy. So in terms of this AI debate, it's more than just "Oh well, Hegseth should run things, Trump should run things, whoever's in government should run things." Something is changing both on the left and the right, and that this idea of an old-fashioned technocratic liberalism now is being challenged by participatory liberalism.


19:35 - 20:10
Andrew Keen: And I'm sure that the participatory liberals, whether it's Danielle Allen or many others who are writing on citizen assemblies and many other things, are all beginning to wonder whether, in this new age, participatory democracy should have a role in controlling AI if it is indeed—which it is—a weapon.


20:10 - 20:38
Keith Teare: I think that is the right way to think, because unfortunately, democracy is representative democracy and your ability to control is delayed by, in the US, every four years or every two years if you account the midterms. So the ability for participant electors to control outcomes and policy is there, but it's time-delayed, and because it's representative, it's subject to capture by lobbyists and others. So it isn't a perfect participatory democracy by any means. But at least compared to a dictatorship, the people do have a way to change policy and use it, and clearly governments do change. So we kind of do have that, but it's imperfect. I certainly feel like with AI, the playing board changes weekly, and the challenges of what it is you're controlling and in what context change weekly.


20:38 - 21:12
Andrew Keen: Right, and it's that changing weekly which makes the idea of participatory democracy in—it's really what technologists might call real-time democracy—not just intriguing but essential. If everything is changing weekly or sometimes daily—I mean, a week is a long time in any week of technology, Keith, you and I know this from doing this show for several years—then we've got to rethink the nature of government. That's not for us on That Was The Week, we're a tech show, but I think that's why including the Mounk-Allen conversation is useful. I've dealt with it a lot; as I said on the show, a couple of weeks ago I had the Yale political thinker Hélène Landemore on, another leading thinker here.


21:12 - 21:35
Andrew Keen: So in the meantime—and I'm quoting the end of your editorial—the question of who sets AI policy deserves a serious answer. That goes without saying. This week proved that nobody currently in the room is capable of providing one; it wasn't Sam or Amodei or Hegseth. What should be done in the short term? For next week, for example, when these issues haven't gone away and perhaps in some ways have become even more salient?


21:35 - 21:58
Keith Teare: Well, in the spirit of writing an open letter, I would write an open letter to David Sacks, to whom Trump has given AI as one of his domains. Sacks has done a very good job in crypto of setting rules which have gone through Congress and have become, or are becoming, law—very different from how things were before. He hasn't done that with AI. With AI—in some ways rightly, so plaudits to him in some ways—he took a hands-off approach, mainly focused on treating regulation as a bad idea.


21:58 - 22:20
Keith Teare: But there's a difference, and liberals need to understand this as well: there's a difference between regulating something and setting policy for something. Regulating is generally about stopping bad things—or good things, depending on your point of view—from happening; policy is about allowing good things to happen. Sacks probably has the power to begin to do that, but he hasn't. So I think Sacks needs to step up in this scenario.


22:20 - 22:36
Andrew Keen: In other words, when we've got your vision of no good guys—the three guys in the room, Amodei, Sam, and Hegseth—we need Sacks in it too.


22:36 - 22:52
Keith Teare: Well, Sacks needs to be, you know, floating above on the ceiling as the god who can fix things.


22:52 - 23:09
Andrew Keen: I don't know if everyone's going to be happy with David Sacks as—he's not my god, might be yours, Keith. Let's move on. This is a subject we will no doubt come back to, probably next week. A couple of other interesting essays you have, from heavyweight thinkers. You've got the Krugman essay—Krugman has now gone over to Substack, he no longer writes for the New York Times—on the economics of technological change. What is Krugman saying here that's different from what anyone else is saying on the economics of technological change, particularly AI?


23:09 - 23:14
Keith Teare: Well, the bad news is I only could read the introduction because it's paywalled.


23:14 - 23:16
Andrew Keen: Well, then you shouldn't include it.


23:16 - 23:45
Keith Teare: I included it because the introduction itself was compelling. He's got an image of rampaging Luddites. He talks about three things: the relationship between technology and jobs, the relationship between technology and wages, and the relationship between technology and the tendency to monopolies and oligopolies. All three of which seem to me to be crucial talking points at this moment in history. So that's why I put it in. There are probably people who will go and pay...


23:45 - 23:49
Andrew Keen: Well, you should pay, Keith. I mean, aren't you wealthy?


23:49 - 23:53
Keith Teare: ...


23:53 - 23:57
Andrew Keen: You ignored that question. You're not wealthy? Aren't you—don't you have enough money to subscribe to Paul Krugman?


23:57 - 24:25
Keith Teare: I think in the context of Silicon Valley I don't count as wealthy, but yes, I am wealthy by any other standard. So here's why I put it in: there is no plan for what happens to people when AI replaces jobs. So number one, he's right: technology and jobs, and what happens after is key. I have opinions about that. Technology and wages: typically, historically, wages have gone up over time as the working hour has shrunk, and that is one of the ways of capturing productivity and progress.


24:25 - 24:45
Keith Teare: In the equation of capital and labor, if AI removes labor as a requirement, then the question of wages doesn't arise—but living still does. So there's a whole set of discussions around "How do you live after wages?"


24:45 - 24:50
Andrew Keen: And this is your—I'm not going to get sucked into this one this week—this is your Muskian utopia of post-money which I'm very skeptical of. You also include...


24:50 - 25:12
Keith Teare: No, wait, wait, just one thing, Andrew, because the third one's important too: monopolization and oligopolization. In other words, big things get bigger. How can that be a transition to a post-labor society? That's another conversation. In other words, maybe getting big is a gateway to changing society, where capitalism results in a post-capitalist reality. Not a revolution, not communism, but capitalism itself is so successful that it creates the conditions for a post-capitalist society.


25:12 - 25:34
Andrew Keen: That sounds to me like sort of Hegelian or Marxist sophistry, but maybe we'll come back to that—I'm sure we will. Another heavy hitter you've linked to, and this one I think you have access to in its entirety: Tim O'Reilly, who's always very wise on these sorts of things. "How we bet against the bitter lesson: Skills and the future knowledge economy." Is Tim in sync with Krugman? Politically they're in pretty much the same camp.


25:34 - 26:11
Keith Teare: Well, Tim is focused on the human experience and how it changes with the use of AI. And he engages with this thing called the bitter lesson, which comes from an essay by Richard Sutton and holds that general methods leveraging computation have always beaten approaches that try to capture human knowledge—chess engines beating the best human champions being an example. So he's in this kind of awkward place: yes, computation is going to become more and more capable of being better than humans, but what does that mean for humans?


26:11 - 26:45
Andrew Keen: Well, that's the trillion-dollar question, which again we will come back to. Very briefly, I included as my interview of the week one with Tom Wells, a journalist who's written a book called The Kissinger Tapes about Henry Kissinger's behavior in Vietnam. He had access to all of Kissinger's phone records, and we've talked about this on the show this week, and on my show Keen on America. In some ways things have changed, and in some ways they haven't. I mean, Hegseth and Trump are behaving very much like Kissinger and Nixon in Iran.


26:45 - 27:01
Andrew Keen: So if you want to remind yourself that a lot of these—a lot of these issues at least aren't new, watch my interview with Tom Wells as well as the stuff on Ellsberg with Michael Ellsberg, the son of Daniel Ellsberg who published the Pentagon Papers. Then your startup of the week, Keith. Sorry, I interrupted you.


27:01 - 27:21
Keith Teare: Isn't Kissinger—if you look at your five key takeaways from that interview and forget who you're talking about, it sounds like Sam Altman to me. "He lied more than expected."


27:21 - 27:40
Andrew Keen: Right—and I'm no fan of either Kissinger or Altman, so aren't you coming into my camp on this one then?


27:40 - 27:45
Keith Teare: Well, that's a judgment issue.


27:45 - 27:54
Andrew Keen: Everything's a judgment issue, Keith, but you hide behind this when you don't like the outcome.


27:54 - 28:18
Keith Teare: No, because I think you can admire the ability to get things done even when it includes lying, callousness, dodgy morality, being two-faced, and being banal or evil. The fact that you move things along and get things done maybe trumps those five points.


28:18 - 28:50
Andrew Keen: Well, I'm not going to get sucked into Trump himself, but the whole point of this conversation with Wells, as well as the material on Ellsberg, is that Kissinger's and Nixon's indifference to human suffering ultimately cost not just them, of course, but particularly the country, not to mention their victims. So I certainly don't think this in any way legitimizes Sam Altman. Moving along, your startup of the week is an interesting one. I'm less interested in the company than in its implications. Jobright, according to one piece this week, did $5 million ARR, whatever that means, with nine people.


28:50 - 29:13
Andrew Keen: Are we increasingly—and I think this touches on some of the bigger themes here, Keith—are we increasingly coming to a point where these large companies are going to get run not maybe by nine people, but by five or one?


29:13 - 29:33
Keith Teare: Yes, definitely. I mean, SignalRank's only four people and we're going public.


29:33 - 29:43
Andrew Keen: And you're going public. How can four—a four-person company go public?


29:43 - 30:10
Keith Teare: A company, in the abstract sense, is basically capital, labor, revenue, and profit. And if you can have an equation where most of your value is on the capital side, and labor is largely human-in-the-loop to an automation engine—which is what SignalRank is—then revenue and profit can get really big. So that's the definition of a company.


30:10 - 30:45
Andrew Keen: So in other words—and this comes back to all of this; I don't want to get into the economics or even the technology—when it comes to morality, we're going to have these huge companies of the future, massively valuable, run by tiny groups of people. I mean, whatever one says about Amodei and Sam, they have thousands of people working for their companies, but the Anthropics and OpenAIs of the future, Keith, are they going to be run by tiny groups of people, maybe even by a single individual?


30:45 - 30:54
Keith Teare: They probably will be run mainly by agents, as in software agents.


30:54 - 31:02
Andrew Keen: So they're post-human companies—they won't even have one person running them.


31:02 - 31:13
Keith Teare: I think it's inevitable that that will happen. I don't see what would stop it.


31:13 - 31:38
Andrew Keen: Well, that's very provocative. Finally, your "Post of the Week" is by your old friend Om Malik, one of tech's wisest men, which seems to bring everything together in terms of our conversation this week. "The Great AI Game versus AI Theater." What is Om saying that is a good conclusion not just to your newsletter of this week, Keith, but to our show?


31:38 - 31:54
Keith Teare: Well, his starting point is that the Chinese Communist Party produced a five-year plan for AI in China that is published and readable in English.


31:54 - 32:05
Andrew Keen: "No good guys," as you say—whether it's Hegseth, Amodei, or Sam, none of them is producing what the Communist Chinese are doing.


32:05 - 32:32
Keith Teare: And then the second thing is the actual strategy: the Chinese strategy is founded on open-source software being used in pretty much every element of the Chinese economy, as a bottom-up, organic, and viral process. That empowers individuals and small businesses to leverage AI, which is kind of the opposite of how you normally think of China.


32:32 - 32:41
Andrew Keen: So hold on, is Om saying that the Chinese are doing something more democratic, fairer, more moral than the US?


32:41 - 32:51
Keith Teare: Well, he isn't saying China is democratic, but he's saying that approach to technology is inherently more inclusive and bottom-up.


32:51 - 32:54
Andrew Keen: So he is saying China's better than the US.


32:54 - 33:14
Keith Teare: He likes that strategy, yeah.


33:14 - 33:37
Andrew Keen: So is that the answer, Keith? Ultimately, when it comes down to it, when you say there are no good guys, is the good guy in the room going to be Xi, or some American equivalent of Xi, who determines all this?


33:37 - 34:11
Keith Teare: Well, it's starting to be practical to do what the Chinese are doing. Apple this week announced a new MacBook Pro with 128 gigabytes of memory. That is more than enough memory to run a very, very powerful AI model on your laptop, and OpenCloud can run on top of that, so you can have your own agent on your laptop running AI.


34:11 - 34:31
Andrew Keen: We can all be Xi. It's interesting: I ask Keith the question, "Is the Chinese model of AI better than the US, more moral?" and the answer is something about the all-new MacBook. It only costs $5.99. Is that the only cost, Keith, of being the next Xi? $5.99?


34:31 - 34:42
Keith Teare: Sadly, that one isn't powerful enough to do what I just described. You need the...


34:42 - 34:54
Andrew Keen: You need to spend a bit more, maybe the...


34:54 - 35:10
Keith Teare: You need to spend about $5,000. But it's still a lot less than the billions of dollars that OpenAI are having to spend.


35:10 - 35:43
Andrew Keen: There you have it. If you've got $5,000 lying around, you can become Xi and become a dictator. Keith, as always, a pleasure, and we will talk again next week. Another very tech-centric news week is to come, I'm sure. So we'll talk next week. Thank you.


35:43 - 35:45
Keith Teare: All right, thanks everyone.


35:45 - 36:23
Andrew Keen: Hi, this is Andrew again. Thank you so much for listening to or watching the show. If you enjoyed it, please subscribe. We're on Substack, YouTube, Apple, Spotify—all the platforms. And I'd be very curious as to your comments as well on what you think of the show, how it can be improved, and the kind of guests that you would enjoy hearing or listening to in the future. Thank you again.