Let’s Just Say It Out Loud: AI Is Not Dangerous
“Let’s just say it out loud,” Keith Teare, publisher of the That Was the Week newsletter, says. “AI is not dangerous.”
Not all of you will agree. I’m certainly not so sure. But the gruff Yorkshireman is convinced that AI can only benefit humanity. For him, with his scientific faith in historical progress, today’s AI revolution is a glorious combination of the Enlightenment and the Industrial Revolution. The only danger, he warns, is the belief in danger itself. Thus his criticism of Anthropic’s Dario Amodei, who has been quite explicit about AI’s dangers, and for whom the doom narrative is, in Keith’s reading at least, a business strategy designed to solicit governmental backing without governmental control.
AI Is Not Dangerous. Repeat it. Take your ideological medicine. As if you’re in a Silicon Valley seminary. Sing it out loud. As if you’re in a Methodist choir. Believe it now?
Five Takeaways
• The Economist’s “Lowlife” Moment: Keith’s editorial was triggered by The Economist’s forty-five-minute video on the five men running AI — the title alone, “How to Control the Men Who Control AI,” was enough. Why would The Economist think it could control them? And why focus on the personalities rather than the technology, the applications, or the actual human impact? Judging the AI industry by its CEOs is like judging a film by the leading actor’s personality rather than the script or the performances. It’s the wrong focus — and in Keith’s view, a low one for a publication that should know better. The cult of personality is a media creation, feeding on controversy because controversy sells subscriptions.
• AI Is Not Dangerous. Full Stop. Keith’s boldest claim: AI is not dangerous — not a little, not potentially, not in the wrong hands. The doom narrative is a media-driven frenzy, fed by CEOs who give it too much airtime and by a ready-made audience of Americans whose well-founded economic pessimism makes them receptive to negative messages. The Stanford AI Index Report shows that America is the country where AI is trusted least — paradoxically, also the country where the media has the greatest influence. In China, people trust AI more, not because the government tells them to, but because economic progress gives them reasons for optimism. You get what you pay for.
• Amodei’s Pitch Disguised as Science: Keith’s reading of Dario Amodei’s doom narrative: it is a business strategy. The message — AI might kill us all, AI might make us all unemployed — is not a scientific assessment. It’s a pitch for Anthropic specifically: if AI is this dangerous, you can’t let anyone else control it, so trust us and give us government backing without government oversight. Contrast with Demis Hassabis, who acknowledges risk and then immediately explains what he’s doing about it — taking responsibility rather than pointing the finger. And contrast with Zuckerberg, whom Keith describes as sociopathic: “whatever serves my interest is gonna come out of my mouth at any given moment.”
• Consensus Capital and the Winner-Take-All Endgame: Keith’s post of the week: 75% of all venture capital raised goes to five funds, and 75% of all VC investment goes into five companies. Noah Smith’s piece on winner-take-all AI makes the same point from a different angle: linear extrapolation suggests that two, maybe five, companies end up with all the money and power. This is what capitalism does — many car companies became a handful, many banks became a handful. AI will produce the same centralisation, but at unprecedented scale and across every domain simultaneously. The question — how does society benefit? — is the most important question of the era. Altman and Musk at least try to answer it. The others don’t.
• Manifest Agency. Lean In. Keith’s advice to young people who distrust AI: get involved and shape it, because the alternative is to be a victim of whatever outcome arrives without you. AI is valid and inevitable. The question is what influence you have over it, and the answer is: more than you think, but only if you exercise it. Musk and Altman, for all their faults, are two people who do care — and who talk about UBI and universal high income because they understand that the winner-take-all endgame raises genuine questions about distribution. The Sophie Haigney argument — that all the worst people want to be high-agency — has it backwards. A world without agency is a world where elected officials are accountable to no one.
About the Guest
Keith Teare is a British-American entrepreneur, investor, and the publisher of the That Was the Week newsletter — a daily curation of the most important stories at the intersection of technology, business, and culture. He is a co-founder of TechCrunch and a long-time interlocutor on Keen On America.
References:
• That Was the Week newsletter by Keith Teare — this week’s editorial: “The Cult of Personality.”
• “How to Control the Men Who Control AI,” The Economist, April 2026. The video that triggered Keith’s editorial.
• “I Don’t Think Sam Altman Lies,” by Stewart Alsop — the piece that started the conversation.
• John Thornhill, “AI Has an Awful Image Problem,” Financial Times, April 2026.
• Noah Smith, “What If a Few AI Companies End Up with All the Money and Power?” — the winner-take-all argument.
• Episode 2873: Agency, Agency, Agency — Sophie Haigney on the A-word that Keith takes issue with this week.
About Keen On America
Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States — hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,900 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.