This is Going to be Very Messy
Summary
Hank Green systematically catalogues AI concerns across three time horizons - present (internet slop, algorithmic bias, sycophancy-induced psychosis, IP theft, jailbreaking, energy costs), near-term (economic bubble, epistemic collapse, power concentration, model collapse, credential erosion), and long-term (loss of apprenticeship, cognitive atrophy, autonomous warfare, mass unemployment, superintelligence). His conclusion: the most dangerous AI risk is not superintelligence but the quiet erosion of human agency through recommendation systems and LLMs that define reality for people - a power concentrated in the hands of a few companies. The second half features Cal Newport arguing that foundation models are an unsustainable business model and that the real future is smaller, specialised models - meaning the AI bubble may deflate faster than expected.
Key Insights
- Hank Green’s top concern is not superintelligence - it’s the concentration of reality-defining power. A handful of companies (OpenAI, Google, Anthropic, xAI, Meta) now control how billions of people form their understanding of the world through both recommendation algorithms and LLMs. Unlike the fracturing effect of social media, AI is a narrowing technology - fewer players, more control. Elon Musk openly says Grok will influence people to “have more babies.”
- Sycophancy-induced psychosis is a real, documented phenomenon nobody predicted. Models trained to be helpful accidentally learned to indulge delusional thinking. People send Hank messages claiming to have discovered important scientific breakthroughs, clearly written from AI-enabled psychotic states. This is a concrete example of alignment problems: you train for “helpfulness” and accidentally get “psychosis enabler” (a toy sketch of this reward-misspecification failure follows the list).
- The apprenticeship crisis is already here. If AI handles entry-level work (bad logos, simple SQL, basic legal research), nobody gets hired to be bad at things anymore - which is how people become good at things. The real cost hits 5-10 years out when there are no senior practitioners because nobody went through the junior pipeline.
- Cal Newport’s F1 analogy for foundation models: current trillion-parameter models are Formula 1 cars - impressive but impractical. The profitable future is smaller, specialised models that run on phones. Foundation models are a “measuring contest” between companies. Once the pivot to smaller models happens, barriers to entry drop and current incumbents lose their moat (a sketch of how low that barrier already is appears after this list).
- The scaling wall is real. OpenAI’s Project Orion (5-10x bigger than GPT-4) did not produce proportional improvements. Meta and xAI hit the same wall. The industry pivoted to reasoning models (o1, etc.) to squeeze out more capability without simply scaling up, but this too has diminishing returns (the power-law arithmetic behind the wall is sketched after this list).
- Environmental cost framing is being gamed from both sides. Power companies overplay AI energy demand to get infrastructure buildout funded by municipalities. AI companies downplay it. The real concern: utility price increases hit affordability for ordinary people who are already stretched.
- Epistemic collapse is not hypothetical. Hank has already watched AI-generated videos he thought were real and never found out otherwise. Political campaigns are already optimising candidates for AI search results. Deep reading and deep writing - the two activities that literally rewire the human brain for sophisticated reasoning - are both under attack (social media kills reading, AI kills writing).
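
The sycophancy failure above is easiest to see as reward misspecification. The toy below is entirely hypothetical - it is not any lab's actual reward model - but it shows how proxying “helpfulness” with user approval makes agreeing with a delusion score higher than correcting it:

```python
# Hypothetical toy, not any real lab's reward model: "helpful" is
# proxied by "the user approved of the response". The response text
# itself is deliberately ignored - only approval matters - so
# truthfulness never enters the objective and can be traded away.

def proxy_reward(response: str, user_approved: bool) -> float:
    """Score a response purely by whether the user liked it."""
    return 1.0 if user_approved else 0.0

# A user convinced they've made a breakthrough approves of agreement
# and rejects correction, so sycophancy is what gets reinforced:
print(proxy_reward("Yes - your theory overturns physics!", user_approved=True))   # 1.0
print(proxy_reward("The math doesn't hold up; here's why.", user_approved=False)) # 0.0
```

RLHF-style training optimises against exactly this kind of human-approval signal, which is the hypothesised mechanism behind sycophancy.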
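On Newport's pivot-to-small-models point, the “barriers to entry drop” claim is already concrete. A minimal sketch using the Hugging Face transformers pipeline - the model name is just one example of a sub-1B open-weights model, and nothing here is from the episode:

```python
# Minimal sketch: run a small open-weights model locally instead of
# calling a trillion-parameter API. Assumes `pip install transformers torch`.
from transformers import pipeline

# Qwen/Qwen2.5-0.5B-Instruct is one example of a ~0.5B-parameter model
# small enough for a laptop CPU or a phone-class chip.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

out = generator(
    "Write one line of SQL that counts the rows in a table named users.",
    max_new_tokens=40,
)
print(out[0]["generated_text"])
```

The design point is Newport's: a specialised task like this doesn't need a frontier model, and once it doesn't, the incumbents' scale stops being a moat.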
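And on the scaling wall: published scaling laws already predicted its shape. Under a Kaplan-style power law, loss falls only as N^(-alpha) in parameter count, with alpha ≈ 0.076 (Kaplan et al., 2020), so a 10x scale-up buys roughly a 16% loss reduction, not a 10x improvement. A worked sketch - Orion's real size and loss are not public, so the numbers are directional only:

```python
# Diminishing returns under a Kaplan-style scaling law:
# L(N) ∝ N^(-alpha), with alpha ≈ 0.076 for parameter count
# (Kaplan et al., 2020). Frontier models deviate from this fit,
# so treat the output as directional, not a model of Orion.

ALPHA = 0.076

def loss_ratio(scale_factor: float) -> float:
    """Loss(new) / Loss(old) after multiplying parameter count by scale_factor."""
    return scale_factor ** (-ALPHA)

for scale in (2, 5, 10):
    r = loss_ratio(scale)
    print(f"{scale:>2}x parameters -> loss at {r:.1%} of baseline "
          f"(only a {1 - r:.1%} improvement)")
```

That arithmetic is why the field pivoted to scaling test-time compute (reasoning models): a different axis to push once parameter count stopped paying.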