Matt Shumer's viral ~5,000-word essay sparked one of the most heated AI debates of 2026. Here's what he said, what the internet thought, and what it means for you.
Shumer, CEO of OthersideAI, frames AI's current trajectory as a "February 2020 moment" — a seismic shift that only a small fraction of people are paying attention to. Here are his four core arguments.
Recent models like GPT-5.3 Codex and Anthropic's Opus 4.6 are not incremental upgrades — they represent a new paradigm. Shumer claims he is "no longer needed for the actual technical work of my job," as AI can autonomously build and test complex applications from plain English.
AI is now a "general substitute for cognitive work." The disruption already hitting software engineers is a preview for law, finance, medicine, and accounting — not in a decade, but within one to five years, possibly sooner.
OpenAI's own documentation for GPT-5.3 Codex states the model was "instrumental in creating itself." This recursive self-improvement loop creates an exponential acceleration that most people are not accounting for.
Most people's understanding of AI is based on older, free models. The gap between public perception and the reality of what the latest paid models can do is vast — and growing. Casual use is no longer enough.
Beyond the alarm, Shumer offers four practical recommendations for navigating the shift.
Use the best available paid AI models and integrate them into your daily professional workflow. Free tiers don't show you the real picture.
The most critical skill for the future is adaptability. The tools will keep changing; the willingness to learn is the only durable advantage.
Focus on skills that are harder to automate: deep human relationships, physical presence, licensed accountability, and novel problem definition.
For those who are proactive, AI presents an unprecedented opportunity to build, create, and learn at a pace previously impossible for individuals.
Based on analysis of reactions across X.com, Reddit, Substack, and major publications, the essay generated a deeply polarized response. Skepticism dominated, but a significant minority of practitioners validated Shumer's core observations.
Selected reactions from X.com, Reddit, Substack, and major publications.
Yes, the recent progress of these tools absolutely warrants the framing from this piece. My work as a software engineer is unrecognizable from what it was a year or even six months ago.
I've been building a complex business management platform for the last nine months and the difference in even the last month is unbelievable. Issues that would have taken hours now get easily one-shotted.
It's insane, because I didn't expect this. I felt that there was an inflection point when GPT-5.3-Codex came out. I tried it and was like, Oh my God, this is not just a step better, this is massively better.
I read the thing. It is AI hype disguised as an insider spilling the beans on an impending jobs apocalypse. These AI companies are in a bubble and they need doom and gloom pessimism to get investor money.
It's a masterpiece of hype, written in the style of old direct marketing campaigns. He gives no actual data to support his claims. The picture he sells just isn't realistic.
Man With Vested Interest In AI Adoption Tells Us It's Now Or Never For Those Who Haven't Adopted AI.
Shumer's claims are "weaponized hype" that "stumbles on the facts." He misrepresents benchmark results from METR and ignores persistent AI problems like hallucinations and unreliability.
Every finance bro in the AI space was sharing this in the past 12 hours. It's doomer hype wrapped around a paragraph about how most people aren't seeing this because they're using the free version of these tools.
Where he is right: The "skill floor" is rising. Being "average" at coding or analysis is no longer a viable career path. Where he is likely wrong: The timeline. Societal change is stickier and slower than software updates.
It's a "twitter essay designed to one-shot bosses everywhere." It's less aimed at people already deep in AI-assisted development and more at executives who need convincing to invest in AI tooling.
I don't think just because the AI can do something means it's going to immediately proliferate across the economy. There are so many structural things — regulations, standards, people's comfort — that mean for certain industries it's going to take more time.
The COVID analogy is doing a lot of heavy lifting and is a bit manipulative as a rhetorical device. It's designed to make you feel like if you're skeptical, you're the person who didn't take the pandemic seriously early enough.
The debate surrounding Shumer's essay highlights the deep divisions in how we perceive and are preparing for the impact of AI. While the essay's dramatic tone and anecdotal evidence have drawn valid criticism, it has undeniably succeeded in bringing a critical conversation to a mainstream audience.
The truth likely lies somewhere between the two extremes. The rapid improvement in AI capabilities is real, and it is already transforming certain industries, particularly software development. However, the timeline and extent of its impact on the broader economy remain highly uncertain. Social, regulatory, and economic factors will play a significant role in how these technologies are adopted and the ultimate consequences for the workforce.
Ultimately, the most prudent takeaway is one that both Shumer and his more measured critics seem to agree on: ignoring the advancements in AI is a mistake. A proactive, curious, and critical approach to understanding and experimenting with these powerful new tools is the most rational path forward.
Note: Shumer confirmed he used AI as a collaborative writing tool to help structure and refine the essay — which he argues is itself a demonstration of his core point.