There's an arms race happening right now. Trillions of dollars are pouring into making AI more capable. The progress is relentless. Every few months, something that seemed impossible becomes a demo, and then a product. The result is billions of people having conversations with a machine that sounds like the smartest person they've ever talked to. But there's a question nobody is asking, and it might be the most important question of the decade: What's happening to the humans on the other side of the conversation?

The Illusion of Expertise

You asked AI something last week. Maybe it was a technical question, or a request to explain a concept you'd been meaning to understand. The answer was thorough, articulate, clearer than what twenty minutes of digging on your own would have produced. You used it. You moved on feeling a little more competent, a little more capable.

Now here's the part that should bother you: could you explain what it said to someone right now? Not the gist. The actual reasoning. The nuance. Could you reconstruct the argument from memory? Could you defend it if someone pushed back?

For most of us, the honest answer is no, not really. You have a vague sense of it. You remember the shape of it, maybe a keyword or two. But the substance, the reasoning, the specific insight that made it a great answer, is gone. It lived in the conversation, stayed there, and was never yours.

This is the illusion of expertise. AI gives you the experience of understanding without the actual understanding. The feeling of learning without the retention. The confidence of knowledge without the underlying competence. You walk away from an AI conversation feeling like you became smarter, but you didn't. You consumed. And AI makes it incredibly easy to confuse the two, because the output is so polished, so complete that your brain registers it as "handled" and moves on.

There's something almost paradoxical about it. The better the AI's explanation, the less likely you are to actually learn from it. A confusing response forces you to ask follow-ups, challenge it, rephrase it in your own words — to struggle. A perfect explanation just... lands. You nod. You move on. The perfection of the AI's response is precisely what prevents the learning from happening.

There's a reason learning has always been effortful. The effort is the learning. The struggle to articulate a concept, the friction of working through a problem step by step, the slow process of building a mental model through trial and error. That's how knowledge moves from "I read about this" to "I understand this" to "I can apply this in a new situation." AI gives you the answer without building the internal architecture that would let you reconstruct it, extend it, or recognize when it's wrong.

This is deeper than the "Google effect," the tendency to forget information you know you can look up. With AI, you're not just outsourcing the facts. You're outsourcing the reasoning. The framework. The thinking itself. And you can't look up thinking the way you can look up a fact. What's happening underneath is a slow, quiet erosion — cognitive atrophy. And the fact that it feels good is what makes it so dangerous. This isn't a problem that announces itself with pain. It announces itself with comfort.

The Pattern

It's not just you. It's happening at scale, and once you see the pattern, you can't unsee it.

Developers are leaning on AI-generated code more heavily with each passing month. Not copy-pasting blindly, but accepting suggestions faster, interrogating them less, and gradually losing the habit of reasoning through implementation choices from scratch. The code works. But the deep understanding of why it works, the kind that matters when something breaks in an unexpected way, is thinner than it used to be. I've heard from engineering managers who say they can no longer tell from code review alone whether a junior developer actually understands what they've written. The code quality looks fine — better than fine, actually. But when you sit the person down and ask them to walk through the logic, there's nothing there. The illusion of competence, written in clean syntax.

Professionals across every industry are forwarding AI-written analysis they haven't fully interrogated: strategy memos, market research, competitive assessments. They've skimmed it. Maybe they checked that nothing looked obviously wrong. Made a few tweaks: a word here, a paragraph there. And sent it. But they haven't stress-tested the reasoning. They're pattern-matching for plausibility, not actually thinking. And because the output is so fluent, so confident, so well-structured, it passes. It gets accepted. As long as it gets the job done.

Students are "learning" concepts by asking AI to explain them, reading the explanation, feeling the little dopamine hit of understanding, and then moving on. A day later, it's gone. Not because the explanation was bad (it was probably excellent) but because reading an explanation is not the same as learning. Learning is struggle. Learning is getting it wrong and figuring out why. Learning requires friction. And AI, like most tech products, is specifically designed to eliminate it. Deliver the results you want, fast. That's the product. That's the pitch. Frictionless answers. Frictionless code. Frictionless analysis. But what if some of that friction was the point? What if the struggle was where the learning actually lived?

Here's a question worth sitting with: if you removed AI from your workflow tomorrow, what could you still produce, create, analyze, build? What skills have you maintained, and what have you quietly let atrophy because the AI handles it? The answer to that question is your actual competence. Everything else is borrowed.

The pattern is simple and it's accelerating: as AI gets more capable, humans are getting more passive. Not dumber. This isn't about intelligence. It's about engagement. The difference between actively thinking and passively receiving. And this isn't a prediction. It's a trend that has already started.

The Incentives

Perhaps the big AI companies will do something about this. I wouldn't count on it. The entire incentive structure of the AI industry ensures it will get worse.

Every major AI investment is an AI-side improvement. Persistent memory so it remembers your preferences. Agents that take actions on your behalf. Longer context windows so it can process entire codebases and document libraries. Making the AI smarter, more autonomous, more capable of doing the thinking for you. Each of these investments, in isolation, is genuinely useful. I'm not anti-AI. These tools are incredible, and I use them every day. But the cumulative direction is unmistakable: the industry is building a world where humans need to think less.

What nobody is investing in, at least not with anywhere near the same urgency, is the human side. Retention. Comprehension. Skill development. Critical thinking. The ability to actually grow from your interactions with AI rather than just consume them.

This isn't an accident or an oversight. It's the logical outcome of how the business model works. AI companies make money when you use AI more. Subscriptions, API calls, usage metrics. More queries. More reliance. More delegation. That's the revenue model. They have zero incentive to say "hey, you've asked this same question three times this month, maybe you should try working through it yourself." The business model is built on your passivity. Your dependence is their growth metric.

I don't think this is malicious. Nobody at these companies is plotting to make humanity dumber. But incentives are more powerful than intentions. The system is not designed to make you sharper. It's designed to make itself more essential. And those are very different things.

It's like the food industry. Nobody at Kraft or General Mills set out to make people unhealthy. They set out to make food people would buy. But the incentives led somewhere predictable. Optimize for taste, for craveability, for repeat purchase, and you end up with outcomes that are terrible for human health. And it took decades for anyone to seriously address the gap between what the industry was optimized for and what humans actually needed. We're in the early stages of the same dynamic with AI. The product is optimized for one thing. The human needs something else. And the gap is growing every day.

The Loop

And here's the part that actually scares me: it's a feedback loop. It compounds.

It starts with passivity. AI gives great answers, so you stop struggling with problems. Why would you, when the AI can solve them in seconds? Totally reasonable. But passivity leads to atrophy. The skills you're not using fade. Critical analysis, problem decomposition, creative synthesis, even just the habit of wrestling with hard problems. Not overnight, but steadily. A surgeon who stops operating loses their edge. A mathematician who stops doing proofs loses their fluency. A writer who stops writing loses their voice. And a knowledge worker who stops thinking loses everything that made them a knowledge worker in the first place.

Then atrophy leads to something really dangerous: the inability to catch when AI is wrong. AI makes mistakes. It hallucinates. It confuses correlation with causation. It optimizes for plausibility over truth. It confidently asserts fabricated facts. Catching these errors requires exactly the kind of deep, active engagement that passive AI usage is eroding. The very thing you need to catch the mistakes is the thing that's disappearing. And when you can't catch the mistakes, you trust the AI even more, because what else are you going to do? Deeper dependence accelerates the atrophy. The loop tightens. The endgame is a spiral.

But this isn't just a competence issue. It's an identity issue. It's a meaning issue. So much of what gives human life texture and satisfaction is the experience of getting good at things. The slow accumulation of skill, the moment something clicks, the pride of doing something difficult. What happens to that when the default for every hard problem is "ask the AI"? This isn't just about productivity. It's about what kind of humans we become.

The Next Frontier

The next frontier for the human race isn't making AI smarter. We've got that covered. Thousands of brilliant people are on it, and they're succeeding spectacularly. The next frontier, the one almost nobody is working on, the one without billions in funding or armies of engineers, is making sure human intelligence doesn't quietly collapse in the shadow of artificial intelligence, and instead harnessing AI to push us beyond what we thought we were capable of.

This is the problem I want to work on. I don't have all the answers. I'm not sure anyone fully understands the shape of this problem yet, because it's evolving as fast as the AI systems themselves. But I'm convinced, deep in my bones convinced, that this is an important problem to solve for humanity, one with the potential to advance human cognitive evolution. Someone needs to be asking: how do we design AI interactions that make people more capable over time, not less? How do we build tools that strengthen human thinking instead of replacing it?

For millions of years, human intelligence has been the engine of our survival and progress. Every tool we've ever built, from language to the printing press to the internet, has ultimately expanded what the human mind could do. AI should be the next chapter in that story, not the final one. Not just powerful AI, but powerful humans wielding powerful AI. That's the future worth building toward.

Eugene Wang

Founder
