The Comfortable Plateau
When every domain feels instantly familiar, do we lose the ability to recognize what we don't actually know?

I recently came across an excellent piece titled "AI is a floor-raiser, not a ceiling-raiser." The clarity of the argument and the simple diagrams stuck with me—AI dramatically lifts the baseline performance for everyone, but doesn't necessarily push the upper bounds of what's possible. It's a compelling framework, and I found myself nodding along as I read it.
But then a thought crept in: what if that raised floor becomes a comfortable plateau we never leave?
I've been wondering if the immediate satisfaction of AI-powered work is actually stunting our growth as thinkers and problem-solvers. While AI helps me accomplish tasks faster than ever, I keep asking myself: am I getting better, or just getting by?
I wrote previously about my career journey and discussed the impact of the 10,000-hour rule. One part I did not emphasize enough in that piece was that it's not just the hours—it's what happens during those hours that matters. And what matters most might be the role of discomfort in learning. Real growth has always come from pushing against our boundaries, from that frustrating space where our current abilities fall short. But what happens when AI removes that friction? When it can search and distill any information into a format so familiar and digestible that we never have to struggle?
I'm starting to think that by raising the floor so effectively, AI might be removing the very struggles that push us toward a higher ceiling.
The Evolution of Feedback Loops: From Libraries to AI
I've been thinking about how the feedback loop between question and answer has compressed throughout my life, and each compression felt revolutionary at the time.
In elementary school, curiosity meant a trip to the library. You'd wonder about something on Monday, walk to the library after school, hope they had the right book, and maybe—if you were lucky—find your answer by Wednesday. The feedback loop was measured in days.
I was fortunate to have access to a computer at home early in life, and I distinctly remember having Microsoft Encarta on CD-ROM. This was the next leap forward. Questions that would have taken a library trip could be answered in minutes. The feedback loop shrank by an order of magnitude. Yet you still had to navigate, to browse, to make connections between disparate content—and it wasn't exactly up to date!
Dial-up internet compressed it further. Suddenly there was an explosion of available content from a variety of "experts." Access to up-to-date information was revolutionary, but pages took time to load—and in those loading moments, you were synthesizing what you'd already read. The feedback loop was minutes, but those minutes of waiting became thinking time.
Broadband changed things even further. The speed at which Google could search and list relevant links was astonishing. The feedback loop shrank to seconds. But even then, you still had to synthesize, to read through multiple sources, to piece together understanding from fragments.
And now? AI doesn't just find information—it pre-digests it, contextualizes it, presents it in exactly the format I need. The feedback loop hasn't just compressed; it's effectively disappeared. What's more, I can even choose how long I'm willing to wait—a quick response for immediate needs, or a deeper dive if I'm feeling patient. Having that kind of dial for how deep AI goes on a subject feels unprecedented.
The False Familiarity Trap
There's something unsettling about how quickly AI can make any topic feel approachable. Within minutes, it can break down complex concepts into digestible pieces, complete with analogies tailored to your liking. You walk away feeling informed.
But there's a crucial difference between understanding something and understanding AI's translation of it.
AI doesn't just give us information—it transforms unfamiliar territory into familiar patterns. It's like having a universal translator that makes every foreign language sound like your native tongue. Convenient? Absolutely. But you're not actually learning the language.
What's particularly interesting is how this compression compares to previous ways of gathering information. Traditional learning meant wading through links, reading dense papers, and reconciling conflicting explanations. You'd hit dead ends, realize you were asking the wrong questions, backtrack, and try again. That meandering path—frustrating as it was—built real understanding. You learned not just the answer, but why other answers were wrong.
AI collapses all of that into a smooth, frictionless experience. No dead ends. No confusion. No struggle. Just clarity delivered on demand. And I'm increasingly convinced that this is creating a new kind of technical debt that lives in our minds. Competency debt, if you will.
The question that keeps nagging at me: When every domain feels instantly familiar, do we lose the ability to recognize what we don't actually know?
The Missing Discomfort
There is an important difference between practicing what you know and stepping outside your comfort zone.
Discomfort in learning isn't a bug—it's the feature. It's what forces us to develop intuition, to build mental models from first principles rather than borrowed understanding. You can't develop that kind of deep knowledge without the struggle of genuine confusion and the slow work of finding your way through.
This brings me back to the 10,000-hour rule, but with a crucial asterisk. It's not just that quality matters more than quantity—it's that the quality is defined by how far outside your comfort zone those hours push you. Ten thousand hours of assisted, friction-free work might make you efficient, but will it make you better?
What's particularly concerning is how optional that discomfort has become. Traditional learning forced expansion—you hit a wall, and you had no choice but to scale it. Now AI presents you with a door, and there's an immediate dopamine hit when you walk through it: the satisfaction of "progress," of moving forward, of getting things done. That feeling of productivity can be intoxicating.
The real challenge isn't that AI removes struggle—it's that it makes struggle feel unnecessary. Why spend days understanding the underlying system when AI can guide you to a working solution in minutes? It's a rational choice in the moment, but I'm wondering what the compound effect looks like over a few decades.
The Path Forward: A New Paradox
We're living through a fascinating shift in the age-old comparison of man versus machine. On one hand, AI-augmented productivity is undeniably powerful. The collective hopes are high—and rightfully so—that AI will help us solve humanity's most difficult challenges in energy, medicine, and climate. These aren't pipe dreams; they feel increasingly reachable.
But we are the ones still pushing AI forward. We're the ones asking the questions, setting the parameters, recognizing when we're onto something versus running in circles. And our ability to do that well—to push the boundaries of what's possible rather than just what's easy—depends on the very cognitive muscles that AI makes it so tempting to let atrophy.
The difference between guiding AI toward a breakthrough and having it efficiently solve the wrong problem might come down to whether we've maintained our ability to think deeply, to recognize patterns that don't fit, to know when to push through difficulty rather than route around it.
This isn't anti-AI doom-saying. It's recognition that the tools are only as good as the people wielding them. If we all settle comfortably on that raised floor, who's left to push toward the ceiling? If we lose our capacity for productive struggle, do we also lose our ability to recognize which struggles are worth having?
I'm increasingly convinced that conscious discomfort—deliberately choosing the harder path when the easy one would suffice—isn't just about personal growth anymore. It might be about maintaining the collective capacity to guide these powerful tools toward the breakthroughs we're all hoping for.
A Conscious Choice
The raised floor that AI provides is real, and it's spectacular. But comfort has always been the enemy of growth, and I'm more convinced than ever that we need to be intentional about our discomfort in this new era.
The challenge isn't to reject AI—that ship has sailed, and honestly, why would we want to? The challenge is to remain conscious. To recognize when we're taking the door instead of scaling the wall. To notice when familiarity is false, when productivity isn't progress, when efficiency is making us less capable.
I'm trying to think of this as a practice, like meditation or exercise. Some days I deliberately choose the harder path—reading the dense paper instead of the AI summary, building from scratch instead of prompting my way to a solution, sitting with confusion instead of immediately reaching for clarity. Not always, but enough to keep those muscles from atrophying.
I believe the future breakthroughs—in energy, medicine, climate, and problems we haven't even identified yet—will come from humans who maintained their ability to think deeply while using AI as a tool. They'll come from people who can recognize when AI is running in circles because they've run in those circles themselves. They'll come from those who know the difference between the raised floor and the ceiling because they've pushed against both.
We're at an inflection point. AI will continue to raise the floor, making previously difficult tasks trivial. But who among us will still know how to reach for the ceiling? That's not a question AI can answer for us. That's a choice we have to make, deliberately and repeatedly, in how we approach our work and learning every single day.
I'm curious how others are navigating this balance. How do you maintain your edge while leveraging AI's power? When do you choose struggle over speed?