I have been stuck. Every time I sit down to write a blog post, code a feature, or start a project, I come to the same realization: in the context of AI, what I’m doing is a waste of time. It’s horrifying. The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will. All of my original thoughts feel like early drafts of better, more complete thoughts that simply haven’t yet formed inside an LLM.
I used to write prolifically. I’d have ideas, write them down, massage them slowly and carefully into cohesive pieces of work over time, and then–when they were ready–share them with the world. I’d obsess for hours before sharing anything, working through the strengths and weaknesses of my thinking. Early in my career, that process brought a lot of external validation. And because I think when I write, and writing is how I form opinions and work through holes in my arguments, my writing would lead to more and better thoughts over time. Thinking is compounding–the more you think, the better your thoughts become.
But now, when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea, I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought. Minimal organic thinking required. This has had a dramatic and profound effect on my brain. My thinking systems have atrophied, and I can feel it–I can sense my slightly diminishing intuition, cleverness, and rigor. And because AI can so easily flesh out ideas, I feel less inclined to share my thoughts–no matter how developed.
I thought I was using AI in an incredibly positive and healthy way, as a bicycle for my mind and a way to vastly increase my thinking capacity. But LLMs are insidious–using them to explore ideas feels like work, but it’s not real work. Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show. Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.
The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. The output from AI answers questions. It teaches me facts. But it doesn’t really help me know anything new.
While using AI feels like a superhuman brain augmentation, when I look back on the past couple of years and think about how I explore new thoughts and ideas today, it looks a lot like sedation instead.
And I’m still stuck. But at least I’m here, writing this, and conveying my raw thoughts directly into your brain. And that means something, I think, even though an AI could probably have written this post far more quickly, eloquently, and concisely. It’s horrifying.
This post was written entirely by a human, with no assistance from AI. (Other than spell- and grammar-checking.)