When we were young, we were taught that shortcuts were cheating.
We learned to associate them with a moral wrong that would always come back to bite you. They rendered you vulnerable to the shame of outright penalty, or the guilt of undeserved victory.
In most cases, taking a shortcut meant cheating yourself out of an intangible reward that could only be earned through hard work and perseverance. We grew up resisting such temptations, knowing that it was unethical to give in, and that we would only be hurting ourselves if we did.
We’re now living in a world where shortcuts have become our greatest tools, where those intangible rewards are devalued, and where hard work and thoughtful effort feel increasingly obsolete.
ChatGPT, among other large language models, is a shortcut to all tasks imaginable.
It is a shortcut to reasoning through complex hypotheticals, to conducting research, to producing vast quantities of written material, to breaking down difficult tasks into manageable ones. And it’s not just a shortcut that skips a short portion of the track. It takes you from any point on the track all the way to the finish line.
I see two fundamental functions for GPT — one of which is sure to enrich our lives, the other of which could easily go either way.
The first one is education.
A hundred years ago, if you wanted to learn something very specific, say everything there is to know about eggs — why they’re that shape, what the anatomy is like, how the shell hardens, what the timeline is for a chicken to be born, what an amniotic sac does, et cetera — you would be at the combined mercy of (a) whether that information was available to you, in the form of a teacher or book, and (b) whether that person or book could explain it to you in a way that you understood.
Twenty years ago, if you wanted to learn something new, you’d go online, to articles, blog posts and YouTube videos, where, although the information was likely available, you still might not fully follow it.
Now, if you want to learn something new, you have a personalized teacher, to whom you can be honest about what you don’t understand along the way and who can frame things in exactly the way that you’ll understand them.
The GPT classroom is so effective because ego is out of play. Where before a lack of knowledge would be an obstacle in and of itself, by virtue of it being embarrassing, that barrier is totally stripped away. With the lack of audience comes an inherent comfort, and thus a predisposition to honesty and vulnerability — both critical to a productive education.
There is no incentive to lie because there is no fear of judgment. At 25, you can say,
“I don’t know anything about the Balkans. I don’t know where they are. I don’t know what countries they are. I don’t know what kind of people live there. I don’t know anything about the geopolitics, the history, what kind of food they eat, anything. Can you give me a comprehensive lesson that gets me up-to-date?”
And suddenly, you enter into a wonderful, lucid conversation with a masterful instructor who does not hold your doe-eyed ignorance against you, but who is happy to fill in the gaps.
I’ve always said that the people who impress me most in the world are those who understand the most complex things at their highest levels — the geniuses of their fields, who could go toe-to-toe with any other genius — but who can also explain those things to people who aren’t familiar with them.
Talking to GPT is like talking to one of those people. It has established itself alongside the rarest, most impactful category of person — that of the lay-speaking genius, a ripe tree from which all fruit hangs low.
Now, of course, there’s value in trying to parse ambiguities and figure things out for yourself. And you have to think that the smartest people in the world came to their realizations not by having them spoon-fed, but by going out and understanding them from the raw materials available.
But with GPT, it doesn’t feel like spoon-feeding when you’re the one grabbing the spoon and guiding it into your own mouth. “What does [that] mean?” “Go back.” “What’s the difference between [this] and [that]?”
Human curiosity, like a gas, expands to fill the space it’s given. The fact that we can ask anything and get a proper answer encourages us to indulge in our more eccentric curiosities.
Where before, it wasn’t even worth wondering why the little egg veins form in those lightning shapes, now we can entertain such a question. We entertain it precisely because we don’t need to talk to the right scientist, or spend hours online sifting through related articles about egg anatomy, to find the answer. The answer to that hyper-specific question will be spelled out to us, in plain English, mere milliseconds after we finish writing it.
The other fundamental function of GPT, one that carries more negative externalities than I feel most people account for, is production — the expansion of nascent ideas and creation of deliverable content.
Software engineers, for example, leverage GPT to write code at rates unimaginable in the 2010s. And in the last year or so, a new, laid-back approach to engineering called “vibe coding” has gone mainstream, whereby engineers will explain to GPT what they want to create and GPT will spit out code for them. When the code produces errors, the engineer simply copies and pastes the error message into GPT, and GPT responds with a fix. It sounds, and is, great.
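The loop described above — generate code, run it, paste the error back, receive a fix — can be sketched in a few lines. This is a minimal, hypothetical illustration: `ask_model` stands in for a real call to a language model and is stubbed here with canned responses so the loop actually runs.

```python
# A minimal sketch of the vibe-coding loop.
# `ask_model` is a hypothetical stand-in for a language-model call,
# stubbed with canned responses: a buggy first attempt, then a "fix".
import subprocess
import sys
import tempfile

def ask_model(prompt, _canned=iter([
    'print(1 / 0)',                       # first attempt: buggy code
    'print("no more division by zero")',  # "fix" after seeing the error
])):
    return next(_canned)

def run(code):
    """Execute the code in a subprocess; return (succeeded, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

# The engineer explains what they want; the model spits out code.
code = ask_model("Write a script that prints a number.")
ok, err = run(code)
while not ok:
    # The code errored; paste the error message back and ask for a fix.
    code = ask_model(f"My code failed with this error:\n{err}\nPlease fix it.")
    ok, err = run(code)
```

The point of the sketch is how little the engineer touches: the code and the error message simply shuttle back and forth until the program runs.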
The flip side, of course, is that it fundamentally makes you a worse engineer.
When you rely on GPT to build something for you, you have no ownership of the interior and no firm grasp of why things work the way they do. If GPT ever gets stumped, you’re more or less screwed. You can try to dive deep into the architecture, but there’s no guarantee you’ll find your way to the solution — either because you’re out of practice, or because you’re too far removed from the context.
And this flip side isn’t specific to software. Basic tasks, across the board, feel increasingly laborious once you’ve had GPT help you do them once or twice.
What you don’t realize in the moment is that the more you ask it to do those basic things, the more daunting they feel. You get used to asking for instructions and following them precisely, as opposed to trusting your intuition and correcting your path based on natural mistakes.
You start to think in brackets like [add 2-3 examples here] and [propose a counter-argument here], your sentences decomposing into mere blueprints of sentences. Such is the natural consequence of relying on a tool that automates a task for you — though it does raise concern when the task being automated is the very thinking that makes your output meaningful.
Imagine you’re a high school student writing a paper on a certain chapter from, say, Emma. You have an idea of what you want to write about, and so you give GPT the context of your assignment and a short summary of your thesis, which it, of course, will understand and be ready to expand upon.
You hit Enter and watch it produce these perfect, eloquent, incisive paragraphs faster than you can even read them. It’s astounding how thoughtful it is, how it develops every nuance you hinted at in your lowercase, punctuation-free word vomit.
By sheer comparison to the effort it would’ve taken you to come up with the same output, you can’t help but think, “God, that would’ve been so unbelievably hard to do myself.” And as a result, you become less self-reliant and less confident in your own productive capacity.
The mind is a muscle, and as it acclimates to a new standard of half-thinking, it atrophies. Little by little, the burden of producing and developing thought — once entirely human — shifts away from the mind and onto the model. Eventually, the mind arrives at a point where its only job is to get the ball rolling, before handing everything off to the model for research and development.
Keep in mind, it is entirely our decision what we do with this technology. It is our decision how to integrate it into our creative processes and into our work, whether we want it to handle just the low-level minutiae or also provide counsel on the higher-level decisions.
There’s no single right way to use GPT, but it is important to remember that whatever you delegate to it, your mind doesn’t get to do. And you don’t want to starve your mind of things to do.
It’s a strange inversion of school rules — as kids, we are punished for taking shortcuts, and as adults, we are encouraged to. And yet, the same consequences persist. We do not become stronger writers or thinkers — just better supervisors and assistants. Our productivity spikes, but we are not the ones producing.
The line between healthy use and harmful overreliance comes down to who is doing the work of the sentient being — the thinking, deliberating, creating, pausing, and reconsidering.
When we set our minds to auto-complete, we lose fundamental pieces of ourselves — a distinctive voice, a controversial thought, a peculiar question that falls outside the mean. This battle, of whether we prioritize GPT or our own internal engines, will ultimately decide whether this technology enriches our lives or takes something away from them.