Beware skill atrophy
By “skill atrophy” I mean “skill rot”, “becoming rusty”, “dulling your blade” or any of the other colorful metaphors people use. It’s analogous to a runner getting out of practice and losing speed, or a litigator who hasn’t seen the courtroom in a long time and doesn’t have quite the same edge.
Cell Phone Contacts
A more banal comparison, one that those of us of a certain age might recognize:
Those of us who grew up without cellphones probably used to have many phone numbers memorized. Since cellphones (“dumb” phones included!) have contact lists that reference numbers by name instead of digits, our brains have no reason to memorize new phone numbers.
I still remember most phone numbers from my childhood, but only have a handful of phone numbers memorized now (close family members).
In development, it might look like forgetting syntax more frequently, or finding it harder to refactor, or, at its most severe, executive dysfunction when confronting a programming problem.
Self-confidence and critical thinking in the age of “AI”
A 2025 Carnegie Mellon study, The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers, found the following:
Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.
The study is careful to acknowledge:
Our analysis does not establish causation. However, based on our evidence, it is possible that fostering workers’ domain expertise and associated self-confidence may result in improved critical thinking when using GenAI. Task confidence significantly influences how users engage with AI tools, particularly in the context of human-AI “collaboration” (notwithstanding objections to that term).
LLM-driven development is still a very novel concept, so we really don’t have any longitudinal studies objectively showing skill atrophy.
Opportunities for growth
Personally, I enjoy programming. The depth of knowledge I have about my particular slice of the coding world is something that is consistently useful at my job, especially with bug hunting and resolution. I would be disappointed to lose that edge or become reliant on a third party service to be able to perform my job.
I have yet to be convinced that using an LLM to generate a solution will save me a significant amount of time once you consider the “total cost of ownership” of that solution – factoring in review time, corrections, and implementation time. The process feels more jarring and stunted than approaching the problem conventionally.
Given that, I see every opportunity to program as an opportunity for me to hone my skills a bit more, to become a bit sharper, and to learn new things.
Over-reliance on an LLM undermines your value-add as an employee
This applies more for juniors, but seniors should beware too:
I see a lot of organizations with “LLM-required” mandates from management. I see this as a huge red flag that those developer jobs are in danger of being reduced or replaced. Whatever acceleration management touts, the real danger is this: if your job becomes purely prompt engineering, you are being forced into potential skill atrophy, and your role can be filled by someone who can be paid less to do the same work.
“AI”-driven Layoffs
Recently, Amazon laid off 14,000 corporate (read “white collar”) employees. It included employees in “Amazon’s cloud, grocery, video games, human resources, sustainability and communications, ads and devices”. Management at Amazon very much believes humans can be replaced by LLMs, and is acting on that belief.
Fortune magazine quotes Jason Schloetzer:
“it’s not so much AI directly taking jobs, but AI’s appetite for cash that might be taking jobs”
The article notes these large corporations and their planned / executed layoffs:
| Company | Layoffs | % of workforce | Source |
|---|---|---|---|
| Amazon | 14,000 | 4% | AP News |
| UPS | 34,000 | - | AP News |
| Target | 1,800 | 8% | AP News |
| Nestlé | 16,000 | - | AP News |
| Lufthansa Group | 4,000 | - | AP News |
| Novo Nordisk | 9,000 | 11% | AP News |
| ConocoPhillips | 2,600-3,250 | 20-25% | AP News |
| Intel | ~22,000 | 15% | AP News |
| Microsoft | 15,000 | - | AP News |
| Procter & Gamble | 7,000 | 6% | AP News |
Note that not all of these cuts are strictly due to “AI”, but this was cited expressly as a factor by more than a few of them.
LLM-driven Skill Caps
If you’re a junior and all of your development is based on generated code, then the limitation of your capabilities is capped at what you are able to produce with an LLM. You will speed out of the gate in a vehicle that has rocket-fast acceleration but a lower overall max speed.
The corollary to this, of course, is that a developer who chooses NOT to use LLMs to drive their development will have a higher ceiling, though it may take longer to reach it. Your metaphorical vehicle has slower acceleration but ultimately a much higher top speed. By choosing the slower route, you will ultimately be a stronger developer and ostensibly more competitive in the marketplace.
Market Recovery
I sincerely believe that when the hype finally dies down a bit, as it did for Blockchain, there will be a demand for skilled developers again.
The abilities of LLMs are impressive, but they are predominantly better at creation than maintenance: creation can ostensibly be done de novo, whereas maintenance requires synthesizing existing context. As systems grow larger and more complex, that context eventually becomes too large or too costly to process, even with RAG (Retrieval-Augmented Generation).
This may or may not be partly behind the ravenous appetite the various “AI” companies have for expansion and growth – to make good on the bets being placed by corporate America, there will need to be a massive expansion of computing power.
Inevitable Enshittification of LLMs
If you are a heavy user of LLMs, or are considering it, I would like you to consider:
- How much are you spending per month on LLM services?
- How effective would you be if they were suddenly unavailable?
- How much of a cost increase could you realistically absorb before it became untenable? How much would be a hardship?
Enshittification
In Enshittification, Cory Doctorow’s recent book expanding on the term he coined in 2022, he proposes 3 stages that every online platform goes through:
1. The platform is good to its users
2. The platform is bad to its users, good to its business customers
3. The platform is bad to its users, bad to its business customers, good only to itself
The progression occurs as a platform reaches critical mass for user lock-in (enough people are established as users that they cannot easily walk away). Consider, for example, the friction you would feel from walking away from using any particular online platform.
We are somewhere between #1 and #2.
When ChatGPT first came out, there was a rapid expansion of users finding different use-cases for it. The future seemed bright and exciting. But we are also seeing LLMs cited as the driving factor behind layoffs, and the job market drying up for junior devs as entry-level roles disappear in favor of “AI” – a move AWS CEO Matt Garman has called the dumbest thing he’s ever heard.
Mark my words: as soon as the AI companies feel they have enough users and businesses dependent on their services, they will jack up prices, significantly.
Every single AI company is still operating at a loss (overall) despite raking in massive amounts of revenue. OpenAI and Oracle have signed a 5-year, $300 Billion cloud deal. For a company operating at a loss, committing to pay out $300 Billion (whether covered by future revenues or investment rounds) means they have a plan for something.
Additionally, they are planning on spending $1 Trillion over the next decade to expand their computing infrastructure.
By comparison, they are currently claiming 800 million regular users, with 5% (40 million) contributing to their $13 Billion of annualized revenue.
Covering the shortfall
Hypothetically speaking, let’s just say that OpenAI alone wants to cover its $60 Billion in annual payments to Oracle. Against $13 Billion of revenue, that leaves a $47 Billion annual shortfall. Spread across ALL 800 million users (even the 95% on the free tier!), that works out to about $59 per user per year – roughly $4.90 per user per month.

Applying that monthly increase to their current plans:

| Tier | Current price | Adjusted price |
|---|---|---|
| Free | $0 | ~$4.90 |
| Plus | $20 | ~$24.90 |
| Pro | $200 | ~$204.90 |
Granted, there are probably other ways they could redistribute that burden, perhaps by introducing a new $20 tier between Free and Plus and converting a substantial portion of free users to it. But one way or another, the company needs to either extract an extra $47 Billion from its 800 million users, find more users while also raising prices, or start screwing its business customers over too.
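The back-of-the-envelope numbers above can be checked in a few lines of Python. This is a sketch using the article’s round figures; the “paying subscribers only” split at the end is my own illustrative extension, using the claimed ~5% paid conversion.

```python
# Sanity-check the shortfall arithmetic (round figures from the article).
oracle_annual = 60e9         # $300B Oracle deal spread over 5 years
revenue = 13e9               # claimed annualized revenue
users = 800e6                # claimed regular users
paying_users = 0.05 * users  # ~5% paid conversion -> 40 million subscribers

shortfall = oracle_annual - revenue              # $47B per year
per_user_year = shortfall / users                # the "$59" figure
per_user_month = per_user_year / 12              # spread over everyone, monthly
per_payer_month = shortfall / paying_users / 12  # if only subscribers absorb it

print(f"Annual shortfall:          ${shortfall / 1e9:.0f}B")
print(f"Per user, per year:        ${per_user_year:.2f}")
print(f"Per user, per month:       ${per_user_month:.2f}")
print(f"Per paying user, monthly:  ${per_payer_month:.2f}")
```

Running it shows the shortfall is about $58.75 per user per year across the whole user base, but closer to $98 per subscriber per month if only the paying 5% are expected to cover it.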
Enshittification points towards some combination of all of those.
Dependency
This brings me back to my original point:
If you’re currently using the Free tier, could you walk away? Could you absorb roughly $59 per year in new fees?
If you’re using Plus or Pro, the math is harsher: if only the ~5% of users who pay were to absorb that $47 Billion shortfall, it would come to roughly $98 per subscriber per month – nearly a 500% increase for Plus and ~50% for Pro. Could you do your job if you had to walk away?
The cynic in me thinks that the companies demanding their employees use LLMs at their jobs are trying to get users dependent on these services, fattening them up for the inevitable slaughter.
It then becomes an act of resistance to continue to be a strong developer in the absence of these generative tools.