Large language models make programming boring

LLMs are a powerful tool, but at what cost?

Published 25/06/2025


The Good

LLMs have a reputation as a terrible research tool, but used correctly they can be one of the best tools anyone can have. Asking ChatGPT "I want to write a web API in <language>, what available frameworks or libraries exist, or should I write my own?" is a great introduction to learning a new language. With some extra prompting, it can generate a few examples using that language's libraries, and you can then do further research on your own to get a good idea of what you're getting into.

Using an LLM to quickly fix a problem can work well too. Sometimes I don't particularly want to research some tiny, niche problem, and just want to move on to the more important parts of the project. Getting an AI to give me rough code that solves the issue is extremely convenient.

The Bad

Using AI as an overpowered autocomplete (in Cursor or similar text editors) removes the joy of programming for me. So does having an LLM solve problems of moderate complexity and above.

I enjoy typing and I enjoy solving problems, even if that means I take a bit longer to implement some functionality. When an LLM autocompletes for me, it pauses my thinking and shifts my focus to whether I should hit tab to apply the suggestion. When I'm writing genuinely important code, why would I want anything to break my train of thought?

Most importantly though, I don't want to depend on an LLM to do my job.

LLMs are not only fallible in the code they generate, but also depend on expensive infrastructure. Do I want to offshore my expertise to something that can be shut down at any point in time? The ability to write code from my own brain is something I don't want to lose by getting used to AIs doing it for me.