The “correct” way to use AI for coding (and anything really) is to ask for explanations / tutorials when you can’t find one online, then learn from that.
Never let it do something for you. That’s how you lose. If you’re not actively learning, you’re actively rotting, and that goes for life in general too.
I don’t think that’s a good idea. If you can’t find an explanation online, that means there’s not much info available, in which case the best thing would be to ask on a forum; that way, other people looking for that info will find it.
The main problem here is the software developers who don’t notice their own brain rot.
I mean, they’re not too wrong. We’ll probably be screwed over at some point.
I like how no liability is even spoken about until after something goes wrong.
Developers who are told to use AI whether they like it or not, however, tell a different story.
Well there’s the problem.
I’m a software developer and I say that AI is the greatest force-multiplier introduced into the field since the compiler. I love using it; it handles the most tedious and annoying parts of the process. But there are situations I don’t want to use it in, and of course being forced to use it would give me a more negative opinion of it. Obviously.
There isn’t any credible evidence out there that actually shows LLMs are a “force multiplier.” That is almost certainly just a made-up marketing term for unprofitable chatbot companies.
In this case the evidence is literally first-hand experience. There is nothing that will change my mind on this because it’s my direct personal experience from actual use.
I honestly don’t care what marketing says, and if other people have different experiences then that’s just them. In my personal actual real-world experience I found that they let me get tons more done and their quality of work is perfectly fine as long as you’re using the right tools and giving them the right instructions.
The article says that developers disagree with that in situations where they are “forced” to use AI, and that’s fair: it doesn’t make sense to force a tool onto tasks it’s not good at. Or they might just be using it wrong. I use it whenever it’s better than not using it, and that ends up being quite often in my workflow.
I kind of agree it’s a multiplier. But so far, every time I’ve had it do something, it’s written such an ugly turd that I have to rewrite it all, taking more time than if I’d just solved the problem myself to start with. Maybe someday, but it’s not up to the quality I expect of development.
Have you tried giving it coding standards and other such preferences about how you like your code to be organized? I’ve found that coding agents can be quite adaptable to various styles, you can put stuff like “try to keep functions less than 100 lines long” or “include assertions validating all function inputs” into your coding agent’s general instructions and it’ll follow them.
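For instance, with that second rule the agent ends up writing functions shaped roughly like this (scale_prices is a made-up example just to show the style, not real code):

```python
# A minimal sketch of the "assert all inputs" style rule in practice.
# scale_prices and its contract are hypothetical, just for illustration.
def scale_prices(prices: list[float], factor: float) -> list[float]:
    # Assertions validating all function inputs, per the agent instructions.
    assert isinstance(prices, list), "prices must be a list"
    assert all(p >= 0 for p in prices), "prices must be non-negative"
    assert factor > 0, "factor must be positive"
    return [p * factor for p in prices]
```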
For me, one of the things that’s been a huge fundamental improvement is telling the agent to create and run unit tests for everything. That way, when it does accidentally mess up, it immediately catches the problem and usually fixes it in the same session without further intervention. Unit tests used to be more trouble than they were worth most of the time, now I love them.
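To make that concrete, here’s a sketch of the kind of test the agent generates and runs on its own (pytest assumed; scale_prices is the hypothetical function from the sketch above):

```python
# test_pricing.py - a sketch of tests an agent would write and run against
# the hypothetical scale_prices function (module name "pricing" assumed).
import pytest
from pricing import scale_prices

def test_scale_prices_scales_each_value():
    assert scale_prices([1.0, 2.5], 2.0) == [2.0, 5.0]

def test_scale_prices_rejects_nonpositive_factor():
    # The input assertions double as a checkable contract for the agent.
    with pytest.raises(AssertionError):
        scale_prices([1.0], 0.0)
```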
I’ll say that during a recent week where I was forced to use an LLM, I found Claude Opus to be extremely poor at referencing this guide: https://mywiki.wooledge.org/BashPitfalls
It took almost an hour to get Claude to write a shell script I considered to be of acceptable quality. It completely hallucinated several of the points in that guide, and I had to go read the guide myself to confirm that the model was fabricating information. That same task would have taken me about 5 minutes by hand.
I believe that GIGO applies here. 99% of shell scripts on the internet are unsafe and terrible (looking at you, set -euo pipefail), and Claude is much more likely to generate god-awful garbage because of the inherent bias present in the training data.

And as for unit tests? IMO, anything other than property-based testing is irrelevant. If you’re using something like Pydantic, you can auto-generate a LOT of your tests using the rich type annotations available in that library along with hypothesis. I tend to write a testing framework once, and then special-case property tests for things that fall outside of my models. None of this is super helpful for big, ugly codebases with a lot of inertia around practices, but that hasn’t been my environment, thankfully.
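As a rough illustration of what I mean (assumes pydantic v2 and hypothesis; the Order model and its invariant are made up for the example):

```python
# A minimal sketch of property-based testing driven by Pydantic type
# annotations; the Order model is hypothetical, just for illustration.
from hypothesis import given, strategies as st
from pydantic import BaseModel, Field

class Order(BaseModel):
    quantity: int = Field(ge=1)
    unit_price: float = Field(gt=0)

@given(
    quantity=st.integers(min_value=1, max_value=10_000),
    unit_price=st.floats(min_value=0.01, max_value=1e6, allow_nan=False),
)
def test_total_is_at_least_unit_price(quantity: int, unit_price: float) -> None:
    # Hypothesis explores the input space; Pydantic validates construction.
    order = Order(quantity=quantity, unit_price=unit_price)
    assert order.quantity * order.unit_price >= order.unit_price
```

The point is that the model’s annotations already describe the valid input space, so the strategies mostly write themselves.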
You… just started writing unit tests?
No, I’ve used them plenty before. I just found them to generally be a huge hassle for minimal benefit. They became much more useful in the context of agentic coding, where you want the agent to be able to immediately realize “oh, this change I made causes these specific problems when it’s run.” The hassle is all on the agent, not on me.
I think we do very different development.
Could be. I’m a professional programmer whose usage runs the whole gamut - large applications with hundreds of programmers working on them for years, smaller apps that I make for my own use, and one-off scripts that do some particular task and then generally get thrown away afterwards.
I don’t do unit tests for that last category, of course. I don’t even use coding agents for those, generally speaking - a bit of back-and-forth in a chat interface is usually enough there.
Is this like a who’s-got-a-bigger-portfolio situation? I’m not sure how to respond.
I guess I’ve been developing for decades, including consulting for Page 6 and a stint in R&D at Sony Music. One of my open source contributions was used as part of the backend for one of Obama’s State of the Union addresses. I spend my time these days writing and maintaining multiple software stacks that integrate across multiple platforms.
Unit tests used to be more trouble than they were worth most of the time, now I love them.
Sounds like you were writing bad unit tests and AI showed you how to do it right.
If so, it was project-wide across hundreds of devs.
It lets me focus on the software architecture, not the minutiae. It feels exactly like when I ran a team of brand new interns. They require a lot of hand holding but with the right direction they get good at their jobs very fast.
I think the problem is that, for now at least, it will keep requiring that hand-holding, whereas interns and new programmers need less and less of it and become more independent over time.
I guess prepare for potential kernel rot: https://www.neowin.net/news/linus-torvalds-declares-massive-ai-fueled-code-surges-as-the-new-normal-for-linux/
Perhaps I’ll follow LTS 🫤
Yeah… well, Arch’s LTS kernel is on 6.18, not too bad. I can definitely live with that.