lol shut the fuck up NYT
So much cope. Everyone is in denial. It strikes deep, I know.
Ignoring the truth won’t help you deal with it though.
As in: you agree with the article’s assertion?
I don’t write code anymore. No one I work with does either.
Same here. None of my coworkers write code.
But then again, I’m a truck driver.
I work with a lot of productive software developers. They all write code. They also leverage AI to do all the boring stuff.
They’re wasting time, because the AI can also do the non-boring stuff.
Trust me… it’s getting better, but it can’t do the non-boring stuff in a way that’s ISO compliant. Yet.
Tell it what ISO compliant is
Sure, for basic projects if you don’t value security or efficiency.
Or long term maintainability.
Removed by mod
maybe you and your colleagues shouldn’t be writing code…?
you have chosen a valid solution to the problem. congratulations.
I agree. No one should be writing code anymore. Management will soon fire everyone who still writes code because they’re slow. And everyone resistant to AI will also be fired.
lol. good luck, man.
I’m not a coder, so I can’t speak to the quality of code generated by these models. I am a lawyer, and every time I see stuff that lay people think is impressive in my field, I can’t help but guffaw and think, “None of this is going to function, and no one will know for years. We’re so fucked…and then one day we’ll have to clean all this up, and it’s gonna be so much work.” I kind of assume it’ll be similar for code? Like…it’ll obviously be somewhat better, because there is a lot of testing you can actually do, whereas in law “testing” takes many years…and by the time you find out something doesn’t work, the burden of having done it wrong all this time, thinking it was right, is catastrophic (which is why lawyers are so conservative about language that they “know works”).
I can see how little features can get added and these tools can deliver on those projects fast…but like…can they do bigger things with consistency? Can they, like…set things up well? I’m not saying it’s impossible, but…I guess I’m thinking about Go. It took a long time for neural networks to get good at 19 x 19. They got good at 9 x 9 pretty fast. But as the game gets more complicated, it’s way, WAY harder to do good long-term strategy. And the machines got there, no doubt. But the entire universe of Go is a 19 x 19 grid, on which the spaces are black or white or empty. How much more complicated is a language? Even a programming language? Infinitely more complex, of course!
So I worry that we’re going to have individual features that work well, but systems that cannot function…looking like the, uhhh…Weasley house in Harry Potter, but without the magic to hold it up lol.
I kind of assume it’ll be similar for code?
Yes.
Like…it’ll obviously be somewhat better because there is a lot of testing you can actually do
If your code can cost someone their life savings or get them maimed or killed, there’s even more testing to do when using an LLM, since there’s no demonstrable basis for why the code it recommends is written the way it is.
I’ve been coding for a very long time. Now I’m mainly in software tech management, but I still code (proofs of concept, new visualizations, that sort of thing). In the field I’m in, we’ve put in a lot of effort to assess the value of large language models (LLMs) to assist in our coding. We’re in a highly technical field. Because our use cases are not common, and some of our requirements are extreme, there are no good code examples to train an LLM on. Consequently, we have found that the LLM’s recommendations in those cases are worthless time-wasting crap.
If you’re doing something in a well-known language, in a well-known framework, with non-safety-critical requirements and with volumes, response times and reliability within moderate bounds, the training set will be much bigger and you’ll probably have better luck with LLMs. But that means you could also just do a web search or look on something like StackOverflow.
We do have active machine learning (ML) efforts underway, and some of those look very promising for certain tricky problems within our domain. But ML is a whole different kettle of fish than LLMs.
Your observations on Go are regarding the size of the state space of the game, which is 19 times 19 times a few more dimensions and constraints that reflect the allowable state combinations and the transition rules from one position to the next. The 4th or 5th power of something (to be conservative) gets big really damn fast. Some problems are intrinsically intractable, and AI won’t help with those, though quantum computing might in at least some cases.
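To put a rough number on that state space: here’s the naive count, where each point on the board is independently black, white, or empty. This is only an upper bound (most of these states are illegal and unreachable under the rules), but the order of magnitude makes the point:

```python
# Naive upper bound on Go board states: each of the 19 x 19 points
# is black, white, or empty. Most of these states are illegal, so the
# real number of reachable positions is smaller, but the scale holds.
positions = 3 ** (19 * 19)
print(len(str(positions)))  # 173 decimal digits
```

A 173-digit number of board states, versus the effectively unbounded space of possible programs, is exactly the gap the commenter is pointing at.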
I think the difference is that the LLMs can read all the context of your project and figure out what will work. If you want to add a feature, it will do so in a way that won’t break other things or offer you options if you can’t make that change without breaking something.
Also, LLMs are super fast compared to humans so even when it’s slightly wrong, it can be fixed with another prompt. People act like the LLM doing something wrong makes using LLMs pointless, but they are ignoring the fact that the LLM can always take another prompt and keep working until it gets it right, which is usually immediately once the issue is recognized.
You can even automate the feedback loop by describing the test scenarios and then having it run those tests, see the failures, and fix the code all by itself.
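As a sketch of what “describing the test scenarios” can look like in practice (Python, pytest-style assertions; `slugify` is a hypothetical stand-in for whatever feature you asked the model to build):

```python
# Minimal sketch of the "write tests, let the model iterate" loop.
# slugify is a stand-in implementation the model would produce and revise.

def slugify(title: str) -> str:
    """Lowercase the title, strip punctuation, join words with hyphens."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

# Test scenarios you describe up front; an agent runs these, reads any
# assertion failure, edits slugify, and reruns until everything passes.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation():
    assert slugify("What's new, in 2024?") == "what-s-new-in-2024"

if __name__ == "__main__":
    test_basic()
    test_punctuation()
    print("all tests pass")
```

The point is that the failure messages, not a human, drive the next prompt: the agent runs the file, reads the traceback, patches the implementation, and reruns.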
I get LLMs might not work as well for law at this point, but they do work for coding.
but they do work for coding
Only under certain conditions. See my comment above.
I’ll have to take your word for it! “Figuring out” sounds to me like a higher-order process than a large language model is capable of, but if what they do is as good, then great.
I think I’m just skeptical because of how horrendously bad LLM output is in my field of expertise (despite looking fine to a lay person), so I immediately analogize that to other areas. The output of law and the output of coding are both really about language, and the process by which a lawyer or a coder creates that output is really about language, so I can see how one might think LLMs would be able to recreate what lawyers and coders do. But boy, it doesn’t strike me as remotely plausible that LLMs will ever get there, at least for law. I have no doubt some yet-unimagined technology could get us there, but “next word prediction” just isn’t gonna be it.
The more specialized and less public the knowledge is that’s needed to train an LLM, the worse its output will be. In addition, explainability is an absolute necessity where safety is a concern, but LLMs are not good at explaining how they got their results (because the results are derived from a statistical process, not logical steps originating from first principles). I suspect that explainability and verifiability are also essential in law.
They are, for sure. I mean, in some sense, the explainability is why it’s correct…you might need to explain why it’s correct to a judge one day!
You don’t have to take my word for it. You can get a subscription to Claude for $20 and install the CLI tool. Ask it to start building something basic. Give it something small first, and then expand what you’re asking for in the next request.
https://code.claude.com/docs/en/setup
Claude can also help explain how to set it up if you’re unfamiliar with things like the terminal or git.
I have to take your word for it because I don’t know what good code looks like lol. Again, to compare to what I’m familiar with, you can also ask an LLM to draft you a purchase agreement for shares of a private company, and if you’re not a lawyer it’ll look good…and it’ll be able to sound like it’s explaining to you why it’s good…but it will not be good haha
You use software, though? You don’t even need to look at the code, lol. I’m downloading open source projects and modifying their functionality for my personal use with Claude, and I don’t even know how they work. I don’t even open the code in an editor; I don’t need to know what it looks like.
I was suspicious the whole time, reading your replies. This finally seals it. Troll confirmed. Well played.
Ah, well in that case I won’t take your word for it that it’s good. I’ll take your word for it that it’s working for you, for now… Again, in a legal context, that’s like “I got ChatGPT to write this contract, and it’s working great,” but of course…it won’t be when things go wrong haha!
My coworker is leaving tomorrow, and he vibe coded some Go code. While most of it works by itself, it doesn’t take into account the Kyverno policy that needs to be changed, and it would get replaced while deleting the database backups, so I removed it and rolled back the other changes.
It’s good…and bad. I dunno. I asked ChatGPT to update some basic CRUD functions in client-side JavaScript a couple days ago so that they followed a UML schematic more accurately…and it just took my entire code base, wrapped it in a single class…and that was it.
So then I was like no, here’s some sample classes from the UML and here’s some properties and how these methods map to these functions I wrote before, get it?
And then, yeah, it did the thing I wanted…so…cool? I mean, sure, you can call it a skill issue with prompting, but man, I’ve been coding with this thing for some time now, and sometimes I’m just like, “I miss Stack Overflow, man”…and shit…I never thought I’d ever say that.
Sure, coding was slower, and maybe you didn’t find the thing you needed to fix your problem, but that friction taught you so much and you made friends (and enemies) as you tried to get an answer to your problem. Now we’re all missing out on that and just making the AI sort of kind of not really better.
AI makes you dependent, and I’d never stake my life or wellbeing on it.
The dependency is by design. If what you do can be replicated well by AI, you need to upskill or you’ll starve.
The current state of LLMs is financially and environmentally unsustainable. I’m sure that before long, additional technologies like neurosymbolic AI will prevent hallucinations and improve efficiency. But will they help AI vendors become profitable?
The AI bubble might pop or fizzle, but we’ll see what developers do with their code bases when they don’t have their toys at their disposal anymore.
You know journalism is dead when they use an LLM to write an article about how an industry is supposedly dying because of AI, when that industry is actually growing exponentially because of the new capabilities of AI.
I’m willing to bet NYT has hired more engineers than journalists in the last few months.
deleted by creator