

Yeah, the ignorant way they cope with the truth does come close to malice.


I agree that 0-days can’t really be counted. There are so many layers at which tech can be exploited that this is a difficult claim to make.
On the other hand, there are two different kinds of exploits: clear holes in the logic, i.e. a situation or code path the coder never considered, and the much harder to catch, extremely creative ways to make a program do things it was never designed to do.
I have never seen LLMs do anything genuinely creative, so I doubt they would catch the second category. But sure, they can be helpful for catching some logic holes.
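To make the first category concrete, here’s a toy sketch (hypothetical checkout code, all names invented for illustration) of the kind of logic hole a review, whether human or LLM-assisted, can plausibly catch:

```python
def total_price(unit_price: int, quantity: int) -> int:
    # BUG: negative quantities were never considered, so a crafted
    # request yields a negative total, i.e. a free "refund".
    return unit_price * quantity


def total_price_checked(unit_price: int, quantity: int) -> int:
    # The fix is a plain input check on the forgotten code path.
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return unit_price * quantity
```

The second category wouldn’t look like this at all; it’s more like chaining several individually harmless behaviors into something the designers never imagined.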


The job market is actually pretty bad right now, and with all the recent layoffs in tech it’s very saturated. Unionizing would make more sense.


I know it’s not an excuse, but I doubt they all know about and knowingly support all of this. Plenty of people are utterly un- or misinformed about what’s going on in the world or inside the US. There was a video recently where they interviewed beachgoers about the Iran war; they barely knew where Iran was, let alone that there was a war.


Can’t believe it kind of recovered early last summer. Wasn’t that when he was flip-flopping on tariffs? The tariffs the government now needs to pay back, after businesses already raised prices to compensate for them?


But we have to call this what it is: an internal policy failure, where they abandoned proven processes for maintaining code quality.
I guess I’m lucky my managers haven’t put that pressure on me yet. I do, however, see developers getting sloppier and lazier, so reviews actually take more effort, and AI rarely catches all the problems with a change.


At least in my experience, these models are pretty good now at writing code that follows best practices. If you ask for impractical things, they start taking ugly shortcuts or workarounds. A good eye catches these, and you either rerun with a refined prompt, fix your own design, or just keep telling it how you want it fixed.
You still have to know what good code looks like to write it, but the models can help a lot.


Or rather the right to use shovels under ToS that can be changed on a whim.


I really don’t get this quantity-first approach. If you wanted to actually transform the world with tech in a way that isn’t just superficial, you’d create task forces that sit together with specialists in each field (medicine, construction, logistics, finance, etc.), give them two years to build prototypes and action plans, then bet on the N most promising applications, spin them off as separate companies with premium access to your most advanced AI models, and vertically integrate them into their workflows.
That would actually achieve a sustainable foothold in these industries and disrupt and transform them long term.


He’s the majority shareholder, or has some trick to ensure he can never be dethroned.


I have acquaintances at Meta, and they literally waste tokens on bullshit tasks. They have something like 10 agents running simultaneously on some elaborate task that takes a long time. You can’t tell me that’s more productive or efficient than doing actual work, even if half of those tasks are somewhat useful and related to your project.


Just a doctor.


Any society that cares about its weakest members would ostracize such scum. Instead we celebrate them on TV and even make them our representatives. It really shows what society values: people who can make that singular number go up, the faster the better, regardless of the sacrifice.


This shit makes dystopian cyberpunk stories look like cute fairy tales I could read to my toddlers. An alternative has to be worked toward, in all our interest.


It sounds like they were measuring chatbot use rather than a deeper integration into their systems. That may not be the best use of LLMs.


LLMs are the only thing that’s actually hyped. The other models and applications already existed back when ChatGPT first hit the public, and they haven’t had any special breakthrough that would explain exponential growth in investment or in demand for compute. Language models had that with the transformer architecture; everything else just develops iteratively.
The bubble we see now is driven by language models. We can try to conflate it with other deep models and call it all AI, but that doesn’t change the fact that the generative models are the only ones requiring these resources, and they’re still a solution looking for a problem.


The best counterargument is cultural export. We don’t see it with Chinese, nor with African French; if anything, with Japanese, Spanish, or Korean. But for the Asian languages the learning curve is much steeper and the utility lower.


It works the same way in Switzerland. Some places may have a local minimum wage, but that’s rare, and at the national level there’s nothing. However, the unions aren’t strong in all sectors, so some jobs really do pay shit.


That’s too snarky/cynical a thought. Waging wars, destroying infrastructure, etc. all carry huge environmental costs. The US leaving the Paris accords, blocking clean-energy adoption, and cancelling projects, along with Trump’s anti-renewable propaganda, have a net negative effect, even if an oil crisis and an economic recession do reduce carbon emissions as well.
Biden’s IRA had more positive potential.


Because you can’t dismiss 30% of a population. They need to at least partially be taken along, simply because they’re too many to declare war on.
Let’s declare war on a more manageable percentage, and definitely without compromising core values. So we have to pierce the bubble of the misinformed, but defeat the ones who misinform out of malice and self-interest.