I love the detail of how he drinks his wine.
?udm=14. Spread the word.
It’s a URL parameter that forces Google to only show search results, like it used to do. No AI crap, no “widgets”, no bullsh*t.
You can configure most browsers to include this in searches by default, effectively making Google decent again.
Holy hell, really? That’s so useful.
Yep. There’s TONS of nerd tricks for this sort of stuff that would make people’s lives way easier, but are somehow reserved for geeks and nerds (like how you can actually make Windows decent by saying you’re in the EU and using O&O ShutUp10++, PowerToys and that kind of stuff).
Hello, so, does it matter where I type this? Before or after the words I’m searching for?
You can look up URL parameter structure online, but the short version is that you use
? if it is the first parameter (no other ? in the URL), or & otherwise. So you can use:
https://google.com/?udm=14 and then search for something, or
https://google.com/?q=question&udm=14 by adding it with an & after an existing search.
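The rule above (? for the first parameter, & for every parameter after it) can be sketched with Python's standard urllib, which handles the distinction automatically:

```python
from urllib.parse import urlsplit, urlunsplit

def add_udm_14(url: str) -> str:
    """Append udm=14 to a URL: ? if there is no query string yet, & otherwise."""
    parts = urlsplit(url)
    query = parts.query + "&udm=14" if parts.query else "udm=14"
    return urlunsplit(parts._replace(query=query))

print(add_udm_14("https://google.com/"))             # https://google.com/?udm=14
print(add_udm_14("https://google.com/?q=question"))  # https://google.com/?q=question&udm=14
```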
I recommend adding a custom search engine to your browser with this baked in. It’s incredibly easy in Firefox and its derivatives, with a ton of tutorials online.
I see, thank you!
And yet, they still serve malicious ads before the actual search results. Just ruined a user’s day over such an ad tricking them into running malicious code. You’d think their AI could figure out when an ad link is impersonating a legitimate site and not serve the malicious ad. But, since they aren’t held responsible for serving malicious links, they have a negative incentive to fix the problem.
There’s basically no way for free for-profit search to function if it’s held accountable for serving malicious links, it’d be so cost prohibitive that they’d never make profit. The only way that could possibly work is if searching was a subscription service so their user base paid for the cost of vetting links, or if it was a public utility and the whole of society paid for the cost of vetting links.
It actually seems like a good place for an LLM. One of the security tools I work with uses an LLM to scan emails for malicious links and things like Business Email Compromise and phishing. It’s actually pretty good. It seems like Google et al. could use something similar to catch some of the more obvious malvertising links. But, since they don’t have any accountability, they have no incentive. The only way to build that incentive is to start hitting them in the pocketbook. Letting them ignore the problem isn’t working.
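One cheap, non-LLM check that catches the impersonation case mentioned earlier (an ad whose visible URL points at a different site than its real destination) can be automated without any machine learning at all. A minimal sketch, assuming the ad platform exposes both the display URL and the actual click-through URL; real malvertising detection also has to handle redirect chains and lookalike domains, which this ignores:

```python
from urllib.parse import urlsplit

def hostname(url: str) -> str:
    """Extract a normalized hostname (lowercase, no leading www.)."""
    return (urlsplit(url).hostname or "").lower().removeprefix("www.")

def looks_like_impersonation(display_url: str, destination_url: str) -> bool:
    """Flag ads whose shown URL names a different site than the real target."""
    shown, actual = hostname(display_url), hostname(destination_url)
    return bool(shown) and bool(actual) and shown != actual

print(looks_like_impersonation("https://www.notepadplusplus.org",
                               "https://evil-download.example"))  # True
```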
An LLM might just lie and say that the link is malicious, or not malicious, and you’d never know. That’s kind of a problem.
Actually, that’s the start of a solution.
I’ve personally implemented something similar to this in the past. At one site we had an issue with people browsing porn on their office PCs. Some folks got pretty creative in getting around the blocks we had in place. However, we had full packet capture at the firewall; so, all of the evidence was there. I set up a system which pulled images above a certain size out of those packet captures and passed them through an open source image classifier which used a model based on machine learning. Anything above a certain threshold was flagged for human review, everything else was ignored. It wasn’t perfect, I looked at quite a few images of sand dunes, but it did 90% of the work. And sure, some false negatives likely got through. But, it let us run down the worst offenders.
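The pattern the comment describes (size filter, classifier score, flag-above-threshold for human review) can be sketched in a few lines. Everything here is illustrative: the threshold and size cutoff are made-up numbers, and the score callable stands in for whatever real image classifier you plug in.

```python
REVIEW_THRESHOLD = 0.8   # assumed cutoff; tune against your false-positive rate
MIN_SIZE_BYTES = 50_000  # skip thumbnails and icons below this size

def triage(images, score):
    """Return sources whose large images score above the review threshold.

    images: iterable of (source, image_bytes) pairs pulled from packet capture
    score:  callable mapping image bytes to a 0..1 'probably porn' score
            (a stand-in for the open-source classifier in the comment)
    """
    flagged = []
    for source, data in images:
        if len(data) < MIN_SIZE_BYTES:
            continue  # too small to be worth classifying
        if score(data) >= REVIEW_THRESHOLD:
            flagged.append(source)  # queue for human review
    return flagged
```

Everything below the threshold is simply dropped, which is why the comment notes that false negatives got through: that is the deliberate trade-off for keeping the human review queue small.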
Right now, Google seems to be ignoring the problem and has no incentive to do anything about it. Google is directly profiting from those malvertising links and so should bear some responsibility for ensuring that they are not serving malware to users. We can certainly work out the fine details around their duty of care and how they can meet it (e.g. LLM scanning with human review), but holding our collective dicks with both hands and claiming “nothing can be done” because it would cost Google money is a bad answer.
flagged for human review,
And there we go. Google processes over 5.9 trillion searches per year; if even 0.01% of those were flagged for human review, the cost burden would be so huge the system would collapse.
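Spelling that back-of-the-envelope estimate out, using the figures quoted in the comment:

```python
searches_per_year = 5.9e12   # figure quoted in the comment above
flag_rate = 0.0001           # 0.01% flagged for human review
flagged = searches_per_year * flag_rate
print(f"{flagged:,.0f} items per year for human review")  # 590,000,000
```

At even 30 seconds per item, 590 million reviews works out to on the order of a few thousand full-time reviewer-years annually.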
A small-scale internal solution for a single office does not scale to the entire internet.
Google processes over 5.9 trillion searches per year
That number has nothing to do with the problem. They don’t need to review every search, they need to review every advertising link they have been paid to place (not every link indexed). Presumably, they already have the infrastructure in place to track those links and verify that they comply with laws covering CSAM, copyright, and other areas where they actually have some accountability. The number of paid advertisement links will be far smaller than that 5.9 trillion number.
So they need to review every website? That’s not as daunting, there’s only 1.1 billion websites with only about 17% (roughly 193 million) being actively maintained and updated. Compared to the number of searches it’s certainly much smaller, but that’s still a huge dataset that has to be reviewed.
Face it, this is not a simple thing that can just be solved by throwing AI at it. The only way search could exist in this environment is if it was subscription based or a public utility.
For the record, I favor search being a public utility. Nationalize Google.
Well that’s not true at all. If there were other search engines that were legitimate competition, there would be incentive to fix the problems that they face. That was the case over a decade ago, if you recall.
But then Google got a monopoly, and that meant they stopped caring, and it also meant all of the scammers started gaming their system. In other words, failure to enforce antitrust legislation created this situation and starting to enforce it would solve the problem.
There’s legitimate competition on search, it’s just that people are locked in to Google’s network of other products (gmail, youtube, maps, etc). Notably, all those things also operate at a loss, Google subsidizes them with its more profitable search and ad business to keep people locked in to the monopoly. If search became less profitable their business model would collapse.
2nd panel doesn’t make much sense. Google was always an “AI” mixing ads into search results. There weren’t any humans. This is one algorithm (LLM) replacing another (PageRank plus logic code).
I believe the implication is that all humans at the company have been laid off. And PageRank was not a neural network to my knowledge, but I also don’t work at Google, so I won’t say it’s impossible that they have been lying about that for some reason. At some point they started adding adverts and sorting results using a neural network, which one could argue likely works similarly to the input of an LLM.
The implication of the second panel, however, is that the content of the pages displayed as search results is being summarized by AI while incorporating advertising, making it impossible to separate the advert from the rest.
Oh sure the rest of the panels make sense but the second one is LLM ai vs traditional ai.
PageRank was not neural network to my knowledge
AI is more than neural net code. Chess programs were already considered AI in the 1960s. PageRank and the code around it automated away Yahoo’s human-curated lists.






