And yet, they still serve malicious ads before the actual search results. One of those ads just ruined a user’s day by tricking them into running malicious code. You’d think their AI could figure out when an ad link is impersonating a legitimate site and decline to serve it. But since they aren’t held responsible for serving malicious links, they actually have a disincentive to fix the problem.
There’s basically no way for free, for-profit search to function if it’s held accountable for serving malicious links; it’d be so cost-prohibitive that they’d never turn a profit. The only way that could possibly work is if search were a subscription service, so the user base paid for the cost of vetting links, or a public utility, so society as a whole paid for it.
It actually seems like a good place for an LLM. One of the security tools I work with uses an LLM to scan emails for malicious links and for things like business email compromise and phishing, and it’s actually pretty good. It seems like Google et al. could use something similar to catch the more obvious malvertising links. But since they don’t have any accountability, they have no incentive. The only way to build that incentive is to start hitting them in the pocketbook; letting them ignore the problem isn’t working.
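To make that concrete, here’s a minimal sketch of the kind of triage I mean, assuming a hypothetical `ask_llm()` helper in place of a real LLM API; the prompt and the 0.7 threshold are illustrative, not tuned values:

```python
# Rough sketch of LLM triage for paid ad links. Everything here is
# hypothetical: ask_llm() is a stand-in for whatever LLM API is available,
# and the prompt and threshold are illustrative, not tuned.
from dataclasses import dataclass

@dataclass
class AdVerdict:
    url: str
    score: float  # 0.0 = looks clean, 1.0 = near-certain impersonation

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (hosted API, local model, etc.)."""
    raise NotImplementedError

def screen_ad(ad_text: str, landing_url: str) -> AdVerdict:
    # Ask the model whether the ad copy impersonates a known brand while
    # pointing at a domain that brand does not control.
    prompt = (
        "Rate 0 to 1 how likely this ad impersonates a legitimate site.\n"
        f"Ad text: {ad_text}\n"
        f"Landing URL: {landing_url}\n"
        "Reply with only the number."
    )
    return AdVerdict(landing_url, float(ask_llm(prompt)))

def route(verdict: AdVerdict, threshold: float = 0.7) -> str:
    # The cheap automated screen runs on everything; only high scores
    # consume human reviewer time.
    return "human_review" if verdict.score >= threshold else "serve"
```

Note the model only ranks; anything over the threshold still goes in front of a person.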
An LLM might just lie and say that the link is malicious, or not malicious, and you’d never know. That’s kind of a problem.
Actually, that’s the start of a solution.
I’ve personally implemented something similar in the past. At one site we had an issue with people browsing porn on their office PCs, and some folks got pretty creative in getting around the blocks we had in place. However, we had full packet capture at the firewall, so all of the evidence was there. I set up a system which pulled images above a certain size out of those packet captures and passed them through an open-source machine-learning image classifier. Anything above a certain threshold was flagged for human review; everything else was ignored. It wasn’t perfect, I looked at quite a few images of sand dunes, but it did 90% of the work. And sure, some false negatives likely got through. But it let us run down the worst offenders.
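The shape of it was roughly this; a sketch where `extract_images()` and `nsfw_score()` are hypothetical stand-ins for the capture-carving step and the open-source classifier, and both thresholds are made up:

```python
# Sketch of the flag-for-review pipeline. extract_images() and nsfw_score()
# are hypothetical stand-ins for the packet-capture carving step and the
# open-source classifier; the size and score thresholds are made up.
from pathlib import Path
from typing import Iterable, List

MIN_BYTES = 50_000        # skip icons, thumbnails, and tracking pixels
REVIEW_THRESHOLD = 0.8    # classifier confidence above which a human looks

def extract_images(pcap: Path) -> Iterable[bytes]:
    """Placeholder: carve image payloads out of a full packet capture."""
    raise NotImplementedError

def nsfw_score(image: bytes) -> float:
    """Placeholder: any open-source NSFW classifier, returning 0.0-1.0."""
    raise NotImplementedError

def review_queue(pcap: Path) -> List[bytes]:
    flagged = []
    for img in extract_images(pcap):
        if len(img) < MIN_BYTES:
            continue                          # too small to be interesting
        if nsfw_score(img) >= REVIEW_THRESHOLD:
            flagged.append(img)               # only these reach a human
    return flagged
```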
Right now, Google seems to be ignoring the problem and has no incentive to do anything about it. Google is directly profiting from those malvertising links and so should bear some responsibility for ensuring that they are not serving malware to users. We can certainly work out the fine details around their duty of care and how they can meet it (e.g. LLM scanning with human review), but holding our collective dicks with both hands and claiming “nothing can be done” because it would cost Google money is a bad answer.
And there we go. Google processes over 5.9 trillion searches per year; if even 0.01% of those were flagged for human review, that’s roughly 590 million reviews a year, and the cost burden would be so huge the system would collapse.
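Putting numbers on it (the search count is from above; reviewer throughput is an assumption purely for scale):

```python
# Back-of-the-envelope arithmetic behind this objection. The search count
# comes from the comment above; reviewer throughput is an assumption.
searches_per_year = 5.9e12
flag_rate = 0.0001                          # 0.01% of all searches
flagged = searches_per_year * flag_rate     # 590,000,000 reviews/year

reviews_per_reviewer_year = 50_000          # assumed ~200/day over 250 days
reviewers = flagged / reviews_per_reviewer_year
print(f"{flagged:,.0f} reviews/year -> {reviewers:,.0f} full-time reviewers")
# -> 590,000,000 reviews/year -> 11,800 full-time reviewers
```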
A small-scale internal solution for a single office does not scale to the entire internet.
> Google processes over 5.9 trillion searches per year

That number has nothing to do with the problem. They don’t need to review every search; they need to review every advertising link they have been paid to place, not every link they’ve indexed. Presumably they already have infrastructure in place to track those links and verify compliance with the law around CSAM, copyright, and the other areas where they actually do have accountability. The number of paid advertising links will be far smaller than that 5.9 trillion.
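A minimal sketch of that difference, with `vet()` as a hypothetical stand-in for the expensive LLM-plus-human screen; each distinct landing URL is vetted once and cached:

```python
# Sketch of the point above: vet a paid link once at submission time and
# cache the verdict, so review cost scales with distinct ad links rather
# than with searches served. vet() is a hypothetical stand-in for the
# LLM-screen-plus-human-review pipeline discussed earlier in the thread.
verdicts: dict[str, bool] = {}    # landing URL -> cleared to serve?

def vet(landing_url: str) -> bool:
    """Placeholder for the expensive one-time screen."""
    raise NotImplementedError

def may_serve(landing_url: str) -> bool:
    if landing_url not in verdicts:           # pay the review cost once...
        verdicts[landing_url] = vet(landing_url)
    return verdicts[landing_url]              # ...every impression is a lookup
```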
So they need to review every website? That’s not as daunting; there are only 1.1 billion websites, with only about 17% (roughly 193 million) actively maintained and updated. Compared to the number of searches that’s certainly much smaller, but it’s still a huge dataset to review.
Face it, this is not a simple thing that can just be solved by throwing AI at it. The only way search could exist in this environment is if it were subscription-based or a public utility.
For the record, I favor search being a public utility. Nationalize Google.
I’m going to assume you’re just trolling now. I refuse to believe that someone can be this stupid without doing it intentionally. Well done, you got me for a few comments. But I’m done feeding the troll.
Well, that’s not true at all. If there were other search engines offering legitimate competition, there would be an incentive to fix these problems. That was the case over a decade ago, if you recall.
But then Google got a monopoly, which meant they stopped caring, and it also meant all of the scammers started gaming their system. In other words, failure to enforce antitrust legislation created this situation, and starting to enforce it would solve the problem.
There’s legitimate competition in search; it’s just that people are locked into Google’s network of other products (Gmail, YouTube, Maps, etc.). Notably, all of those also operate at a loss: Google subsidizes them with its more profitable search and ad business to keep people locked into the monopoly. If search became less profitable, their business model would collapse.