• 2 Posts
  • 748 Comments
Joined 3 years ago
Cake day: June 7th, 2023



  • Google processes over 5.9 trillion searches per year

    That number has nothing to do with the problem. They don’t need to review every search; they need to review every advertising link they have been paid to place (not every link they index). Presumably, they already have the infrastructure in place to track those links and verify compliance in areas like CSAM and copyright, where they actually bear some accountability. The number of paid advertisement links will be far smaller than that 5.9 trillion.


  • Actually, that’s the start of a solution.

    I’ve personally implemented something similar in the past. At one site we had an issue with people browsing porn on their office PCs. Some folks got pretty creative in getting around the blocks we had in place. However, we had full packet capture at the firewall, so all of the evidence was there. I set up a system which pulled images above a certain size out of those packet captures and ran them through an open-source, machine-learning-based image classifier. Anything above a certain score was flagged for human review; everything else was ignored. It wasn’t perfect (I looked at quite a few images of sand dunes), but it did 90% of the work. And sure, some false negatives likely got through. But it let us run down the worst offenders.
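    The pipeline boils down to two cutoffs: a minimum image size and a classifier score threshold. A minimal sketch of that triage step, assuming the images have already been carved out of the packet captures and scored — the 50 KB and 0.8 values are illustrative, not what was actually used:

```python
# Hedged sketch of the triage stage. It works on (name, size_bytes, score)
# tuples so the classifier itself stays out of the picture -- plug in
# whatever open-source model you actually use to produce the scores.
MIN_SIZE = 50 * 1024      # ignore icons/thumbnails below 50 KB (assumed cutoff)
FLAG_THRESHOLD = 0.8      # classifier scores above this go to human review

def triage(images):
    """images: iterable of (name, size_bytes, score) -> names needing review."""
    return [name for name, size, score in images
            if size >= MIN_SIZE and score >= FLAG_THRESHOLD]
```

    Everything the filter drops is never seen by a human, which is the whole point: the classifier does 90% of the work and people only look at the residue.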

    Right now, Google seems to be ignoring the problem and has no incentive to do anything about it. Google is directly profiting from those malvertising links and so should bear some responsibility for ensuring that they are not serving malware to users. We can certainly work out the fine details around their duty of care and how they can meet it (e.g. LLM scanning with human review), but holding our collective dicks with both hands and claiming “nothing can be done” because it would cost Google money is a bad answer.



  • It actually seems like a good place for an LLM. One of the security tools I work with uses an LLM to scan emails for malicious links and things like business email compromise and phishing. It’s actually pretty good. It seems like Google et al. could use something similar to catch some of the more obvious malvertising links. But, since they don’t have any accountability, they have no incentive. The only way to build that incentive is to start hitting them in the pocketbook. Letting them ignore the problem isn’t working.
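    A rough sketch of what that could look like: hand an ad’s display text and landing URL to a model and ask for a verdict. The `ask_llm` stub, prompt wording, and labels are all assumptions — wire it to whatever provider you actually use:

```python
# Hedged sketch: ask an LLM whether an ad link looks like malvertising.
# `ask_llm` is a stub -- replace it with a real model call.
def ask_llm(prompt: str) -> str:
    """Stub for the real model call."""
    return "benign"

def classify_ad(display_text: str, destination: str, llm=ask_llm) -> str:
    prompt = (
        "An ad shows this text to users:\n"
        f"  {display_text}\n"
        f"but clicking it goes to: {destination}\n"
        "Answer 'malicious' if this looks like brand impersonation or "
        "malvertising, otherwise 'benign'."
    )
    verdict = llm(prompt).strip().lower()
    # Anything the model can't answer cleanly falls through to a human.
    return verdict if verdict in {"malicious", "benign"} else "review"
```

    The “review” fallback matters: as with the image classifier, the model only has to handle the obvious cases and route the ambiguous ones to a person.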


  • And yet, they still serve malicious ads before the actual search results. One such ad just ruined a user’s day by tricking them into running malicious code. You’d think their AI could figure out when an ad link is impersonating a legitimate site and not serve the malicious ad. But, since they aren’t held responsible for serving malicious links, they have a negative incentive to fix the problem.
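    Even without an LLM, there are cheap heuristics for the impersonation case: compare the ad’s landing domain against a list of protected brand domains and flag near-misses. A sketch using stdlib string similarity — the brand list and the 0.8 cutoff are made-up illustrative values:

```python
# Hedged sketch: flag landing domains that closely resemble, but are not,
# a known brand's domain (typosquats and similar lookalikes).
from difflib import SequenceMatcher

# Illustrative list; in practice a brand-protection feed or the
# advertisers' own verified domains would populate this.
KNOWN_BRANDS = {"notepad-plus-plus.org", "mozilla.org", "python.org"}

def is_impersonation(landing_domain: str) -> bool:
    """True if the domain looks like a near-copy of a protected brand."""
    landing = landing_domain.lower()
    if landing in KNOWN_BRANDS:
        return False                      # the genuine site
    return any(SequenceMatcher(None, landing, brand).ratio() > 0.8
               for brand in KNOWN_BRANDS)
```

    This wouldn’t catch everything, but a single-character typosquat of a popular download site is exactly the kind of thing it trips on — and Google already runs far more sophisticated matching than this for ad targeting.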







  • The real miracle in the Bible is that Joseph didn’t fuck for his entire marriage and was ok with that.

    According to Christian mythology, Jesus had several brothers and sisters by Mary and Joseph. So no miracle there. One just has to wonder if they waited until after Jesus was born to start fucking.


  • He was one of the early authors of the Christian church and wrote several books of the official Christian mythology. In the Christian Bible, the letters to the Romans, Corinthians, Galatians, Thessalonians and Philippians are all believed to have been written by him. There are several other books (also letters to various congregations) which are attributed to him, but there is some debate about their actual authorship.

    So, he’s kinda the OG Paul when it comes to Christian mythology.




  • I regularly use Copilot to search Microsoft documentation for me. E.g. I needed to find a particular interface in Entra and couldn’t remember where it was. So, I asked Copilot and it got me to the right spot. I’ve thought about asking it about Microsoft licensing, but I figure that might result in Copilot becoming self-aware enough to kill itself.

    I also use a number of AI agents built into the cybersecurity tools I use on a daily basis. Generally stuff along the lines of “find all the cases related to this system/IP/user/etc” type queries. It’s also good for questions like “how do I tune this alert,” so I don’t have to remember whatever bullshit process this vendor put together for tuning false positives. Our primary SIEM/SOAR tool has an AI which does initial triage and investigation work, and it’s not terrible. It struggles with correlations for more complex events, usually highlighting events which have no bearing on the event in question. But it often provides a good first pass and a description our first-line analysts can use to start a real investigation.

    AI is a tool. And like a lot of tools, it has its benefits and limitations. The problem is we’re still figuring all those out, and the people marketing these tools don’t want to admit to the limitations; they over-sell the benefits, then blame the user when those benefits don’t materialize. Given how much modern economies are based on information and knowledge, I do expect AI to have some lasting impact. But I also expect that we’ll adapt, and in a generation or two it will just be another way of getting things done.