Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

Some AI vendors, including Google, OpenAI and Apple, allow website owners to block the bots they use for data scraping and model training by amending their site’s robots.txt, the text file that tells bots which pages they can access on a website. But, as Cloudflare points out in a post announcing its bot-combating tool, not all AI scrapers respect this.
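For sites that do want to use the opt-out, the mechanism is a few lines in robots.txt listing each vendor's published crawler token (GPTBot for OpenAI, Google-Extended for Google's AI training, Applebot-Extended for Apple). A minimal example, with the caveat the article makes: only compliant crawlers honor it.

```
# robots.txt — opt out of AI-training crawlers (voluntary; honored only by compliant bots)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```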

  • MSids@lemmy.world · 6 months ago

    The core features of a WAF do require SSL offload, which of course means that the traffic is decrypted with your certificate on their edge nodes, inspected, then re-encrypted with your origin certificates. A WAF simply cannot protect against these exploits without breaking the encryption, and WAF vendors can put protections in place for emerging threats much faster than developers can.
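    A toy sketch of why the edge needs plaintext: signature rules can only match a request after TLS has been stripped off. The rules below are made-up illustrations, not Cloudflare's or AWS's actual managed rules.

    ```python
    import re

    # Hypothetical, extremely simplified rule set -- real WAF vendors ship
    # thousands of managed rules and update them as new exploits emerge.
    RULES = [
        re.compile(r"(?i)\bunion\s+select\b"),  # naive SQL-injection signature
        re.compile(r"(?i)<script\b"),           # naive XSS signature
        re.compile(r"\$\{jndi:"),               # Log4Shell-style probe
    ]

    def inspect(decrypted_body: str) -> bool:
        """Return True if the request should be blocked.

        This matching can only happen AFTER SSL offload: the edge must hold a
        certificate for your domain so it can decrypt the request, run the
        rules over the plaintext, then re-encrypt it toward the origin.
        """
        return any(rule.search(decrypted_body) for rule in RULES)

    print(inspect("q=shoes&page=2"))                        # benign -> False
    print(inspect("q=1' UNION SELECT password FROM users")) # blocked -> True
    ```

    Nothing here depends on the vendor; it is just the reason the "unencrypt, inspect, re-encrypt" step is unavoidable for this class of protection.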

    I had never considered that Akamai or Cloudflare would be doing any deeper analytics on our data, as it would open them up to significant liability, same as I know for certain that AWS employees cannot see the data within our buckets.

    As for the captcha prompts, I can’t speak to how those work in Cloudflare, though I do know that the AWS WAF does leave the sensitivity of the captcha prompts entirely up to the website owner. For free versions of CF there might be fewer configurable options.

    • schizo@forum.uncomfortable.business · 6 months ago

      The captcha stuff is customizable, but yeah, you have to pay. The issue is that they have, in the past, shipped breaking changes in their default rules that made huge messes, and a huge portion of their customer base just uses the defaults. They’ve gotten better at this, but again, there’s nothing other than their testing to prevent it in the future.

      Also, based on my experience doing infosec stuff, I can say that there's ABSOLUTELY a huge portion of “admins” who think more security is more betterer, and configure shit in a way that breaks so many things, then get mad that they did that. There's a LOT of depth you have to understand to configure something like Cloudflare's WAF properly, and way too many admin types just don't fully understand what the impact of any particular setting is, get way way way waaaay too restrictive, and then get mad when it breaks things.

      The SSL offload requires you to trust your vendor, and I'd agree that the odds of them doing anything suspicious are close to zero: their business would damn near instantly implode if they got caught. But, again, you're trusting policy and procedure to keep people out of your data.

      I think there's a LOT of bias toward “MITM” meaning “malicious”, and with Lemmy ranging from very left to leftish, a huge bias against big tech (which, imo, is 100% warranted and totally earned by decades of shitty behavior). That shows up as “Cloudflare is bad because they MITM your traffic”, lacking the nuance that, well, every WAF and a heck of a lot of caching CDNs do that because that's how the technology works.