Orbit is an LLM-powered addon/extension for Firefox that runs on the Mistral 7B model. It can summarize a given webpage, YouTube videos, and so on, and you can ask it questions about the content of the page. It is very privacy-friendly and does not require signing up for an account.

I personally tried it and found it incredibly useful! I think this is going to be one of my long-term addons, along with uBlock Origin, Decentraleyes, and so on. I would highly recommend checking this out!

    • UraniumBlazer@lemm.ee (OP) · +13/−7 · 3 months ago

      I don’t want to install and maintain 10 gigs of CUDA stuff on my PC. Besides, my mum won’t know how to do that, and her laptop is a potato. This add-on makes all of this way easier.

      • photonic_sorcerer@lemmy.dbzer0.com · +12 · 3 months ago

        You don’t need CUDA; it’s actually pretty easy. You can run the Mistral 7B model this add-on is based on using GPT4All. It doesn’t require much, if any, technical knowledge.
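A minimal sketch of what that looks like with GPT4All’s Python bindings, for anyone curious. Assumptions: `pip install gpt4all`, and the model filename comes from GPT4All’s catalog and may differ between releases; the model file (a few GB) is downloaded on first use.

```python
# Sketch: running a quantized Mistral 7B locally via the gpt4all package.
# No CUDA needed -- GPT4All runs on CPU by default.

def summarize(text: str, max_tokens: int = 200) -> str:
    """Ask a local Mistral 7B model to summarize some page text."""
    # Imported lazily so the sketch loads even without the package installed.
    from gpt4all import GPT4All

    # Hypothetical catalog filename; check GPT4All's model list for the
    # current one. Downloads (~4 GB) on first run.
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")
    with model.chat_session():
        return model.generate(
            f"Summarize the following:\n\n{text}",
            max_tokens=max_tokens,
        )
```

The lazy import and catalog filename are illustrative; the GUI app does the same thing with zero code.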

        • UraniumBlazer@lemm.ee (OP) · +10/−1 · edited · 3 months ago

          HOLY HELL THAT’S COOL. It can do so much too!!!

          I installed some small LLM locally more than a year ago. It took up around 25 gigs along with all the CUDA libraries and stuff. It was alright, but I figured that cloud-based solutions were the best for my use case, as they were better and free.

          I had no idea that open-source AI progressed so much in the last year. Amazing stuff!

          • Hawk@lemmynsfw.com · +1 · 3 months ago

            It depends on how you run it. You may not have been using a quantized model.

            • UraniumBlazer@lemm.ee (OP) · +1 · 3 months ago

              I was using the quantized version :(

              But again, remember that this was when the first open-source AI models had just begun to come out, stuff from Open Assistant for example. I don’t even remember the name of the model I was running (it was just too weird and funny lol). I just remember it being HUGE, quite dumb, and making my device sweat lol.

      • mosiacmango@lemm.ee · +6 · 3 months ago

        You’re not training models at this point. You don’t need that kind of hardware just to run them.

      • sunzu2@thebrainbin.org · +5/−1 · 3 months ago

        Well, that comes with a shit ton of privacy risk. If y’all are comfortable with that, then it’s your choice.