Users point out in the comments how the LLM recommends APT on Fedora, which is clearly wrong. I can't tell if OP is responding with an LLM as well; it would be really embarrassing if so.

PS: Debian is really cool btw :)

  • brucethemoose@lemmy.world · 3 days ago

    gpt-oss 20B

    See all the errors in that rambling wall of slop (which they posted and didn’t even check for some reason?)

    Trying to use a local LLM… could be worse. But in my experience, small ones are just too dumb for stuff beyond fully automated RAG or other really focused cases. They feel like fragile toys until you get to 32B dense or ~120B MoE.

    Doubly so behind buggy, possibly vibe coded abstractions.

    The other part is that Goose is probably using a primitive CPU-only llama.cpp quantization. I see they name-check “Ryzen AI” a couple of times, but it can’t even use the NPU! There’s nothing “AI” about it, and the author probably has no idea.

    I’m an unapologetic local LLM advocate in the same way I’d recommend Lemmy/Piefed over Reddit, but honestly, it’s just not ready. People want these one-click agents on their laptops, and (unless you’re an enthusiast/tinkerer) the software’s simply not there yet, no matter how much AMD and such try to gaslight people into thinking it is.

    Maybe if they spent 1/10th of their AI marketing budget on helping open source projects, it would be…

    • TipsyMcGee@lemmy.dbzer0.com · 2 days ago

      I have been using gpt-oss:20b to help me with bash scripts, and so far it’s been pretty handy. But I make sure I know what I’m asking for and that I understand the output, so basically I might have been better off with 2010-ish Google and non-enshittified community resources.

      • brucethemoose@lemmy.world · 2 days ago

        Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.

        It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
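        The low-temperature point can be sketched concretely. Both llama.cpp’s server and Ollama expose an OpenAI-compatible chat endpoint; the endpoint URL, port, and model name below are assumptions for illustration, not from the thread:

```python
import json
import urllib.request

# Assumed local llama.cpp server endpoint -- adjust for your setup.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"


def build_request(prompt: str, model: str = "gpt-oss-20b") -> dict:
    """Return an OpenAI-style chat payload tuned for checkable output.

    A low temperature keeps sampling near-greedy, which suits tasks
    like bash-script generation where you want reproducible answers
    you can eyeball, not "creativity".
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.1,  # low: deterministic-ish, verifiable output
        "max_tokens": 512,
    }


def ask_local_llm(prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_local_llm("Write a bash script that archives ~/logs files older than 7 days."))
```

        For a speculative, open-ended task (like interpreting a log dump), you’d raise the temperature; for script generation you keep it near zero and read what comes back.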

        Contrast that with Red Hat’s examples.

        They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.

        Its assessment is long and not easily verifiable; note how the blog writer even confessed “I’ll check if it works later.” It requires more “world knowledge,” and long context is hard for low-active-parameter LLMs.

        Hence, you really want a model with more active parameters for that… Or, honestly, just reaching out to a free LLM API.


        Thing is, that Red Hat blogger could probably run GLM Air on his laptop and get a correct answer spit out, but it would be extremely finicky and time consuming.