• Sanctus@lemmy.world · 2 days ago

    Literally never had this happen. Every time I’ve caved after exhausting all other options, the LLM has just made it worse. I never go back anymore.

    • MentalEdge@sopuli.xyz · 1 day ago

      They’re by no means the be-all and end-all solution. And they usually aren’t my first choice.

      But when I’m out of ideas, prompting Gemini with a couple of sentences hyper-specifically describing a problem has often given me something actionable. I’ve had almost no success asking it for specific instructions without giving it specific details about what I’m doing. That’s when it just makes shit up.

      But a recent example: I was trying to re-install Windows on a Lenovo ARM laptop. Lenovo’s own docs were generic for all their laptops and intended for x86. You could not use just any Windows ISO. While I was able to figure out how to create the recovery-image media for the specific device at hand, there were no instructions on how to actually use it, and the BIOS didn’t have any relevant entries.

      Writing half a dozen sentences describing this to Gemini instantly informed me that there is a tiny pin-hole button on the laptop that boots into a special separate menu that isn’t in the BIOS. And lo, that was it.

      Then again, if normal search still worked like it did a decade ago and didn’t give me a shitload of irrelevant crap, I wouldn’t have needed an LLM to “think” its way to this factoid. I could have found it myself.

      • Sanctus@lemmy.world · 1 day ago

        I do use LLMs if I forget to plan one of my tabletop sessions. I will fully admit they are great at that. Love 'em for making encounters.

    • idunnololz@lemmy.world · 1 day ago

      They seem to be pretty good at language. One time I forgot the word “tact” and was trying to remember it. I even asked some people, and no one could think of the word I was thinking of, even after I described approximately what it meant. But I asked an AI and it got it in one go.

    • Farid@startrek.website · 1 day ago

      Happened to me yesterday. I have an old 4K TV, and every component I used to connect to it had HDMI 2.0+ capabilities. Neither my laptop nor my Steam Deck would output 4K60, only 4K30. I tried another cable and a hub, same result. And I know that my Chromecast outputs 4K60 to this TV, so I was extra confused. In my desperation, I asked GPT-5 what I was missing, and it plainly told me that those old Samsung TVs turn off HDMI 2.0 support unless you explicitly turn it on in the TV settings under “UHD Color”. Apparently the Chromecast was doing chroma subsampling, but the computers refused and wanted full HDMI 2.0 bandwidth…
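
      The bandwidth arithmetic behind that behavior checks out. Here is a back-of-the-envelope sketch in Python, assuming the usual published figures (594 MHz pixel clock for 4K60, 297 MHz for 4K30, TMDS at 3 channels × 10 bits per clock, and caps of roughly 10.2 Gbps for HDMI 1.4 and 18 Gbps for HDMI 2.0); none of these numbers come from the thread itself:

      ```python
      # Rough TMDS bandwidth estimate: why 4K60 at full chroma needs HDMI 2.0,
      # while 4:2:0 subsampling squeezes into HDMI 1.4-era rates.
      PIXEL_CLOCK_MHZ = {"4K30": 297.0, "4K60": 594.0}  # standard CEA timings

      def tmds_gbps(mode: str, chroma: str = "4:4:4") -> float:
          """Raw TMDS rate in Gbit/s; 4:2:0 halves the effective pixel clock."""
          clock = PIXEL_CLOCK_MHZ[mode]
          if chroma == "4:2:0":
              clock /= 2  # half the chroma samples, half the clock
          return clock * 30 / 1000  # 3 channels x 10 bits per clock; MHz -> Gbps

      for mode, chroma in [("4K30", "4:4:4"), ("4K60", "4:2:0"), ("4K60", "4:4:4")]:
          rate = tmds_gbps(mode, chroma)
          print(f"{mode} {chroma}: {rate:5.2f} Gbps "
                f"(fits HDMI 1.4: {rate <= 10.2}, fits HDMI 2.0: {rate <= 18.0})")
      ```

      Only 4K60 at full 4:4:4 (about 17.8 Gbps) actually needs the 18 Gbps HDMI 2.0 mode, which is exactly what the hidden “UHD Color” toggle enables; the Chromecast’s 4:2:0 output fit under the lower cap, so it worked with the toggle off.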

      • _g_be@lemmy.world · 12 hours ago

        That’s rather cool, glad to hear it worked. My experience with it is often:

        “Where can I find the setting to change *this thing*?”
        “Gladly! I know how frustrating this process can be! First, open the settings page, find the page that says *thing setting*, and change it there.”
        “There is no page like that.”
        “You’re absolutely right!”

        • Farid@startrek.website · 9 hours ago

          True, that totally happens to me all the time, too. For example, yesterday it was repeatedly insisting that there’s a certain checkbox in the qBittorrent settings, which wasn’t there. I gave it a screenshot of the settings page and it “realized” the setting is named differently. So in the end, it helped me with something that I couldn’t google properly. It’s a supplementary tool for me.

    • TehBamski@lemmy.world · 2 days ago

      Context is highly important in this scenario. Ask it how many people live in [insert country and then province/state], and it’ll be accurate a high percentage of the time. Ask it [insert historical geo-political question], and it won’t be able to answer reliably.

      Also, I have found it can depend on which LLM you ask said question to. I have found Perplexity to be my go-to LLM of choice, as it acts like an LLM ‘server’ in selecting the best LLM for the task at hand. Here’s Perplexity’s Wikipedia page if you want to learn more.

    • Eheran@lemmy.world · 2 days ago

      When was the last time you tried? GPT-5 Thinking is able to create 500 lines of code without a single error, repeatably, and to add new features into it seamlessly too. Hours of work with older LLMs are reduced to minutes; I really like how much it enables me to do with my limited spare time. Same with “actual” engineering: the numbers were all correct the last few times, for things where it had to find a way to calculate something, figure out some assumptions, and then do the math! Sometimes it gets the context wrong, and since it pretty much never asks questions back, the result was absurd for me but somewhat correct for a different context. Really good stuff.

      • BroBot9000@lemmy.world · 2 days ago

        Really good until you stop double-checking it and it makes shit up. 🤦‍♂️

        Go take your AI apologist bullshit and feed it to the corporate simps.

        • Eheran@lemmy.world · 1 day ago

          The good thing is that in code, if it makes shit up, it simply does not work the way it is supposed to.

          You can keep your hatred to yourself, along with the bullshit you make up.

          • AmbiguousProps@lemmy.today · 1 day ago

            Until it leaves a security issue that isn’t immediately visible and your users get pwned.

            Funny that you say “bullshit you make up”, when all LLMs do is hallucinate and sometimes, by coincidence, produce a “correct” result.

            I use them when I’m stumped or hit “writer’s block”, but I certainly wouldn’t have them produce 500 lines and then assume that just because it works, it must be good to go.
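
            A hypothetical sketch of how that failure mode looks (a made-up example, not anyone’s actual generated code): the function below passes every happy-path test and “just works”, yet is trivially SQL-injectable.

            ```python
            import sqlite3

            def find_user(db: sqlite3.Connection, name: str):
                # Returns the right rows for every normal input...
                return db.execute(
                    f"SELECT id, name FROM users WHERE name = '{name}'"  # interpolation!
                ).fetchall()

            # ...until someone passes  ' OR '1'='1  and reads the whole table.
            # The safe version parameterizes the query instead:
            #   db.execute("SELECT id, name FROM users WHERE name = ?", (name,))
            ```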

            • Eheran@lemmy.world · 23 hours ago

              Calculations with bugs do not magically produce correct results and plot them correctly. Nor can such simple code silently change values that were read from a file or device. Etc.

              I do not care what you program or how bugs can sneak in there. I use it for data analysis, simulations, etc., with exactly zero security implications or, generally, any interaction with anything outside the computer.

              The hostility here against anyone using LLMs/AI is absurd.

              • Holytimes@sh.itjust.works · 3 hours ago

                I dislike LLMs, but the only two fucking things this place seems to agree on are that communism is good and AI is bad, basically.

                Basically no one has a nuanced take; people would rather demonize than have a reasonable discussion.

                Honestly, Lemmy at this point is exactly the same as Reddit was a few years ago, before the mods and admins went full Nazi and started banning people for anything and everything.

                At least here we can still actually voice both sides of the opinion instead of one side getting banned.

                People are people no matter where you go.

              • AmbiguousProps@lemmy.today · 22 hours ago

                Then why do you bring up code reviews and 500 lines of code? We were not talking about your “simulations” or whatever else you bring up here. We’re talking about you saying it can create 500 lines of code, and that it’s okay to ship it if it “just works” and have someone review your slop.

                I have no idea what you’re trying to say with your first paragraph. Are you trying to say it’s impossible for it to coincidentally get a correct result? Because that’s literally all it can do. LLMs do not think, they do not reason, they do not understand. They are not capable of that. They are literally hallucinating all of the time, because that’s how they work. That’s why OpenAI had to admit that they are unable to stop hallucinations: it’s impossible, given how LLMs work.

              • AmbiguousProps@lemmy.today · 1 day ago

                “my coworkers should have to read the 500 lines of slop so I don’t have to”

                That also implies that code reviews are always thoroughly scrutinized. They aren’t, and if a whole team is vibecoding everything, they especially aren’t. Since you’ve got this mentality, you’ve definitely got some security issues you don’t know about. Maybe go find and fix them?

                • onslaught545@lemmy.zip · 18 hours ago

                  If your QA process can let known security flaws into production, then you need to redesign your QA process.

                  Also, no one ever said that the person generating 500 lines of code isn’t reviewing it themselves.

      • Donkter@lemmy.world · 1 day ago

        I’ve come to realize that these crazed anti-AI people are just a product of history repeating itself. They would be the same leftists who were “anti-GMO”. When you dig into it, you understand that they’re against Monsanto, which is cool and good, but the whole thing is so conflated in their heads that you can’t discuss the merits of GMOs whatsoever, even though they’re purportedly progressive.

        It’s a pattern; their heads are in the right place for the most part, but the logic goes a little haywire as they buy into hysteria. It’ll probably take a few years as the generations cycle.

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 2 days ago

        Also, did you adequately describe your problem? Treat it like a human who knows how to program but has no idea what the fuck you’re talking about. Just like a human, you have to sit it down and talk to it before you have it write code.
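
        For instance (hypothetical prompts, made up here purely for illustration):

        ```text
        Vague:  "Write me a script to process my data."
        Better: "Python 3.11 with pandas. I have a CSV with columns timestamp
                 (ISO 8601), sensor_id, and value. I need hourly means per
                 sensor, with gaps left as NaN, written to a new CSV. Here are
                 5 sample rows: ..."
        ```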

      • lectricleopard@lemmy.world · 2 days ago

        It gave you a wrong answer, one you called absurd, and then you said “Really good stuff.”

        Not to get all dead-internet, but are you an LLM?

        I don’t understand how people think this is going to change the world. It’s like the C-suite folks think they can fire 90% of their company, feed their half-baked ideas for superhero sequels into an AI, and sell us tickets to the poop that falls out, 15 fingers and all.

        • Eheran@lemmy.world · 1 day ago

          So you read what I said, went with “my bias against LLMs was proven”, and wrote this reply? At no point did you actually try to understand what I said? Sorry, but are you an LLM?

          But seriously: if you ask someone on the phone “is it raining?” and the person says “not now, but it did a moment ago”, do you think the person is a fucking idiot because the sun has obviously been and still is shining where you are? Or perhaps the context is different (a different location)? Do you understand that now?

          • lectricleopard@lemmy.world · 1 day ago

            You seem upset by my comment, which I don’t understand at all. I’m sorry if I’ve offended you. I don’t have a bias against LLMs. They’re good at talking, very convincing. I don’t need help creating text to communicate with people, though.

            Since you mention that this is helping you in your free time, you might not be aware how much less useful it is for coding in a commercial setting.

            I’ll also note, since you mentioned it in your initial comment: LLMs don’t think. They can’t think. They never will think. That’s not what these things are designed to do, and there is no means by which they might start to think just because they are bigger or faster. Talking about AI systems like they are people makes them appear more capable than they are to those who don’t understand how they work.

            • Eheran@lemmy.world · 23 hours ago

              Can you define “thinking”? This is such a broad statement with so many implications. We have no idea how our brain functions.

              I do not use this tool for talking. I use it for data analysis, simulations, MCU programming, … Instead of having to write all of that code myself, it only takes 5 minutes now.

              • lectricleopard@lemmy.world · 14 hours ago

                Thinking is what humans do. We hold concepts in our working memory and use related stored memories to evaluate new data and determine a course of action.

                LLMs predict the next word in a sentence based on a statistical model. That model is developed by “training” on written data, often scraped from the internet, which bakes many biases into it: people on the internet do not take the time to answer “I don’t know” to questions they see. I see this as at least one source of what are called “hallucinations”: the model confidently answers incorrectly because that’s what it has seen in training.
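
                As a toy illustration of “predict the next word from a statistical model” (a minimal bigram sketch; real LLMs use neural networks over subword tokens, but the lack of any “I don’t know” is the same):

                ```python
                import random
                from collections import Counter, defaultdict

                training_text = "the cat sat on the mat the cat ate the food"

                # Count which word follows which in the training data.
                counts = defaultdict(Counter)
                words = training_text.split()
                for prev, nxt in zip(words, words[1:]):
                    counts[prev][nxt] += 1

                def next_word(prev: str) -> str:
                    """Sample a continuation in proportion to training frequency."""
                    options = counts[prev]
                    return random.choices(list(options), list(options.values()))[0]

                # The model always answers confidently; there are only relative
                # frequencies, never an "I don't know".
                print(next_word("the"))  # 'cat' is twice as likely as 'mat' or 'food'
                ```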

                The internet has many sites with reams of example code in many programming languages. If you are working on code that is of the same order of magnitude as those examples, then you are within the training data, and results will generally be good. Go outside that training data, and it just flounders. It has no means of reasoning beyond its internal statistical model.

              • Clent@lemmy.dbzer0.com · 21 hours ago

                “We have no idea how our brain functions.”

                This isn’t even remotely true.

                You should have asked your LLM about it before making such a ridiculous statement.