• Wolf@lemmy.today · +5 · edited · 5 hours ago

      My cousin was fired from his job at Home Depot and the General Manager told him that it was beyond his control, that the company had implemented an AI to make those decisions.

      It seems like they took the wrong message from this meme. “We can’t be held accountable? Yay!”

  • asudox@lemmy.asudox.dev · +11/-1 · 11 hours ago

    “Did I ever give you permission to delete all the files in my D drive?” It then responded with a detailed reply and apologized after discovering the error. The AI said, “No, you did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to clear the project cache (rmdir) appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part.”

    At least it was deeply, deeply sorry.

  • nutsack@lemmy.dbzer0.com · +4 · 10 hours ago

    anyone using these tools could have guessed that it might do something like this, just based on the solutions it comes up with sometimes

  • Darkness343@lemmy.world · +2/-10 · 7 hours ago

    Oh hey! Just like an intern.

    Why is it suddenly worse when a computer deletes something important?

    • rami@ani.social · +5 · 5 hours ago

      Because the ai will gaslight you into thinking it’s learned a lesson when it hasn’t. Also they’re fucking stupid. You’re welcome!

  • Smoogs@lemmy.world · +3/-1 · 11 hours ago

    Thank fuck I left my mount password-protected. Locked-down permissions on Linux might be a pain, but it’s a lesser pain.

      • Nalivai@lemmy.world · +10 · 7 hours ago

        A year ago I was looking for a job, and by the end I had three similar offers. To decide, I asked each of them whether they use LLMs. Two said “yes, very much so, it’s the future, AI is smarter than god”, and the third said “only if you really want to, but nowhere where it matters”. I chose the third one. The other two are now bankrupt.

      • trannus_aran@lemmy.blahaj.zone · +4/-1 · 10 hours ago

        Yeah, because the market is run by morons, and all anyone wants to do is get the stock price up long enough to get a good bonus and cash out after the quarter. It’s pretty telling that these tools haven’t generated a profit yet.

      • BluesF@lemmy.world · +3/-1 · 10 hours ago

        The company I work for (we make scientific instruments mostly) has been pushing hard to get us to use AI literally anywhere we can. Every time you talk to IT about a project they come back with 10 proposals for how to add AI to it. It’s a nightmare.

        I got an email from a supplier today that acknowledged that “76% of CFOs believe AI will be a game-changer, [but] 86% say it still hasn’t delivered meaningful value. The issue isn’t the technology; it’s the foundation it’s built on.”

        Like, come on, no it isn’t. The technology is not ready for the kind of applications it’s being used for. It makes a half-decent search engine alternative; if you’re OK with taking care not to trust every word it says, it can be quite good at identifying things from descriptions and finding obscure stuff. But otherwise, until the hallucination problem is solved, it’s just not ready for large-scale use.

        • mirshafie@europe.pub · +1/-2 · 9 hours ago

          I think you’re underselling it a bit though. It is far better than a modern search engine, although that is in part because of all of the SEO slop that Google has ingested. The fact that you need to think critically is not something new and it’s never going to go away either. If you were paying real-life human experts to answer your every question you would still need to think for yourself.

          Still, I think the C-suite doesn’t really have a good grasp of the limits of LLMs. This could be partly because they themselves work a lot with words and visualization, areas where LLMs show promise. It’s much less useful if you’re in engineering, although I think ultimately AI will transform engineering too. It is of course annoying and potentially destructive that they’re trying to force-push it into areas where it’s not useful (yet).

          • Nalivai@lemmy.world · +1 · edited · 7 hours ago

            It is far better than a modern search engine, although that is in part because of all of the SEO slop that Google has ingested. The fact that you need to think critically is not something new and it’s never going to go away either.

            Very much disagree with that. Google got significantly worse, but LLM results are worse still. You do need to think critically, but with an LLM blurb there is no way to check validity other than doing another search without the LLM to find sources (in which case, why even bother with the generator in the first place?), or accepting that some of your new information may be incorrect without knowing which part.
            With conventional search you have all the context of your result: the reputation of the website itself, information about who wrote the article, the tone of the piece, the comments, all the subtle clues we have learnt to pick up on from our lifetime of experience on the internet and a civilisation’s worth of experience with human interaction. With the generator you have none of that. You get something stated as fact, where everything has the same weight and the same validity, and even when it cites sources, those can be outright lies.

            • hoppolito@mander.xyz · +2 · 6 hours ago

              I think you really nailed the crux of the matter.

              With the ‘autocomplete-like’ nature of current LLMs the issue is precisely that you can never be sure of any answer’s validity. Some approaches try by giving ‘sources’ next to it, but that doesn’t mean those sources’ findings actually match the text output and it’s not a given that the sources themselves are reputable - thus you’re back to perusing those to make sure anyway.

              If there was a meter of certainty next to the answers this would be much more meaningful for serious use-cases, but of course by design such a thing seems impossible to implement with the current approaches.

              I will say that in my personal (hobby) projects I have found a few good use cases of letting the models spit out some guesses, e.g. for the causes of a programming bug or proposing directions to research in, but I am just not sold that the heaviness of all the costs (cognitive, social, and of course environmental) is worth it for that alone.

            • mirshafie@europe.pub · +1 · 6 hours ago

              Alright you know what, I’m not going to argue. You do you.

              I just know that I’ve been underwhelmed with conventional search for about a decade, and I think that LLMs are a huge help sorting through the internet at the moment. There’s no telling what it will become in the future, especially if popular LLMs start ingesting content that itself has been generated by LLMs, but for now I think that the improvement is more significant than the step from Yahoo→Google in the 2000s.

              • Nalivai@lemmy.world · +1 · 6 hours ago

                I’m not going to argue

                Obviously, that would require reading and engaging with my response, and you clearly decided not to do either even before I wrote it.

  • 87Six@lemmy.zip · +39/-2 · 1 day ago

    Kinda wrong to say “without permission”. The user can choose whether the AI can run commands on its own or ask first.

    Still, REALLY BAD, but the title doesn’t need to make it worse. It’s already horrible.

    • Jhex@lemmy.world · +23/-3 · 1 day ago

      hmmm when I let a plumber into my house to fix my leaky tub, I didn’t imply he had permission to sleep with my wife who also lives in the house I let the plumber into

      The difference you try to make is precisely what these agentic AIs should know to respect… which they won’t because they are not actually aware of what they are doing… they are like a dog that “does math” simply by barking until the master signals them to stop

      • Hawanja@lemmy.world · +6/-1 · 22 hours ago

        they are like a dog that “does math” simply by barking until the master signals them to stop

        I mean, it’s not even that. Your dog at least can learn and has limited reasoning capabilities. Your dog will know when it fucks up. AI doesn’t do any of that because it’s not really “intelligent.”

      • 87Six@lemmy.zip · +12 · 1 day ago

        I agree with you, but still: the AI doesn’t do this by default. It’s a shitty defense, but it’s a fact.

        • Jhex@lemmy.world · +10 · 1 day ago

          Absolutely… this just illustrates that these AI tools are, at best, some assistance that need to be kept on a very short leash… which can only be properly done by people who already know how to do the work the AI is supposed to assist with.

          But that is NOT what the AI bubblers are peddling

          • 87Six@lemmy.zip · +2 · 11 hours ago

            Yea, the AI peddlers force the AI down your throat, then write in tiny text “btw this thing can kill you, te-hee”.

      • PmMeFrogMemes@lemmy.world · +5/-1 · 1 day ago

        in your example tho it would be like the plumber asked you specifically if he could bone, and you were like “sure dawg sounds good”

        • Jhex@lemmy.world · +4/-1 · 1 day ago

          No, not at all

          I get what you are saying but any reasonable entity would understand that telling someone at the door “come in”, does not mean “come in my wife’s ass”

          Specifically the “without permission” in the title, relates to the fact the AI did not ask about it… it simply took a previously granted right to run commands and ran any/all commands without warning.

          If you and I were working on a project together and nothing is working right, I could say “hmm let’s start over” and you would know it means “let’s start the project from scratch”, not “let’s wipe the data centre”

          • PumaStoleMyBluff@lemmy.world · +2 · 1 day ago

            Inviting an agentic AI isn’t really asking them to do one task, though.

            It’s more like offering a plumber a room in your house to stay in 24/7 so they can be on-call when you need them. And telling them they can use your food, dishes, clothes, and living room while they’re there and you’re at work.

            Which makes it much less surprising when they get bored and bone your wife.

            • Jhex@lemmy.world · +1 · 23 hours ago

              It’s more like offering a plumber a room in your house to stay in 24/7 so they can be on-call when you need them.

              Again I get your point… but no reasonable plumber would make that mistake.

              If I invite the dumbest plumber alive into my home, show him the leaky tub and say “I have to work but do whatever you need”… they would understand the context to mean “do whatever you need to fix the tub”… I doubt they would go make themselves a sandwich, grab a beer from the fridge and invite their buddies for a BBQ at my place and then say “but you said I could do whatever I needed”

              I absolutely understand what happened here. The point is there is no benefit to these Agentic AIs because they need to be as supervised as a monkey with a knife… why would I ever want that? let alone need that

              • partial_accumen@lemmy.world · +1 · 23 hours ago

                Again I get your point… but no reasonable plumber would make that mistake.

                To extend your analogy, agentic AI isn’t the “reasonable plumber”; it’s the sketchy guy who says he can fix plumbing and, upon arrival, admits he’s a meth addict who hasn’t slept in 3 days and is seeing “the shadow people” standing right there in the room with you.

                I absolutely understand what happened here. The point is there is no benefit to these Agentic AIs because they need to be as supervised as a monkey with a knife… why would I ever want that? let alone need that

                I can see applications for agentic AI, but they can’t be handed the keys to the kingdom. You put them in an indestructible room with a hammer and a pile of rocks and say “please crush any rock I hand you to be no bigger than a walnut and no smaller than an almond”. In IT terms, the agentic AI could run under a restrictive service account, so that even if it went off the rails it couldn’t damage anything you cared about.
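As a rough application-level sketch of that idea (a hedged illustration, not how any particular agent framework actually works; the ALLOWED set and the vet() helper are hypothetical names), a wrapper could refuse to execute any command the agent proposes unless its program is on an explicit allow-list:

```python
import shlex

# Hypothetical guard: the agent proposes shell commands as plain strings,
# and we execute only those whose program is on an explicit allow-list.
ALLOWED = {"ls", "cat", "grep"}  # read-only tools; nothing destructive

def vet(command: str) -> bool:
    """Return True only if the command's program is allow-listed."""
    parts = shlex.split(command)
    return bool(parts) and parts[0] in ALLOWED

print(vet("grep -r TODO src/"))  # True: grep is on the list
print(vet("rm -rf /"))           # False: rm never makes the list
```

A real deployment would enforce this outside the agent's process (OS accounts, containers), since a guard the agent itself can rewrite is no guard at all.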

    • mcv@lemmy.zip · +21 · 1 day ago

      A big problem in computer security these days is all-or-nothing security: either you can’t do anything, or you can do everything.

      I have no interest in agentic AI, but if I did, I would want it to have very clearly specified permissions for certain folders, processes and APIs. So maybe it could wipe the project directory (which would have a backup, of course), but not a complete hard disk.

      And honestly, I want that level of granularity for everything.
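That folder-level granularity can be sketched in a few lines (hypothetical paths and helper name; real sandboxing would be enforced by the OS, not by the agent's own code): resolve every target path and refuse anything that lands outside the project root.

```python
from pathlib import Path

# Hypothetical allowed root; resolve() normalizes ".." segments and symlinks.
PROJECT_ROOT = Path("/home/me/project").resolve()

def is_inside_project(target: str) -> bool:
    """True only if target resolves to PROJECT_ROOT or a path under it."""
    resolved = Path(target).resolve()
    return resolved == PROJECT_ROOT or PROJECT_ROOT in resolved.parents

print(is_inside_project("/home/me/project/build/cache"))  # True
print(is_inside_project("/home/me/project/../../etc"))    # False: escapes the root
```

Resolving before comparing matters: a naive string-prefix check would wave through `../..` tricks like the second example.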

    • utopiah@lemmy.world · +9 · edited · 1 day ago

      The user can choose whether the AI can run commands on its own or ask first.

      That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers. Here is an example:

      rm *filename

      versus

      rm * filename

      where a single character makes the entire difference between deleting all files ending in filename, versus deleting all files in the current directory plus the file named filename.

      Of course here you will spot it because you’ve been primed for it. In a normal workflow, with pressure, then it’s totally different.
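The danger is in how the shell expands the glob before rm ever runs. A safe way to see the two expansions side by side (a sketch using Python's glob module on a throwaway directory, rather than running rm itself; the file names are made up):

```python
import glob
import os
import tempfile

# Recreate the two expansions in a throwaway directory, deleting nothing.
d = tempfile.mkdtemp()
for name in ["notes.txt", "main.c", "old_filename", "filename"]:
    open(os.path.join(d, name), "w").close()
os.chdir(d)

# `rm *filename`: the shell expands ONE pattern, names ending in "filename".
print(sorted(glob.glob("*filename")))   # ['filename', 'old_filename']

# `rm * filename`: the shell expands `*` to EVERY file, then appends "filename".
print(sorted(glob.glob("*")) + ["filename"])
```

The second command's argument list contains every file in the directory, which is exactly what rm then deletes.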

      Also, IMHO more importantly: if you watch the video (~7 min in), they clarified that they expected the “agent” to stick to the project directory, not be able to go outside it. They were obviously, painfully wrong, but it would have been a reasonable assumption.

      • nutsack@lemmy.dbzer0.com · +1 · edited · 10 hours ago

        That implies the user understands every single code with every single parameters.

        why not? you can even ask the ai if you don’t know

        • EldritchFemininity@lemmy.blahaj.zone · +2 · 5 hours ago

          There’s no guarantee that it will tell you the truth. It could tell you to use Elmer’s glue to keep the cheese from falling off your pizza. The AI doesn’t “know” or “understand,” it just does as its training set informed it to. It’s just a very complex predictive text that you can give commands to.

      • Jhex@lemmy.world · +8/-1 · 1 day ago
        1 day ago

        That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers

        I wouldn’t say impossible but I would say it completely defeats the purpose of these agentic AIs

        Either I know and understand these commands so well I can safely evaluate them, therefore I really do not need the AI… or, I don’t really know them well and therefore I shouldn’t use the AI

        • utopiah@lemmy.world · +7 · 1 day ago

          Yep. That’s exactly why I tend to never discuss “AI” with people who don’t actually have a PhD in the domain, or at least a degree in CS. It’s nothing against them specifically; it’s just that they dangerously repeat what they heard in marketing presentations, with no ability to criticize it, and in such cases it can be quite dangerous.

          TL;DR: people who could benefit from it don’t need it, people who would shouldn’t.

          • Jhex@lemmy.world · +3 · 1 day ago

            This is EXACTLY the YouTube woodworkers dilemma…

            Tons of YT woodworking channels showing people how to make a cutting board will showcase $50K worth of equipment to do it.

            The thing is, people with access to such equipment already know how to make a cutting board and learn nothing from you… on the other hand, newbies who want to know what this “sanding” thing they’ve heard about is won’t benefit from the video either, since they don’t have those tools; they’d have crappy manual tools at most.

            Therefore, those videos are completely useless for learning… at best, they make for good background noise while people eat their lunches in their cubicles.

            • utopiah@lemmy.world · +2 · 1 day ago

              I agree… though, beside the point: I have access to a dedicated workshop and a tool library (https://www.tournevie.be/), which challenges this whole setup. It’s relatively unique though, unfortunately, so your example still stands. Thanks for sharing.

    • setsubyou@lemmy.world · +140/-3 · 2 days ago

      We need to start posting this everywhere else too.

      This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.

    • alias_qr_rainmaker@lemmy.world · +7/-2 · 2 days ago

      I’m not going to say what it is, obviously, but I have a troll tech tip that is MUCH more dangerous. It’s several lines of zsh, and it basically removes every image on your computer, or every code file on your computer, and you need to be pretty familiar with zsh/bash syntax to know it’s a troll tip.

      So yeah, definitely not posting this one here, I like it here (I left Reddit cuz I got sick of it)

    • Credibly_Human@lemmy.world · +6/-12 · 2 days ago

      It’s always been a shitty meme aimed at being cruel to new users.

      Somehow, though, people continue to spread the lie that the Linux community is nice and welcoming.

      Really, it’s a community of professionals, professional elitists, or people who are otherwise so fringe that they demand their OS be fringe as well.