• Arthur Besse@lemmy.ml · 6 points · 2 days ago

    Fuck this site; it lost me when it got to listing “What kinds of things might they be good at?”

    Just say no.

    Using an LLM to summarize something is still a bad idea. The chances of it emphasizing the wrong thing, omitting the most important thing, or just outright making up “facts” remain high. LLMs Will Always Hallucinate.

    • HugeNerd@lemmy.ca · 5 points · 2 days ago

      We’re too late. People are copy/pasting code from AI into low-level functions that run hardware. I’m glad I can retire soon, this is absolute lunacy.

  • vladmech@lemmy.world · 82 points · 4 days ago

    Got into a work ‘argument’ yesterday with someone from CyberSec who would not believe a tool we use could not do the thing he wanted me to have it do. I’d researched it and had direct links from the vendor, but CoPilot told him otherwise, so I had to spend half an hour rehashing the same thing over and over as he adjusted his stupidass input data until CoPilot basically told him ‘whoops, I lied about this.’

    • grainOfSalt@sh.itjust.works · 24 points · 4 days ago

      I’ve run into this twice now. For two different products I support, two different people sent me Claude AI slop answers where it hallucinated functionality into the product that doesn’t exist. And management still says to use AI for research, but verify its responses. What’s the point? That doesn’t save me any time. If anything, it’s wasting time.

    • AbsolutelyClawless@piefed.social · 20 points · 4 days ago

      I don’t know how these people don’t experience crippling embarrassment. I had a few people try to help me solve their issue by using ChatGPT, and of course it hallucinated options in the software, so I had to tell them that no, this does not exist. At least they apologized.

      • vladmech@lemmy.world · 5 points · 3 days ago

        Our entire meeting was him just feeding different prompts in for stuff while I pulled up vendor pages and found the relevant info quicker and without hallucinations. There’s got to be a breaking point where people realize it’s trash, right? ……right?

      • vladmech@lemmy.world · 6 points · 3 days ago

        Their manager was using CoPilot to check the latest version of iPadOS, and was also arguing with me that 18.7.1 wasn’t getting security updates anymore because CP told them only 26.0.1 was current. It’s a bottom-to-top issue on that whole side of the business right now, and it’s driving me nuts.

  • kfoo@lemmy.world · 23 points · 3 days ago

    I disagree that LLMs are good for summarizing information. They are good at TRUNCATING information. They do not possess the necessary cognitive abilities to accurately understand and distill something down into salient points consistently and reliably.

    • Dozzi92@lemmy.world · 2 points · 2 days ago

      I’m curious how it’s any different from the auto-summarize feature Microsoft Word had at least 20 years ago, which I used to help me write papers in high school.

    • quick_snail@feddit.nl · 2 points · 3 days ago

      Just fire them. Their bosses know that AI is just a marketing term to bring in VC investment.

      The big bosses know that it’s not actually useful, and that it causes harm to the org internally.

  • Soapbox@lemmy.zip · 19 points · 3 days ago

    I’m getting at least 2-3 calls a week now at work from people looking for something we don’t sell, and getting mad because ChatGPT told them we do.

    • FridaySteve@lemmy.world · 8 points · 3 days ago

      I like sending pictures of the animals at the zoo to ChatGPT and having it identify the animal. It gets a lot wrong. It also says there are capybaras there when there aren’t.

  • Annoyed_🦀 @lemmy.zip · 35 points · 4 days ago

    Lmao, a moment ago our friend group was talking about dreams and nightmares, and one guy just posted an AI wall of text about what dreams mean, and it reads exactly like pseudo-science. I mean no harm to him, but it’s just silly because the AI text sounds so authoritative and assertive, while I have to scrutinize every medical site I google and think, “is this site legit or is it copypasta tabloid?”

    I think that’s really how people operate on the internet: they don’t doubt a source just because it’s on the internet.

    • RedGreenBlue@lemmy.zip · 17 points · 4 days ago

      When anyone drops an AI blob in a discussion, I ignore it and continue the discussion as if it were not there.

      If I wanted to ask an LLM, I would do it myself.

    • idiomaddict@lemmy.world · 3 points · 4 days ago

      Tbh, I almost never ask or am asked for a source in person (not that I believe everything I hear, I’ll just look it up later unless it’s a specific type of social situation), but it happens a lot online. I don’t know how I would respond if someone dropped some obvious bullshit in a group chat, because it’s a different type of interaction.

  • Hideakikarate@sh.itjust.works · 18 points · 4 days ago

    That’s an awful lot of reading for people who can’t be bothered to go down a few sections of the page and get the information themselves.

  • Smoogs@lemmy.world · 8 points · 4 days ago

    Add to this that AI is scraping the internet… the free, open internet… and many posters are employees who, wanting to keep their jobs, are intentionally feeding false information onto it, tainting AI.

  • quick_snail@feddit.nl · 5 points · 3 days ago

    The best use for this is i18n.

    I can explain this fine in English. What I need is to explain it to the store clerk when I ask them a question in a broken foreign language and they hand me their phone, showing the regurgitated answer from AI.

    I want to load this site and hand it to them. In their language.

  • Rentlar@lemmy.ca · 5 points · 4 days ago

    one minute, let me get chatgpt to summarize this, I can’t be bothered to read it…

  • Ofiuco@piefed.ca · 3 points · 3 days ago

    Is the site just down, or dead? It doesn’t load for me… I wanted to check whether it was only in English so I could share it with the people I know.