• definitemaybe@lemmy.ca
    1 day ago

    I think “contextual awareness” would fit better, and AI Believers preach that it’s already great. Any errors in LLM output are because the prompt wasn’t fondled enough/correctly, not because of any fundamental incapacity of word-prediction machines at logical reasoning tasks. Or something.

    • JackbyDev@programming.dev
      21 hours ago

      Ah, of course. The model isn’t wrong, it’s the input that’s wrong. Yes, yes. Please give me investment money now.