• queermunist she/her@lemmy.ml · 7 days ago

    LLMs literally can not ever be used like an encyclopedia, never ever never.

    A magic 8 ball that sometimes gives wrong answers is in every way worse than just using the search function to look up an article.

    • corvus@lemmy.ml · 7 days ago

      Agreed. I don’t use it as an encyclopedia, but I have used local models for learning and to explain some things to me that I didn’t understand, and it’s been impressive. It’s up to you to test and evaluate how it can be helpful.

        • corvus@lemmy.ml · edited · 6 days ago

          From the article you posted:

          • “He wasn’t aware that ChatGPT could lie and that it was designed to keep him engaged.”
          • “(They have) been fatally designed to create emotional dependency with users, even if that’s not what they set out looking for in terms of their engagement with the chat bot,”

          Would you expect anything different from a chatbot made by a greedy corporation that only cares about profit? You are not aware of the dozens of AI systems created for scientific use cases (among others) with successful results in medicine, astronomy, and mathematics, probably because they are not as clickbaity as this article, made by another greedy corporation. If you are truly interested, use your search skills to find them; maybe you’ll discover your confirmation bias and change your mind about AI being intrinsically bad.

          As an example, here is very recent work by one of the most renowned mathematicians: “In this paper we showcase AlphaEvolve as a tool for autonomously discovering novel mathematical constructions and advancing our understanding of long-standing open problems. To demonstrate its breadth, we considered a list of 67 problems spanning mathematical analysis, combinatorics, geometry, and number theory. The system rediscovered the best known solutions in most of the cases and discovered improved solutions in several. In some instances, AlphaEvolve is also able to generalize results for a finite number of input values into a formula valid for all input values. Furthermore, we are able to combine this methodology with Deep Think and AlphaProof in a broader framework where the additional proof-assistants and reasoning systems provide automated proof generation and further mathematical insights. These results demonstrate that large language model-guided evolutionary search can autonomously discover mathematical constructions that complement human intuition, at times matching or even improving the best known results, highlighting the potential for significant new ways of interaction between mathematicians and AI systems. We present AlphaEvolve as a powerful new tool for mathematical discovery, capable of exploring vast search spaces to solve complex optimization problems at scale, often with significantly reduced requirements on preparation and computation time.” https://arxiv.org/abs/2511.02864
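To make “LLM-guided evolutionary search” concrete, here is a toy sketch of the outer loop such systems are built on. This is not AlphaEvolve’s actual code: the LLM proposer is replaced by a hypothetical random-mutation function, and the “open problem” is a trivial one-dimensional optimization, purely for illustration.

```python
import random

def evolutionary_search(score, init, propose, generations=200, seed=0):
    """Greedy evolutionary loop: repeatedly propose a variant of the best
    candidate so far and keep it only if it scores higher.
    In a system like AlphaEvolve, `propose` would be an LLM editing code."""
    rng = random.Random(seed)
    best = init
    best_score = score(best)
    for _ in range(generations):
        candidate = propose(best, rng)  # stand-in for an LLM-proposed edit
        s = score(candidate)
        if s > best_score:              # keep only strict improvements
            best, best_score = candidate, s
    return best, best_score

# Toy problem: maximize -(x - 3)^2, whose optimum is x = 3.
def score(x):
    return -(x - 3.0) ** 2

def propose(x, rng):
    return x + rng.uniform(-0.5, 0.5)  # random local mutation

best, best_score = evolutionary_search(score, init=0.0, propose=propose)
```

The point of the sketch is only the division of labor: a proposer generates candidates and an automatic scorer filters them, so the search can run unattended for many generations.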

      • Hawke@lemmy.world · 7 days ago

        > to explain me some things that I didn’t understand and it’s been impressive

        So you still don’t understand them, but now you are confidently incorrect because an LLM made up some bullshit.

        Seriously, ask it about a topic you do understand and see how wrong it is. Then realize it’s at least as wrong about everything else.

        • corvus@lemmy.ml · 7 days ago

          > Seriously, ask it about a topic you do understand and see how wrong it is. Then realize it’s at least as wrong about everything else.

          I teach physics and math, and I asked about a topic in differential geometry related to general relativity that wasn’t clear to me. I know enough of both subjects to understand the answer and to acknowledge that it nailed it. You are the perfect example of how the mind of a fanatic works: without knowing whether I understood what I asked, or what the answer was, you declare that it was wrong, based on pure speculation, just because it doesn’t fit your beliefs. Seriously, guys, don’t be like this idiot; being so irrational can only do harm.