• TheOakTree@lemm.ee
    7 months ago

    Chess engines initially had the same stigma “they’ll never be better than humans since they can just calculate, no creativity, real analysis, insight…”

    I don’t know if this is a great example. Chess is an environment with an extremely defined end goal and very strict rules.

    The ability of a chess engine to defeat human players does not mean it became creative or developed insight. Rather, we advanced the complexity of the chess engine to encompass more possibilities, more strategies, and so on. In addition, it was quite naive of people to suggest that a computer would be incapable of “real analysis,” when its ability to do so depends entirely on the ability of humans to create a model complex enough to compute “real analyses” within a known system.

    I guess my argument is that in the scope of chess engines, humans underestimated the ability of a computer to determine solutions in a closed system, which is usually what computers do best.

    Consciousness, on the other hand, cannot be easily defined, nor does it adhere to strict rules. We cannot compare a computer’s ability to replicate consciousness to any other system (e.g. chess strategy) as we do not have a proper and comprehensive understanding of consciousness.

    • racemaniac@lemmy.dbzer0.com
      7 months ago

      I’m not saying that because chess engines became better than humans, LLMs will become conscious. I’m just using that example to show that humans always have this bias to frame anything that is not human as inherently lesser, when it might not be. Chess engines don’t think like a human does, yet they play better. So for an AI to become conscious, it doesn’t need to think like a human either; it just needs some mechanism that ends up with a similar enough result.

      • TheOakTree@lemm.ee
        7 months ago

        Yeah, I can agree with that. So long as the processes in an AI result in behavior that meets the necessary criteria (albeit currently undefined), one can argue that the AI has consciousness.

        I guess the main problem lies in that if we ever fully quantify consciousness, it will likely be entirely within the frame of human thinking… How do we translate the capabilities of a machine to said model? In the example of the chess engine, there is a strict win/lose/draw condition. I’m not sure if we can ever do that with consciousness.