• Shanmugha@lemmy.world · 1 day ago

    Oh, that I can, and thank you for your message, really. With that said, here is the difference: cars did work as transport, while AI as it is marketed (a magic replacer of everything) does not, and even in the narrow use case of programming it does not. It can produce heaps of lines of code; it cannot do the work of building reliable software that does what is required of it. It has also failed to replace artists. So no, I am not afraid.

    • Angry_Autist (he/him)@lemmy.world · 20 hours ago

      Compare AI-generated content from now and from 2 years ago and extrapolate the curve.

      It’s not linear, but your monkey brain will insist it is

      That’s why you’re not afraid.

      Honestly, taking programmer jobs isn’t even close to the worst thing AI is going to do to us.

      • Shanmugha@lemmy.world · 14 hours ago

        My monkey brain keeps hearing about non-linear progress, and yet things keep staying right where they are.

        Besides that, since you insist on being fearful: why fear AI of all things, and not the handful of rich assholes who actually make our lives hard every damn day?

        • Angry_Autist (he/him)@lemmy.world · 5 hours ago

          I don’t think you understand how dangerous a system is that can correlate every factor of every human’s posting activity and use it to create manipulative profiles for every human who has ever logged in to anything.

          • Shanmugha@lemmy.world · edited · 5 hours ago

            Ooh, scary once again. First, let me give you some credit and take the description at face value: how is this omnipotent system going to be created? By humans, who err? By current LLMs, which dream up names of libraries and functions? And most importantly, how is it going to become capable of manipulating “anyone to do anything” when even I do not always know what it would take to get me to do some arbitrary X, and the same is true for almost all humans, save sages/buddhas and the like (I can’t deny they are possible, so count them as existing)?

            Your proposed threat looks like a conspiracy theory. Some of those have turned out to be true, but that is no reason to believe any particular one.

      • balsoft@lemmy.ml · edited · 15 hours ago

        Yes, it’s not linear. The progress of GenAI over the past 2 years has been logarithmic at best if you compare it with the boom of 2019–2023 (from GPT-2 to GPT-4 in text, from DALL-E 1 to 3 in images). The big companies trained their networks on all of the internet and ran out of training data; if you compare GPT-4 to GPT-5, it’s pretty obvious. Unless there’s a significant algorithmic breakthrough (which is looking less and less likely), at least text-based AI is not going to see another order-of-magnitude improvement for a long time. Sure, it can already replace like 10% of devs who are doing boring JS stuff, but replacing at least half of the dev workforce remains a pipe dream of the C-suite for now.
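
        To make the curve disagreement concrete, here is a minimal sketch with made-up “capability” numbers (purely illustrative, not real benchmark data): the same early data points extrapolate to very different futures depending on whether you fit a linear curve or a flattening, log-like one.

            import math

            # Purely illustrative numbers, not real benchmark data: the point is
            # only that the extrapolated future depends on the curve you assume.
            years = range(2019, 2028)
            linear = [10 + 20 * (y - 2019) for y in years]              # steady gains forever
            log_like = [10 + 40 * math.log1p(y - 2019) for y in years]  # fast early, then flattening

            for y, a, b in zip(years, linear, log_like):
                print(y, round(a), round(b))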

        • Angry_Autist (he/him)@lemmy.world · 5 hours ago

          Up until last week I worked for a stupidly big consumer data company, and our in-house AI tools were not LLMs; they used an LLM as their secondary interface, and let me tell you, none of you are ready for this.

          The problem with current LLMs is confabulation, and it is not solvable; it’s inherent in what an LLM is. The returns I was generating were not from publicly available LLMs or LLM services, but from expert systems trained only on the pertinent datasets. These do not confabulate, as they are not word-guessing algorithms.
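
          To make the “word guessing” point concrete, here is a toy sketch (my own illustration with assumed toy probabilities, not their in-house system): a model that picks the next token purely by learned probability produces fluent, plausible-looking output with no notion of whether it is true, which is where confabulation comes from.

              import random

              # Toy next-token table: probabilities learned from co-occurrence alone,
              # with no grounding in whether a continuation is factually correct.
              # "fastmathx" is a deliberately made-up library name.
              bigram_probs = {
                  "the": {"library": 0.5, "function": 0.5},
                  "library": {"numpy": 0.4, "pandas": 0.3, "fastmathx": 0.3},
              }

              def next_token(prev):
                  """Sample the next token by probability alone: plausibility, not truth."""
                  options = bigram_probs.get(prev, {"<end>": 1.0})
                  tokens, weights = zip(*options.items())
                  return random.choices(tokens, weights=weights)[0]

              print(next_token("library"))  # may happily print the nonexistent "fastmathx"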

          Think of it like Wolfram Alpha for human behavior.

          People look at LLMs as the public face of AI, but they aren’t even close to the most important part.