• balsoft@lemmy.ml · edited · 15 hours ago

    Yes, it’s not linear. The progress of GenAI over the past two years has been logarithmic at best if you compare it with the boom of 2019–2023 (GPT-2 to GPT-4 in text, DALL-E 1 to 3 in images). The big companies trained their networks on essentially all of the internet and ran out of training data; compare GPT-4 to GPT-5 and it’s pretty obvious. Unless there’s a significant algorithmic breakthrough (which is looking less and less likely), text-based AI at least is not going to see another order-of-magnitude improvement for a long time. Sure, it can already replace maybe 10% of devs who are doing boring JS work, but replacing at least half of the dev workforce remains a C-suite pipe dream for now.

    • Angry_Autist (he/him)@lemmy.world · 5 hours ago

      Up until last week I worked for a stupidly big consumer data company, and our in-house AI tools were not LLMs; they only used an LLM as a secondary interface. Let me tell you, none of you are ready for this.

      The problem with current LLMs is confabulation, and it is not solvable; it’s inherent in what an LLM is. The results I was generating came not from publicly available LLMs or LLM services, but from expert systems trained only on the pertinent datasets. These do not confabulate because they are not word-guessing algorithms.
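      To make the “word guessing” point concrete, here’s a minimal Python sketch of next-token sampling; the prompt, vocabulary, and probabilities are invented for illustration and not taken from any real model.

      ```python
      # Toy illustration of why a pure next-token predictor confabulates.
      # A real LLM does the same thing over ~100k tokens with a learned
      # distribution; the mechanics below are the same in miniature.
      import random

      # Hypothetical distribution a model might assign after the prompt
      # "The capital of Atlantis is". Every option is grammatically
      # plausible; none is grounded in fact, because no fact exists.
      next_token_probs = {
          "Poseidonis": 0.40,
          "Atlantia": 0.25,
          "unknown": 0.20,
          "Meridia": 0.15,
      }

      def sample_next_token(probs: dict[str, float]) -> str:
          """Sample one token proportional to predicted probability.
          The sampler has no notion of truth, only of likelihood."""
          tokens = list(probs)
          weights = [probs[t] for t in tokens]
          return random.choices(tokens, weights=weights, k=1)[0]

      print("The capital of Atlantis is", sample_next_token(next_token_probs))
      # Most runs confidently print a city name: a fluent answer to a
      # question with no correct answer. Confabulation by construction.
      ```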

      Think of it like Wolfram Alpha for human behavior.

      People look at LLMs as the public face of AI, but they aren’t even close to the most important part of it.