For some time now the AI bubble has been growing, and its consequences with it: flash storage prices are rising, electricity prices have gone wild in some places, and GPUs… sadly, nothing more needs to be said there… NVIDIA became the most valuable company in the world, exceeding $4T in market value.

So when will all of this go crazy and finally burst? When will prices go back down, with speculators rushing out of it?

Open question, feel free to explain as broadly as you can. I’m not a finance person, so I’m really interested in some analysis of the situation :)

  • zd9@lemmy.world · 3 days ago

    As an AI researcher with over a decade of research experience and publications, it’s kind of funny to see the general public’s reaction to what they think AI is. It’s not that AI is a problem (though it will be, and AGI is coming sooner than you know it); it’s that these products are being used for pure profit above all else.

    Just to clarify, AI has been used in every single industry for almost two decades. It’s in every aspect of life. The public now thinks of AI as funny pictures or a chatbot, but that’s just the smallest tip of the iceberg.

      • zd9@lemmy.world · edited · 23 hours ago

        I don’t personally work in the AGI space, but there have been some massive improvements even within the last 2 years. The public only has access to the most constrained, well-understood models, and even those are pretty good. I (and LeCun, Hinton, and other big names) don’t think transformers + massive compute are the solution to AGI, but even that combination now leads to emergent, unexplainable capabilities. I work in the field and even I think it’s magic sometimes.

        edit: lol I should’ve expected to be downvoted for talking about AI, in the Fuck AI community

        • MonkderVierte@lemmy.zip · 2 days ago

          but even that combination now leads to emergent unexplainable capabilities

          Ok, but that’s only a sign that you don’t understand it well enough. Which would be catastrophic in the case of AGI.

          • zd9@lemmy.world · 23 hours ago

            Absolutely. That’s called “learning”. We observe some kind of behavior, investigate, make discoveries, then incorporate them into the broader knowledge base. As for AGI, it’s going to be just like humans: we can probe behaviors on the surface and investigate second-order effects (through things like linear probes), but we won’t be able to understand 100% of exactly how it makes a decision.
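
            To make the “linear probe” part concrete, here’s a minimal sketch (purely illustrative, not anyone’s actual setup): it assumes you’ve already extracted hidden activations from a frozen model and have binary labels for some concept, and it just checks whether a plain linear classifier can recover that concept from the activations.

            ```python
            # Minimal linear-probe sketch (illustrative; the arrays below are a toy
            # stand-in for activations extracted from a real frozen model).
            import numpy as np
            from sklearn.linear_model import LogisticRegression
            from sklearn.model_selection import train_test_split

            rng = np.random.default_rng(0)
            activations = rng.normal(size=(1000, 768))    # hypothetical hidden states, one row per input
            labels = (activations[:, 0] > 0).astype(int)  # toy stand-in for a real concept label

            X_train, X_test, y_train, y_test = train_test_split(
                activations, labels, test_size=0.2, random_state=0
            )

            # The "probe" is just a linear classifier trained on frozen activations.
            probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

            # High held-out accuracy suggests the concept is linearly decodable from
            # that representation; near-chance accuracy suggests it isn't.
            print("probe accuracy:", probe.score(X_test, y_test))
            ```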

          • zd9@lemmy.world · 3 days ago

            Basically, LLMs can do things they weren’t explicitly trained to do once a certain relative scale is reached. This is for LLMs, but other model families show (considerably less) potential too. Keep in mind this is from THREE YEARS AGO: https://arxiv.org/pdf/2206.07682

            and it’s only accelerated since
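
            Roughly, what that paper means by “emergent”: performance on some task sits near chance for smaller models and then jumps well above chance past some scale. A toy sketch of that pattern (made-up numbers, just to illustrate the definition):

            ```python
            # Toy sketch of the "emergent ability" pattern (numbers are made up).
            # An ability is flagged as emergent if small models score near chance
            # while the largest models score well above it.
            chance = 0.25  # e.g. 4-way multiple choice

            # (model scale in parameters, accuracy on some task) -- hypothetical values
            results = [(1e8, 0.24), (1e9, 0.25), (1e10, 0.26), (1e11, 0.61), (1e12, 0.74)]

            def looks_emergent(results, chance, tol=0.03, margin=0.15):
                accs = [acc for _, acc in sorted(results)]  # sort by scale
                near_chance_early = all(a <= chance + tol for a in accs[:3])
                well_above_late = accs[-1] >= chance + margin
                return near_chance_early and well_above_late

            print(looks_emergent(results, chance))  # True for this toy curve
            ```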

            • very_well_lost@lemmy.world · 3 days ago

              Basically, LLMs can do things they weren’t explicitly trained to do once a certain relative scale is reached.

              What are some examples?

              • zd9@lemmy.world · 3 days ago

                Too much to write here; look at Table 1 in the paper posted above, and you can explore from there.

                • very_well_lost@lemmy.world · 3 days ago

                  I don’t find that terribly compelling… It looks like there’s also a large body of research disputing that paper and others like it.

                  Here is just one such paper that presents a pretty convincing argument that these behaviors are not ‘emergent’ at all and only seem that way when measured using bad statistics: https://arxiv.org/pdf/2304.15004
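
                  Roughly, that paper’s argument: a model’s per-token ability can improve perfectly smoothly with scale, but an all-or-nothing metric like exact match over a long answer turns the smooth curve into an apparent cliff. A toy sketch with made-up numbers:

                  ```python
                  # Toy illustration of the metric-artifact argument (made-up numbers):
                  # per-token accuracy improves smoothly with scale, but exact match over a
                  # 20-token answer (p ** 20) looks like a sudden "emergent" jump.
                  answer_length = 20

                  # (model scale, per-token accuracy) -- a smooth, hypothetical improvement
                  per_token = [(1e8, 0.80), (1e9, 0.85), (1e10, 0.90), (1e11, 0.95), (1e12, 0.99)]

                  for scale, p in per_token:
                      exact_match = p ** answer_length
                      print(f"{scale:8.0e}  per-token={p:.2f}  exact-match={exact_match:.3f}")

                  # Exact match crawls from ~0.01 to ~0.12 over the first three scales, then
                  # leaps to ~0.36 and ~0.82, even though per-token skill rose steadily.
                  ```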

                  • zd9@lemmy.world · 3 days ago

                    As with anything, especially in a field moving this fast, it’s of course not black and white. Here’s an article I just found that goes into more detail if you’re curious. The first paper I shared was the one I read a while ago, but there are dozens of them. Also, I don’t work in NLP; I’m more in computer vision and physics-informed neural networks (PINNs), so I don’t know all of the most recent developments in LLMs (though I use ViTs in my work all the time).

    • foremanguy@lemmy.ml (OP) · 3 days ago

      Given that, what’s your opinion about its future?

      Do you think it will stay stuck in a speculative, for-profit environment, or will it eventually become a real technology in its own right, driven by research and solid results?

      • zd9@lemmy.world · 3 days ago

        Look up the Gartner hype cycle; every new technology goes through a similar process. Like I said, AI has been used in every possible industry: energy, banking, healthcare, climate, defense and intelligence, social media (“the algorithm”)… name a sector and there are already very mature pipelines there.