For some time now the AI bubble has been growing, and its consequences with it: flash storage prices are rising, electricity costs have gone wild in some places, and GPUs… no need to say anything, sadly. NVIDIA became the most valuable company in the world, exceeding a $4T market cap.

So when will all of this finally grow to the point of bursting? When will prices come back down and speculators rush out?

Open question, feel free to explain as widely as you can. I’m not a finance person, so I’m really interested in some analysis of the situation :)

  • Riskable@programming.dev · 3 days ago

    I’m going to be the crazy person here and say, “it’s not a bubble.” That is, it’s not what the “fuck AI” community is thinking about as “a bubble” (e.g. the housing market collapse).

    OpenAI could very well fold and take Oracle along with it (yes, please!) but even if that happens there’s far, far too much demand for AI stuff for any sort of grand “popping” economic effect.

    To this community, “AI” means “AI chat” with people using Big AI company services like ChatGPT for stupid or malicious purposes. But “AI” as an industry is way, way TF bigger than that.

    Think of Nvidia GPUs as generic infrastructure like roads: You can use a road to transport all sorts of things using all sorts of vehicles. What the “fuck AI” community cares about is the billionaire drunkenly driving a sports car anywhere and everywhere without consequences when they cause an accident or even kill people. However, the road itself isn’t responsible for that.

    Even if OpenAI, Anthropic, Copilot, and Gemini end up failing or being disused, Nvidia and the demand for AI services and solutions will continue to grow for decades. Saying that “AI is a bubble” or a “fad” is like going back in time to the early 1990s and saying the Internet is a fad.

    • someacnt@sh.itjust.works · 2 days ago

To be precise, it is an LLM bubble. We don’t know whether other, more useful flavors of AI will need that many resources.

      • Riskable@programming.dev · 22 hours ago

        Well, I sort of agree but I’d be more specific: It’s a chat LLM bubble where all the money is going to a few players that do that. LLMs for other purposes (e.g. programming) have already begun to diverge and I believe they’ll continue to do so.

        My hypothesis is that programming LLMs will reach a plateau soon where even the open source models perform within 90% of the most expensive, top-performing commercial models (e.g. Claude Sonnet 4.5 and Gemini 3). When that happens, the market will switch from a handful of big players to a whole lot of service providers that merely host all the open source models like Ollama Cloud does today.

    • greygore@lemmy.world · 3 days ago

I’m going to guess that you’re younger, and that if you were alive through the dot-com bubble you weren’t old enough to understand what was going on. Back then, no one was seriously saying that the internet was a fad or going away. You can find a few prognosticators who would contradict me, but no one took them seriously; they were ridiculed even then. No, it was a bubble because people were irrationally throwing money at companies that did “web stuff” and added “dot com” to their names. Investors were so hyped up about the potential of the internet that they threw all common sense out the window and gave money to anyone, and companies in turn dumped money into salaries and expensive perks to hire people who barely knew what they were doing.

The amount of waste and excess was appalling to people when the bubble popped and they saw how much companies had spent on unnecessary bullshit. I remember auctions where you could pick up super expensive Aeron chairs for dirt cheap, because clueless companies didn’t know how to spend money and all the big internet companies had plenty of chairs like that. A ton of money was dumped into infrastructure too. Cisco made a boatload of money during the bubble, and afterward there was so much used equipment floating around that their stock is only now, twenty-five years later, recovering to its dot-com peak. Companies like Worldcom grew enormous on this infrastructure boom, and when the bubble popped they engaged in fraudulent accounting shenanigans in order to appear healthy.

I would argue the demand for “the web” in 1999 far exceeded the demand for AI today, and yet there was absolutely a crash like the housing market collapse. It’s not that there’s no use for AI (some of the capital expenditure may even prove useful in the decade ahead); it’s that so much money was thrown around irresponsibly at anyone who claimed to have a web presence or something to do with the internet, even when they didn’t, that there was a huge economic contraction when people finally sobered up, i.e. the bubble popping.

      I see the exact same excesses now:

      • Companies that have nothing to do with AI shoehorning it in to claim they’re part of this big boom
• Investors throwing money at ludicrously bad ideas because they added “AI” to a product no one wanted to begin with
      • People with expertise in the field having absurd amounts of money thrown at them to gain competitive advantage
      • Brand new companies worth billions of dollars that are not pulling in a fraction of the revenue necessary to justify that valuation

      That said, I believe that this is even worse than the dot com bubble for at least two reasons:

      1. Back then, these companies were public, and were required to disclose a bunch of financial information as a result. Sure a lot of people ignored the warning signs and got caught up in the hype and FOMO, but the fraud was so much easier to unravel because their numbers weren’t hidden like today’s private AI companies.
      2. Dark fiber after the bubble popped is still useful today, so a lot of the money spent then enabled future services, as you alluded to. On the other hand, today’s GPUs will be obsolete in a few years. Aside from being surpassed by newer technologies, the cards themselves will only run for so long before failure. The data centers themselves will still have some value, and some of the electrical generation being built will be useful, but overall, the long term benefits won’t be nearly as transformational.

      So yeah, it’s a bubble, it will pop, and it will suck when it does. AI isn’t going away but most of the companies soaking up money now will end up as historical footnotes, like Netscape or Yahoo are today. LLMs and other generative AI will remain inefficient, continue to “hallucinate”, be used for propaganda, and further alienate people from one another. Yay?

    • foremanguy@lemmy.ml (OP) · 3 days ago

To reply: I’m absolutely not denying that AI as such is a tech revolution; it’s just not one in the way it’s being presented today.

Your point of view about this community is surely right, but I would add that we also care about the global hype and advertising around AI.

AI as a technology is not a bubble, that’s for sure.
But the way AI is developed and funded today makes it one.
Just seeing NVIDIA’s value go crazy over the last two years shows that the (over)hype counts for far more than the real technology behind it.

By that I’m just saying that, as every time (crypto, I’m looking at you), speculation has ruined it (for now). I really hope it will go the way of the internet: separate itself from the speculative side and become an open technology acquired by humanity.

    • zd9@lemmy.world · 3 days ago

As an AI researcher with over a decade of research experience and publications, it’s kind of funny to see the general public’s reaction to what they think AI is. It’s not that AI is a problem (though it will be, and AGI is coming sooner than you know it), it’s that these products are being used for pure profit over anything else.

      Just to clarify, AI has been used in every single industry for almost two decades. It’s in every aspect of life. The public now thinks of AI as funny pictures or a chatbot, but that’s just the smallest tip of the iceberg.

        • zd9@lemmy.world · 23 hours ago (edited)

I don’t personally work in the AGI space, but there have been some massive improvements within the last 2 years even. The public only has access to the most constrained, well-understood models, and even those are pretty good. I (and LeCun, Hinton, other big names) don’t think transformers + massive compute are the solution to AGI, but even that combination now leads to emergent, unexplainable capabilities. I work in the field and even I think it’s magic sometimes.

          edit: lol I should’ve expected to be downvoted for talking about AI, in the Fuck AI community

          • MonkderVierte@lemmy.zip · 2 days ago

            but even that combination now leads to emergent unexplainable capabilities

Ok, but that’s only a sign that you don’t understand it well enough. Which would be catastrophic in the case of AGI.

            • zd9@lemmy.world · 23 hours ago

              Absolutely. That’s called “learning”. We observe some kind of behavior, investigate, make discoveries, then incorporate into the broader knowledge base. As for AGI, it’s going to be just like humans, where we can probe behaviors on the surface and investigate 2nd order effects (through things like linear probes), but we won’t be able to understand 100% of exactly how it makes a decision.
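For anyone wondering what a linear probe actually is: you train a simple linear classifier on a model’s hidden activations and check whether some property can be read off linearly. A toy sketch (the synthetic data below stands in for real model activations; all shapes and numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 256-dim hidden states from some model; the probed
# "property" (label 0 or 1) is encoded along one direction plus noise.
n, d = 1000, 256
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)
activations = rng.normal(size=(n, d)) + np.outer(2.0 * labels - 1.0, direction)

# Logistic-regression probe trained by plain gradient descent.
w = np.zeros(d)
for _ in range(200):
    z = np.clip(activations @ w, -30.0, 30.0)  # clip logits to avoid overflow
    p = 1.0 / (1.0 + np.exp(-z))               # predicted probability of label 1
    w -= 0.1 * activations.T @ (p - labels) / n

accuracy = ((activations @ w > 0) == (labels == 1)).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If the probe’s accuracy is high, the property is linearly decodable from the activations; real interpretability work does the same thing with actual hidden states from a trained network.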

            • zd9@lemmy.world · 3 days ago

Basically, LLMs can do things they weren’t explicitly trained to do once a certain relative scale is reached. This result is for LLMs, but other model families show (considerably less) potential too. Keep in mind this is from THREE YEARS AGO: https://arxiv.org/pdf/2206.07682

              and it’s only accelerated since

              • very_well_lost@lemmy.world · 3 days ago

Basically, LLMs can do things they weren’t explicitly trained to do once a certain relative scale is reached.

                What are some examples?

                • zd9@lemmy.world · 3 days ago

Too much to write here. Look at Table 1 in the paper posted above, and you can explore from there.

                  • very_well_lost@lemmy.world · 3 days ago

                    I don’t find that terribly compelling… It looks like there’s also a large body of research disputing that paper and others like it.

                    Here is just one such paper that presents a pretty convincing argument that these behaviors are not ‘emergent’ at all and only seem that way when measured using bad statistics: https://arxiv.org/pdf/2304.15004
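The core of that paper’s statistical argument is easy to demonstrate with made-up numbers (everything below is illustrative, not data from either paper): a capability whose per-token accuracy improves smoothly with scale looks like a sudden “emergent” jump when measured with an all-or-nothing metric such as exact-match on a multi-token answer.

```python
# Hypothetical per-token accuracy that improves smoothly with model scale:
# no jumps, no surprises.
def per_token_accuracy(scale: float) -> float:
    return 1.0 - 0.5 / scale

# Exact-match on a 30-token answer: every token must be right, so the
# smooth curve gets raised to the 30th power and looks discontinuous.
def exact_match(scale: float, answer_len: int = 30) -> float:
    return per_token_accuracy(scale) ** answer_len

for scale in [1, 2, 4, 8, 16, 32]:
    print(f"scale {scale:>2}: "
          f"per-token {per_token_accuracy(scale):.3f}  "
          f"exact-match {exact_match(scale):.3f}")
```

The per-token column creeps up gradually while the exact-match column sits near zero and then shoots up, which is exactly the shape people point to as “emergence.”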

      • foremanguy@lemmy.ml (OP) · 3 days ago

In that case, what’s your opinion on its future?

Do you think it will stay stuck in a speculative, for-profit environment, or can it become a real technology in its own right, driven by research and real results?

        • zd9@lemmy.world · 3 days ago

          Look up the Gartner hype cycle. Every new technology goes through a similar process. Like I said, it has been used in every possible industry, things like energy, banking, healthcare, climate, defense and intelligence, social media (“the algorithm”), I mean name a sector and there are very mature pipelines already there.