Sam Altman is a perfect example of how deranged the AI industry is with respect to reality.
What, you don’t think OpenAI will be as big as Walmart and ExxonMobil in the next year? Or that Sam Altman shouldn’t ask for SEVEN TRILLION-WITH-A-“T” dollars to instantly become bigger than the entire current semiconductor industry despite no experience, plan, funding, or reasonable timeframe?
You need to get with the times. This is how it is now. Apparently. According to some very confused young people in other threads.
I assume none of those are real people and they’re all chatbots.
It’s funny watching these guys make Elon Musk–style promises for over 5 years while the media is only now starting to ask questions.
Musk still has people believing Tesla can do self-driving, for like 10 years now. And no one calls him out on that shit.
Altman’s lies are much more destructive because of the impact they will have on the economy: executives believe this stuff, and companies will be irreparably harmed. Many will not recover, depending on how far they go down the rabbit hole.
2010s grifting techniques are showing their limits.
Unfortunately this article isn’t really one of them. It spends most of its time defending the AI status quo. Ex:
. . . although Financial Times columnist Robert Armstrong noted this week that the MIT report “reads like something given away on the ‘research’ page of a large consultancy.” Its conclusions are fairly obvious, he said: People like ChatGPT for basic tasks and hate complicated enterprise systems, and companies that try to build their own AI usually fail.
The study attributes these failures to implementation problems rather than model quality. “The core issue? Not the quality of the AI models, but the ‘learning gap’ for both tools and organizations,” Fortune wrote about the study. Purchased AI tools succeed 67 percent of the time, while internally built systems succeed only one-third as often. This isn’t necessarily an indictment of AI technology as a whole—it’s potentially an indictment of corporate IT departments thinking they can out-engineer existing applications from AI service providers like OpenAI.
*eyeroll*
And who is pushing the idea that AI is a magic money printer for businesses? It could never be OpenAI and their ilk, they’re too honest for that! *Shoves massive piles of stolen training data under the couch*
They’re making Peter Molyneux look trustworthy.
Someone is going to lose a lot of money.
Also, we’re looking for someone to give us money.
That “someone” is pretty much everyone’s stock portfolios/pensions/retirement plans, as about 60% of stock market gains in 2025 have come from AI megacaps. The “magnificent 7” make up about 35% of the entire stock market at this point, and they’re all heavily invested (overleveraged) in speculative AI. When this bubble pops it’s going to be nasty.
Money, fools, soon parted.
The US taxpayer will be holding the bag…
So you are correct, we are the fools.
As always
The problem is that the money that these scam artists are pocketing should be going to Infrastructure, healthcare and social stability.
Well, at least when the bubble bursts, there will be surplus water supplies and electricity production; although it won’t be very well built, and the infrastructure will have been laid out for unused data centres.
God I hope so.
Soon too. Like really soon. Next month would be great.
Last month would have been better.
It’s getting to the point where I’m ready to quit some of my clients. For real. One of these days, I’m going to just be like whelp, if AI is the route you want to go, please by all means. And don’t call me, I’ll call you.
I have put options OOTM that expire in October in various (hyped) affected tech companies.
I won’t lose an amount that’s worth complaining over but I would be very happy with some of them to nose dive.
Hope it works for you!
This shit can take forever to crater though, bubbles are a real bitch for that sometimes. Look at Bitcoin, here we are a decade later, it’s still rollercoastering between absurd and literally insane. Our housing market is another example, it’s been in a bubble for almost 20 years. Everyone knows it’s a bubble, but what’s going to make it pop?
I’m not an economist, but I know that people only invest in stocks if they think they will be worth more tomorrow than today.
As long as people are convinced that this tech will result in AGI someday, they will keep investing.
And the gameplan for convincing people is not to build tech that is as useful as possible, or as good at fact-checking as possible, but as human-like as possible. The more people anthropomorphize LLMs, the more it seems like they can do stuff they actually can’t (reason, understand, empathize, etc).
OpenAI, Anthropic and others exploit this to the fullest. And I think breaking that spell is key.
There have been a lot of articles coming out recently (as in, in the last 24 hours) indicating that spell might be breaking:
- Is the AI bubble about to burst – and send the stock market into freefall? (Guardian)
- Nvidia Q2 Preview: AI Bubble Is Popping (Seeking Alpha)
- Is this an AI bubble? (NY Times)
- Say farewell to the AI bubble, and get ready for the crash (LA Times)
- Is The AI Bubble Bursting? Lessons From The Dot-Com Era (Forbes)
- Credit fuels the AI boom — and fears of a bubble (Fortune)
That’s interesting. Better sooner rather than later!
What will happen to all the datacenters? Crypto?
Prepared to rope himself?
Prepared to keep paying fake news for this slop to be inserted into my daily feed.
God I hope he gets taken out by a drunk driver.
Fucking grifter.
Obviously not, he’s gonna go travel on his yacht using the fat bonus he gave himself before handing control to another person, and pretend all these things never happened. That’s what every CEO does.
This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary.
I don’t find things that most corporate leaders say in contexts like this particularly interesting. Altman is willful, but he’s also rational enough not to make any public statements that he and his marketing department haven’t reviewed thoroughly. He’ll never accidentally confess to anything. Even when he acknowledges problems, he does so with a plan for OpenAI to make the best of the situation, and if acknowledging those problems weren’t intended to help OpenAI somehow, he wouldn’t have done it.
Nailed it. Only the dumbest, Musk front and center, spew random crap into the world.
I wonder about Musk sometimes. He often sounds downright deranged, but so does Trump and Trump has actually been very successful at using social media to spread his message and win over supporters. I know I’m in a bubble where people don’t respect behavior like theirs, but what if outside this bubble their behavior is actually a winning political strategy?
It’s because people don’t see it as affecting their lives, they view politics the same way they view reality television. They want entertainment, not realizing that what makes a good entertainer or TV show makes a terrible government or business.
My own hypothesis is different. People do view politics as affecting their lives, but many of them view intellectuals as out of touch, uninterested, or even hostile and they want someone in charge who isn’t one of those people. It’s the “good guy to have a beer with” sentiment which helped elect GWB but taken to an extreme. In this extreme, “owning the libs” is inherently good, not just a means to an end.
I’m not sure “intellectuals” is the right word here. The people I have in mind are college-educated, but that’s not all there is to it. They are the sort of people Trump calls the “deep state,” but they’re not just government employees — they’re judges who say that something is unconstitutional, but they’re also HR heads who say something is racist, or scientists who say something is harmful to the environment. Large parts of the public now don’t just lack respect for them but actively hate them.