There are two major flaws: none of those examples had the potential to be highly profitable, and all three had major ethical and moral concerns that the general public wouldn't approve of.
AI is being packaged and sold as a toy, so while people do object to it, it's not on the same scale as playing god or literal war crimes.
Neither does so-called “AI”. OpenAI, Anthropic and the rest are burning money like it’s nobody’s business.
On the corporate side of things, it's being sold as a money maker. It costs a fraction of what an experienced employee does and is marketed as less prone to mistakes or bungles. Obviously that's not true, but since when has a CEO ever turned down a promise that's too good to be true?
That grift can't work indefinitely, though. At some point the piper will have to be paid.
Microsoft is already making billions from Copilot licences, and it's the same for Claude, Gemini, and the others. Just because they're also spending fuck tons of money building data centres to expand their AI business doesn't mean they aren't making billions from it already.
Nope, the cost of inference is still way too high for any of that to actually be profitable.