Based on one of TIME’s two cover photos, some of those people appear to be Nvidia’s Jensen Huang, Tesla’s Elon Musk, OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, AMD’s Lisa Su, Anthropic’s Dario Amodei, Google DeepMind’s Demis Hassabis, and World Labs’s Fei-Fei Li — all individuals who raced “both beside and against each other.”

  • NoTagBacks@lemmy.dbzer0.com · 23 hours ago

    This is an interesting one, and I think I’ll actually agree with them on this, though I suspect for different reasons. I should probably acknowledge that TIME’s Person of the Year is about whoever was most influential that year, regardless of whether that influence was a good thing. Considering the gargantuan size of the AI bubble in the economy, the invasion of generative AI slop across the internet, and the reckless, predatory behavior of these AI companies, I’d definitely agree they deserve to be highlighted as maybe the most influential people this year. Still, I can’t help but wonder how well this take will age.

    I certainly hard agree with the general sentiment of ‘fuck AI’, especially for this current iteration of generative AI; I believe we’re seeing many of the cracks in the AI bubble starting to form, and I wholeheartedly agree that this current form of AI just “ain’t it” and is effectively a dead end. But I don’t think this take will age poorly for any of those reasons, at least not primarily. I think it will be more about understanding how this current push of AI is actually influential, and therefore giving more weight (heh) to the data these AI models train on. I understand naming these tech bros Person of the Year as the faces of AI training, but the issue with how this will age comes down specifically to how the models are trained, marketed, and used. I won’t say something silly like “ackshually, the training data should be person of the year”, but I do think the data itself is the more influential thing, and I disagree that current AI models communicate that information accurately, well, or even pervasively. Considering the amount of human intervention required to use AI models with warranted confidence, and the limited number of legitimate use cases, I find the influence of current AI models to be superficial.

    Vague as that is, I think a better candidate would be “internet contributors of all time”, or something along those lines, because I think the influence of those who have actively contributed to the online world over time has become much more pervasive in the last few years while flying under the radar. An admittedly weak argument is that current AI models simply wouldn’t exist without the massive amount of internet engagement accumulated over time. I should also point out the very obvious ultra-horny pursuit of any training data these tech bros can get their hands on. And finally, probably a good majority of the actual architects who did the actual work on these AI models were among those internet contributors themselves, some maybe even to an obnoxious degree. In any case, I’m pointing to historic internet data, lumped in with the data still being generated today, as the real influence, mainly because these tech companies and their enshittified AI models are so unusable and so obnoxiously marketed that they’re reviving an appreciation for a more human internet. In light of this, I think we’ll find that the influence of generative AI and these tech companies has been greatly exaggerated, while actual use of the internet is strongly resisting the removal and erasure of its human elements.

    Or maybe I’ve been talking out of my ass. I’ll admit that it wouldn’t take much to convince me what I’ve written is misguided, but I find the tension created by AI bullshit to be fascinating in how it has influenced us.