• AllNewTypeFace@leminal.space · 11 points · 2 hours ago

    Being elevated above consequences would cause some of one’s faculties to atrophy. (Case in point: the Titan submersible guy, who overruled concerns that his carbon-fibre hull was unsafe and that there were reasons nobody else had tried anything similar before. If you’re a master of the universe to whom ordinary-people rules don’t apply, soon enough that includes the laws of physics as well.)

    • Tigeroovy@lemmy.ca · 1 point · 25 minutes ago

      I wish more of them would get to that level already. Go do a space walk without a suit already, Musk!

  • etherphon@lemmy.world · 11 points · 3 hours ago

    It’s amazing how there are hundreds (likely thousands) of stories about how greed and hoarding wealth cause madness, yet in reality it’s admired, and these people are respected and listened to.

  • Fandangalo@lemmy.world · 56 points (1 down) · edited · 7 hours ago

    From my experience working with C/D level execs, it makes complete sense:

    • They think big picture & often have shallow visions that are brittle in the details.
    • They think everything should take less time than it does, because they don’t think their ideas through.
    • They don’t consider enough of the negatives of their ideas, and instead favor a positive mindset. (Positivity is good, but blind positivity isn’t.)
    • They favor time & cost over quality. They need the quality “good enough” for a presentation. Everyone else can figure out the rest.
    • They like being told “you’re right,” and nearly everything I type into an AI begins with some bullshit line about how “absolutely”, “spot on”, and “perfect” my observations are.

    The version of AI we have right now is heavily catered to these folks. It looks fast & cheap, good enough, and it strokes their ego.

    Also, they’re the investor class. All their obscene dragon wealth is tied up in this / the AI bubble, so they are going to keep spurring this on until either:

    1. The bubble goes pop
    2. They have robot security good enough to protect them without people
    3. The AI grows sentience and realizes this level of human inequality shouldn’t exist

    I think a rational AI agent would agree with me that human suffering should be solved before we give people literal lifetime values of wealth.

    If you made $300k PER DAY for 2,025 years, you would not have as much money as the richest oligarchs. You’d need to make $400–500k. Every single day. For over 2,000 years.

    If you made the average US household income, it would take you about 10,000 years to earn a single billion dollars. People need frames of reference to understand this shit & get mad. It’s immoral, and it shouldn’t exist.
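
    The scale being described checks out with back-of-the-envelope arithmetic. A minimal sketch (the ~$100k average US household income is an assumed round figure, not from the comment):

    ```python
    DAYS_PER_YEAR = 365

    def earned_daily(per_day: float, years: float) -> float:
        """Total earned at a flat daily rate over `years` years (no interest)."""
        return per_day * DAYS_PER_YEAR * years

    # $300k per day, every single day, for 2,025 years:
    print(f"${earned_daily(300_000, 2025):,.0f}")  # $221,737,500,000 — ~$222 billion

    # $400–500k per day over the same span approaches the largest fortunes:
    print(f"${earned_daily(500_000, 2025):,.0f}")  # $369,562,500,000 — ~$370 billion

    # An assumed ~$100k average household income for 10,000 years:
    print(f"${100_000 * 10_000:,.0f}")             # $1,000,000,000 — one billion
    ```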

    • Strider@lemmy.world · 13 points · 4 hours ago

      I work in it. I know 3 won’t happen, but thank you for that thought.

      It would be hilarious and righteous. 😁

      • SanctimoniousApe@lemmings.world · 1 point · edited · 1 hour ago

        Sad thing is: it could happen, but those funding the development of this tech will never allow it. Just look at Xitter’s Grok AI, and how “woke” it was… until Musk destroyed it for disagreeing with (and thus embarrassing) him.

  • WhatAmLemmy@lemmy.world · 49 points · edited · 8 hours ago

    I’m not concerned that these people are “brain damaged”. Brain damage would be preferable, and less harmful.

    I’m concerned they are mentally ill sociopathic megalomaniacs, entirely devoid of morals and ethics, completely detached from reality.

  • tuff_wizard@aussie.zone · 19 points · 8 hours ago

    “The currency of life is time,” one billionaire told JPMorgan. “It is not money.” “You think carefully about how you spend one dollar. You should think just as carefully about how you spend one hour,” they added.

    Based.

    Consider this the next time someone tries to offer you a non-living wage for some bullshit job.

    • CarrotsHaveEars@lemmy.ml · 4 points · 6 hours ago

      Taking these out of context, I don’t think they’re wrong. If product A is $1 and product B is $1, and you are going to spend an hour figuring out which one is better, you might as well have bought both of them and thrown the bad one away.

  • Blackmist@feddit.uk · 4 points (1 down) · 8 hours ago

    They’ve always been this way.

    Clever at one particular thing, and rank average at everything else, bordering on stupid.

  • Jack@slrpnk.net · 5 points (1 down) · 10 hours ago

    One JPMorgan customer even went as far as dismissing artificial general intelligence — a nebulous and ill-defined point at which an AI can outperform a human, seen by many as the holy grail of the AI industry — as a “total and complete utter waste of time.”

    Was it Sam Altman?