• morto@piefed.social · 17 points · 10 hours ago

    In a postgraduate class, everyone was praising AI, giving it nicknames and even calling it their friend (yes, friend). One day, the professor and a colleague were discussing some code when I approached, and they started their routine of bullying me for being dumb and not using AI. Then I looked at his code and asked to test his core algorithm, which he had converted from Fortran code and “enhanced”. I ran it with some test data, compared the output against the original code, and the result was different! They blindly trusted AI code that deviated from their theoretical methodology, and they’re publishing papers with those results!
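
    For reference, the check that caught it is trivial to automate. A minimal sketch (module and function names here are made up; the real ones would be the compiled Fortran original, e.g. wrapped with f2py, and the AI-converted version):

    ```python
    import numpy as np

    # Hypothetical imports, illustrative only: the original Fortran
    # routine and the AI-"enhanced" rewrite of it.
    from fortran_original import core_algorithm as reference
    from ai_enhanced import core_algorithm as candidate

    rng = np.random.default_rng(42)
    for _ in range(100):
        x = rng.normal(size=1000)  # representative test inputs
        # Fails loudly if the rewrite drifts beyond float tolerance
        assert np.allclose(reference(x), candidate(x), rtol=1e-10), \
            "converted code deviates from the original!"
    print("all test cases match")
    ```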

    Even after I showed them the different result, they weren’t convinced of anything and still bully me for not using AI. Seriously, this shit has become some sort of cult at this point. People are becoming irrational. If people at other universities are behaving the same way and publishing like this, I’m seriously concerned for the future of science and humanity itself. Maybe we should archive everything published up to 2022 to leave as a base for the survivors of our downfall.

    • Xenny@lemmy.world · 1 point · 4 hours ago

      That’s not a bad idea. I’m already downloading lots of human knowledge and media that I want backed up, because I can’t trust humanity to keep it available anymore.

    • MyMindIsLikeAnOcean@piefed.world · 3 points · 7 hours ago

      The way it was described to me by some academics is that it’s useful…but only as a “research assistant” to bounce ideas off of and to surface arcane or tertiary concepts you might not have considered (after you vet them thoroughly, of course).

      The danger, as described by the same academics, is that it can act as a “buddy” who confirms your biases. It can generate truly plausible bullshit to support deeply flawed hypotheses, for example. Their main concern is it “learning” to stroke the egos of the people using it, creating a feedback loop and its own bubbles of bullshit.

      • tym@lemmy.world · 1 point · 1 hour ago (edited)

        So, LinkedIn? What if the real artificial intelligence was the LinkedIn lunatics we met along the way?