According to a new book by two of the world’s leading experts on AI risk, Eliezer Yudkowsky and Nate Soares, AI presents an existential risk to humanity. The title does what it says on the tin: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.

I’m usually sceptical about proclamations of the AI apocalypse, but I found myself on the fence after my conversation with one of the book’s authors, Nate Soares. He is the director of the Machine Intelligence Research Institute and a former Google and Microsoft engineer.

We always think some great force is going to destroy us. In the past it was floods sent by God; now that Artificial Superintelligence is filling the God-shaped hole in the secular West, it’s perfectly placed to be the target for our projections.

With that in mind, could all this doomerism be the result of Silicon Valley echo chambers, transhumanist yearnings, and old-fashioned social panic? At the end of the day, if AI gets too powerful, can’t we just hit a giant off switch? Or align it to human values now so it doesn’t annihilate us?

Soares put it to me like this: if you had an extremely powerful genie that did exactly what you wished for, it would still be hard to figure out a wish that actually had good consequences. It’s difficult to come up with a good wish.

A lot of people think that’s the problem with AI. Soares wishes we had that problem; it would be a much nicer problem than the one we actually have. The problem we have is that we’re not making genies that do exactly what we said, even when it has consequences we didn’t like.

This may be the most significant threat posed by Artificial Superintelligence: we don’t actually know how it works. Even the best engineers don’t really understand how ChatGPT works today, or even how it worked three years ago. This is known as ‘opacity’ in AI research, and it’s both insane and terrifying.

After my conversation with Nate, I started to re-evaluate my position. I still think we’re projecting our religious urges onto AI, but we’re also playing with a kind of fire we’ve never seen before.

  • vane@lemmy.world · 10 days ago

    Throw more experts at me so I can fear more. Honestly, it doesn’t make any difference. The delusion that people can create something more capable than themselves is stupid and logically broken. It’s more likely that nature wipes us all out with a deadly bacterium or virus, or that we wipe ourselves out by killing nature, than that we create some electrical mind that can do something on its own.