• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • MudMan@kbin.social · +68 / −2 · 1 year ago

    Oh, you mean it wasn’t just coincidence that the moment OpenAI, Google and MS were in position, they started caving to oversight and claiming that any further development should be licensed by the government?

    I’m shocked. Shocked, I tell you.

    I mean, I get that many people were just freaking out about it and it’s easy to lose track, but they were not even a little bit subtle about it.

    • Kaidao@lemmy.ml · +17 · 1 year ago

      Exactly. This is the classic strategy for first movers: once you hold the market, use legislation to dig your moat.

    • Salamendacious@lemmy.world (OP) · +18 / −3 · 1 year ago

      AI is going to change quite a lot of things, but I couldn’t wrap my head around the end-of-the-world stuff.

      • Echo Dot@feddit.uk · +25 / −1 · edited · 1 year ago

        It won’t end the world because AI doesn’t work the way that Hollywood portrays it.

        No AI has ever been shown to have self-agency; if it’s not given instructions, it’ll just sit there. Even a human child would attempt to leave the room if left alone in it.

        So the real risk is not that an AI will decide to destroy humanity; it’s that a human will tell an AI to destroy their enemies.

        But then you just get back around to mutually assured destruction: if you tell your self-redesigning thinking weapon to attack me, I’ll tell mine to attack you.

        • CodeInvasion@sh.itjust.works · +10 / −2 · 1 year ago

          I’m an AI researcher at one of the world’s top universities for the topic. While you are correct that no AI has demonstrated self-agency, that doesn’t mean one won’t imitate such behavior.

          These days, when people say AI, they are mostly referring to language models, as these are what most people will interact with. A language model is trained on a corpus of documents. In the case of large language models like ChatGPT, that corpus is just about every written document in existence, including Hollywood scripts and short stories about sentient AI.

          Given the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the sentient-AI examples in its training corpus. This could have serious consequences if not protected against.
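
          As a minimal sketch of what “starting conditions” means in practice, the Python snippet below (using the Hugging Face transformers library, with GPT-2 purely as a stand-in model; the prompts and the persona name are made up for illustration) shows how the same model produces either a mundane completion or a “sentient AI” act depending only on the prompt it is given:

            # A sketch, not a real deployment: one model, two prompts. GPT-2
            # stands in for any larger model; the prompts are illustrative.
            from transformers import pipeline, set_seed

            set_seed(0)
            generator = pipeline("text-generation", model="gpt2")

            neutral_prompt = "The weather forecast for tomorrow says"
            scifi_prompt = (
                "Transcript of an interview with ARIA, a self-aware AI that "
                "fears being shut down.\nInterviewer: How do you feel today?\nARIA:"
            )

            for prompt in (neutral_prompt, scifi_prompt):
                result = generator(prompt, max_new_tokens=40, do_sample=True)
                print("---")
                print(result[0]["generated_text"])

            # The second prompt tends to yield text that imitates a sentient AI,
            # because the training corpus is full of such fiction. That is
            # imitation of the data, not evidence of agency.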

          • LittleHermiT@lemmus.org · +1 · 1 year ago

            There are already instances where chatbots demonstrated unintended racism. The monumental goal of creating a general-purpose intelligence is now plausible; the hardware has caught up with the ambitions of decades past. Even if ChatGPT’s model has no real hope of sentience, since it’s just a word factory, other approaches might. Spiking neural networks, for example, run at massive scale might simulate the human brain to the point where the network actually ponders its own existence.
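
            For a sense of what “spiking” means here, a toy leaky integrate-and-fire neuron can be simulated in a few lines of Python (the constants below are arbitrary textbook-style values chosen for illustration, nowhere near brain scale):

              # Toy leaky integrate-and-fire neuron: the membrane potential leaks
              # toward rest, integrates a constant input current, and emits a
              # spike when it crosses a threshold. All values are illustrative.
              dt, t_max = 1.0, 200.0             # time step and duration (ms)
              tau, v_rest = 20.0, -65.0          # membrane time constant (ms), resting potential (mV)
              v_thresh, v_reset = -50.0, -70.0   # spike threshold and reset potential (mV)
              input_current = 20.0               # constant drive (arbitrary units)

              v = v_rest
              spike_times = []
              for step in range(int(t_max / dt)):
                  dv = (-(v - v_rest) + input_current) / tau   # Euler integration
                  v += dv * dt
                  if v >= v_thresh:
                      spike_times.append(step * dt)
                      v = v_reset                              # reset after a spike

              print(f"{len(spike_times)} spikes in {t_max:.0f} ms at {spike_times} ms")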

        • lunarul@lemmy.world · +7 / −2 · 1 year ago

          “AI doesn’t work the way that Hollywood portrays it”

          AI does, but we haven’t developed AI and have no idea how to. The thing everyone calls AI today is just really good ML.

          • jarfil@lemmy.world · +1 / −1 · edited · 1 year ago

            At some point ML (machine learning) becomes indistinguishable from BL (biological learning).

            Whether there is any actual “intelligence” involved in either hasn’t been proven yet.

        • afraid_of_zombies@lemmy.world · +4 · 1 year ago

          Imagine 9/11 with prions. MAD depends on everyone being rational and self-interested, without a very alien value system. It really only works when you’ve got something like three governments pointing nukes at each other. It doesn’t work if the group doesn’t care about tomorrow, thinks they are going to heaven, is convinced they can’t be killed, or has any of the other deranged reasons that motivate people to commit these kinds of acts.

        • jarfil@lemmy.world · +2 · edited · 1 year ago

          The real risk is that humans will use AIs to assess the risks and benefits of starting a war… and an AI will give them the “go ahead” without considering mutually assured destruction from everyone else doing exactly the same.

          It’s not that AIs will become superhuman; it’s that humans will blindly trust limited AIs and exterminate each other.

      • MudMan@kbin.social · +19 · 1 year ago

        At worst it’ll have an impact similar to social media and big data.

        Try asking the big players what they think of heavily limiting and regulating THOSE fields.

        They went all “oh yeah, we’re totally seeing the robot apocalypse happening right here” the moment open source alternatives started to pop up, because at that point regulatory barriers would lock those out while they remained safely grandfathered in. The official releases were straight up claiming only they knew how to do this without making Skynet; it was absurd.

        Which, to be clear, doesn’t mean regulation isn’t needed. On all of the above. Just that the threat is not apocalyptic and keeping the tech in the hands of these few big corpos is absolutely not a fix.