• RandomVideos@programming.dev
    22 days ago

    Had the big ones not stolen their training data, and were they not being used to leverage corporate goals over humans, they could be a very useful thing

    AI still has the problems of spam (propaganda being its most dangerous variant), disinformation, and impersonation of real artists. These could be fixed if every AI image/video carried a watermark, but I don't think that could be enforced well enough to completely eliminate these issues

    • southsamurai@sh.itjust.works
      22 days ago

      Those specific flaws come down to the same issue, though. The training data was flawed enough, in large part due to being stolen wholesale, that it skews things toward counterfeits being easier. I would agree that in the absence of legislation, no for-profit business based on AI will ever tag its output. It could be an easier task for non-profit and/or open-source models, though. Definitely something that needs addressing.
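      For what tagging output could concretely look like for an open-source model, here is a minimal sketch: a provenance record with the model name plus an HMAC over the image bytes, so anyone holding the key can check that a file really came from that model and wasn't altered. All names, the key, and the record format here are hypothetical, not any real model's scheme (real-world efforts like C2PA embed signed metadata in the file itself):

```python
import hmac
import hashlib

# Hypothetical signing key held by the model operator.
SECRET = b"example-model-signing-key"

def tag_output(image_bytes: bytes, model: str) -> dict:
    """Produce a provenance record for a generated image."""
    sig = hmac.new(SECRET, image_bytes, hashlib.sha256).hexdigest()
    return {"model": model, "sha256_hmac": sig}

def verify_tag(image_bytes: bytes, record: dict) -> bool:
    """Check that the image bytes match the record's signature."""
    expected = hmac.new(SECRET, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sha256_hmac"])
```

      The obvious limitation is the one raised above: nothing forces a bad actor to attach the record, and stripping a sidecar tag like this is trivial, which is why enforcement rather than the mechanism is the hard part.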

      I’m not sure what you mean by spam being a direct problem of AI. Are you saying that it’s easier to generate propaganda, and thus easier to spam it?

      As near as I can tell, the propaganda farms were doing quite well spreading misinformation and disinformation before AI. They were spamming it too, when that was useful to their goals.

      • RandomVideos@programming.dev
        22 days ago

        As far as I know, the Twitter AI tags its images

        Propaganda is more of a problem with text generation than image generation, but both can be used to change people's opinions much more easily than before