Apparently, stealing other people’s work to create a product for money is now “fair use”, according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression - including blogposts, photographs, forum posts, scraps of software code, and government documents - it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in its House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • Phanatik@kbin.social

    A comedian isn’t forming a sentence based on which word is most likely to appear after the previous one. This is such a bullshit argument that reduces human competency to “monkey see thing to draw thing” and completely overlooks the craft and intent behind creative works. Do you know why ChatGPT uses certain words over others? Probability. It decided, as a result of its training, that one word would appear after the previous in certain contexts. It absolutely doesn’t take into account things like “maybe this word would be better here because the sound and syllables maintain the flow of the sentence”.
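
    To make that concrete, here’s a minimal sketch (in Python, with a made-up probability table - nothing from a real model) of the word-by-word probability picture I’m describing:

        import random

        # Toy next-word probabilities, purely illustrative. A real model
        # derives its probabilities from billions of training tokens,
        # not a hand-written lookup table.
        next_word = {
            "the": {"cat": 0.5, "dog": 0.3, "flow": 0.2},
            "cat": {"sat": 0.7, "ran": 0.3},
            "dog": {"ran": 0.6, "sat": 0.4},
        }

        def generate(start, steps=3):
            words = [start]
            for _ in range(steps):
                options = next_word.get(words[-1])
                if not options:
                    break
                # Pick the next word by probability alone - no notion of
                # rhythm, syllables, or intent anywhere in this loop.
                choices, weights = zip(*options.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("the"))

    Nothing in that loop knows why one word follows another, only that it often does.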

    Baffling takes from people who don’t know what they’re talking about.

    • frog 🐸@beehaw.org

      I wish I could upvote this more than once.

      What people always seem to miss is that a human doesn’t need billions of examples to be able to produce something that’s kind of “eh, close enough”. Artists don’t look at billions of paintings. They look at a few, but do so deeply, absorbing not just the most likely distribution of brushstrokes, but why the painting looks the way it does. For a basis of comparison, I did an art and design course last year and looked at about 300 artworks in total (the course requirement was 50-100). The research component on my design-related degree course is one page a week per module (so basically one example from the field the module is about, plus some analysis). The real bulk of the work humans do isn’t looking at billions of examples: it’s looking at a few, and then practicing the skill and developing a process that allows them to convey the thing they’re trying to express.

      If the AI models were really doing exactly the same thing humans do, the models could be trained without any copyright infringement at all, because all of the public domain and creative commons content, plus maybe licencing a little more, would be more than enough.

      • Phanatik@kbin.social

        Exactly! You can glean so much from a single work: not just about the work itself, but about who created it, what ideas they were trying to express, and what that tells us about the world they live in and how they see it.

        This doesn’t even touch the fact that I’m learning to draw not by looking at other drawings, but by looking at what exactly I’m trying to draw. I know that at a base level, a drawing is a series of shapes made by hand, whether it’s through a digital medium or traditional pen/pencil and paper. But the skill isn’t being able to replicate other drawings; it’s being able to convert something I can see into a drawing. If I’m drawing someone sitting in a wheelchair, then I’ll get the pose of them sitting in the wheelchair, but I can add details I want to emphasise or remove details I don’t want. There’s so much that goes into creative work, and I’m tired of arguing with people who have no idea what it takes to produce creative works.

        • frog 🐸@beehaw.org

          It seems that most of the people who think what humans and AIs do is the same thing are not actually creatives themselves. Their level of understanding of what it takes to draw goes no further than “well, anyone can draw, children do it all the time”. They have the same respect for writing, of course, equating the ability to string words together in an email with the process it takes to write a brilliant novel or script. They don’t get it, and to an extent, that’s fine - not everybody needs to understand everything. But they should at least have the decency to listen to the people who do get it.

          • intensely_human@lemm.ee

            Well, that’s not me. I’m a creative, and I see deep parallels between how LLMs work and how my own mind works.

            • frog 🐸@beehaw.org

              Either you’re vastly overestimating the degree of understanding and insight AIs possess, or you’re vastly underestimating your own capabilities. :)

              • Veloxization@yiffit.net

                This whole AI craze has just shown me that people are losing faith in their own abilities and their capacity to learn. I’ve heard so many people who use AI to generate “artwork” argue that they tried to do art “for years” without improving, and have hence come to the conclusion that creativity is a talent that only some have, instead of a skill you can learn and hone - just because they didn’t see results as fast as they’d have liked.

                • frog 🐸@beehaw.org

                  Very well said! Creativity is definitely a skill that requires work, and for which there are no shortcuts. It seems to me that the vast majority of people using AI for artwork are just looking for a shortcut, so they can get the results without having to work hard and practice. The one valid exception is disabled people who have physical limitations on what they can do, which is a point that’s brought up occasionally - and if that were the one and only use-case for these models, I think a lot of artists would actually be fine with that.

                  • Veloxization@yiffit.net

                    I started drawing seriously when I was 14. Looking at my old artwork, I didn’t start improving fast until I was around 19 or 20. Not to say I didn’t improve at all during those five to six years, but the pace did get faster once I had “learned to learn”, so to speak. That is to say, it can take a lot of patience to get to a point where you actually start seeing improvement fast enough to stay motivated. But it is 100% worth it, because at the end you have a lot of things you have created with your own two hands.

                    And regarding the point on physical limitations, I can’t blame anyone in a situation like that for using AI if they have no other way of realising what they imagine. For others, it is completely possible, and not reserved for people with some mythical innate talent. Just grab a pen or a brush and enjoy the process of honing a fine skill, regardless of the end result. ❤️

              • jarfil@beehaw.org

                Alternatively, you might be vastly overestimating human “understanding and insight”, or how much of it is really needed to create stuff.

                • frog 🐸@beehaw.org

                  Average humans, sure, don’t have a lot of understanding and insight, and little is needed to be able to draw a doodle on some paper. But trained artists have a lot of it, because part of the process is learning to interpret artworks and work out why the artist used a particular composition or colour or object. To create really great art, you do actually need a lot of understanding and insight, because everything in your work will have been put there deliberately, not just to fill up space.

                  An AI doesn’t know why it’s put an apple on the table rather than an orange; it just does it because human artists have done it. It doesn’t know what apples mean on a semiotic level to the human artist or the humans that look at the painting. But humans do understand what apples represent - they may not pick up on it consciously, but somewhere in the backs of their minds, they’ll see an apple in a painting and it’ll make the painting mean something different than if the fruit had been an orange.

                  • jarfil@beehaw.org

                    it doesn’t know what apples mean on a semiotic level

                    Interestingly, LLMs seem to show emergent semiotic organization. Analyzing the activation space of the neural network shows that related concepts get trained into similar activation patterns, which is what allows LLMs to zero-shot relationships when executed at a “temperature” (randomness level) in the right range.
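
                    A crude way to picture that (a toy sketch - these vectors are invented stand-ins, real activation patterns have thousands of learned dimensions):

                        import math

                        # Invented 3-d "activation" vectors for three concepts.
                        concepts = {
                            "apple":  [0.9, 0.1, 0.3],
                            "orange": [0.8, 0.2, 0.35],
                            "wrench": [0.1, 0.9, 0.7],
                        }

                        def cosine(a, b):
                            dot = sum(x * y for x, y in zip(a, b))
                            na = math.sqrt(sum(x * x for x in a))
                            nb = math.sqrt(sum(y * y for y in b))
                            return dot / (na * nb)

                        # Related concepts end up with similar activation patterns,
                        # so their vectors point in similar directions.
                        print(cosine(concepts["apple"], concepts["orange"]))  # high
                        print(cosine(concepts["apple"], concepts["wrench"]))  # low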

                    Pairing an LLM with a stable diffusion model allows the resulting AI to… well, judge for yourself: https://llm-grounded-diffusion.github.io/

      • Quokka@quokk.au

        Children learn by watching others. We are trained from millions of examples starting from before birth.

      • Even_Adder@lemmy.dbzer0.com

        When people say that the “model is learning from its training data”, it means just that - not that it is human, and not that it learns exactly as humans do. It doesn’t make sense to judge boats on how well they simulate human swimming patterns, just on how well they perform their task.

        Every human has the benefit of training, as a baby, on the things around them, and of being trained by those around them, building a foundation for all later skills. Generative models rely on many text and image pairs to describe things to them, because they lack the ability to poke, prod, rotate, and disassemble things for themselves.

        For example, when a model takes in a thousand images of circles, it doesn’t “learn” a thousand circles. It learns what a circle GENERALLY is like - the concept of it. That representation, along with random noise, is how you create images with them. The same happens for every concept the model trains on, everything from “cat” to more complex things like color relationships, reflections, and lighting. Machines are not human, but they can learn despite that.
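
        A deliberately over-simple sketch of that idea (real models learn far richer representations than one average number, but the shape of the process is similar - distill many examples into a compact representation, then combine it with noise to generate):

            import random

            # Toy "training data": 1000 noisy circles, each just a radius
            # drawn around a true concept value of 5.
            examples = [5 + random.gauss(0, 0.3) for _ in range(1000)]

            # "Training" distills the examples into a compact, general
            # representation; nothing close to all 1000 circles is kept.
            learned_radius = sum(examples) / len(examples)

            def generate_circle():
                # Generation combines the learned representation with fresh
                # noise, producing a circle that was in no training example.
                return learned_radius + random.gauss(0, 0.3)

            print(generate_circle())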

        • Eccitaze@yiffit.net

          It makes sense to judge how closely LLMs mimic human learning when people use that as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.

          But when it’s pointed out that LLMs don’t learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn’t be judged by human standards? I don’t know if it’s intentional on your part, but that’s a pretty classic example of a motte-and-bailey fallacy. You can’t have it both ways.

        • ParsnipWitch@feddit.de

          In general I agree with you, but AI doesn’t learn the concept of what a circle is. AI reproduces the most fitting representation of what we call a circle, but there is no understanding of the concept of a circle. This may sound like nitpicking, but I think it’s important to make the distinction.

          That is why current models aren’t regarded as actual intelligence, although people already call them that…

      • teawrecks@sopuli.xyz

        What you count as “one” example is arbitrary. In terms of pixels, you’re looking at millions right now.

        The ability to train faster using fewer examples in real time, similar to what an intelligent human brain can do, is definitely a goal of AI research. But right now, we may be seeing from AI what a below average human brain could accomplish with hundreds of lifetimes to study.

        If the AI models were really doing exactly the same thing humans do, the models could be trained without any copyright infringement at all, because all of the public domain and creative commons content, plus maybe licencing a little more, would be more than enough.

        I mean, no: if you only ever look at public domain stuff, you literally wouldn’t know the state of the art, which historically happens for profit. Even the most untrained artist “doing their own thing” watches Disney/Pixar movies and listens to copyrighted music.

        • frog 🐸@beehaw.org

          If we’re going by the number of pixels being viewed, then you have to use the same measure for both humans and AIs - and because AIs have to look at billions of images while humans do not, the AI still requires far more pixels than a human does.

          And humans don’t require the most modern art in order to learn to draw at all. Sure, if they want to compete with modern artists, they would need to look at modern artists (for which educational fair use exists, and again the quantity of art being used by the human for this purpose is massively lower than what an AI uses - a human does not need to consume billions of artworks from modern artists in order to learn what the current trends are). But a human could learn to draw, paint, sculpt, etc. purely by looking at public domain and creative commons works, because the process for drawing, say, the human figure (with the right number of fingers!) has not changed in hundreds of years. A human can also just… go outside and draw things they see themselves, because the sky above them and the tree across the street aren’t copyrighted. And in fact, I’d argue that a good artist should go out and find real things to draw.

          OpenAI’s argument is literally that their AI cannot learn without using copyrighted materials in vast quantities - too vast for them to simply compensate all the creators. So it genuinely is not comparable to a human, because humans can, in fact, learn without using copyrighted material. If OpenAI’s argument is actually that their AI can’t compete commercially with modern art without using copyrighted works, then they should be honest about that - but then they’d be showing their hand, wouldn’t they?

          • Even_Adder@lemmy.dbzer0.com

            It isn’t wrong to use copyrighted works for training. Let me quote an article by the EFF here:

            First, copyright law doesn’t prevent you from making factual observations about a work or copying the facts embodied in a work (this is called the “idea/expression distinction”). Rather, copyright forbids you from copying the work’s creative expression in a way that could substitute for the original, and from making “derivative works” when those works copy too much creative expression from the original.

            Second, even if a person makes a copy or a derivative work, the use is not infringing if it is a “fair use.” Whether a use is fair depends on a number of factors, including the purpose of the use, the nature of the original work, how much is used, and potential harm to the market for the original work.

            and

            Even if a court concludes that a model is a derivative work under copyright law, creating the model is likely a lawful fair use. Fair use protects reverse engineering, indexing for search engines, and other forms of analysis that create new knowledge about works or bodies of works. Here, the fact that the model is used to create new works weighs in favor of fair use as does the fact that the model consists of original analysis of the training images in comparison with one another.

            What you want would swing the doors open for corporate interference like hindering competition, stifling unwanted speech, and monopolization like nothing we’ve seen before. There are very good reasons people have these rights, and we shouldn’t be trying to change this. Ultimately, it’s apparent to me that you are in favor of these things - that you believe artists deserve a monopoly on ideas and non-specific expression, to the detriment of everyone else. If I’m wrong, please explain to me how.

            If we’re going by the number of pixels being viewed, then you have to use the same measure for both humans and AIs - and because AIs have to look at billions of images while humans do not, the AI still requires far more pixels than a human does.

            Humans benefit from years of evolutionary development and corporeal bodies with which to explore and interact with their world before they’re ever expected to produce complex art. AIs need huge datasets to understand patterns, to make up for this disadvantage. Nobody pops out of the womb with fully formed fine motor skills, pattern recognition, understanding of cause and effect, shapes, comparison, counting, vocabulary related to art, and spatial reasoning. Datasets are huge and filled with image-caption pairs to teach models all of this from scratch. AI isn’t human, and we shouldn’t judge it against humans, just like we don’t judge boats on their rowing ability.

            And humans don’t require the most modern art in order to learn to draw at all. Sure, if they want to compete with modern artists, they would need to look at modern artists (for which educational fair use exists, and again the quantity of art being used by the human for this purpose is massively lower than what an AI uses - a human does not need to consume billions of artworks from modern artists in order to learn what the current trends are). But a human could learn to draw, paint, sculpt, etc. purely by looking at public domain and creative commons works, because the process for drawing, say, the human figure (with the right number of fingers!) has not changed in hundreds of years. A human can also just… go outside and draw things they see themselves, because the sky above them and the tree across the street aren’t copyrighted. And in fact, I’d argue that a good artist should go out and find real things to draw.

            AIs don’t require the most modern art in order to learn to make images either, but the range of expression would be limited, just like a human’s in this situation. You can see this in cave paintings and early sculptures. They wouldn’t be limited to the same degree, but they would still be limited.

            It took us 100,000 years to get from cave drawings to Leonardo da Vinci. This is just another step for artists, like the camera obscura was in the past. It’s important to remember that early man was as smart as we are; they just lacked the interconnectivity to exchange ideas that we have.

            • ParsnipWitch@feddit.de

              I think the difference in artistic expression between modern humans and humans in the past comes down to the material available (like the actual material to draw with).

              Humans can draw without ever seeing any image. Blind people can create art and draw things, because we have a different understanding of the world around us than AI has. No human artist needs to look at a thousand, or even one, picture of a banana to draw one.

              The way AI sees and “understands” the world, and how it generates an image, is fundamentally different from how the human brain turns the object “banana” into an image of a banana.

              • Even_Adder@lemmy.dbzer0.com

                I think the difference in artistic expression between modern humans and humans in the past comes down to the material available (like the actual material to draw with).

                That is definitely a difference, but even that is a kind of information shared between people, and information itself is what gives everyone something to build on. That gives them a basis on which to advance understanding, instead of wasting time coming up with the same things themselves every time.

                Humans can draw without ever seeing any image. Blind people can create art and draw things, because we have a different understanding of the world around us than AI has. No human artist needs to look at a thousand, or even one, picture of a banana to draw one.

                Humans don’t need representations of things in images because they have the opportunity to interact with the genuine article, and in situations where that is impractical, they can still fall back on images to learn. Someone without sight from birth can’t create art the same way a sighted person can.

                The way AI sees and “understands” the world, and how it generates an image, is fundamentally different from how the human brain turns the object “banana” into an image of a banana.

                That’s the beauty of it all: despite that, these models can still output bananas.

          • teawrecks@sopuli.xyz

            Sure, if they want to compete with modern artists, they would need to look at modern artists

            Which is the literal goal of Dall-E, SD, etc.

            But a human could learn to draw, paint, sculpt, etc purely by only looking at public domain and creative commons works

            They could definitely learn some amount of skill, I agree. I’d be very interested to see the best that an AI could achieve using only PD and CC content. But you’d agree that it would look very different from modern art, just as an alien who had only consumed Earth media from 100+ years ago would be unable to relate to us.

            the sky above them and the tree across the street aren’t copyrighted.

            Yeah, I’d consider that PD/CC content that such an AI would easily have access to. But obviously the real sky is something entirely different from what is depicted in Starry Night, Star Wars, or H.P. Lovecraft’s description of the cosmos.

            OpenAI’s argument is literally that their AI cannot learn without using copyrighted materials in vast quantities

            Yeah, I’d consider that a strong claim on their part; what they really mean is that it’s the easiest way to make progress in AI, and that we wouldn’t be anywhere close to where we are without it.

            And you could argue “convenient that it both saves them money and generates money for them to do it this way”, but I’d also point out that the alternative is that they keep the trained models closed source, never using them publicly until they advance the tech far enough that they’ve literally figured out how to build/simulate a human brain that is able to learn as quickly and human-like as you’re describing. And then we find ourselves in a world where one or two corporations have this incredible proprietary ability that no one else has.

            Personally, I’d rather live in the world where the information about how to do all of this isn’t kept for one or two corporations to profit from. I would rather live in the version where they publish their work publicly, early, and often, show that it works, and people are able to reproduce it, open source it, train their own models, and advance the technology in a space where anyone can use it.

            You could hypothesize a middle ground where they do the research but aren’t allowed to profit from it without licensing every bit of data they train on. But the reality of AI research is that it only happens to the extent that it generates revenue. It’s been that way for the entire history of AI. Douglas Hofstadter has been asking deep, important questions about AI as it relates to consciousness for like 60 years (e.g. GEB, I Am a Strange Loop), but there’s a reason he didn’t discover LLMs and tech companies did. That’s not to say his writings are meaningless; in fact, I think they’re more important than ever. But he just wasn’t ever going to get to this point with a small team of grad students, a research grant, and some public domain datasets.

            So it’s hard to disagree with OpenAI there: AI definitely wouldn’t be where it is without them doing what they’ve done. And I’m a firm believer that unless we figure our shit out with energy generation soon, the earth will be an uninhabitable wasteland. We’re playing a game of climb the Kardashev scale; we opted for the “burn all the fossil fuels as fast as possible” strategy, and now we’re at the point where we either spend enough energy fast enough to figure out the tech needed to survive this, or we suffocate on the fumes. The clock is ticking, and AI may be our best bet at saving the human race that doesn’t involve an inordinate number of people dying.

            • frog 🐸@beehaw.org

              OpenAI are not going to make the source code for their model accessible to all to learn from. This is 100% about profiting from it themselves. And using copyrighted data to create open source models would seem to violate the very principles the open source community stands for - namely, that everybody contributes what they agree to, and everything is published under a licence. If the basis of an open source model is a vast quantity of training data from a vast quantity of extremely pissed off artists, at least some of the people working on that model are going to have an “are we the baddies?” moment.

              The AI models are also never going to produce a solution to climate change that humans will accept. We already know what the solution is, but nobody wants to hear it, and expecting anyone to listen to ChatGPT and suddenly change their minds about using fossil fuels is ludicrous. And an AI that is trained specifically on knowledge about the climate and the technologies that can improve it, with the purpose of innovating some hypothetical technology that will fix everything without humans changing any of their behaviour, categorically does not need the entire contents of ArtStation in its training data. AIs that are trained to do specific tasks, like the ones trained to identify new antibiotics, are trained on a very limited set of data, most of which is not protected by copyright, and any that is can be easily licenced because the quantity is so small - and you don’t see anybody complaining about those models!

              • teawrecks@sopuli.xyz

                OpenAI are not going to make the source code for their model accessible to all to learn from

                OpenAI isn’t the only company doing this, nor is their specific model the knowledge that I’m referring to.

                The AI models are also never going to produce a solution to climate change that humans will accept.

                It is already being used to further fusion research beyond anything we’ve been able to do with standard algorithms.

                We already know what the solution is, but nobody wants to hear it

                Then it’s not a solution. That’s like telling your therapist, “I know how to fix my relationship; my partner just won’t do it!”

                expecting anyone to listen to ChatGPT and suddenly change their minds about using fossil fuels is ludicrous

                Lol. Yeah, I agree, that’s never going to work.

                categorically does not need the entire contents of ArtStation in its training data.

                That’s a strong claim to make. Regardless of the ethics involved, or the problems the AI can solve today, the fact is that we are seeing rapid advances in AI research as a direct result of these ethically dubious models.

                In general, I’m all for the capitalist method of artists being paid their fair share for the work they do, but on the flip side, I see a very possible mass extinction event on the horizon, which could cause suffering the likes of which humanity has never seen. If we assume that is the case, and we assume AI has a chance of preventing it, then I would prioritize that over people’s profits today. And I think it’s perfectly reasonable to say I’m wrong.

                And then there’s the problem of actually enforcing any sort of regulation, which would be so much more difficult than people here are willing to admit. There’s basically nothing you can do, even if you wanted to. Your Carlin example is exactly the defense a company would use: “I guess our AI just happened to create a movie that sounds just like Paul Blart, but we swear it’s never seen the film. Great minds think alike, I guess, and we sell only the greatest of minds”.

                • frog 🐸@beehaw.org

                  Personally, I think the claim that the entire contents of ArtStation will lead to working technology that fixes climate change is the bolder claim - and if there were any merit to it, the corporations who want copyright to be disapplied to artists would be able to produce some evidence for it. And if we’re saying that getting rid of copyright protections will save the planet, then perhaps Disney should give up theirs as well. Because that’s the reality here: we’re expecting humans to be obliterated by AI but are not expecting the rich and powerful to make any sacrifices at all. And art is part of who we are as a species, and has been for hundreds of thousands of years. Replacing artists with AI because somehow that will fix climate change is not only a massive stretch - what would we even be saving humanity for at that point? So that everybody can slave away in insecure, meaningless work so the few can hoard everything for themselves? Because the Star Trek utopia where AI does all the work and humans can pursue self-enrichment is not an option on the table. The tech bros just want you to think it is.

      • intensely_human@lemm.ee

        When you look at one painting, is that the equivalent of one instance of the painting in the training data? There is an infinite amount of information in the painting, and each time you look you process more of that information.

        I’d say that for any given painting you look at in a museum, you process at least a hundred mental images of aspects of it. A painting on your wall could easily be seen ten thousand times.

    • DaDragon@kbin.social

      That’s what humans do, though. Maybe not probability directly, but we all know that some words should be put in a certain order. We still operate within standard norms that apply to a particular group of people. LLMs just go about it in a different way, but they achieve the same general result. If I’m drawing a human, that means there’s a ‘hand’ here and a ‘head’ there. ‘Head’ is a weird combination of pixels that mostly looks like this; ‘hand’ looks kinda like that. It all depends on how the model is structured, but tell me that’s not very similar to a simplified version of how humans operate.

      • Phanatik@kbin.social

        Yeah, but the difference is we still choose our words. We can still alter sentences on the fly. I can think of a sentence and understand that verbs go after the subject, but I still have the cognition to alter the sentence to have the effect I want. The thing lacking in LLMs is intent, and I’m yet to see anyone tell me why a generative model decides to draw more than 6 fingers. As humans, we know hands generally have five fingers, and there’s a group of people who don’t - so unless we want to draw a person with a different number of fingers, we draw five. A generative art model can’t help itself from drawing multiple fingers, because all it understands is that “finger + finger = hand”; it has no concept of when to stop.

        • DaDragon@kbin.social

          And that’s the reason why LLM-generated content isn’t considered creative.

          I do believe that the person using the device has a right to copyright the unique method they used to generate the content, but the content itself isn’t anything worth protecting.

          • Phanatik@kbin.social

            You say that, yet I initially responded to someone who was comparing an LLM to what a comedian does.

            There is no unique method, because there’s hardly anything unique you can do. Two people using Stable Diffusion to produce an image are putting in the same amount of work. One might put more time into crafting the right prompt, but that’s not work you’re doing.

            If 90% of the work is handled by the model, and you just layer on whatever extra thing you wanted, that doesn’t mean you created the thing. It also doesn’t mean you have much control over the output: you’re effectively negotiating with this machine to produce what you want.

            • DaDragon@kbin.social

              Wouldn’t that lead to the same argument as was originally brought against photography, though?

              A photographer is effectively negotiating with the sun, the sky and everything else to hopefully get the result they are looking for on their device.

              • Phanatik@kbin.social

                One difference is that the photographer has to go to the places they’re taking pictures of.

                Another is that photography isn’t comparable to paintings, and it never has been. I’m willing to bet photography and paintings have never coexisted in a contest. Except, when people say their generative art is comparable to what artists have been producing by hand, they are admitting that generative art has more in common with photography than it does with hand-crafted art - but they want the prestige and recognition those artists get for their work.

            • Nyfure@kbin.social

              more time into crafting the right prompt

              That’s not work to you? My company pays me to spend time doing the right thing, even though the computer does most of the work.

              I see what you’re getting at, but your argument also invalidates other forms of human interaction and creation.

              In my country, copyright can only be granted if a certain amount of (human) work went into something. Any work.
              The difficult part is finding out what’s enough, and what kinds of work qualify for some kind of protection, even if partial.
              The difficult part was never creating something, but proving that someone did or didn’t put enough work into it.
              I think we can hold generated or assisted goods to the same standard.

              Putting a simple prompt together should probably not be granted protection, as no significant work went into it. But refining it, editing the result… maybe that’s enough. That’s really up to society to decide.

              At the same time, we have to balance the power of machines against human work, so that human work doesn’t get totally invalidated, but rather shifted and treated as a sub-type.
              Machines have already replaced a lot of work, including creative work: book printing, forging, producing food… The scary part about generative AI is mainly the speed at which it is spreading.

              • Phanatik@kbin.social

                So, as a data analyst, a lot of my work is done through a computer, but I could apply the same skills if someone handed me a piece of paper with data printed on it and told me to come up with solutions to its problems. I don’t need the computer to do what I do; it makes it easier to manipulate data, but the degree of problem solving required needs to be done by a human, and that’s why it’s my job. If a machine could do it, then they would be doing it, but they aren’t, because contrary to what people believe about data analysis, you have to be somewhat creative to do it well.

                Crafting a prompt is an exercise in trial and error. It’s work, but it’s not skilled work. It doesn’t take talent or practice to do. Despite the prompt, you are still at the mercy of the machine.

                Even in the case you’ve presented, I have to ask: at what point does a human editing the output of a generative model make it their own work and not the machine’s? How much do you have to change? Can you give me a percentage?

                Machines were intended to automate the tedious tasks that we all have to suffer through, to free up our brains for more engaging things, which might include creative pursuits. Automation exists to make your life easier, not to rob you of life’s pursuits or your livelihood. It never should’ve been used to produce creative work, and I find the attempts to equate this abomination’s outputs to what artists have been doing for years utterly deplorable.

        • intensely_human@lemm.ee

          I don’t choose my words, man. I get a vague sense of the meaning I want to convey, and the words just form themselves.

      • ParsnipWitch@feddit.de

        As an artist, you draw with an understanding of the human body, though. An understanding current models don’t have, because they aren’t actually intelligent.

        Maybe when a human is an absolute beginner in drawing, they will think about the different lines and even replicate how other people draw stuff so that it looks like a hand.

        But eventually they will realise (hopefully - otherwise they may get frustrated and stop drawing) that you need to understand the hand to draw one: its mass, its concept, the idea of what a hand is.

        This may sound very abstract and strange, but creative expression is more complex than replicating what we have seen a million times. It’s a complex function unique to the human brain, an organ we don’t even scientifically understand yet.

    • hascat@programming.dev

      That’s not the point, though. The point is that the human comedian and the AI both benefit from consuming creative works covered by copyright.

      • Phanatik@kbin.social

        Yeah, except a machine is owned by a company and doesn’t consume the same way. It breaks down copyrighted works into data points so it can find the best way of putting those data points together again. If you understand anything at all about how these models work, you know they do not consume media the way we do. It is not an entity with a thought process or consciousness (despite what the misleading marketing of “AI” would have you believe); it’s an optimisation algorithm.
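
        To be clear about what “optimisation algorithm” means, here’s the idea in miniature (one weight and a made-up target; real training does this over billions of weights):

            # Gradient descent on a single weight. The "model" is just
            # weight * input, and training nudges the weight to reduce
            # the error - no comprehension, only error reduction.
            weight = 0.0
            target = 3.0           # stand-in for the output the data rewards
            learning_rate = 0.1

            for step in range(50):
                prediction = weight * 1.0
                error = prediction - target
                gradient = 2 * error                # derivative of error**2
                weight -= learning_rate * gradient  # the "optimisation" step

            print(weight)  # converges toward 3.0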

          • Phanatik@kbin.social

            It’s so funny that this is being treated as something new. This was Grammarly’s whole schtick since before ChatGPT, so how different is Grammarly AI?

            • vexikron@lemmy.zip

              Here is the bigger picture: the vast majority of tech-illiterate people think something is AI because, duh, it’s called AI.

              It’s literally just the power of branding and marketing on the minds of poorly informed humans.

              Unfortunately, this is essentially a reverse Turing Test.

              The vast majority of humans do not know anything about AI, and a huge majority of them can barely tell the difference between output from what is basically brute-force total-internet plagiarism-and-synthesis software and actual human-created content - currently in some, but not all, forms.

              To me this basically just means that about 99% of the time, most humans are literally NPCs, and they only do actually creative and unpredictable things very, very rarely.

              • intensely_human@lemm.ee

                I call it AI because it’s artificial and it’s intelligent. It’s not that complicated.

                The thing we have to remember is how scary and disruptive AI is. Given that fear, it is scary to acknowledge that we have AI emerging into our world. Because it is scary, that pushes us to want to ignore it.

                It’s called denial, and it’s the best explanation for why people aren’t willing to acknowledge that LLMs are AI.

                • vexikron@lemmy.zip

                  It meets almost none of the conceptions of intelligence at all.

                  It is not capable of abstraction.

                  It is capable of brute-force recognition of similarities between various images and texts, and of then presenting a wide array of text and images containing elements that emulate a wide array of descriptors reasonably well.

                  This convinces many people that it has a large knowledge set.

                  But that is not abstraction.

                  It is not capable of logic.

                  It is only capable of, again, brute-force analysis of an astounding amount of content, and of then producing essentially the consensus view on answers to common logical problems.

                  Ask it any complex logical question that has never been answered on the internet before and it will output irrelevant or inaccurate nonsense, likely just finding an answer to a similar but not identical question.

                  The same goes for reasoning, planning, critical thinking and problem solving.

                  If you ask it to do any of these things in a highly specific situation, even giving it as much information as possible, and your situation is novel or simply too complex, it will again just spit out a nonsense answer that is inadequate and faulty, because it will just draw elements together from the closest things it has been trained on - almost certainly contradictory or entirely dubious, because it is unable to account for a particularly uncommon constraint, or for constraints that are very rarely faced simultaneously.

                  It is not creative, in the sense of being able to generate something novel or new.

                  All it does is plagiarize elements of things that are popular and well represented in its data, and then attempt to mix them together - but it will never generate a new art style or a new genre of music.

                  It does not even really infer things; it is not really capable of inference.

                  It simply has a massive, astounding data set, and the ability to synthesize elements from this in a convincing way.

                  In conclusion, you have no idea what you are talking about, and you yourself have literally failed the reverse Turing Test, likely because you are not very well versed in the technicals of how this stuff actually works - thus proving my point that you simply believe it is AI because of its branding, with no critical thought applied whatsoever.

                • ParsnipWitch@feddit.de

                  Current models aren’t intelligent. Not even by the flimsy and imprecise definition of intelligence we currently have.

                  Wanted to post a whole rant, but then saw vexikron already did, so I’ll spare you xD

      • vexikron@lemmy.zip

        And human comedians regularly get called out when they outright steal others’ material and present it as their own.

        The word for this is plagiarism.

        And in OpenAI’s framework, when used in a relevant commercial context, they are functionally operating and profiting off of the world’s most comprehensive plagiarism software.

    • teawrecks@sopuli.xyz

      A comedian isn’t forming a sentence based on which word is most likely to appear after the previous one.

      Neither is an LLM. What you’re describing is a primitive Markov chain.

      You may not like it, but brains really are just glorified pattern recognition and generation machines. So yes, “monkey see thing to draw thing”, except a really complicated version of that.

      Think of it this way: if your brain weren’t a reorganization and regurgitation of the things you have observed before, it would just generate random noise. There’s no such thing as “truly original” art, or it would be random noise. Every single word either of us is typing is the direct result of everything you and I have observed before this moment.
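
      If it helps, here’s the distinction in toy form (invented 2-d embeddings, nothing from a real model): a Markov chain conditions on the last word only, while a transformer-style model blends the entire context through attention:

          import math

          def softmax(xs):
              m = max(xs)
              exps = [math.exp(x - m) for x in xs]
              total = sum(exps)
              return [e / total for e in exps]

          # Invented 2-d embeddings for a tiny vocabulary.
          embed = {"the": [1.0, 0.0], "cat": [0.8, 0.6], "sat": [0.2, 1.0]}

          def context_vector(tokens):
              # Toy "attention": weight every token in the context by its
              # similarity to the most recent token, then blend them all.
              # A Markov chain has no analogue of this step - it only ever
              # sees the previous token.
              query = embed[tokens[-1]]
              scores = [sum(q * k for q, k in zip(query, embed[t]))
                        for t in tokens]
              weights = softmax(scores)
              return [sum(w * embed[t][d] for w, t in zip(weights, tokens))
                      for d in range(2)]

          print(context_vector(["the", "cat", "sat"]))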

      Baffling takes from people who don’t know what they’re talking about.

      Ironic, to say the least.

      The point you should be making is that a corporation will make the above argument up to, but not including, the point where they have to treat AIs ethically. So that’s the way to beat them. If they’re going to argue that they have created something that learns and creates content like a human brain, then they should need to treat it like a human: ensure it is well compensated, ensure it isn’t being overworked or enslaved, ensure it is being treated “humanely”. If they don’t want to do that - if they want it to just be a well-built machine - then they need to license all the proprietary data they used to build it. Make them pick a lane.

      • Phanatik@kbin.social

        Neither is an LLM. What you’re describing is a primitive Markov chain.

        My description might’ve been indicative of a Markov chain, but the actual framework uses matrices, because you need to be able to store and compute a huge amount of information at once, which is what matrices are good for. They’re used in animation, if you didn’t know.

        What it actually uses is irrelevant; how it uses those things is the same as a regression model - the difference is scale. A regression model looks at how related variables are in producing an outcome, computing weights to give you the best outcome. This was the machine learning boom a couple of years ago, when TensorFlow became really popular.
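
        For anyone unfamiliar, that regression idea fits in a few lines (toy numbers, one weight):

            # Least-squares fit of a single weight w in y ≈ w * x.
            xs = [1.0, 2.0, 3.0, 4.0]
            ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

            # Closed-form solution: w = sum(x*y) / sum(x*x)
            w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
            print(w)  # about 2.0 - the weight relating input to outcome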

        LLMs are an evolution of the same idea. I’m not saying it’s not impressive, because it’s very cool what they were able to do. What I take issue with is the branding, the marketing, and the plagiarism. I happen to be at the intersection of working in the same field, being an avid fan of classic sci-fi, and being a writer.

        It’s easy to look at what people have created throughout history and think “this looks like that”, and on a point-by-point basis you’d be correct, but the creation of that thing is shaped by the lens of the person creating it. Someone might hear a George Carlin joke recently and then read about something similar in a newspaper from 200 years ago. Did George Carlin steal the idea? No. Was he aware of that information? I don’t know. But Carlin regularly calls upon his own experiences, so it’s likely that he was referencing an event from his past that is similar to the one from 200 years ago. He might’ve subconsciously absorbed the information.

        The point is that the way these models have been trained is unethical. They used material they had no license to use, and they’ve admitted that it couldn’t work as well as it does without stealing other people’s work. I don’t think they’re taking the position that it’s intelligent, because from the beginning that was a marketing ploy. They’re taking the position that they should be allowed to use the data they stole because there was no other way.

        • Pup Biru@aussie.zone

          branding

          okay

          the marketing

          yup

          the plagiarism

          woah there! that’s where we disagree… your position is based on the fact that you believe that this is plagiarism - inherently negative

          perhaps it’s best not to use loaded language. if we want to have a good faith discussion, it’s best to avoid emotive arguments and language that’s designed to evoke negativity simply by its use, rather than by the argument being presented

          I happen to be at the intersection of working in the same field, being an avid fan of classic sci-fi, and being a writer

          it’s understandable that it’s frustrating, but just because a machine is now able to do a similar job to a human doesn’t make it inherently wrong. it might be useful for you to reframe these developments - it’s not taking away from humans, it’s enabling humans… the less skill a human needs to get what’s in their head into an expressive medium for someone to consume, the better imo! art and creativity shouldn’t be about having an ability - the closer we get to pure expression, the better imo!

          the less you have to worry about the technicalities of writing, the more you can focus on pure creativity

          The point is that the way these models have been trained is unethical. They used material they had no license to use, and they’ve admitted that it couldn’t work as well as it does without stealing other people’s work

          i’d question why it’s unethical, and also suggest that “stolen” is another emotive term here, not meant to further the discussion by rational argument

          so, why is it unethical for a machine but not a human to absorb information and create something based on its “experiences”?

          • Phanatik@kbin.social

            First of all, we’re not having a debate and this isn’t a courtroom, so avoid the patronising language.

            Second of all, my “belief” about the models’ plagiarism is based on technical knowledge of how the models work, not on how I think they work.

            a machine is now able to do a similar job to a human

            This would be impressive if it were true. An LLM is not intelligent simply through its appearance of intelligence.

            it’s enabling humans

            It’s a chatbot that’s automated Google searches; let’s be clear about what this can do. It’s taken natural language processing and applied it through an optimisation algorithm to produce human-like responses.

            No, I disagree at a fundamental level. Humans need to compete against each other and against ourselves to improve. Just because an LLM can write a book for you doesn’t mean you’ve written a book. You’re just lazy. You don’t want to put in the work every other writer in existence has done: to mull over their work and consider the emotions and effect they want to have on the reader. To what extent can an LLM replicate the way George R.R. Martin describes his world without entirely ripping off his work?

            i’d question why it’s unethical, and also suggest that “stolen” is another emotive term here, not meant to further the discussion by rational argument

            If I take a book you wrote from you without buying it or paying you for it, what would you call that?

    • Pup Biru@aussie.zone

      you know how the neurons in our brain work, right?

      because if not, well, it’s pretty similar… unless you say there’s a soul (in which case we can’t really have a conversation based on fact alone), we’re just big ol’ probability machines with tuned weights based on past experiences too
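
      the textbook abstraction, for anyone who hasn’t seen it (the weights here are invented - a real network learns them):

          import math

          # One artificial neuron: a weighted sum of inputs squashed through
          # an activation function - loosely inspired by, not equivalent to,
          # the firing behaviour of a biological neuron.
          def neuron(inputs, weights, bias):
              total = sum(i * w for i, w in zip(inputs, weights)) + bias
              return 1 / (1 + math.exp(-total))  # sigmoid activation

          print(neuron([0.5, 0.2], [1.5, -0.7], bias=0.1))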

      • Phanatik@kbin.social

        You are spitting out basic points and attempting to draw similarities because our brains are capable of something similar. The difference between what youā€™ve said and what LLMs do is that we have experiences that we are able to glean a variety of information from. An LLM sees text and all itā€™s designed to do is say ā€œx is more likely to appear before y than zā€. If you fed it nonsense, it would regurgitate nonsense. If you feed it text from racist sites, it will regurgitate that same language because thatā€™s all it has seen.

        You'll read this and think "that's what humans do too, right?" Wrong. A human can be fed these things and still reject them. Someone else in this thread has made some good points regarding this, but I'll state them here as well. An LLM will tell you information, but it has no cognition of what it's telling you. It has no idea whether it's right or wrong; its job is to convince you that it's right, because that's the success state. If you tell it it's wrong, that's a failure state. The more you speak with it, the more fail states it accumulates, and the more likely it is to cut off communication because it isn't reaching a success and isn't giving you what you want. The longer the conversation goes on, the more erratic LLMs get as well, because holding all those contexts in memory while trying to predict the next token is too much to process at once. Our brains do this easily and so much more. To claim an LLM is intelligent is incredibly misguided; it is merely the imitation of intelligence.
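
        To make the probability point concrete, here's a deliberately toy sketch of "pick the next word by weighted chance". The words and numbers below are completely made up for illustration and aren't taken from any real model:

            import random

            # hypothetical probabilities for the word after "the cat sat on the";
            # invented numbers, not pulled from any actual model
            next_word_probs = {"mat": 0.62, "floor": 0.21, "roof": 0.09, "syllable": 0.08}

            def pick_next_word(probs):
                # a weighted draw from the distribution - no meaning, no intent,
                # just whichever word the training statistics happened to favour
                return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

            print(pick_next_word(next_word_probs))

        That is the entire decision procedure, repeated once per word.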

        • Pup Biru@aussie.zone
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          11 months ago

          but that's just a matter of complexity, not a fundamental difference. the way our brains work and the way an artificial neural network works aren't that different; it's just that our brains are many orders of magnitude bigger

          there's no particular reason why we can't feed artificial neural networks an enormous amount of … let's say tangentially related experiential information … as well, but in order to be efficient and make them specialise in the things we want, we only feed them information that's directly related to the specialty we want them to perform

          there's some… "pre-training" or "pre-existing state" that exists with humans too, which comes from genetics, but i'd argue that's as relevant to the actual task of learning, comprehension, and creating as a BIOS is to running an operating system (that is, a necessary precondition to ensure the correct functioning of our body with our brain, but not actually what you'd call the main function)

          i'm also not claiming that an LLM is intelligent (or rather i'd prefer the term self-aware, because intelligent is pretty nebulous); just that the structure it has isn't that much different to our brains, only on a level that's so much smaller and so much more generic that you can't expect it to perform as well as a human - you wouldn't expect to cut out 99% of a human's brain and have them continue to function at the same level either

          i guess the core of what i'm getting at is that the self-awareness that humans have is definitely not present in an LLM, however i don't think that self-awareness is necessarily a pre-requisite for most things that we call creativity. i think it's entirely possible for an artificial neural net that's fundamentally the same technology we use today to ingest the same data that a human would from birth, and to have very similar outcomes… given that belief (and i'm very aware that it certainly is just a belief - we aren't close to understanding our brains, but i don't fundamentally think there's anything other than neurons firing that results in the human condition), just because you simplify and specialise the input data doesn't mean that the process is different. you could argue that it's lesser, for sure, but to rule out that it can create a legitimately new work is definitely premature
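
          to make the "tuned weights" framing concrete, here's a toy single neuron - the numbers are made up, and it's obviously a massive simplification of both brains and artificial networks:

              import math

              def neuron(inputs, weights, bias):
                  # weighted sum of inputs, squashed through a sigmoid "activation" -
                  # all the "past experience" lives in the weights and bias
                  total = sum(x * w for x, w in zip(inputs, weights)) + bias
                  return 1 / (1 + math.exp(-total))

              # illustrative values only; training is just the process of nudging them
              print(neuron(inputs=[0.5, 0.9], weights=[1.2, -0.4], bias=0.1))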

      • ParsnipWitch@feddit.de
        link
        fedilink
        arrow-up
        2
        Ā·
        11 months ago

        ā€œSoulā€ is the word we use for something we donā€™t scientifically understand yet. Unless you did discover how human brains work, in that case I congratulate you on your Nobel prize.

        You can abstract a complex concept so much it becomes wrong. And abstracting how the brain works to "it's a probability machine" is definitely a wrong description, especially when you want to use it as an argument for its similarity to other probability machines.

        • Pup Biru@aussie.zone
          link
          fedilink
          arrow-up
          1
          Ā·
          edit-2
          11 months ago

          ā€œSoulā€ is the word we use for something we donā€™t scientifically understand yet

          that's far from definitive. another definition is

          A part of humans regarded as immaterial, immortal, separable from the body at death

          but since we aren't arguing semantics, it doesn't really matter exactly, other than the fact that it's important to remember that just because you have an experience, belief, or view doesn't make it the only truth

          of course i didn't discover categorically how the human brain works in its entirety; however, most scientists, i'm sure, would agree that the method by which the brain performs its functions is neurons firing. if you disagree with that statement, the burden of proof is on you. the part we don't understand is how it all connects up - the emergent behaviour. we understand the basics; that's not in question, and yet you seem to be questioning it

          You can abstract a complex concept so much it becomes wrong

          it's not abstracted; it's simplified… if what you're saying were true, then simplifying complex organisms down to a petri dish for research would be "abstracted" so much it "becomes wrong", which is categorically untrue… it's an incomplete picture, but that doesn't make it either wrong or abstract

          *edit: sorry, it was another comment where i specifically said belief; the comment you replied to didn't state that, however most of this still applies regardless

          i laid out an a-leads-to-b-leads-to-c argument and stated that it's simply a belief, however it's a belief that's based in logic and simplified concepts. if you want to disagree that's fine, but don't act like you have some "evidence" or "proof" to back up your claims… all we're talking about here is belief, because we simply don't know - neither you nor i

          and given that all of this is based on belief rather than proof, the only thing that matters is what we as individuals believe about the input and output data (because the bit in the middle has no definitive proof either way)

          if a human consumes media and writes something and it looks different, that's not a violation

          if a machine consumes media and writes something and it looks different, you're arguing that is a violation

          the only difference here is your belief that a human brain somehow has something "more" than a probabilistic model going on… but again, that's far from certain

    • You do know that comedians are copying each other's material all the time though? Either making the same joke, or slightly adapting it.

      So in the context of copyright vs. model training, I fail to see how the exact process of the model is relevant. In the end, copyrighted material goes in and material based on that copyrighted material comes out.

    • intensely_human@lemm.ee
      link
      fedilink
      arrow-up
      1
      ·
      11 months ago

      Text prediction seems to be sufficient to explain all verbal communication to me. Until someone comes up with a use case that humans can do that LLMs cannot - and I mean a specific use case, not general high-level concepts - I'm going to assume human verbal cognition works the same way as an LLM.

      We are absolutely basing our responses on what words are likely to follow which other ones. It's literally how a baby learns language from those around them.
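
      As a crude sketch of that "learning which words follow which" idea - this is just bigram counting over a made-up utterance, far simpler than either a baby or an LLM:

          from collections import defaultdict, Counter

          # an invented stream of words a learner might hear
          heard = "the dog ran . the dog barked . the cat ran".split()

          follows = defaultdict(Counter)
          for current, nxt in zip(heard, heard[1:]):
              follows[current][nxt] += 1  # tally what tends to come next

          # after "the", "dog" has been heard more often than "cat"
          print(follows["the"].most_common())  # [('dog', 2), ('cat', 1)]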

      • chaos@beehaw.org
        link
        fedilink
        arrow-up
        9
        ·
        11 months ago

        If you ask an LLM to help you with a legal brief, it'll come up with a bunch of stuff for you, and some of it might even be right. But it'll very likely do things like make up a case that doesn't exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you're going to have a bad time.

        There's a reason LLMs make stuff up like that, and it's because they have been very, very narrowly trained when compared to a human. The training process is almost entirely getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren't just associating the sounds they hear; they're also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental state of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

        LLMs aren't nearly at that level. That's not to say what they do isn't impressive, because it really is. They can also synthesize unrelated concepts together in a stunningly human way, even things that they've never been trained on specifically. They've picked up a lot of surprising nuance just from the text they've been fed, and it's convincing enough to think that something magical is going on. But ultimately, they've been optimized to predict words, and that's what they're good at, and although they've clearly developed some impressive skills to accomplish that task, it's not even close to human level. They spit out a bunch of nonsense when what they should be saying is "I have no idea how to write a legal document, you need a lawyer for that", but that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions and a desire to avoid causing harm, and they don't have that. And how could they? Their training didn't include any of that, it was mostly about words.

        One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you're talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit; by getting good at mimicking language, they are able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question "as an AI, do you want to take over the world?" is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren't just doing statistics, but you don't have to go too far down that spectrum before the answers start seeming thoughtful.
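
        If it helps, here's a toy rendering of that narrow training signal. Everything below is invented for illustration; real training runs this over billions of tokens and adjusts billions of weights accordingly:

            import math

            # hypothetical model output for the blank in "the defendant was found ___"
            predicted = {"guilty": 0.7, "liable": 0.2, "purple": 0.1}
            actual_next_word = "guilty"

            # cross-entropy loss: small when the true next word got high probability;
            # training exists to push this number down, and nothing else
            loss = -math.log(predicted[actual_next_word])
            print(f"loss = {loss:.3f}")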

    • SuperSaiyanSwag@lemmy.zip
      link
      fedilink
      English
      arrow-up
      1
      Ā·
      edit-2
      11 months ago

      Am I a moron? How do you have more upvotes than the parent comment - is it because you're being more aggressive with your statement? I feel like you didn't quite refute what the parent comment said. You're just explaining how ChatGPT works, but you're not really saying why it shouldn't use our established media (copyrighted material) as a reference.

      • Phanatik@kbin.social
        link
        fedilink
        arrow-up
        1
        ·
        edit-2
        11 months ago

        I don't control the upvotes, so I don't know why that's directed at me.

        The refutation was based on a misunderstanding of how LLMs generate their outputs and how the training data assists the LLM in doing what it does. The article itself tells you ChatGPT was trained on copyrighted material they were not licensed for. The person I responded to suggested that comedians do this with their work, but that equates the process an LLM uses when producing an output with a comedian writing jokes.

        Edit: Apologies if I do come across as aggressive. Since the plagiarism machine has been in full swing, the whole discourse around it has gotten on my nerves. I'm a creative person: I've written poems and short stories, I'm writing a novel, and I also do programming and a whole host of hobbies. So when LLMs are used to put people like me out of a job using my own work, why wouldn't that make me angry? What makes it worse is that I'm having to explain concepts regarding LLMs to people who continue to defend them. I can't stand it, so yes, I will come off as aggressive.

        • SuperSaiyanSwag@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          ·
          11 months ago

          Sorry, I was essentially emphasizing my initial point - "am I a moron?", lol - because I legitimately didn't get your point at first like others in this thread did.

          I get what you mean now after reading it a couple more times.