You clearly don’t understand how these things work. AI gen is entirely dependent on human artists to create stuff for it to generate from. It can only ever try to be as good as the data sets that it uses to create its algorithm. It’s not creating art. It’s outputting a statistical array based on your keywords. This is also why ChatGPT can get math questions wrong. Because it’s not doing calculations, which computers are really good at. It’s generating a statistical array and averaging out from what its data set says should come next. And it’s why training AI on AI art creates a cascading failure that corrupts the LLM. Because errors from the input become ingrained into the data set, and future errors compound on those previous errors.
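To make that last bit concrete, here’s a toy sketch (made-up numbers, not from any real model) of what “averaging out what its data set says should come next” looks like: the model picks the next token by sampling a learned probability distribution, not by running the arithmetic.

```python
import random

# Toy illustration, not a real model: hypothetical probabilities for the
# token that follows "2 + 2 =" in the training data. The model doesn't do
# arithmetic; it samples from a distribution of what "should come next".
next_token_probs = {
    "4": 0.92,   # the overwhelming majority of the data says "4"
    "5": 0.04,   # but wrong continuations exist in the data too
    "22": 0.03,  # e.g. string concatenation seen in scraped code
    "3": 0.01,
}

tokens, weights = zip(*next_token_probs.items())
answer = random.choices(tokens, weights=weights, k=1)[0]
print("2 + 2 =", answer)  # usually "4", occasionally not -- no calculation happened
```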
Just like with video game graphics attempting to be realistic, there’s effectively an upper limit on what these things can generate. As you approach a 1:1 approximation of the source material, hardware requirements to improve will increase exponentially and improvements will decrease exponentially. The jump between PS1 and PS2 graphics was gigantic, while the jump between PS4 and PS5 was nowhere near as big, but the differences in hardware between the PS1 and PS2 look tiny today. We used to marvel at the concept that anybody would ever need more than 256MB of RAM. Today I have 16GB and I just saw a game that had 32GB in its recommended hardware.
To be “better” than people at creating art, it would have to be based on an entirely different technology that doesn’t exist yet. Besides, art isn’t a product that can be defined in terms of quality. You can’t be better at anime than everybody else. There’s always going to be someone who likes shit-tier anime, and there’s always going to be parents who like their 4-year-old’s drawing better than anything done by Picasso. That’s why it’s on the fridge.
So your argument, if I’m understanding it correctly, is that:
1. You believe the model of polygon-based rendering in video games has diminishing returns. No argument. I’m just not sure what this has to do with generated art, which doesn’t have similar constraints and doesn’t work the same way.
2. Art is subjective, so calling something better or worse is pointless. Also no argument; this is why it’s absolutely ridiculous for people to be saying all AI-generated art is universally bad. It has its purpose in the same way caricature “artists” in European historical districts have a purpose…in theory.
It sounds like we’re on the same page, but you have some reason (which you’ve been unable to coherently articulate) why you think AI-generated art will never improve to the point of being good.
AI art isn’t bad because of its inherent quality (though tons of it is poor quality); it’s bad both because it lacks the essential qualities that people appreciate about art, and because of the ethics around the companies and the models they’re making (as well as the attitude of some of the people who use it).
AI has no grasp of the technical concepts behind art, which is a skill people appreciate in terms of “quality,” and it lacks “intent.” Art is made for the fun of it, but also with an intrinsic purpose that AI can’t replicate. AI is just a fancy version of a meme template. To quote Bennett Foddy:
For years now, people have been predicting that games would soon be made out of prefabricated objects, bought in a store and assembled into a world. And for the most part that hasn’t happened, because the objects in the store are trash. I don’t mean that they look bad or that they’re badly made, although a lot of them are - I mean that they’re trash in the way that food becomes trash as soon as you put it in a sink. Things are made to be consumed and used in a certain context, and once the moment is gone, they transform into garbage.
Adam Savage had a good comment on AI in one of his videos where he said something like “I have no interest in AI art because when I look at a piece of art, I care about the creator’s intent, the effort that they put into the piece, and what they wanted to say. And when I look at AI, I see none of that. I’m sure that one day, some college film student will make something amazing with AI, and Hollywood will regurgitate it until it’s trash.”
But that’s outside the context of your original post, in which you said that AI art would someday be better than what humans can make. And this is where my point about video game graphics comes in. AI is replicating the art in its training set, much like computer graphics seeking realism are attempting to replicate the real world. There’s no way to surpass this limit with the technology that powers these LLMs, and the closer they get to perfectly mimicking their data and removing the errors that are so common to AI (the six fingers, the strange melty lines, the lack of clear light sources, the roughly 60% accuracy rate of models like ChatGPT, etc.), the more their power requirements will increase and the more incremental the advancements will become. We’re in the early days of AI, and the advancements are rapid and large, but that will slow down, and the hardware and data requirements are already massive: ChatGPT and its competitors have been trained on something approaching the entirety of the internet.
AI has no grasp of the technical concepts behind art, which is a skill people appreciate in terms of “quality,” and it lacks “intent.” Art is made for the fun of it, but also with an intrinsic purpose that AI can’t replicate.
I generally agree with you: AI can’t create art, specifically because it lacks intent. But the person wielding the AI can very much have intent. The reason so much AI stuff is slop is the same reason that most photographs are slop: the human using the machine doesn’t care enough, and/or doesn’t have the artistic wherewithal, to elevate the product to the level of art.
Is this at the level of the artstation or deviantart feeds? Hell no. But calling it all bad, all slop, because it happens to be AI doesn’t do the people behind it justice.

(Also, that’s the civitai.green feed sorted by most reactions, not the civitai.com feed sorted by newest. Mindless deluge of dicks and tits, tits and dicks, that one.)
AI is replicating the art in its training set
That’s a bit reductive: it very much is capable of abstracting over concepts and of banging them against each other. Interesting things are found at the fringes, at the intersections, not on the well-trodden paths. An artist will immediately spot that and try to push a model to its breaking point, ideally multiple breaking points simultaneously, but for that the stars have to align: the user has to be a) an artist and b) willing to use AI. Or at least to give it an honest spin.
That fundamentally assumes the exact model used today for, and let’s be clear on this, picking a 16-bit integer, will never be improved upon. It also assumes that even though humans are able to slap two things together and sometimes, often by accident, make something better than the sum of its parts…a machine manipulating integers cannot do the same. It is fundamentally impossible for AI to synthesize anything…except that’s exactly what it’s doing.
There seems to be some other argument going on up above, about how the actual computation itself can’t compete, but that doesn’t hold water either. OK, a computer is XOR-ing things rather than running on chemical juice and action potentials. But that’s super low level. What it sure appears to be doing is taking things it’s seen as input data and generating variations on that…which humans also do. All art is theft. I just listened to a podcast where an author said they hadn’t realized that a children’s book from when they were 8 had shaped their story design at 35, and yet now that they’ve reread it, they immediately see that they basically stole pieces from it wholesale.
Finally, you have this intent thing in here. I can’t argue that at present there’s any intent. But that has never before been a restriction on whether something is art. Plenty of soulless trash is called art. Why is the fruit bowl considered art? And even past that, there’s an entirely opposing view where you shouldn’t care what the author was thinking while making something; what matters is what you think while consuming it. If I look at an AI drawing and it sparks some emotional resonance, who is anyone else to say it isn’t important art to me?
I’m not going to argue that there are no issues with AI art today, or that the quality isn’t low, but folks in the Lemmy echo chamber are putting human-produced art on an inconceivably high pedestal that cannot possibly stand the test of time.
Their explanation was wasted on useless people like you.

So… first you say that art is subjective, then you say that a given piece can be classified as “good” or “bad”. Which is it?
Your whole shebang is that it [GenAI] will become better. But, if you believe art to be subjective, how could you say the output of a GenAI is improving? How could you objectively determine that the function is getting better? The function’s definition of success is its loss function, which is but a measure of how mismatched its output for a given description is to that description’s corresponding image. So: how well it copies the database.
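A minimal sketch of what that means, with made-up shapes and names (real image generators use fancier objectives, e.g. diffusion models are trained to predict noise, but the principle is the same: success is defined as a small mismatch against the dataset):

```python
import numpy as np

def mse_loss(generated: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error: the average squared pixel mismatch."""
    return float(np.mean((generated - target) ** 2))

rng = np.random.default_rng(seed=0)
dataset_image = rng.random((64, 64, 3))    # stands in for the image paired with a caption
generated_image = rng.random((64, 64, 3))  # stands in for the model's output for that caption

# Training is the process of pushing this number toward zero --
# i.e. toward faithfully reproducing the dataset.
print(mse_loss(generated_image, dataset_image))
```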
Also, an image is “good” by what standards?
Why are you so obsessed with the image looking “good”? There is a whole lot more to an image than just “does it look good”. Why are you so afraid of making something “bad”? Why can you not look at an image any deeper than “I like it.”/“I do not like it.”, “It looks professional.”/“It looks amateurish.”? These aren’t meaningful critiques of the piece; they’re just reports of your own feelings. To critique a piece, one must try to work out what the piece is trying to accomplish, then evaluate whether or not it is succeeding. If it is, why? If it isn’t, why not?
Also, these number networks suffer from diminishing returns.
Also:
In the context of Machine Learning “Neuron” means “Number from 0 to 1” and “Learning” means “Minimize the value of the Loss Function”.
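A minimal, self-contained sketch of both of those definitions at once (toy numbers, nothing from a real model): one sigmoid “neuron”, which is just a value between 0 and 1, being “taught” by gradient descent, which is nothing more than nudging a weight to shrink a loss value.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

weight = 0.5          # the single trainable parameter
x, target = 1.0, 0.9  # one made-up training example

for _ in range(200):
    neuron = sigmoid(weight * x)   # the "neuron": just a number in (0, 1)
    loss = (neuron - target) ** 2  # how far we are from the target
    # "learning": move the weight downhill on the loss
    grad = 2 * (neuron - target) * neuron * (1 - neuron) * x
    weight -= 0.5 * grad

print(neuron, loss)  # the neuron has drifted toward 0.9; the loss toward 0
```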
You’ve gone off the rails here; I don’t know what argument you’re trying to make.
Looks like OP used the phrasing “outperform”, but that has the same definition problems.
In any case, the argument I’m making is simple:
For a given claim “computers will never ‘outperform’ humans at X”, I need you to prove to me that there is a fundamental physical limitation that silicon computing machines have and that human computing machines don’t. You can make ‘outperform’ mean whatever you like; the fundamental issue is the same.
You have stated that AI will improve. Improvement implies being able to classify something as better than something else. You have then stated that art is subjective and therefore a given piece cannot be classified as better than another. This is a logical contradiction.
I then questioned your standards for “good”. By what criteria are you measuring the pieces in order to determine which one is “better”, and thus be able to tell whether the AI’s output is improving or not? I then tried, as simply and as briefly as I could, to give a basic explanation of how the Learning process actually works. Admittedly I did not do a good job. A proper explanation could take two or three hours, depending on how much you already know.
Then comes some philosophizing about what makes a piece “good”. First, questioning your focus on the output pieces being good. Then, asking what the harm is in a “bad” image, in the sense of: why not draw it yourself? Are you too afraid of making something that is not «perfect»? Then I asked why it is that you refuse, in your analysis of the “goodness” of an image, to go beyond “I like it.”/“I do not like it.”, “It looks professional.”/“It looks amateurish.”. Such statements are not meaningful critiques of a piece; they are reports of the feelings of the observer. The subjectivity of art we all speak of. However, it is indeed possible to make a more objective critique of a piece, one which goes beyond our tastes. To critique a piece, one must try to work out what the piece is trying to accomplish, then evaluate whether or not it is succeeding. If it is, why? If it isn’t, why not?
Then, as an addendum, I stated that these functions we call AI have diminishing returns. This is a consequence of the whole loss function thing which is at the heart of the Machine Learning process.
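A toy sketch of what that looks like (made-up numbers, not measurements from any real model): when the step you can take is proportional to how far you still are from the floor of the loss, every extra step buys less than the last.

```python
# Toy numbers: loss starts high and can never go below some irreducible floor.
loss, floor, rate = 100.0, 1.0, 0.2

for step in range(1, 21):
    improvement = rate * (loss - floor)  # step size shrinks as we near the floor
    loss -= improvement
    if step % 5 == 0:
        print(f"step {step:2d}: loss = {loss:7.3f}, gained {improvement:.3f}")
# the improvements decay geometrically: diminishing returns per unit of effort
```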
Then some deceitful definitions. The words “Neuron” and “Learning” in the context of Machine Learning do not have the same meaning as they do colloquially. This is something that fools many people, and that marketing agencies abuse to market “AI”. Neuron does not mean “simulation of a biological neuron”; it means “number from 0 to 1”. A Neural Network is actually just a network of numbers between 0 and 1, like 0.2031. Likewise, learning in Machine Learning is not the same as biological learning. Learning here is just shorthand for “minimizing the value of the Loss Function”.
I could add that even the name AI is deceitful, as it has been used as a marketing buzzword since its creation; arguably, one could say it was created to be one. It causes people to judge the Function not for what it is, as any reasonable actor would, but for what it isn’t: it is instead judged by what it might, maybe, become, if only we [AI companies] get more funding. This is nothing new. The same thing happened in the first AI craze last century. Eventually people realized the promised improvements were not coming, and the hype and funding subsided. Now the cycle repeats: they found something which can superficially be considered “intelligent” and are doing it again.