Anyone actually using that shit is either ignorant and completely out of the loop, doesn’t care about the numerous ethical issues it has, or welcomes said issues with open arms.
The only acceptable scenario would be someone who genuinely hasn’t learned about why this shit sucks so much, and is willing to completely drop it after they learn. Someone who’s aware and still uses it isn’t someone to associate with.
I don’t actually think many people are open to changing anything, even when the information gives them good reason to.
This entire argument could be had over every divisive societal split.
At first they seem rational.
“Letting common people learn how to read books is a bad idea.”
“Listening to the radio is the downfall of this world.”
“TV is a really bad idea.”
“I don’t make friends with Nazis.”
Then they run the gamut, each sounding ever so slightly smarter than the last.
“I don’t make friends with conservatives.”
“I don’t make friends with Republicans.”
Skipping ahead:
“I don’t hang out with people who spend all their time watching (insert streaming service here).”
“I don’t like people who don’t hate AI.”
“I don’t date people who use AI.”
“AI use will prevent me from being friends with someone.”
This is just a smattering of divides; there are plenty in between all of these if I had to lay out a full spectrum, but you get the point.
Anyway, somewhere on this spectrum you find your spot, and everything previous to that spot seems absolutely obvious, your exact spot seems reasonable, and everything beyond you seems utter lunacy.
Currently I’m pretty firmly at the “all AI is bad AI” end of it. I can see how translation might be useful if it faithfully preserved meaning, though current results still aren’t great. But I don’t see much actual use or value beyond that. And given its enormous power and water drain just to support something that might be valuable later, this approach is ass-backwards.
Previous world-shaking technologies were easy to find value in pretty quickly. Language, the printing press, radio, TV… Sure, they could be used for brain rot, but information sharing is generally good (if it’s honest).
But AI doesn’t really have a killer application (yet, anyway) and devoting this much to it before we figure out any potential way to use it that makes it worth what we’re giving up to use it is absolutely bonkers.
I personally don’t currently know of any use it’s even capable of that would justify the cost, but I’m willing to hear use cases.
Meanwhile we’ve got lazy thinkers that have less than zero reason to believe in God that still do. So just having evidence isn’t all there is to it. You have to be open enough to acknowledge and change with that evidence.