

He’s jumping ship because it’s destroying his ability to eke out a living. The problem isn’t a small one; what’s happening to him isn’t an isolated case.




I agree with you that there can be value in “showing people that views outside of their likeminded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.


From what I’ve heard, the influx of AI-generated data is one of the reasons genuine human data is becoming increasingly sought after. AI training on AI output risks becoming a sort of digital inbreeding, where each generation loses originality and the other ineffable human qualities that AI still hasn’t quite mastered.
I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.
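To make the “digital inbreeding” idea concrete, here’s a toy Python sketch of my own (not any real training pipeline; the 90% cutoff is just my stand-in for a model favoring its most typical outputs). Each generation trains only on the previous generation’s output, and the rare, original material in the tails steadily disappears:

```python
import random
import statistics

# Toy sketch of "digital inbreeding": each generation is trained only on
# the previous generation's output. The "model" here keeps just the most
# typical 90% of what it saw, so rare/original samples vanish over time.

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(1000)]  # gen 0: human-made data

for gen in range(1, 7):
    mu = statistics.mean(samples)
    samples.sort(key=lambda x: abs(x - mu))       # most typical first
    samples = samples[: int(len(samples) * 0.9)]  # drop the rarest tails
    print(f"gen {gen}: spread = {statistics.stdev(samples):.3f}")
```

The spread shrinks every generation; that narrowing is the loss of originality in miniature.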


“Public” is a tricky term. At this point everything is being treated as public by LLM developers. Maybe not you specifically, but a lot of people aren’t happy with how their data is being used to train AI.


Is the only imaginable system for AI to exist one in which every website operator, musician, artist, or writer has no say in how their data is used? Is it possible to have a more consensual arrangement?
As for the question of ethics, there’s a lot of ground to cover, and much of it is already being discussed. I’ll basically reiterate the part of what I said that pertains to data rights: I believe they’re pretty fundamental to human rights, for a lot of reasons. AI is killing open source and claiming the whole of human experience for its own training purposes. I find that unethical.


I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything; discussion is one of the reasons I post. It’s really unproductive to make blanket statements that try to end a discussion before it starts.


I think you’d probably have to hide out under a rock to miss out on AI at this point. Not sure even that’s enough. Good luck finding a regular rock and not a smart one these days.


AI companies could start, I don’t know, maybe asking for permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then people wouldn’t poison the data they clearly never agreed to have used for training?
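For what it’s worth, a machine-readable way to ask is already decades old: robots.txt. Here’s a minimal Python sketch (the bot name and URLs are placeholders I made up) of what a well-behaved crawler does before fetching a page:

```python
from urllib import robotparser

# Minimal sketch of the consent signal that already exists: check a site's
# robots.txt before scraping. "ExampleAIBot" and the URLs are placeholders.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawl rules

page = "https://example.com/writing/some-article.html"
if rp.can_fetch("ExampleAIBot", page):
    print("robots.txt permits this bot here; fetch away.")
else:
    print("robots.txt says no; a well-behaved crawler moves on.")
```

Honoring it is entirely voluntary, of course, which is rather the point.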


Your engagement on this issue is still clearly in bad faith. It reads like a common troll play: trying to draw a mark down a rabbit hole.
Understand that I don’t play these games. This is me leaving you to your checkerboard. Take care.
[Edited for grammar and brevity]


A very nuanced and level-headed response, thank you.


I do agree with your point that we need to educate people on how to use AI in responsible ways. You also mention the cautious approach taken by your kids’ school, which sounds commendable.
As for preparing kids for an AI future in which employers might fire AI-illiterate staff, that sounds to me like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if schools had approached AI education that way. It’s very different from the current move to include AI broadly in virtually all classrooms without consistent guidelines.
(I believe I read the same post about the CEO, BTW. It sounds like the CEO’s claim was likely AI-washing, misrepresenting the actual reason those staff were fired.)
[Edit to emphasize that any AI education done to prepare for employment should be treated as optional vocational education, confined to the specific relevant courses rather than broadly applied]


While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom.
One of the important points is that there are no consistent standards or approaches toward AI in the classroom. There are almost as many variations as there are classrooms. It isn’t reasonable to expect a comprehensive list of all of them, and it’s neither the point nor the scope of the discussion.
I welcome specific and informed counterarguments to anything presented in this discussion; I believe many of us would. Frankly, I find it ironic how lacking in “nuance or level-headed discussion” your own comment seems.


I appreciated this comment; I think you made some excellent points. There is absolutely a broader, complex, and longstanding problem. To me, that makes it even more crucial to consider seriously what we introduce into that vulnerable situation. A bad fix is often worse than no fix at all.
“AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system.”
A crutch is a very simple and straightforward piece of tech. It can even just be a stick. What I’m concerned about is that AI is no stick, it’s the most complex technology we’ve yet developed. I’m reminded of that saying “the devil is in the details”. There are a great many details in AI.


This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.


Congrats! Gaming was the only thing holding me back before I switched over completely as well, though I had been using Linux for years like you. It’s like becoming cancer-free or something when you finally get there.


They need to stick the landing. America will threaten and bully. I’ve also heard some are afraid of the cost and complexity of doing something like this. Hopefully they do realize the necessity of it and stay the course despite all of that.


I share this concern.


One of Big Tech’s pitches for AI is the “great equalizer” idea. It reminds me of their pitch about social media being the “great democratizer”. Now we’ve got algorithms, disinformation, deepfakes, and people telling machines to think for them, and potentially for their kids too.
Not every problem can be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.