I mean, the “allow non-verbal people to speak” thing has some merit though. Not LLMs per se, but the kinds of machine learning used to decode the brainwaves of people who are physically unable to talk are usually lumped into the general category of “AI,” from what I’ve seen.
Yeah, that’s not what they mean. They mean feeding recorded texts and speeches of a person into an LLM and then instructing it to pretend to be that person. Like that AI murder victim “testimony” that was permitted to be shown in court as “evidence” some time ago.
I mean, I’m pretty sure we can enable people to communicate if they’re at all conscious and mentally able to communicate; Stephen Hawking was able to write despite eventually only being able to reliably move a cheek muscle. As long as a person can intentionally move one muscle, we can rig something up to interpret it as Morse code.
Is it great? No, these methods fucking suck, but they do work, and we don’t need AI to do it.
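To make the “one muscle to Morse code” point concrete, here’s a toy Python sketch of the idea: take timestamped press/release events from any single switch (a blink sensor, a cheek switch, one working finger) and turn them into letters. The event format and all the timing thresholds here are made-up assumptions for illustration, not the behavior of any real assistive device.

```python
# Toy single-switch Morse decoder. A press shorter than DOT_MAX is a dot,
# anything longer is a dash; pauses between presses split letters and words.
# Thresholds are illustrative assumptions, not calibrated values.

MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

DOT_MAX = 0.3      # presses up to this long count as dots (seconds)
LETTER_GAP = 0.9   # a pause longer than this ends the current letter
WORD_GAP = 2.0     # a pause longer than this also inserts a space

def decode(events):
    """events: list of (press_time, release_time) tuples, in order."""
    text, letter = [], []
    prev_release = None
    for press, release in events:
        if prev_release is not None:
            gap = press - prev_release
            if gap > LETTER_GAP and letter:
                text.append(MORSE.get("".join(letter), "?"))
                letter = []
            if gap > WORD_GAP:
                text.append(" ")
        letter.append("." if (release - press) <= DOT_MAX else "-")
        prev_release = release
    if letter:
        text.append(MORSE.get("".join(letter), "?"))
    return "".join(text)

# Three short presses (S), a pause, three long presses (O), a pause,
# three short presses (S) -> "SOS".
events = [(0.0, 0.1), (0.3, 0.4), (0.6, 0.7),
          (1.8, 2.4), (2.7, 3.3), (3.6, 4.2),
          (5.4, 5.5), (5.8, 5.9), (6.2, 6.3)]
print(decode(events))  # SOS
```

The whole point is how little machinery it needs: one binary input plus timing gets you the full alphabet, no model required.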
I’ve just had experiences with AI help chats where, when I started typing, the AI would try to finish my sentence and kept jumping the cursor around, making it absolutely unusable. I had to type in Notepad and copy it into the chat. Staggeringly useless. So if this “mind reading” AI is like that, I don’t predict good results.
Also, fuck you, QuickBooks.
I mean, any technology can be stupid if it is utilized stupidly, which I would think taking over someone’s keyboard while they’re typing would qualify as. But why would one company deploying a technology in a stupid manner mean that someone else’s research into a different but related technology is guaranteed to produce equally poor results?
They already are; they just don’t understand enough theology to see the parallels.
Universal Paperclips is such a great browser game, as buggy as it may (have) be(en).