Ok ok, hear me out, what if they were harvested neurons from someone everyone wishes had gotten punished for their crimes, but never was? Hitler or Stalin? Or Trump! Or Putin!
Trump is still alive - he can and should be punished for his crimes. I don’t know what makes him special that he should never be.
Whoa whoa whoa. I said will never be punished, not should never be punished.
And Trump and Putin are both still alive.
Love this idea. This new form of consciousness is terrifying, but your idea makes it more tolerable for me. ❤️
It actually makes me even more concerned. I don’t want dead monsters to suffer. I want them to be dead.
I need to make a boomer shooter with this as the plot
something something torment nexus something something
Cortical Labs are the ones who pulled this off. They already have biological computers running on 800,000 lab-grown neurons available for ~$35,000 (just going on what a quick Google search told me) and are planning to open up a cloud computing service with its own API soon.
This makes me feel uneasy. Imagine if reincarnation were a thing and you get brought back into this world, and your purpose is to learn how to play DOOM.
aw sweet, man made horrors beyond my comprehension 😍
There’s another bunch of guys who are trying to do the same thing with rat neurons on the cheap using Gatorade as a growth medium.
Personally my worry really isn’t reincarnation, there’s no reason to believe that that’s true. But if these are fundamentally the same neurons that make up our brains, then how much do you need to put together before they acquire some form of “sentience”? Does a clump of 800,000 human neurons experience pain, sadness, a sense of self? Where is the line between an emotionless biocomputer and torturing a living organism for its entire lifespan?
Despite the fact that I really hate “AI”, that question was of course already sort of relevant for the latest AI models, even though we can generally conclude that they’re not there yet at all. But real neurons are different, we know what they’re capable of. How many do you need before a clump of neurons has rights?
Large language models are not intelligent. They are predictive text applications with massive dictionaries of circumstantial sentence structures to choose from. Nothing more. They do not feel and do not think for themselves. The only time they do anything is when the API calls them to produce more text with an updated context string.
I don’t know how many neurons are in a human brain, but if you made an artificial human brain, could it have consciousness?
It has to be a full fetus with a heartbeat to have rights. /s In all seriousness, the human brain is estimated to have 86 billion neurons.
Sure, but is the full human brain the minimum set necessary?
Sentience/sapience is probably an emergent property of a set of neurons needing to coordinate, plan, predict the future and oneself in relation to it.
I suspect that AI is capable of sentience with sufficient complexity and training, but it’s not there yet. I also suspect we’ll be well past the point where it is there before we realize it is, but not until we make some kind of fundamental change in how we do it - we know human-level intelligence is possible in the volume and power consumption of, well, a brain, so we’re orders of magnitude away from the efficiency limits.
It’s estimated that mice have 70 million to 100 million neurons in their brains. They are capable of feeling pain and have social hierarchy. They also experience emotions like fear, pleasure, and anxiety. (We use them in pharmacology models of many mental illnesses.)
Have you ever heard the phrase, “the neurons that fire together, wire together” ? Our neurons are in a constant feedback loop with the environment we experience. Our experiences shape how our neurons make interconnected networks, which then impacts how we behave upon the environment.
If those neurons connected to the computer chip only ever experience playing the game “DOOM,” how would they know about anything else? How could they know about pain without having limbs to innervate and experience the pain with? How could they have a social hierarchy without others to interact with? We may as well be god to those neurons on the PC chip, because we are controlling the entire world they have access to.
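The “fire together, wire together” idea above has a standard textbook form, the Hebbian learning rule: the strength of a connection between two neurons grows in proportion to how often they are active at the same time. A minimal toy sketch (function names and numbers are invented for illustration; real synaptic plasticity, STDP and all, is far richer):

```python
# Minimal Hebbian update: delta_w = learning_rate * pre * post.
# If the pre- and post-synaptic neurons fire together, the weight grows;
# if either is silent, nothing changes.
def hebbian_update(w: float, pre: float, post: float, lr: float = 0.1) -> float:
    return w + lr * pre * post

w = 0.0
for _ in range(10):  # both neurons repeatedly active together
    w = hebbian_update(w, pre=1.0, post=1.0)
# after 10 coincident firings the connection has strengthened from 0.0 to 1.0
```

This is why the environment matters so much: whatever inputs the dish is fed are the only correlations its connections can ever encode.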
What I find sad is that our society is ok with hooking living cells up to a computer to make smarter computers, but has a problem with ethically harvesting stem cells to be used to treat diseases.
It’s because the stem cells somehow threatened the religious hegemony.
“Do lab grown neurons have a soul?”
I would say consciousness is required for that, so no.
People used to say animals were not conscious.
Recent science suggests that some animals have what humans would consider to be language. This is a slippery slope.
People used to say animals were not conscious.
A lot of religious people still say that.
As it turns out, Doomguy is a robot clone of B.J. Blazkowicz, who was deliberately smuggled onto Mars by scientists who knew about Hell.
Iirc the study found that the neurons played “slightly better than buttons being pressed at random” or something like that, so it’s hardly pro gamer brain chip.
Ah I see, so we’re adding the matrix to our dystopian horror show reality then.
Next step: putting the cells into one of those Boston Dynamics robot dogs with a gun attached. What could go wrong?

Finally, I knew when I saved this to my phone there would be a perfect moment. (Humanity is too predictable)
Attribution: https://lemmy.world/post/43077529
OK but hear me out here, I think I have the beginnings of a business plan:
- Create the Torment Nexus
- ?
- Profit
Some components of the plan are still under development, but let’s not lose momentum. We can advance with the initial phase while brainstorming to refine the plan in real time as we progress. It’s an exciting opportunity and we mustn’t forfeit our first-to-market advantage.
Scientists: “No, this isn’t The Torment Nexus, this is ‘The Nexus of Torment’! It’s totally different!”
In other news, Torment (with the patches) was a really good game
Wait, is it a real cover? Was it made before or after Squid Game? It uses the same font.
Am I the only one who wonders why, in a world where there are already concerns about machine rebellion, whenever we train rats, robots, or a bunch of neurons to play a game, it HAS to be Doom? Can’t we think of another, non-violent, or let’s be bold: non-destructive game?
They trained a tiny patch of neurons to respond to low-voltage electric impulses. The cells don’t know they’re playing Doom. They don’t have any kind of social context or even video feedback.
Imagine if I stuck you in a sensory deprivation chamber, handed you an NES controller, and asked you to hit the buttons. Then, periodically, I said “Yes” or “No” based on the buttons you pressed. And when I pulled you out of the tube at the end of an hour, I told you “the yes and no messages were intended to encourage you to correctly navigate Mario through the first level of the original game.” What if, instead of Mario, I’d been telling you how to play Street Fighter?
It doesn’t matter if it’s Doom. They likely picked Doom because the I/O is so rudimentary that you can install the game on practically anything. The cellular matter has no idea what it’s doing beyond the “Yes/No” signaling.
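For what it’s worth, the “Yes/No” loop described above can be sketched in a few lines. Everything here is hypothetical: the function names, voltages, and thresholds are invented for illustration and don’t reflect any real electrode API. The shape of the loop is the part that matches the description: stimulate, read activity, map it to a binary game input, then reward with predictable feedback for good moves and noisy feedback for bad ones.

```python
import random

def read_firing_rate(stimulus_mv: float) -> float:
    """Stand-in for the electrode array: returns a spike rate in Hz.
    A real rig would return measured activity; we fake a noisy response."""
    return max(0.0, stimulus_mv * 10 + random.gauss(0, 5))

def neurons_to_input(firing_rate_hz: float, threshold_hz: float = 20.0) -> str:
    """Map raw activity to the only two 'controls' the dish effectively has."""
    return "FIRE" if firing_rate_hz > threshold_hz else "HOLD"

def feedback(hit_target: bool) -> float:
    """'Yes/No' signaling: a predictable stimulus for good moves,
    an unpredictable one for bad moves."""
    return 1.0 if hit_target else random.uniform(0.0, 5.0)

# One step of the loop: stimulate, read, act, reward.
rate = read_firing_rate(stimulus_mv=3.0)
action = neurons_to_input(rate)
reward_stimulus = feedback(hit_target=(action == "FIRE"))
```

The point of the sketch: the cells never see pixels or demons. They see voltages in and voltages out, which is why calling them “the controller” rather than “the player” is apt.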
Tetris.
Pong.
I know there is no real association between the game and real life. It’s more a question on the mindset of the researchers. I’m sure there are other games that would fit their needs.
you can do the exact same thing with a cockroach. Organoids are not brains.
It’s not that deep, it’s a meme.
I have no mouth and I must Doom.
Honestly? Sounds preferable to being stuck in the universe of I Have No Mouth And I Must Scream… I’ll take a challenging power fantasy with some massively overpowered weapons over millennia of endless physical and psychological torture by an insane AI… might just be me though…
I have no MOUSE and I must DOOM
The original DOOM is entirely playable on a keyboard, though. It’s essentially a 2D game, as you can’t look up or jump.
I just remembered, back in the day in Russia we used to call keyboard players “tractor drivers.”
So, uh… is it any good at it?
IIRC, it doesn’t actually play the game itself. We prod the cells, they fire in a certain way, and that response is read and converted into an output for the game. The cells aren’t a rudimentary Doom bot, they’re the controller.
So no, then.
Iirc it’s slightly better than using a coin toss to fire the inputs. Fantastic for fundraising for this company tho
“we grew a human brian”
200,000 “brain cells” (so about 1/3 of them neurons) is the equivalent of a really simple microcontroller.
Edit: left the typo for funny
Yah but the visuals of “growing a human brain and trapping it in hell” gets a lot more clicks than “We made a very basic microcontroller out of organic chemistry to interact with an old video game poorly.”
… Am I missing something, or is this not like, the practical, if not lore accurate first step toward actually creating a:

Why did hell have its own R&D department doing high tech cybernetics anyway?
What other advanced industries does hell have? It’s obviously a highly capitalistic place, so I imagine banking/finance?
I mean, pick all your deadly sins, right?
Brothels, Restaurants, Blood Sports…
Next step: give it spider legs and a Gatling gun!
I mean, Boston Dynamics figured out how to build essentially robot mules and cats like a decade ago, and they’re actually currently building and improving on humanoid designs.
They got basically acquired by/folded into Hyundai, you know, an actual manufacturing company, unlike Elon’s ongoing fraudulent shitshows.
The only missing components are a minigun, robotic spider legs, and a positive-reinforcement cocktail whenever it kills a person.
“We decided to leave those out of our first test; staring down the barrels of a minigun during neural training was putting our scientists off.”



Raises uncomfortable questions about consciousness. The only difference between these neurons and your own is the number of them and the structures they form. Of course it doesn’t know what it’s doing, but… neither do our own neurons.
I mean it’s the same question we’ve been asking all our lives about the animals, fetuses and now AI. When does it stop being a flowchart and start being a consciousness.
Do those neurons interact with hormones like mine do?
Neuralink did pretty much the same thing to monkeys that are actually conscious. So is this different only because those are human neurons? Is human consciousness different than animal consciousness?
i dont think op made a mutually exclusive statement?..
I’m not sure this is quite analogous to Neuralink’s monkey experiments. That said,
So is this different only because those are human neurons?
To my mind, a neuron is a neuron. The only difference between your brain and a monkey brain is, again, the number of neurons and the structures they form. I don’t see this as any different from monkey or rat or ant or entirely digital neurons.
I’m not sure this is quite analogous to Neuralink’s monkey experiments.
Why not? It’s a chip reading inputs from neurons. This meme doesn’t make it clear if the chip was also stimulating neurons, but Neuralink has plans for neural stimulation, and it’s possible this was also tested on monkeys. So what’s the difference?
You seem to be arguing against a point that no one has made.
You seem not to understand what is being discussed here.
Correct. That was basically my point – I don’t think anything is being discussed, people are talking past each other.
Sounds like those are uncomfortable questions being raised…
Yes. Because it’s us. Anything not us is always going to be less valuable. You’d kill 100 lions if it means saving 1 human.
Lions are not conscious. And I’m not asking about value. Of course we value human consciousness more than monkey consciousness. We don’t grant monkeys any rights. Hell, we assign more value to unconscious (brain dead) humans than to conscious monkeys. But how exactly is human consciousness different?
What leads you to assume that lions lack consciousness exactly?
Shit, turns out lions are conscious! They are just stupid. Stephen Hawking said it in 2012. I honestly didn’t know that.
That was just to try and make the equipment work at all, it wasn’t about doing anything with software. It’s the opposite where you’re only worried about the physical damage and infection.
I was focusing more on the “hooking up conscious brain to computer” part than about the damage and infection part.
Thought experiment: let’s say we have a brain-dead patient. You have verified that there is no neural activity in the brain beyond the cerebellum. There’s no consciousness in the brain. Legally they’re still considered a person. You can’t, for example, shoot them.
We also have a 5kg blob of lab grown human brain tissue. We have verified there is neural activity in the entire blob but we don’t know what it’s doing and we can’t communicate with it.
Which one is more conscious? Which one should be considered more human and should have more rights?
Hooking up to a computer is just installing a software keyboard in your brain; that doesn’t really mean or do anything. It’s what software you load after that’s relevant.