• wonderingwanderer@sopuli.xyz · 1 day ago

    There’s a lack of evidence for anything not being conscious.

    So should we just assume that nothing is conscious? After all, I can’t prove that you’re conscious, nor can you prove that I am. So should we relegate ourselves to an amoral solipsism?

    Neurons work by generating electrical signals in response to stimulus and they do this in a physical way.

    I know how neurons work. Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.

    I’m not sure there’s any requirement for consciousness to include “human-like reasoning” or “understanding” for it to have some kind of experience and perspective or awareness.

    That’s… irrelevant. I never said they have “human-like reasoning” or “understanding.” I said we don’t understand enough, meaning humanity writ large, including the experts. There are too many unknowns about the nature of consciousness.

    A cluster of neurons trained to play doom might have consciousness but it’s not likely to think like a human

    Again, it doesn’t need to think like a human in order to be capable of experiencing suffering. Babies don’t “think like humans,” or at least we don’t have any solid evidence that they do, but they’re certainly capable of suffering.

    Your mentality is the same one people have used for generations to justify circumcising infants without anaesthetics. How far are you willing to extend it? Do pets “think like humans”? Do uncontacted tribes “think like humans,” in whatever vague way you define it in order to justify cultivating human braincells in a petri dish?

    Do you not see how problematic this is? What if the technology grows and in a decade they’re studying a clump of 2 billion neurons in a vat? Will it suddenly become human enough to deserve your consideration? What about when it becomes 20 billion?

    Whether it’s ethical to squash an ant or turn off an iPhone or stimulate a lab-grown neuron depends on your ethical framework and your philosophical worldview.

    Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is? Amoral people exist, moral cynicism exists, nihilism exists, solipsism exists, hell, even social Darwinism exists.

    Any of those frameworks and worldviews can be used to justify atrocities in the minds of those who hold them. And yes, an unethical or even anti-ethical persuasion is still an “ethical framework,” in the strictest sense of the term.

    Just because something can be couched in philosophical jargon doesn’t mean we should grant it license to do whatever it wants.

• TechLich@lemmy.world · 26 minutes ago

      So should we just assume that nothing is conscious?

      Not at all! In fact, I believe that we should assume almost everything is conscious. I think it’s a bit of human arrogance to think that we brain creatures have a monopoly on perspective.

      Nobody knows why they produce consciousness or what particular mechanism is responsible for human awareness.

      Exactly my point.

      That’s… irrelevant

      I don’t think it is. If the argument is that it’s unethical to poke a neuron because it might have consciousness, would the same argument not apply to anything else? I think you might be getting a bit hung up on the “think like a human” thing. My point is not that it’s okay to torture something if it doesn’t “think like a human.” It’s that there are potentially a lot of things in the world that are conscious that don’t often get the same consideration.

      capable of experiencing suffering

      This is an interesting one. It shifts the question from “does it have a consciousness?” to “does it have a consciousness that is suffering or able to suffer?”. The idea of suffering is a very human concept that we have a whole section of our brains devoted to. There’s a lot of ethics devoted to alleviating suffering (e.g. humanitarianism), and we sorta use it as a means of directing our goals: we avoid things that make us suffer and seek things that bring us happiness. What makes us happy or makes us suffer varies a bit from person to person due to experience and learning/training, but a lot of it is biologically evolved. Physical and emotional pain makes us suffer for evolutionary reasons.

      So in one sense, you could define suffering as a stimulus that some conscious system avoids? In which case, training neurons essentially teaches them what suffering is. They’re trained to activate or not activate based on what avoids irregular stimulus (suffering) and results in regular stimulus (happiness).

      If that’s how you define it though, there could be many other systems that work the same way. Obviously animals and plants and fungi, etc., but also computers and lots of mechanical systems do that too. If making decisions to avoid or seek electrical stimulus is suffering, then a computer is basically a pleasure/torture box.

      Personally I think that suffering is more than that. I think it’s a larger system we brain creatures have developed that doesn’t necessarily apply very well outside the context in which we use it. Would a vat of 20 billion neurons be able to suffer? I think that depends on how they’re arranged and whether they have that concept.

      Whether it’s ethical to murder an entire village of your enemies “depends on your ethical framework and philosophical worldview.” See what a slippery slope moral relativism is?

      Just because different ethical frameworks and worldviews exist, doesn’t mean they should all be treated equally. Sure, if someone is super utilitarian they might be fine with torturing people for medical research when they feel that the ends justify the means. If someone has a strict deontological code of ethics that tells them homosexuality is a sin punishable by death, they might campaign for that. I think those people suck and their beliefs are evil because of my own ethics and worldview.

      When it comes to a question like “is an ant capable of suffering?” or “is it okay to swat a fly or set a mouse trap?” or “how many human neurons does it take to suffer while changing a light bulb?”, you’ll get varying answers from people based on who they are. Personally, I think the right answer to those questions is dependent on the brain of the person answering them.

      Moral universalists have the same slippery slopes you mentioned. If right and wrong are fixed and objective and not dependent on people, then groups claiming to know the one true morality will use it to persecute those labelled as evil or morally bankrupt (see the homophobic asshole example above).

      Moral relativism doesn’t mean that morality doesn’t matter or that it’s wrong to fight against what you think is evil. I believe you should fight for what is right and I’m hopeful that the things that I think are good will win out against the things that I think are evil. Absolutism is maybe a bit easier for that because it simplifies moral choices a lot, but I think it’s hubris to think that evil is the same everywhere to everyone and not an artifact of the human mind.