• 2 Posts
  • 94 Comments
Joined 7 months ago
Cake day: June 4th, 2025


  • Are you joking? There are hundreds of different ways to get water to them. If you move any physical resources into a place, chances are high you can move water in the same way.

    There’s also just building pipelines and extra desalination plants along the coasts or some more exotic methods of water extraction from the air or earth depending on how you want to do things.

    Point is that we have the technology to produce as much fresh water as we could probably ever have a desire for. Fresh water is an extensible resource. And as long as we have vehicles, we can get that water to the people who need it (though pipelines would be more efficient).

    So why does it seem like water is scarce if it isn’t? Because it requires infrastructure to produce, and—while building that infrastructure is very possible and not difficult at all for a developed country—few countries would pay to save the lives of the less fortunate unless it benefited them economically.

    In other words, the scarcity you mention only exists due to the greed and selfishness of those with economic resources. Overpopulation isn’t the issue, economic systems that value money/revenue over the lives of others (capitalism) are the issue.


    Edit: Also, the rivers running dry is mostly an issue of wasted water and the allocation of that water (as the commenter above mentioned), both of which would be drastically reduced if profit weren’t controlling their regulation more than preservation or societal benefit.




  • My first project in Rust was replicating this paper, because I wanted to learn Rust but needed a project to work on, since I hate learning from tutorials.

    Of course, I had intended to go the OOP route because that’s what I was used to, and this was my first time using Rust… that was a bit of a headache. But I did eventually get it working and could watch the weights change in real time. (It was super slow, of course, but still cool.)

    Anyway I’ve started making a much much faster version by using a queue to hold neurons and synapses that need updating instead of running through all of them every loop.

    It’s like lightning fast compared to the old version; I’m very proud of that. However, my code is an absolute mess and is now filled with

    Vec<Arc<Mutex<>>>
    

    And I can’t implement the inhibition in a lazy way like I did the first time, so that’s not fun…
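    A rough sketch of what that queue-based update might look like (entirely my own illustration with a toy neuron model, not the actual project code): only neurons that just received input get revisited, instead of scanning the whole network every loop.

```rust
use std::collections::VecDeque;

/// Toy neuron: a membrane potential plus outgoing synapses.
struct Neuron {
    potential: f32,
    threshold: f32,
    targets: Vec<(usize, f32)>, // (post-synaptic neuron index, weight)
}

/// One event-driven wave: pop the neurons that just received input,
/// fire any that are over threshold, and enqueue only their targets.
/// Untouched neurons are never visited, which is where the speedup comes from.
fn step(neurons: &mut [Neuron], queue: &mut VecDeque<usize>) -> usize {
    let mut fired = 0;
    for _ in 0..queue.len() {
        let i = queue.pop_front().unwrap();
        if neurons[i].potential >= neurons[i].threshold {
            fired += 1;
            neurons[i].potential = 0.0; // reset after spiking
            let targets = neurons[i].targets.clone();
            for (j, w) in targets {
                neurons[j].potential += w;
                queue.push_back(j); // only affected neurons get revisited
            }
        }
    }
    fired
}
```

    With plain indices into a slice like this you can often avoid the `Vec<Arc<Mutex<…>>>` nesting entirely, as long as the simulation stays single-threaded.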



  • I definitely don’t think the human brain could be modeled by a Turing machine.

    In 1994, Hava Siegelmann proved that her (1991) computational model, the analog recurrent neural network (ARNN), could perform hypercomputation by using infinite-precision real weights for the synapses.

    Since the human brain is largely composed of complex recurrent networks, it stands to reason that the same holds for it.

    The human brain is an analog computer and is—as far as I’m aware—an undecidable system. As in, you cannot algorithmically predict the behavior of the net with certainty; predictable behavior can arise, but it’s probabilistic, not certain.

    I also think I see what you’re saying with the thermometer being “conscious” of temperature, but that kind of collapses the definition of conscious to “influenced by,” which makes the word superfluous. Using conscious to refer to an ability that requires learning patterns across different sources of influence seems like a more useful definition.

    Also in the crazy unlikely event in which I actually end up creating a sentient thing, I’ll be hesitant to publish any work related to it.

    If my theory about how focus/attention works is correct, anything capable of focus must be capable of experiencing pain/irritation/agitation. I’m not fond of the idea of going “hey, here’s how to create something that feels pain” to the world, since a lot of people around me don’t even feel empathy for their own kind.




  • If you don’t think my framework is useful, could you provide a more useful alternative or explain exactly where it fails? If you can, it’d be a great help.

    As for “skill issue”: while I think generalized comparisons of brains are possible (in fact, we have some now), I think you might be underestimating the nature of chaotic systems, or holding the belief that consciousness will arise with equivalent qualia whenever it exists.

    There is nothing saying that our brains process qualia in exactly the same way (quite the opposite), and yet we can reach the same capabilities of thought even with large-scale neurodivergences. The blind can still experience the world without their sense of sight; those with synesthesia can experience and understand reality even if their brain processes multiple stimuli as the same qualia. It is very possible that there are multiple different paths to consciousness, each with unique neurological behaviors that only make sense within their original mind and may have no analog in another.

    The more I look into the functions of the brain—btw, I am by no means an expert and this is not my field—the more I realize many of our current models are limited by our desire to classify things discretely. The brain is an absolute mess. That is what makes it so hard to understand, but also what makes it so powerful.

    It may not be possible to isolate qualia at all. It may not be possible to isolate certain thoughts or memories from the other circumstances in which they are recalled. There might not be elemental/specific spike trains for a certain sense that are disjoint from other senses. And if this is the case, it is likely that different individuals have different couplings of qualia, making them impossible to compare directly.

    The idea that processing areas of the brain may be entangled in different ways across individuals (which, by the way, we do see in the brain; place-neuron remapping is a simple example) means that even among members of the same species, it likely won’t be possible to directly compare raw experiences, because the hardware required to process a specific experience for one individual might not exist in another individual’s mind.

    Discrete ideas like communicable knowledge/relationships should (imo) be possible to isolate well enough that you could theoretically implant them into any being capable of understanding abstract thought, but raw experiences (i.e. qualia) most likely will not have this property.


    Also, the project isn’t available online and is a mess, because it’s not my field and I have an irrational desire to build everything from scratch; I want to understand exactly how everything is implemented. And hey, it’s a personal hobby project, don’t judge lol

    So far I’ve mostly only replicated the research of others. I have tried some experiments with my own ideas, but spiking neural nets are difficult to simulate on normal hardware, and I need a significant number of neurons, so currently I’m working on designing a more efficient implementation than the ones I’ve previously written.
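    To illustrate why the naive approach is expensive (a toy leaky integrate-and-fire update of my own, not any particular implementation): every neuron is decayed and threshold-checked every timestep, so the cost is O(N) per tick even when the network is almost silent.

```rust
/// Naive leaky integrate-and-fire step: every neuron decays and is
/// checked on every tick, regardless of activity -- the per-timestep
/// O(N) cost that makes dense SNN simulation slow on normal hardware.
fn lif_step(potentials: &mut [f32], input: &[f32], leak: f32, threshold: f32) -> Vec<usize> {
    let mut spikes = Vec::new();
    for (i, v) in potentials.iter_mut().enumerate() {
        *v = *v * leak + input[i]; // exponential decay plus injected current
        if *v >= threshold {
            spikes.push(i); // record which neurons fired this tick
            *v = 0.0;       // reset after a spike
        }
    }
    spikes
}
```
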

    After that, my plan is to experiment with my own designs for a spiking artificial hippocampus implementation. If my ideas are sound I should be able to use similar systems to implement both short and long term memory storage.

    If that succeeds I’ll be moving onto the main event of focus and attention which I also have some ideas for, but it really requires the other systems to be functional.

    I probably won’t get that far but hey it’s at least interesting to think about and it’s honestly fun to watch a neural net learn patterns in real time even if it’s kinda slow.



  • I think you’re getting hung up on the words rather than the content. While our definitions of terms may be rather vague, the properties I described are not cyclically defined.

    To be aware of the difference between self and not-self means to be able to sense stimuli originating from the self, sense stimuli not from the self, and learn relationships between them.

    As long as aspects of the self (like current and past thoughts) can be sensed (encoded into a representation the mind can work with directly; in our case, neural spike chains), there are senses which compare those sensations with other or past sensations, and the mind can learn patterns in those encodings (like spiking neural nets can), then it should be possible for conscious awareness to arise. (If you’re curious about the kind of learning that needs to happen, you should look into Tolman-Eichenbaum machines, though non-spiking ones aren’t really capable of self-learning.)

    I hope that’s a clear enough “empirical” explanation for you.

    As for qualia, you are entirely wrong. What you describe would not prove that my raw experience of green is the same as your green, only that we both have qualia which can arise from the color green. You can say that it’s not pragmatic to think about that which cannot be known, and I’ll agree that qualia must be represented in a physical way and thus be recreatable in that person’s brain, but the complexity of human brains actually precludes the ability to define what is the qualia and what are other thoughts. The differences between individuals likely preclude the ability to say “oh, when these neurons are active it means this,” because other people have different neural structures. Similar? Absolutely. Similar enough that for any experience you could find exactly the same neurons firing the same way as in someone else? Absolutely not.

    Your last statements make it seem like you don’t understand the difference between learning and knowledge. LLMs don’t learn when you use them. Neither do most modern chess models. They don’t learn at all unless they are being trained by an outside process that gives them an input, expects an output, and then computes the weight changes needed to get closer to the answer via gradient descent.

    A typical ANN trained this way does not learn from new experiences; furthermore, it is not capable of referencing its own thoughts, because it doesn’t have any.
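    To make the training/inference distinction concrete, here’s a toy one-weight gradient-descent step (illustrative names, not any real framework’s API): the weight only changes inside train_step; plain inference leaves the model untouched, which is how a deployed LLM or chess net runs.

```rust
/// One gradient-descent step on a single weight for the squared-error
/// loss L = (w * x - target)^2. This is "learning": the weight changes.
fn train_step(w: f32, x: f32, target: f32, lr: f32) -> f32 {
    let grad = 2.0 * (w * x - target) * x; // dL/dw
    w - lr * grad
}

/// Inference: the model is applied but nothing about it changes.
fn infer(w: f32, x: f32) -> f32 {
    w * x
}
```
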

    The self is that which acts. Did you know LLMs aren’t capable of being aware they took any action? Are you aware chess engines can’t do that either? There is no comparison mechanism between what was, what is, and what made that change. They cannot be self-aware, the same way a program hardcoded to kill every process other than itself is unaware: they literally lack any direct sense of their own actions. Once again, you not only need to be able to sense that information; the program then needs a sense which compares that sensation to other sensations and learns the differences, changing the way it responds to those stimuli. You need learning.

    I don’t reject the idea of machines being conscious; in fact, I’m literally trying to make a conscious machine just to see if I can (which, yeah, to most people sounds insane). But I don’t think we agree on much else, because learning is absolutely essential for anything to be capable of a conscious action.


  • Anything dealing with perception is going to be somewhat circular and vague. Qualia are the elements of perception and by their nature it seems they are incommunicable by any means.

    Awareness in my mind deals with the lowest level of abstract thinking. Can you recognize this thing and both compare and contrast it with other things, learning about its relation to other things on a basic level?

    You could hardcode a computer to recognize its own process. But it’s not comparing itself to other processes, experiencing similarities and dissimilarities. Furthermore unless it has some way to change at least the other processes that are not itself, it can’t really learn its own features/abilities.

    A cat can tell its paws are its own, likely in part because it can move them. If you gave a cat shoes, do you think the cat would think the shoes are part of itself? No, and yet the cat can learn that in certain ways it can act as though the shoes are part of itself, the same way we can recognize that tools are not us but are within our control.

    We notice that there is a self that is unlike our environment in that it does not control the environment directly, and then there are the actions of the self that can influence or be influenced directly by the environment. And that there are things which we do not control at all directly.

    That is the delineation I’m talking about. It’s more the delineation of control than just “this is me and that isn’t” because the term “self” is arbitrary.

    We as social beings correlate self with identity, with the way we think we act compared to others, but to be conscious of one’s own existence only requires that you can sense your own actions and learn to delineate between this thing that appears within your control and those things that are not. Your definition of self depends on where you’ve learned to think the lines are.

    If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.

    Furthermore, on the note of aliens, I think a better question to ask is “what do you think ‘self’ is?” Because that will determine your answer. If you think a system must be consciously aware of all the processes that make it up, I doubt you’ll ever find a life form like that. The reason those systems are subconscious is because that’s the most efficient way to be. Furthermore, those processes are mostly useful only to the self internally, and not so much the rest of reality.

    To be aware of self is to be aware of how the self relates to that which is not part of it. Knowing more about your own processes could help with this if you experienced those same processes outside of the self (like noticing how other members of your society behave similarly to you), but fundamentally, you’re not necessarily creating more accurate self-awareness just by having more senses of your automatic bodily processes.

    It is equally important, if not more so, to experience more that is not the self rather than to experience more of what would be described as self, because it’s what’s outside that you use to measure and understand what’s inside.



  • Personally, I’m more a fan of the binary/discrete idea. I tend to go with the following definitions:

    • Animate: capable of responding to stimuli
    • Sentient: capable of recognizing experiences and debating the next best action to take
    • Conscious: aware of the delineation between self and not self
    • Sapient: capable of using abstract thinking and logic to solve problems without relying solely on memory or hardcoded actions (being able to apply knowledge abstractly to different but related problems)

    If you could prove that plants have the ability to choose to scream rather than it being a reflexive response, then they would be sentient. Like a tree “screaming” only when other trees are around to hear.

    If I cut myself, my body will move away reflexively, and it will scab over the wound. My immune system might “remember” some of the bacteria or viruses that get in and respond accordingly. But I don’t experience any of that as an action under my control; I’m not aware of all the work my body does in the background. I’m not sentient because my body can live on its own and respond to stimuli; I’m sentient because I am aware that stimuli exist and can choose how to react to some of them.

    If you could prove that the tree as a whole or that part of a centralized control system in the tree could recognize the difference between itself and another plant or some mycorrhiza, and choose to respond to those encounters, then it would be conscious. But it seems more likely that the sharing of nutrients with others, the networking of the forest is not controlled by the tree but by the natural reflexive responses built into its genome.

    Also, if something is conscious, then it will exhibit individuality. You should be able to identify changes in behavior due to the self-referential systems required for the recognition of self. Plants and fungi grown in different circumstances should respond differently to the same circumstances.

    If you taught a conscious fungus to play chess and then put it in a typical environment, you would expect to see it respond very differently than another member of its species who was not cursed with the knowledge of chess.

    If a plant is conscious, you should be able to teach it to collaborate in ways that it normally would not, and again, after placing it in a natural environment, you should see it attempt those collaborations while its untrained peers would not.

    Damn now I want to do some biology experiments…


  • This isn’t my field but like it shouldn’t be horrible to drink a little sip of this right? It’s just salts and amino acids and sugar, so I’d expect worst case scenario you majorly throw off your electrolyte balance and possibly give your kidneys and liver a lot of amino acids to get rid of. But that’d probably require drinking a significant amount yes?

    Anyone with more bio knowledge want to correct or confirm this hypothesis?


  • I can tell where a laser is pointed on me without looking. Like, if you blindfolded me, got a laser pen, and shined it on my arm, I could point to where it feels like it is with pretty good accuracy. It’s easier to detect motion than precise placement, and sensation-wise it’s not touch or heat like you’d expect; it’s more like raw proprioception.

    Also, it felt the same regardless of the color of laser we used, which seems odd, since you’d think higher-frequency light would be easier to detect.

    Tbf I haven’t done the experiment since I did it with my siblings when I was pretty young. Not sure if I can still do it, but my siblings and cousins couldn’t do it even back then.


  • If I don’t have a choice to leave, or feel irrationally compelled to actually try to debate them: 10.

    It’s not a choice, it’s a fucking curse. I don’t have to think; my mind will eventually start predicting what they say, and eventually I want to gut myself because I can think of a hundred things to say and know that none of them will change their fucking minds.

    Worse, mind reading is a fallacy. Sure, predictions can be pretty accurate, but there’s no way to know for sure if those arguments will play out exactly as I think. But the real curse is that just because all the things I can think to say won’t change their mind, that doesn’t mean there isn’t something that will. I might just be too dumb to think of a good argument. So I rot as the conversation happens to me, trying to think of anything that could make a difference.

    Oh, also: yeah, when they say horrible shit and your mind decides to go “here, this is how their victims feel,” that’s pretty fucking horrible too.

    But if I get up or get upset or react strongly it’ll likely ruin any chance of me changing this person’s mind. Not that that chance existed in the first place.

    Anyway, it isn’t difficult to see things from other people’s perspectives, but let me tell you, I much prefer talking to psychopaths over delusional idiots.

    I had a roommate who was a full-blown psychopath (and a business major to boot lol) who, once he found out I could see things from his perspective, would debate politics with me in a completely candid manner. I once brought up “so you’d support slavery then?” and he deadass said “if it benefitted me, then yes.”

    Fucked up, but the thing is, he’d listen to my arguments when they were logical. And he wasn’t sadistic; slightly narcissistic, but he didn’t derive pleasure from others’ pain. Anyway, the point is that when you talk to someone who is sane, it doesn’t hurt even if they feel no empathy, because you can start to understand why they think the way they do, it always feels like you can change their mind, and they don’t feel an active desire to hurt people.

    Nazis typically aren’t that. Nazis are typically idiots who can’t face the real sources of pain in their life, so they direct their hatred of their lives and themselves to others. Same with manosphere incels, same with bigots of almost every kind. They want to hurt others, they want to break things, to be mad, because they’re hurt. But you can’t get them to see what they don’t want to see in the first place.

    So you just feel bad for them, feel bad for others harmed by people like them, and hate yourself for feeling hatred for them because you get why they are doing it.

    It isn’t fun, and it’s not even fucking useful, because you being emotionally stressed out isn’t helping anyone, and you aren’t changing their minds.

    It’s a curse to feel irrationally compelled to talk to those who won’t listen because “maybe this time it’ll work.” It doesn’t.


    Edit: Okay, clearly I’m not in a very good place mentally right now, but I’m leaving this here. If anyone can relate, here’s some external reinforcement, since you’ve likely said it to yourself and it doesn’t work: you do not need to feel compelled to feel bad for others constantly, especially if it isn’t galvanizing you to take solid action to help. If your suffering stops you from functioning well enough to help anyone, then feeling that empathy is actually a bad thing. So let yourself relax.


  • The real fascinating thing is that Impossible Colors exist, which means it’s impossible to actually represent all colors, or at least to represent them precisely.

    Imo it seems colors are relative to how our brain and eyes are adapting to their current field of view, meaning the color you experience is not fully dependent on the light an object actually reflects, nor on the activation of your rods and cones, but on the way your brain processes those signals together. Ergo, you can’t actually represent all colors precisely unless you can control every environmental variable: the color of every object in someone’s field of view, where their eyes were looking previously, etc.
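    One standard way to model that relativity is von Kries-style chromatic adaptation (sketched very roughly here, with made-up numbers): each cone channel is rescaled by the eye’s response to the current adapting “white,” so the same physical stimulus maps to different adapted values depending on the surround.

```rust
/// Von Kries-style adaptation, heavily simplified: scale each cone
/// channel (L, M, S) by the inverse of the response to the adapting
/// white. The same stimulus yields different adapted values under
/// different surrounds, which is the relativity described above.
fn adapt(stimulus: [f32; 3], adapting_white: [f32; 3]) -> [f32; 3] {
    [
        stimulus[0] / adapting_white[0],
        stimulus[1] / adapting_white[1],
        stimulus[2] / adapting_white[2],
    ]
}
```

    Under a neutral white the stimulus passes through unchanged, while under a reddish adapting field the same stimulus comes out shifted, so no single fixed encoding can capture “the” color of the stimulus.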