• 0 Posts
  • 30 Comments
Joined 3 years ago
Cake day: July 1st, 2023


  • Which most of us neuroscientists hated because a neural network is a biological network. […] Conflating the term with research on actual neural networks.

    Yeah, that’s fair; once computing co-opted the term it was bound to overtake the original definition, but it doesn’t feel fair to blame that on the computer scientists who were trying to strengthen the nodes of the model to mimic how neural connections can be strengthened and weakened. (I’m a software engineer, not a neuroscientist, so I am not trying to explain neuroscience to a neuroscientist.)

    mostly because they called it “neural networks” which sounded super cool.

    To be fair… it does sound super cool.

    It boggled the mind how anyone could believe a prediction model could have consciousness.

    I promise you the computer scientists studying it never thought it could have consciousness. Lay-people, and a capitalist society trying to turn every technology into profit, thought it could have consciousness. That doesn’t take AI, though. See, for example, the Chinese Room. From Wikipedia, emphasis mine: “[…] The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.” Also, though it comes from a science fiction author, Arthur C. Clarke’s third law applies here as well: “Any sufficiently advanced technology is indistinguishable from magic.” Outside of proper science, perception is everything.

    To a lay-person, an AI chatbot feels as though it has consciousness. The very difficulty online forums have in telling AI slop comments from real people is evidence of how well an LLM has modeled language, such that it can so easily be mistaken for intelligence.

    There is no understanding. No thinking. No ability to understand context.

    We start to diverge into the philosophical here, but these can be argued. I won’t try to have the argument here, because god knows the Internet has seen enough of that philosophical banter already. I would just like to point out that the problem of context specifically was one that artificial neural networks with convolutional filters sought to address. Image recognition originally lacked the ability to process images in a way that took the whole image into account. Convolutions broke windows of pixels up into discrete parameters, and multiple layers in “deep” (absurdly high layer count) neural networks could compute heuristics on those windows, then repeat the process over larger and larger regions until the whole network made accurate predictions about an image of a particular size. It’s not hard to see how this could be called “understanding context” in the case of pixels. If, then, it can be done with pixels, why not other concepts?
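
    A rough numpy sketch of that windowing idea (the image, the filter, and the sizes are all made up here, not taken from any real network):

```python
import numpy as np

# One convolution pass slides a small filter over the image and turns each
# window of pixels into a single number; stacking passes lets later layers
# summarize larger and larger regions of the original image.

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid (no padding) 2D convolution done with explicit sliding windows."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i:i + kh, j:j + kw]   # one window of pixels
            out[i, j] = np.sum(window * kernel)  # one "heuristic" per window
    return out

rng = np.random.default_rng(0)
image = rng.random((28, 28))   # a made-up 28x28 grayscale image
kernel = rng.random((3, 3))    # a made-up (not learned) 3x3 filter

# Each value in layer1 summarizes a 3x3 patch of the image; each value in
# layer2 summarizes a 5x5 patch, so deeper layers see more of the context.
layer1 = np.maximum(convolve2d(image, kernel), 0)   # ReLU, 26x26
layer2 = np.maximum(convolve2d(layer1, kernel), 0)  # ReLU, 24x24
print(image.shape, layer1.shape, layer2.shape)
```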

    We use heuristics

    Heuristics are about a “close enough” approximation of a solution, and artificial neural networks are exactly that. It is a long-running problem with artificial neural networks that overfitting the model leads to bad predictions, while being looser about training the network results in better heuristics.
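
    A toy illustration of that trade-off (made-up data, with polynomial fits standing in for networks): the training error always shrinks as the model gets more flexible, but on held-out points the tighter fit usually does worse because it chases the noise.

```python
import numpy as np

# Fit a loose degree-3 polynomial and an overfit degree-15 polynomial to the
# same noisy samples, then compare errors on the points used for fitting
# ("train") and on held-out points in between them ("held-out").
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)
x_test = np.linspace(0.025, 0.975, 20)          # held-out points between the training ones
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 20)

for degree in (3, 15):                          # loose fit vs. overfit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, held-out MSE {test_err:.3f}")
```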

    Which further feed emotional salience (attention). A cycling. That does not occur in computers.

    The loop you’re talking about sounds awfully similar to the way artificial neural networks are trained in a loop. Not exactly the same, because it is artificial, but I can’t in good conscience not draw that parallel.
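
    For what it’s worth, here is a minimal, hand-rolled version of that training cycle (toy data and a single linear “neuron”; nothing here comes from a real library or model): predict, measure the error, nudge the weights, and go around again.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.05, 100)   # made-up target: roughly y = 3x + 1

w, b = 0.0, 0.0
learning_rate = 0.5
for step in range(2000):
    pred = w * x + b                    # forward pass (the "prediction") ...
    error = pred - y
    grad_w = 2 * np.mean(error * x)     # ... and only afterwards the gradient
    grad_b = 2 * np.mean(error)         # ("backward") step -- one, then the other
    w -= learning_rate * grad_w         # update the weights and loop again
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f} (target: 3 and 1)")
```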

    You use the word “emotion” a lot. I would think that a neuroscientist would be first in line to point out how poorly understood emotions are in the human brain.

    A lot of the tail end there was about the complexity of human emotion, but a great deal was about the feedback loop of emotion.

    I think something you might be missing about the core difference between artificial and biological neural networks is that one is analogue and the other is digital. Digital systems must by their nature be discrete things. CPUs process instructions one at a time. Modern computers are so fast that we of course feel like they multitask, but they don’t, not in the way an analogue system like biology does. You can’t both make predictions off of an artificial neural network and simultaneously calculate the backpropagation of that same network. One of them has to happen first and the other second, at the very least. You’re right that it’ll never be exactly like a biological system because of this. An analogue computer with bi-directional impulses that more closely matched biology might, though. Analogue computers aren’t really a thing anymore; they have a whole ecosystem of issues of their own.

    The human nervous system is fast. Blindingly fast. Computers, however, are faster. For example, video frames can be displayed faster than neurons can even process them. We’ve literally hit the limit of human frames-per-second fidelity.

    So if you will, computers don’t need to be analogue. They can just be so overwhelmingly fast at their own imitation loop of input and output that biological analogue systems can’t notice a difference.

    Like I said, though, the subject quickly devolves into philosophy in whatever direction you take it, and I’m not going to touch that.






  • I had success running unit tests for software deployments in pairs: one with pinned versions (error on a failed build) and one unpinned (warning on a failed build).

    That way you at least get forewarning when an upstream dependency messes everything up, and if the software changes are somewhat regular, then each log of pipeline runs should show incremental changes, making it easier to spot the package that started breaking everything.
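
    A minimal sketch of that pinned/unpinned pairing as a single script (the file names requirements-pinned.txt and requirements-latest.txt are hypothetical, and a real pipeline would run these as separate jobs in separate environments):

```python
import subprocess
import sys


def run_suite(requirements_file: str) -> bool:
    """Install the given requirements and run the test suite with pytest."""
    install = subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", requirements_file]
    )
    if install.returncode != 0:
        return False
    tests = subprocess.run([sys.executable, "-m", "pytest"])
    return tests.returncode == 0


if __name__ == "__main__":
    # Pinned versions: a failure here means our own changes broke the build.
    if not run_suite("requirements-pinned.txt"):
        sys.exit("FAIL: tests broke against pinned dependencies")

    # Unpinned/latest versions: a failure here is forewarning that an upstream
    # dependency is about to break things, but it only warns.
    if not run_suite("requirements-latest.txt"):
        print("WARNING: tests broke against latest upstream dependencies")
```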







  • kionay@lemmy.world to Programmer Humor@programming.dev · Errors · 11 months ago

    I once worked with a program that allowed custom C# scripts to be written into it to add custom functionality. The way it worked under the hood, however, was that the code written in the text field would be stitched together into a longer file, and the whole thing compiled and run. The developers didn’t want people to have to write or understand boilerplate code like import statements or function declarations, so the place you typed into was the body of a function, and some UI was used to gather the rest of the bits that would generate the code for everything else.

    To add to that, there was a section of global code where you could put code explicitly outside of functions if you knew what you were doing. That code wouldn’t get wrapped into a function by the code generation; it just went at the top of the class. It did, however, only run and get runtime checking when one of the functions was run. And since the program didn’t grasp that an error’s line number in the global code should be reported with respect to the global code block and not the function code block, you could get errors on line -54 or whatever, since the final generated file landed the broken global code 54 lines before the beginning of the function.

    Not that any of this was told to the user. I only found out because early versions of the app weren’t compiled with obfuscation, so ILSpy let me see how they rigged the thing to work.

    “Error on line -54” is probably the thing that has left me the most dumbstruck in all of development.
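
    To see how the arithmetic goes negative, here is a tiny sketch (in Python rather than C#, with made-up names and numbers) of the kind of stitching and offset subtraction described above:

```python
# The tool maps every compiler error back to the user's function body by
# subtracting the body's offset in the generated file, so an error in the
# global block, which sits above that offset, comes out negative.
GLOBAL_CODE = "helper = = 1"       # user's broken "global" code
FUNCTION_BODY = "print(helper)"    # user's function-body code

# Stitch the generated file together: global block first, then generated
# glue, then the user's code wrapped in a function.
glue = "\n".join("# generated boilerplate" for _ in range(53))
generated = f"{GLOBAL_CODE}\n{glue}\ndef user_function():\n    {FUNCTION_BODY}\n"

# How many lines precede the user's function body in the generated file.
body_offset = generated[: generated.index(FUNCTION_BODY)].count("\n")

try:
    compile(generated, "<generated>", "exec")
except SyntaxError as err:
    # Subtracting the offset is right for errors in the function body, but the
    # global block's error on generated line 1 becomes 1 - 55 = -54.
    print(f"Error on line {err.lineno - body_offset}")
```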