• 9 Posts
  • 1.24K Comments
Joined 2 years ago
Cake day: August 15th, 2023

  • These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns

    lulzwut? LLMs aren’t internalizing jack shit. If they exhibit a bias, it’s because of how they were trained. A quick theory would be that the interwebs are packed to the brim with stories of “all in” behaviors intermixed with real strategy, fiction or otherwise. I suspect there are more forum stories of people winning by doing stupid shit than there are of people losing because of it.

    They exhibit human bias because they were trained on human data. If I told the LLM to only make strict probability-based decisions favoring safety (and it didn’t “forget” context, and it ignored any kind of “reasoning”), the odds might be in its favor. A rough sketch of what I mean is below.

    Sorry, I will not read the study because of that one sentence in its summary.
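
    For illustration only (the function name and the numbers here are mine, not the study’s), a strict probability-based rule favoring safety is basically just expected value plus a safety margin:

    ```python
    # A minimal sketch (my own, not from the study) of a strict
    # probability-based decision rule: compare expected value against a
    # safe baseline and require a margin before taking the risky option.

    def choose_bet(p_win: float, payout: float, stake: float, safe_value: float = 0.0) -> str:
        """Return 'bet' only if its expected value clearly beats the safe option."""
        expected_value = p_win * payout - (1 - p_win) * stake
        margin = 0.1 * stake  # favor safety: demand more than a break-even edge
        return "bet" if expected_value > safe_value + margin else "pass"

    # A long-shot "all in": 10% chance to win 500, otherwise lose 100.
    print(choose_bet(p_win=0.10, payout=500, stake=100))  # -> "pass" (EV = -40)
    # A genuinely favorable bet: 60% chance to win 100, otherwise lose 50.
    print(choose_bet(p_win=0.60, payout=100, stake=50))   # -> "bet"  (EV = +40)
    ```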


  • I’ll never drink again, but there are some days still that I wish my mind could be as numb as it was while I was a raging alcoholic. That thought is usually replaced with remembering how shitty I always felt and how I didn’t give a fuck about anything. Life was a blur.

    A mostly clear mind and a recovering body are very good things. Daily stress is easily managed with regular exercise, and chronic anxiety and depression are only a tiny fraction of what they once were. It’s a good life now.

    I believe the lifestyle changes not only lengthened my life, but also stretched out my perceived time.


  • A Russian RPG detonates on impact or on a timer. The correct standoff distance from the hull is built into the warhead’s design.

    You are presenting data that isn’t relevant to the topic, and it seems you didn’t read the full paragraph.

    Are you done being a jackass now, or do you not understand that what makes a turtle tank a turtle tank is the extra armor placed farther away from the main hull?


  • Maybe RPGs, if the plate is far enough away from the tank’s main hull. The blast from an RPG’s shaped charge can dissipate fairly quickly once it has gone through one layer of armor, so it needs to detonate against the actual tank hull for the jet of molten copper to have a higher chance of giving a big hug to one of the shells in storage.

    But an AT missile? Highly unlikely it will be stopped. It will probably also have a shaped charge, but a much more powerful one than an RPG’s, capable of penetrating multiple layers of armor. A top-down trajectory is also more likely, where actual tank armor is weakest. (There are multiple types of anti-tank missiles, and some can be set for different attack trajectories.)

    Turtle tanks may or may not still have an active main gun. If they don’t, they shouldn’t be carrying any live shells, so the crew survival rate should be a touch higher, depending on where the missile strikes.



  • I would tweak that a hair and tell people just to make an account somewhere and observe for a bit. Lemmy can have some very distinct groups that reside on very specific instances. Or not. It’s a “choose your own adventure” kind of scenario, IMHO.

    It took six months or so for me to settle into .ca after bouncing around a bit. It’s not really a pain to switch instances, but I personally like having my chat history in one spot, and I like the concept of a ‘home instance’.

    Depending on your client and your settings, your feed can lean toward posts from your home instance, so that is something to note. Not saying that’s bad or good, it just is what it is.



  • When I use it, I use it to create single functions that have known inputs and outputs (something like the sketch at the end of this comment).

    If absolutely needed, I use it to refactor old shitty scripts that need to look better so someone else can use them.

    I always do a line-by-line analysis of what the AI is suggesting.

    Any time I have leveraged AI to build out a full script with all desired functions at once, I end up deleting most of the generated code. Context and “reasoning” can actually ruin the result I am trying to achieve. (Some models just love to add command-line switch handling for no reason. That can fundamentally change how an app is structured and is not always desired.)
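
    As a concrete example of that first point (the function here is entirely hypothetical, not something I actually shipped), the kind of task I hand to it is one small, self-contained function where I already know exactly what goes in and what should come out, so a line-by-line review is easy:

    ```python
    # Hypothetical example of a "known inputs and outputs" task.

    def parse_size(text: str) -> int:
        """Convert a human-readable size like '10K' or '1.5M' to bytes."""
        units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
        text = text.strip().upper()
        if text and text[-1] in units:
            return int(float(text[:-1]) * units[text[-1]])
        return int(text)

    # Known inputs and outputs make the review trivial to verify:
    assert parse_size("512") == 512
    assert parse_size("10K") == 10240
    assert parse_size("1.5M") == 1572864
    ```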