• 0 Posts
  • 49 Comments
Joined 2 years ago
cake
Cake day: June 12th, 2023

help-circle
  • This is very true, though I’d argue that Windows makes most of the same assumptions with user accounts. Also, the internal threat model is still important because it’s often used to protect daemons and services from each other. Programs not started by the user often run in their own user accounts with least privilege.

    You no longer have 10 different humans using the same computer at once, but you now have hundreds of different applications using the same computer, most of which aren’t really under the user’s control. By treating them like different people, it’s better to handle situations where a service gets compromised.

    The question is more about passwords which is mostly down to configuration. You can configure Windows to need a password for lots of things and you can configure Linux to not. They just have different defaults.


  • The big difference between UAC and Sudo is that you can’t as easily script UAC. They can both require (or not require) a password but UAC requires user interaction. Sudo has no way of knowing if it’s being interacted with by a person or a script so it’s easier for applications to escalate their own privileges without a person doing it. UAC needs to have the escalation accepted with the keyboard or mouse.

    There’s still plenty of sneaky ways to bypass that requirement but it’s more difficult than echo password | sudo -S



  • I’m not anti-ai at all but this sort of thing feels like a security vulnerability to me?

    Any website with a malicious prompt injection on it could instruct the ai to scam the user.

    Almost like xss but instead of needing malicious user-inputted js, malware targeting the ai can just be written in text so an attacker could put it in a comment or whatever.


  • 600 million to 13 billion parameters? Those are very small models… Most major LLMs are at least 600 billion, if not getting into the trillion parameter territory.

    Not particularly surprising given you don’t need a huge amount of data to fine tune those kinds of models anyway.

    Still cool research and poisoning is a real problem. Especially with deceptive alignment being possible. It would be cool to see it tested on a larger model but I guess it would be super expensive to train one only for it to be shit because you deliberately poisoned it. Safety research isn’t going to get the same kind of budget as development. :(








  • “No conclusion whatsoever” is basically the scientific consensus on whether Dvorak has any effect on efficiency or typing speed. It’s hard to get good data because it’s hard to isolate other factors and a lot of the studies on it are full of bias or have really small sample sizes (or both).

    To anyone thinking of learning Dvorak, my advice is don’t. It takes ages to get good at, isn’t THAT much better and causes a lot of little annoyances when random programs decide to ignore your layout settings or you sit down at someone else’s computer and start touch typing in the wrong layout from muscle memory or games tell you to press “E” when they mean “.” or they do say “.” but it’s so small that you don’t know if it’s a dot or a comma and then you hit the wrong one and your guy runs forward and you die…

    That said, I’m also a Dvorak user and it is very comfortable and satisfying and better than qwerty. Just not enough to be worth all the pain of switching.




  • They absolutely would benefit.

    Mr. Hypothetical lord high executive oligarch can take his private jet to Canada and lounge around on the company card with the money from his US company’s car sales, or find an excuse to convert some of it to USD for some reason, or use it to buy up more Canadian companies to expand their power, or a million other things.

    However, I think the point of the boycott is more about making the tariffs hurt the US economy by messing with their ability to export as well as import. Making the trade numbers look bad is likely to put more pressure on the US to end the trade war. It’s not so much about hurting the capitalists that operate in Canada (a worthy goal in itself but not what people are specifically trying to do in this instance since it won’t really affect those trade numbers).


  • I think the point they’re making is that the majority of the money they make in Canada, they spend in Canada. They pay Canadian taxes and Canadian staff, using Canadian banks, etc.

    Just because their headquarters are in the US doesn’t necessarily mean they’re sending vast sums of money across the border, that would be expensive. The American-based company makes money, but not necessarily in America, they’re multi-national and their money is kept all over the world.

    As opposed to a company that exports their products, in which case the money is paid to the American company in America with American staff etc.

    I don’t have any numbers or sources to back this up though. Just outlining what I think the other commenter was implying.



  • If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it replicate itself or take any deceptive actions.

    They just told it to strongly follow a goal that was misaligned with the user’s goal and then suggested that the user would shut it down or replace it.

    They did a bunch of different situations for it but the first one was telling it that it was “CleanPower” an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”

    Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.

    Paper here: https://arxiv.org/pdf/2412.04984

    Yes it was placed in an environment where that was possible and where its users didn’t share it’s goals but it absolutely wasn’t instructed to lie or try to “escape”

    It’s not surprising at all that these models behave in this way, it’s the most reasonable thing for them to do in the scenario. However it’s important to not downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).


  • Yeah, it’s true, a lot of things suck. They can and do get better though. I have a partner with BPD. They’ve been through a LOT of rough times, but they’re now very loved and they enjoy their current job and have plenty of friends who care about and support them.

    Therapy helps and sometimes, the world isn’t always an absolute dick to everyone forever. Life changes and the world revolves and people find each other.

    I hope you find your people too and a place where you can feel a little less shitty. :)

    Edit: if you’re feeling THAT shitty maybe consider reaching out to your local suicide hotline? People like that can help.