

Taking from Bezos to give to the poor? She’s not a billionaire, she’s the female Robin Hood.


I have never wanted to see an OSHA safety violation leading to mass death as much as I have when looking at that picture.
Please drop all billionaires from the stratosphere without a parachute, thx.


100% completion is not required, but you’ll share whatever progress you have made.
Granted, that still overestimates my ability to make actual progress. “Hooray, it’s now twice as complicated and actually worse!”


The wizard desperately shouts, “PAY NO ATTENTION TO THE MAN BEHIND THE CURTAIN!”


I’m not a super-expert, but I suspect it’s probably still holding open the stdin and stdout file descriptors of the parent process. Try using &> /dev/null to throw them away and see if that helps. You could also try adding nohup in front of the npx, which does some weird re-parenting jazz to detach the child process (npx) from the parent process so that it doesn’t get auto-closed when the parent exits. That’s kind of the opposite of your problem, but it might also help in this case.
Another possible option is using systemd-run --user <command>, which effectively makes it systemd’s problem.
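If it helps, here’s roughly what I mean, as a sketch (my-command is just a stand-in for whatever you’re actually running through npx):

```bash
# Redirect stdout/stderr to /dev/null (and detach stdin too, for good measure)
npx my-command < /dev/null &> /dev/null &

# nohup makes the child ignore the SIGHUP it gets when the parent goes away
nohup npx my-command &> /dev/null &

# Or hand the process to systemd as a transient user-level unit
systemd-run --user npx my-command
```

The systemd-run variant is nice because you can then inspect and stop it with the usual systemctl --user tooling.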


First he came for the Gulf of Mexico, and I did not cry out, because I was not Mexico. Then he came for the Department of Defense, and I did not cry out, because I did not need defense. Then he came for the football, and … HOLY SHIT that’s a lot of angry football fans!


Most of the countries in the western world have spent so long without any real risk of war that we have no idea how to react to the possibility of actually being in one. We are so incredibly unprepared, in such profound ways. Imagine being in a war and not having anti-air defenses around your most important strategic nuclear sites, having to rely on troops shooting at incoming aircraft with what I suspect were simply their service weapons, and almost certainly not dedicated anti-drone weapons. Yes, drones are sort of new; that’s not really an excuse. New things will happen during a war. You have to be able to react quickly to defend your critical assets at a moment’s notice. The fact that we’re still not doing that properly is a perfect demonstration of how far behind the curve we really are.
I hope this changes soon with the sprawling investments being directed towards defense budgets, but I remain unconvinced: will it just result in more hyper-capable, hyper-expensive techno-wonderweapons? It’s the cheap, good-enough, high-supply things that are currently threatening us, and both history and the present seem to tell us it’s usually the cheap, good-enough, high-supply things that both win wars and enable effective defense. Spending money seems like it would imply seriousness, but I don’t think we’re actually taking this seriously enough yet. When you really get serious about war and defense, you need to be asking the real questions about what it’s going to take to win, not just throwing money at the problem.
Maybe I’m wrong, maybe they’re just sandbagging and waiting for the right moment to reveal our true defensive preparations, but I know a lot of people in various western militaries, and I honestly don’t think so at all, and neither do they. If we are more prepared than we look, it’s a pretty goddamn well-kept secret.


we’re so cooked. it is really interesting to be living through the end of all civilization. I didn’t think it would be like it is, but it do.


I thought it was supposed to be closed at one point; it wouldn’t surprise me to learn that never happened, but I’m still not fully clear on its status. The article talks about it in the present tense, yet even the comments here are talking about it in the past tense. Are other people also thinking it’s closed? Is it actually still open? What’s the deal?


Ah yes, the famously reliable, safe, and effective Russian submarines, workhorse of the Russian navy. What should they buy next? Some guided missile cruisers? How about an aircraft carrier?


This looks less intimidating than Authentik. Any guides on getting it set up with common self-hosted stuff?


I find it hard to accept Clippy as being too friendly and nonthreatening to adequately demonstrate my unfathomable rage towards technology companies.


Sounds great, I’ll plan on waiting a year or so until I’m convinced they’ve got the bugs worked out, and then buy it just before they announce a new and upgraded version, as is tradition.


Looks really nice and seems like it should be a great foundation for future development. Personally I can’t lose Nextcloud until there are sufficiently featureful and reliable clients for Linux, Windows, and Android that synchronize a local copy and help manage the inevitable file conflicts (Nextcloud Desktop only barely qualifies at this, but it does technically qualify, and that represents the minimum viable product for me). I’m not sure a WebDAV client alone is enough to satisfy these criteria, but I’m not going to pretend I’m actually familiar with any WebDAV clients, so maybe they already exist.


You’re on the right track. Like everything else in self-hosting you will learn and develop new strategies and scale things up to an appropriate level as you go and as your homelab grows. I think the key is to start with something immediately achievable, and iterate fast, aiming for continuous improvement.
My first idea was much like yours: very traditional documentation, with words, in a document. I quickly found the same thing you did; it’s half-baked and insufficient. There’s simply no way to make it match the actual state of the system perfectly, and English alone is inadequate to explain what I did, because it ends up being too vague to be useful in a technical sense.
My next realization was that in most cases what I really wanted was to know every single command I had ever run, basically without exception. So I started documenting that instead of focusing on the wording and the explanations. Then I started to feel like I wasn’t capturing every command reliably, because I would get distracted trying to figure out a problem and forget to write them down, and it was a duplication of effort to copy and paste commands from the console to the document or vice versa. That turned into the idea of collecting bunches of commands together into a script that I could potentially just run, which would at least reduce the risk of gaps and missing steps. Then I could put the commands I wanted to run right into the script, run the script, and save it for posterity, knowing I’d accurately captured both the commands I ran and the changes I made to get it working by keeping it in version control.
But upon attempting to do so, I found that a bunch of long lists of commands on their own isn’t terribly useful, so I started to group the lists up, attempting to find commonalities by things like server or service, and then started organizing them into scripts for different roles and intents that I could apply to any server or service. Over time this developed into quite a library of scripts. As I was doing this organizing, I realized that as long as I made sure a script was functionally idempotent (it doesn’t change behavior or duplicate work when run repeatedly; it’s an important concept), I could guarantee that all my commands were properly documented and also that they had all been run – and if they hadn’t, or I wasn’t sure, I could just run the script again, since it’s supposed to always be safe to re-run no matter what state the system is in. So I started moving more and more to this strategy, until I realized that if I just organized this well enough, and made the scripts run automatically when they are changed or updated, I could not only improve my guarantees of having all these commands reliably run, but also quickly run them on many different servers and services all at once without even having to think about it.
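To make “idempotent” concrete, here’s a minimal sketch of the pattern (nginx is just a stand-in for whatever you’re setting up, and this assumes a Debian-ish system):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Install the package only if it's missing, so re-runs don't redo work
if ! dpkg -s nginx > /dev/null 2>&1; then
    sudo apt-get install -y nginx
fi

# Copy the config only if it actually differs, so re-runs don't churn files
if ! cmp -s nginx.conf /etc/nginx/nginx.conf; then
    sudo cp nginx.conf /etc/nginx/nginx.conf
    sudo systemctl reload nginx
fi

# Enabling an already-enabled unit is a no-op, so this line is naturally idempotent
sudo systemctl enable --now nginx
```

Run it once or run it fifty times; the end state is the same.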
There are some downsides of course. This leaves the potential for bugs in the scripts that make them not idempotent or not safe to re-run, and the only thing I can do is try to make sure those don’t happen, and identify and fix them when they do. The next step is probably some kind of testing process and environment (preferably automated), but now I’m really getting into the weeds. At least I don’t really have any concerns that my system is undocumented anymore; I can quickly reference almost anything it’s doing or how it’s set up. That said, one other risk is that the system of scripts and automation becomes so complex that it’s too tangled to quickly understand, and at that point I’ll need better documentation for the scripts themselves. And ultimately you run into a circular problem: how do you validate that the things your scripts are doing actually work, do what you expect, and aren’t missing anything? Usually you run back into the same ideas that doomed your documentation from the start: consistency and accuracy.
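One cheap way to start on that testing, as a sketch (setup.sh is a hypothetical stand-in for one of your scripts): run it twice and verify the second run changed nothing.

```bash
./setup.sh                       # first run: applies whatever was missing
touch /tmp/idempotency-marker    # timestamp marker after the first run
./setup.sh                       # second run: should be a no-op
# Anything under /etc modified after the marker means the script isn't
# idempotent; ideally this prints nothing
find /etc -type f -newer /tmp/idempotency-marker
```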
It also opens an attack vector, where somebody gaining access to these scripts not only gains all the most detailed knowledge of how your system is configured but also the potential to inject commands into those scripts and run them anywhere, so you have to make sure to treat these scripts and systems like the crown jewels they are. If they are compromised, you are in serious trouble.
By now I have of course realized (and you all probably have too) that I have independently re-invented infrastructure-as-code. There are tools and systems (Ansible and Terraform come to mind) to help you do this, and at some point I may decide to take advantage of them, but personally I’m not there yet. Maybe soon. If you want to skip the intermediate steps I took, you might even be able to jump directly to that approach. But personally I think there is value in the process: it helps you define your needs and builds your understanding that there really isn’t anything magical going on behind the scenes, which may keep these tools from turning into a black box that doesn’t actually help you understand your system.
Do I have a perfect system? Of course not. In a lot of ways it’s probably horrific and I’m sure there are more experienced professionals out there cringing or perhaps already furiously warming up their keyboards. But I learned a lot, understand a lot more than I did when I started, and you can too. Maybe you’ll follow the same path I did, maybe you won’t. But you’ll get there.


Xi understands that Trump is a stupid man and thinks he can manipulate him into doing something stupid to advance China’s interests, which, honestly, he probably can, unless Trump just does something so stupid that it backfires. Everything’s a gamble with Trump.


Nextcloud is just really slow. It is what it is; I don’t use it for anything that is huge, numerous, or needs speed. For that I use Syncthing or something even more specialized, depending on what exactly I’m trying to do.
Nextcloud is just my easy and convenient little dropbox, and I treat it like an old-school free Dropbox account with limited space that’s going to nag me to upgrade if I put too much stuff in it. It won’t nag me to upgrade, but it will get slow, so I just don’t stress it out. I only use it to store little convenience things that I want easy access to on all my machines without any fuss. For documents, my “home directory”, syncing my calendars, and stuff like that, it’s great and serves the purpose.
I haven’t used Seafile. The features sound good, minus the AI buzzword soup, but it looks a little too corporate-enterprisey for me, with minimal commitment to open source and no actual link to anything open source on their website. I don’t doubt that it exists, somewhere, but that raises red flags for potential future (if not in-progress) enshittification. After eventually finding their GitHub repo (with no help from them), I finally found a link to build instructions and… it’s a broken link. They don’t seem to actually be looking for contributions, or they’re just going through the motions. The open source “community” is clearly not the target audience for their “community edition”, not really.
I’ll stick to Syncthing.


According to the protocol they share (ActivityPub), communities and hashtags are essentially the same thing: a grouping containing many posts. Typing out a hashtag is how you tell Mastodon to add your post to that “hashtag group” (and you can add your post to multiple hashtags). In Lemmy, the community you post in IS the group (and you can cross-post to multiple communities). The result is the same. They’re the same thing, just with different ways of connecting your posts to them, displayed in very different ways depending on which part of the Fediverse you’re using.
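Roughly speaking (this is a simplified sketch, not the exact objects the software emits, and the URLs are made up), a Mastodon post joins a hashtag grouping through its tag array:

```json
{
  "type": "Note",
  "content": "Finally automated my backups",
  "tag": [{ "type": "Hashtag", "name": "#selfhosted" }]
}
```

while a Lemmy post is addressed to the community’s Group actor, which then announces it to the community’s followers:

```json
{
  "type": "Page",
  "name": "Finally automated my backups",
  "audience": "https://lemmy.example/c/selfhosted"
}
```

Same end result: the post ends up attached to a named grouping that others can follow or browse.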


Just keep bringing in more grand juries against her until you either get an indictment or she dies; either way, she’s tied up for life.


It’s a DDoJ. Distributed-denial-of-justice.