I have never used them but there are some tools that advertise being able to run GitHub Actions locally, like WRKFLW.
Indeed


Also the normal and rpi versions are two completely independent implementations of the same software. So now the LLMs have twice the maintenance load.
I didn’t diff the two files but even the startup and control code appears to be custom for each version.


Since projects of the same language often use the same tooling this makes it easier to clean up the whole directory by running something like this:
for d in ./*/ ; do (cd "$d" && somecommand); done
somecommand could be cargo clean if you’re in the Rust directory for example.
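A minimal runnable sketch of that pattern, using touch .cleaned as a harmless stand-in for the real cleanup command (e.g. cargo clean), and a hypothetical demo/ layout:

```shell
# Hypothetical layout: one subdirectory per project, all using the same tooling.
mkdir -p demo/proj-a demo/proj-b

# Run the same command in every immediate subdirectory.
# The subshell (...) means the cd never leaks out, so there's no need to cd back.
for d in demo/*/ ; do
  (cd "$d" && touch .cleaned)   # stand-in for e.g. `cargo clean`
done

ls demo/proj-a/.cleaned demo/proj-b/.cleaned
```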


Just out of curiosity I don’t see how 4 sticks die together at the exact same time unless the PSU is/has fucked up hard.
I’d argue that the likelihood of 4 sticks failing together is much lower than the MOBO or CPU or PSU failing in a way that makes RAM inaccessible.
Typically you’d see one stick failing at which point you could take it out and run with the other 3 (or 2 depending on configuration).
Anyway, if you ever intend to return it, it's probably best to keep the rest of the components, because who knows which of those will be up next for a shortage/crisis.


Depends on the exact system but there will be a method to switch to a newer release channel without reinstalling. Rinse and repeat every x years.


I assume you mean AVIF? Because AV1 is not an image (file) format but a video compression format (that needs to be wrapped in container file formats to be storable).


The std::offload project is kinda cool. Hadn’t heard about that before.
It’ll be interesting to see where that leads.


I was under the impression that the compiler already optimizes out a lot of clones where the clone is primarily used to satisfy the borrow checker. Is that not the case?
How much is the fish?


As long as people are using Rust, it will necessarily attract this kind of action. This won’t be the last attack we will see.
I think the team has handled it quite well.


The post text doesn’t seem to get displayed on some clients (Voyager) so I’ll attach it here as well:
asciinema (aka asciinema CLI or asciinema recorder) is a command-line tool for recording and live streaming terminal sessions.
This is a complete rewrite of asciinema in Rust, upgrading the recording file format, introducing terminal live streaming, and bringing numerous improvements across the board.
Not just on the web. I’ve previously used it to embed a short clip in a presentation.
The nice thing is that it doesn’t do a massive screencap but only captures the text. This way the replay will be freshly rendered at native resolution.


There are even little interactive tools for it like: cargo-wizard


Good move. Especially for incremental builds linking tends to take up quite a large percentage of the compile time.
I’ve mostly switched to mold for that reason but having a fast default linker is nice.
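For reference, switching Cargo over to mold is roughly a config fragment like this (the target triple is illustrative for x86-64 Linux; check the mold README for your platform):

```toml
# .cargo/config.toml — link with mold via clang on Linux
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```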


My motivation for using NixOS is maintenance. I’ve been running 2-3 personal Linux computers for the last decade, with one of them being a server.
To get stuff like services working with each other you sometimes need to make small changes to the config files of some services. The issue for me, especially with the many services running on the server, is coming back to a broken/misbehaving machine after 4+ months and then having to research what changes I made long ago and where those config files are buried.
Making the change and testing it would likely take less than 5 minutes if you had every detail you need fresh in your mind.
I simply don’t have the mental capacity to remember all that stuff after months of working on other things. Especially if you’re coming back to something broken this is a really annoying position to be in.
You want the fix now but have to start by looking up docs and trying to figure out what past-self did to get you into this mess, or to find out what has changed since then.
At some point I had enough and was either going to teach myself some sort of personal changelog / documentation system, or learn a new declarative configuration system.
Huge respect for anyone who can keep all this info in their mind and to those that meticulously update their own documentation, but I lack the discipline to do so in the heat of battle and will easily miss things.
Since then any system that I will have to maintain myself has been using some form of declarative management. It keeps all configuration accessible and organized in one place, so I don’t have to go digging for the correct file paths. It self updates so that even when I go back and forth during testing I won’t miss updating my standalone docs.
And NixOS brings this to my whole system. No old programs lying around because you forgot to uninstall them and have now forgotten about it. Same thing with pinned package versions that then wreak havoc once they’re incompatible with the updated rest of the system. It even configures my goto tools (shell, editors, etc) to my personal liking when I set up a new machine.
It's not the first declarative system and probably won't be the last one I'll use, but for now it really makes my life noticeably easier.


I get that this seems very intimidating, but if you've ever used more than three programming languages in your life, I believe you won't have much to learn.
I can mimic the syntax and I very roughly understand how the import system works. But I don't know Nix! Yet I haven't had any trouble language-wise over the last few years.
In my experience most of the “code” you write is package names and those can be copied from search.nixos.org.
In that sense I’m effectively using it as a markup language and I don’t think anyone has ever gotten discouraged by having to “learn” YAML, just so they can write a config file for some piece of software they want to use.
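To give an idea of what that looks like in practice, a typical "package list" fragment is little more than names copied from search.nixos.org (the package names here are just examples):

```nix
# configuration.nix fragment: install system-wide packages by name
environment.systemPackages = with pkgs; [
  git
  ripgrep
  htop
];
```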
Something that I would take as discouragement is the state of the documentation. It has been improving to a usable level in some areas but other areas are heavily outdated or just plain missing.


gitui and the plain old git cli


How well NixOS fits your purpose really depends on what you want to do with the OS. If you're just going to run a bunch of docker containers, you could manage them via Nix, but it's a little cumbersome.
Where NixOS really shines for small servers are the so called NixOS Options.
They allow you to install tons of services on bare metal but manage all the configuration for you. E.g. open the correct firewall ports, run a dedicated DB or cache, etc., and all of those simply require you to enable them with an = true;.
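As a small sketch of what that looks like (openssh is a real option; treat the exact option paths as something to verify on search.nixos.org):

```nix
# configuration.nix fragment: enable a service declaratively
services.openssh.enable = true;                 # installs and runs sshd
networking.firewall.allowedTCPPorts = [ 22 ];   # open the matching port
```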
Smaller projects might not have a NixOS Option available, and some options are more configurable, or easier to configure, than others, but if you're running just a few common services you could feasibly manage your whole server with just one native config file and no docker shenanigans.
I’d recommend checking what’s available under the link above. If you wanna go the container route instead, you have the option of just using docker non-declaratively as on every other distro (but then you lose some of the benefits NixOS gives you), or you can declaratively have NixOS manage all the docker containers. There are a few ways to do and manage this so some further research will be required.
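For the declarative container route, a sketch using the oci-containers module could look something like this (the container name and image are just examples):

```nix
# configuration.nix fragment: have NixOS manage a Docker container
virtualisation.oci-containers = {
  backend = "docker";        # the module defaults to podman
  containers.web = {
    image = "nginx:latest";
    ports = [ "8080:80" ];   # host:container
  };
};
```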
Source (German)
Would it have cost you so much to leave a link at the bottom of the post?