Nextcloud asked in a poll at https://mastodon.social/@[email protected]/115095096413238457 what database its users are running. Interestingly, one fifth replied that they don’t know. Should people know better where their data is stored, or is it a good thing that everything runs so smoothly people don’t need to know what their software stack is built upon?
Do you have the data to back that up? Have you measured how much of an impact on system load and power consumption having 2 separate DB processes has?
Roughly the same amount of work is done by the CPU whether you split your DBs between two servers or just use one. There might be a slight increase in memory usage, but that would only matter in a few niche applications and wouldn’t affect environmental impact.
I mean, you are the one making the exceptional claim that unnecessarily running multiple instances of programs on a device with finite resources has no practical adverse effect. Of course, the effects can be more or less drastic depending on the many variables at play (hardware, software, memory pressure, thread starvation, cache misses, …) and can indeed be negligible in some lucky circumstances. The point is that you don’t call that shot, and especially not by burying your head in the sand and pretending it’s never gonna be a problem.
Effective use of computing resources requires tuning. Introducing a new service creates imbalance. Ensuring that the server performs nominally and predictably for all intended services is a balancing act and a sysadmin’s job. Services whose deployment settings are set by someone with no prior knowledge of the deployment constraints can’t be trusted to do a good job of it (that’s the nature of the physical world we live in, not my opinion), and promoting this attitude promotes the kind of wasteful and irresponsible computing I was on about.
Now, I’ll give you the link to this basic helper for tuning a PostgreSQL server: https://pgtune.leopard.in.ua/
Will you tell me what the correct inputs are for my homelab (I won’t tell you the hardware, the set-up, the other services running on it, the state of the system, etc.)?
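For illustration only, here is the ballpark it suggests for a hypothetical 4 GB / 2-core “web application” profile on SSD storage (my rounding, not pgtune’s exact output); every single value moves as soon as those inputs change:

```
# Illustrative pgtune-style suggestions for an assumed 4 GB RAM / 2-core box
# with SSD storage — placeholder numbers, not a recommendation.
max_connections = 200
shared_buffers = 1GB                 # roughly 1/4 of RAM
effective_cache_size = 3GB           # roughly 3/4 of RAM
maintenance_work_mem = 256MB
work_mem = 5MB                       # per sort/hash node, per connection
checkpoint_completion_target = 0.9
wal_buffers = 16MB
random_page_cost = 1.1               # assumes SSD, not spinning disk
```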
And later, when you distribute your successful container to millions of users, what will you tell the angry ones who complain that your software is slow, through no fault of your code, because they happen to pile up multiple DBs, web servers, application servers, reverse proxies, … on their banana SoCs?
I’m saying this based on real-world experience: after a certain point you start to see diminishing returns when optimizing a system, and you’re better off focusing your efforts elsewhere. For most applications, customizing containerized services to share databases is far past that point.
And do you think I would spend my time engaging if that wasn’t from my own very “real world experience” of lessons learned the hard way?
Bringing up “diminishing returns” as if this were an optimisation game doesn’t do it justice either. Take the typical “household FOSS package”, with software names often brought up in here: a nextcloud instance, a photo-sharing service like immich, private instant messaging, a software forge, a subsonic-compatible audio/video streaming server, and a couple of PHP apps like wallabag and an RSS aggregator.
An Intel Atom CPU and 4GB of RAM are plenty for all that, and will cost you a single-digit USD amount a month, granted you put in the (one-time) effort to tune and balance those services. Were you to run all of the above from upstream’s Docker files instead, I can guarantee you would deem this (otherwise perfectly fine) server underpowered for the task at hand (and would probably go for a 10th-gen-or-so Intel Core CPU, quadruple the RAM, and 3-6× the energy cost in the process).
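To make the “tune and balance” part concrete, here is a minimal sketch of the idea, assuming PostgreSQL as the shared backend (image tags, names and credentials are placeholders; each app still gets its own database and role, and the exact variables depend on each image’s documentation):

```yaml
# Illustrative compose sketch: one shared PostgreSQL instance instead of
# one database container per app. All names and values are placeholders.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme       # placeholder
    volumes:
      - dbdata:/var/lib/postgresql/data

  nextcloud:
    image: nextcloud:stable
    environment:                        # the official image can auto-configure from these
      POSTGRES_HOST: db
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: changeme       # placeholder
    depends_on:
      - db

  # immich, wallabag, the RSS reader, … would point at the same "db" host,
  # each with its own database and role (variable names differ per image).

volumes:
  dbdata:
```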
And that’s the point I’m making here: a self-hosting community of tinkerers should (ideally) know better, if only for the sake of keeping the process environmentally friendly and of not wasting other people’s money.
You seem to be obsessed with optimising one resource at the expense of others. Time is a limited resource, and even if it only takes 5 minutes to configure all of your containers to share a single db backend (it will take longer than that even if you just have 2), you’re only going to save a few MB of RAM. And since RAM costs roughly $2.5/GB (0.25 cents/MB) your time would have to be worth very little for this to be worthwhile.
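(For scale, the per-app “sharing” step on an existing PostgreSQL instance is roughly the following, with placeholder names and password; most of the time goes into pointing each app’s settings at it.)

```sql
-- Illustrative per-app setup on a shared PostgreSQL instance
-- (role name, database name and password are placeholders).
CREATE ROLE someapp WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE someapp OWNER someapp;
-- then set the app's DB host/user/database settings accordingly
```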
On the other hand, if you’re doing it to learn more about computers then it might be worthwhile. This is a community of hobbyists, after all…
If you want to push it and paint me as obsessed with something, then let it be this: providing this community with on-topic and reasonable advice.
This is false, and you should read my previous message again to see why: on a decent “self-host”-friendly machine, the same software may work very well, or not at all, depending on whether the user engages with very basic configuration. This goes beyond RAM (memory isn’t the sole shared resource), and I’m adamant that the alternative (which went from “pretending that the problem doesn’t exist” to “throwing money at the problem”) is unreasonable.
Or more importantly: the extent to which you can self-host out of sheer luck and ignorance, like you suggest, is very limited. If you don’t want to engage with a minimum amount of configuration, you might bump into security issues (a much broader and more complex subject) long before any of the above has a material impact.