Is it? Why? If a NixOS module doesn’t support what you need, you can just write your own module, and the module system lets you disable existing modules if you need to. Doing anything custom this way still feels easier than doing it in an imperative world.
I can see your point that it can be daunting to have all the pain upfront. When I was using Ubuntu on my servers it was super simple to get things running
The problem was when I had to change some obscure .ini file in /etc for a dependency of something new I was setting up. Three days later I'd realise something unrelated had stopped working and then had to figure out which change over the past several days had caused it
For me this is at least 100x more difficult than writing a Nix module, because I'm simply not good at documenting my changes in parallel with making them
For others this might not be a problem, so then an imperative solution might be the best choice
Having used Nix and NixOS for the past 6-7 years, I honestly can't imagine myself using anything other than declarative configuration again - but again, it's just a good fit for me and how my mind works
In the NixOS scenario you described, what keeps you from finding an unrelated thing stopped working three days later and having to find what changed?
I’m asking because you spoke to me when you said “because I'm simply not good at documenting my changes in parallel with making them”, and I want to understand if NixOS is something I should look into. There are all kinds of things like immich that I don’t use because I don’t want the personal tech debt of maintaining them.
I think the sibling answer by oasisaimlessly is really good. I'd supplement it by saying that because you can have the entire configuration in a git repo, you can see what you've changed at what point in time
In the beginning I was doing one change, writing that change down in some log, then doing another change (a process I'd mess up in about five minutes)
Now I'm creating a new commit, writing a description for it to help myself remember what I'm doing and then changing the Nix code. I can then review everything I've changed on the system by doing a simple diff. If something breaks I can look at my commit history and see every change I've ever made
It does still have some overhead in terms of keeping a clean commit history. I occasionally get distracted by other issues while working and I'll have to split the changes into two different commits, but I can do that after I've checked everything works, so it becomes a step at the end where I can focus fully on it instead of yet another thing I need to keep track of mentally
I just realised I didn't answer the first question about what keeps me from discovering the issues earlier
The quick answer is complexity and the amount of energy I have, since I'm mostly working on my homelab after a full work day
Some things also don't run that often or I don't check up on them for some time. For example, hardware acceleration for my Jellyfin instance stopped working at some point because I was messing around with OpenCL and broke something in the Mesa drivers. I didn't discover it until I noticed the fans going ham due to the added workload
I'm not really sure what your point is, but I'll try to take it in good faith and read it as "why doesn't docker solve the problem, since you can also keep those configurations in a git repo?"
If any kind of apt upgrade or similar command is run in a dockerfile, it is no longer reproducible. Because of this it's necessary to keep track of which dockerfiles do that and when each build was performed; that's more out-of-band logging. With NixOS I will get the exact same system configuration if I build the same commit (barring some very exotic edge cases)
Besides that, docker still needs to run on a system, which must also be maintained, so Docker only addresses part of the issue
If Docker works for you and you're not facing any issues with such a setup, then that's great. NixOS is the best solution for me
That’s all my point was, yeah. Genuinely no extra snark intended.
> it is no longer reproducible
The problem I have with this is that most of the software I use isn’t reproducible, and reproducibility isn’t the be-all and end-all for me. If you want reproducibility then yes, Nix is the only game in town, but if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
> docker still needs to run on a system
This is a fair point but very little of that system impacts the app you’re running in a container, and if you’re regularly breaking running containers due to poking around in the host, you’re likely going to do it by running some similar command whether the OS wants you to do it or not.
> if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
For some I'm sure that's the case; it wasn't in my case.
I ran docker for several years before. First docker-compose, then docker swarm, finally Nomad.
Getting things running is pretty fast, but volumes, backups, and upgrades of anything in the stack (OS, scheduler, containers, etc.) broke something almost every time - updating to a new release of Ubuntu would pretty much always require backing up all the volumes and local state to external media, wiping the disk, installing the new version, and restoring from the backup
That's not even getting into bringing things back up after an issue. Because a lot of configuration can't be done through docker envs, it has to be done through the service itself. As a consequence that config is now state
I had an NVMe drive fail on me six months ago. Recovering was as simple as swapping the drive, booting the install media, installing the OS, and transferring the most recent backup before rebooting
Took about 1.5 hours and everything was back up and running without any issues
Not OP, and not very experienced with NixOS (I just use Nix for building containers), but roughly speaking:
* With NixOS, you define the configuration for the entire system in one or a couple .nix files that import each other.
* You can very easily put these .nix files under version control and follow a convention of never leaving the system in a state where you have uncommitted changes.
I've written a dozen flakes because I wanted some niche behavior that the home-manager impl didn't give me, and I just used an LLM and never opened the Nix docs once.
It's just declarative configuration, so you also get a much better deliverable at the end than running terminal commands in Arch Linux, and it ends up being less work.
Have you seen how bad the Nix documentation is and how challenging Nix (the language) is? Not to mention that you have to learn Yet Another Language just for this corner case, which you will not use for anything else. At least Guix uses a Lisp variant so that some of the skills you gain are transferable (e.g. to Emacs, or even to a general-purpose language like Common Lisp or Racket).
Don't get me wrong, I love the concept of Nix and the way it handles dependency management and declarative configuration. But I don't think we can pretend that it's easy.
The documentation is not great (especially since it tends to document nix-the-language and not the conventions actually used in Nixpkgs), but there are very few languages on earth with more examples of modules than Nix.
In Pattern: Defensively Handle Constructors, it recommends using a nested inner module with a private Seal type. But the Seal type is unnecessary. The nested inner module is all you need: a private field `_private: ()` is only settable from within that inner module, so you don't need the extra private type.
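Roughly what I mean, as a minimal sketch (the `Config` type and `new` constructor are just illustrative names, not taken from the article):

```rust
mod config {
    pub struct Config {
        pub name: String,
        // Private field: only code inside `mod config` can name it, so a
        // struct literal `Config { .. }` can't be written outside this module.
        _private: (),
    }

    impl Config {
        // The only way to construct a Config from outside the module.
        pub fn new(name: impl Into<String>) -> Config {
            Config { name: name.into(), _private: () }
        }
    }
}

fn main() {
    let c = config::Config::new("example");
    println!("{}", c.name);
    // Does not compile outside `mod config`:
    // let c = config::Config { name: "x".into(), _private: () };
}
```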
I haven't used Homebrew in a long time, but if I ever did it would be in the way that you describe (so far I've always found reasonable alternatives for the software I want). What I'm wondering is, if this is entirely to support unsigned casks, why does Homebrew not simply re-sign the software itself at install time with an ad-hoc signature, as though it had just built it?
If you don't use ANC, then why is an ANC issue that only appears on flights a reason for you to not buy a product? If someone said they have weird ears and the AirPods Pro 3 simply doesn't fit their ears, would that be a reason for you to not buy it even though it fits yours?
I'm fascinated by ASN.1, I don't know why it appeals to me so much but I find working with it oddly fun. I've always wanted to find the time to write an ASN.1 compiler for Rust, because for some reason all of the Rust implementations I've seen end up either being derive macros on Rust structs (so going the other direction), or just providing a bunch of ASN.1 types and functions and expecting you to manually chain them together using nom.
LaunchDaemons don't rely on GUI login state so they should come up. If you use LaunchAgents then they won't start this way, but LaunchDaemons should be enabled once the data volume is unlocked and booting finishes.
Unlock over SSH terminates the connection after unlocking the data volume, so it doesn't even attempt to start the shell until you reconnect after it's fully booted up.
FWIW you can fix the shell issue by wrapping your shell in a shim that essentially runs wait4path on the nix store before exec'ing your real shell. I set up my environment to install that shim binary directly onto the data volume at a known path so it can be used as my login shell.
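For anyone curious, the shim can be a tiny binary along these lines - just a sketch, assuming wait4path lives at /bin/wait4path and the real shell is nix-darwin's zsh at /run/current-system/sw/bin/zsh (both paths are assumptions about a typical setup):

```rust
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() {
    // Block until the Nix store path exists (wait4path is the macOS utility
    // that waits for a path to appear; /bin/wait4path is assumed here).
    let status = Command::new("/bin/wait4path")
        .arg("/nix/store")
        .status()
        .expect("failed to run wait4path");
    if !status.success() {
        eprintln!("wait4path exited with {status}");
        std::process::exit(1);
    }

    // Replace this process with the real shell, forwarding any arguments
    // (e.g. -l / -c passed by Terminal or sshd). The shell path is an assumption.
    let args: Vec<String> = std::env::args().skip(1).collect();
    let err = Command::new("/run/current-system/sw/bin/zsh").args(&args).exec();

    // exec only returns on failure.
    eprintln!("failed to exec shell: {err}");
    std::process::exit(1);
}
```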
Depending on the timeouts involved, I imagined it might still happen if you had automatic retry.
And thanks for the pointer, I actually have the same fix in my config, with the nice benefit of only adding a single non-changing entry to /etc/shells. It might be worth upstreaming something like this to nix-darwin, so we don't all go implement essentially the same fix.
It's not just 5 moves. It's 5 crossing changes (which don't change the number of crossings, they just change the order of the strings in a crossing). Unknotting also involves moving the strings around to add or remove crossings, without performing crossing changes (if you take a loop and twist it into a figure eight, you've moved the strings and created a crossing but you haven't cut the strings and performed a crossing change).
If you look at the preprint paper, the knot it starts with has 14 crossings, but they actually move the strings around to end up with 20 crossings prior to performing the first 2 crossing changes in the unknotting sequence. So the potential space for moves here is actually rather large.
If you need to hold a lock across an await point, you can switch to an async-aware mutex. Both the futures crate and the tokio crate have implementations of async-aware mutexes. You usually only want this if you're holding it across an await point because they're more expensive than blocking mutexes (the other reason to use this is if you expect the mutex to be held for a significant amount of time, so an async-aware lock will allow other tasks to progress while waiting to take the lock).
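A minimal sketch of holding an async-aware lock across an await point, assuming tokio with the `full` feature set (the futures crate's mutex is used in much the same way):

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Mutex;

async fn append_log(log: Arc<Mutex<Vec<String>>>, line: String) {
    // The guard is held across the .await below; tokio's Mutex allows this
    // because waiting tasks yield to the executor instead of blocking a thread.
    let mut guard = log.lock().await;
    tokio::time::sleep(Duration::from_millis(10)).await; // stand-in for async I/O
    guard.push(line);
}

#[tokio::main]
async fn main() {
    let log = Arc::new(Mutex::new(Vec::new()));
    let tasks: Vec<_> = (0..3)
        .map(|i| tokio::spawn(append_log(log.clone(), format!("entry {i}"))))
        .collect();
    for t in tasks {
        t.await.unwrap();
    }
    println!("{:?}", *log.lock().await);
}
```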