Recently, I was confronted with a problem I never thought anyone would have: my VPS didn’t have IPv6 connectivity. Now at first, I of course thought this was an issue of some kind, and opened a support ticket for it. About half an hour later, I got this response:
Hallo, [[ DATA EXPUNGED ]]
vielen Dank für Deine Anfrage.
VMs haben keine v6 adresse
Mit freundlichen Grüßen
[[ DATA EXPUNGED ]]
English Translation:
Hello, [[ DATA EXPUNGED ]]
thank you for your request.
VMs don't have a v6 address
With kind regards
[[ DATA EXPUNGED ]]
WHAT
How do you not have IPv6 in 2024?? It’s literally free!? To be completely fair, I should have expected as much. My host at the time was not some professional big company the likes of Hetzner or DigitalOcean. I like my servers from good old minecraft server hosts. Seriously, I never had issues with this host before. In my over 1 year of having a VPS there, plus some additional months of minecraft servers, I had exactly 0 seconds of downtime, never felt a single sign of over-provisioning, and paid some of the cheapest prices on the market.
Now I did have one other problem with my VPS, but that one was self-imposed: the VPS didn’t run NixOS. When I got the VPS I didn’t use NixOS yet, and barely knew about nix at all, so I set it up with Debian 11 (later upgraded to 12), and by the time I wanted to switch, I already had too much stuff on it to do a nixos-infect (see: Sunk Cost Fallacy).
The solution was clear: I needed a new VPS, with IPv6 this time.
So the search began.
I previously paid 14.50€ per month for 3 cores, 6GB RAM and 100GB SSD, and I didn’t really wanna pay more than that just for IPv6. Turns out, that minecraft host had some really competitive pricing! Among all the big providers, the prices were either worse or could only be paid by credit card. After not finding anything for about half a month, I asked on fedi if anyone could recommend me some hosting providers. (the post) Now of course, latte gave me the obvious suggestion of Hetzner, which would have fit the criteria, but for reasons that fedi users might have guessed already, I don’t feel comfortable hosting there. Another suggestion by latte was st-hosting, which has some INSANE pricing; IF you’re comfortable with running in an LXC container, which I am not. They also have some even more insane prices on IPv6-only LXC servers, where you can get 10 cores / 16GB RAM / 150GB SSD for 8 bucks a month, which, if you bridge IPv4 in from another VPS over wireguard, might honestly be worth it. The last suggestion, sent in through a DM, was Webdock.
Webdock advertises themselves as the “no-nonsense cloud”, and they pretty much are. There is really not much to say about them. They have some server management tools that can auto-install stuff on your VPS or register shell users with SSH keys over the web UI, but that’s all stuff I don’t need or want. After waiting a bit more for potential other suggestions to arrive, I bought a VPS from Webdock.
Webdock, like any reasonable host, doesn’t have NixOS support. It’s already insane to use nix as a normal person, much more so to commercially support it. But that won’t stop a NixOS user.
So, what do you do when you’re not on one of the 4 hosts with first-class NixOS support? (honestly, more than expected) You try nixos-infect. nixos-infect is a small shell script that tries to install NixOS over the existing system on your VPS. Now importantly, this does not work all of the time, you may need to do some tinkering, or it might not work at all. But when you have a freshly bought VPS, you might as well try.
Running nixos-infect works like this: you copy the script onto the fresh VPS, set the PROVIDER
variable or set doNetConf=y
in the script as needed, run it, and reboot. (insert chud.jpg)
Of course, nothing can work on the first try. The script ran through fine, but after it rebooted, the server couldn’t be accessed anymore. Not an SSH issue, it was actually refusing all connections.
This is almost always an issue with the network configuration, which can mostly be fixed by setting doNetConf=y
in the script. Note: I’ve made a PR to nixos-infect that does this when setting PROVIDER=webdock
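Putting it together, the invocation looks roughly like this, following nixos-infect’s documented curl-pipe-bash usage (the channel version here is my assumption, and PROVIDER=webdock relies on the PR mentioned above):

```shell
# Run as root on the freshly provisioned VPS.
# PROVIDER=webdock sets doNetConf=y for you (per the PR above);
# NIX_CHANNEL picks which NixOS release gets installed.
curl -fsSL https://raw.githubusercontent.com/elitak/nixos-infect/master/nixos-infect \
  | PROVIDER=webdock NIX_CHANNEL=nixos-24.05 bash -x
```

As always with curl-pipe-bash, read the script first; it is about to overwrite your operating system.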
After resetting the VPS, re-running the install steps and rebooting, the server could be reached again. There was one issue tho: the SSH key didn’t work, or so I thought at least (see: Foreshadowing)
The first thought I had was that the script must not have installed the SSH key correctly, especially after seeing an issue discussing exactly that for keys with descriptions. My key didn’t actually have a description, but I didn’t care about that at the time. After removing the reboot step from the script and manually inspecting it, I had verified that it was correctly pasting the key into the nix config. Fuck. At this point, I just tried it a few more times to see if it would work, which of course it didn’t.
After a few attempts, I finally had the thought that maybe, just maybe, this isn’t a server issue at all and is some kind of misconfiguration in my SSH client.
So, I ran ssh with -vvv
and, lo and behold, SSH doesn’t even try to use my SSH key. Great. It’s important to note here that I use a YubiKey with gnupg for SSH, meaning that my GPG private key is stored on the YubiKey,
which is used by gnupg to derive an SSH key which is then used by ssh. This works great, until you realize that ssh REALLY doesn’t like only having a .pub file, to the point where it’d rather silently fail than try to use the key.
To fix this, you either have to specify -i ~/.ssh/id_rsa_yubikey.pub
for every connection, or add this to your config:
```
Host hostname
    HostName IP
    IdentityFile ~/.ssh/id_rsa_yubikey.pub
```
Note: Why does SSH config switch host and hostname lmao
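For context, the gpg-agent side of this setup usually looks like the following. This is the standard gnupg configuration for agent-based SSH, not something from my own dotfiles, and the paths are the defaults:

```shell
# ~/.gnupg/gpg-agent.conf: make gpg-agent also act as an SSH agent
# enable-ssh-support

# In your shell init: point ssh at gpg-agent's SSH socket
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
# Tell the agent which tty to use for PIN prompts
gpg-connect-agent updatestartuptty /bye >/dev/null
```

With this in place, `ssh-add -L` lists the key derived from the authentication subkey on the YubiKey.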
With that done, I could finally connect to the VPS, and continue my work of purification.
NixOS advertises itself as a reliable, DECLARATIVE, and REPRODUCIBLE OS. And it is*
If you import any kind of remote resource in your configuration, you have to specify both the link and a file hash, to ensure that any other person trying out your configuration is getting the exact same experience.
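For example, fetching any remote file in nix always pairs the URL with a content hash (URL and hash below are placeholders, not real resources):

```nix
# Any remote resource needs both a URL and a content hash.
# If the hash doesn't match what nix downloads, the build fails
# and nix prints the hash it actually got.
src = pkgs.fetchurl {
  url = "https://example.com/some-resource-1.0.tar.gz"; # placeholder URL
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; # placeholder hash
};
```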
Now, of course you don’t do this with most packages. This is mostly done by package maintainers for you, so you only have to import nixpkgs
to get more than 100k packages in your configuration.
So, when importing nixpkgs
, you have to specify the hash! EXTREMELY LOUD INCORRECT BUZZER
By default, nix uses something called channels to manage the nixpkgs version. Channels are basically a system-wide configuration file that tells nix which URL to get nixpkgs from, and which commit to use.
Now, this sounds kinda reasonable, until you’re told that nix channels are managed imperatively with the nix-channel
command, which breaks the declarativeness promised by nix,
and hinders reproducibility, since getting the exact same nix channel version on another machine is a needlessly complicated task.
Also, this makes reproducible development environments basically impossible, since every user would have to keep their nix channel in sync with the project’s intended one.
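For reference, this is what that imperative management looks like (channel name and version are just examples):

```shell
# System-wide, imperative state: nothing in your configuration records this.
sudo nix-channel --add https://nixos.org/channels/nixos-24.05 nixos
sudo nix-channel --update

# What nixpkgs you actually get depends on *when* you last ran --update.
nix-channel --list
```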
There is a solution to this, and it’s called flakes. Flakes provide a fully reproducible and declarative way to manage the version of nixpkgs (and other external imports!) by having a flake.lock file that specifies the commit and file hashes of all dependencies.
You still have imperative updating through nix flake update
, but all changes have to be written to the flake.lock before they take effect, meaning reproducibility can be achieved by simply committing this file to your git repository.
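A minimal flake for a NixOS system looks roughly like this (hostname and channel branch are placeholders):

```nix
{
  # flake.lock pins this input to an exact commit + hash
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    # "myhost" is a placeholder hostname
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```

`nix flake update` rewrites flake.lock; everyone who checks out the repo builds against the exact same nixpkgs commit.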
There is one issue tho*: Flakes are an experimental feature.
* no actually, there are still a few more issues with flakes, but it doesn’t seem like they’re getting resolved anytime soon
To enable flakes, you can either pass --experimental-features "nix-command flakes"
on the command line, or set nix.extraOptions = "experimental-features = nix-command flakes";
in your NixOS config.
Now you’re ready to re-write your system config with flakes, good luck!
I already had a flake for 4 of my devices, so I just added testament
to the hosts and wrote the rest of the config.
“When life gives you docker, write nix. Fuck docker”
- Emilia, 2024
With a freshly set up NixOS VPS, I had a lot of work to do. I was running some (11) services that all needed to be ported over.
They were previously running inside docker containers, managed through Dockge, a great tool if you’re not completely insane.
But I was not satisfied. I needed declarativeness. I needed reproducibility. I needed nix.
For porting services over to NixOS, I had three options:
Nix is really fucking cool, and my nginx config shows it best. You should try it (see: Red Herring). Here’s the basic nginx config in nix that basically everyone should have to begin with.
```nix
services.nginx = {
  enable = true;
  recommendedGzipSettings = true;
  recommendedOptimisation = true;
  recommendedProxySettings = true;
  recommendedTlsSettings = true;
};
```
This config only enables nginx and applies a bunch of recommended settings. The real magic starts once you start defining virtualHosts.
```nix
services.nginx = {
  virtualHosts = {
    "git.ixhby.dev" = {
      onlySSL = true;
      enableACME = true;
      locations."/" = {
        proxyPass = "http://127.0.0.1:3000";
        extraConfig = ''
          client_max_body_size 512M;
        '';
      };
    };
  };
};
```
This little snippet creates a virtual host for https://git.ixhby.dev, fetches the SSL certificate automatically through ACME, and proxies all HTTPS traffic to port 3000, in only 10 lines. (If you also want plain HTTP 301-redirected to HTTPS, use forceSSL instead of onlySSL.) But it can get so much more concise than this.
Nix is a full functional programming language, which allows you to create let bindings for intermediate variables. One thing that basically all vHosts need is HTTPS redirection and ACME, so let’s simplify this a bit.
```nix
services.nginx = {
  virtualHosts = let
    ssl = {
      onlySSL = true;
      enableACME = true;
    };
  in {
    "git.ixhby.dev" = ssl // {
      locations."/" = {
        proxyPass = "http://127.0.0.1:3000";
        extraConfig = ''
          client_max_body_size 512M;
        '';
      };
    };
  };
};
```
Granted, this only saves us about 2 lines per vHost, but it gets better than this. git.ixhby.dev
not only proxies the traffic, but has the additional restriction of client_max_body_size 512M
. But surely we can simplify this further for other services.
```nix
services.nginx = {
  virtualHosts = let
    ssl = {
      onlySSL = true;
      enableACME = true;
    };
    proxy = port: ssl // {
      locations."/".proxyPass = "http://127.0.0.1:${builtins.toString port}";
    };
  in {
    "i.ixhby.dev" = (proxy 3333);
  };
};
```
This is an ENTIRE nginx virtual host, with everything you need, in 1 line.
Writing nix gets addictive. FAST.
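As a sketch of where this leads (my own extrapolation, not from the original config): since nix is a real language, builtins.mapAttrs can collapse a whole set of proxied services into a plain domain-to-port table.

```nix
services.nginx.virtualHosts = let
  ssl = {
    onlySSL = true;
    enableACME = true;
  };
  proxy = port: ssl // {
    locations."/".proxyPass = "http://127.0.0.1:${builtins.toString port}";
  };
# mapAttrs turns each "domain = port" pair into a full virtual host
in builtins.mapAttrs (_domain: proxy) {
  # illustrative table; only i.ixhby.dev appears in the post
  "i.ixhby.dev" = 3333;
  "other.example.dev" = 4000;
};
```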
garnix.dev is written in Astro, the one good web framework. Astro itself doesn’t have any build tools in nix, why would it? It’s a web framework, you’re not gonna be writing desktop applications with it or anything!
They’re doing WHAT!?!? That is fucking terrifying, who thought this would be a good idea!
Still, packaging a website for nix is not the hardest. I’ll spare you the details of finding this out, but this is all you need to package an Astro website (using pnpm) in nix.
```nix
packages.default = pkgs.stdenvNoCC.mkDerivation (finalAttrs: {
  pname = "garnix.dev";
  version = "1.0.0";

  src = ./.;

  nativeBuildInputs = with pkgs; [
    nodejs
    pnpm_8.configHook # same pnpm version as fetchDeps below, or the dep hash won't match
  ];

  pnpmDeps = pkgs.pnpm_8.fetchDeps {
    inherit (finalAttrs) pname version src;
    hash = lib.fakeHash; # replace with the real hash nix prints on the first build
  };

  buildPhase = ''
    pnpm build
  '';

  installPhase = ''
    mkdir -p $out
    mv dist/* $out
  '';
});
```
Now I could just add the garnix.dev git repo to the flake inputs, and add this quick snippet to the nginx config:
```nix
# a bunch of stuff snipped here
services.nginx.virtualHosts = {
  "garnix.dev" = ssl // {
    root = garnix-dev.packages."x86_64-linux".default;
    locations."/" = {
      index = "index.html";
      tryFiles = "$uri $uri/ $uri/index.html =404";
    };
  };
};
```
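For completeness, pulling the site’s repo in as a flake input looks roughly like this (the repo URL is a stand-in; the post doesn’t give the actual location):

```nix
{
  # stand-in URL, not the real repository
  inputs.garnix-dev.url = "git+https://example.com/garnix.dev.git";

  outputs = { self, nixpkgs, garnix-dev, ... }: {
    # garnix-dev.packages."x86_64-linux".default is the built site
    # that the nginx virtualHost above serves as its root
  };
}
```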
mroew mrrp mrroow nyaa~ nix is cool.
If you’re searching for happiness in your life, try nix instead!