This is Plover, a configuration meant to be used on a low-powered general-purpose machine. It isn’t much of an instance to be seriously used yet, but hopefully it is getting there.

This configuration is expected to be deployed as a Google Compute Engine instance.

Keep the following assumptions in mind when modifying this configuration:
* Most of the defaults are left to the image profiles from nixpkgs, including networking options and filesystems. Any deviations, though, should be handled in `./modules/hardware`.
* No additional storage drives.
* At least 32 GB of disk space is assumed.
Some of the self-hosted services on this server:
* An nginx server that ties all of the self-hosted services together.
* A Vaultwarden instance for a little password management.
* A Gitea instance for my personal projects.
* A Keycloak instance for identity management.
* A VPN tunnel with WireGuard.
* A DNS server with CoreDNS managed as a "hidden" authoritative server and as a local DNS server for easily accessing the services with domain names.
== General deployment guidelines
If you want to deploy it anywhere else, you have to keep some things in mind.
* This uses sops and sops-nix to decrypt secrets. It mainly uses the private key counterpart of `./files/age-key.pub`, which has to be moved to the appropriate location (i.e., `/var/lib/sops-nix/key.txt`).
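+
A minimal sketch for putting the key in place, assuming the matching private key is available locally as `key.txt` (the file name is just an example):
+
[source,shell]
----
# Place the age private key that matches ./files/age-key.pub where
# sops-nix expects it (path taken from the item above).
sudo install -Dm600 -o root -g root ./key.txt /var/lib/sops-nix/key.txt
----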
* Be sure to set the appropriate firewall rules either in the NixOS configuration or in the VPS provider’s firewall settings. Take note that some formats such as the Google Compute image disable them entirely, so it’s safer to leave the firewall service enabled and just configure the allowed ports and other settings.
* There are some things that are manually configured, such as additional setup for the database. This is mostly related to setting up the proper roles, which should be handled by the initial script at this point, but some parts are still left.
* If needed, restore the application data from the backup into the services (e.g., Gitea, Keycloak, Vaultwarden).
* Configure the remaining parts for the services (which unfortunately involves manually going into each application).
** Configure the database users with each appropriate service.
** Configure the services with users if starting from scratch.
*** For Gitea, you have to create the main admin user with the admin interface. Here’s a way to quickly create a user in the admin interface.
+
[source,shell]
----
sudo -u gitea gitea admin user create --username USERNAME --email EMAIL \
    --random-password --config /var/lib/gitea/custom/conf/app.ini --admin
----
*** For Vaultwarden, you have to go to the admin page of the Vaultwarden instance (i.e., `$VAULTWARDEN_INSTANCE/admin`), enter the admin token, and invite users from there.
*** For Keycloak, you have to create the appropriate realms and users as described in the server administration guide. Though, you can easily create them from the command-line interface with `kcadm.sh`.
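+
A minimal sketch with `kcadm.sh` (the realm, user, and server URL are placeholders; the script ships in Keycloak’s `bin/` directory):
+
[source,shell]
----
# Log in as the Keycloak admin first.
kcadm.sh config credentials --server http://localhost:8080 \
    --realm master --user admin

# Create a realm and a user inside it, then set the user's password.
kcadm.sh create realms -s realm=myrealm -s enabled=true
kcadm.sh create users -r myrealm -s username=myuser -s enabled=true
kcadm.sh set-password -r myrealm --username myuser --new-password CHANGEME
----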
*** For Portunus, this is already taken care of with a seed file. Still, test the logins as indicated in the seed file.
* FIREWAAAAAAAAAAAAAAAAAAAAALS! Please activate them with the right ports.
* Get the appropriate credentials for the following services (see the sketch after this list):
** An API key from the domain registrar (i.e., Porkbun). This is used for the certificate generation in case the ACME client goes with the DNS-01 challenge.
** An API key/credentials for the email service (i.e., SendGrid). This is used for setting up the configuration for transactional emails used by some of the services such as Gitea and Vaultwarden.
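If these credentials are kept alongside the rest of the sops-managed secrets, here’s a minimal sketch for editing them; the file name is hypothetical, so use whichever file under `./secrets` the services actually read from:

[source,shell]
----
# Opens the encrypted file in $EDITOR and re-encrypts it on save.
# secrets/plover.yaml is a hypothetical path used only for illustration.
sops secrets/plover.yaml
----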
== Networking guidelines
Networking for a first-timer can be confusing (at least for me). So here’s documentation for the practices that are followed here.
* Networks are mainly divided into client and server blocks. Keep in mind that these blocks are not subnets. They’re more like abstract guidelines for assigning subnets with some ease, and some exceptions could be made.
* The server block is made up of interfaces attached to machines that provide services. They mainly live in `172.16.0.0/13` and `10.0.0.0/9` for IPv4, and `fc00::/8` for IPv6.
* The client block is made up of interfaces attached to machines that are mainly used as clients. They mainly live in `172.24.0.0/13` and `10.128.0.0/9` for IPv4, and `fd00::/8` for IPv6. Furthermore, most of them should be freely assigned an IP address, so use of DHCP is pretty much ideal.
* WireGuard interfaces (including the server) are mainly at `172.28.0.0/14`, `10.200.0.0/13`, and `fd00:ffff::/64`. They are also included as part of the client block. The same principles apply if you are considering implementing other VPN servers instead of the current setup.
* The private network `192.168.0.0/16` (for IPv4) is basically a free-for-all. There is no equivalent free-for-all network for IPv6. We’re just dealing with the fact that the aforementioned network is widely used, so we’ll leave no assumptions here.
For more details, you can see the interfaces and their networking-related configuration in `./modules/hardware/networks.nix`.
== Deploying it as a Google Compute instance
Here are some documented guidelines for deploying this instance in Google Cloud Platform (GCP) so you won’t have to re-read their documentation like a stuck rat the next time you visit it.
* A GCP Compute Engine image of the configuration is available to be stored in your storage buckets. You can simply build it from `packages.plover-gce` and store it there. You can take it further by automating it with `../../scripts/generate-and-upload-gce-image`, which is just a modified version of the `create-gce.sh` script from nixpkgs.
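+
A minimal sketch of doing it by hand, assuming this configuration is a flake and that `my-bucket`/`plover-image` are placeholder names:
+
[source,shell]
----
# Build the GCE image tarball from the flake output mentioned above.
nix build .#plover-gce

# Upload it to a storage bucket and register it as a Compute Engine image.
gsutil cp result/*.tar.gz gs://my-bucket/
gcloud compute images create plover-image \
    --source-uri "gs://my-bucket/$(basename result/*.tar.gz)"
----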
* If you already have access to at least one GCP KMS key, then skip this part. Otherwise, add a key to be used for deployment to the relevant file in the secrets directory. [1] For this, you’ll have to create a GCP keyring on their key management system (KMS) and generate a key there.
* Enable OS Login for your Compute Engine instance.
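+
This can be done from the web console or, as a sketch, with `gcloud` (the instance name is a placeholder):
+
[source,shell]
----
# Enable OS Login on a single instance via its metadata.
gcloud compute instances add-metadata INSTANCE_NAME \
    --metadata enable-oslogin=TRUE
----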
* Enable HTTP and HTTPS traffic in the firewall settings.
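+
With `gcloud`, one way is to tag the instance so that the HTTP/HTTPS allow rules targeting those tags apply to it; a sketch, assuming the usual `default-allow-http`/`default-allow-https` rules exist in your VPC:
+
[source,shell]
----
# Tag the instance so the allow-http/allow-https firewall rules match it.
gcloud compute instances add-tags INSTANCE_NAME --tags http-server,https-server
----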
* Don’t forget to set the appropriate scopes for the instance. Use the least privileged scopes as much as possible.
* Reserve a static IP address, pls. Just don’t forget to immediately assign it to the instance since an unassigned reserved address is billed at a higher rate.
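+
A sketch with `gcloud` (the address name and region are placeholders):
+
[source,shell]
----
# Reserve a regional static address, then check that it is actually in use
# (an unassigned reserved address costs more).
gcloud compute addresses create plover-ip --region europe-west1
gcloud compute addresses describe plover-ip --region europe-west1
----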
* Creating a dedicated service account for the VM is recommended. Just make sure to set the least amount of privileges for that account.
== Deploying it to Hetzner Cloud
A deployment to Hetzner Cloud is mainly composed of three things:
* A server.
* A firewall.
* A private network.
First, we will set up the latter two before creating the server. [2]
The firewall is already set in the host configuration so there’s no need to worry about it (as long as it is configured correctly, of course :p).
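If you also want a cloud-level firewall in front of the server, here’s a minimal sketch with `hcloud`; the rule set is only an example, so match it to the ports you actually expose (the firewall name is a placeholder and the server name is the one used further below):

[source,shell]
----
# Create a firewall, allow SSH/HTTP/HTTPS in, and attach it to the server.
hcloud firewall create --name plover-firewall
hcloud firewall add-rule plover-firewall --direction in --protocol tcp --port 22 \
    --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule plover-firewall --direction in --protocol tcp --port 80 \
    --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule plover-firewall --direction in --protocol tcp --port 443 \
    --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall apply-to-resource plover-firewall --type server --server nixos-plover
----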
Next up is the networking setup, which is composed of a public IP used for accessing some services and a private network used for communication inside of the network. However, the main reason we have a private network is to set up a VPN service to hide some of the more sensitive services.
You can create one from the Hetzner Cloud web UI. If you want to create it with `hcloud`, however…

[source,shell]
----
hcloud network create --name plover-local --ip-range 172.16.0.0/12
----
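Keep in mind that the network also needs at least one subnet before servers can pick up addresses from it; a sketch for adding one (the network zone is an assumption matching the `hel1` location used below, and the subnet range is just an example within the network range):

[source,shell]
----
# Add a subnet to the network so attached servers can get addresses from it.
hcloud network add-subnet plover-local --type cloud \
    --network-zone eu-central --ip-range 172.16.0.0/16
----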
To deploy this to Hetzner Cloud, just initialize a server and run the nixos-infect script. As an example, you can create the server with the following cloud config.
[source,yaml]
----
#cloud-config
runcmd:
  - curl https://raw.githubusercontent.com/elitak/nixos-infect/bca605ce2c91bc4d79bf8afaa4e7ee4fee9563d4/nixos-infect | NIX_CHANNEL=nixos-unstable bash 2>&1 | tee /tmp/infect.log
----
You could also easily create a server with `hcloud` with the following command:

[source,shell]
----
hcloud server create --location hel1 --type cx21 --image ubuntu-22.04 \
    --network plover-local \
    --user-data-from-file ./files/hcloud/hcloud-user-data.yml \
    --ssh-key foodogsquared@foodogsquared.one \
    --name nixos-plover
----
Don’t forget to set up the prerequisites, such as the filesystems, properly. Here’s a set of commands for setting up the current filesystem configuration.

[source,shell]
----
e2label /dev/sda1 nixos
fatlabel /dev/sda15 boot
----
Next, do the steps as written in General deployment guidelines.