nixos-config/hosts/plover

This is Plover, a configuration meant to be used in a low-powered general-purpose machine. It isn't much of an instance to be seriously used yet, but hopefully it is getting there.

This configuration is expected to be deployed in a Google Compute instance.

There is a reasonable set of assumptions to keep in mind when modifying this configuration:

  • Most of the defaults are left to the image profiles from nixpkgs, including networking options and filesystems. Though, any changes to them should be handled in ./modules/hardware.

  • No additional storage drives.

  • At least 32 GB of disk space is assumed.

Some of the services self-hosted on this server:

  • An nginx server which ties all of the self-hosted services together.

  • A Vaultwarden instance for a little password management.

  • A Gitea instance for my personal projects.

  • A Keycloak instance for identity management.

  • A VPN tunnel with WireGuard.

  • A DNS server with Bind9, managed as a "hidden" authoritative server and as a local DNS server for easily accessing the services with domain names.
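
Since the DNS server also acts as a local resolver, a quick way to verify it from a connected peer is to query it directly with dig. A minimal sketch, where both the server address and the domain name are hypothetical placeholders:

# Ask the local DNS server for one of the self-hosted services.
# '10.200.0.1' and 'code.example.com' are hypothetical placeholders.
dig @10.200.0.1 code.example.com A +short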

A Terraform plan is also available to be deployed given the right credentials. To deploy it, just initialize the Terraform project as usual (assuming it is run from the project root).

terraform -chdir=./hosts/plover init

There are several ways to pass the credentials, but the common way with this setup is creating a production dotenv file (.production.envrc) located at the project root. Just provide the sensitive credentials as environment variables of the form TF_VAR_$KEY=$VALUE.
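
As a sketch, such a dotenv file could look like the following; the variable names here are hypothetical placeholders for whatever the plan actually expects:

# .production.envrc, loaded from the project root (e.g., with direnv).
# Variable names are hypothetical placeholders.
export TF_VAR_project="my-gcp-project"
export TF_VAR_ssh_key="ssh-ed25519 AAAA..."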

General deployment guidelines

If you want to deploy it anywhere else, you have to keep some things in mind.

  • This uses sops and sops-nix to decrypt secrets. It mainly uses the private key matching ./files/age-key.pub, which has to be moved to the appropriate location (i.e., /var/lib/sops-nix/key.txt); see the first sketch after this list.

  • Be sure to set the appropriate firewalls either in the NixOS configuration or in the VPS provider's firewall settings. Take note that some image formats such as the Google Compute image disable them entirely, so it's safer to leave the firewall service enabled and just configure the allowed ports and other settings.

  • There are some things that are manually configured, such as additional setup for the database. This is mostly related to setting up the proper roles; they should be handled by the initial script at this point, but there are still some left to do by hand (see the database sketch after this list).

  • If needed, restore the application data from the backup into the services (e.g., Gitea, Keycloak, Vaultwarden).

  • Configure the remaining parts of the services (which unfortunately involves manually going into each application).

  • Configure the database users for each appropriate service.

  • Create users for the services if starting from scratch.

    • For Gitea, you have to create the main admin user with the admin interface.

      Here's a way to quickly create an admin user from the command line instead:

      sudo -u gitea gitea admin user create --username USERNAME --email EMAIL \
          --random-password --config /var/lib/gitea/custom/conf/app.ini --admin

    • For Vaultwarden, you have to go to the admin page of the Vaultwarden instance (i.e., $VAULTWARDEN_INSTANCE/admin), enter the admin token, and invite users from there.

    • For Keycloak, you have to create the appropriate realms and users as described in the server administration guide. Though, you can easily create them from the command-line interface with kcadm.sh (see the Keycloak sketch after this list).

    • For Portunus, this is already taken care of with a seed file. Still, test the logins as indicated in the seed file.

  • FIREWAAAAAAAAAAAAAAAAAAAAALS! Please activate them and open only the right ports.

  • Get the appropriate credentials for the following services:

    • An API key/credentials for the email service (i.e., SendGrid). This is used for setting up transactional emails sent by some of the services such as Gitea and Vaultwarden.
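
As mentioned in the sops-nix item above, the age private key has to be put in place before the secrets can be decrypted. A minimal sketch, where age-key.txt is a hypothetical filename for the private counterpart of ./files/age-key.pub:

# Install the age private key where sops-nix expects it, with tight permissions.
# 'age-key.txt' is a hypothetical filename.
sudo install -D -m 600 -o root -g root age-key.txt /var/lib/sops-nix/key.txt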
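For the manual database setup mentioned above, the leftover work is mostly creating roles and databases for the services. A sketch of what that could look like, assuming a PostgreSQL database and hypothetical role/database names:

# Run as the database superuser; the role and database names are hypothetical.
sudo -u postgres psql <<'EOF'
CREATE ROLE gitea LOGIN;
CREATE DATABASE gitea OWNER gitea;
EOF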
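For the Keycloak item above, here's a sketch of bootstrapping a realm and a user with kcadm.sh; the server URL, realm, and username are hypothetical placeholders:

# Authenticate the admin CLI against the running instance first.
kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
# Then create a realm and a user inside it.
kcadm.sh create realms -s realm=example -s enabled=true
kcadm.sh create users -r example -s username=alice -s enabled=true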

Networking guidelines

Networking for a first-timer can be confusing (at least it was for me), so here's the documentation for the practices being followed here.

  • Networks are mainly divided into client and server blocks. Keep in mind these blocks are not subnets; they're more like abstract guidelines for assigning subnets with some ease, and some exceptions can be made.

  • The server block is made up of interfaces attached to machines that provide services. They mainly live in 172.16.0.0/13 and 10.0.0.0/9 for IPv4, and fd00::/9 for IPv6.

  • The client block is made up of interfaces attached to machines that are mainly used as clients. They mainly live in 172.24.0.0/13 and 10.128.0.0/9 for IPv4, and fd00::/9 for IPv6. Furthermore, most of them should be freely assigned an IP address, so use of DHCP is pretty much ideal.

  • WireGuard interfaces (including the server) are mainly at 172.28.0.0/14, 10.200.0.0/13, and fd00:ffff::/64. They are also included as part of the client block. The same principles apply if you are considering implementing other VPN servers instead of the current setup (see the keypair sketch after this list).

  • The private network 192.168.0.0/16 (for IPv4) is basically a free-for-all. There is no equivalent free-for-all network for IPv6. We're just dealing with the fact that the aforementioned network is widely used, so we'll make no assumptions about it here.
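
As a small aside for the WireGuard point above, generating a keypair for a new peer only takes the standard wg commands:

# Generate a private key and derive its public key for a new peer.
umask 077
wg genkey | tee peer-private.key | wg pubkey > peer-public.key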

For more details, you can see the interfaces and their networking-related configuration in ./modules/hardware/networks.nix.