Hetzner dedicated server setup
Posted on June 1, 2024 (Last modified on July 2, 2024)

I have a dedicated server now, from Hetzner. My old one was getting kinda cramped, and I realized that a "cloud based" server would be massively smaller at the same price – so I have now "invested" in a proper and totally oversized dedicated server.
The goal: To host my services on it, this time using k8s and all the shiny best-practice tools we have available now. No more fiddling around. Also, no more things that can go "meh", I spent way too much time reining my old server back in.
First of all, I basically followed this document.
lspci | grep -i raid      # make sure there is no hardware RAID controller in the way
wipefs -fa /dev/nvme*n1   # wipe leftover filesystem signatures from both NVMe drives
installimage              # start Hetzner's installer from the rescue system
Now the question of questions – which OS? Debian, Ubuntu, or Arch Linux?
Arch it is? … Hm, let’s opt for “rock solid”. Autoupdate should be a given (it is with Debian), so … yeah. Debian. (sadface, a bit, but still – Debian)
An unreadable editor opens. (lightgray on lightblue? srsly??)
Some really serious questions though.
BTRFS? But … on top of software RAID? Or "native" BTRFS-RAID? Conveniently, the Hetzner installation script uses mdadm-based RAID by default, so this is what I'm going to use.
Regarding the file system, some research led to …
Regarding layout, …
… and I don't. So RAID 1 ("mirroring"), no LVM (because yeah, sure, I could extend transparently, but … gooood get on with it already, and KEEP IT SIMPLE!!).
So, software-based RAID 1 (mdadm), ext4 filesystem, go.
## Hetzner Online GmbH - installimage - standard config
## HARD DISK DRIVE(S):
DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
## SOFTWARE RAID:
# activate software RAID? < 0 | 1 >
SWRAID 1
# Choose the level for the software RAID < 0 | 1 | 10 >
SWRAIDLEVEL 1
## HOSTNAME:
HOSTNAME fly3
## NETWORK CONFIG:
# IPV4_ONLY no
## MISC CONFIG:
USE_KERNEL_MODE_SETTING no
## PARTITIONS / FILESYSTEMS:
PART swap swap 32G
PART /boot ext4 1024M
PART / ext4 all
## your system has the following devices:
# Disk /dev/nvme0n1: 512.12 GB (=> 476.94 GiB)
# Disk /dev/nvme1n1: 512.12 GB (=> 476.94 GiB)
## Based on your disks and which RAID level you will choose you have
## the following free space to allocate (in GiB):
# RAID 0: ~952
# RAID 1: ~476
## OPERATING SYSTEM IMAGE:
IMAGE /root/.oldroot/nfs/install/../images/Debian-1205-bookworm-amd64-base.tar.gz
The nice thing: If you "just do" and don't think, the whole process is going to take you less than 10 minutes.
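Once the box reboots into the installed system, the array is easy to sanity-check – a minimal sketch, assuming installimage names the arrays /dev/md0 through /dev/md2 for this partition layout (adjust to whatever mdstat actually shows):

cat /proc/mdstat          # every array should list both nvme members and show [UU]
mdadm --detail /dev/md2   # assuming md2 ended up as the / array
lsblk -f                  # ext4 on the md devices for /boot and /, swap on its own array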
Well, for installing k3s (the k8s distribution of choice) there were two valid options: the k3s ansible roles, or k3sup.
Decisions, decisions, again. I used ansible for v2 of my server, so that sounded like a no-brainer option. But I had a look at the ansible roles, and I thought "do I really need this?" … and the answer is – no. The only thing I want is managed upgrades, and that is possible either with ansible, or – cloud native 😁 – using a K8s operator, namely Rancher's system-upgrade-controller.

So I am about to try k3sup, which should give me the simplest K8s cluster there is.
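For completeness: k3sup is a single binary that runs on the workstation and drives the installation over SSH. Getting it is basically this, going from the k3sup README (on a Mac, brew install k3sup does the same):

# On the workstation, not on the server
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/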
Let’s go:
k3sup install \
  --cluster \
  --host $HOST_IP \
  --merge \
  --user root \
  --ssh-key $HOME/.ssh/id_ed25519
It’s a bit annoying that you have to actually specify your SSH private key if you don’t use RSA, but hey – otherwise it worked like a charm.
Running: k3sup install
2024/06/01 13:46:24 fly3.srv.flypenguin.de
Public IP: <host-ip>
[INFO] Finding release for channel stable
[INFO] Using v1.29.5+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.5+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.5+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Result: [INFO] Finding release for channel stable
[INFO] Using v1.29.5+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.29.5+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.29.5+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
[INFO] systemd: Starting k3s
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
Merging config into file: /Users/tm/Dev/private/tf-spaghetti/kubeconfig
Saving file to: /Users/tm/Dev/private/tf-spaghetti/kubeconfig
# Test your cluster with:
export KUBECONFIG=/Users/tm/Dev/private/tf-spaghetti/kubeconfig
kubectl config use-context default
kubectl get node -o wide
🚀 Speed up GitHub Actions/GitLab CI + reduce costs: https://actuated.dev
It also seems the cluster is named default, which is kind of the worst name for a cluster, so after doing this I did this:
vim $HOME/.kube/config
and changed basically everything (context, cluster & user name) to $MYCLUSTER.
Or, in case you like automation, simply use yq (using vim is actually faster in that case, though):
KC=$HOME/.kube/config
yq eval -i '(.clusters[] | select(.name=="default")).cluster.server="https://localhost:56443"' $KC
yq eval -i '(.clusters[] | select(.name=="default")).name="MYCLUSTER"' $KC
yq eval -i '(.contexts[] | select(.name=="default")).context.cluster="MYCLUSTER"' $KC
yq eval -i '(.contexts[] | select(.name=="default")).context.user="MYCLUSTER"' $KC
yq eval -i '(.contexts[] | select(.name=="default")).name="MYCLUSTER"' $KC
yq eval -i '(.users[] | select(.name=="default")).name="MYCLUSTER"' $KC
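A quick sanity check that the renamed context still points at a working cluster:

export KUBECONFIG=$HOME/.kube/config
kubectl config get-contexts                    # should now show MYCLUSTER instead of default
kubectl --context MYCLUSTER get nodes -o wide  # the single node should be Ready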
So, to finally protect the server, let's install and activate ufw (the "uncomplicated firewall"), following this blog post, slightly modified based on this article, some general ufw docs from DigitalOcean, and an overview of flannel networking interfaces from here:
# sudo apt-get install ufw, naturally ;)
ufw default deny
ufw default allow routed
ufw allow ssh
ufw allow in on cni0 from 10.42.0.0/16 comment "K3s kube-system pod traffic"
ufw allow in on flannel.1 from 10.42.0.0/16 comment "K3s"
ufw allow "www" comment "K8s Ingress"
ufw allow "www Secure" comment "K8s Ingress"
I did not enable ufw right away and copy-pasted all of this into a tmux session, so if my SSH connection broke I could restart the server and would not have been locked out. (I don't have a root password, so …)
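Once I was reasonably sure the rules made sense, enabling and double-checking (inside that same tmux session) is just:

ufw enable            # warns that it may disrupt existing SSH connections – hence the tmux safety net
ufw status verbose    # confirm ssh/22 is allowed before closing the session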
Since this is already taking long enough, let's quickly install the upgrade controller and schedule a more in-depth review for later. :)
RANCH=https://github.com/rancher/system-upgrade-controller/releases/latest
kubectl apply -f $RANCH/download/crd.yaml
kubectl apply -f $RANCH/download/system-upgrade-controller.yaml
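The controller alone doesn't upgrade anything yet – it wants a Plan resource telling it what to track. A minimal sketch of a server plan along the lines of the k3s "automated upgrades" docs (the plan name is made up; namespace and service account are the ones the controller manifest creates):

kubectl apply -f - <<'EOF'
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan        # hypothetical name
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable   # follow the stable k3s channel
EOF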
In my professional life I recently came in contact with ArgoCD, and I like it. So I had a look around and found FluxCD – which also seems cool. So I'm going to try this for myself for now.

So, to deploy Flux, we have to …

- install the flux CLI,
- create a GitLab access token, and
- flux bootstrap.
After I did the first two things, here I went:
# Create a gitlab PERSONAL access token with all permissions first.
# That is done IN YOUR PROFILE, not in the repository.
# (it should be possible without, yet it was kinda too late)
flux bootstrap gitlab \
  --owner=my-group \
  --repository=my-state-repo \
  --components source-controller,kustomize-controller,helm-controller,notification-controller
The whole thing is … weird. On the one hand, super cool, everything done. On the other hand, super complex, meeeeh. If you look in the repository online you will notice a new folder flux-system, which we will use right now to add a "Kustomization" I want:
- flux-system/kustomization.yaml: add this content,
- git push,
- flux reconcile ks flux-system --with-source.

And while doing this I learned that FluxCD has no Web UI (well, now it has a 3rd-party UI, but …). So you can't really see at a glance whether this worked or not. Or click "update" / "refresh" to get an idea. So … the whole "managing FluxCD" part is … annoying already. Let's see if this becomes less annoying over time.
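In the meantime, the CLI has to do. A minimal sketch for checking whether a reconcile actually went through (standard flux subcommands, nothing fancy):

flux check                                 # controllers healthy, versions compatible
flux get kustomizations -A                 # flux-system should report Ready=True with the latest revision
flux logs --all-namespaces --level=error   # any reconcile errors from the controllers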
So, since the whole FluxCD thingy was … not so cool, I might actually co-install ArgoCD and see where it leads me.
But for now, that’s it.