Affordable Lab Kubernetes Cluster with Hetzner Cloud

Benjamin Martensson · SRE, passionate about Go, distributed systems, and open source.

So I was investigating how to set up a Kubernetes cluster that is both highly available and cheap. After some research I discovered that it is actually possible using the cheapest VMs you can find at Hetzner Cloud plus their load balancer offering.

  • CX11 x 3 = 8.88 EUR
  • LB11 = 5.83 EUR

This is more expensive than a single regular VM, but still reasonably priced all things considered. I am not getting this because I need to, but because I work with k8s daily and I like to have a private cluster to play around with (beyond the typical RPi cluster). My first experiment used a floating IP instead of a dedicated LB, but that required (what felt like) hacky solutions using keepalived or hcloud-fip-controller with MetalLB, among other options. I tried several of them, but the failover was never smooth enough, and traffic is not load balanced when the external IP is attached to a single host. There are also some complications in setting up the ingress to read the client IPs correctly. So I decided to pay the extra few bucks for a true HA solution and avoid the hassle.

Since the VMs are pretty low on resources, I decided to install k3s, which is a breeze to install even in HA mode, with etcd running as a cluster across all three nodes.

On the first node:

curl -sfL https://get.k3s.io | sh -s - \
--disable traefik \
--cluster-init \
--token 123456 \
--node-ip 10.0.0.7 \
--tls-san 10.0.0.7 \
--advertise-address 167.x.x.x \
--node-external-ip 116.x.x.x

and on the following nodes:

curl -sfL https://get.k3s.io | sh -s - \
--disable traefik \
--server https://10.0.0.7:6443 \
--token 123456 \
--node-ip 10.0.0.8 \
--tls-san 10.0.0.8 \
--advertise-address 167.x.x.x \
--node-external-ip 116.x.x.x

That is all you need to get the cluster up and running. Just remember to configure the LB service (port 6443) so the nodes can communicate with the load-balanced API endpoint.
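
If you manage the load balancer with the hcloud CLI, this can be done roughly as below. This is a sketch; lb-k8s and the k3s-1..3 server names are placeholders for your own load balancer and nodes.

# Add the Kubernetes API port as a TCP service and attach the nodes as targets
hcloud load-balancer add-service lb-k8s --protocol tcp --listen-port 6443 --destination-port 6443
hcloud load-balancer add-target lb-k8s --server k3s-1
hcloud load-balancer add-target lb-k8s --server k3s-2
hcloud load-balancer add-target lb-k8s --server k3s-3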

I also set up the servers with unattended-upgrades, using a different reboot time on each node. There should now be little reason to ever SSH into the nodes, except to upgrade k3s.
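
On Debian/Ubuntu the relevant settings live in /etc/apt/apt.conf.d/50unattended-upgrades; something along these lines, with the reboot time staggered per node:

# Excerpt from /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Automatic-Reboot "true";
# Use a different time on each node (e.g. 02:00, 03:00, 04:00) so they never reboot together
Unattended-Upgrade::Automatic-Reboot-Time "02:00";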

But before we can deploy our first web apps we need a few extra components. For L7 HTTP ingress traffic I will use the official NGINX Ingress Controller. It's what I use at work and I'm already familiar with its configuration, which is also the reason I disable Traefik in k3s.

ingress-nginx

kubectl create namespace ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install -n ingress-nginx ingress-nginx ingress-nginx/ingress-nginx

Remember to enable the proxy protocol both in your LB and in nginx so you can log the real client IPs correctly:

kubectl -n ingress-nginx edit configmap ingress-nginx-controller

and add the following keys under the data section:

compute-full-forwarded-for: "true"
use-forwarded-headers: "true"
use-proxy-protocol: "true"
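
On the Hetzner side, proxy protocol is a per-service setting on the load balancer. Scripted with the hcloud CLI it would look roughly like this; note that the --proxy-protocol flag and the destination ports (e.g. the NodePorts exposed by the ingress-nginx service) are assumptions to verify against your own setup:

# Forward HTTP/HTTPS with proxy protocol enabled; destination ports are placeholders
hcloud load-balancer add-service lb-k8s --protocol tcp --listen-port 80 --destination-port 30080 --proxy-protocol
hcloud load-balancer add-service lb-k8s --protocol tcp --listen-port 443 --destination-port 30443 --proxy-protocol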

Since we want encrypted HTTPS traffic we will also install cert-manager to deal with renewing our Let's Encrypt certificates. It also supports dns01 challenges via Cloudflare, my personal DNS provider.

cert-manager

kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --set installCRDs=true
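
With cert-manager running, a ClusterIssuer wires it up to Let's Encrypt and Cloudflare. A minimal sketch, assuming a Secret named cloudflare-api-token (with key api-token) already exists in the cert-manager namespace and the e-mail address is your own:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token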

And a cluster is never really HA unless you solve the persistent storage problem. Rancher has developed Longhorn, a great distributed block storage system that solves this problem flawlessly.

longhorn

# Remember to install open-iscsi on your distro first. 
kubectl create namespace longhorn-system
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn --namespace longhorn-system
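
Longhorn registers a longhorn StorageClass, so claiming replicated storage is just a normal PersistentVolumeClaim. A small sketch (the name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi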

And those are all the basics you need to deploy a stateful web app in an HA environment.
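
For example, exposing a hypothetical webapp Service over HTTPS with the pieces above only takes an Ingress along these lines (the host, service name, and issuer name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: webapp-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80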

I would of course recommend adding some monitoring with tools like Prometheus and Grafana.
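
One easy route, if you go down that path, is the kube-prometheus-stack Helm chart, which bundles both:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace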