I have recently worked quite a bit with IPv6 at home and helped my ISP deploy IPv6 in my residential area. We have a network hub in the neighborhood that I help administer, and I had the opportunity to distribute a couple of /48 prefixes from our central Mikrotik router to the houses in the area.
Since IPv6 has no NAT by default (there are enough addresses for every ant on Earth), I thought it would be nice to also have real, routable IPv6 addresses for each pod running in Kubernetes.
I recently moved my private cloud server to a Netcup VPS, and they provide a /64 IPv6 prefix for free with each server. So I decided to try to run Kubernetes with IPv6 while migrating over to the new server.
My first try was IPv6-only, but I quickly found out that several applications still do not support IPv6-only environments, especially when they have to communicate with the outside world. So I had to go with dual-stack (IPv4 + IPv6) for now.
I am running k3s, so it was no more than running the following installation:
export INSTALL_K3S_CHANNEL=latest
curl -sfL https://get.k3s.io | sh -s - \
--disable traefik \
--disable servicelb \
--cluster-cidr=2a0a:4cc0:0:0:0:0:1000:1/112,10.42.0.0/16 \
--service-cidr=2a0a:4cc0:0:0:0:0:2000:1/112,10.43.0.0/16 \
--kube-controller-manager-arg=node-cidr-mask-size-ipv6=112
I chose a /112 prefix (from my /64 range) for the cluster and service CIDRs, which gives plenty of addresses for the pods and services. The k3s installation configures the rest automatically.
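To sanity-check that dual-stack was picked up, you can look at the CIDRs and addresses Kubernetes assigned (both jsonpath fields are standard Kubernetes API fields; the pod name is a placeholder):

# one IPv6 and one IPv4 pod CIDR should be listed per node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'

# a dual-stack pod reports an address from both families
kubectl get pod <pod-name> -o jsonpath='{.status.podIPs}'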
But I had one problem: while Netcup allocates the whole /64 to the server, it does not route it there by default, so out of the box only the server's single configured IPv6 address is reachable. I needed the whole /64 routed to the server so the pod addresses would work.
After a bit of digging I found out that you can use ndppd (Neighbor Discovery Protocol Proxy Daemon) to make the whole /64 reachable at the server. It listens for neighbor solicitations on the network and responds with the server’s MAC address for any address in the /64 range, so the upstream gateway delivers all traffic for the prefix to the server.
I installed ndppd and configured it like this:
proxy eth0 {
    rule 2a0a:4cc0:101:932::/64 {
        static
    }
}
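On Debian/Ubuntu, ndppd is packaged and reads its configuration from /etc/ndppd.conf by default (paths and the unit name may differ on other distributions):

apt install ndppd
# put the proxy/rule block above into /etc/ndppd.conf
systemctl enable --now ndppd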
I was now able to ping any pod running inside Kubernetes from my home network using its IPv6 address.
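For example (the pod address below is a hypothetical one from the cluster CIDR above):

# find a pod's IPv6 address on the server
kubectl get pods -o wide
# then, from my laptop at home
ping -6 2a0a:4cc0::1000:42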
It is very important to configure your firewall correctly when using IPv6, since every address is directly reachable from the internet. One of the side-effects of using IPv4 NAT is that it provides a basic level of security by hiding the internal addresses. With IPv6 you have to explicitly block unwanted traffic.
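As an illustration, a minimal nftables ruleset along these lines drops unsolicited inbound traffic while keeping ICMPv6 working. The interface, ports, and prefixes are assumptions based on my setup, not my exact ruleset, and keep in mind that k3s programs its own netfilter rules as well, so test carefully:

# /etc/nftables.conf -- a minimal sketch, adapt before use
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif lo accept
        ct state established,related accept
        meta l4proto ipv6-icmp accept  # NDP and ping; IPv6 breaks without it
        tcp dport { 22, 80, 443 } accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        # let pods and services initiate outbound connections
        ip6 saddr 2a0a:4cc0:101:932::/64 accept
        # inbound to the routed /64: only HTTP(S)
        ip6 daddr 2a0a:4cc0:101:932::/64 tcp dport { 80, 443 } accept
    }
}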
Finally, I added HAProxy as a proxy server in front of my Kubernetes cluster, configured to listen on both IPv4 and IPv6; all traffic is forwarded internally to the services running inside Kubernetes using their IPv6 addresses. I also enabled the PROXY protocol for the services that support it (like ingress-nginx) to keep the source IP information.
Example HAProxy frontend configuration:
frontend https_frontend
    bind :443
    bind :::443
    timeout client 1m
    default_backend https_backend

backend https_backend
    option log-health-checks
    timeout connect 10s
    timeout server 1m
    server k8s_https [2a0a:4cc0:101:932::2000:7c03]:443 check send-proxy-v2
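For ingress-nginx, the PROXY protocol must also be enabled on the receiving side through the controller ConfigMap (the exact ConfigMap name and namespace depend on how you installed it):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"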
That's it! It was a great experience to work with IPv6 in Kubernetes, and I will continue to play around more with IPv6-only environments in the future.