---
title: Why I use Traefik Ingress Controller
description: Among other advantages, I no longer need to replicate SSL certificate secrets for nginx-ingress-controller to consume, once-per-namespace!
---
# Traefik Ingress Controller
Unlike grumpy ol' man Nginx 👴, Traefik, a microservice-friendly reverse proxy, is relatively fresh in the "cloud-native" space, having been "born" 🍼 in the same year that Kubernetes was launched.
Traefik natively includes some features which Nginx lacks:
- Ability to use cross-namespace TLS certificates (this may be accidental, but it totally works currently)
- An elegant "middleware" implementation allowing certain requests to pass through additional layers of authentication (see the sketch after this list)
- A beautiful dashboard
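To illustrate the middleware idea, here's a minimal, hypothetical sketch of a Traefik 2.x `Middleware` resource enforcing HTTP basic auth. The names `my-basic-auth` and `my-auth-secret` are placeholders, and the secret is assumed to already exist:

```yaml
# Hypothetical example: a Traefik 2.x Middleware which challenges for basic auth
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-basic-auth
  namespace: traefik
spec:
  basicAuth:
    secret: my-auth-secret # assumed to contain an htpasswd-formatted "users" key
```

A standard Ingress can then opt into this extra authentication layer with the annotation `traefik.ingress.kubernetes.io/router.middlewares: traefik-my-basic-auth@kubernetescrd` (the value format being `<middleware namespace>-<middleware name>@kubernetescrd`).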
!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)

    Optional:

    * [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
    * [x] [External DNS](/kubernetes/external-dns/) configured to respond to ingresses, or with a wildcard DNS entry
## Preparation

### Namespace
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the flux design, I create this example yaml in my flux repo:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
```
### HelmRepository
Next, we need to define a HelmRepository (a repository of helm charts), to which we'll refer when we create the HelmRelease. We only need to do this once per repository. In this case, we're using the official Traefik helm chart, so per the flux design, I create this example yaml in my flux repo:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: traefik
  namespace: flux-system
spec:
  interval: 15m
  url: https://helm.traefik.io/traefik
```
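Once this is committed, you can confirm that flux has fetched the chart repository index. A quick sketch, assuming you have the flux CLI installed locally:

```bash
# The "traefik" source should report Ready, with a fetched index revision
flux get sources helm -n flux-system
```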
### Kustomization
Now that the "global" elements of this deployment (Namespace and HelmRepository) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at /traefik. I create this example Kustomization in my flux repo:
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: traefik
  namespace: flux-system
spec:
  interval: 15m
  path: ./traefik
  prune: true # remove any elements later removed from the above path
  timeout: 2m # if not set, this defaults to interval duration, which is 1h
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: traefik
      namespace: traefik
```
### ConfigMap
Now we're into the traefik-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's `values.yaml`. Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (since they're "encapsulated" within the ConfigMap YAML). I create this example yaml in my flux repo:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: traefik-helm-chart-value-overrides
  namespace: traefik
data:
  values.yaml: |- # (1)!
    # <upstream values go here>
```
1. Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.
--8<-- "kubernetes-why-full-values-in-configmap.md"
Then work your way through the values you pasted, and change any which are specific to your configuration. It may not be necessary to change anything.
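If you need a clean copy of the upstream defaults to paste in, one way to obtain them (assuming you have the helm CLI installed locally) is to dump them straight from the chart:

```bash
# Add the traefik chart repo locally, then print the chart's default values
helm repo add traefik https://helm.traefik.io/traefik
helm show values traefik/traefik
```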
### HelmRelease
Lastly, having set the scene above, we define the HelmRelease which will actually deploy traefik into the cluster, with the config and extra ConfigMap we defined above. I save this in my flux repo as `traefik/helmrelease-traefik.yaml`:
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: traefik
  namespace: traefik
spec:
  chart:
    spec:
      chart: traefik
      version: 10.x # (1)!
      sourceRef:
        kind: HelmRepository
        name: traefik
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: traefik
  valuesFrom:
    - kind: ConfigMap
      name: traefik-helm-chart-value-overrides
      valuesKey: values.yaml # This is the default, but best to be explicit for clarity
```
1. Use `9.x` for Kubernetes versions older than 1.22, as described here.
--8<-- "kubernetes-why-not-config-in-helmrelease.md"
## Deploy traefik
Having committed the above to your flux repository, you should shortly see a traefik kustomization, and in the traefik namespace, traefik pods happily running:
```bash
demo@shredder:~$ kubectl get pods -n traefik
NAME                       READY   STATUS    RESTARTS   AGE
traefik-5b849b4fbd-svbxk   1/1     Running   0          24h
traefik-5b849b4fbd-xt7vc   1/1     Running   0          24h
demo@shredder:~$
```
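If the pods don't show up, you can watch progress from the flux side instead. A quick sketch, assuming the flux CLI is installed:

```bash
flux get kustomizations          # the "traefik" kustomization should become Ready
flux get helmreleases -n traefik # the release should report a successful install
```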
## How do I know it's working?

### Test Service
By default, the chart will deploy Traefik in LoadBalancer mode. When you use kubectl to display the service (`kubectl get services -n traefik`), you'll see the external IP displayed:
```bash
demo@shredder:~$ kubectl get services -n traefik
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
traefik   LoadBalancer   10.152.183.162   172.168.209.1   80:30756/TCP,443:30462/TCP   24h
demo@shredder:~$
```
!!! question "Where does the external IP come from?"

    If you're using k3s's load balancer, the external IP will likely be the IP of the node(s) running k3s. If you're using MetalLB, the external IP should come from the list of addresses in the pool you allocated.
Pointing your web browser to the external IP displayed should result in a 404 page. Congratulations, you have external access to the Traefik ingress controller! 🥳
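You can run the same check from a shell. Using the example external IP above (substitute your own), this should print a 404 status code:

```bash
curl -s -o /dev/null -w "%{http_code}\n" http://172.168.209.1/
# 404
```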
### Test Ingress
Still, you didn't deploy an ingress controller to look at 404 pages! If you used my template repository to start off your flux deployment strategy, then the podinfo helm chart has already been deployed. By default, the podinfo configmap doesn't deploy an Ingress, but you can change this using the magic of GitOps... 🪄
Edit your podinfo helmrelease configmap (`/podinfo/configmap-podinfo-helm-chart-value-overrides.yaml`), change `ingress.enabled` to `true`, and set the host name to match your local domain name (already configured using External DNS):
```yaml
ingress:
  enabled: false
  className: ""
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: podinfo.local
```
To:
```yaml
ingress:
  enabled: true
  className: ""
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: podinfo.<your domain name>
```
Commit your changes, wait for a reconciliation, and run `kubectl get ingress -n podinfo`. You should see an ingress created matching the host defined above, and the ADDRESS value should match the service address of the traefik service.
```bash
root@cn1:~# kubectl get ingress -A
NAMESPACE   NAME      CLASS    HOSTS                 ADDRESS         PORTS     AGE
podinfo     podinfo   <none>   podinfo.example.com   172.168.209.1   80, 443   91d
```
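Impatient? Rather than waiting for the next scheduled sync, you can trigger the reconciliation manually. A sketch assuming the flux CLI is installed and your podinfo YAMLs are deployed by a Kustomization named `podinfo`:

```bash
flux reconcile kustomization podinfo --with-source
```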
!!! question "Why is there no class value?"

    You don't have to define an ingress class if you only have one class of ingress, since typically your ingress controller will assume the default class. When you run multiple ingress controllers (say, nginx and traefik, or multiple nginx instances with different access controls), classes become more important.
Now assuming your DNS is correct, you should be able to point your browser to the hostname you chose, and see the beautiful podinfo page! 🥳🥳
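If DNS hasn't caught up yet, you can still confirm that the ingress routing works by aiming curl at the external IP with a spoofed Host header (substitute your own hostname for `podinfo.example.com`):

```bash
curl -s -H "Host: podinfo.example.com" http://172.168.209.1/ | head
```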
### Test SSL
Ha, but we're not done yet! We have exposed a service via our load balancer, we've exposed a route to a service via an Ingress, but let's get rid of that nasty "insecure" message in the browser when using HTTPS...
Since you set up SSL certificates, including secret-replicator, you should end up with a `letsencrypt-wildcard-cert` secret in every namespace, including `podinfo`.
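You can confirm the secret has been replicated into the namespace before wiring it up:

```bash
# Should exist, with type kubernetes.io/tls
kubectl get secret letsencrypt-wildcard-cert -n podinfo
```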
So once again, alter the podinfo ConfigMap to change this:
```yaml
tls: []
#  - secretName: chart-example-tls
#    hosts:
#      - chart-example.local
```
To this:
```yaml
tls:
  - secretName: letsencrypt-wildcard-cert
    hosts:
      - podinfo.<your domain name>
```
Commit your changes, wait for the reconciliation, and the next time you point your browser at your ingress, you should get a beautiful, valid, officially-signed SSL certificate[^1]! 🥳🥳🥳
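Prefer the shell to the browser? You can inspect the served certificate with openssl (again substituting your own hostname):

```bash
# The issuer should be Let's Encrypt, and the dates should be current
openssl s_client -connect podinfo.example.com:443 \
  -servername podinfo.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```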
## Troubleshooting
Are things not working as expected? Watch traefik's logs with `kubectl logs -n traefik -l app.kubernetes.io/name=traefik -f`.
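If the logs don't make the problem obvious, a few other places to look, sketched here assuming the flux CLI is installed:

```bash
flux get kustomizations                          # is the traefik kustomization Ready?
flux get helmreleases -n traefik                 # did the helm install succeed?
kubectl describe helmrelease traefik -n traefik  # events and failure messages
kubectl get events -n traefik --sort-by=.lastTimestamp
```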
{% include 'recipe-footer.md' %}
[^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!
