Experiment with PDF generation
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
docs/kubernetes/ingress/index.md
---
description: What is a Kubernetes Ingress?
---

# Ingresses

In Kubernetes, an Ingress is a way to describe how to route traffic coming **into** the cluster, so that (*for example*) <https://radarr.example.com> will end up on a [Radarr][radarr] pod, but <https://sonarr.example.com> will end up on a [Sonarr][sonarr] pod.



There are many popular Ingress Controllers; we're going to cover two equally useful options:

1. [Traefik](/kubernetes/ingress/traefik/)
2. [Nginx](/kubernetes/ingress/nginx/)

Choose at least one of the above (*there may be valid reasons to use both!* [^1]), so that you can expose applications via Ingress.
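To make this concrete, here's a minimal sketch of an Ingress which routes `radarr.example.com` to a Radarr service; the namespace, service name and ingress class are assumptions to adapt to your own cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: radarr
  namespace: radarr # assumes Radarr runs in its own namespace
spec:
  ingressClassName: nginx # or "traefik", per your chosen controller
  rules:
    - host: radarr.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: radarr # assumes a Service named "radarr" exists
                port:
                  number: 7878 # Radarr's default port
```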
--8<-- "recipe-footer.md"

[^1]: One cluster I manage uses Traefik for public services, but Nginx for internal management services such as Prometheus, etc. The idea is that you'd need one type of Ingress to help debug problems with the _other_ type!
docs/kubernetes/ingress/nginx.md
---
title: Install nginx ingress controller into Kubernetes with Flux
---

# Nginx Ingress Controller for Kubernetes - the "flux way"

The [Nginx Ingress Controller](https://kubernetes.github.io/ingress-nginx/) is the grandpappy of Ingress Controllers, with releases dating back to at least 2016. Nginx itself is a battle-tested rock, [released in 2004](https://en.wikipedia.org/wiki/Nginx) and constantly updated / improved ever since.

Having such a pedigree, though, can make Nginx a little awkward for the unfamiliar to configure, whereas something like [Traefik](/kubernetes/ingress/traefik/), being newer-on-the-scene, is more user-friendly, and offers (*among other features*) a free **dashboard**. (*Nginx's dashboard is only available in the commercial Nginx+ package, which is a [monumental PITA](https://www.nginx.com/blog/deploying-nginx-nginx-plus-docker/) to run*)

Nginx Ingress Controller does make for a nice, simple "default" Ingress controller, if you don't want to do anything fancy.
!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)

    Optional:

    * [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
    * [x] [External DNS](/kubernetes/external-dns/) configured to respond to ingresses, or with a wildcard DNS entry
## Preparation

### Namespace

We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `bootstrap/namespaces/namespace-nginx-ingress-controller.yaml`:

??? example "Example Namespace (click to expand)"
    ```yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: nginx-ingress-controller
    ```
### HelmRepository

Next, we need to define a HelmRepository (*a repository of helm charts*), to which we'll refer when we create the HelmRelease. We only need to do this once per-repository. In this case, we're using the (*prolific*) [bitnami chart repository](https://github.com/bitnami/charts/tree/master/bitnami), so per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `bootstrap/helmrepositories/helmrepository-bitnami.yaml`:

??? example "Example HelmRepository (click to expand)"
    ```yaml
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: HelmRepository
    metadata:
      name: bitnami
      namespace: flux-system
    spec:
      interval: 15m
      url: https://charts.bitnami.com/bitnami
    ```
### Kustomization

Now that the "global" elements of this deployment (*Namespace and HelmRepository*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/nginx-ingress-controller`. I create this example Kustomization in my flux repo at `bootstrap/kustomizations/kustomization-nginx-ingress-controller.yaml`:

??? example "Example Kustomization (click to expand)"
    ```yaml
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
    kind: Kustomization
    metadata:
      name: nginx-ingress-controller
      namespace: flux-system
    spec:
      interval: 15m
      path: ./nginx-ingress-controller
      prune: true # remove any elements later removed from the above path
      timeout: 2m # if not set, this defaults to interval duration, which is 1h
      sourceRef:
        kind: GitRepository
        name: flux-system
      validation: server
      healthChecks:
        - apiVersion: apps/v1
          kind: Deployment
          name: nginx-ingress-controller
          namespace: nginx-ingress-controller
    ```
### ConfigMap

Now we're into the nginx-ingress-controller-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/bitnami/charts/blob/master/bitnami/nginx-ingress-controller/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo at `nginx-ingress-controller/configmap-nginx-ingress-controller-helm-chart-value-overrides.yaml`:

??? example "Example ConfigMap (click to expand)"
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: nginx-ingress-controller-helm-chart-value-overrides
      namespace: nginx-ingress-controller
    data:
      values.yaml: |-
        # paste chart values.yaml (indented) here and alter as required
    ```

--8<-- "kubernetes-why-full-values-in-configmap.md"

Then work your way through the values you pasted, and change any which are specific to your configuration. It may not be necessary to change anything.
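For instance, a couple of common tweaks might look like this within the ConfigMap; the specific keys (`service.type`, `replicaCount`) are illustrative, so confirm them against the chart's own values.yaml before relying on them:

```yaml
data:
  values.yaml: |-
    # ... rest of the pasted upstream values ...
    replicaCount: 2      # run two controller pods for redundancy (illustrative)
    service:
      type: LoadBalancer # expose the controller via your load-balancer solution
```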
### HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy nginx-ingress-controller into the cluster, with the config and extra ConfigMap we defined above. I save this in my flux repo as `nginx-ingress-controller/helmrelease-nginx-ingress-controller.yaml`:

??? example "Example HelmRelease (click to expand)"
    ```yaml
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: nginx-ingress-controller
      namespace: nginx-ingress-controller
    spec:
      chart:
        spec:
          chart: nginx-ingress-controller
          version: 9.x
          sourceRef:
            kind: HelmRepository
            name: bitnami
            namespace: flux-system
      interval: 15m
      timeout: 5m
      releaseName: nginx-ingress-controller
      valuesFrom:
        - kind: ConfigMap
          name: nginx-ingress-controller-helm-chart-value-overrides
          valuesKey: values.yaml # This is the default, but best to be explicit for clarity
    ```

--8<-- "kubernetes-why-not-config-in-helmrelease.md"
## Deploy nginx-ingress-controller

Having committed the above to your flux repository, you should shortly see a nginx-ingress-controller kustomization, and in the `nginx-ingress-controller` namespace, the controller pods and a default-backend pod:

```bash
demo@shredder:~$ kubectl get pods -n nginx-ingress-controller
NAME                                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5b849b4fbd-svbxk                   1/1     Running   0          24h
nginx-ingress-controller-5b849b4fbd-xt7vc                   1/1     Running   0          24h
nginx-ingress-controller-default-backend-867d86fb8f-t27j9   1/1     Running   0          24h
demo@shredder:~$
```
### How do I know it's working?

#### Test Service

By default, the chart will deploy nginx ingress controller's service in [LoadBalancer](/kubernetes/loadbalancer/) mode. When you use kubectl to display the service (`kubectl get services -n nginx-ingress-controller`), you'll see the external IP displayed:

```bash
demo@shredder:~$ kubectl get services -n nginx-ingress-controller
NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller                   LoadBalancer   10.152.183.162   172.168.209.1   80:30756/TCP,443:30462/TCP   24h
nginx-ingress-controller-default-backend   ClusterIP      10.152.183.200   <none>          80/TCP                       24h
demo@shredder:~$
```

!!! question "Where does the external IP come from?"
    If you're using [k3s's load balancer](/kubernetes/loadbalancer/k3s/), the external IP will likely be the IP of the nodes running k3s. If you're using [MetalLB](/kubernetes/loadbalancer/metallb/), the external IP should come from the list of addresses in the pool you allocated.

Pointing your web browser to the external IP displayed should result in the default backend page (*or an nginx-branded 404*). Congratulations, you have external access to the ingress controller! 🥳
#### Test Ingress

Still, you didn't deploy an ingress controller to look at 404 pages! If you used my [template repository](https://github.com/geek-cookbook/template-flux) to start off your [flux deployment strategy](/kubernetes/deployment/flux/), then the podinfo helm chart has already been deployed. By default, the podinfo configmap doesn't deploy an Ingress, but you can change this using the magic of GitOps... 🪄

Edit your podinfo helmrelease configmap (`/podinfo/configmap-podinfo-helm-chart-value-overrides.yaml`), change `ingress.enabled` to `true`, and set the host name to match your local domain name (*already configured using [External DNS](/kubernetes/external-dns/)*):

``` yaml hl_lines="2 8"
ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: podinfo.local
```

To:

``` yaml hl_lines="2 8"
ingress:
  enabled: true
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: podinfo.<your domain name>
```
Commit your changes, wait for a reconciliation, and run `kubectl get ingress -n podinfo`. You should see an ingress created matching the host defined above, and the ADDRESS value should match the service address of the nginx-ingress-controller service.

```bash
root@cn1:~# kubectl get ingress -A
NAMESPACE   NAME      CLASS    HOSTS                 ADDRESS         PORTS     AGE
podinfo     podinfo   <none>   podinfo.example.com   172.168.209.1   80, 443   91d
```

!!! question "Why is there no class value?"
    You don't **have** to define an ingress class if you only have one **class** of ingress, since typically your ingress controller will assume the default class. When you run multiple ingress controllers (say, nginx **and** [Traefik](/kubernetes/ingress/traefik/), or multiple nginx instances with different access controls), classes become more important.
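If you do need to pin podinfo to a particular controller, a hedged sketch of the values change might look like this (the `nginx` class name assumes your controller registered itself under that class):

```yaml
ingress:
  enabled: true
  className: "nginx" # explicitly target the nginx ingress controller
  hosts:
    - host: podinfo.<your domain name>
```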
Now, assuming your [DNS is correct](/kubernetes/external-dns/), you should be able to point your browser to the hostname you chose, and see the beautiful podinfo page! 🥳🥳

#### Test SSL

Ha, but we're not done yet! We have exposed a service via our load balancer, and we've exposed a route to a service via an Ingress, but let's get rid of that nasty "insecure" message in the browser when using HTTPS...

Since you set up [SSL certificates](/kubernetes/ssl-certificates/), including [secret-replicator](/kubernetes/ssl-certificates/secret-replicator/), you should end up with a `letsencrypt-wildcard-cert` secret in every namespace, including `podinfo`.

So once again, alter the podinfo ConfigMap to change this:

```yaml hl_lines="2 4"
tls: []
#  - secretName: chart-example-tls
#    hosts:
#      - chart-example.local
```

To this:

```yaml hl_lines="2 4"
tls:
  - secretName: letsencrypt-wildcard-cert
    hosts:
      - podinfo.<your domain name>
```

Commit your changes, wait for the reconciliation, and the next time you point your browser at your ingress, you should get a beautiful, valid, officially-signed SSL certificate[^1]! 🥳🥳🥳

### Troubleshooting

Are things not working as expected? Watch the nginx-ingress-controller's logs with `kubectl logs -n nginx-ingress-controller -l app.kubernetes.io/name=nginx-ingress-controller -f`.

--8<-- "recipe-footer.md"

[^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!
docs/kubernetes/ingress/traefik/dashboard.md
---
title: Traefik Ingress Controller's Dashboard
description: Unlike competing ingresses (*cough* nginx *cough*), the beautiful Traefik dashboard UI is free for all.
---

# Traefik Dashboard

One of the advantages [Traefik](/kubernetes/ingress/traefik/) offers over [Nginx](/kubernetes/ingress/nginx/) is a native dashboard, available in the open-source version (*Nginx+, the commercially-supported version, also includes a dashboard*).

{ loading=lazy }

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)
    * [x] [Traefik](/kubernetes/ingress/traefik/) deployed per-design
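The dashboard itself is served by Traefik's built-in `api@internal` service; a minimal sketch of exposing it with an IngressRoute might look like the following (the hostname and entrypoint name are assumptions based on the default chart config):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure # the default HTTPS entrypoint in the traefik helm chart
  routes:
    - match: Host(`traefik.example.com`) # assumed hostname; use your own domain
      kind: Rule
      services:
        - name: api@internal # Traefik's built-in dashboard/API service
          kind: TraefikService
```

Note that a route like this has no authentication of its own, so you'd want to protect it (with a middleware, for example) before exposing it beyond your LAN.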
--8<-- "recipe-footer.md"
docs/kubernetes/ingress/traefik/index.md
---
title: Why I use Traefik Ingress Controller
description: Among other advantages, I no longer need to replicate SSL certificate secrets for nginx-ingress-controller to consume, once-per-namespace!
---

# Traefik Ingress Controller

Unlike grumpy ol' man [Nginx](/kubernetes/ingress/nginx/) :older_man:, Traefik, a microservice-friendly reverse proxy, is relatively fresh in the "cloud-native" space, having been "born" :baby_bottle: [in the same year that Kubernetes was launched](https://techcrunch.com/2020/09/23/five-years-after-creating-traefik-application-proxy-open-source-project-hits-2b-downloads/).

Traefik natively includes some features which Nginx lacks:

* [x] Ability to use cross-namespace TLS certificates (*this may be accidental, but it totally works currently*)
* [x] An elegant "middleware" implementation allowing certain requests to pass through additional layers of authentication (*see the sketch below*)
* [x] A beautiful dashboard
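As a hedged illustration of that middleware model, the following sketch defines a basic-auth Middleware; the name and secret are hypothetical:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-basic-auth # hypothetical name
  namespace: traefik
spec:
  basicAuth:
    secret: my-auth-secret # assumes a Secret with an htpasswd-formatted "users" key
```

An Ingress can then opt into the extra authentication layer with the annotation `traefik.ingress.kubernetes.io/router.middlewares: traefik-my-basic-auth@kubernetescrd` (*the value format is `<namespace>-<middleware name>@kubernetescrd`*).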
{ loading=lazy }

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)

    Optional:

    * [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
    * [x] [External DNS](/kubernetes/external-dns/) configured to respond to ingresses, or with a wildcard DNS entry
## Preparation

### Namespace

We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:

```yaml title="/bootstrap/namespaces/namespace-traefik.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
```
### HelmRepository

Next, we need to define a HelmRepository (*a repository of helm charts*), to which we'll refer when we create the HelmRelease. We only need to do this once per-repository. In this case, we're using the official [Traefik helm chart](https://github.com/traefik/traefik-helm-chart), so per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:

```yaml title="/bootstrap/helmrepositories/helmrepository-traefik.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: traefik
  namespace: flux-system
spec:
  interval: 15m
  url: https://helm.traefik.io/traefik
```
### Kustomization

Now that the "global" elements of this deployment (*Namespace and HelmRepository*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/traefik`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-traefik.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: traefik
  namespace: flux-system
spec:
  interval: 15m
  path: ./traefik
  prune: true # remove any elements later removed from the above path
  timeout: 2m # if not set, this defaults to interval duration, which is 1h
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: server
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: traefik
      namespace: traefik
```
### ConfigMap

Now we're into the traefik-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/traefik/traefik-helm-chart/blob/master/traefik/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo:

```yaml title="/traefik/configmap-traefik-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: traefik-helm-chart-value-overrides
  namespace: traefik
data:
  values.yaml: |- # (1)!
    # <upstream values go here>
```

1. Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.

--8<-- "kubernetes-why-full-values-in-configmap.md"

Then work your way through the values you pasted, and change any which are specific to your configuration. It may not be necessary to change anything.
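For example, here's a hedged sketch of a common tweak, enabling the dashboard's IngressRoute and debug logging (both `ingressRoute.dashboard.enabled` and `additionalArguments` exist in recent chart versions, but verify them against the values you pasted):

```yaml
data:
  values.yaml: |-
    # ... rest of the pasted upstream values ...
    ingressRoute:
      dashboard:
        enabled: true # serve the dashboard via Traefik's internal api@internal service
    additionalArguments:
      - "--log.level=DEBUG" # extra static CLI flags passed to traefik
```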
### HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy traefik into the cluster, with the config and extra ConfigMap we defined above. I save this in my flux repo as `traefik/helmrelease-traefik.yaml`:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: traefik
  namespace: traefik
spec:
  chart:
    spec:
      chart: traefik
      version: 10.x # (1)!
      sourceRef:
        kind: HelmRepository
        name: traefik
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: traefik
  valuesFrom:
    - kind: ConfigMap
      name: traefik-helm-chart-value-overrides
      valuesKey: values.yaml # This is the default, but best to be explicit for clarity
```

1. Use `9.x` for Kubernetes versions older than 1.22, as described [here](https://github.com/traefik/traefik-helm-chart/tree/master/traefik#kubernetes-version-support).

--8<-- "kubernetes-why-not-config-in-helmrelease.md"
## Deploy traefik

Having committed the above to your flux repository, you should shortly see a traefik kustomization, and in the `traefik` namespace, the traefik pods:

```bash
demo@shredder:~$ kubectl get pods -n traefik
NAME                       READY   STATUS    RESTARTS   AGE
traefik-5b849b4fbd-svbxk   1/1     Running   0          24h
traefik-5b849b4fbd-xt7vc   1/1     Running   0          24h
demo@shredder:~$
```
### How do I know it's working?

#### Test Service

By default, the chart will deploy Traefik in [LoadBalancer](/kubernetes/loadbalancer/) mode. When you use kubectl to display the service (`kubectl get services -n traefik`), you'll see the external IP displayed:

```bash
demo@shredder:~$ kubectl get services -n traefik
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
traefik   LoadBalancer   10.152.183.162   172.168.209.1   80:30756/TCP,443:30462/TCP   24h
demo@shredder:~$
```

!!! question "Where does the external IP come from?"
    If you're using [k3s's load balancer](/kubernetes/loadbalancer/k3s/), the external IP will likely be the IP of the nodes running k3s. If you're using [MetalLB](/kubernetes/loadbalancer/metallb/), the external IP should come from the list of addresses in the pool you allocated.

Pointing your web browser to the external IP displayed should result in a 404 page. Congratulations, you have external access to the Traefik ingress controller! 🥳
#### Test Ingress

Still, you didn't deploy an ingress controller to look at 404 pages! If you used my [template repository](https://github.com/geek-cookbook/template-flux) to start off your [flux deployment strategy](/kubernetes/deployment/flux/), then the podinfo helm chart has already been deployed. By default, the podinfo configmap doesn't deploy an Ingress, but you can change this using the magic of GitOps... 🪄

Edit your podinfo helmrelease configmap (`/podinfo/configmap-podinfo-helm-chart-value-overrides.yaml`), change `ingress.enabled` to `true`, and set the host name to match your local domain name (*already configured using [External DNS](/kubernetes/external-dns/)*):

``` yaml hl_lines="2 8"
ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: podinfo.local
```

To:

``` yaml hl_lines="2 8"
ingress:
  enabled: true
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: podinfo.<your domain name>
```
Commit your changes, wait for a reconciliation, and run `kubectl get ingress -n podinfo`. You should see an ingress created matching the host defined above, and the ADDRESS value should match the service address of the traefik service.

```bash
root@cn1:~# kubectl get ingress -A
NAMESPACE   NAME      CLASS    HOSTS                 ADDRESS         PORTS     AGE
podinfo     podinfo   <none>   podinfo.example.com   172.168.209.1   80, 443   91d
```

!!! question "Why is there no class value?"
    You don't **have** to define an ingress class if you only have one **class** of ingress, since typically your ingress controller will assume the default class. When you run multiple ingress controllers (say, nginx **and** [Traefik](/kubernetes/ingress/traefik/), or multiple nginx instances with different access controls), classes become more important.
Now, assuming your [DNS is correct](/kubernetes/external-dns/), you should be able to point your browser to the hostname you chose, and see the beautiful podinfo page! 🥳🥳

#### Test SSL

Ha, but we're not done yet! We have exposed a service via our load balancer, and we've exposed a route to a service via an Ingress, but let's get rid of that nasty "insecure" message in the browser when using HTTPS...

Since you set up [SSL certificates](/kubernetes/ssl-certificates/), including [secret-replicator](/kubernetes/ssl-certificates/secret-replicator/), you should end up with a `letsencrypt-wildcard-cert` secret in every namespace, including `podinfo`.

So once again, alter the podinfo ConfigMap to change this:

```yaml hl_lines="2 4"
tls: []
#  - secretName: chart-example-tls
#    hosts:
#      - chart-example.local
```

To this:

```yaml hl_lines="2 4"
tls:
  - secretName: letsencrypt-wildcard-cert
    hosts:
      - podinfo.<your domain name>
```

Commit your changes, wait for the reconciliation, and the next time you point your browser at your ingress, you should get a beautiful, valid, officially-signed SSL certificate[^1]! 🥳🥳🥳

### Troubleshooting

Are things not working as expected? Watch Traefik's logs with `kubectl logs -n traefik -l app.kubernetes.io/name=traefik -f`.

--8<-- "recipe-footer.md"

[^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!