
Fixed logic / sequence issue for k8s letsencyrpt

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
David Young
2022-08-23 11:22:41 +12:00
parent 45a851df7a
commit 4a7847578b
5 changed files with 295 additions and 524 deletions

View File

@@ -1,4 +1,5 @@
---
title: Install cert-manager in Kubernetes
description: Cert Manager generates and renews LetsEncrypt certificates
---
# Cert Manager
@@ -7,7 +8,7 @@ To interact with your cluster externally, you'll almost certainly be using a web
Cert Manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.
![Cert Manager illustration](/images/cert-manager.svg)
It can issue certificates from a variety of supported sources, including Let's Encrypt, HashiCorp Vault, and Venafi, as well as private PKI.
@@ -22,106 +23,102 @@ It will ensure certificates are valid and up to date, and attempt to renew certi
### Namespace
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:
??? example "Example Namespace (click to expand)"
    ```yaml title="/bootstrap/namespaces/namespace-cert-manager.yaml"
    apiVersion: v1
    kind: Namespace
    metadata:
      name: cert-manager
    ```
### HelmRepository
Next, we need to define a HelmRepository (*a repository of helm charts*), to which we'll refer when we create the HelmRelease. We only need to do this once per-repository. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:
??? example "Example HelmRepository (click to expand)"
    ```yaml title="/bootstrap/helmrepositories/helmrepository-jetstack.yaml"
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: HelmRepository
    metadata:
      name: jetstack
      namespace: flux-system
    spec:
      interval: 15m
      url: https://charts.jetstack.io
    ```
### Kustomization
Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/cert-manager`. I create this example Kustomization in my flux repo:
??? example "Example Kustomization (click to expand)"
    ```yaml title="/bootstrap/kustomizations/kustomization-cert-manager.yaml"
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
    kind: Kustomization
    metadata:
      name: cert-manager
      namespace: flux-system
    spec:
      interval: 15m
      path: ./cert-manager
      prune: true # remove any elements later removed from the above path
      timeout: 2m # if not set, this defaults to interval duration, which is 1h
      sourceRef:
        kind: GitRepository
        name: flux-system
      validation: server
      healthChecks:
        - apiVersion: apps/v1
          kind: Deployment
          name: cert-manager
          namespace: cert-manager
    ```
### ConfigMap
Now we're into the cert-manager-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/bitnami-labs/cert-manager/blob/main/helm/cert-manager/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML, and YAML doesn't permit tabs*). I create this example yaml in my flux repo:
```yaml title="/cert-manager/configmap-cert-manager-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: cert-manager-helm-chart-value-overrides
  namespace: cert-manager
data:
  values.yaml: |-
    # paste chart values.yaml (indented) here and alter as required
```
--8<-- "kubernetes-why-full-values-in-configmap.md"
Then work your way through the values you pasted, and change any which are specific to your configuration.
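For example, if you'd like the chart to install and manage cert-manager's CRDs for you (*rather than applying them by hand*), you might flip `installCRDs` from its default. This is just a minimal sketch; confirm the value name against the chart version you're actually deploying:

```yaml
# excerpt of the values.yaml key inside the ConfigMap (outer indentation omitted)
installCRDs: true # default is false; true lets the helm chart manage the CRDs
```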
### HelmRelease
Lastly, having set the scene above, we define the HelmRelease which will actually deploy the cert-manager controller into the cluster, with the config we defined above. I save this in my flux repo:
??? example "Example HelmRelease (click to expand)"
    ```yaml title="/cert-manager/helmrelease-cert-manager.yaml"
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: cert-manager
      namespace: cert-manager
    spec:
      chart:
        spec:
          chart: cert-manager
          version: 1.6.x
          sourceRef:
            kind: HelmRepository
            name: jetstack
            namespace: flux-system
      interval: 15m
      timeout: 5m
      releaseName: cert-manager
      valuesFrom:
        - kind: ConfigMap
          name: cert-manager-helm-chart-value-overrides
          valuesKey: values.yaml # This is the default, but best to be explicit for clarity
    ```
--8<-- "kubernetes-why-not-config-in-helmrelease.md"
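Once flux has reconciled all of the above, you can confirm that cert-manager is up and running. These are standard flux/kubectl commands; adjust names if you've deviated from the examples:

```bash
# confirm the HelmRelease reconciled, and the controller pods are running
flux get helmreleases -n cert-manager
kubectl get pods -n cert-manager
```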

View File

@@ -16,34 +16,71 @@ In order for Cert Manager to request/renew certificates, we have to tell it abou
## Preparation
### Namespace
We need a namespace to deploy our certificates into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:
```yaml title="/bootstrap/namespaces/namespace-letsencrypt-wildcard-cert.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: letsencrypt-wildcard-cert
```
### Kustomization
Now we need a kustomization to tell Flux to install any YAMLs it finds in `/letsencrypt-wildcard-cert`. I create this example Kustomization in my flux repo:
```yaml title="/bootstrap/kustomizations/kustomization-letsencrypt-wildcard-cert.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: letsencrypt-wildcard-cert
  namespace: flux-system
spec:
  interval: 15m
  path: ./letsencrypt-wildcard-cert
  dependsOn:
    - name: "cert-manager"
    - name: "sealed-secrets"
  prune: true # remove any elements later removed from the above path
  timeout: 2m # if not set, this defaults to interval duration, which is 1h
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: server
```
!!! tip
    Importantly, note that we define a **dependsOn**, to tell Flux not to try to reconcile this Kustomization before the cert-manager and sealed-secrets Kustomizations are reconciled. Cert-manager creates the CRDs used to define certificates, so prior to cert-manager being installed, the cluster won't know what to do with the ClusterIssuer/Certificate resources.
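You can watch this dependency ordering play out with the standard flux CLI (*a quick sanity check, assuming the Kustomization names above*):

```bash
# letsencrypt-wildcard-cert should only reconcile once
# cert-manager and sealed-secrets report Ready
flux get kustomizations
```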
### LetsEncrypt Staging
The ClusterIssuer resource below represents a certificate authority which is able to request certificates for any namespace within the cluster.
I save this in my flux repo as illustrated below. I've highlighted the areas you'll need to pay attention to:
???+ example "ClusterIssuer for LetsEncrypt Staging"
    ```yaml hl_lines="8 15 17-21" title="/letsencrypt-wildcard-cert/cluster-issuer-letsencrypt-staging.yaml"
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        email: batman@example.com
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
          - selector:
              dnsZones:
                - "example.com"
            dns01:
              cloudflare:
                email: batman@example.com
                apiTokenSecretRef:
                  name: cloudflare-api-token-secret
                  key: api-token
    ```
Deploying this issuer YAML into the cluster would provide Cert Manager with the details necessary to start issuing certificates from the LetsEncrypt staging server (*always good to test in staging first!*).
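Once deployed, you can confirm that cert-manager has registered the issuer and completed its ACME account registration (*the READY column should show True*):

```bash
kubectl get clusterissuer letsencrypt-staging
# for more detail on the ACME registration status:
kubectl describe clusterissuer letsencrypt-staging
```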
@@ -52,34 +89,33 @@ Deploying this issuer YAML into the cluster would provide Cert Manager with the
### LetsEncrypt Prod
As you'd imagine, the "prod" version of the LetsEncrypt issuer is very similar, and I save this in my flux repo:
???+ example "ClusterIssuer for LetsEncrypt Prod"
    ```yaml hl_lines="8 15 17-21" title="/letsencrypt-wildcard-cert/cluster-issuer-letsencrypt-prod.yaml"
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        email: batman@example.com
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
          - selector:
              dnsZones:
                - "example.com"
            dns01:
              cloudflare:
                email: batman@example.com
                apiTokenSecretRef:
                  name: cloudflare-api-token-secret
                  key: api-token
    ```
!!! note
    You'll note that there are two secrets referred to above. `privateKeySecretRef`, referencing `letsencrypt-prod`, is for cert-manager to populate as a result of its ACME shenanigans; you don't have to do anything about this particular secret! The cloudflare-specific secret (*and this will change based on your provider*) is expected to be found in the same namespace as cert-manager itself, and will be discussed when we create our [wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/).
## Serving
@@ -106,4 +142,4 @@ Provided your account is registered, you're ready to proceed with [creating a wi
--8<-- "recipe-footer.md"
[^1]: Since a ClusterIssuer is not a namespaced resource, it doesn't exist in any specific namespace. Therefore, my assumption is that the `apiTokenSecretRef` secret is only "looked for" when a certificate (*which **is** namespaced*) requires validation.

View File

@@ -15,11 +15,9 @@ Kiwigrid's "[Secret Replicator](https://github.com/kiwigrid/secret-replicator)"
### Namespace
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:
??? example "Example Namespace (click to expand)"
    ```yaml title="/bootstrap/namespaces/namespace-secret-replicator.yaml"
    apiVersion: v1
    kind: Namespace
    metadata:
@@ -28,133 +26,130 @@ metadata:
### HelmRepository
Next, we need to define a HelmRepository (*a repository of helm charts*), to which we'll refer when we create the HelmRelease. We only need to do this once per-repository. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:
??? example "Example HelmRepository (click to expand)"
    ```yaml title="/bootstrap/helmrepositories/helmrepository-kiwigrid.yaml"
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: HelmRepository
    metadata:
      name: kiwigrid
      namespace: flux-system
    spec:
      interval: 15m
      url: https://kiwigrid.github.io
    ```
### Kustomization
Now that the "global" elements of this deployment have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/secret-replicator`. I create this example Kustomization in my flux repo:
??? example "Example Kustomization (click to expand)"
    ```yaml title="/bootstrap/kustomizations/kustomization-secret-replicator.yaml"
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
    kind: Kustomization
    metadata:
      name: secret-replicator
      namespace: flux-system
    spec:
      interval: 15m
      path: ./secret-replicator
      prune: true # remove any elements later removed from the above path
      timeout: 2m # if not set, this defaults to interval duration, which is 1h
      sourceRef:
        kind: GitRepository
        name: flux-system
      validation: server
      healthChecks:
        - apiVersion: apps/v1
          kind: Deployment
          name: secret-replicator
          namespace: secret-replicator
    ```
### ConfigMap
Now we're into the secret-replicator-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/kiwigrid/helm-charts/blob/master/charts/secret-replicator/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML, and YAML doesn't permit tabs*). I create this example yaml in my flux repo:
??? example "Example ConfigMap (click to expand)"
    ```yaml hl_lines="21 27" title="/secret-replicator/configmap-secret-replicator-helm-chart-value-overrides.yaml"
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: secret-replicator-helm-chart-value-overrides
      namespace: secret-replicator
    data:
      values.yaml: |-
        # Default values for secret-replicator.
        # This is a YAML-formatted file.
        # Declare variables to be passed into your templates.

        image:
          repository: kiwigrid/secret-replicator
          tag: 0.2.0
          pullPolicy: IfNotPresent
          ## Specify ImagePullSecrets for Pods
          ## ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
          # pullSecrets: myregistrykey

        # csv list of secrets
        secretList: "letsencrypt-wildcard-cert"
        # secretList: "secret1,secret2"

        ignoreNamespaces: "kube-system,kube-public"

        # If defined, allow secret-replicator to watch for secrets in _another_ namespace
        secretNamespace: "letsencrypt-wildcard-cert"

        rbac:
          enabled: true

        resources: {}
          # limits:
          #  cpu: 50m
          #  memory: 20Mi
          # requests:
          #  cpu: 20m
          #  memory: 20Mi

        nodeSelector: {}

        tolerations: []

        affinity: {}
    ```
--8<-- "kubernetes-why-full-values-in-configmap.md"
Note that the following values changed from default, above:
- `secretList`: `letsencrypt-wildcard-cert`
- `secretNamespace`: `letsencrypt-wildcard-cert`
### HelmRelease
Lastly, having set the scene above, we define the HelmRelease which will actually deploy the secret-replicator controller into the cluster, with the config we defined above. I save this in my flux repo:
??? example "Example HelmRelease (click to expand)"
    ```yaml title="/secret-replicator/helmrelease-secret-replicator.yaml"
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: secret-replicator
      namespace: secret-replicator
    spec:
      chart:
        spec:
          chart: secret-replicator
          version: 0.6.x
          sourceRef:
            kind: HelmRepository
            name: kiwigrid
            namespace: flux-system
      interval: 15m
      timeout: 5m
      releaseName: secret-replicator
      valuesFrom:
        - kind: ConfigMap
          name: secret-replicator-helm-chart-value-overrides
          valuesKey: values.yaml # This is the default, but best to be explicit for clarity
    ```
--8<-- "kubernetes-why-not-config-in-helmrelease.md"
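Once the HelmRelease has reconciled, you can confirm that secret-replicator is copying the secret into your other namespaces (*assuming the wildcard certificate secret from the next recipe already exists; the grep is just a convenience*):

```bash
# the source secret should appear in every namespace not on the ignore list
kubectl get secrets --all-namespaces | grep letsencrypt-wildcard-cert
```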

View File

@@ -22,86 +22,44 @@ To take advantage of the various workarounds available, I find it best to put th
## Preparation
### DNS01 Validation Secret
The simplest way to validate ownership of a domain to LetsEncrypt is to use DNS-01 validation. In this mode, we "prove" our ownership of a domain name by creating a special TXT record, which LetsEncrypt will check and confirm for validity, before issuing us any certificates for that domain name.
The [ClusterIssuers we created earlier](/kubernetes/ssl-certificates/letsencrypt-issuers/) included a field `solvers.dns01.cloudflare.apiTokenSecretRef.name`. This value points to a secret (*in the same namespace as cert-manager*) containing credentials necessary to create DNS records automatically. (*again, my examples are for cloudflare, but the [other supported providers](https://cert-manager.io/docs/configuration/acme/dns01/) will have similar secret requirements*)
Thanks to [Sealed Secrets](/kubernetes/sealed-secrets/), we have a safe way of committing secrets into our repository, so to create the necessary secret, you'd run something like this:
```bash
kubectl create secret generic cloudflare-api-token-secret \
  --namespace cert-manager \
  --dry-run=client \
  --from-literal=api-token=gobbledegook -o json \
  | kubeseal --cert <path to public cert> \
  > <path to repo>/cert-manager/sealedsecret-cloudflare-api-token-secret.yaml
```
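After flux applies the SealedSecret, the sealed-secrets controller should unseal it into a regular Secret. A quick way to verify (*names per the example above*):

```bash
kubectl -n cert-manager get sealedsecret cloudflare-api-token-secret
kubectl -n cert-manager get secret cloudflare-api-token-secret
```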
### Staging Certificate
Finally, we create our certificates! Here's an example certificate resource which uses the letsencrypt-staging issuer (*to avoid being rate-limited while learning!*). I save this in my flux repo:
???+ example "Example certificate requested from LetsEncrypt staging"
    ```yaml title="/letsencrypt-wildcard-cert/certificate-wildcard-cert-letsencrypt-staging.yaml"
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: letsencrypt-wildcard-cert-example.com-staging
      namespace: letsencrypt-wildcard-cert
    spec:
      # secretName doesn't have to match the certificate name, but it may as well, for simplicity!
      secretName: letsencrypt-wildcard-cert-example.com-staging
      issuerRef:
        name: letsencrypt-staging
        kind: ClusterIssuer
      dnsNames:
        - "example.com"
        - "*.example.com"
    ```
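Once committed, you can watch cert-manager work through the DNS01 challenge (*standard kubectl commands; the certificate name matches the example above*):

```bash
kubectl -n letsencrypt-wildcard-cert get certificate -w
# for a blow-by-blow view of the ACME order/challenge progress:
kubectl -n letsencrypt-wildcard-cert get certificaterequests,orders,challenges
```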
## Serving
@@ -130,26 +88,24 @@ If your certificate does not become `Ready` within a few minutes [^1], try watch
### Production Certificate
Once you know you can happily deploy a staging certificate, it's safe enough to attempt your "prod" certificate. I save this in my flux repo:
???+ example "Example certificate requested from LetsEncrypt prod"
    ```yaml title="/letsencrypt-wildcard-cert/certificate-wildcard-cert-letsencrypt-prod.yaml"
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: letsencrypt-wildcard-cert-example.com
      namespace: letsencrypt-wildcard-cert
    spec:
      # secretName doesn't have to match the certificate name, but it may as well, for simplicity!
      secretName: letsencrypt-wildcard-cert-example.com
      issuerRef:
        name: letsencrypt-prod
        kind: ClusterIssuer
      dnsNames:
        - "example.com"
        - "*.example.com"
    ```
Commit the certificate and follow the steps above to confirm that your prod certificate has been issued.
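As with staging, a describe will confirm the issuance (*assuming the example certificate name above*):

```bash
kubectl -n letsencrypt-wildcard-cert describe certificate letsencrypt-wildcard-cert-example.com
# the Status should eventually show "Certificate is up to date and has not expired"
```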

View File

@@ -1,213 +0,0 @@
# Traefik
This recipe utilises the [traefik helm chart](https://github.com/helm/charts/tree/master/stable/traefik) to provide LetsEncrypt-secured HTTPS access to multiple containers within your cluster.
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/)
2. [Helm](/kubernetes/helm/) installed and initialised in your cluster
## Preparation
### Clone helm charts
Clone the helm charts, by running:
```bash
git clone https://github.com/helm/charts
```
Change to stable/traefik:
```bash
cd charts/stable/traefik
```
### Edit values.yaml
The beauty of the helm approach is that all the complexity of the Kubernetes elements' YAML is hidden from you (generated using templates), and all your changes go into values.yaml.
These are my values, you'll need to adjust for your own situation:
```yaml
imageTag: alpine
serviceType: NodePort
# yes, we're not listening on 80 or 443 because we don't want to pay for a loadbalancer IP to do this. I use poor-mans-k8s-lb instead
service:
  nodePorts:
    http: 30080
    https: 30443
cpuRequest: 1m
memoryRequest: 100Mi
cpuLimit: 1000m
memoryLimit: 500Mi
ssl:
  enabled: true
  enforced: true
debug:
  enabled: false
rbac:
  enabled: true
dashboard:
  enabled: true
  domain: traefik.funkypenguin.co.nz
kubernetes:
  # set these to all the namespaces you intend to use. I standardize on one-per-stack. You can always add more later
  namespaces:
    - kube-system
    - unifi
    - kanboard
    - nextcloud
    - huginn
    - miniflux
accessLogs:
  enabled: true
acme:
  persistence:
    enabled: true
    # Add the necessary annotation to backup ACME store with k8s-snapshots
    annotations: { "backup.kubernetes.io/deltas": "P1D P7D" }
  staging: false
  enabled: true
  logging: true
  email: "<my letsencrypt email>"
  challengeType: "dns-01"
  dnsProvider:
    name: cloudflare
    cloudflare:
      CLOUDFLARE_EMAIL: "<my cloudflare email>"
      CLOUDFLARE_API_KEY: "<my cloudflare API key>"
  domains:
    enabled: true
    domainsList:
      - main: "*.funkypenguin.co.nz" # name of the wildcard domain name for the certificate
      - sans:
          - "funkypenguin.co.nz"
metrics:
  prometheus:
    enabled: true
```
!!! note
The helm chart doesn't enable the Traefik dashboard by default. I intend to add an oauth_proxy pod to secure this, in a future recipe update.
### Prepare phone-home pod
[Remember](/kubernetes/loadbalancer/) how our load balancer design ties a phone-home container to another container using a pod, so that the phone-home container can tell our external load balancer (_using a webhook_) where to send our traffic?
Since we deployed Traefik using helm, we need to take a slightly different approach, so we'll create a pod with an affinity which ensures it runs on the same host which runs the Traefik container (_more precisely, containers with the label app=traefik_).
Create phone-home.yaml as per the following example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: phonehome-traefik
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - traefik
          topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
    - image: funkypenguin/poor-mans-k8s-lb
      imagePullPolicy: Always
      name: phonehome-traefik
      env:
        - name: REPEAT_INTERVAL
          value: "600"
        - name: FRONTEND_PORT
          value: "443"
        - name: BACKEND_PORT
          value: "30443"
        - name: NAME
          value: "traefik"
        - name: WEBHOOK
          value: "https://<your loadbalancer hostname>:9000/hooks/update-haproxy"
        - name: WEBHOOK_TOKEN
          valueFrom:
            secretKeyRef:
              name: traefik-credentials
              key: webhook_token.secret
```
Create your webhook token secret by running:
```bash
echo -n "imtoosecretformyshorts" > webhook_token.secret
kubectl create secret generic traefik-credentials --from-file=webhook_token.secret
```
!!! warning
Yes, the "-n" in the echo statement is needed. [Read here for why](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/).
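If you want to double-check that your secret hasn't grown a sneaky trailing newline, something like this will show you the raw bytes (*the jsonpath escaping is standard kubectl syntax for keys containing dots*):

```bash
kubectl get secret traefik-credentials \
  -o jsonpath='{.data.webhook_token\.secret}' | base64 -d | od -c
# a trailing \n in the od output means the token has a hidden newline
```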
## Serving
### Install the chart
To install the chart, simply run ```helm install stable/traefik --name traefik --namespace kube-system --values values.yaml```
That's it, traefik is running.
You can confirm this by running ```kubectl get pods```, and even watch the traefik logs, by running ```kubectl logs -f traefik<tab-to-autocomplete>```
### Deploy the phone-home pod
We can't access traefik yet, since it's listening on port 30443 on whichever node it happens to be running on. We'll launch our phone-home pod, to tell our [load balancer](/kubernetes/loadbalancer/) where to send incoming traffic on port 443.
Optionally, on your loadbalancer VM, run ```journalctl -u webhook -f``` to watch for the container calling the webhook.
Run ```kubectl create -f phone-home.yaml``` to create the pod.
Run ```kubectl get pods -o wide``` to confirm that both the phone-home pod and the traefik pod are on the same node:
```bash
# kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE   IP           NODE
phonehome-traefik          1/1       Running   0          20h   10.56.2.55   gke-penguins-are-sexy-8b85ef4d-2c9g
traefik-69db67f64c-5666c   1/1       Running   0          10d   10.56.2.30   gke-penguins-are-sexy-8b85ef4d-2c9g
```
Now browse to `https://<your load balancer>`, and you should get a valid SSL cert, along with a 404 error (_you haven't deployed any other recipes yet_).
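If you'd rather verify from the CLI, curl can show you which certificate is being served (*substitute your real load balancer hostname*):

```bash
curl -sv -o /dev/null https://<your loadbalancer hostname> 2>&1 | grep -E 'subject:|issuer:'
# the issuer should show LetsEncrypt once the ACME DNS-01 challenge has completed
```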
### Making changes
If you change a value in values.yaml, and want to update the traefik pod, run:
```bash
helm upgrade --values values.yaml traefik stable/traefik --recreate-pods
```
## Review
We're doneburgers! 🍔 We now have all the pieces to safely deploy recipes into our Kubernetes cluster, knowing:
1. Our HTTPS traffic will be secured with LetsEncrypt (thanks Traefik!)
2. Our non-HTTPS ports (like UniFi adoption) will be load-balanced using a free-to-scale [external load balancer](/kubernetes/loadbalancer/)
3. Our persistent data will be [automatically backed up](/kubernetes/snapshots/)
Here's a recap:
* [Start](/kubernetes/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* Traefik (this page) - Traefik Ingress via Helm
## Where to next?
I'll be adding more Kubernetes versions of existing recipes soon. Check out the [MQTT](/recipes/mqtt/) recipe for a start!
[^1]: It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!
--8<-- "recipe-footer.md"