Mirror of https://github.com/funkypenguin/geek-cookbook/ (synced 2025-12-13 09:46:23 +00:00)

Add authentik, tidy up recipe-footer

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
@@ -68,4 +68,4 @@ What have we achieved? We've got snapshot-controller running, and ready to manag
 
 * [ ] Configure [Velero](/kubernetes/backup/velero/) with a VolumeSnapshotLocation, so that volume snapshots can be made as part of a BackupSchedule!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
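For context, the VolumeSnapshotLocation referenced above is just another Velero CR. A minimal sketch, assuming the AWS object-storage plugin (your provider and config will certainly differ):

```yaml
# Hedged sketch only: provider and region are illustrative, not prescriptive
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  config:
    region: us-west-2
```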
@@ -45,4 +45,4 @@ What have we achieved? We now have the snapshot validation admission webhook run
 
 * [ ] Deploy [snapshot-controller](/kubernetes/backup/csi-snapshots/snapshot-controller/) itself
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
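Once snapshot-controller is deployed, CSI snapshots are enabled per-driver via a VolumeSnapshotClass. A minimal sketch, assuming a rook-ceph RBD driver (swap in your own CSI driver name):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbd-snapclass
driver: rook-ceph.rbd.csi.ceph.com  # assumption: replace with your CSI driver's name
deletionPolicy: Delete              # delete the backing snapshot when the VolumeSnapshot is deleted
```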
@@ -17,4 +17,4 @@ For your backup needs, I present, Velero, by VMWare:
 
 * [Velero](/kubernetes/backup/velero/)
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -11,6 +11,8 @@ helmrelease_namespace: velero
 kustomization_name: velero
 slug: Velero
 status: new
+upstream: https://velero.io/
+github_repo: https://github.com/vmware-tanzu/velero
 ---
 
 # Velero
@@ -326,4 +328,4 @@ What have we achieved? We've got scheduled backups running, and we've successful
 [^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group
 [^2]: But not the rook-ceph cluster. If that dies, the snapshots are toast :toast: too!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
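The scheduled backups mentioned in this hunk are driven by Velero's Schedule CR. A minimal sketch (the name, cron expression, and retention are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup   # hypothetical name
  namespace: velero
spec:
  schedule: "0 2 * * *"    # cron syntax: daily at 02:00
  template:
    includedNamespaces:
      - "*"                # back up everything
    ttl: 168h0m0s          # keep backups for 7 days
```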
@@ -76,4 +76,4 @@ That's it. You have a beautiful new kubernetes cluster ready for some action!
 
 [^1]: Do you live in the CLI? Install the kubectl autocompletion for [bash or zsh](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) to make your life much easier!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -62,4 +62,4 @@ You'll learn more about how to care for and feed your cluster if you build it yo
 
 Go with a self-hosted cluster if you want to learn more, you'd rather spend time than money, or you've already got significant investment in local infrastructure and technical skillz.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -156,4 +156,4 @@ Cuddle your beautiful new cluster by running `kubectl cluster-info` [^1] - if th
 [^2]: Looking for your k3s logs? Under Ubuntu LTS, run `journalctl -u k3s` to show your logs
 [^3]: k3s is not the only "lightweight kubernetes" game in town. Minikube (*virtualization-based*) and microk8s (*possibly better for Ubuntu users since it's installed in a "snap" - haha*) are also popular options. One day I'll write a "microk8s vs k3s" review, but it doesn't really matter for our cluster operations - as I understand it, microk8s makes HA clustering slightly easier than k3s, but you get slightly less "out-of-the-box" in return, so microk8s may be more suitable for experienced users / production edge deployments.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -63,4 +63,4 @@ Good! I describe how to put this design into action on the [next page](/kubernet
 
 [^1]: ERDs are fancy diagrams for nERDs which [represent cardinality between entities](https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model#Crow's_foot_notation) scribbled using the foot of a crow 🐓
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -147,7 +147,7 @@ If you used my template repo, some extra things also happened..
 
 That's best explained on the [next page](/kubernetes/deployment/flux/design/), describing the design we're using...
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: The [template repo](https://github.com/geek-cookbook/template-flux/) also "bootstraps" a simple example re how to [operate flux](/kubernetes/deployment/flux/operate/), by deploying the podinfo helm chart.
 [^2]: TIL that GitHub listens for SSH on `ssh.github.com` on port 443!
@@ -154,6 +154,6 @@ Commit your changes, and once again do the waiting / impatient-reconciling jig.
 
 We did it. The Holy Grail. We deployed an application into the cluster, without touching the cluster. Pinch yourself, and then prove it worked by running `flux get kustomizations`, or `kubectl get helmreleases -n podinfo`.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Got suggestions for improvements here? Shout out in the comments below!
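For readers skimming the diff: the application deployed here is the podinfo helm chart, managed by a Flux HelmRelease. A minimal sketch of what such a HelmRelease might look like (API version per Flux at the time; newer Flux releases use `v2`):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 15m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository   # assumes a podinfo HelmRepository was already defined
        name: podinfo
        namespace: flux-system
```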
@@ -129,4 +129,4 @@ Still with me? Good. Move on to creating your cluster!
 - [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
 - [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -154,6 +154,6 @@ What have we achieved? By simply creating another YAML in our flux repo alongsid
 
 * [X] DNS records are created automatically based on YAMLs (*or even just on services and ingresses!*)
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Why yes, I **have** accidentally caused outages / conflicts by "leaking" DNS entries automatically!
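The "DNS records created based on YAMLs" claim refers to external-dns's CRD source. A minimal sketch of a DNSEndpoint (hostname, TTL, and target are illustrative):

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: example
  namespace: external-dns
spec:
  endpoints:
    - dnsName: myapp.example.com   # hypothetical hostname
      recordType: A
      recordTTL: 180
      targets:
        - 192.0.2.10               # documentation-range IP; use your LoadBalancer IP
```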
@@ -53,4 +53,4 @@ Still with me? Good. Move on to understanding Helm charts...
 
 [^1]: Of course, you can have lots of fun deploying all sorts of things via Helm. Check out <https://artifacthub.io> for some examples.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -46,6 +46,6 @@ Primarily you need 2 things:
 
 Practically, you need some extras too, but you can mix-and-match these.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Of course, if you **do** enjoy understanding the intricacies of how your tools work, you're in good company!
@@ -14,6 +14,6 @@ There are many popular Ingress Controllers, we're going to cover two equally use
 
 Choose at least one of the above (*there may be valid reasons to use both!* [^1]), so that you can expose applications via Ingress.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: One cluster I manage uses Traefik for public services, but Nginx for internal management services such as Prometheus, etc. The idea is that you'd need one type of Ingress to help debug problems with the _other_ type!
@@ -234,6 +234,6 @@ Commit your changes, wait for the reconciliation, and the next time you point yo
 
 Are things not working as expected? Watch the nginx-ingress-controller's logs with ```kubectl logs -n nginx-ingress-controller -l app.kubernetes.io/name=nginx-ingress-controller -f```.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!
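"No additional manual effort for DNS or SSL" works because the Ingress itself carries the hostname and the cert-manager annotation. A minimal sketch (hostnames, service names, and ports are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
  namespace: podinfo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumes this ClusterIssuer exists
spec:
  ingressClassName: nginx
  rules:
    - host: podinfo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 9898
  tls:
    - hosts:
        - podinfo.example.com
      secretName: podinfo-tls   # cert-manager creates and populates this secret
```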
@@ -15,6 +15,6 @@ One of the advantages [Traefik](/kubernetes/ingress/traefik/) offers over [Nginx
 
 * [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)
 * [x] [Traefik](/kubernetes/ingress/traefik/) deployed per-design
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!
@@ -236,6 +236,6 @@ Commit your changes, wait for the reconciliation, and the next time you point yo
 
 Are things not working as expected? Watch Traefik's logs with ```kubectl logs -n traefik -l app.kubernetes.io/name=traefik -f```.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!
@@ -50,6 +50,6 @@ Assuming you only had a single Kubernetes node (*say, a small k3s deployment*),
 
 (*This is [the way k3s works](/kubernetes/loadbalancer/k3s/) by default, although it's still called a LoadBalancer*)
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: It is possible to be prescriptive about which port is used for a NodePort-exposed service, and this is occasionally [a valid deployment strategy](https://github.com/portainer/k8s/#using-nodeport-on-a-localremote-cluster), but you're usually limited to ports between 30000 and 32767.
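To make the NodePort behaviour in the footnote concrete, here's a minimal sketch of a Service that pins a specific node port (name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo-nodeport   # hypothetical name
  namespace: podinfo
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: podinfo
  ports:
    - port: 9898
      targetPort: 9898
      nodePort: 30080   # must fall within the default 30000-32767 range
```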
@@ -23,6 +23,6 @@ Yes, to get you started. But consider the following limitations:
 
 To tackle these issues, you need some more advanced network configuration, along with [MetalLB](/kubernetes/loadbalancer/metallb/).
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: And seriously, if you're building a Kubernetes cluster, of **course** you'll want more than one host!
@@ -328,6 +328,6 @@ To:
 
 Commit your changes, wait for a reconciliation, and run `kubectl get services -n podinfo`. All going well, you should see that the service now has an IP assigned from the pool you chose for MetalLB!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: I've documented an example re [how to configure BGP between MetalLB and pfsense](/kubernetes/loadbalancer/metallb/pfsense/).
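The change being described is simply flipping the podinfo Service to type LoadBalancer. A minimal sketch of the end state (selector and ports are illustrative; the pool annotation is optional and assumes a named MetalLB pool):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: podinfo
  annotations:
    metallb.universe.tf/address-pool: default   # optional: pin to a specific pool
spec:
  type: LoadBalancer   # MetalLB assigns an IP from its pool
  selector:
    app.kubernetes.io/name: podinfo
  ports:
    - port: 80
      targetPort: 9898
```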
@@ -75,6 +75,6 @@ If you're not receiving any routes from MetalLB, or if the neighbors aren't in a
 2. Examine the metallb speaker logs in the cluster, by running `kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb`
 3. SSH to the pfSense host, start a shell, and launch the FRR shell by running `vtysh`. Now you're in a cisco-like console where commands like `show ip bgp sum` and `show ip bgp neighbors <neighbor ip> received-routes` will show you interesting debugging things.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: If you decide to deploy some policy with route-maps, prefix-lists, etc, it's all found under **Services -> FRR Global/Zebra** 🦓
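For readers debugging this, the MetalLB side of the BGP session is configured with a CR like the following (ASNs and peer address are illustrative; older MetalLB releases configured this via a ConfigMap instead):

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: pfsense          # hypothetical name
  namespace: metallb-system
spec:
  myASN: 64500           # the ASN MetalLB speaks as
  peerASN: 64501         # pfSense/FRR's ASN
  peerAddress: 192.168.1.1
```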
@@ -311,4 +311,4 @@ At this point, you should be able to access your instance on your chosen DNS nam
 
 To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -40,6 +40,6 @@ A few things you should know:
 
 In summary, Local Path Provisioner is fine if you have very specifically sized workloads and you don't care about node redundancy.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: [TopoLVM](/kubernetes/persistence/topolvm/) also creates per-node volumes which aren't "portable" between nodes, but because it relies on LVM, it is "capacity-aware", and is able to distribute storage among multiple nodes based on available capacity.
@@ -243,6 +243,6 @@ What have we achieved? We have a storage provider that can use an NFS server as
 
 * [X] We have a new storage provider
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: The reason I shortened it is so I didn't have to type nfs-subdirectory-provider each time. If you want that sort of pain in your life, feel free to change it!
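Consuming the new storage provider is then just a matter of referencing its StorageClass in a PVC. A minimal sketch (the class name `nfs` follows the shortened name mentioned in the footnote; yours may differ):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
  namespace: default
spec:
  storageClassName: nfs    # assumption: the shortened class name from this recipe
  accessModes:
    - ReadWriteMany        # NFS-backed volumes can be shared across nodes
  resources:
    requests:
      storage: 1Gi
```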
@@ -415,6 +415,6 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed
 * [X] StorageClasses are available so that the cluster storage can be consumed by your pods
 * [X] Pretty graphs are viewable in the Ceph Dashboard
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Unless you **wanted** to deploy your cluster components in a separate namespace to the operator, of course!
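To illustrate "StorageClasses... consumed by your pods", here's a minimal PVC sketch against the conventional rook-ceph block StorageClass (the class name follows Rook's example manifests; check `kubectl get storageclass` for yours):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-ceph-claim
  namespace: default
spec:
  storageClassName: rook-ceph-block   # assumption: the name from Rook's examples
  accessModes:
    - ReadWriteOnce                   # RBD block volumes attach to one node at a time
  resources:
    requests:
      storage: 5Gi
```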
@@ -178,4 +178,4 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed
 
 * [ ] Deploy the ceph [cluster](/kubernetes/persistence/rook-ceph/cluster/) using a CR
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -271,6 +271,6 @@ Are things not working as expected? Try one of the following to look for issues:
 3. Watch the scheduler logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=scheduler`
 4. Watch the controller node logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=controller`
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group
@@ -624,6 +624,6 @@ root@shredder:~#
 
 And now when you create your sealed secrets, refer to the public key you just created using `--cert <path to cert>`. These secrets will be decryptable by **any** sealedsecrets controller bootstrapped with the same keypair (*above*).
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: There's no harm in storing the **public** key in the repo though, which means it's easy to refer to when sealing secrets.
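For orientation, the artifact you end up committing to the repo is a SealedSecret CR; only a controller holding the matching private key can decrypt it. A structural sketch only - the ciphertext below is a placeholder, since the real value is generated by kubeseal:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret        # hypothetical
  namespace: my-namespace
spec:
  encryptedData:
    password: AgB3...    # placeholder: real ciphertext comes from kubeseal --cert <path to cert>
```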
@@ -176,4 +176,4 @@ Still with me? Good. Move on to understanding Helm charts...
 [^1]: I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).
 ```
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -52,7 +52,7 @@ spec:
 Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/cert-manager`. I create this example Kustomization in my flux repo:
 
 ```yaml title="/bootstrap/kustomizations/kustomization-cert-manager.yaml"
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
+apiVersion: kustomize.toolkit.fluxcd.io/v1
 kind: Kustomization
 metadata:
   name: cert-manager
@@ -65,7 +65,6 @@ spec:
   sourceRef:
     kind: GitRepository
     name: flux-system
-  validation: server
   healthChecks:
     - apiVersion: apps/v1
       kind: Deployment
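Putting the two hunks above together, the post-migration Kustomization would look roughly like this; the `kustomize.toolkit.fluxcd.io/v1` API drops the deprecated `validation` field. The interval/path/prune values and the healthCheck target name/namespace are assumptions based on a typical cert-manager install:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cert-manager
  namespace: flux-system
spec:
  interval: 15m
  path: ./cert-manager
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: cert-manager       # assumption: the Deployment name in a typical install
      namespace: cert-manager
```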
@@ -135,6 +134,6 @@ What do we have now? Well, we've got the cert-manager controller **running**, bu
 
 If your certificates **aren't** created as you expect, then the best approach is to check the cert-manager logs, by running `kubectl logs -n cert-manager -l app.kubernetes.io/name=cert-manager`.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Why yes, I **have** accidentally rate-limited myself by deleting/recreating my prod certificates a few times!
@@ -17,6 +17,6 @@ I've split this section, conceptually, into 3 separate tasks:
 2. Setup "[Issuers](/kubernetes/ssl-certificates/letsencrypt-issuers/)" for LetsEncrypt, which Cert Manager will use to request certificates
 3. Setup a [wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/) in such a way that it can be used by Ingresses like Traefik or Nginx
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: I had a really annoying but smart boss once who taught me this. Hi Mark! :wave:
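Task 2 above (the Issuers) boils down to a ClusterIssuer CR. A minimal sketch using a DNS01 solver; the email, secret name, and choice of Cloudflare as DNS provider are illustrative, and the `apiTokenSecretRef` ties in with the footnote further down this diff:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com           # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod         # where the ACME account key is stored
    solvers:
      - dns01:
          cloudflare:                # assumption: Cloudflare-managed DNS
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```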
@@ -32,7 +32,7 @@ metadata:
 Now we need a kustomization to tell Flux to install any YAMLs it finds in `/letsencrypt-wildcard-cert`. I create this example Kustomization in my flux repo:
 
 ```yaml title="/bootstrap/kustomizations/kustomization-letsencrypt-wildcard-cert.yaml"
-apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
+apiVersion: kustomize.toolkit.fluxcd.io/v1
 kind: Kustomization
 metadata:
   name: letsencrypt-wildcard-cert
@@ -48,7 +48,6 @@ spec:
   sourceRef:
     kind: GitRepository
     name: flux-system
-  validation: server
 ```
 
 !!! tip
@@ -140,6 +139,6 @@ Events: <none>
 
 Provided your account is registered, you're ready to proceed with [creating a wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/)!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Since a ClusterIssuer is not a namespaced resource, it doesn't exist in any specific namespace. Therefore, my assumption is that the `apiTokenSecretRef` secret is only "looked for" when a certificate (*which **is** namespaced*) requires validation.
@@ -164,6 +164,6 @@ Look for secrets across the whole cluster, by running `kubectl get secrets -A |
 
 If your certificates **aren't** created as you expect, then the best approach is to check the secret-replicator logs, by running `kubectl logs -n secret-replicator -l app.kubernetes.io/name=secret-replicator`.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: To my great New Zealandy confusion, "Kiwigrid GmbH" is a German company :shrug:
@@ -108,6 +108,6 @@ spec:
 
 Commit the certificate and follow the steps above to confirm that your prod certificate has been issued.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: This process can take a frustratingly long time, and watching the cert-manager logs at least gives some assurance that it's progressing!
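For completeness, "the certificate" committed in this final hunk is a cert-manager Certificate CR. A minimal wildcard sketch (names and domain are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-wildcard-cert     # hypothetical
  namespace: letsencrypt-wildcard-cert
spec:
  secretName: letsencrypt-wildcard-cert   # the TLS secret cert-manager will create
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "example.com"
    - "*.example.com"
```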