mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-13 17:56:26 +00:00

Add authentik, tidy up recipe-footer

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
David Young
2023-10-31 14:37:29 +13:00
parent 0378e356fe
commit f22dd8eb50
142 changed files with 805 additions and 708 deletions


@@ -40,6 +40,6 @@ A few things you should know:
 In summary, Local Path Provisioner is fine if you have very specifically sized workloads and you don't care about node redundancy.
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 [^1]: [TopoLVM](/kubernetes/persistence/topolvm/) also creates per-node volumes which aren't "portable" between nodes, but because it relies on LVM, it is "capacity-aware", and is able to distribute storage among multiple nodes based on available capacity.
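The footer change repeated throughout this commit swaps the pymdownx "snippets" syntax (`--8<--`) for a Jinja-style include, which suggests the site now renders `recipe-footer.md` via the mkdocs-macros plugin. A minimal sketch of what that might require in `mkdocs.yml`, assuming mkdocs-macros-plugin is the mechanism in use and that the footer lives in a hypothetical `docs/_includes` directory:

```yaml
# mkdocs.yml (sketch): enable the macros plugin so that
# {% include 'recipe-footer.md' %} is expanded by Jinja at build time.
# Plugin name and include_dir assume mkdocs-macros-plugin; the
# directory path is hypothetical.
plugins:
  - search
  - macros:
      include_dir: docs/_includes
```

With this in place, a plain `{% include 'recipe-footer.md' %}` in any page body is resolved relative to `include_dir` during the build.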


@@ -243,6 +243,6 @@ What have we achieved? We have a storage provider that can use an NFS server as
 * [X] We have a new storage provider
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 [^1]: The reason I shortened it is so I didn't have to type nfs-subdirectory-provider each time. If you want that sort of pain in your life, feel free to change it!
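The hunk above comes from the NFS subdirectory provisioner recipe, which ends with a working StorageClass. A minimal sketch of a PVC consuming it, assuming the shortened StorageClass name `nfs` mentioned in the footnote:

```yaml
# Hypothetical PVC consuming the nfs-subdir-external-provisioner's
# StorageClass; "nfs" assumes the shortened name from the footnote.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  accessModes:
    - ReadWriteMany      # NFS-backed volumes can be mounted read-write by many pods
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
```

Once bound, the provisioner creates a per-PVC subdirectory on the NFS export.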


@@ -415,6 +415,6 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed
 * [X] StorageClasses are available so that the cluster storage can be consumed by your pods
 * [X] Pretty graphs are viewable in the Ceph Dashboard
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 [^1]: Unless you **wanted** to deploy your cluster components in a separate namespace to the operator, of course!
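The checklist above can be verified from the command line. A sketch, assuming Rook's default `rook-ceph` namespace and that the optional toolbox deployment is installed:

```shell
# Confirm the CephCluster CR reports Ready (namespace assumes Rook defaults)
kubectl -n rook-ceph get cephcluster

# List the StorageClasses the recipe created (cluster-scoped resource)
kubectl get storageclass

# If the rook-ceph-tools deployment is present, check overall cluster health
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```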


@@ -178,4 +178,4 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed
 * [ ] Deploy the ceph [cluster](/kubernetes/persistence/rook-ceph/cluster/) using a CR
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}


@@ -271,6 +271,6 @@ Are things not working as expected? Try one of the following to look for issues:
 3. Watch the scheduler logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=scheduler`
 4. Watch the controller node logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=controller`
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 [^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group
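The footnote above refers to mapping Volume Groups to StorageClasses. In TopoLVM this is done through lvmd "device-classes"; a rough sketch of the lvmd config, with field names as I recall them from TopoLVM's `lvmd.yaml` (verify against the TopoLVM docs, and note the volume-group names here are hypothetical):

```yaml
# lvmd.yaml (sketch): one device-class per LVM Volume Group.
# Each device-class can then be targeted by its own StorageClass
# via a device-class parameter.
device-classes:
  - name: ssd
    volume-group: vg-ssd     # hypothetical VG name
    default: true
    spare-gb: 10             # capacity lvmd keeps unallocated
  - name: hdd
    volume-group: vg-hdd     # hypothetical VG name
```

Each entry becomes an independently capacity-tracked pool, which is what makes the per-VG StorageClass split possible.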