Add Kubernetes OIDC recipes
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
@@ -21,7 +21,7 @@ NAME READY MESSAGE REVISION SUSPENDED
And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:

```bash
~ ❯ k get pods -n {{ page.meta.helmrelease_namespace }} -l release={{ page.meta.helmrelease_name }}
~ ❯ k get pods -n {{ page.meta.helmrelease_namespace }} -l app.kubernetes.io/name={{ page.meta.helmrelease_name }}
NAME READY STATUS RESTARTS AGE
{{ page.meta.helmrelease_name }}-7c94b7446d-nwsss 1/1 Running 0 5m14s
~ ❯

114
_includes/kubernetes-oidc-setup.md
Normal file
@@ -0,0 +1,114 @@
### Install kubelogin

For CLI-based access to your cluster, you'll need a "helper" to perform the OIDC magic on behalf of kubectl. Install [int128/kubelogin](https://github.com/int128/kubelogin), which is designed for exactly this purpose.

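If you have [krew](https://krew.sigs.k8s.io/), installing it is a one-liner (*a sketch - see the kubelogin README for other install methods*):

```bash
# kubelogin installs as a kubectl plugin named "oidc-login"
kubectl krew install oidc-login
```
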
Use kubelogin to test your OIDC parameters, by running:

```bash
kubectl oidc-login setup \
  --oidc-issuer-url=ISSUER_URL \
  --oidc-client-id=YOUR_CLIENT_ID \
  --oidc-client-secret=YOUR_CLIENT_SECRET
```

All going well, your browser will open a new window, logging you into authentik, and on the CLI you should get output something like this:

```
~ ❯ kubectl oidc-login setup --oidc-issuer-url=https://authentik.example.com/application/o/kube-apiserver/ --oidc-client-id=kube-apiserver --oidc-client-secret=cVj4YqmB4VPcq6e7 --oidc-extra-scope=groups,email
authentication in progress...

## 2. Verify authentication

You got a token with the following claims:

{
  "iss": "https://authentik.example.com/application/o/kube-apiserver/",
  "sub": "363d4d0814dbad2d930308dc848342e328b76f925ebba0978a51ddad699022b",
  "aud": "kube-apiserver",
  "exp": 1701511022,
  "iat": 1698919022,
  "auth_time": 1698891834,
  "acr": "goauthentik.io/providers/oauth2/default",
  "nonce": "qgKevTR1gU9Mh14HzOPPCTaP_Mgu9nvY7ZhJkCeFpGY",
  "at_hash": "TRZOLHHxFxl9HB7SHCIcMw",
  "email": "davidy@example.com",
  "email_verified": true,
  "groups": [
    "authentik Admins",
    "admin-kubernetes"
  ]
}
```

Huzzah, authentication works! :partying_face:

!!! tip
    Make sure you see a `groups` claim in the output above - if you don't, revisit your scope mapper, and the claims configured in your provider under **Advanced Protocol Settings**!

### Assemble your kubeconfig

Your kubectl access to K3s uses a kubeconfig file at `/etc/rancher/k3s/k3s.yaml`. Treat this file like a root password - it includes a long-lived token which grants `cluster-admin` (*"god mode"*) on your cluster.

Copy the `k3s.yaml` file to your local desktop (*the one with a web browser*), into `$HOME/.kube/config`, and modify it, changing `server: https://127.0.0.1:6443` to match the URL of (*one of*) your control-plane node(*s*).

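For example, assuming your control-plane node were reachable at `k3s.example.com` (*a hypothetical hostname - substitute your own*), something like this would do it:

```bash
# Grab the admin kubeconfig from the K3s node...
scp root@k3s.example.com:/etc/rancher/k3s/k3s.yaml ~/.kube/config
chmod 600 ~/.kube/config

# ...and point it at the node itself, rather than localhost
sed -i 's|https://127.0.0.1:6443|https://k3s.example.com:6443|' ~/.kube/config
```
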
Test using `kubectl cluster-info` locally, ensuring that you have access.

Amend the kubeconfig file for your OIDC user, by running a variation of:

```bash
kubectl config set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://authentik.example.com/application/o/kube-apiserver/ \
  --exec-arg=--oidc-client-id=kube-apiserver \
  --exec-arg=--oidc-client-secret=<your client secret> \
  --exec-arg=--oidc-extra-scope=groups \
  --exec-arg=--oidc-extra-scope=email
```

Test your OIDC powerz by running `kubectl --user=oidc cluster-info`.

Now gasp in dismay as you discover that your request was denied for lack of access! :scream:

```
Error from server (Forbidden): services is forbidden: User "oidc:davidy@funkypenguin.co.nz"
cannot list resource "services" in API group "" in the namespace "kube-system"
```

### Create clusterrolebinding

That's what you wanted, right? Security? Locking out unauthorized users? Ha.

Now that we've confirmed that kube-apiserver knows your **identity** (authn), create a clusterrolebinding to tell it what your identity is **authorized** to do (authz), based on your group membership.

The following is a simple clusterrolebinding which will grant all members of the `admin-kube-apiserver` group full access (`cluster-admin`), to get you started:

```yaml title="/authentik/clusterrolebinding-oidc-group-admin-kube-apiserver.yaml"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-group-admin-kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # (1)!
subjects:
  - kind: Group
    name: oidc:admin-kube-apiserver # (2)!
```

1. The role to bind
2. The subject (group, in this case) of the binding

Apply your clusterrolebinding using the usual GitOps magic (*I put mine in `/authentik/clusterrolebinding-oidc-group-admin-kube-apiserver.yaml`*).

Run `kubectl --user=oidc cluster-info` again, and confirm you are now authorized to see the cluster details.

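You can also sanity-check the binding from your static admin context, by impersonating a (*hypothetical*) member of the group:

```bash
# Requires impersonation rights, which cluster-admin has
kubectl auth can-i '*' '*' \
  --as="oidc:someone@example.com" \
  --as-group="oidc:admin-kube-apiserver"
```
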
If this works, set your user context permanently, using `kubectl config set-context --current --user=oidc`.

!!! tip "whoami?"
    Run `kubectl krew install whoami` to install the `whoami` plugin, and then `kubectl whoami` to confirm you're logged in with your OIDC account.
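
    Expect output reflecting your prefixed OIDC identity, something like:

    ```
    ~ ❯ kubectl whoami
    oidc:davidy@example.com
    ```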

You now have OIDC-secured CLI access to your cluster!

2
_snippets/blog-series-elfhosted.md
Normal file
@@ -0,0 +1,2 @@
1. Beginning
2. Setup Kubernetes
BIN
docs/images/authentik-kube-apiserver-1.png
Normal file
After Width: | Height: | Size: 194 KiB |
BIN
docs/images/authentik-kube-apiserver-2.png
Normal file
After Width: | Height: | Size: 178 KiB |
BIN
docs/images/authentik-kube-apiserver-3.png
Normal file
After Width: | Height: | Size: 219 KiB |
BIN
docs/images/authentik-kube-apiserver-4.png
Normal file
After Width: | Height: | Size: 211 KiB |
BIN
docs/images/authentik-kube-apiserver-5.png
Normal file
After Width: | Height: | Size: 211 KiB |
BIN
docs/images/authentik-kube-apiserver-6.png
Normal file
After Width: | Height: | Size: 153 KiB |
BIN
docs/images/eks-authentic-1.png
Normal file
After Width: | Height: | Size: 224 KiB |
33
docs/kubernetes/cluster/eks.md
Normal file
@@ -0,0 +1,33 @@
---
description: Create a simple Kubernetes cluster on EKS
title: Create your Kubernetes cluster on EKS
---

# A basic EKS cluster

If you're already in the AWS ecosystem, it may make sense for you to deploy your Kubernetes cluster using EKS.

What follows are notes I made while establishing a very basic cluster to work on [OIDC authentication for EKS](/kubernetes/oidc-authentication/eks-authentik/) using [authentik][k8s/authentik].

## Ingredients

1. AWS CLI tools `awscli` and `eksctl`, configured for your IAM account
2. Some spare change :moneybag: on your AWS account for a few hours / days of EC2 for the underlying nodepool.

## Preparation

### Create cluster

Creating an EKS cluster is a one-line command. I ran `eksctl create cluster --name funkypenguin-authentik-test --region ap-southeast-2` to create my cluster.

It took 14 minutes to complete :man_facepalming:

### Setup EBS CSI driver

The default storageclass (gp2) didn't work for me, and I like storage based on CSI, so that I can use [Velero][velero] with [csi-snapshotter](/kubernetes/backup/csi-snapshots), so I added the [EBS CSI Driver](/kubernetes/persistence/aws-ebs/). This is optional if you don't care about CSI or persistent storage!

## Summary

Well, I'm done. This is probably the shortest recipe ever (*although 14 min is a comparatively long time, IMO, to deploy a simple cluster*). The links on this page to the various steps (OIDC, storage) will provide more detail on those particular configs.

{% include 'recipe-footer.md' %}

@@ -14,7 +14,7 @@ Popular options are:

* [DigitalOcean](/kubernetes/cluster/digitalocean/)
* Google Kubernetes Engine (GKE)
* Amazon Elastic Kubernetes Service (EKS)
* [Amazon Elastic Kubernetes Service (EKS)](/kubernetes/cluster/eks/)
* Azure Kubernetes Service (AKS)

### Upgrades
@@ -47,7 +47,7 @@ Go with a managed provider if you want your infrastructure to be resilient to yo

Popular options are:

* Rancher's K3s
* [Rancher's K3s](/kubernetes/cluster/k3s/)
* Ubuntu's Charmed Kubernetes

### Flexible

93
docs/kubernetes/oidc-authentication/authentik.md
Normal file
@@ -0,0 +1,93 @@
---
title: Configure Authentik as an OIDC provider for Kubernetes
description: How to configure Authentik as an OIDC provider for Kubernetes cluster authentication
---
# Configure authentik as an OIDC provider for Kubernetes

This recipe describes how to configure [authentik][k8s/authentik] as an OIDC provider for Kubernetes (kube-apiserver) authentication.

For details on **why** you'd want to do this, see the [Kubernetes Authentication Guide](/kubernetes/oidc-authentication/).

## Requirements

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Authentik][k8s/authentik] deployed per the recipe

## Setup authentik for kube-apiserver

Start by logging into your [authentik][k8s/authentik] instance with a superuser account.

### Create provider

Navigate to **Applications** -> **Providers**, and **Create** a new `OAuth2/OpenID Provider`.

![Screenshot of creating an oauth provider in authentik](/images/authentik-kube-apiserver-1.png){ loading=lazy }

Give your provider a name (*I use `kube-apiserver`*), and set the following:

* Authentication flow: `default-authentication-flow (Welcome to authentik!)`
* Authorization flow: `default-provider-authorization-implicit-consent (Authorize Application)`
* Client type: `Confidential`

![Screenshot of oauth provider flow settings in authentik](/images/authentik-kube-apiserver-2.png){ loading=lazy }

Scroll down, and set:

* Client ID: `kube-apiserver` *take note, this is non-default*
* Client Secret: `<pick a secret, or use the randomly generated one>`
* Redirect URIs/Origins (RegEx): `http://localhost:18000` [^1]

![Screenshot of oauth provider client settings in authentik](/images/authentik-kube-apiserver-3.png){ loading=lazy }

Under **Advanced Protocol Settings**, set the scopes to include the built-in `email` scope, as well as the extra `oidc-groups` scope you added when [initially setting up authentik][k8s/authentik]:


|
||||
|
||||
Finally, enable **Include claims in id_token**, instructing authentik to send the user claims back with the id token:
|
||||
|
||||

|
||||
|
||||
|
||||
..and click **Finish**. On the following summary page, under **OAuth2 Provider**, take note of the **OpenID Configuration** URL (*`/application/o/kube-apiserver/.well-known/openid-configuration` if you followed my conventions above*) - you'll need this when configuring Kubernetes.
|
||||
|
||||
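You can sanity-check that URL with a quick curl (*a sketch, assuming my naming conventions above, and that you have `jq` installed*):

```bash
curl -s https://authentik.example.com/application/o/kube-apiserver/.well-known/openid-configuration | jq .issuer
# Should print: "https://authentik.example.com/application/o/kube-apiserver/"
```
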
!!! question "What's that redirect URI for?"
    We'll use [kubelogin](https://github.com/int128/kubelogin) to confirm OIDC login is working, which runs locally on port 18000 to provide a web-based OIDC login flow.

### Create application

authentik requires a one-to-one relationship between applications and providers, so navigate to **Applications** -> **Applications**, and **create** an application for your new provider.

You can name it anything you want (*but it might be more sensible to name it for your provider, rather than a superhero! :superhero:*)

![Screenshot of application settings in authentik](/images/authentik-kube-apiserver-6.png){ loading=lazy }

### Create group

Remember how we set up a groups property-mapper when deploying [authentik][k8s/authentik]? When kube-apiserver requests the `groups` scope from authentik, the mapper will return all of a user's group names.

You can create whatever groups you prefer - later on, you'll configure clusterrolebindings to provide RBAC access to group members. I'd start with a group called `admin-kube-apiserver`, which we'll simply map to the `cluster-admin` clusterrole.

Navigate to **Directory** -> **Groups**, create the necessary groups, and make yourself a member.

## Summary

What have we achieved? We've configured authentik as an OIDC provider, and we've got the details necessary to configure our Kubernetes platform(s) to authenticate against it!

!!! summary "Summary"
    Created:

    * [X] [authentik][k8s/authentik] configured as an OIDC provider for kube-apiserver
    * [X] OIDC parameters, including:
        * [X] OIDC Client id (`kube-apiserver`)
        * [X] OIDC Client secret (`<your chosen secret>`)
        * [X] OIDC Configuration URL (`https://<your-authentik-host>/application/o/kube-apiserver/.well-known/openid-configuration`)

What's next?

Return to the [Kubernetes Authentication Guide](/kubernetes/oidc-authentication/) for instructions on configuring your particular Kubernetes platform!

[^1]: Later on, as we add more applications which need kube-apiserver authentication, we'll add more redirect URIs.

{% include 'recipe-footer.md' %}

80
docs/kubernetes/oidc-authentication/eks-authentik.md
Normal file
@@ -0,0 +1,80 @@
---
title: Configure EKS for OIDC authentication with Authentik
description: How to configure your EKS Kubernetes cluster for OIDC authentication with Authentik
---
# Authenticate to Kubernetes with OIDC on EKS

This recipe describes how to configure an EKS cluster for OIDC authentication against an [authentik][k8s/authentik] instance.

For details on **why** you'd want to do this, see the [Kubernetes Authentication Guide](/kubernetes/oidc-authentication/).

## Requirements

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/) deployed on Amazon EKS
    * [x] [authentik][k8s/authentik] deployed per the recipe, secured with a valid SSL cert (*no self-signed shenanigans will work here!*)
    * [x] authentik [configured as an OIDC provider for kube-apiserver](/kubernetes/oidc-authentication/authentik/)
    * [x] `eksctl` tool configured and authorized for your IAM account

## Setup EKS for OIDC auth

In order to associate an OIDC provider with your EKS cluster[^1], you'll need (*guess what?*)..

.. some YAML.

Create an EKS magic YAML[^2] like this, and tweak it for your cluster name, region, and issuerUrl:

```yaml title="eks-cluster-setup.yaml"
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: funkypenguin-authentik-test
  region: ap-southeast-2

identityProviders:
  - name: authentik
    type: oidc
    issuerUrl: https://authentik.funkypenguin.de/application/o/kube-apiserver/ # (1)!
    clientId: kube-apiserver
    usernameClaim: email
    usernamePrefix: 'oidc:'
    groupsClaim: groups
    groupsPrefix: 'oidc:'
```

1. Make sure this ends in a `/`, and doesn't include `.well-known/openid-configuration`

Apply the EKS magic by running `eksctl associate identityprovider -f eks-cluster-setup.yaml`.

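You can confirm the association from the CLI too (*a sketch, using my cluster name / region from above*):

```bash
eksctl get identityprovider --cluster funkypenguin-authentik-test --region ap-southeast-2
```
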
That's it! It may take a few minutes (you can verify it's ready on your EKS console), but once complete, the authentik provider should be visible in your cluster console, under the "Authentication" tab, as illustrated below:

![Screenshot of EKS cluster with authentik OIDC provider associated](/images/eks-authentic-1.png){ loading=lazy }

{% include 'kubernetes-oidc-setup.md' %}

## Summary

What have we achieved?

We've set up our EKS cluster to authenticate against authentik, running on that same cluster! We can now create multiple users (*with multiple levels of access*) without having to provide them with tricky IAM accounts, and deploy kube-apiserver-integrated tools like Kubernetes Dashboard or Weave GitOps for nice secured UIs.

!!! summary "Summary"
    Created:

    * [X] EKS cluster with OIDC authentication against [authentik][k8s/authentik]
    * [X] Ability to support:
        * [X] Kubernetes Dashboard (*coming soon*)
        * [X] Weave GitOps (*coming soon*)
    * [X] We've also retained our static, IAM-based `kubernetes-admin` credentials in case OIDC auth fails at some point (*keep them safe!*)

What's next?

Deploy Weave GitOps to visualize your Flux / GitOps state, and Kubernetes Dashboard for UI management of your cluster!

[^1]: AWS docs are at https://docs.aws.amazon.com/eks/latest/userguide/authenticate-oidc-identity-provider.html
[^2]: For details on available options, see https://docs.aws.amazon.com/cli/latest/reference/eks/associate-identity-provider-config.html

{% include 'recipe-footer.md' %}

37
docs/kubernetes/oidc-authentication/index.md
Normal file
@@ -0,0 +1,37 @@
---
title: Configure Kubernetes for OIDC authentication
description: How to configure your Kubernetes cluster for OIDC authentication, so that you can provide RBAC-protected access to multiple users
---
# Authenticate to Kubernetes with OIDC

So you've got a shiny Kubernetes cluster, and you're probably using the `cluster-admin` config which was created as a result of the initial bootstrap.

While this hard-coded `cluster-admin` credential is OK while you're bootstrapping, and should be safely stored somewhere as a password-of-last-resort, you'll probably want to secure your cluster with something a little more... secure.

Consider the following downsides to a single, static, long-lived credential:

1. It can get stolen
2. It can't safely be shared (*you might want to give your team access to the cluster, or even a limited subset of admin access*)
3. It can't be MFA'd
4. Using it for the Kubernetes Dashboard (*copying and pasting it into a browser window*) is a huge PITA

True to form, Kubernetes doesn't provide a turnkey access solution, but it does provide all the necessary primitives (*RBAC, api-server arguments, etc.*) to build your own, starting with [authenticating and authorizing access to the apiserver](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server).

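Whichever platform you run, the recipes below all boil down to feeding kube-apiserver a handful of OIDC flags along these lines (*values here are illustrative - the platform pages cover the specifics*):

```bash
--oidc-issuer-url=https://authentik.example.com/application/o/kube-apiserver/
--oidc-client-id=kube-apiserver
--oidc-username-claim=email
--oidc-groups-claim=groups
--oidc-username-prefix=oidc:
--oidc-groups-prefix=oidc:
```
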
## Requirements

Securing access to the Kubernetes API server requires an OIDC provider, be it an external service like Auth0 or Okta, or a self-hosted, open-source IdP like KeyCloak or [authentik][k8s/authentik].

### Setup Provider

1. Setup [Authentik for Kubernetes API authentication](/kubernetes/oidc-authentication/authentik/)
2. KeyCloak (*coming soon*)

### Configure Kubernetes for OIDC auth

Once you've configured your OIDC provider, review the following, based on your provider and your Kubernetes platform:

#### Authentik

* [Authenticate K3s with Authentik as an OIDC provider](/kubernetes/oidc-authentication/k3s-authentik/)
* [Authenticate EKS with Authentik as an OIDC provider](/kubernetes/oidc-authentication/eks-authentik/)
* Authenticate a kubeadm cluster using Authentik as an OIDC provider
199
docs/kubernetes/oidc-authentication/k3s-authentik.md
Normal file
@@ -0,0 +1,199 @@
---
title: Configure K3s for OIDC authentication with Authentik
description: How to configure your K3s Kubernetes cluster for OIDC authentication with Authentik
---
# Authenticate to Kubernetes with OIDC on K3s

This recipe describes how to configure K3s for OIDC authentication against an [authentik][k8s/authentik] instance.

For details on **why** you'd want to do this, see the [Kubernetes Authentication Guide](/kubernetes/oidc-authentication/).

## Requirements

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/) deployed using [K3s](/kubernetes/cluster/k3s)
    * [x] [authentik][k8s/authentik] deployed per the recipe
    * [x] authentik [configured as an OIDC provider for kube-apiserver](/kubernetes/oidc-authentication/authentik/)

## Setup K3s for OIDC auth

If you followed the K3s install guide, you'll have installed K3s with a command something like this:

```bash
MYSECRET=iambatman
curl -fL https://get.k3s.io | K3S_TOKEN=${MYSECRET} \
  sh -s - --disable traefik server
```

To configure the apiserver to perform OIDC authentication, you need to add some extra kube-apiserver arguments. There are two ways to do this:

1. Append the arguments to your `curl | bash` command, like a lunatic
2. Add the arguments to a config file which K3s will parse upon start, like a gentleman

Here's the lunatic option:

```bash title="Lunatic curl | bash option"
--kube-apiserver-arg=oidc-issuer-url=https://authentik.example.com/application/o/kube-apiserver/
--kube-apiserver-arg=oidc-client-id=kube-apiserver
--kube-apiserver-arg=oidc-username-claim=email
--kube-apiserver-arg=oidc-groups-claim=groups
--kube-apiserver-arg=oidc-username-prefix='oidc:'
--kube-apiserver-arg=oidc-groups-prefix='oidc:'
```

And here's the gentlemanly option:

Create `/etc/rancher/k3s/config.yaml`, and add:

```yaml title="Gentlemanly YAML config option"
kube-apiserver-arg:
  - "oidc-issuer-url=https://authentik.infra.example.com/application/o/kube-apiserver/"
  - "oidc-client-id=kube-apiserver"
  - "oidc-username-claim=email"
  - "oidc-groups-claim=groups"
  - "oidc-username-prefix='oidc:'"
  - "oidc-groups-prefix='oidc:'"
```

Now restart K3s (*`systemctl restart k3s` on Ubuntu*), and confirm it starts correctly by watching the logs (*`journalctl -u k3s -f` on Ubuntu*).

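For example (*a sketch, assuming Ubuntu/systemd as above*):

```bash
sudo systemctl restart k3s

# Watch for crash-loops caused by mistyped apiserver args...
sudo journalctl -u k3s -f

# ...and optionally confirm the flags reached the apiserver
ps aux | grep -o 'oidc-[a-z-]*=[^ ]*'
```
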
Assuming nothing explodes, you're good-to-go on attempting to actually connect...

### Install kubelogin

For CLI-based access to your cluster, you'll need a "helper" to perform the OIDC magic on behalf of kubectl. Install [int128/kubelogin](https://github.com/int128/kubelogin), which is designed for exactly this purpose.

Use kubelogin to test your OIDC parameters, by running:

```bash
kubectl oidc-login setup \
  --oidc-issuer-url=ISSUER_URL \
  --oidc-client-id=YOUR_CLIENT_ID \
  --oidc-client-secret=YOUR_CLIENT_SECRET
```

All going well, your browser will open a new window, logging you into authentik, and on the CLI you should get output something like this:

```
~ ❯ kubectl oidc-login setup --oidc-issuer-url=https://authentik.example.com/application/o/kube-apiserver/ --oidc-client-id=kube-apiserver --oidc-client-secret=cVj4YqmB4VPcq6e7 --oidc-extra-scope=groups,email
authentication in progress...

## 2. Verify authentication

You got a token with the following claims:

{
  "iss": "https://authentik.example.com/application/o/kube-apiserver/",
  "sub": "363d4d0814dbad2d930308dc848342e328b76f925ebba0978a51ddad699022b",
  "aud": "kube-apiserver",
  "exp": 1701511022,
  "iat": 1698919022,
  "auth_time": 1698891834,
  "acr": "goauthentik.io/providers/oauth2/default",
  "nonce": "qgKevTR1gU9Mh14HzOPPCTaP_Mgu9nvY7ZhJkCeFpGY",
  "at_hash": "TRZOLHHxFxl9HB7SHCIcMw",
  "email": "davidy@example.com",
  "email_verified": true,
  "groups": [
    "authentik Admins",
    "admin-kubernetes"
  ]
}
```

Huzzah, authentication works! :partying_face:

!!! tip
    Make sure you see a `groups` claim in the output above - if you don't, revisit your scope mapper, and the claims configured in your provider under **Advanced Protocol Settings**!

### Assemble your kubeconfig

Your kubectl access to K3s uses a kubeconfig file at `/etc/rancher/k3s/k3s.yaml`. Treat this file like a root password - it includes a long-lived token which grants `cluster-admin` (*"god mode"*) on your cluster.

Copy the `k3s.yaml` file to your local desktop (*the one with a web browser*), into `$HOME/.kube/config`, and modify it, changing `server: https://127.0.0.1:6443` to match the URL of (*one of*) your control-plane node(*s*).

Test using `kubectl cluster-info` locally, ensuring that you have access.

Amend the kubeconfig file for your OIDC user, by running a variation of:

```bash
kubectl config set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://authentik.example.com/application/o/kube-apiserver/ \
  --exec-arg=--oidc-client-id=kube-apiserver \
  --exec-arg=--oidc-client-secret=<your client secret> \
  --exec-arg=--oidc-extra-scope=groups \
  --exec-arg=--oidc-extra-scope=email
```

Test your OIDC powerz by running `kubectl --user=oidc cluster-info`.

Now gasp in dismay as you discover that your request was denied for lack of access! :scream:

```
Error from server (Forbidden): services is forbidden: User "oidc:davidy@funkypenguin.co.nz"
cannot list resource "services" in API group "" in the namespace "kube-system"
```

### Create clusterrolebinding

That's what you wanted, right? Security? Locking out unauthorized users? Ha.

Now that we've confirmed that kube-apiserver knows your **identity** (authn), create a clusterrolebinding to tell it what your identity is **authorized** to do (authz), based on your group membership.

The following is a simple clusterrolebinding which will grant all members of the `admin-kube-apiserver` group full access (`cluster-admin`), to get you started:

```yaml title="/authentik/clusterrolebinding-oidc-group-admin-kube-apiserver.yaml"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-group-admin-kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # (1)!
subjects:
  - kind: Group
    name: oidc:admin-kube-apiserver # (2)!
```

1. The role to bind
2. The subject (group, in this case) of the binding

Apply your clusterrolebinding using the usual GitOps magic (*I put mine in `/authentik/clusterrolebinding-oidc-group-admin-kube-apiserver.yaml`*).

Run `kubectl --user=oidc cluster-info` again, and confirm you are now authorized to see the cluster details.

If this works, set your user context permanently, using `kubectl config set-context --current --user=oidc`.

!!! tip "whoami?"
    Run `kubectl krew install whoami` to install the `whoami` plugin, and then `kubectl whoami` to confirm you're logged in with your OIDC account.

You now have OIDC-secured CLI access to your cluster!

## Summary

What have we achieved?

We've set up our K3s cluster to authenticate against authentik, running on that same cluster! We can now create multiple users (*with multiple levels of access*) without having to provide them with tricky static credentials, and deploy kube-apiserver-integrated tools like Kubernetes Dashboard or Weave GitOps for nice secured UIs.

!!! summary "Summary"
    Created:

    * [X] K3s cluster with OIDC authentication against [authentik][k8s/authentik]
    * [X] Ability to support:
        * [X] Kubernetes Dashboard (*coming soon*)
        * [X] Weave GitOps (*coming soon*)
    * [X] We've also retained our static `kubernetes-admin` credentials in case OIDC auth fails at some point (*keep them safe!*)

What's next?

Deploy Weave GitOps to visualize your Flux / GitOps state, and Kubernetes Dashboard for UI management of your cluster!

[^1]: Later on, as we add more applications which need kube-apiserver authentication, we'll add more redirect URIs.

{% include 'recipe-footer.md' %}

231
docs/kubernetes/persistence/aws-ebs.md
Normal file
@@ -0,0 +1,231 @@
---
title: Install the AWS EBS CSI driver on EKS
description: Add dynamically-provisioned EBS volume support to EKS with the AWS EBS CSI driver
values_yaml_url: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/charts/aws-ebs-csi-driver/values.yaml
helm_chart_version: 2.24.x
helm_chart_name: aws-ebs-csi-driver
helm_chart_repo_name: aws-ebs-csi-driver
helm_chart_repo_url:
helmrelease_name: aws-ebs-csi-driver
helmrelease_namespace: aws-ebs-csi-driver
kustomization_name: aws-ebs-csi-driver
slug: EBS CSI Driver
status: new
upstream: https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
github_repo: https://github.com/kubernetes-sigs/aws-ebs-csi-driver
---

# Install the AWS EBS CSI driver

The Amazon Elastic Block Store Container Storage Interface (CSI) Driver provides a CSI interface used by Container Orchestrators to manage the lifecycle of Amazon EBS volumes. It's a convenient way to consume EBS storage, which works consistently with other CSI-based tooling (*for example, you can dynamically expand and snapshot volumes*).

??? question "Tell me about the features..."

    * Static Provisioning - Associate an externally-created EBS volume with a PersistentVolume (PV) for consumption within Kubernetes.
    * Dynamic Provisioning - Automatically create EBS volumes and associated PersistentVolumes (PV) from PersistentVolumeClaims (PVC). Parameters can be passed via a StorageClass for fine-grained control over volume creation.
    * Mount Options - Mount options can be specified in the PersistentVolume (PV) resource to define how the volume should be mounted.
    * NVMe Volumes - Consume NVMe volumes from EC2 Nitro instances.
    * Block Volumes - Consume an EBS volume as a raw block device.
    * Volume Snapshots - Create and restore snapshots taken from a volume in Kubernetes.
    * Volume Resizing - Expand the volume by specifying a new size in the PersistentVolumeClaim (PVC).

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/) on [AWS EKS](/kubernetes/cluster/eks/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped

{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}

### Setup IRSA

Before you deploy aws-ebs-csi-driver, it's necessary to perform some AWS IAM acronym-salad first..

The CSI driver pods need access to your AWS account in order to provision EBS volumes. You **could** feed them with classic access key/secret keys, but a more "sophisticated" method is to use [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), or IRSA.

IRSA lets you associate a Kubernetes service account with an IAM role, so instead of stashing access secrets somewhere in a namespace (*and in your GitOps repo[^1]*), you simply tell AWS "grant the service account `batcave-music` in the namespace `bat-ertainment` the ability to use my `streamToAlexa` IAM role".

Before we start, we have to use `eksctl` to generate an IAM OIDC provider for your cluster. I ran:

```bash
eksctl utils associate-iam-oidc-provider --cluster=funkypenguin-authentik-test --approve
```

(*It's harmless to run this more than once; if you already have an IAM OIDC provider associated, the command will just error*)

Once complete, I ran the following to grant the `ebs-csi-controller-sa` service account in the `aws-ebs-csi-driver` namespace the power to use the AWS-managed `AmazonEBSCSIDriverPolicy` policy, which exists for exactly this purpose:

```bash
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace aws-ebs-csi-driver \
  --cluster funkypenguin-authentik-test \
  --role-name AmazonEKS_EBS_CSI_DriverRole \
  --role-only \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve
```

Part of what this does is **create** the target service account in the target namespace - before we've deployed aws-ebs-csi-driver's HelmRelease.

Confirm it's worked by **describing** the serviceAccount - run something like the below, and you should see an annotation indicating the role attached:

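```bash
# Names as per the eksctl command above
kubectl describe serviceaccount -n aws-ebs-csi-driver ebs-csi-controller-sa
```
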
```
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::6831384437293:role/AmazonEKS_EBS_CSI_DriverRole
```

Now there's a problem - when the HelmRelease is installed, it'll try to create the serviceaccount, which we've just created. Flux's helm controller will then refuse to install the HelmRelease, because it can't "adopt" the service account as its own, under management.

The simplest fix I found for this was to run the following **before** reconciling the HelmRelease:

```bash
kubectl label serviceaccounts -n aws-ebs-csi-driver \
  ebs-csi-controller-sa app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate serviceaccounts -n aws-ebs-csi-driver \
  ebs-csi-controller-sa meta.helm.sh/release-name=aws-ebs-csi-driver
kubectl annotate serviceaccounts -n aws-ebs-csi-driver \
  ebs-csi-controller-sa meta.helm.sh/release-namespace=aws-ebs-csi-driver
```

Once these labels/annotations are added, the HelmRelease will happily deploy, without altering the all-important annotation which lets the EBS driver work!

## Install {{ page.meta.slug }}!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...

```bash
~ ❯ flux get kustomizations {{ page.meta.kustomization_name }}
NAME READY MESSAGE REVISION SUSPENDED
{{ page.meta.kustomization_name }} True Applied revision: main/70da637 main/70da637 False
~ ❯
```

The helmrelease should be reconciled...

```bash
~ ❯ flux get helmreleases -n {{ page.meta.helmrelease_namespace }} {{ page.meta.helmrelease_name }}
NAME READY MESSAGE REVISION SUSPENDED
{{ page.meta.helmrelease_name }} True Release reconciliation succeeded v{{ page.meta.helm_chart_version }} False
~ ❯
```

And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:

```bash
~ ❯ k get pods -n {{ page.meta.helmrelease_namespace }} -l app.kubernetes.io/name={{ page.meta.helmrelease_name }}
NAME READY STATUS RESTARTS AGE
ebs-csi-controller-77bddb4c95-2bzw5 5/5 Running 1 (10h ago) 37h
ebs-csi-controller-77bddb4c95-qr2hk 5/5 Running 0 37h
ebs-csi-node-4f8kz 3/3 Running 0 37h
ebs-csi-node-fq8bn 3/3 Running 0 37h
~ ❯
```

## How do I know it's working?

So the AWS EBS CSI driver is installed, but how do we know it's working, especially that IRSA voodoo?

### Check pod logs

First off, check the pod logs for any errors, by running:

```bash
kubectl logs -n aws-ebs-csi-driver -l app.kubernetes.io/name=aws-ebs-csi-driver
```

If you see nasty errors about EBS access denied, then revisit the IRSA magic above. If not, proceed with the acid test :test_tube: below..

### Create resources

#### Create PVC

Create a PVC (*persistent volume claim*), by running:

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aws-ebs-csi-test
  labels:
    test: aws-ebs-csi
    funkypenguin-is: a-smartass
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 128Mi
EOF
```

Examine the PVC, and note that it's in a pending state (*this is normal - typically the volume won't actually be provisioned until a pod consumes the claim*):

```bash
kubectl get pvc -l test=aws-ebs-csi
```

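Expect something like this (*a sketch - names and ages will vary*):

```
NAME               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
aws-ebs-csi-test   Pending                                      ebs-sc         15s
```
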
#### Create Pod

Now create a pod to consume the PVC, by running:

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: aws-ebs-csi-test
  labels:
    test: aws-ebs-csi
    funkypenguin-is: a-smartass
spec:
  containers:
    - name: volume-test
      image: nginx:stable-alpine
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: ebs-volume
          mountPath: /i-am-a-volume
      ports:
        - containerPort: 80
  volumes:
    - name: ebs-volume
      persistentVolumeClaim:
        claimName: aws-ebs-csi-test
EOF
```

Ensure the pod has started successfully (*this indicates the PVC was correctly attached*) by running:

```bash
kubectl get pod -l test=aws-ebs-csi
```

#### Clean up

Assuming that the pod is in a `Running` state, your EBS provisioning, and all the background AWS plumbing, worked!

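Before you clean up, if you'd like extra credit, prove that the volume is actually writable (*pod name and mount path are from the manifest above*):

```bash
kubectl exec aws-ebs-csi-test -- sh -c \
  'echo hello > /i-am-a-volume/hello.txt && cat /i-am-a-volume/hello.txt'
```
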
Clean up your mess, little cloud-monkey :monkey_face:, by running:

```bash
kubectl delete pod -l funkypenguin-is=a-smartass
kubectl delete pvc -l funkypenguin-is=a-smartass
```

## Summary

What have we achieved? We're now able to persist data in our EKS cluster, and have left the door open for future options like snapshots, volume expansion, etc.

!!! summary "Summary"
    Created:

    * [X] AWS EBS CSI driver installed and tested in our EKS cluster
    * [X] Future support for [Velero][velero] with [csi-snapshots](/kubernetes/backup/csi-snapshots/), and volume expansion

{% include 'recipe-footer.md' %}

[^1]: Negated somewhat with [Sealed Secrets](/kubernetes/sealed-secrets/)
@@ -58,6 +58,19 @@ The following sections detail suggested changes to the values pasted into `/{{ p

!!! tip
    Confusingly, the authentik helm chart defaults to having the bundled redis and postgresql **disabled**, but the [authentik Kubernetes install](https://goauthentik.io/docs/installation/kubernetes/) docs require that they be enabled. Take care to change the respective `enabled: false` values to `enabled: true` below.

### Set authentik secret key

Authentik needs a secret key for signing cookies (*not singing for cookies! :cookie:*), so set it below, and don't change it later (*or feed it after midnight!*):

```yaml hl_lines="6" title="Set mandatory secret key"
authentik:
  # -- Log level for server and worker
  log_level: info
  # -- Secret key used for cookie singing and unique user IDs,
  # don't change this after the first install
  secret_key: "ilovesingingcookies"
```

### Set bootstrap credentials

By default, when you install the authentik helm chart, you'll get to set your admin user's (`akadmin`) password when you first login. You can pre-configure this password by setting the `AUTHENTIK_BOOTSTRAP_PASSWORD` env var as illustrated below.
@@ -83,7 +96,7 @@ authentik uses Redis as the broker for [Celery](https://docs.celeryq.dev/en/stab

### Configure PostgreSQL for authentik

Depending on your risk profile / exposure, you may want to set a secure PostgreSQL password, or you may be content to leave the default password blank.
Although technically you **can** leave the PostgreSQL password blank, authentik-server will just fail with an error like `fe_sendauth: no password supplied`, so ensure you set the password, both in `authentik.postgresql.password` and in `postgresql.postgresqlPassword`:

At the very least, you'll want to set the following

@@ -172,6 +185,24 @@ Eureka! :tada:

Your user is now an authentik superuser. Confirm this by logging out as **akadmin**, and logging back in with your own credentials.

## Add "groups" scope

Since you'll probably want to use authentik for OIDC-secured access to various tools like the [kube-apiserver](/kubernetes/oidc-authentication/), Grafana, etc., you'll want authentik to be able to support the "groups" scope, telling OIDC clients what groups the logging-in user belongs to.

Curiously, the OIDC groups scope is **not** a default feature of authentik (*there are [requests](https://github.com/goauthentik/authentik/issues/6184) underway to address this*). There's a simple workaround to add a groups scope though, until such support becomes native...

As your new superuser, navigate to **Customization** -> **Property Mapping**, and create a new **Scope Mapping**. You can pick whatever name you want (*I used `oidc-groups`*), but you'll want to set the scope name to `groups`, since this is the convention for OIDC clients.

Set the expression to:

```python
return {
    "groups": [group.name for group in user.ak_groups.all()]
}
```

That's it! Now if your OIDC clients request the `groups` scope, they'll get a list of all the authentik groups the user is a member of.

## Summary

What have we achieved? We've got authentik running and accessible, we've created a superuser account, and we're ready to flex :muscle: the power of authentik to deploy an OIDC provider for Kubernetes, or simply secure unprotected UIs with proxy outposts!

13
mkdocs.yml
@@ -171,7 +171,7 @@ nav:
  - kubernetes/cluster/index.md
  - Digital Ocean: kubernetes/cluster/digitalocean.md
  # - Bare Metal: kubernetes/cluster/baremetal.md
  # - Home Lab: kubernetes/cluster/baremetal.md
  - EKS: kubernetes/cluster/eks.md
  - k3s: kubernetes/cluster/k3s.md
  # - The Hard Way: kubernetes/cluster/the-hard-way.md
- Deployment:
@@ -210,15 +210,20 @@ nav:
  - kubernetes/persistence/index.md
  - Local Path Provisioner: kubernetes/persistence/local-path-provisioner.md
  - TopoLVM: kubernetes/persistence/topolvm.md
  - AWS EBS: kubernetes/persistence/aws-ebs.md
  # - OpenEBS: kubernetes/persistence/openebs.md
  - Rook Ceph:
    - kubernetes/persistence/rook-ceph/index.md
    - Operator: kubernetes/persistence/rook-ceph/operator.md
    - Cluster: kubernetes/persistence/rook-ceph/cluster.md
  # - LongHorn: kubernetes/persistence/longhorn.md
# - OIDC Authentication:
#   - kubernetes/oidc-authentication/index.md
#   - Authentik: kubernetes/oidc-authentication/authentik.md
- OIDC Authentication:
  - Guide: kubernetes/oidc-authentication/index.md
  - Providers:
    - authentik: kubernetes/oidc-authentication/authentik.md
  - Platforms:
    - EKS (authentik): kubernetes/oidc-authentication/eks-authentik.md
    - K3s (authentik): kubernetes/oidc-authentication/k3s-authentik.md
- Backup:
  - kubernetes/backup/index.md
  - CSI Snapshots: