mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-11 00:36:29 +00:00

make premix public

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
This commit is contained in:
David Young
2024-08-07 14:00:33 +12:00
parent e483525c0a
commit 695c6ea497
14 changed files with 545 additions and 82 deletions


@@ -1,4 +1,4 @@
!!! tip "Get your FREE "elfhosted" {{ page.meta.slug }} instance for demo / trial :partying_face: "
Want to see a live demo, or "kick the tyres" :blue_car: before you commit to self-hosting {{ page.meta.slug }}?
!!! tip "Skip the setup and get {{ page.meta.slug }} "ElfHosted"! :partying_face: "
Want to skip the self-assembly, and have {{ page.meta.slug }} **INSTANTLY** "Just Work(tm)"?
Try out an "[ElfHosted][elfhosted]" :elf: instance of [{{ page.meta.slug }}](https://elfhosted.com/app/{{ page.meta.slug | lower }}) for **FREE**!
ElfHosted is an [open-source](https://elfhosted.com/open/) app hosting platform (*geek-cookbook-as-a-service*), crafted with love by @funkypenguin - get your [ElfHosted][elfhosted] :elf: instance of [{{ page.meta.slug }}](https://elfhosted.com/app/{{ page.meta.slug | lower }}) now!


@@ -1,6 +1,6 @@
!!! tip "Fast-track your fluxing! 🚀"
Is crafting all these YAMLs by hand too much of a PITA?
I automatically and **instantly** share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "[_premix_](https://geek-cookbook.funkypenguin.co.nz/premix/)" git repository, which includes an ansible playbook to auto-create all the necessary files in your flux repository, for each chosen recipe!
"[Premix](/premix/)" is a git repository, which includes an ansible playbook to auto-create all the necessary files in your flux repository, for each chosen recipe!
Let the machines :material-robot-outline: do the TOIL! :man_lifting_weights:


@@ -1,4 +1,4 @@
!!! tip "Fast-track with premix! 🚀"
I automatically and **instantly** share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "[_premix_](https://geek-cookbook.funkypenguin.co.nz/premix/)" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍.
"[Premix](/premix/)" is a git repository which includes necessary docker-compose and env files for all published recipes. This means that you can launch any recipe with just a `git pull` and a `docker stack deploy` 👍.
🚀 **Update**: Premix now includes an ansible playbook, so that sponsors can deploy an entire stack + recipes, with a single ansible command! (*more [here](https://geek-cookbook.funkypenguin.co.nz/premix/ansible/operation/)*)
🚀 **Update**: Premix now includes an ansible playbook, enabling you to deploy an entire stack + recipes, with a single ansible command! (*more [here](https://geek-cookbook.funkypenguin.co.nz/premix/ansible/operation/)*)


@@ -6,7 +6,6 @@ tags:
- matrix
links:
- Matrix Community: community/matrix.md
- Slack Community: community/slack.md
description: Not into Discord? Now we're bridged to Matrix and Slack!
title: Our Discord server is now bridged to Matrix and Slack
image: /images/bridge-ception.png

*(Binary image files not shown: three images added - 103 KiB, 90 KiB, and 256 KiB - and one image updated from 159 KiB to 606 KiB)*


@@ -39,58 +39,6 @@ The Amazon Elastic Block Store Container Storage Interface (CSI) Driver provides
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
### Setup IRSA
Before you deploy aws-ebs-csi-driver, it's necessary to perform some AWS IAM acronym-salad first :salad:...
The CSI driver pods need access to your AWS account in order to provision EBS volumes. You **could** feed them classic access key/secret keys, but a more "sophisticated" method is to use "[IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)", or IRSA.
IRSA lets you associate a Kubernetes service account with an IAM role, so instead of stashing access secrets somewhere in a namespace (*and in your GitOps repo[^1]*), you simply tell AWS "grant the service account `batcave-music` in the namespace `bat-ertainment` the ability to use my `streamToAlexa` IAM role".
Before we start, we have to use `eksctl` to generate an IAM OIDC provider for your cluster. I ran:
```bash
eksctl utils associate-iam-oidc-provider --cluster=funkypenguin-authentik-test --approve
```
(*It's harmless to run it more than once; if you already have an IAM OIDC provider associated, the command will simply error out*)
Once complete, I ran the following to grant the `aws-ebs-csi-driver` service account in the `aws-ebs-csi-driver` namespace the power to use the AWS-managed `AmazonEBSCSIDriverPolicy` policy, which exists for exactly this purpose:
```bash
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace aws-ebs-csi-driver \
--cluster funkypenguin-authentik-test \
--role-name AmazonEKS_EBS_CSI_DriverRole \
--role-only \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve
```
Part of what this does is to **create** the target service account in the target namespace - before we've even deployed aws-ebs-csi-driver's HelmRelease.
Confirm it's worked by **describing** the serviceAccount - you should see an annotation indicating the role attached, like this:
```
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::6831384437293:role/AmazonEKS_EBS_CSI_DriverRole
```
Now there's a problem - when the HelmRelease is installed, it'll try to create the serviceaccount which we've just created. Flux's helm controller will then refuse to install the HelmRelease, because it can't "adopt" the pre-existing service account under its management.
The simplest fix I found for this was to run the following **before** reconciling the HelmRelease:
```bash
kubectl label serviceaccounts -n aws-ebs-csi-driver \
ebs-csi-controller-sa app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate serviceaccounts -n aws-ebs-csi-driver \
ebs-csi-controller-sa meta.helm.sh/release-name=aws-ebs-csi-driver
kubectl annotate serviceaccounts -n aws-ebs-csi-driver \
ebs-csi-controller-sa meta.helm.sh/release-namespace=aws-ebs-csi-driver
```
Once these labels/annotations are added, the HelmRelease will happily deploy, without altering the all-important annotation which lets the EBS driver work!
## Install {{ page.meta.slug }}!
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...
@@ -123,6 +71,58 @@ ebs-csi-node-fq8bn 3/3 Running 0 37h
~
```
### Setup IRSA
Before you can attach EBS volumes with aws-ebs-csi-driver, it's necessary to perform some AWS IAM acronym-salad first :salad:...
The CSI driver pods need access to your AWS account in order to provision EBS volumes. You **could** feed them classic access key/secret keys, but a more "sophisticated" method is to use "[IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)", or IRSA.
IRSA lets you associate a Kubernetes service account with an IAM role, so instead of stashing access secrets somewhere in a namespace (*and in your GitOps repo[^1]*), you simply tell AWS "grant the service account `batcave-music` in the namespace `bat-ertainment` the ability to use my `streamToAlexa` IAM role".
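Under the hood there's no magic: the IAM role simply carries a trust policy which trusts the cluster's OIDC provider, scoped to a single service account. Roughly like this (*the account ID, region and OIDC provider ID below are placeholders, and the service account matches the batcave example above*):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:bat-ertainment:batcave-music"
        }
      }
    }
  ]
}
```

`eksctl` generates a policy of this shape for you; you only ever need to hand-craft one if you're managing the role outside `eksctl`.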
Before we start, we have to use `eksctl` to associate an IAM OIDC provider with your cluster, in case you don't already have one. I ran:
```bash
eksctl utils associate-iam-oidc-provider --cluster=funkypenguin-authentik-test --approve
```
(*It's harmless to run it more than once; if you already have an IAM OIDC provider associated, the command will simply error out*)
Once complete, I ran the following to grant the `aws-ebs-csi-driver` service account in the `aws-ebs-csi-driver` namespace the power to use the AWS-managed `AmazonEBSCSIDriverPolicy` policy, which exists for exactly this purpose:
```bash
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace aws-ebs-csi-driver \
--cluster funkypenguin-authentik-test \
--role-name AmazonEKS_EBS_CSI_DriverRole \
--override-existing-serviceaccounts \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve
```
This will annotate the existing serviceaccount in the `aws-ebs-csi-driver` namespace, with the role to be attached.
Confirm it's worked by **describing** the serviceAccount - you should see an annotation indicating the role attached, like this:
```
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::6831384437293:role/AmazonEKS_EBS_CSI_DriverRole
```
#### Troubleshooting
If it **doesn't** work for some reason (*like you ran the command once with a typo!*), you may find yourself unable to re-run the command. CloudFormation logs will show you that the action is failing because the role name already exists. To work around this, grab the ARN of the existing role, and change the command slightly:
```bash
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace aws-ebs-csi-driver \
--cluster funkypenguin-authentik-test \
--attach-role-arn arn:aws:iam::683179697293:role/AmazonEKS_EBS_CSI_DriverRole \
--override-existing-serviceaccounts \
--approve
```
## How do I know it's working?
So the AWS EBS CSI driver is installed, but how do we know it's working, especially that IRSA voodoo?
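One end-to-end test is to provision a volume. Here's a minimal sketch - the StorageClass name `ebs-sc` and the `gp3` volume type are my choices, adjust to taste:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com        # the CSI driver we just installed
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
```

The PVC will sit in `Pending` until a pod consumes it (*that's `WaitForFirstConsumer` at work*); once it binds and an EBS volume appears in your AWS console, you know the driver (and the IRSA voodoo) is working.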


@@ -3,16 +3,10 @@ title: Pre-made ansible playbooks to deploy our self-hosted recipes
---
# Premix Repository
"Premix" is a private repository shared with [GitHub sponsors](https://github.com/sponsors/funkypenguin), which contains the necessary files and automation to quickly deploy any recipe, or even an entire [swarm](/docker-swarm/) / [cluster](/kubernetes/)! :muscle:
![Screenshot of premix repo](/images/premix.png){ loading=lazy }
"Premix" is a git repository which contains the necessary files and automation to quickly deploy any recipe, or even an entire [swarm](/docker-swarm/) / [cluster](/kubernetes/)! :muscle:
## Benefits
### 🎁 Early access
Recipes are usually "baked" in premix first, before they are published on the website. Having access to premix means having access to all the freshest recipes!
### ⛳️ Eliminate toil
Building hosts, installing OS and deploying tooling is all "prep" for the really fun stuff - deploying and using recipes!
@@ -31,18 +25,4 @@ Typically you'd fork the repository to overlay your own config and changes. As m
## How to get Premix
To get invited to the premix repo, follow these steps:
1. Become a **public** [sponsor](https://github.com/sponsors/funkypenguin) on GitHub
2. Join us in the [Discord server](http://chat.funkypenguin.co.nz)
3. Link your accounts at [PenguinPatrol](https://penguinpatrol.funkypenguin.co.nz)
4. Say something in any of the discord channels (*this triggers the bot*)
You'll receive an invite to premix to the email address associated with your GitHub account, and a fancy VIP role in the Discord server! 💪
!!! question "Why require public sponsorship?"
Public sponsorship is required so that the bot can tell you're a sponsor, based on what the GitHub API provides.
### Without a credit card
Got no credit card / GitHubz? We've got you covered, with this nifty [PayPal-based subscription](https://www.paypal.com/webapps/billing/plans/subscribe?plan_id=P-95D29301K5084144PMKCWFEY)!
Premix used to be sponsors-only (*I'd still love it if you [sponsored](https://github.com/sponsors/funkypenguin)!*), but is now open to all geeks, at <https://github.com/geek-cookbook/premix>.


@@ -0,0 +1,80 @@
---
title: Balance node usage on Kubernetes with descheduler
description: Use descheduler to balance load on your Kubernetes cluster by "descheduling" pods (to be rescheduled on appropriate nodes)
values_yaml_url: https://github.com/kubernetes-sigs/descheduler/blob/master/charts/descheduler/values.yaml
helm_chart_version: 0.27.x
helm_chart_name: descheduler
helm_chart_repo_name: descheduler
helm_chart_repo_url: https://kubernetes-sigs.github.io/descheduler/
helmrelease_name: descheduler
helmrelease_namespace: descheduler
kustomization_name: descheduler
slug: Descheduler
status: new
upstream: https://sigs.k8s.io/descheduler
links:
- name: GitHub Repo
uri: https://github.com/kubernetes-sigs/descheduler
---
# Balancing a Kubernetes cluster with descheduler
So you've got multiple nodes in your kubernetes cluster, you throw a bunch of workloads in there, and Kubernetes schedules the workloads onto the nodes, making sensible choices based on load, affinity, etc.
Note that this scheduling only happens when a pod is created. Once a pod has been scheduled to a node, Kubernetes won't take it **away** from that node. This can result in "sub-optimal" node loading, especially if you're elastically expanding your nodes themselves, or working through rolling updates.
Descheduler is used to rebalance clusters by evicting pods that can potentially be scheduled on better nodes.
![descheduler](/images/descheduler.png){ loading=lazy }
Here are some reasons you might need to rebalance your cluster:
* Some nodes are under or over utilized.
* The original scheduling decision no longer holds true, because taints or labels have been added to or removed from nodes, and pod/node affinity requirements are no longer satisfied.
* Some nodes failed and their pods moved to other nodes.
* New nodes are added to clusters.
Descheduler works by "kicking out" (*evicting*) certain pods based on a policy you feed it, depending on what you want to achieve. (*You may want to converge as many pods as possible onto as few nodes as possible, or more evenly distribute load across a static set of nodes*)
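For example, to spread load away from over-utilized nodes, you might feed the chart values something like this (*a sketch based on the chart's default `deschedulerPolicy` structure; the thresholds are illustrative, tune them to your cluster*):

```yaml
deschedulerPolicy:
  strategies:
    RemoveDuplicates:
      enabled: true
    LowNodeUtilization:
      enabled: true
      params:
        nodeResourceUtilizationThresholds:
          # Nodes below ALL of these are considered under-utilized...
          thresholds:
            cpu: 20
            memory: 20
            pods: 20
          # ...and pods are evicted from nodes above ANY of these,
          # to be rescheduled onto the under-utilized nodes
          targetThresholds:
            cpu: 50
            memory: 50
            pods: 50
```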
## {{ page.meta.slug }} requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
{% include 'kubernetes-flux-check.md' %}
## Configure descheduler Helm Chart
The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.
!!! tip
By default, the descheduler helm chart runs descheduler as a CronJob. If you'd rather it ran continuously, set `kind: Deployment` in the chart values, and tune the schedule/interval to suit.
### Set descheduler policy
The interesting configuration lives under `deschedulerPolicy` - enable the strategies (*such as `LowNodeUtilization` or `RemoveDuplicates`*) which match your rebalancing goals.
## Summary
What have we achieved? We've got descheduler running, periodically evicting pods which would be better scheduled elsewhere, keeping load balanced across our cluster's nodes!
!!! summary "Summary"
Created:
* [X] descheduler running and ready to "deschedulerate" :lock: !
Next:
* [ ] Watch descheduler's logs for evictions, and tune your policy thresholds accordingly
{% include 'recipe-footer.md' %}


@@ -0,0 +1,202 @@
---
title: How to deploy promtail on Kubernetes
description: Deploy promtail on Kubernetes to ship your cluster and workload logs to Loki
values_yaml_url: https://github.com/grafana/loki/blob/main/production/helm/loki/values.yaml
helm_chart_version: 5.36.x
helm_chart_name: grafana
helm_chart_repo_name: grafana
helm_chart_repo_url: https://grafana.github.io/helm-charts
helmrelease_name: promtail
helmrelease_namespace: promtail
kustomization_name: promtail
slug: Promtail
status: new
upstream: https://grafana.com/docs/loki/latest/send-data/promtail/
links:
- name: GitHub Repo
uri: https://github.com/grafana/loki
---
# promtail on Kubernetes
authentik[^1] is an open-source Identity Provider, focused on flexibility and versatility. With authentik, site administrators, application developers, and security engineers have a dependable and secure solution for authentication in almost any type of environment.
![authentik login](/images/authentik.png){ loading=lazy }
There are robust recovery actions available for the users and applications, including user profile and password management. You can quickly edit, deactivate, or even impersonate a user profile, and set a new password for new users or reset an existing password.
You can use authentik in an existing environment to add support for new protocols, so introducing authentik to your current tech stack doesn't present re-architecting challenges. We already support all of the major providers, such as OAuth2, SAML, [LDAP][openldap] :t_rex:, and SCIM, so you can pick the protocol that you need for each application.
See a comparison with other IDPs [here](https://goauthentik.io/#comparison).
## {{ page.meta.slug }} requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
* [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
Optional:
* [ ] [External DNS](/kubernetes/external-dns/) to create a DNS entry the "flux" way
{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-dnsendpoint.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
## Configure authentik Helm Chart
The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.
!!! tip
Confusingly, the authentik helm chart defaults to having the bundled redis and postgresql **disabled**, but the [authentik Kubernetes install](https://goauthentik.io/docs/installation/kubernetes/) docs require that they be enabled. Take care to change the respective `enabled: false` values to `enabled: true` below.
### Set authentik secret key
Authentik needs a secret key for signing cookies (*not singing for cookies! :cookie:*), so set it below, and don't change it later (*or feed it after midnight!*):
```yaml hl_lines="6" title="Set mandatory secret key"
authentik:
# -- Log level for server and worker
log_level: info
# -- Secret key used for cookie signing and unique user IDs,
# don't change this after the first install
secret_key: "ilovesingingcookies"
```
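Rather than singing your own cookies, you probably want a long random string here; something like this works (*using `openssl` is my suggestion - any sufficiently long random string will do*):

```shell
# Generate a 48-character random string, suitable for authentik's secret_key
openssl rand -base64 36
```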
### Set bootstrap credentials
By default, when you install the authentik helm chart, you'll get to set your admin user's (`akadmin`) password when you first log in. You can pre-configure this password by setting the `AUTHENTIK_BOOTSTRAP_PASSWORD` env var as illustrated below.
If you're after a more hands-off implementation, you can also pre-set a "bootstrap token", which can be used to interact with the authentik API programmatically (*see example below*):
```yaml hl_lines="2-3" title="Optionally pre-configure your bootstrap secrets"
env:
AUTHENTIK_BOOTSTRAP_PASSWORD: "iamusedbyhumanz"
AUTHENTIK_BOOTSTRAP_TOKEN: "iamusedbymachinez"
```
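The bootstrap token can then drive the authentik API without any interactive login. A sketch (*the hostname matches the ingress example in this recipe, and `/api/v3/core/users/me/` is authentik's "current user" endpoint*):

```shell
# Ask authentik "who am I?", authenticating with the bootstrap token
curl -s -H "Authorization: Bearer iamusedbymachinez" \
  https://authentik.example.com/api/v3/core/users/me/
```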
### Configure Redis for authentik
authentik uses Redis as the broker for [Celery](https://docs.celeryq.dev/en/stable/) background tasks. The authentik helm chart defaults to provisioning an 8Gi PVC for redis, which seems like overkill for a simple broker. You can tweak the size of the Redis PVC by setting:
```yaml hl_lines="4" title="1Gi should be fine for redis"
redis:
master:
persistence:
size: 1Gi
```
### Configure PostgreSQL for authentik
Although technically you **can** leave the PostgreSQL password blank, authentik-server will just fail with an error like `fe_sendauth: no password supplied`, so ensure you set the password, both in `authentik.postgresql.password` and in `postgresql.postgresqlPassword`.
At the very least, you'll want to set the following:
```yaml hl_lines="3 6" title="Set a secure Postgresql password"
authentik:
postgresql:
password: "Iamaverysecretpassword"
postgresql:
postgresqlPassword: "Iamaverysecretpassword"
```
As with Redis above, you may feel (*like I do*) that provisioning an 8Gi PVC for a database containing 1 user and a handful of app configs is overkill. You can adjust the size of the PostgreSQL PVC by setting:
```yaml hl_lines="3" title="1Gi is fine for a small database"
postgresql:
persistence:
size: 1Gi
```
### Ingress
Set up your ingress for the authentik UI. If you plan to add outposts to proxy other un-authenticated endpoints later, this is where you'll add them:
```yaml hl_lines="3 7" title="Configure your ingress"
ingress:
enabled: true
ingressClassName: "nginx" # (1)!
annotations: {}
labels: {}
hosts:
- host: authentik.example.com
paths:
- path: "/"
pathType: Prefix
tls: []
```
1. Either leave blank to accept the default ingressClassName, or set to whichever [ingress controller](/kubernetes/ingress/) you want to use.
## Install {{ page.meta.slug }}!
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...
```bash
~ flux get kustomizations {{ page.meta.kustomization_name }}
NAME READY MESSAGE REVISION SUSPENDED
{{ page.meta.kustomization_name }} True Applied revision: main/70da637 main/70da637 False
~
```
The helmrelease should be reconciled...
```bash
~ flux get helmreleases -n {{ page.meta.helmrelease_namespace }} {{ page.meta.helmrelease_name }}
NAME READY MESSAGE REVISION SUSPENDED
{{ page.meta.helmrelease_name }} True Release reconciliation succeeded v{{ page.meta.helm_chart_version }} False
~
```
And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:
```bash
~ k get pods -n authentik
NAME READY STATUS RESTARTS AGE
authentik-redis-master-0 1/1 Running 1 (3d17h ago) 26d
authentik-server-548c6d4d5f-ljqft 1/1 Running 1 (3d17h ago) 20d
authentik-postgresql-0 1/1 Running 1 (3d17h ago) 26d
authentik-worker-7bb8f55bcb-5jwrr 1/1 Running 0 23h
~
```
Browse to the URL you configured in your ingress above, and confirm that the authentik UI is displayed.
## Create your admin user
You may be a little confused about how to log in for the first time. If you didn't use a bootstrap password as above, you'll want to go to `https://<ingress-host-name>/if/flow/initial-setup/`, and set an initial password for your `akadmin` user.
Now store the `akadmin` password somewhere safely, and proceed to create your own user account (*you'll presumably want to use your own username and email address*).
Navigate to **Admin Interface** --> **Directory** --> **Users**, and create your new user. Edit your user and manually set your password.
Next, navigate to **Directory** --> **Groups**, and edit the **authentik Admins** group. Within the group, click the **Users** tab to add your new user to the **authentik Admins** group.
Eureka! :tada:
Your user is now an authentik superuser. Confirm this by logging out as **akadmin**, and logging back in with your own credentials.
## Summary
What have we achieved? We've got authentik running and accessible, we've created a superuser account, and we're ready to flex :muscle: the power of authentik to deploy an OIDC provider for Kubernetes, or simply secure unprotected UIs with proxy outposts!
!!! summary "Summary"
Created:
* [X] authentik running and ready to "authentikate" :lock: !
Next:
* [ ] Configure [Kubernetes OIDC authentication](/kubernetes/oidc-authentication/), unlocking production readiness as well as the [Kubernetes Dashboard][k8s/dashboard] and Weave GitOps UIs (*coming soon*)
{% include 'recipe-footer.md' %}
[^1]: Yes, the lower-case thing bothers me too. That's how the official docs do it though, so I'm following suit.


@@ -0,0 +1,202 @@
---
title: How to deploy promtail on Kubernetes
description: Deploy promtail on Kubernetes to ship your cluster and workload logs to Loki
values_yaml_url: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
helm_chart_version: 6.15.x
helm_chart_name: grafana
helm_chart_repo_name: grafana
helm_chart_repo_url: https://grafana.github.io/helm-charts
helmrelease_name: promtail
helmrelease_namespace: promtail
kustomization_name: promtail
slug: Promtail
status: new
upstream: https://grafana.com/docs/loki/latest/send-data/promtail/
links:
- name: GitHub Repo
uri: https://github.com/grafana/loki
---
# promtail on Kubernetes
authentik[^1] is an open-source Identity Provider, focused on flexibility and versatility. With authentik, site administrators, application developers, and security engineers have a dependable and secure solution for authentication in almost any type of environment.
![authentik login](/images/authentik.png){ loading=lazy }
There are robust recovery actions available for the users and applications, including user profile and password management. You can quickly edit, deactivate, or even impersonate a user profile, and set a new password for new users or reset an existing password.
You can use authentik in an existing environment to add support for new protocols, so introducing authentik to your current tech stack doesn't present re-architecting challenges. We already support all of the major providers, such as OAuth2, SAML, [LDAP][openldap] :t_rex:, and SCIM, so you can pick the protocol that you need for each application.
See a comparison with other IDPs [here](https://goauthentik.io/#comparison).
## {{ page.meta.slug }} requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
* [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
Optional:
* [ ] [External DNS](/kubernetes/external-dns/) to create a DNS entry the "flux" way
{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-dnsendpoint.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
## Configure authentik Helm Chart
The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.
!!! tip
Confusingly, the authentik helm chart defaults to having the bundled redis and postgresql **disabled**, but the [authentik Kubernetes install](https://goauthentik.io/docs/installation/kubernetes/) docs require that they be enabled. Take care to change the respective `enabled: false` values to `enabled: true` below.
### Set authentik secret key
Authentik needs a secret key for signing cookies (*not singing for cookies! :cookie:*), so set it below, and don't change it later (*or feed it after midnight!*):
```yaml hl_lines="6" title="Set mandatory secret key"
authentik:
# -- Log level for server and worker
log_level: info
# -- Secret key used for cookie signing and unique user IDs,
# don't change this after the first install
secret_key: "ilovesingingcookies"
```
### Set bootstrap credentials
By default, when you install the authentik helm chart, you'll get to set your admin user's (`akadmin`) password when you first log in. You can pre-configure this password by setting the `AUTHENTIK_BOOTSTRAP_PASSWORD` env var as illustrated below.
If you're after a more hands-off implementation, you can also pre-set a "bootstrap token", which can be used to interact with the authentik API programmatically (*see example below*):
```yaml hl_lines="2-3" title="Optionally pre-configure your bootstrap secrets"
env:
AUTHENTIK_BOOTSTRAP_PASSWORD: "iamusedbyhumanz"
AUTHENTIK_BOOTSTRAP_TOKEN: "iamusedbymachinez"
```
### Configure Redis for authentik
authentik uses Redis as the broker for [Celery](https://docs.celeryq.dev/en/stable/) background tasks. The authentik helm chart defaults to provisioning an 8Gi PVC for redis, which seems like overkill for a simple broker. You can tweak the size of the Redis PVC by setting:
```yaml hl_lines="4" title="1Gi should be fine for redis"
redis:
master:
persistence:
size: 1Gi
```
### Configure PostgreSQL for authentik
Although technically you **can** leave the PostgreSQL password blank, authentik-server will just fail with an error like `fe_sendauth: no password supplied`, so ensure you set the password, both in `authentik.postgresql.password` and in `postgresql.postgresqlPassword`.
At the very least, you'll want to set the following:
```yaml hl_lines="3 6" title="Set a secure Postgresql password"
authentik:
postgresql:
password: "Iamaverysecretpassword"
postgresql:
postgresqlPassword: "Iamaverysecretpassword"
```
As with Redis above, you may feel (*like I do*) that provisioning an 8Gi PVC for a database containing 1 user and a handful of app configs is overkill. You can adjust the size of the PostgreSQL PVC by setting:
```yaml hl_lines="3" title="1Gi is fine for a small database"
postgresql:
persistence:
size: 1Gi
```
### Ingress
Set up your ingress for the authentik UI. If you plan to add outposts to proxy other un-authenticated endpoints later, this is where you'll add them:
```yaml hl_lines="3 7" title="Configure your ingress"
ingress:
enabled: true
ingressClassName: "nginx" # (1)!
annotations: {}
labels: {}
hosts:
- host: authentik.example.com
paths:
- path: "/"
pathType: Prefix
tls: []
```
1. Either leave blank to accept the default ingressClassName, or set to whichever [ingress controller](/kubernetes/ingress/) you want to use.
## Install {{ page.meta.slug }}!
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...
```bash
~ flux get kustomizations {{ page.meta.kustomization_name }}
NAME READY MESSAGE REVISION SUSPENDED
{{ page.meta.kustomization_name }} True Applied revision: main/70da637 main/70da637 False
~
```
The helmrelease should be reconciled...
```bash
~ flux get helmreleases -n {{ page.meta.helmrelease_namespace }} {{ page.meta.helmrelease_name }}
NAME READY MESSAGE REVISION SUSPENDED
{{ page.meta.helmrelease_name }} True Release reconciliation succeeded v{{ page.meta.helm_chart_version }} False
~
```
And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:
```bash
~ k get pods -n authentik
NAME READY STATUS RESTARTS AGE
authentik-redis-master-0 1/1 Running 1 (3d17h ago) 26d
authentik-server-548c6d4d5f-ljqft 1/1 Running 1 (3d17h ago) 20d
authentik-postgresql-0 1/1 Running 1 (3d17h ago) 26d
authentik-worker-7bb8f55bcb-5jwrr 1/1 Running 0 23h
~
```
Browse to the URL you configured in your ingress above, and confirm that the authentik UI is displayed.
## Create your admin user
You may be a little confused about how to log in for the first time. If you didn't use a bootstrap password as above, you'll want to go to `https://<ingress-host-name>/if/flow/initial-setup/`, and set an initial password for your `akadmin` user.
Now store the `akadmin` password somewhere safely, and proceed to create your own user account (*you'll presumably want to use your own username and email address*).
Navigate to **Admin Interface** --> **Directory** --> **Users**, and create your new user. Edit your user and manually set your password.
Next, navigate to **Directory** --> **Groups**, and edit the **authentik Admins** group. Within the group, click the **Users** tab to add your new user to the **authentik Admins** group.
Eureka! :tada:
Your user is now an authentik superuser. Confirm this by logging out as **akadmin**, and logging back in with your own credentials.
## Summary
What have we achieved? We've got authentik running and accessible, we've created a superuser account, and we're ready to flex :muscle: the power of authentik to deploy an OIDC provider for Kubernetes, or simply secure unprotected UIs with proxy outposts!
!!! summary "Summary"
Created:
* [X] authentik running and ready to "authentikate" :lock: !
Next:
* [ ] Configure [Kubernetes OIDC authentication](/kubernetes/oidc-authentication/), unlocking production readiness as well as the [Kubernetes Dashboard][k8s/dashboard] and Weave GitOps UIs (*coming soon*)
{% include 'recipe-footer.md' %}
[^1]: Yes, the lower-case thing bothers me too. That's how the official docs do it though, so I'm following suit.