
Add kubernetes dashboard, baby!

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
David Young
2023-11-08 14:54:40 +13:00
parent 7f30a0d25f
commit 579a6edd44
14 changed files with 289 additions and 552 deletions


@@ -0,0 +1,33 @@
---
date: 2023-11-08
categories:
- CHANGELOG
tags:
- kubernetes
links:
- OAuth2 Proxy: recipes/kubernetes/oauth2-proxy.md
- Kubernetes Dashboard: recipes/kubernetes/dashboard.md
description: New Recipes Added - Kubernetes Dashboard with OIDC auth, and OAuth2 Proxy, running on Kubernetes
title: Added Kubernetes Dashboard and OAuth2 Proxy
image: /images/kubernetes-dashboard.png
---
# Added recipe for Kubernetes Dashboard with OIDC auth
Unless you're a cave-dwelling CLI geek like me, you might prefer a beautiful web-based dashboard to administer your Kubernetes cluster.
![Screenshot of Kubernetes Dashboard]({{ page.meta.image }}){ loading=lazy }
I've recently documented the necessary building blocks to make the dashboard work with your OIDC-enabled cluster, such that a simple browser login will give you authenticated access to the dashboard, with the option to add more users / tiered access, based on your OIDC provider.
Here are all the pieces you need...
<!-- more -->
* [x] An OIDC Provider, like [authentik][k8s/authentik] or [KeyCloak][keycloak] (*Kubernetes recipe coming soon*)
* [x] An OIDC-enabled cluster, using [K3s](/kubernetes/cluster/k3s/), [EKS](/kubernetes/cluster/eks/), or (*coming soon*) kubeadm
* [x] [OAuth2-Proxy][k8s/oauth2proxy] to provide the Kubernetes Dashboard token
And finally, see the [Kubernetes Dashboard tutorial][k8s/dashboard] for more!
--8<-- "common-links.md"

Binary file not shown.

Before

Width:  |  Height:  |  Size: 502 KiB

After

Width:  |  Height:  |  Size: 196 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 23 KiB


@@ -33,5 +33,7 @@ Once you've configured your OIDC provider, review the following, based on your p
#### Authentik
* [Authenticate K3s with Authentik as an OIDC provider](/kubernetes/oidc-authentication/k3s-authentik/)
* Authenticate EKS with Authentik as an OIDC provider
* Authenticate a kubeadm cluster using Authentik as an OIDC provider
* [Authenticate EKS with Authentik as an OIDC provider](/kubernetes/oidc-authentication/eks-authentik/)
* Authenticate a kubeadm cluster using Authentik as an OIDC provider (*coming soon*)
--8<-- "common-links.md"


@@ -91,7 +91,7 @@ Repeat after me: "If you don't verify your backup, **it's not a backup**".
!!! warning
Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data.
Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipie](/docker-swarm/traefik/), since this is likely to exist for every reader_).
Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/docker-swarm/traefik/), since this is likely to exist for every reader_).
```yaml
docker run --env-file duplicity.env -it --rm \


@@ -9,7 +9,7 @@ Home Assistant is a home automation platform written in Python, with extensive s
![Home Assistant Screenshot](../images/homeassistant.png){ loading=lazy }
This recipie combines the [extensibility](https://home-assistant.io/components/) of [Home Assistant](https://home-assistant.io/) with the flexibility of [InfluxDB](https://docs.influxdata.com/influxdb/v1.4/) (_for time series data store_) and [Grafana](https://grafana.com/) (_for **beautiful** visualisation of that data_).
This recipe combines the [extensibility](https://home-assistant.io/components/) of [Home Assistant](https://home-assistant.io/) with the flexibility of [InfluxDB](https://docs.influxdata.com/influxdb/v1.4/) (_for time series data store_) and [Grafana](https://grafana.com/) (_for **beautiful** visualisation of that data_).
## {{ page.meta.recipe }} Requirements


@@ -47,6 +47,7 @@ See a comparison with other IDPs [here](https://goauthentik.io/#comparison).
* [ ] [External DNS](/kubernetes/external-dns/) to create a DNS entry the "flux" way
{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-dnsendpoint.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}


@@ -0,0 +1,85 @@
---
title: Deploy Kubernetes Dashboard with OIDC token auth
description: Here's how to deploy the Kubernetes Dashboard in your cluster, and authenticate with a bearer token from your OIDC-enabled cluster.
values_yaml_url: https://artifacthub.io/packages/helm/k8s-dashboard/kubernetes-dashboard/6.0.8
helm_chart_version: 6.0.x
helm_chart_name: kubernetes-dashboard
helm_chart_repo_name: kubernetes-dashboard
helm_chart_repo_url: https://kubernetes.github.io/dashboard/
helmrelease_name: kubernetes-dashboard
helmrelease_namespace: kubernetes-dashboard
kustomization_name: kubernetes-dashboard
slug: Dashboard
status: new
upstream: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
links:
- name: GitHub Repo
uri: https://github.com/kubernetes/dashboard
---
# Kubernetes Dashboard (with OIDC token auth)
Kubernetes Dashboard is the polished, general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.
![authentik login](/images/kubernetes-dashboard.png){ loading=lazy }
Importantly, the Dashboard interacts with the kube-apiserver using the credentials you give it. While it's possible to just create a `cluster-admin` service account, and hard-code the necessary service account into Dashboard, this is far less secure, since you're effectively granting anyone with HTTP access to the dashboard full access to your cluster[^1].
There are [several ways to pass a Kubernetes Dashboard](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md) token; this recipe focuses on the [Authentication Header method](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md#authorization-header), under which every request to the dashboard includes the `Authorization: Bearer <token>` header.
We'll utilize [OAuth2 Proxy][k8s/oauth2proxy], integrated with our [OIDC-enabled cluster](/kubernetes/oidc-authentication/), to achieve this seamlessly and securely.
## {{ page.meta.slug }} requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/), configured for [OIDC authentication](/kubernetes/oidc-authentication/) against a supported [provider](/kubernetes/oidc-authentication/providers/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] [OAuth2 Proxy][k8s/oauth2proxy] to pass the necessary authentication token
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-dnsendpoint.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
!!! warning "Beware v3.0.0-alpha0"
    The Dashboard repo's `master` branch has already been updated to a (breaking) new architecture. Since we're not lunatics, we're going to use the latest stable `6.0.x` instead! For this reason, take care to avoid the `values.yaml` in the repo, and use the artifacthub link instead.
The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.
## Enable insecure mode
Because we're using OAuth2 Proxy in front of Dashboard, the incoming request will be HTTP from Dashboard's perspective, rather than HTTPS. We're happy to permit this, so make at least the following change to `extraArgs` below:
```yaml hl_lines="3"
extraArgs:
# - --enable-skip-login
- --enable-insecure-login
```
{% include 'kubernetes-flux-check.md' %}
## Is that all?
Feels too easy, doesn't it?
The reason is that all the hard work (*ingress, OIDC authentication, etc.*) is handled by [OAuth2 Proxy][k8s/oauth2proxy], so provided that's been deployed and tested, you're good to go!
Browse to the URL you configured in your OAuth2 Proxy ingress, log into your OIDC provider, and you should be directed to your Kubernetes Dashboard UI, with all the privileges your authentication token gets you :muscle:
## Summary
What have we achieved? We've got a dashboard for Kubernetes, dammit! That's **amaaazing**!
And even better, it doesn't rely on some hacky copy/pasting of tokens, or disabling security, but it uses our existing, trusted OIDC cluster auth. This also means that you can grant other users access to the dashboard with more restrictive (*i.e., read-only access*) privileges.
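For example, here's a minimal sketch of such a restricted grant, binding an OIDC group to Kubernetes' built-in read-only `view` ClusterRole (*the group name `dashboard-viewers` and the `oidc:` prefix are assumptions - match them to whatever your OIDC provider and kube-apiserver flags actually produce*):
```yaml
# Sketch only: adjust the group name / prefix to your own OIDC configuration
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewers-read-only
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view # built-in, read-only aggregate role
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: oidc:dashboard-viewers # hypothetical group asserted by your OIDC provider
```
Anyone in that group who logs into the Dashboard via OAuth2 Proxy gets read-only visibility, and nothing more.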
!!! summary "Summary"
Created:
* [X] Kubernetes Dashboard deployed, authenticated with [OIDC-enabled cluster](/recipes/kubernetes/oidc-authentication/) using an Authorization Header with a bearer token, magically provided by [OAuth2 Proxy][k8s/oauth2proxy]!
{% include 'recipe-footer.md' %}
[^1]: Plus, you wouldn't be able to do tiered access in this scenario


@@ -0,0 +1,156 @@
---
title: Use OAuth2 proxy on Kubernetes to secure access
description: Deploy oauth2-proxy on Kubernetes to provide SSO to your cluster and workloads
values_yaml_url: https://github.com/oauth2-proxy/manifests/blob/main/helm/oauth2-proxy/values.yaml
helm_chart_version: 6.18.x
helm_chart_name: oauth2-proxy
helm_chart_repo_name: oauth2-proxy
helm_chart_repo_url: https://oauth2-proxy.github.io/manifests/
helmrelease_name: oauth2-proxy
helmrelease_namespace: kubernetes-dashboard
kustomization_name: oauth2-proxy
slug: OAuth2 Proxy
status: new
upstream: https://oauth2-proxy.github.io/oauth2-proxy/
links:
- name: GitHub Repo
uri: https://github.com/oauth2-proxy/oauth2-proxy
- name: Helm Chart
uri: https://github.com/oauth2-proxy/manifests/tree/main/helm/oauth2-proxy
---
# Using OAuth2 proxy for Kubernetes Dashboard
[OAuth2-proxy](https://oauth2-proxy.github.io/oauth2-proxy/) was once a bit.ly project, but was officially archived in Sept 2018. It lives on though, at https://github.com/oauth2-proxy/oauth2-proxy.
OAuth2-proxy is a lightweight proxy which you put **in front of** your vulnerable services, enforcing an OAuth authentication against an [impressive collection of providers](https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider) (*including generic OIDC*) before the backend service is displayed to the calling user.
![OAuth2-proxy architecture](/images/oauth2-proxy.png){ loading=lazy }
This recipe will describe setting up OAuth2 Proxy for the purposes of passing authentication headers to [Kubernetes Dashboard][k8s/dashboard], which doesn't provide its own authentication, but instead relies on Kubernetes' own RBAC auth.
In order to view your Kubernetes resources on the dashboard, you either create a fully-privileged service account (*yuk! :face_vomiting:*), copy and paste your own auth token upon login (*double yuk! :face_vomiting::face_vomiting:*), or use OAuth2 Proxy to authenticate against the kube-apiserver on your behalf, and pass the authentication token to [Kubernetes Dashboard][k8s/dashboard] (*like a boss! :muscle:*)
If you're after a generic authentication middleware which **doesn't** need to pass OAuth headers, then [Traefik Forward Auth][tfa] is a better option, since it supports multiple backends in "auth host" mode.
## {{ page.meta.slug }} requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/), configured for [OIDC authentication](/kubernetes/oidc-authentication/) against a supported [provider](/kubernetes/oidc-authentication/providers/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
Optional:
* [ ] [External DNS](/kubernetes/external-dns/) to create a DNS entry the "flux" way
* [ ] [Persistent storage](/kubernetes/persistence/) if you want to use Redis for session persistence
{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
## Configure OAuth2 Proxy
The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.
### OAuth2 Proxy Config
Set your `clientID` and `clientSecret` to match what you've set up in your OAuth provider, and generate a random `cookieSecret` (*the comment in the values below shows one way*)! :cookie:
```yaml hl_lines="5 7 13 32"
config:
# Add config annotations
annotations: {}
# OAuth client ID
clientID: "XXXXXXX"
# OAuth client secret
clientSecret: "XXXXXXXX"
# Create a new secret with the following command
# openssl rand -base64 32 | head -c 32 | base64
# Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)
# Example:
# existingSecret: secret
cookieSecret: "XXXXXXXXXXXXXXXX"
# The name of the cookie that oauth2-proxy will create
# If left empty, it will default to the release name
cookieName: ""
google: {}
# adminEmail: xxxx
# useApplicationDefaultCredentials: true
# targetPrincipal: xxxx
# serviceAccountJson: xxxx
# Alternatively, use an existing secret (see google-secret.yaml for required fields)
# Example:
# existingSecret: google-secret
# groups: []
# Example:
# - group1@example.com
# - group2@example.com
# Default configuration, to be overridden
configFile: |-
email_domains = [ "*" ] # (1)!
upstreams = [ "http://kubernetes-dashboard" ] # (2)!
```
1. Accept any emails passed to us by the auth provider, which we fully control. You might do this differently if you were using an auth provider like Google or GitHub
2. Set `upstreams[]` to match the backend service you want to protect, in this case, the kubernetes-dashboard service in the same namespace. [^1]
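If you'd rather not commit `clientSecret` and `cookieSecret` to your flux repo in plain text, the chart's `existingSecret` option lets you point it at a pre-created Kubernetes Secret instead. Here's a minimal sketch of such a secret (*the key names are assumptions - confirm them against the chart's `secret.yaml` template before relying on this*):
```yaml
# Hypothetical secret for use with config.existingSecret; verify the expected
# key names against the oauth2-proxy chart's secret.yaml template
apiVersion: v1
kind: Secret
metadata:
  name: oauth2-proxy-credentials
  namespace: kubernetes-dashboard
type: Opaque
stringData:
  client-id: "XXXXXXX"
  client-secret: "XXXXXXXX"
  cookie-secret: "XXXXXXXXXXXXXXXX"
```
You'd typically create this out-of-band (*or via something like Sealed Secrets*), rather than committing it alongside your other YAMLs.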
### Set extraArgs
Take particular note of the following:
```yaml
extraArgs:
provider: oidc
provider-display-name: "Authentik"
skip-provider-button: "true"
pass-authorization-header: "true" # (1)!
redis-connection-url: "redis://redis-master" # if you want to use redis
session-store-type: redis # alternative is to use cookies
cookie-refresh: 15m
```
1. This is critically important, and is what makes OAuth2 Proxy suited to this task. We need the authorization headers produced from the OIDC transaction to be passed to [Kubernetes Dashboard][k8s/dashboard], so that it can interact with kube-apiserver on our behalf.
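Note that `redis-connection-url` above assumes a Redis instance answering at `redis-master` in the same namespace - this recipe doesn't deploy one for you. One way to provide it, sketched below on the assumption that you're happy to use the Bitnami Redis chart with a release named `redis` (*which yields a `redis-master` service*), is an extra HelmRepository / HelmRelease pair; any other reachable Redis will do just as well:
```yaml
# Sketch only: any Redis reachable at redis://redis-master works
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.bitnami.com/bitnami
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: redis # release name "redis" produces the "redis-master" service
  namespace: kubernetes-dashboard
spec:
  interval: 15m
  chart:
    spec:
      chart: redis
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  values:
    architecture: standalone # a single node is fine for session storage
    auth:
      enabled: false # assumption: in-namespace access only; enable if you prefer
```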
### Setup Ingress
Now you'll need an ingress. Note that this is the ingress you'll use to reach the [Kubernetes Dashboard][k8s/dashboard], so it should look something like the following:
```yaml hl_lines="2 3 9"
ingress:
enabled: true
className: nginx
path: /
# Only used if API capabilities (networking.k8s.io/v1) allow it
pathType: ImplementationSpecific
# Used to create an Ingress record.
hosts:
- kubernetes-dashboard.example.com
```
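You'll probably also want TLS on this ingress. Assuming a certificate secret already exists (*e.g. issued by cert-manager, which is outside the scope of this recipe - the secret name below is hypothetical*), a `tls` stanza in the same values would look something like this:
```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - kubernetes-dashboard.example.com
  tls: # assumption: standard Ingress TLS fields passed through by the chart
    - secretName: kubernetes-dashboard-tls # hypothetical, e.g. created by cert-manager
      hosts:
        - kubernetes-dashboard.example.com
```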
{% include 'kubernetes-flux-check.md' %}
## Is it working?
Browse to the URL you configured in your ingress above, and confirm that you're prompted to log into your OIDC provider.
## Summary
What have we achieved? We're half-way to getting [Kubernetes Dashboard][k8s/dashboard] working against our OIDC-enabled cluster. We've got OAuth2 Proxy authenticating against our OIDC provider, and passing on the auth headers to the upstream.
!!! summary "Summary"
Created:
    * [X] OAuth2 Proxy on Kubernetes, running and ready to pass auth headers to [Kubernetes Dashboard][k8s/dashboard]
Next:
    * [ ] Deploy [Kubernetes Dashboard][k8s/dashboard], protected behind OAuth2 Proxy as its upstream
{% include 'recipe-footer.md' %}
[^1]: You might hope, like me, that since `upstreams` is a list, you could use one OAuth2 Proxy instance in front of multiple upstreams. Sadly, the intention is to split a single upstream by path, not to serve entirely different upstreams based on FQDN. Thus, you're stuck with one OAuth2 Proxy per protected instance.


@@ -1,545 +0,0 @@
---
description: Neat one-sentence description of recipe for social media previews
recipe: Recipe Name
title: Short, punchy title for search engine results / social previews
image: /images/<recipe name>.png
---
# {{ page.meta.recipe }} on Kubernetes
This is a template to get you started with a Kubernetes recipe. The things to be aware of are:
:one: Every recipe has the same set of flux YAMLs (helmrelease, namespace, etc)<br/>
:two: Every recipe needs at least one "chef's note", done using a footer, like this[^1]<br/>
![Screenshot of {{ page.meta.recipe }}]({{ page.meta.image }}){ loading=lazy }
[Linx](https://github.com/andreimarcu/linx-server) is a self-hosted file/media-sharing service, which features:
Here's an example from my public instance (*yes, running on Kubernetes*):
## {{ page.meta.recipe }} requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][invidious]*)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
* [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
* [x] [External DNS](/kubernetes/external-dns/) to create a DNS entry
New:
* [ ] Chosen DNS FQDN for your instance
## Preparation
### GitRepository
The Invidious project doesn't currently publish a versioned helm chart - there's just a [helm chart stored in the repository](https://github.com/invidious/invidious/tree/main/chart) (*I plan to submit a PR to address this*). For now, we use a GitRepository instead of a HelmRepository as the source of a HelmRelease.
```yaml title="/bootstrap/gitrepositories/gitepository-invidious.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
name: invidious
namespace: flux-system
spec:
interval: 1h0s
ref:
branch: master
url: https://github.com/iv-org/invidious
```
### Namespace
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-invidious.yaml`:
```yaml title="/bootstrap/namespaces/namespace-invidious.yaml"
apiVersion: v1
kind: Namespace
metadata:
name: invidious
```
### Kustomization
Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/invidious`. I create this example Kustomization in my flux repo:
```yaml title="/bootstrap/kustomizations/kustomization-invidious.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: invidious
namespace: flux-system
spec:
interval: 15m
path: invidious
prune: true # remove any elements later removed from the above path
timeout: 2m # if not set, this defaults to interval duration, which is 1h
sourceRef:
kind: GitRepository
name: flux-system
healthChecks:
- apiVersion: apps/v1
kind: Deployment
name: invidious-invidious # (1)!
namespace: invidious
- apiVersion: apps/v1
kind: StatefulSet
name: invidious-postgresql
namespace: invidious
```
1. No, that's not a typo, just another peculiarity of the helm chart!
### ConfigMap
Now we're into the invidious-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/iv-org/invidious/blob/master/kubernetes/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo:
```yaml title="invidious/configmap-invidious-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
name: invidious-helm-chart-value-overrides
namespace: invidious
data:
values.yaml: |- # (1)!
# <upstream values go here>
```
1. Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.
Values I change from the default are:
```yaml
postgresql:
image:
tag: 14
auth:
username: invidious
password: <redacted>
database: invidious
primary:
initdb:
username: invidious
password: <redacted>
scriptsConfigMap: invidious-postgresql-init
persistence:
size: 1Gi # (1)!
podAnnotations: # (2)!
backup.velero.io/backup-volumes: backup
pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -d $POSTGRES_DB -h 127.0.0.1 > /scratch/backup.sql"]'
pre.hook.backup.velero.io/timeout: 3m
post.hook.restore.velero.io/command: '["/bin/bash", "-c", "[ -f \"/scratch/backup.sql\" ] && PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -h 127.0.0.1 -d $POSTGRES_DB -f /scratch/backup.sql && rm -f /scratch/backup.sql;"]'
extraVolumes:
- name: backup
emptyDir:
sizeLimit: 1Gi
extraVolumeMounts:
- name: backup
mountPath: /scratch
resources:
requests:
cpu: "10m"
memory: 32Mi
# Adapted from ../config/config.yml
config:
channel_threads: 1
feed_threads: 1
db:
user: invidious
password: <redacted>
host: invidious-postgresql
port: 5432
dbname: invidious
full_refresh: false
https_only: true
domain: in.fnky.nz # (3)!
external_port: 443 # (4)!
banner: ⚠️ Note - This public Invidious instance is sponsored ❤️ by <A HREF='https://geek-cookbook.funkypenguin.co.nz'>Funky Penguin's Geek Cookbook</A>. It's intended to support the published <A HREF='https://geek-cookbook.funkypenguin.co.nz/recipes/invidious/'>Docker Swarm recipes</A>, but may be removed at any time without notice. # (5)!
default_user_preferences: # (6)!
quality: dash # (7)! auto-adapts or lets you choose > 720P
```
1. 1Gi is fine for the database for now
2. These annotations / extra Volumes / volumeMounts support automated backup using Velero
3. Invidious needs this to generate external links for sharing / embedding
4. Invidious needs this too, to generate external links for sharing / embedding
5. It's handy to tell people what's special about your instance
6. Check out the [official config docs](https://github.com/iv-org/invidious/blob/master/config/config.example.yml) for comprehensive details on how to configure / tweak your instance!
7. Default all users to DASH (*adaptive*) quality, rather than limiting to 720P (*the default*)
### HelmRelease
Finally, having set the scene above, we define the HelmRelease which will actually deploy the invidious into the cluster. I save this in my flux repo:
```yaml title="/invidious/helmrelease-invidious.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: invidious
namespace: invidious
spec:
chart:
spec:
chart: ./charts/invidious
sourceRef:
kind: GitRepository
name: invidious
namespace: flux-system
interval: 15m
timeout: 5m
releaseName: invidious
valuesFrom:
- kind: ConfigMap
name: invidious-helm-chart-value-overrides
valuesKey: values.yaml # (1)!
```
1. This is the default, but best to be explicit for clarity
### Ingress / IngressRoute
Oddly, the upstream chart doesn't include any Ingress resource. We have to manually create our Ingress as below (*note that it's also possible to use a Traefik IngressRoute directly*)
```yaml title="/invidious/ingress-invidious.yaml"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: invidious
namespace: invidious
spec:
ingressClassName: nginx
rules:
- host: in.fnky.nz
http:
paths:
- backend:
service:
name: invidious
port:
number: 3000
path: /
pathType: ImplementationSpecific
```
An alternative implementation using an `IngressRoute` could look like this:
```yaml title="/invidious/ingressroute-invidious.yaml"
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: in.fnky.nz
namespace: invidious
spec:
routes:
- match: Host(`in.fnky.nz`)
kind: Rule
services:
- name: invidious-invidious
kind: Service
port: 3000
```
### Create postgres-init ConfigMap
Another peculiarity of the Invidious helm chart is that you have to create your own ConfigMap containing the PostgreSQL data structure. I suspect that the helm chart has received minimal attention in the past 3+ years, and this could probably easily be turned into a job as a pre-install helm hook (*perhaps a future PR?*).
In the meantime, you'll need to create the ConfigMap manually per the [repo instructions](https://github.com/iv-org/invidious/tree/master/kubernetes#installing-helm-chart), or cheat, and copy the one I paste below:
??? example "Configmap (click to expand)"
```yaml title="/invidious/configmap-invidious-postgresql-init.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
name: invidious-postgresql-init
namespace: invidious
data:
annotations.sql: |
-- Table: public.annotations
-- DROP TABLE public.annotations;
CREATE TABLE IF NOT EXISTS public.annotations
(
id text NOT NULL,
annotations xml,
CONSTRAINT annotations_id_key UNIQUE (id)
);
GRANT ALL ON TABLE public.annotations TO current_user;
channel_videos.sql: |+
-- Table: public.channel_videos
-- DROP TABLE public.channel_videos;
CREATE TABLE IF NOT EXISTS public.channel_videos
(
id text NOT NULL,
title text,
published timestamp with time zone,
updated timestamp with time zone,
ucid text,
author text,
length_seconds integer,
live_now boolean,
premiere_timestamp timestamp with time zone,
views bigint,
CONSTRAINT channel_videos_id_key UNIQUE (id)
);
GRANT ALL ON TABLE public.channel_videos TO current_user;
-- Index: public.channel_videos_ucid_idx
-- DROP INDEX public.channel_videos_ucid_idx;
CREATE INDEX IF NOT EXISTS channel_videos_ucid_idx
ON public.channel_videos
USING btree
(ucid COLLATE pg_catalog."default");
channels.sql: |+
-- Table: public.channels
-- DROP TABLE public.channels;
CREATE TABLE IF NOT EXISTS public.channels
(
id text NOT NULL,
author text,
updated timestamp with time zone,
deleted boolean,
subscribed timestamp with time zone,
CONSTRAINT channels_id_key UNIQUE (id)
);
GRANT ALL ON TABLE public.channels TO current_user;
-- Index: public.channels_id_idx
-- DROP INDEX public.channels_id_idx;
CREATE INDEX IF NOT EXISTS channels_id_idx
ON public.channels
USING btree
(id COLLATE pg_catalog."default");
nonces.sql: |+
-- Table: public.nonces
-- DROP TABLE public.nonces;
CREATE TABLE IF NOT EXISTS public.nonces
(
nonce text,
expire timestamp with time zone,
CONSTRAINT nonces_id_key UNIQUE (nonce)
);
GRANT ALL ON TABLE public.nonces TO current_user;
-- Index: public.nonces_nonce_idx
-- DROP INDEX public.nonces_nonce_idx;
CREATE INDEX IF NOT EXISTS nonces_nonce_idx
ON public.nonces
USING btree
(nonce COLLATE pg_catalog."default");
playlist_videos.sql: |
-- Table: public.playlist_videos
-- DROP TABLE public.playlist_videos;
CREATE TABLE IF NOT EXISTS public.playlist_videos
(
title text,
id text,
author text,
ucid text,
length_seconds integer,
published timestamptz,
plid text references playlists(id),
index int8,
live_now boolean,
PRIMARY KEY (index,plid)
);
GRANT ALL ON TABLE public.playlist_videos TO current_user;
playlists.sql: |
-- Type: public.privacy
-- DROP TYPE public.privacy;
CREATE TYPE public.privacy AS ENUM
(
'Public',
'Unlisted',
'Private'
);
-- Table: public.playlists
-- DROP TABLE public.playlists;
CREATE TABLE IF NOT EXISTS public.playlists
(
title text,
id text primary key,
author text,
description text,
video_count integer,
created timestamptz,
updated timestamptz,
privacy privacy,
index int8[]
);
GRANT ALL ON public.playlists TO current_user;
session_ids.sql: |+
-- Table: public.session_ids
-- DROP TABLE public.session_ids;
CREATE TABLE IF NOT EXISTS public.session_ids
(
id text NOT NULL,
email text,
issued timestamp with time zone,
CONSTRAINT session_ids_pkey PRIMARY KEY (id)
);
GRANT ALL ON TABLE public.session_ids TO current_user;
-- Index: public.session_ids_id_idx
-- DROP INDEX public.session_ids_id_idx;
CREATE INDEX IF NOT EXISTS session_ids_id_idx
ON public.session_ids
USING btree
(id COLLATE pg_catalog."default");
users.sql: |+
-- Table: public.users
-- DROP TABLE public.users;
CREATE TABLE IF NOT EXISTS public.users
(
updated timestamp with time zone,
notifications text[],
subscriptions text[],
email text NOT NULL,
preferences text,
password text,
token text,
watched text[],
feed_needs_update boolean,
CONSTRAINT users_email_key UNIQUE (email)
);
GRANT ALL ON TABLE public.users TO current_user;
-- Index: public.email_unique_idx
-- DROP INDEX public.email_unique_idx;
CREATE UNIQUE INDEX IF NOT EXISTS email_unique_idx
ON public.users
USING btree
(lower(email) COLLATE pg_catalog."default");
videos.sql: |+
-- Table: public.videos
-- DROP TABLE public.videos;
CREATE UNLOGGED TABLE IF NOT EXISTS public.videos
(
id text NOT NULL,
info text,
updated timestamp with time zone,
CONSTRAINT videos_pkey PRIMARY KEY (id)
);
GRANT ALL ON TABLE public.videos TO current_user;
-- Index: public.id_idx
-- DROP INDEX public.id_idx;
CREATE UNIQUE INDEX IF NOT EXISTS id_idx
ON public.videos
USING btree
(id COLLATE pg_catalog."default");
```
## :octicons-video-16: Install Invidious!
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation[^1] using `flux reconcile source git flux-system`. You should see the kustomization appear...
```bash
~ flux get kustomizations | grep invidious
invidious main/d34779f False True Applied revision: main/d34779f
~
```
The helmrelease should be reconciled...
```bash
~ flux get helmreleases -n invidious
NAME REVISION SUSPENDED READY MESSAGE
invidious 1.1.1 False True Release reconciliation succeeded
~
```
And you should have happy Invidious pods:
```bash
~ k get pods -n invidious
NAME READY STATUS RESTARTS AGE
invidious-invidious-64f4fb8d75-kr4tw 1/1 Running 0 77m
invidious-postgresql-0 1/1 Running 0 11h
~
```
... and finally check that the ingress was created as desired:
```bash
~ k get ingress -n invidious
NAME CLASS HOSTS ADDRESS PORTS AGE
invidious <none> in.fnky.nz 80, 443 19h
~
```
Or in the case of an ingressRoute:
```bash
~ k get ingressroute -n invidious
NAME AGE
in.fnky.nz 19h
```
Now hit the URL you defined in your config, you'll see the basic search screen. Enter a search phrase (*"marvel movie trailer"*) to see the YouTube video results, or paste in a YouTube URL such as `https://www.youtube.com/watch?v=bxqLsrlakK8`, change the domain name from `www.youtube.com` to your instance's FQDN, and watch the fun [^2]!
You can also install a range of browser add-ons to automatically redirect you from youtube.com to your Invidious instance. I'm testing "[libredirect](https://addons.mozilla.org/en-US/firefox/addon/libredirect/)" currently, which seems to work as advertised!
## Summary
What have we achieved? We have an HTTPS-protected private YouTube frontend - we can now watch whatever videos we please, without feeding Google's profile on us. We can also subscribe to channels without requiring a Google account, and we can share individual videos directly via our instance (*by generating links*).
!!! summary "Summary"
Created:
* [X] We are free of the creepy tracking attached to YouTube videos!
{% include 'recipe-footer.md' %}
[^1]: This is how a footnote works