diff --git a/_includes/kubernetes-flux-dnsendpoint.md b/_includes/kubernetes-flux-dnsendpoint.md
new file mode 100644
index 0000000..cfceb85
--- /dev/null
+++ b/_includes/kubernetes-flux-dnsendpoint.md
@@ -0,0 +1,21 @@
+### {{ page.meta.slug }} DNSEndpoint
+
+If, like me, you prefer to create your DNS records the "GitOps way" using [ExternalDNS](/kubernetes/external-dns/), create something like the following example to add a DNS entry for your {{ page.meta.slug }} ingress:
+
+```yaml title="/{{ page.meta.helmrelease_namespace }}/dnsendpoint-{{ page.meta.helmrelease_name }}.example.com.yaml"
+apiVersion: externaldns.k8s.io/v1alpha1
+kind: DNSEndpoint
+metadata:
+  name: "{{ page.meta.helmrelease_name }}.example.com"
+  namespace: {{ page.meta.helmrelease_namespace }}
+spec:
+  endpoints:
+    - dnsName: "{{ page.meta.helmrelease_name }}.example.com"
+      recordTTL: 180
+      recordType: CNAME
+      targets:
+        - "traefik-ingress.example.com"
+```
+
+!!! tip
+    Rather than creating individual A records for each host, I prefer to create one A record (*`traefik-ingress.example.com` in the example above*), and then create individual CNAME records pointing to that A record.
diff --git a/_includes/kubernetes-flux-helmrelease.md b/_includes/kubernetes-flux-helmrelease.md
index 204b2a7..2d9be9a 100644
--- a/_includes/kubernetes-flux-helmrelease.md
+++ b/_includes/kubernetes-flux-helmrelease.md
@@ -1,4 +1,4 @@
-### HelmRelease
+### {{ page.meta.slug }} HelmRelease
 
 Lastly, having set the scene above, we define the HelmRelease which will actually deploy {{ page.meta.helmrelease_name }} into the cluster. We start with a basic HelmRelease YAML, like this example:
 
@@ -23,10 +23,10 @@ spec:
  values:  # paste contents of upstream values.yaml below, indented 4 spaces (2)
```
-1. I like to set this to the semver minor version of the upstream chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
+1. I like to set this to the current semver minor version of the {{ page.meta.slug }} helm chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
 2. Paste the full contents of the upstream [values.yaml]({{ page.meta.values_yaml_url }}) here, indented 4 spaces under the `values:` key
 
-If we deploy this helmrelease as-is, we'll inherit every default from the upstream chart. That's probably hardly ever what we want to do, so my preference is to take the entire contents of the helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and to paste these (*indented*), under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, to make future chart upgrades simpler.
+If we deploy this helmrelease as-is, we'll inherit every default from the upstream {{ page.meta.slug }} helm chart. That's hardly ever what we want to do, so my preference is to take the entire contents of the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and to paste these (*indented*), under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, to make future chart upgrades simpler.
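+
+One way to grab those upstream defaults (*rather than copy/pasting them out of your browser*) is with the `helm` CLI itself. Here's a rough sketch - the repository name/URL and chart name are placeholders, to be substituted for the chart you're actually deploying:
+
+```bash
+# Add the upstream chart repository locally (placeholder name/URL - substitute your own)
+helm repo add example-repo https://example-charts.example.com
+helm repo update
+
+# Dump the chart's default values, ready to paste (indented) under the HelmRelease's values: key
+helm show values example-repo/example-chart > values.yaml
+```
+
+The output of `helm show values` is the chart's default values.yaml, which is exactly what gets pasted (indented 4 spaces) under the `values:` key above.
+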
--8<-- "kubernetes-why-not-full-values-in-configmap.md"
diff --git a/_includes/kubernetes-flux-helmrepository.md b/_includes/kubernetes-flux-helmrepository.md
index 3810b20..954d7f7 100644
--- a/_includes/kubernetes-flux-helmrepository.md
+++ b/_includes/kubernetes-flux-helmrepository.md
@@ -1,6 +1,6 @@
-### HelmRepository
+### {{ page.meta.slug }} HelmRepository
 
-We're going to install a helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):
+We're going to install the {{ page.meta.slug }} helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):
 
```yaml title="/bootstrap/helmrepositories/helmrepository-{{ page.meta.helm_chart_repo_name }}.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
diff --git a/_includes/kubernetes-flux-kustomization.md b/_includes/kubernetes-flux-kustomization.md
index d452903..f5eca89 100644
--- a/_includes/kubernetes-flux-kustomization.md
+++ b/_includes/kubernetes-flux-kustomization.md
@@ -1,4 +1,4 @@
-### Kustomization
+### {{ page.meta.slug }} Kustomization
 
 Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/{{ page.meta.helmrelease_namespace }}/`. I create this example Kustomization in my flux repo:
diff --git a/_includes/kubernetes-flux-namespace.md b/_includes/kubernetes-flux-namespace.md
index f736be5..60ebaf3 100644
--- a/_includes/kubernetes-flux-namespace.md
+++ b/_includes/kubernetes-flux-namespace.md
@@ -1,6 +1,6 @@
 ## Preparation
 
-### Namespace
+### {{ page.meta.slug }} Namespace
 
 We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml`:
diff --git a/_snippets/recipe-footer.md b/_includes/recipe-footer.md
similarity index 86%
rename from _snippets/recipe-footer.md
rename to _includes/recipe-footer.md
index 201e86c..ba4186a 100644
--- a/_snippets/recipe-footer.md
+++ b/_includes/recipe-footer.md
@@ -2,6 +2,17 @@
 
 ///Footnotes Go Here///
 
+{% if page.meta.upstream %}
+### {{ page.meta.slug }} resources
+
+* [{{ page.meta.slug }} (official site)]({{ page.meta.upstream }})
+{% endif %}
+{% if page.meta.links %}
+{% for link in page.meta.links %}
+* [{{ page.meta.slug }} {{ link.name }}]({{ link.uri }})
+{% endfor %}
+{% endif %}
+
 ### Tip your waiter (sponsor) 👏
 
 Did you receive excellent service? Want to compliment the chef? (_..and support development of current and future recipes!_) Sponsor me on [Github][github_sponsor] / [Ko-Fi][kofi] / [Patreon][patreon], or see the [contribute](/community/contribute/) page for more (_free or paid)_ ways to say thank you! 👏
diff --git a/docs/blog/posts/notes/elfhosted/secure-private-network-within-hetzner.md b/docs/blog/posts/notes/elfhosted/secure-private-network-within-hetzner.md
new file mode 100644
index 0000000..a1c988e
--- /dev/null
+++ b/docs/blog/posts/notes/elfhosted/secure-private-network-within-hetzner.md
@@ -0,0 +1,67 @@
+---
+date: 2023-06-09
+categories:
+  - note
+tags:
+  - elfhosted
+title: Baby steps towards ElfHosted
+description: Every journey has a beginning. This is the beginning of the ElfHosted journey
+draft: true
+---
+
+# Securing the Hetzner environment
+
+Before building out our Kubernetes cluster, I wanted to secure the environment a little. On Hetzner, each server is assigned a public IP from a huge pool, and is directly accessible over the internet. This provides quick access for administration, but I wanted to lock down access before building out our controlplane.
+
+## Requirements
+
+* [x] Kubernetes worker/controlplane nodes are privately addressed
+* [x] Control plane (API) will be accessible only internally
+* [x] Nodes can be administered directly on their private address range
+
+## The bastion VM
+
+I needed a "bastion" host - a small node (probably a VM), which I could secure and then use for further ingress into my infrastructure.
+
+So I created a small cloud "ampere" VM using Hetzner's cloud console. These cloud VMs are provisioned separately from dedicated servers, but it's possible to interconnect them with dedicated servers using vSwitches/subnets (basically VLANs).
+
+## Connecting the bastion VM to the dedicated servers
+
+I followed these guides:
+
+https://tailscale.com/kb/1150/cloud-hetzner/
+
+https://tailscale.com/kb/1077/secure-server-ubuntu-18-04/
+
+https://docs.hetzner.com/cloud/networks/connect-dedi-vswitch
+
+```bash
+tailscale up --advertise-routes 10.0.42.0/24
+```
+
+Enable IP forwarding (a sysctl edit), and add the following NAT rules so that forwarded traffic is masqueraded:
+
+```bash
+# NAT table rules
+*nat
+:POSTROUTING ACCEPT [0:0]
+
+# Forward traffic through eth0 - Change to match your out-interface
+-A POSTROUTING -s -j MASQUERADE
+
+# don't delete the 'COMMIT' line or these nat table rules won't
+# be processed
+COMMIT
+```
+
+hetzner_cloud_console_subnet_routes.png
+
+hetzner_vswitch_setup.png
+
+## Secure hosts
+
+* [ ] Create last-resort root password
+* [ ] Setup non-root sudo account (ansiblize this?)
\ No newline at end of file
diff --git a/docs/blog/posts/notes/elfhosted/setup-k3s.md b/docs/blog/posts/notes/elfhosted/setup-k3s.md
new file mode 100644
index 0000000..c1528bd
--- /dev/null
+++ b/docs/blog/posts/notes/elfhosted/setup-k3s.md
@@ -0,0 +1,151 @@
+---
+date: 2023-06-11
+categories:
+  - note
+tags:
+  - elfhosted
+title: Kubernetes on Hetzner dedicated server
+description: How to setup and secure a bare-metal Kubernetes infrastructure on Hetzner dedicated servers
+draft: true
+---
+
+# Kubernetes (K3s) on Hetzner
+
+In this post, we continue our adventure setting up an app hosting platform running on Kubernetes.
+
+--8<-- "blog-series-elfhosted.md"
+
+My two physical servers were "delivered" (to my inbox), along with instructions re SSHing to the "rescue image" environment, which looks like this:
+
+
+
+--8<-- "what-is-elfhosted.md"
+
+
+## Secure nodes
+
+Per the K3s docs, there are some local firewall requirements for K3s server/worker nodes:
+
+https://docs.k3s.io/installation/requirements#inbound-rules-for-k3s-server-nodes
+
+
+It's aliiive!
+
+```
+root@fairy01 ~ # kubectl get nodes
+NAME      STATUS   ROLES                       AGE   VERSION
+elf01     Ready    <none>                      15s   v1.26.5+k3s1
+fairy01   Ready    control-plane,etcd,master   96s   v1.26.5+k3s1
+root@fairy01 ~ #
+```
+
+Now install flux, according to this documented bootstrap process...
+
+
+https://metallb.org/configuration/k3s/
+
+
+Prepare for Longhorn's [NFS shenanigans](https://longhorn.io/docs/1.4.2/deploy/install/#installing-nfsv4-client):
+
+```
+apt-get -y install nfs-common tuned
+```
+
+Performance mode!
+ +`tuned-adm profile throughput-performance` + +Taint the master(s) + +``` +kubectl taint node fairy01 node-role.kubernetes.io/control-plane=true:NoSchedule +``` + + +``` +increase max pods: +https://stackoverflow.com/questions/65894616/how-do-you-increase-maximum-pods-per-node-in-k3s + +https://gist.github.com/rosskirkpat/57aa392a4b44cca3d48dfe58b5716954 + +curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh - +``` + +create secondary masters: + +``` +curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh - + +``` + + +``` +mkdir -p /etc/rancher/k3s/ +cat << EOF >> /etc/rancher/k3s/kubelet-server.config +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +maxPods: 500 +EOF +``` + + + + +and on the worker + + +Ensure that `/etc/rancher/k3s` exists, to hold our kubelet custom configuration file: + +```bash +mkdir -p /etc/rancher/k3s/ +cat << EOF >> /etc/rancher/k3s/kubelet-server.config +apiVersion: kubelet.config.k8s.io/v1beta1 +kind: KubeletConfiguration +maxPods: 500 +EOF +``` + +Get [token](https://docs.k3s.io/cli/token) from `/var/lib/rancher/k3s/server/token` on the server, and prepare the environment like this: +```bash +export K3S_TOKEN= +export K3S_URL=https://:6443 +``` + +Now join the worker using + +``` +curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --flannel-iface=eno1.4000 --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --prefer-bundled-bin" sh - + +``` + + +``` +flux bootstrap github \ + --owner=geek-cookbook \ + --repository=geek-cookbook/elfhosted-flux \ + --path bootstrap + ``` + +``` +root@fairy01:~# kubectl -n sealed-secrets create secret tls elfhosted-expires-june-2033 \ + --cert=mytls.crt --key=mytls.key +secret/elfhosted-expires-june-2033 created +root@fairy01:~# kubectl kubectl -n sealed-secrets label secret^C +root@fairy01:~# kubectl -n sealed-secrets label secret elfhosted-expires-june-2033 sealedsecrets.bitnami.com/sealed-secrets-key=active +secret/elfhosted-expires-june-2033 labeled +root@fairy01:~# kubectl rollout restart -n sealed-secrets deployment sealed-secrets +deployment.apps/sealed-secrets restarted +``` + +increase watchers (jellyfin) +echo fs.inotify.max_user_watches=2097152 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p + +echo 512 > /proc/sys/fs/inotify/max_user_instances + +on dwarves + +k taint node dwarf01.elfhosted.com node-role.elfhosted.com/node=storage:NoSchedule + diff --git a/docs/docker-swarm/authelia.md b/docs/docker-swarm/authelia.md index 4163e88..394dc74 100644 --- a/docs/docker-swarm/authelia.md +++ b/docs/docker-swarm/authelia.md @@ -274,4 +274,4 @@ What have we achieved? By adding a simple label to any service, we can secure an [^1]: The initial inclusion of Authelia was due to the efforts of @bencey in Discord (Thanks Ben!) 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/design.md b/docs/docker-swarm/design.md index d6d1af4..ff348d6 100644 --- a/docs/docker-swarm/design.md +++ b/docs/docker-swarm/design.md @@ -94,4 +94,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast [^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/docker-swarm-mode.md b/docs/docker-swarm/docker-swarm-mode.md index 92c30a7..058351f 100644 --- a/docs/docker-swarm/docker-swarm-mode.md +++ b/docs/docker-swarm/docker-swarm-mode.md @@ -180,4 +180,4 @@ What have we achieved? * [X] [Docker swarm cluster](/docker-swarm/design/) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/index.md b/docs/docker-swarm/index.md index 8910c14..4b3822c 100644 --- a/docs/docker-swarm/index.md +++ b/docs/docker-swarm/index.md @@ -23,7 +23,7 @@ You too, action-geek, can save the day, by... Ready to enter the matrix? Jump in on one of the links above, or start reading the [design](/docker-swarm/design/) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: This was an [iconic movie](https://www.imdb.com/title/tt0111257/). It even won 2 Oscars! (*but not for the acting*) [^2]: There are significant advantages to using Docker Swarm, even on just a single node. diff --git a/docs/docker-swarm/keepalived.md b/docs/docker-swarm/keepalived.md index 15a5fad..c6cf2b1 100644 --- a/docs/docker-swarm/keepalived.md +++ b/docs/docker-swarm/keepalived.md @@ -88,4 +88,4 @@ What have we achieved? [^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targetted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections. [^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master. 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/nodes.md b/docs/docker-swarm/nodes.md index 1b2ec9b..7eefe75 100644 --- a/docs/docker-swarm/nodes.md +++ b/docs/docker-swarm/nodes.md @@ -77,4 +77,4 @@ After completing the above, you should have: * At least 20GB disk space (_but it'll be tight_) * [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/registry.md b/docs/docker-swarm/registry.md index adaa6a3..8a274d1 100644 --- a/docs/docker-swarm/registry.md +++ b/docs/docker-swarm/registry.md @@ -110,4 +110,4 @@ Then restart docker itself, by running `systemctl restart docker` [^1]: Note the extra comma required after "false" above ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/shared-storage-ceph.md b/docs/docker-swarm/shared-storage-ceph.md index 2f179cf..41ed53f 100644 --- a/docs/docker-swarm/shared-storage-ceph.md +++ b/docs/docker-swarm/shared-storage-ceph.md @@ -227,4 +227,4 @@ Here's a screencast of the playbook in action. I sped up the boring parts, it ac [patreon]: [github_sponsor]: ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/shared-storage-gluster.md b/docs/docker-swarm/shared-storage-gluster.md index 65963c5..d8dfe9f 100644 --- a/docs/docker-swarm/shared-storage-gluster.md +++ b/docs/docker-swarm/shared-storage-gluster.md @@ -172,4 +172,4 @@ After completing the above, you should have: 1. Migration of shared storage from GlusterFS to Ceph 2. Correct the fact that volumes don't automount on boot ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/traefik-forward-auth/dex-static.md b/docs/docker-swarm/traefik-forward-auth/dex-static.md index 94f476c..b897c09 100644 --- a/docs/docker-swarm/traefik-forward-auth/dex-static.md +++ b/docs/docker-swarm/traefik-forward-auth/dex-static.md @@ -203,4 +203,4 @@ What have we achieved? By adding an additional label to any service, we can secu [^1]: You can remove the `whoami` container once you know Traefik Forward Auth is working properly ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/traefik-forward-auth/google.md b/docs/docker-swarm/traefik-forward-auth/google.md index f9578d1..4d418b7 100644 --- a/docs/docker-swarm/traefik-forward-auth/google.md +++ b/docs/docker-swarm/traefik-forward-auth/google.md @@ -133,4 +133,4 @@ What have we achieved? By adding an additional three simple labels to any servic [^1]: Be sure to populate `WHITELIST` in `traefik-forward-auth.env`, else you'll happily be granting **any** authenticated Google account access to your services! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/traefik-forward-auth/index.md b/docs/docker-swarm/traefik-forward-auth/index.md index 59eda31..22cb2d0 100644 --- a/docs/docker-swarm/traefik-forward-auth/index.md +++ b/docs/docker-swarm/traefik-forward-auth/index.md @@ -52,6 +52,6 @@ Traefik Forward Auth needs to authenticate an incoming user against a provider. 
* [Authenticate Traefik Forward Auth against a whitelist of Google accounts][tfa-google] * [Authenticate Traefik Forward Auth against a self-hosted Keycloak instance][tfa-keycloak] with an optional [OpenLDAP backend][openldap] ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Authhost mode is specifically handy for Google authentication, since Google doesn't permit wildcard redirect_uris, like [Keycloak][keycloak] does. diff --git a/docs/docker-swarm/traefik-forward-auth/keycloak.md b/docs/docker-swarm/traefik-forward-auth/keycloak.md index 1e65329..6b97fcc 100644 --- a/docs/docker-swarm/traefik-forward-auth/keycloak.md +++ b/docs/docker-swarm/traefik-forward-auth/keycloak.md @@ -100,4 +100,4 @@ What have we achieved? By adding an additional three simple labels to any servic [KeyCloak][keycloak] is the "big daddy" of self-hosted authentication platforms - it has a beautiful GUI, and a very advanced and mature featureset. Like Authelia, KeyCloak can [use an LDAP server](/recipes/keycloak/authenticate-against-openldap/) as a backend, but _unlike_ Authelia, KeyCloak allows for 2-way sync between that LDAP backend, meaning KeyCloak can be used to _create_ and _update_ the LDAP entries (*Authelia's is just a one-way LDAP lookup - you'll need another tool to actually administer your LDAP database*). ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/docker-swarm/traefik.md b/docs/docker-swarm/traefik.md index fa06a57..30988f6 100644 --- a/docs/docker-swarm/traefik.md +++ b/docs/docker-swarm/traefik.md @@ -250,4 +250,4 @@ You should now be able to access[^1] your traefik instance on `https://traefik.< [^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/docker-swarm/traefik-forward-auth/)! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/images/authentik.png b/docs/images/authentik.png new file mode 100644 index 0000000..b070b60 Binary files /dev/null and b/docs/images/authentik.png differ diff --git a/docs/images/joplin-server.png b/docs/images/joplin-server.png new file mode 100644 index 0000000..add96d3 Binary files /dev/null and b/docs/images/joplin-server.png differ diff --git a/docs/kubernetes/backup/csi-snapshots/snapshot-controller.md b/docs/kubernetes/backup/csi-snapshots/snapshot-controller.md index 8b54760..988249d 100644 --- a/docs/kubernetes/backup/csi-snapshots/snapshot-controller.md +++ b/docs/kubernetes/backup/csi-snapshots/snapshot-controller.md @@ -68,4 +68,4 @@ What have we achieved? We've got snapshot-controller running, and ready to manag * [ ] Configure [Velero](/kubernetes/backup/velero/) with a VolumeSnapshotLocation, so that volume snapshots can be made as part of a BackupSchedule! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/backup/csi-snapshots/snapshot-validation-webhook.md b/docs/kubernetes/backup/csi-snapshots/snapshot-validation-webhook.md index 93e2b45..bbbd3fd 100644 --- a/docs/kubernetes/backup/csi-snapshots/snapshot-validation-webhook.md +++ b/docs/kubernetes/backup/csi-snapshots/snapshot-validation-webhook.md @@ -45,4 +45,4 @@ What have we achieved? 
We now have the snapshot validation admission webhook run * [ ] Deploy [snapshot-controller]( (/kubernetes/backup/csi-snapshots/snapshot-controller/)) itself ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/backup/index.md b/docs/kubernetes/backup/index.md index d8eedbe..a72179e 100644 --- a/docs/kubernetes/backup/index.md +++ b/docs/kubernetes/backup/index.md @@ -17,4 +17,4 @@ For your backup needs, I present, Velero, by VMWare: * [Velero](/kubernetes/backup/velero/) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/backup/velero.md b/docs/kubernetes/backup/velero.md index 5e6ac12..bb5035c 100644 --- a/docs/kubernetes/backup/velero.md +++ b/docs/kubernetes/backup/velero.md @@ -11,6 +11,8 @@ helmrelease_namespace: velero kustomization_name: velero slug: Velero status: new +upstream: https://velero.io/ +github_repo: https://github.com/vmware-tanzu/velero --- # Velero @@ -326,4 +328,4 @@ What have we achieved? We've got scheduled backups running, and we've successful [^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group [^2]: But not the rook-ceph cluster. If that dies, the snapshots are toast :toast: too! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/cluster/digitalocean.md b/docs/kubernetes/cluster/digitalocean.md index 6c8013e..e7802d2 100644 --- a/docs/kubernetes/cluster/digitalocean.md +++ b/docs/kubernetes/cluster/digitalocean.md @@ -76,4 +76,4 @@ That's it. You have a beautiful new kubernetes cluster ready for some action! [^1]: Do you live in the CLI? Install the kubectl autocompletion for [bash or zsh](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) to make your life much easier! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/cluster/index.md b/docs/kubernetes/cluster/index.md index 6f0a2fd..e070449 100644 --- a/docs/kubernetes/cluster/index.md +++ b/docs/kubernetes/cluster/index.md @@ -62,4 +62,4 @@ You'll learn more about how to care for and feed your cluster if you build it yo Go with a self-hosted cluster if you want to learn more, you'd rather spend time than money, or you've already got significant investment in local infructure and technical skillz. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/cluster/k3s.md b/docs/kubernetes/cluster/k3s.md index d74984e..25cddb4 100644 --- a/docs/kubernetes/cluster/k3s.md +++ b/docs/kubernetes/cluster/k3s.md @@ -156,4 +156,4 @@ Cuddle your beautiful new cluster by running `kubectl cluster-info` [^1] - if th [^2]: Looking for your k3s logs? Under Ubuntu LTS, run `journalctl -u k3s` to show your logs [^3]: k3s is not the only "lightweight kubernetes" game in town. Minikube (*virtualization-based*) and mikrok8s (*possibly better for Ubuntu users since it's installed in a "snap" - haha*) are also popular options. One day I'll write a "mikrok8s vs k3s" review, but it doesn't really matter for our cluster operations - as I understand it, microk8s makes HA clustering slightly easire than k3s, but you get slightly less "out-of-the-box" in return, so mikrok8s may be more suitable for experience users / production edge deployments. 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/deployment/flux/design.md b/docs/kubernetes/deployment/flux/design.md index e7742b7..52a78c6 100644 --- a/docs/kubernetes/deployment/flux/design.md +++ b/docs/kubernetes/deployment/flux/design.md @@ -63,4 +63,4 @@ Good! I describe how to put this design into action on the [next page](/kubernet [^1]: ERDs are fancy diagrams for nERDs which [represent cardinality between entities](https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model#Crow's_foot_notation) scribbled using the foot of a crow 🐓 ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/deployment/flux/install.md b/docs/kubernetes/deployment/flux/install.md index 731dc50..f4160fb 100644 --- a/docs/kubernetes/deployment/flux/install.md +++ b/docs/kubernetes/deployment/flux/install.md @@ -147,7 +147,7 @@ If you used my template repo, some extra things also happened.. That's best explained on the [next page](/kubernetes/deployment/flux/design/), describing the design we're using... ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: The [template repo](https://github.com/geek-cookbook/template-flux/) also "bootstraps" a simple example re how to [operate flux](/kubernetes/deployment/flux/operate/), by deploying the podinfo helm chart. [^2]: TIL that GitHub listens for SSH on `ssh.github.com` on port 443! diff --git a/docs/kubernetes/deployment/flux/operate.md b/docs/kubernetes/deployment/flux/operate.md index d64dba2..0fe2133 100644 --- a/docs/kubernetes/deployment/flux/operate.md +++ b/docs/kubernetes/deployment/flux/operate.md @@ -154,6 +154,6 @@ Commit your changes, and once again do the waiting / impatient-reconciling jig. We did it. The Holy Grail. We deployed an application into the cluster, without touching the cluster. Pinch yourself, and then prove it worked by running `flux get kustomizations`, or `kubectl get helmreleases -n podinfo`. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Got suggestions for improvements here? Shout out in the comments below! diff --git a/docs/kubernetes/design.md b/docs/kubernetes/design.md index 097addd..95ae42e 100644 --- a/docs/kubernetes/design.md +++ b/docs/kubernetes/design.md @@ -129,4 +129,4 @@ Still with me? Good. Move on to creating your cluster! - [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks - [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/external-dns.md b/docs/kubernetes/external-dns.md index e3f7d06..fd4c4a0 100644 --- a/docs/kubernetes/external-dns.md +++ b/docs/kubernetes/external-dns.md @@ -154,6 +154,6 @@ What have we achieved? By simply creating another YAML in our flux repo alongsid * [X] DNS records are created automatically based on YAMLs (*or even just on services and ingresses!*) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Why yes, I **have** accidentally caused outages / conflicts by "leaking" DNS entries automatically! diff --git a/docs/kubernetes/helm.md b/docs/kubernetes/helm.md index cdd0874..7ae4c09 100644 --- a/docs/kubernetes/helm.md +++ b/docs/kubernetes/helm.md @@ -53,4 +53,4 @@ Still with me? Good. Move on to understanding Helm charts... [^1]: Of course, you can have lots of fun deploying all sorts of things via Helm. Check out for some examples. 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/index.md b/docs/kubernetes/index.md index 24f3d4a..597e753 100644 --- a/docs/kubernetes/index.md +++ b/docs/kubernetes/index.md @@ -46,6 +46,6 @@ Primarily you need 2 things: Practically, you need some extras too, but you can mix-and-match these. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Of course, if you **do** enjoy understanding the intricacies of how your tools work, you're in good company! diff --git a/docs/kubernetes/ingress/index.md b/docs/kubernetes/ingress/index.md index dacbac5..38899b1 100644 --- a/docs/kubernetes/ingress/index.md +++ b/docs/kubernetes/ingress/index.md @@ -14,6 +14,6 @@ There are many popular Ingress Controllers, we're going to cover two equally use Choose at least one of the above (*there may be valid reasons to use both!* [^1]), so that you can expose applications via Ingress. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: One cluster I manage uses traefik Traefik for public services, but Nginx for internal management services such as Prometheus, etc. The idea is that you'd need one type of Ingress to help debug problems with the _other_ type! diff --git a/docs/kubernetes/ingress/nginx.md b/docs/kubernetes/ingress/nginx.md index a468b25..4914942 100644 --- a/docs/kubernetes/ingress/nginx.md +++ b/docs/kubernetes/ingress/nginx.md @@ -234,6 +234,6 @@ Commit your changes, wait for the reconciliation, and the next time you point yo Are things not working as expected? Watch the nginx-ingress-controller's logs with ```kubectl logs -n nginx-ingress-controller -l app.kubernetes.io/name=nginx-ingress-controller -f```. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup! diff --git a/docs/kubernetes/ingress/traefik/dashboard.md b/docs/kubernetes/ingress/traefik/dashboard.md index 328c1b3..9bb5c2a 100644 --- a/docs/kubernetes/ingress/traefik/dashboard.md +++ b/docs/kubernetes/ingress/traefik/dashboard.md @@ -15,6 +15,6 @@ One of the advantages [Traefik](/kubernetes/ingress/traefik/) offers over [Nginx * [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*) * [x] [Traefik](/kubernetes/ingress/traefik/) deployed per-design ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup! diff --git a/docs/kubernetes/ingress/traefik/index.md b/docs/kubernetes/ingress/traefik/index.md index be9c7a3..3ba2d2f 100644 --- a/docs/kubernetes/ingress/traefik/index.md +++ b/docs/kubernetes/ingress/traefik/index.md @@ -236,6 +236,6 @@ Commit your changes, wait for the reconciliation, and the next time you point yo Are things not working as expected? Watch the traefik's logs with ```kubectl logs -n traefik -l app.kubernetes.io/name=traefik -f```. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup! 
diff --git a/docs/kubernetes/loadbalancer/index.md b/docs/kubernetes/loadbalancer/index.md index ccadc69..150f156 100644 --- a/docs/kubernetes/loadbalancer/index.md +++ b/docs/kubernetes/loadbalancer/index.md @@ -50,6 +50,6 @@ Assuming you only had a single Kubernetes node (*say, a small k3s deployment*), (*This is [the way k3s works](/kubernetes/loadbalancer/k3s/) by default, although it's still called a LoadBalancer*) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: It is possible to be prescriptive about which port is used for a Nodeport-exposed service, and this is occasionally [a valid deployment strategy](https://github.com/portainer/k8s/#using-nodeport-on-a-localremote-cluster), but you're usually limited to ports between 30000 and 32768. diff --git a/docs/kubernetes/loadbalancer/k3s.md b/docs/kubernetes/loadbalancer/k3s.md index d4283c5..239c93a 100644 --- a/docs/kubernetes/loadbalancer/k3s.md +++ b/docs/kubernetes/loadbalancer/k3s.md @@ -23,6 +23,6 @@ Yes, to get you started. But consider the following limitations: To tackle these issues, you need some more advanced network configuration, along with [MetalLB](/kubernetes/loadbalancer/metallb/). ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: And seriously, if you're building a Kubernetes cluster, of **course** you'll want more than one host! diff --git a/docs/kubernetes/loadbalancer/metallb/index.md b/docs/kubernetes/loadbalancer/metallb/index.md index bd54ca0..06bf786 100644 --- a/docs/kubernetes/loadbalancer/metallb/index.md +++ b/docs/kubernetes/loadbalancer/metallb/index.md @@ -328,6 +328,6 @@ To: Commit your changes, wait for a reconciliation, and run `kubectl get services -n podinfo`. All going well, you should see that the service now has an IP assigned from the pool you chose for MetalLB! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: I've documented an example re [how to configure BGP between MetalLB and pfsense](/kubernetes/loadbalancer/metallb/pfsense/). diff --git a/docs/kubernetes/loadbalancer/metallb/pfsense.md b/docs/kubernetes/loadbalancer/metallb/pfsense.md index 74799a2..c9acef4 100644 --- a/docs/kubernetes/loadbalancer/metallb/pfsense.md +++ b/docs/kubernetes/loadbalancer/metallb/pfsense.md @@ -75,6 +75,6 @@ If you're not receiving any routes from MetalLB, or if the neighbors aren't in a 2. Examine the metallb speaker logs in the cluster, by running `kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb` 3. SSH to the pfsense, start a shell and launch the FFR shell by running `vtysh`. Now you're in a cisco-like console where commands like `show ip bgp sum` and `show ip bgp neighbors received-routes` will show you interesting debugging things. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: If you decide to deploy some policy with route-maps, prefix-lists, etc, it's all found under **Services -> FRR Global/Zebra** 🦓 diff --git a/docs/kubernetes/monitoring/index.md b/docs/kubernetes/monitoring/index.md index 309165c..467fa2c 100644 --- a/docs/kubernetes/monitoring/index.md +++ b/docs/kubernetes/monitoring/index.md @@ -311,4 +311,4 @@ At this point, you should be able to access your instance on your chosen DNS nam To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/). 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/persistence/local-path-provisioner.md b/docs/kubernetes/persistence/local-path-provisioner.md index 1f01705..98aa2a5 100644 --- a/docs/kubernetes/persistence/local-path-provisioner.md +++ b/docs/kubernetes/persistence/local-path-provisioner.md @@ -40,6 +40,6 @@ A few things you should know: In summary, Local Path Provisioner is fine if you have very specifically sized workloads and you don't care about node redundancy. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: [TopoLVM](/kubernetes/persistence/topolvm/) also creates per-node volumes which aren't "portable" between nodes, but because it relies on LVM, it is "capacity-aware", and is able to distribute storage among multiple nodes based on available capacity. diff --git a/docs/kubernetes/persistence/nfs-subdirectory-provider.md b/docs/kubernetes/persistence/nfs-subdirectory-provider.md index 1246e28..ce7fde9 100644 --- a/docs/kubernetes/persistence/nfs-subdirectory-provider.md +++ b/docs/kubernetes/persistence/nfs-subdirectory-provider.md @@ -243,6 +243,6 @@ What have we achieved? We have a storage provider that can use an NFS server as * [X] We have a new storage provider ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: The reason I shortened it is so I didn't have to type nfs-subdirectory-provider each time. If you want that sort of pain in your life, feel free to change it! diff --git a/docs/kubernetes/persistence/rook-ceph/cluster.md b/docs/kubernetes/persistence/rook-ceph/cluster.md index 6248156..73a3a0f 100644 --- a/docs/kubernetes/persistence/rook-ceph/cluster.md +++ b/docs/kubernetes/persistence/rook-ceph/cluster.md @@ -415,6 +415,6 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed * [X] StorageClasses are available so that the cluster storage can be consumed by your pods * [X] Pretty graphs are viewable in the Ceph Dashboard ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Unless you **wanted** to deploy your cluster components in a separate namespace to the operator, of course! diff --git a/docs/kubernetes/persistence/rook-ceph/operator.md b/docs/kubernetes/persistence/rook-ceph/operator.md index 0589a91..6d0f87c 100644 --- a/docs/kubernetes/persistence/rook-ceph/operator.md +++ b/docs/kubernetes/persistence/rook-ceph/operator.md @@ -178,4 +178,4 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed * [ ] Deploy the ceph [cluster](/kubernetes/persistence/rook-ceph/cluster/) using a CR ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/persistence/topolvm.md b/docs/kubernetes/persistence/topolvm.md index d805573..d9a7888 100644 --- a/docs/kubernetes/persistence/topolvm.md +++ b/docs/kubernetes/persistence/topolvm.md @@ -271,6 +271,6 @@ Are things not working as expected? Try one of the following to look for issues: 3. Watch the scheduler logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=scheduler` 4. 
Watch the controller node logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=controller` ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group diff --git a/docs/kubernetes/sealed-secrets.md b/docs/kubernetes/sealed-secrets.md index fc5bf79..5e82246 100644 --- a/docs/kubernetes/sealed-secrets.md +++ b/docs/kubernetes/sealed-secrets.md @@ -624,6 +624,6 @@ root@shredder:~# And now when you create your seadsecrets, refer to the public key you just created using `--cert `. These secrets will be decryptable by **any** sealedsecrets controller bootstrapped with the same keypair (*above*). ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: There's no harm in storing the **public** key in the repo though, which means it's easy to refer to when sealing secrets. diff --git a/docs/kubernetes/snapshots.md b/docs/kubernetes/snapshots.md index f10cc3f..2c8b9ac 100644 --- a/docs/kubernetes/snapshots.md +++ b/docs/kubernetes/snapshots.md @@ -176,4 +176,4 @@ Still with me? Good. Move on to understanding Helm charts... [^1]: I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74). ``` ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/kubernetes/ssl-certificates/cert-manager.md b/docs/kubernetes/ssl-certificates/cert-manager.md index 40ca24f..2a80381 100644 --- a/docs/kubernetes/ssl-certificates/cert-manager.md +++ b/docs/kubernetes/ssl-certificates/cert-manager.md @@ -52,7 +52,7 @@ spec: Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/cert-manager`. I create this example Kustomization in my flux repo: ```yaml title="/bootstrap/kustomizations/kustomization-cert-manager.yaml" -apiVersion: kustomize.toolkit.fluxcd.io/v1beta1 +apiVersion: kustomize.toolkit.fluxcd.io/v1 kind: Kustomization metadata: name: cert-manager @@ -65,7 +65,6 @@ spec: sourceRef: kind: GitRepository name: flux-system - validation: server healthChecks: - apiVersion: apps/v1 kind: Deployment @@ -135,6 +134,6 @@ What do we have now? Well, we've got the cert-manager controller **running**, bu If your certificate is not created **aren't** created as you expect, then the best approach is to check the cert-manager logs, by running `kubectl logs -n cert-manager -l app.kubernetes.io/name=cert-manager`. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Why yes, I **have** accidentally rate-limited myself by deleting/recreating my prod certificates a few times! diff --git a/docs/kubernetes/ssl-certificates/index.md b/docs/kubernetes/ssl-certificates/index.md index 69afc34..fe1427f 100644 --- a/docs/kubernetes/ssl-certificates/index.md +++ b/docs/kubernetes/ssl-certificates/index.md @@ -17,6 +17,6 @@ I've split this section, conceptually, into 3 separate tasks: 2. Setup "[Issuers](/kubernetes/ssl-certificates/letsencrypt-issuers/)" for LetsEncrypt, which Cert Manager will use to request certificates 3. 
Setup a [wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/) in such a way that it can be used by Ingresses like Traefik or Nginx ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: I had a really annoying but smart boss once who taught me this. Hi Mark! :wave: diff --git a/docs/kubernetes/ssl-certificates/letsencrypt-issuers.md b/docs/kubernetes/ssl-certificates/letsencrypt-issuers.md index 7e1439b..ecb8431 100644 --- a/docs/kubernetes/ssl-certificates/letsencrypt-issuers.md +++ b/docs/kubernetes/ssl-certificates/letsencrypt-issuers.md @@ -32,7 +32,7 @@ metadata: Now we need a kustomization to tell Flux to install any YAMLs it finds in `/letsencrypt-wildcard-cert`. I create this example Kustomization in my flux repo: ```yaml title="/bootstrap/kustomizations/kustomization-letsencrypt-wildcard-cert.yaml" -apiVersion: kustomize.toolkit.fluxcd.io/v1beta1 +apiVersion: kustomize.toolkit.fluxcd.io/v1 kind: Kustomization metadata: name: letsencrypt-wildcard-cert @@ -48,7 +48,6 @@ spec: sourceRef: kind: GitRepository name: flux-system - validation: server ``` !!! tip @@ -140,6 +139,6 @@ Events: Provided your account is registered, you're ready to proceed with [creating a wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/)! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: Since a ClusterIssuer is not a namespaced resource, it doesn't exist in any specific namespace. Therefore, my assumption is that the `apiTokenSecretRef` secret is only "looked for" when a certificate (*which **is** namespaced*) requires validation. diff --git a/docs/kubernetes/ssl-certificates/secret-replicator.md b/docs/kubernetes/ssl-certificates/secret-replicator.md index 2c345fe..e1a042f 100644 --- a/docs/kubernetes/ssl-certificates/secret-replicator.md +++ b/docs/kubernetes/ssl-certificates/secret-replicator.md @@ -164,6 +164,6 @@ Look for secrets across the whole cluster, by running `kubectl get secrets -A | If your certificate is not created **aren't** created as you expect, then the best approach is to check the secret-replicator logs, by running `kubectl logs -n secret-replicator -l app.kubernetes.io/name=secret-replicator`. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: To my great New Zealandy confusion, "Kiwigrid GmbH" is a German company :shrug: diff --git a/docs/kubernetes/ssl-certificates/wildcard-certificate.md b/docs/kubernetes/ssl-certificates/wildcard-certificate.md index 1f17f60..adde6f9 100644 --- a/docs/kubernetes/ssl-certificates/wildcard-certificate.md +++ b/docs/kubernetes/ssl-certificates/wildcard-certificate.md @@ -108,6 +108,6 @@ spec: Commit the certificate and follow the steps above to confirm that your prod certificate has been issued. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: This process can take a frustratingly long time, and watching the cert-manager logs at least gives some assurance that it's progressing! diff --git a/docs/recipes/archivebox.md b/docs/recipes/archivebox.md index 29a04b4..b76c4e5 100644 --- a/docs/recipes/archivebox.md +++ b/docs/recipes/archivebox.md @@ -90,4 +90,4 @@ Launch the Archivebox stack by running ```docker stack deploy archivebox -c [^2]: As mentioned above, readers should refer to the [dani-garcia/vaultwarden wiki](https://github.com/dani-garcia/vaultwarden) for details on customizing the behaviour of Bitwarden. 
[^3]: The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Unfortunately on the 22nd August 2020 Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/bookstack.md b/docs/recipes/bookstack.md index d8c6035..4adb3e1 100644 --- a/docs/recipes/bookstack.md +++ b/docs/recipes/bookstack.md @@ -136,4 +136,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_pro [^1]: If you wanted to expose the Bookstack UI directly, you could remove the traefik-forward-auth from the design. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/calibre-web.md b/docs/recipes/calibre-web.md index c3c9f06..aea4133 100644 --- a/docs/recipes/calibre-web.md +++ b/docs/recipes/calibre-web.md @@ -114,4 +114,4 @@ Log into your new instance at `https://**YOUR-FQDN**`. You'll be directed to the [^1]: Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_) [^2]: A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web. [^3]: If you plan to use calibre-web to send `.mobi` files to your Kindle via `@kindle.com` email addresses, be sure to add the sending address to the "[Approved Personal Documents Email List](https://www.amazon.com/hz/mycd/myx#/home/settings/payment)" ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/collabora-online.md b/docs/recipes/collabora-online.md index ffdf7bd..0fb294c 100644 --- a/docs/recipes/collabora-online.md +++ b/docs/recipes/collabora-online.md @@ -307,4 +307,4 @@ Now browse your NextCloud files. Click the plus (+) sign to create a new documen [^1]: Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/cyberchef.md b/docs/recipes/cyberchef.md index 455e864..51c3938 100644 --- a/docs/recipes/cyberchef.md +++ b/docs/recipes/cyberchef.md @@ -69,7 +69,7 @@ networks: Launch your CyberChef stack by running ```docker stack deploy cyberchef -c ```, and then visit the URL you chose to begin the hackery! 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [2]: https://gchq.github.io/CyberChef/#recipe=From_Base64('A-Za-z0-9%2B/%3D',true)&input=VTI4Z2JHOXVaeUJoYm1RZ2RHaGhibXR6SUdadmNpQmhiR3dnZEdobElHWnBjMmd1 [6]: https://gchq.github.io/CyberChef/#recipe=RC4(%7B'option':'UTF8','string':'secret'%7D,'Hex','Hex')Disassemble_x86('64','Full%20x86%20architecture',16,0,true,true)&input=MjFkZGQyNTQwMTYwZWU2NWZlMDc3NzEwM2YyYTM5ZmJlNWJjYjZhYTBhYWJkNDE0ZjkwYzZjYWY1MzEyNzU0YWY3NzRiNzZiM2JiY2QxOTNjYjNkZGZkYmM1YTI2NTMzYTY4NmI1OWI4ZmVkNGQzODBkNDc0NDIwMWFlYzIwNDA1MDcxMzhlMmZlMmIzOTUwNDQ2ZGIzMWQyYmM2MjliZTRkM2YyZWIwMDQzYzI5M2Q3YTVkMjk2MmMwMGZlNmRhMzAwNzJkOGM1YTZiNGZlN2Q4NTlhMDQwZWVhZjI5OTczMzYzMDJmNWEwZWMxOQ diff --git a/docs/recipes/duplicati.md b/docs/recipes/duplicati.md index 5e8509d..504c8f9 100644 --- a/docs/recipes/duplicati.md +++ b/docs/recipes/duplicati.md @@ -126,4 +126,4 @@ Once we authenticate through the traefik-forward-auth provider, we can start con [^1]: Quote attributed to Mila Kunis [^2]: The [Duplicati 2 User's Manual](https://duplicati.readthedocs.io/en/latest/) contains all the information you'll need to configure backup endpoints, restore jobs, scheduling and advanced properties for your backup jobs. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/duplicity.md b/docs/recipes/duplicity.md index 9d6edd7..62907fe 100644 --- a/docs/recipes/duplicity.md +++ b/docs/recipes/duplicity.md @@ -159,4 +159,4 @@ Nothing will happen. Very boring. But when the cron script fires (daily), duplic [^1]: Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs. [^2]: The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to `duplicity.env`. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/elkarbackup.md b/docs/recipes/elkarbackup.md index 6270f1d..e2f74b4 100644 --- a/docs/recipes/elkarbackup.md +++ b/docs/recipes/elkarbackup.md @@ -225,4 +225,4 @@ This takes you to a list of backup names and file paths. You can choose to downl [^1]: If you wanted to expose the ElkarBackup UI directly, you could remove the traefik-forward-auth from the design. [^2]: The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/emby.md b/docs/recipes/emby.md index 506c6cd..2abf678 100644 --- a/docs/recipes/emby.md +++ b/docs/recipes/emby.md @@ -92,4 +92,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas [^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media! [^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already. 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/funkwhale.md b/docs/recipes/funkwhale.md index 554409e..9776624 100644 --- a/docs/recipes/funkwhale.md +++ b/docs/recipes/funkwhale.md @@ -141,4 +141,4 @@ root@swarm:~# [^3]: It should be noted that if you import your existing media, the files will be **copied** into Funkwhale's data folder. There doesn't seem to be a way to point Funkwhale at an existing collection and have it just play it from the filesystem. To this end, be prepared for double disk space usage if you plan to import your entire music collection! [^5]: No consideration is given at this point to backing up the Funkwhale data. Post a comment below if you'd like to see a backup container added! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/ghost.md b/docs/recipes/ghost.md index 0c8948c..9010318 100644 --- a/docs/recipes/ghost.md +++ b/docs/recipes/ghost.md @@ -71,4 +71,4 @@ Create your first administrative account at https://**YOUR-FQDN**/admin/ [^1]: A default using the SQlite database takes 548k of space ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/gitlab-runner.md b/docs/recipes/gitlab-runner.md index 804d9a5..922844e 100644 --- a/docs/recipes/gitlab-runner.md +++ b/docs/recipes/gitlab-runner.md @@ -94,4 +94,4 @@ Launch the GitLab Runner stack by running `docker stack deploy gitlab-runner -c [^1]: You'll note that I setup 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case. [^2]: Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/gitlab.md b/docs/recipes/gitlab.md index 4b75190..5ceb593 100644 --- a/docs/recipes/gitlab.md +++ b/docs/recipes/gitlab.md @@ -134,4 +134,4 @@ Log into your new instance at `https://[your FQDN]`, with user "root" and the pa [^1]: I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/gollum.md b/docs/recipes/gollum.md index f698abc..33a2f26 100644 --- a/docs/recipes/gollum.md +++ b/docs/recipes/gollum.md @@ -109,4 +109,4 @@ Launch the Gollum stack by running ```docker stack deploy gollum -c ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: "wife-insurance": When the developer's wife is a primary user of the platform, you can bet he'll be writing quality code! :woman: :material-karate: :man: :bed: :cry: [^2]: There's a [friendly Discord server](https://discord.com/invite/D8JsnBEuKb) for Immich too! 
diff --git a/docs/recipes/instapy.md b/docs/recipes/instapy.md
index 5b35bb2..a5425e3 100644
--- a/docs/recipes/instapy.md
+++ b/docs/recipes/instapy.md
@@ -130,4 +130,4 @@ You can **also** watch the bot at work by VNCing to your docker swarm, password
 
 [^1]: Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
diff --git a/docs/recipes/invidious.md b/docs/recipes/invidious.md
index 560ea09..dbbf877 100644
--- a/docs/recipes/invidious.md
+++ b/docs/recipes/invidious.md
@@ -1,7 +1,6 @@
 ---
 title: Invidious, your Youtube frontend instance in Docker Swarm
 description: How to create your own private Youtube frontend using Invidious in Docker Swarm
-status: new
 ---
 
 # Invidious: Private Youtube frontend instance in Docker Swarm
@@ -169,7 +168,7 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we
 
 * [X] We are free of the creepy tracking attached to YouTube videos!
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 
 [^1]: Check out the [official config docs](https://github.com/iv-org/invidious/blob/master/config/config.example.yml) for comprehensive details on how to configure / tweak your instance!
 [^2]: Gotcha!
diff --git a/docs/recipes/jellyfin.md b/docs/recipes/jellyfin.md
index bac40b5..67f2bb8 100644
--- a/docs/recipes/jellyfin.md
+++ b/docs/recipes/jellyfin.md
@@ -102,4 +102,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas
 [^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
 [^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.
 
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
diff --git a/docs/recipes/joplin-server.md b/docs/recipes/joplin-server.md
new file mode 100644
index 0000000..6af347d
--- /dev/null
+++ b/docs/recipes/joplin-server.md
@@ -0,0 +1,225 @@
+---
+title: Sync, share and publish your Joplin notes with joplin-server
+description: Joplin Server is the free, open-source sync server for the Joplin note-taking app, letting you sync, share and publish your notes from infrastructure you control.
+recipe: Joplin Server
+---
+
+# Joplin Server
+
+{% include 'try-in-elfhosted.md' %}
+
+Joplin Server is the free, open-source sync server for the [Joplin](https://joplinapp.org/) note-taking app. Running your own instance means your notes are synced, shared and published from infrastructure **you** control, rather than a third-party sync target such as Dropbox or OneDrive.
+
+![Joplin Screenshot](../images/joplin-server.png){ loading=lazy }
+
+## {{ page.meta.recipe }} Requirements
+
+--8<-- "recipe-standard-ingredients.md"
+
+## Preparation
+
+### Setup data locations
+
+We'll need several directories to bind-mount into our container, so create them in `/var/data/`:
+
+```bash
+mkdir -p /var/data/joplin-server/
+mkdir -p /var/data/runtime/joplin-server/db
+mkdir -p /var/data/config/joplin-server
+```
+
+### Prepare {{ page.meta.recipe }} environment
+
+Create ```/var/data/config/joplin-server/joplin-server.env```, and populate with the following variables:
+
+```bash
+SYMFONY__DATABASE__PASSWORD=password
+EB_CRON=enabled
+TZ='Etc/UTC'
+
+#SMTP - Populate these if you want email notifications
+#SYMFONY__MAILER__HOST=
+#SYMFONY__MAILER__USER=
+#SYMFONY__MAILER__PASSWORD=
+#SYMFONY__MAILER__FROM=
+
+# For mysql
+MYSQL_ROOT_PASSWORD=password
+```
+
+Create ```/var/data/config/joplin-server/joplin-server-db-backup.env```, and populate with the following, to setup the nightly database dump.
+
+!!! note
+    Running a daily database dump might be considered overkill, since joplin-server can be configured to backup its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.
+
+    Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?
+
+    No, me either :shrug:
+
+```bash
+# For database backup (keep 7 days daily backups)
+MYSQL_PWD=
+MYSQL_USER=root
+BACKUP_NUM_KEEP=7
+BACKUP_FREQUENCY=1d
+```
+
+### {{ page.meta.recipe }} Docker Swarm config
+
+Create a docker swarm config file in docker-compose syntax (v3), something like the example below:
+
+--8<-- "premix-cta.md"
+
+```yaml
+version: "3"
+
+services:
+  db:
+    image: mariadb:10.4
+    env_file: /var/data/config/joplin-server/joplin-server.env
+    networks:
+      - internal
+    volumes:
+      - /etc/localtime:/etc/localtime:ro
+      - /var/data/runtime/joplin-server/db:/var/lib/mysql
+
+  db-backup:
+    image: mariadb:10.4
+    env_file: /var/data/config/joplin-server/joplin-server-db-backup.env
+    volumes:
+      - /var/data/joplin-server/database-dump:/dump
+      - /etc/localtime:/etc/localtime:ro
+    entrypoint: |
+      bash -c 'bash -s <<EOF
+      trap "break;exit" SIGHUP SIGINT SIGTERM
+      sleep 2m
+      while /bin/true; do
+        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
+        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
+        sleep $$BACKUP_FREQUENCY
+      done
+      EOF'
+    networks:
+      - internal
+
+  app:
+    image: joplin-server/joplin-server
+    env_file: /var/data/config/joplin-server/joplin-server.env
+    networks:
+      - internal
+      - traefik_public
+    volumes:
+      - /etc/localtime:/etc/localtime:ro
+      - /var/data/:/var/data
+      - /var/data/joplin-server/backups:/app/backups
+      - /var/data/joplin-server/uploads:/app/uploads
+      - /var/data/joplin-server/sshkeys:/app/.ssh
+    deploy:
+      labels:
+        # traefik common
+        - traefik.enable=true
+        - traefik.docker.network=traefik_public
+
+        # traefikv1
+        - traefik.frontend.rule=Host:joplin-server.example.com
+        - traefik.port=80
+
+        # traefikv2
+        - "traefik.http.routers.joplin-server.rule=Host(`joplin-server.example.com`)"
+        - "traefik.http.services.joplin-server.loadbalancer.server.port=80"
+        - "traefik.enable=true"
+
+        # Remove if you wish to access the URL directly
+        - "traefik.http.routers.joplin-server.middlewares=forward-auth@file"
+
+networks:
+  traefik_public:
+    external: true
+  internal:
+    driver: overlay
+    ipam:
+      config:
+        - subnet: 172.16.36.0/24
+```
+
+--8<-- "reference-networks.md"
+
+## Serving
+
+### Launch joplin-server stack
+
+Launch the joplin-server stack by
running ```docker stack deploy joplin-server -c <path-to-docker-compose.yml>```
+
+Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":
+
+![joplin-server Login Screen](/images/joplin-server-setup-1.png){ loading=lazy }
+
+First thing you do, change your password, using the gear icon, and "Change Password" link:
+
+![joplin-server Login Screen](/images/joplin-server-setup-2.png){ loading=lazy }
+
+Have a read of the [joplin-server Docs](https://docs.joplin-server.org/docs/introduction.html) - they introduce the concept of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), and **policies** (_when is data backed up and how long is it kept_).
+
+At the very least, you want to setup a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to backup /var/data, **excluding** ```/var/data/runtime``` and ```/var/data/joplin-server/backup``` (_unless you **like** "backup-ception"_)
+
+### Copying your backup data offsite
+
+From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker swarm deployment.
+
+Here's a variation to the standard script, which I've employed:
+
+```bash
+#!/bin/bash
+
+REPOSITORY=/var/data/joplin-server/backups
+SERVER=
+SERVER_USER=joplin-server
+UPLOADS=/var/data/joplin-server/uploads
+TARGET=/srv/backup/joplin-server
+
+echo "Starting backup..."
+echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
+
+ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
+do
+  echo Backing up job $jobId
+  mkdir -p $TARGET/$jobId 2>/dev/null
+  rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
+done
+
+echo Backing up uploads
+rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads
+
+USED=`df -h . | awk 'NR==2 { print $3 }'`
+USE=`df -h . | awk 'NR==2 { print $5 }'`
+AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`
+
+echo "Backup finished successfully!"
+echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
+echo ""
+echo "**** INFO ****"
+echo "Used disk space: $USED ($USE)"
+echo "Available disk space: $AVAILABLE"
+echo ""
+```
+
+!!! note
+    You'll note that I don't use the script to create a mysql dump (_since joplin-server is running within a container anyway_), rather I just rely on the database dump which is made nightly into ```/var/data/joplin-server/database-dump/```
+
+### Restoring data
+
+Repeat after me: "**It's not a backup unless you've tested a restore**"
+
+!!! note
+    I had some difficulty making restoring work well in the webUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the webUI or from the filesystem directly.
+
+To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:
+
+![joplin-server Login Screen](/images/joplin-server-setup-3.png){ loading=lazy }
+
+This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client.
If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory. + +[^1]: If you wanted to expose the joplin-server UI directly, you could remove the traefik-forward-auth from the design. +[^2]: The original inclusion of joplin-server was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel! + +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/kanboard.md b/docs/recipes/kanboard.md index f433066..2d34b22 100644 --- a/docs/recipes/kanboard.md +++ b/docs/recipes/kanboard.md @@ -84,4 +84,4 @@ Log into your new instance at https://**YOUR-FQDN**. Default credentials are adm [^1]: The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin. [^2]: Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark). ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/kavita.md b/docs/recipes/kavita.md index 25f25e4..e5a6e7c 100644 --- a/docs/recipes/kavita.md +++ b/docs/recipes/kavita.md @@ -92,4 +92,4 @@ Log into your new instance at https://**YOUR-FQDN**. Since it's a fresh installa [^2]: There's an [active subreddit](https://www.reddit.com/r/KavitaManga/) for Kavita ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/keycloak/authenticate-against-openldap.md b/docs/recipes/keycloak/authenticate-against-openldap.md index a86673c..d169fe5 100644 --- a/docs/recipes/keycloak/authenticate-against-openldap.md +++ b/docs/recipes/keycloak/authenticate-against-openldap.md @@ -70,4 +70,4 @@ We've setup a new realm in Keycloak, and configured read-write federation to an [^1]: A much nicer experience IMO! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/keycloak/index.md b/docs/recipes/keycloak/index.md index 9f2d197..0b99b7f 100644 --- a/docs/recipes/keycloak/index.md +++ b/docs/recipes/keycloak/index.md @@ -181,6 +181,6 @@ Something didn't work? Try the following: 1. Confirm that Keycloak did, in fact, start, by looking at the state of the stack, with `docker stack ps keycloak --no-trunc` ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: For more geeky {--pain--}{++fun++}, try integrating Keycloak with [OpenLDAP][openldap] for an authentication backend! diff --git a/docs/recipes/keycloak/setup-oidc-provider.md b/docs/recipes/keycloak/setup-oidc-provider.md index 9f86b49..a79bda7 100644 --- a/docs/recipes/keycloak/setup-oidc-provider.md +++ b/docs/recipes/keycloak/setup-oidc-provider.md @@ -56,4 +56,4 @@ We've setup an OIDC client in Keycloak, which we can now use to protect vulnerab * [X] Client ID and Client Secret used to authenticate against Keycloak with OpenID Connect ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/komga.md b/docs/recipes/komga.md index 83d834c..829b1bd 100644 --- a/docs/recipes/komga.md +++ b/docs/recipes/komga.md @@ -87,4 +87,4 @@ If Komga scratches your particular itch, please join me in [sponsoring the devel [^1]: Since Komga doesn't need to communicate with any other services, we don't need a separate overlay network for it. Provided Traefik can reach Komga via the `traefik_public` overlay network, we've got all we need. 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/kubernetes/authentik.md b/docs/recipes/kubernetes/authentik.md new file mode 100644 index 0000000..bfd6d2d --- /dev/null +++ b/docs/recipes/kubernetes/authentik.md @@ -0,0 +1,185 @@ +--- +title: How to deploy Authentik on Kubernetes +description: Deploy Authentik on Kubernetes to provide SSO to your cluster and workloads +values_yaml_url: https://github.com/goauthentik/helm/blob/main/charts/authentik/values.yaml +helm_chart_version: 2023.10.x +helm_chart_name: authentik +helm_chart_repo_name: authentik +helm_chart_repo_url: https://charts.goauthentik.io/ +helmrelease_name: authentik +helmrelease_namespace: authentik +kustomization_name: authentik +slug: Authentik +status: new +github_repo: https://github.com/goauthentik/authentik +upstream: https://goauthentik.io +links: +- name: Discord + uri: https://goauthentik.io/discord +--- + +# Authentik on Kubernetes + +Authentik is an open-source Identity Provider focused on flexibility and versatility. It not only supports modern authentication standards (*like OIDC*), but includes "outposts" to provide support for less-modern protocols such as [LDAP][openldap] :t_rex:, or basic authentication proxies. + +![Authentik login](/images/authentik.png){ loading=lazy } + +See a comparison with other IDPs [here](https://goauthentik.io/#comparison). + +## {{ page.meta.slug }} requirements + +!!! summary "Ingredients" + + Already deployed: + + * [x] A [Kubernetes cluster](/kubernetes/cluster/) + * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped + * [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services + * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff + + Optional: + + * [ ] [External DNS](/kubernetes/external-dns/) to create an DNS entry the "flux" way + +{% include 'kubernetes-flux-namespace.md' %} +{% include 'kubernetes-flux-kustomization.md' %} +{% include 'kubernetes-flux-dnsendpoint.md' %} +{% include 'kubernetes-flux-helmrelease.md' %} + +## Configure Authentik Helm Chart + +The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary. + +!!! tip + Confusingly, the Authentik helm chart defaults to having the bundled redis and postgresql **disabled**, but the [Authentik Kubernetes install](https://goauthentik.io/docs/installation/kubernetes/) docs require that they be enabled. Take care to change the respective `enabled: false` values to `enabled: true` below. + +### Set bootstrap credentials + +By default, when you install the Authentik helm chart, you'll get to set your admin user's (`akadmin`) when you first login. You can pre-configure this password by setting the `AUTHENTIK_BOOTSTRAP_PASSWORD` env var as illustrated below. 
+
+If you're after a more hands-off implementation[^1], you can also pre-set a "bootstrap token", which can be used to interact with the Authentik API programmatically (*see example below*):
+
+```yaml hl_lines="2-3" title="Optionally pre-configure your bootstrap secrets"
+    env:
+      AUTHENTIK_BOOTSTRAP_PASSWORD: "iamusedbyhumanz"
+      AUTHENTIK_BOOTSTRAP_TOKEN: "iamusedbymachinez"
+```
+
+### Configure Redis for Authentik
+
+Authentik uses Redis as the broker for [Celery](https://docs.celeryq.dev/en/stable/) background tasks. The Authentik helm chart defaults to provisioning an 8Gi PVC for redis, which seems like overkill for a simple broker. You can tweak the size of the Redis PVC by setting:
+
+```yaml hl_lines="4" title="1Gi should be fine for redis"
+    redis:
+      master:
+        persistence:
+          size: 1Gi
+```
+
+### Configure PostgreSQL for Authentik
+
+Depending on your risk profile / exposure, you may want to set a secure PostgreSQL password, or you may be content to leave the default password blank.
+
+At the very least, you'll want to set the following:
+
+```yaml hl_lines="3 6" title="Set a secure Postgresql password"
+    authentik:
+      postgresql:
+        password: "Iamaverysecretpassword"
+
+    postgresql:
+      postgresqlPassword: "Iamaverysecretpassword"
+```
+
+As with Redis above, you may feel (*like I do*) that provisioning an 8Gi PVC for a database containing 1 user and a handful of app configs is overkill. You can adjust the size of the PostgreSQL PVC by setting:
+
+```yaml hl_lines="3" title="1Gi is fine for a small database"
+    postgresql:
+      persistence:
+        size: 1Gi
+```
+
+### Ingress
+
+Set up your ingress for the Authentik UI. If you plan to add outposts to proxy other un-authenticated endpoints later, this is where you'll add them:
+
+```yaml hl_lines="3 7" title="Configure your ingress"
+    ingress:
+      enabled: true
+      ingressClassName: "nginx" # (1)!
+      annotations: {}
+      labels: {}
+      hosts:
+        - host: authentik.example.com
+          paths:
+            - path: "/"
+              pathType: Prefix
+      tls: []
+```
+
+1. Either leave blank to accept the default ingressClassName, or set to whichever [ingress controller](/kubernetes/ingress/) you want to use.
+
+## Install {{ page.meta.slug }}!
+
+Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...
+
+```bash
+~ ❯ flux get kustomizations {{ page.meta.kustomization_name }}
+NAME        READY   MESSAGE                          REVISION       SUSPENDED
+{{ page.meta.kustomization_name }}   True    Applied revision: main/70da637   main/70da637   False
+~ ❯
+```
+
+The helmrelease should be reconciled...
+
+```bash
+~ ❯ flux get helmreleases -n {{ page.meta.helmrelease_namespace }} {{ page.meta.helmrelease_name }}
+NAME        READY   MESSAGE                            REVISION                                SUSPENDED
+{{ page.meta.helmrelease_name }}   True    Release reconciliation succeeded   v{{ page.meta.helm_chart_version }}    False
+~ ❯
+```
+
+And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:
+
+```bash
+~ ❯ k get pods -n authentik
+NAME                                READY   STATUS    RESTARTS        AGE
+authentik-redis-master-0            1/1     Running   1 (3d17h ago)   26d
+authentik-server-548c6d4d5f-ljqft   1/1     Running   1 (3d17h ago)   20d
+authentik-postgresql-0              1/1     Running   1 (3d17h ago)   26d
+authentik-worker-7bb8f55bcb-5jwrr   1/1     Running   0               23h
+~ ❯
+```
+
+Browse to the URL you configured in your ingress above, and confirm that the Authentik UI is displayed.
+
+## Create your admin user
+
+You may be a little confused re how to login for the first time.
If you didn't use a bootstrap password as above, you'll want to go to `https://<your-authentik-URL>/if/flow/initial-setup/`, and set an initial password for your `akadmin` user.
+
+Now store the `akadmin` password somewhere safely, and proceed to create your own user account (*you'll presumably want to use your own username and email address*).
+
+Navigate to **Admin Interface** --> **Directory** --> **Users**, and create your new user. Edit your user and manually set your password.
+
+Next, navigate to **Directory** --> **Groups**, and edit the **authentik Admins** group. Within the group, click the **Users** tab to add your new user to the **authentik Admins** group.
+
+Eureka! :tada:
+
+Your user is now an Authentik superuser. Confirm this by logging out as **akadmin**, and logging back in with your own credentials.
+
+## Summary
+
+What have we achieved? We've got Authentik running and accessible, we've created a superuser account, and we're ready to flex :muscle: the power of Authentik to deploy an OIDC provider for Kubernetes, or simply secure unprotected UIs with proxy outposts!
+
+!!! summary "Summary"
+    Created:
+
+    * [X] Authentik running and ready to "authentikate" :lock: !
+
+    Next:
+
+    * [ ] Configure Kubernetes for OIDC authentication, unlocking production readiness as well as the Kubernetes Dashboard in Weave GitOps UIs (*coming soon*)
+
+{% include 'recipe-footer.md' %}
+
+[^1]: I use the bootstrap token with an ansible playbook which provisions my users / apps using the Authentik API
\ No newline at end of file
diff --git a/docs/recipes/kubernetes/invidious.md b/docs/recipes/kubernetes/invidious.md
index 9445aa8..a05315e 100644
--- a/docs/recipes/kubernetes/invidious.md
+++ b/docs/recipes/kubernetes/invidious.md
@@ -29,7 +29,7 @@ Here's an example from my public instance (*yes, running on Kubernetes*):
    * [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][invidious]*)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-   * [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
+   * [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
    * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
    * [x] [External DNS](/kubernetes/external-dns/) to create an DNS entry
@@ -543,7 +543,7 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we
    * [X] We are free of the creepy tracking attached to YouTube videos!
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
 [^1]: There is also a 3rd option, using the Flux webhook receiver to trigger a reconcilliation - to be covered in a future recipe!
 [^2]: Gotcha!
diff --git a/docs/recipes/kubernetes/mastodon.md b/docs/recipes/kubernetes/mastodon.md
index b16ff9c..9e4202f 100644
--- a/docs/recipes/kubernetes/mastodon.md
+++ b/docs/recipes/kubernetes/mastodon.md
@@ -24,7 +24,7 @@ description: How to install your own Mastodon instance using Kubernetes
    * [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? 
Use the [Docker Swarm recipe instead][mastodon]*) * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped - * [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services + * [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff * [x] [External DNS](/kubernetes/external-dns/) to create an DNS entry @@ -271,6 +271,6 @@ What have we achieved? We now have a fully-swarmed Mastodon instance, ready to f * [X] Mastodon configured, running, and ready to toot! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: There is also a 3rd option, using the Flux webhook receiver to trigger a reconcilliation - to be covered in a future recipe! diff --git a/docs/recipes/kubernetes/matrix.md b/docs/recipes/kubernetes/matrix.md index ffd23fa..3edb62a 100644 --- a/docs/recipes/kubernetes/matrix.md +++ b/docs/recipes/kubernetes/matrix.md @@ -61,544 +61,3 @@ Success! root@matrix-synapse-5d7cf8579-zjk7c:/# ``` -YouTube is ubiquitious now. Almost every video I'm sent, takes me to YouTube. Worse, every YouTube video I watch feeds Google's profile about me, so shortly after enjoying the latest Marvel movie trailers, I find myself seeing related adverts on **unrelated** websites. - -Creepy :bug:! - -As the connection between the videos I watch and the adverts I see has become move obvious, I've become more discerning re which videos I choose to watch, since I don't necessarily **want** algorithmically-related videos popping up next time I load the YouTube app on my TV, or Marvel merchandise advertised to me on every second news site I visit. - -This is a PITA since it means I have to "self-censor" which links I'll even click on, knowing that once I *do* click the video link, it's forever associated with my Google account :facepalm: - -After playing around with [some of the available public instances](https://docs.invidious.io/instances/) for a while, today I finally deployed my own instance of [Invidious](https://invidious.io/) - an open source alternative front-end to YouTube. - -![Invidious Screenshot](/images/invidious.png){ loading=lazy } - -Here's an example from my public instance (*yes, running on Kubernetes*): - - - -## Invidious requirements - -!!! summary "Ingredients" - - Already deployed: - - * [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][invidious]*) - * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped - * [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services - * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff - * [x] [External DNS](/kubernetes/external-dns/) to create an DNS entry - - New: - - * [ ] Chosen DNS FQDN for your instance - -## Preparation - -### GitRepository - -The Invidious project doesn't currently publish a versioned helm chart - there's just a [helm chart stored in the repository](https://github.com/invidious/invidious/tree/main/chart) (*I plan to submit a PR to address this*). For now, we use a GitRepository instead of a HelmRepository as the source of a HelmRelease. 
- -```yaml title="/bootstrap/gitrepositories/gitepository-invidious.yaml" -apiVersion: source.toolkit.fluxcd.io/v1beta2 -kind: GitRepository -metadata: - name: invidious - namespace: flux-system -spec: - interval: 1h0s - ref: - branch: master - url: https://github.com/iv-org/invidious -``` - -### Namespace - -We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-invidious.yaml`: - -```yaml title="/bootstrap/namespaces/namespace-invidious.yaml" -apiVersion: v1 -kind: Namespace -metadata: - name: invidious -``` - -### Kustomization - -Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/invidious`. I create this example Kustomization in my flux repo: - -```yaml title="/bootstrap/kustomizations/kustomization-invidious.yaml" -apiVersion: kustomize.toolkit.fluxcd.io/v1 -kind: Kustomization -metadata: - name: invidious - namespace: flux-system -spec: - interval: 15m - path: invidious - prune: true # remove any elements later removed from the above path - timeout: 2m # if not set, this defaults to interval duration, which is 1h - sourceRef: - kind: GitRepository - name: flux-system - healthChecks: - - apiVersion: apps/v1 - kind: Deployment - name: invidious-invidious # (1)! - namespace: invidious - - apiVersion: apps/v1 - kind: StatefulSet - name: invidious-postgresql - namespace: invidious -``` - -1. No, that's not a typo, just another pecularity of the helm chart! - -### ConfigMap - -Now we're into the invidious-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/iv-org/invidious/blob/master/kubernetes/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo: - -```yaml title="invidious/configmap-invidious-helm-chart-value-overrides.yaml" -apiVersion: v1 -kind: ConfigMap -metadata: - name: invidious-helm-chart-value-overrides - namespace: invidious -data: - values.yaml: |- # (1)! - # -``` - -1. Paste in the contents of the upstream `values.yaml` here, intended 4 spaces, and then change the values you need as illustrated below. - -Values I change from the default are: - -```yaml -postgresql: -image: - tag: 14 -auth: - username: invidious - password: - database: invidious -primary: - initdb: - username: invidious - password: - scriptsConfigMap: invidious-postgresql-init - persistence: - size: 1Gi # (1)! - podAnnotations: # (2)! 
- backup.velero.io/backup-volumes: backup - pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -d $POSTGRES_DB -h 127.0.0.1 > /scratch/backup.sql"]' - pre.hook.backup.velero.io/timeout: 3m - post.hook.restore.velero.io/command: '["/bin/bash", "-c", "[ -f \"/scratch/backup.sql\" ] && PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -h 127.0.0.1 -d $POSTGRES_DB -f /scratch/backup.sql && rm -f /scratch/backup.sql;"]' - extraVolumes: - - name: backup - emptyDir: - sizeLimit: 1Gi - extraVolumeMounts: - - name: backup - mountPath: /scratch - resources: - requests: - cpu: "10m" - memory: 32Mi - -# Adapted from ../config/config.yml -config: -channel_threads: 1 -feed_threads: 1 -db: - user: invidious - password: - host: invidious-postgresql - port: 5432 - dbname: invidious -full_refresh: false -https_only: true -domain: in.fnky.nz # (3)! -external_port: 443 # (4)! -banner: ⚠️ Note - This public Invidious instance is sponsored ❤️ by Funky Penguin's Geek Cookbook. It's intended to support the published Docker Swarm recipes, but may be removed at any time without notice. # (5)! -default_user_preferences: # (6)! - quality: dash # (7)! auto-adapts or lets you choose > 720P -``` - -1. 1Gi is fine for the database for now -2. These annotations / extra Volumes / volumeMounts support automated backup using Velero -3. Invidious needs this to generate external links for sharing / embedding -4. Invidious needs this too, to generate external links for sharing / embedding -5. It's handy to tell people what's special about your instance -6. Check out the [official config docs](https://github.com/iv-org/invidious/blob/master/config/config.example.yml) for comprehensive details on how to configure / tweak your instance! -7. Default all users to DASH (*adaptive*) quality, rather than limiting to 720P (*the default*) - -### HelmRelease - -Finally, having set the scene above, we define the HelmRelease which will actually deploy the invidious into the cluster. I save this in my flux repo: - -```yaml title="/invidious/helmrelease-invidious.yaml" -apiVersion: helm.toolkit.fluxcd.io/v2beta1 -kind: HelmRelease -metadata: - name: invidious - namespace: invidious -spec: - chart: - spec: - chart: ./charts/invidious - sourceRef: - kind: GitRepository - name: invidious - namespace: flux-system - interval: 15m - timeout: 5m - releaseName: invidious - valuesFrom: - - kind: ConfigMap - name: invidious-helm-chart-value-overrides - valuesKey: values.yaml # (1)! -``` - -1. This is the default, but best to be explicit for clarity - -### Ingress / IngressRoute - -Oddly, the upstream chart doesn't include any Ingress resource. 
We have to manually create our Ingress as below (*note that it's also possible to use a Traefik IngressRoute directly*) - -```yaml title="/invidious/ingress-invidious.yaml" -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: invidious - namespace: invidious -spec: - ingressClassName: nginx - rules: - - host: in.fnky.nz - http: - paths: - - backend: - service: - name: invidious - port: - number: 3000 - path: / - pathType: ImplementationSpecific -``` - -An alternative implementation using an `IngressRoute` could look like this: - -```yaml title="/invidious/ingressroute-invidious.yaml" -apiVersion: traefik.containo.us/v1alpha1 -kind: IngressRoute -metadata: - name: in.fnky.nz - namespace: invidious -spec: - routes: - - match: Host(`in.fnky.nz`) - kind: Rule - services: - - name: invidious-invidious - kind: Service - port: 3000 -``` - -### Create postgres-init ConfigMap - -Another pecularity of the Invidious helm chart is that you have to create your own ConfigMap containing the PostgreSQL data structure. I suspect that the helm chart has received minimal attention in the past 3+ years, and this could probably easily be turned into a job as a pre-install helm hook (*perhaps a future PR?*). - -In the meantime, you'll need to create ConfigMap manually per the [repo instructions](https://github.com/iv-org/invidious/tree/master/kubernetes#installing-helm-chart), or cheat, and copy the one I paste below: - -??? example "Configmap (click to expand)" - ```yaml title="/invidious/configmap-invidious-postgresql-init.yaml" - apiVersion: v1 - kind: ConfigMap - metadata: - name: invidious-postgresql-init - namespace: invidious - data: - annotations.sql: | - -- Table: public.annotations - - -- DROP TABLE public.annotations; - - CREATE TABLE IF NOT EXISTS public.annotations - ( - id text NOT NULL, - annotations xml, - CONSTRAINT annotations_id_key UNIQUE (id) - ); - - GRANT ALL ON TABLE public.annotations TO current_user; - channel_videos.sql: |+ - -- Table: public.channel_videos - - -- DROP TABLE public.channel_videos; - - CREATE TABLE IF NOT EXISTS public.channel_videos - ( - id text NOT NULL, - title text, - published timestamp with time zone, - updated timestamp with time zone, - ucid text, - author text, - length_seconds integer, - live_now boolean, - premiere_timestamp timestamp with time zone, - views bigint, - CONSTRAINT channel_videos_id_key UNIQUE (id) - ); - - GRANT ALL ON TABLE public.channel_videos TO current_user; - - -- Index: public.channel_videos_ucid_idx - - -- DROP INDEX public.channel_videos_ucid_idx; - - CREATE INDEX IF NOT EXISTS channel_videos_ucid_idx - ON public.channel_videos - USING btree - (ucid COLLATE pg_catalog."default"); - - channels.sql: |+ - -- Table: public.channels - - -- DROP TABLE public.channels; - - CREATE TABLE IF NOT EXISTS public.channels - ( - id text NOT NULL, - author text, - updated timestamp with time zone, - deleted boolean, - subscribed timestamp with time zone, - CONSTRAINT channels_id_key UNIQUE (id) - ); - - GRANT ALL ON TABLE public.channels TO current_user; - - -- Index: public.channels_id_idx - - -- DROP INDEX public.channels_id_idx; - - CREATE INDEX IF NOT EXISTS channels_id_idx - ON public.channels - USING btree - (id COLLATE pg_catalog."default"); - - nonces.sql: |+ - -- Table: public.nonces - - -- DROP TABLE public.nonces; - - CREATE TABLE IF NOT EXISTS public.nonces - ( - nonce text, - expire timestamp with time zone, - CONSTRAINT nonces_id_key UNIQUE (nonce) - ); - - GRANT ALL ON TABLE public.nonces TO current_user; - - -- Index: 
public.nonces_nonce_idx - - -- DROP INDEX public.nonces_nonce_idx; - - CREATE INDEX IF NOT EXISTS nonces_nonce_idx - ON public.nonces - USING btree - (nonce COLLATE pg_catalog."default"); - - playlist_videos.sql: | - -- Table: public.playlist_videos - - -- DROP TABLE public.playlist_videos; - - CREATE TABLE IF NOT EXISTS public.playlist_videos - ( - title text, - id text, - author text, - ucid text, - length_seconds integer, - published timestamptz, - plid text references playlists(id), - index int8, - live_now boolean, - PRIMARY KEY (index,plid) - ); - - GRANT ALL ON TABLE public.playlist_videos TO current_user; - playlists.sql: | - -- Type: public.privacy - - -- DROP TYPE public.privacy; - - CREATE TYPE public.privacy AS ENUM - ( - 'Public', - 'Unlisted', - 'Private' - ); - - -- Table: public.playlists - - -- DROP TABLE public.playlists; - - CREATE TABLE IF NOT EXISTS public.playlists - ( - title text, - id text primary key, - author text, - description text, - video_count integer, - created timestamptz, - updated timestamptz, - privacy privacy, - index int8[] - ); - - GRANT ALL ON public.playlists TO current_user; - session_ids.sql: |+ - -- Table: public.session_ids - - -- DROP TABLE public.session_ids; - - CREATE TABLE IF NOT EXISTS public.session_ids - ( - id text NOT NULL, - email text, - issued timestamp with time zone, - CONSTRAINT session_ids_pkey PRIMARY KEY (id) - ); - - GRANT ALL ON TABLE public.session_ids TO current_user; - - -- Index: public.session_ids_id_idx - - -- DROP INDEX public.session_ids_id_idx; - - CREATE INDEX IF NOT EXISTS session_ids_id_idx - ON public.session_ids - USING btree - (id COLLATE pg_catalog."default"); - - users.sql: |+ - -- Table: public.users - - -- DROP TABLE public.users; - - CREATE TABLE IF NOT EXISTS public.users - ( - updated timestamp with time zone, - notifications text[], - subscriptions text[], - email text NOT NULL, - preferences text, - password text, - token text, - watched text[], - feed_needs_update boolean, - CONSTRAINT users_email_key UNIQUE (email) - ); - - GRANT ALL ON TABLE public.users TO current_user; - - -- Index: public.email_unique_idx - - -- DROP INDEX public.email_unique_idx; - - CREATE UNIQUE INDEX IF NOT EXISTS email_unique_idx - ON public.users - USING btree - (lower(email) COLLATE pg_catalog."default"); - - videos.sql: |+ - -- Table: public.videos - - -- DROP TABLE public.videos; - - CREATE UNLOGGED TABLE IF NOT EXISTS public.videos - ( - id text NOT NULL, - info text, - updated timestamp with time zone, - CONSTRAINT videos_pkey PRIMARY KEY (id) - ); - - GRANT ALL ON TABLE public.videos TO current_user; - - -- Index: public.id_idx - - -- DROP INDEX public.id_idx; - - CREATE UNIQUE INDEX IF NOT EXISTS id_idx - ON public.videos - USING btree - (id COLLATE pg_catalog."default"); - ``` - -## :octicons-video-16: Install Invidious! - -Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconcilliation[^1] using `flux reconcile source git flux-system`. You should see the kustomization appear... - -```bash -~ ❯ flux get kustomizations | grep invidious -invidious main/d34779f False True Applied revision: main/d34779f -~ ❯ -``` - -The helmrelease should be reconciled... 
- -```bash -~ ❯ flux get helmreleases -n invidious -NAME REVISION SUSPENDED READY MESSAGE -invidious 1.1.1 False True Release reconciliation succeeded -~ ❯ -``` - -And you should have happy Invidious pods: - -```bash -~ ❯ k get pods -n invidious -NAME READY STATUS RESTARTS AGE -invidious-invidious-64f4fb8d75-kr4tw 1/1 Running 0 77m -invidious-postgresql-0 1/1 Running 0 11h -~ ❯ -``` - -... and finally check that the ingress was created as desired: - -```bash -~ ❯ k get ingress -n invidious -NAME CLASS HOSTS ADDRESS PORTS AGE -invidious in.fnky.nz 80, 443 19h -~ ❯ -``` - -Or in the case of an ingressRoute: - -```bash -~ ❯ k get ingressroute -n invidious -NAME AGE -in.fnky.nz 19h -``` - -Now hit the URL you defined in your config, you'll see the basic search screen. Enter a search phrase (*"marvel movie trailer"*) to see the YouTube video results, or paste in a YouTube URL such as `https://www.youtube.com/watch?v=bxqLsrlakK8`, change the domain name from `www.youtube.com` to your instance's FQDN, and watch the fun [^2]! - -You can also install a range of browser add-ons to automatically redirect you from youtube.com to your Invidious instance. I'm testing "[libredirect](https://addons.mozilla.org/en-US/firefox/addon/libredirect/)" currently, which seems to work as advertised! - -## Summary - -What have we achieved? We have an HTTPS-protected private YouTube frontend - we can now watch whatever videos we please, without feeding Google's profile on us. We can also subscribe to channels without requiring a Google account, and we can share individual videos directly via our instance (*by generating links*). - -!!! summary "Summary" - Created: - - * [X] We are free of the creepy tracking attached to YouTube videos! - ---8<-- "recipe-footer.md" - -[^2]: Gotcha! diff --git a/docs/recipes/kubernetes/template.md b/docs/recipes/kubernetes/template.md index 18b44a7..71c7b7e 100644 --- a/docs/recipes/kubernetes/template.md +++ b/docs/recipes/kubernetes/template.md @@ -69,7 +69,7 @@ metadata: Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/invidious`. I create this example Kustomization in my flux repo: ```yaml title="/bootstrap/kustomizations/kustomization-invidious.yaml" -apiVersion: kustomize.toolkit.fluxcd.io/v1beta1 +apiVersion: kustomize.toolkit.fluxcd.io/v1 kind: Kustomization metadata: name: invidious @@ -82,7 +82,6 @@ spec: sourceRef: kind: GitRepository name: flux-system - validation: server healthChecks: - apiVersion: apps/v1 kind: Deployment @@ -541,6 +540,6 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we * [X] We are free of the creepy tracking attached to YouTube videos! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: This is how a footnote works diff --git a/docs/recipes/linx.md b/docs/recipes/linx.md index 206dca2..dba4c8e 100644 --- a/docs/recipes/linx.md +++ b/docs/recipes/linx.md @@ -93,4 +93,4 @@ Launch the Linx stack by running ```docker stack deploy linx -c Integration** page. 
---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/minio.md b/docs/recipes/minio.md index dab81d2..55809b2 100644 --- a/docs/recipes/minio.md +++ b/docs/recipes/minio.md @@ -208,4 +208,4 @@ goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode= [^2]: Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets [^3]: Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/munin.md b/docs/recipes/munin.md index 309e725..882d1aa 100644 --- a/docs/recipes/munin.md +++ b/docs/recipes/munin.md @@ -124,4 +124,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user and password pass [^1]: If you wanted to expose the Munin UI directly, you could remove the traefik-forward-auth from the design. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/nextcloud.md b/docs/recipes/nextcloud.md index f8cf8e3..e231e80 100644 --- a/docs/recipes/nextcloud.md +++ b/docs/recipes/nextcloud.md @@ -190,4 +190,4 @@ Log into your new instance at https://**YOUR-FQDN**, and setup your admin userna [^1]: Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912). [^2]: If you want better performance when using Photos in Nextcloud, have a look at [this detailed write-up](https://rayagainstthemachine.net/linux%20administration/nextcloud-photos/)! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/nightscout.md b/docs/recipes/nightscout.md index 1aed864..bc5faaf 100644 --- a/docs/recipes/nightscout.md +++ b/docs/recipes/nightscout.md @@ -174,4 +174,4 @@ Launch the nightscout stack by running ```docker stack deploy nightscout -c index* and do a complete rescan (it will take a while, depending on your collection size) ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/phpipam.md b/docs/recipes/phpipam.md index a4c07f5..be158ac 100644 --- a/docs/recipes/phpipam.md +++ b/docs/recipes/phpipam.md @@ -163,4 +163,4 @@ Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen pr [^1]: If you wanted to expose the phpIPAM UI directly, you could remove the `traefik.http.routers.api.middlewares` label from the app container :thumbsup: ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/pixelfed.md b/docs/recipes/pixelfed.md index 9b84999..644c9ae 100644 --- a/docs/recipes/pixelfed.md +++ b/docs/recipes/pixelfed.md @@ -470,6 +470,6 @@ What have we achieved? Even though we had to jump through some extra hoops to se * [X] Pixelfed configured, running, and ready for selfies! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} [^1]: There's an iOS mobile app [currently in beta](https://testflight.apple.com/join/5HpHJD5l) diff --git a/docs/recipes/plex.md b/docs/recipes/plex.md index 99e705a..88e474a 100644 --- a/docs/recipes/plex.md +++ b/docs/recipes/plex.md @@ -106,4 +106,4 @@ Log into your new instance at https://**YOUR-FQDN** (You'll need to setup a plex [^1]: Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. 
The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via [^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media! ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/portainer.md b/docs/recipes/portainer.md index 317730b..3a5e63c 100644 --- a/docs/recipes/portainer.md +++ b/docs/recipes/portainer.md @@ -122,4 +122,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be prompted to set y [^1]: There are [some schenanigans](https://www.reddit.com/r/docker/comments/au9wnu/linuxserverio_templates_for_portainer/) you can do to install LinuxServer.io templates in Portainer. Don't go crying to them for support though! :crying_cat_face: ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/privatebin.md b/docs/recipes/privatebin.md index fb8b5eb..415593a 100644 --- a/docs/recipes/privatebin.md +++ b/docs/recipes/privatebin.md @@ -75,4 +75,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa [^1]: The [PrivateBin repo](https://github.com/PrivateBin/PrivateBin/blob/master/INSTALL.md) explains how to tweak configuration options, or to use a database instead of file storage, if your volume justifies it :) [^2]: The inclusion of Privatebin was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Unfortunately on the 22nd August 2020 Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/realms.md b/docs/recipes/realms.md index f3ae671..169b332 100644 --- a/docs/recipes/realms.md +++ b/docs/recipes/realms.md @@ -100,4 +100,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate against oauth_ [^2]: The inclusion of Realms was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Unfortunately on the 22nd August 2020 Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed. ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/restic.md b/docs/recipes/restic.md index 8d316d5..20b131e 100644 --- a/docs/recipes/restic.md +++ b/docs/recipes/restic.md @@ -205,4 +205,4 @@ root@raphael:~# [^2]: A recent benchmark of various backup tools, including Restic, can be found [here](https://forum.duplicati.com/t/big-comparison-borg-vs-restic-vs-arq-5-vs-duplicacy-vs-duplicati/9952). [^3]: A paid-for UI for Restic can be found [here](https://forum.restic.net/t/web-ui-for-restic/667/26). ---8<-- "recipe-footer.md" +{% include 'recipe-footer.md' %} diff --git a/docs/recipes/rss-bridge.md b/docs/recipes/rss-bridge.md index 3d4f516..a66d11c 100644 --- a/docs/recipes/rss-bridge.md +++ b/docs/recipes/rss-bridge.md @@ -68,4 +68,4 @@ Launch the RSS Bridge stack by running ```docker stack deploy rssbridge -c -///Footnotes Go Here/// - -Updated - -## Tip your waiter (sponsor) 👏 - -Did you receive excellent service? Want to make your waiter happy? 
(_..and support development of current and future recipes!_) Sponsor me on [Github][github_sponsor] / [Patreon][patreon], or see the [support](/support/) page for more (_free or paid)_ ways to say thank you! 👏 - -## Flirt with waiter (subscribe) 💌 - -Want to know now when this recipe gets updated, or when future recipes are added? Subscribe to the [RSS feed](https://mastodon.social/@geekcookbook_changes.atom), or leave your email address below, and we'll keep you updated. - - -

    - -## Your comments? 💬 - -[patreon]: https://www.patreon.com/bePatron?u=6982506 -[github_sponsor]: https://github.com/sponsors/funkypenguin