mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-12 17:26:19 +00:00
Add authentik, tidy up recipe-footer
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
_includes/kubernetes-flux-dnsendpoint.md (new file, 21 lines)
@@ -0,0 +1,21 @@
### {{ page.meta.slug }} DNSEndpoint

If, like me, you prefer to create your DNS records the "GitOps way" using [ExternalDNS](/kubernetes/external-dns/), create something like the following example to create a DNS entry for your Authentik ingress:

```yaml title="/{{ page.meta.helmrelease_namespace }}/dnsendpoint-{{ page.meta.helmrelease_name }}.example.com.yaml"
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: "{{ page.meta.helmrelease_name }}.example.com"
  namespace: {{ page.meta.helmrelease_namespace }}
spec:
  endpoints:
    - dnsName: "{{ page.meta.helmrelease_name }}.example.com"
      recordTTL: 180
      recordType: CNAME
      targets:
        - "traefik-ingress.example.com"
```
!!! tip
    Rather than creating individual A records for each host, I prefer to create one A record (*`traefik-ingress.example.com` in the example above*), and then create individual CNAME records pointing to that A record.
@@ -1,4 +1,4 @@
### HelmRelease
### {{ page.meta.slug }} HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy {{ page.meta.helmrelease_name }} into the cluster. We start with a basic HelmRelease YAML, like this example:
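The example itself is mostly cut off by the diff context (the next hunk starts at line 23 of the include); as a hedged sketch only, a Flux HelmRelease of this shape typically begins like this - the file path, chart name and version are illustrative placeholders rather than the author's exact file:

```yaml title="/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: {{ page.meta.helmrelease_name }}
  namespace: {{ page.meta.helmrelease_namespace }}
spec:
  chart:
    spec:
      chart: {{ page.meta.helmrelease_name }} # illustrative - the upstream chart name
      version: 1.2.x # (1)
      sourceRef:
        kind: HelmRepository
        name: {{ page.meta.helm_chart_repo_name }}
        namespace: flux-system
  interval: 15m
  timeout: 10m
```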
@@ -23,10 +23,10 @@ spec:
  values: # paste contents of upstream values.yaml below, indented 4 spaces (2)
```

1. I like to set this to the semver minor version of the upstream chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
1. I like to set this to the semver minor version of the {{ page.meta.slug }} current helm chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
2. Paste the full contents of the upstream [values.yaml]({{ page.meta.values_yaml_url }}) here, indented 4 spaces under the `values:` key

If we deploy this helmrelease as-is, we'll inherit every default from the upstream chart. That's probably hardly ever what we want to do, so my preference is to take the entire contents of the helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and to paste these (*indented*), under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, to make future chart upgrades simpler.
If we deploy this helmrelease as-is, we'll inherit every default from the upstream {{ page.meta.slug }} helm chart. That's probably hardly ever what we want to do, so my preference is to take the entire contents of the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and to paste these (*indented*), under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, to make future chart upgrades simpler.

--8<-- "kubernetes-why-not-full-values-in-configmap.md"
@@ -1,6 +1,6 @@
### HelmRepository
### {{ page.meta.slug }} HelmRepository

We're going to install a helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):
We're going to install the {{ page.slug }} helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):

```yaml title="/bootstrap/helmrepositories/helmrepository-{{ page.meta.helm_chart_repo_name }}.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta1
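# The hunk is cut off here by the diff context. As a hedged sketch only (not the
# author's exact file), the remainder of a Flux HelmRepository typically looks like:
kind: HelmRepository
metadata:
  name: {{ page.meta.helm_chart_repo_name }}
  namespace: flux-system
spec:
  interval: 15m
  url: {{ page.meta.helm_chart_repo_url }}
```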
@@ -1,4 +1,4 @@
### Kustomization
### {{ page.meta.slug }} Kustomization

Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/{{ page.meta.helmrelease_namespace }}/`. I create this example Kustomization in my flux repo:
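The Kustomization itself sits outside this hunk's context; modelled on the cert-manager example later in this commit, a minimal sketch (field values assumed, not the author's exact file) looks like this:

```yaml title="/bootstrap/kustomizations/kustomization-{{ page.meta.kustomization_name }}.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: {{ page.meta.kustomization_name }}
  namespace: flux-system
spec:
  interval: 15m
  path: ./{{ page.meta.helmrelease_namespace }}
  prune: true # remove any resources which later disappear from the repo
  sourceRef:
    kind: GitRepository
    name: flux-system
```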
@@ -1,6 +1,6 @@
## Preparation

### Namespace
### {{ page.meta.slug }} Namespace

We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml`:
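The namespace YAML itself falls outside this hunk's context; it usually amounts to nothing more than this minimal sketch (following the path named above):

```yaml title="/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: {{ page.meta.helmrelease_namespace }}
```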
@@ -2,6 +2,17 @@
///Footnotes Go Here///

{% if page.meta.upstream %}
### {{ page.meta.slug }} resources

* [{{ page.meta.slug }} (official site)]({{ page.meta.upstream }})
{% endif %}
{% if page.meta.links %}
{% for link in page.meta.links %}
* [{{ page.meta.slug }} {{ link.name }}]({{ link.uri }})
{% endfor %}
{% endif %}

### Tip your waiter (sponsor) 👏

Did you receive excellent service? Want to compliment the chef? (_..and support development of current and future recipes!_) Sponsor me on [Github][github_sponsor] / [Ko-Fi][kofi] / [Patreon][patreon], or see the [contribute](/community/contribute/) page for more (_free or paid_) ways to say thank you! 👏
@@ -0,0 +1,67 @@
---
date: 2023-06-09
categories:
  - note
tags:
  - elfhosted
title: Baby steps towards ElfHosted
description: Every journey has a beginning. This is the beginning of the ElfHosted journey
draft: true
---

# Securing the Hetzner environment

Before building out our Kubernetes cluster, I wanted to secure the environment a little. On Hetzner, each server is assigned a public IP from a huge pool, and is directly accessible over the internet. This provides quick access for administration, but before building out our controlplane, I wanted to lock down access.

## Requirements

* [x] Kubernetes worker/controlplane nodes are privately addressed
* [x] Control plane (API) will be accessible only internally
* [x] Nodes can be administered directly on their private address range

## The bastion VM

I created a small cloud "ampere" VM using Hetzner's cloud console. These cloud VMs are provisioned separately from dedicated servers, but it's possible to interconnect them with dedicated servers using vSwitches/subnets (basically VLANs).

I needed a "bastion" host - a small node (probably a VM), which I could secure and then use for further ingress into my infrastructure.

## Connecting the bastion VM to the dedicated servers

I followed these guides:

* https://tailscale.com/kb/1150/cloud-hetzner/
* https://tailscale.com/kb/1077/secure-server-ubuntu-18-04/
* https://docs.hetzner.com/cloud/networks/connect-dedi-vswitch

Advertise the private subnet over Tailscale from the bastion:

```bash
tailscale up --advertise-routes 10.0.42.0/24
```
Enable IP forwarding (sysctl edit), so the bastion can route traffic for the advertised subnet:
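A minimal sketch, assuming the standard Tailscale subnet-router setup on Ubuntu (the drop-in file name is illustrative):

```bash
# Persistently enable IPv4/IPv6 forwarding, then apply it
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```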
```bash
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic through eth0 - Change to match your out-interface
-A POSTROUTING -s <your tailscale ip> -j MASQUERADE

# don't delete the 'COMMIT' line or these nat table rules won't
# be processed
COMMIT
```

hetzner_cloud_console_subnet_routes.png

hetzner_vswitch_setup.png

## Secure hosts

* [ ] Create last-resort root password
* [ ] Setup non-root sudo account (ansiblize this?)
docs/blog/posts/notes/elfhosted/setup-k3s.md (new file, 151 lines)
@@ -0,0 +1,151 @@
---
date: 2023-06-11
categories:
  - note
tags:
  - elfhosted
title: Kubernetes on Hetzner dedicated server
description: How to set up and secure a bare-metal Kubernetes infrastructure on Hetzner dedicated servers
draft: true
---
# Kubernetes (K3s) on Hetzner

In this post, we continue our adventure setting up an app hosting platform running on Kubernetes.

--8<-- "blog-series-elfhosted.md"

My two physical servers were "delivered" (to my inbox), along with instructions re SSHing to the "rescueimage" environment, which looks like this:

<!-- more -->

--8<-- "what-is-elfhosted.md"

## Secure nodes

Per the K3s docs, there are some local firewall requirements for K3s server/worker nodes:

https://docs.k3s.io/installation/requirements#inbound-rules-for-k3s-server-nodes
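A hedged sketch of what those inbound rules can look like with `ufw`, assuming the private node subnet from the previous post (`10.0.42.0/24`) and the `wireguard-native` flannel backend used below - the linked K3s requirements page remains the authoritative list:

```bash
# Kubernetes API server (agents -> servers)
ufw allow proto tcp from 10.0.42.0/24 to any port 6443
# Embedded etcd (server <-> server, HA clusters only)
ufw allow proto tcp from 10.0.42.0/24 to any port 2379:2380
# Kubelet metrics (all nodes)
ufw allow proto tcp from 10.0.42.0/24 to any port 10250
# Flannel wireguard-native backend (all nodes)
ufw allow proto udp from 10.0.42.0/24 to any port 51820
```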
It's aliiive!

```
root@fairy01 ~ # kubectl get nodes
NAME      STATUS   ROLES                       AGE   VERSION
elf01     Ready    <none>                      15s   v1.26.5+k3s1
fairy01   Ready    control-plane,etcd,master   96s   v1.26.5+k3s1
root@fairy01 ~ #
```

Now install flux, according to this documented bootstrap process...

https://metallb.org/configuration/k3s/

Prepare for Longhorn's [NFS shenanigans](https://longhorn.io/docs/1.4.2/deploy/install/#installing-nfsv4-client):

```bash
apt-get -y install nfs-common tuned
```

Performance mode!

`tuned-adm profile throughput-performance`

Taint the master(s):

```bash
kubectl taint node fairy01 node-role.kubernetes.io/control-plane=true:NoSchedule
```

Increase max pods:

https://stackoverflow.com/questions/65894616/how-do-you-increase-maximum-pods-per-node-in-k3s

https://gist.github.com/rosskirkpat/57aa392a4b44cca3d48dfe58b5716954

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh -
```

Create secondary masters:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --disable traefik --disable servicelb --flannel-backend=wireguard-native --flannel-iface=enp0s31f6.4000 --kube-controller-manager-arg=node-cidr-mask-size=22 --kubelet-arg=max-pods=500 --node-taint node-role.kubernetes.io/control-plane --prefer-bundled-bin" sh -
```

```bash
mkdir -p /etc/rancher/k3s/
cat << EOF >> /etc/rancher/k3s/kubelet-server.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
EOF
```

... and on the worker:

Ensure that `/etc/rancher/k3s` exists, to hold our kubelet custom configuration file:

```bash
mkdir -p /etc/rancher/k3s/
cat << EOF >> /etc/rancher/k3s/kubelet-server.config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
EOF
```

Get [token](https://docs.k3s.io/cli/token) from `/var/lib/rancher/k3s/server/token` on the server, and prepare the environment like this:

```bash
export K3S_TOKEN=<token from master>
export K3S_URL=https://<ip of master>:6443
```

Now join the worker using:

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --flannel-iface=eno1.4000 --kubelet-arg=config=/etc/rancher/k3s/kubelet-server.config --prefer-bundled-bin" sh -
```

```bash
flux bootstrap github \
  --owner=geek-cookbook \
  --repository=geek-cookbook/elfhosted-flux \
  --path bootstrap
```

```
root@fairy01:~# kubectl -n sealed-secrets create secret tls elfhosted-expires-june-2033 \
  --cert=mytls.crt --key=mytls.key
secret/elfhosted-expires-june-2033 created
root@fairy01:~# kubectl kubectl -n sealed-secrets label secret^C
root@fairy01:~# kubectl -n sealed-secrets label secret elfhosted-expires-june-2033 sealedsecrets.bitnami.com/sealed-secrets-key=active
secret/elfhosted-expires-june-2033 labeled
root@fairy01:~# kubectl rollout restart -n sealed-secrets deployment sealed-secrets
deployment.apps/sealed-secrets restarted
```

Increase inotify watchers (for Jellyfin):

```bash
echo fs.inotify.max_user_watches=2097152 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
echo 512 > /proc/sys/fs/inotify/max_user_instances
```

On the dwarves (storage nodes):

```bash
k taint node dwarf01.elfhosted.com node-role.elfhosted.com/node=storage:NoSchedule
```
@@ -274,4 +274,4 @@ What have we achieved? By adding a simple label to any service, we can secure an

[^1]: The initial inclusion of Authelia was due to the efforts of @bencey in Discord (Thanks Ben!)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -94,4 +94,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast

[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -180,4 +180,4 @@ What have we achieved?

* [X] [Docker swarm cluster](/docker-swarm/design/)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -23,7 +23,7 @@ You too, action-geek, can save the day, by...

Ready to enter the matrix? Jump in on one of the links above, or start reading the [design](/docker-swarm/design/)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: This was an [iconic movie](https://www.imdb.com/title/tt0111257/). It even won 2 Oscars! (*but not for the acting*)
[^2]: There are significant advantages to using Docker Swarm, even on just a single node.

@@ -88,4 +88,4 @@ What have we achieved?

[^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
[^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -77,4 +77,4 @@ After completing the above, you should have:

* At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -110,4 +110,4 @@ Then restart docker itself, by running `systemctl restart docker`

[^1]: Note the extra comma required after "false" above

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -227,4 +227,4 @@ Here's a screencast of the playbook in action. I sped up the boring parts, it ac

[patreon]: <https://www.patreon.com/bePatron?u=6982506>
[github_sponsor]: <https://github.com/sponsors/funkypenguin>

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -172,4 +172,4 @@ After completing the above, you should have:

1. Migration of shared storage from GlusterFS to Ceph
2. Correct the fact that volumes don't automount on boot

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -203,4 +203,4 @@ What have we achieved? By adding an additional label to any service, we can secu

[^1]: You can remove the `whoami` container once you know Traefik Forward Auth is working properly

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -133,4 +133,4 @@ What have we achieved? By adding an additional three simple labels to any servic

[^1]: Be sure to populate `WHITELIST` in `traefik-forward-auth.env`, else you'll happily be granting **any** authenticated Google account access to your services!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -52,6 +52,6 @@ Traefik Forward Auth needs to authenticate an incoming user against a provider.

* [Authenticate Traefik Forward Auth against a whitelist of Google accounts][tfa-google]
* [Authenticate Traefik Forward Auth against a self-hosted Keycloak instance][tfa-keycloak] with an optional [OpenLDAP backend][openldap]

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Authhost mode is specifically handy for Google authentication, since Google doesn't permit wildcard redirect_uris, like [Keycloak][keycloak] does.

@@ -100,4 +100,4 @@ What have we achieved? By adding an additional three simple labels to any servic

[KeyCloak][keycloak] is the "big daddy" of self-hosted authentication platforms - it has a beautiful GUI, and a very advanced and mature featureset. Like Authelia, KeyCloak can [use an LDAP server](/recipes/keycloak/authenticate-against-openldap/) as a backend, but _unlike_ Authelia, KeyCloak allows for 2-way sync between that LDAP backend, meaning KeyCloak can be used to _create_ and _update_ the LDAP entries (*Authelia's is just a one-way LDAP lookup - you'll need another tool to actually administer your LDAP database*).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -250,4 +250,4 @@ You should now be able to access[^1] your traefik instance on `https://traefik.<

[^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/docker-swarm/traefik-forward-auth/)!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
docs/images/authentik.png (new binary file, 15 KiB; not shown)

docs/images/joplin-server.png (new binary file, 4.3 MiB; not shown)
@@ -68,4 +68,4 @@ What have we achieved? We've got snapshot-controller running, and ready to manag

* [ ] Configure [Velero](/kubernetes/backup/velero/) with a VolumeSnapshotLocation, so that volume snapshots can be made as part of a BackupSchedule!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -45,4 +45,4 @@ What have we achieved? We now have the snapshot validation admission webhook run

* [ ] Deploy [snapshot-controller](/kubernetes/backup/csi-snapshots/snapshot-controller/) itself

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -17,4 +17,4 @@ For your backup needs, I present, Velero, by VMWare:

* [Velero](/kubernetes/backup/velero/)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -11,6 +11,8 @@ helmrelease_namespace: velero
kustomization_name: velero
slug: Velero
status: new
upstream: https://velero.io/
github_repo: https://github.com/vmware-tanzu/velero
---

# Velero

@@ -326,4 +328,4 @@ What have we achieved? We've got scheduled backups running, and we've successful

[^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group
[^2]: But not the rook-ceph cluster. If that dies, the snapshots are toast :toast: too!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -76,4 +76,4 @@ That's it. You have a beautiful new kubernetes cluster ready for some action!

[^1]: Do you live in the CLI? Install the kubectl autocompletion for [bash or zsh](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) to make your life much easier!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -62,4 +62,4 @@ You'll learn more about how to care for and feed your cluster if you build it yo

Go with a self-hosted cluster if you want to learn more, you'd rather spend time than money, or you've already got significant investment in local infrastructure and technical skillz.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -156,4 +156,4 @@ Cuddle your beautiful new cluster by running `kubectl cluster-info` [^1] - if th

[^2]: Looking for your k3s logs? Under Ubuntu LTS, run `journalctl -u k3s` to show your logs
[^3]: k3s is not the only "lightweight kubernetes" game in town. Minikube (*virtualization-based*) and microk8s (*possibly better for Ubuntu users since it's installed in a "snap" - haha*) are also popular options. One day I'll write a "microk8s vs k3s" review, but it doesn't really matter for our cluster operations - as I understand it, microk8s makes HA clustering slightly easier than k3s, but you get slightly less "out-of-the-box" in return, so microk8s may be more suitable for experienced users / production edge deployments.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -63,4 +63,4 @@ Good! I describe how to put this design into action on the [next page](/kubernet

[^1]: ERDs are fancy diagrams for nERDs which [represent cardinality between entities](https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_model#Crow's_foot_notation) scribbled using the foot of a crow 🐓

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -147,7 +147,7 @@ If you used my template repo, some extra things also happened..

That's best explained on the [next page](/kubernetes/deployment/flux/design/), describing the design we're using...

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: The [template repo](https://github.com/geek-cookbook/template-flux/) also "bootstraps" a simple example re how to [operate flux](/kubernetes/deployment/flux/operate/), by deploying the podinfo helm chart.
[^2]: TIL that GitHub listens for SSH on `ssh.github.com` on port 443!

@@ -154,6 +154,6 @@ Commit your changes, and once again do the waiting / impatient-reconciling jig.

We did it. The Holy Grail. We deployed an application into the cluster, without touching the cluster. Pinch yourself, and then prove it worked by running `flux get kustomizations`, or `kubectl get helmreleases -n podinfo`.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Got suggestions for improvements here? Shout out in the comments below!
@@ -129,4 +129,4 @@ Still with me? Good. Move on to creating your cluster!

- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -154,6 +154,6 @@ What have we achieved? By simply creating another YAML in our flux repo alongsid

* [X] DNS records are created automatically based on YAMLs (*or even just on services and ingresses!*)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Why yes, I **have** accidentally caused outages / conflicts by "leaking" DNS entries automatically!

@@ -53,4 +53,4 @@ Still with me? Good. Move on to understanding Helm charts...

[^1]: Of course, you can have lots of fun deploying all sorts of things via Helm. Check out <https://artifacthub.io> for some examples.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -46,6 +46,6 @@ Primarily you need 2 things:

Practically, you need some extras too, but you can mix-and-match these.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Of course, if you **do** enjoy understanding the intricacies of how your tools work, you're in good company!

@@ -14,6 +14,6 @@ There are many popular Ingress Controllers, we're going to cover two equally use

Choose at least one of the above (*there may be valid reasons to use both!* [^1]), so that you can expose applications via Ingress.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: One cluster I manage uses Traefik for public services, but Nginx for internal management services such as Prometheus, etc. The idea is that you'd need one type of Ingress to help debug problems with the _other_ type!

@@ -234,6 +234,6 @@ Commit your changes, wait for the reconciliation, and the next time you point yo

Are things not working as expected? Watch the nginx-ingress-controller's logs with ```kubectl logs -n nginx-ingress-controller -l app.kubernetes.io/name=nginx-ingress-controller -f```.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!

@@ -15,6 +15,6 @@ One of the advantages [Traefik](/kubernetes/ingress/traefik/) offers over [Nginx

* [x] A [load-balancer](/kubernetes/loadbalancer/) solution (*either [k3s](/kubernetes/loadbalancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)
* [x] [Traefik](/kubernetes/ingress/traefik/) deployed per-design

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!

@@ -236,6 +236,6 @@ Commit your changes, wait for the reconciliation, and the next time you point yo

Are things not working as expected? Watch the traefik's logs with ```kubectl logs -n traefik -l app.kubernetes.io/name=traefik -f```.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: The beauty of this design is that the same process will now work for any other application you deploy, without any additional manual effort for DNS or SSL setup!

@@ -50,6 +50,6 @@ Assuming you only had a single Kubernetes node (*say, a small k3s deployment*),

(*This is [the way k3s works](/kubernetes/loadbalancer/k3s/) by default, although it's still called a LoadBalancer*)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: It is possible to be prescriptive about which port is used for a Nodeport-exposed service, and this is occasionally [a valid deployment strategy](https://github.com/portainer/k8s/#using-nodeport-on-a-localremote-cluster), but you're usually limited to ports between 30000 and 32768.

@@ -23,6 +23,6 @@ Yes, to get you started. But consider the following limitations:

To tackle these issues, you need some more advanced network configuration, along with [MetalLB](/kubernetes/loadbalancer/metallb/).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: And seriously, if you're building a Kubernetes cluster, of **course** you'll want more than one host!

@@ -328,6 +328,6 @@ To:

Commit your changes, wait for a reconciliation, and run `kubectl get services -n podinfo`. All going well, you should see that the service now has an IP assigned from the pool you chose for MetalLB!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: I've documented an example re [how to configure BGP between MetalLB and pfsense](/kubernetes/loadbalancer/metallb/pfsense/).

@@ -75,6 +75,6 @@ If you're not receiving any routes from MetalLB, or if the neighbors aren't in a

2. Examine the metallb speaker logs in the cluster, by running `kubectl logs -n metallb-system -l app.kubernetes.io/name=metallb`
3. SSH to the pfsense, start a shell and launch the FRR shell by running `vtysh`. Now you're in a cisco-like console where commands like `show ip bgp sum` and `show ip bgp neighbors <neighbor ip> received-routes` will show you interesting debugging things.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: If you decide to deploy some policy with route-maps, prefix-lists, etc, it's all found under **Services -> FRR Global/Zebra** 🦓
@@ -311,4 +311,4 @@ At this point, you should be able to access your instance on your chosen DNS nam

To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -40,6 +40,6 @@ A few things you should know:

In summary, Local Path Provisioner is fine if you have very specifically sized workloads and you don't care about node redundancy.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: [TopoLVM](/kubernetes/persistence/topolvm/) also creates per-node volumes which aren't "portable" between nodes, but because it relies on LVM, it is "capacity-aware", and is able to distribute storage among multiple nodes based on available capacity.

@@ -243,6 +243,6 @@ What have we achieved? We have a storage provider that can use an NFS server as

* [X] We have a new storage provider

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: The reason I shortened it is so I didn't have to type nfs-subdirectory-provider each time. If you want that sort of pain in your life, feel free to change it!

@@ -415,6 +415,6 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed

* [X] StorageClasses are available so that the cluster storage can be consumed by your pods
* [X] Pretty graphs are viewable in the Ceph Dashboard

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Unless you **wanted** to deploy your cluster components in a separate namespace to the operator, of course!

@@ -178,4 +178,4 @@ What have we achieved? We're half-way to getting a ceph cluster, having deployed

* [ ] Deploy the ceph [cluster](/kubernetes/persistence/rook-ceph/cluster/) using a CR

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -271,6 +271,6 @@ Are things not working as expected? Try one of the following to look for issues:

3. Watch the scheduler logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=scheduler`
4. Watch the controller node logs, by running `kubectl logs -f -n topolvm-system -l app.kubernetes.io/name=controller`

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: This is where you'd add multiple Volume Groups if you wanted a storageclass per Volume Group

@@ -624,6 +624,6 @@ root@shredder:~#

And now when you create your sealedsecrets, refer to the public key you just created using `--cert <path to cert>`. These secrets will be decryptable by **any** sealedsecrets controller bootstrapped with the same keypair (*above*).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: There's no harm in storing the **public** key in the repo though, which means it's easy to refer to when sealing secrets.

@@ -176,4 +176,4 @@ Still with me? Good. Move on to understanding Helm charts...

[^1]: I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).
```

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
@@ -52,7 +52,7 @@ spec:

Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/cert-manager`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-cert-manager.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cert-manager
@@ -65,7 +65,6 @@ spec:
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: server
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment

@@ -135,6 +134,6 @@ What do we have now? Well, we've got the cert-manager controller **running**, bu

If your certificates **aren't** created as you expect, then the best approach is to check the cert-manager logs, by running `kubectl logs -n cert-manager -l app.kubernetes.io/name=cert-manager`.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Why yes, I **have** accidentally rate-limited myself by deleting/recreating my prod certificates a few times!

@@ -17,6 +17,6 @@ I've split this section, conceptually, into 3 separate tasks:

2. Setup "[Issuers](/kubernetes/ssl-certificates/letsencrypt-issuers/)" for LetsEncrypt, which Cert Manager will use to request certificates
3. Setup a [wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/) in such a way that it can be used by Ingresses like Traefik or Nginx

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: I had a really annoying but smart boss once who taught me this. Hi Mark! :wave:

@@ -32,7 +32,7 @@ metadata:

Now we need a kustomization to tell Flux to install any YAMLs it finds in `/letsencrypt-wildcard-cert`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-letsencrypt-wildcard-cert.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: letsencrypt-wildcard-cert
@@ -48,7 +48,6 @@ spec:
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: server
```

!!! tip

@@ -140,6 +139,6 @@ Events: <none>

Provided your account is registered, you're ready to proceed with [creating a wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/)!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Since a ClusterIssuer is not a namespaced resource, it doesn't exist in any specific namespace. Therefore, my assumption is that the `apiTokenSecretRef` secret is only "looked for" when a certificate (*which **is** namespaced*) requires validation.
@@ -164,6 +164,6 @@ Look for secrets across the whole cluster, by running `kubectl get secrets -A |

If your certificates **aren't** created as you expect, then the best approach is to check the secret-replicator logs, by running `kubectl logs -n secret-replicator -l app.kubernetes.io/name=secret-replicator`.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: To my great New Zealandy confusion, "Kiwigrid GmbH" is a German company :shrug:

@@ -108,6 +108,6 @@ spec:

Commit the certificate and follow the steps above to confirm that your prod certificate has been issued.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: This process can take a frustratingly long time, and watching the cert-manager logs at least gives some assurance that it's progressing!

@@ -90,4 +90,4 @@ Launch the Archivebox stack by running ```docker stack deploy archivebox -c <pat

[^1]: The inclusion of Archivebox was due to the efforts of @bencey in Discord (Thanks Ben!)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -126,4 +126,4 @@ What have we achieved? We can now easily consume our audio books / podcasts via

[^1]: The apps also allow you to download entire books to your device, so that you can listen without being directly connected!
[^2]: Audiobookshelf pairs very nicely with [Readarr][readarr], and [Prowlarr][prowlarr], to automate your audio book sourcing and management!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -15,4 +15,4 @@ Log into each of your new tools at its respective HTTPS URL. You'll be prompted

[^1]: This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -49,4 +49,4 @@ headphones_proxy:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -48,6 +48,6 @@ To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include th

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^2]: The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!

@@ -126,4 +126,4 @@ networks:

--8<-- "reference-networks.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -48,4 +48,4 @@ jackett:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -64,6 +64,6 @@ calibre-server:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^2]: The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web][calibre-web] recipe.

@@ -59,4 +59,4 @@ Lidarr and [Headphones][headphones] perform the same basic function. The primary

I've not tried this yet, but it seems that it's possible to [integrate Lidarr with Beets](https://www.reddit.com/r/Lidarr/comments/rahcer/my_lidarrbeets_automation_setup/)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -50,6 +50,6 @@ mylar:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^2]: If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and api key, you will need to set the `host_return` parameter to the fully qualified Mylar address (e.g. `http://mylar:8090`).

@@ -54,4 +54,4 @@ nzbget:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -66,4 +66,4 @@ nzbhydra2:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -64,4 +64,4 @@ ombi_proxy:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
@@ -75,6 +75,6 @@ Prowlarr and [Jackett][jackett] perform similar roles (*they help you aggregate

2. Given app API keys, Prowlarr can auto-configure your Arr apps, adding its indexers. Prowlarr currently auto-configures [Radarr][radarr], [Sonarr][sonarr], [Lidarr][lidarr], [Mylar][mylar], [Readarr][Readarr], and [LazyLibrarian][lazylibrarian]
3. Prowlarr can integrate with Flaresolverr to make it possible to query indexers behind Cloudflare "are-you-a-robot" protection, which would otherwise not be possible.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Because Prowlarr is so young (*just a little kitten! :cat:*), there is no `:latest` image tag yet, so we're using the `:nightly` tag instead. Don't come crying to me if baby-Prowlarr bites your ass!

@@ -64,5 +64,5 @@ radarr:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
--8<-- "common-links.md"

@@ -62,4 +62,4 @@ readarr:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -56,4 +56,4 @@ rtorrent:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -61,4 +61,4 @@ sabnzbd:

For example, mine simply reads ```host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd```

--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -50,4 +50,4 @@ sonarr:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -110,4 +110,4 @@ Once you've created your account, jump over to <https://bitwarden.com/#download>

[^2]: As mentioned above, readers should refer to the [dani-garcia/vaultwarden wiki](https://github.com/dani-garcia/vaultwarden) for details on customizing the behaviour of Bitwarden.
[^3]: The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Unfortunately, on the 22nd August 2020 Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -136,4 +136,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_pro

[^1]: If you wanted to expose the Bookstack UI directly, you could remove the traefik-forward-auth from the design.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -114,4 +114,4 @@ Log into your new instance at `https://**YOUR-FQDN**`. You'll be directed to the

[^1]: Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
[^2]: A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.
[^3]: If you plan to use calibre-web to send `.mobi` files to your Kindle via `@kindle.com` email addresses, be sure to add the sending address to the "[Approved Personal Documents Email List](https://www.amazon.com/hz/mycd/myx#/home/settings/payment)"

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -307,4 +307,4 @@ Now browse your NextCloud files. Click the plus (+) sign to create a new documen

[^1]: Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -69,7 +69,7 @@ networks:

Launch your CyberChef stack by running ```docker stack deploy cyberchef -c <path-to-docker-compose.yml>```, and then visit the URL you chose to begin the hackery!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[2]: https://gchq.github.io/CyberChef/#recipe=From_Base64('A-Za-z0-9%2B/%3D',true)&input=VTI4Z2JHOXVaeUJoYm1RZ2RHaGhibXR6SUdadmNpQmhiR3dnZEdobElHWnBjMmd1
[6]: https://gchq.github.io/CyberChef/#recipe=RC4(%7B'option':'UTF8','string':'secret'%7D,'Hex','Hex')Disassemble_x86('64','Full%20x86%20architecture',16,0,true,true)&input=MjFkZGQyNTQwMTYwZWU2NWZlMDc3NzEwM2YyYTM5ZmJlNWJjYjZhYTBhYWJkNDE0ZjkwYzZjYWY1MzEyNzU0YWY3NzRiNzZiM2JiY2QxOTNjYjNkZGZkYmM1YTI2NTMzYTY4NmI1OWI4ZmVkNGQzODBkNDc0NDIwMWFlYzIwNDA1MDcxMzhlMmZlMmIzOTUwNDQ2ZGIzMWQyYmM2MjliZTRkM2YyZWIwMDQzYzI5M2Q3YTVkMjk2MmMwMGZlNmRhMzAwNzJkOGM1YTZiNGZlN2Q4NTlhMDQwZWVhZjI5OTczMzYzMDJmNWEwZWMxOQ
@@ -126,4 +126,4 @@ Once we authenticate through the traefik-forward-auth provider, we can start con

[^1]: Quote attributed to Mila Kunis
[^2]: The [Duplicati 2 User's Manual](https://duplicati.readthedocs.io/en/latest/) contains all the information you'll need to configure backup endpoints, restore jobs, scheduling and advanced properties for your backup jobs.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -159,4 +159,4 @@ Nothing will happen. Very boring. But when the cron script fires (daily), duplic

[^1]: Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
[^2]: The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to `duplicity.env`.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -225,4 +225,4 @@ This takes you to a list of backup names and file paths. You can choose to downl

[^1]: If you wanted to expose the ElkarBackup UI directly, you could remove the traefik-forward-auth from the design.
[^2]: The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -92,4 +92,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas

[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
[^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -141,4 +141,4 @@ root@swarm:~#

[^3]: It should be noted that if you import your existing media, the files will be **copied** into Funkwhale's data folder. There doesn't seem to be a way to point Funkwhale at an existing collection and have it just play it from the filesystem. To this end, be prepared for double disk space usage if you plan to import your entire music collection!
[^5]: No consideration is given at this point to backing up the Funkwhale data. Post a comment below if you'd like to see a backup container added!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -71,4 +71,4 @@ Create your first administrative account at https://**YOUR-FQDN**/admin/

[^1]: A default install using the SQLite database takes 548k of space

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -94,4 +94,4 @@ Launch the GitLab Runner stack by running `docker stack deploy gitlab-runner -c

[^1]: You'll note that I setup 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
[^2]: Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -134,4 +134,4 @@ Log into your new instance at `https://[your FQDN]`, with user "root" and the pa

[^1]: I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -109,4 +109,4 @@ Launch the Gollum stack by running ```docker stack deploy gollum -c <path-to-doc

[^1]: In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -134,4 +134,4 @@ Log into your new instance at https://**YOUR-FQDN**, the password you created in

[^1]: I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy/), but oauth2_proxy is incompatible with the websockets implementation used by Home Assistant. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -151,4 +151,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll need to use the "Sig

[^1]: I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for features such as Twitter integration, I left it out.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -251,7 +251,7 @@ What have we achieved? We have an HTTPS-protected endpoint to target with the na

Sponsors have access to a [Premix](/premix/) playbook, which will set up Immich in under 60s (*see below*):
<iframe width="560" height="315" src="https://www.youtube.com/embed/s-NZjYrNOPg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: "wife-insurance": When the developer's wife is a primary user of the platform, you can bet he'll be writing quality code! :woman: :material-karate: :man: :bed: :cry:
[^2]: There's a [friendly Discord server](https://discord.com/invite/D8JsnBEuKb) for Immich too!

@@ -130,4 +130,4 @@ You can **also** watch the bot at work by VNCing to your docker swarm, password

[^1]: Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -1,7 +1,6 @@
---
title: Invidious, your Youtube frontend instance in Docker Swarm
description: How to create your own private Youtube frontend using Invidious in Docker Swarm
status: new
---

# Invidious: Private Youtube frontend instance in Docker Swarm

@@ -169,7 +168,7 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we

* [X] We are free of the creepy tracking attached to YouTube videos!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Check out the [official config docs](https://github.com/iv-org/invidious/blob/master/config/config.example.yml) for comprehensive details on how to configure / tweak your instance!
[^2]: Gotcha!

@@ -102,4 +102,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas

[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
[^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
docs/recipes/joplin-server.md (new file, 225 lines)
@@ -0,0 +1,225 @@
---
title: Sync, share and publish your Joplin notes with joplin-server
description: joplin-server is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes.
recipe: Joplin Server
---

# Joplin Server

{% include 'try-in-elfhosted.md' %}

joplin-server is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you.

## {{ page.meta.recipe }} Requirements

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our container, so create them in `/var/data/`:

```bash
mkdir -p /var/data/joplin-server/
mkdir -p /var/data/runtime/joplin-server/db
mkdir -p /var/data/config/joplin-server
```

### Prepare {{ page.meta.recipe }} environment

Create `/var/data/config/joplin-server/joplin-server.env`, and populate it with the following variables:

```bash
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'

#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
#SYMFONY__MAILER__FROM=

# For mysql
MYSQL_ROOT_PASSWORD=password
```

Create `/var/data/config/joplin-server/joplin-server-db-backup.env`, and populate it with the following, to set up the nightly database dump.

!!! note
    Running a daily database dump might be considered overkill, since joplin-server can be configured to back up its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.

    Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?

    No, me neither :shrug:

```bash
# For database backup (keep 7 days of daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```

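Optionally, since both env files contain credentials, you might want to tighten their permissions, something like this (a suggested hardening step, not required by the recipe):

```bash
# The env files contain database credentials, so restrict them to root only
chmod 600 /var/data/config/joplin-server/*.env
chown root:root /var/data/config/joplin-server/*.env
```
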
### {{ page.meta.recipe }} Docker Swarm config

Create a docker swarm config file in docker-compose syntax (v3), something like the example below:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  db:
    image: mariadb:10.4
    env_file: /var/data/config/joplin-server/joplin-server.env
    networks:
      - internal
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/runtime/joplin-server/db:/var/lib/mysql

  db-backup:
    image: mariadb:10.4
    env_file: /var/data/config/joplin-server/joplin-server-db-backup.env
    volumes:
      - /var/data/joplin-server/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

  app:
    image: joplin-server/joplin-server
    env_file: /var/data/config/joplin-server/joplin-server.env
    networks:
      - internal
      - traefik_public
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/:/var/data
      - /var/data/joplin-server/backups:/app/backups
      - /var/data/joplin-server/uploads:/app/uploads
      - /var/data/joplin-server/sshkeys:/app/.ssh
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:joplin-server.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.joplin-server.rule=Host(`joplin-server.example.com`)"
        - "traefik.http.services.joplin-server.loadbalancer.server.port=80"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.joplin-server.middlewares=forward-auth@file"

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.36.0/24
```

--8<-- "reference-networks.md"

## Serving

### Launch joplin-server stack

Launch the joplin-server stack by running `docker stack deploy joplin-server -c <path-to-docker-compose.yml>`

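You can then confirm that the stack came up cleanly with something like the following (a sketch; the service names assume you deployed the stack as `joplin-server`, so swarm names the tasks `<stack>_<service>`):

```bash
# Confirm all services in the stack report their expected replicas
docker stack services joplin-server

# Tail the logs of the app service while it starts up
docker service logs -f joplin-server_app
```
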
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":

The first thing you should do is change your password, using the gear icon and the "Change Password" link:

Have a read of the [joplin-server Docs](https://docs.joplin-server.org/docs/introduction.html) - they introduce the concepts of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), and **policies** (_when is data backed up and how long is it kept_).

At the very least, you want to set up a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to back up `/var/data`, **excluding** `/var/data/runtime` and `/var/data/joplin-server/backup` (_unless you **like** "backup-ception"_).

### Copying your backup data offsite

From the WebUI, you can download a script intended to be executed on a remote host, to back up your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker Swarm deployment.

Here's a variation on the standard script, which I've employed:

```bash
#!/bin/bash

REPOSITORY=/var/data/joplin-server/backups
SERVER=<target host member of docker swarm>
SERVER_USER=joplin-server
UPLOADS=/var/data/joplin-server/uploads
TARGET=/srv/backup/joplin-server

echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`

ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
do
  echo Backing up job $jobId
  mkdir -p $TARGET/$jobId 2>/dev/null
  rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done

echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads

USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`

echo "Backup finished successfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
```

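To keep the offsite copy current, you could schedule the script from cron on the remote host; the script path and schedule below are just examples, adjust to suit:

```bash
# Example crontab entry on the remote host: run the offsite copy nightly at 03:00
0 3 * * * /usr/local/bin/joplin-server-offsite.sh >> /var/log/joplin-server-offsite.log 2>&1
```
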
!!! note
    You'll note that I don't use the script to create a mysql dump (_since joplin-server is running within a container anyway_); rather, I just rely on the database dump which is made nightly into `/var/data/joplin-server/database-dump/`.

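If you ever need one of those dumps, restoring it looks roughly like this (a sketch only; the dump filename is an example, and the root password is whatever you set as `MYSQL_ROOT_PASSWORD` in the env file). Run it on the swarm node currently hosting the db container:

```bash
# Find the running db task container for this stack
DB_CONTAINER=$(docker ps --format '{{.Names}}' | grep '^joplin-server_db\.' | head -n1)

# Pick the dump to restore (filename is an example)
DUMP=/var/data/joplin-server/database-dump/dump_01-01-2024_01_00_00.sql.gz

# Stream the dump into MariaDB inside the container
gunzip -c "$DUMP" | docker exec -i "$DB_CONTAINER" mysql -uroot -ppassword
```
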
### Restoring data

Repeat after me: "**It's not a backup unless you've tested a restore**"

!!! note
    I had some difficulty making restores work well in the WebUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the WebUI or from the filesystem directly.

To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:

This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.

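If you go the browser-download route, you can sanity-check the downloaded archive before trusting it (the filenames below are examples):

```bash
# List the contents of the downloaded archive without extracting it
tar -tzf joplin-server-backup.tar.gz | head

# Extract it to a scratch location and spot-check a few files
mkdir -p /tmp/restore-test
tar -xzf joplin-server-backup.tar.gz -C /tmp/restore-test
```
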
[^1]: If you wanted to expose the joplin-server UI directly, you could remove the traefik-forward-auth from the design.
[^2]: The original inclusion of joplin-server was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

{% include 'recipe-footer.md' %}
@@ -84,4 +84,4 @@ Log into your new instance at https://**YOUR-FQDN**. Default credentials are adm
[^1]: The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
[^2]: Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}
