From 6892542f9dcbad77119cd1e1a2fc5ffbec130399 Mon Sep 17 00:00:00 2001 From: Thomas Date: Mon, 4 Jan 2021 16:00:48 +1300 Subject: [PATCH] Fix Dead Links (#129) --- manuscript/ha-docker-swarm/registry.md | 20 ++-- .../traefik-forward-auth/keycloak.md | 35 +++--- manuscript/kubernetes/design.md | 55 ++++----- manuscript/kubernetes/loadbalancer.md | 46 ++++---- manuscript/kubernetes/snapshots.md | 108 ++++++++++-------- manuscript/kubernetes/start.md | 34 +++--- manuscript/recipes/autopirate/heimdall.md | 45 ++++---- manuscript/recipes/autopirate/jackett.md | 41 ++++--- manuscript/recipes/autopirate/nzbhydra2.md | 73 ++++++------ manuscript/recipes/bitwarden.md | 4 +- manuscript/recipes/bookstack.md | 2 +- manuscript/recipes/calibre-web.md | 2 +- manuscript/recipes/collabora-online.md | 4 +- manuscript/recipes/duplicati.md | 2 +- manuscript/recipes/duplicity.md | 27 ++--- manuscript/recipes/elkarbackup.md | 4 +- manuscript/recipes/emby.md | 2 +- manuscript/recipes/ghost.md | 4 +- manuscript/recipes/gitlab-runner.md | 19 ++- manuscript/recipes/gitlab.md | 4 +- manuscript/recipes/gollum.md | 4 +- manuscript/recipes/homeassistant.md | 2 +- manuscript/recipes/homeassistant/ibeacon.md | 6 +- manuscript/recipes/instapy.md | 4 +- manuscript/recipes/jellyfin.md | 2 +- manuscript/recipes/kanboard.md | 2 +- manuscript/recipes/keycloak.md | 25 ++-- manuscript/recipes/keycloak/create-user.md | 12 +- .../recipes/keycloak/setup-oidc-provider.md | 4 +- manuscript/recipes/kubernetes/nextcloud.md | 2 +- manuscript/recipes/kubernetes/phpipam.md | 2 +- manuscript/recipes/kubernetes/privatebin.md | 2 +- manuscript/recipes/mattermost.md | 4 +- manuscript/recipes/miniflux.md | 2 +- manuscript/recipes/minio.md | 4 +- manuscript/recipes/mqtt.md | 17 +-- manuscript/recipes/munin.md | 21 ++-- manuscript/recipes/nextcloud.md | 2 +- manuscript/recipes/openldap.md | 8 +- manuscript/recipes/owntracks.md | 2 +- manuscript/recipes/photoprism.md | 4 +- manuscript/recipes/phpipam.md | 21 ++-- 
manuscript/recipes/plex.md | 2 +- manuscript/recipes/privatebin.md | 4 +- manuscript/recipes/realms.md | 4 +- manuscript/recipes/restic.md | 2 +- manuscript/recipes/swarmprom.md | 4 +- manuscript/recipes/template.md | 4 +- manuscript/recipes/wallabag.md | 2 +- manuscript/recipes/wetty.md | 4 +- manuscript/reference/networks.md | 6 +- 51 files changed, 354 insertions(+), 361 deletions(-) diff --git a/manuscript/ha-docker-swarm/registry.md b/manuscript/ha-docker-swarm/registry.md index 234e6f7..d57985a 100644 --- a/manuscript/ha-docker-swarm/registry.md +++ b/manuscript/ha-docker-swarm/registry.md @@ -6,14 +6,13 @@ When dealing with large container (looking at you, GitLab!), this can result in The solution is to run an official Docker registry container as a ["pull-through" cache, or "registry mirror"](https://docs.docker.com/registry/recipes/mirror/). By using our persistent storage for the registry cache, we can ensure we have a single copy of all the containers we've pulled at least once. After the first pull, any subsequent pulls from our nodes will use the cached version from our registry mirror. As a result, services are available more quickly when restarting container nodes, and we can be more aggressive about cleaning up unused containers on our nodes (more later) -The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Customize __your mirror FQDN__ below, so that Traefik will generate the appropriate LetsEncrypt certificates for it, and make it available via HTTPS. +The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Customize **your mirror FQDN** below, so that Traefik will generate the appropriate LetsEncrypt certificates for it, and make it available via HTTPS. ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP - +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation @@ -45,10 +44,10 @@ networks: ``` !!! note "Unencrypted registry" - We create this registry without consideration for SSL, which will fail if we attempt to use the registry directly. However, we're going to use the HTTPS-proxied version via Traefik, leveraging Traefik to manage the LetsEncrypt certificates required. - +We create this registry without consideration for SSL, which will fail if we attempt to use the registry directly. However, we're going to use the HTTPS-proxied version via Traefik, leveraging Traefik to manage the LetsEncrypt certificates required. Create /var/data/registry/registry-mirror-config.yml as follows: + ``` version: 0.1 log: @@ -78,7 +77,7 @@ proxy: ### Launch registry stack -Launch the registry stack by running ```docker stack deploy registry -c ``` +Launch the registry stack by running `docker stack deploy registry -c ` ### Enable registry mirror and experimental features @@ -103,11 +102,12 @@ To: ``` Then restart docker by running: -```` + +``` systemctl restart docker-latest -```` +``` !!! tip "" - Note the extra comma required after "false" above +Note the extra comma required after "false" above -## Chef's notes πŸ““ \ No newline at end of file +## Chef's notes πŸ““ diff --git a/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md b/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md index 28a15af..875112a 100644 --- a/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md +++ b/manuscript/ha-docker-swarm/traefik-forward-auth/keycloak.md @@ -5,37 +5,36 @@ While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe ## Ingredients !!! 
Summary - Existing: +Existing: * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully, with a [local user](/recipes/keycloak/create-user/) and an [OIDC client](/recipes/keycloak/setup-oidc-provider/) - + New: - * [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](ha-docker-swarm/keepalived/) IP + * [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation ### What is AuthHost mode -Under normal OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feture of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you're securing many URLs though, explicitly listing them can be a PITA. +Under normal OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feature of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you're securing many URLs though, explicitly listing them can be a PITA. -[@thomaseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "*auth host*" in your OIDC authentication, so that you can protect an unlimited amount of DNS names (*with a common domain suffix*), without having to manually maintain a list. +[@thomseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "_auth host_" in your OIDC authentication, so that you can protect an unlimited number of DNS names (_with a common domain suffix_), without having to manually maintain a list. #### How does it work? -Say you're protecting **radarr.example.com**.
When you first browse to **https://radarr.example.com**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login (*KeyCloak, in this case*), but instructs the OIDC provider to redirect a successfully authenticated session **back** to **https://auth.example.com/_oauth**, rather than to **https://radarr.example.com/_oauth**. +Say you're protecting **radarr.example.com**. When you first browse to **https://radarr.example.com**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login (_KeyCloak, in this case_), but instructs the OIDC provider to redirect a successfully authenticated session **back** to **https://auth.example.com/_oauth**, rather than to **https://radarr.example.com/_oauth**. -When you successfully authenticate against the OIDC provider, you are redirected to the "*redirect_uri*" of https://auth.example.com. Again, your request hits Traefik, whichforwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (*cookies have a role to play here*). Traefik-forward-auth also knows the URL of your **original** request (*thanks to the X-Forwarded-Whatever header*). Traefik-forward-auth redirects you to your original destination, and everybody is happy. +When you successfully authenticate against the OIDC provider, you are redirected to the "_redirect_uri_" of https://auth.example.com. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (_cookies have a role to play here_). Traefik-forward-auth also knows the URL of your **original** request (_thanks to the X-Forwarded-Whatever header_). Traefik-forward-auth redirects you to your original destination, and everybody is happy. This clever workaround only works under 2 conditions: - -1.
Your "auth host" has the same domain name as the hosts you're protecting (*i.e., auth.example.com protecting radarr.example.com*) -2. You explictly tell traefik-forward-auth to use a cookie authenticating your **whole** domain (*i.e. example.com*) +1. Your "auth host" has the same domain name as the hosts you're protecting (_i.e., auth.example.com protecting radarr.example.com_) +2. You explicitly tell traefik-forward-auth to use a cookie authenticating your **whole** domain (_i.e. example.com_) ### Setup environment -Create `/var/data/config/traefik/traefik-forward-auth.env` as follows (*change "master" if you created a different realm*): +Create `/var/data/config/traefik/traefik-forward-auth.env` as follows (_change "master" if you created a different realm_): ``` CLIENT_ID= @@ -48,7 +47,7 @@ COOKIE_DOMAIN= ### Prepare the docker service config -This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe: +This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/ha-docker-swarm/traefik/) recipe: ``` traefik-forward-auth: @@ -82,21 +81,21 @@ If you're not confident that forward authentication is working, add a simple "wh ``` !!! tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` πŸ‘ +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes.
This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` πŸ‘ ## Serving ### Launch -Redeploy traefik with ```docker stack deploy traefik-app -c /var/data/traefik/traeifk-app.yml```, to launch the traefik-forward-auth container. +Redeploy traefik with `docker stack deploy traefik-app -c /var/data/traefik/traefik-app.yml`, to launch the traefik-forward-auth container. ### Test -Browse to https://whoami.example.com (*obviously, customized for your domain and having created a DNS record*), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page. +Browse to https://whoami.example.com (_obviously, customized for your domain and having created a DNS record_), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page. ### Protect services -To protect any other service, ensure the service itself is exposed by Traefik (*if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy serivce to the service itself*). Add the following 3 labels: +To protect any other service, ensure the service itself is exposed by Traefik (_if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself_). Add the following 3 labels: ``` - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181 @@ -111,12 +110,10 @@ And re-deploy your services :) What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our KeyCloak OIDC provider, with minimal processing / handling overhead. !!! summary "Summary" - Created: +Created: * [X] Traefik-forward-auth configured to authenticate against KeyCloak - - ## Chef's Notes πŸ““ 1. KeyCloak is very powerful.
You can add 2FA and all other clever things outside of the scope of this simple recipe ;) diff --git a/manuscript/kubernetes/design.md b/manuscript/kubernetes/design.md index 926d108..f46cb96 100644 --- a/manuscript/kubernetes/design.md +++ b/manuscript/kubernetes/design.md @@ -1,17 +1,17 @@ # Design -Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is: +Like the [Docker Swarm](/ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is: -* **Highly-available** (_can tolerate the failure of a single component_) -* **Scalable** (_can add resource or capacity as required_) -* **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_) -* **Secure** (_access protected with LetsEncrypt certificates_) -* **Automated** (_requires minimal care and feeding_) +- **Highly-available** (_can tolerate the failure of a single component_) +- **Scalable** (_can add resource or capacity as required_) +- **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_) +- **Secure** (_access protected with LetsEncrypt certificates_) +- **Automated** (_requires minimal care and feeding_) -*Unlike* the Docker Swarm design, the Kubernetes design is: +_Unlike_ the Docker Swarm design, the Kubernetes design is: -* **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_) -* **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_) +- **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_) +- **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_) ## Design Decisions @@ -19,21 
+19,21 @@ Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the K This means that: -* The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s -* Custom service elements specific to individual providers are avoided +- The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s +- Custom service elements specific to individual providers are avoided **The simplest solution to achieve the desired result will be preferred** This means that: -* Persistent volumes from the cloud provider are used for all persistent storage -* We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks. +- Persistent volumes from the cloud provider are used for all persistent storage +- We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks. **Insofar as possible, the format of recipes will align with Docker Swarm** This means that: -* We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery +- We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery ## Security @@ -41,12 +41,12 @@ Under this design, the only inbound connections we're permitting to our Kubernet ### Network Flows -* HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_) -* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_) +- HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_) +- Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_) ### Authentication -* Other than when an SSL-served application provides a trusted level of authentication, or where the application requires public 
exposure, applications served via Traefik will be protected with an OAuth proxy. ## The challenges of external access @@ -55,8 +55,9 @@ Because we're Cloud-Native now, it's complex to get traffic **into** our cluste 1. **HostIP**: Map a port on the host to a service. This is analogous to Docker's port exposure, but lacking in that it restricts us to one host port per-container, and it's not possible to anticipate _which_ of your Kubernetes hosts is running a given container. Kubernetes does not have Docker Swarm's "routing mesh", allowing for simple load-balancing of incoming connections. 2. **LoadBalancer**: Purchase a "loadbalancer" per-service from your cloud provider. While this is the simplest way to assure a fixed IP and port combination will always exist for your service, it has 2 significant limitations: - 1. Cost is prohibitive, at roughly $US10/month per port - 2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_" + + 1. Cost is prohibitive, at roughly \$US10/month per port + 2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_" 3. **NodePort**: Expose our service as a port (_between 30000-32767_) on the host which happens to be running the service. This is challenging because you might want to expose port 443, but that's not possible with NodePort.
@@ -92,7 +93,7 @@ The phone-home container calls the webhook, and tells HAProxy to listen on port ### 2 : The Traefik Ingress -In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](/docker-ha-swarm/traefik/). +In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](/ha-docker-swarm/traefik/). What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAproxy to send any HTTPs traffic to its calling address and customer NodePort port number. @@ -120,10 +121,10 @@ Finally, the DNS for all externally-accessible services is pointed to the IP of Still with me? Good. Move on to creating your cluster! -* [Start](/kubernetes/start/) - Why Kubernetes? -* Design (this page) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm \ No newline at end of file +- [Start](/kubernetes/start/) - Why Kubernetes? +- Design (this page) - How does it fit together?
+- [Cluster](/kubernetes/cluster/) - Setup a basic cluster +- [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access +- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data +- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks +- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm diff --git a/manuscript/kubernetes/loadbalancer.md b/manuscript/kubernetes/loadbalancer.md index 436b686..6018025 100644 --- a/manuscript/kubernetes/loadbalancer.md +++ b/manuscript/kubernetes/loadbalancer.md @@ -4,7 +4,7 @@ One of the issues I encountered early on in migrating my Docker Swarm workloads There were several complications with the "traditional" mechanisms of providing a load-balanced ingress, not the least of which was cost. I also found that even if I paid my cloud provider (_Google_) for a load-balancer Kubernetes service, this service required a unique IP per exposed port, which was incompatible with my mining pool empire (_mining pools need to expose multiple ports on the same DNS name_). -See further examination of the problem and possible solutions in the [Kubernetes design](kubernetes/design/#the-challenges-of-external-access) page. +See further examination of the problem and possible solutions in the [Kubernetes design](/kubernetes/design/#the-challenges-of-external-access) page. This recipe details a simple design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort. @@ -13,10 +13,9 @@ This recipe details a simple design to permit the exposure of as many ports as y ## Ingredients 1. [Kubernetes cluster](/kubernetes/cluster/) -2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_) +2. 
VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [\$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_) 3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_) - ## Preparation ### Summary @@ -24,7 +23,7 @@ This recipe details a simple design to permit the exposure of as many ports as y ### Create LetsEncrypt certificate !!! warning - Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate? Of course not, I didn't think so! +Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate? Of course not, I didn't think so! Use whatever method you prefer to generate (and later, renew) your LetsEncrypt cert. The example below uses the CertBot docker image for CloudFlare DNS validation, since that's what I've used elsewhere. @@ -38,13 +37,14 @@ dns_cloudflare_api_key=supersekritnevergonnatellyou ``` I request my cert by running: + ``` cd /etc/webhook/ docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d ''*.funkypenguin.co.nz' ``` !!! question - Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course! +Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course! 
I add the following as a cron command to renew my certs every day: @@ -52,15 +52,15 @@ cd /etc/webhook && docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini ``` -Once you've confirmed you've got a valid LetsEncrypt certificate stored in ```/etc/webhook/letsencrypt/live//fullcert.pem```, proceed to the next step.. +Once you've confirmed you've got a valid LetsEncrypt certificate stored in `/etc/webhook/letsencrypt/live//fullchain.pem`, proceed to the next step. ### Install webhook -We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❀️ ya, Debian!_), webhook and its associated systemd config can be installed by running ```apt-get install webhook```. +We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❀️ ya, Debian!_), webhook and its associated systemd config can be installed by running `apt-get install webhook`. ### Create webhook config -We'll create a single webhook, by creating ```/etc/webhook/hooks.json``` as follows. Choose a nice secure random string for your MY_TOKEN value! +We'll create a single webhook by creating `/etc/webhook/hooks.json` as follows. Choose a nice secure random string for your MY_TOKEN value! ``` mkdir /etc/webhook @@ -113,14 +113,14 @@ EOF ``` !!! note - Note that to avoid any bozo from calling our we're matching on a token header in the request called ```X-Funkypenguin-Token```. Webhook will **ignore** any request which doesn't include a matching token in the request header. +Note that to avoid any bozo from calling our webhook, we're matching on a token header in the request called `X-Funkypenguin-Token`. Webhook will **ignore** any request which doesn't include a matching token in the request header. ### Update systemd for webhook !!!
note - This section is particular to Debian Stretch and its webhook package. If you're using another OS for your VM, just ensure that you can start webhook with a config similar to the one illustrated below. +This section is particular to Debian Stretch and its webhook package. If you're using another OS for your VM, just ensure that you can start webhook with a config similar to the one illustrated below. -Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_) I ran ```systemctl edit webhook```, and pasted in the following: +Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_) I ran `systemctl edit webhook`, and pasted in the following: ``` [Service] @@ -129,7 +129,7 @@ ExecStart= ExecStart=/usr/bin/webhook -hooks /etc/webhook/hooks.json -verbose -secure -cert /etc/webhook/letsencrypt/live/funkypenguin.co.nz/fullchain.pem -key /etc/webhook/letsencrypt/live/funkypenguin.co.nz/privkey.pem ``` -Then I restarted webhook by running ```systemctl enable webhook && systemctl restart webhook```. I watched the subsequent logs by running ```journalctl -u webhook -f``` +Then I restarted webhook by running `systemctl enable webhook && systemctl restart webhook`. I watched the subsequent logs by running `journalctl -u webhook -f` ### Create /etc/webhook/update-haproxy.sh @@ -210,7 +210,7 @@ fi ### Create /etc/webhook/haproxy/global -Create ```/etc/webhook/haproxy/global``` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config: +Create `/etc/webhook/haproxy/global` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config: ``` global @@ -260,7 +260,7 @@ Whew! We now have all the components of our automated load-balancing solution in If you don't see the above, then check the following: -1. 
Does the webhook verbose log (```journalctl -u webhook -f```) complain about invalid arguments or missing files? +1. Does the webhook verbose log (`journalctl -u webhook -f`) complain about invalid arguments or missing files? 2. Is port 9000 open to the internet on your VM? ### Apply to pods @@ -315,20 +315,18 @@ Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command ou ``` - ## Move on.. Still with me? Good. Move on to setting up an ingress SSL terminating proxy with Traefik.. -* [Start](/kubernetes/start/) - Why Kubernetes? -* [Design](/kubernetes/design/) - How does it fit together? -* [Cluster](/kubernetes/cluster/) - Setup a basic cluster -* Load Balancer (this page) - Setup inbound access -* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data -* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks -* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm - +- [Start](/kubernetes/start/) - Why Kubernetes? +- [Design](/kubernetes/design/) - How does it fit together? +- [Cluster](/kubernetes/cluster/) - Setup a basic cluster +- Load Balancer (this page) - Setup inbound access +- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data +- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks +- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm ## Chef's Notes -1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome πŸ˜‰ \ No newline at end of file +1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome πŸ˜‰ diff --git a/manuscript/kubernetes/snapshots.md b/manuscript/kubernetes/snapshots.md index 69dd02c..4bcf45d 100644 --- a/manuscript/kubernetes/snapshots.md +++ b/manuscript/kubernetes/snapshots.md @@ -2,7 +2,7 @@ Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). 
In the context of this recipe, security refers to safe-guarding your data from accidental loss, as well as malicious impact. -Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](recipes/elkarbackup/)) to automate backups of our persistent data. +Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data. Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution... @@ -23,7 +23,7 @@ This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/ If you're running GKE, run the following to create a RoleBinding, allowing your user to grant rights-it-doesn't-currently-have to the service account responsible for creating the snapshots: -```kubectl create clusterrolebinding your-user-cluster-admin-binding \ +````kubectl create clusterrolebinding your-user-cluster-admin-binding \ --clusterrole=cluster-admin --user=``` !!! question @@ -33,8 +33,10 @@ If you're running GKE, run the following to create a RoleBinding, allowing your If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s_snapshots to see your PVs and friends: -``` +```` + kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml + ``` ## Serving @@ -44,24 +46,25 @@ kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/maste Ready? 
Run the following to create a deployment in to the kube-system namespace: ``` + cat <``` @@ -71,7 +74,8 @@ k8s-snapshots relies on annotations to tell it how frequently to snapshot your P From the k8s-snapshots README: -``` +```` + The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by it's and the parent generation's delta. For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good. @@ -79,38 +83,44 @@ For example, given the deltas PT1H P1D P7D, the first generation will consist of The most recent backup is always kept. The first delta is the backup interval. 
+ ``` To add the annotation to an existing PV, run something like this: ``` + kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \ - '{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}' + '{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}' + ``` To add the annotation to a _new_ PV, add the following annotation to your **PVC**: ``` + backup.kubernetes.io/deltas: PT1H P2D P30D P180D + ``` Here's an example of the PVC for the UniFi recipe, which includes 7 daily snapshots of the PV: ``` + kind: PersistentVolumeClaim apiVersion: v1 metadata: - name: controller-volumeclaim - namespace: unifi - annotations: - backup.kubernetes.io/deltas: P1D P7D +  name: controller-volumeclaim +  namespace: unifi +  annotations: +    backup.kubernetes.io/deltas: P1D P7D spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 1Gi -``` +  accessModes: +    - ReadWriteOnce +  resources: +    requests: +      storage: 1Gi + +```` And here's what my snapshot list looks like after a few days: @@ -122,40 +132,43 @@ If you're running traditional compute instances with your cloud provider (I do t To do so, first create a custom resource, ```SnapshotRule```: -``` +```` + cat < "[Settings](https://console.cloud.google.com/project/_/storage/settings)" > "Interoperability" tab > click "Enable interoperable access" and then "Create a new key" button and note both Access Key and Secret. - ### Prepare environment 1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe**, without it you won't be able to restore! 2. Seriously, **save**. **it**. **somewhere**. **safe**. 3. Create duplicity.env, and populate with the following variables + ``` SRC=/var/data/ DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories @@ -68,7 +66,7 @@ PASSPHRASE= ``` !!! note - See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above.
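Step 1 above asks you to generate a random passphrase and stash it somewhere safe. One possible sketch (_the generator choice and the filename are my own assumptions - any strong random source works, and the value ultimately lands in `PASSPHRASE=` in duplicity.env_):

```shell
# Generate a strong random passphrase (32 random bytes, base64-encoded).
head -c 32 /dev/urandom | base64 > duplicity-passphrase.txt

# Keep it away from other users on the host.
chmod 600 duplicity-passphrase.txt

# Sanity-check before copying it into duplicity.env:
# 32 bytes base64-encode to 44 characters, plus a trailing newline.
wc -c < duplicity-passphrase.txt   # 45
```

Remember: this file is now as precious as the backups themselves, so store a copy off-host (password manager, printed sheet in a safe, etc.).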
+See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above. ### Run a test backup @@ -88,9 +86,9 @@ You should see some activity, with a summary of bytes transferred at the end. Repeat after me: "If you don't verify your backup, **it's not a backup**". !!! warning - Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data. +Depending on what tier of storage you chose from your provider (_e.g., Google Coldline, or Amazon S3_), you may be charged for downloading data. -Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipie](/recipie/traefik/), since this is likely to exist for every reader_). +Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/ha-docker-swarm/traefik/), since this is likely to exist for every reader_). ``` docker run --env-file duplicity.env -it --rm \ @@ -100,6 +98,7 @@ docker run --env-file duplicity.env -it --rm \ duplicity list-current-files \ \$DST | grep traefik.yml ``` + Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_) ``` @@ -114,14 +113,12 @@ tecnativa/duplicity duplicity restore \ Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data. - ### Setup Docker Swarm Now that we have confidence in our backup/restore process, let's automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes.
This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` πŸ‘ - +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` πŸ‘ ``` version: "3" @@ -148,19 +145,17 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. - - +Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. ## Serving ### Launch Duplicity stack -Launch Duplicity stack by running ```docker stack deploy duplicity -c ``` +Launch Duplicity stack by running `docker stack deploy duplicity -c ` Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination. ## Chef's Notes πŸ““ 1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs. -2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env +2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. 
To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to duplicity.env diff --git a/manuscript/recipes/elkarbackup.md b/manuscript/recipes/elkarbackup.md index 696d980..4f15366 100644 --- a/manuscript/recipes/elkarbackup.md +++ b/manuscript/recipes/elkarbackup.md @@ -20,8 +20,8 @@ ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/emby.md b/manuscript/recipes/emby.md index 9f3dfab..471592b 100644 --- a/manuscript/recipes/emby.md +++ b/manuscript/recipes/emby.md @@ -10,7 +10,7 @@ I've started experimenting with Emby as an alternative to Plex, because of the a 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/ghost.md b/manuscript/recipes/ghost.md index 7ea428a..b336f84 100644 --- a/manuscript/recipes/ghost.md +++ b/manuscript/recipes/ghost.md @@ -12,8 +12,8 @@ hero: Ghost - A recipe for beautiful online publication. Existing: 1. 
[X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - 2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design - 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP + 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design + 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/gitlab-runner.md b/manuscript/recipes/gitlab-runner.md index e48026a..c6076db 100644 --- a/manuscript/recipes/gitlab-runner.md +++ b/manuscript/recipes/gitlab-runner.md @@ -7,12 +7,12 @@ While a runner isn't strictly required to use GitLab, if you want to do CI, you' ## Ingredients !!! summary "Ingredients" - Existing: +Existing: 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - 2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design - 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP - 4. [X] [GitLab](/ha-docker-swarm/gitlab) installation (see previous recipe) + 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design + 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP + 4. [X] [GitLab](/recipes/gitlab) installation (see previous recipe) ## Preparation @@ -32,7 +32,7 @@ mkdir -p {runners/1,runners/2} Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. 
This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` πŸ‘ +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` πŸ‘ ``` version: '3' @@ -60,10 +60,9 @@ networks: - subnet: 172.16.23.0/24 ``` - ### Configure runners -From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute ```gitlab-runner register``` to interactively generate config.toml. +From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute `gitlab-runner register` to interactively generate config.toml. Sample runner config.toml: @@ -90,11 +89,11 @@ check_interval = 0 ### Launch runners -Launch the mail server stack by running ```docker stack deploy gitlab-runner -c ``` +Launch the runner stack by running `docker stack deploy gitlab-runner -c ` Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env. ## Chef's Notes πŸ““ -1. You'll note that I setup 2 runners. One is locked to a single project (*this cookbook build*), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case. -2.
Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (*and GitLab starts **sooo** slowly!*), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem. \ No newline at end of file +1. You'll note that I setup 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case. +2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem. diff --git a/manuscript/recipes/gitlab.md b/manuscript/recipes/gitlab.md index bf8c905..8087bb0 100644 --- a/manuscript/recipes/gitlab.md +++ b/manuscript/recipes/gitlab.md @@ -12,8 +12,8 @@ Docker does maintain an [official "Omnibus" container](https://docs.gitlab.com/o Existing: 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - 2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design - 3. 
[X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP + 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design + 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/gollum.md b/manuscript/recipes/gollum.md index c87f5fe..09be173 100644 --- a/manuscript/recipes/gollum.md +++ b/manuscript/recipes/gollum.md @@ -37,8 +37,8 @@ Gollum meets all these requirements, and as an added bonus, is extremely fast an Existing: 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - 2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design - 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP + 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design + 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/homeassistant.md b/manuscript/recipes/homeassistant.md index 13aa6d6..aeff847 100644 --- a/manuscript/recipes/homeassistant.md +++ b/manuscript/recipes/homeassistant.md @@ -10,7 +10,7 @@ This recipie combines the [extensibility](https://home-assistant.io/components/) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/homeassistant/ibeacon.md b/manuscript/recipes/homeassistant/ibeacon.md index 460029a..87b62e4 100644 --- a/manuscript/recipes/homeassistant/ibeacon.md +++ b/manuscript/recipes/homeassistant/ibeacon.md @@ -1,15 +1,15 @@ # iBeacons with Home assistant !!! warning - This is not a complete recipe - it's an optional additional of the [HomeAssistant](/recipes/homeassistant/) "recipe", since it only applies to a subset of users +This is not a complete recipe - it's an optional addition to the [HomeAssistant](/recipes/homeassistant/) "recipe", since it only applies to a subset of users One of the most useful features of Home Assistant is location awareness. I don't care if someone opens my office door when I'm home, but you bet I care about (_and want to be notified_) it if I'm away! ## Ingredients -1. [HomeAssistant](/recipes/home-assistant/) per recipe +1. [HomeAssistant](/recipes/homeassistant/) per recipe 2. iBeacon(s) - This recipe is for https://s.click.aliexpress.com/e/bzyLCnAp -4. [LightBlue Explorer](https://itunes.apple.com/nz/app/lightblue-explorer/id557428110?mt=8) +3. [LightBlue Explorer](https://itunes.apple.com/nz/app/lightblue-explorer/id557428110?mt=8) ## Preparation diff --git a/manuscript/recipes/instapy.md b/manuscript/recipes/instapy.md index f7f4d07..07c80ef 100644 --- a/manuscript/recipes/instapy.md +++ b/manuscript/recipes/instapy.md @@ -14,8 +14,8 @@ Great power, right? A client (_yes, you can [hire](https://www.funkypenguin.co.n Existing: 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - 2.
[X] [Traefik](/ha-docker-swarm/traefik) configured per design + 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/jellyfin.md b/manuscript/recipes/jellyfin.md index 5fbeff5..1b83c5b 100644 --- a/manuscript/recipes/jellyfin.md +++ b/manuscript/recipes/jellyfin.md @@ -10,7 +10,7 @@ If it looks very similar as Emby, is because it started as a fork of it, but it 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/kanboard.md b/manuscript/recipes/kanboard.md index e878424..7988f40 100644 --- a/manuscript/recipes/kanboard.md +++ b/manuscript/recipes/kanboard.md @@ -26,7 +26,7 @@ Features include: 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry pointing your NextCloud url (_kanboard.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP +3. DNS entry pointing your Kanboard URL (_kanboard.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/keycloak.md b/manuscript/recipes/keycloak.md index bc9f8d7..d5197e7 100644 --- a/manuscript/recipes/keycloak.md +++ b/manuscript/recipes/keycloak.md @@ -1,9 +1,9 @@ # KeyCloak -[KeyCloak](https://www.keycloak.org/) is "*an open source identity and access management solution*".
Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak's OpenID provider can be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipe/nzbget/) with an extra layer of authentication. +[KeyCloak](https://www.keycloak.org/) is "_an open source identity and access management solution_". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak's OpenID provider can be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipes/autopirate/nzbget/) with an extra layer of authentication. !!! important - Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys! +Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys! [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) @@ -12,10 +12,10 @@ ## Ingredients !!! Summary - Existing: +Existing: * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/) - * [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design + * [X] [Traefik](/ha-docker-swarm/traefik) configured per design * [X] DNS entry for the hostname (_i.e. "keycloak.your-domain.com"_) you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation @@ -69,7 +69,8 @@ BACKUP_FREQUENCY=1d Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! 
tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` πŸ‘ +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` πŸ‘ + ``` version: '3' @@ -78,7 +79,7 @@ services: image: jboss/keycloak env_file: /var/data/config/keycloak/keycloak.env volumes: - - /etc/localtime:/etc/localtime:ro + - /etc/localtime:/etc/localtime:ro networks: - traefik_public - internal @@ -93,7 +94,7 @@ services: image: postgres:10.1 volumes: - /var/data/runtime/keycloak/database:/var/lib/postgresql/data - - /etc/localtime:/etc/localtime:ro + - /etc/localtime:/etc/localtime:ro networks: - internal @@ -123,25 +124,23 @@ networks: driver: overlay ipam: config: - - subnet: 172.16.49.0/24 + - subnet: 172.16.49.0/24 ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. - +Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. ## Serving ### Launch KeyCloak stack -Launch the KeyCloak stack by running ```docker stack deploy keycloak -c ``` +Launch the KeyCloak stack by running `docker stack deploy keycloak -c ` Log into your new instance at https://**YOUR-FQDN**, and login with the user/password you defined in `keycloak.env`. !!! 
important - Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys! +Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys! [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) - ## Chef's Notes diff --git a/manuscript/recipes/keycloak/create-user.md b/manuscript/recipes/keycloak/create-user.md index 603107d..01bc8bc 100644 --- a/manuscript/recipes/keycloak/create-user.md +++ b/manuscript/recipes/keycloak/create-user.md @@ -1,20 +1,20 @@ # Create KeyCloak Users !!! warning - This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. +This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. -Unless you plan to authenticate against an outside provider (*[OpenLDAP](/recipes/keycloak/openldap/), below, for example*), you'll want to create some local users.. +Unless you plan to authenticate against an outside provider (_[OpenLDAP](/recipes/keycloak/authenticate-against-openldap/), below, for example_), you'll want to create some local users.. ## Ingredients !!! 
Summary - Existing: +Existing: * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully - + ### Create User -Within the "Master" realm (*no need for more realms yet*), navigate to **Manage** -> **Users**, and then click **Add User** at the top right: +Within the "Master" realm (_no need for more realms yet_), navigate to **Manage** -> **Users**, and then click **Add User** at the top right: ![Navigating to the add user interface in Keycloak](/images/keycloak-add-user-1.png) @@ -33,6 +33,6 @@ Once your user is created, to set their password, click on the "**Credentials**" We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). !!! Summary - Created: +Created: * [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/) diff --git a/manuscript/recipes/keycloak/setup-oidc-provider.md b/manuscript/recipes/keycloak/setup-oidc-provider.md index 188107e..334b62b 100644 --- a/manuscript/recipes/keycloak/setup-oidc-provider.md +++ b/manuscript/recipes/keycloak/setup-oidc-provider.md @@ -3,7 +3,7 @@ !!! warning This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. -Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/recipe/traefik-forward-auth/), we'll setup a client in KeyCloak... +Having an authentication provider is not much use until you start authenticating things against it! 
In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), we'll setup a client in KeyCloak... ## Ingredients @@ -14,7 +14,7 @@ Having an authentication provider is not much use until you start authenticating New: - * [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/recipe/traefik-forward-auth/) recipe for more information + * [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe for more information ## Preparation diff --git a/manuscript/recipes/kubernetes/nextcloud.md b/manuscript/recipes/kubernetes/nextcloud.md index 5f7b97a..de385e3 100644 --- a/manuscript/recipes/kubernetes/nextcloud.md +++ b/manuscript/recipes/kubernetes/nextcloud.md @@ -15,7 +15,7 @@ Details ## Ingredients -1. [Kubernetes cluster](/kubernetes/digital-ocean/) +1. [Kubernetes cluster](/kubernetes/cluster/) ## Preparation diff --git a/manuscript/recipes/kubernetes/phpipam.md b/manuscript/recipes/kubernetes/phpipam.md index 979a068..3c591b3 100644 --- a/manuscript/recipes/kubernetes/phpipam.md +++ b/manuscript/recipes/kubernetes/phpipam.md @@ -8,7 +8,7 @@ Details ## Ingredients -1. [Kubernetes cluster](/kubernetes/digital-ocean/) +1. [Kubernetes cluster](/kubernetes/cluster/) ## Preparation diff --git a/manuscript/recipes/kubernetes/privatebin.md b/manuscript/recipes/kubernetes/privatebin.md index 5f7b97a..de385e3 100644 --- a/manuscript/recipes/kubernetes/privatebin.md +++ b/manuscript/recipes/kubernetes/privatebin.md @@ -15,7 +15,7 @@ Details ## Ingredients -1. [Kubernetes cluster](/kubernetes/digital-ocean/) +1. 
[Kubernetes cluster](/kubernetes/cluster/) ## Preparation diff --git a/manuscript/recipes/mattermost.md b/manuscript/recipes/mattermost.md index 7739985..bae4744 100644 --- a/manuscript/recipes/mattermost.md +++ b/manuscript/recipes/mattermost.md @@ -9,8 +9,8 @@ Details ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/miniflux.md b/manuscript/recipes/miniflux.md index 0c6f620..bb4e186 100644 --- a/manuscript/recipes/miniflux.md +++ b/manuscript/recipes/miniflux.md @@ -23,7 +23,7 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry pointing your Miniflux url (i.e. _miniflux.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP +3. DNS entry pointing your Miniflux url (i.e. _miniflux.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/minio.md b/manuscript/recipes/minio.md index 315514b..c4d9aa2 100644 --- a/manuscript/recipes/minio.md +++ b/manuscript/recipes/minio.md @@ -18,8 +18,8 @@ Possible use-cases: ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/mqtt.md b/manuscript/recipes/mqtt.md index 5b39605..088e520 100644 --- a/manuscript/recipes/mqtt.md +++ b/manuscript/recipes/mqtt.md @@ -1,7 +1,7 @@ hero: Kubernetes. The hero we deserve. !!! danger "This recipe is a work in progress" - This recipe is **incomplete**, and is featured to align the [sponsors](https://github.com/sponsors/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [GitHub sponsors](https://github.com/sponsors/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` πŸ‘ +This recipe is **incomplete**, and is featured to align the [sponsors](https://github.com/sponsors/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [GitHub sponsors](https://github.com/sponsors/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `kubectl create -f *.yml` πŸ‘ So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁 @@ -19,7 +19,7 @@ A workaround to this bug is to run an MQTT broker **external** to the raspberry ## Ingredients -1. A [Kubernetes cluster](/kubernetes/digital-ocean/) +1. 
A [Kubernetes cluster](/kubernetes/cluster/) ## Preparation @@ -89,6 +89,7 @@ spec: EOF kubectl create -f /var/data/mqtt/service-nodeport.yml ``` + ### Create secrets It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents. @@ -104,8 +105,8 @@ kubectl create secret -n mqtt generic mqtt-credentials \ --from-file=letsencrypt-email.secret ``` -!!! tip "Why use ```echo -n```?" - Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why! +!!! tip "Why use `echo -n`?" +Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why! ## Serving @@ -114,7 +115,7 @@ kubectl create secret -n mqtt generic mqtt-credentials \ Now that we have a volume, a service, and a namespace, we can create a deployment for the mqtt pod. Note below the use of volume mounts, environment variables, as well as the secrets. !!! tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` πŸ‘ +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. 
This means that sponsors can launch any recipe with just a `git pull` and a `kubectl create -f *.yml` πŸ‘ ``` cat < /var/data/mqtt/mqtt.yml @@ -193,7 +194,7 @@ EOF kubectl create -f /var/data/mqtt/mqtt.yml ``` -Check that your deployment is running, with ```kubectl get pods -n mqtt```. After a minute or so, you should see a "Running" pod, as illustrated below: +Check that your deployment is running, with `kubectl get pods -n mqtt`. After a minute or so, you should see a "Running" pod, as illustrated below: ``` [davidy:~/Documents/Personal/Projects/mqtt-k8s] 130 % kubectl get pods -n mqtt @@ -202,6 +203,6 @@ mqtt-65f4d96945-bjj44 1/1 Running 0 5m [davidy:~/Documents/Personal/Projects/mqtt-k8s] % ``` -To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (```kubectl get nodes -o wide```) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :) +To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (`kubectl get nodes -o wide`) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :) -## Chef's Notes πŸ““ \ No newline at end of file +## Chef's Notes πŸ““ diff --git a/manuscript/recipes/munin.md b/manuscript/recipes/munin.md index 8ce556a..a0cdf39 100644 --- a/manuscript/recipes/munin.md +++ b/manuscript/recipes/munin.md @@ -12,13 +12,13 @@ Munin uses the excellent ​RRDTool (written by Tobi Oetiker) and the framework 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation ### Prepare target nodes -Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use ```apt-get install munin-node```, and on RHEL/CentOS, run ```yum install munin-node```. Remember to edit ```/etc/munin/munin-node.conf```, and set your node to allow the server to poll it, by adding ```cidr_allow x.x.x.x/x```. +Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use `apt-get install munin-node`, and on RHEL/CentOS, run `yum install munin-node`. Remember to edit `/etc/munin/munin-node.conf`, and set your node to allow the server to poll it, by adding `cidr_allow x.x.x.x/x`. On CentOS Atomic, of course, you can't install munin-node directly, but you can run it as a containerized instance. In this case, you can't use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using: @@ -33,7 +33,6 @@ docker run -d --name munin-node --restart=always \ funkypenguin/munin-node ``` - ### Setup data locations We'll need several directories to bind-mount into our container, so create them in /var/data/munin: @@ -46,7 +45,7 @@ mkdir -p {log,lib,run,cache} ### Prepare environment -Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the ```MUNIN_USER```, ```MUNIN_PASSWORD```, and ```NODES``` values: +Create /var/data/config/munin/munin.env, and populate with the following variables. 
Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the `MUNIN_USER`, `MUNIN_PASSWORD`, and `NODES` values: ``` # Use these if you plan to protect the webUI with an oauth_proxy @@ -74,8 +73,7 @@ SNMP_NODES="router1:10.0.0.254:9999" Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` πŸ‘ - +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` πŸ‘ ``` version: '3' @@ -84,14 +82,14 @@ services: munin: image: funkypenguin/munin-server - env_file: /var/data/config/munin/munin.env + env_file: /var/data/config/munin/munin.env networks: - internal volumes: - /var/data/munin/log:/var/log/munin - /var/data/munin/lib:/var/lib/munin - /var/data/munin/run:/var/run/munin - - /var/data/munin/cache:/var/cache/munin + - /var/data/munin/cache:/var/cache/munin proxy: image: funkypenguin/oauth2_proxy @@ -123,17 +121,16 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. - +Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. 
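The "unique static subnets" convention in the note above can even be automated with a little shell: given the third octets already claimed in the [networks list](/reference/networks/), pick the next free 172.16.x.0/24 for a new stack. This is just a sketch; the `used` list below is an illustrative sample, not the authoritative list:

```shell
#!/bin/sh
# Illustrative sample of third octets already assigned in the networks reference.
used="0 1 11 12 13 14 15 16 17 33 34 35 36 37 38 39"

# Walk 0-255 and stop at the first octet not present in the used list.
next=""
for i in $(seq 0 255); do
  case " $used " in
    *" $i "*) ;;                          # already taken, keep looking
    *) next="172.16.${i}.0/24"; break ;;
  esac
done
echo "$next"    # with the sample list above, prints 172.16.2.0/24
```

With your real list, you'd then drop the result into the `subnet` and `gateway` fields of the new stack's compose file.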
## Serving ### Launch Munin stack -Launch the Munin stack by running ```docker stack deploy munin -c ``` +Launch the Munin stack by running `docker stack deploy munin -c ` Log into your new instance at https://**YOUR-FQDN**, with user and password password you specified in munin.env above. ## Chef's Notes πŸ““ -1. If you wanted to expose the Munin UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the munin container. You'd also need to add the traefik_public network to the munin container. \ No newline at end of file +1. If you wanted to expose the Munin UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the munin container. You'd also need to add the traefik_public network to the munin container. diff --git a/manuscript/recipes/nextcloud.md b/manuscript/recipes/nextcloud.md index a4298c3..d47fb74 100644 --- a/manuscript/recipes/nextcloud.md +++ b/manuscript/recipes/nextcloud.md @@ -18,7 +18,7 @@ This recipe is based on the official NextCloud docker image, but includes seprat 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry pointing your NextCloud url (_nextcloud.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP +3. 
DNS entry pointing your NextCloud url (_nextcloud.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/openldap.md b/manuscript/recipes/openldap.md index 0838152..5eddb46 100644 --- a/manuscript/recipes/openldap.md +++ b/manuscript/recipes/openldap.md @@ -5,7 +5,7 @@ [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) -LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". Many of the recipes featured in the cookbook (_[NextCloud](/recipe/nextcloud/), [Kanboard](/recipe/kanboard/), [Gitlab](/recipe/gitlab/), etc_) offer LDAP integration. +LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". Many of the recipes featured in the cookbook (_[NextCloud](/recipes/nextcloud/), [Kanboard](/recipes/kanboard/), [Gitlab](/recipes/gitlab/), etc_) offer LDAP integration. ## Big deal, who cares? @@ -21,13 +21,13 @@ This recipe combines the raw power of OpenLDAP with the flexibility and features ## What's the takeaway? -What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](/recipe/nextcloud/), [Kanboard](/recipe/kanboard/), [Gitlab](/recipe/gitlab/), etc_), as well as with KeyCloak (_an upcoming recipe_), for **true** SSO. +What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](/recipes/nextcloud/), [Kanboard](/recipes/kanboard/), [Gitlab](/recipes/gitlab/), etc_), as well as with KeyCloak (_an upcoming recipe_), for **true** SSO. ## Ingredients 1. 
[Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname (_i.e. "lam.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname (_i.e. "lam.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/owntracks.md b/manuscript/recipes/owntracks.md index ad311f8..bae1790 100644 --- a/manuscript/recipes/owntracks.md +++ b/manuscript/recipes/owntracks.md @@ -13,7 +13,7 @@ Using a smartphone app, OwnTracks allows you to collect and analyse your own loc 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/photoprism.md b/manuscript/recipes/photoprism.md index 4bbdab0..c240237 100644 --- a/manuscript/recipes/photoprism.md +++ b/manuscript/recipes/photoprism.md @@ -10,8 +10,8 @@ hero: Your own private google photos ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/phpipam.md b/manuscript/recipes/phpipam.md index 638ad13..de963b3 100644 --- a/manuscript/recipes/phpipam.md +++ b/manuscript/recipes/phpipam.md @@ -8,7 +8,7 @@ phpIPAM fulfils a non-sexy, but important role - It helps you manage your IP add ## Why should you care about this? -You probably have a home network, with 20-30 IP addresses, for your family devices, your ![IoT devices](/recipe/home-assistant), your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server. +You probably have a home network, with 20-30 IP addresses, for your family devices, your [IoT devices](/recipes/homeassistant), your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server. You could simple keep track of all devices with leases in your DHCP server, but what happens if your (_hypothetical?_) Ubiquity Edge Router X crashes and burns due to lack of disk space, and you loose track of all your leases? Well, you have to start from scratch, is what! Enter phpIPAM. A tool designed to help home keeps as well as large organisations ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname (_i.e.
"phpipam.your-domain.com"_) you intend to use for phpIPAM, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IPIP ## Preparation @@ -36,6 +36,7 @@ mkdir /var/data/runtime/phpipam -p ### Prepare environment Create phpipam.env, and populate with the following variables + ``` # Setup for github, phpipam application OAUTH2_PROXY_CLIENT_ID= @@ -77,13 +78,12 @@ BACKUP_FREQUENCY=1d I usually protect my stacks using an [oauth proxy](/reference/oauth_proxy/) container in front of the app. This protects me from either accidentally exposing a platform to the world, or having a insecure platform accessed and abused. -In the case of phpIPAM, the oauth_proxy creates an additional complexity, since it passes the "Authorization" HTTP header to the phpIPAM container. phpIPAH then examines the header, determines that the provided username (_my email address associated with my oauth provider_) doesn't match a local user account, and denies me access without the opportunity to retry. +In the case of phpIPAM, the oauth*proxy creates an additional complexity, since it passes the "Authorization" HTTP header to the phpIPAM container. phpIPAH then examines the header, determines that the provided username (\_my email address associated with my oauth provider*) doesn't match a local user account, and denies me access without the opportunity to retry. The (_dirty_) solution I've come up with is to insert an Nginx instance in the path between the oauth_proxy and the phpIPAM container itself. Nginx can remove the authorization header, so that phpIPAM can prompt me to login with a web-based form. Create /var/data/phpipam/nginx.conf as follows: - ``` upstream app-upstream { server app:80; @@ -108,8 +108,7 @@ server { Create a docker swarm config file in docker-compose syntax (v3), something like this: !!! 
tip - I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` πŸ‘ - +I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` πŸ‘ ``` version: '3' @@ -193,18 +192,16 @@ networks: ``` !!! note - Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. - - +Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. ## Serving ### Launch phpIPAM stack -Launch the phpIPAM stack by running ```docker stack deploy phpipam -c ``` +Launch the phpIPAM stack by running `docker stack deploy phpipam -c ` Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen prompts to set your first user/password. ## Chef's Notes πŸ““ -1. If you wanted to expose the phpIPAM UI directly, you could remove the oauth2_proxy and the nginx services from the design, and move the traefik_public-related labels directly to the phpipam container. You'd also need to add the traefik_public network to the phpipam container. \ No newline at end of file +1. If you wanted to expose the phpIPAM UI directly, you could remove the oauth2_proxy and the nginx services from the design, and move the traefik_public-related labels directly to the phpipam container. You'd also need to add the traefik_public network to the phpipam container. 
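The Authorization-header-stripping trick described in this recipe boils down to a single nginx directive: setting the header to an empty value removes it from the proxied request. Here's a minimal, self-contained sketch of such a config (the `app-upstream`/`app:80` names match the recipe's fragment; the listen port and the temp-file path are illustrative):

```shell
#!/bin/sh
# Write a minimal Authorization-stripping proxy config (illustrative path).
cat > /tmp/phpipam-nginx.conf <<'EOF'
upstream app-upstream {
    server app:80;
}

server {
    listen 80;

    location / {
        # Blank out the Authorization header set by the oauth2_proxy,
        # so phpIPAM presents its own web-based login form instead.
        proxy_set_header Authorization "";
        proxy_pass http://app-upstream;
    }
}
EOF
```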
diff --git a/manuscript/recipes/plex.md b/manuscript/recipes/plex.md index 2cd3705..04978be 100644 --- a/manuscript/recipes/plex.md +++ b/manuscript/recipes/plex.md @@ -10,7 +10,7 @@ hero: A recipe to manage your Media πŸŽ₯ πŸ“Ί 🎡 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. A DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. A DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/privatebin.md b/manuscript/recipes/privatebin.md index 5f7cf06..8b6561d 100644 --- a/manuscript/recipes/privatebin.md +++ b/manuscript/recipes/privatebin.md @@ -7,8 +7,8 @@ PrivateBin is a minimalist, open source online pastebin where the server (can) h ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/realms.md b/manuscript/recipes/realms.md index 0da7589..438abae 100644 --- a/manuscript/recipes/realms.md +++ b/manuscript/recipes/realms.md @@ -23,8 +23,8 @@ Features include: ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. 
[Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/restic.md b/manuscript/recipes/restic.md index ab164b9..3c0d812 100644 --- a/manuscript/recipes/restic.md +++ b/manuscript/recipes/restic.md @@ -14,7 +14,7 @@ Restic is one of the more popular open-source backup solutions, and is often [co !!! summary "Ingredients" * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) - * [X] [Traefik](/ha-docker-swarm/traefik_public) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design + * [X] [Traefik](/ha-docker-swarm/traefik) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design * [X] Credentials for one of Restic's [supported repositories](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html) ## Preparation diff --git a/manuscript/recipes/swarmprom.md b/manuscript/recipes/swarmprom.md index 4eda76f..3097a88 100644 --- a/manuscript/recipes/swarmprom.md +++ b/manuscript/recipes/swarmprom.md @@ -22,8 +22,8 @@ I'd encourage you to spend some time reading https://github.com/stefanprodan/swa ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) on **17.09.0 or newer** (_doesn't work with CentOS Atomic, unfortunately_) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostnames you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. 
DNS entry for the hostnames you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/template.md b/manuscript/recipes/template.md index 91ea67b..04a6305 100644 --- a/manuscript/recipes/template.md +++ b/manuscript/recipes/template.md @@ -16,8 +16,8 @@ Details ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/wallabag.md b/manuscript/recipes/wallabag.md index 8e9105a..a0a47d2 100644 --- a/manuscript/recipes/wallabag.md +++ b/manuscript/recipes/wallabag.md @@ -16,7 +16,7 @@ There are plugins for [Chrome](https://chrome.google.com/webstore/detail/wallaba 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 2. [Traefik](/ha-docker-swarm/traefik) configured per design -3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/recipes/wetty.md b/manuscript/recipes/wetty.md index 750a90b..2d18808 100644 --- a/manuscript/recipes/wetty.md +++ b/manuscript/recipes/wetty.md @@ -19,8 +19,8 @@ Here are some other possible use cases: ## Ingredients 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) -2. [Traefik](/ha-docker-swarm/traefik_public) configured per design -3. 
DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP +2. [Traefik](/ha-docker-swarm/traefik) configured per design +3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP ## Preparation diff --git a/manuscript/reference/networks.md b/manuscript/reference/networks.md index 88ebda5..453dcb4 100644 --- a/manuscript/reference/networks.md +++ b/manuscript/reference/networks.md @@ -3,7 +3,7 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we will statically address each docker overlay network, and record the details below: | Network | Range | -|-----------------------------------------------------------------------------------------------------------------------|----------------| +| --------------------------------------------------------------------------------------------------------------------- | -------------- | | [Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) | _unspecified_ | | [Docker-cleanup](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/docker-swarm-mode/#setup-automated-cleanup) | 172.16.0.0/24 | | [Mail Server](https://geek-cookbook.funkypenguin.co.nz/recipes/mail/) | 172.16.1.0/24 | @@ -19,7 +19,7 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we | [Autopirate](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/) | 172.16.11.0/24 | | [Nextcloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/) | 172.16.12.0/24 | | [Portainer](https://geek-cookbook.funkypenguin.co.nz/recipes/portainer/) | 172.16.13.0/24 | -| [Home-Assistant](https://geek-cookbook.funkypenguin.co.nz/recipes/home-assistant/) | 172.16.14.0/24 | +| [Home Assistant](https://geek-cookbook.funkypenguin.co.nz/recipes/homeassistant/) | 172.16.14.0/24 | | [OwnTracks](https://geek-cookbook.funkypenguin.co.nz/recipes/owntracks/) | 172.16.15.0/24 | | 
[Plex](https://geek-cookbook.funkypenguin.co.nz/recipes/plex/) | 172.16.16.0/24 | | [Emby](https://geek-cookbook.funkypenguin.co.nz/recipes/emby/) | 172.16.17.0/24 | @@ -33,7 +33,7 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we | [Bookstack](https://geek-cookbook.funkypenguin.co.nz/recipes/bookstack/) | 172.16.33.0/24 | | [Swarmprom](https://geek-cookbook.funkypenguin.co.nz/recipes/swarmprom/) | 172.16.34.0/24 | | [Realms](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.35.0/24 | -| [ElkarBackup](https://geek-cookbook.funkypenguin.co.nz/recipes/elkarbackp/) | 172.16.36.0/24 | +| [ElkarBackup](https://geek-cookbook.funkypenguin.co.nz/recipes/elkarbackup/) | 172.16.36.0/24 | | [Mayan EDMS](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.37.0/24 | | [Shaarli](https://geek-cookbook.funkypenguin.co.nz/recipes/shaarli/) | 172.16.38.0/24 | | [OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/recipes/openldap/) | 172.16.39.0/24 |
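Since this table is maintained by hand, a quick shell check can confirm that no subnet has been assigned twice. The list below is a sample of the table's entries, not the full set:

```shell
#!/bin/sh
# Sample of the table's assignments; duplicates are detected via sort | uniq -d.
subnets="172.16.0.0/24 172.16.1.0/24 172.16.11.0/24 172.16.12.0/24
172.16.13.0/24 172.16.14.0/24 172.16.15.0/24 172.16.16.0/24
172.16.17.0/24 172.16.33.0/24 172.16.34.0/24 172.16.35.0/24"

# Print one subnet per line, then report any value appearing more than once.
dups=$(printf '%s\n' $subnets | sort | uniq -d)
[ -z "$dups" ] && echo "no duplicate subnets" || echo "duplicates: $dups"
```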