mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-13 17:56:26 +00:00

Fix Dead Links (#129)

This commit is contained in:
Thomas
2021-01-04 16:00:48 +13:00
committed by GitHub
parent 77184f5937
commit 6892542f9d
51 changed files with 354 additions and 361 deletions

View File

@@ -6,14 +6,13 @@ When dealing with large container (looking at you, GitLab!), this can result in
The solution is to run an official Docker registry container as a ["pull-through" cache, or "registry mirror"](https://docs.docker.com/registry/recipes/mirror/). By using our persistent storage for the registry cache, we can ensure we have a single copy of all the containers we've pulled at least once. After the first pull, any subsequent pulls from our nodes will use the cached version from our registry mirror. As a result, services are available more quickly when restarting container nodes, and we can be more aggressive about cleaning up unused containers on our nodes (more on this later).

The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Customize **your mirror FQDN** below, so that Traefik will generate the appropriate LetsEncrypt certificates for it, and make it available via HTTPS.
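To give a feel for the shape of that stack (_the full compose file appears under Preparation; this sketch is illustrative only, and the service name, paths and Traefik labels are assumptions rather than the recipe's exact file_):

```
version: "3"

services:
  registry-mirror:
    image: registry:2
    environment:
      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
    volumes:
      # config.yml is the registry image's default config location
      - /var/data/registry/registry-mirror-config.yml:/etc/docker/registry/config.yml:ro
      - /var/data/registry/registry-data:/var/lib/registry
    networks:
      - traefik_public
    deploy:
      labels:
        # Traefik (v1-style) labels so the mirror is served via HTTPS
        - traefik.frontend.rule=Host:<your mirror FQDN>
        - traefik.port=5000

networks:
  traefik_public:
    external: true
```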
## Ingredients

1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP

## Preparation
@@ -45,10 +44,10 @@ networks:
```

!!! note "Unencrypted registry"
    We create this registry without consideration for SSL, which will fail if we attempt to use the registry directly. However, we're going to use the HTTPS-proxied version via Traefik, leveraging Traefik to manage the LetsEncrypt certificates required.

Create /var/data/registry/registry-mirror-config.yml as follows:

```
version: 0.1
log:
@@ -78,7 +77,7 @@ proxy:
### Launch registry stack

Launch the registry stack by running `docker stack deploy registry -c <path-to-docker-compose.yml>`

### Enable registry mirror and experimental features
@@ -103,11 +102,12 @@ To:
```

Then restart docker by running:

```
systemctl restart docker-latest
```

!!! tip ""
    Note the extra comma required after "false" above
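For orientation (_the exact file and edit are elided from this diff hunk, so treat this as an illustrative assumption rather than the recipe's literal config_), enabling a registry mirror and experimental features in a Docker daemon's JSON config ends up looking something like the sketch below; the comma after `false` is the one the tip above refers to, since new keys are appended after it:

```
{
  "live-restore": false,
  "registry-mirrors": ["https://your-registry-mirror.example.com"],
  "experimental": true
}
```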
## Chef's notes 📓

View File

@@ -5,13 +5,13 @@ While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe
## Ingredients

!!! Summary
    Existing:

    * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully, with a [local user](/recipes/keycloak/create-user/) and an [OIDC client](/recipes/keycloak/setup-oidc-provider/)

    New:

    * [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP

## Preparation
@@ -19,23 +19,22 @@ While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe
Under normal OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feature of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you're securing many URLs though, explicitly listing them can be a PITA.

[@thomaseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "_auth host_" in your OIDC authentication, so that you can protect an unlimited number of DNS names (_with a common domain suffix_), without having to manually maintain a list.

#### How does it work?

Say you're protecting **radarr.example.com**. When you first browse to **https://radarr.example.com**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login (_KeyCloak, in this case_), but instructs the OIDC provider to redirect a successfully authenticated session **back** to **https://auth.example.com/_oauth**, rather than to **https://radarr.example.com/_oauth**.

When you successfully authenticate against the OIDC provider, you are redirected to the "_redirect_uri_" of https://auth.example.com. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (_cookies have a role to play here_). Traefik-forward-auth also knows the URL of your **original** request (_thanks to the X-Forwarded-Whatever header_). Traefik-forward-auth redirects you to your original destination, and everybody is happy.

This clever workaround only works under 2 conditions:

1. Your "auth host" has the same domain name as the hosts you're protecting (_i.e., auth.example.com protecting radarr.example.com_)
2. You explicitly tell traefik-forward-auth to use a cookie authenticating your **whole** domain (_i.e., example.com_)
### Setup environment

Create `/var/data/config/traefik/traefik-forward-auth.env` as follows (_change "master" if you created a different realm_):

```
CLIENT_ID=<your keycloak client name>
@@ -48,7 +47,7 @@ COOKIE_DOMAIN=<the root FQDN of your domain>
### Prepare the docker service config

This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/ha-docker-swarm/traefik/) recipe:

```
traefik-forward-auth:
@@ -82,21 +81,21 @@ If you're not confident that forward authentication is working, add a simple "wh
```

!!! tip
    I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
## Serving

### Launch

Redeploy traefik with `docker stack deploy traefik-app -c /var/data/traefik/traefik-app.yml`, to launch the traefik-forward-auth container.

### Test

Browse to https://whoami.example.com (_obviously, customized for your domain and having created a DNS record_), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page.
### Protect services

To protect any other service, ensure the service itself is exposed by Traefik (_if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself_). Add the following 3 labels:

```
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
@@ -111,12 +110,10 @@ And re-deploy your services :)
What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our KeyCloak OIDC provider, with minimal processing / handling overhead.

!!! summary "Summary"
    Created:

    * [X] Traefik-forward-auth configured to authenticate against KeyCloak

## Chef's Notes 📓

1. KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)

View File

@@ -1,17 +1,17 @@
# Design

Like the [Docker Swarm](/ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:

- **Highly-available** (_can tolerate the failure of a single component_)
- **Scalable** (_can add resource or capacity as required_)
- **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_)
- **Secure** (_access protected with LetsEncrypt certificates_)
- **Automated** (_requires minimal care and feeding_)

_Unlike_ the Docker Swarm design, the Kubernetes design is:

- **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_)
- **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_)

## Design Decisions
@@ -19,21 +19,21 @@ Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the K
This means that:

- The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
- Custom service elements specific to individual providers are avoided

**The simplest solution to achieve the desired result will be preferred**

This means that:

- Persistent volumes from the cloud provider are used for all persistent storage
- We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps (_see the brief example below_), rather than trying to engineer around the Kubernetes basic building blocks.
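As a brief, hypothetical illustration of that approach (_the names and values below are invented for this example, not part of any recipe_):

```
# Credentials go into a secret, rather than being baked into an image or env file
kubectl create secret generic db-credentials \
  --namespace nextcloud \
  --from-literal=MYSQL_PASSWORD=changeme

# Non-sensitive settings go into a configmap
kubectl create configmap nextcloud-config \
  --namespace nextcloud \
  --from-literal=TZ=Pacific/Auckland
```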
**Insofar as possible, the format of recipes will align with Docker Swarm**

This means that:

- We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery

## Security
@@ -41,12 +41,12 @@ Under this design, the only inbound connections we're permitting to our Kubernet
### Network Flows

- HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_)
- Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_)

### Authentication

- Other than when an SSL-served application provides a trusted level of authentication, or where the application requires public exposure, applications served via Traefik will be protected with an OAuth proxy.

## The challenges of external access
@@ -55,8 +55,9 @@ Because we're Cloude-Native now, it's complex to get traffic **into** our cluste
1. **HostIP**: Map a port on the host to a service. This is analogous to Docker's port exposure, but lacking in that it restricts us to one host port per-container, and it's not possible to anticipate _which_ of your Kubernetes hosts is running a given container. Kubernetes does not have Docker Swarm's "routing mesh", allowing for simple load-balancing of incoming connections.
2. **LoadBalancer**: Purchase a "loadbalancer" per-service from your cloud provider. While this is the simplest way to assure a fixed IP and port combination will always exist for your service, it has 2 significant limitations:
    1. Cost is prohibitive, at roughly $US10/month per port
    2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_"
3. **NodePort**: Expose our service as a port (_between 30000-32767_) on the host which happens to be running the service. This is challenging because you might want to expose port 443, but that's not possible with NodePort (_see the illustrative service definition below_).
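For illustration (_the names below are hypothetical, not part of this recipe_), a NodePort service definition looks something like the following - note that `nodePort` must fall within 30000-32767, which is exactly why you can't expose port 443 directly this way:

```
apiVersion: v1
kind: Service
metadata:
  name: webserver
  namespace: example
spec:
  type: NodePort
  selector:
    app: webserver
  ports:
    - port: 443         # port exposed inside the cluster
      targetPort: 8443  # port the container listens on
      nodePort: 30443   # port exposed on every node (must be 30000-32767)
```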
@@ -92,7 +93,7 @@ The phone-home container calls the webhook, and tells HAProxy to listen on port
### 2 : The Traefik Ingress

In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](/ha-docker-swarm/traefik/).

What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity (_sketched below_), so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort port number.
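For a sense of what that affinity looks like in a pod spec, here's a minimal sketch - the `app: traefik` label and the manifest it would live in are assumptions for illustration, not the recipe's actual deployment:

```
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: traefik                        # assumes the Traefik pod is labelled like this
          topologyKey: kubernetes.io/hostname     # "same host" means same node
```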
@@ -120,10 +121,10 @@ Finally, the DNS for all externally-accessible services is pointed to the IP of
Still with me? Good. Move on to creating your cluster!

- [Start](/kubernetes/start/) - Why Kubernetes?
- Design (this page) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

View File

@@ -4,7 +4,7 @@ One of the issues I encountered early on in migrating my Docker Swarm workloads
There were several complications with the "traditional" mechanisms of providing a load-balanced ingress, not the least of which was cost. I also found that even if I paid my cloud provider (_Google_) for a load-balancer Kubernetes service, this service required a unique IP per exposed port, which was incompatible with my mining pool empire (_mining pools need to expose multiple ports on the same DNS name_).

See further examination of the problem and possible solutions in the [Kubernetes design](/kubernetes/design/#the-challenges-of-external-access) page.

This recipe details a simple design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort.
@@ -13,10 +13,9 @@ This recipe details a simple design to permit the exposure of as many ports as y
## Ingredients

1. [Kubernetes cluster](/kubernetes/cluster/)
2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_)
3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_)

## Preparation

### Summary
@@ -24,7 +23,7 @@ This recipe details a simple design to permit the exposure of as many ports as y
### Create LetsEncrypt certificate

!!! warning
    Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate, would you? Of course not, I didn't think so!

Use whatever method you prefer to generate (and later, renew) your LetsEncrypt cert. The example below uses the CertBot docker image for CloudFlare DNS validation, since that's what I've used elsewhere.
@@ -38,13 +37,14 @@ dns_cloudflare_api_key=supersekritnevergonnatellyou
```

I request my cert by running:

```
cd /etc/webhook/
docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d '*.funkypenguin.co.nz'
```
!!! question
    Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course!

I add the following as a cron command to renew my certs every day:
@@ -52,15 +52,15 @@ I add the following as a cron command to renew my certs every day:
```
cd /etc/webhook && docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```

Once you've confirmed you've got a valid LetsEncrypt certificate stored in `/etc/webhook/letsencrypt/live/<your domain>/fullchain.pem`, proceed to the next step..

### Install webhook

We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤ ya, Debian!_), webhook and its associated systemd config can be installed by running `apt-get install webhook`.

### Create webhook config

We'll create a single webhook, by creating `/etc/webhook/hooks.json` as follows. Choose a nice secure random string for your MY_TOKEN value!
```
mkdir /etc/webhook
@@ -113,14 +113,14 @@ EOF
```

!!! note
    Note that to avoid any bozo from calling our webhook, we're matching on a token header in the request called `X-Funkypenguin-Token`. Webhook will **ignore** any request which doesn't include a matching token in the request header.
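The hooks.json itself is elided from this diff, but an [adnanh/webhook](https://github.com/adnanh/webhook) definition with this kind of token match generally looks like the sketch below - the hook `id` and token value are illustrative assumptions, while the script path matches the `update-haproxy.sh` created further down:

```
[
  {
    "id": "update-haproxy",
    "execute-command": "/etc/webhook/update-haproxy.sh",
    "command-working-directory": "/etc/webhook",
    "trigger-rule": {
      "match": {
        "type": "value",
        "value": "MY_TOKEN",
        "parameter": {
          "source": "header",
          "name": "X-Funkypenguin-Token"
        }
      }
    }
  }
]
```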
### Update systemd for webhook

!!! note
    This section is particular to Debian Stretch and its webhook package. If you're using another OS for your VM, just ensure that you can start webhook with a config similar to the one illustrated below.

Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_), I ran `systemctl edit webhook`, and pasted in the following:
```
[Service]
@@ -129,7 +129,7 @@ ExecStart=
ExecStart=/usr/bin/webhook -hooks /etc/webhook/hooks.json -verbose -secure -cert /etc/webhook/letsencrypt/live/funkypenguin.co.nz/fullchain.pem -key /etc/webhook/letsencrypt/live/funkypenguin.co.nz/privkey.pem
```

Then I restarted webhook by running `systemctl enable webhook && systemctl restart webhook`. I watched the subsequent logs by running `journalctl -u webhook -f`.
### Create /etc/webhook/update-haproxy.sh
@@ -210,7 +210,7 @@ fi
### Create /etc/webhook/haproxy/global

Create `/etc/webhook/haproxy/global` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config:

```
global
@@ -260,7 +260,7 @@ Whew! We now have all the components of our automated load-balancing solution in
If you don't see the above, then check the following (_a manual test is sketched below_):

1. Does the webhook verbose log (`journalctl -u webhook -f`) complain about invalid arguments or missing files?
2. Is port 9000 open to the internet on your VM?
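A quick way to exercise the webhook by hand is a curl call with the token header - the hostname, hook id and token below are the illustrative values from the sketch earlier, so substitute your own:

```
curl -i \
  -H "X-Funkypenguin-Token: MY_TOKEN" \
  https://your-vm.example.com:9000/hooks/update-haproxy
```

A matching token should trigger the hook (and its HAProxy update), while a request with a missing or wrong token is ignored, as described in the note above.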
### Apply to pods
@@ -315,19 +315,17 @@ Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command ou
<HAProxy restarts>
```

## Move on..

Still with me? Good. Move on to setting up an ingress SSL terminating proxy with Traefik..

- [Start](/kubernetes/start/) - Why Kubernetes?
- [Design](/kubernetes/design/) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- Load Balancer (this page) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

## Chef's Notes

View File

@@ -2,7 +2,7 @@
Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safe-guarding your data from accidental loss, as well as malicious impact.

Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.

Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution...
@@ -23,7 +23,7 @@ This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/
If you're running GKE, run the following to create a RoleBinding, allowing your user to grant rights-it-doesn't-currently-have to the service account responsible for creating the snapshots:

```
kubectl create clusterrolebinding your-user-cluster-admin-binding \
  --clusterrole=cluster-admin --user=<your user@yourdomain>
```

!!! question
@@ -33,8 +33,10 @@ If you're running GKE, run the following to create a RoleBinding, allowing your
If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s_snapshots to see your PVs and friends:

```
kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml
```

## Serving
@@ -44,24 +46,25 @@ kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/maste
Ready? Run the following to create a deployment into the kube-system namespace:

```
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-snapshots
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-snapshots
    spec:
      containers:
        - name: k8s-snapshots
          image: elsdoerfer/k8s-snapshots:v2.0
EOF
```

Confirm your pod is running and happy by running `kubectl get pods -n kube-system`, and `kubectl -n kube-system logs k8s-snapshots<tab-to-auto-complete>`
@@ -71,7 +74,8 @@ k8s-snapshots relies on annotations to tell it how frequently to snapshot your P
From the k8s-snapshots README:

```
The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by its and the parent generation's delta.
For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good.
@@ -79,38 +83,44 @@ For example, given the deltas PT1H P1D P7D, the first generation will consist of
The most recent backup is always kept.
The first delta is the backup interval.
```
To add the annotation to an existing PV, run something like this:

```
kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \
  '{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
```

To add the annotation to a _new_ PV, add the following annotation to your **PVC**:

```
backup.kubernetes.io/deltas: PT1H P2D P30D P180D
```
Here's an example of the PVC for the UniFi recipe, which includes 7 daily snapshots of the PV:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: controller-volumeclaim
  namespace: unifi
  annotations:
    backup.kubernetes.io/deltas: P1D P7D
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
And here's what my snapshot list looks like after a few days:
@@ -122,40 +132,43 @@ If you're running traditional compute instances with your cloud provider (I do t
To do so, first create a custom resource, `SnapshotRule`:

```
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: snapshotrules.k8s-snapshots.elsdoerfer.com
spec:
  group: k8s-snapshots.elsdoerfer.com
  version: v1
  scope: Namespaced
  names:
    plural: snapshotrules
    singular: snapshotrule
    kind: SnapshotRule
    shortNames:
      - sr
EOF
```
Then identify the volume ID of your volume, and create an appropriate `SnapshotRule`:

```
cat <<EOF | kubectl apply -f -
apiVersion: "k8s-snapshots.elsdoerfer.com/v1"
kind: SnapshotRule
metadata:
  name: haproxy-badass-loadbalancer
spec:
  deltas: P1D P7D
  backend: google
  disk:
    name: haproxy2
    zone: australia-southeast1-a
EOF
```
!!! note
@@ -178,3 +191,4 @@ Still with me? Good. Move on to understanding Helm charts...
## Chef's Notes

1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).

View File

@@ -24,27 +24,27 @@ Yes, but that's a necessary sacrifice for the maturity, power and flexibility it
Let's talk some definitions. Kubernetes.io provides a [glossary](https://kubernetes.io/docs/reference/glossary/?fundamental=true). My definitions are below:

- **Node** : A compute instance which runs docker containers, managed by a cluster master.
- **Cluster** : One or more "worker nodes" which run containers. Very similar to a Docker Swarm node. In most cloud provider deployments, the [master node for your cluster is provided free of charge](https://www.sdxcentral.com/articles/news/google-eliminates-gke-management-fees-kubernetes-clusters/2017/11/), but you don't get to access it.
- **Pod** : A collection of one or more containers. If a pod runs multiple containers, these containers always run on the same node.
- **Deployment** : A definition of a desired state. I.e., "I want a pod with containers A and B running". The Kubernetes master then ensures that any changes necessary to maintain the state are taken. (_I.e., if a pod crashes, but is supposed to be running, a new pod will be started_)
- **Service** : Unlike Docker Swarm, service discovery is not _built in_ to Kubernetes. For your pods to discover each other (say, to have "webserver" talk to "database"), you create a service for each pod, and refer to these services when you want your containers (_in pods_) to talk to each other. Complicated, yes, but the abstraction allows you to do powerful things, like auto-scale-up a bunch of database "pods" behind a service called "database", or perform a rolling container image upgrade with zero impact.
- **External access** : Services not only allow pods to discover each other, but they're also the mechanism through which the outside world can talk to a container. At the simplest level, this is akin to exposing a container port on a docker host.
- **Ingress** : When mapping ports to applications is inadequate (think virtual web hosts), an ingress is a sort of "inbound router" which can receive requests on one port (i.e., HTTPS), and forward them to a variety of internal pods, based on things like VHOST, etc. For us, this is the functional equivalent of what Traefik does in Docker Swarm. In fact, we use a Traefik Ingress in Kubernetes to accomplish the same.
- **Persistent Volume** : A virtual disk which is attached to a pod, storing persistent data. Meets the requirement for shared storage from Docker Swarm. I.e., if a persistent volume (PV) is bound to a pod, and the pod dies and is recreated, or gets upgraded to a new image, the PV (and its data) is re-bound to the new container. PVs can be "claimed" in a YAML definition, so that your Kubernetes provider will auto-create a PV when you launch your pod. PVs can be snapshotted.
- **Namespace** : An abstraction to separate a collection of pods, services, ingresses, etc. A "virtual cluster within a cluster". Can be used for security, or simplicity. For example, since we don't have individual docker stacks anymore, if you commonly name your database container "db", and you want to deploy two applications which both use a database container, how will you name your services? Use namespaces to keep each application ("nextcloud" vs "kanboard") separate. Namespaces also allow you to allocate resource **limits** to the aggregate of containers in a namespace, so you could, for example, limit the "nextcloud" namespace to 2.3 CPUs and 1200MB RAM (_see the example below_).
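A minimal sketch of that last namespace limit (_illustrative only; the numbers are simply the ones from the example above_):

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: nextcloud-quota
  namespace: nextcloud
spec:
  hard:
    limits.cpu: "2300m"     # 2.3 CPUs, expressed in millicores
    limits.memory: 1200Mi
```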
## Mm.. maaaaybe, how do I start?

If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get $300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/cluster/).

If you're the learn-by-watching type, just search for "Kubernetes introduction video". There's a **lot** of great content available.
@@ -58,10 +58,10 @@ I'd love for your [feedback](/support/) on the Kubernetes recipes, as well as su
Still with me? Good. Move on to reviewing the design elements...

- Start (this page) - Why Kubernetes?
- [Design](/kubernetes/design/) - How does it fit together?
- [Cluster](/kubernetes/cluster/) - Setup a basic cluster
- [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
- [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
- [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
- [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

View File

@@ -1,5 +1,5 @@
!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Heimdall
@@ -15,7 +15,7 @@ Heimdall provides a single URL to manage access to all of your autopirate tools,
To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
heimdall:
  image: linuxserver/heimdall:latest
  env_file: /var/data/config/autopirate/heimdall.env
@@ -50,31 +50,30 @@ To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include th
```

!!! tip
    I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍

## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
- [SABnzbd](/recipes/autopirate/sabnzbd.md)
- [NZBGet](/recipes/autopirate/nzbget.md)
- [RTorrent](/recipes/autopirate/rtorrent/)
- [Sonarr](/recipes/autopirate/sonarr/)
- [Radarr](/recipes/autopirate/radarr/)
- [Mylar](/recipes/autopirate/mylar/)
- [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
- [Headphones](/recipes/autopirate/headphones)
- [Lidarr](/recipes/autopirate/lidarr/)
- [NZBHydra](/recipes/autopirate/nzbhydra/)
- [NZBHydra2](/recipes/autopirate/nzbhydra2/)
- [Ombi](/recipes/autopirate/ombi/)
- [Jackett](/recipes/autopirate/jackett/)
- Heimdall (this page)
- [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

View File

@@ -1,5 +1,5 @@
!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

# Jackett
@@ -13,7 +13,7 @@ This allows for getting recent uploads (like RSS) and performing searches. Jacke
To include Jackett in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:

```
jackett:
  image: linuxserver/jackett:latest
  env_file: /var/data/config/autopirate/jackett.env
@@ -44,31 +44,30 @@ jackett_proxy:
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
```

!!! tip
    I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
## Assemble more tools..

Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:

- [SABnzbd](/recipes/autopirate/sabnzbd.md)
- [NZBGet](/recipes/autopirate/nzbget.md)
- [RTorrent](/recipes/autopirate/rtorrent/)
- [Sonarr](/recipes/autopirate/sonarr/)
- [Radarr](/recipes/autopirate/radarr/)
- [Mylar](/recipes/autopirate/mylar/)
- [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
- [Headphones](/recipes/autopirate/headphones)
- [Lidarr](/recipes/autopirate/lidarr/)
- [NZBHydra](/recipes/autopirate/nzbhydra/)
- [NZBHydra2](/recipes/autopirate/nzbhydra2/)
- [Ombi](/recipes/autopirate/ombi/)
- Jackett (this page)
- [Heimdall](/recipes/autopirate/heimdall/)
- [End](/recipes/autopirate/end/) (launch the stack)

## Chef's Notes 📓

View File

@@ -1,39 +1,37 @@
!!! warning !!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity. This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBHydra 2 # NZBHydra 2
[NZBHydra 2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato. [NZBHydra 2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato.
!!! note !!! note
NZBHydra 2 is a complete rewrite of [NZBHydra (1)](/recipes/autopirate/nzbhybra/). It's currently in Beta. It works mostly fine but some functions might not be completely done and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively. NZBHydra 2 is a complete rewrite of [NZBHydra (1)](/recipes/autopirate/nzbhydra/). It's currently in Beta. It works mostly fine but some functions might not be completely done and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively.
![NZBHydra Screenshot](../../images/nzbhydra2.png) ![NZBHydra Screenshot](../../images/nzbhydra2.png)
Features include: Features include:
* Searches Anizb, BinSearch, NZBIndex and any newznab compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates and returns them all in one place - Searches Anizb, BinSearch, NZBIndex and any newznab compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates and returns them all in one place
* Add results to [NZBGet](/recipes/autopirate/nzbget/) or [SABnzbd](/recipes/autopirate/sabnzbd/) - Add results to [NZBGet](/recipes/autopirate/nzbget/) or [SABnzbd](/recipes/autopirate/sabnzbd/)
* Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them - Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them
* Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID or if no results were found - Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID or if no results were found
* Compatible with [Sonarr](/recipes/autopirate/sonarr/), [Radarr](/recipes/autopirate/radarr/), [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](/recipes/autopirate/mylar/), [Lazy Librarian](/recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](/recipes/autopirate/jackett/), Watcher, etc. - Compatible with [Sonarr](/recipes/autopirate/sonarr/), [Radarr](/recipes/autopirate/radarr/), [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](/recipes/autopirate/mylar/), [Lazy Librarian](/recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](/recipes/autopirate/jackett/), Watcher, etc.
* Search and download history and extensive stats. E.g. indexer response times, download shares, NZB age, etc. - Search and download history and extensive stats. E.g. indexer response times, download shares, NZB age, etc.
* Authentication and multi-user support - Authentication and multi-user support
* Automatic update of NZB download status by querying configured downloaders - Automatic update of NZB download status by querying configured downloaders
* RSS support with configurable cache times - RSS support with configurable cache times
* Torrent support (_Although I prefer [Jackett](/recipes/autopirate/jackett/) for this_): - Torrent support (_Although I prefer [Jackett](/recipes/autopirate/jackett/) for this_):
* For GUI searches, allowing you to download torrents to a blackhole folder - For GUI searches, allowing you to download torrents to a blackhole folder
* A separate Torznab compatible endpoint for API requests, allowing you to merge multiple trackers - A separate Torznab compatible endpoint for API requests, allowing you to merge multiple trackers
* Extensive configurability - Extensive configurability
* Migration of database and settings from v1 - Migration of database and settings from v1
## Inclusion into AutoPirate ## Inclusion into AutoPirate
To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file: To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```` ```
nzbhydra2: nzbhydra2:
image: linuxserver/hydra2:latest image: linuxserver/hydra2:latest
env_file : /var/data/config/autopirate/nzbhydra2.env env_file : /var/data/config/autopirate/nzbhydra2.env
@@ -63,31 +61,30 @@ nzbhydra2_proxy:
-email-domain=example.com -email-domain=example.com
-provider=github -provider=github
-authenticated-emails-file=/authenticated-emails.txt -authenticated-emails-file=/authenticated-emails.txt
```` ```
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
## Assemble more tools.. ## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section: Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md) - [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md) - [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/) - [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/) - [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/) - [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/) - [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/) - [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/) - [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/) - [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/) - [NZBHydra](/recipes/autopirate/nzbhydra/)
* NZBHydra2 (this page) - NZBHydra2 (this page)
* [Ombi](/recipes/autopirate/ombi/) - [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/) - [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/) - [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack) - [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓 ## Chef's Notes 📓

View File

@@ -26,8 +26,8 @@ Bitwarden is a free and open source password management solution for individuals
Existing: Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -12,7 +12,7 @@ I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oaut
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik/) configured per design 2. [Traefik](/ha-docker-swarm/traefik/) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -25,7 +25,7 @@ Support for editing eBook metadata and deleting eBooks from Calibre library
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -14,8 +14,8 @@ It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a we
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for Collabora Online, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for Collabora Online, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
4. [NextCloud](/recipes/nextcloud/) installed and operational 4. [NextCloud](/recipes/nextcloud/) installed and operational
5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm 5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm

View File

@@ -22,7 +22,7 @@ Similar to the other backup options in the Cookbook, we can use Duplicati to bac
!!! summary "Ingredients" !!! summary "Ingredients"
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
* [X] [Traefik](/ha-docker-swarm/traefik_public) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design * [X] [Traefik](/ha-docker-swarm/traefik) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design
* [X] Credentials for one of the Duplicati's supported upload destinations * [X] Credentials for one of the Duplicati's supported upload destinations
## Preparation ## Preparation

View File

@@ -6,7 +6,6 @@ Intro
![Duplicity Screenshot](../images/duplicity.png) ![Duplicity Screenshot](../images/duplicity.png)
[Duplicity](http://duplicity.nongnu.org/) backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server. [Duplicity](http://duplicity.nongnu.org/) backs directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.
So what does this mean for our stack? It means we can leverage Duplicity to backup all our data-at-rest to a wide variety of cloud providers, including, but not limited to: So what does this mean for our stack? It means we can leverage Duplicity to backup all our data-at-rest to a wide variety of cloud providers, including, but not limited to:
@@ -25,7 +24,6 @@ So what does this mean for our stack? It means we can leverage Duplicity to back
- ssh/scp - ssh/scp
- SwiftStack - SwiftStack
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
@@ -35,7 +33,7 @@ So what does this mean for our stack? It means we can leverage Duplicity to back
### Setup data locations ### Setup data locations
We'll need a folder to store a docker-compose .yml file, and an associated .env file. If you're following my filesystem layout, create `/var/data/config/duplicity` (*for the config*), and `/var/data/duplicity` (*for the metadata*) as follows: We'll need a folder to store a docker-compose .yml file, and an associated .env file. If you're following my filesystem layout, create `/var/data/config/duplicity` (_for the config_), and `/var/data/duplicity` (_for the metadata_) as follows:
``` ```
mkdir /var/data/config/duplicity mkdir /var/data/config/duplicity
@@ -45,17 +43,17 @@ cd /var/data/config/duplicity
### (Optional) Create Google Cloud Storage bucket ### (Optional) Create Google Cloud Storage bucket
I didn't already have an archival/backup provider, so I chose Google Cloud "cloud" storage for the low price-point - 0.7 cents per GB/month (_Plus you [start with $300 credit](https://cloud.google.com/free/) even when signing up for the free tier_). You can use any destination supported by [Duplicity's URL scheme though](http://duplicity.nongnu.org/duplicity.1.html#sect7), just make sure you specify the necessary [environment variables](http://duplicity.nongnu.org/duplicity.1.html#sect6). I didn't already have an archival/backup provider, so I chose Google Cloud "cloud" storage for the low price-point - 0.7 cents per GB/month (_Plus you [start with \$300 credit](https://cloud.google.com/free/) even when signing up for the free tier_). You can use any destination supported by [Duplicity's URL scheme though](http://duplicity.nongnu.org/duplicity.1.html#sect7), just make sure you specify the necessary [environment variables](http://duplicity.nongnu.org/duplicity.1.html#sect6).
1. [Sign up](https://cloud.google.com/storage/docs/getting-started-console), create an empty project, enable billing, and create a bucket. Give your bucket a unique name, example "**jack-and-jills-bucket**" (_it's unique across the entire Google Cloud_) 1. [Sign up](https://cloud.google.com/storage/docs/getting-started-console), create an empty project, enable billing, and create a bucket. Give your bucket a unique name, example "**jack-and-jills-bucket**" (_it's unique across the entire Google Cloud_)
2. Under "Storage" section > "[Settings](https://console.cloud.google.com/project/_/storage/settings)" > "Interoperability" tab > click "Enable interoperable access" and then "Create a new key" button and note both Access Key and Secret. 2. Under "Storage" section > "[Settings](https://console.cloud.google.com/project/_/storage/settings)" > "Interoperability" tab > click "Enable interoperable access" and then "Create a new key" button and note both Access Key and Secret.
### Prepare environment ### Prepare environment
1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe**, without it you won't be able to restore! 1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe**, without it you won't be able to restore!
2. Seriously, **save**. **it**. **somewhere**. **safe**. 2. Seriously, **save**. **it**. **somewhere**. **safe**.
3. Create duplicity.env, and populate with the following variables 3. Create duplicity.env, and populate with the following variables
``` ```
SRC=/var/data/ SRC=/var/data/
DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories
@@ -68,7 +66,7 @@ PASSPHRASE=<YOUR CHOSEN PASSPHRASE>
``` ```
!!! note !!! note
See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above. See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above.
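The duplicity.env excerpt above is truncated by the diff hunk. If you went with the optional Google Cloud Storage bucket, duplicity's boto backend also expects the "Interoperability" credentials you noted earlier - a minimal sketch (variable names per duplicity's own documentation, values obviously illustrative):

```
# Hypothetical additions to duplicity.env for a gs:// destination
GS_ACCESS_KEY_ID=<ACCESS KEY FROM THE INTEROPERABILITY TAB>
GS_SECRET_ACCESS_KEY=<SECRET FROM THE INTEROPERABILITY TAB>
```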
### Run a test backup ### Run a test backup
@@ -88,9 +86,9 @@ You should see some activity, with a summary of bytes transferred at the end.
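The actual backup invocation is elided by the hunk above. As a reference only, a one-off run might look something like this sketch, assuming the same tecnativa/duplicity image and duplicity.env used in the verification steps below:

```
# Sketch only - SRC and DST come from duplicity.env, mirroring the examples below
docker run --env-file duplicity.env -it --rm \
  -v /var/data:/var/data:ro \
  tecnativa/duplicity \
  duplicity \$SRC \$DST
```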
Repeat after me: "If you don't verify your backup, **it's not a backup**". Repeat after me: "If you don't verify your backup, **it's not a backup**".
!!! warning !!! warning
Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data. Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data.
Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/recipie/traefik/), since this is likely to exist for every reader_). Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/ha-docker-swarm/traefik/), since this is likely to exist for every reader_).
``` ```
docker run --env-file duplicity.env -it --rm \ docker run --env-file duplicity.env -it --rm \
@@ -100,6 +98,7 @@ docker run --env-file duplicity.env -it --rm \
duplicity list-current-files \ duplicity list-current-files \
\$DST | grep traefik.yml \$DST | grep traefik.yml
``` ```
Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_) Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_)
``` ```
@@ -114,14 +113,12 @@ tecnativa/duplicity duplicity restore \
Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data. Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data.
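The restore command above is only partially visible in this hunk. For reference, a complete invocation might look like the following sketch (the --file-to-restore path is illustrative - it's relative to your SRC - and the target matches the traefik-restored.yml example above):

```
# Sketch only - restores a single file into the container's /tmp,
# which is bind-mounted from /var/data/duplicity/tmp on the host
docker run --env-file duplicity.env -it --rm \
  -v /var/data/duplicity/tmp:/tmp \
  tecnativa/duplicity duplicity restore \
  --file-to-restore <relative/path/to/traefik.yml> \
  \$DST /tmp/traefik-restored.yml
```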
### Setup Docker Swarm ### Setup Docker Swarm
Now that we have confidence in our backup/restore process, let's automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this: Now that we have confidence in our backup/restore process, let's automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
``` ```
version: "3" version: "3"
@@ -148,19 +145,17 @@ networks:
``` ```
!!! note !!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving ## Serving
### Launch Duplicity stack ### Launch Duplicity stack
Launch the Duplicity stack by running ```docker stack deploy duplicity -c <path-to-docker-compose.yml>``` Launch the Duplicity stack by running `docker stack deploy duplicity -c <path-to-docker-compose.yml>`
Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination. Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination.
## Chef's Notes 📓 ## Chef's Notes 📓
1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs. 1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env 2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to duplicity.env

View File

@@ -20,8 +20,8 @@ ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -10,7 +10,7 @@ I've started experimenting with Emby as an alternative to Plex, because of the a
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -12,8 +12,8 @@ hero: Ghost - A recipe for beautiful online publication.
Existing: Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -7,12 +7,12 @@ While a runner isn't strictly required to use GitLab, if you want to do CI, you'
## Ingredients ## Ingredients
!!! summary "Ingredients" !!! summary "Ingredients"
Existing: Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
4. [X] [GitLab](/ha-docker-swarm/gitlab) installation (see previous recipe) 4. [X] [GitLab](/recipes/gitlab) installation (see previous recipe)
## Preparation ## Preparation
@@ -32,7 +32,7 @@ mkdir -p {runners/1,runners/2}
Create a docker swarm config file in docker-compose syntax (v3), something like this: Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
``` ```
version: '3' version: '3'
@@ -60,10 +60,9 @@ networks:
- subnet: 172.16.23.0/24 - subnet: 172.16.23.0/24
``` ```
### Configure runners ### Configure runners
From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute ```gitlab-runner register``` to interactively generate config.toml. From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute `gitlab-runner register` to interactively generate config.toml.
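If you prefer the interactive route, a sketch of the `docker exec` approach (the name filter below is illustrative - check `docker ps` for your actual runner container names):

```
# Sketch only - register the first matching runner container interactively
docker exec -it $(docker ps -q -f name=gitlab-runner | head -1) gitlab-runner register
```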
Sample runner config.toml: Sample runner config.toml:
@@ -90,11 +89,11 @@ check_interval = 0
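The sample itself is elided by the hunk above; as a sketch only, a registered runner's config.toml typically ends up shaped like this (URL, token and executor details are illustrative - yours are generated by `gitlab-runner register`):

```
concurrent = 1
check_interval = 0

[[runners]]
  name = "runner1"
  url = "https://gitlab.example.com/"
  token = "<TOKEN ISSUED BY YOUR GITLAB INSTANCE>"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = false
    volumes = ["/cache"]
```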
### Launch runners ### Launch runners
Launch the GitLab runner stack by running ```docker stack deploy gitlab-runner -c <path-to-docker-compose.yml>``` Launch the GitLab runner stack by running `docker stack deploy gitlab-runner -c <path-to-docker-compose.yml>`
Log into your GitLab instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env. Log into your GitLab instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
## Chef's Notes 📓 ## Chef's Notes 📓
1. You'll note that I setup 2 runners. One is locked to a single project (*this cookbook build*), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case. 1. You'll note that I setup 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (*and GitLab starts **sooo** slowly!*), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem. 2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

View File

@@ -12,8 +12,8 @@ Docker does maintain an [official "Omnibus" container](https://docs.gitlab.com/o
Existing: Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -37,8 +37,8 @@ Gollum meets all these requirements, and as an added bonus, is extremely fast an
Existing: Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -10,7 +10,7 @@ This recipe combines the [extensibility](https://home-assistant.io/components/)
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -1,15 +1,15 @@
# iBeacons with Home Assistant # iBeacons with Home Assistant
!!! warning !!! warning
This is not a complete recipe - it's an optional addition to the [HomeAssistant](/recipes/homeassistant/) "recipe", since it only applies to a subset of users This is not a complete recipe - it's an optional addition to the [HomeAssistant](/recipes/homeassistant/) "recipe", since it only applies to a subset of users
One of the most useful features of Home Assistant is location awareness. I don't care if someone opens my office door when I'm home, but you bet I care about it (_and want to be notified_) if I'm away! One of the most useful features of Home Assistant is location awareness. I don't care if someone opens my office door when I'm home, but you bet I care about it (_and want to be notified_) if I'm away!
## Ingredients ## Ingredients
1. [HomeAssistant](/recipes/home-assistant/) per recipe 1. [HomeAssistant](/recipes/homeassistant/) per recipe
2. iBeacon(s) - This recipe is for https://s.click.aliexpress.com/e/bzyLCnAp 2. iBeacon(s) - This recipe is for https://s.click.aliexpress.com/e/bzyLCnAp
4. [LightBlue Explorer](https://itunes.apple.com/nz/app/lightblue-explorer/id557428110?mt=8) 3. [LightBlue Explorer](https://itunes.apple.com/nz/app/lightblue-explorer/id557428110?mt=8)
## Preparation ## Preparation

View File

@@ -14,8 +14,8 @@ Great power, right? A client (_yes, you can [hire](https://www.funkypenguin.co.n
Existing: Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -10,7 +10,7 @@ If it looks very similar to Emby, that's because it started as a fork of it, but it
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -26,7 +26,7 @@ Features include:
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry pointing your Kanboard URL (_kanboard.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry pointing your Kanboard URL (_kanboard.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -1,9 +1,9 @@
# KeyCloak # KeyCloak
[KeyCloak](https://www.keycloak.org/) is "*an open source identity and access management solution*". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak's OpenID provider can be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipe/nzbget/) with an extra layer of authentication. [KeyCloak](https://www.keycloak.org/) is "_an open source identity and access management solution_". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak's OpenID provider can be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipes/autopirate/nzbget/) with an extra layer of authentication.
!!! important !!! important
Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys! Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
@@ -12,10 +12,10 @@
## Ingredients ## Ingredients
!!! Summary !!! Summary
Existing: Existing:
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/) * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/)
* [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design * [X] [Traefik](/ha-docker-swarm/traefik) configured per design
* [X] DNS entry for the hostname (_i.e. "keycloak.your-domain.com"_) you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP * [X] DNS entry for the hostname (_i.e. "keycloak.your-domain.com"_) you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation
@@ -69,7 +69,8 @@ BACKUP_FREQUENCY=1d
Create a docker swarm config file in docker-compose syntax (v3), something like this: Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
``` ```
version: '3' version: '3'
@@ -127,21 +128,19 @@ networks:
``` ```
!!! note !!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving ## Serving
### Launch KeyCloak stack ### Launch KeyCloak stack
Launch the KeyCloak stack by running ```docker stack deploy keycloak -c <path-to-docker-compose.yml>``` Launch the KeyCloak stack by running `docker stack deploy keycloak -c <path-to-docker-compose.yml>`
Log into your new instance at https://**YOUR-FQDN**, and login with the user/password you defined in `keycloak.env`. Log into your new instance at https://**YOUR-FQDN**, and login with the user/password you defined in `keycloak.env`.
!!! important !!! important
Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys! Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes ## Chef's Notes

View File

@@ -1,20 +1,20 @@
# Create KeyCloak Users # Create KeyCloak Users
!!! warning !!! warning
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
Unless you plan to authenticate against an outside provider (*[OpenLDAP](/recipes/keycloak/openldap/), below, for example*), you'll want to create some local users.. Unless you plan to authenticate against an outside provider (_[OpenLDAP](/recipes/keycloak/authenticate-against-openldap/), below, for example_), you'll want to create some local users..
## Ingredients ## Ingredients
!!! Summary !!! Summary
Existing: Existing:
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully * [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
### Create User ### Create User
Within the "Master" realm (*no need for more realms yet*), navigate to **Manage** -> **Users**, and then click **Add User** at the top right: Within the "Master" realm (_no need for more realms yet_), navigate to **Manage** -> **Users**, and then click **Add User** at the top right:
![Navigating to the add user interface in Keycloak](/images/keycloak-add-user-1.png) ![Navigating to the add user interface in Keycloak](/images/keycloak-add-user-1.png)
@@ -33,6 +33,6 @@ Once your user is created, to set their password, click on the "**Credentials**"
We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
!!! Summary !!! Summary
Created: Created:
* [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/) * [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/)

View File

@@ -3,7 +3,7 @@
!!! warning !!! warning
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity. This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/recipe/traefik-forward-auth/), we'll setup a client in KeyCloak... Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), we'll setup a client in KeyCloak...
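For context on where this client ends up being used: a sketch of the traefik-forward-auth environment which would consume it (variable names assume the thomseddon/traefik-forward-auth v2 image, and every value below is illustrative):

```
# Hypothetical traefik-forward-auth settings consuming the KeyCloak OIDC client
DEFAULT_PROVIDER=oidc
PROVIDERS_OIDC_ISSUER_URL=https://keycloak.example.com/auth/realms/master
PROVIDERS_OIDC_CLIENT_ID=<the client id you'll create below>
PROVIDERS_OIDC_CLIENT_SECRET=<the client secret KeyCloak generates>
SECRET=<random string used to sign the auth cookie>
```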
## Ingredients ## Ingredients
@@ -14,7 +14,7 @@ Having an authentication provider is not much use until you start authenticating
New: New:
* [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/recipe/traefik-forward-auth/) recipe for more information * [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe for more information
## Preparation ## Preparation

View File

@@ -15,7 +15,7 @@ Details
## Ingredients ## Ingredients
1. [Kubernetes cluster](/kubernetes/digital-ocean/) 1. [Kubernetes cluster](/kubernetes/cluster/)
## Preparation ## Preparation

View File

@@ -8,7 +8,7 @@ Details
## Ingredients ## Ingredients
1. [Kubernetes cluster](/kubernetes/digital-ocean/) 1. [Kubernetes cluster](/kubernetes/cluster/)
## Preparation ## Preparation

View File

@@ -15,7 +15,7 @@ Details
## Ingredients ## Ingredients
1. [Kubernetes cluster](/kubernetes/digital-ocean/) 1. [Kubernetes cluster](/kubernetes/cluster/)
## Preparation ## Preparation

View File

@@ -9,8 +9,8 @@ Details
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -23,7 +23,7 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry pointing your Miniflux url (i.e. _miniflux.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry pointing your Miniflux url (i.e. _miniflux.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -18,8 +18,8 @@ Possible use-cases:
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -1,7 +1,7 @@
hero: Kubernetes. The hero we deserve. hero: Kubernetes. The hero we deserve.
!!! danger "This recipe is a work in progress" !!! danger "This recipe is a work in progress"
This recipe is **incomplete**, and is featured to align the [sponsors](https://github.com/sponsors/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [GitHub sponsors](https://github.com/sponsors/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍 This recipe is **incomplete**, and is featured to align the [sponsors](https://github.com/sponsors/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [GitHub sponsors](https://github.com/sponsors/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `kubectl create -f *.yml` 👍
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁 So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
@@ -19,7 +19,7 @@ A workaround to this bug is to run an MQTT broker **external** to the raspberry
## Ingredients ## Ingredients
1. A [Kubernetes cluster](/kubernetes/digital-ocean/) 1. A [Kubernetes cluster](/kubernetes/cluster/)
## Preparation ## Preparation
@@ -89,6 +89,7 @@ spec:
EOF EOF
kubectl create -f /var/data/mqtt/service-nodeport.yml kubectl create -f /var/data/mqtt/service-nodeport.yml
``` ```
### Create secrets ### Create secrets
It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents. It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents.
@@ -104,8 +105,8 @@ kubectl create secret -n mqtt generic mqtt-credentials \
--from-file=letsencrypt-email.secret --from-file=letsencrypt-email.secret
``` ```
!!! tip "Why use ```echo -n```?" !!! tip "Why use `echo -n`?"
Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why! Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
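As a concrete illustration of the tip above (the second filename is hypothetical, alongside the letsencrypt-email.secret already shown), writing the values with `echo -n` keeps stray newlines out of the stored secret:

```
# Sketch only - write values without trailing newlines, then bundle them into the secret
echo -n "you@example.com" > letsencrypt-email.secret
echo -n "my-mqtt-password" > mqtt-password.secret

kubectl create secret -n mqtt generic mqtt-credentials \
  --from-file=letsencrypt-email.secret \
  --from-file=mqtt-password.secret
```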
## Serving ## Serving
@@ -114,7 +115,7 @@ kubectl create secret -n mqtt generic mqtt-credentials \
Now that we have a volume, a service, and a namespace, we can create a deployment for the mqtt pod. Note below the use of volume mounts, environment variables, as well as the secrets. Now that we have a volume, a service, and a namespace, we can create a deployment for the mqtt pod. Note below the use of volume mounts, environment variables, as well as the secrets.
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `kubectl create -f *.yml` 👍
``` ```
cat <<EOF > /var/data/mqtt/mqtt.yml cat <<EOF > /var/data/mqtt/mqtt.yml
@@ -193,7 +194,7 @@ EOF
kubectl create -f /var/data/mqtt/mqtt.yml kubectl create -f /var/data/mqtt/mqtt.yml
``` ```
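The deployment manifest above is almost entirely elided by the diff. As an outline only (the image, labels and mount path below are illustrative, not necessarily what the recipe actually uses), such a deployment might be shaped like this:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt
  namespace: mqtt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: eclipse-mosquitto    # illustrative - use whatever broker image the recipe specifies
        ports:
        - containerPort: 1883
        volumeMounts:
        - name: mqtt-data
          mountPath: /mosquitto/data
      volumes:
      - name: mqtt-data
        hostPath:
          path: /var/data/mqtt/data
```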
Check that your deployment is running, with ```kubectl get pods -n mqtt```. After a minute or so, you should see a "Running" pod, as illustrated below: Check that your deployment is running, with `kubectl get pods -n mqtt`. After a minute or so, you should see a "Running" pod, as illustrated below:
``` ```
[davidy:~/Documents/Personal/Projects/mqtt-k8s] 130 % kubectl get pods -n mqtt [davidy:~/Documents/Personal/Projects/mqtt-k8s] 130 % kubectl get pods -n mqtt
@@ -202,6 +203,6 @@ mqtt-65f4d96945-bjj44 1/1 Running 0 5m
[davidy:~/Documents/Personal/Projects/mqtt-k8s] % [davidy:~/Documents/Personal/Projects/mqtt-k8s] %
``` ```
To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (```kubectl get nodes -o wide```) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :) To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (`kubectl get nodes -o wide`) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :)
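For a quick smoke-test from a workstation with the mosquitto clients installed (node IP, credentials and topics below are placeholders):

```
# Subscribe in one terminal...
mosquitto_sub -h <any-node-ip> -p 30883 -u <mqtt-user> -P <mqtt-password> -t 'test/#'

# ...then publish from another; the subscriber should print "hello"
mosquitto_pub -h <any-node-ip> -p 30883 -u <mqtt-user> -P <mqtt-password> -t 'test/hello' -m 'hello'
```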
## Chef's Notes 📓 ## Chef's Notes 📓

View File

@@ -12,13 +12,13 @@ Munin uses the excellent RRDTool (written by Tobi Oetiker) and the framework
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation
### Prepare target nodes ### Prepare target nodes
Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use ```apt-get install munin-node```, and on RHEL/CentOS, run ```yum install munin-node```. Remember to edit ```/etc/munin/munin-node.conf```, and set your node to allow the server to poll it, by adding ```cidr_allow x.x.x.x/x```. Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use `apt-get install munin-node`, and on RHEL/CentOS, run `yum install munin-node`. Remember to edit `/etc/munin/munin-node.conf`, and set your node to allow the server to poll it, by adding `cidr_allow x.x.x.x/x`.
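For example, the relevant line in `/etc/munin/munin-node.conf` might look like this (the range below is a placeholder for wherever your Munin master actually lives):

```
# Allow the Munin master's network to poll this node
cidr_allow 192.168.1.0/24
```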
On CentOS Atomic, of course, you can't install munin-node directly, but you can run it as a containerized instance. In this case, you can't use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using: On CentOS Atomic, of course, you can't install munin-node directly, but you can run it as a containerized instance. In this case, you can't use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using:
@@ -33,7 +33,6 @@ docker run -d --name munin-node --restart=always \
funkypenguin/munin-node funkypenguin/munin-node
``` ```
### Setup data locations ### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/munin: We'll need several directories to bind-mount into our container, so create them in /var/data/munin:
@@ -46,7 +45,7 @@ mkdir -p {log,lib,run,cache}
### Prepare environment ### Prepare environment
Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the ```MUNIN_USER```, ```MUNIN_PASSWORD```, and ```NODES``` values: Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the `MUNIN_USER`, `MUNIN_PASSWORD`, and `NODES` values:
``` ```
# Use these if you plan to protect the webUI with an oauth_proxy # Use these if you plan to protect the webUI with an oauth_proxy
@@ -74,8 +73,7 @@ SNMP_NODES="router1:10.0.0.254:9999"
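As a minimal sketch of what ends up in munin.env (the values, and the exact NODES format, are illustrative assumptions — check the munin image's documentation for the full set of variables):

```
# /var/data/config/munin/munin.env - minimal sketch, example values only
MUNIN_USER=admin
MUNIN_PASSWORD=changeme
NODES="node1:192.168.1.10 node2:192.168.1.11"
```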
Create a docker swarm config file in docker-compose syntax (v3), something like this: Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
``` ```
version: '3' version: '3'
@@ -123,14 +121,13 @@ networks:
``` ```
!!! note !!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
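One way to see which subnets are already taken before picking a new one:

```
# List the subnets currently assigned to your docker networks
docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)
```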
## Serving ## Serving
### Launch Munin stack ### Launch Munin stack
Launch the Munin stack by running ```docker stack deploy munin -c <path-to-docker-compose.yml>``` Launch the Munin stack by running `docker stack deploy munin -c <path-to-docker-compose.yml>`
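Once deployed, it's worth a quick check that the service has converged before logging in:

```
docker stack services munin      # replicas should show 1/1
docker stack ps munin --no-trunc # see where tasks were scheduled, and any errors
```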
Log into your new instance at https://**YOUR-FQDN**, with the user and password you specified in munin.env above. Log into your new instance at https://**YOUR-FQDN**, with the user and password you specified in munin.env above.

View File

@@ -18,7 +18,7 @@ This recipe is based on the official NextCloud docker image, but includes seprat
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry pointing your NextCloud url (_nextcloud.example.com_) to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry pointing your NextCloud url (_nextcloud.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -5,7 +5,7 @@
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/) [![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". Many of the recipes featured in the cookbook (_[NextCloud](/recipe/nextcloud/), [Kanboard](/recipe/kanboard/), [Gitlab](/recipe/gitlab/), etc_) offer LDAP integration. LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". Many of the recipes featured in the cookbook (_[NextCloud](/recipes/nextcloud/), [Kanboard](/recipes/kanboard/), [Gitlab](/recipes/gitlab/), etc_) offer LDAP integration.
## Big deal, who cares? ## Big deal, who cares?
@@ -21,13 +21,13 @@ This recipe combines the raw power of OpenLDAP with the flexibility and features
## What's the takeaway? ## What's the takeaway?
What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](/recipe/nextcloud/), [Kanboard](/recipe/kanboard/), [Gitlab](/recipe/gitlab/), etc_), as well as with KeyCloak (_an upcoming recipe_), for **true** SSO. What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](/recipes/nextcloud/), [Kanboard](/recipes/kanboard/), [Gitlab](/recipes/gitlab/), etc_), as well as with KeyCloak (_an upcoming recipe_), for **true** SSO.
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname (_i.e. "lam.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname (_i.e. "lam.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -13,7 +13,7 @@ Using a smartphone app, OwnTracks allows you to collect and analyse your own loc
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -10,8 +10,8 @@ hero: Your own private google photos
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -8,7 +8,7 @@ phpIPAM fulfils a non-sexy, but important role - It helps you manage your IP add
## Why should you care about this? ## Why should you care about this?
You probably have a home network, with 20-30 IP addresses, for your family devices, your [IoT devices](/recipe/home-assistant), your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server. You probably have a home network, with 20-30 IP addresses, for your family devices, your [IoT devices](/recipes/homeassistant), your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server.
You could simply keep track of all devices with leases in your DHCP server, but what happens if your (_hypothetical?_) Ubiquiti EdgeRouter X crashes and burns due to lack of disk space, and you lose track of all your leases? Well, you have to start from scratch, is what! You could simply keep track of all devices with leases in your DHCP server, but what happens if your (_hypothetical?_) Ubiquiti EdgeRouter X crashes and burns due to lack of disk space, and you lose track of all your leases? Well, you have to start from scratch, is what!
@@ -19,8 +19,8 @@ Enter phpIPAM. A tool designed to help home keeps as well as large organisations
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname (_i.e. "phpipam.your-domain.com"_) you intend to use for phpIPAM, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname (_i.e. "phpipam.your-domain.com"_) you intend to use for phpIPAM, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation
@@ -36,6 +36,7 @@ mkdir /var/data/runtime/phpipam -p
### Prepare environment ### Prepare environment
Create phpipam.env, and populate with the following variables Create phpipam.env, and populate with the following variables
``` ```
# Setup for github, phpipam application # Setup for github, phpipam application
OAUTH2_PROXY_CLIENT_ID= OAUTH2_PROXY_CLIENT_ID=
@@ -77,13 +78,12 @@ BACKUP_FREQUENCY=1d
I usually protect my stacks using an [oauth proxy](/reference/oauth_proxy/) container in front of the app. This protects me from either accidentally exposing a platform to the world, or having an insecure platform accessed and abused. I usually protect my stacks using an [oauth proxy](/reference/oauth_proxy/) container in front of the app. This protects me from either accidentally exposing a platform to the world, or having an insecure platform accessed and abused.
In the case of phpIPAM, the oauth_proxy creates an additional complexity, since it passes the "Authorization" HTTP header to the phpIPAM container. phpIPAM then examines the header, determines that the provided username (_my email address associated with my oauth provider_) doesn't match a local user account, and denies me access without the opportunity to retry. In the case of phpIPAM, the oauth_proxy creates an additional complexity, since it passes the "Authorization" HTTP header to the phpIPAM container. phpIPAM then examines the header, determines that the provided username (_my email address associated with my oauth provider_) doesn't match a local user account, and denies me access without the opportunity to retry.
The (_dirty_) solution I've come up with is to insert an Nginx instance in the path between the oauth_proxy and the phpIPAM container itself. Nginx can remove the authorization header, so that phpIPAM can prompt me to login with a web-based form. The (_dirty_) solution I've come up with is to insert an Nginx instance in the path between the oauth_proxy and the phpIPAM container itself. Nginx can remove the authorization header, so that phpIPAM can prompt me to login with a web-based form.
Create /var/data/phpipam/nginx.conf as follows: Create /var/data/phpipam/nginx.conf as follows:
``` ```
upstream app-upstream { upstream app-upstream {
server app:80; server app:80;
@@ -108,8 +108,7 @@ server {
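The interesting directive is the one that blanks the incoming header. A hedged sketch of a complete config follows — the upstream/server names match the fragment above, but everything else is an assumption rather than the recipe's exact file:

```
# Hedged sketch: write /var/data/phpipam/nginx.conf with a proxy that strips
# the Authorization header before passing requests on to the phpIPAM container.
cat > /var/data/phpipam/nginx.conf <<'EOF'
upstream app-upstream {
    server app:80;
}

server {
    listen 80;

    location / {
        # Drop the header injected by oauth_proxy, so phpIPAM presents its own login form
        proxy_set_header Authorization "";
        proxy_pass http://app-upstream;
    }
}
EOF
```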
Create a docker swarm config file in docker-compose syntax (v3), something like this: Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip !!! tip
I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍 I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
``` ```
version: '3' version: '3'
@@ -193,15 +192,13 @@ networks:
``` ```
!!! note !!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here. Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving ## Serving
### Launch phpIPAM stack ### Launch phpIPAM stack
Launch the phpIPAM stack by running ```docker stack deploy phpipam -c <path-to-docker-compose.yml>``` Launch the phpIPAM stack by running `docker stack deploy phpipam -c <path-to-docker-compose.yml>`
Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen prompts to set your first user/password. Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen prompts to set your first user/password.

View File

@@ -10,7 +10,7 @@ hero: A recipe to manage your Media 🎥 📺 🎵
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. A DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. A DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -7,8 +7,8 @@ PrivateBin is a minimalist, open source online pastebin where the server (can) h
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -23,8 +23,8 @@ Features include:
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -14,7 +14,7 @@ Restic is one of the more popular open-source backup solutions, and is often [co
!!! summary "Ingredients" !!! summary "Ingredients"
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
* [X] [Traefik](/ha-docker-swarm/traefik_public) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design * [X] [Traefik](/ha-docker-swarm/traefik) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design
* [X] Credentials for one of Restic's [supported repositories](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html) * [X] Credentials for one of Restic's [supported repositories](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html)
## Preparation ## Preparation

View File

@@ -22,8 +22,8 @@ I'd encourage you to spend some time reading https://github.com/stefanprodan/swa
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) on **17.09.0 or newer** (_doesn't work with CentOS Atomic, unfortunately_) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) on **17.09.0 or newer** (_doesn't work with CentOS Atomic, unfortunately_) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostnames you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostnames you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -16,8 +16,8 @@ Details
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -16,7 +16,7 @@ There are plugins for [Chrome](https://chrome.google.com/webstore/detail/wallaba
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -19,8 +19,8 @@ Here are some other possible use cases:
## Ingredients ## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md) 1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design 2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP 3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation ## Preparation

View File

@@ -3,7 +3,7 @@
In order to avoid IP addressing conflicts as we bring swarm networks up/down, we will statically address each docker overlay network, and record the details below: In order to avoid IP addressing conflicts as we bring swarm networks up/down, we will statically address each docker overlay network, and record the details below:
| Network | Range | | Network | Range |
|-----------------------------------------------------------------------------------------------------------------------|----------------| | --------------------------------------------------------------------------------------------------------------------- | -------------- |
| [Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) | _unspecified_ | | [Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) | _unspecified_ |
| [Docker-cleanup](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/docker-swarm-mode/#setup-automated-cleanup) | 172.16.0.0/24 | | [Docker-cleanup](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/docker-swarm-mode/#setup-automated-cleanup) | 172.16.0.0/24 |
| [Mail Server](https://geek-cookbook.funkypenguin.co.nz/recipes/mail/) | 172.16.1.0/24 | | [Mail Server](https://geek-cookbook.funkypenguin.co.nz/recipes/mail/) | 172.16.1.0/24 |
@@ -19,7 +19,7 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we
| [Autopirate](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/) | 172.16.11.0/24 | | [Autopirate](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/) | 172.16.11.0/24 |
| [Nextcloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/) | 172.16.12.0/24 | | [Nextcloud](https://geek-cookbook.funkypenguin.co.nz/recipes/nextcloud/) | 172.16.12.0/24 |
| [Portainer](https://geek-cookbook.funkypenguin.co.nz/recipes/portainer/) | 172.16.13.0/24 | | [Portainer](https://geek-cookbook.funkypenguin.co.nz/recipes/portainer/) | 172.16.13.0/24 |
| [Home-Assistant](https://geek-cookbook.funkypenguin.co.nz/recipes/home-assistant/) | 172.16.14.0/24 | | [Home Assistant](https://geek-cookbook.funkypenguin.co.nz/recipes/homeassistant/) | 172.16.14.0/24 |
| [OwnTracks](https://geek-cookbook.funkypenguin.co.nz/recipes/owntracks/) | 172.16.15.0/24 | | [OwnTracks](https://geek-cookbook.funkypenguin.co.nz/recipes/owntracks/) | 172.16.15.0/24 |
| [Plex](https://geek-cookbook.funkypenguin.co.nz/recipes/plex/) | 172.16.16.0/24 | | [Plex](https://geek-cookbook.funkypenguin.co.nz/recipes/plex/) | 172.16.16.0/24 |
| [Emby](https://geek-cookbook.funkypenguin.co.nz/recipes/emby/) | 172.16.17.0/24 | | [Emby](https://geek-cookbook.funkypenguin.co.nz/recipes/emby/) | 172.16.17.0/24 |
@@ -33,7 +33,7 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we
| [Bookstack](https://geek-cookbook.funkypenguin.co.nz/recipes/bookstack/) | 172.16.33.0/24 | | [Bookstack](https://geek-cookbook.funkypenguin.co.nz/recipes/bookstack/) | 172.16.33.0/24 |
| [Swarmprom](https://geek-cookbook.funkypenguin.co.nz/recipes/swarmprom/) | 172.16.34.0/24 | | [Swarmprom](https://geek-cookbook.funkypenguin.co.nz/recipes/swarmprom/) | 172.16.34.0/24 |
| [Realms](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.35.0/24 | | [Realms](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.35.0/24 |
| [ElkarBackup](https://geek-cookbook.funkypenguin.co.nz/recipes/elkarbackp/) | 172.16.36.0/24 | | [ElkarBackup](https://geek-cookbook.funkypenguin.co.nz/recipes/elkarbackup/) | 172.16.36.0/24 |
| [Mayan EDMS](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.37.0/24 | | [Mayan EDMS](https://geek-cookbook.funkypenguin.co.nz/recipes/realms/) | 172.16.37.0/24 |
| [Shaarli](https://geek-cookbook.funkypenguin.co.nz/recipes/shaarli/) | 172.16.38.0/24 | | [Shaarli](https://geek-cookbook.funkypenguin.co.nz/recipes/shaarli/) | 172.16.38.0/24 |
| [OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/recipes/openldap/) | 172.16.39.0/24 | | [OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/recipes/openldap/) | 172.16.39.0/24 |