Mirror of https://github.com/funkypenguin/geek-cookbook/ (synced 2025-12-13 17:56:26 +00:00)
Fix tons of broken links (messy, messy penguin!)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
@@ -6,7 +6,7 @@
[blogurl]: https://www.funkypenguin.co.nz
[twitchurl]: https://www.twitch.tv/funkypenguinz
[twitterurl]: https://twitter.com/funkypenguin
-[dockerurl]: https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design
+[dockerurl]: https://geek-cookbook.funkypenguin.co.nz/docker-swarm/design
[k8surl]: https://geek-cookbook.funkypenguin.co.nz/kubernetes/

<!-- markdownlint-disable MD033 MD041 -->
@@ -33,12 +33,12 @@

# What is this?

-Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/ha-docker-swarm/design/) or [Kubernetes](/kubernetes/).
+Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/docker-swarm/design/) or [Kubernetes](/kubernetes/).

Running such a platform enables you to run self-hosted tools such as [AutoPirate](/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex][plex], [NextCloud][nextcloud], and includes elements such as:

-- [Automatic SSL-secured access](/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*)
+- [Automatic SSL-secured access](/docker-swarm/traefik/) to all services (*with LetsEncrypt*)
-- [SSO / authentication layer](/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services
+- [SSO / authentication layer](/docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services
- [Automated backup](/recipes/elkarbackup/) of configuration and data
- [Monitoring and metrics](/recipes/swarmprom/) collection, graphing and alerting

@@ -1,6 +1,6 @@

[archivebox]: /recipes/archivebox/
-[authelia]: /ha-docker-swarm/authelia/
+[authelia]: /docker-swarm/authelia/
[autopirate]: /recipes/autopirate/
[bazarr]: /recipes/autopirate/bazarr/
[calibre-web]: /recipes/calibre-web/
@@ -38,7 +38,7 @@
[rtorrent]: /recipes/autopirate/rtorrent/
[sabnzbd]: /recipes/autopirate/sabnzbd/
[sonarr]: /recipes/autopirate/sonarr/
-[tfa-dex-static]: /ha-docker-swarm/traefik-forward-auth/dex-static/
+[tfa-dex-static]: /docker-swarm/traefik-forward-auth/dex-static/
-[tfa-google]: /ha-docker-swarm/traefik-forward-auth/google/
+[tfa-google]: /docker-swarm/traefik-forward-auth/google/
-[tfa-keycloak]: /ha-docker-swarm/traefik-forward-auth/keycloak/
+[tfa-keycloak]: /docker-swarm/traefik-forward-auth/keycloak/
-[tfa]: /ha-docker-swarm/traefik-forward-auth/
+[tfa]: /docker-swarm/traefik-forward-auth/
@@ -4,7 +4,7 @@

### Tip your waiter (sponsor) 👏

-Did you receive excellent service? Want to compliment the chef? (_..and support development of current and future recipes!_) Sponsor me on [Github][github_sponsor] / [Patreon][patreon], or see the [contribute](/community/support/) page for more (_free or paid_) ways to say thank you! 👏
+Did you receive excellent service? Want to compliment the chef? (_..and support development of current and future recipes!_) Sponsor me on [Github][github_sponsor] / [Patreon][patreon], or see the [contribute](/community/contribute/) page for more (_free or paid_) ways to say thank you! 👏

### Employ your chef (engage) 🤝

@@ -3,10 +3,10 @@
!!! summary "Ingredients"
Already deployed:

-* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-* [X] [Traefik](/ha-docker-swarm/traefik) configured per design
+* [X] [Traefik](/docker-swarm/traefik) configured per design
-* [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+* [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/docker-swarm/keepalived/) IP

Related:

-* [X] [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) to secure your Traefik-exposed services with an additional layer of authentication
+* [X] [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) to secure your Traefik-exposed services with an additional layer of authentication
@@ -3,9 +3,9 @@
!!! summary "Ingredients"
Already deployed:

-* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-* [X] [Traefik](/ha-docker-swarm/traefik) configured per design
+* [X] [Traefik](/docker-swarm/traefik) configured per design

New:

-* [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+* [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](/docker-swarm/keepalived/) IP
@@ -10,7 +10,7 @@ In the design described below, our "private cloud" platform is:
* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it on your garage server today, run it in AWS tomorrow_)
-* **Secure** (_access protected with [LetsEncrypt certificates](/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](/ha-docker-swarm/traefik-forward-auth/)_)
+* **Secure** (_access protected with [LetsEncrypt certificates](/docker-swarm/traefik/) and optional [OIDC with 2FA](/docker-swarm/traefik-forward-auth/)_)
* **Automated** (_requires minimal care and feeding_)

## Design Decisions
@@ -20,7 +20,7 @@ In the design described below, our "private cloud" platform is:
This means that:

* At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
-* [Ceph](/ha-docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.
+* [Ceph](/docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.

!!! note
An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand.
@@ -178,6 +178,6 @@ What have we achieved?
!!! summary "Summary"
Created:

-* [X] [Docker swarm cluster](/ha-docker-swarm/design/)
+* [X] [Docker swarm cluster](/docker-swarm/design/)

--8<-- "recipe-footer.md"
manuscript/docker-swarm/index.md (new file, 97 lines)
@@ -0,0 +1,97 @@
---
title: Launch your secure, scalable Docker Swarm
description: Using Docker Swarm to build your own container-hosting platform which is highly-available, scalable, portable, secure and automated! 💪
---

# Highly Available Docker Swarm Design

In the design described below, our "private cloud" platform is:

* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it on your garage server today, run it in AWS tomorrow_)
* **Secure** (_access protected with [LetsEncrypt certificates](/docker-swarm/traefik/) and optional [OIDC with 2FA](/docker-swarm/traefik-forward-auth/)_)
* **Automated** (_requires minimal care and feeding_)

## Design Decisions

**Where possible, services will be highly available.**

This means that:

* At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
* [Ceph](/docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.

!!! note
An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand.

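To make the three-manager requirement concrete, forming such a swarm is only a handful of commands; the IP address below is an illustrative placeholder, not part of the recipe:

```bash
# On the first node: initialise the swarm, advertising this node's own IP
docker swarm init --advertise-addr 192.168.1.11

# Still on the first node: print the join token for additional *managers*
docker swarm join-token manager

# On each of the other two nodes: join as a manager using that token
docker swarm join --token <manager-join-token> 192.168.1.11:2377

# Back on any manager: confirm all three nodes show as Ready / Reachable
docker node ls
```
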
**Where multiple solutions to a requirement exist, preference will be given to the most portable solution.**

This means that:

* Services are defined using docker-compose v3 YAML syntax
* Services are portable, meaning a particular stack could be shut down and moved to a new provider with minimal effort.

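As a rough illustration of what such a docker-compose v3 stack definition looks like (the service, image, hostname, and network name here are placeholders, not one of the actual recipes):

```yaml
version: "3.2"

services:
  whoami:
    image: traefik/whoami            # placeholder app; any web container works
    networks:
      - traefik_public               # assumed overlay network shared with Traefik
    deploy:
      replicas: 1
      labels:
        # assumed Traefik v2 labels; the real recipes define their own
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        - traefik.http.services.whoami.loadbalancer.server.port=80

networks:
  traefik_public:
    external: true
```

Deployed with `docker stack deploy -c whoami.yml whoami`, the same file can be torn down and redeployed on any other swarm, which is the portability the bullets above describe.
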
## Security

Under this design, the only inbound connections we're permitting to our docker swarm in a **minimal** configuration (*you may add custom services later, like UniFi Controller*) are:

### Network Flows

* **HTTP (TCP 80)** : Redirects to https
* **HTTPS (TCP 443)** : Serves individual docker containers via SSL-encrypted reverse proxy

### Authentication

* Where the hosted application provides a trusted level of authentication (*i.e., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*i.e. [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
* Where the hosted application provides inadequate (*i.e. [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*i.e. [Gollum](/recipes/gollum/)*), a further authentication against an OAuth provider will be required.

## High availability

### Normal function

Assuming a 3-node configuration, under normal circumstances the following is illustrated:

* All 3 nodes provide shared storage via Ceph, which is provided by a docker container on each node.
* All 3 nodes participate in the Docker Swarm as managers.
* The various containers belonging to the application "stacks" deployed within Docker Swarm are automatically distributed amongst the swarm nodes.
* Persistent storage for the containers is provided via cephfs mount.
* The **traefik** service (*in swarm mode*) receives incoming requests (*on HTTP and HTTPS*), and forwards them to individual containers. Traefik knows the containers' names because it's able to read the docker socket.
* All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (*no matter which node it's on*), and then onto the target backend.
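To make the "varying priorities" point concrete, here is a bare-bones sketch of a keepalived VRRP configuration for one node; the interface name, router ID, password, and addresses are invented placeholders, and the actual recipe configures keepalived its own way:

```
vrrp_instance swarm_vip {
    state MASTER            # BACKUP on the other two nodes
    interface eth0          # placeholder interface name
    virtual_router_id 51
    priority 150            # e.g. 150 / 100 / 50 across the three nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder shared secret
    }
    virtual_ipaddress {
        192.168.1.2/24      # the shared VIP your DNS records point at
    }
}
```
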

### Node failure

In the case of a failure (or scheduled maintenance) of one of the nodes, the following is illustrated:

* The failed node no longer participates in Ceph, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
* The remaining two nodes in Docker Swarm achieve a quorum and agree that the failed node is to be removed.
* The (*possibly new*) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
* The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
* The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.

### Node restore

When the failed (*or upgraded*) host is restored to service, the following is illustrated:

* Ceph regains full redundancy
* Docker Swarm managers become aware of the recovered node, and will use it for scheduling **new** containers
* Existing containers which were migrated off the node are not migrated back
* Keepalived VIP regains full redundancy

### Total cluster failure

A day after writing this, my environment suffered a fault whereby all 3 VMs were unexpectedly and simultaneously powered off.

Upon restore, docker failed to start on one of the VMs due to a local disk space issue[^1]. However, the other two VMs started, established the swarm, mounted their shared storage, and started up all the containers (services) which were managed by the swarm.

In summary, although I suffered an **unplanned power outage to all of my infrastructure**, followed by a **failure of a third of my hosts**... ==all my platforms are 100% available[^1] with **absolutely no manual intervention**==.

[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.

--8<-- "recipe-footer.md"
@@ -10,9 +10,9 @@ The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Cu
## Ingredients

-1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-2. [Traefik](/ha-docker-swarm/traefik) configured per design
+2. [Traefik](/docker-swarm/traefik) configured per design
-3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/docker-swarm/keepalived/) IP

## Preparation

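The hunk above cuts off at the Preparation heading; the recipe's "simple docker-compose.yml" isn't shown in this diff, but a Docker Hub pull-through cache generally boils down to something like the following sketch (the volume path and network name are placeholders, not the recipe's exact values):

```yaml
version: "3.2"

services:
  registry-mirror:
    image: registry:2
    environment:
      # Tells the registry to act as a pull-through cache for Docker Hub
      REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
    volumes:
      - /var/data/registry:/var/lib/registry   # placeholder path on shared storage
    networks:
      - traefik_public
    deploy:
      replicas: 1

networks:
  traefik_public:
    external: true
```
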
@@ -3,7 +3,7 @@
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.

!!! warning
-This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef
+This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/docker-swarm/shared-storage-ceph/) instead. - 2019 Chef

## Design

@@ -4,7 +4,7 @@ description: Traefik forward auth needs an authentication backend, but if you do
---
# Using Traefik Forward Auth with Dex (Static)

-[Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) is incredibly useful to secure services with an additional layer of authentication, provided by an OIDC-compatible provider. The simplest possible provider is a self-hosted instance of [CoreOS's Dex](https://github.com/dexidp/dex), configured with a static username and password. This recipe will "get you started" with Traefik Forward Auth, providing a basic authentication layer. In time, you might want to migrate to a "public" provider, like [Google][tfa-google], or GitHub, or to a [KeyCloak][keycloak] installation.
+[Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) is incredibly useful to secure services with an additional layer of authentication, provided by an OIDC-compatible provider. The simplest possible provider is a self-hosted instance of [CoreOS's Dex](https://github.com/dexidp/dex), configured with a static username and password. This recipe will "get you started" with Traefik Forward Auth, providing a basic authentication layer. In time, you might want to migrate to a "public" provider, like [Google][tfa-google], or GitHub, or to a [KeyCloak][keycloak] installation.

--8<-- "recipe-tfa-ingredients.md"

@@ -4,7 +4,7 @@ description: Traefik forward auth can selectively secure your Docker services ag
---
# Using Traefik Forward Auth with KeyCloak

-While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain.
+While the [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain.

!!! tip "Keycloak with Traefik"
Did you land here from a search, looking for information about using Keycloak with Traefik? All this and more is covered in the [Keycloak][keycloak] recipe!
@@ -28,7 +28,7 @@ COOKIE_DOMAIN=<the root FQDN of your domain>

### Prepare the docker service config

-This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/ha-docker-swarm/traefik/) recipe:
+This is a small container; you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/docker-swarm/traefik/) recipe:

```yaml
traefik-forward-auth:
@@ -21,8 +21,8 @@ To deal with these gaps, we need a front-end load-balancer, and in this design,
!!! summary "Ingredients"
Already deployed:

-* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-* [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+* [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/docker-swarm/keepalived/) IP

New:
* [ ] Traefik configured per design
@@ -244,6 +244,6 @@ You should now be able to access[^1] your traefik instance on `https://traefik.<
* [X] Frontend proxy which will dynamically configure itself for new backend containers
* [X] Automatic SSL support for all proxied resources

-[^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)!
+[^1]: Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/docker-swarm/traefik-forward-auth/)!

--8<-- "recipe-footer.md"
@@ -6,17 +6,17 @@ hide:

# Welcome, fellow geek :wave:, start here :point_down:

-[Dive into Docker Swarm](/ha-docker-swarm/design/){: .md-button .md-button--primary}
+[Dive into Docker Swarm](/docker-swarm/design/){: .md-button .md-button--primary}
[Kick it with Kubernetes](/kubernetes/){: .md-button .md-button--primary}

## What is this?

-The "*Geek Cookbook*" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/ha-docker-swarm/design/) or [Kubernetes](/kubernetes/).
+The "*Geek Cookbook*" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/docker-swarm/design/) or [Kubernetes](/kubernetes/).

Running such a platform enables you to run self-hosted tools such as [AutoPirate](/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex](https://www.plex.tv/), [NextCloud](https://nextcloud.com/), and includes elements such as:

-* [Automatic SSL-secured access](/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*)
+* [Automatic SSL-secured access](/docker-swarm/traefik/) to all services (*with LetsEncrypt*)
-* [SSO / authentication layer](/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services
+* [SSO / authentication layer](/docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services
* [Automated backup](/recipes/elkarbackup/) of configuration and data
* [Monitoring and metrics](/recipes/swarmprom/) collection, graphing and alerting

@@ -44,7 +44,7 @@ So if you're familiar enough with the concepts above, and you've done self-hosti

<div class="grid cards" markdown>

-- __Dive into :material-docker:{ .docker .lg .middle } [Docker Swarm](/ha-docker-swarm/design/)__
+- __Dive into :material-docker:{ .docker .lg .middle } [Docker Swarm](/docker-swarm/design/)__
- __Kick it with :material-kubernetes:{ .kubernetes .lg .middle } [Kubernetes](/kubernetes/)__
- __Geek out in :fontawesome-brands-discord:{ .discord .lg .middle } [Discord](http://chat.funkypenguin.co.nz)__
- __Fast-track with 🚀 [Premix](/premix)!__
@@ -26,7 +26,7 @@ One of the key drawcards for Kubernetes is horizonal scaling. You want to be abl

### Load Balancing

-Even if you had enough hardware capacity to handle any unexpected scaling requirements, ensuring that traffic can reliably reach your cluster is a complicated problem. You need to present a "virtual" IP for external traffic to ingress the cluster on. There are popular solutions to provide LoadBalancer services to a self-managed cluster (*i.e., [MetalLB](/kubernetes/load-balancer/metallb/)*), but they do represent extra complexity, and won't necessarily be resilient to outages outside of the cluster (*network devices, power, etc*).
+Even if you had enough hardware capacity to handle any unexpected scaling requirements, ensuring that traffic can reliably reach your cluster is a complicated problem. You need to present a "virtual" IP for external traffic to ingress the cluster on. There are popular solutions to provide LoadBalancer services to a self-managed cluster (*i.e., [MetalLB](/kubernetes/loadbalancer/metallb/)*), but they do represent extra complexity, and won't necessarily be resilient to outages outside of the cluster (*network devices, power, etc*).
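For a sense of what that "extra complexity" amounts to, here is a rough sketch of the two resources a bare-metal MetalLB layer-2 setup typically needs; the address range is a placeholder and your MetalLB version may want different settings:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range on the local LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```
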

### Storage

@@ -34,7 +34,7 @@ Cloud providers make it easy to connect their storage solutions to your cluster,

### Services

-Some things just "work better" in a cloud provider environment. For example, running a highly available Postgres instance on Kubernetes requires at least 3 nodes, and 3 x storage, plus manual failover/failback in the event of an actual issue. This can represent a huge cost if you simply need a PostgreSQL database to provide (*for example*) a backend to an authentication service like [KeyCloak](/recipes/kubernetes/keycloak/). Cloud providers will have a range of managed database solutions which will cost far less than do-it-yourselfing, and integrate easily and securely into their kubernetes offerings.
+Some things just "work better" in a cloud provider environment. For example, running a highly available Postgres instance on Kubernetes requires at least 3 nodes, and 3 x storage, plus manual failover/failback in the event of an actual issue. This can represent a huge cost if you simply need a PostgreSQL database to provide (*for example*) a backend to an authentication service like KeyCloak. Cloud providers will have a range of managed database solutions which will cost far less than do-it-yourselfing, and integrate easily and securely into their kubernetes offerings.

### Summary

manuscript/kubernetes/deployment/flux/index.md (new file, 11 lines)
@@ -0,0 +1,11 @@
---
title: Using flux for deployment in Kubernetes
---

# Deployment

In a break from tradition, the flux design is best understood *after* installing it, so this section makes the most sense read in the following order:

1. [Install](/kubernetes/deployment/flux/install/)
2. [Design](/kubernetes/deployment/flux/design/)
3. [Operate](/kubernetes/deployment/flux/operate/)
@@ -1,6 +1,6 @@
# Design

-Like the [Docker Swarm](/ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:
+Like the [Docker Swarm](/docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:

- **Highly-available** (_can tolerate the failure of a single component_)
- **Scalable** (_can add resource or capacity as required_)
@@ -93,7 +93,7 @@ The phone-home container calls the webhook, and tells HAProxy to listen on port

### 2 : The Traefik Ingress

-In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](/ha-docker-swarm/traefik/).
+In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Traefik ingress does what [Traefik does for us under Docker Swarm](/docker-swarm/traefik/).

What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort port number.

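The "tied to the traefik pod using affinity" trick is ordinary Kubernetes pod affinity. As a rough sketch only (the labels, namespace, and placeholder image are guesses, not the author's actual manifests), the phone-home pod spec would carry something like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: phone-home
  namespace: default
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: traefik                        # assumed label on the traefik pod
          topologyKey: kubernetes.io/hostname     # i.e. schedule on the same node as traefik
  containers:
    - name: phone-home
      image: busybox                              # placeholder; the real pod calls the HAProxy webhook
      command: ["sleep", "3600"]
```
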
@@ -9,7 +9,7 @@ My first introduction to Kubernetes was a children's story:

## Why Kubernetes?

-Why would you want to Kubernetes for your self-hosted recipes, over simple [Docker Swarm](/ha-docker-swarm/)? Here's my personal take..
+Why would you want to Kubernetes for your self-hosted recipes, over simple [Docker Swarm](/docker-swarm/)? Here's my personal take..

### Docker Swarm is dead

@@ -13,11 +13,11 @@ Nginx Ingress Controller does make for a nice, simple "default" Ingress controll

* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] A [load-balancer](/kubernetes/load-balancer/) solution (*either [k3s](/kubernetes/load-balancer/k3s/) or [MetalLB](/kubernetes/load-balancer/metallb/)*)
+* [x] A [load-balancer](/kubernetes/load-balancer/) solution (*either [k3s](/kubernetes/load-balancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)

Optional:

-* [x] [Cert-Manager](/kubernetes/cert-manager/) deployed to request/renew certificates
+* [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
* [x] [External DNS](/kubernetes/external-dns/) configured to respond to ingresses, or with a wildcard DNS entry

## Preparation
@@ -12,7 +12,7 @@ One of the advantages [Traefik](/kubernetes/ingress/traefik/) offers over [Nginx

* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] A [load-balancer](/kubernetes/load-balancer/) solution (*either [k3s](/kubernetes/load-balancer/k3s/) or [MetalLB](/kubernetes/load-balancer/metallb/)*)
+* [x] A [load-balancer](/kubernetes/load-balancer/) solution (*either [k3s](/kubernetes/load-balancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)
* [x] [Traefik](/kubernetes/ingress/traefik/) deployed per-design

--8<-- "recipe-footer.md"
@@ -18,11 +18,11 @@ Traefik natively includes some features which Nginx lacks:

* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] A [load-balancer](/kubernetes/load-balancer/) solution (*either [k3s](/kubernetes/load-balancer/k3s/) or [MetalLB](/kubernetes/load-balancer/metallb/)*)
+* [x] A [load-balancer](/kubernetes/load-balancer/) solution (*either [k3s](/kubernetes/load-balancer/k3s/) or [MetalLB](/kubernetes/loadbalancer/metallb/)*)

Optional:

-* [x] [Cert-Manager](/kubernetes/cert-manager/) deployed to request/renew certificates
+* [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
* [x] [External DNS](/kubernetes/external-dns/) configured to respond to ingresses, or with a wildcard DNS entry

## Preparation
@@ -2,7 +2,7 @@

Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safe-guarding your data from accidental loss, as well as malicious impact.

-Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.
+Under [Docker Swarm](/docker-swarm/design/), we used [shared storage](/docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.

Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution...

@@ -15,7 +15,7 @@ I've split this section, conceptually, into 3 separate tasks:

1. Setup [Cert Manager](/kubernetes/ssl-certificates/cert-manager/), a controller whose job it is to request / renew certificates
2. Setup "[Issuers](/kubernetes/ssl-certificates/letsencrypt-issuers/)" for LetsEncrypt, which Cert Manager will use to request certificates
-3. Setup a [wildcard certificate](/kubernetes/ssl-certificates/letsencrypt-wildcard/) in such a way that it can be used by Ingresses like Traefik or Nginx
+3. Setup a [wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/) in such a way that it can be used by Ingresses like Traefik or Nginx

--8<-- "recipe-footer.md"

@@ -11,7 +11,7 @@ In order for Cert Manager to request/renew certificates, we have to tell it abou

* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] [Cert-Manager](/kubernetes/cert-manager/) deployed to request/renew certificates
+* [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
* [x] API credentials for a [supported DNS01 provider](https://cert-manager.io/docs/configuration/acme/dns01/) for LetsEncrypt wildcard certs

## Preparation
@@ -79,7 +79,7 @@ As you'd imagine, the "prod" version of the LetsEncrypt issues is very similar,
```

!!! note
-You'll note that there are two secrets referred to above - `privateKeySecretRef`, referencing `letsencrypt-prod` is for cert-manager to populate as a result of its ACME shenanigans - you don't have to do anything about this particular secret! The cloudflare-specific secret (*and this will change based on your provider*) is expected to be found in the same namespace as the certificate we'll be issuing, and will be discussed when we create our [wildcard certificate](/kubernetes/ssl-certificates/letsencrypt-wildcard/).
+You'll note that there are two secrets referred to above - `privateKeySecretRef`, referencing `letsencrypt-prod` is for cert-manager to populate as a result of its ACME shenanigans - you don't have to do anything about this particular secret! The cloudflare-specific secret (*and this will change based on your provider*) is expected to be found in the same namespace as the certificate we'll be issuing, and will be discussed when we create our [wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/).
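For orientation, a "prod" ClusterIssuer of the kind being described generally looks something like this sketch; the email, secret names, and the Cloudflare solver are illustrative assumptions rather than the recipe's exact manifest:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                  # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod                # cert-manager creates/populates this secret itself
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token    # assumed secret, stored alongside the Certificate
              key: api-token
```
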

## Serving

@@ -102,7 +102,7 @@ Status:
Events: <none>
```

-Provided your account is registered, you're ready to proceed with [creating a wildcard certificate](/kubernetes/ssl-certificates/letsencrypt-wildcard/)!
+Provided your account is registered, you're ready to proceed with [creating a wildcard certificate](/kubernetes/ssl-certificates/wildcard-certificate/)!

--8<-- "recipe-footer.md"

@@ -1,13 +1,13 @@
# Secret Replicator

-As explained when creating our [LetsEncrypt Wildcard certificates](/kubernetes/ssl-certificates/letsencrypt-wildcard/), it can be problematic that Certificates can't be **shared** between namespaces. One simple solution to this problem is simply to "replicate" secrets from one "source" namespace into all other namespaces.
+As explained when creating our [LetsEncrypt Wildcard certificates](/kubernetes/ssl-certificates/wildcard-certificate/), it can be problematic that Certificates can't be **shared** between namespaces. One simple solution to this problem is simply to "replicate" secrets from one "source" namespace into all other namespaces.

!!! summary "Ingredients"

* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] [secret-replicator](/kubernetes/secret-replicator/) deployed to request/renew certificates
-* [x] [LetsEncrypt Wildcard Certificates](/kubernetes/ssl-certificates/letsencrypt-wildcard/) created in the `letsencrypt-wildcard-cert` namespace
+* [x] [LetsEncrypt Wildcard Certificates](/kubernetes/ssl-certificates/wildcard-certificate/) created in the `letsencrypt-wildcard-cert` namespace

Kiwigrid's "[Secret Replicator](https://github.com/kiwigrid/secret-replicator)" is a simple controller which replicates secrets from one namespace to another.[^1]

@@ -6,7 +6,7 @@ Now that we have an [Issuer](/kubernetes/ssl-certificates/letsencrypt-issuers/)

* [x] A [Kubernetes cluster](/kubernetes/cluster/)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] [Cert-Manager](/kubernetes/cert-manager/) deployed to request/renew certificates
+* [x] [Cert-Manager](/kubernetes/ssl-certificates/cert-manager/) deployed to request/renew certificates
* [x] [LetsEncrypt ClusterIssuers](/kubernetes/ssl-certificates/letsencrypt-issuers/) created using DNS01 validation solvers

Certificates are Kubernetes secrets, and so are subject to the same limitations / RBAC controls as other secrets. Importantly, they are **namespaced**, so it's not possible to refer to a secret in one namespace, from a pod in **another** namespace. This restriction also applies to Ingress resources (*although there are workarounds*) - An Ingress can only refer to TLS secrets in its own namespace.
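As a rough sketch of the wildcard Certificate this page goes on to create (the domain and secret name are placeholders; the issuer name assumes the letsencrypt-prod ClusterIssuer from the previous page, and the namespace is the one mentioned in the Secret Replicator recipe):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: letsencrypt-wildcard-cert     # the namespace the resulting secret will live in
spec:
  secretName: wildcard-example-com-tls     # the TLS secret an Ingress in this namespace can reference
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "example.com"
    - "*.example.com"
```
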
@@ -49,13 +49,13 @@ Since this recipe is so long, and so many of the tools are optional to the final
!!! summary "Ingredients"
Already deployed:

-* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-* [X] [Traefik](/ha-docker-swarm/traefik) configured per design
+* [X] [Traefik](/docker-swarm/traefik) configured per design
-* [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+* [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/docker-swarm/keepalived/) IP

Related:

-* [X] [Traefik Forward Auth](ha-docker-swarm/traefik-forward-auth/) to secure your Traefik-exposed services with an additional layer of authentication
+* [X] [Traefik Forward Auth](docker-swarm/traefik-forward-auth/) to secure your Traefik-exposed services with an additional layer of authentication

## Preparation

@@ -12,9 +12,9 @@ It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a we

## Ingredients

-1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-2. [Traefik](/ha-docker-swarm/traefik) configured per design
+2. [Traefik](/docker-swarm/traefik) configured per design
-3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for LDAP Account Manager, pointed to your [keepalived](/docker-swarm/keepalived/) IP
4. [NextCloud](/recipes/nextcloud/) installed and operational
5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm

@@ -17,13 +17,13 @@ Similar to the other backup options in the Cookbook, we can use Duplicati to bac
- Cloud services (OneDrive, Google Drive, etc)

!!! note
-Since Duplicati itself offers no user authentication, this design secures Duplicati behind [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth), so that in order to gain access to the Duplicati UI at all, authentication through the mechanism configured in traefik-forward-auth (_to GitHub, GitLab, Google, etc_) must have already occurred.
+Since Duplicati itself offers no user authentication, this design secures Duplicati behind [Traefik Forward Auth](/docker-swarm/traefik-forward-auth), so that in order to gain access to the Duplicati UI at all, authentication through the mechanism configured in traefik-forward-auth (_to GitHub, GitLab, Google, etc_) must have already occurred.

## Ingredients

!!! summary "Ingredients"
-* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-* [X] [Traefik](/ha-docker-swarm/traefik) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design
+* [X] [Traefik](/docker-swarm/traefik) and [Traefik-Forward-Auth](/docker-swarm/traefik-forward-auth) configured per design
* [X] Credentials for one of Duplicati's supported upload destinations

## Preparation
@@ -28,7 +28,7 @@ So what does this mean for our stack? It means we can leverage Duplicity to back

## Ingredients

-1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
2. Credentials for one of Duplicity's supported upload destinations

## Preparation
@@ -90,7 +90,7 @@ Repeat after me: "If you don't verify your backup, **it's not a backup**".
!!! warning
Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data.

-Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/ha-docker-swarm/traefik/), since this is likely to exist for every reader_).
+Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/docker-swarm/traefik/), since this is likely to exist for every reader_).

```bash
docker run --env-file duplicity.env -it --rm \
@@ -83,6 +83,6 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas

[^1]: I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
-[^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.
+[^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

--8<-- "recipe-footer.md"
@@ -132,7 +132,7 @@ Superuser created successfully.
root@swarm:~#
```

-[^1]: Since the whole purpose of media sharing is to share **publicly**, and Funkwhale includes robust user authentication, this recipe doesn't employ traefik-based authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
+[^1]: Since the whole purpose of media sharing is to share **publicly**, and Funkwhale includes robust user authentication, this recipe doesn't employ traefik-based authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).
[^2]: These instructions are an opinionated simplification of the official instructions found at <https://docs.funkwhale.audio/installation/docker.html>
[^3]: It should be noted that if you import your existing media, the files will be **copied** into Funkwhale's data folder. There doesn't seem to be a way to point Funkwhale at an existing collection and have it just play it from the filesystem. To this end, be prepared for double disk space usage if you plan to import your entire music collection!
[^5]: No consideration is given at this point to backing up the Funkwhale data. Post a comment below if you'd like to see a backup container added!
@@ -13,9 +13,9 @@ While a runner isn't strictly required to use GitLab, if you want to do CI, you'
!!! summary "Ingredients"
Existing:

-1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+1. [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
-2. [X] [Traefik](/ha-docker-swarm/traefik) configured per design
+2. [X] [Traefik](/docker-swarm/traefik) configured per design
-3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
+3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/docker-swarm/keepalived/) IP
4. [X] [GitLab](/recipes/gitlab) installation (see previous recipe)

## Preparation
@@ -29,7 +29,7 @@ As you'll note in the (_real world_) screenshot above, my requirements for a per
Gollum meets all these requirements, and as an added bonus, is extremely fast and lightweight.

!!! note
-Since Gollum itself offers no user authentication, this design secures gollum behind [traefik-forward-auth](/ha-docker-swarm/traefik-forward-auth/), so that in order to gain access to the Gollum UI at all, authentication must have already occurred.
+Since Gollum itself offers no user authentication, this design secures gollum behind [traefik-forward-auth](/docker-swarm/traefik-forward-auth/), so that in order to gain access to the Gollum UI at all, authentication must have already occurred.

--8<-- "recipe-standard-ingredients.md"

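In practical terms, "securing a service behind traefik-forward-auth" means attaching a forward-auth middleware to the service's Traefik router, so every request is bounced to the auth service before Gollum ever sees it. A rough sketch in Traefik v2 label style (the middleware name, hostname and auth-service address are illustrative, not lifted from the recipe):

```yaml
services:
  gollum:
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.gollum.rule=Host(`gollum.example.com`)
        # Requests must clear the forward-auth middleware before reaching Gollum
        - traefik.http.routers.gollum.middlewares=forward-auth
        - traefik.http.services.gollum.loadbalancer.server.port=4567
        # The middleware delegates the allow/deny decision to the traefik-forward-auth service
        - traefik.http.middlewares.forward-auth.forwardauth.address=http://traefik-forward-auth:4181
        - traefik.http.middlewares.forward-auth.forwardauth.authResponseHeaders=X-Forwarded-User
```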
@@ -12,7 +12,7 @@ Description. IPFS is a peer-to-peer distributed file system that seeks to connec
## Ingredients

-1. [Docker swarm cluster](/ha-docker-swarm/design/)
+1. [Docker swarm cluster](/docker-swarm/design/)

## Preparation

@@ -93,6 +93,6 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas
[^1]: I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
-[^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.
+[^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

--8<-- "recipe-footer.md"

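Concretely, that means only Jellyfin's plain-HTTP port (8096) is referenced in the Traefik labels, and its native HTTPS port (8920) is never published; a rough Traefik v2-style sketch (hostname, network and certresolver names are illustrative assumptions):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
        - traefik.http.routers.jellyfin.tls.certresolver=letsencrypt   # assumed resolver name
        # Traefik terminates TLS and proxies plain HTTP to Jellyfin on 8096
        - traefik.http.services.jellyfin.loadbalancer.server.port=8096

networks:
  traefik_public:
    external: true
```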
@@ -3,7 +3,7 @@
!!! warning
This is not a complete recipe - it's an **optional** component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.

-KeyCloak gets really sexy when you integrate it into your [OpenLDAP](/recipes/openldap/) stack (_also, it's great not to have to play with ugly LDAP tree UIs_). Note that OpenLDAP integration is **not necessary** if you want to use KeyCloak with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) - all you need for that is [local users](/recipes/keycloak/create-user/), and an [OIDC client](http://localhost:8000/recipes/keycloak/setup-oidc-provider/).
+KeyCloak gets really sexy when you integrate it into your [OpenLDAP](/recipes/openldap/) stack (_also, it's great not to have to play with ugly LDAP tree UIs_). Note that OpenLDAP integration is **not necessary** if you want to use KeyCloak with [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) - all you need for that is [local users](/recipes/keycloak/create-user/), and an [OIDC client](http://localhost:8000/recipes/keycloak/setup-oidc-provider/).

## Ingredients

@@ -57,7 +57,7 @@ For each of the following mappers, click the name, and set the "_Read Only_" fla
## Summary

-We've set up a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
+We've set up a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

!!! Summary
Created:

@@ -30,7 +30,7 @@ Once your user is created, to set their password, click on the "**Credentials**"
## Summary

-We've set up users in KeyCloak, which we can now use to authenticate to KeyCloak when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
+We've set up users in KeyCloak, which we can now use to authenticate to KeyCloak when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

!!! Summary
Created:

@@ -6,7 +6,7 @@ description: Kick-ass OIDC and identity management
[KeyCloak](https://www.keycloak.org/) is "_an open source identity and access management solution_". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML.

-KeyCloak's OpenID provider can also be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipes/autopirate/nzbget/) with an extra layer of authentication.
+KeyCloak's OpenID provider can also be used in combination with [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipes/autopirate/nzbget/) with an extra layer of authentication.



@@ -3,7 +3,7 @@
!!! warning
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.

-Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), we'll set up a client in KeyCloak...
+Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/), we'll set up a client in KeyCloak...

## Ingredients

@@ -14,7 +14,7 @@ Having an authentication provider is not much use until you start authenticating
New:

-* [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe for more information
+* [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) recipe for more information

## Preparation

@@ -45,7 +45,7 @@ Now that you've changed the access type, and clicked **Save**, an additional **C
## Summary

-We've set up an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm is `https://<your-keycloak-url>/realms/master/.well-known/openid-configuration`
+We've set up an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm is `https://<your-keycloak-url>/realms/master/.well-known/openid-configuration`

!!! Summary
Created:

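That discovery URL is what a forward-auth proxy ultimately consumes. As a rough sketch of how it might be wired up (the image, variable names, hostnames and secrets below are illustrative assumptions, not part of this recipe), a traefik-forward-auth service is typically pointed at the realm's issuer URL and given the client ID and secret created above:

```yaml
version: "3.2"

services:
  traefik-forward-auth:
    image: thomseddon/traefik-forward-auth:2   # assumed image/tag, for illustration only
    environment:
      DEFAULT_PROVIDER: oidc
      # The proxy appends /.well-known/openid-configuration to this issuer URL itself
      PROVIDERS_OIDC_ISSUER_URL: https://keycloak.example.com/realms/master
      PROVIDERS_OIDC_CLIENT_ID: traefik-forward-auth
      PROVIDERS_OIDC_CLIENT_SECRET: ${OIDC_CLIENT_SECRET}
      SECRET: ${COOKIE_SECRET}   # random string used to sign the auth cookie
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```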
@@ -87,6 +87,6 @@ networks:
Launch the Linx stack by running ```docker stack deploy linx -c <path-to-docker-compose.yml>```

-[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
+[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

--8<-- "recipe-footer.md"

@@ -16,7 +16,7 @@ docker-mailserver doesn't include a webmail client, and one is not strictly need
## Ingredients

-1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
+1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
2. LetsEncrypt authorized email address for domain
3. Access to manage DNS records for domains

@@ -187,7 +187,7 @@ Want to use Calendar/Contacts on your iOS device? Want to avoid dictating long,
Huzzah! NextCloud supports [service discovery for CalDAV/CardDAV](https://tools.ietf.org/html/rfc6764), allowing you to simply tell your device the primary URL of your server (_**nextcloud.batcave.org**, for example_), and have the device figure out the correct WebDAV path to use.

-We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](/ha-docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to set up SSL **within** the NextCloud container.
+We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](/docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to set up SSL **within** the NextCloud container.

When using a reverse proxy, your device requests a URL from your proxy (<https://nextcloud.batcave.com/.well-known/caldav>), and the reverse proxy then passes that request **unencrypted** to the internal URL of the NextCloud instance (i.e., <http://172.16.12.123/.well-known/caldav>)

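One common way to make that discovery work behind the proxy is to have Traefik itself redirect the `/.well-known/caldav` and `/.well-known/carddav` URLs to NextCloud's real DAV endpoint (`/remote.php/dav/`). A rough sketch in Traefik v2 label syntax, assuming a router named `nextcloud` already exists for the service (the names here are illustrative, not necessarily what this recipe uses):

```yaml
services:
  nextcloud:
    deploy:
      labels:
        # Redirect CalDAV/CardDAV service-discovery URLs to NextCloud's DAV endpoint.
        # The $$ escape stops docker-compose interpolating ${1} before Traefik sees it.
        - traefik.http.middlewares.nextcloud-dav.redirectregex.regex=https://(.*)/.well-known/(card|cal)dav
        - traefik.http.middlewares.nextcloud-dav.redirectregex.replacement=https://$${1}/remote.php/dav/
        - traefik.http.middlewares.nextcloud-dav.redirectregex.permanent=true
        - traefik.http.routers.nextcloud.middlewares=nextcloud-dav
```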
@@ -164,6 +164,6 @@ networks:
Launch the nightscout stack by running ```docker stack deploy nightscout -c <path-to-docker-compose.yml>```

-[^1]: Most of the time, you'll need an app which syncs to Nightscout, and these apps won't support OIDC auth, so this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). Instead, NightScout is secured entirely with your `API_SECRET` above (*although it is possible to add more users once you're an admin*)
+[^1]: Most of the time, you'll need an app which syncs to Nightscout, and these apps won't support OIDC auth, so this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). Instead, NightScout is secured entirely with your `API_SECRET` above (*although it is possible to add more users once you're an admin*)

--8<-- "recipe-footer.md"

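In other words, the whole authentication story for Nightscout hangs off that one environment variable; a minimal sketch (only `API_SECRET` is Nightscout's own variable name, the rest is illustrative):

```yaml
services:
  nightscout:
    environment:
      # Anyone presenting this secret gets admin access, so make it long and random,
      # and share it only with the uploader apps that need to sync
      API_SECRET: ${NIGHTSCOUT_API_SECRET}
```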
@@ -101,7 +101,7 @@ networks:
* Expose Portainer via Traefik with valid LetsEncrypt SSL certs
* Optionally protect Portainer's web UI with OIDC auth via Traefik Forward Auth
-* Use filesystem paths instead of Docker volumes for maximum "swarminess" (*We want an HA swarm, and HA Docker Volumes are a PITA, so we just use our [ceph shared storage](/ha-docker-swarm/shared-storage-ceph/)*)
+* Use filesystem paths instead of Docker volumes for maximum "swarminess" (*We want an HA swarm, and HA Docker Volumes are a PITA, so we just use our [ceph shared storage](/docker-swarm/shared-storage-ceph/)*)

## Serving

@@ -84,6 +84,6 @@ networks:
Launch the Linx stack by running ```docker stack deploy linx -c <path-to-docker-compose.yml>```

-[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
+[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

--8<-- "recipe-footer.md"

@@ -13,7 +13,7 @@ Wekan allows to create Boards, on which Cards can be moved around between a numb
There's a [video](https://www.youtube.com/watch?v=N3iMLwCNOro) of the developer showing off the app, as well as a [functional demo](https://boards.wekan.team/b/D2SzJKZDS4Z48yeQH/wekan-open-source-kanban-board-with-mit-license).

!!! note
-For added privacy, this design secures wekan behind a [traefik-forward-auth](/ha-docker-swarm/traefik-forward-auth/), so that in order to gain access to the wekan UI at all, authentication must have already occurred.
+For added privacy, this design secures wekan behind a [traefik-forward-auth](/docker-swarm/traefik-forward-auth/), so that in order to gain access to the wekan UI at all, authentication must have already occurred.

--8<-- "recipe-standard-ingredients.md"

@@ -10,7 +10,7 @@ description: Terminal in a browser, baby!
## Why would you need SSH in a browser window?

-Need shell access to a node with no external access? Deploy Wetty behind a [traefik-forward-auth](/ha-docker-swarm/traefik-forward-auth/) with an SSL-terminating reverse proxy ([traefik](/ha-docker-swarm/traefik/)), and suddenly you have the means to SSH to your private host from any web browser (_protected by your [traefik-forward-auth](/ha-docker-swarm/traefik-forward-auth/) of course._)
+Need shell access to a node with no external access? Deploy Wetty behind a [traefik-forward-auth](/docker-swarm/traefik-forward-auth/) with an SSL-terminating reverse proxy ([traefik](/docker-swarm/traefik/)), and suddenly you have the means to SSH to your private host from any web browser (_protected by your [traefik-forward-auth](/docker-swarm/traefik-forward-auth/) of course._)

Here are some other possible use cases:

@@ -4,8 +4,8 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we
| Network | Range |
|---------|-------|
-| [Traefik](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/traefik/) | _unspecified_ |
+| [Traefik](https://geek-cookbook.funkypenguin.co.nz/docker-swarm/traefik/) | _unspecified_ |
-| [Docker-cleanup](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/docker-swarm-mode/#setup-automated-cleanup) | 172.16.0.0/24 |
+| [Docker-cleanup](https://geek-cookbook.funkypenguin.co.nz/docker-swarm/docker-swarm-mode/#setup-automated-cleanup) | 172.16.0.0/24 |
| [Mail Server](https://geek-cookbook.funkypenguin.co.nz/recipes/mail/) | 172.16.1.0/24 |
| [Gitlab](https://geek-cookbook.funkypenguin.co.nz/recipes/gitlab/) | 172.16.2.0/24 |
| [Wekan](https://geek-cookbook.funkypenguin.co.nz/recipes/wekan/) | 172.16.3.0/24 |

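Within each recipe's compose file, pinning a stack to its allocated range is typically done with an IPAM block on the stack's overlay network; a minimal sketch using the Wekan range above (the network name is illustrative):

```yaml
networks:
  internal:
    driver: overlay
    ipam:
      config:
        # Subnet allocated to this stack in the table above
        - subnet: 172.16.3.0/24
```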
@@ -1,3 +1,3 @@
# Oauth2 proxy

-I've deprecated the oauth2-proxy recipe in favor of [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). It's infinitely more scalable and easier to manage!
+I've deprecated the oauth2-proxy recipe in favor of [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). It's infinitely more scalable and easier to manage!

mkdocs.yml
@@ -41,21 +41,22 @@ plugins:
nav:
- Home: index.md
- Docker Swarm:
+- docker-swarm/index.md
- Preparation:
-- Design: ha-docker-swarm/design.md
+- Design: docker-swarm/design.md
-- Nodes: ha-docker-swarm/nodes.md
+- Nodes: docker-swarm/nodes.md
-- Shared Storage (Ceph): ha-docker-swarm/shared-storage-ceph.md
+- Shared Storage (Ceph): docker-swarm/shared-storage-ceph.md
-- Shared Storage (GlusterFS): ha-docker-swarm/shared-storage-gluster.md
+- Shared Storage (GlusterFS): docker-swarm/shared-storage-gluster.md
-- Keepalived: ha-docker-swarm/keepalived.md
+- Keepalived: docker-swarm/keepalived.md
-- Docker Swarm Mode: ha-docker-swarm/docker-swarm-mode.md
+- Docker Swarm Mode: docker-swarm/docker-swarm-mode.md
-- Traefik: ha-docker-swarm/traefik.md
+- Traefik: docker-swarm/traefik.md
- Traefik Forward Auth:
-- ha-docker-swarm/traefik-forward-auth/index.md
+- docker-swarm/traefik-forward-auth/index.md
-- Dex (static): ha-docker-swarm/traefik-forward-auth/dex-static.md
+- Dex (static): docker-swarm/traefik-forward-auth/dex-static.md
-- Google: ha-docker-swarm/traefik-forward-auth/google.md
+- Google: docker-swarm/traefik-forward-auth/google.md
-- KeyCloak: ha-docker-swarm/traefik-forward-auth/keycloak.md
+- KeyCloak: docker-swarm/traefik-forward-auth/keycloak.md
-- Authelia: ha-docker-swarm/authelia.md
+- Authelia: docker-swarm/authelia.md
-- Registry: ha-docker-swarm/registry.md
+- Registry: docker-swarm/registry.md
- Mail Server: recipes/mail.md
- Duplicity: recipes/duplicity.md
- Chef's Favorites:

@@ -130,12 +131,12 @@ nav:
- Restic: recipes/restic.md
- RSS Bridge: recipes/rss-bridge.md
- Tiny Tiny RSS: recipes/tiny-tiny-rss.md
-- Traefik: ha-docker-swarm/traefik.md
+- Traefik: docker-swarm/traefik.md
- Traefik Forward Auth:
-- ha-docker-swarm/traefik-forward-auth/index.md
+- docker-swarm/traefik-forward-auth/index.md
-- Dex (static): ha-docker-swarm/traefik-forward-auth/dex-static.md
+- Dex (static): docker-swarm/traefik-forward-auth/dex-static.md
-- Google: ha-docker-swarm/traefik-forward-auth/google.md
+- Google: docker-swarm/traefik-forward-auth/google.md
-- KeyCloak: ha-docker-swarm/traefik-forward-auth/keycloak.md
+- KeyCloak: docker-swarm/traefik-forward-auth/keycloak.md
- Wallabag: recipes/wallabag.md
- Wekan: recipes/wekan.md
- Wetty: recipes/wetty.md

@@ -162,6 +163,7 @@ nav:
# - Helm: kubernetes/wip.md
# - GitHub Actions: kubernetes/wip.md
- Flux:
+- kubernetes/deployment/flux/index.md
- Install: kubernetes/deployment/flux/install.md
- Design: kubernetes/deployment/flux/design.md
- Operate: kubernetes/deployment/flux/operate.md

@@ -7,3 +7,11 @@ https://geek-cookbook.funkypenguin.co.nz/recipies/* https://geek-cookbook.funkyp
# Proxy plausible analytics per https://plausible.io/docs/proxy/guides/netlify
/js/i-am-groot.js https://plausible.io/js/plausible.outbound-links.js 200
/api/event https://plausible.io/api/event 202
+
+# Now we have indexes on subfolders, we can use a better URL for docker-swarm
+/docker-swarm/design /docker/swarm 301!
+/ha-docker-swarm/design /docker-swarm 301!
+
+# Prefer "docker-swarm" to "ha-docker-swarm"
+/ha-docker-swarm/* /docker-swarm/:splat 301!
+