mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-25 23:51:49 +00:00

Fix more broken links, add lazy-loading to images

This commit is contained in:
David Young
2022-07-10 11:01:46 +12:00
parent 635b43afb2
commit 76e919afe9
78 changed files with 166 additions and 155 deletions


@@ -7,7 +7,7 @@ description: Authelia is an open-source authentication and authorization server
[Authelia](https://github.com/authelia/authelia) is an open-source authentication and authorization server providing 2-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion to reverse proxies like Nginx, Traefik, or HAProxy, letting them know whether requests should pass through. Unauthenticated users are redirected to the Authelia sign-in portal instead.
-![Authelia Screenshot](../images/authelia.png)
+![Authelia Screenshot](../images/authelia.png){ loading=lazy }
Features include:
@@ -245,7 +245,7 @@ Launch the Authelia stack by running ```docker stack deploy authelia -c <path -t
To test that the service works, try to access a service to which you added the middleware label. If it works, you will be presented with a login screen:
-![Authelia Screenshot](../images/authelia_login.png)
+![Authelia Screenshot](../images/authelia_login.png){ loading=lazy }
[^1]: The inclusion of Authelia was due to the efforts of @bencey in Discord (Thanks Ben!)
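To illustrate the middleware label mentioned above, a minimal sketch of a protected service follows. The service name, hostname, and the middleware name `authelia` are all assumptions — substitute whatever names you declared in your own Traefik and Authelia configuration:

```yaml
# Hypothetical protected service; names and hostname are assumptions
services:
  whoami:
    image: traefik/whoami
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
        # Attach the (assumed) Authelia forward-auth middleware to this router
        - traefik.http.routers.whoami.middlewares=authelia
        - traefik.http.services.whoami.loadbalancer.server.port=80
```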


@@ -59,7 +59,7 @@ Assuming a 3-node configuration, under normal circumstances the following is ill
* The **traefik** service (*in swarm mode*) receives incoming requests (*on HTTP and HTTPS*), and forwards them to individual containers. Traefik knows the containers' names because it can read the Docker socket.
* All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (*no matter which node it's on*), and then on to the target backend.
-![HA function](../images/docker-swarm-ha-function.png)
+![HA function](../images/docker-swarm-ha-function.png){ loading=lazy }
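The traffic flow described above relies on swarm's ingress routing mesh: publishing a port on a swarm service makes it answer on that port on *every* node. A minimal sketch of how the traefik service might publish TCP 80/443 (the image tag is an assumption):

```yaml
# Sketch only: any node (and therefore the keepalived VIP) can accept
# traffic on 80/443 and the routing mesh delivers it to traefik
services:
  traefik:
    image: traefik:v2.10   # version is an assumption
    ports:
      - target: 80
        published: 80
      - target: 443
        published: 443
```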
### Node failure
@@ -71,7 +71,7 @@ In the case of a failure (or scheduled maintenance) of one of the nodes, the fol
* The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
* The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.
-![HA function](../images/docker-swarm-node-failure.png)
+![HA function](../images/docker-swarm-node-failure.png){ loading=lazy }
### Node restore
@@ -82,7 +82,7 @@ When the failed (*or upgraded*) host is restored to service, the following is il
* Existing containers which were migrated off the node are not migrated back
* Keepalived VIP regains full redundancy
-![HA function](../images/docker-swarm-node-restore.png)
+![HA function](../images/docker-swarm-node-restore.png){ loading=lazy }
### Total cluster failure


@@ -13,7 +13,7 @@ Normally this is done using an HA load-balancer, but since Docker Swarm already pro
This is accomplished with the use of keepalived on at least two nodes.
-![Keepalived Screenshot](../images/keepalived.png)
+![Keepalived Screenshot](../images/keepalived.png){ loading=lazy }
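A minimal sketch of what the keepalived configuration on the highest-priority node might look like — the interface name, router ID, password, and VIP are all assumptions; the other nodes would carry the same block with lower `priority` values:

```
# keepalived.conf sketch; values below are placeholders
vrrp_instance swarm_vip {
  interface eth0            # assumed NIC name
  virtual_router_id 51
  priority 150              # e.g. 100 and 50 on the other two nodes
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass supersecret   # placeholder password
  }
  virtual_ipaddress {
    192.168.1.2             # the shared VIP
  }
}
```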
## Ingredients


@@ -2,7 +2,7 @@
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.
-![Ceph Screenshot](../images/ceph.png)
+![Ceph Screenshot](../images/ceph.png){ loading=lazy }
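As an illustration of why shared storage matters, a hedged sketch of a service whose data lives on storage mounted identically on every node — the `/var/data` path and the placeholder service are assumptions:

```yaml
# Assumes a Ceph-backed filesystem is mounted at /var/data on every node,
# so the container's data survives being rescheduled to another host
services:
  app:
    image: nginx   # placeholder service
    volumes:
      - /var/data/app:/usr/share/nginx/html
```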
## Ingredients


@@ -23,7 +23,7 @@ This is the role of Traefik Forward Auth.
When employing Traefik Forward Auth as "[middleware](https://doc.traefik.io/traefik/middlewares/forwardauth/)", the forward-auth process sits in the middle of this transaction: Traefik receives the incoming request and "checks in" with the auth server to determine whether further authentication is required. If the user is authenticated, the auth server returns a 200 response code, and Traefik forwards the request to the backend. If not, Traefik passes the auth server's response back to the user - this usually directs the user to an authentication provider (*[Google][tfa-google], [Keycloak][tfa-keycloak], and [Dex][tfa-dex-static] are common examples*), so that they can perform a login.
Illustrated below:
-![Traefik Forward Auth](../../images/traefik-forward-auth.png)
+![Traefik Forward Auth](../../images/traefik-forward-auth.png){ loading=lazy }
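The middleware described above might be attached with labels along these lines — the container name `traefik-forward-auth` and router name `myapp` are assumptions; port 4181 is traefik-forward-auth's documented default:

```yaml
# Hypothetical labels defining and attaching the forward-auth middleware
labels:
  - traefik.http.middlewares.forward-auth.forwardauth.address=http://traefik-forward-auth:4181
  - traefik.http.middlewares.forward-auth.forwardauth.authResponseHeaders=X-Forwarded-User
  - traefik.http.routers.myapp.middlewares=forward-auth
```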
The advantage of this design is additional security. If I'm deploying a web app which I expect only an authenticated user to access (*unlike something intended to be accessed publicly, like [Linx][linx]*), I'll pass the request through Traefik Forward Auth. The overhead is negligible, and the additional layer of security is well worth it.


@@ -11,7 +11,7 @@ There are some gaps to this approach though:
To deal with these gaps, we need a front-end load-balancer, and in this design, that role is provided by [Traefik](https://traefik.io/).
-![Traefik Screenshot](../images/traefik.png)
+![Traefik Screenshot](../images/traefik.png){ loading=lazy }
!!! tip
In 2021, this recipe was updated for Traefik v2. There's really no reason to be using Traefik v1 anymore ;)
@@ -233,7 +233,7 @@ root@raphael:~#
You should now be able to access[^1] your traefik instance on `https://traefik.<your domain\>` (*if your LetsEncrypt certificate is working*), or `http://<node IP\>:8080` (*if it's not*). It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :grin:
-![Screenshot of Traefik, post-launch](/images/traefik-post-launch.png)
+![Screenshot of Traefik, post-launch](/images/traefik-post-launch.png){ loading=lazy }
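For orientation, a heavily-abbreviated sketch of how the entrypoints and the dashboard port referenced above might be declared — this is not the full recipe, and the image tag is an assumption:

```yaml
# Minimal sketch: HTTP/HTTPS entrypoints plus the insecure API
# serving the dashboard on :8080
services:
  traefik:
    image: traefik:v2.10            # version is an assumption
    command:
      - --providers.docker.swarmMode=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --api.insecure=true         # dashboard on :8080 without auth; lock down in production
    ports:
      - 80:80
      - 443:443
      - 8080:8080
```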
### Summary