Mirror of https://github.com/funkypenguin/geek-cookbook/

Tidy up like it's 2019

Author: David Young
Date: 2019-05-15 10:54:50 +12:00
Parent: 480dee111a
Commit: 4654c131b2
97 changed files with 266 additions and 204 deletions

View File

@@ -1,11 +1,11 @@
# Design
In the design described below, the "private cloud" platform is:
In the design described below, our "private cloud" platform is:
* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it on your garage server today, run it in AWS tomorrow_)
* **Secure** (_access protected with LetsEncrypt certificates_)
* **Secure** (_access protected with [LetsEncrypt certificates](/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](/ha-docker-swarm/traefik-forward-auth/)_)
* **Automated** (_requires minimal care and feeding_)
## Design Decisions
@@ -15,7 +15,10 @@ In the design described below, the "private cloud" platform is:
This means that:
* At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
* GlusterFS is employed for share filesystem, because it too can be made tolerant of a single failure.
* [Ceph](/ha-docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.
!!! note
An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need Ceph either, and you can simply use a local volume on your host for storage (*see the sketch below*). You'll be able to migrate to Ceph/more nodes if/when you expand.
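As a concrete sketch of the single-node approach (*stack and service names below are invented for illustration, not taken from any recipe*), a service can simply bind-mount a host path where a multi-node swarm would use a CephFS mount:

```
# Hypothetical single-node stack: plain host path for storage, no Ceph required
cat > /var/data/config/whoami/whoami.yml << 'EOF'
version: "3"
services:
  whoami:
    image: containous/whoami
    volumes:
      - /var/data/whoami:/data
EOF
docker stack deploy whoami -c /var/data/config/whoami/whoami.yml
```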
**Where multiple solutions to a requirement exist, preference will be given to the most portable solution.**
@@ -26,30 +29,31 @@ This means that:
## Security
Under this design, the only inbound connections we're permitting to our docker swarm are:
Under this design, the only inbound connections we're permitting to our docker swarm in a **minimal** configuration (*you may add custom services later, like UniFi Controller*) are:
### Network Flows
* HTTP (TCP 80) : Redirects to https
* HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy
* **HTTP (TCP 80)** : Redirects to https
* **HTTPS (TCP 443)** : Serves individual docker containers via SSL-encrypted reverse proxy
### Authentication
* Where the proxied application provides a trusted level of authentication, or where the application requires public exposure,
* Where the hosted application provides a trusted level of authentication (*e.g., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*e.g., [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
* Where the hosted application provides inadequate (*e.g., [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*e.g., [Gollum](/recipes/gollum/)*), further authentication against an OAuth provider will be required.
## High availability
### Normal function
Assuming 3 nodes, under normal circumstances the following is illustrated:
Assuming a 3-node configuration, under normal circumstances the following is illustrated:
* All 3 nodes provide shared storage via GlusterFS, which is provided by a docker container on each node. (i.e., not running in swarm mode)
* All 3 nodes provide shared storage via Ceph, which is provided by a docker container on each node.
* All 3 nodes participate in the Docker Swarm as managers.
* The various containers belonging to the application "stacks" deployed within Docker Swarm are automatically distributed amongst the swarm nodes.
* Persistent storage for the containers is provide via GlusterFS mount.
* The **traefik** service (in swarm mode) receives incoming requests (on http and https), and forwards them to individual containers. Traefik knows the containers names because it's able to access the docker socket.
* All 3 nodes run keepalived, at different priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (no matter which node it's on), and then onto the target backend.
* Persistent storage for the containers is provided via a CephFS mount.
* The **traefik** service (*in swarm mode*) receives incoming requests (*on HTTP and HTTPS*), and forwards them to individual containers. Traefik knows the containers' names because it's able to read the docker socket.
* All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (*no matter which node it's on*), and then onto the target backend.
![HA function](../images/docker-swarm-ha-function.png)
@@ -57,9 +61,9 @@ Assuming 3 nodes, under normal circumstances the following is illustrated:
In the case of a failure (or scheduled maintenance) of one of the nodes, the following is illustrated:
* The failed node no longer participates in GlusterFS, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
* The failed node no longer participates in Ceph, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
* The remaining two nodes in Docker Swarm achieve a quorum and agree that the failed node is to be removed.
* The (possibly new) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
* The (*possibly new*) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
* The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
* The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.
@@ -67,9 +71,9 @@ In the case of a failure (or scheduled maintenance) of one of the nodes, the fol
### Node restore
When the failed (or upgraded) host is restored to service, the following is illustrated:
When the failed (*or upgraded*) host is restored to service, the following is illustrated:
* GlusterFS regains full redundancy
* Ceph regains full redundancy
* Docker Swarm managers become aware of the recovered node, and will use it for scheduling **new** containers
* Existing containers which were migrated off the node are not migrated back
* Keepalived VIP regains full redundancy
@@ -90,7 +94,7 @@ In summary, although I suffered an **unplanned power outage to all of my infrast
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -4,21 +4,20 @@ For truly highly-available services with Docker containers, we need an orchestra
## Ingredients
* 3 x CentOS Atomic hosts (bare-metal or VMs). A reasonable minimum would be:
* 1 x vCPU
* 1GB RAM
* 10GB HDD
* Hosts must be within the same subnet, and connected on a low-latency link (i.e., no WAN links)
!!! summary
* [X] 3 x modern linux hosts (*bare-metal or VMs*). A reasonable minimum would be:
* 2 x vCPU
* 2GB RAM
* 20GB HDD
* [X] Hosts must be within the same subnet, and connected on a low-latency link (*i.e., no WAN links*)
## Preparation
### Release the swarm!
Now, to launch my swarm:
Now, to launch a swarm: pick a target node, and run `docker swarm init`.
```docker swarm init```
Yeah, that was it. Now I have a 1-node swarm.
Yeah, that was it. Seriously. Now we have a 1-node swarm.
```
[root@ds1 ~]# docker swarm init
@@ -35,7 +34,7 @@ To add a manager to this swarm, run 'docker swarm join-token manager' and follow
[root@ds1 ~]#
```
Run ```docker node ls``` to confirm that I have a 1-node swarm:
Run `docker node ls` to confirm that you have a 1-node swarm:
```
[root@ds1 ~]# docker node ls
@@ -44,7 +43,7 @@ b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leade
[root@ds1 ~]#
```
Note that when I ran ```docker swarm init``` above, the CLI output gave me a command to run to join further nodes to my swarm. This would join the nodes as __workers__ (as opposed to __managers__). Workers can easily be promoted to managers (and demoted again), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
Note that when you run `docker swarm init` above, the CLI output gives you a command to run to join further nodes to your swarm. This command would join the nodes as __workers__ (*as opposed to __managers__*). Workers can easily be promoted to managers (*and demoted again*), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
On the first swarm node, generate the necessary token to join another manager by running ```docker swarm join-token manager```:
@@ -59,7 +58,7 @@ To add a manager to this swarm, run the following command:
[root@ds1 ~]#
```
Run the command provided on your second node to join it to the swarm as a manager. After adding the second node, the output of ```docker node ls``` (on either host) should reflect two nodes:
Run the command provided on your other nodes to join them to the swarm as managers. After adding each node, the output of `docker node ls` (*on any node*) should reflect all the nodes:
````
@@ -70,19 +69,6 @@ xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reach
[root@ds2 davidy]#
````
Repeat the process to add your third node.
Finally, ```docker node ls``` should reflect that you have 3 reachable manager nodes, one of whom is the "Leader":
```
[root@ds3 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
36b4twca7i3hkb7qr77i0pr9i ds1.example.com Ready Active Reachable
l14rfzazbmibh1p9wcoivkv1s * ds3.example.com Ready Active Reachable
tfsgxmu7q23nuo51wwa4ycpsj ds2.example.com Ready Active Leader
[root@ds3 ~]#
```
### Setup automated cleanup
Docker swarm doesn't do any cleanup of old images, so as you experiment with various stacks, and as updated containers are released upstream, you'll soon find yourself losing gigabytes of disk space to old, unused images.
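The recipe's own cleanup mechanism isn't visible in this hunk; as a rough sketch (*assuming you're happy to remove **all** unused images on a daily schedule*), a cron job on each node would reclaim the space:

```
#!/bin/bash
# Sketch: /etc/cron.daily/docker-image-prune (not the recipe's own script)
# Removes all unused images; running containers and volumes are untouched.
docker image prune --all --force
```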
@@ -135,8 +121,8 @@ Create /var/data/config/shepherd/shepherd.env as follows:
```
# Don't auto-update Plex or Emby, I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
BLACKLIST_SERVICES="plex_plex emby_emby"
# Run every 24 hours. I _really_ don't need new images more frequently than that!
SLEEP_TIME=1440
# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
SLEEP_TIME=86400
```
Then create /var/data/config/shepherd/shepherd.yml as follows:
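(*The contents of shepherd.yml are elided from this hunk. The following is a minimal sketch, assuming the `mazzolino/shepherd` image; verify the image name against the full recipe before deploying:*)

```
cat > /var/data/config/shepherd/shepherd.yml << 'EOF'
version: "3"
services:
  shepherd:
    image: mazzolino/shepherd
    env_file: /var/data/config/shepherd/shepherd.env
    volumes:
      # Shepherd needs the docker socket to inspect and update services
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
EOF
docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml
```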
@@ -178,7 +164,7 @@ echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -4,20 +4,21 @@ While having a self-healing, scalable docker swarm is great for availability and
In order to provide seamless external access to clustered resources, regardless of which node they're on and tolerant of node failure, you need to present a single IP to the world for external access.
Normally this is done using a HA loadbalancer, but since Docker Swarm aready provides the load-balancing capabilities (routing mesh), all we need for seamless HA is a virtual IP which will be provided by more than one docker node.
Normally this is done using an HA loadbalancer, but since Docker Swarm already provides the load-balancing capabilities (*[routing mesh](https://docs.docker.com/engine/swarm/ingress/)*), all we need for seamless HA is a virtual IP which will be provided by more than one docker node.
This is accomplished with the use of keepalived on at least two nodes.
## Ingredients
```
!!! summary "Ingredients"
Already deployed:
[X] At least 2 x CentOS/Fedora Atomic VMs
[X] low-latency link (i.e., no WAN links)
* [X] At least 2 x swarm nodes
* [X] low-latency link (i.e., no WAN links)
New:
[ ] 3 x IPv4 addresses (one for each node and one for the virtual IP)
```
* [ ] At least 3 x IPv4 addresses (one for each node and one for the virtual IP)
## Preparation
@@ -66,10 +67,10 @@ That's it. Each node will talk to the other via unicast (no need to un-firewall
## Chef's notes
1. Some hosting platforms (OpenStack, for one) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targetted to its unique IP. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS and Azure would likely include similar protections.
1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.
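For orientation, here's a hedged sketch of what each node might run (*assuming the `osixia/keepalived` image and the 192.168.31.0/24 addressing used elsewhere in this guide; check the full recipe for the exact incantation*):

```
# Run on every node, varying KEEPALIVED_PRIORITY per node
# (the node with the highest priority holds the VIP)
docker run -d --name keepalived --restart=always \
  --cap-add=NET_ADMIN --net=host \
  -e KEEPALIVED_INTERFACE=eth0 \
  -e KEEPALIVED_VIRTUAL_IPS=192.168.31.254 \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.31.11','192.168.31.12','192.168.31.13']" \
  -e KEEPALIVED_PRIORITY=100 \
  osixia/keepalived:2.0.20
```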
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -112,7 +112,7 @@ systemctl restart docker-latest
## Chef's notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -196,7 +196,7 @@ Future enhancements to this recipe include:
1. Rather than pasting a secret key into /etc/fstab (which feels wrong), I'd prefer to be able to set "secretfile" in /etc/fstab (which just points ceph.mount to a file containing the secret), but under the current CentOS Atomic, we're stuck with "secret", per https://bugzilla.redhat.com/show_bug.cgi?id=1030402
2. This recipe was written with Ceph v11 "Jewel". Ceph have subsequently released v12 "Kraken". I've updated the recipe for the addition of "Manager" daemons, but it should be noted that the [only reader so far](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley) to attempt a Ceph install using CentOS Atomic and Ceph v12 had issues with OSDs, which led him to [move to Ubuntu 1604](https://discourse.geek-kitchen.funkypenguin.co.nz/t/shared-storage-ceph-funky-penguins-geek-cookbook/47/24?u=funkypenguin) instead.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -2,6 +2,9 @@
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.
!!! warning
This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef
## Design
### Why GlusterFS?
@@ -163,7 +166,7 @@ Future enhancements to this recipe include:
1. Migration of shared storage from GlusterFS to Ceph ([#2](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/2))
2. Correct the fact that volumes don't automount on boot ([#3](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/3))
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -103,7 +103,7 @@ What have we achieved? By adding an additional three simple labels to any servic
3. I reviewed several implementations of forward authenticators for Traefik, but found most to be rather heavy-handed, or specific to a single auth provider. @thomaseddon's go-based docker image is 7MB in size, and with the generic OIDC patch (above), it can be extended to work with any OIDC provider.
4. No, not GitHub natively, but you can federate GitHub into KeyCloak, and then use KeyCloak as the OIDC provider.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -11,15 +11,24 @@ There are some gaps to this approach though:
To deal with these gaps, we need a front-end load-balancer, and in this design, that role is provided by [Traefik](https://traefik.io/).
![Traefik Screenshot](../images/traefik.png)
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph)
!!! summary "You'll need"
Already deployed:
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph)
New to this recipe:
* [ ] Access to update your DNS records for manual/automated [LetsEncrypt](https://letsencrypt.org/docs/challenge-types/) DNS-01 validation, or ingress HTTP/HTTPS for HTTP-01 validation
## Preparation
### Prepare the host
The traefik container is aware of the __other__ docker containers in the swarm, because it has access to the docker socket at **/var/run/docker.sock**. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on our SELinux-enabled Atomic hosts, we need to add custom SELinux policy.
The traefik container is aware of the __other__ docker containers in the swarm, because it has access to the docker socket at **/var/run/docker.sock**. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on an SELinux-enabled CentOS 7 host, we need to add custom SELinux policy.
!!! tip
The following is only necessary if you're using SELinux!
@@ -38,9 +47,9 @@ make && semodule -i dockersock.pp
### Prepare traefik.toml
While it's possible to configure traefik via docker command arguments, I prefer to create a config file (traefik.toml). This allows me to change traefik's behaviour by simply changing the file, and keeps my docker config simple.
While it's possible to configure traefik via docker command arguments, I prefer to create a config file (`traefik.toml`). This allows me to change traefik's behaviour by simply changing the file, and keeps my docker config simple.
Create ```/var/data/traefik/```, and then create ```traefik.toml``` inside it as follows:
Create `/var/data/config/traefik/traefik.toml` as follows:
```
checkNewVersion = true
@@ -81,14 +90,43 @@ watch = true
swarmmode = true
```
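(*Only fragments of traefik.toml survive in this hunk. As a point of reference, here's a minimal Traefik 1.x sketch; the HTTP-01 challenge, `example.com` domain and email address are placeholder assumptions to adapt, and DNS-01 users would swap `[acme.httpChallenge]` for `[acme.dnsChallenge]`:*)

```
cat > /var/data/config/traefik/traefik.toml << 'EOF'
checkNewVersion = true
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

# LetsEncrypt automatic certificates
[acme]
email = "you@example.com"
storage = "acme.json"
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"

# Watch the swarm's docker socket for labelled services
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"
watch = true
swarmmode = true
EOF
```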
### Prepare the docker service config
!!! tip
"We'll want an overlay network, independent of our traefik stack, so that we can attach/detach all our other stacks (including traefik) to the overlay network. This way, we can undeploy/redepoly the traefik stack without having to bring every other stack first!" - voice of experience
Create `/var/data/config/traefik/traefik.yml` as follows:
```
version: "3.2"
# What is this?
#
# This stack exists solely to deploy the traefik_public overlay network, so that
# other stacks (including traefik-app) can attach to it
services:
scratch:
image: scratch
deploy:
replicas: 0
networks:
- public
networks:
public:
driver: overlay
attachable: true
ipam:
config:
- subnet: 172.16.200.0/24
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
Create /var/data/config/traefik/docker-compose.yml as follows:
Create `/var/data/config/traefik/traefik-app.yml` as follows:
```
version: "3"
@@ -97,6 +135,10 @@ services:
traefik:
image: traefik
command: --web --docker --docker.swarmmode --docker.watch --docker.domain=example.com --logLevel=DEBUG
# Note below that we use host mode to avoid source nat being applied to our ingress HTTP/HTTPS sessions
# Without host mode, all inbound sessions would have the source IP of the swarm nodes, rather than the
# original source IP, which would impact logging. If you don't care about this, you can expose ports the
# "minimal" way instead
ports:
- target: 80
published: 80
@@ -111,13 +153,16 @@ services:
protocol: tcp
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /var/data/traefik/traefik.toml:/traefik.toml:ro
- /var/data/config/traefik:/etc/traefik
- /var/data/traefik/traefik.log:/traefik.log
- /var/data/traefik/acme.json:/acme.json
labels:
- "traefik.enable=false"
networks:
- public
# Global mode makes an instance of traefik listen on _every_ node, so that regardless of which
# node the request arrives on, it'll be forwarded to the correct backend service.
deploy:
labels:
- "traefik.enable=false"
mode: global
placement:
constraints: [node.role == manager]
@@ -125,46 +170,75 @@ services:
condition: on-failure
networks:
public:
driver: overlay
ipam:
driver: default
config:
- subnet: 10.1.0.0/24
traefik_public:
external: true
```
Docker won't start an image with a bind-mount to a non-existent file, so prepare an empty acme.json (_with the appropriate permissions_) by running:
Docker won't start a service with a bind-mount to a non-existent file, so prepare an empty acme.json (_with the appropriate permissions_) by running:
```
touch /var/data/traefik/acme.json
chmod 600 /var/data/traefik/acme.json
```
!!! warning
Pay attention above. You **must** set `acme.json`'s permissions to owner-readable-only, else the container will fail to start with an [ID-10T](https://en.wikipedia.org/wiki/User_error#ID-10-T_error) error!
Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (_Chicken, meet egg._)
### Launch
Deploy traefik with ```docker stack deploy traefik -c /var/data/traefik/docker-compose.yml```
Confirm traefik is running with ```docker stack ps traefik```
## Serving
You now have:
### Launch
1. Frontend proxy which will dynamically configure itself for new backend containers
2. Automatic SSL support for all proxied resources
First, launch the traefik stack (*which will do nothing other than create an overlay network*) by running `docker stack deploy traefik -c /var/data/config/traefik/traefik.yml`:
```
[root@kvm ~]# docker stack deploy traefik -c traefik.yml
Creating network traefik_public
Creating service traefik_scratch
[root@kvm ~]#
```
Now deploy the traefik application itself (*which will attach to the overlay network*) by running `docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml`:
```
[root@kvm ~]# docker stack deploy traefik-app -c traefik-app.yml
Creating service traefik-app_app
[root@kvm ~]#
```
Confirm traefik is running with `docker stack ps traefik-app`:
```
[root@kvm ~]# docker stack ps traefik-app
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
74uipz4sgasm traefik-app_app.t4vcm8siwc9s1xj4c2o4orhtx traefik:alpine kvm.funkypenguin.co.nz Running Running 33 seconds ago *:443->443/tcp,*:80->80/tcp
[root@kvm ~]#
```
### Check Traefik Dashboard
You should now be able to access your traefik instance on http://<node IP\>:8080 - it'll look a little lonely for now (*below*), but we'll populate it as we add recipes :)
![Screenshot of Traefik, post-launch](/images/traefik-post-launch.png)
### Summary
!!! summary
We've achieved:
* [X] An overlay network to permit traefik to access all future stacks we deploy
* [X] Frontend proxy which will dynamically configure itself for new backend containers
* [X] Automatic SSL support for all proxied resources
## Chef's Notes
Additional features I'd like to see in this recipe are:
1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)!
1. Include documentation of oauth2_proxy container for protecting individual backends
2. Traefik webUI is available via HTTPS, protected with oauth_proxy
3. Pending a feature in docker-swarm to avoid NAT on routing-mesh-delivered traffic, update the design
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -1,21 +1,17 @@
# Virtual Machines
Let's start building our cloud with virtual machines. You could use bare-metal machines as well, the configuration would be the same. Given that most readers (myself included) will be using virtual infrastructure, from now on I'll be referring strictly to VMs.
I chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the VM layer because:
1. I want less responsibility for maintaining the system, including ensuring regular software updates and reboots. Atomic's idempotent nature means the OS is largely read-only, and updates/rollbacks are "atomic" (haha) procedures, which can be easily rolled back if required.
2. For someone used to administrating servers individually, Atomic is a PITA. You have to employ [tricky](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) [tricks](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) to get it to install in a non-cloud environment. It's not designed for tweaking or customizing beyond what cloud-config is capable of. For my purposes, this is good, because it forces me to change my thinking - to consider every daemon as a container, and every config as code, to be checked in and version-controlled. Atomic forces this thinking on you.
3. I want the design to be as "portable" as possible. While I run it on VPSs now, I may want to migrate it to a "cloud" provider in the future, and I'll want the most portable, reproducible design.
Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. Given that most readers (myself included) will be using virtual infrastructure, from now on I'll be referring strictly to VMs.
!!! note
In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](/recipes/plex/)), [Swarmprom](/recipes/swarmprom/), etc. In the end, I went mainstream, and now simply prefer a modern Ubuntu installation.
## Ingredients
!!! summary "Ingredients"
3 x Virtual Machines, each with:
* [ ] CentOS/Fedora Atomic
* [ ] At least 1GB RAM
* [ ] A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
* [ ] At least 2GB RAM
* [ ] At least 20GB disk space (_but it'll be tight_)
* [ ] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
@@ -30,25 +26,13 @@ I chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for th
!!! tip
If you're not using a platform with cloud-init support (i.e., you're building a VM manually, not provisioning it through a cloud provider), you'll need to refer to [trick #1](https://spinningmatt.wordpress.com/2014/01/08/a-recipe-for-starting-cloud-images-with-virt-install/) and [trick #2](http://blog.oddbit.com/2015/03/10/booting-cloud-images-with-libvirt/) for a means to override the automated setup, apply a manual password to the CentOS account, and enable SSH password logins.
### Permit connectivity between hosts
### Prefer docker-latest
Most modern Linux distributions include firewall rules which only permit minimal required incoming connections (like SSH). We'll want to allow all traffic between our nodes. The steps to achieve this in CentOS/Ubuntu are a little different...
Run the following on each node to replace the default docker 1.12 with docker 1.13 (_which we need for swarm mode_):
```
systemctl disable docker --now
systemctl enable docker-latest --now
sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker
```
#### CentOS
### Upgrade Atomic
Finally, apply any Atomic host updates, and reboot, by running: ```atomic host upgrade && systemctl reboot```.
### Permit connectivity between VMs
By default, Atomic only permits incoming SSH. We'll want to allow all traffic between our nodes, so add something like this to /etc/sysconfig/iptables:
Add something like this to `/etc/sysconfig/iptables`:
```
# Allow all inter-node communication
@@ -57,6 +41,17 @@ By default, Atomic only permits incoming SSH. We'll want to allow all traffic be
And restart iptables with ```systemctl restart iptables```
#### Ubuntu
Install the (*non-default*) persistent iptables tools, by running `apt-get install iptables-persistent`, establishing some default rules (*dpkg will prompt you to save your current ruleset*), and then add something like this to `/etc/iptables/rules.v4`:
```
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```
And refresh your running iptables rules with `iptables-restore < /etc/iptables/rules.v4`
### Enable host resolution
Depending on your hosting environment, you may have DNS automatically setup for your VMs. If not, it's useful to set up static entries in /etc/hosts for the nodes. For example, I setup the following:
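(*The author's example entries are cut off by this hunk; the following sketch just reuses the node names and subnet seen earlier in this guide:*)

```
cat >> /etc/hosts << 'EOF'
192.168.31.11 ds1 ds1.example.com
192.168.31.12 ds2 ds2.example.com
192.168.31.13 ds3 ds3.example.com
EOF
```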
@@ -80,13 +75,12 @@ ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
After completing the above, you should have:
```
[X] 3 x fresh atomic instances, at the latest releases,
running Docker v1.13 (docker-latest)
[X] 3 x fresh linux instances, ready to become swarm nodes
```
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

Binary file not shown (new image added; 73 KiB)

Binary file not shown (new image added; 354 KiB)

View File

@@ -85,7 +85,7 @@ Still with me? Good. Move on to creating your own external load balancer..
1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -131,7 +131,7 @@ Still with me? Good. Move on to creating your cluster!
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -61,7 +61,7 @@ Still with me? Good. Move on to understanding Helm charts...
1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -333,7 +333,7 @@ Still with me? Good. Move on to setting up an ingress SSL terminating proxy with
1. This is MVP of the load balancer solution. Any suggestions for improvements are welcome 😉
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -180,7 +180,7 @@ Still with me? Good. Move on to understanding Helm charts...
1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -69,7 +69,7 @@ Still with me? Good. Move on to reviewing the design elements
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -213,7 +213,7 @@ I'll be adding more Kubernetes versions of existing recipes soon. Check out the
1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -125,7 +125,7 @@ Now work your way through the list of tools below, adding whichever tools your w
* [Jackett](/recipes/autopirate/jackett/)
* [End](/recipes/autopirate/end/) (launch the stack)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -13,7 +13,7 @@ Log into each of your new tools at its respective HTTPS URL. You'll be prompted
1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -74,7 +74,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -81,7 +81,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid)_ ways to say thank you! 👏

View File

@@ -74,7 +74,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -87,7 +87,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -76,7 +76,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -72,7 +72,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -79,7 +79,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -78,7 +78,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -94,7 +94,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_).
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -79,7 +79,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -90,7 +90,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -79,7 +79,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -86,7 +86,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -76,7 +76,7 @@ Continue through the list of tools below, adding whichever tools your want to us
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -90,7 +90,7 @@ Once you've created your account, jump over to https://bitwarden.com/#download a
1. You'll notice we're not using the *official* container images (*[all 6 of them required](https://help.bitwarden.com/article/install-on-premise/#install-bitwarden)!*), but rather a [more lightweight version ideal for self-hosting](https://hub.docker.com/r/mprasil/bitwarden). All of the elements are contained within a single container, and SQLite is used for the database backend.
2. The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz) - Thanks Gerry!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -145,7 +145,7 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_pro
1. If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You'd also need to add the traefik_public network to the bookstack container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -127,7 +127,7 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be directed to the i
1. Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -304,7 +304,7 @@ Now browse your NextCloud files. Click the plus (+) sign to create a new documen
1. Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -35,7 +35,7 @@ For readability, I've split this recipe into multiple sub-recipes, which can be
1. Ultimately I hope to move all the configuration / mining executables into docker containers, but for now, they're running on a CentOS7 host for direct access to GPUs. (Apparently it _may_ be possible to pass-thru the GPUs to docker containers, but I wanted stability first, before abstracting my hardware away from my miners)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -162,7 +162,7 @@ Now, continue to the next stage of your grand mining adventure:
1. My two RX580 cards (_bought alongside each other_) perform slightly differently. GPU0 works with a 2050MHz memory clock, but GPU1 only works at 2000MHz. Anything over 2000MHz causes system instability. YMMV.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -48,7 +48,7 @@ Now, continue to the next stage of your grand mining adventure:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -98,7 +98,7 @@ Now, continue to the next stage of your grand mining adventure:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -51,7 +51,7 @@ Now, continue to the next stage of your grand mining adventure:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -48,7 +48,7 @@ Yes. It's the ultimate _#firstworldproblem_, but if you have a means to remotely
(_I hooked up a remote-controlled outlet to my rig, so that I can power-cycle it without having to crawl under the desk!_)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -86,7 +86,7 @@ Now, continue to the next stage of your grand mining adventure:
1. Ultimately I hope to move all the configuration / mining executables into docker containers, but for now, they're running on a CentOS7 host for direct access to GPUs. (_Apparently it **may** be possible to pass-thru the GPUs to docker containers, but I wanted stability first, before abstracting my hardware away from my miners_)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -157,7 +157,7 @@ Now, continue to the next stage of your grand mining adventure:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -21,7 +21,7 @@ Get in touch and share your experience - there's a special [discord](https://dis
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -34,7 +34,7 @@ Now, continue to the next stage of your grand mining adventure:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_...and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -165,7 +165,7 @@ Nothing will happen. Very boring. But when the cron script fires (daily), duplic
1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -248,7 +248,7 @@ This takes you to a list of backup names and file paths. You can choose to downl
1. If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service (_as sketched below_). You'd also need to add the traefik_public network to the app service.
2. The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!
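By way of illustration, a rough sketch of note 1 above might look like the following (_this assumes the Traefik v1 label syntax used elsewhere in this cookbook; the image name, hostname and port are placeholders to be adapted_):

```
  app:
    image: elkarbackup/elkarbackup
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # Traefik v1-style labels, applied directly to the app service
        - traefik.frontend.rule=Host:elkarbackup.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
```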
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -89,7 +89,7 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas
2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
3. We don't bother exposing the HTTPS port for Emby, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -70,7 +70,7 @@ Create your first administrative account at https://**YOUR-FQDN**/admin/
[root@ds1 ghost]#
```
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -95,7 +95,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think this was because the runners started so quickly (and GitLab starts so slowly!) that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -136,7 +136,7 @@ A few comments on decisions taken in this design:
1. I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -129,7 +129,7 @@ Authenticate against your OAuth provider, and then start editing your wiki!
1. In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -132,7 +132,7 @@ Log into your new instance at https://**YOUR-FQDN**, the password you created in
1. I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy), but HA is incompatible with the websockets implementation used by oauth2_proxy. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -146,7 +146,7 @@ Log into your new instance at https://**YOUR-FQDN**. You'll need to use the "Sig
1. I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for services like Twitter integration, I left it out.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -129,7 +129,7 @@ You can **also** watch the bot at work by VNCing to your docker swarm, password
1. Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -184,7 +184,7 @@ QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx | 28c13ec68f33 | Sees 2 other pee
1. I'm still trying to work out how to _mount_ the ipfs data in my filesystem in a usable way. Which is why this is still a WIP :)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -121,7 +121,7 @@ Log into your new instance at https://**YOUR-FQDN**. Default credentials are adm
1. The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
2. Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -180,7 +180,7 @@ For each of the following mappers, click the name, and set the "_Read Only_" fla
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -264,7 +264,7 @@ To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, running in an additional database pod and service. Contact me if you'd like further details ;)
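As a hypothetical sketch only (_names, namespace and credentials below are illustrative, and the recipe itself sticks with SQLite_), the additional database pod and service might look something like this:

```
# Sketch: a single-replica PostgreSQL deployment and service for Kanboard
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kanboard-db
  namespace: kanboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kanboard-db
  template:
    metadata:
      labels:
        app: kanboard-db
    spec:
      containers:
        - name: postgres
          image: postgres:11-alpine
          env:
            - name: POSTGRES_DB
              value: kanboard
            - name: POSTGRES_USER
              value: kanboard
            - name: POSTGRES_PASSWORD
              value: changeme # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: kanboard-db
---
apiVersion: v1
kind: Service
metadata:
  name: kanboard-db
  namespace: kanboard
spec:
  selector:
    app: kanboard-db
  ports:
    - port: 5432
```

You'd then point Kanboard at the new database (_typically via its ```DATABASE_URL``` environment variable_) rather than the SQLite file.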
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -321,7 +321,7 @@ To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -126,7 +126,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -119,7 +119,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -126,7 +126,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -264,7 +264,7 @@ To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, running in an additional database pod and service. Contact me if you'd like further details ;)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -184,7 +184,7 @@ Launch the mail server stack by running ```docker stack deploy docker-mailserver
2. If you're using sieve with Rainloop, take note of the [workaround](https://discourse.geek-kitchen.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley)
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -120,7 +120,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -140,7 +140,7 @@ Log into your new instance at https://**YOUR-FQDN**, using the credentials you s
1. Find the bookmarklet under the **Settings -> Integration** page.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -176,7 +176,7 @@ goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode=
2. Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets
3. Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -206,7 +206,7 @@ To actually **use** your new MQTT broker, you'll need to connect to any one of y
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -138,7 +138,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user and password pass
1. If you wanted to expose the Munin UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the munin container. You'd also need to add the traefik_public network to the munin container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -234,7 +234,7 @@ Note that this .htaccess can be overwritten by NextCloud, and you may have to re
1. Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912).
2. I'm [not the first user](https://github.com/nextcloud/docker/issues/528) to stumble across the service discovery bug with reverse proxies.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -441,7 +441,7 @@ Create your users using the "**New User**" button.
1. The [KeyCloak](/recipes/keycloak/) recipe illustrates how to integrate KeyCloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -115,7 +115,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
2. I'm using my own image rather than owntracks/recorderd, because of a [potentially swarm-breaking bug](https://github.com/owntracks/recorderd/issues/14) I found in the official container. If this gets resolved (_or if I was mistaken_) I'll update the recipe accordingly.
3. By default, you'll get a fully accessible, unprotected MQTT broker. This may not be suitable for public exposure, so you'll want to look into securing mosquitto with TLS and ACLs.
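To make note 3 concrete, a minimal hardening sketch for ```mosquitto.conf``` might look like this (_paths and port are illustrative; you'd create the password file with ```mosquitto_passwd```, and certificates are beyond the scope of this recipe_):

```
# Hypothetical hardening sketch - adapt paths to your own volumes
allow_anonymous false
password_file /mosquitto/config/passwd
acl_file /mosquitto/config/aclfile

# TLS listener
listener 8883
cafile /mosquitto/config/ca.crt
certfile /mosquitto/config/server.crt
keyfile /mosquitto/config/server.key
```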
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -209,7 +209,7 @@ Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen pr
1. If you wanted to expose the phpIPAM UI directly, you could remove the oauth2_proxy and the nginx services from the design, and move the traefik_public-related labels directly to the phpipam container. You'd also need to add the traefik_public network to the phpipam container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -92,7 +92,7 @@ Launch the Piwik stack by running ```docker stack deploy piwik -c <path -to-dock
Log into your new instance at https://**YOUR-FQDN**, and follow the wizard to complete the setup.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -99,7 +99,7 @@ Log into your new instance at https://**YOUR-FQDN** (You'll need to setup a plex
1. Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (rather than a client app) by connecting directly to your instance, as opposed to browsing your media via https://plex.tv/web
2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -68,7 +68,7 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be prompted to set y
1. I wanted to use oauth2_proxy to provide an additional layer of security for Portainer, but the proxy seems to break the authentication mechanism, effectively making the stack **so** secure that it can't be logged into!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -63,7 +63,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. The [PrivateBin repo](https://github.com/PrivateBin/PrivateBin/blob/master/INSTALL.md) explains how to tweak configuration options, or to use a database instead of file storage, if your volume justifies it :)
2. The inclusion of PrivateBin was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Jerry!!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -113,7 +113,7 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate against oauth_
1. If you wanted to expose the Realms UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the realms container. You'd also need to add the traefik_public network to the realms container.
2. The inclusion of Realms was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -395,7 +395,7 @@ Log into your new grafana instance, check out your beautiful graphs. Move onto d
1. Pay close attention to the ```grafana.env``` config. If you encounter errors about ```basic auth failed``` or failed CSS, it's likely due to misconfiguration of one of the grafana environment variables (_see the example below_).
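For example, the grafana variables most commonly implicated in the symptoms above are (_values below are hypothetical_):

```
# A mismatched root URL behind a reverse proxy commonly breaks CSS and logins
GF_SERVER_ROOT_URL=https://grafana.example.com
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=changeme
```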
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -118,7 +118,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -135,7 +135,7 @@ There are several TTRSS containers available on docker hub, none of them "offici
2. The upstream git URL [changed recently](https://discourse.tt-rss.org/t/gitlab-is-overbloated-shit-garbage/325/6), but my experience of the new repository is that it's **SO** slow that the initial "git clone" on setup of the container times out. To work around this, I created [my own repo](https://github.com/funkypenguin/tt-rss.git), cloned upstream, pushed it into my repo, and pointed the container at my own repo with TTRSS_REPO. I don't get the _latest_ code changes, but at least the app container starts up. When upstream git is performing properly, I'll remove TTRSS_REPO to revert back to the "rolling release".
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -448,7 +448,7 @@ Two possible solutions to this are (1) disable banning, or (2) update the pool b
3. After a [power fault in my datacenter caused daemon DB corruption](https://www.reddit.com/r/TRTL/comments/8jftzt/funky_penguin_nz_mining_pool_down_with_daemon/), I added a second daemon, running in parallel to the first. The failsafe daemon runs once an hour, syncs with the running daemons, and shuts down again, providing a safely halted version of the daemon DB for recovery.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -201,7 +201,7 @@ Even with all these elements in place, you still need to enable Redis under Inte
1. If you wanted to expose the Wallabag UI directly (_required for the iOS/Android apps_), you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wallabag container. You'd also need to add the traefik_public network to the wallabag container. I found the iOS app to be unreliable and clunky, so elected to leave my oauth_proxy enabled, and to simply use the webUI on my mobile devices instead. YMMV.
2. I've not tested the email integration, but you'd need an SMTP server listening on port 25 (_since we can't change the port_) to use it
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -144,7 +144,7 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wekan container. You'd also need to add the traefik network to the wekan container.
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -103,7 +103,7 @@ Browse to your new browser-cli-terminal at https://**YOUR-FQDN**. Authenticate w
1. You could set SSHHOST to the IP of the "docker0" interface on your host, which is normally 172.17.0.1. (_Or run ```/sbin/ip route|awk '/default/ { print $3 }'``` in the container_) This would then provide you the ability to remote-manage your swarm with only web access to Wetty (_see the sketch below_).
2. The inclusion of Wetty was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!
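Here's a minimal sketch of note 1 (_assuming the recipe's ```SSHHOST``` variable; 172.17.0.1 is typically the docker0 bridge address_):

```
# Hypothetical snippet for wetty's environment
SSHHOST=172.17.0.1
```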
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -44,7 +44,7 @@ Name | Description | Badges
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -19,7 +19,7 @@ Static data goes into /var/data/[recipe name], and includes anything that can be
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -54,7 +54,7 @@ Now add the contents of /var/data/git-docker/data/.ssh/id_ed25519.pub to your gi
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -53,7 +53,7 @@ Network | Range
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -81,7 +81,7 @@ Note above how:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -60,7 +60,7 @@ Now every time my node boots, it establishes a VPN tunnel back to my pfsense hos
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -27,7 +27,7 @@ Example:
## Chef's Notes
### Tip your waiter (donate) 👏
### Tip your waiter (support me) 👏
Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏

View File

@@ -163,6 +163,6 @@ networks:
<ol type="1">
<li>If you wanted to expose the phpIPAM UI directly, you could remove the oauth2_proxy and the nginx services from the design, and move the traefik_public-related labels directly to the phpipam container. You'd also need to add the traefik_public network to the phpipam container.</li>
</ol>
<h3 id="tip-your-waiter-donate">Tip your waiter (donate) 👏</h3>
<h3 id="tip-your-waiter-donate">Tip your waiter (support me) 👏</h3>
<p>Did you receive excellent service? Want to make your waiter happy? (<em>..and support development of current and future recipes!</em>) See the <a href="/support/">support</a> page for (<em>free or paid</em>) ways to say thank you! 👏</p>
<h3 id="your-comments">Your comments? 💬</h3>