
Update for leanpub preview

AutoPenguin
2020-06-03 01:42:01 +00:00
parent 65dd34c7ea
commit 2e8e16157b
193 changed files with 12667 additions and 155 deletions

manuscript/CHANGELOG.mde Normal file

@@ -0,0 +1,26 @@
# CHANGELOG
## Subscribe to updates
* Email : Sign up [here](http://eepurl.com/dfx95n) (double-opt-in) to receive email updates on new and improved recipes!
* Mastodon: https://mastodon.social/@geekcookbook_changes
* RSS: https://mastodon.social/@geekcookbook_changes.rss
* The #changelog channel in our [Discord server](http://chat.funkypenguin.co.nz)
## Recent additions to work-in-progress
* Kubernetes recipes for UniFi controller, Miniflux, Kanboard and PrivateBin coming in March! (_19 Mar 2019_)
## Recently added recipes
* Overhauled [Ceph (Shared Storage)](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/shared-storage-ceph/) recipe for Ceph Octopus (v15) (_25 May 2020_)
* Added recipe for making your own [DIY Kubernetes Cluster](/kubernetes/diycluster/) (_14 December 2019_)
* Added recipe for [authenticating Traefik Forward Auth against KeyCloak](/ha-docker-swarm/traefik-forward-auth/keycloak/) (_16 May 2019_)
* Added [Bitwarden](/recipes/bitwarden/), an **awesome** open-source password manager, with great mobile sync support (_14 May 2019_)
* Added [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), replacing function of multiple [oauth_proxies](/reference/oauth_proxy/) with a single, 7MB Go application, which can authenticate against Google, [KeyCloak](/recipes/keycloak/), and other OIDC providers (_10 May 2019_)
## Recent improvements
* Added recipe for [automated snapshots of Kubernetes Persistent Volumes](/kubernetes/snapshots/), instructions for using [Helm](/kubernetes/helm/), and recipe for deploying [Traefik](/kubernetes/traefik/), which completes the Kubernetes cluster design! (_9 Feb 2019_)
* Added detailed description (_and diagram_) of our [Kubernetes design](/kubernetes/design/), plus a [simple load-balancer design](/kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_)
* Added an [introductory/explanatory page, including a children's story, on Kubernetes](/kubernetes/start/) (_29 Jan 2019_)
* [NextCloud](/recipes/nextcloud/) updated to fix CalDAV/CardDAV service discovery behind Traefik reverse proxy (_12 Dec 2018_)

manuscript/README-UI.mde Normal file

@@ -0,0 +1,11 @@
# How to read this book
## Structure
1. "Recipes" generally follow on from each other. I.e., if a particular recipe requires a mail server, that mail server would have been described in an earlier recipe.
2. Each recipe contains enough detail in a single page to take a project from start to completion.
3. When there are optional add-ons/integrations possible to a project (_i.e., the addition of "smart LED bulbs" to Home Assistant_), this will be reflected either as a brief "Chef's note" after the recipe, or if they're substantial enough, as a sub-page of the main project.
## Conventions
1. When creating swarm networks, we always explicitly set the subnet in the overlay network, to avoid potential conflicts (_which docker won't prevent, but which will generate errors_) (https://github.com/moby/moby/issues/26912)
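For example, a minimal sketch of this convention from the CLI (the subnet and network name below are illustrative only; within the compose files later in this book, the same thing is expressed under the network's `ipam` config):
```
# Subnet and network name are illustrative - pick unique values per stack
docker network create --driver=overlay --subnet=172.16.200.0/24 my_stack_internal
```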


@@ -92,4 +92,4 @@ In summary, although I suffered an **unplanned power outage to all of my infrast
[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.
## Chef's Notes 📓


@@ -0,0 +1,95 @@
# Design
In the design described below, our "private cloud" platform is:
* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it on your garage server today, run it in AWS tomorrow_)
* **Secure** (_access protected with [LetsEncrypt certificates](/ha-docker-swarm/traefik/) and optional [OIDC with 2FA](/ha-docker-swarm/traefik-forward-auth/)_)
* **Automated** (_requires minimal care and feeding_)
## Design Decisions
**Where possible, services will be highly available.**
This means that:
* At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
* [Ceph](/ha-docker-swarm/shared-storage-ceph/) is employed for shared storage, because it too can be made tolerant of a single failure.
!!! note
An exception to the 3-nodes decision is running a single-node configuration. If you only **have** one node, then obviously your swarm is only as resilient as that node. It's still a perfectly valid swarm configuration, ideal for starting your self-hosting journey. In fact, under the single-node configuration, you don't need ceph either, and you can simply use the local volume on your host for storage. You'll be able to migrate to ceph/more nodes if/when you expand.
**Where multiple solutions to a requirement exist, preference will be given to the most portable solution.**
This means that:
* Services are defined using docker-compose v3 YAML syntax
* Services are portable, meaning a particular stack could be shut down and moved to a new provider with minimal effort.
## Security
Under this design, the only inbound connections we're permitting to our docker swarm in a **minimal** configuration (*you may add custom services later, like UniFi Controller*) are:
### Network Flows
* **HTTP (TCP 80)** : Redirects to https
* **HTTPS (TCP 443)** : Serves individual docker containers via SSL-encrypted reverse proxy
### Authentication
* Where the hosted application provides a trusted level of authentication (*i.e., [NextCloud](/recipes/nextcloud/)*), or where the application requires public exposure (*i.e. [Privatebin](/recipes/privatebin/)*), no additional layer of authentication will be required.
* Where the hosted application provides inadequate (*i.e. [NZBGet](/recipes/autopirate/nzbget/)*) or no authentication (*i.e. [Gollum](/recipes/gollum/)*), a further authentication against an OAuth provider will be required.
## High availability
### Normal function
Assuming a 3-node configuration, under normal circumstances the following is illustrated:
* All 3 nodes provide shared storage via Ceph, which is provided by a docker container on each node.
* All 3 nodes participate in the Docker Swarm as managers.
* The various containers belonging to the application "stacks" deployed within Docker Swarm are automatically distributed amongst the swarm nodes.
* Persistent storage for the containers is provided via a CephFS mount.
* The **traefik** service (*in swarm mode*) receives incoming requests (*on HTTP and HTTPS*), and forwards them to individual containers. Traefik knows the containers' names because it's able to read the docker socket.
* All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at **any** of the swarm nodes will be forwarded to the traefik container (*no matter which node it's on*), and then onto the target backend.
![HA function](../images/docker-swarm-ha-function.png)
### Node failure
In the case of a failure (or scheduled maintenance) of one of the nodes, the following is illustrated:
* The failed node no longer participates in Ceph, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
* The remaining two nodes in Docker Swarm achieve a quorum and agree that the failed node is to be removed.
* The (*possibly new*) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
* The **traefik** service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
* The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.
![HA function](../images/docker-swarm-node-failure.png)
### Node restore
When the failed (*or upgraded*) host is restored to service, the following is illustrated:
* Ceph regains full redundancy
* Docker Swarm managers become aware of the recovered node, and will use it for scheduling **new** containers
* Existing containers which were migrated off the node are not migrated back
* Keepalived VIP regains full redundancy
![HA function](../images/docker-swarm-node-restore.png)
### Total cluster failure
A day after writing this, my environment suffered a fault whereby all 3 VMs were unexpectedly and simultaneously powered off.
Upon restore, docker failed to start on one of the VMs due to local disk space issue[^1]. However, the other two VMs started, established the swarm, mounted their shared storage, and started up all the containers (services) which were managed by the swarm.
In summary, although I suffered an **unplanned power outage to all of my infrastructure**, followed by a **failure of a third of my hosts**... ==all my platforms are 100% available with **absolutely no manual intervention**==.
[^1]: Since there's no impact to availability, I can fix (or just reinstall) the failed node whenever convenient.
## Chef's Notes 📓


@@ -172,4 +172,4 @@ After completing the above, you should have:
* [X] [Docker swarm cluster](https://geek-cookbook.funkypenguin.co.nz/ha-docker-swarm/design/)
## Chef's Notes 📓


@@ -0,0 +1,175 @@
# Docker Swarm Mode
For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (as defined in Docker 1.13) is the simplest way to achieve redundancy, such that a single docker host can be turned off without interrupting any of our services.
## Ingredients
!!! summary
Existing
* [X] 3 x nodes (*bare-metal or VMs*), each with:
* A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
* At least 2GB RAM
* At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
## Preparation
### Bash auto-completion
Add some handy bash auto-completion for docker. Without this, you'll get annoyed that you can't autocomplete ```docker stack deploy <blah> -c <blah.yml>``` commands.
```
cd /etc/bash_completion.d/
curl -O https://raw.githubusercontent.com/docker/cli/b75596e1e4d5295ac69b9934d1bd8aff691a0de8/contrib/completion/bash/docker
```
Install some useful bash aliases on each host
```
cd ~
curl -O https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh
echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
```
## Serving
### Release the swarm!
Now, to launch a swarm. Pick a target node, and run `docker swarm init`
Yeah, that was it. Seriously. Now we have a 1-node swarm.
```
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-bsud7xnvhv4cicwi7l6c9s6l0 \
202.170.164.47:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ds1 ~]#
```
Run `docker node ls` to confirm that you have a 1-node swarm:
```
[root@ds1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leader
[root@ds1 ~]#
```
Note that when you run `docker swarm init` above, the CLI output gives you a command to run to join further nodes to your swarm. This command would join the nodes as __workers__ (*as opposed to __managers__*). Workers can easily be promoted to managers (*and demoted again*), but since we know that we want our other two nodes to be managers too, it's simpler just to add them to the swarm as managers immediately.
On the first swarm node, generate the necessary token to join another manager by running ```docker swarm join-token manager```:
```
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-cfm24bq2zvfkcwujwlp5zqxta \
202.170.164.47:2377
[root@ds1 ~]#
```
Run the command provided on your other nodes to join them to the swarm as managers. After addition of a node, the output of ```docker node ls``` (on either host) should reflect all the nodes:
```
[root@ds2 davidy]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 ds1.funkypenguin.co.nz Ready Active Leader
xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reachable
[root@ds2 davidy]#
```
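Should you ever join a node as a worker first, or need to change a node's role later, promotion and demotion are each a one-liner. A quick sketch (the node name is illustrative, matching the third node used elsewhere in this recipe):
```
docker node promote ds3.funkypenguin.co.nz   # worker -> manager
docker node demote ds3.funkypenguin.co.nz    # manager -> worker
```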
### Setup automated cleanup
Docker swarm doesn't do any cleanup of old images, so as you experiment with various stacks, and as updated containers are released upstream, you'll soon find yourself losing gigabytes of disk space to old, unused images.
To address this, we'll run the "[meltwater/docker-cleanup](https://github.com/meltwater/docker-cleanup)" container on all of our nodes. The container will clean up unused images after 30 minutes.
First, create docker-cleanup.env (_mine is under /var/data/config/docker-cleanup_), and exclude container images we **know** we want to keep:
```
KEEP_IMAGES=traefik,keepalived,docker-mailserver
DEBUG=1
```
Then create a docker-compose.yml as follows:
```
version: "3"
services:
docker-cleanup:
image: meltwater/docker-cleanup:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker:/var/lib/docker
networks:
- internal
deploy:
mode: global
env_file: /var/data/config/docker-cleanup/docker-cleanup.env
networks:
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.0.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
Launch the cleanup stack by running ```docker stack deploy docker-cleanup -c <path-to-docker-compose.yml>```
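Since the stack runs in global mode, a quick sanity check (not part of the original recipe, but harmless) is to confirm that one cleanup task is running per node:
```
# Expect one docker-cleanup task per swarm node
docker stack ps docker-cleanup
```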
### Setup automatic updates
If your swarm runs for a long time, you might find yourself running older container images, after newer versions have been released. If you're the sort of geek who wants to live on the edge, configure [shepherd](https://github.com/djmaze/shepherd) to auto-update your container images regularly.
Create /var/data/config/shepherd/shepherd.env as follows:
```
# Don't auto-update Plex or Emby, I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
BLACKLIST_SERVICES="plex_plex emby_emby"
# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
SLEEP_TIME=86400
```
Then create /var/data/config/shepherd/shepherd.yml as follows:
```
version: "3"
services:
shepherd-app:
image: mazzolino/shepherd
env_file : /var/data/config/shepherd/shepherd.env
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
labels:
- "traefik.enable=false"
deploy:
placement:
constraints: [node.role == manager]
```
Launch shepherd by running ```docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml```, and then just forget about it, comfortable in the knowledge that every day, Shepherd will check that your images are the latest available, and if not, will destroy and recreate the container on the latest available image.
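If you want to satisfy yourself that Shepherd really is checking your images, you can watch its logs. The service name below assumes the stack name `shepherd` and the service name `shepherd-app` used in the compose file above:
```
docker service ps shepherd_shepherd-app
docker service logs --follow shepherd_shepherd-app
```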
### Summary
After completing the above, you should have:
* [X] [Docker swarm cluster](/ha-docker-swarm/design/)
## Chef's Notes 📓


@@ -65,7 +65,7 @@ docker run -d --name keepalived --restart=always \
That's it. Each node will talk to the other via unicast (no need to un-firewall multicast addresses), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.
## Chef's notes 📓
1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.


@@ -0,0 +1,71 @@
# Keepalived
While having a self-healing, scalable docker swarm is great for availability and scalability, none of that is any good if nobody can connect to your cluster.
In order to provide seamless external access to clustered resources, regardless of which node they're on and tolerant of node failure, you need to present a single IP to the world for external access.
Normally this is done using a HA loadbalancer, but since Docker Swarm already provides the load-balancing capabilities (*[routing mesh](https://docs.docker.com/engine/swarm/ingress/)*), all we need for seamless HA is a virtual IP which will be provided by more than one docker node.
This is accomplished with the use of keepalived on at least two nodes.
## Ingredients
!!! summary "Ingredients"
Already deployed:
* [X] At least 2 x swarm nodes
* [X] low-latency link (i.e., no WAN links)
New:
* [ ] At least 3 x IPv4 addresses (one for each node and one for the virtual IP)
## Preparation
### Enable IPVS module
On all nodes which will participate in keepalived, we need the "ip_vs" kernel module, in order to permit services to bind to non-local interface addresses.
Set this up once for both the primary and secondary nodes, by running:
```
echo "modprobe ip_vs" >> /etc/rc.local
modprobe ip_vs
```
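You can confirm the module loaded before carrying on:
```
# Should print a line for ip_vs if the module is loaded
lsmod | grep ip_vs
```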
### Setup nodes
Assuming your IPs are as follows:
* 192.168.4.1 : Primary
* 192.168.4.2 : Secondary
* 192.168.4.3 : Virtual
Run the following on the primary:
```
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --net=host \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
-e KEEPALIVED_VIRTUAL_IPS=192.168.4.3 \
-e KEEPALIVED_PRIORITY=200 \
osixia/keepalived:1.3.5
```
And on the secondary:
```
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --net=host \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
-e KEEPALIVED_VIRTUAL_IPS=192.168.4.3 \
-e KEEPALIVED_PRIORITY=100 \
osixia/keepalived:1.3.5
```
## Serving
That's it. Each node will talk to the other via unicast (no need to un-firewall multicast addresses), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.
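To see which node currently holds the VIP (using the example addresses above), check each node's interfaces; only the current keepalived master will show it:
```
ip address show | grep 192.168.4.3
```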
## Chef's notes 📓
1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.


@@ -0,0 +1,83 @@
# Introduction
## Adding a host
## Adding storage
gluster volume add-brick VOLNAME NEW_BRICK
example
# gluster volume add-brick test-volume server4:/exp4
Add Brick successful
## Replacing a failed host
Followed https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1
Hostname: ds1
Uuid: db9c80da-11e4-461d-8ea5-66dd12ca897c
State: Peer in Cluster (Disconnected)
[root@glusterfs-server /]#
Grab UUID above
edit /var/lib/glusterd/glusterd.info
change:
UUID=aee45c2c-aa19-4d29-bc94-4833f2b22863
to
UUID=db9c80da-11e4-461d-8ea5-66dd12ca897c
My peer's id (ds2):
[root@glusterfs-server /]# gluster system:: uuid get
UUID: 38ca4e8b-8ef5-4165-9f41-5c8b3f0103cc
[root@glusterfs-server /]#
vi /var/lib/glusterd/peers/38ca4e8b-8ef5-4165-9f41-5c8b3f0103cc
UUID=38ca4e8b-8ef5-4165-9f41-5c8b3f0103cc
state=3
hostname=ds3
Got volume info
[root@glusterfs-server /]# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 84e1169c-41dc-467a-9ae1-a474efaf789f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ds1:/var/no-direct-write-here/brick1/gv0
Brick2: ds3:/var/no-direct-write-here/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
[root@glusterfs-server /]#
----
[root@glusterfs-server /]# getfattr -d -m. -ehex /var/no-direct-write-here/brick1/gv0/
getfattr: Removing leading '/' from absolute path names
# file: var/no-direct-write-here/brick1/gv0/
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0x84e1169c41dc467a9ae1a474efaf789f
[root@glusterfs-server /]#
setfattr -n trusted.glusterfs.volume-id -v 0x84e1169c41dc467a9ae1a474efaf789f /var/no-direct-write-here/brick1/gv0


@@ -76,4 +76,4 @@ After completing the above, you should have:
* At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
## Chef's Notes 📓


@@ -0,0 +1,79 @@
# Nodes
Let's start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I'll be referring to these as "nodes" from now on.
!!! note
In 2017, I **initially** chose the "[Atomic](https://www.projectatomic.io/)" CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in [Plex](/recipes/plex/)), [Swarmprom](/recipes/swarmprom/), etc. In the end, I went mainstream and simply preferred a modern Ubuntu installation.
## Ingredients
!!! summary "Ingredients"
New in this recipe:
* [ ] 3 x nodes (*bare-metal or VMs*), each with:
* A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
* At least 2GB RAM
* At least 20GB disk space (_but it'll be tight_)
* [ ] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
## Preparation
### Permit connectivity
Most modern Linux distributions include firewall rules which only permit minimal required incoming connections (like SSH). We'll want to allow all traffic between our nodes. The steps to achieve this in CentOS/Ubuntu are a little different...
#### CentOS
Add something like this to `/etc/sysconfig/iptables`:
```
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```
And restart iptables with ```systemctl restart iptables```
#### Ubuntu
Install the (*non-default*) persistent iptables tools, by running `apt-get install iptables-persistent`, establishing some default rules (*dpkg will prompt you to save the current ruleset*), and then add something like this to `/etc/iptables/rules.v4`:
```
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
```
And refresh your running iptables rules with `iptables-restore < /etc/iptables/rules.v4`
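Either way, you can confirm the rule is active (subnet per the example above):
```
iptables -S INPUT | grep 192.168.31.0/24
```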
### Enable hostname resolution
Depending on your hosting environment, you may have DNS automatically set up for your VMs. If not, it's useful to set up static entries in /etc/hosts for the nodes. For example, I set up the following:
```
192.168.31.11 ds1 ds1.funkypenguin.co.nz
192.168.31.12 ds2 ds2.funkypenguin.co.nz
192.168.31.13 ds3 ds3.funkypenguin.co.nz
```
### Set timezone
Set your local timezone, by running:
```
ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
```
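For example (the timezone below is only an illustration; pick your own from the directories under /usr/share/zoneinfo):
```
ln -sf /usr/share/zoneinfo/Pacific/Auckland /etc/localtime
```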
## Serving
After completing the above, you should have:
!!! summary "Summary"
Deployed in this recipe:
* [X] 3 x nodes (*bare-metal or VMs*), each with:
* A mainstream Linux OS (*tested on either [CentOS](https://www.centos.org) 7+ or [Ubuntu](http://releases.ubuntu.com) 16.04+*)
* At least 2GB RAM
* At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
## Chef's Notes 📓


@@ -110,4 +110,4 @@ systemctl restart docker-latest
!!! tip ""
Note the extra comma required after "false" above
## Chef's notes 📓


@@ -0,0 +1,113 @@
# Create registry mirror
Although we now have shared storage for our persistent container data, our docker nodes don't share any other docker data, such as container images. This results in an inefficiency - every node which participates in the swarm will, at some point, need the docker image for every container deployed in the swarm.
When dealing with large containers (looking at you, GitLab!), this can result in several gigabytes of wasted bandwidth per-node, and long delays when restarting containers on an alternate node. (_It also wastes disk space on each node, but we'll get to that in the next section_)
The solution is to run an official Docker registry container as a ["pull-through" cache, or "registry mirror"](https://docs.docker.com/registry/recipes/mirror/). By using our persistent storage for the registry cache, we can ensure we have a single copy of all the containers we've pulled at least once. After the first pull, any subsequent pulls from our nodes will use the cached version from our registry mirror. As a result, services are available more quickly when restarting container nodes, and we can be more aggressive about cleaning up unused containers on our nodes (more later)
The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Customize __your mirror FQDN__ below, so that Traefik will generate the appropriate LetsEncrypt certificates for it, and make it available via HTTPS.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/)
2. [Traefik](/ha-docker-swarm/traefik/) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
Create /var/data/config/registry/registry.yml as follows:
```
version: "3"
services:
registry-mirror:
image: registry:2
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:<your mirror FQDN>
- traefik.docker.network=traefik_public
- traefik.port=5000
ports:
- 5000:5000
volumes:
- /var/data/registry/registry-mirror-data:/var/lib/registry
- /var/data/registry/registry-mirror-config.yml:/etc/docker/registry/config.yml
networks:
traefik_public:
external: true
```
!!! note "Unencrypted registry"
We create this registry without consideration for SSL, which will fail if we attempt to use the registry directly. However, we're going to use the HTTPS-proxied version via Traefik, leveraging Traefik to manage the LetsEncrypt certificates required.
Create /var/data/registry/registry-mirror-config.yml as follows:
```
version: 0.1
log:
fields:
service: registry
storage:
cache:
blobdescriptor: inmemory
filesystem:
rootdirectory: /var/lib/registry
delete:
enabled: true
http:
addr: :5000
headers:
X-Content-Type-Options: [nosniff]
health:
storagedriver:
enabled: true
interval: 10s
threshold: 3
proxy:
remoteurl: https://registry-1.docker.io
```
## Serving
### Launch registry stack
Launch the registry stack by running ```docker stack deploy registry -c <path-to-docker-compose.yml>```
### Enable registry mirror and experimental features
To tell docker to use the registry mirror, and (_while we're here_) in order to be able to watch the logs of any service from any manager node (_an experimental feature in the current Atomic docker build_), edit **/etc/docker-latest/daemon.json** on each node, and change from:
```
{
"log-driver": "journald",
"signature-verification": false
}
```
To:
```
{
"log-driver": "journald",
"signature-verification": false,
"experimental": true,
"registry-mirrors": ["https://<your registry mirror FQDN>"]
}
```
Then restart docker by running:
```
systemctl restart docker-latest
```
!!! tip ""
Note the extra comma required after "false" above
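A couple of quick checks (not part of the original recipe) will confirm the mirror is in use: `docker info` should list it, and the mirror's catalog will fill up as images are pulled through it:
```
docker info | grep -A1 "Registry Mirrors"
curl https://<your registry mirror FQDN>/v2/_catalog
```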
## Chef's notes 📓


@@ -213,6 +213,6 @@ Here's a screencast of the playbook in action. I sped up the boring parts, it ac
[patreon]: https://www.patreon.com/bePatron?u=6982506
[github_sponsor]: https://github.com/sponsors/funkypenguin
## Chef's Notes 📓
[^1]: Minimum Viable Cluster acronym copyright, trademark, and whatever else, to Funky Penguin for 1,000,000 years.


@@ -0,0 +1,218 @@
# Shared Storage (Ceph)
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.
![Ceph Screenshot](../images/ceph.png)
## Ingredients
!!! summary "Ingredients"
3 x Virtual Machines (configured earlier), each with:
* [X] Support for "modern" versions of Python and LVM
* [X] At least 1GB RAM
* [X] At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
* [X] A second disk dedicated to the Ceph OSD
* [X] Each node should have the IP of every other participating node hard-coded in /etc/hosts (*including its own IP*)
## Preparation
!!! tip "No more [foolish games](https://www.youtube.com/watch?v=UNoouLa7uxA)"
Earlier iterations of this recipe (*based on [Ceph Jewel](https://docs.ceph.com/docs/master/releases/jewel/)*) required significant manual effort to install Ceph in a Docker environment. In the 2+ years since Jewel was released, significant improvements have been made to the ceph "deploy-in-docker" process, including the [introduction of the cephadm tool](https://ceph.io/ceph-management/introducing-cephadm/). Cephadm is the tool which now does all the heavy lifting, below, for the current version of ceph, codenamed "[Octopus](https://www.youtube.com/watch?v=Gi58pN8W3hY)".
### Pick a master node
One of your nodes will become the cephadm "master" node. Although all nodes will participate in the Ceph cluster, the master node will be the node which we bootstrap ceph on. It's also the node which will run the Ceph dashboard, and on which future upgrades will be processed. It doesn't matter _which_ node you pick, and the cluster itself will operate in the event of a loss of the master node (although you won't see the dashboard)
### Install cephadm on master node
Run the following on the ==master== node:
```
MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
mkdir -p /etc/ceph
./cephadm bootstrap --mon-ip $MYIP
```
The process takes about 30 seconds, after which, you'll have a MVC (*Minimum Viable Cluster*)[^1], encompassing a single monitor and mgr instance on your chosen node. Here's the complete output from a fresh install:
??? "Example output from a fresh cephadm bootstrap"
```
root@raphael:~# MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
root@raphael:~# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
root@raphael:~# chmod +x cephadm
root@raphael:~# mkdir -p /etc/ceph
root@raphael:~# ./cephadm bootstrap --mon-ip $MYIP
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: bf3eff78-9e27-11ea-b40a-525400380101
INFO:cephadm:Verifying IP 192.168.38.101 port 3300 ...
INFO:cephadm:Verifying IP 192.168.38.101 port 6789 ...
INFO:cephadm:Mon IP 192.168.38.101 is in CIDR network 192.168.38.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Mgr epoch 5 is available
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host raphael...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Mgr epoch 13 is available
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:
URL: https://raphael:8443/
User: admin
Password: mid28k0yg5
INFO:cephadm:You can access the Ceph CLI with:
sudo ./cephadm shell --fsid bf3eff78-9e27-11ea-b40a-525400380101 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
INFO:cephadm:Bootstrap complete.
root@raphael:~#
```
### Prepare other nodes
It's now necessary to transfer the following files to your ==other== nodes, so that cephadm can add them to your cluster, and so that they'll be able to mount the cephfs when we're done:
Path on master | Path on non-master
--------------- | -----
`/etc/ceph/ceph.conf` | `/etc/ceph/ceph.conf`
`/etc/ceph/ceph.client.admin.keyring` | `/etc/ceph/ceph.client.admin.keyring`
`/etc/ceph/ceph.pub` | `/root/.ssh/authorized_keys` (append to anything existing)
Back on the ==master== node, run `ceph orch host add <node-name>` once for each other node you want to join to the cluster. You can validate the results by running `ceph orch host ls`
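For example, if your master is raphael (per the sample bootstrap output above) and your other two nodes are named donatello and leonardo (illustrative names):
```
ceph orch host add donatello
ceph orch host add leonardo
ceph orch host ls
```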
!!! question "Should we be concerned about giving cephadm using root access over SSH?"
Not really. Docker is inherently insecure at the host-level anyway (*think what would happen if you launched a global-mode stack with a malicious container image which mounted `/root/.ssh`*), so worrying about cephadm seems a little barn-door-after-horses-bolted. If you take host-level security seriously, consider switching to [Kubernetes](/kubernetes/start/) :)
### Add OSDs
Now the best improvement since the days of ceph-deploy and manual disks.. on the ==master== node, run `ceph orch apply osd --all-available-devices`. This will identify any unloved (*unpartitioned, unmounted*) disks attached to each participating node, and configure these disks as OSDs.
### Setup CephFS
On the ==master== node, create a cephfs volume in your cluster, by running `ceph fs volume create data`. Ceph will handle the necessary orchestration itself, creating the necessary pool, mds daemon, etc.
You can watch the progress by running `ceph fs ls` (to see the fs is configured), and `ceph -s` to wait for `HEALTH_OK`
### Mount CephFS volume
On ==every== node, create a mountpoint for the data, by running ```mkdir /var/data```, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the ceph volume:
```
mkdir /var/data
MYNODES="<node1>,<node2>,<node3>" # Add your own nodes here, comma-delimited
echo -e "
# Mount cephfs volume \n
$MYNODES:/ /var/data ceph name=admin,noatime,_netdev 0 0" >> /etc/fstab
mount -a
```
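Once `mount -a` returns, it's worth confirming the volume is mounted and sized as expected:
```
df -h /var/data
mount | grep ceph
```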
## Serving
### Sprinkle with tools
Although it's possible to use `cephadm shell` to exec into a container with the necessary ceph tools, it's more convenient to use the native CLI tools. To this end, on each node, run the following, which will install the appropriate apt repository, and install the latest ceph CLI tools:
```
curl -L https://download.ceph.com/keys/release.asc | sudo apt-key add -
cephadm add-repo --release octopus
cephadm install ceph-common
```
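With the native tools installed, any node can query the cluster directly, which is a handy sanity check before moving on:
```
ceph -s         # overall health should (eventually) report HEALTH_OK
ceph osd tree   # one OSD per node, all "up"
```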
### Drool over dashboard
Ceph now includes a comprehensive dashboard, provided by the mgr daemon. The dashboard will be accessible at https://[IP of your ceph master node]:8443, but you'll need to run `ceph dashboard ac-user-create <username> <password> administrator` first, to create an administrator account:
```
root@raphael:~# ceph dashboard ac-user-create batman supermansucks administrator
{"username": "batman", "password": "$2b$12$3HkjY85mav.dq3HHAZiWP.KkMiuoV2TURZFH.6WFfo/BPZCT/0gr.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1590372281, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}
root@raphael:~#
```
## Summary
What have we achieved?
!!! summary "Summary"
Created:
* [X] Persistent storage available to every node
* [X] Resiliency in the event of the failure of a single node
* [X] Beautiful dashboard
## The easy, 5-minute install
I share (_with [sponsors][github_sponsor] and [patrons][patreon]_) a private "_premix_" GitHub repository, which includes an ansible playbook for deploying the entire Geek's Cookbook stack, automatically. This means that members can create the entire environment with just a ```git pull``` and an ```ansible-playbook deploy.yml``` 👍
Here's a screencast of the playbook in action. I sped up the boring parts, it actually takes ==5 min== (*you can tell by the timestamps on the prompt*):
![Screencast of ceph install via ansible](https://static.funkypenguin.co.nz/ceph_install_via_ansible_playbook.gif)
[patreon]: https://www.patreon.com/bePatron?u=6982506
[github_sponsor]: https://github.com/sponsors/funkypenguin
## Chef's Notes 📓
[^1]: Minimum Viable Cluster acronym copyright, trademark, and whatever else, to Funky Penguin for 1,000,000 years.


@@ -157,7 +157,7 @@ After completing the above, you should have:
* [X] Persistent storage available to every node
* [X] Resiliency in the event of the failure of a single (gluster) node
## Chef's Notes 📓
Future enhancements to this recipe include:


@@ -0,0 +1,165 @@
# Shared Storage (GlusterFS)
While Docker Swarm is great for keeping containers running (_and restarting those that fail_), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (_hint: you do!_), you need to provide shared storage to every docker node.
!!! warning
This recipe is deprecated. It didn't work well in 2017, and it's not likely to work any better now. It remains here as a reference. I now recommend the use of [Ceph for shared storage](/ha-docker-swarm/shared-storage-ceph/) instead. - 2019 Chef
## Design
### Why GlusterFS?
This GlusterFS recipe was my original design for shared storage, but I [found it to be flawed](shared-storage-ceph/#why-not-glusterfs), and I replaced it with a [design which employs Ceph instead](shared-storage-ceph/#why-ceph). This recipe is an alternate to the Ceph design, if you happen to prefer GlusterFS.
## Ingredients
!!! summary "Ingredients"
3 x Virtual Machines (configured earlier), each with:
* [X] CentOS/Fedora Atomic
* [X] At least 1GB RAM
* [X] At least 20GB disk space (_but it'll be tight_)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (_i.e., no WAN links_)
* [ ] A second disk, or adequate space on the primary disk for a dedicated data partition
## Preparation
### Create Gluster "bricks"
To build our Gluster volume, we need 2 out of the 3 VMs to provide one "brick". The bricks will be used to create the replicated volume. Assuming a replica count of 2 (_i.e., 2 copies of the data are kept in gluster_), our total number of bricks must be divisible by our replica count. (_I.e., you can't have 3 bricks if you want 2 replicas. You can have 4 though - We have to have minimum 3 swarm manager nodes for fault-tolerance, but only 2 of those nodes need to run as gluster servers._)
On each host, run a variation of the following to create your bricks, adjusted for the path to your disk.
!!! note "The example below assumes /dev/vdb is dedicated to the gluster volume"
```
(
echo o # Create a new empty DOS partition table
echo n # Add a new partition
echo p # Primary partition
echo 1 # Partition number
echo # First sector (Accept default: 1)
echo # Last sector (Accept default: varies)
echo w # Write changes
) | sudo fdisk /dev/vdb
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /var/no-direct-write-here/brick1
echo '' >> /etc/fstab
echo '# Mount /dev/vdb1 so that it can be used as a glusterfs volume' >> /etc/fstab
echo '/dev/vdb1 /var/no-direct-write-here/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
```
!!! warning "Don't provision all your LVM space"
Atomic uses LVM to store docker data, and **automatically grows** Docker's volumes as required. If you commit all your free LVM space to your brick, you'll quickly find (as I did) that docker will start to fail with error messages about insufficient space. If you're going to slice off a portion of your LVM space in /dev/atomicos, make sure you leave enough space for Docker storage, where "enough" depends on how much you plan to pull images, make volumes, etc. I ate through 20GB very quickly doing development, so I ended up provisioning 50GB for atomic alone, with a separate volume for the brick.
### Create glusterfs container
Atomic doesn't include the Gluster server components. This means we'll have to run glusterd from within a container, with privileged access to the host. Although convoluted, I've come to prefer this design since it once again makes the OS "disposable", moving all the config into containers and code.
Run the following on each host:
```
docker run \
-h glusterfs-server \
-v /etc/glusterfs:/etc/glusterfs:z \
-v /var/lib/glusterd:/var/lib/glusterd:z \
-v /var/log/glusterfs:/var/log/glusterfs:z \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/no-direct-write-here/brick1:/var/no-direct-write-here/brick1 \
-d --privileged=true --net=host \
--restart=always \
--name="glusterfs-server" \
gluster/gluster-centos
```
### Create trusted pool
On a single node (doesn't matter which), run ```docker exec -it glusterfs-server bash``` to launch a shell inside the container.
From the node, run
```gluster peer probe <other host>```
Example output:
```
[root@glusterfs-server /]# gluster peer probe ds1
peer probe: success.
[root@glusterfs-server /]#
```
Run ```gluster peer status``` on both nodes to confirm that they're properly connected to each other:
Example output:
```
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1
Hostname: ds3
Uuid: 3e115ba9-6a4f-48dd-87d7-e843170ff499
State: Peer in Cluster (Connected)
[root@glusterfs-server /]#
```
### Create gluster volume
Now we create a *replicated volume* out of our individual "bricks".
Create the gluster volume by running
```
gluster volume create gv0 replica 2 \
server1:/var/no-direct-write-here/brick1 \
server2:/var/no-direct-write-here/brick1
```
Example output:
```
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/no-direct-write-here/brick1/gv0 ds3:/var/no-direct-write-here/brick1/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
```
Start the volume by running ```gluster volume start gv0```
```
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
```
The volume is only present on the host you're shelled into though. To add the other hosts to the volume, run ```gluster peer probe <servername>```. Don't probe a host from itself.
From one other host, run ```docker exec -it glusterfs-server bash``` to shell into the gluster-server container, and run ```gluster peer probe <original server name>``` to update the name of the host which started the volume.
### Mount gluster volume
On the host (i.e., outside of the container - type ```exit``` if you're still shelled in), create a mountpoint for the data, by running ```mkdir /var/data```, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually _mounted_ if there's a network / boot delay getting access to the gluster volume:
```
mkdir /var/data
MYHOST=`hostname -s`
echo '' >> /etc/fstab
echo '# Mount glusterfs volume' >> /etc/fstab
echo "$MYHOST:/gv0 /var/data glusterfs defaults,_netdev,context="system_u:object_r:svirt_sandbox_file_t:s0" 0 0" >> /etc/fstab
mount -a
```
For some reason, my nodes won't auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount.
```
echo -e "\n\n# Give GlusterFS 10s to start before \
mounting\nsleep 10s && mount -a" >> /etc/rc.local
systemctl enable rc-local.service
```
For non-gluster nodes, you'll need to replace $MYHOST above with the name of one of the gluster hosts (I haven't worked out how to make this fully HA yet)
## Serving
After completing the above, you should have:
* [X] Persistent storage available to every node
* [X] Resiliency in the event of the failure of a single (gluster) node
## Chef's Notes 📓
Future enhancements to this recipe include:
1. Migration of shared storage from GlusterFS to Ceph ([#2](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/2))
2. Correct the fact that volumes don't automount on boot ([#3](https://gitlab.funkypenguin.co.nz/funkypenguin/geeks-cookbook/issues/3))


@@ -108,7 +108,7 @@ What have we achieved? By adding an additional three simple labels to any servic
## Chef's Notes 📓
1. Traefik forward auth replaces the use of [oauth_proxy containers](https://geek-cookbook.funkypenguin.co.nz/reference/oauth_proxy/) found in some of the existing recipes
2. [@thomaseddon's original version](https://github.com/thomseddon/traefik-forward-auth) of traefik-forward-auth only works with Google currently, but I've created a [fork](https://www.github.com/funkypenguin/traefik-forward-auth) of a [fork](https://github.com/noelcatt/traefik-forward-auth), which implements generic OIDC providers.


@@ -0,0 +1,116 @@
# Traefik Forward Auth
Now that we have Traefik deployed, automatically exposing SSL access to our Docker Swarm services using LetsEncrypt wildcard certificates, let's pause to consider that we may not _want_ some services exposed directly to the internet...
..Wait, why not? Well, Traefik doesn't provide any form of authentication, it simply secures the **transmission** of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (*[Radarr](/recipes/autopirate/radarr/) or [Sonarr](/recipes/autopirate/sonarr/) come to mind*), then anybody would be able to use it! Even services which _may_ have a layer of authentication **might** not be safe to expose publicly - often open source projects may be maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that "*it's the user's problem to secure it in their own network*".
To give us confidence that **we** can access our services, but BadGuys(tm) cannot, we'll deploy a layer of authentication **in front** of Traefik, using [Forward Authentication](https://docs.traefik.io/configuration/entrypoints/#forward-authentication). You can use your own [KeyCloak](/recipes/keycloak/) instance for authentication, but to lower the barrier to entry, this recipe will assume you're authenticating against your own Google account.
## Ingredients
!!! summary "Ingredients"
Existing:
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph)
* [X] [Traefik](/ha-docker-swarm/traefik/) configured per design
New:
* [ ] Client ID and secret from an OpenID-Connect provider (Google, [KeyCloak](/recipes/keycloak/), Microsoft, etc..)
## Preparation
### Obtain OAuth credentials
!!! note
This recipe will demonstrate using Google OAuth for traefik forward authentication, but it's also possible to use a self-hosted KeyCloak instance - see the [KeyCloak OIDC Provider](/recipes/keycloak/setup-oidc-provider/) recipe for more details!
Log into https://console.developers.google.com/, create a new project then search for and select "Credentials" in the search bar.
Fill out the "OAuth Consent Screen" tab, and then click, "**Create Credentials**" > "**OAuth client ID**". Select "**Web Application**", fill in the name of your app, skip "**Authorized JavaScript origins**" and fill "**Authorized redirect URIs**" with either all the domains you will allow authentication from, appended with the url-path (*e.g. https://radarr.example.com/_oauth, https://sonarr.example.com/_oauth, etc*), or if you don't like frustration, use an "auth host" URL instead, like "*https://auth.example.com/_oauth*" (*see below for details*)
Store your client ID and secret safely - you'll need them for the next step.
### Prepare environment
Create `/var/data/config/traefik/traefik-forward-auth.env` as follows:
```
CLIENT_ID=<your client id>
CLIENT_SECRET=<your client secret>
OIDC_ISSUER=https://accounts.google.com
SECRET=<a random string, make it up>
# uncomment this to use a single auth host instead of individual redirect_uris (recommended but advanced)
#AUTH_HOST=auth.example.com
COOKIE_DOMAINS=example.com
```
### Prepare the docker service config
This is a small container, you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/ha-docker-swarm/traefik/) recipe:
```
traefik-forward-auth:
image: funkypenguin/traefik-forward-auth
env_file: /var/data/config/traefik/traefik-forward-auth.env
networks:
- traefik_public
# Uncomment these lines if you're using auth host mode
#deploy:
# labels:
# - traefik.port=4181
# - traefik.frontend.rule=Host:auth.example.com
# - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
# - traefik.frontend.auth.forward.trustForwardHeader=true
```
If you're not confident that forward authentication is working, add a simple "whoami" test container, to help debug traefik forward auth, before attempting to add it to a more complex container.
```
# This simply validates that traefik forward authentication is working
whoami:
image: containous/whoami
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:whoami.example.com
- traefik.port=80
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
- traefik.frontend.auth.forward.trustForwardHeader=true
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Serving
### Launch
Redeploy traefik with ```docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml```, to launch the traefik-forward-auth container.
### Test
Browse to https://whoami.example.com (*obviously, customized for your domain and having created a DNS record*), and all going according to plan, you should be redirected to a Google login. Once successfully logged in, you'll be directed to the basic whoami page.
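If you'd rather test from a shell first, an unauthenticated request should bounce you towards the OIDC provider (a rough check; the exact status code may vary):
```
# Look for a redirect status and a Location header pointing at your OIDC provider
curl -sI https://whoami.example.com | grep -iE "^HTTP|^location"
```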
## Summary
What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our choice of OAuth provider, with minimal processing / handling overhead.
!!! summary "Summary"
Created:
* [X] Traefik-forward-auth configured to authenticate against an OIDC provider
## Chef's Notes 📓
1. Traefik forward auth replaces the use of [oauth_proxy containers](/reference/oauth_proxy/) found in some of the existing recipes
2. [@thomaseddon's original version](https://github.com/thomseddon/traefik-forward-auth) of traefik-forward-auth only works with Google currently, but I've created a [fork](https://www.github.com/funkypenguin/traefik-forward-auth) of a [fork](https://github.com/noelcatt/traefik-forward-auth), which implements generic OIDC providers.
3. I reviewed several implementations of forward authenticators for Traefik, but found most to be rather heavy-handed, or specific to a single auth provider. @thomaseddon's go-based docker image is 7MB in size, and with the generic OIDC patch (above), it can be extended to work with any OIDC provider.
4. GitHub isn't natively supported as an OIDC provider, but you can federate GitHub into KeyCloak, and then use KeyCloak as the OIDC provider.

View File

@@ -117,6 +117,6 @@ What have we achieved? By adding an additional three simple labels to any servic
## Chef's Notes 📓
1. KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)

View File

@@ -0,0 +1,122 @@
# Using Traefik Forward Auth with KeyCloak
While the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure **any** URLs within your DNS domain.
## Ingredients
!!! Summary
Existing:
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully, with a [local user](/recipes/keycloak/create-user/) and an [OIDC client](/recipes/keycloak/setup-oidc-provider/)
New:
* [ ] DNS entry for your auth host (*"auth.yourdomain.com" is a good choice*), pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### What is AuthHost mode
Under normal OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feature of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you're securing many URLs though, explicitly listing them can be a PITA.
[@thomaseddon's traefik-forward-auth](https://github.com/thomseddon/traefik-forward-auth) includes an ingenious mechanism to simulate an "*auth host*" in your OIDC authentication, so that you can protect an unlimited amount of DNS names (*with a common domain suffix*), without having to manually maintain a list.
#### How does it work?
Say you're protecting **radarr.example.com**. When you first browse to **https://radarr.example.com**, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider's login (*KeyCloak, in this case*), but instructs the OIDC provider to redirect a successfully authenticated session **back** to **https://auth.example.com/_oauth**, rather than to **https://radarr.example.com/_oauth**.
When you successfully authenticate against the OIDC provider, you are redirected to the "*redirect_uri*" of https://auth.example.com. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which **knows** that you've just been authenticated (*cookies have a role to play here*). Traefik-forward-auth also knows the URL of your **original** request (*thanks to the X-Forwarded-Whatever header*). Traefik-forward-auth redirects you to your original destination, and everybody is happy.
This clever workaround only works under 2 conditions:
1. Your "auth host" has the same domain name as the hosts you're protecting (*i.e., auth.example.com protecting radarr.example.com*)
2. You explicitly tell traefik-forward-auth to use a cookie authenticating your **whole** domain (*i.e. example.com*)
### Setup environment
Create `/var/data/config/traefik/traefik-forward-auth.env` as follows (*change "master" if you created a different realm*):
```
CLIENT_ID=<your keycloak client name>
CLIENT_SECRET=<your keycloak client secret>
OIDC_ISSUER=https://<your keycloak URL>/auth/realms/master
SECRET=<a random string to secure your cookie>
AUTH_HOST=<the FQDN to use for your auth host>
COOKIE_DOMAIN=<the root FQDN of your domain>
```
### Prepare the docker service config
This is a small container, so you can simply add the following content to the existing `traefik-app.yml` deployed in the previous [Traefik](/recipes/traefik/) recipe:
```
traefik-forward-auth:
image: funkypenguin/traefik-forward-auth
env_file: /var/data/config/traefik/traefik-forward-auth.env
networks:
- traefik_public
deploy:
labels:
- traefik.port=4181
- traefik.frontend.rule=Host:auth.example.com
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.trustForwardHeader=true
```
If you're not confident that forward authentication is working, add a simple "whoami" test container, to help debug traefik forward auth, before attempting to add it to a more complex container.
```
# This simply validates that traefik forward authentication is working
whoami:
image: containous/whoami
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:whoami.example.com
- traefik.port=80
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
- traefik.frontend.auth.forward.trustForwardHeader=true
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Serving
### Launch
Redeploy traefik with ```docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml```, to launch the traefik-forward-auth container.
### Test
Browse to https://whoami.example.com (*obviously, customized for your domain and having created a DNS record*), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page.
### Protect services
To protect any other service, ensure the service itself is exposed by Traefik (*if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself*). Add the following 3 labels:
```
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
- traefik.frontend.auth.forward.trustForwardHeader=true
```
And re-deploy your services :)
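As a concrete illustration, a protected service might end up looking something like the sketch below - the Radarr image, hostname and port are assumptions for the example, and only the three `traefik.frontend.auth.forward.*` labels are the point:
```
radarr:
  image: linuxserver/radarr
  networks:
    - traefik_public
  deploy:
    labels:
      # Normal Traefik exposure
      - traefik.frontend.rule=Host:radarr.example.com
      - traefik.port=7878
      # The three labels which delegate authentication to traefik-forward-auth
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true
```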
## Summary
What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our KeyCloak OIDC provider, with minimal processing / handling overhead.
!!! summary "Summary"
Created:
* [X] Traefik-forward-auth configured to authenticate against KeyCloak
## Chef's Notes 📓
1. KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)

View File

@@ -234,6 +234,6 @@ You should now be able to access your traefik instance on http://<node IP\>:8080
* [X] Automatic SSL support for all proxied resources
## Chef's Notes 📓
1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)!

View File

@@ -0,0 +1,239 @@
# Traefik
The platforms we plan to run on our cloud are generally web-based, and each listening on their own unique TCP port. When a container in a swarm exposes a port, then connecting to **any** swarm member on that port will result in your request being forwarded to the appropriate host running the container. (_Docker calls this the swarm "[routing mesh](https://docs.docker.com/engine/swarm/ingress/)"_)
So we get a rudimentary load balancer built into swarm. We could stop there, just exposing a series of ports on our hosts, and making them HA using keepalived.
There are some gaps to this approach though:
- No consideration is given to HTTPS. Implementation would have to be done manually, per-container.
- No mechanism is provided for authentication outside of that which the container provides. We may not **want** to expose every interface on every container to the world, especially if we are playing with tools or containers whose quality and origin are unknown.
To deal with these gaps, we need a front-end load-balancer, and in this design, that role is provided by [Traefik](https://traefik.io/).
![Traefik Screenshot](../images/traefik.png)
## Ingredients
!!! summary "You'll need"
Existing
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph)
New
* [ ] Access to update your DNS records for manual/automated [LetsEncrypt](https://letsencrypt.org/docs/challenge-types/) DNS-01 validation, or ingress HTTP/HTTPS for HTTP-01 validation
## Preparation
### Prepare the host
The traefik container is aware of the __other__ docker containers in the swarm, because it has access to the docker socket at **/var/run/docker.sock**. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on a SELinux-enabled CentOS7 host, we need to add a custom SELinux policy.
!!! tip
The following is only necessary if you're using SELinux!
Run the following to build and activate policy to permit containers to access docker.sock:
```
mkdir ~/dockersock
cd ~/dockersock
curl -O https://raw.githubusercontent.com/dpw/\
selinux-dockersock/master/Makefile
curl -O https://raw.githubusercontent.com/dpw/\
selinux-dockersock/master/dockersock.te
make && semodule -i dockersock.pp
```
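If you'd like to confirm the module actually loaded (a quick sanity check - the module name comes from the Makefile above), list it with:
```
# Should print "dockersock" followed by a version number
semodule -l | grep dockersock
```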
### Prepare traefik.toml
While it's possible to configure traefik via docker command arguments, I prefer to create a config file (`traefik.toml`). This allows me to change traefik's behaviour by simply changing the file, and keeps my docker config simple.
Create `/var/data/traefik/traefik.toml` as follows:
```
checkNewVersion = true
defaultEntryPoints = ["http", "https"]
# This section enable LetsEncrypt automatic certificate generation / renewal
[acme]
email = "<your LetsEncrypt email address>"
storage = "acme.json" # or "traefik/acme/account" if using KV store
entryPoint = "https"
acmeLogging = true
onDemand = true
OnHostRule = true
# Request wildcard certificates per https://docs.traefik.io/configuration/acme/#wildcard-domains
[[acme.domains]]
main = "*.example.com"
sans = ["example.com"]
# Redirect all HTTP to HTTPS (why wouldn't you?)
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[web]
address = ":8080"
watch = true
[docker]
endpoint = "tcp://127.0.0.1:2375"
domain = "example.com"
watch = true
swarmmode = true
```
### Prepare the docker service config
!!! tip
"We'll want an overlay network, independent of our traefik stack, so that we can attach/detach all our other stacks (including traefik) to the overlay network. This way, we can undeploy/redepoly the traefik stack without having to bring every other stack first!" - voice of experience
Create `/var/data/config/traefik/traefik.yml` as follows:
```
version: "3.2"
# What is this?
#
# This stack exists solely to deploy the traefik_public overlay network, so that
# other stacks (including traefik-app) can attach to it
services:
scratch:
image: scratch
deploy:
replicas: 0
networks:
- public
networks:
public:
driver: overlay
attachable: true
ipam:
config:
- subnet: 172.16.200.0/24
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
Create `/var/data/config/traefik/traefik-app.yml` as follows:
```
version: "3"
services:
traefik:
image: traefik
command: --web --docker --docker.swarmmode --docker.watch --docker.domain=example.com --logLevel=DEBUG
# Note below that we use host mode to avoid source nat being applied to our ingress HTTP/HTTPS sessions
# Without host mode, all inbound sessions would have the source IP of the swarm nodes, rather than the
# original source IP, which would impact logging. If you don't care about this, you can expose ports the
# "minimal" way instead
ports:
- target: 80
published: 80
protocol: tcp
mode: host
- target: 443
published: 443
protocol: tcp
mode: host
- target: 8080
published: 8080
protocol: tcp
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /var/data/config/traefik:/etc/traefik
- /var/data/traefik/traefik.log:/traefik.log
- /var/data/traefik/acme.json:/acme.json
networks:
- traefik_public
# Global mode makes an instance of traefik listen on _every_ node, so that regardless of which
# node the request arrives on, it'll be forwarded to the correct backend service.
deploy:
labels:
- "traefik.enable=false"
mode: global
placement:
constraints: [node.role == manager]
restart_policy:
condition: on-failure
networks:
traefik_public:
external: true
```
Docker won't start a service with a bind-mount to a non-existent file, so prepare an empty acme.json (_with the appropriate permissions_) by running:
```
touch /var/data/traefik/acme.json
chmod 600 /var/data/traefik/acme.json
```
!!! warning
Pay attention above. You **must** set `acme.json`'s permissions to owner-readable-only, else the container will fail to start with an [ID-10T](https://en.wikipedia.org/wiki/User_error#ID-10-T_error) error!
Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (_Chicken, meet egg._)
## Serving
### Launch
First, launch the traefik stack (which does nothing other than create an overlay network) by running `docker stack deploy traefik -c /var/data/config/traefik/traefik.yml`
```
[root@kvm ~]# docker stack deploy traefik -c traefik.yml
Creating network traefik_public
Creating service traefik_scratch
[root@kvm ~]#
```
Now deploy the traefik application itself (*which will attach to the overlay network*) by running `docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml`
```
[root@kvm ~]# docker stack deploy traefik-app -c traefik-app.yml
Creating service traefik-app_app
[root@kvm ~]#
```
Confirm traefik is running with `docker stack ps traefik-app`:
```
[root@kvm ~]# docker stack ps traefik-app
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
74uipz4sgasm traefik-app_app.t4vcm8siwc9s1xj4c2o4orhtx traefik:alpine kvm.funkypenguin.co.nz Running Running 33 seconds ago *:443->443/tcp,*:80->80/tcp
[root@kvm ~]#
```
### Check Traefik Dashboard
You should now be able to access your traefik instance on http://<node IP\>:8080 - It'll look a little lonely currently (*below*), but we'll populate it as we add recipes :)
![Screenshot of Traefik, post-launch](/images/traefik-post-launch.png)
### Summary
!!! summary
We've achieved:
* [X] An overlay network to permit traefik to access all future stacks we deploy
* [X] Frontend proxy which will dynamically configure itself for new backend containers
* [X] Automatic SSL support for all proxied resources
## Chef's Notes 📓
1. Did you notice how no authentication was required to view the Traefik dashboard? Eek! We'll tackle that in the next section, regarding [Traefik Forward Authentication](/ha-docker-swarm/traefik-forward-auth/)!

View File

@@ -59,7 +59,7 @@ The best way to support this work is to become a [GitHub Sponsor](https://github
Impulsively **[click here (NOW quick do it!)][github_sponsor]** to [sponsor me][github_sponsor] via GitHub, or [patronize me via Patreon][patreon]!
### Work with me 🤝
Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified][aws_cert] consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Get in touch][contact], and let's talk business!

93
manuscript/index.mde Normal file
View File

@@ -0,0 +1,93 @@
# What is this?
Funky Penguin's "**[Geek Cookbook](https://geek-cookbook.funkypenguin.co.nz)**" is a collection of how-to guides for establishing your own container-based self-hosting platform, using either [Docker Swarm](/ha-docker-swarm/design/) or [Kubernetes](/kubernetes/start/).
Running such a platform enables you to run self-hosted tools such as [AutoPirate](/recipes/autopirate/) (*Radarr, Sonarr, NZBGet and friends*), [Plex][plex], [NextCloud][nextcloud], and includes elements such as:
* [Automatic SSL-secured access](/ha-docker-swarm/traefik/) to all services (*with LetsEncrypt*)
* [SSO / authentication layer](/ha-docker-swarm/traefik-forward-auth/) to protect unsecured / vulnerable services
* [Automated backup](/recipes/elkarbackup/) of configuration and data
* [Monitoring and metrics](/recipes/swarmprom/) collection, graphing and alerting
Recent updates and additions are posted on the [CHANGELOG](/CHANGELOG/), and there's a friendly community of like-minded geeks in the [Discord server](http://chat.funkypenguin.co.nz).
## Who is this for?
You already have a familiarity with concepts such as virtual machines, [Docker](https://www.docker.com/) containers, [LetsEncrypt SSL certificates](https://letsencrypt.org/), databases, and command-line interfaces.
You've probably played with self-hosting some mainstream apps yourself, like [Plex][plex], [NextCloud][nextcloud], [Wordpress][wordpress] or [Ghost][ghost].
## Why should I read this?
So if you're familiar enough with the concepts above, and you've done self-hosting before, why would you read any further?
1. You want to upskill. You want to work with container orchestration, Prometheus and Grafana, and Kubernetes.
2. You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don't.
3. You want reliability. Once you go from __playing__ with a tool to actually __using__ it, you want it to be available when you need it. Having to "*quickly ssh into the basement server and restart plex*" doesn't cut it when you finally convince your wife to sit down with you to watch sci-fi.
!!! quote "...how useful the recipes are for people just getting started with containers..."
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">.<a href="https://twitter.com/funkypenguin?ref_src=twsrc%5Etfw">@funkypenguin</a> One of the surprising realizations from following Funky Penguins cookbooks <a href="https://t.co/XvZ2qLJa5N">https://t.co/XvZ2qLJa5N</a> for so long is how useful the recipes are for people just getting started with containers and how it gives them real, interesting usecases to attach to their learning</p>&mdash; DevOps Daniel (@DanielSHouston) <a href="https://twitter.com/DanielSHouston/status/1213419203379773442?ref_src=twsrc%5Etfw">January 4, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
## What have you done for me lately? (CHANGELOG)
Check out recent changes at [CHANGELOG](/CHANGELOG/)
## What do you want from me?
I want your [support][github_sponsor], either in the [financial][github_sponsor] sense, or as a member of our [friendly geek community][discord] (*or both!*)
### Get in touch 👋
* Come and say hi to me and the friendly geeks in the [Discord][discord] chat or the [Discourse][discourse] forums - say hi, ask a question, or suggest a new recipe!
* Tweet me up, I'm [@funkypenguin][twitter]! 🐦
* [Contact me][contact] by a variety of channels
### [Sponsor][github_sponsor] / [Patronize][patreon] me ❤️
The best way to support this work is to become a [GitHub Sponsor](https://github.com/sponsors/funkypenguin) / [Patreon patron][patreon]. You get:
* warm fuzzies,
* access to the pre-mix repo,
* an anonymous plug you can pull at any time,
* and a bunch more loot based on tier
.. and I get some pocket money every month to buy wine, cheese, and cryptocurrency! 🍷 💰
Impulsively **[click here (NOW quick do it!)][github_sponsor]** to [sponsor me][github_sponsor] via GitHub, or [patronize me via Patreon][patreon]!
### Work with me 🤝
Need some Cloud / Microservices / DevOps / Infrastructure design work done? I'm a full-time [AWS-certified][aws_cert] consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Get in touch][contact], and let's talk business!
[plex]: https://www.plex.tv/
[nextcloud]: https://nextcloud.com/
[wordpress]: https://wordpress.org/
[ghost]: https://ghost.io/
[discord]: http://chat.funkypenguin.co.nz
[patreon]: https://www.patreon.com/bePatron?u=6982506
[github_sponsor]: https://github.com/sponsors/funkypenguin
[github]: https://github.com/sponsors/funkypenguin
[discourse]: https://discourse.geek-kitchen.funkypenguin.co.nz/
[twitter]: https://twitter.com/funkypenguin
[contact]: https://www.funkypenguin.co.nz
[aws_cert]: https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574
!!! quote "He unblocked me on all the technical hurdles to launching my SaaS in GKE!"
By the time I had enlisted Funky Penguin's help, I'd architected myself into a bit of a nightmare with Kubernetes. I knew what I wanted to achieve, but I'd made a mess of it. Funky Penguin (David) was able to jump right in and offer a vital second-think on everything I'd done, pointing out where things could be simplified and streamlined, and better alternatives.
He unblocked me on all the technical hurdles to launching my SaaS in GKE!
With him delivering the container/Kubernetes architecture and helm CI/CD workflow, I was freed up to focus on coding and design, which fast-tracked me to launching on time. And now I have a simple deployment process that is easy for me to execute and maintain as a solo founder.
I have no hesitation in recommending him for your project, and I'll certainly be calling on him again in the future.
-- John McDowall, Founder, [kiso.io](https://kiso.io)
### Buy my book 📖
I'm publishing the Geek Cookbook as a formal eBook (*PDF, mobi, epub*), on Leanpub (https://leanpub.com/geek-cookbook). Check it out!

View File

@@ -7,7 +7,7 @@ IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](
## Ingredients
1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some 💰 to buy 🍷_)
2. Geek-Fu required : 🐱 (easy - even has screenshots!)
## Preparation ## Preparation

View File

@@ -0,0 +1,86 @@
# Kubernetes on DigitalOcean
IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster.
![Kubernetes on Digital Ocean](/images/kubernetes-on-digitalocean.jpg)
## Ingredients
1. [DigitalOcean](https://www.digitalocean.com/?refcode=e33b78ad621b) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. (_yes, this is a referral link, making me some 💰 to buy 🍷_)
2. Geek-Fu required : 🐱 (easy - even has screenshots!)
## Preparation
### Create DigitalOcean Account
Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel:
![Kubernetes on Digital Ocean Screenshot #1](/images/kubernetes-on-digitalocean-screenshot-1.png)
Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**:
![Kubernetes on Digital Ocean Screenshot #2](/images/kubernetes-on-digitalocean-screenshot-2.png)
The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_ . Cleeeek it:
![Kubernetes on Digital Ocean Screenshot #3](/images/kubernetes-on-digitalocean-screenshot-3.png)
When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_)
![Kubernetes on Digital Ocean Screenshot #4](/images/kubernetes-on-digitalocean-screenshot-4.png)
That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to setup kubectl (if you don't already have it)
![Kubernetes on Digital Ocean Screenshot #5](/images/kubernetes-on-digitalocean-screenshot-5.png)
DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_).
![Kubernetes on Digital Ocean Screenshot #6](/images/kubernetes-on-digitalocean-screenshot-6.png)
## Release the kubectl!
Save your kubeconfig file somewhere, and test it out by running ```kubectl --kubeconfig=<PATH TO KUBECONFIG> get nodes```
Example output:
```
[davidy:~/Downloads] 130 % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
festive-merkle-8n9e Ready <none> 20s v1.13.1
[davidy:~/Downloads] %
```
In the example above, my nodes were being deployed. Repeat the command to see your nodes spring into existence:
```
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
festive-merkle-8n96 Ready <none> 6s v1.13.1
festive-merkle-8n9e Ready <none> 34s v1.13.1
[davidy:~/Downloads] %
[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes
NAME STATUS ROLES AGE VERSION
festive-merkle-8n96 Ready <none> 30s v1.13.1
festive-merkle-8n9a Ready <none> 17s v1.13.1
festive-merkle-8n9e Ready <none> 58s v1.13.1
[davidy:~/Downloads] %
```
That's it. You have a beautiful new kubernetes cluster ready for some action!
## Move on..
Still with me? Good. Move on to creating your own external load balancer..
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* Cluster (this page) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## Chef's Notes
1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come!

View File

@@ -0,0 +1,129 @@
# Design
Like the [Docker Swarm](ha-docker-swarm/design/) "_private cloud_" design, the Kubernetes design is:
* **Highly-available** (_can tolerate the failure of a single component_)
* **Scalable** (_can add resource or capacity as required_)
* **Portable** (_run it in DigitalOcean today, AWS tomorrow and Azure on Thursday_)
* **Secure** (_access protected with LetsEncrypt certificates_)
* **Automated** (_requires minimal care and feeding_)
*Unlike* the Docker Swarm design, the Kubernetes design is:
* **Cloud-Native** (_While you **can** [run your own Kubernetes cluster](https://microk8s.io/), it's far simpler to let someone else manage the infrastructure, freeing you to play with the fun stuff_)
* **Complex** (_Requires more basic elements, more verbose configuration, and provides more flexibility and customisability_)
## Design Decisions
**The design and recipes are provider-agnostic**
This means that:
* The design should work on GKE, AWS, DigitalOcean, Azure, or even MicroK8s
* Custom service elements specific to individual providers are avoided
**The simplest solution to achieve the desired result will be preferred**
This means that:
* Persistent volumes from the cloud provider are used for all persistent storage
* We'll do things the "_Kubernetes way_", i.e., using secrets and configmaps, rather than trying to engineer around the Kubernetes basic building blocks.
**Insofar as possible, the format of recipes will align with Docker Swarm**
This means that:
* We use Kubernetes namespaces to replicate Docker Swarm's "_per-stack_" networking and service discovery
## Security
Under this design, the only inbound connections we're permitting to our Kubernetes swarm are:
### Network Flows
* HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy (_Traefik_)
* Individual additional ports we choose to expose for specific recipes (_i.e., port 8443 for [MQTT](/recipes/mqtt/)_)
### Authentication
* Other than when an SSL-served application provides a trusted level of authentication, or where the application requires public exposure, applications served via Traefik will be protected with an OAuth proxy.
## The challenges of external access
Because we're Cloud-Native now, it's complex to get traffic **into** our cluster from outside. We basically have 3 options:
1. **HostIP**: Map a port on the host to a service. This is analogous to Docker's port exposure, but lacking in that it restricts us to one host port per-container, and it's not possible to anticipate _which_ of your Kubernetes hosts is running a given container. Kubernetes does not have Docker Swarm's "routing mesh", allowing for simple load-balancing of incoming connections.
2. **LoadBalancer**: Purchase a "loadbalancer" per-service from your cloud provider. While this is the simplest way to assure a fixed IP and port combination will always exist for your service, it has 2 significant limitations:
1. Cost is prohibitive, at roughly $US10/month per port
2. You won't get the _same_ fixed IP for multiple ports. So if you wanted to expose 443 and 25 (_webmail and smtp server, for example_), you'd find yourself assigned a port each on two **unique** IPs, a challenge for a single DNS-based service, like "_mail.batman.com_"
3. **NodePort**: Expose our service as a port (_between 30000-32767_) on the host which happens to be running the service. This is challenging because you might want to expose port 443, but that's not possible with NodePort.
To further complicate options #1 and #3 above, our cloud provider may, without notice, change the IP of the host running your containers (_O hai, Google!_).
Our solution to these challenges is to employ a simple-but-effective solution which places an HAProxy instance in front of the services exposed by NodePort. For example, this allows us to expose a container on 443 as NodePort 30443, and to cause HAProxy to listen on port 443, and forward all requests to our Node's IP on port 30443, after which it'll be forwarded onto our container on the original port 443.
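To make the NodePort half of that concrete, here's a minimal sketch of a Kubernetes Service exposing a (hypothetical) pod's port 443 on NodePort 30443 - the names and selector are assumptions for illustration only:
```
apiVersion: v1
kind: Service
metadata:
  name: webui
  namespace: example
spec:
  type: NodePort
  selector:
    app: webui
  ports:
    - name: https
      port: 443        # service port within the cluster
      targetPort: 443  # container port
      nodePort: 30443  # exposed on every node (30000-32767 range)
```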
We use a phone-home container, which calls a simple webhook on our haproxy VM, advising HAProxy to update its backend for the calling IP. This means that when our provider changes the host's IP, we automatically update HAProxy and keep-on-truckin'!
Here's a high-level diagram:
![Kubernetes Design](/images/kubernetes-cluster-design.png)
## Overview
So what's happening in the diagram above? I'm glad you asked - let's go through it!
### Setting the scene
In the diagram, we have a Kubernetes cluster comprised of 3 nodes. You'll notice that there's no visible master node. This is because most cloud providers will give you a "_free_" master node, but you don't get to access it. The master node is just a part of the Kubernetes "_as-a-service_" which you're purchasing.
Our nodes are partitioned into several namespaces, which logically separate our individual recipes. (_I.e., allowing both a "gitlab" and a "nextcloud" namespace to include a service named "db", which would be challenging without namespaces_)
Outside of our cluster (_could be anywhere on the internet_) is a single VM serving as a load-balancer, running HAProxy and a webhook service. This load-balancer is described in detail, [in its own section](/kubernetes/loadbalancer/), but what's important up-front is that this VM is the **only element of the design for which we need to provide a fixed IP address**.
### 1 : The mosquitto pod
In the "mqtt" namespace, we have a single pod, running 2 containers - the mqtt broker, and a "phone-home" container.
Why 2 containers in one pod, instead of 2 independent pods? Because all the containers in a pod are **always** run on the same physical host. We're using the phone-home container as a simple way to call a webhook on the not-in-the-cluster VM.
The phone-home container calls the webhook, and tells HAProxy to listen on port 8443, and to forward any incoming requests to port 30843 (_within the NodePort range_) on the IP of the host running the container (_and because of the pod, the phone-home container is guaranteed to be on the same host as the MQTT container_).
### 2 : The Traefik Ingress
In the "default" namespace, we have a Traefik "Ingress Controller". An Ingress controller is a way to use a single port (_say, 443_) plus some intelligence (_say, a defined mapping of URLs to services_) to route incoming requests to the appropriate containers (_via services_). Basically, the Trafeik ingress does what [Traefik does for us under Docker Swarm](/docker-ha-swarm/traefik/).
What's happening in the diagram is that a phone-home pod is tied to the traefik pod using affinity, so that both containers will be executed on the same host. Again, the phone-home container calls a webhook on the HAProxy VM, auto-configuring HAProxy to send any HTTPS traffic to its calling address and custom NodePort port number.
When an inbound HTTPS request is received by Traefik, based on some internal Kubernetes elements (ingresses), Traefik provides SSL termination, and routes the request to the appropriate service (_in this case, either the GitLab UI or the UniFi UI_)
### 3 : The UniFi pod
What's happening in the UniFi pod is a combination of #1 and #2 above. UniFi controller provides a webUI (_typically 8443, but we serve it via Traefik on 443_), plus some extra ports for device adoption, which are using a proprietary protocol, and can't be proxied with Traefik.
To make both the webUI and the adoption ports work, we use a combination of an ingress for the webUI (_see #2 above_), and a phone-home container to tell HAProxy to forward port 8080 (_the adoption port_) directly to the host, using a NodePort-exposed service.
This allows us to retain the use of a single IP for all controller functions, as accessed outside of the cluster.
### 4 : The webhook
Each phone-home container is calling a webhook on the HAProxy VM, secured with a secret shared token. The phone-home container passes the desired frontend port (i.e., 443), the corresponding NodeIP port (i.e., 30443), and the node's current public IP address.
The webhook uses the provided details to update HAProxy for the combination of values, validate the config, and then restart HAProxy.
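Purely as an illustration (the webhook endpoint, token header and field names below are made up - your own webhook implementation defines them), the phone-home call is conceptually just an authenticated POST:
```
# Hypothetical sketch only - substitute your own webhook URL, token and field names
curl -X POST https://haproxy.example.com:9000/hooks/update-backend \
  -H "X-Webhook-Token: ${WEBHOOK_TOKEN}" \
  -d "frontend_port=443&node_port=30443&node_ip=$(curl -s ifconfig.co)"
```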
### 5 : The user
Finally, the DNS for all externally-accessible services is pointed to the IP of the HAProxy VM. On receiving an inbound request (_be it port 443, 8080, or anything else configured_), HAProxy will forward the request to the IP and NodePort port learned from the phone-home container.
## Move on..
Still with me? Good. Move on to creating your cluster!
* [Start](/kubernetes/start/) - Why Kubernetes?
* Design (this page) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm

View File

@@ -29,7 +29,7 @@ If you want to use minikube, there is a guide below but again, I recommend using
they add in additional complexities to the installation as they
require running a Linux based image running in a VM,
that although minikube will manage, adds to the complexities. And
even then, who uses Windows or macOS in production anyways? 🙂
If you are serious about running on windows/macOS,
check the official MiniKube guides
[here](https://minikube.sigs.k8s.io/docs/start/)
@@ -82,7 +82,7 @@ Then spin yourself up as many systems as you need with the following guide
!!! note
I am running a 3 node cluster, with nodes running on Ubuntu 19.04, all virtualized with VMWare ESXi
Your setup doesn't need to be as complex as mine, you can use 3 old Dell OptiPlex if you really want 🙂
1. Insert your installation medium into the machine, and boot it.
2. Select your language
@@ -183,7 +183,7 @@ thomas-k3s-node1$ curl -sfL https://get.k3s.io | sh -
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
```
@@ -284,7 +284,7 @@ That is all! You have yourself a Kubernetes cluster for you and your dog to enjo
DRP or Digital Rebar Provisioning Tool is a tool designed to automatically setup your cluster, installing an operating system for you, and doing all the configuration like we did in the k3s setup.
This section is WIP, instead, try using the K3S guide above 🙂
## Where from now
@@ -304,7 +304,7 @@ This article, believe it or not, was not diced up by your regular chef (funkypen
Instead, today's article was diced up by HexF, a fellow kiwi (hence a lot of kiwi references) who enjoys his sysadmin time.
Feel free to talk to today's chef in the discord, or see one of his many other links that you can follow below
[Twitter](https://hexf.me/api/social/twitter/geekcookbook) • [Website](https://hexf.me/api/social/website/geekcookbook) • [Github](https://hexf.me/api/social/github/geekcookbook)
<!--
The links above are just redirect links incase anything ever changes, and it has analytics too

View File

@@ -0,0 +1,311 @@
# DIY Kubernetes
If you are looking for a little more of a challenge, or just don't have the money to fork out to managed Kubernetes, you're in luck.
Kubernetes provides many ways to run a cluster; by far the simplest is with `minikube`, but there are other methods like `k3s`, or using `drp` to deploy a cluster.
After all, DIY is in our DNA.
## Ingredients
1. Basic knowledge of Kubernetes terms (it will come in handy) - see [Start](/kubernetes/start)
2. Some Linux machines (Depends on what recipe you follow)
## Minikube
First, what is minikube?
Minikube is a method of running Kubernetes on your local machine.
It is mainly targeted at developers looking to test if their application will work with Kubernetes without deploying it to a production cluster. For this reason,
I do not recommend running your cluster on minikube as it isn't designed for deployment, and is only a single node cluster.
If you want to use minikube, there is a guide below but again, I recommend using something more production-ready like `k3s` or `drp`
### Ingredients
1. A Fresh Linux Machine
2. Some basic Linux knowledge (or can just copy-paste)
!!! note
Make sure you are running a SystemD based distro like Ubuntu.
Although minikube will run on macOS and Windows,
they add in additional complexities to the installation as they
require running a Linux based image running in a VM,
that although minikube will manage, adds to the complexities. And
even then, who uses Windows or macOS in production anyways? 🙂
If you are serious about running on windows/macOS,
check the official MiniKube guides
[here](https://minikube.sigs.k8s.io/docs/start/)
### Installation
After booting yourself up a fresh Linux machine and getting to a console,
you can now install minikube.
Download and install our minikube binary
```sh
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```
Now we can boot up our cluster
```sh
sudo minikube start --vm-driver=none
#Start our minikube instance, and make it use the machine to host the cluster, instead of a VM
sudo minikube config set vm-driver none #Set our default vm driver to none
```
You are now set up with minikube!
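To double-check that your single-node cluster is actually up (a quick sanity check; the second command assumes `kubectl` is installed, which the `none` driver doesn't do for you), run:
```sh
# Confirm the cluster components are running
sudo minikube status
# The single node should report as Ready
sudo kubectl get nodes
```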
!!! warning
MiniKube is not a production-grade method of deploying Kubernetes
## K3S
What is k3s?
K3s is a production-ready method of deploying Kubernetes on many machines,
where a full Kubernetes deployment is not required, AKA - your cluster (unless you're a big SaaS company, in which case, can I get a job?).
### Ingredients
1. A handful of Linux machines (3 or more, virtualized or not)
2. Some Linux knowledge.
3. Patience.
### Setting your Linux Machines up
Firstly, my flavour of choice for deployment is Ubuntu Server,
although it is not as enterprise-friendly as RHEL (That's Red Hat Enterprise Linux for my less geeky readers) or CentOS (The free version of RHEL).
Ubuntu ticks all the boxes for k3s to run on and allows you to follow lots of other guides on managing and maintaining your Ubuntu server.
Firstly, download yourself a version of Ubuntu Server from [here](https://ubuntu.com/download/server) (Whatever is latest)
Then spin yourself up as many systems as you need with the following guide
!!! note
I am running a 3 node cluster, with nodes running on Ubuntu 19.04, all virtualized with VMWare ESXi
Your setup doesn't need to be as complex as mine, you can use 3 old Dell OptiPlex if you really want 🙂
1. Insert your installation medium into the machine, and boot it.
2. Select your language
3. Select your keyboard layout
4. Select `Install Ubuntu`
5. Check and modify your network settings if required, make sure to write down your IPs
6. Select Done on Proxy, unless you use a proxy
7. Select Done on Mirror, as it has picked the best mirror for you, unless you have a local mirror you want to use (in that case, you are an uber-geek)
8. Select `Use An Entire Disk` for Filesystem, and basically hit enter for the rest of the disk setup,
just make sure to read the prompts and understand what you are doing
9. Now that you are up to setting up the profile, this is where things change.
You are going to want to set up the same account on all the machines, but change the server name just a tad every time.
![Profile Setup for Node 1](../images/diycluster-k3s-profile-setup.png)
![Profile Setup for Node 2](../images/diycluster-k3s-profile-setup-node2.png)
10. Now install OpenSSH on the server, if you wish to import your existing SSH key from GitHub or Launchpad,
you can do that now and save yourself a step later.
11. Skip over Featured Server snaps by clicking `Done`
12. Wait for your server to install everything and drop you to a Linux prompt
13. Repeat for all your nodes
### Pre-installation of k3s
For the rest of this guide, you will need some sort of Linux/macOS based terminal.
On Windows you can do this with Windows Subsystem for Linux (WSL) see [here for information on WSL.](https://aka.ms/wslinstall)
The rest of this guide will all be from your local terminal.
If you already have an SSH key generated or added an existing one, skip this step.
From your PC, run `ssh-keygen` to generate a public and private key pair
(You can use this instead of typing your password in every time you want to connect via ssh)
```sh
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/thomas/.ssh/id_rsa): [enter]
Enter passphrase (empty for no passphrase): [password]
Enter same passphrase again: [password]
Your identification has been saved in /home/thomas/.ssh/id_rsa.
Your public key has been saved in /home/thomas/.ssh/id_rsa.pub.
The key fingerprint is:
...
The key's randomart image is:
...
```
If you have already imported a key from GitHub or Launchpad, skip this step.
```sh
$ ssh-copy-id [username]@[hostname]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/thomas/.ssh/id_rsa.pub"
The authenticity of host 'thomas-k3s-node1 (theipaddress)' can't be established.
ECDSA key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
thomas@thomas-k3s-node1's password: [insert your password now]
Number of key(s) added: 1
```
You will want to do this once for every machine, replacing the hostname with each of the other nodes' hostnames.
!!! note
If your hostnames aren't resolving correctly, try adding them to your `/etc/hosts` file
### Installation
If you have access to the premix repository, you can download the ansible playbook and follow the steps contained there; if not, sit back and prepare to do it manually.
!!! tip
Becoming a patron will allow you to get the ansible playbook to set up k3s on your own hosts. For as little as $5/month you can get access to the ansible playbooks for this recipe, and more!
See [funkypenguin's Patreon](https://www.patreon.com/funkypenguin) for more!
<!---
(Just someone needs to remind me (HexF) to write such playbook)
-->
Select one node to become your master, in my case `thomas-k3s-node1`.
Now SSH into this node, and run the following:
```sh
localpc$ ssh thomas@thomas-k3s-node1
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]
thomas-k3s-node1$ curl -sfL https://get.k3s.io | sh -
[sudo] password for thomas: [password entered in setup]
[INFO] Finding latest release
[INFO] Using v1.0.0 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
```
Before we log out of the master, we need the token from it.
Make sure to note this token down
(please don't write it on paper, use something like `notepad` or `vim`, it's ~100 characters)
```sh
thomas-k3s-node1$ sudo cat /var/lib/rancher/k3s/server/node-token
K1097e226f95f56d90a4bab7151...
```
Make sure all nodes can access each other by hostname, whether you add them to `/etc/hosts` or to your DNS server
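For example (the IPs below are made up - substitute your own), each node's `/etc/hosts` might gain entries like:
```sh
# /etc/hosts on every node (and optionally on your workstation too)
192.168.1.101 thomas-k3s-node1
192.168.1.102 thomas-k3s-node2
192.168.1.103 thomas-k3s-node3
```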
Now that you have your master node setup, you can now add worker nodes
SSH into the other nodes, and run the following making sure to replace values with ones that suit your installation
```sh
localpc$ ssh thomas@thomas-k3s-node2
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]
thomas-k3s-node2$ curl -sfL https://get.k3s.io | K3S_URL=https://thomas-k3s-node1:6443 K3S_TOKEN=K1097e226f95f56d90a4bab7151... sh -
```
Now test your installation!
SSH into your master node
```sh
ssh thomas@thomas-k3s-node1
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]
thomas-k3s-node1$ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
thomas-k3s-node1 Ready master 15m3s v1.16.3-k3s.2
thomas-k3s-node2 Ready <none> 6m58s v1.16.3-k3s.2
thomas-k3s-node3 Ready <none> 6m12s v1.16.3-k3s.2
```
If you got Ready for all your nodes, Well Done! Your k3s cluster is now running! If not try getting help in our discord.
### Post-Installation
Now you can get yourself a kubeconfig for your cluster.
SSH into your master node, and run the following
```sh
localpc$ ssh thomas@thomas-k3s-node1
Enter passphrase for key '/home/thomas/.ssh/id_rsa': [ssh key password]
thomas-k3s-node1$ sudo kubectl config view --flatten
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBD...
server: https://127.0.0.1:6443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
password: thisishowtolosecontrolofyourk3s
username: admin
```
Make sure to change `clusters.cluster.server` to have the master node's name instead of `127.0.0.1`, in my case making it `https://thomas-k3s-node1:6443`
!!! warning
This kubeconfig file can grant full access to your Kubernetes installation, I recommend you protect this file just as well as you protect your passwords
You will probably want to save this kubeconfig file into a file on your local machine, say `my-k3s-cluster.yml` or `where-8-hours-of-my-life-went.yml`.
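If you'd rather script that edit (a sketch, assuming you saved the kubeconfig as `my-k3s-cluster.yml` and your master is `thomas-k3s-node1`; GNU sed syntax shown):
```sh
sed -i 's/127.0.0.1/thomas-k3s-node1/' my-k3s-cluster.yml
```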
Now test it out!
```sh
localpc$ kubectl --kubeconfig=my-k3s-cluster.yml get nodes
NAME STATUS ROLES AGE VERSION
thomas-k3s-node1 Ready master 495m v1.16.3-k3s.2
thomas-k3s-node2 Ready <none> 488m v1.16.3-k3s.2
thomas-k3s-node3 Ready <none> 487m v1.16.3-k3s.2
```
<!--
To the reader concerned about my health, no I did not actually spend 8 hours writing this guide, Instead I spent most of it helping you guys on the discord (👍) and other stuff
-->
That is all! You have yourself a Kubernetes cluster for you and your dog to enjoy.
## DRP
DRP or Digital Rebar Provisioning Tool is a tool designed to automatically setup your cluster, installing an operating system for you, and doing all the configuration like we did in the k3s setup.
This section is WIP, instead, try using the K3S guide above 🙂
## Where from now
Now that you have wasted half a lifetime on installing your very own cluster, you can install more to it. Like a load balancer!
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* Cluster (this page) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## About your Chef
This article, believe it or not, was not diced up by your regular chef (funkypenguin).
Instead, today's article was diced up by HexF, a fellow kiwi (hence a lot of kiwi references) who enjoys his sysadmin time.
Feel free to talk to today's chef in the discord, or see one of his many other links that you can follow below
[Twitter](https://hexf.me/api/social/twitter/geekcookbook) • [Website](https://hexf.me/api/social/website/geekcookbook) • [Github](https://hexf.me/api/social/github/geekcookbook)
<!--
The links above are just redirect links incase anything ever changes, and it has analytics too
-->

View File

@@ -10,7 +10,7 @@
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/)
2. Geek-Fu required : 🐤 (_easy - copy and paste_)
## Preparation ## Preparation

View File

@@ -0,0 +1,62 @@
# Helm
[Helm](https://github.com/helm/helm) is a tool for managing Kubernetes "charts" (_think of it as an uber-polished collection of recipes_). Using one simple command, and by tweaking one simple config file (values.yaml), you can launch a complex stack. There are many publicly available helm charts for popular packages like [elasticsearch](https://github.com/helm/charts/tree/master/stable/elasticsearch), [ghost](https://github.com/helm/charts/tree/master/stable/ghost), [grafana](https://github.com/helm/charts/tree/master/stable/grafana), [mediawiki](https://github.com/helm/charts/tree/master/stable/mediawiki), etc.
![Kubernetes Snapshots](/images/kubernetes-helm.png)
!!! note
Given enough interest, I may provide a helm-compatible version of the pre-mix repository for [supporters](/support/). [Hit me up](/whoami/#contact-me) if you're interested!
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/)
2. Geek-Fu required : 🐤 (_easy - copy and paste_)
## Preparation
### Install Helm
This section is from the Helm README:
Binary downloads of the Helm client can be found on [the Releases page](https://github.com/helm/helm/releases/latest).
Unpack the `helm` binary and add it to your PATH and you are good to go!
If you want to use a package manager:
- [Homebrew](https://brew.sh/) users can use `brew install kubernetes-helm`.
- [Chocolatey](https://chocolatey.org/) users can use `choco install kubernetes-helm`.
- [Scoop](https://scoop.sh/) users can use `scoop install helm`.
- [GoFish](https://gofi.sh/) users can use `gofish install helm`.
To rapidly get Helm up and running, start with the [Quick Start Guide](https://docs.helm.sh/using_helm/#quickstart-guide).
See the [installation guide](https://docs.helm.sh/using_helm/#installing-helm) for more options,
including installing pre-releases.
## Serving
### Initialise Helm
After installing Helm, initialise it by running ```helm init```. This will install "tiller" pod into your cluster, which works with the locally installed helm binaries to launch/update/delete Kubernetes elements based on helm charts.
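To confirm that both the local client and the in-cluster tiller pod are happy (this assumes Helm v2, which is what `helm init` implies), check versions and look for the tiller pod:
```
# Both a Client and a Server version should be reported once tiller is ready
helm version
# Tiller runs as a pod in the kube-system namespace
kubectl -n kube-system get pods | grep tiller
```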
That's it - not very exciting I know, but we'll need helm for the next and final step in building our Kubernetes cluster - deploying the [Traefik ingress controller (via helm)](/kubernetes/traefik/)!
## Move on..
Still with me? Good. Move on to understanding Helm charts...
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* Helm (this page) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## Chef's Notes
1. Of course, you can have lots of fun deploying all sorts of things via Helm. Check out https://github.com/helm/charts for some examples.

View File

@@ -14,7 +14,7 @@ This recipe details a simple design to permit the exposure of as many ports as y
1. [Kubernetes cluster](/kubernetes/cluster/)
2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_)
3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_)
## Preparation
@@ -331,4 +331,4 @@ Still with me? Good. Move on to setting up an ingress SSL terminating proxy with
## Chef's Notes
1. This is the MVP of the load balancer solution. Any suggestions for improvements are welcome 😉


@@ -0,0 +1,334 @@
# Load Balancer
One of the issues I encountered early on in migrating my Docker Swarm workloads to Kubernetes on GKE, was how to reliably permit inbound traffic into the cluster.
There were several complications with the "traditional" mechanisms of providing a load-balanced ingress, not the least of which was cost. I also found that even if I paid my cloud provider (_Google_) for a load-balancer Kubernetes service, this service required a unique IP per exposed port, which was incompatible with my mining pool empire (_mining pools need to expose multiple ports on the same DNS name_).
See further examination of the problem and possible solutions in the [Kubernetes design](/kubernetes/design/#the-challenges-of-external-access) page.
This recipe details a simple design to permit the exposure of as many ports as you like, on a single public IP, to a cluster of Kubernetes nodes running as many pods/containers as you need, with services exposed via NodePort.
![Kubernetes Design](/images/kubernetes-cluster-design.png)
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/)
2. VM _outside_ of Kubernetes cluster, with a fixed IP address. Perhaps, on a [$5/month Digital Ocean Droplet](https://www.digitalocean.com/?refcode=e33b78ad621b).. (_yes, another referral link. Mooar 🍷 for me!_)
3. Geek-Fu required : 🐧🐧🐧 (_complex - inline adjustments required_)
## Preparation
### Summary
### Create LetsEncrypt certificate
!!! warning
Safety first, folks. You wouldn't run a webhook exposed to the big bad ol' internet without first securing it with a valid SSL certificate, would you? Of course not, I didn't think so!
Use whatever method you prefer to generate (and later, renew) your LetsEncrypt cert. The example below uses the CertBot docker image for CloudFlare DNS validation, since that's what I've used elsewhere.
We **could** run our webhook as a simple HTTP listener, but really, in a world where LetsEncrypt can assign you a wildcard certificate in under 30 seconds, that's unforgivable. Use the following **general** example to create a LetsEncrypt wildcard cert for your host:
In my case, since I use CloudFlare, I create /etc/webhook/letsencrypt/cloudflare.ini:
```
dns_cloudflare_email=davidy@funkypenguin.co.nz
dns_cloudflare_api_key=supersekritnevergonnatellyou
```
I request my cert by running:
```
cd /etc/webhook/
docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare --preferred-challenges dns certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini -d '*.funkypenguin.co.nz'
```
!!! question
Why use a wildcard cert? So my enemies can't examine my certs to enumerate my various services and discover my weaknesses, of course!
I add the following as a cron command to renew my certs every day:
```
cd /etc/webhook && docker run -ti --rm -v "$(pwd)"/letsencrypt:/etc/letsencrypt certbot/dns-cloudflare renew --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini
```
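If you'd like a quick way to check the resulting certificate, something like the following (a sketch, assuming certbot's standard directory layout) will print its subject and expiry dates:
```
openssl x509 -noout -subject -dates \
  -in /etc/webhook/letsencrypt/live/<your domain>/fullchain.pem
```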
Once you've confirmed you've got a valid LetsEncrypt certificate stored in ```/etc/webhook/letsencrypt/live/<your domain>/fullchain.pem```, proceed to the next step..
### Install webhook
We're going to use https://github.com/adnanh/webhook to run our webhook. On some distributions (_❤ ya, Debian!_), webhook and its associated systemd config can be installed by running ```apt-get install webhook```.
### Create webhook config
We'll create a single webhook, by creating ```/etc/webhook/hooks.json``` as follows. Choose a nice secure random string for your MY_TOKEN value!
```
mkdir /etc/webhook
export MY_TOKEN=ilovecheese
cat << EOF > /etc/webhook/hooks.json
[
{
"id": "update-haproxy",
"execute-command": "/etc/webhook/update-haproxy.sh",
"command-working-directory": "/etc/webhook",
"pass-arguments-to-command":
[
{
"source": "payload",
"name": "name"
},
{
"source": "payload",
"name": "frontend-port"
},
{
"source": "payload",
"name": "backend-port"
},
{
"source": "payload",
"name": "dst-ip"
},
{
"source": "payload",
"name": "action"
}
],
"trigger-rule":
{
"match":
{
"type": "value",
"value": "$MY_TOKEN",
"parameter":
{
"source": "header",
"name": "X-Funkypenguin-Token"
}
}
}
}
]
EOF
```
!!! note
Note that to avoid any bozo from calling our webhook, we're matching on a token header in the request, called ```X-Funkypenguin-Token```. Webhook will **ignore** any request which doesn't include a matching token in the request header.
### Update systemd for webhook
!!! note
This section is particular to Debian Stretch and its webhook package. If you're using another OS for your VM, just ensure that you can start webhook with a config similar to the one illustrated below.
Since we want to force webhook to run in secure mode (_no point having a token if it can be extracted from a simple packet capture!_) I ran ```systemctl edit webhook```, and pasted in the following:
```
[Service]
# Override the default (non-secure) behaviour of webhook by passing our certificate details and custom hooks.json location
ExecStart=
ExecStart=/usr/bin/webhook -hooks /etc/webhook/hooks.json -verbose -secure -cert /etc/webhook/letsencrypt/live/funkypenguin.co.nz/fullchain.pem -key /etc/webhook/letsencrypt/live/funkypenguin.co.nz/privkey.pem
```
Then I restarted webhook by running ```systemctl enable webhook && systemctl restart webhook```. I watched the subsequent logs by running ```journalctl -u webhook -f```
### Create /etc/webhook/update-haproxy.sh
When successfully authenticated with our top-secret token, our webhook will execute a local script, defined as follows (_yes, you should create this file_):
```
#!/bin/bash
NAME=$1
FRONTEND_PORT=$2
BACKEND_PORT=$3
DST_IP=$4
ACTION=$5
# Bail if we haven't received our expected parameters
if [[ "$#" -ne 5 ]]
then
echo "illegal number of parameters"
exit 2;
fi
# Either add or remove a service based on $ACTION
case $ACTION in
add)
# Create the portion of haproxy config
cat << EOF > /etc/webhook/haproxy/$FRONTEND_PORT.inc
### >> Used to run $NAME:${FRONTEND_PORT}
frontend ${FRONTEND_PORT}_frontend
bind *:$FRONTEND_PORT
mode tcp
default_backend ${FRONTEND_PORT}_backend
backend ${FRONTEND_PORT}_backend
mode tcp
balance roundrobin
stick-table type ip size 200k expire 30m
stick on src
server s1 $DST_IP:$BACKEND_PORT
### << Used to run $NAME:$FRONTEND_PORT
EOF
;;
delete)
rm /etc/webhook/haproxy/$FRONTEND_PORT.inc
;;
*)
echo "Invalid action $ACTION"
exit 2
esac
# Concatenate all the haproxy configs into a single file
cat /etc/webhook/haproxy/global /etc/webhook/haproxy/*.inc > /etc/webhook/haproxy/pre_validate.cfg
# Validate the generated config
haproxy -f /etc/webhook/haproxy/pre_validate.cfg -c
# If validation was successful, only _then_ copy it over to /etc/haproxy/haproxy.cfg, and reload
if [[ $? -gt 0 ]]
then
echo "HAProxy validation failed, not continuing"
exit 2
else
# Remember what the original file looked like
m1=$(md5sum "/etc/haproxy/haproxy.cfg")
# Overwrite the original file
cp /etc/webhook/haproxy/pre_validate.cfg /etc/haproxy/haproxy.cfg
# Get MD5 of new file
m2=$(md5sum "/etc/haproxy/haproxy.cfg")
# Only if file has changed, then we need to reload haproxy
if [ "$m1" != "$m2" ] ; then
echo "HAProxy config has changed, reloading"
systemctl reload haproxy
fi
fi
```
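Since webhook executes this script directly, and the script writes its per-port includes into ```/etc/webhook/haproxy/```, you'll probably also want something like the following (my assumption - adjust paths if you've deviated from the recipe):
```
# Make the script executable, and pre-create the directory it writes into
chmod +x /etc/webhook/update-haproxy.sh
mkdir -p /etc/webhook/haproxy
```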
### Create /etc/webhook/haproxy/global
Create ```/etc/webhook/haproxy/global``` and populate with something like the following. This will be the non-dynamically generated part of our HAProxy config:
```
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode tcp
option tcplog
option dontlognull
timeout connect 5000
timeout client 5000000
timeout server 5000000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
```
## Serving
### Take the bait!
Whew! We now have all the components of our automated load-balancing solution in place. Browse to your VM's FQDN at https://whatever.it.is:9000/hooks/update-haproxy, and you should see the text "_Hook rules were not satisfied_", with a valid SSL certificate (_You didn't send a token_).
If you don't see the above, then check the following:
1. Does the webhook verbose log (```journalctl -u webhook -f```) complain about invalid arguments or missing files?
2. Is port 9000 open to the internet on your VM?
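If you'd rather not wait for a pod to phone home, you can poke the webhook by hand. The following is just an illustration (the destination IP and ports are made up - substitute your own token and values); if everything is wired up correctly, it should produce ```/etc/webhook/haproxy/8080.inc``` and trigger an HAProxy reload:
```
curl -X POST https://whatever.it.is:9000/hooks/update-haproxy \
  -H "X-Funkypenguin-Token: ilovecheese" \
  -H "Content-Type: application/json" \
  -d '{"name":"test","frontend-port":"8080","backend-port":"30808","dst-ip":"192.0.2.10","action":"add"}'
```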
### Apply to pods
You'll see me use this design in any Kubernetes-based recipe which requires container-specific ports, like UniFi. Here's an excerpt of the .yml which defines the UniFi controller:
```
<snip>
spec:
containers:
- image: linuxserver/unifi
name: controller
volumeMounts:
- name: controller-volumeclaim
mountPath: /config
- image: funkypenguin/poor-mans-k8s-lb
imagePullPolicy: Always
name: 8080-phone-home
env:
- name: REPEAT_INTERVAL
value: "600"
- name: FRONTEND_PORT
value: "8080"
- name: BACKEND_PORT
value: "30808"
- name: NAME
value: "unifi-adoption"
- name: WEBHOOK
value: "https://my-secret.url.wouldnt.ya.like.to.know:9000/hooks/update-haproxy"
- name: WEBHOOK_TOKEN
valueFrom:
secretKeyRef:
name: unifi-credentials
key: webhook_token.secret
<snip>
```
The takeaways here are:
1. We add the funkypenguin/poor-mans-k8s-lb container to any pod which has special port requirements, forcing the container to run on the same node as the other containers in the pod (_in this case, the UniFi controller_)
2. We use a Kubernetes secret for the webhook token, so that our .yml can be shared without exposing sensitive data
Here's what the webhook logs look like when the above is added to the UniFi deployment:
```
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Started POST /hooks/update-haproxy
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy got matched
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 update-haproxy hook triggered successfully
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 Completed 200 OK in 2.123921ms
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 executing /etc/webhook/update-haproxy.sh (/etc/webhook/update-haproxy.sh) with arguments ["/etc/webhook/update-haproxy.sh" "unifi-adoption" "8080" "30808" "35.244.91.178" "add"] and environment [] using /etc/webhook as cwd
Feb 06 23:04:28 haproxy2 webhook[1433]: [webhook] 2019/02/06 23:04:28 command output: Configuration file is valid
<HAProxy restarts>
```
## Move on..
Still with me? Good. Move on to setting up an ingress SSL terminating proxy with Traefik..
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* Load Balancer (this page) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## Chef's Notes
1. This is the MVP of the load balancer solution. Any suggestions for improvements are welcome 😉


@@ -15,7 +15,7 @@ This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/
## Ingredients
1. [Kubernetes cluster](https://geek-cookbook.funkypenguin.co.nz/kubernetes/cluster/) with either AWS or GKE (currently, but apparently other providers are [easy to implement](https://github.com/miracle2k/k8s-snapshots/blob/master/k8s_snapshots/backends/abstract.py))
2. Geek-Fu required : 🐒🐒 (_medium - minor adjustments may be required_)
## Preparation


@@ -0,0 +1,180 @@
# Snapshots
Before we get carried away creating pods, services, deployments etc, let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safe-guarding your data from accidental loss, as well as malicious impact.
Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.
Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution...
It bears repeating though - don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/miracle2k/k8s-snapshots)), running _inside_ your cluster, to trigger automated snapshots of your persistent volumes, using your cloud provider's APIs.
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/) with either AWS or GKE (currently, but apparently other providers are [easy to implement](https://github.com/miracle2k/k8s-snapshots/blob/master/k8s_snapshots/backends/abstract.py))
2. Geek-Fu required : 🐒🐒 (_medium - minor adjustments may be required_)
## Preparation
### Create RoleBinding (GKE only)
If you're running GKE, run the following to create a RoleBinding, allowing your user to grant rights-it-doesn't-currently-have to the service account responsible for creating the snapshots:
```
kubectl create clusterrolebinding your-user-cluster-admin-binding \
  --clusterrole=cluster-admin --user=<your user@yourdomain>
```
!!! question
Why do we have to do this? Check [this blog post](https://www.funkypenguin.co.nz/workaround-blocked-attempt-to-grant-extra-privileges-on-gke/) for details
### Apply RBAC
If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s_snapshots to see your PVs and friends:
```
kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml
```
## Serving
### Deploy the pod
Ready? Run the following to create a deployment into the kube-system namespace:
```
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: k8s-snapshots
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
app: k8s-snapshots
spec:
containers:
- name: k8s-snapshots
image: elsdoerfer/k8s-snapshots:v2.0
EOF
```
Confirm your pod is running and happy by running ```kubectl get pods -n kube-system```, and ```kubectl -n kube-system logs k8s-snapshots<tab-to-auto-complete>```
### Pick PVs to snapshot
k8s-snapshots relies on annotations to tell it how frequently to snapshot your PVs. A PV requires the ```backup.kubernetes.io/deltas``` annotation in order to be snapshotted.
From the k8s-snapshots README:
```
The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by it's and the parent generation's delta.
For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good.
The most recent backup is always kept.
The first delta is the backup interval.
```
To add the annotation to an existing PV, run something like this:
```
kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \
'{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
```
To add the annotation to a _new_ PV, add the following annotation to your **PVC**:
```
backup.kubernetes.io/deltas: PT1H P2D P30D P180D
```
Here's an example of the PVC for the UniFi recipe, which includes 7 daily snapshots of the PV:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: controller-volumeclaim
namespace: unifi
annotations:
backup.kubernetes.io/deltas: P1D P7D
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
```
And here's what my snapshot list looks like after a few days:
![Kubernetes Snapshots](/images/kubernetes-snapshots.png)
### Snapshot a non-Kubernetes volume (optional)
If you're running traditional compute instances with your cloud provider (I do this for my poor man's load balancer), you might want to backup _these_ volumes as well.
To do so, first create a custom resource, ```SnapshotRule```:
```
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: snapshotrules.k8s-snapshots.elsdoerfer.com
spec:
group: k8s-snapshots.elsdoerfer.com
version: v1
scope: Namespaced
names:
plural: snapshotrules
singular: snapshotrule
kind: SnapshotRule
shortNames:
- sr
EOF
```
Then identify the volume ID of your volume, and create an appropriate ```SnapshotRule```:
```
cat <<EOF | kubectl apply -f -
apiVersion: "k8s-snapshots.elsdoerfer.com/v1"
kind: SnapshotRule
metadata:
name: haproxy-badass-loadbalancer
spec:
deltas: P1D P7D
backend: google
disk:
name: haproxy2
zone: australia-southeast1-a
EOF
```
!!! note
Example syntaxes for the SnapshotRule for different providers can be found at https://github.com/miracle2k/k8s-snapshots/tree/master/examples
## Move on..
Still with me? Good. Move on to understanding Helm charts...
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* Snapshots (this page) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm
## Chef's Notes
1. I've submitted [2 PRs](https://github.com/miracle2k/k8s-snapshots/pulls/funkypenguin) to the k8s-snapshots repo. The first [updates the README for GKE RBAC requirements](https://github.com/miracle2k/k8s-snapshots/pull/71), and the second [fixes a minor typo](https://github.com/miracle2k/k8s-snapshots/pull/74).


@@ -0,0 +1,67 @@
# Why Kubernetes?
My first introduction to Kubernetes was a children's story:
<iframe width="560" height="315" src="https://www.youtube.com/embed/4ht22ReBjno" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Wait, what?
Why would you want to use Kubernetes for your self-hosted recipes over simple Docker Swarm? Here's my personal take..
I use Docker swarm both at home (_on a single-node swarm_), and on a trio of Ubuntu 16.04 VPSs in a shared lab OpenStack environment.
In both cases above, I'm responsible for maintaining the infrastructure supporting Docker - either the physical host, or the VPS operating systems.
I started experimenting with Kubernetes as a plan to improve the reliability of my cryptocurrency mining pools (_the contended lab VPSs negatively impacted the likelihood of finding a block_), and as a long-term replacement for my aging home server.
What I enjoy about building recipes and self-hosting is **not** the operating system maintenance, it's the tools and applications that I can quickly launch in my swarms. If I could **only** play with the applications, and not bother with the maintenance, I totally would.
Kubernetes (_on a cloud provider, mind you!_) does this for me. I feed Kubernetes a series of YAML files, and it takes care of all the rest, including version upgrades, node failures/replacements, disk attach/detachments, etc.
## Uggh, it's so complicated!
Yes, but that's a necessary sacrifice for the maturity, power and flexibility it offers. Like docker-compose syntax, Kubernetes uses YAML to define its various, interworking components.
Let's talk some definitions. Kubernetes.io provides a [glossary](https://kubernetes.io/docs/reference/glossary/?fundamental=true). My definitions are below:
* **Node** : A compute instance which runs docker containers, managed by a cluster master.
* **Cluster** : One or more "worker nodes" which run containers. Very similar to a Docker Swarm node. In most cloud provider deployments, the [master node for your cluster is provided free of charge](https://www.sdxcentral.com/articles/news/google-eliminates-gke-management-fees-kubernetes-clusters/2017/11/), but you don't get to access it.
* **Pod** : A collection of one or more containers. If a pod runs multiple containers, these containers always run on the same node.
* **Deployment** : A definition of a desired state. I.e., "I want a pod with containers A and B running". The Kubernetes master then ensures that any changes necessary to maintain the state are taken. (_I.e., if a pod crashes, but is supposed to be running, a new pod will be started_)
* **Service** : Unlike Docker Swarm, service discovery is not _built in_ to Kubernetes. For your pods to discover each other (say, to have "webserver" talk to "database"), you create a service for each pod, and refer to these services when you want your containers (_in pods_) to talk to each other. Complicated, yes, but the abstraction allows you to do powerful things, like auto-scale-up a bunch of database "pods" behind a service called "database", or perform a rolling container image upgrade with zero impact.
* **External access** : Services not only allow pods to discover each other, but they're also the mechanism through which the outside world can talk to a container. At the simplest level, this is akin to exposing a container port on a docker host (_see the example service after this list_).
* **Ingress** : When mapping ports to applications is inadequate (think virtual web hosts), an ingress is a sort of "inbound router" which can receive requests on one port (i.e., HTTPS), and forward them to a variety of internal pods, based on things like VHOST, etc. For us, this is the functional equivalent of what Traefik does in Docker Swarm. In fact, we use a Traefik Ingress in Kubernetes to accomplish the same.
* **Persistent Volume** : A virtual disk which is attached to a pod, storing persistent data. Meets the requirement for shared storage from Docker Swarm. I.e., if a persistent volume (PV) is bound to a pod, and the pod dies and is recreated, or gets upgraded to a new image, the PV (and its data) is re-bound to the new container. PVs can be "claimed" in a YAML definition, so that your Kubernetes provider will auto-create a PV when you launch your pod. PVs can be snapshotted.
* **Namespace** : An abstraction to separate a collection of pods, services, ingresses, etc. A "virtual cluster within a cluster". Can be used for security, or simplicity. For example, since we don't have individual docker stacks anymore, if you commonly name your database container "db", and you want to deploy two applications which both use a database container, how will you name your services? Use namespaces to keep each application ("nextcloud" vs "kanboard") separate. Namespaces also allow you to allocate resource **limits** to the aggregate of containers in a namespace, so you could, for example, limit the "nextcloud" namespace to 2.3 CPUs and 1200MB RAM.
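To make the "external access" definition above a little more concrete, here's an illustrative NodePort service (hypothetical names and ports, in the spirit of the simple MQTT NodePort service mentioned below), which exposes port 1883 of any pods labelled ```app: mqtt``` on port 31883 of every node:
```
apiVersion: v1
kind: Service
metadata:
  name: mqtt
spec:
  type: NodePort
  selector:
    app: mqtt           # route traffic to pods carrying this label
  ports:
    - port: 1883        # the port other pods in the cluster connect to
      targetPort: 1883  # the port the container listens on
      nodePort: 31883   # exposed on every node (must fall within 30000-32767)
```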
## Mm.. maaaaybe, how do I start?
If you're like me, and you learn by doing, either play with the examples at https://labs.play-with-k8s.com/, or jump right in by setting up a Google Cloud trial (_you get $300 credit for 12 months_), or a small cluster on [Digital Ocean](/kubernetes/digitalocean/).
If you're the learn-by-watching type, just search for "Kubernetes introduction video". There's a **lot** of great content available.
## I'm ready, gimme some recipes!
As of Jan 2019, our first (_and only!_) Kubernetes recipe is a WIP for the Mosquitto [MQTT](/recipes/mqtt/) broker. It's a good, simple starter if you're into home automation (_shoutout to [Home Assistant](/recipes/homeassistant/)!_), since it only requires a single container, and a simple NodePort service.
I'd love your [feedback](/support/) on the Kubernetes recipes, as well as suggestions for what to add next. The current rough plan is to replicate the Chef's Favorites recipes (_see the left-hand panel_) into Kubernetes first.
## Move on..
Still with me? Good. Move on to reviewing the design elements
* Start (this page) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* [Traefik](/kubernetes/traefik/) - Traefik Ingress via Helm


@@ -188,7 +188,7 @@ helm upgrade --values values.yml traefik stable/traefik --recreate-pods
## Review
We're doneburgers! 🍔 We now have all the pieces to safely deploy recipes into our Kubernetes cluster, knowing:
1. Our HTTPS traffic will be secured with LetsEncrypt (thanks Traefik!)
2. Our non-HTTPS ports (like UniFi adoption) will be load-balanced using a free-to-scale [external load balancer](https://geek-cookbook.funkypenguin.co.nz/kubernetes/loadbalancer/)


@@ -0,0 +1,214 @@
# Traefik
This recipe utilises the [traefik helm chart](https://github.com/helm/charts/tree/master/stable/traefik) to provide LetsEncrypt-secured HTTPS access to multiple containers within your cluster.
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/)
2. [Helm](/kubernetes/helm/) installed and initialised in your cluster
## Preparation
### Clone helm charts
Clone the helm charts, by running:
```
git clone https://github.com/helm/charts
```
Change to stable/traefik:
```
cd charts/stable/traefik
```
### Edit values.yaml
The beauty of the helm approach is that all the complexity of the Kubernetes elements' YAML files is hidden from you (created using templates), and all your changes go into values.yaml.
These are my values, you'll need to adjust for your own situation:
```
imageTag: alpine
serviceType: NodePort
# yes, we're not listening on 80 or 443 because we don't want to pay for a loadbalancer IP to do this. I use poor-mans-k8s-lb instead
service:
nodePorts:
http: 30080
https: 30443
cpuRequest: 1m
memoryRequest: 100Mi
cpuLimit: 1000m
memoryLimit: 500Mi
ssl:
enabled: true
enforced: true
debug:
enabled: false
rbac:
enabled: true
dashboard:
enabled: true
domain: traefik.funkypenguin.co.nz
kubernetes:
# set these to all the namespaces you intend to use. I standardize on one-per-stack. You can always add more later
namespaces:
- kube-system
- unifi
- kanboard
- nextcloud
- huginn
- miniflux
accessLogs:
enabled: true
acme:
persistence:
enabled: true
# Add the necessary annotation to backup ACME store with k8s-snapshots
annotations: { "backup.kubernetes.io/deltas: P1D P7D" }
staging: false
enabled: true
logging: true
email: "<my letsencrypt email>"
challengeType: "dns-01"
dnsProvider:
name: cloudflare
cloudflare:
CLOUDFLARE_EMAIL: "<my cloudflare email>"
CLOUDFLARE_API_KEY: "<my cloudflare API key>"
domains:
enabled: true
domainsList:
- main: "*.funkypenguin.co.nz" # name of the wildcard domain name for the certificate
- sans:
- "funkypenguin.co.nz"
metrics:
prometheus:
enabled: true
```
!!! note
The helm chart doesn't enable the Traefik dashboard by default. I intend to add an oauth_proxy pod to secure this, in a future recipe update.
### Prepare phone-home pod
[Remember](/kubernetes/loadbalancer/) how our load balancer design ties a phone-home container to another container using a pod, so that the phone-home container can tell our external load balancer (_using a webhook_) where to send our traffic?
Since we deployed Traefik using helm, we need to take a slightly different approach, so we'll create a pod with an affinity which ensures it runs on the same host which runs the Traefik container (_more precisely, containers with the label app=traefik_).
Create phone-home.yaml as follows:
```
apiVersion: v1
kind: Pod
metadata:
name: phonehome-traefik
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- traefik
topologyKey: failure-domain.beta.kubernetes.io/zone
containers:
- image: funkypenguin/poor-mans-k8s-lb
imagePullPolicy: Always
name: phonehome-traefik
env:
- name: REPEAT_INTERVAL
value: "600"
- name: FRONTEND_PORT
value: "443"
- name: BACKEND_PORT
value: "30443"
- name: NAME
value: "traefik"
- name: WEBHOOK
value: "https://<your loadbalancer hostname>:9000/hooks/update-haproxy"
- name: WEBHOOK_TOKEN
valueFrom:
secretKeyRef:
name: traefik-credentials
key: webhook_token.secret
```
Create your webhook token secret by running:
```
echo -n "imtoosecretformyshorts" > webhook_token.secret
kubectl create secret generic traefik-credentials --from-file=webhook_token.secret
```
!!! warning
Yes, the "-n" in the echo statement is needed. [Read here for why](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/).
## Serving
### Install the chart
To install the chart, using the values we customised above, simply run ```helm install stable/traefik --name traefik --namespace kube-system --values values.yaml```
That's it, traefik is running.
You can confirm this by running ```kubectl get pods```, and even watch the traefik logs, by running ```kubectl logs -f traefik<tab-to-autocomplete>```
### Deploy the phone-home pod
We still can't access Traefik yet, since it's listening on port 30443 on whichever node it happens to be running on. We'll launch our phone-home pod, to tell our [load balancer](/kubernetes/loadbalancer/) where to send incoming traffic on port 443.
Optionally, on your loadbalancer VM, run ```journalctl -u webhook -f``` to watch for the container calling the webhook.
Run ```kubectl create -f phone-home.yaml``` to create the pod.
Run ```kubectl get pods -o wide``` to confirm that both the phone-home pod and the traefik pod are on the same node:
```
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
phonehome-traefik 1/1 Running 0 20h 10.56.2.55 gke-penguins-are-sexy-8b85ef4d-2c9g
traefik-69db67f64c-5666c 1/1 Running 0 10d 10.56.2.30 gke-penguins-are-sexy-8b85ef4d-2c9g
```
Now browse to https://<your load balancer>, and you should get a valid SSL cert, along with a 404 error (_you haven't deployed any other recipes yet_)
### Making changes
If you change a value in values.yaml, and want to update the traefik pod, run:
```
helm upgrade --values values.yml traefik stable/traefik --recreate-pods
```
## Review
We're doneburgers! 🍔 We now have all the pieces to safely deploy recipes into our Kubernetes cluster, knowing:
1. Our HTTPS traffic will be secured with LetsEncrypt (thanks Traefik!)
2. Our non-HTTPS ports (like UniFi adoption) will be load-balanced using an free-to-scale [external load balancer](/kubernetes/loadbalancer/)
3. Our persistent data will be [automatically backed up](/kubernetes/snapshots/)
Here's a recap:
* [Start](/kubernetes/start/) - Why Kubernetes?
* [Design](/kubernetes/design/) - How does it fit together?
* [Cluster](/kubernetes/cluster/) - Setup a basic cluster
* [Load Balancer](/kubernetes/loadbalancer/) - Setup inbound access
* [Snapshots](/kubernetes/snapshots/) - Automatically backup your persistent data
* [Helm](/kubernetes/helm/) - Uber-recipes from fellow geeks
* Traefik (this page) - Traefik Ingress via Helm
## Where to next?
I'll be adding more Kubernetes versions of existing recipes soon. Check out the [MQTT](/recipes/mqtt/) recipe for a start!
## Chef's Notes
1. It's kinda lame to be able to bring up Traefik but not to use it. I'll be adding the oauth_proxy element shortly, which will make this last step a little more conclusive and exciting!


@@ -0,0 +1,126 @@
hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖
# AutoPirate
Once the cutting edge of the "internet" (_pre-World-Wide-Web and Mosaic days_), Usenet is now a murky, geeky alternative to torrents for file-sharing. However, it's **cool** geeky, especially if you're into having a fully automated media platform.
A good starter for the usenet scene is https://www.reddit.com/r/usenet/. Because it's so damn complicated, a host of automated tools exist to automate the process of finding, downloading, and managing content. The tools included in this recipe are as follows:
![Autopirate Screenshot](../images/autopirate.png)
This recipe presents a method to combine these tools into a single swarm deployment, and make them available securely.
## Menu
Tools included in the AutoPirate stack are:
* **[SABnzbd](http://sabnzbd.org)** : downloads data from usenet servers based on .nzb definitions
* **[NZBGet](https://nzbget.net/)** : downloads data from usenet servers based on .nzb definitions, but written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources (_this is a popular alternative to SABnzbd_)
* **[RTorrent](https://github.com/rakshasa/rtorrent/wiki)** is a CLI-based torrent client, which when combined with **[ruTorrent](https://github.com/Novik/ruTorrent)** becomes a powerful and fully browser-managed torrent client. (_Yes, it's not Usenet, but Sonarr/Radarr will let fulfill your watchlist using either Usenet **or** torrents, so it's worth including_)
* **[NZBHydra](https://github.com/theotherp/nzbhydra)** : acts as a "meta-indexer", so that your downloading tools (_radarr, sonarr, etc_) only need to be set up for a single indexer. Also produces interesting stats on indexers, which helps when evaluating which indexers are performing well.
* **[NZBHydra2](https://github.com/theotherp/nzbhydra2)** : is a high-performance rewrite of the original NZBHydra, with extra features. While still in beta, NZBHydra2 will eventually supersede NZBHydra
* **[Sonarr](https://sonarr.tv)** : finds, downloads and manages TV shows
* **[Radarr](https://radarr.video)** : finds, downloads and manages movies
* **[Mylar](https://github.com/evilhero/mylar)** : finds, downloads and manages comic books
* **[Headphones](https://github.com/rembo10/headphones)** : finds, downloads and manages music
* **[Lazy Librarian](https://github.com/itsmegb/LazyLibrarian)** : finds, downloads and manages ebooks
* **[Ombi](https://github.com/tidusjar/Ombi)** : provides an interface to request additions to a [Plex](/recipes/plex/)/[Emby](/recipes/emby/) library using the above tools
* **[Jackett](https://github.com/Jackett/Jackett)** : Provides an local, caching, API-based interface to torrent trackers, simplifying the way your tools search for torrents.
Since this recipe is so long, and so many of the tools are optional to the final result (_i.e., if you're not interested in comics, you won't want Mylar_), I've described each individual tool on its own sub-recipe page (_below_), even though most of them are deployed very similarly.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. Access to NZB indexers and Usenet servers
4. DNS entries configured for each of the NZB tools in this recipe that you want to use
## Preparation
### Setup data locations
We'll need unique directories for each tool in the stack, bind-mounted into our containers, so create them upfront, in /var/data/autopirate:
```
mkdir /var/data/autopirate
cd /var/data/autopirate
mkdir -p {lazylibrarian,mylar,ombi,sonarr,radarr,headphones,plexpy,nzbhydra,sabnzbd,nzbget,rtorrent,jackett}
```
Create a directory for the storage of your downloaded media, i.e., something like:
```
mkdir /var/data/media
```
Create a user to "own" the above directories, and note the uid and gid of the created user. You'll need to specify the UID/GID in the environment variables passed to the container (in the example below, I used 4242 - twice the meaning of life).
### Secure public access
What you'll quickly notice about this recipe is that __every__ web interface is protected by an [OAuth proxy](/reference/oauth_proxy/).
Why? Because these tools are developed by a handful of volunteer developers who are focused on adding features, not necessarily implementing robust security. Most users wouldn't expose these tools directly to the internet, so the tools have rudimentary (if any) access control.
To mitigate the risk associated with public exposure of these tools (_you're on your smartphone and you want to add a movie to your watchlist, what do you do, hotshot?_), in order to gain access to each tool you'll first need to authenticate against your given OAuth provider.
This is tedious, but you only have to do it once. Each tool (Sonarr, Radarr, etc.) to be protected by an OAuth proxy requires its own unique configuration. I use GitHub to provide my OAuth, giving each tool a unique logo while I'm at it (make up your own random string for OAUTH2_PROXY_COOKIE_SECRET)
For each tool, create /var/data/autopirate/<tool>.env, and set the following:
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
PUID=4242
PGID=4242
```
Create at least /var/data/autopirate/authenticated-emails.txt, containing at least your own email address with your OAuth provider. If you wanted to grant access to a specific tool to other users, you'd need a unique authenticated-emails-<tool>.txt, which included both your own email address as well as any addresses to be granted tool-specific access.
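For illustration, the format is simply one authorised email address per line, something like:
```
me@example.com
my-trusted-minion@example.com
```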
### Setup components
#### Stack basics
**Start** with a swarm config file in docker-compose syntax, like this:
```
version: '3'
services:
```
And **end** with a stanza like this:
```
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.11.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
#### Assemble the tools..
Now work your way through the list of tools below, adding whichever tools you want to use, and finishing with the **end** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [End](/recipes/autopirate/end/) (launch the stack)


@@ -9,6 +9,6 @@ Confirm the container status by running "docker stack ps autopirate", and wait f
Log into each of your new tools at its respective HTTPS URL. You'll be prompted to authenticate against your OAuth provider, and upon success, redirected to the tool's UI.
## Chef's Notes 📓
1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)


@@ -0,0 +1,14 @@
!!! warning
This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
### Launch Autopirate stack
Launch the AutoPirate stack by running ```docker stack deploy autopirate -c <path-to-docker-compose.yml>```
Confirm the container status by running "docker stack ps autopirate", and wait for all containers to enter the "Running" state.
Log into each of your new tools at its respective HTTPS URL. You'll be prompted to authenticate against your OAuth provider, and upon success, redirected to the tool's UI.
## Chef's Notes 📓
1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)


@@ -5,7 +5,7 @@ hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and
# Headphones
[Headphones](https://github.com/rembo10/headphones) is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent, Deluge and Blackhole.
![Headphones Screenshot](../../images/headphones.png)
@@ -70,6 +70,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


@@ -0,0 +1,75 @@
hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Headphones
[Headphones](https://github.com/rembo10/headphones) is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent, Deluge and Blackhole.
![Headphones Screenshot](../../images/headphones.png)
## Inclusion into AutoPirate
To include Headphones in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
headphones:
image: linuxserver/headphones:latest
env_file : /var/data/config/autopirate/headphones.env
volumes:
- /var/data/autopirate/headphones:/config
- /var/data/media:/media
networks:
- internal
headphones_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/headphones.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:headphones.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://headphones:8181
-redirect-url=https://headphones.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* Headphones (this page)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


@@ -5,7 +5,7 @@
[Heimdall Application Dashboard](https://heimdall.site/) is a dashboard for all your web applications. It doesn't need to be limited to applications though, you can add links to anything you like.
Heimdall is an elegant solution to organise all your web applications. It's dedicated to this purpose, so you won't lose your links in a sea of bookmarks.
Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/nzbget/), [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/sabnzbd/), and friends.
@@ -76,7 +76,7 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!


@@ -0,0 +1,82 @@
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Heimdall
[Heimdall Application Dashboard](https://heimdall.site/) is a dashboard for all your web applications. It doesn't need to be limited to applications though, you can add links to anything you like.
Heimdall is an elegant solution to organise all your web applications. It's dedicated to this purpose, so you won't lose your links in a sea of bookmarks.
Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), and friends.
![Heimdall Screenshot](../../images/heimdall.jpg)
## Inclusion into AutoPirate
To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
heimdall:
image: linuxserver/heimdall:latest
env_file: /var/data/config/autopirate/heimdall.env
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/heimdall:/config
networks:
- internal
heimdall_proxy:
image: funkypenguin/oauth2_proxy:latest
env_file : /var/data/config/autopirate/heimdall.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:heimdall.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://heimdall:80
-redirect-url=https://heimdall.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* Heimdall (this page)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!


@@ -70,6 +70,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


@@ -0,0 +1,75 @@
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Jackett
[Jackett](https://github.com/Jackett/Jackett) works as a proxy server: it translates queries from apps (Sonarr, Radarr, Mylar, etc) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software.
This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping & translation logic - removing the burden from other apps.
![Jackett Screenshot](../../images/jackett.png)
## Inclusion into AutoPirate
To include Jackett in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
jackett:
image: linuxserver/jackett:latest
env_file : /var/data/config/autopirate/jackett.env
volumes:
- /var/data/autopirate/jackett:/config
networks:
- internal
jackett_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/jackett.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:jackett.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://jackett:9117
-redirect-url=https://jackett.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* Jackett (this page)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -82,7 +82,7 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](https://geek-cookbook.funkypenguin.co.nz/recipes/calibre-web) recipe.
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,88 @@
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# LazyLibrarian
[LazyLibrarian](https://github.com/DobyTang/LazyLibrarian) is a tool to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing, and (optionally) GoogleBooks as sources for author and book info. Features include:
* Find authors and add them to the database
* List all books of an author and mark ebooks or audiobooks as 'wanted'.
* When processing the downloaded books it will save a cover picture (if available) and save all metadata into metadata.opf next to the bookfile (calibre compatible format)
* AutoAdd feature for book management tools like Calibre which must have books in flattened directory structure, or use calibre to import your books into an existing calibre library
* LazyLibrarian can also be used to search for and download magazines, and monitor for new issues
![Lazy Librarian Screenshot](../../images/lazylibrarian.png)
## Inclusion into AutoPirate
To include LazyLibrarian in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
lazylibrarian:
image: linuxserver/lazylibrarian:latest
env_file : /var/data/config/autopirate/lazylibrarian.env
volumes:
- /var/data/autopirate/lazylibrarian:/config
- /var/data/media:/media
networks:
- internal
lazylibrarian_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/lazylibrarian.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:lazylibrarian.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://lazylibrarian:5299
-redirect-url=https://lazylibrarian.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
calibre-server:
image: regueiro/calibre-server
volumes:
- /var/data/media/Ebooks/calibre/:/opt/calibre/library
networks:
- internal
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](https://github.com/evilhero/mylar)
* Lazy Librarian (this page)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -5,7 +5,7 @@ hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and
# Lidarr
[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/headphones), but is written using the same(ish) codebase as [Radarr](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/radarr/) and [Sonarr](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/sonarr). It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/sabnzbd/), [NZBGet](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/nzbget/), Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_)
![Lidarr Screenshot](../../images/lidarr.png)
@@ -71,7 +71,7 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!

View File

@@ -0,0 +1,77 @@
hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Lidarr
[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones](/recipes/autopirate/headphones), but is written using the same(ish) codebase as [Radarr](/recipes/autopirate/radarr/) and [Sonarr](/recipes/autopirate/sonarr). It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd](/recipes/autopirate/sabnzbd/), [NZBGet](/recipes/autopirate/nzbget/), Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_)
![Lidarr Screenshot](../../images/lidarr.png)
## Inclusion into AutoPirate
To include Lidarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
lidarr:
image: linuxserver/lidarr:latest
env_file : /var/data/config/autopirate/lidarr.env
volumes:
- /var/data/autopirate/lidarr:/config
- /var/data/media:/media
networks:
- internal
lidarr_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/lidarr.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:lidarr.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://lidarr:8181
-redirect-url=https://lidarr.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](https://github.com/evilhero/mylar)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* Lidarr (this page)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!

View File

@@ -68,7 +68,7 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,77 @@
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Mylar
[Mylar](https://github.com/evilhero/mylar) is a tool for downloading and managing digital comic books.
![Mylar Screenshot](../../images/mylar.jpg)
## Inclusion into AutoPirate
To include Mylar in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
mylar:
image: linuxserver/mylar:latest
env_file : /var/data/config/autopirate/mylar.env
volumes:
- /var/data/autopirate/mylar:/config
- /var/data/media:/media
networks:
- internal
mylar_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/mylar.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:mylar.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://mylar:8090
-redirect-url=https://mylar.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* Mylar (this page)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and api key, you will need to set the `host_return` parameter to the fully qualified Mylar address (e.g. `http://mylar:8090`).
This will provide the link to the downloader necessary to initiate the download. This parameter is not presented in the user interface so the config file (`$MYLAR_HOME/config.ini`) will need to be manually updated. The parameter can be found under the [Interface] section of the file. ([Details](https://github.com/evilhero/mylar/issues/2242))
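As a sketch (_using the service name and port from the compose example above_), the relevant section of `config.ini` ends up looking something like this:

```
[Interface]
host_return = http://mylar:8090
```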

View File

@@ -75,6 +75,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,80 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBGet
## Introduction
NZBGet performs the same function as [SABnzbd](/recipes/autopirate/sabnzbd.md) (_downloading content from Usenet servers_), but it's lightweight and fast(er), written in C++ (_as opposed to Python_).
![NZBGet Screenshot](../../images/nzbget.jpg)
## Inclusion into AutoPirate
To include NZBGet in your [AutoPirate](/recipes/autopirate/) stack
(_The only reason you **wouldn't** use NZBGet, would be if you were using [SABnzbd](/recipes/autopirate/sabnzbd/) instead_), include the following in your autopirate.yml stack definition file:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
nzbget:
image: linuxserver/nzbget
env_file : /var/data/config/autopirate/nzbget.env
volumes:
- /var/data/autopirate/nzbget:/config
- /var/data/media:/data
networks:
- internal
nzbget_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/nzbget.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nzbget.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nzbget:6789
-redirect-url=https://nzbget.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! note
NZBGet uses a 401 header to prompt for authentication. When you use OAuth2_proxy, this seems to break. Since we trust OAuth to authenticate us, we can just disable NZBGet's own authentication, by changing ControlPassword to null in nzbget.conf (i.e. ```ControlPassword=```)
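In other words, once nzbget.conf exists (_it's created on first launch, under the /config volume mapped above_), you'd edit it so that the relevant line reads:

```
# Excerpt from nzbget.conf - an empty value disables NZBGet's own authentication
ControlPassword=
```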
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* NZBGet (this page)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -74,6 +74,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,79 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBHydra
[NZBHydra](https://github.com/theotherp/nzbhydra) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as indexer source for tools like Sonarr or CouchPotato. Features include:
* Search by IMDB, TMDB, TVDB, TVRage and TVMaze ID (including season and episode) and filter by age and size. If an ID is not supported by an indexer it is attempted to be converted (e.g. TMDB to IMDB)
* Query generation, meaning when you search for a movie using e.g. an IMDB ID a query will be generated for raw indexers. Searching for a series season 1 episode 2 will also generate queries for raw indexers, like s01e02 and 1x02
* Grouping of results with the same title and of duplicate results, accounting for result posting time, size, group and poster. By default only one of the duplicates is shown. You can provide an indexer score to influence which one that might be
* Compatible with Sonarr, CP, NZB 360, SickBeard, Mylar and Lazy Librarian (and others)
* Statistics on indexers (average response time, share of results, access errors), searches and downloads per time of day and day of week, NZB download history and search history (both via internal GUI and API)
![NZBHydra Screenshot](../../images/nzbhydra.png)
## Inclusion into AutoPirate
To include NZBHydra in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
nzbhydra:
image: linuxserver/hydra:latest
env_file : /var/data/config/autopirate/nzbhydra.env
volumes:
- /var/data/autopirate/nzbhydra:/config
networks:
- internal
nzbhydra_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/nzbhydra.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nzbhydra.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nzbhydra:5075
-redirect-url=https://nzbhydra.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* NZBHydra (this page)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -89,7 +89,7 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_).

View File

@@ -0,0 +1,95 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBHydra 2
[NZBHydra 2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato.
!!! note
NZBHydra 2 is a complete rewrite of [NZBHydra (1)](/recipes/autopirate/nzbhydra/). It's currently in Beta. It works mostly fine, but some functions might not be completely done, and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively.
![NZBHydra Screenshot](../../images/nzbhydra2.png)
Features include:
* Searches Anizb, BinSearch, NZBIndex and any newznab compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates and returns them all in one place
* Add results to [NZBGet](/recipes/autopirate/nzbget/) or [SABnzbd](/recipes/autopirate/sabnzbd/)
* Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them
* Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID or if no results were found
* Compatible with [Sonarr](/recipes/autopirate/sonarr/), [Radarr](/recipes/autopirate/radarr/), [NZBGet](/recipes/autopirate/nzbget.md), [SABnzbd](/recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](/recipes/autopirate/mylar/), [Lazy Librarian](/recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](/recipes/autopirate/jackett/), Watcher, etc.
* Search and download history and extensive stats. E.g. indexer response times, download shares, NZB age, etc.
* Authentication and multi-user support
* Automatic update of NZB download status by querying configured downloaders
* RSS support with configurable cache times
* Torrent support (_Although I prefer [Jackett](/recipes/autopirate/jackett/) for this_):
* For GUI searches, allowing you to download torrents to a blackhole folder
* A separate Torznab compatible endpoint for API requests, allowing you to merge multiple trackers
* Extensive configurability
* Migration of database and settings from v1
## Inclusion into AutoPirate
To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
nzbhydra2:
image: linuxserver/hydra2:latest
env_file : /var/data/config/autopirate/nzbhydra2.env
volumes:
- /var/data/autopirate/nzbhydra2:/config
networks:
- internal
nzbhydra2_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/nzbhydra2.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nzbhydra2.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nzbhydra2:5076
-redirect-url=https://nzbhydra2.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* NZBHydra2 (this page)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "hydra2"_) and the target port (_to 5076_).

View File

@@ -75,6 +75,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,80 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Ombi
[Ombi](https://github.com/tidusjar/Ombi) is a useful addition to the [autopirate](/recipes/autopirate/) stack. Features include:
* Lets users request Movies and TV Shows (_whether it be the entire series, an entire season, or even a single episode_)
* Easily manage your requests
* User management system (_supports plex.tv, Emby and local accounts_)
* A landing page that will give you the availability of your Plex/Emby server and also add custom notification text to inform your users of downtime.
* Allows your users to get custom notifications!
* Will show if the request is already on plex or even if it's already monitored.
* Automatically updates the status of requests when they are available on Plex/Emby
![Ombi Screenshot](../../images/ombi.png)
## Inclusion into AutoPirate
To include Ombi in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
ombi:
image: linuxserver/ombi:latest
env_file : /var/data/config/autopirate/ombi.env
volumes:
- /var/data/autopirate/ombi:/config
networks:
- internal
ombi_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/ombi.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:ombi.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://ombi:3579
-redirect-url=https://ombi.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* Ombi (this page)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -86,6 +86,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,91 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Radarr
[Radarr](https://radarr.video/) is a tool for finding, downloading and managing movies. Features include:
* Adding new movies with lots of information, such as trailers, ratings, etc.
* Can watch for better quality of the movies you have and do an automatic upgrade, e.g. from DVD to Blu-Ray
* Automatic failed download handling will try another release if one fails
* Manual search so you can pick any release or to see why a release was not downloaded automatically
* Full integration with SABnzbd and NZBGet
* Automatically searching for releases as well as RSS Sync
* Automatically importing downloaded movies
* Recognizing Special Editions, Director's Cut, etc.
* Identifying releases with hardcoded subs
* Importing movies from various online sources, such as IMDb Watchlists (A complete list can be found here)
* Full integration with Kodi, Plex (notification, library update)
* And a beautiful UI
* Importing Metadata such as trailers or subtitles
![Radarr Screenshot](../../images/radarr.png)
!!! tip "Sponsored Project"
Sonarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁
## Inclusion into AutoPirate
To include Radarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
radarr:
image: linuxserver/radarr:latest
env_file : /var/data/config/autopirate/radarr.env
volumes:
- /var/data/autopirate/radarr:/config
- /var/data/media:/media
networks:
- internal
radarr_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/radarr.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:radarr.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://radarr:7878
-redirect-url=https://radarr.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* Radarr (this page)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -75,6 +75,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,80 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# RTorrent / ruTorrent
[RTorrent](http://rakshasa.github.io/rtorrent) is a popular CLI-based bittorrent client, and [ruTorrent](https://github.com/Novik/ruTorrent) is a powerful web interface for rtorrent.
![Rtorrent Screenshot](../../images/rtorrent.png)
## Choose incoming port
When using a torrent client from behind NAT (_which is inherently the case inside a swarm_), you typically need to set a static port for inbound torrent communications. In the example below, I've set the port to 36258. You'll need to configure /var/data/autopirate/rtorrent/rtorrent/rtorrent.rc with the equivalent port.
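As a sketch, using the classic rtorrent.rc syntax (_newer rtorrent builds also accept the equivalent ```network.port_range.set``` form_), the relevant lines would look something like this:

```
# Listen on a single, fixed port - this must match the port published in the swarm definition below
port_range = 36258-36258
# Don't pick a random port from the range
port_random = no
```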
## Inclusion into AutoPirate
To include ruTorrent in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
rtorrent:
image: linuxserver/rutorrent
env_file : /var/data/config/autopirate/rtorrent.env
ports:
- 36258:36258
volumes:
- /var/data/media/:/media
- /var/data/autopirate/rtorrent:/config
networks:
- internal
rtorrent_proxy:
image: skippy/oauth2_proxy
env_file : /var/data/config/autopirate/rtorrent.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:rtorrent.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://rtorrent:80
-redirect-url=https://rtorrent.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* RTorrent (this page)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -82,6 +82,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,87 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# SABnzbd
## Introduction
SABnzbd is the workhorse of the stack. It takes .nzb files as input (_manually or from other [autopirate](/recipes/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files.
![SABNZBD Screenshot](../../images/sabnzbd.png)
!!! tip "Sponsored Project"
SABnzbd is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. It's not sexy, but it's consistent and reliable, and I enjoy the fruits of its labor near-daily.
## Inclusion into AutoPirate
To include SABnzbd in your [AutoPirate](/recipes/autopirate/) stack
(_The only reason you **wouldn't** use SABnzbd, would be if you were using [NZBGet](/recipes/autopirate/nzbget.md) instead_), include the following in your autopirate.yml stack definition file:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
sabnzbd:
image: linuxserver/sabnzbd:latest
env_file : /var/data/config/autopirate/sabnzbd.env
volumes:
- /var/data/autopirate/sabnzbd:/config
- /var/data/media:/media
networks:
- internal
sabnzbd_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/sabnzbd.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:sabnzbd.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://sabnzbd:8080
-redirect-url=https://sabnzbd.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! warning "Important Note re hostname validation"
(**Updated 10 June 2018**) : In SABnzbd [2.3.3](https://sabnzbd.org/wiki/extra/hostname-check.html), hostname verification was added as a mandatory check. SABnzbd will refuse inbound connections which weren't addressed to its own (_initially, autodetected_) hostname. This presents a problem within Docker Swarm, where container hostnames are random and disposable.
You'll need to edit sabnzbd.ini (_only created after your first launch_), and **replace** the value in ```host_whitelist``` configuration (_it's comma-separated_) with the name of your service within the swarm definition, as well as your FQDN as accessed via traefik.
For example, mine simply reads ```host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd```
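For clarity, the resulting excerpt of sabnzbd.ini would look something like the sketch below (_the setting normally lives under the [misc] section_):

```
[misc]
host_whitelist = sabnzbd.example.com, sabnzbd
```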
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* SABnzbd (this page)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -72,6 +72,6 @@ Continue through the list of tools below, adding whichever tools your want to us
* [End](https://geek-cookbook.funkypenguin.co.nz/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

View File

@@ -0,0 +1,77 @@
!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Sonarr
[Sonarr](https://sonarr.tv/) is a tool for finding, downloading and managing your TV series.
![Sonarr Screenshot](../../images/sonarr.png)
!!! tip "Sponsored Project"
Sonarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁
## Inclusion into AutoPirate
To include Sonarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
sonarr:
image: linuxserver/sonarr:latest
env_file : /var/data/config/autopirate/sonarr.env
volumes:
- /var/data/autopirate/sonarr:/config
- /var/data/media:/media
networks:
- internal
sonarr_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/sonarr.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:sonarr.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://sonarr:8989
-redirect-url=https://sonarr.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* Sonarr (this page)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


View File

@@ -0,0 +1,101 @@
# Bitwarden
Heard about the [latest password breach](https://www.databreaches.net) (*since lunch*)? [HaveYouBeenPwned](http://haveibeenpwned.com) yet (*today*)? [Passwords are broken](https://www.theguardian.com/technology/2008/nov/13/internet-passwords), and as the number of sites for which you need to store credentials grows exponentially, so does the risk of using a common password.
"*Duh, use a password manager*", you say. Sure, but be aware that [even password managers have security flaws](https://www.securityevaluators.com/casestudies/password-manager-hacking/).
**OK, look smartass..** no software is perfect, and there will always be a risk of your credentials being exposed in ways you didn't intend. You can at least **minimize** the impact of such exposure by using a password manager to store unique credentials per-site. While [1Password](http://1password.com) is king of the commercial password managers, [BitWarden](https://bitwarden.com) is king of the open-source, self-hosted password managers.
Enter Bitwarden..
![BitWarden Screenshot](../images/bitwarden.png)
Bitwarden is a free and open source password management solution for individuals, teams, and business organizations. While Bitwarden does offer a paid / hosted version, the free version comes with the following (*better than any other free password manager!*):
* Access & install all Bitwarden apps
* Sync all of your devices, no limits!
* Store unlimited items in your vault
* Logins, secure notes, credit cards, & identities
* Two-step authentication (2FA)
* Secure password generator
* Self-host on your own server (optional)
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need to create a directory to bind-mount into our container, so create `/var/data/bitwarden`:
```
mkdir /var/data/bitwarden
```
### Setup environment
Create `/var/data/config/bitwarden/bitwarden.env`, and **leave it empty for now**.
!!! question
    What, why an empty env file? Well, the container supports lots of customizations via environment variables, for things like toggling self-registration, 2FA, etc. These are too complex to go into for this recipe, but readers are recommended to review the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs), and customize their installation to suit.
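As a sketch only (_these variable names come from the bitwarden_rs wiki at the time of writing - verify them against the wiki before relying on them_), a lightly-hardened bitwarden.env might eventually look like this:
```
# Disable self-registration once you've created your own account
SIGNUPS_ALLOWED=false
# Enable the /admin page, protected by this token (make it long and random)
ADMIN_TOKEN=some-long-random-string
```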
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
bitwarden:
image: bitwardenrs/server
env_file: /var/data/config/bitwarden/bitwarden.env
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/bitwarden:/data/:rw
deploy:
labels:
- traefik.enable=true
- traefik.web.frontend.rule=Host:bitwarden.example.com
- traefik.web.port=80
- traefik.hub.frontend.rule=Host:bitwarden.example.com;Path:/notifications/hub
- traefik.hub.port=3012
- traefik.docker.network=traefik_public
networks:
- traefik_public
networks:
traefik_public:
external: true
```
!!! note
Note the clever use of two Traefik frontends to expose the notifications hub on port 3012. Thanks @gkoerk!
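Once you've deployed the stack (_below_), you can roughly sanity-check that both frontends are being routed with something like the following (_exact response codes will vary - the point is simply that Traefik answers on both paths_):
```
# The web vault should respond normally...
curl -sI https://bitwarden.example.com/ | head -n1
# ...while the websocket hub typically won't return a 200 to a plain HTTP request,
# since it expects a WebSocket upgrade
curl -sI https://bitwarden.example.com/notifications/hub | head -n1
```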
## Serving
### Launch Bitwarden stack
Launch the Bitwarden stack by running ```docker stack deploy bitwarden -c <path-to-docker-compose.yml>```
Browse to your new instance at https://**YOUR-FQDN**, and create a new user account and master password (*Just click the **Create Account** button without filling in your email address or master password*)
### Get the apps / extensions
Once you've created your account, jump over to https://bitwarden.com/#download and download the apps for your mobile and browser, and start adding your logins!
## Chef's Notes 📓
1. You'll notice we're not using the *official* container images (*[all 6 of them required](https://help.bitwarden.com/article/install-on-premise/#install-bitwarden)!)*, but rather a [more lightweight version ideal for self-hosting](https://hub.docker.com/r/bitwardenrs/server). All of the elements are contained within a single container, and SQLite is used for the database backend.
2. As mentioned above, readers should refer to the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs) for details on customizing the behaviour of Bitwarden.
3. The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Thanks Gerry!


View File

@@ -0,0 +1,144 @@
# BookStack
BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.
A friendly middle ground between heavyweights like MediaWiki or Confluence and [Gollum](/recipes/gollum/), BookStack relies on a database backend (so searching and versioning are easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing.
![BookStack Screenshot](../images/bookstack.png)
I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oauth_proxy), ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik/) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/bookstack:
```
mkdir -p /var/data/bookstack/database-dump
mkdir -p /var/data/runtime/bookstack/db
```
### Prepare environment
Create /var/data/config/bookstack/bookstack.env, and populate it with the following variables. Set the [oauth_proxy](/reference/oauth_proxy) variables provided by your OAuth provider (if applicable).
```
# For oauth-proxy (optional)
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
# For MariaDB/MySQL database
MYSQL_RANDOM_ROOT_PASSWORD=true
MYSQL_DATABASE=bookstack
MYSQL_USER=bookstack
MYSQL_PASSWORD=secret
# Bookstack-specific variables
DB_HOST=bookstack_db:3306
DB_DATABASE=bookstack
DB_USERNAME=bookstack
DB_PASSWORD=secret
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
db:
image: mariadb:10
env_file: /var/data/config/bookstack/bookstack.env
networks:
- internal
volumes:
- /var/data/runtime/bookstack/db:/var/lib/mysql
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/bookstack/bookstack.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:bookstack.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/bookstack/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app
-redirect-url=https://bookstack.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
app:
image: solidnerd/bookstack
env_file: /var/data/config/bookstack/bookstack.env
networks:
- internal
db-backup:
image: mariadb:10
env_file: /var/data/config/bookstack/bookstack.env
volumes:
- /var/data/bookstack/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.33.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
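If you've lost track of which subnets you've already allocated, a quick way to list the ones in use (_run from a swarm manager node_) is something like:
```
docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' \
  $(docker network ls -q --filter driver=overlay)
```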
## Serving
### Launch Bookstack stack
Launch the BookStack stack by running ```docker stack deploy bookstack -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_proxy, and then login with username 'admin@admin.com' and password 'password'.
## Chef's Notes 📓
1. If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You'd also need to add the traefik_public network to the bookstack container.
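As a rough sketch of what that variation might look like (_untested - port 80 is assumed here, since the oauth2_proxy upstream above targets ```http://app``` on the default HTTP port_):
```
  app:
    image: solidnerd/bookstack
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:bookstack.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
```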


View File

@@ -0,0 +1,128 @@
hero: Manage your ebook collection. Like a BOSS.
# Calibre-Web
The [AutoPirate](/recipes/autopirate/) recipe includes [Lazy Librarian](https://github.com/itsmegb/LazyLibrarian), a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually **reading** them.
[Calibre-Web](https://github.com/janeczku/calibre-web) could be described as "_[Plex](/recipes/plex/) (or [Emby](/recipes/emby/)) for eBooks_" - it's a web-based interface to manage your eBook library, screenshot below:
![Calibre-Web Screenshot](../images/calibre-web.png)
Of course, you probably already manage your eBooks using the excellent [Calibre](https://calibre-ebook.com/), but this is primarily a (_powerful_) desktop application. Calibre-Web is an alternative way to manage / view your existing Calibre database, meaning you can continue to use Calibre on your desktop if you wish.
As a long-time Kindle user, Calibre-Web brings (among [others](https://github.com/janeczku/calibre-web)) the following features which appeal to me:
* Filter and search by titles, authors, tags, **series** and language
* Create custom book collection (shelves)
* Support for editing eBook metadata and deleting eBooks from Calibre library
* Support for converting eBooks from EPUB to Kindle format (mobi/azw)
* Send eBooks to Kindle devices with the click of a button
* Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz)
* Upload new books in PDF, epub, fb2 format
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need a directory to store some config data for the Calibre-Web container, so create /var/data/calibre-web, and ensure the directory is owned by the same user which owns your Calibre data (below)
```
mkdir /var/data/calibre-web
chown calibre:calibre /var/data/calibre-web # for example
```
Ensure that your Calibre library is accessible to the swarm (_i.e., exists on shared storage_), and that the same user who owns the config directory above, also owns the actual calibre library data (_including the ebooks managed by Calibre_).
### Prepare environment
We'll use an [oauth-proxy](/reference/oauth_proxy/) to protect the UI from public access, so create /var/data/config/calibre-web/calibre-web.env, and populate it with the following variables:
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=<make this a random string>
PUID=
PGID=
```
Follow the [instructions](https://github.com/bitly/oauth2_proxy) to setup your oauth provider. You need to setup a unique key/secret for each instance of the proxy you want to run, since in each case the callback URL will differ.
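One way (_of many_) to generate a suitable random value for ```OAUTH2_PROXY_COOKIE_SECRET``` is:
```
openssl rand -base64 32
```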
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
app:
image: technosoft2000/calibre-web
env_file : /var/data/config/calibre-web/calibre-web.env
volumes:
- /var/data/calibre-web:/config
- /srv/data/Archive/Ebooks/calibre:/books
networks:
- internal
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/calibre-web/calibre-web.env
dns_search: hq.example.com
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:calibre-web.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/calibre-web/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app:8083
-redirect-url=https://calibre-web.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.18.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Calibre-Web
Launch the Calibre-Web stack by running ```docker stack deploy calibre-web -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**. You'll be directed to the initial GUI configuration. Set the first field (_Location of Calibre database_) to "_/books/_", and when complete, login using the default username of "**admin**" with password "**admin123**".
## Chef's Notes 📓
1. Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.


View File

@@ -0,0 +1,306 @@
# Collabora Online
!!! important
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
Collabora Online Development Edition (or "[CODE](https://www.collaboraoffice.com/code/#what_is_code)") is the lightweight, or "home" edition of the commercially-supported [Collabora Online](https://www.collaboraoffice.com/collabora-online/) platform.
It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a web-browser. CODE is not a standalone app, it's a backend intended to be accessed via "WOPI" from an existing interface (_in our case, [NextCloud](/recipes/nextcloud/)_)
![CODE Screenshot](../images/collabora-online.png)
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for Collabora, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
4. [NextCloud](/recipes/nextcloud/) installed and operational
5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm
## Preparation
### Explanation for complexity
Due to the clever magic that Collabora does to present a "headless" LibreOffice UI to the browser, the CODE docker container requires system capabilities which cannot be granted under Docker Swarm (_specifically, MKNOD_).
So we have to run Collabora itself in the next best thing to Docker swarm - a docker-compose stack. Using docker-compose will at least provide us with consistent and version-able configuration files.
This presents another problem though - Docker Swarm with Traefik is superb at making all our stacks "just work" with ingress routing and LetsEncrypt certificates. We don't want to have to do this manually (_like a cave-man_), so we engage in some trickery to allow us to still use our swarmed Traefik to terminate SSL.
We run a single swarmed Nginx instance, which forwards all requests to an upstream, with the target IP of the docker0 interface, on port 9980 (_the port exposed by the CODE container_)
We attach the necessary labels to the Nginx container to instruct Traefik to setup a front/backend for collabora.<ourdomain\>. Now incoming requests to **https://collabora.<ourdomain\>** will hit Traefik, be forwarded to nginx (_wherever in the swarm it's running_), and then to port 9980 on the same node that nginx is running on.
What if we're running multiple nodes in our swarm, and nginx ends up on a different node to the one running Collabora via docker-compose? Well, either constrain nginx to the same node as Collabora (_example below_), or just launch an instance of Collabora on _every_ node then. It's just a rendering / GUI engine after all, it doesn't hold any persistent data.
Here's a (_highly technical_) diagram to illustrate:
![CODE traffic flow](../images/collabora-traffic-flow.png)
### Setup data locations
We'll need a directory for holding config to bind-mount into our containers, so create ```/var/data/collabora```, and ```/var/data/config/collabora``` for holding the docker/swarm config
```
mkdir /var/data/collabora/
mkdir /var/data/config/collabora/
```
### Prepare environment
Create /var/data/config/collabora/collabora.env, and populate with the following variables, customized for your installation.
!!! warning
Note the following:
1. Variables are in lower-case, unlike our standard convention. This is to align with the CODE container
2. Set domain to your [NextCloud](/recipes/nextcloud/) domain, and escape all the periods as per the example
3. Set your server_name to collabora.<yourdomain\>. Escaping periods is unnecessary
4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen 🔥
```
username=admin
password=ilovemypassword
domain=nextcloud\.batcave\.com
server_name=collabora.batcave.com
termination=true
```
### Create docker-compose.yml
Create ```/var/data/config/collabora/docker-compose.yml``` as follows:
```
version: "3.0"
services:
local-collabora:
image: funkypenguin/collabora
# the funkypenguin version has a patch to include "termination" behind SSL-terminating reverse proxy (traefik), see CODE PR #50.
# Once merged, the official container can be used again.
#image: collabora/code
env_file: /var/data/config/collabora/collabora.env
volumes:
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
cap_add:
- MKNOD
ports:
- 9980:9980
```
### Create nginx.conf
Create ```/var/data/collabora/nginx.conf``` (_the path bind-mounted into the nginx service below_) as follows, changing the ```server_name``` value to match the environment variable you established above.
```
upstream collabora-upstream {
# Run collabora under docker-compose, since it needs MKNOD cap, which can't be provided by Docker Swarm.
# The IP here is the typical IP of docker0 - change if yours is different.
server 172.17.0.1:9980;
}
server {
listen 80;
server_name collabora.batcave.com;
# static files
location ^~ /loleaflet {
proxy_pass http://collabora-upstream;
proxy_set_header Host $http_host;
}
# WOPI discovery URL
location ^~ /hosting/discovery {
proxy_pass http://collabora-upstream;
proxy_set_header Host $http_host;
}
# Main websocket
location ~ /lool/(.*)/ws$ {
proxy_pass http://collabora-upstream;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
# Admin Console websocket
location ^~ /lool/adminws {
proxy_buffering off;
proxy_pass http://collabora-upstream;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
# download, presentation and image upload
location ~ /lool {
    proxy_pass http://collabora-upstream;
proxy_set_header Host $http_host;
}
}
```
### Create loolwsd.xml
[Until we understand](https://github.com/CollaboraOnline/Docker-CODE/pull/50) how to [pass trusted network parameters to the entrypoint script using environment variables](https://github.com/CollaboraOnline/Docker-CODE/issues/49), we have to maintain a manually edited version of ```loolwsd.xml```, and bind-mount it into our collabora container.
Here's how it works: we mount ```/var/data/collabora/loolwsd.xml``` into the container as ```/etc/loolwsd/loolwsd.xml-new```, allow the container to generate its default ```/etc/loolwsd/loolwsd.xml```, copy that default **over** our bind-mounted ```/etc/loolwsd/loolwsd.xml-new``` (_which lands in ```/var/data/collabora/loolwsd.xml``` on the host_), and then update the container to mount **our** (now edited) ```/var/data/collabora/loolwsd.xml``` as ```/etc/loolwsd/loolwsd.xml``` instead (_confused yet?_)
Create an empty ```/var/data/collabora/loolwsd.xml``` by running ```touch /var/data/collabora/loolwsd.xml```. We'll populate this in the next section...
### Setup Docker Swarm
Create ```/var/data/config/collabora/collabora.yml``` as follows, changing the traefik frontend_rule as necessary:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3.0"
services:
nginx:
image: nginx:latest
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:collabora.batcave.com
- traefik.docker.network=traefik_public
- traefik.port=80
- traefik.frontend.passHostHeader=true
# uncomment this line if you want to force nginx to always run on one node (i.e., the one running collabora)
#placement:
# constraints:
# - node.hostname == ds1
volumes:
- /var/data/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro
networks:
traefik_public:
external: true
```
## Serving
### Generate loolwsd.xml
Well. This is awkward. There's no documented way to make Collabora work with Docker Swarm, so we're doing a bit of a hack here, until I understand [how to pass these arguments](https://github.com/CollaboraOnline/Docker-CODE/issues/49) via environment variables.
Launching Collabora is (_for now_) a 2-step process. First.. we launch collabora itself, by running:
```
cd /var/data/config/collabora/
docker-compose up -d
```
Output looks something like this:
```
root@ds1:/var/data/config/collabora# docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Pulling local-collabora (funkypenguin/collabora:latest)...
latest: Pulling from funkypenguin/collabora
7b8b6451c85f: Pull complete
ab4d1096d9ba: Pull complete
e6797d1788ac: Pull complete
e25c5c290bde: Pull complete
4b8e1b074e06: Pull complete
f51a3d1fb75e: Pull complete
8b826e2ae5ad: Pull complete
Digest: sha256:6cd38cb5cbd170da0e3f0af85cecf07a6bc366e44555c236f81d5b433421a39d
Status: Downloaded newer image for funkypenguin/collabora:latest
Creating collabora_local-collabora_1 ...
Creating collabora_local-collabora_1 ... done
root@ds1:/var/data/config/collabora#
```
Now exec into the container (_from another shell session_), by running ```docker exec -it <container name> /bin/bash```. Make a copy of /etc/loolwsd/loolwsd.xml, by running ```cp /etc/loolwsd/loolwsd.xml /etc/loolwsd/loolwsd.xml-new```, and then exit the container with ```exit```.
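(_Alternatively, rather than exec'ing into the container, you could copy the generated default straight out to the host with ```docker cp``` - a sketch, using the container name shown in the docker-compose output above:_)
```
docker cp collabora_local-collabora_1:/etc/loolwsd/loolwsd.xml /var/data/collabora/loolwsd.xml
```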
Stop and delete the collabora container by running ```docker-compose stop``` followed by ```docker-compose rm```, and then altering this line in docker-compose.yml:
```
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
```
To this:
```
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
```
Edit /var/data/collabora/loolwsd.xml, find the **storage.filesystem.wopi** section, and add lines like these to the existing allow rules (_to allow IPv6-enabled hosts to still connect with their IPv4 addresses_):
```
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
```
Find the **net.post_allow** section, and add lines like these:
```
<host desc="RFC1918 private addressing in inet6 format">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
```
Find these 2 lines:
```
<ssl desc="SSL settings">
<enable type="bool" default="true">true</enable>
```
And change to:
```
<ssl desc="SSL settings">
<enable type="bool" default="true">false</enable>
```
Now re-launch collabora (_with the correct with loolwsd.xml_) under docker-compose, by running:
```
docker-compose up -d
```
Once collabora is up, we launch the swarm stack, by running:
```
docker stack deploy collabora -c /var/data/config/collabora/collabora.yml
```
Visit **https://collabora.<yourdomain\>/l/loleaflet/dist/admin/admin.html** and confirm you can login with the user/password you specified in collabora.env
### Integrate into NextCloud
In NextCloud, install the **Collabora Online** app (https://apps.nextcloud.com/apps/richdocuments), and then under **Settings -> Collabora Online**, set your Collabora Online Server to ```https://collabora.<your domain>```
![CODE Screenshot](../images/collabora-online-in-nextcloud.png)
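If you prefer the command line, the same setting can usually be applied with NextCloud's ```occ``` tool (_a sketch only - adjust the ```docker ps``` filter to match the name of your own NextCloud app container_):
```
docker exec -u www-data $(docker ps -q -f name=nextcloud_app) \
  php occ config:app:set richdocuments wopi_url --value="https://collabora.<your domain>"
```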
Now browse your NextCloud files. Click the plus (+) sign to create a new document, and create either a new document, spreadsheet, or presentation. Name your document and then click on it. If Collabora is setup correctly, you'll shortly enter into the rich editing interface provided by Collabora :)
!!! important
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes 📓
1. Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.

View File

@@ -0,0 +1,16 @@
# CryptoNote Mining Pool
[Cryptocurrency miners](/recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners.
[CryptoNote](https://cryptonote.org/) is an open-source toolset designed to facilitate the creation of new privacy-focused [cryptocurrencies](https://cryptonote.org/coins)
(_CryptoNote = 'Kryptonite'. In a pool. Get it?_)
![CryptoNote Mining Pool Screenshot](/images/cryptonote-mining-pool.png)
The fact that all these currencies share a common ancestry means that a common mining pool platform can be used for miners. The following recipes all use variations of [Dvandal's cryptonote-nodejs-pool ](https://github.com/dvandal/cryptonote-nodejs-pool)
## Mining Pool Recipies
* [TurtleCoin](/recipes/turtle-pool/), the no-BS, fun baby cryptocurrency
* [Athena](/recipes/cryptonote-mining-pool/athena/), TurtleCoin's newborn baby sister


View File

@@ -0,0 +1,166 @@
hero: Duplicity - A boring recipe to backup your exciting stuff. Boring is good.
# Duplicity
![Duplicity Screenshot](../images/duplicity.png)
[Duplicity](http://duplicity.nongnu.org/) backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.
So what does this mean for our stack? It means we can leverage Duplicity to backup all our data-at-rest to a wide variety of cloud providers, including, but not limited to:
- acd_cli
- Amazon S3
- Backblaze B2
- DropBox
- ftp
- Google Docs
- Google Drive
- Microsoft Azure
- Microsoft Onedrive
- Rackspace Cloudfiles
- rsync
- ssh/scp
- SwiftStack
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. Credentials for one of the Duplicity's supported upload destinations
## Preparation
### Setup data locations
We'll need a folder to store a docker-compose .yml file, and an associated .env file. If you're following my filesystem layout, create /var/data/config/duplicity (for the config), and /var/data/duplicity (for the metadata) as follows:
```
mkdir /var/data/config/duplicity
mkdir /var/data/duplicity
cd /var/data/config/duplicity
```
### (Optional) Create Google Cloud Storage bucket
I didn't already have an archival/backup provider, so I chose Google Cloud Storage for the low price-point - 0.7 cents per GB/month (_Plus you [start with $300 credit](https://cloud.google.com/free/) even when signing up for the free tier_). You can use any destination supported by [Duplicity's URL scheme](http://duplicity.nongnu.org/duplicity.1.html#sect7) though, just make sure you specify the necessary [environment variables](http://duplicity.nongnu.org/duplicity.1.html#sect6).
1. [Sign up](https://cloud.google.com/storage/docs/getting-started-console), create an empty project, enable billing, and create a bucket. Give your bucket a unique name, example "**jack-and-jills-bucket**" (_it's unique across the entire Google Cloud_)
2. Under "Storage" section > "[Settings](https://console.cloud.google.com/project/_/storage/settings)" > "Interoperability" tab > click "Enable interoperable access" and then "Create a new key" button and note both Access Key and Secret.
### Prepare environment
1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe**, without it you won't be able to restore!
2. Seriously, **save**. **it**. **somewhere**. **safe**.
3. Create duplicity.env, and populate with the following variables
```
SRC=/var/data/
DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories
TMPDIR=/tmp
GS_ACCESS_KEY_ID=<YOUR GS ACCESS KEY>
GS_SECRET_ACCESS_KEY=<YOUR GS SECRET ACCESS KEY>
OPTIONS=--allow-source-mismatch --exclude /var/data/runtime --exclude /var/data/registry --exclude /var/data/duplicity --archive-dir=/archive
PASSPHRASE=<YOUR CHOSEN PASSPHRASE>
```
!!! note
See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above.
### Run a test backup
Before we launch the automated daily backups, let's run a test backup, as follows:
```
docker run --env-file duplicity.env -it --rm -v \
/var/data:/var/data:ro -v /var/data/duplicity/tmp:/tmp -v \
/var/data/duplicity/archive:/archive tecnativa/duplicity \
/etc/periodic/daily/jobrunner
```
You should see some activity, with a summary of bytes transferred at the end.
### Run a test restore
Repeat after me: "If you don't verify your backup, **it's not a backup**".
!!! warning
Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data.
Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/ha-docker-swarm/traefik/), since this is likely to exist for every reader_).
```
docker run --env-file duplicity.env -it --rm \
-v /var/data:/var/data:ro \
-v /var/data/duplicity/tmp:/tmp \
-v /var/data/duplicity/archive:/archive tecnativa/duplicity \
duplicity list-current-files \
\$DST | grep traefik.yml
```
Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_)
```
docker run --env-file duplicity.env -it --rm \
-v /var/data:/var/data:ro \
-v /var/data/duplicity/tmp:/tmp \
-v /var/data/duplicity/archive:/archive \
tecnativa/duplicity duplicity restore \
--file-to-restore config/traefik/traefik.yml \
\$DST /tmp/traefik-restored.yml
```
Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data.
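A quick way to confirm that the restored copy matches the original (_paths as per this recipe_):
```
diff /var/data/config/traefik/traefik.yml /var/data/duplicity/tmp/traefik-restored.yml \
  && echo "Restored file matches the original"
```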
### Setup Docker Swarm
Now that we have confidence in our backup/restore process, let's automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
backup:
image: tecnativa/duplicity
env_file: /var/data/config/duplicity/duplicity.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data:/var/data:ro
- /var/data/duplicity/tmp:/tmp
- /var/data/duplicity/archive:/archive
networks:
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.10.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Duplicity stack
Launch the Duplicity stack by running ```docker stack deploy duplicity -c <path-to-docker-compose.yml>```
Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination.
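If you'd rather not wait a day to confirm that the swarmed version behaves, you can poke the same job script inside the running service container (_assuming you deployed the stack as "duplicity", so the service is named duplicity_backup_):
```
# Run this on the node where the backup service container is running
docker exec -it $(docker ps -q -f name=duplicity_backup) /etc/periodic/daily/jobrunner
```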
## Chef's Notes 📓
1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env


View File

@@ -0,0 +1,249 @@
hero: Real heroes backup their 💾
# Elkar Backup
Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
!!! important
Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed (encrypted?) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you.
![ElkarBackup Screenshot](../images/elkarbackup.png)
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/elkarbackup:
```
mkdir -p /var/data/elkarbackup/{backups,uploads,sshkeys,database-dump}
mkdir -p /var/data/runtime/elkarbackup/db
mkdir -p /var/data/config/elkarbackup
```
### Prepare environment
Create /var/data/config/elkarbackup/elkarbackup.env, and populate with the following variables
```
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'
#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
#SYMFONY__MAILER__FROM=
# For mysql
MYSQL_ROOT_PASSWORD=password
#oauth2_proxy
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
```
Create ```/var/data/config/elkarbackup/elkarbackup-db-backup.env```, and populate with the following, to setup the nightly database dump.
!!! note
Running a daily database dump might be considered overkill, since ElkarBackup can be configured to backup its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.
    Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?
No, me either :shrug:
```
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
db:
image: mariadb:10.4
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/runtime/elkarbackup/db:/var/lib/mysql
db-backup:
image: mariadb:10.4
env_file: /var/data/config/elkarbackup/elkarbackup-db-backup.env
volumes:
- /var/data/elkarbackup/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
app:
image: elkarbackup/elkarbackup
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/:/var/data
- /var/data/elkarbackup/backups:/app/backups
- /var/data/elkarbackup/uploads:/app/uploads
- /var/data/elkarbackup/sshkeys:/app/.ssh
proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:elkarbackup.example.com
- traefik.port=4180
volumes:
- /var/data/config/traefik/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app:80
-redirect-url=https://elkarbackup.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.36.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch ElkarBackup stack
Launch the ElkarBackup stack by running ```docker stack deploy elkarbackup -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":
![ElkarBackup Login Screen](/images/elkarbackup-setup-1.png)
First thing you do, change your password, using the gear icon, and "Change Password" link:
![ElkarBackup Login Screen](/images/elkarbackup-setup-2.png)
Have a read of the [Elkarbackup Docs](https://docs.elkarbackup.org/docs/introduction.html) - they introduce the concept of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), **policies** (_when is data backed up and how long is it kept_).
At the very least, you want to setup a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to backup /var/data, **excluding** ```/var/data/runtime``` and ```/var/data/elkarbackup/backups``` (_unless you **like** "backup-ception"_)
### Copying your backup data offsite
From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker swarm deployment.
Here's a variation to the standard script, which I've employed:
```
#!/bin/bash
REPOSITORY=/var/data/elkarbackup/backups
SERVER=<target host member of docker swarm>
SERVER_USER=elkarbackup
UPLOADS=/var/data/elkarbackup/uploads
TARGET=/srv/backup/elkarbackup
echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
do
echo Backing up job $jobId
mkdir -p $TARGET/$jobId 2>/dev/null
rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done
echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads
USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`
echo "Backup finished succesfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
```
!!! note
You'll note that I don't use the script to create a mysql dump (_since Elkar is running within a container anyway_), rather I just rely on the database dump which is made nightly into ```/var/data/elkarbackup/database-dump/```
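To run the offsite copy on a schedule, a cron entry on the **remote** host along these lines would do the job (_the script path below is just a placeholder - use wherever you saved the script above_):
```
# /etc/cron.d/elkarbackup-offsite : nightly at 03:30, as root
30 3 * * * root /usr/local/bin/elkarbackup-offsite.sh >> /var/log/elkarbackup-offsite.log 2>&1
```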
### Restoring data
Repeat after me : "**It's not a backup unless you've tested a restore**"
!!! note
I had some difficulty making restoring work well in the webUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the webUI or from the filesystem directly.
To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:
![ElkarBackup Login Screen](/images/elkarbackup-setup-3.png)
This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.
!!! important
Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes 📓
1. If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service. You'd also need to add the traefik_public network to the app service.
2. The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!


View File

@@ -0,0 +1,90 @@
# Emby
[Emby](https://emby.media/) (_think "M.B." or "Media Browser"_) is best described as "_like [Plex](/recipes/plex/) but different_" 😁 - It's a bit geekier and less polished than Plex, but it allows for more flexibility and customization.
![Emby Screenshot](../images/emby.png)
I've started experimenting with Emby as an alternative to Plex, because of the advanced [parental controls](https://github.com/MediaBrowser/Wiki/wiki/Parental-Controls) it offers. Based on my experimentation thus far, I have a "**kid-safe**" profile which automatically logs in, and only displays kid-safe content, based on ratings.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need a location to store Emby's library data, config files, logs and temporary transcoding space, so create /var/data/emby, and make sure it's owned by the user and group who also own your media data.
```
mkdir /var/data/emby
```
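Then make sure the ownership matches whoever owns your media - for example (_the user/group below are placeholders_):
```
# Illustrative only - use the user/group which actually own your media files
chown mediauser:mediagroup /var/data/emby
```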
### Prepare environment
Create emby.env, and populate with PUID/GUID for the user who owns the /var/data/emby directory (_above_) and your actual media content (_in this example, the media content is at **/srv/data**_)
```
PUID=
GUID=
```
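If you're unsure of the numeric values, `id` will tell you - e.g. (_the username is a placeholder_):
```
# Prints something like "uid=1001(mediauser) gid=1001(mediagroup) ..."
id mediauser
```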
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3.0"
services:
emby:
image: emby/emby-server
env_file: /var/data/config/emby/emby.env
volumes:
- /var/data/emby/emby:/config
- /srv/data/:/data
deploy:
labels:
- traefik.frontend.rule=Host:emby.example.com
- traefik.docker.network=traefik_public
- traefik.port=8096
networks:
- traefik_public
- internal
ports:
- 8096:8096
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.17.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Emby stack
Launch the stack by running ```docker stack deploy emby -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-based setup to finish deploying your Emby.
## Chef's Notes 📓
1. I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
3. We don't bother exposing the HTTPS port for Emby, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.

version: '3'
services:
flightairmap:
image: richarvey/nginx-php-fpm
volumes:
- "/var/data/flightairmap/conf:/var/www/html/conf"
- "/var/data/flightairmap/scripts:/var/www/html/scripts"
- "/var/data/flightairmap/html:/var/www/flightairmap/"
env_file:
- "/var/data/config/flightairmap/flightairmap.env"
environment:
- PHP_MEM_LIMIT=256
- RUN_SCRIPTS=1
- MYSQL_HOST=${MYSQL_HOST}
- MYSQL_DATABASE=${MYSQL_DATABASE}
- MYSQL_USER=${MYSQL_USER}
- MYSQL_PASSWORD=${MYSQL_PASSWORD}
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:www.observe.global
- traefik.docker.network=traefik_public
- traefik.port=80
db:
image: mariadb:10
env_file: /var/data/config/flightairmap/flightairmap.env
networks:
- internal
volumes:
- /var/data/runtime/flightairmap/db:/var/lib/mysql
db-backup:
image: mariadb:10
env_file: /var/data/config/flightairmap/flightairmap.env
volumes:
- /var/data/flightairmap/database-dump:/dump
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.44.0/24

hero: Ghost - A recipe for beautiful online publication.
# Ghost
[Ghost](https://ghost.org) is "a fully open source, hackable platform for building and running a modern online publication."
![](/images/ghost.png)
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik/) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
Create the location for the bind-mount of the application data, so that it's persistent:
```
mkdir -p /var/data/ghost
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
ghost:
image: ghost:1-alpine
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/ghost/:/var/lib/ghost/content
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:ghost.example.com
- traefik.docker.network=traefik_public
- traefik.port=2368
networks:
traefik_public:
external: true
```
## Serving
### Launch Ghost stack
Launch the Ghost stack by running ```docker stack deploy ghost -c <path-to-docker-compose.yml>```
Create your first administrative account at https://**YOUR-FQDN**/admin/
## Chef's Notes 📓
1. If I wasn't committed to a [static-site-generated blog](https://www.funkypenguin.co.nz/blog/), Ghost is the platform I'd use for my blog.
2. A default install using the SQLite database takes 548K of space:
```
[root@ds1 ghost]# du -sh /var/data/ghost/
548K /var/data/ghost/
[root@ds1 ghost]#
```

# Gitlab Runner
Some features of GitLab require a "[runner](https://docs.gitlab.com/runner/)" (_in the sense of a "gopher" or a "minion"_). A runner "registers" itself with a GitLab instance, and is given tasks to run. Tasks include running Continuous Integration (CI) builds, and building container images.
While a runner isn't strictly required to use GitLab, if you want to do CI, you'll need at least one. There are many ways to deploy a runner - this recipe focuses on the docker container model.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik/) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
4. [X] [GitLab](/ha-docker-swarm/gitlab) installation (see previous recipe)
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our runner containers, so create them in `/var/data/gitlab`:
```
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {runners/1,runners/2}
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
thing1:
image: gitlab/gitlab-runner
volumes:
- /var/data/gitlab/runners/1:/etc/gitlab-runner
networks:
- internal
thing2:
image: gitlab/gitlab-runner
volumes:
- /var/data/gitlab/runners/2:/etc/gitlab-runner
networks:
- internal
networks:
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.23.0/24
```
### Configure runners
From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute ```gitlab-runner register``` to interactively generate config.toml.
Sample runner config.toml:
```
concurrent = 1
check_interval = 0
[[runners]]
name = "myrunner1"
url = "https://gitlab.example.com"
token = "<long string here>"
executor = "docker"
[runners.docker]
tls_verify = false
image = "ruby:2.1"
privileged = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
[runners.cache]
```
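If you'd rather register interactively than hand-craft config.toml, something like the following should work once the stack is deployed (_service/stack names per the compose file and deploy command below; the rest is prompted interactively_):
```
# Run on the swarm node currently hosting the runner container (illustrative)
docker exec -it $(docker ps -q -f name=gitlab-runner_thing1) gitlab-runner register
```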
## Serving
### Launch runners
Launch the runner stack by running ```docker stack deploy gitlab-runner -c <path-to-docker-compose.yml>```
Once registered, your runners should appear (_and show as online_) in the **Runners** section of your GitLab Admin Area.
## Chef's Notes 📓
1. You'll note that I setup 2 runners. One is locked to a single project (*this cookbook build*), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (*and GitLab starts **sooo** slowly!*), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

hero: Gitlab - A recipe for a self-hosted GitHub alternative
# GitLab
GitLab is a self-hosted [alternative to GitHub](https://about.gitlab.com/comparison/). The most common use case is (a set of) developers with the desire for the rich feature-set of GitHub, but with unlimited private repositories.
GitLab does maintain an [official "Omnibus" container](https://docs.gitlab.com/omnibus/docker/README.html), but for this recipe I prefer the "[dockerized gitlab](https://github.com/sameersbn/docker-gitlab)" project, since it allows distribution of the various GitLab components across multiple swarm nodes.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik/) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/gitlab:
```
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {postgresql,redis,gitlab}
```
### Prepare environment
You'll need to know the following:
1. Choose a password for PostgreSQL; you'll need it for DB_PASS in the compose file (below)
2. Generate 3 passwords using ```pwgen -Bsv1 64```. You'll use these for the XXX_KEY_BASE environment variables below
3. Create gitlab.env, and populate with **at least** the following variables (the full set is available at https://github.com/sameersbn/docker-gitlab#available-configuration-parameters):
```
DB_USER=gitlab
DB_PASS=gitlabdbpass
DB_NAME=gitlabhq_production
DB_EXTENSION=pg_trgm
DB_ADAPTER=postgresql
DB_HOST=postgresql
TZ=Pacific/Auckland
REDIS_HOST=redis
REDIS_PORT=6379
GITLAB_TIMEZONE=Auckland
GITLAB_HTTPS=true
SSL_SELF_SIGNED=false
GITLAB_HOST=gitlab.example.com
GITLAB_PORT=443
GITLAB_SSH_PORT=2222
GITLAB_SECRETS_DB_KEY_BASE=CFf7sS3kV2nGXBtMHDsTcjkRX8PWLlKTPJMc3lRc6GCzJDdVljZ85NkkzJ8mZbM5
GITLAB_SECRETS_SECRET_KEY_BASE=h2LBVffktDgb6BxM3B97mDSjhnSNwLc5VL2Hqzq9cdrvBtVw48WSp5wKj5HZrJM5
GITLAB_SECRETS_OTP_KEY_BASE=t9LPjnLzbkJ7Nt6LZJj6hptdpgG58MPJPwnMMMDdx27KSwLWHDrz9bMWXQMjq5mp
GITLAB_ROOT_PASSWORD=changeme
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
redis:
image: sameersbn/redis:latest
command:
- --loglevel warning
volumes:
- /var/data/gitlab/redis:/var/lib/redis:Z
networks:
- internal
postgresql:
image: sameersbn/postgresql:9.6-2
env_file: /var/data/config/gitlab/gitlab.env
volumes:
- /var/data/gitlab/postgresql:/var/lib/postgresql:Z
networks:
- internal
gitlab:
image: sameersbn/gitlab:latest
env_file: /var/data/config/gitlab/gitlab.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:gitlab.example.com
- traefik.docker.network=traefik_public
- traefik.port=80
restart_policy:
delay: 10s
max_attempts: 10
window: 60s
ports:
- "2222:22"
volumes:
- /var/data/gitlab/gitlab:/home/git/data:Z
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.2.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch gitlab
Launch the GitLab stack by running ```docker stack deploy gitlab -c <path-to-docker-compose.yml>```
Log into your new instance at https://[your FQDN], with user "root" and the password you specified in gitlab.env.
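Since we published SSH on port 2222 (_above_), it's worth confirming git access too. A quick sanity check, assuming you've already added an SSH key to your GitLab profile:
```
# Should respond with "Welcome to GitLab, @<your-username>!"
ssh -p 2222 -T git@gitlab.example.com
```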
## Chef's Notes 📓
A few comments on decisions taken in this design:
1. I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)

hero: Gollum - A recipe for your own git-based wiki
# Gollum
Gollum is a simple wiki system built on top of Git. A Gollum Wiki is simply a git repository (_either bare or regular_) of a specific nature:
* A Gollum repository's contents are human-editable, unless the repository is bare.
* Pages are unique text files which may be organized into directories any way you choose.
* Other content can also be included, for example images, PDFs and headers/footers for your pages.
Gollum pages:
* May be written in a variety of markups.
* Can be edited with your favourite system editor or IDE (_changes will be visible after committing_) or with the built-in web interface.
* Can be displayed in all versions (_commits_).
![Gollum Screenshot](../images/gollum.png)
As you'll note in the (_real world_) screenshot above, my requirements for a personal wiki are:
* Portable across my devices
* Supports images
* Full-text search
* Supports inter-note links
* Revision control
Gollum meets all these requirements, and as an added bonus, is extremely fast and lightweight.
!!! note
Since Gollum itself offers no user authentication, this design secures gollum behind an [oauth2 proxy](/reference/oauth_proxy/), so that in order to gain access to the Gollum UI at all, oauth2 authentication (_to GitHub, GitLab, Google, etc_) must have already occurred.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik/) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need an empty git repository in /var/data/gollum for our data:
```
mkdir /var/data/gollum
cd /var/data/gollum
git init
```
### Prepare environment
1. Choose an oauth provider, and obtain a client ID and secret
2. Create gollum.env, and populate with the following variables (_you can make the cookie secret whatever you like_)
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
```
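If you'd like something more random than "whatever you like" for the cookie secret, one way (_of many_) to generate a value:
```
# Generates a 32-character random hex string for OAUTH2_PROXY_COOKIE_SECRET
openssl rand -hex 16
```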
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
app:
image: dakue/gollum
volumes:
- /var/data/gollum:/gollum
networks:
- internal
command: |
--allow-uploads
--emoji
--user-icons gravatar
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/gollum/gollum.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:gollum.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/gollum/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app:4567
-redirect-url=https://gollum.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.9.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Gollum stack
Launch the Gollum stack by running ```docker stack deploy gollum -c <path-to-docker-compose.yml>```
Authenticate against your OAuth provider, and then start editing your wiki!
## Chef's Notes 📓
1. In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"

# Home Assistant
Home Assistant is a home automation platform written in Python, with extensive support for 3rd-party home-automation platforms including Xiaomi, Philips Hue, and a [bazillion](https://home-assistant.io/components/) others.
![Home Assistant Screenshot](../images/homeassistant.png)
This recipe combines the [extensibility](https://home-assistant.io/components/) of [Home Assistant](https://home-assistant.io/) with the flexibility of [InfluxDB](https://docs.influxdata.com/influxdb/v1.4/) (_as a time-series data store_) and [Grafana](https://grafana.com/) (_for **beautiful** visualisation of that data_).
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/homeassistant:
```
mkdir /var/data/homeassistant
cd /var/data/homeassistant
mkdir -p {homeassistant,grafana,influxdb-backup}
```
Now create a directory for the influxdb realtime data:
```
mkdir -p /var/data/runtime/homeassistant/influxdb
```
### Prepare environment
Create /var/data/config/homeassistant/grafana.env, and populate with the following - this is to enable grafana to work with oauth2_proxy without requiring an additional level of authentication:
```
GF_AUTH_BASIC_ENABLED=false
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
influxdb:
image: influxdb
networks:
- internal
volumes:
- /var/data/runtime/homeassistant/influxdb:/var/lib/influxdb
- /etc/localtime:/etc/localtime:ro
homeassistant:
image: homeassistant/home-assistant
dns_search: hq.example.com
volumes:
- /var/data/homeassistant/homeassistant:/config
- /etc/localtime:/etc/localtime:ro
deploy:
labels:
- traefik.frontend.rule=Host:homeassistant.example.com
- traefik.docker.network=traefik_public
- traefik.port=8123
networks:
- traefik_public
- internal
ports:
- 8123:8123
grafana-app:
image: grafana/grafana
env_file : /var/data/config/homeassistant/grafana.env
volumes:
- /var/data/homeassistant/grafana:/var/lib/grafana
- /etc/localtime:/etc/localtime:ro
networks:
- internal
grafana-proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/homeassistant/grafana.env
dns_search: hq.example.com
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:grafana.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/homeassistant/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://grafana-app:3000
-redirect-url=https://grafana.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.13.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Home Assistant stack
Launch the Home Assistant stack by running ```docker stack deploy homeassistant -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, using the password you created in configuration.yaml as "frontend - api_key". Then set up a bunch of sensors, and log into https://grafana.**YOUR-FQDN** and create some beautiful graphs :)
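To actually get sensor history flowing into InfluxDB (_and from there into Grafana_), Home Assistant needs an ```influxdb:``` section in its configuration.yaml. A minimal sketch, assuming the service names from the compose file above and a database name of your choosing:
```
# Added to /var/data/homeassistant/homeassistant/configuration.yaml (illustrative)
influxdb:
  host: influxdb           # resolves over the stack's internal overlay network
  database: home_assistant
```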
## Chef's Notes 📓
1. I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy), but oauth2_proxy is incompatible with the websockets implementation used by Home Assistant. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!
