hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖
# AutoPirate
Once the cutting edge of the "internet" (_pre-world-wide-web and Mosaic days_), Usenet is now a murky, geeky alternative to torrents for file-sharing. However, it's **cool** geeky, especially if you're into having a fully automated media platform.
A good starting point for the Usenet scene is https://www.reddit.com/r/usenet/. Because it's all so damn complicated, a host of tools have sprung up to automate the process of finding, downloading, and managing content; the tools included in this recipe are listed in the menu below.
![Autopirate Screenshot](../images/autopirate.png)
This recipe presents a method to combine these tools into a single swarm deployment, and make them available securely.
## Menu
Tools included in the AutoPirate stack are:
* **[SABnzbd](http://sabnzbd.org)** : downloads data from usenet servers based on .nzb definitions
* **[NZBGet](https://nzbget.net/)** : downloads data from usenet servers based on .nzb definitions, but written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources (_this is a popular alternative to SABnzbd_)
* **[RTorrent](https://github.com/rakshasa/rtorrent/wiki)** is a CLI-based torrent client, which when combined with **[ruTorrent](https://github.com/Novik/ruTorrent)** becomes a powerful and fully browser-managed torrent client. (_Yes, it's not Usenet, but Sonarr/Radarr will happily fulfill your watchlist using either Usenet **or** torrents, so it's worth including_)
* **[NZBHydra](https://github.com/theotherp/nzbhydra)** : acts as a "meta-indexer", so that your downloading tools (_Radarr, Sonarr, etc._) only need to be configured for a single indexer. Also produces interesting stats on indexers, which helps when evaluating which indexers are performing well.
* **[NZBHydra2](https://github.com/theotherp/nzbhydra2)** : is a high-performance rewrite of the original NZBHydra, with extra features. While still in beta, NZBHydra2 will eventually supersede NZBHydra.
* **[Sonarr](https://sonarr.tv)** : finds, downloads and manages TV shows
* **[Radarr](https://radarr.video)** : finds, downloads and manages movies
* **[Mylar](https://github.com/evilhero/mylar)** : finds, downloads and manages comic books
* **[Headphones](https://github.com/rembo10/headphones)** : finds, downloads and manages music
* **[Lazy Librarian](https://github.com/itsmegb/LazyLibrarian)** : finds, downloads and manages ebooks
* **[Ombi](https://github.com/tidusjar/Ombi)** : provides an interface to request additions to a [Plex](/recipes/plex/)/[Emby](/recipes/emby/) library using the above tools
* **[Jackett](https://github.com/Jackett/Jackett)** : Provides a local, caching, API-based interface to torrent trackers, simplifying the way your tools search for torrents.
Since this recipe is so long, and so many of the tools are optional to the final result (_i.e., if you're not interested in comics, you won't want Mylar_), I've described each individual tool on its own sub-recipe page (_below_), even though most of them are deployed very similarly.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. Access to NZB indexers and Usenet servers
4. DNS entries configured for each of the NZB tools in this recipe that you want to use
## Preparation
### Setup data locations
We'll need unique directories for each tool in the stack, bind-mounted into our containers, so create them upfront, in /var/data/autopirate:
```
mkdir /var/data/autopirate
cd /var/data/autopirate
mkdir -p {lazylibrarian,mylar,ombi,sonarr,radarr,headphones,lidarr,plexpy,nzbhydra,nzbhydra2,sabnzbd,nzbget,rtorrent,jackett}
```
Create a directory for the storage of your downloaded media, i.e., something like:
```
mkdir /var/data/media
```
Create a user to "own" the above directories, and note the UID and GID of the created user. You'll need to specify the UID/GID in the environment variables passed to each container (in the example below, I used 4242 - twice the meaning of life).
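For example, a minimal sketch of creating such a user (the `autopirate` username and `nologin` shell are purely illustrative, any non-privileged user will do):
```
groupadd --gid 4242 autopirate
useradd --uid 4242 --gid 4242 --no-create-home --shell /usr/sbin/nologin autopirate
chown -R 4242:4242 /var/data/autopirate /var/data/media
```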
### Secure public access
What you'll quickly notice about this recipe is that __every__ web interface is protected by an [OAuth proxy](/reference/oauth_proxy/).
Why? Because these tools are developed by a handful of volunteer developers who are focused on adding features, not necessarily implementing robust security. Most users wouldn't expose these tools directly to the internet, so the tools have rudimentary (if any) access control.
To mitigate the risk associated with public exposure of these tools (_you're on your smartphone and you want to add a movie to your watchlist, what do you do, hotshot?_), you'll first need to authenticate against your chosen OAuth provider in order to gain access to each tool.
This is tedious, but you only have to do it once. Each tool (Sonarr, Radarr, etc.) to be protected by an OAuth proxy requires its own unique configuration. I use GitHub to provide my OAuth, giving each tool a unique logo while I'm at it (make up your own random string for `OAUTH2_PROXY_COOKIE_SECRET`).
For each tool, create /var/data/config/autopirate/<tool>.env, and set the following:
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
PUID=4242
PGID=4242
```
Create /var/data/config/autopirate/authenticated-emails.txt, containing at least your own email address as registered with your OAuth provider. If you wanted to grant other users access to a specific tool, you'd create a unique authenticated-emails-<tool>.txt for that tool, containing both your own address and any addresses to be granted tool-specific access.
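The format is one email address per line, so a hypothetical authenticated-emails.txt might look like this:
```
batman@example.com
robin@example.com
```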
### Setup components
#### Stack basics
**Start** with a swarm config file in docker-compose syntax, like this:
```
version: '3'
services:
```
And **end** with a stanza like this:
```
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.11.0/24
```
!!! note
Set up unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/).
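Putting the two together, the overall stack file is shaped like this (a sketch only; the sonarr service is shown purely to illustrate where the sub-recipe definitions land):
```
version: '3'

services:

  # Service definitions from the sub-recipes below go here, e.g.:
  sonarr:
    image: linuxserver/sonarr:latest
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.11.0/24
```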
#### Assemble the tools..
Now work your way through the list of tools below, adding whichever tools you want to use, and finishing with the **end** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [End](/recipes/autopirate/end/) (launch the stack)


!!! warning
This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
### Launch AutoPirate stack
Launch the AutoPirate stack by running ```docker stack deploy autopirate -c <path-to-docker-compose.yml>```
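For example, assuming you'd saved your stack definition to /var/data/config/autopirate/autopirate.yml (an illustrative path, use whatever location suits you):
```
docker stack deploy autopirate -c /var/data/config/autopirate/autopirate.yml
```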
Confirm the container status by running ```docker stack ps autopirate```, and wait for all containers to enter the "Running" state.
Log into each of your new tools at its respective HTTPS URL. You'll be prompted to authenticate against your OAuth provider, and upon success, redirected to the tool's UI.
## Chef's Notes 📓
1. This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)


hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Headphones
[Headphones](https://github.com/rembo10/headphones) is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent, Deluge and Blackhole.
![Headphones Screenshot](../../images/headphones.png)
## Inclusion into AutoPirate
To include Headphones in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
headphones:
image: linuxserver/headphones:latest
env_file : /var/data/config/autopirate/headphones.env
volumes:
- /var/data/autopirate/headphones:/config
- /var/data/media:/media
networks:
- internal
headphones_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/headphones.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:headphones.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://headphones:8181
-redirect-url=https://headphones.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* Headphones (this page)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
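As an illustration (a sketch only, using the service name and port from the NZBHydra recipe's oauth2_proxy upstream), pointing Headphones at NZBHydra for searches would use settings something like:
```
Newznab host    : http://nzbhydra:5075
Newznab API key : <your NZBHydra API key>
```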


!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Heimdall
[Heimdall Application Dashboard](https://heimdall.site/) is a dashboard for all your web applications. It doesn't need to be limited to applications though, you can add links to anything you like.
Heimdall is an elegant solution to organise all your web applications. It's dedicated to this purpose, so you won't lose your links in a sea of bookmarks.
Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet](/recipes/autopirate/nzbget/), [SABnzbd](/recipes/autopirate/sabnzbd/), and friends.
![Heimdall Screenshot](../../images/heimdall.jpg)
## Inclusion into AutoPirate
To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
heimdall:
image: linuxserver/heimdall:latest
env_file: /var/data/config/autopirate/heimdall.env
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/heimdall:/config
networks:
- internal
heimdall_proxy:
image: funkypenguin/oauth2_proxy:latest
env_file : /var/data/config/autopirate/heimdall.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:heimdall.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://heimdall:80
-redirect-url=https://heimdall.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* Heimdall (this page)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!


!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Jackett
[Jackett](https://github.com/Jackett/Jackett) works as a proxy server: it translates queries from apps (Sonarr, Radarr, Mylar, etc) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software.
This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping & translation logic - removing the burden from other apps.
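For example, a tool like Sonarr would consume one of Jackett's indexers as a Torznab feed, at a URL shaped something like this (a sketch, where `myindexer` is a hypothetical indexer ID, and the port matches the oauth2_proxy upstream below):
```
http://jackett:9117/api/v2.0/indexers/myindexer/results/torznab/
```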
![Jackett Screenshot](../../images/jackett.png)
## Inclusion into AutoPirate
To include Jackett in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
jackett:
image: linuxserver/jackett:latest
env_file : /var/data/config/autopirate/jackett.env
volumes:
- /var/data/autopirate/jackett:/config
networks:
- internal
jackett_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/jackett.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:jackett.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://jackett:9117
-redirect-url=https://jackett.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* Jackett (this page)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# LazyLibrarian
[LazyLibrarian](https://github.com/DobyTang/LazyLibrarian) is a tool to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing, and (optionally) GoogleBooks as sources for author and book info. Features include:
* Find authors and add them to the database
* List all books of an author and mark ebooks or audiobooks as 'wanted'.
* When processing the downloaded books it will save a cover picture (if available) and save all metadata into metadata.opf next to the bookfile (calibre compatible format)
* AutoAdd feature for book management tools like Calibre, which require books in a flattened directory structure, or use Calibre to import your books into an existing Calibre library
* LazyLibrarian can also be used to search for and download magazines, and monitor for new issues
![Lazy Librarian Screenshot](../../images/lazylibrarian.png)
## Inclusion into AutoPirate
To include LazyLibrarian in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
lazylibrarian:
image: linuxserver/lazylibrarian:latest
env_file : /var/data/config/autopirate/lazylibrarian.env
volumes:
- /var/data/autopirate/lazylibrarian:/config
- /var/data/media:/media
networks:
- internal
lazylibrarian_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/lazylibrarian.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:lazylibrarian.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://lazylibrarian:5299
-redirect-url=https://lazylibrarian.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
calibre-server:
image: regueiro/calibre-server
volumes:
- /var/data/media/Ebooks/calibre/:/opt/calibre/library
networks:
- internal
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* Lazy Librarian (this page)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. The calibre-server container co-exists with the LazyLibrarian (LL) container so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
2. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


hero: AutoPirate - A fully-featured recipe to automate finding, downloading, and organising your media 📺 🎥 🎵 📖
!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Lidarr
[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones](/recipes/autopirate/headphones), but is written using the same(ish) codebase as [Radarr](/recipes/autopirate/radarr/) and [Sonarr](/recipes/autopirate/sonarr). It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd](/recipes/autopirate/sabnzbd/), [NZBGet](/recipes/autopirate/nzbget/), Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_).
![Lidarr Screenshot](../../images/lidarr.png)
## Inclusion into AutoPirate
To include Lidarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
lidarr:
image: linuxserver/lidarr:latest
env_file : /var/data/config/autopirate/lidarr.env
volumes:
- /var/data/autopirate/lidarr:/config
- /var/data/media:/media
networks:
- internal
lidarr_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/lidarr.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:lidarr.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://lidarr:8686
-redirect-url=https://lidarr.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* Lidarr (this page)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!


!!! warning
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Mylar
[Mylar](https://github.com/evilhero/mylar) is a tool for downloading and managing digital comic books.
![Mylar Screenshot](../../images/mylar.jpg)
## Inclusion into AutoPirate
To include Mylar in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
mylar:
image: linuxserver/mylar:latest
env_file : /var/data/config/autopirate/mylar.env
volumes:
- /var/data/autopirate/mylar:/config
- /var/data/media:/media
networks:
- internal
mylar_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/mylar.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:mylar.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://mylar:8090
-redirect-url=https://mylar.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* Mylar (this page)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB (host, port and API key), you will need to set the `host_return` parameter to the fully-qualified Mylar address (e.g. `http://mylar:8090`). This provides the downloader with the link necessary to initiate the download. This parameter is not presented in the user interface, so the config file (`$MYLAR_HOME/config.ini`) will need to be manually updated; the parameter can be found under the `[Interface]` section of the file, as sketched below. ([Details](https://github.com/evilhero/mylar/issues/2242))
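I.e., a sketch of the relevant snippet of config.ini, using the mylar service name and port from the compose definition above:
```
[Interface]
host_return = http://mylar:8090
```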


!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBGet
## Introduction
NZBGet performs the same function as [SABnzbd](/recipes/autopirate/sabnzbd.md) (_downloading content from Usenet servers_), but it's lightweight and fast(er), written in C++ (_as opposed to Python_).
![NZBGet Screenshot](../../images/nzbget.jpg)
## Inclusion into AutoPirate
To include NZBGet in your [AutoPirate](/recipes/autopirate/) stack (_the only reason you **wouldn't** use NZBGet would be if you were using [SABnzbd](/recipes/autopirate/sabnzbd/) instead_), include the following in your autopirate.yml stack definition file:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
nzbget:
image: linuxserver/nzbget
env_file : /var/data/config/autopirate/nzbget.env
volumes:
- /var/data/autopirate/nzbget:/config
- /var/data/media:/data
networks:
- internal
nzbget_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/nzbget.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nzbget.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nzbget:6789
-redirect-url=https://nzbget.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! note
NZBGet uses a 401 header to prompt for authentication. When you use oauth2_proxy, this seems to break. Since we trust OAuth to authenticate us, we can just disable NZBGet's own authentication, by changing ControlPassword to null in nzbget.conf (i.e. ```ControlPassword=```)
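Since the config directory is bind-mounted, you can make this change directly from a swarm node (a sketch; sed is just one way to do it):
```
sed -i 's/^ControlPassword=.*/ControlPassword=/' /var/data/autopirate/nzbget/nzbget.conf
```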
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* NZBGet (this page)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBHydra
[NZBHydra](https://github.com/theotherp/nzbhydra) is a meta-search engine for NZB indexers. It provides easy access to a number of raw and newznab-based indexers. You can search all your indexers from one place, and use it as an indexer source for tools like Sonarr or CouchPotato. Features include:
* Search by IMDB, TMDB, TVDB, TVRage and TVMaze ID (including season and episode) and filter by age and size. If an ID is not supported by an indexer, an attempt is made to convert it (e.g. TMDB to IMDB)
* Query generation, meaning when you search for a movie using e.g. an IMDB ID a query will be generated for raw indexers. Searching for a series season 1 episode 2 will also generate queries for raw indexers, like s01e02 and 1x02
* Grouping of results with the same title, and of duplicate results, accounting for result posting time, size, group and poster. By default only one of the duplicates is shown; you can provide an indexer score to influence which one that is
* Compatible with Sonarr, CP, NZB 360, SickBeard, Mylar and Lazy Librarian (and others)
* Statistics on indexers (average response time, share of results, access errors), searches and downloads per time of day and day of week, NZB download history and search history (both via internal GUI and API)
![NZBHydra Screenshot](../../images/nzbhydra.png)
## Inclusion into AutoPirate
To include NZBHydra in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
nzbhydra:
image: linuxserver/hydra:latest
env_file : /var/data/config/autopirate/nzbhydra.env
volumes:
- /var/data/autopirate/nzbhydra:/config
networks:
- internal
nzbhydra_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/nzbhydra.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nzbhydra.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nzbhydra:5075
-redirect-url=https://nzbhydra.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* NZBHydra (this page)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# NZBHydra 2
[NZBHydra 2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato.
!!! note
NZBHydra 2 is a complete rewrite of [NZBHydra (1)](/recipes/autopirate/nzbhydra/). It's currently in beta. It works mostly fine, but some functions might not be completely done, and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively.
![NZBHydra Screenshot](../../images/nzbhydra2.png)
Features include:
* Searches Anizb, BinSearch, NZBIndex and any newznab compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates and returns them all in one place
* Add results to [NZBGet](/recipes/autopirate/nzbget/) or [SABnzbd](/recipes/autopirate/sabnzbd/)
* Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them
* Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID or if no results were found
* Compatible with [Sonarr](/recipes/autopirate/sonarr/), [Radarr](/recipes/autopirate/radarr/), [NZBGet](/recipes/autopirate/nzbget/), [SABnzbd](/recipes/autopirate/sabnzbd/), nzb360, CouchPotato, [Mylar](/recipes/autopirate/mylar/), [Lazy Librarian](/recipes/autopirate/lazylibrarian/), Sick Beard, [Jackett/Cardigann](/recipes/autopirate/jackett/), Watcher, etc.
* Search and download history and extensive stats. E.g. indexer response times, download shares, NZB age, etc.
* Authentication and multi-user support
* Automatic update of NZB download status by querying configured downloaders
* RSS support with configurable cache times
* Torrent support (_Although I prefer [Jackett](/recipes/autopirate/jackett/) for this_):
* For GUI searches, allowing you to download torrents to a blackhole folder
* A separate Torznab compatible endpoint for API requests, allowing you to merge multiple trackers
* Extensive configurability
* Migration of database and settings from v1
## Inclusion into AutoPirate
To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
nzbhydra2:
image: linuxserver/hydra2:latest
env_file : /var/data/config/autopirate/nzbhydra2.env
volumes:
- /var/data/autopirate/nzbhydra2:/config
networks:
- internal
nzbhydra2_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/nzbhydra2.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nzbhydra2.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nzbhydra2:5076
-redirect-url=https://nzbhydra2.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* NZBHydra2 (this page)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
2. Note that NZBHydra2 _can_ co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (_to "nzbhydra2"_) and the target port (_to 5076_), as sketched below.
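I.e., where a tool had previously been configured to search via http://nzbhydra:5075, a sketch of the updated target would be:
```
http://nzbhydra2:5076
```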


!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Ombi
[Ombi](https://github.com/tidusjar/Ombi) is a useful addition to the [autopirate](/recipes/autopirate/) stack. Features include:
* Lets users request Movies and TV Shows (_whether it be the entire series, an entire season, or even a single episode_)
* Easily manage your requests
* User management system (_supports Plex.tv, Emby and local accounts_)
* A landing page that will give you the availability of your Plex/Emby server, and also add custom notification text to inform your users of downtime
* Allows your users to get custom notifications!
* Will show if the request is already on Plex, or even if it's already monitored
* Automatically updates the status of requests when they are available on Plex/Emby
![Ombi Screenshot](../../images/ombi.png)
## Inclusion into AutoPirate
To include Ombi in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
ombi:
image: linuxserver/ombi:latest
env_file : /var/data/config/autopirate/ombi.env
volumes:
- /var/data/autopirate/ombi:/config
networks:
- internal
ombi_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/ombi.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:ombi.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://ombi:3579
-redirect-url=https://ombi.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd/)
* [NZBGet](/recipes/autopirate/nzbget/)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* Ombi (this page)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.


!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Radarr
[Radarr](https://radarr.video/) is a tool for finding, downloading and managing movies. Features include:
* Adding new movies with lots of information, such as trailers, ratings, etc.
* Can watch for better quality of the movies you have and do an automatic upgrade. eg. from DVD to Blu-Ray
* Automatic failed download handling will try another release if one fails
* Manual search so you can pick any release or to see why a release was not downloaded automatically
* Full integration with SABnzbd and NZBGet
* Automatically searching for releases as well as RSS Sync
* Automatically importing downloaded movies
* Recognizing Special Editions, Director's Cut, etc.
* Identifying releases with hardcoded subs
* Importing movies from various online sources, such as IMDb Watchlists (A complete list can be found here)
* Full integration with Kodi, Plex (notification, library update)
* And a beautiful UI
* Importing Metadata such as trailers or subtitles
![Radarr Screenshot](../../images/radarr.png)
!!! tip "Sponsored Project"
Radarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁
## Inclusion into AutoPirate
To include Radarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
radarr:
image: linuxserver/radarr:latest
env_file : /var/data/config/autopirate/radarr.env
volumes:
- /var/data/autopirate/radarr:/config
- /var/data/media:/media
networks:
- internal
radarr_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/radarr.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:radarr.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://radarr:7878
-redirect-url=https://radarr.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* Radarr (this page)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. E.g., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (e.g. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# RTorrent / ruTorrent
[RTorrent](http://rakshasa.github.io/rtorrent) is a popular CLI-based bittorrent client, and [ruTorrent](https://github.com/Novik/ruTorrent) is a powerful web interface for rtorrent.
![Rtorrent Screenshot](../../images/rtorrent.png)
## Choose incoming port
When using a torrent client from behind NAT (_which swarm, by nature, is_), you typically need to set a static port for inbound torrent communications. In the example below, I've set the port to 36258. You'll need to configure /var/data/autopirate/rtorrent/rtorrent/rtorrent.rc with the equivalent port.
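A minimal rtorrent.rc port configuration might look like this (_older-style rtorrent syntax - newer releases also accept the ```network.port_range.set``` form_):
```
# Listen on a single static port for incoming peer connections,
# matching the port published in the swarm definition below:
port_range = 36258-36258
port_random = no
```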
## Inclusion into AutoPirate
To include ruTorrent in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
rtorrent:
image: linuxserver/rutorrent
env_file : /var/data/config/autopirate/rtorrent.env
ports:
- 36258:36258
volumes:
- /var/data/media/:/media
- /var/data/autopirate/rtorrent:/config
networks:
- internal
rtorrent_proxy:
image: skippy/oauth2_proxy
env_file : /var/data/config/autopirate/rtorrent.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:rtorrent.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://rtorrent:80
-redirect-url=https://rtorrent.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* RTorrent (this page)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. E.g., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (e.g. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# SABnzbd
## Introduction
SABnzbd is the workhorse of the stack. It takes .nzb files as input (_manually or from other [autopirate](/recipes/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files.
![SABNZBD Screenshot](../../images/sabnzbd.png)
!!! tip "Sponsored Project"
SABnzbd is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. It's not sexy, but it's consistent and reliable, and I enjoy the fruits of its labor near-daily.
## Inclusion into AutoPirate
To include SABnzbd in your [AutoPirate](/recipes/autopirate/) stack
(_The only reason you **wouldn't** use SABnzbd would be if you were using [NZBGet](/recipes/autopirate/nzbget.md) instead_), include the following in your autopirate.yml stack definition file:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
sabnzbd:
image: linuxserver/sabnzbd:latest
env_file : /var/data/config/autopirate/sabnzbd.env
volumes:
- /var/data/autopirate/sabnzbd:/config
- /var/data/media:/media
networks:
- internal
sabnzbd_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/sabnzbd.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:sabnzbd.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://sabnzbd:8080
-redirect-url=https://sabnzbd.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! warning "Important Note re hostname validation"
(**Updated 10 June 2018**) : In SABnzbd [2.3.3](https://sabnzbd.org/wiki/extra/hostname-check.html), hostname verification was added as a mandatory check. SABnzbd will refuse inbound connections which weren't addressed to its own (_initially, autodetected_) hostname. This presents a problem within Docker Swarm, where container hostnames are random and disposable.
You'll need to edit sabnzbd.ini (_only created after your first launch_), and **replace** the value of the ```host_whitelist``` configuration entry (_it's comma-separated_) with the name of your service within the swarm definition, as well as your FQDN as accessed via traefik.
For example, mine simply reads ```host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd```
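In other words, the relevant fragment of sabnzbd.ini would look something like this (_illustrative values_):
```
[misc]
host_whitelist = sabnzbd.example.com, sabnzbd
```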
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* SABnzbd (this page)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* [Sonarr](/recipes/autopirate/sonarr/)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. E.g., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (e.g. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

!!! warning
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
# Sonarr
[Sonarr](https://sonarr.tv/) is a tool for finding, downloading and managing your TV series.
![Sonarr Screenshot](../../images/sonarr.png)
!!! tip "Sponsored Project"
Sonarr is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I forget it's there until I (reliably) receive an email with new and exciting updates 😁
## Inclusion into AutoPirate
To include Sonarr in your [AutoPirate](/recipes/autopirate/) stack, include the following in your autopirate.yml stack definition file:
```
sonarr:
image: linuxserver/sonarr:latest
env_file : /var/data/config/autopirate/sonarr.env
volumes:
- /var/data/autopirate/sonarr:/config
- /var/data/media:/media
networks:
- internal
sonarr_proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/autopirate/sonarr.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:sonarr.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://sonarr:8989
-redirect-url=https://sonarr.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
```
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
## Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the **[end](/recipes/autopirate/end/)** section:
* [SABnzbd](/recipes/autopirate/sabnzbd.md)
* [NZBGet](/recipes/autopirate/nzbget.md)
* [RTorrent](/recipes/autopirate/rtorrent/)
* Sonarr (this page)
* [Radarr](/recipes/autopirate/radarr/)
* [Mylar](/recipes/autopirate/mylar/)
* [Lazy Librarian](/recipes/autopirate/lazylibrarian/)
* [Headphones](/recipes/autopirate/headphones/)
* [Lidarr](/recipes/autopirate/lidarr/)
* [NZBHydra](/recipes/autopirate/nzbhydra/)
* [NZBHydra2](/recipes/autopirate/nzbhydra2/)
* [Ombi](/recipes/autopirate/ombi/)
* [Jackett](/recipes/autopirate/jackett/)
* [Heimdall](/recipes/autopirate/heimdall/)
* [End](/recipes/autopirate/end/) (launch the stack)
## Chef's Notes 📓
1. In many cases, tools will integrate with each other. E.g., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (e.g. "radarr"), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.

# Bitwarden
Heard about the [latest password breach](https://www.databreaches.net) (*since lunch*)? [HaveIBeenPwned](http://haveibeenpwned.com) yet (*today*)? [Passwords are broken](https://www.theguardian.com/technology/2008/nov/13/internet-passwords), and as the number of sites for which you need to store credentials grows exponentially, so does the risk of using a common password.
"*Duh, use a password manager*", you say. Sure, but be aware that [even password managers have security flaws](https://www.securityevaluators.com/casestudies/password-manager-hacking/).
**OK, look smartass..** no software is perfect, and there will always be a risk of your credentials being exposed in ways you didn't intend. You can at least **minimize** the impact of such exposure by using a password manager to store unique credentials per-site. While [1Password](http://1password.com) is the king of commercial password managers, [BitWarden](https://bitwarden.com) is the king of open-source, self-hosted password managers.
Enter Bitwarden..
![BitWarden Screenshot](../images/bitwarden.png)
Bitwarden is a free and open source password management solution for individuals, teams, and business organizations. While Bitwarden does offer a paid / hosted version, the free version comes with the following (*better than any other free password manager!*):
* Access & install all Bitwarden apps
* Sync all of your devices, no limits!
* Store unlimited items in your vault
* Logins, secure notes, credit cards, & identities
* Two-step authentication (2FA)
* Secure password generator
* Self-host on your own server (optional)
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need to create a directory to bind-mount into our container, so create `/var/data/bitwarden`:
```
mkdir /var/data/bitwarden
```
### Setup environment
Create `/var/data/config/bitwarden/bitwarden.env`, and **leave it empty for now**.
!!! question
What, why an empty env file? Well, the container supports lots of customizations via environment variables, for things like toggling self-registration, 2FA, etc. These are too complex to go into for this recipe, but readers are recommended to review the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs), and customize their installation to suite.
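For example, once you've created your own account, you might populate it with something like the following (_illustrative values - see the wiki above for the full list_):
```
# Disable public self-registration once your own account exists
SIGNUPS_ALLOWED=false
# Uncomment to enable the /admin interface, protected by this token
# ADMIN_TOKEN=<a long random string>
```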
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
bitwarden:
image: bitwardenrs/server
env_file: /var/data/config/bitwarden/bitwarden.env
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/bitwarden:/data/:rw
deploy:
labels:
- traefik.enable=true
- traefik.web.frontend.rule=Host:bitwarden.example.com
- traefik.web.port=80
- traefik.hub.frontend.rule=Host:bitwarden.example.com;Path:/notifications/hub
- traefik.hub.port=3012
- traefik.docker.network=traefik_public
networks:
- traefik_public
networks:
traefik_public:
external: true
```
!!! note
Note the clever use of two Traefik frontends to expose the notifications hub on port 3012. Thanks @gkoerk!
## Serving
### Launch Bitwarden stack
Launch the Bitwarden stack by running ```docker stack deploy bitwarden -c <path-to-docker-compose.yml>```
Browse to your new instance at https://**YOUR-FQDN**, and create a new user account and master password (*Just click the **Create Account** button without filling in your email address or master password*)
### Get the apps / extensions
Once you've created your account, jump over to https://bitwarden.com/#download and download the apps for your mobile and browser, and start adding your logins!
## Chef's Notes 📓
1. You'll notice we're not using the *official* container images (*[all 6 of them required](https://help.bitwarden.com/article/install-on-premise/#install-bitwarden)!)*, but rather a [more lightweight version ideal for self-hosting](https://hub.docker.com/r/bitwardenrs/server). All of the elements are contained within a single container, and SQLite is used for the database backend.
2. As mentioned above, readers should refer to the [dani-garcia/bitwarden_rs wiki](https://github.com/dani-garcia/bitwarden_rs) for details on customizing the behaviour of Bitwarden.
3. The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Thanks Gerry!

# BookStack
BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.
A friendly middle ground between heavyweights like MediaWiki or Confluence and [Gollum](/recipes/gollum/), BookStack relies on a database backend (so searching and versioning is easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing.
![BookStack Screenshot](../images/bookstack.png)
I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oauth_proxy), ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik/) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/bookstack:
```
mkdir -p /var/data/bookstack/database-dump
mkdir -p /var/data/runtime/bookstack/db
```
### Prepare environment
Create bookstack.env, and populate with the following variables. Set the [oauth_proxy](/reference/oauth_proxy) variables provided by your OAuth provider (if applicable.)
```
# For oauth-proxy (optional)
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
# For MariaDB/MySQL database
MYSQL_RANDOM_ROOT_PASSWORD=true
MYSQL_DATABASE=bookstack
MYSQL_USER=bookstack
MYSQL_PASSWORD=secret
# Bookstack-specific variables
DB_HOST=bookstack_db:3306
DB_DATABASE=bookstack
DB_USERNAME=bookstack
DB_PASSWORD=secret
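# For the db-backup service below (example values: keep 7 daily dumps)
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d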
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
db:
image: mariadb:10
env_file: /var/data/config/bookstack/bookstack.env
networks:
- internal
volumes:
- /var/data/runtime/bookstack/db:/var/lib/mysql
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/bookstack/bookstack.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:bookstack.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/bookstack/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app
-redirect-url=https://bookstack.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
app:
image: solidnerd/bookstack
env_file: /var/data/config/bookstack/bookstack.env
networks:
- internal
db-backup:
image: mariadb:10
env_file: /var/data/config/bookstack/bookstack.env
volumes:
- /var/data/bookstack/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.33.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Bookstack stack
Launch the BookStack stack by running ```docker stack deploy bookstack -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_proxy, and then login with username 'admin@admin.com' and password 'password'.
## Chef's Notes 📓
1. If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You'd also need to add the traefik_public network to the bookstack container.

hero: Manage your ebook collection. Like a BOSS.
# Calibre-Web
The [AutoPirate](/recipes/autopirate/) recipe includes [Lazy Librarian](https://github.com/itsmegb/LazyLibrarian), a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually **reading** them.
[Calibre-Web](https://github.com/janeczku/calibre-web) could be described as "_[Plex](/recipes/plex/) (or [Emby](/recipes/emby/)) for eBooks_" - it's a web-based interface to manage your eBook library, screenshot below:
![Calibre-Web Screenshot](../images/calibre-web.png)
Of course, you probably already manage your eBooks using the excellent [Calibre](https://calibre-ebook.com/), but this is primarily a (_powerful_) desktop application. Calibre-Web is an alternative way to manage / view your existing Calibre database, meaning you can continue to use Calibre on your desktop if you wish.
As a long-time Kindle user, Calibre-Web brings (among [others](https://github.com/janeczku/calibre-web)) the following features which appeal to me:
* Filter and search by titles, authors, tags, **series** and language
* Create custom book collection (shelves)
* Support for editing eBook metadata and deleting eBooks from Calibre library
* Support for converting eBooks from EPUB to Kindle format (mobi/azw)
* Send eBooks to Kindle devices with the click of a button
* Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz)
* Upload new books in PDF, epub, fb2 format
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need a directory to store some config data for the Calibre-Web container, so create /var/data/calibre-web, and ensure the directory is owned by the same user which owns your Calibre data (below)
```
mkdir /var/data/calibre-web
chown calibre:calibre /var/data/calibre-web # for example
```
Ensure that your Calibre library is accessible to the swarm (_i.e., exists on shared storage_), and that the same user who owns the config directory above, also owns the actual calibre library data (_including the ebooks managed by Calibre_).
### Prepare environment
We'll use an [oauth-proxy](/reference/oauth_proxy/) to protect the UI from public access, so create calibre-web.env, and populate with the following variables:
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=<make this a random string>
PUID=
PGID=
```
Follow the [instructions](https://github.com/bitly/oauth2_proxy) to setup your oauth provider. You need to setup a unique key/secret for each instance of the proxy you want to run, since in each case the callback URL will differ.
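A quick way to generate a suitable random cookie secret is something like this (_one option among many_):
```
# 16 random bytes, hex-encoded, for OAUTH2_PROXY_COOKIE_SECRET:
openssl rand -hex 16
```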
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
app:
image: technosoft2000/calibre-web
env_file : /var/data/config/calibre-web/calibre-web.env
volumes:
- /var/data/calibre-web:/config
- /srv/data/Archive/Ebooks/calibre:/books
networks:
- internal
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/calibre-web/calibre-web.env
dns_search: hq.example.com
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:calibre-web.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/calibre-web/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app:8083
-redirect-url=https://calibre-web.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.18.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Calibre-Web
Launch the Calibre-Web stack by running ```docker stack deploy calibre-web -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**. You'll be directed to the initial GUI configuration. Set the first field (_Location of Calibre database_) to "_/books/_", and when complete, log in using the default username "**admin**" with password "**admin123**".
## Chef's Notes 📓
1. Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
2. A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.

# Collabora Online
!!! important
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
Collabora Online Development Edition (or "[CODE](https://www.collaboraoffice.com/code/#what_is_code)") is the lightweight, or "home" edition of the commercially-supported [Collabora Online](https://www.collaboraoffice.com/collabora-online/) platform.
It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a web-browser. CODE is not a standalone app, it's a backend intended to be accessed via "WOPI" from an existing interface (_in our case, [NextCloud](/recipes/nextcloud/)_)
![CODE Screenshot](../images/collabora-online.png)
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for Collabora, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
4. [NextCloud](/recipes/nextcloud/) installed and operational
5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm
## Preparation
### Explanation for complexity
Due to the clever magic that Collabora does to present a "headless" LibreOffice UI to the browser, the CODE docker container requires system capabilities which cannot be granted under Docker Swarm (_specifically, MKNOD_).
So we have to run Collabora itself in the next best thing to Docker swarm - a docker-compose stack. Using docker-compose will at least provide us with consistent and version-able configuration files.
This presents another problem though - Docker Swarm with Traefik is superb at making all our stacks "just work" with ingress routing and LetsEncrypt certificates. We don't want to have to do this manually (_like a cave-man_), so we engage in some trickery to allow us to still use our swarmed Traefik to terminate SSL.
We run a single swarmed Nginx instance, which forwards all requests to an upstream, with the target IP of the docker0 interface, on port 9980 (_the port exposed by the CODE container_)
We attach the necessary labels to the Nginx container to instruct Traefik to setup a front/backend for collabora.<ourdomain\>. Now incoming requests to **https://collabora.<ourdomain\>** will hit Traefik, be forwarded to nginx (_wherever in the swarm it's running_), and then to port 9980 on the same node that nginx is running on.
What if we're running multiple nodes in our swarm, and nginx ends up on a different node to the one running Collabora via docker-compose? Well, either constrain nginx to the same node as Collabora (_example below_), or just launch an instance of Collabora on _every_ node then. It's just a rendering / GUI engine after all, it doesn't hold any persistent data.
Here's a (_highly technical_) diagram to illustrate:
![CODE traffic flow](../images/collabora-traffic-flow.png)
### Setup data locations
We'll need a directory for holding config to bind-mount into our containers, so create ```/var/data/collabora```, and ```/var/data/config/collabora``` for holding the docker/swarm config
```
mkdir /var/data/collabora/
mkdir /var/data/config/collabora/
```
### Prepare environment
Create /var/data/config/collabora/collabora.env, and populate with the following variables, customized for your installation.
!!! warning
Note the following:
1. Variables are in lower-case, unlike our standard convention. This is to align with the CODE container
2. Set domain to your [NextCloud](/recipes/nextcloud/) domain, and escape all the periods as per the example
3. Set your server_name to collabora.<yourdomain\>. Escaping periods is unnecessary
4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen 🔥
```
username=admin
password=ilovemypassword
domain=nextcloud\.batcave\.com
server_name=collabora.batcave.com
termination=true
```
### Create docker-compose.yml
Create ```/var/data/config/collabora/docker-compose.yml``` as follows:
```
version: "3.0"
services:
local-collabora:
image: funkypenguin/collabora
# the funkypenguin version has a patch to include "termination" behind SSL-terminating reverse proxy (traefik), see CODE PR #50.
# Once merged, the official container can be used again.
#image: collabora/code
env_file: /var/data/config/collabora/collabora.env
volumes:
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
cap_add:
- MKNOD
ports:
- 9980:9980
```
### Create nginx.conf
Create ```/var/data/config/collabora/nginx.conf``` as follows, changing the ```server_name``` value to match the environment variable you established above.
```
upstream collabora-upstream {
# Run collabora under docker-compose, since it needs MKNOD cap, which can't be provided by Docker Swarm.
# The IP here is the typical IP of docker0 - change if yours is different.
server 172.17.0.1:9980;
}
server {
listen 80;
server_name collabora.batcave.com;
# static files
location ^~ /loleaflet {
proxy_pass http://collabora-upstream;
proxy_set_header Host $http_host;
}
# WOPI discovery URL
location ^~ /hosting/discovery {
proxy_pass http://collabora-upstream;
proxy_set_header Host $http_host;
}
# Main websocket
location ~ /lool/(.*)/ws$ {
proxy_pass http://collabora-upstream;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
# Admin Console websocket
location ^~ /lool/adminws {
proxy_buffering off;
proxy_pass http://collabora-upstream;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 36000s;
}
# download, presentation and image upload
location ~ /lool {
proxy_pass http://collabora-upstream;
proxy_set_header Host $http_host;
}
}
```
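Before deploying, it's worth confirming that the docker0 address referenced in the upstream matches your host - 172.17.0.1 is the Docker default, but yours may differ:
```
# Show the IPv4 address assigned to the docker0 bridge:
ip -4 addr show docker0
```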
### Create loolwsd.xml
[Until we understand](https://github.com/CollaboraOnline/Docker-CODE/pull/50) how to [pass trusted network parameters to the entrypoint script using environment variables](https://github.com/CollaboraOnline/Docker-CODE/issues/49), we have to maintain a manually edited version of ```loolwsd.xml```, and bind-mount it into our collabora container.
The way we do this is to mount ```/var/data/collabora/loolwsd.xml``` (_initially empty_) as ```/etc/loolwsd/loolwsd.xml-new```, allow the container to create its default ```/etc/loolwsd/loolwsd.xml```, copy that default **over** our bind-mounted ```loolwsd.xml-new```, and then update the container to mount **our** ```/var/data/collabora/loolwsd.xml``` as ```/etc/loolwsd/loolwsd.xml``` instead (_confused yet?_)
Create an empty ```/var/data/collabora/loolwsd.xml``` by running ```touch /var/data/collabora/loolwsd.xml```. We'll populate this in the next section...
### Setup Docker Swarm
Create ```/var/data/config/collabora/collabora.yml``` as follows, changing the traefik frontend_rule as necessary:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3.0"
services:
nginx:
image: nginx:latest
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:collabora.batcave.com
- traefik.docker.network=traefik_public
- traefik.port=80
- traefik.frontend.passHostHeader=true
# uncomment this line if you want to force nginx to always run on one node (i.e., the one running collabora)
#placement:
# constraints:
# - node.hostname == ds1
volumes:
- /var/data/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro
networks:
traefik_public:
external: true
```
## Serving
### Generate loolwsd.xml
Well. This is awkward. There's no documented way to make Collabora work with Docker Swarm, so we're doing a bit of a hack here, until I understand [how to pass these arguments](https://github.com/CollaboraOnline/Docker-CODE/issues/49) via environment variables.
Launching Collabora is (_for now_) a 2-step process. First.. we launch collabora itself, by running:
```
cd /var/data/config/collabora/
docker-compose up -d
```
Output looks something like this:
```
root@ds1:/var/data/config/collabora# docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Pulling local-collabora (funkypenguin/collabora:latest)...
latest: Pulling from funkypenguin/collabora
7b8b6451c85f: Pull complete
ab4d1096d9ba: Pull complete
e6797d1788ac: Pull complete
e25c5c290bde: Pull complete
4b8e1b074e06: Pull complete
f51a3d1fb75e: Pull complete
8b826e2ae5ad: Pull complete
Digest: sha256:6cd38cb5cbd170da0e3f0af85cecf07a6bc366e44555c236f81d5b433421a39d
Status: Downloaded newer image for funkypenguin/collabora:latest
Creating collabora_local-collabora_1 ...
Creating collabora_local-collabora_1 ... done
root@ds1:/var/data/config/collabora#
```
Now exec into the container (_from another shell session_), by running ```docker exec -it <container name> /bin/bash```. Make a copy of /etc/loolwsd/loolwsd.xml, by running ```cp /etc/loolwsd/loolwsd.xml /etc/loolwsd/loolwsd.xml-new```, and then exit the container with ```exit```.
Stop and remove the collabora container by running ```docker-compose down```, and then alter this line in docker-compose.yml:
```
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
```
To this:
```
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
```
Edit /var/data/collabora/loolwsd.xml, find the **storage.filesystem.wopi** section, and add lines like this to the existing allow rules (_to allow IPv6-enabled hosts to still connect with their IPv4 addresses_):
```
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
```
Find the **net.post_allow** section, and add a line like this:
```
<host desc="RFC1918 private addressing in inet6 format">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
```
Find these 2 lines:
```
<ssl desc="SSL settings">
<enable type="bool" default="true">true</enable>
```
And change to:
```
<ssl desc="SSL settings">
<enable type="bool" default="true">false</enable>
```
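If you prefer to script this edit, something like the following should work (_a sketch using GNU sed, constrained to the SSL block - verify the result before relaunching_):
```
# Flip the SSL <enable> value from true to false, only between
# <ssl desc="SSL settings"> and </ssl>:
sed -i '/<ssl desc="SSL settings">/,/<\/ssl>/ s|>true</enable>|>false</enable>|' \
  /var/data/collabora/loolwsd.xml
```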
Now re-launch collabora (_with the corrected loolwsd.xml_) under docker-compose, by running:
```
docker-compose up -d
```
Once collabora is up, we launch the swarm stack, by running:
```
docker stack deploy collabora -c /var/data/config/collabora/collabora.yml
```
Visit **https://collabora.<yourdomain\>/l/loleaflet/dist/admin/admin.html** and confirm you can login with the user/password you specified in collabora.env
### Integrate into NextCloud
In NextCloud, install the **Collabora Online** app (https://apps.nextcloud.com/apps/richdocuments), and then under **Settings -> Collabora Online**, set your Collabora Online Server to ```https://collabora.<your domain>```
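Alternatively, the same setting can be applied from the NextCloud CLI (_a sketch - the NextCloud container name is illustrative_):
```
# Set the Collabora (WOPI) endpoint via occ, running as the web-server user:
docker exec -u www-data <your-nextcloud-container> \
  php occ config:app:set richdocuments wopi_url --value="https://collabora.<your domain>"
```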
![CODE Screenshot](../images/collabora-online-in-nextcloud.png)
Now browse your NextCloud files. Click the plus (+) sign to create a new document, and create either a new document, spreadsheet, or presentation. Name your document and then click on it. If Collabora is setup correctly, you'll shortly enter into the rich editing interface provided by Collabora :)
!!! important
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes 📓
1. Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.

# CryptoNote Mining Pool
[Cryptocurrency miners](/recipes/cryptominer) will "pool" their GPU resources ("_hashpower_") into aggregate "_mining pools_", so that by the combined effort of all the miners, the pool will receive a reward for the blocks "mined" into the blockchain, and this reward will be distributed among the miners.
[CryptoNote](https://cryptonote.org/) is an open-source toolset designed to facilitate the creation of new privacy-focused [cryptocurrencies](https://cryptonote.org/coins)
(_CryptoNote = 'Kryptonite'. In a pool. Get it?_)
![CryptoNote Mining Pool Screenshot](/images/cryptonote-mining-pool.png)
The fact that all these currencies share a common ancestry means that a common mining pool platform can be used for miners. The following recipes all use variations of [Dvandal's cryptonote-nodejs-pool](https://github.com/dvandal/cryptonote-nodejs-pool).
## Mining Pool Recipes
* [TurtleCoin](/recipes/turtle-pool/), the no-BS, fun baby cryptocurrency
* [Athena](/recipes/cryptonote-mining-pool/athena/), TurtleCoin's newborn baby sister

hero: Duplicity - A boring recipe to backup your exciting stuff. Boring is good.
# Duplicity
![Duplicity Screenshot](../images/duplicity.png)
[Duplicity](http://duplicity.nongnu.org/) backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space-efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.
So what does this mean for our stack? It means we can leverage Duplicity to backup all our data-at-rest to a wide variety of cloud providers, including, but not limited to:
- acd_cli
- Amazon S3
- Backblaze B2
- DropBox
- ftp
- Google Docs
- Google Drive
- Microsoft Azure
- Microsoft Onedrive
- Rackspace Cloudfiles
- rsync
- ssh/scp
- SwiftStack
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. Credentials for one of the Duplicity's supported upload destinations
## Preparation
### Setup data locations
We'll need a folder to store a docker-compose .yml file, and an associated .env file. If you're following my filesystem layout, create /var/data/config/duplicity (for the config), and /var/data/duplicity (for the metadata) as follows:
```
mkdir /var/data/config/duplicity
mkdir /var/data/duplicity
cd /var/data/config/duplicity
```
### (Optional) Create Google Cloud Storage bucket
I didn't already have an archival/backup provider, so I chose Google Cloud "Coldline" storage for the low price-point - 0.7 cents per GB/month (_Plus you [start with $300 credit](https://cloud.google.com/free/) even when signing up for the free tier_). You can use any destination supported by [Duplicity's URL scheme though](http://duplicity.nongnu.org/duplicity.1.html#sect7), just make sure you specify the necessary [environment variables](http://duplicity.nongnu.org/duplicity.1.html#sect6).
1. [Sign up](https://cloud.google.com/storage/docs/getting-started-console), create an empty project, enable billing, and create a bucket. Give your bucket a unique name, example "**jack-and-jills-bucket**" (_it's unique across the entire Google Cloud_)
2. Under "Storage" section > "[Settings](https://console.cloud.google.com/project/_/storage/settings)" > "Interoperability" tab > click "Enable interoperable access" and then "Create a new key" button and note both Access Key and Secret.
### Prepare environment
1. Generate a random passphrase to use to encrypt your data (_one way to do this is shown below the example env file_). **Save this somewhere safe**, without it you won't be able to restore!
2. Seriously, **save**. **it**. **somewhere**. **safe**.
3. Create duplicity.env, and populate with the following variables
```
SRC=/var/data/
DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories
TMPDIR=/tmp
GS_ACCESS_KEY_ID=<YOUR GS ACCESS KEY>
GS_SECRET_ACCESS_KEY=<YOUR GS SECRET ACCESS KEY>
OPTIONS=--allow-source-mismatch --exclude /var/data/runtime --exclude /var/data/registry --exclude /var/data/duplicity --archive-dir=/archive
PASSPHRASE=<YOUR CHOSEN PASSPHRASE>
```
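To generate the random passphrase from step 1, something like this works (_one option among many_):
```
# 32 bytes of randomness, base64-encoded. Record the output somewhere safe!
openssl rand -base64 32
```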
!!! note
See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above.
### Run a test backup
Before we launch the automated daily backups, let's run a test backup, as follows:
```
docker run --env-file duplicity.env -it --rm -v \
/var/data:/var/data:ro -v /var/data/duplicity/tmp:/tmp -v \
/var/data/duplicity/archive:/archive tecnativa/duplicity \
/etc/periodic/daily/jobrunner
```
You should see some activity, with a summary of bytes transferred at the end.
### Run a test restore
Repeat after me: "If you don't verify your backup, **it's not a backup**".
!!! warning
Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data.
Run a variation of the following to confirm a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/ha-docker-swarm/traefik/), since this is likely to exist for every reader_).
```
docker run --env-file duplicity.env -it --rm \
-v /var/data:/var/data:ro \
-v /var/data/duplicity/tmp:/tmp \
-v /var/data/duplicity/archive:/archive tecnativa/duplicity \
duplicity list-current-files \
\$DST | grep traefik.yml
```
Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_)
```
docker run --env-file duplicity.env -it --rm \
-v /var/data:/var/data:ro \
-v /var/data/duplicity/tmp:/tmp \
-v /var/data/duplicity/archive:/archive \
tecnativa/duplicity duplicity restore \
--file-to-restore config/traefik/traefik.yml \
\$DST /tmp/traefik-restored.yml
```
Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data.
### Setup Docker Swarm
Now that we have confidence in our backup/restore process, let's automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
backup:
image: tecnativa/duplicity
env_file: /var/data/config/duplicity/duplicity.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data:/var/data:ro
- /var/data/duplicity/tmp:/tmp
- /var/data/duplicity/archive:/archive
networks:
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.10.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Duplicity stack
Launch the Duplicity stack by running ```docker stack deploy duplicity -c <path-to-docker-compose.yml>```
Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination.
## Chef's Notes 📓
1. Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
2. The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add ```SMTP_HOST```, ```SMTP_PORT```, ```EMAIL_FROM``` and ```EMAIL_TO``` variables to duplicity.env
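If your SMTP server doesn't require auth and you do want notifications, the additions to duplicity.env might look something like this (_illustrative values_):
```
# Optional SMTP notifications (no authentication supported)
SMTP_HOST=smtp.example.com
SMTP_PORT=25
EMAIL_FROM=duplicity@example.com
EMAIL_TO=admin@example.com
```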

hero: Real heroes backup their 💾
# Elkar Backup
Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
!!! important
Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you.
![ElkarBackup Screenshot](../images/elkarbackup.png)
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/elkarbackup:
```
mkdir -p /var/data/elkarbackup/{backups,uploads,sshkeys,database-dump}
mkdir -p /var/data/runtime/elkarbackup/db
mkdir -p /var/data/config/elkarbackup
```
### Prepare environment
Create /var/data/config/elkarbackup/elkarbackup.env, and populate with the following variables
```
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'
#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
#SYMFONY__MAILER__FROM=
# For mysql
MYSQL_ROOT_PASSWORD=password
#oauth2_proxy
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
```
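The oauth2_proxy cookie secret can be any random string; one way to generate one (_assuming openssl is available on your host_) is:
```
# Generate a random 32-byte, base64-encoded cookie secret
openssl rand -base64 32
```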
Create ```/var/data/config/elkarbackup/elkarbackup-db-backup.env```, and populate with the following, to setup the nightly database dump.
!!! note
Running a daily database dump might be considered overkill, since ElkarBackup can be configured to backup its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.
Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?
No, me neither :shrug:
```
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
db:
image: mariadb:10.4
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/runtime/elkarbackup/db:/var/lib/mysql
db-backup:
image: mariadb:10.4
env_file: /var/data/config/elkarbackup/elkarbackup-db-backup.env
volumes:
- /var/data/elkarbackup/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
app:
image: elkarbackup/elkarbackup
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/:/var/data
- /var/data/elkarbackup/backups:/app/backups
- /var/data/elkarbackup/uploads:/app/uploads
- /var/data/elkarbackup/sshkeys:/app/.ssh
proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:elkarbackup.example.com
- traefik.port=4180
volumes:
- /var/data/config/traefik/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app:80
-redirect-url=https://elkarbackup.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.36.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch ElkarBackup stack
Launch the ElkarBackup stack by running ```docker stack deploy elkarbackup -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":
![ElkarBackup Login Screen](/images/elkarbackup-setup-1.png)
First thing you do, change your password, using the gear icon, and "Change Password" link:
![ElkarBackup Login Screen](/images/elkarbackup-setup-2.png)
Have a read of the [Elkarbackup Docs](https://docs.elkarbackup.org/docs/introduction.html) - they introduce the concept of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), **policies** (_when is data backed up and how long is it kept_).
At the very least, you want to setup a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to backup /var/data, **excluding** ```/var/data/runtime``` and ```/var/data/elkarbackup/backups``` (_unless you **like** "backup-ception"_)
### Copying your backup data offsite
From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker swarm deployment.
Here's a variation to the standard script, which I've employed:
```
#!/bin/bash
REPOSITORY=/var/data/elkarbackup/backups
SERVER=<target host member of docker swarm>
SERVER_USER=elkarbackup
UPLOADS=/var/data/elkarbackup/uploads
TARGET=/srv/backup/elkarbackup
echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
do
echo Backing up job $jobId
mkdir -p $TARGET/$jobId 2>/dev/null
rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done
echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads
USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`
echo "Backup finished succesfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
```
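You'd typically run this script on a schedule on the remote host; a hypothetical crontab entry (_the script path and schedule are assumptions - adjust to taste_) might be:
```
# Nightly at 03:00, logging output for later review
0 3 * * * /usr/local/bin/elkarbackup-offsite.sh >> /var/log/elkarbackup-offsite.log 2>&1
```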
!!! note
You'll note that I don't use the script to create a mysql dump (_since Elkar is running within a container anyway_), rather I just rely on the database dump which is made nightly into ```/var/data/elkarbackup/database-dump/```
### Restoring data
Repeat after me : "**It's not a backup unless you've tested a restore**"
!!! note
I had some difficulty making restoring work well in the webUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the webUI or from the filesystem directly.
To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:
![ElkarBackup Login Screen](/images/elkarbackup-setup-3.png)
This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.
!!! important
Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes 📓
1. If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service. You'd also need to add the traefik_public network to the app service.
2. The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

# Emby
[Emby](https://emby.media/) (_think "M.B." or "Media Browser"_) is best described as "_like [Plex](/recipes/plex/) but different_" 😁 - It's a bit geekier and less polished than Plex, but it allows for more flexibility and customization.
![Emby Screenshot](../images/emby.png)
I've started experimenting with Emby as an alternative to Plex, because of the advanced [parental controls](https://github.com/MediaBrowser/Wiki/wiki/Parental-Controls) it offers. Based on my experimentation thus far, I have a "**kid-safe**" profile which automatically logs in, and only displays kid-safe content, based on ratings.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need a location to store Emby's library data, config files, logs and temporary transcoding space, so create /var/data/emby, and make sure it's owned by the user and group who also own your media data.
```
mkdir -p /var/data/emby/emby
```
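For example, if your media is owned by user and group ```media``` (_an assumption - substitute whatever actually owns your media files_), you might run:
```
# Make the Emby data dir match the ownership of your media
chown -R media:media /var/data/emby
```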
### Prepare environment
Create /var/data/config/emby/emby.env (_the path referenced in the compose file below_), and populate with the PUID/GUID of the user who owns the /var/data/emby directory (_above_) and your actual media content (_in this example, the media content is at **/srv/data**_)
```
PUID=
GUID=
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3.0"
services:
emby:
image: emby/emby-server
env_file: /var/data/config/emby/emby.env
volumes:
- /var/data/emby/emby:/config
- /srv/data/:/data
deploy:
labels:
- traefik.frontend.rule=Host:emby.example.com
- traefik.docker.network=traefik_public
- traefik.port=8096
networks:
- traefik_public
- internal
ports:
- 8096:8096
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.17.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Emby stack
Launch the stack by running ```docker stack deploy emby -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-based setup to complete deploying your Emby.
## Chef's Notes 📓
1. I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
2. Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
3. We don't bother exposing the HTTPS port for Emby, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.

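# FlightAirMap
The following standalone swarm config (_in the same docker-compose v3 syntax as the other recipes_) deploys FlightAirMap behind Traefik, with a MariaDB database and the usual nightly database-dump sidecar: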
```
version: '3'

services:
  flightairmap:
    image: richarvey/nginx-php-fpm
    volumes:
      - "/var/data/flightairmap/conf:/var/www/html/conf"
      - "/var/data/flightairmap/scripts:/var/www/html/scripts"
      - "/var/data/flightairmap/html:/var/www/flightairmap/"
    env_file:
      - "/var/data/config/flightairmap/flightairmap.env"
    environment:
      - PHP_MEM_LIMIT=256
      - RUN_SCRIPTS=1
      - MYSQL_HOST=${MYSQL_HOST}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:www.observe.global
        - traefik.docker.network=traefik_public
        - traefik.port=80

  db:
    image: mariadb:10
    env_file: /var/data/config/flightairmap/flightairmap.env
    networks:
      - internal
    volumes:
      - /var/data/runtime/flightairmap/db:/var/lib/mysql

  db-backup:
    image: mariadb:10
    env_file: /var/data/config/flightairmap/flightairmap.env
    volumes:
      - /var/data/flightairmap/database-dump:/dump
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.44.0/24
```

hero: Ghost - A recipe for beautiful online publication.
# Ghost
[Ghost](https://ghost.org) is "a fully open source, hackable platform for building and running a modern online publication."
![](/images/ghost.png)
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
Create the location for the bind-mount of the application data, so that it's persistent:
```
mkdir -p /var/data/ghost
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'

services:
  ghost:
    image: ghost:1-alpine
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/ghost/:/var/lib/ghost/content
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:ghost.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=2368

networks:
  traefik_public:
    external: true
```
## Serving
### Launch Ghost stack
Launch the Ghost stack by running ```docker stack deploy ghost -c <path-to-docker-compose.yml>```
Create your first administrative account at https://**YOUR-FQDN**/admin/
## Chef's Notes 📓
1. If I wasn't committed to a [static-site-generated blog](https://www.funkypenguin.co.nz/blog/), Ghost is the platform I'd use for my blog.
2. A default install using the SQLite database takes 548K of space:
```
[root@ds1 ghost]# du -sh /var/data/ghost/
548K /var/data/ghost/
[root@ds1 ghost]#
```

View File

@@ -94,7 +94,7 @@ Launch the mail server stack by running ```docker stack deploy gitlab-runner -c
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
## Chef's Notes 📓
## Chef's Notes
1. You'll note that I setup 2 runners. One is locked to a single project (*this cookbook build*), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (*and GitLab starts **sooo** slowly!*), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

View File

@@ -0,0 +1,100 @@
# Gitlab Runner
Some features of GitLab require a "[runner](https://docs.gitlab.com/runner/)" (_in the sense of a "gopher" or a "minion"_). A runner "registers" itself with a GitLab instance, and is given tasks to run. Tasks include running Continuous Integration (CI) builds, and building container images.
While a runner isn't strictly required to use GitLab, if you want to do CI, you'll need at least one. There are many ways to deploy a runner - this recipe focuses on the docker container model.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
4. [X] [GitLab](/ha-docker-swarm/gitlab) installation (see previous recipe)
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our runner containers, so create them in `/var/data/gitlab`:
```
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {runners/1,runners/2}
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'

services:
  thing1:
    image: gitlab/gitlab-runner
    volumes:
      - /var/data/gitlab/runners/1:/etc/gitlab-runner
    networks:
      - internal

  thing2:
    image: gitlab/gitlab-runner
    volumes:
      - /var/data/gitlab/runners/2:/etc/gitlab-runner
    networks:
      - internal

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.23.0/24
```
### Configure runners
From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute ```gitlab-runner register``` to interactively generate config.toml.
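For example, to register interactively, you might identify the runner container on the current node and exec into it (_container names here assume the "gitlab-runner" stack name used below_):
```
# Find the container ID of the first runner on this node...
docker ps -q --filter name=gitlab-runner_thing1
# ...then run the interactive registration inside it
docker exec -it <container-id> gitlab-runner register
```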
Sample runner config.toml:
```
concurrent = 1
check_interval = 0

[[runners]]
  name = "myrunner1"
  url = "https://gitlab.example.com"
  token = "<long string here>"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
```
## Serving
### Launch runners
Launch the runner stack by running ```docker stack deploy gitlab-runner -c <path-to-docker-compose.yml>```
Log back into your GitLab instance at https://**YOUR-FQDN** (_user "root" and the password you specified in gitlab.env_), and confirm your new runners have registered.
## Chef's Notes 📓
1. You'll note that I setup 2 runners. One is locked to a single project (*this cookbook build*), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
2. Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (*and GitLab starts **sooo** slowly!*), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

hero: Gitlab - A recipe for a self-hosted GitHub alternative
# GitLab
GitLab is a self-hosted [alternative to GitHub](https://about.gitlab.com/comparison/). The most common use case is (a set of) developers with the desire for the rich feature-set of GitHub, but with unlimited private repositories.
GitLab does maintain an [official "Omnibus" container](https://docs.gitlab.com/omnibus/docker/README.html), but for this recipe I prefer the "[dockerized gitlab](https://github.com/sameersbn/docker-gitlab)" project, since it allows distribution of the various GitLab components across multiple swarm nodes.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/gitlab:
```
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {postgresql,redis,gitlab}
```
### Prepare environment
You'll need to know the following:
1. Choose a password for postgresql - you'll need it for DB_PASS in gitlab.env (below)
2. Generate 3 passwords using ```pwgen -Bsv1 64```. You'll use these for the XXX_KEY_BASE environment variables below
3. Create gitlab.env, and populate with **at least** the following variables (_the full set is available at https://github.com/sameersbn/docker-gitlab#available-configuration-parameters_):
```
DB_USER=gitlab
DB_PASS=gitlabdbpass
DB_NAME=gitlabhq_production
DB_EXTENSION=pg_trgm
DB_ADAPTER=postgresql
DB_HOST=postgresql
TZ=Pacific/Auckland
REDIS_HOST=redis
REDIS_PORT=6379
GITLAB_TIMEZONE=Auckland
GITLAB_HTTPS=true
SSL_SELF_SIGNED=false
GITLAB_HOST=gitlab.example.com
GITLAB_PORT=443
GITLAB_SSH_PORT=2222
GITLAB_SECRETS_DB_KEY_BASE=CFf7sS3kV2nGXBtMHDsTcjkRX8PWLlKTPJMc3lRc6GCzJDdVljZ85NkkzJ8mZbM5
GITLAB_SECRETS_SECRET_KEY_BASE=h2LBVffktDgb6BxM3B97mDSjhnSNwLc5VL2Hqzq9cdrvBtVw48WSp5wKj5HZrJM5
GITLAB_SECRETS_OTP_KEY_BASE=t9LPjnLzbkJ7Nt6LZJj6hptdpgG58MPJPwnMMMDdx27KSwLWHDrz9bMWXQMjq5mp
GITLAB_ROOT_PASSWORD=changeme
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'

services:
  redis:
    image: sameersbn/redis:latest
    command:
      - --loglevel warning
    volumes:
      - /var/data/gitlab/redis:/var/lib/redis:Z
    networks:
      - internal

  postgresql:
    image: sameersbn/postgresql:9.6-2
    env_file: /var/data/config/gitlab/gitlab.env
    volumes:
      - /var/data/gitlab/postgresql:/var/lib/postgresql:Z
    networks:
      - internal

  gitlab:
    image: sameersbn/gitlab:latest
    env_file: /var/data/config/gitlab/gitlab.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:gitlab.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
    ports:
      - "2222:22"
    volumes:
      - /var/data/gitlab/gitlab:/home/git/data:Z

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.2.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
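Since the stack maps host port 2222 to the container's SSH port (_matching GITLAB_SSH_PORT above_), git-over-SSH URLs need the port made explicit. A hypothetical clone would look like:
```
# Assumes the gitlab.example.com hostname and the 2222:22 port mapping from the config above
git clone ssh://git@gitlab.example.com:2222/mygroup/myproject.git
```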
## Serving
### Launch gitlab
Launch the GitLab stack by running ```docker stack deploy gitlab -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
## Chef's Notes 📓
A few comments on decisions taken in this design:
1. I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)

hero: Gollum - A recipe for your own git-based wiki
# Gollum
Gollum is a simple wiki system built on top of Git. A Gollum Wiki is simply a git repository (_either bare or regular_) of a specific nature:
* A Gollum repository's contents are human-editable, unless the repository is bare.
* Pages are unique text files which may be organized into directories any way you choose.
* Other content can also be included, for example images, PDFs and headers/footers for your pages.
Gollum pages:
* May be written in a variety of markups.
* Can be edited with your favourite system editor or IDE (_changes will be visible after committing_) or with the built-in web interface.
* Can be displayed in all versions (_commits_).
![Gollum Screenshot](../images/gollum.png)
As you'll note in the (_real world_) screenshot above, my requirements for a personal wiki are:
* Portable across my devices
* Supports images
* Full-text search
* Supports inter-note links
* Revision control
Gollum meets all these requirements, and as an added bonus, is extremely fast and lightweight.
!!! note
Since Gollum itself offers no user authentication, this design secures gollum behind an [oauth2 proxy](/reference/oauth_proxy/), so that in order to gain access to the Gollum UI at all, oauth2 authentication (_to GitHub, GitLab, Google, etc_) must have already occurred.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need an empty git repository in /var/data/gollum for our data:
```
mkdir /var/data/gollum
cd /var/data/gollum
git init
```
### Prepare environment
1. Choose an oauth provider, and obtain a client ID and secret
2. Create /var/data/config/gollum/gollum.env (_the path referenced in the compose file below_), and populate with the following variables (_you can make the cookie secret whatever you like_)
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'

services:
  app:
    image: dakue/gollum
    volumes:
      - /var/data/gollum:/gollum
    networks:
      - internal
    command: |
      --allow-uploads
      --emoji
      --user-icons gravatar

  proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/gollum/gollum.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:gollum.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/gollum/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://app:4567
      -redirect-url=https://gollum.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.9.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Gollum stack
Launch the Gollum stack by running ```docker stack deploy gollum -c <path-to-docker-compose.yml>```
Authenticate against your OAuth provider, and then start editing your wiki!
## Chef's Notes 📓
1. In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"

# Home Assistant
Home Assistant is a home automation platform written in Python, with extensive support for 3rd-party home-automation platforms including Xiaomi, Philips Hue, and a [bazillion](https://home-assistant.io/components/) others.
![Home Assistant Screenshot](../images/homeassistant.png)
This recipe combines the [extensibility](https://home-assistant.io/components/) of [Home Assistant](https://home-assistant.io/) with the flexibility of [InfluxDB](https://docs.influxdata.com/influxdb/v1.4/) (_for time series data storage_) and [Grafana](https://grafana.com/) (_for **beautiful** visualisation of that data_).
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/homeassistant:
```
mkdir /var/data/homeassistant
cd /var/data/homeassistant
mkdir -p {homeassistant,grafana,influxdb-backup}
```
Now create a directory for the influxdb realtime data:
```
mkdir /var/data/runtime/homeassistant/influxdb
```
### Prepare environment
Create /var/data/config/homeassistant/grafana.env, and populate with the following - this is to enable grafana to work with oauth2_proxy without requiring an additional level of authentication:
```
GF_AUTH_BASIC_ENABLED=false
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3"
services:
influxdb:
image: influxdb
networks:
- internal
volumes:
- /var/data/homeassistant/influxdb:/var/lib/influxdb
- /etc/localtime:/etc/localtime:ro
homeassistant:
image: homeassistant/home-assistant
dns_search: hq.example.com
volumes:
- /var/data/homeassistant/homeassistant:/config
- /etc/localtime:/etc/localtime:ro
deploy:
labels:
- traefik.frontend.rule=Host:homeassistant.example.com
- traefik.docker.network=traefik_public
- traefik.port=8123
networks:
- traefik_public
- internal
ports:
- 8123:8123
grafana-app:
image: grafana/grafana
env_file : /var/data/config/homeassistant/grafana.env
volumes:
- /var/data/homeassistant/grafana:/var/lib/grafana
- /etc/localtime:/etc/localtime:ro
networks:
- internal
grafana-proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/homeassistant/grafana.env
dns_search: hq.example.com
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:grafana.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/homeassistant/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://grafana-app:3000
-redirect-url=https://grafana.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.13.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
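To actually get sensor data flowing from Home Assistant into InfluxDB, you'll need to enable the influxdb component in Home Assistant's configuration.yaml. A minimal sketch (_the database name is an assumption - Home Assistant defaults to "home_assistant"_) might look like:
```
# Excerpt from /var/data/homeassistant/homeassistant/configuration.yaml
influxdb:
  host: influxdb        # resolves to the influxdb service on the "internal" overlay network
  database: home_assistant
```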
## Serving
### Launch Home Assistant stack
Launch the Home Assistant stack by running ```docker stack deploy homeassistant -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, using the password you created in configuration.yml as "frontend - api_key". Then setup a bunch of sensors, and log into https://grafana.**YOUR-FQDN** and create some beautiful graphs :)
## Chef's Notes 📓
1. I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy), but oauth2_proxy is incompatible with the websockets implementation used by Home Assistant. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!

# iBeacons with Home Assistant
!!! warning
This is not a complete recipe - it's an optional addition to the [HomeAssistant](/recipes/homeassistant/) recipe, since it only applies to a subset of users
One of the most useful features of Home Assistant is location awareness. I don't care if someone opens my office door when I'm home, but you bet I care about (_and want to be notified_) it if I'm away!
## Ingredients
1. [HomeAssistant](/recipes/home-assistant/) per recipe
2. iBeacon(s) - This recipe is for https://s.click.aliexpress.com/e/bzyLCnAp
3. [LightBlue Explorer](https://itunes.apple.com/nz/app/lightblue-explorer/id557428110?mt=8)
## Preparation
### Write UUID to iBeacon
The iBeacons come with no UUID. We use the LightBlue Explorer app to pair with them (_code is "123456"_), and assign our own UUID.
Generate your own UUID, or get a random one at https://www.uuidgenerator.net/
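If you'd rather stay in the terminal, most Linux/macOS systems can generate one locally:
```
# Generate a random (version 4) UUID
uuidgen
```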
Plug in your iBeacon, launch LightBlue Explorer, and find your iBeacon. The first time you attempt to interrogate it, you'll be prompted to pair. Although it's not recorded anywhere in the documentation (_grr!_), the pairing code is **123456**
Having paired, you'll be able to see the vital statistics of your iBeacon.
## Chef's Notes 📓

hero: Huginn - A recipe for a self-hosted, hackable version of IFTTT / Zapier
# Huginn
Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server.
<iframe src="https://player.vimeo.com/video/61976251" width="640" height="433" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
## Preparation
### Setup data locations
Create the locations for the bind-mounts of the database and its nightly dumps (_matching the paths used in the compose file below_), so that they're persistent:
```
mkdir -p /var/data/runtime/huginn/database
mkdir -p /var/data/huginn/database-dump
```
### Create email address
Strictly speaking, you don't **have** to integrate Huginn with email. However, since we created our own mailserver stack earlier, it's worth using it to enable emails within Huginn.
```
cd /var/data/docker-mailserver/
./setup.sh email add huginn@huginn.example.com my-password-here
# Setup MX and DKIM if they don't already exist:
./setup.sh config dkim
cat config/opendkim/keys/huginn.example.com/mail.txt
```
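The ```mail.txt``` output above contains an OpenDKIM-format TXT record, which needs to be published in your DNS zone; in BIND-style syntax it would look something like this (_the key material here is an obvious placeholder_):
```
mail._domainkey.huginn.example.com. IN TXT "v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkq...AB"
```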
### Prepare environment
Create /var/data/config/huginn/huginn.env (_the path referenced in the compose file below_), and populate with the following variables. Set the "INVITATION_CODE" variable if you want to require users to enter a code to sign up (_protecting the UI from abuse_). The full list of Huginn environment variables is available [here](https://github.com/huginn/huginn/blob/master/.env.example).
```
# For huginn/huginn - essential
SMTP_DOMAIN=your-domain-here.com
SMTP_USER_NAME=you@gmail.com
SMTP_PASSWORD=somepassword
SMTP_SERVER=your-mailserver-here.com
SMTP_PORT=587
SMTP_AUTHENTICATION=plain
SMTP_ENABLE_STARTTLS_AUTO=true
INVITATION_CODE=<set an invitation code here>
POSTGRES_PORT_5432_TCP_ADDR=db
POSTGRES_PORT_5432_TCP_PORT=5432
DATABASE_USERNAME=huginn
DATABASE_PASSWORD=<database password>
DATABASE_ADAPTER=postgresql
# Optional extras for huginn/huginn, customize or append based on .env.example linked above
TWITTER_OAUTH_KEY=
TWITTER_OAUTH_SECRET=
# For postgres/postgres
POSTGRES_USER=huginn
POSTGRES_PASSWORD=<database password>
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'

services:
  huginn:
    image: huginn/huginn
    env_file: /var/data/config/huginn/huginn.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal
      - traefik
    deploy:
      labels:
        - traefik.frontend.rule=Host:huginn.example.com
        - traefik.docker.network=traefik
        - traefik.port=3000

  db:
    image: postgres:latest
    env_file: /var/data/config/huginn/huginn.env
    volumes:
      - /var/data/runtime/huginn/database:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal

  db-backup:
    image: postgres:latest
    env_file: /var/data/config/huginn/huginn.env
    volumes:
      - /var/data/huginn/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.6.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Huginn stack
Launch the Huginn stack by running ```docker stack deploy huginn -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**. You'll need to use the "Sign Up" button, and (optionally) enter your invitation code in order to create your account.
## Chef's Notes 📓
1. I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for sevices like Twitter integration, I left it out.

# InstaPy
[InstaPy](https://github.com/timgrossmann/InstaPy) is an Instagram bot, developed by [Tim Grossman](https://github.com/timgrossmann). Tim describes his motivation and experiences developing the bot [here](https://medium.freecodecamp.org/my-open-source-instagram-bot-got-me-2-500-real-followers-for-5-in-server-costs-e40491358340).
What's an Instagram bot? Basically, you feed the bot your Instagram user/password, and it executes follows/unfollows/likes/comments on your behalf based on rules you set. (_I set my bot to like one photo tagged with "[#penguin](https://www.instagram.com/explore/tags/penguin/?hl=en)" per-run_)
![InstaPy Screenshot](../images/instapy.png)
Great power, right? A client (_yes, you can [hire](https://www.funkypenguin.co.nz/) me!_) asked me to integrate InstaPy into their swarm, and this recipe is the result.
## Ingredients
!!! summary "Ingredients"
Existing:
1. [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [X] [Traefik](/ha-docker-swarm/traefik_public) configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We need a data location to store InstaPy's config, as well as its log files. Create /var/data/instapy per below
```
mkdir -p /var/data/instapy/logs
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'

services:
  web:
    command: ["./wait-for", "selenium:4444", "--", "python", "docker_quickstart.py"]
    environment:
      - PYTHONUNBUFFERED=0
    # Modify the image to whatever Tim's image tag ends up as. I used funkypenguin/instapy for my build
    image: funkypenguin/instapy:latest
    # When using swarm, you can't use relative paths, so the following needs to be set to the
    # full filesystem path to your logs and docker_quickstart.py.
    # Bind-mount docker_quickstart.py, since now that we're using a public image, we can't
    # "bake" our credentials into the image anymore
    volumes:
      - /var/data/instapy/logs:/code/logs
      - /var/data/instapy/instapy.py:/code/docker_quickstart.py:ro
    # This section allows docker to restart the container when it exits (either normally or
    # abnormally), which ensures that InstaPy keeps re-running. Tweak the delay to avoid being
    # banned for excessive activity
    deploy:
      restart_policy:
        condition: any
        delay: 3600s

  selenium:
    image: selenium/standalone-chrome-debug
    ports:
      - "5900:5900"
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
### Command your bot
Create a variation of https://github.com/timgrossmann/InstaPy/blob/master/docker_quickstart.py at /var/data/instapy/instapy.py (the file we bind-mounted in the swarm config above)
Change at least the following:
```
insta_username = ''
insta_password = ''
```
Here's an example of my config, set to like a single penguin-pic per run:
```
insta_username = 'funkypenguin'
insta_password = 'followmemypersonalbrandisawesome'
dont_like = ['food','girl','batman','gotham','dead','nsfw','porn','slut','baby','tv','athlete','nhl','hockey','estate','music','band','clothes']
friend_list = ['therock','ruinporn']
# If you want to enter your Instagram Credentials directly just enter
# username=<your-username-here> and password=<your-password> into InstaPy
# e.g like so InstaPy(username="instagram", password="test1234")
bot = InstaPy(username=insta_username, password=insta_password, selenium_local_session=False)
bot.set_selenium_remote_session(selenium_url='http://selenium:4444/wd/hub')
bot.login()
bot.set_upper_follower_count(limit=10000)
bot.set_lower_follower_count(limit=10)
bot.set_comments([u'Cool :penguin:!', u'Awesome :penguin:!!', u'Nice :penguin:!!'])
bot.set_dont_include(friend_list)
bot.set_dont_like(dont_like)
#bot.set_ignore_if_contains(ignore_words)
# OK, so go through my feed and like stuff, interacting with people I follow
# bot.like_by_feed(amount=3, randomize=True, unfollow=True, interact=True)
# Now find posts tagged as #penguin, and like 'em, commenting 50% of the time
bot.set_do_comment(True, percentage=50)
bot.set_comments([u'Cool :penguin:!', u'Awesome :penguin:!!', u'Nice :penguin:!!'])
bot.like_by_tags(['#penguin'], amount=1)
# goodnight, sweet bot
bot.end()
```
## Serving
### Destroy all humans
Launch the bot by running ```docker stack deploy instapy -c <path-to-docker-compose.yml>```
While you're waiting for Docker to pull down the images, educate yourself on the risk of a robotic uprising:
<iframe width="560" height="315" src="https://www.youtube.com/embed/B1BdQcJ2ZYY" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
After swarm deploys, you won't see much, but you can monitor what InstaPy is doing, by running ```docker service logs instapy_web```.
You can **also** watch the bot at work by VNCing to your docker swarm, password "secret". You'll see Selenium browser window cycling away, interacting with all your real/fake friends on Instagram :)
## Chef's Notes 📓
1. Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!

!!! danger "This recipe is a work in progress"
This recipe is **incomplete**, and remains a work in progress.
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
!!! important
Development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/)
# IPFS
The intention of this recipe is to provide a local IPFS cluster, for the purpose of providing persistent storage for the various components of the other recipes.
![IPFS Screenshot](../images/ipfs.png)
IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the World Wide Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/)
## Preparation
### Setup data locations (per-node)
Since IPFS may _replace_ ceph or glusterfs as a shared-storage provider for the swarm, we can't use sharded storage to store its persistent data. (🐔, meet :egg:)
On _each_ node, therefore run the following, to create the persistent data storage for ipfs and ipfs-cluster:
```
mkdir -p {/var/ipfs/daemon,/var/ipfs/cluster}
```
### Setup environment
ipfs-cluster nodes require a common secret, a 32-byte hex-encoded string, in order to "trust" each other, so generate one, and add it to ipfs.env on your first node, by running ```od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'; echo```
Now on _each_ node, create ```/var/data/config/ipfs/ipfs.env``` (_the env_file referenced by the compose file below_), including both the secret *and* the IP of the docker0 interface on your hosts (_on my hosts, this is always 172.17.0.1_). We do this (_the trick with docker0_) to allow ipfs-cluster to talk to the local ipfs daemon, per-node:
```
SECRET=<string generated above>
# Use docker0 to access daemon
IPFS_API=/ip4/172.17.0.1/tcp/5001
```
### Create docker-compose file
Yes, I know. It's not as snazzy as docker swarm. Maybe we'll get there. But this implementation uses docker-compose, so create the following (_identical_) ipfs.yml on each node:
```
version: "3"
services:
cluster:
image: ipfs/ipfs-cluster
volumes:
- /var/ipfs/cluster:/data/ipfs-cluster
env_file: /var/data/config/ipfs/ipfs.env
ports:
- 9095:9095
- 9096:9096
depends_on:
- daemon
daemon:
image: ipfs/go-ipfs
ports:
- 4001:4001
- 5001:5001
- 8080:8080
volumes:
- /var/ipfs/daemon:/data/ipfs
```
### Launch independent nodes
Launch all nodes independently with ```docker-compose -f ipfs.yml up```. At this point, the nodes are each running independently, unaware of each other. But we do this to ensure that service.json is populated on each node, using the IPFS_API environment variable we specified in ipfs.env. (_it's only used on the first run_)
The output looks something like this:
```
cluster_1 | 11:03:33.272 INFO restapi: REST API (libp2p-http): ENABLED. Listening on:
cluster_1 | /ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
cluster_1 | /ip4/172.18.0.3/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
cluster_1 | /p2p-circuit/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
daemon_1 | Swarm listening on /ip4/127.0.0.1/tcp/4001
daemon_1 | Swarm listening on /ip4/172.19.0.2/tcp/4001
daemon_1 | Swarm listening on /p2p-circuit
daemon_1 | Swarm announcing /ip4/127.0.0.1/tcp/4001
daemon_1 | Swarm announcing /ip4/172.19.0.2/tcp/4001
daemon_1 | Swarm announcing /ip4/202.170.161.77/tcp/4001
daemon_1 | API server listening on /ip4/0.0.0.0/tcp/5001
daemon_1 | Gateway (readonly) server listening on /ip4/0.0.0.0/tcp/8080
daemon_1 | Daemon is ready
cluster_1 | 10:49:19.720 INFO consensus: Current Raft Leader: QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp raft.go:293
cluster_1 | 10:49:19.721 INFO cluster: Cluster Peers (without including ourselves): cluster.go:403
cluster_1 | 10:49:19.721 INFO cluster: - No other peers cluster.go:405
cluster_1 | 10:49:19.722 INFO cluster: ** IPFS Cluster is READY ** cluster.go:418
```
### Pick a leader
Pick a node to be your primary node, and CTRL-C the others.
Look for a line like this in the output of the primary node:
```
/ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
```
You'll note several addresses listed, all ending in the same hash. None of these addresses will be your docker node's actual IP address, however, since we exposed port 9096, we can substitute your docker node's IP.
### Bootstrap the followers
On each of the non-primary nodes, run the following, replacing **IP-OF-PRIMARY-NODE** with the actual IP of the primary node, and **HASHY-MC-HASHFACE** with your own hash from primary output above.
```
docker run --rm -it -v /var/ipfs/cluster:/data/ipfs-cluster \
  --entrypoint ipfs-cluster-service ipfs/ipfs-cluster \
  daemon --bootstrap /ip4/IP-OF-PRIMARY-NODE/tcp/9096/ipfs/HASHY-MC-HASHFACE
```
You'll see output like this:
```
10:55:26.121 INFO service: Bootstrapping to /ip4/192.168.31.13/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT daemon.go:153
10:55:26.121 INFO ipfshttp: IPFS Proxy: /ip4/0.0.0.0/tcp/9095 -> /ip4/172.17.0.1/tcp/5001 ipfshttp.go:221
10:55:26.304 ERROR ipfshttp: error posting to IPFS: Post http://172.17.0.1:5001/api/v0/id: dial tcp 172.17.0.1:5001: connect: connection refused ipfshttp.go:708
10:55:26.622 INFO consensus: Current Raft Leader: QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT raft.go:293
10:55:26.623 INFO cluster: Cluster Peers (without including ourselves): cluster.go:403
10:55:26.623 INFO cluster: - QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT cluster.go:410
10:55:26.624 INFO cluster: - QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx cluster.go:410
10:55:26.625 INFO cluster: ** IPFS Cluster is READY ** cluster.go:418
```
!!! note
You can ignore the warnings about port 5001 refused - this is because we weren't running the ipfs daemon while bootstrapping the cluster. Its harmless.
I haven't worked out why yet, but running the bootstrap in ```docker run``` format resets the permissions on /var/ipfs/cluster/, so check the permissions on /var/ipfs/daemon, and make those of /var/ipfs/cluster match.
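For example, something like this (_a sketch - the `1000:1000` owner below is an assumption; use whatever owner ```ls``` reports for /var/ipfs/daemon_):
```
# Check what ownership the daemon's directory has
ls -ld /var/ipfs/daemon /var/ipfs/cluster
# Make the cluster directory match (1000:1000 is just an example owner)
chown -R 1000:1000 /var/ipfs/cluster
```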
You can now run ```docker-compose -f ipfs.yml up``` on the "follower" nodes, to bring your cluster online.
### Confirm cluster
```docker exec``` into one of the cluster containers (_it doesn't matter which one_), and run ```ipfs-cluster-ctl peers ls```
You should see output from each node member, indicating it can see its other peers. Here's my output from a 3-node cluster:
```
/ # ipfs-cluster-ctl peers ls
QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT | ef68b1437c56 | Sees 2 other peers
> Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT
- /ip4/172.19.0.3/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT
- /p2p-circuit/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT
> IPFS: QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
- /ip4/127.0.0.1/tcp/4001/ipfs/QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
- /ip4/172.19.0.2/tcp/4001/ipfs/QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
- /ip4/202.170.161.75/tcp/4001/ipfs/QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp | 6558e1bf32e2 | Sees 2 other peers
> Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp
- /ip4/172.19.0.3/tcp/9096/ipfs/QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp
- /p2p-circuit/ipfs/QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp
> IPFS: QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
- /ip4/127.0.0.1/tcp/4001/ipfs/QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
- /ip4/172.19.0.2/tcp/4001/ipfs/QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
- /ip4/202.170.161.77/tcp/4001/ipfs/QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx | 28c13ec68f33 | Sees 2 other peers
> Addresses:
- /ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
- /ip4/172.18.0.3/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
- /p2p-circuit/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
> IPFS: QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
- /ip4/127.0.0.1/tcp/4001/ipfs/QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
- /ip4/172.18.0.2/tcp/4001/ipfs/QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
- /ip4/202.170.161.96/tcp/4001/ipfs/QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
/ #
```
## Chef's Notes 📓
1. I'm still trying to work out how to _mount_ the ipfs data in my filesystem in a usable way, which is why this is still a WIP :)
@@ -2,10 +2,10 @@ hero: Kanboard - A recipe to get your personal kanban on
# Kanboard
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](https://geek-cookbook.funkypenguin.co.nz/recipes/miniflux/)_)
!!! tip "Sponsored Project"
Kanboard is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
Features include:
@@ -116,7 +116,7 @@ Launch the Kanboard stack by running ```docker stack deploy kanboard -c <path -t
Log into your new instance at https://**YOUR-FQDN**. Default credentials are admin/admin, after which you can change (_under 'profile'_) and add more users.
## Chef's Notes 📓
1. The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
2. Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).
@@ -0,0 +1,122 @@
hero: Kanboard - A recipe to get your personal kanban on
# Kanboard
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_)
!!! tip "Sponsored Project"
Kanboard is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
Features include:
* Visualize your work
* Limit your work in progress to be more efficient
* Customize your boards according to your business activities
* Multiple projects with the ability to drag and drop tasks
* Reports and analytics
* Fast and simple to use
* Access from anywhere with a modern browser
* Plugins and integrations with external services
* Free, open source and self-hosted
* Super simple installation
![](/images/kanboard.png)
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry pointing your Kanboard url (_kanboard.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
Create the location for the bind-mount of the application data, so that it's persistent:
```
mkdir -p /var/data/kanboard
```
### Setup Environment
If you intend to use an [OAuth proxy](/reference/oauth_proxy/) to further secure public access to your instance, create a ```kanboard.env``` file to hold your environment variables, and populate with your OAuth provider's details (_the cookie secret you can just make up_):
```
# If you decide to protect kanboard with an oauth_proxy, complete these
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
```
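If you'd rather generate a random cookie secret than make one up, something like this works (_assuming you have openssl available_):
```
# Generate a random 32-character hex string to use as OAUTH2_PROXY_COOKIE_SECRET
openssl rand -hex 16
```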
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
kanboard:
image: kanboard/kanboard
volumes:
- /var/data/kanboard/data:/var/www/app/data
- /var/data/kanboard/plugins:/var/www/app/plugins
networks:
- internal
deploy:
labels:
- traefik.frontend.rule=Host:kanboard.example.com
- traefik.docker.network=traefik_public
- traefik.port=80
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/kanboard/kanboard.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:kanboard.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/kanboard/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://kanboard
-redirect-url=https://kanboard.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.8.0/24
```
## Serving
### Launch Kanboard stack
Launch the Kanboard stack by running ```docker stack deploy kanboard -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**. Default credentials are admin/admin, after which you can change the password (_under 'profile'_) and add more users.
## Chef's Notes 📓
1. The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
2. Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).
@@ -0,0 +1,147 @@
# KeyCloak
[KeyCloak](https://www.keycloak.org/) is "*an open source identity and access management solution*". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak's OpenID provider can be used in combination with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) to protect [vulnerable services](/recipes/nzbget/) with an extra layer of authentication.
!!! important
Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
![KeyCloak Screenshot](../images/keycloak.png)
## Ingredients
!!! Summary
Existing:
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph/)
* [X] [Traefik](/ha-docker-swarm/traefik) configured per design
* [X] DNS entry for the hostname (_i.e. "keycloak.your-domain.com"_) you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container for both runtime and backup data, so create them as follows:
```
mkdir -p /var/data/runtime/keycloak/database
mkdir -p /var/data/keycloak/database-dump
```
### Prepare environment
Create `/var/data/keycloak/keycloak.env`, and populate with the following variables, customized for your own domain structure.
```
# Technically, this could be auto-detected, but we prefer to be prescriptive
DB_VENDOR=postgres
DB_DATABASE=keycloak
DB_ADDR=keycloak-db
DB_USER=keycloak
DB_PASSWORD=myuberpassword
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=ilovepasswords
# This is required to run keycloak behind traefik
PROXY_ADDRESS_FORWARDING=true
# What's our hostname?
KEYCLOAK_HOSTNAME=keycloak.batcave.com
# Tell Postgres what user/password to create
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=myuberpassword
```
Create `/var/data/keycloak/keycloak-backup.env`, and populate with the following, so that your database can be backed up to the filesystem, daily:
```
PGHOST=keycloak-db
PGUSER=keycloak
PGPASSWORD=myuberpassword
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
keycloak:
image: jboss/keycloak
env_file: /var/data/config/keycloak/keycloak.env
volumes:
- /etc/localtime:/etc/localtime:ro
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:keycloak.batcave.com
- traefik.port=8080
- traefik.docker.network=traefik_public
keycloak-db:
env_file: /var/data/config/keycloak/keycloak.env
image: postgres:10.1
volumes:
- /var/data/runtime/keycloak/database:/var/lib/postgresql/data
- /etc/localtime:/etc/localtime:ro
networks:
- internal
keycloak-db-backup:
image: postgres:10.1
env_file: /var/data/config/keycloak/keycloak-backup.env
volumes:
- /var/data/keycloak/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.49.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
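Since the backups are taken with ```pg_dump -Fc```, you can restore one with ```pg_restore``` if you ever need to. A minimal sketch (_assuming the stack was deployed as "keycloak", and substituting the name of an actual dump file from /var/data/keycloak/database-dump/_):
```
# Feed a chosen dump into pg_restore inside the running database container
docker exec -i $(docker ps -q -f name=keycloak_keycloak-db) \
  pg_restore -U keycloak -d keycloak --clean < /var/data/keycloak/database-dump/<your-dump>.psql
```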
## Serving
### Launch KeyCloak stack
Launch the KeyCloak stack by running ```docker stack deploy keycloak -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, and login with the user/password you defined in `keycloak.env`.
!!! important
Initial development of this recipe was sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes
@@ -65,4 +65,4 @@ We've setup a new realm in KeyCloak, and configured read-write federation to an
* [X] KeyCloak realm in read-write federation with [OpenLDAP](https://geek-cookbook.funkypenguin.co.nz/recipes/openldap/) directory
## Chef's Notes 📓
@@ -0,0 +1,68 @@
# Authenticate KeyCloak against OpenLDAP
!!! warning
This is not a complete recipe - it's an **optional** component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
KeyCloak gets really sexy when you integrate it into your [OpenLDAP](/recipes/openldap/) stack (_also, it's great not to have to play with ugly LDAP tree UIs_). Note that OpenLDAP integration is **not necessary** if you want to use KeyCloak with [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) - all you need for that is [local users](/recipes/keycloak/create-user/), and an [OIDC client](/recipes/keycloak/setup-oidc-provider/).
## Ingredients
!!! Summary
Existing:
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
New:
* [ ] An [OpenLDAP server](/recipes/openldap/) (*assuming you want to authenticate against it*)
## Preparation
You'll need to have completed the [OpenLDAP](/recipes/openldap/) recipe.
You start in the "Master" realm - mouse-over the realm name to display a dropdown box, allowing you to add a new realm:
### Create Realm
![KeyCloak Add Realm Screenshot](/images/sso-stack-keycloak-1.png)
Enter a name for your new realm, and click "_Create_":
![KeyCloak Add Realm Screenshot](/images/sso-stack-keycloak-2.png)
### Setup User Federation
Once in the desired realm, click on **User Federation**, and click **Add Provider**. On the next page ("_Required Settings_"), set the following:
* **Edit Mode** : Writeable
* **Vendor** : Other
* **Connection URL** : ldap://openldap
* **Users DN** : ou=People,<your base DN\>
* **Authentication Type** : simple
* **Bind DN** : cn=admin,<your base DN\>
* **Bind Credential** : <your chosen admin password\>
Save your changes, and then navigate back to "User Federation" > Your LDAP name > Mappers:
![KeyCloak Add Realm Screenshot](/images/sso-stack-keycloak-3.png)
For each of the following mappers, click the name, and set the "_Read Only_" flag to "_Off_" (_this enables 2-way sync between KeyCloak and OpenLDAP_)
* last name
* username
* email
* first name
![KeyCloak Add Realm Screenshot](/images/sso-stack-keycloak-4.png)
## Summary
We've setup a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
!!! Summary
Created:
* [X] KeyCloak realm in read-write federation with [OpenLDAP](/recipes/openldap/) directory
## Chef's Notes 📓
@@ -0,0 +1,38 @@
# Create KeyCloak Users
!!! warning
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
Unless you plan to authenticate against an outside provider (*[OpenLDAP](/recipes/keycloak/openldap/), for example*), you'll want to create some local users...
## Ingredients
!!! Summary
Existing:
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
### Create User
Within the "Master" realm (*no need for more realms yet*), navigate to **Manage** -> **Users**, and then click **Add User** at the top right:
![Navigating to the add user interface in Keycloak](/images/keycloak-add-user-1.png)
Populate your new user's username (it's the only mandatory field):
![Populating a username in the add user interface in Keycloak](/images/keycloak-add-user-2.png)
### Set User Credentials
Once your user is created, to set their password, click on the "**Credentials**" tab, and proceed to reset it. Set the password to non-temporary, unless you like extra work!
![Resetting a user's password in Keycloak](/images/keycloak-add-user-3.png)
## Summary
We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it's used as an [OIDC Provider](/recipes/keycloak/setup-oidc-provider/), potentially to secure vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
!!! Summary
Created:
* [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/)
@@ -52,4 +52,4 @@ We've setup an OIDC client in KeyCloak, which we can now use to protect vulnerab
* [X] Client ID and Client Secret used to authenticate against KeyCloak with OpenID Connect
## Chef's Notes 📓
@@ -0,0 +1,55 @@
# Add OIDC Provider to KeyCloak
!!! warning
This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.
Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/), we'll setup a client in KeyCloak...
## Ingredients
!!! Summary
Existing:
* [X] [KeyCloak](/recipes/keycloak/) recipe deployed successfully
New:
* [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe for more information
## Preparation
### Create Client
Within the "Master" realm (*no need for more realms yet*), navigate to **Clients**, and then click **Create** at the top right:
![Navigating to the add user interface in Keycloak](/images/keycloak-add-client-1.png)
Enter a name for your client (*remember, we're authenticating **applications** now, not users, so use an application-specific name*):
![Adding a client in KeyCloak](/images/keycloak-add-client-2.png)
### Configure Client
Once your client is created, set at **least** the following, and click **Save**
* **Access Type** : Confidential
* **Valid Redirect URIs** : <The URIs you want to protect\>
![Set KeyCloak client to confidential access type, add redirect URIs](/images/keycloak-add-client-3.png)
### Retrieve Client Secret
Now that you've changed the access type, and clicked **Save**, an additional **Credentials** tab appears at the top of the window. Click on the tab, and capture the KeyCloak-generated secret. This secret, plus your client name, is required to authenticate against KeyCloak via OIDC.
![Capture client secret from KeyCloak](/images/keycloak-add-client-4.png)
## Summary
We've setup an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm is *https://<your-keycloak-url\>/realms/master/.well-known/openid-configuration*
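As a quick sanity-check, you can confirm that the discovery endpoint responds (_a sketch, assuming your instance is at keycloak.example.com and the URL structure above_):
```
# Fetch the OIDC discovery document from the master realm
curl -s https://keycloak.example.com/realms/master/.well-known/openid-configuration
```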
!!! Summary
Created:
* [X] Client ID and Client Secret used to authenticate against KeyCloak with OpenID Connect
## Chef's Notes 📓
@@ -1,11 +1,11 @@
# Kanboard
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](https://geek-cookbook.funkypenguin.co.nz/recipes/miniflux/)_)
![Kanboard Screenshot](https://geek-cookbook.funkypenguin.co.nz/images/kanboard.png)
!!! tip "Sponsored Project"
Kanboard is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
Features include:
@@ -0,0 +1,265 @@
# Kanboard
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_)
![Kanboard Screenshot](/images/kanboard.png)
!!! tip "Sponsored Project"
Kanboard is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
Features include:
* Visualize your work
* Limit your work in progress to be more efficient
* Customize your boards according to your business activities
* Multiple projects with the ability to drag and drop tasks
* Reports and analytics
* Fast and simple to use
* Access from anywhere with a modern browser
* Plugins and integrations with external services
* Free, open source and self-hosted
* Super simple installation
## Ingredients
1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
2. A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
## Preparation
### Prepare traefik for namespace
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below:
```
<snip>
kubernetes:
namespaces:
- kube-system
- nextcloud
- kanboard
- miniflux
<snip>
```
If you've updated ```values.yml```, upgrade your traefik deployment via helm, by running ```helm upgrade --values values.yml traefik stable/traefik --recreate-pods```
### Create data locations
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
```
mkdir /var/data/config/kanboard
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the kanboard stack with the following .yml:
```
cat <<EOF > /var/data/config/kanboard/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
name: kanboard
EOF
kubectl create -f /var/data/config/kanboard/namespace.yml
```
### Create persistent volume claim
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the kanboard app and plugin data:
```
cat <<EOF > /var/data/config/kanboard/persistent-volumeclaim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: kanboard-volumeclaim
namespace: kanboard
annotations:
backup.kubernetes.io/deltas: P1D P7D
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
EOF
kubectl create -f /var/data/config/kanboard/persistent-volumeclaim.yml
```
!!! question "What's that annotation about?"
The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
### Create ConfigMap
Kanboard's configuration is all contained within ```config.php```, which needs to be presented to the container. We _could_ maintain ```config.php``` in the persistent volume we created above, but this would require manually accessing the pod every time we wanted to make a change.
Instead, we'll create ```config.php``` as a [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), meaning it "lives" within the Kubernetes cluster and can be **presented** to our pod. When we want to make changes, we simply update the ConfigMap (*delete and recreate, to be accurate*), and relaunch the pod.
Grab a copy of [config.default.php](https://github.com/kanboard/kanboard/blob/master/config.default.php), save it to ```/var/data/config/kanboard/config.php```, and customize it per [the guide](https://docs.kanboard.org/en/latest/admin_guide/config_file.html).
At the very least, I'd suggest making the following changes:
```
define('PLUGIN_INSTALLER', true); // Yes, I want to install plugins using the UI
define('ENABLE_URL_REWRITE', true); // Yes, I want pretty URLs
```
Now create the configmap from config.php, by running ```kubectl create configmap -n kanboard kanboard-config --from-file=config.php```
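To confirm that the ConfigMap was created, and that your config.php content made it in, you can inspect it, as follows:
```
# Show the keys and contents of the kanboard-config ConfigMap
kubectl describe configmap -n kanboard kanboard-config
```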
## Serving
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [service](https://kubernetes.io/docs/concepts/services-networking/service/), and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the kanboard [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
### Create deployment
Create a deployment to tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Note below that we mount the persistent volume **twice**, to both ```/var/www/app/data``` and ```/var/www/app/plugins```, using the subPath value to differentiate them. This trick avoids us having to provision **two** persistent volumes just for data mounted in 2 separate locations.
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍
```
cat <<EOF > /var/data/kanboard/deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: kanboard
name: app
labels:
app: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- image: kanboard/kanboard
name: app
volumeMounts:
- name: kanboard-config
mountPath: /var/www/app/config.php
subPath: config.php
- name: kanboard-app
mountPath: /var/www/app/data
subPath: data
- name: kanboard-app
mountPath: /var/www/app/plugins
subPath: plugins
volumes:
- name: kanboard-app
persistentVolumeClaim:
claimName: kanboard-volumeclaim
- name: kanboard-config
configMap:
name: kanboard-config
EOF
kubectl create -f /var/data/kanboard/deployment.yml
```
Check that your deployment is running, with ```kubectl get pods -n kanboard```. After a minute or so, you should see a "Running" pod, as illustrated below:
```
[funkypenguin:~] % kubectl get pods -n kanboard
NAME READY STATUS RESTARTS AGE
app-79f97f7db6-hsmfg 1/1 Running 0 11d
[funkypenguin:~] %
```
### Create service
The service resource "advertises" the availability of TCP port 80 in your pod to the rest of the cluster (*constrained within your namespace*). It may seem like overkill coming from Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
```
cat <<EOF > /var/data/kanboard/service.yml
kind: Service
apiVersion: v1
metadata:
name: app
namespace: kanboard
spec:
selector:
app: app
ports:
- protocol: TCP
port: 80
clusterIP: None
EOF
kubectl create -f /var/data/kanboard/service.yml
```
Check that your service is deployed, with ```kubectl get services -n kanboard```. You should see something like this:
```
[funkypenguin:~] % kubectl get service -n kanboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app ClusterIP None <none> 80/TCP 38d
[funkypenguin:~] %
```
### Create ingress
The ingress resource tells Traefik to forward inbound requests for *kanboard.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
```
cat <<EOF > /var/data/kanboard/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app
namespace: kanboard
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: kanboard.example.com
http:
paths:
- backend:
serviceName: app
servicePort: 80
EOF
kubectl create -f /var/data/kanboard/ingress.yml
```
Check that your service is deployed, with ```kubectl get ingress -n kanboard```. You should see something like this:
```
[funkypenguin:~] % kubectl get ingress -n kanboard
NAME HOSTS ADDRESS PORTS AGE
app kanboard.funkypenguin.co.nz 80 38d
[funkypenguin:~] %
```
### Access Kanboard
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://kanboard.example.com*)
### Updating config.php
Since ```config.php``` is a ConfigMap now, to update it, make your local changes, and then delete and recreate the ConfigMap, by running:
```
kubectl delete configmap -n kanboard kanboard-config
kubectl create configmap -n kanboard kanboard-config --from-file=config.php
```
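Alternatively, a recent ```kubectl``` can update the ConfigMap in one step, by piping a dry-run render into ```kubectl apply``` (_a sketch; older releases use plain ```--dry-run``` rather than ```--dry-run=client```_):
```
# Re-render the ConfigMap from the local config.php, and apply it in place
kubectl create configmap -n kanboard kanboard-config --from-file=config.php \
  --dry-run=client -o yaml | kubectl apply -f -
```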
Then, in the absence of any other changes to the deployment definition, force the pod to restart by issuing a "null patch", as follows:
```
kubectl patch -n kanboard deployment app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
```
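You can watch the resulting restart complete, with:
```
# Wait for the patched deployment to finish rolling out
kubectl rollout status -n kanboard deployment app
```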
### Troubleshooting
To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
## Chef's Notes
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, by running an additional database pod and service. Contact me if you'd like further details ;)
@@ -0,0 +1,35 @@
# Kubernetes Dashboard
Yes, Kubernetes is complicated. There are lots of moving parts, and debugging _what's_ gone wrong and _why_ can be challenging.
Fortunately, to assist in day-to-day operation of our cluster, and in the occasional "how-did-that-ever-work" troubleshooting, we have available to us the mighty **[Kubernetes Dashboard](https://github.com/kubernetes/dashboard)**:
![Kubernetes Dashboard Screenshot](/images/kubernetes-dashboard.png)
Using the dashboard, you can:
* Visualize cluster load and pod distribution
* Examine Kubernetes objects, such as Deployments, Daemonsets, ConfigMaps, etc
* View logs
* Deploy new YAML manifests
* Lots more!
## Ingredients
1. A [Kubernetes Cluster](/kubernetes/design/), with
2. OIDC-enabled authentication
3. An Ingress Controller ([Traefik Ingress](/kubernetes/traefik/) or [NGinx Ingress](/kubernetes/nginx-ingress/))
4. A DNS name for your dashboard instance (*dashboard.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your ingress controller
5. A [KeyCloak](/recipes/keycloak/) instance for authentication
## Preparation
### Access Dashboard
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://dashboard.example.com*)
## Chef's Notes
@@ -1,6 +1,6 @@
# Miniflux
Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite Open Source Kanban app, [Kanboard](https://geek-cookbook.funkypenguin.co.nz/recipes/kanboard/)_)
![Miniflux Screenshot](https://geek-cookbook.funkypenguin.co.nz/images/miniflux.png)

# Miniflux
Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
![Miniflux Screenshot](/images/miniflux.png)
!!! tip "Sponsored Project"
Miniflux is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to!
I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate:
* Compatible with the Fever API, read your feeds through existing mobile and desktop clients (_This is the killer feature for me. I hardly ever read RSS on my desktop, I typically read on my iPhone or iPad, using [Fiery Feeds](http://cocoacake.net/apps/fiery/) or my new squeeze, [Unread](https://www.goldenhillsoftware.com/unread/)_)
* Send your bookmarks to Pinboard, Wallabag, Shaarli or Instapaper (_I use this to automatically pin my bookmarks for collection on my [blog](https://www.funkypenguin.co.nz/blog/)_)
* Feeds can be configured to download a "full" version of the content (_rather than an excerpt_)
* Use the Bookmarklet to subscribe to a website directly from any browsers
!!! abstract "2.0+ is a bit different"
[Some things changed](https://docs.miniflux.net/en/latest/migration.html) when Miniflux 2.0 was released. For one thing, the only supported database is now postgresql (_no more SQLite_). External themes are gone, as is PHP (_in favor of golang_). It's been a controversial change, but I'm keen on minimal and single-purpose, so I'm still very happy with the direction of development. The developer has laid out his [opinions](https://docs.miniflux.net/en/latest/opinionated.html) re the decisions he's made in the course of development.
## Ingredients
1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
2. A DNS name for your miniflux instance (*miniflux.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
## Preparation
### Prepare traefik for namespace
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *miniflux* namespace, as illustrated below:
```
<snip>
kubernetes:
namespaces:
- kube-system
- nextcloud
- kanboard
- miniflux
<snip>
```
If you've updated ```values.yml```, upgrade your traefik deployment via helm, by running ```helm upgrade --values values.yml traefik stable/traefik --recreate-pods```
### Create data locations
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
```
mkdir /var/data/config/miniflux
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the miniflux stack with the following .yml:
```
cat <<EOF > /var/data/config/miniflux/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
name: miniflux
EOF
kubectl create -f /var/data/config/miniflux/namespace.yml
```
### Create persistent volume claim
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the miniflux postgres database:
```
cat <<EOF > /var/data/config/miniflux/db-persistent-volumeclaim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: miniflux-db
namespace: miniflux
annotations:
backup.kubernetes.io/deltas: P1D P7D
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
EOF
kubectl create -f /var/data/config/miniflux/db-persistent-volumeclaim.yml
```
!!! question "What's that annotation about?"
The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
### Create secrets
It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents. Run the following, replacing ```imtoosexyformyadminpassword```, and the ```mydbpass``` value in both postgres-password.secret **and** database-url.secret:
```
echo -n "imtoosexyformyadminpassword" > admin-password.secret
echo -n "mydbpass" > postgres-password.secret
echo -n "postgres://miniflux:mydbpass@db/miniflux?sslmode=disable" > database-url.secret
kubectl create secret -n miniflux generic miniflux-credentials \
  --from-file=admin-password.secret \
  --from-file=postgres-password.secret \
  --from-file=database-url.secret
```
!!! tip "Why use ```echo -n```?"
Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
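To double-check that the secret was created with all three keys (_without exposing the values_), you can describe it:
```
# Confirm the miniflux-credentials secret exists, and which keys it contains
kubectl describe secret -n miniflux miniflux-credentials
```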
## Serving
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and some [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), we can create [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [services](https://kubernetes.io/docs/concepts/services-networking/service/), and an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the miniflux [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
### Create db deployment
Deployments tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Create the db deployment by executing the following. Note that the deployment refers to the secrets created above.
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍
```
cat <<EOF > /var/data/miniflux/db-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: miniflux
name: db
labels:
app: db
spec:
replicas: 1
selector:
matchLabels:
app: db
template:
metadata:
labels:
app: db
spec:
containers:
- image: postgres:11
name: db
volumeMounts:
- name: miniflux-db
mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_USER
value: "miniflux"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: miniflux-credentials
key: postgres-password.secret
volumes:
- name: miniflux-db
persistentVolumeClaim:
claimName: miniflux-db
EOF
kubectl create -f /var/data/miniflux/db-deployment.yml
```
### Create app deployment
Create the app deployment by executing the following. Again, note that the deployment refers to the secrets created above.
```
cat <<EOF > /var/data/miniflux/app-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: miniflux
name: app
labels:
app: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- image: miniflux/miniflux
name: app
env:
# This is necessary for miniflux to update the db schema, even on an empty DB
- name: CREATE_ADMIN
value: "1"
- name: RUN_MIGRATIONS
value: "1"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: miniflux-credentials
key: admin-password.secret
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: miniflux-credentials
key: database-url.secret
EOF
kubectl create -f /var/data/miniflux/app-deployment.yml
```
### Check pods
Check that your deployment is running, with ```kubectl get pods -n miniflux```. After a minute or so, you should see 2 "Running" pods, as illustrated below:
```
[funkypenguin:~] % kubectl get pods -n miniflux
NAME READY STATUS RESTARTS AGE
app-667c667b75-5jjm9 1/1 Running 0 4d
db-fcd47b88f-9vvqt 1/1 Running 0 4d
[funkypenguin:~] %
```
### Create db service
The db service resource "advertises" the availability of PostgreSQL's port (TCP 5432) in your pod to the rest of the cluster (*constrained within your namespace*). It may seem like overkill coming from Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
```
cat <<EOF > /var/data/miniflux/db-service.yml
kind: Service
apiVersion: v1
metadata:
name: db
namespace: miniflux
spec:
selector:
app: db
ports:
- protocol: TCP
port: 5432
clusterIP: None
EOF
kubectl create -f /var/data/miniflux/service.yml
```
### Create app service
The app service resource "advertises" the availability of miniflux's HTTP listener port (TCP 8080) in your pod. This is the service which will be referred to by the ingress (below), so that Traefik can route incoming traffic to the miniflux app.
```
cat <<EOF > /var/data/miniflux/app-service.yml
kind: Service
apiVersion: v1
metadata:
name: app
namespace: miniflux
spec:
selector:
app: app
ports:
- protocol: TCP
port: 8080
clusterIP: None
EOF
kubectl create -f /var/data/miniflux/app-service.yml
```
### Check services
Check that your services are deployed, with ```kubectl get services -n miniflux```. You should see something like this:
```
[funkypenguin:~] % kubectl get services -n miniflux
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app ClusterIP None <none> 8080/TCP 55d
db ClusterIP None <none> 5432/TCP 55d
[funkypenguin:~] %
```
### Create ingress
The ingress resource tells Traefik to forward inbound requests for *miniflux.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
```
cat <<EOF > /var/data/miniflux/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app
namespace: miniflux
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: miniflux.example.com
http:
paths:
- backend:
serviceName: app
servicePort: 8080
EOF
kubectl create -f /var/data/miniflux/ingress.yml
```
Check that your service is deployed, with ```kubectl get ingress -n miniflux```. You should see something like this:
```
[funkypenguin:~] 130 % kubectl get ingress -n miniflux
NAME HOSTS ADDRESS PORTS AGE
app miniflux.funkypenguin.co.nz 80 55d
[funkypenguin:~] %
```
### Access Miniflux
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://miniflux.example.com*)
### Troubleshooting
To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
@@ -0,0 +1,127 @@
hero: Not all heroes wear capes
!!! danger "This recipe is a work in progress"
This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
# NAME
Intro
![NAME Screenshot](../../images/name.jpg)
Details
## Ingredients
1. [Kubernetes cluster](/kubernetes/digital-ocean/)
## Preparation
### Create data locations
```
mkdir /var/data/config/mqtt
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the mqtt stack by creating the following .yaml:
```
cat <<EOF > /var/data/mqtt/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: mqtt
EOF
kubectl create -f /var/data/mqtt/namespace.yaml
```
### Prepare environment
Create wekan.env, and populate with the following variables
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
MONGO_URL=mongodb://wekandb:27017/wekan
ROOT_URL=https://wekan.example.com
MAIL_URL=smtp://wekan@wekan.example.com:password@mail.example.com:587/
MAIL_FROM="Wekan <wekan@wekan.example.com>"
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
wekandb:
image: mongo:3.2.15
command: mongod --smallfiles --oplogSize 128
networks:
- internal
volumes:
- /var/data/wekan/wekan-db:/data/db
- /var/data/wekan/wekan-db-dump:/dump
proxy:
image: a5huynh/oauth2_proxy
env_file: /var/data/wekan/wekan.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:wekan.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://wekan:80
-redirect-url=https://wekan.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
wekan:
image: wekanteam/wekan:latest
networks:
- internal
env_file: /var/data/wekan/wekan.env
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.3.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Wekan stack
Launch the Wekan stack by running ```docker stack deploy wekan -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in wekan.env.
## Chef's Notes
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
@@ -0,0 +1,120 @@
# Kanboard
Intro
![NAME Screenshot](../../images/name.jpg)
Details
## Ingredients
1. [Kubernetes cluster](/kubernetes/digital-ocean/)
## Preparation
### Create data locations
```
mkdir /var/data/config/mqtt
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the mqtt stack by creating the following .yaml:
```
cat <<EOF > /var/data/mqtt/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: mqtt
EOF
kubectl create -f /var/data/mqtt/namespace.yaml
```
### Prepare environment
Create wekan.env, and populate with the following variables
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
MONGO_URL=mongodb://wekandb:27017/wekan
ROOT_URL=https://wekan.example.com
MAIL_URL=smtp://wekan@wekan.example.com:password@mail.example.com:587/
MAIL_FROM="Wekan <wekan@wekan.example.com>"
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
wekandb:
image: mongo:3.2.15
command: mongod --smallfiles --oplogSize 128
networks:
- internal
volumes:
- /var/data/wekan/wekan-db:/data/db
- /var/data/wekan/wekan-db-dump:/dump
proxy:
image: a5huynh/oauth2_proxy
env_file: /var/data/wekan/wekan.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:wekan.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://wekan:80
-redirect-url=https://wekan.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
wekan:
image: wekanteam/wekan:latest
networks:
- internal
env_file: /var/data/wekan/wekan.env
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.3.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Wekan stack
Launch the Wekan stack by running ```docker stack deploy wekan -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in wekan.env.
## Chef's Notes
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
@@ -0,0 +1,127 @@
hero: Not all heroes wear capes
!!! danger "This recipe is a work in progress"
This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
# NAME
Intro
![NAME Screenshot](../../images/name.jpg)
Details
## Ingredients
1. [Kubernetes cluster](/kubernetes/digital-ocean/)
## Preparation
### Create data locations
```
mkdir /var/data/config/mqtt
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the mqtt stack by creating the following .yaml:
```
cat <<EOF > /var/data/mqtt/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: mqtt
EOF
kubectl create -f /var/data/mqtt/namespace.yaml
```
### Prepare environment
Create wekan.env, and populate with the following variables
```
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
MONGO_URL=mongodb://wekandb:27017/wekan
ROOT_URL=https://wekan.example.com
MAIL_URL=smtp://wekan@wekan.example.com:password@mail.example.com:587/
MAIL_FROM="Wekan <wekan@wekan.example.com>"
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
wekandb:
image: mongo:3.2.15
command: mongod --smallfiles --oplogSize 128
networks:
- internal
volumes:
- /var/data/wekan/wekan-db:/data/db
- /var/data/wekan/wekan-db-dump:/dump
proxy:
image: a5huynh/oauth2_proxy
env_file: /var/data/wekan/wekan.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:wekan.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://wekan:80
-redirect-url=https://wekan.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
wekan:
image: wekanteam/wekan:latest
networks:
- internal
env_file: /var/data/wekan/wekan.env
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.3.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Wekan stack
Launch the Wekan stack by running ```docker stack deploy wekan -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in wekan.env.
## Chef's Notes
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the wekan container. You'd also need to add the traefik_public network to the wekan container.
@@ -1,11 +1,11 @@
# Kanboard
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](https://geek-cookbook.funkypenguin.co.nz/recipes/miniflux/)_)
![Kanboard Screenshot](https://geek-cookbook.funkypenguin.co.nz/images/kanboard.png)
!!! tip "Sponsored Project"
Kanboard is one of my [sponsored projects](https://geek-cookbook.funkypenguin.co.nz/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
Features include:

# Kanboard
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_)
![Kanboard Screenshot](/images/kanboard.png)
!!! tip "Sponsored Project"
Kanboard is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
Features include:
* Visualize your work
* Limit your work in progress to be more efficient
* Customize your boards according to your business activities
* Multiple projects with the ability to drag and drop tasks
* Reports and analytics
* Fast and simple to use
* Access from anywhere with a modern browser
* Plugins and integrations with external services
* Free, open source and self-hosted
* Super simple installation
## Ingredients
1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
2. A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
## Preparation
### Prepare traefik for namespace
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below:
```
<snip>
kubernetes:
namespaces:
- kube-system
- nextcloud
- kanboard
- miniflux
<snip>
```
If you've updated ```values.yml```, upgrade your traefik deployment via helm, by running ```helm upgrade --values values.yml traefik stable/traefik --recreate-pods```
### Create data locations
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
```
mkdir /var/data/config/kanboard
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the kanboard stack with the following .yml:
```
cat <<EOF > /var/data/config/kanboard/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
name: kanboard
EOF
kubectl create -f /var/data/config/kanboard/namespace.yml
```
### Create persistent volume claim
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the kanboard app and plugin data:
```
cat <<EOF > /var/data/config/kanboard/persistent-volumeclaim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: kanboard-volumeclaim
namespace: kanboard
annotations:
backup.kubernetes.io/deltas: P1D P7D
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
EOF
kubectl create -f /var/data/config/kanboard/persistent-volumeclaim.yml
```
!!! question "What's that annotation about?"
The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
### Create ConfigMap
Kanboard's configuration is all contained within ```config.php```, which needs to be presented to the container. We _could_ maintain ```config.php``` in the persistent volume we created above, but this would require manually accessing the pod every time we wanted to make a change.
Instead, we'll create ```config.php``` as a [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), meaning it "lives" within the Kubernetes cluster and can be **presented** to our pod. When we want to make changes, we simply update the ConfigMap (*delete and recreate, to be accurate*), and relaunch the pod.
Grab a copy of [config.default.php](https://github.com/kanboard/kanboard/blob/master/config.default.php), save it to ```/var/data/config/kanboard/config.php```, and customize it per [the guide](https://docs.kanboard.org/en/latest/admin_guide/config_file.html).
At the very least, I'd suggest making the following changes:
```
define('PLUGIN_INSTALLER', true); // Yes, I want to install plugins using the UI
define('ENABLE_URL_REWRITE', true); // Yes, I want pretty URLs
```
Now create the configmap from config.php, by running ```kubectl create configmap -n kanboard kanboard-config --from-file=config.php```
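You can verify what the cluster stored (_including the full contents of config.php_) by running:
```
kubectl describe configmap -n kanboard kanboard-config
```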
## Serving
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [service](https://kubernetes.io/docs/concepts/services-networking/service/), and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the kanboard [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
### Create deployment
Create a deployment to tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Note below that we mount the persistent volume **twice**, to both ```/var/www/app/data``` and ```/var/www/app/plugins```, using the subPath value to differentiate them. This trick avoids us having to provision **two** persistent volumes just for data mounted in 2 separate locations.
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍
```
cat <<EOF > /var/data/config/kanboard/deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: kanboard
name: app
labels:
app: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- image: kanboard/kanboard
name: app
volumeMounts:
- name: kanboard-config
mountPath: /var/www/app/config.php
subPath: config.php
- name: kanboard-app
mountPath: /var/www/app/data
subPath: data
- name: kanboard-app
mountPath: /var/www/app/plugins
subPath: plugins
volumes:
- name: kanboard-app
persistentVolumeClaim:
claimName: kanboard-volumeclaim
- name: kanboard-config
configMap:
name: kanboard-config
EOF
kubectl create -f /var/data/config/kanboard/deployment.yml
```
Check that your deployment is running, with ```kubectl get pods -n kanboard```. After a minute or so, you should see a "Running" pod, as illustrated below:
```
[funkypenguin:~] % kubectl get pods -n kanboard
NAME READY STATUS RESTARTS AGE
app-79f97f7db6-hsmfg 1/1 Running 0 11d
[funkypenguin:~] %
```
### Create service
The service resource "advertises" the availability of TCP port 80 in your pod, to the rest of the cluster (*constrained within your namespace*). It seems a little like overkill coming from Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
```
cat <<EOF > /var/data/config/kanboard/service.yml
kind: Service
apiVersion: v1
metadata:
name: app
namespace: kanboard
spec:
selector:
app: app
ports:
- protocol: TCP
port: 80
clusterIP: None
EOF
kubectl create -f /var/data/config/kanboard/service.yml
```
Check that your service is deployed, with ```kubectl get services -n kanboard```. You should see something like this:
```
[funkypenguin:~] % kubectl get service -n kanboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app ClusterIP None <none> 80/TCP 38d
[funkypenguin:~] %
```
### Create ingress
The ingress resource tells Traefik to forward inbound requests for *kanboard.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
```
cat <<EOF > /var/data/config/kanboard/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app
namespace: kanboard
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: kanboard.example.com
http:
paths:
- backend:
serviceName: app
servicePort: 80
EOF
kubectl create -f /var/data/config/kanboard/ingress.yml
```
Check that your ingress is deployed, with ```kubectl get ingress -n kanboard```. You should see something like this:
```
[funkypenguin:~] % kubectl get ingress -n kanboard
NAME HOSTS ADDRESS PORTS AGE
app kanboard.funkypenguin.co.nz 80 38d
[funkypenguin:~] %
```
### Access Kanboard
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://kanboard.example.com*)
### Updating config.php
Since ```config.php``` is a ConfigMap now, to update it, make your local changes, and then delete and recreate the ConfigMap, by running:
```
kubectl delete configmap -n kanboard kanboard-config
kubectl create configmap -n kanboard kanboard-config --from-file=config.php
```
Then, in the absence of any other changes to the deployment definition, force the pod to restart by issuing a "null patch", as follows:
```
kubectl patch -n kanboard deployment app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
```
### Troubleshooting
To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
## Chef's Notes
1. The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, by adding a database pod and service. Contact me if you'd like further details ;)
hero: Docker-mailserver - A recipe for a self-contained mailserver and friends ✉️
# Mail Server
Many of the recipes that follow require email access of some kind. It's normally possible to use a hosted service such as SendGrid, or just a gmail account. If (like me) you'd like to self-host email for your stacks, then the following recipe provides a full-stack mail server running on the docker HA swarm.
Of value to me in choosing docker-mailserver were:
1. Automatically renews LetsEncrypt certificates
2. Creation of email accounts across multiple domains (i.e., the same container gives me mailbox wekan@wekan.example.com, and gitlab@gitlab.example.com)
3. The entire configuration is based on flat files, so there's no database or persistence to worry about
docker-mailserver doesn't include a webmail client, and one is not strictly needed. Rainloop can be added either as another service within the stack, or as a standalone service. Rainloop will be covered in a future recipe.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. LetsEncrypt authorized email address for domain
4. Access to manage DNS records for domains
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/docker-mailserver:
```
cd /var/data
mkdir docker-mailserver
cd docker-mailserver
mkdir {maildata,mailstate,config,letsencrypt,rainloop}
```
### Get LetsEncrypt certificate
Decide on the FQDN to assign to your mailserver. You can service multiple domains from a single mailserver - i.e., bob@dev.example.com and daphne@prod.example.com can both be served by **mail.example.com**.
The docker-mailserver container can _renew_ our LetsEncrypt certs for us, but it can't generate them. To do this, we need to run certbot (from a container) to request the initial certs and create the appropriate directory structure.
In the example below, since I'm already using Traefik to manage the LE certs for my web platforms, I opted to use the DNS challenge to prove my ownership of the domain. The certbot client will prompt you to add a DNS record for domain verification.
```
docker run -ti --rm -v \
"$(pwd)"/letsencrypt:/etc/letsencrypt certbot/certbot \
--manual --preferred-challenges dns certonly \
-d mail.example.com
```
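When certbot exits successfully, you should find the usual LetsEncrypt directory structure under ```letsencrypt/```, something like:
```
ls letsencrypt/live/mail.example.com
cert.pem  chain.pem  fullchain.pem  privkey.pem
```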
### Get setup.sh
docker-mailserver comes with a handy bash script for managing the stack (which is just really a wrapper around the container.) It'll make our setup easier, so download it into the root of your configuration/data directory, and make it executable:
```
curl -o setup.sh \
https://raw.githubusercontent.com/tomav/docker-mailserver/master/setup.sh
chmod a+x ./setup.sh
```
### Create email accounts
For every email address required, run ```./setup.sh email add <email> <password>``` to create the account. The command returns no output.
You can run ```./setup.sh email list``` to confirm all of your addresses have been created.
### Create DKIM DNS entries
Run ```./setup.sh config dkim``` to create the necessary DKIM entries. The command returns no output.
Examine the keys created by opendkim to identify the DNS TXT records required:
```
for i in `find config/opendkim/keys/ -name mail.txt`; do \
echo $i; \
cat $i; \
done
```
You'll end up with something like this:
```
config/opendkim/keys/gitlab.example.com/mail.txt
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
"p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB" ) ; ----- DKIM key mail for gitlab.example.com
[root@ds1 mail]#
```
Create the necessary DNS TXT entries for your domain(s). Note that although opendkim splits the record across two lines, the actual record should be concatenated on creation. I.e., the DNS TXT record above should read:
```
"v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB"
```
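Once you've created the records (_and allowed time for DNS propagation_), you can sanity-check them with dig, e.g.:
```
dig +short TXT mail._domainkey.gitlab.example.com
```
The output should match the concatenated record above.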
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (_v3.2 - because we need to expose mail ports in "host mode"_), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3.2'
services:
mail:
image: tvial/docker-mailserver:latest
ports:
- target: 25
published: 25
protocol: tcp
mode: host
- target: 587
published: 587
protocol: tcp
mode: host
- target: 993
published: 993
protocol: tcp
mode: host
- target: 995
published: 995
protocol: tcp
mode: host
volumes:
- /var/data/docker-mailserver/maildata:/var/mail
- /var/data/docker-mailserver/mailstate:/var/mail-state
- /var/data/docker-mailserver/config:/tmp/docker-mailserver
- /var/data/docker-mailserver/letsencrypt:/etc/letsencrypt
env_file: /var/data/docker-mailserver/docker-mailserver.env
networks:
- internal
deploy:
replicas: 1
rainloop:
image: hardware/rainloop
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:rainloop.example.com
- traefik.docker.network=traefik_public
- traefik.port=8888
volumes:
- /var/data/docker-mailserver/rainloop:/rainloop/data
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.2.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot.
A sample docker-mailserver.env file looks like this:
```
ENABLE_SPAMASSASSIN=1
ENABLE_CLAMAV=1
ENABLE_POSTGREY=1
ONE_DIR=1
OVERRIDE_HOSTNAME=mail.example.com
OVERRIDE_DOMAINNAME=mail.example.com
POSTMASTER_ADDRESS=admin@example.com
PERMIT_DOCKER=network
SSL_TYPE=letsencrypt
```
## Serving
### Launch mailserver
Launch the mail server stack by running ```docker stack deploy docker-mailserver -c <path-to-docker-mailserver.yml>```
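To confirm the server is answering (_and presenting your LetsEncrypt certificate_), you can test the submission port from any host with openssl:
```
openssl s_client -connect mail.example.com:587 -starttls smtp
```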
## Chef's Notes 📓
1. One of the elements of this design which I didn't appreciate at first is that since the config is entirely file-based, **setup.sh** can be run on any container host, provided it has the shared data mounted. This means that even though docker-mailserver was not designed with docker swarm in mind, it works perfectly with swarm. I.e., from any node, regardless of where the container is actually running, you're able to add/delete email addresses, view logs, etc.
2. If you're using sieve with Rainloop, take note of the [workaround](https://discourse.geek-kitchen.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley)
# MatterMost
MatterMost is an open-source, self-hosted team chat and collaboration platform, commonly deployed as an alternative to Slack.
![MatterMost Screenshot](../images/mattermost.png)
This recipe deploys MatterMost Team Edition on the swarm, backed by a PostgreSQL database, with a companion container taking automated database dumps.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/mattermost:
```
mkdir -p /var/data/mattermost/{cert,config,data,logs,plugins,database-dump}
mkdir -p /var/data/realtime/mattermost/database
```
### Prepare environment
Create /var/data/config/mattermost/mattermost.env, and populate it with the following variables:
```
POSTGRES_USER=mmuser
POSTGRES_PASSWORD=mmuser_password
POSTGRES_DB=mattermost
MM_USERNAME=mmuser
MM_PASSWORD=mmuser_password
MM_DBNAME=mattermost
```
Now create /var/data/config/mattermost/mattermost-backup.env, and populate it with the following variables:
```
PGHOST=db
PGUSER=mmuser
PGPASSWORD=mmuser_password
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
db:
image: mattermost/mattermost-prod-db
env_file: /var/data/config/mattermost/mattermost.env
volumes:
- /var/data/realtime/mattermost/database:/var/lib/postgresql/data
networks:
- internal
app:
image: mattermost/mattermost-team-edition
env_file: /var/data/config/mattermost/mattermost.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:mattermost.example.com
- traefik.docker.network=traefik_public
# MatterMost listens on 8065 by default
- traefik.port=8065
volumes:
- /var/data/mattermost/config:/mattermost/config:rw
- /var/data/mattermost/data:/mattermost/data:rw
- /var/data/mattermost/logs:/mattermost/logs:rw
- /var/data/mattermost/plugins:/mattermost/plugins:rw
db-backup:
image: mattermost/mattermost-prod-db
env_file: /var/data/config/mattermost/mattermost-backup.env
volumes:
- /var/data/mattermost/database-dump:/dump
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs -r rm --
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.40.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch MatterMost stack
Launch the MatterMost stack by running ```docker stack deploy mattermost -c <path-to-docker-compose.yml>```
Browse to your new instance at https://**YOUR-FQDN**, and create your first account - the first account created on a fresh MatterMost instance is granted system admin rights.
## Chef's Notes 📓
1. The db-backup container takes a scheduled dump of the MatterMost database, keeping the last 7 copies - adjust this via ```BACKUP_FREQUENCY``` and ```BACKUP_NUM_KEEP``` in mattermost-backup.env.
hero: Miniflux - A recipe for a lightweight minimalist RSS reader
# Miniflux
Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
![Miniflux Screenshot](../images/miniflux.png)
!!! tip "Sponsored Project"
Miniflux is one of my [sponsored projects](/sponsored-projects/) - a project I financially support on a regular basis because of its utility to me. Although I get to process my RSS feeds less frequently than I'd like to!
I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate:
* Compatible with the Fever API, read your feeds through existing mobile and desktop clients (_This is the killer feature for me. I hardly ever read RSS on my desktop, I typically read on my iPhone or iPad, using [Fiery Feeds](http://cocoacake.net/apps/fiery/) or my new squeeze, [Unread](https://www.goldenhillsoftware.com/unread/)_)
* Send your bookmarks to Pinboard, Wallabag, Shaarli or Instapaper (_I use this to automatically pin my bookmarks for collection on my [blog](https://www.funkypenguin.co.nz/blog/)_)
* Feeds can be configured to download a "full" version of the content (_rather than an excerpt_)
* Use the Bookmarklet to subscribe to a website directly from any browsers
!!! abstract "2.0+ is a bit different"
[Some things changed](https://docs.miniflux.net/en/latest/migration.html) when Miniflux 2.0 was released. For one thing, the only supported database is now postgresql (_no more SQLite_). External themes are gone, as is PHP (_in favor of golang_). It's been a controversial change, but I'm keen on minimal and single-purpose, so I'm still very happy with the direction of development. The developer has laid out his [opinions](https://docs.miniflux.net/en/latest/opinionated.html) re the decisions he's made in the course of development.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry pointing your Miniflux url (i.e. _miniflux.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
Create the location for the bind-mount of the application data, so that it's persistent:
```
mkdir -p /var/data/miniflux/database-dump
mkdir -p /var/data/runtime/miniflux/database
```
### Setup environment
Create ```/var/data/config/miniflux/miniflux.env``` something like this:
```
DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
POSTGRES_USER=miniflux
POSTGRES_PASSWORD=secret
# This is necessary for the miniflux to update the db schema, even on an empty DB
RUN_MIGRATIONS=1
# Uncomment this on first run, else leave it commented out after adding your own user account
CREATE_ADMIN=1
ADMIN_USERNAME=admin
ADMIN_PASSWORD=test1234
# For backups
PGUSER=miniflux
PGPASSWORD=secret
PGHOST=db
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
The entire application is configured using environment variables, including the initial username. Once you've successfully deployed once, comment out ```CREATE_ADMIN``` and the two successive lines.
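After your first successful deployment, that portion of the env file would look like this:
```
# CREATE_ADMIN=1
# ADMIN_USERNAME=admin
# ADMIN_PASSWORD=test1234
```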
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
miniflux:
image: miniflux/miniflux:2.0.7
env_file: /var/data/config/miniflux/miniflux.env
volumes:
- /etc/localtime:/etc/localtime:ro
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:miniflux.example.com
- traefik.port=8080
- traefik.docker.network=traefik_public
db:
env_file: /var/data/config/miniflux/miniflux.env
image: postgres:10.1
volumes:
- /var/data/runtime/miniflux/database:/var/lib/postgresql/data
- /etc/localtime:/etc/localtime:ro
networks:
- internal
db-backup:
image: postgres:10.1
env_file: /var/data/config/miniflux/miniflux.env
volumes:
- /var/data/miniflux/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs -r rm --
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.22.0/24
```
## Serving
### Launch Miniflux stack
Launch the Miniflux stack by running ```docker stack deploy miniflux -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, using the credentials you set up in the environment file. After this, change your user/password as you see fit, and comment out the ```CREATE_ADMIN``` line in the env file (_if you don't, then an **additional** admin will be created the next time you deploy_)
## Chef's Notes 📓
1. Find the bookmarklet under the **Settings -> Integration** page.
# Minio
Minio is a high performance distributed object storage server, designed for
large-scale private cloud infrastructure.
However, at its simplest, Minio allows you to expose a local filestructure via the [Amazon S3 API](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html). You could, for example, use it to provide access to "buckets" (folders) of data on your filestore, secured by access/secret keys, just like AWS S3. You can further interact with your "buckets" with common tools, just as if they were hosted on S3.
Under a more advanced configuration, Minio runs in distributed mode, with [features](https://www.minio.io/features.html) including high-availability, mirroring, erasure-coding, and "bitrot detection".
![Minio Screenshot](../images/minio.png)
Possible use-cases:
1. Sharing files (_protected by user accounts with secrets_) via HTTPS, either as read-only or read-write, in such a way that the bucket could be mounted to a remote filesystem using common S3-compatible tools, like [goofys](https://github.com/kahing/goofys). Ever wanted to share a folder with friends, but didn't want to open additional firewall ports etc?
2. Simulating S3 in a dev environment
3. Mirroring an S3 bucket locally
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need a directory to hold our minio file store, as well as our minio client config, so create a structure at /var/data/minio:
```
mkdir /var/data/minio
cd /var/data/minio
mkdir -p {mc,data}
```
### Prepare environment
Create /var/data/config/minio/minio.env, and populate it with the following variables:
```
MINIO_ACCESS_KEY=<some random, complex string>
MINIO_SECRET_KEY=<another random, complex string>
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3.1'
services:
app:
image: minio/minio
env_file: /var/data/config/minio/minio.env
volumes:
- /var/data/minio/data:/data
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:minio.example.com
- traefik.port=9000
command: minio server /data
networks:
traefik_public:
external: true
```
## Serving
### Launch Minio stack
Launch the Minio stack by running ```docker stack deploy minio -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with the access key and secret key you specified in minio.env.
If you created ```/var/data/minio```, you'll see nothing. If you referenced existing data, you should see all subdirectories in your existing folder represented as buckets.
If all you need is single-user access to your data, you're done! 🎉
If, however, you want to expose data to multiple users, at different privilege levels, you'll need the minio client to create some users and (_potentially_) policies...
### Setup minio client
To administer the Minio server, we need the Minio client. While it's possible to download the minio client and run it locally, it's just as easy to do it within a small (5Mb) container.
I created an alias on my docker nodes, allowing me to run mc quickly:
```
alias mc='docker run -it -v /var/data/minio/mc/:/root/.mc/ --network traefik_public minio/mc'
```
Now I use the alias to launch the client shell, and connect to my minio instance (_I could also use the external, traefik-provided URL_)
```
root@ds1:~# mc config host add minio http://app:9000 admin iambatman
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `minio` successfully.
root@ds1:~#
```
### Add (readonly) user
Use mc to add a (readonly or readwrite) user, by running ``` mc admin user add minio <access key> <secret key> <access level>```
Example:
```
root@ds1:~# mc admin user add minio spiderman peterparker readonly
Added user `spiderman` successfully.
root@ds1:~#
```
Confirm by listing your users (_admin is excluded from the list_):
```
root@node1:~# mc admin user list minio
enabled spiderman readonly
root@node1:~#
```
### Make a bucket accessible to users
By default, all buckets have no "policies" attached to them, and so can only be accessed by the administrative user. Having created some readonly/read-write users above, you'll be wanting to grant them access to buckets.
The simplest permission scheme is "on or off". Either a bucket has a policy, or it doesn't. (_I believe you can apply policies to subdirectories of buckets in a more advanced configuration_)
After **no** policy, the most restrictive policy you can attach to a bucket is "download". This policy will allow authenticated users to download contents from the bucket. Apply the "download" policy to a bucket by running ```mc policy download minio/<bucket name>```, i.e.:
```
root@ds1:# mc policy download minio/comics
Access permission for `minio/comics` is set to `download`
root@ds1:#
```
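Users can now interact with the bucket using any S3-compatible tooling. As a sketch (_assuming the "spiderman" user created above, and that your Minio instance answers on https://minio.example.com_), listing the bucket with the standard AWS CLI would look something like:
```
aws configure set aws_access_key_id spiderman
aws configure set aws_secret_access_key peterparker
aws --endpoint-url https://minio.example.com s3 ls s3://comics
```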
### Advanced bucketing
There are some clever complexities you can achieve with user/bucket policies, including:
* A public bucket, which requires no authentication to read or even write (_for a public dropbox, for example_)
* A special bucket, hidden from most users, but available to VIP users by application of a custom "[canned policy](https://docs.minio.io/docs/minio-multi-user-quickstart-guide.html)"
### Mount a minio share remotely
Having setup your buckets, users, and policies - you can give out your minio external URL, and user access keys to your remote users, and they can S3-mount your buckets, interacting with them based on their user policy (_read-only or read/write_)
I tested the S3 mount using [goofys](https://github.com/kahing/goofys), "a high-performance, POSIX-ish Amazon S3 file system written in Go".
First, I created ~/.aws/credentials, as follows:
```
[default]
aws_access_key_id=spiderman
aws_secret_access_key=peterparker
```
And then I ran (_in the foreground, for debugging_), ```goofys -f --debug_s3 --debug_fuse --endpoint=https://traefik.example.com <bucketname> <local mount point>```
To permanently mount an S3 bucket using goofys, I'd add something like this to /etc/fstab:
```
goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode=0666 0 0
```
## Chef's Notes 📓
1. There are many S3-filesystem-mounting tools available, I just picked Goofys because it's simple. Google is your friend :)
2. Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets
3. Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets

View File

@@ -15,7 +15,7 @@ A workaround to this bug is to run an MQTT broker **external** to the raspberry
![MQTT Screenshot](../images/mqtt.png)
[MQTT](https://mqtt.org/faq) stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal of the emerging machine-to-machine (M2M) or Internet of Things world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.
[MQTT](https://mqtt.org/faq) stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal of the emerging machine-to-machine (M2M) or Internet of Things world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.
## Ingredients
@@ -204,4 +204,4 @@ mqtt-65f4d96945-bjj44 1/1 Running 0 5m
To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (```kubectl get nodes -o wide```) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :)
## Chef's Notes 📓
## Chef's Notes

207
manuscript/recipes/mqtt.mde Normal file
View File

@@ -0,0 +1,207 @@
hero: Kubernetes. The hero we deserve.
!!! danger "This recipe is a work in progress"
This recipe is **incomplete**, and is featured to align the [patrons](https://www.patreon.com/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [all Patreon patrons](https://www.patreon.com/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
# MQTT broker
I use Elias Kotlyar's [excellent custom firmware](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks) for Xiaomi DaFang/XiaoFang cameras, enabling RTSP, MQTT, motion tracking, and other features, integrating directly with [Home Assistant](/recipes/homeassistant/).
There's currently a [mysterious bug](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks/issues/638) though, which prevents TCP communication between Home Assistant and the camera, when MQTT services are enabled on the camera and the mqtt broker runs on the same Raspberry Pi as Home Assistant, using [Hass.io](https://www.home-assistant.io/hassio/).
A workaround to this bug is to run an MQTT broker **external** to the raspberry pi, which makes the whole problem GoAway(tm). Since an MQTT broker is a single, self-contained container, I've written this recipe as an introduction to our Kubernetes cluster design.
![MQTT Screenshot](../images/mqtt.png)
[MQTT](https://mqtt.org/faq) stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal of the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.
## Ingredients
1. A [Kubernetes cluster](/kubernetes/digital-ocean/)
## Preparation
### Create data locations
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
```
mkdir /var/data/config/mqtt
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the mqtt stack by creating the following .yml:
```
cat <<EOF > /var/data/config/mqtt/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
name: mqtt
EOF
kubectl create -f /var/data/config/mqtt/namespace.yml
```
### Create persistent volume claim
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the LetsEncrypt certificate data:
```
cat <<EOF > /var/data/config/mqtt/persistent-volumeclaim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mqtt-volumeclaim
namespace: mqtt
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
EOF
kubectl create -f /var/data/config/mqtt/persistent-volumeclaim.yml
```
### Create nodeport service
I like to expose my services using nodeport (_limited to ports 30000-32767_), and then use an external haproxy load balancer to make these available externally. (_This avoids having to pay per-port charges for a loadbalancer from the cloud provider_)
```
cat <<EOF > /var/data/config/mqtt/service-nodeport.yml
kind: Service
apiVersion: v1
metadata:
name: mqtt-nodeport
namespace: mqtt
spec:
selector:
app: mqtt
type: NodePort
ports:
- name: mqtts
port: 8883
protocol: TCP
nodePort: 30883
EOF
kubectl create -f /var/data/config/mqtt/service-nodeport.yml
```
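For illustration, the matching configuration on an external haproxy load balancer might look something like the sketch below (_the node names and IPs are assumptions - substitute your own_):
```
# /etc/haproxy/haproxy.cfg (fragment)
frontend mqtts
    bind *:8883
    mode tcp
    default_backend mqtt_nodes

backend mqtt_nodes
    mode tcp
    server node1 10.0.0.1:30883 check
    server node2 10.0.0.2:30883 check
    server node3 10.0.0.3:30883 check
```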
### Create secrets
It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents.
```
echo -n "myapikeyissosecret" > cloudflare-key.secret
echo -n "myemailaddress" > cloudflare-email.secret
echo -n "myemailaddress" > letsencrypt-email.secret
kubectl create secret -n mqtt generic mqtt-credentials \
--from-file=cloudflare-key.secret \
--from-file=cloudflare-email.secret \
--from-file=letsencrypt-email.secret
```
!!! tip "Why use ```echo -n```?"
Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
## Serving
### Create deployment
Now that we have a volume, a service, and a namespace, we can create a deployment for the mqtt pod. Note below the use of volume mounts, environment variables, as well as the secrets.
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary .yml files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```kubectl create -f *.yml``` 👍
```
cat <<EOF > /var/data/config/mqtt/mqtt.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: mqtt
name: mqtt
labels:
app: mqtt
spec:
replicas: 1
selector:
matchLabels:
app: mqtt
template:
metadata:
labels:
app: mqtt
spec:
containers:
- image: funkypenguin/mqtt-certbot-dns
imagePullPolicy: Always
# only uncomment these to get the container to run so that we can transfer files into the PV
# command: [ "/bin/sleep" ]
# args: [ "1h" ]
env:
- name: DOMAIN
value: "*.funkypenguin.co.nz"
- name: EMAIL
valueFrom:
secretKeyRef:
name: mqtt-credentials
key: letsencrypt-email.secret
- name: CLOUDFLARE_EMAIL
valueFrom:
secretKeyRef:
name: mqtt-credentials
key: cloudflare-email.secret
- name: CLOUDFLARE_KEY
valueFrom:
secretKeyRef:
name: mqtt-credentials
key: cloudflare-key.secret
# uncomment this to test LetsEncrypt validations
# - name: TESTCERT
# value: "true"
name: mqtt
resources:
requests:
memory: "50Mi"
cpu: "0.1"
volumeMounts:
# We need the LE certs to persist across reboots to avoid getting rate-limited (bad, bad)
- name: mqtt-volumeclaim
mountPath: /etc/letsencrypt
# A configmap for the mosquitto.conf file
- name: mosquitto-conf
mountPath: /mosquitto/conf/mosquitto.conf
subPath: mosquitto.conf
# A configmap for the mosquitto passwd file
- name: mosquitto-passwd
mountPath: /mosquitto/conf/passwd
subPath: passwd
volumes:
- name: mqtt-volumeclaim
persistentVolumeClaim:
claimName: mqtt-volumeclaim
- name: mosquitto-conf
configMap:
name: mosquitto.conf
- name: mosquitto-passwd
configMap:
name: passwd
EOF
kubectl create -f /var/data/config/mqtt/mqtt.yml
```
Check that your deployment is running, with ```kubectl get pods -n mqtt```. After a minute or so, you should see a "Running" pod, as illustrated below:
```
[davidy:~/Documents/Personal/Projects/mqtt-k8s] 130 % kubectl get pods -n mqtt
NAME READY STATUS RESTARTS AGE
mqtt-65f4d96945-bjj44 1/1 Running 0 5m
[davidy:~/Documents/Personal/Projects/mqtt-k8s] %
```
To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (```kubectl get nodes -o wide```) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :)
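As a quick smoke-test (_substituting one of your node IPs, and a user/password from your passwd ConfigMap_), you could subscribe to a topic using the stock mosquitto client:
```
mosquitto_sub -h <node-ip> -p 30883 -u <user> -P <password> -t test --capath /etc/ssl/certs
```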
## Chef's Notes 📓
# Munin
Munin is a networked resource monitoring tool that can help analyze resource trends and "what just happened to kill our performance?" problems. It is designed to be very plug and play. A default installation provides a lot of graphs with almost no work.
![Munin Screenshot](../images/munin.png)
Using Munin you can easily monitor the performance of your computers, networks, SANs, applications, weather measurements and whatever comes to mind. It makes it easy to determine "what's different today" when a performance problem crops up. It makes it easy to see how you're doing capacity-wise on any resources.
Munin uses the excellent RRDTool (written by Tobi Oetiker) and the framework is written in Perl, while plugins may be written in any language. Munin has a master/node architecture in which the master connects to all the nodes at regular intervals and asks them for data. It then stores the data in RRD files, and (if needed) updates the graphs. One of the main goals has been ease of creating new plugins (graphs).
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry for the hostname you intend to use, pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Prepare target nodes
Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use ```apt-get install munin-node```, and on RHEL/CentOS, run ```yum install munin-node```. Remember to edit ```/etc/munin/munin-node.conf```, and set your node to allow the server to poll it, by adding ```cidr_allow x.x.x.x/x```.
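For example, to permit polling from a master anywhere on 192.168.1.0/24 (_adjust for your own network_), you'd add something like this to ```/etc/munin/munin-node.conf```:
```
# allow the munin master to poll this node
cidr_allow 192.168.1.0/24
```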
On CentOS Atomic, of course, you can't install munin-node directly, but you can run it as a containerized instance. In this case, you can't use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using:
```
docker run -d --name munin-node --restart=always \
--privileged --net=host \
-v /:/rootfs:ro \
-v /sys:/sys:ro \
-e ALLOW="cidr_allow 0.0.0.0/0" \
-p 4949:4949 \
funkypenguin/munin-node
```
### Setup data locations
We'll need several directories to bind-mount into our container, so create them in /var/data/munin:
```
mkdir /var/data/munin
cd /var/data/munin
mkdir -p {log,lib,run,cache}
```
### Prepare environment
Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the ```MUNIN_USER```, ```MUNIN_PASSWORD```, and ```NODES``` values:
```
# Use these if you plan to protect the webUI with an oauth_proxy
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
MUNIN_USER=odin
MUNIN_PASSWORD=lokiisadopted
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=smtp-username
SMTP_PASSWORD=smtp-password
SMTP_USE_TLS=false
SMTP_ALWAYS_SEND=false
SMTP_MESSAGE='[${var:group};${var:host}] -> ${var:graph_title} -> warnings: ${loop<,>:wfields ${var:label}=${var:value}} / criticals: ${loop<,>:cfields ${var:label}=${var:value}}'
ALERT_RECIPIENT=monitoring@example.com
ALERT_SENDER=alerts@example.com
NODES="node1:192.168.1.1 node2:192.168.1.2 node3:192.168.1.3"
SNMP_NODES="router1:10.0.0.254:9999"
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: '3'
services:
munin:
image: funkypenguin/munin-server
env_file: /var/data/config/munin/munin.env
networks:
- internal
volumes:
- /var/data/munin/log:/var/log/munin
- /var/data/munin/lib:/var/lib/munin
- /var/data/munin/run:/var/run/munin
- /var/data/munin/cache:/var/cache/munin
proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/munin/munin.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:munin.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://munin:8080
-redirect-url=https://munin.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.24.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
## Serving
### Launch Munin stack
Launch the Munin stack by running ```docker stack deploy munin -c <path-to-docker-compose.yml>```
Log into your new instance at https://**YOUR-FQDN**, with the user and password you specified in munin.env above.
## Chef's Notes 📓
1. If you wanted to expose the Munin UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the munin container. You'd also need to add the traefik_public network to the munin container.
hero: Backup all your stuff. Share it. Privately.
# NextCloud
!!! important
Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
[NextCloud](https://www.nextcloud.org/) (_a [fork of OwnCloud](https://owncloud.org/blog/owncloud-statement-concerning-the-formation-of-nextcloud-by-frank-karlitschek/), led by original developer Frank Karlitschek_) is a suite of client-server software for creating and using file hosting services. It is functionally similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server.
- https://en.wikipedia.org/wiki/Nextcloud
![NextCloud Screenshot](../images/nextcloud.png)
This recipe is based on the official NextCloud docker image, but includes separate containers for the database (_MariaDB_), Redis (_for transactional locking_), automated database backup (_you *do* backup the stuff you care about, right?_), and a separate cron container for running NextCloud's 15-min crons.
## Ingredients
1. [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
2. [Traefik](/ha-docker-swarm/traefik) configured per design
3. DNS entry pointing your NextCloud url (_nextcloud.example.com_) to your [keepalived](/ha-docker-swarm/keepalived/) IP
## Preparation
### Setup data locations
We'll need several directories for [static data](/reference/data_layout/#static-data) to bind-mount into our container, so create them in /var/data/nextcloud (_so that they can be [backed up](/recipes/duplicity/)_)
```
mkdir /var/data/nextcloud
cd /var/data/nextcloud
mkdir -p {html,apps,config,data,database-dump}
```
Now make **more** directories for [runtime data](/reference/data_layout/#runtime-data) (_so that they're **not** backed up_):
```
mkdir /var/data/runtime/nextcloud
cd /var/data/runtime/nextcloud
mkdir -p {db,redis}
```
### Prepare environment
Create nextcloud.env, and populate with the following variables
```
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=FVuojphozxMVyaYCUWomiP9b
MYSQL_HOST=db
# For mysql
MYSQL_ROOT_PASSWORD=<set to something secure>
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=<set to something secure>
```
Now create a **separate** nextcloud-db-backup.env file, to capture the environment variables necessary to perform the backup. (_If the same variables are shared with the mariadb container, they [cause issues](https://discourse.geek-kitchen.funkypenguin.co.nz/t/nextcloud-funky-penguins-geek-cookbook/254/3?u=funkypenguin) with database access_)
```
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<set to something secure, same as MYSQL_ROOT_PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
!!! tip
I share (_with my [patreon patrons](https://www.patreon.com/funkypenguin)_) a private "_premix_" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a ```git pull``` and a ```docker stack deploy``` 👍
```
version: "3.0"
services:
nextcloud:
image: nextcloud
env_file: /var/data/config/nextcloud/nextcloud.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nextcloud.example.com
- traefik.docker.network=traefik_public
- traefik.port=80
volumes:
- /var/data/nextcloud/html:/var/www/html
- /var/data/nextcloud/apps:/var/www/html/custom_apps
- /var/data/nextcloud/config:/var/www/html/config
- /var/data/nextcloud/data:/var/www/html/data
db:
image: mariadb:10
env_file: /var/data/config/nextcloud/nextcloud.env
networks:
- internal
volumes:
- /var/data/runtime/nextcloud/db:/var/lib/mysql
db-backup:
image: mariadb:10
    env_file: /var/data/config/nextcloud/nextcloud-db-backup.env
volumes:
- /var/data/nextcloud/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs -r rm --
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
redis:
image: redis:alpine
networks:
- internal
volumes:
- /var/data/runtime/nextcloud/redis:/data
cron:
image: nextcloud
volumes:
      - /var/data/nextcloud/html:/var/www/html
      - /var/data/nextcloud/apps:/var/www/html/custom_apps
      - /var/data/nextcloud/config:/var/www/html/config
      - /var/data/nextcloud/data:/var/www/html/data
user: www-data
networks:
- internal
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
while [ ! -f /var/www/html/config/config.php ]; do
sleep 1
done
while true; do
php -f /var/www/html/cron.php
sleep 15m
done
EOF'
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.12.0/24
```
!!! note
Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.
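Not sure which subnets are already claimed? You can interrogate docker directly (_run this on a swarm manager node_):

```
# List each network alongside its configured subnet(s)
docker network ls --format '{{.Name}}' | xargs -I{} \
  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' {}
```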
## Serving
### Launch NextCloud stack
Launch the NextCloud stack by running ```docker stack deploy nextcloud -c <path-to-docker-compose.yml>```
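The services can take a minute or two to converge; standard swarm tooling will show you how they're getting on:

```
# Confirm each service reports its expected number of replicas
docker stack services nextcloud

# Investigate any service which isn't converging
docker stack ps nextcloud --no-trunc
```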
Log into your new instance at https://**YOUR-FQDN**, with user "admin" and the password you specified in nextcloud.env.
### Enable redis
To make NextCloud [a little snappier](https://docs.nextcloud.com/server/13/admin_manual/configuration_server/caching_configuration.html), edit ```/var/data/nextcloud/config/config.php``` (_now that it's been created on the first container launch_), and add the following. (_The `memcache` directives tell NextCloud to actually **use** Redis; without them, the `redis` array alone is ignored_):
```
'memcache.locking' => '\OC\Memcache\Redis',
'memcache.distributed' => '\OC\Memcache\Redis',
'redis' => array(
    'host' => 'redis',
    'port' => 6379,
),
```
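To confirm the config took effect, you can ask NextCloud's `occ` tool to dump the live system config. This is a sketch which assumes default swarm task naming, and that you run it on the node hosting the nextcloud container:

```
# Show the effective system config, including the memcache/redis settings
docker exec -u www-data $(docker ps -q -f name='nextcloud_nextcloud\.') \
  php /var/www/html/occ config:list system
```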
### Use service discovery
Want to use Calendar/Contacts on your iOS device? Want to avoid dictating long, rambling URL strings to your users, like ```https://nextcloud.batcave.org/remote.php/dav/principals/users/USERNAME/```?
Huzzah! NextCloud supports [service discovery for CalDAV/CardDAV](https://tools.ietf.org/html/rfc6764), allowing you to simply tell your device the primary URL of your server (_**nextcloud.batcave.org**, for example_), and have the device figure out the correct WebDAV path to use.
We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](/ha-docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to set up SSL **within** the NextCloud container.
When using a reverse proxy, your device requests a URL from your proxy (https://nextcloud.batcave.org/.well-known/caldav), and the reverse proxy passes that request **unencrypted** to the internal URL of the NextCloud instance (i.e., http://172.16.12.123/.well-known/caldav).
The Apache webserver in the NextCloud container (_knowing it was spoken to via HTTP_) responds with a 301 redirect to http://nextcloud.batcave.org/remote.php/dav/. See the problem? You requested an **HTTPS** (_encrypted_) URL, and in return received a redirect to an **HTTP** (_unencrypted_) URL. Any sensible client (_iOS included_) will refuse such shenanigans.
To correct this, we need to tell NextCloud to always redirect the .well-known URLs to an HTTPS location. This can only be done **after** deploying NextCloud, since it's only on first launch of the container that the .htaccess file is created in the first place.
To make NextCloud service discovery work with Traefik reverse proxy, edit ```/var/data/nextcloud/html/.htaccess```, and change this:
```
RewriteRule ^\.well-known/carddav /remote.php/dav/ [R=301,L]
RewriteRule ^\.well-known/caldav /remote.php/dav/ [R=301,L]
```
To this:
```
RewriteRule ^\.well-known/carddav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
RewriteRule ^\.well-known/caldav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
```
Then restart your container with ```docker service update nextcloud_nextcloud --force``` to restart apache.
You can test for success by running ```curl -i https://nextcloud.batcave.org/.well-known/carddav```. You should get a 301 redirect to your equivalent of https://nextcloud.batcave.org/remote.php/dav/, as below:
```
[davidy:~] % curl -i https://nextcloud.batcave.org/.well-known/carddav
HTTP/2 301
content-type: text/html; charset=iso-8859-1
date: Wed, 12 Dec 2018 08:30:11 GMT
location: https://nextcloud.batcave.org/remote.php/dav/
```
Note that this .htaccess can be overwritten by NextCloud, and you may have to reapply the change in future. I've created an [issue requesting a permanent fix](https://github.com/nextcloud/docker/issues/577).
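Until then, re-applying the fix after an upgrade can be scripted. The sed one-liner below is a sketch which assumes the stock rewrite rules shown above; it's safe to re-run, since once rewritten, the pattern no longer matches:

```
# Rewrite both CardDAV/CalDAV rules to redirect to an absolute HTTPS URL
sed -i 's|dav /remote.php/dav/|dav https://%{SERVER_NAME}/remote.php/dav/|' \
  /var/data/nextcloud/html/.htaccess
```

Follow it with the same ```docker service update nextcloud_nextcloud --force``` used above.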
!!! important
Ongoing development of this recipe is sponsored by [The Common Observatory](https://www.observe.global/). Thanks guys!
[![Common Observatory](../images/common_observatory.png)](https://www.observe.global/)
## Chef's Notes 📓
1. Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912).
2. I'm [not the first user](https://github.com/nextcloud/docker/issues/528) to stumble across the service discovery bug with reverse proxies.

Some files were not shown because too many files have changed in this diff Show More