mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-12 17:26:19 +00:00
Experiment with PDF generation
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
docs/recipes/archivebox.md (new file, 88 lines)
@@ -0,0 +1,88 @@
---
title: Archivebox - bookmark manager for your self-hosted stack
---

# Archivebox

[ArchiveBox](https://github.com/ArchiveBox/ArchiveBox) is a self-hosted internet archiving solution to collect and save sites you wish to view offline.

{ loading=lazy }

Features include:

- Uses standard formats such as HTML, JSON, PDF, and PNG
- Ability to autosave to [archive.org](https://github.com/ArchiveBox/ArchiveBox/wiki/Configuration#submit_archive_dot_org)
- Supports scheduled importing
- Supports realtime importing
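
The archive.org submission behaviour, for example, is driven by ArchiveBox's environment-based configuration (the variable name here is taken from the wiki page linked above; treat it as an assumption for your ArchiveBox version). A sketch of the addition you'd make to the service's `environment:` block:

```yaml
environment:
  # Assumed variable name, per the ArchiveBox configuration wiki;
  # set to False to keep snapshots local-only.
  - SUBMIT_ARCHIVE_DOT_ORG=False
```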

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we create directories to hold the data which ArchiveBox will store:

```bash
mkdir /var/data/archivebox
mkdir /var/data/config/archivebox
cd /var/data/config/archivebox
```

### Create docker-compose.yml

Create a Docker Swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3.2'

services:
  archivebox:
    image: archivebox/archivebox
    command: server --quick-init 0.0.0.0:8000
    ports:
      - 8000:8000
    networks:
      - traefik_public
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Pacific/Auckland
      - USE_COLOR=True
      - SHOW_PROGRESS=False
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public
        # traefikv1
        - traefik.frontend.rule=Host:archive.example.com
        - traefik.port=8000
        # traefikv2
        - "traefik.http.routers.archive.rule=Host(`archive.example.com`)"
        - "traefik.http.routers.archive.entrypoints=https"
        - "traefik.http.services.archive.loadbalancer.server.port=8000"
    volumes:
      - /var/data/archivebox:/data

networks:
  traefik_public:
    external: true
```

### Initializing ArchiveBox

Once you've created the compose file, run the following command to initialize ArchiveBox and create an admin account:

```bash
docker run -v /var/data/archivebox:/data -it archivebox/archivebox init --setup
```
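
Once initialized, you can also add content from the CLI, reusing the same bind-mount. A hedged sketch - the wrapper function name is mine, but `archivebox add` is the upstream CLI verb:

```shell
# Hypothetical convenience wrapper around the ArchiveBox CLI;
# uses the same /var/data/archivebox bind-mount as the init step above.
archive_url() {
  docker run -v /var/data/archivebox:/data -it archivebox/archivebox add "$1"
}
# usage: archive_url 'https://example.com'
```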

## Serving

### Launch ArchiveBox!

Launch the ArchiveBox stack by running `docker stack deploy archivebox -c <path-to-docker-compose.yml>`

[^1]: The inclusion of ArchiveBox was due to the efforts of @bencey in Discord (Thanks Ben!)

--8<-- "recipe-footer.md"
docs/recipes/autopirate/end.md (new file, 18 lines)
@@ -0,0 +1,18 @@
---
title: Launch the Autopirate Docker Swarm stack!
description: We're done. Launch your stack and enjoy watching the various apps interact with each other!
---

# Launch Autopirate stack

!!! warning
    This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

Launch the AutoPirate stack by running `docker stack deploy autopirate -c <path-to-docker-compose.yml>`

Confirm the container status by running `docker stack ps autopirate`, and wait for all containers to enter the "Running" state.

Log into each of your new tools at its respective HTTPS URL. You'll be prompted to authenticate against your OAuth provider, and upon success, redirected to the tool's UI.

[^1]: This is a complex stack. Sing out in the comments if you found a flaw, or need a hand :)

--8<-- "recipe-footer.md"
docs/recipes/autopirate/headphones.md (new file, 52 lines)
@@ -0,0 +1,52 @@
# Headphones

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Headphones](https://github.com/rembo10/headphones) is an automated music downloader for NZB and Torrent, written in Python. It supports [SABnzbd][sabnzbd], [NZBget][nzbget], Transmission, µTorrent, Deluge and Blackhole.

{ loading=lazy }

## Inclusion into AutoPirate

To include Headphones in your [AutoPirate][autopirate] stack, include the following in your autopirate.yml stack definition file:

```yaml
headphones:
  image: lscr.io/linuxserver/headphones:latest
  env_file: /var/data/config/autopirate/headphones.env
  volumes:
    - /var/data/autopirate/headphones:/config
    - /var/data/media:/media
  networks:
    - internal

headphones_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/headphones.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:headphones.example.com
      - traefik.port=8181
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.headphones.rule=Host(`headphones.example.com`)"
      - "traefik.http.routers.headphones.entrypoints=https"
      - "traefik.http.services.headphones.loadbalancer.server.port=8181"
      - "traefik.http.routers.headphones.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/heimdall.md (new file, 53 lines)
@@ -0,0 +1,53 @@
---
title: Install Heimdall Dashboard with Docker
description: Heimdall is a beautiful dashboard for all your web applications, and a perfect complement to your self-hosted Docker applications!
---

# Heimdall in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Heimdall Application Dashboard](https://heimdall.site/) is a dashboard for all your web applications. It needn't be limited to applications, though - you can add links to anything you like.

Heimdall provides a single URL to manage access to all of your autopirate tools, and includes "enhanced" (_i.e., display stats within Heimdall without launching the app_) access to [NZBGet][nzbget], [SABnzbd][sabnzbd], and friends.

## Inclusion into AutoPirate

To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include the following example in your autopirate.yml docker-compose stack definition file:

```yaml
heimdall:
  image: lscr.io/linuxserver/heimdall:latest
  env_file: /var/data/config/autopirate/heimdall.env
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/data/heimdall:/config
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:heimdall.example.com
      - traefik.port=80
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.heimdall.rule=Host(`heimdall.example.com`)"
      - "traefik.http.routers.heimdall.entrypoints=https"
      - "traefik.http.services.heimdall.loadbalancer.server.port=80"
      - "traefik.http.routers.heimdall.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"

[^2]: The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
docs/recipes/autopirate/index.md (new file, 129 lines)
@@ -0,0 +1,129 @@
---
description: A fully-featured recipe to automate finding, downloading, and organising media
---

# AutoPirate

Once the cutting edge of the "internet" (_pre-World-Wide-Web and Mosaic days_), Usenet is now a murky, geeky alternative to torrents for file-sharing. However, it's **cool** geeky, especially if you're into having a fully automated media platform.

A good starter for the Usenet scene is <https://www.reddit.com/r/usenet/>. Because it's so damn complicated, a host of automated tools exist to automate the process of finding, downloading, and managing content. The tools included in this recipe are illustrated in the following example:

{ loading=lazy }

This recipe presents a method to combine these tools into a single swarm deployment, and make them available securely.

## Menu

Tools included in the AutoPirate stack are:

* [SABnzbd][sabnzbd] is the workhorse. It takes `.nzb` files as input (_manually or from [Sonarr][sonarr], [Radarr][radarr], etc_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the `.nzb`, and then tests/repairs/combines/uncompresses them all into the final result - media files, to be consumed by [Plex][plex], [Emby][emby], [Komga][komga], [Calibre-Web][calibre-web], etc.

* [NZBGet][nzbget] downloads data from Usenet servers based on `.nzb` definitions. Like [SABnzbd][sabnzbd], but written in C++ and designed with performance in mind, to achieve maximum download speed while using very few system resources (_a popular alternative to SABnzbd_)

* [RTorrent][rtorrent] is a popular CLI-based BitTorrent client, and [ruTorrent](https://github.com/Novik/ruTorrent) is a powerful web interface for rtorrent. (_Yes, it's not Usenet, but Sonarr/Radarr will fulfill your watchlist using either Usenet **or** torrents, so it's worth including_)

* [NZBHydra][nzbhydra] is a meta search for NZB indexers. It provides easy access to a number of raw and newznab-based indexers. You can search all your indexers from one place, and use it as an indexer source for tools like [Sonarr][sonarr] or [Radarr][radarr].

* [Sonarr][sonarr] finds, downloads and manages TV shows

* [Radarr][radarr] finds, downloads and manages movies

* [Readarr][readarr] finds, downloads, and manages eBooks

* [Lidarr][lidarr] is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones][headphones], but is written using the same(ish) codebase as [Radarr][radarr] and [Sonarr][sonarr]. It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd][sabnzbd], [NZBGet][nzbget], Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_)

* [Mylar][mylar] is a tool for downloading and managing digital comic books / "graphic novels"

* [Headphones][headphones] is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, µTorrent, Deluge and Blackhole.

* [Lazy Librarian][lazylibrarian] is a tool to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing, and optionally GoogleBooks as sources for author and book info.

* [Ombi][ombi] provides an interface to request additions to a [Plex][plex]/[Emby][emby]/[Jellyfin][jellyfin] library using the above tools

* [Jackett][jackett] works as a proxy server: it translates queries from apps (*[Sonarr][sonarr], [Radarr][radarr], [Mylar][mylar], etc*) into tracker-site-specific HTTP queries, parses the HTML response, then sends results back to the requesting software.

Since this recipe is so long, and so many of the tools are optional to the final result (_i.e., if you're not interested in comics, you won't want Mylar_), I've described each individual tool on its own sub-recipe page (_below_), even though most of them are deployed very similarly.

## Ingredients

!!! summary "Ingredients"
    Already deployed:

    * [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
    * [X] [Traefik](/docker-swarm/traefik/) configured per design
    * [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/docker-swarm/keepalived/) IP

    Related:

    * [X] [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) to secure your Traefik-exposed services with an additional layer of authentication

## Preparation

### Setup data locations

We'll need a unique directory for each tool in the stack, bind-mounted into our containers, so create them upfront, under /var/data/autopirate:

```bash
mkdir /var/data/autopirate
cd /var/data/autopirate
mkdir -p {lazylibrarian,mylar,ombi,sonarr,radarr,headphones,plexpy,nzbhydra,sabnzbd,nzbget,rtorrent,jackett}
```

Create a directory for the storage of your downloaded media, i.e., something like:

```bash
mkdir /var/data/media
```

Create a user to "own" the above directories, and note the uid and gid of the created user. You'll need to specify the UID/GID in the environment variables passed to the containers (in the example below, I used 4242 - twice the meaning of life).

### Secure public access

What you'll quickly notice about this recipe is that __every__ web interface is protected by an [OAuth proxy](/reference/oauth_proxy/).

Why? Because these tools are developed by a handful of volunteer developers who are focused on adding features, not necessarily implementing robust security. Most users wouldn't expose these tools directly to the internet, so the tools have rudimentary (if any) access control.

To mitigate the risk associated with public exposure of these tools (_you're on your smartphone and you want to add a movie to your watchlist - what do you do, hotshot?_), you'll first need to authenticate against your chosen OAuth provider in order to gain access to each tool.

This is tedious, but you only have to do it once. Each tool (Sonarr, Radarr, etc) to be protected by an OAuth proxy requires unique configuration. I use GitHub to provide my OAuth, giving each tool a unique logo while I'm at it (make up your own random string for OAUTH2_PROXY_COOKIE_SECRET).

For each tool, create `/var/data/autopirate/<tool>.env`, and set the following:

```bash
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
PUID=4242
PGID=4242
```
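
The cookie secret just needs to be a random string. One common way to generate one (an assumption on my part - any random-string generator will do, and oauth2_proxy versions differ in their length requirements):

```shell
# Generate a random base64 string suitable for OAUTH2_PROXY_COOKIE_SECRET
openssl rand -base64 32
```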

Create at least /var/data/autopirate/authenticated-emails.txt, containing at least your own email address with your OAuth provider. If you wanted to grant access to a specific tool to other users, you'd need a unique `authenticated-emails-<tool>.txt` which included both your normal email address as well as any addresses to be granted tool-specific access.
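
For illustration (the addresses are hypothetical), `authenticated-emails.txt` is simply one email address per line:

```text
me@example.com
spouse@example.com
```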

### Setup components

#### Stack basics

**Start** with a swarm config file in docker-compose syntax, like this:

````yaml
version: '3'

services:
````

And **end** with a stanza like this:

````yaml
networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.11.0/24
````
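
Assembled, the skeleton of the finished `autopirate.yml` (with the per-tool stanzas from the sub-recipes elided) looks something like this:

````yaml
version: '3'

services:

  # ... one stanza per tool (sabnzbd, sonarr, radarr, etc) goes here ...

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.11.0/24
````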

--8<-- "reference-networks.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/jackett.md (new file, 51 lines)
@@ -0,0 +1,51 @@
---
title: How to setup Jackett in Docker alongside Sonarr / Radarr
description: Jackett works as a proxy server, standardizing your apps' (Radarr / Sonarr specifically) access to torrent indexers, and is a useful addition to the Autopirate Docker Swarm stack
---

# Jackett in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Jackett](https://github.com/Jackett/Jackett) works as a proxy server: it translates queries from apps (*[Sonarr][sonarr], [Radarr][radarr], [Mylar][mylar], etc*) into tracker-site-specific HTTP queries, parses the HTML response, then sends results back to the requesting software.

This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping & translation logic - removing the burden from other apps.

{ loading=lazy }

## Inclusion into AutoPirate

To include Jackett in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
jackett:
  image: lscr.io/linuxserver/jackett:latest
  env_file: /var/data/config/autopirate/jackett.env
  volumes:
    - /var/data/autopirate/jackett:/config
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:jackett.example.com
      - traefik.port=9117
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.jackett.rule=Host(`jackett.example.com`)"
      - "traefik.http.routers.jackett.entrypoints=https"
      - "traefik.http.services.jackett.loadbalancer.server.port=9117"
      - "traefik.http.routers.jackett.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/lazylibrarian.md (new file, 66 lines)
@@ -0,0 +1,66 @@
---
title: How to install Lazy Librarian in Docker
description: LazyLibrarian is a tool to follow authors and manage your ebook / audiobook collection. It's a handy addition to the Autopirate Docker Swarm stack!
---

# LazyLibrarian in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[LazyLibrarian](https://github.com/DobyTang/LazyLibrarian) is a tool to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing, and optionally GoogleBooks as sources for author and book info. Features include:

* Find authors and add them to the database
* List all books of an author, and mark ebooks or audiobooks as 'wanted'
* When processing downloaded books, save a cover picture (if available) and save all metadata into metadata.opf next to the bookfile (a Calibre-compatible format)
* AutoAdd feature for book management tools like Calibre, which must have books in a flattened directory structure - or use Calibre to import your books into an existing Calibre library
* Search for and download magazines, and monitor for new issues

{ loading=lazy }

## Inclusion into AutoPirate

To include LazyLibrarian in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
lazylibrarian:
  image: lscr.io/linuxserver/lazylibrarian:latest
  env_file: /var/data/config/autopirate/lazylibrarian.env
  volumes:
    - /var/data/autopirate/lazylibrarian:/config
    - /var/data/media:/media
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:lazylibrarian.example.com
      - traefik.port=5299
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.lazylibrarian.rule=Host(`lazylibrarian.example.com`)"
      - "traefik.http.routers.lazylibrarian.entrypoints=https"
      - "traefik.http.services.lazylibrarian.loadbalancer.server.port=5299"
      - "traefik.http.routers.lazylibrarian.middlewares=forward-auth"

calibre-server:
  image: regueiro/calibre-server
  volumes:
    - /var/data/media/Ebooks/calibre/:/opt/calibre/library
  networks:
    - internal
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"

[^2]: The calibre-server container co-exists alongside the LazyLibrarian (LL) container so that LL can automatically add a book to Calibre using the calibre-server interface. The Calibre library can then be properly viewed using the [calibre-web][calibre-web] recipe.
docs/recipes/autopirate/lidarr.md (new file, 58 lines)
@@ -0,0 +1,58 @@
---
title: How to install Lidarr (Music arr tool) in Docker
description: Lidarr is an automated music downloader for NZB and Torrent
---

# Lidarr in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Lidarr](https://lidarr.audio/) is an automated music downloader for NZB and Torrent. It performs the same function as [Headphones][headphones], but is written using the same(ish) codebase as [Radarr][radarr] and [Sonarr][sonarr]. It's blazingly fast, and includes beautiful album/artist art. Lidarr supports [SABnzbd][sabnzbd], [NZBGet][nzbget], Transmission, µTorrent, Deluge and Blackhole (_just like Sonarr / Radarr_)

{ loading=lazy }

## Inclusion into AutoPirate

To include Lidarr in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

````yaml
lidarr:
  image: lscr.io/linuxserver/lidarr:latest
  env_file: /var/data/config/lidarr/lidarr.env
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/data/media:/media
    - /var/data/lidarr:/config
  deploy:
    replicas: 1
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:lidarr.example.com
      - traefik.port=8686
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.lidarr.rule=Host(`lidarr.example.com`)"
      - "traefik.http.routers.lidarr.entrypoints=https"
      - "traefik.http.services.lidarr.loadbalancer.server.port=8686"
      - "traefik.http.routers.lidarr.middlewares=forward-auth"
````

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"

## Lidarr vs Headphones

Lidarr and [Headphones][headphones] perform the same basic function. The primary difference, from what I can tell, is that Lidarr is built on the "arr" stack, and so plays nicely with [Prowlarr][prowlarr].

## Integrate Lidarr with Beets

I've not tried this yet, but it seems that it's possible to [integrate Lidarr with Beets](https://www.reddit.com/r/Lidarr/comments/rahcer/my_lidarrbeets_automation_setup/).

--8<-- "recipe-footer.md"
docs/recipes/autopirate/mylar.md (new file, 52 lines)
@@ -0,0 +1,52 @@
---
title: How to run Mylar3 in Docker
description: Mylar is a tool for downloading and managing digital comic books, and is a valuable addition to the docker-swarm AutoPirate stack
---

# Mylar3 in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Mylar](https://github.com/mylar3/mylar3) is a tool for downloading and managing digital comic books.

## Inclusion into AutoPirate

To include Mylar in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose v3 stack definition file:

```yaml
mylar:
  image: lscr.io/linuxserver/mylar3:latest
  env_file: /var/data/config/autopirate/mylar.env
  volumes:
    - /var/data/autopirate/mylar:/config
    - /var/data/media:/media
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:mylar.example.com
      - traefik.port=8090
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.mylar.rule=Host(`mylar.example.com`)"
      - "traefik.http.routers.mylar.entrypoints=https"
      - "traefik.http.services.mylar.loadbalancer.server.port=8090"
      - "traefik.http.routers.mylar.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"

[^2]: If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and API key, you will need to set the `host_return` parameter to the fully qualified Mylar address (e.g. `http://mylar:8090`).
docs/recipes/autopirate/nzbget.md (new file, 54 lines)
@@ -0,0 +1,54 @@
---
title: How to download from usenet using NZBGet in Docker
description: NZBGet is a tool for downloading "content" from Usenet providers, and is the workhorse of our Autopirate Docker Swarm stack
---

# NZBGet in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

## Introduction

NZBGet performs the same function as [SABnzbd][sabnzbd] (_downloading content from Usenet servers_), but it's lightweight and fast(er), being written in C++ (_as opposed to Python_).

## Inclusion into AutoPirate

To include NZBGet in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
nzbget:
  image: lscr.io/linuxserver/nzbget
  env_file: /var/data/config/autopirate/nzbget.env
  volumes:
    - /var/data/autopirate/nzbget:/config
    - /var/data/media:/data
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:nzbget.example.com
      - traefik.port=6789
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.nzbget.rule=Host(`nzbget.example.com`)"
      - "traefik.http.routers.nzbget.entrypoints=https"
      - "traefik.http.services.nzbget.loadbalancer.server.port=6789"
      - "traefik.http.routers.nzbget.middlewares=forward-auth"
```

[^tfa]: Since we're relying on [Traefik Forward Auth][tfa] to protect us, we can simply disable NZBGet's own authentication, by setting ControlPassword to an empty value in nzbget.conf (i.e. `ControlPassword=`)

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/nzbhydra.md (new file, 66 lines)
@@ -0,0 +1,66 @@
---
title: Run nzbhydra2 in Docker
description: NZBHydra is a meta search engine for NZB indexers, and can be used to provide aggregated search results to usenet search tools such as Radarr, Sonarr, etc. Here's how to deploy NZBHydra2 in the Docker Swarm Autopirate stack
---

# NZBHydra 2 in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[NZBHydra2](https://github.com/theotherp/nzbhydra2) is a meta search for NZB indexers. It provides easy access to a number of raw and newznab-based indexers. You can search all your indexers from one place, and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato.

{ loading=lazy }

Features include:

- Searches Anizb, BinSearch, NZBIndex and any newznab-compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates, and returns them all in one place
- Add results to [NZBGet][nzbget] or [SABnzbd][sabnzbd]
- Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them
- Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn't support the ID, or if no results were found
- Compatible with [Sonarr][sonarr], [Radarr][radarr], [NZBGet][nzbget], [SABnzbd][sabnzbd], nzb360, CouchPotato, [Mylar][mylar], [Lazy Librarian][lazylibrarian], Sick Beard, [Jackett][jackett], Watcher, etc.
- Search and download history and extensive stats, e.g. indexer response times, download shares, NZB age, etc.
- Authentication and multi-user support
- Automatic update of NZB download status by querying configured downloaders
- RSS support with configurable cache times
- Torrent support (_although I prefer [Jackett][jackett] for this_):
    - For GUI searches, allowing you to download torrents to a blackhole folder
    - A separate Torznab-compatible endpoint for API requests, allowing you to merge multiple trackers
- Extensive configurability
- Migration of database and settings from v1

## Inclusion into AutoPirate

To include NZBHydra2 in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
nzbhydra2:
  image: lscr.io/linuxserver/hydra2:latest
  env_file: /var/data/config/autopirate/nzbhydra2.env
  volumes:
    - /var/data/autopirate/nzbhydra2:/config
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:nzbhydra.example.com
      - traefik.port=5076
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.nzbhydra.rule=Host(`nzbhydra.example.com`)"
      - "traefik.http.routers.nzbhydra.entrypoints=https"
      - "traefik.http.services.nzbhydra.loadbalancer.server.port=5076"
      - "traefik.http.routers.nzbhydra.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/ombi.md
---
title: Run Ombi in Docker (protecting the API with SSL)
description: Ombi is like your media butler - it recommends, finds what you want to watch! It includes a rich API, and since it's behind our traefik proxy, it inherits the same automatic SSL certificate generation as the rest of the Autopirate Docker Swarm stack.
---

# Ombi in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Ombi](https://github.com/tidusjar/Ombi) is a useful addition to the [autopirate][autopirate] stack. Features include:

* Lets users request Movies and TV Shows (_whether it be the entire series, an entire season, or even a single episode_)
* Easily manage your requests
* User management system (_supports plex.tv, Emby and local accounts_)
* A landing page that will give you the availability of your [Plex][plex]/[Emby][emby]/[Jellyfin][jellyfin] server, and also add custom notification text to inform your users of downtime
* Allows your users to get custom notifications!
* Will show if the request is already on Plex, or even if it's already monitored
* Automatically updates the status of requests when they are available on Plex/Emby/Jellyfin

{ loading=lazy }

## Inclusion into AutoPirate

To include Ombi in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
ombi:
  image: lscr.io/linuxserver/ombi:latest
  env_file: /var/data/config/autopirate/ombi.env
  volumes:
    - /var/data/autopirate/ombi:/config
  networks:
    - internal

ombi_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/ombi.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:ombi.example.com
      - traefik.port=3579
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.ombi.rule=Host(`ombi.example.com`)"
      - "traefik.http.routers.ombi.entrypoints=https"
      - "traefik.http.services.ombi.loadbalancer.server.port=3579"
      - "traefik.http.routers.ombi.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/prowlarr.md
---
title: Install Prowlarr in Docker
description: Prowlarr aggregates nzb/torrent searches. Imagine NZBHydra and Jackett had a baby, but it came out Arrr. Here's how you install Prowlarr into the Docker Swarm Autopirate stack
---

# Prowlarr in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Prowlarr](https://github.com/Prowlarr/Prowlarr) is an indexer manager/proxy built on the popular arr .net/reactjs base stack, to integrate with your various PVR apps.

Prowlarr supports management of both Torrent Trackers and Usenet Indexers. It integrates seamlessly with [Lidarr][lidarr], [Mylar3][mylar], [Radarr][radarr], [Readarr][readarr], and [Sonarr][sonarr], offering complete management of your indexers with no per-app indexer setup required!

{ loading=lazy }

Fancy features include:

* Usenet support for 24 indexers natively, including Headphones VIP, and support for any Newznab-compatible indexer via "Generic Newznab"
* Torrent support for over 500 trackers, with more added all the time
* Torrent support for any Torznab-compatible tracker via "Generic Torznab"
* Indexer sync to Sonarr/Radarr/Readarr/Lidarr/Mylar3, so no manual configuration of the other applications is required
* Indexer history and statistics
* Manual searching of trackers & indexers at a category level
* Support for pushing releases directly to your download clients from Prowlarr
* Indexer health and status notifications
* Per-indexer proxy support (SOCKS4, SOCKS5, HTTP, Flaresolverr)

## Inclusion into AutoPirate

To include Prowlarr in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file[^1]:

```yaml
prowlarr:
  image: lscr.io/linuxserver/prowlarr:nightly
  env_file: /var/data/config/prowlarr/prowlarr.env
  volumes:
    - /var/data/media/:/media
    - /var/data/prowlarr:/config
  deploy:
    replicas: 1
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:prowlarr.example.com
      - traefik.port=9696
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.prowlarr.rule=Host(`prowlarr.example.com`)"
      - "traefik.http.routers.prowlarr.entrypoints=https"
      - "traefik.http.services.prowlarr.loadbalancer.server.port=9696"
      - "traefik.http.routers.prowlarr.middlewares=forward-auth"
  networks:
    - internal
    - autopiratev2_public
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"

## Prowlarr vs Jackett

Prowlarr and [Jackett][jackett] perform similar roles (*they help you aggregate indexers*), but Prowlarr includes the following advantages over Jackett:

1. Prowlarr can search both Usenet **and** Torrent indexers
2. Given app API keys, Prowlarr can auto-configure your Arr apps, adding its indexers. Prowlarr currently auto-configures [Radarr][radarr], [Sonarr][sonarr], [Lidarr][lidarr], [Mylar][mylar], [Readarr][readarr], and [LazyLibrarian][lazylibrarian]
3. Prowlarr can integrate with Flaresolverr, making it possible to query indexers behind Cloudflare "are-you-a-robot" protection, which would otherwise not be possible.

--8<-- "recipe-footer.md"

[^1]: Because Prowlarr is so young (*just a little kitten! :cat:*), there is no `:latest` image tag yet, so we're using the `:nightly` tag instead. Don't come crying to me if baby-Prowlarr bites your ass!
docs/recipes/autopirate/radarr.md
---
title: How to run Radarr in Docker
description: Radarr is a tool for finding, downloading and managing movies, and is a valuable addition to the docker-swarm AutoPirate stack
---

# Radarr in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Radarr](https://radarr.video/) is a tool for finding, downloading and managing movies. Features include:

* Adding new movies with lots of information, such as trailers, ratings, etc.
* Can watch for better quality of the movies you have, and do an automatic upgrade, e.g. from DVD to Blu-Ray
* Automatic failed download handling will try another release if one fails
* Manual search, so you can pick any release, or see why a release was not downloaded automatically
* Full integration with SABnzbd and NZBGet
* Automatically searching for releases, as well as RSS sync
* Automatically importing downloaded movies
* Recognizing Special Editions, Director's Cut, etc.
* Identifying releases with hardcoded subs
* Importing movies from various online sources, such as IMDb Watchlists
* Full integration with Kodi, Plex (notification, library update)
* And a beautiful UI
* Importing metadata such as trailers or subtitles

{ loading=lazy }

## Inclusion into AutoPirate

To include Radarr in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose v3 stack definition file:

```yaml
radarr:
  image: lscr.io/linuxserver/radarr:latest
  env_file: /var/data/config/autopirate/radarr.env
  volumes:
    - /var/data/autopirate/radarr:/config
    - /var/data/media:/media
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:radarr.example.com
      - traefik.port=7878
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.radarr.rule=Host(`radarr.example.com`)"
      - "traefik.http.routers.radarr.entrypoints=https"
      - "traefik.http.services.radarr.loadbalancer.server.port=7878"
      - "traefik.http.routers.radarr.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/readarr.md
---
title: Run Readarr (Sonarr for books / audiobooks) in Docker
description: Readarr is "Sonarr/Radarr for eBooks and audiobooks", and plays perfectly with the rest of the Autopirate Docker Swarm stack
---

# Readarr in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Readarr](https://github.com/Readarr/Readarr), in the fine tradition of [Radarr][radarr] and [Sonarr][sonarr], is a tool for "sourcing" eBooks, using usenet or bittorrent indexers.

{ loading=lazy }

Features include:

* Support for major platforms: Windows, Linux, macOS, Raspberry Pi, etc.
* Automatically detects new books
* Can scan your existing library and download any missing books
* Automatic failed download handling will try another release if one fails
* Manual search, so you can pick any release, or see why a release was not downloaded automatically
* Fully configurable book renaming
* Full integration with [SABnzbd][sabnzbd] and [NZBGet][nzbget]
* Full integration with [Calibre][calibre-web] (add to library, conversion)
* And a beautiful UI!

## Inclusion into AutoPirate

To include Readarr in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
readarr:
  image: lscr.io/linuxserver/readarr:latest
  env_file: /var/data/config/autopirate/readarr.env
  volumes:
    - /var/data/autopirate/readarr:/config
    - /var/data/media/books:/books
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:readarr.example.com
      - traefik.port=8787
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.readarr.rule=Host(`readarr.example.com`)"
      - "traefik.http.routers.readarr.entrypoints=https"
      - "traefik.http.services.readarr.loadbalancer.server.port=8787"
      - "traefik.http.routers.readarr.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/rtorrent.md
---
title: Install rutorrent / rtorrent in Docker
description: ruTorrent (looks like uTorrent) is a popular web UI frontend to rtorrent, the de-facto ncurses-based CLI torrent client. And it's a handy addition to our Autopirate Docker Swarm stack!
---

# RTorrent / ruTorrent in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[RTorrent](http://rakshasa.github.io/rtorrent) is a popular CLI-based bittorrent client, and [ruTorrent](https://github.com/Novik/ruTorrent) is a powerful web interface for rtorrent.

{ loading=lazy }

## Choose incoming port

When using a torrent client from behind NAT (_which swarm, by nature, is_), you typically need to set a static port for inbound torrent communications. In the example below, I've set the port to 36258. You'll need to configure `/var/data/autopirate/rtorrent/rtorrent/rtorrent.rc` with the equivalent port.
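
The equivalent `rtorrent.rc` settings would look something like this (_a sketch only - exact option names vary by rtorrent version, and older releases use the `port_range` / `port_random` syntax instead of the `network.*` form shown here_):

```ini
# rtorrent.rc (excerpt) - pin the inbound port to match the port published by swarm
network.port_range.set = 36258-36258
# disable random port selection, so the listening port always matches the NAT mapping
network.port_random.set = no
```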

## Inclusion into AutoPirate

To include ruTorrent in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

```yaml
rtorrent:
  image: lscr.io/linuxserver/rutorrent
  env_file: /var/data/config/autopirate/rtorrent.env
  ports:
    - 36258:36258
  volumes:
    - /var/data/media/:/media
    - /var/data/autopirate/rtorrent:/config
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:rtorrent.example.com
      - traefik.port=80
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.rtorrent.rule=Host(`rtorrent.example.com`)"
      - "traefik.http.routers.rtorrent.entrypoints=https"
      - "traefik.http.services.rtorrent.loadbalancer.server.port=80"
      - "traefik.http.routers.rtorrent.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/sabnzbd.md
---
title: How to download from usenet using SABnzbd in Docker
description: SABnzbd is a tool for downloading "content" from Usenet providers, and is the (older) workhorse of our Autopirate Docker Swarm stack
---

# SABnzbd in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

## Introduction

SABnzbd is a workhorse of the stack. It takes .nzb files as input (_manually or from other [autopirate](/recipes/autopirate/) stack tools_), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files.

{ loading=lazy }

## Inclusion into AutoPirate

To include SABnzbd in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose stack definition file:

--8<-- "premix-cta.md"

```yaml
sabnzbd:
  image: lscr.io/linuxserver/sabnzbd:latest
  env_file: /var/data/config/autopirate/sabnzbd.env
  volumes:
    - /var/data/autopirate/sabnzbd:/config
    - /var/data/media:/media
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:sabnzbd.example.com
      - traefik.port=8080
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.sabnzbd.rule=Host(`sabnzbd.example.com`)"
      - "traefik.http.routers.sabnzbd.entrypoints=https"
      - "traefik.http.services.sabnzbd.loadbalancer.server.port=8080"
      - "traefik.http.routers.sabnzbd.middlewares=forward-auth"
```

!!! warning "Important Note re hostname validation"

    (**Updated 10 June 2018**): In SABnzbd [2.3.3](https://sabnzbd.org/wiki/extra/hostname-check.html), hostname verification was added as a mandatory check. SABnzbd will refuse inbound connections which weren't addressed to its own (_initially, autodetected_) hostname. This presents a problem within Docker Swarm, where container hostnames are random and disposable.

    You'll need to edit sabnzbd.ini (_only created after your first launch_), and **replace** the value in the `host_whitelist` configuration (_it's comma-separated_) with the name of your service within the swarm definition, as well as your FQDN as accessed via traefik.

    For example, mine simply reads `host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd`
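
Pulling that together, the relevant part of `sabnzbd.ini` ends up looking something like this (_illustrative only - substitute your own service name and FQDN; the option typically lives in the `[misc]` section_):

```ini
# sabnzbd.ini (excerpt, created after first launch)
[misc]
host_whitelist = sabnzbd.example.com, sabnzbd
```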

--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/autopirate/sonarr.md
---
title: How to setup Sonarr v3 in Docker
description: Sonarr is a tool for finding, downloading and managing TV series, and is a valuable addition to the docker-swarm AutoPirate stack
---

# Sonarr in Autopirate Docker Swarm stack

!!! warning
    This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.

[Sonarr](https://sonarr.tv/) is a tool for finding, downloading and managing your TV series.

{ loading=lazy }

## Inclusion into AutoPirate

To include Sonarr in your [AutoPirate](/recipes/autopirate/) stack, include something like the following example in your `autopirate.yml` docker-compose v3 stack definition file:

```yaml
sonarr:
  image: lscr.io/linuxserver/sonarr:latest
  env_file: /var/data/config/autopirate/sonarr.env
  volumes:
    - /var/data/autopirate/sonarr:/config
    - /var/data/media:/media
  networks:
    - internal
  deploy:
    labels:
      # traefik
      - traefik.enable=true
      - traefik.docker.network=traefik_public

      # traefikv1
      - traefik.frontend.rule=Host:sonarr.example.com
      - traefik.port=8989
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true

      # traefikv2
      - "traefik.http.routers.sonarr.rule=Host(`sonarr.example.com`)"
      - "traefik.http.routers.sonarr.entrypoints=https"
      - "traefik.http.services.sonarr.loadbalancer.server.port=8989"
      - "traefik.http.routers.sonarr.middlewares=forward-auth"
```

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
--8<-- "recipe-footer.md"
docs/recipes/bitwarden.md
---
title: How to run Bitwarden / bitwardenrs self hosted in Docker
description: Bitwarden / bitwardenrs is a self-hosted password management solution
---

# Bitwarden, self hosted in Docker Swarm

Heard about the [latest password breach](https://www.databreaches.net) (*since lunch*)? [HaveYouBeenPwned](http://haveibeenpwned.com) yet (*today*)? [Passwords are broken](https://www.theguardian.com/technology/2008/nov/13/internet-passwords), and as the number of sites for which you need to store credentials grows exponentially, so does the risk of using a common password.

"*Duh, use a password manager*", you say. Sure, but be aware that [even password managers have security flaws](https://www.securityevaluators.com/casestudies/password-manager-hacking/).

**OK, look smartass..** no software is perfect, and there will always be a risk of your credentials being exposed in ways you didn't intend. You can at least **minimize** the impact of such exposure by using a password manager to store unique credentials per-site. While [1Password](http://1password.com) is king of the commercial password managers, [BitWarden](https://bitwarden.com) is king of the open-source, self-hosted password managers.

Enter Bitwarden..

{ loading=lazy }

Bitwarden is a free and open source password management solution for individuals, teams, and business organizations. While Bitwarden does offer a paid / hosted version, the free version comes with the following (*better than any other free password manager!*):

* Access & install all Bitwarden apps
* Sync all of your devices, no limits!
* Store unlimited items in your vault
* Logins, secure notes, credit cards, & identities
* Two-step authentication (2FA)
* Secure password generator
* Self-host on your own server (optional)

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need to create a directory to bind-mount into our container, so create `/var/data/bitwarden`:

```bash
mkdir /var/data/bitwarden
```

### Setup environment

Create `/var/data/config/bitwarden/bitwarden.env`, and **leave it empty for now**.

!!! question
    What, why an empty env file? Well, the container supports lots of customizations via environment variables, for things like toggling self-registration, 2FA, etc. These are too complex to go into for this recipe, but readers are recommended to review the [dani-garcia/vaultwarden wiki](https://github.com/dani-garcia/vaultwarden/wiki), and customize their installation to suit.
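
For illustration only, a couple of commonly-tweaked options might look like this (_variable names per the vaultwarden wiki - confirm them there before relying on them_):

```bash
# /var/data/config/bitwarden/bitwarden.env (illustrative example)
SIGNUPS_ALLOWED=false                 # disable open self-registration once your account exists
ADMIN_TOKEN=<some-long-random-string> # enables the /admin interface
```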

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  bitwarden:
    image: vaultwarden/server
    env_file: /var/data/config/bitwarden/bitwarden.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/bitwarden:/data/:rw
    deploy:
      labels:
        # traefik common
        - "traefik.enable=true"
        - "traefik.docker.network=traefik_public"

        # traefikv1
        - traefik.web.frontend.rule=Host:bitwarden.example.com
        - traefik.web.port=80
        - traefik.hub.frontend.rule=Host:bitwarden.example.com;Path:/notifications/hub
        - traefik.hub.port=3012

        # traefikv2
        - "traefik.http.routers.bitwarden.rule=Host(`bitwarden.example.com`)"
        - "traefik.http.services.bitwarden.loadbalancer.server.port=80"
        - "traefik.http.routers.bitwarden.service=bitwarden"
        - "traefik.http.routers.bitwarden-websocket.rule=Host(`bitwarden.example.com`) && Path(`/notifications/hub`)"
        - "traefik.http.routers.bitwarden-websocket.service=bitwarden-websocket"
        - "traefik.http.services.bitwarden-websocket.loadbalancer.server.port=3012"
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

!!! note
    Note the clever use of two Traefik frontends to expose the notifications hub on port 3012. Thanks @gkoerk!

## Serving

### Launch Bitwarden stack

Launch the Bitwarden stack by running `docker stack deploy bitwarden -c <path-to-docker-compose.yml>`

Browse to your new instance at https://**YOUR-FQDN**, and create a new user account and master password (*just click the **Create Account** button without filling in your email address or master password*)

### Get the apps / extensions

Once you've created your account, jump over to <https://bitwarden.com/#download> and download the apps for your mobile and browser, and start adding your logins!

[^1]: You'll notice we're not using the *official* container images (*[all 6 of them required](https://help.bitwarden.com/article/install-on-premise/#install-bitwarden)!*), but rather a [more lightweight version ideal for self-hosting](https://hub.docker.com/r/vaultwarden/server). All of the elements are contained within a single container, and SQLite is used for the database backend.
[^2]: As mentioned above, readers should refer to the [dani-garcia/vaultwarden wiki](https://github.com/dani-garcia/vaultwarden) for details on customizing the behaviour of Bitwarden.
[^3]: The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz) - Unfortunately, on the 22nd August 2020, Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.

--8<-- "recipe-footer.md"
docs/recipes/bookstack.md
|
||||
---
|
||||
title: How to run BookStack in Docker
|
||||
description: BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information. Here's how to integrate linuxserver's bookstack image into your Docker Swarm stack.
|
||||
---
|
||||
|
||||
# BookStack in Docker
|
||||
|
||||
BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.
|
||||
|
||||
A friendly middle ground between heavyweights like MediaWiki or Confluence and [Gollum](/recipes/gollum/), BookStack relies on a database backend (so searching and versioning is easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing.
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oauth_proxy/), ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense.
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/bookstack:
|
||||
|
||||
```bash
|
||||
mkdir -p /var/data/bookstack/database-dump
|
||||
mkdir -p /var/data/runtime/bookstack/db
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create bookstack.env, and populate with the following variables. Set the [oauth_proxy](/reference/oauth_proxy/) variables provided by your OAuth provider (if applicable.)
|
||||
|
||||
```bash
|
||||
# For oauth-proxy (optional)
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
|
||||
# For MariaDB/MySQL database
|
||||
MYSQL_RANDOM_ROOT_PASSWORD=true
|
||||
MYSQL_DATABASE=bookstack
|
||||
MYSQL_USER=bookstack
|
||||
MYSQL_PASSWORD=secret
|
||||
|
||||
# Bookstack-specific variables
|
||||
DB_HOST=bookstack_db:3306
|
||||
DB_DATABASE=bookstack
|
||||
DB_USERNAME=bookstack
|
||||
DB_PASSWORD=secret

# For the db-backup service's dump rotation
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
|
||||
```
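The `secret` passwords above are placeholders - one quick way (an example approach, not prescribed by BookStack) to generate something stronger:

```shell
# Generate one random password, and use the same value for both
# MYSQL_PASSWORD and DB_PASSWORD (they must match)
PW=$(openssl rand -hex 16)
echo "MYSQL_PASSWORD=${PW}"
echo "DB_PASSWORD=${PW}"
```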
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
|
||||
db:
|
||||
image: mariadb:10
|
||||
env_file: /var/data/config/bookstack/bookstack.env
|
||||
networks:
|
||||
- internal
|
||||
volumes:
|
||||
- /var/data/runtime/bookstack/db:/var/lib/mysql
|
||||
|
||||
app:
|
||||
image: solidnerd/bookstack
|
||||
env_file: /var/data/config/bookstack/bookstack.env
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:bookstack.example.com
|
||||
- traefik.port=4180
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.bookstack.rule=Host(`bookstack.example.com`)"
|
||||
- "traefik.http.services.bookstack.loadbalancer.server.port=4180"
|
||||
- "traefik.enable=true"
|
||||
|
||||
# Remove if you wish to access the URL directly
|
||||
- "traefik.http.routers.bookstack.middlewares=forward-auth@file"
|
||||
|
||||
db-backup:
|
||||
image: mariadb:10
|
||||
env_file: /var/data/config/bookstack/bookstack.env
|
||||
volumes:
|
||||
- /var/data/bookstack/database-dump:/dump
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
entrypoint: |
|
||||
bash -c 'bash -s <<EOF
|
||||
trap "break;exit" SIGHUP SIGINT SIGTERM
|
||||
sleep 2m
|
||||
while /bin/true; do
|
||||
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
|
||||
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
|
||||
sleep $$BACKUP_FREQUENCY
|
||||
done
|
||||
EOF'
|
||||
networks:
|
||||
- internal
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.33.0/24
|
||||
```
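For the curious, the dump-rotation one-liner in the `db-backup` entrypoint works by listing the newest `BACKUP_NUM_KEEP` dumps twice, so that `uniq -u` emits only the older, once-listed files for deletion. A standalone sketch of the same logic (with `BACKUP_NUM_KEEP=2`, against dummy files in a temp dir):

```shell
cd "$(mktemp -d)"
touch -d '3 hours ago' dump_old.sql.gz
touch -d '2 hours ago' dump_mid.sql.gz
touch -d '1 hour ago'  dump_new.sql.gz
BACKUP_NUM_KEEP=2
# The newest N dumps appear twice in the combined list; 'uniq -u' drops
# them, leaving only the older dumps, which are then removed
(ls -t dump*.sql.gz | head -n $BACKUP_NUM_KEEP; ls dump*.sql.gz) | sort | uniq -u | xargs -r rm --
ls    # only dump_mid.sql.gz and dump_new.sql.gz remain
```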
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Bookstack stack
|
||||
|
||||
Launch the BookStack stack by running ```docker stack deploy bookstack -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_proxy, and then log in with the default username 'admin@admin.com' and password 'password'.
|
||||
|
||||
[^1]: If you wanted to expose the Bookstack UI directly, you could remove the traefik-forward-auth from the design.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
111
docs/recipes/calibre-web.md
Normal file
@@ -0,0 +1,111 @@
|
||||
---
|
||||
title: Run calibre-web in Docker
|
||||
description: Manage your ebook collection. Like a BOSS.
|
||||
---
|
||||
|
||||
# Calibre-Web in Docker
|
||||
|
||||
The [AutoPirate](/recipes/autopirate/) recipe includes [Lazy Librarian](https://github.com/itsmegb/LazyLibrarian), a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually **reading** them.
|
||||
|
||||
[Calibre-Web](https://github.com/janeczku/calibre-web) could be described as "_[Plex](/recipes/plex/) (or [Emby](/recipes/emby/)) for eBooks_" - it's a web-based interface to manage your eBook library, screenshot below:
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
Of course, you probably already manage your eBooks using the excellent [Calibre](https://calibre-ebook.com/), but this is primarily a (_powerful_) desktop application. Calibre-Web is an alternative way to manage / view your existing Calibre database, meaning you can continue to use Calibre on your desktop if you wish.
|
||||
|
||||
As a long-time Kindle user, Calibre-Web brings (among [others](https://github.com/janeczku/calibre-web)) the following features which appeal to me:
|
||||
|
||||
* Filter and search by titles, authors, tags, **series** and language
|
||||
* Create custom book collection (shelves)
|
||||
* Support for editing eBook metadata and deleting eBooks from Calibre library
|
||||
* Support for converting eBooks from EPUB to Kindle format (mobi/azw)
|
||||
* Send eBooks to Kindle devices with the click of a button
|
||||
* Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz)
|
||||
* Upload new books in PDF, epub, fb2 format
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need a directory to store some config data for the Calibre-Web container, so create /var/data/calibre-web, and ensure the directory is owned by the same user which owns your Calibre data (below):
|
||||
|
||||
```bash
|
||||
mkdir /var/data/calibre-web
|
||||
chown calibre:calibre /var/data/calibre-web # for example
|
||||
```
|
||||
|
||||
Ensure that your Calibre library is accessible to the swarm (_i.e., exists on shared storage_), and that the same user who owns the config directory above, also owns the actual calibre library data (_including the ebooks managed by Calibre_).
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create `/var/data/config/calibre-web/calibre-web.env`, and populate with the following variables
|
||||
|
||||
```bash
|
||||
|
||||
PUID=
|
||||
PGID=
|
||||
```
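If you're unsure what values to use, PUID/PGID should be the numeric uid/gid of whichever user owns your Calibre data - for example (assuming that user is named `calibre`):

```shell
# Numeric uid/gid of the Calibre data owner ('calibre' is an example name)
id -u calibre    # use this value for PUID
id -g calibre    # use this value for PGID
```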
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
app:
|
||||
image: technosoft2000/calibre-web
|
||||
    env_file: /var/data/config/calibre-web/calibre-web.env
|
||||
volumes:
|
||||
- /var/data/calibre-web:/config
|
||||
- /srv/data/Archive/Ebooks/calibre:/books
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:calibre.example.com
|
||||
- traefik.port=8083
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.calibre.rule=Host(`calibre.example.com`)"
|
||||
- "traefik.http.services.calibre.loadbalancer.server.port=8083"
|
||||
- "traefik.enable=true"
|
||||
|
||||
# Remove if you wish to access the URL directly
|
||||
- "traefik.http.routers.calibre.middlewares=forward-auth@file"
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.18.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Calibre-Web
|
||||
|
||||
Launch the Calibre-Web stack by running ```docker stack deploy calibre-web -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at `https://**YOUR-FQDN**`. You'll be directed to the initial GUI configuration. Set the first field (_Location of Calibre database_) to "_/books/_", and when complete, login using the default username of "**admin**" with password "**admin123**".
|
||||
|
||||
[^1]: Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
|
||||
[^2]: A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.
|
||||
[^3]: If you plan to use calibre-web to send `.mobi` files to your Kindle via `@kindle.com` email addresses, be sure to add the sending address to the "[Approved Personal Documents Email List](https://www.amazon.com/hz/mycd/myx#/home/settings/payment)"
|
||||
--8<-- "recipe-footer.md"
|
||||
310
docs/recipes/collabora-online.md
Normal file
@@ -0,0 +1,310 @@
|
||||
---
|
||||
description: Collabora Online is a FOSS alternative to MS Office, in your browser!
|
||||
---
|
||||
|
||||
# Collabora Online
|
||||
|
||||
Collabora Online Development Edition (or "[CODE](https://www.collaboraoffice.com/code/#what_is_code)") is the lightweight, or "home", edition of the commercially-supported [Collabora Online](https://www.collaboraoffice.com/collabora-online/) platform.
|
||||
|
||||
It's basically the [LibreOffice](https://www.libreoffice.org/) interface in a web-browser. CODE is not a standalone app, it's a backend intended to be accessed via "WOPI" from an existing interface (_in our case, [NextCloud](/recipes/nextcloud/)_)
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
|
||||
2. [Traefik](/docker-swarm/traefik/) configured per design
|
||||
3. DNS entry for the hostname (_i.e. "collabora.your-domain.com"_) you intend to use for Collabora, pointed to your [keepalived](/docker-swarm/keepalived/) IP
|
||||
4. [NextCloud](/recipes/nextcloud/) installed and operational
|
||||
5. [Docker-compose](https://docs.docker.com/compose/install/) installed on your node(s) - this is a special case which needs to run outside of Docker Swarm
|
||||
|
||||
## Preparation
|
||||
|
||||
### Explanation for complexity
|
||||
|
||||
Due to the clever magic that Collabora does to present a "headless" LibreOffice UI to the browser, the CODE docker container requires system capabilities which cannot be granted under Docker Swarm (_specifically, MKNOD_).
|
||||
|
||||
So we have to run Collabora itself in the next best thing to Docker swarm - a docker-compose stack. Using docker-compose will at least provide us with consistent and version-able configuration files.
|
||||
|
||||
This presents another problem though - Docker Swarm with Traefik is superb at making all our stacks "just work" with ingress routing and LetsEncrypt certificates. We don't want to have to do this manually (_like a cave-man_), so we engage in some trickery to allow us to still use our swarmed Traefik to terminate SSL.
|
||||
|
||||
We run a single swarmed Nginx instance, which forwards all requests to an upstream, with the target IP of the docker0 interface, on port 9980 (_the port exposed by the CODE container_)
|
||||
|
||||
We attach the necessary labels to the Nginx container to instruct Traefik to set up a front/backend for collabora.<ourdomain\>. Now incoming requests to `https://collabora.<ourdomain\>` will hit Traefik, be forwarded to nginx (_wherever in the swarm it's running_), and then to port 9980 on the same node that nginx is running on.
|
||||
|
||||
What if we're running multiple nodes in our swarm, and nginx ends up on a different node to the one running Collabora via docker-compose? Well, either constrain nginx to the same node as Collabora (_example below_), or simply launch an instance of Collabora on _every_ node. It's just a rendering / GUI engine after all; it doesn't hold any persistent data.
|
||||
|
||||
Here's a (_highly technical_) diagram to illustrate:
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need a directory for holding config to bind-mount into our containers, so create ```/var/data/collabora```, and ```/var/data/config/collabora``` for holding the docker/swarm config
|
||||
|
||||
```bash
|
||||
mkdir /var/data/collabora/
|
||||
mkdir /var/data/config/collabora/
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create /var/data/config/collabora/collabora.env, and populate with the following variables, customized for your installation.
|
||||
|
||||
!!! warning
|
||||
Note the following:
|
||||
|
||||
1. Variables are in lower-case, unlike our standard convention. This is to align with the CODE container
|
||||
2. Set domain to your [NextCloud](/recipes/nextcloud/) domain, and escape all the periods as per the example
|
||||
3. Set your server_name to collabora.<yourdomain\>. Escaping periods is unnecessary
|
||||
4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen 🔥
|
||||
|
||||
```bash
|
||||
username=admin
|
||||
password=ilovemypassword
|
||||
domain=nextcloud\.batcave\.com
|
||||
server_name=collabora.batcave.com
|
||||
termination=true
|
||||
```
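If you'd rather not escape the periods in `domain` by hand, a quick shell one-liner (just a convenience sketch) does it for you:

```shell
# Escape every period, as the CODE container's 'domain' variable expects
echo "domain=$(echo 'nextcloud.batcave.com' | sed 's/\./\\./g')"
```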
|
||||
|
||||
### Create docker-compose.yml
|
||||
|
||||
Create ```/var/data/config/collabora/docker-compose.yml``` as per the following example:
|
||||
|
||||
```yaml
|
||||
version: "3.0"
|
||||
|
||||
services:
|
||||
local-collabora:
|
||||
image: funkypenguin/collabora
|
||||
# the funkypenguin version has a patch to include "termination" behind SSL-terminating reverse proxy (traefik), see CODE PR #50.
|
||||
# Once merged, the official container can be used again.
|
||||
#image: collabora/code
|
||||
env_file: /var/data/config/collabora/collabora.env
|
||||
volumes:
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
|
||||
cap_add:
|
||||
- MKNOD
|
||||
ports:
|
||||
- 9980:9980
|
||||
```
|
||||
|
||||
### Create nginx.conf
|
||||
|
||||
Create ```/var/data/config/collabora/nginx.conf``` as per the following example, changing the ```server_name``` value to match the environment variable you established above:
|
||||
|
||||
```ini
|
||||
upstream collabora-upstream {
|
||||
# Run collabora under docker-compose, since it needs MKNOD cap, which can't be provided by Docker Swarm.
|
||||
# The IP here is the typical IP of docker0 - change if yours is different.
|
||||
server 172.17.0.1:9980;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name collabora.batcave.com;
|
||||
|
||||
# static files
|
||||
location ^~ /loleaflet {
|
||||
proxy_pass http://collabora-upstream;
|
||||
proxy_set_header Host $http_host;
|
||||
}
|
||||
|
||||
# WOPI discovery URL
|
||||
location ^~ /hosting/discovery {
|
||||
proxy_pass http://collabora-upstream;
|
||||
proxy_set_header Host $http_host;
|
||||
}
|
||||
|
||||
# Main websocket
|
||||
location ~ /lool/(.*)/ws$ {
|
||||
proxy_pass http://collabora-upstream;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_read_timeout 36000s;
|
||||
}
|
||||
|
||||
# Admin Console websocket
|
||||
location ^~ /lool/adminws {
|
||||
proxy_buffering off;
|
||||
proxy_pass http://collabora-upstream;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_read_timeout 36000s;
|
||||
}
|
||||
|
||||
# download, presentation and image upload
|
||||
location ~ /lool {
|
||||
    proxy_pass http://collabora-upstream;
|
||||
proxy_set_header Host $http_host;
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
### Create loolwsd.xml
|
||||
|
||||
[Until we understand](https://github.com/CollaboraOnline/Docker-CODE/pull/50) how to [pass trusted network parameters to the entrypoint script using environment variables](https://github.com/CollaboraOnline/Docker-CODE/issues/49), we have to maintain a manually edited version of ```loolwsd.xml```, and bind-mount it into our collabora container.
|
||||
|
||||
The way we do this is: we mount `/var/data/collabora/loolwsd.xml` as `/etc/loolwsd/loolwsd.xml-new`, allow the container to create its default `/etc/loolwsd/loolwsd.xml`, copy this default **over** our `/var/data/collabora/loolwsd.xml` (_via the `loolwsd.xml-new` mount_), and then update the container to use **our** `/var/data/collabora/loolwsd.xml` as `/etc/loolwsd/loolwsd.xml` instead (_confused yet?_)
|
||||
|
||||
Create an empty `/var/data/collabora/loolwsd.xml` by running `touch /var/data/collabora/loolwsd.xml`. We'll populate this in the next section...
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create `/var/data/config/collabora/collabora.yml` as per the following example, changing the traefik frontend_rule as necessary:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: "3.0"
|
||||
|
||||
services:
|
||||
|
||||
nginx:
|
||||
image: nginx:latest
|
||||
networks:
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:collabora.example.com
|
||||
- traefik.port=80
|
||||
- traefik.frontend.passHostHeader=true
|
||||
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.collabora.rule=Host(`collabora.example.com`)"
|
||||
- "traefik.http.services.collabora.loadbalancer.server.port=80"
|
||||
- "traefik.enable=true"
|
||||
# uncomment this line if you want to force nginx to always run on one node (i.e., the one running collabora)
|
||||
#placement:
|
||||
# constraints:
|
||||
# - node.hostname == ds1
|
||||
volumes:
|
||||
- /var/data/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
```
|
||||
|
||||
## Serving
|
||||
|
||||
### Generate loolwsd.xml
|
||||
|
||||
Well. This is awkward. There's no documented way to make Collabora work with Docker Swarm, so we're doing a bit of a hack here, until I understand [how to pass these arguments](https://github.com/CollaboraOnline/Docker-CODE/issues/49) via environment variables.
|
||||
|
||||
Launching Collabora is (_for now_) a 2-step process. First.. we launch collabora itself, by running:
|
||||
|
||||
```bash
|
||||
cd /var/data/config/collabora/
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Output looks something like this:
|
||||
|
||||
```bash
|
||||
root@ds1:/var/data/config/collabora# docker-compose up -d
|
||||
WARNING: The Docker Engine you're using is running in swarm mode.
|
||||
|
||||
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
|
||||
|
||||
To deploy your application across the swarm, use `docker stack deploy`.
|
||||
|
||||
Pulling local-collabora (funkypenguin/collabora:latest)...
|
||||
latest: Pulling from funkypenguin/collabora
|
||||
7b8b6451c85f: Pull complete
|
||||
ab4d1096d9ba: Pull complete
|
||||
e6797d1788ac: Pull complete
|
||||
e25c5c290bde: Pull complete
|
||||
4b8e1b074e06: Pull complete
|
||||
f51a3d1fb75e: Pull complete
|
||||
8b826e2ae5ad: Pull complete
|
||||
Digest: sha256:6cd38cb5cbd170da0e3f0af85cecf07a6bc366e44555c236f81d5b433421a39d
|
||||
Status: Downloaded newer image for funkypenguin/collabora:latest
|
||||
Creating collabora_local-collabora_1 ...
|
||||
Creating collabora_local-collabora_1 ... done
|
||||
root@ds1:/var/data/config/collabora#
|
||||
```
|
||||
|
||||
Now exec into the container (_from another shell session_), by running ```docker exec -it <container name> /bin/bash```. Make a copy of /etc/loolwsd/loolwsd.xml, by running ```cp /etc/loolwsd/loolwsd.xml /etc/loolwsd/loolwsd.xml-new```, and then exit the container with ```exit```.
|
||||
|
||||
Delete the collabora container by hitting CTRL-C in the docker-compose shell, running ```docker-compose rm```, and then altering this line in docker-compose.yml:
|
||||
|
||||
```yaml
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
|
||||
```
|
||||
|
||||
To this:
|
||||
|
||||
```yaml
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
|
||||
```
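This edit can also be made with a one-liner (a convenience sketch - review the file afterwards):

```shell
# Drop the '-new' suffix from the loolwsd.xml bind-mount target
sed -i 's|loolwsd.xml-new|loolwsd.xml|' /var/data/config/collabora/docker-compose.yml
grep loolwsd /var/data/config/collabora/docker-compose.yml   # verify the change
```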
|
||||
|
||||
Edit /var/data/collabora/loolwsd.xml, find the **storage.filesystem.wopi** section, and add lines like this to the existing allow rules (_to allow IPv6-enabled hosts to still connect with their IPv4 addresses_):
|
||||
|
||||
```xml
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
```
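You can sanity-check regex patterns like these from a shell before restarting the container - e.g. confirming the 192.168.x.x rule matches an IPv4-mapped IPv6 client address:

```shell
# Matches: prints the address and exits 0
echo '::ffff:192.168.0.10' | grep -E '^::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}$'

# No match: a public address is (correctly) not allowed by this rule
echo '::ffff:8.8.8.8' | grep -E '^::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}$' || echo "no match"
```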
|
||||
|
||||
Find the **net.post_allow** section, and add a line like this:
|
||||
|
||||
```xml
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
```
|
||||
|
||||
Find these 2 lines:
|
||||
|
||||
```xml
|
||||
<ssl desc="SSL settings">
|
||||
<enable type="bool" default="true">true</enable>
|
||||
```
|
||||
|
||||
And change to:
|
||||
|
||||
```xml
|
||||
<ssl desc="SSL settings">
|
||||
<enable type="bool" default="true">false</enable>
|
||||
```
|
||||
|
||||
Now re-launch collabora (_with the correct with loolwsd.xml_) under docker-compose, by running:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Once collabora is up, we launch the swarm stack, by running:
|
||||
|
||||
```bash
|
||||
docker stack deploy collabora -c /var/data/config/collabora/collabora.yml
|
||||
```
|
||||
|
||||
Visit `https://collabora.<yourdomain\>/l/loleaflet/dist/admin/admin.html` and confirm you can login with the user/password you specified in collabora.env
|
||||
|
||||
### Integrate into NextCloud
|
||||
|
||||
In NextCloud, install the **Collabora Online** app (<https://apps.nextcloud.com/apps/richdocuments>), and then under **Settings -> Collabora Online**, set your Collabora Online Server to ```https://collabora.<your domain>```
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
Now browse your NextCloud files. Click the plus (+) sign to create a new document, and create either a new document, spreadsheet, or presentation. Name your document and then click on it. If Collabora is setup correctly, you'll shortly enter into the rich editing interface provided by Collabora :)
|
||||
|
||||
[^1]: Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
74
docs/recipes/cyberchef.md
Normal file
@@ -0,0 +1,74 @@
|
||||
---
|
||||
title: Run an online a1z26 decoder with cyberchef (among others)
|
||||
description: Be a l33t h@xor with this toolkit from GCHQ. Run your own online instance of cyberchef, and decode / encode those nasty a1z26s!
|
||||
---
|
||||
|
||||
# CyberChef
|
||||
|
||||
Are you a [l33t h@x0r](https://en.wikipedia.org/wiki/Hackers_(film))? Do you need the right tools at your fingertips to support your [#masterhacker](https://reddit.com/r/masterhacker) skillz? Look no further than CyberChef, lovingly baked for you by your friends at GCHQ[^1]!
|
||||
|
||||
[^1]: [Government Communications Headquarters](https://en.wikipedia.org/wiki/GCHQ), commonly known as GCHQ, is an intelligence and security organisation responsible for providing signals intelligence and information assurance to the government and armed forces of the United Kingdom
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
[CyberChef](https://github.com/gchq/CyberChef) is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. These operations include simple encoding like XOR or Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more.
|
||||
|
||||
Here are some examples of fancy hax0r tricks you can do with CyberChef:
|
||||
|
||||
- [Decode a Base64-encoded string][2]
|
||||
- [Decrypt and disassemble shellcode][6]
|
||||
- [Perform AES decryption, extracting the IV from the beginning of the cipher stream][10]
|
||||
- [Automagically detect several layers of nested encoding][12]
|
||||
|
||||
Here's a [live demo](https://gchq.github.io/CyberChef)!
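To give a taste of what CyberChef's "From Base64" operation does under the hood, here's the shell equivalent (decoding the inner layer of the first example above):

```shell
echo 'U28gbG9uZyBhbmQgdGhhbmtzIGZvciBhbGwgdGhlIGZpc2gu' | base64 -d
# So long and thanks for all the fish.
```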
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
CyberChef doesn't require any persistent storage, or fancy configuration, so simply create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: "3.2" # https://docs.docker.com/compose/compose-file/compose-versioning/#version-3
|
||||
|
||||
services:
|
||||
cyberchef:
|
||||
image: mpepping/cyberchef
|
||||
deploy:
|
||||
labels:
|
||||
# traefik
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:cyberchef.example.com
|
||||
- traefik.port=8000
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.cyberchef.rule=Host(`cyberchef.example.com`)"
|
||||
- "traefik.http.routers.cyberchef.entrypoints=https"
|
||||
- "traefik.http.services.cyberchef.loadbalancer.server.port=8000"
|
||||
networks:
|
||||
- traefik_public
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
```
|
||||
|
||||
## Serving
|
||||
|
||||
### Cyber the Chef!
|
||||
|
||||
Launch your CyberChef stack by running ```docker stack deploy cyberchef -c <path-to-docker-compose.yml>```, and then visit the URL you chose to begin the hackery!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
[2]: https://gchq.github.io/CyberChef/#recipe=From_Base64('A-Za-z0-9%2B/%3D',true)&input=VTI4Z2JHOXVaeUJoYm1RZ2RHaGhibXR6SUdadmNpQmhiR3dnZEdobElHWnBjMmd1
|
||||
[6]: https://gchq.github.io/CyberChef/#recipe=RC4(%7B'option':'UTF8','string':'secret'%7D,'Hex','Hex')Disassemble_x86('64','Full%20x86%20architecture',16,0,true,true)&input=MjFkZGQyNTQwMTYwZWU2NWZlMDc3NzEwM2YyYTM5ZmJlNWJjYjZhYTBhYWJkNDE0ZjkwYzZjYWY1MzEyNzU0YWY3NzRiNzZiM2JiY2QxOTNjYjNkZGZkYmM1YTI2NTMzYTY4NmI1OWI4ZmVkNGQzODBkNDc0NDIwMWFlYzIwNDA1MDcxMzhlMmZlMmIzOTUwNDQ2ZGIzMWQyYmM2MjliZTRkM2YyZWIwMDQzYzI5M2Q3YTVkMjk2MmMwMGZlNmRhMzAwNzJkOGM1YTZiNGZlN2Q4NTlhMDQwZWVhZjI5OTczMzYzMDJmNWEwZWMxOQ
|
||||
[10]: https://gchq.github.io/CyberChef/#recipe=Register('(.%7B32%7D)',true,false)Drop_bytes(0,32,false)AES_Decrypt(%7B'option':'Hex','string':'1748e7179bd56570d51fa4ba287cc3e5'%7D,%7B'option':'Hex','string':'$R0'%7D,'CTR','Hex','Raw',%7B'option':'Hex','string':''%7D)&input=NTFlMjAxZDQ2MzY5OGVmNWY3MTdmNzFmNWI0NzEyYWYyMGJlNjc0YjNiZmY1M2QzODU0NjM5NmVlNjFkYWFjNDkwOGUzMTljYTNmY2Y3MDg5YmZiNmIzOGVhOTllNzgxZDI2ZTU3N2JhOWRkNmYzMTFhMzk0MjBiODk3OGU5MzAxNGIwNDJkNDQ3MjZjYWVkZjU0MzZlYWY2NTI0MjljMGRmOTRiNTIxNjc2YzdjMmNlODEyMDk3YzI3NzI3M2M3YzcyY2Q4OWFlYzhkOWZiNGEyNzU4NmNjZjZhYTBhZWUyMjRjMzRiYTNiZmRmN2FlYjFkZGQ0Nzc2MjJiOTFlNzJjOWU3MDlhYjYwZjhkYWY3MzFlYzBjYzg1Y2UwZjc0NmZmMTU1NGE1YTNlYzI5MWNhNDBmOWU2MjlhODcyNTkyZDk4OGZkZDgzNDUzNGFiYTc5YzFhZDE2NzY3NjlhN2MwMTBiZjA0NzM5ZWNkYjY1ZDk1MzAyMzcxZDYyOWQ5ZTM3ZTdiNGEzNjFkYTQ2OGYxZWQ1MzU4OTIyZDJlYTc1MmRkMTFjMzY2ZjMwMTdiMTRhYTAxMWQyYWYwM2M0NGY5NTU3OTA5OGExNWUzY2Y5YjQ0ODZmOGZmZTljMjM5ZjM0ZGU3MTUxZjZjYTY1MDBmZTRiODUwYzNmMWMwMmU4MDFjYWYzYTI0NDY0NjE0ZTQyODAxNjE1YjhmZmFhMDdhYzgyNTE0OTNmZmRhN2RlNWRkZjMzNjg4ODBjMmI5NWIwMzBmNDFmOGYxNTA2NmFkZDA3MWE2NmNmNjBlNWY0NmYzYTIzMGQzOTdiNjUyOTYzYTIxYTUzZg
|
||||
[12]: https://gchq.github.io/CyberChef/#recipe=Magic(3,false,false)&input=V1VhZ3dzaWFlNm1QOGdOdENDTFVGcENwQ0IyNlJtQkRvREQ4UGFjZEFtekF6QlZqa0syUXN0RlhhS2hwQzZpVVM3UkhxWHJKdEZpc29SU2dvSjR3aGptMWFybTg2NHFhTnE0UmNmVW1MSHJjc0FhWmM1VFhDWWlmTmRnUzgzZ0RlZWpHWDQ2Z2FpTXl1QlY2RXNrSHQxc2NnSjg4eDJ0TlNvdFFEd2JHWTFtbUNvYjJBUkdGdkNLWU5xaU45aXBNcTFaVTFtZ2tkYk51R2NiNzZhUnRZV2hDR1VjOGc5M1VKdWRoYjhodHNoZVpud1RwZ3FoeDgzU1ZKU1pYTVhVakpUMnptcEM3dVhXdHVtcW9rYmRTaTg4WXRrV0RBYzFUb291aDJvSDRENGRkbU5LSldVRHBNd21uZ1VtSzE0eHdtb21jY1BRRTloTTE3MkFQblNxd3hkS1ExNzJSa2NBc3lzbm1qNWdHdFJtVk5OaDJzMzU5d3I2bVMyUVJQ
|
||||
129
docs/recipes/duplicati.md
Normal file
@@ -0,0 +1,129 @@
|
||||
---
|
||||
title: Use Duplicati in Docker to backup to backblaze / b2 and friends
|
||||
description: Duplicati - Yet another boring option to backup your exciting stuff, especially to Backblaze / B2 - It's good to have options.
|
||||
---
|
||||
|
||||
# Duplicati
|
||||
|
||||
Always have a backup plan[^1]
|
||||
|
||||

|
||||
|
||||
[Duplicati](https://www.duplicati.com/) is free and open-source backup software which stores encrypted backups online, for Windows, macOS and Linux (our favorite, yay!).
|
||||
|
||||
Similar to the other backup options in the Cookbook, we can use Duplicati to backup all our data-at-rest to a wide variety of locations, including, but not limited to:
|
||||
|
||||
- Generic endpoints (FTP, SSH, or WebDAV servers)
|
||||
- Cloud storage providers (Amazon S3, BackBlaze B2, etc)
|
||||
- Cloud services (OneDrive, Google Drive, etc)
|
||||
|
||||
!!! note
|
||||
Since Duplicati itself offers no user authentication, this design secures Duplicati behind [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/), so that in order to gain access to the Duplicati UI at all, authentication through the mechanism configured in traefik-forward-auth (_to GitHub, GitLab, Google, etc_) must have already occurred.
|
||||
|
||||
## Ingredients
|
||||
|
||||
!!! summary "Ingredients"
|
||||
    * [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
|
||||
* [X] [Traefik](/docker-swarm/traefik/) and [Traefik-Forward-Auth](/docker-swarm/traefik-forward-auth/) configured per design
|
||||
* [X] Credentials for one of the Duplicati's supported upload destinations
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need a folder to store a docker-compose configuration file and an associated environment file. If you're following my filesystem layout, create `/var/data/config/duplicati` (*for the config*), and `/var/data/duplicati` (*for the metadata*) as per the following example:
|
||||
|
||||
```bash
|
||||
mkdir /var/data/config/duplicati
|
||||
mkdir /var/data/duplicati
|
||||
cd /var/data/config/duplicati
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe**, without it you won't be able to restore!
|
||||
2. Seriously, **save**. **it**. **somewhere**. **safe**.
|
||||
3. Create `duplicati.env`, and populate with the following variables (_replace "Europe/London" with your appropriate time zone from [this list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)_)
|
||||
|
||||
```bash
|
||||
PUID=0
|
||||
PGID=0
|
||||
TZ=Europe/London
|
||||
CLI_ARGS= #optional
|
||||
```
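One way (an example, not the only way) to generate the random passphrase called for in step 1:

```shell
# 32 bytes of randomness, base64-encoded, yields a 44-character passphrase
openssl rand -base64 32
```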
|
||||
|
||||
!!! question "Excuse me! Why are we running Duplicati as root?"
    That's a great question! We're running Duplicati as the `root` user of the host system because we need Duplicati to be able to read the files of all the other services, no matter which user each service runs as. After all, Duplicati can't backup your exciting stuff if it can't read the files.
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  duplicati:
    image: lscr.io/linuxserver/duplicati
    env_file: /var/data/config/duplicati/duplicati.env
    deploy:
      replicas: 1
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:duplicati.example.com
        - traefik.port=8200
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true

        # traefikv2
        - "traefik.http.routers.duplicati.rule=Host(`duplicati.example.com`)"
        - "traefik.http.routers.duplicati.entrypoints=https"
        - "traefik.http.services.duplicati.loadbalancer.server.port=8200"
        - "traefik.http.routers.duplicati.middlewares=forward-auth"
    volumes:
      - /var/data/config/duplicati:/config
      - /var/data:/source
    ports:
      - 8200:8200
    networks:
      - traefik_public
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.55.0/24
```
--8<-- "reference-networks.md"

## Serving

### Launch Duplicati stack

Launch the Duplicati stack by running `docker stack deploy duplicati -c <path-to-docker-compose.yml>`

### Create (and verify!) Your First Backup

Once you've authenticated through the traefik-forward-auth provider, you can start configuring your backup jobs via the Duplicati UI. All backup and restore job configuration is done through the UI. Be sure to read through the documentation on [Creating a new backup job](https://duplicati.readthedocs.io/en/latest/03-using-the-graphical-user-interface/#creating-a-new-backup-job) and [Restoring files from a backup](https://duplicati.readthedocs.io/en/latest/03-using-the-graphical-user-interface/#restoring-files-from-a-backup) for information on how to configure those jobs.

!!! warning
    An untested backup is not really a backup at all. Being ***sure*** you can successfully restore files from your backup now could save you lots of heartache later, after "something bad" happens.

!!! tip
    Backing up files on a regular basis is going to use a continually-increasing amount of disk space. To help with this, Duplicati offers a "Smart Backup Retention" scheme, which will intelligently remove certain backups as they age, while still maintaining a comprehensive backup history. You can set that configuration on the "Options" tab of the backup configuration.

[^1]: Quote attributed to Mila Kunis
[^2]: The [Duplicati 2 User's Manual](https://duplicati.readthedocs.io/en/latest/) contains all the information you'll need to configure backup endpoints, restore jobs, scheduling, and advanced properties for your backup jobs.

--8<-- "recipe-footer.md"
162
docs/recipes/duplicity.md
Normal file
---
title: Use Duplicity in Docker to backup to backblaze / b2 and friends
description: A boring recipe to backup your exciting stuff. Boring is good.
---

# Duplicity

{ loading=lazy }

[Duplicity](https://duplicity.gitlab.io/duplicity-web/) backs up directories by producing encrypted tar-format volumes, and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space-efficient, and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.

So what does this mean for our stack? It means we can leverage Duplicity to backup all our data-at-rest to a wide variety of cloud providers, including, but not limited to:

- acd_cli
- Amazon S3
- Backblaze B2
- DropBox
- ftp
- Google Docs
- Google Drive
- Microsoft Azure
- Microsoft Onedrive
- Rackspace Cloudfiles
- rsync
- ssh/scp
- SwiftStack
## Ingredients

1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
2. Credentials for one of Duplicity's supported upload destinations

## Preparation

### Setup data locations

We'll need a folder to store a docker-compose .yml file, and an associated .env file. If you're following my filesystem layout, create `/var/data/config/duplicity` (_for the config_), and `/var/data/duplicity` (_for the metadata_) as per the following example:

```bash
mkdir /var/data/config/duplicity
mkdir /var/data/duplicity
cd /var/data/config/duplicity
```

### (Optional) Create Google Cloud Storage bucket

I didn't already have an archival/backup provider, so I chose Google Cloud "cloud" storage for the low price-point - 0.7 cents per GB/month (_plus you [start with \$300 credit](https://cloud.google.com/free/) even when signing up for the free tier_). You can use any destination supported by [Duplicity's URL scheme](https://duplicity.gitlab.io/duplicity-web/vers7/duplicity.1.html#sect7) though - just make sure you specify the necessary [environment variables](https://duplicity.gitlab.io/duplicity-web/vers7/duplicity.1.html#sect6).

1. [Sign up](https://cloud.google.com/storage/docs/getting-started-console), create an empty project, enable billing, and create a bucket. Give your bucket a unique name, for example "**jack-and-jills-bucket**" (_it's unique across the entire Google Cloud_)
2. Under the "Storage" section > "[Settings](https://console.cloud.google.com/project/_/storage/settings)" > "Interoperability" tab, click "Enable interoperable access", then the "Create a new key" button, and note both the Access Key and the Secret.
### Prepare environment

1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe** - without it, you won't be able to restore!
2. Seriously, **save**. **it**. **somewhere**. **safe**.
3. Create `duplicity.env`, and populate it with the following variables:

```bash
SRC=/var/data/
DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories
TMPDIR=/tmp
GS_ACCESS_KEY_ID=<YOUR GS ACCESS KEY>
GS_SECRET_ACCESS_KEY=<YOUR GS SECRET ACCESS KEY>

OPTIONS=--allow-source-mismatch --exclude /var/data/runtime --exclude /var/data/registry --exclude /var/data/duplicity --archive-dir=/archive
PASSPHRASE=<YOUR CHOSEN PASSPHRASE>
```
!!! note
    See the [data layout reference](/reference/data_layout/) for an explanation of the included/excluded paths above.

### Run a test backup

Before we launch the automated daily backups, let's run a test backup, as per the following example:

```bash
docker run --env-file duplicity.env -it --rm \
  -v /var/data:/var/data:ro \
  -v /var/data/duplicity/tmp:/tmp \
  -v /var/data/duplicity/archive:/archive \
  tecnativa/duplicity \
  /etc/periodic/daily/jobrunner
```

You should see some activity, with a summary of bytes transferred at the end.
### Run a test restore

Repeat after me: "If you don't verify your backup, **it's not a backup**".

!!! warning
    Depending on what tier of storage you chose from your provider (_i.e., Google Coldline, or Amazon S3_), you may be charged for downloading data.

Run a variation of the following to confirm that a file you expect to be backed up, **is** backed up. (_I used traefik.yml from the [traefik recipe](/docker-swarm/traefik/), since this is likely to exist for every reader_).

```bash
docker run --env-file duplicity.env -it --rm \
  -v /var/data:/var/data:ro \
  -v /var/data/duplicity/tmp:/tmp \
  -v /var/data/duplicity/archive:/archive \
  tecnativa/duplicity \
  duplicity list-current-files \
  \$DST | grep traefik.yml
```

Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_):

```bash
docker run --env-file duplicity.env -it --rm \
  -v /var/data:/var/data:ro \
  -v /var/data/duplicity/tmp:/tmp \
  -v /var/data/duplicity/archive:/archive \
  tecnativa/duplicity \
  duplicity restore \
  --file-to-restore config/traefik/traefik.yml \
  \$DST /tmp/traefik-restored.yml
```

Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data.
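To go one step further than eyeballing the restored file, you can compare it byte-for-byte against the original with `cmp`. Here's a minimal, self-contained sketch of the idea, using stand-in temp files rather than the real paths above:

```bash
# Create a stand-in "original" and "restored" file pair for illustration
orig=$(mktemp); restored=$(mktemp)
echo "logLevel: INFO" | tee "$orig" > "$restored"

# cmp exits 0 only if the files are byte-identical
if cmp -s "$orig" "$restored"; then
  echo "restore verified"
else
  echo "files differ!"
fi
```

In the real case, you'd compare `/var/data/config/traefik/traefik.yml` against `/var/data/duplicity/tmp/traefik-restored.yml`.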
### Setup Docker Swarm

Now that we have confidence in our backup/restore process, let's automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  backup:
    image: tecnativa/duplicity
    env_file: /var/data/config/duplicity/duplicity.env
    networks:
      - internal
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data:/var/data:ro
      - /var/data/duplicity/tmp:/tmp
      - /var/data/duplicity/archive:/archive

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.10.0/24
```
--8<-- "reference-networks.md"

## Serving

### Launch Duplicity stack

Launch the Duplicity stack by running `docker stack deploy duplicity -c <path-to-docker-compose.yml>`

Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination.

[^1]: Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
[^2]: The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to `duplicity.env`.

--8<-- "recipe-footer.md"
227
docs/recipes/elkarbackup.md
Normal file
---
title: Use elkarbackup in Docker to backup to backblaze / b2 and friends
description: ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes.
---

# Elkar Backup

Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.

<!-- markdownlint-disable MD033 -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you.

{ loading=lazy }

## Details

--8<-- "recipe-standard-ingredients.md"
## Preparation

### Setup data locations

We'll need several directories to bind-mount into our container, so create them under /var/data:

```bash
mkdir -p /var/data/elkarbackup/{backups,uploads,sshkeys,database-dump}
mkdir -p /var/data/runtime/elkarbackup/db
mkdir -p /var/data/config/elkarbackup
```
### Prepare environment

Create `/var/data/config/elkarbackup/elkarbackup.env`, and populate it with the following variables:

```bash
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'

#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
#SYMFONY__MAILER__FROM=

# For mysql
MYSQL_ROOT_PASSWORD=password
```

Create `/var/data/config/elkarbackup/elkarbackup-db-backup.env`, and populate it with the following, to setup the nightly database dump.

!!! note
    Running a daily database dump might be considered overkill, since ElkarBackup can be configured to backup its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.

Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?

No, me neither :shrug:
```bash
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
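The `BACKUP_NUM_KEEP` variable drives a "keep the newest N dumps" prune inside the db-backup service. Here's a standalone sketch of that same `ls -t | head` / `uniq -u` trick, using a temp directory and dummy files so you can see what it does (_this sketch uses `xargs -r rm --`, a slightly simplified form of the recipe's pipeline_):

```bash
# Create 5 dummy dump files, aged 1..5 days (GNU touch)
tmp=$(mktemp -d)
for i in 1 2 3 4 5; do
  touch -d "$i days ago" "$tmp/dump_$i.sql.gz"
done

# Keep only the 3 newest: list the newest 3 plus everything, then
# `uniq -u` leaves only the files that appeared once (the old ones)
BACKUP_NUM_KEEP=3
(ls -t "$tmp"/dump_*.sql.gz | head -n "$BACKUP_NUM_KEEP"; ls "$tmp"/dump_*.sql.gz) \
  | sort | uniq -u | xargs -r rm --

ls "$tmp"   # only the 3 newest dumps remain
```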
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  db:
    image: mariadb:10.4
    env_file: /var/data/config/elkarbackup/elkarbackup.env
    networks:
      - internal
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/runtime/elkarbackup/db:/var/lib/mysql

  db-backup:
    image: mariadb:10.4
    env_file: /var/data/config/elkarbackup/elkarbackup-db-backup.env
    volumes:
      - /var/data/elkarbackup/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

  app:
    image: elkarbackup/elkarbackup
    env_file: /var/data/config/elkarbackup/elkarbackup.env
    networks:
      - internal
      - traefik_public
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/:/var/data
      - /var/data/elkarbackup/backups:/app/backups
      - /var/data/elkarbackup/uploads:/app/uploads
      - /var/data/elkarbackup/sshkeys:/app/.ssh
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:elkarbackup.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.elkarbackup.rule=Host(`elkarbackup.example.com`)"
        - "traefik.http.services.elkarbackup.loadbalancer.server.port=80"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.elkarbackup.middlewares=forward-auth@file"

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.36.0/24
```
--8<-- "reference-networks.md"

## Serving

### Launch ElkarBackup stack

Launch the ElkarBackup stack by running `docker stack deploy elkarbackup -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":

{ loading=lazy }

First thing you do, change your password, using the gear icon and the "Change Password" link:

{ loading=lazy }

Have a read of the [Elkarbackup Docs](https://docs.elkarbackup.org/docs/introduction.html) - they introduce the concept of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), and **policies** (_when is data backed up, and how long is it kept_).

At the very least, you want to setup a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to backup /var/data, **excluding** `/var/data/runtime` and `/var/data/elkarbackup/backup` (_unless you **like** "backup-ception"_)
### Copying your backup data offsite

From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker swarm deployment.

Here's a variation to the standard script, which I've employed:

```bash
#!/bin/bash

REPOSITORY=/var/data/elkarbackup/backups
SERVER=<target host member of docker swarm>
SERVER_USER=elkarbackup
UPLOADS=/var/data/elkarbackup/uploads
TARGET=/srv/backup/elkarbackup

echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`

ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
do
  echo Backing up job $jobId
  mkdir -p $TARGET/$jobId 2>/dev/null
  rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done

echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads

USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`

echo "Backup finished successfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
```
!!! note
    You'll note that I don't use the script to create a mysql dump (_since Elkar is running within a container anyway_) - rather, I just rely on the database dump which is made nightly into `/var/data/elkarbackup/database-dump/`

### Restoring data

Repeat after me: "**It's not a backup unless you've tested a restore**"

!!! note
    I had some difficulty making restoring work well in the webUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the webUI or from the filesystem directly.

To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:

{ loading=lazy }

This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.

[^1]: If you wanted to expose the ElkarBackup UI directly, you could remove the traefik-forward-auth from the design.
[^2]: The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

--8<-- "recipe-footer.md"
89
docs/recipes/emby.md
Normal file
---
title: Run Emby server with docker compose (using swarm)
description: Kick-ass media player!
---

# Emby

[Emby](https://emby.media/) (_think "M.B." or "Media Browser"_) is best described as "_like [Plex](/recipes/plex/) but different_" 😁 - It's a bit geekier and less polished than Plex, but it allows for more flexibility and customization.

{ loading=lazy }

I've started experimenting with Emby as an alternative to Plex, because of the advanced [parental controls](https://github.com/MediaBrowser/Wiki/wiki/Parental-Controls) it offers. Based on my experimentation thus far, I have a "**kid-safe**" profile which automatically logs in, and only displays kid-safe content, based on ratings.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need a location to store Emby's library data, config files, logs and temporary transcoding space, so create /var/data/emby, and make sure it's owned by the user and group who also own your media data. We'll also need /var/data/config/emby, to hold the environment file referenced by the swarm config:

```bash
mkdir /var/data/emby
mkdir -p /var/data/config/emby
```
### Prepare environment

Create /var/data/config/emby/emby.env, and populate it with the PUID/GUID of the user who owns the /var/data/emby directory (_above_) and your actual media content (_in this example, the media content is at **/srv/data**_):

```bash
PUID=
GUID=
```
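If you're not sure which numeric IDs to use, the `id` command will tell you. For example, for the current user (_substitute the user who actually owns your media_):

```bash
# Numeric user and group IDs for the current user;
# run `id -u <username>` / `id -g <username>` for another user
echo "PUID=$(id -u)"
echo "GUID=$(id -g)"
```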
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.0"

services:
  emby:
    image: emby/emby-server
    env_file: /var/data/config/emby/emby.env
    volumes:
      - /var/data/emby/emby:/config
      - /srv/data/:/data
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:emby.example.com
        - traefik.port=8096

        # traefikv2
        - "traefik.http.routers.emby.rule=Host(`emby.example.com`)"
        - "traefik.http.services.emby.loadbalancer.server.port=8096"
        - "traefik.enable=true"
    networks:
      - traefik_public
    ports:
      - 8096:8096

networks:
  traefik_public:
    external: true
```
--8<-- "reference-networks.md"

## Serving

### Launch Emby stack

Launch the stack by running `docker stack deploy emby -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-based setup to finish deploying your Emby.

[^1]: I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
[^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

--8<-- "recipe-footer.md"
141
docs/recipes/funkwhale.md
Normal file
---
title: Install funkwhale with docker-compose / swarm
description: Funkwhale is a decentralized, federated music streaming platform
---

# Funkwhale

[Funkwhale](https://funkwhale.audio) is a decentralized, federated, and open music streaming / sharing platform. Think of it as "Mastodon for music".

The idea is that you run a "pod" (*just like whales, Funkwhale users gather in pods*). A pod is a website running the Funkwhale server software. You join the network by registering an account on a pod (*sometimes called "server" or "instance"*), which will be your home.

You will then be able to interact with other people, regardless of which pod they are using.

--8<-- "recipe-standard-ingredients.md"
## Preparation

### Setup data locations

First, we create a directory to hold our funky data:

```bash
mkdir /var/data/funkwhale
```
### Prepare environment

Funkwhale is configured using environment variables. Create `/var/data/config/funkwhale/funkwhale.env`, by running something like this:

```bash
mkdir -p /var/data/config/funkwhale/
cat > /var/data/config/funkwhale/funkwhale.env << EOF
# Replace 'funkwhale.example.com' with your actual domain
FUNKWHALE_HOSTNAME=funkwhale.example.com
# Protocol may also be: http
FUNKWHALE_PROTOCOL=https
# This limits the upload size
NGINX_MAX_BODY_SIZE=100M
# Bind to localhost
FUNKWHALE_API_IP=127.0.0.1
# Container port you want to expose on the host
FUNKWHALE_API_PORT=80
# Generate and store a secure secret key for your instance
DJANGO_SECRET_KEY=$(openssl rand -hex 45)
# Remove this if you expose the container directly on ports 80/443
NESTED_PROXY=1
# adapt to the uid/gid that own /var/data/funkwhale/
PUID=1000
PGID=1000
EOF
# reduce permissions on the .env file since it contains sensitive data
chmod 600 /var/data/config/funkwhale/funkwhale.env
```
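As a quick sanity-check (_entirely optional - this isn't part of the official Funkwhale setup_), you can confirm the env file contains the variables you expect. A sketch of the idea, run here against a hypothetical temp-file copy of the env file:

```bash
# Build a sample env file to illustrate the check
envfile=$(mktemp)
cat > "$envfile" << EOF
FUNKWHALE_HOSTNAME=funkwhale.example.com
FUNKWHALE_PROTOCOL=https
DJANGO_SECRET_KEY=$(openssl rand -hex 45)
EOF

# Report any required variable that's missing or empty
for var in FUNKWHALE_HOSTNAME FUNKWHALE_PROTOCOL DJANGO_SECRET_KEY; do
  grep -q "^${var}=." "$envfile" || echo "missing: $var"
done
```

Point the loop at `/var/data/config/funkwhale/funkwhale.env` to check the real file; no output means all the checked variables are populated.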
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3) (*I store all my config files as `/var/data/config/<stack name>/<stack name>.yml`*), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.2" # https://docs.docker.com/compose/compose-file/compose-versioning/#version-3

services:
  funkwhale:
    image: funkwhale/all-in-one:1.0.1
    env_file: /var/data/config/funkwhale/funkwhale.env
    volumes:
      - /var/data/funkwhale/:/data/
      - /path/to/your/music/dir:/music:ro
    deploy:
      labels:
        # traefik common
        - "traefik.enable=true"
        - "traefik.docker.network=traefik_public"

        # traefikv1
        - "traefik.frontend.rule=Host:funkwhale.example.com"
        - "traefik.port=80"

        # traefikv2
        - "traefik.http.routers.funkwhale.rule=Host(`funkwhale.example.com`)"
        - "traefik.http.routers.funkwhale.entrypoints=https"
        - "traefik.http.services.funkwhale.loadbalancer.server.port=80"
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```
## Serving

### Unleash the Whale! 🐳

Launch the Funkwhale stack by running `docker stack deploy funkwhale -c <path-to-docker-compose.yml>`, and then watch the container logs using `docker service logs funkwhale_funkwhale`.

You'll know the container is ready when you see an ascii version of the Funkwhale logo, followed by:

```bash
[2021-01-27 22:52:24 +0000] [411] [INFO] ASGI 'lifespan' protocol appears unsupported.
[2021-01-27 22:52:24 +0000] [411] [INFO] Application startup complete.
```

The first time we run Funkwhale, we need to setup the superuser account.

!!! tip
    If you're running a multi-node swarm, this next step needs to be executed on the node which is currently running Funkwhale. Identify this with `docker stack ps funkwhale`

Run something like the following:
```bash
docker exec -it funkwhale_funkwhale.1.<tab-completion-helps-here> \
  manage createsuperuser \
  --username admin \
  --email <your admin email address>
```

You'll be prompted to enter the admin password - here's some sample output:
```bash
root@swarm:~# docker exec -it funkwhale_funkwhale.1.gnx96tfr0lgmx5u3e8x4tkags \
  manage createsuperuser \
  --username admin \
  --email admin@funkypenguin.co.nz
2021-01-27 22:44:01,953 funkwhale_api.config INFO Running with the following plugins enabled: funkwhale_api.contrib.scrobbler
Password:
Password (again):
Superuser created successfully.
root@swarm:~#
```

[^1]: Since the whole purpose of media sharing is to share **publicly**, and Funkwhale includes robust user authentication, this recipe doesn't employ traefik-based authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).
[^2]: These instructions are an opinionated simplification of the official instructions found at <https://docs.funkwhale.audio/installation/docker.html>
[^3]: It should be noted that if you import your existing media, the files will be **copied** into Funkwhale's data folder. There doesn't seem to be a way to point Funkwhale at an existing collection, and have it just play it from the filesystem. To this end, be prepared for double disk space usage, if you plan to import your entire music collection!
[^5]: No consideration is given at this point to backing up the Funkwhale data. Post a comment below if you'd like to see a backup container added!

--8<-- "recipe-footer.md"
71
docs/recipes/ghost.md
Normal file
---
title: Blog with Ghost in Docker
description: How to run the beautiful, publication-focused blogging engine "Ghost" using Docker
---

# Ghost

[Ghost](https://ghost.org) is "a fully open source, hackable platform for building and running a modern online publication."

{ loading=lazy }

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

Create the location for the bind-mount of the application data, so that it's persistent:

```bash
mkdir -p /var/data/ghost
```
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  ghost:
    image: ghost:1-alpine
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/ghost/:/var/lib/ghost/content
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:ghost.example.com
        - traefik.port=2368

        # traefikv2
        - "traefik.http.routers.ghost.rule=Host(`ghost.example.com`)"
        - "traefik.http.services.ghost.loadbalancer.server.port=2368"
        - "traefik.enable=true"

networks:
  traefik_public:
    external: true
```
## Serving
|
||||
|
||||
### Launch Ghost stack
|
||||
|
||||
Launch the Ghost stack by running ```docker stack deploy ghost -c <path -to-docker-compose.yml>```
|
||||
|
||||
Create your first administrative account at https://**YOUR-FQDN**/admin/
|
||||
|
||||
[^1]: A default using the SQlite database takes 548k of space
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
97
docs/recipes/gitlab-runner.md
Normal file
@@ -0,0 +1,97 @@
---
title: How to run GitLab Runner in Docker
---

# GitLab Runner

Some features of GitLab require a "[runner](https://docs.gitlab.com/runner/)" (_in the sense of a "gopher" or a "minion"_). A runner "registers" itself with a GitLab instance, and is given tasks to run. Tasks include running Continuous Integration (CI) builds, and building container images.

While a runner isn't strictly required to use GitLab, if you want to do CI, you'll need at least one. There are many ways to deploy a runner - this recipe focuses on the docker container model.

## Ingredients

!!! summary "Ingredients"
    Existing:

    1. [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
    2. [X] [Traefik](/docker-swarm/traefik) configured per design
    3. [X] DNS entry for the hostname you intend to use, pointed to your [keepalived](/docker-swarm/keepalived/) IP
    4. [X] [GitLab](/recipes/gitlab) installation (see previous recipe)

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our runner containers, so create them in `/var/data/gitlab`:

```bash
mkdir -p /var/data/gitlab/runners/{1,2}
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  thing1:
    image: gitlab/gitlab-runner
    volumes:
      - /var/data/gitlab/runners/1:/etc/gitlab-runner
    networks:
      - internal

  thing2:
    image: gitlab/gitlab-runner
    volumes:
      - /var/data/gitlab/runners/2:/etc/gitlab-runner
    networks:
      - internal

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.23.0/24
```

### Configure runners

From your GitLab UI, you can retrieve a "token" necessary to register a new runner. To register the runner, you can either create config.toml in each runner's bind-mounted folder (example below), or just `docker exec` into each runner container and execute `gitlab-runner register` to interactively generate config.toml.

Sample runner config.toml:

```ini
concurrent = 1
check_interval = 0

[[runners]]
  name = "myrunner1"
  url = "https://gitlab.example.com"
  token = "<long string here>"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
```

## Serving

### Launch runners

Launch the GitLab Runner stack by running `docker stack deploy gitlab-runner -c <path-to-docker-compose.yml>`

[^1]: You'll note that I setup 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
[^2]: Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

--8<-- "recipe-footer.md"
133
docs/recipes/gitlab.md
Normal file
@@ -0,0 +1,133 @@
---
title: How to run GitLab in Docker
---
# GitLab

GitLab is a self-hosted [alternative to GitHub](https://about.gitlab.com/pricing/self-managed/feature-comparison/). The most common use case is (a set of) developers with the desire for the rich feature-set of GitHub, but with unlimited private repositories.

GitLab does maintain an [official "Omnibus" container](https://docs.gitlab.com/omnibus/docker/README.html), but for this recipe I prefer the "[dockerized gitlab](https://github.com/sameersbn/docker-gitlab)" project, since it allows distribution of the various GitLab components across multiple swarm nodes.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our container, so create them in /var/data/gitlab:

```bash
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {postgresql,redis,gitlab}
```

### Prepare environment

You'll need to know the following:

1. Choose a password for postgresql; you'll need it for DB_PASS in the compose file (below)
2. Generate 3 passwords using `pwgen -Bsv1 64`. You'll use these for the XXX_KEY_BASE environment variables below
3. Create gitlab.env, and populate with **at least** the following variables (the full set is available at <https://github.com/sameersbn/docker-gitlab#available-configuration-parameters>):

```bash
DB_USER=gitlab
DB_PASS=gitlabdbpass
DB_NAME=gitlabhq_production
DB_EXTENSION=pg_trgm
DB_ADAPTER=postgresql
DB_HOST=postgresql
TZ=Pacific/Auckland
REDIS_HOST=redis
REDIS_PORT=6379
GITLAB_TIMEZONE=Auckland
GITLAB_HTTPS=true
SSL_SELF_SIGNED=false
GITLAB_HOST=gitlab.example.com
GITLAB_PORT=443
GITLAB_SSH_PORT=2222
GITLAB_SECRETS_DB_KEY_BASE=CFf7sS3kV2nGXBtMHDsTcjkRX8PWLlKTPJMc3lRc6GCzJDdVljZ85NkkzJ8mZbM5
GITLAB_SECRETS_SECRET_KEY_BASE=h2LBVffktDgb6BxM3B97mDSjhnSNwLc5VL2Hqzq9cdrvBtVw48WSp5wKj5HZrJM5
GITLAB_SECRETS_OTP_KEY_BASE=t9LPjnLzbkJ7Nt6LZJj6hptdpgG58MPJPwnMMMDdx27KSwLWHDrz9bMWXQMjq5mp
GITLAB_ROOT_PASSWORD=changeme
```
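If `pwgen` isn't available on your host, the three `*_KEY_BASE` values can be generated with `openssl` instead - a sketch (any random 64-character strings will do; the variable names match the gitlab.env example above):

```bash
# Emit one 64-character hex string per *_KEY_BASE variable, ready to paste into gitlab.env
for key in DB SECRET OTP; do
  echo "GITLAB_SECRETS_${key}_KEY_BASE=$(openssl rand -hex 32)"
done
```

Each `openssl rand -hex 32` call produces 32 random bytes, hex-encoded into exactly 64 characters.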
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  redis:
    image: sameersbn/redis:latest
    command:
      - --loglevel warning
    volumes:
      - /var/data/gitlab/redis:/var/lib/redis:Z
    networks:
      - internal

  postgresql:
    image: sameersbn/postgresql:9.6-2
    env_file: /var/data/config/gitlab/gitlab.env
    volumes:
      - /var/data/gitlab/postgresql:/var/lib/postgresql:Z
    networks:
      - internal

  gitlab:
    image: sameersbn/gitlab:latest
    env_file: /var/data/config/gitlab/gitlab.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:gitlab.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)"
        - "traefik.http.services.gitlab.loadbalancer.server.port=80"
        - "traefik.enable=true"
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
    ports:
      - "2222:22"
    volumes:
      - /var/data/gitlab/gitlab:/home/git/data:Z

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.2.0/24
```

--8<-- "reference-networks.md"

## Serving

### Launch gitlab

Launch the GitLab stack by running `docker stack deploy gitlab -c <path-to-docker-compose.yml>`

Log into your new instance at `https://[your FQDN]`, with user "root" and the password you specified in gitlab.env.

[^1]: I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)

--8<-- "recipe-footer.md"
109
docs/recipes/gollum.md
Normal file
@@ -0,0 +1,109 @@
---
title: Run Gollum in Docker
---

# Gollum

Gollum is a simple wiki system built on top of Git. A Gollum Wiki is simply a git repository (_either bare or regular_) of a specific nature:

* A Gollum repository's contents are human-editable, unless the repository is bare.
* Pages are unique text files which may be organized into directories any way you choose.
* Other content can also be included, for example images, PDFs and headers/footers for your pages.

Gollum pages:

* May be written in a variety of markups.
* Can be edited with your favourite system editor or IDE (_changes will be visible after committing_) or with the built-in web interface.
* Can be displayed in all versions (_commits_).

{ loading=lazy }

As you'll note in the (_real world_) screenshot above, my requirements for a personal wiki are:

* Portable across my devices
* Supports images
* Full-text search
* Supports inter-note links
* Revision control

Gollum meets all these requirements, and as an added bonus, is extremely fast and lightweight.

!!! note
    Since Gollum itself offers no user authentication, this design secures gollum behind [traefik-forward-auth](/docker-swarm/traefik-forward-auth/), so that in order to gain access to the Gollum UI at all, authentication must have already occurred.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need an empty git repository in /var/data/gollum for our data:

```bash
mkdir /var/data/gollum
cd /var/data/gollum
git init
```
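If you'd like the wiki to have some content before first launch, you can commit an initial page into the repository you just initialised - a sketch (the page name, commit identity and message are arbitrary examples):

```bash
mkdir -p /var/data/gollum && cd /var/data/gollum
git init -q .   # harmless no-op if the repo above already exists
echo "# Welcome to my wiki" > Home.md
git add Home.md
git -c user.name="Wiki Admin" -c user.email="wiki@example.com" commit -m "Add initial Home page"
```

Gollum renders `Home.md` as the wiki's default landing page.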
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  app:
    image: dakue/gollum
    volumes:
      - /var/data/gollum:/gollum
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:gollum.example.com
        - traefik.port=4567

        # traefikv2
        - "traefik.http.routers.gollum.rule=Host(`gollum.example.com`)"
        - "traefik.http.services.gollum.loadbalancer.server.port=4567"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.gollum.middlewares=forward-auth@file"
    command: |
      --allow-uploads
      --emoji
      --user-icons gravatar

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.9.0/24
```

--8<-- "reference-networks.md"

## Serving

### Launch Gollum stack

Launch the Gollum stack by running `docker stack deploy gollum -c <path-to-docker-compose.yml>`

[^1]: In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"

--8<-- "recipe-footer.md"
134
docs/recipes/homeassistant.md
Normal file
@@ -0,0 +1,134 @@
---
description: Assist your home automation
---

# Home Assistant

Home Assistant is a home automation platform written in Python, with extensive support for 3rd-party home-automation platforms including Xiaomi, Philips Hue, and a [bazillion](https://home-assistant.io/components/) others.

{ loading=lazy }

This recipe combines the [extensibility](https://home-assistant.io/components/) of [Home Assistant](https://home-assistant.io/) with the flexibility of [InfluxDB](https://docs.influxdata.com/influxdb/v1.4/) (_for time series data storage_) and [Grafana](https://grafana.com/) (_for **beautiful** visualisation of that data_).

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our container, so create them in /var/data/homeassistant:

```bash
mkdir /var/data/homeassistant
cd /var/data/homeassistant
mkdir -p {homeassistant,grafana,influxdb-backup}
```

Now create a directory for the influxdb realtime data:

```bash
mkdir -p /var/data/runtime/homeassistant/influxdb
```

### Prepare environment

Create /var/data/config/homeassistant/grafana.env, and populate with the following - this is to enable Grafana to work with oauth2_proxy without requiring an additional level of authentication:

```bash
GF_AUTH_BASIC_ENABLED=false
```
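Home Assistant won't actually record anything into InfluxDB until its own configuration.yaml gains an `influxdb:` section. A minimal sketch (option names per Home Assistant's InfluxDB integration; the hostname matches the `influxdb` service in the stack below, while the database name is an assumption - create it in InfluxDB first, and adjust names/credentials to your setup):

```yaml
# /var/data/homeassistant/homeassistant/configuration.yaml (excerpt)
influxdb:
  host: influxdb            # resolves via the stack's internal overlay network
  database: home_assistant  # must already exist in InfluxDB
```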
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  influxdb:
    image: influxdb
    networks:
      - internal
    volumes:
      - /var/data/runtime/homeassistant/influxdb:/var/lib/influxdb
      - /etc/localtime:/etc/localtime:ro

  homeassistant:
    image: homeassistant/home-assistant
    dns_search: hq.example.com
    volumes:
      - /var/data/homeassistant/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:homeassistant.example.com
        - traefik.port=8123

        # traefikv2
        - "traefik.http.routers.homeassistant.rule=Host(`homeassistant.example.com`)"
        - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"
        - "traefik.enable=true"
    networks:
      - traefik_public
      - internal
    ports:
      - 8123:8123

  grafana-app:
    image: grafana/grafana
    env_file: /var/data/config/homeassistant/grafana.env
    volumes:
      - /var/data/homeassistant/grafana:/var/lib/grafana
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:grafana.example.com
        - traefik.port=3000

        # traefikv2
        - "traefik.http.routers.grafana.rule=Host(`grafana.example.com`)"
        - "traefik.http.services.grafana.loadbalancer.server.port=3000"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.grafana.middlewares=forward-auth@file"

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.13.0/24
```

--8<-- "reference-networks.md"

## Serving

### Launch Home Assistant stack

Launch the Home Assistant stack by running `docker stack deploy homeassistant -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**, using the password you created in configuration.yaml as "frontend: api_key". Then setup a bunch of sensors, and log into https://grafana.**YOUR-FQDN** and create some beautiful graphs :)

[^1]: I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy/), but oauth2_proxy is incompatible with the websockets implementation used by Home Assistant. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!

--8<-- "recipe-footer.md"
151
docs/recipes/huginn.md
Normal file
@@ -0,0 +1,151 @@
---
title: Run Huginn in Docker
---

# Huginn

Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server.

<!-- markdownlint-disable MD033 -->
<iframe src="https://player.vimeo.com/video/61976251" width="640" height="433" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

Create the location for the bind-mount of the database, so that it's persistent:

```bash
mkdir -p /var/data/huginn/database
```

### Create email address

Strictly speaking, you don't **have** to integrate Huginn with email. However, since we created our own mailserver stack earlier, it's worth using it to enable emails within Huginn.

```bash
cd /var/data/docker-mailserver/
./setup.sh email add huginn@huginn.example.com my-password-here
# Setup MX and DKIM if they don't already exist:
./setup.sh config dkim
cat config/opendkim/keys/huginn.example.com/mail.txt
```

### Prepare environment

Create /var/data/config/huginn/huginn.env, and populate with the following variables. Set the "INVITATION_CODE" variable if you want to require users to enter a code to sign up (this protects the UI from abuse). The full list of Huginn environment variables is available [here](https://github.com/huginn/huginn/blob/master/.env.example).

```bash
# For huginn/huginn - essential
SMTP_DOMAIN=your-domain-here.com
SMTP_USER_NAME=you@gmail.com
SMTP_PASSWORD=somepassword
SMTP_SERVER=your-mailserver-here.com
SMTP_PORT=587
SMTP_AUTHENTICATION=plain
SMTP_ENABLE_STARTTLS_AUTO=true
INVITATION_CODE=<set an invitation code here>
POSTGRES_PORT_5432_TCP_ADDR=db
POSTGRES_PORT_5432_TCP_PORT=5432
DATABASE_USERNAME=huginn
DATABASE_PASSWORD=<database password>
DATABASE_ADAPTER=postgresql

# Optional extras for huginn/huginn, customize or append based on the .env.example linked above
TWITTER_OAUTH_KEY=
TWITTER_OAUTH_SECRET=

# For postgres/postgres
POSTGRES_USER=huginn
POSTGRES_PASSWORD=<database password>
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  huginn:
    image: huginn/huginn
    env_file: /var/data/config/huginn/huginn.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:huginn.example.com
        - traefik.port=3000

        # traefikv2
        - "traefik.http.routers.huginn.rule=Host(`huginn.example.com`)"
        - "traefik.http.routers.huginn.entrypoints=https"
        - "traefik.http.services.huginn.loadbalancer.server.port=3000"

  db:
    env_file: /var/data/config/huginn/huginn.env
    image: postgres:latest
    volumes:
      - /var/data/runtime/huginn/database:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal

  db-backup:
    image: postgres:latest
    env_file: /var/data/config/huginn/huginn.env
    volumes:
      - /var/data/huginn/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs -r rm --
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.6.0/24
```
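The `db-backup` service above writes rotating `pg_dump -Fc` (custom-format) dumps into /var/data/huginn/database-dump. Restoring one is a manual affair - a sketch, with the container and database names left as hypothetical comments (verify yours with `docker ps` and the env file above):

```bash
# Pick the newest dump; the backup loop names files dump_<date>_<time>.psql
LATEST=$(ls -t /var/data/huginn/database-dump/dump_*.psql 2>/dev/null | head -n1)
echo "Newest dump: ${LATEST:-<no dumps found>}"

# Then feed it to pg_restore inside the running db container, e.g.:
#   docker exec -i <huginn_db container> pg_restore -U huginn -d huginn --clean < "$LATEST"
```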
--8<-- "reference-networks.md"

## Serving

### Launch Huginn stack

Launch the Huginn stack by running `docker stack deploy huginn -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**. You'll need to use the "Sign Up" button, and (optionally) enter your invitation code in order to create your account.

[^1]: I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for features such as Twitter integration, I left it out.

--8<-- "recipe-footer.md"
257
docs/recipes/immich.md
Normal file
@@ -0,0 +1,257 @@
---
title: Run Immich in Docker Swarm
description: How to install your own Immich instance using Docker Swarm
---

# Immich in Docker Swarm

Immich is a promising self-hosted alternative to Google Photos. Its UI and features are clearly heavily inspired by Google Photos, and like [Photoprism][photoprism], Immich uses tensorflow-based machine learning to auto-tag your photos!

!!! warning "Pre-production warning"
    The developer makes it abundantly clear that Immich is under heavy development (*although it's covered by "wife-insurance"[^1]*), features and APIs may change, and all your photos may be lost, or (worse) auto-shared with your :dragon_face: mother-in-law! Take due care :wink:

{ loading=lazy }

See my detailed review of Immich, as a Google Photos replacement, [here][review/immich]

## Immich requirements

!!! summary "Ingredients"
    Already deployed:

    * [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
    * [X] [Traefik](/docker-swarm/traefik/) configured per design

    New:

    * [ ] DNS entry for your Immich instance, pointed to your [keepalived](/docker-swarm/keepalived/) IP

### Setup data locations

First, we create a directory to hold the immich docker-compose configuration:

```bash
mkdir /var/data/config/immich
```

Then we setup directories to hold all the various data:

```bash
mkdir -p /var/data/immich/database-dump
mkdir -p /var/data/immich/upload
mkdir -p /var/data/runtime/immich/database
```

### Setup Immich environment

Create `/var/data/config/immich/immich.env` something like the example below:

```yaml title="/var/data/config/immich/immich.env"
###################################################################################
# Database
###################################################################################

# These are for the Immich components
DB_HOSTNAME=db
DB_USERNAME=postgres
DB_PASSWORD=postgres
DB_DATABASE_NAME=immich

# These are specific to how the postgres image likes to receive its ENV vars
POSTGRES_PASSWORD=postgres
#POSTGRES_USER=postgres
POSTGRES_DB=immich

###################################################################################
# Redis
###################################################################################

REDIS_HOSTNAME=redis

# Optional Redis settings:
# REDIS_PORT=6379
# REDIS_DBINDEX=0
# REDIS_PASSWORD=
# REDIS_SOCKET=

###################################################################################
# JWT SECRET
###################################################################################

JWT_SECRET=randomstringthatissolongandpowerfulthatnoonecanguess # (1)!

###################################################################################
# MAPBOX
###################################################################################

# ENABLE_MAPBOX is either true or false -> if true, you have to provide MAPBOX_KEY
ENABLE_MAPBOX=false
MAPBOX_KEY=

###################################################################################
# WEB - Required
###################################################################################

# This is the URL of your vm/server where you host Immich, so that the web frontend
# knows where to send its API requests.
# For example: if your server IP address is 10.1.11.50, the environment variable will
# be VITE_SERVER_ENDPOINT=http://10.1.11.50:2283/api
# !CAUTION! THERE IS NO FORWARD SLASH AT THE END

VITE_SERVER_ENDPOINT=https://immich.example.com/api

###################################################################################
# WEB - Optional
###################################################################################

# Custom message on the login page, should be written in HTML form.
# For example VITE_LOGIN_PAGE_MESSAGE="This is a demo instance of Immich.<br><br>Email: <i>demo@demo.de</i><br>Password: <i>demo</i>"

VITE_LOGIN_PAGE_MESSAGE=

NODE_ENV=production
```

1. Yes, this has to be long. At least 20 characters.
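Rather than inventing your own "random" JWT_SECRET, let the OS do it - a sketch that draws 64 alphanumeric characters from /dev/urandom (alphanumeric only, so the value needs no quoting in the env file):

```bash
# Print a random 64-character JWT_SECRET candidate
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64; echo
```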
### Immich Docker Swarm config

Create a docker swarm config file in docker-compose syntax (v3), something like this example:

--8<-- "premix-cta.md"

```yaml title="/var/data/config/immich/immich.yml"
version: "3.2"

services:
  immich-server:
    image: altran1502/immich-server:release
    entrypoint: ["/bin/sh", "./start-server.sh"]
    volumes:
      - /var/data/immich/upload:/usr/src/app/upload
    env_file: /var/data/config/immich/immich.env
    networks:
      - internal

  immich-microservices:
    image: altran1502/immich-server:release
    entrypoint: ["/bin/sh", "./start-microservices.sh"]
    volumes:
      - /var/data/immich/upload:/usr/src/app/upload
    env_file: /var/data/config/immich/immich.env
    networks:
      - internal

  immich-machine-learning:
    image: altran1502/immich-machine-learning:release
    entrypoint: ["/bin/sh", "./entrypoint.sh"]
    volumes:
      - /var/data/immich/upload:/usr/src/app/upload
    env_file: /var/data/config/immich/immich.env
    networks:
      - internal

  immich-web:
    image: altran1502/immich-web:release
    entrypoint: ["/bin/sh", "./entrypoint.sh"]
    env_file: /var/data/config/immich/immich.env
    networks:
      - internal

  redis:
    image: redis:6.2
    networks:
      - internal

  db:
    image: postgres:14
    env_file: /var/data/config/immich/immich.env
    volumes:
      - /var/data/runtime/immich/database:/var/lib/postgresql/data
    networks:
      - internal

  db-backup:
    image: postgres:14
    env_file: /var/data/config/immich/immich-db-backup.env
    volumes:
      - /var/data/immich/database-dump:/dump
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        ls -tr /dump/dump_*.psql | head -n -"$$BACKUP_NUM_KEEP" | xargs -r rm
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

  immich-proxy:
    container_name: immich_proxy
    image: altran1502/immich-proxy:release
    ports:
      - 2283:80
    deploy:
      replicas: 1
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:immich.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.immich.rule=Host(`immich.example.com`)"
        - "traefik.http.routers.immich.entrypoints=https"
        - "traefik.http.services.immich.loadbalancer.server.port=80"
    networks:
      - internal
      - traefik_public

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.8.0/24
```
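The retention pipeline in the `db-backup` service above (`ls -tr ... | head -n -N | xargs -r rm`) deletes all but the newest `BACKUP_NUM_KEEP` dumps. If the `head -n -N` idiom looks suspicious, you can convince yourself with dummy files first - a sketch using 10 fake dumps and keeping 7:

```bash
demo=$(mktemp -d)
# Create ten "dumps", one minute apart
for i in 0 1 2 3 4 5 6 7 8 9; do
  touch -t "2023010100$(printf '%02d' "$i")" "$demo/dump_$i.psql"
done

BACKUP_NUM_KEEP=7
# Oldest-first listing, minus the newest 7, gets removed
ls -tr "$demo"/dump_*.psql | head -n -"$BACKUP_NUM_KEEP" | xargs -r rm

ls "$demo" | wc -l   # prints 7
```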
--8<-- "reference-networks.md"

## Launch Immich!

Launch the Immich stack by running:

```bash
docker stack deploy immich -c /var/data/config/immich/immich.yml
```

Now hit the URL you defined in your config, and you should be prompted to create your first (admin) account, after which you can login (*with the details you just created*), and start admin-ing. Install a mobile app, connect using the same credentials, and start backing up all your photos!

## Summary

What have we achieved? We have an HTTPS-protected endpoint to target with the native mobile apps, allowing us to backup photos from mobile devices and have them become searchable, shareable, and browseable via a beautiful, Google Photos-esque interface!

!!! summary "Summary"
    Created:

    * [X] Photos can be synced from mobile device, or manually uploaded via web UI

## Setup Immich in < 60s

Sponsors have access to a [Premix](/premix/) playbook, which will set up Immich in under 60s (*see below*):

<iframe width="560" height="315" src="https://www.youtube.com/embed/s-NZjYrNOPg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

--8<-- "recipe-footer.md"

[^1]: "wife-insurance": When the developer's wife is a primary user of the platform, you can bet he'll be writing quality code! :woman: :material-karate: :man: :bed: :cry:
[^2]: There's a [friendly Discord server](https://discord.com/invite/D8JsnBEuKb) for Immich too!
130 docs/recipes/instapy.md Normal file
@@ -0,0 +1,130 @@
---
title: How to run InstaPy in Docker
description: Automate your fake Instagram life with automated fakery using InstaPy in Docker
---

# InstaPy

[InstaPy](https://github.com/timgrossmann/InstaPy) is an Instagram bot, developed by [Tim Grossman](https://github.com/timgrossmann). Tim describes his motivation and experiences developing the bot [here](https://medium.freecodecamp.org/my-open-source-instagram-bot-got-me-2-500-real-followers-for-5-in-server-costs-e40491358340).

What's an Instagram bot? Basically, you feed the bot your Instagram user/password, and it executes follows/unfollows/likes/comments on your behalf, based on rules you set. (_I set my bot to like one photo tagged with "[#penguin](https://www.instagram.com/explore/tags/penguin/?hl=en)" per-run_)

{ loading=lazy }

Great power, right? A client (_yes, you can [hire](https://www.funkypenguin.co.nz/) me!_) asked me to integrate InstaPy into their swarm, and this recipe is the result.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We need a data location to store InstaPy's config, as well as its log files. Create /var/data/instapy per below:

```bash
mkdir -p /var/data/instapy/logs
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  web:
    command: ["./wait-for", "selenium:4444", "--", "python", "docker_quickstart.py"]
    environment:
      - PYTHONUNBUFFERED=0

    # Modify the image to whatever Tim's image tag ends up as. I used funkypenguin/instapy for my build
    image: funkypenguin/instapy:latest

    # When using swarm, you can't use relative paths, so the following needs to be set to the full filesystem path to your logs and docker_quickstart.py
    # Bind-mount docker_quickstart.py, since now that we're using a public image, we can't "bake" our credentials into the image anymore
    volumes:
      - /var/data/instapy/logs:/code/logs
      - /var/data/instapy/instapy.py:/code/docker_quickstart.py:ro

    # This section allows docker to restart the container when it exits (either normally or abnormally), which ensures that
    # InstaPy keeps re-running. Tweak the delay to avoid being banned for excessive activity
    deploy:
      restart_policy:
        condition: any
        delay: 3600s

  selenium:
    image: selenium/standalone-chrome-debug
    ports:
      - "5900:5900"
```

--8<-- "reference-networks.md"

### Command your bot

Create a variation of <https://github.com/timgrossmann/InstaPy/blob/master/docker_quickstart.py> at /var/data/instapy/instapy.py (the file we bind-mounted in the swarm config above).

Change at least the following:

```python
insta_username = ''
insta_password = ''
```

Here's an example of my config, set to like a single penguin-pic per run:

```python
insta_username = 'funkypenguin'
insta_password = 'followmemypersonalbrandisawesome'

dont_like = ['food','girl','batman','gotham','dead','nsfw','porn','slut','baby','tv','athlete','nhl','hockey','estate','music','band','clothes']
friend_list = ['therock','ruinporn']

# If you want to enter your Instagram Credentials directly just enter
# username=<your-username-here> and password=<your-password> into InstaPy
# e.g like so InstaPy(username="instagram", password="test1234")

bot = InstaPy(username=insta_username, password=insta_password, selenium_local_session=False)
bot.set_selenium_remote_session(selenium_url='http://selenium:4444/wd/hub')
bot.login()
bot.set_upper_follower_count(limit=10000)
bot.set_lower_follower_count(limit=10)
bot.set_comments([u'Cool :penguin:!', u'Awesome :penguin:!!', u'Nice :penguin:!!'])
bot.set_dont_include(friend_list)
bot.set_dont_like(dont_like)
#bot.set_ignore_if_contains(ignore_words)

# OK, so go through my feed and like stuff, interacting with people I follow
# bot.like_by_feed(amount=3, randomize=True, unfollow=True, interact=True)

# Now find posts tagged as #penguin, and like 'em, commenting 50% of the time
bot.set_do_comment(True, percentage=50)
bot.set_comments([u'Cool :penguin:!', u'Awesome :penguin:!!', u'Nice :penguin:!!'])
bot.like_by_tags(['#penguin'], amount=1)

# goodnight, sweet bot
bot.end()
```

## Serving

### Destroy all humans

Launch the bot by running ```docker stack deploy instapy -c <path-to-docker-compose.yml>```

While you're waiting for Docker to pull down the images, educate yourself on the risk of a robotic uprising:

<!-- markdownlint-disable MD033 -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/B1BdQcJ2ZYY" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

After swarm deploys, you won't see much, but you can monitor what InstaPy is doing by running ```docker service logs instapy_web```.

You can **also** watch the bot at work by VNCing to your docker swarm node on port 5900, password "secret". You'll see the Selenium browser window cycling away, interacting with all your real/fake friends on Instagram :)

[^1]: Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!

--8<-- "recipe-footer.md"
183 docs/recipes/ipfs-cluster.md Normal file
@@ -0,0 +1,183 @@
---
title: How to build an IPFS cluster in Docker
description: IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the World Wide Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository.
---

# IPFS

!!! danger "This recipe is a work in progress"
    This recipe is **incomplete**, and remains a work in progress.
    So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁

The intention of this recipe is to provide a local IPFS cluster, for the purpose of providing persistent storage for the various components of the recipes.

{ loading=lazy }

IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the World Wide Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository.

## Ingredients

1. [Docker swarm cluster](/docker-swarm/design/)

## Preparation

### Setup data locations (per-node)

Since IPFS may _replace_ ceph or glusterfs as a shared-storage provider for the swarm, we can't use sharded storage to store its persistent data. (🐔, meet :egg:)

On _each_ node, therefore, run the following to create the persistent data storage for ipfs and ipfs-cluster:

```bash
mkdir -p {/var/ipfs/daemon,/var/ipfs/cluster}
```

### Setup environment

ipfs-cluster nodes require a common secret (a 32-byte, hex-encoded string) in order to "trust" each other, so generate one on your first node by running ```od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'; echo```, and add it to ipfs.env.
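As a concrete sketch of the two steps above (generate the secret, then write the shared env file): the temp file below is a stand-in for `/var/data/config/ipfs/ipfs.env`, and `172.17.0.1` assumes the default docker0 address on your hosts:

```shell
# Generate a 32-byte (64 hex chars) shared secret for the ipfs-cluster peers
SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')

# Write the env file consumed by the cluster container
# (temp path for the demo; adjust path and docker0 IP for your hosts)
ENVFILE=$(mktemp)
cat > "$ENVFILE" <<EOF
SECRET=${SECRET}
IPFS_API=/ip4/172.17.0.1/tcp/5001
EOF

cat "$ENVFILE"
```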
Now on _each_ node, create ```/var/data/config/ipfs/ipfs.env```, including both the secret, *and* the IP of the docker0 interface on your hosts (_on my hosts, this is always 172.17.0.1_). We do this (_the trick with docker0_) to allow ipfs-cluster to talk to the local ipfs daemon, per-node:

```bash
SECRET=<string generated above>

# Use docker0 to access daemon
IPFS_API=/ip4/172.17.0.1/tcp/5001
```

### Create docker-compose file

Yes, I know. It's not as snazzy as docker swarm. Maybe we'll get there. But this implementation uses docker-compose, so create the following (_identical_) docker-compose.yml on each node:

```yaml
version: "3"

services:
  cluster:
    image: ipfs/ipfs-cluster
    volumes:
      - /var/ipfs/cluster:/data/ipfs-cluster
    env_file: /var/data/config/ipfs/ipfs.env
    ports:
      - 9095:9095
      - 9096:9096
    depends_on:
      - daemon

  daemon:
    image: ipfs/go-ipfs
    ports:
      - 4001:4001
      - 5001:5001
      - 8080:8080
    volumes:
      - /var/ipfs/daemon:/data/ipfs
```

### Launch independent nodes

Launch all nodes independently with ```docker-compose -f ipfs.yml up```. At this point, the nodes are each running independently, unaware of each other. But we do this to ensure that service.json is populated on each node, using the IPFS_API environment variable we specified in ipfs.env (_it's only used on the first run_).

The output looks something like this:

```bash
cluster_1  | 11:03:33.272  INFO restapi: REST API (libp2p-http): ENABLED. Listening on:
cluster_1  | /ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
cluster_1  | /ip4/172.18.0.3/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
cluster_1  | /p2p-circuit/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
daemon_1   | Swarm listening on /ip4/127.0.0.1/tcp/4001
daemon_1   | Swarm listening on /ip4/172.19.0.2/tcp/4001
daemon_1   | Swarm listening on /p2p-circuit
daemon_1   | Swarm announcing /ip4/127.0.0.1/tcp/4001
daemon_1   | Swarm announcing /ip4/172.19.0.2/tcp/4001
daemon_1   | Swarm announcing /ip4/202.170.161.77/tcp/4001
daemon_1   | API server listening on /ip4/0.0.0.0/tcp/5001
daemon_1   | Gateway (readonly) server listening on /ip4/0.0.0.0/tcp/8080
daemon_1   | Daemon is ready
cluster_1  | 10:49:19.720  INFO consensus: Current Raft Leader: QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp raft.go:293
cluster_1  | 10:49:19.721  INFO cluster: Cluster Peers (without including ourselves): cluster.go:403
cluster_1  | 10:49:19.721  INFO cluster:    - No other peers cluster.go:405
cluster_1  | 10:49:19.722  INFO cluster: ** IPFS Cluster is READY ** cluster.go:418
```

### Pick a leader

Pick a node to be your primary node, and CTRL-C the others.

Look for a line like this in the output of the primary node:

```bash
/ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
```

You'll note several addresses listed, all ending in the same hash. None of these addresses will be your docker node's actual IP address; however, since we exposed port 9096, we can substitute your docker node's IP.
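Since the peer hash is just the final path component of the multiaddr, you can peel it off with plain shell parameter expansion (the address below is the example from the output above):

```shell
# Extract the peer hash (everything after the last "/") from a cluster multiaddr
ADDR=/ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
HASH=${ADDR##*/}
echo "$HASH"   # QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
```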
### Bootstrap the followers

On each of the non-primary nodes, run the following, replacing **IP-OF-PRIMARY-NODE** with the actual IP of the primary node, and **HASHY-MC-HASHFACE** with your own hash from the primary output above.

```bash
docker run --rm -it -v /var/ipfs/cluster:/data/ipfs-cluster \
  --entrypoint ipfs-cluster-service ipfs/ipfs-cluster \
  daemon --bootstrap /ip4/IP-OF-PRIMARY-NODE/tcp/9096/ipfs/HASHY-MC-HASHFACE
```

You'll see output like this:

```bash
10:55:26.121  INFO service: Bootstrapping to /ip4/192.168.31.13/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT daemon.go:153
10:55:26.121  INFO ipfshttp: IPFS Proxy: /ip4/0.0.0.0/tcp/9095 -> /ip4/172.17.0.1/tcp/5001 ipfshttp.go:221
10:55:26.304 ERROR ipfshttp: error posting to IPFS: Post http://172.17.0.1:5001/api/v0/id: dial tcp 172.17.0.1:5001: connect: connection refused ipfshttp.go:708
10:55:26.622  INFO consensus: Current Raft Leader: QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT raft.go:293
10:55:26.623  INFO cluster: Cluster Peers (without including ourselves): cluster.go:403
10:55:26.623  INFO cluster:    - QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT cluster.go:410
10:55:26.624  INFO cluster:    - QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx cluster.go:410
10:55:26.625  INFO cluster: ** IPFS Cluster is READY ** cluster.go:418
```

!!! note
    You can ignore the warnings about port 5001 refused - this is because we weren't running the ipfs daemon while bootstrapping the cluster. It's harmless.

I haven't worked out why yet, but running the bootstrap in docker-run format resets the permissions on /var/ipfs/cluster/, so check the permissions on /var/ipfs/daemon, and make the permissions of /var/ipfs/cluster match.

You can now run ```docker-compose -f ipfs.yml up``` on the "follower" nodes, to bring your cluster online.

### Confirm cluster

docker-exec into one of the cluster containers (_it doesn't matter which one_), and run ```ipfs-cluster-ctl peers ls```.

You should see output from each node member, indicating it can see its other peers. Here's my output from a 3-node cluster:

```bash
/ # ipfs-cluster-ctl peers ls
QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT | ef68b1437c56 | Sees 2 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT
    - /ip4/172.19.0.3/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT
    - /p2p-circuit/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT
  > IPFS: QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
    - /ip4/127.0.0.1/tcp/4001/ipfs/QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
    - /ip4/172.19.0.2/tcp/4001/ipfs/QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
    - /ip4/202.170.161.75/tcp/4001/ipfs/QmU6buucy4FX9XqPoj4ZEiJiu7xUq2dnth5puU1rswtrGg
QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp | 6558e1bf32e2 | Sees 2 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/ipfs/QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp
    - /ip4/172.19.0.3/tcp/9096/ipfs/QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp
    - /p2p-circuit/ipfs/QmaAiMDP7PY3CX1xqzgAoNQav5M29P5WPWVqqSBdNu1Nsp
  > IPFS: QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
    - /ip4/127.0.0.1/tcp/4001/ipfs/QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
    - /ip4/172.19.0.2/tcp/4001/ipfs/QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
    - /ip4/202.170.161.77/tcp/4001/ipfs/QmYMUwHHsaeP2H8D2G3iXKhs1fHm2gQV6SKWiRWxbZfxX7
QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx | 28c13ec68f33 | Sees 2 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
    - /ip4/172.18.0.3/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
    - /p2p-circuit/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
  > IPFS: QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
    - /ip4/127.0.0.1/tcp/4001/ipfs/QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
    - /ip4/172.18.0.2/tcp/4001/ipfs/QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
    - /ip4/202.170.161.96/tcp/4001/ipfs/QmazkAuAPpWw913HKiGsr1ief2N8cLa6xcqeAZxqDMsWmE
/ #
```

[^1]: I'm still trying to work out how to _mount_ the ipfs data in my filesystem in a usable way. Which is why this is still a WIP :)

--8<-- "recipe-footer.md"
99 docs/recipes/jellyfin.md Normal file
@@ -0,0 +1,99 @@
---
title: Run Jellyfin in Docker with docker compose / swarm
description: Jellyfin is best described as "like Emby but really FOSS"
---

# Jellyfin

[Jellyfin](https://jellyfin.org/) is best described as "_like [Emby][emby] but really [FOSS](https://en.wikipedia.org/wiki/Free_and_open-source_software)_".

{ loading=lazy }

If it looks very similar to Emby, that's because it started as a fork of it, but it has evolved since then. For a complete explanation of the why, look [here](https://jellyfin.org/docs/general/about.html).

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need a location to store Jellyfin's library data, config files, logs and temporary transcoding space, so create ``/var/data/jellyfin``, and make sure it's owned by the user and group who also own your media data.

```bash
mkdir /var/data/jellyfin
```

Also, to keep the cache out of our backups, we create a location for it under the runtime folder. It too has to be owned by the user and group who own your media data.

```bash
mkdir /var/data/runtime/jellyfin
```

### Prepare environment

Create jellyfin.env, and populate with PUID/PGID for the user who owns the /var/data/jellyfin directory (_above_) and your actual media content (_in this example, the media content is at **/srv/data**_):

```bash
PUID=
PGID=
```
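If you're unsure of the right IDs, you can read them off the owner of your media path with `stat`. This sketch uses a temporary directory as a stand-in for `/srv/data`, and assumes GNU `stat`:

```shell
# Derive PUID/PGID from the owner of the media path, and emit a jellyfin.env
MEDIA=$(mktemp -d)        # stand-in for /srv/data in this demo
PUID=$(stat -c '%u' "$MEDIA")
PGID=$(stat -c '%g' "$MEDIA")
printf 'PUID=%s\nPGID=%s\n' "$PUID" "$PGID" > "$MEDIA/jellyfin.env"
cat "$MEDIA/jellyfin.env"
```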

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.0"

services:
  jellyfin:
    image: jellyfin/jellyfin
    env_file: /var/data/config/jellyfin/jellyfin.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/jellyfin:/config
      - /var/data/runtime/jellyfin:/cache
      - /srv/data/:/data
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:jellyfin.example.com
        - traefik.port=8096

        # traefikv2
        - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
        - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
    networks:
      - traefik_public
    ports:
      - 8096:8096

networks:
  traefik_public:
    external: true
```

--8<-- "reference-networks.md"

## Serving

### Launch Jellyfin stack

Launch the stack by running ```docker stack deploy jellyfin -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-based setup to finish deploying your Jellyfin.

[^1]: I didn't use an [oauth2_proxy](/reference/oauth_proxy/) for this stack, because it would interfere with mobile client support.
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
[^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

--8<-- "recipe-footer.md"
84 docs/recipes/kanboard.md Normal file
@@ -0,0 +1,84 @@
---
title: How to run Kanboard using Docker
description: Run Kanboard with Docker to get your personal kanban on!
---

# Kanboard

Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_)

Features include:

* Visualize your work
* Limit your work in progress to be more efficient
* Customize your boards according to your business activities
* Multiple projects with the ability to drag and drop tasks
* Reports and analytics
* Fast and simple to use
* Access from anywhere with a modern browser
* Plugins and integrations with external services
* Free, open source and self-hosted
* Super simple installation

{ loading=lazy }

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

Create the location for the bind-mount of the application data, so that it's persistent:

```bash
mkdir -p /var/data/kanboard
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  kanboard:
    image: kanboard/kanboard
    volumes:
      - /var/data/kanboard:/var/www/app/
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:kanboard.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.kanboard.rule=Host(`kanboard.example.com`)"
        - "traefik.http.services.kanboard.loadbalancer.server.port=80"
        - "traefik.enable=true"

networks:
  traefik_public:
    external: true
```

## Serving

### Launch Kanboard stack

Launch the Kanboard stack by running ```docker stack deploy kanboard -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN**. Default credentials are admin/admin, after which you can change your password (_under 'profile'_) and add more users.

[^1]: The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
[^2]: Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).

--8<-- "recipe-footer.md"
88 docs/recipes/kavita.md Normal file
@@ -0,0 +1,88 @@
---
title: Kavita Reader in Docker - Read ebooks / Manga / Comics
description: Here's a recipe to run Kavita under Docker Swarm to read your comics / manga / ebooks
---

# Kavita Reader in Docker Swarm

So you've just watched a bunch of superhero movies, and you're suddenly inspired to deep-dive into the weird world of comic books? You're already rocking [AutoPirate](/recipes/autopirate/) with [Mylar](/recipes/autopirate/mylar/) and [NZBGet](/recipes/autopirate/nzbget/) to grab content, but how to manage and enjoy your growing collection?

{ loading=lazy }

[Kavita Reader](https://www.kavitareader.com) is a "*rocket fueled self-hosted digital library which supports a vast array of file formats*". Primarily used for consuming Manga (*but quite capable of managing ebooks too*), Kavita's killer feature is an OPDS server for integration with other mobile apps such as [Chunky on iPad](http://chunkyreader.com/), and the ability to save your reading position across multiple devices.

There's a [public demo available](https://www.kavitareader.com/#demo) too!

--8<-- "recipe-standard-ingredients.md"

* [X] [AutoPirate](/recipes/autopirate/) components (*specifically [Mylar](/recipes/autopirate/mylar/)*), for searching for, downloading, and managing comic books

## Preparation

### Setup data locations

First we create a directory to hold the kavita database, logs and other persistent data:

```bash
mkdir /var/data/kavita
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml title="/var/data/config/kavita.yml"
version: "3.2" # https://docs.docker.com/compose/compose-file/compose-versioning/#version-3

services:
  kavita:
    image: kizaing/kavita:latest
    env_file: /var/data/config/kavita/kavita.env
    volumes:
      - /var/data/kavita:/kavita/config
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:kavita.example.com
        - traefik.port=5000

        # traefikv2
        - "traefik.http.routers.kavita.rule=Host(`kavita.example.com`)"
        - "traefik.http.routers.kavita.entrypoints=https"
        - "traefik.http.services.kavita.loadbalancer.server.port=5000"

        # uncomment for traefik-forward-auth (1)
        # - "traefik.http.routers.kavita.middlewares=forward-auth"

        # uncomment for authelia (2)
        # - "traefik.http.routers.kavita.middlewares=authelia"

    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

1. Uncomment to protect Kavita with an additional layer of authentication, using [Traefik Forward Auth][tfa]
2. Uncomment to protect Kavita with an additional layer of authentication, using [Authelia][authelia]

## Serving

### Avengers Assemble!

Launch the Kavita stack by running ```docker stack deploy kavita -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN**. Since it's a fresh installation, Kavita will prompt you to setup a username and password, after which you'll be able to setup your library, and tweak all teh butt0ns!

[^1]: Since Kavita doesn't need to communicate with any other local docker services, we don't need a separate overlay network for it. Provided Traefik can reach kavita via the `traefik_public` overlay network, we've got all we need.

[^2]: There's an [active subreddit](https://www.reddit.com/r/KavitaManga/) for Kavita.

--8<-- "recipe-footer.md"
71 docs/recipes/keycloak/authenticate-against-openldap.md Normal file
@@ -0,0 +1,71 @@
---
title: Integrate LDAP server with Keycloak for user federation
description: Here's how we'll add an LDAP provider to our Keycloak server for user federation.
---

# Authenticate Keycloak against OpenLDAP

!!! warning
    This is not a complete recipe - it's an **optional** component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.

Keycloak gets really sexy when you integrate it into your [OpenLDAP](/recipes/openldap/) stack (_also, it's great not to have to play with ugly LDAP tree UIs_). Note that OpenLDAP integration is **not necessary** if you want to use Keycloak with [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) - all you need for that is [local users][keycloak], and an [OIDC client](/recipes/keycloak/setup-oidc-provider/).

## Ingredients

!!! Summary
    Existing:

    * [X] [Keycloak](/recipes/keycloak/) recipe deployed successfully

    New:

    * [ ] An [OpenLDAP server](/recipes/openldap/) (*assuming you want to authenticate against it*)

## Preparation

You'll need to have completed the [OpenLDAP](/recipes/openldap/) recipe.

### Create Realm

You start in the "Master" realm - mouse over the realm name to reveal a dropdown, allowing you to add a new realm:

{ loading=lazy }

Enter a name for your new realm, and click "_Create_":

{ loading=lazy }

### Setup User Federation

Once in the desired realm, click on **User Federation**, and click **Add Provider**. On the next page ("_Required Settings_"), set the following:

* **Edit Mode** : Writeable
* **Vendor** : Other
* **Connection URL** : ldap://openldap
* **Users DN** : ou=People,<your base DN\>
* **Authentication Type** : simple
* **Bind DN** : cn=admin,<your base DN\>
* **Bind Credential** : <your chosen admin password\>

Save your changes, and then navigate back to "User Federation" > Your LDAP name > Mappers:

{ loading=lazy }

For each of the following mappers, click the name, and set the "_Read Only_" flag to "_Off_" (_this enables 2-way sync between Keycloak and OpenLDAP_):

* last name
* username
* email
* first name

{ loading=lazy }

## Summary

We've setup a new realm in Keycloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either Keycloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

!!! Summary
    Created:

    * [X] Keycloak realm in read-write federation with [OpenLDAP](/recipes/openldap/) directory

--8<-- "recipe-footer.md"
183 docs/recipes/keycloak/index.md Normal file
@@ -0,0 +1,183 @@
---
title: Run Keycloak behind traefik in Docker
---

# Keycloak (in Docker Swarm)

[Keycloak](https://www.keycloak.org/) is "_an open source identity and access management solution_". Using a local database, or a variety of backends (_think [OpenLDAP](/recipes/openldap/)_), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML.

Keycloak's OpenID provider can also be used in combination with [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/), to protect [vulnerable services](/recipes/autopirate/nzbget/) with an extra layer of authentication.

{ loading=lazy }

--8<-- "recipe-standard-ingredients.md"

## Setup

### Keycloak filesystem paths

We'll need several directories to bind-mount into our container for both runtime and backup data, so create them as per the following example:

```bash
mkdir -p /var/data/runtime/keycloak/database
mkdir -p /var/data/keycloak/database-dump
```

### Keycloak environment vars

Create `/var/data/config/keycloak/keycloak.env`, and populate with the following example variables, customized for your own domain structure.

```bash
# Technically, this could be auto-detected, but we prefer to be prescriptive
DB_VENDOR=postgres
DB_DATABASE=keycloak
DB_ADDR=keycloak-db
DB_USER=keycloak
DB_PASSWORD=myuberpassword
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=ilovepasswords

# This is required to run keycloak behind traefik
PROXY_ADDRESS_FORWARDING=true

# What's our hostname?
KEYCLOAK_HOSTNAME=keycloak.example.com

# Tell Postgres what user/password to create
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=myuberpassword
```
|
||||
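For clarity on how these env files are consumed: they're plain `KEY=VALUE` lines, with blanks and `#`-comments ignored, and values taken literally (no shell quoting or interpolation). A minimal sketch of that filtering, using a hypothetical temp file:

```shell
# Sketch: what the container actually receives from an env_file -
# comment and blank lines are dropped, KEY=VALUE lines pass through as-is.
ENVFILE=$(mktemp)
cat > "$ENVFILE" <<'EOF'
# This is required to run keycloak behind traefik
PROXY_ADDRESS_FORWARDING=true

KEYCLOAK_HOSTNAME=keycloak.example.com
EOF

# Strip comments and blank lines to see the effective variables:
grep -v '^[[:space:]]*#' "$ENVFILE" | grep -v '^[[:space:]]*$'
```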
Create `/var/data/config/keycloak/keycloak-backup.env`, and populate with the following, so that your database can be backed up to the filesystem, daily:

```bash
PGHOST=keycloak-db
PGUSER=keycloak
PGPASSWORD=myuberpassword
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
## Docker compose example

Create a Keycloak docker-compose (v3) stack config file, something like this example:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  keycloak:
    image: jboss/keycloak
    env_file: /var/data/config/keycloak/keycloak.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - traefik_public
      - internal
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:keycloak.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.keycloak.rule=Host(`keycloak.example.com`)"
        - "traefik.http.routers.keycloak.entrypoints=https"
        - "traefik.http.services.keycloak.loadbalancer.server.port=8080"

  keycloak-db:
    env_file: /var/data/config/keycloak/keycloak.env
    image: postgres:10.1
    volumes:
      - /var/data/runtime/keycloak/database:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal

  keycloak-db-backup:
    image: postgres:10.1
    env_file: /var/data/config/keycloak/keycloak-backup.env
    volumes:
      - /var/data/keycloak/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs -r rm --
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.49.0/24
```
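The retention logic in the `keycloak-db-backup` entrypoint above can be exercised standalone. This sketch simulates a dump directory (a hypothetical temp directory with fake dump files) and applies the same keep-the-newest-N expression:

```shell
# Standalone sketch of the retention logic used in the keycloak-db-backup
# entrypoint above: keep only the $BACKUP_NUM_KEEP newest dumps.
BACKUP_NUM_KEEP=7
DUMP_DIR=$(mktemp -d)

# Simulate 10 dumps with increasing mtimes (dump_10 is the newest):
for i in $(seq 1 10); do
  touch -d @"$((1700000000 + i))" "$DUMP_DIR/dump_$i.psql"
done

# Newest-N listing plus full listing: names appearing only once are the
# old dumps, and get deleted.
(ls -t "$DUMP_DIR"/dump*.psql | head -n "$BACKUP_NUM_KEEP"; ls "$DUMP_DIR"/dump*.psql) \
  | sort | uniq -u | xargs -r rm --

ls "$DUMP_DIR"   # only the 7 newest dumps remain
```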
--8<-- "reference-networks.md"

## Run Keycloak

### Launch Keycloak docker-swarm stack

Launch the Keycloak stack by running `docker stack deploy keycloak -c <path-to-docker-compose.yml>`

Log into your new instance at `https://YOUR-FQDN`, and log in with the user/password you defined in `keycloak.env`.

### Create Keycloak user

!!! question "Why are we adding a user when I have an admin user already?"
    Do you keep a spare set of house keys somewhere _other_ than your house? Do you login as `root` onto all your systems? Think of this as the same principle - lock the literal `admin` account away somewhere as a "password of last resort", and create a new user for your day-to-day interaction with Keycloak.

Within the "Master" realm (_no need for more realms yet_), navigate to **Manage** -> **Users**, and then click **Add User** at the top right:

{ loading=lazy }

Populate your new user's username (it's the only mandatory field):

{ loading=lazy }

#### Set Keycloak user credentials

Once your user is created, to set their password, click on the "**Credentials**" tab, and proceed to reset it. Set the password to non-temporary, unless you like extra work!

{ loading=lazy }

## Tips

### Keycloak with Traefik

Keycloak can be used with Traefik in two ways:

#### Keycloak behind Traefik

You'll notice that the docker compose example above includes labels for both Traefik v1 and Traefik v2. You obviously don't need both (*although it won't hurt*), but make sure you update the example domain in the Traefik labels. Keycloak should work behind Traefik without any further customization.

#### Keycloak as Traefik middleware

Irrespective of whether Keycloak itself is behind Traefik, you can secure access to **other** services [behind Traefik using Keycloak][tfa-keycloak], using the [Traefik Forward Auth][tfa] middleware. Other similar middleware solutions are traefik-gatekeeper, and oauth2-proxy.

### Keycloak Troubleshooting

Something didn't work? Try the following:

1. Confirm that Keycloak did, in fact, start, by looking at the state of the stack, with `docker stack ps keycloak --no-trunc`

--8<-- "recipe-footer.md"

[^1]: For more geeky {--pain--}{++fun++}, try integrating Keycloak with [OpenLDAP][openldap] for an authentication backend!

59
docs/recipes/keycloak/setup-oidc-provider.md
Normal file
@@ -0,0 +1,59 @@

---
title: How to setup OIDC provider in Keycloak
description: Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against Keycloak using OpenID Connect (OIDC), which is required for Traefik Forward Auth, we'll setup a client in Keycloak...
---

# Add OIDC Provider to Keycloak

!!! warning
    This is not a complete recipe - it's an optional component of the [Keycloak recipe](/recipes/keycloak/), but has been split into its own page to reduce complexity.

Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against Keycloak using OpenID Connect (OIDC), which is required for [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/), we'll setup a client in Keycloak...

## Ingredients

!!! Summary
    Existing:

    * [X] [Keycloak](/recipes/keycloak/) recipe deployed successfully

    New:

    * [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/) recipe for more information

## Preparation

### Create Client

Within the "Master" realm (*no need for more realms yet*), navigate to **Clients**, and then click **Create** at the top right:

{ loading=lazy }

Enter a name for your client (*remember, we're authenticating **applications** now, not users, so use an application-specific name*):

{ loading=lazy }

### Configure Client

Once your client is created, set at **least** the following, and click **Save**:

* **Access Type** : Confidential
* **Valid Redirect URIs** : <The URIs you want to protect\>

{ loading=lazy }

### Retrieve Client Secret

Now that you've changed the access type, and clicked **Save**, an additional **Credentials** tab appears at the top of the window. Click on the tab, and capture the Keycloak-generated secret. This secret, plus your client name, is required to authenticate against Keycloak via OIDC.

{ loading=lazy }

## Summary

We've set up an OIDC client in Keycloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). The OIDC URL provided by Keycloak in the master realm is `https://<your-keycloak-url>/realms/master/.well-known/openid-configuration`
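For context on what consumers like Traefik Forward Auth read from that discovery URL, here's a sketch against a trimmed, illustrative sample of a master-realm discovery document (the endpoint paths are assumptions based on Keycloak's conventions, not captured output):

```shell
# Illustrative (trimmed) sample of a master realm discovery document:
DISCOVERY='{
  "issuer": "https://keycloak.example.com/realms/master",
  "authorization_endpoint": "https://keycloak.example.com/realms/master/protocol/openid-connect/auth",
  "token_endpoint": "https://keycloak.example.com/realms/master/protocol/openid-connect/token"
}'

# Forward-auth middlewares need at least these keys to drive the
# OIDC authorization-code flow:
for key in issuer authorization_endpoint token_endpoint; do
  echo "$DISCOVERY" | grep -q "\"$key\"" || { echo "missing $key"; exit 1; }
done
echo "discovery document looks complete"
```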
!!! Summary
    Created:

    * [X] Client ID and Client Secret used to authenticate against Keycloak with OpenID Connect

--8<-- "recipe-footer.md"

85
docs/recipes/komga.md
Normal file
@@ -0,0 +1,85 @@

---
title: How to run Komga with Docker
description: Run Komga under Docker Swarm in docker-compose syntax
---

# Komga in Docker Swarm

So you've just watched a bunch of superhero movies, and you're suddenly inspired to deep-dive into the weird world of comic books? You're already rocking [AutoPirate](/recipes/autopirate/) with [Mylar](/recipes/autopirate/mylar/) and [NZBGet](/recipes/autopirate/nzbget/) to grab content, but how to manage and enjoy your growing collection?

{ loading=lazy }

[Komga](https://komga.org/) is a media server with a beautifully slick interface, allowing you to read your comics / manga in CBZ, CBR, PDF and epub format. Komga includes an integrated web reader, as well as a [Tachiyomi](https://tachiyomi.org/) plugin and an OPDS server for integration with other mobile apps such as [Chunky on iPad](http://chunkyreader.com/).

## Ingredients

--8<-- "recipe-standard-ingredients.md"
* [X] [AutoPirate](/recipes/autopirate/) components (*specifically [Mylar](/recipes/autopirate/mylar/)*), for searching for, downloading, and managing comic books

## Preparation

### Setup data locations

First, we create a directory to hold the Komga database, logs and other persistent data:

```bash
mkdir /var/data/komga
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.2"

services:
  komga:
    image: gotson/komga
    env_file: /var/data/config/komga/komga.env
    volumes:
      - /var/data/media/:/media
      - /var/data/komga:/config
    deploy:
      replicas: 1
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:komga.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.komga.rule=Host(`komga.example.com`)"
        - "traefik.http.services.komga.loadbalancer.server.port=8080"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.komga.middlewares=forward-auth@file"
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

## Serving

### Avengers Assemble!

Launch the Komga stack by running `docker stack deploy komga -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**. Since it's a fresh installation, Komga will prompt you to setup a username and password, after which you'll be able to setup your library, and tweak all teh butt0ns!

### Save teh wurld!

If Komga scratches your particular itch, please join me in [sponsoring the developer](/#sponsored-projects) :heart:

[^1]: Since Komga doesn't need to communicate with any other services, we don't need a separate overlay network for it. Provided Traefik can reach Komga via the `traefik_public` overlay network, we've got all we need.

--8<-- "recipe-footer.md"

3
docs/recipes/kubernetes/harbor/index.md
Normal file
@@ -0,0 +1,3 @@

# Harbor

harbor

1
docs/recipes/kubernetes/harbor/istio.md
Normal file
@@ -0,0 +1 @@

# Istio with Harbor

277
docs/recipes/kubernetes/mastodon.md
Normal file
@@ -0,0 +1,277 @@

---
title: Install Mastodon in Kubernetes
description: How to install your own Mastodon instance using Kubernetes
---

# Install Mastodon in Kubernetes

[Mastodon](https://joinmastodon.org/) is an open-source, federated (*i.e., decentralized*) social network, inspired by Twitter's "microblogging" format, and used by upwards of 4.4M early-adopters to share links, pictures, video and text.

{ loading=lazy }

!!! question "Why would I run my own instance?"
    That's a good question. After all, there are all sorts of public instances available, with a [range of themes and communities](https://joinmastodon.org/communities). You may want to run your own instance because you like the tech, or because you just think it's cool :material-emoticon-cool-outline:

    You may also have realized that since Mastodon is **federated**, users on your instance can follow, toot, and interact with users on any other instance!

    If you're **not** into that much effort / pain, you're welcome to [join our instance][community/mastodon] :material-mastodon:

## Mastodon requirements

!!! summary "Ingredients"

    Already deployed:

    * [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][mastodon]*)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
    * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
    * [x] [External DNS](/kubernetes/external-dns/) to create a DNS entry

    New:

    * [ ] Chosen DNS FQDN for your epic new social network
    * [ ] An S3-compatible bucket for serving media (*I use [Backblaze B2](https://www.backblaze.com/b2/docs/s3_compatible_api.html)*)
    * [ ] An SMTP gateway for delivering email notifications (*I use [Mailgun](https://www.mailgun.com/)*)
    * [ ] A business card, with the title "[*I'm CEO, Bitch*](https://nextshark.com/heres-the-story-behind-mark-zuckerbergs-im-ceo-bitch-business-card/)"

## Preparation

### GitRepository

The Mastodon project doesn't currently publish a versioned helm chart - there's just a [helm chart stored in the repository](https://github.com/mastodon/mastodon/tree/main/chart) (*I plan to submit a PR to address this*). For now, we use a GitRepository instead of a HelmRepository as the source of a HelmRelease.

```yaml title="/bootstrap/gitrepositories/gitrepository-mastodon.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: mastodon
  namespace: flux-system
spec:
  interval: 1h0s
  ref:
    branch: main
  url: https://github.com/funkypenguin/mastodon # (1)!
```

1. I'm using my own fork because I've been working on improvements to the upstream chart, but `https://github.com/mastodon/mastodon` would work too.

### Namespace

We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-mastodon.yaml`:

```yaml title="/bootstrap/namespaces/namespace-mastodon.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: mastodon
```

### Kustomization

Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/mastodon`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-mastodon.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: mastodon
  namespace: flux-system
spec:
  interval: 15m
  path: mastodon
  prune: true # remove any elements later removed from the above path
  timeout: 2m # if not set, this defaults to interval duration, which is 1h
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: server
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: mastodon-web
      namespace: mastodon
    - apiVersion: apps/v1
      kind: Deployment
      name: mastodon-streaming
      namespace: mastodon
    - apiVersion: apps/v1
      kind: Deployment
      name: mastodon-sidekiq
      namespace: mastodon
```

### ConfigMap

Now we're into the mastodon-specific YAMLs. First, we create a ConfigMap containing the entire contents of the helm chart's [values.yaml](https://github.com/bitnami-labs/mastodon/blob/main/helm/mastodon/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo:

```yaml title="mastodon/configmap-mastodon-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: mastodon-helm-chart-value-overrides
  namespace: mastodon
data:
  values.yaml: |- # (1)!
    # <upstream values go here>
```

1. Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.
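If you'd rather script the paste than hand-indent it, the transformation is just a uniform 4-space indent of the upstream file. A minimal sketch, using a tiny stand-in for the real values.yaml and hypothetical temp paths:

```shell
# Sketch: embedding an upstream values.yaml under the ConfigMap's
# `values.yaml: |-` key means indenting every line by 4 spaces.
TMP=$(mktemp -d)

# Tiny stand-in for the real upstream values.yaml:
printf 'mastodon:\n  createAdmin:\n    enabled: true\n' > "$TMP/values.yaml"

# Indent every line by 4 spaces, ready to paste under the values.yaml key:
sed 's/^/    /' "$TMP/values.yaml" > "$TMP/embedded.yaml"
cat "$TMP/embedded.yaml"
```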
Values I change from the default are:

```yaml
spec:
  values:
    mastodon:
      createAdmin:
        enabled: true
        username: funkypenguin
        email: davidy@funkypenguin.co.nz
      local_domain: so.fnky.nz
      s3:
        enabled: true
        access_key: "<redacted>"
        access_secret: "<redacted>"
        bucket: "so-fnky-nz"
        endpoint: https://s3.us-west-000.backblazeb2.com
        hostname: s3.us-west-000.backblazeb2.com
      secrets:
        secret_key_base: "<redacted>"
        otp_secret: "<redacted>"
        vapid:
          private_key: "<redacted>"
          public_key: "<redacted>"
      smtp:
        domain: mg.funkypenguin.co.nz
        enable_starttls_auto: true
        from_address: mastodon@mg.funkypenguin.co.nz
        login: mastodon@mg.funkypenguin.co.nz
        openssl_verify_mode: peer
        password: <redacted>
        port: 587
        reply_to: mastodon@mg.funkypenguin.co.nz
        server: smtp.mailgun.org
        tls: false

    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: traefik
        nginx.ingress.kubernetes.io/proxy-body-size: 10m
      hosts:
        - host: so.fnky.nz
          paths:
            - path: '/'

    postgresql:
      auth:
        postgresPassword: "<redacted>"
        username: postgres
        password: "<redacted>"
      primary:
        persistence:
          size: 1Gi

    redis:
      password: "<redacted>"
      master:
        persistence:
          size: 1Gi
      architecture: standalone
```

### HelmRelease

Finally, having set the scene above, we define the HelmRelease which will actually deploy Mastodon into the cluster. I save this in my flux repo:

```yaml title="/mastodon/helmrelease-mastodon.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: mastodon
  namespace: mastodon
spec:
  chart:
    spec:
      chart: ./charts/mastodon
      sourceRef:
        kind: GitRepository
        name: mastodon
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: mastodon
  valuesFrom:
    - kind: ConfigMap
      name: mastodon-helm-chart-value-overrides
      valuesKey: values.yaml # (1)!
```

1. This is the default, but best to be explicit for clarity

## :material-mastodon: Install Mastodon!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation[^1] using `flux reconcile source git flux-system`. You should see the kustomization appear...

```bash
~ ❯ flux get kustomizations | grep mastodon
mastodon        main/d34779f    False    True    Applied revision: main/d34779f
~ ❯
```

The helmrelease should be reconciled...

```bash
~ ❯ flux get helmreleases -n mastodon
NAME        REVISION        SUSPENDED    READY    MESSAGE
mastodon    1.2.2-pre-02    False        True     Release reconciliation succeeded
~ ❯
```

And you should have happy Mastodon pods:

```bash
~ ❯ k get pods -n mastodon
NAME                                   READY   STATUS      RESTARTS   AGE
mastodon-media-remove-27663840-l2xvt   0/1     Completed   0          22h
mastodon-postgresql-0                  1/1     Running     0          5d20h
mastodon-redis-master-0                1/1     Running     0          5d20h
mastodon-sidekiq-5ffd544f98-k86qp      1/1     Running     0          5d20h
mastodon-streaming-676fdcf75-hz52z     1/1     Running     0          5d20h
mastodon-web-597cf7c8d5-2hzkl          1/1     Running     4          5d20h
~ ❯
```

... and finally check that the ingress was created as desired:

```bash
~ ❯ k get ingress -n mastodon
NAME       CLASS    HOSTS        ADDRESS   PORTS     AGE
mastodon   <none>   so.fnky.nz             80, 443   8d
~ ❯
```

Now hit the URL you defined in your config, and you should see your beautiful new Mastodon instance! Login with your configured credentials, navigate to **Preferences**, and have fun tweaking and tooting away!

!!! question "What's my Mastodon admin password?"

    The admin username _may_ be output by the post-install hook job which creates it, but I didn't notice this at the time I deployed mine. Since I had a working SMTP setup however, I just used the "forgot password" feature to perform a password reset, which feels more secure anyway.

Once you're done, "toot" me up by mentioning [funkypenguin@so.fnky.nz](https://so.fnky.nz/@funkypenguin) in a toot! :wave:

!!! tip
    If your instance feels lonely, try using some [relays](https://github.com/brodi1/activitypub-relays) to bring in the federated firehose!

## Summary

What have we achieved? We now have a fully-fledged Mastodon instance, ready to federate with the world! :material-mastodon:

!!! summary "Summary"
    Created:

    * [X] Mastodon configured, running, and ready to toot!

--8<-- "recipe-footer.md"

[^1]: There is also a 3rd option, using the Flux webhook receiver to trigger a reconciliation - to be covered in a future recipe!

93
docs/recipes/linx.md
Normal file
@@ -0,0 +1,93 @@

---
title: How to share screenshots with linx under Docker
description: Quickly share self-destructing screenshots, text, etc
---

# Linx

Ever wanted to quickly share a screenshot, but don't want to use imgur, sign up for a service, or have your image tracked across the internet for all time?

Want to privately share some log output with a password, or a self-destructing cat picture?

{: loading=lazy }

[Linx](https://github.com/andreimarcu/linx-server) is a self-hosted file/media-sharing service, which features:

- :white_check_mark: Display common filetypes (*image, video, audio, markdown, pdf*)
- :white_check_mark: Display syntax-highlighted code with in-place editing
- :white_check_mark: Documented API with keys for restricting uploads
- :white_check_mark: Torrent download of files using web seeding
- :white_check_mark: File expiry, deletion key, file access key, and random filename options

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we create a directory to hold the data which linx will serve:

```bash
mkdir /var/data/linx
```

### Create config file

Linx is configured using a flat text file, so create this on the Docker host, and then we'll mount it (*read-only*) into the container, below.

```bash
mkdir /var/data/config/linx
cat << EOF > /var/data/config/linx/linx.conf
# Refer to https://github.com/andreimarcu/linx-server for details
cleanup-every-minutes = 5
EOF
```
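`linx.conf` is a flat `key = value` file; a minimal sketch of how a setting like the one above can be read back out (illustrative parsing against a hypothetical temp copy, not linx's actual parser):

```shell
# Sketch: reading a "key = value" setting out of a linx-style flat config.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# Refer to https://github.com/andreimarcu/linx-server for details
cleanup-every-minutes = 5
EOF

# Extract the value for cleanup-every-minutes (comments are untouched,
# since they don't match the key pattern):
CLEANUP=$(sed -n 's/^cleanup-every-minutes *= *//p' "$CONF")
echo "$CLEANUP"   # 5
```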
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.2" # https://docs.docker.com/compose/compose-file/compose-versioning/#version-3

services:
  linx:
    image: andreimarcu/linx-server
    command: -config /linx.conf
    volumes:
      - /var/data/linx/:/files/
      - /var/data/config/linx/linx.conf:/linx.conf:ro
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:linx.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.linx.rule=Host(`linx.example.com`)"
        - "traefik.http.routers.linx.entrypoints=https"
        - "traefik.http.services.linx.loadbalancer.server.port=8080"

    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

## Serving

### Launch the Linx!

Launch the Linx stack by running `docker stack deploy linx -c <path-to-docker-compose.yml>`

[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

--8<-- "recipe-footer.md"

194
docs/recipes/mail.md
Normal file
@@ -0,0 +1,194 @@
|
||||
---
|
||||
title: Run postfix / dovecot with docker-mailserver
|
||||
description: A self-contained mailserver (postfix, dovecot) in Docker with spam-fighting friends (spamassassin, clamav)
|
||||
---
|
||||
|
||||
# Mail Server
|
||||
|
||||
Many of the recipes that follow require email access of some kind. It's normally possible to use a hosted service such as SendGrid, or just a gmail account. If (like me) you'd like to self-host email for your stacks, then the following recipe provides a full-stack mail server running on the docker HA swarm.
|
||||
|
||||
Of value to me in choosing docker-mailserver were:
|
||||
|
||||
1. Automatically renews LetsEncrypt certificates
|
||||
2. Creation of email accounts across multiple domains (i.e., the same container gives me mailbox wekan@wekan.example.com, and gitlab@gitlab.example.com)
|
||||
3. The entire configuration is based on flat files, so there's no database or persistence to worry about
|
||||
|
||||
docker-mailserver doesn't include a webmail client, and one is not strictly needed. Rainloop can be added either as another service within the stack, or as a standalone service. Rainloop will be covered in a future recipe.
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
|
||||
2. LetsEncrypt authorized email address for domain
|
||||
3. Access to manage DNS records for domains
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/docker-mailserver:
|
||||
|
||||
```bash
|
||||
cd /var/data
|
||||
mkdir docker-mailserver
|
||||
cd docker-mailserver
|
||||
mkdir {maildata,mailstate,config,letsencrypt,rainloop}
|
||||
```
|
||||
|
||||
### Get LetsEncrypt certificate
|
||||
|
||||
Decide on the FQDN to assign to your mailserver. You can service multiple domains from a single mailserver - i.e., bob@dev.example.com and daphne@prod.example.com can both be served by **mail.example.com**.
|
||||
|
||||
The docker-mailserver container can _renew_ our LetsEncrypt certs for us, but it can't generate them. To do this, we need to run certbot (from a container) to request the initial certs and create the appropriate directory structure.
|
||||
|
||||
In the example below, since I'm already using Traefik to manage the LE certs for my web platforms, I opted to use the DNS challenge to prove my ownership of the domain. The certbot client will prompt you to add a DNS record for domain verification.
|
||||
|
||||
```bash
|
||||
docker run -ti --rm -v \
|
||||
"$(pwd)"/letsencrypt:/etc/letsencrypt certbot/certbot \
|
||||
--manual --preferred-challenges dns certonly \
|
||||
-d mail.example.com
|
||||
```
|
||||
|
||||
### Get setup.sh
|
||||
|
||||
docker-mailserver comes with a handy bash script for managing the stack (which is just really a wrapper around the container.) It'll make our setup easier, so download it into the root of your configuration/data directory, and make it executable:
|
||||
|
||||
```bash
|
||||
curl -o setup.sh \
|
||||
https://raw.githubusercontent.com/tomav/docker-mailserver/master/setup.sh \
|
||||
chmod a+x ./setup.sh
|
||||
```
|
||||
|
||||
### Create email accounts
|
||||
|
||||
For every email address required, run ```./setup.sh email add <email> <password>``` to create the account. The command returns no output.
|
||||
|
||||
You can run ```./setup.sh email list``` to confirm all of your addresses have been created.

### Create DKIM DNS entries

Run ```./setup.sh config dkim``` to create the necessary DKIM entries. The command returns no output.

Examine the keys created by opendkim to identify the DNS TXT records required:

```bash
for i in `find config/opendkim/keys/ -name mail.txt`; do \
echo $i; \
cat $i; \
done
```

You'll end up with something like this:

```bash
config/opendkim/keys/gitlab.example.com/mail.txt
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
"p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB" ) ; ----- DKIM key mail for gitlab.example.com
[root@ds1 mail]#
```

Create the necessary DNS TXT entries for your domain(s). Note that although opendkim splits the record across two lines, the actual record should be concatenated on creation. I.e., the DNS TXT record above should read:

```bash
"v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB"
```
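
Rather than joining the quoted chunks by hand, you can let the shell do the concatenation (a sketch, assuming the `config/opendkim/keys/` layout shown above):

```shell
# Print each key file name, then its TXT value with quotes and line-breaks stripped
for f in config/opendkim/keys/*/mail.txt; do
  echo "== $f"
  grep -o '"[^"]*"' "$f" | tr -d '"\n'
  echo
done
```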

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (_v3.2 - because we need to expose mail ports in "host mode"_), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3.2'

services:
  mail:
    image: tvial/docker-mailserver:latest
    ports:
      - target: 25
        published: 25
        protocol: tcp
        mode: host
      - target: 587
        published: 587
        protocol: tcp
        mode: host
      - target: 993
        published: 993
        protocol: tcp
        mode: host
      - target: 995
        published: 995
        protocol: tcp
        mode: host
    volumes:
      - /var/data/docker-mailserver/maildata:/var/mail
      - /var/data/docker-mailserver/mailstate:/var/mail-state
      - /var/data/docker-mailserver/config:/tmp/docker-mailserver
      - /var/data/docker-mailserver/letsencrypt:/etc/letsencrypt
    env_file: /var/data/docker-mailserver/docker-mailserver.env
    networks:
      - internal
    deploy:
      replicas: 1

  rainloop:
    image: hardware/rainloop
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:rainloop.example.com
        - traefik.port=8888

        # traefikv2
        - "traefik.http.routers.rainloop.rule=Host(`rainloop.example.com`)"
        - "traefik.http.services.rainloop.loadbalancer.server.port=8888"
        - "traefik.enable=true"
    volumes:
      - /var/data/mailserver/rainloop:/rainloop/data

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.2.0/24
```

--8<-- "reference-networks.md"

A sample docker-mailserver.env file looks like this:

```bash
ENABLE_SPAMASSASSIN=1
ENABLE_CLAMAV=1
ENABLE_POSTGREY=1
ONE_DIR=1
OVERRIDE_HOSTNAME=mail.example.com
OVERRIDE_DOMAINNAME=mail.example.com
POSTMASTER_ADDRESS=admin@example.com
PERMIT_DOCKER=network
SSL_TYPE=letsencrypt
```
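
Before deploying, it's worth a trivial sanity-check that the keys the stack depends on are actually set in the env file (a sketch; adjust the path if you stored the file elsewhere):

```shell
# Report any of the critical keys missing from the env file
for key in OVERRIDE_HOSTNAME POSTMASTER_ADDRESS SSL_TYPE; do
  grep -q "^${key}=" /var/data/docker-mailserver/docker-mailserver.env \
    || echo "missing: ${key}"
done
```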

## Serving

### Launch mailserver

Launch the mail server stack by running ```docker stack deploy docker-mailserver -c <path-to-docker-mailserver.yml>```

[^1]: One of the elements of this design which I didn't appreciate at first is that since the config is entirely file-based, **setup.sh** can be run on any container host, provided it has the shared data mounted. This means that even though docker-mailserver was not designed with docker swarm in mind, it works perfectly with swarm. I.e., from any node, regardless of where the container is actually running, you're able to add/delete email addresses, view logs, etc.

[^2]: If you're using sieve with Rainloop, take note of the [workaround](https://forum.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://forum.funkypenguin.co.nz/u/ggilley)

--8<-- "recipe-footer.md"

398
docs/recipes/mastodon.md
Normal file
@@ -0,0 +1,398 @@

---
title: Install Mastodon in Docker Swarm
description: How to install your own Mastodon instance using Docker Swarm
---

# Install Mastodon in Docker Swarm

[Mastodon](https://joinmastodon.org/) is an open-source, federated (*i.e., decentralized*) social network, inspired by Twitter's "microblogging" format, and used by upwards of 4.4M early-adopters, to share links, pictures, video and text.

{ loading=lazy }

!!! question "Why would I run my own instance?"
    That's a good question. After all, there are all sorts of public instances available, with a [range of themes and communities](https://joinmastodon.org/communities). You may want to run your own instance because you like the tech, or because you just think it's cool :material-emoticon-cool-outline:

    You may also have realized that since Mastodon is **federated**, users on your instance can follow, toot, and interact with users on any other instance!

    If you're **not** into that much effort / pain, you're welcome to [join our instance][community/mastodon] :material-mastodon:

## Mastodon requirements

!!! summary "Ingredients"
    Already deployed:

    * [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/) (*Alternatively, see the [Kubernetes recipe here][k8s/mastodon]*)
    * [X] [Traefik](/docker-swarm/traefik/) configured per design

    New:

    * [ ] DNS entry for your epic new social network, pointed to your [keepalived](/docker-swarm/keepalived/) IP
    * [ ] An S3-compatible bucket for serving media (*I use [Backblaze B2](https://www.backblaze.com/b2/docs/s3_compatible_api.html)*)
    * [ ] An SMTP gateway for delivering email notifications (*I use [Mailgun](https://www.mailgun.com/)*)
    * [ ] A business card, with the title "[*I'm CEO, Bitch*](https://nextshark.com/heres-the-story-behind-mark-zuckerbergs-im-ceo-bitch-business-card/)"

### Setup data locations

First, we create a directory to hold the Mastodon docker-compose configuration:

```bash
mkdir /var/data/config/mastodon
```

Then we setup directories to hold all the various data:

```bash
mkdir -p /var/data/runtime/mastodon/redis
mkdir -p /var/data/runtime/mastodon/elasticsearch
mkdir -p /var/data/runtime/mastodon/postgres
```
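
The three directories can equivalently be created in one line with bash brace expansion:

```shell
mkdir -p /var/data/runtime/mastodon/{redis,elasticsearch,postgres}
```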

!!! question "Why `/var/data/runtime/mastodon` and not just `/var/data/mastodon`?"
    The data won't be able to be backed up by a regular filesystem backup, because it'll be in use. We still need to store it **somewhere** though, so we use `/var/data/runtime`, which is excluded from automated backups. See [Data Layout](/reference/data_layout/) for details.

### Setup Mastodon environment

Create `/var/data/config/mastodon/mastodon.env`, something like the example below:

```yaml title="/var/data/config/mastodon/mastodon.env"
# This is a sample configuration file. You can generate your configuration
# with the `rake mastodon:setup` interactive setup wizard, but to customize
# your setup even further, you'll need to edit it manually. This sample does
# not demonstrate all available configuration options. Please look at
# https://docs.joinmastodon.org/admin/config/ for the full documentation.

# Note that this file accepts slightly different syntax depending on whether
# you are using `docker-compose` or not. In particular, if you use
# `docker-compose`, the value of each declared variable will be taken verbatim,
# including surrounding quotes.
# See: https://github.com/mastodon/mastodon/issues/16895

# Federation
# ----------
# This identifies your server and cannot be changed safely later
# ----------
LOCAL_DOMAIN=example.com # (1)!

# Redis
# -----
REDIS_HOST=redis
REDIS_PORT=6379

# PostgreSQL
# ----------
DB_HOST=db
DB_USER=postgres
DB_NAME=postgres
DB_PASS=tootmeupbuttercup # (2)!
DB_PORT=5432

# Elasticsearch (optional)
# ------------------------
ES_ENABLED=false # (3)!
ES_HOST=es
ES_PORT=9200
# Authentication for ES (optional)
ES_USER=elastic
ES_PASS=password

# Secrets
# -------
# Make sure to use `rake secret` to generate secrets
# -------
SECRET_KEY_BASE=imafreaksecretbaby # (4)!
OTP_SECRET=imtoosecretformysocks

# Web Push
# --------
# Generate with `rake mastodon:webpush:generate_vapid_key`
# docker run -it tootsuite/mastodon bundle exec rake mastodon:webpush:generate_vapid_key
# --------
VAPID_PRIVATE_KEY= # (5)!
VAPID_PUBLIC_KEY=

# Sending mail # (6)!
# ------------
SMTP_SERVER=smtp.mailgun.org
SMTP_PORT=587
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_FROM_ADDRESS=notifications@example.com

# File storage (optional) # (7)!
# -----------------------
S3_ENABLED=true
S3_BUCKET=files.example.com
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
S3_ALIAS_HOST=files.example.com

# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952
```

1. Set this to the FQDN you plan to use for your instance.
2. It doesn't matter what this is set to, since we're using `POSTGRES_HOST_AUTH_METHOD=trust`, but I've left it in for completeness and consistency with Mastodon's docs
3. Only enable this if you have enough resources for an Elasticsearch instance for full-text indexing
4. Generate these with `docker run -it tootsuite/mastodon bundle exec rake secret`
5. Generate these with `docker run -it tootsuite/mastodon bundle exec rake mastodon:webpush:generate_vapid_key`
6. You'll need to complete this if you want to send email
7. You'll need to complete this if you want to host media elsewhere
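
The two retention values in the env file are in seconds; `31556952` is one mean Gregorian year (365.2425 days), which is easy to verify with shell arithmetic:

```shell
# 365.2425 days × 86400 seconds/day, using integer arithmetic
echo $(( 3652425 * 86400 / 10000 ))   # → 31556952
```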

### Mastodon Docker Swarm config

Create a docker swarm config file in docker-compose syntax (v3), something like this example:

--8<-- "premix-cta.md"

```yaml title="/var/data/config/mastodon/mastodon.yml"
version: '3.5'
services:
  db:
    image: postgres:14-alpine
    networks:
      - internal
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - /var/data/runtime/mastodon/postgres:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_HOST_AUTH_METHOD=trust'

  redis:
    image: redis:6-alpine
    networks:
      - internal
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - /var/data/runtime/mastodon/redis:/data

  # es:
  #   image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
  #   environment:
  #     - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
  #     - "xpack.license.self_generated.type=basic"
  #     - "xpack.security.enabled=false"
  #     - "xpack.watcher.enabled=false"
  #     - "xpack.graph.enabled=false"
  #     - "xpack.ml.enabled=false"
  #     - "bootstrap.memory_lock=true"
  #     - "cluster.name=es-mastodon"
  #     - "discovery.type=single-node"
  #     - "thread_pool.write.queue_size=1000"
  #   networks:
  #     - internal
  #   healthcheck:
  #     test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
  #   volumes:
  #     - /var/data/runtime/mastodon/elasticsearch:/usr/share/elasticsearch/data
  #   ulimits:
  #     memlock:
  #       soft: -1
  #       hard: -1
  #     nofile:
  #       soft: 65536
  #       hard: 65536
  #   ports:
  #     - '9200:9200'

  web:
    image: tootsuite/mastodon
    env_file: /var/data/config/mastodon/mastodon.env
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - internal
      - traefik_public
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    volumes:
      - /var/data/mastodon:/mastodon/public/system
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv2
        - "traefik.http.routers.mastodon.rule=Host(`mastodon.example.com`)"
        - "traefik.http.routers.mastodon.entrypoints=https"
        - "traefik.http.services.mastodon.loadbalancer.server.port=3000"

  streaming:
    image: tootsuite/mastodon
    env_file: /var/data/config/mastodon/mastodon.env
    command: node ./streaming
    networks:
      - internal
      - traefik_public
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv2 (streaming needs its own router/service name, and listens on port 4000)
        - "traefik.http.routers.mastodon-streaming.rule=Host(`mastodon.example.com`) && PathPrefix(`/api/v1/streaming`)"
        - "traefik.http.routers.mastodon-streaming.entrypoints=https"
        - "traefik.http.services.mastodon-streaming.loadbalancer.server.port=4000"

  sidekiq:
    image: tootsuite/mastodon
    env_file: /var/data/config/mastodon/mastodon.env
    command: bundle exec sidekiq
    networks:
      - internal
    volumes:
      - /var/data/mastodon:/mastodon/public/system
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]

  ## Uncomment to enable federation with tor instances along with adding the following ENV variables
  ## http_proxy=http://privoxy:8118
  ## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
  # tor:
  #   image: sirboops/tor
  #   networks:
  #     - internal
  #
  # privoxy:
  #   image: sirboops/privoxy
  #   volumes:
  #     - /var/data/mastodon/privoxy:/opt/config
  #   networks:
  #     - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.9.0/24
```

--8<-- "reference-networks.md"

## Pre-warming

Unlike most recipes, we can't just deploy Mastodon into Docker Swarm and trust it to set up its database itself. We have to "pre-warm" it using docker-compose, per the official docs (*Docker Swarm is not officially supported*).

### Start with docker-compose

From the `/var/data/config/mastodon` directory, run the following to start up the Mastodon environment using docker-compose. This will result in a **broken** environment, since the database isn't configured yet, but it provides us the opportunity to configure it.

```bash
docker-compose -f mastodon.yml up -d
```

The output should look something like this:

```bash
root@raphael:/var/data/config/mastodon# docker-compose -f mastodon.yml up -d
WARNING: Some services (streaming, web) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Creating mastodon_sidekiq_1   ... done
Creating mastodon_db_1        ... done
Creating mastodon_redis_1     ... done
Creating mastodon_streaming_1 ... done
Creating mastodon_web_1       ... done
root@raphael:/var/data/config/mastodon#
```

### Create database

Run the following to create the database. You can expect this to take a few minutes, and produce a **lot** of output:

```bash
cd /var/data/config/mastodon
docker-compose -f mastodon.yml run --rm web bundle exec rake db:migrate
```

### Create admin user

Next, decide on your chosen username, and create your admin user:

```bash
cd /var/data/config/mastodon
docker-compose -f mastodon.yml run --rm web bin/tootctl accounts \
  create <username> --email <email address> --confirmed --role admin
```

The password will be output on completion[^1]:

```bash
root@raphael:/var/data/config/mastodon# docker-compose -f mastodon.yml run --rm web bin/tootctl accounts create batman --email batman@batcave.org --confirmed --role admin
WARNING: Some services (streaming, web) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
OK
New password: c6eb8e0d10cd6f0aa874b7a384177a08
root@raphael:/var/data/config/mastodon#
```

### Turn off docker-compose

We've set up the essentials now; everything else can be configured either via the UI or via the `.env` file, so tear down the docker-compose environment with:

```bash
docker-compose -f mastodon.yml down
```

The output should look like this:

```bash
root@raphael:/var/data/config/mastodon# docker-compose -f mastodon.yml down
WARNING: Some services (streaming, web) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Stopping mastodon_streaming_1 ... done
Stopping mastodon_web_1       ... done
Stopping mastodon_db_1        ... done
Stopping mastodon_redis_1     ... done
Stopping mastodon_sidekiq_1   ... done
Removing mastodon_streaming_1 ... done
Removing mastodon_web_1       ... done
Removing mastodon_db_1        ... done
Removing mastodon_redis_1     ... done
Removing mastodon_sidekiq_1   ... done
Removing network mastodon_internal
Network traefik_public is external, skipping
root@raphael:/var/data/config/mastodon#
```

## :material-mastodon: Launch Mastodon!

Launch the Mastodon stack by running:

```bash
docker stack deploy mastodon -c /var/data/config/mastodon/mastodon.yml
```

Now hit the URL you defined in your config, and you should see your beautiful new Mastodon instance! Login with your configured credentials, navigate to **Preferences**, and have fun tweaking and tooting away!

Once you're done, "toot" me by mentioning [funkypenguin@so.fnky.nz](https://so.fnky.nz/@funkypenguin) in a toot! :wave:

!!! tip
    If your instance feels lonely, try using some [relays](https://github.com/brodi1/activitypub-relays) to bring in the federated firehose!

## Summary

What have we achieved? Even though we had to jump through some extra hoops to setup database and users, we now have a fully-swarmed Mastodon instance, ready to federate with the world! :material-mastodon:

!!! summary "Summary"
    Created:

    * [X] Mastodon configured, running, and ready to toot!

--8<-- "recipe-footer.md"

[^1]: Or, you can just reset your password from the UI, assuming you have SMTP working
97
docs/recipes/mealie.md
Normal file
@@ -0,0 +1,97 @@

---
title: Mealie recipe manager on Docker
description: A tasty tool to manage your meals and shopping list, on Docker swarm
---

# Mealie

[Mealie](https://github.com/hay-kot/mealie) is a self-hosted recipe manager and meal planner (*with a RestAPI backend and a reactive frontend application built in Vue for a pleasant user experience*) for the whole family.

Easily add recipes into your database by providing the url[^penguinfood], and mealie will automatically import the relevant data, or add a family recipe with the UI editor.

{ loading=lazy }

Mealie also provides a secure API for interactions from 3rd party applications.

!!! question "Why does my recipe manager need an API?"
    An API allows integration into applications like Home Assistant that can act as notification engines to provide custom notifications based on Meal Plan data to remind you to defrost the chicken, marinade the steak, or start the CrockPot. See the [official docs](https://hay-kot.github.io/mealie/) for more information. Additionally, you can access any available API from the backend server. To explore the API, spin up your server and navigate to <http://yourserver.com/docs> for interactive API documentation.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First we create a directory to hold the data which mealie will serve:

```bash
mkdir /var/data/mealie
```

### Create environment

There's only one environment variable currently required (`db_type`), but let's create an `.env` file anyway, to keep the recipe consistent and extensible.

```bash
mkdir /var/data/config/mealie
cat << EOF > /var/data/config/mealie/mealie.env
db_type=sqlite
EOF
```
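
A side-note on the heredoc above: with an unquoted `EOF` delimiter the shell expands any `$variables` in the body before writing the file (harmless here, since the content contains none). Quoting the delimiter writes the body literally, which matters once your env file contains `$` characters:

```shell
who=world
cat << 'EOF' > /tmp/demo-literal.env    # quoted delimiter: no expansion
greeting=$who
EOF
cat << EOF > /tmp/demo-expanded.env     # unquoted delimiter: $who is expanded
greeting=$who
EOF
cat /tmp/demo-literal.env /tmp/demo-expanded.env
```

The first file contains the literal `greeting=$who`; the second contains `greeting=world`.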

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.2" # https://docs.docker.com/compose/compose-file/compose-versioning/#version-3

services:
  app:
    image: hkotel/mealie:latest
    env_file: /var/data/config/mealie/mealie.env
    volumes:
      - /var/data/mealie:/app/data
      - /etc/localtime:/etc/localtime:ro
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:mealie.example.com
        - traefik.port=9000
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true

        # traefikv2
        - "traefik.http.routers.mealie.rule=Host(`mealie.example.com`)"
        - "traefik.http.routers.mealie.entrypoints=https"
        - "traefik.http.services.mealie.loadbalancer.server.port=9000"
        - "traefik.http.routers.mealie.middlewares=forward-auth"

    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

## Serving

### Mealie is served!

Launch the mealie stack by running ```docker stack deploy mealie -c <path-to-docker-compose.yml>```. The first time you access Mealie at https://**YOUR FQDN**, you might think there's something wrong. There are **no** recipes, and no instructions. Hover over the little plus sign at the bottom right, and within a second, two icons appear. Click the "link" icon to import a recipe from a URL:

{ loading=lazy }

[^penguinfood]: I scraped all these recipes from <https://www.food.com/search/penguin>
[^1]: If you plan to use Mealie for fancy things like an early-morning alarm to defrost the chicken, you may need to customize the [Traefik Forward Auth][tfa] rules, or even remove them entirely, for unauthenticated API access.
[^2]: If you think Mealie is tasty, encourage the developer :cook: to keep on cookin', by [sponsoring him](https://github.com/sponsors/hay-kot) :heart:

--8<-- "recipe-footer.md"

146
docs/recipes/miniflux.md
Normal file
@@ -0,0 +1,146 @@

---
title: Read RSS in peace with miniflux on Docker
---

# Miniflux

Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite open-source Kanban app, [Kanboard](/recipes/kanboard/)_)

{ loading=lazy }

I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate:

* Compatible with the Fever API, so you can read your feeds through existing mobile and desktop clients (_This is the killer feature for me. I hardly ever read RSS on my desktop; I typically read on my iPhone or iPad, using [Fiery Feeds](http://cocoacake.net/apps/fiery/) or my new squeeze, [Unread](https://www.goldenhillsoftware.com/unread/)_)
* Send your bookmarks to Pinboard, Wallabag, Shaarli or Instapaper (_I use this to automatically pin my bookmarks for collection on my [blog](https://www.funkypenguin.co.nz)_)
* Feeds can be configured to download a "full" version of the content (_rather than an excerpt_)
* Use the Bookmarklet to subscribe to a website directly from any browser[^1]

!!! abstract "2.0+ is a bit different"
    [Some things changed](https://docs.miniflux.net/en/latest/migration.html) when Miniflux 2.0 was released. For one thing, the only supported database is now postgresql (_no more SQLite_). External themes are gone, as is PHP (_in favor of golang_). It's been a controversial change, but I'm keen on minimal and single-purpose, so I'm still very happy with the direction of development. The developer has laid out his [opinions](https://docs.miniflux.net/en/latest/opinionated.html) re the decisions he's made in the course of development.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

Create the location for the bind-mount of the application data, so that it's persistent:

```bash
mkdir -p /var/data/miniflux/database-dump
mkdir -p /var/data/runtime/miniflux/database
```

### Setup environment

Create ```/var/data/config/miniflux/miniflux.env``` something like this:

```bash
DATABASE_URL=postgres://miniflux:secret@miniflux-db/miniflux?sslmode=disable
POSTGRES_USER=miniflux
POSTGRES_PASSWORD=secret

# This is necessary for miniflux to update the db schema, even on an empty DB
RUN_MIGRATIONS=1

# Needed on the first run only; comment these out once you've added your own user account
CREATE_ADMIN=1
ADMIN_USERNAME=admin
ADMIN_PASSWORD=test1234
```

Create ```/var/data/config/miniflux/miniflux-backup.env```, and populate with the following, so that your database can be backed up to the filesystem, daily:

```env
PGHOST=miniflux-db
PGUSER=miniflux
PGPASSWORD=secret
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```

The entire application is configured using environment variables, including the initial username. Once you've successfully deployed once, comment out ```CREATE_ADMIN``` and the two successive lines.
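
One way to do that in place (a sketch; uses GNU `sed -i`):

```shell
# Comment out the one-shot admin-creation variables after the first deploy
sed -i -e 's/^CREATE_ADMIN=/#&/' \
       -e 's/^ADMIN_USERNAME=/#&/' \
       -e 's/^ADMIN_PASSWORD=/#&/' \
    /var/data/config/miniflux/miniflux.env
```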

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  miniflux:
    image: miniflux/miniflux:2.0.7
    env_file: /var/data/config/miniflux/miniflux.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:miniflux.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.miniflux.rule=Host(`miniflux.example.com`)"
        - "traefik.http.services.miniflux.loadbalancer.server.port=8080"
        - "traefik.enable=true"

  miniflux-db:
    env_file: /var/data/config/miniflux/miniflux.env
    image: postgres:10.1
    volumes:
      - /var/data/runtime/miniflux/database:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal

  miniflux-db-backup:
    image: postgres:10.1
    env_file: /var/data/config/miniflux/miniflux-backup.env
    volumes:
      - /var/data/miniflux/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.22.0/24
```
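
The rotation one-liner in the backup entrypoint above deserves a word: it lists the newest `$BACKUP_NUM_KEEP` dumps *plus* all dumps, so after `sort`, only the older files appear exactly once, and `uniq -u` emits just those for `xargs rm`. A standalone sketch of the idiom (run in an empty scratch directory; `touch -d` is GNU coreutils):

```shell
keep=2
touch -d '2020-01-01' dump_old.psql
touch -d '2021-01-01' dump_mid.psql
touch -d '2022-01-01' dump_new.psql
# The newest $keep files appear twice, older files once; uniq -u keeps the singletons
(ls -t dump*.psql | head -n "$keep"; ls dump*.psql) | sort | uniq -u
```

Only `dump_old.psql` is printed, which is exactly the file the entrypoint would then delete.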
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Miniflux stack
|
||||
|
||||
Launch the Miniflux stack by running ```docker stack deploy miniflux -c <path -to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, using the credentials you setup in the environment flie. After this, change your user/password as you see fit, and comment out the ```CREATE_ADMIN``` line in the env file (_if you don't, then an **additional** admin will be created the next time you deploy_)
|
||||
|
||||
[^1]: Find the bookmarklet under the **Settings -> Integration** page.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
208
docs/recipes/minio.md
Normal file
208
docs/recipes/minio.md
Normal file
@@ -0,0 +1,208 @@
|
||||
---
title: Run Minio on Docker (using compose format in swarm)
description: How to run Minio's self-hosted S3-compatible object storage under Docker Swarm, using docker-compose v3 syntax
---

# Minio

Minio is a high-performance distributed object storage server, designed for large-scale private cloud infrastructure.

However, at its simplest, Minio allows you to expose a local file structure via the [Amazon S3 API](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html). You could, for example, use it to provide access to "buckets" (folders) of data on your filestore, secured by access/secret keys, just like AWS S3. You can further interact with your "buckets" with common tools, just as if they were hosted on S3.

Under a more advanced configuration, Minio runs in distributed mode, with [features](https://docs.min.io/minio/baremetal/concepts/feature-overview.html) including high-availability, mirroring, erasure-coding, and "bitrot detection".

{ loading=lazy }

Possible use-cases:

1. Sharing files (_protected by user accounts with secrets_) via HTTPS, either as read-only or read-write, in such a way that the bucket could be mounted to a remote filesystem using common S3-compatible tools, like [goofys](https://github.com/kahing/goofys). Ever wanted to share a folder with friends, but didn't want to open additional firewall ports, etc.?
2. Simulating S3 in a dev environment
3. Mirroring an S3 bucket locally

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need a directory to hold our minio file store. You can create a blank directory wherever you like (*I used `/var/data/minio`*), or point the `/data` volume to a pre-existing folder structure.

```bash
mkdir /var/data/minio
```

### Prepare environment

Create `minio.env`, and populate with the variables below.

```bash
MINIO_ROOT_USER=hackme
MINIO_ROOT_PASSWORD=becauseiforgottochangethepassword
MINIO_BROWSER_REDIRECT_URL=https://minio-console.example.com
MINIO_SERVER_URL=https://minio.example.com
```
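
The values above are placeholders, of course. One way to generate something stronger for `MINIO_ROOT_PASSWORD` (*a sketch, assuming `openssl` is available on your host*):

```shell
# Generate a random 32-character value for MINIO_ROOT_PASSWORD
# (24 random bytes base64-encode to exactly 32 characters)
MINIO_ROOT_PASSWORD=$(openssl rand -base64 24)
echo "MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}"
```

Any password manager's generator works just as well.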

!!! note "If minio redirects you to :9001"
    `MINIO_BROWSER_REDIRECT_URL` is especially important since recent versions of Minio will redirect web browsers to this URL when they hit the API directly. (*If you find yourself redirected to `http://your-minio-url:9001`, then you've not set this value correctly!*)

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3.2'

services:
  app:
    image: minio/minio
    env_file: /var/data/config/minio/minio.env
    volumes:
      - /var/data/minio:/data
    networks:
      - traefik_public
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:minio.example.com
        - traefik.port=9000

        - traefik.console.frontend.rule=Host:minio-console.example.com
        - traefik.console.port=9001

        # traefikv2 (death-by-labels, much?)
        - traefik.http.middlewares.redirect-https.redirectScheme.scheme=https
        - traefik.http.middlewares.redirect-https.redirectScheme.permanent=true

        - traefik.http.routers.minio-https.rule=Host(`minio.example.com`)
        - traefik.http.routers.minio-https.entrypoints=https
        - traefik.http.routers.minio-https.service=minio
        - traefik.http.routers.minio-http.rule=Host(`minio.example.com`)
        - traefik.http.routers.minio-http.entrypoints=http
        - traefik.http.routers.minio-http.middlewares=redirect-https
        - traefik.http.routers.minio-http.service=minio
        - traefik.http.services.minio.loadbalancer.server.port=9000

        - traefik.http.routers.minio-console-https.rule=Host(`minio-console.example.com`)
        - traefik.http.routers.minio-console-https.entrypoints=https
        - traefik.http.routers.minio-console-https.service=minio-console
        - traefik.http.routers.minio-console-http.rule=Host(`minio-console.example.com`)
        - traefik.http.routers.minio-console-http.entrypoints=http
        - traefik.http.routers.minio-console-http.middlewares=redirect-https
        - traefik.http.routers.minio-console-http.service=minio-console
        - traefik.http.services.minio-console.loadbalancer.server.port=9001

    command: minio server /data --console-address ":9001"

networks:
  traefik_public:
    external: true
```

## Serving

### Launch Minio stack

Launch the Minio stack by running `docker stack deploy minio -c <path-to-docker-compose.yml>`

Log into your new instance at `https://minio-console.**YOUR-FQDN**`, with the root user and password you specified in `minio.env`.

If you created `/var/data/minio`, you'll see nothing. If you mapped `/data` to existing data, you should see all subdirectories in your existing folder represented as buckets.

Use the Minio console to create a user, or (*ill-advisedly*) continue using the root user/password!

If all you need is single-user access to your data, you're done! 🎉

If, however, you want to expose data to multiple users, at different privilege levels, you'll need the minio client to create some users and (_potentially_) policies...
## Minio Trickz :clown:

### Setup minio client

While it's possible to fully administer Minio using the console, it's also possible using the `mc` CLI client, as illustrated below:

```bash
root@ds1:~# mc config host add minio http://app:9000 admin iambatman
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `minio` successfully.
root@ds1:~#
```
### Add (readonly) user

Use mc to add a (readonly or readwrite) user, by running `mc admin user add minio <access key> <secret key> <access level>`

Example:

```bash
root@ds1:~# mc admin user add minio spiderman peterparker readonly
Added user `spiderman` successfully.
root@ds1:~#
```

Confirm by listing your users (_admin is excluded from the list_):

```bash
root@node1:~# mc admin user list minio
enabled    spiderman  readonly
root@node1:~#
```
### Make a bucket accessible to users

By default, all buckets have no "policies" attached to them, and so can only be accessed by the administrative user. Having created some readonly/read-write users above, you'll be wanting to grant them access to buckets.

The simplest permission scheme is "on or off". Either a bucket has a policy, or it doesn't. (_I believe you can apply policies to subdirectories of buckets in a more advanced configuration_)

After **no** policy, the most restrictive policy you can attach to a bucket is "download". This policy will allow authenticated users to download contents from the bucket. Apply the "download" policy to a bucket by running `mc policy download minio/<bucket name>`, i.e.:

```bash
root@ds1:# mc policy download minio/comics
Access permission for `minio/comics` is set to `download`
root@ds1:#
```
### Advanced bucketing

There are some clever complexities you can achieve with user/bucket policies, including:

* A public bucket, which requires no authentication to read or even write (_for a public dropbox, for example_)
* A special bucket, hidden from most users, but available to VIP users by application of a custom "[canned policy](https://docs.minio.io/docs/minio-multi-user-quickstart-guide.html)"

### Mount a minio share remotely

Having set up your buckets, users, and policies - you can give out your minio external URL, and user access keys to your remote users, and they can S3-mount your buckets, interacting with them based on their user policy (_read-only or read/write_)

I tested the S3 mount using [goofys](https://github.com/kahing/goofys), "a high-performance, POSIX-ish Amazon S3 file system written in Go".

First, I created `~/.aws/credentials`, as per the following example:

```ini
[default]
aws_access_key_id=spiderman
aws_secret_access_key=peterparker
```

And then I ran (_in the foreground, for debugging_), `goofys -f --debug_s3 --debug_fuse --endpoint=https://traefik.example.com <bucketname> <local mount point>`

To permanently mount an S3 bucket using goofys, I'd add something like this to `/etc/fstab`:

```bash
goofys#bucket   /mnt/mountpoint   fuse   _netdev,allow_other,--file-mode=0666   0   0
```

[^1]: There are many S3-filesystem-mounting tools available, I just picked Goofys because it's simple. Google is your friend :)
[^2]: Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets
[^3]: Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets

--8<-- "recipe-footer.md"

124
docs/recipes/munin.md
Normal file
@@ -0,0 +1,124 @@
---
title: How to run Munin in Docker
description: Network resource monitoring tool for quick analysis
---

# Munin in Docker

Munin is a networked resource monitoring tool that can help analyze resource trends and "what just happened to kill our performance?" problems. It is designed to be very plug-and-play. A default installation provides a lot of graphs with almost no work.

{ loading=lazy }

Using Munin you can easily monitor the performance of your computers, networks, SANs, applications, weather measurements, and whatever comes to mind. It makes it easy to determine "what's different today" when a performance problem crops up. It makes it easy to see how you're doing capacity-wise on any resources.

Munin uses the excellent RRDTool (written by Tobi Oetiker) and the framework is written in Perl, while plugins may be written in any language. Munin has a master/node architecture in which the master connects to all the nodes at regular intervals and asks them for data. It then stores the data in RRD files, and (if needed) updates the graphs. One of the main goals has been ease of creating new plugins (graphs).

--8<-- "recipe-standard-ingredients.md"
## Preparation

### Prepare target nodes

Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use `apt-get install munin-node`, and on RHEL/CentOS, run `yum install munin-node`. Remember to edit `/etc/munin/munin-node.conf`, and set your node to allow the server to poll it, by adding `cidr_allow x.x.x.x/x`.

On CentOS Atomic, of course, you can't install munin-node directly, but you can run it as a containerized instance. In this case, you can't use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using:

```bash
docker run -d --name munin-node --restart=always \
  --privileged --net=host \
  -v /:/rootfs:ro \
  -v /sys:/sys:ro \
  -e ALLOW="cidr_allow 0.0.0.0/0" \
  -p 4949:4949 \
  funkypenguin/munin-node
```

### Setup data locations

We'll need several directories to bind-mount into our container, so create them in `/var/data/munin`:

```bash
mkdir /var/data/munin
cd /var/data/munin
mkdir -p {log,lib,run,cache}
```

### Prepare environment

Create `/var/data/config/munin/munin.env`, and populate with the following variables. Set at a **minimum** the `MUNIN_USER`, `MUNIN_PASSWORD`, and `NODES` values:

```bash
MUNIN_USER=odin
MUNIN_PASSWORD=lokiisadopted
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=smtp-username
SMTP_PASSWORD=smtp-password
SMTP_USE_TLS=false
SMTP_ALWAYS_SEND=false
SMTP_MESSAGE='[${var:group};${var:host}] -> ${var:graph_title} -> warnings: ${loop<,>:wfields ${var:label}=${var:value}} / criticals: ${loop<,>:cfields ${var:label}=${var:value}}'
ALERT_RECIPIENT=monitoring@example.com
ALERT_SENDER=alerts@example.com
NODES="node1:192.168.1.1 node2:192.168.1.2 node3:192.168.1.3"
SNMP_NODES="router1:10.0.0.254:9999"
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:

  munin:
    image: funkypenguin/munin-server
    env_file: /var/data/config/munin/munin.env
    networks:
      - traefik_public
    volumes:
      - /var/data/munin/log:/var/log/munin
      - /var/data/munin/lib:/var/lib/munin
      - /var/data/munin/run:/var/run/munin
      - /var/data/munin/cache:/var/cache/munin
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:munin.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.munin.rule=Host(`munin.example.com`)"
        - "traefik.http.services.munin.loadbalancer.server.port=8080"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.munin.middlewares=forward-auth@file"

networks:
  traefik_public:
    external: true
```

--8<-- "reference-networks.md"

## Serving

### Launch Munin stack

Launch the Munin stack by running `docker stack deploy munin -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**, with the user and password you specified in `munin.env` above.

[^1]: If you wanted to expose the Munin UI directly, you could remove the traefik-forward-auth from the design.

--8<-- "recipe-footer.md"

230
docs/recipes/nextcloud.md
Normal file
@@ -0,0 +1,230 @@
---
title: How to run Nextcloud in Docker (behind Traefik)
description: We can now run Nextcloud in our Docker Swarm, with LetsEncrypt SSL termination handled by Traefik
---

# NextCloud

[NextCloud](https://www.nextcloud.org/) (_a [fork of OwnCloud](https://owncloud.com/owncloud-vs-nextcloud/), led by original developer Frank Karlitschek_) is a suite of client-server software for creating and using file hosting services. It is functionally similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server.

- <https://en.wikipedia.org/wiki/Nextcloud>

{ loading=lazy }

This recipe is based on the official NextCloud docker image, but includes separate containers for the database (_MariaDB_), Redis (_for transactional locking_), Apache Solr (_for full-text searching_), automated database backup (_you *do* backup the stuff you care about, right?_), and a separate cron container for running NextCloud's 15-min crons.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need several directories for [static data](/reference/data_layout/#static-data) to bind-mount into our container, so create them in /var/data/nextcloud (_so that they can be [backed up](/recipes/duplicity/)_):

```bash
mkdir /var/data/nextcloud
cd /var/data/nextcloud
mkdir -p {html,apps,config,data,database-dump}
```

Now make **more** directories for [runtime data](/reference/data_layout/#runtime-data) (_so that they can be **not** backed-up_):

```bash
mkdir /var/data/runtime/nextcloud
cd /var/data/runtime/nextcloud
mkdir -p {db,redis}
```

### Prepare environment

Create nextcloud.env, and populate with the following variables:

```bash
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=FVuojphozxMVyaYCUWomiP9b
MYSQL_HOST=db

# For mysql
MYSQL_ROOT_PASSWORD=<set to something secure>
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=<set to something secure>
```

Now create a **separate** nextcloud-db-backup.env file, to capture the environment variables necessary to perform the backup. (_If the same variables are shared with the mariadb container, they [cause issues](https://forum.funkypenguin.co.nz/t/nextcloud-funky-penguins-geek-cookbook/254/3?u=funkypenguin) with database access_)

```bash
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<set to something secure, same as MYSQL_ROOT_PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.0"

services:
  nextcloud:
    image: nextcloud
    env_file: /var/data/config/nextcloud/nextcloud.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:nextcloud.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.example.com`)"
        - "traefik.http.services.nextcloud.loadbalancer.server.port=80"

    volumes:
      - /var/data/nextcloud/html:/var/www/html
      - /var/data/nextcloud/apps:/var/www/html/custom_apps
      - /var/data/nextcloud/config:/var/www/html/config
      - /var/data/nextcloud/data:/var/www/html/data

  db:
    image: mariadb:10
    env_file: /var/data/config/nextcloud/nextcloud.env
    networks:
      - internal
    volumes:
      - /var/data/runtime/nextcloud/db:/var/lib/mysql

  db-backup:
    image: mariadb:10
    env_file: /var/data/config/nextcloud/nextcloud-db-backup.env
    volumes:
      - /var/data/nextcloud/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs -r rm --
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

  redis:
    image: redis:alpine
    networks:
      - internal
    volumes:
      - /var/data/runtime/nextcloud/redis:/data

  cron:
    image: nextcloud
    volumes:
      - /var/data/nextcloud/:/var/www/html
    user: www-data
    networks:
      - internal
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      while [ ! -f /var/www/html/config/config.php ]; do
        sleep 1
      done
      while true; do
        php -f /var/www/html/cron.php
        sleep 15m
      done
      EOF'

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.12.0/24
```

--8<-- "reference-networks.md"
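
The backup-rotation one-liner in the `db-backup` entrypoint is dense, so here's a sketch of the same keep-the-newest-N logic, runnable anywhere with standard coreutils (*the dump filenames below are illustrative*):

```shell
# Recreate the rotation from the db-backup entrypoint using 9 dummy dumps
demo=$(mktemp -d) && cd "$demo"
for i in 1 2 3 4 5 6 7 8 9; do touch "dump_0${i}-01-2022_00_00_0${i}.sql.gz"; done

BACKUP_NUM_KEEP=7
# "ls -t | head -n N" selects the N newest dumps; appending the full list and
# filtering with "uniq -u" leaves only the names *outside* the keep-set,
# which are then deleted
(ls -t dump_*.sql.gz | head -n "$BACKUP_NUM_KEEP"; ls dump_*.sql.gz) | sort | uniq -u | xargs -r rm --

ls dump_*.sql.gz | wc -l    # 7 dumps survive
```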

## Serving

### Launch NextCloud stack

Launch the NextCloud stack by running `docker stack deploy nextcloud -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**, with user "admin" and the password you specified in nextcloud.env.

### Enable redis

To make NextCloud [a little snappier](https://docs.nextcloud.com/server/13/admin_manual/configuration_server/caching_configuration.html), edit `/var/data/nextcloud/config/config.php` (_now that it's been created on the first container launch_), and add the following:

```php
'redis' => array(
    'host' => 'redis',
    'port' => 6379,
),
```

### Use service discovery

Want to use Calendar/Contacts on your iOS device? Want to avoid dictating long, rambling URL strings to your users, like `https://nextcloud.batcave.com/remote.php/dav/principals/users/USERNAME/`?

Huzzah! NextCloud supports [service discovery for CalDAV/CardDAV](https://tools.ietf.org/html/rfc6764), allowing you to simply tell your device the primary URL of your server (_**nextcloud.batcave.org**, for example_), and have the device figure out the correct WebDAV path to use.

We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](/docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to set up SSL **within** the NextCloud container.

When using a reverse proxy, your device requests a URL from your proxy (<https://nextcloud.batcave.com/.well-known/caldav>), and the reverse proxy then passes that request **unencrypted** to the internal URL of the NextCloud instance (i.e., <http://172.16.12.123/.well-known/caldav>)

The Apache webserver on the NextCloud container (_knowing it was spoken to via HTTP_), responds with a 301 redirect to <http://nextcloud.batcave.com/remote.php/dav/>. See the problem? You requested an **HTTPS** (_encrypted_) URL, and in return, you received a redirect to an **HTTP** (_unencrypted_) URL. Any sensible client (_iOS included_) will refuse such shenanigans.

To correct this, we need to tell NextCloud to always redirect the .well-known URLs to an HTTPS location. This can only be done **after** deploying NextCloud, since it's only on first launch of the container that the .htaccess file is created in the first place.

To make NextCloud service discovery work with the Traefik reverse proxy, edit `/var/data/nextcloud/html/.htaccess`, and change this:

```bash
RewriteRule ^\.well-known/carddav /remote.php/dav/ [R=301,L]
RewriteRule ^\.well-known/caldav /remote.php/dav/ [R=301,L]
```

To this:

```bash
RewriteRule ^\.well-known/carddav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
RewriteRule ^\.well-known/caldav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
```

Then restart your container with `docker service update nextcloud_nextcloud --force` to restart apache.

You can test for success by running `curl -i https://nextcloud.batcave.org/.well-known/carddav`. You should get a 301 redirect to your equivalent of <https://nextcloud.batcave.org/remote.php/dav/>, as below:

```bash
[davidy:~] % curl -i https://nextcloud.batcave.org/.well-known/carddav
HTTP/2 301
content-type: text/html; charset=iso-8859-1
date: Wed, 12 Dec 2018 08:30:11 GMT
location: https://nextcloud.batcave.org/remote.php/dav/
```

Note that this .htaccess can be overwritten by NextCloud, and you may have to reapply the change in future. I've created an [issue requesting a permanent fix](https://github.com/nextcloud/docker/issues/577).

[^1]: Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912).
[^2]: I'm [not the first user](https://github.com/nextcloud/docker/issues/528) to stumble across the service discovery bug with reverse proxies.

--8<-- "recipe-footer.md"

170
docs/recipes/nightscout.md
Normal file
@@ -0,0 +1,170 @@
---
title: Setup nightscout in Docker
description: CGM data with an API, for diabetic quality-of-life improvements
---

# Nightscout

Nightscout is "*...an open source, DIY project that allows real time access to a CGM data via personal website, smartwatch viewers, or apps and widgets available for smartphones*"

!!! question "Yeah, but what's a CGM?"
    A CGM is a "continuous glucose monitor" :drop_of_blood: - If you have a blood-sugar-related disease (*i.e. diabetes*), you might wear a CGM in order to retrieve blood-glucose level readings, to inform your treatment.

    NightScout frees you from the CGM supplier's limited and proprietary app, and unlocks advanced charting, alarming, and sharing features :muscle:

{ loading=lazy }

[Nightscout](https://nightscout.github.io/) is _the_ standard for open-source CGM data collection, used by diabetics and those who love them, to store, share, and retrieve blood-glucose data, in order to live healthier and happier lives. It's used as the data sharing/syncing backend for all the popular smartphone apps, including [xDrip+](https://github.com/NightscoutFoundation/xDrip) (*Android*) and [Spike App](https://spike-app.com/) (*iOS*).

Most NightScout users will deploy to Heroku, using MongoDB Atlas, which is a [well-documented solution](https://nightscout.github.io/nightscout/new_user/). If you wanted to run NightScout on your own Docker stack though, then this recipe is for you!

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we create a directory to hold Nightscout's database, as well as database backups:

```bash
mkdir -p /var/data/runtime/nightscout/database   # excluded from automated backups
mkdir -p /var/data/nightscout/database-dump      # included in automated backups
```

### Create env file

NightScout is configured entirely using environment variables, so create something like this as `/var/data/config/nightscout/nightscout.env`:

!!! warning
    Your variables may vary significantly from what's illustrated below, and it's best to read up and understand exactly what each option does.

```bash
# Customize these per https://github.com/nightscout/cgm-remote-monitor/blob/master/README.md#environment

# Required
MONGODB_URI=mongodb://db
API_SECRET=myverysecritsecrit
DISPLAY_UNITS=mmol # set to "mg/dl" if you're using US-style measurements
BASE_URL=https://nightscout.example.com

# We rely on traefik to handle SSL, so don't bother using it in nightscout
INSECURE_USE_HTTP=true

# Listen on all interfaces
HOSTNAME=::

# # Features
ENABLE=careportal basal dbsize rawbg iob maker bridge cob bwp cage iage sage boluscalc pushover treatmentnotify mmconnect loop pump profile food openaps bage alexa override cors
# DISABLE=
AUTH_DEFAULT_ROLES=denied
THEME=colors

# IMPORT_CONFIG=
# TREATMENTS_AUTH=

# # Alarms
# ALARM_TYPES=
# BG_HIGH
# BG_TARGET_TOP
# BG_TARGET_BOTTOM
# BG_LOW
# ALARM_URGENT_HIGH
# ALARM_URGENT_HIGH_MINS
# ALARM_HIGH
# ALARM_HIGH_MINS
# ALARM_LOW
# ALARM_LOW_MINS
# ALARM_URGENT_LOW
# ALARM_URGENT_LOW_MINS
# ALARM_URGENT_MINS
# ALARM_WARN_MINS

# # Core
# MONGO_TREATMENTS_COLLECTION=treatments

# Mongodb specific database dump details
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
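
Once deployed, scripts and apps can talk to NightScout's REST API by passing a hash of `API_SECRET` in an `api-secret` header (*a sketch, assuming the SHA-1-hashed-secret scheme described in the NightScout README; the secret and hostname match the illustrative values above*):

```shell
# Derive the api-secret header value: the SHA-1 hex digest of API_SECRET
API_SECRET="myverysecritsecrit"
HASH=$(printf '%s' "$API_SECRET" | sha1sum | awk '{print $1}')
echo "$HASH"

# Example query for recent glucose entries (hostname is illustrative):
#   curl -H "api-secret: $HASH" https://nightscout.example.com/api/v1/entries.json
```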

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

!!! tip
    I'm keen to share any and all resources I have with diabetics or loved-ones of diabetics (*of which I am one*). [Contact me](https://www.funkypenguin.co.nz/contact/) directly for details!

```yaml
version: '3.2'

services:

  app:
    image: nightscout/cgm-remote-monitor
    networks:
      - internal
      - traefik_public
    env_file: /var/data/config/nightscout/nightscout.env
    deploy:
      labels:
        # traefik
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:nightscout.example.com
        - traefik.port=1337

        # traefikv2
        - "traefik.http.routers.nightscout.rule=Host(`nightscout.example.com`)"
        - "traefik.http.routers.nightscout.entrypoints=https"
        - "traefik.http.services.nightscout.loadbalancer.server.port=1337"

  db:
    image: mongo:latest
    networks:
      - internal
    volumes:
      - /var/data/runtime/nightscout/database:/data/db

  db-backup:
    image: mongo:latest
    env_file: /var/data/config/nightscout/nightscout.env
    volumes:
      - /var/data/nightscout/database-dump:/dump
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mongodump -h db --gzip --archive=/dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.mongo.gz
        ls -tr /dump/dump_*.mongo.gz | head -n -"$$BACKUP_NUM_KEEP" | xargs -r rm
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.4.0/24
```

## Serving

### Launch nightscout!

Launch the nightscout stack by running `docker stack deploy nightscout -c <path-to-docker-compose.yml>`

[^1]: Most of the time, you'll need an app which syncs to Nightscout, and these apps won't support OIDC auth, so this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). Instead, NightScout is secured entirely with your `API_SECRET` above (*although it is possible to add more users once you're an admin*)

--8<-- "recipe-footer.md"

427
docs/recipes/openldap.md
Normal file
@@ -0,0 +1,427 @@
---
title: Run OpenLDAP in Docker
description: How to run an OpenLDAP server in Docker Swarm, with LDAP Account Manager. Authenticate like it's 1990!
---

# OpenLDAP

LDAP is probably the most ubiquitous authentication backend, before the current era of "[stupid social sign-ons](https://www.usatoday.com/story/tech/columnist/2018/10/23/how-separate-your-social-networks-your-regular-sites/1687763002/)". Many of the recipes featured in the cookbook (_[NextCloud](/recipes/nextcloud/), [Kanboard](/recipes/kanboard/), [Gitlab](/recipes/gitlab/), etc_) offer LDAP integration.

## Big deal, who cares?

If you're the only user of your tools, it probably doesn't bother you _too_ much to set up new user accounts for every tool. As soon as you start sharing tools with collaborators (_think 10 staff using NextCloud_), you suddenly feel the pain of managing a growing collection of local user accounts per-service.

Enter OpenLDAP - the most crusty, PITA, fiddly platform to set up (_yes, I'm a little bitter, [dynamic configuration backend](https://linux.die.net/man/5/slapd-config)!_), but hugely useful for one job - a Lightweight Protocol for managing a Directory used for Access (_see what I did [there](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol)?_)

The nice thing about OpenLDAP is, like MySQL, once you've set up the server, you probably never have to interact directly with it. There are many tools which will let you interact with your LDAP database via a(n ugly) UI.

This recipe combines the raw power of OpenLDAP with the flexibility and featureset of LDAP Account Manager.



## What's the takeaway?

What you'll end up with is a directory structure which will allow integration with popular tools (_[NextCloud](/recipes/nextcloud/), [Kanboard](/recipes/kanboard/), [Gitlab](/recipes/gitlab/), etc_), as well as with Keycloak (_an upcoming recipe_), for **true** SSO.

--8<-- "recipe-standard-ingredients.md"
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/openldap:
|
||||
|
||||
```bash
|
||||
mkdir /var/data/openldap/openldap
|
||||
mkdir /var/data/runtime/openldap/
|
||||
```
|
||||
|
||||
!!! note "Why 2 directories?"
|
||||
For rationale, see my [data layout explanation](/reference/data_layout/)
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create /var/data/openldap/openldap.env, and populate with the following variables, customized for your own domain structure. Take care with LDAP_DOMAIN, this is core to your directory structure, and can't easily be changed later.
|
||||
|
||||
```bash
|
||||
LDAP_DOMAIN=batcave.gotham
|
||||
LDAP_ORGANISATION=BatCave Inc
|
||||
LDAP_ADMIN_PASSWORD=supermansucks
|
||||
LDAP_TLS=false
|
||||
```
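Each dotted component of `LDAP_DOMAIN` becomes one `dc=` element of your directory's base DN, which is why the domain can't easily be changed later. A quick sketch of that mapping (`domain_to_base_dn` is a hypothetical helper, Python 3):

```python
def domain_to_base_dn(domain: str) -> str:
    """Convert a dotted DNS domain into an LDAP base DN,
    e.g. 'batcave.gotham' -> 'dc=batcave,dc=gotham'."""
    return ",".join(f"dc={part}" for part in domain.split("."))

print(domain_to_base_dn("batcave.gotham"))  # dc=batcave,dc=gotham
```

You'll see this derived DN (`dc=batcave,dc=gotham`) reappear throughout the LAM config files below.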
### Create config.cfg

The Dockerized version of LDAP Account Manager is a little fiddly. In order to maintain a config file which persists across container restarts, we need to present the container with a copy of /var/www/html/config/lam.conf, tweaked for our own requirements.

Create ```/var/data/openldap/lam/config/config.cfg``` as per the following example:

???+ note "Much scroll, very text. Click here to collapse it for better readability"

    ```bash
    # password to add/delete/rename configuration profiles (default: lam)
    password: {SSHA}D6AaX93kPmck9wAxNlq3GF93S7A= R7gkjQ==

    # default profile, without ".conf"
    default: batcave

    # log level
    logLevel: 4

    # log destination
    logDestination: SYSLOG

    # session timeout in minutes
    sessionTimeout: 30

    # list of hosts which may access LAM
    allowedHosts:

    # list of hosts which may access LAM Pro self service
    allowedHostsSelfService:

    # encrypt session data
    encryptSession: true

    # Password: minimum password length
    passwordMinLength: 0

    # Password: minimum uppercase characters
    passwordMinUpper: 0

    # Password: minimum lowercase characters
    passwordMinLower: 0

    # Password: minimum numeric characters
    passwordMinNumeric: 0

    # Password: minimum symbolic characters
    passwordMinSymbol: 0

    # Password: minimum character classes (0-4)
    passwordMinClasses: 0

    # Password: checked rules
    checkedRulesCount: -1

    # Password: must not contain part of user name
    passwordMustNotContain3Chars: false

    # Password: must not contain user name
    passwordMustNotContainUser: false

    # Email format (default/unix)
    mailEOL: default

    # PHP error reporting (default/system)
    errorReporting: default

    # License
    license:
    ```
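The `password:` value above is LAM's salted-SHA1 (`{SSHA}`) hash of the default password `lam`, stored as `{SSHA}<base64 digest> <base64 salt>`. If you'd rather not keep the default, here's a minimal sketch for generating and verifying your own value (hypothetical helper names, and assuming LAM's scheme of SHA-1 over password-plus-salt; Python 3):

```python
import base64
import hashlib
import os

def lam_ssha(password, salt=None):
    """Build a LAM-style '{SSHA}<b64 digest> <b64 salt>' string."""
    salt = salt or os.urandom(4)
    digest = hashlib.sha1(password.encode() + salt).digest()
    return "{SSHA}" + base64.b64encode(digest).decode() + " " + base64.b64encode(salt).decode()

def lam_ssha_verify(password, stored):
    """Check a candidate password against a stored LAM SSHA string."""
    b64_digest, b64_salt = stored[len("{SSHA}"):].split(" ")
    salt = base64.b64decode(b64_salt)
    return lam_ssha(password, salt) == stored

print(lam_ssha("supermansucks"))
```

Paste the printed value over the `password:` line (and the profile's `Passwd:` line, below) as needed.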
### Create <profile\>.cfg

While config.cfg (_above_) defines application-level configuration, <profile\>.cfg is used to configure "domain-specific" configuration. You probably only need a single profile, but LAM could theoretically be used to administer several totally unrelated LDAP servers, ergo the concept of "profiles".

Create your profile (_you chose a default profile in config.cfg above, remember?_) by creating ```/var/data/openldap/lam/config/<profile>.conf```, as per the following example:

???+ note "Much scroll, very text. Click here to collapse it for better readability"

    ```bash
    # LDAP Account Manager configuration
    #
    # Please do not modify this file manually. The configuration can be done completely by the LAM GUI.
    #
    ###################################################################################################

    # server address (e.g. ldap://localhost:389 or ldaps://localhost:636)
    ServerURL: ldap://openldap:389

    # list of users who are allowed to use LDAP Account Manager
    # names have to be separated by semicolons
    # e.g. admins: cn=admin,dc=yourdomain,dc=org;cn=root,dc=yourdomain,dc=org
    Admins: cn=admin,dc=batcave,dc=gotham

    # password to change these preferences via webfrontend (default: lam)
    Passwd: {SSHA}h39N9+gg/Qf1K/986VkKrjWlkcI= S/IAUQ==

    # suffix of tree view
    # e.g. dc=yourdomain,dc=org
    treesuffix: dc=batcave,dc=gotham

    # default language (a line from config/language)
    defaultLanguage: en_GB.utf8

    # Path to external Script
    scriptPath:

    # Server of external Script
    scriptServer:

    # Access rights for home directories
    scriptRights: 750

    # Number of minutes LAM caches LDAP searches.
    cachetimeout: 5

    # LDAP search limit.
    searchLimit: 0

    # Module settings
    modules: posixAccount_user_minUID: 10000
    modules: posixAccount_user_maxUID: 30000
    modules: posixAccount_host_minMachine: 50000
    modules: posixAccount_host_maxMachine: 60000
    modules: posixGroup_group_minGID: 10000
    modules: posixGroup_group_maxGID: 20000
    modules: posixGroup_pwdHash: SSHA
    modules: posixAccount_pwdHash: SSHA

    # List of active account types.
    activeTypes: user,group

    types: suffix_user: ou=People,dc=batcave,dc=gotham
    types: attr_user: #uid;#givenName;#sn;#uidNumber;#gidNumber
    types: modules_user: inetOrgPerson,posixAccount,shadowAccount

    types: suffix_group: ou=Groups,dc=batcave,dc=gotham
    types: attr_group: #cn;#gidNumber;#memberUID;#description
    types: modules_group: posixGroup

    # Password mail subject
    lamProMailSubject: Your password was reset

    # Password mail text
    lamProMailText: Dear @@givenName@@ @@sn@@,+::++::+your password was reset to: @@newPassword@@+::++::++::+Best regards+::++::+deskside support+::+

    serverDisplayName:

    # enable TLS encryption
    useTLS: no

    # follow referrals
    followReferrals: false

    # paged results
    pagedResults: false

    referentialIntegrityOverlay: false

    # time zone
    timeZone: Europe/London

    scriptUserName:

    scriptSSHKey:

    scriptSSHKeyPassword:

    # Access level for this profile.
    accessLevel: 100

    # Login method.
    loginMethod: list

    # Search suffix for LAM login.
    loginSearchSuffix: dc=batcave,dc=gotham

    # Search filter for LAM login.
    loginSearchFilter: uid=%USER%

    # Bind DN for login search.
    loginSearchDN:

    # Bind password for login search.
    loginSearchPassword:

    # HTTP authentication for LAM login.
    httpAuthentication: false

    # Password mail from
    lamProMailFrom:

    # Password mail reply-to
    lamProMailReplyTo:

    # Password mail is HTML
    lamProMailIsHTML: false

    # Allow alternate address
    lamProMailAllowAlternateAddress: true

    jobsBindPassword:

    jobsBindUser:

    jobsDatabase:

    jobsDBHost:

    jobsDBPort:

    jobsDBUser:

    jobsDBPassword:

    jobsDBName:

    jobToken: 190339140545

    pwdResetAllowSpecificPassword: true

    pwdResetAllowScreenPassword: true

    pwdResetForcePasswordChange: true

    pwdResetDefaultPasswordOutput: 2

    twoFactorAuthentication: none

    twoFactorAuthenticationURL: https://localhost

    twoFactorAuthenticationInsecure:

    twoFactorAuthenticationLabel:

    twoFactorAuthenticationOptional:

    twoFactorAuthenticationCaption:

    tools: tool_hide_toolOUEditor: false
    tools: tool_hide_toolProfileEditor: false
    tools: tool_hide_toolSchemaBrowser: false
    tools: tool_hide_toolServerInformation: false
    tools: tool_hide_toolTests: false
    tools: tool_hide_toolPDFEditor: false
    tools: tool_hide_toolFileUpload: false
    tools: tool_hide_toolMultiEdit: false
    ```
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this, at ```/var/data/config/openldap/openldap.yml```:

--8<-- "premix-cta.md"

```yaml
version: '3'

services:
  openldap:
    image: osixia/openldap
    env_file: /var/data/config/openldap/openldap.env
    networks:
      - traefik_public
      - auth_internal
    volumes:
      - /var/data/runtime/openldap/:/var/lib/ldap
      - /var/data/openldap/openldap/:/etc/ldap/slapd.d

  lam:
    image: jacksgt/ldap-account-manager
    networks:
      - auth_internal
      - traefik_public
    volumes:
      - /var/data/openldap/lam/config/config.cfg:/var/www/html/config/config.cfg
      - /var/data/openldap/lam/config/batcave.conf:/var/www/html/config/batcave.conf
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:iam.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.iam.rule=Host(`iam.example.com`)"
        - "traefik.http.services.iam.loadbalancer.server.port=8080"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.iam.middlewares=forward-auth@file"

networks:
  # Used to expose lam-proxy to external access, and openldap to keycloak
  traefik_public:
    external: true

  # Used to expose openldap to other apps which want to talk to LDAP, including LAM
  auth_internal:
    external: true
```
!!! warning
    **Normally**, we set unique static subnets for every stack you deploy, and put the non-public facing components (like databases) in a dedicated <stack\>_internal network. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.

    However, you're likely to want to use OpenLDAP with Keycloak, whose JBOSS startup script assumes a single interface, and will crash in a ball of 🔥 if you try to assign multiple interfaces to the container.

    Since we're going to want Keycloak to be able to talk to OpenLDAP, we have no choice but to leave the OpenLDAP container on the "traefik_public" network. We can, however, create **another** overlay network (_auth_internal, see below_), add it to the openldap container, and use it to provide OpenLDAP access to our other stacks.

Create **another** stack config file (```/var/data/config/openldap/auth.yml```) containing just the auth_internal network, and a dummy container:

```yaml
version: "3.2"

# What is this?
#
# This stack exists solely to deploy the auth_internal overlay network, so that
# other stacks (including keycloak and openldap) can attach to it

services:
  scratch:
    image: scratch
    deploy:
      replicas: 0
    networks:
      - internal

networks:
  internal:
    driver: overlay
    attachable: true
    ipam:
      config:
        - subnet: 172.16.39.0/24
```
## Serving

### Launch OpenLDAP stack

Create the auth_internal overlay network by running ```docker stack deploy auth -c /var/data/config/openldap/auth.yml```, then launch the OpenLDAP stack by running ```docker stack deploy openldap -c /var/data/config/openldap/openldap.yml```

Log into your new LAM instance at https://**YOUR-FQDN**.

On first login, you'll be prompted to create the "_ou=People_" and "_ou=Group_" elements. Proceed to create these.

You've now set up your OpenLDAP directory structure and your administration interface, and hopefully won't have to interact with the "special" LDAP Account Manager interface much again!

Create your users using the "**New User**" button.

[^1]: [The Keycloak](/recipes/keycloak/authenticate-against-openldap/) recipe illustrates how to integrate Keycloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features.

--8<-- "recipe-footer.md"
103
docs/recipes/owntracks.md
Normal file
@@ -0,0 +1,103 @@

---
title: Run OwnTracks under Docker
---

# OwnTracks

[OwnTracks](https://owntracks.org/) allows you to keep track of your own location. You can build your private location diary or share it with your family and friends. OwnTracks is open-source and uses open protocols for communication, so you can be sure your data stays secure and private.

{ loading=lazy }

Using a smartphone app, OwnTracks allows you to collect and analyse your own location data **without** sharing this data with a cloud provider (_i.e. Apple, Google_). Potential use cases are:

* Sharing family locations without relying on Apple Find My Friends
* Performing automated actions in [HomeAssistant](/recipes/homeassistant/) when you arrive/leave home

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need a directory to store OwnTracks' data, so create ```/var/data/owntracks```:

```bash
mkdir /var/data/owntracks
```

### Prepare environment

Create owntracks.env, and populate it with the following variables:

```bash
OTR_USER=recorder
OTR_PASS=yourpassword
OTR_HOST=owntracks.example.com
```
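The OwnTracks apps publish location updates as small JSON payloads over MQTT (the 1883/8883 ports exposed in the stack below). As a rough illustration of what the recorder receives (field names per the OwnTracks JSON format; the coordinate values here are made up):

```python
import json
import time

# Minimal OwnTracks 'location' payload, as published by the mobile apps
# to a topic like 'owntracks/<user>/<device>'.
payload = {
    "_type": "location",      # message type
    "lat": -36.8485,          # latitude, degrees
    "lon": 174.7633,          # longitude, degrees
    "tst": int(time.time()),  # UNIX timestamp of the fix
    "acc": 10,                # reported accuracy, metres
}

message = json.dumps(payload)
print(message)
```

Anything that can publish this JSON to your broker (scripts, other tools) will show up in the recorder alongside the official apps.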
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.0"

services:
  owntracks-app:
    image: funkypenguin/owntracks
    env_file: /var/data/config/owntracks/owntracks.env
    volumes:
      - /var/data/owntracks:/owntracks
    networks:
      - internal
      - traefik_public
    ports:
      - 1883:1883
      - 8883:8883
      - 8083:8083
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:owntracks-app.example.com
        - traefik.port=8083

        # traefikv2
        - "traefik.http.routers.owntracks.rule=Host(`owntracks-app.example.com`)"
        - "traefik.http.services.owntracks.loadbalancer.server.port=8083"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.owntracks.middlewares=forward-auth@file"

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.15.0/24
```
--8<-- "reference-networks.md"

## Serving

### Launch OwnTracks stack

Launch the OwnTracks stack by running ```docker stack deploy owntracks -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN**, with the username (`OTR_USER`) and password (`OTR_PASS`) you specified in owntracks.env.

[^1]: If you wanted to expose the OwnTracks UI directly, you could remove the traefik-forward-auth from the design.
[^2]: I'm using my own image rather than owntracks/recorderd, because of a [potentially swarm-breaking bug](https://github.com/owntracks/recorderd/issues/14) I found in the official container. If this gets resolved (_or if I was mistaken_) I'll update the recipe accordingly.
[^3]: By default, you'll get a fully accessible, unprotected MQTT broker. This may not be suitable for public exposure, so you'll want to look into securing mosquitto with TLS and ACLs.

--8<-- "recipe-footer.md"
182
docs/recipes/paperless-ng.md
Normal file
@@ -0,0 +1,182 @@

---
title: Run paperless-ngx under Docker
description: Easily index, search, view and archive all of your scanned dead-tree documents with Paperless NGX, under Docker, now using the linuxserver image since the fork from paperless-ng to paperless-ngx!
---

# Paperless NGX

Paper is a nightmare. Environmental issues aside, there's no excuse for it in the 21st century. It takes up space, collects dust, doesn't support any form of a search feature, indexing is tedious, it's heavy and prone to damage & loss. [^1] Paperless NGX will OCR, index, and store data about your documents so they are easy to search and view, unlike that hulking metal file cabinet you have in your office.

{ loading=lazy }

!!! question "What's this fork 🍴 thing about, and is it Paperless, Paperless-NG, or Paperless-NGX?"
    It's now.. Paperless-NGX. Paperless-ngx is a fork of paperless-ng, which itself was a fork of paperless. As I understand it, the original "forker" of paperless to paperless-ng has "gone dark", and [stopped communicating](https://github.com/jonaswinkler/paperless-ng/issues/1599), so while all are hopeful that he's OK and just busy/distracted, the [community formed paperless-ngx](https://github.com/jonaswinkler/paperless-ng/issues/1632) to carry on development work under a shared responsibility model. To save some typing though, we'll just call it "Paperless", although you'll note below that we're using the linuxserver paperless-ngx image. (Also, if you use the automated tooling in the Premix Repo, Ansible *really* doesn't like the hyphen!)

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need a folder to store a docker-compose configuration file and an associated environment file. If you're following my filesystem layout, create `/var/data/config/paperless` (*for the config*). We'll also need to create `/var/data/paperless` and a few subdirectories (*for the metadata*). Lastly, we need a directory for the database backups to reside in as well.

```bash
mkdir /var/data/config/paperless
mkdir /var/data/paperless
mkdir /var/data/paperless/consume
mkdir /var/data/paperless/data
mkdir /var/data/paperless/export
mkdir /var/data/paperless/media
mkdir /var/data/runtime/paperless/pgdata
mkdir /var/data/paperless/database-dump
```
### Create environment

To stay consistent with the other recipes, we'll create a file to store environment variables in. There's more than one service in this stack, but we'll only create one environment file, which will be used by the web server (more on this later).

```bash
cat << EOF > /var/data/config/paperless/paperless.env
PAPERLESS_TIME_ZONE=<timezone>
PAPERLESS_ADMIN_USER=<admin_user>
PAPERLESS_ADMIN_PASSWORD=<admin_password>
PAPERLESS_ADMIN_MAIL=<admin_email>
PAPERLESS_REDIS=redis://broker:6379
PAPERLESS_DBHOST=db
PAPERLESS_TIKA_ENABLED=1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT=http://tika:9998
EOF
```

You'll need to replace some of the text in the snippet above:

* `<timezone>` - Replace with an entry from [the timezone database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) (eg: America/New_York)
* `<admin_user>` - Username of the superuser account that will be created on first run. Without this and the `<admin_password>` you won't be able to log into Paperless
* `<admin_password>` - Password of the superuser account above.
* `<admin_email>` - Email address of the superuser account above.
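Environment files are an easy place for silent typos - a line like `KEY:value` instead of `KEY=value` is simply ignored at runtime. A quick sanity-check sketch (`check_env_lines` is a hypothetical helper, Python 3):

```python
import re

def check_env_lines(text):
    """Return (line number, line) for any line that doesn't look like
    KEY=VALUE; blank lines and comments are allowed."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank lines and comments are fine
        if not re.match(r"^[A-Za-z_][A-Za-z0-9_]*=", stripped):
            bad.append((lineno, line))
    return bad

sample = "PAPERLESS_DBHOST=db\nPAPERLESS_TIME_ZONE:Pacific/Auckland\n"
print(check_env_lines(sample))  # flags line 2, which uses ':' instead of '='
```

Run something like this over paperless.env before deploying, and you'll catch malformed entries before Paperless quietly ignores them.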
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like the following example:

--8<-- "premix-cta.md"

```yaml
version: "3.2"

services:

  broker:
    image: redis:6.0
    networks:
      - internal

  webserver:
    image: linuxserver/paperless-ngx
    env_file: paperless.env
    volumes:
      - /var/data/paperless/data:/usr/src/paperless/data
      - /var/data/paperless/media:/usr/src/paperless/media
      - /var/data/paperless/export:/usr/src/paperless/export
      - /var/data/paperless/consume:/usr/src/paperless/consume
    deploy:
      replicas: 1
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:paperless.example.com
        - traefik.port=8000
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true

        # traefikv2
        - "traefik.http.routers.paperless.rule=Host(`paperless.example.com`)"
        - "traefik.http.routers.paperless.entrypoints=https"
        - "traefik.http.services.paperless.loadbalancer.server.port=8000"
        - "traefik.http.routers.paperless.middlewares=forward-auth"
    networks:
      - internal
      - traefik_public

  gotenberg:
    image: thecodingmachine/gotenberg
    environment:
      DISABLE_GOOGLE_CHROME: 1
    networks:
      - internal

  tika:
    image: apache/tika
    networks:
      - internal

  db:
    image: postgres:13
    volumes:
      - /var/data/runtime/paperless/pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: paperless
      POSTGRES_USER: paperless
      POSTGRES_PASSWORD: paperless
    networks:
      - internal

  db-backup:
    image: postgres:13
    volumes:
      - /var/data/paperless/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    environment:
      PGHOST: db
      PGDATABASE: paperless
      PGUSER: paperless
      PGPASSWORD: paperless
      BACKUP_NUM_KEEP: 7
      BACKUP_FREQUENCY: 1d
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.58.0/24
```
You'll notice that there are several items under "services" in this stack. Let's take a look at what each one does:

* broker - Redis server that other services use to share data
* webserver - The UI that you will use to add and view documents, edit document metadata, and configure the application settings.
* gotenberg - Tool that facilitates converting MS Office documents, HTML, Markdown and other document types to PDF
* tika - The OCR engine that extracts text from image-only documents
* db - PostgreSQL database engine to store metadata for all the documents. [^2]
* db-backup - Service to dump the PostgreSQL database to a backup file on disk once per day
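The db-backup retention one-liner in the stack above (`(ls -t …|head -n $$BACKUP_NUM_KEEP;ls …)|sort|uniq -u|xargs rm`) keeps only the newest `BACKUP_NUM_KEEP` dumps and deletes the rest. The same logic, sketched in Python for clarity (the filenames here are made up):

```python
def dumps_to_delete(dumps_newest_first, num_keep):
    """Given dump filenames sorted newest-first (like `ls -t`),
    return the ones the backup loop would delete."""
    keep = set(dumps_newest_first[:num_keep])
    return [d for d in dumps_newest_first if d not in keep]

dumps = ["dump_08.psql", "dump_07.psql", "dump_06.psql", "dump_05.psql"]
print(dumps_to_delete(dumps, 2))  # ['dump_06.psql', 'dump_05.psql']
```

So with `BACKUP_NUM_KEEP: 7` and a daily `BACKUP_FREQUENCY`, you retain a rolling week of dumps on disk.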
## Serving

Launch the paperless stack by running ```docker stack deploy paperless -c <path-to-docker-compose.yml>```. You can then log in with the username and password that you specified in the environment variables file above.

Head over to the [Paperless documentation](https://paperless-ng.readthedocs.io/en/latest) to see how to configure and use the application, then revel in the fact you can now search all your scanned documents to your heart's content.

[^1]: Taken directly from the [Paperless documentation](https://paperless-ng.readthedocs.io/en/latest)
[^2]: This particular stack configuration was chosen because it includes a "real" database in PostgreSQL versus the more lightweight SQLite database. After all, if you go to the trouble of scanning and importing a pile of documents, you want to know the database is robust enough to keep your data safe.

--8<-- "recipe-footer.md"
184
docs/recipes/photoprism.md
Normal file
@@ -0,0 +1,184 @@

---
title: Run Photoprism on Docker
description: ML-powered private photo hosting
---

# Photoprism on Docker

[Photoprism™](https://github.com/photoprism/photoprism) "is a server-based application for browsing, organizing and sharing your personal photo collection. It makes use of the latest technologies to automatically tag and find pictures without getting in your way. Say goodbye to solutions that force you to upload your visual memories to the cloud."

{ loading=lazy }

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we need a folder in which to map the photoprism config file:

```bash
mkdir /var/data/photoprism/config
```

We'll also need a location to store photoprism thumbnails. Since these can be recreated at any time (_although, depending on your collection size, it could take a while_), we store them in a "non-backed-up" folder:

```bash
mkdir /var/data/runtime/photoprism/cache
```

We will need to map three folders from our system / data:

1. originals - the folder where our original photo collection is stored (_photoprism doesn't modify any original file; it only adds sidecar files_).
2. import - the folder from which photoprism will pick up new photos to be added to the collection
3. export - the folder to which photoprism will export photos.

In order to be able to import/export files from/to the originals folder, make sure that the user running the photoprism instance has read/write access to those folders.

Photoprism ships with its own embedded database, but if your collection is big (_10K photos or more_), performance is better with an external database instance. We'll use MariaDB, so we need folders for running and backing up the database:

```bash
mkdir /var/data/runtime/photoprism/db
mkdir /var/data/photoprism/database-dump
```
### Prepare environment

Create ```photoprism.env```, and populate it with the following variables, taking care to change the passwords:

```bash
PHOTOPRISM_URL=https://photoprism.example.com
PHOTOPRISM_TITLE=PhotoPrism
PHOTOPRISM_SUBTITLE=Browse your life
PHOTOPRISM_DESCRIPTION=Personal Photo Management powered by Go and Google TensorFlow. Free and open-source.
PHOTOPRISM_AUTHOR=Anonymous
PHOTOPRISM_TWITTER=@rowseyourlife
PHOTOPRISM_UPLOAD_NSFW=true
PHOTOPRISM_HIDE_NSFW=false
PHOTOPRISM_EXPERIMENTAL=false
PHOTOPRISM_DEBUG=false
PHOTOPRISM_READONLY=false
PHOTOPRISM_PUBLIC=false
PHOTOPRISM_ADMIN_PASSWORD=photoprism #change
PHOTOPRISM_WEBDAV_PASSWORD=photoprism #change
PHOTOPRISM_TIDB_HOST=0.0.0.0
PHOTOPRISM_TIDB_PORT=2343
PHOTOPRISM_TIDB_PASSWORD=photoprism
PHOTOPRISM_DATABASE_DRIVER=mysql
PHOTOPRISM_DATABASE_DSN=photoprism:photoprism@tcp(db:3306)/photoprism?parseTime=true
PHOTOPRISM_SIDECAR_HIDDEN=true
PHOTOPRISM_THUMB_FILTER=lanczos
PHOTOPRISM_THUMB_UNCACHED=false
PHOTOPRISM_THUMB_SIZE=2048
MYSQL_ROOT_PASSWORD=<set to something secure>
MYSQL_USER=photoprism
MYSQL_PASSWORD=photoprism
MYSQL_DATABASE=photoprism
```
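Note that `PHOTOPRISM_DATABASE_DSN` follows the Go MySQL driver's DSN format, `user:password@tcp(host:port)/database?params` - so if you change `MYSQL_USER` or `MYSQL_PASSWORD`, the DSN must change to match. A small sketch of assembling it (`mysql_dsn` is a hypothetical helper, Python 3):

```python
def mysql_dsn(user, password, host, port, database, params="parseTime=true"):
    """Assemble a Go-style MySQL DSN, as expected by PHOTOPRISM_DATABASE_DSN."""
    return f"{user}:{password}@tcp({host}:{port})/{database}?{params}"

print(mysql_dsn("photoprism", "photoprism", "db", 3306, "photoprism"))
# photoprism:photoprism@tcp(db:3306)/photoprism?parseTime=true
```

Here `db` is the service name of the MariaDB container in the stack below, so the hostname resolves on the stack's internal network.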
Now create a **separate** photoprism-db-backup.env file, to capture the environment variables necessary to perform the backup. (_If the same variables are shared with the mariadb container, they [cause issues](https://forum.funkypenguin.co.nz/t/nextcloud-funky-penguins-geek-cookbook/254/3?u=funkypenguin) with database access_)

````bash
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<set to something secure, same as MYSQL_ROOT_PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
````
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3.3'
|
||||
|
||||
services:
|
||||
app:
|
||||
image: photoprism/photoprism:latest
|
||||
env_file: /var/data/config/photoprism/photoprism.env
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
volumes:
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
- /path/to/originals:/photoprism/originals
|
||||
- /path/to/import:/photoprism/import
|
||||
- /path/to/export:/photoprism/export
|
||||
- /var/data/runtime/photoprism/cache:/photoprism/cache
|
||||
- /var/data/photoprism/config:/photoprism/config
|
||||
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:photoprism.example.com
|
||||
- traefik.port=2342
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.photoprism.rule=Host(`photoprism.example.com`)"
|
||||
- "traefik.http.services.photoprism.loadbalancer.server.port=2342"
|
||||
- "traefik.enable=true"
|
||||
|
||||
|
||||
db:
|
||||
image: mariadb:10.5
|
||||
env_file: /var/data/config/photoprism/photoprism.env
|
||||
command: |
|
||||
--character-set-server=utf8mb4
|
||||
--collation-server=utf8mb4_unicode_ci
|
||||
--max-connections=1024
|
||||
networks:
|
||||
- internal
|
||||
volumes:
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
- /var/data/runtime/photoprism/db:/var/lib/mysql
|
||||
|
||||
db-backup:
|
||||
image: mariadb:10.5
|
||||
env_file: /var/data/config/photoprism/photoprism-db-backup.env
|
||||
volumes:
|
||||
- /var/data/photoprism/database-dump:/dump
|
||||
- /var/data/runtime/photoprism/db:/var/lib/mysql
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
entrypoint: |
|
||||
bash -c 'bash -s <<EOF
|
||||
trap "break;exit" SIGHUP SIGINT SIGTERM
|
||||
sleep 2m
|
||||
while /bin/true; do
|
||||
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
|
||||
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
|
||||
sleep $$BACKUP_FREQUENCY
|
||||
done
|
||||
EOF'
|
||||
networks:
|
||||
- internal
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.90.0/24
|
||||
```
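The `db-backup` service's retention one-liner is dense; here's a standalone sketch (the path and `BACKUP_NUM_KEEP` value are illustrative) of what it does. The newest N dumps appear twice in the combined listing, so `uniq -u` leaves only the older ones, which are then deleted:

```bash
# Simulate 5 dated dump files, then keep only the 3 newest (illustrative demo)
mkdir -p /tmp/dump-demo && cd /tmp/dump-demo
for i in 1 2 3 4 5; do touch -d "-$i days" "dump_$i.sql.gz"; done
BACKUP_NUM_KEEP=3
# newest N files are listed twice; `uniq -u` keeps single occurrences (the old ones)
(ls -t dump*.sql.gz | head -n $BACKUP_NUM_KEEP; ls dump*.sql.gz) | sort | uniq -u | xargs -r rm --
ls dump*.sql.gz   # only dump_1..3 remain
```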
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Photoprism stack
|
||||
|
||||
Launch the Photoprism stack by running ```docker stack deploy photoprism -c <path-to-docker-compose.yml>```
|
||||
|
||||
Browse to your new Photoprism instance at https://**YOUR-FQDN**, logging in with user "admin" and the password you specified in photoprism.env.
|
||||
|
||||
[^1]: Once it's running, you'll probably want to launch a scan to index your original photos. Go to *Library -> Index* and do a complete rescan (it will take a while, depending on your collection size)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
163
docs/recipes/phpipam.md
Normal file
@@ -0,0 +1,163 @@
|
||||
---
|
||||
title: Run phpIPAM under Docker
|
||||
description: Is that IP address in use? Do some DHCP / discovery with phpIPAM under Docker
|
||||
---
|
||||
|
||||
# phpIPAM
|
||||
|
||||
phpIPAM is an open-source web IP address management application (_IPAM_). Its goal is to provide light, modern and useful IP address management. It is a PHP-based application with a MySQL database backend, using jQuery libraries, AJAX, and HTML5/CSS3 features.
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
phpIPAM fulfils a non-sexy but important role - it helps you manage your IP address allocation.
|
||||
|
||||
## Why should you care about this?
|
||||
|
||||
You probably have a home network, with 20-30 IP addresses, for your family devices, your [IoT devices][homeassistant], your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server.
|
||||
|
||||
You could simply keep track of all devices with leases in your DHCP server, but what happens if your (_hypothetical?_) Ubiquiti EdgeRouter X crashes and burns due to lack of disk space, and you lose track of all your leases? Well, you have to start from scratch, is what!
|
||||
|
||||
And that [HomeAssistant](/recipes/homeassistant/) config, which you so carefully compiled, refers to each device by IP/DNS name, so you'd better make sure you recreate it consistently!
|
||||
|
||||
Enter phpIPAM. A tool designed to help home users as well as large organisations keep track of their IP (_and VLAN, VRF, and AS number_) allocations.
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need several directories to bind-mount into our containers, so create them as below:
|
||||
|
||||
```bash
|
||||
mkdir /var/data/phpipam/database-dump -p
|
||||
mkdir /var/data/runtime/phpipam -p
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create `/var/data/config/phpipam/phpipam.env`, and populate with the following variables:
|
||||
|
||||
```bash
|
||||
# Setup for github, phpipam application
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
|
||||
# For MariaDB/MySQL database
|
||||
MYSQL_ROOT_PASSWORD=imtoosecretformyshorts
|
||||
MYSQL_DATABASE=phpipam
|
||||
MYSQL_USER=phpipam
|
||||
MYSQL_PASSWORD=secret
|
||||
|
||||
# phpIPAM-specific variables
|
||||
MYSQL_ENV_MYSQL_USER=phpipam
|
||||
MYSQL_ENV_MYSQL_PASSWORD=secret
|
||||
MYSQL_ENV_MYSQL_DB=phpipam
|
||||
MYSQL_ENV_MYSQL_HOST=db
|
||||
|
||||
# For backup
|
||||
BACKUP_NUM_KEEP=7
|
||||
BACKUP_FREQUENCY=1d
|
||||
```
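If you're unsure what to use for `OAUTH2_PROXY_COOKIE_SECRET`, one approach is to generate a random value - a sketch, assuming your oauth2_proxy version accepts a 32-byte base64-encoded secret (check its documentation for the exact requirements):

```bash
# Generate a random 32-byte, base64-encoded cookie secret
OAUTH2_PROXY_COOKIE_SECRET=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
echo "OAUTH2_PROXY_COOKIE_SECRET=${OAUTH2_PROXY_COOKIE_SECRET}"
```

Paste the resulting value into `phpipam.env`.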
|
||||
|
||||
Additionally, create `/var/data/config/phpipam/phpipam-backup.env`, and populate with the following variables:
|
||||
|
||||
```bash
|
||||
# For MariaDB/MySQL database
|
||||
MYSQL_ROOT_PASSWORD=imtoosecretformyshorts
|
||||
MYSQL_DATABASE=phpipam
|
||||
MYSQL_USER=phpipam
|
||||
MYSQL_PASSWORD=secret
|
||||
|
||||
# For backup
|
||||
BACKUP_NUM_KEEP=7
|
||||
BACKUP_FREQUENCY=1d
|
||||
```
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
|
||||
db:
|
||||
image: mariadb:10
|
||||
env_file: /var/data/config/phpipam/phpipam.env
|
||||
networks:
|
||||
- internal
|
||||
volumes:
|
||||
- /var/data/runtime/phpipam/db:/var/lib/mysql
|
||||
|
||||
app:
|
||||
image: pierrecdn/phpipam
|
||||
env_file: /var/data/config/phpipam/phpipam.env
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- "traefik.enable=true"
|
||||
- "traefik.docker.network=traefik_public"
|
||||
|
||||
# traefikv1
|
||||
- "traefik.frontend.rule=Host:phpipam.example.com"
|
||||
- "traefik.port=80"
|
||||
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
|
||||
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
|
||||
- traefik.frontend.auth.forward.trustForwardHeader=true
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.phpipam.rule=Host(`phpipam.example.com`)"
|
||||
- "traefik.http.routers.phpipam.entrypoints=https"
|
||||
- "traefik.http.services.phpipam.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.api.middlewares=forward-auth"
|
||||
|
||||
db-backup:
|
||||
image: mariadb:10
|
||||
env_file: /var/data/config/phpipam/phpipam-backup.env
|
||||
volumes:
|
||||
- /var/data/phpipam/database-dump:/dump
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
entrypoint: |
|
||||
bash -c 'bash -s <<EOF
|
||||
trap "break;exit" SIGHUP SIGINT SIGTERM
|
||||
sleep 2m
|
||||
while /bin/true; do
|
||||
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
|
||||
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
|
||||
sleep $$BACKUP_FREQUENCY
|
||||
done
|
||||
EOF'
|
||||
networks:
|
||||
- internal
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.47.0/24
|
||||
```
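Note the doubled dollar signs (`$$BACKUP_NUM_KEEP`, `$$BACKUP_FREQUENCY`) in the `db-backup` entrypoint: in a compose file, `$$` escapes to a literal `$`, deferring variable expansion to the shell inside the container. A quick illustration of the difference:

```bash
BACKUP_FREQUENCY=1d
echo "sleep $BACKUP_FREQUENCY"    # expanded immediately -> sleep 1d
echo 'sleep $BACKUP_FREQUENCY'    # left literal, like $$ in a compose file
```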
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch phpIPAM stack
|
||||
|
||||
Launch the phpIPAM stack by running `docker stack deploy phpipam -c <path-to-docker-compose.yml>`
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen prompts to set your first user/password.
|
||||
|
||||
[^1]: If you wanted to expose the phpIPAM UI directly, you could remove the forward-auth middleware label from the app container :thumbsup:
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
103
docs/recipes/plex.md
Normal file
@@ -0,0 +1,103 @@
|
||||
---
|
||||
title: Run Plex in Docker
|
||||
description: Play back all your media on all your devices
|
||||
---
|
||||
|
||||
# Plex in Docker
|
||||
|
||||
[Plex](https://www.plex.tv/) is a client-server media player system and software suite comprising two main components (a media server and client applications)
|
||||
|
||||

|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need a directory to bind-mount into our container for Plex to store its configuration, so create it:


```bash
mkdir -p /var/data/config/plex
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create `plex.env`, and populate with the following variables. Set PUID and PGID to the UID and GID of the user who owns your media files on the local filesystem:
|
||||
|
||||
```bash
|
||||
EDGE=1
|
||||
VERSION=latest
|
||||
PUID=42
|
||||
PGID=42
|
||||
```
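If you're not sure which UID/GID values to use, `id` will report them - a sketch (run it as the user who owns your media, or inspect the media directory itself; the path in the comment is illustrative):

```bash
# Report the UID and GID of the current user, in plex.env format
echo "PUID=$(id -u)"
echo "PGID=$(id -g)"
# Or inspect who owns the media directory itself (illustrative path):
# stat -c 'PUID=%u PGID=%g' /var/data/media
```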
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: "3.0"
|
||||
|
||||
services:
|
||||
plex:
|
||||
image: lscr.io/linuxserver/plex
|
||||
env_file: plex.env
|
||||
volumes:
|
||||
- /var/data/config/plex:/config
|
||||
- /var/data/media:/media
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:plex.example.com
|
||||
- traefik.port=32400
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.plex.rule=Host(`plex.example.com`)"
|
||||
- "traefik.http.services.plex.loadbalancer.server.port=32400"
|
||||
- "traefik.enable=true"
|
||||
networks:
|
||||
- traefik_public
|
||||
- internal
|
||||
ports:
|
||||
- 32469:32469
|
||||
- 32400:32400
|
||||
- 32401:32401
|
||||
- 3005:3005
|
||||
- 8324:8324
|
||||
- 1900:1900/udp
|
||||
- 32410:32410/udp
|
||||
- 32412:32412/udp
|
||||
- 32413:32413/udp
|
||||
- 32414:32414/udp
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.16.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Plex stack
|
||||
|
||||
Launch the Plex stack by running ```docker stack deploy plex -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN** (you'll need to set up a plex.tv login for remote access / discovery to work from certain clients)
|
||||
|
||||
[^1]: Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via <https://plex.tv/web>
|
||||
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
122
docs/recipes/portainer.md
Normal file
@@ -0,0 +1,122 @@
|
||||
---
|
||||
title: Run Portainer in Docker Swarm (now with Dark Mode!)
|
||||
description: Portainer is a UI to make Docker less geeky, runs under Docker Swarm (and Kubernetes!) and most importantly, now supports dark mode!
|
||||
---
|
||||
|
||||
# Portainer
|
||||
|
||||
!!! tip
|
||||
Some time after originally publishing this recipe, I had the opportunity to meet the [Portainer team](https://www.reseller.co.nz/article/682233/kiwi-startup-portainer-io-closes-1-2m-seed-round/), who are based out of Auckland, New Zealand. We now have an ongoing friendly working relationship. For a time, Portainer was my [GitHub Sponsor][github_sponsor] :heart:, and in return, I maintained their [official Kubernetes helm charts](https://github.com/portainer/k8s)! :thumbsup:
|
||||
|
||||
[Portainer](https://portainer.io/) is a lightweight sexy UI for visualizing your docker environment. It also happens to integrate well with Docker Swarm clusters, which makes it a great fit for our stack.
|
||||
|
||||
Portainer attempts to take the "geekiness" out of containers, by wrapping all the jargon and complexity in a shiny UI and some simple abstractions. It's a great addition to any stack, especially if you're just starting your containerization journey!
|
||||
|
||||
!!! tip "I am all of the Sith!"
|
||||
In 2021, Portainer released "Dark Mode". Here's why I think this is [100% my fault](https://www.funkypenguin.co.nz/blog/portainer-dark-mode/) :)
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
Create a folder to store Portainer's persistent data:
|
||||
|
||||
```bash
|
||||
mkdir /var/data/portainer
|
||||
```
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3.2'
|
||||
|
||||
services:
|
||||
portainer:
|
||||
image: portainer/portainer-ce
|
||||
command: -H tcp://tasks.agent:9001 --tlsskipverify
|
||||
ports:
|
||||
- "9000:9000"
|
||||
- "8000:8000"
|
||||
volumes:
|
||||
- /var/data/portainer:/data
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
mode: replicated
|
||||
replicas: 1
|
||||
placement:
|
||||
constraints: [node.role == manager]
|
||||
labels:
|
||||
# traefik
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:portainer.example.com
|
||||
- traefik.port=9000
|
||||
# uncomment if you want to protect portainer with traefik-forward-auth using traefikv1
|
||||
# - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
|
||||
# - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
|
||||
# - traefik.frontend.auth.forward.trustForwardHeader=true
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.portainer.rule=Host(`portainer.example.com`)"
|
||||
- "traefik.http.routers.portainer.entrypoints=https"
|
||||
- "traefik.http.services.portainer.loadbalancer.server.port=9000"
|
||||
# uncomment if you want to protect portainer with traefik-forward-auth using traefikv2
|
||||
# - "traefik.http.routers.portainer.middlewares=forward-auth"
|
||||
|
||||
agent:
|
||||
image: portainer/agent
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
- /var/lib/docker/volumes:/var/lib/docker/volumes
|
||||
networks:
|
||||
- internal
|
||||
deploy:
|
||||
mode: global
|
||||
placement:
|
||||
constraints: [node.platform.os == linux]
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.13.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
!!! question "Umm.. didn't you just copy these from the [official Portainer docs](https://documentation.portainer.io/v2.0/deploy/linux/#docker-swarm)?"
|
||||
|
||||
Almost word-for-word! I've made a few (*opinionated*) improvements though:
|
||||
|
||||
* Expose Portainer via Traefik with valid LetsEncrypt SSL certs
|
||||
* Optionally protected Portainer's web UI with OIDC auth via Traefik Forward Auth
|
||||
* Use filesystem paths instead of Docker volumes for maximum "swarminess" (*We want an HA swarm, and HA Docker Volumes are a PITA, so we just use our [ceph shared storage](/docker-swarm/shared-storage-ceph/)*)
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Portainer stack
|
||||
|
||||
Launch the Portainer stack by running ```docker stack deploy portainer -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**. You'll be prompted to set your admin user/password on first login. Start at "Home", and click on "Primary" to manage your swarm (*you can manage multiple swarms via one Portainer instance using the agent*):
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
[^1]: There are [some schenanigans](https://www.reddit.com/r/docker/comments/au9wnu/linuxserverio_templates_for_portainer/) you can do to install LinuxServer.io templates in Portainer. Don't go crying to them for support though! :crying_cat_face:
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
72
docs/recipes/privatebin.md
Normal file
@@ -0,0 +1,72 @@
|
||||
---
|
||||
title: Run PrivateBin on Docker
|
||||
description: A private imgur/pastebin, running on Docker
|
||||
---
|
||||
|
||||
# PrivateBin
|
||||
|
||||
PrivateBin is a minimalist, open source online pastebin where the server (can) has zero knowledge of pasted data. We all need to paste data / log files somewhere when it doesn't make sense to paste it inline. With PrivateBin, you can own the hosting, access, and eventual deletion of this data.
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need a single location to bind-mount into our container, so create /var/data/privatebin, and make it world-writable (_there might be a more secure way to do this!_)
|
||||
|
||||
```bash
|
||||
mkdir /var/data/privatebin
|
||||
chmod 777 /var/data/privatebin/
|
||||
```
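As a (possibly) tighter alternative to `chmod 777`, you could hand ownership to the UID the container runs as and drop world access. The sketch below uses an illustrative path and assumes a runtime UID of 65534 (`nobody`) - verify the actual UID against the `privatebin/nginx-fpm-alpine` image documentation before relying on it:

```bash
# Illustrative sketch: substitute /var/data/privatebin for DATA
DATA=/tmp/privatebin-demo
mkdir -p "$DATA"
chown 65534:65534 "$DATA" 2>/dev/null || true   # needs root to take effect
chmod 770 "$DATA"
```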
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
app:
|
||||
image: privatebin/nginx-fpm-alpine
|
||||
volumes:
|
||||
- /var/data/privatebin:/srv/data
|
||||
networks:
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:privatebin.example.com
|
||||
- traefik.port=4180
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.privatebin.rule=Host(`privatebin.example.com`)"
|
||||
- "traefik.http.services.privatebin.loadbalancer.server.port=4180"
|
||||
- "traefik.enable=true"
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
```
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch PrivateBin stack
|
||||
|
||||
Launch the PrivateBin stack by running ```docker stack deploy privatebin -c <path-to-docker-compose.yml>```
|
||||
|
||||
Browse to your new instance at https://**YOUR-FQDN** - no login is required, so you can start pasting immediately.
|
||||
|
||||
[^1]: The [PrivateBin repo](https://github.com/PrivateBin/PrivateBin/blob/master/INSTALL.md) explains how to tweak configuration options, or to use a database instead of file storage, if your volume justifies it :)
|
||||
[^2]: The inclusion of Privatebin was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Unfortunately on the 22nd August 2020 Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
100
docs/recipes/realms.md
Normal file
@@ -0,0 +1,100 @@
|
||||
---
|
||||
title: Realms is a git-based wiki, and it runs under Docker!
|
||||
description: A git-based wiki with auth and registration
|
||||
---
|
||||
|
||||
# Realms
|
||||
|
||||
Realms is a git-based wiki (_like [Gollum](/recipes/gollum/), but with basic authentication and registration_)
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
Features include:
|
||||
|
||||
* Built with Bootstrap 3.
|
||||
* Markdown (w/ HTML Support).
|
||||
* Syntax highlighting (Ace Editor).
|
||||
* Live preview.
|
||||
* Collaboration (TogetherJS / Firepad).
|
||||
* Drafts saved to local storage.
|
||||
* Handlebars for templates and logic.
|
||||
|
||||
!!! warning "Project likely abandoned"
|
||||
|
||||
In my limited trial, Realms seems _less_ useful than [Gollum](/recipes/gollum/) for my particular use-case (_i.e., you're limited to markdown syntax only_), but other users may enjoy the basic user authentication and registration features, which Gollum lacks.
|
||||
|
||||
Also of note is that the docker image is 1.17GB in size, and the handful of commits to the [source GitHub repo](https://github.com/scragg0x/realms-wiki/commits/master) in the past year has listed TravisCI build failures. This has many of the hallmarks of an abandoned project, to my mind.
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
Since we'll start with a basic Realms install, let's just create a single directory to hold the realms (SQLite) data:
|
||||
|
||||
```bash
|
||||
mkdir /var/data/realms/
|
||||
```
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
realms:
|
||||
image: realms/realms-wiki:latest
|
||||
volumes:
|
||||
- /var/data/realms:/home/wiki/data
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:realms.example.com
|
||||
- traefik.port=5000
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.realms.rule=Host(`realms.example.com`)"
|
||||
- "traefik.http.services.realms.loadbalancer.server.port=5000"
|
||||
- "traefik.enable=true"
|
||||
|
||||
# Remove if you wish to access the URL directly
|
||||
- "traefik.http.routers.realms.middlewares=forward-auth@file"
|
||||
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.35.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Realms stack
|
||||
|
||||
Launch the Realms stack by running ```docker stack deploy realms -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, authenticate against your forward-auth provider, and you're immediately presented with the Realms wiki, waiting for a fresh edit ;)
|
||||
|
||||
[^1]: If you wanted to expose the realms UI directly, you could remove the traefik-forward-auth from the design.
|
||||
|
||||
[^2]: The inclusion of Realms was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz)- Unfortunately on the 22nd August 2020 Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
207
docs/recipes/restic.md
Normal file
@@ -0,0 +1,207 @@
|
||||
---
|
||||
title: Backup with restic in Docker Swarm
|
||||
description: Don't be like Cameron. Back up your shizz.
|
||||
---
|
||||
|
||||
# Restic
|
||||
|
||||
Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Back up your stuff.
|
||||
|
||||
<!-- markdownlint-disable MD033 -->
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
[Restic](https://restic.net/) is a backup program intended to be easy, fast, verifiable, secure, efficient, and free. Restic supports a range of backup targets, including local disk, [SFTP](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#sftp), [S3](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#amazon-s3) (*or compatible APIs like [Minio](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#minio-server)*), [Backblaze B2](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#backblaze-b2), [Azure](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#microsoft-azure-blob-storage), [Google Cloud Storage](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#google-cloud-storage), and zillions of others via [rclone](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#other-services-via-rclone).
|
||||
|
||||
Restic is one of the more popular open-source backup solutions, and is often [compared favorable](https://www.reddit.com/r/golang/comments/6mfe4q/a_performance_comparison_of_duplicacy_restic/dk2pkoj/?context=8&depth=9) to "freemium" products by virtue of its [licence](https://github.com/restic/restic/blob/master/LICENSE).
|
||||
|
||||
## Details
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
* [X] Credentials for one of Restic's [supported repositories](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html)
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need a data location to bind-mount persistent config (*an exclusion list*) into our container, so create them as below:
|
||||
|
||||
```bash
|
||||
mkdir -p /var/data/restic/
|
||||
mkdir -p /var/data/config/restic
|
||||
echo /var/data/runtime >> /var/data/restic/restic.exclude
|
||||
```
|
||||
|
||||
!!! note
|
||||
`/var/data/restic/restic.exclude` details which files / directories to **exclude** from the backup. Per our [data layout](/reference/data_layout/), runtime data such as database files are stored in `/var/data/runtime/[recipe]`, and excluded from backups, since we can't safely backup/restore data-in-use. Databases should be backed up by taking dumps/snapshots, and backing up _these_ dumps/snapshots instead.
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create `/var/data/config/restic/restic-backup.env`, and populate with the following variables:
|
||||
|
||||
```bash
|
||||
# run on startup, otherwise just on cron
|
||||
RUN_ON_STARTUP=true
|
||||
|
||||
# when to run (TZ ensures it runs when you expect it!)
|
||||
BACKUP_CRON=0 0 1 * * *
|
||||
TZ=Pacific/Auckland
|
||||
|
||||
# restic backend/storage credentials
|
||||
# see https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
|
||||
#AWS_ACCESS_KEY_ID=xxxxxxxx
|
||||
#AWS_SECRET_ACCESS_KEY=yyyyyyyyy
|
||||
#B2_ACCOUNT_ID=xxxxxxxx
|
||||
#B2_ACCOUNT_KEY=yyyyyyyyy
|
||||
|
||||
# will initialise the repo on startup the first time (if not already initialised)
|
||||
# don't lose this password otherwise you WON'T be able to decrypt your backups!
|
||||
RESTIC_REPOSITORY=<repo_name>
|
||||
RESTIC_PASSWORD=<repo_password>
|
||||
|
||||
# what to backup (excluding anything in restic.exclude)
|
||||
RESTIC_BACKUP_SOURCES=/data
|
||||
|
||||
# define any args to pass to the backup operation (e.g. the exclude file)
|
||||
# see https://restic.readthedocs.io/en/stable/040_backup.html
|
||||
RESTIC_BACKUP_ARGS=--exclude-file /restic.exclude
|
||||
|
||||
# define any args to pass to the forget operation (e.g. what snapshots to keep)
|
||||
# see https://restic.readthedocs.io/en/stable/060_forget.html
|
||||
RESTIC_FORGET_ARGS=--keep-daily 7 --keep-monthly 12
|
||||
```
|
||||
|
||||
Create `/var/data/config/restic/restic-prune.env`, and populate with the following variables:
|
||||
|
||||
```bash
|
||||
# run on startup, otherwise just on cron
|
||||
RUN_ON_STARTUP=false
|
||||
|
||||
# when to run (TZ ensures it runs when you expect it!)
|
||||
PRUNE_CRON=0 0 4 * * *
|
||||
TZ=Pacific/Auckland
|
||||
|
||||
# restic backend/storage credentials
|
||||
# see https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
|
||||
#AWS_ACCESS_KEY_ID=xxxxxxxx
|
||||
#AWS_SECRET_ACCESS_KEY=yyyyyyyyy
|
||||
#B2_ACCOUNT_ID=xxxxxxxx
|
||||
#B2_ACCOUNT_KEY=yyyyyyyyy
|
||||
|
||||
# will initialise the repo on startup the first time (if not already initialised)
|
||||
# don't lose this password otherwise you WON'T be able to decrypt your backups!
|
||||
RESTIC_REPOSITORY=<repo_name>
|
||||
RESTIC_PASSWORD=<repo_password>
|
||||
|
||||
# prune will remove any *forgotten* snapshots, if there are some args you want
|
||||
# to pass to the prune operation define them here
|
||||
#RESTIC_PRUNE_ARGS=
|
||||
```
|
||||
|
||||
!!! question "Why create two separate .env files?"
|
||||
Although there's duplication involved, maintaining 2 files for the two services within the stack keeps it clean, and allows you to potentially alter the behaviour of one service without impacting the other in future
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3) in `/var/data/config/restic/restic.yml` , something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: "3.2"
|
||||
|
||||
services:
|
||||
backup:
|
||||
image: mazzolino/restic
|
||||
env_file: /var/data/config/restic/restic-backup.env
|
||||
hostname: docker
|
||||
volumes:
|
||||
- /var/data/restic/restic.exclude:/restic.exclude
|
||||
- /var/data:/data:ro
|
||||
deploy:
|
||||
labels:
|
||||
- "traefik.enabled=false"
|
||||
|
||||
prune:
|
||||
image: mazzolino/restic
|
||||
env_file: /var/data/config/restic/restic-prune.env
|
||||
hostname: docker
|
||||
deploy:
|
||||
labels:
|
||||
- "traefik.enabled=false"
|
||||
|
||||
networks:
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.56.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Restic stack
|
||||
|
||||
Launch the Restic stack by running `docker stack deploy restic -c <path-to-docker-compose.yml>`, and watch the logs by running `docker service logs restic_backup` - you should see something like this:
|
||||
|
||||
```bash
|
||||
root@raphael:~# docker service logs restic_backup -f
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Checking configured repository '<repo_name>' ...
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Fatal: unable to open config file: Stat: stat <repo_name>/config: no such file or directory
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Is there a repository at the following location?
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | <repo_name>
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Could not access the configured repository. Trying to initialize (in case it has not been initialized yet) ...
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | created restic repository 66ffec75f9 at <repo_name>
|
||||
restic_backup.1.9sii77j9jf0x@leonardo |
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Please note that knowledge of your password is required to access
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | the repository. Losing your password means that your data is
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | irrecoverably lost.
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Repository successfully initialized.
|
||||
restic_backup.1.9sii77j9jf0x@leonardo |
|
||||
restic_backup.1.9sii77j9jf0x@leonardo |
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Scheduling backup job according to cron expression.
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | new cron: 0 0 1 * * *
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | (0x50fac0,0xc0000cc000)
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Stopping
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Waiting
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Exiting
|
||||
```
|
||||
|
||||
Of note above is =="Repository successfully initialized"== - this indicates that the repository credentials passed to Restic are correct, and Restic has the necessary access to create repositories.

### Restoring data

Repeat after me: "**It's not a backup unless you've tested a restore**"

The simplest way to test your restore is to run the container once, using the variables you've already prepared, with custom arguments, as per the following example:

```bash
docker run --rm -it --name restic-restore --env-file /var/data/config/restic/restic-backup.env \
  -v /tmp/restore:/restore mazzolino/restic restore latest --target /restore
```

In my example:

```bash
root@raphael:~# docker run --rm -it --name restic-restore --env-file /var/data/config/restic/restic-backup.env \
> -v /tmp/restore:/restore mazzolino/restic restore latest --target /restore
Unable to find image 'mazzolino/restic:latest' locally
latest: Pulling from mazzolino/restic
Digest: sha256:cb827c4c5e63952f8d114c87432ff12d3409a0ba4bcb52f53885dca889b1cb6b
Status: Downloaded newer image for mazzolino/restic:latest
Checking configured repository 's3:s3.amazonaws.com/restic-geek-cookbook-premix.elpenguino.be' ...
Repository found.
repository c50738d1 opened successfully, password is correct
restoring <Snapshot b5c50b19 of [/data] at 2020-06-24 23:54:27.92318041 +0000 UTC by root@docker> to /restore
root@raphael:~#
```

!!! tip "Restoring a subset of data"
    The example above restores the **entire** `/var/data` folder (*minus any exclusions*). To restore just a subset of data, add the `-i <regex>` argument, i.e. `-i plex`

[^1]: The `/var/data/restic/restic.exclude` exists to provide you with a way to exclude data you don't care to back up.

[^2]: A recent benchmark of various backup tools, including Restic, can be found [here](https://forum.duplicati.com/t/big-comparison-borg-vs-restic-vs-arq-5-vs-duplicacy-vs-duplicati/9952).

[^3]: A paid-for UI for Restic can be found [here](https://forum.restic.net/t/web-ui-for-restic/667/26).

--8<-- "recipe-footer.md"
67
docs/recipes/rss-bridge.md
Normal file
@@ -0,0 +1,67 @@
---
description: Stalk your ex on Facebook in your feedreader!
---

# RSS Bridge

Do you hate having to access multiple sites to view specific content? [RSS-Bridge](https://github.com/RSS-Bridge/rss-bridge) can convert content from a wide variety of websites (*such as Reddit, Facebook, Twitter*) so that it can be viewed in a structured and consistent way, all from one place: your feed reader.

{ loading=lazy }

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we create a directory to hold the data which RSS Bridge will serve:

```bash
mkdir /var/data/config/rssbridge
cd /var/data/config/rssbridge
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: '3'
services:
  rss:
    image: rssbridge/rss-bridge:latest
    volumes:
      - /var/data/config/rssbridge:/config
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:rssbridge.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.rssbridge.rule=Host(`rssbridge.example.com`)"
        - "traefik.http.services.rssbridge.loadbalancer.server.port=80"
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```
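If you'd like to limit which bridges your instance exposes, one approach is to drop a `whitelist.txt` into the config directory we created above. This is a hypothetical sketch: it assumes the image picks up `whitelist.txt` from the mounted `/config` volume, and the `Reddit`/`Twitter` bridge names are illustrative only - check the RSS-Bridge docs for the names your version supports.

```shell
# Hypothetical: enable only the bridges named in whitelist.txt
# (assumption: the container reads this file from the mounted /config volume)
CONF_DIR=/var/data/config/rssbridge
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/whitelist.txt" <<'EOF'
Reddit
Twitter
EOF
cat "$CONF_DIR/whitelist.txt"
```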

## Serving

### Deploy the bridge!

Launch the RSS Bridge stack by running ```docker stack deploy rssbridge -c <path-to-docker-compose.yml>```

[^1]: The inclusion of RSS Bridge was due to the efforts of @bencey in [Discord](http://chat.funkypenguin.co.nz) (Thanks Ben!)

[^2]: This delicious recipe is well-paired with an RSS reader such as [Miniflux][miniflux]

--8<-- "recipe-footer.md"
394
docs/recipes/swarmprom.md
Normal file
@@ -0,0 +1,394 @@
---
description: Data is beautiful
---

# Swarmprom

[Swarmprom](https://github.com/stefanprodan/swarmprom) is a starter kit for Docker Swarm monitoring with [Prometheus](https://prometheus.io/), [Grafana](http://grafana.org/), [cAdvisor](https://github.com/google/cadvisor), [Node Exporter](https://github.com/prometheus/node_exporter), [Alert Manager](https://github.com/prometheus/alertmanager) and [Unsee](https://github.com/cloudflare/unsee). And it's **damn** sexy. See for yourself:

{ loading=lazy }

So what do all these components do?

* [Prometheus](https://prometheus.io/docs/introduction/overview/) is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
* [Grafana](http://grafana.org/) is a tool to make data beautiful.
* [cAdvisor](https://github.com/google/cadvisor) (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers.
* [Node Exporter](https://github.com/prometheus/node_exporter) is a Prometheus exporter for hardware and OS metrics.
* [Alert Manager](https://github.com/prometheus/alertmanager) handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations such as email, Slack, etc.
* [Unsee](https://github.com/cloudflare/unsee) is an alert dashboard for Alert Manager.

## How does this magic work?

I'd encourage you to spend some time reading <https://github.com/stefanprodan/swarmprom>. Stefan has included detailed explanations about which elements perform which functions, as well as how to customize your stack. (_This is only a starting point, after all_)

--8<-- "recipe-standard-ingredients.md"

## Preparation

This is basically a rehash of stefanprodan's [instructions](https://github.com/stefanprodan/swarmprom) to match the way I've configured other recipes.

### Setup oauth provider

Grafana includes decent login protections, but from what I can see, Prometheus, AlertManager, and Unsee perform no authentication at all. In order to expose these publicly for your own consumption (my assumption for the rest of this recipe), you'll want to prepare to run [oauth_proxy](/reference/oauth_proxy/) containers in front of each of the 4 web UIs in this recipe.

### Setup metrics

Edit (_or create, depending on your OS_) /etc/docker/daemon.json, and add the following, to enable the experimental export of metrics to Prometheus:

```json
{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}
```

Restart docker with ```systemctl restart docker```
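If you'd rather script this step, the following sketch creates the file and sanity-checks it before the restart - worth doing, because a JSON syntax error in `daemon.json` will stop dockerd from starting at all. It assumes root on the Docker host, that `python3` is available, and that you have no existing settings in the file (merge by hand if you do).

```shell
# Create /etc/docker/daemon.json non-interactively, then validate the JSON.
# NOTE: overwrites any existing file - merge by hand if you have settings there.
CONF=/etc/docker/daemon.json
mkdir -p "$(dirname "$CONF")"
cat > "$CONF" <<'EOF'
{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}
EOF
# A malformed file would prevent dockerd from starting, so check it first
python3 -m json.tool "$CONF" >/dev/null && echo "daemon.json is valid JSON"
```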

### Setup and populate data locations

We'll need several files to bind-mount into our containers, so create directories for them and get the latest copies:

```bash
mkdir -p /var/data/swarmprom/dockerd-exporter/
cd /var/data/swarmprom/dockerd-exporter/
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/dockerd-exporter/Caddyfile

mkdir -p /var/data/swarmprom/prometheus/rules/
cd /var/data/swarmprom/prometheus/rules/
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/prometheus/rules/swarm_task.rules.yml
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/prometheus/rules/swarm_node.rules.yml

# Directories for holding runtime data
mkdir -p /var/data/runtime/swarmprom/grafana/
mkdir -p /var/data/runtime/swarmprom/alertmanager/
mkdir -p /var/data/runtime/swarmprom/prometheus

chown nobody:nogroup /var/data/runtime/swarmprom/prometheus
```

### Prepare Grafana

Grafana will make all the data we collect from our swarm beautiful.

Create /var/data/config/swarmprom/grafana.env, and populate with the following variables:

```yaml
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=

# Disable basic auth (it conflicts with oauth_proxy)
GF_AUTH_BASIC_ENABLED=false

# Set this to the real-world URL to your grafana install (else you get screwy CSS thanks to oauth_proxy)
GF_SERVER_ROOT_URL=https://grafana.example.com
GF_SERVER_DOMAIN=grafana.example.com

# Set your default admin/pass here
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=ilovemybatmanunderpants
```
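The cookie secret just needs to be a high-entropy random value; one quick way to generate one (a sketch - oauth2_proxy expects a secret of a valid AES key length, e.g. 16, 24 or 32 bytes):

```shell
# Generate a random 24-byte, base64-encoded value for OAUTH2_PROXY_COOKIE_SECRET
COOKIE_SECRET=$(head -c 24 /dev/urandom | base64)
echo "OAUTH2_PROXY_COOKIE_SECRET=${COOKIE_SECRET}"
```

Paste the resulting value into `grafana.env` (and into the alertmanager/unsee/prometheus env files you'll create along the same lines).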

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), based on the original swarmprom [docker-compose.yml](https://github.com/stefanprodan/swarmprom/blob/master/docker-compose.yml) file

--8<-- "premix-cta.md"

{% raw %}
???+ note "This example is 274 lines long. Click here to collapse it for better readability"

    ```yaml
    version: "3.3"

    volumes:
      prometheus: {}
      grafana: {}
      alertmanager: {}

    configs:
      dockerd_config:
        file: /var/data/swarmprom/dockerd-exporter/Caddyfile
      node_rules:
        file: /var/data/swarmprom/prometheus/rules/swarm_node.rules.yml
      task_rules:
        file: /var/data/swarmprom/prometheus/rules/swarm_task.rules.yml

    services:
      dockerd-exporter:
        image: stefanprodan/caddy
        networks:
          - internal
        environment:
          - DOCKER_GWBRIDGE_IP=172.18.0.1
        configs:
          - source: dockerd_config
            target: /etc/caddy/Caddyfile
        deploy:
          mode: global
          resources:
            limits:
              memory: 128M
            reservations:
              memory: 64M

      cadvisor:
        image: google/cadvisor
        networks:
          - internal
        command: -logtostderr -docker_only
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - /:/rootfs:ro
          - /var/run:/var/run
          - /sys:/sys:ro
          - /var/lib/docker/:/var/lib/docker:ro
        deploy:
          mode: global
          resources:
            limits:
              memory: 128M
            reservations:
              memory: 64M

      grafana:
        image: stefanprodan/swarmprom-grafana:5.3.4
        networks:
          - internal
        env_file: /var/data/config/swarmprom/grafana.env
        environment:
          - GF_USERS_ALLOW_SIGN_UP=false
          - GF_SMTP_ENABLED=${GF_SMTP_ENABLED:-false}
          - GF_SMTP_FROM_ADDRESS=${GF_SMTP_FROM_ADDRESS:-grafana@test.com}
          - GF_SMTP_FROM_NAME=${GF_SMTP_FROM_NAME:-Grafana}
          - GF_SMTP_HOST=${GF_SMTP_HOST:-smtp:25}
          - GF_SMTP_USER=${GF_SMTP_USER}
          - GF_SMTP_PASSWORD=${GF_SMTP_PASSWORD}
        volumes:
          - /var/data/runtime/swarmprom/grafana:/var/lib/grafana
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager
          resources:
            limits:
              memory: 128M
            reservations:
              memory: 64M

      grafana-proxy:
        image: a5huynh/oauth2_proxy
        env_file: /var/data/config/swarmprom/grafana.env
        networks:
          - internal
          - traefik_public
        deploy:
          labels:
            - traefik.frontend.rule=Host:grafana.swarmprom.example.com
            - traefik.docker.network=traefik_public
            - traefik.port=4180
        volumes:
          - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
        command: |
          -cookie-secure=false
          -upstream=http://grafana:3000
          -redirect-url=https://grafana.swarmprom.example.com
          -http-address=http://0.0.0.0:4180
          -email-domain=example.com
          -provider=github
          -authenticated-emails-file=/authenticated-emails.txt

      alertmanager:
        image: stefanprodan/swarmprom-alertmanager:v0.14.0
        networks:
          - internal
        environment:
          - SLACK_URL=${SLACK_URL:-https://hooks.slack.com/services/TOKEN}
          - SLACK_CHANNEL=${SLACK_CHANNEL:-general}
          - SLACK_USER=${SLACK_USER:-alertmanager}
        command:
          - '--config.file=/etc/alertmanager/alertmanager.yml'
          - '--storage.path=/alertmanager'
        volumes:
          - /var/data/runtime/swarmprom/alertmanager:/alertmanager
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager
          resources:
            limits:
              memory: 128M
            reservations:
              memory: 64M

      alertmanager-proxy:
        image: a5huynh/oauth2_proxy
        env_file: /var/data/config/swarmprom/alertmanager.env
        networks:
          - internal
          - traefik_public
        deploy:
          labels:
            - traefik.frontend.rule=Host:alertmanager.swarmprom.example.com
            - traefik.docker.network=traefik_public
            - traefik.port=4180
        volumes:
          - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
        command: |
          -cookie-secure=false
          -upstream=http://alertmanager:9093
          -redirect-url=https://alertmanager.swarmprom.example.com
          -http-address=http://0.0.0.0:4180
          -email-domain=example.com
          -provider=github
          -authenticated-emails-file=/authenticated-emails.txt

      unsee:
        image: cloudflare/unsee:v0.8.0
        networks:
          - internal
        environment:
          - "ALERTMANAGER_URIS=default:http://alertmanager:9093"
        deploy:
          mode: replicated
          replicas: 1

      unsee-proxy:
        image: a5huynh/oauth2_proxy
        env_file: /var/data/config/swarmprom/unsee.env
        networks:
          - internal
          - traefik_public
        deploy:
          labels:
            - traefik.frontend.rule=Host:unsee.swarmprom.example.com
            - traefik.docker.network=traefik_public
            - traefik.port=4180
        volumes:
          - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
        command: |
          -cookie-secure=false
          -upstream=http://unsee:8080
          -redirect-url=https://unsee.swarmprom.example.com
          -http-address=http://0.0.0.0:4180
          -email-domain=example.com
          -provider=github
          -authenticated-emails-file=/authenticated-emails.txt

      node-exporter:
        image: stefanprodan/swarmprom-node-exporter:v0.16.0
        networks:
          - internal
        environment:
          - NODE_ID={{.Node.ID}}
        volumes:
          - /proc:/host/proc:ro
          - /sys:/host/sys:ro
          - /:/rootfs:ro
          - /etc/hostname:/etc/nodename
        command:
          - '--path.sysfs=/host/sys'
          - '--path.procfs=/host/proc'
          - '--collector.textfile.directory=/etc/node-exporter/'
          - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
          # no collectors are explicitly enabled here, because the defaults are just fine,
          # see https://github.com/prometheus/node_exporter
          # disable ipvs collector because it barfs the node-exporter logs full with errors on my centos 7 vm's
          - '--no-collector.ipvs'
        deploy:
          mode: global
          resources:
            limits:
              memory: 128M
            reservations:
              memory: 64M

      prometheus:
        image: stefanprodan/swarmprom-prometheus:v2.5.0
        networks:
          - internal
        command:
          - '--config.file=/etc/prometheus/prometheus.yml'
          - '--web.console.libraries=/etc/prometheus/console_libraries'
          - '--web.console.templates=/etc/prometheus/consoles'
          - '--storage.tsdb.path=/prometheus'
          - '--storage.tsdb.retention=24h'
        volumes:
          - /var/data/runtime/swarmprom/prometheus:/prometheus
        configs:
          - source: node_rules
            target: /etc/prometheus/swarm_node.rules.yml
          - source: task_rules
            target: /etc/prometheus/swarm_task.rules.yml
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager
          resources:
            limits:
              memory: 2048M
            reservations:
              memory: 128M

      prometheus-proxy:
        image: a5huynh/oauth2_proxy
        env_file: /var/data/config/swarmprom/prometheus.env
        networks:
          - internal
          - traefik_public
        deploy:
          labels:
            - traefik.frontend.rule=Host:prometheus.swarmprom.example.com
            - traefik.docker.network=traefik_public
            - traefik.port=4180
        volumes:
          - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
        command: |
          -cookie-secure=false
          -upstream=http://prometheus:9090
          -redirect-url=https://prometheus.swarmprom.example.com
          -http-address=http://0.0.0.0:4180
          -email-domain=example.com
          -provider=github
          -authenticated-emails-file=/authenticated-emails.txt

    networks:
      traefik_public:
        external: true
      internal:
        driver: overlay
        ipam:
          config:
            - subnet: 172.16.29.0/24
    ```

!!! note
    Setup unique static subnets for every stack you deploy. This avoids IP/gateway conflicts which can otherwise occur when you're creating/removing stacks a lot. See [my list](/reference/networks/) here.

{% endraw %}
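Two bits of compose-file syntax in the stack above are worth unpacking: `${VAR:-default}` substitutes a default when the variable is unset, and `$$` escapes a literal `$` so compose doesn't try to interpolate it. Plain shell follows the same expansion rules, so you can sanity-check both:

```shell
# ${VAR:-default}: use the variable if set, otherwise fall back to the default
unset GF_SMTP_ENABLED
echo "GF_SMTP_ENABLED=${GF_SMTP_ENABLED:-false}"

# '$$' in the compose file becomes a literal '$', so the node-exporter
# filesystem-exclusion regex actually ends in '($|/)':
REGEX='^/(sys|proc|dev|host|etc)($|/)'
echo "/sys/fs/cgroup" | grep -qE "$REGEX" && echo "/sys/fs/cgroup is excluded"
echo "/var/data"      | grep -qE "$REGEX" || echo "/var/data is collected"
```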

## Serving

### Launch Swarmprom stack

Launch the Swarm stack by running ```docker stack deploy swarmprom -c <path-to-docker-compose.yml>```

Log into your new grafana instance, check out your beautiful graphs. Move onto drooling over Prometheus, AlertManager, and Unsee.

[^1]: Pay close attention to the ```grafana.env``` config. If you encounter errors about ```basic auth failed```, or failed CSS, it's likely due to misconfiguration of one of the grafana environment variables.

--8<-- "recipe-footer.md"
89
docs/recipes/template.md
Normal file
@@ -0,0 +1,89 @@
---
description: Neat one-sentence description of recipe for social media previews
---

# <///RECIPE NAME>

{ loading=lazy }

[Linx](https://github.com/andreimarcu/linx-server) is a self-hosted file/media-sharing service, which features:

- :white_check_mark: Display common filetypes (*image, video, audio, markdown, pdf*)
- :white_check_mark: Display syntax-highlighted code with in-place editing
- :white_check_mark: Documented API with keys for restricting uploads
- :white_check_mark: Torrent download of files using web seeding
- :white_check_mark: File expiry, deletion key, file access key, and random filename options

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we create a directory to hold the data which linx will serve:

```bash
mkdir /var/data/linx
```

### Create config file

Linx is configured using a flat text file, so create this on the Docker host, and then we'll mount it (*read-only*) into the container, below.

```bash
mkdir /var/data/config/linx
cat << EOF > /var/data/config/linx/linx.conf
# Refer to https://github.com/andreimarcu/linx-server for details
cleanup-every-minutes = 5
EOF
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"

```yaml
version: "3.2" # https://docs.docker.com/compose/compose-file/compose-versioning/#version-3

services:
  linx:
    image: andreimarcu/linx-server
    env_file: /var/data/config/linx/linx.env
    command: -config /linx.conf
    volumes:
      - /var/data/linx/:/files/
      - /var/data/config/linx/linx.conf:/linx.conf:ro
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:linx.example.com
        - traefik.port=8080

        # traefikv2
        - "traefik.http.routers.linx.rule=Host(`linx.example.com`)"
        - "traefik.http.routers.linx.entrypoints=https"
        - "traefik.http.services.linx.loadbalancer.server.port=8080"
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```

## Serving

### Launch the Linx!

Launch the Linx stack by running ```docker stack deploy linx -c <path-to-docker-compose.yml>```

[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

--8<-- "recipe-footer.md"
137
docs/recipes/tiny-tiny-rss.md
Normal file
@@ -0,0 +1,137 @@
@@ -0,0 +1,137 @@
|
||||
---
|
||||
description: Geeky RSS reader
|
||||
---
|
||||
|
||||
# Tiny Tiny RSS
|
||||
|
||||
[Tiny Tiny RSS](https://tt-rss.org/) is a self-hosted, AJAX-based RSS reader, which rose to popularity as a replacement for Google Reader. It supports ~~geeky~~ advanced features, such as:
|
||||
|
||||
* Plugins and themeing in a drop-in fashion
|
||||
* Filtering (discard all articles with title matching "trump")
|
||||
* Sharing articles via a unique public URL/feed
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/ttrss:
|
||||
|
||||
```bash
|
||||
mkdir /var/data/ttrss
|
||||
cd /var/data/ttrss
|
||||
mkdir -p {database,database-dump}
|
||||
mkdir /var/data/config/ttrss
|
||||
cd /var/data/config/ttrss
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create `/var/data/config/ttrs/ttrss.env`, and populate with the following variables, customizing at least the database password (POSTGRES_PASSWORD **and** DB_PASS) and the TTRSS_SELF_URL to point to your installation.
|
||||
|
||||
```yaml
|
||||
# Variables for postgres:latest
|
||||
POSTGRES_USER=ttrss
|
||||
POSTGRES_PASSWORD=mypassword
|
||||
DB_EXTENSION=pg_trgm
|
||||
|
||||
# Variables for pg_dump running in postgres/latest (used for db-backup)
|
||||
PGUSER=ttrss
|
||||
PGPASSWORD=mypassword
|
||||
PGHOST=db
|
||||
BACKUP_NUM_KEEP=3
|
||||
BACKUP_FREQUENCY=1d
|
||||
|
||||
# Variables for funkypenguin/docker-ttrss
|
||||
DB_USER=ttrss
|
||||
DB_PASS=mypassword
|
||||
DB_PORT=5432
|
||||
DB_PORT_5432_TCP_ADDR=db
|
||||
DB_PORT_5432_TCP_PORT=5432
|
||||
TTRSS_SELF_URL=https://ttrss.example.com
|
||||
TTRSS_REPO=https://github.com/funkypenguin/tt-rss.git
|
||||
S6_BEHAVIOUR_IF_STAGE2_FAILS=2
|
||||
```
|
||||
|
||||
### Setup docker swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
db:
|
||||
image: postgres:latest
|
||||
env_file: /var/data/config/ttrss/ttrss.env
|
||||
volumes:
|
||||
- /var/data/ttrss/database:/var/lib/postgresql/data
|
||||
networks:
|
||||
- internal
|
||||
|
||||
app:
|
||||
image: funkypenguin/docker-ttrss
|
||||
env_file: /var/data/config/ttrss/ttrss.env
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:ttrss.example.com
|
||||
- traefik.port=8080
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.ttrss.rule=Host(`ttrss.example.com`)"
|
||||
- "traefik.http.services.ttrss.loadbalancer.server.port=8080"
|
||||
- "traefik.enable=true"
|
||||
networks:
|
||||
- internal
|
||||
- traefik_public
|
||||
|
||||
db-backup:
|
||||
image: postgres:latest
|
||||
env_file: /var/data/config/ttrss/ttrss.env
|
||||
volumes:
|
||||
- /var/data/ttrss/database-dump:/dump
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
entrypoint: |
|
||||
bash -c 'bash -s <<EOF
|
||||
trap "break;exit" SIGHUP SIGINT SIGTERM
|
||||
sleep 2m
|
||||
while /bin/true; do
|
||||
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
|
||||
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
|
||||
sleep $$BACKUP_FREQUENCY
|
||||
done
|
||||
EOF'
|
||||
networks:
|
||||
- internal
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.5.0/24
|
||||
```
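The `db-backup` entrypoint's rotation one-liner deserves a word: it lists the newest `$BACKUP_NUM_KEEP` dumps plus *all* dumps, sorts the combined list, and deletes whatever appears only once - i.e. everything *not* among the newest N. Here's a self-contained sketch of the same trick (using hypothetical filenames and `touch -d` to fake distinct timestamps) that you can run anywhere:

```shell
# Simulate 5 dump files with distinct modification times
cd "$(mktemp -d)"
for i in 1 2 3 4 5; do
  touch -d "2020-01-0${i}" "dump_0${i}.psql"
done

BACKUP_NUM_KEEP=3
# newest N + all files -> sort -> keep lines appearing once -> delete those
(ls -t dump_*.psql | head -n "$BACKUP_NUM_KEEP"; ls dump_*.psql) | sort | uniq -u | xargs -r rm --
ls dump_*.psql
```

After running this, only the three newest dumps remain on disk.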

--8<-- "reference-networks.md"

## Serving

### Launch TTRSS stack

Launch the TTRSS stack by running ```docker stack deploy ttrss -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN** - the first user you create will be an administrative user.

--8<-- "recipe-footer.md"
183
docs/recipes/wallabag.md
Normal file
@@ -0,0 +1,183 @@
@@ -0,0 +1,183 @@
|
||||
---
|
||||
title: Run Wallabag under Docker (compose), mate!
|
||||
---
|
||||
|
||||
# Wallabag
|
||||
|
||||
Wallabag is a self-hosted webapp which allows you to save URLs to "read later", similar to [Instapaper](https://www.instapaper.com/u) or [Pocket](https://getpocket.com/a/). Like Instapaper (_but **not** Pocket, sadly_), Wallabag allows you to **annotate** any pages you grab for your own reference.
|
||||
|
||||
All saved data (_pages, annotations, images, tags, etc_) are stored on your own server, and can be shared/exported in a variety of formats, including ePub and PDF.
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
There are plugins for [Chrome](https://chrome.google.com/webstore/detail/wallabagger/gbmgphmejlcoihgedabhgjdkcahacjlj) and [Firefox](https://addons.mozilla.org/firefox/addon/wallabagger/), as well as apps for [iOS](https://appsto.re/fr/YeqYfb.i), [Android](https://play.google.com/store/apps/details?id=fr.gaulupeau.apps.InThePoche), etc. Wallabag will also integrate nicely with my favorite RSS reader, [Miniflux](https://miniflux.net/) (_for which there is an [existing recipe][miniflux]_).
|
||||
|
||||
[Here's a video](https://player.vimeo.com/video/167435064) which shows off the UI a bit more.
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We need a filesystem location to store images that Wallabag downloads from the original sources, to re-display when you read your articles, as well as nightly database dumps (_which you **should [backup](/recipes/duplicity/)**_), so create something like this:
|
||||
|
||||
```bash
|
||||
mkdir -p /var/data/wallabag
|
||||
cd /var/data/wallabag
|
||||
mkdir -p {images,db-dump}
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create `/var/data/config/wallabag/wallabag.env`, and populate with the following variables. The only variable you **have** to change is SYMFONY__ENV__DOMAIN_NAME - this **must** be the URL that your Wallabag instance will be available at (_else you'll have no CSS_)
|
||||
|
||||
```yaml
|
||||
# For the DB container
|
||||
POSTGRES_PASSWORD=wallabag
|
||||
POSTGRES_USER=wallabag
|
||||
|
||||
# For the wallabag container
|
||||
SYMFONY__ENV__DATABASE_DRIVER=pdo_pgsql
|
||||
SYMFONY__ENV__DATABASE_HOST=db
|
||||
SYMFONY__ENV__DATABASE_PORT=5432
|
||||
SYMFONY__ENV__DATABASE_NAME=wallabag
|
||||
SYMFONY__ENV__DATABASE_USER=wallabag
|
||||
SYMFONY__ENV__DATABASE_PASSWORD=wallabag
|
||||
SYMFONY__ENV__DOMAIN_NAME=https://wallabag.example.com
|
||||
SYMFONY__ENV__DATABASE_DRIVER_CLASS=Wallabag\CoreBundle\Doctrine\DBAL\Driver\CustomPostgreSQLDriver
|
||||
SYMFONY__ENV__MAILER_HOST=127.0.0.1
|
||||
SYMFONY__ENV__MAILER_USER=~
|
||||
SYMFONY__ENV__MAILER_PASSWORD=~
|
||||
SYMFONY__ENV__FROM_EMAIL=wallabag@example.com
|
||||
SYMFONY__ENV__FOSUSER_REGISTRATION=false
|
||||
```
|
||||
|
||||
Now create wallabag-`/var/data/config/wallabag/backup.env` with the following contents. (_This is necessary to prevent environment variables required for backup from breaking the DB container_)
|
||||
|
||||
```yaml
|
||||
# For database backups
|
||||
PGUSER=wallabag
|
||||
PGPASSWORD=wallabag
|
||||
PGHOST=db
|
||||
BACKUP_NUM_KEEP=7
|
||||
BACKUP_FREQUENCY=1d
|
||||
```
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
services:
|
||||
wallabag:
|
||||
image: wallabag/wallabag
|
||||
env_file: /var/data/config/wallabag/wallabag.env
|
||||
networks:
|
||||
- internal
|
||||
volumes:
|
||||
- /var/data/wallabag/images:/var/www/wallabag/web/assets/images
|
||||
deploy:
|
||||
labels:
|
||||
# traefik common
|
||||
- traefik.enable=true
|
||||
- traefik.docker.network=traefik_public
|
||||
|
||||
# traefikv1
|
||||
- traefik.frontend.rule=Host:wallabag.example.com
|
||||
- traefik.port=80
|
||||
|
||||
# traefikv2
|
||||
- "traefik.http.routers.wallabag.rule=Host(`wallabag.example.com`)"
|
||||
- "traefik.http.services.wallabag.loadbalancer.server.port=80"
|
||||
- "traefik.enable=true"
|
||||
|
||||
# Remove if you wish to access the URL directly
|
||||
- "traefik.http.routers.wallabag.middlewares=forward-auth@file"
|
||||
|
||||
db:
|
||||
image: postgres
|
||||
env_file: /var/data/config/wallabag/wallabag.env
|
||||
dns_search:
|
||||
- hq.example.com
|
||||
volumes:
|
||||
- /var/data/runtime/wallabag/data:/var/lib/postgresql/data
|
||||
networks:
|
||||
- internal
|
||||
|
||||
db-backup:
|
||||
image: postgres:latest
|
||||
env_file: /var/data/config/wallabag/wallabag-backup.env
|
||||
volumes:
|
||||
- /var/data/wallabag/database-dump:/dump
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
entrypoint: |
|
||||
bash -c 'bash -s <<EOF
|
||||
trap "break;exit" SIGHUP SIGINT SIGTERM
|
||||
sleep 2m
|
||||
while /bin/true; do
|
||||
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
|
||||
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
|
||||
sleep $$BACKUP_FREQUENCY
|
||||
done
|
||||
EOF'
|
||||
networks:
|
||||
- internal
|
||||
|
||||
redis:
|
||||
image: redis:alpine
|
||||
networks:
|
||||
- internal
|
||||
|
||||
import-instapaper:
|
||||
image: wallabag/wallabag
|
||||
env_file: /var/data/config/wallabag/wallabag.env
|
||||
networks:
|
||||
- internal
|
||||
command: |
|
||||
import instapaper
|
||||
|
||||
import-pocket:
|
||||
image: wallabag/wallabag
|
||||
env_file: /var/data/config/wallabag/wallabag.env
|
||||
networks:
|
||||
- internal
|
||||
command: |
|
||||
import pocket
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.21.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Wallabag stack
|
||||
|
||||
Launch the Wallabag stack by running ```docker stack deploy wallabag -c <path -to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, with user "wallabag" and default password "wallabag".
|
||||
|
||||
### Enable asynchronous imports
|
||||
|
||||
You'll have noticed redis, plus the pocket/instapaper-importing containers included in the .yml above. Redis is there to allow [asynchronous](https://github.com/wallabag/doc/blob/master/en/admin/asynchronous.md) imports, and pocket and instapaper are there since they're likely the most popular platform you'd _want_ to import from. Other possibilities (_you'll need to adjust the .yml_) are **readability**, **firefox**, **chrome**, and **wallabag_v1** and **wallabag_v2**.
|
||||
|
||||
Even with all these elements in place, you still need to enable Redis under Internal Settings -> Import, via the **admin** user in the webUI. Here's a screenshot to help you find it:
|
||||
|
||||
{ loading=lazy }
|
||||
|
||||
[^1]: If you wanted to expose the Wekan UI directly, you could remove the traefik-forward-auth from the design. I found the iOS app to be unreliable and clunky, so elected to leave my traefik-forward-auth enabled, and to simply use the webUI on my mobile devices instead. YMMMV.
|
||||
|
||||
[^2]: I've not tested the email integration, but you'd need an SMTP server listening on port 25 (_since we can't change the port_) to use it
|
||||
|
||||
--8<-- "recipe-footer.md"

docs/recipes/wekan.md (new file, 131 lines)

---
title: Run Wekan under Docker
---

# Wekan

Wekan is an open-source kanban board which allows card-based task and to-do management, similar to tools like WorkFlowy or Trello.



Wekan lets you create Boards, on which Cards can be moved around between a number of Columns. Boards can have many members, allowing for easy collaboration: just add everyone who should be able to work with you on the board, and you're good to go! You can assign colored Labels to cards to facilitate grouping and filtering, and you can additionally add members to a card, for example to assign a task to someone.

There's a [video](https://www.youtube.com/watch?v=N3iMLwCNOro) of the developer showing off the app, as well as a [functional demo](https://boards.wekan.team/b/D2SzJKZDS4Z48yeQH/wekan-open-source-kanban-board-with-mit-license).

!!! note
    For added privacy, this design secures wekan behind a [traefik-forward-auth](/docker-swarm/traefik-forward-auth/), so that in order to gain access to the wekan UI at all, authentication must have already occurred.

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our container, so create them in /var/data/wekan:

```bash
mkdir /var/data/wekan
cd /var/data/wekan
mkdir -p database-dump
```

### Prepare environment

Create `/var/data/config/wekan/wekan.env`, and populate with the following variables:

```yaml
MONGO_URL=mongodb://wekandb:27017/wekan
ROOT_URL=https://wekan.example.com
MAIL_URL=smtp://wekan@wekan.example.com:password@mail.example.com:587/
MAIL_FROM="Wekan <wekan@wekan.example.com>"

# Mongodb specific database dump details
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"
```yaml
version: '3'

services:
  wekandb:
    image: mongo:latest
    command: mongod --smallfiles --oplogSize 128
    networks:
      - internal
    volumes:
      - /var/data/runtime/wekan/database:/data/db
      - /var/data/wekan/database-dump:/dump

  wekan:
    image: wekanteam/wekan:latest
    networks:
      - internal
      - traefik_public
    env_file: /var/data/config/wekan/wekan.env
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:wekan.example.com
        - traefik.port=4180

        # traefikv2
        - "traefik.http.routers.wekan.rule=Host(`wekan.example.com`)"
        - "traefik.http.services.wekan.loadbalancer.server.port=4180"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.wekan.middlewares=forward-auth@file"

  db-backup:
    image: mongo:latest
    env_file: /var/data/config/wekan/wekan.env
    volumes:
      - /var/data/wekan/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mongodump -h wekandb --gzip --archive=/dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.mongo.gz
        (ls -t /dump/dump*.mongo.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.mongo.gz)|sort|uniq -u|xargs -r rm --
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.3.0/24
```
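A dump is only half a backup strategy; it's worth knowing how you'd restore one. Below is a minimal sketch, with heavy assumptions: the archive name is illustrative, and the `mongorestore` command is shown commented-out since it must run somewhere that can resolve the `wekandb` service:

```bash
# Illustrative only: restore the newest archive from the dump directory.
# Run from a container that has mongorestore and can resolve "wekandb".
#
#   latest=$(ls -t /dump/dump_*.mongo.gz | head -n 1)
#   mongorestore -h wekandb --gzip --drop --archive="$latest"
#
# The archive names produced by the backup loop embed a timestamp:
fname="dump_$(date +%d-%m-%Y"_"%H_%M_%S).mongo.gz"
echo "$fname"
```

The `--drop` flag (drop existing collections before restoring) is one reasonable choice; omit it if you'd rather merge into existing data.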

--8<-- "reference-networks.md"

## Serving

### Launch Wekan stack

Launch the Wekan stack by running ```docker stack deploy wekan -c <path-to-docker-compose.yml>```

Log into your new instance at `https://**YOUR-FQDN**`, with user "root" and the password you specified in `wekan.env`.

[^1]: If you wanted to expose the Wekan UI directly, you could remove the traefik-forward-auth from the design.

--8<-- "recipe-footer.md"

docs/recipes/wetty.md (new file, 102 lines)

---
title: Use wetty under Docker for SSH in the browser
description: Use weTTY to run a terminal in a browser, baby!
---

# Wetty

[Wetty](https://github.com/krishnasrinivas/wetty) is a responsive, modern terminal, in your web browser. Yes, your browser. When combined with secure authentication and SSL encryption, it becomes a useful tool for quick and easy remote access.

{ loading=lazy }

## Why would you need SSH in a browser window?

Need shell access to a node with no external access? Deploy Wetty behind a [traefik-forward-auth](/docker-swarm/traefik-forward-auth/) with an SSL-terminating reverse proxy ([traefik](/docker-swarm/traefik/)), and suddenly you have the means to SSH to your private host from any web browser (_protected by your [traefik-forward-auth](/docker-swarm/traefik-forward-auth/), of course_).

Here are some other possible use cases:

1. Access to SSH / CLI from an environment where outgoing SSH is locked down, or where an SSH client isn't / can't be installed (_i.e., a corporate network_)
2. Access to long-running processes inside a tmux session (_like [irssi](https://irssi.org/)_)
3. Remote access to a VM / [container running Kali Linux](https://gitlab.com/kalilinux/build-scripts/kali-docker), for penetration testing

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

First, we create a directory to hold wetty's configuration:

```bash
mkdir /var/data/config/wetty
cd /var/data/config/wetty
```

### Prepare environment

Create `/var/data/config/wetty/wetty.env`, and populate with the following variables:

```yaml
# To use WeTTY to SSH to a host besides the (mostly useless) alpine container it comes with
SSHHOST=batcomputer.batcave.com
SSHUSER=batman
```

### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "premix-cta.md"
```yaml
version: "3"

services:
  wetty:
    image: krishnasrinivas/wetty
    env_file: /var/data/config/wetty/wetty.env
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:wetty.example.com
        - traefik.port=3000

        # traefikv2
        - "traefik.http.routers.wetty.rule=Host(`wetty.example.com`)"
        - "traefik.http.services.wetty.loadbalancer.server.port=3000"
        - "traefik.enable=true"
        - "traefik.http.routers.wetty.middlewares=forward-auth@file"
    networks:
      - internal
      - traefik_public

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.45.0/24
```

--8<-- "reference-networks.md"

## Serving

### Launch Wetty stack

Launch the Wetty stack by running ```docker stack deploy wetty -c <path-to-docker-compose.yml>```

Browse to your new browser-cli-terminal at https://**YOUR-FQDN**. Authenticate with your OAuth provider, and then proceed to log in, either to the remote host you specified (_batcomputer.batcave.com, in the example above_), or with user and password "term" to log directly into the Wetty alpine container (_from which you can establish egress SSH_).

[^1]: You could set SSHHOST to the IP of the "docker0" interface on your host, which is normally 172.17.0.1. (_Or run ```/sbin/ip route|awk '/default/ { print $3 }'``` in the container._) This would then provide you the ability to remote-manage your swarm with only web access to Wetty.

[^2]: The inclusion of Wetty was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

--8<-- "recipe-footer.md"