mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-18 20:21:45 +00:00
Add markdown linting support
@@ -1,8 +1,8 @@
|
||||
# Launch Autopirate stack
|
||||
|
||||
!!! warning
|
||||
This is not a complete recipe - it's the conclusion to the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
### Launch Autopirate stack
|
||||
|
||||
Launch the AutoPirate stack by running ```docker stack deploy autopirate -c <path-to-docker-compose.yml>```
|
||||
|
||||
Confirm the container status by running ```docker stack ps autopirate```, and wait for all containers to enter the "Running" state.
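For example (the compose file path below is only an illustration - substitute wherever you actually keep your autopirate stack definition):

```bash
# Deploy the stack (adjust the path to your own stack definition file)
docker stack deploy autopirate -c /var/data/config/autopirate/autopirate.yml

# Watch the tasks until every one of them reports "Running"
docker stack ps autopirate
```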
|
||||
@@ -11,4 +11,4 @@ Log into each of your new tools at its respective HTTPS URL. You'll be prompted
|
||||
|
||||
[^1]: This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
description: Headphones is an automated music downloader for NZB and BitTorrent
|
||||
---
|
||||
# Headphones
|
||||
|
||||
!!! warning
|
||||
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
@@ -51,4 +52,4 @@ headphones_proxy:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
description: Heimdall is a beautiful dashboard for all your web applications
|
||||
---
|
||||
# Heimdall
|
||||
|
||||
!!! warning
|
||||
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@ description: A fully-featured recipe to automate finding, downloading, and organ
|
||||
|
||||
Once the cutting edge of the "internet" (_pre-world-wide-web and Mosaic days_), Usenet is now a murky, geeky alternative to torrents for file-sharing. However, it's **cool** geeky, especially if you're into having a fully automated media platform.
|
||||
|
||||
A good starter for the usenet scene is https://www.reddit.com/r/usenet/. Because it's so damn complicated, a host of automated tools exist to automate the process of finding, downloading, and managing content. The tools included in this recipe are as follows:
|
||||
A good starter for the usenet scene is <https://www.reddit.com/r/usenet/>. Because it's so damn complicated, a host of automated tools exist to automate the process of finding, downloading, and managing content. The tools included in this recipe are as follows:
|
||||
|
||||

|
||||
|
||||
@@ -25,7 +25,7 @@ Tools included in the AutoPirate stack are:
|
||||
* [NZBHydra][nzbhydra] is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as indexer source for tools like [Sonarr][sonarr] or [Radarr][radarr].
|
||||
|
||||
* [Sonarr][sonarr] finds, downloads and manages TV shows
|
||||
|
||||
|
||||
* [Radarr][radarr] finds, downloads and manages movies
|
||||
|
||||
* [Readarr][readarr] finds, downloads, and manages eBooks
|
||||
@@ -44,7 +44,6 @@ Tools included in the AutoPirate stack are:
|
||||
|
||||
Since this recipe is so long, and so many of the tools are optional to the final result (_i.e., if you're not interested in comics, you won't want Mylar_), I've described each individual tool on its own sub-recipe page (_below_), even though most of them are deployed very similarly.
|
||||
|
||||
|
||||
## Ingredients
|
||||
|
||||
!!! summary "Ingredients"
|
||||
@@ -88,9 +87,9 @@ To mitigate the risk associated with public exposure of these tools (_you're on
|
||||
|
||||
This is tedious, but you only have to do it once. Each tool (Sonarr, Radarr, etc.) to be protected by an OAuth proxy requires its own unique configuration. I use GitHub to provide my OAuth, giving each tool a unique logo while I'm at it (make up your own random string for OAUTH2_PROXY_COOKIE_SECRET).
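If you need inspiration for that random string, something like the following works (just a suggestion - any sufficiently random value will do):

```bash
# Generates 32 hex characters, usable as an OAUTH2_PROXY_COOKIE_SECRET value
openssl rand -hex 16
```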
|
||||
|
||||
For each tool, create /var/data/autopirate/<tool>.env, and set the following:
|
||||
For each tool, create `/var/data/autopirate/<tool>.env`, and set the following:
|
||||
|
||||
```
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -98,7 +97,7 @@ PUID=4242
|
||||
PGID=4242
|
||||
```
|
||||
|
||||
Create at least /var/data/autopirate/authenticated-emails.txt, containing at least your own email address with your OAuth provider. If you wanted to grant access to a specific tool to other users, you'd need a unique authenticated-emails-<tool>.txt which included both normal email address as well as any addresses to be granted tool-specific access.
|
||||
Create at least /var/data/autopirate/authenticated-emails.txt, containing at least your own email address with your OAuth provider. If you wanted to grant access to a specific tool to other users, you'd need a unique `authenticated-emails-<tool>.txt` which included your own email address as well as any addresses to be granted tool-specific access.
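As a concrete sketch (the addresses below are placeholders - use whatever identities you actually log into your OAuth provider with):

```bash
# Your own address, used by every proxy instance
echo "you@example.com" > /var/data/autopirate/authenticated-emails.txt

# A hypothetical per-tool list, granting an extra user access to (say) Sonarr
printf "you@example.com\nfriend@example.com\n" > /var/data/autopirate/authenticated-emails-sonarr.txt
```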
|
||||
|
||||
### Setup components
|
||||
|
||||
@@ -106,7 +105,7 @@ Create at least /var/data/autopirate/authenticated-emails.txt, containing at lea
|
||||
|
||||
**Start** with a swarm config file in docker-compose syntax, like this:
|
||||
|
||||
````
|
||||
````yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
@@ -114,7 +113,7 @@ services:
|
||||
|
||||
And **end** with a stanza like this:
|
||||
|
||||
````
|
||||
````yaml
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
@@ -127,4 +126,4 @@ networks:
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -47,4 +47,4 @@ jackett:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -3,6 +3,7 @@ description: LazyLibrarian is a tool to follow authors and grab metadata for all
|
||||
---
|
||||
|
||||
# LazyLibrarian
|
||||
|
||||
!!! warning
|
||||
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
@@ -61,4 +62,4 @@ calibre-server:
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
[^2]: The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
|
||||
[^2]: The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web](/recipes/calibre-web) recipe.
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
description: Lidarr is an automated music downloader for NZB and Torrent
|
||||
---
|
||||
# Lidarr
|
||||
|
||||
!!! warning
|
||||
This is not a complete recipe - it's a component of the [autopirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
|
||||
@@ -49,7 +49,6 @@ nzbget:
|
||||
|
||||
[^tfa]: Since we're relying on [Traefik Forward Auth][tfa] to protect us, we can just disable NZBGet's own authentication, by changing ControlPassword to null in nzbget.conf (i.e. ```ControlPassword=```)
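If you'd rather make that change from the shell, a one-liner along these lines does the job (the config path is an assumption based on this recipe's layout - back the file up first):

```bash
# Blank out the ControlPassword value in nzbget.conf
sed -i 's/^ControlPassword=.*/ControlPassword=/' /var/data/autopirate/nzbget/nzbget.conf
```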
|
||||
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -62,4 +62,4 @@ nzbhydra2:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -60,4 +60,4 @@ radarr:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -4,6 +4,7 @@ description: Readarr is "Sonarr/Radarr for eBooks"
|
||||
|
||||
|
||||
# Readarr
|
||||
|
||||
!!! warning
|
||||
This is not a complete recipe - it's a component of the [AutoPirate](/recipes/autopirate/) "_uber-recipe_", but has been split into its own page to reduce complexity.
|
||||
|
||||
@@ -23,7 +24,6 @@ Features include:
|
||||
* Full integration with [Calibre][calibre-web] (add to library, conversion)
|
||||
* And a beautiful UI!
|
||||
|
||||
|
||||
## Inclusion into AutoPirate
|
||||
|
||||
To include Readarr in your [AutoPirate][autopirate] stack, include something like the following in your autopirate.yml stack definition file:
|
||||
@@ -59,4 +59,4 @@ radarr:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -52,4 +52,4 @@ rtorrent:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -58,4 +58,4 @@ sabnzbd:
|
||||
For example, mine simply reads ```host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd```
|
||||
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -46,4 +46,4 @@ sonarr:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
--8<-- "recipe-autopirate-toc.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -32,9 +32,10 @@ Bitwarden is a free and open source password management solution for individuals
|
||||
|
||||
We'll need to create a directory to bind-mount into our container, so create `/var/data/bitwarden`:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/bitwarden
|
||||
```
|
||||
|
||||
### Setup environment
|
||||
|
||||
Create `/var/data/config/bitwarden/bitwarden.env`, and **leave it empty for now**.
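A minimal sketch of that step, assuming the config directory doesn't exist yet:

```bash
# Create the config directory and an empty env file to populate later
mkdir -p /var/data/config/bitwarden
touch /var/data/config/bitwarden/bitwarden.env
```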
|
||||
@@ -86,7 +87,6 @@ networks:
|
||||
!!! note
|
||||
Note the clever use of two Traefik frontends to expose the notifications hub on port 3012. Thanks @gkoerk!
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Bitwarden stack
|
||||
@@ -97,7 +97,7 @@ Browse to your new instance at https://**YOUR-FQDN**, and create a new user acco
|
||||
|
||||
### Get the apps / extensions
|
||||
|
||||
Once you've created your account, jump over to https://bitwarden.com/#download and download the apps for your mobile and browser, and start adding your logins!
|
||||
Once you've created your account, jump over to <https://bitwarden.com/#download> and download the apps for your mobile and browser, and start adding your logins!
|
||||
|
||||
[^1]: You'll notice we're not using the *official* container images (*[all 6 of them required](https://help.bitwarden.com/article/install-on-premise/#install-bitwarden)!)*, but rather a [more lightweight version ideal for self-hosting](https://hub.docker.com/r/vaultwarden/server). All of the elements are contained within a single container, and SQLite is used for the database backend.
|
||||
[^2]: As mentioned above, readers should refer to the [dani-garcia/vaultwarden wiki](https://github.com/dani-garcia/vaultwarden) for details on customizing the behaviour of Bitwarden.
|
||||
|
||||
@@ -20,7 +20,7 @@ I like to protect my public-facing web UIs with an [oauth_proxy](/reference/oaut
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/bookstack:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/bookstack/database-dump
|
||||
mkdir -p /var/data/runtime/bookstack/db
|
||||
```
|
||||
@@ -29,7 +29,7 @@ mkdir -p /var/data/runtime/bookstack/db
|
||||
|
||||
Create bookstack.env, and populate with the following variables. Set the [oauth_proxy](/reference/oauth_proxy) variables provided by your OAuth provider (if applicable.)
|
||||
|
||||
```
|
||||
```bash
|
||||
# For oauth-proxy (optional)
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
@@ -136,4 +136,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_pro
|
||||
|
||||
[^1]: If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You'd also need to add the traefik_public network to the bookstack container.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -22,7 +22,6 @@ Support for editing eBook metadata and deleting eBooks from Calibre library
|
||||
* Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz)
|
||||
* Upload new books in PDF, epub, fb2 format
|
||||
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
@@ -31,7 +30,7 @@ Support for editing eBook metadata and deleting eBooks from Calibre library
|
||||
|
||||
We'll need a directory to store some config data for the Calibre-Web container, so create /var/data/calibre-web, and ensure the directory is owned by the same user who owns your Calibre data (below)
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/calibre-web
|
||||
chown calibre:calibre /var/data/calibre-web # for example
|
||||
```
|
||||
@@ -42,7 +41,7 @@ Ensure that your Calibre library is accessible to the swarm (_i.e., exists on sh
|
||||
|
||||
We'll use an [oauth-proxy](/reference/oauth_proxy/) to protect the UI from public access, so create calibre-web.env, and populate with the following variables:
|
||||
|
||||
```
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=<make this a random string>
|
||||
@@ -52,7 +51,6 @@ PGID=
|
||||
|
||||
Follow the [instructions](https://github.com/bitly/oauth2_proxy) to setup your oauth provider. You need to setup a unique key/secret for each instance of the proxy you want to run, since in each case the callback URL will differ.
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
@@ -118,4 +116,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be directed to the i
|
||||
[^1]: Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
|
||||
[^2]: A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.
|
||||
[^3]: If you plan to use calibre-web to send `.mobi` files to your Kindle via `@kindle.com` email addresses, be sure to add the sending address to the "[Approved Personal Documents Email List](https://www.amazon.com/hz/mycd/myx#/home/settings/payment)"
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -30,7 +30,7 @@ This presents another problem though - Docker Swarm with Traefik is superb at ma
|
||||
|
||||
We run a single swarmed Nginx instance, which forwards all requests to an upstream, with the target IP of the docker0 interface, on port 9980 (_the port exposed by the CODE container_)
|
||||
|
||||
We attach the necessary labels to the Nginx container to instruct Trafeik to setup a front/backend for collabora.<ourdomain\>. Now incoming requests to **https://collabora.<ourdomain\>** will hit Traefik, be forwarded to nginx (_wherever in the swarm it's running_), and then to port 9980 on the same node that nginx is running on.
|
||||
We attach the necessary labels to the Nginx container to instruct Traefik to set up a front/backend for collabora.<ourdomain\>. Now incoming requests to `https://collabora.<ourdomain\>` will hit Traefik, be forwarded to nginx (_wherever in the swarm it's running_), and then to port 9980 on the same node that nginx is running on.
|
||||
|
||||
What if we're running multiple nodes in our swarm, and nginx ends up on a different node to the one running Collabora via docker-compose? Well, either constrain nginx to the same node as Collabora (_example below_), or just launch an instance of Collabora on _every_ node. It's just a rendering / GUI engine after all; it doesn't hold any persistent data.
|
||||
|
||||
@@ -42,7 +42,7 @@ Here's a (_highly technical_) diagram to illustrate:
|
||||
|
||||
We'll need a directory for holding config to bind-mount into our containers, so create ```/var/data/collabora```, and ```/var/data/config/collabora``` for holding the docker/swarm config
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/collabora/
|
||||
mkdir /var/data/config/collabora/
|
||||
```
|
||||
@@ -59,7 +59,7 @@ Create /var/data/config/collabora/collabora.env, and populate with the following
|
||||
3. Set your server_name to collabora.<yourdomain\>. Escaping periods is unnecessary
|
||||
4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen 🔥
|
||||
|
||||
```
|
||||
```bash
|
||||
username=admin
|
||||
password=ilovemypassword
|
||||
domain=nextcloud\.batcave\.com
|
||||
@@ -93,8 +93,7 @@ services:
|
||||
|
||||
Create ```/var/data/config/collabora/nginx.conf``` as follows, changing the ```server_name``` value to match the environment variable you established above:
|
||||
|
||||
|
||||
```
|
||||
```ini
|
||||
upstream collabora-upstream {
|
||||
# Run collabora under docker-compose, since it needs MKNOD cap, which can't be provided by Docker Swarm.
|
||||
# The IP here is the typical IP of docker0 - change if yours is different.
|
||||
@@ -128,7 +127,7 @@ server {
|
||||
|
||||
# Admin Console websocket
|
||||
location ^~ /lool/adminws {
|
||||
proxy_buffering off;
|
||||
proxy_buffering off;
|
||||
proxy_pass http://collabora-upstream;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "Upgrade";
|
||||
@@ -160,7 +159,7 @@ Create `/var/data/config/collabora/collabora.yml` as follows, changing the traef
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```
|
||||
```yaml
|
||||
version: "3.0"
|
||||
|
||||
services:
|
||||
@@ -195,14 +194,14 @@ Well. This is awkward. There's no documented way to make Collabora work with Doc
|
||||
|
||||
Launching Collabora is (_for now_) a 2-step process. First, we launch Collabora itself, by running:
|
||||
|
||||
```
|
||||
```bash
|
||||
cd /var/data/config/collabora/
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Output looks something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
root@ds1:/var/data/config/collabora# docker-compose up -d
|
||||
WARNING: The Docker Engine you're using is running in swarm mode.
|
||||
|
||||
@@ -230,19 +229,19 @@ Now exec into the container (_from another shell session_), by running ```exec <
|
||||
|
||||
Delete the collabora container by hitting CTRL-C in the docker-compose shell, running ```docker-compose rm```, and then altering this line in docker-compose.yml:
|
||||
|
||||
```
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
|
||||
```bash
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
|
||||
```
|
||||
|
||||
To this:
|
||||
|
||||
```
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
|
||||
```bash
|
||||
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
|
||||
```
|
||||
|
||||
Edit /var/data/collabora/loolwsd.xml, find the **storage.filesystem.wopi** section, and add lines like this to the existing allow rules (_to allow IPv6-enabled hosts to still connect with their IPv4 addresses_):
|
||||
|
||||
```
|
||||
```xml
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
@@ -252,7 +251,7 @@ Edit /var/data/collabora/loolwsd.xml, find the **storage.filesystem.wopi** secti
|
||||
|
||||
Find the **net.post_allow** section, and add a line like this:
|
||||
|
||||
```
|
||||
```xml
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
|
||||
@@ -262,35 +261,35 @@ Find the **net.post_allow** section, and add a line like this:
|
||||
|
||||
Find these 2 lines:
|
||||
|
||||
```
|
||||
```xml
|
||||
<ssl desc="SSL settings">
|
||||
<enable type="bool" default="true">true</enable>
|
||||
```
|
||||
|
||||
And change to:
|
||||
|
||||
```
|
||||
```xml
|
||||
<ssl desc="SSL settings">
|
||||
<enable type="bool" default="true">false</enable>
|
||||
```
|
||||
|
||||
Now re-launch collabora (_with the correct loolwsd.xml_) under docker-compose, by running:
|
||||
|
||||
```
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
Once collabora is up, we launch the swarm stack, by running:
|
||||
|
||||
```
|
||||
```bash
|
||||
docker stack deploy collabora -c /var/data/config/collabora/collabora.yml
|
||||
```
|
||||
|
||||
Visit **https://collabora.<yourdomain\>/l/loleaflet/dist/admin/admin.html** and confirm you can login with the user/password you specified in collabora.env
|
||||
Visit `https://collabora.<yourdomain\>/l/loleaflet/dist/admin/admin.html` and confirm you can login with the user/password you specified in collabora.env
|
||||
|
||||
### Integrate into NextCloud
|
||||
|
||||
In NextCloud, Install the **Collabora Online** app (https://apps.nextcloud.com/apps/richdocuments), and then under **Settings -> Collabora Online**, set your Collabora Online Server to ```https://collabora.<your domain>```
|
||||
In NextCloud, install the **Collabora Online** app (<https://apps.nextcloud.com/apps/richdocuments>), and then under **Settings -> Collabora Online**, set your Collabora Online Server to ```https://collabora.<your domain>```
|
||||
|
||||

|
||||
|
||||
@@ -298,4 +297,4 @@ Now browse your NextCloud files. Click the plus (+) sign to create a new documen
|
||||
|
||||
[^1]: Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -14,10 +14,10 @@ Are you a [l33t h@x0r](https://en.wikipedia.org/wiki/Hackers_(film))? Do you nee
|
||||
|
||||
Here are some examples of fancy hax0r tricks you can do with CyberChef:
|
||||
|
||||
- [Decode a Base64-encoded string][2]
|
||||
- [Decrypt and disassemble shellcode][6]
|
||||
- [Perform AES decryption, extracting the IV from the beginning of the cipher stream][10]
|
||||
- [Automagically detect several layers of nested encoding][12]
|
||||
- [Decode a Base64-encoded string][2]
|
||||
- [Decrypt and disassemble shellcode][6]
|
||||
- [Perform AES decryption, extracting the IV from the beginning of the cipher stream][10]
|
||||
- [Automagically detect several layers of nested encoding][12]
|
||||
|
||||
Here's a [live demo](https://gchq.github.io/CyberChef)!
|
||||
|
||||
@@ -70,4 +70,4 @@ Launch your CyberChef stack by running ```docker stack deploy cyberchef -c <path
|
||||
[2]: https://gchq.github.io/CyberChef/#recipe=From_Base64('A-Za-z0-9%2B/%3D',true)&input=VTI4Z2JHOXVaeUJoYm1RZ2RHaGhibXR6SUdadmNpQmhiR3dnZEdobElHWnBjMmd1
|
||||
[6]: https://gchq.github.io/CyberChef/#recipe=RC4(%7B'option':'UTF8','string':'secret'%7D,'Hex','Hex')Disassemble_x86('64','Full%20x86%20architecture',16,0,true,true)&input=MjFkZGQyNTQwMTYwZWU2NWZlMDc3NzEwM2YyYTM5ZmJlNWJjYjZhYTBhYWJkNDE0ZjkwYzZjYWY1MzEyNzU0YWY3NzRiNzZiM2JiY2QxOTNjYjNkZGZkYmM1YTI2NTMzYTY4NmI1OWI4ZmVkNGQzODBkNDc0NDIwMWFlYzIwNDA1MDcxMzhlMmZlMmIzOTUwNDQ2ZGIzMWQyYmM2MjliZTRkM2YyZWIwMDQzYzI5M2Q3YTVkMjk2MmMwMGZlNmRhMzAwNzJkOGM1YTZiNGZlN2Q4NTlhMDQwZWVhZjI5OTczMzYzMDJmNWEwZWMxOQ
|
||||
[10]: https://gchq.github.io/CyberChef/#recipe=Register('(.%7B32%7D)',true,false)Drop_bytes(0,32,false)AES_Decrypt(%7B'option':'Hex','string':'1748e7179bd56570d51fa4ba287cc3e5'%7D,%7B'option':'Hex','string':'$R0'%7D,'CTR','Hex','Raw',%7B'option':'Hex','string':''%7D)&input=NTFlMjAxZDQ2MzY5OGVmNWY3MTdmNzFmNWI0NzEyYWYyMGJlNjc0YjNiZmY1M2QzODU0NjM5NmVlNjFkYWFjNDkwOGUzMTljYTNmY2Y3MDg5YmZiNmIzOGVhOTllNzgxZDI2ZTU3N2JhOWRkNmYzMTFhMzk0MjBiODk3OGU5MzAxNGIwNDJkNDQ3MjZjYWVkZjU0MzZlYWY2NTI0MjljMGRmOTRiNTIxNjc2YzdjMmNlODEyMDk3YzI3NzI3M2M3YzcyY2Q4OWFlYzhkOWZiNGEyNzU4NmNjZjZhYTBhZWUyMjRjMzRiYTNiZmRmN2FlYjFkZGQ0Nzc2MjJiOTFlNzJjOWU3MDlhYjYwZjhkYWY3MzFlYzBjYzg1Y2UwZjc0NmZmMTU1NGE1YTNlYzI5MWNhNDBmOWU2MjlhODcyNTkyZDk4OGZkZDgzNDUzNGFiYTc5YzFhZDE2NzY3NjlhN2MwMTBiZjA0NzM5ZWNkYjY1ZDk1MzAyMzcxZDYyOWQ5ZTM3ZTdiNGEzNjFkYTQ2OGYxZWQ1MzU4OTIyZDJlYTc1MmRkMTFjMzY2ZjMwMTdiMTRhYTAxMWQyYWYwM2M0NGY5NTU3OTA5OGExNWUzY2Y5YjQ0ODZmOGZmZTljMjM5ZjM0ZGU3MTUxZjZjYTY1MDBmZTRiODUwYzNmMWMwMmU4MDFjYWYzYTI0NDY0NjE0ZTQyODAxNjE1YjhmZmFhMDdhYzgyNTE0OTNmZmRhN2RlNWRkZjMzNjg4ODBjMmI5NWIwMzBmNDFmOGYxNTA2NmFkZDA3MWE2NmNmNjBlNWY0NmYzYTIzMGQzOTdiNjUyOTYzYTIxYTUzZg
|
||||
[12]: https://gchq.github.io/CyberChef/#recipe=Magic(3,false,false)&input=V1VhZ3dzaWFlNm1QOGdOdENDTFVGcENwQ0IyNlJtQkRvREQ4UGFjZEFtekF6QlZqa0syUXN0RlhhS2hwQzZpVVM3UkhxWHJKdEZpc29SU2dvSjR3aGptMWFybTg2NHFhTnE0UmNmVW1MSHJjc0FhWmM1VFhDWWlmTmRnUzgzZ0RlZWpHWDQ2Z2FpTXl1QlY2RXNrSHQxc2NnSjg4eDJ0TlNvdFFEd2JHWTFtbUNvYjJBUkdGdkNLWU5xaU45aXBNcTFaVTFtZ2tkYk51R2NiNzZhUnRZV2hDR1VjOGc5M1VKdWRoYjhodHNoZVpud1RwZ3FoeDgzU1ZKU1pYTVhVakpUMnptcEM3dVhXdHVtcW9rYmRTaTg4WXRrV0RBYzFUb291aDJvSDRENGRkbU5LSldVRHBNd21uZ1VtSzE0eHdtb21jY1BRRTloTTE3MkFQblNxd3hkS1ExNzJSa2NBc3lzbm1qNWdHdFJtVk5OaDJzMzU5d3I2bVMyUVJQ
|
||||
[12]: https://gchq.github.io/CyberChef/#recipe=Magic(3,false,false)&input=V1VhZ3dzaWFlNm1QOGdOdENDTFVGcENwQ0IyNlJtQkRvREQ4UGFjZEFtekF6QlZqa0syUXN0RlhhS2hwQzZpVVM3UkhxWHJKdEZpc29SU2dvSjR3aGptMWFybTg2NHFhTnE0UmNmVW1MSHJjc0FhWmM1VFhDWWlmTmRnUzgzZ0RlZWpHWDQ2Z2FpTXl1QlY2RXNrSHQxc2NnSjg4eDJ0TlNvdFFEd2JHWTFtbUNvYjJBUkdGdkNLWU5xaU45aXBNcTFaVTFtZ2tkYk51R2NiNzZhUnRZV2hDR1VjOGc5M1VKdWRoYjhodHNoZVpud1RwZ3FoeDgzU1ZKU1pYTVhVakpUMnptcEM3dVhXdHVtcW9rYmRTaTg4WXRrV0RBYzFUb291aDJvSDRENGRkbU5LSldVRHBNd21uZ1VtSzE0eHdtb21jY1BRRTloTTE3MkFQblNxd3hkS1ExNzJSa2NBc3lzbm1qNWdHdFJtVk5OaDJzMzU5d3I2bVMyUVJQ
|
||||
|
||||
@@ -8,7 +8,6 @@ Always have a backup plan[^1]
|
||||
|
||||

|
||||
|
||||
|
||||
[Duplicati](https://www.duplicati.com/) is free, open-source backup software which stores encrypted backups online, for Windows, macOS and Linux (our favorite, yay!).
|
||||
|
||||
Similar to the other backup options in the Cookbook, we can use Duplicati to backup all our data-at-rest to a wide variety of locations, including, but not limited to:
|
||||
@@ -23,7 +22,7 @@ Similar to the other backup options in the Cookbook, we can use Duplicati to bac
|
||||
## Ingredients
|
||||
|
||||
!!! summary "Ingredients"
|
||||
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
|
||||
* [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
|
||||
* [X] [Traefik](/ha-docker-swarm/traefik) and [Traefik-Forward-Auth](/ha-docker-swarm/traefik-forward-auth) configured per design
|
||||
* [X] Credentials for one of the Duplicati's supported upload destinations
|
||||
|
||||
@@ -33,7 +32,7 @@ Similar to the other backup options in the Cookbook, we can use Duplicati to bac
|
||||
|
||||
We'll need a folder to store a docker-compose configuration file and an associated environment file. If you're following my filesystem layout, create `/var/data/config/duplicati` (*for the config*), and `/var/data/duplicati` (*for the metadata*) as follows:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/config/duplicati
|
||||
mkdir /var/data/duplicati
|
||||
cd /var/data/config/duplicati
|
||||
@@ -44,7 +43,8 @@ cd /var/data/config/duplicati
|
||||
1. Generate a random passphrase to use to encrypt your data. **Save this somewhere safe**, without it you won't be able to restore!
|
||||
2. Seriously, **save**. **it**. **somewhere**. **safe**.
|
||||
3. Create `duplicati.env`, and populate with the following variables (_replace "Europe/London" with your appropriate time zone from [this list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)_)
|
||||
```
|
||||
|
||||
```bash
|
||||
PUID=0
|
||||
PGID=0
|
||||
TZ=Europe/London
|
||||
@@ -54,7 +54,6 @@ CLI_ARGS= #optional
|
||||
!!! question "Excuse me! Why are we running Duplicati as root?"
|
||||
That's a great question! We're running Duplicati as the `root` user of the host system because we need Duplicati to be able to read files of all the other services no matter which user that service is running as. After all, Duplicati can't backup your exciting stuff if it can't read the files.
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
@@ -114,7 +113,8 @@ networks:
|
||||
Launch the Duplicati stack by running ```docker stack deploy duplicati -c <path-to-docker-compose.yml>```
|
||||
|
||||
### Create (and verify!) Your First Backup
|
||||
Once we authenticate through the traefik-forward-auth provider, we can start configuring your backup jobs via the Duplicati UI. All backup and restore job configuration is done through the UI. Be sure to read through the documentation on [Creating a new backup job](https://duplicati.readthedocs.io/en/latest/03-using-the-graphical-user-interface/#creating-a-new-backup-job) and [Restoring files from a backup](https://duplicati.readthedocs.io/en/latest/03-using-the-graphical-user-interface/#restoring-files-from-a-backup) for information on how to configure those jobs.
|
||||
|
||||
Once we authenticate through the traefik-forward-auth provider, we can start configuring your backup jobs via the Duplicati UI. All backup and restore job configuration is done through the UI. Be sure to read through the documentation on [Creating a new backup job](https://duplicati.readthedocs.io/en/latest/03-using-the-graphical-user-interface/#creating-a-new-backup-job) and [Restoring files from a backup](https://duplicati.readthedocs.io/en/latest/03-using-the-graphical-user-interface/#restoring-files-from-a-backup) for information on how to configure those jobs.
|
||||
|
||||
!!! warning
|
||||
An untested backup is not really a backup at all. Being ***sure*** you can successfully restore files from your backup now could save you lots of heartache later after "something bad" happens.
|
||||
|
||||
@@ -1,4 +1,6 @@
|
||||
hero: Duplicity - A boring recipe to backup your exciting stuff. Boring is good.
|
||||
---
|
||||
description: A boring recipe to backup your exciting stuff. Boring is good.
|
||||
---
|
||||
|
||||
# Duplicity
|
||||
|
||||
@@ -54,7 +56,7 @@ I didn't already have an archival/backup provider, so I chose Google Cloud "clou
|
||||
2. Seriously, **save**. **it**. **somewhere**. **safe**.
|
||||
3. Create duplicity.env, and populate with the following variables
|
||||
|
||||
```
|
||||
```bash
|
||||
SRC=/var/data/
|
||||
DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories
|
||||
TMPDIR=/tmp
|
||||
@@ -72,7 +74,7 @@ See the [data layout reference](/reference/data_layout/) for an explanation of t
|
||||
|
||||
Before we launch the automated daily backups, let's run a test backup, as follows:
|
||||
|
||||
```
|
||||
```bash
|
||||
docker run --env-file duplicity.env -it --rm -v \
|
||||
/var/data:/var/data:ro -v /var/data/duplicity/tmp:/tmp -v \
|
||||
/var/data/duplicity/archive:/archive tecnativa/duplicity \
|
||||
@@ -101,7 +103,7 @@ duplicity list-current-files \
|
||||
|
||||
Once you've identified a file to test-restore, use a variation of the following to restore it to /tmp (_from the perspective of the container - it's actually /var/data/duplicity/tmp_)
|
||||
|
||||
```
|
||||
```bash
|
||||
docker run --env-file duplicity.env -it --rm \
|
||||
-v /var/data:/var/data:ro \
|
||||
-v /var/data/duplicity/tmp:/tmp \
|
||||
@@ -119,7 +121,7 @@ Now that we have confidence in our backup/restore process, let's automate it by
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```
|
||||
```yaml
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
@@ -156,4 +158,4 @@ Nothing will happen. Very boring. But when the cron script fires (daily), duplic
|
||||
[^1]: Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
|
||||
[^2]: The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to `duplicity.env`.
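For illustration, those extra variables would look something like this (the hostnames and addresses are placeholders, and this only helps if your SMTP server relays without authentication):

```bash
# Hypothetical values - adjust to your own mail setup
SMTP_HOST=smtp.example.com
SMTP_PORT=25
EMAIL_FROM=duplicity@example.com
EMAIL_TO=you@example.com
```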
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -6,6 +6,7 @@ description: Real heroes backup their shizz!
|
||||
|
||||
Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.
|
||||
|
||||
<!-- markdownlint-disable MD033 -->
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that [more sophisticated backup solutions](/recipes/duplicity/) would produce for you.
|
||||
@@ -22,7 +23,7 @@ ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It's
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/elkarbackup:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/elkarbackup/{backups,uploads,sshkeys,database-dump}
|
||||
mkdir -p /var/data/runtime/elkarbackup/db
|
||||
mkdir -p /var/data/config/elkarbackup
|
||||
@@ -31,7 +32,8 @@ mkdir -p /var/data/config/elkarbackup
|
||||
### Prepare environment
|
||||
|
||||
Create /var/data/config/elkarbackup/elkarbackup.env, and populate with the following variables
|
||||
```
|
||||
|
||||
```bash
|
||||
SYMFONY__DATABASE__PASSWORD=password
|
||||
EB_CRON=enabled
|
||||
TZ='Etc/UTC'
|
||||
@@ -60,7 +62,7 @@ Create ```/var/data/config/elkarbackup/elkarbackup-db-backup.env```, and populat
|
||||
|
||||
No, me either :shrug:
|
||||
|
||||
```
|
||||
```bash
|
||||
# For database backup (keep 7 days daily backups)
|
||||
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
|
||||
MYSQL_USER=root
|
||||
@@ -175,7 +177,7 @@ From the WebUI, you can download a script intended to be executed on a remote ho
|
||||
|
||||
Here's a variation to the standard script, which I've employed:
|
||||
|
||||
```
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
REPOSITORY=/var/data/elkarbackup/backups
|
||||
@@ -229,4 +231,4 @@ This takes you to a list of backup names and file paths. You can choose to downl
|
||||
[^1]: If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service. You'd also need to add the traefik_public network to the app service.
|
||||
[^2]: The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -18,7 +18,7 @@ I've started experimenting with Emby as an alternative to Plex, because of the a
|
||||
|
||||
We'll need a location to store Emby's library data, config files, logs and temporary transcoding space, so create /var/data/emby, and make sure it's owned by the user and group who also own your media data.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/emby
|
||||
```
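For example, if your media files are owned by a (hypothetical) `media` user and group, you might then run:

```bash
# Example only - substitute the user/group that actually own your media files
chown media:media /var/data/emby
```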
|
||||
|
||||
@@ -26,7 +26,7 @@ mkdir /var/data/emby
|
||||
|
||||
Create emby.env, and populate with PUID/GUID for the user who owns the /var/data/emby directory (_above_) and your actual media content (_in this example, the media content is at **/srv/data**_)
|
||||
|
||||
```
|
||||
```bash
|
||||
PUID=
|
||||
GUID=
|
||||
```
|
||||
@@ -82,4 +82,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas
|
||||
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
|
||||
[^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -20,7 +20,7 @@ You will be then able to interact with other people regardless of which pod they
|
||||
|
||||
First we create a directory to hold our funky data:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/funkwhale
|
||||
```
|
||||
|
||||
@@ -95,16 +95,16 @@ networks:
|
||||
|
||||
### Unleash the Whale! 🐳
|
||||
|
||||
Launch the Funkwhale stack by running `docker stack deploy funkwhale -c <path -to-docker-compose.yml>`, and then watch the container logs using `docker stack logs funkywhale_funkywhale<tab-completion-helps>`.
|
||||
Launch the Funkwhale stack by running `docker stack deploy funkwhale -c <path-to-docker-compose.yml>`, and then watch the container logs using `docker service logs funkwhale_funkwhale<tab-completion-helps>`.
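As a sketch (the compose file path is only an example, and the exact service name depends on your stack definition - tab-completion helps):

```bash
# Deploy the stack, then follow the logs until the startup banner appears
docker stack deploy funkwhale -c /var/data/config/funkwhale/funkwhale.yml
docker service logs -f funkwhale_funkwhale
```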
|
||||
|
||||
You'll know the container is ready when you see an ascii version of the Funkwhale logo, followed by:
|
||||
|
||||
```
|
||||
```bash
|
||||
[2021-01-27 22:52:24 +0000] [411] [INFO] ASGI 'lifespan' protocol appears unsupported.
|
||||
[2021-01-27 22:52:24 +0000] [411] [INFO] Application startup complete.
|
||||
```
|
||||
|
||||
The first time we run Funkwhale, we need to setup the superuser account.
|
||||
The first time we run Funkwhale, we need to setup the superuser account.
|
||||
|
||||
!!! tip
|
||||
If you're running a multi-node swarm, this next step needs to be executed on the node which is currently running Funkwhale. Identify this with `docker stack ps funkwhale`
|
||||
@@ -132,11 +132,10 @@ Superuser created successfully.
|
||||
root@swarm:~#
|
||||
```
|
||||
|
||||
|
||||
[^1]: Since the whole purpose of media sharing is to share **publicly**, and Funkwhale includes robust user authentication, this recipe doesn't employ traefik-based authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
|
||||
[^2]: These instructions are an opinionated simplication of the official instructions found at https://docs.funkwhale.audio/installation/docker.html
|
||||
[^2]: These instructions are an opinionated simplification of the official instructions found at <https://docs.funkwhale.audio/installation/docker.html>
|
||||
[^3]: It should be noted that if you import your existing media, the files will be **copied** into Funkwhale's data folder. There doesn't seem to be a way to point Funkwhale at an existing collection and have it just play it from the filesystem. To this end, be prepared for double disk space usage if you plan to import your entire music collection!
|
||||
[^5]: No consideration is given at this point to backing up the Funkwhale data. Post a comment below if you'd like to see a backup container added!
|
||||
[^4]: If the funky whale is "playing your song", note that the funkwhale project is [looking for maintainers](https://blog.funkwhale.audio/~/Announcements/funkwhale-is-looking-for-new-maintainers/).
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -6,7 +6,7 @@ description: Ghost - Beautiful online publicatio (who you gonna call?)
|
||||
|
||||
[Ghost](https://ghost.org) is "a fully open source, hackable platform for building and running a modern online publication."
|
||||
|
||||

|
||||

|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
@@ -16,7 +16,7 @@ description: Ghost - Beautiful online publicatio (who you gonna call?)
|
||||
|
||||
Create the location for the bind-mount of the application data, so that it's persistent:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/ghost
|
||||
```
|
||||
|
||||
@@ -48,7 +48,6 @@ networks:
|
||||
external: true
|
||||
```
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Ghost stack
|
||||
@@ -59,4 +58,4 @@ Create your first administrative account at https://**YOUR-FQDN**/admin/
|
||||
|
||||
[^1]: A default install using the SQLite database takes 548k of space
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -24,7 +24,7 @@ Existing:
|
||||
|
||||
We'll need several directories to bind-mount into our runner containers, so create them in `/var/data/gitlab`:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/gitlab/runners/{1,2}
|
||||
```
|
||||
|
||||
@@ -66,7 +66,7 @@ From your GitLab UI, you can retrieve a "token" necessary to register a new runn
|
||||
|
||||
Sample runner config.toml:
|
||||
|
||||
```
|
||||
```ini
|
||||
concurrent = 1
|
||||
check_interval = 0
|
||||
|
||||
@@ -94,5 +94,4 @@ Launch the GitLab Runner stack by running `docker stack deploy gitlab-runner -c
|
||||
[^1]: You'll note that I setup 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
|
||||
[^2]: Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.
|
||||
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
hero: Gitlab - A recipe for a self-hosted GitHub alternative
|
||||
|
||||
# GitLab
|
||||
|
||||
GitLab is a self-hosted [alternative to GitHub](https://about.gitlab.com/comparison/). The most common use case is (a set of) developers with the desire for the rich feature-set of GitHub, but with unlimited private repositories.
|
||||
@@ -14,7 +12,7 @@ Docker does maintain an [official "Omnibus" container](https://docs.gitlab.com/o
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/gitlab:
|
||||
|
||||
```
|
||||
```bash
|
||||
cd /var/data
|
||||
mkdir gitlab
|
||||
cd gitlab
|
||||
@@ -27,8 +25,9 @@ You'll need to know the following:
|
||||
|
||||
1. Choose a password for postgresql, you'll need it for DB_PASS in the compose file (below)
|
||||
2. Generate 3 passwords using ```pwgen -Bsv1 64```. You'll use these for the XXX_KEY_BASE environment variables below
|
||||
2. Create gitlab.env, and populate with **at least** the following variables (the full set is available at https://github.com/sameersbn/docker-gitlab#available-configuration-parameters):
|
||||
```
|
||||
3. Create gitlab.env, and populate with **at least** the following variables (the full set is available at <https://github.com/sameersbn/docker-gitlab#available-configuration-parameters>):
|
||||
|
||||
```bash
|
||||
DB_USER=gitlab
|
||||
DB_PASS=gitlabdbpass
|
||||
DB_NAME=gitlabhq_production
|
||||
@@ -115,8 +114,8 @@ networks:
|
||||
|
||||
Launch the GitLab stack by running ```docker stack deploy gitlab -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://[your FQDN], with user "root" and the password you specified in gitlab.env.
|
||||
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in gitlab.env.
|
||||
|
||||
[^1]: I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -16,7 +16,6 @@ Gollum pages:
|
||||
* Can be edited with your favourite system editor or IDE (_changes will be visible after committing_) or with the built-in web interface.
|
||||
* Can be displayed in all versions (_commits_).
|
||||
|
||||
|
||||

|
||||
|
||||
As you'll note in the (_real world_) screenshot above, my requirements for a personal wiki are:
|
||||
@@ -40,7 +39,7 @@ Gollum meets all these requirements, and as an added bonus, is extremely fast an
|
||||
|
||||
We'll need an empty git repository in /var/data/gollum for our data:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/gollum
|
||||
cd /var/data/gollum
|
||||
git init
|
||||
@@ -51,7 +50,7 @@ git init
|
||||
1. Choose an oauth provider, and obtain a client ID and secret
|
||||
2. Create gollum.env, and populate with the following variables (_you can make the cookie secret whatever you like_)
|
||||
|
||||
```
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -122,4 +121,4 @@ Authenticate against your OAuth provider, and then start editing your wiki!
|
||||
|
||||
[^1]: In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -18,7 +18,7 @@ This recipie combines the [extensibility](https://home-assistant.io/components/)
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/homeassistant:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/homeassistant
|
||||
cd /var/data/homeassistant
|
||||
mkdir -p {homeassistant,grafana,influxdb-backup}
|
||||
@@ -26,15 +26,15 @@ mkdir -p {homeassistant,grafana,influxdb-backup}
|
||||
|
||||
Now create a directory for the influxdb realtime data:
|
||||
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/runtime/homeassistant/influxdb
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create /var/data/config/homeassistant/grafana.env, and populate with the following - this is to enable grafana to work with oauth2_proxy without requiring an additional level of authentication:
|
||||
```
|
||||
|
||||
```bash
|
||||
GF_AUTH_BASIC_ENABLED=false
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
@@ -126,8 +126,8 @@ networks:
|
||||
|
||||
Launch the Home Assistant stack by running ```docker stack deploy homeassistant -c <path-to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, the password you created in configuration.yml as "frontend - api_key". Then setup a bunch of sensors, and log into https://grafana.**YOUR FQDN** and create some beautiful graphs :)
|
||||
Log into your new instance at https://**YOUR-FQDN**, using the password you created in configuration.yml as "frontend - api_key". Then setup a bunch of sensors, and log into https://grafana.**YOUR-FQDN** and create some beautiful graphs :)
|
||||
|
||||
[^1]: I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy), but Home Assistant is incompatible with the websockets implementation used by oauth2_proxy. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -8,7 +8,7 @@ One of the most useful features of Home Assistant is location awareness. I don't
|
||||
## Ingredients
|
||||
|
||||
1. [HomeAssistant](/recipes/homeassistant/) per recipe
|
||||
2. iBeacon(s) - This recipe is for https://s.click.aliexpress.com/e/bzyLCnAp
|
||||
2. iBeacon(s) - This recipe is for <https://s.click.aliexpress.com/e/bzyLCnAp>
|
||||
3. [LightBlue Explorer](https://itunes.apple.com/nz/app/lightblue-explorer/id557428110?mt=8)
|
||||
|
||||
## Preparation
|
||||
@@ -17,10 +17,10 @@ One of the most useful features of Home Assistant is location awareness. I don't
|
||||
|
||||
The iBeacons come with no UUID. We use the LightBlue Explorer app to pair with them (_code is "123456"_), and assign our own UUID.
|
||||
|
||||
Generate your own UUID, or get a random one at https://www.uuidgenerator.net/
|
||||
Generate your own UUID, or get a random one at <https://www.uuidgenerator.net/>
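Alternatively, most Linux and macOS systems ship a `uuidgen` utility that will do the same job locally:

```bash
# Prints a random UUID - yours will obviously differ
uuidgen
```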
|
||||
|
||||
Plug in your iBeacon, launch LightBlue Explorer, and find your iBeacon. The first time you attempt to interrogate it, you'll be prompted to pair. Although it's not recorded anywhere in the documentation (_grr!_), the pairing code is **123456**
|
||||
|
||||
Having paired, you'll be able to see the vital statistics of your iBeacon.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -6,6 +6,7 @@ description: A self-hosted, hackable version of IFFTT / Zapier
|
||||
|
||||
Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn's Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server.
|
||||
|
||||
<!-- markdownlint-disable MD033 -->
|
||||
<iframe src="https://player.vimeo.com/video/61976251" width="640" height="433" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
@@ -16,7 +17,7 @@ Huginn is a system for building agents that perform automated tasks for you onli
|
||||
|
||||
Create the location for the bind-mount of the database, so that it's persistent:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/huginn/database
|
||||
```
|
||||
|
||||
@@ -24,7 +25,7 @@ mkdir -p /var/data/huginn/database
|
||||
|
||||
Strictly speaking, you don't **have** to integrate Huginn with email. However, since we created our own mailserver stack earlier, it's worth using it to enable emails within Huginn.
|
||||
|
||||
```
|
||||
```bash
|
||||
cd /var/data/docker-mailserver/
|
||||
./setup.sh email add huginn@huginn.example.com my-password-here
|
||||
# Setup MX and DKIM if they don't already exist:
|
||||
@@ -36,7 +37,7 @@ cat config/opendkim/keys/huginn.example.com/mail.txt
|
||||
|
||||
Create /var/data/config/huginn/huginn.env, and populate with the following variables. Set the "INVITATION_CODE" variable if you want to require users to enter a code to sign up (_this protects the UI from abuse_). The full list of Huginn environment variables is available [here](https://github.com/huginn/huginn/blob/master/.env.example).
|
||||
|
||||
```
|
||||
```bash
|
||||
# For huginn/huginn - essential
|
||||
SMTP_DOMAIN=your-domain-here.com
|
||||
SMTP_USER_NAME=you@gmail.com
|
||||
|
||||
@@ -20,7 +20,7 @@ Great power, right? A client (_yes, you can [hire](https://www.funkypenguin.co.n
|
||||
|
||||
We need a data location to store InstaPy's config, as well as its log files. Create /var/data/instapy per below
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/instapy/logs
|
||||
```
|
||||
|
||||
@@ -65,18 +65,18 @@ services:
|
||||
|
||||
### Command your bot
|
||||
|
||||
Create a variation of https://github.com/timgrossmann/InstaPy/blob/master/docker_quickstart.py at /var/data/instapy/instapy.py (the file we bind-mounted in the swarm config above)
|
||||
Create a variation of <https://github.com/timgrossmann/InstaPy/blob/master/docker_quickstart.py> at /var/data/instapy/instapy.py (the file we bind-mounted in the swarm config above)
|
||||
|
||||
Change at least the following:
|
||||
|
||||
````
|
||||
```bash
|
||||
insta_username = ''
|
||||
insta_password = ''
|
||||
````
|
||||
```
|
||||
|
||||
Here's an example of my config, set to like a single penguin-pic per run:
|
||||
|
||||
```
|
||||
```python
|
||||
insta_username = 'funkypenguin'
|
||||
insta_password = 'followmemypersonalbrandisawesome'
|
||||
|
||||
@@ -117,6 +117,7 @@ Launch the bot by running ```docker stack deploy instapy -c <path -to-docker-com
|
||||
|
||||
While you're waiting for Docker to pull down the images, educate yourself on the risk of a robotic uprising:
|
||||
|
||||
<!-- markdownlint-disable MD033 -->
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/B1BdQcJ2ZYY" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
|
||||
|
||||
After swarm deploys, you won't see much, but you can monitor what InstaPy is doing, by running ```docker service logs instapy_web```.
|
||||
@@ -125,4 +126,4 @@ You can **also** watch the bot at work by VNCing to your docker swarm, password
|
||||
|
||||
[^1]: Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,9 +1,9 @@
|
||||
# IPFS
|
||||
|
||||
!!! danger "This recipe is a work in progress"
|
||||
This recipe is **incomplete**, and remains a work in progress.
|
||||
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
|
||||
|
||||
# IPFS
|
||||
|
||||
The intention of this recipe is to provide a local IPFS cluster, for the purpose of providing persistent storage for the various components of the recipes.
|
||||
|
||||

|
||||
@@ -22,7 +22,7 @@ Since IPFS may _replace_ ceph or glusterfs as a shared-storage provider for the
|
||||
|
||||
On _each_ node, therefore run the following, to create the persistent data storage for ipfs and ipfs-cluster:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p {/var/ipfs/daemon,/var/ipfs/cluster}
|
||||
```
|
||||
|
||||
@@ -32,7 +32,7 @@ ipfs-cluster nodes require a common secret, a 32-bit hex-encoded string, in orde
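If you need a quick way to generate such a secret, something like the following works (just a suggestion - any random hex string of the length your ipfs-cluster release expects will do; current releases use a 32-byte, i.e. 64-character, value):

```bash
# 32 random bytes, hex-encoded (64 characters)
openssl rand -hex 32
```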
|
||||
|
||||
Now on _each_ node, create ```/var/ipfs/cluster:/data/ipfs-cluster```, including both the secret, *and* the IP of docker0 interface on your hosts (_on my hosts, this is always 172.17.0.1_). We do this (_the trick with docker0)_ to allow ipfs-cluster to talk to the local ipfs daemon, per-node:
|
||||
|
||||
```
|
||||
```bash
|
||||
SECRET=<string generated above>
|
||||
|
||||
# Use docker0 to access daemon
|
||||
@@ -72,10 +72,9 @@ services:
|
||||
|
||||
Launch all nodes independently with ```docker-compose -f ipfs.yml up```. At this point, the nodes are each running independently, unaware of each other. But we do this to ensure that service.json is populated on each node, using the IPFS_API environment variable we specified in ipfs.env. (_it's only used on the first run_)
|
||||
|
||||
|
||||
The output looks something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
cluster_1 | 11:03:33.272 INFO restapi: REST API (libp2p-http): ENABLED. Listening on:
|
||||
cluster_1 | /ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
|
||||
cluster_1 | /ip4/172.18.0.3/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
|
||||
@@ -101,7 +100,7 @@ Pick a node to be your primary node, and CTRL-C the others.
|
||||
|
||||
Look for a line like this in the output of the primary node:
|
||||
|
||||
```
|
||||
```bash
|
||||
/ip4/127.0.0.1/tcp/9096/ipfs/QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx
|
||||
```
|
||||
|
||||
@@ -111,8 +110,7 @@ You'll note several addresses listed, all ending in the same hash. None of these
|
||||
|
||||
On each of the non-primary nodes, run the following, replacing **IP-OF-PRIMARY-NODE** with the actual IP of the primary node, and **HASHY-MC-HASHFACE** with your own hash from the primary node's output above.
|
||||
|
||||
|
||||
```
|
||||
```bash
|
||||
docker run --rm -it -v /var/ipfs/cluster:/data/ipfs-cluster \
|
||||
--entrypoint ipfs-cluster-service ipfs/ipfs-cluster \
|
||||
daemon --bootstrap /ip4/IP-OF-PRIMARY-NODE/tcp/9096/ipfs/HASHY-MC-HASHFACE
|
||||
@@ -120,7 +118,7 @@ docker run --rm -it -v /var/ipfs/cluster:/data/ipfs-cluster \
|
||||
|
||||
You'll see output like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
10:55:26.121 INFO service: Bootstrapping to /ip4/192.168.31.13/tcp/9096/ipfs/QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT daemon.go:153
|
||||
10:55:26.121 INFO ipfshttp: IPFS Proxy: /ip4/0.0.0.0/tcp/9095 -> /ip4/172.17.0.1/tcp/5001 ipfshttp.go:221
|
||||
10:55:26.304 ERROR ipfshttp: error posting to IPFS: Post http://172.17.0.1:5001/api/v0/id: dial tcp 172.17.0.1:5001: connect: connection refused ipfshttp.go:708
|
||||
@@ -144,7 +142,7 @@ docker-exec into one of the cluster containers (_it doesn't matter which one_),
|
||||
|
||||
You should see output from each node member, indicating it can see its other peers. Here's my output from a 3-node cluster:
|
||||
|
||||
```
|
||||
```bash
|
||||
/ # ipfs-cluster-ctl peers ls
|
||||
QmPrmQvW5knXLBE94jzpxvdtLSwXZeFE5DSY3FuMxypDsT | ef68b1437c56 | Sees 2 other peers
|
||||
> Addresses:
|
||||
@@ -178,4 +176,4 @@ QmbqPBLJNXWpbXEX6bVhYLo2ruEBE7mh1tfT9s6VXUzYYx | 28c13ec68f33 | Sees 2 other pee
|
||||
|
||||
[^1]: I'm still trying to work out how to _mount_ the ipfs data in my filesystem in a usable way. Which is why this is still a WIP :)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -18,13 +18,13 @@ If it looks very similar as Emby, is because it started as a fork of it, but it
|
||||
|
||||
We'll need a location to store Jellyfin's library data, config files, logs and temporary transcoding space, so create ``/var/data/jellyfin``, and make sure it's owned by the user and group who also own your media data.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/jellyfin
|
||||
```
|
||||
|
||||
Also, if we want to avoid the cache being included in the backup, we should create a location to map it under the runtime folder. This too has to be owned by the user and group who own your media data.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/runtime/jellyfin
|
||||
```
|
||||
|
||||
@@ -32,7 +32,7 @@ mkdir /var/data/runtime/jellyfin
|
||||
|
||||
Create jellyfin.env, and populate with PUID/GUID for the user who owns the /var/data/jellyfin directory (_above_) and your actual media content (_in this example, the media content is at **/srv/data**_):
|
||||
|
||||
```
|
||||
```bash
|
||||
PUID=
|
||||
GUID=
|
||||
```
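
If you're not sure which ID values to use, you can read them straight off an existing user, or off the media directory itself; a quick sketch (the username and the `/srv/data` path below are just the examples used in this recipe):

```bash
# Show the numeric UID/GID owning the media directory
stat -c 'UID=%u GID=%g (%U:%G)' /srv/data

# Or look up a specific user's IDs (substitute your own username)
id jellyfin
```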
|
||||
@@ -91,4 +91,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas
|
||||
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
|
||||
[^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/ha-docker-swarm/traefik/) is doing the SSL termination for us already.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -19,7 +19,7 @@ Features include:
|
||||
* Free, open source and self-hosted
|
||||
* Super simple installation
|
||||
|
||||

|
||||

|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
@@ -29,7 +29,7 @@ Features include:
|
||||
|
||||
Create the location for the bind-mount of the application data, so that it's persistent:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/kanboard
|
||||
```
|
||||
|
||||
@@ -37,7 +37,7 @@ mkdir -p /var/data/kanboard
|
||||
|
||||
If you intend to use an [OAuth proxy](/reference/oauth_proxy/) to further secure public access to your instance, create a ```kanboard.env``` file to hold your environment variables, and populate with your OAuth provider's details (_the cookie secret you can just make up_):
|
||||
|
||||
```
|
||||
```bash
|
||||
# If you decide to protect kanboard with an oauth_proxy, complete these
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
|
||||
@@ -16,7 +16,7 @@ description: Kick-ass OIDC and identity management
|
||||
|
||||
We'll need several directories to bind-mount into our container for both runtime and backup data, so create them as follows:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/runtime/keycloak/database
|
||||
mkdir -p /var/data/keycloak/database-dump
|
||||
```
|
||||
@@ -25,7 +25,7 @@ mkdir -p /var/data/keycloak/database-dump
|
||||
|
||||
Create `/var/data/config/keycloak/keycloak.env`, and populate with the following variables, customized for your own domain structure.
|
||||
|
||||
```
|
||||
```bash
|
||||
# Technically, this could be auto-detected, but we prefer to be prescriptive
|
||||
DB_VENDOR=postgres
|
||||
DB_DATABASE=keycloak
|
||||
@@ -48,7 +48,7 @@ POSTGRES_PASSWORD=myuberpassword
|
||||
|
||||
Create `/var/data/config/keycloak/keycloak-backup.env`, and populate with the following, so that your database can be backed up to the filesystem, daily:
|
||||
|
||||
```
|
||||
```bash
|
||||
PGHOST=keycloak-db
|
||||
PGUSER=keycloak
|
||||
PGPASSWORD=myuberpassword
|
||||
@@ -128,4 +128,4 @@ Launch the KeyCloak stack by running `docker stack deploy keycloak -c <path -to-
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, using the user/password you defined in `keycloak.env`.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -55,7 +55,6 @@ For each of the following mappers, click the name, and set the "_Read Only_" fla
|
||||
|
||||

|
||||
|
||||
|
||||
## Summary
|
||||
|
||||
We've set up a new realm in KeyCloak, and configured read-write federation to an [OpenLDAP](/recipes/openldap/) backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
|
||||
@@ -65,4 +64,4 @@ We've setup a new realm in KeyCloak, and configured read-write federation to an
|
||||
|
||||
* [X] KeyCloak realm in read-write federation with [OpenLDAP](/recipes/openldap/) directory
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -37,4 +37,4 @@ We've setup users in KeyCloak, which we can now use to authenticate to KeyCloak,
|
||||
|
||||
* [X] Username / password to authenticate against [KeyCloak](/recipes/keycloak/)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -16,7 +16,7 @@ Having an authentication provider is not much use until you start authenticating
|
||||
|
||||
* [ ] The URI(s) to protect with the OIDC provider. Refer to the [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/) recipe for more information
|
||||
|
||||
## Preparation
|
||||
## Preparation
|
||||
|
||||
### Create Client
|
||||
|
||||
@@ -45,11 +45,11 @@ Now that you've changed the access type, and clicked **Save**, an additional **C
|
||||
|
||||
## Summary
|
||||
|
||||
We've setup an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm, is *https://<your-keycloak-url\>/realms/master/.well-known/openid-configuration*
|
||||
We've set up an OIDC client in KeyCloak, which we can now use to protect vulnerable services using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). The OIDC URL provided by KeyCloak in the master realm is `https://<your-keycloak-url>/realms/master/.well-known/openid-configuration`
|
||||
|
||||
!!! Summary
|
||||
Created:
|
||||
|
||||
* [X] Client ID and Client Secret used to authenticate against KeyCloak with OpenID Connect
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -13,7 +13,7 @@ So you've just watched a bunch of superhero movies, and you're suddenly inspired
|
||||
## Ingredients
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
* [X] [AutoPirate](/recipes/autopirate/) components (*specifically [Mylar](/recipes/autopirate/mylar/)*), for searching for, downloading, and managing comic books
|
||||
* [X] [AutoPirate](/recipes/autopirate/) components (*specifically [Mylar](/recipes/autopirate/mylar/)*), for searching for, downloading, and managing comic books
|
||||
|
||||
## Preparation
|
||||
|
||||
@@ -21,7 +21,7 @@ So you've just watched a bunch of superhero movies, and you're suddenly inspired
|
||||
|
||||
First, we create a directory to hold the Komga database, logs, and other persistent data:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/komga
|
||||
```
|
||||
|
||||
@@ -73,4 +73,4 @@ If Komga scratches your particular itch, please join me in [sponsoring the devel
|
||||
|
||||
[^1]: Since Komga doesn't need to communicate with any other services, we don't need a separate overlay network for it. Provided Traefik can reach Komga via the `traefik_public` overlay network, we've got all we need.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
#Kanboard
|
||||
# Kanboard
|
||||
|
||||
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_)
|
||||
|
||||
@@ -28,7 +28,7 @@ Features include:
|
||||
|
||||
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below:
|
||||
|
||||
```
|
||||
```yaml
|
||||
<snip>
|
||||
kubernetes:
|
||||
namespaces:
|
||||
@@ -45,7 +45,7 @@ If you've updated ```values.yml```, upgrade your traefik deployment via helm, by
|
||||
|
||||
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/config/kanboard
|
||||
```
|
||||
|
||||
@@ -53,7 +53,7 @@ mkdir /var/data/config/kanboard
|
||||
|
||||
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the kanboard stack with the following .yml:
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/config/kanboard/namespace.yml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
@@ -67,7 +67,7 @@ kubectl create -f /var/data/config/kanboard/namespace.yaml
|
||||
|
||||
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the kanboard app and plugin data:
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/config/kanboard/persistent-volumeclaim.yml
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
@@ -91,14 +91,15 @@ kubectl create -f /var/data/config/kanboard/kanboard-volumeclaim.yaml
|
||||
|
||||
### Create ConfigMap
|
||||
|
||||
Kanboard's configuration is all contained within ```config.php```, which needs to be presented to the container. We _could_ maintain ```config.php``` in the persistent volume we created above, but this would require manually accessing the pod every time we wanted to make a change.
|
||||
Kanboard's configuration is all contained within ```config.php```, which needs to be presented to the container. We _could_ maintain ```config.php``` in the persistent volume we created above, but this would require manually accessing the pod every time we wanted to make a change.
|
||||
|
||||
Instead, we'll create ```config.php``` as a [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), meaning it "lives" within the Kubernetes cluster and can be **presented** to our pod. When we want to make changes, we simply update the ConfigMap (*delete and recreate, to be accurate*), and relaunch the pod.
|
||||
|
||||
Grab a copy of [config.default.php](https://github.com/kanboard/kanboard/blob/master/config.default.php), save it to ```/var/data/config/kanboard/config.php```, and customize it per [the guide](https://docs.kanboard.org/en/latest/admin_guide/config_file.html).
|
||||
|
||||
At the very least, I'd suggest making the following changes:
|
||||
```
|
||||
|
||||
```php
|
||||
define('PLUGIN_INSTALLER', true); // Yes, I want to install plugins using the UI
|
||||
define('ENABLE_URL_REWRITE', false); // Yes, I want pretty URLs
|
||||
```
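
Once you've created the ConfigMap (as described in the next step), it's worth sanity-checking that your customisations made it in; for example (assuming the `kanboard` namespace and `kanboard-config` name used throughout this recipe):

```bash
# List the ConfigMap, then dump its contents to confirm your config.php changes are present
kubectl get configmap -n kanboard kanboard-config
kubectl describe configmap -n kanboard kanboard-config
```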
|
||||
@@ -107,7 +108,7 @@ Now create the configmap from config.php, by running ```kubectl create configmap
|
||||
|
||||
## Serving
|
||||
|
||||
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [service](https://kubernetes.io/docs/concepts/services-networking/service/), and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the kanboard [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
|
||||
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [service](https://kubernetes.io/docs/concepts/services-networking/service/), and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the kanboard [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
|
||||
|
||||
### Create deployment
|
||||
|
||||
@@ -115,7 +116,7 @@ Create a deployment to tell Kubernetes about the desired state of the pod (*whic
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/kanboard/deployment.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
@@ -160,7 +161,7 @@ kubectl create -f /var/data/kanboard/deployment.yml
|
||||
|
||||
Check that your deployment is running, with ```kubectl get pods -n kanboard```. After a minute or so, you should see a "Running" pod, as illustrated below:
|
||||
|
||||
```
|
||||
```bash
|
||||
[funkypenguin:~] % kubectl get pods -n kanboard
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
app-79f97f7db6-hsmfg 1/1 Running 0 11d
|
||||
@@ -171,7 +172,7 @@ app-79f97f7db6-hsmfg 1/1 Running 0 11d
|
||||
|
||||
The service resource "advertises" the availability of TCP port 80 in your pod, to the rest of the cluster (*constrained within your namespace*). It seems a little like overkill coming from the Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/kanboard/service.yml
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
@@ -191,7 +192,7 @@ kubectl create -f /var/data/kanboard/service.yml
|
||||
|
||||
Check that your service is deployed, with ```kubectl get services -n kanboard```. You should see something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
[funkypenguin:~] % kubectl get service -n kanboard
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
app ClusterIP None <none> 80/TCP 38d
|
||||
@@ -202,7 +203,7 @@ app ClusterIP None <none> 80/TCP 38d
|
||||
|
||||
The ingress resource tells Traefik what to forward inbound requests for *kanboard.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/kanboard/ingress.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
@@ -225,7 +226,7 @@ kubectl create -f /var/data/kanboard/ingress.yml
|
||||
|
||||
Check that your service is deployed, with ```kubectl get ingress -n kanboard```. You should see something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
[funkypenguin:~] % kubectl get ingress -n kanboard
|
||||
NAME HOSTS ADDRESS PORTS AGE
|
||||
app kanboard.funkypenguin.co.nz 80 38d
|
||||
@@ -234,21 +235,20 @@ app kanboard.funkypenguin.co.nz 80 38d
|
||||
|
||||
### Access Kanboard
|
||||
|
||||
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://kanboard.example.com*)
|
||||
|
||||
At this point, you should be able to access your instance on your chosen DNS name (*i.e. <https://kanboard.example.com>*)
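
If the page doesn't load, a quick way to confirm DNS and the Traefik ingress are behaving (before digging into pod logs) is to check the HTTP response directly; substitute your own hostname:

```bash
# Expect an HTTP 200, or a redirect to the Kanboard login page
curl -skI https://kanboard.example.com | head -n 1
```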
|
||||
|
||||
### Updating config.php
|
||||
|
||||
Since ```config.php``` is now a ConfigMap, to update it, make your local changes, then delete and recreate the ConfigMap by running:
|
||||
|
||||
```
|
||||
```bash
|
||||
kubectl delete configmap -n kanboard kanboard-config
|
||||
kubectl create configmap -n kanboard kanboard-config --from-file=config.php
|
||||
```
|
||||
|
||||
Then, in the absence of any other changes to the deployment definition, force the pod to restart by issuing a "null patch", as follows:
|
||||
|
||||
```
|
||||
```bash
|
||||
kubectl patch -n kanboard deployment app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
|
||||
```
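
On reasonably recent kubectl versions (roughly 1.15 onwards), you can achieve the same thing more readably with `kubectl rollout restart`; a sketch, assuming the `kanboard` namespace and `app` deployment name used above:

```bash
# Trigger a fresh rollout of the deployment, then wait for the new pod to become ready
kubectl rollout restart -n kanboard deployment app
kubectl rollout status -n kanboard deployment app
```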
|
||||
|
||||
@@ -258,4 +258,4 @@ To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod
|
||||
|
||||
[^1]: The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, by running an additional database pod and service. Contact me if you'd like further details ;)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
#Miniflux
|
||||
# Miniflux
|
||||
|
||||
Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
|
||||
|
||||
@@ -14,7 +14,6 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev
|
||||
!!! abstract "2.0+ is a bit different"
|
||||
[Some things changed](https://docs.miniflux.net/en/latest/migration.html) when Miniflux 2.0 was released. For one thing, the only supported database is now postgresql (_no more SQLite_). External themes are gone, as is PHP (_in favor of golang_). It's been a controversial change, but I'm keen on minimal and single-purpose, so I'm still very happy with the direction of development. The developer has laid out his [opinions](https://docs.miniflux.net/en/latest/opinionated.html) re the decisions he's made in the course of development.
|
||||
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
|
||||
@@ -26,7 +25,7 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev
|
||||
|
||||
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *miniflux* namespace, as illustrated below:
|
||||
|
||||
```
|
||||
```yaml
|
||||
<snip>
|
||||
kubernetes:
|
||||
namespaces:
|
||||
@@ -43,7 +42,7 @@ If you've updated ```values.yml```, upgrade your traefik deployment via helm, by
|
||||
|
||||
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/config/miniflux
|
||||
```
|
||||
|
||||
@@ -51,7 +50,7 @@ mkdir /var/data/config/miniflux
|
||||
|
||||
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the miniflux stack with the following .yml:
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/config/miniflux/namespace.yml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
@@ -65,7 +64,7 @@ kubectl create -f /var/data/config/miniflux/namespace.yaml
|
||||
|
||||
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the miniflux postgres database:
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/config/miniflux/db-persistent-volumeclaim.yml
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
@@ -91,7 +90,7 @@ kubectl create -f /var/data/config/miniflux/db-persistent-volumeclaim.yaml
|
||||
|
||||
It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents. Run the following, replacing ```imtoosexyformyadminpassword```, and the ```mydbpass``` value in both postgres-password.secret **and** database-url.secret:
|
||||
|
||||
```
|
||||
```bash
|
||||
echo -n "imtoosexyformyadminpassword" > admin-password.secret
|
||||
echo -n "mydbpass" > postgres-password.secret
|
||||
echo -n "postgres://miniflux:mydbpass@db/miniflux?sslmode=disable" > database-url.secret
|
||||
@@ -105,10 +104,9 @@ kubectl create secret -n mqtt generic miniflux-credentials \
|
||||
!!! tip "Why use ```echo -n```?"
|
||||
Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
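
Before moving on, it's worth confirming the secret actually landed where the deployments expect it; a quick check, assuming you created it as `miniflux-credentials` in the `miniflux` namespace (adjust if yours differs):

```bash
# Confirm the secret exists and list the keys it contains (values stay hidden)
kubectl describe secret -n miniflux miniflux-credentials
```

If a value turns out to be wrong, delete the secret and recreate it with the `echo -n` approach above.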
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [services](https://kubernetes.io/docs/concepts/services-networking/service/), and an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the miniflux [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
|
||||
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [services](https://kubernetes.io/docs/concepts/services-networking/service/), and an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the miniflux [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
|
||||
|
||||
### Create db deployment
|
||||
|
||||
@@ -116,7 +114,7 @@ Deployments tell Kubernetes about the desired state of the pod (*which it will t
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/miniflux/db-deployment.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
@@ -159,7 +157,7 @@ spec:
|
||||
|
||||
Create the app deployment by executing the following. Again, note that the deployment refers to the secrets created above.
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/miniflux/app-deployment.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
@@ -207,7 +205,7 @@ kubectl create -f /var/data/miniflux/deployment.yml
|
||||
|
||||
Check that your deployment is running, with ```kubectl get pods -n miniflux```. After a minute or so, you should see 2 "Running" pods, as illustrated below:
|
||||
|
||||
```
|
||||
```bash
|
||||
[funkypenguin:~] % kubectl get pods -n miniflux
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
app-667c667b75-5jjm9 1/1 Running 0 4d
|
||||
@@ -219,7 +217,7 @@ db-fcd47b88f-9vvqt 1/1 Running 0 4d
|
||||
|
||||
The db service resource "advertises" the availability of PostgreSQL's port (TCP 5432) in your pod, to the rest of the cluster (*constrained within your namespace*). It seems a little like overkill coming from the Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/miniflux/db-service.yml
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
@@ -241,8 +239,7 @@ kubectl create -f /var/data/miniflux/service.yml
|
||||
|
||||
The app service resource "advertises" the availability of miniflux's HTTP listener port (TCP 8080) in your pod. This is the service which will be referred to by the ingress (below), so that Traefik can route incoming traffic to the miniflux app.
|
||||
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/miniflux/app-service.yml
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
@@ -264,7 +261,7 @@ kubectl create -f /var/data/miniflux/app-service.yml
|
||||
|
||||
Check that your services are deployed, with ```kubectl get services -n miniflux```. You should see something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
[funkypenguin:~] % kubectl get services -n miniflux
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
app ClusterIP None <none> 8080/TCP 55d
|
||||
@@ -276,7 +273,7 @@ db ClusterIP None <none> 5432/TCP 55d
|
||||
|
||||
The ingress resource tells Traefik what to forward inbound requests for *miniflux.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
|
||||
|
||||
```
|
||||
```bash
|
||||
cat <<EOF > /var/data/miniflux/ingress.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
@@ -299,7 +296,7 @@ kubectl create -f /var/data/miniflux/ingress.yml
|
||||
|
||||
Check that your service is deployed, with ```kubectl get ingress -n miniflux```. You should see something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
[funkypenguin:~] 130 % kubectl get ingress -n miniflux
|
||||
NAME HOSTS ADDRESS PORTS AGE
|
||||
app miniflux.funkypenguin.co.nz 80 55d
|
||||
@@ -308,11 +305,10 @@ app miniflux.funkypenguin.co.nz 80 55d
|
||||
|
||||
### Access Miniflux
|
||||
|
||||
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://miniflux.example.com*)
|
||||
|
||||
At this point, you should be able to access your instance on your chosen DNS name (*i.e. <https://miniflux.example.com>*)
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,262 +0,0 @@
|
||||
#Kanboard
|
||||
|
||||
Kanboard is a Kanban tool, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of my favorite RSS reader, [Miniflux](/recipes/miniflux/)_)
|
||||
|
||||

|
||||
|
||||
!!! tip "Sponsored Project"
|
||||
Kanboard is one of my [sponsored projects](/#sponsored-projects) - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! 😓
|
||||
|
||||
Features include:
|
||||
|
||||
* Visualize your work
|
||||
* Limit your work in progress to be more efficient
|
||||
* Customize your boards according to your business activities
|
||||
* Multiple projects with the ability to drag and drop tasks
|
||||
* Reports and analytics
|
||||
* Fast and simple to use
|
||||
* Access from anywhere with a modern browser
|
||||
* Plugins and integrations with external services
|
||||
* Free, open source and self-hosted
|
||||
* Super simple installation
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
|
||||
2. A DNS name for your kanboard instance (*kanboard.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
|
||||
|
||||
## Preparation
|
||||
|
||||
### Prepare traefik for namespace
|
||||
|
||||
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *kanboard* namespace, as illustrated below:
|
||||
|
||||
```
|
||||
<snip>
|
||||
kubernetes:
|
||||
namespaces:
|
||||
- kube-system
|
||||
- nextcloud
|
||||
- kanboard
|
||||
- miniflux
|
||||
<snip>
|
||||
```
|
||||
|
||||
If you've updated ```values.yml```, upgrade your traefik deployment via helm, by running ```helm upgrade --values values.yml traefik stable/traefik --recreate-pods```
|
||||
|
||||
### Create data locations
|
||||
|
||||
Although we could simply bind-mount local volumes to a local Kubuernetes cluster, since we're targetting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
|
||||
|
||||
```
|
||||
mkdir /var/data/config/kanboard
|
||||
```
|
||||
|
||||
### Create namespace
|
||||
|
||||
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the kanboard stack with the following .yml:
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/config/kanboard/namespace.yml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: kanboard
|
||||
EOF
|
||||
kubectl create -f /var/data/config/kanboard/namespace.yaml
|
||||
```
|
||||
|
||||
### Create persistent volume claim
|
||||
|
||||
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the kanboard app and plugin data:
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/config/kanboard/persistent-volumeclaim.yml
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: kanboard-volumeclaim
|
||||
namespace: kanboard
|
||||
annotations:
|
||||
backup.kubernetes.io/deltas: P1D P7D
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
EOF
|
||||
kubectl create -f /var/data/config/kanboard/kanboard-volumeclaim.yaml
|
||||
```
|
||||
|
||||
!!! question "What's that annotation about?"
|
||||
The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
|
||||
|
||||
### Create ConfigMap
|
||||
|
||||
Kanboard's configuration is all contained within ```config.php```, which needs to be presented to the container. We _could_ maintain ```config.php``` in the persistent volume we created above, but this would require manually accessing the pod every time we wanted to make a change.
|
||||
|
||||
Instead, we'll create ```config.php``` as a [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), meaning it "lives" within the Kuberetes cluster and can be **presented** to our pod. When we want to make changes, we simply update the ConfigMap (*delete and recreate, to be accurate*), and relaunch the pod.
|
||||
|
||||
Grab a copy of [config.default.php](https://github.com/kanboard/kanboard/blob/master/config.default.php), save it to ```/var/data/config/kanboard/config.php```, and customize it per [the guide](https://docs.kanboard.org/en/latest/admin_guide/config_file.html).
|
||||
|
||||
At the very least, I'd suggest making the following changes:
|
||||
```
|
||||
define('PLUGIN_INSTALLER', true); // Yes, I want to install plugins using the UI
|
||||
define('ENABLE_URL_REWRITE', false); // Yes, I want pretty URLs
|
||||
```
|
||||
|
||||
Now create the configmap from config.php, by running ```kubectl create configmap -n kanboard kanboard-config --from-file=config.php```
|
||||
|
||||
## Serving
|
||||
|
||||
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create a [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [service](https://kubernetes.io/docs/concepts/services-networking/service/), and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the kanboard [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
|
||||
|
||||
### Create deployment
|
||||
|
||||
Create a deployment to tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Note below that we mount the persistent volume **twice**, to both ```/var/www/app/data``` and ```/var/www/app/plugins```, using the subPath value to differentiate them. This trick avoids us having to provision **two** persistent volumes just for data mounted in 2 separate locations.
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/kanboard/deployment.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
namespace: kanboard
|
||||
name: app
|
||||
labels:
|
||||
app: app
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: app
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: app
|
||||
spec:
|
||||
containers:
|
||||
- image: kanboard/kanboard
|
||||
name: app
|
||||
volumeMounts:
|
||||
- name: kanboard-config
|
||||
mountPath: /var/www/app/config.php
|
||||
subPath: config.php
|
||||
- name: kanboard-app
|
||||
mountPath: /var/www/app/data
|
||||
subPath: data
|
||||
- name: kanboard-app
|
||||
mountPath: /var/www/app/plugins
|
||||
subPath: plugins
|
||||
volumes:
|
||||
- name: kanboard-app
|
||||
persistentVolumeClaim:
|
||||
claimName: kanboard-app
|
||||
- name: kanboard-config
|
||||
configMap:
|
||||
name: kanboard-config
|
||||
EOF
|
||||
kubectl create -f /var/data/kanboard/deployment.yml
|
||||
```
|
||||
|
||||
Check that your deployment is running, with ```kubectl get pods -n kanboard```. After a minute or so, you should see a "Running" pod, as illustrated below:
|
||||
|
||||
```
|
||||
[funkypenguin:~] % kubectl get pods -n kanboard
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
app-79f97f7db6-hsmfg 1/1 Running 0 11d
|
||||
[funkypenguin:~] %
|
||||
```
|
||||
|
||||
### Create service
|
||||
|
||||
The service resource "advertises" the availability of TCP port 80 in your pod, to the rest of the cluster (*constrained within your namespace*). It seems a little like overkill coming from the Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/kanboard/service.yml
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: app
|
||||
namespace: kanboard
|
||||
spec:
|
||||
selector:
|
||||
app: app
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 80
|
||||
clusterIP: None
|
||||
EOF
|
||||
kubectl create -f /var/data/kanboard/service.yml
|
||||
```
|
||||
|
||||
Check that your service is deployed, with ```kubectl get services -n kanboard```. You should see something like this:
|
||||
|
||||
```
|
||||
[funkypenguin:~] % kubectl get service -n kanboard
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
app ClusterIP None <none> 80/TCP 38d
|
||||
[funkypenguin:~] %
|
||||
```
|
||||
|
||||
### Create ingress
|
||||
|
||||
The ingress resource tells Traefik what to forward inbound requests for *kanboard.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/kanboard/ingress.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: app
|
||||
namespace: kanboard
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: traefik
|
||||
spec:
|
||||
rules:
|
||||
- host: kanboard.example.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: app
|
||||
servicePort: 80
|
||||
EOF
|
||||
kubectl create -f /var/data/kanboard/ingress.yml
|
||||
```
|
||||
|
||||
Check that your service is deployed, with ```kubectl get ingress -n kanboard```. You should see something like this:
|
||||
|
||||
```
|
||||
[funkypenguin:~] % kubectl get ingress -n kanboard
|
||||
NAME HOSTS ADDRESS PORTS AGE
|
||||
app kanboard.funkypenguin.co.nz 80 38d
|
||||
[funkypenguin:~] %
|
||||
```
|
||||
|
||||
### Access Kanboard
|
||||
|
||||
At this point, you should be able to access your instance on your chosen DNS name (*i.e. https://kanboard.example.com*)
|
||||
|
||||
|
||||
### Updating config.php
|
||||
|
||||
Since ```config.php``` is a ConfigMap now, to update it, make your local changes, and then delete and recreate the ConfigMap, by running:
|
||||
|
||||
```
|
||||
kubectl delete configmap -n kanboard kanboard-config
|
||||
kubectl create configmap -n kanboard kanboard-config --from-file=config.php
|
||||
```
|
||||
|
||||
Then, in the absense of any other changes to the deployement definition, force the pod to restart by issuing a "null patch", as follows:
|
||||
|
||||
```
|
||||
kubectl patch -n kanboard deployment app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
|
||||
```
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
To look at the Kanboard pod's logs, run ```kubectl logs -n kanboard <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
|
||||
|
||||
[^1]: The simplest deployment of Kanboard uses the default SQLite database backend, stored on the persistent volume. You can convert this to a "real" database running MySQL or PostgreSQL, and running an an additional database pod and service. Contact me if you'd like further details ;)
|
||||
@@ -4,7 +4,7 @@ description: Quickly share self-destructing screenshots, text, etc
|
||||
|
||||
# Linx
|
||||
|
||||
Ever wanted to quickly share a screenshot, but don't want to use imgur, sign up for a service, or have your image tracked across the internet for all time?
|
||||
Ever wanted to quickly share a screenshot, but don't want to use imgur, sign up for a service, or have your image tracked across the internet for all time?
|
||||
|
||||
Want to privately share some log output with a password, or a self-destructing cat picture?
|
||||
|
||||
@@ -26,7 +26,7 @@ Want to privately share some log output with a password, or a self-destructing c
|
||||
|
||||
First we create a directory to hold the data which linx will serve:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/linx
|
||||
```
|
||||
|
||||
@@ -34,7 +34,7 @@ mkdir /var/data/linx
|
||||
|
||||
Linx is configured using a flat text file, so create this on the Docker host, and then we'll mount it (*read-only*) into the container, below.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/config/linx
|
||||
cat << EOF > /var/data/config/linx/linx.conf
|
||||
# Refer to https://github.com/andreimarcu/linx-server for details
|
||||
@@ -87,7 +87,6 @@ networks:
|
||||
|
||||
Launch the Linx stack by running ```docker stack deploy linx -c <path -to-docker-compose.yml>```
|
||||
|
||||
|
||||
[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -26,7 +26,7 @@ docker-mailserver doesn't include a webmail client, and one is not strictly need
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/docker-mailserver:
|
||||
|
||||
```
|
||||
```bash
|
||||
cd /var/data
|
||||
mkdir docker-mailserver
|
||||
cd docker-mailserver
|
||||
@@ -41,7 +41,7 @@ The docker-mailserver container can _renew_ our LetsEncrypt certs for us, but it
|
||||
|
||||
In the example below, since I'm already using Traefik to manage the LE certs for my web platforms, I opted to use the DNS challenge to prove my ownership of the domain. The certbot client will prompt you to add a DNS record for domain verification.
|
||||
|
||||
```
|
||||
```bash
|
||||
docker run -ti --rm -v \
|
||||
"$(pwd)"/letsencrypt:/etc/letsencrypt certbot/certbot \
|
||||
--manual --preferred-challenges dns certonly \
|
||||
@@ -52,11 +52,12 @@ docker run -ti --rm -v \
|
||||
|
||||
docker-mailserver comes with a handy bash script for managing the stack (which is really just a wrapper around the container). It'll make our setup easier, so download it into the root of your configuration/data directory, and make it executable:
|
||||
|
||||
```
|
||||
```bash
|
||||
curl -o setup.sh \
|
||||
https://raw.githubusercontent.com/tomav/docker-mailserver/master/setup.sh
|
||||
chmod a+x ./setup.sh
|
||||
```
|
||||
|
||||
### Create email accounts
|
||||
|
||||
For every email address required, run ```./setup.sh email add <email> <password>``` to create the account. The command returns no output.
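
Since the `add` command is silent, it's reassuring to confirm the mailbox was actually created; `setup.sh` includes a matching `list` subcommand (the address and password below are examples only):

```bash
# Create a mailbox, then list configured mailboxes to confirm it exists
./setup.sh email add batman@example.com 'SuperSecretPassword'
./setup.sh email list
```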
|
||||
@@ -69,7 +70,7 @@ Run ```./setup.sh config dkim``` to create the necessary DKIM entries. The comma
|
||||
|
||||
Examine the keys created by opendkim to identify the DNS TXT records required:
|
||||
|
||||
```
|
||||
```bash
|
||||
for i in `find config/opendkim/keys/ -name mail.txt`; do \
|
||||
echo $i; \
|
||||
cat $i; \
|
||||
@@ -78,16 +79,16 @@ done
|
||||
|
||||
You'll end up with something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
config/opendkim/keys/gitlab.example.com/mail.txt
|
||||
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
|
||||
"p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB" ) ; ----- DKIM key mail for gitlab.example.com
|
||||
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
|
||||
"p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB" ) ; ----- DKIM key mail for gitlab.example.com
|
||||
[root@ds1 mail]#
|
||||
```
|
||||
|
||||
Create the necessary DNS TXT entries for your domain(s). Note that although opendkim splits the record across two lines, the actual record should be concatenated on creation. I.e., the DNS TXT record above should read:
|
||||
|
||||
```
|
||||
```bash
|
||||
"v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB"
|
||||
```
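
Once the record is published and DNS has propagated, you can confirm it resolves as expected (the `mail` selector comes from the opendkim output above; substitute your own domain):

```bash
# The answer should be the single, concatenated v=DKIM1 string shown above
dig +short TXT mail._domainkey.gitlab.example.com
```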
|
||||
|
||||
@@ -131,7 +132,7 @@ services:
|
||||
deploy:
|
||||
replicas: 1
|
||||
|
||||
rainloop:
|
||||
rainloop:
|
||||
image: hardware/rainloop
|
||||
networks:
|
||||
- internal
|
||||
@@ -158,7 +159,7 @@ networks:
|
||||
|
||||
A sample docker-mailserver.env file looks like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
ENABLE_SPAMASSASSIN=1
|
||||
ENABLE_CLAMAV=1
|
||||
ENABLE_POSTGREY=1
|
||||
@@ -170,7 +171,6 @@ PERMIT_DOCKER=network
|
||||
SSL_TYPE=letsencrypt
|
||||
```
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch mailserver
|
||||
@@ -181,4 +181,4 @@ Launch the mail server stack by running ```docker stack deploy docker-mailserver
|
||||
|
||||
[^2]: If you're using sieve with Rainloop, take note of the [workaround](https://discourse.geek-kitchen.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://discourse.geek-kitchen.funkypenguin.co.nz/u/ggilley)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,110 +0,0 @@
|
||||
# MatterMost
|
||||
|
||||
Intro
|
||||
|
||||

|
||||
|
||||
Details
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/mattermost:
|
||||
|
||||
```
|
||||
mkdir -p /var/data/mattermost/{cert,config,data,logs,plugins,database-dump}
|
||||
mkdir -p /var/data/runtime/mattermost/database
|
||||
```
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create mattermost.env, and populate with the following variables
|
||||
```
|
||||
POSTGRES_USER=mmuser
|
||||
POSTGRES_PASSWORD=mmuser_password
|
||||
POSTGRES_DB=mattermost
|
||||
MM_USERNAME=mmuser
|
||||
MM_PASSWORD=mmuser_password
|
||||
MM_DBNAME=mattermost
|
||||
```
|
||||
|
||||
Now create mattermost-backup.env, and populate with the following variables:
|
||||
```
|
||||
PGHOST=db
|
||||
PGUSER=mmuser
|
||||
PGPASSWORD=mmuser_password
|
||||
BACKUP_NUM_KEEP=7
|
||||
BACKUP_FREQUENCY=1d
|
||||
```
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
|
||||
db:
|
||||
image: mattermost/mattermost-prod-db
|
||||
env_file: /var/data/config/mattermost/mattermost.env
|
||||
volumes:
|
||||
- /var/data/runtime/mattermost/database:/var/lib/postgresql/data
|
||||
networks:
|
||||
- internal
|
||||
|
||||
app:
|
||||
image: mattermost/mattermost-team-edition
|
||||
env_file: /var/data/config/mattermost/mattermost.env
|
||||
volumes:
|
||||
- /var/data/mattermost/config:/mattermost/config:rw
|
||||
- /var/data/mattermost/data:/mattermost/data:rw
|
||||
- /var/data/mattermost/logs:/mattermost/logs:rw
|
||||
- /var/data/mattermost/plugins:/mattermost/plugins:rw
|
||||
|
||||
db-backup:
|
||||
image: mattermost/mattermost-prod-db
|
||||
env_file: /var/data/config/mattermost/mattermost-backup.env
|
||||
volumes:
|
||||
- /var/data/mattermost/database-dump:/dump
|
||||
entrypoint: |
|
||||
bash -c 'bash -s <<EOF
|
||||
trap "break;exit" SIGHUP SIGINT SIGTERM
|
||||
sleep 2m
|
||||
while /bin/true; do
|
||||
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
|
||||
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
|
||||
sleep $$BACKUP_FREQUENCY
|
||||
done
|
||||
EOF'
|
||||
networks:
|
||||
- internal
|
||||
|
||||
|
||||
networks:
|
||||
traefik_public:
|
||||
external: true
|
||||
internal:
|
||||
driver: overlay
|
||||
ipam:
|
||||
config:
|
||||
- subnet: 172.16.40.0/24
|
||||
```
|
||||
|
||||
--8<-- "reference-networks.md"
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch MatterMost stack
|
||||
|
||||
Launch the MatterMost stack by running ```docker stack deploy mattermost -c <path -to-docker-compose.yml>```
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN**, with user "root" and the password you specified in mattermost.env.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
@@ -10,10 +10,10 @@ Easily add recipes into your database by providing the url[^penguinfood], and me
|
||||
|
||||

|
||||
|
||||
Mealie also provides a secure API for interactions from 3rd party applications.
|
||||
Mealie also provides a secure API for interactions from 3rd party applications.
|
||||
|
||||
!!! question "Why does my recipe manager need an API?"
|
||||
An API allows integration into applications like Home Assistant that can act as notification engines to provide custom notifications based of Meal Plan data to remind you to defrost the chicken, marinade the steak, or start the CrockPot. See the [official docs](https://hay-kot.github.io/mealie/) for more information. Additionally, you can access any available API from the backend server. To explore the API spin up your server and navigate to http://yourserver.com/docs for interactive API documentation.
|
||||
An API allows integration into applications like Home Assistant that can act as notification engines to provide custom notifications based on Meal Plan data to remind you to defrost the chicken, marinate the steak, or start the CrockPot. See the [official docs](https://hay-kot.github.io/mealie/) for more information. Additionally, you can access any available API from the backend server. To explore the API, spin up your server and navigate to <http://yourserver.com/docs> for interactive API documentation.
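
For a quick, non-interactive check that the API is answering, you can request the docs page mentioned above and look at the status code (the hostname below is a placeholder; if you've put Traefik Forward Auth in front of Mealie, expect a redirect to your auth provider instead):

```bash
# Expect an HTTP 200 if Mealie and its interactive API docs are up
curl -skI https://mealie.example.com/docs | head -n 1
```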
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
@@ -23,7 +23,7 @@ Mealie also provides a secure API for interactions from 3rd party applications.
|
||||
|
||||
First we create a directory to hold the data which mealie will serve:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/mealie
|
||||
```
|
||||
|
||||
@@ -31,7 +31,7 @@ mkdir /var/data/mealie
|
||||
|
||||
There's only one environment variable currently required (`db_type`), but let's create an `.env` file anyway, to keep the recipe consistent and extensible.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/config/mealie
|
||||
cat << EOF > /var/data/config/mealie/mealie.env
|
||||
db_type=sqlite
|
||||
@@ -89,8 +89,8 @@ Launch the mealie stack by running ```docker stack deploy mealie -c <path -to-do
|
||||
|
||||

|
||||
|
||||
[^penguinfood]: I scraped all these recipes from https://www.food.com/search/penguin
|
||||
[^penguinfood]: I scraped all these recipes from <https://www.food.com/search/penguin>
|
||||
[^1]: If you plan to use Mealie for fancy things like an early-morning alarm to defrost the chicken, you may need to customize the [Traefik Forward Auth][tfa] rules, or even remove them entirely, for unauthenticated API access.
|
||||
[^2]: If you think Mealie is tasty, encourage the developer :cook: to keep on cookin', by [sponsoring him](https://github.com/sponsors/hay-kot) :heart:
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -26,7 +26,7 @@ I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/rev
|
||||
|
||||
Create the location for the bind-mount of the application data, so that it's persistent:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/miniflux/database-dump
|
||||
mkdir -p /var/data/runtime/miniflux/database
|
||||
|
||||
@@ -36,7 +36,7 @@ mkdir -p /var/data/runtime/miniflux/database
|
||||
|
||||
Create ```/var/data/config/miniflux/miniflux.env``` something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
DATABASE_URL=postgres://miniflux:secret@miniflux-db/miniflux?sslmode=disable
|
||||
POSTGRES_USER=miniflux
|
||||
POSTGRES_PASSWORD=secret
|
||||
@@ -52,7 +52,7 @@ ADMIN_PASSWORD=test1234
|
||||
|
||||
Create ```/var/data/config/miniflux/miniflux-backup.env```, and populate with the following, so that your database can be backed up to the filesystem, daily:
|
||||
|
||||
```
|
||||
```env
|
||||
PGHOST=miniflux-db
|
||||
PGUSER=miniflux
|
||||
PGPASSWORD=secret
|
||||
@@ -124,7 +124,6 @@ networks:
|
||||
- subnet: 172.16.22.0/24
|
||||
```
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch Miniflux stack
|
||||
@@ -135,4 +134,4 @@ Log into your new instance at https://**YOUR-FQDN**, using the credentials you s
|
||||
|
||||
[^1]: Find the bookmarklet under the **Settings -> Integration** page.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -27,7 +27,7 @@ Possible use-cases:
|
||||
|
||||
We'll need a directory to hold our minio file store, as well as our minio client config, so create a structure at /var/data/minio:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/minio
|
||||
cd /var/data/minio
|
||||
mkdir -p {mc,data}
|
||||
@@ -36,7 +36,8 @@ mkdir -p {mc,data}
|
||||
### Prepare environment
|
||||
|
||||
Create minio.env, and populate with the following variables:
|
||||
```
|
||||
|
||||
```bash
|
||||
MINIO_ACCESS_KEY=<some random, complex string>
|
||||
MINIO_SECRET_KEY=<another random, complex string>
|
||||
```
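
If you want help coming up with those values, something like the following generates suitably random strings (assuming `openssl` is available; any random-string generator works just as well):

```bash
# Candidate values for MINIO_ACCESS_KEY and MINIO_SECRET_KEY respectively
openssl rand -hex 16
openssl rand -base64 30
```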
|
||||
@@ -89,13 +90,13 @@ To administer the Minio server, we need the Minio client. While it's possible to
|
||||
|
||||
I created an alias on my docker nodes, allowing me to run mc quickly:
|
||||
|
||||
```
|
||||
```bash
|
||||
alias mc='docker run -it -v /docker/minio/mc/:/root/.mc/ --network traefik_public minio/mc'
|
||||
```
|
||||
|
||||
Now I use the alias to launch the client shell, and connect to my minio instance (_I could also use the external, traefik-provided URL_):
|
||||
|
||||
```
|
||||
```bash
|
||||
root@ds1:~# mc config host add minio http://app:9000 admin iambatman
|
||||
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
|
||||
mc: Successfully created `/root/.mc/share`.
|
||||
@@ -107,11 +108,11 @@ root@ds1:~#
|
||||
|
||||
### Add (readonly) user
|
||||
|
||||
Use mc to add a (readonly or readwrite) user, by running ``` mc admin user add minio <access key> <secret key> <access level>```
|
||||
Use mc to add a (readonly or readwrite) user, by running ```mc admin user add minio <access key> <secret key> <access level>```
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
```bash
|
||||
root@ds1:~# mc admin user add minio spiderman peterparker readonly
|
||||
Added user `spiderman` successfully.
|
||||
root@ds1:~#
|
||||
@@ -119,7 +120,7 @@ root@ds1:~#
|
||||
|
||||
Confirm by listing your users (_admin is excluded from the list_):
|
||||
|
||||
```
|
||||
```bash
|
||||
root@node1:~# mc admin user list minio
|
||||
enabled spiderman readonly
|
||||
root@node1:~#
|
||||
@@ -133,7 +134,7 @@ The simplest permission scheme is "on or off". Either a bucket has a policy, or
|
||||
|
||||
After **no** policy, the most restrictive policy you can attach to a bucket is "download". This policy will allow authenticated users to download contents from the bucket. Apply the "download" policy to a bucket by running ```mc policy download minio/<bucket name>```, i.e.:
|
||||
|
||||
```
|
||||
```bash
|
||||
root@ds1:# mc policy download minio/comics
|
||||
Access permission for `minio/comics` is set to `download`
|
||||
root@ds1:#
|
||||
@@ -154,7 +155,7 @@ I tested the S3 mount using [goofys](https://github.com/kahing/goofys), "a high-
|
||||
|
||||
First, I created ~/.aws/credentials, as follows:
|
||||
|
||||
```
|
||||
```ini
|
||||
[default]
|
||||
aws_access_key_id=spiderman
|
||||
aws_secret_access_key=peterparker
|
||||
@@ -164,7 +165,7 @@ And then I ran (_in the foreground, for debugging_), ```goofys --f -debug_s3 --d
|
||||
|
||||
To permanently mount an S3 bucket using goofys, I'd add something like this to /etc/fstab:
|
||||
|
||||
```
|
||||
```bash
|
||||
goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode=0666 0 0
|
||||
```
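
Before relying on the fstab entry, it's worth exercising it once by hand (the paths below match the example entry above; adjust for your own bucket and mountpoint):

```bash
# Create the mountpoint, mount it via the fstab entry, and confirm the bucket contents appear
mkdir -p /mnt/mountpoint
mount /mnt/mountpoint
ls -la /mnt/mountpoint
```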
|
||||
|
||||
@@ -172,4 +173,4 @@ goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode=
|
||||
[^2]: Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets
|
||||
[^3]: Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -1,207 +0,0 @@
|
||||
hero: Kubernetes. The hero we deserve.
|
||||
|
||||
!!! danger "This recipe is a work in progress"
|
||||
This recipe is **incomplete**, and is featured to align the [sponsors](https://github.com/sponsors/funkypenguin)'s "premix" repository with the cookbook. "_premix_" is a private git repository available to [GitHub sponsors](https://github.com/sponsors/funkypenguin), which includes all the necessary .yml files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `kubectl create -f *.yml` 👍
|
||||
|
||||
So... There may be errors and inaccuracies. Jump into [Discord](http://chat.funkypenguin.co.nz) if you're encountering issues 😁
|
||||
|
||||
# MQTT broker
|
||||
|
||||
I use Elias Kotlyar's [excellent custom firmware](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks) for Xiaomi DaFang/XiaoFang cameras, enabling RTSP, MQTT, motion tracking, and other features, integrating directly with [Home Assistant](/recipes/homeassistant/).
|
||||
|
||||
There's currently a [mysterious bug](https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks/issues/638) though, which prevents TCP communication between Home Assistant and the camera, when MQTT services are enabled on the camera and the mqtt broker runs on the same Raspberry Pi as Home Assistant, using [Hass.io](https://www.home-assistant.io/hassio/).
|
||||
|
||||
A workaround to this bug is to run an MQTT broker **external** to the raspberry pi, which makes the whole problem GoAway(tm). Since an MQTT broker is a single, self-contained container, I've written this recipe as an introduction to our Kubernetes cluster design.
|
||||
|
||||

|
||||
|
||||
[MQTT](https://mqtt.org/faq) stands for MQ Telemetry Transport. It is a publish/subscribe, extremely simple and lightweight messaging protocol, designed for constrained devices and low-bandwidth, high-latency or unreliable networks. The design principles are to minimise network bandwidth and device resource requirements whilst also attempting to ensure reliability and some degree of assurance of delivery. These principles also turn out to make the protocol ideal of the emerging “machine-to-machine” (M2M) or “Internet of Things” world of connected devices, and for mobile applications where bandwidth and battery power are at a premium.
|
||||
|
||||
## Ingredients
|
||||
|
||||
1. A [Kubernetes cluster](/kubernetes/cluster/)
|
||||
|
||||
## Preparation
|
||||
|
||||
### Create data locations
|
||||
|
||||
Although we could simply bind-mount local volumes to a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
|
||||
|
||||
```
|
||||
mkdir /var/data/mqtt
|
||||
```
|
||||
|
||||
### Create namespace
|
||||
|
||||
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the mqtt stack by creating the following .yaml:
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/mqtt/namespace.yml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: mqtt
|
||||
EOF
|
||||
kubectl create -f /var/data/mqtt/namespace.yml
|
||||
```
|
||||
|
||||
### Create persistent volume claim
|
||||
|
||||
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the certbot data:
|
||||
|
||||
```yaml
|
||||
cat <<EOF > /var/data/mqtt/persistent-volumeclaim.yml
|
||||
kind: PersistentVolumeClaim
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: mqtt-volumeclaim
|
||||
namespace: mqtt
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
EOF
|
||||
kubectl create -f /var/data/mqtt/persistent-volumeclaim.yml
|
||||
```
|
||||
|
||||
### Create nodeport service
|
||||
|
||||
I like to expose my services using nodeport (_limited to ports 30000-32767_), and then use an external haproxy load balancer to make these available externally. (_This avoids having to pay per-port charges for a load balancer from the cloud provider - see the illustrative haproxy sketch after the service definition below_)
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/mqtt/service-nodeport.yml
|
||||
kind: Service
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: mqtt-nodeport
|
||||
namespace: mqtt
|
||||
spec:
|
||||
selector:
|
||||
app: mqtt
|
||||
type: NodePort
|
||||
ports:
|
||||
- name: mqtts
|
||||
port: 8883
|
||||
protocol: TCP
|
||||
nodePort: 30883
|
||||
EOF
|
||||
kubectl create -f /var/data/mqtt/service-nodeport.yml
|
||||
```
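For illustration only (_an external haproxy isn't part of this recipe, and the names/IPs below are hypothetical_), the load balancer config to forward public MQTTS traffic to this nodeport might look something like this:

```
frontend mqtts_in
    bind *:8883
    mode tcp
    default_backend k8s_mqtt_nodeport

backend k8s_mqtt_nodeport
    mode tcp
    server node1 <node-ip>:30883 check
```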
|
||||
|
||||
### Create secrets
|
||||
|
||||
It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents.
|
||||
|
||||
```
|
||||
echo -n "myapikeyissosecret" > cloudflare-key.secret
|
||||
echo -n "myemailaddress" > cloudflare-email.secret
|
||||
echo -n "myemailaddress" > letsencrypt-email.secret
|
||||
|
||||
kubectl create secret -n mqtt generic mqtt-credentials \
|
||||
--from-file=cloudflare-key.secret \
|
||||
--from-file=cloudflare-email.secret \
|
||||
--from-file=letsencrypt-email.secret
|
||||
```
|
||||
|
||||
!!! tip "Why use `echo -n`?"
|
||||
Because. See [my blog post here](https://www.funkypenguin.co.nz/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
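A quick demonstration of the difference - without `-n`, the trailing newline becomes part of your secret:

```
$ echo "supersecret" | wc -c
12
$ echo -n "supersecret" | wc -c
11
```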
|
||||
|
||||
## Serving
|
||||
|
||||
### Create deployment
|
||||
|
||||
Now that we have a volume, a service, and a namespace, we can create a deployment for the mqtt pod. Note below the use of volume mounts, environment variables, as well as the secrets.
|
||||
|
||||
--8<-- "premix-cta.md"
|
||||
|
||||
```
|
||||
cat <<EOF > /var/data/mqtt/mqtt.yml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
namespace: mqtt
|
||||
name: mqtt
|
||||
labels:
|
||||
app: mqtt
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: mqtt
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: mqtt
|
||||
spec:
|
||||
containers:
|
||||
- image: funkypenguin/mqtt-certbot-dns
|
||||
imagePullPolicy: Always
|
||||
# only uncomment these to get the container to run so that we can transfer files into the PV
|
||||
# command: [ "/bin/sleep" ]
|
||||
# args: [ "1h" ]
|
||||
env:
|
||||
- name: DOMAIN
|
||||
value: "*.funkypenguin.co.nz"
|
||||
- name: EMAIL
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: mqtt-credentials
|
||||
key: letsencrypt-email.secret
|
||||
- name: CLOUDFLARE_EMAIL
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: mqtt-credentials
|
||||
key: cloudflare-email.secret
|
||||
- name: CLOUDFLARE_KEY
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: mqtt-credentials
|
||||
key: cloudflare-key.secret
|
||||
# uncomment this to test LetsEncrypt validations
|
||||
# - name: TESTCERT
|
||||
# value: "true"
|
||||
name: mqtt
|
||||
resources:
|
||||
requests:
|
||||
memory: "50Mi"
|
||||
cpu: "0.1"
|
||||
volumeMounts:
|
||||
# We need the LE certs to persist across reboots to avoid getting rate-limited (bad, bad)
|
||||
- name: mqtt-volumeclaim
|
||||
mountPath: /etc/letsencrypt
|
||||
# A configmap for the mosquitto.conf file
|
||||
- name: mosquitto-conf
|
||||
mountPath: /mosquitto/conf/mosquitto.conf
|
||||
subPath: mosquitto.conf
|
||||
# A configmap for the mosquitto passwd file
|
||||
- name: mosquitto-passwd
|
||||
mountPath: /mosquitto/conf/passwd
|
||||
subPath: passwd
|
||||
volumes:
|
||||
- name: mqtt-volumeclaim
|
||||
persistentVolumeClaim:
|
||||
claimName: mqtt-volumeclaim
|
||||
- name: mosquitto-conf
|
||||
configMap:
|
||||
name: mosquitto.conf
|
||||
- name: mosquitto-passwd
|
||||
configMap:
|
||||
name: passwd
|
||||
EOF
|
||||
kubectl create -f /var/data/mqtt/mqtt.yml
|
||||
```
|
||||
|
||||
Check that your deployment is running, with `kubectl get pods -n mqtt`. After a minute or so, you should see a "Running" pod, as illustrated below:
|
||||
|
||||
```
|
||||
[davidy:~/Documents/Personal/Projects/mqtt-k8s] 130 % kubectl get pods -n mqtt
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
mqtt-65f4d96945-bjj44 1/1 Running 0 5m
|
||||
[davidy:~/Documents/Personal/Projects/mqtt-k8s] %
|
||||
```
|
||||
|
||||
To actually **use** your new MQTT broker, you'll need to connect to any one of your nodes (`kubectl get nodes -o wide`) on port 30883 (_the nodeport service we created earlier_). More info on that, and a loadbalancer design, to follow shortly :)
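In the meantime, as a quick smoke-test from any machine with the mosquitto clients installed (_substituting a real node IP, and whatever credentials you loaded into the passwd configmap_), something like this should work:

```
mosquitto_sub -h <node-ip> -p 30883 --capath /etc/ssl/certs \
  -u <mqtt-user> -P <mqtt-password> -t 'test/#' -v
```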
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
@@ -1,4 +1,3 @@
|
||||
|
||||
---
|
||||
description: Network resource monitoring tool for quick analysis
|
||||
---
|
||||
@@ -23,7 +22,7 @@ Depending on what you want to monitor, you'll want to install munin-node. On Ubu
|
||||
|
||||
On CentOS Atomic, of course, you can't install munin-node directly, but you can run it as a containerized instance. In this case, you can't use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using:
|
||||
|
||||
```
|
||||
```bash
|
||||
docker run -d --name munin-node --restart=always \
|
||||
--privileged --net=host \
|
||||
-v /:/rootfs:ro \
|
||||
@@ -38,7 +37,7 @@ docker run -d --name munin-node --restart=always \
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/munin:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/munin
|
||||
cd /var/data/munin
|
||||
mkdir -p {log,lib,run,cache}
|
||||
@@ -48,7 +47,7 @@ mkdir -p {log,lib,run,cache}
|
||||
|
||||
Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an [oauth2_proxy](/reference/oauth_proxy/) to protect munin, and set at a **minimum** the `MUNIN_USER`, `MUNIN_PASSWORD`, and `NODES` values:
|
||||
|
||||
```
|
||||
```bash
|
||||
# Use these if you plan to protect the webUI with an oauth_proxy
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
@@ -132,4 +131,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user and password pass
|
||||
|
||||
[^1]: If you wanted to expose the Munin UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the munin container. You'd also need to add the traefik_public network to the munin container.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -5,7 +5,8 @@ description: Share docs. Backup files. Share stuff.
|
||||
# NextCloud
|
||||
|
||||
[NextCloud](https://www.nextcloud.org/) (_a [fork of OwnCloud](https://owncloud.org/blog/owncloud-statement-concerning-the-formation-of-nextcloud-by-frank-karlitschek/), led by original developer Frank Karlitschek_) is a suite of client-server software for creating and using file hosting services. It is functionally similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server.
|
||||
- https://en.wikipedia.org/wiki/Nextcloud
|
||||
|
||||
- <https://en.wikipedia.org/wiki/Nextcloud>
|
||||
|
||||

|
||||
|
||||
@@ -19,7 +20,7 @@ This recipe is based on the official NextCloud docker image, but includes seprat
|
||||
|
||||
We'll need several directories for [static data](/reference/data_layout/#static-data) to bind-mount into our container, so create them in /var/data/nextcloud (_so that they can be [backed up](/recipes/duplicity/)_)
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/nextcloud
|
||||
cd /var/data/nextcloud
|
||||
mkdir -p {html,apps,config,data,database-dump}
|
||||
@@ -27,17 +28,17 @@ mkdir -p {html,apps,config,data,database-dump}
|
||||
|
||||
Now make **more** directories for [runtime data](/reference/data_layout/#runtime-data) (_so that they can be **not** backed-up_):
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/runtime/nextcloud
|
||||
cd /var/data/runtime/nextcloud
|
||||
mkdir -p {db,redis}
|
||||
```
|
||||
|
||||
|
||||
### Prepare environment
|
||||
|
||||
Create nextcloud.env, and populate with the following variables
|
||||
```
|
||||
|
||||
```bash
|
||||
NEXTCLOUD_ADMIN_USER=admin
|
||||
NEXTCLOUD_ADMIN_PASSWORD=FVuojphozxMVyaYCUWomiP9b
|
||||
MYSQL_HOST=db
|
||||
@@ -51,7 +52,7 @@ MYSQL_PASSWORD=set to something secure>
|
||||
|
||||
Now create a **separate** nextcloud-db-backup.env file, to capture the environment variables necessary to perform the backup. (_If the same variables are shared with the mariadb container, they [cause issues](https://discourse.geek-kitchen.funkypenguin.co.nz/t/nextcloud-funky-penguins-geek-cookbook/254/3?u=funkypenguin) with database access_)
|
||||
|
||||
````
|
||||
````bash
|
||||
# For database backup (keep 7 days daily backups)
|
||||
MYSQL_PWD=<set to something secure, same as MYSQL_ROOT_PASSWORD above>
|
||||
MYSQL_USER=root
|
||||
@@ -163,8 +164,8 @@ Log into your new instance at https://**YOUR-FQDN**, with user "admin" and the p
|
||||
|
||||
To make NextCloud [a little snappier](https://docs.nextcloud.com/server/13/admin_manual/configuration_server/caching_configuration.html), edit ```/var/data/nextcloud/config/config.php``` (_now that it's been created on the first container launch_), and add the following:
|
||||
|
||||
```
|
||||
'redis' => array(
|
||||
```bash
|
||||
'redis' => array(
|
||||
'host' => 'redis',
|
||||
'port' => 6379,
|
||||
),
|
||||
@@ -178,31 +179,31 @@ Huzzah! NextCloud supports [service discovery for CalDAV/CardDAV](https://tools.
|
||||
|
||||
We (_and anyone else using the [NextCloud Docker image](https://hub.docker.com/_/nextcloud/)_) are using an SSL-terminating reverse proxy ([Traefik](/ha-docker-swarm/traefik/)) in front of our NextCloud container. In fact, it's not **possible** to setup SSL **within** the NextCloud container.
|
||||
|
||||
When using a reverse proxy, your device requests a URL from your proxy (https://nextcloud.batcave.com/.well-known/caldav), and the reverse proxy then passes that request **unencrypted** to the internal URL of the NextCloud instance (i.e., http://172.16.12.123/.well-known/caldav)
|
||||
When using a reverse proxy, your device requests a URL from your proxy (<https://nextcloud.batcave.com/.well-known/caldav>), and the reverse proxy then passes that request **unencrypted** to the internal URL of the NextCloud instance (i.e., <http://172.16.12.123/.well-known/caldav>)
|
||||
|
||||
The Apache webserver on the NextCloud container (_knowing it was spoken to via HTTP_), responds with a 301 redirect to http://nextcloud.batcave.com/remote.php/dav/. See the problem? You requested an **HTTPS** (_encrypted_) URL, and in return, you received a redirect to an **HTTP** (_unencrypted_) URL. Any sensible client (_iOS included_) will refuse such shenanigans.
|
||||
The Apache webserver on the NextCloud container (_knowing it was spoken to via HTTP_), responds with a 301 redirect to <http://nextcloud.batcave.com/remote.php/dav/>. See the problem? You requested an **HTTPS** (_encrypted_) URL, and in return, you received a redirect to an **HTTP** (_unencrypted_) URL. Any sensible client (_iOS included_) will refuse such shenanigans.
|
||||
|
||||
To correct this, we need to tell NextCloud to always redirect the .well-known URLs to an HTTPS location. This can only be done **after** deploying NextCloud, since it's only on first launch of the container that the .htaccess file is created in the first place.
|
||||
|
||||
To make NextCloud service discovery work with Traefik reverse proxy, edit ```/var/data/nextcloud/html/.htaccess```, and change this:
|
||||
|
||||
```
|
||||
```bash
|
||||
RewriteRule ^\.well-known/carddav /remote.php/dav/ [R=301,L]
|
||||
RewriteRule ^\.well-known/caldav /remote.php/dav/ [R=301,L]
|
||||
```
|
||||
|
||||
To this:
|
||||
|
||||
```
|
||||
```bash
|
||||
RewriteRule ^\.well-known/carddav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
|
||||
RewriteRule ^\.well-known/caldav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
|
||||
```
|
||||
|
||||
Then restart your container with ```docker service update nextcloud_nextcloud --force``` to restart apache.
|
||||
|
||||
You can test for success by running ```curl -i https://nextcloud.batcave.org/.well-known/carddav```. You should get a 301 redirect to your equivalent of https://nextcloud.batcave.org/remote.php/dav/, as below:
|
||||
You can test for success by running ```curl -i https://nextcloud.batcave.org/.well-known/carddav```. You should get a 301 redirect to your equivalent of <https://nextcloud.batcave.org/remote.php/dav/>, as below:
|
||||
|
||||
```
|
||||
```bash
|
||||
[davidy:~] % curl -i https://nextcloud.batcave.org/.well-known/carddav
|
||||
HTTP/2 301
|
||||
content-type: text/html; charset=iso-8859-1
|
||||
@@ -215,4 +216,4 @@ Note that this .htaccess can be overwritten by NextCloud, and you may have to re
|
||||
[^1]: Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912).
|
||||
[^2]: I'm [not the first user](https://github.com/nextcloud/docker/issues/528) to stumble across the service discovery bug with reverse proxies.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -6,11 +6,10 @@ description: CGM data with an API, for diabetic quality-of-life improvements
|
||||
|
||||
Nightscout is "*...an open source, DIY project that allows real time access to a CGM data via personal website, smartwatch viewers, or apps and widgets available for smartphones*"
|
||||
|
||||
!!! question "Yeah, but what's a CGM?"
|
||||
A CGM is a "continuous glucose monitor" :drop_of_blood: - If you have a blood-sugar-related disease (*i.e. diabetes*), you might wear a CGM in order to retrieve blood-glucose level readings, to inform your treatment.
|
||||
|
||||
NightScout frees you from the CGM's supplier's limited and proprietary app, and unlocks advanced charting, alarming, and sharing features :muscle:
|
||||
!!! question "Yeah, but what's a CGM?"
|
||||
A CGM is a "continuous glucose monitor" :drop_of_blood: - If you have a blood-sugar-related disease (*i.e. diabetes*), you might wear a CGM in order to retrieve blood-glucose level readings, to inform your treatment.
|
||||
|
||||
NightScout frees you from the CGM's supplier's limited and proprietary app, and unlocks advanced charting, alarming, and sharing features :muscle:
|
||||
|
||||

|
||||
|
||||
@@ -25,14 +24,15 @@ Most NightScout users will deploy to Heroko, using MongoDB Atlas, which is a [we
|
||||
### Setup data locations
|
||||
|
||||
First we create a directory to hold Nightscout's database, as well as database backups:
|
||||
```
|
||||
|
||||
```bash
|
||||
mkdir -p /var/data/runtime/nightscout/database # excluded from automated backups
|
||||
mkdir -p /var/data/nightscout/database # included in automated backups
|
||||
```
|
||||
|
||||
### Create env file
|
||||
|
||||
NightScout is configured entirely using environment variables, so create something like this as `/var/data/config/nightscout/nightscout.env`:
|
||||
NightScout is configured entirely using environment variables, so create something like this as `/var/data/config/nightscout/nightscout.env`:
|
||||
|
||||
!!! warning
|
||||
Your variables may vary significantly from what's illustrated below, and it's best to read up and understand exactly what each option does.
|
||||
@@ -164,7 +164,6 @@ networks:
|
||||
|
||||
Launch the nightscout stack by running ```docker stack deploy nightscout -c <path -to-docker-compose.yml>```
|
||||
|
||||
|
||||
[^1]: Most of the time, you'll need an app which syncs to Nightscout, and these apps won't support OIDC auth, so this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/). Instead, NightScout is secured entirely with your `API_SECRET` above (*although it is possible to add more users once you're an admin*)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -30,7 +30,7 @@ What you'll end up with is a directory structure which will allow integration wi
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/openldap:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/openldap/openldap
|
||||
mkdir /var/data/runtime/openldap/
|
||||
```
|
||||
@@ -42,7 +42,7 @@ mkdir /var/data/runtime/openldap/
|
||||
|
||||
Create /var/data/openldap/openldap.env, and populate with the following variables, customized for your own domain structure. Take care with LDAP_DOMAIN, this is core to your directory structure, and can't easily be changed later.
|
||||
|
||||
```
|
||||
```bash
|
||||
LDAP_DOMAIN=batcave.gotham
|
||||
LDAP_ORGANISATION=BatCave Inc
|
||||
LDAP_ADMIN_PASSWORD=supermansucks
|
||||
@@ -67,7 +67,7 @@ Create ```/var/data/openldap/lam/config/config.cfg``` as follows:
|
||||
|
||||
???+ note "Much scroll, very text. Click here to collapse it for better readability"
|
||||
|
||||
```
|
||||
```bash
|
||||
# password to add/delete/rename configuration profiles (default: lam)
|
||||
password: {SSHA}D6AaX93kPmck9wAxNlq3GF93S7A= R7gkjQ==
|
||||
|
||||
@@ -137,7 +137,7 @@ Create yours profile (_you chose a default profile in config.cfg above, remember
|
||||
|
||||
???+ note "Much scroll, very text. Click here to collapse it for better readability"
|
||||
|
||||
```
|
||||
```bash
|
||||
# LDAP Account Manager configuration
|
||||
#
|
||||
# Please do not modify this file manually. The configuration can be done completely by the LAM GUI.
|
||||
@@ -392,7 +392,7 @@ networks:
|
||||
|
||||
Create **another** stack config file (```/var/data/config/openldap/auth.yml```) containing just the auth_internal network, and a dummy container:
|
||||
|
||||
```
|
||||
```yaml
|
||||
version: "3.2"
|
||||
|
||||
# What is this?
|
||||
@@ -417,9 +417,6 @@ networks:
|
||||
- subnet: 172.16.39.0/24
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
## Serving
|
||||
|
||||
### Launch OpenLDAP stack
|
||||
@@ -436,4 +433,4 @@ Create your users using the "**New User**" button.
|
||||
|
||||
[^1]: [The KeyCloak](/recipes/keycloak/authenticate-against-openldap/) recipe illustrates how to integrate KeyCloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -21,7 +21,7 @@ Using a smartphone app, OwnTracks allows you to collect and analyse your own loc
|
||||
|
||||
We'll need a directory to store OwnTracks' data, so create ```/var/data/owntracks```:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/owntracks
|
||||
```
|
||||
|
||||
@@ -29,7 +29,7 @@ mkdir /var/data/owntracks
|
||||
|
||||
Create owntracks.env, and populate with the following variables
|
||||
|
||||
```
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -107,4 +107,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
|
||||
[^2]: I'm using my own image rather than owntracks/recorderd, because of a [potentially swarm-breaking bug](https://github.com/owntracks/recorderd/issues/14) I found in the official container. If this gets resolved (_or if I was mistaken_) I'll update the recipe accordingly.
|
||||
[^3]: By default, you'll get a fully accessible, unprotected MQTT broker. This may not be suitable for public exposure, so you'll want to look into securing mosquitto with TLS and ACLs.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -8,7 +8,6 @@ Paper is a nightmare. Environmental issues aside, there’s no excuse for it in
|
||||
|
||||

|
||||
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
## Preparation
|
||||
@@ -17,7 +16,7 @@ Paper is a nightmare. Environmental issues aside, there’s no excuse for it in
|
||||
|
||||
We'll need a folder to store a docker-compose configuration file and an associated environment file. If you're following my filesystem layout, create `/var/data/config/paperless` (*for the config*). We'll also need to create `/var/data/paperless` and a few subdirectories (*for the metadata*). Lastly, we need a directory for the database backups to reside in as well.
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/config/paperless
|
||||
mkdir /var/data/paperless
|
||||
mkdir /var/data/paperless/consume
|
||||
@@ -29,13 +28,13 @@ mkdir /var/data/paperless/database-dump
|
||||
```
|
||||
|
||||
!!! question "Which is it, Paperless or Paperless-NG?"
|
||||
Technically the name of the application is `paperless-ng`. However, the [original Paperless project](https://github.com/the-paperless-project/paperless) has been archived and the author recommends Paperless NG. So, to save some typing, we'll just call it "Paperless". Additionally, if you use the automated tooling in the Premix Repo, Ansible *really* doesn't like the hyphen.
|
||||
Technically the name of the application is `paperless-ng`. However, the [original Paperless project](https://github.com/the-paperless-project/paperless) has been archived and the author recommends Paperless NG. So, to save some typing, we'll just call it "Paperless". Additionally, if you use the automated tooling in the Premix Repo, Ansible *really* doesn't like the hyphen.
|
||||
|
||||
### Create environment
|
||||
|
||||
To stay consistent with the other recipes, we'll create a file to store environment variables in. There's more than one service in this stack, but we'll only create one environment file, which will be used by the web server (more on this later).
|
||||
|
||||
```
|
||||
```bash
|
||||
cat << EOF > /var/data/config/paperless/paperless.env
|
||||
PAPERLESS_TIME_ZONE:<timezone>
|
||||
PAPERLESS_ADMIN_USER=<admin_user>
|
||||
@@ -48,6 +47,7 @@ PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://gotenberg:3000
|
||||
PAPERLESS_TIKA_ENDPOINT=http://tika:9998
|
||||
EOF
|
||||
```
|
||||
|
||||
You'll need to replace some of the text in the snippet above:
|
||||
|
||||
* `<timezone>` - Replace with an entry from [the timezone database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) (eg: America/New_York)
|
||||
@@ -158,13 +158,14 @@ networks:
|
||||
- subnet: 172.16.58.0/24
|
||||
|
||||
```
|
||||
|
||||
You'll notice that there are several items under "services" in this stack. Let's take a look at what each one does:
|
||||
|
||||
* broker - Redis server that other services use to share data
|
||||
* webserver - The UI that you will use to add and view documents, edit document metadata, and configure the application settings.
|
||||
* gotenberg - Tool that facilitates converting MS Office documents, HTML, Markdown and other document types to PDF
|
||||
* tika - Extracts text and metadata from office documents and other non-PDF formats
|
||||
* db - PostgreSQL database engine to store metadata for all the documents. [^2]
|
||||
* db - PostgreSQL database engine to store metadata for all the documents. [^2]
|
||||
* db-backup - Service to dump the PostgreSQL database to a backup file on disk once per day
|
||||
|
||||
## Serving
|
||||
|
||||
@@ -6,7 +6,6 @@ description: ML-powered private photo hosting
|
||||
|
||||
[Photoprism™](https://github.com/photoprism/photoprism) "is a server-based application for browsing, organizing and sharing your personal photo collection. It makes use of the latest technologies to automatically tag and find pictures without getting in your way. Say goodbye to solutions that force you to upload your visual memories to the cloud."
|
||||
|
||||
|
||||

|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
@@ -16,13 +15,14 @@ description: ML-powered private photo hosting
|
||||
### Setup data locations
|
||||
|
||||
First we need a folder to hold the photoprism config file:
|
||||
```
|
||||
|
||||
```bash
|
||||
mkdir /var/data/photoprism/config
|
||||
```
|
||||
|
||||
We'll also need a location to store photoprism thumbnails. Since these can be recreated at any time (although, depending on your collection size, that could take a while), we store them in a "non-backed-up" folder:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/runtime/photoprism/cache
|
||||
```
|
||||
|
||||
@@ -36,7 +36,7 @@ In order to be able to import/export files from / to the originals folder make
|
||||
|
||||
Photoprism ships with its own built-in database, but if your collection is big (10K photos or more), performance is better with an external database instance. We'll use MariaDB, so we need folders for running and backing up the database:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/runtime/photoprism/db
|
||||
mkdir /var/data/photoprism/database-dump
|
||||
```
|
||||
@@ -45,7 +45,7 @@ mkdir /var/data/photoprism/database-dump
|
||||
|
||||
Create ```photoprism.env```, and populate with the following variables, changing the passwords to something secure:
|
||||
|
||||
```
|
||||
```bash
|
||||
PHOTOPRISM_URL=https://photoprism.example.com
|
||||
PHOTOPRISM_TITLE=PhotoPrism
|
||||
PHOTOPRISM_SUBTITLE=Browse your life
|
||||
@@ -77,7 +77,7 @@ MYSQL_DATABASE=photoprism
|
||||
|
||||
Now create a **separate** photoprism-db-backup.env file, to capture the environment variables necessary to perform the backup. (_If the same variables are shared with the mariadb container, they [cause issues](https://discourse.geek-kitchen.funkypenguin.co.nz/t/nextcloud-funky-penguins-geek-cookbook/254/3?u=funkypenguin) with database access_)
|
||||
|
||||
````
|
||||
````bash
|
||||
# For database backup (keep 7 days daily backups)
|
||||
MYSQL_PWD=<set to something secure, same as MYSQL_ROOT_PASSWORD above>
|
||||
MYSQL_USER=root
|
||||
@@ -169,4 +169,4 @@ Browse to your new browser-cli-terminal at https://**YOUR-FQDN**, with user "adm
|
||||
|
||||
[^1]: Once it's running, you'll probably want to launch a scan to index your original photos. Go to *library -> index* and do a complete rescan (it will take a while, depending on your collection size)
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -28,7 +28,7 @@ Enter phpIPAM. A tool designed to help home keeps as well as large organisations
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in `/var/data/phpipam`:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/phpipam/databases-dump -p
|
||||
mkdir /var/data/runtime/phpipam -p
|
||||
```
|
||||
@@ -37,7 +37,7 @@ mkdir /var/data/runtime/phpipam -p
|
||||
|
||||
Create `phpipam.env`, and populate with the following variables
|
||||
|
||||
```
|
||||
```bash
|
||||
# Setup for github, phpipam application
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
@@ -62,7 +62,7 @@ BACKUP_FREQUENCY=1d
|
||||
|
||||
Additionally, create `phpipam-backup.env`, and populate with the following variables:
|
||||
|
||||
```
|
||||
```bash
|
||||
# For MariaDB/MySQL database
|
||||
MYSQL_ROOT_PASSWORD=imtoosecretformyshorts
|
||||
MYSQL_DATABASE=phpipam
|
||||
@@ -74,8 +74,6 @@ BACKUP_NUM_KEEP=7
|
||||
BACKUP_FREQUENCY=1d
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
@@ -161,4 +159,4 @@ Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen pr
|
||||
|
||||
[^1]: If you wanted to expose the phpIPAM UI directly, you could remove the `traefik.http.routers.api.middlewares` label from the app container :thumbsup:
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -16,7 +16,7 @@ description: Play back all your media on all your devices
|
||||
|
||||
We'll need a directory to bind-mount into our container for Plex to store its library, so create /var/data/plex:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/plex
|
||||
```
|
||||
|
||||
@@ -24,7 +24,7 @@ mkdir /var/data/plex
|
||||
|
||||
Create plex.env, and populate with the following variables. Set PUID and PGID to the UID and GID of the user who owns your media files on the local filesystem
|
||||
|
||||
```
|
||||
```bash
|
||||
EDGE=1
|
||||
VERSION=latest
|
||||
PUID=42
|
||||
@@ -87,7 +87,7 @@ Launch the Plex stack by running ```docker stack deploy plex -c <path -to-docker
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN** (You'll need to setup a plex.tv login for remote access / discovery to work from certain clients)
|
||||
|
||||
[^1]: Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via https://plex.tv/web
|
||||
[^1]: Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via <https://plex.tv/web>
|
||||
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -7,7 +7,7 @@ description: A UI to make Docker less geeky
|
||||
!!! tip
|
||||
Some time after originally publishing this recipe, I had the opportunity to meet the [Portainer team](https://www.reseller.co.nz/article/682233/kiwi-startup-portainer-io-closes-1-2m-seed-round/), who are based out of Auckland, New Zealand. We now have an ongoing friendly working relationship. Portainer is my [GitHub Sponsor][github_sponsor] :heart:, and in return, I maintain their [official Kubernetes helm charts](https://github.com/portainer/k8s)! :thumbsup:
|
||||
|
||||
[Portainer](https://portainer.io/) is a lightweight sexy UI for visualizing your docker environment. It also happens to integrate well with Docker Swarm clusters, which makes it a great fit for our stack.
|
||||
[Portainer](https://portainer.io/) is a lightweight sexy UI for visualizing your docker environment. It also happens to integrate well with Docker Swarm clusters, which makes it a great fit for our stack.
|
||||
|
||||
Portainer attempts to take the "geekiness" out of containers, by wrapping all the jargon and complexity in a shiny UI and some simple abstractions. It's a great addition to any stack, especially if you're just starting your containerization journey!
|
||||
|
||||
@@ -21,7 +21,7 @@ Portainer attempts to take the "geekiness" out of containers, by wrapping all th
|
||||
|
||||
Create a folder to store portainer's persistent data:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/portainer
|
||||
```
|
||||
|
||||
@@ -115,4 +115,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be prompted to set y
|
||||
|
||||
[^1]: There are [some shenanigans](https://www.reddit.com/r/docker/comments/au9wnu/linuxserverio_templates_for_portainer/) you can do to install LinuxServer.io templates in Portainer. Don't go crying to them for support though! :crying_cat_face:
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -16,7 +16,7 @@ PrivateBin is a minimalist, open source online pastebin where the server (can) h
|
||||
|
||||
We'll need a single location to bind-mount into our container, so create /var/data/privatebin, and make it world-writable (_there might be a more secure way to do this!_)
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/privatebin
|
||||
chmod 777 /var/data/privatebin/
|
||||
```
|
||||
@@ -59,4 +59,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
|
||||
[^1]: The [PrivateBin repo](https://github.com/PrivateBin/PrivateBin/blob/master/INSTALL.md) explains how to tweak configuration options, or to use a database instead of file storage, if your volume justifies it :)
|
||||
[^2]: The inclusion of PrivateBin was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gerry!!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -32,12 +32,13 @@ Features include:
|
||||
|
||||
Since we'll start with a basic Realms install, let's just create a single directory to hold the realms (SQLite) data:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/realms/
|
||||
```
|
||||
|
||||
Create realms.env, and populate with the following variables (_if you intend to use an [oauth_proxy](/reference/oauth_proxy) to double-secure your installation, which I recommend_)
|
||||
```
|
||||
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -106,4 +107,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate against oauth_
|
||||
[^1]: If you wanted to expose the Realms UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the realms container. You'd also need to add the traefik_public network to the realms container.
|
||||
[^2]: The inclusion of Realms was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -6,6 +6,7 @@ description: Don't be like Cameron. Back up your shizz.
|
||||
|
||||
Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.
|
||||
|
||||
<!-- markdownlint-disable MD033 -->
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
[Restic](https://restic.net/) is a backup program intended to be easy, fast, verifiable, secure, efficient, and free. Restic supports a range of backup targets, including local disk, [SFTP](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#sftp), [S3](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#amazon-s3) (*or compatible APIs like [Minio](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#minio-server)*), [Backblaze B2](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#backblaze-b2), [Azure](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#microsoft-azure-blob-storage), [Google Cloud Storage](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#google-cloud-storage), and zillions of others via [rclone](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#other-services-via-rclone).
|
||||
@@ -23,7 +24,7 @@ Restic is one of the more popular open-source backup solutions, and is often [co
|
||||
|
||||
We'll need a data location to bind-mount persistent config (*an exclusion list*) into our container, so create them as below:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/restic/
|
||||
mkdir -p /var/data/config/restic
|
||||
echo /var/data/runtime >> /var/data/restic/restic.exclude
|
||||
@@ -36,7 +37,7 @@ echo /var/data/runtime >> /var/data/restic/restic.exclude
|
||||
|
||||
Create `/var/data/config/restic/restic-backup.env`, and populate with the following variables:
|
||||
|
||||
```
|
||||
```bash
|
||||
# run on startup, otherwise just on cron
|
||||
RUN_ON_STARTUP=true
|
||||
|
||||
@@ -70,7 +71,7 @@ RESTIC_FORGET_ARGS=--keep-daily 7 --keep-monthly 12
|
||||
|
||||
Create `/var/data/config/restic/restic-prune.env`, and populate with the following variables:
|
||||
|
||||
```
|
||||
```bash
|
||||
# run on startup, otherwise just on cron
|
||||
RUN_ON_STARTUP=false
|
||||
|
||||
@@ -98,7 +99,6 @@ RESTIC_PASSWORD=<repo_password>
|
||||
!!! question "Why create two separate .env files?"
|
||||
Although there's duplication involved, maintaining 2 files for the two services within the stack keeps it clean, and allows you to potentially alter the behaviour of one service without impacting the other in future
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3) in `/var/data/restic/restic.yml` , something like this:
|
||||
@@ -144,7 +144,7 @@ networks:
|
||||
|
||||
Launch the Restic stack by running `docker stack deploy restic -c <path -to-docker-compose.yml>`, and watch the logs by running `docker service logs restic_backup` - you should see something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
root@raphael:~# docker service logs restic_backup -f
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Checking configured repository '<repo_name>' ...
|
||||
restic_backup.1.9sii77j9jf0x@leonardo | Fatal: unable to open config file: Stat: stat <repo_name>/config: no such file or directory
|
||||
@@ -175,14 +175,14 @@ Repeat after me : "**It's not a backup unless you've tested a restore**"
|
||||
|
||||
The simplest way to test your restore is to run the container once, using the variables you're already prepared, with custom arguments, as follows:
|
||||
|
||||
```
|
||||
```bash
|
||||
docker run --rm -it --name restic-restore --env-file /var/data/config/restic/restic-backup.env \
|
||||
-v /tmp/restore:/restore mazzolino/restic restore latest --target /restore
|
||||
```
|
||||
|
||||
In my example:
|
||||
|
||||
```
|
||||
```bash
|
||||
root@raphael:~# docker run --rm -it --name restic-restore --env-file /var/data/config/restic/restic-backup.env \
|
||||
> -v /tmp/restore:/restore mazzolino/restic restore latest --target /restore
|
||||
Unable to find image 'mazzolino/restic:latest' locally
|
||||
@@ -199,9 +199,8 @@ root@raphael:~#
|
||||
!!! tip "Restoring a subset of data"
|
||||
The example above restores the **entire** `/var/data` folder (*minus any exclusions*). To restore just a subset of data, add the `-i <regex>` argument, i.e. `-i plex`
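For example, a sketch reusing the env file from above with a hypothetical `plex` filter:

```bash
docker run --rm -it --name restic-restore --env-file /var/data/config/restic/restic-backup.env \
   -v /tmp/restore:/restore mazzolino/restic restore latest --target /restore -i plex
```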
|
||||
|
||||
|
||||
[^1]: The `/var/data/restic/restic.exclude` exists to provide you with a way to exclude data you don't care to backup.
|
||||
[^2]: A recent benchmark of various backup tools, including Restic, can be found [here](https://forum.duplicati.com/t/big-comparison-borg-vs-restic-vs-arq-5-vs-duplicacy-vs-duplicati/9952).
|
||||
[^3]: A paid-for UI for Restic can be found [here](https://forum.restic.net/t/web-ui-for-restic/667/26).
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -10,7 +10,6 @@ Do you hate having to access multiple sites to view specific content? [RSS-Bridg
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
|
||||
## Preparation
|
||||
|
||||
### Setup data locations
|
||||
|
||||
@@ -18,10 +18,9 @@ cAdvisor (Container Advisor) provides container users an understanding of the re
|
||||
* [Alert Manager](https://github.com/prometheus/alertmanager) Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations such as email, Slack, etc.
|
||||
* [Unsee](https://github.com/cloudflare/unsee) is an alert dashboard for Alert Manager
|
||||
|
||||
|
||||
## How does this magic work?
|
||||
|
||||
I'd encourage you to spend some time reading https://github.com/stefanprodan/swarmprom. Stefan has included detailed explanations about which elements perform which functions, as well as how to customize your stack. (_This is only a starting point, after all_)
|
||||
I'd encourage you to spend some time reading <https://github.com/stefanprodan/swarmprom>. Stefan has included detailed explanations about which elements perform which functions, as well as how to customize your stack. (_This is only a starting point, after all_)
|
||||
|
||||
--8<-- "recipe-standard-ingredients.md"
|
||||
|
||||
@@ -37,7 +36,7 @@ Grafana includes decent login protections, but from what I can see, Prometheus,
|
||||
|
||||
Edit (_or create, depending on your OS_) /etc/docker/daemon.json, and add the following, to enable the experimental export of metrics to Prometheus:
|
||||
|
||||
```
|
||||
```json
|
||||
{
|
||||
"metrics-addr" : "0.0.0.0:9323",
|
||||
"experimental" : true
|
||||
@@ -46,12 +45,11 @@ Edit (_or create, depending on your OS_) /etc/docker/daemon.json, and add the fo
|
||||
|
||||
Restart docker with ```systemctl restart docker```
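If you'd like to confirm that the metrics endpoint is up before continuing, running something like this on the docker host should return Prometheus-formatted metrics:

```bash
curl -s http://localhost:9323/metrics | head
```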
|
||||
|
||||
|
||||
### Setup and populate data locations
|
||||
|
||||
We'll need several files to bind-mount into our containers, so create directories for them and get the latest copies:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/swarmprom/dockerd-exporter/
|
||||
cd /var/data/swarmprom/dockerd-exporter/
|
||||
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/dockerd-exporter/Caddyfile
|
||||
@@ -74,7 +72,8 @@ chown nobody:nogroup /var/data/runtime/prometheus
|
||||
Grafana will make all the data we collect from our swarm beautiful.
|
||||
|
||||
Create /var/data/swarmprom/grafana.env, and populate with the following variables
|
||||
```
|
||||
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -392,4 +391,4 @@ Log into your new grafana instance, check out your beautiful graphs. Move onto d
|
||||
|
||||
[^1]: Pay close attention to the ```grafana.env``` config. If you encounter errors about ```basic auth failed```, or failed CSS, it's likely due to misconfiguration of one of the grafana environment variables.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -84,7 +84,6 @@ networks:
|
||||
|
||||
Launch the Linx stack by running ```docker stack deploy linx -c <path -to-docker-compose.yml>```
|
||||
|
||||
|
||||
[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/ha-docker-swarm/traefik-forward-auth/).
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -20,7 +20,7 @@ description: Geeky RSS reader
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/ttrss:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/ttrss
|
||||
cd /var/data/ttrss
|
||||
mkdir -p {database,database-dump}
|
||||
@@ -32,7 +32,7 @@ cd /var/data/config/ttrss
|
||||
|
||||
Create ttrss.env, and populate with the following variables, customizing at least the database password (POSTGRES_PASSWORD **and** DB_PASS) and the TTRSS_SELF_URL to point to your installation.
|
||||
|
||||
```
|
||||
```bash
|
||||
# Variables for postgres:latest
|
||||
POSTGRES_USER=ttrss
|
||||
POSTGRES_PASSWORD=mypassword
|
||||
@@ -125,4 +125,4 @@ Launch the TTRSS stack by running ```docker stack deploy ttrss -c <path -to-dock
|
||||
|
||||
Log into your new instance at https://**YOUR-FQDN** - the first user you create will be an administrative user.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -22,7 +22,7 @@ There are plugins for [Chrome](https://chrome.google.com/webstore/detail/wallaba
|
||||
|
||||
We need a filesystem location to store images that Wallabag downloads from the original sources, to re-display when you read your articles, as well as nightly database dumps (_which you **should [backup](/recipes/duplicity/)**_), so create something like this:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir -p /var/data/wallabag
|
||||
cd /var/data/wallabag
|
||||
mkdir -p {images,db-dump}
|
||||
@@ -31,7 +31,8 @@ mkdir -p {images,db-dump}
|
||||
### Prepare environment
|
||||
|
||||
Create wallabag.env, and populate with the following variables. The only variable you **have** to change is SYMFONY__ENV__DOMAIN_NAME - this **must** be the URL that your Wallabag instance will be available at (_else you'll have no CSS_)
|
||||
```
|
||||
|
||||
```bash
|
||||
# For the DB container
|
||||
POSTGRES_PASSWORD=wallabag
|
||||
POSTGRES_USER=wallabag
|
||||
@@ -60,7 +61,7 @@ OAUTH2_PROXY_COOKIE_SECRET=
|
||||
|
||||
Now create wallabag-backup.env in the same folder, with the following contents. (_This is necessary to prevent environment variables required for backup from breaking the DB container_)
|
||||
|
||||
```
|
||||
```bash
|
||||
# For database backups
|
||||
PGUSER=wallabag
|
||||
PGPASSWORD=wallabag
|
||||
@@ -69,7 +70,6 @@ BACKUP_NUM_KEEP=7
|
||||
BACKUP_FREQUENCY=1d
|
||||
```
|
||||
|
||||
|
||||
### Setup Docker Swarm
|
||||
|
||||
Create a docker swarm config file in docker-compose syntax (v3), something like this:
|
||||
@@ -191,4 +191,4 @@ Even with all these elements in place, you still need to enable Redis under Inte
|
||||
[^1]: If you wanted to expose the Wallabag UI directly (_required for the iOS/Android apps_), you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wallabag container. You'd also need to add the traefik_public network to the wallabag container. I found the iOS app to be unreliable and clunky, so elected to leave my oauth_proxy enabled, and to simply use the webUI on my mobile devices instead. YMMV.
|
||||
[^2]: I've not tested the email integration, but you'd need an SMTP server listening on port 25 (_since we can't change the port_) to use it
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -23,7 +23,7 @@ There's a [video](https://www.youtube.com/watch?v=N3iMLwCNOro) of the developer
|
||||
|
||||
We'll need several directories to bind-mount into our container, so create them in /var/data/wekan:
|
||||
|
||||
```
|
||||
```bash
|
||||
mkdir /var/data/wekan
|
||||
cd /var/data/wekan
|
||||
mkdir -p {wekan-db,wekan-db-dump}
|
||||
@@ -36,7 +36,7 @@ You'll need to know the following:
|
||||
1. Choose an oauth provider, and obtain a client ID and secret
|
||||
2. Create wekan.env, and populate with the following variables
|
||||
|
||||
```
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -139,4 +139,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
|
||||
|
||||
[^1]: If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wekan container. You'd also need to add the traefik network to the wekan container.
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"
|
||||
|
||||
@@ -25,7 +25,8 @@ Here are some other possible use cases:
|
||||
### Prepare environment
|
||||
|
||||
Create wetty.env, and populate with the following variables per the [oauth_proxy](/reference/oauth_proxy/) instructions:
|
||||
```
|
||||
|
||||
```bash
|
||||
OAUTH2_PROXY_CLIENT_ID=
|
||||
OAUTH2_PROXY_CLIENT_SECRET=
|
||||
OAUTH2_PROXY_COOKIE_SECRET=
|
||||
@@ -94,4 +95,4 @@ Browse to your new browser-cli-terminal at https://**YOUR-FQDN**. Authenticate w
|
||||
[^1]: You could set SSHHOST to the IP of the "docker0" interface on your host, which is normally 172.17.0.1. (_Or run ```/sbin/ip route|awk '/default/ { print $3 }'``` in the container_) This would then provide you the ability to remote-manage your swarm with only web access to Wetty.
|
||||
[^2]: The inclusion of Wetty was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!
|
||||
|
||||
--8<-- "recipe-footer.md"
|
||||
--8<-- "recipe-footer.md"