mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-12 17:26:19 +00:00
Add authentik, tidy up recipe-footer
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
@@ -90,4 +90,4 @@ Launch the Archivebox stack by running ```docker stack deploy archivebox -c <pat

[^1]: The inclusion of Archivebox was due to the efforts of @bencey in Discord (Thanks Ben!)

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -126,4 +126,4 @@ What have we achieved? We can now easily consume our audio books / podcasts via

[^1]: The apps also allow you to download entire books to your device, so that you can listen without being directly connected!
[^2]: Audiobookshelf pairs very nicely with [Readarr][readarr], and [Prowlarr][prowlarr], to automate your audio book sourcing and management!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -15,4 +15,4 @@ Log into each of your new tools at its respective HTTPS URL. You'll be prompted

[^1]: This is a complex stack. Sing out in the comments if you find a flaw or need a hand :)

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -49,4 +49,4 @@ headphones_proxy:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -48,6 +48,6 @@ To include Heimdall in your [AutoPirate](/recipes/autopirate/) stack, include th

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^2]: The inclusion of Heimdall was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Thanks gkoerk!
@@ -126,4 +126,4 @@ networks:

--8<-- "reference-networks.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -48,4 +48,4 @@ jackett:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -64,6 +64,6 @@ calibre-server:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^2]: The calibre-server container co-exists within the Lazy Librarian (LL) containers so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the [calibre-web][calibre-web] recipe.
@@ -59,4 +59,4 @@ Lidarr and [Headphones][headphones] perform the same basic function. The primary

I've not tried this yet, but it seems that it's possible to [integrate Lidarr with Beets](https://www.reddit.com/r/Lidarr/comments/rahcer/my_lidarrbeets_automation_setup/).

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -50,6 +50,6 @@ mylar:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^2]: If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and API key, you will need to set the `host_return` parameter to the fully qualified Mylar address (e.g. `http://mylar:8090`).
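In Mylar's `config.ini`, that setting would look something like the following (*a sketch only; where it sits relative to the other sections depends on your existing config*):

```ini
host_return = http://mylar:8090
```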
@@ -54,4 +54,4 @@ nzbget:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -66,4 +66,4 @@ nzbhydra2:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -64,4 +64,4 @@ ombi_proxy:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -75,6 +75,6 @@ Prowlarr and [Jackett][jackett] perform similar roles (*they help you aggregate

2. Given app API keys, Prowlarr can auto-configure your Arr apps, adding its indexers. Prowlarr currently auto-configures [Radarr][radarr], [Sonarr][sonarr], [Lidarr][lidarr], [Mylar][mylar], [Readarr][readarr], and [LazyLibrarian][lazylibrarian]
3. Prowlarr can integrate with FlareSolverr to make it possible to query indexers behind Cloudflare "are-you-a-robot" protection, which would otherwise not be possible.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^1]: Because Prowlarr is so young (*just a little kitten! :cat:*), there is no `:latest` image tag yet, so we're using the `:nightly` tag instead. Don't come crying to me if baby-Prowlarr bites your ass!
@@ -64,5 +64,5 @@ radarr:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
--8<-- "common-links.md"
@@ -62,4 +62,4 @@ readarr:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -56,4 +56,4 @@ rtorrent:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -61,4 +61,4 @@ sabnzbd:

For example, mine simply reads ```host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd```

--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -50,4 +50,4 @@ sonarr:

--8<-- "premix-cta.md"
--8<-- "recipe-autopirate-toc.md"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -110,4 +110,4 @@ Once you've created your account, jump over to <https://bitwarden.com/#download>

[^2]: As mentioned above, readers should refer to the [dani-garcia/vaultwarden wiki](https://github.com/dani-garcia/vaultwarden) for details on customizing the behaviour of Bitwarden.
[^3]: The inclusion of Bitwarden was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Unfortunately, on the 22nd of August 2020, Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -136,4 +136,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate with oauth_pro

[^1]: If you wanted to expose the Bookstack UI directly, you could remove the traefik-forward-auth from the design.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -114,4 +114,4 @@ Log into your new instance at `https://**YOUR-FQDN**`. You'll be directed to the

[^1]: Yes, Calibre does provide a server component. But it's not as fully-featured as Calibre-Web (_i.e., you can't use it to send ebooks directly to your Kindle_)
[^2]: A future enhancement might be integrating this recipe with the filestore for [NextCloud](/recipes/nextcloud/), so that the desktop database (Calibre) can be kept synced with Calibre-Web.
[^3]: If you plan to use calibre-web to send `.mobi` files to your Kindle via `@kindle.com` email addresses, be sure to add the sending address to the "[Approved Personal Documents Email List](https://www.amazon.com/hz/mycd/myx#/home/settings/payment)"
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -307,4 +307,4 @@ Now browse your NextCloud files. Click the plus (+) sign to create a new documen

[^1]: Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. Once it works, though, it works impressively well. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -69,7 +69,7 @@ networks:

Launch your CyberChef stack by running ```docker stack deploy cyberchef -c <path-to-docker-compose.yml>```, and then visit the URL you chose to begin the hackery!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[2]: https://gchq.github.io/CyberChef/#recipe=From_Base64('A-Za-z0-9%2B/%3D',true)&input=VTI4Z2JHOXVaeUJoYm1RZ2RHaGhibXR6SUdadmNpQmhiR3dnZEdobElHWnBjMmd1
[6]: https://gchq.github.io/CyberChef/#recipe=RC4(%7B'option':'UTF8','string':'secret'%7D,'Hex','Hex')Disassemble_x86('64','Full%20x86%20architecture',16,0,true,true)&input=MjFkZGQyNTQwMTYwZWU2NWZlMDc3NzEwM2YyYTM5ZmJlNWJjYjZhYTBhYWJkNDE0ZjkwYzZjYWY1MzEyNzU0YWY3NzRiNzZiM2JiY2QxOTNjYjNkZGZkYmM1YTI2NTMzYTY4NmI1OWI4ZmVkNGQzODBkNDc0NDIwMWFlYzIwNDA1MDcxMzhlMmZlMmIzOTUwNDQ2ZGIzMWQyYmM2MjliZTRkM2YyZWIwMDQzYzI5M2Q3YTVkMjk2MmMwMGZlNmRhMzAwNzJkOGM1YTZiNGZlN2Q4NTlhMDQwZWVhZjI5OTczMzYzMDJmNWEwZWMxOQ
@@ -126,4 +126,4 @@ Once we authenticate through the traefik-forward-auth provider, we can start con

[^1]: Quote attributed to Mila Kunis
[^2]: The [Duplicati 2 User's Manual](https://duplicati.readthedocs.io/en/latest/) contains all the information you'll need to configure backup endpoints, restore jobs, scheduling and advanced properties for your backup jobs.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -159,4 +159,4 @@ Nothing will happen. Very boring. But when the cron script fires (daily), duplic

[^1]: Automatic backup can still fail if nobody checks that it's running successfully. I'll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
[^2]: The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add `SMTP_HOST`, `SMTP_PORT`, `EMAIL_FROM` and `EMAIL_TO` variables to `duplicity.env`.
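In practice, that would mean appending something like this to `duplicity.env` (*a sketch; the hostname and addresses are placeholders*):

```bash
# Email notifications (only works if your SMTP server doesn't require auth)
SMTP_HOST=smtp.example.com
SMTP_PORT=25
EMAIL_FROM=duplicity@example.com
EMAIL_TO=you@example.com
```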
---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -225,4 +225,4 @@ This takes you to a list of backup names and file paths. You can choose to downl

[^1]: If you wanted to expose the ElkarBackup UI directly, you could remove the traefik-forward-auth from the design.
[^2]: The original inclusion of ElkarBackup was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -92,4 +92,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas

[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
[^3]: We don't bother exposing the HTTPS port for Emby, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -141,4 +141,4 @@ root@swarm:~#

[^3]: It should be noted that if you import your existing media, the files will be **copied** into Funkwhale's data folder. There doesn't seem to be a way to point Funkwhale at an existing collection and have it just play it from the filesystem. To this end, be prepared for double disk space usage if you plan to import your entire music collection!
[^5]: No consideration is given at this point to backing up the Funkwhale data. Post a comment below if you'd like to see a backup container added!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -71,4 +71,4 @@ Create your first administrative account at https://**YOUR-FQDN**/admin/

[^1]: A default install using the SQLite database takes 548k of space

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -94,4 +94,4 @@ Launch the GitLab Runner stack by running `docker stack deploy gitlab-runner -c

[^1]: You'll note that I set up 2 runners. One is locked to a single project (_this cookbook build_), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I'd tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
[^2]: Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (_and GitLab starts **sooo** slowly!_) that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a "pending" state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -134,4 +134,4 @@ Log into your new instance at `https://[your FQDN]`, with user "root" and the pa

[^1]: I use the **sameersbn/gitlab:latest** image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why **wouldn't** I want the latest version?)

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -109,4 +109,4 @@ Launch the Gollum stack by running ```docker stack deploy gollum -c <path-to-doc

[^1]: In the current implementation, Gollum is a "single user" tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently "Anonymous"

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -134,4 +134,4 @@ Log into your new instance at https://**YOUR-FQDN**, the password you created in

[^1]: I **tried** to protect Home Assistant using [oauth2_proxy](/reference/oauth_proxy/), but HA is incompatible with the websockets implementation used by oauth2_proxy. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -151,4 +151,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll need to use the "Sig

[^1]: I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for features such as Twitter integration, I left it out.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -251,7 +251,7 @@ What have we achieved? We have an HTTPS-protected endpoint to target with the na

Sponsors have access to a [Premix](/premix/) playbook, which will set up Immich in under 60s (*see below*):

<iframe width="560" height="315" src="https://www.youtube.com/embed/s-NZjYrNOPg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^1]: "wife-insurance": When the developer's wife is a primary user of the platform, you can bet he'll be writing quality code! :woman: :material-karate: :man: :bed: :cry:
[^2]: There's a [friendly Discord server](https://discord.com/invite/D8JsnBEuKb) for Immich too!
@@ -130,4 +130,4 @@ You can **also** watch the bot at work by VNCing to your docker swarm, password

[^1]: Amazingly, my bot has ended up tagging more _non-penguins_ than actual penguins. I don't understand how Instagrammers come up with their hashtags!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -1,7 +1,6 @@

---
title: Invidious, your Youtube frontend instance in Docker Swarm
description: How to create your own private Youtube frontend using Invidious in Docker Swarm
-status: new
---

# Invidious: Private Youtube frontend instance in Docker Swarm
@@ -169,7 +168,7 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we

* [X] We are free of the creepy tracking attached to YouTube videos!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^1]: Check out the [official config docs](https://github.com/iv-org/invidious/blob/master/config/config.example.yml) for comprehensive details on how to configure / tweak your instance!
[^2]: Gotcha!
@@ -102,4 +102,4 @@ Log into your new instance at https://**YOUR-FQDN**, and complete the wizard-bas

[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!
[^3]: We don't bother exposing the HTTPS port for Jellyfin, since [Traefik](/docker-swarm/traefik/) is doing the SSL termination for us already.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
docs/recipes/joplin-server.md (new file, 225 lines)

@@ -0,0 +1,225 @@
---
title: Sync, share and publish your Joplin notes with joplin-server
description: joplin-server is the self-hostable synchronisation server for the open-source Joplin note-taking app, letting you sync, share, and publish your notes from infrastructure you control.
recipe: Joplin Server
---

# Joplin Server

{% include 'try-in-elfhosted.md' %}

joplin-server is the self-hostable synchronisation server for [Joplin](https://joplinapp.org/), the open-source note-taking and to-do application. Running your own instance lets you sync notes between your devices, share notebooks with other users, and publish individual notes to the web - all without relying on a hosted sync service.

*(screenshot)*
## {{ page.meta.recipe }} Requirements

--8<-- "recipe-standard-ingredients.md"

## Preparation

### Setup data locations

We'll need several directories to bind-mount into our container, so create them in `/var/data/`:

```bash
mkdir -p /var/data/joplin-server/
mkdir -p /var/data/runtime/joplin-server/db
mkdir -p /var/data/config/joplin-server
```
### Prepare {{ page.meta.recipe }} environment

Create `/var/data/config/joplin-server/joplin-server.env`, and populate it with the following variables:

```bash
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'

#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
#SYMFONY__MAILER__FROM=

# For mysql
MYSQL_ROOT_PASSWORD=password
```

Create ```/var/data/config/joplin-server/joplin-server-db-backup.env```, and populate it with the following, to set up the nightly database dump.

!!! note
    Running a daily database dump might be considered overkill, since joplin-server can be configured to back up its own database. However, making my own backup keeps the operation of this stack consistent with **other** stacks which employ MariaDB.

    Also, did you ever hear about the guy who said "_I wish I had fewer backups_"?

    No, me neither :shrug:

```bash
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
```
### {{ page.meta.recipe }} Docker Swarm config

Create a docker swarm config file in docker-compose syntax (v3), something like the example below:

--8<-- "premix-cta.md"

```yaml
version: "3"

services:
  db:
    image: mariadb:10.4
    env_file: /var/data/config/joplin-server/joplin-server.env
    networks:
      - internal
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/runtime/joplin-server/db:/var/lib/mysql

  db-backup:
    image: mariadb:10.4
    env_file: /var/data/config/joplin-server/joplin-server-db-backup.env
    volumes:
      - /var/data/joplin-server/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

  app:
    image: joplin-server/joplin-server
    env_file: /var/data/config/joplin-server/joplin-server.env
    networks:
      - internal
      - traefik_public
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/:/var/data
      - /var/data/joplin-server/backups:/app/backups
      - /var/data/joplin-server/uploads:/app/uploads
      - /var/data/joplin-server/sshkeys:/app/.ssh
    deploy:
      labels:
        # traefik common
        - traefik.enable=true
        - traefik.docker.network=traefik_public

        # traefikv1
        - traefik.frontend.rule=Host:joplin-server.example.com
        - traefik.port=80

        # traefikv2
        - "traefik.http.routers.joplin-server.rule=Host(`joplin-server.example.com`)"
        - "traefik.http.services.joplin-server.loadbalancer.server.port=80"
        - "traefik.enable=true"

        # Remove if you wish to access the URL directly
        - "traefik.http.routers.joplin-server.middlewares=forward-auth@file"

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.36.0/24
```

--8<-- "reference-networks.md"
## Serving

### Launch joplin-server stack

Launch the joplin-server stack by running ```docker stack deploy joplin-server -c <path-to-docker-compose.yml>```

Log into your new instance at https://**YOUR-FQDN**, with user "root" and the default password "root":

*(screenshot)*

First things first: change your password, using the gear icon and the "Change Password" link:

*(screenshot)*

Have a read of the [joplin-server Docs](https://docs.joplin-server.org/docs/introduction.html) - they introduce the concepts of **clients** (_hosts containing data to be backed up_), **jobs** (_what data gets backed up_), and **policies** (_when data is backed up, and how long it's kept_).

At the very least, you want to set up a **client** called "_localhost_" with an empty path (_i.e., the job path will be accessed locally, without SSH_), and then add a job to this client to backup /var/data, **excluding** ```/var/data/runtime``` and ```/var/data/joplin-server/backup``` (_unless you **like** "backup-ception"_)
### Copying your backup data offsite

From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a **Good Idea**(tm), but needs some massaging for a Docker swarm deployment.

Here's a variation on the standard script, which I've employed:

```bash
#!/bin/bash

REPOSITORY=/var/data/joplin-server/backups
SERVER=<target host member of docker swarm>
SERVER_USER=joplin-server
UPLOADS=/var/data/joplin-server/uploads
TARGET=/srv/backup/joplin-server

echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`

ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// | while read jobId
do
  echo Backing up job $jobId
  mkdir -p $TARGET/$jobId 2>/dev/null
  rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done

echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER":"$UPLOADS/" $TARGET/uploads

USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`

echo "Backup finished successfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
```
!!! note
    You'll note that I don't use the script to create a mysql dump (_since joplin-server is running within a container anyway_); rather, I just rely on the database dump which is made nightly into ```/var/data/joplin-server/database-dump/```
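Should you ever need to restore one of these nightly dumps, something like the following would do it (*a sketch only; run it on the node hosting the `db` container, and substitute your own dump filename and the `MYSQL_ROOT_PASSWORD` you set earlier*):

```bash
# Pick the nightly dump to restore (filenames are timestamped by the db-backup service)
DUMP=/var/data/joplin-server/database-dump/dump_01-01-2024_00_00_00.sql.gz

# Stream it into the running MariaDB container (service "db" in the joplin-server stack)
zcat $DUMP | docker exec -i $(docker ps -q -f name=joplin-server_db) mysql -u root -ppassword
```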
### Restoring data

Repeat after me: "**It's not a backup unless you've tested a restore**"

!!! note
    I had some difficulty getting restores to work well in the webUI. My attempts to "Restore to client" failed with an SSH error about "localhost" not being found. I **was** able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the webUI or from the filesystem directly.

To restore files from a job, click on the "Restore" button in the WebUI, while on the **Jobs** tab:

*(screenshot)*

This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the **name** of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.

[^1]: If you wanted to expose the joplin-server UI directly, you could remove the traefik-forward-auth from the design.
[^2]: The original inclusion of joplin-server was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

{% include 'recipe-footer.md' %}
@@ -84,4 +84,4 @@ Log into your new instance at https://**YOUR-FQDN**. Default credentials are adm

[^1]: The default theme can be significantly improved by applying the [ThemePlus](https://github.com/phsteffen/kanboard-themeplus) plugin.
[^2]: Kanboard becomes more useful when you integrate in/outbound email with [MailGun](https://github.com/kanboard/plugin-mailgun), [SendGrid](https://github.com/kanboard/plugin-sendgrid), or [Postmark](https://github.com/kanboard/plugin-postmark).

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -92,4 +92,4 @@ Log into your new instance at https://**YOUR-FQDN**. Since it's a fresh installa

[^2]: There's an [active subreddit](https://www.reddit.com/r/KavitaManga/) for Kavita

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -70,4 +70,4 @@ We've setup a new realm in Keycloak, and configured read-write federation to an

[^1]: A much nicer experience IMO!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -181,6 +181,6 @@ Something didn't work? Try the following:

1. Confirm that Keycloak did, in fact, start, by looking at the state of the stack, with `docker stack ps keycloak --no-trunc`

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^1]: For more geeky {--pain--}{++fun++}, try integrating Keycloak with [OpenLDAP][openldap] for an authentication backend!
@@ -56,4 +56,4 @@ We've setup an OIDC client in Keycloak, which we can now use to protect vulnerab

* [X] Client ID and Client Secret used to authenticate against Keycloak with OpenID Connect

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
@@ -87,4 +87,4 @@ If Komga scratches your particular itch, please join me in [sponsoring the devel

[^1]: Since Komga doesn't need to communicate with any other services, we don't need a separate overlay network for it. Provided Traefik can reach Komga via the `traefik_public` overlay network, we've got all we need.

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}
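In compose terms, that footnote boils down to something like this (*a sketch only; the rest of the service definition is elided*):

```yaml
services:
  komga:
    networks:
      - traefik_public # reachable by Traefik; no separate "internal" overlay network needed

networks:
  traefik_public:
    external: true
```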
docs/recipes/kubernetes/authentik.md (new file, 185 lines)

@@ -0,0 +1,185 @@
---
title: How to deploy Authentik on Kubernetes
description: Deploy Authentik on Kubernetes to provide SSO to your cluster and workloads
values_yaml_url: https://github.com/goauthentik/helm/blob/main/charts/authentik/values.yaml
helm_chart_version: 2023.10.x
helm_chart_name: authentik
helm_chart_repo_name: authentik
helm_chart_repo_url: https://charts.goauthentik.io/
helmrelease_name: authentik
helmrelease_namespace: authentik
kustomization_name: authentik
slug: Authentik
status: new
github_repo: https://github.com/goauthentik/authentik
upstream: https://goauthentik.io
links:
  - name: Discord
    uri: https://goauthentik.io/discord
---

# Authentik on Kubernetes

Authentik is an open-source Identity Provider focused on flexibility and versatility. It not only supports modern authentication standards (*like OIDC*), but includes "outposts" to provide support for less-modern protocols such as [LDAP][openldap] :t_rex:, or basic authentication proxies.

*(screenshot)*

See a comparison with other IDPs [here](https://goauthentik.io/#comparison).
## {{ page.meta.slug }} requirements

!!! summary "Ingredients"

    Already deployed:

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
    * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff

    Optional:

    * [ ] [External DNS](/kubernetes/external-dns/) to create a DNS entry the "flux" way

{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-dnsendpoint.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
## Configure Authentik Helm Chart

The following sections detail suggested changes to the values pasted into `/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml` from the {{ page.meta.slug }} helm chart's [values.yaml]({{ page.meta.values_yaml_url }}). The values are already indented correctly to be copied, pasted into the HelmRelease, and adjusted as necessary.

!!! tip
    Confusingly, the Authentik helm chart defaults to having the bundled redis and postgresql **disabled**, but the [Authentik Kubernetes install](https://goauthentik.io/docs/installation/kubernetes/) docs require that they be enabled. Take care to change the respective `enabled: false` values to `enabled: true` below.
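Concretely, that means making sure both of these keys in your pasted values end up reading `true`:

```yaml
redis:
  enabled: true # chart default is false

postgresql:
  enabled: true # chart default is false
```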
### Set bootstrap credentials

By default, when you install the Authentik helm chart, you'll get to set your admin user's (`akadmin`) password when you first log in. You can pre-configure this password by setting the `AUTHENTIK_BOOTSTRAP_PASSWORD` env var as illustrated below.

If you're after a more hands-off implementation[^1], you can also pre-set a "bootstrap token", which can be used to interact with the Authentik API programmatically (*see example below*):

```yaml hl_lines="2-3" title="Optionally pre-configure your bootstrap secrets"
env:
  AUTHENTIK_BOOTSTRAP_PASSWORD: "iamusedbyhumanz"
  AUTHENTIK_BOOTSTRAP_TOKEN: "iamusedbymachinez"
```
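Once Authentik is up, that bootstrap token can then be used as a Bearer token against the Authentik API (*a quick sketch, assuming the token value above and an ingress at `authentik.example.com`*):

```bash
# List Authentik users via the API, authenticated with the bootstrap token
curl -s \
  -H "Authorization: Bearer iamusedbymachinez" \
  https://authentik.example.com/api/v3/core/users/
```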
### Configure Redis for Authentik

Authentik uses Redis as the broker for [Celery](https://docs.celeryq.dev/en/stable/) background tasks. The Authentik helm chart defaults to provisioning an 8Gi PVC for redis, which seems like overkill for a simple broker. You can tweak the size of the Redis PVC by setting:

```yaml hl_lines="4" title="1Gi should be fine for redis"
redis:
  master:
    persistence:
      size: 1Gi
```
### Configure PostgreSQL for Authentik

Depending on your risk profile / exposure, you may want to set a secure PostgreSQL password, or you may be content to leave the default password blank.

At the very least, you'll want to set the following:

```yaml hl_lines="3 6" title="Set a secure PostgreSQL password"
authentik:
  postgresql:
    password: "Iamaverysecretpassword"

postgresql:
  postgresqlPassword: "Iamaverysecretpassword"
```

As with Redis above, you may feel (*like I do*) that provisioning an 8Gi PVC for a database containing 1 user and a handful of app configs is overkill. You can adjust the size of the PostgreSQL PVC by setting:

```yaml hl_lines="3" title="1Gi is fine for a small database"
postgresql:
  persistence:
    size: 1Gi
```
### Ingress

Set up your ingress for the Authentik UI. If you plan to add outposts to proxy other un-authenticated endpoints later, this is where you'll add them:

```yaml hl_lines="3 7" title="Configure your ingress"
ingress:
  enabled: true
  ingressClassName: "nginx" # (1)!
  annotations: {}
  labels: {}
  hosts:
    - host: authentik.example.com
      paths:
        - path: "/"
          pathType: Prefix
  tls: []
```

1. Either leave blank to accept the default ingressClassName, or set to whichever [ingress controller](/kubernetes/ingress/) you want to use.
## Install {{ page.meta.slug }}!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...

```bash
~ ❯ flux get kustomizations {{ page.meta.kustomization_name }}
NAME                                READY   MESSAGE                          REVISION       SUSPENDED
{{ page.meta.kustomization_name }}  True    Applied revision: main/70da637   main/70da637   False
~ ❯
```

The helmrelease should be reconciled...

```bash
~ ❯ flux get helmreleases -n {{ page.meta.helmrelease_namespace }} {{ page.meta.helmrelease_name }}
NAME                               READY   MESSAGE                            REVISION                          SUSPENDED
{{ page.meta.helmrelease_name }}   True    Release reconciliation succeeded   v{{ page.meta.helm_chart_version }}   False
~ ❯
```
And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:

```bash
~ ❯ k get pods -n authentik
NAME                                READY   STATUS    RESTARTS        AGE
authentik-redis-master-0            1/1     Running   1 (3d17h ago)   26d
authentik-server-548c6d4d5f-ljqft   1/1     Running   1 (3d17h ago)   20d
authentik-postgresql-0              1/1     Running   1 (3d17h ago)   26d
authentik-worker-7bb8f55bcb-5jwrr   1/1     Running   0               23h
~ ❯
```

Browse to the URL you configured in your ingress above, and confirm that the Authentik UI is displayed.
## Create your admin user

You may be a little confused about how to log in for the first time. If you didn't use a bootstrap password as above, you'll want to go to `https://<ingress-host-name>/if/flow/initial-setup/`, and set an initial password for your `akadmin` user.

Now store the `akadmin` password somewhere safely, and proceed to create your own user account (*you'll presumably want to use your own username and email address*).

Navigate to **Admin Interface** --> **Directory** --> **Users**, and create your new user. Edit your user and manually set your password.

Next, navigate to **Directory** --> **Groups**, and edit the **authentik Admins** group. Within the group, click the **Users** tab to add your new user to the **authentik Admins** group.

Eureka! :tada:

Your user is now an Authentik superuser. Confirm this by logging out as **akadmin**, and logging back in with your own credentials.
## Summary

What have we achieved? We've got Authentik running and accessible, we've created a superuser account, and we're ready to flex :muscle: the power of Authentik to deploy an OIDC provider for Kubernetes, or simply secure unprotected UIs with proxy outposts!

!!! summary "Summary"
    Created:

    * [X] Authentik running and ready to "authentikate" :lock: !

    Next:

    * [ ] Configure Kubernetes for OIDC authentication, unlocking production readiness as well as the Kubernetes Dashboard in Weave GitOps UIs (*coming soon*)
{% include 'recipe-footer.md' %}

[^1]: I use the bootstrap token with an ansible playbook which provisions my users / apps using the Authentik API
@@ -29,7 +29,7 @@ Here's an example from my public instance (*yes, running on Kubernetes*):

* [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][invidious]*)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
+* [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
* [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
* [x] [External DNS](/kubernetes/external-dns/) to create a DNS entry
@@ -543,7 +543,7 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we

* [X] We are free of the creepy tracking attached to YouTube videos!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^1]: There is also a 3rd option, using the Flux webhook receiver to trigger a reconciliation - to be covered in a future recipe!
[^2]: Gotcha!
@@ -24,7 +24,7 @@ description: How to install your own Mastodon instance using Kubernetes

* [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][mastodon]*)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
-* [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
+* [x] An [Ingress controller](/kubernetes/ingress/) to route incoming traffic to services
* [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
* [x] [External DNS](/kubernetes/external-dns/) to create a DNS entry
@@ -271,6 +271,6 @@ What have we achieved? We now have a fully-swarmed Mastodon instance, ready to f

* [X] Mastodon configured, running, and ready to toot!

---8<-- "recipe-footer.md"
+{% include 'recipe-footer.md' %}

[^1]: There is also a 3rd option, using the Flux webhook receiver to trigger a reconciliation - to be covered in a future recipe!
@@ -61,544 +61,3 @@ Success!

root@matrix-synapse-5d7cf8579-zjk7c:/#
```
YouTube is ubiquitous now. Almost every video I'm sent takes me to YouTube. Worse, every YouTube video I watch feeds Google's profile about me, so shortly after enjoying the latest Marvel movie trailers, I find myself seeing related adverts on **unrelated** websites.

Creepy :bug:!

As the connection between the videos I watch and the adverts I see has become more obvious, I've become more discerning re which videos I choose to watch, since I don't necessarily **want** algorithmically-related videos popping up next time I load the YouTube app on my TV, or Marvel merchandise advertised to me on every second news site I visit.

This is a PITA, since it means I have to "self-censor" which links I'll even click on, knowing that once I *do* click the video link, it's forever associated with my Google account :facepalm:

After playing around with [some of the available public instances](https://docs.invidious.io/instances/) for a while, today I finally deployed my own instance of [Invidious](https://invidious.io/) - an open source alternative front-end to YouTube.

*(screenshot)*

Here's an example from my public instance (*yes, running on Kubernetes*):

<iframe id='ivplayer' width='640' height='360' src='https://in.fnky.nz/embed/o-YBDTqX_ZU?t=3' style='border:none;'></iframe>
## Invidious requirements

!!! summary "Ingredients"

    Already deployed:

    * [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][invidious]*)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
    * [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
    * [x] [External DNS](/kubernetes/external-dns/) to create a DNS entry

    New:

    * [ ] Chosen DNS FQDN for your instance
## Preparation

### GitRepository

The Invidious project doesn't currently publish a versioned helm chart - there's just a [helm chart stored in the repository](https://github.com/iv-org/invidious/tree/master/kubernetes) (*I plan to submit a PR to address this*). For now, we use a GitRepository instead of a HelmRepository as the source of a HelmRelease.

```yaml title="/bootstrap/gitrepositories/gitrepository-invidious.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: invidious
  namespace: flux-system
spec:
  interval: 1h0s
  ref:
    branch: master
  url: https://github.com/iv-org/invidious
```
### Namespace

We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-invidious.yaml`:

```yaml title="/bootstrap/namespaces/namespace-invidious.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: invidious
```
### Kustomization

Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/invidious`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-invidious.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: invidious
  namespace: flux-system
spec:
  interval: 15m
  path: invidious
  prune: true # remove any elements later removed from the above path
  timeout: 2m # if not set, this defaults to interval duration, which is 1h
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: invidious-invidious # (1)!
      namespace: invidious
    - apiVersion: apps/v1
      kind: StatefulSet
      name: invidious-postgresql
      namespace: invidious
```

1. No, that's not a typo, just another peculiarity of the helm chart!
### ConfigMap

Now we're into the invidious-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/iv-org/invidious/blob/master/kubernetes/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo:

```yaml title="invidious/configmap-invidious-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: invidious-helm-chart-value-overrides
  namespace: invidious
data:
  values.yaml: |- # (1)!
    # <upstream values go here>
```

1. Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.
Values I change from the default are:

```yaml
postgresql:
  image:
    tag: 14
  auth:
    username: invidious
    password: <redacted>
    database: invidious
  primary:
    initdb:
      username: invidious
      password: <redacted>
      scriptsConfigMap: invidious-postgresql-init
    persistence:
      size: 1Gi # (1)!
    podAnnotations: # (2)!
      backup.velero.io/backup-volumes: backup
      pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -d $POSTGRES_DB -h 127.0.0.1 > /scratch/backup.sql"]'
      pre.hook.backup.velero.io/timeout: 3m
      post.hook.restore.velero.io/command: '["/bin/bash", "-c", "[ -f \"/scratch/backup.sql\" ] && PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -h 127.0.0.1 -d $POSTGRES_DB -f /scratch/backup.sql && rm -f /scratch/backup.sql;"]'
    extraVolumes:
      - name: backup
        emptyDir:
          sizeLimit: 1Gi
    extraVolumeMounts:
      - name: backup
        mountPath: /scratch
    resources:
      requests:
        cpu: "10m"
        memory: 32Mi

# Adapted from ../config/config.yml
config:
  channel_threads: 1
  feed_threads: 1
  db:
    user: invidious
    password: <redacted>
    host: invidious-postgresql
    port: 5432
    dbname: invidious
  full_refresh: false
  https_only: true
  domain: in.fnky.nz # (3)!
  external_port: 443 # (4)!
  banner: ⚠️ Note - This public Invidious instance is sponsored ❤️ by <A HREF='https://geek-cookbook.funkypenguin.co.nz'>Funky Penguin's Geek Cookbook</A>. It's intended to support the published <A HREF='https://geek-cookbook.funkypenguin.co.nz/recipes/invidious/'>Docker Swarm recipes</A>, but may be removed at any time without notice. # (5)!
  default_user_preferences: # (6)!
    quality: dash # (7)!
```

1. 1Gi is fine for the database for now
2. These annotations / extra Volumes / volumeMounts support automated backup using Velero
3. Invidious needs this to generate external links for sharing / embedding
4. Invidious needs this too, to generate external links for sharing / embedding
5. It's handy to tell people what's special about your instance
6. Check out the [official config docs](https://github.com/iv-org/invidious/blob/master/config/config.example.yml) for comprehensive details on how to configure / tweak your instance!
7. Default all users to DASH (*adaptive*) quality, which auto-adapts or lets you choose > 720P, rather than limiting to 720P (*the default*)
### HelmRelease

Finally, having set the scene above, we define the HelmRelease which will actually deploy Invidious into the cluster. I save this in my flux repo:

```yaml title="/invidious/helmrelease-invidious.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: invidious
  namespace: invidious
spec:
  chart:
    spec:
      chart: ./charts/invidious
      sourceRef:
        kind: GitRepository
        name: invidious
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: invidious
  valuesFrom:
    - kind: ConfigMap
      name: invidious-helm-chart-value-overrides
      valuesKey: values.yaml # (1)!
```

1. This is the default, but best to be explicit for clarity
### Ingress / IngressRoute

Oddly, the upstream chart doesn't include any Ingress resource. We have to manually create our Ingress as below (*note that it's also possible to use a Traefik IngressRoute directly*):

```yaml title="/invidious/ingress-invidious.yaml"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: invidious
  namespace: invidious
spec:
  ingressClassName: nginx
  rules:
    - host: in.fnky.nz
      http:
        paths:
          - backend:
              service:
                name: invidious
                port:
                  number: 3000
            path: /
            pathType: ImplementationSpecific
```

An alternative implementation using an `IngressRoute` could look like this:

```yaml title="/invidious/ingressroute-invidious.yaml"
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: in.fnky.nz
  namespace: invidious
spec:
  routes:
    - match: Host(`in.fnky.nz`)
      kind: Rule
      services:
        - name: invidious-invidious
          kind: Service
          port: 3000
```
### Create postgres-init ConfigMap
|
||||
|
||||
Another pecularity of the Invidious helm chart is that you have to create your own ConfigMap containing the PostgreSQL data structure. I suspect that the helm chart has received minimal attention in the past 3+ years, and this could probably easily be turned into a job as a pre-install helm hook (*perhaps a future PR?*).
|
||||
|
||||
In the meantime, you'll need to create ConfigMap manually per the [repo instructions](https://github.com/iv-org/invidious/tree/master/kubernetes#installing-helm-chart), or cheat, and copy the one I paste below:
|
||||
|
||||
??? example "Configmap (click to expand)"
|
||||
```yaml title="/invidious/configmap-invidious-postgresql-init.yaml"
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: invidious-postgresql-init
|
||||
namespace: invidious
|
||||
data:
|
||||
annotations.sql: |
|
||||
-- Table: public.annotations
|
||||
|
||||
-- DROP TABLE public.annotations;
|
||||
|
||||
CREATE TABLE IF NOT EXISTS public.annotations
|
||||
(
|
||||
id text NOT NULL,
|
||||
annotations xml,
|
||||
CONSTRAINT annotations_id_key UNIQUE (id)
|
||||
);
|
||||
|
||||
GRANT ALL ON TABLE public.annotations TO current_user;
|
||||
channel_videos.sql: |+
|
||||
-- Table: public.channel_videos
|
||||
|
||||
-- DROP TABLE public.channel_videos;
|
||||
|
||||
CREATE TABLE IF NOT EXISTS public.channel_videos
|
||||
(
|
||||
id text NOT NULL,
|
||||
title text,
|
||||
published timestamp with time zone,
|
||||
updated timestamp with time zone,
|
||||
ucid text,
|
||||
author text,
|
||||
length_seconds integer,
|
||||
live_now boolean,
|
||||
premiere_timestamp timestamp with time zone,
|
||||
views bigint,
|
||||
CONSTRAINT channel_videos_id_key UNIQUE (id)
|
||||
);
|
||||
|
||||
GRANT ALL ON TABLE public.channel_videos TO current_user;
|
||||
|
||||
-- Index: public.channel_videos_ucid_idx
|
||||
|
    -- DROP INDEX public.channel_videos_ucid_idx;

    CREATE INDEX IF NOT EXISTS channel_videos_ucid_idx
        ON public.channel_videos
        USING btree
        (ucid COLLATE pg_catalog."default");

  channels.sql: |+
    -- Table: public.channels

    -- DROP TABLE public.channels;

    CREATE TABLE IF NOT EXISTS public.channels
    (
        id text NOT NULL,
        author text,
        updated timestamp with time zone,
        deleted boolean,
        subscribed timestamp with time zone,
        CONSTRAINT channels_id_key UNIQUE (id)
    );

    GRANT ALL ON TABLE public.channels TO current_user;

    -- Index: public.channels_id_idx

    -- DROP INDEX public.channels_id_idx;

    CREATE INDEX IF NOT EXISTS channels_id_idx
        ON public.channels
        USING btree
        (id COLLATE pg_catalog."default");

  nonces.sql: |+
    -- Table: public.nonces

    -- DROP TABLE public.nonces;

    CREATE TABLE IF NOT EXISTS public.nonces
    (
        nonce text,
        expire timestamp with time zone,
        CONSTRAINT nonces_id_key UNIQUE (nonce)
    );

    GRANT ALL ON TABLE public.nonces TO current_user;

    -- Index: public.nonces_nonce_idx

    -- DROP INDEX public.nonces_nonce_idx;

    CREATE INDEX IF NOT EXISTS nonces_nonce_idx
        ON public.nonces
        USING btree
        (nonce COLLATE pg_catalog."default");

  playlist_videos.sql: |
    -- Table: public.playlist_videos

    -- DROP TABLE public.playlist_videos;

    CREATE TABLE IF NOT EXISTS public.playlist_videos
    (
        title text,
        id text,
        author text,
        ucid text,
        length_seconds integer,
        published timestamptz,
        plid text references playlists(id),
        index int8,
        live_now boolean,
        PRIMARY KEY (index,plid)
    );

    GRANT ALL ON TABLE public.playlist_videos TO current_user;
  playlists.sql: |
    -- Type: public.privacy

    -- DROP TYPE public.privacy;

    CREATE TYPE public.privacy AS ENUM
    (
        'Public',
        'Unlisted',
        'Private'
    );

    -- Table: public.playlists

    -- DROP TABLE public.playlists;

    CREATE TABLE IF NOT EXISTS public.playlists
    (
        title text,
        id text primary key,
        author text,
        description text,
        video_count integer,
        created timestamptz,
        updated timestamptz,
        privacy privacy,
        index int8[]
    );

    GRANT ALL ON public.playlists TO current_user;
  session_ids.sql: |+
    -- Table: public.session_ids

    -- DROP TABLE public.session_ids;

    CREATE TABLE IF NOT EXISTS public.session_ids
    (
        id text NOT NULL,
        email text,
        issued timestamp with time zone,
        CONSTRAINT session_ids_pkey PRIMARY KEY (id)
    );

    GRANT ALL ON TABLE public.session_ids TO current_user;

    -- Index: public.session_ids_id_idx

    -- DROP INDEX public.session_ids_id_idx;

    CREATE INDEX IF NOT EXISTS session_ids_id_idx
        ON public.session_ids
        USING btree
        (id COLLATE pg_catalog."default");

  users.sql: |+
    -- Table: public.users

    -- DROP TABLE public.users;

    CREATE TABLE IF NOT EXISTS public.users
    (
        updated timestamp with time zone,
        notifications text[],
        subscriptions text[],
        email text NOT NULL,
        preferences text,
        password text,
        token text,
        watched text[],
        feed_needs_update boolean,
        CONSTRAINT users_email_key UNIQUE (email)
    );

    GRANT ALL ON TABLE public.users TO current_user;

    -- Index: public.email_unique_idx

    -- DROP INDEX public.email_unique_idx;

    CREATE UNIQUE INDEX IF NOT EXISTS email_unique_idx
        ON public.users
        USING btree
        (lower(email) COLLATE pg_catalog."default");

  videos.sql: |+
    -- Table: public.videos

    -- DROP TABLE public.videos;

    CREATE UNLOGGED TABLE IF NOT EXISTS public.videos
    (
        id text NOT NULL,
        info text,
        updated timestamp with time zone,
        CONSTRAINT videos_pkey PRIMARY KEY (id)
    );

    GRANT ALL ON TABLE public.videos TO current_user;

    -- Index: public.id_idx

    -- DROP INDEX public.id_idx;

    CREATE UNIQUE INDEX IF NOT EXISTS id_idx
        ON public.videos
        USING btree
        (id COLLATE pg_catalog."default");
```

## :octicons-video-16: Install Invidious!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation[^1] using `flux reconcile source git flux-system`. You should see the kustomization appear...

```bash
~ ❯ flux get kustomizations | grep invidious
invidious    main/d34779f    False    True    Applied revision: main/d34779f
~ ❯
```

The helmrelease should be reconciled...

```bash
~ ❯ flux get helmreleases -n invidious
NAME        REVISION   SUSPENDED   READY   MESSAGE
invidious   1.1.1      False       True    Release reconciliation succeeded
~ ❯
```

And you should have happy Invidious pods:

```bash
~ ❯ k get pods -n invidious
NAME                                   READY   STATUS    RESTARTS   AGE
invidious-invidious-64f4fb8d75-kr4tw   1/1     Running   0          77m
invidious-postgresql-0                 1/1     Running   0          11h
~ ❯
```

... and finally check that the ingress was created as desired:

```bash
~ ❯ k get ingress -n invidious
NAME        CLASS    HOSTS        ADDRESS   PORTS     AGE
invidious   <none>   in.fnky.nz             80, 443   19h
~ ❯
```

Or in the case of an IngressRoute:

```bash
~ ❯ k get ingressroute -n invidious
NAME         AGE
in.fnky.nz   19h
```

Now hit the URL you defined in your config, and you'll see the basic search screen. Enter a search phrase (*"marvel movie trailer"*) to see the YouTube video results, or paste in a YouTube URL such as `https://www.youtube.com/watch?v=bxqLsrlakK8`, change the domain name from `www.youtube.com` to your instance's FQDN, and watch the fun[^2]!
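
For example, using the `in.fnky.nz` FQDN from the ingress output above (*substitute your own instance's FQDN*), the rewrite looks like this:

```bash
# Original YouTube URL:
https://www.youtube.com/watch?v=bxqLsrlakK8
# The same video, played via your Invidious instance instead:
https://in.fnky.nz/watch?v=bxqLsrlakK8
```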

You can also install a range of browser add-ons to automatically redirect you from youtube.com to your Invidious instance. I'm currently testing "[libredirect](https://addons.mozilla.org/en-US/firefox/addon/libredirect/)", which seems to work as advertised!

## Summary

What have we achieved? We have an HTTPS-protected private YouTube frontend - we can now watch whatever videos we please, without feeding Google's profile of us. We can also subscribe to channels without requiring a Google account, and we can share individual videos directly via our instance (*by generating links*).

!!! summary "Summary"
    Created:

    * [X] We are free of the creepy tracking attached to YouTube videos!

--8<-- "recipe-footer.md"

[^2]: Gotcha!

@@ -69,7 +69,7 @@ metadata:
Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/invidious`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-invidious.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: invidious
@@ -82,7 +82,6 @@ spec:
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: server
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
@@ -541,6 +540,6 @@ What have we achieved? We have an HTTPS-protected private YouTube frontend - we

* [X] We are free of the creepy tracking attached to YouTube videos!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: This is how a footnote works

@@ -93,4 +93,4 @@ Launch the Linx stack by running ```docker stack deploy linx -c <path-to-docker

[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -191,4 +191,4 @@ Launch the mail server stack by running ```docker stack deploy docker-mailserver

[^2]: If you're using sieve with Rainloop, take note of the [workaround](https://forum.funkypenguin.co.nz/t/mail-server-funky-penguins-geek-cookbook/70/15) identified by [ggilley](https://forum.funkypenguin.co.nz/u/ggilley)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -393,6 +393,6 @@ What have we achieved? Even though we had to jump through some extra hoops to se

* [X] Mastodon configured, running, and ready to toot!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: Or, you can just reset your password from the UI, assuming you have SMTP working

@@ -97,4 +97,4 @@ Launch the mealie stack by running ```docker stack deploy mealie -c <path-to-do
[^1]: If you plan to use Mealie for fancy things like an early-morning alarm to defrost the chicken, you may need to customize the [Traefik Forward Auth][tfa] rules, or even remove them entirely, for unauthenticated API access.
[^2]: If you think Mealie is tasty, encourage the developer :cook: to keep on cookin', by [sponsoring him](https://github.com/sponsors/hay-kot) :heart:

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -146,4 +146,4 @@ Log into your new instance at https://**YOUR-FQDN**, using the credentials you s

[^1]: Find the bookmarklet under the **Settings -> Integration** page.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -208,4 +208,4 @@ goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode=
[^2]: Some applications (_like [NextCloud](/recipes/nextcloud/)_) can natively mount S3 buckets
[^3]: Some backup tools (_like [Duplicity](/recipes/duplicity/)_) can backup directly to S3 buckets

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -124,4 +124,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user and password pass

[^1]: If you wanted to expose the Munin UI directly, you could remove the traefik-forward-auth from the design.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -190,4 +190,4 @@ Log into your new instance at https://**YOUR-FQDN**, and setup your admin userna
[^1]: Since many of my other recipes use PostgreSQL, I'd have preferred to use Postgres over MariaDB, but MariaDB seems to be the [preferred database type](https://github.com/nextcloud/server/issues/5912).
[^2]: If you want better performance when using Photos in Nextcloud, have a look at [this detailed write-up](https://rayagainstthemachine.net/linux%20administration/nextcloud-photos/)!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -174,4 +174,4 @@ Launch the nightscout stack by running ```docker stack deploy nightscout -c <pat

[^1]: Most of the time, you'll need an app which syncs to Nightscout, and these apps won't support OIDC auth, so this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). Instead, NightScout is secured entirely with your `API_SECRET` above (*although it is possible to add more users once you're an admin*)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -174,4 +174,4 @@ What have we achieved? We have our own instance of Nitter[^1], and we can anonym

[^1]: Since Nitter is private and read-only anyway, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/). If you wanted to protect your Nitter instance behind either Traefik Forward Auth or [Authelia][authelia], you'll just need to add the appropriate `traefik.http.routers.nitter.middlewares` label, as sketched below.
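
    A minimal sketch of that label (*the middleware name `forward-auth` is an assumption - use whatever middleware name your own traefik-forward-auth or Authelia setup defines*):

    ```yaml
    # Hypothetical example: attach your auth middleware to the nitter router
    - "traefik.http.routers.nitter.middlewares=forward-auth"
    ```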

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -185,4 +185,4 @@ What have we achieved? We have our own instance of Nomie, syncing multi-device a

* [X] Our own Nomie instance, synced with our own CouchDB for multi-device access. Finally, nobody else will be able to tell how much you poop! :poo:

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -427,4 +427,4 @@ Create your users using the "**New User**" button.

[^1]: [The Keycloak](/recipes/keycloak/authenticate-against-openldap/) recipe illustrates how to integrate Keycloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -103,4 +103,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
[^2]: I'm using my own image rather than owntracks/recorderd, because of a [potentially swarm-breaking bug](https://github.com/owntracks/recorderd/issues/14) I found in the official container. If this gets resolved (_or if I was mistaken_) I'll update the recipe accordingly.
[^3]: By default, you'll get a fully accessible, unprotected MQTT broker. This may not be suitable for public exposure, so you'll want to look into securing mosquitto with TLS and ACLs - see the sketch below.
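
    A minimal sketch of what that hardening might look like in `mosquitto.conf` (*the port and all paths here are assumptions - adjust them to your own deployment and certificate locations*):

    ```ini
    # Hypothetical mosquitto.conf hardening sketch
    # TLS listener, instead of the default plaintext 1883
    listener 8883
    cafile /mosquitto/certs/ca.crt
    certfile /mosquitto/certs/server.crt
    keyfile /mosquitto/certs/server.key
    # Require authentication; create the password file with mosquitto_passwd
    allow_anonymous false
    password_file /mosquitto/config/passwd
    # Restrict which users may publish/subscribe to which topics
    acl_file /mosquitto/config/acl
    ```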

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -182,4 +182,4 @@ Head over to the [Paperless documentation](https://paperless-ng.readthedocs.io/e
[^1]: Taken directly from [Paperless documentation](https://paperless-ng.readthedocs.io/en/latest)
[^2]: This particular stack configuration was chosen because it includes a "real" database in PostgreSQL versus the more lightweight SQLite database. After all, if you go to the trouble of scanning and importing a pile of documents, you want to know the database is robust enough to keep your data safe.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -184,4 +184,4 @@ Browse to your new browser-cli-terminal at https://**YOUR-FQDN**, with user "adm

[^1]: Once it's running, you'll probably want to launch a scan to index the original photos. Go to *library -> index* and do a complete rescan (it will take a while, depending on your collection size)

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -163,4 +163,4 @@ Log into your new instance at https://**YOUR-FQDN**, and follow the on-screen pr

[^1]: If you wanted to expose the phpIPAM UI directly, you could remove the `traefik.http.routers.api.middlewares` label from the app container :thumbsup:

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -470,6 +470,6 @@ What have we achieved? Even though we had to jump through some extra hoops to se

* [X] Pixelfed configured, running, and ready for selfies!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

[^1]: There's an iOS mobile app [currently in beta](https://testflight.apple.com/join/5HpHJD5l)

@@ -106,4 +106,4 @@ Log into your new instance at https://**YOUR-FQDN** (You'll need to setup a plex
[^1]: Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via <https://plex.tv/web>
[^2]: Got an NVIDIA GPU? See [this blog post](https://www.funkypenguin.co.nz/note/gpu-transcoding-with-emby-plex-using-docker-nvidia/) re how to use your GPU to transcode your media!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -122,4 +122,4 @@ Log into your new instance at https://**YOUR-FQDN**. You'll be prompted to set y

[^1]: There are [some shenanigans](https://www.reddit.com/r/docker/comments/au9wnu/linuxserverio_templates_for_portainer/) you can do to install LinuxServer.io templates in Portainer. Don't go crying to them for support though! :crying_cat_face:

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -75,4 +75,4 @@ Log into your new instance at https://**YOUR-FQDN**, with user "root" and the pa
[^1]: The [PrivateBin repo](https://github.com/PrivateBin/PrivateBin/blob/master/INSTALL.md) explains how to tweak configuration options, or to use a database instead of file storage, if your volume justifies it :)
[^2]: The inclusion of Privatebin was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Unfortunately, on the 22nd of August 2020, Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -100,4 +100,4 @@ Log into your new instance at https://**YOUR-FQDN**, authenticate against oauth_

[^2]: The inclusion of Realms was due to the efforts of @gkoerk in our [Discord server](http://chat.funkypenguin.co.nz). Unfortunately, on the 22nd of August 2020, Jerry passed away. Jerry was very passionate and highly regarded in the field of Information Technology. He will be missed.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -205,4 +205,4 @@ root@raphael:~#
[^2]: A recent benchmark of various backup tools, including Restic, can be found [here](https://forum.duplicati.com/t/big-comparison-borg-vs-restic-vs-arq-5-vs-duplicacy-vs-duplicati/9952).
[^3]: A paid-for UI for Restic can be found [here](https://forum.restic.net/t/web-ui-for-restic/667/26).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -68,4 +68,4 @@ Launch the RSS Bridge stack by running ```docker stack deploy rssbridge -c <path
[^1]: The inclusion of RSS Bridge was due to the efforts of @bencey in [Discord](http://chat.funkypenguin.co.nz) (Thanks Ben!)
[^2]: This delicious recipe is well-paired with an RSS reader such as [Miniflux][miniflux]

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -140,4 +140,4 @@ For one, anyone who wanted to build their own crude "Google Alerts" - you'd perf

[^1]: Combine SearXNG's RSS results with [Huginn](/recipes/huginn/) for a more feature-full alternative to Google Alerts! 💪

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -395,4 +395,4 @@ Log into your new grafana instance, check out your beautiful graphs. Move onto d

[^1]: Pay close attention to the ```grafana.env``` config. If you encounter errors about ```basic auth failed```, or failed CSS, it's likely due to misconfiguration of one of the grafana environment variables - see the sketch below.
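
    A minimal sketch of a sane `grafana.env` (*all values here are illustrative assumptions - match them to your own stack and FQDN*):

    ```bash
    # Hypothetical grafana.env sketch; GF_* variables map to grafana.ini settings
    GF_SECURITY_ADMIN_USER=admin
    GF_SECURITY_ADMIN_PASSWORD=changeme
    # Must match the URL you actually serve Grafana on, or CSS/auth may break
    GF_SERVER_ROOT_URL=https://grafana.example.com
    GF_AUTH_BASIC_ENABLED=true
    ```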

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -91,4 +91,4 @@ Launch the Linx stack by running ```docker stack deploy linx -c <path-to-docker

[^1]: Since the whole purpose of media/file sharing is to share stuff with **strangers**, this recipe doesn't take into account any sort of authentication using [Traefik Forward Auth](/docker-swarm/traefik-forward-auth/).

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -138,4 +138,4 @@ Launch the TTRSS stack by running ```docker stack deploy ttrss -c <path-to-dock

Log into your new instance at https://**YOUR-FQDN** - the first user you create will be an administrative user.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -183,4 +183,4 @@ Even with all these elements in place, you still need to enable Redis under Inte

[^2]: I've not tested the email integration, but you'd need an SMTP server listening on port 25 (_since we can't change the port_) to use it

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -131,4 +131,4 @@ Log into your new instance at `https://**YOUR-FQDN**`, with user "root" and the

[^1]: If you wanted to expose the Wekan UI directly, you could remove the traefik-forward-auth from the design.

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}

@@ -102,4 +102,4 @@ Browse to your new browser-cli-terminal at https://**YOUR-FQDN**. Authenticate w

[^2]: The inclusion of Wetty was due to the efforts of @gpulido in our [Discord server](http://chat.funkypenguin.co.nz). Thanks Gabriel!

--8<-- "recipe-footer.md"
{% include 'recipe-footer.md' %}