mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-12 17:26:19 +00:00
Add Komga recipe (#132)
6
_snippets/5-min-install.md
Normal file
@@ -0,0 +1,6 @@
## The easy, 5-minute install

I share (_with [sponsors][github_sponsor] and [patrons][patreon]_) a private "_premix_" GitHub repository, which includes an Ansible playbook for deploying the entire Geek's Cookbook stack automatically. This means that members can create the entire environment with just a `git pull` and an `ansible-playbook deploy.yml` 👍

[patreon]: https://www.patreon.com/bePatron?u=6982506
[github_sponsor]: https://github.com/sponsors/funkypenguin
2
_snippets/recipe-cta.md
Normal file
@@ -0,0 +1,2 @@
!!! tip
    I share (_with my [sponsors](https://github.com/sponsors/funkypenguin)_) a private "_premix_" git repository, which includes the necessary docker-compose and env files for all published recipes. This means that sponsors can launch any recipe with just a `git pull` and a `docker stack deploy` 👍
@@ -13,6 +13,7 @@ Also available via:
## Recently added recipes

+* Added recipe for [Komga](/recipes/komga/), a beautiful interface to manage and enjoy your comics / graphic novels (_5 Jan 2021_)
* Added recipe for [Photoprism](/recipes/photoprism/), a self-hosted photo app incorporating automated tagging using TensorFlow (_6 Aug 2020_)
* Added recipe for [JellyFin](/recipes/jellyfin/), the [FOSS fork](https://www.linuxuprising.com/2018/12/jellyfin-free-software-emby-media.html#:~:text=The%20free%20software%20Emby%20fork,differences%20with%20the%20core%20developers.) of [Emby](/recipes/emby/) (_6 Aug 2020_)
* Added recipe for [Restic](/recipes/restic/), a simple and secure backup solution with a **huge** range of target platforms via rclone (_25 June 2020_)
@@ -1,6 +1,6 @@
# Docker Swarm Mode

-For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (as defined at 1.13) is the simplest way to achieve redundancy, such that a single docker host could be turned off, and none of our services will be interrupted.
+For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (*as defined at 1.13*) is the simplest way to achieve redundancy, such that a single docker host could be turned off, and none of our services will be interrupted.

## Ingredients
@@ -81,13 +81,13 @@ To add a manager to this swarm, run the following command:
Run the command provided on your other nodes to join them to the swarm as managers. After adding a node, the output of `docker node ls` (on either host) should reflect all the nodes:

-````
+```
[root@ds2 davidy]# docker node ls
ID                          HOSTNAME                 STATUS  AVAILABILITY  MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8   ds1.funkypenguin.co.nz   Ready   Active        Leader
xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz   Ready   Active        Reachable
[root@ds2 davidy]#
-````
+```

### Setup automated cleanup
@@ -95,7 +95,7 @@ Docker swarm doesn't do any cleanup of old images, so as you experiment with var
To address this, we'll run the "[meltwater/docker-cleanup](https://github.com/meltwater/docker-cleanup)" container on all of our nodes. The container will clean up unused images after 30 minutes.

-First, create docker-cleanup.env (_mine is under /var/data/config/docker-cleanup_), and exclude container images we **know** we want to keep:
+First, create `docker-cleanup.env` (_mine is under `/var/data/config/docker-cleanup`_), and exclude container images we **know** we want to keep:

```
KEEP_IMAGES=traefik,keepalived,docker-mailserver
```
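The stack file itself lives in the premix repository, but a minimal sketch of what such a `docker-cleanup.yml` might look like follows. The `mode: global` deployment and the docker socket mount are assumptions based on how meltwater/docker-cleanup operates; treat this as illustrative, not the recipe's canonical file:

```
version: "3.2"

services:
  docker-cleanup:
    image: meltwater/docker-cleanup
    env_file: /var/data/config/docker-cleanup/docker-cleanup.env
    volumes:
      # The cleanup container drives the local docker daemon via its socket
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # Global mode runs one instance on every node, so every node gets cleaned
      mode: global
```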
@@ -136,7 +136,7 @@ Launch the cleanup stack by running ```docker stack deploy docker-cleanup -c <pa
If your swarm runs for a long time, you might find yourself running older container images after newer versions have been released. If you're the sort of geek who wants to live on the edge, configure [shepherd](https://github.com/djmaze/shepherd) to auto-update your container images regularly.

-Create /var/data/config/shepherd/shepherd.env as follows:
+Create `/var/data/config/shepherd/shepherd.env` as follows:

```
# Don't auto-update Plex or Emby (or Jellyfin), I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
```
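For reference, a minimal sketch of a matching `shepherd.yml` stack file is below. The `mazzolino/shepherd` image name and the manager-node placement constraint follow the djmaze/shepherd README, but verify them against the upstream docs; the premix repo's version may differ:

```
version: "3.2"

services:
  shepherd:
    image: mazzolino/shepherd
    env_file: /var/data/config/shepherd/shepherd.env
    volumes:
      # Shepherd issues service updates via the docker API
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          # Service updates can only be issued against a manager node
          - node.role == manager
```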
@@ -165,11 +165,16 @@ services:
Launch shepherd by running `docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml`, and then just forget about it, comfortable in the knowledge that every day, Shepherd will check that your images are the latest available, and if not, will destroy and recreate the container with the latest available image.

-### Summary
+## Summary

-After completing the above, you should have:
+What have we achieved?

-* [X] [Docker swarm cluster](/ha-docker-swarm/design/)
+!!! summary "Summary"
+    Created:
+
+    * [X] [Docker swarm cluster](/ha-docker-swarm/design/)
+
+--8<-- "5-min-install.md"

## Chef's Notes 📓
@@ -8,6 +8,8 @@ Normally this is done using a HA loadbalancer, but since Docker Swarm aready pro
This is accomplished with the use of keepalived on at least two nodes.

+

## Ingredients

!!! summary "Ingredients"
@@ -18,13 +20,13 @@ This is accomplished with the use of keepalived on at least two nodes.
New:

-* [ ] At least 3 x IPv4 addresses (*one for each node and one for the virtual IP*)
+* [ ] At least 3 x IPv4 addresses (*one for each node and one for the virtual IP*)[^1]

## Preparation

### Enable IPVS module

-On all nodes which will participate in keepalived, we need the "ip_vs" kernel module, in order to permit serivces to bind to non-local interface addresses.
+On all nodes which will participate in keepalived, we need the "ip_vs" kernel module, in order to permit services to bind to non-local interface addresses.

Set this up once-off for both the primary and secondary nodes by running:
@@ -37,9 +39,11 @@ modprobe ip_vs
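Note that `modprobe ip_vs` loads the module only until the next reboot. On systemd-based distros, you can persist it via `modules-load.d`; a minimal sketch (the file name is arbitrary):

```
# Ensure ip_vs is re-loaded automatically on every boot
echo "ip_vs" > /etc/modules-load.d/ip_vs.conf
```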
Assuming your IPs are as follows:

```
* 192.168.4.1 : Primary
* 192.168.4.2 : Secondary
* 192.168.4.3 : Virtual
```

Run the following on the primary:

```
@@ -51,7 +55,7 @@ docker run -d --name keepalived --restart=always \
osixia/keepalived:2.0.20
```
-And on the secondary:
+And on the secondary[^2]:

```
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
@@ -65,7 +69,20 @@ docker run -d --name keepalived --restart=always \
That's it. Each node will talk to the other via unicast (*no need to un-firewall multicast addresses*), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker's routing mesh will deliver it to the appropriate docker node.

## Summary

What have we achieved?

!!! summary "Summary"
    Created:

    * [X] A Virtual IP to which all cluster traffic can be forwarded externally, making it "*Highly Available*"

--8<-- "5-min-install.md"

## Chef's notes 📓
-1. Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targetted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
-2. More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.
+[^1]: Some hosting platforms (*OpenStack, for one*) won't allow you to simply "claim" a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron's "Load Balancer As A Service" (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
+[^2]: More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.
BIN
manuscript/images/keepalived.png
Normal file
Binary file not shown. (After: 47 KiB)
BIN
manuscript/images/komga.png
Normal file
Binary file not shown. (After: 1.9 MiB)
76
manuscript/recipes/komga.md
Normal file
@@ -0,0 +1,76 @@
# Komga

So you've just watched a bunch of superhero movies, and you're suddenly inspired to deep-dive into the weird world of comic books? You're already rocking [AutoPirate](/recipes/autopirate/) with [Mylar](/recipes/autopirate/mylar/) and [NZBGet](/recipes/autopirate/nzbget/) to grab content, but how do you manage and enjoy your growing collection?

![Komga Screenshot](/images/komga.png){ loading=lazy }

[Komga](https://komga.org/) is a media server with a beautifully slick interface, allowing you to read your comics / manga in CBZ, CBR, PDF and epub formats. Komga includes an integrated web reader, as well as a [Tachiyomi](https://tachiyomi.org/) plugin and an OPDS server for integration with other mobile apps such as [Chunky on iPad](http://chunkyreader.com/).
## Ingredients

!!! summary "Ingredients"
    Already deployed:

    * [X] [Docker swarm cluster](/ha-docker-swarm/design/) with [persistent shared storage](/ha-docker-swarm/shared-storage-ceph.md)
    * [X] [Traefik](/ha-docker-swarm/traefik) configured per design
    * [X] DNS entry for the hostname you intend to use (*or a wildcard*), pointed to your [keepalived](/ha-docker-swarm/keepalived/) IP

    Related:

    * [X] [AutoPirate](/recipes/autopirate/) components (*specifically [Mylar](/recipes/autopirate/mylar/)*), for searching for, downloading, and managing comic books
## Preparation

### Setup data locations

First, we create a directory to hold the Komga database, logs and other persistent data:

```
mkdir /var/data/komga
```
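The stack file below also reads its environment from `/var/data/config/komga/komga.env`, so it's worth pre-creating that config location too. A sketch, on the assumption that an initially-empty env file is acceptable (Komga runs with defaults if no variables are set):

```
# Config location for the env file referenced by the stack definition
mkdir -p /var/data/config/komga
touch /var/data/config/komga/komga.env
```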
### Setup Docker Swarm

Create a docker swarm config file in docker-compose syntax (v3), something like this:

--8<-- "recipe-cta.md"
```
version: "3.2"

services:
  komga:
    image: gotson/komga
    env_file: /var/data/config/komga/komga.env
    volumes:
      - /var/data/media/:/media
      - /var/data/komga:/config
    deploy:
      replicas: 1
      labels:
        - traefik.enable=true
        - traefik.frontend.rule=Host:komga.example.com
        - traefik.port=8080
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true
        - traefik.docker.network=traefik_public
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
```
## Serving

### Avengers Assemble!

Launch the Komga stack by running `docker stack deploy komga -c <path-to-docker-compose.yml>`

Log into your new instance at https://**YOUR-FQDN**. Since it's a fresh installation, Komga will prompt you to set up a username and password, after which you'll be able to set up your library, and tweak all teh butt0ns!

## Chef's Notes 📓

[^1]: Since Komga doesn't need to communicate with any other services, we don't need a separate overlay network for it. Provided Traefik can reach Komga via the `traefik_public` overlay network, we've got all we need.
@@ -54,4 +54,4 @@ In order to avoid IP addressing conflicts as we bring swarm networks up/down, we
| [Harbor-Clair](https://geek-cookbook.funkypenguin.co.nz/recipes/graylog/) | 172.16.54.0/24 |
| [Duplicati](https://geek-cookbook.funkypenguin.co.nz/recipes/duplicati/) | 172.16.55.0/24 |
| [Restic](https://geek-cookbook.funkypenguin.co.nz/recipes/restic/) | 172.16.56.0/24 |
-| [Jellyfin](https://geek-cookbook.funkypenguin.co.nz/recipes/jellyfin/) | 172.16.57.0/24 |
+| [Jellyfin](https://geek-cookbook.funkypenguin.co.nz/recipes/jellyfin/) | 172.16.57.0/24 |
@@ -115,6 +115,7 @@ nav:
- Users: recipes/keycloak/create-user.md
- OIDC Provider: recipes/keycloak/setup-oidc-provider.md
- OpenLDAP: recipes/keycloak/authenticate-against-openldap.md
+- Komga: recipes/komga.md
- Minio: recipes/minio.md
- OpenLDAP: recipes/openldap.md
- OwnTracks: recipes/owntracks.md
@@ -248,6 +249,9 @@ markdown_extensions:
- pymdownx.caret
- pymdownx.critic
- pymdownx.details
+- pymdownx.snippets:
+    check_paths: true
+    base_path: _snippets
- pymdownx.emoji:
    emoji_index: !!python/name:pymdownx.emoji.twemoji
    emoji_generator: !!python/name:pymdownx.emoji.to_svg