mirror of https://github.com/funkypenguin/geek-cookbook/ synced 2025-12-12 17:26:19 +00:00

Add Mastodon recipes

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
This commit is contained in:
David Young
2022-08-08 10:33:47 +12:00
parent c73ced5238
commit 330570577a
10 changed files with 308 additions and 347 deletions


@@ -12,6 +12,10 @@
"MD025":
"front_matter_title": ""
# Permit hard tabs in code blocks, since we are likely re-pasting console output
"MD010":
"code_blocks": false
# Allow trailing punctuation in headings
"MD026": false


@@ -20,6 +20,7 @@
[immich]: /recipes/immich/
[jackett]: /recipes/autopirate/jackett/
[jellyfin]: /recipes/jellyfin/
[k8s/mastodon]: /recipes/kubernetes/mastodon/
[kavita]: /recipes/kavita/
[keycloak]: /recipes/keycloak/
[komga]: /recipes/komga/


@@ -18,8 +18,6 @@ ExternalDNS is a controller for Kubernetes which watches the objects you create
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `bootstrap/namespaces/namespace-external-dns.yaml`:
??? example "Example Namespace (click to expand)"
```yaml
apiVersion: v1
kind: Namespace


@@ -52,15 +52,14 @@ A "[SealedSecret](https://github.com/bitnami-labs/sealed-secrets)" can only be d
### Namespace
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `bootstrap/namespaces/namespace-sealed-secrets.yaml`:
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo:
??? example "Example Namespace (click to expand)"
```yaml
apiVersion: v1
kind: Namespace
metadata:
name: sealed-secrets
```
```yaml title="/bootstrap/namespaces/namespace-sealed-secrets.yaml"
apiVersion: v1
kind: Namespace
metadata:
name: sealed-secrets
```
### HelmRepository


@@ -6,30 +6,31 @@ hide:
## Recent additions
Recipe | Description | Date
-------------------------|------------------------------------------------------------------------------------------------------------------|--------------
[Mastodon][mastodon] | Federated social network. Think "*twitter but like email*" | _5 Aug 2022_
[Kavita][kavita] | "Rocket-fueled" reader for manga/comics/ebooks, able to save reading position across devices/sessions | _27 Jul 2022_
[Authelia][authelia] | Authentication and two factor authorization server with Authelia | _1 Nov 2021_
Recipe | Description | Date
-------------------------|----------------------------------------------------------------------------------------------------------------------------------|--------------
[Mastodon (K8s)][k8s/mastodon] | Kubernetes version of the Mastodon recipe below | _8 Aug 2022_
[Mastodon][mastodon] | Federated social network. Think "*twitter but like email*" | _5 Aug 2022_
[Kavita][kavita] | "Rocket-fueled" reader for manga/comics/ebooks, able to save reading position across devices/sessions | _27 Jul 2022_
[Authelia][authelia] | Authentication and two factor authorization server with Authelia | _1 Nov 2021_
[Prowlarr][prowlarr] | An indexer manager/proxy built on the popular arr .net/reactjs base stack to integrate with the [AutoPirate][autopirate] friends | _27 Oct 2021_
[Archivebox][archivebox] | Website Archiving service to save websites to view offline | _19 Oct 2021_
[Readarr][readarr] | [Autopirate][autopirate] component to grab and manage eBooks (*think "Sonarr/Radarr for books*") | _18 Oct 2021_
[Archivebox][archivebox] | Website Archiving service to save websites to view offline | _19 Oct 2021_
[Readarr][readarr] | [Autopirate][autopirate] component to grab and manage eBooks (*think "Sonarr/Radarr for books*") | _18 Oct 2021_
## Recent updates
Recipe | Description | Date
----------------------------|---------------------------------------------------------------------------------|--------------
[Authelia][authelia] | Updated with test services, fixed errors | _27 Jul 2022_
[Minio][minio] | Major update to Minio recipe, for new Console UI and Traefik v2 | _22 Oct 2021_
[Traefik Forward Auth][tfa] | Major update for Traefik v2, included instructions for Dex, Google, Keycloak | _29 Jan 2021_
[Autopirate][autopirate] | Updated all components for Traefik v2 labels | _29 Jan 2021_
Recipe | Description | Date
----------------------------|------------------------------------------------------------------------------|--------------
[Authelia][authelia] | Updated with test services, fixed errors | _27 Jul 2022_
[Minio][minio] | Major update to Minio recipe, for new Console UI and Traefik v2 | _22 Oct 2021_
[Traefik Forward Auth][tfa] | Major update for Traefik v2, included instructions for Dex, Google, Keycloak | _29 Jan 2021_
[Autopirate][autopirate] | Updated all components for Traefik v2 labels | _29 Jan 2021_
## Recent reviews
Recipe | Description | Date
----------------------------|---------------------------------------------------------------------------------|--------------
[Immich][review/immich] | First review | _3 Aug 2022_
Recipe | Description | Date
------------------------|--------------|-------------
[Immich][review/immich] | First review | _3 Aug 2022_
## Subscribe to updates


@@ -1 +1,274 @@
hello
---
title: Install Mastodon in Kubernetes
description: How to install your own Mastodon instance using Kubernetes
---
# Install Mastodon in Kubernetes
[Mastodon](https://joinmastodon.org/) is an open-source, federated (*i.e., decentralized*) social network, inspired by Twitter's "microblogging" format, and used by upwards of 4.4M early adopters to share links, pictures, video and text.
![Mastodon Screenshot](/images/mastodon.png){ loading=lazy }
!!! question "Why would I run my own instance?"
That's a good question. After all, there are all sorts of public instances available, with a [range of themes and communities](https://joinmastodon.org/communities). You may want to run your own instance because you like the tech, or just because you think it's cool :material-emoticon-cool-outline:
You may also have realized that since Mastodon is **federated**, users on your instance can follow, toot, and interact with users on any other instance!
If you're **not** into that much effort / pain, you're welcome to [join our instance][community/mastodon] :material-mastodon:
## Mastodon requirements
!!! summary "Ingredients"
Already deployed:
* [x] A [Kubernetes cluster](/kubernetes/cluster/) (*not running Kubernetes? Use the [Docker Swarm recipe instead][mastodon]*)
* [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
* [x] An [Ingress](/kubernetes/ingress/) to route incoming traffic to services
* [x] [Persistent storage](/kubernetes/persistence/) to store persistent stuff
* [x] [ExternalDNS](/kubernetes/external-dns/) to create a DNS entry
New:
* [ ] Chosen DNS FQDN for your epic new social network
* [ ] An S3-compatible bucket for serving media (*I use [Backblaze B2](https://www.backblaze.com/b2/docs/s3_compatible_api.html)*)
* [ ] An SMTP gateway for delivering email notifications (*I use [Mailgun](https://www.mailgun.com/)*)
* [ ] A business card, with the title "[*I'm CEO, Bitch*](https://nextshark.com/heres-the-story-behind-mark-zuckerbergs-im-ceo-bitch-business-card/)"
## Preparation
### GitRepository
The Mastodon project doesn't currently publish a versioned helm chart - there's just a [helm chart stored in the repository](https://github.com/mastodon/mastodon/tree/main/chart) (*I plan to submit a PR to address this*). For now, we use a GitRepository instead of a HelmRepository as the source of a HelmRelease.
```yaml title="/bootstrap/gitrepositories/gitrepository-mastodon.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
name: mastodon
namespace: flux-system
spec:
interval: 1h0s
ref:
branch: main
url: https://github.com/funkypenguin/mastodon # (1)!
```
1. I'm using my own fork because I've been working on improvements to the upstream chart, but `https://github.com/mastodon/mastodon` would work too.
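For comparison, if/when the project does publish a versioned chart, the equivalent source would be a HelmRepository rather than a GitRepository. This is purely a hypothetical sketch (the chart URL below doesn't exist):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: mastodon
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.example.com/mastodon # hypothetical URL; no published chart exists yet
```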
### Namespace
We need a namespace to deploy our HelmRelease and associated ConfigMaps into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-mastodon.yaml`:
```yaml title="/bootstrap/namespaces/namespace-mastodon.yaml"
apiVersion: v1
kind: Namespace
metadata:
name: mastodon
```
### Kustomization
Now that the "global" elements of this deployment (*just the GitRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/mastodon`. I create this example Kustomization in my flux repo:
```yaml title="/bootstrap/kustomizations/kustomization-mastodon.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
name: mastodon
namespace: flux-system
spec:
interval: 15m
path: mastodon
prune: true # remove any elements later removed from the above path
timeout: 2m # if not set, this defaults to the interval duration (15m, above)
sourceRef:
kind: GitRepository
name: flux-system
validation: server
healthChecks:
- apiVersion: apps/v1
kind: Deployment
name: mastodon-web
namespace: mastodon
- apiVersion: apps/v1
kind: Deployment
name: mastodon-streaming
namespace: mastodon
- apiVersion: apps/v1
kind: Deployment
name: mastodon-sidekiq
namespace: mastodon
```
### ConfigMap
Now we're into the Mastodon-specific YAMLs. First, we create a ConfigMap, containing the entire contents of the helm chart's [values.yaml](https://github.com/mastodon/mastodon/blob/main/chart/values.yaml). Paste the values into a `values.yaml` key as illustrated below, indented 4 spaces (*since they're "encapsulated" within the ConfigMap YAML*). I create this example yaml in my flux repo:
```yaml title="mastodon/configmap-mastodon-helm-chart-value-overrides.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
name: mastodon-helm-chart-value-overrides
namespace: mastodon
data:
values.yaml: |- # (1)!
# <upstream values go here>
```
1. Paste in the contents of the upstream `values.yaml` here, indented 4 spaces, and then change the values you need as illustrated below.
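Indenting the pasted values by hand is fiddly; a quick `sed` one-liner does the job. This is a sketch, using a stand-in file, since your local copy of the upstream `values.yaml` will differ:

```shell
# Stand-in for the upstream values.yaml - replace this with the real file,
# saved locally from the chart repo
printf 'image:\n  tag: v3.5.3\n' > values.yaml

# Prefix every line with 4 spaces, ready to paste under the
# ConfigMap's "values.yaml: |-" key
sed 's/^/    /' values.yaml > values-indented.yaml
```

Paste the contents of `values-indented.yaml` directly beneath the `values.yaml: |-` key.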
Values I change from the default are:
```yaml
spec:
values:
mastodon:
createAdmin:
enabled: true
username: funkypenguin
email: davidy@funkypenguin.co.nz
local_domain: so.fnky.nz
s3:
enabled: true
access_key: "<redacted>"
access_secret: "<redacted>"
bucket: "so-fnky-nz"
endpoint: https://s3.us-west-000.backblazeb2.com
hostname: s3.us-west-000.backblazeb2.com
secrets:
secret_key_base: "<redacted>"
otp_secret: "<redacted>"
vapid:
private_key: "<redacted>"
public_key: "<redacted>"
smtp:
domain: mg.funkypenguin.co.nz
enable_starttls_auto: true
from_address: mastodon@mg.funkypenguin.co.nz
login: mastodon@mg.funkypenguin.co.nz
openssl_verify_mode: peer
password: <redacted>
port: 587
reply_to: mastodon@mg.funkypenguin.co.nz
server: smtp.mailgun.org
tls: false
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: traefik
nginx.ingress.kubernetes.io/proxy-body-size: 10m
hosts:
- host: so.fnky.nz
paths:
- path: '/'
postgresql:
auth:
postgresPassword: "<redacted>"
username: postgres
password: "<redacted>"
primary:
persistence:
size: 1Gi
redis:
password: "<redacted>"
master:
persistence:
size: 1Gi
architecture: standalone
```
### HelmRelease
Finally, having set the scene above, we define the HelmRelease which will actually deploy Mastodon into the cluster. I save this in my flux repo:
```yaml title="/mastodon/helmrelease-mastodon.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: mastodon
namespace: mastodon
spec:
chart:
spec:
chart: ./charts/mastodon
sourceRef:
kind: GitRepository
name: mastodon
namespace: flux-system
interval: 15m
timeout: 5m
releaseName: mastodon
valuesFrom:
- kind: ConfigMap
name: mastodon-helm-chart-value-overrides
valuesKey: values.yaml # (1)!
```
1. This is the default, but best to be explicit for clarity
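Since the values above contain redacted secrets, you may prefer not to keep them in a plain ConfigMap at all. The HelmRelease's `valuesFrom` list merges multiple sources in order (later entries override earlier ones), so a sketch like this keeps the sensitive keys separate. The `mastodon-secret-values` Secret name is hypothetical; you'd create it via, e.g., [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: mastodon
  namespace: mastodon
spec:
  # ... chart spec as above ...
  valuesFrom:
    - kind: ConfigMap
      name: mastodon-helm-chart-value-overrides
      valuesKey: values.yaml
    - kind: Secret # hypothetical Secret holding only the sensitive overrides
      name: mastodon-secret-values
      valuesKey: values.yaml
```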
## :material-mastodon: Install Mastodon!
Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation[^1] using `flux reconcile source git flux-system`. You should see the kustomization appear...
```bash
~ flux get kustomizations | grep mastodon
mastodon main/d34779f False True Applied revision: main/d34779f
~
```
The helmrelease should be reconciled...
```bash
~ flux get helmreleases -n mastodon
NAME REVISION SUSPENDED READY MESSAGE
mastodon 1.2.2-pre-02 False True Release reconciliation succeeded
~
```
And you should have happy Mastodon pods:
```bash
~ k get pods -n mastodon
NAME READY STATUS RESTARTS AGE
mastodon-media-remove-27663840-l2xvt 0/1 Completed 0 22h
mastodon-postgresql-0 1/1 Running 0 5d20h
mastodon-redis-master-0 1/1 Running 0 5d20h
mastodon-sidekiq-5ffd544f98-k86qp 1/1 Running 0 5d20h
mastodon-streaming-676fdcf75-hz52z 1/1 Running 0 5d20h
mastodon-web-597cf7c8d5-2hzkl 1/1 Running 4 5d20h
~
```
... and finally check that the ingress was created as desired:
```bash
~ k get ingress -n mastodon
NAME CLASS HOSTS ADDRESS PORTS AGE
mastodon <none> so.fnky.nz 80, 443 8d
~
```
Now hit the URL you defined in your config, and you should see your beautiful new Mastodon instance! Login with your configured credentials, navigate to **Preferences**, and have fun tweaking and tooting away!
!!! question "What's my Mastodon admin password?"
The admin password _may_ be output by the post-install hook job which creates the admin account, but I didn't notice this at the time I deployed mine. Since I had a working SMTP setup however, I just used the "forgot password" feature to perform a password reset, which feels more secure anyway.
Once you're done, "toot" me up by mentioning [funkypenguin@so.fnky.nz](https://so.fnky.nz/@funkypenguin) in a toot! :wave:
## Summary
What have we achieved? We now have a fully-functional Mastodon instance running in Kubernetes, ready to federate with the world! :material-mastodon:
!!! summary "Summary"
Created:
* [X] Mastodon configured, running, and ready to toot!
--8<-- "recipe-footer.md"
[^1]: There is also a 3rd option, using the Flux webhook receiver to trigger a reconciliation - to be covered in a future recipe!


@@ -1,314 +0,0 @@
# Miniflux
Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot) (_who also happens to be the developer of my favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
![Miniflux Screenshot](/images/miniflux.png){ loading=lazy }
I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate:
* Compatible with the Fever API, so you can read your feeds through existing mobile and desktop clients (_This is the killer feature for me. I hardly ever read RSS on my desktop; I typically read on my iPhone or iPad, using [Fiery Feeds](http://cocoacake.net/apps/fiery/) or my new squeeze, [Unread](https://www.goldenhillsoftware.com/unread/)_)
* Send your bookmarks to Pinboard, Wallabag, Shaarli or Instapaper (_I use this to automatically pin my bookmarks for collection on my [blog](https://www.funkypenguin.co.nz/)_)
* Feeds can be configured to download a "full" version of the content (_rather than an excerpt_)
* Use the Bookmarklet to subscribe to a website directly from any browser
!!! abstract "2.0+ is a bit different"
[Some things changed](https://docs.miniflux.net/en/latest/migration.html) when Miniflux 2.0 was released. For one thing, the only supported database is now postgresql (_no more SQLite_). External themes are gone, as is PHP (_in favor of golang_). It's been a controversial change, but I'm keen on minimal and single-purpose, so I'm still very happy with the direction of development. The developer has laid out his [opinions](https://docs.miniflux.net/en/latest/opinionated.html) re the decisions he's made in the course of development.
## Ingredients
1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
2. A DNS name for your miniflux instance (*miniflux.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
## Preparation
### Prepare traefik for namespace
When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *miniflux* namespace, as illustrated below:
```yaml
<snip>
kubernetes:
namespaces:
- kube-system
- nextcloud
- kanboard
- miniflux
<snip>
```
If you've updated ```values.yml```, upgrade your traefik deployment via helm, by running ```helm upgrade --values values.yml traefik stable/traefik --recreate-pods```
### Create data locations
Although we could simply bind-mount local volumes on a local Kubernetes cluster, since we're targeting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
```bash
mkdir /var/data/config/miniflux
```
### Create namespace
We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the miniflux stack with the following .yml:
```bash
cat <<EOF > /var/data/config/miniflux/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
name: miniflux
EOF
kubectl create -f /var/data/config/miniflux/namespace.yml
```
### Create persistent volume claim
Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the miniflux postgres database:
```bash
cat <<EOF > /var/data/config/miniflux/db-persistent-volumeclaim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: miniflux-db
namespace: miniflux
annotations:
backup.kubernetes.io/deltas: P1D P7D
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
EOF
kubectl create -f /var/data/config/miniflux/db-persistent-volumeclaim.yml
```
!!! question "What's that annotation about?"
The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
### Create secrets
It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents. Run the following, replacing ```imtoosexyformyadminpassword```, and the ```mydbpass``` value in both postgres-password.secret **and** database-url.secret:
```bash
echo -n "imtoosexyformyadminpassword" > admin-password.secret
echo -n "mydbpass" > postgres-password.secret
echo -n "postgres://miniflux:mydbpass@db/miniflux?sslmode=disable" > database-url.secret
kubectl create secret -n miniflux generic miniflux-credentials \
--from-file=admin-password.secret \
--from-file=postgres-password.secret \
--from-file=database-url.secret
```
!!! tip "Why use ```echo -n```?"
Because. See [my blog post here](https://www.funkypenguin.co.nz/blog/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
## Serving
Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [services](https://kubernetes.io/docs/concepts/services-networking/service/), and an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the miniflux [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
### Create db deployment
Deployments tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Create the db deployment by executing the following. Note that the deployment refers to the secrets created above.
--8<-- "premix-cta.md"
```bash
cat <<EOF > /var/data/miniflux/db-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: miniflux
name: db
labels:
app: db
spec:
replicas: 1
selector:
matchLabels:
app: db
template:
metadata:
labels:
app: db
spec:
containers:
- image: postgres:11
name: db
volumeMounts:
- name: miniflux-db
mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_USER
value: "miniflux"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: miniflux-credentials
key: postgres-password.secret
volumes:
- name: miniflux-db
persistentVolumeClaim:
claimName: miniflux-db
EOF
kubectl create -f /var/data/miniflux/db-deployment.yml
```
### Create app deployment
Create the app deployment by executing the following. Again, note that the deployment refers to the secrets created above.
```bash
cat <<EOF > /var/data/miniflux/app-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: miniflux
name: app
labels:
app: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- image: miniflux/miniflux
name: app
env:
# This is necessary for miniflux to update the db schema, even on an empty DB
- name: CREATE_ADMIN
value: "1"
- name: RUN_MIGRATIONS
value: "1"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: miniflux-credentials
key: admin-password.secret
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: miniflux-credentials
key: database-url.secret
EOF
kubectl create -f /var/data/miniflux/app-deployment.yml
```
### Check pods
Check that your deployment is running, with ```kubectl get pods -n miniflux```. After a minute or so, you should see 2 "Running" pods, as illustrated below:
```bash
[funkypenguin:~] % kubectl get pods -n miniflux
NAME READY STATUS RESTARTS AGE
app-667c667b75-5jjm9 1/1 Running 0 4d
db-fcd47b88f-9vvqt 1/1 Running 0 4d
[funkypenguin:~] %
```
### Create db service
The db service resource "advertises" the availability of PostgreSQL's port (TCP 5432) in your pod, to the rest of the cluster (*constrained within your namespace*). It seems a little like overkill coming from Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
```bash
cat <<EOF > /var/data/miniflux/db-service.yml
kind: Service
apiVersion: v1
metadata:
name: db
namespace: miniflux
spec:
selector:
app: db
ports:
- protocol: TCP
port: 5432
clusterIP: None
EOF
kubectl create -f /var/data/miniflux/db-service.yml
```
### Create app service
The app service resource "advertises" the availability of miniflux's HTTP listener port (TCP 8080) in your pod. This is the service which will be referred to by the ingress (below), so that Traefik can route incoming traffic to the miniflux app.
```bash
cat <<EOF > /var/data/miniflux/app-service.yml
kind: Service
apiVersion: v1
metadata:
name: app
namespace: miniflux
spec:
selector:
app: app
ports:
- protocol: TCP
port: 8080
clusterIP: None
EOF
kubectl create -f /var/data/miniflux/app-service.yml
```
### Check services
Check that your services are deployed, with ```kubectl get services -n miniflux```. You should see something like this:
```bash
[funkypenguin:~] % kubectl get services -n miniflux
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app ClusterIP None <none> 8080/TCP 55d
db ClusterIP None <none> 5432/TCP 55d
[funkypenguin:~] %
```
### Create ingress
The ingress resource tells Traefik to forward inbound requests for *miniflux.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
```bash
cat <<EOF > /var/data/miniflux/ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app
namespace: miniflux
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: miniflux.example.com
http:
paths:
- backend:
serviceName: app
servicePort: 8080
EOF
kubectl create -f /var/data/miniflux/ingress.yml
```
Check that your service is deployed, with ```kubectl get ingress -n miniflux```. You should see something like this:
```bash
[funkypenguin:~] 130 % kubectl get ingress -n miniflux
NAME HOSTS ADDRESS PORTS AGE
app miniflux.funkypenguin.co.nz 80 55d
[funkypenguin:~] %
```
### Access Miniflux
At this point, you should be able to access your instance on your chosen DNS name (*i.e. <https://miniflux.example.com>*)
### Troubleshooting
To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
--8<-- "recipe-footer.md"


@@ -21,7 +21,7 @@ description: How to install your own Mastodon instance using Docker Swarm
!!! summary "Ingredients"
Already deployed:
* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/)
* [X] [Docker swarm cluster](/docker-swarm/design/) with [persistent shared storage](/docker-swarm/shared-storage-ceph/) (*Alternatively, see the [Kubernetes recipe here][k8s/mastodon]*)
* [X] [Traefik](/docker-swarm/traefik/) configured per design
New:


@@ -213,7 +213,7 @@ nav:
# - Dashboard: kubernetes/wip.md
# - Kured: kubernetes/wip.md
# - Keycloak: kubernetes/wip.md
# - Recipes:
- Recipes:
# - Harbor:
# - recipes/kubernetes/harbor/index.md
# Istio: recipes/kubernetes/harbor/istio.md
@@ -227,7 +227,7 @@ nav:
# - Istio: recipes/kubernetes/wip.md
# - Jaeger: kubernetes/wip.md
# - Kiali: kubernetes/wip.md
# - Mastodon: recipes/kubernetes/mastodon.md
- Mastodon: recipes/kubernetes/mastodon.md
# - NGINX Ingress: kubernetes/wip.md
# - Polaris: kubernetes/wip.md
# - Portainer: kubernetes/wip.md


@@ -7,7 +7,6 @@ cp -rf manuscript docs_to_pdf
find docs_to_pdf -type f -exec gsed -i -e 's/recipe-footer.md/common-links.md/g' {} \;
# Build PDF from slimmed recipes
docker run --rm --name mkdocs-material-build-pdf -e ENABLE_PDF_EXPORT=1\
-v ${PWD}/docs_to_pdf:/docs\
-v ${PWD}/mkdocs-pdf-print.yml:/mkdocs-pdf-print.yml\
docker run -it --rm --name mkdocs-material-build-pdf -e ENABLE_PDF_EXPORT=1\
-v ${PWD}:/docs \
funkypenguin/mkdocs-material build -f mkdocs-pdf-print.yml