# Consolidating multiple manager changes in Renovate PRs

I work on several large clusters, administered using [FluxCD](/kubernetes/deployment/flux/), in which we carefully manage the update of Helm releases using Flux's `HelmRelease` CR.

Recently, we've started using [Renovate](https://github.com/renovatebot/renovate) to alert us to pending upgrades, by creating PRs when a Helm chart update *or* an image update in the associated Helm `values.yaml` is available (*I like to put the upstream chart's values into the `HelmRelease`, so that changes can be tracked in one place*).

The problem is, it's likely that the images in a chart's `values.yaml` **will** be updated when the chart is updated, but I don't need a separate PR for each image! (*imagine a helm chart with 10 image references!*)

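The fix is a `packageRules` entry that groups everything touching the same file into a single branch and PR. Here's a minimal sketch of such a rule in `renovate.json` (the file glob and the `{{parentDir}}` grouping template are illustrative assumptions, not necessarily the exact config from these clusters; verify template support for `groupName` against the Renovate docs):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "description": "One PR per HelmRelease file, covering chart and image updates together",
      "matchFileNames": ["**/helmrelease.yaml"],
      "groupName": "{{parentDir}} helmrelease"
    }
  ]
}
```

With every manager's updates to a given file matched by the same rule, Renovate raises one grouped PR per file, rather than one per image.
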
# Mastodon + CloudFlare + B2 Object Storage = free egress
When setting up my [Mastodon instance](https://so.fnky.nz), I jumped directly to storing all media in object storage (*Backblaze B2, in my case*), because I didn't want to allocate / estimate local storage requirements.

This turned out to be a great decision, as my media bucket quickly grew to over 100GB, but as a result, all of my media was served behind URLs like `https://f007.backblaze.com/file/something/something-else/another-something.jpg`, and could *technically* be scraped without using my Mastodon URL.

Here's how to improve this, and also serve your Mastodon instance from behind a CloudFlare proxy...
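
The rough shape of the fix (a sketch with placeholder values; `files.example.com` and the bucket/endpoint names below are assumptions, not my real config): CNAME a subdomain you own at the B2 bucket's friendly URL, proxy it through CloudFlare (*B2-to-CloudFlare egress is free under the Bandwidth Alliance*), and set Mastodon's `S3_ALIAS_HOST` so media URLs are rewritten to your own domain:

```bash
# Mastodon .env.production (placeholder values throughout)
S3_ENABLED=true
S3_BUCKET=my-mastodon-media                          # hypothetical bucket name
S3_ENDPOINT=https://s3.us-west-000.backblazeb2.com   # your bucket's B2 S3 endpoint
S3_ALIAS_HOST=files.example.com                      # CloudFlare-proxied CNAME to the bucket
```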
2. This attaches the above volume at `/scratch`
3. It's necessary to sleep for "a period" before attempting the restore, so that PostgreSQL has time to start up and be ready to interact with the `pg_restore` command (*see the annotation sketch below*).

[^1]: See the bitnami chart [here](https://github.com/bitnami/charts/tree/main/bitnami/postgresql)

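To make the moving parts concrete, here's a hedged sketch of what the hook annotations can look like (*the annotation keys are Velero's documented backup/restore hook annotations; the database name, sleep duration, and dump path are illustrative assumptions*):

```yaml
podAnnotations:
  # Opt the scratch volume (mounted at /scratch, per point 2 above) into the backup
  backup.velero.io/backup-volumes: scratch
  # Pre-backup hook: dump the database to the scratch volume
  pre.hook.backup.velero.io/command: >-
    ["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -Fc -f /scratch/dump.psql"]
  # Post-restore hook: sleep so PostgreSQL can start (point 3 above), then restore
  post.hook.restore.velero.io/command: >-
    ["/bin/bash", "-c", "sleep 90 && PGPASSWORD=$POSTGRES_PASSWORD pg_restore -U postgres -d my_db --clean /scratch/dump.psql"]
  post.hook.restore.velero.io/wait-timeout: 5m
```
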
During the process of setting up the preHooks for various iterations of a PostgreSQL instance, I discovered that Velero won't necessarily check very carefully whether the hooks returned successfully or not. It's best to completely simulate a restore/backup of your pods by exec'ing into the pod and running each hook command manually, ensuring that you get the expected result.
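
For example (*pod name, namespace, and the hook command here are hypothetical placeholders; substitute your own*):

```bash
# Run the backup hook's command by hand inside the pod, then check the exit code
kubectl exec -it -n my-namespace my-postgresql-0 -- \
  /bin/bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -Fc -f /scratch/dump.psql'
echo $?  # anything non-zero means the Velero hook wouldn't have worked either
```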