mirror of
https://github.com/funkypenguin/geek-cookbook/
synced 2025-12-12 17:26:19 +00:00
Add Velero and snapshot controller for backups
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
28  _includes/kubernetes-flux-check.md  Normal file
@@ -0,0 +1,28 @@

## Install {{ page.meta.slug }}!

Commit the changes to your flux repository, and either wait for the reconciliation interval, or force a reconciliation using `flux reconcile source git flux-system`. You should see the kustomization appear...

```bash
~ ❯ flux get kustomizations {{ page.meta.kustomization_name }}
NAME                                 READY   MESSAGE                          REVISION       SUSPENDED
{{ page.meta.kustomization_name }}   True    Applied revision: main/70da637   main/70da637   False
~ ❯
```

The helmrelease should be reconciled...

```bash
~ ❯ flux get helmreleases -n {{ page.meta.helmrelease_namespace }} {{ page.meta.helmrelease_name }}
NAME                               READY   MESSAGE                            REVISION                               SUSPENDED
{{ page.meta.helmrelease_name }}   True    Release reconciliation succeeded   v{{ page.meta.helm_chart_version }}   False
~ ❯
```

And you should have happy pods in the {{ page.meta.helmrelease_namespace }} namespace:

```bash
~ ❯ k get pods -n {{ page.meta.helmrelease_namespace }} -l release={{ page.meta.helmrelease_name }}
NAME                                                 READY   STATUS    RESTARTS   AGE
{{ page.meta.helmrelease_name }}-7c94b7446d-nwsss    1/1     Running   0          5m14s
~ ❯
```
33  _includes/kubernetes-flux-helmrelease.md  Normal file
@@ -0,0 +1,33 @@

### HelmRelease

Lastly, having set the scene above, we define the HelmRelease which will actually deploy {{ page.meta.helmrelease_name }} into the cluster. We start with a basic HelmRelease YAML, like this example:

```yaml title="/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml"
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: {{ page.meta.helmrelease_name }}
  namespace: {{ page.meta.helmrelease_namespace }}
spec:
  chart:
    spec:
      chart: {{ page.meta.helmrelease_namespace }}
      version: {{ page.meta.helm_chart_version }} # auto-update to semver bugfixes only (1)
      sourceRef:
        kind: HelmRepository
        name: {{ page.meta.helm_chart_repo_name }}
        namespace: flux-system
  interval: 15m
  timeout: 5m
  releaseName: {{ page.meta.helmrelease_namespace }}
  values: # paste contents of upstream values.yaml below, indented 4 spaces (2)
```

1. I like to set this to the semver minor version of the upstream chart, so that I'll inherit bug fixes but not any new features (*since I'll need to manually update my values to accommodate new releases anyway*)
2. Paste the full contents of the upstream [values.yaml]({{ page.meta.values_yaml_url }}) here, indented 4 spaces under the `values:` key

If we deploy this helmrelease as-is, we'll inherit every default from the upstream chart. That's hardly ever what we want, so my preference is to take the entire contents of the helm chart's [values.yaml]({{ page.meta.values_yaml_url }}), and paste it (*indented*) under the `values` key. This means that I can then make my own changes in the context of the entire values.yaml, rather than cherry-picking just the items I want to change, which makes future chart upgrades simpler.

--8<-- "kubernetes-why-not-full-values-in-configmap.md"

Then work your way through the values you pasted, and change any which are specific to your configuration.
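For illustration only, a customised `values` block might end up looking something like the sketch below. The `replicaCount` and `ingress` keys here are hypothetical; the real keys come from the upstream [values.yaml]({{ page.meta.values_yaml_url }}) you pasted:

```yaml
  values: # paste contents of upstream values.yaml below, indented 4 spaces
    replicaCount: 1      # an upstream default, left as-is
    ingress:
      enabled: true      # a default we've deliberately overridden for our environment
```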
14  _includes/kubernetes-flux-helmrepository.md  Normal file
@@ -0,0 +1,14 @@

### HelmRepository

We're going to install a helm chart from the [{{ page.meta.helm_chart_repo_name }}]({{ page.meta.helm_chart_repo_url }}) repository, so I create the following in my flux repo (*assuming it doesn't already exist*):

```yaml title="/bootstrap/helmrepositories/helmrepository-{{ page.meta.helm_chart_repo_name }}.yaml"
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: {{ page.meta.helm_chart_repo_name }}
  namespace: flux-system
spec:
  interval: 15m
  url: {{ page.meta.helm_chart_repo_url }}
```
26  _includes/kubernetes-flux-kustomization.md  Normal file
@@ -0,0 +1,26 @@

### Kustomization

Now that the "global" elements of this deployment (*just the HelmRepository in this case*) have been defined, we do some "flux-ception", and go one layer deeper, adding another Kustomization, telling flux to deploy any YAMLs found in the repo at `/{{ page.meta.helmrelease_namespace }}/`. I create this example Kustomization in my flux repo:

```yaml title="/bootstrap/kustomizations/kustomization-{{ page.meta.kustomization_name }}.yaml"
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: {{ page.meta.kustomization_name }}
  namespace: flux-system
spec:
  interval: 30m
  path: ./{{ page.meta.helmrelease_namespace }}
  prune: true # remove any elements later removed from the above path
  timeout: 10m # if not set, this defaults to the interval duration (30m above)
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: {{ page.meta.helmrelease_name }}
      namespace: {{ page.meta.helmrelease_namespace }}
```

--8<-- "premix-cta-kubernetes.md"
12  _includes/kubernetes-flux-namespace.md  Normal file
@@ -0,0 +1,12 @@

## Preparation

### Namespace

We need a namespace to deploy our HelmRelease and associated YAMLs into. Per the [flux design](/kubernetes/deployment/flux/), I create this example yaml in my flux repo at `/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml`:

```yaml title="/bootstrap/namespaces/namespace-{{ page.meta.helmrelease_namespace }}.yaml"
apiVersion: v1
kind: Namespace
metadata:
  name: {{ page.meta.helmrelease_namespace }}
```
@@ -1,2 +1,2 @@
-!!! question "Why not just put config in the HelmRelease?"
+??? question "Why not just put config in the HelmRelease?"
     While it's true that we could embed values directly into the HelmRelease YAML, this becomes unwieldy with large helm charts. It's also simpler (less likely to result in error) if changes to **HelmReleases**, which affect **deployment** of the chart, are defined in separate files from changes to helm chart **values**, which affect **operation** of the chart.
11  _snippets/kubernetes-why-not-full-values-in-configmap.md  Normal file
@@ -0,0 +1,11 @@

??? question "Why not put values in a separate ConfigMap?"
    > Didn't you previously advise to put helm chart values into a separate ConfigMap?

    Yes, I did. And in practice, I've changed my mind.

    Why? Because having the helm values directly in the HelmRelease offers the following advantages:

    1. If you use the [YAML](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml) extension in VSCode, you'll see a full path to the YAML elements, which can make grokking complex charts easier.
    2. When flux detects a change to a value in a HelmRelease, this forces an immediate reconciliation of the HelmRelease, as opposed to the ConfigMap solution, which requires waiting on the next scheduled reconciliation.
    3. Renovate can parse HelmRelease YAMLs and create PRs when they contain docker image references which can be updated.
    4. In practice, adapting a HelmRelease to match upstream chart changes is no different to adapting a ConfigMap, and so there's no real benefit to splitting the chart values into a separate ConfigMap, IMO.
18  docs/kubernetes/backup/csi-snapshots/index.md  Normal file
@@ -0,0 +1,18 @@

---
title: How to create CSI snapshots in Kubernetes
description: How to enable CSI VolumeSnapshot support in your Kubernetes cluster
---
# Creating CSI snapshots

Available since Kubernetes 1.20, Volume Snapshots work with your storage provider to create snapshots of your volumes. If you're using a managed Kubernetes provider, you probably already have snapshot support, but if you're a bare-metal cave-monkey :monkey: using a snapshot-capable storage provider (*like [Rook Ceph](/kubernetes/persistence/rook-ceph/)*), you need to jump through some hoops to enable support.

K8s-sig-storage publishes [external-snapshotter](https://github.com/kubernetes-csi/external-snapshotter), which talks to your CSI providers, and manages the creation / update / deletion of snapshots.

!!! question "Why do I care about snapshots?"
    If you've got persistent data you care about in your cluster, you probably care enough to [back it up](/kubernetes/backup/). Although you don't **need** snapshot support for backups, having a local snapshot managed by your backup tool can rapidly reduce the time taken to restore from a failed upgrade, accidental deletion, etc.

There are two components required to bring snapshot-taking powerz to your bare-metal cluster, plus a backup tool to drive them, detailed below:

1. First, install the [snapshot validation webhook](/kubernetes/backup/csi-snapshots/snapshot-validation-webhook/)
2. Then, install the [snapshot controller](/kubernetes/backup/csi-snapshots/snapshot-controller/)
3. Install a snapshot-supporting :camera: [backup tool](/kubernetes/backup/)
71  docs/kubernetes/backup/csi-snapshots/snapshot-controller.md  Normal file
@@ -0,0 +1,71 @@

---
title: Support CSI VolumeSnapshots with snapshot-controller
description: Add CSI VolumeSnapshot support with snapshot-controller
values_yaml_url: https://github.com/piraeusdatastore/helm-charts/blob/main/charts/snapshot-controller/values.yaml
helm_chart_version: 1.8.x
helm_chart_name: snapshot-controller
helm_chart_repo_name: piraeus-charts
helm_chart_repo_url: https://piraeus.io/helm-charts/
helmrelease_name: snapshot-controller
helmrelease_namespace: snapshot-controller
kustomization_name: snapshot-controller
slug: Snapshot Controller
status: new
---

# Add CSI VolumeSnapshot support with snapshot-controller

With the [validation webhook](/kubernetes/backup/csi-snapshots/snapshot-validation-webhook/) in place to make sure it's done "right", we can now deploy snapshot-controller to actually **manage** the snapshots we take.

## {{ page.meta.slug }} requirements

!!! summary "Ingredients"

    Already deployed:

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped
    * [x] [snapshot-validation-webhook](/kubernetes/backup/csi-snapshots/snapshot-validation-webhook/) deployed

{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}

#### Configure for rook-ceph

Under the HelmRelease values which you pasted from upstream, you'll note a section for `volumeSnapshotClasses`. By default, this is populated with commented-out examples. To configure snapshot-controller to work with rook-ceph, replace these commented values as illustrated below:

```yaml title="/{{ page.meta.helmrelease_namespace }}/helmrelease-{{ page.meta.helmrelease_name }}.yaml (continued)"
  values:
    # extra content from upstream
    volumeSnapshotClasses:
      - name: csi-rbdplugin-snapclass
        driver: rook-ceph.rbd.csi.ceph.com # driver:namespace:operator
        labels:
          velero.io/csi-volumesnapshot-class: "true"
        parameters:
          # Specify a string that identifies your cluster. Ceph CSI supports any
          # unique string. When Ceph CSI is deployed by Rook use the Rook namespace,
          # for example "rook-ceph".
          clusterID: rook-ceph # namespace:cluster
          csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
          csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace:cluster
        deletionPolicy: Delete # docs suggest this may need to be set to "Retain" for restoring
```

{% include 'kubernetes-flux-check.md' %}
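Optionally, once the helmrelease is reconciled, you can confirm that the new `VolumeSnapshotClass` actually snapshots something, by creating a VolumeSnapshot against an existing PVC. A minimal sketch follows; `my-data` and `my-app` are hypothetical placeholders, so substitute a real PVC and namespace from your own cluster:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-data-test-snapshot # hypothetical snapshot name
  namespace: my-app           # must match the namespace of the PVC being snapshotted
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass # the class we defined above
  source:
    persistentVolumeClaimName: my-data # hypothetical PVC name
```

Once applied, `kubectl get volumesnapshot -n my-app` should (eventually) show the snapshot with `READYTOUSE` set to `true`.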
## Summary

What have we achieved? We've got snapshot-controller running, and ready to manage VolumeSnapshots on behalf of Velero, for handy in-cluster volume backups!

!!! summary "Summary"
    Created:

    * [X] snapshot-controller running and ready to snap :camera: !

    Next:

    * [ ] Configure [Velero](/kubernetes/backup/velero/) with a VolumeSnapshotLocation, so that volume snapshots can be made as part of a BackupSchedule!

--8<-- "recipe-footer.md"
@@ -0,0 +1,48 @@

---
title: Prepare for snapshot-controller with snapshot validation webhook
description: Prepare your Kubernetes cluster for CSI snapshot support with snapshot validation webhook
values_yaml_url: https://github.com/piraeusdatastore/helm-charts/blob/main/charts/snapshot-validation-webhook/values.yaml
helm_chart_version: 1.8.x
helm_chart_name: snapshot-validation-webhook
helm_chart_repo_name: piraeus-charts
helm_chart_repo_url: https://piraeus.io/helm-charts/
helmrelease_name: snapshot-validation-webhook
helmrelease_namespace: snapshot-validation-webhook
kustomization_name: snapshot-validation-webhook
slug: Snapshot Validation Webhook
status: new
---

# Prepare for CSI snapshots with the snapshot validation webhook

Before we deploy snapshot-controller to actually **manage** the snapshots we take, we need the validation webhook to make sure it's done "right".

## {{ page.meta.slug }} requirements

!!! summary "Ingredients"

    Already deployed:

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped

{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}
{% include 'kubernetes-flux-helmrelease.md' %}
{% include 'kubernetes-flux-check.md' %}

## Summary

What have we achieved? We now have the snapshot validation admission webhook running in the cluster, ready to support [snapshot-controller](/kubernetes/backup/csi-snapshots/snapshot-controller/)!

!!! summary "Summary"
    Created:

    * [X] snapshot-validation-webhook running and ready to validate!

    Next:

    * [ ] Deploy [snapshot-controller](/kubernetes/backup/csi-snapshots/snapshot-controller/) itself

--8<-- "recipe-footer.md"
@@ -1,314 +1,15 @@
-# Miniflux
+# Backup

-Miniflux is a lightweight RSS reader, developed by [Frédéric Guillot](https://github.com/fguillot). (_Who also happens to be the developer of the favorite Open Source Kanban app, [Kanboard](/recipes/kanboard/)_)
+> Waitasec, what happened to "cattle :cow:, not pets"? Why should you need backup in your cluster?

-{ loading=lazy }
+Ha. good question. If you're happily running Kubernetes in a cloud provider and using managed services for all your stateful workloads (*managed databases, etc*) then you don't need backup.

-I've [reviewed Miniflux in detail on my blog](https://www.funkypenguin.co.nz/review/miniflux-lightweight-self-hosted-rss-reader/), but features (among many) that I appreciate:
+If, on the other hand, you're actually **using** the [persistence](/kubernetes/persistence/) you deployed earlier, presumably some of what you persist is important to you, and you'd want to back it up in the event of a disaster (*or you need to roll back a database upgrade!*).

-* Compatible with the Fever API, read your feeds through existing mobile and desktop clients (_This is the killer feature for me. I hardly ever read RSS on my desktop, I typically read on my iPhone or iPad, using [Fiery Feeds](http://cocoacake.net/apps/fiery/) or my new squeeze, [Unread](https://www.goldenhillsoftware.com/unread/)_)
-* Send your bookmarks to Pinboard, Wallabag, Shaarli or Instapaper (_I use this to automatically pin my bookmarks for collection on my [blog](https://www.funkypenguin.co.nz/)_)
-* Feeds can be configured to download a "full" version of the content (_rather than an excerpt_)
-* Use the Bookmarklet to subscribe to a website directly from any browsers
+The only backup solution I've put in place thus far is Velero, but this index page will be expanded to more options as they become available.

-!!! abstract "2.0+ is a bit different"
-    [Some things changed](https://miniflux.app/docs/migration.html) when Miniflux 2.0 was released. For one thing, the only supported database is now postgresql (_no more SQLite_). External themes are gone, as is PHP (_in favor of golang_). It's been a controversial change, but I'm keen on minimal and single-purpose, so I'm still very happy with the direction of development. The developer has laid out his [opinions](https://miniflux.app/docs/opionated.html) re the decisions he's made in the course of development.
+For your backup needs, I present, Velero, by VMWare:

-## Ingredients
-
-1. A [Kubernetes Cluster](/kubernetes/design/) including [Traefik Ingress](/kubernetes/traefik/)
-2. A DNS name for your miniflux instance (*miniflux.example.com*, below) pointing to your [load balancer](/kubernetes/loadbalancer/), fronting your Traefik ingress
-
-## Preparation
-
-### Prepare traefik for namespace
-
-When you deployed [Traefik via the helm chart](/kubernetes/traefik/), you would have customized ```values.yml``` for your deployment. In ```values.yml``` is a list of namespaces which Traefik is permitted to access. Update ```values.yml``` to include the *miniflux* namespace, as illustrated below:
-
-```yaml
-<snip>
-kubernetes:
-  namespaces:
-    - kube-system
-    - nextcloud
-    - kanboard
-    - miniflux
-<snip>
-```
-
-If you've updated ```values.yml```, upgrade your traefik deployment via helm, by running ```helm upgrade --values values.yml traefik stable/traefik --recreate-pods```
-
-### Create data locations
-
-Although we could simply bind-mount local volumes to a local Kubuernetes cluster, since we're targetting a cloud-based Kubernetes deployment, we only need a local path to store the YAML files which define the various aspects of our Kubernetes deployment.
-
-```bash
-mkdir /var/data/config/miniflux
-```
-
-### Create namespace
-
-We use Kubernetes namespaces for service discovery and isolation between our stacks, so create a namespace for the miniflux stack with the following .yml:
-
-```bash
-cat <<EOF > /var/data/config/miniflux/namespace.yml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: miniflux
-EOF
-kubectl create -f /var/data/config/miniflux/namespace.yaml
-```
-
-### Create persistent volume claim
-
-Persistent volume claims are a streamlined way to create a persistent volume and assign it to a container in a pod. Create a claim for the miniflux postgres database:
-
-```bash
-cat <<EOF > /var/data/config/miniflux/db-persistent-volumeclaim.yml
-kkind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: miniflux-db
-  namespace: miniflux
-  annotations:
-    backup.kubernetes.io/deltas: P1D P7D
-spec:
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: 1Gi
-EOF
-kubectl create -f /var/data/config/miniflux/db-persistent-volumeclaim.yaml
-```
-
-!!! question "What's that annotation about?"
-    The annotation is used by [k8s-snapshots](/kubernetes/snapshots/) to create daily incremental snapshots of your persistent volumes. In this case, our volume is snapshotted daily, and copies kept for 7 days.
-
-### Create secrets
-
-It's not always desirable to have sensitive data stored in your .yml files. Maybe you want to check your config into a git repository, or share it. Using Kubernetes Secrets means that you can create "secrets", and use these in your deployments by name, without exposing their contents. Run the following, replacing ```imtoosexyformyadminpassword```, and the ```mydbpass``` value in both postgress-password.secret **and** database-url.secret:
-
-```bash
-echo -n "imtoosexyformyadminpassword" > admin-password.secret
-echo -n "mydbpass" > postgres-password.secret
-echo -n "postgres://miniflux:mydbpass@db/miniflux?sslmode=disable" > database-url.secret
-
-kubectl create secret -n mqtt generic miniflux-credentials \
-  --from-file=admin-password.secret \
-  --from-file=database-url.secret \
-  --from-file=database-url.secret
-```
-
-!!! tip "Why use ```echo -n```?"
-    Because. See [my blog post here](https://www.funkypenguin.co.nz/blog/beware-the-hidden-newlines-in-kubernetes-secrets/) for the pain of hunting invisible newlines, that's why!
-
-## Serving
-
-Now that we have a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), a [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/), and a [configmap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/), we can create [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), [services](https://kubernetes.io/docs/concepts/services-networking/service/), and an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) for the miniflux [pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/).
-
-### Create db deployment
-
-Deployments tell Kubernetes about the desired state of the pod (*which it will then attempt to maintain*). Create the db deployment by excecuting the following. Note that the deployment refers to the secrets created above.
-
---8<-- "premix-cta.md"
-
-```bash
-cat <<EOF > /var/data/miniflux/db-deployment.yml
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  namespace: miniflux
-  name: db
-  labels:
-    app: db
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: db
-  template:
-    metadata:
-      labels:
-        app: db
-    spec:
-      containers:
-      - image: postgres:11
-        name: db
-        volumeMounts:
-        - name: miniflux-db
-          mountPath: /var/lib/postgresql/data
-        env:
-        - name: POSTGRES_USER
-          value: "miniflux"
-        - name: POSTGRES_PASSWORD
-          valueFrom:
-            secretKeyRef:
-              name: miniflux-credentials
-              key: postgres-password.secret
-      volumes:
-      - name: miniflux-db
-        persistentVolumeClaim:
-          claimName: miniflux-db
-```
-
-### Create app deployment
-
-Create the app deployment by excecuting the following. Again, note that the deployment refers to the secrets created above.
-
-```bash
-cat <<EOF > /var/data/miniflux/app-deployment.yml
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  namespace: miniflux
-  name: app
-  labels:
-    app: app
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: app
-  template:
-    metadata:
-      labels:
-        app: app
-    spec:
-      containers:
-      - image: miniflux/miniflux
-        name: app
-        env:
-        # This is necessary for the miniflux to update the db schema, even on an empty DB
-        - name: CREATE_ADMIN
-          value: "1"
-        - name: RUN_MIGRATIONS
-          value: "1"
-        - name: ADMIN_USERNAME
-          value: "admin"
-        - name: ADMIN_PASSWORD
-          valueFrom:
-            secretKeyRef:
-              name: miniflux-credentials
-              key: admin-password.secret
-        - name: DATABASE_URL
-          valueFrom:
-            secretKeyRef:
-              name: miniflux-credentials
-              key: database-url.secret
-EOF
-kubectl create -f /var/data/miniflux/deployment.yml
-```
-
-### Check pods
-
-Check that your deployment is running, with ```kubectl get pods -n miniflux```. After a minute or so, you should see 2 "Running" pods, as illustrated below:
-
-```bash
-[funkypenguin:~] % kubectl get pods -n miniflux
-NAME                   READY     STATUS    RESTARTS   AGE
-app-667c667b75-5jjm9   1/1       Running   0          4d
-db-fcd47b88f-9vvqt     1/1       Running   0          4d
-[funkypenguin:~] %
-```
-
-### Create db service
-
-The db service resource "advertises" the availability of PostgreSQL's port (TCP 5432) in your pod, to the rest of the cluster (*constrained within your namespace*). It seems a little like overkill coming from the Docker Swarm's automated "service discovery" model, but the Kubernetes design allows for load balancing, rolling upgrades, and health checks of individual pods, without impacting the rest of the cluster elements.
-
-```bash
-cat <<EOF > /var/data/miniflux/db-service.yml
-kind: Service
-apiVersion: v1
-metadata:
-  name: db
-  namespace: miniflux
-spec:
-  selector:
-    app: db
-  ports:
-  - protocol: TCP
-    port: 5432
-  clusterIP: None
-EOF
-kubectl create -f /var/data/miniflux/service.yml
-```
-
-### Create app service
-
-The app service resource "advertises" the availability of miniflux's HTTP listener port (TCP 8080) in your pod. This is the service which will be referred to by the ingress (below), so that Traefik can route incoming traffic to the miniflux app.
-
-```bash
-cat <<EOF > /var/data/miniflux/app-service.yml
-kind: Service
-apiVersion: v1
-metadata:
-  name: app
-  namespace: miniflux
-spec:
-  selector:
-    app: app
-  ports:
-  - protocol: TCP
-    port: 8080
-  clusterIP: None
-EOF
-kubectl create -f /var/data/miniflux/app-service.yml
-```
-
-### Check services
-
-Check that your services are deployed, with ```kubectl get services -n miniflux```. You should see something like this:
-
-```bash
-[funkypenguin:~] % kubectl get services -n miniflux
-NAME   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
-app    ClusterIP   None         <none>        8080/TCP   55d
-db     ClusterIP   None         <none>        5432/TCP   55d
-[funkypenguin:~] %
-```
-
-### Create ingress
-
-The ingress resource tells Traefik what to forward inbound requests for *miniflux.example.com* to your service (defined above), which in turn passes the request to the "app" pod. Adjust the config below for your domain.
-
-```bash
-cat <<EOF > /var/data/miniflux/ingress.yml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: app
-  namespace: miniflux
-  annotations:
-    kubernetes.io/ingress.class: traefik
-spec:
-  rules:
-  - host: miniflux.example.com
-    http:
-      paths:
-      - backend:
-          serviceName: app
-          servicePort: 8080
-EOF
-kubectl create -f /var/data/miniflux/ingress.yml
-```
-
-Check that your service is deployed, with ```kubectl get ingress -n miniflux```. You should see something like this:
-
-```bash
-[funkypenguin:~] 130 % kubectl get ingress -n miniflux
-NAME   HOSTS                         ADDRESS   PORTS   AGE
-app    miniflux.funkypenguin.co.nz             80      55d
-[funkypenguin:~] %
-```
-
-### Access Miniflux
-
-At this point, you should be able to access your instance on your chosen DNS name (*i.e. `https://miniflux.example.com`)
-
-### Troubleshooting
-
-To look at the Miniflux pod's logs, run ```kubectl logs -n miniflux <name of pod per above> -f```. For further troubleshooting hints, see [Troubleshooting](/reference/kubernetes/troubleshooting/).
+* [Velero](/kubernetes/backup/velero/)

 --8<-- "recipe-footer.md"
328  docs/kubernetes/backup/velero.md  Normal file
@@ -0,0 +1,328 @@

---
title: Backup your Kubernetes cluster with Velero
description: Use Velero to backup (and restore) your Kubernetes workloads and volumes
values_yaml_url: https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml
helm_chart_version: 5.1.x
helm_chart_name: velero
helm_chart_repo_name: vmware-tanzu
helm_chart_repo_url: https://vmware-tanzu.github.io/helm-charts
helmrelease_name: velero
helmrelease_namespace: velero
kustomization_name: velero
slug: Velero
status: new
---

# Velero

Don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Backup your stuff.

<!-- markdownlint-disable MD033 -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

[Velero](https://velero.io/), a VMWare-backed open-source project, is a mature cloud-native backup solution, able to selectively backup / restore your various workloads / data.

!!! summary "Ingredients"

    * [x] A [Kubernetes cluster](/kubernetes/cluster/)
    * [x] [Flux deployment process](/kubernetes/deployment/flux/) bootstrapped

    Optional:

    * [ ] S3-based storage for off-cluster backup

    Optionally, for volume snapshot support:

    * [ ] Persistence supporting PVC snapshots for in-cluster backup (*i.e., [Rook Ceph](/kubernetes/persistence/rook-ceph/)*)
    * [ ] [Snapshot controller](/kubernetes/backup/csi-snapshots/snapshot-controller/) with [validation webhook](/kubernetes/backup/csi-snapshots/snapshot-validation-webhook/)

## Terminology

Let's get some terminology out of the way. Velero manages [Backups](https://velero.io/docs/main/api-types/backup/) and [Restores](https://velero.io/docs/main/api-types/restore/), to [BackupStorageLocations](https://velero.io/docs/main/api-types/backupstoragelocation/), and optionally snapshots volumes to [VolumeSnapshotLocations](https://velero.io/docs/main/api-types/volumesnapshotlocation/), either manually or on a [Schedule](https://velero.io/docs/main/api-types/schedule/).

Clear as mud? :footprints:
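To make the terminology a little more concrete, here's a minimal, illustrative Backup object. You won't normally create these by hand (the `schedules` key we configure below generates them for you), and the names / namespaces here are hypothetical:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: manual-backup-example # hypothetical name
  namespace: velero           # Backup objects live in Velero's own namespace
spec:
  includedNamespaces:
    - my-app                  # hypothetical namespace to back up
  storageLocation: default    # a BackupStorageLocation
  # volumeSnapshotLocations: [ rook-ceph ] # only if you've enabled snapshot support
  ttl: 240h0m0s               # keep this backup for 10 days
```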
{% include 'kubernetes-flux-namespace.md' %}
{% include 'kubernetes-flux-helmrepository.md' %}
{% include 'kubernetes-flux-kustomization.md' %}

### SealedSecret

We'll need credentials to be able to access our S3 storage, so let's create them now. Velero will use AWS credentials in the standard format preferred by the AWS SDK, so create a temporary file like this:

```bash title="mysecret.aws.is.dumb"
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_OR_S3_COMPATIBLE_EQUIVALENT
aws_secret_access_key = YOUR_AWS_SECRET_KEY_OR_S3_COMPATIBLE_EQUIVALENT
```

And then turn this file into a secret, and seal it, with:

```bash
kubectl create secret generic -n velero velero-credentials \
  --from-file=cloud=mysecret.aws.is.dumb \
  -o yaml --dry-run=client \
  | kubeseal > velero/sealedsecret-velero-credentials.yaml
```

You can now delete `mysecret.aws.is.dumb` :thumbsup:
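For reference, the generated `sealedsecret-velero-credentials.yaml` will look roughly like the sketch below; the ciphertext is unique to your cluster's sealed-secrets key, so yours will differ (and be much longer):

```yaml
# Illustrative kubeseal output only - the encryptedData value is truncated
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: velero-credentials
  namespace: velero
spec:
  encryptedData:
    cloud: AgBy3i4OJSWK... # the encrypted AWS-style credentials file
  template:
    metadata:
      name: velero-credentials
      namespace: velero
```

Only the sealed-secrets controller in your cluster can decrypt it, so the sealed version is safe to commit to your flux repo.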
{% include 'kubernetes-flux-helmrelease.md' %}

## Configure Velero

Here are some areas of the upstream values.yaml to pay attention to:

### initContainers

Uncomment `velero-plugin-for-aws` to use an S3 target for backup, and additionally uncomment `velero-plugin-for-csi` if you plan to create volume snapshots:

```yaml
# Init containers to add to the Velero deployment's pod spec. At least one plugin provider image is required.
# If the value is a string then it is evaluated as a template.
initContainers:
  - name: velero-plugin-for-csi
    image: velero/velero-plugin-for-csi:v0.6.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.8.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
```

### backupStorageLocation

Additionally, it's required to configure certain values (*highlighted below*) under the `configuration` key:

```yaml hl_lines="7 9 11 25 27 31 33"
configuration:
  # Parameters for the BackupStorageLocation(s). Configure multiple by adding other element(s) to the backupStorageLocation slice.
  # See https://velero.io/docs/v1.6/api-types/backupstoragelocation/
  backupStorageLocation:
    # name is the name of the backup storage location where backups should be stored. If a name is not provided,
    # a backup storage location will be created with the name "default". Optional.
    - name:
      # provider is the name for the backup storage location provider.
      provider: aws # if we're using S3-compatible storage (1)
      # bucket is the name of the bucket to store backups in. Required.
      bucket: my-awesome-bucket # the name of my specific bucket (2)
      # caCert defines a base64 encoded CA bundle to use when verifying TLS connections to the provider. Optional.
      caCert:
      # prefix is the directory under which all Velero data should be stored within the bucket. Optional.
      prefix: optional-subdir # a path under the bucket in which the backup data should be stored (3)
      # default indicates this location is the default backup storage location. Optional.
      default: true # prevents annoying warnings in the log
      # validationFrequency defines how frequently Velero should validate the object storage. Optional.
      validationFrequency:
      # accessMode determines if velero can write to this backup storage location. Optional.
      # default to ReadWrite, ReadOnly is used during migrations and restores.
      accessMode: ReadWrite
      credential:
        # name of the secret used by this backupStorageLocation.
        name: velero-credentials # this is the sealed-secret we created above
        # name of key that contains the secret data to be used.
        key: cloud # this is the key we used in the sealed-secret we created above
      # Additional provider-specific configuration. See link above
      # for details of required/optional fields for your provider.
      config:
        region: # set this to your B2 region, for example us-west-002
        s3ForcePathStyle:
        s3Url: # set this to the https URL to your endpoint, for example "https://s3.us-west-002.backblazeb2.com"
        # kmsKeyId:
        # resourceGroup:
        # The ID of the subscription containing the storage account, if different from the cluster’s subscription. (Azure only)
        # subscriptionId:
        # storageAccount:
        # publicUrl:
        # Name of the GCP service account to use for this backup storage location. Specify the
        # service account here if you want to use workload identity instead of providing the key file. (GCP only)
        # serviceAccount:
        # Option to skip certificate validation or not if insecureSkipTLSVerify is set to be true, the client side should set the
        # flag. For Velero client commands like velero backup describe, velero backup logs needs to add the flag --insecure-skip-tls-verify
        # insecureSkipTLSVerify:
```

1. There are other providers
2. Your bucket name, unique to your S3 provider
3. I use prefixes to backup multiple clusters to the same bucket (*see the sketch below*)
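For example (*illustrative only; the `cluster-b` names are hypothetical*), a second cluster backing up to the same bucket under its own prefix would simply use another element of the `backupStorageLocation` slice:

```yaml
    - name: cluster-b # hypothetical second location
      provider: aws
      bucket: my-awesome-bucket # the same bucket as above...
      prefix: cluster-b         # ...but a different prefix
      credential:
        name: velero-credentials
        key: cloud
      config:
        region: us-west-002 # your region will differ
        s3Url: "https://s3.us-west-002.backblazeb2.com"
```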
### volumeSnapshotLocation

Also under the `configuration` key, you'll find the `volumeSnapshotLocation` section. Use this if you're using a [supported provider](https://velero.io/docs/v1.6/supported-providers/), and you want to create in-cluster snapshots. In the following example, I'm creating Velero snapshots with rook-ceph using the CSI provider. Take note of the highlighted sections; these are the minimal options you'll want to set:

```yaml hl_lines="3 6 40 65"
  volumeSnapshotLocation:
    # name is the name of the volume snapshot location where snapshots are being taken. Required.
    - name: rook-ceph
      # provider is the name for the volume snapshot provider. If omitted
      # `configuration.provider` will be used instead.
      provider: csi
      # Additional provider-specific configuration. See link above
      # for details of required/optional fields for your provider.
      config: {}
        # region:
        # apiTimeout:
        # resourceGroup:
        # The ID of the subscription where volume snapshots should be stored, if different from the cluster’s subscription. If specified, also requires `configuration.volumeSnapshotLocation.config.resourceGroup` to be set. (Azure only)
        # subscriptionId:
        # incremental:
        # snapshotLocation:
        # project:
  # These are server-level settings passed as CLI flags to the `velero server` command. Velero
  # uses default values if they're not passed in, so they only need to be explicitly specified
  # here if using a non-default value. The `velero server` default values are shown in the
  # comments below.
  # --------------------
  # `velero server` default: restic
  uploaderType:
  # `velero server` default: 1m
  backupSyncPeriod:
  # `velero server` default: 4h
  fsBackupTimeout:
  # `velero server` default: 30
  clientBurst:
  # `velero server` default: 500
  clientPageSize:
  # `velero server` default: 20.0
  clientQPS:
  # Name of the default backup storage location. Default: default
  defaultBackupStorageLocation:
  # How long to wait by default before backups can be garbage collected. Default: 72h
  defaultBackupTTL:
  # Name of the default volume snapshot location.
  defaultVolumeSnapshotLocations: csi:rook-ceph
  # `velero server` default: empty
  disableControllers:
  # `velero server` default: 1h
  garbageCollectionFrequency:
  # Set log-format for Velero pod. Default: text. Other option: json.
  logFormat:
  # Set log-level for Velero pod. Default: info. Other options: debug, warning, error, fatal, panic.
  logLevel:
  # The address to expose prometheus metrics. Default: :8085
  metricsAddress:
  # Directory containing Velero plugins. Default: /plugins
  pluginDir:
  # The address to expose the pprof profiler. Default: localhost:6060
  profilerAddress:
  # `velero server` default: false
  restoreOnlyMode:
  # `velero server` default: customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io
  restoreResourcePriorities:
  # `velero server` default: 1m
  storeValidationFrequency:
  # How long to wait on persistent volumes and namespaces to terminate during a restore before timing out. Default: 10m
  terminatingResourceTimeout:
  # Comma separated list of velero feature flags. default: empty
  # features: EnableCSI
  features: EnableCSI
  # `velero server` default: velero
  namespace:
```

### schedules

Set up backup schedule(s) for your preferred coverage, TTL, etc. See [Schedule](https://velero.io/docs/main/api-types/schedule/) for a list of available configuration options under the `template` key:

```yaml
schedules:
  daily-backups-r-cool:
    disabled: false
    labels:
      myenv: foo
    annotations:
      myenv: foo
    schedule: "0 0 * * *" # once a day, at midnight
    useOwnerReferencesInBackup: false
    template:
      ttl: "240h"
      storageLocation: default # use the same name you defined above in backupStorageLocation
      includedNamespaces:
        - foo
```

{% include 'kubernetes-flux-check.md' %}

### Is it working?

Confirm that the basic config is good, by running `kubectl logs -n velero -l app.kubernetes.io/name=velero`:

```bash
time="2023-10-17T22:24:40Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/b2 controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:152"
time="2023-10-17T22:24:41Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/b2 controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:137"
```

!!! tip "Confirm Velero is happy with your BackupStorageLocation"
    The pod output will tell you if Velero is unable to access your BackupStorageLocation. If this happens, the most likely cause will be a misconfiguration of your S3 settings!

### Test backup

Next, you'll need the Velero CLI, which you can install on your OS based on the instructions [here](https://velero.io/docs/v1.12/basic-install/#install-the-cli).

Create a "quickie" backup of a namespace you can afford to lose, like this:

```bash
velero backup create goandbeflameretardant --include-namespaces=chartmuseum --wait
```

Confirm your backup completed successfully, with:

```bash
velero backup describe goandbeflameretardant
```

Then, like a boss, **delete** the original namespace (*you can afford to lose it, right?*) with some bad-ass command like `kubectl delete ns chartmuseum`. Now it's gone.

### Test restore

Finally, in a kick-ass move of ninja :ninja: sysadmin awesomeness, restore your backup with:

```bash
velero create restore --from-backup goandbeflameretardant --wait
```

Confirm that your pods / data have been restored.

Congratulations, you have a backup!

### Test scheduled backup

Confirm the basics are working by running `velero get schedules`, to list your schedules:

```bash
davidy@gollum01:~$ velero get schedules
NAME           STATUS    CREATED                         SCHEDULE    BACKUP TTL   LAST BACKUP   SELECTOR   PAUSED
velero-daily   Enabled   2023-10-13 04:20:42 +0000 UTC   0 0 * * *   240h0m0s     22h ago       <none>     false
davidy@gollum01:~$
```

Force an immediate backup per the schedule, by running `velero backup create --from-schedule=velero-daily`:

```bash
davidy@gollum01:~$ velero backup create --from-schedule=velero-daily
Creating backup from schedule, all other filters are ignored.
Backup request "velero-daily-20231017222207" submitted successfully.
Run `velero backup describe velero-daily-20231017222207` or `velero backup logs velero-daily-20231017222207` for more details.
davidy@gollum01:~$
```

Use the `describe` and `logs` commands output above to check the state of your backup (*you'll only get the backup logs after the backup has completed*).

When describing your completed backup, if the result is anything but a complete success, then further investigation is required.

## Summary

What have we achieved? We've got scheduled backups running, and we've successfully tested a restore!

!!! summary "Summary"
    Created:

    * [X] Velero running and creating restorable backups on schedule