# Snapshots
Before we get carried away creating pods, services, deployments, etc., let's spare a thought for _security_... (_DevSecPenguinOps, here we come!_). In the context of this recipe, security refers to safeguarding your data from accidental loss, as well as from malicious damage.
Under [Docker Swarm](/ha-docker-swarm/design/), we used [shared storage](/ha-docker-swarm/shared-storage-ceph/) with [Duplicity](/recipes/duplicity/) (or [ElkarBackup](/recipes/elkarbackup/)) to automate backups of our persistent data.
Now that we're playing in the deep end with Kubernetes, we'll need a Cloud-native backup solution...
It bears repeating though - don't be like [Cameron](http://haltandcatchfire.wikia.com/wiki/Cameron_Howe). Back up your stuff.
This recipe employs a clever tool ([miracle2k/k8s-snapshots](https://github.com/miracle2k/k8s-snapshots)), running _inside_ your cluster, to trigger automated snapshots of your persistent volumes, using your cloud provider's APIs.
## Ingredients
1. [Kubernetes cluster](/kubernetes/cluster/) on either AWS or GKE (_the only backends currently supported, although other providers are apparently [easy to implement](https://github.com/miracle2k/k8s-snapshots/blob/master/k8s_snapshots/backends/abstract.py)_)
2. Geek-Fu required : 🐒🐒 (_medium - minor adjustments may be required_)
## Preparation
### Create RoleBinding (GKE only)
If you're running GKE, run the following (substituting your own Google account email) to create a RoleBinding, allowing your user to grant rights it doesn't currently have to the service account responsible for creating the snapshots:
```
kubectl create clusterrolebinding your-user-cluster-admin-binding \
    --clusterrole=cluster-admin --user=<your-google-account-email>
```
!!! question
    Why do we have to do this? Check [this blog post](https://www.funkypenguin.co.nz/workaround-blocked-attempt-to-grant-extra-privileges-on-gke/) for details.
### Apply RBAC
If your cluster is RBAC-enabled (_it probably is_), you'll need to create a ClusterRole and ClusterRoleBinding to allow k8s-snapshots to see your PVs and friends:
```
kubectl apply -f https://raw.githubusercontent.com/miracle2k/k8s-snapshots/master/rbac.yaml
```
## Serving
### Deploy the pod
Ready? Run the following to create a deployment in the kube-system namespace (the manifest below follows the upstream k8s-snapshots README; check there for the latest image tag):
```
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-snapshots
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: k8s-snapshots
    spec:
      containers:
      - name: k8s-snapshots
        image: elsdoerfer/k8s-snapshots:v2.0
EOF
```
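Once the pod is up, a quick sanity check (the label selector below matches the deployment above) confirms that k8s-snapshots started cleanly:
```
kubectl -n kube-system get pods -l app=k8s-snapshots
kubectl -n kube-system logs deployment/k8s-snapshots
```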
### Pick PVs to snapshot
k8s-snapshots relies on annotations to tell it how frequently to snapshot your PVs. A PV requires the `backup.kubernetes.io/deltas` annotation in order to be snapshotted.
From the k8s-snapshots README:
```
The generations are defined by a list of deltas formatted as ISO 8601 durations (this differs from tarsnapper). PT60S or PT1M means a minute, PT12H or P0.5D is half a day, P1W or P7D is a week. The number of backups in each generation is implied by its and the parent generation's delta.

For example, given the deltas PT1H P1D P7D, the first generation will consist of 24 backups each one hour older than the previous (or the closest approximation possible given the available backups), the second generation of 7 backups each one day older than the previous, and backups older than 7 days will be discarded for good.

The most recent backup is always kept.

The first delta is the backup interval.
```
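Applying that scheme to the annotation we'll use for new PVs below (this is my reading of the README's rules, and the counts are approximate):
```
# backup.kubernetes.io/deltas: PT1H P2D P30D P180D
#
# PT1H  : snapshot every hour (the first delta is the backup interval)
# P2D   : keep ~48 hourly snapshots (2 days / 1 hour)
# P30D  : then keep ~15 snapshots spaced 2 days apart (30 days / 2 days)
# P180D : then keep ~6 snapshots spaced 30 days apart; older ones are discarded
```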
To add the annotation to an existing PV, run something like this:
```
kubectl patch pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 -p \
'{"metadata": {"annotations": {"backup.kubernetes.io/deltas": "P1D P30D P360D"}}}'
```
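You can verify that the annotation took effect with something like this (note that the dots in the annotation key need escaping in jsonpath):
```
kubectl get pv pvc-01f74065-8fe9-11e6-abdd-42010af00148 \
  -o jsonpath='{.metadata.annotations.backup\.kubernetes\.io/deltas}'
```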
To add the annotation to a _new_ PV, add the following annotation to your **PVC**:
```
backup.kubernetes.io/deltas: PT1H P2D P30D P180D
```
Here's an example of the PVC from the UniFi recipe, which retains 7 daily snapshots of the backing PV:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: controller-volumeclaim
  namespace: unifi
  annotations:
    backup.kubernetes.io/deltas: P1D P7D
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
And here's what my snapshot list looks like after a few days:

### Snapshot a non-Kubernetes volume (optional)
If you're running traditional compute instances with your cloud provider (I do this for my poor man's load balancer), you might want to back up _these_ volumes as well.
To do so, first create a custom resource definition (CRD) for `SnapshotRule` (the definition below is taken from the k8s-snapshots README):
```
cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: snapshotrules.k8s-snapshots.elsdoerfer.com
spec:
  group: k8s-snapshots.elsdoerfer.com
  version: v1
  scope: Namespaced
  names:
    plural: snapshotrules
    singular: snapshotrule
    kind: SnapshotRule
    shortNames:
    - sr
EOF
```
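With the CRD in place, define a `SnapshotRule` pointing at the target disk. Here's a sketch based on the README's Google Cloud example; the rule name, disk name, and zone are placeholders you'll need to substitute with your own:
```
cat <<EOF | kubectl apply -f -
apiVersion: "k8s-snapshots.elsdoerfer.com/v1"
kind: SnapshotRule
metadata:
  name: loadbalancer-disk
spec:
  deltas: P1D P30D
  backend: google
  disk:
    name: my-loadbalancer-disk    # placeholder: your GCE disk name
    zone: us-central1-a           # placeholder: your disk's zone
EOF
```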