New to Stash? Please start here.
Restic is a Kubernetes
CustomResourceDefinition (CRD). It provides declarative configuration for restic in a Kubernetes native way. You only need to describe the desired backup operations in a Restic object, and the Stash operator will reconfigure the matching workloads to the desired state for you.
As with all other Kubernetes objects, a Restic needs `apiVersion`, `kind` and `metadata` fields. It also needs a `.spec` section. Below is an example Restic object.
```yaml
apiVersion: stash.appscode.com/v1alpha1
kind: Restic
metadata:
  name: stash-demo
  namespace: default
spec:
  selector:
    matchLabels:
      app: stash-demo
  fileGroups:
  - path: /source/data
    retentionPolicyName: 'keep-last-5'
  backend:
    local:
      mountPath: /safe/data
      hostPath:
        path: /data/stash-test/restic-repo
    storageSecretName: stash-demo
  schedule: '@every 1m'
  volumeMounts:
  - mountPath: /source/data
    name: source-data
  retentionPolicies:
  - name: 'keep-last-5'
    keepLast: 5
    prune: true
```
The `.spec` section has the following parts:
`spec.selector` is a required field that specifies a label selector for the Deployments, ReplicaSets, ReplicationControllers, DaemonSets and StatefulSets targeted by this Restic. Selectors are always matched against the labels of workloads in the same namespace as the Restic object itself. You can create a Deployment, etc. and its matching Restic in any order; as long as the labels match, the Stash operator will add a sidecar container to the workload. If multiple Restic objects match a given workload, the Stash operator will error out and avoid adding a sidecar container.
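For example, a Deployment that matches the `spec.selector` of the Restic above might look like the following. This is an illustrative sketch; the workload name, image and volume are assumptions, not part of the Stash docs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stash-demo          # hypothetical workload name
  namespace: default        # must be the same namespace as the Restic object
  labels:
    app: stash-demo         # matches spec.selector.matchLabels of the Restic above
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stash-demo
  template:
    metadata:
      labels:
        app: stash-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: source-data
          mountPath: /source/data   # the directory listed in spec.fileGroups
      volumes:
      - name: source-data
        emptyDir: {}
```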
`spec.type` is an optional field. The default value for this field is `online`. For offline backup you need to specify `spec.type: offline`. For more details, see here.
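A minimal fragment enabling offline backup might look like the following, assuming the `v1alpha1` schema shown in the example above:

```yaml
# Illustrative fragment: switch this Restic to offline backup mode
spec:
  type: offline
```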
`spec.fileGroups` is a required field that specifies one or more directories that are backed up by `restic`. For each directory, you can specify custom tags and a retention policy for snapshots.
`spec.fileGroups.path` represents a local directory that is backed up by `restic`.
`spec.fileGroups.tags` is an optional field. This can be used to apply one or more custom tags to snapshots taken from this path.
`spec.fileGroups.retentionPolicyName` is an optional field that is used to specify a retention policy defined in `spec.retentionPolicies`. This defines how old snapshots are forgotten by `restic`. If set, these options directly translate into flags for the `restic forget` command.
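Putting these fields together, a file group entry might look like the following sketch. The path matches the example above; the tag values are assumptions:

```yaml
# Illustrative fileGroups fragment
spec:
  fileGroups:
  - path: /source/data
    tags: ['hourly', 'demo']          # custom tags applied to snapshots from this path
    retentionPolicyName: 'keep-last-5' # must match a policy in spec.retentionPolicies
```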
`spec.retentionPolicies` defines an array of retention policies for old snapshots. The retention policy options are listed below.
| Policy | Value | `restic forget` flag | Description |
|--------|-------|----------------------|-------------|
| `name` | string | | Name of retention policy provided by user. This is used in file groups to refer to a policy. |
| `keepLast` | integer | `--keep-last n` | Never delete the n last (most recent) snapshots. |
| `keepHourly` | integer | `--keep-hourly n` | For the last n hours in which a snapshot was made, keep only the last snapshot for each hour. |
| `keepDaily` | integer | `--keep-daily n` | For the last n days which have one or more snapshots, only keep the last one for that day. |
| `keepWeekly` | integer | `--keep-weekly n` | For the last n weeks which have one or more snapshots, only keep the last one for that week. |
| `keepMonthly` | integer | `--keep-monthly n` | For the last n months which have one or more snapshots, only keep the last one for that month. |
| `keepYearly` | integer | `--keep-yearly n` | For the last n years which have one or more snapshots, only keep the last one for that year. |
| `keepTags` | array | `--keep-tag` | Keep all snapshots which have all tags specified by this option (can be specified multiple times). |
| `prune` | bool | `--prune` | If set, actually removes the data that was referenced by the snapshot from the repository. |
You can set one or more of these retention policy options together. To learn more, read here.
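For instance, a policy combining several of these options might look like the following sketch; the policy name and values are assumptions:

```yaml
# Illustrative retentionPolicies entry combining several options
spec:
  retentionPolicies:
  - name: 'keep-daily-weekly'   # referenced from fileGroups via retentionPolicyName
    keepDaily: 7                # translates to --keep-daily 7
    keepWeekly: 4               # translates to --keep-weekly 4
    prune: true                 # translates to --prune: remove unreferenced data
```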
To learn how to configure various backends for Restic, please visit here.
`spec.schedule` is a cron expression that indicates how often `restic` commands are invoked for the file groups. At each tick, `restic backup` and `restic forget` commands are run for each of the configured file groups.
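Two illustrative schedule values are shown below; the exact cron dialect accepted (for example, whether a seconds field is supported) depends on the Stash version, so treat these as assumptions:

```yaml
spec:
  schedule: '@every 30m'    # descriptor form: run every 30 minutes
  # schedule: '0 2 * * *'   # cron form: run daily at 02:00
```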
`spec.resources` refers to compute resources required by the `stash` sidecar container. To learn more, visit here.
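This field follows the standard Kubernetes resource-requirements shape. A sketch with assumed values:

```yaml
# Illustrative resource requests/limits for the stash sidecar (values are assumptions)
spec:
  resources:
    requests:
      memory: 64Mi
      cpu: 100m
    limits:
      memory: 128Mi
      cpu: 250m
```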
`spec.volumeMounts` refers to volumes to be mounted in the `stash` sidecar to get access to the `fileGroups` paths.
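In other words, the sidecar mounts the same volume as the workload so that the file group path is visible to `restic`. A sketch, matching the example above:

```yaml
# Illustrative volumeMounts fragment
spec:
  volumeMounts:
  - name: source-data        # a volume defined in the target workload's pod spec
    mountPath: /source/data  # must cover the spec.fileGroups paths
```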
The location of the restic repository inside the backend depends on the kind of workload:

- For `Deployment`, `ReplicaSet` and `ReplicationController`, the restic repo is created in the sub-directory `<WORKLOAD_KIND>/<WORKLOAD_NAME>`. For multiple replicas, only one repository is created and the sidecar is added to only one pod, selected by leader election.
- For `StatefulSet`, the restic repository is created in the sub-directory `<WORKLOAD_KIND>/<POD_NAME>`. For multiple replicas, multiple repositories are created and the sidecar is added to all pods.
- For `DaemonSet`, the restic repository is created in the sub-directory `<WORKLOAD_KIND>/<WORKLOAD_NAME>/<NODE_NAME>`. For multiple replicas, multiple repositories are created and the sidecar is added to all pods.
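With the local backend from the example above, the resulting directory layout inside the backend might look like this. The workload, pod and node names are hypothetical:

```
/data/stash-test/restic-repo/
├── Deployment/stash-demo/            # one repo per Deployment
├── StatefulSet/stash-demo-0/         # one repo per StatefulSet pod
└── DaemonSet/stash-demo/node-1/      # one repo per DaemonSet node
```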
The Stash operator updates the `.status` of a Restic CRD every time a backup operation is completed.
- `status.backupCount` indicates the total number of backup operations completed for this Restic CRD.
- `status.firstBackupTime` indicates the timestamp of the first backup operation.
- `status.lastBackupTime` indicates the timestamp of the last backup operation.
- `status.lastSuccessfulBackupTime` indicates the timestamp of the last successful backup operation. If `status.lastBackupTime` and `status.lastSuccessfulBackupTime` are the same, the last backup operation was successful.
- `status.lastBackupDuration` indicates the duration of the last backup operation.
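After a few backups, the recorded status might look like the following sketch; all timestamps and values here are illustrative, not real output:

```yaml
# Hypothetical .status of the stash-demo Restic after five backups
status:
  backupCount: 5
  firstBackupTime: 2018-04-10T05:10:11Z
  lastBackupTime: 2018-04-10T05:14:11Z
  lastSuccessfulBackupTime: 2018-04-10T05:14:11Z   # same as lastBackupTime: last run succeeded
  lastBackupDuration: 2.540255343s
```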
For each workload where a sidecar container is added by the Stash operator, the following annotations are added:

- `restic.appscode.com/last-applied-configuration` indicates the configuration of the applied Restic CRD.
- `restic.appscode.com/tag` indicates the tag of the `appscode/stash` Docker image that was added as the sidecar.
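On the workload these annotations might appear as follows; the configuration value is abbreviated and the image tag is a hypothetical example:

```yaml
# Illustrative workload annotations added by the Stash operator
metadata:
  annotations:
    restic.appscode.com/last-applied-configuration: |
      {"apiVersion":"stash.appscode.com/v1alpha1","kind":"Restic","metadata":{"name":"stash-demo"}}
    restic.appscode.com/tag: "0.7.0"   # hypothetical appscode/stash image tag
```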
The sidecar container watches for changes in the Restic `fileGroups`, `backend` and `schedule`. These changes are automatically applied on the next run of the `restic` commands. If the selector of a Restic CRD is changed, the Stash operator will update the workload accordingly, adding or removing sidecars as required.
To stop taking backups, you can do one of two things: