K8s Host Storage Backups with Rsync and CronJob

I host a single-node Kubernetes cluster (MicroK8s) at home running various applications. I use simple hostPath volumes for storage and need a way to create backups of this data. The solution I chose combines Kubernetes CronJobs with rsync. The basic steps are:

  • Create a backups namespace
  • Create a volume for the backup directory
  • Create a volume to map the application data directory
  • Create a CronJob to perform the sync

I use Ansible for deployments, so I will be showing the relevant task/template files.

Create a Backups Namespace

Originally I created the CronJobs in the same namespace as the applications, but the number of containers they created started to make a mess of my monitoring data (in Prometheus). I recommend creating a dedicated namespace just for backups.

- name: Create Backups Namespace
  k8s:
    name: "{{ backups_namespace }}"
    api_version: v1
    kind: Namespace
    state: present
Create a Namespace for Backup Resources

Create the Volumes

Use a separate drive from the source data to protect against disk failure.

These are the templates I use to create the volumes:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: "{{ volume_name }}-persistent-volume"
  labels:
    app: "{{ volume_app }}"
    type: local
spec:
  storageClassName: host-storage
  capacity:
    storage: "{{ disk_size }}"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "{{ data_dir }}"
Persistent Volume Template

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ claim_name }}-volume-claim"
  labels:
    app: "{{ claim_app }}"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: host-storage
  resources:
    requests:
      storage: "{{ disk_size }}"
Persistent Volume Claim Template

First I'll create a volume for the backup destination. PersistentVolumes are cluster-scoped, but PersistentVolumeClaims are namespaced. This volume claim lives in the same namespace as the backup Pods so they will all share the claim:


- name: Create a Backups Persistent Volume
  vars:
    - volume_name: backups
    - volume_app: backups
    - disk_size: <backup volume size>
    - data_dir: /path/to/backup/directory
  k8s:
    state: present
    definition: "{{ lookup('template', common_templates + '/volume/host-storage-volume.yml.j2') }}"


- name: Create a Backups Persistent Volume Claim
  vars:
    - claim_name: backups
    - claim_app: backups
    - disk_size: <backup volume size>
  k8s:
    state: present
    namespace: "{{ backups_namespace }}"
    definition: "{{ lookup('template', common_templates + '/volume/host-storage-volume-claim.yml.j2') }}"
Backup Destination Volume
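
As a quick sanity check you can confirm the scoping mentioned above: PersistentVolumes list without a namespace, while the claim shows up under the backups namespace.

# PersistentVolumes are cluster-scoped, no namespace flag needed
microk8s.kubectl get pv
# PersistentVolumeClaims are namespaced
microk8s.kubectl -n backups get pvc
Verify the Volumes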

At this point we have somewhere to write the backups; now we need to set up the reads. I reuse the same templates and just add in the specifics for the application. rsync will copy everything from the application's data_dir to the backups data_dir.


- name: Create a <Application> Persistent Volume
  vars:
    - volume_name: <application>
    - volume_app: <application>
    - disk_size: <source volume size>
    - data_dir: /path/to/application/data/
  k8s:
    state: present
    definition: "{{ lookup('template', common_templates + '/volume/host-storage-volume.yml.j2') }}"


- name: Create a <Application> Persistent Volume Claim
  vars:
    - claim_name: <application>
    - claim_app: <application>
    - disk_size: <source volume size>
  k8s:
    state: present
    namespace: "{{ backups_namespace }}"
    definition: "{{ lookup('template', common_templates + '/volume/host-storage-volume-claim.yml.j2') }}"
Backup Source Volume

One gotcha you might run into here: you can't reuse the existing volume claim of the application you are targeting, because claims are namespaced (unless the backups run in the same namespace as the application, which I don't think they should). That's why the tasks above create a second PersistentVolume over the same hostPath.

Create the CronJobs

For this kind of backup I have three separate cron jobs: daily, weekly, and monthly. The daily and weekly jobs sync into a fixed directory, so each only ever holds the most recent copy. The monthly job creates a separate directory for each run, ending in the current date.
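
The schedule strings are standard five-field cron expressions (minute, hour, day of month, month, day of week):

0 13 * * *     # daily at 13:00
00 14 * * 5    # weekly, every Friday at 14:00
00 15 1 * *    # monthly, on the 1st at 15:00
Cron Schedule Breakdown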

Below is an example that I use to back up NextCloud data:

- name: Create NextCloud Daily Backup
  vars:
    - cron_name: "nextcloud-backup-daily"
    - trigger_time: "0 13 * * *"
    - src_volume: nextcloud-backup-persistent-volume
    - src_volume_claim: nextcloud-backup-volume-claim
    - src_backup_path: data
    - dst_volume: backups-persistent-volume
    - dst_volume_claim: backups-volume-claim
    - dst_backup_path: nextcloud/nextcloud-data-daily
  k8s:
    state: "{{ nextcloud_k8_state }}"
    namespace: "{{ backups_namespace }}"
    definition: "{{ lookup('template', '../common/templates/rsync-backup.yml.j2') }}"

- name: Create NextCloud Weekly Backup
  vars:
    - cron_name: "nextcloud-backup-weekly"
    - trigger_time: "00 14 * * 5"
    - src_volume: nextcloud-backup-persistent-volume
    - src_volume_claim: nextcloud-backup-volume-claim
    - src_backup_path: data
    - dst_volume: backups-persistent-volume
    - dst_volume_claim: backups-volume-claim
    - dst_backup_path: nextcloud/nextcloud-data-weekly
  k8s:
    state: "{{ nextcloud_k8_state }}"
    namespace: "{{ backups_namespace }}"
    definition: "{{ lookup('template', '../common/templates/rsync-backup.yml.j2') }}"

- name: Create NextCloud Monthly Backup
  vars:
    - cron_name: "nextcloud-backup-monthly"
    - trigger_time: "00 15 1 * *"
    - src_volume: nextcloud-backup-persistent-volume
    - src_volume_claim: nextcloud-backup-volume-claim
    - src_backup_path: data
    - dst_volume: backups-persistent-volume
    - dst_volume_claim: backups-volume-claim
    - dst_backup_path: nextcloud/nextcloud-data-monthly-$(date +%Y%m%d)
  k8s:
    state: "{{ nextcloud_k8_state }}"
    namespace: "{{ backups_namespace }}"
    definition: "{{ lookup('template', '../common/templates/rsync-backup.yml.j2') }}"
NextCloud Data Backup Cron Jobs

And finally, probably the most important part is the deployment template.

I have created a Docker image that is just Alpine + rsync. Note that CronJob graduated to batch/v1 in Kubernetes 1.21, so on newer clusters you may need to adjust the apiVersion below:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "{{ cron_name }}"
spec:
  schedule: "{{ trigger_time }}"
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
          - name: "{{ src_volume }}"
            persistentVolumeClaim:
              claimName: "{{ src_volume_claim }}"
          - name: "{{ dst_volume }}"
            persistentVolumeClaim:
              claimName: "{{ dst_volume_claim }}"
          containers:
          - name: "{{ cron_name }}"
            image: jroddev/alpine-rsync
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - name: "{{ src_volume }}"
              mountPath: /backup-src
            - name: "{{ dst_volume }}"
              mountPath: /backup-dst
            args:
            - /bin/sh
            - -c
            - rsync -aAX --delete /backup-src/{{ src_backup_path }} /backup-dst/{{ dst_backup_path }}
          restartPolicy: OnFailure

When triggered, the CronJob will use rsync to sync the src directory to the dst directory. It will create, update, and delete directories and files so that the two match.
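
For example, the daily NextCloud job above ends up executing the command below. Note that without a trailing slash on the source path, rsync places the data directory itself inside the destination, and that the $(date +%Y%m%d) in the monthly destination path is expanded at run time because the command runs through /bin/sh -c.

# -a        archive mode: recurse and preserve permissions, times, symlinks, etc.
# -A        preserve ACLs
# -X        preserve extended attributes
# --delete  remove anything in dst that no longer exists in src
rsync -aAX --delete /backup-src/data /backup-dst/nextcloud/nextcloud-data-daily
Expanded Rsync Command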

For testing I use the Kubernetes Dashboard to trigger a manual run of the Cron Jobs.
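
Alternatively, you can trigger an ad-hoc run from the command line (the job name at the end is arbitrary):

microk8s.kubectl -n backups create job --from=cronjob/nextcloud-backup-daily nextcloud-backup-manual
Manually Trigger a CronJob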

Clean Up the Finished Containers

Every time a backup runs it leaves behind a completed Pod. I haven't automated the cleanup of these yet; I simply remove them by running this command:

microk8s.kubectl -n backups get pods | grep Completed | awk '{print $1}' | xargs microk8s.kubectl -n backups delete pod
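
One way to automate this would be the CronJob history limits. Adding something like the following to the spec in the deployment template (a sketch, I haven't wired it into my setup) tells Kubernetes to keep only the most recent finished Job, and therefore Pod, of each kind:

spec:
  schedule: "{{ trigger_time }}"
  # keep at most one successful and one failed Job (and its Pod) around
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
CronJob History Limits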