Velero is a solution for supporting Kubernetes cluster disaster recovery, data migration, and data protection by backing up Kubernetes cluster resources and persistent volumes to an externally supported storage backend, on demand or on a schedule.
The major functions include:
Back up Kubernetes resources and persistent volumes for supported storage providers.
Restore Kubernetes resources and persistent volumes for supported storage providers.
When backing up persistent volumes without a supported storage provider, Velero leverages restic as a provider-agnostic solution to back up these persistent volumes, with some known limitations.
Users can leverage these fundamental functions to achieve the following user stories:
Back up all Kubernetes cluster resources, then restore if any Kubernetes resources are lost.
Back up selected Kubernetes resources, then restore if the selected Kubernetes resources are lost.
Back up selected Kubernetes resources and persistent volumes, then restore if the selected Kubernetes resources are lost or data is lost.
Replicate or migrate a cluster for any purpose, for example replicating a production cluster to a development cluster for testing.
Velero consists of the following components:
A Velero server that runs on your Kubernetes cluster.
A restic DaemonSet deployed on each worker node that runs on your Kubernetes cluster (optional).
A command-line client that runs locally.
Velero doesn’t overwrite objects in-cluster if they already exist.
Velero supports a single set of credentials per provider. It’s not yet possible to use different credentials for different object storage locations for the same provider.
Volume snapshots are limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is located. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster’s volume is, the backup will fail.
It is not yet possible to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can set up multiple backups, manual or scheduled, that differ only in the storage locations.
Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. NFS and Ceph), but you only have a volume snapshot location configured for NFS, then Velero will only snapshot the NFS volumes.
Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the backup storage location selected by the user at backup creation time.
When performing cluster migration, the number of nodes in the new cluster should be equal to or greater than that of the original cluster.
For more information about storage and snapshot locations, refer to Velero: Backup Storage Locations and Volume Snapshot Locations
To successfully use Velero to back up and restore the Kubernetes cluster, you first need to install Helm. Refer to Section 3.1.2.1, “Installing Helm”.
Add SUSE helm chart repository URL:
helm repo add suse https://kubernetes-charts.suse.com
Velero uses object storage to store backups and associated artifacts.
It can also optionally create snapshots of persistent volumes and store them in object storage via restic if there is no supported volume snapshot provider.
Choose one of the object storage providers from the list below that fits your environment for backing up and restoring the Kubernetes cluster.
The object storage server checks access permissions, so it is vital to have credentials ready. Provide the credentials file credentials-velero to the Velero server, so that it has permission to read and write the backup data in the object storage.
Make sure the object storage is created before you install Velero. Otherwise, the Velero server won’t be able to start successfully. This is because the Velero server checks that the object storage exists and needs to have the permission to access it during server boot.
| Provider | Object Storage | Plugin Provider Repo |
|---|---|---|
| Amazon Web Services (AWS) | AWS S3 | |
| Google Cloud Platform (GCP) | Google Cloud Storage | |
| Microsoft Azure | Azure Blob Storage | |
AWS CLI
Install the aws CLI locally by following the official documentation.
AWS S3 bucket
Create an AWS S3 bucket to store backup data and restore data from the S3 bucket.
aws s3api create-bucket \
--bucket <BUCKET_NAME> \
--region <REGION> \
--create-bucket-configuration LocationConstraint=<REGION>
Create the credential file credentials-velero on the local machine
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
For details, please refer to Velero Plugin For AWS.
GCP CLIs
Install the gcloud and gsutil CLIs locally by following the official documentation.
Create GCS bucket
gsutil mb gs://<BUCKET_NAME>/
Create the service account
# View current config settings
gcloud config list
# Store the project value to PROJECT_ID environment variable
PROJECT_ID=$(gcloud config get-value project)
# Create a service account
gcloud iam service-accounts create velero \
--display-name "Velero service account"
# List all accounts
gcloud iam service-accounts list
# Set the SERVICE_ACCOUNT_EMAIL environment variable
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:Velero service account" \
--format 'value(email)')
# Attach policies to give velero the necessary permissions
ROLE_PERMISSIONS=(
compute.disks.get
compute.disks.create
compute.disks.createSnapshot
compute.snapshots.get
compute.snapshots.create
compute.snapshots.useReadOnly
compute.snapshots.delete
compute.zones.get
)
# Create iam roles
gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
# Bind iam policy to project
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://<BUCKET_NAME>
Create the credential file credentials-velero on the local machine
gcloud iam service-accounts keys create credentials-velero \
--iam-account $SERVICE_ACCOUNT_EMAIL
For details, please refer to Velero Plugin For GCP.
Azure CLI
Install the az CLI locally by following the official documentation.
Create a resource group for the backups storage account
Create the resource group named Velero_Backups. Change the resource group name and location as needed.
AZURE_RESOURCE_GROUP=Velero_Backups
az group create -n $AZURE_RESOURCE_GROUP --location <location>
Create the storage account
az storage account create \
--name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_RESOURCE_GROUP \
--sku Standard_GRS \
--encryption-services blob \
--https-only true \
--kind BlobStorage \
--access-tier Hot
Create a blob container
Create a blob container named velero. Change the name as needed.
BLOB_CONTAINER=velero
az storage container create -n $BLOB_CONTAINER --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID
Create the credential file credentials-velero on the local machine
# Obtain your Azure Account Subscription ID
AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
# Obtain your Azure Account Tenant ID
AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`
# Generate client secret
AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv`
# Generate client ID
AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
EOF
For details, please refer to Velero Plugin For Azure.
SUSE supports the SUSE Enterprise Storage 6 Ceph Object Gateway (radosgw) as an S3-compatible object storage provider.
Installation: Refer to the SES 6 Object Gateway Manual Installation for how to install it.
Create the credential file credentials-velero on the local machine
[default]
aws_access_key_id=<SES_STORAGE_ACCESS_KEY_ID>
aws_secret_access_key=<SES_STORAGE_SECRET_ACCESS_KEY>
Besides SUSE Enterprise Storage, there is an alternative open-source S3-compatible object storage provider, MinIO.
Prepare an external host and install Minio on the host
# Download Minio server
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
# Expose Minio access_key and secret_key
export MINIO_ACCESS_KEY=<access_key>
export MINIO_SECRET_KEY=<secret_key>
# Start Minio server
mkdir -p bucket
./minio server bucket &
# Download Minio client
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
# Setup Minio server
./mc config host add Velero http://localhost:9000 $MINIO_ACCESS_KEY $MINIO_SECRET_KEY
# Create bucket on Minio server
./mc mb -p velero/velero
Create the credential file credentials-velero on the local machine
[default]
aws_access_key_id=<MINIO_STORAGE_ACCESS_KEY_ID>
aws_secret_access_key=<MINIO_STORAGE_SECRET_ACCESS_KEY>
For the rest of the S3-compatible storage providers, refer to Velero: Supported Providers.
A volume snapshotter can snapshot persistent volumes if the volume driver supports volume snapshots and the corresponding API.
If a volume provider does not support volume snapshots or the volume snapshot API, or does not have a Velero-supported storage plugin, Velero leverages restic as a provider-agnostic solution to back up and restore these persistent volumes.
| Provider | Volume Snapshotter | Plugin Provider Repo |
|---|---|---|
| Amazon Web Services (AWS) | AWS EBS | |
For other snapshotter providers, refer to Velero: Supported Providers.
When restoring dex and gangway, Velero reports that the NodePort cannot be restored, since dex and gangway are already deployed by an addon and the same NodePort has been registered.
However, this does not break the dex and gangway service access from outside.
You can add a label to the services oidc-dex and oidc-gangway so that Velero skips backing them up.
kubectl label -n kube-system services/oidc-dex velero.io/exclude-from-backup=true
kubectl label -n kube-system services/oidc-gangway velero.io/exclude-from-backup=true
Use the Helm CLI to install the Velero deployment and restic (optional) if the storage does not provide a volume snapshot API.
If Velero is installed in a namespace other than the default namespace velero, set the velero client configuration to the namespace where Velero is installed.
velero client config set namespace=<NAMESPACE>
If the Kubernetes cluster does not use external storage, or the external storage handles volume snapshots by itself, Velero does not need to back up persistent volumes.
Backup To A Public Cloud Provider
Amazon Web Services (AWS)
The backup bucket name BUCKET_NAME. (The bucket name in AWS S3 object storage)
The backup region name REGION_NAME. (The region name for the AWS S3 object storage. For example, us-east-1 for AWS US East (N. Virginia))
The Velero installed namespace NAMESPACE, the default namespace is velero. (optional)
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set configuration.backupStorageLocation.config.region=<REGION_NAME> \
--set snapshotsEnabled=false \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-aws:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=aws \
--bucket=<SECONDARY_BUCKET_NAME> \
--config region=<REGION_NAME>
Google Cloud Platform (GCP)
The backup bucket name BUCKET_NAME. (The bucket name in Google Cloud Storage object storage)
The Velero installed namespace NAMESPACE, the default namespace is velero. (optional)
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=gcp \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set snapshotsEnabled=false \
--set initContainers[0].name=velero-plugin-for-gcp \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-gcp:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=gcp \
--bucket=<SECONDARY_BUCKET_NAME>
Microsoft Azure
The backup bucket name BUCKET_NAME. (The bucket name in Azure Blob Storage object storage)
The resource group name AZURE_RESOURCE_GROUP. (The Azure resource group name)
The storage account ID AZURE_STORAGE_ACCOUNT_ID. (The Azure storage account ID)
The Velero installed namespace NAMESPACE, the default namespace is velero. (optional)
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=azure \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set configuration.backupStorageLocation.config.resourceGroup=<AZURE_RESOURCE_GROUP> \
--set configuration.backupStorageLocation.config.storageAccount=<AZURE_STORAGE_ACCOUNT_ID> \
--set snapshotsEnabled=false \
--set initContainers[0].name=velero-plugin-for-microsoft-azure \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-microsoft-azure:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=azure \
--bucket=<SECONDARY_BUCKET_NAME> \
--config resourceGroup=<AZURE_RESOURCE_GROUP>,storageAccount=<AZURE_STORAGE_ACCOUNT_ID>
Backup To A S3-Compatible Provider
The backup bucket name BUCKET_NAME. (The bucket name in S3-compatible object storage)
The backup region name REGION_NAME. (The region name for the S3-compatible object storage. For example, radosgw, or default/secondary if you have HA backup servers.)
The S3-compatible object storage emulates the AWS S3 object storage. Therefore, the configuration for S3-compatible object storage requires the following additional settings.
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set configuration.backupStorageLocation.config.s3Url=<S3_COMPATIBLE_STORAGE_SERVER_URL> \
If the S3-compatible storage server is secured with a self-signed certificate, add the option below when running helm install and pass the --cacert flag when using the Velero CLI; refer to Velero: Self Signed Certificates. (optional)
--set configuration.backupStorageLocation.caCert=`cat <PATH_TO_THE_SELF_SIGNED_CA_CERTIFICATE> | base64 -w 0 && echo` \
Install Velero Deployment.
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set configuration.backupStorageLocation.config.region=<REGION_NAME> \
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set configuration.backupStorageLocation.config.s3Url=<S3_COMPATIBLE_STORAGE_SERVER_URL> \
--set snapshotsEnabled=false \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-aws:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=aws \
--bucket=<SECONDARY_BUCKET_NAME> \
--config region=secondary,s3ForcePathStyle=true,s3Url=<S3_COMPATIBLE_STORAGE_SERVER_URL>
For the case that the Kubernetes cluster uses external storage and the external storage does not handle volume snapshots by itself (either the external storage does not support volume snapshots, or the administrator wants Velero to take volume snapshots during cluster backup):
Backup To A Public Cloud Provider
Amazon Web Services (AWS)
The backup bucket name BUCKET_NAME. (The bucket name in AWS S3 object storage)
The backup region name REGION_NAME. (The region name for the AWS S3 object storage. For example, us-east-1 for AWS US East (N. Virginia))
The Velero installed namespace NAMESPACE, the default namespace is velero. (optional)
If the Kubernetes cluster runs in AWS and uses AWS EBS as storage, remove the line
--set deployRestic=true \
from the helm install command below to use the AWS EBS volume snapshot API for volume snapshots. Otherwise, restic is installed and the Velero server uses restic to take volume snapshots, storing the volume data in the AWS S3 bucket.
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set configuration.backupStorageLocation.config.region=<REGION_NAME> \
--set snapshotsEnabled=true \
--set deployRestic=true \
--set configuration.volumeSnapshotLocation.name=default \
--set configuration.volumeSnapshotLocation.config.region=<REGION_NAME> \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-aws:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=aws \
--bucket=<SECONDARY_BUCKET_NAME> \
--config region=<REGION_NAME>
Google Cloud Platform (GCP)
The backup bucket name BUCKET_NAME. (The bucket name in Google Cloud Storage object storage)
The Velero installed namespace NAMESPACE, the default namespace is velero. (optional)
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=gcp \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set snapshotsEnabled=true \
--set deployRestic=true \
--set configuration.volumeSnapshotLocation.name=default \
--set initContainers[0].name=velero-plugin-for-gcp \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-gcp:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=gcp \
--bucket=<SECONDARY_BUCKET_NAME>
Microsoft Azure
The backup bucket name BUCKET_NAME. (The bucket name in Azure Blob Storage object storage)
The resource group name AZURE_RESOURCE_GROUP. (The Azure resource group name)
The storage account ID AZURE_STORAGE_ACCOUNT_ID. (The Azure storage account ID)
The Velero installed namespace NAMESPACE, the default namespace is velero. (optional)
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=azure \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set configuration.backupStorageLocation.config.resourceGroup=<AZURE_RESOURCE_GROUP> \
--set configuration.backupStorageLocation.config.storageAccount=<AZURE_STORAGE_ACCOUNT_ID> \
--set snapshotsEnabled=true \
--set deployRestic=true \
--set configuration.volumeSnapshotLocation.name=default \
--set initContainers[0].name=velero-plugin-for-microsoft-azure \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-microsoft-azure:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=azure \
--bucket=<SECONDARY_BUCKET_NAME> \
--config resourceGroup=<AZURE_RESOURCE_GROUP>,storageAccount=<AZURE_STORAGE_ACCOUNT_ID>
Backup To A S3-Compatible Provider
The backup bucket name BUCKET_NAME. (The bucket name in S3-compatible object storage)
The backup region name REGION_NAME. (The region name for the S3-compatible object storage. For example, radosgw, or default/secondary if you have HA backup servers.)
The S3-compatible object storage emulates the AWS S3 object storage. Therefore, the configuration for S3-compatible object storage requires the following additional settings.
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set configuration.backupStorageLocation.config.s3Url=<S3_COMPATIBLE_STORAGE_SERVER_URL> \
If the S3-compatible storage server is secured with a self-signed certificate, add the option below when running helm install and pass the --cacert flag when using the Velero CLI; refer to Velero: Self Signed Certificates. (optional)
--set configuration.backupStorageLocation.caCert=`cat <PATH_TO_THE_SELF_SIGNED_CA_CERTIFICATE> | base64 -w 0 && echo` \
Install Velero Deployment and restic DaemonSet.
Most on-premises persistent volumes do not support the volume snapshot API or do not have community-supported snapshotter providers. Therefore, we have to deploy the restic DaemonSet.
helm install velero \
--namespace=<NAMESPACE> \
--create-namespace \
--set-file credentials.secretContents.cloud=credentials-velero \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=default \
--set configuration.backupStorageLocation.bucket=<BUCKET_NAME> \
--set configuration.backupStorageLocation.config.region=<REGION_NAME> \
--set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
--set configuration.backupStorageLocation.config.s3Url=<S3_COMPATIBLE_STORAGE_SERVER_URL> \
--set snapshotsEnabled=true \
--set deployRestic=true \
--set configuration.volumeSnapshotLocation.name=default \
--set configuration.volumeSnapshotLocation.config.region=minio \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=registry.suse.com/caasp/v4.5/velero-plugin-for-aws:1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
suse/velero
Then, it is suggested to create at least one additional backup location pointing to a different object storage server, to prevent a single point of failure on the object storage server.
velero backup-location create secondary \
--provider=aws \
--bucket=<SECONDARY_BUCKET_NAME> \
--config region=secondary,s3ForcePathStyle=true,s3Url=<S3_COMPATIBLE_STORAGE_SERVER_URL>
Annotate Persistent Volume (optional)
If the persistent volumes are backed by a supported volume snapshotter provider, skip this procedure.
However, if we deploy the restic DaemonSet and want to back up the persistent volumes with restic, we have to manually add the annotation backup.velero.io/backup-volumes=<VOLUME_NAME_1>,<VOLUME_NAME_2>,… to the pods that mount the volumes.
For example, we deploy an Elasticsearch cluster and want to backup the Elasticsearch cluster’s data. Add the annotation to the Elasticsearch cluster pods:
kubectl annotate pod/elasticsearch-master-0 backup.velero.io/backup-volumes=elasticsearch-master
kubectl annotate pod/elasticsearch-master-1 backup.velero.io/backup-volumes=elasticsearch-master
kubectl annotate pod/elasticsearch-master-2 backup.velero.io/backup-volumes=elasticsearch-master
Velero currently does not provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
To solve this, there is a community-provided controller, velero-pvc-watcher, which integrates with Prometheus to generate alerts for volumes that are not covered by the backup or backup-exclusion annotation.
Manual Backup
velero backup create <BACKUP_NAME>
Scheduled Backup
The schedule template in cron notation, using UTC time. The schedule can also be expressed using @every <duration> syntax.
The duration can be specified using a combination of seconds (s), minutes (m), and hours (h), for example: @every 2h30m.
# Create schedule template
# Create a backup every 6 hours
velero schedule create <SCHEDULE_NAME> --schedule="0 */6 * * *"
# Create a backup every 6 hours with the @every notation
velero schedule create <SCHEDULE_NAME> --schedule="@every 6h"
# Create a daily backup of the web namespace
velero schedule create <SCHEDULE_NAME> --schedule="@every 24h" --include-namespaces web
# Create a weekly backup, each living for 90 days (2160 hours)
velero schedule create <SCHEDULE_NAME> --schedule="@every 168h" --ttl 2160h0m0s
| Character Position | Character Period | Acceptable Values |
|---|---|---|
| 1 | Minute | 0-59 |
| 2 | Hour | 0-23 |
| 3 | Day of Month | 1-31 |
| 4 | Month | 1-12 |
| 5 | Day of Week | 0-6 (Sunday to Saturday) |
When creating multiple backups to different backup locations in quick succession, you might hit object storage server API rate limits. Currently, Velero does not have a mechanism to retry backups when a rate limit occurs. Consider staggering the creation times of the multiple backups, as shown below.
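A minimal sketch of two staggered schedules writing to the default and secondary locations defined earlier (the schedule names and the 30-minute offset are illustrative assumptions):
# Back up to the default location at the top of every 6th hour
velero schedule create backup-default --schedule="0 */6 * * *" --storage-location default
# Back up to the secondary location 30 minutes later, to avoid hitting the API rate limit
velero schedule create backup-secondary --schedule="30 */6 * * *" --storage-location secondary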
Optional Flags
Granularity
Without passing extra flags to velero backup create, Velero will backup the whole Kubernetes cluster.
Namespace
Pass flag --include-namespaces or --exclude-namespaces to specify which namespaces to include/exclude when backing up.
For example:
# Create a backup including the nginx and default namespaces
velero backup create backup-1 --include-namespaces nginx,default
# Create a backup excluding the kube-system and default namespaces
velero backup create backup-1 --exclude-namespaces kube-system,default
Resources
Pass flag --include-resources or --exclude-resources to specify which resources to include/exclude when backing up.
For example:
# Create a backup including storageclass resource only
velero backup create backup-1 --include-resources storageclasses
Use kubectl api-resources to list all API resources on the server.
Label Selector
Pass --selector to only back up resources matching the label selector.
# Create a backup for the elasticsearch cluster only
velero backup create backup-1 --selector app=elasticsearch-master
Location
Pass --storage-location to specify where to store the backup.
For example, suppose we have HA object storage servers called default and secondary respectively.
# Create a backup to the default storage server
velero backup create backup2default --storage-location default
# Create a backup to the secondary storage server
velero backup create backup2secondary --storage-location secondary
Garbage Collection
Pass --ttl to specify how long the backup should be kept. After the specified time the backup will be deleted.
The default time for a backup before deletion is 720 hours (30 days).
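For example (the backup name is illustrative):
# Create a backup that is deleted after 72 hours instead of the default 720 hours
velero backup create backup-1 --ttl 72h0m0s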
Exclude Specific Items from Backup
You can exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
List Backups
velero backup get
Describe Backups
velero backup describe <BACKUP_NAME_1> <BACKUP_NAME_2> <BACKUP_NAME_3>
Retrieve Backup Logs
velero backup logs <BACKUP_NAME>
Manual Restore
velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME>
For example:
# Create a restore named "restore-1" from backup "backup-1"
velero restore create restore-1 --from-backup backup-1
# Create a restore with a default name ("backup-1-<timestamp>") from backup "backup-1"
velero restore create --from-backup backup-1
Restore From a Scheduled Backup
velero restore create <RESTORE_NAME> --from-schedule <SCHEDULE_NAME>
For example:
# Create a restore from the latest successful backup triggered by schedule "schedule-1"
velero restore create --from-schedule schedule-1
# Create a restore from the latest successful OR partially-failed backup triggered by schedule "schedule-1"
velero restore create --from-schedule schedule-1 --allow-partially-failed
Optional Flags
Granularity
Without passing extra flags to velero restore create, Velero will restore whole resources from the backup or the schedule.
Namespace
Pass flag --include-namespaces or --exclude-namespaces to velero restore create to specify which namespaces to include/exclude when restoring.
For example:
# Create a restore including the nginx and default namespaces
velero restore create --from-backup backup-1 --include-namespaces nginx,default
# Create a restore excluding the kube-system and default namespaces
velero restore create --from-backup backup-1 --exclude-namespaces kube-system,default
Resources
Pass flag --include-resources or --exclude-resources to velero restore create to specify which resources to include/exclude when restoring.
For example:
# create a restore for only persistentvolumeclaims and persistentvolumes within a backup
velero restore create --from-backup backup-1 --include-resources persistentvolumeclaims,persistentvolumes
Use kubectl api-resources to list all API resources on the server.
Label Selector
Pass --selector to only restore the resources matching the label selector.
For example:
# create a restore for only the elasticsearch cluster within a backup
velero restore create --from-backup backup-1 --selector app=elasticsearch-master
Retrieve restores
velero restore get
Describe restores
velero restore describe <RESTORE_NAME_1> <RESTORE_NAME_2> <RESTORE_NAME_3>
Retrieve restore logs
velero restore logs <RESTORE_NAME>
Use the scheduled backup function for periodic backups. When the Kubernetes cluster runs into an unexpected state, recover from the most recent scheduled backup.
Backup
Run the scheduled backup; this creates a backup file with the name <SCHEDULE_NAME>-<TIMESTAMP>.
velero schedule create <SCHEDULE_NAME> --schedule="@daily"
Restore
When a disaster happens, make sure the Velero server and the restic DaemonSet (optional) exist. If not, reinstall them from the Helm chart.
Update the backup storage location to read-only mode (this prevents backup files from being created or deleted in the backup storage location during the restore process):
kubectl patch backupstoragelocation <STORAGE_LOCATION_NAME> \
--namespace <NAMESPACE> \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
Create a restore from the most recent backup file:
velero restore create --from-backup <SCHEDULE_NAME>-<TIMESTAMP>
After the restore finishes, change the backup storage location back to read-write mode:
kubectl patch backupstoragelocation <STORAGE_LOCATION_NAME> \
--namespace <NAMESPACE> \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
You can migrate the Kubernetes cluster from cluster 1 to cluster 2, as long as you point each cluster's Velero instance to the same external object storage location.
Velero does not support the migration of persistent volumes across public cloud providers.
(At cluster 1) Backup the entire Kubernetes cluster manually:
velero backup create <BACKUP_NAME>
(At cluster 2) Prepare a Kubernetes cluster deployed by skuba:
(At cluster 2) Helm install Velero and make sure the backup-location and snapshot-location point to the same location as cluster 1:
velero backup-location get
velero snapshot-location get
The default sync interval is 1 minute. You can change the interval with the flag --backup-sync-period when creating a backup location.
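For example, a minimal sketch reusing the secondary location from the earlier examples (the 30-minute interval is an illustrative value):
# Create a backup location whose backups are synced from object storage every 30 minutes
velero backup-location create secondary \
--provider=aws \
--bucket=<SECONDARY_BUCKET_NAME> \
--backup-sync-period 30m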
(At cluster 2) Make sure the cluster 1 backup resources have been synced from the external object storage server:
velero backup get <BACKUP_NAME>
velero backup describe <BACKUP_NAME>
(At cluster 2) Restore the cluster from the backup file:
velero restore create --from-backup <BACKUP_NAME>
(At cluster 2) Verify the cluster is behaving correctly:
velero restore get
velero restore describe <RESTORE_NAME>
velero restore logs <RESTORE_NAME>
(At cluster 2) Since Velero does not overwrite objects in-cluster if they already exist, a manual check of all addon configurations is recommended after the cluster is restored:
Check dex configuration:
# Download dex.yaml
kubectl -n kube-system get configmap oidc-dex-config -o yaml > oidc-dex-config.yaml
# Edit oidc-dex-config.yaml to desired
vim oidc-dex-config.yaml
# Apply new oidc-dex-config.yaml
kubectl apply -f oidc-dex-config.yaml --force
# Restart oidc-dex deployment
kubectl rollout restart deployment/oidc-dex -n kube-system
Check gangway configuration:
# Download gangway.yaml
kubectl -n kube-system get configmap oidc-gangway-config -o yaml > oidc-gangway-config.yaml
# Edit oidc-gangway-config.yaml to desired
vim oidc-gangway-config.yaml
# Apply new oidc-gangway-config.yaml
kubectl apply -f oidc-gangway-config.yaml --force
# Restart oidc-gangway deployment
kubectl rollout restart deployment/oidc-gangway -n kube-system
Check whether kured automatic reboots are disabled:
kubectl get daemonset kured -o yaml
Check that psp is what you wish it to be:
kubectl get psp suse.caasp.psp.privileged -o yaml
kubectl get clusterrole suse:caasp:psp:privileged -o yaml
kubectl get rolebinding suse:caasp:psp:privileged -o yaml
kubectl get psp suse.caasp.psp.unprivileged -o yaml
kubectl get clusterrole suse:caasp:psp:unprivileged -o yaml
kubectl get clusterrolebinding suse:caasp:psp:default -o yaml
Remove the Velero server deployment and the restic DaemonSet if they exist.
Then, delete Velero custom resource definitions (CRDs).
helm uninstall velero -n <NAMESPACE>
kubectl delete crds -l app.kubernetes.io/name=velero