`en/backup-restore-cr.md`

This section introduces the fields in the `Backup` CR.
* `.spec.local.volume`: the persistent volume configuration.
* `.spec.local.volumeMount`: the persistent volume mount configuration.
## CompactBackup CR fields

For TiDB v9.0.0 and later versions, you can use `CompactBackup` to accelerate PITR (Point-in-time recovery). To compact log backup data into structured SST files, you can create a custom `CompactBackup` CR object to define a backup task. The following introduces the fields in the `CompactBackup` CR:
* `.spec.startTs`: the start timestamp for log compaction backup.

* `.spec.endTs`: the end timestamp for log compaction backup.

* `.spec.concurrency`: the maximum number of concurrent log compaction tasks. The default value is `4`.

* `.spec.maxRetryTimes`: the maximum number of retries for failed compaction tasks. The default value is `6`.

* `.spec.toolImage`: the tool image used by `CompactBackup`. BR is the only tool image used in `CompactBackup`. When using BR for backup, you can specify the BR version with this field:

    - If not specified or left empty, the `pingcap/br:${tikv_version}` image is used for backup by default.

    - If a BR version is specified, such as `.spec.toolImage: pingcap/br:v9.0.0`, the image of the specified version is used for backup.

    - If an image is specified without a version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
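The image resolution rules above can be sketched as follows (a hypothetical helper for illustration only, not TiDB Operator source code):

```python
# Illustrative sketch of the .spec.toolImage resolution rules described above.
# resolve_br_image is a hypothetical name, not part of TiDB Operator.
def resolve_br_image(tool_image: str, tikv_version: str) -> str:
    if not tool_image:                      # unset or empty: default BR image
        return f"pingcap/br:{tikv_version}"
    last = tool_image.rsplit("/", 1)[-1]
    if ":" in last:                         # an explicit tag wins
        return tool_image
    return f"{tool_image}:{tikv_version}"   # untagged image: append TiKV version

assert resolve_br_image("", "v9.0.0") == "pingcap/br:v9.0.0"
assert resolve_br_image("pingcap/br:v9.0.0", "v9.0.0") == "pingcap/br:v9.0.0"
assert resolve_br_image("private/registry/br", "v9.0.0") == "private/registry/br:v9.0.0"
```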
* `.spec.env`: the environment variables for the Pod that runs the compaction task.

* `.spec.affinity`: the affinity configuration for the Pod that runs the compaction task. For details on affinity, refer to [Affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).

* `.spec.tolerations`: specifies that the Pod that runs the compaction task can schedule onto nodes with matching [taints](https://kubernetes.io/docs/reference/glossary/?all=true#term-taint). For details on taints and tolerations, refer to [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).

* `.spec.podSecurityContext`: the security context configuration for the Pod that runs the compaction task, which allows the Pod to run as a non-root user. For details on `podSecurityContext`, refer to [Run Containers as a Non-root User](containers-run-as-non-root-user.md).

* `.spec.priorityClassName`: the name of the priority class for the Pod that runs the compaction task, which sets priority for the Pod. For details on priority classes, refer to [Pod Priority and Preemption](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/).

* `.spec.imagePullSecrets`: the [imagePullSecrets](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) for the Pod that runs the compaction task.

* `.spec.serviceAccount`: the name of the ServiceAccount used for the compaction task.

* `.spec.useKMS`: whether to use AWS-KMS to decrypt the S3 storage key used for the backup.

* `.spec.br`: BR-related configuration. For more information, refer to [BR fields](#br-fields).

* `.spec.s3`: S3-related configuration. For more information, refer to [S3 storage fields](#s3-storage-fields).

* `.spec.gcs`: GCS-related configuration. For more information, refer to [GCS fields](#gcs-fields).

* `.spec.azblob`: Azure Blob Storage-related configuration. For more information, refer to [Azure Blob Storage fields](#azure-blob-storage-fields).
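Putting the fields together, a minimal `CompactBackup` manifest might look like the following sketch (the metadata and placeholder values are illustrative; the storage section must point at the log backup you want to compact):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: CompactBackup
metadata:
  name: example-compact-backup
spec:
  startTs: "<start-tso>"
  endTs: "<end-tso>"
  concurrency: 4
  br:
    cluster: example-cluster
  # Use the same storage settings (s3/gcs/azblob) as the log backup.
  s3: {}
```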
## Restore CR fields

To restore data to a TiDB cluster on Kubernetes, you can create a `Restore` CR object. For the detailed restore process, refer to the documents listed in [Restore data](backup-restore-overview.md#restore-data).
* `backupTemplate`: the configuration of the snapshot backup. Specifies the configuration related to the cluster and remote storage of the snapshot backup, which is the same as the `spec` configuration of [the `Backup` CR](#backup-cr-fields).

* `logBackupTemplate`: the configuration of the log backup. Specifies the configuration related to the cluster and remote storage of the log backup, which is the same as the `spec` configuration of [the `Backup` CR](#backup-cr-fields). The log backup is created and deleted along with `backupSchedule` and recycled according to `.spec.maxReservedTime`. The log backup name is saved in `status.logBackup`.

* `compactBackupTemplate`: the configuration template of the log compaction backup. The fields are the same as those in the `spec` configuration of [the `CompactBackup` CR](#compactbackup-cr-fields). The compaction backup is created and deleted along with `backupSchedule`. The log backup names are stored in `status.logBackup`. The storage settings of the compaction backup should be the same as those of `logBackupTemplate` in the same `backupSchedule`.
> **Note:**
>
* `.spec.maxBackups`: a backup retention policy, which determines the maximum number of backup files to be retained. When the number of backup files exceeds this value, outdated backup files are deleted. If you set this field to `0`, all backup items are retained.

* `.spec.maxReservedTime`: a time-based backup retention policy. For example, if you set this field to `24h`, only backup files within the recent 24 hours are retained, and all backup files older than this value are deleted. For the time format, refer to [`func ParseDuration`](https://golang.org/pkg/time/#ParseDuration). If you set both `.spec.maxBackups` and `.spec.maxReservedTime`, the latter takes effect.
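For illustration, the accepted duration format can be approximated by this simplified parser (an assumption-laden sketch, not Operator code; real Go durations also allow units such as `ms` and fractional values):

```python
import re

# Simplified sketch of the Go-style duration strings accepted by
# .spec.maxReservedTime, e.g. "24h" or "72h30m". Hypothetical helper.
UNITS = {"h": 3600, "m": 60, "s": 1}

def parse_duration_seconds(s: str) -> int:
    parts = re.findall(r"(\d+)([hms])", s)
    if not parts or "".join(n + u for n, u in parts) != s:
        raise ValueError(f"invalid duration: {s!r}")
    return sum(int(n) * UNITS[u] for n, u in parts)

assert parse_duration_seconds("24h") == 86400        # one day
assert parse_duration_seconds("72h30m") == 261000    # 3 days + 30 minutes
```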
* `.spec.schedule`: the time scheduling format of Cron. Refer to [Cron](https://en.wikipedia.org/wiki/Cron) for details.

* `.spec.compactInterval`: the time interval for triggering a new log compaction task.

* `.spec.pause`: `false` by default. If this field is set to `true`, scheduling is paused, and the backup operation is not performed even when the scheduling time point is reached. During the pause, backup garbage collection runs normally. If you change `true` to `false`, the scheduled snapshot backup process restarts. Because log backup currently does not support pausing, this configuration does not take effect for log backup.
### Compact log backup

For TiDB v9.0.0 and later versions, you can use a `CompactBackup` CR to compact log backup data into SST format, accelerating downstream PITR (Point-in-time recovery).

This section explains how to compact log backups, based on the log backup example in the previous sections.
1. In the `backup-test` namespace, create a `CompactBackup` CR named `demo1-compact-backup`.

    ```shell
    kubectl apply -f compact-backup-demo1.yaml
    ```

    The content of `compact-backup-demo1.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: CompactBackup
    metadata:
      name: demo1-compact-backup
      namespace: backup-test
    spec:
      startTs: "***"
      endTs: "***"
      concurrency: 8
      maxRetryTimes: 2
      br:
        cluster: demo1
        clusterNamespace: test1
        sendCredToTikv: true
      s3:
        provider: aws
        secretName: s3-secret
        region: us-west-1
        bucket: my-bucket
        prefix: my-log-backup-folder
    ```
    The `startTs` and `endTs` fields specify the time range of the logs to be compacted by `demo1-compact-backup`. Any log that contains at least one write within this time range will be included in the compaction process. As a result, the final compacted data might include data written outside this range.

    The `s3` settings should be the same as the storage settings of the log backup to be compacted. `CompactBackup` reads log files from the corresponding location and compacts them.
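`startTs` and `endTs` are TiDB TSO (timestamp oracle) values. As a rough sketch (the helper below is hypothetical, not part of TiDB Operator), a wall-clock time maps to a TSO by packing the physical milliseconds into the bits above the 18-bit logical counter:

```python
from datetime import datetime, timezone

def to_tso(dt: datetime) -> int:
    """Pack a wall-clock time into a TiDB TSO; the 18-bit logical part is left as 0."""
    physical_ms = int(dt.timestamp() * 1000)
    return physical_ms << 18

# The physical milliseconds round-trip out of the high bits.
ts = to_tso(datetime(2025, 1, 1, tzinfo=timezone.utc))
assert ts >> 18 == 1735689600000
```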
#### View the status of log backup compaction

After creating the `CompactBackup` CR, TiDB Operator automatically starts compacting the log backup. You can check the backup status using the following command:

```shell
kubectl get cpbk -n backup-test
```
From the output, you can find the status of the `CompactBackup` CR named `demo1-compact-backup`.
## Integrated management of scheduled snapshot backup, log backup, and compact log backup
To accelerate downstream recovery, you can enable `CompactBackup` in the `BackupSchedule` CR. This feature periodically compacts log backup files in remote storage. You must enable log backup before using log backup compaction. This section extends the configuration from the previous section.
### Prerequisites: Prepare for a scheduled snapshot backup

The steps to prepare for a scheduled snapshot backup are the same as those in [Prepare for an ad-hoc backup](#prerequisites-prepare-for-an-ad-hoc-backup).

### Create `BackupSchedule`

1. Create a `BackupSchedule` CR named `integrated-backup-schedule-s3` in the `backup-test` namespace.
    The content of `integrated-backup-schedule-s3.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: integrated-backup-schedule-s3
      namespace: backup-test
    spec:
      maxReservedTime: "3h"
      schedule: "* */2 * * *"
      compactInterval: "1h"
      backupTemplate:
        backupType: full
        cleanPolicy: Delete
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder-snapshot
      logBackupTemplate:
        backupMode: log
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder-log
      compactBackupTemplate:
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        s3:
          provider: aws
          secretName: s3-secret
          region: us-west-1
          bucket: my-bucket
          prefix: my-folder-log
    ```
    In the preceding example of `integrated-backup-schedule-s3.yaml`, the `backupSchedule` configuration is based on the previous section, with the following additions for `compactBackup`:

    * Added the `BackupSchedule.spec.compactInterval` field to specify the interval for log backup compaction. It is recommended not to exceed the interval of scheduled snapshot backups, and to keep it between one-third and one-half of the snapshot backup interval.

    * Added the `BackupSchedule.spec.compactBackupTemplate` field. Ensure that the `BackupSchedule.spec.compactBackupTemplate.s3` configuration matches the `BackupSchedule.spec.logBackupTemplate.s3` configuration.

    For the field descriptions of `backupSchedule`, refer to [BackupSchedule CR fields](backup-restore-cr.md#backupschedule-cr-fields).
2. After creating `backupSchedule`, use the following command to check the backup status:

    ```shell
    kubectl get bks -n backup-test -o wide
    ```

    A compact log backup task is created together with `backupSchedule`. You can check the `CompactBackup` CR using the following command:

    ```shell
    kubectl get cpbk -n backup-test
    ```
## Delete the backup CR

If you no longer need the backup CR, refer to [Delete the Backup CR](backup-restore-overview.md#delete-the-backup-cr).
### Compact log backup

For TiDB v9.0.0 and later versions, you can use a `CompactBackup` CR to compact log backup data into SST format, accelerating downstream PITR (Point-in-time recovery).

This section explains how to compact log backups, based on the log backup example in the previous sections.
1. In the `backup-test` namespace, create a `CompactBackup` CR named `demo1-compact-backup`.

    ```shell
    kubectl apply -f compact-backup-demo1.yaml
    ```

    The content of `compact-backup-demo1.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: CompactBackup
    metadata:
      name: demo1-compact-backup
      namespace: backup-test
    spec:
      startTs: "***"
      endTs: "***"
      concurrency: 8
      maxRetryTimes: 2
      br:
        cluster: demo1
        clusterNamespace: test1
        sendCredToTikv: true
      azblob:
        secretName: azblob-secret
        container: my-container
        prefix: my-log-backup-folder
    ```
    The `startTs` and `endTs` fields specify the time range of the logs to be compacted by `demo1-compact-backup`. Any log that contains at least one write within this time range will be included in the compaction process. As a result, the final compacted data might include data written outside this range.

    The `azblob` settings should be the same as the storage settings of the log backup to be compacted. `CompactBackup` reads log files from the corresponding location and compacts them.
#### View the status of log backup compaction

After creating the `CompactBackup` CR, TiDB Operator automatically starts compacting the log backup. You can check the backup status using the following command:

```shell
kubectl get cpbk -n backup-test
```

From the output, you can find the status of the `CompactBackup` CR named `demo1-compact-backup`.
## Integrated management of scheduled snapshot backup, log backup, and compact log backup
To accelerate downstream recovery, you can enable `CompactBackup` in the `BackupSchedule` CR. This feature periodically compacts log backup files in remote storage. You must enable log backup before using log backup compaction. This section extends the configuration from the previous section.
### Prerequisites: Prepare for a scheduled snapshot backup

The steps to prepare for a scheduled snapshot backup are the same as those in [Prepare for an ad-hoc backup](#prerequisites-prepare-an-ad-hoc-backup-environment).

### Create `BackupSchedule`

1. Create a `BackupSchedule` CR named `integrated-backup-schedule-azblob` in the `backup-test` namespace.
    The content of `integrated-backup-schedule-azblob.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: integrated-backup-schedule-azblob
      namespace: backup-test
    spec:
      maxReservedTime: "3h"
      schedule: "* */2 * * *"
      compactInterval: "1h"
      backupTemplate:
        backupType: full
        cleanPolicy: Delete
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        azblob:
          secretName: azblob-secret
          container: my-container
          prefix: schedule-backup-folder-snapshot
          #accessTier: Hot
      logBackupTemplate:
        backupMode: log
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        azblob:
          secretName: azblob-secret
          container: my-container
          prefix: schedule-backup-folder-log
          #accessTier: Hot
      compactBackupTemplate:
        br:
          cluster: demo1
          clusterNamespace: test1
          sendCredToTikv: true
        azblob:
          secretName: azblob-secret
          container: my-container
          prefix: schedule-backup-folder-log
          #accessTier: Hot
    ```
    In the preceding example of `integrated-backup-schedule-azblob.yaml`, the `backupSchedule` configuration is based on the previous section, with the following additions for `compactBackup`:

    * Added the `BackupSchedule.spec.compactInterval` field to specify the time interval for log backup compaction. It is recommended not to exceed the interval of scheduled snapshot backups, and to keep it between one-third and one-half of the snapshot backup interval.

    * Added the `BackupSchedule.spec.compactBackupTemplate` field. Ensure that the `BackupSchedule.spec.compactBackupTemplate.azblob` configuration matches the `BackupSchedule.spec.logBackupTemplate.azblob` configuration.

    For the field descriptions of `backupSchedule`, refer to [BackupSchedule CR fields](backup-restore-cr.md#backupschedule-cr-fields).
2. After creating `backupSchedule`, use the following command to check the backup status:

    ```shell
    kubectl get bks -n backup-test -o wide
    ```

    A compact log backup task is created together with `backupSchedule`. You can check the `CompactBackup` CR using the following command:

    ```shell
    kubectl get cpbk -n backup-test
    ```
## Delete the backup CR

If you no longer need the backup CR, you can delete it by referring to [Delete the Backup CR](backup-restore-overview.md#delete-the-backup-cr).