
List:       bacula-users
Subject:    [Bacula-users] design&configuration challenges - file-cloud backup
From:       Žiga Žvan <ziga.zvan () cetrtapot ! si>
Date:       2020-08-05 8:52:50
Message-ID: 959502e2-3e82-2f5e-f19c-47a890b901f5 () cetrtapot ! si

Dear all,
I have been testing Bacula (9.6.5) and I must say I'm quite happy with the 
results (e.g. compression, encryption, configurability). However, I have 
some configuration/design questions I hope you can help me with.

Regarding the job schedule, I would like to:
- create a daily incremental backup (retention: 1 week)
- create a weekly full backup (retention: 1 month)
- create a monthly full backup (retention: 1 year)

I am using the dummy cloud driver that writes to local file storage. A volume 
is a directory containing file parts. I would like to have separate 
volumes/pools for each client, and to delete the data on disk after the 
retention period expires. If possible, I would like to delete just the 
file parts belonging to expired backups.

Questions:
a) At the moment I'm using two backup job definitions per client and a 
central schedule definition for all my clients. I have noticed that my 
incremental job gets promoted to Full after the monthly backup ("No prior 
Full backup Job record found"), because the monthly backup is a separate job 
and Bacula searches for Full backups within the same job. Could you 
please suggest a better configuration? If possible, I would like to keep the 
central schedule definition (if I set pools via overrides in a Schedule 
resource, I would need to define a schedule per client).
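For reference, the kind of layout I was considering (untested sketch; it sacrifices the central schedule, which is exactly what I would like to avoid): one backup job per client, with the monthly full steered into the long-retention pool by a Pool override on the Run directive, so that all fulls and incrementals belong to the same job and the incrementals can find their prior Full:

```conf
# Sketch only: per-client schedule with Pool overrides on the Run lines.
Schedule {
    Name = "oradev02-cycle"
    Run = Full Pool=oradev02-monthly-pool 1st fri at 23:05
    Run = Full Pool=oradev02-weekly-pool 2nd-5th fri at 23:05
    Run = Incremental mon-thu at 23:05
}

Job {
    Name = "oradev02-backup"
    JobDefs = "oradev02-job"
    Schedule = "oradev02-cycle"
    Incremental Backup Pool = oradev02-daily-pool
}
```

If there is a way to get the same effect while keeping one shared Schedule resource, that would be my preference.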

b) I would like to delete expired backups on disk (and in the catalog as 
well). At the moment I'm using one volume per client in each of the 
daily/weekly/monthly pools. Within a volume there are file parts belonging 
to expired backups (e.g. part.1-23 in the output below). I have tried to 
solve this with purge/prune scripts in my BackupCatalog job (as suggested 
in the whitepapers), but the data does not get deleted. Is there any way to 
delete individual file parts? Should I create separate volumes after the 
retention period? Please suggest a better configuration.
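My current understanding (from the manual; please correct me if I am wrong) is that purge/truncate operates on whole volumes, not on individual file parts, so parts can only disappear once every job on the volume has expired. What I am now considering is one volume per job, so that a volume expires as a whole and can then be truncated (sketch; directive placement and console syntax from memory):

```conf
Pool {
    Name = oradev02-daily-pool
    Pool Type = Backup
    Maximum Volume Jobs = 1        # one volume per job: the whole volume expires at once
    Action On Purge = Truncate     # allow truncation of purged volumes
    Volume Retention = 5 days
    # ... other directives as before
}
```

and then in bconsole, after pruning:

```conf
purge volume action=truncate allpools storage=FSOciCloudStandard
```

Is that the intended approach for cloud volumes?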

c) Do I need a restore job for each client? I would just like to restore a 
backup on the same client, defaulting to the /restore folder. When I use 
the bconsole "restore all" command, the wizard asks me all the questions 
(e.g. 5: last backup for a client, then which client, fileset...), but at 
the end it asks for a restore job, which overrides everything defined 
before (e.g. the client).
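What I was hoping for is a single generic restore job shared by all clients, on the assumption that the wizard's selections take precedence over the job's defaults and anything else can be adjusted with "mod" before running the job (sketch; the job name is mine):

```conf
## One restore job for all clients (sketch) ##
Job {
    Name = "generic-restore"
    Type = Restore
    Client = oradev02.kranj.cetrtapot.si-fd   # default only; the wizard's selection should win
    FileSet = "oradev02-fileset"              # required by the parser, not used for restores
    Pool = oradev02-weekly-pool               # required by the parser, not used for restores
    Storage = FSOciCloudStandard
    Messages = Standard
    Where = /restore                          # default restore location
}
```

Is that how it is meant to work, or does the restore job's Client really override the wizard?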

d) At the moment I have not implemented autochanger functionality. Clients 
compress/encrypt the data and send it to the Bacula server, which writes it 
to one central storage system. Jobs are processed sequentially (one at a 
time). Would you expect any significant performance gain if I implemented 
an autochanger so that jobs run simultaneously?
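As I understand it, running jobs in parallel also needs Maximum Concurrent Jobs raised in the Director, Storage and Client resources, and (for disk/cloud storage) several devices so that jobs do not interleave on one volume. Something like the following is what I had in mind (sketch; the values are guesses):

```conf
# bacula-dir.conf (sketch)
Director {
    # ... existing directives ...
    Maximum Concurrent Jobs = 10
}

Storage {
    Name = FSOciCloudStandard
    # ... existing directives ...
    Maximum Concurrent Jobs = 10   # per-client pools keep jobs on separate volumes
}
```

Would concurrency alone help here, or is an autochanger with multiple devices required to see a real gain?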

The relevant parts of the configuration are included below.

Looking forward to moving into production...
Kind regards,
Ziga Zvan


*Volume example* (file parts 1-23 should be deleted)*:*
[root@bacula cetrtapot-daily-vol-0022]# ls -ltr
total 0
-rw-r--r--. 1 bacula disk             262 Jul 28 23:05 part.1
-rw-r--r--. 1 bacula disk 999935988 Jul 28 23:06 part.2
-rw-r--r--. 1 bacula disk 999935992 Jul 28 23:07 part.3
-rw-r--r--. 1 bacula disk 999936000 Jul 28 23:08 part.4
-rw-r--r--. 1 bacula disk 999935981 Jul 28 23:09 part.5
-rw-r--r--. 1 bacula disk 328795126 Jul 28 23:10 part.6
-rw-r--r--. 1 bacula disk 999935988 Jul 29 23:09 part.7
-rw-r--r--. 1 bacula disk 999935995 Jul 29 23:10 part.8
-rw-r--r--. 1 bacula disk 999935981 Jul 29 23:11 part.9
-rw-r--r--. 1 bacula disk 999935992 Jul 29 23:12 part.10
-rw-r--r--. 1 bacula disk 453070890 Jul 29 23:12 part.11
-rw-r--r--. 1 bacula disk 999935995 Jul 30 23:09 part.12
-rw-r--r--. 1 bacula disk 999935993 Jul 30 23:10 part.13
-rw-r--r--. 1 bacula disk 999936000 Jul 30 23:11 part.14
-rw-r--r--. 1 bacula disk 999935984 Jul 30 23:12 part.15
-rw-r--r--. 1 bacula disk 580090514 Jul 30 23:13 part.16
-rw-r--r--. 1 bacula disk 999935994 Aug   3 23:09 part.17
-rw-r--r--. 1 bacula disk 999935936 Aug   3 23:12 part.18
-rw-r--r--. 1 bacula disk 999935971 Aug   3 23:13 part.19
-rw-r--r--. 1 bacula disk 999935984 Aug   3 23:14 part.20
-rw-r--r--. 1 bacula disk 999935973 Aug   3 23:15 part.21
-rw-r--r--. 1 bacula disk 999935977 Aug   3 23:17 part.22
-rw-r--r--. 1 bacula disk 108461297 Aug   3 23:17 part.23
-rw-r--r--. 1 bacula disk 999935974 Aug   4 23:09 part.24
-rw-r--r--. 1 bacula disk 999935987 Aug   4 23:10 part.25
-rw-r--r--. 1 bacula disk 999935971 Aug   4 23:11 part.26
-rw-r--r--. 1 bacula disk 999936000 Aug   4 23:12 part.27
-rw-r--r--. 1 bacula disk 398437855 Aug   4 23:12 part.28

*Cache (deleted as expected):*

[root@bacula cetrtapot-daily-vol-0022]# ls -ltr 
/mnt/backup_bacula/cloudcache/cetrtapot-daily-vol-0022/
total 4
-rw-r-----. 1 bacula disk 262 Jul 28 23:05 part.1

*Relevant part of central configuration*

# Backup the catalog database (after the nightly save)
Job {
    Name = "BackupCatalog"
    JobDefs = "CatalogJob"
    Level = Full
    FileSet="Catalog"
    Schedule = "WeeklyCycleAfterBackup"
    RunBeforeJob = "/opt/bacula/scripts/make_catalog_backup.pl MyCatalog"
    # This deletes the copy of the catalog
    RunAfterJob   = "/opt/bacula/scripts/delete_catalog_backup"
    #Prune
    RunScript {
        Console = "prune expired volume yes"
        RunsWhen = Before
        RunsOnClient= No
    }
    #Purge
    RunScript {
        RunsWhen=After
        RunsOnClient=No
        Console = "purge volume action=all allpools storage=FSOciCloudStandard"
    }
    Write Bootstrap = "/opt/bacula/working/%n.bsr"
    Priority = 11                                     # run after main backup
}

Schedule {
    Name = "WeeklyCycle"
    Run = Full 2nd-5th fri at 23:05
    Run = Incremental mon-thu at 23:05
}

Schedule {
    Name = "MonthlyFull"
    Run = Full 1st fri at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
    Name = "WeeklyCycleAfterBackup"
    Run = Full sun-sat at 23:10
}



*Configuration specific to each client*
Client {
    Name = oradev02.kranj.cetrtapot.si-fd
    Address = oradev02.kranj.cetrtapot.si       #IP or fqdn
    FDPort = 9102
    Catalog = MyCatalog
    Password = "something"                  # password for FileDaemon; must match the client side
    File Retention = 60 days
    Job Retention = 6 months                # six months
    AutoPrune = yes                         # Prune expired Jobs/Files
}

##Job for backup ##
JobDefs {
    Name = "oradev02-job"
    Type = Backup
    Level = Incremental
    Client = oradev02.kranj.cetrtapot.si-fd # client name must match bacula-fd.conf on the client side
    FileSet = "oradev02-fileset"
    Schedule = "WeeklyCycle"                # schedule: defined in bacula-dir.conf
#   Storage = FSDedup
    Storage = FSOciCloudStandard
    Messages = Standard
    Pool = oradev02-daily-pool
    SpoolAttributes = yes                   # Better for backup to disk
    Max Full Interval = 15 days             # Ensure that a full backup exists
    Priority = 10
    Write Bootstrap = "/opt/bacula/working/%c.bsr"
}

Job {
    Name = "oradev02-backup"
    JobDefs = "oradev02-job"
    Full Backup Pool = oradev02-weekly-pool
    Incremental Backup Pool = oradev02-daily-pool
}

Job {
    Name = "oradev02-monthly-backup"
    JobDefs = "oradev02-job"
    Pool = oradev02-monthly-pool
    Schedule = "MonthlyFull"                # schedule: see bacula-dir.conf (monthly pool with longer retention)
}

## Job for restore ##
Job {
    Name = "oradev02-restore"
    Type = Restore
    Client=oradev02.kranj.cetrtapot.si-fd
    Storage = FSOciCloudStandard
# The FileSet and Pool directives are not used by Restore Jobs
# but must not be removed
    FileSet="oradev02-fileset"
    Pool = oradev02-weekly-pool
    Messages = Standard
    Where = /restore
}

FileSet {
    Name = "oradev02-fileset"
    Include {
        Options {
            signature = MD5
            compression = GZIP
        }
   #     File = "D:/projekti"     #Windows example
   #     File = /zz                       #Linux example
          File = /backup/export
    }

## Exclude   ##
    Exclude {
        File = /opt/bacula/working
        File = /tmp
        File = /proc
        File = /sys
        File = /.journal
        File = /.fsck
    }
}

Pool {
    Name = oradev02-monthly-pool
    Pool Type = Backup
    Recycle = yes                           # Bacula can automatically recycle Volumes
    AutoPrune = no                          # pruning of expired volumes is handled by the catalog job
    Action On Purge = Truncate              # allow volume truncation
    #Volume Use Duration = 14h              # create a new volume for each backup
    Volume Retention = 365 days             # one year
    Maximum Volume Bytes = 50G              # limit Volume size to something reasonable
    Maximum Volumes = 100                   # limit number of Volumes in Pool
    Label Format = "oradev02-monthly-vol-"  # auto label
    Cache Retention = 1 days                # cloud specific (delete local cache after one day)
}


Pool {
    Name = oradev02-weekly-pool
    Pool Type = Backup
    Recycle = yes                           # Bacula can automatically recycle Volumes
    AutoPrune = no                          # pruning of expired volumes is handled by the catalog job
    Action On Purge = Truncate              # allow volume truncation
    #Volume Use Duration = 14h              # create a new volume for each backup
    Volume Retention = 35 days              # one month
    Maximum Volume Bytes = 50G              # limit Volume size to something reasonable
    Maximum Volumes = 100                   # limit number of Volumes in Pool
    Label Format = "oradev02-weekly-vol-"   # auto label
    Cache Retention = 1 days                # cloud specific (delete local cache after one day)
}

Pool {
    Name = oradev02-daily-pool
    Pool Type = Backup
    Recycle = yes                           # Bacula can automatically recycle Volumes
    AutoPrune = no                          # pruning of expired volumes is handled by the catalog job
    Action On Purge = Truncate              # allow volume truncation
    #Volume Use Duration = 14h              # create a new volume for each backup
    Volume Retention = 1 days               # for testing; change to 5 days (one working week) in production
    Maximum Volume Bytes = 50G              # limit Volume size to something reasonable
    Maximum Volumes = 100                   # limit number of Volumes in Pool
    Label Format = "oradev02-daily-vol-"    # auto label
    Cache Retention = 1 days                # cloud specific (delete local cache after one day)
}







_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

