
List:       bacula-commits
Subject:    [Bacula-commits] git: Bacula branch, Branch-5.0,
From:       "Kern Sibbald" <kerns@users.sourceforge.net>
Date:       2010-08-08 9:52:12
Message-ID: E1Oi2YA-0001o8-Fm@sfp-scmshell-3.v30.ch3.sourceforge.com

This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "Bacula".

The branch, Branch-5.0 has been updated
       via  aa10982982b9ea2efe50d938bf7c38ae6ea1dde7 (commit)
       via  8c20aa43159e004ba78046c85c830ff5d6a2c61d (commit)
      from  fb469c80dbcc01d0c9bf0f23892776861cb4d8c1 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
commit aa10982982b9ea2efe50d938bf7c38ae6ea1dde7
Author: Kern Sibbald <kern@sibbald.com>
Date:   Sun Aug 8 11:52:04 2010 +0200

    Update projects

commit 8c20aa43159e004ba78046c85c830ff5d6a2c61d
Author: Kern Sibbald <kern@sibbald.com>
Date:   Sat Aug 7 21:51:09 2010 +0200

    Tweak turn on DEVELOPER

-----------------------------------------------------------------------

Summary of changes:
diff --git a/bacula/projects b/bacula/projects
index a7334e2..bfa81cc 100644
--- a/bacula/projects
+++ b/bacula/projects
@@ -1,51 +1,54 @@
                 
 Projects:
                      Bacula Projects Roadmap 
-                    Status updated 25 February 2010
+                    Status updated 8 August 2010
 
 Summary:
 * => item complete
 
 Item  1: Ability to restart failed jobs
-Item  2: Scheduling syntax that permits more flexibility and options
-Item  3: Data encryption on storage daemon
-Item  4: Add ability to Verify any specified Job.
-Item  5: Improve Bacula's tape and drive usage and cleaning management
-Item  6: Allow FD to initiate a backup
-Item  7: Implement Storage daemon compression
+Item  2: SD redesign
+Item  3: NDMP backup
+Item  4: SAP backup
+Item  5: Oracle backup
+Item  6: Include timestamp of job launch in "stat clients" output
+Item  7: Include all conf files in specified directory
 Item  8: Reduction of communications bandwidth for a backup
-Item  9: Ability to reconnect a disconnected comm line
+Item  9: Concurrent spooling and despooling within a single job.
 Item 10: Start spooling even when waiting on tape
-Item 11: Include all conf files in specified directory
-Item 12: Multiple threads in file daemon for the same job
+Item 11: Add ability to Verify any specified Job.
+Item 12: Data encryption on storage daemon
 Item 13: Possibility to schedule Jobs on last Friday of the month
-Item 14: Include timestamp of job launch in "stat clients" output
-Item 15: Message mailing based on backup types
-Item 16: Ability to import/export Bacula database entities
-Item 17: Implementation of running Job speed limit.
-Item 18: Add an override in Schedule for Pools based on backup types
-Item 19: Automatic promotion of backup levels based on backup size
-Item 20: Allow FileSet inclusion/exclusion by creation/mod times
-Item 21: Archival (removal) of User Files to Tape
-Item 22: An option to operate on all pools with update vol parameters
-Item 23: Automatic disabling of devices
-Item 24: Ability to defer Batch Insert to a later time
-Item 25: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
-Item 26: Enable persistent naming/number of SQL queries
-Item 27: Bacula Dir, FD and SD to support proxies
-Item 28: Add Minumum Spool Size directive
-Item 29: Handle Windows Encrypted Files using Win raw encryption
-Item 30: Implement a Storage device like Amazon's S3.
-Item 31: Convert tray monitor on Windows to a stand alone program
-Item 32: Relabel disk volume after recycling
+Item 14: Scheduling syntax that permits more flexibility and options
+Item 15: Ability to defer Batch Insert to a later time
+Item 16: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
+Item 17: Message mailing based on backup types
+Item 18: Handle Windows Encrypted Files using Win raw encryption
+Item 19: Job migration between different SDs
+Item 19: Allow FD to initiate a backup
+Item 20: Implement Storage daemon compression
+Item 21: Ability to import/export Bacula database entities
+Item 22: Implementation of running Job speed limit.
+Item 23: Add an override in Schedule for Pools based on backup types
+Item 24: Automatic promotion of backup levels based on backup size
+Item 25: Allow FileSet inclusion/exclusion by creation/mod times
+Item 26: Archival (removal) of User Files to Tape
+Item 27: Ability to reconnect a disconnected comm line
+Item 28: Multiple threads in file daemon for the same job
+Item 29: Automatic disabling of devices
+Item 30: Enable persistent naming/number of SQL queries
+Item 31: Bacula Dir, FD and SD to support proxies
+Item 32: Add Minimum Spool Size directive
 Item 33: Command that releases all drives in an autochanger
 Item 34: Run bscan on a remote storage daemon from within bconsole.
 Item 35: Implement a Migration job type that will create a reverse
-Item 36: Job migration between different SDs
-Item 37: Concurrent spooling and despooling withini a single job.
-Item 39: Extend the verify code to make it possible to verify
-Item 40: Separate "Storage" and "Device" in the bacula-dir.conf
-Item 41: Least recently used device selection for tape drives in autochanger.
+Item 36: Extend the verify code to make it possible to verify
+Item 37: Separate "Storage" and "Device" in the bacula-dir.conf
+Item 38: Least recently used device selection for tape drives in autochanger.
+Item 39: Implement a Storage device like Amazon's S3.
+Item 40: Convert tray monitor on Windows to a stand alone program
+Item 41: Improve Bacula's tape and drive usage and cleaning management
+Item 42: Relabel disk volume after recycling
 
 
 Item  1: Ability to restart failed jobs
@@ -68,8 +71,294 @@ Item  1: Ability to restart failed jobs
  Notes: Requires Accurate to restart correctly.  Must have completed a minimum
          volume of data or files stored on the Volume before enabling.
 
+Item  2: SD redesign
+   Date: 8 August 2010
+ Origin: Kern
+ Status: 
+
+  What: Various ideas for redesigns planned for the SD:
+   1. One thread per drive
+   2. Design a class structure for all objects in the SD.
+   3. Make Device into C++ classes for each device type
+   4. Make Device have a proxy (a front-end intercept class) that will
+      permit control over locking and changing the real device pointer.
+      It can also permit delaying opening, so that we can adapt to having
+      another program that tells us the Archive device name (see the
+      sketch below).
+   5. Allow plugins to create new devices on the fly
+   6. Separate SD volume manager
+   7. Volume manager tells Bacula what drive or device to use for a given volume
+  
+  Why:  It will simplify the SD, make it more modular, reduce locking
+        conflicts, and allow multiple buffer backups.
+
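+  Notes (sketch): a minimal illustration of the proxy idea in point 4
+        (invented names, not actual Bacula code).  The proxy centralizes
+        locking, can swap the real device pointer at runtime, and delays
+        the open until first use:
+
+   #include <cstddef>
+   #include <mutex>
+
+   class Device {                       // one C++ class per device type (3.)
+   public:
+      virtual ~Device() {}
+      virtual bool open() = 0;
+      virtual bool write(const char *buf, std::size_t len) = 0;
+   };
+
+   class DeviceProxy {                  // front-end intercept class (4.)
+      std::mutex mtx;                   // all locking centralized here
+      Device *real;                     // real device, swappable at runtime
+      bool opened = false;
+   public:
+      explicit DeviceProxy(Device *dev) : real(dev) {}
+      void swap_device(Device *dev) {   // change the real device pointer
+         std::lock_guard<std::mutex> lk(mtx);
+         real = dev;
+         opened = false;
+      }
+      bool write(const char *buf, std::size_t len) {
+         std::lock_guard<std::mutex> lk(mtx);
+         if (!opened) {                 // delayed open: first I/O triggers it
+            if (!real->open()) return false;
+            opened = true;
+         }
+         return real->write(buf, len);
+      }
+   };
+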
+Item  3: NDMP backup                                           
+   Date: 8 August 2010
+ Origin: Bacula Systems
+ Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore via NDMP -- most importantly, NetApp compatibility
+
+Item  4: SAP backup                                           
+   Date: 8 August 2010
+ Origin: Bacula Systems
+ Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore SAP databases (MaxDB, Oracle, possibly DB2)
+
+Item  5: Oracle backup                                           
+   Date: 8 August 2010
+ Origin: Bacula Systems
+ Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore Oracle databases
+
+Item  6: Include timestamp of job launch in "stat clients" output
+  Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
+  Date:  Tue Aug 22 17:13:39 EDT 2006
+  Status:
+
+  What:  The "stat clients" command doesn't include any detail on when
+          the active backup jobs were launched.
+
+  Why:   Including the timestamp would make it much easier to decide whether
+          a job is running properly. 
+
+  Notes: It may be helpful to have the output from "stat clients" formatted 
+          more like that from "stat dir" (and other commands), in a column
+          format. The per-client information that's currently shown (level,
+          client name, JobId, Volume, pool, device, Files, etc.) is good, but
+          somewhat hard to parse (both programmatically and visually), 
+          particularly when there are many active clients.
+
+
+Item  7: Include all conf files in specified directory
+Date:  18 October 2008
+Origin: Database, Lda. Maputo, Mozambique
+Contact: Cameron Smith / cameron.ord@database.co.mz
+Status: New request
+
+What: A directive something like "IncludeConf = /etc/bacula/subconfs".  Every
+      time the Bacula Director restarts or reloads, it will walk the given
+      directory (non-recursively) and include the contents of any files
+      therein, as though they were appended to bacula-dir.conf.
+
+Why: Permits simplified and safer configuration for larger installations with
+      many client PCs.  Currently, through judicious use of JobDefs and
+      similar directives, it is possible to reduce the client-specific part of
+      a configuration to a minimum.  The client-specific directives can be
+      prepared according to a standard template and dropped into a known
+      directory.  However, it is still necessary to add a line to the "master"
+      (bacula-dir.conf) referencing each new file.  This exposes the master to
+      unnecessary risk of accidental mistakes and makes automation of adding
+      new client confs more difficult (it is easier to automate dropping a
+      file into a dir than rewriting an existing file).  Kern has previously
+      made a convincing argument for NOT including Bacula's core configuration
+      in an RDBMS, but I believe that the present request is a reasonable
+      extension to the current "flat-file-based" configuration philosophy.
+ 
+Notes: There is NO need for any special syntax to these files.  They should
+       contain standard directives which are simply "inlined" to the parent
+       file as already happens when you explicitly reference an external file.
+
+Notes: (kes) this can already be done with scripting
+     From: John Jorgensen <jorgnsn@lcd.uregina.ca>
+     The bacula-dir.conf at our site contains these lines:
+
+   #
+   # Include subfiles associated with configuration of clients.
+   # They define the bulk of the Clients, Jobs, and FileSets.
+   #
+   @|"sh -c 'for f in /etc/bacula/clientdefs/*.conf ; do echo @${f} ; done'"
+
+    and when we get a new client, we just put its configuration into
+    a new file called something like:
+
+    /etc/bacula/clientdefs/clientname.conf
+
+
+
+
+Item  8: Reduction of communications bandwidth for a backup
+   Date: 14 October 2008
+ Origin: Robin O'Leary (Equiinet)
+ Status: 
+
+  What:  Using rdiff techniques, Bacula could significantly reduce
+          the network data transfer volume to do a backup.
+
+  Why:   Faster backup across the Internet
+
+  Notes: This requires retaining certain data on the client during a Full
+          backup that will speed up subsequent backups.
+     
+
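+  Notes (sketch): the core of the rdiff technique is a weak rolling
+     checksum, computed against block signatures retained from the previous
+     Full backup, so that only changed blocks cross the wire.  A minimal
+     illustration of such a checksum and its O(1) slide (not Bacula code):
+
+   #include <cstdint>
+
+   // rsync-style weak checksum over a window of n bytes
+   struct RollingSum {
+      uint32_t a = 0, b = 0, n = 0;
+      void init(const unsigned char *p, uint32_t len) {
+         a = b = 0; n = len;
+         for (uint32_t i = 0; i < len; i++) {
+            a += p[i];                  // plain byte sum
+            b += (len - i) * p[i];      // position-weighted sum
+         }
+      }
+      // slide the window one byte: drop 'out', add 'in' -- O(1) per byte
+      void roll(unsigned char out, unsigned char in) {
+         a += in - out;
+         b += a - n * out;
+      }
+      uint32_t digest() const { return (b << 16) | (a & 0xffff); }
+   };
+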
+Item  9: Concurrent spooling and despooling within a single job.
+Date:  17 nov 2009
+Origin: Jesper Krogh <jesper@krogh.cc>
+Status: NEW
+What:  When a job has spooling enabled and the spool area size is
+       less than the total size being backed up, the storage daemon will:
+       1) Spool to spool-area
+       2) Despool to tape
+       3) Go to 1 if more data to be backed up.
+
+       Typical disks will serve data at around 100MB/s when dealing with
+       large files; the network is typically capable of 115MB/s (GbE).
+       Tape drives will despool at 50-90MB/s (LTO3) or 70-120MB/s
+       (LTO4) depending on compression and data.
+
+       As bacula currently works, it'll hold back data from the client until
+       despooling is done, no matter whether the spool area could hold another
+       block of data. Given a FileSet of 4TB, a spool area of 100GB and a
+       Maximum Job Spool Size of 50GB, the above sequence could be changed
+       to spool into the other 50GB while despooling the first 50GB, without
+       holding back the client. As the numbers below show, depending on the
+       tape drive and disk arrays this potentially cuts the backup time of
+       the individual jobs by 50%.
+
+       Real-world example: backing up 112.6GB (large files) to LTO4 tapes
+       (despools at ~75MB/s; data is gzipped on the remote filesystem),
+       Maximum Job Spool Size = 8GB:
+
+       Current:
+       Size: 112.6GB
+       Elapsed time (total time): 46m 15s => 2775s
+       Despooling time: 25m 41s => 1541s (55%)
+       Spooling time: 20m 34s => 1234s (45%)
+       Reported speed: 40.58MB/s
+       Spooling speed: 112.6GB/1234s => 91.25MB/s
+       Despooling speed: 112.6GB/1541s => 73.07MB/s
+
+       So disk + net can "keep up" with the LTO4 drive (in this test)
+
+       The proposed change would effectively make the backup run in the
+       "despooling time" of 1541s, a reduction to 55% of the total run time.
+
+       In the situation where an individual job cannot keep up with the LTO
+       drive, spooling enables efficient multiplexing of multiple concurrent
+       jobs onto the same drive.
+
+Why:   When dealing with larger volumes, the overall utilization of the
+       network/disk is important to maximize in order to be able to run a full
+       backup over a weekend. The current workaround is to split the FileSet
+       into smaller FileSets and Jobs, but that leads to more configuration
+       management, is harder to review for completeness, and makes restores
+       more complex.
+
+     
+
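+  Notes (sketch): a back-of-the-envelope check of the 55% figure above:
+       with concurrent spooling and despooling, elapsed time approaches the
+       larger of the two phases instead of their sum:
+
+   #include <algorithm>
+   #include <cstdio>
+
+   int main() {
+      const double spool   = 1234;     // spooling time from the example, s
+      const double despool = 1541;     // despooling time from the example, s
+      const double current  = spool + despool;          // phases serialized
+      const double proposed = std::max(spool, despool); // phases overlapped
+      std::printf("current %.0fs, proposed %.0fs (%.1f%% of current)\n",
+                  current, proposed, 100 * proposed / current);
+      // prints: current 2775s, proposed 1541s (55.5% of current)
+      return 0;
+   }
+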
+Item 10: Start spooling even when waiting on tape
+  Origin: Tobias Barth <tobias.barth@web-arts.com>
+  Date:  25 April 2008
+  Status:
+
+  What: If a job can be spooled to disk before writing it to tape, it should
+          be spooled immediately.  Currently, bacula waits until the correct
+          tape is inserted into the drive.
+
+  Why:   It could save hours.  When bacula waits on the operator who must insert
+          the correct tape (e.g.  a new tape or a tape from another media
+          pool), bacula could already prepare the spooled data in the spooling
+          directory and immediately start despooling when the tape was
+          inserted by the operator.
+         
+          2nd step: Use 2 or more spooling directories.  When one directory is
+          currently despooling, the next (on different disk drives) could
+          already be spooling the next data.
+
+  Notes: I am using bacula 2.2.8, which has none of those features
+         implemented.
+
+
+Item 11: Add ability to Verify any specified Job.
+Date: 17 January 2008
+Origin: portrix.net Hamburg, Germany.
+Contact: Christian Sabelmann
+Status: 70% of the required Code is part of the Verify function since v. 2.x
+
+   What:
+   The ability to tell Bacula which Job to verify instead of
+   automatically verifying just the last one.
+
+   Why:
+   It is sad that such a powerful feature as Verify Jobs
+   (VolumeToCatalog) is restricted to the last backup Job
+   of a client.  Users who do daily Backups are forced to
+   also run daily Verify Jobs in order to take advantage of this useful
+   feature.  This daily Verify-after-Backup routine is not always desired,
+   and Verify Jobs sometimes have to be scheduled (not necessarily
+   in Bacula).  With this feature Admins could verify Jobs once a
+   week or once a month, selecting the Jobs they want to verify.  This
+   feature should also not be too difficult to implement, taking into
+   account older bug reports about this feature and the selection of the
+   Job to be verified.
+          
+   Notes: For the verify Job, the user could select the Job to be verified
+   from a list of the latest Jobs of a client. It would also be possible to
+   verify a certain volume.  All of this would naturally apply only to
+   Jobs whose file information is still in the catalog.
+
+
+Item 12: Data encryption on storage daemon
+  Origin: Tobias Barth <tobias.barth at web-arts.com>
+  Date:  04 February 2009
+  Status: new
+
+  What: The storage daemon should be able to do the data encryption that can
+        currently be done by the file daemon.
+
+  Why: This would have 2 advantages: 
+       1) one could encrypt the data of unencrypted tapes by doing a 
+          migration job
+       2) the storage daemon would be the only machine that would have 
+          to keep the encryption keys.
+
+  Notes from Landon:
+          As an addendum to the feature request, here are some crypto  
+          implementation details I wrote up regarding SD-encryption back in Jan  
+          2008:
+          http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg28860.html
+
+
+
+Item 13: Possibility to schedule Jobs on last Friday of the month
+Origin: Carsten Menke <bootsy52 at gmx dot net>
+Date:   02 March 2008
+Status:
+
+   What: Currently if you want to run your monthly Backups on the last
+           Friday of each month, this is only possible with workarounds
+           (e.g. scripting), as some months have 4 Fridays and some have 5.
+           The same is true if you plan to run your yearly Backups on the
+           last Friday of the year.  It would be nice to have the ability to
+           use the built-in scheduler for this.
+
+   Why:   In many companies the last working day of the week is Friday (or
+           Saturday), so to get the most data of the month onto the monthly
+           tape, the employees are advised to insert the tape for the
+           monthly backups on the last Friday of the month.
+
+   Notes: To give this complete functionality, it would be nice if the
+           "first" and "last" keywords could be implemented in the
+           scheduler, so it is also possible to run monthly backups on the
+           first Friday of the month, and many things more.  If the syntax
+           were expanded to {first|last} {Month|Week|Day|Mo-Fri} of the
+           {Year|Month|Week}, you would be able to run really flexible jobs.
 
-Item  2: Scheduling syntax that permits more flexibility and options
+           To have a certain Job run on the last Friday of the Month, for
+           example, one could then write
+
+              Run = pool=Monthly last Fri of the Month at 23:50
+
+              ## Yearly Backup
+
+              Run = pool=Yearly last Fri of the Year at 23:50
+
+              ## Certain Jobs the last Week of a Month
+
+              Run = pool=LastWeek last Week of the Month at 23:50
+
+              ## Monthly Backup on the last day of the month
+
+              Run = pool=Monthly last Day of the Month at 23:50
+
+Item 14: Scheduling syntax that permits more flexibility and options
    Date: 15 December 2006
   Origin: Gregory Brauer (greg at wildbrain dot com) and
           Florian Schnabel <florian.schnabel at docufy dot de>
@@ -175,125 +464,147 @@ Item  2: Scheduling syntax that permits more flexibility and options
  jobs (via Schedule syntax) into this.
 
 
-Item  3: Data encryption on storage daemon
-  Origin: Tobias Barth <tobias.barth at web-arts.com>
-  Date:  04 February 2009
-  Status: new
+Item 15: Ability to defer Batch Insert to a later time
+   Date: 26 April 2009
+ Origin: Eric
+ Status: 
 
-  What: The storage demon should be able to do the data encryption that can
-        currently be done by the file daemon.
+  What:  Instead of doing a Job Batch Insert at the end of the Job
+          which might create resource contention with lots of Jobs,
+          defer the insert to a later time.
 
-  Why: This would have 2 advantages: 
-       1) one could encrypt the data of unencrypted tapes by doing a 
-          migration job
-       2) the storage daemon would be the only machine that would have 
-          to keep the encryption keys.
+  Why:   Permits focusing on getting the data onto the Volume and
+          putting the metadata into the Catalog outside the backup
+          window.
 
-  Notes from Landon:
-          As an addendum to the feature request, here are some crypto  
-          implementation details I wrote up regarding SD-encryption back in Jan  
-          2008:
-          http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg28860.html
+  Notes: Will use the proposed Bacula ASCII database import/export
+          format (i.e. dependent on the import/export entities project).
 
 
-Item  4: Add ability to Verify any specified Job.
-Date: 17 January 2008
-Origin: portrix.net Hamburg, Germany.
-Contact: Christian Sabelmann
-Status: 70% of the required Code is part of the Verify function since v. 2.x
+Item 16: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
+   Origin: Bastian Friedrich <bastian.friedrich@collax.com>
+   Date:  2008-07-09
+   Status: -
 
-   What:
-   The ability to tell Bacula which Job should verify instead of 
-   automatically verify just the last one.
+   What:  SD has a "Maximum Volume Size" statement, which is deprecated and
+           superseded by the Pool resource statement "Maximum Volume Bytes".
+           It would be good if either statement could be used in Storage
+           resources.
 
-   Why: 
-   It is sad that such a powerfull feature like Verify Jobs
-   (VolumeToCatalog) is restricted to be used only with the last backup Job
-   of a client.  Actual users who have to do daily Backups are forced to
-   also do daily Verify Jobs in order to take advantage of this useful
-   feature.  This Daily Verify after Backup conduct is not always desired
-   and Verify Jobs have to be sometimes scheduled.  (Not necessarily
-   scheduled in Bacula).  With this feature Admins can verify Jobs once a
-   Week or less per month, selecting the Jobs they want to verify.  This
-   feature is also not to difficult to implement taking in account older bug
-   reports about this feature and the selection of the Job to be verified.
-          
-   Notes: For the verify Job, the user could select the Job to be verified 
-   from a List of the latest Jobs of a client. It would also be possible to 
-   verify a certain volume.  All of these would naturaly apply only for 
-   Jobs whose file information are still in the catalog.
+   Why:   Pools do not have to be restricted to a single storage type/device;
+           thus, it may be impossible to define Maximum Volume Bytes in the
+           Pool resource.  The old MaxVolSize statement is deprecated, as it
+           is SD side only.  I am using the same pool for different devices.
 
+   Notes: State of idea currently unknown.  Storage resources in the dir
+           config currently translate to very slim catalog entries; these
+           entries would require extensions to implement what is described
+           here.  Quite possibly, numerous other statements that are currently
+           available in Pool resources could quite well be used in Storage
+           resources too.
 
-Item  5: Improve Bacula's tape and drive usage and cleaning management 
-  Date:  8 November 2005, November 11, 2005
-  Origin: Adam Thornton <athornton at sinenomine dot net>,
-          Arno Lehmann <al at its-lehmann dot de>
+
+Item 17: Message mailing based on backup types
+ Origin: Evan Kaufman <evan.kaufman@gmail.com>
+   Date: January 6, 2006
+ Status:
+
+   What: In the "Messages" resource definitions, allow messages
+          to be mailed based on the type (backup, restore, etc.) and level
+          (full, differential, etc.) of the job that created the originating
+          message(s).
+
+ Why:    It would, for example, allow someone's boss to be emailed
+          automatically only when a Full Backup job runs, so he can
+          retrieve the tapes for offsite storage, even if the IT dept.
+          doesn't (or can't) explicitly notify him.  At the same time, his
+          mailbox wouldn't be filled by notifications of Verifies, Restores,
+          or Incremental/Differential Backups (which would likely be kept
+          onsite).
+
+ Notes: One way this could be done is through additional message types, for
+ example:
+
+   Messages {
+     # email the boss only on full system backups
+     Mail = boss@mycompany.com = full, !incremental, !differential, !restore, 
+            !verify, !admin
+     # email us only when something breaks
+     MailOnError = itdept@mycompany.com = all
+   }
+
+   Notes: Kern: This should be rather trivial to implement.
+
+
+Item 18: Handle Windows Encrypted Files using Win raw encryption
+  Origin: Michael Mohr, SAG  Mohr.External@infineon.com
+  Date:  22 February 2008
+  Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
+  Date:  05 August 2008
   Status:
 
-  What:  Make Bacula manage tape life cycle information, tape reuse
-          times and drive cleaning cycles.
-
-  Why:   All three parts of this project are important when operating
-          backups.
-          We need to know which tapes need replacement, and we need to
-          make sure the drives are cleaned when necessary.  While many
-          tape libraries and even autoloaders can handle all this
-          automatically, support by Bacula can be helpful for smaller
-          (older) libraries and single drives.  Limiting the number of
-          times a tape is used might prevent tape errors when using
-          tapes until the drives can't read it any more.  Also, checking
-          drive status during operation can prevent some failures (as I
-          [Arno] had to learn the hard way...)
-
-  Notes: First, Bacula could (and even does, to some limited extent)
-          record tape and drive usage.  For tapes, the number of mounts,
-          the amount of data, and the time the tape has actually been
-          running could be recorded.  Data fields for Read and Write
-          time and Number of mounts already exist in the catalog (I'm
-          not sure if VolBytes is the sum of all bytes ever written to
-          that volume by Bacula).  This information can be important
-          when determining which media to replace.  The ability to mark
-          Volumes as "used up" after a given number of write cycles
-          should also be implemented so that a tape is never actually
-          worn out.  For the tape drives known to Bacula, similar
-          information is interesting to determine the device status and
-          expected life time: Time it's been Reading and Writing, number
-          of tape Loads / Unloads / Errors.  This information is not yet
-          recorded as far as I [Arno] know.  A new volume status would
-          be necessary for the new state, like "Used up" or "Worn out".
-          Volumes with this state could be used for restores, but not
-          for writing. These volumes should be migrated first (assuming
-          migration is implemented) and, once they are no longer needed,
-          could be moved to a Trash pool.
-
-          The next step would be to implement a drive cleaning setup.
-          Bacula already has knowledge about cleaning tapes.  Once it
-          has some information about cleaning cycles (measured in drive
-          run time, number of tapes used, or calender days, for example)
-          it can automatically execute tape cleaning (with an
-          autochanger, obviously) or ask for operator assistance loading
-          a cleaning tape.
-
-          The final step would be to implement TAPEALERT checks not only
-          when changing tapes and only sending the information to the
-          administrator, but rather checking after each tape error,
-          checking on a regular basis (for example after each tape
-          file), and also before unloading and after loading a new tape.
-          Then, depending on the drives TAPEALERT state and the known
-          drive cleaning state Bacula could automatically schedule later
-          cleaning, clean immediately, or inform the operator.
-
-          Implementing this would perhaps require another catalog change
-          and perhaps major changes in SD code and the DIR-SD protocol,
-          so I'd only consider this worth implementing if it would
-          actually be used or even needed by many people.
-
-          Implementation of these projects could happen in three distinct
-          sub-projects: Measuring Tape and Drive usage, retiring
-          volumes, and handling drive cleaning and TAPEALERTs.
-
-
-Item  6: Allow FD to initiate a backup
+  What: Make it possible to back up and restore Encrypted Files from and to
+          Windows systems without the need to decrypt them, by using the raw
+          encryption functions API (see:
+          http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
+          that is provided for that purpose by Microsoft.
+          Whether a file is encrypted can be determined by evaluating the
+          FILE_ATTRIBUTE_ENCRYPTED flag returned by the GetFileAttributes
+          function.
+          For each file backed up or restored by FD on Windows, check if
+          the file is encrypted; if so then use OpenEncryptedFileRaw,
+          ReadEncryptedFileRaw, WriteEncryptedFileRaw,
+          CloseEncryptedFileRaw instead of BackupRead and BackupWrite
+          API calls.
+
+  Why:   Without the use of this interface, the fd-daemon running
+          under the system account can't read encrypted Files because
+          it lacks the key needed for decryption. As a result,
+          encrypted files are currently not backed up
+          by bacula, and no error is reported for the files that are missed.
+
+   Notes: Using the xxxEncryptedFileRaw API would allow backing up and
+           restoring EFS-encrypted files without decrypting their data.
+           Note that such files cannot be restored "portably" (at least,
+           easily), but they would be restorable to a different (or
+           reinstalled) Win32 machine; the restore would require setup
+           of an EFS recovery agent in advance, of course, and this shall
+           be clearly reflected in the documentation, but this is the
+           normal Windows SysAdmin's business.
+           When "portable" backup is requested the EFS-encrypted files
+           shall be clearly reported as errors.
+           See MSDN on the "Backup and Restore of Encrypted Files" topic:
+           http://msdn.microsoft.com/en-us/library/aa363783.aspx
+           Maybe the EFS support requires a new flag in the database for
+           each file, too?
+           Unfortunately, the implementation is not as straightforward as
+           1-to-1 replacement of BackupRead with ReadEncryptedFileRaw,
+           requiring some FD code rewrite to work with
+           encrypted-file-related callback functions.
+
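+   Notes (sketch): a minimal illustration of the check-and-branch proposed
+           above (error handling trimmed; not FD code -- only the Win32
+           raw-encryption calls named in the What section; link against
+           Advapi32):
+
+   #include <windows.h>
+   #include <cstdio>
+
+   // Export callback: receives the opaque encrypted stream chunk by chunk.
+   static DWORD WINAPI dump_chunk(PBYTE data, PVOID cb_ctx, ULONG len) {
+      fwrite(data, 1, len, (FILE *)cb_ctx);     // hand raw bytes to the SD
+      return ERROR_SUCCESS;
+   }
+
+   static bool backup_file(const wchar_t *path, FILE *out) {
+      DWORD attrs = GetFileAttributesW(path);
+      if (attrs != INVALID_FILE_ATTRIBUTES &&
+          (attrs & FILE_ATTRIBUTE_ENCRYPTED)) { // encrypted: use raw path
+         PVOID ctx;
+         if (OpenEncryptedFileRaw(path, 0, &ctx) != ERROR_SUCCESS)
+            return false;
+         DWORD rc = ReadEncryptedFileRaw(dump_chunk, out, ctx);
+         CloseEncryptedFileRaw(ctx);
+         return rc == ERROR_SUCCESS;
+      }
+      /* ...otherwise fall through to the usual BackupRead path... */
+      return true;
+   }
+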
+Item 19: Job migration between different SDs
+Origin:  Mariusz Czulada <manieq AT wp DOT eu>
+Date:    07 May 2007
+Status:  NEW
+
+What:   Allow specifying, in a migration job, devices on a Storage Daemon
+        other than the one used for the migrated jobs (possibly on a
+        different/distant host)
+
+Why:    Sometimes we have more than one system which requires a backup
+        implementation.  Often, these systems are functionally unrelated and
+        placed in different locations.  Having a big backup device (a tape
+        library) in each location is not cost-effective.  It would be much
+        better to have one sufficiently powerful tape library which could
+        handle backups from all systems, assuming relatively fast and reliable
+        WAN connections.  In such an architecture, backups are done in service
+        windows on local bacula servers, then migrated to central storage
+        during off-peak hours.
+
+Notes:  If migration to a different SD works, migration to the same SD, as
+        now, could be done the same way (I mean 'localhost') to unify the
+        whole process.
+
+Item 19: Allow FD to initiate a backup
 Origin:  Frank Volf (frank at deze dot org)
 Date:    17 November 2005
 Status: 
@@ -333,9 +644,9 @@ Notes: - The FD already has code for the monitor interface
        8. The console interface to the FD should be extended to
           permit a properly authorized console to initiate a
           backup via the FD.
-              
 
-Item  7: Implement Storage daemon compression
+
+Item 20: Implement Storage daemon compression
   Date:  18 December 2006
   Origin: Vadim A. Umanski , e-mail umanski@ext.ru
   Status:
@@ -356,223 +667,7 @@ Item  7: Implement Storage daemon compression
           That's why the server-side compression feature is needed!
   Notes:
 
-
-Item  8: Reduction of communications bandwidth for a backup
-   Date: 14 October 2008
- Origin: Robin O'Leary (Equiinet)
- Status: 
-
-  What:  Using rdiff techniques, Bacula could significantly reduce
-          the network data transfer volume to do a backup.
-
-  Why:   Faster backup across the Internet
-
-  Notes: This requires retaining certain data on the client during a Full
-          backup that will speed up subsequent backups.
-     
-     
-Item  9: Ability to reconnect a disconnected comm line
-  Date:  26 April 2009
-  Origin: Kern/Eric
-  Status: 
-
-  What:  Often jobs fail because of a communications line drop. In that 
-          case, Bacula should be able to reconnect to the other daemon and
-          resume the job.
-
-  Why:   Avoids backuping data already saved.
-
-  Notes: *Very* complicated from a design point of view because of authenication.
-
-Item 10: Start spooling even when waiting on tape
-  Origin: Tobias Barth <tobias.barth@web-arts.com>
-  Date:  25 April 2008
-  Status:
-
-  What: If a job can be spooled to disk before writing it to tape, it should
-          be spooled immediately.  Currently, bacula waits until the correct
-          tape is inserted into the drive.
-
-  Why:   It could save hours.  When bacula waits on the operator who must insert
-          the correct tape (e.g.  a new tape or a tape from another media
-          pool), bacula could already prepare the spooled data in the spooling
-          directory and immediately start despooling when the tape was
-          inserted by the operator.
-         
-          2nd step: Use 2 or more spooling directories.  When one directory is
-          currently despooling, the next (on different disk drives) could
-          already be spooling the next data.
-
-  Notes: I am using bacula 2.2.8, which has none of those features
-         implemented.
-
-
-Item 11: Include all conf files in specified directory
-Date:  18 October 2008
-Origin: Database, Lda. Maputo, Mozambique
-Contact:Cameron Smith / cameron.ord@database.co.mz 
-Status: New request
-
-What: A directive something like "IncludeConf = /etc/bacula/subconfs" Every
-      time Bacula Director restarts or reloads, it will walk the given
-      directory (non-recursively) and include the contents of any files
-      therein, as though they were appended to bacula-dir.conf
-
-Why: Permits simplified and safer configuration for larger installations with
-      many client PCs.  Currently, through judicious use of JobDefs and
-      similar directives, it is possible to reduce the client-specific part of
-      a configuration to a minimum.  The client-specific directives can be
-      prepared according to a standard template and dropped into a known
-      directory.  However it is still necessary to add a line to the "master"
-      (bacula-dir.conf) referencing each new file.  This exposes the master to
-      unnecessary risk of accidental mistakes and makes automation of adding
-      new client-confs, more difficult (it is easier to automate dropping a
-      file into a dir, than rewriting an existing file).  Ken has previously
-      made a convincing argument for NOT including Bacula's core configuration
-      in an RDBMS, but I believe that the present request is a reasonable
-      extension to the current "flat-file-based" configuration philosophy.
- 
-Notes: There is NO need for any special syntax to these files.  They should
-       contain standard directives which are simply "inlined" to the parent
-       file as already happens when you explicitly reference an external file.
-
-Notes: (kes) this can already be done with scripting
-     From: John Jorgensen <jorgnsn@lcd.uregina.ca>
-     The bacula-dir.conf at our site contains these lines:
-
-   #
-   # Include subfiles associated with configuration of clients.
-   # They define the bulk of the Clients, Jobs, and FileSets.
-   #
-   @|"sh -c 'for f in /etc/bacula/clientdefs/*.conf ; do echo @${f} ; done'"
-
-    and when we get a new client, we just put its configuration into
-    a new file called something like:
-
-    /etc/bacula/clientdefs/clientname.conf
-
-
-Item 12: Multiple threads in file daemon for the same job
-  Date:  27 November 2005
-  Origin: Ove Risberg (Ove.Risberg at octocode dot com)
-  Status:
-
-  What:  I want the file daemon to start multiple threads for a backup
-          job so the fastest possible backup can be made.
-
-          The file daemon could parse the FileSet information and start
-          one thread for each File entry located on a separate
-          filesystem.
-
-          A confiuration option in the job section should be used to
-          enable or disable this feature. The confgutration option could
-          specify the maximum number of threads in the file daemon.
-
-          If the theads could spool the data to separate spool files
-          the restore process will not be much slower.
-
-  Why:   Multiple concurrent backups of a large fileserver with many
-          disks and controllers will be much faster.
-
-  Notes: (KES) This is not necessary and could be accomplished
-         by having two jobs.  In addition, the current VSS code
-         is single thread.
-
-
-Item 13: Possibilty to schedule Jobs on last Friday of the month
-Origin: Carsten Menke <bootsy52 at gmx dot net>
-Date:   02 March 2008
-Status:
-
-   What: Currently if you want to run your monthly Backups on the last
-           Friday of each month this is only possible with workarounds (e.g
-           scripting) (As some months got 4 Fridays and some got 5 Fridays)
-           The same is true if you plan to run your yearly Backups on the
-           last Friday of the year.  It would be nice to have the ability to
-           use the builtin scheduler for this.
-
-   Why:   In many companies the last working day of the week is Friday (or 
-           Saturday), so to get the most data of the month onto the monthly
-           tape, the employees are advised to insert the tape for the
-           monthly backups on the last friday of the month.
-
-   Notes: To give this a complete functionality it would be nice if the
-           "first" and "last" Keywords could be implemented in the
-           scheduler, so it is also possible to run monthy backups at the
-           first friday of the month and many things more.  So if the syntax
-           would expand to this {first|last} {Month|Week|Day|Mo-Fri} of the
-           {Year|Month|Week} you would be able to run really flexible jobs.
-
-           To got a certain Job run on the last Friday of the Month for example
-           one could then write
-
-              Run = pool=Monthly last Fri of the Month at 23:50
-
-              ## Yearly Backup
-
-              Run = pool=Yearly last Fri of the Year at 23:50
-
-              ## Certain Jobs the last Week of a Month
-
-              Run = pool=LastWeek last Week of the Month at 23:50
-
-              ## Monthly Backup on the last day of the month
-
-              Run = pool=Monthly last Day of the Month at 23:50
-
-
-Item 14: Include timestamp of job launch in "stat clients" output
-  Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
-  Date:  Tue Aug 22 17:13:39 EDT 2006
-  Status:
-
-  What:  The "stat clients" command doesn't include any detail on when
-          the active backup jobs were launched.
-
-  Why:   Including the timestamp would make it much easier to decide whether
-          a job is running properly. 
-
-  Notes: It may be helpful to have the output from "stat clients" formatted 
-          more like that from "stat dir" (and other commands), in a column
-          format. The per-client information that's currently shown (level,
-          client name, JobId, Volume, pool, device, Files, etc.) is good, but
-          somewhat hard to parse (both programmatically and visually), 
-          particularly when there are many active clients.
-
-
-Item 15: Message mailing based on backup types
- Origin: Evan Kaufman <evan.kaufman@gmail.com>
-   Date: January 6, 2006
- Status:
-
-   What: In the "Messages" resource definitions, allowing messages
-          to be mailed based on the type (backup, restore, etc.) and level
-          (full, differential, etc) of job that created the originating
-          message(s).
-
- Why:    It would, for example, allow someone's boss to be emailed
-          automatically only when a Full Backup job runs, so he can
-          retrieve the tapes for offsite storage, even if the IT dept.
-          doesn't (or can't) explicitly notify him.  At the same time, his
-          mailbox wouldnt be filled by notifications of Verifies, Restores,
-          or Incremental/Differential Backups (which would likely be kept
-          onsite).
-
- Notes: One way this could be done is through additional message types, for
- example:
-
-   Messages {
-     # email the boss only on full system backups
-     Mail = boss@mycompany.com = full, !incremental, !differential, !restore, 
-            !verify, !admin
-     # email us only when something breaks
-     MailOnError = itdept@mycompany.com = all
-   }
-
-   Notes: Kern: This should be rather trivial to implement.
-
-
-Item 16: Ability to import/export Bacula database entities
+Item 21: Ability to import/export Bacula database entities
    Date: 26 April 2009
  Origin: Eric
  Status: 
@@ -587,7 +682,7 @@ Item 16: Ability to import/export Bacula database entities
           other criteria.
 
 
-Item 17: Implementation of running Job speed limit.
+Item 22: Implementation of running Job speed limit.
 Origin: Alex F, alexxzell at yahoo dot com
 Date: 29 January 2009
 
@@ -609,7 +704,7 @@ Why: Because of a couple of reasons.  First, it's very hard to implement a
  especially where there is little available.
 
 
-Item 18: Add an override in Schedule for Pools based on backup types
+Item 23: Add an override in Schedule for Pools based on backup types
 Date:    19 Jan 2005
 Origin:  Chad Slater <chad.slater@clickfox.com>
 Status: 
@@ -629,7 +724,7 @@ Status:
           has more capacity (i.e. a 8TB tape library.
 
 
-Item 19: Automatic promotion of backup levels based on backup size
+Item 24: Automatic promotion of backup levels based on backup size
    Date: 19 January 2006
   Origin: Adam Thornton <athornton@sinenomine.net>
   Status: 
@@ -649,7 +744,7 @@ Item 19: Automatic promotion of backup levels based on backup size
  of).
 
 
-Item 20: Allow FileSet inclusion/exclusion by creation/mod times
+Item 25: Allow FileSet inclusion/exclusion by creation/mod times
   Origin: Evan Kaufman <evan.kaufman@gmail.com>
   Date:  January 11, 2006
   Status:
@@ -699,7 +794,7 @@ Item 20: Allow FileSet inclusion/exclusion by creation/mod times
            or 'since'.
 
 
-Item 21: Archival (removal) of User Files to Tape
+Item 26: Archival (removal) of User Files to Tape
   Date:  Nov. 24/2005 
   Origin: Ray Pengelly [ray at biomed dot queensu dot ca
   Status: 
@@ -726,23 +821,47 @@ Item 21: Archival (removal) of User Files to Tape
           access time.  Then after another 6 months (or possibly as one
           storage pool gets full) data is migrated to Tape.
 
+Item 27: Ability to reconnect a disconnected comm line
+  Date:  26 April 2009
+  Origin: Kern/Eric
+  Status: 
 
-Item 22: An option to operate on all pools with update vol parameters
-  Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>
-   Date: 16 August 2006
-  Status: Patch made by  Nigel Stepp
+  What:  Often jobs fail because of a communications line drop. In that 
+          case, Bacula should be able to reconnect to the other daemon and
+          resume the job.
+
+  Why:   Avoids backing up data already saved.
 
-   What: When I do update -> Volume parameters -> All Volumes
-          from Pool, then I have to select pools one by one.  I'd like
-          console to have an option like "0: All Pools" in the list of
-          defined pools.
+  Notes: *Very* complicated from a design point of view because of authentication.
+
+Item 28: Multiple threads in file daemon for the same job
+  Date:  27 November 2005
+  Origin: Ove Risberg (Ove.Risberg at octocode dot com)
+  Status:
 
-   Why:  I have many pools and therefore unhappy with manually
-          updating each of them using update -> Volume parameters -> All
-          Volumes from Pool -> pool #.
+  What:  I want the file daemon to start multiple threads for a backup
+          job so the fastest possible backup can be made.
 
+          The file daemon could parse the FileSet information and start
+          one thread for each File entry located on a separate
+          filesystem.
 
-Item 23: Automatic disabling of devices
+          A configuration option in the job section should be used to
+          enable or disable this feature. The configuration option could
+          specify the maximum number of threads in the file daemon.
+
+          If the threads could spool the data to separate spool files,
+          the restore process will not be much slower.
+
+  Why:   Multiple concurrent backups of a large fileserver with many
+          disks and controllers will be much faster.
+
+  Notes: (KES) This is not necessary and could be accomplished
+         by having two jobs.  In addition, the current VSS code
+         is single thread.
+
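+  Notes (sketch): for illustration only, the dispatch itself would be
+         simple: one worker per File entry, up to the configured maximum
+         (invented names, not FD code; save_tree stands in for a
+         per-filesystem backup routine):
+
+   #include <string>
+   #include <thread>
+   #include <vector>
+
+   // Hypothetical: back up one directory tree (one File entry).
+   static void save_tree(std::string root) { (void)root; /* walk, send files */ }
+
+   static void backup_parallel(const std::vector<std::string> &entries,
+                               unsigned max_threads) {
+      std::vector<std::thread> workers;
+      for (const std::string &root : entries) {
+         if (workers.size() >= max_threads) {   // honor the configured cap
+            for (std::thread &t : workers) t.join();
+            workers.clear();
+         }
+         workers.emplace_back(save_tree, root); // one thread per filesystem
+      }
+      for (std::thread &t : workers) t.join();  // wait for the stragglers
+   }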
+
+Item 29: Automatic disabling of devices
    Date: 2005-11-11
   Origin: Peter Eriksson <peter at ifm.liu dot se>
   Status:
@@ -769,47 +888,7 @@ Item 23: Automatic disabling of devices
           instead.
 
 
-Item 24: Ability to defer Batch Insert to a later time
-   Date: 26 April 2009
- Origin: Eric
- Status: 
-
-  What:  Instead of doing a Job Batch Insert at the end of the Job
-          which might create resource contention with lots of Job,
-          defer the insert to a later time.
-
-  Why:   Permits to focus on getting the data on the Volume and
-          putting the metadata into the Catalog outside the backup
-          window.
-
-  Notes: Will use the proposed Bacula ASCII database import/export
-          format (i.e. dependent on the import/export entities project).
-
-
-Item 25: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
-   Origin: Bastian Friedrich <bastian.friedrich@collax.com>
-   Date:  2008-07-09
-   Status: -
-
-   What:  SD has a "Maximum Volume Size" statement, which is deprecated and
-           superseded by the Pool resource statement "Maximum Volume Bytes".
-           It would be good if either statement could be used in Storage
-           resources.
-
-   Why:   Pools do not have to be restricted to a single storage type/device;
-           thus, it may be impossible to define Maximum Volume Bytes in the
-           Pool resource.  The old MaxVolSize statement is deprecated, as it
-           is SD side only.  I am using the same pool for different devices.
-
-   Notes: State of idea currently unknown.  Storage resources in the dir
-           config currently translate to very slim catalog entries; these
-           entries would require extensions to implement what is described
-           here.  Quite possibly, numerous other statements that are currently
-           available in Pool resources could be used in Storage resources too
-           quite well.
-
-
-Item 26: Enable persistent naming/number of SQL queries
+Item 30: Enable persistent naming/number of SQL queries
   Date:  24 Jan, 2007 
   Origin: Mark Bergman 
   Status: 
@@ -875,7 +954,7 @@ Item 26: Enable persistent naming/number of SQL queries
         than by number.
 
 
-Item 27: Bacula Dir, FD and SD to support proxies
+Item 31: Bacula Dir, FD and SD to support proxies
 Origin: Karl Grindley @ MIT Lincoln Laboratory <kgrindley at ll dot mit dot edu>
 Date:  25 March 2009
 Status: proposed
@@ -916,7 +995,7 @@ Notes: Director resource tunneling: This configuration option to utilize a
  One could also possibly use stunnel, netcat, etc.
 
 
-Item 28: Add Minumum Spool Size directive
+Item 32: Add Minimum Spool Size directive
 Date: 20 March 2008
 Origin: Frank Sweetser <fs@wpi.edu>
 
@@ -939,112 +1018,8 @@ Origin: Frank Sweetser <fs@wpi.edu>
         gigabytes) it can easily produce multi-megabyte report emails!
 
 
-Item 29: Handle Windows Encrypted Files using Win raw encryption
-  Origin: Michael Mohr, SAG  Mohr.External@infineon.com
-  Date:  22 February 2008
-  Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
-  Date:  05 August 2008
-  Status:
 
-  What: Make it possible to backup and restore Encypted Files from and to
-          Windows systems without the need to decrypt it by using the raw
-          encryption functions API (see:
-          http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
-          that is provided for that reason by Microsoft.
-          If a file ist encrypted could be examined by evaluating the 
-          FILE_ATTRIBUTE_ENCRYTED flag of the GetFileAttributes
-          function.
-          For each file backed up or restored by FD on Windows, check if
-          the file is encrypted; if so then use OpenEncryptedFileRaw,
-          ReadEncryptedFileRaw, WriteEncryptedFileRaw,
-          CloseEncryptedFileRaw instead of BackupRead and BackupWrite
-          API calls.
 
-  Why:   Without the usage of this interface the fd-daemon running
-          under the system account can't read encypted Files because
-          the key needed for the decrytion is missed by them. As a result 
-          actually encrypted files are not backed up
-          by bacula and also no error is shown while missing these files.
-
-   Notes: Using xxxEncryptedFileRaw API would allow to backup and
-           restore EFS-encrypted files without decrypting their data.
-           Note that such files cannot be restored "portably" (at least,
-           easily) but they would be restoreable to a different (or
-           reinstalled) Win32 machine; the restore would require setup
-           of a EFS recovery agent in advance, of course, and this shall
-           be clearly reflected in the documentation, but this is the
-           normal Windows SysAdmin's business.
-           When "portable" backup is requested the EFS-encrypted files
-           shall be clearly reported as errors.
-           See MSDN on the "Backup and Restore of Encrypted Files" topic:
-           http://msdn.microsoft.com/en-us/library/aa363783.aspx
-           Maybe the EFS support requires a new flag in the database for
-           each file, too?
-           Unfortunately, the implementation is not as straightforward as
-           1-to-1 replacement of BackupRead with ReadEncryptedFileRaw,
-           requiring some FD code rewrite to work with
-           encrypted-file-related callback functions.
-
-
-Item 30: Implement a Storage device like Amazon's S3.
-  Date:  25 August 2008
-  Origin: Soren Hansen <soren@ubuntu.com>
-  Status: Not started.
-  What:  Enable the storage daemon to store backup data on Amazon's
-          S3 service.
-
-  Why:   Amazon's S3 is a cheap way to store data off-site. 
-
-  Notes: If we configure the Pool to put only one job per volume (they don't
-         support append operation), and the volume size isn't to big (100MB?),
-         it should be easy to adapt the disk-changer script to add get/put
-         procedure with curl. So, the data would be safetly copied during the
-         Job. 
-
-         Cloud should be only used with Copy jobs, users should always have
-         a copy of their data on their site.
-
-         We should also think to have our own cache, trying always to have
-         cloud volume on the local disk. (I don't know if users want to store
-         100GB on cloud, so it shouldn't be a disk size problem). For example,
-         if bacula want to recycle a volume, it will start by downloading the
-         file to truncate it few seconds later, if we can avoid that...
-
-Item 31: Convert tray monitor on Windows to a stand alone program
-   Date: 26 April 2009
- Origin: Kern/Eric
- Status: 
-
-  What:  Separate Win32 tray monitor to be a separate program.
-
-  Why:   Vista does not allow SYSTEM services to interact with the 
-          desktop, so the current tray monitor does not work on Vista
-          machines.  
-
-  Notes: Requires communicating with the FD via the network (simulate
-          a console connection).
-
-
-Item 32: Relabel disk volume after recycling
-  Origin: Pasi Kärkkäinen <pasik@iki.fi>
-  Date:   07 May 2009.
-  Status: Not implemented yet, no code written.
-
-  What: The ability to relabel the disk volume (and thus rename the file on the
-        disk) after it has been recycled. Useful when you have a single job
-        per disk volume, and you use a custom Label format, for example:
-        Label Format =
-        "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
                
-
-  Why: Disk volumes in Bacula get the label/filename when they are used for the
-       first time.  If you use recycling and custom label format like above,
-       the disk volume name doesn't match the contents after it has been
-       recycled.  This feature makes it possible to keep the label/filename
-       in sync with the content and thus makes it easy to check/monitor the
-       backups from the shell and/or normal file management tools, because
-       the filenames of the disk volumes match the content.
-
-  Notes:  The configuration option could be "Relabel after Recycling = Yes".
 
 Item 33: Command that releases all drives in an autochanger
   Origin: Blake Dunlap (blake@nxs.net)
@@ -1130,82 +1105,9 @@ Item 35: Implement a Migration job type that will create a reverse
  Notes:  This feature was previously discussed on the bacula-devel list
           here: http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04962.html
  
-Item 36: Job migration between different SDs
-Origin:  Mariusz Czulada <manieq AT wp DOT eu>
-Date:    07 May 2007
-Status:  NEW
-
-What:   Allow to specify in migration job devices on Storage Daemon other then
-        the one used for migrated jobs (possibly on different/distant host)
-
-Why:    Sometimes we have more then one system which requires backup
-        implementation.  Often, these systems are functionally unrelated and
-        placed in different locations.  Having a big backup device (a tape
-        library) in each location is not cost-effective.  It would be much
-        better to have one powerful enough tape library which could handle
-        backups from all systems, assuming relatively fast and reliable WAN
-        connections.  In such architecture backups are done in service windows
-        on local bacula servers, then migrated to central storage off the peak
-        hours.
-
-Notes:  If migration to different SD is working, migration to the same SD, as
-        now, could be done the same way (i mean 'localhost') to unify the
-        whole process
-
-Item 37: Concurrent spooling and despooling within a single job.
-Date:  17 November 2009
-Origin: Jesper Krogh <jesper@krogh.cc>
-Status: NEW
-What:  When a job has spooling enabled and the spool area is smaller
-       than the total amount of data to back up, the storage daemon will:
-       1) Spool to the spool area
-       2) Despool to tape
-       3) Go to 1 if there is more data to back up.
-
-       Typical disks will serve data at about 100MB/s when dealing with
-       large files; the network is typically capable of 115MB/s (GbE).
-       Tape drives will despool at 50-90MB/s (LTO3) or 70-120MB/s
-       (LTO4), depending on compression and data.
-
-       As Bacula currently works, it holds back data from the client
-       until despooling is done, no matter whether the spool area could
-       accept another block of data. Given a FileSet of 4TB, a spool
-       area of 100GB, and a Maximum Job Spool Size of 50GB, the above
-       sequence could be changed to spool into the other 50GB while
-       despooling the first 50GB, without holding back the client. As
-       the numbers above show, depending on the tape drive and disk
-       arrays, this can potentially cut the backup time of individual
-       jobs by up to 50%.
-
-       Real-world example: backing up 112.6GB (large files) to LTO4
-       tapes (despools at ~75MB/s; data is gzipped on the remote
-       filesystem). Maximum Job Spool Size = 8GB.
-
-       Current:
-       Size: 112.6GB
-       Elapsed time (total time): 46m 15s => 2775s
-       Despooling time: 25m 41s => 1541s (55%)
-       Spooling time: 20m 34s => 1234s (45%)
-       Reported speed: 40.58MB/s
-       Spooling speed: 112.6GB/1234s => 91.25MB/s
-       Despooling speed: 112.6GB/1541s => 73.07MB/s
-
-       So disk + net can "keep up" with the LTO4 drive (in this test).
-
-       The proposed change would effectively make the backup run in the
-       "despooling time" of 1541s, a reduction to 55% of the total run
-       time.
-
-       In the situation where an individual job cannot keep up with the
-       LTO drive, spooling enables efficient multiplexing of multiple
-       concurrent jobs onto the same drive.
-
-Why:   When dealing with larger volumes, it is important to maximize
-       the general utilization of the network/disk in order to be able
-       to run a full backup over a weekend. The current work-around is
-       to split the FileSet into smaller FileSets and Jobs, but that
-       leads to more configuration management and is harder to review
-       for completeness. It also makes restores more complex. (A
-       minimal sketch of the overlapped pipeline follows this item.)
-
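A minimal sketch of the overlapped pipeline proposed above, assuming a
bounded buffer between a spooling (producer) thread and a despooling
(consumer) thread. All names (read_from_client, write_to_tape,
MAX_SPOOL_BYTES) are hypothetical stand-ins, not Bacula's actual SD code:

# Sketch: overlap spooling and despooling within one job by bounding
# in-flight data instead of alternating full spool/despool phases.
import queue
import threading

MAX_SPOOL_BYTES = 50 * 1024**3   # assumed Maximum Job Spool Size
CHUNK = 64 * 1024**2             # arbitrary 64MB spool chunks

def read_from_client(total_bytes):
    """Stand-in for receiving file data from the FD."""
    sent = 0
    while sent < total_bytes:
        n = min(CHUNK, total_bytes - sent)
        sent += n
        yield b"x" * n           # placeholder payload

def write_to_tape(data):
    """Stand-in for despooling one chunk to the tape drive."""
    pass

def run_job(total_bytes):
    # The bounded queue holds the client back only when the spool area
    # is actually full, not for the whole despool phase as today.
    spool = queue.Queue(maxsize=MAX_SPOOL_BYTES // CHUNK)
    done = object()

    def despooler():
        while True:
            data = spool.get()
            if data is done:
                break
            write_to_tape(data)

    t = threading.Thread(target=despooler)
    t.start()
    for data in read_from_client(total_bytes):
        spool.put(data)          # blocks only if the spool area is full
    spool.put(done)
    t.join()

run_job(1 * 1024**3)             # e.g. a 1GB job

With this structure the elapsed time approaches max(spooling time,
despooling time) rather than their sum, which is where the reduction to
the 1541s "despooling time" in the example above comes from.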
-Item 39: Extend the verify code to make it possible to verify
+Item 36: Extend the verify code to make it possible to verify
           older jobs, not only the last one that has finished
   Date:   10 April 2009
   Origin: Ralf Gross (Ralf-Lists <at> ralfgross.de)
@@ -1262,7 +1164,7 @@ Item 39: Extend the verify code to make it possible to verify
 
 
 
-Item 40: Separate "Storage" and "Device" in the bacula-dir.conf
+Item 37: Separate "Storage" and "Device" in the bacula-dir.conf
   Date:   29 April 2009
   Origin: "James Harper" <james.harper@bendigoit.com.au>
   Status: not implemented or documented
@@ -1299,7 +1201,7 @@ Item 40: Separate "Storage" and "Device" in the bacula-dir.conf
 
   Notes:  
 
-Item 41: Least recently used device selection for tape drives in autochanger.
+Item 38: Least recently used device selection for tape drives in autochanger.
 Date:    12 October 2009
 Origin:  Thomas Carter <tcarter@memc.com>
 Status:  Proposal
@@ -1318,9 +1220,88 @@ Why:  The current implementation places a majority of use and wear on drive
 Notes: (a minimal LRU drive-selection sketch follows this item)
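A minimal sketch of least-recently-used drive selection for Item 38:
pick the free autochanger drive whose last use is oldest, instead of
always starting the search at drive 0. The Drive class and timestamps
are hypothetical stand-ins for the SD's device list:

import time

class Drive:
    def __init__(self, index):
        self.index = index
        self.last_used = 0.0     # epoch seconds; 0 = never used
        self.busy = False

def pick_drive(drives):
    """Return the free drive that has been idle the longest."""
    free = [d for d in drives if not d.busy]
    if not free:
        return None
    return min(free, key=lambda d: d.last_used)

drives = [Drive(i) for i in range(4)]
d = pick_drive(drives)
d.busy = True
d.last_used = time.time()        # record use so wear is spread evenly
print("selected drive", d.index)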
 
+Item 39: Implement a Storage device like Amazon's S3.
+  Date:  25 August 2008
+  Origin: Soren Hansen <soren@ubuntu.com>
+  Status: Not started.
+  What:  Enable the storage daemon to store backup data on Amazon's
+          S3 service.
+
+  Why:   Amazon's S3 is a cheap way to store data off-site.
+
+  Notes: If we configure the Pool to put only one job per volume (S3
+         does not support an append operation), and the volume size
+         isn't too big (100MB?), it should be easy to adapt the
+         disk-changer script to add get/put procedures using curl, so
+         the data would be safely copied during the Job. (A minimal
+         sketch follows this item.)
+
+         Cloud storage should only be used with Copy jobs; users should
+         always keep a copy of their data on their own site.
+
+         We should also think about having our own cache, always trying
+         to keep the cloud volume on the local disk. (Users probably
+         won't store 100GB in the cloud, so disk size shouldn't be a
+         problem.) For example, if Bacula wants to recycle a volume, it
+         would start by downloading the file only to truncate it a few
+         seconds later; we should avoid that if we can.
+
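A minimal sketch of the get/put idea from the Notes above, assuming
pre-signed S3 URLs so that plain curl works without AWS request
signing. The helper names, paths, and URL are illustrative, not part of
the actual disk-changer script:

import subprocess

def put_volume(volume_path, url):
    """Upload one completed (one-job) volume file to cloud storage."""
    subprocess.run(["curl", "-sf", "-T", volume_path, url], check=True)

def get_volume(url, volume_path):
    """Download a volume back to the local cache before a restore."""
    subprocess.run(["curl", "-sf", "-o", volume_path, url], check=True)

# e.g. after the Job closes the volume:
# put_volume("/var/bacula/volumes/Vol-0001",
#            "https://bucket.s3.amazonaws.com/Vol-0001?X-Amz-...")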
+Item 40: Convert tray monitor on Windows to a stand alone program
+   Date: 26 April 2009
+ Origin: Kern/Eric
+ Status: 
+
+  What:  Split the Win32 tray monitor out into a separate program.
+
+  Why:   Vista does not allow SYSTEM services to interact with the 
+          desktop, so the current tray monitor does not work on Vista
+          machines.  
+
+  Notes: Requires communicating with the FD via the network (simulate
+          a console connection).
+
+Item 41: Improve Bacula's tape and drive usage and cleaning management 
+  Date:  8 November 2005, 11 November 2005
+  Origin: Adam Thornton <athornton at sinenomine dot net>,
+          Arno Lehmann <al at its-lehmann dot de>
+  Status:
+
+  What:
+          1. Measure tape and drive usage (mostly implemented)
+          2. Retire a volume when it is too old or has had too many errors
+          3. Handle cleaning and tape alerts
+
+  Why:   Needed
+
+
+Item 42: Relabel disk volume after recycling
+  Origin: Pasi Kärkkäinen <pasik@iki.fi>
+  Date:   07 May 2009.
+  Status: Not implemented yet, no code written.
+
+  What: The ability to relabel the disk volume (and thus rename the file on the
+        disk) after it has been recycled. Useful when you have a single job
+        per disk volume, and you use a custom Label format, for example:
+        Label Format =
+        "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
 +
+  Why: Disk volumes in Bacula get the label/filename when they are used for the
+       first time.  If you use recycling and custom label format like above,
+       the disk volume name doesn't match the contents after it has been
+       recycled.  This feature makes it possible to keep the label/filename
+       in sync with the content and thus makes it easy to check/monitor the
+       backups from the shell and/or normal file management tools, because
+       the filenames of the disk volumes match the content.
+
+  Notes:  The configuration option could be "Relabel after Recycling = Yes".
+          (A minimal sketch of the rename step follows below.)
+
+
+
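A hypothetical illustration of the relabel step for Item 42: when a
disk volume is recycled, derive a fresh name from the label format and
rename the volume file so the filename matches the new contents.
expand_label() is a simplified stand-in for Bacula's variable
expansion, and the catalog update is omitted:

import datetime
import os

def expand_label(client, level, numvols):
    now = datetime.datetime.now()
    # simplified analogue of the Label Format in the item above
    return "%s-%s-%04d-%s" % (client, level, numvols,
                              now.strftime("%Y_%m_%d-%H_%M"))

def relabel_after_recycle(volume_dir, old_name, client, level, numvols):
    new_name = expand_label(client, level, numvols)
    os.rename(os.path.join(volume_dir, old_name),
              os.path.join(volume_dir, new_name))
    return new_name  # the catalog would be updated with this new label

# e.g. relabel_after_recycle("/var/bacula/volumes", "srv1-Full-0007-...",
#                            "srv1", "Incremental", 8)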
 ========= New items after last vote ====================
 
 
Note: to renumber items, use:
scripts/renumber_projects.pl projects >1
+
+
 ========= Add new items above this line =================
 
 
@@ -1342,12 +1323,13 @@ Item  n: One line summary ...
 
 
 ========== Items completed in version 5.0.0 ====================
-*Item  2: 'restore' menu: enter a JobId, automatically select dependents
-*Item  5: Deletion of disk Volumes when pruned (partial -- truncate when pruned)
-*Item  6: Implement Base jobs
-*Item 10: Restore from volumes on multiple storage daemons
-*Item 15: Enable/disable compression depending on storage device (disk/tape)
-*Item 20: Cause daemons to use a specific IP address to source communications
-*Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
-*Item 31: List InChanger flag when doing restore.
-*Item 35: Port bat to Win32
+*Item   : 'restore' menu: enter a JobId, automatically select dependents
+*Item   : Deletion of disk Volumes when pruned (partial -- truncate when pruned)
+*Item   : Implement Base jobs
+*Item   : Restore from volumes on multiple storage daemons
+*Item   : Enable/disable compression depending on storage device (disk/tape)
+*Item   : Cause daemons to use a specific IP address to source communications
+*Item   : "Maximum Concurrent Jobs" for drives when used with changer device
+*Item   : List InChanger flag when doing restore.
+*Item   : Port bat to Win32
+*Item   : An option to operate on all pools with update vol parameters
diff --git a/bacula/src/version.h b/bacula/src/version.h
index c1f0c9d..dcd07db 100644
--- a/bacula/src/version.h
+++ b/bacula/src/version.h
@@ -52,7 +52,7 @@
 #define TRACE_FILE 1
 
 /* If this is set stdout will not be closed on startup */
-/* #define DEVELOPER 1 */
+#define DEVELOPER 1
 
 /*
  * SMCHECK does orphaned buffer checking (memory leaks)


hooks/post-receive
-- 
Bacula

_______________________________________________
Bacula-commits mailing list
Bacula-commits@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-commits

