List:       ietf-nfsv4
Subject:    Re: FYI: HTTP Distribution and Replication Protocol
From:       "Juan Gomez" <juang () us ! ibm ! com>
Date:       2002-06-07 17:10:42
Message-ID: OFF1AE164A.3993EC79-ON88256BD1.005E2776 () boulder ! ibm ! com
Although I am unaware of how popular the approach described in that note
became, I think it establishes a precedent: tracking data similarities with
low-collision keys is a viable technique for reducing the bandwidth
requirements of replication.
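As a rough sketch of that idea (hypothetical names and block size, Python
chosen only for illustration): the replica advertises the low-collision keys
(here SHA-256 digests) of the blocks it already holds, and the server
transfers only the blocks whose keys the replica lacks.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not from any spec


def block_hashes(data: bytes) -> dict:
    """Map each fixed-size block's SHA-256 digest to the block itself."""
    blocks = {}
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        blocks[hashlib.sha256(block).hexdigest()] = block
    return blocks


def blocks_to_send(server_data: bytes, replica_keys: set) -> dict:
    """Return only the blocks whose keys the replica does not already hold."""
    return {h: b for h, b in block_hashes(server_data).items()
            if h not in replica_keys}


# Example: the file changes in its last block only.
old = b"A" * 8192 + b"B" * 4096
new = b"A" * 8192 + b"C" * 4096
replica_keys = set(block_hashes(old))
delta = blocks_to_send(new, replica_keys)
# Only the single changed block needs to cross the wire.
```

Because identical blocks hash to identical keys, unchanged data never needs
to be retransferred, at the cost of exchanging the (much smaller) key set.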

Juan



> From:     Brent Callaghan <brent@eng.sun.com>
> Sent by:  owner-nfsv4-wg@sunroof.eng.sun.com
> Date:     06/05/02 05:37 PM
> To:       nfsv4-wg@sunroof.eng.sun.com
> cc:
> Subject:  FYI: HTTP Distribution and Replication Protocol




Although "push" technology is officially out of fashion amongst
the Internet cognoscenti, I remember being quite impressed by
Marimba's Castanet protocol for file distribution and replication.

It's built on a model of a central server and a large number
of replicas that need to be maintained by having file differences
transferred.  Their protocol also uses some sophisticated block
hashing to avoid transferring identical data blocks.

The protocol eventually found its way to the W3C, where
it morphed into the "HTTP Distribution and Replication
Protocol".  There's a description of the protocol on the
W3C website:

             http://www.w3.org/TR/NOTE-drp

Their model, based on file indexes, isn't quite what we're
looking at for this protocol.  But the protocol's use of
checksums to identify previously downloaded data might be
interesting.

             Brent
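As a rough illustration of the checksum-index idea in that note (hypothetical
structures, not the DRP wire format): the server publishes an index mapping
each path to a content checksum, and the client fetches only the entries
whose checksum it has not cached before.

```python
import hashlib


def checksum(data: bytes) -> str:
    # The DRP note used MD5 content identifiers; SHA-256 is an
    # illustrative stand-in here.
    return hashlib.sha256(data).hexdigest()


def plan_downloads(index: dict, cache: dict) -> list:
    """Given a server index {path: checksum} and a local cache
    {checksum: data}, list the paths that actually need fetching."""
    return [path for path, ck in index.items() if ck not in cache]


cache = {checksum(b"logo"): b"logo"}
index = {
    "/img/logo.png": checksum(b"logo"),       # already cached: skipped
    "/img/banner.png": checksum(b"banner"),   # new content: fetched
}
# plan_downloads(index, cache) names only /img/banner.png.
```

The same lookup also recognizes content that was previously downloaded under
a different path, which is where the bandwidth savings come from.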

