
List:       gfs-users
Subject:    Re: [gfs-users] distributed block device, version 0.10
From:       andy adams <andy () syred ! com>
Date:       2001-01-29 17:33:41
[Download RAW message or body]

hayden@sakhalin.mediacity.com wrote:

> I'm pleased to announce version 0.10 of the Distributed Block Device
> or DBD (as opposed to DRBD).  A kernel block device driver is now
> included.  Much work has gone into improving performance, stability,
> and documentation for the software.  It has been tested with
> Sistina's GFS, though additional work is still required to integrate
> our DLM with GFS.  Also, a company, North Fork Networks
> (www.northforknet.com), has recently been formed to develop and
> support the software.
>
> regards, Mark Hayden
>
> Copyright (C) 2000 North Fork Networks, Inc.  All rights reserved.
> Author: Mark Hayden (mark@northforknet.com)
>
> DISTRIBUTED BLOCK DEVICE
>
> With DBD you run a collection of servers that provide a pool of
> storage for use by any number of clients.  Administrators configure
> the storage to be made available as any number of named "virtual
> disks".  A client accesses the virtual disks by communicating with
> the various servers on which the virtual disks are stored.
>
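The addressing model described above can be sketched roughly as follows. All names here (`VirtualDisk`, the server list, round-robin placement, the block size) are illustrative assumptions for clarity, not DBD's actual API or layout.

```python
# Illustrative sketch of the DBD addressing model: a named "virtual
# disk" is carved out of a pool of storage servers, and a client maps
# a logical block number to the server holding it.  The round-robin
# placement and all names are assumptions, not DBD's real format.

BLOCK_SIZE = 4096

class VirtualDisk:
    def __init__(self, name, servers, nblocks):
        self.name = name        # administrator-chosen disk name
        self.servers = servers  # storage servers backing this disk
        self.nblocks = nblocks

    def locate(self, block):
        """Return the server a logical block lives on (round-robin)."""
        if not 0 <= block < self.nblocks:
            raise IndexError("block out of range")
        return self.servers[block % len(self.servers)]

pool = ["srv-a", "srv-b", "srv-c"]
vd = VirtualDisk("scratch0", pool, nblocks=1024)
print(vd.locate(0))   # srv-a
print(vd.locate(4))   # srv-b
```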
> The distributed block device (DBD) has advanced from pre-alpha to an
> alpha release.  The code has been tested a great deal more than in
> the previous release, but it is still not suitable for storing
> important data.  This release is being made in order to collect
> comments.
>
> * A company, North Fork Networks (www.northforknet.com), has been
>   formed to develop and support the DBD.
>
> * A HOWTO is now available at:
>
>     http://www.northforknet.com/doc/howto/t1.html
>
> * The DBD now includes a Linux kernel driver module for the client.
>   The driver has been tested with EXT2 file system and GFS.
>   Instructions are included in the HOWTO.
>
> * Raw character devices are supported.
>
> * Performance has been improved a great deal.
>
> The release includes (1) block-data servers that can store data for
> any number of virtual disks, (2) replicated "manager" servers that
> store the global meta-data for the system and manage the storage
> servers, and (3) the client code for accessing the system.  More
> information on the DBD, including a tutorial based on the user-level
> programs and instructions for building and running the client kernel
> module, is included in the Northfork HOWTO.
>
> Some of the features of the DBD are:
>
> * High-availability: level of redundancy can be specified for each
>   disk.  For instance, specifying 2-way replication causes each block
>   of data to be stored on 2 storage servers.
>
> * Scalability.  Arbitrary numbers of storage servers can be added to
>   the system.  They can be added incrementally as more storage is
>   required.  Each virtual disk can be "striped" on multiple storage
>   servers using a technique called "chained de-clustering."  This
>   increases throughput because different servers provide different
>   portions of the data.
>
> * Asynchronous protocols.  The system is designed from the bottom up
>   to be implemented using asynchronous protocols that tolerate
>   arbitrary reorderings and delays in messages.  Many other systems
>   are not designed in this fashion and are thus subject to various
>   failure modes known as "split-brain" scenarios that can result in
>   your data being corrupted.  Those systems attempt to avoid these
>   scenarios through techniques such as redundant networking hardware
>   and hardware watchdog timers.  The DBD solves these problems
>   through protocols designed to never allow data corruption as a
>   result of partial connectivity.
>
> * Designed to work with the distributed lock manager described below
>   (as well as other lock managers).  This means multiple clients can
>   coordinate concurrent access to a virtual disk, such as when using
>   a clustered file system like GFS.
>
> * Future capabilities anticipated in the DBD design include snapshots
>   of virtual disks and the ability to reorganize the layout of data
>   on the fly.
>
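The replication and chained de-clustering features above fit together in one placement rule. In the textbook formulation of chained de-clustering, block b's primary copy goes to server b mod N and each replica to the next server in the chain; this is the standard rule from the literature, not necessarily DBD's exact layout.

```python
# Sketch of chained de-clustering with 2-way replication: block b's
# primary copy lives on server (b mod N) and its replica on the next
# server in the chain.  Reads spread across all servers, and the loss
# of any one server leaves every block with a surviving copy whose
# load is shared by the failed server's neighbors rather than falling
# entirely on a single mirror.  This is the textbook placement rule,
# not necessarily DBD's exact layout.

def placement(block, nservers, copies=2):
    """Return the list of servers holding the given block."""
    primary = block % nservers
    return [(primary + i) % nservers for i in range(copies)]

N = 4
for b in range(6):
    print(b, placement(b, N))
```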
> PAXOS REPLICATED DATA SERVICE
>
> C-Ensemble includes a replicated data service based on Leslie
> Lamport's Paxos protocol.  No familiarity with C-Ensemble is needed
> to use the Paxos service (all the interface is in two header files).
> More information on this service can be found in the Northfork HOWTO.
>
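C-Ensemble's actual interface lives in its two C header files; as a rough illustration of what the underlying agreement protocol does (choosing one value despite failures, and preserving it in later rounds), here is a minimal single-decree Paxos sketch with simulated in-memory acceptors. All names are hypothetical teaching aids, not the C-Ensemble API.

```python
# Minimal single-decree Paxos sketch with in-memory acceptors, to
# illustrate the agreement protocol a replicated data service can be
# built on.  This is a teaching sketch, not C-Ensemble's interface.

class Acceptor:
    def __init__(self):
        self.promised = -1          # highest ballot promised
        self.accepted = (-1, None)  # (ballot, value) accepted so far

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return self.accepted    # report any previously accepted value
        return None                 # reject: promised a higher ballot

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Run one Paxos round; returns the chosen value, which may be an
    earlier proposer's value (that is what makes the protocol safe)."""
    majority = len(acceptors) // 2 + 1
    # Phase 1: collect promises from a majority.
    replies = [r for r in (a.prepare(ballot) for a in acceptors)
               if r is not None]
    if len(replies) < majority:
        return None
    # If any acceptor already accepted a value, we must re-propose it.
    prior = max(replies)
    if prior[1] is not None:
        value = prior[1]
    # Phase 2: ask the majority to accept.
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks >= majority else None

accs = [Acceptor() for _ in range(3)]
print(propose(accs, ballot=1, value="A"))  # "A" is chosen
print(propose(accs, ballot=2, value="B"))  # still "A": later rounds preserve it
```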
> DLM
>
> C-Ensemble includes a complete distributed lock manager (DLM) built
> using the toolkit.  The DLM can be used either through a text-based
> pipe interface or linked into your application as a C library.  No
> familiarity with C-Ensemble is needed to use the DLM (all the
> interface is in one header file).  More information and a DLM
> tutorial can be found in the Northfork HOWTO.
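The DLM's real commands are documented in the Northfork HOWTO; to illustrate the general idea of driving a lock manager through a text-based pipe protocol, here is a toy command loop. The command names (`lock`/`unlock`) and replies (`granted`/`denied`/`released`) are made up for illustration.

```python
# Toy sketch of a text-pipe lock-manager protocol, in the spirit of
# the DLM's pipe interface.  The command names and replies here are
# invented for illustration; the real protocol is in the HOWTO.

def dlm_step(state, owner, line):
    """Process one command line against the lock table `state`
    (a dict mapping lock name -> current owner); return a reply."""
    parts = line.split()
    if len(parts) != 2 or parts[0] not in ("lock", "unlock"):
        return "error: expected 'lock NAME' or 'unlock NAME'"
    cmd, name = parts
    if cmd == "lock":
        if state.get(name, owner) != owner:
            return "denied"        # someone else holds the lock
        state[name] = owner
        return "granted"
    if state.get(name) == owner:
        del state[name]
        return "released"
    return "error: not the holder"

table = {}
print(dlm_step(table, "clientA", "lock vdisk0"))    # granted
print(dlm_step(table, "clientB", "lock vdisk0"))    # denied
print(dlm_step(table, "clientA", "unlock vdisk0"))  # released
print(dlm_step(table, "clientB", "lock vdisk0"))    # granted
```

In the pipe model, each client would write one command per line and read one reply per line, which is why a plain-text protocol like this needs no C-Ensemble knowledge on the client side.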
> _______________________________________________
> gfs-users mailing list
> gfs-users@sistina.com
> http://lists.sistina.com/mailman/listinfo/gfs-users
> Read the GFS Howto:  http://www.sistina.com/gfs/Pages/howto.html

There is no link to the HOWTO on northforknet.com! Is it a joke?

