
List:       intermezzo-devel
Subject:    [Fwd: LML changes]
From:       "Shirish H. Phatak" <shirish () nustorage ! com>
Date:       2000-12-20 22:45:47

My first email bounced...

-Shirish


Received: from newton.tacitussystems.com ([64.127.40.234])
        by mail.promediaworld.com (MERAK 3.00.120) with ESMTP id BIC36683
        for <shirish@tacitussystems.com>; Wed, 20 Dec 2000 17:27:42 -0500
Received: from nustorage.com (IDENT:shirish@localhost.localdomain [127.0.0.1])
	by newton.tacitussystems.com (8.11.0/8.11.0) with ESMTP id eBKMeOe13834;
	Wed, 20 Dec 2000 17:40:24 -0500
Sender: shirish@newton.tacitussystems.com
Message-ID: <3A413556.CBE72681@nustorage.com>
Date: Wed, 20 Dec 2000 17:40:23 -0500
From: "Shirish H. Phatak" <shirish@nustorage.com>
Organization: Tacitus Systems, Inc.
X-Mailer: Mozilla 4.76 [en] (X11; U; Linux 2.2.17-ext3reiser-win4lin i586)
X-Accept-Language: en
MIME-Version: 1.0
To: Peter Braam <braam@mountainviewdata.com>
CC: intermezzo-devel@mountainviewdata.com,
   Shirish Phatak <shirish@tacitussystems.com>
Subject: Re: LML changes
References: <NEBBIIJKCMJGDLNAMBCBAENGCCAA.braam@mountainviewdata.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Hi,

Peter Braam wrote:

> OK - we have agreed what we will do.
>
> The kernel will write CLOSE style records in the LML (perhaps one extra flag
> is needed).
>

 We also need a mechanism to maintain monotonicity of the last_rcvd sequence
when the delayed close is seen. Otherwise, the kernel will record the KML
offset and record number from the CLOSE record in the last_rcvd file, causing
the sequence to go haywire. When Presto sees the delayed close, it should
bump up the local recno/offset but leave the remote recno/offset untouched.

 Are we also planning to record the initial unfinished close in the KML? If
not, we could bump up the remote_recno/remote_offset on seeing the close to
acknowledge that we have received the record. That way, if Lento fails after
the close goes into the LML and then restarts, the CLOSE KML record need not
be resent from the peer.
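
 Roughly the bookkeeping I have in mind (a minimal sketch only; the struct
and field names below are made up for illustration, not the real last_rcvd
layout):

    /* Sketch: hypothetical last_rcvd bookkeeping around a delayed close.
     * Field names are invented; this is not the on-disk layout. */
    #include <stdint.h>

    struct lastrcvd {
            uint64_t local_recno;    /* last record generated locally      */
            uint64_t local_offset;   /* KML offset of that record          */
            uint64_t remote_recno;   /* last record received from the peer */
            uint64_t remote_offset;  /* its KML offset on the peer         */
    };

    /* A delayed close completes: advance only the local side so the
     * sequence stays monotonic; leave the remote side untouched. */
    static void note_delayed_close(struct lastrcvd *lr,
                                   uint64_t recno, uint64_t offset)
    {
            if (recno > lr->local_recno)
                    lr->local_recno = recno;
            if (offset > lr->local_offset)
                    lr->local_offset = offset;
    }

    /* If the unfinished close is not recorded in the KML, the peer's CLOSE
     * could be acknowledged as soon as it lands in the LML, so it need not
     * be resent after a Lento restart. */
    static void ack_close_in_lml(struct lastrcvd *lr,
                                 uint64_t peer_recno, uint64_t peer_offset)
    {
            if (peer_recno > lr->remote_recno) {
                    lr->remote_recno = peer_recno;
                    lr->remote_offset = peer_offset;
            }
    }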

>
> The kernel will provide an interface to place incomplete backfetch related
> close records in the LML.
>
> The kernel will reclaim LML records when they are cleared and provide a
> clearing interface to Lento.
>
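
 From Lento's side, I imagine the clearing interface looking something like
the following (purely a sketch: the device path, ioctl number and request
struct are hypothetical stand-ins, not an existing presto interface):

    /* Sketch: a hypothetical user-space call that tells the kernel an LML
     * record has been cleared so it can be reclaimed.  The device path,
     * ioctl number and request struct are all invented for illustration. */
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    struct lml_clear_req {
            uint64_t lml_offset;     /* offset of the cleared LML record */
            uint64_t lml_recno;      /* its record number                */
    };

    #define PRESTO_LML_CLEAR  0x7042         /* hypothetical ioctl number */

    int clear_lml_record(uint64_t offset, uint64_t recno)
    {
            struct lml_clear_req req = { offset, recno };
            int fd = open("/dev/intermezzo0", O_RDWR);   /* hypothetical node */
            int rc;

            if (fd < 0)
                    return -1;
            rc = ioctl(fd, PRESTO_LML_CLEAR, &req);      /* kernel reclaims it */
            close(fd);
            return rc;
    }
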
> Shirish will write the changes to reintegrate and to recover the LML on
> restarts, Peter will do the kernel code.
>
> Peter will also deal with applications writing LML records when they start
> writing to a file.
>
> For InterMezzo 1.0 we add the absolute minimum to enhance correctness of the
> protocol:
>
>  - failed backfetches are retried
>  - applications that crash before a close will get a close record on
> startup (i.e. the last writer wins even after a crash).
>

 I have been thinking about this a bit. There is a concern that if an
application keeps a file open for a very long time and the local system then
has a hard crash, you might lose a lot of data that you thought was
replicated (database applications being a good example). Is there a way to
periodically force a close record (only in the KML) for an open file, perhaps
by tracking the number of writes to the file?
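
 Concretely, something along these lines is what I mean by tracking writes
(a minimal sketch; the hook name and the threshold are made up, and the real
check would sit in the presto write path):

    /* Sketch: force a CLOSE-style record into the KML after every N writes
     * to a long-open file, so a hard crash loses at most N writes' worth of
     * updates.  The names and the threshold are illustrative only. */
    #include <stdint.h>

    #define FORCE_CLOSE_EVERY  1024u         /* tunable: writes per forced close */

    struct open_file_state {
            uint64_t writes_since_close;     /* writes since the last KML close */
    };

    /* Hypothetical hook called from the write path.  Returns 1 when a
     * CLOSE record should be written to the KML now; the file itself
     * stays open, this is only a checkpoint. */
    static int note_write(struct open_file_state *st)
    {
            if (++st->writes_since_close >= FORCE_CLOSE_EVERY) {
                    st->writes_since_close = 0;
                    return 1;
            }
            return 0;
    }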

 Another option is to pass every open to Lento. Lento can then fire off a
timer which forces in a close record every so often (with the interval as a
tunable parameter) until the corresponding close arrives...
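
 On the Lento side that could be as simple as the loop below (again just a
sketch; force_close() and the interval are placeholders for whatever the real
call into the kernel ends up being):

    /* Sketch: a user-space timer that keeps forcing a KML close record for
     * a file that stays open, until the real close shows up.  force_close()
     * and the interval are placeholders, not existing Lento code. */
    #include <stdio.h>
    #include <unistd.h>

    #define FORCE_CLOSE_INTERVAL  300        /* seconds, tunable */

    /* Placeholder: would ask the kernel to append a CLOSE record to the
     * KML for the still-open file. */
    static void force_close(const char *path)
    {
            printf("forcing a KML close record for %s\n", path);
    }

    static void watch_open_file(const char *path,
                                int (*still_open)(const char *path))
    {
            while (still_open(path)) {
                    sleep(FORCE_CLOSE_INTERVAL);
                    force_close(path);
            }
    }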

-Shirish

>
> Finally for 1.0 we need to use the data journaling on LML, KML and last_rcvd
> file with ext3 0.05d.  Peter will do this.
>
> - Peter -


_______________________________________________
intermezzo-devel mailing list
intermezzo-devel@lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/intermezzo-devel


