
List:       osdl-lsb-discuss
Subject:    [lsb-discuss] statfs specification addition: review
From:       wol@thewolery.demon.co.uk (Anthony W. Youngman)
Date:       2006-12-28 23:48:46
Message-ID: E46jMARefFlFFw0m@thewolery.demon.co.uk

In message <20061228224149.GB11467@thunk.org>, Theodore Tso 
<tytso@mit.edu> writes
>On Wed, Dec 27, 2006 at 08:40:51AM +0000, Anthony W. Youngman wrote:
>> Note that I am aware of at least one application (IBM UniVerse) where
>> the distinction between local and remote filesystems is CRUCIAL. I'm
>> sure others can think of more.
>>
>> It's a database: it stores each "table" in its own OS-level file, and
>> for integrity reasons it *needs* to know whether the files exist
>> physically on the local or remote machine. For data integrity reasons,
>> by default it will refuse to write to a remote file, instead passing a
>> request to the server daemon running on the remote machine.
>
>What do you mean by "data integrity reasons"?  Do you mean accidental
>data corruption?  Depending on the remote filesystem involved, and the
>quality of your local disk, the remote filesystem might actually be
>more robust (modern remote filesystems use aggressive checksumming;
>some even use encryption and message authentication codes to assure
>that the data can't be tampered with on the network).

Yes, I do mean "accidental data corruption". And no, I *don't* mean 
corruption by the computer!

Bear in mind that a database data store, which to the OS is just a 
file, is usually a filesystem as far as the database is concerned. As 
such, it needs to handle multiple simultaneous readers and writers. And 
how does THE DATABASE ensure integrity?

UniVerse stores a lock table in shared RAM, which of course is totally 
useless if there are multiple database servers on multiple machines all 
writing to the same OS file. That is why the default is to refuse to 
write directly to any file mounted via NFS: not because NFS might 
corrupt the file, but because multiple servers might screw up each 
other's writes.
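
To make that concrete: on Linux, an application can make a rough 
local-versus-remote guess today by looking at the f_type field that 
statfs(2) returns and comparing it against known filesystem magic 
numbers. The sketch below is illustrative only (the is_remote() helper 
is mine, not UniVerse's actual code, and a real application would need 
a much longer list of remote f_type values); having to maintain such a 
list by hand is exactly the sort of thing a statfs specification 
addition could make unnecessary.

#include <stdio.h>
#include <sys/vfs.h>      /* statfs(2) on Linux */
#include <linux/magic.h>  /* NFS_SUPER_MAGIC, SMB_SUPER_MAGIC, ... */

/* Return 1 if path lives on a known remote filesystem, 0 if
 * (presumably) local, -1 on error. Hypothetical helper for
 * illustration only. */
static int is_remote(const char *path)
{
    struct statfs sb;

    if (statfs(path, &sb) != 0)
        return -1;

    switch (sb.f_type) {
    case NFS_SUPER_MAGIC:   /* 0x6969 */
    case SMB_SUPER_MAGIC:   /* 0x517B */
        return 1;           /* remote: refuse direct writes */
    default:
        return 0;           /* assume local */
    }
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    int r = is_remote(path);

    if (r < 0) {
        perror("statfs");
        return 1;
    }
    printf("%s looks %s\n", path, r ? "remote" : "local");
    return 0;
}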
>
>NFSv2 with checksumming disabled might be vulnerable to "data
>integrity issues", but PC-class machines with pre-UDMA hard drives had
>no integrity checks (not even parity!) on hard drive accesses.
>
>A badly configured iSCSI disk that has no checksum protection and is
>mounted as "a local filesystem" can be far less reliable than an NFSv4
>filesystem with checksumming enabled to a reliable dedicated
>fileserver.
>
>What concerns me about people who say that "they want to know about
>local versus remote filesystems" is that invariably they are asking
>the wrong question.  There is this assumption that NFS will always be
>less secure/less able to detect data corruption/slower/whatever, and
>very often that is just plain WRONG.
>
Right or wrong, it's irrelevant :-)

Cheers,
Wol
-- 
Anthony W. Youngman - anthony@thewolery.demon.co.uk

