
List:       evms-devel
Subject:    Re: [Evms-devel] PEs vs run-lists (was: Just when you thought you were done
From:       "Don Mulvey" <dlmulvey () us ! ibm ! com>
Date:       2001-09-24 6:23:02

>> Right.  However, a lot of people choose XFS over reiserfs, because it
>> uses less CPU.  CPU does matter.  performance > IO latency / bandwidth

>That surprises me, but then I haven't done any benchmarks with XFS myself.
>One thing that isn't taken into account with some benchmarks is the usage
>of RAM.  I think by nature of the large size of the XFS code, it will have
>a large impact on CPU cache.  It would be most useful to see comparisons
>of some CPU/memory intensive APPLICATIONS running on XFS, reiserfs, ext3,
>because in the end it isn't the _filesystem_ performance people are really
>interested in, but how fast the actual tools they use are.  It is just that
>it is easiest to compare filesystems with raw filesystem benchmarks, so
>people lose sight of the big picture.

Memory use is a funny thing to keep track of. I like to use working set as
an indication of how memory efficient a process is.  The lower the working
set, the less memory a process needs to do its job. Good for making
comparisons. However, as you ramp up the benchmark you see funny things
happen ... like the file system cache makes some adjustments, a process's
working set gets reduced, and it starts running memory constrained.  It's
vital to know exactly what you're measuring when you run a benchmark.  Did
I just measure file system performance, or was there a bottleneck in
context switching times, or penalties from cache misses or misaligned data
references, or something else?  Unless you look for the "funny things" you
really don't know what you measured when you finish with a benchmark. I
think folks like SPEC do a pretty good job at this, and the bottom line, I
think, is to use some standard benchmark that has been really worked over.
I think you're right ... applications are right on the money for high-level
reporting on performance, but you need to duplicate results and have
2nd/3rd party verification of the numbers.  And to get repeatability you
need benchmarks ... so I think you wind up back looking at some of the
standard benchmarks out there. Sigh ...
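
For reference, here is a minimal sketch of how you might sample a process's
working set on Linux by reading VmRSS out of /proc.  The field name and the
/proc layout are assumptions about the kernel you're running, so treat it
as illustrative rather than portable:

/*
 * Sketch: sample a process's working set (resident set size) by
 * parsing the VmRSS line from /proc/<pid>/status.  Assumes a Linux
 * kernel that exposes that field.
 */
#include <stdio.h>
#include <unistd.h>

/* Return resident set size in kB for the given pid, or -1 on error. */
long working_set_kb(int pid)
{
    char path[64], line[256];
    long kb = -1;
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%d/status", pid);
    f = fopen(path, "r");
    if (!f)
        return -1;

    while (fgets(line, sizeof(line), f)) {
        /* The line looks like "VmRSS:     1234 kB". */
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
            break;
    }
    fclose(f);
    return kb;
}

int main(void)
{
    /* Sample our own working set; call before/after the workload. */
    printf("working set: %ld kB\n", working_set_kb(getpid()));
    return 0;
}

Sampling this before and after the benchmark's ramp-up is one way to catch
the "working set gets reduced" effect instead of just the steady state.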


>> Anyway, the main reason I raised XFS was because they use the
>> terminology "extent", which seems to mean the complete opposite thing
>> to that in LVM terminology.

>You are correct.  In many cases with filesystems, an "extent" refers to a
>variable length sequence of consecutive data.  This is opposed to a "block",
>which is a fixed size piece of consecutive data.  Under AIX, these are
>(confusingly) called "partitions" (e.g. Logical Partitions/Physical
>Partitions are the same thing as Linux/HPUX Logical Extents/Physical
>Extents).

>When we first started discussing LVMS/EVMS, I ended up calling these things
>"logical blocks" (referring to fixed-size chunks of disk space), because
>this is a better description of what they are.  I couldn't think of an
>unambiguous name for an "extent" (meaning a variable length of disk space).
>If you could think up a good name for this, it would help the discussions.
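
To keep the two meanings straight, here is a rough sketch in C of the
difference.  The names are made up for illustration and don't come from
any real filesystem or LVM source:

#include <stdint.h>

/* Fixed-size unit: everything is implied by a single number.  An LVM
 * "physical extent" (AIX "physical partition") works like this; only
 * the index needs storing, because the length is a volume-wide
 * constant (index * fixed unit size = offset). */
typedef uint64_t block_no_t;

/* Variable-length unit: a filesystem-style "extent" must carry both a
 * starting position and a length, since each run can differ in size. */
struct extent {
    uint64_t start;     /* first sector (or block) of the run */
    uint64_t length;    /* number of contiguous sectors in the run */
};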

-Don

