
List:       beowulf
Subject:    Re: [Beowulf] RE: programming multicore clusters
From:       richard.walsh () comcast ! net
Date:       2007-06-16 13:44:10
Message-ID: 061620071344.12885.4673E92A00060B8F000032552200750438089C040E99D20B9D0E080C079D () comcast ! net


From: Joe Landman <landman@scalableinformatics.com>
> 
> 
> Greg Lindahl wrote:
> > On Fri, Jun 15, 2007 at 09:57:08AM -0400, Joe Landman wrote:
> > 
> > > First, shared memory is nice and simple as a programming model. 
> > 
> > Uhuh. You know, there are some studies going where students learning
> > parallel programming do the same algorithm with MPI and with shared
> > memory. Would you like to make a bet as to whether they found shared
> > memory much easier?
> 
> I don't know which "studies" you are referring to.  Having taught 
> multiple graduate level courses on MPI/OpenMP programming, I can tell 
> you what I observed from my students.  They largely just "get" OpenMP. 
> It won't get them great overall performance, as there aren't many large 
> multiprocessor SMPs around for them to work on.  Be that as it may, they 
> had little problem developing good code.  Compare this to MPI, and these 
> same exact students had a difficult time of it.

We did a study at the AHPCRC attempting to measure the "ease of programming"
of MPI versus UPC/CAF.  Having observed how it was done, the mix of experience
in the group studied, and the complexity of measuring "ease of programming,"
I would say the conclusions drawn were of nearly no value.

Explicitness (MPI) tends to force one to think more carefully about the potential
pitfalls and complexities of the coding problem (in some cases delivering better
code), while slowing you down in the short run.  Implicitness (UPC, CAF, OpenMP)
tends to speed the initial development of the code, while allowing more novice
programmers to make both parallel programming and performance errors.  This
tendency is reflected in the design ideas behind UPC (more implicit shared memory
references) and CAF (more explicit shared memory references).  While both are
small footprint, I tend to like the CAF model better, which reminds the programmer
of every remote reference with square brackets at the end of its co-array
expressions (subtle is CAF, but malicious it is not ... ;-) ...)
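To make that contrast concrete, here is a minimal co-array sketch of my own (an
illustration, not taken from any code discussed in the thread).  Each image owns
a copy of the array, and the one square-bracketed subscript is the only remote
reference in the fragment, so the communication is visible at a glance:

```fortran
program halo_sketch
  implicit none
  real    :: a(100)[*]        ! co-array: one copy of a(100) per image
  integer :: me, left

  me   = this_image()
  left = merge(num_images(), me - 1, me == 1)  ! periodic left neighbor

  a = real(me)                ! fill the local copy
  sync all                    ! make neighbors' data ready

  a(1) = a(100)[left]         ! [left] flags the one remote fetch
  sync all
end program halo_sketch
```

In UPC the analogous fetch through a shared array looks like an ordinary local
subscript, which is exactly the implicitness described above: convenient, but
easy for a novice to mistake for a cheap local access.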

I might add a point beyond ease-of-use related to granularity ... coding some
algorithms that have a natural fine-grained-ness can be prevented entirely by the
cumbersomeness of explicit message-passing models.  The algorithmic flexibility
provided by small-footprint shared memory and PGAS models can be a liberating
experience for the programmer, just as a very good symbolism can be in
mathematics.  Of course, across the spectrum of commodity resources OpenMP does
not scale, and UPC and CAF do not yet equal the performance of well-written MPI
code, though it would seem that much MPI code is not that "well-written".

As to how parallel programming will evolve in this context, I think that my
signature quote below is relevant.

Regards,

rbw

--

"Making predictions is hard, especially about the future."

Niels Bohr

--

Richard Walsh
Thrashing River Consulting--
5605 Alameda St.
Shoreview, MN 55126

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf


