
List:       paraview
Subject:    Re: [Paraview] [paraview] using with python, mpi pvserver and scalar opacity (3.7)
From:       Berk Geveci <berk.geveci@kitware.com>
Date:       2009-06-25 12:06:31
Message-ID: 45d654b0906250506gf9e21a0pc8e749656817c186@mail.gmail.com

Upon closer inspection, you are right. This is not intentional, though. It
must be a bug that got introduced into the pipeline (I don't know when). The
current behavior is for all processes to read all of the data and then
redistribute it for load balancing, so the main overhead is I/O. Algorithms
after the reader should still scale close to linearly. I will fix this bug
soon.

-berk
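
If you want to see the distribution from the client side without patching the
reader, here is a minimal pvpython sketch along the lines of the Process Id
Scalars / Threshold suggestion in the quoted thread below. The proxy names
(LegacyVTKReader, ProcessIdScalars, Threshold), the 'ProcessId' array name and
the Threshold properties are assumptions based on the 3.x Python interface; the
host/port and file name are taken from the example below. Treat it as a sketch
rather than a tested recipe.

from paraview.simple import *

# Connect to the servers started with: mpiexec -np 2 bin/pvserver --server-port=22222
Connect('localhost', 22222)

# The legacy reader under discussion; file name from the test case below.
reader = LegacyVTKReader(FileNames=['CylinderQuadratic.vtk'])

# Tag every point with the rank that ends up owning it.
pid = ProcessIdScalars(Input=reader)

# Extract only the piece owned by rank 0; change the range to look at other ranks.
piece0 = Threshold(Input=pid)
piece0.Scalars = ['POINTS', 'ProcessId']   # array produced by ProcessIdScalars
piece0.ThresholdRange = [0, 0]

Show(piece0)
Render()

Coloring the ProcessIdScalars output by 'ProcessId' instead of thresholding
gives the same information in a single view.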
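
Along the same lines, here is a sketch of the D3 suggestion from the quoted
thread: D3 spatially repartitions and load balances an unstructured mesh
across the pvserver ranks regardless of how the reader distributed it. Again,
the proxy names (D3, ProcessIdScalars) are assumptions based on the
paraview.simple interface.

from paraview.simple import *

reader = LegacyVTKReader(FileNames=['CylinderQuadratic.vtk'])

# Repartition and load balance across the pvserver ranks.
d3 = D3(Input=reader)

# Add a per-point rank array; color by 'ProcessId' in the client to see the
# decomposition D3 produced.
pid = ProcessIdScalars(Input=d3)

Show(pid)
Render()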

On Thu, Jun 25, 2009 at 3:32 AM, burlen <burlen.loring@gmail.com> wrote:
> Wait a sec... I am not sure I agree with you after all. Upon closer
> inspection it looks to me like all of the processes do end up with a copy of
> the entire dataset in memory when using legacy readers. At the very least
> between the read and the automagic PV load balancing you pointed out,
> possibly longer, depending on how long the pipeline holds the reader output.
> 
> It looks as though legacy files are read by PV using vtkPDataSetReader. I
> put a couple of print statements that show the process Id and the number of
> points read on each process during RequestData. See the attached patch. When
> I run multiple pvservers, all processes output the same counts as a serial run.
> 
> For example, using "CylinderQuadratic.vtk" as a test case I get:
> 
> $mpiexec -np 2 bin/pvserver --server-port=22222
> Listen on port: 22222
> Waiting for client...
> Waiting for server...
> Client connected.
> 1 Read 2814
> 0 Read 2814
> 
> while with just the builtin I get:
> 
> 0 Read 2814
> 
> > Berk Geveci  berk.geveci at kitware.com
> > Tue Jun 16 15:27:58 EDT 2009
> > ------------------------------------------------------------------------
> > You can apply Process Id Scalars and then color by ProcessId or use
> > Threshold to extract a piece.
> > 
> > The implementation is in vtkSMOutputPort::InsertExtractPiecesIfNecessary()
> > 
> > -berk
> > 
> > On Tue, Jun 16, 2009 at 3:09 PM, burlen <burlen.loring at gmail.com> wrote:
> > > I stand corrected. I have heard so many different claims in terms of how
> > > PV treats serial readers, it's hard to keep them all straight.
> > >
> > > Where in ParaView does the automatic domain decomposition and load
> > > balancing occur? Is there a way to take a look at the decomposition from
> > > within PV?
> > >
> > > Berk Geveci wrote:
> > > >
> > > > This is not correct. The reader is indeed serial. However, ParaView
> > > > redistributes the data after reading it. So the resulting mesh will be
> > > > load balanced across processors, but there will be an I/O and/or
> > > > communication overhead.
> > > >
> > > > Note that for unstructured meshes, the redistribution is not
> > > > necessarily based on spatial partitioning. If you want a nicely
> > > > partitioned and load balanced mesh, use D3.
> > > >
> > > > -berk
> > > >
> > > > On Fri, Jun 5, 2009 at 11:50 AM, burlen <burlen.loring at gmail.com> wrote:
> > > > >
> > > > > Perhaps unrelated, but I see you are using the legacy reader. As far as I
> > > > > know this reader is not parallel, so if you use it while running in
> > > > > parallel you'll end up with the entire data set loaded in all the
> > > > > processes. Probably not what you had in mind.
> > > > >
> > > > > BOUSSOIR Jonathan 167706 wrote:
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I am using Linux and the latest CVS version of ParaView (3.7).
> > > > > > I have two Python scripts which animate a cylinder, either with or
> > > > > > without scalar opacity.
> > > > > > When I use "pvserver" on one CPU, both work well.
> > > > > > If I use "mpirun -np 4 pvserver" to run on four CPUs, I see a color
> > > > > > problem when I use the script with scalar opacity.
> > > > > > I don't understand why.
> > > > > >
> > > > > > I have included the scripts in my email.
> > > > > > Thanks in advance for your kind help.
> > > > > >
> > > > > > Regards, Jona
_______________________________________________
Powered by www.kitware.com

Visit other Kitware open-source projects at
http://www.kitware.com/opensource/opensource.html

Please keep messages on-topic and check the ParaView Wiki at:
http://paraview.org/Wiki/ParaView

Follow this link to subscribe/unsubscribe:
http://www.paraview.org/mailman/listinfo/paraview

