List:       omniorb-list
Subject:    Re: [omniORB] CORBA performance on InfiniBand network
From:       "Soundararajan, Krishna" <Krishna.Soundararajan@kla-tencor.com>
Date:       2015-08-04 14:09:21
Message-ID: BN3PR0301MB1188C345D66E010F0C10BEA0D2760@BN3PR0301MB1188.namprd03.prod.outlook.com

Duncan,
Thanks for the quick reply.

As you pointed out, I looked back at the old data and observed that there is no
throughput improvement when using more than 5 threads. This is because of the
maxGIOPConnectionPerServer parameter. I will tweak the parameter and let you know
the results.

Thanks
Krishna



-----Original Message-----
From: Duncan Grisby [mailto:duncan@grisby.org] 
Sent: Tuesday, August 04, 2015 4:08 PM
To: Soundararajan, Krishna
Cc: omniorb-list@omniorb-support.com
Subject: Re: [omniORB] CORBA performance on InfiniBand network

On Mon, 2015-08-03 at 05:01 +0000, Soundararajan, Krishna wrote:

> Two variables were used. Each remote call is repeated until the total
> data sent is 8 GB:
> a) Remote calls are made with different argument sizes, from 1 kB,
>    2 kB, 4 kB up to 1 MB.
> b) The number of client threads is varied from 1 to 32 (each thread
>    sending totalData/NumOfThreads).
> 
> The surprising result is that CORBA performance on 10 Gb Ethernet is
> better than on InfiniBand.

omniORB doesn't know anything about your underlying interconnect technology. It just
uses TCP sockets.

One possible issue with a multithreaded client is that by default omniORB makes at
most 5 concurrent calls to a single server. With 32 threads, most of them will be
waiting for the others for most of the time. Look at the maxGIOPConnectionPerServer
parameter:

http://www.omniorb.net/omni42/omniORB/omniORB004.html#toc23

Try setting the parameter to 32 (or higher) and see if throughput improves.
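
For reference, the two usual ways to set it are a line in the omniORB
configuration file or an -ORB option on the command line (the client binary
name below is just a placeholder):

  # In omniORB.cfg (the file named by OMNIORB_CONFIG, /etc/omniORB.cfg by
  # default on Unix): allow up to 32 concurrent connections per server, so
  # all 32 client threads can have calls in flight at once.
  maxGIOPConnectionPerServer = 32

  # Or equivalently on the client's command line:
  ./bulk_client -ORBmaxGIOPConnectionPerServer 32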

[...]
> With these results, I have the following questions.
> 1. I read in several old articles on the internet that the ORB kills
> throughput because of inefficient memory copies and
> marshalling/unmarshalling.

Those old articles are generally written by people with some sort of agenda, to push
their way of doing things in preference to CORBA.

omniORB tries really hard to avoid unnecessary overheads. If you are sending big bulk
transfers, and the data types are suitable, it transmits the data directly out of the
application memory, rather than going through an intermediate buffer. In cases where
it can't do that, it still tries to manage its buffers as efficiently as possible.
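
To give a concrete (hypothetical) example of a "suitable" data type: an
operation taking a plain octet sequence, since octets need no per-element
marshalling:

  // bulk.idl -- hypothetical bulk-transfer interface, not from your code
  typedef sequence<octet> Buffer;

  interface BulkSink {
    // Octet sequences can be transmitted directly from the application
    // buffer, with no per-element conversion.
    void put(in Buffer data);
  };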

> On 10 Gb Ethernet and 1 Gb Ethernet, I do see almost 90% network
> utilization (CORBA throughput compared to the raw Ethernet link). But
> the IB card gives less than 25% network utilization. What is causing
> this overhead?

How does the timing compare if you do just one single really big call, with an
application buffer containing a sequence<octet>?
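
Something along these lines, say (a rough sketch in C++ against the
hypothetical BulkSink interface above; it relies on the standard IDL-to-C++
sequence constructor that borrows an existing buffer instead of copying it):

  // bulk_client.cc -- one single big call from an existing buffer
  #include <omniORB4/CORBA.h>
  #include "bulk.hh"   // stubs generated by omniidl from bulk.idl

  int main(int argc, char** argv)
  {
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

    // argv[1] is assumed to be the server's stringified IOR.
    CORBA::Object_var obj  = orb->string_to_object(argv[1]);
    BulkSink_var      sink = BulkSink::_narrow(obj);

    // One large payload. Anything bigger than the default giopMaxMsgSize
    // (2 MB) needs the limit raised on both client and server, e.g. with
    // -ORBgiopMaxMsgSize 1073741824.
    const CORBA::ULong len = 512 * 1024 * 1024;
    CORBA::Octet* buf = Buffer::allocbuf(len);

    // Wrap the buffer without copying it. release=0 means the sequence
    // does not take ownership, so buf is not freed when 'data' goes away.
    Buffer data(len, len, buf, 0);

    sink->put(data);   // a single really big call

    Buffer::freebuf(buf);
    orb->destroy();
    return 0;
  }

Timing that against the same transfer split into many small calls should show
how much of the gap is per-call overhead rather than marshalling cost.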

> Though CORBA is meant to be used for remote method calls, we
> are amazed by how well it serializes. Can we use it as an
> alternative to raw socket transfer with manual serializing
> code? Is CORBA suitable for data transfer on high-bandwidth
> networks?

As I say, omniORB's serialising code is quite finely tuned. You certainly could write
special-case code that was more efficient, but it is rarely worth the effort. CORBA
(at least with omniORB) is definitely suitable for high bandwidth data transfer.

Cheers,

Duncan.

-- 
 -- Duncan Grisby         --
  -- duncan@grisby.org     --
   -- http://www.grisby.org --


_______________________________________________
omniORB-list mailing list
omniORB-list@omniorb-support.com
http://www.omniorb-support.com/mailman/listinfo/omniorb-list

