
List:       helix-datatype-dev
Subject:    [datatype-dev] RE: XPS buffer management redesign
From:       "Eric Hyche" <ehyche () real ! com>
Date:       2008-04-16 12:46:17
Message-ID: 003c01c89fbf$ddfdd930$db68a8c0 () EHYCHED620


This change looks good.

=============================================
Eric Hyche (ehyche@real.com)
Technical Lead
RealNetworks, Inc.  

> -----Original Message-----
> From: john.wei@nokia.com [mailto:john.wei@nokia.com] 
> Sent: Tuesday, April 15, 2008 12:20 PM
> To: datatype-dev@helixcommunity.org
> Cc: gwright@real.com; ehyche@real.com
> Subject: CR: XPS buffer management redesign
> 
> 
> Nokia submits this code under the terms of a commercial contribution
> agreement with RealNetworks, and I am authorized to contribute this
> code under said agreement.
> 
> Modified by:  john.wei@nokia.com
> 
> Reviewed by:
> 
> Date: 28-Mar-2008
> 
> Project: SymbianMmf_Rel
> 
> TSW: ESNG-7BCPHC, EOIE-7BR2VR
> 
> Synopsis: XPS buffer management redesign and implementation
> 
> The XPS server has multiple stream queues. Helix fetches packets from
> these queues in one thread, while the XPS client pushes packets to
> these queues in another thread. When a queue is full, the XPS server
> informs the XPS client with the error code KErrOverflow. When the
> queue is no longer full, the XPS server tells the XPS client to resume
> pushing packets by calling RestorePacketSupply(). This architecture
> has the following problems:
> 
> 1) The XPS client sometimes prefers to set the maximum queue size in
> bytes rather than as a maximum number of packets.
> 
> 2) After a queue becomes full, the XPS server calls
> RestorePacketSupply() on the client as soon as Helix fetches a single
> packet and the queue is no longer full. The client quickly refills the
> queue, which produces another KErrOverflow; Helix fetches one more
> packet, which produces another RestorePacketSupply() call. The client
> may reasonably complain about the large number of
> KErrOverflow/RestorePacketSupply() pairs generated in this scenario.
> 
> 3) The biggest problem is that this architecture can result in
> deadlock. For example, suppose the XPS server has two queues, E and F;
> Queue F is full, Queue E is empty, Helix is fetching packets from
> Queue E, and the XPS client is waiting to push packets to Queue F. The
> XPS client must push the packets in its internal buffer to the XPS
> server sequentially. It knows that Queue E is empty, but it cannot
> push packets to Queue E because F-type packets destined for Queue F
> sit at the front of its internal buffer. In this situation, Helix
> cannot get packets from the XPS server because Queue E is empty, and
> the XPS client cannot push packets to the XPS server because Queue F
> is full. The two sides are deadlocked.
> 
> This CR proposes the following changes to XPS buffer management.
> 
> 1) A new API is introduced: "TInt ConfigStreamBySize(TUint unStreamId,
> TUint32 unQueueSizeInBytes)", in addition to the existing API "TInt
> ConfigStream(TUint unStreamId, TUint unNumSlots)". The new API allows
> the client to cap a queue by its memory consumption in bytes. Both the
> new and the existing APIs are supported; if neither is used, the XPS
> server uses a default maximum memory size of 1 MB.
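> 
> For illustration, a minimal client-side sketch of the two
> configuration paths, assuming both methods are called on the
> CXPSPacketSink instance; the stream IDs and sizes are made-up example
> values, not taken from the CR:
> 
>     // Existing API: cap Queue 1 at 64 packet slots.
>     TInt err = iPacketSink->ConfigStream(1 /*unStreamId*/, 64 /*unNumSlots*/);
> 
>     // New API: cap Queue 2 by a 512 KB byte budget instead.
>     err = iPacketSink->ConfigStreamBySize(2 /*unStreamId*/,
>                                           512 * 1024 /*unQueueSizeInBytes*/);
> 
>     // Streams configured by neither call fall back to the 1 MB default.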
> 
> 2) A new API is introduced: "TInt Enqueue(TUint unStreamNumber,
> CXPSPacket* pPacket)", in addition to the existing API "TInt
> Enqueue(TUint unStreamId, const TRtpRecvHeader& aHeaderInfo, const
> TDesC8& aPayloadData)". Some clients prefer to hand a CXPSPacket
> directly to XPS.
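> 
> A client-side sketch of the two enqueue paths, again assuming calls on
> the CXPSPacketSink instance; payloadDes is a TDesC8 holding the RTP
> payload, and GetNextPreparedPacketL() is a hypothetical client helper,
> since the CR does not show how a CXPSPacket is constructed:
> 
>     // Existing API: header and payload passed separately.
>     TRtpRecvHeader hdr;   // RTP header fields filled in by the client
>     TInt err = iPacketSink->Enqueue(1 /*unStreamId*/, hdr, payloadDes);
> 
>     // New API: hand over an already-built packet object.
>     CXPSPacket* pPacket = GetNextPreparedPacketL();   // hypothetical helper
>     err = iPacketSink->Enqueue(1 /*unStreamNumber*/, pPacket);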
> 
> 3) When a queue is full, the XPS server still returns KErrOverflow to
> the XPS client. However, the server now calls RestorePacketSupply() on
> the client only after the queue's usage has dropped below a default
> percentage, currently set at 50%.
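> 
> The restore condition, sketched in rough Symbian-style C++; the member
> and constant names below are illustrative only and are not taken from
> the actual CXPSPktQue/CXPSStream code:
> 
>     // Illustrative only: checked on the Helix (dequeue) side after a
>     // packet is removed from a queue that previously overflowed.
>     const TUint KRestoreThresholdPercent = 50;   // default restore level
> 
>     if (iOverflowed &&
>         (iQueue->BytesQueued() * 100) <
>             (iMaxQueueSizeInBytes * KRestoreThresholdPercent))
>         {
>         iOverflowed = EFalse;                       // leave overflow state
>         iObserver->RestorePacketSupply(iStreamId);  // tell client to resume
>         }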
> 
> 4) In the deadlock scenario described above, Helix is fetching packets
> from Queue E. The XPS server drops one packet from Queue F and asks
> the client to replenish the queue. If Queue E is still empty, the XPS
> server drops another packet from Queue F and again asks the client to
> replenish the queue. This is repeated until Queue E is no longer
> empty.
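> 
> The intended control flow, sketched in rough Symbian-style C++; none
> of the names below are taken from the CR:
> 
>     // Illustrative only: run when Helix finds the queue it is reading
>     // (Queue E) empty while another queue (Queue F) is overflowed.
>     if (iOverflowAutoManaged && queueE.IsEmpty() && queueF.IsOverflowed())
>         {
>         queueF.DropOldestPacket();                          // make room in Queue F
>         iObserver->RestorePacketSupply(queueF.StreamId());  // ask client to refill
>         // Repeated on each empty read of Queue E until packets arrive.
>         }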
> 
> 5) A new API is introduced: "TBool SetOverflowAutoManage(TBool
> bIsOverflowAutoManaged)". If bIsOverflowAutoManaged is ETrue, XPS
> performs the automatic queue overflow control described in (4) above.
> If bIsOverflowAutoManaged is EFalse, XPS performs no overflow control.
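> 
> Usage is a single call, again assumed to be made on the CXPSPacketSink
> instance:
> 
>     // Opt in to the automatic drop-and-replenish behaviour from (4);
>     // pass EFalse to keep overflow handling entirely with the client.
>     iPacketSink->SetOverflowAutoManage(ETrue);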
> 
> 
> Files Modified:
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSPacketSink.cpp,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSPacketSink.h,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSPktQue.cpp,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSPktQue.h,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSSession.cpp,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSSession.h,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSStream.cpp,v
> RCS file: /cvsroot/datatype/xps/PacketSink/CXPSStream.h,v
> 
> Image Size and Heap Use impact: minor
> 
> Module Release testing (STIF) :  N/A
> 
> Test case(s) Added  :  no
> 
> Memory leak check performed : Yes.  No new leaks introduced.  
> 
> Platforms and Profiles Build Verified: helix-client-s60-32-mmf-mdf-arm
> 
> Platforms and Profiles Functionality verified: armv5, winscw 
> 
> Branch: Head & 221CayS
> 


_______________________________________________
Datatype-dev mailing list
Datatype-dev@helixcommunity.org
http://lists.helixcommunity.org/mailman/listinfo/datatype-dev