List: helix-server-cvs
Subject: [Server-cvs] engine/dataflow basicpcktflow.cpp, 1.48,
From: dcollins@helixcommunity.org
Date: 2010-12-10 20:25:01
Message-ID: 201012102024.oBAKOu7B006012@mailer.progressive-comp.com
Update of /cvsroot/server/engine/dataflow
In directory cvs01.internal.helixcommunity.org:/tmp/cvs-serv17744/server/engine/dataflow
Modified Files:
basicpcktflow.cpp flob_wrap.cpp ppm.cpp
Log Message:
Synopsis
========
Fixes a Win64 timeval-related issue causing high CPU load
Branches: HEAD
Reviewer: Chytanya
Description
===========
The Windows SDK unfortunately uses a 32-bit definition of struct timeval
on both 32-bit and 64-bit Windows, unlike the other 64-bit OSes that
we support. Because of this inconsistency we were calling into
select() with a timeval that Windows interpreted as a timeout of
zero. So select() would immediately pop back out, do a whole lot of
nothing, and try again. This resulted in a pegged server CPU even
with just a few players attached.
We have Timeval, HXTimeval, HXTime, struct timeval... not to mention
time_t and a few others. The attached solution updates them in a
couple of places to be more consistent with the Windows struct timeval
on Windows, while using 64-bit values on 64-bit Linux and Solaris.
One common headache of the Windows definition that we now
have to contend with is that you cannot legitimately do this:
struct tm tm;
HXTime now;
gettimeofday(&now, NULL);
hx_localtime_r((const time_t *)&now.tv_sec, &tm);
This is because tv_sec may be only 32 bits wide, while hx_localtime_r()
expects a *pointer* to a full 64-bit time_t value.
The correct approach is to use an intermediate time_t variable:
struct tm tm;
HXTime now;
gettimeofday(&now, NULL);
time_t tNow = now.tv_sec;
hx_localtime_r((const time_t *)&tNow, &tm);
While doing this I updated some of the server's related strftime()
calls to use %Y for the year rather than %y, so we print the full
four-digit year rather than just the last two digits.
I noticed a seek offset that wasn't updated yet in flob_wrap.cpp.
I also cleaned up some compiler warnings in server/engine/core.
Files Affected
==============
common/dbgtool/platform/default/debug.cpp
common/dbgtool/servertrace.cpp
common/include/hxengin.h
common/system/pub/hxtime.h
common/util/pub/timeval.h
filesystem/http/pub/http_debug.h
server/engine/context/errhand.cpp
server/engine/core/bcastfilter.cpp
server/engine/core/bcastmgr.cpp
server/engine/core/_main.cpp
server/engine/core/malloc.cpp
server/engine/core/mem_cache.cpp
server/engine/dataflow/basicpcktflow.cpp
server/engine/dataflow/flob_wrap.cpp
server/engine/dataflow/ppm.cpp
server/engine/dataflow/pub/flob_wrap_callbacks.h
server/engine/dataflow/pub/flob_wrap.h
server/license/slicensepln/server_license.cpp
server/log/tmplgpln/base_log.cpp
server-restricted/cdist/cdistpln/cdiststats.cpp
server-restricted/proxy/proxylib/prxctxt.cpp
server_rn/datatype/mpeg2ts/pub/streamhandler.h
server_rn/datatype/mpeg2ts/streamhandler.cpp
server_rn/datatype/mpeg2ts/streamsmap.cpp
server_rn/snmp/master/snmp++/src/msec.cpp
server_rn/snmp/snmppp/msec.cpp
Testing Performed
=================
Unit Tests:
- N/A
Integration Tests:
- Ran a 64-bit release build of the server and proxy in the Win2k8 uptime rig,
verifying that the CPU load was not excessive like before.
Leak Tests:
- N/A; more uptime runs are planned post-checkin to look for leaks
Performance Tests:
- VTune profiling was used to confirm that the problem was as described above.
Platforms Tested: win-x86_64-vc10
Builds Verified: win-x86_64-vc10, linux-rhel5-x86_64
QA Hints
========
- N/A
Index: flob_wrap.cpp
===================================================================
RCS file: /cvsroot/server/engine/dataflow/flob_wrap.cpp,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -d -r1.6 -r1.7
--- flob_wrap.cpp 16 Nov 2010 22:25:02 -0000 1.6
+++ flob_wrap.cpp 10 Dec 2010 20:24:58 -0000 1.7
@@ -725,15 +725,15 @@
void
SeekFileCallback::func(Process* p)
{
- m_fow->m_pFileObject->Seek(m_ulOffset, m_bRelative);
+ m_fow->m_pFileObject->Seek(m_nOffset, m_bRelative);
delete this;
}
STDMETHODIMP
-FileObjectWrapper::Seek(HX_OFF_T ulOffset,
+FileObjectWrapper::Seek(HX_OFF_T nOffset,
BOOL bRelative)
{
- SeekFileCallback* sfcb = new SeekFileCallback(this, ulOffset, bRelative);
+ SeekFileCallback* sfcb = new SeekFileCallback(this, nOffset, bRelative);
m_myproc->pc->dispatchq->send(m_myproc, sfcb, m_fs_proc->procnum());
return HXR_OK;
}
Index: basicpcktflow.cpp
===================================================================
RCS file: /cvsroot/server/engine/dataflow/basicpcktflow.cpp,v
retrieving revision 1.48
retrieving revision 1.49
diff -u -d -r1.48 -r1.49
--- basicpcktflow.cpp 18 May 2010 17:57:29 -0000 1.48
+++ basicpcktflow.cpp 10 Dec 2010 20:24:58 -0000 1.49
@@ -1162,7 +1162,7 @@
INT64 ulTimeDiffms = t.tv_sec * 1000 + t.tv_usec / 1000;
ulTimeDiffms = (INT64) (ulTimeDiffms * fOldRatio);
m_pStreams[i].m_ulTSDMark += (UINT32)ulTimeDiffms;
- m_pStreams[i].m_ulLastScaledPacketTime += ulTimeDiffms;
+ m_pStreams[i].m_ulLastScaledPacketTime += (UINT32)ulTimeDiffms;
}
}
}
Index: ppm.cpp
===================================================================
RCS file: /cvsroot/server/engine/dataflow/ppm.cpp,v
retrieving revision 1.141
retrieving revision 1.142
diff -u -d -r1.141 -r1.142
--- ppm.cpp 31 Mar 2010 00:01:37 -0000 1.141
+++ ppm.cpp 10 Dec 2010 20:24:58 -0000 1.142
@@ -1743,11 +1743,11 @@
if (m_pASMSource)
{
- for (INT32 lRule = 0; lRule < pSD->m_lNumRules; lRule++)
+ for (UINT16 uRule = 0; uRule < pSD->m_lNumRules; uRule++)
{
- if (pSD->m_pRules[lRule].m_bRuleOn)
+ if (pSD->m_pRules[uRule].m_bRuleOn)
{
- m_pASMSource->Unsubscribe(i, lRule);
+ m_pASMSource->Unsubscribe(i, uRule);
}
}
}
@@ -3448,8 +3448,9 @@
char szTime[128];
Timeval tNow = m_pProc->pc->engine->now;
struct tm localTime;
- hx_localtime_r(&tNow.tv_sec, &localTime);
- strftime(szTime, 128, "%d-%b-%y %H:%M:%S", &localTime);
+ time_t tSec = tNow.tv_sec;
+ hx_localtime_r(&tSec, &localTime);
+ strftime(szTime, 128, "%d-%b-%Y %H:%M:%S", &localTime);
fprintf(stderr, "%s.%3d S=%s QT=%f QPN=%d\n",
szTime, tNow.tv_usec/1000,
@@ -4579,8 +4580,9 @@
char szTime[128];
Timeval tNow = m_pProc->pc->engine->now;
struct tm localTime;
- hx_localtime_r(&tNow.tv_sec, &localTime);
- strftime(szTime, 128, "%d-%b-%y %H:%M:%S", &localTime);
+ time_t tSec = tNow.tv_sec;
+ hx_localtime_r(&tSec, &localTime);
+ strftime(szTime, 128, "%d-%b-%Y %H:%M:%S", &localTime);
fprintf(stderr, "%s.%3d RSDLive(mobile) S=%s DM=%s FKT=%f\n",
szTime, tNow.tv_usec/1000,
@@ -5116,7 +5118,7 @@
if(ulDR)
{
- fEDT = (float)m_ulPreData*8.0/(float)ulDR;
+ fEDT = (float)(m_ulPreData*8.0/(float)ulDR);
}
snprintf(szDR, sizeof(szDR), "%ld", ulDR);
@@ -5141,12 +5143,13 @@
szSignalType = "NONE";
}
- float fADT = float(HX_GET_TICKCOUNT() - m_ulRSDDebugTS)/1000.0;
+ float fADT = (float)(float(HX_GET_TICKCOUNT() - m_ulRSDDebugTS)/1000.0);
char szTime[128];
Timeval tNow = m_pProc->pc->engine->now;
struct tm localTime;
- hx_localtime_r(&tNow.tv_sec, &localTime);
- strftime(szTime, sizeof(szTime), "%d-%b-%y %H:%M:%S", &localTime);
+ time_t tSec = tNow.tv_sec;
+ hx_localtime_r(&tSec, &localTime);
+ strftime(szTime, sizeof(szTime), "%d-%b-%Y %H:%M:%S", &localTime);
fprintf(stderr, "%s.%3d RSDLive S=%s St=%u R=%u MR=%u PR=%u PD=%ub DR=<%s>:%s"
" EDT=%s ADT=%f CTS=%d\n",
@@ -5224,7 +5227,7 @@
PPMStreamData* pSD = m_pStreamData + m_unKeyframeStream;
if(pSD)
{
- fPrerollInSec = (float)pSD->m_ulPreroll/1000.0;
+ fPrerollInSec = (float)(pSD->m_ulPreroll/1000.0);
}
for (int i = 0; i < m_unStreamCount; i++)
@@ -6670,7 +6673,7 @@
{
SendPacketQueue(pTransport);
}
- m_ulPacketQueueSize += ulPacketSize;
+ m_ulPacketQueueSize += (UINT16)ulPacketSize;
m_pPacketQueue[m_ucPacketQueuePos++] = pPacket;
}
@@ -7099,7 +7102,7 @@
(void **)&m_pSessionStats2);
}
- UINT32 ulBuffLen = strlen(szSessionId) + 1;
+ UINT32 ulBuffLen = (UINT32)strlen(szSessionId) + 1;
m_pPlayerSessionId = new ServerBuffer(TRUE);
m_pPlayerSessionId->Set((UCHAR*)szSessionId, ulBuffLen);
}
_______________________________________________
Server-cvs mailing list
Server-cvs@helixcommunity.org
http://lists.helixcommunity.org/mailman/listinfo/server-cvs