
List:       freediameter-help
Subject:    [Help] Congestion control
From:       zackhasit () gmail ! com (zack Hasit)
Date:       2013-04-22 15:45:24
Message-ID: CAD0_ocFZHxOJ1kGmBrY0cgSxFU+ASQacYS8QZijx5DocgB-i5A () mail ! gmail ! com

>
> Hi Sebastian,
>
> The intention is to find out the time spent in the FD stack for the request
> (as you describe) and for the response (when I hand the message back to FD to
> send to the network). For the request side it works fine, as you describe. But
> when a message is handed back to the stack by my custom code (to deliver back
> to the network), how can I find out the time it spends in the stack? I can
> then add the two to find out how much total time is spent in the FD stack.
> That way we can know how much time is attributable to the FD stack vs. my
> custom code.
>
> Thanks.
>
>
> On Mon, Apr 22, 2013 at 10:20 AM, Sebastien Decugis <
> sdecugis at freediameter.net> wrote:
>
>>  Hi Zack,
>>
>> Indeed, you can access a message only while it is being handled by the
>> framework.
>> Can you clarify what you are trying to do?
>> The initial use-case from this mail was to get the timing of message
>> reception when your callback receives the message, right?
>> To do that, you can use fd_msg_ts_get_recv on your message and compare it
>> with the current time.
>> The other functions (to set the time and to get_sent_on) are useful only
>> for the framework itself, to measure the time to get an answer and similar
>> tasks. You can display this information by using the
>> --enable_msg_log=TIMING command-line option.
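>>
>> For illustration, here is a minimal sketch of that receive-side measurement
>> in a dispatch callback (the callback name is made up, the exact callback
>> signature may differ between versions, and I am assuming fd_msg_ts_get_recv
>> fills a struct timespec taken with CLOCK_REALTIME):
>>
>> #include <freeDiameter/extension.h>
>> #include <time.h>
>>
>> /* Hypothetical dispatch callback: measure how long the request spent in the
>>  * framework between reception on the socket and this callback. */
>> static int my_ccr_cb(struct msg ** msg, struct avp * avp, struct session * sess,
>>                      void * opaque, enum disp_action * act)
>> {
>>         struct timespec rcv, now;
>>         long delta_us;
>>
>>         /* Timestamp recorded by the framework when the message was received */
>>         CHECK_FCT( fd_msg_ts_get_recv(*msg, &rcv) );
>>         CHECK_SYS( clock_gettime(CLOCK_REALTIME, &now) );
>>
>>         delta_us = (now.tv_sec - rcv.tv_sec) * 1000000
>>                  + (now.tv_nsec - rcv.tv_nsec) / 1000;
>>         TRACE_DEBUG(INFO, "Time in freeDiameter before dispatch: %ld us", delta_us);
>>
>>         /* ... custom processing, build the answer, fd_msg_send(), ... */
>>         return 0;
>> }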
>>
>> I hope this clarifies,
>> Best regards,
>> Sebastien
>>
>>
>> On 2013/04/22 21:07, zack Hasit wrote:
>>
>>  Hi Sebastian,
>>
>> It looks like the message is deleted inside freeDiameter after it is sent,
>> and when we refer to the message using fd_msg_ts_get_sent the message is
>> considered invalid? Any example of how to get the timestamp at which the
>> message was sent back to the network?
>>
>>
>>
>> Warning: Invalid parameter received in '((msg) && (((struct msg_avp_chain
>> *)(msg))->type == MSG_MSG) && (((struct msg *)(msg))->msg_eyec ==
>> (0x11355463)))'
>>
>>
>>
>> Warning: Invalid parameter received in '((msg) && (((struct msg_avp_chain
>> *)(msg))->type == MSG_MSG) && (((struct msg *)(msg))->msg_eyec ==
>> (0x11355463)))'
>>
>>
>> On Thu, Nov 29, 2012 at 6:46 PM, Sebastien Decugis <
>> sdecugis at freediameter.net> wrote:
>>
>>>  Hi Zack,
>>>
>>> I have just committed a first draft of this change; can you experiment with
>>> it and give me your feedback? There are probably some cases where the
>>> reported value is incorrect; I'll need to test this code more, but I do not
>>> have much time for it. It would be a great help if you could test on your
>>> side and list the situations where you see problems or missing information.
>>>
>>> Thanks!
>>> sebastien.
>>>
>>>
>>> On 2012/11/28 8:41, Sebastien Decugis wrote:
>>>
>>>  + list
>>>
>>> This is also how I was thinking of implementing it: add a new timestamp
>>> field to the message structure; upon reception, save the current timestamp
>>> and write this ts into the message once it is created. This ts can then be
>>> used in several places afterwards, including when the message is sent.
>>>
>>> Please also consider the case of freeDiameter relays (request received,
>>> then transferred to another peer; answer received from that peer and
>>> transferred to the original request sender). In such a case I guess it is
>>> better to display the "transfer time" as well, in addition to the time
>>> between the original request and the answer.
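>>>
>>> Something like this, conceptually (the structure fields and the helper below
>>> are only illustrative, not the real freeDiameter code):
>>>
>>> #include <time.h>
>>>
>>> /* Illustrative only -- not the real freeDiameter structures */
>>> struct msg {
>>>         /* ... existing fields ... */
>>>         struct timespec msg_ts_rcv;   /* set when the buffer is received from the network */
>>>         struct timespec msg_ts_sent;  /* set when the message is handed back to the network */
>>> };
>>>
>>> /* Called by the receiver thread once the struct msg has been built
>>>  * from the buffer read off the socket. */
>>> void msg_stamp_received(struct msg * m)
>>> {
>>>         clock_gettime(CLOCK_REALTIME, &m->msg_ts_rcv);
>>> }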
>>>
>>> Sebastien
>>>
>>> On 2012/11/27 19:28, zack Hasit wrote:
>>>
>>> Hi, I was looking at the code for this change. The only way to do this
>>> properly is by adding "received time" information to newmsg itself, because
>>> each message goes through multiple queues and threads. The callback could
>>> then subtract that from the current time and get the delta? Not sure what
>>> side effects this would cause. Suggestions?
>>>
>>> int fd_tls_rcvthr_core(struct cnxctx * conn, gnutls_session_t session)
>>> {
>>>         /* No guarantee that GnuTLS preserves the message boundaries, so
>>> we re-build it as in TCP */
>>>         do {
>>>                 uint8_t header[4];
>>>                 uint8_t * newmsg;
>>>                 size_t  length;
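>>>                 /* (Illustration only:) the reception timestamp could be taken
>>>                  * around here, e.g. clock_gettime(CLOCK_REALTIME, &rcv_ts), and
>>>                  * attached to newmsg once the full buffer has been re-assembled
>>>                  * further down in this loop, so that the dispatch callback can
>>>                  * later compute "now - rcv_ts". */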
>>>
>>>
>>>
>>> On Tue, Nov 27, 2012 at 2:08 AM, Sebastien Decugis <
>>> sdecugis at freediameter.net> wrote:
>>>
>>>>  Hi Zack,
>>>>
>>>> This is a good idea; there is no such function at the moment, but it is
>>>> possible to add one. I will do it when I have time -- unless you can
>>>> implement it and send the code to me?
>>>>
>>>> Best regards,
>>>> Sebastien.
>>>>
>>>> On 2012/11/26 21:47, zack Hasit wrote:
>>>>
>>>> Hi, is there any function that I could use to get the time spent in the FD
>>>> stack from the point where a CCR is received from the socket to the point
>>>> where it gets handed over to the callback function? This would be an
>>>> excellent monitoring tool to understand how well the base stack vs. my
>>>> custom code is performing. It could also be used to give the operator some
>>>> insight, if needed, when drilling down to bottlenecks.
>>>>
>>>>
>>>> On Wed, Oct 31, 2012 at 1:38 AM, Sebastien Decugis <
>>>> sdecugis at freediameter.net> wrote:
>>>>
>>>>> Hi Zack,
>>>>>
>>>>> The queue size is not configurable, at least at the moment. You can
>>>>> check all calls to fd_fifo_new to find the size of all queues (the last
>>>>> parameter of the call). If you provide a patch to make this configurable, I
>>>>> will be happy to integrate it :)
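>>>>>
>>>>> For illustration only (the exact prototype should be checked in the headers,
>>>>> and the queue variable and size here are made up), such a call looks roughly
>>>>> like:
>>>>>
>>>>>     CHECK_FCT( fd_fifo_new(&my_queue, 30) );   /* last argument: max queue length */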
>>>>>
>>>>> Your understanding is correct, the time to fill all the queues will
>>>>> depend on the incoming rate vs. the processing rate. Changing the queue
>>>>> size is equivalent to adjusting some internal buffer, but the producer /
>>>>> consumer ratio remains the same.
>>>>>
>>>>> Best regards,
>>>>> Sebastien.
>>>>>
>>>>>
>>>>> On 2012/10/31 2:20, zack Hasit wrote:
>>>>>
>>>>>> Thanks. I looked at the sample config and I understand how to increase or
>>>>>> decrease threads, but how can I control the max queue size if I want to
>>>>>> use just one thread? Is that just a #define? I am guessing the ability to
>>>>>> tweak it via this config file is needed?
>>>>>> To take an example: with a throughput of 100 events/sec vs. 1000
>>>>>> events/sec, I might need to tweak the fifo size so that the overall
>>>>>> latency of a message remains the same in both cases before congestion
>>>>>> control kicks in. If the max queue size is the same for both, I might have
>>>>>> to wait longer when the throughput is slower?
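>>>>>>
>>>>>> To put rough, purely illustrative numbers on my concern:
>>>>>>
>>>>>>     latency before back-pressure ~ total queue capacity / processing rate
>>>>>>     e.g. 100 queued msgs / 1000 msg/s ~ 0.1 s  vs.  100 / 100 msg/s ~ 1 s
>>>>>>
>>>>>> So with a fixed queue size, the slower deployment would see roughly 10x
>>>>>> more queueing latency before the client feels the congestion control.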
>>>>>>
>>>>>>
>>>>>> -Zack
>>>>>>
>>>>>> On Tue, Oct 30, 2012 at 4:53 PM, Sebastien Decugis
>>>>>> <sdecugis at freediameter.net> wrote:
>>>>>>
>>>>>>> Hi Zack,
>>>>>>>
>>>>>>> If everything goes according to plan, ultimately the socket buffer
>>>>>>> will fill up and the transport layer will stop acknowledging.
>>>>>>>
>>>>>>> There are several threads involved; let's consider the easiest case
>>>>>>> (TCP, no TLS):
>>>>>>>   1. the receiver thread in the connection (libfdcore/cnxctx.c:691)
>>>>>>>      a. read on the socket
>>>>>>>      b. send event on peer event queue (fifo)
>>>>>>>   2. the PSM thread
>>>>>>>      a. pick the event
>>>>>>>      b. send the message to the global incoming queue
>>>>>>>   3. the routing IN thread
>>>>>>>      a. pick the message
>>>>>>>      b. send to the dispatch queue
>>>>>>>   4. the dispatch thread
>>>>>>>      a. pick the message
>>>>>>>      b. execute your "slow" code
>>>>>>>
>>>>>>> All the queues should have a maximum number of elements; above this,
>>>>>>> the "send to queue" operation becomes blocking. As a result, all the
>>>>>>> intermediary queues fill up until the first thread is no longer able
>>>>>>> to run and stops reading the socket buffer. At that point, the
>>>>>>> system's congestion control kicks in.
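>>>>>>>
>>>>>>> As a stand-alone illustration of that back-pressure mechanism, here is a
>>>>>>> generic bounded-queue sketch (not the actual fd_fifo implementation):
>>>>>>>
>>>>>>> #include <pthread.h>
>>>>>>>
>>>>>>> /* Generic bounded FIFO: posting blocks when the queue is full, which is
>>>>>>>  * what propagates back-pressure from the slow consumer up to the reader. */
>>>>>>> struct bounded_fifo {
>>>>>>>         pthread_mutex_t lock;
>>>>>>>         pthread_cond_t  not_full, not_empty;
>>>>>>>         void          **items;
>>>>>>>         int             max, count, head, tail;
>>>>>>> };
>>>>>>>
>>>>>>> void fifo_post(struct bounded_fifo *q, void *item)
>>>>>>> {
>>>>>>>         pthread_mutex_lock(&q->lock);
>>>>>>>         while (q->count == q->max)              /* full: block the producer */
>>>>>>>                 pthread_cond_wait(&q->not_full, &q->lock);
>>>>>>>         q->items[q->tail] = item;
>>>>>>>         q->tail = (q->tail + 1) % q->max;
>>>>>>>         q->count++;
>>>>>>>         pthread_cond_signal(&q->not_empty);
>>>>>>>         pthread_mutex_unlock(&q->lock);
>>>>>>> }
>>>>>>>
>>>>>>> void * fifo_get(struct bounded_fifo *q)
>>>>>>> {
>>>>>>>         void *item;
>>>>>>>         pthread_mutex_lock(&q->lock);
>>>>>>>         while (q->count == 0)                   /* empty: block the consumer */
>>>>>>>                 pthread_cond_wait(&q->not_empty, &q->lock);
>>>>>>>         item = q->items[q->head];
>>>>>>>         q->head = (q->head + 1) % q->max;
>>>>>>>         q->count--;
>>>>>>>         pthread_cond_signal(&q->not_full);
>>>>>>>         pthread_mutex_unlock(&q->lock);
>>>>>>>         return item;
>>>>>>> }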
>>>>>>>
>>>>>>> This is the very core mechanism of freeDiameter, based on threads and
>>>>>>> fifos, and I believe it is very robust as long as the resources (number
>>>>>>> of parallel threads, maximum length of queues) are finely tuned for the
>>>>>>> target system. I confess the current maximum values were chosen quite
>>>>>>> arbitrarily...
>>>>>>>
>>>>>>> Let me know if my explanation is not clear, or if you find places in
>>>>>>> the
>>>>>>> code where you think the description I gave here does not apply.
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Sebastien
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 2012/10/23 21:06, zack Hasit wrote:
>>>>>>>
>>>>>>>  Hi Sebastian,
>>>>>>>>
>>>>>>>> Every time I get a new message, my callback function gets called as
>>>>>>>> expected. The callback function is slow and takes about 10 ms to
>>>>>>>> process each request; it is slow because of my custom code, and that is
>>>>>>>> expected. I can run only a single thread, as multi-threading is not
>>>>>>>> supported. My question is: what happens to the messages that are still
>>>>>>>> being sent to this Diameter server by the client? Do they stay in the
>>>>>>>> socket buffer? If they do stay in the socket buffer, then TCP/IP
>>>>>>>> congestion control will kick in, which is good, as it will slow down
>>>>>>>> the client (and that is what I want). However, if they are being picked
>>>>>>>> off the socket by another thread, then we might have a problem, as my
>>>>>>>> logic is slow and cannot handle those pending queued messages.
>>>>>>>> Please let me know.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Help mailing list
>>> Help at freediameter.net
>>> http://lists.freediameter.net/cgi-bin/mailman/listinfo/help
>>>
>>>
>>>
>>
>>
>

