
List:       linux-scsi
Subject:    Re: [Open-FCoE] [PATCH] libfc: tune fc_exch_em_alloc() to be O(2)
From:       Hillf Danton <dhillf@gmail.com>
Date:       2010-10-29 14:47:38
Message-ID: <AANLkTikaYEPybOjfm-jbf8EFA+iaiTsC5wxrRKpdB4x9@mail.gmail.com>

On Thu, Oct 28, 2010 at 11:04 PM, Zou, Yi <yi.zou@intel.com> wrote:
>> On Thu, Oct 28, 2010 at 5:36 AM, Robert Love <robert.w.love@intel.com> wrote:
>> > On Fri, 2010-10-22 at 20:20 +0800, Hillf Danton wrote:
>> >> When allocating a new exch from the pool, the scan for a free slot
>> >> in the exch array fluctuates in length as the pool nears exhaustion
>> >> (the baseline scan is sketched just below this quote).
>> >>
>> >> The fluctuation is smoothed, and the scan looks to be O(2).
>> >>
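For context, the scan that description refers to is a circular probe from
next_index over the per-CPU slot array. Below is a minimal user-space sketch of
that baseline, under assumed illustrative names (scan_pool, scan_alloc,
POOL_MAX_INDEX); only the probe loop mirrors the fc_exch_em_alloc() hunk quoted
further down, the rest is a simplified model, not libfc code.

    /*
     * Simplified model of the baseline allocation scan: probe circularly
     * from next_index until a free slot is found.  As the array fills up,
     * the number of probes per call can swing between 1 and the pool size,
     * which is the fluctuation the patch smooths.
     */
    #include <stdint.h>

    #define POOL_MAX_INDEX 511              /* illustrative pool size - 1 */

    struct scan_pool {
            void     *slots[POOL_MAX_INDEX + 1];
            uint16_t  next_index;
    };

    /* Returns a free index (the caller then installs its exch there), or -1. */
    static int scan_alloc(struct scan_pool *pool)
    {
            uint16_t index = pool->next_index;

            while (pool->slots[index]) {
                    index = index == POOL_MAX_INDEX ? 0 : index + 1;
                    if (index == pool->next_index)
                            return -1;      /* wrapped around: pool is full */
            }
            pool->next_index = index == POOL_MAX_INDEX ? 0 : index + 1;
            return index;
    }

Near exhaustion almost every probed slot is busy, so the probe count per call
varies widely from one allocation to the next; that variance is what the patch
targets.
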
>> > Hi Hillf,
>> >
>> >   I think this patch is fine, aside from a few minor nits below. I'm
>> > not sure how much this benefits us though. I don't think that it will
>> > hurt us, but I'd like to leave it in the fcoe-next tree a bit to make
>> > sure there aren't any adverse effects. I will fix the two issues I
>> > mention below and check it into fcoe-next, unless there are objections.
>> >
>> > Have you done any profiling with this patch to show the improvement?
>>
>> Looking up an exch is O(1), a direct index by xid (sketched just below).
>> Very cool. I tried to make allocation O(1) as well, but failed.
>>
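For reference, the O(1) lookup mentioned above comes from direct array indexing
by xid. Here is a minimal user-space sketch under that assumption: only the
(xid - min_xid) >> fc_cpu_order arithmetic comes from the patch below;
lookup_pool, POOL_SIZE, and lookup_exch are illustrative names, not libfc API.

    /*
     * Simplified model of the O(1) lookup: the xid maps straight to an
     * array index via the same arithmetic the patch uses in
     * fc_exch_delete(), so no scan is needed on lookup.
     */
    #include <stddef.h>
    #include <stdint.h>

    #define POOL_SIZE 512                   /* illustrative per-CPU pool size */

    struct exch;                            /* opaque in this sketch */

    struct lookup_pool {
            struct exch *exches[POOL_SIZE];
    };

    static struct exch *lookup_exch(struct lookup_pool *pool, uint16_t xid,
                                    uint16_t min_xid, unsigned int fc_cpu_order)
    {
            uint16_t index = (xid - min_xid) >> fc_cpu_order;

            if (index >= POOL_SIZE)
                    return NULL;
            return pool->exches[index];     /* direct index: O(1) */
    }
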
> How much fluctuation have you observed? Your patch looks reasonable to me:
> caching the freed xid and reusing it instead of scanning the array. Currently,
> since we allocate new xids incrementally, earlier-allocated xids are expected
> to be freed earlier, so I would think the distance to scan would not be really

There are many factors, such as the load at the other end and network
congestion (much as with TCP/IP), so I think it is hard to estimate how
much the scan distance fluctuates at medium-to-high usage of the
exchange pool. //Hillf

> that bad.
>
> Also, under low use we will probably only ever see the xids cached in left
> and right, which I guess is fine; I don't recall whether the spec requires
> consecutive xids to be spaced apart from each other, but I am not sure
> (the caching scheme is sketched below).
>
> yi
>
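To make the caching scheme discussed above concrete, here is a minimal
user-space sketch modeled on the patch quoted further down. The names
(cached_pool, cached_free, cached_alloc, XID_UNKNOWN, POOL_MAX_INDEX) are
illustrative stand-ins, not libfc API, and the model leaves out locking.

    /*
     * Simplified model of the two-slot free-index cache: freeing parks the
     * index in "left" or "right", allocation peeks those caches before
     * falling back to the circular scan from next_index.  XID_UNKNOWN marks
     * an empty cache slot.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define POOL_MAX_INDEX  511
    #define XID_UNKNOWN     0xffff

    struct cached_pool {
            void     *slots[POOL_MAX_INDEX + 1];
            uint16_t  next_index;
            uint16_t  left;                 /* cached free index or XID_UNKNOWN */
            uint16_t  right;                /* cached free index or XID_UNKNOWN */
    };

    static void cached_free(struct cached_pool *pool, uint16_t index)
    {
            if (pool->left == XID_UNKNOWN)
                    pool->left = index;
            else if (pool->right == XID_UNKNOWN)
                    pool->right = index;
            else
                    pool->next_index = index;       /* hint for the next scan */
            pool->slots[index] = NULL;
    }

    static bool cached_alloc(struct cached_pool *pool, void *exch, uint16_t *out)
    {
            uint16_t index;

            if (pool->left != XID_UNKNOWN) {        /* peek the caches first */
                    index = pool->left;
                    pool->left = XID_UNKNOWN;
                    goto hit;
            }
            if (pool->right != XID_UNKNOWN) {
                    index = pool->right;
                    pool->right = XID_UNKNOWN;
                    goto hit;
            }
            index = pool->next_index;               /* fall back to the scan */
            while (pool->slots[index]) {
                    index = index == POOL_MAX_INDEX ? 0 : index + 1;
                    if (index == pool->next_index)
                            return false;           /* pool exhausted */
            }
            pool->next_index = index == POOL_MAX_INDEX ? 0 : index + 1;
    hit:
            pool->slots[index] = exch;
            *out = index;
            return true;
    }

As in the patch, left and right must start out as XID_UNKNOWN (FC_XID_UNKNOWN
in libfc), so the cache only starts helping once the first exchanges have been
freed, the warm-up Hillf mentions further down.
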
>> >
>> >> Signed-off-by: Hillf Danton <dhillf@gmail.com>
>> >> ---
>> >>
>> >> --- a/drivers/scsi/libfc/fc_exch.c    2010-09-13 07:07:38.000000000 +0800
>> >> +++ b/drivers/scsi/libfc/fc_exch.c    2010-10-22 20:02:54.000000000 +0800
>> >> @@ -67,6 +67,11 @@ struct workqueue_struct *fc_exch_workque
>> >>  struct fc_exch_pool {
>> >>       u16              next_index;
>> >>       u16              total_exches;
>> >> +
>> >> +     /* two cached free-slot indices in the exch array */
>> >> +     u16              left;
>> >> +     u16              right;
>> >> +
>> >>       spinlock_t       lock;
>> >>       struct list_head ex_list;
>> >>  };
>> >> @@ -397,13 +402,26 @@ static inline void fc_exch_ptr_set(struc
>> >>  static void fc_exch_delete(struct fc_exch *ep)
>> >>  {
>> >>       struct fc_exch_pool *pool;
>> >> +     u16 index;
>> >>
>> >>       pool = ep->pool;
>> >>       spin_lock_bh(&pool->lock);
>> >>       WARN_ON(pool->total_exches <= 0);
>> >>       pool->total_exches--;
>> >> -     fc_exch_ptr_set(pool, (ep->xid - ep->em->min_xid) >> fc_cpu_order,
>> >> -                     NULL);
>> >> +
>> >> +     /* update cache of free slot */
>> >> +     index = (ep->xid - ep->em->min_xid) >> fc_cpu_order;
>> >> +     if (pool->left == FC_XID_UNKNOWN)
>> >> +             pool->left = index;
>> >> +     else if (pool->right == FC_XID_UNKNOWN)
>> >> +             pool->right = index;
>> >> +     else
>> >> +             /* XXX
>> >> +              * next = entropy(index, left, right);
>> >> +              **/
>> >
>> > We can remove this comment, right?
>>
>> Noted.
>>
>> >
>> >> +             pool->next_index = index;
>> >> +
>> >> +     fc_exch_ptr_set(pool, index, NULL);
>> >>       list_del(&ep->ex_list);
>> >>       spin_unlock_bh(&pool->lock);
>> >>       fc_exch_release(ep);    /* drop hold for exch in mp */
>> >> @@ -679,6 +697,19 @@ static struct fc_exch *fc_exch_em_alloc(
>> >>       pool = per_cpu_ptr(mp->pool, cpu);
>> >>       spin_lock_bh(&pool->lock);
>> >>       put_cpu();
>> >> +
>> >> +     /* peek cache of free slot */
>> >> +     if (pool->left != FC_XID_UNKNOWN) {
>> >> +             index = pool->left;
>> >> +             pool->left = FC_XID_UNKNOWN;
>> >> +             goto hit;
>> >> +     }
>> >> +     if (pool->right != FC_XID_UNKNOWN) {
>> >> +             index = pool->right;
>> >> +             pool->right = FC_XID_UNKNOWN;
>> >> +             goto hit;
>> >> +     }
>> >> +
>> >>       index = pool->next_index;
>> >>       /* allocate new exch from pool */
>> >>       while (fc_exch_ptr_get(pool, index)) {
>> >> @@ -687,7 +718,7 @@ static struct fc_exch *fc_exch_em_alloc(
>> >>                       goto err;
>> >>       }
>> >>       pool->next_index = index == mp->pool_max_index ? 0 : index + 1;
>> >> -
>> >> +hit:
>> >>       fc_exch_hold(ep);       /* hold for exch in mp */
>> >>       spin_lock_init(&ep->ex_lock);
>> >>       /*
>> >> @@ -2181,6 +2212,8 @@ struct fc_exch_mgr *fc_exch_mgr_alloc(st
>> >>               goto free_mempool;
>> >>       for_each_possible_cpu(cpu) {
>> >>               pool = per_cpu_ptr(mp->pool, cpu);
>> >> +             pool->left  =
>> >
>> > I think we should initialize this without relying on the following line.
>>
>> It looks like,
>>
>>                     pool->left = pool->right = FC_XID_UNKNOWN;
>> right?
>>
>> I want the cache to play its role after a warm-up period.
>>
>> thanks //Hillf
>>
>> >
>> >> +             pool->right = FC_XID_UNKNOWN;
>> >>               spin_lock_init(&pool->lock);
>> >>               INIT_LIST_HEAD(&pool->ex_list);
>> >>       }
>> >> _______________________________________________
>> >> devel mailing list
>> >> devel@open-fcoe.org
>> >> http://www.open-fcoe.org/mailman/listinfo/devel
>> >
>> >
>> >
>> _______________________________________________
>> devel mailing list
>> devel@open-fcoe.org
>> http://www.open-fcoe.org/mailman/listinfo/devel
>
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html