List:       dpdk-dev
Subject:    Re: [dpdk-dev] [PATCH 3/4] hash: fix rw concurrency while moving keys
From:       Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Date:       2018-09-30 23:05:51
Message-ID: AM6PR08MB36724E09B9AF737412D166C598EE0@AM6PR08MB3672.eurprd08.prod.outlook.com

> > 
> > Reader-writer concurrency issue, caused by moving the keys to their
> > alternative locations during key insert, is solved by introducing a
> > global counter(tbl_chng_cnt) indicating a change in table.
> > 
> > @@ -662,6 +679,20 @@ rte_hash_cuckoo_move_insert_mw(const struct
> rte_hash *h,
> > 		curr_bkt = curr_node->bkt;
> > 	}
> > 
> > +	/* Inform the previous move. The current move need
> > +	 * not be informed now as the current bucket entry
> > +	 * is present in both primary and secondary.
> > +	 * Since there is one writer, load acquires on
> > +	 * tbl_chng_cnt are not required.
> > +	 */
> > +	__atomic_store_n(&h->tbl_chng_cnt,
> > +			 h->tbl_chng_cnt + 1,
> > +			 __ATOMIC_RELEASE);
> > +	/* The stores to sig_alt and sig_current should not
> > +	 * move above the store to tbl_chng_cnt.
> > +	 */
> > +	__atomic_thread_fence(__ATOMIC_RELEASE);
> > +
> [Wang, Yipeng] I believe for X86 this fence should not be compiled to any
> code, otherwise we need macros for the compile time check.
'__atomic_thread_fence(__ATOMIC_RELEASE)' acts as a load-store and store-store
fence [1]. x86's TSO memory model already provides that ordering, so the
compiler should not emit any barrier instruction for it on x86.

[1] https://preshing.com/20130922/acquire-and-release-fences/

> 
> > @@ -926,30 +957,56 @@ __rte_hash_lookup_with_hash(const struct
> rte_hash *h, const void *key,
> > 	uint32_t bucket_idx;
> > 	hash_sig_t alt_hash;
> > 	struct rte_hash_bucket *bkt;
> > +	uint32_t cnt_b, cnt_a;
> > 	int ret;
> > 
> > -	bucket_idx = sig & h->bucket_bitmask;
> > -	bkt = &h->buckets[bucket_idx];
> > -
> > 	__hash_rw_reader_lock(h);
> > 
> > -	/* Check if key is in primary location */
> > -	ret = search_one_bucket(h, key, sig, data, bkt);
> > -	if (ret != -1) {
> > -		__hash_rw_reader_unlock(h);
> > -		return ret;
> > -	}
> > -	/* Calculate secondary hash */
> > -	alt_hash = rte_hash_secondary_hash(sig);
> > -	bucket_idx = alt_hash & h->bucket_bitmask;
> > -	bkt = &h->buckets[bucket_idx];
> > +	do {
> [Wang, Yipeng] As far as I know, the MemC3 paper "MemC3: Compact and
> Concurrent MemCache with Dumber Caching and Smarter Hashing"
> as well as OvS cmap uses similar version counter to implement read-write
> concurrency for hash table, but one difference is reader checks even/odd of
> the version counter to make sure there is no concurrent writer. Could you just
> double check and confirm that this is not needed for your implementation?
> 
I re-looked at this paper. My patch relies on the fact that, while a key is
being shifted, it is present in both the primary and secondary buckets. The
odd/even check on the version counter is not required because the full key
comparison catches any false signature matches.

> > --- a/lib/librte_hash/rte_hash.h
> > +++ b/lib/librte_hash/rte_hash.h
> > @@ -156,7 +156,7 @@ rte_hash_count(const struct rte_hash *h);
> > *   - -ENOSPC if there is no space in the hash for this key.
> > */
> > int
> > -rte_hash_add_key_data(const struct rte_hash *h, const void *key, void
> > *data);
> > +rte_hash_add_key_data(struct rte_hash *h, const void *key, void
> > +*data);
> > 
> > /**
> > * Add a key-value pair with a pre-computed hash value @@ -180,7
> > +180,7 @@ rte_hash_add_key_data(const struct rte_hash *h, const void
> *key, void *data);
> > *   - -ENOSPC if there is no space in the hash for this key.
> > */
> > int32_t
> > -rte_hash_add_key_with_hash_data(const struct rte_hash *h, const void
> > *key,
> > +rte_hash_add_key_with_hash_data(struct rte_hash *h, const void *key,
> > 						hash_sig_t sig, void *data);
> > 
> > /**
> > @@ -200,7 +200,7 @@ rte_hash_add_key_with_hash_data(const struct
> rte_hash *h, const void *key,
> > *     array of user data. This value is unique for this key.
> > */
> > int32_t
> > -rte_hash_add_key(const struct rte_hash *h, const void *key);
> > +rte_hash_add_key(struct rte_hash *h, const void *key);
> > 
> > /**
> > * Add a key to an existing hash table.
> > @@ -222,7 +222,7 @@ rte_hash_add_key(const struct rte_hash *h, const
> void *key);
> > *     array of user data. This value is unique for this key.
> > */
> > int32_t
> > -rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
> > hash_sig_t sig);
> > +rte_hash_add_key_with_hash(struct rte_hash *h, const void *key,
> > +hash_sig_t sig);
> > 
> > /
> 
> I think the above changes will break ABI by changing the parameter type?
> Other people may know better on this.

