List: linux-mm
Subject: Re: [PATCH V2 4/4] cpuset,mm: update task's mems_allowed lazily
From: David Rientjes <rientjes@google.com>
Date: 2010-03-31 10:34:08
Message-ID: alpine.DEB.2.00.1003310324550.17661@chino.kir.corp.google.com
On Wed, 31 Mar 2010, Miao Xie wrote:
> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index f5b7d17..43ac21b 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -58,6 +58,7 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
> nodemask_t *nodes,
> struct zone **zone)
> {
> + nodemask_t tmp_nodes;
> /*
> * Find the next suitable zone to use for the allocation.
> * Only filter based on nodemask if it's set
> @@ -65,10 +66,16 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
> if (likely(nodes == NULL))
> while (zonelist_zone_idx(z) > highest_zoneidx)
> z++;
> - else
> - while (zonelist_zone_idx(z) > highest_zoneidx ||
> - (z->zone && !zref_in_nodemask(z, nodes)))
> - z++;
> + else {
> + tmp_nodes = *nodes;
> + if (nodes_empty(tmp_nodes))
> + while (zonelist_zone_idx(z) > highest_zoneidx)
> + z++;
> + else
> + while (zonelist_zone_idx(z) > highest_zoneidx ||
> + (z->zone && !zref_in_nodemask(z, &tmp_nodes)))
> + z++;
> + }
>
> *zone = zonelist_zone(z);
> return z;
Unfortunately, you can't allocate a nodemask_t on the stack here: this
function is used in the zonelist iteration for get_page_from_freelist(),
which can already run very deep in the stack, so the extra copy risks a
stack overflow. Dynamically allocating the nodemask_t wouldn't scale
either, since it would mean an allocation on every iteration of a
zonelist.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org