
List:       linux-kernel
Subject:    Re: [PATCH] mm: don't expose page to fast gup before it's ready
From:       Andrew Morton <akpm@linux-foundation.org>
Date:       2018-01-31 23:07:36
Message-ID: 20180131150736.9703ab0826121f2e9e23cb8e@linux-foundation.org

On Tue, 9 Jan 2018 02:10:50 -0800 Yu Zhao <yuzhao@google.com> wrote:

> On Tue, Jan 09, 2018 at 09:46:22AM +0100, Michal Hocko wrote:
> > On Mon 08-01-18 14:56:32, Yu Zhao wrote:
> > > We don't want to expose a page to the fast gup path before it's
> > > properly set up. During page setup, we may call
> > > page_add_new_anon_rmap(), which uses a non-atomic bit op. If the
> > > page is exposed before setup is done, we could overwrite page
> > > flags that were set by get_user_pages_fast() or its callers.
> > > Here is a non-fatal scenario (there might be other fatal
> > > problems that I didn't look into):
> > > 
> > > 	CPU 1				CPU 2
> > > set_pte_at()			get_user_pages_fast()
> > > page_add_new_anon_rmap()		gup_pte_range()
> > > 	__SetPageSwapBacked()			SetPageReferenced()
> > > 
> > > Fix the problem by delaying set_pte_at() until the page is ready.
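
For reference, the ordering change being described is roughly the
following (a sketch modeled on the do_swap_page() path in mm/memory.c;
the actual patch touches several call sites and the details may
differ):

	/*
	 * Before (racy): the PTE is published first, so fast gup can
	 * find the page while rmap setup is still doing non-atomic
	 * read-modify-writes on page->flags.
	 */
	set_pte_at(vma->vm_mm, address, ptep, entry);
	page_add_new_anon_rmap(page, vma, address, false); /* __SetPageSwapBacked() */
	mem_cgroup_commit_charge(page, memcg, false, false);

	/*
	 * After (fixed): finish rmap and memcg setup first, then
	 * publish the PTE so gup_pte_range() only sees a ready page.
	 */
	page_add_new_anon_rmap(page, vma, address, false);
	mem_cgroup_commit_charge(page, memcg, false, false);
	set_pte_at(vma->vm_mm, address, ptep, entry);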
> > 
> > Have you seen this race happen in real workloads, or is this a
> > code-review-based fix or a theoretical issue? I am primarily asking
> > because the code has been like that at least throughout the git
> > era and I do not remember any issue like this. If you can really
> > trigger this tiny race window then we should mark the fix for
> > stable.
> 
> I didn't observe the race directly. But I did get a few crashes when
> trying to access the mem_cgroup of pages returned by
> get_user_pages_fast(). Those pages were charged and showed a valid
> mem_cgroup in the kdumps, so this led me to think the problem came
> from a premature set_pte_at().
> 
> I think the reason nobody has complained about this problem is that
> the race only happens when using ksm+swap, and it might not cause
> any fatal problem even then. Nevertheless, it's nice to have
> set_pte_at() done consistently after the rmap is added and the page
> is charged.
> 
> > Also, what prevents reordering here? There do not seem to be any
> > barriers to prevent __SetPageSwapBacked from leaking after
> > set_pte_at() with your patch.
> 
> I assumed mem_cgroup_commit_charge() acted as a full barrier. Since
> you explicitly asked the question, I realized my assumption doesn't
> hold when memcg is disabled. So we do need something to prevent
> reordering in my patch. And it raises the question of whether we
> want to add barriers to the other places that call
> page_add_new_anon_rmap() and set_pte_at().
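
For concreteness, one form the missing ordering could take is an
explicit write barrier before the PTE is published (a sketch only;
whether smp_wmb() is the right primitive here, and which of the other
page_add_new_anon_rmap() call sites would need the same treatment, is
exactly what remains open above):

	page_add_new_anon_rmap(page, vma, address, false);
	mem_cgroup_commit_charge(page, memcg, false, false);

	/*
	 * Make sure the non-atomic flag stores above are visible
	 * before the PTE store makes the page reachable via
	 * get_user_pages_fast(). Don't rely on
	 * mem_cgroup_commit_charge() for ordering: it may do nothing
	 * when memcg is disabled.
	 */
	smp_wmb();
	set_pte_at(vma->vm_mm, address, ptep, entry);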

No progress here?  I have the patch marked "to be updated", hence it is
stuck.  Please let's get it finished off for 4.17-rc1.
