List: linux-fsdevel
Subject: Re: [PATCH] fanotify: dont destroy mark when ignore mask is cleared
From: Eric Paris <eparis () redhat ! com>
Date: 2010-11-30 16:19:39
Message-ID: 1291133979.3169.2.camel () localhost ! localdomain
On Tue, 2010-11-30 at 16:59 +0100, Lino Sanfilippo wrote:
> On Tue, Nov 30, 2010 at 01:16:35PM +0100, Lino Sanfilippo wrote:
>
> > I guess it is a question of safe vs. racy. Yes, it is safe: nothing will
> > explode or panic. But we might have a race between one task removing an
> > event type, causing the mask to go to 0 (so we destroy the mark), and
> > another task adding an event type. If it raced just right, we might
> > destroy the mark after the second task added to it. I guess we really
> > need to serialize fsnotify_mark() per group to solve the race...
> >
> > Do you want to take a stab at fixing these things or should I?
> >
> > -Eric
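[Editor's note: the race described above can be spelled out as an interleaving. Task names and steps are illustrative, not the actual fsnotify code paths:]

    Task A (remove last event type)      Task B (add event type)
    -------------------------------      -----------------------
    lock mark->lock
    mark->mask &= ~events   /* -> 0 */
    unlock mark->lock
                                         lock mark->lock
                                         mark->mask |= new_events
                                         unlock mark->lock
    mask was 0 -> destroy mark           /* B's mark is now freed:
                                            its events are silently lost */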
>
> IMHO the right way to serialize this would be:
>
> LOCK(group->mark_lock)
> - get the inode mark
> - set the mark's mask
> - possibly destroy the mark
> UNLOCK(group->mark_lock)
>
> But we can't do this, since setting the mark's mask requires taking the
> mark's lock - which would violate the lock order established in
> fsnotify_add_mark():
>
> mark->lock
> group->mark_lock
> inode->i_lock
>
> What we could do very easily is use another mutex instead (either an existing one, like the
> group's notification_mutex, or a completely new one) that is responsible for synchronizing
> add_mark()/remove_mark().
I'd think a new per-group mutex would be the right way to go. I'm not
sure how I feel about notification_mutex. I guess you can go ahead and
overload it, and we can split it off later if someone finds it to be a
performance bottleneck.
-Eric
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html