
List:       fuse-devel
Subject:    Re: [fuse-devel] per-inode locks in FUSE (kernel vs userspace)
From:       Miklos Szeredi <miklos@szeredi.hu>
Date:       2021-12-07 14:07:59
Message-ID: CAJfpegv1eDv062nnfXragUcMvb7ksonWwAB6J14-9_kxLtsa9g@mail.gmail.com

On Tue, 7 Dec 2021 at 14:48, Vivek Goyal <vgoyal@redhat.com> wrote:
>
> On Tue, Dec 07, 2021 at 09:38:10AM +0100, Miklos Szeredi wrote:
> > On Mon, 6 Dec 2021 at 23:29, Vivek Goyal <vgoyal@redhat.com> wrote:
> > >
> > > On Fri, Dec 03, 2021 at 12:05:34AM +0000, Eric Wong wrote:
> > > > Hi all, I'm working on a new multi-threaded FS using the
> > > > libfuse3 fuse_lowlevel.h API.  It looks to me like the kernel
> > > > already performs the necessary locking on a per-inode basis to
> > > > save me some work in userspace.
> > > >
> > > > In particular, I originally thought I'd need pthreads mutexes on
> > > > a per-inode (fuse_ino_t) basis to protect userspace data
> > > > structures between the .setattr (truncate), .fsync, and
> > > > .write_buf userspace callbacks.
> > > >
> > > > However upon reading the kernel, I can see fuse_fsync,
> > > > fuse_{cache,direct}_write_iter in fs/fuse/file.c all use
> > > > inode_lock.  do_truncate also uses inode_lock in fs/open.c.
> > > >
> > > > So it looks like implementing extra locking in userspace would
> > > > do nothing useful in my case, right?
> > >
> > > I guess it is probably a good idea to implement proper locking
> > > in a multi-threaded fs and not rely on what kind of locking
> > > the kernel is doing. If the kernel locking changes down the line,
> > > your implementation will be broken.
> >
> > Thing is, some fuse filesystem implementations already do rely on
> > kernel locking. So while it shouldn't hurt to have an extra layer of
> > locking (except for the added complexity and performance cost), it's
> > not necessary.
>
> I am wondering if the same applies to virtiofs. In that case the guest
> kernel is an untrusted entity, so we don't want to run into a situation
> where the guest kernel can somehow corrupt shared data structures of
> virtiofsd and that somehow opens the door for some other bad outcome.
> Maybe in that case it is safer not to rely on guest kernel locking.

That's true, virtiofs has an inverted trust model, so the server must
not assume anything from the client.

Thanks,
Miklos


-- 
fuse-devel mailing list
To unsubscribe or subscribe, visit https://lists.sourceforge.net/lists/listinfo/fuse-devel