List: gfs-bugs
Subject: [gfs-bugs] [Bug 221] New - flocks broken again
From: bugzilla-daemon () sistina ! com
Date: 2001-03-07 22:24:52
http://bugzilla.sistina.com/show_bug.cgi?id=221
*** shadow/221 Wed Mar 7 16:24:51 2001
--- shadow/221.tmp.307 Wed Mar 7 16:24:51 2001
***************
*** 0 ****
--- 1,42 ----
+ Bug#: 221
+ Product: GFS
+ Version: Public CVS
+ Platform:
+ OS/Version: All
+ Status: NEW
+ Resolution:
+ Severity: normal
+ Priority: P4
+ Component: gfs
+ AssignedTo: gfs-bugs@sistina.com
+ ReportedBy: conrad@sistina.com
+ URL:
+ Summary: flocks broken again
+
+ Gee, I write up this nice tool to verify POSIX- and BSD-style file locks,
+ and look what the damn thing did: it found a bug in code that was supposed
+ to be working.
+
+ grrr.
+
+ One node, gfs with memexp:
+ pidA: got open:fileA:/gfs/first_file
+ pidA: got flock:shl:fileA
+ pidB: got open:fileA:/gfs/first_file
+ pidB: got flock:shl:fileA
+ pidA: got flock:unl:fileA
+ pidB: got flock:unl:fileA
+ pidA: got close:fileA
+ pidB: got close:fileA
+
+ pidA: got open:fileA:/gfs/first_file
+ pidB: got open:fileA:/gfs/first_file
+ pidA: got flock:shn:fileA
+ pidB: got flock:shn:fileA
+ Locker `pidB' sent a result `11' (Resource temporarily unavailable) but we
+ expected 0
+
+ So, I can have two pids get a shared flock on a file. However, if they ask
+ for the lock with the nonblocking version of the call (shn), it fails.
+
+ Oops. I'll have to go fix that......
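
The semantics the test tool is checking can be sketched in a few lines
(a minimal local-filesystem illustration, not the actual GFS test
harness; the file name and variables here are made up for the example).
Shared flocks never conflict with each other, so the nonblocking
variant must also succeed when another shared lock is held:

```python
import fcntl
import os
import tempfile

# Illustrative stand-in for /gfs/first_file; path is hypothetical.
path = os.path.join(tempfile.mkdtemp(), "first_file")
open(path, "w").close()

fd_a = os.open(path, os.O_RDONLY)  # stands in for pidA's descriptor
fd_b = os.open(path, os.O_RDONLY)  # stands in for pidB's descriptor

# Blocking shared locks ("shl"): both holders may coexist.
fcntl.flock(fd_a, fcntl.LOCK_SH)
fcntl.flock(fd_b, fcntl.LOCK_SH)
fcntl.flock(fd_a, fcntl.LOCK_UN)
fcntl.flock(fd_b, fcntl.LOCK_UN)

# Nonblocking shared locks ("shn"): these must also both succeed,
# since shared locks do not conflict.  The bug above is that GFS
# returned errno 11 (EAGAIN) for the second request.
fcntl.flock(fd_a, fcntl.LOCK_SH | fcntl.LOCK_NB)
try:
    fcntl.flock(fd_b, fcntl.LOCK_SH | fcntl.LOCK_NB)
    second_shared_nb_ok = True
except BlockingIOError:
    second_shared_nb_ok = False  # the buggy behaviour reported here

print(second_shared_nb_ok)  # a correct flock implementation prints True

os.close(fd_a)
os.close(fd_b)
```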
Read the GFS HOWTO http://www.sistina.com/gfs/Pages/howto.html
gfs-bugs mailing list
gfs-bugs@sistina.com
http://lists.sistina.com/mailman/listinfo/gfs-bugs