
List:       osdl-lsb-discuss
Subject:    Re: [lsb-discuss] What do we do for complex tests that don't have standards rationale?
From:       Jeff Licquia <licquia@linuxfoundation.org>
Date:       2013-09-11 16:40:22
Message-ID: 52309CF6.7020503@linuxfoundation.org

On 09/11/2013 06:22 AM, Carlos O'Donell wrote:
> Red Hat has been working hard to ensure that glibc-related
> LSB issues are resolved upstream, waived, or the appropriate
> course of action suggested in an LSB bug.

Which has been excellent, by the way.  I've enjoyed working with you
all, and I think we've made some real improvements in both the LSB and
in a number of upstreams.

> The single largest problem with this failure is that the
> model-driven framework isn't clearly coupled in any way to
> a standards requirement, making it difficult for a reviewer
> to easily determine what part of the standard is violated.

It's interesting that you say this.  The intent behind a few of our
newer test suites (like olver-core) is that each test failure should be
easily tied to a standards requirement.

Clearly, we're not meeting that goal.

> The `{app.read.10}' precondition failure for read_tty_spec
> has no documented standards-conforming reason for the test 
> to expect this result or value. The test code in 
> src/model/io/term/term_model.sec explains what IEXTEN is, 
> but fails to explain why it's expected to be on. The code
> can be called from several other functions which also
> don't document what state they are in and if it's valid
> for them to expect IEXTEN on Linux.

I note that IEXTEN is marked as a precondition for the test in the
source.  That implies that it's not really what we're testing for.

You'd be right to point out that we don't explicitly set IEXTEN in our
initialization.  If we had, the precondition would make sense: if we
call tcsetattr() and those attributes don't actually take effect, we
have a problem.

The fact that disabling previously failing tests causes this test to
succeed is further evidence that this is an initialization problem.
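
To make that concrete: a fix along those lines would have the harness
set the flag itself and then verify it.  Something like this (a
sketch against stdin, not the actual olver-core initialization code):

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Sketch: explicitly establish the IEXTEN precondition rather
     * than assuming the inherited terminal state.  STDIN_FILENO
     * stands in for whatever tty fd the harness actually uses. */
    int main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) != 0) {
            perror("tcgetattr");
            return 1;
        }

        t.c_lflag |= IEXTEN;

        if (tcsetattr(STDIN_FILENO, TCSANOW, &t) != 0) {
            perror("tcsetattr");
            return 1;
        }

        /* POSIX allows tcsetattr() to report success if only some
         * of the requested changes took effect, so read the flags
         * back and verify. */
        if (tcgetattr(STDIN_FILENO, &t) != 0 || !(t.c_lflag & IEXTEN)) {
            fprintf(stderr, "IEXTEN did not take effect\n");
            return 1;
        }

        return 0;
    }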

> I'm including a tcgetattr IEXTEN test that shows ppc64
> behaves as expected by the standard (via glibc):

Good enough for me.
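
For anyone else chasing this through the archive, a minimal probe of
that shape (a sketch, not your exact test) is just:

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Minimal standalone IEXTEN probe: report whether the flag is
     * set on stdin's terminal. */
    int main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) != 0) {
            perror("tcgetattr");
            return 1;
        }

        printf("IEXTEN is %s\n",
               (t.c_lflag & IEXTEN) ? "set" : "clear");
        return 0;
    }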

> It just isn't reasonable that for these model-driven tests,
> after each failure, the reviewer has to re-work their way through
> the entire test, proving that each step leads to the next and
> that each is within what is expected. Such reasoning should have
> been built into the model, either as comments or as a formal
> specification, and it's not there.

Totally agree, except that I'd characterize this problem with the model
as a bug in the test more than as a flaw in the model-driven approach.
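
If the model carried its requirements inline, most of that re-work
would go away.  Purely as a sketch of what I mean (REQ_ASSERT is
hypothetical, not current olver-core syntax):

    #include <stdio.h>
    #include <stdlib.h>
    #include <termios.h>
    #include <unistd.h>

    /* Hypothetical macro: every check names the requirement it
     * encodes, so a reviewer can go straight from the failure to
     * the spec text. */
    #define REQ_ASSERT(cond, req_id, cite)                \
        do {                                              \
            if (!(cond)) {                                \
                fprintf(stderr, "FAIL %s (%s): %s\n",     \
                        (req_id), (cite), #cond);         \
                exit(1);                                  \
            }                                             \
        } while (0)

    int main(void)
    {
        struct termios t;

        if (tcgetattr(STDIN_FILENO, &t) != 0) {
            perror("tcgetattr");
            return 1;
        }

        /* If the model really required IEXTEN here, the citation
         * would have to point at the text that says so. */
        REQ_ASSERT(t.c_lflag & IEXTEN, "app.read.10",
                   "POSIX.1-2008 XBD 11.2.5 Local Modes");

        return 0;
    }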

> Debugging io_term_rw_canon_scenario has been very very
> difficult, and even now I don't know which part of the
> model sets the initial erroneous c_lflag value, but it's
> not glibc, and it's not the kernel (I can show that via
> strace and gdb).
> 
> Has anyone debugged one of these model-driven tests?

The idea behind olver-core is to tie the tests more directly to their
underlying specifications, and to generate tests, with less manual
work, directly from a model of the spec.  This has met with mixed
success; as you've observed, debugging a particular test failure can
be difficult because of the number of generated layers between the
editable source code and the resulting test binary.

So to summarize, I see a number of problems here:

 - This particular test has two bugs: it assumes its environment rather
than explicitly initializing it, and it misidentifies IEXTEN as a
required termios setting.

 - The process of working back from the failing test to the
specification doesn't work right, and needs to be made easier.

 - We need some kind of guide for evaluating test results generated
from olver-core, with instructions both on using olver-core to debug
test failures and on debugging olver-core itself.

Does that sound right to you?  If so, I'll update the bug, and also file
some new bugs.

-- 
Jeff Licquia
The Linux Foundation
+1 (317) 915-7441
licquia@linuxfoundation.org

Linux Foundation Events Schedule:  events.linuxfoundation.org
Linux Foundation Training Schedule: training.linuxfoundation.org
_______________________________________________
lsb-discuss mailing list
lsb-discuss@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lsb-discuss