List: asterisk-dev
Subject: Re: [asterisk-dev] [Code Review]: testsuite: Fail Tokens
From: "jrose" <reviewboard@asterisk.org>
Date: 2013-01-30 21:46:44
Message-ID: 20130130214644.21153.51833@hotblack.digium.com
> On Jan. 30, 2013, 3:17 p.m., Mark Michelson wrote:
> > This looks like a good foundation for evaluating passing results more
> > accurately. The fun part of this is going to be to change the various
> > test objects and test modules to use fail tokens. I suppose that's next?
That'd be a worthy assumption. I'm mostly focused on using them in tests I
have in development right now rather than changing existing tests, but I've
given a little thought to adding them to modules as well. For any given
component it should really be a fairly simple change. Hopefully we won't
unearth too many hidden bugs, but I wouldn't be surprised if adding these to
all of our multi-component tests reveals some sources of false positives.
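
Pieced together from the description in the review request below, the token
mechanism might look roughly like this (a minimal sketch only; the dict
layout and the evaluate_passed hook are my assumptions here, not the actual
TestCase.py code under review):

```python
import uuid


class TestCase:
    """Sketch of the fail-token mechanism described in the review request.

    The method names (create_fail_token, remove_fail_token) come from the
    review; everything else here is an illustrative assumption.
    """

    def __init__(self):
        self.fail_tokens = []
        self.passed = True

    def create_fail_token(self, message):
        # Create a token carrying a UUID and the given message, add it to
        # the list, and return it so the issuer can clear it later.
        token = {'uuid': uuid.uuid4(), 'message': message}
        self.fail_tokens.append(token)
        return token

    def remove_fail_token(self, token):
        # Clear a token previously returned by create_fail_token.
        self.fail_tokens.remove(token)

    def evaluate_passed(self):
        # Any outstanding token forces a failure and logs its message.
        for token in self.fail_tokens:
            print("Fail token: %s" % token['message'])
            self.passed = False
        return self.passed
```

The point of returning the token itself is that each module clears only the
tokens it created, so one module's pass can no longer mask another module's
failure.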
- jrose
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviewboard.asterisk.org/r/2302/#review7778
-----------------------------------------------------------
On Jan. 29, 2013, 4:40 p.m., jrose wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviewboard.asterisk.org/r/2302/
> -----------------------------------------------------------
>
> (Updated Jan. 29, 2013, 4:40 p.m.)
>
>
> Review request for Asterisk Developers, Mark Michelson, Matt Jordan, and
> kmoore.
>
>
> Summary
> -------
>
> I was flustered when I found out that the pass/fail state is shared
> between all of a test's modules: once a single module sets a pass, every
> other module has to actively register a failure for the test to fail. So
> I came up with this interesting little fix.
>
> Test objects now contain a list of fail tokens. To add to this list, use
> the function 'create_fail_token(message)'. When called, this creates a
> new fail token carrying a UUID and the given message and automatically
> adds it to the fail token list. It returns a reference to that fail
> token, which the issuer should keep so the token can be cleared later.
>
> If any fail tokens remain in the list when the overall pass/fail state of
> the test is evaluated, the test automatically indicates failure and logs
> the message that was given to create_fail_token for each remaining token.
>
> Tokens are removed from the list with the remove_fail_token(fail_token)
> function, which should be given the value returned by create_fail_token.
>
>
> Diffs
> -----
>
> /asterisk/trunk/lib/python/asterisk/TestCase.py 3617
>
> Diff: https://reviewboard.asterisk.org/r/2302/diff
>
>
> Testing
> -------
>
> I added a few fail tokens to my callparking_timeout/comebacktoorigin_no
> test and observed what happened when I cleared none of them, any one of
> them, a subset of them, and all of them. In every case the right fail
> token(s) were cleared, and the remaining fail tokens caused failures with
> the right messages logged. If no fail tokens were left over, the test
> passed, provided it didn't set failure elsewhere.
>
>
> Thanks,
>
> jrose
>
>
--
_____________________________________________________________________
-- Bandwidth and Colocation Provided by http://www.api-digital.com --
asterisk-dev mailing list
To UNSUBSCRIBE or update options visit:
http://lists.digium.com/mailman/listinfo/asterisk-dev