Suppose you use a test list to execute a set of scripts where some, but not all, of the scripts depend on previous scripts succeeding. You don't want to halt execution on the first failure, but you also don't want to execute a dependent test just to watch it fail. Any suggestions on how best to handle this case?
6 Answers, 1 is accepted
Hello David,
one very simple solution is to use the TestAsStep feature to link dependent tests.
For instance: Test B depends on Test A. You can have Test A call Test B as a TestAsStep at the very end of Test A. That way, if Test A fails, Test B will never run, but the TestList execution will still continue for the rest of the tests contained in this TestList.
Greetings,
Stoich
the Telerik team
David
answered on 11 Jul 2012, 01:56 PM
Thanks Stoich -
My issue with that solution is that I like to have short tests that are sewn together at the test list level. Longer tests make it more difficult to know what is actually being tested, while short tests make it clear what is being tested, especially to those who are just observing the results.
It would be great if there were a way to set dependencies and be able to skip a test when the parent test fails. Without this you might get a lot of false positives, which makes the job of finding the root cause more difficult.
Maybe a future feature for Dynamic lists.
Hi David,
we already have a feature by that name. Check it out:
http://www.telerik.com/automated-testing-tools/support/documentation/user-guide/test-execution/standalone-test-lists.aspx#Dynamic
Let me know whether it offers the functionality you're looking for.
Regards,
Stoich
the Telerik team
David
answered on 16 Jul 2012, 01:30 PM
Thanks Stoich - It looks like the right place to do this sort of thing, but I don't think it has the features I am looking for. I want to be able to say, "run this test only if this other test passed". Maybe the feature or capability is in there, but it isn't jumping out at me.
Hello David,
a few more options come to mind here.
First off, you can create smaller TestLists. From the test settings for specific tests, you can set the entire TestList to fail if one test fails:
http://www.telerik.com/automated-testing-tools/support/documentation/user-guide/test-execution/test-list-settings.aspx
That way you'll establish a dependency between these tests. The drawback here is that in some instances you might have as few as two tests in one TestList.
As for a coded solution: it's doable, but it might not be too practical. You could, for instance, rely on conditions. E.g.
Test A needs to pass. At the end of Test A we set some sort of variable (maybe store it in a file on the hard drive) - TestAPass = true;
At the beginning of Test B we put the following code:
if (TestAPass == false) {
    throw new Exception("Test A didn't pass - failing this test on purpose!");
}
This will cause Test B to fail with the error message:
Test A didn't pass - failing this test on purpose!
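For reference, here is a minimal sketch of how both ends of that flag could look as coded steps in C#. The file path, class name, and method names below are placeholders assumed purely for illustration, not a prescribed Test Studio API - the only Test Studio-specific part would be wiring each method in as a coded step in its respective test.

using System;
using System.IO;

public class DependencyFlag
{
    // Hypothetical flag file shared by Test A and Test B - adjust the path to suit your environment.
    const string FlagPath = @"C:\TestRuns\TestAPass.flag";

    // Coded step placed at the very end of Test A.
    // It only executes if every earlier step in Test A passed.
    public static void MarkTestAPassed()
    {
        File.WriteAllText(FlagPath, "true");
    }

    // Coded step placed at the very beginning of Test B.
    // It fails Test B immediately if Test A never reached its final step.
    public static void RequireTestAPassed()
    {
        bool testAPassed = File.Exists(FlagPath) &&
                           File.ReadAllText(FlagPath).Trim() == "true";

        if (!testAPassed)
        {
            throw new Exception("Test A didn't pass - failing this test on purpose!");
        }
    }
}

One thing to watch out for with this approach: delete the flag file (or overwrite it with "false") in a setup step at the start of each run, so a leftover flag from a previous run doesn't let Test B slip through.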
I hope these suggestions help. If you don't think they'll do the trick, we might need to log a Feature Request for you.
Greetings,
Stoich
the Telerik team
David
answered on 19 Jul 2012, 02:02 PM
Thanks Stoich -
I think the idea of having more, smaller test lists that take advantage of the existing "fail on error" setting is the route to go for now, and it should be workable. In the long run, it seems like it would be a nice feature to add to dynamic lists without overcomplicating them - that said, I totally understand that the beauty of a feature is in the eye of the beholder!
Thanks for working this out with me...
-- David.