This is a migrated thread and some comments may be shown as answers.

Pass Extracted Data Between Tests

7 Answers 273 Views
General Discussions
Aaron asked on 16 Nov 2012, 12:34 PM
We ran into a problem with the following test scenario for our Silverlight application:

TestA: test the creation process of an object, which automatically receives a unique ID. Extract this ID.
TestB: find the newly created object in a list of other objects using the ID extracted in TestA. Here we test the search and filter routine.
TestC: test the editing process of this object, part 1.
TestD: test the editing process of this object, part 2.
... (other tests covering the editing of this object) ...

(We tried to modularize the Test Studio project as much as possible to improve exchangeability and reduce redundancy.)
Now, to run all tests we use a Test List containing them. Unfortunately it doesn't seem possible to pass the extracted ID from test to test within a Test List. According to the documentation and two related forum threads, the only solutions would be:

1) Write the extracted data to a file (or another data source) and read it back in every test.
2) Use some kind of parent test that includes TestA, TestB, TestC, ... as steps, so the sub-tests inherit the extracted data from it.

Solution 1) would be acceptable but needs some minor coding effort. Solution 2) has some disadvantages too. For example, it somewhat conflicts with the idea of Test Lists. (Another negative aspect is that you have to combine the storyboards from all sub-tests into a single one if you wish to have screenshots included in the test documentation.)

So, it would be really nice to avoid these workarounds if the extracted data were globally available during the execution of tests in Test Lists. From our point of view, this would be the cleanest, easiest and fastest solution.

Are we missing something?
Is this feature planned for future releases?

7 Answers, 1 is accepted

David answered on 16 Nov 2012, 03:13 PM
We had the same issue and went the write-the-data-to-a-file route. The coding wasn't too bad, actually: we wrapped it in a helper .dll which was easy to code and hook into the test project.

The first test generates the data and writes it out to a .csv in the data folder of the project.  The subsequent tests all use standard data binding to get access to the data.

It has one nice advantage: if you are debugging the fifth test in the list, you can likely just pick up and run that test in quick execute mode, since the data it needs is already primed. The alternate solution we investigated was a global static class that could hold tag/data pairs. That approach lacks the advantage mentioned above (you need to run the list from the beginning to debug), so we abandoned it.
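In case it's useful, here is a minimal sketch of that kind of helper (the class and method names are made up, not Test Studio API): the first test dumps its extracted values into a one-row CSV with a header line, and later tests data-bind to that file.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical helper: round-trips extracted values through a one-row CSV.
// Naive CSV handling (no quoting/escaping), which is fine for IDs and
// simple names but not for values containing commas.
public static class CsvHandoff
{
    public static void Write(string path, IDictionary<string, string> values)
    {
        var lines = new[]
        {
            string.Join(",", values.Keys),   // header row: column names
            string.Join(",", values.Values)  // data row: the extracted values
        };
        File.WriteAllLines(path, lines);
    }

    public static IDictionary<string, string> Read(string path)
    {
        var rows = File.ReadAllLines(path);
        var headers = rows[0].Split(',');
        var data = rows[1].Split(',');
        return headers.Zip(data, (h, d) => new { h, d })
                      .ToDictionary(x => x.h, x => x.d);
    }
}
```

The first test calls Write with the extracted ID, pointing at a file in the project's data folder; subsequent tests simply bind to that CSV as a data source, which is why they also run fine stand-alone in quick execute mode.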

BTW, I agree - this is a common enough scenario that I would like to see a standard solution.
Accepted
Boyan Boev (Telerik team) answered on 16 Nov 2012, 03:24 PM
Hello Aaron,

There is a third option you can use to get such functionality. You can create a public static utility class, which is accessible by all the tests within the test project. There you can hold your values and use them in another test.
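A minimal sketch of what such a class could look like (the names are illustrative; note that this only works while all tests execute in the same process during the run):

```csharp
using System.Collections.Generic;

// Hypothetical utility class living in the test project: because it is
// static, every test in the project sees the same store for the duration
// of the test run.
public static class TestRunStore
{
    private static readonly Dictionary<string, object> Values =
        new Dictionary<string, object>();

    public static void Set(string key, object value)
    {
        Values[key] = value;
    }

    public static T Get<T>(string key)
    {
        return (T)Values[key];
    }
}
```

TestA would call TestRunStore.Set("ObjectId", extractedId) in a coded step, and TestB would read it back with TestRunStore.Get<string>("ObjectId").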

Let me know if this helps.

All the best,

Boyan Boev
the Telerik team
Shashi answered on 16 Nov 2012, 07:02 PM
Aaron,

If I am understanding what you are trying to do, it looks like your tests are not independent of each other, i.e. Test B requires Test A to run first, Test C can run only after Test A and Test B, and so on - is that correct? If so, then I would reconsider that design; it is probably why you are running into these issues. Ideally, every test should be able to run stand-alone as well as with other tests in a suite. When running in a suite, a test should not interfere with any other test in that suite and should not be affected by anything that another test does in that suite.

If your objective is to modularize your tests so that each of these tests can be called by other tests (always a good idea), then there are 2 techniques that I have used:

Option 1:
Create shared tests for individual operations - in your case, create, find, edit1, edit2.  Then implement main tests (which are independent of each other) that call one or more of these shared tests and which pass in and retrieve the necessary data. 
- Shared tests would be data-bound to variables and have InheritParentSource enabled. They will get their values from calling tests (using coded steps that call GetExtractedVariable). If a variable is an output variable (to be consumed by the calling test), set its value using SetExtractedVariable.
- For stand-alone operation of the shared tests (required for debugging purposes since Test Studio does not allow you to step into or break in another test at runtime), you can provide dummy values for the variables in the built-in data of the test. 
- Main tests would set values for input parameters (using coded steps that call SetExtractedVariable) before calling the shared test or retrieve the values of the variables (using coded steps that call GetExtractedVariable) after the call to the shared test. 
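The Option 1 flow can be sketched end to end like this. ExtractedValues below is only a stand-in for Test Studio's extracted-variable store so that the example is self-contained; in a real coded step you would call the corresponding methods on the test base class (check the exact names, e.g. SetExtractedValue/GetExtractedValue, in your version's documentation):

```csharp
using System.Collections.Generic;

// Stand-in for Test Studio's extracted-variable store (illustrative only).
public static class ExtractedValues
{
    private static readonly Dictionary<string, object> Store =
        new Dictionary<string, object>();

    public static void Set(string name, object value) { Store[name] = value; }
    public static object Get(string name) { return Store[name]; }
}

public static class MainTest
{
    // Stand-in for a shared test executed via "Test as Step": it reads its
    // input from the caller and publishes its output the same way.
    static void CreateObjectSharedTest()
    {
        var name = (string)ExtractedValues.Get("objectName"); // input set by the calling test
        var newId = name + "-0001";                           // pretend the app assigned this ID
        ExtractedValues.Set("objectId", newId);               // output for the calling test
    }

    public static string Run()
    {
        ExtractedValues.Set("objectName", "Widget-1"); // coded step before the shared test
        CreateObjectSharedTest();                      // "Test as Step"
        return (string)ExtractedValues.Get("objectId"); // coded step after the shared test
    }
}
```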

Option 2:
Each test can call one or more of the other tests as needed using Test As Step.  For example, TestB will call TestA, TestC will call TestB, etc.  If you don't want predecessor tests to run every time, you could add if/else statements around the calls to the predecessor tests that control when they are called (you will then need to ensure that predecessor tests are called at least once but not called multiple times if that is not desired).

You should find information on implementing shared tests and the InheritParentSource option in the Test Studio documentation as well as in this forum (there have been plenty of conversations on this topic in the past). I am also sure Telerik will help you with the details if you need it.

Hope that helps,
Shashi
David answered on 16 Nov 2012, 07:34 PM
The reason I like to break the tests out and list them separately in a test list is that at the end of the run you have a better idea of what was tested. For example, your scenario might have several things being tested. Suppose your test list was "reporting smoke test":

    -- Create a category.
    -- Create a new report.
    -- Add columns.
    -- Rename a column.
    -- Save the report.
    -- Delete the report.
    -- Delete the category.

There are basic pieces of information that might be needed to be shared, like the name of the report or the category name.  If you put them all together in one master test you get one result "smoke test passed" or "smoke test failed".  With them broken out you can easily see something like "rename column failed".

If I understand your suggestion, Shashi, option 1 may also achieve this, but the tests would be covering many of the same steps over and over again.
Shashi answered on 16 Nov 2012, 08:31 PM
David,

I wasn't suggesting that you have one master test - you would still have one main test per feature. The important thing is to not make one main test depend on another main test having run before it (i.e. the Create Report main test shouldn't fail when run by itself just because the Create Category main test hasn't run). If that is what it needs, the Create Report main test should itself call the Create Category main test (or a shared test that both main tests call).

Also, in my option 1, the tests may be "executing" the same steps - but you do not have multiple copies of the steps. Like I said, you can use logical statements to control that.

Using your example, let's assume that you need a category to be created before a report can be created (I don't know if that is actually the case, but let's assume it for this example). The Create Report main test would look something like this (I am using pseudocode - hopefully you can translate it into actual Test Studio statements):

If (Exists "categoryname")
    // category already exists - nothing to do
Else
    Set Extracted Variable ("categoryname")   // sets the extracted variable used by the CreateCategory test to retrieve the category
    Call CreateCategory                       // this could be a shared test (Option 1) or the Create Category main test (Option 2)
    Verify Exists ("categoryname")
End if

Call CreateReport shared test (Option 1) OR steps to create report (Option 2)
// Both the CreateReport shared test and the inline steps could use the extracted variable "categoryname"
An advantage of using shared tests is that you can have multiple calling tests, each feeding a different value in. For example, if you needed to create different types of reports in one test, you could call the CreateReport shared test repeatedly, setting the value of categoryname each time.

Also, if you have another test that verifies something completely different but which needs to create a category or a report, it could call the appropriate shared test (perhaps with a different value than the CreateReport main test).  

Your test list would have all the main tests and so you can see the status of each one as you want to do.

Hope that helps.

Shashi
Boyan Boev (Telerik team) answered on 19 Nov 2012, 03:13 PM
Hello Shashi and David,

Thank you for your assistance helping other customers; I've updated your Telerik Points.

Kind regards,
Boyan Boev
the Telerik team
Aaron answered on 23 Nov 2012, 07:49 AM
Thanks, everyone. The discussion helped a lot.

(Sorry for my late response.)