- The tool only works for positive test cases, not negative ones. If someone changes the code and something breaks, there is no way for me to know where it is broken. There should be a filter that lets me run only the negative test cases, which may have been fixed by now.
- When I recompile, my recording does not execute that code. For example, I have Page A and Page B: the first time I ran and recorded against both pages, but then I changed Page B and replayed the test, and this time Page B does not run.
- There should also be a comparison of functions between exe1New and Exe1Old, so I can easily check which functionality is broken.
Looking forward.
Ashok Kumar
3 Answers, 1 is accepted
Item 1) I don't understand why/how you say it only works for positive test cases. Can you elaborate on what you mean by the tool not working for negative test cases?
We highly recommend making granular tests. For example, it's considered bad practice to make a single automated test that covers both successful login cases (where the userid and password are known to be good) and rejected login cases (where the userid and password should be rejected). It is better to have one automated test that verifies all of your positive test cases and a separate automated test that verifies all of your negative test cases. By taking this approach, your test results make it crystal clear which functionality is and is not working correctly.
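To make this concrete, here is a minimal sketch of what granular tests look like, written with pytest purely for illustration (Test Studio records its own steps); the login helper and the credentials are hypothetical stand-ins for whatever your application exposes:

```python
# Illustrative only: one test per concern, so a failure immediately
# identifies the behaviour that regressed.

def login(user, password):
    """Hypothetical stand-in for the application's login logic."""
    valid_accounts = {"alice": "correct-password"}
    return valid_accounts.get(user) == password


def test_login_accepts_valid_credentials():
    # Positive case: a known-good userid/password pair must be accepted.
    assert login("alice", "correct-password") is True


def test_login_rejects_invalid_credentials():
    # Negative case: bad or unknown credentials must be rejected.
    assert login("alice", "wrong-password") is False
    assert login("mallory", "anything") is False
```

If the rejection logic breaks, only test_login_rejects_invalid_credentials fails, so the failing test name alone tells you which behaviour regressed.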
Item 2) "When I do recompile..." It's unclear what you're referring to here. Are you talking about changes to your application or changes to the test? I think you mean changes to the application.
Also, please clarify what you mean by "now I changed page B and again replay then Page B wont run this time." I am sorry, but I don't understand exactly what the problem is. Are you trying to say that you changed Page B of your application and now the test that exercises Page B doesn't work? Why should we expect the automation to continue to work when the application got changed out from under it? Can you demonstrate to me in what way the test no longer works? Depending on how well your application was put together to handle automation, and on what exactly got changed in the application, the automated test may or may not work. We need to investigate deeper to understand the cause of the test failure.
Test Studio is immune to most small changes in the application under test. Our Element Repository works by recording attributes that define how to locate the elements (e.g. id=X, src=Y, href=Z). As long as these attributes of the specific elements involved do not change, the automated test should continue to work just fine.
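As a rough illustration of that principle (this is not Test Studio's actual implementation), an attribute-based locator depends only on the attributes that were recorded, so unrelated changes to the page do not affect it:

```python
# Conceptual sketch of attribute-based element location. The recorded
# attributes (e.g. id, href) form the contract; everything else about
# the element or the surrounding page may change without breaking the match.

def find_element(dom_elements, recorded_attributes):
    """Return the first element whose attributes match every recorded one."""
    for element in dom_elements:
        if all(element.get(key) == value
               for key, value in recorded_attributes.items()):
            return element
    return None


page = [
    {"tag": "a", "id": "loginLink", "href": "/login", "class": "nav-item"},
    {"tag": "a", "id": "helpLink", "href": "/help", "class": "nav-item"},
]

# The recording stored id and href; a later change to the class attribute
# or to the page layout would not break this lookup.
assert find_element(page, {"id": "loginLink", "href": "/login"}) is page[0]
```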
Item 3) How would you like to see this comparison displayed in the UI? Can you show me a mock-up example? After creating a test list and executing your tests as part of it, you will have a history of which tests passed and which failed. From there you can investigate when tests passed and when they failed.
Also, as I touched on earlier, if you make granular tests you will know almost instantly which features and logic of your application are working and which are not, simply by looking at which tests pass and which fail. This is the right way to determine the current status of the most recent build of your application: run your test suite and review the list of test failures.
Cody
the Telerik team
Thanks Cody,
Let's go point by point, and maybe I can elaborate more.
My first point was about positive and negative test cases; I attached an example.
I have 3 buttons and each button opens one pop-up screen. The first time, I ran the test with all three pop-up screens and saved my recording.
Now I commented out the code for button 2 and recompiled, so the bin changed.
In the given example I commented out the code for button 2, so the next time I ran Test Studio with the recording it failed on button 2. What I would like is for the test case to execute button 1, button 2 and button 3, and at the end of the run give me the result,
with the places where my application failed marked in red.
Right now it is easy to find because the application is small, but in a big application, once something is broken I cannot test the other parts because the test has already stopped at the break.
You can find the code attached; let me know if you have any concerns.
Thank you for the example application demonstrating what you're trying to describe. I would like to point out a few things about your testing approach:
1) Clicking on buttons 1, 2 and 3 represents 3 separate features of the example application. Best practice says there should be a separate test for each feature, which means there should be one test per button, not one test that tries to click all three buttons and test all the features at once.
2) The specific test step that clicks on button 2 really should not fail when you comment out the code inside the event handler for button 2. The button was still present in the UI, it was clickable, and the test automation did successfully click on it. There is no reason to fail that test step.
However, the follow-on test steps that try to interact with the popup window should fail because the popup window no longer opens. And they do fail, as demonstrated in the attached test result file. The test correctly failed at the right point: the first test step that tried to interact with the popup window. If I had separated that test into three separate tests, as explained in item 1 above, I would have 2 passing tests and 1 failing test, making it immediately crystal clear which functionality of my application is working and which is not, simply by looking at which tests pass and which fail. I don't need to look at the individual test steps to determine where the broken functionality is.
3) You can take advantage of our "Continue on failure" feature to force the test to continue executing when a specific test step fails. This should only be used on steps where the remainder of the test would still be valid if that particular step fails (verification steps for example).
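As a generic illustration of that idea (again, not Test Studio's implementation), a continue-on-failure style test collects every failure, keeps going, and only reports at the end; the open_popup helper below is a hypothetical stand-in for clicking a button and waiting for its popup:

```python
# Generic "continue on failure" pattern: exercise every button, record
# each failure, and raise only at the end so one broken feature does
# not hide the status of the others.

def open_popup(button_number):
    """Hypothetical stand-in for clicking a button and waiting for its popup."""
    if button_number == 2:  # simulate the commented-out button 2 handler
        raise RuntimeError("popup 2 never appeared")
    return f"popup {button_number}"


def test_all_buttons():
    failures = []
    for button in (1, 2, 3):
        try:
            open_popup(button)
        except Exception as error:
            failures.append(f"button {button}: {error}")
    # Report every broken feature at once instead of stopping at the first.
    assert not failures, "; ".join(failures)
```

Whether you use this pattern or three separate tests, the outcome is the same: one broken button no longer hides the status of the other two, and the report at the end shows exactly which features failed.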
Cody
the Telerik team