Dealing with Timeouts

This is a migrated thread and some comments may be shown as answers.
Justin asked on 06 Nov 2017, 02:46 PM

Any recommendations to deal with timeouts?  

I have some tests that return results when run independently, but when they run in a suite they overload the server and a cascade of timeouts occurs.

I've tried using a wait step at the beginning of each test case to allow the server some recovery time, but I feel like there is a better way to do this.  

Nikolay Petrov
Telerik team
answered on 09 Nov 2017, 09:35 AM
Hello Justin,

Thank you for this question. I would suggest increasing the Timeout setting; by default it is set to 2000 ms. I hope this helps in your case.

Best Regards,
Nikolay Petrov
Progress Telerik
Justin
answered on 15 Dec 2017, 06:16 PM

Yes, it does help, but my question has evolved a bit after using the tool.

I've been able to give each step its own independent timeout setting. But due to the nature of our API (and probably most APIs), some of these manual timeout settings don't hold up when an entire test suite is being executed. A timeout setting of 10 seconds is fine when running the request standalone, or in a single test case, but not when running multiple test cases in a suite.

My solution was to create a wait step at the beginning of each test case to give the server a rest before starting new execution. This is also imperfect: to make it through the test suite, the wait steps have to be very long (60+ seconds per test case), which really slows down test execution.

My idea was to make these wait steps variable, i.e. if a step fails due to a timeout, wait x seconds; if a second timeout occurs, wait 2x seconds. I'm having trouble figuring out how to accomplish this logic through the tool.
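Roughly the logic I'm imagining, sketched in plain Python outside the tool (the URL, timeout and wait values here are just placeholders, not anything from our actual project):

```python
import time
import requests

def call_with_backoff(url, timeout_s=10, base_wait_s=10, max_attempts=3):
    """Retry a request after a timeout, doubling the wait before each retry."""
    wait = base_wait_s
    for attempt in range(1, max_attempts + 1):
        try:
            return requests.get(url, timeout=timeout_s)
        except requests.Timeout:
            if attempt == max_attempts:
                raise                 # give up after the last attempt
            time.sleep(wait)          # first timeout: wait x seconds
            wait *= 2                 # second timeout: wait 2x seconds, and so on
```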

Oleg
Telerik team
answered on 18 Dec 2017, 02:14 PM
Hello Justin,

The short answer is: unfortunately no, you do not have many options for achieving your scenario. Please read below for a longer explanation and some ideas for changing the use case itself (mostly I am wondering whether you really need to change the timeout in the first place).

This is an interesting use case indeed. We already have a feature request in our feedback portal (for controlling timeouts globally or with variables), but unfortunately we have not implemented it yet. Please feel free to vote for the feature request or leave your comments; this will increase its priority for implementation.

The case with Wait steps is the same: just like the Timeout setting, they can only accept a numeric value and cannot be bound to a variable in the test project, which would be helpful in your use case. I hope we will be able to update both functionalities soon. You could stack several wait steps in a row and execute zero, one or more of them according to a condition, but that would pollute each of your test cases with multiple redundant wait steps.

I was thinking about workarounds to bypass that limitation (such as a combination of variables, coded steps, goto steps and conditions, or overwriting the timeout properties directly in the json files of the tests with a batch/PowerShell file or a coded step, according to the environment, before executing the tests), but none seems really maintainable or reasonably efficient (and again would pollute your tests with redundant steps).
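Just to illustrate the json-rewriting idea, a rough sketch (the folder layout and the "Steps"/"Timeout" property names here are assumptions - you would have to adapt them to how your project files actually look):

```python
import json
from pathlib import Path

def bump_timeouts(project_dir, new_timeout_ms):
    """Rewrite the timeout property of every step in every test file before a suite run."""
    for test_file in Path(project_dir).rglob("*.json"):
        data = json.loads(test_file.read_text())
        changed = False
        for step in data.get("Steps", []):        # assumed property name
            if "Timeout" in step:                 # assumed property name
                step["Timeout"] = new_timeout_ms
                changed = True
        if changed:
            test_file.write_text(json.dumps(data, indent=2))

# e.g. bump_timeouts("MyApiTestProject", 60000) before running the full suite under load
```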

I would like to ask though, just in case: would you consider it reasonable to set big-enough timeouts for all of your http requests, such that they are sufficient when you run your full project (i.e. when the server is under the load of all tests)? The point of timeouts is to prevent the test run from waiting forever (hanging) and, even more importantly, to cause a test to fail if the server does not handle a request in an acceptable timeframe (thus acting as a performance threshold). Since running the complete test project at once is usually its main purpose (while running a separate test is usually a debugging activity), all tests' properties are usually expected to reflect the expected behavior of the server in that condition. (I am thinking in the context of running tests automatically by a Continuous Integration job, for example.) Could you set all timeouts to the maximum delay that you would consider acceptable from your server under load? This way, if you run a single test manually (with no load on the server), it will also pass, because that timeout will be more than enough in this case.

If you wish to test the performance of the server in the scenario of a single request, you could use a separate project that executes one or more selected tests with lower timeouts.

What you have mentioned as an option - setting wait steps at the beginning of test cases - would really help to decrease the load on the server in the first place. In other cases you could use a goto step to create a loop and keep executing a single test step (and one or more optional wait steps before/after it, according to a certain condition) until the request returns the desired result. This is useful in scenarios where a server performs some slow operation in the background and you need to poll many times until it is ready for you to continue with the next test step. Unfortunately this cannot work in the case of a request that times out, because a timeout fails the request step (and the entire test case) and would not allow you to execute a goto step after it. You need the server to be able to return some kind of a true/false result (or anything) in order for your http request step to finish normally and provide information to the successive coded or goto step (this is where a "continue on failure" option for test steps would help, which is in our backlog but not yet implemented). At the same time you cannot build such a loop outside the boundaries of a single test, because we do not yet support the "execute-test-as-a-step" feature (which is also forthcoming).
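In plain code, the polling loop I mean looks roughly like this (only an illustration of the pattern, not the tool itself; the status endpoint and the "ready" field are placeholders):

```python
import time
import requests

def wait_until_ready(status_url, poll_interval_s=5, max_polls=20):
    """Poll a status endpoint until the server reports its background operation as done."""
    for _ in range(max_polls):
        response = requests.get(status_url, timeout=10)
        if response.ok and response.json().get("ready"):   # placeholder condition
            return response
        time.sleep(poll_interval_s)
    raise TimeoutError("server never reported the operation as finished")
```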

I guess I might be missing some context behind your need to change the timeout values when running a single test case, so please let me know if that is the case and I will be happy to assist further.

Regards,
Oleg
Progress Telerik
Justin
answered on 18 Dec 2017, 02:47 PM

Thanks for the quick response Oleg.  

With some more development time, I've actually found an interesting workaround: moving the tests with performance issues to the beginning of the suite, which has improved the overall performance of the run. And yes, I am going to increase the wait time to a max threshold for each test.

Also, I was attempting to implement the conditional wait step as you suggested, but it wasn't immediately obvious how to determine at run time whether a test failed due to a timeout. Do you have any advice on how to accomplish this?

Justin

Oleg
Telerik team
answered on 18 Dec 2017, 04:52 PM
Hi,

That's a great workaround indeed, I am glad you have a working solution.

As for the conditional wait step, I am not sure which scenario you are trying to implement. The one I mentioned cannot be achieved: the wait step would either come after the http step that times out, where it would never run (nothing after a failed http step is executed), or it would sit in the next test, where it would not be useful, because you cannot rerun the previous test that has already failed.

If your idea is to use a wait step at the beginning of the next test, i.e. to "cool down" the server when a previous test has timed out so that the next test has a chance to execute normally, then that would be just a bit more feasible. But I am afraid you cannot know why a previous test or step has failed. There is no such concept in Test Studio for APIs and we have never considered one as a use case - tests are not designed to know about the pass or fail outcome of a previous test in general.

The only workaround that I can think of is not very pretty: at the end of each test you could add a coded step that sets a success-state variable on the project level. Every test, when starting, could read that variable in the condition of a wait step: if the variable holds a "fail" value, execute the wait step, otherwise skip it and proceed with the main steps of the test. To avoid variable clutter you could reuse the same variable across all tests, but then you would need to reset it to a "fail" value at the beginning of each test (right after the wait step has used it), so that it only ends up as "success" if the test reaches its final step.
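Sketched outside the tool, just to show the control flow of that workaround (the variable name and timing values are purely illustrative):

```python
import time

# project-level variable shared by all tests; default is the "success" state
project_variables = {"previous_test_passed": True}

def run_test(test_steps, cooldown_s=60):
    # conditional wait step: cool the server down only if the previous test failed
    if not project_variables["previous_test_passed"]:
        time.sleep(cooldown_s)
    # pessimistic reset, right after the wait step has used the variable
    project_variables["previous_test_passed"] = False
    try:
        for step in test_steps:
            step()                        # any failure leaves the flag at "fail"
    except Exception:
        return                            # the runner moves on to the next test
    # final "coded step": only reached when every step above has passed
    project_variables["previous_test_passed"] = True
```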

Please let me know if that is not exactly what you were referring to or if you need any other assistance.
 
Regards,
Oleg
Progress Telerik
Justin
answered on 18 Dec 2017, 05:19 PM

Thanks Oleg,

I believe the quoted part below gets to the heart of my question. I was unsure how to set a success/fail value on a project-level variable through code. There isn't much in the online documentation that covers this type of scenario.

[quote]Oleg said:

The only workaround that I can think of is not very pretty: at the end of each test, you could add a coded step that sets a success-state variable on a project level.

[/quote]

 

 

Oleg
Telerik team
answered on 20 Dec 2017, 05:43 PM
Hi Justin,

The documentation shows how to set a runtime variable from a coded step in general (here), and for the scenario in question the coded step you need is no different from the basic scenario shown in the documentation.

I am attaching a simple demo project to my response that demonstrates the full scenario: a test that optionally (if it passes and reaches its last step) sets a "success" variable, which the next test reads in order to execute a wait step in case the first test failed. I created this project in an even simpler way - using Set Variable steps instead of coded steps to set the "success" variable - but there is one coded step at the end of the project, just to show the coded approach.

Regards,
Oleg
Progress Telerik