Great template; I got up and running quickly with no fuss.
One question I have, though, is about the metrics. If you create CloudWatch Metrics for Number of Concurrent Users, that value is only an average across all LoadTestRunner Tasks. For example, if I configure 150 concurrent users and three LoadTestRunner Tasks are created by default, that is actually 450 users in total. Furthermore, the average response time is somewhat inaccurate because it's an average of averages.
Is there any way to consolidate logging between Tasks, or a way to group the metrics together?
Hi @GregTurner, first of all thanks for trying this project!
The way Taurus prints response times is based on averages and isn't very granular: it aggregates the requests for a given second and then prints the average, and that average is what gets captured as a metric in CloudWatch. However, at the end of the test execution, Taurus prints a Summary that shows response times as percentiles (p50, p90, etc.), which is much better than looking at averages. You should see something like this in the CloudWatch Logs of each container's log stream:
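For illustration, the end-of-test summary looks roughly like this (the numbers here are made up; the format is Taurus's final-stats reporter):

```
INFO: Test duration: 0:05:00
INFO: Samples count: 45000, 0.00% failures
INFO: Average times: total 0.210, latency 0.198, connect 0.012
INFO: Percentiles:
+---------------+---------------+
| Percentile, % | Resp. Time, s |
+---------------+---------------+
|           0.0 |         0.055 |
|          50.0 |         0.175 |
|          90.0 |         0.318 |
|          95.0 |         0.412 |
|          99.0 |         0.785 |
|         100.0 |         1.204 |
+---------------+---------------+
```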
I have posted a question in the Taurus Forums to see if we can get more granular response times while the tests are being executed. This is definitely an area I want to improve in this project.
However, we need to keep in mind that the most important thing when doing performance load testing is to evaluate the behavior of your System Under Test. It's important to monitor and have metrics around the load tests themselves, but don't lose focus on what actually matters, which is monitoring your service itself. You should be learning how it responds, where the bottlenecks are, how it scales, and so on.
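As for grouping the metrics together across tasks: one general-purpose option (plain CloudWatch metric math, not something specific to this project) is to sum the per-task series into a single fleet-wide series, so 3 tasks x 150 users shows up as 450 rather than an average of 150. A minimal sketch, assuming each LoadTestRunner task publishes a ConcurrentUsers metric under a LoadTest namespace with a per-task TaskId dimension; all of those names are assumptions, so substitute whatever this stack actually publishes:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "total_users",
            # SEARCH pulls in every task's ConcurrentUsers series (one per
            # TaskId dimension value); SUM adds them point-by-point into a
            # single fleet-wide series.
            "Expression": "SUM(SEARCH('{LoadTest,TaskId} MetricName=\"ConcurrentUsers\"', 'Average', 60))",
            "Label": "Total concurrent users",
        }
    ],
    StartTime=datetime.utcnow() - timedelta(minutes=30),
    EndTime=datetime.utcnow(),
)

result = response["MetricDataResults"][0]
for timestamp, value in zip(result["Timestamps"], result["Values"]):
    print(timestamp, value)
```

The same SUM(SEARCH(...)) expression also works in a CloudWatch dashboard widget, which is probably the more practical place for it while a test is running.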