I'm not sure I understand what you're after; I think the metrics are pretty straightforward:
- average is the average value over the selected time period
- min is the lowest value during the selected period
- max is the highest value observed during the period
- standard deviation shows how much the values vary; you can find the formula for standard deviation e.g. on Wikipedia (it's a widely used statistical measure, not specific to TM ART)
Does that help at all? Of course, one thing that affects the results (assuming they're not what you expected) is which transactions and locations you've chosen as filters in the TM ART GUI. Also, if the values are higher than expected, you might have some ThinkTime() statements in the script that are inflating your results.
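For concreteness, here's how those four statistics would be computed from a set of raw samples (plain Python, purely to illustrate the math; the sample values are made up and not TM ART output):

```python
import statistics

# Hypothetical transaction response times (seconds) for one period
samples = [1.2, 0.9, 1.5, 2.1, 1.0]

average = sum(samples) / len(samples)   # mean over the period
minimum = min(samples)                  # lowest observed value
maximum = max(samples)                  # highest observed value
std_dev = statistics.pstdev(samples)    # population standard deviation

print(average, minimum, maximum, std_dev)
```

Whether a tool uses the population or the sample standard deviation is an implementation detail; the two differ only slightly for large sample counts.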
Thanks Xta for the reply.
What I want to know is this: for a particular metric, say Transaction Response Time, are average, min, max and std. deviation calculated from the actual values, or from the normalized values? If the metrics are calculated from the normalized values, how are the normalized values calculated for a given run of a transaction? What is the formula used?
Our monitor is configured to use dynamic bounds.
Thanks & Regards,
Kalyan Krishna Nethy.
Transaction Response Time is the time it took to execute the entire script. If you have ThinkTime() statements, those seconds are included in it as well.
Page Times are response times for individual pages you've loaded up in a web/HTTP script.
So for example, if your monitor first goes to the main page and then clicks a link to the URL "News", you should get separate Page Time values for the main page and the news page. The pages are usually named something generic like "Site page #1", "Site page #2", etc., unless you specifically name them in the script.
Personally, I don't use page times at all; I create all timers manually so that I have full control over what is included in each response time.
As to your earlier question about the metrics in relation to normalized values, I don't think there's any connection in that sense: the average is always the average of the actual values, no matter where they fall on the 0-100 index scale. It works the other way around; the normalized values are calculated from the original metrics.
The formula works roughly like this: response times below boundary 1 yield a performance rating of 80-100, and response times above boundary 2 result in a rating of 0. Response times between the two boundaries are rated between 50 and 80. The dynamic boundaries themselves are calculated from the average and standard deviation, to estimate the typical/expected behavior of the monitored system.
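Just to make the shape of that mapping concrete, here's a rough sketch in Python. The band edges (80-100 below boundary 1, 50-80 between the boundaries, 0 above boundary 2) come from the description above, but the linear interpolation within each band is my assumption; the authoritative formula is in the TM ART Central User Guide:

```python
def performance_rating(response_time, boundary1, boundary2):
    """Map a response time to a 0-100 rating using two boundaries.

    Illustrative sketch only: the bands follow the forum description,
    but the interpolation inside each band is assumed, not documented.
    """
    if response_time >= boundary2:
        # Slower than boundary 2: worst rating
        return 0.0
    if response_time <= boundary1:
        # Faster than boundary 1: rated 80-100 (faster is better)
        return 100.0 - 20.0 * (response_time / boundary1)
    # Between the boundaries: rated 80 down to 50
    fraction = (response_time - boundary1) / (boundary2 - boundary1)
    return 80.0 - 30.0 * fraction
```

With dynamic bounds, the boundaries themselves would then be derived from the observed average and standard deviation (e.g. something like average plus a few standard deviations), rather than being fixed by hand.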
You can find a more detailed description about this (with pictures) in the TM ART Central User Guide.
Excellent answer, Xta!
Only a couple of comments, and these are somewhat tangential to the question you're answering:
- If you are integrating TM ART with any other BMC product (BPPM, PATROL, or the Portal), then you should focus on Custom Timers. Page Timers are not captured in any of the integrations, as we felt the amount of data would be an issue. Custom Timers give you total control over the data fed into BPPM and the others, and they also let you break your transaction into logical chunks that make sense to your users, rather than reporting on a collection of web pages. The one exception is that the most recent BPPM/TM ART Adapter added a Detailed Diagnostic feature that lets you capture Page Timers on demand for diagnostic purposes (i.e., those timers are not fed into the analysis engine).
- If, like Xta, you don't find Page Timers useful, they can be eliminated by hand-editing the scripts. Each of the page-fetching functions takes an argument that provides the page name. If you set it to a null string, written as two quote marks with no intervening space (""), the page timer will not be captured. Getting rid of uninteresting Page Timers can significantly reduce the load that TM ART places on the database where it stores results.