
As you might have noticed, I'm writing a few articles on new features in Version 9, which was released not too long ago. This time something relatively short (I have a tendency to write rather lengthy articles), as we're having a look at Mid-Tier's new response time monitoring capabilities.

Picture this: you’re about to log a change request via the system. You log in, open the page. All going well so far, and then ... then, well, not an awful lot really. A loading icon for the first minute or so; you get through it eventually, but every action you take seems to take longer and longer. By the time you’re done logging the change request, you’ve managed to answer ten emails, finish that report you were working on and even check the latest news.

If you ask a network administrator to look into this, he’s going to suggest an awful lot of log files. Could you run an HTTP trace for a while, maybe a few workflow logs while you’re at it? He’ll enable the server logs and say he’ll have a look afterwards. It’s frustrating, and of course I sympathise. But the thing is, there’s only so much you can learn from looking at the server side – if you want to know why things are slowing down on the user side, well, you have to look at it from a user’s perspective.

Yes, I’m one of those people who frequently asks end users to record HTTP traces (although I’m less fond of workflow logs). You see, what I want to know is exactly how long things take. How long does it take for a page to load? How long does it take for a backchannel request to get processed on the server? That’s what tells you where things get delayed. Because remember, there are a lot of places where this can go wrong: the client OS might be too busy, JavaScript might run wild, the network might be overloaded, maybe the server isn't coping. Our role as administrators is to find out where, why and (here’s the important bit) how to fix it.
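
As an aside, any modern browser already exposes the raw numbers I'm after through the standard Navigation Timing API; nothing Remedy-specific is needed to peek at them. A minimal sketch you could run from the browser console:

```typescript
// Break the last page load into phases using the standard Navigation
// Timing API. Run this after the load event so loadEventEnd is populated.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (nav) {
  console.table({
    dns: nav.domainLookupEnd - nav.domainLookupStart, // name lookup
    tcp: nav.connectEnd - nav.connectStart,           // connection setup
    waiting: nav.responseStart - nav.requestStart,    // network + server
    download: nav.responseEnd - nav.responseStart,    // transferring the page
    browser: nav.loadEventEnd - nav.responseEnd,      // parsing, scripts, rendering
    total: nav.loadEventEnd - nav.startTime,          // all values in milliseconds
  });
}
```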


Enter version 9’s new response time monitoring, a feature that should give us a better idea of performance from the client’s perspective. What it gives us is an overview of how long things take. How long does it take for a page to load? How much of that time was spent on the network and on the server? The more seasoned network administrators among you will probably argue that an HTTP tracer like Fiddler is the answer to this, and you’d be absolutely correct. But that’s not necessarily the point where you want to start. I mean, it’s asking quite a lot of an end user to install Fiddler and run it next to the browser for a period of time while eliminating any non-Remedy traffic.

Let’s see what it can do. It’s a server setting that you enable via Mid-Tier’s Config Tool. No restarts or cache flushes required; just enable it. Note that it’s either on or off for the whole Mid-Tier environment. You can’t set this per user.

[mon1.png]

Once it’s on, a little icon is added to the bottom-right corner of each page, showing you details of how long things took.

[mon2.png]

If you click on the information bar, you get a few more details.

[mon3.png]

To measure this data, the browser sends requests to a small servlet called ResponseTimeServlet: one at the start and another at the end. What you see, of course, is the difference. Most of this data is also available when you use Fiddler, but the good thing here is that it’s readily available. No need to turn on your logs; a simple screenshot or two will do.
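
BMC doesn't document the servlet's protocol, so take this as nothing more than an illustration of the bracketing idea: ping a lightweight endpoint before and after the work you're measuring, and report the difference. The query parameter below is my invention, not the real ResponseTimeServlet API:

```typescript
// Illustration only: the bracketing technique behind a response time
// monitor. The "?marker=" parameter is hypothetical, not BMC's actual API.
async function timeWork(midTier: string, work: () => Promise<void>): Promise<number> {
  await fetch(`${midTier}/ResponseTimeServlet?marker=start`, { cache: "no-store" });
  const t0 = performance.now();
  await work();                     // e.g. the page load or a backchannel call
  const t1 = performance.now();
  await fetch(`${midTier}/ResponseTimeServlet?marker=end`, { cache: "no-store" });
  return t1 - t0;                   // the difference is what gets displayed
}
```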


For example, if my form spends a lot of time interacting with the server (and by that I mean Mid-Tier and the AR server), it will be obvious in the Page Load Time data: my server time will be unusually high, which gives me a good indication of where to look. If my load proxy servers are acting up, it will be obvious too: my latency will be far too high and I’d know something isn't quite right. That's the point where I start digging through Mid-Tier logs, or access logs on the server. Or maybe get Fiddler or Wireshark out to see what's going on on the network side. On the other hand, if the Browser Time is higher than I'd expect, it's probably JavaScript gone wild.
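
If you want to reproduce that three-way split yourself from the same browser data, here's a rough sketch. One assumption to flag loudly: it treats the TCP handshake as a stand-in for one network round trip, which is a common but imperfect approximation of latency:

```typescript
// Rough three-way split of a page load into latency, server time and
// browser time. Big assumption: the TCP handshake approximates one
// network round trip, so it stands in for latency.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (nav) {
  const latency = nav.connectEnd - nav.connectStart;      // ~1 round trip
  const ttfb = nav.responseStart - nav.requestStart;      // network + server
  const serverTime = Math.max(0, ttfb - latency);         // estimated server share
  const browserTime = nav.loadEventEnd - nav.responseEnd; // parse, JS, render

  console.log({ latency, serverTime, browserTime });      // milliseconds
  // High serverTime: dig into Mid-Tier / AR server logs.
  // High latency: look at the network and the proxies.
  // High browserTime: profile the JavaScript.
}
```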


But is this the answer to all our performance monitoring questions? I’d argue not. It’s easy enough to enable and it’s not very intrusive – it’s passive in its design, doing its best not to add to the footprint. It can be as simple as asking users to take a few screenshots, and you've got an idea of what’s going on. If more data is needed, you can always get your HTTP tracer or your server logs out for some serious network analysis.

The downside here is that it’s not a log file. It’s user friendly, but I’d be interested in how the application performs during a user’s session. When the user navigates from page to page, how long does this take? Which pieces of workflow take the longest? Am I looking at delays on the browser side, the network side, or the server? And at what points exactly? This tool doesn't answer these questions. It’s a snapshot of one form or one console at one particular moment in time. Although this will certainly get me started, I’m not entirely sure it’s enough.
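
For what it's worth, the building blocks for that kind of session-long log already exist in the browser. A sketch of what I have in mind, using the standard PerformanceObserver to record every request across a session; the /perf-log endpoint is made up, since Mid-Tier offers nothing like it out of the box:

```typescript
// Sketch of a session-long performance log: record every resource the
// page fetches and ship the batch off for analysis when the user leaves.
// The "/perf-log" endpoint is hypothetical, not part of Mid-Tier.
const sessionLog: { url: string; start: number; durationMs: number }[] = [];

new PerformanceObserver((list) => {
  for (const e of list.getEntries() as PerformanceResourceTiming[]) {
    sessionLog.push({ url: e.name, start: e.startTime, durationMs: e.duration });
  }
}).observe({ type: "resource", buffered: true });

// Flush the accumulated log when the tab is hidden or closed.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden" && sessionLog.length > 0) {
    navigator.sendBeacon("/perf-log", JSON.stringify(sessionLog.splice(0)));
  }
});
```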


But hey, that's just my opinion. What do you think? Would you use the new response time monitoring feature? Can you work within its limitations? Should it be user based, maybe in the form of a log file like the workflow log? The only way to know is by giving it a go! (And when you do, leave a comment below to let me know how you got on.)

Until next time,


[blogname.png]


Further reading: