What is computer performance?  Those of us in the field think we know what it means, but do we really?  Is the user’s view of performance what we actually measure? 


I found myself pondering this question while talking to one of my computer-illiterate friends.  He's super-smart, but has no patience to learn the technical underpinnings of his PC.  I can relate – I don't actually know how my flat panel TV works, and have only a vague notion of how my car moves me from one place to another.  I just expect them to work.  That's how he feels about PCs.  When he is browsing, opening a program or trying to print, he blames the PC itself for any slowdowns.  I found him searching for faster machines, using the GHz rating to determine the relative speed. 


Any performance analyst worth their salt is shaking their head right now.  Clock rating is only one of many components we need to look at to understand performance.  We pat ourselves on the back; we are so much more knowledgeable than the average user.  But do we have tunnel vision too?  Does our pool of data include every aspect of the end-user experience? 



We work in silos of data ourselves.  My friend's silo is very limited – he sees only one piece of hardware, and only one aspect of that hardware, as the problem.  But when I did performance for a living, we weren't all that much better.  We had a metric called response time (yes, mainframes measure that), but it wasn't really what the user saw.  That number ran from the moment a request arrived at the mainframe to the moment the response left it.  The back-end network, any other servers involved, and the internet were completely ignored. 


First, we need to know what we are measuring and what we should be measuring.  We want to clock from the moment the end user hits “enter” to when he receives a response on whatever device he chose.  Fortunately, there are solutions that can simulate this or actually measure user interactions. We simply have to employ them to get a real number.
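To make that concrete, here is a minimal sketch of the idea in Python. It simply clocks an arbitrary user action from start to finish – the equivalent of starting the stopwatch when the user hits "enter" and stopping it when the full response arrives. The `action` callable is a stand-in for whatever real or simulated transaction you want to time; real monitoring products do far more, but the measurement boundary is the point.

```python
import time

def measure_end_to_end(action):
    """Clock a user-visible transaction: from 'enter' pressed
    until the complete response is back in the user's hands."""
    start = time.perf_counter()
    result = action()          # the whole transaction, not just one tier
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical usage: wrap a simulated transaction and time it end to end.
result, seconds = measure_end_to_end(lambda: (time.sleep(0.05), "ok")[1])
print(f"user-perceived response time: {seconds:.3f}s")
```

The design point is that the timer wraps the entire action, so network, middleware and rendering delays all land inside the number – unlike the mainframe-only response time described above.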


Second, we have to understand what a transaction is to our user.  Not what we think of it as in IT terms, but the real business transaction that we deliver.  It helps to have a way to map it – which servers does it traverse?  Which networks? What data stores does it need?  We have always tried to measure the components of response time, the “speeds and feeds,” or more accurately, the “using and the waiting times,” but now, we can’t understand where the problem is unless we know the transaction path.   Again, there are tools that help you build a CMDB, automatically discovering the assets and relationships.  But you have to know that this needs to be done.
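Once the transaction path is mapped, you can break total response time down by hop and see where the "using and waiting" is actually happening. The sketch below uses made-up hop names and millisecond figures purely for illustration; a real CMDB-driven tool would discover these components and feed in live measurements.

```python
# Hypothetical transaction path: each hop records service time
# ("using") and queue/network delay ("waiting"), in milliseconds.
path = [
    {"hop": "web server", "using": 12, "waiting": 3},
    {"hop": "app server", "using": 45, "waiting": 8},
    {"hop": "database",   "using": 30, "waiting": 110},
    {"hop": "network",    "using": 0,  "waiting": 25},
]

# End-to-end time is the sum across every hop the transaction traverses.
total = sum(h["using"] + h["waiting"] for h in path)

# The biggest contributor tells you where to look first.
worst = max(path, key=lambda h: h["using"] + h["waiting"])
print(f"End-to-end: {total} ms; biggest contributor: {worst['hop']}")
```

In this invented example the database hop dominates – and notice that it is the *waiting* time, not the service time, that hurts. Without the full path you would never see that.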


Finally, you have to move all of this to a proactive approach: set thresholds and monitor, so you detect and repair issues before the user sees them (or buys a new PC because his is "too slow").  You need to do this because most users blame the owners of a web site or program for their performance woes, not their PC.  And that means they blame you.  And understand this – cloud will not fix this problem for you; it only makes it more difficult.  Get this right now, using the right automation tools, so you can limit those panicked help desk calls.  Be a performance hero by understanding what your users mean by performance.
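The proactive step above can be sketched as a simple threshold check: instead of waiting for a user to call, you watch the measured response times and raise an alert once a few consecutive samples breach the SLA. The threshold and breach count here are invented defaults; real monitoring tools offer much richer alerting logic.

```python
def check_threshold(samples_ms, threshold_ms, breach_limit=3):
    """Return True if response times breach the threshold for
    `breach_limit` consecutive samples -- alert before users notice."""
    streak = 0
    for t in samples_ms:
        if t > threshold_ms:
            streak += 1
            if streak >= breach_limit:
                return True   # sustained breach: fire the alert
        else:
            streak = 0        # one good sample resets the streak
    return False

# Hypothetical usage: an SLA of 200 ms, breached three times in a row.
print(check_threshold([100, 250, 260, 270], 200))  # alert fires
print(check_threshold([100, 250, 150, 260], 200))  # isolated spikes, no alert
```

Requiring consecutive breaches is a deliberate choice: it filters out one-off spikes so the help desk isn't paged for noise, while still catching the sustained slowdowns users actually feel.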