
Performance issues can be difficult to pinpoint in some cases, but one fundamental step in addressing them is to ensure that the internal traffic of a Remedy environment is configured correctly.

 

This blog post will discuss the basics of setting up Queues and Threads for the default queues, and we'll start with a definition & analogy adapted from Mr Doug Mueller himself.

 

 

What is the difference between a Queue and a Thread?

 

A queue is essentially a logical designation for a set of physical threads.

 

A queue is denoted by its RPC Program Number (e.g. 390635) or its purpose (Fast, List, Private, etc.). But it’s more than just a logical name, since a job can actually sit in a queue while waiting for an available thread.

 

Once a job has been assigned to a queue, the next available worker thread for that queue will run the job.

 

When a worker thread is started up, it immediately establishes a database connection.

 

Threads can exist without queues. Enable the Thread log and restart an AR Server and you’ll see threads starting up for a few different things besides the default or private queues. These threads are different from the worker threads associated with the queues.

 

 

NOTE: If you have a Server Group environment, the threads for Escalation, Archive and FTS will not be started on a server if that server doesn't own that operation.

 

 

Further information on queues and threads can be found here:

https://docs.bmc.com/docs/display/ars81/AR+System+server+queues

https://docs.bmc.com/docs/display/ars81/AR+System+server+threads

 

 

What’s the communication flow when talking about Queues & Threads?

 

All API requests come to a single connection point which is the AR Server dispatcher.

 

Any client that needs the job to run on a specific queue will provide the corresponding RPC Program Number when it issues the request. Based upon this RPC Program Number, the dispatcher routes it accordingly and AR Server will run the job on the specific queue for that RPC Program Number.

 

If a client does not specify a queue by way of providing an RPC Program Number, then the default of 390620 is used.  When AR Server receives a request with an undesignated queue then it looks at the specific API call and places the job in the Fast or List queue accordingly.

 

The queues just handle how the work is actually performed.

 

Each of these queues has one or more threads - each thread being a database connection and a processing lane for an API call.
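
To make this routing idea more concrete, here is a minimal conceptual sketch in Python. It is a toy model only – the Dispatcher class, the call-name classification, and the fallback behaviour are illustrative assumptions, not BMC's actual implementation:

import queue

FAST_RPC, LIST_RPC = 390620, 390635

class Dispatcher:
    """Toy model of the AR Server dispatcher routing jobs to queues."""

    def __init__(self):
        # One job queue per RPC Program Number; worker threads drain these.
        self.queues = {FAST_RPC: queue.Queue(), LIST_RPC: queue.Queue()}

    def route(self, api_call, rpc_program_number=None):
        if rpc_program_number is None:
            # Undesignated request: classify the API call as Fast or List work.
            is_search = api_call.startswith(("GetList", "Query", "Export"))
            rpc_program_number = LIST_RPC if is_search else FAST_RPC
        if rpc_program_number not in self.queues:
            # Requested queue is not available: fall back (simplified here to Fast).
            rpc_program_number = FAST_RPC
        self.queues[rpc_program_number].put(api_call)  # job waits for a free worker thread
        return rpc_program_number

d = Dispatcher()
print(d.route("CreateEntry"))       # 390620 - routed to Fast
print(d.route("GetListEntry"))      # 390635 - routed to List
print(d.route("SetEntry", 390621))  # 390620 - 390621 not defined here, falls back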

 

Default Queues

 

In the AR System Server, there is a set of queues. The following exist by default:

 

Queue Name          RPC Program Number    Notes
Admin               390600
Alert               390601                Threads in this queue don't open database connections, therefore don't use many resources.
Full Text Indexer   390602                Queue is created only when FTS is licensed.
Escalation          390603                Work is not queued to it the same way that it is for user-facing queues. More on this later.
Fast                390620                All current versions have a default of 2 threads started for this queue.
List                390635
Flashboards         390619                Queue is created only when Flashboards is licensed.

 

 

You may be wondering where RPC Program Number 390695 is.

It exists, and it is the RPC Program Number that Plugin Servers listen on, so each Plugin Server has its own internal queue for 390695.

 

 

Private Queues

 

Additional Private Queues offer more threads to do the work and more ways to divide the workload by segregating jobs. They also provide a method of throttling down certain activities.

 

Private Queues are represented by an RPC program number in the following ranges:

 

390621-390634

390636-390669

390680-390694
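
As a quick illustration (this uses the Private-RPC-Socket parameter format shown in the tuning section later in this post; the RPC Program Number and thread counts here are just an example), a private queue could be defined in ar.cfg/ar.conf like this:

Private-RPC-Socket:  390621   2   4

This would create a private queue on RPC Program Number 390621 with a minimum of 2 and a maximum of 4 threads.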

 

Private queues will be discussed in a later blog post.

 

 

[Image: Visual of AR System’s multi-threaded architecture]

 

 

The Football Stadium Analogy

 

Think of it the following way. If you were going to a football game, the stadium generally has multiple queues you can enter.

 


 

Queues

 

In a simple case, they may be on the N, E, S, and W sides of the stadium. Now, there may be a special queue in the NE for the "skybox" owners.

 

Threads

 

Each of the queues, the entrances to the stadium, has multiple lanes (with turnstiles) where people can enter. These are the threads. Threads are local to the individual queues. You cannot be in the queue on the N of the stadium and go through an entry lane that is on the S for example.

 

However, if one of the queues is closed, people get routed to one of the queues that’s open so you are not blocked out of the game just because the queue you were targeting is not available. You just get routed to another one that is available (in the AR System case, the FAST/LIST pair of queues is the one that you get routed to if your specific queue is not available).

 

 

So, the system has a set of queues - some pre-defined, some private and defined per site - and each of them has processing threads as configured.

 

One point to take into account here is that if you have too many lanes (threads) in your stadium, you would have a bottleneck right after the spectators got in.

 

They need time to find their seat and sit down, so there is a point of diminishing returns. In AR System’s case, the bottleneck might be the CPU or the network connection to the database. Usually the CPU will be the bottleneck if you have too many threads.

 

 

Points to Note

 

Two important points to take note of are:

 

  1. Any operation that restructures definitions or changes the database MUST go through the Admin queue and will be routed to that queue (an Admin Change). No queue other than the Admin queue will process restructure operations.
  2. Escalations do not process any API calls.

 

Other than these distinctions, any queue in the system can perform any non-restructuring API operation.

 

 

Why Fast & List? Are they relevant?

 

As previously noted, the Fast and List queues perform work that was not designated for a specific queue.

BMC has optimized the system with two different queues by default:

 

 

Fast

Fast APIs are those that are expected to run relatively quickly and that involve discrete operations and activity. This includes operations that create, modify, or delete entries, and also retrieving the details of a single item given the ID of that item.

 

In a database environment, a Save is always expected to be faster than a Query. The same is generally true in AR System.  You get an indication of this when you look at the timeout settings.  Fast operations have a shorter timeout than List operations.

 

List

All the APIs that run on the List queue perform some sort of ‘List’ operation, or query, hence the name.

 

This queue gets all the calls that are often (not always, but often) affected by the end user, or where the speed of the operation is not always controllable. It includes the search calls and operations like exports and running processes on the server from an active link.

 

These operations are often fast, but they have the potential to become long; there is high variability in the performance and throughput of these calls. Depending on how well qualified the search is and what you are trying to retrieve, these calls can return small or large amounts of data. The user often has an influence on overall throughput or performance because they often have some level of control over the qualifications or the amount of data they request.

 

 

Having these queues allows you to adjust the threads needed to focus on the two different classes of operations. Very often, system administrators will find that adjusting the number of threads in one of these queues has a significant impact on performance. If there were no difference in the queues, or in the way the system's load is split between them by default, this wouldn't really be the case.

 

Also, in general, a higher number of threads in the List queue than in the Fast queue is an appropriate configuration of the system. The vast majority of the time, the variability of the search calls and the overall time spent on searching vs. creating/updating dictates that more database connections and processing threads dedicated to searching will give the system better throughput.

 

 

Tuning the threads on the Default Queues

 

Simply tuning the AR System Server queues and threads is an essential first step in troubleshooting performance issues.

The following default queues can have their threads tuned. Here are the recommendations on what to set the thread values to (they can be changed in the Server Information -> Ports and Queues tab or directly in the ar.cfg/conf file):

 

Queue Name          RPC Program Number    ar.cfg/conf parameter                  Recommended Setting
Full Text Indexer   390602                Private-RPC-Socket:  390602   x  y     Private-RPC-Socket:  390602   1  1
Escalation          390603                Private-RPC-Socket:  390603   x  y     x & y should be the same
Fast                390620                Private-RPC-Socket:  390620   x  y     y = 3 * no. of CPUs/cores on the machine
List                390635                Private-RPC-Socket:  390635   x  y     y = 5 * no. of CPUs/cores on the machine

x = minimum thread value, y = maximum thread value
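
For example, on a hypothetical server with 4 CPU cores, applying the table above might give something like the following in ar.cfg/ar.conf (the Escalation thread count of 4 and the min values of 2 are illustrative starting points, not fixed recommendations):

Private-RPC-Socket:  390602   1   1
Private-RPC-Socket:  390603   4   4
Private-RPC-Socket:  390620   2   12
Private-RPC-Socket:  390635   2   20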

 

 

There are a few notes to take into account on the above settings:

 

Full Text Indexer

It is recommended to set the min & max thread count of this queue to 1, as memory contention can occur with higher thread counts. This may appear low, and in some cases there may be a need to achieve parallel indexing – however, if FTS Fortification is performed, there is usually no need for parallel indexing.

 

Escalation

As previously noted, work is not queued to it the same way that it is for user-facing queues.

 

In truth, there isn’t really an Escalation queue; it’s more like a ‘pool’ of Escalation threads. It has multiple assigned threads that get mapped to the pools, and these threads do not pull jobs – each is assigned to a specific pool.

 

However, when discussing the threads used for escalations, it’s easier to use the term ‘queue’.

 

Escalations can be assigned to pools so the escalations from each pool run in parallel on separate threads within the Escalation queue. To use Escalation pools, you must first configure multiple threads for the Escalation queue.
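
For example (the thread count here is illustrative), configuring four Escalation threads in ar.cfg/ar.conf would allow escalations in pools 1 through 4 to run in parallel, each pool on its own thread:

Private-RPC-Socket:  390603   4   4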

 

Then, in Dev Studio, display Escalations and check the Execution Options section; you'll see the Pool Number field. Here you can assign escalations to a specific pool (the thread that it will run on).

 

To see the Pool Number column in the Object List View, perform the following:

 

  1. Window -> Preferences -> Object List View -> Escalations -> Pool Number -> Display = Yes
  2. Log back into Dev Studio and you'll see the Pool Number column in the Object List View for Escalations

 

If you assign an escalation to a pool that has no thread configured, the escalation is run by the first thread.

 

All escalations in a particular pool run on the same thread, so the execution of escalations within a pool is serialized. Escalations run in the order of their firing times, but an escalation is delayed if an escalation from the same pool is currently running. If two or more escalations have dependencies and must not run at the same time, put them into the same pool to make sure they run in sequence.

 

Fast & List

It is generally recommended to use the formula in the table to set the Max thread value for the Fast & List queues as a starting point.

 

You can further gauge the number of Max threads required by looking at the 8.1 API logs. At the end of the API calls, a queue time is written:

 

// :q:0.0s

 

You don’t ever want to see a non-zero queue time. If you do, generally you’d want to increase the thread count.
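
If you need to scan a large API log for these, a small script can help. Here is a minimal sketch in Python; it assumes the queue time always appears in the :q:<seconds>s form shown above, so adjust the pattern to match your actual log format:

import re
import sys

# Matches the queue-time marker at the end of an API call, e.g. ":q:0.0s" or ":q:1.3s".
QUEUE_TIME = re.compile(r":q:(\d+(?:\.\d+)?)s")

def nonzero_queue_times(path):
    """Yield (line number, queue time, line) for every line with a queue time above zero."""
    with open(path, errors="replace") as log:
        for number, line in enumerate(log, start=1):
            match = QUEUE_TIME.search(line)
            if match and float(match.group(1)) > 0:
                yield number, float(match.group(1)), line.rstrip()

if __name__ == "__main__":
    for number, qtime, line in nonzero_queue_times(sys.argv[1]):
        print(f"line {number}: queued {qtime}s -> {line}")

If the script prints nothing, every API call in that log was picked up by a worker thread immediately.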

 

However, you never want CPU utilisation to get much above, say, 70%. So if you have a queue time >0 and CPU >70%, you need to start looking at where you can cut back – and this could be in private queues.

 

More information about tuning can be found here:

 

Version   Link
7.6.04    BMC Remedy AR System Server 7.6 Performance Tuning for Business Service Management White Paper
          BMC Remedy Action Request System 7.6.04 Optimizing and Troubleshooting Guide
8.0       https://docs.bmc.com/docs/display/ars8000/Performance+benchmarks+and+tuning
8.1       https://docs.bmc.com/docs/display/ars81/Performance+benchmarks+and+tuning

 

and in KA292237

 

 

With regard to the min thread settings for the Fast & List queues, there are two schools of thought:

  • Leave the min value as 2 – the default
  • Set the min value to be the same as the max value

 

The reason to leave the min value at 2 is that you want room for growth, but do not want to pre-allocate and waste resources that may not be needed.

The reason to use the same min and max is that you do not want resource creep.
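
Using the hypothetical 4-core example from earlier, the two approaches would look like this for the List queue (values are illustrative):

Private-RPC-Socket:  390635   2   20     (min left at the default of 2)
Private-RPC-Socket:  390635   20  20     (min set equal to max)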

 

Whichever you decide, ensure it is right for your environment.