
Welcome to October's new AR Server Blog post.


ARLogAnalyzer has long been the tool of choice to make sense of the complex logging generated by ARServer.

This tool can take one or many logs, sort them all timewise, and parse through them to provide analysis on what your APIs, SQLs, and Escalations are doing.
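The timewise merge can be illustrated with a quick sketch: if each log line begins with a timestamp (the file names and line format below are invented for the example), files that are each already in time order can be interleaved chronologically with a merge, which is conceptually what the tool does before parsing:

```shell
# Two tiny example logs, each already in time order (format invented for illustration)
printf '2024-10-01 10:00:02 API call A\n2024-10-01 10:00:05 API call B\n' > api.log
printf '2024-10-01 10:00:03 SQL statement X\n' > sql.log

# Interleave them chronologically; with leading ISO-style timestamps,
# lexical order equals time order, so a merge of sorted files suffices
sort -m api.log sql.log
```

The merged stream then shows the SQL statement in between the two API lines, which is what makes cross-log analysis possible.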

The analysis it generates is extremely valuable for examining performance, not only to solve reported performance issues but also to proactively review the system and prevent issues before end users notice them.


Prior versions of this tool used Perl and existed for many years with only minor modifications to keep them working with current versions of ARServer.  ARLogAnalyzer Version 3 uses Java and brings several exciting new features along with fixes for several defects.  The look and feel is the same, so there is only a very small learning curve, and any Java version 1.8 or higher should work fine.



Some of the key features of ARLogAnalyzer 3.0 are:

  • Reads in, parses, and analyzes one or more log files, or an entire folder, in a single command
  • Provides overall statistics from the combined logs, such as thread counts for each thread type, total number of logged users, APIs, Forms, and more
  • Deep analysis of API calls, including the longest-running APIs and statistics grouped and sorted by Client Type, IP address, User, and Queue
  • Deep analysis of SQL calls, including the longest-running SQLs and statistics grouped and sorted by Table, User, and Queue
  • Deep analysis of Filter workflow, including the longest-running Filters and statistics grouped and sorted by Most Executed, Most per Transaction, Most Levels, and more
  • Deep analysis of Escalations, including the longest-running Escalations, Delayed Escalations, and statistics grouped and sorted by Form and Pool #
  • Drill-down into the complete logs based on thread Id



ARLogAnalyzer 3.0 is documented in Knowledge Article 000373578, which contains the install package along with simple instructions to install it and start using it.

The package includes the utility, a readme file, the User Guide (webTemplates/ARLogAnalyzer.html), and several example batch files and shell scripts that you can use to get started right away.

The following batch files and shell scripts have been provided:

  • Analyzes all files in the provided directory.  Text-only output.
  • Analyzes all log files (*.log*) in the provided directory.  Text-only output.
  • Analyzes all log files (*.log*) in the provided directory.  Web output.
  • Analyzes all files in the provided directory.  Web output.
  • Analyzes all files in the provided directory.  Web output.  Creates zip.
  • Analyzes all log files (*.log*) in the provided directory.  Web output.  Creates zip.
  • Analyzes a single provided file.  Text-only output.
  • Analyzes a single provided file.  Web output.
  • Analyzes a single provided file.  Web output.  Creates zip.




Launching the new utility is easier than ever.  There are multiple ways to run it:

1.  Drag and drop a single file or a folder onto the appropriate batch file provided (Windows).

2.  Execute one of the batch files or shell scripts according to your needs (Windows or Unix).

3.  Execute the Java command line to run ARLogAnalyzer.jar with the appropriate options.
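As a sketch of option 3 (the jar path and log folder here are illustrative; -n and -l are the two options this article mentions elsewhere, and the full list is in the User Guide):

```shell
# Illustrative invocation only; consult the User Guide for the real option list.
# -n controls the "Top N" report size and -l forces the locale, per this article.
cmd="java -jar ARLogAnalyzer.jar -n 50 -l en /tmp/Logs"
echo "$cmd"
```

The provided batch files and shell scripts are essentially convenience wrappers around invocations like this one.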


The steps to execute the java command line are documented in the Knowledge Article as well as the User Guide.


You will notice that whereas older versions required you to prepare the logs ahead of time with arpreparelogs.exe, the new version prepares the logs and analyzes them in a single step.

It does NOT create an intermediate file unless you specifically use the "-prepare" option from the command line.

Also, the time to perform the analysis is significantly lower.  In our internal test cases, we could analyze 8 GB of files in about 20 minutes.  See the "Large Log Sets" section in the Knowledge Article for performance configuration settings.



Accessing the Report

If you used text-only output you can simply open the output file in a text editor/viewer.


If you used one of the provided batch or script files with Web output, a folder will have been created with the original folder name plus " analysis".

In this folder will be an index.html file.  Open index.html to view the report.
The report will contain the following sections, depending on which log types you analyzed:

  • General
  • API Aggregates
  • SQL Aggregates
  • Escalation Aggregates
  • Filter Statistics


Reading the Report
All of the information in the text-only report is also in the web report, and the web report additionally allows you to drill down into the actual log lines, so this article will focus on reading the web report.


This is what the main navigation pane looks like:



General Statistics
A good place to start is the General Statistics, where you can ensure that the log analysis looks appropriate.  Key items to check are Start Time, End Time, and Elapsed Time, to confirm that the logs cover the correct time frame.  Notice that every individual thread referenced in the logs is counted.




If you analyzed separate logs for API, SQL, Filter, and/or Escalation, you can see how well they align timewise by looking at the Logging Activity.



From this example, you can see that the Filter logs, which are far more verbose than the other logs, covered a much smaller time window.


Much of the analysis refers to the log File Number and the log line.  These can be referenced from the Input Filenames.



API Aggregates
If your focus is end-user slowness or other performance issues that impact end users, a good place to start is the Top 50 API calls (based on the -n option used in the command).
This shows the duration, Queue Name, API call, and other information about the longest-running API calls.




You can click on a line number to drill down into the API call to find out what made it take so long to complete.  If you included SQL and Filter logging, you can attempt to isolate a long-running Filter or SQL call.  The drill-down is color coded to make it easy to follow: API calls are blue, SQL statements are green, and workflow (Filters and Escalations) is black.  Each activity toggles between a shaded and unshaded background so it is easy to trace the beginning and the end.




There are several other API reports available to help understand API and client behavior.



There are two new reports, Group by Client and Group by Client IP, that will help you identify whether a specific client type or a specific IP address is responsible for excessive activity.


The Thread Statistics report has been modified to contain better information about how much a thread is in use.


You can see how often a thread was too busy to respond to a call so the call had to queue (Q count and Q Time), as well as the percentage of total time that the thread was busy performing work (Busy%).  Count is the total number of API calls processed on that thread.  Q count is the number of API calls that could not get a thread right away and had to wait in the queue.  Q Time is the total amount of time that API calls waited in the queue.
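As a worked example of reading those columns (all figures invented): if a thread was busy for 900 seconds of a 3600-second log window, Busy% would report 25%:

```shell
# Invented figures: 900 s of work inside a 3600 s log window
busy=900
elapsed=3600
awk -v b="$busy" -v e="$elapsed" 'BEGIN { printf "Busy%% = %.1f%%\n", 100 * b / e }'
```

A thread with a high Busy% and a growing Q Time is a good candidate for deeper drill-down.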


SQL Aggregates

You can analyze the database performance by looking at the SQL statistics, starting with the Top 50 Longest SQL calls (based on the -n option used in the command).



Like the Top 50 API calls, you can click on a line number to drill down. This will take you to the per-thread log file information related to that SQL call.


There are several other SQL reports available to help understand the ARSystem and database SQL behavior.



Escalation Aggregates

You can analyze Escalation behavior by looking at the Top 50 (based on the -n option used in the command) and Delayed Escalations.  This can help you understand why escalations take so long to complete or do not run at the expected time.





There are several other Escalation reports available to help understand the behavior of your escalations.



Filter Aggregates

If you have long running API calls or Escalations that create or modify records, you may have to analyze your Filters.  You can use the Filter statistics to help understand how Filters are affecting performance.


For example, you can see which Filters fire most often and which transactions run the most Filters.  This is one method of understanding why CPU usage might be high, since both checking and running Filters consume CPU.






Localization Note:

ARLogAnalyzer attempts to auto-detect the locale of the date/time fields and handle them properly.  If this does not work, you can provide the 2-letter locale code using the -l (lowercase L) command-line option.  It was noted late in the development cycle that some non-English ARServer language features prevent filter data from being processed properly.  This is being worked on for a later release, but for now, analysis for non-English ARServers may not provide proper Filter results.




Working with BMC Support

When you encounter a performance issue, you probably are aware that BMC Support is going to want to gather some logs so that they can analyze them.

  1. You can start the process by enabling SQL, API, Filter, Escalation (if needed) logging for a time duration in which the performance issue was observed.
  2. Then turn off logging and move all those logs into a separate folder, say D:\Logs (Windows) or /tmp/Logs (Unix).
  3. Now for Windows, you can simply drag and drop the D:\Logs folder onto the analyzeFolderLogsFilesWebZip.bat batch file.

On Unix, you can go to the /tmp directory and run "/opt/ARLogAnalyzer/ Logs" (using your actual ARLogAnalyzer path).

On both operating systems, two things will happen:

    • A new folder called "Logs analysis" will be created containing the web report.  You can analyze this report on your own to try to narrow down or resolve the problem.
    • A zip file called "Logs" will be created.

You can now open a case with BMC Support providing both the zip file and any results from your own analysis.




Now, go open the Knowledge Article and start using the new Java-based ARLogAnalyzer 3.0.