
Remedy AR System


In a recent engagement with a Remedy customer, we looked into a performance issue and discovered that the slowdown in the database stemmed from "enq: index contention" waits.


Drilling down further into the waits, we found that they were on the "S" tables corresponding to the Remedy forms "SMT: Social_FollowConfig" and "HPD: WorkLog". These "S" tables were introduced in Remedy's newest implementation of RLS (row-level security), which was CA (Controlled Availability, limited to a few chosen customers) in version 1902 and GA (General Availability) in version 1908.


The SQL statements that were waiting for an ITL (Interested Transaction List, maintained at the Oracle block level) slot to open up were performing INSERTs into the two tables. The number of initial ITL slots created in each block is specified by the INITRANS parameter, which defaults to 1 for tables and 2 for indexes.


The default value of 2 for an index means that TWO transactions can each take one slot and perform an INSERT/UPDATE/DELETE operation on rows in the block.


If a third or fourth transaction comes along and needs to work on rows in the block while both slots are taken, Oracle can allocate additional slots, up to the table's/index's MAXTRANS parameter, provided there is space available in the block.
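To see which indexes are accumulating ITL waits, a query along the following lines can be run as a DBA user (v$segment_statistics and dba_indexes are standard Oracle views; the index name is a placeholder):

```sql
-- Segments that have accumulated ITL waits since instance startup
SELECT owner, object_name, object_type, value AS itl_waits
  FROM v$segment_statistics
 WHERE statistic_name = 'ITL waits'
   AND value > 0
 ORDER BY value DESC;

-- Current INITRANS setting of a suspect index
SELECT index_name, ini_trans
  FROM dba_indexes
 WHERE index_name = '<index name here>';
```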


For the customer in question, we had them increase the value of INITRANS for the indexes that were experiencing concurrency waits, from 2 to 10 for one index and from 2 to 15 for the other.


The change can be accomplished by executing the following SQL command:

SQL> alter index <index name here> INITRANS <new value here>;
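One caveat worth noting: ALTER INDEX ... INITRANS only affects blocks formatted after the change. To apply the new value to the blocks the index already contains, the index can be rebuilt, for example:

```sql
-- Rebuild with the higher INITRANS so existing blocks also get the extra slots
ALTER INDEX <index name here> REBUILD INITRANS 10;
```

A rebuild takes time and locks on a busy system, so it is best scheduled for a quiet period.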


BMC Software has identified an unauthenticated Remote Code Execution (RCE) vulnerability in Remedy Mid Tier.

Mid Tier versions 9.1, 18.05, 18.08, and 19.02, including their service packs and patches, are affected by this vulnerability.

For more information about this issue and the resolution, see the following links:


Thanks to Raphaël Arrouas and Stephane Grundschober for responsibly disclosing this vulnerability to BMC.


Best regards,


John Weigand
R&D Program Manager
BMC Software




In my last blog post I wrote about how to use a Tomcat container and war files for mid-tier testing.  In this one I'd like to show you how the combination of a database container and the Remedy silent install process can be used to speed up test server deployment.  Other advantages of this approach include:


  • being able to consistently reproduce a system in the same state.
  • less concern about disposing of a system after testing as it is easily recreated.
  • use of database backups provides options for quickly restoring test systems to known good states.


The option to run Remedy with a flat file database went away many moons ago (bonus points if you can name the last version to offer this) and the current versions require either MS-SQL or Oracle, both of which are large and complex pieces of software.  The effort required to download, install, and configure these databases can add significantly to the time taken to create a test environment.  Wouldn't it be nice to be able to run a few commands and have a new database instance up and running, ready for use?  Containers to the rescue!


Both MS-SQL and Oracle are available as containers which means that a lot of the work needed to get them set up has already been done.  The Oracle container is more complex to manage, as well as being larger, so this article will focus on using MS-SQL.




If you've read any of my previous articles you won't be surprised to find that we're going to be using a Linux system for our tests, specifically a CentOS 7 virtual machine.  At this point those of you that are familiar with the Remedy compatibility documents may be wondering about the combination of MS SQL and Linux.  There are two things to consider, firstly the use of Linux for the database platform and, secondly, whether a Linux based AR Server can talk to an MS SQL database.


Remedy has always been very platform agnostic with regards to the OS used to host the database.  If it looks like an MS SQL server, runs like an MS SQL server and squeaks like an MS SQL server, there's a very good chance that Remedy will run just fine.   On the second point, one of the consequences of the move to Java for the AR platform in version 9.0, was that the AR Server started using JDBC in place of the native database drivers of earlier releases.  This means that it is possible BUT NOT SUPPORTED for an AR Server running on Linux to use an MS SQL database.  Please note the highlighted comments in the previous sentence!  Yes, a Linux AR Server will work with MS SQL, but this should only be used for test systems as, at the time of writing, BMC DO NOT support this combination and you use it at your own risk.


OS Update and Docker Install


Start by making sure that the operating system is up-to-date, rebooting if many packages or the kernel are refreshed.


# yum update


Create working directories to store our files.  If you don't use /docker you will need to substitute your choice in some of the later commands.


# mkdir -p /docker/mssql

# cd /docker


If you haven't already installed Docker use these steps to add the software repository, install, and start the Docker engine.


# yum-config-manager --add-repo

# yum -y install docker-ce docker-ce-cli

# systemctl start docker


Confirm that Docker is running with:


# docker version


Version: 18.03.1-ce

API version: 1.37

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:20:16 2018

OS/Arch: linux/amd64

Experimental: false

Orchestrator: swarm


We're also going to use a tool called docker-compose to help manage the database container configuration.  Note that the version used in the command below may not be the latest; check the documentation if you want the most recent.


# curl -L$(uname -s)-$(uname -m) -o /bin/docker-compose

# chmod a+x /bin/docker-compose

# docker-compose --version

docker-compose version 1.20.1, build 5d8c71b


MS-SQL Container


Microsoft publish container images for SQL 2017 on Linux in a public repository and there are some additional command line tools for MS-SQL that we will use.

# curl > /etc/yum.repos.d/msprod.repo

# ACCEPT_EULA=Y yum -y -q install mssql-tools unixODBC-devel


To help make the management of the container a little easier we're going to use docker-compose.  This allows us to put all of the configuration options in a file rather than having to remember them each time we want to run a container.  Here's an example, copy and save this as a file called mssql2017.yml in the /docker directory:


version: '3'

services:
  mssql2017:
    image: <mssql 2017 image here>
    container_name: mssql2017
    hostname: mssql2017
    ports:
      - 1433:1433
    volumes:
      - /docker/mssql:/var/opt/mssql
    environment:
      - MSSQL_SA_PASSWORD=P@ssw0rd


The various options we've used are:

image: the image used to create the container.

container_name: mssql2017
A friendly name for the container.

hostname: mssql2017
Sets the hostname rather than using one that is automatically generated.  This will help us when we come to install AR as this is also used to name the database.

ports:
  - 1433:1433

Maps container ports to the outside world.

volumes:
  - /docker/mssql:/var/opt/mssql

Creates a shared volume to allow the database files to be stored on the docker host file system rather than inside the container.

environment:
  - MSSQL_SA_PASSWORD=P@ssw0rd

Passes environment variables used to set up the database.  Please note these requirements for the password from the Microsoft documentation:

The password should follow the SQL Server default password policy, otherwise the container can not setup SQL server and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols.


All that is now required to start a database container is a single command:


# docker-compose -f mssql2017.yml up -d

Creating mssql2017 ... done


We can check that the container is running using the docker ps command and then query the database using the sqlcmd tool we installed earlier:


# docker ps

CONTAINER ID IMAGE                                      COMMAND                    CREATED     STATUS       PORTS                  NAMES

0ed2f8602ce9 "/opt/mssql/bin/sqlserver" 1 hour ago Up 8 seconds 0.0.0.0:1433->1433/tcp mssql2017


# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P P@ssw0rd -Q "select getdate()"


2019-03-04 09:42:55.583

(1 rows affected)


We can also see that the container has created the default database files under the /docker/mssql directory:


# tree /docker/mssql/


├── data

│   ├── master.mdf

│   ├── mastlog.ldf

│   ├── modellog.ldf

│   ├── model.mdf

│   ├── msdbdata.mdf

│   ├── msdblog.ldf

│   ├── tempdb.mdf

│   └── templog.ldf

├── log

│   ├── errorlog

│   ├── errorlog.1

│   ├── HkEngineEventFile_0_131963630182660000.xel

│   ├── log.trc

│   ├── sqlagentstartup.log

│   └── system_health_0_131963630191840000.xel

└── secrets

    └── machine-key


Believe it or not, that's all it takes to get a running MS-SQL Server instance on Linux!


The database container can be stopped with:


# docker-compose -f mssql2017.yml stop

Stopping mssql2017 ... done


Installing Remedy (quietly...)


One of the challenges of setting up a Remedy test system is the time taken to run all of the installers, particularly if you're also installing the ITSM Suite.  There's an added complication if you're using a headless Linux server in that using the installer GUI requires a suitable X-Windows server running on a PC to display the interface.  One of the under-used (I think) Remedy features is the ability for the installers to run in a silent mode, using a configuration file to provide all of the inputs that you usually type in using the GUI interface.  Using this option has a number of benefits:

  • no need for a GUI environment on Linux.
  • can be scripted so installations may be run remotely/unattended.
  • consistent and repeatable which makes it easier to recreate environments for testing.

All of the Remedy components can be installed using this silent mode and the process is covered in the relevant pages on the BMC docs website: 


Installing BMC Remedy AR System using silent mode - Documentation for Remedy Deployment 9.1 - BMC Documentation

Installing BMC Atrium Core using silent mode - Documentation for Remedy Deployment 9.1 - BMC Documentation

Performing the installation in silent mode - Documentation for Remedy Deployment 9.1 - BMC Documentation


So how does it work?  When you unpack the installer you will find an example options file, usually in a directory called utility.  This is a text file template that provides details of how to run the installer in silent mode and lists all of the options, and values where appropriate, that you need to provide.  The exact contents will vary depending on the product but the minimum required set of options may be a lot less than you expect.


Let's see what we would need to have in the silent options file to install an AR Server on the Linux system where we're running our MS-SQL container.


# cat silent_ar.txt

<lines snipped>

-A featureARSystemServers

<lines snipped>

-J BMC_DATABASE_DBA_LOGFILE_NAME=/var/opt/mssql/data/ARSysLog

<lines snipped>

-J BMC_AR_SERVER_NAME=arserver01

<lines snipped>

-J BMC_AR_PORT=46262
You can see we're providing the Java path, product options such as language choice and sample data, database and Demo user credentials - everything that you would usually enter via the GUI.  A similar file could be used for an Oracle database, but some different options, explained in the sample file, would need to be used.  Once this file has been created we just have to include it on the command line when running the installer:


# ./setup.bin -i silent -DOPTIONS_FILE=/path/to/silent_ar.txt


The installer will run, displaying some output on the screen and creating the usual log files, and set up the server.  Note that the file above does not include the mid-tier, so you'll either have to add the options to install this or use a container-based one created using the earlier blog post.


Once your server is up and running you can use the same process to install CMDB, AI, ITSM, SLM, SRM and so on...  With a bit of practice you could set up a script to create a full ITSM system from scratch with a single command.  Set it running as you leave for the day and have it ready in the morning, perfect for testing.
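As a sketch of what such a script could look like (the product directory names and option file paths here are made-up examples, not the installers' actual layout - the real install commands are commented out):

```shell
#!/bin/sh
# Run each product's silent installer in sequence.
# Directory names and option files are examples - adjust to your unpacked installers.
set -e
for product in arsystem atriumcore itsm; do
    echo "Installing $product ..."
    # ( cd /docker/installers/$product && ./setup.bin -i silent -DOPTIONS_FILE=/docker/silent_$product.txt )
done
echo "All installers finished."
```

With `set -e` the script stops at the first installer that fails, so a morning check of the output tells you how far it got.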


There are other ways to use the MS-SQL container without installing Remedy from scratch, the next section looks at backing up and restoring the database from an existing AR Server which can then be configured to use the container based database.


Restore an Existing Remedy Database


If you don't want to wait for the silent install steps, and happen to have an existing Remedy system using MS-SQL, you could take a database backup and restore it into the container.  Transfer the .Bak file from the original system to the /docker/mssql/data directory - let's assume it is called ARSystem.Bak.  The sqlcmd utility is then used to perform the restore:


# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P P@ssw0rd -Q "restore database ARSystem from disk='ARSystem.Bak' with replace, move 'ARSystem_data' to '/var/opt/mssql/data/arsys.mdf', move 'ARSystem_log' to '/var/opt/mssql/data/arsyslog.ldf'"


You may need to change the database, file, and path names depending on what your current system uses.  Progress will be shown as the restore takes place:


Processed 189816 pages for database 'ARSystem', file 'ARSystem_data' on file 1.

Processed 184 pages for database 'ARSystem', file 'ARSystem_log' on file 1.

Converting database 'ARSystem' from version 706 to the current version 869.

Database 'ARSystem' running the upgrade step from version 706 to version 770.

<lines snipped>

Database 'ARSystem' running the upgrade step from version 867 to version 868.

Database 'ARSystem' running the upgrade step from version 868 to version 869.

RESTORE DATABASE successfully processed 190000 pages in 3.662 seconds (405.344 MB/sec).


Then we need to create an ARAdmin account and make it the owner of the newly restored database, again some changes may be necessary to reflect your local names:


# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P P@ssw0rd -Q "create login ARAdmin with password='arsystem', default_database=ARSystem, check_policy=off"

# /opt/mssql-tools/bin/sqlcmd -S localhost -d ARSystem -U sa -P P@ssw0rd -Q "exec sp_changedbowner ARAdmin, true"


Query the database as the ARAdmin user to confirm it has been restored as expected:


# /opt/mssql-tools/bin/sqlcmd -S localhost -U ARAdmin -P arsystem -Q "select schemaid, serverid, currdbversion from control"

schemaid serverid currdbversion

----------- ----------- -------------

4088      2            58

(1 rows affected)


That's it, we've created a new MS-SQL instance in a container and restored a Remedy database that's ready to use.  If you change the Db-Host-Name (and Db-user/Db-password if required) in the ar.cfg of the AR Server that was using the original database, setting it to the name of your Docker host machine, it should be able to run using the restored copy.
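For example, the relevant ar.cfg entries might end up looking like this (the host name dockerhost01 is a made-up example; Db-password is normally stored in encrypted form, so it is best changed through the server rather than edited by hand):

```
Db-Host-Name: dockerhost01
Db-user: ARAdmin
```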



I've started using this type of setup for most of my test environments and find it very flexible.  The ease and speed of creating a new database, along with the ability to install various combinations of Remedy products with minimal interaction, means that I no longer find it necessary to keep many different VMs lying around just in case I need a particular version.  Of course there are cases where this approach is not suitable, longer term test and development systems for example, but I'd encourage you to give it a go next time you need a system for a quick test or have a new version you want to evaluate.


Questions, comments & feedback are welcome.


Introduction: This blog will give you a brief idea of the prerequisites to consider before performing an AR System platform upgrade using the installer. It also covers prerequisites to consider while applying a patch/hotfix through D2P.


Basic configuration checks to be performed before upgrading the Remedy platform:


1. Validate the ARSystemInstalledConfiguration.xml file. Perform the following steps:

    a. Verify that the ARSystemInstalledConfiguration.xml file exists in the <AR Installation Directory> folder. If the file does not exist, do not proceed with the upgrade. You can copy the ARSystemInstalledConfiguration.xml file from another server having the same version and make the server-specific changes (such as host name) in the file.

    b. Check the product feature map section. The product feature map section contains all the features that are installed.
        If a feature that is currently running on the system is missing from the list, you must add it to the ARSystemInstalledConfiguration.xml file.

        For example:
                 <productFeature backupOnUpgrade="false" id="featureARSystemServers"


<productFeature backupOnUpgrade="false" id="featureARServer"


<productFeature backupOnUpgrade="false" id="featureAREALDAPDirectoryServiceAuthentication"


<productFeature backupOnUpgrade="false" id="featureARDBCLDAPDirectoryServiceAuthentication" independentOfChildren="false" parent="featureARServer" rebootRequiredOnInstall="false" rebootRequiredOnUninstall="false" rebootRequiredOnUpgrade="false" requiredDiskSpaceMode="default.linux" state="INSTALLED" visible="true">


     c. Before starting the upgrade, verify the release version and the major and minor versions.

         For example, <version majorVersion="1" minorVersion="00" releaseVersion="9"/>,

         which indicates that the current version installed is 9.1.00.


     d. Verify that the following properties have correct hostnames/IP addresses.










      e. If the installed version is 9.1.04 or later, verify the 'BMC Remedy MidTier File Deployer' property.

          This property should exist for 9.1.04 or later versions.


      <name>BMC Remedy MidTier File Deployer - </name>

      <environmentVariable scope="SYSTEM">






2. Verify that the service name in the system registry is the same as the 'AR_Server_Host_Name' property in ARSystemInstalledConfiguration.xml (WINDOWS SPECIFIC)


When you create a clone of an existing environment, you must also update the Windows registry in the cloned environment. If you do not update the registry, the following issues may occur:

  • Throwable=[ D:\Program Files\BMC Software\ARSystem\armonitor.exe (The process cannot access the file because it is being used by another process)]
  • Throwable=[ D:\Program Files\BMC Software\ARSystem\arcatalog_eng_W_win64.dll (The requested operation cannot be performed on a file with a user-mapped section open)]


    a. Verify the service name from the registry:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <BMC_AR_SERVER_NAME>


    c. Under the registry key below, verify the JVM-related options:

    • JARs are pointing to the correct paths.
    • All required JARs exist.


HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <BMC_AR_SERVER_NAME>\Parameters


    d. Verify the Email Engine service name in the registry against the one present in ARSystemInstalledConfiguration.xml


        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Email Engine - <BMC_AR_SERVER_NAME> 1

    e. Verify that the Email Engine parameters are:

    • Pointing to correct paths.
    • All required JARS exist in these paths.



f. Verify the Flashboards service name in the registry against the one present in ARSystemInstalledConfiguration.xml

           HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Flashboards Server - <BMC_AR_SERVER_NAME>



g. Under the registry key below, verify the JVM-related options:

    • JARS are pointing to the correct path
    • All required JARS exist

           HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Flashboards Server - <BMC_AR_SERVER_NAME>\Parameters



        If the current installed version is 9.1.04 or later, verify the following:

        h. Verify the File Deployer service name in the registry against the one present in ARSystemInstalledConfiguration.xml

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy MidTier File Deployer - <BMC_AR_SERVER_NAME> 1


i. Under the registry key below, verify the JVM-related options:

    • JARs are pointing to the correct paths.
    • All appropriate JARs exist.

          HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy MidTier File Deployer - <BMC_AR_SERVER_NAME> 1\Parameters



3. Check the servgrp_board table. The opFlag should be “1” for the Administrator server.

4. Make sure that object modification logging is disabled. If turned on, it causes slowness during installation because the entire AR Server metadata undergoes changes during installation.

    Link for Object modification logs:


5. If you are upgrading from 7.x/8.x to 9.x or later, check the ‘Application Statistics Configuration’ form. All forms listed here must be part of some application.


6. Check the DB version in the Control table. Refer to the following link to verify that the DBVersion column of the Control table has the same value as the AR Server version.
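The value can be read directly from the Control table with a query such as the following (run as the Remedy schema owner; the column name is the one AR System uses for the database version):

```sql
-- Database version recorded by AR System; compare against your AR Server version
SELECT currdbversion FROM control;
```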


7. Execute the following queries to find out if there are invalid views in the database. Work with your DBA to fix or remove invalid views; this is one of the most common causes of upgrade failure.

Step 1: Execute the following SQL:

select count(1) from user_objects where status = 'INVALID' and object_type = 'VIEW'

Step 2: If the output is greater than 0, execute the following script to identify the list of forms associated with invalid views:

select name, schemaid,
       case when overlayprop = 0 then 'Unmodified'
            when overlayprop = 1 then 'Base'
            when overlayprop = 2 then 'Overlay'
            when overlayprop = 4 then 'Custom'
       end as CustomizationType
from arschema
where 'T' || schemaid in (select object_name from user_objects
                          where status = 'INVALID' and object_type = 'VIEW')

Step 3 (Compile invalid views): Execute the following script to compile invalid views; change the schema name if it is different from ARADMIN.
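The compile script itself is not reproduced here, but a minimal PL/SQL sketch of the idea, assuming the ARADMIN schema, could look like this:

```sql
-- Recompile every invalid view owned by ARADMIN (change the owner if needed)
BEGIN
  FOR v IN (SELECT object_name
              FROM dba_objects
             WHERE owner = 'ARADMIN'
               AND object_type = 'VIEW'
               AND status = 'INVALID')
  LOOP
    EXECUTE IMMEDIATE 'ALTER VIEW ARADMIN.' || v.object_name || ' COMPILE';
  END LOOP;
END;
/
```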



8. Make sure that the ‘Temp’ directory is clean before performing a subsequent upgrade attempt.


9. Metadata inconsistency is one of the causes of upgrade failure. Make sure you run the checkdb utility before the upgrade and resolve any relevant inconsistencies.





The following error was seen during an upgrade attempt at one of our customers:


(Apr 11 2019 04:36:52.311 PM +0200),SEVERE,com.bmc.install.product.arsuitekit.platforms.arsystemservers.arserver.ARServerOracleManageUpgradeDatabaseTask,

  LOG EVENT {Description=[[SQLERROR] [DESCRIPTION] Failed to upgrade the database schema],Detail=[[SQLERRORCODE]=0 [SQLMESSAGE]=Failed to run SQL statement [ALTER TABLE SERVGRP_RESOURCES MODIFY ( JMSRESOURCES CLOB NOT NULL )] Due to [ORA-22296: invalid ALTER TABLE option for conversion of LONG datatype to LOB


(Apr 11 2019 04:36:52.312 PM +0200),SEVERE,com.bmc.install.product.arsuitekit.platforms.arsystemservers.arserver.ARServerOracleManageUpgradeDatabaseTask,

  THROWABLE EVENT {Description=[Failed to upgrade the database schema]},

Throwable=[java.sql.SQLException: Failed to run SQL statement [ALTER TABLE SERVGRP_RESOURCES MODIFY ( JMSRESOURCES CLOB NOT NULL )] Due to [ORA-22296: invalid ALTER TABLE option for conversion of LONG datatype to LOB



The failed SQL statements are built by the upgrade installer using the information in ‘ReadARServerDatabaseModel.xml’ and ‘TransformedARServerDatabaseModel.xml’. These files are created by the upgrade installer in the ‘Temp’ directory during installation.


There were two issues in this scenario:


  • The installer was trying to alter a column in the SERVGRP_RESOURCES table, but the column had already been altered. This can happen after multiple upgrade attempts without cleanly reverting the database and the file system.
  • The SQL statement was trying to convert the column to a CLOB and apply the ‘NOT NULL’ option in the same ALTER, which Oracle does not allow for a LONG-to-LOB conversion. In-house we had not seen NULL values in this column, but for this customer the values were NULL. Therefore, we manually applied the same changes (CLOB and NOT NULL) and deleted everything from the ‘Temp’ directory before the upgrade.



NOTE: If you run into these issues, contact BMC support to get appropriate assistance. Do not perform any manual changes on the database without consulting BMC Support.



Basic Configuration checks to be performed before applying a patch / hotfix using D2P:

1. Make sure that correct server entries exist in the ‘AR System Monitor’ form. Delete orphan or duplicate entries and restart the File Deployer service on each of the servers in the server group.


If the issue still persists, go to the AR System installation directory and make sure that the ‘’ file is present. It should have a unique GUID for each server.


If the GUIDs are not unique, you must create entries with unique GUIDs. Perform the following steps:

    • Delete the file.
    • Remove all entries from the AR System Monitor form.
    • Restart the File Deployer service.

AR System Monitor Form



2. Make sure to wait a few minutes after you import the package before clicking the ‘Deploy’ button.


3. Make sure that the ‘AR System Single Deployment Payload’ form and the ‘AR System Single Deployment Status’ form do not have any orphan records for the payload you are trying to deploy.


4. Make sure that the File Deployer service is running on all servers in the server group.


5. Make sure all the processes that are part of the AR System service in the armonitor.cfg file start without any issue. Check the armonitor.log file to verify this.

     One of our customers had duplicate entries for the pluginsvr process. This caused “Address already in use” errors, visible in armonitor.log, and deployment of one of the payloads failed due to the error.


6. If any process fails to start or stop during deployment, check the following.

     The following error will be visible in the File Deployer log:

     ITSM Deployment Rollback caused by error: com.bmc.arsys.filedeployer.PayloadProcessor - Process BMC:NormalizationEngine failed to start.

  • The File Deployer signals ARMonitor to start/stop the processes associated with the payload. If a specific process fails to start during deployment, the related logging will be available in the armonitor.log file, with a log statement along the lines of "Starting Process <processName>" (basic logging should be enough).
  • If the process can be restarted manually, then to troubleshoot from the D2P perspective, execute the ARMonitor_Admin.bat file located in the ARSystem directory to start/stop the specific process.


7. Prior to 1808, the process targeted by a payload had to be running before the payload could be deployed; otherwise the payload would fail. For example, the Email Engine process, the DSO process, and so on.

     The following workaround can be used to bypass this without starting that particular process:

  • Import the D2P package.
  • Click Deploy (this creates the payload entries).
  • Run the query below to remove the associated process entry (for example |;BMC:EmailEngine):
  • UPDATE <schemaid of ‘AR System Single Point Deployment Payload’> SET C49110 = 'BMC:ARServer|;BMC:JavaPluginServer|;BMC:DSOJServer|;BMC:CarteServer' WHERE C49102 = '<Payload GUID>'

        Get the value of C49110 by viewing the payload’s ‘Process Type’, copy it, and remove only the Email Engine entry from it.


  • Start the utility (arpayloadutility.bat) for deployment.

We have seen requirements where admins would like to create users with limited admin privileges, for example access only to selected forms such as User/Group, the Admin console, and the Server Group Admin console, or limited access to Dev Studio, such as granting both Base Development and Best Practice modes, or only Best Practice mode.

For such needs there is a feature in Remedy called Struct Admin, which is documented on the following pages:

Special groups in BMC Remedy AR System - Documentation for Remedy Action Request System 9.1 - BMC Documentation

Struct Admin group permissions - Documentation for Remedy Action Request System 9.1 - BMC Documentation


This blog post demonstrates how Struct Admin users can be created and used. Two use cases are covered here; you may try more cases based on your needs by reading the documentation pages.


Case 1 - Full Struct Admin User:

This user should be able to access the Server Information Console (AR System Administrator Console), the Server Group Log Management Console, the Centralized Configuration console, and the User and Group forms. In Dev Studio, the user can use both Best Practice Customization mode and Base Development mode.

Detailed steps to achieve this use case are available at Configuring Full Struct Admin.docx



Case 2 - Overlay Struct Admin User:

This user should be able to access the Server Information Console (AR System Administrator Console) and the Server Group Log Management Console. In Dev Studio, the user only has access to Best Practice Customization mode.

Detailed steps to achieve this use case are available at Configuring Overlay Struct Admin.docx




Thanks to Chandrakumar Palanisamy for his valuable time and inputs in creating this blog.


Hope you will find this blog useful.


Note: Though basic testing is done for the use cases, extensive testing is left to the user. Make sure to test your implementation and use case properly before making anything live on PROD.


In interactions with customers, it is often noted that getting table/view information is a back-and-forth operation between Support/PE and the customer.


The accompanying zip file contains SQL scripts, written for the Oracle database, that will help gather information about a table or view in one fell swoop, so to speak.



The Table_Info script has been updated to now show Function-Based Normal indexes too.



"table_info_wrapper.sql" can be run as SYS or ARADMIN (or the Remedy schema owner as the case may be).

"table_info_wrapper.sql" accepts the name of the table to be queried and calls 2 other scripts, one of which gathers index statistics.


"view_info_wrapper.sql" needs to be run as SYS.

"view_info_wrapper.sql" accepts the name of the view to be queried and gets its DDL information from dbms_metadata.get_ddl.
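As a rough illustration of the kind of call the view script wraps, the snippet below pulls a view's DDL with dbms_metadata.get_ddl. This is a sketch only: the view name T100_VIEW is a placeholder, not a name taken from the actual scripts.

```sql
-- Illustrative only: fetch the DDL of a hypothetical ARADMIN view
SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0
SELECT dbms_metadata.get_ddl('VIEW', 'T100_VIEW', 'ARADMIN') FROM dual;
```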

Share This:


The Remedy 19.02 release includes a new feature called “User Preference Theme”. This is a Remedy MidTier feature and, as the name suggests, it allows users to set a preference to visualize the mid-tier UI with selected theme colors.

Users now have the flexibility to select a theme from the list available in a drop-down, giving the MidTier UI a different look and feel than just the out-of-the-box one.

Remedy customers (administrators) can build custom themes (CSS files) and publish them to the ‘User Preference Theme’ drop-down list, thereby allowing customers to define themes as per corporate branding guidelines.

This feature is user specific, so each user can choose their own theme or go with the default theme (the current look and feel of the MidTier).

The Remedy administrator can enable themes at the company level; if themes are disabled, all users will continue to see the default theme.


Pre-requisites: Remedy MidTier 19.02 (or above) is required to enable ‘User Theme’.



Customers have been using the “Skins” feature in Remedy, where the customer can change the look and feel of a field, form, etc. With the advent of the Themes feature, the administrator has a choice to either use Skins or use Themes. However, both cannot be used together.

The Skins feature applies to all users in the system, whereas the Themes feature is “user preference” based. The Remedy admin can enable or disable Themes; once enabled, each user has the choice to select a theme.

Skins require Remedy development skills to change the look and feel. Themes, however, are CSS based and offer a wide range of options for changing the look and feel.

User Theme can be enabled and disabled through MidTier Configuration at run time.

Themes apply to MidTier forms only, not to any third-party embedded components like BIRT or Flash Player.



Open the “AR System User Preference” form, create a drop-down menu field with field ID “24016” in the Web tab, and save it to all views, as shown in the screenshot below.


Import the menu “UserPrefTheme” and add it to the Menu Name of the User Theme field, as shown in Image 1.

Set Expand Box Hide, available inside the Display properties, as shown in Image 2.


Image 1

Image 2

Change the value of the property arsystem.showCfgThemeField from false to true inside the MidTier configuration.

Put all theme CSS files inside the MidTier “resources\userpreftheme\stylesheets” path and restart the MidTier.

  1. After the restart, open the MidTier Config Tool, go to AR Server Settings, and click Edit or Add Server; two new fields will appear: the “Enable User Theme” check box and the “Default Theme” text box.
  • Mark “Enable User Theme” as checked and give a default theme CSS file name in “Default Theme”; this applies to all users who have not selected a theme in their user preferences.

Image 3


After applying the theme, the landing console looks like this:


Two sample themes are attached to this blog.



There are two CSS files shared with this post. If customers want to create a new theme, they can do so by changing the background color and font color as needed.

Alternatively, customers can use the CSS classes listed in the shared CSS files and redesign them as per their requirements. After creating a new CSS file, add the file name to the “userTheme” menu list and flush the MidTier cache.


For example, if you want to change the tab background color, just change the CSS listed below:

.OuterOuterTab, .Tab, .OuterTab .Tab, .OuterTab .TabLeft, .OuterTab .TabRight, .ScrollingTab .Tab, .OuterTab .TabRightRounded, .ScrollingTab .TabRightRounded {

    background: #f0f0f1 !important; /* change background color for Tab */
}




CC - Remedy ITSM Remedy AR System

Rahul Vedak Abhijeet Gadgil Ravi Singh Rawle Gibson

Share This:



Testing is a fact of life for those of us who work with, and support, software, and there are many reasons why we need to do it.  Just a few examples are:


  • Validating configuration changes.
  • Evaluating new versions.
  • Debugging problem behaviour.
  • Developing new functionality.


Some of the challenges of testing with the Remedy software stack are its size and complexity.  There are multiple components, different platforms, and a range of software dependencies, all of which take time to set up and maintain.  One way to try and deal with these factors is the use of virtual machines which make it possible to save the state of a system once it is set up, and to then rollback to that known good state at any time.  In this blog post I want to look at another option based on containers and Docker.


There's lots of information available on the internet that will help you understand and get started using Docker.  The short story version is that containers are a lightweight alternative to virtual machines that share some of the functionality from their host rather than requiring a full copy of an operating system.  Also, containers usually include any additional software that may be required, Java for example, so it is not necessary to download and install many extra components.  This helps overcome compatibility problems and should guarantee that the application packaged inside the container will always work as expected, regardless of the software versions installed on the host.


There are some limitations: you can't, for example, use Linux binaries on a Windows host without some sort of Linux kernel being run to provide the shared functions.  What containers lose in this way they make up for in speed of deployment and flexibility.  Yes, some setup is required, but once this is done they can make a very good environment for testing.


In this article we're going to see how to set up Docker on a CentOS 7 Linux system and then use this to test different versions of the Remedy mid-tier with several versions of Tomcat.  In later posts I hope to look at how container versions of databases and other Remedy components may be used to help speed up the testing process.


Firstly, though, a caveat: whilst container technology is mature and widely used in production environments (BMC uses containers for most of the products in the Helix SaaS offering, and there may be on-premise customer versions at some point in the future), what I'm writing about here is very much focused on testing.  Using the details below you should be able to set up and use your own container test environment, but don't point your customers at it!


Setting up the Docker Environment


Full details of the options available when installing Docker are documented here.  Start by making sure that your OS packages are the most recent available.


# yum update


This may take a few minutes and will return with either a list of available updates and a prompt to continue, or report that the system is up to date.  If prompted press 'Y' and wait for the updates to complete.  If a large number of updates are applied I'd recommend you reboot before continuing.


Create a working directory to store our files.  If you don't use /docker you will need to substitute your choice in some of the later commands.


# mkdir /docker

# cd /docker


These steps add the Docker software repository, install the bits we need, and start the Docker engine.


# yum-config-manager --add-repo

# yum -y install docker-ce docker-ce-cli

# systemctl start docker


Confirm that Docker is running with


# docker version


Client:

Version: 18.03.1-ce

API version: 1.37

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:20:16 2018

OS/Arch: linux/amd64

Experimental: false

Orchestrator: swarm




Server:

Version: 18.03.1-ce

API version: 1.37 (minimum version 1.12)

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:23:58 2018

OS/Arch: linux/amd64

Experimental: false


We're also going to use a tool called docker-compose to help manage container configurations.  Note the version used in the command below may not be the latest, check the documentation if you want the most recent.


# curl -L$(uname -s)-$(uname -m) -o /bin/docker-compose

# chmod a+x /bin/docker-compose

# docker-compose --version

docker-compose version 1.20.1, build 5d8c71b


Apache Tomcat Containers


One of the advantages of using Docker is that many commonly used pieces of software are already available as containers.  Tomcat is a great example - here are the currently available versions:

Not only are there many Tomcat versions but some also have a choice of Java!


So how do we use one?  We already have Docker installed so it's simply a case of running one command:


# docker run -it --rm -p 8080:8080 tomcat:8.5

Unable to find image 'tomcat:8.5' locally

8.5: Pulling from library/tomcat

741437d97401: Downloading [==========>                                        ]  9.178MB/45.34MB

34d8874714d7: Downloading [==================================>                ]  7.417MB/10.78MB

The Tomcat images are available in the public Docker registry - a central repository of container images - so the 8.5 version is downloaded and stored locally.  Once this is done the image is used to create and run a container - a local instance of Tomcat.  Further output from the command above shows this:


Using CATALINA_BASE:   /usr/local/tomcat

Using CATALINA_HOME:   /usr/local/tomcat

Using CATALINA_TMPDIR: /usr/local/tomcat/temp

Using JRE_HOME:        /docker-java-home/jre

Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar

27-Feb-2019 15:07:48.785 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.37

27-Feb-2019 15:07:48.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Dec 12 2018 12:07:02 UTC

<lines snipped>

27-Feb-2019 15:29:07.013 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]

27-Feb-2019 15:29:07.028 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]

27-Feb-2019 15:29:07.034 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 849 ms


Then, pointing a browser at port 8080 shows that we have a running Tomcat:


Type Ctrl+C in the Linux terminal to stop the container and return to the command prompt.


Let's look at the command options in more detail.


  • run -it : the action being performed; run the container and show the output on the terminal.
  • --rm : delete the container when the process is terminated.  The downloaded image is NOT deleted.
  • -p 8080:8080 : host_port:container_port; exposes the container_port to make it accessible via the host using host_port.
  • tomcat:8.5 : the name of the container image being used.


These examples show how to run other versions of Tomcat using different ports:


Tomcat 7.0 using port 8000

# docker run -it --rm -p 8000:8080 tomcat:7

Tomcat 9 with Java 8 using port 8080

# docker run -it --rm -p 8080:8080 tomcat:9-jre8

Tomcat 9 with Java 11 using port 8088

# docker run -it --rm -p 8088:8080 tomcat:9-jre11


Now that we can run Tomcat we need a way to add the mid-tier files so that they are accessible to a process inside the container.


Pump Up The Volume


The images we've tested include the software necessary to run Tomcat but no more.  We could use the Tomcat image as a base and build a new container that includes the mid-tier files but, for testing purposes, there's an easier way using Docker volumes.  These provide the processes running inside a container with access to the file system on the host.  By setting up a volume we can put our mid-tier files in the shared directory where Tomcat can read them.


Create some directories to use as volumes for different mid-tier versions:


# mkdir -p /docker/midtier/1805

# mkdir -p /docker/midtier/1808


The volume details are specified using the -v command line option for docker.  To run Tomcat 8.5 and use the 1805 volume the command is:


# docker run -it --rm -p 8080:8080 -v /docker/midtier/1805:/usr/local/tomcat/webapps tomcat:8.5


The format of the -v option is host_directory:container_directory so this command takes our host /docker/midtier/1805 directory and mounts it as /usr/local/tomcat/webapps inside the container.  Volumes are often used when you have data you want to persist between container restarts - remember the --rm option means our container is deleted when we cancel the command.  By using a volume we can carry data over to use in new containers as well as providing a way of getting data into the container.  How does this help us deploy our mid-tier though?  For that we need to go to war...


.war (What is it Good For?)


The mid-tier is included as part of the AR Server installer but we don't want to use this for several reasons:


  • we only need the mid-tier and the full installer is very large.
  • the installer requires a GUI or the use of a silent install file.
  • the installer won't be able to access the Tomcat files inside the container.


Fortunately BMC also provide the mid-tier as a war file.  This is a web archive, a standard zip file format used for web application packaging, that Tomcat understands.  When one of these is found in the webapps directory it will be unpacked and used to deploy the application it contains.  All we have to do is copy the appropriate war file to the host volume directory and run the container.  You can download the various mid-tier war files from the EPD website.
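The war format is easy to verify for yourself. The sketch below builds a tiny archive with python3's zipfile module (so no separate zip utility is needed) and lists its contents; the demo directory and file names are invented for illustration.

```shell
# A .war is a standard zip archive: build a minimal example and list it
mkdir -p demo/WEB-INF
echo '<web-app/>' > demo/WEB-INF/web.xml
python3 -m zipfile -c demo.war demo/
python3 -m zipfile -l demo.war
```

Tomcat does the reverse of this when it unpacks arsys.war under the webapps directory.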


After downloading the files, decompress them and copy them to the appropriate directories.  I'm renaming each to arsys.war so that the familiar /arsys mid-tier URL is used.


# ls -l

drwxr-xr-x 2 root root      4096 Feb 27 10:29 1805

drwxr-xr-x 2 root root      4096 Feb 27 10:29 1808

-rw-r--r-- 1 root root 234438452 Jun  1  2018 MidtierWar_linux9.1.05.tar.gz

-rw-r--r-- 1 root root 234990835 Sep  3 02:41 MidtierWar_linux9.1.06.tar.gz

# tar zxvf MidtierWar_linux9.1.05.tar.gz


# mv midtier_linux.war 1805/arsys.war

# tar zxvf MidtierWar_linux9.1.06.tar.gz


# mv midtier_linux.war 1808/arsys.war

# ls 1805 1808






Now we run a Tomcat container and use the -v option to control which mid-tier version is used:


# docker run -it --rm -p 8080:8080 -v /docker/midtier/1805:/usr/local/tomcat/webapps tomcat:8.5


Looking at the console output we can see the mid-tier being deployed and, once the startup is complete, we can access it via a browser:


28-Feb-2019 08:07:21.713 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/usr/local/tomcat/webapps/arsys.war]


To switch to the 1808 mid-tier use Ctrl+C to close the running container and change the volume used:


# docker run -it --rm -p 8080:8080 -v /docker/midtier/1808:/usr/local/tomcat/webapps tomcat:8.5


Now we can login to the config pages and complete the set up by providing the details of the AR Server we want this mid-tier instance to connect to.  All of the configuration files are stored in the tree under the arsys directory so they will persist between container restarts.


# ls -l 1808/arsys

total 84

drwxr-x---  3 root root 4096 Feb 28 02:12 cache

-rw-r-----  1 root root 1703 Aug 27  2018 CancelTask.jsp

drwxr-x---  2 root root 4096 Feb 28 02:12 documents

drwxr-x---  2 root root 4096 Feb 28 02:12 filedeployer

drwxr-x---  2 root root 4096 Feb 28 02:12 flashboards

drwxr-x---  3 root root 4096 Feb 28 02:12 help

drwxr-x---  7 root root 4096 Feb 28 02:12 LocalPlugins

drwxr-x---  2 root root 4096 Feb 28 02:12 logs

drwxr-x---  2 root root 4096 Feb 28 02:12 META-INF

drwxr-x---  3 root root 4096 Feb 28 02:12 report

drwxr-x---  2 root root 4096 Feb 28 02:12 reporting

drwxr-x---  2 root root 4096 Feb 28 02:12 reports

drwxr-x--- 12 root root 4096 Feb 28 02:12 resources

drwxr-x---  3 root root 4096 Feb 28 02:12 samples

drwxr-x---  2 root root 4096 Feb 28 02:12 scriptlib

drwxr-x---  5 root root 4096 Feb 28 02:12 shared

drwxr-x---  4 root root 4096 Feb 28 02:12 SpellChecker

drwxr-x---  2 root root 4096 Feb 28 02:12 tools

drwxr-x---  2 root root 4096 Feb 28 02:12 Visualizer

drwxr-x---  3 root root 4096 Feb 28 02:12 webcontent

drwxr-x---  7 root root 4096 Feb 28 02:12 WEB-INF


It's just as easy to switch Tomcat versions by changing the container image name in the docker command.  For example, run the 1808 mid-tier we just deployed, but with Tomcat 9 and JRE 11:


#  docker run -it --rm -p 8080:8080 -v /docker/midtier/1808:/usr/local/tomcat/webapps tomcat:9-jre11


Logging in to the config pages using the default password of arsystem we can see:


Managing Multiple Configurations


We can see how the combination of Docker, Tomcat containers and mid-tier war files, makes it very easy to deploy and test many different combinations of software versions.  However, so far, all of the containers we have started have only run as long as we left the docker command in the foreground.  That's OK for quick tests but, sooner or later, we're going to want to keep them running for longer periods.  Also, the command lines have become more complex and there may be other options you've seen in the documentation that you want to use.  The docker-compose utility we installed at the start of this article is one way to do this.


docker-compose is a command line tool that may be used to help manage more complex container environments.  It uses YAML format text files to store the container configuration options so that you don't need to type them all on the command line.  The documentation provides full details of how it works but here's the configuration file that is the equivalent of our command that started the 1808 mid-tier with Tomcat 8.5:



# cat 1808.yml

version: '3'

services:
  midtier:
    container_name: tomcat85_mt1808
    image: tomcat:8.5
    ports:
      - "8080:8080"
    volumes:
      - /docker/midtier/1808:/usr/local/tomcat/webapps


To manage our different Tomcat/mid-tier test combinations we can make copies of this file and change the relevant details such as the Tomcat image and volume.  Then we can use the docker-compose command to run the container:


# docker-compose -f 1808.yml up -d

Creating tomcat85_mt1808 ... done

# docker ps

CONTAINER ID   IMAGE       COMMAND            CREATED             STATUS              PORTS                    NAMES

1fc00871bfef   tomcat:8.5  "catalina.sh run"  About a minute ago  Up About a minute  0.0.0.0:8080->8080/tcp   tomcat85_mt1808


The -f option specifies the name of the configuration file to use (which otherwise defaults to docker-compose.yml), up is the command to start the container, and -d runs the container in the background.  The docker ps command shows us the running container.  To stop the container:


# docker-compose -f 1808.yml stop

Stopping tomcat85_mt1808 ... done


Testing with a Load Balancer


If you have a system with enough resources you can run multiple mid-tier containers as long as you map a different host port for each instance.  Some applications of this type of set up would be to:


  • compare the behaviour of different mid-tier or Tomcat versions side-by-side.
  • add a load balancer between the mid-tier and several AR Servers.
  • add a load balancer between the clients and several mid-tiers.


Let's look at the last example in more detail and see what is required.  docker-compose allows us to configure multiple mid-tiers in a single YAML file so that they may be started and stopped as one.   We need to add more services to our 1808.yml file from above; let's make a copy and change it to:


# cp 1808.yml midtierlb.yml

# vi midtierlb.yml

version: '3'

services:
  midtier1:
    container_name: midtier1
    image: tomcat:8.5
    ports:
      - "8060:8080"
    volumes:
      - /docker/midtier/webapps1:/usr/local/tomcat/webapps

  midtier2:
    container_name: midtier2
    image: tomcat:8.5
    ports:
      - "8070:8080"
    volumes:
      - /docker/midtier/webapps2:/usr/local/tomcat/webapps

  haproxy:
    container_name: haproxy
    image: mminks/haproxy-docker-logging
    ports:
      - "8080:8080"
    volumes:
      - /docker/midtier/haproxy:/usr/local/etc/haproxy


The changes from the single mid-tier file are:


  • the service names - midtier1 and midtier2.
  • the container_name - midtier1 and midtier2.
  • the port numbers on the docker host that are mapped to the Tomcat port in each container need to be unique - 8060 and 8070.
  • the host directory used to create a docker volume for each container where we need to copy the arsys.war file.


I've also added a load balancer service using a container version of haproxy which listens on port 8080 and distributes calls to our mid-tiers.  This is configured using a file called haproxy.cfg stored in the /docker/midtier/haproxy volume directory.


# cat haproxy/haproxy.cfg


    global
        maxconn 256
        log local0 debug

    defaults
        log global
        mode http
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms

    frontend http-in
        bind *:8080
        default_backend midtiers

    backend midtiers
        mode http
        balance roundrobin
        server midtier1 DOCKER_HOST_IP:8060 check
        server midtier2 DOCKER_HOST_IP:8070 check


You will need to edit this file and replace DOCKER_HOST_IP with the address of the machine you are using to run Docker.


Before we run the containers let's just review the files you should have under your /docker directory:


└── midtier

   ├── haproxy

   │   └── haproxy.cfg

   ├── 1808.yml

   ├── midtierlb.yml

   ├── webapps1

   │   └── arsys.war

   └── webapps2

       └── arsys.war


They are:

  • a directory called haproxy containing haproxy.cfg.
  • the original 1808.yml file for a single mid-tier.
  • midtierlb.yml with our additional mid-tier and haproxy containers added.
  • webapps1 and webapps2 directories each containing the arsys.war file for the mid-tier we want to use.


Start the containers with:

# docker-compose -f midtierlb.yml up -d

Creating midtier1 ... done

Creating midtier2 ... done

Creating haproxy  ... done


Wait a minute or so for the containers to start; you can check their progress using the docker logs command:


# docker logs midtier1

06-Mar-2019 09:12:36.697 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.38

<lines snipped>

06-Mar-2019 09:13:03.074 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/arsys.war] has finished in [25,942] ms

06-Mar-2019 09:13:03.083 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]

06-Mar-2019 09:13:03.095 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]

06-Mar-2019 09:13:03.102 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 26113 ms


Now, using a browser, you should be able to connect to the mid-tiers directly using ports 8060 and 8070, and via haproxy on port 8080:




This has been a brief introduction to the use of the Remedy mid-tier with containers but I hope it shows how this technology may be used for rapid testing of new or different software versions.  There are many more docker and docker-compose options available to help you set up the test environments you need, have a browse of the documentation and search the web for inspiration.  Happy testing!


Comments, questions and feedback are welcome, leave a message below or send me an email.


Mark Walters




Share This:

NOTE: This vulnerability is only applicable to AR System on Linux servers.


BMC Software has identified a security vulnerability (CVE-2018-19647) that could allow a remote, unauthenticated attacker to gain arbitrary code execution as the system user. The exposure is limited to scenarios where an attacker is on the same network as Remedy AR System and has the capability to bypass standard network based defenses such as firewalls.

All service packs and patches of Remedy AR System 9.x and 18.x versions are affected by this vulnerability.

BMC strongly recommends that customers who have installed Remedy AR System 9.x or 18.x on a Linux server apply this hotfix.


Hot fixes for the affected versions are available at the following links:


Note on prerequisites: On some versions, patches need to be applied prior to applying the hot fix (if they have not been already applied)

  • For 9.1.04, patch 002 (
  • For 9.1.03, patch 001 (
  • For 9.1.02, patch 004 (

There are no prerequisites for installation on Remedy AR System 18.05 or 18.08.


Thanks to François Goichon from the Google Security Team for identification of this problem.


Best regards,

John Weigand

R&D Program Manager

BMC Software

Share This:

If you are running a version of Remedy that is older than 9.1 SP3 you may notice the following join condition in the view definition of AST:Base Element when you open it in Developer Studio:


           ($ReconciliationIdentity$ = 'ReconciliationIdentity') OR ('ReconciliationIdentity' = $Instanceid$)


AST:Base Element is a join of BMC:Base Element and AST:Attributes.


The first part of the join condition returns reconciled CIs, while the second part, ('ReconciliationIdentity' = $Instanceid$), returns unreconciled assets.


The OR condition needs to be removed from the view definition. This issue was addressed in BMC's software defect SW00511666 and fixed in 9.1 SP3.


If your upgrade to 9.1 SP3, or higher, is not imminent you can safely remove the above condition in Developer Studio.
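For reference, once the OR condition is removed the join qualification reduces to just the first part, which returns the reconciled CIs (this is the same end result delivered by the SW00511666 fix in 9.1 SP3):

```
($ReconciliationIdentity$ = 'ReconciliationIdentity')
```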

Share This:

If you have SQL statements that are running poorly in an Oracle 12c database and their Explain Plans show "... SQL Plan Directive used for this statement" under "Note", you may want to look into turning off the SQL Plan Directive (SPD) and checking performance afterwards.


At a recent customer site it was noticed that the SQL below (truncated here) had an Execution Plan, also shown below, that was extremely sub-optimal with a Cost in excess of 7.5 BILLION.



SELECT b1.C1 || '|' || NVL(b2.C1, ''), b1.C2, b1.C3, b1.C4, b1.C5, b1.C6, b2.C7, b1.C8, b2.C260100001, b1.C301019600, b1.C260100002, b2.C260100004, b2.C260100006, b2.C260100007, b2.C260100009, b2.C260100010, b2.C260100015, b2.C230000009, b2.C263000050, . . . b1.C530014300, b1.C530010200, b2.C810000272, b1.E0, b1.E1, b2.C1

FROM T525 b1 LEFT JOIN T3973 b2 ON ((b1.C400129200 = b2.C400129200) OR (b2.C400129200 = b1.C179))



Plan hash value: 3755947183



| Id | Operation                 | Name            | Rows   | Bytes |TempSpc| Cost (%CPU)| Time     |


| 0  | SELECT STATEMENT          |                 | 197K   | 2068M |       | 7929M (1)  | 86:02:37 |

| 1  |  NESTED LOOPS OUTER       |                 | 197K   | 2068M |       | 7929M (1)  | 86:02:37 |

|* 2 |   HASH JOIN               |                 | 98925  | 72M   | 24M   | 163K (1)   | 00:00:07 |

| 3  |    TABLE ACCESS FULL      | T457            | 98357  | 23M   |       | 1011 (1)   | 00:00:01 |

| 4  |    TABLE ACCESS FULL      | T476            | 3528K  | 1722M |       | 73424 (1)  | 00:00:03 |

| 5  |   VIEW                    | VW_LAT_F1632550 | 2      | 20394 |       | 80158 (1)  | 00:00:04 |

|* 6 |    TABLE ACCESS FULL      | T3973           | 2      | 1138  |       | 80158 (1)  | 00:00:04 |


Predicate Information (identified by operation id):


2 - access("B1"."C179"="B2"."C179")

6 - filter("B2"."C400129200"="B2"."C400129200" OR "B2"."C400129200"="B2"."C179")




Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement


The same SQL in a few other environments (customer's and BMC's internal ones) showed a better plan that used indexes and ran much faster.


The "good" plans were missing the "Note" above that mentions (a) dynamic sampling used and (b) 1 SQL Plan Directive used for the SQL.


This indicated that it was possibly the SQL Plan Directive that was responsible for the poor Execution Plan.


We checked the database for SPDs associated with the 2 objects above (T525 and T3973) and found that there were 4 table-level directives.


SQL> select * from dba_sql_plan_dir_objects where owner = 'ARADMIN' and object_name = 'T3973';



DIRECTIVE_ID        OWNER   OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE    NOTES

2625473566913189414 ARADMIN T3973       C400129200     COLUMN
9853893946733075077 ARADMIN T3973       C7             COLUMN
9853893946733075077 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>YES</equality_predicates_only>
6009259810806618512 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>NO</equality_predicates_only>
2625473566913189414 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>YES</equality_predicates_only>
3274712412944615867 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>NO</equality_predicates_only>





We then disabled the SPDs using the following commands (the long number in each command is the SQL directive ID from above):


      exec dbms_spd.alter_sql_plan_directive(9853893946733075077,'ENABLED','NO');

   exec dbms_spd.alter_sql_plan_directive(6009259810806618512,'ENABLED','NO');

   exec dbms_spd.alter_sql_plan_directive(3274712412944615867,'ENABLED','NO');

   exec dbms_spd.alter_sql_plan_directive(2625473566913189414,'ENABLED','NO');
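To confirm the change took effect, the directive state can be checked afterwards. A sketch of such a query (dba_sql_plan_directives is the dictionary view that exposes the ENABLED flag):

```sql
-- Verify that the four directives are now disabled
SELECT directive_id, type, state, enabled
  FROM dba_sql_plan_directives
 WHERE directive_id IN (9853893946733075077, 6009259810806618512,
                        3274712412944615867, 2625473566913189414);
```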


The result was the faster Execution Plan that had been observed in all the other environments.



| Id | Operation                                | Name            | Rows  | Bytes |TempSpc | Cost (%CPU)| Time     |


| 0  | SELECT STATEMENT                         |                 | 197K  | 2068M |        | 756K (1)   | 00:00:30 |

| 1  | NESTED LOOPS OUTER                       |                 | 197K  | 2068M |        | 756K (1)   | 00:00:30 |

|* 2 |  HASH JOIN                               |                 | 98925 | 72M   | 24M    | 163K (1)   | 00:00:07 |

| 3  |   TABLE ACCESS FULL                      | T457            | 98357 | 23M   |        | 1011 (1)   | 00:00:01 |

| 4  |   TABLE ACCESS FULL                      | T476            | 3528K | 1722M |        | 73424 (1)  | 00:00:03 |

| 5  |  VIEW                                    | VW_LAT_F1632550 | 2     | 20394 |        | 6 (0)      | 00:00:01 |

| 6  |  CONCATENATION                           |                 |       |       |        |            |          |

| 7  |    TABLE ACCESS BY INDEX ROWID BATCHED   | T3973           | 1     | 569   |        | 3 (0)      | 00:00:01 |

|* 8 |     INDEX RANGE SCAN                     | I3973_400129200_1 | 1   |       |        | 2 (0)      | 00:00:01 |

| 9  |    TABLE ACCESS BY INDEX ROWID BATCHED   | T3973           | 1     | 569   |        | 3 (0)      | 00:00:01 |

|* 10|     INDEX RANGE SCAN                     | I3973_400129200_1 | 1   |       |        | 2 (0)      | 00:00:01 |



Predicate Information (identified by operation id):


  2 - access("B1"."C179"="B2"."C179")

  8 - access("B2"."C400129200"="B2"."C179")

  10 - access("B2"."C400129200"="B2"."C400129200")



As one can see, the plan notes no longer show any SPDs being used.


There are numerous ways to not have SQL Plan Directives affect query performance.


Database level:      (a) set optimizer_features_enable = ''.

                         NOTE: This will disable ALL 12c optimizer features.

                     (b) set optimizer_adaptive_features = FALSE.

                         NOTE: This disables ALL 12c adaptive features, which
                               may be too wide an option.


SQL Directive level: exec dbms_spd.alter_sql_plan_directive(<insert directive_id here>, 'ENABLED', 'NO');




In this series we're looking at how to set up the Elastic Stack to collect, parse, and display data from our Remedy logs.  So far we've covered:


  • Part 1 - setting up Elasticsearch, Kibana and Filebeat to collect logs from one or more Remedy servers.
  • Part 2 - adding Logstash and modifying the setup to pass logs through it to Elasticsearch.
  • Part 3 - first steps in using Logstash to enrich the logs with additional data for filtering and visualizations in Kibana.


This post will look at adding other Remedy server logs to the data being collected, one way to handle non-standard logs lines, and more advanced use of Logstash filters.


More Logs Please

At the moment we're collecting the API and arerror log files from our Remedy server so let's add some more.  To do this we need to modify the Filebeat configuration file on our Remedy server.  The files that are being collected are defined by filebeat.prospectors entries like this one for the arapi.log:


- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arapi.log
  fields:
    logtype: arserver
  fields_under_root: true


There are several different ways to add additional logs.  We could

  1. create a new prospector entry for each log.
  2. add a new file in the paths: section.
  3. use a wildcard for the filename.


However, for our tests, I'm going to change the filename we're reading from to logstash.log and then use the logging options on the Remedy server to control what gets logged.  The advantage of doing it this way is that we can easily change which logs are being sent to Elasticsearch simply by using the same log file name for all of them and turning them on or off.   We won't need to reconfigure and restart Filebeat each time we want to use different log types.


Remedy server logging directed to a single file and switchable by using the checkboxes.


How Many Lines Should a Log Line Log?

While we're here I'd also like to look at how we can handle log entries that span multiple lines.  At the moment we're only interested in the standard Remedy server logs which all have the less than symbol as the first character of the line and share the same format for the first few fields :


<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

<SQL > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

<FLTR> <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

If you take a look at some sample logs you'll see that although the majority of lines follow this standard, there are some exceptions such as:


  • stack traces

<SQL > <TID: 0000000509> <RPC ID: 0000135141> <Queue: Fast      > <Client-RPC: 390620   > <USER: markw...

  com.bmc.arsys.domain.etc.ARException: ERROR (302): Entry does not exist in database

  at com.bmc.arsys.server.persistence.entry.impl.SQLHelperImpl.executePreparedStatement_aroundBody30( [bundlefile:9.1.04-SNAPSHOT]

  at com.bmc.arsys.server.persistence.entry.impl.SQLHelperImpl$ [bundlefile:9.1.04-SNAPSHOT]


  • some FLTR Set Fields and Notify action entries where text being processed can appear

<FLTR> <TID: 0000000334> <RPC ID: 0000135942> <Queue: Fast      > <Client-RPC: 390620   > <USER: markw...

z5VF_Message (304384301) = <html>

test text

In these cases the lines that don't start with < are continuations of the last line that does and, with our current configuration, will not be parsed by our Logstash grok filter.  They will be added to Elasticsearch as records without any of the new parsed data fields we're creating.  Both Logstash and Filebeat have methods to deal with these multi-line messages but the recommendation is to do this as early in the pipeline as possible.  For our logs, whenever we come across a line that does not start with <, we want to include it as part of the last line that does.  The Filebeat link above explains the details of how this is configured for each prospector definition.


The filebeat.yml updates we need to make to our API log file prospector for the file name change and multi-line processing are shown below:


- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/logstash.log
  fields:
    logtype: arserver
  fields_under_root: true
  multiline.pattern: '^<'
  multiline.negate: true
  multiline.match: after


If you want to be very thorough you could make the pattern more selective, as it is possible that the continuation lines may also start with <.  To try this, change the multiline pattern regex to match lines starting with < followed by one of the recognised log type values:


    multiline.pattern: '^<(FLTR|SQL|API|ESCL|FTI|USER|THRD|ALRT|SGRP)'
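Since multiline.pattern is just a regular expression, you can sanity-check it outside Filebeat before restarting anything. Here's a quick Python sketch of the logic (the sample lines are shortened versions of the log extracts above, and the Python regex is an approximation of what Filebeat evaluates):

```python
import re

# The more selective multiline pattern from above
pattern = re.compile(r'^<(FLTR|SQL|API|ESCL|FTI|USER|THRD|ALRT|SGRP)')

lines = [
    "<SQL > <TID: 0000000509> <RPC ID: 0000135141>",            # new log entry
    "  com.bmc.arsys.domain.etc.ARException: ERROR (302)",      # stack trace line
    "<html>",                                                   # continuation starting with <
]

# With multiline.negate: true and multiline.match: after, any line that does
# NOT match the pattern is appended to the previous matching line.
for line in lines:
    is_new_entry = bool(pattern.match(line))
    print(is_new_entry, line[:40])
```

Note that the third line would have been wrongly treated as a new entry by the simpler '^<' pattern.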


Restart Filebeat and you should see that multi-line data, such as Java stack traces, is contained in a single record rather than the one record per line they would be otherwise:


Getting More Data From Log Lines

Now that we've changed Filebeat to pick up data from logstash.log go ahead and enable SQL or some of the other log types to that file and see what appears in Kibana:


Here we have API, SQL and FLTR logs.


Looking at the data above it would be nice to try and enhance our Logstash filter to capture some additional information such as the API type, schema or form names, and so on.  To do this we need to go back and tweak our Logstash grok filter.


At the moment we're handling all the fields up to the timestamp, storing the remainder of the line in log_details.


filter {
  grok {
    match => { "message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$" }
  }
}




Next up is the timestamp, so what can we do with that?  We could just skip it and rely on the @timestamp field that's already part of the Elasticsearch record.  However, you'll notice that these are not the same as the Remedy timestamps; they're added when Filebeat processes the log lines and so lag by a small amount.  But before we continue there's one other thing we need to consider.


It's Just a Matter of Time...

Here's where we hit our first gotcha with Remedy log lines.  Compare the log details below:


<API > <TID: 0000000336>...     /* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields -- schema AR System....

<SQL > <TID: 0000000336>...     /* Fri Aug 03 2018 11:31:17.7820 */ SELECT t0.schemaId, t0.overlayGroup, t0.schemaType...

<FLTR> <TID: 0000000336>...     /* Fri Aug 03 2018 11:30:58.0360 */ End of filter processing (phase 1) -- Operation - GET...

<FLTR> <TID: 0000000336>...     --> Passed -- perform actions


Oops - no timestamp on some FLTR lines!  These variations will start to crop up more frequently as we dig deeper into the details of the different log formats.  Of course it's unavoidable at some point as the different log types are recording fundamentally different information.  We can usually handle this by building filters for specific log types but, in this case, we're just going to ignore it for the moment.  If you're collecting FLTR logs you'll have to accept that some of them won't have a Remedy generated timestamp.  The net effect of this is that it won't be possible to display all the lines in the correct sequence in Kibana if sorting by this timestamp as all of the records without it will be out of sequence.  In most cases the system @timestamp should be a good enough alternative.  There's a similar problem for lines logged with identical timestamps.  There's no guarantee which order they will be displayed when sorted on matching values so, for example, you may notice the occasional SQL OK before a SELECT.


We need a grok pattern to match our Remedy ICU format timestamp of EEE MMM d yyyy HH:mm:ss.SSSS. Unfortunately there isn't a standard one available so we need to define a custom pattern as detailed in the docs.   Using the grok debugger we can build a pattern to create a field called remedy_timestamp:


As not all of our log lines have the timestamp present we can't just add our new pattern to the existing filter.  The grok only works if the line matches the pattern so lines without this data would not be parsed into our extra data fields.  We need a second grok in the filter {...} block of our logstash.conf:


filter {
  grok {
    match => { "message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$" }
  }

  grok {
    match => { "log_details" => "^/\* (?<remedy_timestamp>%{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME}) \*/%{SPACE}%{GREEDYDATA:log_details}" }
    overwrite => ["log_details"]
  }
}




This new filter searches the log_details field, looking for /* at the start of the line (note that we have to escape the asterisk as it's a recognised regex character), and then parses the timestamp before assigning the remainder of the line back to log_details.  The second line uses the grok overwrite option, otherwise the field would be turned into an array containing both the old and the new strings.
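The (?&lt;remedy_timestamp&gt;...) custom-pattern syntax compiles down to an ordinary named regex capture.  A rough Python equivalent (the character classes here are approximations of grok's DAY/MONTH/MONTHDAY/YEAR/TIME building blocks, and the sample line is taken from the API log extract above):

```python
import re

log_details = ("/* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields "
               "-- schema AR System Configuration Component")

# /\* ... \*/ with a named capture for the timestamp, then the rest of the line
ts_re = re.compile(
    r"^/\* (?P<remedy_timestamp>\w{3} \w{3} \d{1,2} \d{4} "
    r"\d{2}:\d{2}:\d{2}\.\d{4}) \*/\s*(?P<log_details>.*)"
)

m = ts_re.match(log_details)
print(m.group("remedy_timestamp"))  # Tue Jul 31 2018 11:21:39.8570
print(m.group("log_details"))       # +GLEWF ARGetListEntryWithFields ...
```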


We have the timestamp but it's not in the correct format because the time is specified to four decimal places and Elasticsearch uses three. It's time to introduce a couple of new filter plugins.


mutate the date

The mutate plugin has a range of features to modify data in fields and we can use one of these, gsub, to remove the last digit from the string:


  mutate { # remove the last digit so we're left with milliseconds
    gsub => ["remedy_timestamp", "\d{1}$", ""]
  }



This will turn "Tue Jul 31 2018 11:21:39.8570" into "Tue Jul 31 2018 11:21:39.857".  Now we use the date plugin to convert this to the Elasticsearch timestamp format:


  date {
    match => ["remedy_timestamp", "EEE MMM dd yyyy HH:mm:ss.SSS"]
    target => "remedy_timestamp"
  }
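The trim-then-parse sequence is easy to check in plain Python before wiring it into Logstash (note that strptime's %a/%b tokens differ from the Joda-style EEE/MMM tokens the date plugin uses):

```python
import re
from datetime import datetime

raw = "Tue Jul 31 2018 11:21:39.8570"

# mutate/gsub equivalent: drop the trailing digit to leave milliseconds
trimmed = re.sub(r"\d$", "", raw)   # "Tue Jul 31 2018 11:21:39.857"

# date plugin equivalent: parse the string into a real timestamp
ts = datetime.strptime(trimmed, "%a %b %d %Y %H:%M:%S.%f")
print(ts.isoformat())  # 2018-07-31T11:21:39.857000
```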



After adding both of these below our new grok, restart the Logstash container:


# docker-compose -f elk.yml restart logstash

Restarting logstash ... done


and take a look at the logs in Kibana to see the new field.  Remember that we need to refresh the index pattern so that it is recognised correctly:


before refresh


and after



Further API Log Parsing

We're now adding extra fields to our Elasticsearch index for all of the common log markers up to, and including, the timestamp.   For API type logs this is the type of information that's left in log_details after this parsing:


+GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component from Approval Server (protocol 26) at IP address using RPC // :q:0.0s



We're going to add some new grok patterns to extract the following:

  • API call name
  • schema or form being used
  • client type
  • client protocol
  • client IP address
  • client transport
  • API queue time


Remember that we may be handling more than just API log lines though. We don't want to try and parse SQL or FLTR lines with an API specific regex as it won't match and would just be a waste of CPU cycles.  We can use conditional clauses to help streamline the code in our filter {...} block.  The following filters are only run if the log_type is API and they get the type of call along with adding a tag to mark the line as the start or end of an API:


if [log_type] == "API" {

  grok { # Mark the start of an API
    match => ["log_details", "^\+%{WORD:api_call}"]
    add_tag => ["API_Start"]
  }

  grok { # Mark the end of an API
    match => ["log_details", "^\-%{WORD:api_call}"]
    add_tag => ["API_End"]
  }
}





With the start and end tags we can now go mining for all the other nuggets using the set of filters below.  They're broken down into multiple filters because the exact format of the line varies by the API call.  For example, a GSI call does not have a form or schema recorded and some internal calls are missing the :q: value, so we need separate filters otherwise the pattern wouldn't match and nothing would be gathered.


if "API_Start" in [tags] {

  grok { # Specific to +API entries we get the client details
    match => ["log_details", " from %{DATA:client_type} \(protocol %{DATA:protocol}\)"]
  }

  grok { # and schema/form
    match => ["log_details", " \-\- (schema|form) %{DATA:form} (from|entry|fieldId|changed|# of)"]
  }

  grok { # client IP
    match => ["log_details", " at IP address %{IP:client_ip}"]
  }

  grok { # the API transport
    match => ["log_details", " using %{WORD:api_transport} \/\/"]
  }

  grok { # the API queue time
    match => ["log_details", "\/\/ :q:%{NUMBER:api_qtime:float}s"]
  }
}





The queue time filter shows how to set the type of a field in Elasticsearch as the default is text and automatic detection of numbers doesn't always work reliably.
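You can mimic this divide-and-conquer approach outside Logstash to see why several small patterns beat one monolithic regex: each pattern is tried independently and any that fails to match is simply skipped.  A Python approximation (DATA ≈ .+?, WORD ≈ \w+), using the sample API line from above, which has its IP address elided:

```python
import re

line = ("+GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component "
        "from Approval Server (protocol 26) at IP address using RPC // :q:0.0s")

# One small pattern per field; a non-matching pattern just contributes nothing.
patterns = {
    "api_call":      r"^\+(\w+)",
    "client_type":   r" from (.+?) \(protocol (?:\d+)\)",
    "protocol":      r" \(protocol (\d+)\)",
    "form":          r" -- (?:schema|form) (.+?) (?:from|entry|fieldId|changed|# of)",
    "client_ip":     r" at IP address (\d{1,3}(?:\.\d{1,3}){3})",  # no match here: IP elided
    "api_transport": r" using (\w+) //",
    "api_qtime":     r"// :q:([\d.]+)s",
}

fields = {}
for name, pat in patterns.items():
    m = re.search(pat, line)
    if m:
        fields[name] = m.group(1)

print(fields)
```

A single regex requiring all of these pieces would have failed on this line because of the missing IP address; the separate patterns still recover everything else.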


Our final filter handles the API end lines and captures errors if they're present.


if "API_End" in [tags] {

  grok { # catch failing API and the related error
    match => ["log_details", "^\-%{DATA}-- AR Error\(%{DATA:arerror_number}\)%{SPACE}%{GREEDYDATA:arerror_message}$"]
  }
}
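To see what this pattern pulls out, here's a Python approximation run against a made-up failing API end line (the line is invented for illustration, shaped after the grok pattern and the ERROR (302) text from the stack trace example earlier; real Remedy output may differ):

```python
import re

# Hypothetical failing API end line
line = "-GLEWF -- AR Error(302) Entry does not exist in database"

# Rough equivalent of the API_End grok: error number, then the message
err_re = re.compile(r"^\-.*?\-\- AR Error\((\d+)\)\s*(.*)$")

m = err_re.match(line)
print(m.group(1), "->", m.group(2))  # 302 -> Entry does not exist in database
```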





Our updated logstash.conf file is now a bit big to display in full so it's attached at the end of this post.  Once we've restarted Logstash and refreshed the index pattern we can now see all the additional fields in Kibana:


Wrapping Up

That's it for this post.  I had planned on showing some visualizations using our new fields but I think this is long enough for now!   I hope these posts have given you a good idea of the type of data it's possible to extract from Remedy logs, and provided enough detail to help you get started if you want to give it a go yourself.  There are many ways to extend what we've done so far, for example you could:


  • Write filters to parse other log types, or look at other plugins to see what they could do.
  • Break down SQL logs by the type of command being run and the table being used.
  • Watch for a backlog developing in the full text indexer, using its logs which record the number of records in the ft_pending table.
  • Use the plugins that calculate elapsed time between events - how long are your CreateEntry API calls taking?


I'm sure there are many other use cases you could think of.


As always, questions and feedback of all sorts are welcome.  Happy grok'ing!


Mark Walters


Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 3


The Story So Far...

In parts one and two of this series of blogs we've seen how to set up the Elastic Stack to collect logs from a Remedy server.  At the end of the last post we had introduced Logstash between our Filebeat collection agent and Elasticsearch so that we're ready to start parsing those interesting pieces of data from the logs.


One of the challenges of working with Remedy logs in Elastic is that, although there is some level of standardisation in their format, there's still a wide variety of information present.  Many of the different log types may share the same markers at the beginning of their lines but they then contain very different data from the timestamp onward.   This is exactly what Logstash is designed to deal with by making use of its many filter plugins.  These provide different ways to manipulate and restructure data so that it becomes queryable beyond simple text searches.  There's one filter plugin in particular that we're going to use to help us grok our Remedy logs.

The grok Logstash Plugin

The documentation for the grok plugin says...

This is exactly what we want to do so how do we use it?  Logstash ships with the most commonly used filter plugins already installed so there are no additional steps to make it available.


grok works by using patterns to match data in our logs.  A pattern is a combination of a regular expression and a variable used to store the value if it matches the search regex.   As an example consider the first bit of data in our API logs - the log type:


<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 >.......


A grok pattern to read this and create a field called log_type in Elasticsearch would be


^<%{WORD:log_type} >


Let's break it down

  • ^< means we're looking for the < character only at the start of a line
  • the grok pattern syntax uses %{...} to enclose regex:field pairs
  • WORD is one of the many built-in Logstash regular expressions and matches the characters A-Za-z0-9_
  • log_type is the name of the field that the value will be assigned to in Elasticsearch


When a log line matches the pattern, that is it starts with < and has a string followed by a space and then >, the value of the string will be added as a field called log_type.


We can add more patterns to match the next piece of data on the log line, the thread ID:


^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}>


  • I've changed the line to include %{SPACE} (another built-in pattern matching 0 or more spaces) instead of an actual space character because, if this was a FLTR log line for example, there would be no space before the closing >
  • > <TID: is the literal text we're expecting
  • DATA is another built-in regex
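Under the hood, grok expands each %{...} to an ordinary named capture group.  A rough Python equivalent of the two-field pattern (treating WORD as \w+, SPACE as \s* and DATA as a non-greedy .*?):

```python
import re

line = "<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680>"

# ^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> as a plain regex
grok_re = re.compile(r"^<(?P<log_type>\w+)\s*> <TID: (?P<tid>.*?)>")

m = grok_re.match(line)
print(m.group("log_type"))  # API
print(m.group("tid"))       # 0000000336
```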


Now we will have two fields added to our Logstash records:


Logstash field


We can continue to build up the patterns until we have all the data we want from the log line.  Developing these patterns can be complex but there are a number of tools available to help you; there's even one in Kibana.  Click on the Dev Tools link in the left hand panel and then Grok Debugger.


Here's the grok pattern for a complete API log line shown in the debugger and you can see the resulting field names and values in the Structured Data window:


Note the new patterns used: %{NUMBER:overlay_group}, to match this value which is only ever a number, and %{GREEDYDATA:log_details} at the end, which captures the remainder of the line and assigns it to the log_details field.


^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$


We now need to add this grok filter definition to our Logstash configuration file which we created in the previous post.  My example was /root/elk/pipeline/logstash.conf which needs to be edited to include the grok filter definition:


# cat elk/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$" }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}




Logstash needs to reload the updated configuration which can be done by restarting it using:


# docker-compose -f elk.yml restart logstash

Restarting logstash ... done


So let's see what our logs look like in Kibana now.  Go to the Discover tab and make sure you're looking at the logstash-* index pattern, expand one of the records and, if all went well, you'll see something like this:

Index Pattern Refresh

Our new fields are listed and we can see the values for them from our log line!  There are orange warning flags by the values because they're new fields that are not in the index definition that we're using.  To fix this click on the Management link, go to Index Patterns, select the logstash-* pattern and click the refresh icon.  You should see that the count of the number of fields increases and you can page through the field list if you want to see which fields are present.  While we're here I suggest clicking the star icon to set logstash-* as the default index pattern so that you don't have to keep switching it from filebeat-* on the Discover page.



Reload the Discover page and the warnings should have gone.  The index pattern refresh is something that needs to be done each time new fields are added.


Making Use of Remedy Specific Fields in Kibana


Now that we're enriching our log line records with Remedy data we can start doing some more interesting things, such as...



Remember in Part 1 we saw how to filter the log data using the list of fields?  Well, now we have some fields which are relevant to our application, such as the User or RPC Queue, that the logged activity belongs to, let's see how we can use that to isolate actions from a single user.


I'm going to login to my Remedy server as Demo so let's setup the Discover page to see what I'm up to.  In addition to applying a filter from the field list you can use the Add a filter + link to get an interactive filter builder or you can use the search bar at the top of the screen.  To filter for log lines with the user field value of Demo enter user:Demo in the search bar and press return.  Assuming there are any matching logs within the current time window you should get a count of the hits and the lines will be displayed.  To make it a bit easier to see what's going on hover over the log_details field name and click add to show this field in the main pane.  Finally let's turn on auto-refresh of the search results so we can monitor our actions as they happen.  Click on the Auto-refresh link at the top of the page and select 5 or 10 seconds.  Now go ahead and login to Remedy as Demo and see what happens as you work.



Each time the screen refreshes you should see the results updated with the latest log entries and the timeline shows the number of matching log lines in that period.  There are many filtering options available so see what you can find by using the different fields we've added to look for specific types of activity.




Search and filtering are helpful but not very exciting to look at so how about some graphical representations of our data?  Click on the Visualize link in the left hand pane and then Create a visualization to see the range of formats available.



Let's start with a Pie chart: click on the icon, select the logstash-* index and you should see this



This is simply a count of the number of records so we need to provide some additional options to make it a bit more interesting.  At this point it's worth switching off the auto refresh if you have it set, to avoid the graphics being refreshed as we experiment with them.


Click on Split Slices, choose Significant Terms as the Aggregation and rpc_queue.keyword as the Field.  Set the Size to 10 and click the play icon at the top of the panel.

Here we can see how much of our log activity belongs to the different RPC queues in our server.


Change the Field and try some of the others such as the user.keyword


The data used to create the graphics can be filtered just as on the Discover page and the time range can also be adjusted as required.


With so many different visualization types available you'll be able to view your logs in a variety of different ways, and a future post will look at how to get even more of the log line data into fields so that they can be used this way.


Wrapping Up

This post builds on the previous two and shows how to start using Logstash to enrich the data being collected from our Remedy server using the powerful grok filter plugin.  In future posts we'll look at using it further to get even more information from our API log lines, and then extend it to other types of logs such as SQL and FT Index.  With all of this extra data we can build even more complex graphics to help us visualize and analyse our systems.


Comments, suggestions and questions welcome.


Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 4
