
Remedy AR System


December 2018 list of the top viewed AR Server knowledge articles; this list will be updated regularly.


  • Remedy - Server - 'Could not refresh JMS Connection for destination...' errors seen in the arerror.log
  • How to perform a Hard Cache flush for Mid Tier
  • How to pass User credentials in REST API URL
  • Remedy - Server - Enabling case-insensitive searches with an AR System server 8.1 or 9.x with Oracle
  • AR Server REST API architecture and overview - Start here
  • How to use Self Signed Certificates in Postman Chrome Extension
  • Remedy - Server - Error: "ARERR 623 - Authentication failed" when opening the Report Console
  • Remedy - FTS - How to rebuild the FTS index on a Server Group
  • ARERR 382 - The value(s) for this entry violate a unique index that has been defined for this form
  • Remedy - Server - Escalations stopped running in 9.x on a server group
  • Remedy - Server - Steps to reconfigure AR Servers in Server Group
  • Configuring Tomcat for encoded slashes in the URL
  • ORA-01430: Column being added already exists in table.
  • ERROR: ORA-03127: no new operations allowed until the active operation ends seen in arerror.log (ARERR 552)
  • How to manually configure 9.x AR System ODBC drivers on 64-bit Windows OS?
  • ARERR 9350 error
  • List of BMC Remedy Release Version Strings and dbversion values
  • ARERR 9084 - user already connected from another machine


BMC Software has identified a security vulnerability (CVE-2018-19647) that could allow a remote, unauthenticated attacker to gain arbitrary code execution as the system user. The exposure is limited to scenarios where an attacker is on the same network as Remedy AR System and has the capability to bypass standard network based defenses such as firewalls.

All service packs and patches of Remedy AR System 9.x and 18.x versions are affected by this vulnerability.

BMC strongly recommends that customers who have installed Remedy AR System 9.x or 18.x apply this hot fix.


Hot fixes for the affected versions are available at the following links:


Note on prerequisites: On some versions, patches need to be applied prior to applying the hot fix (if they have not already been applied):

  • For 9.1.04, patch 002 (
  • For 9.1.03, patch 001 (
  • For 9.1.02, patch 004 (

There are no prerequisites for installation on Remedy AR System 18.05 or 18.08.


Thanks to François Goichon from the Google Security Team for identification of this problem.


Best regards,

John Weigand

R&D Program Manager

BMC Software


If you are running a version of Remedy that is older than 9.1 SP3 you may notice the following join condition in the view definition of AST:Base Element when you open it in Developer Studio:


           ($ReconciliationIdentity$ = 'ReconciliationIdentity') OR ('ReconciliationIdentity' = $Instanceid$)


AST:Base Element is a join of BMC:Base Element and AST:Attributes.


The first part of the join condition returns reconciled CIs, while the second part, 'ReconciliationIdentity' = $Instanceid$, returns unreconciled assets.


The OR condition needs to be removed from the view definition. This issue was addressed in BMC's software defect SW00511666 and fixed in 9.1 SP3.


If your upgrade to 9.1 SP3, or higher, is not imminent you can safely remove the above condition in Developer Studio.


If you have SQL statements that are running poorly in an Oracle 12c database and their Explain Plans are showing "... SQL Plan Directive used for this statement" under "Note" you may want to look into turning off the SQL Plan Directive (SPD) and checking performance after.


At a recent customer site it was noticed that the SQL below (truncated here) had an Execution Plan, also shown below, that was extremely sub-optimal with a Cost in excess of 7.5 BILLION.



SELECT b1.C1 || '|' || NVL(b2.C1, ''), b1.C2, b1.C3, b1.C4, b1.C5, b1.C6, b2.C7, b1.C8, b2.C260100001, b1.C301019600, b1.C260100002, b2.C260100004, b2.C260100006, b2.C260100007, b2.C260100009, b2.C260100010, b2.C260100015, b2.C230000009, b2.C263000050, . . . b1.C530014300, b1.C530010200, b2.C810000272, b1.E0, b1.E1, b2.C1

FROM T525 b1 LEFT JOIN T3973 b2 ON ((b1.C400129200 = b2.C400129200) OR (b2.C400129200 = b1.C179))



Plan hash value: 3755947183



| Id | Operation            | Name            | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
| 0  | SELECT STATEMENT     |                 | 197K  | 2068M |       | 7929M (1)  | 86:02:37 |
| 1  |  NESTED LOOPS OUTER  |                 | 197K  | 2068M |       | 7929M (1)  | 86:02:37 |
|* 2 |   HASH JOIN          |                 | 98925 | 72M   | 24M   | 163K (1)   | 00:00:07 |
| 3  |    TABLE ACCESS FULL | T457            | 98357 | 23M   |       | 1011 (1)   | 00:00:01 |
| 4  |    TABLE ACCESS FULL | T476            | 3528K | 1722M |       | 73424 (1)  | 00:00:03 |
| 5  |   VIEW               | VW_LAT_F1632550 | 2     | 20394 |       | 80158 (1)  | 00:00:04 |
|* 6 |    TABLE ACCESS FULL | T3973           | 2     | 1138  |       | 80158 (1)  | 00:00:04 |


Predicate Information (identified by operation id):

   2 - access("B1"."C179"="B2"."C179")
   6 - filter("B2"."C400129200"="B2"."C400129200" OR "B2"."C400129200"="B2"."C179")

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement


The same SQL in a few other environments (customer's and BMC's internal ones) showed a better plan that used indexes and ran much faster.


The "good" plans were missing the "Note" above that mentions (a) dynamic sampling used and (b) 1 SQL Plan Directive used for the SQL.


This indicated that it was possibly the SQL Plan Directive that was responsible for the poor Execution Plan.


We checked the database for SPDs associated with the 2 objects above (T525 and T3973) and found that there were 4 table-level directives.


SQL> select * from dba_sql_plan_dir_objects where owner = 'ARADMIN' and object_name = 'T3973';

2625473566913189414 ARADMIN T3973  C400129200  COLUMN
9853893946733075077 ARADMIN T3973  C7          COLUMN
9853893946733075077 ARADMIN T3973              TABLE   <obj_note><equality_predicates_only>YES</equality_predicates_only>
6009259810806618512 ARADMIN T3973              TABLE   <obj_note><equality_predicates_only>NO</equality_predicates_only>
2625473566913189414 ARADMIN T3973              TABLE   <obj_note><equality_predicates_only>YES</equality_predicates_only>
3274712412944615867 ARADMIN T3973              TABLE   <obj_note><equality_predicates_only>NO</equality_predicates_only>





We then disabled the SPDs using the following commands (the long number in each command is the SQL Directive Id from above):


   exec dbms_spd.alter_sql_plan_directive(9853893946733075077,'ENABLED','NO');
   exec dbms_spd.alter_sql_plan_directive(6009259810806618512,'ENABLED','NO');
   exec dbms_spd.alter_sql_plan_directive(3274712412944615867,'ENABLED','NO');
   exec dbms_spd.alter_sql_plan_directive(2625473566913189414,'ENABLED','NO');
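If you want to verify that the change took effect, the directive state can be checked from the data dictionary. This is a sketch; the DBA_SQL_PLAN_DIRECTIVES view and the dbms_spd.flush_sql_plan_directive procedure are standard in Oracle 12c:

```sql
-- Check the state of the directives disabled above
SELECT directive_id, type, state, enabled
  FROM dba_sql_plan_directives
 WHERE directive_id IN (9853893946733075077, 6009259810806618512,
                        3274712412944615867, 2625473566913189414);

-- dbms_spd changes are written to the dictionary on the next automatic
-- flush; to persist them immediately:
exec dbms_spd.flush_sql_plan_directive;
```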


The result was the faster Execution Plan that had been observed in all the other environments.



| Id | Operation                               | Name              | Rows  | Bytes |TempSpc | Cost (%CPU)| Time     |
| 0  | SELECT STATEMENT                        |                   | 197K  | 2068M |        | 756K (1)   | 00:00:30 |
| 1  |  NESTED LOOPS OUTER                     |                   | 197K  | 2068M |        | 756K (1)   | 00:00:30 |
|* 2 |   HASH JOIN                             |                   | 98925 | 72M   | 24M    | 163K (1)   | 00:00:07 |
| 3  |    TABLE ACCESS FULL                    | T457              | 98357 | 23M   |        | 1011 (1)   | 00:00:01 |
| 4  |    TABLE ACCESS FULL                    | T476              | 3528K | 1722M |        | 73424 (1)  | 00:00:03 |
| 5  |   VIEW                                  | VW_LAT_F1632550   | 2     | 20394 |        | 6 (0)      | 00:00:01 |
| 6  |    CONCATENATION                        |                   |       |       |        |            |          |
| 7  |     TABLE ACCESS BY INDEX ROWID BATCHED | T3973             | 1     | 569   |        | 3 (0)      | 00:00:01 |
|* 8 |      INDEX RANGE SCAN                   | I3973_400129200_1 | 1     |       |        | 2 (0)      | 00:00:01 |
| 9  |     TABLE ACCESS BY INDEX ROWID BATCHED | T3973             | 1     | 569   |        | 3 (0)      | 00:00:01 |
|* 10|      INDEX RANGE SCAN                   | I3973_400129200_1 | 1     |       |        | 2 (0)      | 00:00:01 |



Predicate Information (identified by operation id):


  2 - access("B1"."C179"="B2"."C179")

  8 - access("B2"."C400129200"="B2"."C179")

  10 - access("B2"."C400129200"="B2"."C400129200")



As one can see, the plan no longer uses any SPDs.


There are several ways to prevent SQL Plan Directives from affecting query performance.


Database Level:
    (a) set optimizer_features_enable = ''.
        NOTE: This will disable ALL 12c optimizer features.
    (b) set optimizer_adaptive_features = FALSE.
        NOTE: This disables ALL 12c adaptive features and that may be
              too wide an option.

SQL Directive Level: exec dbms_spd.alter_sql_plan_directive(<insert directive_id here>,




In this series we're looking at how to setup the Elastic Stack to collect, parse, and display data from our Remedy logs.  So far we've covered:


  • Part 1 - setting up Elasticsearch, Kibana and Filebeat to collect logs from one or more Remedy servers.
  • Part 2 - adding Logstash and modifying the setup to pass logs through it to Elasticsearch.
  • Part 3 - first steps in using Logstash to enrich the logs with additional data for filtering and visualizations in Kibana.


This post will look at adding other Remedy server logs to the data being collected, one way to handle non-standard logs lines, and more advanced use of Logstash filters.


More Logs Please

At the moment we're collecting the API and arerror log files from our Remedy server so let's add some more.  To do this we need to modify the Filebeat configuration file on our Remedy server.  The files that are being collected are defined by filebeat.prospectors entries like this one for the arapi.log:


- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arapi.log
  fields:
    logtype: arserver
  fields_under_root: true


There are several different ways to add additional logs.  We could

  1. create a new prospector entry for each log.
  2. add a new file in the paths: section.
  3. use a wildcard for the filename.
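For example, options 2 and 3 would only change the prospector's paths: list in filebeat.yml (the extra file name below is just one of the logs mentioned earlier, shown for illustration):

```yaml
- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arapi.log
    - /opt/bmc/ARSystem/db/arerror.log   # option 2: one line per extra file
    # - /opt/bmc/ARSystem/db/ar*.log     # option 3: a single wildcard instead
  fields:
    logtype: arserver
  fields_under_root: true
```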


However, for our tests, I'm going to change the filename we're reading from to logstash.log and then use the logging options on the Remedy server to control what gets logged.  The advantage of doing it this way is that we can easily change which logs are being sent to Elasticsearch simply by using the same log file name for all of them and turning them on or off.   We won't need to reconfigure and restart Filebeat each time we want to use different log types.


Remedy server logging directed to a single file and switchable by using the checkboxes.


How Many Lines Should a Log Line Log?

While we're here I'd also like to look at how we can handle log entries that span multiple lines.  At the moment we're only interested in the standard Remedy server logs, which all have the less-than symbol as the first character of the line and share the same format for the first few fields:


<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

<SQL > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

<FLTR> <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

If you take a look at some sample logs you'll see that although the majority of lines follow this standard, there are some exceptions such as:


  • stack traces

<SQL > <TID: 0000000509> <RPC ID: 0000135141> <Queue: Fast      > <Client-RPC: 390620   > <USER: markw...

  com.bmc.arsys.domain.etc.ARException: ERROR (302): Entry does not exist in database

  at com.bmc.arsys.server.persistence.entry.impl.SQLHelperImpl.executePreparedStatement_aroundBody30( [bundlefile:9.1.04-SNAPSHOT]

  at com.bmc.arsys.server.persistence.entry.impl.SQLHelperImpl$ [bundlefile:9.1.04-SNAPSHOT]


  • some FLTR Set Fields and Notify action entries where text being processed can appear

<FLTR> <TID: 0000000334> <RPC ID: 0000135942> <Queue: Fast      > <Client-RPC: 390620   > <USER: markw...

z5VF_Message (304384301) = <html>

test text

In these cases the lines that don't start with < are continuations of the last line that does and, with our current configuration, will not be parsed by our Logstash grok filter.  They will be added to Elasticsearch as records without any of the new parsed data fields we're creating.  Both Logstash and Filebeat have methods to deal with these multi-line messages but the recommendation is to do this as early in the pipeline as possible.  For our logs, whenever we come across a line that does not start with <, we want to include it as part of the last line that does.  The Filebeat link above explains the details of how this is configured for each prospector definition.


The filebeat.yml updates we need to make to our API log file prospector, for the file name change and multi-line processing, are shown below:


- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/logstash.log
  fields:
    logtype: arserver
  fields_under_root: true
  multiline.pattern: '^<'
  multiline.negate: true
  multiline.match: after


If you want to be very thorough you could make the pattern more selective, as it is possible that the continuation lines may also start with <.  If you want to try this, change the multiline pattern regex to match lines starting with < followed by one of the recognised log type values:


    multiline.pattern: '^<(FLTR|SQL|API|ESCL|FTI|USER|THRD|ALRT|SGRP)'
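If you want to check the stricter pattern before putting it in Filebeat, here's a quick sketch using Python's re module (the sample lines are invented for illustration):

```python
import re

# Stricter multiline start-of-event pattern: a new log record begins only
# when a line starts with '<' followed by a known Remedy log type marker.
pattern = re.compile(r'^<(FLTR|SQL|API|ESCL|FTI|USER|THRD|ALRT|SGRP)')

samples = [
    '<API > <TID: 0000000336> <RPC ID: 0000021396> ...',  # starts a record
    '  at com.bmc.arsys.server.persistence...',           # stack trace continuation
    '<html>',                            # continuation that happens to start with '<'
]

results = [bool(pattern.match(line)) for line in samples]
print(results)   # -> [True, False, False]
```

Note that the simple '^<' pattern would have wrongly treated the `<html>` continuation line as the start of a new record.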


Restart Filebeat and you should see that multi-line data, such as Java stack traces, is contained in a single record rather than the one record per line they would be otherwise:


Getting More Data From Log Lines

Now that we've changed Filebeat to pick up data from logstash.log go ahead and enable SQL or some of the other log types to that file and see what appears in Kibana:


Here we have API, SQL and FLTR logs.


Looking at the data above it would be nice to try and enhance our Logstash filter to capture some additional information such as the API type, schema or form names, and so on.  To do this we need to go back and tweak our Logstash grok filter.


At the moment we're handling all the fields up to the timestamp, storing the remainder of the line in log_details.


filter {
  grok {
    match => {"message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:int}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$"}
  }
}




Next up is the timestamp, so what can we do with that?  We could just skip it and rely on the @timestamp field that's already part of the Elasticsearch record.  However, you'll notice that these are not the same as the Remedy timestamps: they're added when Filebeat processes the log lines and so lag by a small amount.  Before we continue, though, there's one other thing we need to consider.


It's Just a Matter of Time...

Here's where we hit our first gotcha with Remedy log lines.  Compare the log details below:


<API > <TID: 0000000336>...     /* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields -- schema AR System....

<SQL > <TID: 0000000336>...     /* Fri Aug 03 2018 11:31:17.7820 */ SELECT t0.schemaId,, t0.overlayGroup, t0.schemaType...

<FLTR> <TID: 0000000336>...     /* Fri Aug 03 2018 11:30:58.0360 */ End of filter processing (phase 1) -- Operation - GET...

<FLTR> <TID: 0000000336>...     --> Passed -- perform actions


Oops - no timestamp on some FLTR lines!  These variations will start to crop up more frequently as we dig deeper into the details of the different log formats.  Of course it's unavoidable at some point as the different log types are recording fundamentally different information.  We can usually handle this by building filters for specific log types but, in this case, we're just going to ignore it for the moment.  If you're collecting FLTR logs you'll have to accept that some of them won't have a Remedy generated timestamp.  The net effect of this is that it won't be possible to display all the lines in the correct sequence in Kibana if sorting by this timestamp as all of the records without it will be out of sequence.  In most cases the system @timestamp should be a good enough alternative.  There's a similar problem for lines logged with identical timestamps.  There's no guarantee which order they will be displayed when sorted on matching values so, for example, you may notice the occasional SQL OK before a SELECT.


We need a grok pattern to match our Remedy ICU format timestamp of EEE MMM dd yyyy HH:mm:ss.SSSS. Unfortunately there isn't a standard one available so we need to define a custom pattern as detailed in the docs.  Using the grok debugger we can build a pattern to create a field called remedy_timestamp:


As not all of our log lines have the timestamp present we can't just add our new pattern to the existing filter.  The grok only works if the line matches the pattern so lines without this data would not be parsed into our extra data fields.  We need a second grok in the filter {...} block of our logstash.conf:


filter {
  grok {
    match => {"message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:int}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$"}
  }

  grok {
    match => {"log_details" => "^/\* (?<remedy_timestamp>%{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME}) \*/%{SPACE}%{GREEDYDATA:log_details}"}
    overwrite => ["log_details"]
  }
}




This new filter searches the log_details field, looking for /* at the start of the line (note that we have to escape the asterisk as it's a recognised regex character), and then parses the timestamp before assigning the remainder of the line back to log_details.  The second line uses the grok overwrite option, otherwise the field would be turned into an array containing both the old and the new strings.


We have the timestamp but it's not in the correct format because the time is specified to four decimal places and Elasticsearch uses three. It's time to introduce a couple of new filter plugins.


mutate the date

The mutate plugin has a range of features to modify data in fields and we can use one of these, gsub, to remove the last digit from the string:


  mutate { # remove the last digit so we're left with milliseconds
    gsub => ["remedy_timestamp", "\d{1}$", ""]
  }



This will turn "Tue Jul 31 2018 11:21:39.8570" into "Tue Jul 31 2018 11:21:39.857".  Now we use the date plugin to convert this to the Elasticsearch timestamp format:


  date {
    match => ["remedy_timestamp", "EEE MMM dd yyyy HH:mm:ss.SSS"]
    target => "remedy_timestamp"
  }


After adding both of these below our new grok, restart the Logstash container:


# docker-compose -f elk.yml restart logstash

Restarting logstash ... done


and take a look at the logs in Kibana to see the new field.  Remember that we need to refresh the index pattern so that it is recognised correctly:


before refresh


and after



Further API Log Parsing

We're now adding extra fields to our Elasticsearch index for all of the common log markers up to, and including, the timestamp.   For API type logs this is the type of information that's left in log_details after this parsing:


+GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component from Approval Server (protocol 26) at IP address using RPC // :q:0.0s



We're going to add some new grok patterns to capture the following:

  • API call name
  • schema or form being used
  • client type
  • client protocol
  • client IP address
  • client transport
  • API queue time


Remember that we may be handling more than just API log lines though. We don't want to try and parse SQL or FLTR lines with an API specific regex as it won't match and would just be a waste of CPU cycles.  We can use conditional clauses to help streamline the code in our filter {...} block.  The following filters are only run if the log_type is API and they get the type of call along with adding a tag to mark the line as the start or end of an API:


if [log_type] == "API" {

  grok { # Mark the start of an API
    match => ["log_details", "^\+%{WORD:api_call}"]
    add_tag => ["API_Start"]
  }

  grok { # Mark the end of an API
    match => ["log_details", "^\-%{WORD:api_call}"]
    add_tag => ["API_End"]
  }
}




With the start and end tags we can now go mining for all the other nuggets using the set of filters below.  They're broken down into multiple filters because the exact format of the line varies by the API call.  For example, a GSI call does not have a form or schema recorded and some internal calls are missing the :q: value, so we need separate filters otherwise the pattern wouldn't match and nothing would be gathered.


if "API_Start" in [tags] {

  grok { # Specific to +API entries we get the client details
    match => ["log_details", " from %{DATA:client_type} \(protocol %{DATA:protocol}\)"]
  }

  grok { # and schema/form
    match => ["log_details", " \-\- (schema|form) %{DATA:form} (from|entry|fieldId|changed|# of)"]
  }

  grok { # client IP
    match => ["log_details", " at IP address %{IP:client_ip}"]
  }

  grok { # the API transport
    match => ["log_details", " using %{WORD:api_transport} \/\/"]
  }

  grok { # the API queue time
    match => ["log_details", "\/\/ :q:%{NUMBER:api_qtime:float}s"]
  }
}




The queue time filter shows how to set the type of a field in Elasticsearch as the default is text and automatic detection of numbers doesn't always work reliably.
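To see what these patterns pull out, here are rough Python regex equivalents run against a sample API start line (the line, including its IP address and timing values, is invented for illustration):

```python
import re

# An invented API start line in the same shape as the real logs
line = ('+GLEWF ARGetListEntryWithFields -- schema AR System Configuration '
        'Component from Approval Server (protocol 26) at IP address '
        '10.0.0.5 using RPC // :q:0.0s')

# Rough equivalents of the grok patterns above
patterns = {
    'api_call':      r'^\+(\w+)',
    'client_type':   r' from (.+?) \(protocol (\d+)\)',
    'form':          r' -- (?:schema|form) (.+?) (?:from|entry|fieldId|changed|# of)',
    'client_ip':     r' at IP address (\d+(?:\.\d+){3})',
    'api_transport': r' using (\w+) //',
    'api_qtime':     r'// :q:([\d.]+)s',
}

fields = {}
for name, pat in patterns.items():
    m = re.search(pat, line)
    if m:
        fields[name] = m.group(1)
print(fields)
```

Running this shows each pattern capturing its own fragment of the line independently, which is why missing fields (such as no schema on a GSI call) simply result in that one field being absent rather than the whole match failing.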


Our final filter handles the API end lines and captures errors if they're present.


if "API_End" in [tags] {

  grok { # catch failing API and the related error
    match => ["log_details", "^\-%{DATA}-- AR Error\(%{DATA:arerror_number}\)%{SPACE}%{GREEDYDATA:arerror_message}$"]
  }
}





Our updated logstash.conf file is now a bit big to display in full so it's attached at the end of this post.  Once we've restarted Logstash and refreshed the index pattern we can now see all the additional fields in Kibana:


Wrapping Up

That's it for this post.  I had planned on showing some visualizations using our new fields but I think this is long enough for now!  I hope these posts have given you a good idea of the type of data it's possible to extract from Remedy logs, and provided enough detail to help you get started if you want to give it a go yourself.  There are many ways to extend what we've done so far, for example you could:


  • Write filters to parse other log types, or look at other plugins to see what they could do.
  • Break down SQL logs by the type of command being run and the table being used.
  • Use the full text indexer logs, which record the number of records in the ft_pending table, to watch for a backlog developing.
  • Use one of the plugins that calculate elapsed time between events - how long are your CreateEntry API calls taking?


I'm sure there are many other use cases you could think of.


As always, questions and feedback of all sorts are welcome.  Happy grok'ing!


Mark Walters


Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 3


The Story So Far...

In parts one and two of this series of blogs we've seen how to setup the Elastic stack to collect logs from a Remedy server.  At the end of the last post we had introduced Logstash between our Filebeat collection agent and Elasticsearch so that we're ready to start parsing those interesting pieces of data from the logs.


One of the challenges of working with Remedy logs in Elastic is that, although there is some level of standardisation in their format, there's still a wide variety of information present.  Many of the different log types may share the same markers at the beginning of their lines, but they then contain very different data from the timestamp onward.  This is exactly what Logstash is designed to deal with by making use of its many filter plugins.  These provide different ways to manipulate data and restructure it so that it becomes queryable beyond simple text searches.  There's one filter plugin in particular that we're going to use to help us grok our Remedy logs.

The grok Logstash Plugin

The documentation for the grok plugin says...

This is exactly what we want to do, so how do we use it?  Logstash ships with the most commonly used filter plugins already installed, so there are no additional steps to make it available.


grok works by using patterns to match data in our logs.  A pattern is a combination of a regular expression and a variable used to store the value if it matches the search regex.   As an example consider the first bit of data in our API logs - the log type:


<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 >.......


A grok pattern to read this and create a field called log_type in Elasticsearch would be


^<%{WORD:log_type} >


Let's break it down

  • ^< means we're looking for the < character only at the start of a line
  • the grok pattern syntax uses %{...} to enclose regex:field pairs
  • WORD is one of the many built-in Logstash regular expressions and matches the characters A-Za-z0-9_
  • log_type is the name of the field that the value will be assigned to in Elasticsearch


When a log line matches the pattern, that is, it starts with < and has a string followed by a space and then >, the value of the string will be added as a field called log_type.


We can add more patterns to match the next piece of data on the log line, the thread ID:


^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}>


  • I've changed the line to include %{SPACE} (another built-in pattern matching 0 or more spaces) instead of an actual space character because, if this was a FLTR log line for example, there would be no space before the closing >
  • > <TID: is the literal text we're expecting
  • DATA is another built-in regex
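Under the hood, grok compiles these patterns to regular expressions. Here's a rough Python equivalent of the two patterns so far, sketched for illustration only:

```python
import re

# Rough Python equivalents of the grok patterns:
#   %{WORD:log_type} -> (?P<log_type>\w+)
#   %{DATA:tid}      -> (?P<tid>.*?)
pattern = re.compile(r'^<(?P<log_type>\w+)\s*> <TID: (?P<tid>.*?)>')

line = '<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680>'
print(pattern.match(line).groupdict())
# -> {'log_type': 'API', 'tid': '0000000336'}
```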


Now we will have two fields added to our Logstash records:


Logstash field


We can continue to build up the patterns until we have all the data we want from the log line.  Developing these patterns can be complex, but there are a number of tools available to help you; there's even one in Kibana.  Click on the Dev Tools link in the left hand panel and then Grok Debugger.


Here's the grok pattern for a complete API log line shown in the debugger and you can see the resulting field names and values in the Structured Data window:


Note the new patterns used: %{NUMBER:overlay_group:int} to create an integer type field rather than a string, as this value is only ever a number, and %{GREEDYDATA:log_details} at the end, which captures the remainder of the line and assigns it to the log_details field.


^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:int}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$


We now need to add this grok filter definition to our Logstash configuration file which we created in the previous post.  My example was /root/elk/pipeline/logstash.conf which needs to be edited to include the grok filter definition:


# cat elk/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => {"message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:int}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$"}
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}

Logstash needs to reload the updated configuration which can be done by restarting it using:


# docker-compose -f elk.yml restart logstash

Restarting logstash ... done


So let's see what our logs look like in Kibana now.  Go to the Discover tab and make sure you're looking at the logstash-* index pattern, expand one of the records and, if all went well, you'll see something like this:

Index Pattern Refresh

Our new fields are listed and we can see the values for them from our log line!  There are orange warning flags by the values because they're new fields that are not in the index definition that we're using.  To fix this click on the Management link, go to Index Patterns, select the logstash-* pattern and click the refresh icon.  You should see that the count of the number of fields increases, and you can page through the field list if you want to see which fields are present.  While we're here I suggest clicking the star icon to set logstash-* as the default index pattern, so that you don't have to keep switching it from filebeat-* on the Discover page.



Reload the Discover page and the warnings should have gone.  The index pattern refresh is something that needs to be done each time new fields are added.


Making Use of Remedy Specific Fields in Kibana


Now that we're enriching our log line records with Remedy data we can start doing some more interesting things, such as...



Remember in Part 1 we saw how to filter the log data using the list of fields?  Well, now that we have some fields which are relevant to our application, such as the user or RPC queue that the logged activity belongs to, let's see how we can use them to isolate the actions of a single user.


I'm going to login to my Remedy server as Demo so let's setup the Discover page to see what I'm up to.  In addition to applying a filter from the field list you can use the Add a filter + link to get an interactive filter builder or you can use the search bar at the top of the screen.  To filter for log lines with the user field value of Demo enter user:Demo in the search bar and press return.  Assuming there are any matching logs within the current time window you should get a count of the hits and the lines will be displayed.  To make it a bit easier to see what's going on hover over the log_details field name and click add to show this field in the main pane.  Finally let's turn on auto-refresh of the search results so we can monitor our actions as they happen.  Click on the Auto-refresh link at the top of the page and select 5 or 10 seconds.  Now go ahead and login to Remedy as Demo and see what happens as you work.



Each time the screen refreshes you should see the results updated with the latest log entries and the timeline shows the number of matching log lines in that period.  There are many filtering options available so see what you can find by using the different fields we've added to look for specific types of activity.




Search and filtering are helpful but not very exciting to look at so how about some graphical representations of our data?  Click on the Visualize link in the left hand pane and then Create a visualization to see the range of formats available.



Let's start with a Pie chart: click on the icon, select the logstash-* index and you should see this



This is simply a count of the number of records, so we need to provide some additional options to make it a bit more interesting.  At this point it's worth switching off the auto-refresh, if you have it set, to avoid the graphics being refreshed as we experiment with them.


Click on Split Slices, choose Significant Terms as the Aggregation and rpc_queue.keyword as the Field.  Set the Size to 10 and click the play icon at the top of the panel.

Here we can see how much of our log activity belongs to the different RPC queues in our server.


Change the Field and try some of the others such as the user.keyword


The data used to create the graphics can be filtered just as on the Discover page and the time range can also be adjusted as required.


With so many different visualization types available you'll be able to view your logs in a variety of different ways, and a future post will look at how to get even more of the log line data into fields so that they can be used this way.


Wrapping Up

This post builds on the previous two and shows how to start using Logstash to enrich the data being collected from our Remedy server using the powerful grok filter plugin.  In future posts we'll look at using it further to get even more information from our API log lines, and then extend it to other types of logs such as SQL and FT Index.  With all of this extra data we can build even more complex graphics to help us visualize and analyse our systems.
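For reference, the kind of grok filter that produces fields such as user and rpc_queue from the API log header might look something like this (a sketch using standard grok patterns, not necessarily the exact filter used in this series):

```
filter {
  grok {
    match => { "message" => "<%{WORD:log_type}\s*> <TID: %{INT:tid}> <RPC ID: %{INT:rpc_id}> <Queue: %{DATA:rpc_queue}\s*> <Client-RPC: %{DATA:client_rpc}\s*> <USER: %{DATA:user}\s*>" }
  }
}
```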


Comments, suggestions and questions welcome.


Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 4


Introduction and Recap

In the first post we saw how to set up Filebeat to collect Remedy logs and send them to Elasticsearch, where they could be searched and viewed using Kibana.  This was a good start and could easily be extended to include additional logs and servers.  For example, a single Elastic stack instance could be used to aggregate logs from a number of Remedy servers and associated mid-tiers.  Once the logs have been collected they can be filtered using a combination of Elastic fields, such as source for the log file name, beat.hostname for the originating host, and the logtype field that we added in the filebeat.yml configuration file.


Having a single system to gather logs is a good first step but perhaps there's more that we can do with the data being collected?  Would it be helpful to be able to see the types of error codes that our servers are producing and their frequency?  How about the types and versions of the clients that are connecting?  There's a wide variety of data in the logs but how do we get it in a format that we can use?


This post will look at what's needed to get our test system ready to do this.  We need some way to take the separate pieces of information on each log line and break it out so that we can make more use of it in Kibana.  Fortunately, there's a tool to do just that...


Enter Logstash...

Logstash is another component in the Elastic stack and its job is to take data and transform it through a variety of filter plugins as it passes through the program.


Let's look at a few AR Server API log lines of the type that we collected in the example setup in Part 1.


<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component from Approval Server (protocol 26) at IP address using RPC // :q:0.0s

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8590 */ -GLEWF OK

<API > <TID: 0000000333> <RPC ID: 0000021385> <Queue: List > <Client-RPC: 390620 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:18.3550 */ +GLE ARGetListEntry -- schema AR System Email Messages from E-mail Engine (protocol 26) at IP address using RPC // :q:0.0s

<API > <TID: 0000000333> <RPC ID: 0000021385> <Queue: List > <Client-RPC: 390620 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:18.3570 */ -GLE OK


Do you notice the pattern in the format of the lines?  They all share many common elements that contain potentially interesting information that we may want to use to help analyse our system.  The same format is shared by many of the AR Server logs such as SQL, filter, escalation and so on.  The data up to the timestamp is fixed and the log specific information is added after this point.  The log format is documented here.


Examples of the common elements are


<API >                                        The log type - API, SQL, FLTR, ESCL, THRD, USER and more.

<TID: 0000000336>                             Thread ID - a unique number assigned to each thread when it is created.

<RPC ID: 0000021396>                          RPC ID - a unique number assigned to each API call and shared by all related activity in the API.

<Queue: Prv:390680>                           The type/number of the RPC queue that the thread belongs to - fast, list, private etc.

<Client-RPC: 390680 >                         The optional RPC queue that was requested by the client in the API call.

<USER: Remedy Application Service >           The user performing the activity.

<Overlay-Group: 1 >                           Overlay or base object mode actions.

/* Tue Jul 31 2018 11:21:39.8570 */           Timestamp
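To make the regularity concrete, here's a short Python sketch that pulls these common elements out of one of the sample API lines above with a regular expression (the field names are my own choice, not anything Remedy-defined):

```python
import re

# One pattern for the fixed header shared by AR Server logs (field names are my own)
AR_HEADER = re.compile(
    r"<(?P<log_type>\S+)\s*>\s+"
    r"<TID:\s*(?P<tid>\d+)>\s+"
    r"<RPC ID:\s*(?P<rpc_id>\d+)>\s+"
    r"<Queue:\s*(?P<rpc_queue>.*?)\s*>\s+"
    r"<Client-RPC:\s*(?P<client_rpc>.*?)\s*>\s+"
    r"<USER:\s*(?P<user>.*?)\s*>\s+"
    r"<Overlay-Group:\s*(?P<overlay_group>.*?)\s*>\s+"
    r"/\* (?P<timestamp>.*?) \*/\s+"
    r"(?P<log_details>.*)"
)

line = ("<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> "
        "<Client-RPC: 390680 > <USER: Remedy Application Service > "
        "<Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8570 */ "
        "+GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component")

fields = AR_HEADER.match(line).groupdict()
```

This is essentially the job we'll be handing to Logstash: split the fixed header into named fields and keep the remainder as the log-specific detail.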


Before we can start extracting these from our logs we need to add Logstash to our setup and get the logs passing through it.


Installing Logstash

As we did in Part 1, I'm going to use the container version of Logstash and run it on the same system as the existing Elasticsearch and Kibana containers.  This should be fine for test purposes assuming you have at least 8GB of memory available for the three containers.  We need to add a new entry in the services section of the elk.yml file we created previously that contains




  logstash:
    image: docker.elastic.co/logstash/logstash:6.3.2
    container_name: logstash
    ports:
      - 5000:5000


Let's start the container and confirm it works with


# docker-compose -f elk.yml up -d

Pulling logstash (

6.3.2: Pulling from logstash/logstash

7dc0dca2b151: Already exists

a9821ac0463b: Pull complete

6af182068d2d: Pull complete

7ad725483188: Pull complete

d5c1c85bef83: Pull complete

890f4724f175: Pull complete

dcfd1014e6a6: Pull complete

0b58f4123258: Pull complete

d82a06d8e668: Pull complete

a87e8d5f72c3: Pull complete

df6b8528f94e: Pull complete

Digest: sha256:838e9038388e3932f23ac9c014b14f4a1093cd8e5ede7b7e0ece98e517d44fb2

Status: Downloaded newer image for

kibana is up-to-date

elasticsearch is up-to-date

Creating logstash ... done


Using the docker ps command we can show the running containers:


# docker ps

CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                            NAMES

4f4d0963ef64        docker.elastic.co/logstash/logstash:6.3.2             "/usr/local/bin/dock…"   15 seconds ago      Up 14 seconds       5044/tcp, 0.0.0.0:5000->5000/tcp, 9600/tcp       logstash

fb6abb7f3a2b        docker.elastic.co/elasticsearch/elasticsearch:6.3.2   "/usr/local/bin/dock…"   6 hours ago         Up 6 hours          0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch



As a final test to confirm it's working, access Kibana and click on the Monitoring link on the left.  You may get prompted to enable monitoring; accept this and you should see the status of your Elastic stack.


Configure Logstash

Now that Logstash is running we need to configure it to accept data from Filebeat running on our AR Server, process the data to find the key pieces of information we're interested in, and then send it to Elasticsearch.  We'll do this using a text file called logstash.conf to which we'll add the necessary settings.  Full details of the configuration options and how Logstash works are documented on the Elastic site.  It's a very flexible tool with many more features than we'll be using and I'd encourage you to have a look and see what it's capable of.


The first step is to define an input and output setting.  Create a new file called logstash.conf in an empty directory (I'm using /root/elk/pipeline - you'll see why shortly) and add the following:


input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}


The input definition creates a listener on port 5044 to accept data from a Beats client, Filebeat in our case.  The output section provides the details of our Elasticsearch server where we're sending the data.  Let's test this before we go any further and confirm that we can still see our log data.  Remember that our Logstash is running in a container so how do we get it to use the configuration file that is outside of that container?  We use a Docker feature called volumes which maps data from the host file system into a container.


Edit your elk.yml file and add the volumes: and extra ports: entries shown in the logstash service below.




  logstash:
    image: docker.elastic.co/logstash/logstash:6.3.2
    container_name: logstash
    volumes:
      - /root/elk/pipeline:/usr/share/logstash/pipeline
    ports:
      - 5000:5000
      - 5044:5044


The volumes: option maps the contents of the host /root/elk/pipeline directory so that they appear in /usr/share/logstash/pipeline in the container and the ports: change makes port 5044 available from outside of the container so that Filebeat can connect.  You can use any directory you like on the host but it should contain the logstash.conf file we created above.


Restart the container with


# docker-compose -f elk.yml up -d

Recreating logstash ... done


We also need to update the /etc/filebeat/filebeat.yml file to use Logstash in place of Elasticsearch.  Hop over to your AR Server system console and edit the file so that it appears as below, where hostname is the name of the machine where your containers are running.


#Comment out this bit
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["hostname:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["hostname:5044"]


Restart Filebeat


# systemctl restart filebeat


To make sure Elasticsearch is receiving data via Logstash we need to create a new index pattern in Kibana as detailed in the previous article.  When you go to the Management -> Index Patterns -> Create index pattern page you should see a new index called logstash-YYYY.MM.DD



Create a new index pattern called logstash-* and choose @timestamp as the Time Filter field name.  Switch back to the Discover page and you can use the drop down menu under the Add a filter + link to switch between the filebeat-* and logstash-* index data.



What you should find is that the log data from your Remedy server is now being added under the logstash-* index.  Switching between the index patterns will allow you to compare the records and you should find they look pretty much the same other than the _index field value.


Conclusion and Next Steps

That's it for Part 2 of our Remedy/Elastic cookbook.  In Part 1 we saw how to set up Elasticsearch, Kibana and Filebeat to collect logs from a Remedy server.  We were able to search and filter the log data but not much more.  In this post we've introduced Logstash and reconfigured Filebeat to pass logs through it to Elasticsearch.  This provides the foundation we need to start parsing our logs, and that's what will be coming up in Part 3.


Please leave feedback and questions below and let me know if you find any problems when trying to follow the steps above.


Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 3

Using the Elastic Stack with Remedy Logs - Part 4



This is the first in a series of articles that will look at how you can use the Elastic Stack to gather, view, and analyse the logs from Remedy products.  In this post I'll start by showing how you can set up the software and enable your choice of logs to be read and forwarded to Elastic so that they can be searched easily.  Later posts will show how lines from some of the logs can be parsed to split out interesting bits of data that will then be used to visualize the activities contained in the logs.


What is the Elastic Stack?

Formerly known as ELK, the Elastic Stack is a set of open source software components that will take data from many sources, format it, and make it available for searching, analysis and visualization in real time.  The main components are


  • Beats - lightweight collection and shipping agents that gather data and forward it to Elasticsearch or Logstash.
  • Logstash - collects and processes data in many formats, allowing parsing and enrichment.
  • Elasticsearch - the indexer and search engine used to store the data gathered by Beats and Logstash.
  • Kibana - an interface to Elasticsearch providing many types of visualization to help analyse and understand the data.


Why Use Elastic to Collect Remedy Logs?

Anyone who has supported a Remedy system will know that there are many different logs that need to be reviewed, both during normal usage and when investigating a problem.  If your environment uses a server group then the challenge is multiplied, as each server will produce its own set of these logs.  Wouldn't it be nice to have these logs collected and forwarded to a single system where they could be searched and viewed without having to hop from console to console on each server?  This is what we're going to do with Elastic.


Setting Up Elasticsearch and Kibana

The first step is to set up Elasticsearch and Kibana to index, search and view our logs.  Once this is done we'll add one of the Beats modules to a Remedy server and configure it to send a selection of logs to Elasticsearch.  In a later post I'll add Logstash to extract more useful information from the logs before the data is indexed.


There are many options for installing the software.  You can download versions for many platforms, try a cloud based system, or use a ready-made virtual machine.  For this article I'm going to use the containers that are available for use with Docker, as they include all the necessary supporting software such as Java.


I have a Linux VM running CentOS 7 and I've already installed the docker software.  If you're using a different platform or Linux version there should be a version available that you can use.


# docker version


Version:      18.03.1-ce

API version:  1.37

Go version:   go1.9.5

Git commit:   9ee9f40

Built:        Thu Apr 26 07:20:16 2018

OS/Arch:      linux/amd64

Experimental: false


The Elastic website has detailed instructions if you run into problems (Elasticsearch and Kibana), but it should just be a case of pulling the images and starting the containers.


# docker pull

6.3.2: Pulling from elasticsearch/elasticsearch

7dc0dca2b151: Pull complete

72d60ff53590: Pull complete

ca55c9f7cc1f: Pull complete

822d6592a660: Pull complete

22eceb1ece84: Pull complete

30e73cf19e42: Pull complete

f05e800ca884: Pull complete

3e6ee2f75301: Pull complete

Digest: sha256:8f06aecf7227dbc67ee62d8d05db680f8a29d0296ecd74c60d21f1fe665e04b0

Status: Downloaded newer image for


# docker pull

6.3.2: Pulling from kibana/kibana

7dc0dca2b151: Already exists

8ce241372bf3: Pull complete

d6671bb44c0a: Pull complete

f3522d60bffe: Pull complete

e4d56fe4aebf: Pull complete

68df91c3f85d: Pull complete

aef1ffed878c: Pull complete

c5aeceb9f680: Pull complete

af84d3e72ccf: Pull complete

Digest: sha256:7ae0616a5f5ddfb0f93ae4cc94038afd2d4e45fa7858fd39c506c8c682ee71f0

Status: Downloaded newer image for


Although it's possible to start each container individually this is a good example of where the docker-compose command can be used to configure and start them together.  This makes sure we have the necessary setup to allow Kibana to access Elasticsearch.


Create a file called elk.yml with these contents:


version: '3'

services:

  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    container_name: kibana
    ports:
      - 5601:5601

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9300:9300


Now both containers can be started with


# docker-compose -f elk.yml up -d

Starting elasticsearch ... done

Starting kibana        ... done


Check that you can access Kibana by pointing your browser to http://hostname:5601 where the hostname is the machine that is running the containers, and you should see the interface.

We now have the software running to store and search our logs but we need to setup a Beats client to send logs to Elasticsearch.


Install and Configure Filebeat on the Remedy Server

Beats are a collection of different data collecting and shipping agents that can be used to forward data into Elasticsearch.  There are Beats available for network data, system metrics, auditing and many others. As we want to collect data from Remedy log files we're going to use Filebeat.


This client needs to be running on our Remedy server so that it has access to logs in the ./ARSystem/db and other directories.  We could also use the container version of Filebeat, but I'm going to install it directly on the server.  You may need to set up the Elastic repository on your server and then simply


# yum install filebeat

  Installing : filebeat-6.3.2-1.x86_64              

  Verifying  : filebeat-6.3.2-1.x86_64                     


  filebeat.x86_64 0:6.3.2-1                                                                                                                                                                                                                                               

With the client installed we need to configure the logs we want to send to our Elasticsearch server by editing the /etc/filebeat/filebeat.yml file.  Which logs you choose is up to you and, whilst it's possible to use a wildcard definition to monitor all files in a directory, let's start with an example of the arerror.log and the server API log.  Each file is defined with an entry in the filebeat.prospectors section:




filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arerror.log
  fields:
    logtype: arserver
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arapi.log
  fields:
    logtype: arserver
  fields_under_root: true


There's an extra fields: section for each entry to add a logtype identifier that will help us find the entries in Elasticsearch.  This is optional and by no means necessary, but it may be helpful as the setup becomes more complex.  Each prospector entry causes Filebeat to effectively run a tail on the file, sending data to Elasticsearch as the application that creates the log writes to it.  Similar configuration entries could be used for other component logs, such as those from the Email Engine or Tomcat.
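Each prospector is essentially a resumable tail: Filebeat remembers its offset in each file and ships whatever has been appended since it last looked.  A toy Python illustration of that idea (my own sketch, not Filebeat's actual implementation):

```python
import os
import tempfile

def read_new_lines(path, offset):
    """Return lines appended since `offset`, plus the new offset to resume from."""
    with open(path) as f:
        f.seek(offset)
        data = f.read()
        return data.splitlines(), f.tell()

# Demonstrate on a throwaway "log" file
path = os.path.join(tempfile.mkdtemp(), "arapi.log")
with open(path, "w") as f:
    f.write("first line\n")
lines, offset = read_new_lines(path, 0)            # picks up everything so far
with open(path, "a") as f:
    f.write("second line\n")
new_lines, offset = read_new_lines(path, offset)   # only the appended line
```

Filebeat keeps these offsets in a registry file, which is why it can resume where it left off after a restart.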


We also need to tell Filebeat where to send the logs and this is done further down in the file where we need to uncomment the setup.kibana and output.elasticsearch entries and add our server.



setup.kibana:

  # Kibana Host

  # Scheme and port can be left out and will be set to the default (http and 5601)

  # In case you specify and additional path, the scheme is required: http://localhost:5601/path

  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601

  host: "hostname:5601"


output.elasticsearch:

  # Array of hosts to connect to.

  hosts: ["hostname:9200"]


Change hostname to the name of the server where your Elasticsearch and Kibana containers are running.


We're almost there.  The arerror.log is always on, but we need to go to the AR System Administration: Server Information form and enable API logging.  Remember that Filebeat is reading the log as it is written, so we can set a small log file size, 10MB for example, and choose the option to Append to Existing so that only a single file is used.



Now start Filebeat...


# systemctl start filebeat

# tail /var/log/filebeat/filebeat

2018-07-31T06:52:38.716-0400 INFO log/harvester.go:216 Harvester started for file: /opt/bmc/ARSystem/db/arapi.log

2018-07-31T06:52:38.716-0400 INFO log/prospector.go:111 Configured paths: [/opt/bmc/ARSystem/db/arerror.log]

2018-07-31T06:52:38.716-0400 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 2

2018-07-31T06:52:38.716-0400 INFO cfgfile/reload.go:127 Config reloader started

2018-07-31T06:52:38.718-0400 INFO cfgfile/reload.go:219 Loading of config files completed.

2018-07-31T06:52:38.742-0400 INFO elasticsearch/elasticsearch.go:177 Stop monitoring endpoint init loop.

2018-07-31T06:52:38.742-0400 INFO elasticsearch/elasticsearch.go:183 Start monitoring metrics snapshot loop.

2018-07-31T06:52:39.727-0400 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.3.2

2018-07-31T06:52:39.730-0400 INFO template/load.go:55 Loading template for Elasticsearch version: 6.3.2

2018-07-31T06:52:39.802-0400 INFO template/load.go:89 Elasticsearch template with name 'filebeat-6.2.4' loaded


Configure Kibana

The final step is to access Kibana and configure it to display the data being sent from our Remedy system.


Point your browser at Kibana as above and click on the Discover link at the top left hand side.  This will take you to the index pattern definition page.



There should be a new filebeat index available so enter filebeat-* in the index pattern box and click > Next Step



Choose @timestamp from the Time Filter field name drop down menu and click Create index pattern.  Once the index pattern is created click on the Discover link again to return the main page where you should be able to see log data from your Remedy server.



Browse Remedy Logs

The graph along the top of the screen shows the number of log entries received over, by default, the last 15 minutes.  You can change the time frame by clicking on the Last 15 minutes link at the top of the screen, where you can choose from a range of pre-configured ranges or specify your own.


To see more details about the log entries click on the arrow to the left of the timestamp in the main panel.  This will expand the entry and show all the fields in the record.


Here you can see the full log line recorded in the message field - in this case it's an API log entry for a +GLEWF call.


Click the arrow again to collapse the record and then you can refresh the list with F5 or by clicking in the search box and pressing enter.


Filter Data and Customise the Display

You can change the fields that are displayed in the main pane by hovering over the field names in the column to the left of the screen - an add button will appear which will add the selected field to the table.

If you click on a field name you will see the top 5 most common values, and you can use the + magnifying-glass icon to add a filter limiting the display to matching records.  Here's what it looks like after adding the message field and filtering to show only entries from the arerror.log


Searching for Errors

As a final example in this post let's see how to use Kibana to search for errors.  Let's create an error by trying to log in to Remedy using an invalid password.  Using the mid-tier login page I tried to log in as Demo and got the following

To remove the filter on the arerror.log, hover over the blue oval that shows the filter below the search bar and click the trash can icon.  Now that all records are being displayed, type error in the search bar and press return.  All matching records are displayed and we can see the 623 Authentication failed message in the middle of the screen.

Conclusion and Next Steps

That's it for this post.  We've seen how to install Elasticsearch, Kibana and Filebeat and configure them to record and display data from a couple of Remedy server log files.  We've also seen how to get started viewing and searching these logs using the Kibana interface.  We've only scratched the surface but I hope that you'll be tempted to try this with one of your test servers and see how easy it is.  Try collecting some different logs and experiment with the search and interface options in Kibana.  There are many resources available to help you with this, starting with the official documentation.


In future posts we'll see how to add Logstash to our setup and use this to parse data from the logs which we can then use for more advanced analysis and graphical visualization of the logs.


If you have any comments, questions or suggestions please add them below and I'll do my best to answer.


Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 3

Using the Elastic Stack with Remedy Logs - Part 4


BMC Support or R&D might have asked for a "slow" SQL's execution plan, and it is possible that the SQL has aged out of Oracle's Library Cache or MSSQL's Procedure Cache, so no execution plan is available.


That would then necessitate generating an Estimated Execution Plan ("Estimated Plan" in MSSQL's SQL Management Studio, or "Explain Plan" in Oracle) from the SQL extracted from the AR log file.


The SQL does need to be complete and not truncated in order for an Estimated Plan to be generated.


To get the complete SQL as it was sent to the database put the following AR parameter in your ar.cfg/ar.conf file(s) and restart the AR Server(s).


Enable-Unlimited-Log-Line-Length: T


Remedy documentation -


Some of you already know this so it may be old news for you! :-)


Remedy 9.x is written in Java.  One of the changes introduced was to avoid sending literal values in Remedy SQL statements, as was the case in 8.x and prior versions.


Instead the statement is prepared in JDBC with "bind" variables and then sent to the database (this relieves the burden on the Oracle database to replace literals with binds).


One drawback to the above fix is addressed in Oracle Doc Id 1451804.1.


The drawback/problem arises if you are using the database (Oracle Standard/Enterprise or higher) in Case Insensitive mode, with appropriate Linguistic Indexes in place, and Remedy ends up generating SQL statements with LIKE conditions in them.


The Oracle document states that the LIKE condition must be <COLUMN LIKE 'CONSTANT%'> and NOT <COLUMN LIKE :BIND>, as the bind form will cause the optimizer to NOT use a Linguistic Index on the column COLUMN.



A query using LIKE 'CONSTANT%' with nls_comp=LINGUISTIC against a column with a linguistic index defined on it produces an efficient plan that uses a range scan on the index, but if a bind variable with the same value is used instead (LIKE :BIND, where :BIND = 'CONSTANT%'), the query plan will not use a range scan, resulting in poor performance.  Hinting, if tried, does not help.



This is a known limitation of using LIKE with NLSSORT-based indexes.


Using an NLSSORT-based index with LIKE 'CONSTANT%' requires transforming the predicate into a range predicate based upon the constant matching string.  For instance, col LIKE 'CONSTANT%' when using LINGUISTIC and BINARY_CI is transformed into NLSSORT("COL",'nls_sort=''BINARY_CI''')>=HEXTORAW('636F6E7374616E7400') AND NLSSORT("COL",'nls_sort=''BINARY_CI''')<HEXTORAW('636F6E7374616E7500').  The values passed to HEXTORAW are derived from the string 'CONSTANT'.


This transformation is performed during query parse and requires that the matching pattern be a known constant.  NLSSORT-based functional indexes cannot be used with a LIKE predicate when the matching pattern is based on a bind variable or expression.


When cursor_sharing=FORCE (or SIMILAR), all constants are replaced by bind variables before parse, preventing any NLSSORT-based functional indexes being used with LIKE.
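You can see the mechanics of that transformation with a few lines of Python (my own illustration of the derivation, not Oracle code): the lower bound is the raw bytes of the case-folded prefix, and the upper bound increments its last character, matching the HEXTORAW values quoted above.

```python
def like_prefix_range(prefix: str):
    """Derive the range bounds Oracle builds for LIKE 'prefix%' under BINARY_CI."""
    # BINARY_CI folds case, so work with the lowercase form, NUL-terminated
    low = prefix.lower().encode("ascii") + b"\x00"
    # Upper bound: increment the last character of the prefix
    high = low[:-2] + bytes([low[-2] + 1]) + b"\x00"
    return low.hex().upper(), high.hex().upper()

lo, hi = like_prefix_range("CONSTANT")
# lo is '636F6E7374616E7400' and hi is '636F6E7374616E7500', as in the Oracle note
```

Because the bounds depend on knowing the constant at parse time, the same transformation simply cannot be done when the matching pattern is a bind variable.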


The above issue is not likely to be fixed by Oracle any time soon.


As a result of the above problem BMC has introduced the following check in its code:

(a) Is the database Oracle and is it Case Insensitive?

(b) Are there any LIKE conditions in the SQL?




If the above are TRUE then the SQL will be sent to the database AS IS (no replacing of literals with binds in JDBC) and the Oracle database, where cursor_sharing = EXACT is necessary, will process the SQL and come up with an optimal execution plan.


UPDATE: Customers will need to set cursor_sharing to FORCE, as the Remedy fix sends ALL SQL statements with literal values and not just the ones that have at least one LIKE clause.


So what does the Oracle database do when a SQL statement comes in to be processed?


A SQL is HARD PARSED the first time it comes into the database:


  • SYNTAX CHECK                  Is the SQL grammatically correct?
  • SEMANTIC CHECK                Do the tables and columns exist? Are privileges in place? Any ambiguities (e.g. the same column name in two tables without specifying which one the SQL wants)?


  • Oracle now checks if the SQL statement is already in memory. If so, the next steps are skipped


  • COST ESTIMATION               The optimizer estimates the cost of candidate plans; the plan with the lowest cost is chosen
  • ROW SOURCE GENERATION         The Row Source Generator receives the optimal plan from the optimizer and generates an execution plan in a format the SQL engine can use


A SOFT PARSE is where the last 2 steps are skipped as the SQL is found in memory.



Soft parses, though less expensive than hard parses, still incur a cost: they need the Shared Pool and Library Cache latches (these are serial operations), which can lead to performance issues in OLTP systems.

To minimize the above impact, session cursors of reused (used > 3 times) SQL can be stored in the Session Cursor Cache (UGA/PGA).

What is actually stored is a pointer to the location in the Library Cache where the cursor existed when it was closed.

The presence of a cursor in the Session Cursor Cache guarantees the validity of the SQL's syntax and semantics, so the first 2 steps of parsing are skipped.

Instead of searching for the cursor in the Library Cache, the server process follows the pointer and uses the cursor (if present and still valid).

One can use this feature by setting a value for session_cached_cursors (BMC recommends 100).


Many moons ago, BMC deprecated the Windows ARUser client in favor of the web-based Mid-Tier client, and since that time there has been a scenario I have repeatedly run into that does not have a satisfactory solution based on out-of-the-box functionality:


There is an ARS table on the user's screen. The user needs the data from that table in a spreadsheet.

The built-in Report button that one gets on tables and on results lists yields a UI that ... well one of my users put it best:

"It looks like Radio Shack exploded on my screen!"

For instance, am I to turn loose a warehouse manager who uses her computer like 5 or 6 times a week to build her own reports against a form that has display-only fields with names that look similar to fields on the screen?!  Don't get me wrong, Smart Reporting has a whole lot of useful features and is a huge improvement, but at the end of the day ...





HTML5 offers us a lot of interesting capabilities inside modern browsers that we didn't have before.  In fact, the entire javascript ecosystem is experiencing something of a Cambrian explosion at the moment.  One of the new capabilities of HTML5 is the File API, which allows us to programmatically construct documents in javascript and make them available for download to the user.


A few weeks ago I once again found myself in that scenario I described at the top.  I had a custom application with a big table field and a whole bunch of workflow that lets the user get the data on the screen that they need, and now the user was just basically saying "ok, but now I need this in a spreadsheet, can't you just give me a download button?".


This got me thinking. "Good point! Why CAN'T I just do that?!"

All of the data on the screen that drives the ARS table already exists somewhere in a javascript data structure.  Why can't I just snag that, reformat it into something Excel knows how to deal with, and draw a download link on the screen?  So I got into a little mild hacking using the mighty, mighty Firefox Developer Edition Javascript Console, and I figured out how to do that.
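The reformatting step itself is just string work: take rows of named values, emit a header line, then one properly quoted line per row.  Here's the gist in Python for clarity (the actual hack does the equivalent in javascript in the browser; the row data below is made up):

```python
import csv
import io

def rows_to_csv(rows, columns):
    """Serialize a list of row dicts to CSV text with proper quoting."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical rows, standing in for the table field's javascript data structure
rows = [
    {"Request ID": "000000000000001", "Status": "New", "Summary": 'Has "quotes", commas'},
    {"Request ID": "000000000000002", "Status": "Assigned", "Summary": "Plain text"},
]
text = rows_to_csv(rows, ["Request ID", "Status", "Summary"])
```

The quoting matters: any value containing commas or quotes has to be wrapped and its quotes doubled, or Excel will mangle the columns.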


What I'm about to show you is the cleaned-up, rightest way to do this hack that I can figure out. I'm somewhat of a greenhorn when it comes to javascript, so it's entirely possible there are better ways to do what I've done here. To that end, I've posted my code on github. If you see problems, please make a pull request there, and I'll merge it into the main branch.


The basic gist of it is this:

  1. copy some javascript functions into the view header of your form
    For the impatient: here ya go

  2. make an active link that calls one of those functions, using a set-fields action with a run process to put the output of the function into a tmp field
    It's gonna look something like this:

  3. make an active link populate a view field with a template, passing the output in the tmp field from step #2
    Here's my version of that template. You may want to inject your own flavor. I'll cover the options my version of the template takes further down. Basically you're just calling the TEMPLATE() function. it'll look something like this:
  4. There's now a dialog on the screen where the user can select the columns they want to export and download the freakin' spreadsheet!
    And that'll look something like this:


You May Ask Yourself, "How Do I Work This?!"


Probably the easiest way to get started is just to download this demo and start playing with it.


  1. using Dev Studio, import the def file: ARS_CSVTools_demo_v1_definitions.def
    this will create the following objects on your Remedy Server:
    1. regular form: ahicox:csv:demo:data
    2. display form: ahicox:csv demo:dialog
    3. display form: ahicox:csv demo:gui
    4. active link: ahicox:csv demo:exportAll
    5. active link: ahicox:csv demo:exportSelected

  2. Import Demo Data to ahicox:csv:demo:data
    use the Data Import Tool to import this file: ahicox_csv_demo_data.arx to the ahicox:csv:demo:data form

  3. install the export dialog template
    Create a record in AR System Resource Definitions like this one:

    so basically:
    1. Attach this file: csvExportDialogTemplate.html
    2. Make sure the Mime Type is "text/html"
    3. Make sure the Type is "Template"
    4. Make sure the Status is "Active"
    5. Make sure the Name is "ahicox:csv demo:template-v1"  <- important!

  4. Open the ahicox:csv demo:gui form through your mid-tier
    1. click the Refresh button on the table. This should load a list of all planets from the sprawling longest running (and greatest) sci-fi show of all time Doctor Who. To keep things simple, we just have three columns: Entry ID, Key, and Value. That doesn't really matter, actually. This will work with whatever columns you put on whatever table you want to export. Only caveat being that no two columns should have the same display name (otherwise the export will only show one of them, probably whatever the last one with that name in the column order was, but no promises on that).

    2. The Export All Rows to CSV button will scoop up every row on screen and ship it off to the export dialog
      This button is calling the _exportAllRows javascript function, and assigning the output to the zTmp_JSONData field. The _exportAllRows function takes one argument, which is the field id of the table you want to export. For instance, the fieldid of the table on my demo form is: 536870913, so the set-fields action calls:

      $PROCESS$ javascript: _exportAllRows(536870913);
    3. The Export Selected Rows to CSV button will scoop up only the selected rows in the table field and ship them off to the export dialog. This is pretty much the same thing as #2, except it's a different function name:

      $PROCESS$ javascript: _exportSelectedRows(536870913);

      An important note about these javascript functions: if you need to export a table that is on a form embedded in a view field (or embedded several times deep in a view field), you need to insert these functions on the view header of the root level form. So for instance if you wanted to be able to export tables buried on forms in the ITSM Modules, you'd want these functions in the view header of SHR:LandingConsole, rather than the individual child forms.

The Template

The HTML template is populated via the built-in TEMPLATE() function (as illustrated above). These are the arguments that the template takes:


  • jsonData
    this is the JSON datastructure returned from either the _exportAllRows() or _exportSelectedRows() functions
  • titleHeader
    this string is shown in the header of the dialog template, adjacent to the "Export as CSV Spreadsheet" button
  • defaultShowColumns
    this is an array in javascript notation, containing the columns you would like to be shown in the preview by default when the user opens the dialog (the user can select additional columns or deselect the defaults once the dialog is open). An example value would be:

     "['columnName1', 'columnName2']"

    NOTE however, if you're building that in workflow, the ticks will be a problem. It'll actually have to be constructed something like this:

     "[" + """" + "columnName1" + """" + ", " + """" + "columnName2" + """" + "]"

    column names are referenced by their DISPLAY NAME, not database name, in this context. All this does is control which of the columns have a checked checkbox by default when you open the dialog:


  • fileNameBase
    the template will attempt to construct a unique filename each time the user clicks the download button. It will do this by appending the epoch timestamp to the end of the file name at the time the user clicks the button. The fileNameBase argument allows you to give the file a meaningful name that appears before the unique timestamp. For instance

    example fileNameBase value: "DWP-Export"
    resulting file name:        "DWP-Export-1522077583.csv"
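The file-naming behavior described above could be put together like this (my sketch of the described behavior, not the template's actual code):

```javascript
// Append the current epoch timestamp (in seconds) to the base name so
// that each click of the download button yields a unique file name.
function buildFileName(fileNameBase) {
    var epochSeconds = Math.floor(Date.now() / 1000);
    return fileNameBase + '-' + epochSeconds + '.csv';
}
```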
  • displayLimit
    By default, the dialog is going to show a preview of the first X rows in the export where X is defined by displayLimit. If the number of rows in the export is less than this number, we'll just show all rows. Otherwise we will show only this many with a user-friendly message explaining that.





  • Handle Table Chunking
    At present, it'll just scoop up rows visible on screen. For instance in the case of exporting selected rows, it should be possible to hang a hook off of whatever function is called to retrieve the next chunk from the server, export data from selected rows of the previous chunk and append that with additional selections.  Perhaps something also for export all that will programmatically retrieve each chunk from the server and export/append. Just needs a little hackkity hack.

  • CSV Import!
    This should also be possible, since the HTML5 File API allows us to access the contents of a file specified by the user without uploading it to the server. In theory, I should be able to create a similar dialog that shows the columns in your spreadsheet and the columns in your table, allows you to map them, then hijacks the PERFORM-ACTION-TABLE-ADD-ROW run-process command to add the rows to your table on-screen, so that you can set up your own table walk to push the rows where you want them to go.

    This would beat the living hell out of uploading the file as an attachment, staging it somewhere on the server, asynchronously forking off an AI job out of workflow to import/dispose of the file, and then having an active link loop run every couple seconds to check the status of the AI job. Which is the only other way I'm aware of to handle it right now. And god forbid if the file the user uploaded had the wrong column names or bad data! Good luck catching that exception!

  • Get BMC To Support this
    Look obviously this is unsupported.
    In order to figure this out, I had to pull apart lots of code I dug up off the mid-tier. This entire approach depends on the functions and datastructures in ClientCore.js being stable from version to version. There is no guarantee of that. Therefore BMC could break this at any moment without warning.

    My users like this feature, a whole lot more than the out-of-the-box reporting features. I'd like to be able to continue offering this feature without having to worry that every mid-tier patch we install will potentially break it. At the end of the day, that's actually not a lot to ask. BMC could simply make a function that does this and include it in ClientCore.js. It's pretty simple stuff. Heck. Maybe they could even give us a run-process command to export properly encoded JSON from a table into a field?!

    Anyhow. Here's what I know for sure this works on:
    I've successfully tested this on ARS / Mid-Tier 201702131133, against these browsers:
    1. Firefox 57.7.2 (32-bit)
    2. Firefox 10.0b6 (64-bit)
    3. Internet Explorer 11.2125.14393.0
    4. Chrome 65.0.3325.181 (32-bit)
    5. Edge 38.14393.2068.0


  • This approach in general could do a LOT of things
    There is pretty much nothing Javascript can't do inside a browser these days. Literally. From real-time 3D rendering to actually spinning up VMs. It's been done, on the client side, in a browser.  So why am I wrestling with cumbersome and poorly implemented server-side processes for mundane stuff like this that I could do entirely in the client? Javascript was BUILT for consuming JSON webservices -- that's REST in a nutshell, and now we have a REST API. All we really need to do some seriously amazing stuff in ARS is a supported interface to ClientCore.js and a way to get an authentication token from an already logged in user so that I can pass it to the REST API without asking the user to log in again.

    And that's just scratching the surface. Who's up for building an ARUser replacement out of the REST API and Electron? I would be.

    ATTENTION BMC: publish a developer-facing, supported (and documented) javascript API for interacting with ARSystem within HTML templates!
    Let a hundred flowers blossom. We're out here selling your software for you day in and day out. It's the least you can do.



ALSO: for those not hip to the github, there's an attachment with a zip file of everything :-)


Quite often when you have an issue, the first thing that is asked of you is to capture logs and send them off.  The problem is that these logs quite often contain sensitive information: things like user names and server IPs.  Depending on the nature of your system and your organization, providing that information might not only be a bad idea, it might be against your company's InfoSec policy, or maybe even illegal.  To help combat this issue I wrote this very simple Java program with a sample batch file.


At the heart of the program is the simple ability to do pre-defined 'find/replace' scenarios.  The properties file contains two pre-defined find/replace scenarios.


1 - UserName - This will find the user: section of your log file and replace it with a generic 'UserName'

2 - IPv4 Address - This one will look for anything shaped like an IPv4 address, in a generic way so that it finds ANY IP address, and replaces it with IPV4Address


The program is RegEx aware, which means that you can use complex find criteria.  If you're not up on RegEx and are interested in the finer details, see Regular expression - Wikipedia.


If you are on Windows, all you need to do is configure your properties file with whatever it is you want to find and what to replace it with, then drag/drop your log file onto the batch file.  The batch file will run the log through the program and spit out a copy of the log with the suffix .scrubbed.log appended.
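The core of the find/replace idea fits in a few lines of any language. Here's a sketch in javascript; note that the actual tool is a Java program driven by a properties file, and these two patterns are my own approximations of its pre-defined scenarios:

```javascript
// Each rule pairs a regex with its replacement, mirroring the two
// pre-defined scenarios: user names and IPv4 addresses.
var scrubRules = [
    { find: /user:\s*\S+/g,                 replaceWith: 'user: UserName' },
    { find: /\b(?:\d{1,3}\.){3}\d{1,3}\b/g, replaceWith: 'IPV4Address'   }
];

// Run every rule over the log text in order and return the scrubbed copy.
function scrub(logText) {
    return scrubRules.reduce(function (text, rule) {
        return text.replace(rule.find, rule.replaceWith);
    }, logText);
}
```

For example, `scrub('user: jsmith connected from 10.1.2.3')` comes back with both the user name and the IP address replaced by their generic placeholders.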


This utility does not make any network calls, it only reads the log file you provide it and gives you a scrubbed output.


This is an unofficial and unsupported tool, and comes with no warranties expressed or implied.  It is still your responsibility to ensure that sensitive information is removed from the scrubbed file before posting that log anywhere, but this should help you get things cleaned up with ease and speed.

Rahul (Remedy) Shah

D2P Overview

Posted by Rahul (Remedy) Shah Employee Mar 2, 2018


Dev to Prod (D2P) is a feature used to push content from a QA/Dev environment to a production environment. It was initially introduced in AR System 9.x and has been enhanced quite a bit over later releases, adding more features and becoming more stable.


Two primary components of D2P are

  1. D2P Console
  2. Single Point Deployment


The D2P Console is mainly for managing and deploying packages. It has multiple capabilities, but since it's an older feature we will not talk much about it here.


A quick look at what's new in the "pre 9.1.03" and "9.1.04" releases:



Why D2P and its enhancement ?


Basically, until now, deployment of a hotfix/patch was a totally manual process, and it was a big pain point to apply the hotfix/patch to each node/machine. If you look at a hotfix or patch, it is a bunch of binary files, definition files, data files (arx files), and some database changes, and those had to be applied manually on each node.  D2P already had the capability to promote workflow definitions and data (arx files); what was really missing was deployment of binary files. In release 9.1.04, a new feature was introduced known as "Single Point Deployment" which takes care of deploying binary files.


Post 9.1.04, all hotfixes and patches will be shipped as D2P packages. The main advantage is that you deploy to only one server and the package gets deployed automatically on all the servers in the server group; the progress can be easily monitored, and, another key feature, the deployment can be rolled back if needed.


The deployment process is going to look like this: download from EPD to a local machine, test it in your test/dev environment, and then import it on your production machine. It's going to be an easy process, and you don't need to access these machines physically; it will all be done from the D2P Console.



What is Single Point Deployment feature?


  1. What is Single Point Deployment?

    Single Point Deployment helps to create and deploy a binary payload. A binary payload is a set of binary files, configuration files, and batch or shell scripts which can be executed. Not only BMC: anyone can create a payload for deployment.
  2. Which components support a binary payload?


         You can deploy a binary payload onto the following components.           

    1. BMC Remedy Action Request System Server (AR)
    2. BMC Remedy Mid Tier (MT)
    3. SmartIT


   3. What changes were made to AR Monitor (armonitor)?


          AR Monitor (armonitor.exe/sh) loads (starts) all the processes listed in armonitor.cfg. From 9.1.04 it is more intelligent: it knows which processes it has loaded and has the capability to start and stop individual processes.


  4. What is the file deployment process?


          Starting in 9.1.04, for all 3 components above (AR, MT & SmartIT), a separate process known as the "file deployer" is running, which knows how to stop and start processes. Basically, it can instruct armonitor to start or stop any particular process; for example, it can instruct it to stop/start the "java plugin process" hosted on an AR Server.


  5. What are the main components of File Deployment Process?


        Three components are important for any file deployment process:

  • File Deployer  :-  A new process that runs under armonitor and does the actual work of deploying any new binary payload.
  • AR Monitor     :-  The file deployer calls AR Monitor to stop and start the required processes.
  • AR Server      :-  Acts as a coordinator between multiple file deployers. Controls the sequence in which deployment should be done.


  Summary :- With all the above new stuff you have a new capability to deploy a binary file. To deploy a binary file, you create a payload on an AR Server which contains the binary file to be deployed. This is how the flow goes:

  • AR Server instructs the file deployer that there is some work that needs to be done
  • The file deployer downloads the binary file from the server
  • The file deployer instructs AR Monitor to STOP the particular process defined in the payload (the stop is required because that binary file (jar) might be locked by the process)
  • The file deployer takes a backup of the existing binary
  • The file deployer copies the new binary
  • The file deployer STARTS the particular process it had stopped
  • The file deployer checks whether that process started or not
  • The deployment is marked as done.


More blogs on Single Point Deployment to come.

Rahul (Remedy) Shah

Operating Mode

Posted by Rahul (Remedy) Shah Employee Feb 28, 2018



Operating Mode is a feature of AR System introduced in the 9.1.02 release.


Why was Operating Mode needed ?


When upgrading the server, CMDB, or any AR applications, background processes can slow down the upgrade, and server restarts during the upgrade can cause operation ownership to switch. BMC makes recommendations in documentation and/or white papers suggesting customers make certain configuration changes before running an upgrade, and some of the product installers perform configuration changes as well. Instead of relying on manual changes or on each installer making changes itself, it is better if the installers can put the server into an upgrade mode that takes care of this. That way, each release BMC can add new capabilities to the upgrade mode rather than updating lots of documentation or installers.


Operating Mode helps drive two things:

  • Helped in solving many of the BMC install/upgrade problems
  • Also helped in a feature called Zero Down Time upgrade (ZDT).


Who consumes Operating Mode?


All BMC platform installers (AR Server, Atrium Core, Atrium Integrator) and apps installers (ITSM, SRM, SLM) put the server into Operating Mode before the installer starts any activity, and put it back into normal mode once it's done. Internally, the installer calls SSI to set/reset the server's operating mode.


Some insights on server info


Setting name: Operating-Mode
Possible values (integer):

  • OPERATING_MODE_NORMAL, 0 - normal mode, default, how the server runs today
  • OPERATING_MODE_UPGRADE_PRIMARY, 1 - upgrade mode for the primary server
  • OPERATING_MODE_UPGRADE_SECONDARY, 2 - upgrade mode for secondary servers


By default the server runs in normal mode and the value is 0. When an install/upgrade is happening on the primary server, the operating mode is set to 1 (this disables a few things, which we will talk about later). When the install/upgrade is happening on non-primary server(s), the operating mode is set to 2 (as of 9.1.02, 9.1.03, and 9.1.04 this does not disable any features, because non-primary server upgrades are all about replacement of the file system).


What kind of install/upgrade sets server in Operating mode?

  • Fresh installation does not use operating mode
  • DB (Accelerated) only upgrade does not use operating mode
  • Only normal upgrades use this feature.


What are the features that get disabled when server is set in Operating mode ?


The following operations are disabled on the current server when Operating Mode is set.


  1. Hierarchical groups ( processing of bulk compute because of group hierarchy change or form level property change )
  2. Object reservation
  3. Archiving
  4. DSO
  5. FTS Indexing
  6. Escalations
  7. Atrium Integrator
  8. Service Failover
  9. Server event recording
  10. SLM Collector
  11. Approval Server
  12. Reconciliation Engine
  13. Atrium Integration Engine
  14. CMDB
  15. Flashboards
  16. Business Rules Engine
  17. Assignment Engine
  18. E-Mail Engine
  19. The server will disable signals, just like AR_SERVER_INFO_DISABLE_ARSIGNALS does.
  20. The server only uses AR authentication, just like setting AR_SERVER_INFO_AUTH_CHAINING_MODE to AR_AUTH_CHAINING_MODE_DEFAULT does.
  21. The server turns off the global attachment size restriction, just like setting AR_SERVER_INFO_DB_MAX_ATTACH_SIZE to 0 does. (A customer may have set the maximum attachment config value to restrict the size of attachments to something like 10 MB. The apps installers need to import some DVF plugins that may be larger than that, so we temporarily turn off the restriction when in upgrade mode.)
  22. New in 9.1.03 - The server removes any server side limit on the maximum number of entries returned in a query, just like setting AR_SERVER_INFO_MAX_ENTRIES (configuration file item Max-Entries-Per-Query) to 0 does.
  23. New in 9.1.03 - The server removes any server side limit on the maximum number of menu items returned in a query, just like setting configuration file item Max-Entries-Per-Search-Menu to 0 does
  24. New in 9.1.03 - The server removes any attachment security validation, just like if there were no inclusion or exclusion lists specified and no validation plug-in specified.


Does the parameter get changed when the server goes into Operating Mode?


No. AR Server internally takes care of disabling the parameter. For example, if upgrade mode wants to disable escalations, it does not set Disable-Escalation: T / F in ar.conf; AR Server turns the parameter off internally.


What modification are done to AR System Server Group Operation Ranking form ?


If the server is in a server group, the server will change entries in the "AR System Server Group Operation Ranking" form to change the servers' behaviour.

For example, suppose there are 2 servers (a primary server and a secondary server).


The table below explains what happens when the primary server is set into upgrade mode and then reset to normal mode.





Server            Operation                   Ranking   When Upgrade is Set   When Upgrade mode is reset
Primary Server    Administration              1         1                     1
Primary Server    Assignment Engine           1         null                  1
Primary Server    Atrium Integration Engine   1         null                  1
Primary Server    Atrium Integrator           1         null                  1
Primary Server    Business Rule Engine        1         null                  1
Primary Server    Approval Server             1         null                  1
Secondary Server  Administration              2         null                  2
Secondary Server  Assignment Engine           2         2                     2
Secondary Server  Atrium Integration Engine   2         2                     2
Secondary Server  Atrium Integrator           2         2                     2
Secondary Server  Business Rule Engine        2         2                     2
Secondary Server  Approval Server             2         2                     2


So from the above table, the primary server's Administration ranking is not set to null (or empty), while the non-primary servers' Administration rankings are set to null (or empty). Doing this retains administration rights with the primary server which is being upgraded, so administration rights do not fail over to a non-primary server (which was earlier ranked 2). This helps during ZDT upgrades, where secondary servers are up and running.


Where does operating mode take the backup of the ranking information?


In 9.1.02 & 9.1.03, the backup of all ranking information was stored in ar.conf.

From 9.1.04 onwards, a new field called "Operation backup" on AR System Server Group Operation Ranking holds the backup.


What changes were made to reset operating mode in the 91SP4 upgrade installer?


  • Pre 9.1.04 (i.e. 9.1.02 and 9.1.03) :-  Individual installers like AR System used to set the upgrade mode before the start of the installation and reset it once the installation was done.
  • In 9.1.04 :- The AR upgrade installer sets the server in upgrade mode, and before ending installation the AR installer checks if the CMDB component exists. If CMDB doesn't exist, the AR installer itself sends an SSI call to reset Operating-Mode to 0. If there is a CMDB, the AR installer doesn't reset the Operating-Mode.
  • In 9.1.04 :- CMDB upgrade installer - before ending installation, the CMDB installer checks if the AI component exists. If AI doesn't exist, the CMDB installer itself sends an SSI call to reset Operating-Mode to 0. If there is AI, the CMDB installer doesn't reset the Operating-Mode.
  • The AI upgrade installer always resets Operating-Mode to 0 on completion
  • The apps installers do it individually

Does the installer reset operating mode in case the installation fails?


Yes. Whether the installation succeeds or fails, the installer resets the operating mode. But there are cases where the installer might not reset it; in that case we have to reset it back to normal mode using SSI.


Hope this helps you understand a few things related to upgrade mode.
