
BMC Support or R&D might have asked for a "slow" SQL's Execution Plan, and it is possible that the SQL has aged out of Oracle's Library Cache or MSSQL's Procedure Cache, in which case no Execution Plan is available.


That would then necessitate generating an Estimated Execution Plan ("Estimated Plan" in MSSQL's SQL Management Studio or "Explain Plan" in Oracle) on the SQL extracted from the AR log file.


The SQL does need to be complete and not truncated in order for an Estimated Plan to be generated.


To get the complete SQL as it was sent to the database, put the following AR parameter in your ar.cfg/ar.conf file(s) and restart the AR Server(s).


Enable-Unlimited-Log-Line-Length: T


See the Remedy documentation for more details on this parameter.


Some of you already know this so it may be old news for you! :-)


Remedy 9.x/18.x is written in Java. One of the features introduced in these versions avoids sending literal values in Remedy SQL statements, as was the case in 8.x and prior versions.


Instead, the statement is prepared in JDBC with "bind" variables and then sent to the database (this relieves the Oracle database of the burden of replacing literals with binds).
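Remedy does this through JDBC prepared statements; as a rough illustration of the literal-vs-bind difference in Python (using the stdlib sqlite3 module rather than Oracle/JDBC, and a made-up table), a bind-variable query looks like this:

```python
import sqlite3

# Toy illustration (sqlite3, not Oracle/JDBC): with binds, the statement
# text stays constant while values are passed separately, so the database
# can reuse one parsed statement for many executions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO ticket VALUES (?, ?)",
                 [(1, "Open"), (2, "Closed"), (3, "Open")])

# Literal SQL: a new statement text for every value (the 8.x-style approach)
literal = conn.execute(
    "SELECT COUNT(*) FROM ticket WHERE status = 'Open'").fetchone()

# Bind variable: same statement text, value supplied at execute time
bound = conn.execute(
    "SELECT COUNT(*) FROM ticket WHERE status = ?", ("Open",)).fetchone()

assert literal == bound == (2,)
```

Both queries return the same rows; the difference is only in how many distinct statement texts the database has to parse.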


One drawback to the above fix is addressed in Oracle Doc Id 1451804.1.


The drawback/problem arises if you are using the database (Oracle Standard/Enterprise or higher) in Case Insensitive mode, with appropriate Linguistic Indexes in place, and Remedy ends up generating SQL statements with LIKE conditions in them.


The Oracle document states that the LIKE condition must be <COLUMN LIKE 'CONSTANT%'> and NOT <COLUMN LIKE :BIND>; the bind form will cause the optimizer to NOT use a Linguistic Index on the column COLUMN.



A query using LIKE 'CONSTANT%' with nls_comp=LINGUISTIC against a column with a linguistic index defined on it produces an efficient plan that uses a range scan on the index. But if a bind variable with the same value is used instead (LIKE :BIND, where :BIND = 'CONSTANT%'), the query plan will not use a range scan, resulting in poor performance. Hinting, if tried, does not help.



This is a known limitation of using LIKE with NLSSORT-based indexes.


Using an NLSSORT-based index with LIKE 'CONSTANT%' requires transforming the predicate into a range predicate based upon the constant matching string. For instance, col LIKE 'CONSTANT%' when using LINGUISTIC and BINARY_CI is transformed into NLSSORT("COL",'nls_sort=''BINARY_CI''') >= HEXTORAW('636F6E7374616E7400') AND NLSSORT("COL",'nls_sort=''BINARY_CI''') < HEXTORAW('636F6E7374616E7500'). The values passed to HEXTORAW are derived from the string 'CONSTANT'.
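You can sanity-check where those HEXTORAW values come from. A quick Python sketch (my own reconstruction of the derivation, not Oracle code): fold the prefix to lower case for BINARY_CI, append a NUL byte for the inclusive lower bound, and increment the last byte for the exclusive upper bound:

```python
import binascii

def nlssort_like_range(pattern_prefix):
    """Toy reconstruction of the range Oracle derives for
    col LIKE '<prefix>%' under nls_sort=BINARY_CI."""
    b = pattern_prefix.lower().encode("ascii")
    lower = b + b"\x00"                            # 'constant' + NUL
    upper = b[:-1] + bytes([b[-1] + 1]) + b"\x00"  # 'constanu' + NUL
    return (binascii.hexlify(lower).decode().upper(),
            binascii.hexlify(upper).decode().upper())

lo, hi = nlssort_like_range("CONSTANT")
assert lo == "636F6E7374616E7400"
assert hi == "636F6E7374616E7500"
```

The two hex strings match the HEXTORAW arguments quoted from the Oracle document, which is why the transformation only works when the pattern is a known constant at parse time.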


This transformation is performed during query parse and requires that the matching pattern be a known constant.  NLSSORT-based functional indexes cannot be used with a LIKE predicate when the matching pattern is based on a bind variable or expression.


When cursor_sharing=FORCE (or SIMILAR), all constants are replaced by bind variables before parse, preventing any NLSSORT-based functional indexes being used with LIKE.


The above issue is not likely to be fixed by Oracle any time soon.


As a result of the above problem BMC has introduced the following check in its code:

==> Is the database Oracle and is it Case Insensitive?


If the above is TRUE, the SQL is sent to the database AS IS (no replacing with binds in JDBC), and the Oracle database, where cursor_sharing = EXACT is necessary, processes the SQL and comes up with an optimal execution plan.


CONCLUSION: cursor_sharing should be set to EXACT (if running Remedy in Case Insensitive mode) to use the above BMC fix and have the database pick up proper Execution Plans.




So what does the Oracle database do when a SQL statement comes in to be processed?


A SQL is HARD PARSED the first time it comes into the database:


  • SYNTAX CHECK: Is the SQL grammatically correct?
  • SEMANTIC CHECK: Do the tables and columns exist? Are privileges in place? Are there any ambiguities (e.g. the same column name in two tables without specifying which one the SQL wants)?
  • SHARED POOL CHECK: Oracle now checks whether the SQL statement is already in memory. If so, the next steps are skipped.
  • COST ESTIMATION: The cost of candidate plans is estimated, and the plan with the lowest cost is chosen.
  • ROW SOURCE GENERATION: The Row Source Generator receives the optimal plan from the Optimizer and generates an Execution Plan in a format the SQL engine can use.


A SOFT PARSE is where the last two steps are skipped because the SQL is found in memory.



Soft parses, though less expensive than hard parses, still incur a cost: they require the Shared Pool and Library Cache latches (these are serial operations), which can lead to performance issues in OLTP systems.

To minimize this impact, session cursors of reused (used more than 3 times) SQL can be stored in the Session Cursor Cache (UGA/PGA).

What is actually stored is a pointer to the location in the Library Cache where the cursor existed when it was closed.

The presence of a cursor in the Session Cursor Cache guarantees the validity of the SQL's syntax and semantics, so the first two steps of parsing are skipped.

Instead of searching for the cursor in the Library Cache, the server process follows the pointer and uses the cursor (if it is present and still valid).

You can enable this feature by setting a value for session_cached_cursors (BMC recommends 100).
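As a toy model (my own sketch, not Oracle internals), the hard/soft/cached distinction looks something like this: a shared library cache avoids re-optimizing, and a per-session cursor cache additionally avoids the latch-protected lookup once a statement has been reused enough times.

```python
# Toy model of hard parse vs soft parse vs session cursor cache.
library_cache = {}          # shared: SQL text -> "plan"
session_cursor_cache = {}   # per-session: SQL text -> cached cursor pointer
usage_count = {}

def parse(sql):
    if sql in session_cursor_cache:
        return "cached"     # syntax/semantic checks and cache search skipped
    if sql in library_cache:
        kind = "soft"       # cost estimation + row source generation skipped
    else:
        library_cache[sql] = "plan(" + sql + ")"
        kind = "hard"       # full parse: syntax, semantics, cost, row source
    usage_count[sql] = usage_count.get(sql, 0) + 1
    if usage_count[sql] > 3:            # reused SQL gets a session-cache entry
        session_cursor_cache[sql] = library_cache[sql]
    return kind

results = [parse("SELECT 1") for _ in range(5)]
assert results == ["hard", "soft", "soft", "soft", "cached"]
```

The "> 3 uses" threshold mirrors the reuse condition described above; the real mechanism stores a pointer into the Library Cache, not the plan itself.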


Many moons ago, BMC deprecated the Windows ARUser client in favor of the web-based Mid-Tier client, and since that time there has been a scenario I have repeatedly run into that has no satisfactory solution based on out-of-the-box functionality:


There is an ARS table on the user's screen. The user needs the data from that table in a spreadsheet.

The built-in Report button that one gets on tables and on results lists yields a UI that... well, one of my users put it best:

"It looks like Radio Shack exploded on my screen!"

For instance, am I to turn loose a warehouse manager, who uses her computer maybe 5 or 6 times a week, to build her own reports against a form that has display-only fields with names that look similar to fields on the screen?! Don't get me wrong, Smart Reporting has a whole lot of useful features and is a huge improvement, but at the end of the day ...





HTML5 offers us a lot of interesting capabilities inside modern browsers that we didn't have before. In fact, the entire JavaScript ecosystem is experiencing something of a Cambrian explosion at the moment. One of the new capabilities of HTML5 is the File API, which allows us to programmatically construct documents in JavaScript and make them available for download to the user.


A few weeks ago, I once again found myself in that scenario I described at the top. I had a custom application with a big table field and a whole bunch of workflow that lets the user get the data they need on the screen, and now the user was just basically saying "ok, but now I need this in a spreadsheet, can't you just give me a download button?".


This got me thinking. "Good point! Why CAN'T I just do that?!"

All of the data on the screen that drives the ARS table already exists somewhere in a JavaScript data structure. Why can't I just snag that, reformat it into something Excel knows how to deal with, and draw a download link on the screen? So I got into a little mild hacking using the mighty, mighty Firefox Developer Edition JavaScript Console, and I figured out how to do that.
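The "reformat it" step is the easy part. Here is a sketch of that idea in Python rather than the browser-side JavaScript the actual hack uses (the row dicts and column names are made up for illustration):

```python
import csv
import io

def rows_to_csv(rows, columns):
    """Turn a list of row dicts (what you'd pull out of the table's
    client-side data structure) into CSV text ready for download."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = [
    {"Entry ID": "000000000000001", "Key": "Gallifrey", "Value": "home"},
    {"Entry ID": "000000000000002", "Key": "Skaro", "Value": "Daleks"},
]
text = rows_to_csv(rows, ["Entry ID", "Key", "Value"])
assert text.splitlines()[0] == "Entry ID,Key,Value"
```

In the browser version the same string is handed to the File API as a Blob and attached to a download link instead of being returned.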


What I'm about to show you is the cleaned up rightest way to do this hack that I can figure out. I'm somewhat of a greenhorn when it comes to javascript, so it's entirely possible there are better ways to do what I've done here. To that end, I've posted my code on github. If you see problems, please make a pull request there, and I'll merge it into the main branch.


The basic gist of it is this:

  1. copy some javascript functions into the view header of your form
    For the impatient: here ya go

  2. make an active link that calls one of those functions, using a set-fields action with a run process to put the output of the function into a tmp field
    It's gonna look something like this:

  3. make an active link populate a view field with a template, passing the output in the tmp field from step #2
    Here's my version of that template. You may want to inject your own flavor. I'll cover the options my version of the template takes further down. Basically you're just calling the TEMPLATE() function. it'll look something like this:
  4. There's now a dialog on the screen where the user can select the columns they want to export and download the freakin' spreadsheet!
    And that'll look something like this:


You May Ask Yourself, "How Do I Work This?!"


Probably the easiest way to get started is just to download this demo and start playing with it.


  1. using Dev Studio, import the def file: ARS_CSVTools_demo_v1_definitions.def
    this will create the following objects on your Remedy Server:
    1. regular form: ahicox:csv:demo:data
    2. display form: ahicox:csv demo:dialog
    3. display form: ahicox:csv demo:gui
    4. active link: ahicox:csv demo:exportAll
    5. active link: ahicox:csv demo:exportSelected

  2. Import Demo Data to ahicox:csv:demo:data
    use the Data Import Tool to import this file: ahicox_csv_demo_data.arx to the ahicox:csv:demo:data form

  3. install the export dialog template
    Create a record in AR System Resource Definitions like this one:

    so basically:
    1. Attach this file: csvExportDialogTemplate.html
    2. Make sure the Mime Type is "text/html"
    3. Make sure the Type is "Template"
    4. Make sure the Status is "Active"
    5. Make sure the Name is "ahicox:csv demo:template-v1"  <- important!

  4. Open the ahicox:csv demo:gui form through your mid-tier
    1. click the Refresh button on the table. This should load a list of all planets from the sprawling longest running (and greatest) sci-fi show of all time Doctor Who. To keep things simple, we just have three columns: Entry ID, Key, and Value. That doesn't really matter, actually. This will work with whatever columns you put on whatever table you want to export. Only caveat being that no two columns should have the same display name (otherwise the export will only show one of them, probably whatever the last one with that name in the column order was, but no promises on that).

    2. The Export All Rows to CSV button will scoop up every row on screen and ship it off to the export dialog
      This button is calling the _exportAllRows javascript function, and assigning the output to the zTmp_JSONData field. The _exportAllRows function takes one argument, which is the field id of the table you want to export. For instance, the fieldid of the table on my demo form is: 536870913, so the set-fields action calls:

      $PROCESS$ javascript: _exportAllRows(536870913);
  3. The Export Selected Rows to CSV button will scoop up only the selected rows in the table field and ship them off to the export dialog. This is pretty much the same thing as #2, except it's a different function name:

      $PROCESS$ javascript: _exportSelectedRows(536870913);

      An important note about these javascript functions: if you need to export a table that is on a form embedded in a view field (or embedded several times deep in a view field), you need to insert these functions on the view header of the root level form. So for instance if you wanted to be able to export tables buried on forms in the ITSM Modules, you'd want these functions in the view header of SHR:LandingConsole, rather than the individual child forms.

The Template

The HTML template is populated via the built-in TEMPLATE() function (as illustrated above). These are the arguments that the template takes:


  • jsonData
    this is the JSON datastructure returned from either the _exportAllRows() or _exportSelectedRows() functions
  • titleHeader
    this string is shown in the header of the dialog template, adjacent to the "Export as CSV Spreadsheet" button
  • defaultShowColumns
    this is an array in javascript notation, containing the columns you would like to be shown in the preview by default when the user opens the dialog (the user can select additional columns or deselect the defaults once the dialog is open). An example value would be:

     "['columnName1', 'columnName2']"

    NOTE however, if you're building that in work flow, the ticks will be a problem. It'll actually have to be constructed something like this:

     "[" + """" + "columnName1" + """" + ", " + """" + "columnName2" + """" + "]"

    column names are referenced by their DISPLAY NAME, not database name, in this context. All this does is control which of the columns have a checkbox by default when you open the dialog:


  • fileNameBase
    the template will attempt to construct a unique filename each time the user clicks the download button. It will do this by appending the epoch timestamp to the end of the file name at the time the user clicks the button. The fileNameBase argument allows you to give the file a meaningful name that appears before the unique timestamp. For instance

    example fileNameBase value: "DWP-Export"
    resulting file name:        "DWP-Export-1522077583.csv"
  • displayLimit
    By default, the dialog is going to show a preview of the first X rows in the export where X is defined by displayLimit. If the number of rows in the export is less than this number, we'll just show all rows. Otherwise we will show only this many with a user-friendly message explaining that.
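The unique-filename scheme described under fileNameBase is simple enough to sketch in a couple of lines (Python here for brevity; the template does the same thing in JavaScript):

```python
import time

def export_filename(base, now=None):
    """fileNameBase plus the epoch timestamp at click time,
    so repeated downloads don't collide."""
    ts = int(time.time()) if now is None else now
    return "{0}-{1}.csv".format(base, ts)

# With a fixed timestamp, matching the example above:
assert export_filename("DWP-Export", 1522077583) == "DWP-Export-1522077583.csv"
```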





  • Handle Table Chunking
    At present, it'll just scoop up rows visible on screen. For instance in the case of exporting selected rows, it should be possible to hang a hook off of whatever function is called to retrieve the next chunk from the server, export data from selected rows of the previous chunk and append that with additional selections.  Perhaps something also for export all that will programmatically retrieve each chunk from the server and export/append. Just needs a little hackkity hack.

  • CSV Import!
    This should also be possible, since the HTML5 File API allows us to access the contents of a file specified by the user without uploading it to the server. In theory, I should be able to create a similar dialog that shows the columns in your spreadsheet and the columns in your table, allows you to map them, then hijacks the PERFORM-ACTION-TABLE-ADD-ROW run-process command to add the rows to your table on-screen, so that you can set up your own table walk to push the rows where you want them to go.

    This would beat the living hell out of uploading the file as an attachment, staging it somewhere on the server, asynchronously forking off an AI job out of workflow to import/dispose of the file, and then having an active link loop run every couple seconds to check the status of the AI job. Which is the only other way I'm aware of to handle it right now. And god forbid if the file the user uploaded had the wrong column names or bad data! Good luck catching that exception!

  • Get BMC To Support this
    Look obviously this is unsupported.
    In order to figure this out, I had to pull apart lots of code I dug up off the mid-tier. This entire approach depends on the functions and datastructures in ClientCore.js being stable from version to version. There is no guarantee of that. Therefore BMC could break this at any moment without warning.

    My users like this feature, a whole lot more than the out-of-the-box reporting features. I'd like to be able to continue offering this feature without having to worry that every mid-tier patch we install will potentially break it. At the end of the day, that's actually not a lot to ask. BMC could simply make a function that does this and include it in ClientCore.js. It's pretty simple stuff. Heck. Maybe they could even give us a run-process command to export properly encoded JSON from a table into a field?!

    Anyhow, this is what I know for sure it works on: I've successfully tested this on ARS / Mid-Tier 201702131133, against these browsers:
    1. Firefox 57.7.2 (32-bit)
    2. Firefox 10.0b6 (64-bit)
    3. Internet Explorer 11.2125.14393.0
    4. Chrome 65.0.3325.181 (32-bit)
    5. Edge 38.14393.2068.0


  • This approach in general could do a LOT of things
    There is pretty much nothing Javascript can't do inside a browser these days. Literally. From real-time 3D rendering to actually spinning up VMs. It's been done, on the client side, in a browser.  So why am I wrestling with cumbersome and poorly implemented server-side processes for mundane stuff like this that I could do entirely in the client? Javascript was BUILT for consuming JSON webservices -- that's REST in a nutshell, and now we have a REST API. All we really need to do some seriously amazing stuff in ARS is a supported interface to ClientCore.js and a way to get an authentication token from an already logged in user so that I can pass it to the REST API without asking the user to log in again.

    And that's just scratching the surface. Who's up for building an ARUser replacement out of the REST API and Electron? I would be.

    ATTENTION BMC: publish a developer-facing, supported (and documented) javascript API for interacting with ARSystem within HTML templates! Let a hundred flowers blossom. We're out here selling your software for you day in and day out. It's the least you can do.



ALSO: for those not hip to the github, there's an attachment with a zip file of everything :-)


Quite often when you have an issue, the first thing asked of you is to capture logs and send them off. The problem is that these logs often contain sensitive information, things like user names and server IPs. Depending on the nature of your system and your organization, providing that information might not only be a bad idea, it might be against your company's InfoSec policy, or maybe even illegal. To help combat this issue I wrote this very simple Java program with a sample batch file.


At its heart, the program simply performs pre-defined 'find/replace' scenarios. The properties file contains two pre-defined find/replace scenarios.


1 - UserName - This will find the user: section of your log file and replace it with a generic 'UserName'

2 - IPv4 Address - This one looks for IPv4 addresses in a generic way, so that it finds ANY IP address and replaces it with IPV4Address


The program is RegEx aware, which means you can use complex find criteria. Read up on RegEx if you are interested in the finer details (Regular expression - Wikipedia).
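To make the idea concrete, here is a sketch of the two built-in scenarios in Python (the real utility is a Java program driven by a properties file; the exact patterns it ships with aren't shown here, so these are my own approximations):

```python
import re

# Approximate versions of the two pre-defined find/replace scenarios.
SCRUBBERS = [
    (re.compile(r"user:\s*\S+", re.IGNORECASE), "user: UserName"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "IPV4Address"),
]

def scrub(line):
    """Apply each find/replace scenario to one log line."""
    for pattern, replacement in SCRUBBERS:
        line = pattern.sub(replacement, line)
    return line

line = "Connect user: jsmith from 10.12.13.14 ok"
assert scrub(line) == "Connect user: UserName from IPV4Address ok"
```

Like the Java tool, this only transforms the text it is given; it makes no network calls, and you are still responsible for checking the output before posting it anywhere.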


If you are on Windows all you need to do is configure your properties file to find/replace whatever it is you want to find and replace it with, then drag/drop your log file onto the batch file.  The batch file will run the log through the program and spit out a copy of the log with the suffix .scrubbed.log appended on.


This utility does not make any network calls, it only reads the log file you provide it and gives you a scrubbed output.


This is an unofficial and unsupported tool, and comes with no warranties expressed or implied.  It is still your responsibility to ensure that sensitive information is removed from the scrubbed file before posting that log anywhere, but this should help you get things cleaned up with ease and speed.


D2P Overview

Posted by Rahul (Remedy) Shah Employee Mar 2, 2018


Dev to Prod (D2P) is a feature used to push content from a QA/Dev environment to a production environment. It was initially introduced in AR System 9.x and has been enhanced considerably over later releases, adding more features and becoming more stable.


Two primary components of D2P are

  1. D2P Console
  2. Single Point Deployment


The D2P Console is mainly for managing and deploying packages and has multiple capabilities; I will not talk much about it here, as it's an old feature.


Here is a quick look at what's new pre-9.1.03 and in 9.1.04.



Why D2P and its enhancement ?


Until now, deployment of a hotfix/patch was a totally manual process, and it was a big pain point to apply the hotfix/patch to each node/machine. If you look at a hotfix or patch, it is a bunch of binary files, definition files, data files (arx files), and some database changes, and those had to be applied manually on each node. D2P already had the capability to promote workflow definitions and data (arx files); what was really missing was deployment of binary files. In release 9.1.04 a new feature was introduced, known as "Single Point Deployment", which takes care of deploying binary files.


Post 9.1.04, all hotfixes and patches will be shipped as D2P packages. The main advantage is that you deploy only to one server and the package is deployed automatically on all the servers in the server group; progress can be easily monitored, and, another key feature, the deployment can be rolled back if needed.


The deployment process will look like this: download from EPD to a local machine, test on your test/dev environment, and then import on your production machine. It is an easy process, and you don't need physical access to these machines; it is all done from the D2P Console.



What is Single Point Deployment feature?


  1. What is Single Point Deployment ?

    Single Point Deployment helps you create and deploy a binary payload. A binary payload is a set of binary files, configuration files, and batch or shell scripts which can be executed. Not only BMC but anyone can create a payload for deployment.
  2. Which components support a binary payload?


         You can deploy a binary payload onto the following components.           

    1. BMC Remedy Action Request System Server (AR)
    2. BMC Remedy Mid Tier (MT)
    3. SmartIT


   3. What changes were made to AR Monitor (armonitor)?


          AR Monitor (armonitor.exe/sh) loads (starts) all the processes listed in armonitor.cfg. From 9.1.04 it is more intelligent: it knows which processes it has loaded and can start and stop individual processes.


  4. What is the file deployment process?


           Starting with 9.1.04, for all three components above (AR, MT & SmartIT), a separate process known as the "file deployer" runs. It knows how to stop and start processes; it can instruct armonitor to start or stop any particular process, for example the Java plugin process hosted on an AR Server.


  5. What are the main components of the file deployment process?


        Three components are important for any file deployment:

  • File Deployer  :-  A new process that runs under armonitor and does the actual work of deploying any new binary payload.
  • AR Monitor     :-  The File Deployer calls AR Monitor to stop and start the required processes.
  • AR Server      :-  Acts as a coordinator between multiple file deployers and controls the sequence in which deployment is done.


  Summary :- With all of the above, you have a new capability to deploy binary files. To deploy a binary file, you create a payload on an AR Server containing the binary to be deployed. This is how the flow goes:

  • AR Server instructs the file deployer that there is work to be done
  • The file deployer downloads the binary file from the server
  • The file deployer instructs AR Monitor to STOP the particular process defined in the payload (stopping is required because the binary file (jar) may be locked by the process)
  • The file deployer takes a backup of the existing binary
  • The file deployer copies the new binary
  • The file deployer STARTS the particular process it stopped
  • The file deployer checks whether that process started
  • The deployment is marked as done.
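The flow above can be sketched as a toy Python function (all names and data structures here are invented for illustration; the real file deployer is a separate process coordinated by the AR Server):

```python
# Toy sketch of the file-deployer flow: stop, back up, copy, restart, verify.
def deploy(payload, files, running):
    """files: target path -> current binary; running: set of process names."""
    binary = payload["new_binary"]          # downloaded from the AR Server
    proc = payload["process"]
    running.discard(proc)                   # stop the process locking the file
    backup = files.get(payload["target"])   # back up the existing binary
    files[payload["target"]] = binary       # copy the new binary into place
    running.add(proc)                       # restart the stopped process
    assert proc in running                  # verify it actually started
    return backup                           # kept so the deploy can roll back

files = {"arserver/lib/plugin.jar": "v1"}
running = {"java-plugin"}
backup = deploy({"new_binary": "v2", "process": "java-plugin",
                 "target": "arserver/lib/plugin.jar"}, files, running)
assert backup == "v1" and files["arserver/lib/plugin.jar"] == "v2"
```

Keeping the old binary as a return value mirrors the rollback capability mentioned earlier: the backup is what makes a failed deployment reversible.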


More blogs on Single Point Deployment will follow.


Operating Mode

Posted by Rahul (Remedy) Shah Employee Feb 28, 2018



Operating Mode is a feature of AR System introduced in the 9.1.02 release.


Why was Operating Mode needed ?


When upgrading the server, CMDB, or any AR application, background processes can slow down the upgrade, and operation ownership can switch because of server restarts during upgrades. BMC makes recommendations in documentation and white papers suggesting customers make configuration changes before running an upgrade, and some product installers perform configuration changes as well. Instead of relying on manual changes, or on each installer making changes itself, it is better for the installers to put the server into an upgrade mode that takes care of this. Each release, BMC can then add new capabilities to the upgrade mode rather than updating lots of documentation or installers.


Operating Mode drives two things:

  • It helped solve many BMC install/upgrade problems
  • It also helped enable a feature called Zero Downtime upgrade (ZDT).


Who consumes Operating Mode?


All BMC platform installers (AR Server, Atrium Core, Atrium Integrator) and application installers (ITSM, SRM, SLM) put the server into Operating Mode before the installer starts any activity, and put it back to normal mode once done. Internally, the installer calls SSI to set/reset the server's operating mode.


Some insights on server info


Setting name: Operating-Mode
Possible values (integer):

  • OPERATING_MODE_NORMAL, 0 - normal mode, default, how the server runs today
  • OPERATING_MODE_UPGRADE_PRIMARY, 1 - upgrade mode for the primary server
  • OPERATING_MODE_UPGRADE_SECONDARY, 2 - upgrade mode for secondary servers


By default the server runs in normal mode (value 0). When an install/upgrade is happening on the primary server, the operating mode is set to 1 (which disables a few things, covered below). When an install/upgrade is happening on non-primary servers, the operating mode is set to 2 (as of 9.1.02, 9.1.03, and 9.1.04 this does not disable any features, because non-primary server upgrades are all about replacing the file system).


What kinds of install/upgrade set the server in Operating Mode?

  • A fresh installation does not use operating mode
  • A DB-only (Accelerated) upgrade does not use operating mode
  • Only normal upgrades use this feature.


What are the features that get disabled when server is set in Operating mode ?


The following operations are disabled on the current server when Operating Mode is set:


  1. Hierarchical groups ( processing of bulk compute because of group hierarchy change or form level property change )
  2. Object reservation
  3. Archiving
  4. DSO
  5. FTS Indexing
  6. Escalations
  7. Atrium Integrator
  8. Service Failover
  9. Server event recording
  10. SLM Collector
  11. Approval Server
  12. Reconciliation Engine
  13. Atrium Integration Engine
  14. CMDB
  15. Flashboards
  16. Business Rules Engine
  17. Assignment Engine
  18. E-Mail Engine
  19. The server will disable signals, just like AR_SERVER_INFO_DISABLE_ARSIGNALS does.
  20. The server only uses AR authentication, just like setting AR_SERVER_INFO_AUTH_CHAINING_MODE to AR_AUTH_CHAINING_MODE_DEFAULT does.
  21. The server turns off the global attachment size restriction, just like setting AR_SERVER_INFO_DB_MAX_ATTACH_SIZE to 0 does. (A customer may have set the maximum attachment config value to restrict the size of attachments to something like 10 MB. The apps installers need to import some DVF plugins that may be larger than that, so we temporarily turn off the restriction when in upgrade mode.)
  22. New in 9.1.03 - The server removes any server side limit on the maximum number of entries returned in a query, just like setting AR_SERVER_INFO_MAX_ENTRIES (configuration file item Max-Entries-Per-Query) to 0 does.
  23. New in 9.1.03 - The server removes any server side limit on the maximum number of menu items returned in a query, just like setting configuration file item Max-Entries-Per-Search-Menu to 0 does
  24. New in 9.1.03 - The server removes any attachment security validation, just like if there were no inclusion or exclusion lists specified and no validation plug-in specified.


Do configuration parameters get changed when the server goes into Operating Mode?


No. AR Server internally takes care of disabling the feature. For example, if upgrade mode wants to disable escalations, it does not set Disable-Escalation: T/F in ar.conf; AR Server turns the feature off internally.


What modifications are made to the AR System Server Group Operation Ranking form?


If the server is in a server group, the server will change entries in the "AR System Server Group Operation Ranking" form to change each server's behaviour.

For example, consider two servers (a primary server and a secondary server).


The table below explains what happens when the primary server is set to upgrade mode and then reset to normal mode.





Server            Operation                  Normal   When Upgrade is set   When Upgrade mode is reset
Primary Server    Administration             1        1                     1
Primary Server    Assignment Engine          1        null                  1
Primary Server    Atrium Integration Engine  1        null                  1
Primary Server    Atrium Integrator          1        null                  1
Primary Server    Business Rule Engine       1        null                  1
Primary Server    Approval Server            1        null                  1
Secondary Server  Administration             2        null                  2
Secondary Server  Assignment Engine          2        2                     2
Secondary Server  Atrium Integration Engine  2        2                     2
Secondary Server  Atrium Integrator          2        2                     2
Secondary Server  Business Rule Engine       2        2                     2
Secondary Server  Approval Server            2        2                     2


So from the above table: the Primary Server Administration ranking is not set to null (or empty), while the non-primary servers' Administration rankings are set to null (or empty). This retains administration rights with the primary server being upgraded, so administration rights do not fail over to a non-primary server (which was previously ranked 2). This helps during ZDT upgrades, where secondary servers remain up and running.


Where does Operating Mode back up the ranking information?


In 9.1.02 & 9.1.03, the backup of all ranking information was stored in ar.conf.

From 9.1.04 onwards, a new field called "Operation backup" on AR System Server Group Operation Ranking holds the backup.


What changes were made to resetting operating mode in the 9.1.04 upgrade installers?


  • Pre 9.1.04 (i.e. 9.1.02 and 9.1.03): individual installers like AR System used to set the upgrade mode before the start of installation and reset it once installation was done.
  • In 9.1.04: the AR upgrade installer sets the server in upgrade mode. Before ending installation, the AR installer checks whether a CMDB component exists. If CMDB doesn't exist, the AR installer itself sends an SSI call to reset Operating-Mode to 0. If there is a CMDB, the AR installer doesn't reset Operating-Mode.
  • In 9.1.04: the CMDB upgrade installer, before ending installation, checks whether an AI component exists. If AI doesn't exist, the CMDB installer itself sends an SSI call to reset Operating-Mode to 0. If there is an AI, the CMDB installer doesn't reset Operating-Mode.
  • The AI upgrade installer always resets Operating-Mode to 0 on completion.
  • Apps installers do it individually.

Does the installer reset operating mode if the installation fails?


Yes. In all cases, whether the installation succeeds or fails, the installer resets the operating mode. There is, however, a chance that the installer fails to reset it; in that case you have to reset it back to normal mode using SSI.


Hope this helps explain a few things related to upgrade mode.


BMC is excited to announce general availability of new Remedy releases as part of our Fall 2017 release cycle:

  • Remedy 9.1.04  (incl. Remedy AR System, CMDB, ITSM applications, Smart Reporting, Remedy Single Sign-on)
  • Remedy with Smart IT 2.0.00
  • BMC Multi-Cloud Service Management 17.11


Here is an excerpt of the platform-specific improvements.


With Remedy platform version 9.1.04, BMC delivers a rich set of platform-related improvements that help Remedy on-premise customers reduce cost of operations and administration for their Remedy environment.


Significant improvements to the Zero Downtime Upgrade capability for the Remedy Platform

9.1.04 delivers significant improvements to the Zero Downtime Upgrade capability for the Remedy Platform: Several manual steps of the process have been automated. If, for some reason, the platform upgrade fails, the platform components and the file system are rolled back to the earlier version. All these enhancements allow customers to safely perform in-place upgrades of the Remedy platform without impact on the overall Remedy ITSM service. This recorded Connect with Remedy webinar session about Zero-Downtime Upgrades provides additional insight into the approach.


Efficient Patching/Hot-fix of Remedy with Deployment Manager

Starting with version 9.1.04, customers can now use the Remedy Deployment Application to easily deploy new Remedy platform patches and hotfixes into their Remedy environment, including new binaries. Remedy administrators no longer have to run patch installers on each server of a server group across multiple environments (Development, QA, and Production) to deploy new binaries. Platform patches are now delivered as deployable packages. When a Remedy administrator deploys such a package on a primary server in a server group, the changes and new binaries provided through the patch or hotfix are applied on all the secondary servers automatically.  Please note that there are also a number of other enhancements in the Remedy Deployment Application v9.1.04.


Centrally enable logging in a Remedy server group environment

Last but not least, Remedy 9.1.04 also makes it easier for Remedy administrators to centrally enable logging in a Remedy server group environment, reduces CPU resource usage on the mid-tier server by 50%, and informs users of the mid-tier UI about an upcoming session timeout.


Additional Utilities - Remedy Server Group Administration Console

In support of the new Remedy 9.1.04 release, the Remedy product team also released a number of value-add utilities to the BMC Communities. These are unsupported at this time, but BMC will evaluate, based on customer feedback, whether to include them in the standard product at a later time.


Some references to additional information about this release:



Also check this blog by Peter Adams for details of other enhancements as part of Remedy 9.1.04 release - Remedy Fall 2017 Release (9.1.04): BMC Continues to Innovate ITSM with New CMDB User Experience, New Cognitive Capabilities in Remedy and New Multi-Cloud Service Management


Thank you for your continued support of the Remedy family of products and we look forward to updating you on more innovative product enhancements in the coming months.


Enjoy the year end and have a great start into 2018.


Rahul Vedak

Remedy Product Manager


The Remedy product management team is looking forward to giving attendees of the T3: Service Management and Automation Conference an opportunity to join onsite customer advisory sessions about specific topics, where you can give direct input to the planning process for the Remedy platform and the ITSM applications.


As room capacity at the conference site is limited, we're trying to assess which topics are of greatest interest to our customers. We'll use this feedback to select which advisory sessions we'll organize at the event. If the time at the conference is not sufficient to come to a conclusion, we may continue the discussion after the conference with virtual sessions.


Please let us know which topics you are interested in providing feedback on by filling out a 2-minute survey at


Thanks, Peter


BMC is proud to be the flagship sponsor for the upcoming T3: Service Management & Automation Conference, taking place during the week of Nov 6, 2017 at the Palms Casino & Resort in Las Vegas. T3: Service Management & Automation Conference - November 06 - 10, 2017 - Las Vegas, Nevada


If you did not make it to BMC Exchange New York City, not to worry: T3 will cover all the DSM topics that were shared there, in addition to 140+ tech sessions including hands-on labs!


This year’s conference is being put on by T3 to provide an interactive, educational experience for attendees looking to gain mindshare and hands-on exposure to the latest best-of-breed technologies. This conference is focused on giving you an in-depth, technical view with valuable training to help you succeed in your role and accomplish your business needs!


As the Flagship Sponsor, BMC will have a strong showing at the conference with VIPs, engineers, support technicians, product managers, marketing/sales representatives, and more on site.

•   Come see what is new in Remedy, BMC Innovation Suite, BMC Digital Workplace, BMC Discovery, BMC Remedyforce, BMC Track-It, BMC Client Management, BMC FootPrints, TrueSight & more, including products from vendors such as Numerify, RRR, Mobile Reach, RightStar, Fusion, Partner IT, Scapa Technologies, CyberTrain & RMI Solutions.

•   Come learn more about your products, as well as the latest trends in tools, training and technology, in a variety of breakout sessions to include many hands-on labs.

•   Come listen to our awesome Keynote speakers at the opening and general ceremonies.


There are lots of opportunities to network with BMC and non-BMC personnel who focus on a variety of products, as well as spend an Evening with the Experts to talk through any questions you may have about the Remedy platform. In addition, there'll be lots of opportunities to talk to the Remedy product management team about the needs of your organization. See the separate blog post about customer advisory meetings. If you are interested in a 1:1 meeting with the product management team, please work with your BMC or partner sales contact to arrange that.



Register for the T3 Conference at:


We're collecting information about use of Crystal Reports with Remedy.


If your organization currently uses Crystal Reports, we'd like to ask you to fill out this very brief survey:

Crystal Reports and Remedy Survey


Thanks very much in advance,


Peter Adams


Just sharing one tip for AR System: a known solution to a known problem.

If you have an AR Server running on Windows and its service stops working after a Java update, you need to make changes (for the new JRE) in the locations mentioned below.


  • Update the Java/JVM path at the following registry location on the system:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <host name>\Parameters


  • Edit <AR Server Install Dir>\arserver.config and update the JVM search path (set it to the upgraded JRE version)


     # JVM search paths (number indicates search order)
     \Program Files\Java\jre1.8.0_141\bin


  • Edit <AR Server Install Dir>/Conf/armonitor.cfg and update all hardcoded java paths.
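To summarize, the three locations and the kind of value to update in each (the jre1.8.0_141 paths are examples and will differ per system):

```
Registry value:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <host name>\Parameters
arserver.config:  JVM search path entry, e.g. C:\Program Files\Java\jre1.8.0_141\bin
armonitor.cfg:    hardcoded java paths, e.g. "C:\Program Files\Java\jre1.8.0_141\bin\java.exe" ...
```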

In this third post on encryption we're going to show how to enable SSL between an AR System server and its Oracle database.  In previous posts we've seen how to use Oracle's native encryption and SSL with Microsoft SQL Server. The process we're going to follow is similar to the latter.


Again, the high level steps are:


  • obtain a certificate
  • configure the database to use the certificate
  • import the certificate on the client
  • configure the AR System server to use SSL


Oracle databases store their certificates in a set of files called a wallet so, unless you have an existing wallet, we need to create one.  As with most of these steps there are multiple ways to do this.  We could use the Wallet Manager GUI but we're going to stick to the command line and use the orapki utility:


Create a new wallet with the auto-login property set:


c:\app>orapki wallet create -wallet c:\app\db_wallet -auto_login


We're prompted to enter a password to secure the wallet and I've used password1.  The directory listing shows the files created in the db_wallet directory which will be created if it does not already exist.


We now have an empty wallet to which we need to add a certificate.  As this is a test we'll create a self-signed certificate and add it with one command:


c:\app>orapki wallet add -wallet c:\app\db_wallet -dn "cn=clm-pun-013056,cn=bmc,cn=com" -keysize 2048 -validity 365 -pwd password1 -self_signed

c:\app>orapki wallet display -wallet c:\app\db_wallet -pwd password1


We've used the host name for the -dn option, specified a key length of 2048 bits and validity of a year.  The second command lists the contents of the wallet so that we can confirm that our certificate has been added.


Note that both user and trusted certificates called CN=clm-pun-013056,CN=bmc,CN=com were created and it is the latter that we will export so that it can be used on the AR System server.


Export the certificate to a file called db_CA.cert:


c:\app>orapki wallet export -wallet c:\app\db_wallet -dn "cn=clm-pun-013056,cn=bmc,cn=com" -cert c:\app\db_wallet\db_CA.cert -pwd password1


We've prepared the certificate but we still need to configure Oracle to use it.  To do this we need to edit two files in the ORACLE_HOME\network\admin directory, sqlnet.ora and listener.ora, and add these lines to both of them:






    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = C:\app\db_wallet)
        )
      )

    SSL_CLIENT_AUTHENTICATION = FALSE





This specifies the location of the wallet and sets an option to show we're just using encryption, not authentication.


We also need to configure the listener to add a port that the database will use for SSL connections.  In the LISTENER section of the listener.ora file we add:


    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = clm-pun-013056)(PORT = 2484)))


Note the protocol is TCPS and we've picked port 2484 which is commonly used.


Finally we need to restart the listener process so that it picks up the changes:
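One common way to do this is with the lsnrctl utility (assuming the default listener name; a stop/start ensures the new TCPS endpoint is picked up):

```
c:\app>lsnrctl stop
c:\app>lsnrctl start
```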



That completes the database setup; the listener output shows we're ready to receive SSL connections on port 2484.


The next step is to copy the certificate that we exported earlier to the AR Server system and add it to the Java cacerts file so that the Oracle JDBC driver can use it.  These steps are similar to those we used for MS SQL.  The certificate file is called db_CA.cert and it has been copied to c:\temp.


Open a command prompt and cd to the jre\lib\security directory of the Java instance that the AR System server is using.  There should already be a cacerts file in this directory; this is the default certificate store used by Java, and we're going to add our certificate to it with the keytool command:


C:\Program Files\Java\jre1.8.0_121\lib\security>..\..\bin\keytool -importcert -file c:\temp\db_CA.cert -alias dbcert -storepass changeit -noprompt -keystore cacerts



We're almost done, all that is left is to configure the Remedy server to use SSL when connecting to the database.  A typical Remedy server configuration for an Oracle database includes these settings:


Db-Host-Name: clm-pun-013056

Db-Server-Port: 1521

Oracle-Service: orcl


On startup the server uses these to create a JDBC connection string using the format:
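The derived connect string follows the standard Oracle thin-driver format; with the example settings above it would look roughly like this (shown as an illustration, not copied from a live server):

```
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=clm-pun-013056)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orcl)))
```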




When using SSL the PROTOCOL setting needs to be changed from TCP to TCPS.  However, before 9.1 Service Pack 2, there was no way to modify this connection string.  That release introduced a new configuration option called Oracle-JDBC-URL which can be used to provide the full connect string.  If this option is present it is used instead of the string derived from the settings above.  To configure our Remedy server we need to add this option with the appropriate values.  So, the new setting in our ar.cfg/ar.conf will be:


Oracle-JDBC-URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=clm-pun-013056)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=orcl)))


The original settings can be left in place as they are ignored when the new option is set.  Switching between SSL and plain text connections is simply a case of commenting out this new option.


Restart the AR System server and we now have an encrypted connection between the server and the Oracle database.  To verify that this is the case we can use the tcpdump or Wireshark tools as detailed in the earlier posts.  Looking at the packets we'll see that the contents are all binary data and no plain text is present. 





We've now looked at three different ways to encrypt data as it is transferred between Remedy and the database.  In each case I've tried to cover the minimum steps required to enable this feature, each one offers many more configuration options, and you can find additional details in the links at the end of the articles.


I hope the information is useful and I welcome suggestions for other topics that would be of interest to the Remedy community.  Please use the comments section below or send me a message with ideas.


Further Reading


Trending in Support: Encrypting Data Between AR Servers and Oracle Databases

Trending in Support: Enabling SSL Encryption for AR to MS SQL Database Connections with Remedy 9.1 SP2 and Later


SSL With Oracle JDBC Thin Driver Advanced Security Configuration

HOW TO: Setting up Encrypted Communications Channels in Oracle Database


Feedback and corrections are always welcome in the comments section below.


Mark Walters


Read more like this -  BMC Remedy Support Blogs


Disable IPv6

Posted by Matthias Minden Jun 1, 2017

We currently don't use IPv6 but discovered that the application (Java) still tries to use IPv6 even when it is disabled at the OS level.  We edited the arconfig file on the server(s) by adding the following to the java entries:

     <your java path> -
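The screenshot with the exact options did not survive here. The standard JVM flags that match what is being described (telling Java to prefer the IPv4 stack) are, as a sketch:

```
-Djava.net.preferIPv4Stack=true
-Djava.net.preferIPv6Addresses=false
```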



You can also add these settings to the Developer Studio and Data Import Tool .ini files!


This post shows how to use a new configuration option, added in AR System Server 9.1 Service Pack 2, to enable encryption of the data moving between a Remedy server and its Microsoft SQL Server database.  In an earlier post (Trending in Support: Encrypting Data Between AR Servers and Oracle Databases ) we saw how to enable Oracle's native encryption for these connections but, this time, we're going to be using SSL.  Microsoft have documentation on their website that describes how the feature is implemented.


Using SSL Encryption | Microsoft Docs


There are several steps necessary to prepare the environment before encryption can be enabled.  At a high level these are:


  • obtain a certificate
  • grant SQL Server access to the certificate
  • configure SQL to use the certificate
  • import the public certificate to the Java instance used by the AR System server
  • enable encryption on the AR System server


If you're configuring a production environment that requires this additional level of security you have probably already obtained an SSL certificate from one of the available commercial certification authorities.  However, for our tests, we're going to use a simple, self-signed, certificate.  There are a number of different ways to generate these but, as we happen to have IIS installed on our SQL Server machine, we'll use that.  Simply start the IIS Manager, go to Server Certificates and right click Create Self-Signed Certificate:



Enter a name and choose a Personal certificate.


Now that we have a certificate we need to make it available to our SQL Server instance.  Start by finding the account name used to run SQL. One way to do this is via the SQL Server Configuration Manager, check the Properties for the selected instance:



Note the Account Name and then launch the MMC management console and add the Certificates snap-in for a Computer Account:



  • in MMC, go to Certificates (Local Computer) > Personal > Certificates

  • the certificate should be listed there (you may have to import it if you did not use IIS to create it)

  • right click > All Tasks > Manage Private Keys

  • add the service account for your instance of SQL Server

  • give the service account Read permissions


While we're here we also need to export the certificate so that it can be imported on the AR System server machine later:


  • right click on the certificate > All Tasks > Export > Next
  • choose No, do not export the private key > Next
  • choose DER encoded binary X.509 (.CER) > Next
  • enter a file name (e.g. export.cer) noting where it is saved


The final step on the SQL Server machine is to configure SQL to use the certificate with the SQL Server Configuration Manager again:



  • start SQL Server Configuration Manager
  • go to SQL Server Network configuration
  • select your instance
  • right click > Properties > Certificate tab
  • choose the certificate from the list
  • restart the SQL service


We're finished with the SQL Server machine; the rest of the work is done on the AR System server host.


Start by copying the exported certificate file (which we called export.cer) created above to the system.  Then, open a command prompt and cd to the jre\lib\security directory of the Java instance that you are using to run your AR System server.


There should already be a cacerts file in this directory; this is a default certificate store used by Java, and we're going to add our certificate to it with the keytool command.
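The command screenshot is missing at this point. Based on the alias (arkey) and store password (changeit) described just below, and the export.cer file created earlier, the import and verification commands would look something like this:

```
keytool -importcert -file c:\temp\export.cer -alias arkey -storepass changeit -noprompt -keystore cacerts
keytool -list -keystore cacerts -storepass changeit -alias arkey
```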



With the commands shown above we:


  • imported the certificate with an alias of arkey using the default store password of changeit
  • listed the certificate to verify it was imported


The final step is to enable the AR System server to use the certificate and encrypt traffic between itself and the database.  To do this we need to make use of a new configuration parameter, added in 9.1 Service Pack 2, called Db-Custom-Conn-Props.  This allows us to pass one or more key=value pairs to the database driver as a semicolon-separated list.  For example:


Db-Custom-Conn-Props: key1=value1;key2=value2


This option was added in 9.1 SP2 to provide a way for administrators to specify the additional configuration options required for the JDBC driver when enabling features such as encryption.  We'll make use of it again when we look at SSL for Oracle databases in a future post.


Before we move on let's confirm the current state of the data flowing to and from the database.  In the earlier Oracle post we used tcpdump to snoop on the network traffic.  We're going to do the same here but with the graphical Wireshark utility.  This next picture shows some of the data packets coming from the database and we can see that there is plain text legible in their contents:



The above is some of the data being returned when selecting the User form record for the sample user Allen.  The full name and email address are there, along with the start of the list of groups that Allen is a member of.


To enable encryption we need to stop the AR System service and add this line to our ar.cfg file:


Db-Custom-Conn-Props: encrypt=true


and then restart the service.  We could also have used the Centralised Configuration forms to add this to our server before restarting.


Now, when we look at the Wireshark captured data, we can immediately see a difference:



Note that the Info column is showing TLS traffic and the packet payload data is no longer in plain text - an encrypted connection!


We've deliberately glossed over some of the complexities that may be required in non-test environments such as:


  • using commercial SSL certificates
  • using alternative Java keystores
  • additional Db-Custom-Conn-Props options that may be needed for different SSL configurations, such as different keystore locations and passwords


but I hope that this shows that, with 9.1 Service Pack 2 and beyond, AR System server to database encryption is now supported when using Microsoft SQL databases.





Thanks to The Data Specialist blog post for details of configuring SQL Server with a self-signed certificate.

Using a self-signed SSL certificate with SQL Server | The Data Specialist


Further Reading


Using SSL Encryption | Microsoft Docs

Wireshark · Go Deep.



Feedback and corrections are always welcome in the comments section below and, if you have a suggestion for a technical post related to Remedy AR System, please drop me a message via the Communities.


Mark Walters


Read more like this -  BMC Remedy Support Blogs
