
Remedy AR System


D2P Overview

Posted by Rahul (Remedy) Shah Employee Mar 2, 2018


Dev to Prod (D2P) is a feature used to promote changes from a QA/Dev environment to a production environment. It was initially introduced in AR System 9.x and has been enhanced considerably over subsequent releases, adding more features and improving stability.


The two primary components of D2P are:

  1. D2P Console
  2. Single Point Deployment


The D2P Console is mainly used to manage and deploy packages. It has multiple capabilities, but we will not cover it in depth here, as it is an established feature.


Here is a quick look at what's new in releases "pre 9.1.03", "9.1.04" and "".



Why D2P and its enhancements?


Until now, deployment of a hotfix or patch was a completely manual process, and applying it to each node/machine was a major pain point. A hotfix or patch is a bunch of binary files, definition files, data files (arx files) and some database changes, all of which had to be applied manually on each node. D2P already had the capability to promote workflow definitions and data (arx files); what was really missing was deployment of binary files. Release 9.1.04 introduced a new feature known as "Single Point Deployment" which takes care of deploying binary files.


From 9.1.04 onwards, all hotfixes and patches will be shipped as D2P packages. The main advantage is that you deploy to only one server and the package is deployed automatically on all the servers in the server group. The progress can be easily monitored and, as another key feature, the deployment can be rolled back if needed.


The deployment process will look like this: download the package from EPD to a local machine, test it in your test/dev environment, and then import it on your production server. It is an easy process and you don't need physical access to these machines; it is all done from the D2P Console.



What is the Single Point Deployment feature?


  1. What is Single Point Deployment ?

    Single Point Deployment helps you create and deploy a binary payload. A binary payload is a set of binary files, configuration files, and batch or shell scripts that can be executed. Payload creation is not limited to BMC; anyone can create a payload for deployment.
  2. Which components support a binary payload?


         You can deploy a binary payload onto the following components:

    1. BMC Remedy Action Request System Server (AR)
    2. BMC Remedy Mid Tier (MT)
    3. SmartIT


   3. What changes were made to AR Monitor (armonitor)?


         AR Monitor (armonitor.exe/armonitor.sh) loads (starts) all the processes listed in armonitor.cfg. From 9.1.04 it is more intelligent: it knows which processes it has loaded and can start and stop individual processes.


  4. What is the file deployment process?


          Starting with 9.1.04, each of the three components above (AR, MT & SmartIT) runs a separate process known as the "file deployer", which knows how to stop and start processes. Essentially, it can instruct armonitor to start or stop any particular process; for example, it can instruct it to stop/start the "java plugin process" hosted on an AR Server.


  5. What are the main components of the file deployment process?


        Three components are important for any file deployment:

  • File Deployer  :-  A new process that runs under armonitor and does the actual work of deploying any new binary payload.
  • AR Monitor     :-  The file deployer calls AR Monitor to stop and start the required processes.
  • AR Server      :-  Acts as a coordinator between multiple file deployers and controls the sequence in which deployment happens.


  Summary :- With all of the above, you have a new capability to deploy binary files. To deploy a binary file, you create a payload on an AR Server containing the binary to be deployed. The flow is as follows:

  • AR Server instructs the file deployer that there is work to be done
  • The file deployer downloads the binary file from the server
  • The file deployer instructs AR Monitor to STOP the particular process defined in the payload (stopping is required because the binary file (jar) might be locked by that process)
  • The file deployer takes a backup of the existing binary
  • The file deployer copies the new binary into place
  • The file deployer STARTS the particular process it stopped
  • The file deployer checks whether that process started
  • The deployment is marked as done.
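The flow above can be sketched as a toy shell walk-through. Everything here is illustrative (the paths, file names and process names are invented stand-ins); the real file deployer is driven by AR Server and armonitor:

```shell
# Toy simulation of the file deployer steps for one binary payload.
set -e
target=$(mktemp -d)   # stands in for e.g. a plugin directory on the server
backup=$(mktemp -d)   # stands in for the deployer's backup location

echo "old-jar-v1" > "$target/plugin.jar"      # the binary currently deployed
echo "STOP java plugin process"               # armonitor stops the owning process
cp "$target/plugin.jar" "$backup/plugin.jar"  # back up the existing binary
echo "new-jar-v2" > "$target/plugin.jar"      # copy the new binary into place
echo "START java plugin process"              # armonitor restarts the process
echo "deployment done"
```

Because the old binary is kept in the backup location, the deployment can be rolled back later if needed.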


A few more blogs on Single Point Deployment will follow.


Operating Mode

Posted by Rahul (Remedy) Shah Employee Feb 28, 2018



Operating Mode is a feature of AR System introduced in the 9.1.02 release.


Why was Operating Mode needed?


When upgrading the server, CMDB, or any AR applications, background processes can slow down the upgrade, and server restarts during upgrades cause operation ownership to switch. BMC made recommendations in documentation and/or white papers suggesting that customers make certain configuration changes before running an upgrade, and some of the product installers performed configuration changes as well. Instead of relying on manual changes, or on each installer making changes itself, it is better for the installers to put the server into an upgrade mode that takes care of this. With every release, BMC can then add new capabilities to the upgrade mode rather than updating lots of documentation or installers.


Operating mode drives two things:

  • It solved many of the BMC install/upgrade problems
  • It also enabled a feature called Zero Downtime upgrade (ZDT).


Who consumes Operating Mode?


All BMC platform installers (AR Server, Atrium Core, Atrium Integrator) and application installers (ITSM, SRM, SLM) put the server into operating mode before the installer starts any activity and put it back into normal mode once it is done. Internally, the installer calls SSI (Set Server Info) to set/reset the server's operating mode.


Some insights on server info


Setting name: Operating-Mode
Possible values (integer):

  • OPERATING_MODE_NORMAL, 0 - normal mode, default, how the server runs today
  • OPERATING_MODE_UPGRADE_PRIMARY, 1 - upgrade mode for the primary server
  • OPERATING_MODE_UPGRADE_SECONDARY, 2 - upgrade mode for secondary servers


By default the server runs in normal mode and the value is 0. When an install/upgrade is happening on the primary server, the operating mode is set to 1 (this disables a few things, which we will discuss later). When the install/upgrade is happening on non-primary server(s), the operating mode is set to 2 (as of 9.1.02, 9.1.03 and 9.1.04 this does not disable any features, because non-primary server upgrades are all about replacing the file system).


What kind of install/upgrade sets the server in Operating mode?

  • A fresh installation does not use operating mode
  • A DB-only (accelerated) upgrade does not use operating mode
  • Only normal upgrades use this feature.


What features get disabled when the server is set in Operating mode?


The following operations are disabled on the current server when operating mode is set.


  1. Hierarchical groups (processing of bulk computes due to a group hierarchy change or a form-level property change)
  2. Object reservation
  3. Archiving
  4. DSO
  5. FTS Indexing
  6. Escalations
  7. Atrium Integrator
  8. Service Failover
  9. Server event recording
  10. SLM Collector
  11. Approval Server
  12. Reconciliation Engine
  13. Atrium Integration Engine
  14. CMDB
  15. Flashboards
  16. Business Rules Engine
  17. Assignment Engine
  18. E-Mail Engine
  19. The server will disable signals, just like AR_SERVER_INFO_DISABLE_ARSIGNALS does.
  20. The server only uses AR authentication, just like setting AR_SERVER_INFO_AUTH_CHAINING_MODE to AR_AUTH_CHAINING_MODE_DEFAULT does.
  21. The server turns off the global attachment size restriction, just like setting AR_SERVER_INFO_DB_MAX_ATTACH_SIZE to 0 does. (A customer may have set the maximum attachment config value to restrict the size of attachments to something like 10 MB. The apps installers need to import some DVF plugins that may be larger than that, so we temporarily turn off the restriction when in upgrade mode.)
  22. New in 9.1.03 - The server removes any server side limit on the maximum number of entries returned in a query, just like setting AR_SERVER_INFO_MAX_ENTRIES (configuration file item Max-Entries-Per-Query) to 0 does.
  23. New in 9.1.03 - The server removes any server side limit on the maximum number of menu items returned in a query, just like setting configuration file item Max-Entries-Per-Search-Menu to 0 does
  24. New in 9.1.03 - The server removes any attachment security validation, just like if there were no inclusion or exclusion lists specified and no validation plug-in specified.


Does the parameter get changed when the server goes into Operating mode?


No. AR Server takes care of disabling the parameters internally. For example, if upgrade mode wants to disable escalations, it does not set Disable-Escalation: T/F in ar.conf; the server turns the parameter off internally.


What modifications are made to the AR System Server Group Operation Ranking form?


If the server is in a server group, the server changes entries in the "AR System Server Group Operation Ranking" form to alter the servers' behaviour.

For example, suppose there are two servers (a primary server and a secondary server).


The table below explains what happens when the primary server is set to upgrade mode and then reset to normal mode.





Server      Operation                   Normal    When Upgrade is Set    When Upgrade mode is reset
Primary     Administration              1         1                      1
Primary     Assignment Engine           1         null                   1
Primary     Atrium Integration Engine   1         null                   1
Primary     Atrium Integrator           1         null                   1
Primary     Business Rule Engine        1         null                   1
Primary     Approval Server             1         null                   1
Secondary   Administration              2         null                   2
Secondary   Assignment Engine           2         2                      2
Secondary   Atrium Integration Engine   2         2                      2
Secondary   Atrium Integrator           2         2                      2
Secondary   Business Rule Engine        2         2                      2
Secondary   Approval Server             2         2                      2


As the table shows, the primary server's Administration ranking is not set to null (or empty), while the non-primary servers' Administration rankings are set to null (or empty). This retains administration rights with the primary server that is being upgraded, so administration rights do not fail over to a non-primary server (which was previously ranked 2). This helps during ZDT upgrades, where the secondary servers remain up and running.


Where does operating mode take the backup of the ranking information?


In 9.1.02 and 9.1.03, a backup of all ranking information was stored in ar.conf.

From 9.1.04 onwards, a new field called "Operation backup" on the AR System Server Group Operation Ranking form holds the backup.


What changes were made to resetting operating mode in the 9.1.04 upgrade installers?


  • Pre 9.1.04 (i.e. 9.1.02 and 9.1.03) :- Each individual installer, such as AR System, used to set upgrade mode before the start of the installation and reset it once the installation was done.
  • In 9.1.04 :- The AR upgrade installer sets the server in upgrade mode and, before ending the installation, checks whether a CMDB component exists. If CMDB doesn't exist, the AR installer itself sends an SSI call to reset Operating-Mode to 0. If CMDB is present, the AR installer doesn't reset Operating-Mode.
  • In 9.1.04 :- The CMDB upgrade installer, before ending the installation, checks whether an AI component exists. If AI doesn't exist, the CMDB installer itself sends an SSI call to reset Operating-Mode to 0. If AI is present, the CMDB installer doesn't reset Operating-Mode.
  • The AI upgrade installer always resets Operating-Mode to 0 on completion.
  • The application installers each reset it individually.

Does the installer reset operating mode if the installation fails?


Yes. In all cases, whether the installation succeeds or fails, the installer resets the operating mode. There is, however, a chance that the installer fails to reset it; in that case you have to reset it back to normal mode using SSI.


Hope this helps explain a few things related to upgrade mode.


BMC is excited to announce general availability of new Remedy releases as part of our Fall 2017 release cycle:

  • Remedy 9.1.04  (incl. Remedy AR System, CMDB, ITSM applications, Smart Reporting, Remedy Single Sign-on)
  • Remedy with Smart IT 2.0.00
  • BMC Multi-Cloud Service Management 17.11


Here is an excerpt of platform-specific improvements.


With Remedy platform version 9.1.04, BMC delivers a rich set of platform-related improvements that help Remedy on-premise customers reduce cost of operations and administration for their Remedy environment.


Significant improvements to the Zero Downtime Upgrade capability for the Remedy Platform

9.1.04 delivers significant improvements to the Zero Downtime Upgrade capability for the Remedy Platform: Several manual steps of the process have been automated. If, for some reason, the platform upgrade fails, the platform components and the file system are rolled back to the earlier version. All these enhancements allow customers to safely perform in-place upgrades of the Remedy platform without impact on the overall Remedy ITSM service. This recorded Connect with Remedy webinar session about Zero-Downtime Upgrades provides additional insight into the approach.


Efficient Patching/Hot-fix of Remedy with Deployment Manager

Starting with version 9.1.04, customers can now use the Remedy Deployment Application to easily deploy new Remedy platform patches and hotfixes into their Remedy environment, including new binaries. Remedy administrators no longer have to run patch installers on each server of a server group across multiple environments (Development, QA, and Production) to deploy new binaries. Platform patches are now delivered as deployable packages. When a Remedy administrator deploys such a package on a primary server in a server group, the changes / new binaries provided through the patch or hotfix are applied on all the secondary servers automatically. Please note that there are also a number of other enhancements in the Remedy Deployment Application v9.1.04.


Centrally enable logging in a Remedy server group environment

Last but not least, Remedy 9.1.04 also makes it easier for Remedy administrators to centrally enable logging in a Remedy server group environment, reduces CPU resource usage on mid-tier servers by 50%, and informs users of the mid-tier UI about an upcoming session timeout.


Additional Utilities - Remedy Server Group Administration Console

In support of the new Remedy 9.1.04 release, the Remedy product team also released a number of value-add utilities to the BMC Communities. These are unsupported at this time, but BMC will evaluate, based on customer feedback, whether to include them in the standard product at a later time.


Some references to additional information about this release:



Also check this blog by Peter Adams for details of other enhancements as part of Remedy 9.1.04 release - Remedy Fall 2017 Release (9.1.04): BMC Continues to Innovate ITSM with New CMDB User Experience, New Cognitive Capabilities in Remedy and New Multi-Cloud Service Management


Thank you for your continued support of the Remedy family of products and we look forward to updating you on more innovative product enhancements in the coming months.


Enjoy the year end and have a great start into 2018.


Rahul Vedak

Remedy Product Manager


The Remedy product management team is looking forward to giving attendees of the T3: Service Management and Automation Conference an opportunity to join onsite customer advisory sessions about specific topics, where you can give direct input to the planning process for the Remedy platform and the ITSM applications.


As room capacity at the conference site is limited, we're trying to assess which topics are of biggest interest to our customers. We'll use this feedback to select which advisory sessions we'll organize at the event. If the time at the conference is not sufficient to come to a conclusion, we may continue the discussion after the conference with virtual sessions.


Please let us know which topics you are interested in providing feedback on by filling out a 2-min survey at


Thanks, Peter


BMC is proud to be the flagship sponsor for the upcoming T3: Service Management & Automation Conference, taking place during the week of Nov 6, 2017 at the Palms Casino & Resort in Las Vegas. T3: Service Management & Automation Conference - November 06 - 10, 2017 - Las Vegas, Nevada


If you did not make it to BMC Exchange New York City, not to worry: T3 will cover all the DSM topics that were shared there, in addition to 140+ tech sessions including hands-on labs!


This year’s conference is being put on by T3 to provide an interactive, educational experience for attendees looking to gain mindshare and hands-on views of the latest best-of-breed technologies. This conference is focused on giving you an in-depth, technical view with valuable training to help you succeed in your roles and accomplish your business needs!


As the Flagship Sponsor, BMC will have a strong showing at the conference with VIPs, engineers, support technicians, product managers, marketing/sales representatives, and more on site.

•   Come see what is new in Remedy, BMC Innovation Suite, BMC Digital Workplace, BMC Discovery, BMC Remedyforce, BMC Track-It, BMC Client Management, BMC FootPrints, TrueSight & more, including products from vendors such as Numerify, RRR, Mobile Reach, RightStar, Fusion, Partner IT, Scapa Technologies, CyberTrain & RMI Solutions.

•   Come learn more about your products, as well as the latest trends in tools, training and technology, in a variety of breakout sessions to include many hands-on labs.

•   Come listen to our awesome Keynote speakers at the opening and general ceremonies.


There are lots of opportunities to network with BMC and non-BMC personnel who focus on a variety of products, as well as spend an Evening with the Experts to talk about any questions you may have about the Remedy platform. In addition, there'll be lots of opportunities to talk to the Remedy product management team about the needs of your organization. See the separate blog post about customer advisory meetings. If you are interested in a 1:1 meeting with the product management team, please work with your BMC or partner sales contact to arrange that.



Register for the T3 Conference at:


We're collecting information about the use of Crystal Reports with Remedy.


If your organization currently uses Crystal Reports, we'd like to ask you to fill out this very brief survey:

Crystal Reports and Remedy Survey


Thanks very much in advance,


Peter Adams


Just sharing one tip for ARS - a known solution to a known problem.

If you have AR Server running on Windows and its service stops working after a Java update, you need to make changes (for the new JRE) at the locations mentioned below.


  • Update the java/JVM path at the following registry location on the given system:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <host name>\Parameters


  • Edit <AR Server Install Dir>\arserver.config and update the JVM search path (set the upgraded JRE version):


     # JVM search paths (number indicates search order)
     \Program Files\Java\jre1.8.0_141\bin


  • Edit <AR Server Install Dir>/Conf/armonitor.cfg and update all hardcoded Java paths.

In this third post on encryption we're going to show how to enable SSL between an AR System server and its Oracle database.  In previous posts we've seen how to use Oracle's native encryption and SSL with Microsoft SQL Server. The process we're going to follow is similar to the latter.


Again, the high level steps are:


  • obtain a certificate
  • configure the database to use the certificate
  • import the certificate on the client
  • configure the AR System server to use SSL


Oracle databases store their certificates in a set of files called a wallet so, unless you have an existing wallet, we need to create one.  As with most of these steps there are multiple ways to do this.  We could use the Wallet Manager GUI but we're going to stick to the command line and use the orapki utility:


Create a new wallet with the auto-login property set:


c:\app>orapki wallet create -wallet c:\app\db_wallet -auto_login


We're prompted to enter a password to secure the wallet and I've used password1.  The directory listing shows the files created in the db_wallet directory which will be created if it does not already exist.


We now have an empty wallet to which we need to add a certificate.  As this is a test we'll create a self-signed certificate and add it with one command:


c:\app>orapki wallet add -wallet c:\app\db_wallet -dn "cn=clm-pun-013056,cn=bmc,cn=com" -keysize 2048 -validity 365 -pwd password1 -self_signed

c:\app>orapki wallet display -wallet c:\app\db_wallet -pwd password1


We've used the host name for the -dn option, specified a key length of 2048 bits and validity of a year.  The second command lists the contents of the wallet so that we can confirm that our certificate has been added.


Note that both user and trusted certificates called CN=clm-pun-013056,CN=bmc,CN=com were created and it is the latter that we will export so that it can be used on the AR System server.


Export the certificate to a file called db_CA.cert:


c:\app>orapki wallet export -wallet c:\app\db_wallet -dn "cn=clm-pun-013056,cn=bmc,cn=com" -cert c:\app\db_wallet\db_CA.cert -pwd password1


We've prepared the certificate but we still need to configure Oracle to use it.  To do this we need to edit two files in the ORACLE_HOME\network\admin directory, sqlnet.ora and listener.ora, and add these lines to both of them:






WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = C:\app\db_wallet)
    )
  )

SSL_CLIENT_AUTHENTICATION = FALSE





This specifies the location of the wallet and sets an option to show we're just using encryption, not authentication.


We also need to configure the listener to add a port that the database will use for SSL connections.  In the LISTENER section of the listener.ora file we add:


    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = clm-pun-013056)(PORT = 2484)))


Note the protocol is TCPS and we've picked port 2484 which is commonly used.


Finally we need to restart the listener process so that it picks up the changes:
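A minimal sketch of the restart, assuming the default listener name (the commands are printed here as a dry run; on the database host you would run them directly):

```shell
# Oracle's listener control utility is used to restart the listener so it
# re-reads listener.ora and sqlnet.ora. Printed rather than executed here.
restart_cmds="lsnrctl stop
lsnrctl start"
echo "$restart_cmds"
```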



That completes the database setup; the listener output shows we're ready to receive SSL connections on port 2484.


The next step is to copy the certificate that we exported earlier to the AR Server system and add it to the Java cacerts file so that the Oracle JDBC driver can use it.  These steps are similar to those we used for MS SQL.  The certificate file is called db_CA.cert and it has been copied to c:\temp.


Open a command prompt and cd to the jre\lib\security directory of the Java instance that the AR System server is using.  There should already be a cacerts file in this directory; this is the default certificate store used by Java, and we're going to add our certificate to it with the keytool command:


C:\Program Files\Java\jre1.8.0_121\lib\security>..\..\bin\keytool -importcert -file c:\temp\db_CA.cert -alias dbcert -storepass changeit -noprompt -keystore cacerts



We're almost done; all that is left is to configure the Remedy server to use SSL when connecting to the database.  A typical Remedy server configuration for an Oracle database includes these settings:


Db-Host-Name: clm-pun-013056

Db-Server-Port: 1521

Oracle-Service: orcl


On startup the server uses these settings to create a JDBC connection string using the format:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=<Db-Host-Name>)(PORT=<Db-Server-Port>))(CONNECT_DATA=(SERVICE_NAME=<Oracle-Service>)))




When using SSL the PROTOCOL setting needs to be changed from TCP to TCPS.  However, before 9.1 Service Pack 2, there was no way to modify this connection string.  That release introduced a new configuration option called Oracle-JDBC-URL which can be used to provide the full connect string.  If this option is present it is used instead of the one derived from the settings above.  To configure our Remedy server we need to add this option with the appropriate values.  So, the new setting in our ar.cfg/ar.conf will be:


Oracle-JDBC-URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=clm-pun-013056)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=orcl)))


The original settings can be left in place as they are ignored when the new option is set.  Switching between SSL and plain text connections is simply a case of commenting out this new option.


Restart the AR System server and we now have an encrypted connection between the server and the Oracle database.  To verify that this is the case we can use the tcpdump or Wireshark tools as detailed in the earlier posts.  Looking at the packets we'll see that the contents are all binary data and no plain text is present. 
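For example, a capture along these lines (standard tcpdump options; 2484 is the TCPS port we configured, and the command is printed as a dry run since a live capture needs root access):

```shell
# -i any: all interfaces; -A: print payloads as ASCII; filter on the TCPS port.
ssl_port=2484
capture_cmd="tcpdump -i any -A port $ssl_port"
echo "$capture_cmd"
```

Before enabling SSL such a capture shows legible query text and form data; afterwards only binary TLS records appear.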





We've now looked at three different ways to encrypt data as it is transferred between Remedy and the database.  In each case I've tried to cover the minimum steps required to enable this feature, each one offers many more configuration options, and you can find additional details in the links at the end of the articles.


I hope the information is useful and I welcome suggestions for other topics that would be of interest to the Remedy community.  Please use the comments section below or send me a message with ideas.


Further Reading


Trending in Support: Encrypting Data Between AR Servers and Oracle Databases

Trending in Support: Enabling SSL Encryption for AR to MS SQL Database Connections with Remedy 9.1 SP2 and Later


SSL With Oracle JDBC Thin Driver Advanced Security Configuration

HOW TO: Setting up Encrypted Communications Channels in Oracle Database


Feedback and corrections are always welcome in the comments section below.


Mark Walters


Read more like this -  BMC Remedy Support Blogs

Matthias Minden

Disable IPv6

Posted by Matthias Minden Jun 1, 2017

We currently don't use IPv6, but discovered that the application (Java) still tries to use IPv6 even when it is disabled at the OS level.  We edited the arconfig file on the server(s) by adding the following to the java entries:

     <your java path> -



You can also add these settings to the Developer Studio and Data Import Tool .ini files!
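The settings in question are typically the standard java.net JVM system properties shown below (an assumption on my part, since the exact flags are not preserved above; your java paths will differ):

```shell
# Standard JVM system properties that make Java prefer IPv4 over IPv6.
# Appended to each java entry; "<your java path>" is a placeholder.
ipv4_opts="-Djava.net.preferIPv4Stack=true -Djava.net.preferIPv6Addresses=false"
echo "<your java path> $ipv4_opts ..."
```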


This post shows how to use a new configuration option, added in AR System Server 9.1 Service Pack 2, to enable encryption of the data moving between a Remedy server and its Microsoft SQL Server database.  In an earlier post (Trending in Support: Encrypting Data Between AR Servers and Oracle Databases ) we saw how to enable Oracle's native encryption for these connections but, this time, we're going to be using SSL.  Microsoft have documentation on their website that describes how the feature is implemented.


Using SSL Encryption | Microsoft Docs


There are several steps necessary to prepare the environment before encryption can be enabled.  At a high level these are:


  • obtain a certificate
  • grant SQL Server access to the certificate
  • configure SQL to use the certificate
  • import the public certificate to the Java instance used by the AR System server
  • enable encryption on the AR System server


If you're configuring a production environment that requires this additional level of security you have probably already obtained an SSL certificate from one of the available commercial certification authorities.  However, for our tests, we're going to use a simple, self-signed certificate.  There are a number of different ways to generate these but, as we happen to have IIS installed on our SQL Server machine, we'll use that.  Simply start the IIS Manager, go to Server Certificates and right-click Create Self-Signed Certificate:



Enter a name and choose a Personal certificate.


Now that we have a certificate we need to make it available to our SQL Server instance.  Start by finding the account name used to run SQL. One way to do this is via the SQL Server Configuration Manager, check the Properties for the selected instance:



Note the Account Name and then launch the MMC management console and add the Certificates snap-in for a Computer Account:



  • in MMC, go to Certificates (Local Computer) > Personal > Certificates

  • the certificate should be listed there (you may have to import it if you did not use IIS to create it)

  • right click > All Tasks > Manage Private Keys

  • add the service account for your instance of SQL Server

  • give the service account Read permissions


While we're here we also need to export the certificate so that it can be imported on the AR System server machine later:


  • right click on the certificate > All Tasks > Export > Next
  • choose No, do not export the private key > Next
  • choose DER encoded binary X.509 (.CER) > Next
  • enter a file name (e.g. export.cer) noting where it is saved


The final step on the SQL Server machine is to configure SQL to use the certificate with the SQL Server Configuration Manager again:



  • start SQL Server Configuration Manager
  • go to SQL Server Network configuration
  • select your instance
  • right click > Properties > Certificate tab
  • choose the certificate from the list
  • restart the SQL service


We're finished with the SQL Server machine, the rest of the work is done on the AR System server host.


Start by copying the exported certificate file (which we called export.cer) created above to the system.  Then, open a command prompt and cd to the jre\lib\security directory of the Java instance that you are using to run your AR System server.


There should already be a cacerts file in this directory; this is a default certificate store used by Java, and we're going to add our certificate to it with the keytool command, along these lines:

keytool -importcert -file c:\temp\export.cer -alias arkey -storepass changeit -noprompt -keystore cacerts

keytool -list -alias arkey -storepass changeit -keystore cacerts



With the commands shown above we:


  • imported the certificate with an alias of arkey using the default store password of changeit
  • listed the certificate to verify it was imported


The final step is to enable the AR System server to use the certificate and encrypt traffic between itself and the database.  To do this we make use of a new configuration parameter that was added in 9.1 Service Pack 2 called Db-Custom-Conn-Props.  This allows us to pass one or more key=value pairs to the database driver as a semicolon-separated list.  For example:


Db-Custom-Conn-Props: key1=value1;key2=value2


This option was added in 9.1 SP2 to provide a way for administrators to specify the additional configuration options required for the JDBC driver when enabling features such as encryption.  We'll make use of it again when we look at SSL for Oracle databases in a future post.


Before we move on let's confirm the current state of the data flowing to and from the database.  In the earlier Oracle post we used tcpdump to snoop on the network traffic.  We're going to do the same here but with the graphical Wireshark utility.  This next picture shows some of the data packets coming from the database and we can see that there is plain text legible in their contents:



The above is some of the data being returned when selecting the User form record for the sample user Allen.  The full name and email address are there, along with the start of the list of groups that Allen is a member of.


To enable encryption we need to stop the AR System service and add this line to our ar.cfg file:


Db-Custom-Conn-Props: encrypt=true


and then restart the service.  We could also have used the Centralised Configuration forms to add this to our server before restarting.


Now, when we look at the Wireshark captured data, we can immediately see a difference:



Note that the Info column is showing TLS traffic and the packet payload data is no longer in plain text - an encrypted connection!


We've deliberately glossed over some of the complexities that may be required in non-test environments such as:


  • using commercial SSL certificates
  • using alternative Java keystores
  • additional Db-Custom-Conn-Props options that may be needed for different SSL configurations, such as different keystore locations and passwords


but I hope that this shows that, with 9.1 Service Pack 2 and beyond, AR System server to database encryption is now supported when using Microsoft SQL databases.





Thanks to The Data Specialist blog post for details of configuring SQL Server with a self-signed certificate.

Using a self-signed SSL certificate with SQL Server | The Data Specialist


Further Reading


Using SSL Encryption | Microsoft Docs

Wireshark · Go Deep.



Feedback and corrections are always welcome in the comments section below and, if you have a suggestion for a technical post related to Remedy AR System, please drop me a message via the Communities.


Mark Walters


Read more like this -  BMC Remedy Support Blogs


Regular news coverage of data security breaches has made organisations increasingly aware of the importance of securing the data they own and manage.  As a result, one question that we're beginning to see more often in support is "How do I encrypt the data travelling between my AR server and database?".  The two databases supported by Remedy 9.x servers are Microsoft SQL and Oracle and both have options to provide this type of encryption.  This post covers one way to do this with Oracle; a later post will look at an alternative for this database and Microsoft SQL.


The architecture of a basic AR System installation looks something like this;



Data has multiple steps to take as it travels back and forth between clients and the storage medium used by the database server.  There are options available to encrypt that data during all of the steps but the one we’re focusing on in this post is highlighted in red on the diagram, the step between the AR System Server and an Oracle database.  Often these two servers are on separate machines so the data has to travel over a network and, by default, this transfer takes place in plain text.


To confirm that the data is being passed this way, and that after encryption has been enabled it is no longer in plain text, we're using a test environment with a version 9.1 AR System Server running on Linux connecting to an Oracle 11g database running on Windows 2012.  We will monitor the network traffic travelling between the AR and database servers to see what it looks like before and after the changes to turn on encryption.


Logging on to the Linux system we can use one of the many tools available to capture and display network traffic - in this case it’s tcpdump.  The command below will display the traffic flowing between the AR and Oracle servers in this environment.
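The command from the original screenshot is not preserved here, but a typical tcpdump invocation for this purpose might look like the following; the interface name and database hostname are assumptions, and 1521 is the default Oracle listener port:

```
tcpdump -i eth0 -A host oradbserver and port 1521
```

The -A flag prints each packet's payload in ASCII, which is what makes any plain text data visible in the output.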



To generate some traffic between the systems we use a User Tool client and start looking at records in the User form.  As different records are selected the tcpdump output shows the data being retrieved from the database.



As we can see there is information in the network traffic that can be read.  The screen shot above shows the data for the user Allen, including the full name, email address and a list of group IDs/groups.


If the tcpdump command is left running, other legible data will be seen - SQL statements, for example:



Oracle offers both native and SSL options for encrypting the data between a client and the database server.  Details are available in many places on the web; one such example is here - ORACLE-BASE - Native Network Encryption for Database Connections.


We’re going to use the native option as it does not require any changes to the client, simply some configuration settings on the database server.  The process for enabling this type of encryption is documented here - Configuring Network Data Encryption and Integrity for Oracle Servers and Clients.


One way to make the changes is to edit the Oracle sqlnet.ora file using a text editor but we’re going to use the Net Manager utility that is installed as part of the database software.


On the database server system launch the Net Manager tool and click on Profile in the tree window.  Select Oracle Advanced Security in the drop down menu and then the Encryption tab.



This is where the various encryption options are selected.  They are all covered in the link above and for this test we use these settings:


Encryption Type:          requested

Encryption Seed:          secretword

Selected Methods:         AES256




Select Save Network Configuration from the File menu and quit Net Manager.


We have now enabled encryption on the database server.  The options we have set request that encryption be enabled if the client supports it and we have specified a seed and algorithm to be used.
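For reference, saving these options with Net Manager writes the equivalent parameters to the server's sqlnet.ora file, so the file should end up containing something along these lines (the seed is the value chosen above):

```
SQLNET.ENCRYPTION_SERVER = requested
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_SEED = secretword
```

Editing sqlnet.ora directly with these lines, as mentioned earlier, should achieve the same result.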


If we now go back and repeat the tcpdump test above, what do we see?  When we select another user record, Bob's for example:



That doesn't look good - the data is still visible in plain text.  This is because the encryption configuration change is only picked up when a client first connects to the database, so a restart of the AR System Server is required.  Once this is done the test is repeated and the network traffic looks a little different;



The data is no longer in plain text – it is encrypted.  A positive step forward in an increasingly security conscious world!


I’m not sure how widely known this feature of Oracle is but, as we have shown, with a simple change on the database server and a restart of AR, it is possible to encrypt the traffic between these systems.  No changes are necessary on the AR System Server and this should work with any version of AR as it is a feature of the Oracle database and client software. 


In a future post I’ll look at how AR to database encryption can be enabled using SSL with both Oracle and Microsoft SQL.




Many thanks to Martin Rosenbauer for his feedback that led to this article.



Further Reading


A tcpdump Tutorial and Primer with Examples



Feedback and corrections are always welcome in the comments section below and, if you have a suggestion for a technical post related to Remedy AR System, please drop me a message via the Communities.


Mark Walters


Read more like this -  BMC Remedy Support Blogs


When I talk about the advantages of using the REST API, I usually talk about REST-based web services and compare this to SOAP-based web services. And there are a lot of advantages to using the REST API: there’s no need to define a web service since it’s always there, the interaction with the interface is a lot easier since the requests are a lot smaller, but most of all it’s intuitive. I often find that SOAP is a complex mechanism which can be challenging to use.


But that’s only a problem if you’re planning to handle the communication yourself. Say, you’re building an application in Java or need to get data to a different system that runs on PHP, in that case the REST-based web service is the obvious choice since all you’re doing is sending and receiving simple HTTP requests and responses.


So is there any need for SOAP at all? I'd argue there is. A SOAP-based web service uses the WSDL to describe in detail how everything works. That includes how the request should be formatted, what operations are on offer and what the response should look like. This means you know everything prior to starting your request. These are usually considered added complexities, but if you use an application that handles all of this for you, it doesn't really matter that much. And that's where I see the added value of SOAP-based web services: if you use an application that can deal with the information in the WSDL and interpret it for you, it's actually easy to use.


Doing the same thing with REST-based clients is a lot trickier. A REST client acts like a generic client and enters a REST service with little knowledge of the API beyond the entry point. An application might find it difficult to predict all those details that SOAP would store in the WSDL.


One such client is Remedy. When we consume a web service, we act as a SOAP client. During the design phase we read the WSDL file and allow the developer to simply map the fields to the XML elements. The system takes it from there. You don’t have to be concerned with creating SOAP requests, reading the WSDL file, etc.


I think SOAP-based web services work particularly well in a setting where the application can easily interpret the web service. If you have that available and there’s no need to do any coding, SOAP is probably a better choice. To learn more, attend my session, Session 233: Getting the Most out of Your Web Services Integration, at BMC Engage where we’ll be looking at SOAP-based services and REST-based web services.


Hope to see you there,




Don't forget to follow me on Twitter!


BMC Remedy AR System 9.1: Basic Development

For developers! *BMC Remedy AR System 9.1: Basic Development* training offered in July, August and September!!

Gain the knowledge and skills to take full advantage of all that Remedy AR System 9.1 has to offer.


Register now for one of the following instructor-led classes (includes hands-on lab and ebook):


If you have multiple students, contact us to discuss hosting a private class just for your organization.


Details here, or contact Tom Hogan for EMEA training and Brian Hall for AMER training.


Course Overview

This course combines classroom instruction with laboratory exercises to guide students through basic development using Developer Studio. They will leave the course with enough development experience to take a course on how to customize ITSM applications. The lab exercises contain scenarios that simulate real world requirements. By the end of the course, the student will have built deployable applications, forms, and workflow.


Course Objectives

» Create custom objects using Developer Studio

» Set object permissions
» Explore form definitions
» Create active links, filters, and escalations

» Create active link and filter guides
» Explore tables and workflow related to tables

» Understand how to deploy an application





I’ve written a fair bit about web services before. There’s an article explaining how to analyse problems when consuming SOAP web services and how to read the logs, and one of my more recent articles was about how to use the REST API. But I’m going to write a bit more about web services; it is, after all, one of my favourite subjects these days.


You see, I like SOAP. Apparently I’m a bit of an exception, but what I like about SOAP is the thoroughness. It might not always be the easiest to figure out, but if you know how to read it, the WSDL will tell you exactly what services are on offer, what your requests should look like and what you can expect back. As long as you stick to the rules, nothing can go wrong. What are my operations? Check the WSDL. What should the namespace look like? Check the WSDL. What does my SOAP response look like? Check the WSDL. See, you can’t go wrong.


But it’s the rules part that does tend to make it a bit overbearing. I frequently work on problems where there’s disagreement with regards to the exact interpretation of the WSDL. Minor things mostly, but that’s the big weakness of SOAP-based web services. If you don’t stick to the rules 100% all the time it’s not going to work.


Don’t believe me? Check this SOAP request:



It’s an external web service consumed from ARS; this is the error that’s returned:



It’s not a particularly helpful error but after a thorough review of the WSDL I realised the namespace for the attributes was wrong. The SOAP request should look like this:



I know that’s correct because the WSDL tells me exactly what the SOAP request should look like:



Notice the attribute element, which itself has two attributes (name and form). The form attribute has a value of unqualified. Here’s what this means:

  • Qualified: indicates that the attribute must be qualified with the namespace prefix and the no-colon name (NCName) of the attribute
  • Unqualified: indicates that the attribute is not required to be qualified with the namespace prefix and is matched against the no-colon name (NCName) of the attribute


So this would result in: <Customer id='value' /> or possibly: <ns0:Customer id='value' />


We could argue that "is not required" means that it’s optional, but the fact that they specifically set the form attribute to unqualified can be seen as an indication that the web service does not accept a namespace prefix on the attribute.
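The WSDL screenshot is not preserved here, but the schema declaration being described would look roughly like this (the element name and attribute type are assumptions based on the Customer example above):

```xml
<xs:element name="Customer">
  <xs:complexType>
    <!-- form="unqualified": the attribute is matched without a namespace prefix -->
    <xs:attribute name="id" form="unqualified" type="xs:string"/>
  </xs:complexType>
</xs:element>
```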


Frustrating isn’t it? That tiny detail gave me a lot of headaches and it prevented the whole integration from working. And you really have to get into the details of the WSDL to understand what’s going wrong. Hey I said I like SOAP, I don’t love it.


But I must confess, I absolutely love REST. Yes, I am a true believer in the principles of REST. I love the way it takes advantage of the strengths of the architecture of the web, I love its focus on simplicity, on readability.


I’m not going to go into too much detail about the principles of REST; what I want to look at is how the implementation of a REST-based web service differs from SOAP. REST-based web services are based on a more intuitive way to communicate with another machine. The result is a simpler interface with simpler requests and responses that are easier to read and understand. Consider the following SOAP request:


There’s quite a lot to it and, although we’re retrieving data, we’re still using the HTTP method POST. Compare that to a request to a REST-based web service which accomplishes the same thing:



So now we’re GETting data. The URL makes a lot more sense as well and compared to the SOAP request is a lot easier to read.


One of the principles of a REST-based web service is that you don’t require prior knowledge of the web service other than its base URL. BMC’s implementation isn’t truly RESTful in that sense, as you do need some information to get you started. You need to know how to log in and log out, and you need some degree of familiarity with the format of the requests. But other than that it’s really easy to use.
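As a sketch of what that starting information looks like in practice, here is a minimal Python example that builds (but does not send) the two requests you would typically need: the JWT login and a GET for a single entry. The server hostname, port and entry ID are assumptions; the endpoint paths follow the documented /api/jwt/login and /api/arsys/v1/entry patterns:

```python
import urllib.parse
import urllib.request

# Assumption: server hostname and port; adjust for your environment.
BASE = "https://remedy.example.com:8443/api"

def login_request(user: str, password: str) -> urllib.request.Request:
    """POST /api/jwt/login with form-encoded credentials.
    The response body of a successful login is a JWT token string."""
    body = urllib.parse.urlencode({"username": user, "password": password}).encode()
    return urllib.request.Request(BASE + "/jwt/login", data=body, method="POST")

def entry_request(token: str, form: str, entry_id: str) -> urllib.request.Request:
    """GET a single entry from a form, authenticated with the AR-JWT token."""
    return urllib.request.Request(
        f"{BASE}/arsys/v1/entry/{form}/{entry_id}",
        headers={"Authorization": "AR-JWT " + token},
    )

req = entry_request("sample-token", "User", "000000000000001")
print(req.full_url)
# → https://remedy.example.com:8443/api/arsys/v1/entry/User/000000000000001
print(req.get_header("Authorization"))
# → AR-JWT sample-token
```

Sending these with urllib (or any HTTP library) is all that is required - no WSDL, no envelope, no third-party SOAP stack.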


I think SOAP lends itself quite well for communication where two machines are already able to deal with SOAP’s complexities. ARS for example allows you to consume an external SOAP-based web service and it’s just a matter of mapping the elements to the various fields and integrating this in the workflow. You don’t really have to deal with the interpretation of the WSDL or the construction of SOAP requests. ARS does it for you, and unless something goes wrong the web service integration works perfectly.


But if you want to integrate with ARS using a programming language, REST is the better choice. For example, if you use Java and you’re interacting via SOAP there’s quite a lot to it. You can of course just hardcode the whole HTTP request, but if you want to do it properly you need to rely on an external library like Axis. There’s nothing wrong with that, but it does suddenly get quite complicated. Considering you only want to send small requests to check a status or to get a record, you need a lot of code to get this to work.


That’s where REST’s strengths come in, because using the REST API it suddenly gets a lot easier. Since we don’t have to follow the strict rules specified in a WSDL, and since the requests are a lot smaller, it’s actually quite easy to get things done. Other than the usual networking libraries I don’t need any third-party libraries or frameworks to interact with the system, and since the API is intuitive it’s a lot easier to construct my requests.


Want to know more? I'll be talking about this at Session 233: Getting the Most out of Your Web Services Integration at BMC Engage. We’ll have a detailed look at SOAP, REST and of course at how you’d actually implement all of this. Join me there to learn more!




Don't forget to follow me on Twitter!


Some customers are asking us whether AR Server / Mid-Tier is affected by the following Apache Struts vulnerability.


Apache Struts is not used by any version of AR Server / Mid-Tier.

Hence the Apache Struts vulnerability mentioned above or any other Apache Struts vulnerability does not affect AR Server / Mid-Tier.


--- Abhijit Rajwade

BMC Software
