
Regular news coverage of data security breaches has made organisations increasingly aware of the importance of securing the data they own and manage.  As a result, one question we're seeing more often in support is "How do I encrypt the data travelling between my AR System server and its database?".  The two databases supported by Remedy 9.x servers are Microsoft SQL Server and Oracle, and both offer options to provide this type of encryption.  This post covers one way to do this with Oracle; a later post will look at an alternative for Oracle and at Microsoft SQL Server.


The architecture of a basic AR System installation looks something like this:



Data passes through multiple steps as it travels back and forth between clients and the storage medium used by the database server.  Options are available to encrypt the data at each of these steps, but the one we’re focusing on in this post is highlighted in red on the diagram: the step between the AR System Server and an Oracle database.  These two servers are often on separate machines, so the data has to travel over a network and, by default, this transfer takes place in plain text.


To confirm that the data is being passed this way, and that it is no longer in plain text once encryption has been enabled, we’re using a test environment with a version 9.1 AR System Server running on Linux connecting to an Oracle 11g database running on Windows 2012.  We will monitor the network traffic travelling between the AR and database servers to see what it looks like before and after the changes to turn on encryption.


Logging on to the Linux system we can use one of the many tools available to capture and display network traffic - in this case it’s tcpdump.  The command below will display the traffic flowing between the AR and Oracle servers in this environment.
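The capture command itself appears in a screen shot, so here is a sketch of the kind of tcpdump invocation used. The interface name (eth0), the database host name (oradb) and the default Oracle listener port (1521) are assumptions - substitute the values for your own environment:

```shell
# Placeholder values -- adjust for your environment.
IFACE="eth0"
DB_HOST="oradb"
DB_PORT=1521
# -A prints each packet's payload as ASCII so plain-text data is readable;
# -nn disables host and port name resolution.
CMD="tcpdump -i $IFACE -nn -A host $DB_HOST and port $DB_PORT"
echo "$CMD"    # run this (as root) on the AR System server
```

Running the resulting command as root prints the payload of every packet exchanged with the database listener, which is what the screen shots below were taken from.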



To generate some traffic between the systems we use a User Tool client and start looking at records in the User form.  As different records are selected the tcpdump output shows the data being retrieved from the database.



As we can see, there is information in the network traffic that can be read.  The screen shot above shows the data for the user Allen, including the full name, email address and a list of group IDs/groups.


If the tcpdump command is left running, other legible data will be seen - SQL statements, for example:



Oracle offers both native and SSL options for encrypting the data between a client and the database server.  Details are available in many places on the web; one such example is here - ORACLE-BASE - Native Network Encryption for Database Connections.


We’re going to use the native option as it does not require any changes to the client, just some configuration settings on the database server.  The process for enabling this type of encryption is documented here - Configuring Network Data Encryption and Integrity for Oracle Servers and Clients.


One way to make the changes is to edit the Oracle sqlnet.ora file using a text editor but we’re going to use the Net Manager utility that is installed as part of the database software.


On the database server system, launch the Net Manager tool and click Profile in the tree window.  Select Oracle Advanced Security in the drop-down menu and then the Encryption tab.



This is where the various encryption options are selected.  They are all covered in the link above; for this test we use these settings:


Encryption Type:          requested

Encryption Seed:          secretword

Selected Methods:         AES256
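Net Manager simply writes these choices into the sqlnet.ora file mentioned earlier. If you prefer to edit the file directly, the equivalent server-side entries would look like the following (parameter names are taken from Oracle's network encryption documentation; the values mirror the test settings above):

```
SQLNET.ENCRYPTION_SERVER = requested
SQLNET.CRYPTO_SEED = secretword
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
```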




Select Save Network Configuration from the File menu and quit Net Manager.


We have now enabled encryption on the database server.  The options we have set request that encryption be enabled if the client supports it and we have specified a seed and algorithm to be used.


If we now go back and repeat the tcpdump test above, what do we see?  When we select another user record, Bob's for example:



That doesn't look good - the data is still visible in plain text.  This is because the encryption configuration change is only picked up when a client first connects to the database, so a restart of the AR System Server is required.  Once this is done the test is repeated and the network traffic looks a little different:



The data is no longer in plain text – it is encrypted.  A positive step forward in an increasingly security conscious world!


I’m not sure how widely known this feature of Oracle is but, as we have shown, with a simple change on the database server and a restart of AR, it is possible to encrypt the traffic between these systems.  No changes are necessary on the AR System Server and this should work with any version of AR as it is a feature of the Oracle database and client software. 


In a future post I’ll look at how AR to database encryption can be enabled using SSL with both Oracle and Microsoft SQL.




Many thanks to Martin Rosenbauer for his feedback that led to this article.



Further Reading


A tcpdump Tutorial and Primer with Examples



Feedback and corrections are always welcome in the comments section below and, if you have a suggestion for a technical post related to Remedy AR System, please drop me a message via the Communities.


Mark Walters


Read more like this -  BMC Remedy Support Blogs


When I talk about the advantages of using the REST API, I usually talk about REST-based web services and compare this to SOAP-based web services. And there are a lot of advantages to using the REST API: there’s no need to define a web service since it’s always there, the interaction with the interface is a lot easier since the requests are a lot smaller, but most of all it’s intuitive. I often find that SOAP is a complex mechanism which can be challenging to use.


But that’s only a problem if you’re planning to handle the communication yourself. Say, you’re building an application in Java or need to get data to a different system that runs on PHP, in that case the REST-based web service is the obvious choice since all you’re doing is sending and receiving simple HTTP requests and responses.


So is there any need for SOAP at all? I’d argue there is. A SOAP-based web service uses the WSDL to describe in detail how everything works. That includes how the request should be formatted, what operations are on offer and what the response should look like. This means you know everything prior to starting your request. These are usually considered added complexities, but if you use an application that handles all of this for you, this doesn’t really matter that much. Because that’s where I see the added value of SOAP-based web services: if you use an application that can deal with the information in the WSDL and interpret it for you it’s actually easy to use.


Doing the same thing with REST-based clients is a lot trickier. A REST client acts like a generic client and enters a REST service with little knowledge of the API, except for the entry point. An application might find it difficult to predict all those details that SOAP would store in the WSDL.


One such client is Remedy. When we consume a web service, we act as a SOAP client. During the design phase we read the WSDL file and allow the developer to simply map the fields to the XML elements. The system takes it from there. You don’t have to be concerned with creating SOAP requests, reading the WSDL file, etc.


I think SOAP-based web services work particularly well in a setting where the application can easily interpret the web service. If you have that available and there’s no need to do any coding, SOAP is probably a better choice. To learn more, attend my session, Session 233: Getting the Most out of Your Web Services Integration, at BMC Engage where we’ll be looking at SOAP-based services and REST-based web services.


Hope to see you there,




Don't forget to follow me on Twitter!


BMC Remedy AR System 9.1: Basic Development

For developers! *BMC Remedy AR System 9.1: Basic Development* training is offered in July, August and September!

Gain the knowledge and skills to take full advantage of all that Remedy AR System 9.1 has to offer.


Register now for one of the following instructor-led classes (includes hands-on lab and ebook):


If you have multiple students, contact us to discuss hosting a private class just for your organization.


Details here, or contact Tom Hogan for EMEA training and Brian Hall for AMER training.


Course Overview

This course combines classroom instruction with laboratory exercises to guide students through basic development using Developer Studio. They will leave the course with enough development experience to take a course on how to customize ITSM applications. The lab exercises contain scenarios that simulate real world requirements. By the end of the course, the student will have built deployable applications, forms, and workflow.


Course Objectives

» Create custom objects using Developer Studio

» Set object permissions
» Explore form definitions
» Create active links, filters, and escalations

» Create active link and filter guides
» Explore tables and workflow related to tables

» Understand how to deploy an application





I’ve written a fair bit about web services before. There’s an article explaining how to analyse problems when consuming SOAP web services and how to read the logs, and one of my more recent articles was about how to use the REST API. But I’m going to write a bit more about web services; it is, after all, one of my favourite subjects these days.


You see, I like SOAP. Apparently I’m a bit of an exception, but what I like about SOAP is the thoroughness. It might not always be the easiest to figure out, but if you know how to read it, the WSDL will tell you exactly what services are on offer, what your requests should look like and what you can expect back. As long as you stick to the rules, nothing can go wrong. What are my operations? Check the WSDL. What should the namespace look like? Check the WSDL. What does my SOAP response look like? Check the WSDL. See, you can’t go wrong.


But it’s the rules part that does tend to make it a bit overbearing. I frequently work on problems where there’s disagreement with regards to the exact interpretation of the WSDL. Minor things mostly, but that’s the big weakness of SOAP-based web services. If you don’t stick to the rules 100% all the time it’s not going to work.


Don’t believe me? Check this SOAP request:



It’s an external web service which is consumed with ARS; this is the error that’s returned:



It’s not a particularly helpful error but after a thorough review of the WSDL I realised the namespace for the attributes was wrong. The SOAP request should look like this:



I know that’s correct because the WSDL tells me exactly what the SOAP request should look like:



Notice the attribute element which itself has two attributes (name and form). The element form has a value of unqualified. Here’s what this means:

  • Qualified: indicates that this attribute must be qualified with the namespace prefix and the no-colon-name (NCName) of the attribute
  • Unqualified: indicates that this attribute is not required to be qualified with the namespace prefix and is matched against the NCName of the attribute.


So this would result in: <Customer id='value' /> or possibly: <ns0:Customer id='value' />


We could argue that 'is not required' means that it's optional, but the fact that they specifically set the attribute form to unqualified can be seen as an indication that the web service does not accept the namespace for the attribute.


Frustrating isn’t it? That tiny detail gave me a lot of headaches and it prevented the whole integration from working. And you really have to get into the details of the WSDL to understand what’s going wrong. Hey I said I like SOAP, I don’t love it.


But I must confess, I absolutely love REST. Yes, I am a true believer in the principles of REST. I love the way it takes advantage of the strengths of the architecture of the web, I love its focus on simplicity, on readability.


I’m not going to get into too much detail about the principles of REST; what I want to look at is how the implementation of a REST-based web service differs from SOAP. The principles of REST-based web services are based on a more intuitive way to communicate with another machine. The result is a simpler interface with simpler requests and responses that are easier to read and understand. Consider the following SOAP request:


There’s quite a lot to it and although we’re retrieving data we’re still using the HTTP method POST. Compare that to a request to a REST-based web service which accomplishes the same thing:



So now we’re GETting data. The URL makes a lot more sense as well and compared to the SOAP request is a lot easier to read.


One of the principles of a REST-based web service is that you don’t require prior knowledge of the web service other than its base URL. BMC’s implementation isn’t truly RESTful in that sense as you do need some information to get you started. You need to know how to log in and log out and you do need some degree of familiarity with the format of the requests. But other than that it’s really easy to use.
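As a sketch of how little is needed to get started, here is what a login, query and logout might look like with curl. The server name, port, credentials and form name are placeholders, and the endpoint paths follow the AR System 9.x REST API conventions - treat this as an illustration rather than a copy-paste recipe:

```shell
# Log in: a successful POST returns a JWT token in the response body.
TOKEN=$(curl -s -X POST "https://arserver:8443/api/jwt/login" \
  -d "username=Demo" -d "password=secret")

# Query the User form; the token goes in the AR-JWT authorization header.
curl -s "https://arserver:8443/api/arsys/v1/entry/User" \
  -H "Authorization: AR-JWT $TOKEN"

# Log out to release the token.
curl -s -X POST "https://arserver:8443/api/jwt/logout" \
  -H "Authorization: AR-JWT $TOKEN"
```

Three short HTTP calls, no WSDL and no SOAP envelope - which is exactly the point being made above.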


I think SOAP lends itself quite well to communication where two machines are already able to deal with SOAP’s complexities. ARS for example allows you to consume an external SOAP-based web service and it’s just a matter of mapping the elements to the various fields and integrating this in the workflow. You don’t really have to deal with the interpretation of the WSDL or the construction of SOAP requests. ARS does it for you, and unless something goes wrong the web service integration works perfectly.


But if you want to integrate with ARS using a programming language, REST is the better choice. For example, if you use Java and you’re interacting via SOAP there’s quite a lot to it. You can of course just hardcode the whole HTTP request but if you want to do it properly you need to rely on an external library like Axis. There’s nothing wrong with that but it does suddenly get quite complicated. Considering you only want to send small requests to check a status or to get a record, you need a lot of code to get this to work.


That’s where REST’s strengths come in, because using the REST API it suddenly gets a lot easier. Since we don’t have to follow the strict rules specified in the WSDL and since the requests are a lot smaller it’s actually quite easy to get stuff done. Other than the usual networking libraries I don’t need any 3rd party libraries or frameworks to interact with the system, and since the API is intuitive it’s a lot easier to construct my requests.


Want to know more? I'll be talking about this at Session 233: Getting the Most out of Your Web Services Integration at BMC Engage. We’ll have a detailed look at SOAP, REST and of course at how you’d actually implement all of this. Join me there to learn more!




Don't forget to follow me on twitter!


Some customers are asking us if AR Server / Mid-Tier is affected by the following Apache Struts vulnerability.


Apache Struts is not used by any version of AR Server / Mid-Tier.

Hence the Apache Struts vulnerability mentioned above or any other Apache Struts vulnerability does not affect AR Server / Mid-Tier.


--- Abhijit Rajwade

BMC Software




In April 2015, BMC introduced Remedy 9. BMC applications run on the BMC AR System Platform. Remedy v9.1 was released in December 2015. In this article I will update you on the changes made to the curriculum from v8 to v9. All of the courses are listed here: BMC Remedy Service Management Suite



The prerequisite for many of the ITSM, CMDB and AR classes in v8 was a 16-hour web-based training. We have shortened the prerequisite to 2 hours. BMC Remedy AR System 9.1: Concepts (released recently, replacing the 9.0 version) is a great way for IT Managers to get a high-level overview of Remedy 9 and will help folks that are new to Remedy learn the basics.


What's New for Remedy Application Admins

Remedy Application Administrators now have a course (BMC Remedy AR System 9.0: Administering) that will help you learn about troubleshooting techniques and server configurations instead of teaching you how to develop custom applications. With this course we were trying to solve the problem of customers who would leave Foundation part 2 wanting to know more about administering and troubleshooting. This is a great course for RemedyOnDemand customers.


What's New for Remedy Application Developers

In the previous learning path we focused on custom application development, and the v8 learning path is still right for you if that's what you want to learn. However, after teaching Remedy for many years, I know that many of you want to know how to tailor ITSM the right way. You left the Developer Part 2 class knowing a lot about Remedy development, but had no guidance on tailoring ITSM. So, we are going to teach developers how to develop on the platform with ITSM in mind in the BMC Remedy AR System 9.0: Basic Development course, and in IT Service Management 9.1: Development we will teach you how to tailor ITSM based on input from BMC Support. Therefore, what we teach is already blessed by BMC Support.


Smart IT and MyIT

If you are looking to enhance your experience with Smart IT and MyIT, we have training available for you. Check out

BMC MyIT 3.x and BMC Remedy with Smart IT 1.x: Administering and Configuring and BMC Remedy with Smart IT 1.3: Using and Administering (WBT).


Getting Certified

We offer an accreditation exam called BMC Accredited Administrator: BMC Remedy AR System 9.0 after you complete BMC Remedy AR System 9.0: Administering. The exam cost is included in the price of the course. Or if you are a BMC Accredited Administrator: BMC Remedy AR System 8.0, you can complete the BMC Remedy AR System 9.0: What's New for Administrators and pay for just the exam.


If you are a BMC Certified Developer: BMC Remedy AR System 8.x, you can upgrade to Remedy 9.1 by taking

BMC Remedy AR System 9.0: What's New for Administrators (WBT), BMC Remedy AR System 9.0: What's New for Developers (WBT), and passing the BMC Certified Professional: BMC Remedy AR System Development 9.1 Upgrade Exam.

Please let me know if you have questions about the curriculum by commenting below. I have updated this blog to reflect changes since Remedy 9.1 was released.


The BMC Remedy Single Sign-On Service Provider (SP) certificate shipped with the product, which is used to sign SAML requests, will expire on April 21st, 2016.


If you are using the out-of-the-box certificate to sign SAML requests in BMC Remedy Single Sign-On, requests will fail once the certificate expires.


In this blog, I will be covering the steps to update the BMC Remedy Single Sign-On (RSSO) SP certificate so that it has a new expiry date, which will prevent SAML authentication from failing.


If this certificate has already been replaced with a newer one with a valid future expiry date, you don't have to follow the steps mentioned in this blog.


First of all, how do you find the certificate expiry date of the relying party (RSSO) for SAML authentication?


  • An easy way to find the certificate expiry is by logging in to the ADFS tool and checking the RSSO service provider relying party properties.
  • In the Signature tab, you should see the certificate expiry date.


Likewise, for other IdP tools that you are using with RSSO, you will have to contact your IdP administrator to check the RSSO relying party certificate expiry date.


What steps are necessary to update BMC Remedy Single Sign On (RSSO) SP Certificate?


Important Notes:


(A) The instructions below are written for Windows, and all paths mentioned are Windows paths. Please use the equivalent paths if you're using Linux or Solaris.


(B) The file name for the java keystore should be cot.jks. The alias for the java keystore (cot.jks) should be test2.  The password for the cot.jks keystore is 'changeit'.  Please do not change the password.


(C) Please make sure to add the jdk or jre bin folder to the Path environment variable, or else you may get an error like ‘unknown internal or external command’. In Windows this means that you'll need to edit the System Environment properties and find the global variable PATH to update it.
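On Linux, for example, the fix for the current shell session might look like this (the JDK location used here is a hypothetical path - substitute your actual install folder):

```shell
# Hypothetical JDK install location -- adjust to match your system.
JAVA_BIN="/usr/java/jdk1.8.0/bin"
# Append it to PATH for this session so keytool can be invoked by name.
export PATH="$PATH:$JAVA_BIN"
echo "$PATH"
```

On Windows the same effect is achieved by appending the JDK bin folder to the PATH variable in the System Environment properties, as described above.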




Steps to update the certificate:


1. Update java keystore named cot.jks


Perform the following steps on the machine where the RSSO server is installed, from the <tomcat>\rsso\WEB-INF\classes folder:


a. Take a backup of existing cot.jks from <tomcat>\rsso\WEB-INF\classes folder


b. Delete alias ‘test2’ from existing cot.jks using keytool command line:


keytool -delete -alias test2 -keystore cot.jks


Note:  The password for the cot.jks is "changeit".  Please don't change the password


c. Create a new keypair with alias ‘test2’ in existing cot.jks


keytool -keystore cot.jks -genkey -alias test2 -keyalg RSA -sigalg SHA256withRSA -keysize 2048 -validity 730


Note:  In the above example, we used 730 days as validity, which is equivalent to 2 years validity.  You can use the validity days at your discretion


d. Export ‘test2’ certificate in PEM format


keytool -export -keystore cot.jks -alias test2 -file test2.pem -rfc


e. Take a backup of the updated cot.jks


If you have other RSSO server instances in the same cluster, replace cot.jks in the <tomcat>\rsso\WEB-INF\classes folder of each with the updated cot.jks from step 1.e


2. Update signing certificate in RSSO Admin console


a. Log in to the RSSO Admin console


b. Go to ‘General->Advanced’ tab


c. Open the file test2.pem created in step 1.d in a text editor and remove the first line (the '-----BEGIN CERTIFICATE-----' header):




and the last line (the '-----END CERTIFICATE-----' footer):




Also remove the newline delimiters (\r\n), and then copy the contents.

E.g. if you use Notepad++, you can open the ‘Replace’ dialog, select ‘Extended’ search mode, search for ‘\r\n’ and click the ‘Replace All’ button.
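If the RSSO server is on Linux, the same cleanup can be done from the command line. This sketch creates a stand-in test2.pem with a fake certificate body (the real body comes from step 1.d), then strips the header, footer and newline delimiters in one pass:

```shell
# Stand-in for the exported test2.pem -- the body lines are fake, for illustration.
printf '%s\n' '-----BEGIN CERTIFICATE-----' \
  'MIIBfakeCertBodyLine1' 'MIIBfakeCertBodyLine2' \
  '-----END CERTIFICATE-----' > test2.pem

# Drop the BEGIN/END lines, then delete every \r and \n so one continuous
# base64 string remains, ready to paste into the 'Signing Certificate' field.
grep -v 'CERTIFICATE' test2.pem | tr -d '\r\n' > signing-cert.txt
cat signing-cert.txt    # -> MIIBfakeCertBodyLine1MIIBfakeCertBodyLine2
```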





d. Paste the copied content in step 2.c to the ‘Signing Certificate’ field, replace existing content in the text area




e. Click ‘Save’ button to save the change


f. Wait about 15 seconds, then view the realm using SAML and click the ‘View Metadata’ button in the ‘Authentication’ tab. Verify the SP metadata is updated with the new signing certificate.


3. Update SP metadata at IdP side


- Export the SP metadata from step 2.f and save it to a local file


- Send the exported SP metadata and the new signing certificate from step 1.d to the IdP team for updating.


If the IdP is ADFS, the customer can add the new signing certificate as below:


a. Open ‘Properties’ dialog of the relying party for RSSO
b. Go to ‘Signature’ tab
c. Click ‘Add’ button, select the new signing certificate file and click ‘OK’





Notes for rolling upgrades (Cluster / High Availability environment)


Should you have a requirement for zero downtime in a cluster environment (assuming ADFS is the IdP) when updating the signing certificate, you can take the following sequence of actions:


1. Take one RSSO server instance down first and perform step 1 on it
2. Perform step 2
3. Perform step 3 (remember NOT to delete the old signing certificate)
4. Bring the RSSO server instance up again
5. Take the second RSSO server instance down, update its cot.jks with the one already updated on the first RSSO server instance in step 1.e, then bring it up again
6. Repeat step 5 on all other RSSO server instances
7. After the keystore cot.jks is updated on all RSSO server instances, you can remove the old signing certificate on the RSSO relying party at the ADFS side.


This blog post is really just to gauge interest and gather feedback on something I've spent a lot of the last year working on - which is sanitizing the ar.conf parameters that are published in the wiki docs here:


A-B -

C-D -

E-M -

N-R -

S-Z -


In the comments section of each page I see questions being asked, so I wanted to ensure that the parameter information we provide is accurate and relevant.


The current published list could use improvement in the form of:

  • Updating the parameters to include any unpublished ones and exclude any obsolete ones
  • Mapping each parameter to where in the application you can find it
  • Providing a link to where in the wiki docs you can read more about the parameter to understand its context
  • Clearly identifying the default value


As a sample I've provided the A-B parameters that I've sanitized so far.

Unfortunately there's simply too much information to put into a table inline in this blog post, so the only alternative was the attached Excel file.


Let me know any thoughts/questions/suggestions etc. Bear in mind that this is still very much a work in progress so there are some gaps and parts that need to be modified further.


Some highlights of what's been changed so far are:


  1. All superscripts have been removed and each parameter now either explains where you can find the option in the UI or has a black cell indicating it doesn't map anywhere. Many of the parameters that had superscripts (which denoted that you cannot set or view them using the Server Information form) were incorrect, 44 on this page alone.
  2. URLs have been added to where you can read up more on the parameter. Where a parameter doesn't have a URL, it's either because the correct page hasn't yet been located or because one doesn't exist. For the latter, this will become an action point for our documentation team.
  3. The default value for each parameter has been added in a separate column to make it easy to identify what these should be without reading through the whole parameter description.
  4. The parameters for the Alert tool have been added back to the list. Although support for the Alert tool has now ended, Alerts are still used in Remedy for BMC Atrium Orchestrator.
  5. Parameter names have been corrected from where they were once wrong (for example: AE-Worker-Threads from AE-Worker-Thread and ARDBC-LDAP-Base-Dn from ARDBC-LDAP-Base-DN)
  6. The Atrium SSO parameters have been added
  7. Each parameter has now been categorized by component. This is for easy reference when you want to identify which component(s) use a particular parameter. This will be a filter option on the table once published to the documentation pages.


My goal here is to make them easier to consume for you the customer/admin user.


I'm planning on trying to reduce the size of some of the parameter descriptions also, to make them more concise.

The corresponding page listed under the URL column should contain all the finer details of the parameter. This is an action point for later, though.

I'd also like to have an icon or similar to identify new parameters that were added to the version the page is published in (for example: 'API-SQL-Stats-Control' in 8.1.01)


I look forward to the feedback.






The BMC training schedule is posted for January and February.  We want you to be successful with BMC solutions.  We run classes year round and worldwide across the BMC product lines.  Below are the classes listed by class/product name.

Review the below and register today.  Please check BMC Academy for latest availability, BMC reserves the right to cancel/change the schedule.  View our cancellation policy and FAQs.   As always, check back in BMC Academy for the most up to date schedule.

To see all our courses by product/solution, view our training paths.

Also, BMC offers accreditations and certifications across all product lines, learn more.


For questions, contact us

Americas -


Asia Pacific -


BMC Remedy AR System 8.0: Developer - Part 2

18 January / Americas / Online

15 February / EMEA / Online

BMC Certified Developer: BMC Remedy AR System 8.x

4 January / Asia Pacific / Bangalore, IN

25 January / Americas / McLean, VA

15 February / Asia Pacific / Singapore, SG

22 February / EMEA / Winnersh, UK

BMC Remedy AR System 8.0: Foundation - Part 2

11 January / Americas / Online

11 January / EMEA / Winnersh, UK

18 January / Asia Pacific / Online

8 February / EMEA / Online

8 February / EMEA / Paris, FR

22 February / Americas / Online

BMC Remedy AR System 9.0: Administering

4 January / Asia Pacific / Online

18 January / Americas / Online

18 January / EMEA / Paris, FR

25 January / EMEA / Online

1 February / EMEA / Dortmund, DE

8 February / Americas / Online

8 February / Asia Pacific / Online

8 February / EMEA / Winnersh, UK

29 February / Americas / Online

ITIL Foundation and Exam

4 January / Americas / Online

1 February / Asia Pacific / Online

29 February / Americas / Online


If you have ever put together a web site you’re probably aware how tricky it can be to get the design right. Back in the day you were pretty limited as to what you could do. A few tables, some images and a bit of text - HTML mark-up certainly had its limitations. If you were too ambitious you were on your own.


But things were looking up with the introduction of Cascading Style Sheets. Suddenly it was possible to separate the content from the styling and with every new CSS version (and subsequent browser version) the design possibilities increased. But here’s the problem: the more complex it gets, the harder it is to figure out when things go wrong. Because that’s the thing with these sorts of problems: you’ve got to be able to work out how it’s put together.


That was actually a real headache. If something didn’t work the way you’d expect you just had to try again until it did. Other than checking the HTML source code there really wasn’t much else you could do. That all changed when browsers started including Web Development tools, which finally allowed you to diagnose and (more importantly) fix the layout.  The part that helps us the most is called the DOM Explorer. The Document Object Model represents the page in the form of an object. The DOM Explorer displays this object in the form of a tree structure. All the DIVs, tables, images, etc, are represented as nodes within the tree, so you can navigate the whole page in an organised fashion.


Mid-Tier pages are no exception; if you are familiar with the DOM it’s easy enough to see how they are constructed. To get a better idea how this is done, let’s have a look at an actual problem with Mid-Tier and see how we can use the DOM Explorer to make sense of the page’s construction. Here’s my problem: in ITSM I added a few custom fields to one of the existing forms. It all looks fine in Developer Studio and initially it seems to look fine in the browser. But then I noticed the fields are somehow displayed as read-only - no matter what I try I’m not able to get any content in there.




Here's Firefox:




The fields are configured correctly in Developer Studio so let’s check what’s happening with the layout in the browser. I’m going to use Firefox for this example, but the functionality works similarly in other browsers like Chrome. You need to be familiar with how ARS objects are displayed. Every field added via Developer Studio is part of the DOM tree; this is what it looks like:




So there’s a DIV which contains a few ARS fields. There's the FieldID in this format: WIN_0_101, you can see the type (char), the full name and the help text. The bits in red are HTML specific and mainly used for styling. Under the DIV we have the label and the actual field, a text field in this case. Here’s what this looks like via the DOM Explorer:




On the right we can see the various properties of the objects. There’s a lot of styling going on here, mainly colours, look and positioning.  You have to be relatively familiar with CSS formatting  to understand what’s going on here, but the good thing is that you can change stuff here to see what sort of effect it has on the actual layout. This is all client based, so you can’t do any permanent damage.


Keeping the DOM structure in mind, let’s have a look at one of our problematic fields. I’m going to use the Field ID (which I looked up in Developer Studio) to identify the field in the DOM tree. Here it is:




What I’m looking for is anything that would explain why the field is read-only. To do this I’m going to compare this field to one of the out-of-the-box ITSM fields. I’m going to put them next to each other and see what the obvious differences are.




I hope you’ve spotted the z-index property, which is xxx in the out-of-the-box field and 0 in the custom field. z-index is a CSS property used to determine the stacking order of an element. It’s of course entirely possible to stack element onto element in your layout, so you need a way to control which element goes on top. That’s what the z-index property does. And guess what? The higher the value, the higher in the stack. So if it’s 0, it’s pretty much at the very bottom.


Here’s the good thing about using the DOM Explorer: you can change things on the fly. So what will happen when I change the z-index of my custom field to 1000? Well, let’s give it a go:
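If you prefer the browser's JavaScript console to clicking through the property panel, the same tweak can be sketched like this. Note that the ID WIN_0_536870913 is a made-up example; build your own as WIN_0_ plus the Field ID you looked up in Developer Studio:

```javascript
// Sketch: raise a field's stacking order from the browser console.
// WIN_0_536870913 is a hypothetical ID; substitute WIN_0_<yourFieldId>.
function bringFieldToFront(doc, elementId, zIndex) {
  const el = doc.getElementById(elementId);
  if (!el) throw new Error("No element with id " + elementId);
  el.style.zIndex = String(zIndex); // higher values stack on top
  return el;
}

// In a real browser you would run:
// bringFieldToFront(document, "WIN_0_536870913", 1000);
```

Like any DOM Explorer change, this only lives in your current page load; reloading the page restores whatever Mid-Tier generated.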




Behold, I can now type content in the field! What changed? The order of course, my custom field was stuck under another (transparent) panel. I could see it but not use it.


It’s not possible to explicitly set the order in Developer Studio. Mid-Tier’s interpretation of the order is based on the bring-to-front and bring-to-back functionality. It’s not always that obvious unfortunately and sometimes the layout can differ somewhat. The solution here? Bring it to the front a few times and keep checking with your browser.


I hope this gave you some idea of how to use the DOM Explorer to look into layout problems. Mid-Tier’s job is to interpret the forms and translate them into HTML and CSS. If anything isn’t working the way you’d expect, I’d encourage you to use the Web Development Tools. Try it a few times and you’ll notice it becomes easy enough to diagnose this sort of layout issue. So the next time you notice an image which doesn’t look right, a field which isn’t in the right position, or text which is cut off, press F12 on your keyboard and start debugging.





Don't forget to follow me on twitter!

Young So

BMC Data Migrator Tool

Posted by Young So Oct 13, 2015


My journey with the Migrator tool was an interesting one. Trying to learn about it through the documentation only got me so far; once I started using the tool, I ran into problems that were sometimes covered on BMC sites and sometimes not. The goal of this blog is to save newbies some time by giving them a jump start on actually using the tool right off the bat, instead of troubleshooting the tool itself.


I will divide things into sections so that you don't have to read the whole blog to get started with Migrator. I would recommend that you read the tuning section so that you don't run into issues or errors with the tool.


Understanding Migrator

There are two tools within BMC Migrator: the Migrator itself and the Delta Data Migration (DDM) tool. These tools use the Jet engine to create copies of the databases being migrated to and from, in %temp% in most cases. I found that my log files are stored in a different location, though: for the Migrator tool on Windows it was %userfolder%/appdata/roaming/ar system. The log files for Delta Data Migration are stored in the working folder of the Delta Data Migration tool. These locations can be changed via the registry on Windows systems.


-- This is where you edit most of the Migration tool settings




The software seems to have a few issues that are well documented, but I see the information scattered across knowledge articles and communities. I recommend reading the tuning section before starting your migration.


Enable Logging

If you run into issues with Migrator, in most cases you'll need detailed logging. To achieve that with the Migrator tool, start by understanding where the logs are kept: C:\Users\%username%\AppData\Roaming\AR System\Remedy Migrator\backup\%servername%


Migrator got its beginnings from the AR platform, so you can use the API log settings that work on the AR server. To enable API logging, set the environment variable ARAPILOGGING=%loglevel%. Log level values are 1 through 88 as far as I know; log level 88 isn't documented by BMC from what I could tell.


There are different ways to enable logging with the Migrator tool. One is to enable server-style logging: by changing registry values under the key below, you'll enable different types of logging. I haven't found documentation on this, so you'll have to play with it.

[HKEY_CURRENT_USER\Software\Remedy\Remedy Migrator]





Understanding API Logging variable


Here are the documented log levels:


ARAPILOGGING = 88 (shows you the time stamp of each transaction)



Migrator Tuning

Here are some knowledge articles and community discussions on the Migrator tool.


I am a firm believer in having two partitions on servers: one for OS operation and a second for the application. You'll need to modify the delta work-dir setting in Configuration.xml to point at the correct install folder. A migration involves lots of reads and writes to disk; for example, when Migrator caches the source server's objects, it reads that information over the network and writes it to disk.


Windows Registry Editor Version 5.00











[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]



[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Jet 4.0]


[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Jet 3.x]




Thread Error 2004


1.  Enable logging

2.  Provide the logs to support and get the issue escalated


In my experience with this error, we found that the overlay schema migrator was not handling multiple field references correctly while parsing object properties, and I had to apply a hotfix. In addition, I had to install the latest version of Migrator.


How to Migrate ITSM with Migrator During a Version Upgrade (this also addresses non-Unicode to Unicode migration)


1. Build the AR Server

2. Use Migrator to move data only for the "AR System Licenses" form

3. Install CMDB

4. Atrium Integrator Server

5. Install ITSM

6. Install SRM

7. Process Designer

8. Install SLM

9. Apply available patches

10. Upgrade the mid-tier (optional; only required during a version upgrade)

11. Extended CDM for custom attributes

12. Disable DSO

13. Disable Escalations

14. Disable Database Triggers (ask the DBA to perform this for the ARSystem database)

15. Fix known DDM issues on the production/source database with DDM fixes

16. Tune the database for migration

17. Turn off the following Recon Jobs:

        BMC Asset Management - Sandbox

        BMC Asset Management - Sandbox Bulk

        BMC Asset Management CI Data Load

18. Disable any Normalization Jobs at the destination server

19. Do a difference report on forms that were changed since %date%

20. Note the date that DDM was run: 9/29/15 (DDM needs to be run more than 3 times)

21. Run migratorFindCustomForms.bat

22. Run the post scripts for DDM


With the release of 9.0.01, the BMC Remedy AR System product now has a new component, BMC Remedy Single Sign-On (Remedy SSO). This component solves the problem of single sign-on for those who primarily use the BMC Remedy applications with either AR System based authentication or SAML based authentication. This blog focuses on introducing the features of BMC Remedy Single Sign-On.

Features of Remedy Single Sign-On:

  • Lightweight
  • Easy to deploy
  • Simple architecture
  • Modern UI
  • Multi-tenant support by default
  • Easy high availability configuration

Remedy Single Sign-On supports easy integration with BMC Remedy applications such as BMC Remedy IT Service Management (BMC Remedy ITSM), BMC MyIT, and BMC Analytics. Most of the integration steps are handled internally by the component; as a result, these applications can be integrated in a few quick steps.

Remedy Single Sign-On is easy to configure in a high availability environment, as there is no distinction between a primary node and a secondary node. All the nodes are connected to a single database instance to store the data.

You can use Remedy Single Sign-On whether you use AR System based authentication or SAML based authentication. Configuring these authentications is extremely easy and quick. For AR System based authentication you only need to provide the AR System hostname and port number. To configure SAML v2.0 authentication properties, you need a Service Provider (SP) and an Identity Provider (IdP), which allow you to set up connectivity between the SP and the IdP, certificates, attributes, and so on. In a few quick steps you can configure either of these authentication protocols and start using Remedy Single Sign-On to solve the single sign-on problem in your organization.

Where to get started?



Watch the video below highlighting the features of Remedy Single Sign-On for Remedy AR System.



AR System 8.1 achieved Common Criteria Certification in May 2015.


Refer to the following Common Criteria Certification artifacts

  1. Security Target document –
  2. Certification document  –


--- Abhijit Rajwade


Some of the common questions and answers regarding Remedy9 Full Text Search!



1. What's new in Remedy 9 as far as the Full Text Search feature is concerned?

• Implemented the FTS HA model (similar to v7.6.04 SP5 & v8.1 SP2)

• Apache Lucene version upgraded from 2.9 to 4.9

• Apache Tika version upgraded from 1.2 to 1.6

• Broke the monolithic FTS index file down into schema-specific files (a.k.a. schema-based indexes) using Lucene 4.9

• Developed the FTS Index Migration Utility, which migrates the old monolithic indexes into schema-based indexes and also converts them to the Lucene 4.9 format

• Incorporated the FTS Index Migration Utility into the AR installer

2. What are schema-specific files (a.k.a. schema-based indexes)?

  • Until the Remedy 9 release, all FTS indexes were stored in one Lucene-specific file (a monolithic index file).

      E.g. all Incident, Problem, Change, and KA related indexes were stored in this one single index file.


  • From the 9.0 release, we have broken that file down and now create a separate folder inside the <AR/FTSconfiguration/collection> folder for each AR form/schema.

  • Example:

      If the HPD:HelpDesk schema has schemaId = 558, then all Incident-related indexes will be stored in one folder, e.g. 558/


  • All Change Management related data will be stored in another folder, e.g. 561/

        and similarly for the rest of the AR forms.

Earlier Monolithic Indexes:


Schema Based Index:


3. Why is the FTS Index Migration Utility needed?


  • Until Remedy 9, all FTS indexes were stored in monolithic files, and Apache Lucene (the underlying search engine) was at version 2.9.
  • In 9.0 we upgraded Lucene to version 4.9 and broke the monolithic indexes down into schema-based indexes.
  • The utility exists to cater to those two requirements:
    • Convert the monolithic indexes into schema-based indexes
    • Upgrade the older indexes to the newer format (i.e. from Lucene 2.9 to 4.9)

4. Where does the FTS Index Migration Utility get deployed?


  • Windows:
    • <AR INSTALL DIR>\arftsutil.bat


  • Linux:
5. Is the FTS Index Migration Utility kicked off automatically while upgrading the AR Server to 9.0?


  • Yes. As soon as you start the AR Server 9.0 upgrade, index migration runs as part of the upgrade process.

6. Where are the FTS Index Migration Utility logs stored?


  • They are stored under <AR INSTALL DIR>\Arserver\db

7. Can I skip the FTS Index Migration Utility execution?


  • If you are upgrading the AR Server to 9.0 using the installer UI, there is no option to disable the FTS Index Migration Utility execution.
  • Out of the box, the installer will migrate the old indexes.

8. Is there any way to skip the FTS Index Migration Utility execution?


  • If you upgrade the AR Server to 9.0 in silent mode, you can skip the FTS Index Migration Utility execution.
  • During the AR Server 9.0 upgrade, if the <AR Server INSTALL DIR>\ftsconfiguration\collection folder does not contain any files, the utility will not migrate anything.

9. What is the silent install parameter to skip the FTS index migration process during an AR upgrade in silent installation mode?

  • FTS index migration can be skipped if the installation is done via an AR silent installation.


  • The following silent install parameter skips index migration:

                -J BMC_AR_SKIP_FTS_INDEX_MIGRATION= true

                  (The value of this parameter is case insensitive)

  • Index migration will be skipped ONLY if this parameter is present in the AR silent install file and set to True/TRUE/true.
  • In any other condition, index migration will execute by default and migrate the old indexes.
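The case-insensitivity rule above boils down to a simple string comparison. Here is a hypothetical sketch of that check (the function name is mine, not the installer's): only a value of "true" in any letter case skips the migration, and an absent or different value lets it run.

```javascript
// Sketch: decide whether FTS index migration is skipped, per the rule
// above. Only "true" (case-insensitive) skips; anything else, including
// a missing parameter, lets the migration run. Hypothetical helper.
function shouldSkipFtsIndexMigration(params) {
  const value = params["BMC_AR_SKIP_FTS_INDEX_MIGRATION"];
  return typeof value === "string" && value.trim().toLowerCase() === "true";
}

console.log(shouldSkipFtsIndexMigration({ BMC_AR_SKIP_FTS_INDEX_MIGRATION: "TRUE" })); // → true
console.log(shouldSkipFtsIndexMigration({})); // → false
```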

10. If the utility fails, will it cause the AR Server upgrade to fail?


  • If the FTS utility fails, it will cause the AR upgrade to fail.
  • The utility returns 3 return codes:
    • 0: Success
    • 1: Warning [for a DB-only upgrade, or when just adding a new locale to an existing AR installation, the utility returns 1 but the upgrade will not fail]
    • 2: Fail [the AR upgrade will also fail]
  • The FTS Index Migration Utility logs are stored under <AR INSTALL DIR>\Arserver\db
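Those return codes are easy to map in a wrapper script around the upgrade. A small sketch (my own helper, not shipped code; treating any undocumented code as a failure is my assumption):

```javascript
// Sketch: interpret the FTS Index Migration Utility's exit codes as
// documented above: 0 = success, 1 = warning (upgrade continues),
// 2 = fail (the AR upgrade fails too).
function interpretFtsUtilExit(code) {
  switch (code) {
    case 0: return { status: "success", upgradeFails: false };
    case 1: return { status: "warning", upgradeFails: false };
    case 2: return { status: "fail", upgradeFails: true };
    // Assumption: treat any other code conservatively as a failure.
    default: return { status: "unknown", upgradeFails: true };
  }
}

console.log(interpretFtsUtilExit(1)); // → { status: "warning", upgradeFails: false }
```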

11. How does one configure shared load among indexer servers?


  • There is an FTS configuration plugin in place. Using it, one can dedicate indexer and searcher servers.
  • E.g. if I have 4 servers in a server group, I can nominate 2 as indexer servers and 2 as searchers.

        From the FTS Config Plugin UI, one can configure Server A & Server B as indexers, and Server C & Server D as searchers.

  • To nominate a server as an indexer, it should have a rank for the FTS operation in the Server Group Ranking form; that way the AR Server will treat it as an FTS indexer server.

12. FTS Index Migration Utility execution failures: possible reasons?

  • The old indexes are corrupted
  • Not enough space on the disk where the ftsconfiguration/collection directory exists (during index migration, double the space is required on the disk; later, while merging, the utility deletes unwanted files from the /collection folder)

        Example: if the /collection dir is 2 GB before the upgrade, then during the AR upgrade there must be 4 GB of free space available on that disk; at the end of the migration process, the /collection folder shrinks to ~1.2 GB
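The space rule above is simple to pre-check before an upgrade. A sketch (the doubling factor comes straight from the example above; the function itself is hypothetical):

```javascript
// Sketch: per the rule above, index migration needs free disk space
// roughly equal to double the current /collection folder size.
function requiredFreeGb(collectionSizeGb) {
  return collectionSizeGb * 2;
}

console.log(requiredFreeGb(2)); // → 4, matching the 2 GB example above
```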

13. Can I execute the FTS Index Migration Utility manually at a later time?

  • Yes, but if the indexes have already been migrated there is no need to run it again.

14. Which version of Luke can I use to debug a Lucene 4.9 index?

  • You can use Luke 4.1x onwards; it can be downloaded from here

15. AR FTS Index Migration Utility - usage?

  • <ARInstallDir>\arftsutil.bat -d "<COLLECTION DIR PATH>" -c "<FTS CONFIGURATION DIR PATH>"
  • arftsutil.bat -help (prints the usage)

16. How much time is required to migrate the old indexes into the newer format using the FTS Index Migration Utility?


  • In the R&D performance lab we conducted tests; here are the results (note: timings may vary depending on the server configuration)
  • Test I
    • Collection folder size before conversion: 1.5 GB
    • Collection folder size after conversion: 0.8 GB
    • Total time for the index conversion process: 13 mins
    • Test data during this test: ~100K records (Incidents, Changes, Articles, RKM external docs, etc.)
  • Test II
    • Collection folder size before conversion: 24.5 GB
    • Collection folder size after conversion: 15 GB
    • Total time for the index conversion process: 130 mins
    • Test data during this test: INC ~522K, CHG 103K, SR 535K, WO 12K, PB 100K, RKM Docs 100K, AR Forms w/ Attachments 200K, AR Forms w/o Attachments 300K

17. Is global re-indexing required after I move my entire stack to Remedy 9?


      No, it is not required.

18. How can I re-index a particular schema if its indexes got corrupted and I want to re-create them?


  • One can use the process command Application-FTS-Reindex-Form "<FORM NAME>" via the driver program, or write a filter that uses this process command to re-index the particular schema.

There is a new KB article available on resolving the BIRT security issues ISS04485990 (CVE-2015-5071) and ISS04485988 (CVE-2015-5072).


Workarounds as well as fixes are available.

--- Abhijit Rajwade

