
Remedy AR System


Many moons ago, BMC deprecated the Windows ARUser client in favor of the web-based Mid-Tier client, and since that time there has been a scenario I have repeatedly run into that does not have a satisfactory solution based on out-of-the-box functionality:

 

There is an ARS table on the user's screen. The user needs the data from that table in a spreadsheet.

The built-in Report button that one gets on tables and on results lists yields a UI that ... well one of my users put it best:

"It looks like Radio Shack exploded on my screen!"

For instance, am I to turn loose a warehouse manager, who uses her computer maybe 5 or 6 times a week, to build her own reports against a form that has display-only fields with names that look confusingly similar to the fields on her screen?! Don't get me wrong, Smart Reporting has a whole lot of useful features and is a huge improvement, but at the end of the day ...

 

 

bad_time.PNG

 

HTML5 offers us a lot of interesting capabilities inside modern browsers that we didn't have before. In fact, the entire JavaScript ecosystem is experiencing something of a Cambrian explosion at the moment. One of the new capabilities of HTML5 is the File API, which allows us to programmatically construct documents in JavaScript and make them available for download to the user.
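For example, the download side of the trick looks roughly like this. This is a minimal sketch; the function names are my own, but Blob, URL.createObjectURL, and the anchor download attribute are the standard HTML5 pieces involved:

```javascript
// Build a Blob holding CSV text. Blob and URL.createObjectURL are
// standard HTML5 File API pieces; the function names here are my own.
function makeCsvBlob(csvText) {
    return new Blob([csvText], { type: 'text/csv;charset=utf-8;' });
}

// Browser-only: draw a link, click it, and clean up afterwards.
function offerDownload(csvText, fileName) {
    const url = URL.createObjectURL(makeCsvBlob(csvText));
    const link = document.createElement('a');
    link.href = url;
    link.download = fileName;        // suggests the file name to the browser
    document.body.appendChild(link);
    link.click();                    // triggers the browser's save-file behavior
    document.body.removeChild(link);
    URL.revokeObjectURL(url);        // release the object URL when done
}
```

No server round-trip, no staging area: the document never exists anywhere but in the user's browser.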

 

A few weeks ago I once again found myself in the scenario I described at the top. I had a custom application with a big table field and a whole bunch of workflow that lets the user get the data they need on the screen, and now the user was basically saying, "OK, but now I need this in a spreadsheet. Can't you just give me a download button?"

 

This got me thinking. "Good point! Why CAN'T I just do that?!"


All of the data driving the ARS table on the screen already exists somewhere in a JavaScript data structure. Why can't I just snag that, reformat it into something Excel knows how to deal with, and draw a download link on the screen? So I got into a little mild hacking using the mighty, mighty Firefox Developer Edition JavaScript console, and I figured out how to do exactly that.
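The "reformat it into something Excel knows how to deal with" part is the easy bit. Here's a sketch of rows-to-CSV conversion with RFC 4180-style quoting; this is my own helper for illustration, not the code from the repo:

```javascript
// Convert an array of row objects (as you might pull out of the table's
// backing data structure) into CSV text Excel will accept.
// RFC 4180-style quoting: wrap fields containing commas, quotes, or
// newlines in double quotes, and double any embedded quotes.
function toCSV(rows, columns) {
    const quote = (v) => {
        const s = String(v == null ? '' : v);
        return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
    };
    const header = columns.map(quote).join(',');
    const body = rows.map((row) => columns.map((c) => quote(row[c])).join(','));
    return [header].concat(body).join('\r\n');
}
```

Usage: `toCSV([{ Key: 'Gallifrey', Value: 'home, sweet home' }], ['Key', 'Value'])` yields a two-line CSV with the comma-bearing value safely quoted.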

 

What I'm about to show you is the cleanest way to do this hack that I could figure out. I'm somewhat of a greenhorn when it comes to JavaScript, so it's entirely possible there are better ways to do what I've done here. To that end, I've posted my code on GitHub. If you see problems, please make a pull request there and I'll merge it into the main branch.

 

The basic gist of it is this:

  1. copy some javascript functions into the view header of your form
    For the impatient: here ya go
    view_header.PNG

  2. make an active link that calls one of those functions, using a set-fields action with a run-process to put the output of the function into a tmp field
    It's gonna look something like this:
    set-fields-from-js-function-active-link.PNG

  3. make an active link that populates a view field with a template, passing it the output in the tmp field from step #2
    Here's my version of that template. You may want to inject your own flavor. I'll cover the options my version of the template takes further down. Basically, you're just calling the TEMPLATE() function. It'll look something like this:
    populate-template-active-link.PNG
  4. There's now a dialog on the screen where the user can select the columns they want to export and download the freakin' spreadsheet!
    And that'll look something like this:
    demo-export-dialog.PNG

 

You May Ask Yourself, "How Do I Work This?!"

 

Probably the easiest way to get started is just to download this demo and start playing with it.

 

  1. using Dev Studio, import the def file: ARS_CSVTools_demo_v1_definitions.def
    this will create the following objects on your Remedy Server:
    1. regular form: ahicox:csv:demo:data
    2. display form: ahicox:csv demo:dialog
    3. display form: ahicox:csv demo:gui
    4. active link: ahicox:csv demo:exportAll
    5. active link: ahicox:csv demo:exportSelected

  2. Import Demo Data to ahicox:csv:demo:data
    use the Data Import Tool to import this file: ahicox_csv_demo_data.arx to the ahicox:csv:demo:data form

  3. install the export dialog template
    Create a record in AR System Resource Definitions like this one:

    ar_system_resource_defs.PNG
    so basically:
    1. Attach this file: csvExportDialogTemplate.html
    2. Make sure the Mime Type is "text/html"
    3. Make sure the Type is "Template"
    4. Make sure the Status is "Active"
    5. Make sure the Name is "ahicox:csv demo:template-v1"  <- important!

  4. Open the ahicox:csv demo:gui form through your mid-tier
    demo_ui.PNG
    1. Click the Refresh button on the table. This should load a list of all planets from Doctor Who, the sprawling, longest-running (and greatest) sci-fi show of all time. To keep things simple we just have three columns: Entry ID, Key, and Value. That doesn't really matter, actually; this will work with whatever columns you put on whatever table you want to export. The only caveat is that no two columns should have the same display name; otherwise the export will only show one of them (probably whichever was last in the column order, but no promises on that).

    2. The Export All Rows to CSV button will scoop up every row on screen and ship it off to the export dialog
      This button calls the _exportAllRows JavaScript function and assigns the output to the zTmp_JSONData field. The _exportAllRows function takes one argument: the field ID of the table you want to export. For instance, the field ID of the table on my demo form is 536870913, so the set-fields action calls:

      $PROCESS$ javascript: _exportAllRows(536870913);
      
      
    3. The Export Selected Rows to CSV button will scoop up only the selected rows in the table field and ship them off to the export dialog. This is pretty much the same thing as #2, except it's a different function name:

      $PROCESS$ javascript: _exportSelectedRows(536870913);

      An important note about these JavaScript functions: if you need to export a table that is on a form embedded in a view field (or embedded several levels deep in view fields), you need to insert these functions in the view header of the root-level form. So, for instance, if you wanted to export tables buried on forms in the ITSM modules, you'd want these functions in the view header of SHR:LandingConsole rather than the individual child forms.
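A quick aside on the duplicate-display-name caveat from step 1: if (as seems likely) the export keys each row's cells by column display name, two columns with the same name collapse into one. A toy illustration of plain object assignment, not the actual ClientCore.js internals:

```javascript
// Why duplicate display names lose data: if the export keys each row's
// cells by column display name, two columns named "Status" collapse
// into one property -- the later assignment wins.
const cells = [
    { name: 'Status', value: 'Assigned' },  // from the first "Status" column
    { name: 'Status', value: 'Closed' }     // from the second one
];
const row = {};
for (const cell of cells) {
    row[cell.name] = cell.value;            // second write clobbers the first
}
console.log(row); // { Status: 'Closed' } -- only one column survives
```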

The Template

The HTML template is populated via the built-in TEMPLATE() function (as illustrated above). These are the arguments that the template takes:

 

  • jsonData
    this is the JSON data structure returned from either the _exportAllRows() or _exportSelectedRows() functions
  • titleHeader
    this string is shown in the header of the dialog template, adjacent to the "Export as CSV Spreadsheet" button
  • defaultShowColumns
    this is an array in JavaScript notation, containing the columns you would like to be shown in the preview by default when the user opens the dialog (the user can select additional columns or deselect the defaults once the dialog is open). An example value would be:

     "['columnName1', 'columnName2']"

    NOTE, however: if you're building that in workflow, the ticks will be a problem. It'll actually have to be constructed something like this:

     "[" + """" + "columnName1" + """" + ", " + """" + "columnName2" + """" + "]"

    Column names are referenced by their DISPLAY NAME, not their database name, in this context. All this does is control which of the columns are checked by default when you open the dialog:

    defaultSelected.PNG


  • fileNameBase
    the template will attempt to construct a unique filename each time the user clicks the download button by appending the epoch timestamp at the time of the click to the end of the file name. The fileNameBase argument lets you give the file a meaningful name that appears before the unique timestamp. For instance:

    example fileNameBase value: "DWP-Export"
    resulting file name:        "DWP-Export-1522077583.csv"
    
    
    
  • displayLimit
    By default the dialog shows a preview of the first X rows of the export, where X is defined by displayLimit. If the export has fewer rows than this number, we just show all of them; otherwise we show only this many, with a user-friendly message explaining that.
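The fileNameBase and displayLimit behaviors described above boil down to something like this. These are my own function names, a sketch of the template's behavior rather than its actual code:

```javascript
// Unique file names: append the current epoch timestamp (in seconds)
// to the caller-supplied base name, e.g. "DWP-Export-1522077583.csv".
function uniqueFileName(fileNameBase) {
    const epoch = Math.floor(Date.now() / 1000); // seconds since epoch
    return fileNameBase + '-' + epoch + '.csv';
}

// Capped preview: show everything if the export is small enough,
// otherwise only the first displayLimit rows.
function previewRows(rows, displayLimit) {
    return rows.length <= displayLimit ? rows : rows.slice(0, displayLimit);
}
```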

 

 

To-Do

 

  • Handle Table Chunking
    At present it'll just scoop up rows visible on screen. In the case of exporting selected rows, it should be possible to hang a hook off whatever function is called to retrieve the next chunk from the server, then export data from the selected rows of the previous chunk and append the additional selections. Perhaps something similar for export-all that programmatically retrieves each chunk from the server and exports/appends. Just needs a little hackkity hack.

  • CSV Import!
    This should also be possible, since the HTML5 File API allows us to access the contents of a file specified by the user without uploading it to the server. In theory, I should be able to create a similar dialog that shows the columns in your spreadsheet and the columns in your table, lets you map them, then hijacks the PERFORM-ACTION-TABLE-ADD-ROW run-process command to add the rows to your table on screen, so that you can set up your own table walk to push the rows wherever you want them to go.

    This would beat the living hell out of uploading the file as an attachment, staging it somewhere on the server, asynchronously forking off an AI job from workflow to import and dispose of the file, and then having an active link loop run every couple of seconds to check the status of the AI job. That's the only other way I'm aware of to handle it right now. And god forbid the file the user uploaded had the wrong column names or bad data! Good luck catching that exception!

  • Get BMC To Support this
    Look, obviously this is unsupported.
    In order to figure this out, I had to pull apart lots of code I dug up off the mid-tier. This entire approach depends on the functions and data structures in ClientCore.js being stable from version to version. There is no guarantee of that, so BMC could break this at any moment without warning.

    My users like this feature a whole lot more than the out-of-the-box reporting features. I'd like to be able to continue offering it without worrying that every mid-tier patch we install could break it. At the end of the day, that's actually not a lot to ask: BMC could simply make a function that does this and include it in ClientCore.js. It's pretty simple stuff. Heck, maybe they could even give us a run-process command to export properly encoded JSON from a table into a field?!

    Anyhow, here's what I know for sure this works on:
    I've successfully tested this on ARS / Mid-Tier 9.1.02.001 201702131133, against these browsers:
    1. Firefox 57.7.2 (32-bit)
    2. Firefox 10.0b6 (64-bit)
    3. Internet Explorer 11.2125.14393.0
    4. Chrome 65.0.3325.181 (32-bit)
    5. Edge 38.14393.2068.0

 

  • This approach in general could do a LOT of things
    There is pretty much nothing JavaScript can't do inside a browser these days. Literally. From real-time 3D rendering to actually spinning up VMs: it's been done, on the client side, in a browser. So why am I wrestling with cumbersome and poorly implemented server-side processes for mundane stuff like this that I could do entirely in the client? JavaScript was BUILT for consuming JSON web services; that's REST in a nutshell, and now we have a REST API. All we really need to do some seriously amazing stuff in ARS is a supported interface to ClientCore.js and a way to get an authentication token from an already-logged-in user, so that I can pass it to the REST API without asking the user to log in again.

    And that's just scratching the surface. Who's up for building an ARUser replacement out of the REST API and Electron? I would be.

    ATTENTION BMC: publish a developer-facing, supported (and documented) JavaScript API for interacting with AR System within HTML templates!
    Let a hundred flowers blossom. We're out here selling your software for you day in and day out. It's the least you can do.

 

 

ALSO: for those not hip to GitHub, there's an attachment with a zip file of everything :-)


Quite often when you have an issue, the first thing asked of you is to capture logs and send them off. The problem is that these logs quite often contain sensitive information: things like user names and server IPs. Depending on the nature of your system and your organization, providing that information might not only be a bad idea, it might be against your company's InfoSec policy, or maybe even illegal. To help combat this issue I wrote this very simple Java program with a sample batch file.

 

At its heart, the program simply applies pre-defined 'find/replace' scenarios. The properties file ships with two:

 

1 - UserName - This will find the user: section of your log file and replace it with a generic 'UserName'

2 - IPv4 Address - This one will look for something like 192.168.0.125, but in a generic way, so that it finds ANY IP address and replaces it with 'IPV4Address'

 

The program is RegEx-aware, which means you can use complex, non-literal find criteria. Read up on RegEx here if you're interested in the finer details (Regular expression - Wikipedia).
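The actual tool is a Java program driven by a properties file, but the two built-in rules are easy to illustrate. Here's a sketch in JavaScript; the patterns are my own approximations of the rules described, since the real ones live in the properties file:

```javascript
// Illustration of the scrubber's two built-in rules (the real tool is
// Java and driven by a properties file; this is just the idea).
const rules = [
    // "user: alice" -> "user: UserName"
    { find: /user:\s*\S+/g, replace: 'user: UserName' },
    // any dotted-quad IPv4 address -> "IPV4Address"
    { find: /\b\d{1,3}(?:\.\d{1,3}){3}\b/g, replace: 'IPV4Address' }
];

// Run every rule over the log text in order.
function scrub(logText) {
    return rules.reduce((text, r) => text.replace(r.find, r.replace), logText);
}
```

For instance, `scrub('user: alice logged in from 192.168.0.125')` yields `'user: UserName logged in from IPV4Address'`.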

 

If you are on Windows, all you need to do is configure your properties file with whatever it is you want to find and what to replace it with, then drag/drop your log file onto the batch file. The batch file will run the log through the program and spit out a copy of the log with the suffix .scrubbed.log appended.

 

This utility does not make any network calls, it only reads the log file you provide it and gives you a scrubbed output.

 

This is an unofficial and unsupported tool, and comes with no warranties expressed or implied.  It is still your responsibility to ensure that sensitive information is removed from the scrubbed file before posting that log anywhere, but this should help you get things cleaned up with ease and speed.

Rahul (Remedy) Shah

D2P Overview

Posted by Rahul (Remedy) Shah Employee Mar 2, 2018

D2P Overview

Dev to Prod (D2P) is a feature used to push changes from a QA/Dev environment to a production environment. It was initially introduced in AR System 9.x and has since been enhanced considerably over the releases, adding more features and becoming more stable.

 

The two primary components of D2P are:

  1. D2P Console
  2. Single Point Deployment

 

The D2P Console is mainly used to manage and deploy packages. It has multiple capabilities, but since it's an older feature we won't cover it in detail here.

 

A quick look at what's new in releases "pre 9.1.03", "9.1.04", and "9.1.04.001":

 

1.PNG

Why D2P and its enhancements?

 

Until now, deployment of a hotfix/patch was a totally manual process, and applying the hotfix/patch to each node/machine was a big pain point. If you look at a hotfix or patch, it is a bunch of binary files, definition files, data files (arx files), and some database changes, all of which had to be applied manually on each node. D2P already had the capability to promote workflow definitions and data (arx files); what was really missing was deployment of binary files. In release 9.1.04 a new feature was introduced, known as "Single Point Deployment", which takes care of deploying binary files.

 

Post 9.1.04, all hotfixes and patches will be shipped as D2P packages. The main advantage is that you deploy to only one server and the package gets deployed automatically on all the servers in the server group. The progress can be easily monitored, and, as another key feature, the deployment can be rolled back if needed.

 

The deployment process is going to look like this: download from EPD to a local machine, test it in your test/dev environment, and then import it on your production machine. It's going to be an easy process, and you don't need physical access to these machines; it will all be done from the D2P Console.

 

Picture1.jpg

What is the Single Point Deployment feature?

 

  1. What is Single Point Deployment?

    Single Point Deployment helps you create and deploy a binary payload. A binary payload is a set of binary files, configuration files, and batch or shell scripts which can be executed. Not only BMC: anyone can create a payload for deployment.
  2. What components support a binary payload?

    You can deploy a binary payload onto the following components:

    1. BMC Remedy Action Request System Server (AR)
    2. BMC Remedy Mid Tier (MT)
    3. SmartIT

 

  3. What changes were made to AR Monitor (armonitor)?

    AR Monitor (armonitor.exe/sh) loads (starts) all the processes listed in armonitor.cfg. From 9.1.04 it is more intelligent: it knows which processes it has loaded and can start and stop individual processes.

 

  4. What is the file deployment process?

    Starting with 9.1.04, each of the three components above (AR, MT & SmartIT) runs a separate process known as the "file deployer", which knows how to stop and start processes. Basically, it can instruct armonitor to start or stop any particular process; for example, it can instruct it to stop/start the "java plugin process" hosted on an AR Server.

 

  5. What are the main components of the file deployment process?

    Three components are important for any file deployment:

  • File Deployer  :-  A new process that runs under armonitor and does the actual work of deploying any new binary payload.
  • AR Monitor     :-  The file deployer calls AR Monitor to stop and start the required processes.
  • AR Server      :-  Acts as a coordinator between multiple file deployers and controls the sequence in which deployments should be done.

 

  Summary :- With all the above new stuff, you have a new capability to deploy binary files. To deploy a binary file, you create a payload on an AR Server containing the binary file to be deployed. This is how the flow goes:

  • The AR Server instructs the file deployer that there is work to be done
  • The file deployer downloads the binary file from the server
  • The file deployer instructs AR Monitor to STOP the particular process defined in the payload (stopping is required because the binary file (jar) might be locked by the process)
  • The file deployer takes a backup of the existing binary
  • The file deployer copies the new binary
  • The file deployer STARTS the particular process it stopped
  • The file deployer checks whether that process started
  • The deployment is marked as done.
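The flow above can be sketched in code. This is purely illustrative pseudocode in JavaScript; none of these names correspond to real BMC APIs, and the deployer object stands in for the real file deployer:

```javascript
// Illustrative sketch of the file-deployer flow described above.
// All names here are hypothetical, invented for this example only.
function deployPayload(deployer, payload) {
    const binary = deployer.download(payload);   // fetch payload from AR Server
    deployer.monitor.stop(payload.processName);  // release any file locks (jar)
    deployer.backup(payload.targetPath);         // keep a rollback copy
    deployer.copy(binary, payload.targetPath);   // swap in the new binary
    deployer.monitor.start(payload.processName); // restart the stopped process
    if (!deployer.monitor.isRunning(payload.processName)) {
        throw new Error('process failed to restart; roll back from backup');
    }
    return 'deployed';                           // mark deployment as done
}
```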

 

More blogs on Single Point Deployment will follow.

Rahul (Remedy) Shah

Operating Mode

Posted by Rahul (Remedy) Shah Employee Feb 28, 2018

Operating Mode

 

Operating Mode is a feature of AR System introduced in the 9.1.02 release.

 

Why was Operating Mode needed?

 

When upgrading the server, CMDB, or any AR applications, background processes can slow down the upgrade, and server restarts during upgrades cause switching of operation ownership. BMC made recommendations in documentation and/or white papers suggesting customers make configuration changes before running an upgrade, and some of the product installers performed configuration changes as well. Instead of relying on manual changes or each installer making changes itself, it is better for the installers to put the server into an upgrade mode that takes care of all this. That way, with every release BMC can add new capabilities to the upgrade mode rather than updating lots of documentation or installers.

 

Operating Mode drives two things:

  • It solved many of the BMC install/upgrade problems
  • It also enabled a feature called Zero Downtime upgrade (ZDT).

 

Who consumes Operating Mode?

 

All BMC platform installers (AR Server, Atrium Core, Atrium Integrator) and apps installers (ITSM, SRM, SLM) put the server in Operating Mode before the installer starts any activity, and put it back to normal mode once done. Internally, the installer calls SSI to set/reset the server's operating mode.

 

Some insights on server info

 

Server info: AR_SERVER_INFO_OPERATING_MODE, 463
Setting name: Operating-Mode
Possible values (integer):

  • OPERATING_MODE_NORMAL, 0 - normal mode, default, how the server runs today
  • OPERATING_MODE_UPGRADE_PRIMARY, 1 - upgrade mode for the primary server
  • OPERATING_MODE_UPGRADE_SECONDARY, 2 - upgrade mode for secondary servers

 

By default the server runs in normal mode (value 0). When an install/upgrade is happening on the primary server, the operating mode is set to 1 (this disables a few things, which we'll cover below). When an install/upgrade is happening on non-primary server(s), the operating mode is set to 2 (as of 9.1.02, 9.1.03, and 9.1.04 this does not disable any features, because non-primary server upgrades are all about replacement of the file system).

 

What kinds of install/upgrade set the server in Operating Mode?

  • A fresh installation does not use operating mode
  • A DB-only (accelerated) upgrade does not use operating mode
  • Only normal upgrades use this feature.

 

What features get disabled when the server is in Operating Mode?

 

The following operations are disabled on the current server when operating mode is set:

 

  1. Hierarchical groups ( processing of bulk compute because of group hierarchy change or form level property change )
  2. Object reservation
  3. Archiving
  4. DSO
  5. FTS Indexing
  6. Escalations
  7. Atrium Integrator
  8. Service Failover
  9. Server event recording
  10. SLM Collector
  11. Approval Server
  12. Reconciliation Engine
  13. Atrium Integration Engine
  14. CMDB
  15. Flashboards
  16. Business Rules Engine
  17. Assignment Engine
  18. E-Mail Engine
  19. The server will disable signals, just like AR_SERVER_INFO_DISABLE_ARSIGNALS does.
  20. The server only uses AR authentication, just like setting AR_SERVER_INFO_AUTH_CHAINING_MODE to AR_AUTH_CHAINING_MODE_DEFAULT does.
  21. The server turns off the global attachment size restriction, just like setting AR_SERVER_INFO_DB_MAX_ATTACH_SIZE to 0 does. (A customer may have set the maximum attachment config value to restrict the size of attachments to something like 10 MB. The apps installers need to import some DVF plugins that may be larger than that, so we temporarily turn off the restriction when in upgrade mode.)
  22. New in 9.1.03 - The server removes any server side limit on the maximum number of entries returned in a query, just like setting AR_SERVER_INFO_MAX_ENTRIES (configuration file item Max-Entries-Per-Query) to 0 does.
  23. New in 9.1.03 - The server removes any server side limit on the maximum number of menu items returned in a query, just like setting configuration file item Max-Entries-Per-Search-Menu to 0 does
  24. New in 9.1.03 - The server removes any attachment security validation, just like if there were no inclusion or exclusion lists specified and no validation plug-in specified.

 

Do the configuration parameters get changed when the server goes into Operating Mode?

 

No. The AR Server takes care of disabling features internally; for example, if upgrade mode wants to disable escalations, it does not set Disable-Escalation: T/F in ar.conf. The AR Server turns the feature off internally.

 

What modifications are made to the AR System Server Group Operation Ranking form?

 

If the server is in a server group, the server will change entries in the "AR System Server Group Operation Ranking" form to change the servers' behaviour.

For example, suppose there are two servers (a primary server and a secondary server).

 

The table below explains what happens when the primary server is set to upgrade mode and then reset to normal mode.

 

Servers           Operation                   Rank   Rank (upgrade mode set)   Rank (upgrade mode reset)
Primary Server    Administration              1      1                         1
                  FTS                         1      1                         1
                  Archive                     1      null                      1
                  Assignment Engine           1      null                      1
                  Atrium Integration Engine   1      null                      1
                  Atrium Integrator           1      null                      1
                  Business Rule Engine        1      null                      1
                  CMDB                        1      null                      1
                  DSO                         1      null                      1
                  Escalation                  1      null                      1
                  Approval Server             1      null                      1
Secondary Server  Administration              2      null                      2
                  FTS                         2      2                         2
                  Archive                     2      2                         2
                  Assignment Engine           2      2                         2
                  Atrium Integration Engine   2      2                         2
                  Atrium Integrator           2      2                         2
                  Business Rule Engine        2      2                         2
                  CMDB                        2      2                         2
                  DSO                         2      2                         2
                  Escalation                  2      2                         2
                  Approval Server             2      2                         2

 

So, from the above table: the primary server's Administration ranking is not set to null (or empty), while the non-primary servers' Administration rankings are set to null (or empty). This retains administration rights with the primary server being upgraded, so administration rights do not fail over to a non-primary server (which was previously ranked 2). This helps during ZDT upgrades, where secondary servers are up and running.

 

Where does operating mode take the backup of the ranking information?

 

In 9.1.02 & 9.1.03, the backup of all ranking information was stored in ar.conf.

From 9.1.04 onwards, a new field called "Operation backup" on AR System Server Group Operation Ranking holds the backup.

 

What changes were made to resetting operating mode in the 9.1.04 upgrade installers?

 

  • Pre 9.1.04 (i.e. 9.1.02 and 9.1.03): Individual installers like AR System used to set upgrade mode before the start of the installation and reset it once the installation was done.
  • In 9.1.04: The AR upgrade installer sets the server in upgrade mode, and before ending installation the AR installer checks whether the CMDB component exists. If CMDB doesn't exist, the AR installer itself sends an SSI call to reset Operating-Mode to 0. If CMDB is present, the AR installer doesn't reset Operating-Mode.
  • In 9.1.04: CMDB upgrade installer: before ending installation, the CMDB installer checks whether the AI component exists. If AI doesn't exist, the CMDB installer itself sends an SSI call to reset Operating-Mode to 0. If AI is present, the CMDB installer doesn't reset Operating-Mode.
  • The AI upgrade installer always resets Operating-Mode to 0 on completion.
  • Apps installers do it individually.

Does the installer reset operating mode if the installation fails?

 

Yes. In all cases, whether the installation succeeds or fails, the installer resets the operating mode. There is still a chance the installer fails to reset it; in that case you have to reset it back to normal mode using SSI.

 

Hope this helps you learn a few things related to upgrade mode.


BMC is excited to announce general availability of new Remedy releases as part of our Fall 2017 release cycle:

  • Remedy 9.1.04  (incl. Remedy AR System, CMDB, ITSM applications, Smart Reporting, Remedy Single Sign-on)
  • Remedy with Smart IT 2.0.00
  • BMC Multi-Cloud Service Management 17.11

 

Here is an excerpt of the platform-specific improvements.

 

With Remedy platform version 9.1.04, BMC delivers a rich set of platform-related improvements that help Remedy on-premises customers reduce the cost of operations and administration for their Remedy environment.

 

Significant improvements to the Zero Downtime Upgrade capability for the Remedy Platform

9.1.04 delivers significant improvements to the Zero Downtime Upgrade capability for the Remedy Platform: Several manual steps of the process have been automated. If, for some reason, the platform upgrade fails, the platform components and the file system are rolled back to the earlier version. All these enhancements allow customers to safely perform in-place upgrades of the Remedy platform without impact on the overall Remedy ITSM service. This recorded Connect with Remedy webinar session about Zero-Downtime Upgrades provides additional insight into the approach.

 

Efficient Patching/Hot-fix of Remedy with Deployment Manager

Starting with version 9.1.04, customers can use the Remedy Deployment Application to easily deploy new Remedy platform patches and hotfixes into their Remedy environment, including new binaries. Remedy administrators no longer have to run patch installers on each server of a server group across multiple environments (Development, QA, and Production) to deploy new binaries. Platform patches are now delivered as deployable packages. When a Remedy administrator deploys such a package on the primary server in a server group, the changes / new binaries provided through the patch or hotfix are applied on all the secondary servers automatically. Please note that there are also a number of other enhancements in the Remedy Deployment Application v9.1.04.

 

Centrally enable logging in a Remedy server group environment

Last but not least, Remedy 9.1.04 also makes it easier for Remedy administrators to centrally enable logging in a Remedy server group environment, reduces CPU resource usage on the mid-tier server by 50%, and informs users of the mid-tier UI about an upcoming session timeout.

 

Additional Utilities - Remedy Server Group Administration Console

In support of the new Remedy 9.1.04 release, the Remedy product team also released a number of value-add utilities on BMC Communities. These are unsupported at this time, but BMC will evaluate, based on customer feedback, whether to include them in the standard product at a later time.

 

Some references to additional information about this release:

 

 

Also check this blog by Peter Adams for details of other enhancements as part of Remedy 9.1.04 release - Remedy Fall 2017 Release (9.1.04): BMC Continues to Innovate ITSM with New CMDB User Experience, New Cognitive Capabilities in Remedy and New Multi-Cloud Service Management

 

Thank you for your continued support of the Remedy family of products and we look forward to updating you on more innovative product enhancements in the coming months.

 

Enjoy the year end and have a great start into 2018.

 

Rahul Vedak

Remedy Product Manager


The Remedy product management team is looking forward to giving attendees of the T3: Service Management and Automation Conference an opportunity to join onsite customer advisory sessions about specific topics, where you can give direct input into the planning process for the Remedy platform and the ITSM applications.

 

As room capacity at the conference site is limited, we're trying to assess which topics are of most interest to our customers. We'll use this feedback to select which advisory sessions we organize at the event. If the time at the conference is not sufficient to come to a conclusion, we may continue the discussion after the conference with virtual sessions.

 

Please let us know which topics you are interested in providing feedback on by filling out a 2-minute survey at https://www.surveymonkey.com/r/GW39PVP

 

Thanks, Peter


BMC is proud to be the flagship sponsor for the upcoming T3: Service Management & Automation Conference, taking place during the week of Nov 6, 2017 at the Palms Casino & Resort in Las Vegas. T3: Service Management & Automation Conference - November 06 - 10, 2017 - Las Vegas, Nevada

 

If you did not make it to BMC Exchange New York City, not to worry: T3 will cover all the DSM topics that were shared there, in addition to 140+ tech sessions including hands-on labs!

 

This year’s conference is being put on by T3 to provide an interactive, educational experience for attendees looking to gain mindshare and hands-on views of the latest best-of-breed technologies. The conference is focused on giving you an in-depth, technical view with valuable training to help you succeed in your role and accomplish your business needs!

 

As the Flagship Sponsor, BMC will have a strong showing at the conference with VIPs, engineers, support technicians, product managers, marketing/sales representatives, and more on site.

•   Come see what is new in Remedy, BMC Innovation Suite, BMC Digital Workplace, BMC Discovery, BMC Remedyforce, BMC Track-It, BMC Client Management, BMC FootPrints, TrueSight & more, including products from vendors such as Numerify, RRR, Mobile Reach, RightStar, Fusion, Partner IT, Scapa Technologies, CyberTrain & RMI Solutions.

•   Come learn more about your products, as well as the latest trends in tools, training and technology, in a variety of breakout sessions including many hands-on labs.

•   Come listen to our awesome keynote speakers at the opening and general ceremonies.

 

There are lots of opportunities to network with BMC and non-BMC personnel who focus on a variety of products, as well as spend an Evening with the Experts to discuss any questions you may have about the Remedy platform. In addition, there'll be plenty of opportunities to talk to the Remedy product management team about the needs of your organization; see the separate blog post about customer advisory meetings. If you are interested in a 1:1 meeting with the product management team, please work with your BMC or partner sales contact to arrange it.

 

 

Register for the T3 Conference at: http://tooltechtrain.com/registrations.html


We're collecting information about the use of Crystal Reports with Remedy.

 

If your organization currently uses Crystal Reports, we'd like to ask you to fill out this very brief survey:

Crystal Reports and Remedy Survey

 

Thanks very much in advance,

 

Peter Adams


Just sharing one tip with ARS: a known solution to a known problem.

If you have an AR System server running on Windows and its service stops working after a Java update, you need to make changes (to point at the new JRE) in the locations below.

 

  • Update the Java/JVM path at the following registry location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <host name>\Parameters

 

  • Edit <AR Server Install Dir>\arserver.config and update the JVM search path to point at the upgraded JRE version:

 

     # JVM search paths (number indicates search order)

     jvm.search.path.1=FILE_SYSTEM=C:\Program Files\Java\jre1.8.0_141\bin

 

  • Edit <AR Server Install Dir>\Conf\armonitor.cfg and update all hardcoded Java paths.
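If this happens often (for example, with auto-updating Java), the arserver.config edit can be scripted. Here's a minimal Python sketch of the idea; the file layout is the one shown above, but the function name and paths are illustrative, so adjust them for your environment:

```python
import re

def update_jvm_search_path(config_text, new_jre_bin):
    """Point the first JVM search path entry at the new JRE bin directory."""
    return re.sub(
        r"^(jvm\.search\.path\.1=FILE_SYSTEM=).*$",
        lambda m: m.group(1) + new_jre_bin,
        config_text,
        flags=re.MULTILINE,
    )

cfg = ("# JVM search paths (number indicates search order)\n"
       "jvm.search.path.1=FILE_SYSTEM=C:\\Program Files\\Java\\jre1.8.0_131\\bin\n")
print(update_jvm_search_path(cfg, r"C:\Program Files\Java\jre1.8.0_141\bin"))
```

The same approach could be extended to rewrite the hardcoded Java paths in armonitor.cfg.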

In this third post on encryption we're going to show how to enable SSL between an AR System server and its Oracle database.  In previous posts we've seen how to use Oracle's native encryption and SSL with Microsoft SQL Server. The process we're going to follow is similar to the latter.

 

Again, the high level steps are:

 

  • obtain a certificate
  • configure the database to use the certificate
  • import the certificate on the client
  • configure the AR System server to use SSL

 

Oracle databases store their certificates in a set of files called a wallet so, unless you have an existing wallet, we need to create one.  As with most of these steps there are multiple ways to do this.  We could use the Wallet Manager GUI but we're going to stick to the command line and use the orapki utility:

 

Create a new wallet with the auto-login property set:

 

c:\app>orapki wallet create -wallet c:\app\db_wallet -auto_login

 

We're prompted to enter a password to secure the wallet and I've used password1.  The directory listing shows the files created in the db_wallet directory which will be created if it does not already exist.

 

We now have an empty wallet to which we need to add a certificate.  As this is a test we'll create a self-signed certificate and add it with one command:

 

c:\app>orapki wallet add -wallet c:\app\db_wallet -dn "cn=clm-pun-013056,cn=bmc,cn=com" -keysize 2048 -validity 365 -pwd password1 -self_signed

c:\app>orapki wallet display -wallet c:\app\db_wallet -pwd password1

 

We've used the host name for the -dn option, specified a key length of 2048 bits and validity of a year.  The second command lists the contents of the wallet so that we can confirm that our certificate has been added.

 

Note that both user and trusted certificates called CN=clm-pun-013056,CN=bmc,CN=com were created and it is the latter that we will export so that it can be used on the AR System server.

 

Export the certificate to a file called db_CA.cert:

 

c:\app>orapki wallet export -wallet c:\app\db_wallet -dn "cn=clm-pun-013056,cn=bmc,cn=com" -cert c:\app\db_wallet\db_CA.cert -pwd password1

 

We've prepared the certificate but we still need to configure Oracle to use it.  To do this we need to edit two files in the ORACLE_HOME\network\admin directory, sqlnet.ora and listener.ora, and add these lines to both of them:

 

WALLET_LOCATION =

  (SOURCE =

    (METHOD = FILE)

    (METHOD_DATA =

      (DIRECTORY = C:\app\db_wallet)

    )

  )

SSL_CLIENT_AUTHENTICATION = FALSE

 

This specifies the location of the wallet and sets an option to show we're just using encryption, not authentication.

 

We also need to configure the listener to add a port that the database will use for SSL connections.  In the LISTENER section of the listener.ora file we add:

 

    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = clm-pun-013056)(PORT = 2484)))

 

Note the protocol is TCPS and we've picked port 2484 which is commonly used.

 

Finally we need to restart the listener process so that it picks up the changes:

lsnr.PNG

 

That completes the database setup, the listener output shows we're ready to receive SSL connections on port 2484.

 

The next step is to copy the certificate that we exported earlier to the AR Server system and add it to the Java cacerts file so that the Oracle JDBC driver can use it.  These steps are similar to those we used for MS SQL.  The certificate file is called db_CA.cert and it has been copied to c:\temp.

 

Open a command prompt and cd to the jre\lib\security directory of the Java instance that the AR System server is using.  There should already be a cacerts file in this directory, this is the default certificate store used by Java, and we're going to add our certificate to it with the keytool command:

 

C:\Program Files\Java\jre1.8.0_121\lib\security>..\..\bin\keytool -importcert -file c:\temp\db_CA.cert -alias dbcert       -storepass changeit -noprompt -keystore cacerts

imp.PNG

 

We're almost done, all that is left is to configure the Remedy server to use SSL when connecting to the database.  A typical Remedy server configuration for an Oracle database includes these settings:

 

Db-Host-Name: clm-pun-013056

Db-Server-Port: 1521

Oracle-Service: orcl

 

On startup the server uses these to create a JDBC connection string using the format:

 

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Db-Host-Name)(PORT=Db-Server-Port))(CONNECT_DATA=(SERVICE_NAME=Oracle-Service)))

 

When using SSL the PROTOCOL setting needs to be changed from TCP to TCPS.  However, before 9.1 Service Pack 2, there was no way to modify this connection string to do this.  This release introduced a new configuration option called Oracle-JDBC-URL which can be used to provide the full connect string.  If this option is present it is used instead of the one derived from the settings above.  To configure our Remedy server we need to add this option with the appropriate values.  So, the new setting in our ar.cfg/ar.conf will be:

 

Oracle-JDBC-URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=clm-pun-013056)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=orcl)))

 

The original settings can be left in place as they are ignored when the new option is set.  Switching between SSL and plain text connections is simply a case of commenting out this new option.
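To see how the three legacy settings relate to the new option, here's a small Python sketch that assembles the connect descriptor in the format shown above (the helper function is hypothetical, not part of the product):

```python
def build_oracle_jdbc_url(host, port, service, protocol="tcp"):
    """Assemble the thin-driver connect descriptor described above."""
    return ("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL={p})"
            "(HOST={h})(PORT={pt}))(CONNECT_DATA=(SERVICE_NAME={s})))"
            ).format(p=protocol, h=host, pt=port, s=service)

# Plain-text descriptor derived from Db-Host-Name / Db-Server-Port / Oracle-Service:
print(build_oracle_jdbc_url("clm-pun-013056", 1521, "orcl"))
# SSL variant for the Oracle-JDBC-URL option:
print(build_oracle_jdbc_url("clm-pun-013056", 2484, "orcl", protocol="tcps"))
```

Switching between the two is then just a matter of the protocol and port arguments, which mirrors the comment-out-the-option approach described above.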

 

Restart the AR System server and we now have an encrypted connection between the server and the Oracle database.  To verify that this is the case we can use the tcpdump or Wireshark tools as detailed in the earlier posts.  Looking at the packets we'll see that the contents are all binary data and no plain text is present. 

 

 

Summary

 

We've now looked at three different ways to encrypt data as it is transferred between Remedy and the database.  In each case I've tried to cover the minimum steps required to enable this feature, each one offers many more configuration options, and you can find additional details in the links at the end of the articles.

 

I hope the information is useful and I welcome suggestions for other topics that would be of interest to the Remedy community.  Please use the comments section below or send me a message with ideas.

 

Further Reading

 

Trending in Support: Encrypting Data Between AR Servers and Oracle Databases

Trending in Support: Enabling SSL Encryption for AR to MS SQL Database Connections with Remedy 9.1 SP2 and Later

 

SSL With Oracle JDBC Thin Driver

Oracle Advanced Security Configuration - http://chadstechnoworks.com/wptech/db/oracle_advanced_security_p02.html

HOW TO: Setting up Encrypted Communications Channels in Oracle Database

 

Feedback and corrections are always welcome in the comments section below.

 

Mark Walters

 

Read more like this -  BMC Remedy Support Blogs

Matthias Minden

Disable IPv6

Posted by Matthias Minden Jun 1, 2017

We currently don't use IPv6 but discovered that the application (Java) still wants to use IPv6 even when it is disabled at the O/S level.  We edited the arconfig file on the server(s) by adding the following to the Java entries:

     <your java path> -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv6Addresses=false

 

Edited:

You can also add these settings to the Developer Studio and Data Import Tool .ini files!


This post shows how to use a new configuration option, added in AR System Server 9.1 Service Pack 2, to enable encryption of the data moving between a Remedy server and its Microsoft SQL Server database.  In an earlier post (Trending in Support: Encrypting Data Between AR Servers and Oracle Databases ) we saw how to enable Oracle's native encryption for these connections but, this time, we're going to be using SSL.  Microsoft have documentation on their website that describes how the feature is implemented.

 

Using SSL Encryption | Microsoft Docs

 

There are several steps necessary to prepare the environment before encryption can be enabled.  At a high level these are:

 

  • obtain a certificate
  • grant SQL Server access to the certificate
  • configure SQL to use the certificate
  • import the public certificate to the Java instance used by the AR System server
  • enable encryption on the AR System server

 

If you're configuring a production environment that requires this additional level of security you have probably already obtained an SSL certificate from one of the available commercial certification authorities.  However, for our tests, we're going to use a simple, self-signed certificate.  There are a number of different ways to generate these but, as we happen to have IIS installed on our SQL Server machine, we'll use that.  Simply start the IIS Manager, go to Server Certificates and right click Create Self-Signed Certificate:

 

 

Enter a name and choose a Personal certificate.

 

Now that we have a certificate we need to make it available to our SQL Server instance.  Start by finding the account name used to run SQL. One way to do this is via the SQL Server Configuration Manager, check the Properties for the selected instance:

 

 

Note the Account Name and then launch the MMC management console and add the Certificates snap-in for a Computer Account:

 

 

  • in MMC, go to Certificates (Local Computer) > Personal > Certificates

  • the certificate should be listed there (you may have to import it if you did not use IIS to create it)

  • right click > All Tasks > Manage Private Keys

  • add the service account for your instance of SQL Server

  • give the service account Read permissions

 

While we're here we also need to export the certificate so that it can be imported on the AR System server machine later:

 

  • right click on the certificate > All Tasks > Export > Next
  • choose No, do not export the private key > Next
  • choose DER encoded binary X.509 (.CER) > Next
  • enter a file name (e.g. export.cer) noting where it is saved

 

The final step on the SQL Server machine is to configure SQL to use the certificate with the SQL Server Configuration Manager again:

 

 

  • start SQL Server Configuration Manager
  • go to SQL Server Network configuration
  • select your instance
  • right click > Properties > Certificate tab
  • choose the certificate from the list
  • restart the SQL service

 

We're finished with the SQL Server machine, the rest of the work is done on the AR System server host.

 

Start by copying the exported certificate file (which we called export.cer) created above to the system.  Then, open a command prompt and cd to the jre\lib\security directory of the Java instance that you are using to run your AR System server.

 

There should already be a cacerts file in this directory, this is a default certificate store used by Java, and we're going to add our certificate to it with the keytool command.

 

 

Using the alias and default store password listed below, the commands look like this (run from the jre\lib\security directory, as before):

..\..\bin\keytool -importcert -file c:\temp\export.cer -alias arkey -storepass changeit -noprompt -keystore cacerts

..\..\bin\keytool -list -alias arkey -storepass changeit -keystore cacerts

With these commands we:

 

  • imported the certificate with an alias of arkey using the default store password of changeit
  • listed the certificate to verify it was imported

 

The final step is to enable the AR System server to use the certificate and encrypt traffic between itself and the database.  To do this we need to make use of a new configuration parameter that was added in 9.1 Service Pack 2 called Db-Custom-Conn-Props.  This allows us to pass one or more key=value pairs to the database driver as a semicolon-separated list.  For example:

 

Db-Custom-Conn-Props: key1=value1;key2=value2

 

This option was added in 9.1 SP2 to provide a way for administrators to specify the additional configuration options required for the JDBC driver when enabling features such as encryption.  We'll make use of it again when we look at SSL for Oracle databases in a future post.
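For illustration only, this is roughly how such a semicolon-separated list breaks down into individual driver properties (the parsing sketch below is mine, not the AR System server's actual code):

```python
def parse_conn_props(value):
    """Split a 'key1=value1;key2=value2' list into a dict of driver properties."""
    props = {}
    for pair in value.split(";"):
        pair = pair.strip()
        if not pair:
            continue
        key, _, val = pair.partition("=")
        props[key.strip()] = val.strip()
    return props

print(parse_conn_props("key1=value1;key2=value2"))  # → {'key1': 'value1', 'key2': 'value2'}
```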

 

Before we move on let's confirm the current state of the data flowing to and from the database.  In the earlier Oracle post we used tcpdump to snoop on the network traffic.  We're going to do the same here but with the graphical Wireshark utility.  This next picture shows some of the data packets coming from the database and we can see that there is plain text legible in their contents:

 

 

The above is some of the data being returned when selecting the User form record for the sample user Allen.  The full name and email address are there, along with the start of the list of groups that Allen is a member of.

 

To enable encryption we need to stop the AR System service, add this line to our ar.cfg file:

 

Db-Custom-Conn-Props: encrypt=true

 

and then restart the service.  We could also have used the Centralised Configuration forms to add this to our server before restarting.

 

Now, when we look at the Wireshark captured data, we can immediately see a difference:

 

 

Note that the Info column is showing TLS traffic and the packet payload data is no longer in plain text - an encrypted connection!

 

We've deliberately glossed over some of the complexities that may be required in non-test environments such as:

 

  • using commercial SSL certificates
  • using alternative Java keystores
  • additional Db-Custom-Conn-Props options that may be need for different SSL configurations such as different keystore locations and passwords

 

but I hope that this shows that, with 9.1 Service Pack 2 and beyond, AR System server to database encryption is now supported when using Microsoft SQL databases.

 

 

Credits

 

Thanks to The Data Specialist blog post for details of configuring SQL Server with a self-signed certificate.

Using a self-signed SSL certificate with SQL Server | The Data Specialist

 

Further Reading

 

Using SSL Encryption | Microsoft Docs

Wireshark · Go Deep.

 

 

Feedback and corrections are always welcome in the comments section below and, if you have a suggestion for a technical post related to Remedy AR System, please drop me a message via the Communities.

 

Mark Walters

 

Read more like this -  BMC Remedy Support Blogs


Regular news coverage of data security breaches has made organisations increasingly aware of the importance of securing the data they own and manage.  As a result, one question that we're beginning to see more often in support is "How do I encrypt the data travelling between my AR server and database?".  The two databases supported by Remedy 9.x servers are Microsoft SQL and Oracle and both have options to provide this type of encryption.  This post covers one way to do this with Oracle; a later post will look at an alternative for this database and Microsoft SQL.

 

The architecture of a basic AR System installation looks something like this;

 

p1.png

Data has multiple steps to take as it travels back and forth between clients and the storage medium used by the database server.  There are options available to encrypt that data during all of the steps but the one we’re focusing on in this post is highlighted in red on the diagram, the step between the AR System Server and an Oracle database.  Often these two servers are on separate machines so the data has to travel over a network and, by default, this transfer takes place in plain text.

 

To confirm that the data is being passed this way, and that after encryption has been enabled it is no longer in plain text, we’re using a test environment with a version 9.1 AR System Server running on Linux connecting to an Oracle 11g database running on Windows 2012.  We will monitor the network traffic travelling between the AR and database servers to see what it looks like before and after the changes to turn on encryption.

 

Logging on to the Linux system we can use one of the many tools available to capture and display network traffic - in this case it’s tcpdump.  The command below will display the traffic flowing between the AR and Oracle servers in this environment.

p2.png

 

To generate some traffic between the systems we use a User Tool client and start looking at records in the User form.  As different records are selected the tcpdump output shows the data being retrieved from the database.

p3.png

 

As we can see, there is information in the network traffic that can be read.  The screen shot above shows the data for the user Allen, including the full name, email address and a list of group IDs/groups.

 

If the tcpdump command is left running, other legible data will be seen.  SQL statements, for example;

p4.png

 

Oracle offers both native and SSL options for encrypting the data between a client and the database server.  Details are available in many places on the web; one such example is ORACLE-BASE - Native Network Encryption for Database Connections.

 

We’re going to use the native option as it does not require any changes to the client, simply some configuration settings on the database server.  The process for enabling this type of encryption is documented here - Configuring Network Data Encryption and Integrity for Oracle Servers and Clients.

 

One way to make the changes is to edit the Oracle sqlnet.ora file using a text editor but we’re going to use the Net Manager utility that is installed as part of the database software.

 

On the database server system launch the Net Manager tool and click on Profile in the tree window.  Select Oracle Advanced Security in the drop down menu and then the Encryption tab.

p5.png

 

This is where the various encryption options are selected.  They are all covered in the link above and for this test we use these settings;

 

Encryption Type:          requested

Encryption Seed:          secretword

Selected Methods:         AES256

 

p6.png

 

Select Save Network Configuration from the File menu and quit Net Manager.

 

We have now enabled encryption on the database server.  The options we have set request that encryption be enabled if the client supports it and we have specified a seed and algorithm to be used.
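For reference, Net Manager saves these choices into sqlnet.ora. Based on the Oracle documentation, the resulting entries should look roughly like this; verify against your own file:

```
SQLNET.ENCRYPTION_SERVER = requested
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_SEED = secretword
```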

 

If we now go back and repeat the tcpdump test above what do we see?  When we select another user record, Bob's for example;

p7.png

 

That doesn't look good - the data is still visible in plain text.  This is because the encryption configuration change is only picked up when a client first connects to the database, so a restart of the AR System Server is required.  Once this is done the test is repeated and the network traffic looks a little different;

p8.png

 

The data is no longer in plain text – it is encrypted.  A positive step forward in an increasingly security conscious world!

 

I’m not sure how widely known this feature of Oracle is but, as we have shown, with a simple change on the database server and a restart of AR, it is possible to encrypt the traffic between these systems.  No changes are necessary on the AR System Server and this should work with any version of AR as it is a feature of the Oracle database and client software. 

 

In a future post I’ll look at how AR to database encryption can be enabled using SSL with both Oracle and Microsoft SQL.

 

Credits

 

Many thanks to Martin Rosenbauer for his feedback that led to this article.

 

 

Further Reading

 

A tcpdump Tutorial and Primer with Examples

 

 

Feedback and corrections are always welcome in the comments section below and, if you have a suggestion for a technical post related to Remedy AR System, please drop me a message via the Communities.

 

Mark Walters

 

Read more like this -  BMC Remedy Support Blogs


When I talk about the advantages of using the REST API, I usually talk about REST-based web services and compare this to SOAP-based web services. And there are a lot of advantages to using the REST API: there’s no need to define a web service since it’s always there, the interaction with the interface is a lot easier since the requests are a lot smaller, but most of all it’s intuitive. I often find that SOAP is a complex mechanism which can be challenging to use.

 

But that’s only a problem if you’re planning to handle the communication yourself. Say you’re building an application in Java or need to get data to a different system that runs on PHP; in that case the REST-based web service is the obvious choice, since all you’re doing is sending and receiving simple HTTP requests and responses.
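As a concrete illustration of how small those HTTP requests are, here's a hedged Python sketch that builds (but doesn't send) a token request. The /api/jwt/login path follows the AR System REST API convention; the server name and credentials are placeholders:

```python
import urllib.parse
import urllib.request

# Form-encoded credentials, as the JWT login endpoint expects a POST body.
body = urllib.parse.urlencode({"username": "Demo", "password": "secret"}).encode()

req = urllib.request.Request(
    "https://remedy.example.com/api/jwt/login",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
print(req.get_method(), req.full_url)
```

Sending it is one more line (urllib.request.urlopen(req)), and the response body is the token you'd pass on subsequent calls.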

 

ws soap.png

So is there any need for SOAP at all? I’d argue there is. A SOAP-based web service uses the WSDL to describe in detail how everything works. That includes how the request should be formatted, what operations are on offer and what the response should look like. This means you know everything prior to starting your request. These are usually considered added complexities, but if you use an application that handles all of this for you, this doesn’t really matter that much. Because that’s where I see the added value of SOAP-based web services: if you use an application that can deal with the information in the WSDL and interpret it for you it’s actually easy to use.

 

Doing the same thing with REST-based clients is a lot trickier. A REST client acts like a generic client and enters a REST service with little knowledge of the API, except for the entry point. An application might find it difficult to predict all those details that SOAP would store in the WSDL.

 

One such client is Remedy. When we consume a web service, we act as a SOAP client. During the design phase we read the WSDL file and allow the developer to simply map the fields to the XML elements. The system takes it from there. You don’t have to be concerned with creating SOAP requests, reading the WSDL file, etc.

 

I think SOAP-based web services work particularly well in a setting where the application can easily interpret the web service. If you have that available and there’s no need to do any coding, SOAP is probably a better choice. To learn more, attend my session, Session 233: Getting the Most out of Your Web Services Integration, at BMC Engage where we’ll be looking at SOAP-based services and REST-based web services.

 

Hope to see you there,

 

Justin

 

Don't forget to follow me on Twitter!


BMC Remedy AR System 9.1: Basic Development

For developers! *BMC Remedy AR System 9.1: Basic Development* training offered in July, August and September!!

Gain the knowledge and skills to take full advantage of all that Remedy AR System 9.1 has to offer.

 

Register now for one of the following instructor-led classes (includes hands-on lab and ebook):

 

If you have multiple students, contact us to discuss hosting a private class just for your organization.

 

Details here, or contact Tom Hogan for EMEA training and Brian Hall for AMER training.

 

Course Overview

This course combines classroom instruction with laboratory exercises to guide students through basic development using Developer Studio. They will leave the course with enough development experience to take a course on how to customize ITSM applications. The lab exercises contain scenarios that simulate real world requirements. By the end of the course, the student will have built deployable applications, forms, and workflow.

 

Course Objectives

» Create custom objects using Developer Studio

» Set object permissions
» Explore form definitions
» Create active links, filters, and escalations

» Create active link and filter guides
» Explore tables and workflow related to tables

» Understand how to deploy an application

 

 

