
Remedy AR System


Welcome to November's AR Server blog post. This month we are discussing D2P.

 

 

What is D2P? It stands for Development to Production (Dev to Prod). D2P operations are performed through the Deployment Management application, which has been included with AR Server since version 9.1.03.

 

The Deployment Application includes the Deployment Management Console, which is used for all deployment functions: Import, Export, Create, Deploy, and Rollback. Looking at more specific examples, using this console and D2P packages you can perform the following activities:

  • Apply BMC hotfixes and patches
  • Install applications such as ITSM, SLM, SRM, and Smart IT
  • Migrate overlays between environments (for example, after performing reconciliation)
  • Create and deploy your own custom packages to migrate data and definitions

 

For the ITSM, SRM, SLM, and Smart IT applications, D2P packages are used as the installer from version 18.08 (9.1.06) and higher.

 

To upgrade these applications, they must first be at version 18.05 (9.1.05); after that, you can use D2P packages to upgrade to a higher version.

 

For Mid Tier, AR System, and Atrium, D2P packages are used for patching, and this is available from 18.02 (9.1.04 002) onwards.

 

From version 18.02 onward, BMC also ships hotfixes as D2P packages.

 

Another main benefit of this console is that you can create your own custom packages containing workflow, object definitions, data, or Service Request Management objects built in a development environment, and then promote them across environments, such as QA or production.

 

Using this console, you can patch, migrate data, or upgrade all the servers in a server group in a single operation. You import the D2P package only once and can then deploy its contents across all servers in the server group environment.

 

From Remedy version 19.02 onwards, you can perform Simultaneous Deployment or package Rollback.

 

The following diagram depicts the functionality of the BMC Remedy Deployment Application utility:

 

 

Reference information

 


 

Documentation

 

 

  • Deploy Packages using Command Line Interface

     https://docs.bmc.com/docs/ars1808/using-a-command-line-interface-to-manage-a-package-821049028.html#Usingacommand-lineinterfacetomanageapackage-UsingCLI

 

  • Different Status of Binary Payload

      https://docs.bmc.com/docs/ars1808/viewing-the-status-of-a-binary-payload-820498092.html

 

 


 

 


At a recent customer site (the Oracle version was 18c i.e. 12.2.0.2) we noticed that if Remedy was being run in Case Insensitive mode certain use cases would not return expected rows. Case in point is the "Site" drop down menu in the CTM:People form that returned the expected 72 rows (for this client) when Remedy was in Case Sensitive mode but not when in Case Insensitive mode.

 

One of the SQL statements associated with the population of the field is shown below (there were other SQLs where a value was entered for the Company field, the Region field, or the Site Group field - all of them failed to return rows).

 

          SELECT DISTINCT T538.C260000001 FROM T538 WHERE ((((T538.C1000000001 = ' ') OR (' ' = ' ')) AND ((T538.C200000012 = ' ') OR (' ' = ' '))

          AND ((T538.C200000007 = ' ') OR (' ' = ' ')) AND (T538.C1000000073 = 0) AND (T538.C1000000081 = 1) AND (T538.C7 = 1)))

          AND ( ROWNUM <= 20101 ) ORDER BY T538.C260000001 ASC

 

We eventually realized that we may be encountering Oracle Bug 27416997 (check Metalink Doc ID 2390584.1).

 

The Bug has been resolved in Oracle 19.1. For the customer we set one of the parameters mentioned in the Doc (shown below) and the use case started working instantaneously.

       alter system set "_optimizer_generate_transitive_pred"=FALSE scope=both
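If you want to confirm the value of this hidden parameter before and after the change, a query along the following lines can be run as SYS. This is a generic sketch for inspecting hidden Oracle parameters, not anything Remedy specific:

       -- run as SYS: hidden ("underscore") parameters do not appear in v$parameter
       SELECT a.ksppinm  AS parameter_name,
              b.ksppstvl AS current_value
       FROM   x$ksppi a, x$ksppcv b
       WHERE  a.indx = b.indx
       AND    a.ksppinm = '_optimizer_generate_transitive_pred';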


In a recent engagement with a Remedy customer we looked into their performance issue and discovered that the slowdown in the database was stemming from "enq:Index contention" waits.

 

Drilling down further into the waits we found that they were on the "S" tables corresponding to the Remedy forms "SMT: Social_FollowConfig" and "HPD: WorkLog". These "S" tables were introduced in Remedy's newest implementation of RLS, which was CA (Controlled Availability, limited to a few chosen customers) in version 1902 and GA (General Availability) in version 1908.

 

The SQL statements that were waiting for an ITL (Interested Transaction List - at Oracle's block level) slot to open up were performing INSERTS into the two tables. The number of initial ITL slots when an index is created is specified by its INITRANS parameter. That parameter defaults to 1 for tables and 2 for indexes.

 

The default value of 2 for an index means that TWO transactions can each take one slot and perform an INSERT/UPDATE/DELETE operation on rows in the block.

 

If a third or fourth transaction comes along and needs to work on rows in the block and both the slots are taken Oracle can allocate additional slots, up to the table's/index's MAXTRANS parameter, provided there is space available in the block.

 

For the customer in question we had them increase the value of INITRANS for the indexes that were experiencing concurrency waits. The increase was from 2 to 10 in one case and 15 in another index.

 

The change can be accomplished by executing the following SQL command:

SQL> alter index <index name here> INITRANS <new value here>
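As a hedged illustration, the contended indexes can be identified from Oracle's segment statistics before deciding which ones need a higher INITRANS (this assumes STATISTICS_LEVEL is at least TYPICAL; the index names returned will of course be specific to your environment):

-- run as SYS or another user with access to the v$ views
SELECT owner, object_name, object_type, value AS itl_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    value > 0
ORDER  BY value DESC;

-- ALTER INDEX ... INITRANS only applies to newly formatted blocks; if existing
-- blocks must pick up the new value, an ALTER INDEX <index name> REBUILD
-- INITRANS <n> is needed (check with your DBA before rebuilding in production)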



BMC Software has identified an unauthenticated Remote Code Execution (RCE) vulnerability in Remedy Mid Tier.

Mid Tier versions 9.1, 18.05, 18.08, and 19.02, including their service packs and patches, are affected by this vulnerability.

For more information about this issue and the resolution, see the following links:

 

Thanks to Raphaël Arrouas and Stephane Grundschober for responsibly disclosing this vulnerability to BMC.

 

Best regards,

 

John Weigand
R&D Program Manager
BMC Software


Introduction

 

In my last blog post I wrote about how to use a Tomcat container and war files for mid-tier testing.  In this one I'd like to show you how the combination of a database container and the Remedy silent install process can be used to speed up test server deployment.  Other advantages of this approach include

 

  • being able to consistently reproduce a system in the same state.
  • less concern about disposing of a system after testing as it is easily recreated.
  • use of database backups provides options for quickly restoring test systems to known good states.

 

The option to run Remedy with a flat file database went away many moons ago (bonus points if you can name the last version to offer this) and the current versions require either MS-SQL or Oracle, both of which are large and complex pieces of software.  The effort required to download, install, and configure these databases can add significantly to the time taken to create a test environment.  Wouldn't it be nice to be able to run a few commands and have a new database instance up and running, ready for use?  Containers to the rescue!

 

Both MS-SQL and Oracle are available as containers which means that a lot of the work needed to get them set up has already been done.  The Oracle container is more complex to manage, as well as being larger, so this article will focus on using MS-SQL.

 

Requirements

 

If you've read any of my previous articles you won't be surprised to find that we're going to be using a Linux system for our tests, specifically a CentOS 7 virtual machine.  At this point those of you that are familiar with the Remedy compatibility documents may be wondering about the combination of MS SQL and Linux.  There are two things to consider, firstly the use of Linux for the database platform and, secondly, whether a Linux based AR Server can talk to an MS SQL database.

 

Remedy has always been very platform agnostic with regards to the OS used to host the database.  If it looks like an MS SQL server, runs like an MS SQL server and squeaks like an MS SQL server, there's a very good chance that Remedy will run just fine.   On the second point, one of the consequences of the move to Java for the AR platform in version 9.0, was that the AR Server started using JDBC in place of the native database drivers of earlier releases.  This means that it is possible BUT NOT SUPPORTED for an AR Server running on Linux to use an MS SQL database.  Please note the highlighted comments in the previous sentence!  Yes, a Linux AR Server will work with MS SQL, but this should only be used for test systems as, at the time of writing, BMC DO NOT support this combination and you use it at your own risk.

 

OS Update and Docker Install

 

Start by making sure that the operating system is up-to-date, rebooting if many packages or the kernel are refreshed.

 

# yum update

 

Create working directories to store our files.  If you don't use /docker you will need to substitute your choice in some of the later commands.

 

# mkdir -p /docker/mssql

# cd /docker

 

If you haven't already installed Docker use these steps to add the software repository, install, and start the Docker engine.

 

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# yum -y install docker-ce docker-ce-cli containerd.io

# systemctl start docker

 

Confirm that Docker is running with:

 

# docker version

Client:

Version: 18.03.1-ce

API version: 1.37

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:20:16 2018

OS/Arch: linux/amd64

Experimental: false

Orchestrator: swarm

 

We're also going to use a tool called docker-compose to help manage the database container configuration.  Note the version used in the command below may not be the latest, check the documentation if you want the most recent.

 

# curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-$(uname -s)-$(uname -m) -o /bin/docker-compose

# chmod a+x /bin/docker-compose

# docker-compose -version

docker-compose version 1.20.1, build 5d8c71b

 

MS-SQL Container

 

Microsoft publish container images for SQL 2017 on Linux in a public repository and there are some additional command line tools for MS-SQL that we will use.


# curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/msprod.repo

# ACCEPT_EULA=Y yum -y -q install mssql-tools unixODBC-devel

 

To help make the management of the container a little easier we're going to use docker-compose.  This allows us to put all of the configuration options in a file rather than having to remember them each time we want to run a container.  Here's an example, copy and save this as a file called mssql2017.yml in the /docker directory:

 

version: '3'

services:

  mssql2017:

    image: mcr.microsoft.com/mssql/server:2017-latest

    container_name: mssql2017

    hostname: mssql2017

    ports:

      - 1433:1433

    volumes:

      - /docker/mssql:/var/opt/mssql

    environment:

      - ACCEPT_EULA=Y

      - MSSQL_SA_PASSWORD=P@ssw0rd

 

The various options we've used are:

 

  • image: mcr.microsoft.com/mssql/server:2017-latest
    The image used to create the container.
  • container_name: mssql2017
    A friendly name for the container.
  • hostname: mssql2017
    Sets the hostname rather than using one that is automatically generated.  This will help us when we come to install AR as this is also used to name the database.
  • ports: - 1433:1433
    Maps container ports to the outside world.
  • volumes: - /docker/mssql:/var/opt/mssql
    Creates a shared volume to allow the database files to be stored on the docker host file system rather than inside the container.
  • environment: - ACCEPT_EULA=Y, - MSSQL_SA_PASSWORD=P@ssw0rd
    Passes environment variables used to set up the database.  Please note these requirements for the password from the Microsoft documentation:
    The password should follow the SQL Server default password policy, otherwise the container can not setup SQL server and will stop working. By default, the password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols.

 

All that is now required to start a database container is a single command:

 

# docker-compose -f mssql2017.yml up -d

Creating mssql2017 ... done

 

We can check that the container is running using the docker ps command and then query the database using the sqlcmd tool we installed earlier:

 

# docker ps

CONTAINER ID IMAGE                                      COMMAND                    CREATED     STATUS       PORTS                  NAMES

0ed2f8602ce9 mcr.microsoft.com/mssql/server:2017-latest "/opt/mssql/bin/sqlserver" 1 hours ago Up 8 seconds 0.0.0.0:1433->1433/tcp mssql2017

 

# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P P@ssw0rd -Q "select getdate()"

-----------------------

2019-03-04 09:42:55.583

(1 rows affected)

 

We can also see that the container has created the default database files under the /docker/mssql directory:

 

# tree /docker/mssql/

/docker/mssql/

├── data

│   ├── master.mdf

│   ├── mastlog.ldf

│   ├── modellog.ldf

│   ├── model.mdf

│   ├── msdbdata.mdf

│   ├── msdblog.ldf

│   ├── tempdb.mdf

│   └── templog.ldf

├── log

│   ├── errorlog

│   ├── errorlog.1

│   ├── HkEngineEventFile_0_131963630182660000.xel

│   ├── log.trc

│   ├── sqlagentstartup.log

│   └── system_health_0_131963630191840000.xel

└── secrets

    └── machine-key

 

Believe it or not, that's all it takes to get a running MS-SQL Server instance on Linux!

 

The database container can be stopped with:

 

# docker-compose -f mssql2017.yml stop

Stopping mssql2017 ... done

 

Installing Remedy (quietly...)

 

One of the challenges of setting up a Remedy test system is the time taken to run all of the installers, particularly if you're also installing the ITSM Suite.  There's an added complication if you're using a headless Linux server in that using the installer GUI requires a suitable X-Windows server running on a PC to display the interface.  One of the under-used (I think) Remedy features is the ability for the installers to run in a silent mode, using a configuration file to provide all of the inputs that you usually type in using the GUI interface.  Using this option has a number of benefits:

  • no need for a GUI environment on Linux.
  • can be scripted so installations may be run remotely/unattended.
  • consistent and repeatable which makes it easier to recreate environments for testing.

All of the Remedy components can be installed using this silent mode and the process is covered in the relevant pages on the BMC docs website: 

 

Installing BMC Remedy AR System using silent mode - Documentation for Remedy Deployment 9.1 - BMC Documentation

Installing BMC Atrium Core using silent mode - Documentation for Remedy Deployment 9.1 - BMC Documentation

Performing the installation in silent mode - Documentation for Remedy Deployment 9.1 - BMC Documentation

 

So how does it work?  When you unpack the installer you will find an example options file, usually in a directory called utility.  This is a text file template that provides details of how to run the installer in silent mode and lists all of the options, and values where appropriate, that you need to provide.  The exact contents will vary depending on the product but the minimum required set of options may be a lot less than you expect.

 

Let's see what we would need to have in the silent options file to install an AR Server on the Linux system where we're running our MS-SQL container.

 

# cat silent_ar.txt

-J BMC_AR_SYSTEM_64_BIT_OR_32_BIT_JRE=64

-J BMC_JAVA_JRE_64_BIT_HOME_PATH=/opt/jre8

-J BMC_JAVA_EMAIL_ENGINE_SELECTED_FOR_32BIT=false

-J BMC_JAVA_AR_SERVER_SELECTED_FOR_32BIT=false

-J BMC_AR_APPLICATION_PASSWORD=arsystem

-J BMC_MIDTIER_PASSWORD=arsystem

-J BMC_AR_DSO_PASSWORD=arsystem

-J BMC_ARSYSTEM_INSTALL_OPTION=Install

-J BMC_DBONLY_UPGRADE_CONFIRM=false

-J BMC_USER_SELECTED_VIEW_LANGUAGES=en

-J BMC_USER_SELECTED_DATA_LANGUAGE=en

-A featureARSystemServers

-J BMC_DATABASE_TYPE=SQL_SERVER

-J BMC_DATABASE_UTF=true

-J BMC_DATABASE_HOST=sql2017

-J BMC_DATABASE_PORT=1433

-J BMC_DATABASE_LOGIN=ARAdmin

-J BMC_DATABASE_PASSWORD=arsystem

-J BMC_DATABASE_CONFIRM_PASSWORD=arsystem

-J BMC_SQLSERVER_WINDOWSAUTH_OR_SQLAUTH=SQLAUTH

-J BMC_DATABASE_INSTANCE=sql2017

-J BMC_DATABASE_DBA_LOGFILE_NAME=/var/opt/mssql/data/ARSysLog

-J BMC_DATABASE_DBA_LOGFILE_SIZE=2048

-J BMC_DATABASE_DBA_TABLESPACE_NAME=ARSystem

-J BMC_DATABASE_DBA_DATAFILE_NAME=/var/opt/mssql/data/ARSys

-J BMC_DATABASE_DBA_DATAFILE_SIZE=2048

-J BMC_DATABASE_DBA_LOGIN=sa

-J BMC_DATABASE_DBA_PASSWORD=P@ssw0rd

-J BMC_AR_USER=Demo

-J BMC_AR_PASSWORD=P@ssw0rd

-J BMC_AR_CONFIRM_PASSWORD=P@ssw0rd

-J BMC_AR_SERVER_NAME=arserver01

-J BMC_AR_SERVER_HOST_NAME=arserver01.bmc.com

-J BMC_JAVA_PLUGIN_PORT=9999

-J BMC_PORT_MAPPER_ENABLED=false

-J BMC_AR_PORT=46262

-J BMC_AR_PLUGIN_PORT=46276

-J BMC_ARSERVER_SAMPLE_DATA=true

 

You can see we're providing the Java path, product options such as language choice and sample data, and the database and Demo user credentials - everything that you would usually enter via the GUI.  A similar file could be used for an Oracle database, but some different options, which are explained in the sample file, would need to be used.  Once this file has been created we just have to include it on the command line when running the installer:

 

# ./setup.bin -i silent -DOPTIONS_FILE=/path/to/silent_ar.txt

 

The installer will run, displaying some output on the screen and creating the usual log files, and set up the server.  Note that the file above does not include the mid-tier, so you'll either have to add the options to install this or use a container-based one created using the earlier blog post.

 

Once your server is up and running you can use the same process to install CMDB, AI, ITSM, SLM, SRM and so on...  With a bit of practice you could set up a script to create a full ITSM system from scratch with a single command.  Set it running as you leave for the day and have it ready in the morning, perfect for testing.
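As a rough sketch of what such a wrapper script could look like (the installer locations and options file names below are purely hypothetical and will differ in your environment; each product's own silent options file still has to be prepared first):

#!/bin/bash
# hypothetical example: chain the silent installers, stopping on the first failure
set -e

/install/arsystem/setup.bin -i silent -DOPTIONS_FILE=/install/options/silent_ar.txt
/install/atriumcore/setup.bin -i silent -DOPTIONS_FILE=/install/options/silent_cmdb.txt
/install/itsm/setup.bin -i silent -DOPTIONS_FILE=/install/options/silent_itsm.txt
/install/slm/setup.bin -i silent -DOPTIONS_FILE=/install/options/silent_slm.txt
/install/srm/setup.bin -i silent -DOPTIONS_FILE=/install/options/silent_srm.txt

echo "All silent installs completed"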

 

There are other ways to use the MS-SQL container without installing Remedy from scratch, the next section looks at backing up and restoring the database from an existing AR Server which can then be configured to use the container based database.

 

Restore an Existing Remedy Database

 

If you don't want to wait for the silent install steps, and happen to have an existing Remedy system using MS-SQL, you could take a database backup and restore it into the container.  Transfer the .Bak file from the original system and copy it to the /docker/mssql/data directory - let's assume it is called ARSystem.Bak.    The sqlcmd utility is then used to perform the restore:

 

# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P P@ssw0rd -Q "restore database ARSystem from disk='ARSystem.Bak' with replace, move 'ARSystem_data' to '/var/opt/mssql/data/arsys.mdf', move 'ARSystem_log' to '/var/opt/mssql/data/arsyslog.ldf'"

 

You may need to change the database name, logical file names, and paths in this command depending on what your current system uses.  Progress will be shown as the restore takes place:

 

Processed 189816 pages for database 'ARSystem', file 'ARSystem_data' on file 1.

Processed 184 pages for database 'ARSystem', file 'ARSystem_log' on file 1.

Converting database 'ARSystem' from version 706 to the current version 869.

Database 'ARSystem' running the upgrade step from version 706 to version 770.

<lines snipped>

Database 'ARSystem' running the upgrade step from version 867 to version 868.

Database 'ARSystem' running the upgrade step from version 868 to version 869.

RESTORE DATABASE successfully processed 190000 pages in 3.662 seconds (405.344 MB/sec).

 

Then we need to create an ARAdmin account and make it the owner of the newly restored database, again some changes may be necessary to reflect your local names:

 

# /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P P@ssw0rd -Q "create login ARAdmin with password='arsystem', default_database=ARSystem, check_policy=off"

# /opt/mssql-tools/bin/sqlcmd -S localhost -d ARSystem -U sa -P P@ssw0rd -Q "exec sp_changedbowner ARAdmin, true"

 

Query the database as the ARAdmin user to confirm it has been restored as expected:

 

# /opt/mssql-tools/bin/sqlcmd -S localhost -U ARAdmin -P arsystem -Q "select schemaid, serverid, currdbversion from control"

schemaid serverid currdbversion

----------- ----------- -------------

4088      2            58

(1 rows affected)

 

That's it, we've created a new MS-SQL instance in a container and restored a Remedy database that's ready to use.  If you change the Db-Host-Name (and Db-user/Db-password if required)  in the ar.cfg of the AR Server that was using the original database system, setting it to the name of your docker host machine, it should be able to run using the restored copy.
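For example, the relevant ar.cfg entries might end up looking something like the snippet below. The host name is a placeholder, and Db-password is normally stored in encrypted form, so it is safest to let the server manage that value rather than editing it by hand:

# illustrative ar.cfg / ar.conf excerpt only - adjust values to your environment
Db-Host-Name: dockerhost.example.com
Db-user: ARAdmin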

 

Conclusion

I've started using this type of setup for most of my test environments and find it very flexible.  The ease and speed of creating a new database, along with the ability to install various combinations of Remedy products with minimal interaction, means that I no longer find it necessary to keep many different VMs lying around just in case I need a particular version.  Of course there are cases where this approach is not suitable, longer term test and development systems for example, but I'd encourage you to give it a go next time you need a system for a quick test or have a new version you want to evaluate.

 

Questions, comments & feedback are welcome.


Introduction: This blog gives you a brief idea of the prerequisites to consider before performing an AR System platform upgrade using the installer. It also includes prerequisites to consider when applying a patch or hotfix through D2P.

 

Basic configuration checks to be performed before upgrading the Remedy platform:

 

1. Validate ARSystemInstalledConfiguration.xml file. Perform the following steps:

    a. Verify that the ARSystemInstalledConfiguration.xml file exists in the <AR Installation Directory> folder. If the file does not exist, do not proceed with the upgrade. You can copy the ARSystemInstalledConfiguration.xml file from another server having the same version and make the server-specific changes (such as host name) in the file.

 

    b. Check the Product feature map section. The product feature map section contains all the features that are installed.
        If a feature that is currently running on the system is missing from the list, you must add it to the ARSystemInstalledConfiguration.xml file.

        For example:
                 <productFeature backupOnUpgrade="false" id="featureARSystemServers"

 

<productFeature backupOnUpgrade="false" id="featureARServer"

 

<productFeature backupOnUpgrade="false" id="featureAREALDAPDirectoryServiceAuthentication"

 

<productFeature backupOnUpgrade="false" id="featureARDBCLDAPDirectoryServiceAuthentication" independentOfChildren="false" parent="featureARServer" rebootRequiredOnInstall="false" rebootRequiredOnUninstall="false" rebootRequiredOnUpgrade="false" requiredDiskSpaceMode="default.linux" state="INSTALLED" visible="true">

 

     c. Before starting the upgrade, verify the release, major, and minor versions.

         For example, <version majorVersion="1" minorVersion="00" releaseVersion="9"/>,

         which indicates that the current version installed is 9.1.00.

 

     d. Verify that the following properties have correct hostnames/IP addresses.

 

<name>BMC_AR_SERVER_NAME</name>

<name>BMC_AR_SERVER_HOST_NAME</name>

<name>BMC_AR_SERVER_CONNECT_NAME</name>

<name>BMC_AR_SERVICE_NAME</name>

<name>BMC_EMAIL_SERVICE_NAME</name>

<name>BMC_MIDTIER_TOMCAT_SERVICE_NAME</name>

<name>BMC_MIDTIER_INSTANCE_NAME</name>

 

      e. If the installed version is 9.1.04 or later, verify the 'BMC Remedy MidTier File Deployer' property.

          This property should exist for 9.1.04 or later versions.

<property>

      <name>BMC Remedy MidTier File Deployer - </name>

      <environmentVariable scope="SYSTEM">

      <name>BMC_JAVA_HOME</name>

                  <value>C:\Java\jdk1.8.0_45\jre</value>

      </environmentVariable>

</property>

 

2. Verify that the service name in the system registry is the same as the 'AR_Server_Host_Name' property in 'ARSystemInstalledConfiguration.xml' (WINDOWS SPECIFIC).

 

When you create a clone of an existing environment, you must also update the Windows registry in the cloned environment. If you do not update the registry, the following issues may occur:

  • Throwable=[java.io.FileNotFoundException: D:\Program Files\BMC Software\ARSystem\armonitor.exe (The process cannot access the file because it is being used by another process)
  • Throwable=[java.io.FileNotFoundException: D:\Program Files\BMC Software\ARSystem\arcatalog_eng_W_win64.dll (The requested operation cannot be performed on a file with a user-mapped section open)  java.io.FileOutputStream.open0(Native Method)

Verify the following:

 

    a. Verify the service name from the registry.

HKEY_LOCAL_MACHINE\SOFTWARE\Remedy\ARServer\<BMC_AR_SERVER_NAME>\ServiceName

    b. Verify the service name from the registry

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <BMC_AR_SERVER_NAME>

 

    c. In the registry, verify the JVM-related options:

    • JARS are pointing to the correct path
    • All required JARS exist.

 

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Action Request System Server <BMC_AR_SERVER_NAME>\Parameters

 

    d. Verify the email engine service name in the registry against the one present inside ARSystemInstalledConfiguration.xml.

 

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Email Engine - <BMC_AR_SERVER_NAME> 1

    e. Verify the email engine parameters are:

    • Pointing to correct paths.
    • All required JARS exist in these paths.

 

 

f. Verify the flashboard service name in the registry against the one present inside ARSystemInstalledConfiguration.xml.

           HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Flashboards Server - <BMC_AR_SERVER_NAME>

 

 

g. In the registry, verify the JVM-related options:

    • JARS are pointing to the correct path
    • All required JARS exist

           HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy Flashboards Server - <BMC_AR_SERVER_NAME>\Parameters

 

 

        If the current installed version is 9.1.04 or later, verify the following:

        h. Verify the file deployer service name from the registry against the one present in ARSystemInstalledConfiguration.xml.

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy MidTier File Deployer - <BMC_AR_SERVER_NAME> 1

 

i. In the registry, verify the JVM-related options:

    • Jars are pointing to correct path
    • All appropriate jars exist.

          HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BMC Remedy MidTier File Deployer - <BMC_AR_SERVER_NAME> 1\Parameters

 

 

3. Check the servgrp_board table. The opFlag should be “1” for the Administrator server.

4. Make sure that the Object Modification Log is disabled. If it is turned on, it causes slowness during installation because the entire AR Server metadata undergoes changes during installation.

    Link for Object modification logs:

    https://docs.bmc.com/docs/display/ars91/Using+the+object+modification+log

 

5. If you are upgrading from 7.x/8.x to 9.x or later, check the ‘Application Statistics Configuration’ form. All forms listed here must be part of some application.

 

6. Check the DB version in the Control table. Refer to the following link to verify that the DBVersion column of the Control table has the same value as the one listed for your AR Server version; a quick query is also sketched after the link.

   https://communities.bmc.com/docs/DOC-37267
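     A quick way to check is to query the Control table directly and compare the returned value with the one listed for your AR Server version in the linked document (the column name below is the one used in the container blog post's example earlier on this page; it may differ slightly between versions):

     select currdbversion from control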

 

7. Execute the following queries to find out if there are invalid views in the database. Work with your DBA to fix or remove invalid views; they are one of the most common causes of upgrade failure.

Step 1: Execute the following SQL

select count(1) from user_objects where status = 'INVALID' and object_type = 'VIEW'

Step 2: If the output is greater than 0, execute the following script to identify the list of forms associated with invalid views:

select name, schemaid,
       case when overlayprop = 0 then 'Unmodified'
            when overlayprop = 1 then 'Base'
            when overlayprop = 2 then 'Overlay'
            when overlayprop = 4 then 'Custom'
       end as CustomizationType
from arschema
where 'T' || schemaid in (select object_name from user_objects where status = 'INVALID' and object_type = 'VIEW')

Step 3 (Compile invalid views): Execute the following script to compile the invalid views; change the schema name if it is different from ARADMIN.

EXEC DBMS_UTILITY.COMPILE_SCHEMA( schema => 'ARADMIN', compile_all => FALSE)
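After compiling, re-run the Step 1 query to confirm that no invalid views remain:

-- should now return 0
select count(1) from user_objects where status = 'INVALID' and object_type = 'VIEW'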

 

8. Make sure that the ‘Temp’ directory is clean before trying to perform a subsequent upgrade attempt.

 

9. Metadata inconsistency is one of the reasons for upgrade failure. Make sure you run the checkdb utility before the upgrade and resolve relevant inconsistencies.

 

Example:

 

 

The following error was seen in an upgrade attempt for one of the customers:

 

(Apr 11 2019 04:36:52.311 PM +0200),SEVERE,com.bmc.install.product.arsuitekit.platforms.arsystemservers.arserver.ARServerOracleManageUpgradeDatabaseTask,

  LOG EVENT {Description=[[SQLERROR] [DESCRIPTION] Failed to upgrade the database schema],Detail=[[SQLERRORCODE]=0 [SQLMESSAGE]=Failed to run SQL statement [ALTER TABLE SERVGRP_RESOURCES MODIFY ( JMSRESOURCES CLOB NOT NULL )] Due to [ORA-22296: invalid ALTER TABLE option for conversion of LONG datatype to LOB

][SQLSTATEMENT]=]}

(Apr 11 2019 04:36:52.312 PM +0200),SEVERE,com.bmc.install.product.arsuitekit.platforms.arsystemservers.arserver.ARServerOracleManageUpgradeDatabaseTask,

  THROWABLE EVENT {Description=[Failed to upgrade the database schema]},

Throwable=[java.sql.SQLException: Failed to run SQL statement [ALTER TABLE SERVGRP_RESOURCES MODIFY ( JMSRESOURCES CLOB NOT NULL )] Due to [ORA-22296: invalid ALTER TABLE option for conversion of LONG datatype to LOB

]

 

The failed SQL statements are built by the upgrade installer using the information in ‘ReadARServerDatabaseModel.xml’ and ‘TransformedARServerDatabaseModel.xml’. These files are created by the upgrade installer in the ‘Temp’ directory during installation.

 

There were two issues in this scenario:

 

  • The installer was trying to alter a column in the SERVGRP_RESOURCES table, but the column had already been altered. This can happen after multiple upgrade attempts without cleanly reverting the database and the file system.
  • The SQL statement was trying to convert the column to CLOB with the NOT NULL option in a single ALTER, which Oracle does not allow for a LONG-to-LOB conversion. On a successful installation the column does not contain NULL values; in-house we had not seen NULL values, but for this customer the column was NULL. Therefore, we manually applied the same changes (CLOB and NOT NULL) and deleted everything from the ‘Temp’ directory before the upgrade.

 

 

NOTE: If you run into these issues, contact BMC support to get appropriate assistance. Do not perform any manual changes on the database without consulting BMC Support.

 

 

Basic Configuration checks to be performed before applying a patch / hotfix using D2P:

1. Make sure that the correct server entries exist in the ‘AR System Monitor’ form. Delete any orphan or duplicate entries and restart the file deployer service on each of the servers in the server group.

 

If the issue still persists, go to the AR System installation directory and make sure that the ‘monitor-ARServer-guid.properties’ file is present. It should contain a unique GUID for each server.

 

If the GUIDs are not unique, you must create entries with unique GUIDs. Perform the following steps:

    • Delete the monitor-ARServer-guid.properties file
    • Remove all entries from the AR System Monitor form
    • Restart the file deployer service

AR System Monitor Form

 

 

2. Make sure to wait a few minutes after you import the package before clicking the ‘Deploy’ button.

 

3. Make sure that the ‘AR System Single Deployment Payload’ and ‘AR System Single Deployment Status’ forms do not have any orphan records for the payload you are trying to deploy.

 

4. Make sure that the file deployer service is running on all servers in the server group.

 

5. Make sure all the processes that are part of the AR System service in the armonitor.cfg file start without any issues. Check the armonitor.log file to verify this.

     One customer had duplicate entries for the pluginsvr process, which caused “Address already in use” errors, visible in armonitor.log. Deployment of one of the payloads failed because of this error.

 

6. If any process fails to start or stop during deployment, check the following.

     The following error will be visible in the file deployer log:

     ITSM Deployment Rollback caused by error: com.bmc.arsys.filedeployer.PayloadProcessor  - Process BMC:NormalizationEngine failed to start.

  • FileDeployer signals the ARMonitor to start and stop the processes associated with a payload. If a specific process fails to start during deployment, the related logging will be available in the armonitor.log file, with a log statement saying “… Starting Process <processName> …” (basic logging should be enough).
  • If the process can be restarted manually, then to troubleshoot from the D2P perspective, execute the ARMonitor_Admin.bat file located in the ARSystem directory manually to start or stop the specific process.

 

7. Prior to 18.08, the process for which you wanted to deploy a payload had to be started; otherwise the payload deployment would fail. Examples are the Email Engine process, the DSO process, and so on.

     The following workaround can be used to bypass this without starting that particular process:

  • Import the D2P package.
  • Click Deploy (this creates the payload entries).
  • Run the query below to remove the associated process entry (for example |;BMC:EmailEngine):
  • UPDATE <schemaid of ‘AR System Single Point Deployment Payload’> SET C49110 = 'BMC:ARServer|;BMC:JavaPluginServer|;BMC:DSOJServer|;BMC:CarteServer' WHERE C49102 = '<Payload GUID>'

        Get the value of C49110 by viewing the payload’s ‘Process Type’ field, copy it, and remove only the Email Engine entry from it.


 

  • Start the utility (arpayloadutility.bat) for deployment.

We have seen requirements where administrators want to create users with limited admin privileges, for example access only to selected forms such as User/Group, the Admin Console, and the Server Group Admin Console, along with limited access to Dev Studio, for example allowing both Base Development and Best Practice modes or only Best Practice mode.

For such needs there is a feature in Remedy called Struct Admin, which is documented on the following documentation pages:

Special groups in BMC Remedy AR System - Documentation for Remedy Action Request System 9.1 - BMC Documentation

Struct Admin group permissions - Documentation for Remedy Action Request System 9.1 - BMC Documentation

 

This blog page demonstrates how Struct Admin users can be created and used. Two use cases are demonstrated in this blog; you can try more cases based on your needs by reading the documentation pages.

 

Case 1 - Full Struct Admin User:

This user should be able to access the Server Information Console (AR System Administrator Console), the Server Group Log Management Console, the Centralized Configuration console, and the User and Group forms. In Dev Studio, this user can use both Best Practice Customization mode and Base Development mode.

Detailed steps to achieve this use case are available in Configuring Full Struct Admin.docx

 

 

Case 2 - Overlay Struct Admin User:

This user should be able to access the Server Information Console (AR System Administrator Console) and the Server Group Log Management Console. In Dev Studio, this user only has access to Best Practice Customization mode.

Detailed steps to achieve this use case are available in Configuring Overlay Struct Admin.docx

 

 

 

Thanks to Chandrakumar Palanisamy for his valuable time and inputs in creating this blog.

 

Hope you will find this blog useful.

 

Note: Though basic testing is done for the use cases, extensive testing is left to the user. Make sure to test your implementation and use case properly before making anything live on PROD.


In interactions with customers it is often noted that getting table/view information is a back and forth operation between Support/PE and the customer.

 

The accompanying zip file contains SQL scripts, written for the Oracle database, that will help gather information about a table or view in one fell swoop, so to speak.

 

UPDATE

The Table_Info script has been updated to now show Function-Based Normal indexes too.

 

TABLE INFO

"table_info_wrapper.sql" can be run as SYS or ARADMIN (or the Remedy schema owner as the case may be).

                * * * * "table_info_wrapper.sql" accepts the table name to be queried and calls 2 other scripts, one of which gathers index statistics.

VIEW INFO

"view_info_wrapper.sql" needs to be run as SYS.

                 * * * * "view_info_wrapper.sql" also accepts the view name to be queried and gets its DDL information from dbms_metadata.get_ddl.


‘USER THEME’ - OVERVIEW

The Remedy 19.02 release includes a new feature called "User Preference Theme". This is a Remedy Mid Tier feature and, as the name suggests, it allows users to set a preference to view the mid-tier UI with selected theme colors.

Users now have the flexibility to select a theme from a drop-down list, giving them a different look and feel for the Mid Tier UI than just the out-of-the-box one.

Remedy customers (administrators) can build custom themes (CSS files) and publish them to the 'User Preference Theme' drop-down list, allowing them to define themes that follow corporate branding guidelines.

This feature is user specific, so each user can choose their own theme or go with the default theme (the current look and feel of the Mid Tier).

The Remedy administrator can enable themes at the company level; if themes are disabled, users continue to see the default theme, which applies to all users.

 

Prerequisites: Remedy Mid Tier 19.02 (or above) is required to enable 'User Theme'.

 

SKINS VS THEMES

Customers have been using the "Skins" feature in Remedy, where they can change the look and feel of a field, form, and so on. With the advent of the Themes feature, the administrator has a choice to either use Skins or use Themes. However, both cannot be used together.

The Skins feature applies to all users in the system, whereas the Themes feature is "user preference" based. The Remedy administrator can enable or disable Themes; once enabled, each user can select a theme of their choice.

Skins require Remedy development skills to change the look and feel. Themes, however, are CSS based and offer a wide range of options for changing the look and feel.

User Theme can be enabled and disabled through the Mid Tier Configuration at run time.

Themes apply to Mid Tier forms only and not to any third-party embedded components such as BIRT or Flash Player.

 

STEPS TO ENABLE THEMES

Open the "AR System User Preference" form, create a drop-down menu field with field ID "24016" on the Web tab, and save it to all views, as shown in the screenshot below.

               

Import the "UserPrefTheme" menu and add it as the menu name of the User Theme field, as shown in Image 1.

Set 'Expand Box Hide', available inside the field's Display properties, as shown in Image 2.

 

Image 1

Image 2

Change the value of the arsystem.showCfgThemeField property from false to true in the Mid Tier config.properties file.

Put all theme CSS files inside the Mid Tier "resources\userpreftheme\stylesheets" path and restart the Mid Tier.

  1. After the restart, open the Mid Tier Config Tool > AR Server Settings and click edit or add server; two new fields will appear: the "Enable User Theme" check box and the "Default Theme" text box.
  • Mark "Enable User Theme" as checked and give a default theme CSS file name in "Default Theme"; this applies to all users who have not selected any theme in their user preferences.

Image 3

 

After applying the theme, the landing console looks like this:

AVAILABLE THEMES

Two sample themes are attached to this blog.

 

HOW TO CREATE CUSTOM CSS THEME

There are two CSS files shared with this post. If customers want to create a new theme, they can do so by changing the background color and font color as needed.

Alternatively, customers could take the CSS classes listed in the shared CSS files and redesign them as per their requirements. After creating the new CSS file, add the file name to the "userTheme" menu list and flush the Mid Tier cache.

 

For example, if you want to change the tab background color, just make changes to the CSS listed below:

.OuterOuterTab,.Tab,.OuterTab .Tab, .OuterTab .TabLeft, .OuterTab .TabRight, .ScrollingTab .Tab,.OuterTab .TabRightRounded,.ScrollingTab .TabRightRounded

{

background: #f0f0f1 !important; /*change background color for Tab */

}

 

 

CC - Remedy ITSM Remedy AR System

Rahul Vedak Abhijeet Gadgil Ravi Singh Rawle Gibson


Introduction

 

Testing is a fact of life for those of us that work with and support software, and there are many reasons why we need to do it.  Just a few examples are

 

  • Validating configuration changes.
  • Evaluating new versions.
  • Debugging problem behaviour.
  • Developing new functionality.

 

Some of the challenges of testing with the Remedy software stack are its size and complexity.  There are multiple components, different platforms, and a range of software dependencies, all of which take time to set up and maintain.  One way to try and deal with these factors is the use of virtual machines which make it possible to save the state of a system once it is set up, and to then rollback to that known good state at any time.  In this blog post I want to look at another option based on containers and Docker.

 

There's lots of information available on the internet that will help you understand and get started using Docker.  The short story version is that containers are a lightweight alternative to virtual machines that share some of the functionality from their host rather than requiring a full copy of an operating system.  Also, containers usually include any additional software that may be required, Java for example, so it is not necessary to download and install many extra components.  This helps overcome compatibility problems and should guarantee that the application packaged inside the container will always work as expected, regardless of the software versions installed on the host.

 

There are some limitations, for example you can't use Linux binaries on a Windows host without some sort of Linux kernel being run to provide the shared functions.  What they lose in this way they make up for in speed of deployment and flexibility.  Yes, some setup is required, but once this is done it can make a very good environment for testing.

 

In this article we're going to see how to set up Docker on a CentOS 7 Linux system and then use this to test different versions of the Remedy mid-tier with several versions of Tomcat.  In later posts I hope to look at how container versions of databases and other Remedy components may be used to help speed up the testing process.

 

Firstly though, a caveat: whilst container technology is mature and widely used in production environments (BMC uses containers for most of the products in the Helix SaaS offering, and there may be on-premise customer versions at some point in the future), what I'm writing about here is very much focused on testing.  Using the details below you should be able to set up and use your own container test environment but don't point your customers at it!

 

Setting up the Docker Environment

 

Full details of the options available when installing Docker are documented here.  Start by making sure that your OS packages are the most recent available.

 

# yum update

 

This may take a few minutes and will return with either a list of available updates and a prompt to continue, or report that the system is up to date.  If prompted press 'Y' and wait for the updates to complete.  If a large number of updates are applied I'd recommend you reboot before continuing.

 

Create a working directory to store our files.  If you don't use /docker you will need to substitute your choice in some of the later commands.

 

# mkdir /docker

# cd /docker

 

These steps add the Docker software repository, install the bits we need, and start the Docker engine.

 

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# yum -y install docker-ce docker-ce-cli containerd.io

# systemctl start docker

 

Confirm that Docker is running with

 

# docker version

Client:

Version: 18.03.1-ce

API version: 1.37

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:20:16 2018

OS/Arch: linux/amd64

Experimental: false

Orchestrator: swarm

 

Server:

Engine:

Version: 18.03.1-ce

API version: 1.37 (minimum version 1.12)

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:23:58 2018

OS/Arch: linux/amd64

Experimental: false

 

We're also going to use a tool called docker-compose to help manage container configurations.  Note the version used in the command below may not be the latest, check the documentation if you want the most recent.

 

# curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-$(uname -s)-$(uname -m) -o /bin/docker-compose

# chmod a+x /bin/docker-compose

# docker-compose -version

docker-compose version 1.20.1, build 5d8c71b

 

Apache Tomcat Containers

 

One of the advantages of using Docker is that many commonly used pieces of software are already available as containers.  Tomcat is a great example - here are the currently available versions:

Not only are there many Tomcat versions but some also have a choice of Java!

 

So how do we use one?  We already have Docker installed so it's simply a case of running one command:

 

# docker run -it --rm -p 8080:8080 tomcat:8.5

Unable to find image 'tomcat:8.5' locally

8.5: Pulling from library/tomcat

741437d97401: Downloading [==========>                                        ]  9.178MB/45.34MB

34d8874714d7: Downloading [==================================>                ]  7.417MB/10.78MB


The Tomcat images are available in the public Docker registry - a central repository of container images - so the 8.5 version is downloaded and stored locally.  Once this is done the image is used to create and run a container - a local instance of Tomcat.  Further output from the command above shows this:

 

Using CATALINA_BASE:   /usr/local/tomcat

Using CATALINA_HOME:   /usr/local/tomcat

Using CATALINA_TMPDIR: /usr/local/tomcat/temp

Using JRE_HOME:        /docker-java-home/jre

Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar

27-Feb-2019 15:07:48.785 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.37

27-Feb-2019 15:07:48.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Dec 12 2018 12:07:02 UTC

<lines snipped>

27-Feb-2019 15:29:07.013 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]

27-Feb-2019 15:29:07.028 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]

27-Feb-2019 15:29:07.034 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 849 ms

 

Then, pointing a browser at port 8080 shows that we have a running Tomcat:

 

Type Ctrl+C in the Linux terminal to stop the container and return to the command prompt.

 

Let's look at the command options in more detail.

 

  • run -it
    The action being performed - run the container and show the output on the terminal.
  • --rm
    Delete the container when the process is terminated.  The downloaded image is NOT deleted.
  • -p 8080:8080
    host_port:container_port - exposes the container_port to make it accessible via the host using host_port.
  • tomcat:8.5
    The name of the container being used.

 

These examples show how to run other versions of Tomcat using different ports:

 

Tomcat 7.0 using port 8000

# docker run -it --rm -p 8000:8080 tomcat:7

Tomcat 9 with Java 8 using port 8080

# docker run -it --rm -p 8080:8080 tomcat:9-jre8

Tomcat 9 with Java 11 using port 8088

# docker run -it --rm -p 8088:8080 tomcat:9-jre11

 

Now that we can run Tomcat we need a way to add the mid-tier files so that they are accessible to a process inside the container.

 

Pump Up The Volume

 

The images we've tested include the software necessary to run Tomcat but no more.  We could use the Tomcat image as a base and build a new container that includes the mid-tier files but, for testing purposes, there's an easier way using Docker volumes.  These provide the processes running inside a container with access to the file system on the host.  By setting up a volume we can put our mid-tier files in a shared directory where Tomcat can read them.

 

Create some directories to use as volumes for different mid-tier versions:

 

# mkdir -p /docker/midtier/1805

# mkdir -p /docker/midtier/1808

 

The volume details are specified using the -v command line option for docker.  To run Tomcat 8.5 and use the 1805 volume the command is:

 

# docker run -it --rm -p 8080:8080 -v /docker/midtier/1805:/usr/local/tomcat/webapps tomcat:8.5

 

The format of the -v option is host_directory:container_directory so this command takes our host /docker/midtier/1805 directory and mounts it as /usr/local/tomcat/webapps inside the container.  Volumes are often used when you have data you want to persist between container restarts - remember the --rm option means our container is deleted when we cancel the command.  By using a volume we can carry data over to use in new containers as well as providing a way of getting data into the container.  How does this help us deploy our mid-tier though?  For that we need to go to war...

 

.war (What is it Good For?)

 

The mid-tier is included as part of the AR Server installer but we don't want to use this for several reasons;

 

  • we only need the mid-tier and the full installer is very large.
  • the installer requires a GUI or the use of a silent install file.
  • the installer won't be able to access the Tomcat files inside the container.

 

Fortunately BMC also provide the mid-tier as a war file.  This is a web archive, a standard zip file format used for web application packaging, that Tomcat understands.  When one of these is found in the webapps directory it will be unpacked and used to deploy the application it contains.  All we have to do is copy the appropriate war file to the host volume directory and run the container.  You can download the various mid-tier war files from the EPD website.

 

After downloading the files, decompress them and copy them to the appropriate directories.  I'm renaming each to arsys.war so that the familiar /arsys mid-tier URL is used.

 

# ls -l

drwxr-xr-x 2 root root      4096 Feb 27 10:29 1805

drwxr-xr-x 2 root root      4096 Feb 27 10:29 1808

-rw-r--r-- 1 root root 234438452 Jun  1  2018 MidtierWar_linux9.1.05.tar.gz

-rw-r--r-- 1 root root 234990835 Sep  3 02:41 MidtierWar_linux9.1.06.tar.gz

# tar zxvf MidtierWar_linux9.1.05.tar.gz

midtier_linux.war

# mv midtier_linux.war 1805/arsys.war

# tar zxvf MidtierWar_linux9.1.06.tar.gz

midtier_linux.war

# mv midtier_linux.war 1808/arsys.war

# ls 1805 1808

1805:

arsys.war

1808:

arsys.war

 

Now we run a Tomcat container and use the -v option to control which mid-tier version is used:

 

# docker run -it --rm -p 8080:8080 -v /docker/midtier/1805:/usr/local/tomcat/webapps tomcat:8.5

 

Looking at the console output we can see the mid-tier being deployed and, once the startup is complete, we can access it via a browser:

 

28-Feb-2019 08:07:21.713 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/usr/local/tomcat/webapps/arsys.war]

 

To switch to the 1808 mid-tier use Ctrl+C to stop the running container and change the volume used:

 

# docker run -it --rm -p 8080:8080 -v /docker/midtier/1808:/usr/local/tomcat/webapps tomcat:8.5

 

Now we can login to the config pages and complete the setup by providing the details of the AR Server we want this mid-tier instance to connect to.  All of the configuration files are stored in the tree under the arsys directory so they will persist between container restarts.

 

# ls -l 1808/arsys

total 84

drwxr-x---  3 root root 4096 Feb 28 02:12 cache

-rw-r-----  1 root root 1703 Aug 27  2018 CancelTask.jsp

drwxr-x---  2 root root 4096 Feb 28 02:12 documents

drwxr-x---  2 root root 4096 Feb 28 02:12 filedeployer

drwxr-x---  2 root root 4096 Feb 28 02:12 flashboards

drwxr-x---  3 root root 4096 Feb 28 02:12 help

drwxr-x---  7 root root 4096 Feb 28 02:12 LocalPlugins

drwxr-x---  2 root root 4096 Feb 28 02:12 logs

drwxr-x---  2 root root 4096 Feb 28 02:12 META-INF

drwxr-x---  3 root root 4096 Feb 28 02:12 report

drwxr-x---  2 root root 4096 Feb 28 02:12 reporting

drwxr-x---  2 root root 4096 Feb 28 02:12 reports

drwxr-x--- 12 root root 4096 Feb 28 02:12 resources

drwxr-x---  3 root root 4096 Feb 28 02:12 samples

drwxr-x---  2 root root 4096 Feb 28 02:12 scriptlib

drwxr-x---  5 root root 4096 Feb 28 02:12 shared

drwxr-x---  4 root root 4096 Feb 28 02:12 SpellChecker

drwxr-x---  2 root root 4096 Feb 28 02:12 tools

drwxr-x---  2 root root 4096 Feb 28 02:12 Visualizer

drwxr-x---  3 root root 4096 Feb 28 02:12 webcontent

drwxr-x---  7 root root 4096 Feb 28 02:12 WEB-INF

 

It's just as easy to switch Tomcat versions by changing the container image name in the docker command.  For example, run the 1808 mid-tier we just deployed, but with Tomcat 9 and JRE 11:

 

#  docker run -it --rm -p 8080:8080 -v /docker/midtier/1808:/usr/local/tomcat/webapps tomcat:9-jre11

 

Logging in to the config pages using the default password of arsystem we can see:

 

Managing Multiple Configurations

 

We can see how the combination of Docker, Tomcat containers and mid-tier war files, makes it very easy to deploy and test many different combinations of software versions.  However, so far, all of the containers we have started have only run as long as we left the docker command in the foreground.  That's OK for quick tests but, sooner or later, we're going to want to keep them running for longer periods.  Also, the command lines have become more complex and there may be other options you've seen in the documentation that you want to use.  The docker-compose utility we installed at the start of this article is one way to do this.

 

docker-compose is a command line tool that may be used to help manage more complex container environments.  It uses YAML format text files to store the container configuration options so that you don't need to type them all on the command line.  The documentation provides full details of how it works but here's the configuration file that is the equivalent of our command that started the 1808 mid-tier with Tomcat 8.5:

 

 

# cat 1808.yml

version: '3'

services:

  midtier:

    container_name: tomcat85_mt1808

    image: tomcat:8.5

    ports:

      - "8080:8080"

    volumes:

      - /docker/midtier/1808:/usr/local/tomcat/webapps

 

To manage our different Tomcat/mid-tier test combinations we can make copies of this file and change the relevant details such as the Tomcat image and volume.  Then we can use the docker-compose command to run the container:

 

# docker-compose -f 1808.yml up -d

Creating tomcat85_mt1808 ... done

# docker ps

CONTAINER ID   IMAGE       COMMAND            CREATED             STATUS              PORTS                    NAMES

1fc00871bfef   tomcat:8.5  "catalina.sh run"  About a minute ago  Up About a minute   0.0.0.0:8080->8080/tcp   tomcat85_mt1808

 

The -f option specifies the name of the configuration file to use (which otherwise defaults to docker-compose.yml), up is the command to start the container, and -d runs the container in the background.  The docker ps command shows us the running container.  To stop the container:

 

# docker-compose -f 1808.yml stop

Stopping tomcat85_mt1808 ... done
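Two other docker-compose sub-commands are handy at this point: logs tails the container output (midtier is the service name from the YAML above), and down stops and removes the container once you have finished with it:

# docker-compose -f 1808.yml logs -f midtier

# docker-compose -f 1808.yml down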

 

Testing with a Load Balancer

 

If you have a system with enough resources you can run multiple mid-tier containers, as long as you map a different host port for each instance.  Some applications of this type of setup would be to:

 

  • compare the behaviour of different mid-tier or Tomcat versions side-by-side.
  • add a load balancer between the mid-tier and several AR Servers.
  • add a load balancer between the clients and several mid-tiers.

 

Let's look at the last example in more detail and see what is required.  docker-compose allows us to configure multiple mid-tiers in a single YAML file so that they may be started and stopped as one.   We need to add a second service to our 1808.yml file from above, so let's make a copy and change it to:

 

# cp 1808.yml midtierlb.yml

# vi midtierlb.yml

version: '3'

services:

  midtier1:

    container_name: midtier1

    image: tomcat:8.5

    ports:

      - "8060:8080"

    volumes:

      - /docker/midtier/webapps1:/usr/local/tomcat/webapps

 

  midtier2:

    container_name: midtier2

    image: tomcat:8.5

    ports:

      - "8070:8080"

    volumes:

      - /docker/midtier/webapps2:/usr/local/tomcat/webapps

 

  haproxy:

    container_name: haproxy

    image: mminks/haproxy-docker-logging

    ports:

      - "8080:8080"

    volumes:

      - /docker/midtier/haproxy:/usr/local/etc/haproxy

 

The changes from the single mid-tier version of the file are:

 

  • the service names - midtier1 and midtier2.
  • the container_name values - midtier1 and midtier2.
  • the port numbers on the docker host that are mapped to the Tomcat port in each container, which need to be unique - 8060 and 8070.
  • the host directories used as docker volumes for each container - webapps1 and webapps2 - each of which needs a copy of the arsys.war file.

 

I've also added a load balancer service using a container version of haproxy which listens on port 8080 and distributes calls to our mid-tiers.  This is configured using a file called haproxy.cfg stored in the /docker/midtier/haproxy volume directory.

 

# cat haproxy/haproxy.cfg

    global

        maxconn 256

        log 127.0.0.1 local0 debug

    defaults

        log global

        mode http

        timeout connect 5000ms

        timeout client 50000ms

        timeout server 50000ms

    frontend http-in

        bind *:8080

        default_backend midtiers

    backend midtiers

        mode http

        balance roundrobin

        server midtier1 DOCKER_HOST_IP:8060 check

        server midtier2 DOCKER_HOST_IP:8070 check

 

You will need to edit this file and replace DOCKER_HOST_IP with the address of the machine you are using to run docker.
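If you're not sure which address to use, one way to find the primary IP of the docker host on most Linux distributions is:

# hostname -I | awk '{print $1}'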

 

Before we run the containers let's review the files you should have under your /docker directory:

 

└── midtier

   ├── haproxy

   │   └── haproxy.cfg

   ├── 1808.yml

   ├── midtierlb.yml

   ├── webapps1

   │   └── arsys.war

   └── webapps2

       └── arsys.war

 

They are:

  • a directory called haproxy containing haproxy.cfg.
  • the original 1808.yml file for a single mid-tier.
  • midtierlb.yml with our additional mid-tier and haproxy containers added.
  • webapps1 and webapps2 directories each containing the arsys.war file for the mid-tier we want to use.

 

Start the containers with:

# docker-compose -f midtierlb.yml up -d

Creating midtier1 ... done

Creating midtier2 ... done

Creating haproxy  ... done

 

Wait a minute or so for the containers to start; you can check their progress using the docker logs command:

 

# docker logs midtier1

06-Mar-2019 09:12:36.697 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.38

<lines snipped>

06-Mar-2019 09:13:03.074 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/arsys.war] has finished in [25,942] ms

06-Mar-2019 09:13:03.083 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]

06-Mar-2019 09:13:03.095 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]

06-Mar-2019 09:13:03.102 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 26113 ms

 

Now, using a browser, you should be able to connect to the mid-tiers directly using ports 8060 and 8070, and via haproxy on port 8080:
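For example, replacing DOCKER_HOST_IP with your docker host address as before, these URLs should all take you to the mid-tier login page:

http://DOCKER_HOST_IP:8060/arsys

http://DOCKER_HOST_IP:8070/arsys

http://DOCKER_HOST_IP:8080/arsys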

 

Summary

 

This has been a brief introduction to the use of the Remedy mid-tier with containers, but I hope it shows how this technology may be used for rapid testing of new or different software versions.  There are many more docker and docker-compose options available to help you set up the test environments you need; have a browse of the documentation and search the web for inspiration.  Happy testing!

 

Comments, questions and feedback are welcome, leave a message below or send me an email.

 

Mark Walters

 

 

References

Share This:

NOTE: This vulnerability is only applicable to AR System on Linux servers.

 

BMC Software has identified a security vulnerability (CVE-2018-19647) that could allow a remote, unauthenticated attacker to gain arbitrary code execution as the system user. The exposure is limited to scenarios where an attacker is on the same network as Remedy AR System and has the capability to bypass standard network based defenses such as firewalls.

All service packs and patches of Remedy AR System 9.x and 18.x versions are affected by this vulnerability.

BMC strongly recommends that customers who have installed Remedy AR System 9.x or 18.x on a Linux server apply this hotfix.

 

Hot fixes for the affected versions are available at the following links:

 

Note on prerequisites: On some versions, patches need to be applied prior to applying the hot fix (if they have not already been applied):

  • For 9.1.04, patch 002 (9.1.04.002).
  • For 9.1.03, patch 001 (9.1.03.001).
  • For 9.1.02, patch 004 (9.1.02.004).

There are no prerequisites for installation on Remedy AR System 18.05 or 18.08.

 

Thanks to François Goichon from the Google Security Team for identification of this problem.

 

Best regards,

John Weigand

R&D Program Manager

BMC Software

Share This:

If you are running a version of Remedy that is older than 9.1 SP3 you may notice the following join condition in the view definition of AST:Base Element when you open it in Developer Studio:

 

           ($ReconciliationIdentity$ = 'ReconciliationIdentity') OR ('ReconciliationIdentity' = $Instanceid$)

 

AST:Base Element is a join of BMC:Base Element and AST:Attributes.

 

The first part of the join condition returns reconciled CIs, while the second part, ('ReconciliationIdentity' = $Instanceid$), returns unreconciled assets.

 

The OR condition needs to be removed from the view definition. This issue was addressed in BMC's software defect SW00511666 and fixed in 9.1 SP3.

 

If your upgrade to 9.1 SP3, or higher, is not imminent, you can safely remove the second part of the condition yourself in Developer Studio.
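For reference, once the OR clause has been removed, the join qualification is just the first half of the condition shown above:

           ($ReconciliationIdentity$ = 'ReconciliationIdentity')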

Share This:

If you have SQL statements that are running poorly in an Oracle 12c database and their Explain Plans show "... SQL Plan Directive used for this statement" under "Note", you may want to look into turning off the SQL Plan Directive (SPD) and checking performance afterwards.
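One way to check whether a statement is affected, assuming you know its SQL_ID, is to display the cursor's plan with DBMS_XPLAN and look for the SQL Plan Directive entry in the Note section at the bottom:

SQL> select * from table(dbms_xplan.display_cursor('<sql_id>', null, 'TYPICAL'));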

 

At a recent customer site it was noticed that the SQL below (truncated here) had an Execution Plan, also shown below, that was extremely sub-optimal with a Cost in excess of 7.5 BILLION.

 

QUERY

SELECT b1.C1 || '|' || NVL(b2.C1, ''), b1.C2, b1.C3, b1.C4, b1.C5, b1.C6, b2.C7, b1.C8, b2.C260100001, b1.C301019600, b1.C260100002, b2.C260100004, b2.C260100006, b2.C260100007, b2.C260100009, b2.C260100010, b2.C260100015, b2.C230000009, b2.C263000050, . . . b1.C530014300, b1.C530010200, b2.C810000272, b1.E0, b1.E1, b2.C1

FROM T525 b1 LEFT JOIN T3973 b2 ON ((b1.C400129200 = b2.C400129200) OR (b2.C400129200 = b1.C179))

 

EXECUTION PLAN

Plan hash value: 3755947183

 

--------------------------------------------------------------------------------------------------

| Id | Operation                 | Name            | Rows   | Bytes |TempSpc| Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------

| 0  | SELECT STATEMENT          |                 | 197K   | 2068M |       | 7929M (1)  | 86:02:37 |

| 1  |  NESTED LOOPS OUTER       |                 | 197K   | 2068M |       | 7929M (1)  | 86:02:37 |

|* 2 |   HASH JOIN               |                 | 98925  | 72M   | 24M   | 163K (1)   | 00:00:07 |

| 3  |    TABLE ACCESS FULL      | T457            | 98357  | 23M   |       | 1011 (1)   | 00:00:01 |

| 4  |    TABLE ACCESS FULL      | T476            | 3528K  | 1722M |       | 73424 (1)  | 00:00:03 |

| 5  |   VIEW                    | VW_LAT_F1632550 | 2      | 20394 |       | 80158 (1)  | 00:00:04 |

|* 6 |    TABLE ACCESS FULL      | T3973           | 2      | 1138  |       | 80158 (1)  | 00:00:04 |

--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

--------------------------------------------------- 

2 - access("B1"."C179"="B2"."C179")

6 - filter("B2"."C400129200"="B2"."C400129200" OR "B2"."C400129200"="B2"."C179")

 

Note

-----

  - dynamic statistics used: dynamic sampling (level=2)

  - 1 Sql Plan Directive used for this statement

 

The same SQL in a few other environments (customer's and BMC's internal ones) showed a better plan that used indexes and ran much faster.

 

The "good" plans were missing the "Note" above that mentions (a) dynamic sampling used and (b) 1 SQL Plan Directive used for the SQL.

 

This indicated that it was possibly the SQL Plan Directive that was responsible for the poor Execution Plan.

 

We checked the database for SPDs associated with the 2 objects above (T525 and T3973) and found that there were 4 table-level directives.

 

SQL> select * from dba_sql_plan_dir_objects where owner = 'ARADMIN' and object_name = 'T3973';

 

DIRECTIVE_ID        OWNER   OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE    NOTES

2625473566913189414 ARADMIN T3973       C400129200     COLUMN

9853893946733075077 ARADMIN T3973       C7             COLUMN

9853893946733075077 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>YES</equality_predicates_only>

                                                                      <simple_column_predicates_only>YES</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>NO</filter_on_joining_object></obj_note>

6009259810806618512 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>NO</equality_predicates_only>

                                                                      <simple_column_predicates_only>NO</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>YES</filter_on_joining_object></obj_note>

2625473566913189414 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>YES</equality_predicates_only>

                                                                      <simple_column_predicates_only>YES</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>NO</filter_on_joining_object></obj_note>

3274712412944615867 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>NO</equality_predicates_only>

                                                                      <simple_column_predicates_only>NO</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>NO</filter_on_joining_object></obj_note>

 

We then disabled the SPDs using the following commands (the long number in each command is the SQL Directive Id from above):

 

   exec dbms_spd.alter_sql_plan_directive(9853893946733075077,'ENABLED','NO');

   exec dbms_spd.alter_sql_plan_directive(6009259810806618512,'ENABLED','NO');

   exec dbms_spd.alter_sql_plan_directive(3274712412944615867,'ENABLED','NO');

   exec dbms_spd.alter_sql_plan_directive(2625473566913189414,'ENABLED','NO');
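To confirm the directives are now disabled, you can query DBA_SQL_PLAN_DIRECTIVES for the same ids and check the ENABLED column (a quick sketch; substitute your own directive ids):

SQL> select directive_id, type, state, enabled from dba_sql_plan_directives
     where directive_id in (9853893946733075077, 6009259810806618512,
                            3274712412944615867, 2625473566913189414);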

 

The result was the faster Execution Plan that had been observed in all the other environments.

 

--------------------------------------------------------------------------------------------------------------------

| Id | Operation                                | Name            | Rows  | Bytes |TempSpc | Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------------------------

| 0  | SELECT STATEMENT                         |                 | 197K  | 2068M |        | 756K (1)   | 00:00:30 |

| 1  | NESTED LOOPS OUTER                       |                 | 197K  | 2068M |        | 756K (1)   | 00:00:30 |

|* 2 |  HASH JOIN                               |                 | 98925 | 72M   | 24M    | 163K (1)   | 00:00:07 |

| 3  |   TABLE ACCESS FULL                      | T457            | 98357 | 23M   |        | 1011 (1)   | 00:00:01 |

| 4  |   TABLE ACCESS FULL                      | T476            | 3528K | 1722M |        | 73424 (1)  | 00:00:03 |

| 5  |  VIEW                                    | VW_LAT_F1632550 | 2     | 20394 |        | 6 (0)      | 00:00:01 |

| 6  |  CONCATENATION                           |                 |       |       |        |            |          |

| 7  |    TABLE ACCESS BY INDEX ROWID BATCHED   | T3973           | 1     | 569   |        | 3 (0)      | 00:00:01 |

|* 8 |     INDEX RANGE SCAN                    | I3973_400129200_1 | 1   |       |        | 2 (0)      | 00:00:01 |

| 9  |    TABLE ACCESS BY INDEX ROWID BATCHED   | T3973           | 1     | 569   |        | 3 (0)      | 00:00:01 |

|* 10|     INDEX RANGE SCAN                    | I3973_400129200_1 | 1   |       |        | 2 (0)      | 00:00:01 |

--------------------------------------------------------------------------------------------------------------------

 

Predicate Information (identified by operation id):

--------------------------------------------------- 

  2 - access("B1"."C179"="B2"."C179")

  8 - access("B2"."C400129200"="B2"."C179")

  10 - access("B2"."C400129200"="B2"."C400129200")

       filter(LNNVL("B2"."C400129200"="B2"."C179"))

 

As one can see, the plan no longer uses any SQL Plan Directives, and the Cost has dropped from over 7.5 billion to around 756K.

 

There are several ways to prevent SQL Plan Directives from affecting query performance.

 

Database Level:

  • set optimizer_features_enable = '11.2.0.4'. NOTE: This will disable ALL 12c optimizer features.
  • set optimizer_adaptive_features = FALSE. NOTE: This disables ALL 12c adaptive features, which may be too broad an option.

 

SQL Directive Level:

  • exec dbms_spd.alter_sql_plan_directive(<insert directive_id here>,'ENABLED','NO');
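If you prefer to remove a directive entirely rather than leave it disabled, DBMS_SPD also provides a drop procedure; note that the optimizer may create a new directive later if the same cardinality misestimates recur, so disabling is often the safer option:

   exec dbms_spd.drop_sql_plan_directive(<insert directive_id here>);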
