
Remedy AR System


In interactions with customers it is often noted that gathering table or view information is a back-and-forth exercise between Support/PE and the customer.

 

The accompanying zip file contains SQL scripts, written for the Oracle database, that help gather information about a table or view in one fell swoop, so to speak.

 

UPDATE

The Table_Info script has been updated to now show Function-Based Normal indexes too.

 

TABLE INFO

"table_info_wrapper.sql" can be run as SYS or ARADMIN (or the Remedy schema owner as the case may be).

                * * * * "table_info_wrapper.sql" accepts the table name to be queried and calls 2 other scripts, one of which gathers index statistics.

VIEW INFO

"view_info_wrapper.sql" needs to be run as SYS.

                 * * * * "view_info_wrapper.sql" also accepts the view name to be queried and gets its DDL information from dbms_metadata.get_ddl.


‘USER THEME’ - OVERVIEW

The Remedy 19.02 release includes a new feature called "User Preference Theme". This is a Remedy MidTier feature and, as the name suggests, it allows users to set a preference for viewing the mid-tier UI with a selected theme's colors.

Users now have the flexibility to select a theme from the list available in a drop-down, giving them a different look and feel for the MidTier UI than just the out-of-the-box one.

Remedy customers (administrators) can build custom themes (CSS files) and publish them to the 'User Preference Theme' drop-down list, allowing them to define themes that follow corporate branding guidelines.

This feature is user specific - each user can choose their own theme or go with the default theme (the current look and feel of MidTier).

The Remedy administrator can enable themes at the company level; if themes are disabled, all users continue to see the default theme.

 

Pre-requisites: Remedy MidTier 19.02 (or above) is required to enable 'User Theme'.

 

SKINS VS THEMES

Customers have been using the "Skins" feature in Remedy to change the look and feel of a field, form, and so on. With the advent of the Themes feature, the administrator can choose to use either Skins or Themes; the two cannot be used together.

The Skins feature applies to all users in the system, whereas the Themes feature is based on user preference. The Remedy administrator can enable or disable Themes; once enabled, each user can select the theme of their choice.

Skins require Remedy development skills to change the look and feel. Themes, however, are CSS based and offer a wide range of options for changing the look and feel.

User Theme can be enabled and disabled through MidTier Configuration at run time.

Themes apply to MidTier forms only, not to any third-party embedded components such as BIRT or Flash Player.

 

STEPS TO ENABLE THEMES

Open the "AR System User Preference" form, create a drop-down menu field with field ID "24016" in the Web tab, and save it to all views, as shown in the screenshot below.

               

Import the menu "UserPrefTheme" and add it as the Menu Name of the User Theme field, as shown in Image 1.

Set 'Expand Box Hide', available inside the Display properties, as shown in Image 2.

 

Image 1

Image 2

Change the value of the property arsystem.showCfgThemeField from false to true inside the MidTier config.properties file.
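For reference, the change is a single line in config.properties (in a default installation the file is under the Mid Tier install's WEB-INF/classes directory; adjust the path for your environment):

arsystem.showCfgThemeField=true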

Put all theme CSS files inside the MidTier "resources\userpreftheme\stylesheets" directory and restart the MidTier.

After the restart, open the MidTier Config tool -> AR Server Settings and click Edit or Add Server; two new fields will appear: an "Enable User Theme" check box and a "Default Theme" text box.

Check "Enable User Theme" and enter a default theme CSS file name in "Default Theme" - this theme applies to all users who have not selected a theme in their user preferences.

Image 3

 

After applying a theme, the landing console looks like this:

AVAILABLE THEMES

Two sample themes are attached to this blog.

 

HOW TO CREATE CUSTOM CSS THEME

There are two CSS files shared with this post. If customers want to create a new theme they can do so by changing the background color and font color in these files as needed.

Alternatively, customers can take the CSS classes listed in the shared CSS files and redesign them to suit their requirements. After creating the new CSS file, add the file name to the "userTheme" menu list and flush the MidTier cache.

 

For example, if you want to change the tab background color, just change the CSS listed below:

.OuterOuterTab,.Tab,.OuterTab .Tab, .OuterTab .TabLeft, .OuterTab .TabRight, .ScrollingTab .Tab,.OuterTab .TabRightRounded,.ScrollingTab .TabRightRounded

{

background: #f0f0f1 !important; /*change background color for Tab */

}

 

 


Introduction

 

Testing is a fact of life for those of us who work with, and support, software, and there are many reasons why we need to do it.  Just a few examples are:

 

  • Validating configuration changes.
  • Evaluating new versions.
  • Debugging problem behaviour.
  • Developing new functionality.

 

Some of the challenges of testing with the Remedy software stack are its size and complexity.  There are multiple components, different platforms, and a range of software dependencies, all of which take time to set up and maintain.  One way to try and deal with these factors is the use of virtual machines which make it possible to save the state of a system once it is set up, and to then rollback to that known good state at any time.  In this blog post I want to look at another option based on containers and Docker.

 

There's lots of information available on the internet that will help you understand and get started using Docker.  The short story version is that containers are a lightweight alternative to virtual machines that share some of the functionality from their host rather than requiring a full copy of an operating system.  Also, containers usually include any additional software that may be required, Java for example, so it is not necessary to download and install many extra components.  This helps overcome compatibility problems and should guarantee that the application packaged inside the container will always work as expected, regardless of the software versions installed on the host.

 

There are some limitations; you can't, for example, use Linux binaries on a Windows host without some sort of Linux kernel being run to provide the shared functions.  What containers lose in this way they make up for in speed of deployment and flexibility.  Yes, some setup is required, but once this is done they make a very good environment for testing.

 

In this article we're going to see how to set up Docker on a CentOS 7 Linux system and then use this to test different versions of the Remedy mid-tier with several versions of Tomcat.  In later posts I hope to look at how container versions of databases and other Remedy components may be used to help speed up the testing process.

 

Firstly though, a caveat - whilst container technology is mature and widely used in production environments (BMC uses containers for most of the products in the Helix SaaS offering, and there may be on-premise customer versions at some point in the future), what I'm writing about here is very much focused on testing.  Using the details below you should be able to set up and use your own container test environment, but don't point your customers at it!

 

Setting up the Docker Environment

 

Full details of the options available when installing Docker are in the Docker documentation.  Start by making sure that your OS packages are the most recent available.

 

# yum update

 

This may take a few minutes and will return with either a list of available updates and a prompt to continue, or report that the system is up to date.  If prompted press 'Y' and wait for the updates to complete.  If a large number of updates are applied I'd recommend you reboot before continuing.

 

Create a working directory to store our files.  If you don't use /docker you will need to substitute your choice in some of the later commands.

 

# mkdir /docker

# cd /docker

 

These steps add the Docker software repository, install the bits we need, and start the Docker engine.  (The yum-config-manager tool is part of the yum-utils package; install that first if it is not already present.)

 

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# yum -y install docker-ce docker-ce-cli containerd.io

# systemctl start docker

 

Confirm that Docker is running with

 

# docker version

Client:

Version: 18.03.1-ce

API version: 1.37

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:20:16 2018

OS/Arch: linux/amd64

Experimental: false

Orchestrator: swarm

 

Server:

Engine:

Version: 18.03.1-ce

API version: 1.37 (minimum version 1.12)

Go version: go1.9.5

Git commit: 9ee9f40

Built: Thu Apr 26 07:23:58 2018

OS/Arch: linux/amd64

Experimental: false

 

We're also going to use a tool called docker-compose to help manage container configurations.  Note the version used in the command below may not be the latest, check the documentation if you want the most recent.

 

# curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-$(uname -s)-$(uname -m) -o /bin/docker-compose

# chmod a+x /bin/docker-compose

# docker-compose --version

docker-compose version 1.20.1, build 5d8c71b

 

Apache Tomcat Containers

 

One of the advantages of using Docker is that many commonly used pieces of software are already available as containers.  Tomcat is a great example - the tomcat repository on Docker Hub lists all the currently available versions.

Not only are there many Tomcat versions but some also have a choice of Java!

 

So how do we use one?  We already have Docker installed so it's simply a case of running one command:

 

# docker run -it --rm -p 8080:8080 tomcat:8.5

Unable to find image 'tomcat:8.5' locally

8.5: Pulling from library/tomcat

741437d97401: Downloading [==========>                                        ]  9.178MB/45.34MB

34d8874714d7: Downloading [==================================>                ]  7.417MB/10.78MB


The Tomcat images are available in the public Docker registry - a central repository of container images - so the 8.5 version is downloaded and stored locally.  Once this is done the image is used to create and run a container - a local instance of Tomcat.  Further output from the command above shows this:

 

Using CATALINA_BASE:   /usr/local/tomcat

Using CATALINA_HOME:   /usr/local/tomcat

Using CATALINA_TMPDIR: /usr/local/tomcat/temp

Using JRE_HOME:        /docker-java-home/jre

Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar

27-Feb-2019 15:07:48.785 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.37

27-Feb-2019 15:07:48.787 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Dec 12 2018 12:07:02 UTC

<lines snipped>

27-Feb-2019 15:29:07.013 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]

27-Feb-2019 15:29:07.028 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]

27-Feb-2019 15:29:07.034 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 849 ms

 

Then, pointing a browser at port 8080 shows that we have a running Tomcat:

 

Type Ctrl+C in the Linux terminal to stop the container and return to the command prompt.
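Stopping the container this way removes it (thanks to the --rm option covered below) but not the downloaded image, which stays cached locally; you can confirm that with the standard docker images command:

# docker images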

 

Let's look at the command options in more detail.

 

Option           Explanation
run -it          the action being performed - run the container and show the output on the terminal.
--rm             delete the container when the process is terminated.  The downloaded image is NOT deleted.
-p 8080:8080     host_port:container_port - exposes the container_port to make it accessible via the host using host_port.
tomcat:8.5       the name of the image (and tag) being used.

 

These examples show how to run other versions of Tomcat using different ports:

 

Tomcat 7.0 using port 8000

# docker run -it --rm -p 8000:8080 tomcat:7

Tomcat 9 with Java 8 using port 8080

# docker run -it --rm -p 8080:8080 tomcat:9-jre8

Tomcat 9 with Java 11 using port 8088

# docker run -it --rm -p 8088:8080 tomcat:9-jre11

 

Now that we can run Tomcat we need a way to add the mid-tier files so that they are accessible to a process inside the container.

 

Pump Up The Volume

 

The images we've tested include the software necessary to run Tomcat but no more.  We could use the Tomcat image as a base and build a new container that includes the mid-tier files but, for testing purposes, there's an easier way using Docker volumes.  These provide the processes running inside a container with access to the file system on the host.  By setting up a volume we can put our mid-tier files in the shared directory where Tomcat can read them.

 

Create some directories to use as volumes for different mid-tier versions:

 

# mkdir -p /docker/midtier/1805

# mkdir -p /docker/midtier/1808

 

The volume details are specified using the -v command line option for docker.  To run Tomcat 8.5 and use the 1805 volume the command is:

 

# docker run -it --rm -p 8080:8080 -v /docker/midtier/1805:/usr/local/tomcat/webapps tomcat:8.5

 

The format of the -v option is host_directory:container_directory so this command takes our host /docker/midtier/1805 directory and mounts it as /usr/local/tomcat/webapps inside the container.  Volumes are often used when you have data you want to persist between container restarts - remember the --rm option means our container is deleted when we cancel the command.  By using a volume we can carry data over to use in new containers as well as providing a way of getting data into the container.  How does this help us deploy our mid-tier though?  For that we need to go to war...

 

.war (What is it Good For?)

 

The mid-tier is included as part of the AR Server installer but we don't want to use this for several reasons:

 

  • we only need the mid-tier and the full installer is very large.
  • the installer requires a GUI or the use of a silent install file.
  • the installer won't be able to access the Tomcat files inside the container.

 

Fortunately BMC also provide the mid-tier as a war file.  This is a web archive, a standard zip file format used for web application packaging, that Tomcat understands.  When one of these is found in the webapps directory it will be unpacked and used to deploy the application it contains.  All we have to do is copy the appropriate war file to the host volume directory and run the container.  You can download the various mid-tier war files from the EPD website.

 

After downloading the files, decompress them and copy them to the appropriate directories.  I'm renaming each to arsys.war so that the familiar /arsys mid-tier URL is used.

 

# ls -l

drwxr-xr-x 2 root root      4096 Feb 27 10:29 1805

drwxr-xr-x 2 root root      4096 Feb 27 10:29 1808

-rw-r--r-- 1 root root 234438452 Jun  1  2018 MidtierWar_linux9.1.05.tar.gz

-rw-r--r-- 1 root root 234990835 Sep  3 02:41 MidtierWar_linux9.1.06.tar.gz

# tar zxvf MidtierWar_linux9.1.05.tar.gz

midtier_linux.war

# mv midtier_linux.war 1805/arsys.war

# tar zxvf MidtierWar_linux9.1.06.tar.gz

midtier_linux.war

# mv midtier_linux.war 1808/arsys.war

# ls 1805 1808

1805:

arsys.war

1808:

arsys.war

 

Now we run a Tomcat container and use the -v option to control which mid-tier version is used:

 

# docker run -it --rm -p 8080:8080 -v /docker/midtier/1805:/usr/local/tomcat/webapps tomcat:8.5

 

Looking at the console output we can see the mid-tier being deployed and, once the startup is complete, we can access it via a browser:

 

28-Feb-2019 08:07:21.713 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/usr/local/tomcat/webapps/arsys.war]

 

To switch to the 1808 mid-tier use Ctrl+C to close the running container and change the volume used:

 

# docker run -it --rm -p 8080:8080 -v /docker/midtier/1808:/usr/local/tomcat/webapps tomcat:8.5

 

Now we can login to the config pages and complete the set up by providing the details of the AR Server we want this mid-tier instance to connect to.  All of the configuration files are stored in the tree under the arsys directory so they will persist between container restarts.

 

# ls -l 1808/arsys

total 84

drwxr-x---  3 root root 4096 Feb 28 02:12 cache

-rw-r-----  1 root root 1703 Aug 27  2018 CancelTask.jsp

drwxr-x---  2 root root 4096 Feb 28 02:12 documents

drwxr-x---  2 root root 4096 Feb 28 02:12 filedeployer

drwxr-x---  2 root root 4096 Feb 28 02:12 flashboards

drwxr-x---  3 root root 4096 Feb 28 02:12 help

drwxr-x---  7 root root 4096 Feb 28 02:12 LocalPlugins

drwxr-x---  2 root root 4096 Feb 28 02:12 logs

drwxr-x---  2 root root 4096 Feb 28 02:12 META-INF

drwxr-x---  3 root root 4096 Feb 28 02:12 report

drwxr-x---  2 root root 4096 Feb 28 02:12 reporting

drwxr-x---  2 root root 4096 Feb 28 02:12 reports

drwxr-x--- 12 root root 4096 Feb 28 02:12 resources

drwxr-x---  3 root root 4096 Feb 28 02:12 samples

drwxr-x---  2 root root 4096 Feb 28 02:12 scriptlib

drwxr-x---  5 root root 4096 Feb 28 02:12 shared

drwxr-x---  4 root root 4096 Feb 28 02:12 SpellChecker

drwxr-x---  2 root root 4096 Feb 28 02:12 tools

drwxr-x---  2 root root 4096 Feb 28 02:12 Visualizer

drwxr-x---  3 root root 4096 Feb 28 02:12 webcontent

drwxr-x---  7 root root 4096 Feb 28 02:12 WEB-INF

 

It's just as easy to switch Tomcat versions by changing the container image name in the docker command.  For example, run the 1808 mid-tier we just deployed, but with Tomcat 9 and JRE 11:

 

#  docker run -it --rm -p 8080:8080 -v /docker/midtier/1808:/usr/local/tomcat/webapps tomcat:9-jre11

 

Logging in to the config pages using the default password of arsystem we can see:

 

Managing Multiple Configurations

 

We can see how the combination of Docker, Tomcat containers and mid-tier war files, makes it very easy to deploy and test many different combinations of software versions.  However, so far, all of the containers we have started have only run as long as we left the docker command in the foreground.  That's OK for quick tests but, sooner or later, we're going to want to keep them running for longer periods.  Also, the command lines have become more complex and there may be other options you've seen in the documentation that you want to use.  The docker-compose utility we installed at the start of this article is one way to do this.

 

docker-compose is a command line tool that may be used to help manage more complex container environments.  It uses YAML format text files to store the container configuration options so that you don't need to type them all on the command line.  The documentation provides full details of how it works but here's the configuration file that is the equivalent of our command that started the 1808 mid-tier with Tomcat 8.5:

 

 

# cat 1808.yml

version: '3'

services:

  midtier:

    container_name: tomcat85_mt1808

    image: tomcat:8.5

    ports:

      - "8080:8080"

    volumes:

      - /docker/midtier/1808:/usr/local/tomcat/webapps

 

To manage our different Tomcat/mid-tier test combinations we can make copies of this file and change the relevant details such as the Tomcat image and volume.  Then we can use the docker-compose command to run the container:

 

# docker-compose -f 1808.yml up -d

Creating tomcat85_mt1808 ... done

# docker ps

CONTAINER ID   IMAGE       COMMAND            CREATED             STATUS              PORTS                    NAMES

1fc00871bfef   tomcat:8.5  "catalina.sh run"  About a minute ago  Up About a minute   0.0.0.0:8080->8080/tcp   tomcat85_mt1808

 

The -f option specifies the name of the configuration file to use (which otherwise defaults to docker-compose.yml), up is the command to start the container, and -d runs the container in the background.  The docker ps command shows us the running container.  To stop the container:

 

# docker-compose -f 1808.yml stop

Stopping tomcat85_mt1808 ... done
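While a container is running in the background its console output no longer appears on your terminal.  If you need it, docker-compose can tail the logs of a service defined in the file - midtier being the service name from our 1808.yml:

# docker-compose -f 1808.yml logs -f midtier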

 

Testing with a Load Balancer

 

If you have a system with enough resources you can run multiple mid-tier containers as long as you map a different host port for each instance.  Some applications of this type of set up would be to:

 

  • compare the behaviour of different mid-tier or Tomcat versions side-by-side.
  • add a load balancer between the mid-tier and several AR Servers.
  • add a load balancer between the clients and several mid-tiers.

 

Let's look at the last example in more detail and see what is required.  docker-compose allows us to configure multiple mid-tiers in a single YAML file so that they may be started and stopped as one.   We need to add a second mid-tier service to our 1808.yml file from above, so let's make a copy and change it to:

 

# cp 1808.yml midtierlb.yml

# vi midtierlb.yml

version: '3'

services:

  midtier1:

    container_name: midtier1

    image: tomcat:8.5

    ports:

      - "8060:8080"

    volumes:

      - /docker/midtier/webapps1:/usr/local/tomcat/webapps

 

  midtier2:

    container_name: midtier2

    image: tomcat:8.5

    ports:

      - "8070:8080"

    volumes:

      - /docker/midtier/webapps2:/usr/local/tomcat/webapps

 

  haproxy:

    container_name: haproxy

    image: mminks/haproxy-docker-logging

    ports:

      - "8080:8080"

    volumes:

      - /docker/midtier/haproxy:/usr/local/etc/haproxy

 

The changes from the single mid-tier configuration are:

 

  • the service names - midtier1 and midtier2.
  • the container_name - midtier1 and midtier2.
  • the port numbers on the docker host that are mapped to the Tomcat port in each container need to be unique - 8060 and 8070.
  • the host directories used to create a docker volume for each container - webapps1 and webapps2 - which is where we need to copy the arsys.war file.

 

I've also added a load balancer service using a container version of haproxy which listens on port 8080 and distributes calls to our mid-tiers.  This is configured using a file called haproxy.cfg stored in the /docker/midtier/haproxy volume directory.

 

# cat haproxy/haproxy.cfg

    global

        maxconn 256

        log 127.0.0.1 local0 debug

    defaults

        log global

        mode http

        timeout connect 5000ms

        timeout client 50000ms

        timeout server 50000ms

    frontend http-in

        bind *:8080

        default_backend midtiers

    backend midtiers

        mode http

        balance roundrobin

        server midtier1 DOCKER_HOST_IP:8060 check

        server midtier2 DOCKER_HOST_IP:8070 check

 

You will need to edit this file and replace DOCKER_HOST_IP with the address of the machine you are using to run Docker.
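A quick way to make that substitution from the command line (the IP address below is only an example, use your own):

# sed -i 's/DOCKER_HOST_IP/10.0.0.10/g' /docker/midtier/haproxy/haproxy.cfg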

 

Before we run the containers let's just review the files you should have under your /docker directory:

 

└── midtier

   ├── haproxy

   │   └── haproxy.cfg

   ├── 1808.yml

   ├── midtierlb.yml

   ├── webapps1

   │   └── arsys.war

   └── webapps2

       └── arsys.war

 

They are:

  • a directory called haproxy containing haproxy.cfg.
  • the original 1808.yml file for a single mid-tier.
  • midtierlb.yml with our additional mid-tier and haproxy containers added.
  • webapps1 and webapps2 directories each containing the arsys.war file for the mid-tier we want to use.

 

Start the containers with:

# docker-compose -f midtierlb.yml up -d

Creating midtier1 ... done

Creating midtier2 ... done

Creating haproxy  ... done

 

Wait a minute or so for the containers to start; you can check their progress using the docker logs command:

 

# docker logs midtier1

06-Mar-2019 09:12:36.697 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.38

<lines snipped>

06-Mar-2019 09:13:03.074 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/arsys.war] has finished in [25,942] ms

06-Mar-2019 09:13:03.083 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]

06-Mar-2019 09:13:03.095 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]

06-Mar-2019 09:13:03.102 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 26113 ms

 

Now, using a browser, you should be able to connect to the mid-tiers directly using ports 8060 and 8070, and via haproxy on port 8080:
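If you prefer the command line, a quick check of each endpoint can be made with curl, replacing DOCKER_HOST_IP with your Docker host's address as before (-I requests just the response headers):

# curl -sI http://DOCKER_HOST_IP:8060/arsys/

# curl -sI http://DOCKER_HOST_IP:8070/arsys/

# curl -sI http://DOCKER_HOST_IP:8080/arsys/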

 

Summary

 

This has been a brief introduction to the use of the Remedy mid-tier with containers but I hope it shows how this technology may be used for rapid testing of new or different software versions.  There are many more docker and docker-compose options available to help you set up the test environments you need, have a browse of the documentation and search the web for inspiration.  Happy testing!

 

Comments, questions and feedback are welcome, leave a message below or send me an email.

 

Mark Walters

 

 


December 2018 list of top viewed AR Server knowledge articles; this will be updated regularly.

 

Article Number   Title

000114508        REMEDY 9.x Release - CONFIGURATION CHECKLIST from the BMC R&D PERFORMANCE TEAM
000133403        Remedy - Server - 'Could not refresh JMS Connection for destination...' errors seen in the arerror.log
000112554        How to perform a Hard Cache flush for Mid Tier
000134723        How to pass User credentials in REST API URL
000102992        Remedy - Server - Enabling case-insensitive searches with an AR System server 8.1 or 9.x with Oracle
000134172        AR Server REST API architecture and overview - Start here
000139980        How to use Self Signed Certificates in Postman Chrome Extension
000120607        Remedy - Server - Error: "ARERR 623 - Authentication failed" when opening the Report Console
000084360        Remedy - FTS - How to rebuild the FTS index on a Server Group
000072784        ARERR 382 - The value(s) for this entry violate a unique index that has been defined for this form
000097211        Remedy - Server - Escalations stopped running in 9.x on a server group
000122810        Remedy - Server - Steps to reconfigure AR Servers in Server Group
000096477        Configuring Tomcat for encoded slashes in the URL
000130061        ORA-01430: Column being added already exists in table.
000095061        ERROR: ORA-03127: no new operations allowed until the active operation ends seen in arerror.log (ARERR 552)
000124525        How to manually configure 9.x AR System ODBC drivers on 64-bit Windows OS?
000065252        ARERR 9350 error
000136515        List of BMC Remedy Release Version Strings and dbversion values
000065674        ARERR 9084 - user already connected from another machine


NOTE: This vulnerability is only applicable to AR System on Linux servers.

 

BMC Software has identified a security vulnerability (CVE-2018-19647) that could allow a remote, unauthenticated attacker to gain arbitrary code execution as the system user. The exposure is limited to scenarios where an attacker is on the same network as Remedy AR System and has the capability to bypass standard network based defenses such as firewalls.

All service packs and patches of Remedy AR System 9.x and 18.x versions are affected by this vulnerability.

BMC strongly recommends that customers who have installed Remedy AR System 9.x or 18.x on a Linux server apply this hotfix.

 

Hot fixes for the affected versions are available at the following links:

 

Note on prerequisites: On some versions, patches need to be applied prior to applying the hot fix (if they have not already been applied):

  • For 9.1.04, patch 002 (9.1.04.002).
  • For 9.1.03, patch 001 (9.1.03.001)
  • For 9.1.02, patch 004 (9.1.02.004)

There are no prerequisites for installation on Remedy AR System 18.05 or 18.08.

 

Thanks to François Goichon from the Google Security Team for identification of this problem.

 

Best regards,

John Weigand

R&D Program Manager

BMC Software


If you are running a version of Remedy that is older than 9.1 SP3 you may notice the following join condition in the view definition of AST:Base Element when you open it in Developer Studio:

 

           ($ReconciliationIdentity$ = 'ReconciliationIdentity') OR ('ReconciliationIdentity' = $Instanceid$)

 

AST:Base Element is a join of BMC:Base Element and AST:Attributes.

 

The first part of the join condition returns reconciled CIs while the second part, the OR clause, returns unreconciled assets.

 

The OR condition needs to be removed from the view definition. This issue was addressed in BMC's software defect SW00511666 and fixed in 9.1 SP3.

 

If your upgrade to 9.1 SP3, or higher, is not imminent you can safely remove the above condition in Developer Studio.
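After removing the OR clause the join qualification should read simply:

           ($ReconciliationIdentity$ = 'ReconciliationIdentity')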


If you have SQL statements that are running poorly in an Oracle 12c database and their Explain Plans are showing "... SQL Plan Directive used for this statement" under "Note" you may want to look into turning off the SQL Plan Directive (SPD) and checking performance after.

 

At a recent customer site it was noticed that the SQL below (truncated here) had an Execution Plan, also shown below, that was extremely sub-optimal with a Cost in excess of 7.5 BILLION.

 

QUERY

SELECT b1.C1 || '|' || NVL(b2.C1, ''), b1.C2, b1.C3, b1.C4, b1.C5, b1.C6, b2.C7, b1.C8, b2.C260100001, b1.C301019600, b1.C260100002, b2.C260100004, b2.C260100006, b2.C260100007, b2.C260100009, b2.C260100010, b2.C260100015, b2.C230000009, b2.C263000050, . . . b1.C530014300, b1.C530010200, b2.C810000272, b1.E0, b1.E1, b2.C1

FROM T525 b1 LEFT JOIN T3973 b2 ON ((b1.C400129200 = b2.C400129200) OR (b2.C400129200 = b1.C179))

 

EXECUTION PLAN

Plan hash value: 3755947183

 

--------------------------------------------------------------------------------------------------

| Id | Operation                 | Name            | Rows   | Bytes |TempSpc| Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------

| 0  | SELECT STATEMENT          |                 | 197K   | 2068M |       | 7929M (1)  | 86:02:37 |

| 1  |  NESTED LOOPS OUTER       |                 | 197K   | 2068M |       | 7929M (1)  | 86:02:37 |

|* 2 |   HASH JOIN               |                 | 98925  | 72M   | 24M   | 163K (1)   | 00:00:07 |

| 3  |    TABLE ACCESS FULL      | T457            | 98357  | 23M   |       | 1011 (1)   | 00:00:01 |

| 4  |    TABLE ACCESS FULL      | T476            | 3528K  | 1722M |       | 73424 (1)  | 00:00:03 |

| 5  |   VIEW                    | VW_LAT_F1632550 | 2      | 20394 |       | 80158 (1)  | 00:00:04 |

|* 6 |    TABLE ACCESS FULL      | T3973           | 2      | 1138  |       | 80158 (1)  | 00:00:04 |

--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

--------------------------------------------------- 

2 - access("B1"."C179"="B2"."C179")

6 - filter("B2"."C400129200"="B2"."C400129200" OR "B2"."C400129200"="B2"."C179")

 

Note

-----

  - dynamic statistics used: dynamic sampling (level=2)

  - 1 Sql Plan Directive used for this statement

 

The same SQL in a few other environments (customer's and BMC's internal ones) showed a better plan that used indexes and ran much faster.

 

The "good" plans were missing the "Note" above that mentions (a) dynamic sampling used and (b) 1 SQL Plan Directive used for the SQL.

 

This indicated that it was possibly the SQL Plan Directive that was responsible for the poor Execution Plan.

 

We checked the database for SPDs associated with the 2 objects above (T525 and T3973) and found that there were 4 table-level directives.

 

SQL> select * from dba_sql_plan_dir_objects where owner = 'ARADMIN' and object_name = 'T3973';

 

DIRECTIVE_ID        OWNER   OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE    NOTES

2625473566913189414 ARADMIN T3973       C400129200     COLUMN

9853893946733075077 ARADMIN T3973       C7             COLUMN

9853893946733075077 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>YES</equality_predicates_only>

                                                                      <simple_column_predicates_only>YES</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>NO</filter_on_joining_object></obj_note>

6009259810806618512 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>NO</equality_predicates_only>

                                                                      <simple_column_predicates_only>NO</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>YES</filter_on_joining_object></obj_note>

2625473566913189414 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>YES</equality_predicates_only>

                                                                      <simple_column_predicates_only>YES</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>NO</filter_on_joining_object></obj_note>

3274712412944615867 ARADMIN T3973                      TABLE          <obj_note><equality_predicates_only>NO</equality_predicates_only>

                                                                      <simple_column_predicates_only>NO</simple_column_predicates_only>

                                                                      <index_access_by_join_predicates>NO</index_access_by_join_predicates>

                                                                      <filter_on_joining_object>NO</filter_on_joining_object></obj_note>

 

We then disabled the SPDs using the following commands (the long number in each command is the SQL Directive Id from above):

 

exec dbms_spd.alter_sql_plan_directive(9853893946733075077,'ENABLED','NO');

exec dbms_spd.alter_sql_plan_directive(6009259810806618512,'ENABLED','NO');

exec dbms_spd.alter_sql_plan_directive(3274712412944615867,'ENABLED','NO');

exec dbms_spd.alter_sql_plan_directive(2625473566913189414,'ENABLED','NO');

 

The result was the faster Execution Plan that had been observed in all the other environments.

 

--------------------------------------------------------------------------------------------------------------------

| Id | Operation                                | Name            | Rows  | Bytes |TempSpc | Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------------------------

| 0  | SELECT STATEMENT                         |                 | 197K  | 2068M |        | 756K (1)   | 00:00:30 |

| 1  | NESTED LOOPS OUTER                       |                 | 197K  | 2068M |        | 756K (1)   | 00:00:30 |

|* 2 |  HASH JOIN                               |                 | 98925 | 72M   | 24M    | 163K (1)   | 00:00:07 |

| 3  |   TABLE ACCESS FULL                      | T457            | 98357 | 23M   |        | 1011 (1)   | 00:00:01 |

| 4  |   TABLE ACCESS FULL                      | T476            | 3528K | 1722M |        | 73424 (1)  | 00:00:03 |

| 5  |  VIEW                                    | VW_LAT_F1632550 | 2     | 20394 |        | 6 (0)      | 00:00:01 |

| 6  |  CONCATENATION                           |                 |       |       |        |            |          |

| 7  |    TABLE ACCESS BY INDEX ROWID BATCHED   | T3973           | 1     | 569   |        | 3 (0)      | 00:00:01 |

|* 8 |     INDEX RANGE SCAN                     | I3973_400129200_1 | 1   |       |        | 2 (0)      | 00:00:01 |

| 9  |    TABLE ACCESS BY INDEX ROWID BATCHED   | T3973           | 1     | 569   |        | 3 (0)      | 00:00:01 |

|* 10|     INDEX RANGE SCAN                     | I3973_400129200_1 | 1   |       |        | 2 (0)      | 00:00:01 |

--------------------------------------------------------------------------------------------------------------------

 

Predicate Information (identified by operation id):

--------------------------------------------------- 

  2 - access("B1"."C179"="B2"."C179")

  8 - access("B2"."C400129200"="B2"."C179")

  10 - access("B2"."C400129200"="B2"."C400129200")

       filter(LNNVL("B2"."C400129200"="B2"."C179"))

 

As one can see, the plan no longer uses dynamic sampling or any SPDs.

 

There are numerous ways to not have SQL Plan Directives affect query performance.

 

Database level:

    (a) set optimizer_features_enable = '11.2.0.4'.
        NOTE: This disables ALL 12c optimizer features.

    (b) set optimizer_adaptive_features = FALSE.
        NOTE: This disables ALL 12c adaptive features, which may be too wide an option.

SQL Directive level:

    exec dbms_spd.alter_sql_plan_directive(<insert directive_id here>,'ENABLED','NO');
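As a sketch of database-level option (b), assuming an Oracle 12.1 instance where this parameter still exists as a single setting (from 12.2 onwards it is split into optimizer_adaptive_plans and optimizer_adaptive_statistics):

    ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;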


Introduction

In this series we're looking at how to set up the Elastic Stack to collect, parse, and display data from our Remedy logs.  So far we've covered:

 

  • Part 1 - setting up Elasticsearch, Kibana and Filebeat to collect logs from one or more Remedy servers.
  • Part 2 - adding Logstash and modifying the setup to pass logs through it to Elasticsearch.
  • Part 3 - first steps in using Logstash to enrich the logs with additional data for filtering and visualizations in Kibana.

 

This post will look at adding other Remedy server logs to the data being collected, one way to handle non-standard logs lines, and more advanced use of Logstash filters.

 

More Logs Please

At the moment we're collecting the API and arerror log files from our Remedy server so let's add some more.  To do this we need to modify the Filebeat configuration file on our Remedy server.  The files that are being collected are defined by filebeat.prospectors entries like this one for the arapi.log:

 

- type: log

  enabled: true

  paths:

    - /opt/bmc/ARSystem/db/arapi.log

  fields:

    logtype: arserver

  fields_under_root: true

 

There are several different ways to add additional logs.  We could

  1. create a new prospector entry for each log.
  2. add a new file in the paths: section.
  3. use a wildcard for the filename.

 

However, for our tests, I'm going to change the filename we're reading from to logstash.log and then use the logging options on the Remedy server to control what gets logged.  The advantage of doing it this way is that we can easily change which logs are being sent to Elasticsearch simply by using the same log file name for all of them and turning them on or off.   We won't need to reconfigure and restart Filebeat each time we want to use different log types.

 

Remedy server logging directed to a single file and switchable by using the checkboxes.

 

How Many Lines Should a Log Line Log?

While we're here I'd also like to look at how we can handle log entries that span multiple lines.  At the moment we're only interested in the standard Remedy server logs, which all have the less-than symbol as the first character of the line and share the same format for the first few fields:

 

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

<SQL > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...

<FLTR> <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service...


If you take a look at some sample logs you'll see that although the majority of lines follow this standard, there are some exceptions such as:

 

  • stack traces

<SQL > <TID: 0000000509> <RPC ID: 0000135141> <Queue: Fast      > <Client-RPC: 390620   > <USER: markw...

  com.bmc.arsys.domain.etc.ARException: ERROR (302): Entry does not exist in database

  at com.bmc.arsys.server.persistence.entry.impl.SQLHelperImpl.executePreparedStatement_aroundBody30(SQLHelperImpl.java:1121) [bundlefile:9.1.04-SNAPSHOT]

  at com.bmc.arsys.server.persistence.entry.impl.SQLHelperImpl$AjcClosure31.run(SQLHelperImpl.java:1) [bundlefile:9.1.04-SNAPSHOT]

 

  • some FLTR Set Fields and Notify action entries where text being processed can appear

<FLTR> <TID: 0000000334> <RPC ID: 0000135942> <Queue: Fast      > <Client-RPC: 390620   > <USER: markw...

z5VF_Message (304384301) = <html>

test text


In these cases the lines that don't start with < are continuations of the last line that does and, with our current configuration, will not be parsed by our Logstash grok filter.  They will be added to Elasticsearch as records without any of the new parsed data fields we're creating.  Both Logstash and Filebeat have methods to deal with these multi-line messages but the recommendation is to do this as early in the pipeline as possible.  For our logs, whenever we come across a line that does not start with <, we want to include it as part of the last line that does.  The Filebeat link above explains the details of how this is configured for each prospector definition.

 

The filebeat.yml updates we need to make to our API log file prospector, for the file name change and the multi-line processing, are the path and the three multiline settings shown below:

 

- type: log

  enabled: true

  paths:

    - /opt/bmc/ARSystem/db/logstash.log

  fields:

    logtype: arserver

  fields_under_root: true

  multiline.pattern: '^<'

  multiline.negate: true

  multiline.match: after

 

If you want to be very thorough you could make the pattern more selective, as it is possible that the continuation lines may also start with <.  If you want to try this, change the multiline pattern regex to match lines starting with < followed by one of the recognised log type values:

 

    multiline.pattern: '^<(FLTR|SQL|API|ESCL|FTI|USER|THRD|ALRT|SGRP)'

 

Restart Filebeat and you should see that multi-line data, such as Java stack traces, is contained in a single record rather than the one record per line they would be otherwise:

 

Getting More Data From Log Lines

Now that we've changed Filebeat to pick up data from logstash.log go ahead and enable SQL or some of the other log types to that file and see what appears in Kibana:

 

Here we have API, SQL and FLTR logs.

 

Looking at the data above it would be nice to try and enhance our Logstash filter to capture some additional information such as the API type, schema or form names, and so on.  To do this we need to go back and tweak our Logstash grok filter.

 

At the moment we're handling all the fields up to the timestamp, storing the remainder of the line in log_details.

 

filter {

  grok {

    match => {"message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$"}

  }

}

 

Next up is the timestamp, so what can we do with that?  We could just skip it and rely on the @timestamp field that's already part of the Elasticsearch record.  However, you'll notice that these are not the same as the Remedy timestamps, as they're added when Filebeat processes the log lines and so lag by a small amount.  Before we continue, though, there's one other thing we need to consider.

 

It's Just a Matter of Time...

Here's where we hit our first gotcha with Remedy log lines.  Compare the log details below:

 

<API > <TID: 0000000336>...     /* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields -- schema AR System....

<SQL > <TID: 0000000336>...     /* Fri Aug 03 2018 11:31:17.7820 */ SELECT t0.schemaId, t0.name, t0.overlayGroup, t0.schemaType...

<FLTR> <TID: 0000000336>...     /* Fri Aug 03 2018 11:30:58.0360 */ End of filter processing (phase 1) -- Operation - GET...

<FLTR> <TID: 0000000336>...     --> Passed -- perform actions

 

Oops - no timestamp on some FLTR lines!  These variations will start to crop up more frequently as we dig deeper into the details of the different log formats.  Of course it's unavoidable at some point as the different log types are recording fundamentally different information.  We can usually handle this by building filters for specific log types but, in this case, we're just going to ignore it for the moment.  If you're collecting FLTR logs you'll have to accept that some of them won't have a Remedy generated timestamp.  The net effect of this is that it won't be possible to display all the lines in the correct sequence in Kibana if sorting by this timestamp as all of the records without it will be out of sequence.  In most cases the system @timestamp should be a good enough alternative.  There's a similar problem for lines logged with identical timestamps.  There's no guarantee which order they will be displayed when sorted on matching values so, for example, you may notice the occasional SQL OK before a SELECT.

 

We need a grok pattern to match our Remedy ICU format timestamp of EEE MMM d yyyy HH:mm:ss.SSSS. Unfortunately there isn't a standard one available so we need to define a custom pattern as detailed in the docs.   Using the grok debugger we can build a pattern to create a field called remedy_timestamp:
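The custom part combines a named capture with the built-in DAY, MONTH, MONTHDAY, YEAR and TIME patterns; it is the same expression that appears in the full filter below:

/\* (?<remedy_timestamp>%{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME}) \*/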

 

As not all of our log lines have the timestamp present we can't just add our new pattern to the existing filter.  The grok only works if the line matches the pattern so lines without this data would not be parsed into our extra data fields.  We need a second grok in the filter {...} block of our logstash.conf:

 

filter {

  grok {

    match => {"message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$"}

  }

 

  grok {

    match => {"log_details" => "^/\* (?<remedy_timestamp>%{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME}) \*/%{SPACE}%{GREEDYDATA:log_details}"}

    overwrite => ["log_details"]

  }

}

 

This new filter searches the log_details field, looking for /* at the start of the line (note that we have to escape the asterisk as it's a recognised regex character), and then parses the timestamp before assigning the remainder of the line back to log_details.  The second line uses the grok overwrite option, otherwise the field would be turned into an array containing both the old and the new strings.

 

We have the timestamp but it's not in the correct format because the time is specified to four decimal places and Elasticsearch uses three. It's time to introduce a couple of new filter plugins.

 

mutate the date

The mutate plugin has a range of features to modify data in fields and we can use one of these, gsub, to remove the last digit from the string:

 

  mutate { # remove the last digit so we're left with milliseconds

    gsub => ["remedy_timestamp", "\d{1}$", ""]

  }

 

This will turn "Tue Jul 31 2018 11:21:39.8570" into "Tue Jul 31 2018 11:21:39.857".  Now we use the date plugin to convert this to the Elasticsearch timestamp format:

 

  date {

    match => ["remedy_timestamp", "EEE MMM dd yyyy HH:mm:ss.SSS"]

    target => "remedy_timestamp"

  }

 

After adding both of these below our new grok, restart the Logstash container..

 

# docker-compose -f elk.yml restart logstash

Restarting logstash ... done

 

and take a look at the logs in Kibana to see the new field.  Remember that we need to refresh the index pattern so that it is recognised correctly:

 

before refresh

 

and after

 

 

Further API Log Parsing

We're now adding extra fields to our Elasticsearch index for all of the common log markers up to, and including, the timestamp.   For API type logs this is the type of information that's left in log_details after this parsing:

 

+GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component from Approval Server (protocol 26) at IP address 10.133.181.82 using RPC // :q:0.0s

-GLEWF OK

 

We're going to add some new grok patterns to capture the following:

  • API call name
  • schema or form being used
  • client type
  • client protocol
  • client IP address
  • client transport
  • API queue time

 

Remember that we may be handling more than just API log lines though. We don't want to try and parse SQL or FLTR lines with an API specific regex as it won't match and would just be a waste of CPU cycles.  We can use conditional clauses to help streamline the code in our filter {...} block.  The following filters are only run if the log_type is API and they get the type of call along with adding a tag to mark the line as the start or end of an API:

 

if [log_type] == "API" {

 

  grok { # Mark the start of an API

    match => ["log_details", "^\+%{WORD:api_call}"]

    add_tag => ["API_Start"]

  }

 

  grok { # Mark the end of an API

    match => ["log_details", "^\-%{WORD:api_call}"]

    add_tag => ["API_End"]

  }

 

}

 

With the start and end tags we can now go mining for all the other nuggets using the set of filters below.  They're broken down into multiple filters because the exact format of the line varies by the API call.  For example, a GSI call does not have a form or schema recorded and some internal calls are missing the :q: value, so we need separate filters otherwise the pattern wouldn't match and nothing would be gathered.

 

if "API_Start" in [tags] {

 

  grok { # Specific to +API entries we get the client details

    match => ["log_details", " from %{DATA:client_type} \(protocol %{DATA:protocol}\)"]

  }

 

  grok { # and schema/form

          match => ["log_details", " \-\- (schema|form) %{DATA:form} (from|entry|fieldId|changed|# of)"]

  }

 

  grok { # client IP

    match => ["log_details", " at IP address %{IP:client_ip}"]

  }

 

  grok { # the API transport

    match => ["log_details", " using %{WORD:api_transport} \/\/"]

  }

 

  grok { # the API queue time

    match => ["log_details", "\/\/ :q:%{NUMBER:api_qtime:float}s"]

  }

 

}

 

The queue time filter shows how to set the type of a field in Elasticsearch as the default is text and automatic detection of numbers doesn't always work reliably.

 

Our final filter handles the API end lines and captures errors if they're present.

 

if "API_End" in [tags] {

 

  grok { # catch failing API and the related error

    match => ["log_details", "^\-%{DATA}-- AR Error\(%{DATA:arerror_number}\)%{SPACE}%{GREEDYDATA:arerror_message}$"]

  }

 

}

 

Our updated logstash.conf file is now a bit big to display in full so it's attached at the end of this post.  Once we've restarted Logstash and refreshed the index pattern we can now see all the additional fields in Kibana:

 

Wrapping Up

That's it for this post.  I had planned on showing some visualizations using our new fields but I think this is long enough for now!  I hope these posts have given you a good idea of the type of data it's possible to extract from Remedy logs, and provided enough detail to help you get started if you want to give it a go yourself.  There are many ways to extend what we've done so far, for example you could:

 

  • Write filters to parse other log types, or look at other plugins to see what they could do.
  • Break down SQL logs by the type of command being run and the table being used.
  • The full text indexer logs record the number of records in the ft_pending table, which could be used to watch for a backlog developing.
  • There are plugins that you can use to calculate elapsed time between events - how long are your CreateEntry API calls taking? (See the sketch below.)
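As a sketch of that last idea, the elapsed filter plugin can pair our API_Start and API_End tags and record the time between them.  Something along these lines inside the filter {...} block should work - you may need to install the plugin first (bin/logstash-plugin install logstash-filter-elapsed), and rpc_id is assumed here to be a good enough unique id to tie a start and end line together:

  elapsed {

    start_tag => "API_Start"

    end_tag => "API_End"

    unique_id_field => "rpc_id"

    timeout => 60

  }

The matching end event is given an elapsed_time field (in seconds) which could then be used in Kibana visualizations.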

 

I'm sure there are many other use cases you could think of.

 

As always, questions and feedback of all sorts are welcome.  Happy grok'ing!

 

Mark Walters

 

Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 3


The Story So Far...

In parts one and two of this series of blogs we've seen how to set up the Elastic Stack to collect logs from a Remedy server.  At the end of the last post we had introduced Logstash between our Filebeat collection agent and Elasticsearch so that we're ready to start parsing those interesting pieces of data from the logs.

 

One of the challenges of working with Remedy logs in Elastic is that, although there is some level of standardisation in their format, there's still a wide variety of information present.  Many of the different log types may share the same markers at the beginning of their lines but they then contain very different data from the timestamp onward.   This is exactly what Logstash is designed to deal with by making use of its many filter plugins.  These provide different ways to manipulate data and restructure it so that it becomes queryable beyond simple text searches.  There's one filter plugin in particular that we're going to use to help us grok our Remedy logs.

The grok Logstash Plugin

The documentation for the grok plugin describes it as a way to parse unstructured log data into something structured and queryable.

This is exactly what we want to do so how do we use it?  Logstash ships with the most commonly used filter plugins already installed so there are no additional steps to make it available.

 

grok works by using patterns to match data in our logs.  A pattern is a combination of a regular expression and a variable used to store the value if it matches the search regex.   As an example consider the first bit of data in our API logs - the log type:

 

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 >.......

 

A grok pattern to read this and create a field called log_type in Elasticsearch would be

 

^<%{WORD:log_type} >

 

Let's break it down

  • ^< means we're looking for the < character only at the start of a line
  • the grok pattern syntax uses %{...} to enclose regex:field pairs
  • WORD is one of the many built-in Logstash regular expressions and matches the characters A-Za-z0-9_
  • log_type is the name of the field that the value will be assigned to in Elasticsearch

 

When a log line matches the pattern, that is it starts with < and has a string followed by a space and then >, the value of the string will be added as a field called log_type.

 

We can add more patterns to match the next piece of data on the log line, the thread ID:

 

^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}>

 

  • I've changed the line to include %{SPACE} (another built-in pattern matching 0 or more spaces) instead of an actual space character because, if this was a FLTR log line for example, there would be no space before the closing >
  • > <TID: is the literal text we're expecting
  • DATA is another built in regex

 

Now we will have two fields added to our Logstash records:

 

Logstash field    Value

log_type          API
tid               0000000336

 

We can continue to build up the patterns until we have all the data we want from the log line.  Developing these patterns can be complex but there are a number of tools available to help you; there's even one in Kibana.  Click on the Dev Tools link in the left hand panel and then Grok Debugger.

 

Here's the grok pattern for a complete API log line shown in the debugger and you can see the resulting field names and values in the Structured Data window:

 

Note the new patterns used, %{NUMBER:overlay_group} to create an integer type field rather than a string as this value is only ever a number, and %{GREEDYDATA:log_details} at the end which captures the remainder of the line and assigns it the log_details field.

 

^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:}%{SPACE}>%{SPACE}%{GREEDYDATA:log_details}$

 

We now need to add this grok filter definition to our Logstash configuration file which we created in the previous post.  My example was /root/elk/pipeline/logstash.conf which needs to be edited to include the grok filter definition:

 

# cat elk/pipeline/logstash.conf

input {

  beats {

  port => 5044

  }

}

 

filter {

  grok {

    match => {"message" => "^<%{WORD:log_type}%{SPACE}> <TID: %{DATA:tid}> <RPC ID: %{DATA:rpc_id}> <Queue: %{DATA:rpc_queue}%{SPACE}\> <Client-RPC: %{DATA:client_rpc}%{SPACE}> <USER: %{DATA:user}%{SPACE}> <Overlay-Group: %{NUMBER:overlay_group:}%{SPACE}>%{SPACE}%{GREEDATA:log_details}$"}

  }

}

 

output {

  elasticsearch {

  hosts => "elasticsearch:9200"

  }

}

 

Logstash needs to reload the updated configuration which can be done by restarting it using:

 

# docker-compose -f elk.yml restart logstash

Restarting logstash ... done
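
If Logstash doesn't come back up cleanly it's usually a typo in the filter definition, so it's worth having a quick look at the container logs after the restart and checking there are no configuration errors reported:

# docker logs --tail 20 logstash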

 

So let's see what our logs look like in Kibana now.  Go to the Discover tab and make sure you're looking at the logstash-* index pattern, expand one of the records and, if all went well, you'll see something like this:

Index Pattern Refresh

Our new fields are listed and we can see the values for them from our log line!  There are orange warning flags by the values because they're new fields that are not in the index definition that we're using.  To fix this, click on the Management link, go to Index Patterns, select the logstash-* pattern and click the refresh icon.  You should see that the count of fields increases, and you can page through the field list if you want to see which fields are present.  While we're here I suggest clicking the star icon to set logstash-* as the default index pattern so that you don't have to keep switching it from filebeat-* on the Discover page.

 

 

Reload the Discover page and the warnings should have gone.  The index pattern refresh is something that needs to be done each time new fields are added.

 

Making Use of Remedy Specific Fields in Kibana

 

Now that we're enriching our log line records with Remedy data we can start doing some more interesting things, such as...

Filtering

 

Remember in Part 1 we saw how to filter the log data using the list of fields?  Now we have some fields which are relevant to our application, such as the user or RPC queue that the logged activity belongs to, so let's see how we can use them to isolate the actions of a single user.

 

I'm going to log in to my Remedy server as Demo so let's set up the Discover page to see what I'm up to.  In addition to applying a filter from the field list you can use the Add a filter + link to get an interactive filter builder, or you can use the search bar at the top of the screen.  To filter for log lines with a user field value of Demo, enter user:Demo in the search bar and press return.  Assuming there are matching logs within the current time window you should get a count of the hits and the lines will be displayed.  To make it a bit easier to see what's going on, hover over the log_details field name and click add to show this field in the main pane.  Finally, let's turn on auto-refresh of the search results so we can monitor our actions as they happen.  Click on the Auto-refresh link at the top of the page and select 5 or 10 seconds.  Now go ahead and log in to Remedy as Demo and see what happens as you work.

 

 

Each time the screen refreshes you should see the results updated with the latest log entries and the timeline shows the number of matching log lines in that period.  There are many filtering options available so see what you can find by using the different fields we've added to look for specific types of activity.
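
The search bar accepts Lucene-style query syntax, so the new fields can also be combined.  As a rough illustration (the values below are just examples from my test data), queries like these will narrow things down further:

user:Demo AND log_type:API
user:Demo AND rpc_queue:"Prv:390680"
log_details:ARGetListEntryWithFields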

 

Visualizations

 

Search and filtering are helpful but not very exciting to look at so how about some graphical representations of our data?  Click on the Visualize link in the left hand pane and then Create a visualization to see the range of formats available.

 

 

Let's start with a Pie chart: click on the icon, select the logstash-* index and you should see this

 

 

This is simply a count of the number of records so we need to provide some additional options to make it a bit more interesting.  At this point it's worth switching off the auto-refresh, if you have it set, to avoid the graphics being refreshed as we experiment with them.

 

Click on Split Slices, choose Significant Terms as the Aggregation and rpc_queue.keyword as the Field.  Set the Size to 10 and click the play icon at the top of the panel.

Here we can see how much of our log activity belongs to the different RPC queues in our server.

 

Change the Field and try some of the others, such as user.keyword.

 

The data used to create the graphics can be filtered just as on the Discover page and the time range can also be adjusted as required.

 

With so many different visualization types available you'll be able to view your logs in a variety of different ways, and a future post will look at how to get even more of the log line data into fields so that it can be used this way.

 

Wrapping Up

This post builds on the previous two and shows how to start using Logstash to enrich the data being collected from our Remedy server using the powerful grok filter plugin.  In future posts we'll look at using it further to get even more information from our API log lines, and then extend it to other types of logs such as SQL and FT Index.  With all of this extra data we can build even more complex graphics to help us visualize and analyse our systems.

 

Comments, suggestions and questions welcome.

 

Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 4

Share:|

Introduction and Recap

In the first post we saw how to set up Filebeat to collect Remedy logs and send them to Elasticsearch where they could be searched and viewed using Kibana.  This was a good start and could easily be extended to include additional logs and servers.  For example, a single Elastic Stack instance could be used to aggregate logs from a number of Remedy servers and associated mid-tiers.  Once the logs have been collected they can be filtered using a combination of Elastic fields, such as source for the log file name, beat.name or beat.hostname for the originating host, and the logtype field that we added in the filebeat.yml configuration file.

 

Having a single system to gather logs is a good first step but perhaps there's more that we can do with the data being collected?  Would it be helpful to be able to see the types of error codes that our servers are producing and their frequency?  How about the types and versions of the clients that are connecting?  There's a wide variety of data in the logs but how do we get it in a format that we can use?

 

This post will look at what's needed to get our test system ready to do this.  We need some way to take the separate pieces of information on each log line and break it out so that we can make more use of it in Kibana.  Fortunately, there's a tool to do just that...

 

Enter Logstash...

Logstash is another component in the Elastic stack and its job is to take data and transform it through a variety of filter plugins as it passes through the program.

 

Let's look at a few AR Server API log lines of the type that we collected in the example setup in Part 1.

 

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component from Approval Server (protocol 26) at IP address 10.133.181.82 using RPC // :q:0.0s

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8590 */ -GLEWF OK

<API > <TID: 0000000333> <RPC ID: 0000021385> <Queue: List > <Client-RPC: 390620 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:18.3550 */ +GLE ARGetListEntry -- schema AR System Email Messages from E-mail Engine (protocol 26) at IP address 10.133.181.82 using RPC // :q:0.0s

<API > <TID: 0000000333> <RPC ID: 0000021385> <Queue: List > <Client-RPC: 390620 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:18.3570 */ -GLE OK

 

Do you notice the pattern in the format of the lines?  They all share many common elements that contain potentially interesting information that we may want to use to help analyse our system.  The same format is shared by many of the AR Server logs such as SQL, filter, escalation and so on.  The data up to the timestamp is fixed and the log specific information is added after this point.  The log format is documented here.

 

Examples of the common elements are

 

<API >                                        The log type - API, SQL, FLTR, ESCL, THRD, USER and more.

<TID: 0000000336>                             Thread ID - a unique number assigned to each thread when it is created.

<RPC ID: 0000021396>                          RPC ID - a unique number assigned to each API call and shared by all related activity in the API.

<Queue: Prv:390680>                           The type/number of the RPC queue that the thread belongs to - fast, list, private etc.

<Client-RPC: 390680 >                         The optional RPC queue that was requested by the client in the API call.

<USER: Remedy Application Service >           The user performing the activity.

<Overlay-Group: 1 >                           Overlay or base object mode actions.

/* Tue Jul 31 2018 11:21:39.8570 */           Timestamp

 

Before we can start getting these from our logs we need to add Logstash to our setup and get them passing through it.

 

Installing Logstash

As we did in Part 1 I'm going to use the container version of Logstash and run it on the same system as the existing Elasticsearch and Kibana containers.  This should be fine for test purposes assuming you have at least 8GB of memory available for the three containers.  We need to add a new entry in the services section of the elk.yml file we created previously that contains

 

  logstash:

    image: docker.elastic.co/logstash/logstash:6.3.2

    container_name: logstash

    ports:

      - 5000:5000

 

Let's start the container and confirm it works with

 

# docker-compose -f elk.yml up -d

Pulling logstash (docker.elastic.co/logstash/logstash:6.3.2)...

6.3.2: Pulling from logstash/logstash

7dc0dca2b151: Already exists

a9821ac0463b: Pull complete

6af182068d2d: Pull complete

7ad725483188: Pull complete

d5c1c85bef83: Pull complete

890f4724f175: Pull complete

dcfd1014e6a6: Pull complete

0b58f4123258: Pull complete

d82a06d8e668: Pull complete

a87e8d5f72c3: Pull complete

df6b8528f94e: Pull complete

Digest: sha256:838e9038388e3932f23ac9c014b14f4a1093cd8e5ede7b7e0ece98e517d44fb2

Status: Downloaded newer image for docker.elastic.co/logstash/logstash:6.3.2

kibana is up-to-date

elasticsearch is up-to-date

Creating logstash ... done

 

Using the docker ps command we can show the running containers:

 

# docker ps

CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                            NAMES

4f4d0963ef64        docker.elastic.co/logstash/logstash:6.3.2             "/usr/local/bin/dock…"   15 seconds ago      Up 14 seconds       5044/tcp, 0.0.0.0:5000->5000/tcp, 9600/tcp       logstash

fb6abb7f3a2b        docker.elastic.co/elasticsearch/elasticsearch:6.3.2   "/usr/local/bin/dock…"   6 hours ago         Up 6 hours          0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch

c08b71ce71eb        docker.elastic.co/kibana/kibana:6.3.2

 

As a final test to confirm it's working, access Kibana and click on the Monitoring link on the left.  You may get prompted to enable monitoring; accept this and you should see the status of your Elastic stack.

 

Configure Logstash

Now that Logstash is running we need to configure it to accept data from Filebeat running on our AR Server, process the data to find the key pieces of information we're interested in, and then send it to Elasticsearch.  We'll do this using a text file called logstash.conf to which we'll add the necessary settings.  Full details of the configuration options and how Logstash works are documented on the Elastic site.  It's a very flexible tool with many more features than we'll be using and I'd encourage you to have a look and see what it's capable of.

 

The first step is to define an input and output setting.  Create a new file called logstash.conf in an empty directory (I'm using /root/elk/pipeline - you'll see why shortly) and add the following:

 

input {
  beats {
  port => 5044
  }
}

 

output {
  elasticsearch {
  hosts => "elasticsearch:9200"
  }
}

 

The input definition creates a listener on port 5044 to accept data from a Beats client, Filebeat in our case.  The output section provides the details of our Elasticsearch server where we're sending the data.  Let's test this before we go any further and confirm that we can still see our log data.  Remember that our Logstash is running in a container so how do we get it to use the configuration file that is outside of that container?  We use a Docker feature called volumes which maps data from the host file system into a container.

 

Edit your elk.yml file and add the volumes and additional ports entries shown below.

 

  logstash:

    image: docker.elastic.co/logstash/logstash:6.3.2

    container_name: logstash

    volumes:

      - /root/elk/pipeline:/usr/share/logstash/pipeline

    ports:

      - 5000:5000

      - 5044:5044

 

The volumes: option maps the contents of the host /root/elk/pipeline directory so that they appear in /usr/share/logstash/pipeline in the container and the ports: change makes port 5044 available from outside of the container so that Filebeat can connect.  You can use any directory you like on the host but it should contain the logstash.conf file we created above.

 

Restart the container with

 

# docker-compose -f elk.yml up -d

Recreating logstash ... done
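
As an optional check, the container logs should show the pipeline starting and the Beats input listening on port 5044 - the exact messages vary by version, but something along these lines will filter them out:

# docker logs logstash 2>&1 | grep -iE "pipeline|beats"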

 

We also need to update the /etc/filebeat/filebeat.yml file to use Logstash in place of Elasticsearch.  Hop over to your AR Server system console and edit the file so that it appears as below, where hostname is the name of the machine where your containers are running.

 

#Comment out this bit

#output.elasticsearch:

  # Array of hosts to connect to.

#  hosts: ["hostname:9200"]

 

#----------------------------- Logstash output --------------------------------

output.logstash:

  # The Logstash hosts

  hosts: ["hostname:5044"]

 

Restart Filebeat

 

# systemctl restart filebeat
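
Filebeat also has built-in test subcommands which, although not required, are a handy way to confirm that the configuration parses and that the Logstash output is reachable:

# filebeat test config
Config OK
# filebeat test output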

 

To make sure Elasticsearch is receiving data via Logstash we need to create a new index pattern in Kibana as detailed in the previous article.  When you go to the Management -> Index Patterns -> Create index pattern page you should see a new index called logstash-YYYY.MM.DD

 

 

Create a new index pattern called logstash-* and choose @timestamp as the Time Filter field name.  Switch back to the Discover page and you can use the drop down menu under the Add a filter + link to switch between the filebeat-* and logstash-* index data.

 

 

What you should find is that the log data from your Remedy server is now being added under the logstash-* index.  Switching between the index patterns will allow you to compare the records and you should find they look pretty much the same other than the _index field value.

 

Conclusion and Next Steps

That's it for Part 2 of our Remedy/Elastic cook book.  In Part 1 we saw how to setup Elasticsearch, Kibana and Filebeat to collect logs from a Remedy server.  We were able to search and filter the log data but not much more.  In this post we've introduced Logstash and reconfigured Filebeat to pass logs through it to Elasticsearch.  This provides the foundation we need to start parsing our logs and that's what will be coming up in Part 3.

 

Please leave feedback and questions below and let me know if you find any problems when trying to follow the steps above.

 

Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 3

Using the Elastic Stack with Remedy Logs - Part 4

Share:|

Introduction

This is the first in a series of articles that will look at how you can use the Elastic Stack to gather, view, and analyse the logs from Remedy products.  In this post I'll start by showing how you can set up the software and enable your choice of logs to be read and forwarded to Elastic so that they can be searched easily.  Later posts will show how lines from some of the logs can be parsed to split out interesting bits of data that will then be used to visualize the activities contained in the logs.

 

What is the Elastic Stack?

Formerly known as ELK, the Elastic Stack is a set of open source software components that will take data from many sources, format it, and make it available for searching, analysis and visualization in real time.  The main components are

 

  • Beats - lightweight collection and shipping agents that gather data and forward it to Elasticsearch or Logstash.
  • Logstash - collects and processes data in many formats, allowing parsing and enrichment.
  • Elasticsearch - the indexer and search engine used to store the data gathered by Beats and Logstash.
  • Kibana - an interface to Elasticsearch providing many types of visualization to help analyse and understand the data.

 

Why Use Elastic to Collect Remedy Logs?

Anyone who has supported a Remedy system will know that there are many different logs that need to be reviewed both during normal usage and when investigating a problem.  If your environment uses a server group  then the challenge is multiplied as each server will produce its own set of these logs.  Wouldn't it be nice to have these logs collected and forwarded to a single system where they could be searched and viewed without having to hop from console to console on each server?  This is what we're going to do with Elastic.

 

Setting Up Elasticsearch and Kibana

The first step is to set up Elasticsearch and Kibana to index, search and view our logs.  Once this is done we'll add one of the Beats modules to a Remedy server and configure it to send a selection of logs to Elasticsearch.  In a later post I'll add Logstash to extract more useful information from the logs before the data is indexed.

 

There are many options for installing the software.  You can download versions for many platforms, try a cloud based system or use a ready installed virtual machine. For this article I'm going to use the containers that are available for use with Docker as they include all the necessary supporting software such as Java.

 

I have a Linux VM running CentOS 7 and I've already installed the docker software.  If you're using a different platform or Linux version there should be a version available that you can use.

 

# docker version

Client:

Version:      18.03.1-ce

API version:  1.37

Go version:   go1.9.5

Git commit:   9ee9f40

Built:        Thu Apr 26 07:20:16 2018

OS/Arch:      linux/amd64

Experimental: false

 

The Elastic website has detailed instructions if you run into problems  (Elasticsearch and Kibana)  but it should just be a case of pulling the images and starting the containers.

 

# docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.2

6.3.2: Pulling from elasticsearch/elasticsearch

7dc0dca2b151: Pull complete

72d60ff53590: Pull complete

ca55c9f7cc1f: Pull complete

822d6592a660: Pull complete

22eceb1ece84: Pull complete

30e73cf19e42: Pull complete

f05e800ca884: Pull complete

3e6ee2f75301: Pull complete

Digest: sha256:8f06aecf7227dbc67ee62d8d05db680f8a29d0296ecd74c60d21f1fe665e04b0

Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:6.3.2

 

# docker pull docker.elastic.co/kibana/kibana:6.3.2

6.3.2: Pulling from kibana/kibana

7dc0dca2b151: Already exists

8ce241372bf3: Pull complete

d6671bb44c0a: Pull complete

f3522d60bffe: Pull complete

e4d56fe4aebf: Pull complete

68df91c3f85d: Pull complete

aef1ffed878c: Pull complete

c5aeceb9f680: Pull complete

af84d3e72ccf: Pull complete

Digest: sha256:7ae0616a5f5ddfb0f93ae4cc94038afd2d4e45fa7858fd39c506c8c682ee71f0

Status: Downloaded newer image for docker.elastic.co/kibana/kibana:6.3.2

 

Although it's possible to start each container individually this is a good example of where the docker-compose command can be used to configure and start them together.  This makes sure we have the necessary setup to allow Kibana to access Elasticsearch.

 

Create a file called elk.yml with these contents:

 

version: '3'

services:

  kibana:

    image: docker.elastic.co/kibana/kibana:6.3.2

    container_name: kibana

    ports:

      - 5601:5601

 

  elasticsearch:

    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2

    container_name: elasticsearch

    environment:

      - discovery.type=single-node

    ports:

      - 9200:9200

      - 9300:9300

 

Now both containers can be started with

 

# docker-compose -f elk.yml up -d

Starting elasticsearch ... done

Starting kibana        ... done

 

Check that you can access Kibana by pointing your browser to http://hostname:5601 where the hostname is the machine that is running the containers, and you should see the interface.
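
You can also confirm that Elasticsearch itself is up by querying its REST endpoint (output abridged - the exact fields will vary with your setup):

# curl http://hostname:9200
{
  "name" : "...",
  "cluster_name" : "docker-cluster",
  "version" : { "number" : "6.3.2", ... },
  "tagline" : "You Know, for Search"
}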

We now have the software running to store and search our logs but we need to setup a Beats client to send logs to Elasticsearch.

 

Install and Configure Filebeat on the Remedy Server

Beats are a collection of different data collecting and shipping agents that can be used to forward data into Elasticsearch.  There are Beats available for network data, system metrics, auditing and many others. As we want to collect data from Remedy log files we're going to use Filebeat.

 

This client needs to be running on our Remedy server so that it has access to logs in the ./ARSystem/db and other directories.  We could also use the container version of Filebeat but I'm going to install it directly on the server.  You may need to set up the Elastic repository on your server and then simply

 

# yum install filebeat

  Installing : filebeat-6.3.2-1.x86_64              

  Verifying  : filebeat-6.3.2-1.x86_64                     

Installed:

  filebeat.x86_64 0:6.3.2-1                                                                                                                                                                                                                                               

With the client installed we need to configure the logs we want to send to our Elasticsearch server by editing the /etc/filebeat/filebeat.yml file.  Which logs you choose is up to you and, whilst it's possible to use a wildcard definition to monitor all files in a directory, let's start with an example of the arerror.log and the server API log.  Each file is defined with an entry in the filebeat.prospectors section:

 

filebeat.prospectors:

 

# Each - is a prospector. Most options can be set at the prospector level, so

# you can use different prospectors for various configurations.

# Below are the prospector specific configurations.

- type: log

  enabled: true

  paths:

    - /opt/bmc/ARSystem/db/arerror.log

  fields:

    logtype: arserver

  fields_under_root: true

 

 

- type: log

  enabled: true

  paths:

    - /opt/bmc/ARSystem/db/arapi.log

  fields:

    logtype: arserver

  fields_under_root: true

 

There's an extra fields: section for each entry to add a logtype identifier that will help us find the entries in Elasticsearch.  This is optional and by no means necessary but may be helpful as the setup becomes more complex.  Each prospector entry causes Filebeat to effectively run a tail on the file and send data to Elasticsearch as the application that creates the log writes to it.  Similar configuration entries could be used for other component logs such as the Email Engine or Tomcat.
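
For example, if you also enable server-side SQL logging to a file, a prospector for it follows exactly the same pattern - the path below assumes the default arsql.log name and location, so adjust it to match your system:

- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arsql.log
  fields:
    logtype: arserver
  fields_under_root: true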

 

We also need to tell Filebeat where to send the logs and this is done further down in the file where we need to uncomment the setup.kibana and output.elasticsearch entries and add our server.

 

setup.kibana:

  # Kibana Host

  # Scheme and port can be left out and will be set to the default (http and 5601)

  # In case you specify and additional path, the scheme is required: http://localhost:5601/path

  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601

  host: "hostname:5601"

 

output.elasticsearch:

  # Array of hosts to connect to.

  hosts: ["hostname:9200"]

 

Change hostname to the name of the server where your Elasticsearch and Kibana containers are running.

 

We're almost there.  The arerror.log is always on but we need to go to the AR System Administration: Server Information form and enable API logging.  Remember that Filebeat is reading the log as it is written so we can set a small log file size, 10MB for example, and choose the option to Append to Existing so that only a single file is used.

 

 

Now start Filebeat...

 

# systemctl start filebeat

# tail /var/log/filebeat/filebeat

2018-07-31T06:52:38.716-0400 INFO log/harvester.go:216 Harvester started for file: /opt/bmc/ARSystem/db/arapi.log

2018-07-31T06:52:38.716-0400 INFO log/prospector.go:111 Configured paths: [/opt/bmc/ARSystem/db/arerror.log]

2018-07-31T06:52:38.716-0400 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 2

2018-07-31T06:52:38.716-0400 INFO cfgfile/reload.go:127 Config reloader started

2018-07-31T06:52:38.718-0400 INFO cfgfile/reload.go:219 Loading of config files completed.

2018-07-31T06:52:38.742-0400 INFO elasticsearch/elasticsearch.go:177 Stop monitoring endpoint init loop.

2018-07-31T06:52:38.742-0400 INFO elasticsearch/elasticsearch.go:183 Start monitoring metrics snapshot loop.

2018-07-31T06:52:39.727-0400 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.3.2

2018-07-31T06:52:39.730-0400 INFO template/load.go:55 Loading template for Elasticsearch version: 6.3.2

2018-07-31T06:52:39.802-0400 INFO template/load.go:89 Elasticsearch template with name 'filebeat-6.2.4' loaded

 

Configure Kibana

The final step is to access Kibana and configure it to display the data being sent from our Remedy system.

 

Point your browser at Kibana as above and click on the Discover link at the top left hand side.  This will take you to the index pattern definition page.

 

 

There should be a new filebeat index available so enter filebeat-* in the index pattern box and click > Next Step

 

 

Choose @timestamp from the Time Filter field name drop down menu and click Create index pattern.  Once the index pattern is created click on the Discover link again to return the main page where you should be able to see log data from your Remedy server.

 

 

Browse Remedy Logs

The graph along the top of the screen shows the number of log entries received over, by default, the last 15 minutes.  You can change the time frame by clicking on the Last 15 minutes link at the top of the screen where you can choose from a range of pre-configured ranges or specify your own.

 

To see more details about the log entries click on the arrow to the left of the timestamp in the main panel.  This will expand the entry and show all the fields in the record.

 

Here you can see the full log line recorded in the message field - in this case it's an API log entry for a +GLEWF call.

 

Click the arrow again to collapse the record and then you can refresh the list with F5 or by clicking in the search box and pressing enter.

 

Filter Data and Customise the Display

You can change the fields that are displayed in the main pane by hovering over the field names in the column to the left of the screen - an add button will appear which will cause the selected field to be added to the table.

If you click on a field name you will see the top 5 most common values and you can use the + spyglass icon to add a filter limiting the display to matching records.  Here's what it looks like after adding the message field to the table and filtering to show only entries from the arerror.log

 

Searching for Errors

As a final example in this post let's see how to use Kibana to search for errors.  Let's create an error by trying to log in to Remedy using an invalid password.  Using the mid-tier login page I tried to log in as Demo and got the following

To remove the filter on the arerror.log hover over the blue oval that shows the filter below the search bar and click the trash can icon.  Now that all records are being displayed, type error in the search bar and press return.  All matching records are displayed and we can see the 623 Authentication failed message in the middle of the screen.

Conclusion and Next Steps

That's it for this post.  We've seen how to install Elasticsearch, Kibana and Filebeat and configure them to record and display data from a couple of Remedy server log files.  We've also seen how to get started viewing and searching these logs using the Kibana interface.  We've only scratched the surface but I hope that you'll be tempted to try this with one of your test servers and see how easy it is.  Try collecting some different logs and experiment with the search and interface options in Kibana.  There are many resources available to help you with this, starting with the official documentation.

 

In future posts we'll see how to add Logstash to our setup and use this to parse data from the logs which we can then use for more advanced analysis and graphical visualization of the logs.

 

If you have any comments, questions or suggestions please add them below and I'll do my best to answer.

 

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 3

Using the Elastic Stack with Remedy Logs - Part 4

Share:|

BMC Support or R&D might ask for a "slow" SQL statement's Execution Plan, and it is possible that the SQL has aged out of Oracle's Library Cache or MSSQL's Procedure Cache, meaning no Execution Plan is available.

 

That would then necessitate generating an Estimated Execution Plan ("Estimated Plan" in MSSQL's SQL Management Studio or "Explain Plan" in Oracle) for the SQL extracted from the AR log file.

 

The SQL does need to be complete and not truncated in order for an Estimated Plan to be generated.

 

To get the complete SQL as it was sent to the database put the following AR parameter in your ar.cfg/ar.conf file(s) and restart the AR Server(s).

 

Enable-Unlimited-Log-Line-Length: T

 

Remedy documentation - https://docs.bmc.com/docs/ars91/en/configuration-settings-e-m-609074231.html

 

Some of you already know this so it may be old news for you! :-)

Share:|

Remedy 9.x/18.x is written in Java. One of the features that was introduced was to avoid sending literal values in Remedy SQL statements as was the case in 8.x and prior versions.

 

Instead the statement is prepared in JDBC with "bind" variables and then sent to the database (this relieves the burden on the Oracle database to replace literals with binds).

 

One drawback to the above fix is addressed in Oracle Doc Id 1451804.1.

 

The drawback/problem arises if you are using the database (Oracle Standard/Enterprise 11.1.0.6 or higher) in Case Insensitive mode, with appropriate Linguistic Indexes in place, and Remedy ends up generating SQL statements with LIKE conditions in them.

 

The Oracle document states that the LIKE condition must be of the form COLUMN LIKE 'CONSTANT%' and NOT COLUMN LIKE :BIND - using a bind variable will cause the optimizer to NOT use a Linguistic Index on the column.

 

SYMPTOM

A query using LIKE 'CONSTANT%' with nls_comp=LINGUISTIC against a column with a linguistic index defined on it produces an efficient plan that uses a range scan on the index.  If a bind variable with the same value is used instead (LIKE :BIND, where :BIND = 'CONSTANT%') the query plan will not use a range scan, resulting in poor performance.  Hinting, if tried, does not help.

 

CAUSE

This is a known limitation of using LIKE with NLSSORT-based indexes.

 

Using an NLSSORT-based index with LIKE 'CONSTANT%' requires transforming the predicate into a range predicate based upon the constant matching string.  For instance, col LIKE 'CONSTANT%' when using LINGUISTIC and BINARY_CI is transformed into NLSSORT("COL",'nls_sort=''BINARY_CI''')>=HEXTORAW('636F6E7374616E7400') AND NLSSORT("COL",'nls_sort=''BINARY_CI''')<HEXTORAW('636F6E7374616E7500').  The values passed to HEXTORAW are derived from the string 'CONSTANT'.

 

This transformation is performed during query parse and requires that the matching pattern be a known constant.  NLSSORT-based functional indexes cannot be used with a LIKE predicate when the matching pattern is based on a bind variable or expression.
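
As a simple illustration (the table, column and index names here are made up and not from any Remedy schema), the difference looks like this:

-- case-insensitive linguistic index on COL
CREATE INDEX t_col_ci ON t (NLSSORT(col, 'NLS_SORT=BINARY_CI'));

-- literal pattern: transformed at parse time into a range predicate, so an index range scan is possible
SELECT * FROM t WHERE col LIKE 'CONSTANT%';

-- bind variable pattern: cannot be transformed at parse time, so the linguistic index is not used
SELECT * FROM t WHERE col LIKE :bind;   -- :bind = 'CONSTANT%'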

 

When cursor_sharing=FORCE (or SIMILAR), all constants are replaced by bind variables before parse, preventing any NLSSORT-based functional indexes being used with LIKE.

 

The above issue is not likely to be fixed by Oracle any time soon.

 

As a result of the above problem BMC has introduced the following check in its code:

==> Is the database Oracle and is it Case Insensitive?

 

If the above is TRUE then the SQL will be sent to the database AS IS (no replacing with binds in JDBC) and the Oracle database, where cursor_sharing = EXACT is necessary, will process the SQL and come up with an optimal execution plan.

 

CONCLUSION: cursor_sharing should be set to EXACT (if running Remedy in Case Insensitive mode) to use the above BMC fix and have the database pick up proper Execution Plans
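
As a sketch only (confirm with your DBA before changing instance parameters), the setting can be checked and, if necessary, changed with:

SQL> SHOW PARAMETER cursor_sharing

SQL> ALTER SYSTEM SET cursor_sharing = EXACT SCOPE=BOTH;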

 

 

Share:|

So what does the Oracle database do when a SQL statement comes in to be processed?

 

A SQL is HARD PARSED the first time it comes into the database:

 

  • SYNTAX CHECK                      Is the SQL grammatically correct?
  • SEMANTIC CHECK                 Do the tables and columns exist? Privileges in place? Any ambiguities (e.g. same column name in two tables not specifying which one the SQL wants)

 

  • Oracle now checks if the SQL statement is already present in the Library Cache. If so, the next steps are skipped

 

  • COST ESTIMATION                      Estimate the Cost of the query. The Plan with the lowest cost is chosen
  • ROW SOURCE GENERATION     Row Source Generator receives the optimal plan from the Optimizer and generates an Execution Plan in a format the SQL engine can use

 

A SOFT PARSE is where the last 2 steps are skipped as the SQL is found in memory.

 

SESSION_CACHED_CURSORS

Soft parses, though less expensive than hard parses, still incur a cost: they need the Shared Pool and Library Cache latches (these are serial operations), which can lead to performance issues in OLTP systems.

To minimize the above impact, session cursors of reused (used > 3 times) SQL can be stored in the Session Cursor Cache (UGA/PGA).

What is actually stored is a pointer to the location in the Library Cache of where the cursor existed when it was closed.

The presence of a cursor in the Session Cursor Cache guarantees the validity of the SQL's syntax and semantics, so the first 2 steps of parsing are skipped.

Instead of searching for the cursor in the Library Cache the server process follows the pointer and uses the cursor (if present and still valid).

One can use the above feature by setting a value for session_cached_cursors (BMC recommends 100).
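
As a sketch, the parameter can be checked and set as follows (it is static at the instance level, so the SPFILE change takes effect after a restart):

SQL> SHOW PARAMETER session_cached_cursors

SQL> ALTER SYSTEM SET session_cached_cursors = 100 SCOPE=SPFILE;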

Share:|

Many moons ago, BMC deprecated the Windows ARUser client in favor of the web-based Mid-Tier client, and since that time there has been a scenario I have repeatedly run into that does not have a satisfactory solution based on out-of-the-box functionality:

 

There is an ARS table on the user's screen. The user needs the data from that table in a spreadsheet.

The built-in Report button that one gets on tables and on results lists yields a UI that ... well one of my users put it best:

"It looks like Radio Shack exploded on my screen!"

For instance, am I to turn loose a warehouse manager, who uses her computer like 5 or 6 times a week, to build her own reports against a form that has display-only fields with names that look similar to fields on the screen?! Don't get me wrong, Smart Reporting has a whole lot of useful features and is a huge improvement, but at the end of the day ...

 

 

bad_time.PNG

 

HTML 5 offers us a lot of interesting capabilities inside modern browsers that we didn't have before. In fact, the entire javascript ecosystem is experiencing something of a cambrian explosion at the moment. One of the new capabilities of HTML5 is the File API which allows us to programmatically construct documents in javascript and make them available for download to the user.
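
To give a flavour of what the File API makes possible - this is just a generic sketch, not the code from the demo attached later in this post - building a CSV in javascript and handing it to the browser as a download boils down to something like this:

// generic sketch: turn an array of row arrays into a CSV string and trigger a download
function downloadCSV(rows, fileName) {
    var csv = rows.map(function (row) {
        return row.map(function (cell) {
            // quote each cell and escape embedded quotes
            return '"' + String(cell).replace(/"/g, '""') + '"';
        }).join(',');
    }).join('\r\n');

    var blob = new Blob([csv], { type: 'text/csv' });
    var link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = fileName;          // e.g. 'DWP-export-1522077583.csv'
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
}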

 

A few weeks ago, I once again, found myself in that scenario I described at the top. I had a custom application with a big table field, and a whole bunch of workflow that lets the user get the data on the screen that they need, and now the user was just basically saying "ok, but now I need this in a spreadsheet, can't you just give me a download button?".

 

This got me thinking. "Good point! Why CAN'T I just do that?!"


All of the data on the screen to drive the ARS table already exists somewhere in a javascript datastructure. Why can't I just snag that, reformat it into something Excel knows how to deal with, and draw a download link on the screen? So I got into a little mild hacking using the mighty, mighty Firefox Developer Edition Javascript Console, and I figured out how to do that.

 

What I'm about to show you is the cleaned up rightest way to do this hack that I can figure out. I'm somewhat of a greenhorn when it comes to javascript, so it's entirely possible there are better ways to do what I've done here. To that end, I've posted my code on github. If you see problems, please make a pull request there, and I'll merge it into the main branch.

 

The basic gist of it is this:

  1. copy some javascript functions into the view header of your form
    For the impatient: here ya go
    view_header.PNG

  2. make an active link that calls one of those functions, using a set-fields action with a run process to put the output of the function into a tmp field
    It's gonna look something like this:
    set-fields-from-js-function-active-link.PNG

  3. make an active link populate a view field with a template, passing the output in the tmp field from step #2
    Here's my version of that template. You may want to inject your own flavor. I'll cover the options my version of the template takes further down. Basically you're just calling the TEMPLATE() function. it'll look something like this:
    populate-template-active-link.PNG
  4. There's now a dialog on the screen where the user can select the columns they want to export and download the freakin' spreadsheet!
    And that'll look something like this:
    demo-export-dialog.PNG

 

You May Ask Yourself, "How Do I Work This?!"

 

Probably the easiest way to get started, is just to download this demo and start playing with it.

 

  1. using Dev Studio, import the def file: ARS_CSVTools_demo_v1_definitions.def
    this will create the following objects on your Remedy Server:
    1. regular form: ahicox:csv:demo:data
    2. display form: ahicox:csv demo:dialog
    3. display form: ahicox:csv demo:gui
    4. active link: ahicox:csv demo:exportAll
    5. active link: ahicox:csv demo:exportSelected

  2. Import Demo Data to ahicox:csv:demo:data
    use the Data Import Tool to import this file: ahicox_csv_demo_data.arx to the ahicox:csv:demo:data form

  3. install the export dialog template
    Create a record in AR System Resource Definitions like this one:

    ar_system_resource_defs.PNG
    so basically:
    1. Attach this file: csvExportDialogTemplate.html
    2. Make sure the Mime Type is "text/html"
    3. Make sure the Type is "Template"
    4. Make sure the Status is "Active"
    5. Make sure the Name is "ahicox:csv demo:template-v1"  <- important!

  4. Open the ahicox:csv demo:gui form through your mid-tier
    demo_ui.PNG
    1. click the Refresh button on the table. This should load a list of all planets from the sprawling longest running (and greatest) sci-fi show of all time Doctor Who. To keep things simple, we just have three columns: Entry ID, Key, and Value. That doesn't really matter, actually. This will work with whatever columns you put on whatever table you want to export. Only caveat being that no two columns should have the same display name (otherwise the export will only show one of them, probably whatever the last one with that name in the column order was, but no promises on that).

    2. The Export All Rows to CSV button will scoop up every row on screen and ship it off to the export dialog
      This button is calling the _exportAllRows javascript function, and assigning the output to the zTmp_JSONData field. The _exportAllRows function takes one argument, which is the field id of the table you want to export. For instance, the fieldid of the table on my demo form is: 536870913, so the set-fields action calls:

      $PROCESS$ javascript: _exportAllRows(536870913);
      
      
    3. The Export Selected Rows to CSV button will scoop up only the selected rows in the table field and ship them off to the export dialog.  This is pretty much the same thing as #2, except it's a different function name:

      $PROCESS$ javascript: _exportSelectedRows(536870913);

      An important note about these javascript functions: if you need to export a table that is on a form embedded in a view field (or embedded several times deep in a view field), you need to insert these functions on the view header of the root level form. So for instance if you wanted to be able to export tables buried on forms in the ITSM Modules, you'd want these functions in the view header of SHR:LandingConsole, rather than the individual child forms.

The Template

The HTML template is populated via the built-in TEMPLATE() function (as illustrated above). These are the arguments that the template takes:

 

  • jsonData
    this is the JSON datastructure returned from either the _exportAllRows() or _exportSelectedRows() functions
  • titleHeader
    this string is shown in the header of the dialog template, adjacent to the "Export as CSV Spreadsheet" button
  • defaultShowColumns
    this is an array in javascript notation, containing the columns you would like to be shown in the preview by default when the user opens the dialog (the user can select additional columns or deselect the defaults once the dialog is open). An example value would be:

     "['columnName1', 'columnName1']"

    NOTE however, if you're building that in workflow, the ticks will be a problem. It'll actually have to be constructed something like this:

     "[" + """" + "columnName1" + """" + ", " + """" + "columnName2" + """" + "]"

    column names are referenced by their DISPLAY NAME, not database name, in this context. All this does is control which of the columns have a checkbox by default when you open the dialog:

    defaultSelected.PNG


  • fileNameBase
    the template will attempt to construct a unique filename each time the user clicks the download button. It will do this by appending the epoch timestamp to the end of the file name at the time the user clicks the button. The fileNameBase argument allows you to give the file a meaningful name that appears before the unique timestamp. For instance

    example fileNameBase value: "DWP-Export"
    resulting file name:        "DWP-export-1522077583.csv"
    
    
    
  • displayLimit
    By default, the dialog is going to show a preview of the first X rows in the export where X is defined by displayLimit. If the number of rows in the export is less than this number, we'll just show all rows. Otherwise we will show only this many with a user-friendly message explaining that.

 

 

To-Do

 

  • Handle Table Chunking
    At present, it'll just scoop up rows visible on screen. For instance in the case of exporting selected rows, it should be possible to hang a hook off of whatever function is called to retrieve the next chunk from the server, export data from selected rows of the previous chunk and append that with additional selections.  Perhaps something also for export all that will programmatically retrieve each chunk from the server and export/append. Just needs a little hackkity hack.

  • CSV Import!
    This should also be possible! Since the HTML5 File API allows us to access the contents of a file specified by the user without uploading it to the server. In theory, I should be able to create a similar dialog that shows the columns in your spreadsheet, the columns in your table and allows you to map them, then hijacks the PERFORM-ACTION-TABLE-ADD-ROW run-process command to add the rows to your table on-screen, so that you can set up your own table walk to push the rows where you want them to go.

    This would beat the living hell out of uploading the file as an attachment, staging it somewhere on the server, asynchronously forking off an AI job out of workflow to import/dispose of the file, and then having an active link loop run every couple seconds to check the status of the AI job. Which is the only other way I'm aware of to handle it right now. And god forbid if the file the user uploaded had the wrong column names or bad data! Good luck catching that exception!

  • Get BMC To Support this
    Look obviously this is unsupported.
    In order to figure this out, I had to pull apart lots of code I dug up off the mid-tier. This entire approach depends on the functions and datastructures in ClientCore.js being stable from version to version. There is no guarantee of that. Therefore BMC could break this at any moment without warning.

    My users like this feature, a whole lot more than the out-of-the-box reporting features. I'd like to be able to continue offering this feature without having to worry that every mid-tier patch we install will potentially break it. At the end of the day, that's actually not a lot to ask. BMC could simply make a function that does this and include it in ClientCore.js. It's pretty simple stuff. Heck. Maybe they could even give us a run-process command to export properly encoded JSON from a table into a field?!

    Anyhow. This is what I know for sure this works on:
    I've successfully tested this on ARS / Mid-Tier 9.1.02.001 201702131133. Against these browsers:
    1. Firefox 57.7.2 (32-bit)
    2. Firefox 10.0b6 (64-bit)
    3. Internet Explorer 11.2125.14393.0
    4. Chrome 65.0.3325.181 (32-bit)
    5. Edge 38.14393.2068.0

 

  • This approach in general could do a LOT of things
    There is pretty much nothing Javascript can't do inside a browser these days. Literally. From real-time 3D rendering to actually spinning up VMs. It's been done, on the client side, in a browser.  So why am I wrestling with cumbersome and poorly implemented server-side processes for mundane stuff like this that I could do entirely in the client? Javascript was BUILT for consuming JSON webservices -- that's REST in a nutshell, and now we have a REST API. All we really need to do some seriously amazing stuff in ARS is a supported interface to ClientCore.js and a way to get an authentication token from an already logged in user so that I can pass it to the REST API without asking the user to log in again.

    And that's just scratching the surface. Who's up for building an ARUser replacement out of the REST API and Electron? I would be.

    ATTENTION BMC: publish a developer-facing, supported (and documented) javascript API for interacting with ARSystem within HTML templates!
    Let a hundred flowers blossom. We're out here selling your software for you day in and day out. It's the least you can do.

 

 

ALSO: for those not hip to the github, there's an attachment with a zip file of everything :-)
