
Introduction

This is the first in a series of articles that will look at how you can use the Elastic Stack to gather, view, and analyse the logs from Remedy products.  In this post I'll start by showing how you can set up the software and enable your choice of logs to be read and forwarded to Elastic so that they can be searched easily.  Later posts will show how lines from some of the logs can be parsed to split out interesting bits of data that will then be used to visualize the activities contained in the logs.

 

What is the Elastic Stack?

Formerly known as ELK, the Elastic Stack is a set of open source software components that will take data from many sources, format it, and make it available for searching, analysis and visualization in real time.  The main components are:

 

  • Beats - lightweight collection and shipping agents that gather data and forward it to Elasticsearch or Logstash.
  • Logstash - collects and processes data in many formats, allowing parsing and enrichment.
  • Elasticsearch - the indexer and search engine used to store the data gathered by Beats and Logstash.
  • Kibana - an interface to Elasticsearch providing many types of visualization to help analyse and understand the data.

 

Why Use Elastic to Collect Remedy Logs?

Anyone who has supported a Remedy system will know that there are many different logs that need to be reviewed both during normal usage and when investigating a problem.  If your environment uses a server group then the challenge is multiplied, as each server will produce its own set of these logs.  Wouldn't it be nice to have these logs collected and forwarded to a single system where they could be searched and viewed without having to hop from console to console on each server?  This is what we're going to do with Elastic.

 

Setting Up Elasticsearch and Kibana

The first step is to set up Elasticsearch and Kibana to index, search and view our logs.  Once this is done we'll add one of the Beats modules to a Remedy server and configure it to send a selection of logs to Elasticsearch.  In a later post I'll add Logstash to extract more useful information from the logs before the data is indexed.

 

There are many options for installing the software.  You can download versions for many platforms, try a cloud-based system or use a pre-installed virtual machine.  For this article I'm going to use the containers that are available for use with Docker, as they include all the necessary supporting software such as Java.

 

I have a Linux VM running CentOS 7 and I've already installed Docker.  If you're using a different platform or Linux version there should be a version available that you can use.
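If Docker isn't already installed, the commands below are a rough sketch of one way to add it on CentOS 7 using Docker's own yum repository; check the Docker documentation for the current steps, and note that docker-compose (used later on) is packaged separately.

# yum install -y yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker-ce
# systemctl start docker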

 

# docker version
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false

 

The Elastic website has detailed instructions if you run into problems (Elasticsearch and Kibana), but it should just be a case of pulling the images and starting the containers.

 

# docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.2
6.3.2: Pulling from elasticsearch/elasticsearch
7dc0dca2b151: Pull complete
72d60ff53590: Pull complete
ca55c9f7cc1f: Pull complete
822d6592a660: Pull complete
22eceb1ece84: Pull complete
30e73cf19e42: Pull complete
f05e800ca884: Pull complete
3e6ee2f75301: Pull complete
Digest: sha256:8f06aecf7227dbc67ee62d8d05db680f8a29d0296ecd74c60d21f1fe665e04b0
Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:6.3.2

 

# docker pull docker.elastic.co/kibana/kibana:6.3.2
6.3.2: Pulling from kibana/kibana
7dc0dca2b151: Already exists
8ce241372bf3: Pull complete
d6671bb44c0a: Pull complete
f3522d60bffe: Pull complete
e4d56fe4aebf: Pull complete
68df91c3f85d: Pull complete
aef1ffed878c: Pull complete
c5aeceb9f680: Pull complete
af84d3e72ccf: Pull complete
Digest: sha256:7ae0616a5f5ddfb0f93ae4cc94038afd2d4e45fa7858fd39c506c8c682ee71f0
Status: Downloaded newer image for docker.elastic.co/kibana/kibana:6.3.2

 

Although it's possible to start each container individually, this is a good example of where the docker-compose command can be used to configure and start them together.  This makes sure we have the necessary setup to allow Kibana to access Elasticsearch.

 

Create a file called elk.yml with these contents:

 

version: '3'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.2
    container_name: kibana
    ports:
      - 5601:5601

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9300:9300

 

Now both containers can be started with:

 

# docker-compose -f elk.yml up -d
Starting elasticsearch ... done
Starting kibana        ... done

 

Check that you can access Kibana by pointing your browser to http://hostname:5601, where hostname is the machine that is running the containers, and you should see the interface.
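If the page doesn't load, it's worth checking that both containers are running and that Elasticsearch is answering on port 9200.  A quick sanity check, using standard docker and curl commands, looks something like this:

# docker ps
# curl http://hostname:9200

The curl command should return a small JSON document containing the cluster name and the Elasticsearch version.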

We now have the software running to store and search our logs, but we need to set up a Beats client to send logs to Elasticsearch.

 

Install and Configure Filebeat on the Remedy Server

Beats are a collection of different data collecting and shipping agents that can be used to forward data into Elasticsearch.  There are Beats available for network data, system metrics, auditing and many others. As we want to collect data from Remedy log files we're going to use Filebeat.

 

This client needs to be running on our Remedy server so that it has access to logs in the ./ARSystem/db and other directories.  We could also use the container version of Filebeat, but I'm going to install it directly on the server.  You may need to set up the Elastic yum repository on your server first, as sketched below, before installing the package.
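One way to define the repository, based on the Elastic documentation for the 6.x packages, is to import the signing key and create a repo file along these lines (check the current docs for the exact contents):

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# cat /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

With the repository in place the install is straightforward: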

 

# yum install filebeat
  Installing : filebeat-6.3.2-1.x86_64
  Verifying  : filebeat-6.3.2-1.x86_64
Installed:
  filebeat.x86_64 0:6.3.2-1

With the client installed we need to configure the logs we want to send to our Elasticsearch server by editing the /etc/filebeat/filebeat.yml file.  Which logs you choose is up to you and, whilst it's possible to use a wildcard definition to monitor all files in a directory, let's start with an example of the arerror.log and the server API log.  Each file is defined with an entry in the filebeat.prospectors section:

 

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arerror.log
  fields:
    logtype: arserver
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /opt/bmc/ARSystem/db/arapi.log
  fields:
    logtype: arserver
  fields_under_root: true

 

There's an extra fields: section for each entry to add a logtype identifier that will help us find the entries in Elasticsearch.  This is optional and by no means necessary, but may be helpful as the setup becomes more complex.  Each prospector entry causes Filebeat to effectively run a tail on the file and send data to Elasticsearch as the application that creates the log writes to it.  Similar configuration entries could be used for other component logs, such as the Email Engine or Tomcat, as in the example below.
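For instance, a hypothetical prospector for a Tomcat catalina.out log might look like this; the path and the logtype value are illustrative assumptions and will depend on your own installation:

# Hypothetical example - adjust the path to match your Tomcat installation
- type: log
  enabled: true
  paths:
    - /opt/apache/tomcat/logs/catalina.out
  fields:
    logtype: tomcat
  fields_under_root: true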

 

We also need to tell Filebeat where to send the logs.  This is done further down in the file, where we need to uncomment the setup.kibana and output.elasticsearch entries and add our server.

 

setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "hostname:5601"

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["hostname:9200"]

 

Change hostname to the name of the server where your Elasticsearch and Kibana containers are running.

 

We're almost there.  The arerror.log is always on, but we need to go to the AR System Administration: Server Information form and enable API logging.  Remember that Filebeat is reading the log as it is written, so we can set a small log file size, 10MB for example, and choose the Append to Existing option so that only a single file is used.

 

 

Now start Filebeat...

 

# systemctl start filebeat
# tail /var/log/filebeat/filebeat
2018-07-31T06:52:38.716-0400 INFO log/harvester.go:216 Harvester started for file: /opt/bmc/ARSystem/db/arapi.log
2018-07-31T06:52:38.716-0400 INFO log/prospector.go:111 Configured paths: [/opt/bmc/ARSystem/db/arerror.log]
2018-07-31T06:52:38.716-0400 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 2
2018-07-31T06:52:38.716-0400 INFO cfgfile/reload.go:127 Config reloader started
2018-07-31T06:52:38.718-0400 INFO cfgfile/reload.go:219 Loading of config files completed.
2018-07-31T06:52:38.742-0400 INFO elasticsearch/elasticsearch.go:177 Stop monitoring endpoint init loop.
2018-07-31T06:52:38.742-0400 INFO elasticsearch/elasticsearch.go:183 Start monitoring metrics snapshot loop.
2018-07-31T06:52:39.727-0400 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.3.2
2018-07-31T06:52:39.730-0400 INFO template/load.go:55 Loading template for Elasticsearch version: 6.3.2
2018-07-31T06:52:39.802-0400 INFO template/load.go:89 Elasticsearch template with name 'filebeat-6.2.4' loaded
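Filebeat is now tailing the configured logs and shipping new lines as they are written.  If you want it to start automatically after a reboot you can also enable the service:

# systemctl enable filebeat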

 

Configure Kibana

The final step is to access Kibana and configure it to display the data being sent from our Remedy system.

 

Point your browser at Kibana as above and click on the Discover link at the top left-hand side.  This will take you to the index pattern definition page.

 

 

There should be a new filebeat index available, so enter filebeat-* in the index pattern box and click > Next step.
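If the index doesn't appear, you can confirm that Filebeat has created one by querying Elasticsearch directly; the list should include an index whose name starts with filebeat- followed by the version and date:

# curl 'http://hostname:9200/_cat/indices?v'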

 

 

Choose @timestamp from the Time Filter field name drop-down menu and click Create index pattern.  Once the index pattern is created click on the Discover link again to return to the main page, where you should be able to see log data from your Remedy server.

 

 

Browse Remedy Logs

The graph along the top of the screen shows the number of log entries received over, by default, the last 15 minutes.  You can change the time frame by clicking on the Last 15 minutes link at the top of the screen, where you can choose from a range of pre-configured ranges or specify your own.

 

To see more details about the log entries click on the arrow to the left of the timestamp in the main panel.  This will expand the entry and show all the fields in the record.

 

Here you can see the full log line recorded in the message field - in this case it's an API log entry for a +GLEWF call.

 

Click the arrow again to collapse the record and then you can refresh the list with F5 or by clicking in the search box and pressing enter.

 

Filter Data and Customise the Display

You can change the fields that are displayed in the main pane by hovering over the field names in the column on the left of the screen - an add button will appear which will cause the selected field to be added to the table.

If you click on a field name you will see the top 5 most common values, and you can use the magnifying glass (+) icon to add a filter limiting the display to matching records.  Here's what it looks like after adding the message field and applying a filter to show only entries from the arerror.log.

 

Searching for Errors

As a final example in this post let's see how to use Kibana to search for errors.  Let's create an error by trying to log in to Remedy using an invalid password.  Using the mid-tier login page I tried to log in as Demo and got the following:

To remove the filter on the arerror.log, hover over the blue oval that shows the filter below the search bar and click the trash can icon.  Now that all records are being displayed, type error in the search bar and press return.  All matching records are displayed and we can see the 623 Authentication failed message in the middle of the screen.
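The search bar accepts Lucene query syntax by default, so as well as free-text searches like the one above you can also query specific fields.  A few illustrative examples (the field names and values are assumptions based on the configuration used earlier):

error
message:"Authentication failed"
logtype:arserver AND message:error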

Conclusion and Next Steps

That's it for this post.  We've seen how to install Elasticsearch, Kibana and Filebeat and configure them to record and display data from a couple of Remedy server log files.  We've also seen how to get started viewing and searching these logs using the Kibana interface.  We've only scratched the surface but I hope that you'll be tempted to try this with one of your test servers and see how easy it is.  Try collecting some different logs and experiment with the search and interface options in Kibana.  There are many resources available to help you with this, starting with the official documentation.

 

In future posts we'll see how to add Logstash to our setup and use this to parse data from the logs which we can then use for more advanced analysis and graphical visualization of the logs.

 

If you have any comments, questions or suggestions please add them below and I'll do my best to answer.

 

Using the Elastic Stack with Remedy Logs - Part 2

Using the Elastic Stack with Remedy Logs - Part 3

Using the Elastic Stack with Remedy Logs - Part 4