
Introduction and Recap

In the first post we saw how to set up Filebeat to collect Remedy logs and send them to Elasticsearch, where they could be searched and viewed using Kibana.  This was a good start and could easily be extended to include additional logs and servers.  For example, a single Elastic stack instance could be used to aggregate logs from a number of Remedy servers and their associated mid-tiers.  Once the logs have been collected they can be filtered using a combination of Elastic fields, such as source for the log file name, beat.name or beat.hostname for the originating host, and the logtype field that we added in the filebeat.yml configuration file.
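For example, a query like the following in the Kibana search bar narrows the Discover view to a single log type from a single host.  The field names come from the Part 1 setup, but the values shown here are only placeholders; substitute the logtype values and hostnames you actually configured.

logtype:arapi AND beat.hostname:"remedy-server-01"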

 

Having a single system to gather logs is a good first step, but what more can we do with the data being collected?  Would it be helpful to see the types of error codes that our servers are producing and their frequency?  How about the types and versions of the clients that are connecting?  There's a wide variety of data in the logs, but how do we get it into a format that we can use?

 

This post will look at what's needed to get our test system ready to do this.  We need some way to take the separate pieces of information on each log line and break them out so that we can make more use of them in Kibana.  Fortunately, there's a tool to do just that...

 

Enter Logstash...

Logstash is another component in the Elastic stack, and its job is to take data and transform it through a variety of filter plugins as it passes through the pipeline.

 

Let's look at a few AR Server API log lines of the type that we collected in the example setup in Part 1.

 

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8570 */ +GLEWF ARGetListEntryWithFields -- schema AR System Configuration Component from Approval Server (protocol 26) at IP address 10.133.181.82 using RPC // :q:0.0s

<API > <TID: 0000000336> <RPC ID: 0000021396> <Queue: Prv:390680> <Client-RPC: 390680 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:39.8590 */ -GLEWF OK

<API > <TID: 0000000333> <RPC ID: 0000021385> <Queue: List > <Client-RPC: 390620 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:18.3550 */ +GLE ARGetListEntry -- schema AR System Email Messages from E-mail Engine (protocol 26) at IP address 10.133.181.82 using RPC // :q:0.0s

<API > <TID: 0000000333> <RPC ID: 0000021385> <Queue: List > <Client-RPC: 390620 > <USER: Remedy Application Service > <Overlay-Group: 1 > /* Tue Jul 31 2018 11:21:18.3570 */ -GLE OK

 

Do you notice the pattern in the format of the lines?  They all share many common elements containing potentially interesting information that we may want to use to help analyse our system.  The same format is shared by many of the AR Server logs, such as SQL, filter, escalation and so on.  The data up to the timestamp is fixed, and the log-specific information is added after this point.  The log format is documented here.

 

Examples of the common elements are

 

<API >                                        The log type - API, SQL, FLTR, ESCL, THRD, USER and more.

<TID: 0000000336>                             Thread ID - a unique number assigned to each thread when it is created.

<RPC ID: 0000021396>                          RPC ID - a unique number assigned to each API call and shared by all related activity in the API.

<Queue: Prv:390680>                           The type/number of the RPC queue that the thread belongs to - fast, list, private etc.

<Client-RPC: 390680 >                         The optional RPC queue that was requested by the client in the API call.

<USER: Remedy Application Service >           The user performing the activity.

<Overlay-Group: 1 >                           Overlay or base object mode actions.

/* Tue Jul 31 2018 11:21:39.8570 */           The timestamp of the log entry.

 

Before we can start extracting these elements from our logs we need to add Logstash to our setup and get the logs passing through it.

 

Installing Logstash

As we did in Part 1, I'm going to use the container version of Logstash and run it on the same system as the existing Elasticsearch and Kibana containers.  This should be fine for test purposes, assuming you have at least 8GB of memory available for the three containers.  We need to add a new entry in the services section of the elk.yml file we created previously that contains:

 

  logstash:

    image: docker.elastic.co/logstash/logstash:6.3.2

    container_name: logstash

    ports:

      - 5000:5000
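Before starting the new container it's worth checking that the updated file still parses cleanly.  This is an optional sanity check; docker-compose prints the fully resolved configuration, or an error pointing at the problem line:

# docker-compose -f elk.yml config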

 

Let's start the container and confirm it works with

 

# docker-compose -f elk.yml up -d

Pulling logstash (docker.elastic.co/logstash/logstash:6.3.2)...

6.3.2: Pulling from logstash/logstash

7dc0dca2b151: Already exists

a9821ac0463b: Pull complete

6af182068d2d: Pull complete

7ad725483188: Pull complete

d5c1c85bef83: Pull complete

890f4724f175: Pull complete

dcfd1014e6a6: Pull complete

0b58f4123258: Pull complete

d82a06d8e668: Pull complete

a87e8d5f72c3: Pull complete

df6b8528f94e: Pull complete

Digest: sha256:838e9038388e3932f23ac9c014b14f4a1093cd8e5ede7b7e0ece98e517d44fb2

Status: Downloaded newer image for docker.elastic.co/logstash/logstash:6.3.2

kibana is up-to-date

elasticsearch is up-to-date

Creating logstash ... done

 

Using the docker ps command we can show the running containers:

 

# docker ps

CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                            NAMES

4f4d0963ef64        docker.elastic.co/logstash/logstash:6.3.2             "/usr/local/bin/dock…"   15 seconds ago      Up 14 seconds       5044/tcp, 0.0.0.0:5000->5000/tcp, 9600/tcp       logstash

fb6abb7f3a2b        docker.elastic.co/elasticsearch/elasticsearch:6.3.2   "/usr/local/bin/dock…"   6 hours ago         Up 6 hours          0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elasticsearch

c08b71ce71eb        docker.elastic.co/kibana/kibana:6.3.2

 

As a final test to confirm it's working, access Kibana and click on the Monitoring link on the left.  You may be prompted to enable monitoring; accept this and you should see the status of your Elastic stack.
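You can also check on Logstash directly from the command line.  The container writes its startup messages to the Docker log, and Logstash exposes a monitoring API on port 9600 inside the container (the second command assumes curl is available inside the image):

# docker logs logstash

# docker exec logstash curl -s 'http://localhost:9600/?pretty'

The log output should include a message saying that Logstash started successfully, and the curl call returns a block of JSON with version and pipeline details.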

 

Configure Logstash

Now that Logstash is running we need to configure it to accept data from Filebeat running on our AR Server, process the data to find the key pieces of information we're interested in, and then send it to Elasticsearch.  We'll do this using a text file called logstash.conf to which we'll add the necessary settings.  Full details of the configuration options and how Logstash works are documented on the Elastic site.  It's a very flexible tool with many more features than we'll be using and I'd encourage you to have a look and see what it's capable of.

 

The first step is to define an input and output setting.  Create a new file called logstash.conf in an empty directory (I'm using /root/elk/pipeline - you'll see why shortly) and add the following:

 

input {
  beats {
    port => 5044
  }
}

 

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}

 

The input definition creates a listener on port 5044 to accept data from a Beats client, Filebeat in our case.  The output section provides the details of the Elasticsearch server that we're sending the data to.  Let's test this before we go any further and confirm that we can still see our log data.  Remember that our Logstash is running in a container, so how do we get it to use a configuration file that lives outside of that container?  We use a Docker feature called volumes, which maps data from the host file system into a container.

 

Edit your elk.yml file and update the logstash entry so that it includes the volumes: section and the additional 5044 port mapping shown below.

 

  logstash:

    image: docker.elastic.co/logstash/logstash:6.3.2

    container_name: logstash

    volumes:

      - /root/elk/pipeline:/usr/share/logstash/pipeline

    ports:

      - 5000:5000

      - 5044:5044

 

The volumes: option maps the contents of the host /root/elk/pipeline directory so that they appear in /usr/share/logstash/pipeline inside the container, and the ports: change makes port 5044 available from outside the container so that Filebeat can connect.  You can use any directory you like on the host, but it should contain the logstash.conf file we created above.

 

Restart the container with

 

# docker-compose -f elk.yml up -d

Recreating logstash ... done
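To confirm the new settings have taken effect, check that the pipeline file is visible inside the container and that port 5044 is now published.  Both commands below use standard Docker tooling with the names and paths from this setup:

# docker exec logstash ls -l /usr/share/logstash/pipeline

# docker port logstash

The first should list the logstash.conf file we created on the host, and the second should show 5044 mapped alongside 5000.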

 

We also need to update the /etc/filebeat/filebeat.yml file to use Logstash in place of Elasticsearch.  Hop over to your AR Server system console and edit the file so that it appears as below, where hostname is the name of the machine where your containers are running.

 

#Comment out this bit

#output.elasticsearch:

  # Array of hosts to connect to.

#  hosts: ["hostname:9200"]

 

#----------------------------- Logstash output --------------------------------

output.logstash:

  # The Logstash hosts

  hosts: ["hostname:5044"]

 

Restart Filebeat

 

# systemctl restart filebeat
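Filebeat also has a couple of built-in checks that are useful at this point.  The commands below, available in Filebeat 6.x, validate the configuration file and test connectivity to the Logstash output we just configured:

# filebeat test config -c /etc/filebeat/filebeat.yml

# filebeat test output -c /etc/filebeat/filebeat.yml

The second command should report the connection to hostname:5044 as OK.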

 

To make sure Elasticsearch is receiving data via Logstash we need to create a new index pattern in Kibana, as detailed in the previous article.  When you go to the Management -> Index Patterns -> Create index pattern page you should see a new index called logstash-YYYY.MM.DD.
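If you'd like to confirm from the command line that documents are arriving before setting up the index pattern, the Elasticsearch _cat API lists the indices along with their document counts.  Replace hostname with the machine running your containers:

# curl 'http://hostname:9200/_cat/indices/logstash-*?v'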

 

 

Create a new index pattern called logstash-* and choose @timestamp as the Time Filter field name.  Switch back to the Discover page and you can use the drop-down menu under the Add a filter + link to switch between the filebeat-* and logstash-* index data.

 

 

What you should find is that the log data from your Remedy server is now being added under the logstash-* index.  Switching between the index patterns will allow you to compare the records and you should find they look pretty much the same other than the _index field value.
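If the logstash-* index doesn't appear, or Discover shows no new documents, there are a couple of general troubleshooting steps worth trying.  Check docker logs logstash for pipeline or connection errors, and consider temporarily adding a stdout output to logstash.conf so that every event Logstash receives is echoed to the container log.  The stdout plugin and rubydebug codec are standard Logstash features; this is just a debugging aid, so remove the extra output and restart the container once you're done.  A sketch of the amended output section:

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
  # Optional while debugging: echo each event to the container log
  stdout {
    codec => rubydebug
  }
}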

 

Conclusion and Next Steps

That's it for Part 2 of our Remedy/Elastic cookbook.  In Part 1 we saw how to set up Elasticsearch, Kibana and Filebeat to collect logs from a Remedy server.  We were able to search and filter the log data, but not much more.  In this post we've introduced Logstash and reconfigured Filebeat to pass logs through it to Elasticsearch.  This provides the foundation we need to start parsing our logs, and that's what will be coming up in Part 3.

 

Please leave feedback and questions below and let me know if you find any problems when trying to follow the steps above.

 

Using the Elastic Stack with Remedy Logs - Part 1

Using the Elastic Stack with Remedy Logs - Part 3

Using the Elastic Stack with Remedy Logs - Part 4