
BMC TrueSight 10 features a rich set of APIs that allow you to interact with all the main aspects of the solution: data, events, CIs, and configuration.

This is the second in a series of posts on getting familiar with these APIs and putting them to work.

You can find the first post here: TrueSight Operations Management REST API for Event Management

 

For testing the APIs, I suggest using a Google Chrome app called Advanced REST client.

A valid alternative is another app/extension called Postman.

 

Assumptions for using this tutorial:

TrueSight Operations Management 10.x installed and configured:

  • TrueSight Presentation Server installed and configured for authentication with Atrium SSO
  • at least one TrueSight Infrastructure Management server installed and configured

 

Let's start by setting up the Advanced REST client test environment:

 

 

The Central Monitoring Administration (CMA) API is exposed on the TS Presentation Server.

In order to call any of the CMA APIs, you first need to obtain an authentication token by calling the login API.


Here are the details needed for the login API:

--------------------------------------------------------------------------------------------------------------

Url:

https://tsps:443/tsws/10.0/api/authenticate/login

 

Method: POST

 

Content-Type: application/json

 

Body:

 

{
    "username": "admin",
    "password": "admin12345",
    "tenantName": "BmcRealm"
}

 

------------------------------------------------------------------------------------------------------------------------
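For scripting, the same login call can be sketched in Python with just the standard library. The host name and credentials are the placeholders used above; the exact shape of the response (the token under response.authToken) is an assumption to verify against your environment:

```python
import json
import urllib.request

LOGIN_PATH = "/tsws/10.0/api/authenticate/login"

def build_login_request(base_url, username, password, tenant):
    """Build the POST request carrying the JSON credentials body."""
    body = json.dumps({"username": username,
                       "password": password,
                       "tenantName": tenant}).encode("utf-8")
    return urllib.request.Request(base_url + LOGIN_PATH, data=body,
                                  headers={"Content-Type": "application/json"})

def login(base_url, username, password, tenant):
    """Call the login API and return the auth token (field name assumed)."""
    req = build_login_request(base_url, username, password, tenant)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]["authToken"]

# Against a live Presentation Server:
# token = login("https://tsps:443", "admin", "admin12345", "BmcRealm")
```

If your Presentation Server uses a self-signed certificate, you may also need to pass an ssl.SSLContext to urlopen that relaxes verification.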

 

Calling the login API in Advanced REST client:


More details here: Performing Central Monitoring Administration functions with web services, in the BMC TrueSight Operations Management 10.1 documentation.

 

Even though tokens expire on their own, it is advisable to release the token by calling the logout API when you are finished with your API calls.

 

 

 

List Policies

 

The CMA API allows you to list policies.

 

Here are the details needed for the list policies API:

--------------------------------------------------------------------------------------------------------------

 

Url:

https://tsps.bmcswe.com:443/tsws/10.0/api/unifiedadmin/Policy/list?responseType=basic

Method: POST

 

Content-Type: application/json

 

Header: set the Authorization header with the authtoken previously obtained.

Example:

Authorization: authtoken AQIC5wM2LY4SfcwdKU6v-5c6U-PtF1_x1qUlhw0CMjiiVKU.*AAJTSQACMDIAAlNLABM4NDE0Njc4NDQ2MDM4NTQ1MjA0AAJTMQACMDE.*

 

JSON body (sample) - Payload:

Define an empty JSON body with just curly brackets:

{}
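The list call is easy to script as well. This sketch (Python standard library, token obtained from the login API) builds the POST request with the empty JSON body and the Authorization header:

```python
import json
import urllib.request

def list_policies_request(base_url, auth_token):
    """Build the POST /Policy/list request; the body is the empty JSON object."""
    return urllib.request.Request(
        base_url + "/tsws/10.0/api/unifiedadmin/Policy/list?responseType=basic",
        data=b"{}",
        headers={"Content-Type": "application/json",
                 "Authorization": "authtoken " + auth_token})

# Example (token from the login API):
# with urllib.request.urlopen(list_policies_request("https://tsps:443", token)) as resp:
#     policies = json.load(resp)
```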

 

Calling the list policies API in Advanced REST client:


List Policy details

 

The CMA API allows you to list the details of a policy.

 

Here are the details needed for the list policy details API:

--------------------------------------------------------------------------------------------------------------

 

Url:

https://tsps:443/tsws/10.0/api/unifiedadmin/Policy/policy_Name/list?responseType=basic

Method: GET

 

Header: set the Authorization header with the authtoken previously obtained.

Example:

Authorization: authtoken AQIC5wM2LY4SfcwdKU6v-5c6U-PtF1_x1qUlhw0CMjiiVKU.*AAJTSQACMDIAAlNLABM4NDE0Njc4NDQ2MDM4NTQ1MjA0AAJTMQACMDE.*

 

Example:

https://tsps:443/tsws/10.0/api/unifiedadmin/Policy/PAR_ISM_PORTMON/list?responseType=basic
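In Python, the details call only differs from the list call in the URL and the GET method; policy name and token here are placeholders:

```python
import urllib.request

def policy_details_request(base_url, policy_name, auth_token):
    """Build the GET request returning the full details for one policy."""
    url = (base_url + "/tsws/10.0/api/unifiedadmin/Policy/"
           + policy_name + "/list?responseType=basic")
    # No body: urllib issues a GET when data is None.
    return urllib.request.Request(
        url, headers={"Authorization": "authtoken " + auth_token})
```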

 

 

Create Policies

 

The CMA API allows you to create one or more policies in a single call. To create multiple policies at once, the JSON body starts and ends with square brackets; inside them you can define multiple policies in blocks of curly brackets, separated by commas (","). The sample below creates a single policy.

 

Here are the details needed for the create policy API:

--------------------------------------------------------------------------------------------------------------

 

Url:

https://tsps:443/tsws/10.0/api/unifiedadmin/MonitoringPolicy/create

Content-Type: application/json

Method: POST

 

Header: set the Authorization header with the authtoken previously obtained.

Example:

Authorization: authtoken AQIC5wM2LY4SfcwdKU6v-5c6U-PtF1_x1qUlhw0CMjiiVKU.*AAJTSQACMDIAAlNLABM4NDE0Njc4NDQ2MDM4NTQ1MjA0AAJTMQACMDE.*

 

JSON body (sample policy for remote Unix monitoring):

 

{
"monitoringPolicy": {
  "name" : "ZZZ-test-key-based-remote-unix-monitoring",
  "type" : "monitoring",
  "description" : "",
  "tenant" : {
    "name" : "bmcrealm",
    "id" : "bmcrealm"
  },
  "precedence" : 210,
  "agentSelectionCriteria" : "agentName STARTS_WITH \"agent_name\" ",
  "associatedUserGroup" : "Administrators",
  "owner" : "admin",
  "monitorConfiguration" : {
    "configurations" : [ {
      "solutionName" : "pukepd2",
      "solutionVersion" : "9.13.00",
      "monitoringProfile" : "REMOTE",
      "monitors" : [ {
        "monitorType" : "REMOTE_CONT",
        "configuration" : [ {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "remHostName",
            "value" : "remote_hostname_here",
            "details" : null,
            "secure" : false,
            "mapDetails" : {
              "/remHostName" : "remote_hostname_here"
            }
          } ],
          "secure" : false,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/remHostName" : "remote_hostname_here"
          }
        }, {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "remUserName",
            "value" : "root",
            "details" : null,
            "secure" : false,
            "mapDetails" : {
              "/remUserName" : "root"
            }
          } ],
          "secure" : false,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/remUserName" : "root"
          }
        }, {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "remUsePwdAuth",
            "value" : "0",
            "details" : null,
            "secure" : false,
            "mapDetails" : {
              "/remUsePwdAuth" : "0"
            }
          } ],
          "secure" : false,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/remUsePwdAuth" : "0"
          }
        }, {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "remPubKeyFilePath",
            "value" : "/home/patrol/.ssh/id_rsa.pub",
            "details" : null,
            "secure" : false,
            "mapDetails" : {
              "/remPubKeyFilePath" : "/home/patrol/.ssh/id_rsa.pub"
            }
          } ],
          "secure" : false,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/remPubKeyFilePath" : "/home/patrol/.ssh/id_rsa.pub"
          }
        }, {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "remPrivKeyFilePath",
            "value" : "/home/patrol/.ssh/id_rsa",
            "details" : null,
            "secure" : false,
            "mapDetails" : {
              "/remPrivKeyFilePath" : "/home/patrol/.ssh/id_rsa"
            }
          } ],
          "secure" : false,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/remPrivKeyFilePath" : "/home/patrol/.ssh/id_rsa"
          }
        }, {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "remPassPhrase",
            "value" : "f39/f39/f39/fwAAAAi2xZ3gZ7e7RgAAABD0Hxhh+szRH4B9PzNuO5KxAAAAFONPtuqt28Mo3Sj0zzzNC814OkdL",
            "details" : null,
            "secure" : true,
            "mapDetails" : {
              "/remPassPhrase" : "f39/f39/f39/fwAAAAi2xZ3gZ7e7RgAAABD0Hxhh+szRH4B9PzNuO5KxAAAAFONPtuqt28Mo3Sj0zzzNC814OkdL"
            }
          } ],
          "secure" : true,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/remPassPhrase" : "f39/f39/f39/fwAAAAi2xZ3gZ7e7RgAAABD0Hxhh+szRH4B9PzNuO5KxAAAAFONPtuqt28Mo3Sj0zzzNC814OkdL"
          }
        }, {
          "id" : "HOSTS",
          "value" : "remote_hostname_here",
          "details" : [ {
            "id" : "SudoMode",
            "value" : "1",
            "details" : null,
            "secure" : false,
            "mapDetails" : {
              "/SudoMode" : "1"
            }
          } ],
          "secure" : false,
          "mapDetails" : {
            "/HOSTS/remote_hostname_here/SudoMode" : "1"
          }
        } ]
      } ],
      "defaultMonitoring" : false
    } ]
  },
  "enabled" : false,
  "shared" : false
}
}

 

------------------------------------------------------------------------------------------------------------------------

The above policy configures monitoring for a new remote Unix host with SSH key authentication. The variable elements (host name, user, key file paths, passphrase) are the ones usually changed when adding remote host targets based on the same configuration, but there may be more attributes you want to adjust to your needs (sudo settings, etc.).
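One practical way to reuse the sample as a template is to substitute those variable strings programmatically before posting it. Note that the host name also appears inside mapDetails keys, so keys must be rewritten too. The helper below is an illustrative sketch; the placeholder names are the ones used in the sample above:

```python
def substitute(obj, mapping):
    """Recursively replace placeholder substrings in the keys and
    values of a policy template loaded from JSON."""
    if isinstance(obj, dict):
        return {substitute(k, mapping): substitute(v, mapping)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [substitute(item, mapping) for item in obj]
    if isinstance(obj, str):
        for placeholder, replacement in mapping.items():
            obj = obj.replace(placeholder, replacement)
    return obj

# template = json.load(open("remote_unix_policy.json"))  # the sample above
# payload = substitute(template, {"remote_hostname_here": "unixhost01"})
```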

Calling the policy create API in Advanced REST client:


Sample response:

{
  "responseTimeStamp": "2016-03-09T11:28:32",
  "statusCode": "200",
  "statusMsg": "OK",
  "response": [{
       "resourceId": "031cf504-fd54-44f7-8d4f-ef5d684cc6b7",
       "resourceName": "ZZZ-test-key-based-remote-unix-monitoring",
       "resourceURI": null,
       "statusCode": "200",
       "statusMsg": "OK"
  }]
}

 

 

Delete a monitoring Policy

 

The CMA API allows you to delete one or more policies.

 

Here are the details needed for the delete API:

--------------------------------------------------------------------------------------------------------------

Url:

https://tsps:443/tsws/10.0/api/unifiedadmin/Policy/policy203/delete?idType=name

 

You can also delete a policy by its ID, using idType=id.

 

Method: DELETE (note that this is DELETE, not POST as in the list and create APIs)

 

Header: set the Authorization header with the authtoken previously obtained.

Example:

Authorization: authtoken AQIC5wM2LY4SfcwdKU6v-5c6U-PtF1_x1qUlhw0CMjiiVKU.*AAJTSQACMDIAAlNLABM4NDE0Njc4NDQ2MDM4NTQ1MjA0AAJTMQACMDE.*

 

Example:

https://tsps:443/tsws/10.0/api/unifiedadmin/Policy/PAR_ISM_PORTMON/delete?idType=name

 

Being a DELETE, no JSON body is required.
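Scripting the delete is equally short; in Python the only notable difference is passing the DELETE method explicitly (names here are placeholders):

```python
import urllib.request

def delete_policy_request(base_url, policy_name, auth_token):
    """Build the DELETE request for a policy, addressed by name.
    No body is needed for this call."""
    return urllib.request.Request(
        base_url + "/tsws/10.0/api/unifiedadmin/Policy/" + policy_name
        + "/delete?idType=name",
        headers={"Authorization": "authtoken " + auth_token},
        method="DELETE")
```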

 

Here is a sample response body:

 

{
    "response": [
        {
            "name": "PAR_ISM_PORTMON",
            "statusCode": "200",
            "statusMsg": "Successfully deleted policy",
            "resultHandle": "724"
        }
    ],
    "statusCode": "200",
    "statusMsg": "OK",
    "responseTimeStamp": "2016-04-29T16:33:20"
}

 

 

I plan to add more examples and attach some Python scripts for the CMA API, with a focus on policy management.


The advent of social media has affected many aspects of social life, business, and communication, including the interaction between consumers, end users, and customers on one side and the providers of content, services, and applications on the other.

It is common to see opinions and comments on social media about the experience end users are having with the services and applications they use every day: complaints about a service like an e-banking portal being slow or down, or praise for the new version of that portal.

A single comment can have a significant reputation impact, considering the number of "followers" and "friends" a social media user can have. So it is not just about the experience of one end user, but also about their potential to influence other users or prospective new customers.

 


Example of a BMC ITDA dashboard showing the geographic distribution of analyzed Twitter data, based on sentiment and influence levels

 


Example of a BMC ITDA dashboard showing analyzed Twitter data by ranges of influence level

 

 

Capabilities and structure of this integration

 

The idea behind the Sentiment Analysis INTegration (SAINT) for BMC IT Data Analytics is to provide visibility into the general sentiment about a topic, analyzed through tracking words (hashtags, users, or words in the tweet text), and to serve as a proof of concept of the potential integration capabilities.

 

The solution is composed of 3 main parts:

  • Twitter data mining through the Twitter streaming API
  • Sentiment analysis and tagging using the Pattern (default) or NLTK libraries
  • Streaming of the analyzed data to BMC ITDA through a custom UDP library
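The UDP streaming part can be sketched in a few lines of Python. The host and port must match the UDP listener collector you define in ITDA (the names below are placeholders, not the actual SAINT code):

```python
import socket

def send_to_itda(line, host, port):
    """Send one analyzed tweet line to the ITDA UDP listener collector."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # ITDA indexes time-stamped text lines, so a newline-terminated
        # datagram per event is enough.
        sock.sendto((line + "\n").encode("utf-8"), (host, port))
    finally:
        sock.close()
```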

 

Twitter data mining is done using the Tweepy library. By defining "trackwords" (hashtags, users, or any part of the tweet text) it is possible to filter the stream and focus only on the relevant tweets.

Every tweet is subjected to sentiment evaluation using the Pattern library built by the CLiPS center (packaged in TextBlob). The evaluation returns a polarity value between -1 (strongly negative) and 1 (strongly positive), with 0 being neutral.
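The polarity score is then mapped to a sentiment tag. A minimal sketch, assuming the configurable threshold from the saint.conf file (default 0.15) is applied symmetrically around zero:

```python
POLARITY_THRESHOLD = 0.15  # assumed default from the saint.conf description

def classify(polarity):
    """Map a polarity score in [-1, 1] to a sentiment tag."""
    if polarity > POLARITY_THRESHOLD:
        return "positive"
    if polarity < -POLARITY_THRESHOLD:
        return "negative"
    return "neutral"
```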


Computational Linguistics and Psycholinguistics Research Center - http://clips.ua.ac.be

 

As an alternative option, sentiment can be evaluated using the Naive Bayes analyzer provided by the NLTK library, "trained" using corpora. A corpus is a set of text entries where, in this case, sentiment is already classified, so that NLTK can learn what is positive and what is negative and use this knowledge to evaluate sentiment in the live tweet stream.

 

 

Configure and Install the Sentiment Analysis Integration

 

Prerequisite 1: Get your API keys from Twitter

 

Log in to your Twitter account at Twitter Application Management

 

Click on the Create New App button:


 

Insert information for Name, Description and Website. They can be anything you like:


 

If all went well you should get a green confirmation message:


If you only want to analyze data, you can select Read Only; otherwise, make sure you select Read and Write access (which allows your app to post data) on the Permissions tab of Twitter's Application Management screen.

 

Go to the Keys and Access Tokens tab:


 

Scroll down to the access token section and generate a new access token if there isn't one yet:


 

This will give you four different keys:

  • consumer key
  • consumer key secret
  • access token
  • access token secret.


You will have to use those four keys in the Sentiment Analysis Integration configuration file.

 

Even though you will access "public" Twitter data, you need to comply with the Twitter Developer Agreement & Policy to be sure you are making proper use of the data.

 

Prerequisite 2: Python 2.7 and libraries

 

You can run SAINT for Twitter on any platform that supports Python. It has currently been tested on Linux and Windows. The following steps cover Windows, as it is the platform requiring more steps. Linux usually comes with Python already present, or easily installable with the native package manager (yum install python27 or similar).

 

  1. Create a folder for your SAINT installation
  2. Download Python from Python.org for your platform. Pick a 2.7.x package, as the integration won't work with version 3.x
    1. Make sure "pip" is selected when you install Python
    2. Linux - the latest versions of CentOS, Fedora, Red Hat Enterprise Linux (RHEL) and Ubuntu come with Python 2.7 out of the box. pip, though, may not be there and may therefore require installation
  3. Install the Tweepy library (needed for Twitter access) using the command "pip install tweepy"
  4. Install the TextBlob library (provides the NLTK library used for the sentiment analysis) using the command "pip install textblob"
  5. Install the "corpora" needed to train text analysis using the command "python -m textblob.download_corpora"
  6. Place the Sentiment Analysis Integration binaries in your folder
  7. Configure the saint.conf file with:
    1. the 4 Twitter keys (consumer key and secret, access token and secret)
    2. the ITDA UDP listener IP address and port (you need to define a UDP listener collector in your BMC ITDA environment)
    3. the tracking words for Twitter
    4. the polarity preferences; the default value is 0.15, meaning that any tweet with a polarity score above that threshold is tagged as positive
  8. Run the SAINT integration and enjoy the Twitter data in your ITDA environment

 

ITDA Configuration

The SAINT Content Pack

Attached to this post there is also an ITDA content pack specifically made for this integration.

It includes:

  • A data pattern for the twitter data stream processed
  • Field extraction for key fields (country, place, username, sentiment polarity, etc.)
  • Sample saved searches that show how to analyse Twitter-related sentiment data with ITDA
  • Sample dashboard that shows influence levels of twitter users and geographic distribution of tweets

 

Configuration Steps:

 

 

Sample saved search for analysing influence impact of tweets:

 

The Data Collector configuration

Let's now configure the ITDA data collector that will receive the SAINT data stream.

In the video you can see the steps for the configuration:

  1. Create a new data collector
  2. Select "Receive over TCP/UDP"
  3. Select "UDP" in the "protocol" dropdown list
  4. Enter a value for the UDP port. Make sure the port you define is consistent with what you define in the SAINT config file as described above at point 7.2.

 

This is a community-based integration. It is provided with no formal support or guarantee of any sort. Please use the comments section to report issues or problems, or to ask any related question.


BMC IT Data Analytics (ITDA) is a powerful solution that lets you mine, index, and analyze semi-structured data, typically coming from log files, traces, syslog streams, and other sources that provide data in a time-stamped text format.

Among its various capabilities, ITDA can derive metrics from the text it analyzes, extracting the desired values from the text entries or counting the entries returned by a search.

 

This post aims to explain, with a few examples, how to extract metrics and pass them to other applications.

 

Assumptions: this post only covers the data extraction and integration aspects, so I am assuming you are already familiar with ITDA setup and configuration, searches, patterns, and the other core product areas.

 

The key ITDA capabilities we'll use are saved searches, notifications with external-script actions, and macros.

 

Use Case: Execute a search, count the number of returned lines and send the number to a desired target.

 

The first thing we need is, of course, the search that provides us with the lines matching our criteria in a given timeframe. This could be used to get all the lines reporting failed actions, connection retries, and so on. The time context gives us a measure of how significant or impactful that behaviour is over time.

In my example, I am counting lines representing tweets, and those lines contain information such as the evaluated sentiment. So I'm basically counting the number of positive (or negative, or neutral) tweets received in the last 5 minutes.

 

The saved search.

So the first step is to save the search you want to use to count the relevant entries, so that it can be called from other functions inside and outside ITDA: notifications in our case, but a saved search could also be called through the ITDA REST API.

The saved search in my example is configured to return matching lines in the last five minutes but this can be overridden later.


 

This search returns all the tweet lines tagged as "positive" in the last 5 minutes:


 

 

So now we can count them. This step is not strictly needed, but it clarifies what value we are going to send to the target system we want to integrate with.


 

 

The notification.

Now the key part. Here we define a notification of type "alert" (1) that triggers a command line (5) when the defined conditions are met (2). This is done every 5 minutes (3), and, as stated before, the time frame of the saved search (2) can be modified, overriding the one defined in the saved search itself (4).

In the notification configuration, step (2) shows 2 conditions:

 

number of saved search results > 0

OR

number of saved search results = 0

 

The reason for this is that with just the "> 0" condition, we would trigger the script (5), and therefore send the data, only when there are entries to count. For a polling-based monitoring system this would result in "data gaps" for all the intervals where the count is 0, as the script wouldn't be executed and therefore no data would be sent to the target system.

By adding the "= 0" condition, we ensure that when there are no entries to count in the 5-minute polling time frame (3), instead of doing nothing we send a "0" data point to the target system, avoiding misleading data representations.

 

Using Macros

Macros are most often used in internal actions like sending an email or an event. In those scenarios, macros are pretty useful for dynamically building mail bodies or event messages.

 

In such context, a notification message that uses macros would look like this:

Saved search ${QUERYNAME} has result count:${COUNT} for duration: [${STARTTIME}] to [${ENDTIME}]

and would produce a message text like this:

Saved search ITDA_Log_Monitoring has result count: 3567 for duration: 01/30/2015 11:30:30 GMT to 02/06/2015 11:30:30 GMT


 

The available macros are listed in the product documentation.

 

In our example we are using the ${COUNT} macro which provides the number of search results returned by a search query.

While in an internal notification, like an email, the macro is referenced as described above, in an external script it can be read as an OS environment variable: on a Linux-based ITDA system it is $COUNT, without curly brackets, while on a Windows ITDA system it is %COUNT%.
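The same environment variable can of course be read directly from a Python script called by the notification; a small sketch (the fallback default is my own choice):

```python
import os

def get_count(default=0):
    """Read the ${COUNT} macro, exposed to external scripts
    as the COUNT environment variable."""
    try:
        return int(os.environ["COUNT"])
    except (KeyError, ValueError):
        return default
```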

 

This means that, in my case, the Linux shell script referenced at point (5) looks like this:

 

[root@itda-tsi TrueSight]# more itda_positweets_2_tsi.sh

 

#!/bin/bash

python /opt/bmc/TrueSight/ITDA_2_TSI_tweetscount.py -p $COUNT

 

So it is extremely straightforward: in my example I'm just passing the count of the log lines representing the positive tweets, which is a value greater than 0 when there is data, and 0 when there is none, so that we avoid data gaps and actually report a "0" value when there are no lines to count.
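For illustration, the argument handling of a script like ITDA_2_TSI_tweetscount.py could look like the sketch below (the long option name is an assumption; the actual TrueSight Intelligence submission is out of scope here):

```python
import argparse

def parse_args(argv=None):
    """Parse the -p option that the wrapper shell script passes ($COUNT)."""
    parser = argparse.ArgumentParser(
        description="Send a tweet count to TrueSight Intelligence")
    parser.add_argument("-p", "--count", type=int, required=True,
                        help="number of matching log lines in the interval")
    return parser.parse_args(argv)

# As invoked by itda_positweets_2_tsi.sh:
# args = parse_args(["-p", "12"])
```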

 

The script called in this example sends the tweets count to TrueSight Intelligence. The specifics of the TS Intelligence integration will be covered in another post.


 

This is just an example of how ITDA can be used as a "KPI collector" to feed other monitoring, reporting, dashboarding, or SLM solutions.

 

I hope you'll find this useful.

 

Gianpaolo Pagano Mariano

gpaganom@bmc.com
