
TrueSight Operations Mgmt

Share This:

Please join Steve as he discusses TSOM Application High Availability Best Practices & Troubleshooting for TrueSight Presentation Server (TSPS) and TrueSight Infrastructure Management (TSIM) in the June webinar.

 

In this session he will review the TSOM Application High Availability architecture and configuration best practices, as well as discuss troubleshooting of common issues for TSPS and TSIM. For additional information, you can review the documentation pages.

 

Steve Mundy is a Principal Technical Support Analyst

 

 

Event Registration Details

 

Date: Wednesday, June 17, 2020

Time: 10 a.m. Central Daylight Time (GMT-5)

Registration Link: https://globalmeet.webcasts.com/starthere.jsp?ei=1290369&tp_key=920ec74db4

Registration Password: helix

 

After registration, you will receive a confirmation email.

 

For more information, or if you have questions, please contact Gregory Kiyoi

Share This:

Introduction

I know, it sounds a bit odd, but for simple monitoring requirements that require the execution of SQL queries, the PATROL for Log Management KM can be used. Technically speaking, it is not the PATROL for Log Management KM that executes the query. On Windows systems the query execution must be placed into a CMD script. I experimented with PowerShell as well, but I could not get the desired result. The reason is that you first have to call the PowerShell executable and then the script. I think with some effort it would definitely work with PowerShell as well.

 

A little excursion to the Scripting KM

When talking about PowerShell, the Scripting KM may come to mind. The KM offers PowerShell execution, but in my humble opinion it focuses on remote execution of PowerShell commands/scripts. Executing local PowerShell is not as easy as you might think. I don't know why no simple Type of Command for PowerShell is implemented in the KM to allow local PowerShell execution. The KM is another example of a good approach that is not completely thought through. It is a pity, because this KM could have great potential. Maybe too great?

 

Let's get started

However, let's get started. As a first step, I have created a wrapper script which allows a SQL script to be passed as an argument. In early testing I executed the SQL query directly from the script, but I thought it would be clearer and more flexible to store the SQL queries in separate SQL script files. There could be further improvements, for example passing the database host, database name, and database user & password as arguments to the wrapper script. But there is always room for improvement, right?

 

The use case where I used this solution was on Windows and the database was MS SQL. You will have to adapt it for other operating systems and database types.

 

Prerequisites

In order to execute SQL commands/queries against Microsoft SQL Server from the command line, the simplest solution is to use the sqlcmd utility. To use the utility, you have to install the following components on the server from which the command is executed (see the connection test example after the list):

 

  • Microsoft ODBC Driver for SQL Server
  • Microsoft SQL Server Command Line Utilities
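
Once both components are installed, you can verify connectivity with a quick ad-hoc query. This is just a sanity check using sqlcmd's -Q option; the connection values are the same placeholders used in the wrapper script below:

sqlcmd -S SQLHOST -d DBNAME -U DBUSER -P DBPASS -Q "SELECT @@VERSION"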

 

The wrapper script

So here is an example of how the wrapper script could look:

 

:: Name:     SQLCmdWrapper.cmd
:: Purpose:  Execute a SQL
:: Author:
:: Revision: April 2020 - initial version

@ECHO OFF

:: variables
set sqlHost=SQLHOST
set dbName=DBNAME
set dbUser=DBUSER
set dbSecret=DBPASS
set sqlScript=%~1

::execute sqlcmd
sqlcmd -S %sqlHost% -d %dbName% -U %dbUser% -P %dbSecret% -i %sqlScript%

 

Of course, the placeholders SQLHOST, DBNAME, DBUSER and DBPASS have to be replaced. I know there are some security issues with this, as the DBPASS is stored in clear text in the script. This area could be improved, but for simplicity I recommend using a low-privileged user for the SQL execution.

 

The SQL script is passed as the first and only argument to the wrapper script. We can see this in the assignment set sqlScript=%~1.
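
For example, the KM would end up running something like the following (the paths are hypothetical; use the location where you store your wrapper and SQL scripts):

C:\scripts\SQLCmdWrapper.cmd C:\scripts\checks\row_count.sql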

 

I would like to mention that the example does not have any exception handling. This should be taken into consideration when using this script in a production environment.

 

The SQL scripts

The SQL scripts can contain almost anything, but I suggest you keep them as simple as possible. This approach is not designed to parse a lot of output. Ideally, the output of your SQL is a single line with either a text string you can parse or a numeric value.
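
A minimal, hypothetical example of such a script (the table and column names are placeholders) returning a single numeric value that the KM can then evaluate against thresholds:

-- row_count.sql: return a single number the KM can parse
SET NOCOUNT ON;
SELECT COUNT(*) AS PendingOrders FROM dbo.Orders WHERE Status = 'PENDING';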

 

Configuration of the PATROL for Log Management KM

In the next step, the PATROL for Log Management KM has to be configured. First, we have to add a Log Management Script Files Monitor:

Blog_Post_SQL_Log_KM_01.PNG

With "Add" we create our first instance for SQL query execution with our wrapper script. First we enter some basic settings into the form:

Blog_Post_SQL_Log_KM_02.PNG

Monitoring environment label

Here we enter a name for our monitor environment.

 

Monitoring file logical name

As we don't want our instance to have the name of the wrapper script, we enter a name we can easily recognize and associate with our monitor.

 

Script file (full path)

Full path to the wrapper script.

 

Arguments

As the argument, we enter the full path to our SQL script.

 

In the next step we can either configure a string search or a numeric search. Here you can follow the documentation of the KM. But I would like to mention that the numeric part takes some getting used to.

Blog_Post_SQL_Log_KM_03.PNG

First Number

You have to specify a range within which the numbers you are searching for must fall. In this example, we take all numbers greater than 5 as the first number.

 

Operator

The number must be greater than the value entered in the First Number field.

 

Second Number

In this example, the numbers smaller than or equal to 10.

 

Operator

The number must be smaller than or equal to the value entered in the Second Number field.

 

Begin Token

The begin token from which to search for the number. This is a bit odd. You can find a description here.

 

End Token

The end token up to which to search for the number.

 

Now we can also define our event creation. E.g.:

Blog_Post_SQL_Log_KM_04.PNG

 

That's it, folks! If you have questions, don't hesitate to post them here as a comment.

Share This:

Hey folks

I'm based out of Austin but I'll be in Portland, OR on Saturday January 25th for an event at my alma mater - Reed College. It's related to podcasting, which is something else I do besides work at BMC Software. For proof: https://partiallyexaminedlife.com/

 

If you are or will be in the area, please drop by and say "hi". It's at 2pm on the campus. Event details, maps, etc. can be found here: Burn Your Draft: Podcast Debut - Reed College

 

Cheers,

Seth

Share This:


This article explains how to install TSIM on a Windows Failover Cluster Instance (FCI) on Azure virtual machines in the Resource Manager model. This solution uses Windows Server 2016 Datacenter edition Storage Spaces Direct (S2D) as a software-based virtual SAN that synchronizes the storage (data disks) between the nodes (Azure VMs) in a Windows Cluster. S2D is new in Windows Server 2016.

 

The following diagram shows the complete solution on Azure virtual machines:

    

                                                                                   

The preceding diagram shows:

  • Two Azure virtual machines in a Windows Failover Cluster. When a virtual machine is in a failover cluster, it is also called a cluster node.
  • Each virtual machine has two or more data disks.
  • S2D synchronizes the data on the data disk and presents the synchronized storage as a storage pool.
  • The storage pool presents a cluster shared volume (CSV) to the failover cluster.
  • The File Server cluster role uses the volume created on S2D.
  • An Azure load balancer to hold the IP address for the File Server role.
  • An Azure availability set holds all the resources.

Note:

All Azure resources in the diagram are in the same resource group.

For details about S2D, see Windows Server 2016 Datacenter edition Storage Spaces Direct (S2D).

S2D supports two types of architectures - converged and hyper-converged. The architecture in this document is hyper-converged. A hyper-converged infrastructure places the storage on the same servers that host the clustered application.

 

Before you begin

There are a few things you need to know and a couple of things that you need in place before you proceed.

What to know

You should have an operational understanding of the following technologies:

One important difference is that on an Azure IaaS VM guest failover cluster, we recommend a single NIC per server (cluster node) and a single subnet. Azure networking has physical redundancy which makes additional NICs and subnets unnecessary on an Azure IaaS VM guest cluster. Although the cluster validation report will issue a warning that the nodes are only reachable on a single network, this warning can be safely ignored on Azure IaaS VM guest failover clusters.

Additionally, you should have a general understanding of the following technologies:

 

What to have

Before following the instructions in this article, you should already have:

  • A Microsoft Azure subscription.
  • A Windows domain on Azure virtual machines.
  • An account with permission to create objects in the Azure virtual machine.
  • An Azure virtual network and subnet with sufficient IP address space for the following components:
    • Both virtual machines.
    • The failover cluster IP address.
    • An IP address for the File Server cluster role.
  • DNS configured on the Azure Network, pointing to the domain controllers.

With these prerequisites in place, you can proceed with building your failover cluster. The first step is to create the virtual machines.

 

Step 1: Create virtual machines

  1. Log in to the Azure portal with your subscription.
  2. Create an Azure availability set.

The availability set groups virtual machines across fault domains and update domains. The availability set makes sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers.

If you have not created the resource group for your virtual machines, do it when you create an Azure availability set. If you're using the Azure portal to create the availability set, do the following steps:

  • In the Azure portal, click + to open the Azure Marketplace. Search for Availability set.
  • Click Availability set.
  • Click Create.
  • On the Create availability set blade, set the following values:
    • Name: A name for the availability set.
    • Subscription: Your Azure subscription.
    • Resource group: If you want to use an existing group, click Use existing and select the group from the drop-down list. Otherwise choose Create New and type a name for the group.
    • Location: Set the location where you plan to create your virtual machines.
    • Fault domains: Use the default (3).
    • Update domains: Use the default (5).
    • Click Create to create the availability set.

  3. Create the virtual machines in the availability set.

Provision two Windows 2016 Datacenter Edition virtual machines in the Azure availability set.

Place both virtual machines:

  • In the same Azure resource group that your availability set is in.
  • On the same network as your domain controller.
  • On a subnet with sufficient IP address space for both virtual machines, and all FCIs that you may eventually use on this cluster.
  • In the Azure availability set.

Important

You cannot set or change the availability set after a virtual machine has been created.

  4. After Azure creates your virtual machines, connect to each virtual machine with RDP.

When you first connect to a virtual machine with RDP, the computer asks if you want to allow this PC to be discoverable on the network. Click Yes.

  5. Open the firewall ports.

On each virtual machine, open the following ports on the Windows Firewall.

Purpose: Health probe

TCP Port: 59999

Notes: Any open TCP port. In a later step, configure the load balancer health probe and the cluster to use this port.
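
If you prefer to script this instead of using the Windows Firewall UI, a minimal sketch with the built-in NetSecurity cmdlets could look like the following (run on each node; the rule display name is arbitrary):

PowerShell:

New-NetFirewallRule -DisplayName "Azure LB Health Probe" -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow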

  6. Add storage to the virtual machine. For detailed information, see add storage.


Both virtual machines need at least two data disks.

Attach raw disks - not NTFS formatted disks.

Note:

If you attach NTFS-formatted disks, you can only enable S2D with no disk eligibility check.

Attach a minimum of two premium SSDs to each VM. The size of the disks depends on your utilization. Note that S2D uses mirroring for fault tolerance. Because this Windows cluster has two nodes, two-way mirroring allows only half of the pool capacity to be used for creating disks/volumes.

Set host caching to Read-only.

The storage capacity you use in production environments depends on your workload. The values described in this article are for demonstration and testing.

  7. Add the virtual machines to your pre-existing domain.

 

After the virtual machines are created and configured, you can configure the failover cluster.

 

 

Step 2: Configure the Windows Failover Cluster with S2D

The next step is to configure the failover cluster with S2D. In this step, you will do the following substeps:

  1. Add Windows Failover Clustering feature
  2. Validate the cluster
  3. Create the failover cluster
  4. Create the cloud witness
  5. Add storage

Add Windows Failover Clustering feature

  1. To begin, connect to the first virtual machine with RDP using a domain account that is a member of local administrators, and has permissions to create objects in Active Directory. Use this account for the rest of the configuration.
  2. Add Failover Clustering feature to each virtual machine.

To install Failover Clustering feature from the UI, do the following steps on both virtual machines.

  • In Server Manager, click Manage, and then click Add Roles and Features.
  • In Add Roles and Features Wizard, click Next until you get to Select Features.
  • In Select Features, click Failover Clustering. Include all required features and the management tools. Click Add Features.
  • Click Next and then click Finish to install the features.

To install the Failover Clustering feature with PowerShell, run the following script from an administrator PowerShell session on one of the virtual machines.

PowerShell:

$nodes = ("<node1>","<node2>")

Invoke-Command  $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}

For reference, the next steps follow the instructions under Step 3 of Hyper-converged solution using Storage Spaces Direct in Windows Server 2016.

Validate the cluster

This guide refers to instructions under validate cluster.

Validate the cluster in the UI or with PowerShell.

To validate the cluster with the UI, do the following steps from one of the virtual machines.

  1. In Server Manager, click Tools, then click Failover Cluster Manager.
  2. In Failover Cluster Manager, click Action, then click Validate Configuration....
  3. Click Next.
  4. On Select Servers or a Cluster, type the name of both virtual machines.
  5. On Testing options, choose Run only tests I select. Click Next.
  6. On Test selection, include all tests except Storage. See the following picture:

  7. Click Next.

  8. On Confirmation, click Next.

The Validate a Configuration Wizard runs the validation tests.

To validate the cluster with PowerShell, run the following script from an administrator PowerShell session on one of the virtual machines.

PowerShell:

Test-Cluster -Node ("<node1>","<node2>") -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

After you validate the cluster, create the failover cluster.

Create the failover cluster

This guide refers to Create the failover cluster.

To create the failover cluster, you need:

  • The names of the virtual machines that become the cluster nodes.
  • A name for the failover cluster
  • An IP address for the failover cluster. Use an IP address on the same Azure virtual network and subnet as the cluster nodes that is not already in use.

Windows Server 2008-2016

The following PowerShell creates a failover cluster for Windows Server 2008-2016. Update the script with the names of the nodes (the virtual machine names) and an available IP address from the Azure VNET:

PowerShell:

New-Cluster -Name <FailoverCluster-Name> -Node ("<node1>","<node2>") -StaticAddress <n.n.n.n> -NoStorage

Windows Server 2019

The following PowerShell creates a failover cluster for Windows Server 2019. For more information, review the blog Failover Cluster: Cluster network Object. Update the script with the names of the nodes (the virtual machine names) and an available IP address from the Azure VNET:

PowerShell:

New-Cluster -Name <FailoverCluster-Name> -Node ("<node1>","<node2>") -StaticAddress <n.n.n.n> -NoStorage -ManagementPointNetworkType Singleton

 

Create a cloud witness

Cloud Witness is a new type of cluster quorum witness stored in an Azure Storage blob. This removes the need for a separate VM hosting a witness share.

  1. Create a cloud witness for the failover cluster.
  2. Create a blob container.
  3. Save the access keys and the container URL.
  4. Configure the failover cluster quorum witness. See Configure the quorum witness in the user interface, or use the PowerShell sketch below.
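
A minimal PowerShell sketch for step 4, assuming you have the storage account name and one of its access keys from steps 2 and 3:

PowerShell:

Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"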

Add storage

The disks for S2D need to be empty and without partitions or other data. To clean disks follow the steps in this guide.

  1. Enable Storage Spaces Direct (S2D).

The following PowerShell enables storage spaces direct.

PowerShell:

Enable-ClusterS2D

In Failover Cluster Manager, you can now see the storage pool.

 

   2. Create a volume.

One of the features of S2D is that it automatically creates a storage pool when you enable it. You are now ready to create a volume. The PowerShell cmdlet New-Volume automates the volume creation process, including formatting, adding to the cluster, and creating a cluster shared volume (CSV). The following example creates a 100 gigabyte (GB) CSV.

PowerShell:

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName tsdisk -FileSystem CSVFS_ReFS -Size 100GB

After this command completes, a 100-GB volume is mounted as a cluster resource.

The following diagram shows a disk:

Step 3: Test failover cluster failover

In Failover Cluster Manager, verify that you can move the storage resource to the other cluster node. If you can connect to the failover cluster with Failover Cluster Manager and move the storage from one node to the other, you are ready to configure the FCI.

Step 4: Create File Server Cluster Role

To create the File server cluster role, you need:

  • The name of the cluster virtual disk.
  • A NetBIOS name for the file server cluster role
  • An IP address for the file server cluster role. Use an IP address on the same Azure virtual network and subnet as the cluster nodes that is not already in use.

After you have configured the failover cluster and all cluster components including storage, you can create the File Server cluster role.

  1. Connect to one of the cluster nodes and open Server Manager -> Tools -> Failover Cluster Manager.
  2. In Failover Cluster Manager, right-click Roles -> Configure Role. In the role list, select ‘File Server’ and click Next. Select the File Server type ‘File server for general use’ and click Next.
  3. Provide a NetBIOS name for the role and click Next.
  4. Select the disk to be associated with this role and click Next.
  5. Click Next and then Finish to complete role creation. The role creation will fail (screenshot below) because, by default, Azure assigns it the IP of the host (node), which is already in use.
  6. To assign a static IP, go to the role's IP address and click Properties. Select IP address and type in the IP address. Saving the role properties should change the role Status to Online.

   7. Change the role's Preferred Owners and Failback settings. Right-click the role and select Properties.

On the General tab, under Preferred Owners, select both nodes. Click the Failover tab, and under Failback, select ‘Allow Failback’.

 

Step 5: Install TrueSight Infrastructure Management Server

After you have configured the File Server cluster role successfully, you can install the TSIM server.

  1. RDP to the cluster's current host server (we treat this as the primary server).
  2. Extract the TSIM installer and navigate to the Disk1 directory of the extracted folder. Run the installer using the following command:
  3. install.cmd -J INSTALL_TYPE=PRIMARY
    1. Follow the default install instructions on most screens; the installer will pick an installation directory on the drive that is mapped to the disk associated with the File Server role.
    2. On the Cluster group configuration page, select Custom cluster group and provide the NetBIOS FQDN of the File Server role created in Step 4 – “Create File Server Cluster Role”.

   4. The primary install should complete successfully. Run the following command to verify the Infrastructure Management installation:

pw system status

   5. Update the license files on the primary TSIM server.

   6. Go to TSIM_INSTALL_DIR/pw/custom/conf and add the following entry to pronet.conf:

   7. pronet.tsim.proxy.hosts=<comma_separated_list_of_Filesystem_cluster_role_NetBIOS_FQDN_and_FQDN_of_all_cluster_nodes>

Example: pronet.tsim.proxy.hosts=tsimvm3.bmcaaz.com,tsimvm4.bmcaaz.com,tsim-fs.bmcaaz.com

Without this configuration update, the following error appears in TrueSight.log when you try to browse the TSIM server:

------------------------------------------------------------------------------------------------------------------

ERROR 11/18 15:54:06 Invalid              [ajp-nio2-127.0.0.1-8009-exec-10] 600004 CSRF Filter - Header Referer is not matched. Blocking the call for TSPS UI console !!!

------------------------------------------------------------------------------------------------------------------

   8. Stop the TSIM File Server role using the FCI console, and then move it to the other cluster node (the secondary).

   9. RDP to the secondary server, navigate to the Disk1 directory, and launch the TSIM installer using the following command:

          install.cmd -J INSTALL_TYPE=SECONDARY

   10. Follow the default install instructions through the install screens; the install should succeed.

   11. Run the following command to verify the Infrastructure Management installation:

pw system status

 

 

Step 6: Create Azure load balancer

On Azure virtual machines, clusters use a load balancer to hold an IP address that needs to be on one cluster node at a time. In this solution, the load balancer holds the IP address for the File Server role.

Create and configure an Azure load balancer.

Create the load balancer in the Azure portal

To create the load balancer:

  1. In the Azure portal, go to the Resource Group with the virtual machines.
  2. Click + Add. Search the Marketplace for Load Balancer. Click Load Balancer.
  3. Click Create.
  4. Configure the load balancer with:
    • Subscription: Your Azure subscription.
    • Resource Group: Use the same resource group as your virtual machines.
    • Name: A name that identifies the load balancer.
    • Region: Use the same Azure location as your virtual machines.
    • Type: The load balancer can be either public or private. A private load balancer can be accessed from within the same VNET. Most Azure applications can use a private load balancer.
    • SKU: The SKU for the load balancer should be standard.
    • Virtual Network: The same network as the virtual machines.
    • IP address assignment: The IP address assignment should be static.
    • Private IP address: The same IP address that you assigned to the File Server cluster role. See the following picture:

Configure the load balancer backend pool

  1. Return to the Azure Resource Group with the virtual machines and locate the new load balancer. You may have to refresh the view on the Resource Group. Click the load balancer.
  2. Click Backend pools and click + Add to add a backend pool.
  3. Associate the backend pool with the availability set that contains the VMs.
  4. Under Target network IP configurations, check VIRTUAL MACHINE and choose the virtual machines that will participate as cluster nodes. Be sure to include all virtual machines that will host the Windows FCI.
  5. Click OK to create the backend pool.

Configure a load balancer health probe

  1. On the load balancer blade, click Health probes.
  2. Click + Add.
  3. On the Add health probe blade, Set the health probe parameters:
    • Name: A name for the health probe.
    • Protocol: TCP.
    • Port: Set to the port you created in the firewall for the health probe. In this article, the example uses TCP port 59999.
    • Interval: 5 Seconds.
    • Unhealthy threshold: 2 consecutive failures.
  4. Click OK.

Set load balancing rules

  1. On the load balancer blade, click Load balancing rules.
  2. Click + Add.
  3. Set the load balancing rules parameters:
    • Name: A name for the load balancing rules.
    • Frontend IP address: Use the IP address of the File Server cluster role.
    • Port: Set TCP port to 80.
    • Backend port: This should be the same as the frontend port (80).
    • Backend pool: Use the backend pool name that you configured earlier.
    • Health probe: Use the health probe that you configured earlier.
    • Session persistence: None.
    • Idle timeout (minutes): 4.
    • Floating IP (direct server return): Disabled
  4. Click OK.
  5. Repeat step 3 to add load balancing rules for ports 80, 443, 8093, 1099, 1100, 3084, 1828, 11590, 10590, 1839, and 1851, which are required for the TSIM configuration.
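
If you would rather script the remaining rules than add them one by one in the portal, a rough sketch using the Az PowerShell module might look like the following. The load balancer and resource group names are placeholders, and it assumes the backend pool, health probe, and frontend IP you configured earlier are the only ones on this load balancer:

PowerShell:

$lb       = Get-AzLoadBalancer -Name "<LoadBalancerName>" -ResourceGroupName "<ResourceGroupName>"
$frontend = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb
$backend  = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb
$probe    = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb

# Port 80 was already added in step 3; loop over the remaining TSIM ports.
foreach ($port in 443,8093,1099,1100,3084,1828,11590,10590,1839,1851) {
    Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "tsim-rule-$port" `
        -FrontendIpConfiguration $frontend -BackendAddressPool $backend -Probe $probe `
        -Protocol Tcp -FrontendPort $port -BackendPort $port -IdleTimeoutInMinutes 4 | Out-Null
}
Set-AzLoadBalancer -LoadBalancer $lb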

Step 7: Configure cluster for probe

Set the cluster probe port parameter in PowerShell.

To set the cluster probe port parameter, update variables in the following script with values from your environment. Remove the angle brackets <> from the script.

PowerShell:

$ClusterNetworkName = "<Cluster Network Name>"

$IPResourceName = "<File Server cluster role IP Address Resource Name>"

$ILBIP = "<n.n.n.n>"

[int]$ProbePort = <nnnnn>

 

Import-Module FailoverClusters

 

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

 

In the preceding script, set the values for your environment. The following list describes the values:

  • <Cluster Network Name>: Windows Server Failover Cluster name for the network. In Failover Cluster Manager > Networks, right-click on the network and click Properties. The correct value is under Name on the General tab.
  • <File Server cluster role IP Address Resource Name>: File Server cluster role IP address resource name. In Failover Cluster Manager > Roles, under the File Server cluster role, under Server Name, right click the IP address resource, and click Properties. The correct value is under Name on the General tab.
  • <ILBIP>: The ILB IP address. This address is configured in the Azure portal as the ILB front-end address. This is also the File Server cluster role IP address. You can find it in Failover Cluster Manager on the same properties page where you located the < File Server cluster role IP Address Resource Name >.
  • <nnnnn>: Is the probe port you configured in the load balancer health probe. Any unused TCP port is valid.

Note: Verify that the SubnetMask and EnableDhcp values are correct for your cluster. If not, change them.

 

 

Important

After you set the cluster probe you can see all of the cluster parameters in PowerShell. Run the following script:

PowerShell:

Get-ClusterResource $IPResourceName | Get-ClusterParameter

 

 


Share This:

Hello TrueSight Operations Mgmt Family,

 

We are happy to announce a new 'Meet The Champions' blog post series, to spotlight those awesome members who dedicate their time to help other members of our community. You might have seen them replying to your questions (or others'), and sharing wisdom to make this a better place to be. We thank them, and we feel it is time the whole community gets to know them better! Spotlighted champions will also be invited to be a part of an exclusive community, where they can interact with other champions on improving the overall BMC Communities experience.

 

In this edition, Roland Pocek from Austria talks about his experience with BMC Communities, personal life, and more!

 

 

Do you remember how you were introduced to BMC Communities? What was your journey like?

I started working as a consultant 14 years ago and by then joined BMC communities

 

 

Tell us a bit about your work and goals?

Working as a consultant for BMC products (discovery, patrol, tsom, tsim, appvis, …) for many years now, like to meet customers, learn new things every day, solving problems, …

 

 

What draws you to participate in BMC Communities?

Really good place for getting answers you maybe won't ask support, and pretty fun ppl there to share knowledge with, meet new ppl doing the same work/having the same problems who are willing to help each other

 

 

Did you make any new friends in BMC Communities? Do you have any stories to share?

Actually yes, met Matt Laurenceau (funniest skype talk I had in a long time) and was also able to meet BMC guys like Steve Mundy, John Gallagher, May Bakken and a few more in real life  

 

 

How did you end up picking a Sheldon Cooper (from Big Bang Theory) quote as your BMC Communities profile picture?

Really like the BAZINGA 

 

Do you have any message for the new members of BMC communities?

        Take the time and collaborate

 

 

How do you like to spend your spare time?

Playing drums/music and do lots of sports

 

 

If you could pick one thing that could be made better in BMC Communities, what would be it?

Get more BMC experts involved in answering questions in some communities and I would like to see more roadmap and upcoming products/features stuff

ropfloatyhorse.jpg


 

Q. What  is your favorite movie(s)?

atm mostly watch tv series like banshee, gotham, santa clarita diet, …

 

Q. Who is the greatest player in your favorite sport?

Roger Federer

 

Q. What was the best vacation you have had?

Had the luck to see many different countries and cities, hard to say

 

ropsmiles.jpg

Roland with May Bakken

 

Thank You Roland for all the wonderful work you are doing here!

Community members, please make sure that you visit Roland's profile, and click 'Follow' (in 'inbox' if you wish to be notified on all activities) to be in touch with him, and be updated.  If you have had an interaction with Roland that helped you, feel free to share with us in the comments below!

Share This:

Hello World,

 

Are you worried about the management of thousands of events generated by your infrastructure? Are you using many event adapters? Do you want to know about event collector functionality? What about applying effective policies for managing the events?

Do you want to reach the level where you can build complex service models on your own? And what about adding some Business Service Resolution functionality so that the incident management is fully automated?

 

Too many questions.. too many requirements... Solution - Very Simple !!!

 

Take our brand new training courses on TSOM 11.x Event Management and Service Modeling, which will help you find solutions to all the above questions.

 

  • BMC TrueSight Operations Management 11.x: Advanced Event Management
  • BMC TrueSight Operations Management 11.x: Advanced Training - Service Modeling

 

More information can be found at: TrueSight Operations Management Training - BMC Software  and check for 11.x Learning Track. Course abstracts are attached for quick reference.

 

Feel free to get back to us on further queries, and we are happy to help, as always !!!

 

 

Nidhi Gupta Mani Singh Geoffrey Bergren Dirk Braune Rasika Sarnaik Steve Mundy Mohana Ghotankar Trupti Modi Rafael de Rojas Shweta Agarwal Kamraun Marashi Pankaj Pansare Sabrina Paprocki Jim Stephens

Share This:

Logs are crucial to help you understand what is happening inside your Kubernetes cluster.

 

Even though most applications have a native logging mechanism out of the box, in a distributed and containerized environment (like Kubernetes), users are better off with a centralized logging solution. That's because they need to collect logs from multiple applications with different log formats and send them to some logging backend for subsequent storage, processing, and analysis. Kubernetes provides all the basic resources needed to implement such functionality.

 

In this tutorial, we explore the Kubernetes logging architecture and demonstrate how to collect application and system logs using TrueSight ITDA, which, using an Elasticsearch backend, offers full-text search, log aggregation, analysis, and visualization functionality. Let's get started!

 

Overview of Kubernetes Logging Architecture and Logging Options

Docker containers in Kubernetes write logs to the standard output (stdout) and standard error (stderr) streams. Docker redirects these streams to a logging driver, configured in Kubernetes to write to a file in JSON format. Kubernetes then exposes the log files to users via the kubectl logs command. Users can also get logs from a previous instantiation of a container by setting the --previous flag of this command to true. That way they can get container logs even if the container crashed and was restarted.
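
For example, for a pod with more than one container (the pod and container names below are placeholders):

kubectl logs <pod-name> -c <container-name>
kubectl logs <pod-name> -c <container-name> --previous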

 

However, if a pod is deleted from the node forever, all corresponding containers and their logs are also deleted. The same happens when the node dies. In this case, users are no longer able to access application logs. To avoid this situation, container logs should have a separate shipper, storage, and lifecycle that are independent of pods and nodes. Kubernetes does not provide a native storage solution for log data, but you can easily integrate your preferred logging shipper into the Kubernetes cluster using Kubernetes API and controllers.

 

Kubernetes architecture facilitates a number of ways to manage application logs. Common approaches to consider are:

  • using a logging sidecar container running inside an app’s pod.
  • using a node-level logging agent that runs on every node.
  • pushing logs directly from within an application to some backend.

 

Let’s briefly discuss the details of the first and the second approach.

To complete examples used below, you’ll need the following:

  • A running Kubernetes cluster.
  • A kubectl command line tool installed and configured to communicate with the cluster.

 

Pre-requisite: Creating ITDA collector Docker image

Let’s create ITDA collector Docker image which can be used to collect logs using both the approach.

Use the following Dockerfile to create the Docker image.

 

FROM centos
MAINTAINER schopda@bmc.com

ENV TARGET Linux-2-6-x86-64-nptl
ENV BASEDIR /opt/
ENV INSTALLDIR /opt/bmc
ENV PATUSER patrol

ADD truesight_itda.tar $BASEDIR

RUN useradd -p 8Iq9Omhord6cQ $PATUSER
RUN mkdir $INSTALLDIR
RUN chmod -R 777 $INSTALLDIR

RUN echo "cat /etc/hostname" > /usr/bin/hostname
RUN chmod +x /usr/bin/hostname
RUN echo patAdm1n | passwd --stdin root

WORKDIR /opt/bmc_products
RUN sh RunSilentInstall.sh

WORKDIR $INSTALLDIR/Patrol3
RUN mkdir $INSTALLDIR/Patrol3/admin
RUN mkdir $INSTALLDIR/Patrol3/images
RUN chown -R patrol:patrol $INSTALLDIR/Patrol3
ENV PATH $PATH:$INSTALLDIR/Patrol3/$TARGET/bin
ENV PATROL_HOME=$INSTALLDIR/Patrol3/$TARGET
ENV PATROL_ADMIN=$PATUSER

RUN rm -rf /opt/bmc_products

CMD ["PatrolAgent"]

 

There are several things to pay attention to:

· It uses a TrueSight CMA repository package as the installable. The package should contain the PATROL Agent, the IT Data Analytics KM, and the JRE. Also make sure you set the correct integrationServices variable during package creation. The package should be saved as truesight_itda.tar.

· The collector needs root permission to read logs in /var/log. To avoid permission errors, we will run the collector as the root user inside the container. Create the CMA repository package with the PATROL default account set to root. For this exercise the root password is set to ‘patAdm1n’. If you change this, make sure to change it in the Dockerfile as well.

In this blog, we will refer to this image as “vl-pun-lnx-01.bmc.com:80/ITDA/truesight-itda-col”.

For more details on creating the Docker image, please check out my blog post. Once the Docker image is created, push it to your Docker repository.
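
For example, assuming the Dockerfile and truesight_itda.tar are in the current directory and the registry below is your private Docker registry:

docker build -t vl-pun-lnx-01.bmc.com:80/ITDA/truesight-itda-col .
docker push vl-pun-lnx-01.bmc.com:80/ITDA/truesight-itda-col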

 

Approach I: Using Sidecar Containers

 

 

Let’s assume you have an application container producing some logs and outputting them to stdout, stderr , and/or a log file. In this case, you can create one or more sidecar containers inside the application pod. The sidecars will be watching for the log files and an app’s container stdout/stderr  and will stream log data to their own stdout  and stderr  streams. Optionally, a sidecar container can also pass the retrieved logs to a node-level logging agent for subsequent processing and storage. This approach has many benefits described in this great article from the official documentation. Let’s summarize them:

  • With sidecar containers, you can separate several log streams from your app container. This is handy when your app container produces logs with different log formats. Mixing different log formats would deteriorate manageability of your logging pipeline.
  • Sidecar containers can read logs from those parts of your application that lack support for writing to stdout or stderr.
  • Because sidecar containers use stdout and stderr , you can use built-in logging tools like kubectl logs .
  • Sidecar containers can be used to rotate log files which cannot be rotated by the application itself.

At the same time, however, sidecar containers for logging have certain limitations:

  • Writing logs to a file and then streaming them to stdout can significantly increase disk usage. If your application writes to a single file, it’s better to set /dev/stdout as the destination instead of implementing the streaming sidecar container approach.
  • If you want to ship logs from multiple applications, you must design a sidecar(s) for each of them.

 

Sidecar container with logging agent:

 

If the node-level logging agent is not flexible enough for your situation, you can create a sidecar container with a separate logging agent that you have configured specifically to run with your application.

 

Here is the pod configuration manifest that you can use to implement this approach. The pod mounts a volume from which ITDA can pick up the data.

 

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: This is BMC sidecar container first log file test message $(date)" >> /var/log/1.log;
        echo "$(date) INFO:This is BMC sidecar container second log file test message $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-agent
    image: vl-pun-lnx-01.bmc.com:80/ITDA/truesight-itda-col
    ports:
    - containerPort: 3181
      hostPort: 5010
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
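
Assuming you save the manifest as counter-pod.yaml (the file name is arbitrary), create the pod and check that both containers come up:

kubectl create -f counter-pod.yaml
kubectl get pod counter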

There are some additional steps required to get the data into the ITDA server, because the log data format is not known in advance.

1. Once the pod is running, it should appear under Managed Devices in the TrueSight Presentation Server.

2. Create a monitoring policy for the IT Data Analytics solution to connect the ITDA agent to the ITDA collection station.

3. Once the policy is applied to the pod agent, the next step is to create an ITDA data collector, with the file details and an appropriate data pattern format to index the data correctly.

After a few minutes, the log data should appear in the TrueSight ITDA server.

 

Approach II: Using a Node-Level Logging Agent

 

In this approach, you deploy a node-level logging agent on each node of your cluster. This agent is usually a container with access to the log files of all application containers running on that node. Production clusters normally have more than one node spun up. If this is your case, you'll need to deploy a logging agent on each node.

 

The easiest way to do this in Kubernetes is to create a special type of workload called a DaemonSet. The DaemonSet controller ensures that every node running in your cluster has a copy of the logging agent pod. The DaemonSet controller also periodically checks the count of nodes in the cluster and spins a logging agent up or down when the node count changes. The DaemonSet structure is particularly suitable for logging solutions because you create only one logging agent per node and do not need to change the applications running on the node. The limitation of this approach, however, is that node-level logging only works for applications' standard output and standard error streams.

 

 

Deploying ITDA Agent as Daemonset to collect system and application Logs

 

Using node-level logging agents is the encouraged approach in Kubernetes because it allows centralizing logs from multiple applications by installing a single logging agent per node. We now discuss how to implement this approach using the ITDA agent deployed as a DaemonSet in your Kubernetes cluster.

 

We use the same Docker image as in the sidecar approach to create the DaemonSet.

 

Step 1: Deploy a DaemonSet

 

Here is the DaemonSet configuration manifest that you can use. Each DaemonSet pod mounts two volumes from the host paths /var/log and /var/lib/docker/containers.

 

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: truesight-itda
  namespace: default
  labels:
    k8s-app: truesight-itda
spec:
  selector:
    matchLabels:
      name: truesight-itda
  template:
    metadata:
      labels:
        name: truesight-itda
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: truesight-itda
        image: vl-pun-lnx-01.bmc.com:80/ITDA/truesight-itda-col
        ports:
        - containerPort: 3181
          hostPort: 5003
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

 

Let’s save the manifest in the ts-itda-ds.yaml  and create the DaemonSet:

1

2

kubectl create -f ts-itda-ds.yaml
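
You can verify that the agent pod is scheduled on every node with standard kubectl commands; the label selector below matches the manifest above:

kubectl get daemonset truesight-itda
kubectl get pods -l name=truesight-itda -o wide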

 

Step 2: Configure log data collection

 

Once the DaemonSet is created successfully, it needs to be configured to collect and index data correctly to make log search and analysis easy and meaningful. In a large cluster this could be cumbersome if each node had to be configured individually; that's where TrueSight policy configuration and the Kubernetes ITDA content pack make setup easier and faster.

 

To import the “Kubernetes” content pack, log in to the ITDA server and use the Administration -> Content Packs -> Import menu.

 

Once imported successfully, it adds a Collection Profile template that includes two data collector configurations.

  1. varlogcontainer – To collect pod logs from each node residing in /var/log/containers
  2. varlogmessages – To collect system logs (kubelet and Docker log) from /var/log/messages file

 

The next step is to create a TrueSight CMA policy to configure the DaemonSet agents to connect to the ITDA collection station and server (with the Elasticsearch backend) and to use the data collector templates created by the content pack. The policy configuration remains the same as for the sidecar container, with the following two additions.

  1. Make sure the agent selection criteria is set so that "Agent Host Name" starts with the DaemonSet name, i.e. truesight-itda. This is useful in an auto-scaled cluster where nodes get added and removed frequently.

   2. In the IT Data Analytics monitor configuration, add the Collection Profile "kubernetes" to use the data collector templates, which automatically create data collectors in the ITDA server.

 

 

Within a few minutes, the ITDA agents will connect to the server and automatically create the required data collectors. Two data collectors are created for each node; for a three-node cluster, that is six data collectors.

Data collectors: 

To see the collected logs, click the search icon next to the respective data collector.

 

 

 

 

Conclusion

All things considered, the Kubernetes platform facilitates the implementation of full logging pipelines by providing useful abstractions such as DaemonSets. We saw how to easily implement cluster-level logging using node agents deployed as DaemonSets.

In this tutorial, we demonstrated how TrueSight ITDA can easily centralize logs from multiple applications. Unlike sidecar containers, which must be created for each application running in your cluster, node-level logging requires only one logging agent per node.

Share This:

PATROL Agent and KMs in Docker Containers

 

Overview:

This blog explains a way to create a Docker image from a TrueSight repository package, so that the package can run inside a Docker container. A Docker container is not a replacement for a full OS, so local OS monitoring (OS KMs) is not what it is intended for. Remote monitoring KMs (like VMware, remote OS monitoring, etc.) are ideal and best suited for containerization. We have used the VMware KM for this exercise.

 

Docker Overview

Docker is a platform that enables users to build, package, ship and run distributed applications. Docker users package up their applications, and any dependent libraries or files, into a Docker image. Docker images are portable artifacts that can be distributed across Linux environments. Images that have been distributed can be used to instantiate containers where applications can run in isolation from other applications running in other containers on the same host operating system.

 

Pre-requisites:

  1. The PATROL components for which you want to create the image should be in the form of a TrueSight CMA repository package only (a tar file).
  2. The TrueSight repository should be v10.0 or above.
  3. Make sure all the components are 64-bit only, as Docker supports 64-bit architecture only.
  4. Make a note of all inputs used while creating the repository package, as the same details should be used in the Dockerfile as well (e.g. PATROL default account, install directory, etc.).

 

PATROL package details used for this exercise:

  1. The components that are part of the PATROL package are PATROL Agent for Linux v10.0, Oracle for Linux (JRE), and VMware KM v4.0.0.
  2. Provide the Docker host root password for the root user details.
  3. Provide the Integration Service to connect to during package creation.
  4. The PATROL package should be saved as a tar file.

 

Dockerfile:

 

# IMPORTANT
# -------------------------------------------------------------
# The resulting image will have PATROL components installed which are part of PATROL package used
#
# REQUIRED FILES TO BUILD THIS IMAGE
# -------------------------------------------------------------
# (1) patrol_cma_package.tar
# Please create the tar package of PATROL component from v10.0 and above CMA
#
# HOW TO BUILD THIS IMAGE
# -------------------------------------------------------------
# Create new directory on Docker host and put tar package along with this Dockerfile in it.
# Run:
#      $ docker build -t "patrol:1" .
#
# Pull base image
# -------------------------------------------------------------
FROM centos

# Maintainer
# ----------
MAINTAINER schopda@bmc.com

# [REVIEW] Environment variables required for this build (Change the values of PATUSER and INSTALLDIR if required)
# -------------------------------------------------------------
ENV TARGET Linux-2-6-x86-64-nptl
ENV BASEDIR /opt/
ENV INSTALLDIR /opt/bmc
ENV PATUSER patrol

# Add the installer file to container file system
# -------------------------------------------------------------
ADD patrol_cma_package.tar $BASEDIR

# [REVIEW] Setup filesystem and patrol user
# Encrypted value next to -p argument is the password for patrol user.
# To get encrypted value of your password for patrol user use following command.
# openssl passwd -crypt <password>
# ------------------------------------------------------------
RUN useradd -p Q70GsdNXWnwzs $PATUSER
RUN mkdir $INSTALLDIR
RUN chmod -R 777 $INSTALLDIR

# SETUP hostname command as this is being used by PATROL silent installer and not available as part of image
# --------------------------------------------------------------
RUN echo "cat /etc/hostname" > /usr/bin/hostname
RUN chmod +x /usr/bin/hostname

# Install PATROL package
# --------------------------------------------------------------
WORKDIR /opt/bmc_products
RUN sh RunSilentInstall.sh

# Setup required PATROL environment
# --------------------------------------------------------------
WORKDIR $INSTALLDIR/Patrol3
ENV PATH $PATH:$INSTALLDIR/Patrol3/$TARGET/bin
ENV PATROL_HOME=$INSTALLDIR/Patrol3/$TARGET
ENV PATROL_ADMIN=$PATUSER

# Remove PATROL installer
# --------------------------------------------------------------
RUN rm -rf /opt/bmc_products

# Define default command to start PATROL Agent
# This will start PATROL agent on default port 3181. To change the port replace command with following.
# CMD ["PatrolAgent", "-p", "6755"]
# -------------------------------------------------
CMD ["PatrolAgent"]

#***************END of Dockerfile****************

 

Building Docker Image:

  1. Create a new directory on the Docker host.
  2. Copy the PATROL tar package into the new directory.
  3. Copy the Dockerfile into the same directory.
  4. Review the Dockerfile sections marked with [REVIEW] and make the appropriate changes.
  5. Create the new Docker image using the following command:

          $ docker build -t "patrol:1" .

Note: In the command above, "patrol" is the image name and "1" is the tag. Do not remove the "." (dot) at the end of the command.

 

Verify Docker image created:
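
For example, you can list the newly built image with:

$ docker images patrol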

 

Create container using Docker image:

To create a container from the Docker image, use the following command:

$ docker run -d -p 5000:3181 -h patrolagent-1 "patrol:1"

Details of the command above:

  1. -p 5000:3181 binds container port 3181 to port 5000 on the Docker host. To access the PATROL Agent externally, use port 5000.
  2. -h (--hostname) is not mandatory; however, it helps set a valid hostname for the container, which is consumed by the PATROL Agent when setting up the device name in TrueSight. By default, the container ID is used as the hostname.

Note: To override ENV variables at run time, please use the -e argument of the docker run command.

 

Things to know:

  1. The configuration of a PATROL Agent running inside a Docker container can be changed using the remote pconfig utility.
  2. Restarting the PATROL Agent using pconfig or the console will not work. It stops the PATROL Agent; to start it again, the container has to be started again.
  3. If not done during package creation, integration with TrueSight can be enabled by setting the /AgentSetup/integration/integrationServices variable.
Share This:

 

Why would you change the beacon post from HTTP to HTTPS?

 

However, before explaining why we want to change the beacon post, let's discuss what a beacon post is within App Visibility End User Monitoring.

 

In App Visibility End User Experience Monitoring, a JavaScript snippet is injected into the actual pages loaded by users. That JavaScript then sends beacons to TrueSight App Visibility Manager that contain metrics collected from the end user's browser about the page it just finished rendering.

 

App Visibility End User Experience Monitoring either needs an Agent to automatically inject the JavaScript (which can only work in servers supported by the AD Agent) or you need to manually add the JavaScript to your pages (which will work for any server).

 

******************************************************************************

 

Going forward we will also be introducing 2 more modes in Automatic Injection which will not require the agent.

As an application specialist you would be able to Automatically set up Active End User Monitoring for the following:

1)F5 Server

2)Apache Web Server

 

******************************************************************************

 

The AD Agent just monitors the request as normal. However, when it sees the response which contains the actual page which is being sent to the browser, it edits that page automatically before passing it on back to the client. In this edit, it inserts a <script> tag into the Head section of the HTML telling the browser where to find the JavaScript and leaves the rest of the page unchanged.

Thus, when the page returns to the client browser, that browser begins to execute the JavaScript inside the browser itself. When significant actions happen in the browser, that script sends out notifications called Beacons to the TrueSight App Visibility Proxy component. Note that the hostname and port of the Proxy component were hard coded into the script when the AD Agent edited the page. Thus, all Beacons from that page will go to the same Proxy until the page is reloaded or a new page is loaded. A new Proxy component may be dynamically assigned by the AD Agent in that new edit of the page. The proxy component then sends its information up to the TrueSight App Visibility Portal where it becomes available just as any other TrueSight Application Visibility data.

 

The App Visibility End User Experience Monitoring data that comes in through the Proxy component to the TrueSight App Visibility Portal is visible within the Application View in TrueSight Presentation Server.

 

Where the AD Agent Business Transaction data populates the Web, Business, and Database tiers, the App Visibility End User Experience Monitoring data populates the User and Network tiers.

 

Below is a link to the documentation that will explain App Visibility End User Monitoring further:

 

https://docs.bmc.com/docs/applicationmanagement/110/app-visibility-end-user-monitoring-721191241.html

 

 

OK so why would you change the beacon post from HTTP to HTTPS?

 

By default, the Beacons match the protocol used by the page. For example, if the application web page is HTTPS, then App Visibility End User Monitoring will inject the JavaScript into the web page, and the beacon post will be HTTPS.

 

If the application web page is HTTP, then a setting in App Visibility End User Monitoring needs to be changed in order to inject the JavaScript into the HTTP web page, and the beacon post will be HTTP. However, there may be certain situations where the user would like to receive the Beacon over HTTPS while the application web page is served over HTTP.

So, if the user wants the beacon post to be more secure then the protocol would need to be changed to HTTPS.

 

 

Below are the steps to change the beacon post from HTTP to HTTPS:

 

In the JavaScript code, there are variables for the App Visibility Proxy HTTP and HTTPS ports. You need to look for the code that builds the beacon URL based on the page protocol, find the URL that is built for the HTTP protocol, and update the beacon URL to use the HTTPS port and protocol.


In the install directory of every App Visibility Proxy, update the <apm-proxy-install-dir>/webapps/static-resources/aeuem-10.1.0.min.js file. Change 'http' to 'https' in all the places that are marked in the screenshot.

 

Changing Beacon Post.png

 

 

 


How to verify that the beacon post is now HTTPS?

 

The best way to verify that beacons are being sent with HTTPS is to use the Developer tool which is available in any browser.

After opening the Developer tool, you should check the Network tab --> JavaScript (JS) --> Protocol.

 

Here you can see the "beacons" item and verify that it is sent with HTTPS.

 

Does this step apply to Manual and Automatic JavaScript Injection Or just manual JavaScript Injection?

 

These steps will apply both for Manual as well as Automatic Injection (Any Kind)

Share This:

Excited to announce the availability of the following role based trainings and certified associate exams on TrueSight Operations Management 11.x.

 

New ILT Courses:

  • BMC TrueSight Operations Management 11.x: Fundamentals Installing
  • BMC TrueSight Operations Management 11.x: Fundamentals Administering

New Certified Associate Exams:

  • BMC Certified Associate: TrueSight Operations Management 11.x for Administrators Online Exam
  • BMC Certified Associate: TrueSight Operations Management 11.x for Consultants Online Exam

 

To know more and to register, go to the TrueSight Operations Management Learning Path: https://www.bmc.com/education/courses/truesight_operations_mgmt_training.html

 

Nidhi Gupta Dirk Braune Rafael de Rojas Vrushali Athalye  Geoffrey Bergren Pankaj Pansare Jim Stephens Shweta Agarwal Kamraun Marashi

Sabrina Paprocki Jimish Thakkar Mani Singh

Share This:

BMC TrueSight Operations Management users can now get more out of their network data. Entuity Network Analytics seamlessly integrates with TrueSight to provide your IT team with the complete set of tools they need for success.

 

 

Reduce Your Alert Noise: Let Entuity Network Analytics for TrueSight weed out the non-essential network event alerts before they get escalated into TrueSight, for more productive management.

 

When it comes to service management, the more detail you can see about the network, the better. However, some of these details are not entirely useful. While network alerts can act as a flag that something is wrong, they can also act as a false flag, alerting the professional to an event that is not, or will not lead to, a major service incident. You and your team probably get countless alerts throughout the day. If there is a major incident, these alerts should help prevent or lessen a network outage, not create more work assessing whether they are critical alerts. If they are not major alerts, they serve as a noisy distraction from the day-to-day tasks and projects that occupy you. Often too much time is spent categorizing these alerts into events, and when too much time is spent on insignificant events, other projects do not receive the attention they require.

When integrated with TrueSight, Entuity Network Analytics (ENA) eliminates the non-essential network alerts by pre-processing network incidents. With the non-threatening events weeded out before the IT professional sees them, the overall event information that TrueSight receives is more effective. This means ENA adds more value to your TrueSight operations, and you can expect that when the network noise is reduced, the event alerts being escalated are legitimate alerts. More efficient event processing lets IT staff tackle more pressing issues rather than wasting time manually validating event alerts.

 

Network services can be managed individually to ensure performance stability with powerful sets of analytics (order processing, VoIP, E-Commerce, remote branch IT performance)

 

ENA lets you take your management to the next level with the ability to manage by network services. Now IT teams can track a particular application's traffic patterns out to the user and back, and associate all of the devices that the application encompasses. For example, an order processing application can be managed not only as an application with TrueSight, but through ENA you can also see how the order processing data is sent back and forth to the user. Is performance slow because of the application, or because of how the data is being transported? ENA can manage by network service to answer the question of how the service is being handled on the network. Coupled with TrueSight, ENA gives you another way to manage your applications for outstanding performance. Visit entuity.com for more information.

Share This:

Consider the following typical business scenario:

 

business scenario.png

Effective operation management is essential for maintaining a healthy and thriving business. IT operations must keep applications, infrastructure, middleware, and services up and running to support key business processes.

BMC TrueSight Operations Management is a unique performance and availability management solution that goes beyond monitoring to handle complex IT environments and diverse data streams and deliver actionable IT intelligence. This can help resolve issues before they impact the business.

Additionally, BMC TrueSight Operations Management provides application-aware infrastructure monitoring for IT Operations, bringing together infrastructure and application monitoring in one integrated solution.

Operators play a crucial role in day-to-day product management. They need to monitor events, devices, and event groups and work with dashboards to address performance monitoring and incident management for an IT infrastructure.

The BMC TrueSight Operations Management 11.x: Fundamentals Operating training is designed specifically for TSOM 11.x operators and covers the product features that are crucial to their role. It is a 1-day training that contains many relevant, operator-focused labs. For more information and registration details, visit: Education COURSE page - BMC TrueSight Operations Management 11.x: Fundamentals Operating - BMC Software

 

 

Nidhi Gupta Dirk Braune Jimish Thakkar Rasika Sarnaik Geoffrey Bergren Kristen Linehan Sabrina Paprocki Steve Mundy Namita Maslekar

Share This:

As your system grows, you may want to start putting multiple Synthetic TEA Agents on a single Windows system. When you do this, it is BMC best practice to have only one TEA Agent from a given location on each machine. For instance, let's say you have 3 locations and 3 TEA Agents assigned to each location:

Blue Location:

Blue Agents.png

Black Location:

Black Agents.png

Green Location:

Green Agents.png

 

You will then set up 3 Windows systems and install 3 TEA Agents on each system.  Each TEA Agent will point to one of the above locations.  For instance:

Windows Systems.png

 

If Blue Location Agent 3 on Windows System 3 goes down:

Agent Crashes.png

 

then the system will load balance the Execution Plans from that system and agent to the other agents on System 1 and System 2, but only a single agent on each of System 1 and System 2 will be affected, so it will not have such a large impact on system resources.

 

If you have a scenario where your agents are not well balanced, you may run into a situation where you have too many agents from the same location on the same system. In this scenario, if one of those agents goes down or the system goes down, then the system will load balance the Execution Plans to the other available agents. This may lead to all the Execution Plans going to the same system or becoming "unbalanced," which could overwhelm the resources on that machine and either bring down that agent or cause it to become very slow and take several hours to recover.

Too Many Agents crash.png

Share This:

Why should I have minimal access and very few applications running on my system that is hosting a TEA Agent?

 

Typically, you run a TrueSight Synthetic TEA Agent as a process because your script needs access to the desktop or requires special privileges that running the TEA Agent as a service will not provide.

 

It is BMC's recommendation that when you run your TEA Agent as a process, you do not run any other applications on the system that may interfere with the TEA Agent or your script. It is also important that you do not allow any users to log into the system. This is because some scripts must have access to the desktop and mouse in order to work properly. If a user logs in and takes control of the mouse, there is a very significant chance that the scripts will lose the mouse and not be able to click on items when needed, causing the scripts to fail and give false alarms.

 

Also, if there are other processes running on the system that are using resources that are needed by the scripts, then the scripts may start to fail continuously until the TEA Agent can be restarted.  On a production server, this can be very hard to do when you can only restart processes during maintenance windows.

Share This:

Coming up on October 18, 2018 is BMC’s annual user event, the BMC Exchange in New York City!

Exchange-NY-CityImage-Linkedin.jpg

 

During this free event, there will be thought-provoking keynotes including global trends and best practices.  Also, you will hear from BMC experts and your peers in the Digital Service Operations (DSO) track.  Lastly, you get to mingle with everyone including BMC experts, our business partners, and your peers.  Pretty cool event to attend, right? 

 

In the DSO track, we are so excited to have 3 customers tell their stories. 

  • Cerner will speak about TrueSight Capacity Optimization and their story around automation and advanced analytics for capacity and planning future demand.   Check out Cerner’s BizOps 101 ebook
  • Park Place Technologies’ presentation will focus on how they leverage AI technology to transform organizations.
  • Freddie Mac will join us in the session about vulnerability management.  Learn how your organization can protect itself from security threats.  Hear how Freddie Mac is using BMC solutions. 

 

BMC product experts will also be present in the track and throughout the entire event.

  • Hear from the VP of Product Management on how to optimize multi-cloud performance, cost and security
  • Also, hear from experts on cloud adoption.  This session will review how TrueSight Cloud Operations provides you visibility and control needed to govern, secure, and manage costs for AWS, Azure, and Google Cloud services.

 

At the end of the day, there will be a networking reception with a raffle (or 2 or 3).  Stick around and talk to us and your peers.  See the products live in the solutions showcase. Chat with our partners.  Stay around and relax before heading home. 

 

Event Info:

Date: October 18th 2018

When: 8:30am – 7:00pm

  • Keynote begins at 9:30am
  • Track Sessions begin at 1:30pm
  • Networking Reception begins at 5:00pm

Where: 415 5th Ave, NY, NY 10016

 

For more information and to register, click here

 

Look forward to seeing you in NYC!  Oh, and comment below if you are planning to attend!  We are excited to meet you.
