
Check out the three new videos for BMC Atrium Orchestrator Platform.

Exporting a module from Development Studio to a repository

Let us know if you find the video helpful by rating the blog post or commenting on the https://docs.bmc.com/docs/display/public/baop78/Exporting+modules topic.

Activating a module on a grid

Let us know if you find the video helpful by rating the blog post or commenting on the https://docs.bmc.com/docs/display/public/baop78/Activating+modules topic.

Scheduling processes and running processes on demand in Grid Manager

Let us know if you find the video helpful by rating the blog post or commenting on the https://docs.bmc.com/docs/display/public/baop78/Adding+a+process+schedule or https://docs.bmc.com/docs/display/public/baop78/Executing+a+process+on+demand topics.


The support issue came in as "Schedules Running On Wrong Days".  After some back and forth, the problem was that workflows were converting dates to determine if they should run or not, and were running on the wrong days.

 

The customer had a collection of jobs that needed to run on specific days of the month and/or year.  The pattern was beyond the recurrence capabilities of BAO's internal scheduler, and frequent enough that covering it would have taken defining 20+ schedules in the module, and that wasn't going to scale.

 

They solved this by creating one daily schedule for each job and having the job check to see if it should continue running that day.  They created a module configuration item with XML of the form:

 

<dates>
  <day>0101</day>
  <day>0401</day>
  <day>0701</day>
  <day>1001</day>
</dates>


 

(Theirs was way bigger, but you get the idea.)

 

The process called by the scheduler started with a utility process that converted "now" from the Utility Activity into a string of the form "mmdd".  The main process then used an XPath transform to compare the "now" string against the values in the XML in the config item, and if a match was found, it returned "true".  When "true", a subsequent Switch activity allowed the process to actually run; otherwise it just stopped immediately.  They didn't even need to return the <day> element of the XML, because the fact that a match was found was enough.

 

The date comparison was done via an XPath transform of the form:

 

contains(., '$[now]')




 

This worked perfectly in BAO Development Studio, but once activated on the grid it sometimes ran on days it wasn't supposed to.  The job had run when it should have on January 1st, but had also run, incorrectly, on January 4th.  They opened a support ticket with BMC Customer Support and we dove in to figure out what the problem was.

 

The root cause of this issue was the use of the contains() XPath function.

 

XPath functions generally operate on an "input document" that may be all or part of an XML document.  The "." signifies "here", and where "here" is depends on the context.  When used as in this instance, the "." expands to the whole XML document that was fed in.  But the contains() function operates against string values only, so all the XML elements were stripped out, leaving just the string "0101040107011001" to compare against.  Doing the replacement on the contains() call above, it actually looked like this to BAO:

 

contains('0101040107011001', '$[now]')




 

This is going to return true on the days expected, but also on "0104", "0107", "0110" and "1010".
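To make the failure concrete, here is a small sketch using Python's lxml (purely illustrative, not BAO itself; it assumes the config item XML has no whitespace between the elements, matching the collapsed string value described above):

```python
# Demonstrates how contains(.) matches across element boundaries.
from lxml import etree

doc = etree.fromstring(
    "<dates><day>0101</day><day>0401</day><day>0701</day><day>1001</day></dates>"
)

# The "." collapses to the string value "0101040107011001", so dates that
# merely straddle two <day> values still match.
for now in ("0101", "0104", "0107", "0110", "1010"):
    print(now, doc.xpath("contains(., '%s')" % now))   # True for all five

print(doc.xpath("contains(., '0201')"))   # False: "0201" appears nowhere
```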

 

The solution is to change the XPath to look at the elements first, and search within them second.  This can be done by selecting the day elements, then picking only the one containing the date that matches $[now].  We still use the contains() function, but we use it inside a predicate so that only the <day> element with the right date in it is chosen.  That transform looks like:

 

//day[contains(., '$[now]')]/text()





When $[now] is "0101", this will return:

 

<value>0101</value>




 

When $[now] is "0104", it will return:

 

<value />




 

You can spruce the transform up a bit to produce a true/false result:

 

string-length(//day[contains(., '$[now]')]) > 0
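Checked the same way with lxml (again just an illustrative sketch, not BAO), the predicate form only ever matches whole <day> values:

```python
# Demonstrates the predicate-based transforms against the same config XML.
from lxml import etree

doc = etree.fromstring(
    "<dates><day>0101</day><day>0401</day><day>0701</day><day>1001</day></dates>"
)

# Select only <day> elements whose own value contains the date.
print(doc.xpath("//day[contains(., '0101')]/text()"))   # ['0101']
print(doc.xpath("//day[contains(., '0104')]/text()"))   # []

# Boolean form, suitable for driving a Switch activity.
print(doc.xpath("string-length(//day[contains(., '0101')]) > 0"))   # True
print(doc.xpath("string-length(//day[contains(., '0104')]) > 0"))   # False
```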



 

There are other transforms that can be used, but I'll leave them to the Teeming Millions.


Check out the video for what’s new in BMC Atrium Orchestrator Content 20.15.02.

Some of the features include a new BMC Service Desk Automation run book, certification with BMC Remedy AR System 9.0, and other updates to the application and base adapters.

 

For a detailed list of enhancements in this version, see BMC online technical documentation at https://docs.bmc.com/docs/display/public/baoc201502/Home.

 

Let us know if you find this video helpful by rating the blog post or leaving a comment.


Are you getting started with Development Studio?

 

For an introduction to creating and running a workflow in Development Studio, see the new video on the Creating the Hello World workflow topic in the BMC Atrium Orchestrator Platform 7.8 wiki. The video demonstrates how to create a simple workflow that writes a greeting to a file.

 

For more information about creating workflows, see Developing workflows using BMC Atrium Orchestrator Development Studio.


Let us know if you find the video helpful by rating the blog post or commenting on the Creating the Hello World workflow topic.


I’m getting old. I passed the 40 mark a year or so back so have been around for a while. My IT career started as a Level one help desk engineer for a small software development company. I was a phone monkey. I had to take phone calls from customers, take down their details and have them hang on the line whilst I filled in endless information and forms to create a service desk ticket. Once I had all the required information I’d give the customer the ticket number and helpfully tell them I’d have someone call them back shortly. That’s all I had to do, take details, fill in forms and then forward the ticket to a Level 2 guy to have them start looking into the issue. It was a boring, thankless job, but apparently deemed necessary as there were 3 of us employed to do this!

 

Even back then I could see that this wasn’t a very efficient model. After a few months I could spot common trends amongst the calls. Issues being raised that were easily answered by the Level 2 guys (when they got round to it). Other issues required logs and diagnostics to be gathered and then a call would be had with the customer and sets of instructions provided to address the product issues. I started to pick things up from the Level 2 guys and felt I could start doing more than just taking calls and logging information. That wasn’t my job though. Surely there was a better way to do things?

 

Fast forward 20 years and things have been gradually improving around service desk capabilities & procedures. We started to benefit from more structured processes around incident and change management (courtesy of ITIL). Knowledge management was introduced to capture resolutions to common issues. We got web based service desk clients that end users could access to create & update their own tickets as opposed to having to call issues in. Then we started exposing knowledge management to those end users so they could troubleshoot their own issues too. Things were getting more efficient for the service desk and the end users were benefiting from self-help and easier ways to interact with the support desk teams.

 

In the past few years we’ve seen some further big strides forward. Service catalogs were introduced, showcasing standard service offerings and helping end users get to grips with what was easily available to them versus what they had to ask for as a special case. Service catalog offerings were much easier to handle for the service desk, as less information was required from the end user, and there were clearly defined procedures and SLAs for the service desk engineers to follow to fulfill the request, lowering costs for the service desk and setting clear expectations around service delivery for end users.

 

In the latest set of service desk enhancements there are further improvements in ways to interact with IT. We are starting to see mobile service desk applications that are context & location aware - they know who you are and where you are and can route you to more tailored services and information. They enable crowd-sourced collaboration and have moved away from the concept of form filling, ensuring that making requests is a much easier and simpler experience than before.

The service desk has never been more accessible and efficient.

 

So what next? There is always room to improve right? What should the service desk focus on next to keep the momentum going? I believe the answer is “service desk automation”! Let’s look at the ongoing challenges - even after all the great work that’s gone on.

First is cost. Very few organizations have IT departments that go around saying “We’ve got a big raise in our budget this year, let’s hire some new people!”. Cost is an ongoing issue and efficiencies must continue to be found. With a kind of perverse sense of injustice, especially given all the improvements that have gone on recently, one of the side effects of service desk modernization efforts has actually been that more interactions are occurring, meaning higher costs. Think about it: if organizations expose more services to their end users and make those services more accessible, then guess what? More people are going to use them! Yes, we are reducing service desk tickets through better knowledge management and self-service initiatives, but overall more tickets are being produced as more capabilities are offered and exposed.

 

I recently read an article from MetricNet stating that the average cost for a level 1 service desk engineer to manually handle a service desk ticket was $22. That cost triples if the ticket is escalated to a level 2 engineer, and triples again for level 3. That’s a fairly staggering cost when you think about the ever-growing workload on the service desk. Well, a great way to keep a lid on these costs is to have automation in place to automate the handling or fulfillment of common service desk requests. If you have a well-defined service and a known way to handle requests for that service, why have valuable service desk engineers involved at all? Put automation in place and take the manual handling costs out of the loop. Sound like a pie-in-the-sky idea? We have one customer that put one piece of automation in place to handle password reset/unlock requests. This use case accounted for 22% of their total service desk ticket volume, or put another way, 46,000 requests per year. Even using the level 1 engineer cost of $22 per ticket, that’s a million bucks of cost avoidance right there.
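The back-of-the-envelope math behind that claim is easy to check (figures as quoted above):

```python
# Cost-avoidance estimate using the MetricNet per-ticket figures quoted above.
level1_cost = 22               # USD per manually handled level 1 ticket
level2_cost = level1_cost * 3  # cost triples on escalation to level 2
level3_cost = level2_cost * 3  # and triples again for level 3

tickets_per_year = 46_000      # automated password reset/unlock requests
savings = tickets_per_year * level1_cost
print(savings)                 # 1012000 -> roughly a million dollars a year
```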

 

Customer satisfaction and end user productivity are other areas where the service desk needs to stay focused. “Are we seen as a value-adding area of the business or are we an obstacle to productivity?” The introduction of service catalogs has greatly helped service desks clearly communicate what they can offer, but I would argue it’s one thing enabling your end users to easily ask for things and a whole other thing making sure you deliver what was asked for quickly and accurately. Look at the type of digital experiences millennials are used to these days. What would they think if they logged onto iTunes to purchase the latest Pitbull song, only to find out that the download won’t happen till the next day! “What century are we living in?”

It’s the same thing with IT-based requests. Why does it have to take 24 hours to turn around a request for some software to be deployed on my laptop? No one wanders around manually installing software from CDs anymore; they use automated configuration management tools that can push software on demand. So why the day-long wait? Well, it’s because the request gets put on some level 2 engineer’s queue and they get to it when they can. “Can’t we just link the service desk with the configuration management tool and take out the middle man?”


As another anecdote, we recently had a contractor come into the office to help us with a project. “I forgot to tell you, I’ll need access to the network to work. I know it takes a day to organize this in my company, I hope you can sort something out for me!” Not a problem, says I. Check this out. I fire up the BMC MyIT app and locate the GuestWifi service offering (which is location aware and knows who I am). I fill in 2 pieces of information: how many guests, and how long do they need wifi access for? I click submit, and within 2 minutes I am automatically emailed the guest wifi access codes. The contractor was speechless and clearly impressed with our service desk.

I have to agree, having automated fulfillment of common service desk requests is AWESOME.

 

Trust me, service desk automation is the way forward!


Hello,

 

The BMC Engage this year was very successful! Personally, this was the first time I attended the event. Besides having had the privilege to assist Anthony in the "BMC Atrium Orchestrator Product Overview and Key Use Cases: Connect and Streamline Cross-IT Processes" session, I was able to conduct the "Automation and Orchestration Hackathon" lab. We spent additional time with the attendees, and it was fantastic to see the folks getting involved.

 


 

As there were quite a few sessions on the orchestration subject hosted by our customers, I'd like to highlight just a few, in no particular order:

 

The Hackathon

The lab documents are posted here: https://bmc.g2planet.com/bmcengage2015/myevent_session_view.php?agenda_session_id=107

If you are interested in the "BMCS-University" module to use the workflows in your own lab, let me know. I'd like to schedule a WebEx to explain in detail how this module is configured and how you can leverage the workflows further.

 

The overall scenario is presented in this image:

[Image: Slide11.JPG]

 

In addition, I've documented the lab setup here:

 

 

Regards, V., aka Orchestrator


BMC Atrium Orchestrator 7.8 adds new functionality that is not backward compatible.  BAO now has a 'While' activity in Development Studio that can be used in processes.  BAO users that have multiple installed environments, such as production, staging, dev, and test, may not be able to upgrade all environments in a timely manner.  It may happen that development environments and developers are on BAO 7.8 while production environments are still on previous versions, and development and deployment of new modules to production can't be halted in the interim.

 

We've been asked more than once how to handle this.

 

BMC will not be providing support for, or backporting, the new While activity in BAO 7.8 to earlier versions of BAO.  Modules created and/or edited in BAO 7.8 Development Studio or later may still be usable on grids running BAO 7.6.03 or 7.7.xx.

 

BMC Atrium Orchestrator QA has tested modules created in BAO 7.8 Development Studio on a BAO 7.6.03 grid.

 

  • If a BAO 7.8 module contains no processes with the new While activity, jobs will execute as expected.
  • If a BAO 7.8 module containing processes with While activities is activated on a 7.6.03 grid, the module will activate, but any job that executes a process containing a While activity will fail without compensation.
  • Modules created in BAO 7.8 containing While activities cannot be imported into previous versions of Dev Studio.  Dev Studio will throw errors until the offending module is removed.

Hello,

 

How do you make the Robot more Human? Well, let's start by giving him / her a voice. But that's not enough. If your teenage kids don't listen, at least the home automation and my server automation / IT Service Management solution should ....


I'm very excited to get the integration between BAO and Amazon's Echo built. Gone are the days when I have to use a keyboard to deploy my VMs or open an incident. Thanks to my "Orchestration" skill, IT is finally listening to me. Getting the IT admin to listen is another story.  Hello ITSM, Hello Echo ....


See me in Las Vegas at the BMC Engage - Automation and Orchestration Hackathon to learn more ....


Regards, V. aka "Orchestrator"




Essentially, what conducting is about is getting the players to play their best, to use their energy, and to access their point of view about the music. How does this apply to 'Atrium Orchestrator'?

 

In an orchestrated network several parties collaborate to create value. The leader is the orchestrator putting together participants with different, complementary capabilities.

 

For registered communities users:

Find the iBook Community Edition at New to Orchestration? and find out how you can apply this in the BMC Atrium Orchestrator world.

 

Regards, V.


I recently decided to automate some daily checks for a client that a Remedy Team would normally perform against the ARS servers, such as checking memory for the Mid Tier processes (Java memory allocation).  One of the factors driving this was a memory leak in Tomcat affecting the systems, where Tomcat needed to be restarted periodically before it became unresponsive.

Due to the type of client, additional security measures are utilised to gain access to the servers in the form of secondary level access.

 

A 2-step login process is used:

 

  • First Level - Password for the "Remedy" user that changes daily (obtained via a script using SSH)
  • Second Level - Unix second level access consisting of a Username / Password combination for authorised users where commands can be executed in the Unix shell

 

Here is where I encountered some "fun" results and unexpected/inconsistent behaviour. 

 

To obtain the First Level Password, the Unix script required you to enter your Second Level Access credentials, then simulated keystrokes to pass through the various screens until you could obtain the Daily Password.

What the user saw on screen and what was actually happening in the background with the script were 2 different things.

The Unix script effectively displayed everything "nicely" to the user, but the raw SSH output returned via the SSH Adapter included additional information that was never displayed to the user.

This ended up being a little trickier than first thought, but through trial and error I managed to obtain the information needed to produce the required commands and prompts.

 

Playing around with the "Prompts" was the fun bit here, as this was a script that did not actually return to a system prompt, and any additional keystrokes would terminate the SSH session before you obtained the output.

 

The request looked something like this:

 

<request-data>

  <ssh-request>

    <prompts>

      <prompt name="cmd">: </prompt>

      <prompt name="Username">Username: </prompt>

      <prompt name="Password">Password: </prompt>

      <prompt name="Query">press 'M' for a manual date query</prompt>

      <prompt name="Output">===========================================================================

</prompt>

      <prompt name="Newline">\n</prompt>

    </prompts>

    <targets>

      <target name="">

        <host>{hostname.com}</host>

        <port>22</port>

        <userName>qpass</userName>

        <password>ask4help</password>

        <timeout-secs>60</timeout-secs>

        <allow-unknown-hosts>true</allow-unknown-hosts>

        <use-shell-mode />

        <prompt>: </prompt>

        <establish-connection-timeout-secs>60</establish-connection-timeout-secs>

      </target>

    </targets>

    <commands>

      <command ignore-exit-code="true" prompt="Password"><![CDATA[{username}]]></command>

      <command ignore-exit-code="true" prompt="Query">

        <EncryptedData xmlns="http://www.w3.org/2001/04/xmlenc#" Type="http://www.w3.org/2001/04/xmlenc#Content">

          <CipherData>

            <CipherValue>{encrypted password}</CipherValue>

          </CipherData>

        </EncryptedData>

      </command>

      <command ignore-exit-code="true" prompt="Password"><![CDATA[\n]]></command>

      <command ignore-exit-code="true" prompt="Newline"><![CDATA[\n]]></command>

    </commands>

  </ssh-request>

</request-data>

 

The above used the OOB "SSH" process under the "AutoPilot-AD-Utilities Module".  Values for the Secondary Access are taken from the Module Configuration, thus the "<CipherData>" tags and encrypted password in the request (I have removed other sensitive information).

 

The issues started when I attempted to log into the ARS servers (Solaris) to perform the Java memory check using the same logic (and prompts/commands) as above.  After all, the prompts for the Username and Password looked the same in an SSH session, so they should work exactly the same for the next session?  Wrong. The assumption "that it all works the same" was my downfall.

 

Although everything appeared normal, with the same prompts presented for Username / Password in the PuTTY sessions, it did not work the same in the background through the SSH Adapter.  No matter what combination I tried, I could not get past the "Password: " prompt, and I kept receiving an error from the system that did not make sense (I had not encountered it before, nor could I reproduce it), after which the session was terminated.  The logs did not shed much light on what was happening, other than showing the error I was seeing being thrown.

It took about half a day of investigation to reproduce the steps causing the behaviour and the error, which was only present when entering an invalid Username / Password combination, e.g. password / password.  Entering a valid Username and a wrong Password did not cause the error; it just prompted for the password again.

It took a bit more digging to understand that if I sent an "enter/enter" combination I received the same error and the session was terminated.  ** A clue to what was going on **

 

Now I had the culprit: the system was sending a "Return/Enter" type command, and the session was being terminated before the actual defined commands were executed.

This led down the path of "what will cause a Return/Enter command to be sent before the actual command I had listed in the request after the prompt?"

There is one setting that causes this behaviour ..... <verify-os>

 

As explained in the following discussion, excluding the "<verify-os>" tag from the request call can introduce different behaviour across devices when using SSH:

 

How to Use Orchestrator (AO) to execute commands on networking devices (e.g. Cisco)

 

So, now that I had the culprit and a potential solution, what next?

To verify, I quickly whipped up a static adapter request (an XML context item containing the full adapter request to execute) to test different scenarios by using the Call Adapter process directly.

This allowed me to narrow down the combination that worked with these Solaris servers and the login script.

 

To make this reusable once identified, I copied the OOB SSH process (and renamed it), then adjusted the XSLT to include the required element:

 

"<verify-os>false</verify-os>"

 

in the "<targets>" node.
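For illustration, the first request from above would end up with a "<targets>" section like this once the element is included (the host, credential, and timeout values are the placeholders from that earlier example):

```xml
<targets>
  <target name="">
    <verify-os>false</verify-os>
    <host>{hostname.com}</host>
    <port>22</port>
    <userName>qpass</userName>
    <password>ask4help</password>
    <timeout-secs>60</timeout-secs>
    <allow-unknown-hosts>true</allow-unknown-hosts>
    <use-shell-mode />
    <prompt>: </prompt>
    <establish-connection-timeout-secs>60</establish-connection-timeout-secs>
  </target>
</targets>
```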

 

I did this for all XSLT transforms in the process.  I can then use this particular SSH process wherever required.

 

The XSLT looks like the following:

 

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

  <xsl:output indent="no" cdata-section-elements="command " />

  <xsl:template match="/">

    <request-data>

      <ssh-request>

        ${prompts}

        <xsl:if test="string-length('${target}')!=0">

          <!-- Dynamic target choice -->

          <targets>

            <target name="${target}" />

          </targets>

        </xsl:if>

        <xsl:if test="string-length('${host name}')!=0">

          <!-- Dynamic target specification -->

          <targets>

            <target name="">

              <verify-os>

                <xsl:text disable-output-escaping="no">false</xsl:text>

              </verify-os>

              <host>

                <xsl:text disable-output-escaping="no">${host name}</xsl:text>

              </host>

              <xsl:if test="string-length('${port}')=0">

                <port>

                  <xsl:text>22</xsl:text>

                </port>

              </xsl:if>

              <xsl:if test="string-length('${port}')!=0">

                <port>

                  <xsl:text>${port}</xsl:text>

                </port>

              </xsl:if>

              <userName>${user name}</userName>

              <xsl:if test="string-length('${private key file}')!=0">

                <private-key-file>

                  <xsl:text disable-output-escaping="no">${private key file}</xsl:text>

                </private-key-file>

              </xsl:if>

              <xsl:if test="string-length('${private key data}')!=0">

                <private-key-data>

                  <xsl:text disable-output-escaping="no">${private key data}</xsl:text>

                </private-key-data>

              </xsl:if>

              <xsl:if test="string-length('${pass phrase}')!=0">

                <pass-phrase>

                  <xsl:choose>

                    <xsl:when test="'${pass phrase encryption type}'='base64'">

                      <xsl:attribute name="encryption-type">

                        <xsl:text disable-output-escaping="no">${pass phrase encryption type}</xsl:text>

                      </xsl:attribute>

                      <xsl:text disable-output-escaping="no">$[pass phrase]</xsl:text>

                    </xsl:when>

                    <xsl:when test="'${pass phrase encryption type}'='plain'">

                      <xsl:attribute name="encryption-type">

                        <xsl:text disable-output-escaping="no">${pass phrase encryption type}</xsl:text>

                      </xsl:attribute>

                      <xsl:text disable-output-escaping="no">$[pass phrase]</xsl:text>

                    </xsl:when>

                    <xsl:otherwise>${pass phrase}</xsl:otherwise>

                  </xsl:choose>

                </pass-phrase>

              </xsl:if>

              <password>

                <xsl:choose>

                  <xsl:when test="'${password encryption type}'='base64'">

                    <xsl:attribute name="encryption-type">

                      <xsl:text disable-output-escaping="no">${password encryption type}</xsl:text>

                    </xsl:attribute>

                    <xsl:text disable-output-escaping="no">$[password]</xsl:text>

                  </xsl:when>

                  <xsl:when test="'${password encryption type}'='plain'">

                    <xsl:attribute name="encryption-type">

                      <xsl:text disable-output-escaping="no">${password encryption type}</xsl:text>

                    </xsl:attribute>

                    <xsl:text disable-output-escaping="no">$[password]</xsl:text>

                  </xsl:when>

                  <xsl:otherwise>${password}</xsl:otherwise>

                </xsl:choose>

              </password>

              <xsl:if test="string-length('${command timeout}')!=0">

                <timeout-secs>

                  <xsl:text disable-output-escaping="no">${command timeout}</xsl:text>

                </timeout-secs>

              </xsl:if>

              <xsl:if test="string-length('${connection name}')!=0">

                <connection>

                  <name>

                    <xsl:text disable-output-escaping="no">${connection name}</xsl:text>

                  </name>

                  <xsl:if test="string-length('${terminate connection}')!=0">

                    <terminate-on-exit>

                      <xsl:text disable-output-escaping="no">${terminate connection}</xsl:text>

                    </terminate-on-exit>

                  </xsl:if>

                </connection>

              </xsl:if>

              <xsl:if test="string-length('${known hosts config}')!=0">

                <known-hosts-config>

                  <xsl:text disable-output-escaping="no">${known hosts config}</xsl:text>

                </known-hosts-config>

              </xsl:if>

              <xsl:if test="string-length('${allow unknown hosts}')!=0">

                <xsl:choose>

                  <xsl:when test="'${allow unknown hosts}'='true'">

                    <allow-unknown-hosts>

                      <xsl:text>${allow unknown hosts}</xsl:text>

                    </allow-unknown-hosts>

                  </xsl:when>

                  <xsl:when test="'${allow unknown hosts}'='false'">

                    <allow-unknown-hosts>

                      <xsl:text>${allow unknown hosts}</xsl:text>

                    </allow-unknown-hosts>

                  </xsl:when>

                  <xsl:otherwise>

                    <allow-unknown-hosts>

                      <xsl:text>false</xsl:text>

                    </allow-unknown-hosts>

                  </xsl:otherwise>

                </xsl:choose>

              </xsl:if>

              <use-shell-mode>

                <xsl:text disable-output-escaping="no">$[use shell mode]</xsl:text>

              </use-shell-mode>

              <xsl:if test="string-length('${preferred pk algorithm}')!=0">

                <xsl:choose>

                  <xsl:when test="'${preferred pk algorithm}'='ssh-dss'">

                    <preferred-pk-algorithm>

                      <xsl:text>${preferred pk algorithm}</xsl:text>

                    </preferred-pk-algorithm>

                  </xsl:when>

                  <xsl:when test="'${preferred pk algorithm}'='ssh-rsa'">

                    <preferred-pk-algorithm>

                      <xsl:text>${preferred pk algorithm}</xsl:text>

                    </preferred-pk-algorithm>

                  </xsl:when>

                  <xsl:otherwise>

                    <preferred-pk-algorithm>

                      <xsl:text>ssh-rsa</xsl:text>

                    </preferred-pk-algorithm>

                  </xsl:otherwise>

                </xsl:choose>

              </xsl:if>

              <prompt>${prompt}</prompt>

              <xsl:if test="string-length('$[charSet]')&gt;0">

                <character-set>

                  <xsl:text disable-output-escaping="no">${charSet}</xsl:text>

                </character-set>

              </xsl:if>

              <xsl:if test="string-length('${establish connection timeout}')!=0">

                <establish-connection-timeout-secs>

                  <xsl:text disable-output-escaping="no">${establish connection timeout}</xsl:text>

                </establish-connection-timeout-secs>

              </xsl:if>

            </target>

          </targets>

        </xsl:if>

        <xsl:if test="string-length(.)&gt;0">

          <commands>

            <command>

              <xsl:choose>

                <xsl:when test="'${command encryption type}'='base64'">

                  <xsl:attribute name="encryption-type">

                    <xsl:text disable-output-escaping="no">${command encryption type}</xsl:text>

                  </xsl:attribute>

                  <xsl:attribute name="timeout-secs">

                    <xsl:if test="string-length('${command timeout}')=0">

                      <xsl:text disable-output-escaping="no">60</xsl:text>

                    </xsl:if>

                    <xsl:text disable-output-escaping="no">${command timeout}</xsl:text>

                  </xsl:attribute>

                </xsl:when>

                <xsl:when test="'${command encryption type}'='plain'">

                  <xsl:attribute name="encryption-type">

                    <xsl:text disable-output-escaping="no">${command encryption type}</xsl:text>

                  </xsl:attribute>

                  <xsl:attribute name="timeout-secs">

                    <xsl:if test="string-length('${command timeout}')=0">

                      <xsl:text disable-output-escaping="no">60</xsl:text>

                    </xsl:if>

                    <xsl:text disable-output-escaping="no">${command timeout}</xsl:text>

                  </xsl:attribute>

                </xsl:when>

                <xsl:otherwise>

                  <xsl:attribute name="encryption-type">

                    <xsl:text disable-output-escaping="no">plain</xsl:text>

                  </xsl:attribute>

                  <xsl:attribute name="timeout-secs">

                    <xsl:if test="string-length('${command timeout}')=0">

                      <xsl:text disable-output-escaping="no">60</xsl:text>

                    </xsl:if>

                    <xsl:text disable-output-escaping="no">${command timeout}</xsl:text>

                  </xsl:attribute>

                </xsl:otherwise>

              </xsl:choose>

              <xsl:value-of select="." disable-output-escaping="no" />

            </command>

          </commands>

        </xsl:if>

        <xsl:if test="string-length(&quot;//commands/command&quot;) &gt; 0 ">${commands}</xsl:if>

      </ssh-request>

    </request-data>

  </xsl:template>

</xsl:stylesheet>

 

This allows the generated request to include the required element in the "<targets>" section, which eliminated the behaviour I was seeing and allowed the commands to execute as required.  I could then parse the output and format a tidy email to send to the Remedy Team showing the memory consumption, which I would schedule to happen daily.

 

Working request:

 

<request-data>

  <ssh-request>

    <prompts>

      <prompt name="cmd">: </prompt>

      <prompt name="Username">Username: </prompt>

      <prompt name="Password">Password: </prompt>

      <prompt name="Bash">$</prompt>

      <prompt name="bash-3.2">bash-3.2$</prompt>

    </prompts>

    <targets>

      <target name="">

        <verify-os>false</verify-os>

        <host>xxxx.arsserver.host.com</host>

        <port>22</port>

        <userName>remedy</userName>

        <password>{dailypassword}</password>

        <timeout-secs>60</timeout-secs>

        <allow-unknown-hosts>true</allow-unknown-hosts>

        <use-shell-mode />

        <prompt>Username:</prompt>

        <establish-connection-timeout-secs>60</establish-connection-timeout-secs>

      </target>

    </targets>

    <commands>

      <command verify-os="false" prompt="Password" ignore-exit-code="true"><![CDATA[{username}]]></command>

      <command verify-os="false" prompt="Bash" ignore-exit-code="true">

        <EncryptedData xmlns="http://www.w3.org/2001/04/xmlenc#" Type="http://www.w3.org/2001/04/xmlenc#Content">

          <CipherData>

            <CipherValue>{encrypted password}</CipherValue>

          </CipherData>

        </EncryptedData>

      </command>

      <command verify-os="false" prompt="Bash"><![CDATA[PID=$(/usr/ucb/ps -auxww | egrep tomcat6 | egrep -v egrep | awk '{print $2}') ; ps -p $PID -o pid,vsz | grep $PID | awk '{print $2/1024}' ; unset PID]]></command>

    </commands>

  </ssh-request>

</request-data>

 

Although this took a couple of days to diagnose, in between all the other things I was doing, it means I won't forget this in the future when I see similar behaviour.

 

Some points of interest for anyone wanting to use the SSH Adapter and understand how to configure it to work as they expect.  [This is a consolidation of various discussions about the SSH Adapter.]

 

  1. The "<prompt>" element in the "<targets>" section is required to run multiple commands in one session without terminating after each command.  If it is not included, the SSH Adapter opens a separate session for each command (e.g. 3 commands, 3 separate sessions).  If the "<prompts>" section is not used in the adapter request, the prompt defined in the "<targets>" section is used for all commands.
  2. The "<verify-os>" element in the "<targets>" section is required on certain types of device to stop additional commands being issued by the adapter's attempt to verify the OS.  With the adapter log level set to "Debug", you will see the associated command attempting to execute.  To eliminate it, add the element to the correct section of the adapter request.
  3. The "prompt" attribute on the command to be executed (interchangeable with the "expect" attribute) is the prompt you expect to receive once the command has executed and completed, not the prompt you expect before the command is run.
  4. If you have the "<prompts>" section defined, you can reference a named prompt directly in the <command>.  This adds value because you can give your prompts recognisable, descriptive names.

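The points above can be illustrated with a minimal request sketch. The host, credentials, and commands here are hypothetical placeholders, not values from the case above:

```xml
<!-- Minimal sketch only; host, credentials, and commands are hypothetical placeholders. -->
<ssh-request>
  <prompts>
    <prompt name="Bash">$</prompt>
  </prompts>
  <targets>
    <target name="">
      <verify-os>false</verify-os>   <!-- point 2: suppress the adapter's OS-verification commands -->
      <host>example.host.com</host>
      <port>22</port>
      <userName>user</userName>
      <password>secret</password>
      <prompt>$</prompt>             <!-- point 1: required so all commands share one session -->
    </target>
  </targets>
  <commands>
    <!-- points 3 and 4: each command names the prompt expected AFTER it completes -->
    <command prompt="Bash">cd /tmp</command>
    <command prompt="Bash">ls -l</command>
    <command prompt="Bash">uptime</command>
  </commands>
</ssh-request>
```

Without the target-level "<prompt>" element above, the three commands would run in three separate SSH sessions, so state such as the working directory set by the first command would be lost.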
 

Hopefully, if you encounter similar behaviour, you can use this to narrow down the issue and correct it.

 

Enjoy and good luck process building using the SSH Adapter.

 

Carl

Share This:

We've got a new version of BAO Content available today: v20.14.02, which can be downloaded from the normal BMC Electronic Product Download (EPD) site. http://www.bmc.com/support/downloads-patches/BMC-Support-Product-Downloads-and-Patches.html

 

 

Highlights of the release:


New Adapters

 

  • DNS adapter: The DNS adapter allows you to create an A record or a PTR record for a DNS server
  • Microsoft Hyper-V Server 2012 adapter: Invokes requests to the Hyper-V Server 2012 R2 by executing PowerShell commands.

 

Adapter Updates

 

  • Infoblox adapter: Ability to specify the working directory for Infoblox NIOS adapter
  • VMware Adapter: Support for retrieving all or limited virtual machine properties in a cluster
  • Powershell Adapter: Support for Windows PowerShell 3.0 and 4.0
  • FTP Adapter: Support for SITE command in FTP adapter
  • Windows Command and PowerShell Adapters: xCmd utility now supports command timeout
  • HTTP Adapter: HTTP adapter request now supports timeout
  • New Tokenize Large String process to handle large input

 

New Modules

 

  • DNS Module: A new DNS Integration module contains workflows that enable you to perform basic operations while handling DNS servers
  • IPAM Module: BMC Cloud Lifecycle Management IP Address Management module for VitalQIP now included with BMC Atrium Orchestrator Content 20.14.02

 

Runbook Updates

 

Continuous Compliance for Servers

 

Part of BMC’s Intelligent Compliance use case, the BMC Continuous Compliance for Servers run book automates the integration of BladeLogic Server Automation monitoring, auditing, compliance, and remediation processes with IT management systems such as BMC Remedy ITSM. The 20.14.02 release simplifies the configuration and customization of the solution by replacing the previous module (Closed_Loop_Server-SA-Compliance) with two new modules:

  • Closed_Loop_Compliance-SA-Servers
  • Closed_Loop_Compliance_ITSM_Integration

 

This change effectively breaks out the BladeLogic Server Automation workflows from the ITSM workflows, enabling them to be installed and tested independently of each other. It should also make it simpler to configure the solution and customize it if required.

 

Adapter Version Support updates

 

  • Remedy ARS v8.1.02
  • Remedy CMDB v8.1.02
  • BMC Remedyforce Summer '14
  • BladeLogic Server Automation v8.6
  • BladeLogic Network Automation v8.6
  • BladeLogic Database Automation v8.6
  • VMware vSphere 5.5
  • VMware vCloud Director v5.5
  • BMC ProactiveNet Performance Manager 9.5

 

For full details on all the changes, please visit the Docs pages here: https://docs.bmc.com/docs/display/public/baoc201402/20.14.02+enhancements

 

Enjoy!

Share This:

Every service desk owner seems to have the same set of challenges.

  • How can I reduce the ever growing number of incidents and change requests I have to deal with?
  • How can I provide better value and service to the business?
  • What can I do to improve my customer satisfaction scores?

 

The answers to these questions often lie in automation. If there are commonly recurring incidents or requests that are currently handled manually, why not automate them? Automation enables requests to be handled instantaneously, greatly improving response times. And because a machine is doing the legwork, things are done the right way every time, removing the potential for human mistakes.

 

Everyone wants self-service

What about the handling of common requests such as password resets or providing access to new applications? How long do these requests take to manage and deliver manually? Hours? Days? Orchestration can enable organizations to move from reactive, manual handling of requests to a fully automated delivery system exposed to end users through a catalog of requestable services.

Most service desks now provide a service catalog capability, but requests made are still delivered manually. Why not automate the end to end delivery of a request through orchestration? Once again, speed and accuracy are key here as well as reducing the costs of a manual fulfillment process.

 

I was recently talking with an organization about their project in this space. The service desk was suffering from a bad reputation. It was taking them up to 5 days to reset a user’s password and up to 3 weeks to create a new user’s account. Calls for desktop support to install new software were taking weeks to fulfill. What’s the point of offering these services if it takes you so long to deliver them? You’re just going to make your customers mad, right?

 

The service desk made the decision to change. It didn’t have a whole lot of money to throw at the problem; it just had to work smarter and more efficiently, and this is where orchestration came in. After analyzing the types of tickets and requests that were hitting the service desk, they put automation in place to automatically handle the top items (like password resets), reducing the number of tickets handled manually by 30%. This alone saved them about $1m through cost avoidance. Most importantly, though, the service desk satisfaction ratings rose significantly. Requests that previously took days or weeks were now fulfilled in minutes or hours.

 

Onwards and upwards
Once you get going with service desk automation it’s difficult to stop. Analyze potential use cases and pick the ones that will have the best business impact for the smallest amount of effort. Deliver the use case, show success, and move on to the next one. And don’t forget to show the business all the good things you’re doing along the way. Reports showing the “before automation” and “after automation” picture are a great way to demonstrate the value you’re providing. In fact, our own internal BMC IT team are masters at this and publish reports for all to see on the corporate intranet (see below).

 

BSM@BMC.png

 

Now if this doesn’t get people excited, I don’t know what will. Why not give your organization an early Christmas present and see how orchestration can help your service desk team become superstars too!

 

Happy automating!

Share This:

As requested by a few people, here's an index of the articles. I'll update this as we go.

 

Writing BAO Adapters - Part 1: Calculus Not Required

Writing BAO Adapters - Part 2: Lock and Load

Writing BAO Adapters - Part 3: Acting Up (again)

Writing BAO Adapters - Part 4: Adding Flesh to the Actor's Bones

Writing BAO Adapters - Part 5: Big Brother is Monitoring You

Writing BAO Adapters - Part 6: Put Some Meat on Your Monitor – coming soon!

Writing BAO Adapters - Part 7: The Sincerest Form of Flattery – coming slightly less soon!

Writing BAO Adapters - Part 8: Sprinkle Some Awesome on Your Adapter – this one might be a while...

Writing BAO Adapters - Part 9: Improving the Development Process – it's going to be ages before this one is written.

Share This:

Writing BAO Adapters - Index

 

Introduction

In Part 4 we put together a working, though simplistic, actor adapter. Now we move on to the monitor adapter. This adapter type is effectively the reverse of the actor; where the actor allows us to effect change in other systems, the monitor allows change in other systems to affect us.

 

Special thanks to Ranganath Samudrala for the simple monitor adapter example he sent me some time back, which really helped my understanding of how monitor adapters are constructed.

 

Our example monitor adapter

As we did with the actor adapter before, we’ll first implement a very basic skeleton that loads on the grid and generates a simple event. It turns out this is slightly trickier with the monitor adapter since it must have an active execution loop running in order for the adapter to stay running on the grid.

 

In the next article the adapter will be extended to send events based on a real external change, but for now we'll simply send an event to the grid based on a timer; in other words, generate an event every x milliseconds.

 

Configuration Parameters

As with the actor adapters, each named monitor adapter on the grid has an XML configuration document that allows the user to set global parameters for that adapter. Unlike actors, however, there are no adapter requests in which these parameters can be overridden.

 

We’ll include a single configuration parameter in our basic adapter: sleep-delay. This will be the number of milliseconds between events and will default to 5000, which fires off an event every five seconds.

 

WARNING: There is no checking for ‘sensible’ values here, so if you set this to a value of 1 millisecond your adapter is going to flood the grid with events. This is not likely to make your grid peers very happy...

 

The only required method for the monitor adapter configuration (in addition to the constructor) is validate(). This method is called when BAO wishes to check that the configuration is valid; if you find any configuration parameters that are not suitable, you throw an InvalidConfigurationException.

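To keep the article's adapter minimal, its validate() deliberately performs no extra checks (hence the warning above). As a hypothetical illustration of the kind of guard a stricter configuration class could apply before accepting a sleep-delay, here is a standalone sketch; the 1000 ms lower bound and the helper name are my own inventions, and in the real adapter you would throw InvalidConfigurationException rather than IllegalArgumentException:

```java
/**
 * Standalone sketch (not BAO code) of a "sensible value" check for the
 * sleep-delay parameter. MIN_SLEEP_DELAY_MS is a hypothetical lower bound
 * chosen to avoid flooding the grid with events.
 */
public class SleepDelayCheck {
    static final long MIN_SLEEP_DELAY_MS = 1000;

    /** Accepts null (the adapter falls back to its default) but rejects tiny values. */
    static void validateSleepDelay(Long sleepDelay) {
        if (sleepDelay != null && sleepDelay < MIN_SLEEP_DELAY_MS) {
            throw new IllegalArgumentException(
                "sleep-delay must be at least " + MIN_SLEEP_DELAY_MS + " ms");
        }
    }
}
```

Wiring an equivalent check into the configuration class's validate() override (after calling super.validate()) would surface the problem when the adapter configuration is saved, rather than at run time.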
 

The Monitor Adapter Life-cycle

The life-cycle of the monitor adapter is very much the same as that of the actor adapter if you look at it in terms of its operational states (stopped, started, and so on). There is a significant implementation difference, though; where an actor adapter only executes when it is passed an adapter request, an enabled monitor adapter is running all the time. It may spend much of this time in an idle loop, or waiting for an inbound socket connection, or polling a file.

 

Here's an extended version of the adapter life-cycle diagram seen in an earlier article:

 

Adapter States - extended.png

 

The official development documentation for AO doesn't make clear the purpose of all of the adapter states, so a little bit of interpretation and invention is needed to establish how best to use them. The two that you must use are RUNNING and STOPPED. The others aren't critical but let's call it good practice to use them. I would propose:

 

INITIALIZING: Set this as early as possible in your initialize() method.

STARTING: Set this at the end of your initialize() method.

RUNNING: Set this at the start of your run() method.

STOPPING: Set this at the start of your shutdown() method.

STOPPED: Set this at the end of your shutdown() method.

FAULT: Generally it seems better to generate an appropriate exception to indicate a fault (e.g. InvalidConfigurationException), but there are certainly cases where we might wish to move the adapter to a fault state that aren't covered by the standard adapter exception. An example might be that the target system has gone offline.

PAUSED: Ignore this one for now; at the moment I can't see that pause functionality even exists as part of the adapter definition, so this is one to come back to in a future article.

 

Event De-duplication

The grid performs a very important function in relation to monitor adapters, and that is de-duplication of events. This is mainly useful when you have multiple peers running the same adapter for resilience, and those adapters are likely to generate the same event in response to some single external change. The grid peers generate a hash of each received event and store this in a queue. After a short time (2 minutes by default), each hash expires from the queue and is removed. Every incoming event is checked against this queue, and if its hash matches an existing entry then the event is discarded.

 

This is why our “dummy” event includes a counter; otherwise, the majority of our events would be discarded as identical to the first one recorded on the queue. After two minutes, that first entry expires and another identical event would be allowed through.

 

So be aware of de-duplication and how it can work both for you and against you. If you are missing events, chances are you’re sending multiple identical events within two minutes of each other. If you are getting more events than expected, you may have introduced some unintended “uniqueness” to your events that is preventing them from being de-duplicated.

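BAO's grid-side implementation isn't public, but the mechanism described above can be sketched in a few lines: remember a hash of each event for a fixed window, and drop any repeat seen within that window. This is a hypothetical, standalone illustration (class name, window constant, and the use of String.hashCode() are all my own choices):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Standalone sketch (not BAO code) of hash-based event de-duplication:
 * each event's hash is remembered for a fixed window, and identical
 * events arriving within that window are discarded.
 */
public class EventDeduplicator {
    private static final long WINDOW_MS = 2 * 60 * 1000; // the 2-minute default described above
    private final Map<Integer, Long> seen = new HashMap<>(); // event hash -> time first seen

    /** Returns true if the event should be processed, false if it is a duplicate. */
    public synchronized boolean accept(String eventXml, long nowMs) {
        // Expire hashes older than the window
        seen.values().removeIf(t -> nowMs - t > WINDOW_MS);
        int hash = eventXml.hashCode();
        if (seen.containsKey(hash)) {
            return false; // identical event within the window: discard
        }
        seen.put(hash, nowMs);
        return true;
    }
}
```

Note how the loop counter in our dummy event plays out here: two events that differ only by the counter produce different hashes and both get through, while two byte-identical events inside the window do not.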
 

Gotta Have Rules

All events your adapter sends to the adapter manager are then pushed to the Activity Processor for rules processing. All rules present in active modules are evaluated and matching rules result in the execution of the associated workflow, with the event passed in the context item “inputevent”.

 

Note that it is entirely possible for multiple rules to match the event, and hence multiple workflows will fire based on that event.

 

Required Methods

There are a few mandatory methods for monitor adapters:

public void run()

public void shutdown()

 

The run() method is the “main loop” of the adapter that listens, polls, or performs whatever our monitoring action happens to be. If you need an adapter that monitors a tiger by poking it with a stick, this is the method in which to implement that.

 

As might be expected, shutdown() should contain any logic required to disconnect from external systems or otherwise clean up as the adapter is being stopped.

 

The Configuration Class

The basic configuration class is not so different from the one used with the actor adapter:

 

package com.example.bao.adapter.timedmonitor.Configuration;

import com.example.bao.adapter.timedmonitor.TimedMonitorAdapter;
import com.realops.common.configuration.InvalidConfigurationException;
import com.realops.foundation.adapterframework.configuration.AdapterConfigurationException;
import com.realops.foundation.adapterframework.configuration.BaseAdapterConfiguration;

import java.util.Hashtable;
import java.util.Set;

/**
* This is the configuration class for the Timed Monitor adapter (see {@link TimedMonitorAdapter}).
*
 * @author      Gordon Mckeown <gordon_mckeown@bmc.com>
* @version     1.0
*
*/
public class TimedMonitorAdapterConfig extends BaseAdapterConfiguration {
    private static final String CONFIG_SLEEP_DELAY = "sleep-delay";
    private static final long DEFAULT_SLEEP_DELAY = 5000;

    /**
     * Class constructor taking the ID of the adapter being instantiated.
     *
     * @param adapterId     String uniquely identifying this instance of the adapter
     */
    public TimedMonitorAdapterConfig(String adapterId) {
        super(adapterId);
        addRequiredKeys();
        addValidKeys();
    }

    /**
     * Class constructor taking the ID of the adapter and a hashtable of default options
     *
     * @param id            String uniquely identifying this instance of the adapter
     * @param defaults
     */
    public TimedMonitorAdapterConfig(String id, Hashtable defaults) {
        super(id, defaults);
        addRequiredKeys();
        addValidKeys();
    }

    /**
     * Class constructor taking the ID of the adapter, a hash-table of default options, and sets defining the valid and required keys.
     *
     * @param id            String uniquely identifying this instance of the adapter
     * @param defaults
     * @param validKeys
     * @param requiredKeys
     * @throws AdapterConfigurationException
     */
    public TimedMonitorAdapterConfig(String id, Hashtable defaults, Set validKeys, Set requiredKeys)
            throws AdapterConfigurationException {
        super(id, defaults, validKeys, requiredKeys);
        addRequiredKeys();
        addValidKeys();
    }

    /**
     * Method to advise the adapter manager which configuration entries are mandatory
     */
    private void addRequiredKeys() {
        // Do nothing
    }

    /**
     * Method to advise the adapter manager which configuration entries are permitted.
     */
    private void addValidKeys() {
        this.addValidKey(CONFIG_SLEEP_DELAY);
    }

    /**
     * Method to perform validation checks on the supplied adapter configuration.
     *
     * @throws InvalidConfigurationException
     */
    public void validate() throws InvalidConfigurationException {
        super.validate();
    }

    /**
     * Getter method for the configured delay between events
     * @return  the chosen sleep delay in milliseconds
     */
    public long getSleepDelay() {
        Long sleepDelay = super.getLongProperty(CONFIG_SLEEP_DELAY);
        if (sleepDelay == null || sleepDelay.longValue() == 0) {
            sleepDelay = DEFAULT_SLEEP_DELAY;
        }
        return sleepDelay;
    }
}

 

The Monitor Adapter Class

Here’s the bit you've been waiting for; our skeleton monitor adapter:

 

package com.example.bao.adapter.timedmonitor;

import com.example.bao.adapter.timedmonitor.Configuration.TimedMonitorAdapterConfig;
import com.realops.common.enumeration.StateEnum;
import com.realops.common.xml.XML;
import com.realops.foundation.adapterframework.AbstractMonitorAdapter;
import com.realops.foundation.adapterframework.AdapterException;
import com.realops.foundation.adapterframework.AdapterManager;

/**
* This class defines a minimal framework to get a monitor adapter running on an AO grid.
*
 * @author      Gordon Mckeown <gordon_mckeown@bmc.com>
* @version     1.0
*
*/
public class TimedMonitorAdapter extends AbstractMonitorAdapter {
    private Long sleepDelay;
    private TimedMonitorAdapterConfig adapterConfig;
    private int executionCounter = 0;

    /**
     * Default constructor
     */
    public TimedMonitorAdapter() {
        super();
    }

    /**
     * Initialisation method called when adapter is started by Adapter Manager
     *
     * @param adapterManager
     */
    @Override
    @SuppressWarnings("deprecation")
    public void initialize(AdapterManager adapterManager) {
        super.initialize(adapterManager);
        setState(StateEnum.INITIALIZING);
        adapterConfig = (TimedMonitorAdapterConfig) getConfiguration();
        sleepDelay = adapterConfig.getSleepDelay();
        setState(StateEnum.STARTING);
    }

    /**
     * Returns the class name of our configuration class.
     *
     * @return  Name of configuration class
     */
    @Override
    public String getAdapterConfigurationClassName() {
        return TimedMonitorAdapterConfig.class.getName();
    }

    /**
     * Shutdown method called when adapter is stopped by Adapter Manager
     *
     * @throws AdapterException
     */
    @Override
    @SuppressWarnings("deprecation")
    public void shutdown() throws AdapterException {
        setState(StateEnum.STOPPING);
        try {
            Thread.sleep(sleepDelay);   // Allow main loop time to finish
            setState(StateEnum.STOPPED);
        } catch (InterruptedException e) {
            setState(StateEnum.FAULT);
        }
    }

    /**
     * Main adapter logic loop that runs whilst adapter is in "running" state.
     */
    @Override
    @SuppressWarnings("deprecation")
    public void run() {
        setState(StateEnum.RUNNING);

        while(getState() == StateEnum.RUNNING) {
            executionCounter++;
            XML xmlEvent = new XML("dummy-event");
            xmlEvent.setText(getName());
            this.sendEvent("Dummy event, loop number: " + executionCounter, xmlEvent);
            try {
                Thread.sleep(sleepDelay);
            } catch (InterruptedException e) {
                setState(StateEnum.FAULT);
            }
        }
    }
}

 

A Test Class

Because of the way monitor adapters work – their ‘always running’ nature – our test class needs to be constructed slightly differently. Also, we need a way to receive the event being sent by the adapter, and this involves creating a “mock” adapter manager. Creating an adapter manager object requires an additional library from the tomcat/lib folder on your CDP: servlet-api.jar.

 

Without this library in your class path, you will see “class not found” errors relating to javax/servlet/Filter when trying to test. I’ve simply added it to the library folder containing the other hundred-or-so BAO libs.

 

Our example test class isn't massively functional; it simply executes the monitor adapter in a thread for 20 seconds and then shuts it down. No particular checks are performed to ensure the correct operation of the adapter itself. All that said, here’s the test class:

 

package com.example.bao.adapter.timedmonitor;

import com.example.bao.adapter.timedmonitor.Configuration.TimedMonitorAdapterConfig;
import com.realops.common.configuration.InvalidConfigurationException;
import com.realops.common.enumeration.StateEnum;
import com.realops.common.util.ResourceLocator;
import com.realops.common.xml.InvalidXMLFormatException;
import com.realops.common.xml.XML;
import com.realops.foundation.adapterframework.AdapterManager;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.io.InputStream;
import static org.junit.Assert.*;

/**
* Test class for the Timed Monitor Adapter
*
 * @author  Gordon Mckeown <gordon_mckeown@bmc.com>
* @version 1.0
*/
public class TimedMonitorAdapterTest {
    private TimedMonitorAdapterConfig config;
    private TimedMonitorAdapter adapter;
    private AdapterManager am;

    private static final String CONFIG_FILE_ADAPTER = "adapter-config.xml";
    private static final String ADAPTER_ID = "TimedMonitorID";
    private static final String ADAPTER_NAME = "TimedMonitorAdapter";
    private static final String PEER_NAME = "DevStudio";

    @Before
    public void setUp() throws Exception {
        try {
            config = new TimedMonitorAdapterConfig("TestTimedMonitorAdapter");
            XML configXml = loadResourceXml(CONFIG_FILE_ADAPTER);
            config.fromXML(configXml);

            adapter = new TimedMonitorAdapter();
            adapter.setPeerName(PEER_NAME);
            adapter.createConfiguration(ADAPTER_ID);
            adapter.updateConfiguration(config);

            adapter.setName(ADAPTER_NAME);
            am = new MockAdapterManager();
            adapter.initialize(am);

        } catch (InvalidXMLFormatException e) {
            fail("Configuration file XML format error");
        } catch (InvalidConfigurationException e) {
            fail("Invalid entries in configuration file");
        }
    }

    @After
    public void tearDown() throws Exception {
        adapter = null;
    }

    @Test
    @SuppressWarnings("deprecation")
    public void testTimedMonitorAdapter() {
        Thread t = new Thread(adapter, "TestTimedMonitorAdapter");
        t.start();

        try {
            Thread.sleep(20000);
        } catch (Exception e) {
            fail("An exception occurred during the thread sleep.");
        }

        try {
            adapter.shutdown();
            t.join(6000);
        } catch (Exception e) {
            fail("An exception occurred during the thread join");
        } finally {
            assertTrue("Failed to stop adapter!", adapter.getState() == StateEnum.STOPPED);
        }
    }

    /**
     * Method that loads XML document from a resource file
     *
     * @param fileName  File from which to load XML document
     * @return          XML object representing the loaded file
     */
    private XML loadResourceXml(String fileName) {
        XML xmlData = null;
        try {
            InputStream stream = ResourceLocator.getSystemResourceAsInputStream(fileName);
            assertNotNull("Unable to load file resource " + fileName, stream);
            xmlData = XML.read(stream);
        } catch (Exception e) {
            e.printStackTrace();
            fail(e.getMessage());
        }
        return xmlData;
    }
}

 

And the MockAdapterManager class definition (which can live in the same directory as your test class):

 

package com.example.bao.adapter.timedmonitor;

import com.realops.foundation.adapterframework.AdapterEvent;
import com.realops.foundation.adapterframework.AdapterManager;

/**
* A mock adapter manager that simply outputs the event details as a string
*
 * @author      Gordon Mckeown <gordon_mckeown@bmc.com>
* @version     1.0
*
*/
public class MockAdapterManager extends AdapterManager {
    /**
     * Default constructor - performs no action
     */
    public MockAdapterManager() {
        // Do nothing
    }

    /**
     * Handle event sent from monitor adapter
     *
     * @param event AdapterEvent containing event information from adapter
     */

    @Override
    public void sendEvent(AdapterEvent event) {
        System.out.println(event.toXML().toCompactString());
    }


}

 

Finally, the very important adapter configuration XML file (adapter-config.xml) that must live in your test resources folder:

 

<config>
    <grid>
        <grid>
            <adapters>
                <adapter id="TimedMonitorID">
                    <name>TimedMonitorAdapter</name>
                    <class-name>com.example.bao.adapter.timedmonitor.TimedMonitorAdapter</class-name>
                    <config>
                        <sleep-delay>5000</sleep-delay>
                    </config>
                </adapter>
            </adapters>
        </grid>
    </grid>
</config>



 

Supporting Module

Although not essential at this stage of our adapter development, let’s put together a simple workflow to do something with the events being generated. This workflow will just dump the input event into the processes.log file; we could achieve pretty much the same thing by playing with the log levels on our peers (though that would log into grid.log – take a look at "Seeing the Events" below).

 

So create your workflow (I've called mine HandleTimedEvents), set an input parameter of “inputevent” and a single Assign activity that logs the content of inputevent like so:

 

2014-12-03 13_23_20-192.168.0.158 - Remote Desktop Connection.png

 

Ensuring you left the "Expose Process: In Rules" option ticked in your process properties, create a rule within the same module and set it up like this:

 

2014-12-03 13_24_43-192.168.0.158 - Remote Desktop Connection.png

 

Note that Core_Timed_Monitor is the name of my adapter on the grid, so change this to suit if you’re going to call yours something else.

 

Uploading to the Grid

This was detailed in Part 3, so I won’t go through it in full detail again. Just ensure you generate a JAR file, and use the following class name: com.example.bao.adapter.timedmonitor.TimedMonitorAdapter


Use a type of “timed monitor adapter” if your imagination fails you, and don’t forget to upload your supporting module and activate it!

 

Seeing the Events

Increase the log level of the Activity Manager on your peers to DEBUG and you will start to see entries like this one:

 

03 Dec 2014 13:06:44,258 DEBUG DefaultAttributeManager : Thread[AMP - Activity Processor - Process Execution - 3994,5,main] set attribute with name=inputevent, value=<adapter-event><source-adapter>Core_Timed_Monitor</source-adapter><event>Dummy event, loop number: 102</event><data><dummy-event>Core_Timed_Monitor</dummy-event></data></adapter-event>, isGlobal=false, for com.realops.foundation.activityprocessor.executionstate.DefaultAttributeManager@70e7bc6b

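If you'd rather do the pretty-printing programmatically than paste the payload into an online formatter, a small helper using only the standard JDK transform APIs (no BAO libraries involved) will do the job; the sample payload below is copied from the log entry above:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class PrettyPrint {

    // Identity transform with indenting switched on: reads the compact XML
    // string and writes it back out with one element per line.
    public static String pretty(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String raw = "<adapter-event>"
                + "<source-adapter>Core_Timed_Monitor</source-adapter>"
                + "<event>Dummy event, loop number: 102</event>"
                + "<data><dummy-event>Core_Timed_Monitor</dummy-event></data>"
                + "</adapter-event>";
        System.out.println(pretty(raw));
    }
}
```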
 

One quick XML pretty print later and we can see our event in all of its glory:

 

<adapter-event>
    <source-adapter>Core_Timed_Monitor</source-adapter>
    <event>Dummy event, loop number: 102</event>
    <data>
        <dummy-event>Core_Timed_Monitor</dummy-event>
    </data>
</adapter-event>
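Rules match against the content of this event payload, so it can be handy to prototype your selection expressions before touching Grid Manager. As a sketch, the XPath paths below can be tested against the payload with nothing but the standard JDK XML APIs (the payload string is the one from the log above; `field` is just a hypothetical helper name):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class EventFields {

    // Parse the event XML and evaluate a single XPath expression against it,
    // returning the string value of the matched node.
    static String field(String xml, String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return XPathFactory.newInstance().newXPath().evaluate(path, doc);
    }

    public static void main(String[] args) throws Exception {
        String event = "<adapter-event>"
                + "<source-adapter>Core_Timed_Monitor</source-adapter>"
                + "<event>Dummy event, loop number: 102</event>"
                + "<data><dummy-event>Core_Timed_Monitor</dummy-event></data>"
                + "</adapter-event>";
        // Which adapter raised the event, and what did it say?
        System.out.println(field(event, "/adapter-event/source-adapter"));
        System.out.println(field(event, "/adapter-event/event"));
    }
}
```

Running this prints `Core_Timed_Monitor` followed by `Dummy event, loop number: 102` – the same fields your rule will be inspecting.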

 

This should help you define the rules used to trigger workflows. If your rule is working, you’ll also see the logging entries from your workflow appearing in processes.log:

 

The process started. It is triggered by a rule.
03 Dec 2014 13:06:44,316 [Current Time=Wed Dec 03 13:06:44 GMT 2014] [Process Name=:BMC_Testing_GMK:SimpleMonitor:HandleTimedEvents] [Root Job Id=5b55545cf05bfc30:-4862ff7a:149a97323ba:-7fc41-1417613209310] [Job Id=5b55545cf05bfc30:-4862ff7a:149a97323ba:-7fc41-1417613209310]
!!!! Timed monitor adapter event received !!!!
[inputevent=
<adapter-event>
  <source-adapter>Core_Timed_Monitor</source-adapter>
  <event>Dummy event, loop number: 102</event>
  <data>
    <dummy-event>Core_Timed_Monitor</dummy-event>
  </data>
</adapter-event>]

What's Next?

At this point you should have this obnoxious little monitor adapter pumping events onto your grid, and a workflow that gets called for each event. In the next part we’ll look at extending the adapter to respond to “real” external events rather than an internal timer.


The BMC Product Documentation team updates the list of BMC Atrium Orchestrator Content delivered in each BMC Atrium Orchestrator Content release, which includes the following components:

  • Application adapters
  • Base adapters
  • Operations Actions Management Modules
  • Run Books

In each documentation space, this topic is easily reached from the navigation tree.

[Screenshot: "Platform, adapters, modules, and run books" topic in the navigation tree]

For the latest version, you can view the adapters, modules, and run books at the BMC online technical documentation portal: https://docs.bmc.com/docs/display/public/baoc201401/Platform%2C+adapters%2C+modules%2C+and+run+books

 

If you feel this topic is helpful, let us know by rating the blog post or leaving comments.
