Great question, Flavio!
Surprised no one has shared their experience yet.
Note: this comment will make your post pop up in "Recent Activity", which should help experienced users notice it and weigh in.
Basically a good idea. The question for me would be how to distribute the include file to all execution servers running the scripts, which would take a similar effort to changing the project attributes.
Maybe someone from the TM ART team could let us know whether passwords in 4.2 SP1 can still be written in plain text, since from the idea it looks like this was fixed in 4.2.
To be honest, I don't have a good idea right now of how to accomplish what you want to do.
I wasn't worried about the distribution, but that would be very useful.
My thought is to update one single file (the include one), recompile, send the projects back to TM ART Central, and replace the packages.
But that would indeed be a very nice thing to have...
Maybe BMC could figure something out... I'm pretty sure it would be useful for a lot of people.
Excuse me for coming into this without knowing a great deal about what it is you're doing/developing against.
But this sounds like a terrible idea. Why do you need to bypass the authentication in the first place?
The user making the Ajax call to the web server should be authenticated the same way they got to the web page in the first place. From my perspective, the users first have to log into the site (or identify themselves in some manner), which grants them some level of access; if they make an Ajax call to the server, they must also have access to that. Once the authentication has occurred, that token can be reused.
Hi Just Replying,
We are not trying to bypass the authentication process. Let me illustrate.
I record a script in Workbench logged in with my user, and my user is set up in the BrowserSetAuthentication function in my BDL code.
That comes with my username and a hash of my password, which are hardcoded into the BDL script.
The issue here is: if I update my password for some reason (forced by the system, and I can't set the new password to any of my last 10 passwords), the hash hardcoded in the BDL will no longer be valid.
So, instead of updating many projects in Workbench, I'm looking for a way to update a single .bdh file that is used by all the scripts.
Some cons about this:
1 - With the main approach, all the projects would have to be recompiled and uploaded again to TM ART Central, and in TM ART you would have to REPLACE THE PACKAGES for each monitor.
2 - We may figure out a way to distribute the file to the execution servers. That could save us from recompiling the scripts and would be much easier...
Something just occurred to me while I was typing this answer...
What about a .txt file hosted on each execution server, with the BDL code opening the .txt, reading the values, and using those values in my code...
This .txt file would be created using the Workbench, in order to produce the correct 3DES encryption.
If we knew the encryption key used by Workbench, we could create the encrypted values using websites like Online encrypt tool - Online tools.
Hey Flavio Bonacordi
I've been watching the discussion but had no time to jump in with suggestions.
You're on the right track with your most recent post.
The most scalable way to do this is to put the passwords in an external file. I'd suggest CSV format rather than text. BDL has a robust set of functions for reading and parsing CSV files so it's pretty easy.
We had a guy who did a prototype a while back that put the location, an application name, username, and password into a CSV file. Location was the IP address of the Execution Server (see the WebGetLocalAddress function). Application name was used so he could share this one file across many scripts. Username and password are obvious. The code for reading the file went into a function stored in a file in the Custom Include directory of the Workbench. TM ART will deal with distribution of the code without help.
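As a language-neutral sketch of that prototype's lookup (in Python rather than BDL, and with a hypothetical column layout and sample values), the idea looks roughly like this:

```python
import csv
import io

# Hypothetical credentials file layout, mirroring the prototype described
# above: one row per (Execution Server IP, application) pair. In BDL, the
# location would come from WebGetLocalAddress.
SAMPLE = """location,app,user,password
10.0.0.21,CRM,svc_crm,ENCRYPTEDVALUE1
10.0.0.21,Portal,svc_portal,ENCRYPTEDVALUE2
10.0.0.22,CRM,svc_crm2,ENCRYPTEDVALUE3
"""

def lookup_credentials(csv_text, location, app):
    """Return (user, password) for a given ES location and application."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["location"] == location and row["app"] == app:
            return row["user"], row["password"]
    raise KeyError(f"no credentials for {location}/{app}")

user, pw = lookup_credentials(SAMPLE, "10.0.0.21", "Portal")
print(user, pw)  # svc_portal ENCRYPTEDVALUE2
```

The app column is what lets one shared file serve many scripts, exactly as in the prototype described above.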
You don't need to know the encryption key to create an encrypted password. Create your file in two steps. The first step is to manually construct the file using plain text for everything. Then write a simple Workbench script that uses the encrypt() function to encrypt the appropriate parts and write out a new, "safe" credentials file. This Workbench project would never be sent to any Execution Servers; just run it with Try Script to create the credentials file.
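A rough illustration of that two-step flow, sketched in Python rather than BDL, with base64 standing in for the Workbench encrypt() function purely so the example runs:

```python
import base64
import csv
import io

def encrypt_stub(plain):
    # Stand-in for the Workbench encrypt() function -- base64 is used here
    # only so the example is runnable; the real script would emit the
    # product's own encrypted form.
    return base64.b64encode(plain.encode()).decode()

def make_safe_file(plain_text):
    # Step two: rewrite the plain-text credentials file with the password
    # column encrypted, leaving every other column untouched.
    reader = csv.DictReader(io.StringIO(plain_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["password"] = encrypt_stub(row["password"])
        writer.writerow(row)
    return out.getvalue()

plain = "location,app,user,password\n10.0.0.21,CRM,svc_crm,S3cret!\n"
print(make_safe_file(plain))
```

The point is the workflow, not the cipher: the plain-text file never leaves your workstation, and only the "safe" output is distributed.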
There are multiple options for distributing the credentials file.
- Put it into the Custom Data directory of each project that uses it. Workable but not convenient as you'd have to resend all your monitors whenever the credentials file changed.
- Create a "distributor" Workbench project. Code in this would copy the file from its Custom Data to a "well known" spot on the Execution Server. By "well known", I mean any directory you select that exists or can be created on every ES. When the credentials file changes, send out the "distributor" project set up to run exactly once. (Note: I'm speculating here a little bit, this isn't something I've tried)
- The best way, at least in my opinion, is to use whatever patch distribution mechanism you have for your systems. I suggest that because patch distribution systems usually have good capability for auditing and compliance. You'll want to know for sure that your updated file got to every ES and nothing happens to change it.
Is that all clear?
Can you talk more about the "Create a 'distributor' Workbench project" option?
Really, the distributor project could do all the encrypting and distribution. TM ART has various file open, read, write, and close commands available. In addition, you can start various processes from a BDL script, like cmd.exe (with arguments to copy files) or xcopy, or even write your distribution script in PowerShell and execute it separately or from within TM ART scripts. With the ability to launch external processes, the possibilities are really endless.
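For illustration, here is a minimal sketch of that copy-to-a-known-spot step, in Python rather than BDL. The target directory and file names are assumptions; on a real ES the distributor would shell out to cmd.exe or xcopy as described above:

```python
import os
import shutil
import tempfile

# Assumed "well known" target directory -- pick any path that exists
# (or can be created) on every Execution Server.
WELL_KNOWN_DIR = os.path.join(tempfile.gettempdir(), "tmart_credentials")

def distribute(credentials_file):
    """Copy the encrypted credentials file to the well-known spot.

    A BDL distributor would launch an external process instead, e.g.:
        cmd.exe /c xcopy /y credentials.csv C:\\tmart\\credentials\\
    """
    os.makedirs(WELL_KNOWN_DIR, exist_ok=True)
    dest = os.path.join(WELL_KNOWN_DIR, os.path.basename(credentials_file))
    shutil.copy2(credentials_file, dest)
    return dest

# Demo: create a dummy credentials file and "distribute" it locally.
src = os.path.join(tempfile.gettempdir(), "credentials.csv")
with open(src, "w") as f:
    f.write("location,app,user,password\n")
print(distribute(src))
```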
Why even distribute? You could store the password file on Central and just have the scripts open it via a UNC path, which is supported. I think if you used Fopen with the OPT_FILE_SHARE_READ option, locking wouldn't be a problem.
All good thoughts, Adam Wemlinger!
I hesitate to suggest a UNC path, though. TM ART Execution Servers run under the Local System account on the ES, so depending on your security setup, it can be difficult to connect to UNC resources. You'd also be introducing a dependency on a working network connection to the shared resource.
Flavio Bonacordi, my thoughts for a "distributor" project were pretty simple, but untested.
I was thinking you could include the completed, encrypted file as a "data file" in a Workbench project. The Workbench script would do something ... harmless. Upload that project to Central and send it to all your Execution Servers (note below) with a schedule to run one time. The point is not to run the script but to get it loaded onto the ES.
Note: there is one, now obvious, flaw and one potential flaw in this.
- The obvious flaw is that it fails completely if you have more than one ES per Location, as the updated project will only be distributed to a single ES per Location.
- The potential flaw is that I'm assuming that all projects on an ES share a "custom data" directory as they do on the Workbench.
So the more I think and write about this idea, the less I like it.
If you've got a software distribution and patching tool, you'd be better off shipping the encrypted file with that. Target it to all your ES machines, to a "known" directory. "Known" in the sense that you would hard code that into the function that fetches passwords.
Or go with Adam Wemlinger's suggestion in his first paragraph. If you can establish a UNC path to each of your Execution Servers, write a script (Workbench or any language) to write the encrypted file to all the Execution Servers. Again, stick the file in a "known" spot.
In some cases we have created domain service accounts to run the ES servers to help get around some of those security issues. I think if you didn't want to run the ES under a service account you could include psexec.exe as a data file (or in our case it lives on all windows servers in a known location already) and launch it with service account credentials to then copy the file locally for the ES to read.
Another option might be to distribute the encrypted password to all ES servers via the registry and read it from there. To make it lots of fun you can use powershell to extract server names from TMART web services, throw those into an array, and then remotely edit the registry on each target server in the array. Just pass the encrypted password to the script as a param and execute!
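As a sketch of that registry approach, here is some Python that only builds the remote `reg add` commands for a hypothetical server list and key path (in practice the server names would come from the TM ART web services, and a PowerShell wrapper would execute the commands):

```python
# Hypothetical server names and registry key -- use whatever key path and
# value name your scripts will actually read from.
SERVERS = ["ES-LONDON-01", "ES-NYC-01", "ES-TOKYO-01"]
KEY = r"HKLM\SOFTWARE\Monitoring\TMART"

def registry_commands(encrypted_password, servers=SERVERS, key=KEY):
    """Build one remote 'reg add' command per Execution Server.

    reg.exe accepts a \\machine\ prefix on the key path for remote edits;
    /f forces overwrite so the commands are safe to re-run.
    """
    return [
        rf'reg add "\\{srv}\{key}" /v Password /t REG_SZ '
        rf'/d "{encrypted_password}" /f'
        for srv in servers
    ]

for cmd in registry_commands("3DESVALUE=="):
    print(cmd)
```

Only the encrypted value ever travels, and the scripts on each ES read it back locally.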
Hi guys, first-time poster here, but an experienced TM ART tech.
We are running TM ART 4.2 sp1
My coworker and I have implemented the very things proposed in this posting. We rely on our own file distribution process because of the limitations of using Central or a script to distribute (you cannot copy files to all ESes in one Location, etc.).
I believe a file distribution process outside of monitors should be considered by the powers that be. It is almost essential for more complex uses of TM ART.
Also, because we use include files for the libraries we developed, there are pros and cons to having to re-export projects in order to update them. It protects running projects from changes until they are re-exported, but it would be nice to be able to globally update code without re-exporting too. (I won't get into debating source control, etc.)
We wrote code in BDF and developed a few libraries that handle what we call 'data pooling'. We use CSV files for the actual data, so they can be viewed within Workbench. Typical usage is that one 'global' CSV file contains account information and the encrypted password. We were approved by our corporate policy makers to use this encrypted password file and the actual DES encryption.
The 'global' password file is stored on each ES in a specific path.
Pointer files are used to track at which record the individual datapool file is during a script run.
(An example for this is to be able to use multiple different user accounts at the same location).
Each file can be opened as 'wrap' or 'nowrap'. Wrapping means looping through a datapool file more than once during a script run.
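A minimal sketch of that wrap/nowrap behavior, in Python rather than BDL:

```python
def datapool_reader(rows, wrap=True):
    """Yield datapool records in order.

    wrap=True loops back to the first record when the pool is exhausted
    (a script can draw records indefinitely); wrap=False stops after a
    single pass through the file.
    """
    while True:
        for row in rows:
            yield row
        if not wrap:
            return

accounts = ["user1", "user2", "user3"]
reader = datapool_reader(accounts, wrap=True)
first_five = [next(reader) for _ in range(5)]
print(first_five)  # ['user1', 'user2', 'user3', 'user1', 'user2']
```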
As Hal suggested, you can utilize multiple datapool files to organize which account is used in which location, or to get the domain an account belongs to, for authentication that requires the domain to be included in the account field.
We wrote a simple BDF to encrypt a plain-text password file.
Typical usage is a while loop that calls a function that searches based on criteria (like ES location) and matches that against entries in a datapool file. That way we can also control which servers are being monitored by a given script at a given location (we typically have load-balanced servers behind a load balancer).
Each ES also has a file that has the recorded Location name so each monitor/script knows who it is.
An example of a datapool file's header:
It is the typical workbench csv file formatting. We have a function that allows access to each column by name. So changes or additions to a datapool file don't have to break a script.
The above is from our URL checker script, each record doesn't have to use all of the columns either. The User column value is then used to access the password datapool file as the key to obtain the password.
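To illustrate that column-by-name access and the User-column join against the password datapool (a Python sketch, with entirely hypothetical file layouts and values):

```python
import csv
import io

# Hypothetical layouts for two datapool files: a URL-checker datapool and
# the global password datapool, joined on the User column.
URL_POOL = """Location,Url,User
London,https://intranet/login,alice
NYC,https://intranet/login,bob
"""
PASSWORD_POOL = """User,EncryptedPassword
alice,AAA111==
bob,BBB222==
"""

def by_column(csv_text):
    """Access records by column name, so adding columns never breaks a script."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def password_for(location):
    # Find this Location's record, then use its User value as the key
    # into the password datapool.
    row = next(r for r in by_column(URL_POOL) if r["Location"] == location)
    pw_row = next(r for r in by_column(PASSWORD_POOL)
                  if r["User"] == row["User"])
    return pw_row["EncryptedPassword"]

print(password_for("NYC"))  # BBB222==
```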
The tricky part of this was allowing multiple datapool files to be opened during the same script run.
Most of these datapool files are stored with each project. You have to be careful to make sure everything works both when run from Workbench and when run on an ES.
I'm sorry I cannot share our code exactly, because of non-disclosure agreements, but I will entertain questions about what we did if anyone is interested.