If you have BDSSA as part of your suite, you can use the Change Tracking Report, available from version 8.2 onwards.
You just need to create a template of the parts you care about, then discover and snapshot them at a regular interval (e.g., daily or weekly).
You can then check the results in the BDSSA console (Change Tracking Report).
Take a snapshot of a standard configuration.
Create an Audit Job, using this snapshot as the master and live server objects/components as the targets.
Schedule the job to run daily, or as per your requirements.
Unfortunately there are dozens or even hundreds of 'Standard Configurations' that would be compliant, and I think it would be too cumbersome to maintain them all. We have a separate project to create a compliance template for tracking certain (mostly security) parameters over time, and compliance or audit jobs work well for that. The idea here, though, is a one-time ad hoc check to make sure that all servers are like each other, rather than like some predetermined static configuration.
I hadn't thought of an audit job but doesn't that also require a predetermined configuration against which checks are made?
You can do an ad-hoc live-to-live audit: just browse to the item you want to compare on a server that has the correct setting, right-click on it, choose 'Audit', and select the other targets to compare against.
Is there a way to set up the items that would be included and then run the job against a set of servers rather than having to select the items from a given server? The 'right setting' is going to change from job run to job run.
I believe the primary intention is that when servers are released to an application team for dev, QA, and prod environments, it can be demonstrated that they are all configured in the same manner (relative to each other rather than to some preset value) at the time they are released. If I run it on dev/qa/prod servers for application_x, the right setting will be different than for application_y. It would be preferable to have a job that could just be run against, e.g., all of the dev/prod/qa database servers for any given application and show that they have the same hardware, memory, patches, software versions, etc. Having to keep selecting these configuration items every time we wanted to run the comparison would be cumbersome.
You can manually type in the asset you want to audit/snapshot when you create the job instead of selecting it from a server. But every audit will need a master server; that's where I'm struggling a little with the use case below: which server is going to be your 'master'? Audits are comparisons, so if you have different values in dev/test/prod you'd need three audit jobs, one for each environment.
This is basic, what we call "audit-based" compliance (which uses an Audit Job, by the way).
A component template can be used to select which configuration items to use (so you don't need to build them fresh every time). These are very re-usable, and can even be used across specific applications if you're looking for the same things every time.
You can either use a snapshot of a given server at a point in time as your standard (good if you have gold masters that may only be correct on a given server at a point in time), or do a "live to live" audit, which will, when it runs, read the settings on the "master" server and compare them to all of the "target" servers.
The best part is that the output of these audits can be an easy HTML report that can be emailed to your internal customers when the job runs.
I've got a "howto" video on this subject in the works, should be out in a week or two.
The idea is that the values are not different between dev/qa/prod. The intention is to validate that they are the same. It sounds like my initial impression that Audit/Compliance wouldn't work is correct. In our case "compliance" doesn't mean values matching a predetermined master in the comparison. We're not testing to see that target_server01.propertyX = master_server.propertyX, but rather that target_server01.propertyX = target_server02.propertyX = target_server03.propertyX. We don't know what that value is ahead of time and want to confirm instead that a common value is shared between the servers.
There are too many combinations of the things they want to test (OS, memory, types/versions of software installed, hardware, drivers, drive sizes) to keep a 'master' for each permutation.
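To make the consistency requirement concrete, here is a minimal sketch of the check being asked for, in Python. The per-server property values are hypothetical sample data; in practice they would come from whatever inventory or export mechanism you use.

```python
# Sketch: verify that a set of servers share identical values for every
# property, without a predetermined "correct" value. Sample data is
# hypothetical, for illustration only.

def find_inconsistencies(servers):
    """Return {property: {server: value}} for properties whose values differ."""
    # Collect every property name seen on any server.
    all_props = set()
    for props in servers.values():
        all_props.update(props)

    mismatches = {}
    for prop in sorted(all_props):
        values = {name: props.get(prop) for name, props in servers.items()}
        if len(set(values.values())) > 1:  # more than one distinct value
            mismatches[prop] = values
    return mismatches

servers = {
    "devapp01":  {"os": "win2k8r2", "memory_gb": 16, "hba_count": 2},
    "qaapp01":   {"os": "win2k8r2", "memory_gb": 16, "hba_count": 2},
    "prodapp01": {"os": "win2k8r2", "memory_gb": 8,  "hba_count": 2},
}

for prop, values in find_inconsistencies(servers).items():
    print(f"{prop}: {values}")
```

Note there is no master value anywhere: a property passes as long as all servers agree, whatever the shared value happens to be.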
So for compliance you are doing a conditional evaluation:
target_server01.propertyX = 1
target_server02.propertyX = 1
target_server01.propertyY = 2
There is no 'master' there; there are values set in the rule conditions to check against. If you know what the values should be, then you should create a component template with compliance rules.
If all the values are the same and you don't know what they should be, just that they should be the same, then pick one server as the master (it shouldn't matter, because they all should be the same) and run an audit job. If next time you want to use a different server as the master, you can do that by changing the master server in the job; you won't have to re-select anything else.
I'm not clear about the issue with too many combinations: if everything should be the same between servers, why would there be different permutations?
We are doing what you are talking about with a twist: we run the jobs in BladeLogic to compare dev/QA/production, then export the logs via blcli commands and clean up the data with a script to create a CSV that contains all differences other than the ones we are expecting (host name, total memory, CPU, etc.).
It works pretty well, but it takes time to fine-tune the script.
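The clean-up step described above could be sketched roughly as follows. The column names, the expected-differences list, and the sample data are assumptions for illustration only; the actual blcli export format will differ.

```python
import csv
import io

# Sketch of post-processing exported audit results: keep only the
# differences we did NOT expect. Column names and sample rows are
# hypothetical; adapt them to the real blcli export format.

EXPECTED_DIFFERENCES = {"Host Name", "IP Address"}  # properties allowed to differ

def unexpected_differences(rows):
    """Keep only rows whose property is not on the expected-differences list."""
    return [row for row in rows if row["property"] not in EXPECTED_DIFFERENCES]

# Hypothetical exported data: one row per differing property.
raw = """property,master_value,target_value,target_server
Host Name,devapp01,qaapp01,qaapp01
Total Memory,16384,8192,qaapp01
"""

rows = list(csv.DictReader(io.StringIO(raw)))
for row in unexpected_differences(rows):
    print(f"{row['target_server']}: {row['property']} = "
          f"{row['target_value']} (master: {row['master_value']})")
```

Here the host-name difference is filtered out as expected, and only the memory mismatch survives into the final report.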
Christopher S. Dale
Enabling Capabilities team
As long as you are basically checking for consistency between configurations (target_server01.propertyX=target_server02.propertyX=target_server03.propertyX as you have it), then a "live to live" audit makes plenty of sense.
What's key here is that, by using a "live to live" audit, no value is pre-determined: they're read in at audit time from one server (whichever of the set of servers you pick as the "master"), and then compared to the rest of the servers in that set.
What I was saying about components is not that they define the values, but that they make it easy to pick out -which- settings to look at.
I'm glad to hop on a webex and talk through this if you'd like. (and share with anyone else who wants to see it)
For any individual job run the servers should be the same, but not between job runs. We're looking for a repeatable job run that only checks for consistency among a set of servers - we want devapp01, prodapp01, and qaapp01 to be the same. We want devapp02, prodapp02, and qaapp02 to be the same. These 2 sets however need not have anything in common. Application01 might require win2k8r2, 16GB memory, 2 HBAs of a certain type, etc. Application02 might require win2k3, 4GB of memory, 1 HBA of a certain type, etc. We're not looking for adherence to a standard but just consistency among designated targets. The problem with the combinations is that there are too many to create a standard for all of the options. 3 OSes, 10ish possible memory sizes, myriad installed relevant applications, too many hardware options to count, etc.
My idea was similar to the script-based one referenced above. We could export the values we want to test to a file and parse that file for differences, but I was hoping there was something more elegant I was missing.
Right, so do an audit job of live servers. Every time you run the job the values will be pulled, at run time, from the master server.
One job per grouping of servers; pick any server in the set as the master for that set.
If you don't want to have to build out the jobs with the asset list every time, put the asset list in a Component Template and do a CT-based Audit Job (still against the live servers).