Nope – I think there are some 'ideas' for this kind of thing, though.
Seems like user education? Or why do these users have access to targets they shouldn't run against, or smart groups they shouldn't run against?
They should have access to all of the possible targets within their purview, since they could potentially need to run a job (typically some sort of simple bldeploy) against any of them. I'm not sure of a good way to let them run against any one of a large number of servers without being able to filter smart groups to select a target. While they could potentially need access to any of the targets, it is uncommon for most users to need to target more than one server at a time.
It is certainly a user education issue, but as we attempt to expand our user base, we need to be able to justify extending access to less sophisticated users by having proper safeguards in place.
I wasn't able to find any existing ideas to upvote but this is a pretty non-specific request that is difficult to find in a search.
So we've discussed this two ways.
One is the potential idea or RFE where the product might show the number of targets a given job would target today, and pop up a flag or "Are you sure" type window if that number was larger than a certain number, say a thousand targets. At that point, the user wouldn't accidentally target more hosts than they had in mind without at least going through one "unusual" prompt to warn them. Accidents might still happen, but they would be much less likely, and the user would still have the "second chance".
Another thing we've discussed with a number of customers is using access control to give users in certain roles access to a subset of the infrastructure: in very large banks, for example, there are usually several major business units, each of which has a natural boundary within the organization, and each business unit "owns" a subset of the servers. There are similar divisions by server role (all the database servers, all the middleware servers, all the production Windows servers, etc.). This could be done today, in most customer environments.
We discussed that it would be entirely appropriate for admins whose primary focus is a particular business unit to have deploy access to just the servers in that business unit, and to build jobs at that level. Then, another role can have broader access, to the entire environment (like the BLAdmins or equivalent enterprise-wide role), and can execute only those jobs that should run across the entire environment. It's a bit more process, but something I'm thinking may be reasonable for most customer environments, and would allow you to use all the common Separation of Duty mechanisms that address some of the common concerns in this area.
The problem with putting this on user education is that even educated users can accidentally change the parallelism of a job to unlimited or a very high concurrency, accidentally run the job against a smart group containing thousands of servers, and then jam the whole environment, because the job is using all available WITs and can potentially choke the database as well if BSA isn't limited to the maximum db connections allowed on the db server. It happened to me a few times, and I'm the BLAdmin ...
The way I see it, this could be resolved like so:
1. Add a field in the blrole table to support a "MAX_PARALLELISM" setting (integer, 0 being unlimited)
2. Add a field in the blrole table to support a "MAX_TARGETS" setting (integer, 0 being unlimited)
3. Add corresponding input fields in the role properties panel to set those values, and add corresponding blcli commands as well.
4. Add logic at job execution, between the moment the targets are resolved and the moment execution starts, to evaluate the following:
a) If the total number of targets is greater than MAX_TARGETS, exit in error with a descriptive message.
b) If the job supports parallelism and the max parallelism set in the job is greater than MAX_PARALLELISM, exit in error with a descriptive message.
c) Also evaluate (b) when saving or creating a job schedule (since the job would then run under your role, and you wouldn't know whether it worked until it runs).
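The check described in steps 4a/4b could be sketched roughly like this. To be clear, this is only an illustration of the proposed logic: the `MAX_TARGETS` / `MAX_PARALLELISM` role fields and the `validate_job` function are assumptions from this idea, not an existing BSA or blcli API.

```python
# Hypothetical sketch of the proposed per-role limit check (steps 4a/4b).
# Role fields and function names are invented for illustration only.

def validate_job(role, resolved_targets, job_parallelism):
    """Return None if the job may run, or a descriptive error message.

    By the proposal above, a limit of 0 means unlimited.
    """
    max_targets = role.get("MAX_TARGETS", 0)
    max_parallelism = role.get("MAX_PARALLELISM", 0)

    # Step 4a: evaluated after target resolution, before execution starts.
    if max_targets and len(resolved_targets) > max_targets:
        return (f"Job resolves to {len(resolved_targets)} targets, "
                f"which exceeds the role limit of {max_targets}.")

    # Step 4b: compare the job's parallelism setting to the role limit.
    if max_parallelism and job_parallelism > max_parallelism:
        return (f"Job parallelism {job_parallelism} exceeds "
                f"the role limit of {max_parallelism}.")

    return None
```

Per step 4c, the same parallelism check would also run when a job schedule is saved or created, so the failure surfaces before the scheduled run.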
But sometimes you might need to target a lot of servers – so maybe a warning if you exceed the max? Like "hey, do you really want to run against 10,000 servers???" But for scheduled jobs, maybe a fail. I can see the case where someone messes up a property and a scheduled job ends up hitting 'too many' servers because of that.
I like that!
per role 'max_targets'
per role 'warn_max_targets' (true/false)
where true gets you a warning, and false fails and doesn't let you run the job until you reduce the targets.
And if run on a schedule or via blcli, the warning would also stop the job from running.
Yeah, and also something like:
per role 'max_parallelism'
per role 'warn_max_parallelism' (true/false)
Same as above, but for the level of parallelism. It may be OK to run against 1,000 targets with 25 in parallel, but not unlimited, for example.
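Putting the warn/fail flags together, the decision for either limit (targets or parallelism) could look something like the sketch below. Again, the setting names and the `check_limit` helper are hypothetical, just restating the semantics discussed above.

```python
# Hypothetical decision logic for a per-role limit with a warn flag.
# All names are illustrative; 0 means unlimited, per the proposal above.

def check_limit(value, limit, warn_flag, interactive):
    """Return 'ok', 'warn' (show an "Are you sure?" prompt), or 'fail'.

    value       -- resolved target count, or the job's parallelism setting
    limit       -- the role's max_targets / max_parallelism (0 = unlimited)
    warn_flag   -- the role's warn_max_targets / warn_max_parallelism
    interactive -- True for a user-initiated run; False for schedule/blcli
    """
    if limit == 0 or value <= limit:
        return "ok"
    # Over the limit: interactive runs with the warn flag set get a prompt;
    # scheduled or blcli runs always fail hard, as discussed above.
    if warn_flag and interactive:
        return "warn"
    return "fail"
```

So an interactive run over the limit with the warn flag on would prompt the user, while the same job on a schedule would simply fail with a descriptive message.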