OK, failures can easily occur when running against several hundred systems, so is there an 'easy' way to auto-run a job against the list of 'failed servers' from a previous run? I'm finding that I usually pick up a few stragglers. While I could simply re-run against all systems, I would prefer to run against only the exception (failed) list.
Also, I'm looking for a way to call/run a 'summarize results' type job when my job(s) complete. [Note: I'm not looking for a success/fail email; I'm looking for how to auto-'trigger' a follow-up/related job.]
Scenarios - NOTE: these use NSH Script Jobs
A) Manual run against 'failed' servers via some sort of 'job review' command?
Example for scenario A [what I'm currently doing]
1. via the console, run job #5555
2. job completes with 30 servers on the 'failed' list
3. manually run job 5555 against this list of 'failed servers'
4. manually run job 5555 with the 'summary report' option
B) Auto-magic: some command to run as a 'post-job', 'secondary', or 'conditional' follow-on job?
Example for scenario B [what I would prefer to do - automate the steps]
1. schedule job #5555
2. when job #5555 completes, it auto-re-runs against the list of 'failed servers'
3. when the re-run completes, a 'summary job' is auto-run
Current approach: I can accomplish steps 1 and 3 by scheduling them with a large delay between the steps. If there were an easy way to create the 'failed servers' list, then step 2 could simply be inserted into the schedule.
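For what it's worth, the whole scenario-B chain could be sketched as a single driver script run as an NSH Script Job. This is only a sketch under assumptions: the `Job executeJobAndWait` call, the argument lists, and the job paths are placeholders to be checked against the BLCLI documentation for your BSA version, and the dry-run wrapper just prints what it would do so the flow can be tested outside a BSA environment.

```shell
#!/bin/sh
# Sketch of the scenario-B chain as one driver script.
# DRY_RUN=1 (the default here) only records/prints each blcli invocation;
# set DRY_RUN=0 inside a real BSA environment.
# NOTE: the blcli command names, argument order, and job paths below are
# assumptions -- verify them against the BLCLI docs for your release.

DRY_RUN=${DRY_RUN:-1}
plan=""

run() {
  plan="$plan$* ; "
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

JOB="/Workspace/Jobs/Job5555"   # hypothetical group/name path for job 5555

# 1. run the main job and block until it finishes
run blcli Job executeJobAndWait "$JOB"

# 2. re-run it against only the servers that failed in that run
run blcli Job executeAgainstFailedTarget "$JOB"

# 3. finally, kick off the summary/report job
run blcli Job executeJobAndWait "/Workspace/Jobs/SummaryJob"
```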
Should I be using some 'other' sort of job type for this kind of scenario?
There's a blcli command, 'Job executeAgainstFailedTarget'. That would work.
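That command could be wrapped in a small follow-on script. A minimal sketch, with two caveats: the `Job getDBKeyByGroupAndName` lookup and the exact argument it expects are assumptions to confirm in the BLCLI docs, and a stub `blcli` function is defined here only so the script can be dry-run outside BSA (drop the stub to run it for real):

```shell
#!/bin/sh
# Follow-on step: re-execute a job against only its failed targets.
# The 'Job executeAgainstFailedTarget' name comes from the reply above;
# the DBKey lookup and argument order are assumptions -- check your docs.

# Stub blcli when it isn't installed, so this sketch can be dry-run;
# the stub just echoes the invocation it was given.
if ! command -v blcli >/dev/null 2>&1; then
  blcli() { echo "blcli $*"; }
fi

GROUP="/Workspace/Jobs"   # hypothetical job group
NAME="Job5555"            # hypothetical job name

# look up the job's DBKey, then re-execute it against the servers
# that failed on its most recent run
JOB_KEY=$(blcli Job getDBKeyByGroupAndName "$GROUP" "$NAME")
result=$(blcli Job executeAgainstFailedTarget "$JOB_KEY")
echo "$result"
```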
Thanks Bill - that looks like a 'winner'.