A lot of people have been looking for this. The short answer is: you can't.
Why do you need to log what job ran against the server in a text file, on the server?
I would like to have all the logs relevant to one batch job in one file, so I can easily tail -f that file and see exactly what's going on in real time. The log would also contain only my information, as opposed to BL's logs, where all the subshells' stderr and stdout get logged too; when there is a lot of that output, it's hard to find my information. The subshell commands usually aren't mine, so I don't control what they write.
I am more proficient with unix and could use unix tools to search for things, instead of spending forever in the BL admin console. I could also write all the logs to //someserver, so I'd have every log in one location for easy data parsing with unix commands.
Why do you need to watch the logs in real time on the targets? Are you going to do that across hundreds of servers at a time? That doesn't scale very well.
Why do you need the batch job and other info if you aren't going to look in the console? If you just want to log your nexecs and such to a file, that's trivial, but then all the data is on the server and not in BSA (well, you can pipe everything through tee).
The only info you have about the thing running is the dbkey of the NSH job ($). You might be able to trace that back to a job run with a lot of blcli commands and logic, but that will be pretty expensive.
My batch jobs are not doing one thing to hundreds or thousands of servers; they do a variety of things at different points in time, all relating to the same main batch job. Grouping the log writes under a common topmost batch job name would therefore have been the most helpful. I barely know BL, but I know unix much better, so I could put logs in directories, simultaneously locally and remotely via //someserver using multiple piped tees, and grep for logged keywords pretty easily.
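The "pipe everything through tee" idea above can be sketched as a small wrapper. This is a minimal sketch, not a BSA feature: the batch name, log paths, and the log function are all assumptions for illustration.

```shell
#!/bin/sh
# Sketch only: names and paths below are assumptions, not BSA conventions.
BATCH="mainbatch"                          # hypothetical top-level batch job name
LOG="/tmp/${BATCH}.log"                    # local copy, handy for tail -f
# CENTRAL="//someserver/logs/${BATCH}.log" # hypothetical NSH path for a central copy

log() {
    # Run a command, merge stderr into stdout, timestamp each line,
    # and append to the local log (add "$CENTRAL" after "$LOG" to fan out).
    "$@" 2>&1 | while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done | tee -a "$LOG"
}

log echo "starting batch"
```

Because tee takes multiple file arguments, the same pipe can write the local file and the //someserver copy in one pass, which keeps the two logs identical.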
I did find the blcli commands to look up a job's dbkey and then its name, which is what I used as the basis for finding scripts to copy from the depot location. But there is no way to find the parent of a job from its dbkey. What I did find is that we can search for batch jobs that started recently and maybe spot something there, or log everything to files sharing the same HH (hour) extension.
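The HH-extension workaround above might look like this in plain shell. The directory and file names are assumptions; the only real idea from the thread is using the two-digit hour as the grouping key.

```shell
#!/bin/sh
# Sketch of the HH (hour) extension idea: one log file per hour per batch,
# so job runs launched in the same window land in the same file.
LOGDIR="/tmp/batch_logs"                   # hypothetical log directory
mkdir -p "$LOGDIR"

HOUR=$(date '+%H')                         # two-digit hour of day, e.g. 14
LOGFILE="${LOGDIR}/mainbatch.${HOUR}.log"  # hypothetical naming scheme

echo "$(date '+%Y-%m-%d %H:%M:%S') step one done" >> "$LOGFILE"
```

Grouping by hour is coarse (a run that straddles an hour boundary splits across two files), but it makes correlating "what started recently" with "what wrote this" a simple ls and grep.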
That's all pretty expensive just to write some log info out...