So, two things there.
I've never tried this, but can't you do exactly what you want with a phased deploy? As in - a deploy in which each phase has its own schedule? That way, the deploy job could stage to the repeater at a certain time and then deploy to the end server(s) at a different time. Would that be possible?
Also, if you are nexec-ing a command on a remote server from an NSH Script Job, your BL credentials are carried through, and the nexec spawns with your BL credentials... so won't your BL credentials pass through to the remote agents?
Am I overlooking something?
I believe the phased deploy stages the package all the way to the destination target, not to the repeater. This poses a problem because the scenario in question is to do quick provisioning on top of a base OS. Since the base OS was just provisioned (in this scenario), we cannot pre-stage the package there.
As for the permissions question, the ACLs will by default give us the proper permissions on the repeater. So if the repeater entry for the user running the NSH Script Job is "rw,map=root", then you'll have root permissions on the repeater, but then it'll be root, not your CM user, that is contacting the target. So root would need to be in the users file on the target.
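For reference, a repeater-side entry along those lines might look like the following (the role and user names are placeholders, and the exact file and field syntax should be checked against your agent's users/users.local documentation):

```
# users.local on the repeater - placeholder role:user, mapped to root
BLAdmins:jsmith    rw,map=root
```

With that entry, anything you run on the repeater runs as root locally, which is exactly why the onward hop to the target then presents as root rather than as your CM user.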
Please let us know if you get it figured out. I have received the question many times about a distributed depot, and you are almost there if you can get this to work.
In some instances, the prospect would like to accomplish the first step of ensuring the package/installable is on the repeater by FedExing a CD and copying it there, because of bandwidth issues.
The way we do this with a monitoring script is to run an SRP proxy, run a bl_srp_agent pre-authenticated to the user:role, and write out the shared memory ID to a file.
Then, at script execution time, the script sets and exports the BL_SRP_INFO value for the appropriate role based on the contents of that file. The NSH script then proxies all requests (the default in /usr/lib/rsc/secure is routed through the SRP proxy), which execute with a valid user:role.
It works for our monitoring system; no reason it shouldn't work for a bldeploy as well.
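A minimal sketch of the pickup step, assuming the pre-authenticated bl_srp_agent writes its shared memory ID to a well-known file (the path and the ID format here are placeholder assumptions; the real file is whatever your bl_srp_agent wrapper writes out):

```shell
#!/bin/sh
# In production the pre-authenticated bl_srp_agent writes this file;
# we simulate that here so the sketch is self-contained.
SRP_INFO_FILE=/tmp/bl_srp_info.demo
echo "4128769" > "$SRP_INFO_FILE"        # placeholder shared memory ID

# At script execution time: read the ID and export BL_SRP_INFO so that
# subsequent NSH requests (routed through the SRP proxy per the settings
# in /usr/lib/rsc/secure) execute with the valid user:role.
BL_SRP_INFO=$(cat "$SRP_INFO_FILE")
export BL_SRP_INFO
echo "BL_SRP_INFO=$BL_SRP_INFO"
```

The NSH commands that follow in the same script then inherit BL_SRP_INFO from the environment and authenticate as the pre-established user:role.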
Install an application server on the repeater and tie it into your primary database. Edit the sqlmap.properties file on that remote application server so it executes only scheduled jobs that have a certain job property, and edit the sqlmap.properties file on the primary app server so it does not execute those scheduled jobs.
Schedule the deploy to the repeater, and then schedule a file deploy to execute on the repeater to deploy to remote agents.
If you want to finish this deploy via a scheduled script, consider the following.
You authenticate in your CM as SpecialRole:SpecialUser. Your script executes and your special credentials map you to a 'noaccess' (or nobody or whatever) user. This 'noaccess' user should be a local user on the repeater that has no login credentials, and maybe even their login shell is /dev/null. This user must exist so that the BL agent can map you to it. Your script calls:
nexec repeater_hostname nsh -c 'cp /path/to/file //remote_host/deploy/path'
The users.local file on the remote_host server must map noaccess to your desired local user (e.g., root), and the exports file there must include your repeater as 'ro'. Mapping to 'noaccess' guarantees that, unless a user accesses your repeater via your special BL login and gets mapped to 'noaccess', they will never have the capability to log in to that repeater as 'noaccess' and launch NSH to gain root access to the remote machine.
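Concretely, the two files on remote_host might carry entries like these (hostnames are placeholders, and the exact field syntax should be verified against your agent's users.local and exports documentation):

```
# users.local on remote_host: map the incoming 'noaccess' user to root
noaccess    rw,map=root

# exports on remote_host: grant the repeater read-only access
repeater_hostname    ro
```

The 'ro' export is enough for the copy to land, since the write happens as the mapped root user on remote_host, not via the export permission itself.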
Okay, Unix gurus - I'm sure I've missed a pitfall here or am making some assumption that I should not make, so hack away at my solution, please.