The RCP login is not controlled by the RSCD files, so I'm not sure I understand your comment: "this would take care of the security, but this does not allow RCP console login from any other server besides the 2 app servers".
I'm not sure I understand the issue about the users file. Are they using a shared login? Or do they essentially want to map everyone to root and not do any kind of authorization check on the agent side?
You can use users.local files, on both Unix and Windows. These won't need any updates, and all new BLAdmins users will automatically inherit full access to the servers.
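For reference, a minimal users.local entry of the kind being suggested might look like this (the role name BLAdmins and the spacing are illustrative; adjust to your environment):

```
# users.local on each managed server:
# every user in the BLAdmins role maps to root with read/write access
BLAdmins:*    rw,map=root
```

Because the entry matches on the role rather than on individual users, new members of BLAdmins get access without any further file edits.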
The RCP console actually talks to the appservers, not to the agents (even on the appserver hosts), so it shouldn't be affected by the RSCD security files.
Please clarify your use case better:
How many roles? Which level of access to servers for each role?
The client has the Fileserver on the same box as the AP01 appserver, and because of this they are forced to keep their exports file open to everyone, meaning anyone has r/w access to the app servers.
What the client is concerned about is that because AP01 is open, users can run NSH commands or File Deploy Jobs against the actual app servers (AP01, AP02) while logged in through the RCP console on some other terminal server. They are wondering if there's a way to limit access to the app servers but still keep access to the Fileserver open for everyone. They don't want anyone touching the app servers while everyone can still reach the Fileserver.
I know there's a two-RSCD-agent setup (on 2 separate IPs), but that looks pretty complex; I'm wondering if there's an easier way to manage this.
thanks for the suggestion
So on our 2 app servers (including the Fileserver), it will look like this:

exports:
* rw,user=root          // because of the Fileserver we need to keep exports open to all servers

users:
// nothing here

users.local:
BLAdmins:* rw,map=root  // any BLAdmins user can access these servers as root
If a non-BLAdmins user tries to add either of the 2 app servers to their Servers folder in the Console, or run an NSH command against these 2 app servers, they won't be able to. But if they want to deploy a package or run a job on some other target server, they can still access the Fileserver.
Does this sound correct? Thanks.
You do not need to keep exports open to all users on the app/file servers. You can restrict access on those servers just like you do on the other boxes. The RCP does not talk directly to the agent, ever.
Bill is right. RCP never talks to agents.
But do not leave your exports file the way you suggested above, using '*', or it will be wide open to all hosts...
Change exports from:

* rw,user=root

to:

appservername1 rw,user=root
appservername2 rw,user=root

This way only appservers 1 & 2 can talk to the File server agent. Any other host that attempts to connect will get 'no authorization to access host'. In addition, the file server mapping does not need to be open to all hosts. The only servers that need to talk to the fileserver are the appservers.
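Putting that together, the exports file on the file server host would look something like this (appservername1/2 are placeholders for your real appserver hostnames):

```
# exports on the file server:
# only the two appservers may talk to this agent; any other host
# gets 'no authorization to access host'
appservername1    rw,user=root
appservername2    rw,user=root
```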
Bill, I have never done it yet, but could managed servers need access to the file server if you do patch deployment with the 'agent mount payload ...' option and your file server is also the patch repo?
APPSERVER rw,user=root also works, which is how all the other servers will be set up. The root= option should also work.
Agent mount of the payload uses NFS or CIFS, which are not controlled by the agent ACLs.
I would also advise moving the file server off of the appserver as a best practice. Otherwise, the ACLs you push to the "fileserver" may cause the appserver not to start.
Thanks everyone for the suggestions; it looks like the scenario works as follows.
On the app servers I modified the exports file to be like this:

AP01, AP02 rw,user=root
This prevents non-BLAdmins users from browsing the app servers and running jobs on the app servers. BLAdmins can browse and modify the app servers, as intended.
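To make the allow/deny behavior concrete, here is a toy sketch of exports-style host filtering. This is illustrative only; the real RSCD agent's pattern matching and option parsing are more involved, and the hostnames are assumptions from this thread.

```python
def is_host_allowed(exports_lines, client_host):
    """Toy model of an exports file: each line starts with a
    comma-separated host field (or '*'), followed by permissions.
    A line grants access if its host field is '*' or lists the host."""
    for line in exports_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        hosts = line.split()[0].split(",")
        if "*" in hosts or client_host in hosts:
            return True
    return False

# Before the change: '*' leaves the agent open to every host.
print(is_host_allowed(["*   rw,user=root"], "TERM01"))        # True

# After restricting exports to the two appservers:
restricted = ["AP01,AP02   rw,user=root"]
print(is_host_allowed(restricted, "AP01"))      # True: appserver allowed
print(is_host_allowed(restricted, "TERM01"))    # False: terminal server blocked
```

The point of the sketch is just that the filtering is per connecting host, so opening the Fileserver to everyone is not required in order for the appservers to reach it.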
What stops users from running jobs against the appserver is the permissions granted to the role in the UI. You should still reconsider using RBAC for its intended purpose. Maintaining RBAC is not that onerous and provides significant security benefit. You are now dropping anyone that needs server access into one role, which may or may not be ideal. Should all users be able to manage all boxes in the environment? That is likely to happen w/ this setup.
Bill, from our testing it looks like non-BLAdmins users are able to access the non-app servers; if a non-BLAdmins user tries browsing the app servers, they get read-only access.
Also, 2 things that we noticed from testing this setup so far:
1. We have a NAS share that acts as the Fileserver for both app servers. I wasn't aware of this, but when the RSCD agent reads the NAS, it views it as a separate IP, so in the agent logs it's showing that it's reading the NAS as 127.0.0.1.
When we configured the exports file, we took out the * and added the individual app servers mapped to root.
We could not start up the app servers after this setup. The reason is that when starting the app server, it tries to access the Fileserver, which isn't local on AP01 or AP02; it's on a separate NAS. We had to go back and add the following:

127.0.0.1 rw,user=root

This starts up the appservers.
2. I'm not sure if this is intended functionality, but when doing a Live Browse on one of our app servers, it hangs indefinitely.
We suspect it has to do with the server config. We designated our AP01 server to be the Authentication and Config server, while AP02 is a Job server. When doing a Live Browse on AP02, no problem, it comes up real quick. Browsing AP01 causes a hang; it almost seems like it's in a loop.
Not sure why. Our users.local on AP01 looks OK, but for some reason if a server is designated as the Authentication server, it doesn't like to be browsed.
Mike, does the appserver point to localhost as the fileserver? That could be why it tries to come across that interface.
Yup, exactly, the app server is configured with the Fileserver as localhost. That's why I couldn't figure out why the service wouldn't start after the exports change.
Browsing of a server has nothing to do w/ the appserver configuration. The agent is a separate mechanism from the appserver.
The NAS share is not acting as a file server. The 'file server' is whatever agent is sitting in front of your storage. It really doesn't matter where the actual files are stored, as long as there is an agent in front of them.
It looks like what is happening here is that you configured the file server to be 'localhost', or, because the appserver is talking to itself for the file server, it's using the localhost interface (as Adam notes). So when it comes up it's connecting from localhost, and the exports file is blocking that. Adding '127.0.0.1' to exports should allow access.
The live browse is another issue. AP01 is trying to connect to itself on port 4750 to do the live browse. Is there anything in the appserver or agent log when you do this?