Based on your comments, it seems you have a filer with an RSCD agent installed and mounted as a file system on ALL your AppServers (hopefully Linux). It works as a single file server shared by the AppServers. Try df -h /bladelogic/8.2/NSH/storage on all the AppServers; that will tell you more about it.
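If it helps, here is what that check looks like in practice (the path is the one posted in this thread; substitute your own storage location):

```shell
# On each appserver, check what backs the storage path. An "nfs"/"nfs4"
# Type with a host:/export Filesystem means all appservers share one
# store over the network; a local device type (ext4/xfs) means they don't.
df -hT /bladelogic/8.2/NSH/storage
```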
you are confusing things between the logical and the physical.
logically there are separate components - appserver, file server, etc. in your case, the file server and appserver are located on the same physical system. you could have the file server on another system - e.g. the file server host is not 'localhost' or otherwise aliased to localhost (as in your case) but resolves to a separate physical system. in that case you would not have an issue if there was no rscd on the appserver.
First of all... thank you for your comments, they are very much appreciated.
I am very sorry, but that's not correct... We have an NFS volume mounted on both of the two app servers that compose this environment. The NFS export lives on another server, 188.8.131.52... We mounted this NFS volume under the following path on both app servers: /bladelogic/8.2/NSH/storage. This is also the path referenced by BladeLogic's database and by the app servers, as you can see in the screenshot in the previous post.
This confusion has been my fault: I've just noticed that one of the screenshots I tried to attach to my first post didn't upload properly... anyway, here it is.
As you can see in the next screenshots, when the RSCD is running, the app server reaches the file server through the RSCD, not locally... it makes a network connection from the server outward and then back in through the RSCD to get the files...
This output corresponds to opening a depot script from the BladeLogic console. I think it's clear that it is the agent that retrieves and accesses the files on the file server.
I am very sorry Bill, but I don't know if I understood you well. Did you mean that with my current configuration, if I stop the RSCD, the app server will resolve the host "localhost" as itself and won't be able to connect through the RSCD?
the communication to the file server is always:
appserver java process -> file server rscd process -> file system
in your case you have your file server set to 'localhost'. that means each appserver will try and connect to the rscd on itself for the file server instead of some other host. so if the rscd on the local appserver is offline, then it will not be able to connect to the file server rscd.
if the file server was defined to be on some other host, then the appserver would try and connect to the rscd on that host, not itself.
the appserver never accesses the file server file system directly - it always goes through the rscd agent that is 'in front' of the file system location.
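A quick way to see that chain from an NSH shell on the appserver (these are standard NSH commands; 'localhost' matches the file server setting discussed in this thread, so substitute your own host if it differs):

```shell
agentinfo localhost                          # queries the rscd agent; fails if it is stopped
ls //localhost/bladelogic/8.2/NSH/storage    # nsh routes this remote-path syntax through that same agent
```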
You're always going to go through an RSCD agent to get to the file server, even when it's a "local" RSCD agent. The gain from using an agent on the app server is increased reliability: the app servers don't all depend on a single file server agent at that point; they each use a "local" path to the files.
Pedro José Barbero Iglesias, in BSA the application server and the file server are distinct components. You can choose to install them on the same physical server or segregate them onto different physical servers. In your case it seems you have installed them on the same physical server, therefore you need the RSCD agent on the application server. If you had chosen to install the file server on a completely different physical server, then you would not need the agent. That is what the documentation says, and what Bill Robinson described as the physical/logical layers.
Ok, now I see... so according to your comments, what is mentioned in the article below, and taking into consideration the case I'm in...
Any app server mounting an NFS volume that represents the location of its own file server should have two RSCD agents configured: one for accessing the file server and another for being managed. Or at least one, to access the file server (NFS).
If I had the file server on another server, we wouldn't need any RSCD installed on the app servers. Just one, and only if we want that server to be managed.
Right – also the appserver needs an rscd on it for content installs to work.
The idea behind the two agents is that you map all connections to the 'file server' to a non-privileged user account that owns the file server files. This avoids underlying OS permission issues that may exist because roles are mapped to different accounts on other servers, and without needing to register the file server host in BSA.
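For illustration only, a minimal sketch of that mapping (the config path and the account name are assumptions, not from the thread): on a Unix agent, forcing every incoming connection to a single non-privileged account is typically done in the rscd exports file.

```
# /usr/lib/rsc/exports on the file-server agent (illustrative values;
# 'blfsuser' is whatever non-privileged account owns the file server files)
*    rw,user=blfsuser
```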
Perfect, now it's crystal clear! thank you to all of you for your help, it's so appreciated.
The app server itself will always access files on the file server using an NSH path of the form //&lt;file server hostname&gt;/&lt;file server root path&gt;, as configured in your blasadmin.
So, if you point it to localhost, with a path like /bladelogic/8.2/NSH/storage, it will try to connect to //localhost/bladelogic/8.2/NSH/storage. This implies that there must be an RSCD agent listening on localhost on every application server in that environment, and that their /bladelogic/8.2/NSH/storage path must be a mount to shared NAS storage (via NFS, for example). In that setup it's true to say that the application server needs an RSCD agent present, because the file server is also on the same server (locally).
Alternatively, you could have an external file server, a standalone server with an agent on it, that all application servers would connect to (i.e. instead of localhost, it would be another dedicated server), and in this case you wouldn't need an RSCD agent installed on the application server for it to work.
In our environment, we chose to have two agents installed side by side on our application servers: one for the app server itself, and one for the file server (we call it blfs). The reason is simple: we want to be able to manage the application server like any other server in our environment, but we don't want the file server to be accessed as root. So we used the -local switch when installing both agents to separate the installations, which allows us to use different user mappings for each. We created a restricted user called blfsuser, and all files under the file server's root directory are owned by it. That way, if a user accidentally codes something that runs against blfs, and/or tries to write to / on the server, it will fail because they won't be mapped to root. Each app server's blfs hostname points to a secondary IP address on the server and is added to the hosts file, so it's all local.
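To make the hosts-file side of that concrete, here is a rough sketch (the IP and the "blfs" name are illustrative; the snippet writes to a scratch copy so it is safe to run as-is, whereas on a real app server you would edit /etc/hosts itself):

```shell
# "blfs" resolves to a secondary IP on the same box, so connections to
# //blfs/... stay local but hit the second (file server) agent rather
# than the one managing the appserver itself.
hosts=$(mktemp)                       # scratch copy instead of /etc/hosts
echo "192.0.2.10   blfs" >> "$hosts"  # 192.0.2.10 = hypothetical secondary IP
grep blfs "$hosts"
```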
Note that the dual-agent config is not officially supported by BMC, but it's the only thing that worked for us at the time.
I'd like to mention that, after finally understanding properly what I was doing and the reason for it, I have successfully deployed a new environment using an NFS resource, with two agents installed on the app servers, each one serving its intended function... so thank you to all of you for your time answering and for sharing your thoughts, understanding, and knowledge with me/us.
My best regards.