
    Container discovery access

    Andrew Mark Shaw

      Hi, we've begun looking at the container discovery enabled in Discovery 11.3. The technology is mainly Docker, Kubernetes, etc. The platform team have enabled SSH access to the targets along with sudo access to run the necessary commands (leveraging PRIV_CMD – thanks!). So far the results look good, but we have one major challenge: the default configuration is not to run SSH on the target. The logic behind this is to retain complete control over the image and manage it through configuration -> deployment. On the face of it, no SSH access rules out discovery completely, but I wonder whether others have run up against this one? My understanding is that a no-CLI-access approach is becoming standard practice. The teams have suggested that a local agent could be baked into the images and leveraged by Discovery. I suppose SSH could be built in and configured to allow discovery access only, but I'm keen to hear whether others have dealt with similar scenarios.

        • 1. Re: Container discovery access
          Brice-Emmanuel Loiseaux

          Which local agent are you thinking of? Why can't you restrict SSH access to discovery only?

          • 2. Re: Container discovery access
            mike spaller

            We are running into the same problem. I am told that the plan is to tear down the server instance and rebuild it if anyone does log in, since a change may have been made. I would hate to be the reason a server instance gets rebuilt daily; it would also make the data we collect almost useless, since the server we just discovered would no longer exist.

            • 3. Re: Container discovery access
              Andrew Mark Shaw

              Hi Brice-Emmanuel Loiseaux,

               

              Yes, that was my suggestion too. We're still in discussions around this, but I would add that this no-access-to-containers approach is something our organisation is extremely keen on, and it is perhaps going to become a wider standard going forward. While I don't expect to see agent-based Discovery, I do wonder whether REST-based discovery might enable us to work around the SSH access issue.

              • 4. Re: Container discovery access
                Andrew Mark Shaw

                Hi mike spaller,

                 

                The reasoning is the same for us. Changes to the container are expected to come through a refresh of the image rather than anyone logging on and changing the OS/configuration directly. It allows for assurances around configuration and state that were previously harder to enforce.

                • 5. Re: Container discovery access
                  Jonathan Rennie

                  Hi Brice-Emmanuel Loiseaux.

                   

                  I work in the same group as Andy ... some more info on our problem.

                   

                  For Kubernetes and OpenShift we use a lightweight, immutable, read-only operating system for all nodes in the container cluster. One of our security targets there is to have no console access and no ssh enabled, so we know that the operating system installation cannot be changed (and it is automatically redeployed every X days anyway). For discovery of namespaces, projects, pods and containers within that cluster we would use the native Kubernetes or OpenShift cli commands. These seem to be the ones that BMC Discovery would be executing locally via an ssh connection - but they can also be run remotely with the right cli config to target a remote cluster. That would feel like the "native Kubernetes" way of doing things.

                   

                  To collect cluster info, the pattern uses these OpenShift CLI commands:

                  oc version - OpenShift server and node version

                  oc config view -o=json - obtains current cluster config

                  oc get nodes -o=json - obtains list of nodes

                  oc get all --all-namespaces -o=json - main request to collect information about OpenShift cluster

                   

                  To collect cluster info, the pattern uses these Kubernetes CLI commands:

                  kubectl version - Kubernetes server and node version

                  kubectl config view -o=json - obtains current cluster config

                  kubectl get nodes -o=json - obtains list of nodes

                  kubectl get all --all-namespaces -o=json - main request to collect information about Kubernetes cluster

                  kubectl get pods --all-namespaces - obtains list of Pods

                   

                  Is this something that BMC Discovery could do - run the cli commands from a central point using the native Kubernetes/OpenShift cli?
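
                  For illustration, a central invocation might look something like this (just a sketch - the API server address and token are placeholders, not values from our environment):

                  kubectl --server=https://cluster-api.example.com:6443 --token=$DISCOVERY_TOKEN get nodes -o=json

                  oc login https://cluster-api.example.com:6443 --token=$DISCOVERY_TOKEN && oc get all --all-namespaces -o=json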

                   

                  If not possible from a central point (for which the BMC Discovery server/service would need the kubectl/oc binaries and network paths to the API servers/masters of each target cluster), an agent-like approach could be:

                  1. a script packaged to run on the API servers/masters which runs the required CLI commands, parses and formats the results as required, and POSTs them back to the Discovery service (see the sketch after this list).

                  2. a special-purpose container image that exists only to query the local API servers/masters, which could be deployed into every cluster by policy. This image could either make the results available through a REST call (nice), or parse and reformat them to send back to the Discovery service.
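
                  A rough sketch of option 1 (the ingest URL and token are pure assumptions on my part - I don't know what endpoint, if any, BMC Discovery would expose for pushed data):

                  #!/bin/sh
                  # Hypothetical collector: gather cluster state and push it to a central service.
                  # DISCOVERY_URL and DISCOVERY_TOKEN are placeholders, not a documented BMC API.
                  kubectl get all --all-namespaces -o=json | curl -s -X POST -H "Authorization: Bearer $DISCOVERY_TOKEN" -H "Content-Type: application/json" --data-binary @- "$DISCOVERY_URL/container-inventory"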

                   

                  In the worst case, we could do something like building a special-purpose container image which would have the cli binaries and an ssh service with a pre-shared key for Discovery only in it (and nothing else), deploy this into each Kubernetes/OpenShift cluster by policy, and expose the sshd service as an ingress point.
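
                  Deploying that would be straightforward enough - something like the following (the image name is a placeholder for whatever our registry would hold):

                  # Hypothetical: run the single-purpose access pod and expose its sshd
                  kubectl create deployment discovery-access --image=registry.example.com/discovery-access:latest
                  kubectl expose deployment discovery-access --port=22 --type=LoadBalancer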

                   

                   

                  For simple containers without orchestration through Kubernetes or similar, we would be using either podman or docker, likely on our standard Linux operating system builds. So for now that is less of a problem for us (as those generally have ssh enabled for any required manual intervention), but the more we move workloads to public and private cloud, the more we will be removing ssh daemons. So again, having a packaged script that does discovery of local containers and sends/POSTs results back to a central Discovery service (or makes that information accessible by REST call to the local script from a remote Discovery service) has more longevity.
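
                  The local collection side of that is simple enough - roughly this (same hypothetical endpoint as above; note docker emits one JSON object per line here, so the receiver would need to accept JSON Lines):

                  # Hypothetical: list local containers and push the result centrally
                  docker ps --all --format '{{json .}}' | curl -s -X POST -H "Content-Type: application/json" --data-binary @- "$DISCOVERY_URL/local-containers"
                  # podman ps --all --format json is the podman equivalent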

                   

                  This note was too long, sorry, but I hope it gives some flavour to Andy's questions.

                   

                  Jon

                  • 6. Re: Container discovery access
                    Andrew Waters

                    Discovery already runs those commands when it discovers Kubernetes (docs) and OpenShift (docs). It does, however, need to be able to discover them to know to run the commands.

                    • 7. Re: Container discovery access
                      Jonathan Rennie

                      Hi Andrew.

                       

                      Yes, but (from what I see - I may be wrong) it is executing them locally in a command shell on one of the servers in the cluster, hence the need for an ssh daemon and pre-shared keys, which is where our problem is (our security folks do not want to see an ssh daemon running on any container host or in any virtual machine running in our private cloud or public cloud environments).

                       

                      It would be preferable for the pattern either to execute those commands from the central discovery host, or to use curl/wget to make REST calls to the Kubernetes API server / OpenShift master in the cluster to gain the required info.
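
                      For example (a sketch - the API server address and token are placeholders; the paths are the standard Kubernetes REST endpoints that kubectl itself calls):

                      # Hypothetical direct REST calls to the cluster API, no ssh involved
                      curl -sk -H "Authorization: Bearer $DISCOVERY_TOKEN" https://cluster-api.example.com:6443/api/v1/nodes
                      curl -sk -H "Authorization: Bearer $DISCOVERY_TOKEN" https://cluster-api.example.com:6443/api/v1/pods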

                       

                      Jon