
    Maximum concurrent discovery requests per engine - Further Explanation

    Cory Garcia

      Maximum concurrent discovery requests per engine:


      Setting in Question:

      30 requests - can bump to 150

      Maximum total of 3360 concurrent discovery requests
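
      A quick sanity check on where the 3360 figure could come from, assuming the total is simply the per-engine limit multiplied by engines per server and the number of servers. This is only a sketch; the 16-server count is inferred from the arithmetic and is not stated anywhere in this thread:

          # Hedged sketch: reconstructing the quoted 3360 concurrent requests.
          # Assumption: total = per-engine limit * ECA engines per server * servers.
          # The 16-server count is inferred from the arithmetic, not stated here.
          requests_per_engine = 30  # current setting, before any bump
          engines_per_server = 7    # 8 processors -> 7 ECA engines, per the notes below
          servers = 16              # assumed: 30 * 7 * 16 = 3360

          print(requests_per_engine * engines_per_server * servers)  # 3360

          # Bumping the per-engine limit to 150 would scale the ceiling the same way:
          print(150 * engines_per_server * servers)                  # 16800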


      What does an increase mean exactly in relation to:

      - System Resource Taxing

      - Bandwidth/Throughput taxing

      - Effect on discovery time, given that there are enough resources both on the server and in the available bandwidth


      As for the physical resources currently available:

      note: Currently 8 processors on each server, which equates to 7 ECA engines available per box

      note: Currently 24 GB memory and 32 GB swap, with plans to bump both


      Additional Questions:

      Who out there has bumped this significantly? What can I expect, and what should I look out for?

        • 1. Re: Maximum concurrent discovery requests per engine - Further Explanation
          Andrew Waters

          That is very hard to answer specifically, because different discovery requests take vastly different amounts of resources. Plus it is very dependent on your hardware.


          Effectively you are making up to 5 times the number of discovery requests, so there is up to 5 times more bandwidth and discovery resource usage. The system does not make concurrent requests to the same endpoint, so this can increase overall throughput rather than decreasing discovery time for individual endpoints; in fact, it may make individual endpoints take longer. Because more things are being processed at the same time, the ECA engines are almost certainly going to grow in size (memory footprint). If you are not scanning enough IPs at the same time, there will be no additional work done.
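
          To make the throughput-versus-latency point concrete, below is a minimal sketch (in Python, not Discovery's actual implementation) of a scheduler with a global concurrency cap and per-endpoint serialization. Raising the cap lets more distinct endpoints be probed in parallel, but requests against any single endpoint still run one at a time, so no individual endpoint finishes faster:

              import asyncio

              # Hedged sketch of the behaviour described above; the names and
              # timings are illustrative, not BMC Discovery's real implementation.
              MAX_CONCURRENT = 30  # the per-engine setting under discussion

              global_slots = asyncio.Semaphore(MAX_CONCURRENT)
              endpoint_locks: dict[str, asyncio.Lock] = {}

              async def discover(endpoint: str) -> None:
                  # One lock per endpoint: requests to the SAME endpoint are
                  # serialized, so a higher global cap never speeds up one host.
                  lock = endpoint_locks.setdefault(endpoint, asyncio.Lock())
                  async with lock:              # per-endpoint serialization
                      async with global_slots:  # global concurrency cap
                          await asyncio.sleep(0.1)  # stand-in for the real probe

              async def main() -> None:
                  # 100 requests across 10 endpoints: raising MAX_CONCURRENT
                  # improves aggregate throughput, but the 10 requests against
                  # each endpoint still execute strictly in sequence.
                  await asyncio.gather(*(discover(f"host-{n % 10}") for n in range(100)))

              asyncio.run(main())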


          Frequently the biggest bottleneck in the system is writing data to the datastore. This in turn is normally limited by the speed of your I/O.
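
          One rough way to check whether you are in fact I/O-bound before raising the limit is to sample disk write throughput while a scan is running. A sketch using the third-party psutil package (the sampling approach here is an illustration, not a BMC tool):

              import time
              import psutil  # third-party: pip install psutil

              # Sample aggregate disk write throughput once per second. A sustained
              # plateau here while CPU sits mostly idle suggests datastore writes,
              # not the ECA engines, are the bottleneck.
              prev = psutil.disk_io_counters()
              for _ in range(10):
                  time.sleep(1)
                  cur = psutil.disk_io_counters()
                  print(f"{(cur.write_bytes - prev.write_bytes) / 1e6:.1f} MB/s written")
                  prev = cur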
