6 Replies Latest reply on Dec 7, 2015 6:12 AM by Steffen Kreis

    Question on WIT Distribution

    Steffen Kreis



We are now using Splunk to analyze the AppServer logs of our BSA 8.6.1 environment.

We are especially interested in the [Memory Monitor] entries and display them graphically.


      [01 Dec 2015 06:09:13,218] [Scheduled-System-Tasks-Thread-2] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 3202351104,Free JVM (B): 1443224584,Used JVM (B): 1759126520,VSize (B): 4895207424,RSS (B): 4562538496,Used File Descriptors: 14214,Used Work Item Threads: 3/100,Used NSH Proxy Threads: 0/600,Used Client Connections: 0/200,DB Client-Connection-Pool: 2/2/0/100,DB Job-Connection-Pool: 10/10/0/50,DB General-Connection-Pool: 2/2/0/50
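For reference, the counters in a line like the one above can be pulled out with a simple regular expression. The sketch below is Python (illustrative only, not the actual Splunk field extraction); the field name and the "used/max" format are taken from the sample line:

```python
import re

# Sample [Memory Monitor] line from the BSA AppServer log (as shown above).
LINE = ("[01 Dec 2015 06:09:13,218] [Scheduled-System-Tasks-Thread-2] [INFO] "
        "[System:System:] [Memory Monitor] Total JVM (B): 3202351104,"
        "Free JVM (B): 1443224584,Used JVM (B): 1759126520,"
        "VSize (B): 4895207424,RSS (B): 4562538496,Used File Descriptors: 14214,"
        "Used Work Item Threads: 3/100,Used NSH Proxy Threads: 0/600,"
        "Used Client Connections: 0/200,DB Client-Connection-Pool: 2/2/0/100,"
        "DB Job-Connection-Pool: 10/10/0/50,DB General-Connection-Pool: 2/2/0/50")

def parse_wits(line):
    """Return (used, max) Work Item Threads from a Memory Monitor line, or None."""
    m = re.search(r"Used Work Item Threads: (\d+)/(\d+)", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

used, limit = parse_wits(LINE)
print(used, limit)  # 3 100
```

The same regex works as a Splunk extraction with named capture groups if you prefer to do it at search time.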


One item we display is the Work Item Threads and how they reach the maximum of 100 WITs that we have set for each of our 8 Job-Server instances.

Looking at that graph shows an odd distribution of the WITs from my perspective, and I was hoping somebody could explain the behavior.


Yesterday at 10:00 p.m. we ran an Inventory Snapshot against all our Windows servers (~8,000 servers).

      "Number of Targets to Process in Parallel" is set to 50. The job ran for 2 hours and finished around midnight.




The job itself was started on the AppServer instance shown in green. According to the job log, the WorkItems were started on all the other servers, but looking at the App-Server logs (which produce the graph above), it seems that only the instance that started the main job was under load during the whole job run, and the WITs were not really distributed evenly.


Is that expected behavior, or is something wrong here?