What version? First check to see what Reasoning has to say.
What is logged in tw_svc_reasoning.log, tw_svc_reasoning.out and tw_svc_eca_engine.log (or tw_svc_eca_engine_*.log depending on version).
We are using 11.3.
I have attached an extract of the logs covering the time of the pattern upload (which finally finished). I notice two things:
- DDD removal scheduled a lot of items for removal
- An external event consolidation took place exactly at the same time.
This external event is triggered at the same time every day; tomorrow I will try uploading a pattern while it runs to see whether it has an impact.
Could this be the reason?
From the log, that seems to take much longer than I would expect:
140290656872192: 2018-12-13 12:53:19,373: reasoning.patternmanager: INFO: Added knowledge upload "HCL.DPTechnology upload 3"
140290656872192: 2018-12-13 12:53:27,686: reasoning.patternupdate: INFO: Loading new rules
140290656872192: 2018-12-13 14:06:05,321: reasoning.patternupdate: INFO: Updated pattern modules
Is the appliance very heavily loaded? Do you have a really large number of patterns? The code generation itself does not seem to take too long.
On this environment we have 3251 active pattern modules:
- 1246 belong to BMC TKUs
- 2005 are our own, of which:
  - 1649 are application model patterns
  - 356 are custom patterns
The appliance is not heavily loaded and responds pretty well.
I really wonder whether the event we run every day is the cause, since we have this problem while it is running (this still has to be confirmed). This event triggers a pattern that updates all Host attributes from a SQL query against an external database, which means updating maybe 10 000 hosts every day.
Maybe I need to optimize the pattern to update the object only when an attribute changes?
I have a couple of thousand pattern modules, so fewer than you, but they still load much faster than yours.
It is all a bit difficult to tell. You could temporarily put Reasoning into debug when you are about to update/upload a pattern, as that would record the loading of individual modules, which may tell you whether one module is taking ages or it is just slow loading them all.
I changed the time our integration runs and now the loading is faster. I think we had this problem only while the integration was running, which may have been overloading the appliance a little.
The integration does this:
- run a SQL query to get a list of attributes (a dozen) for servers
- for each server in the result, get it from ADDM and set the attributes
I did it very simply, setting the attributes every time, but in reality only 1 or 2% of the attributes actually change each day; the rest remain the same.
Is there any way to do this more efficiently? I wonder whether testing all attributes for all servers would not take even longer.
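To illustrate what I mean by testing the attributes first, here is a minimal TPL sketch of the guard. This is only a sketch: `sql_rows` stands for the result of our daily SQL query, and the `hostname`/`owner` column and attribute names are placeholders, not our real schema.

```tpl
// Sketch only: sql_rows is a placeholder for the daily SQL query result,
// and hostname/owner are placeholder column/attribute names.
for row in sql_rows do
    hostname := row.hostname;
    hosts := search(Host where name = %hostname%);
    if size(hosts) <> 0 then
        h := hosts[0];
        // Write the attribute only when the value actually changed
        if h.owner <> row.owner then
            h.owner := row.owner;
        end if;
    end if;
end for;
```

In the real pattern this guard would be repeated for each of the dozen attributes coming from the query.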
You really need a query for each server rather than a query that lists all the servers. If you change lots of servers, Reasoning memory usage can become large, because it keeps all the changes to apply in memory until it reaches the end of the pattern. Checking attributes would help a little, but not much, because it caches the accessed values in case the pattern uses them later.
OK, just to be sure I understand: when you say one query for each server, do you mean one SQL query to the external database and then updating the matching server, or one query in ADDM?
Currently I'm doing this:
// SQL queries ...
all_hosts := search(Host);
for h in all_hosts do
    // update based on query results
end for;
I assume I am then in the case you mentioned, modifying many hosts.
Would it work if I ran the SQL queries in one pattern execution, and then had another pattern triggering on 'Host created, confirmed' that reads the last query result to modify the Host?
Thanks a lot
If you are using an integration point then it would be shared for the endpoints in a single scan.
I'm using an integration point triggered from an event. So I was thinking of just doing the queries within this pattern; the results will be stored in IntegrationPoint nodes.
Then, during the normal scan, a pattern triggering on each server would search IntegrationPoint -> SQLResultRow and update that server. In this case we update one server at a time rather than all of them.
Could this work? There are many possible implementations; I am just looking for the best design.
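A rough sketch of the second pattern in this design, assuming the SQLResultRow nodes created by the integration carry a hostname column to match on. The module name, pattern name, and the `hostname`/`owner` attribute names are all assumptions; check the actual attributes on the rows your integration point creates before relying on this.

```tpl
tpl 1.13 module Custom.HostAttributeUpdate;

// Sketch of the per-host update pattern: one host per pattern execution,
// so Reasoning only holds that host's changes in memory.
// hostname/owner are placeholder names for the SQLResultRow attributes.
pattern Custom_Host_Attribute_Update 1.0
  '''
  Update a Host's attributes from the stored SQL query results.
  '''
  overview
    tags custom, integration;
  end overview;

  triggers
    on host := Host created, confirmed;
  end triggers;

  body
    hostname := host.name;
    rows := search(SQLResultRow where hostname = %hostname%);
    if size(rows) = 0 then
        stop;                       // no stored row for this host
    end if;
    row := rows[0];

    // Write only attributes whose value actually changed
    if host.owner <> row.owner then
        host.owner := row.owner;
    end if;
  end body;
end pattern;
```

The trigger on 'Host created, confirmed' means the update happens during the normal scan of each host, as described above.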
Next question: how are IntegrationPoints removed? I just ran a query and we have many on the system, but for some needs we only require the one from a week ago.
Thanks a lot