Do you archive data periodically? You need to capture the logs (API/SQL/Filter) for one specific use case and analyze where it is taking more time than expected. On 8.x versions, you need to enable the arexception logs explicitly to see which API calls are taking a long time to process.
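As a rough sketch, server-side logging can be switched on in the ar.cfg (ar.conf on UNIX) file, or through the AR System Administration Console. The Debug-mode bit values and paths below are from memory and should be verified against the documentation for your version before relying on them:

```
# ar.cfg excerpt -- hypothetical values, verify for your version.
# Debug-mode is a bitmask: 1 = SQL, 2 = filter, 16 = API (1 + 2 + 16 = 19)
Debug-mode: 19
SQL-Log-File:    /opt/bmc/ARSystem/db/arsql.log
Filter-Log-File: /opt/bmc/ARSystem/db/arfilter.log
API-Log-File:    /opt/bmc/ARSystem/db/arapi.log
```

Remember to turn logging off again after capturing the use case, as these logs grow quickly and add their own overhead.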
Thanks and Regards,
Just increasing the RAM on a server will not necessarily improve performance.
You will need to allocate the additional RAM to the processes on the server so they can actually utilise it, e.g. change the Mid Tier JVM allocation, change the JVM allocation for the AR processes, etc.
Also, the biggest performance gains are usually realised in database tuning, so I would take another look at that along with the suggestions from Pallavi.
There is no archiving, and the system has been running like that for the past 6 years. We captured the logs and found some SQL that takes more than 9 minutes, and it was issued by multiple users, multiple times each. That prompted us to look for database tuning issues. We have added several indexes on the HPD and CHG tables and hope to see some improvement when users start using the system again.
This is true, but we were eliminating the possible causes of the delay one by one. We knew this alone would not help much, but leaving it as an open question would not help close the issue either. Anyway, we have applied a fix to the table indexes and it seems to have improved overall performance. We tested it on the actual application at a non-peak time and it works okay; we are awaiting the users' logins to confirm the solution. I will let you know. This was a very tricky one.
There is a moral lesson I have learned from this incident: users eat what you feed them. There were several discussions with management about the root cause of the issue, and they could not understand how it came about all of a sudden. The missing fact is that the users were the source of feedback at the start of the problem. Users did not realize the degradation in performance and assumed it was due to heavy load on the network. The admins took the same easy way out: they never tuned anything and never investigated a performance complaint seriously or professionally (checking the network speed, monitoring the utilization of the DB, or looking at the user's screen). In fact, the admins assumed that whenever a user complained there must be some other reason for the complaint. ITIL advises taking users seriously and showing interest in their needs, but the number of users and complaints made the admins numb to the problems, so they did not see the indicators. This incident did not happen all of a sudden; it was rather the "small drops of rain water that made the flood".
So if a small issue is raised to you as an admin, don't take it for granted that it is not Remedy; assume it is Remedy first, then look for other reasons.
I will let you know what solution was implemented as soon as it is tested. Part of the problem is in BMC's documentation method!!!
How do you change the Mid Tier JVM allocation, or the JVM allocation for the AR processes? Give me a lead on doing that on a Tomcat web server.
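For the Mid Tier specifically, one common approach on a standalone Tomcat (Linux) install is to set the heap options in `setenv.sh`; Tomcat sources this file on startup if it exists. The sizes below are placeholders, not a recommendation; tune them to the RAM you actually added:

```shell
# $CATALINA_HOME/bin/setenv.sh  (create it if it does not exist)
# Example heap sizes only -- adjust to your server's free RAM.
CATALINA_OPTS="$CATALINA_OPTS -Xms2048m -Xmx4096m"   # initial / maximum Java heap for the Mid Tier JVM
export CATALINA_OPTS
```

On Windows, or when Tomcat runs as a service, the equivalent is `setenv.bat` or the service's Java options dialog. The AR server processes have their own, separate memory settings; those are configured on the AR server side, not through Tomcat.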
Remedy is just not an install-and-forget application.
Like most enterprise applications it requires constant tuning and tweaking, much like you need to keep checking and topping up the oil/coolant in a car or it will grind to a halt.
As the DB grows, there comes a point where the current indexes stop working and new ones are required.
The application has many parts to it, so it can be daunting to "tune" everything properly.
However, now that BMC publishes KB articles on the Communities, finding the information is much easier.
Thank you, Carl, for the mechanical analogy. I did put in the oil and gas and did the needed tune-up, but it did not work!!! So I turned to the engineer who designed that "car", and the "designer" put one screw in and it worked!!! I will leave you to guess the type of "screw".
Anyway, the issue is now 70% resolved. I need to attend to my work, then I will come back and feed the community with the resolution.
Now it is my free time, so I can update you. This is the solution we arrived at:
Our AR System is at version 8.1 and had been in good shape for the past 5 years. Management did not want to do any upgrade, as the system is integrated with other legacy systems and an upgrade would impact that integration. The problem we faced was a degradation in service to the point that some users were unable to log in. We did a full inspection for any updates or changes in the infrastructure or database, but with no luck. While doing this we noticed some areas for improvement, so we requested an upgrade of RAM and CPU on the web server and tuned up every table in the Remedy database. All of this helped very little, but it showed us where the highest impact was. The log files (SQL, escalation, filter, and API) did not show much. We got in touch with the vendor and BMC, and thankfully they helped. Since we had investigated all other causes and generated the logs, it was easier to locate the source of this anomaly:
- BMC has an analysis tool for the logs, which helped identify the longest-running API calls and SQL statements
- A BMC agent checked the offending SQL, and it all pointed to the HPD:Help Desk and CHG:Infrastructure forms
- They pointed out that there was a knowledge article describing this anomaly in AR System 7.9 - 8.1, and the fix was to modify/add composite indexes on the above tables
- They also requested that the DBA re-index and analyze all tables for better performance
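On Oracle, the index and analyze steps described above might look roughly like the sketch below. The table name (the T-number backing HPD:Help Desk) and the column list are placeholders; the real ones come from BMC's knowledge article and your own schema, so treat this only as an illustration of the shape of the fix:

```sql
-- Hypothetical sketch (Oracle). T1234 stands in for the backing table of
-- HPD:Help Desk; the actual columns must come from the BMC knowledge article.
CREATE INDEX ARADMIN.HPD_COMPOSITE_IX
    ON ARADMIN.T1234 (C1000000161, C7)    -- placeholder field columns
    ONLINE;

-- Refresh optimizer statistics so the new index is actually considered
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'ARADMIN', tabname => 'T1234', cascade => TRUE);

-- Rebuild an existing fragmented index without blocking users
ALTER INDEX ARADMIN.HPD_COMPOSITE_IX REBUILD ONLINE;
```

Gathering statistics after adding the index matters: without fresh statistics the optimizer may keep choosing the old, slow execution plan.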
We applied the suggested fix and, sure enough, the system started performing better.
What we also noticed is that some of the tickets affected by that error did not display on the support monitors and had stayed open for years (no one complained!).
So the problem was there all the time, but no one noticed, because they were happy with no tickets on their screens. The users had not complained about it for the past year or so!! This is where the idea of "users eat what you feed them" came to me. The non-technical culture around us did not suspect that there was a degradation in service, and whenever users faced a problem with Remedy it went unreported, until the system came to a halt, at which time I was hired. The problem showed up as delayed approvals as soon as I started. I took the action to re-index and tune up the offending table; once that was done, the rest of the tables started halting one by one.
So we knew the source of the problem was in the database, but there was no way we could have found that one knowledge article if it were not for BMC support.
It is nice that we get such support, but I recall that when Remedy was introduced to the market, the aim was to make it simple and easy for end users to build their own applications and front ends with the needed workflow. I find it hard now even for an experienced admin to keep up with the technology, vocabulary, workflows, techniques, admin tasks, tweaks, upgrades, errors, development, continuing education, etc. Being a Remedy admin now costs more than being a certified Oracle DBA, and yet the number and types of businesses that employ Remedy, fully or partially, are very limited.
KnowledgeArticle - BMC.pdf 152.7 K