What part is taking the time: staging/copying the payload to the target, or running the install?
Running the install is taking the time. I can see in the bldeploy log where the patch is kicked off (typically with the /quiet /norestart options), and then nothing else in the log for sometimes hours until the install comes back with a response.
Also, we typically stage our patches (copy them) to the targets days in advance to avoid them taking extra time waiting on the copy.
Anything in the WindowsUpdate.log?
This sounds like a server environment issue - maybe AV?
You may be able to confirm by launching the patch install locally on the box outside of BSA and monitoring it with a tool such as Process Monitor or Process Explorer.
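If you want a simple baseline for how long the installer itself takes outside of BSA, a minimal sketch is below. The command shown is a harmless placeholder; substitute the actual staged patch executable and its switches (e.g. `patch.exe /quiet /norestart`):

```python
import subprocess
import sys
import time

def run_and_time(cmd):
    """Run a command and return (exit_code, elapsed_seconds)."""
    start = time.monotonic()
    proc = subprocess.run(cmd)
    return proc.returncode, time.monotonic() - start

# Placeholder command for illustration; replace with the real patch
# executable, e.g. ["patch.exe", "/quiet", "/norestart"].
rc, elapsed = run_and_time([sys.executable, "-c", "pass"])
print(f"installer exited {rc} after {elapsed:.1f}s")
```

Comparing this wall-clock time with what the bldeploy log shows would tell you whether the stall is the installer itself or something in the deployment path.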
Pretty much the same finding in the WindowsUpdate.log.
It reports that the update started at 00:14:05, then nothing until 03:32:07, when it requests post-reboot reporting for the package.
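If you need to hunt for these stalls across many targets, a gap scan of the log timestamps is quick to script. A minimal sketch, assuming lines start with a `YYYY-MM-DD HH:MM:SS` prefix (older WindowsUpdate.log formats differ, so adjust the slice/format string; the sample lines are illustrative, not real log output):

```python
from datetime import datetime, timedelta

def find_gaps(lines, threshold=timedelta(minutes=30)):
    """Return (start, end) pairs where consecutive timestamped log
    lines are more than `threshold` apart -- candidate stalls."""
    gaps, prev = [], None
    for line in lines:
        try:
            # Assumes the line begins with "YYYY-MM-DD HH:MM:SS".
            ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue  # skip continuation/unparseable lines
        if prev is not None and ts - prev > threshold:
            gaps.append((prev, ts))
        prev = ts
    return gaps

# Illustrative sample mirroring the times reported above.
sample = [
    "2018-05-01 00:14:05 Agent  Install started for package",
    "2018-05-01 03:32:07 Agent  Requesting post-reboot reporting",
]
for start, end in find_gaps(sample):
    print(f"gap of {end - start} between {start} and {end}")
```

Running it over the real log (`open(path).readlines()`) would surface every multi-hour dead spot without reading the file by eye.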
Regarding the initial thoughts on AV possibly being the cause: I haven't dug deeper into that possibility yet. These targets are all in the same environment and should have the same AV versions and exclusions, but I'll definitely have the security team take a deeper look at it.
Did we get any solution on this either from Microsoft/BMC?
However, I have a concern: after the upgrade to 8.9 SP2, the tool's functionality has also become slow during patching operations. We recently upgraded to 8.9.02.329 and have the console on a separate server, but the admins have reported that it takes 15-20 seconds to complete a click while patching is in progress! And sometimes it goes into a "Not Responding" state. Can we have a solution for this?
However, I have a concern: after the upgrade to 8.9 SP2, the tool's functionality has also become slow during patching operations.
What 'tool functionalities'?
but the admins have reported that it takes 15-20 seconds to complete a click while patching is in progress!
A click on what?
and sometimes it goes into a "Not Responding" state. Can we have a solution for this?
The console is installed on a separate system. How many people are using the console on this system concurrently? Is this server 'near' the appserver? Were you using the exact same setup before upgrading? What does the load look like on the server with the GUI installed when things are 'slow'? And on the appserver?
What DB back end? What OS is the appserver running?
Simple tool functions like clicking on the Depot folder, right-clicking to open a job or check its configuration, running a report, etc., take more time than before and tend to be slow. Only one person accesses the console at a given time. This server is in the same environment/domain as the app server. Before upgrading, we had the console on the app server itself. The load took the console to a hung state on a few occasions.
The DB is MSSQL 2014 Standard Edition. The console and app server are both on Microsoft Windows Server 2008.
There are some recommended database maintenance activities - are those running on a regular basis? See: Setting up a SQL Server database and user for BMC Server Automation - Documentation for BMC Server Automation 8.9 - BMC …
What's the CPU usage on the appserver and on the console when this happens?
The database was recently upgraded with UPI as per the upgrade plan, and CPU usage has been 10-15% on both the app server and the console server while this issue is occurring. However, if recommended, we can still implement the database maintenance activities. But is that the only solution to overcome this slowness in the console?
There are a lot of different places to look when you see general performance issues:
- CPU/memory usage on the system running the RCP
- CPU/memory usage on the appserver(s)
- poor DB performance
Those are typically the first things to look at. Since it seems like your RCP client system and the appserver aren't having a problem, the next thing to look at is DB performance. The first thing to check there is whether the table stats updates are being done on a weekly basis. I think there's also an index rebuild and reorg procedure recommended for SQL Server as part of a general maintenance plan, so I'd start by looking at those things first.

Another thing would be DB size: check that you are running the recommended DB cleanups, that they are working, and that your DB isn't 'too large' (which is of course subjective). Other possibilities are something running on the RCP side, like anti-virus, causing a problem, or network connectivity problems between the RCP and the appserver.
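The stats-update and index-maintenance steps mentioned above can be scripted. A minimal sketch that only builds the T-SQL statements (the table names are illustrative placeholders, not the real BSA schema, and the 30% fragmentation cutoff is a commonly cited rule of thumb for rebuild vs. reorganize; have your DBA run the output with their preferred tooling):

```python
def maintenance_statements(tables, fragmentation_pct):
    """Build T-SQL maintenance statements for the given tables:
    always refresh statistics; rebuild an index when fragmentation
    is high (> 30%), otherwise reorganize it."""
    stmts = []
    for table in tables:
        stmts.append(f"UPDATE STATISTICS {table} WITH FULLSCAN;")
        if fragmentation_pct.get(table, 0) > 30:
            stmts.append(f"ALTER INDEX ALL ON {table} REBUILD;")
        else:
            stmts.append(f"ALTER INDEX ALL ON {table} REORGANIZE;")
    return stmts

# Hypothetical table names for illustration only.
for stmt in maintenance_statements(
    ["dbo.job_run", "dbo.device"], {"dbo.job_run": 45}
):
    print(stmt)
```

Scheduling something like this weekly (alongside the documented BSA cleanup jobs) keeps the optimizer's statistics fresh, which is usually the first DB-side suspect when console clicks get slow.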
We have performed the DB maintenance activity and checked the CPU and memory performance of the servers, but I have observed that the slowness has become intermittent. Sometimes performance is on par and sometimes it becomes slow, irrespective of the maintenance activity on the DB. Can we have a root cause for this? Our admins have reported that it takes more than 15-20 seconds to respond to a single click when it is slow, while it works efficiently at other times.
Which DB activity did you perform? The table stats update? The index rebuild/reorg? Both? Is there some other activity, like a backup scheduled on the DB, that correlates with the slowness?
You can enable debug logging on the RCP side; that should capture what the GUI is doing. If this were an Oracle DB you could easily generate DB reports covering the time of the problem; with SQL Server, however, I believe the DBA needs to set up a trace session, and if the problem is intermittent that probably won't work very well.