You should probably open a support ticket for this issue; since this is production, I wouldn't want to suggest something that could make things worse.
Last time I had a deployment crash beyond repair, I deleted it and recreated it... but that involves database manipulation and a proper _template profile, and I wouldn't want you to mess it up further.
This may be way overkill... and, looking at Ankur's response, check this article: "ORA-12519: TNS:no appropriate service handler found tips".
That may be of more help.
I believe the cause is the number of processes and sessions configured on your database.
Ask your DBA to perform the following actions on your Oracle database:
- verify that your database has free space on its disks
- modify two parameters to increase the number of processes and sessions
Suggested values for a standard small or medium size installation depend on your environment; there is an illustrative example below.
Note: Higher values might be needed for larger instances, since these depend on the load of the database; contact your DBAs for suggestions
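As an illustration only (the numbers here are placeholders, not official recommendations; your DBA should pick values appropriate for your load), the change would look something like this from SQL*Plus:
-- connect as a privileged user; the values 300/600 below are placeholders
show parameter processes
show parameter sessions
alter system set processes=300 scope=spfile;
alter system set sessions=600 scope=spfile;
-- a restart is needed for spfile-scoped changes to take effect
shutdown immediate
startup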
The best way to find the root cause is the Job Server log file, which is located under NSH_DIR/br; the file is named after the Job Server (there is a quick example after the commands below).
It looks like your database is correct. Check for File Server access as well.
Check the database settings for all of the blasadmin instances below:
blasadmin -s _template
blasadmin:_template> show database all
blasadmin -s _util
blasadmin:_util> show database all
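To watch the Job Server log from a shell, something like this works (the install path and log name below are examples only; substitute your own NSH directory and Job Server deployment name):
# path and file name are illustrative; adjust for your installation
tail -f /opt/bmc/bladelogic/NSH/br/<job_server_name>.log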
Is there any more information provided in the stack trace?
Please review KA374284, which you should be able to find in our KB. Please let me know if you have any trouble locating it.
A few things:
-> _util can be deleted; it's not an active appserver deployment.
-> On some of the appservers, there were port conflicts with the spawner instance that prevented the appserver deployments from starting. We fixed one, and there were a couple more like this (see the example after this list).
-> It's not clear whether that DB error is still occurring; if it is, he needs to talk to his DBAs, as it seems to indicate a misconfiguration on a RAC node or a SCAN IP that is not set up correctly.
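For reference, fixing a spawner port conflict usually comes down to moving the port range, roughly like this in blasadmin (the AppServer module and MinPort/MaxPort setting names are my best guess from the description in this thread, and the port numbers are placeholders; confirm the exact syntax with blasadmin help for your BSA version and restart the Application Server afterwards):
blasadmin -s <deployment>
blasadmin:<deployment>> set AppServer MinPort 9840
blasadmin:<deployment>> set AppServer MaxPort 9850
blasadmin:<deployment>> show AppServer all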
Can you please let us know the current status of this issue?
This issue has been resolved. It turned out to be a port conflict that was preventing the _j01 job instance from starting. We set the appserver spawner MaxPort and MinPort to a different port range, which fixed that part. It also turned out that the reason jobs wouldn't finish and were unable to start was database related. I had the DBAs investigate, which led to them recycling the databases; that cleared the queues and job functionality resumed. Resources may have been undersized, so we are waiting on a recommendation from our DBAs on how to prevent future occurrences.
Sherif L. Shenouda
Thank you for the update. Is your database Oracle or SQLServer?
If it's Oracle, please make sure that your DBA is running the BMC/Oracle gather stats procedure on a weekly basis; this will help with performance.
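I can't quote the BMC-supplied procedure here, but a generic weekly schema stats gather looks something like the following (the BLADELOGIC schema owner is an assumption; your DBA should use the actual BMC procedure and your real schema name):
-- illustrative only; run as a DBA against the BladeLogic schema owner
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'BLADELOGIC', cascade => TRUE);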