Take a look at the admin manual (page 108).
In regard to your query: older result data for which you no longer need the full level of detail can be aggregated, saving space in the repository.
The steps to reduce the repository size on the database server are described on page 104 of the TM ART 4.1 Administration Guide.
It is strongly recommended that you delete old data from your BMC Central database regularly so that the database does not grow unchecked. However, the Data Delete process consumes database resources and may run for several hours on large databases. If you want to avoid performance issues during the week, schedule the Data Delete process to run at the weekend. If run daily, the workload generated by a Data Delete can outweigh any performance gain: if the job runs for 10 hours, you get 10 slow hours in exchange for 14 slightly faster hours (if that) of normal operation in a single day. For this reason, we recommend scheduling the process to run weekly.
Another factor worth considering is that the Data Delete process is designed only to free up disk space: it deletes the result data for which you no longer need the full level of detail and keeps aggregated results instead, saving space in the repository. It is not designed to speed up the database. To achieve that, you need to defragment the database and then rebuild the indexes, because BMC TM ART Central makes heavy use of indexes on the tables affected by the Data Delete job.
Continuous writing and deleting of data fragments the database indexes, which degrades performance if the database is not regularly maintained.
This fragmentation is responsible for the majority of performance issues and must be addressed through database maintenance carried out by your DBA.
The following database maintenance tasks are essential for improving database performance and GUI response times:
1. Rebuild indexes on SV_TimeSeriesData table
2. Rebuild statistics on entire database
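To make these two tasks concrete, here is a minimal T-SQL sketch, assuming the repository runs on Microsoft SQL Server (the table name SV_TimeSeriesData comes from this thread; the dbo schema is an assumption, so adjust for your installation):

```sql
-- 1. Rebuild all indexes on the SV_TimeSeriesData table
--    (schema name "dbo" is an assumption; verify in your repository).
ALTER INDEX ALL ON dbo.SV_TimeSeriesData REBUILD;

-- 2. Refresh optimizer statistics for the entire database.
EXEC sp_updatestats;
```

Run this during a maintenance window, as an index rebuild on a large time-series table can lock the table for a significant time. Your DBA may prefer to wrap these steps in a scheduled maintenance plan instead of running them ad hoc.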
I hope this information is helpful to you.
We are facing similar issues with another drive in TM ART Server.
Could you suggest any steps to archive the logs to another drive?
Thanks in advance,
For example, for the application server you have to modify the SccAppServerBootConf.xml file and check the <Archive> XML tag.
There you can choose to archive all log files and even compress them.
You can also change the log path directly with the <ApplicationData> XML tag.
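As an illustration, a fragment of SccAppServerBootConf.xml might look like the sketch below. Only the <ApplicationData> and <Archive> tags are taken from this thread; the drive path is a placeholder, and the exact child elements of <Archive> depend on your TM ART version, so check the file shipped with your installation before editing:

```xml
<!-- Sketch only: the path below is a placeholder for your larger drive. -->
<ApplicationData>E:\TMART\AppData</ApplicationData>

<Archive>
  <!-- Enable archiving (and optional compression) of rotated log
       files here; see the file installed on your server for the
       exact child elements your version supports. -->
</Archive>
```

Remember to restart the application server service after changing the file so the new paths take effect, and keep a backup copy of the original configuration.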