I don't know enough about SQL to advise on the best approach, but I can tell you that the FTS Reindex option is not relevant. The FTS indexes are independent of the database: FTS is a separate process, and its indexes are stored in local files outside the database.
If you're concerned about performance, try to identify a particular activity that highlights the issue - submitting a new incident, for example - and get the server SQL/API logs for some examples. Take a look and see if there's any long-running SQL that might be the cause. Perhaps you could add some new indexes to help with specific queries, or perhaps some of your tables now have very large numbers of records and some sort of archiving/retention policy could help.
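If it helps, here is a rough way to pull the most expensive statements straight from the plan cache on the database side (sys.dm_exec_query_stats and sys.dm_exec_sql_text are standard SQL Server DMVs; the TOP 10 / elapsed-time ordering is just one reasonable choice):

```sql
-- Top 10 statements by total elapsed time, from the plan cache.
-- Requires VIEW SERVER STATE permission.
SELECT TOP 10
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

Note the cache only covers statements whose plans are still in memory, so it complements (rather than replaces) the server SQL/API logs.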
Thanks for the response Mark. So I can rule out the FTS Re-index as a solution to the index fragmentation. I assume the FTS re-index is still something that needs to be done from time to time - any idea what sort of timescale?
I don't have a particular performance issue in any one area, but more of a general feeling that it is running slower than it used to. That, and the fact that we have very few maintenance/clean-up procedures in place, has made me start to do a bit of digging.
I have identified a few tables with large numbers of records that I will need to address:
1.5 million - SLM:AuditLog
1.3 million - NTE:Notifier Log
1.3 million - SLM:MilestoneLogging
1.1 million - HPD:HelpDesk_AuditLogSystem
But the very high fragmentation shown up by the monitoring has me concerned. Now I am sure I can find a SQL command to reindex, but I was hoping to find out how others keep on top of this sort of thing.
Non Clustered Index Fragmentation
T1597 (SRM:Request) I1597_2_1 99.6%
T461 (BMC.CORE:BMC_ComputerSystem_) I461_179_1 99.5%
T940 (CFG:Geography City) I940_1000000002_1 99.4%
T438 (BMC.CORE:BMC_Product_) I438_179_1 99.4%
T2134 (HPD:Help Desk) I2134_1000000080_1 99.4%
T423 (BMC.CORE:BMC_BaseRelationship) I423_400131000_1 99.3%
Clustered Index Fragmentation
field_column PK__field_co__CE94C40060A75C0F 98.5%
actlink_mapping PK__actlink___59AC8DD0541C5DCC 98.1%
image PK__image__336E9B5531B762FC 98.0%
T286 (SIT:LoadSiteCompanyAssoc) PK__T286__32149A136B5AC003 98.0%
filter_mapping PK__filter_m__1B42391F7A4206B4 97.2%
join_mapping PK__join_map__CE94C4006C190EBB 97.2%
T940 (CFG:Geography City) PK__T940__32149A1354EEBDFC 97.1%
T1389 (DLD:SYS:DataWizardStatus) PK__T1389__32149A135A7E4577 96.9%
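For anyone wanting to reproduce that kind of report, the fragmentation figures can be pulled from the standard DMV sys.dm_db_index_physical_stats; the 30% and 1000-page filters below are common rules of thumb, not anything BMC-specific:

```sql
-- Fragmentation per index for the current database.
-- 'LIMITED' is the cheapest scan level.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000   -- ignore tiny indexes, where fragmentation is meaningless
ORDER BY ips.avg_fragmentation_in_percent DESC;
```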
Yes, your performance issue is nothing to do with FTS. FTS runs on a separate plugin thread and helps users perform faster searches. The FTS indexes are stored on the AR file system in the FTS Collection directory.
I recommend that once a DBA looks at the fragmentation, you look into the option of archiving the data. In addition to that, I would recommend you think about archiving old records from forms like NTE:Notifier, AR System Email Messages, etc...
Hope this helps.
I now understand that FTS is separate and archiving / clean up of tables is on the action list.
Reading some historic forum posts on index fragmentation, some people think it is not an issue, while others say they do a reindex weekly or monthly. The BMC docs say "Indexes are automatically maintained for all the tables created by BMC Remedy AR System". The SQL Server default reports recommend reindexing. Many online forums recommend reindexing if fragmentation is over 30%, and I am seeing many indexes at 90+%.
I don't have access to a DBA at the minute, I am sure many AR admins don't have that luxury all the time.
So I am hoping to find out what others do when they don't have access to a decent DBA. Do they perform any reindexing or just leave it alone? Do you run any reindexing jobs in your environment?
Thanks again, all advice welcome.
I talked about this at Engage 2015
It's weird to have an index fragmentation at 99%...
Usually there is a job in SQL Server (a maintenance plan) that is in charge of reindexing/rebuilding an index after "n%" of the data in the SQL table has been modified (added, deleted, updated).
It is not perfect, but it's already something. So in your case it seems the job has been disabled (?), or you have such a high volume of data that the "n% before maintenance plan execution" threshold is not reached very often. Let's say you have 2,000,000 incidents. If "n" is set to 10%, that would mean the statistics would only be updated after 200,000 changed incidents... It could be a problem in the long run.
Though 1.x million does not sound a lot...
Try to get the "last statistics date" for the indexes you mentioned, because with this % of fragmentation I fear your statistics are way out of date (and that will have a huge impact on performance as well). Something like this SQL query:
SELECT Name AS IndexName, STATS_DATE(object_id, index_id) AS StatsLastUpdated
FROM sys.indexes
WHERE Name LIKE 'I2471_%';
GO
You will have something like this. It's very rough, here you can see the last time the statistics were calculated for the T2471 which is "HPD:Help Desk" on my test VM:
Anyway, 99% is a lot... At this %, Microsoft usually advises rebuilding the indexes because there is a "chance" a REORGANIZE will not be enough...
Rule is usually something like:
- If fragmentation < 30%, do a REORGANIZE of the index AND update the statistics (REORGANIZE does not update statistics),
- If fragmentation > 30%, do a REBUILD of the index. It should also update the statistics at the same time (verify it).
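As a sketch only (test on a non-production copy first; the table and index names are generated from the DMVs, nothing here is hard-coded to ARS), that rule can be scripted as a loop that picks REORGANIZE or REBUILD per index:

```sql
-- Reorganize or rebuild each fragmented index based on the 30% threshold.
DECLARE @sql nvarchar(max);
DECLARE cur CURSOR FOR
    SELECT CASE WHEN ips.avg_fragmentation_in_percent >= 30
                THEN N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
                     + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + N'.'
                     + QUOTENAME(OBJECT_NAME(ips.object_id)) + N' REBUILD;'
                ELSE N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
                     + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + N'.'
                     + QUOTENAME(OBJECT_NAME(ips.object_id)) + N' REORGANIZE; '
                     + N'UPDATE STATISTICS '
                     + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + N'.'
                     + QUOTENAME(OBJECT_NAME(ips.object_id)) + N';'
           END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5
      AND ips.page_count > 1000    -- skip tiny indexes
      AND i.name IS NOT NULL;      -- skip heaps
OPEN cur;
FETCH NEXT FROM cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM cur INTO @sql;
END
CLOSE cur;
DEALLOCATE cur;
```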
The problem is that, depending on your SQL Server edition (Standard or Enterprise), the REBUILD will lock the resource, and it could take a while.
Usually what I say is to update the statistics once a day and rebuild the indexes once a week.
At this step, most of the time the DBA goes ballistic, but it is useful in real life and I have seen the positive impact at several customers...
Some tables "move" a lot - "SLM:Measurement", for example - and I have seen cases where SELECT queries did a "full scan" on a 10-million-entry "SLM:Measurement" table even with an index the DBA thought was enough.
Even if you set the "right" index (or you think an existing index would do the trick), SQL Server will decide whether it's best to use it or not, and sometimes you have to see exactly what it does to "force" it to use a particular index, "for its own good".
Updating statistics regularly ensures you will have better performance.
If you cannot update statistics on all tables every day (in the morning, for example), focus on the SQL tables that are used a lot.
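One cheap way to do the daily statistics pass, assuming the built-in procedure fits your needs: sp_updatestats walks every table but only updates statistics that have seen row modifications since the last update, so repeated runs are usually light enough to schedule nightly:

```sql
-- Nightly statistics refresh for the whole database.
-- sp_updatestats skips statistics with no modifications since the last update;
-- a per-table UPDATE STATISTICS ... WITH FULLSCAN is heavier but more accurate.
EXEC sp_updatestats;
```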
As for adding indexes, it's usually the DBA's job to do this, since it's pretty easy to capture queries that consume resources.
You can capture API and SQL logs and use PyARLogAnalyzer to do the same kind of work ARS side.
Then you have to decide whether adding an index would benefit performance enough compared to the drawbacks (inserts a bit slower, space needed on the hard drive, etc.).
That's why a DBA is (or should be) the ARS Admin's best friend...
So, no, ARS will not take care of your database for you. ARS will give you the tools to "monitor" the most consuming queries (API/SQL) depending on your version (here in V9):
Thank you so much for your detailed reply. Unfortunately I couldn't make Engage this year (travel cost from Ireland is just too high) so I missed out - maybe next year they will start to broadcast the sessions.
We do not have high volumes (HPD:Help Desk has 175,000 rows) so that should not be the cause of the job not running.
I used your query to get the statistics from the HPD:Help Desk table and they are not pretty, as you can see from the screenshot. You should also be able to see that the fragmentation of this index is at 99.31% and the page fullness is only 66.87%.
Unfortunately I don't have access to a DBA resource so I will have to have a look to see if I can find the maintenance job you speak of, to see if it is disabled.
It looks like we need to get the basics in place first before we worry about additional indexes. There is no doubt our DB has been neglected, I had blindly hoped that ARS would look after its own DB.
I will have to try and figure out how to set up SQL Server (2008 R2 Enterprise) to run the statistics and reindex jobs manually, and then set up a schedule to keep on top of it.
Any and all advice welcome.
Yes, strange - so it looks like the maintenance jobs are deactivated.
I know that sometimes DBAs deactivate the OOTB jobs, create their own, and schedule them. Maybe something is wrong with the schedule (?).
If you have the Enterprise edition, the REBUILD can be set "ONLINE", so without locking (or so it is advertised by Microsoft), but I would not advise you to run a REBUILD during production hours (or while interfaces are running, etc.) anyway... Depending on volumes, it might take a while...
The commands I have are these; I guess you would have to do a script or something.
Rebuild for a SQL table (T468) and with "ONLINE" used:
ALTER INDEX ALL ON dbo.T468 REBUILD WITH (ONLINE = ON)
If you want to try the reorganize:
ALTER INDEX ALL ON dbo.T468 REORGANIZE
And then the restat:
UPDATE STATISTICS dbo.T468 WITH FULLSCAN;
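If you want to schedule those commands rather than run them by hand, a SQL Server Agent job can wrap them. A minimal sketch, assuming the Agent service is running and using 'ARSystem' as a placeholder database name (adjust the job name, step command, and schedule to your environment):

```sql
-- Create a weekly Agent job that rebuilds the indexes on one table.
-- All names here are examples, not ARS conventions.
USE msdb;
EXEC sp_add_job @job_name = N'ARSystem weekly index rebuild';
EXEC sp_add_jobstep
    @job_name      = N'ARSystem weekly index rebuild',
    @step_name     = N'Rebuild T468 indexes',
    @subsystem     = N'TSQL',
    @database_name = N'ARSystem',
    @command       = N'ALTER INDEX ALL ON dbo.T468 REBUILD WITH (ONLINE = ON);';
EXEC sp_add_schedule
    @schedule_name          = N'Sunday 2am',
    @freq_type              = 8,      -- weekly
    @freq_interval          = 1,      -- Sunday
    @freq_recurrence_factor = 1,      -- every week
    @active_start_time      = 20000;  -- 02:00:00
EXEC sp_attach_schedule
    @job_name      = N'ARSystem weekly index rebuild',
    @schedule_name = N'Sunday 2am';
EXEC sp_add_jobserver @job_name = N'ARSystem weekly index rebuild';
```

A nightly job running the statistics update would follow the same pattern with a daily schedule (@freq_type = 4).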
The good thing with SQL Server Management Studio is that you can do the same with a right click, I guess.
If you want to see the difference in a query, activate the "monitoring" from SQL Server; it'll give you some information (CPU usage, etc.) and you can also see the "most expensive queries" (ms/CPU is a good way to see if a request eats your SQL Server CPU):
Try to save some "expensive / long queries" and their execution plan.
Then after your first reorganize/restat (or rebuild), try to run the same queries (SELECT, obviously) and see if they consume fewer resources (it should be the case).
My suggestion for the list is when you run the monitor tool from SQL Server, also capture API+SQL logs and run them through PyArLogAnalyzer. This way you will have the best of two worlds:
- From SQL Server: query, ms/CPU and read amounts, execution plan,
- From PY: query, time spent, and the full query,
Thanks Laurent, that is exactly what I needed and gives me a great start.
I am just finishing up for the holidays here but I had a quick look at our test environment and a rebuild of the indexes on HPD:Help Desk only took a few seconds. I did not get a chance to record before and after stats but once I come back in Jan I will be able to compare before and after.
Thanks again and enjoy the holidays.
Enjoy your holidays ^_^