While I'm not sure what a 'revision' is (I assume it's something unique to the v11 CMDB), I would agree that history can get out of hand. We're already seeing it in some cases in version 12, both at the per-record level and overall in the system.
If you've ever worked with an SDE customer, you know that 9 times out of 10 the largest table in their database was the attachment table. What I've been seeing in v12 is that the audit tables are the largest. And unlike attachments in SDE, which were a "few" records of large size, the audit tables are hundreds of thousands of rows, and that's on systems less than a year old.
We are exploring options - part of the data purge initiative we have in our FACT tool - to drop audits after X timeframe, to keep the clutter down.
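The age-based purge described above boils down to deleting audit rows older than a cutoff. Here's a minimal sketch of that logic; the row shape and the `purge_old_audits` name are my own illustration, not the actual FACT tool implementation:

```python
from datetime import datetime, timedelta

def purge_old_audits(rows, max_age_days):
    """Keep only audit rows newer than the cutoff.

    rows: list of (timestamp, payload) tuples -- a stand-in for audit
    table records. In a real database this would be a DELETE with a
    WHERE clause on the audit timestamp column instead.
    """
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return [r for r in rows if r[0] >= cutoff]
```

In SQL terms this is just `DELETE FROM audit WHERE created < :cutoff`, run on whatever schedule the purge initiative settles on.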
A revision is like a snapshot of a CI that is retained forever. Maybe "version" would have been a better term to use. Every time you or an import edits a CI, its current state is saved as a Revision before the changes are committed. So, if you edit a CI 20 times, it'll be at revision 20. From the Edit CI page, on the history tab, all the previous revisions are listed, and you can actually click a button next to any one of them to "Revert to this revision", restoring the CI to the state it was in at the time of the revision. The revisions are stored in tables just like the regular CIs but with a "_GENERATIONS" suffix on the table names.
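The snapshot-before-commit behavior described above can be sketched in a few lines. This is only an illustration of the mechanism (the `CI` class and method names are mine, not the product's API); the real system writes the snapshot to the `_GENERATIONS` table rather than keeping it in memory:

```python
import copy

class CI:
    """Toy configuration item that snapshots its state before each edit."""

    def __init__(self, **fields):
        self.fields = dict(fields)
        self.revisions = []  # prior states, oldest first (like _GENERATIONS rows)

    def update(self, **changes):
        # Save the current state as a new Revision, then commit the changes.
        self.revisions.append(copy.deepcopy(self.fields))
        self.fields.update(changes)

    def revert_to(self, n):
        # "Revert to this revision": restore the state saved as revision n (1-based).
        self.fields = copy.deepcopy(self.revisions[n - 1])
```

After 20 calls to `update`, `len(ci.revisions)` is 20, matching the "it'll be at revision 20" behavior described above.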
For example, your Phones might be saved in CMDB1_CI_31. The past revisions of a Phone are therefore stored in CMDB1_CI_31_GENERATIONS.
I wonder what the impact is of just having a huge database, as would be the case with my CMDB imports or with version 12 due to its auditing. I'm no expert, but I've read opinions saying that the number of rows, amount of data per row, number of tables, etc. don't noticeably impact the database server. Those statements were made under the assumption that the tables have an efficient index design.
If anyone's interested, I scripted something to simulate a couple thousand updates to a particular CI. It changes 2 field values and saves the changes, repeating every 5 seconds. Each change produces a new Revision and, of course, adds to the history data you see on the CI details page.
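The shape of that stress script is roughly the loop below. This is a self-contained sketch, not the original script (which was presumably Perl against the product's API): the in-memory `history` list stands in for the Revision written on each save, and `simulate_updates` is a name I made up:

```python
import time

def simulate_updates(n_updates, delay_s=0):
    """Toggle two fields and 'save' repeatedly.

    Each iteration appends a snapshot to history, mimicking the one
    Revision per edit the product creates. Pass delay_s=5 to match
    the original 5-second pacing.
    """
    ci = {"field_a": None, "field_b": None}
    history = []
    for i in range(n_updates):
        ci["field_a"] = f"value-{i % 2}"  # alternate between two values
        ci["field_b"] = i
        history.append(dict(ci))          # stand-in for saving a Revision
        if delay_s:
            time.sleep(delay_s)
    return history
```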
Once my CI had been updated 2000 times, I loaded its details view and the page came up in about 17 seconds. Not good, but not horrible either.
I used the Perl profiler and found that the bulk of the time (14s) was spent playing with timestamps on the history records, getting them into the same timezone, for instance. They all actually are in the same time zone already, so I removed the timezone handling from the code (on a test server) and got the page to load in 3 seconds.
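A less drastic fix than ripping the timezone code out entirely is to short-circuit the conversion when a timestamp is already in the display zone, so nothing breaks if mixed zones ever do show up. A sketch of that idea in Python (the original code is Perl; `to_display_tz` is my own name for the helper):

```python
from datetime import datetime, timezone, timedelta

def to_display_tz(ts, display_tz):
    """Convert ts to display_tz, skipping the (expensive) conversion
    when the timestamp already carries the same UTC offset."""
    if ts.tzinfo is not None and ts.utcoffset() == display_tz.utcoffset(ts):
        return ts  # already in the display zone: no work to do
    return ts.astimezone(display_tz)
```

With all history records in one zone, the fast path is taken every time, which is effectively what removing the timezone code achieved.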