Last week, we discussed moving from VSAM to OSAM when IMS databases near their size limit. This week, we'll look at putting the data on a diet - or in a girdle - with compression.
Compression reduces cost by requiring less DASD to store data and by lightening the I/O processing load. If you need to improve online application performance, maintain high data availability, and reduce batch application elapsed times, software compression may be a viable option for you. Compression provides these benefits:
Better buffer hit ratios for online transactions
Smaller logs, which have a ripple effect: smaller logs lead to smaller log volume, fewer log switches, fewer archive executions, and shorter change accumulation run times
Dramatically lower costs for image copy space and storage. Recoveries can also be completed faster because compressed image copies contain less data to read and apply.
Improved utility maintenance times. For example, if it takes 60 minutes to reorganize an uncompressed database, achieving a compression percentage of 50% can potentially cut the reorganization to 30 minutes.
Lower I/O costs. With compression, segments and data within buffers are smaller. Because the data being read into the buffer is compressed, you can allocate smaller buffers or a smaller number of buffers.
Non-displayable data. Compressed data is non-displayable, which may meet some security standards.
Improved online IMS application response time through efficient use of IMS buffers and virtual storage
Reduced segment splits and related I/O
Efficient free space utilization
Reduced elapsed time and I/O activity for sequential batch IMS applications
When implementing compression, choose the technique that yields the best compression percentage for your data. Several vendor compression tools are available, and their algorithms will compress the same data to different degrees.
Test your data to determine which technique works best. For example, the Huffman algorithm tends to provide the highest compression percentage for short, hierarchical data. On the other hand, the Lempel-Ziv algorithm, which is used for hardware compression, works best with relational data and long segments. The software you choose will offer many compression options, and you can see which one works best in your environment.
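The testing step above can be sketched in a few lines. This is a minimal illustration, not a vendor IMS tool: it uses Python's standard-library codecs (zlib is LZ77 plus Huffman coding, bz2 and lzma are other general-purpose schemes) as stand-ins for the algorithms a compression product would offer, and the sample record is invented. The point is simply to compare compression percentages on a representative sample of your own data.

```python
import bz2
import lzma
import zlib

# Hypothetical sample: a short, repetitive record standing in for IMS segment data.
segment = (b"CUSTNAME  JOHN Q PUBLIC     ADDR 123 MAIN ST        "
           b"CITY SPRINGFIELD  STATE IL  ZIP 62704               ") * 100

# Standard-library codecs used as stand-ins for vendor compression algorithms.
algorithms = {
    "zlib (LZ77 + Huffman)": zlib.compress,
    "bz2  (Burrows-Wheeler)": bz2.compress,
    "lzma (LZMA)": lzma.compress,
}

for name, compress in algorithms.items():
    compressed = compress(segment)
    pct = 100 * (1 - len(compressed) / len(segment))
    print(f"{name}: {len(segment)} -> {len(compressed)} bytes "
          f"({pct:.1f}% compression)")
```

Running the same comparison against an extract of real segment data would show which algorithm family suits your databases before you commit to a product.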
Hardware vs. software compression
For IMS, native hardware compression is not directly accessible. IBM delivers hardware compression through a software compression product, so even if you choose hardware compression, you must implement software that lets IMS communicate with the hardware.
Costs and benefits
Compression provides many benefits, but it also has some costs, including the price of the compression package. (IBM provides some basic compression software at no additional charge.) Compressed data will also lead to higher CPU costs because data must be expanded and compressed whenever an application needs it. Before implementing compression, balance the cost of the software and CPU processing costs with the need for larger databases.
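The CPU-versus-space trade-off described above can be measured the same way you would benchmark any codec. The sketch below is illustrative only, again using standard-library zlib (with its fast-to-thorough compression levels) rather than an IMS product, and an invented sample workload: it times the compress-plus-expand cycle an application would pay on every access and reports the space saved.

```python
import time
import zlib

# Hypothetical workload: ~1 MB of semi-repetitive data standing in for database segments.
data = (b"ORDER 00012345 PART A-7731 QTY 0004 STATUS SHIPPED   ") * 20000

for level in (1, 6, 9):          # zlib levels range from fast (1) to thorough (9)
    start = time.process_time()
    compressed = zlib.compress(data, level)
    zlib.decompress(compressed)  # applications pay the expansion cost on every read
    cpu = time.process_time() - start
    pct = 100 * (1 - len(compressed) / len(data))
    print(f"level {level}: {pct:.1f}% compression, {cpu * 1000:.2f} ms CPU")
```

Comparing the CPU milliseconds against the space saved, at your actual transaction volumes, is what turns the cost/benefit discussion into numbers.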
Compressed data is worthless unless it can reliably be expanded again. Vendors accept responsibility for guaranteeing that compressed data can be expanded; check with your vendors to see what data integrity checks they provide.
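One common form such an integrity check can take is storing a digest of the original record alongside the compressed payload, so expansion can be verified byte for byte. The functions below are a hypothetical sketch of that idea using zlib and SHA-256, not any vendor's actual mechanism.

```python
import hashlib
import zlib

def compress_with_check(record: bytes) -> bytes:
    """Compress a record, prefixing a digest of the original so that
    expansion can later be verified byte for byte."""
    digest = hashlib.sha256(record).digest()   # 32-byte fingerprint of the original
    return digest + zlib.compress(record)

def expand_with_check(blob: bytes) -> bytes:
    """Expand a record and confirm it matches the stored digest."""
    digest, payload = blob[:32], blob[32:]
    record = zlib.decompress(payload)
    if hashlib.sha256(record).digest() != digest:
        raise ValueError("expanded record does not match the original")
    return record
```

A check like this costs a little extra CPU and 32 bytes per record, but it turns "the vendor guarantees expansion" into something you can verify in your own environment.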
If compression does not resolve your database space issues, consider converting the database to a partitioned format or to Fast Path. We will look at these options in the coming weeks.
The postings in this blog are my own and don't necessarily represent BMC's opinion or position.