Robert's Blog


Tuesday, December 29, 2009

Clearing the Air Re: Indexes on DB2 for z/OS Partitioned Tables

With the end of the year in sight, it's a good time to tie up loose ends, as we say here in the USA. Thus it is that I've decided to focus, in this last post to my blog in 2009, on indexes as they pertain to DB2 for z/OS partitioned tables. That subject qualifies as a "loose end," because several years after the introduction of table-controlled partitioning with DB2 for z/OS V8, some folks are still not certain as to what can and cannot be done with indexes defined on partitioned tables. I'll try, in this entry, to clear things up.

When a table is partitioned by way of an index specification (the only way to partition a table prior to DB2 V8), index options for the table are pretty straightforward. The index that describes the partitioning scheme (i.e., the one with the PART integer VALUES (constant) clause in the CREATE INDEX statement) is called the partitioning index. No other index on that table is called a partitioning index, and no other index on the table is physically partitioned. Any index on a unique key can be defined as UNIQUE. Only the partitioning index can be the table's clustering index (in fact, it must be -- the partitioning index is defined with the CLUSTER keyword).
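
For reference, here's a minimal sketch of the index-controlled arrangement (the names, data types, and limit-key values are hypothetical, and the table space is assumed to have been created with NUMPARTS 3):

   -- Pre-V8 style: the partitioning index, with its CLUSTER and
   -- PART n VALUES clauses, assigns key ranges to the table space's
   -- partitions.
   CREATE TABLE SALES_HIST
     (SALE_DATE DATE          NOT NULL,
      ACCT_ID   INTEGER       NOT NULL,
      AMOUNT    DECIMAL(11,2) NOT NULL)
     IN SALESDB.SALESTS;

   CREATE INDEX SALES_PIX ON SALES_HIST (SALE_DATE)
     CLUSTER
     (PART 1 VALUES ('2007-12-31'),
      PART 2 VALUES ('2008-12-31'),
      PART 3 VALUES ('2009-12-31'));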

Starting with DB2 V8, table partitioning can be controlled by way of a table's definition (through the PARTITION BY and PARTITION integer ENDING AT (constant) clauses of CREATE TABLE). Table-controlled partitioning (enhanced in DB2 V9 via the partition-by-range universal tablespace) is way better than index-controlled partitioning, but this important DB2 advancement did change -- considerably -- the landscape as far as index options are concerned. First and foremost: for a table-controlled partitioned table, any index on a key that starts with the table's partition-by column or columns is called a partitioning index ("starts with" means that the columns of a multi-column partitioning key appear in the order specified in the CREATE TABLE statement). So, if a table's partitioning key is COL_X, COL_Y, then an index on COL_X, COL_Y, COL_A is a partitioning index, and so is an index on COL_X, COL_Y, COL_B (an index on COL_Y, COL_X, COL_D would not be a partitioning index, because the order of the partition-by columns does not match the order specified in the table's definition). Among the implications of this rule: a table-controlled partitioned table may have several partitioning indexes, or none at all. Furthermore, a partitioning index may or may not be physically partitioned. Continuing with the example of the table partitioned on COL_X, COL_Y: an index on COL_X, COL_Y, COL_D that does not have the PARTITIONED clause in its definition is partitioning (because its key begins with the table's partitioning key) but not partitioned (because it was not defined with the PARTITIONED clause). Possible, of course, doesn't necessarily mean advisable -- I don't see why you would have a partitioning index that is not also partitioned.

Next: for a table-controlled partitioned table, a secondary index (i.e., one that is not a partitioning index) can itself be partitioned -- you just have to specify PARTITIONED in the definition of the index (this will cause the index to be physically partitioned along the lines of the underlying table, so that partition 1 of the secondary index will contain the keys for rows in partition 1 of the table). A secondary index that is partitioned is called a data-partitioned secondary index, or DPSI (an index that is not partitioned is called a non-partitioned index, or NPI).
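
Continuing with the hypothetical MY_TAB table sketched above:

   -- A DPSI: the key does not begin with COL_X, COL_Y, and the PARTITIONED
   -- clause partitions the index along the lines of the table.
   CREATE INDEX IX_DPSI ON MY_TAB (COL_D) PARTITIONED;

   -- An NPI: a secondary index defined without the PARTITIONED clause.
   CREATE INDEX IX_NPI ON MY_TAB (COL_B);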

Third: there is a restriction on DPSIs with regard to uniqueness (a restriction that was loosened somewhat with DB2 V9). In a DB2 for z/OS V8 environment, no DPSI can be defined as unique (and remember: a DPSI is a partitioned index that is not a partitioning index -- a partitioning index can be unique). If you try to create a secondary index with both UNIQUE and PARTITIONED in the index definition, you'll get a -628 SQLCODE (and an accompanying error message indicating that "clauses are mutually exclusive"). In a DB2 V9 environment (and this was recently pointed out by DB2 consultant Peter Backlund in a thread on the DB2-L discussion forum), a DPSI can be defined as UNIQUE if the index key contains the table's partition-by column or columns. Once again, consider the table partitioned on COL_X, COL_Y. A DPSI defined on COL_A, COL_Y, COL_B, COL_X could have the UNIQUE attribute because the index key contains all of the table's partition-by columns (note that the partition-by columns do not have to be in any particular order within the DPSI's key -- they just have to be present within the key). Can there be multiple unique DPSIs defined on a DB2 V9 table-controlled partitioned table? Yes -- again, what's required is that the underlying table's partition-by columns be included in the key of a DPSI that is to be defined as UNIQUE.
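
Again in terms of the hypothetical MY_TAB, a sketch of the behavior described above:

   -- Rejected: UNIQUE plus PARTITIONED with a key that lacks the table's
   -- partition-by columns (in V8, this is the -628 case; it is not allowed
   -- in V9, either, because COL_X and COL_Y are missing from the key).
   CREATE UNIQUE INDEX IX_NOPE ON MY_TAB (COL_A, COL_B) PARTITIONED;

   -- OK in V9: a unique DPSI, because the key contains both COL_X and
   -- COL_Y (in any order, anywhere in the key).
   CREATE UNIQUE INDEX IX_UDPSI ON MY_TAB (COL_A, COL_Y, COL_B, COL_X) PARTITIONED;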

Finally: with regard to the CLUSTER attribute, you have flexibility with a table-controlled partitioned table that you don't have with an index-controlled partitioned table. For an index-controlled partitioned table, the partitioning index will be the table's clustering index. For a table-controlled partitioned table, any one of the table's indexes can be the clustering index (and of course, a table can have only one index with the CLUSTER attribute). The clustering index could be a partitioning index or a secondary index (whether a DPSI or an NPI). The ability to cluster a table by one key and partition it by another is, in my opinion, one of the most significant advantages of table-controlled partitioning over index-controlled partitioning (other pluses include the ability to add partitions to a table-controlled partitioned table, and the ability to rotate partitions in a "first to last" manner).
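
A one-statement illustration, once more with the hypothetical MY_TAB:

   -- Cluster by one key, partition by another: rows within each partition
   -- (the table is partitioned on COL_X, COL_Y) are sequenced by COL_A.
   CREATE INDEX IX_CLUST ON MY_TAB (COL_A) CLUSTER;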

Is all that clear? I hope so. Table-controlled partitioning is a VERY good thing -- well worth the effort of getting your arms around the new rules regarding indexes on table-controlled partitioned tables.

Throughout my 27 years in IT, I've enjoyed the constancy of opportunities to learn new things. I look forward to more of the same in 2010. Have fun ringing in the new year!

Thursday, December 10, 2009

DB2 for z/OS and the Disk Subsystem

Earlier this week, we in the Atlanta DB2 Users Group were treated to a day with Roger Miller, the ebullient IBMer who, more than anyone else, has been the "face" of the DB2 for z/OS development organization since the product debuted more than 25 years ago (Roger spends a great deal of time in front of DB2 users -- at customer sites, at the briefing center at IBM's Silicon Valley Lab, and at user group meetings). Roger provided us with six hours of DB2 information, laced with a little bit of Shakespeare (appropriate, as Roger's presentation style certainly brings to mind the line, "All the world's a stage," from Shakespeare's As You Like It). Some of the material that I found to be particularly interesting had to do with advances in disk I/O technology over the years, and the effect of those enhancements on DB2-related I/O performance. In this post, I'll share some of that information with you, and I'll include a few of my own observations on DB2 and the disk subsystem, based on my work with DB2 for z/OS-using organizations.

Faster and faster. Roger talked about the major gains seen in mainframe processor performance over the past 10 years (the "clock rate" of the engines in IBM's top-of-the line z10 mainframe server is 4.4 GHz, versus 550 MHz for the G6 line in 1999), and he went on to point out that improvements in disk I/O performance have been just as dramatic. As I noted in a blog entry on DB2 I/O performance that I posted last year, a target of 20-30 milliseconds of wait time per synchronous read (i.e., for an on-demand read of a single 4K DB2 page into memory from the disk subsystem) was pretty much the norm well into the 1990s (this figure is usually obtained from a DB2 monitor accounting report or an online display of DB2 accounting data). Nowadays, 2 milliseconds of wait time per synchronous read is fairly common. There are a number of factors behind this order-of-magnitude improvement in disk I/O performance, none more important than the large increase in disk controller cache memory sizes since the mid-90s (32 MB of cache once seemed like a lot in the old days -- now you can get more than 300 GB of cache on a control unit), and the development of sophisticated algorithms to optimize the effective use of the cache resource (enterprise-class disk controllers run these algorithms on multiple high-performance microprocessors).

The cache impact on disk I/O performance was underscored by some numbers that Roger shared with our group: while the time required to retrieve a DB2 page from spinning disk will generally be between 4 and 8 milliseconds (a big improvement versus 1990s-era disk subsystems), a disk controller cache hit results in a wait time of 230 to 290 microseconds for a synchronous read (the low end of this range is for a system with the z10 High-Performance FICON I/O architecture, also known as zHPF).

For DB2 prefetch reads, the gains are even more impressive. A decade or so ago, reading 64 4K pages into a DB2 buffer pool in 90 milliseconds was thought to be good performance. These days, it's possible to get 64 pages into memory in 1.5 milliseconds -- a 60X improvement over the old standard.

Looking beyond cache, a fairly recent arrival on the enterprise storage scene is solid-state disk (SSD) technology. Actually, SSD itself isn't all that new -- devices of this type have been in use for about 30 years. What's new is the cost-competitiveness of enterprise-class SSD systems -- still several times more expensive than a traditional spinning-disk system, on a cost-per-gigabyte basis, but close enough now to warrant serious consideration for certain performance-critical database objects that tend to be accessed in a random fashion (as opposed to a sequential processing pattern). Random access is the SSD sweet spot because the technology eliminates the seek time that elongates random I/O service times when pages are read from spinning disk; thus, the wait time for a DB2 synchronous read from an SSD device will likely be about 740-840 microseconds, versus the aforementioned 4-8 milliseconds for a read from spinning disk.

64-bit addressing means that DB2 buffer pools can be way bigger than before, and that's good for performance because no I/O operation -- even one that results in a cache hit -- will come close to the speed with which a page in memory can be accessed. That said, a large-scale DB2 application accessing a multi-terabyte database is likely to drive a lot of I/O activity, even if you have a really big buffer pool configuration. Advanced disk storage and I/O subsystem technology make those page reads much less of a drag on application performance than they otherwise would be.

Disk space utilization: don't overdo it. High-performance storage systems cost plenty of money, and people who purchase the devices don't want to have a lot of unused disk capacity on the floor. Some, however, push the target space-utilization threshold too far. See, a storage system isn't just about application performance -- it's also about application availability. Fill your disk volumes too full, and you're asking for DB2 data set space allocation failures (and associated application outages). How full is too full? I can tell you that in my own experience, DB2 data set space allocation failures tend to be a problem when an organization goes for a disk space utilization rate of 90% or more (and keep in mind that we're not just talking about data set extensions related to data-load operations -- we're also talking about utility-related disk space usage for things like sort work files and online REORG shadow data sets). At the other end of the spectrum, I've seen a situation in which an organization set a 60% threshold for disk volume space utilization. This company's DBAs liked that policy a lot (it really cut down on middle-of-the-night rousting of whoever was on-call), but I can't say that I'd advocate such a target -- it seems too low from a cost-efficiency perspective.

Where do I stand? I'm pretty comfortable with a disk space utilization target in the 70-80% range. I might lean towards the lower end of that range in a DB2 Version 9 environment, as partition-level online REORG jobs will reorganize non-partitioned indexes in their entirety, thereby necessitating more shadow data set space (a potentially compensating factor would be DB2 9 index compression, which can significantly reduce disk space requirements for index data sets). Something else to keep in mind: in addition to avoiding overly high utilization of disk space, another good practice with regard to minimizing DB2 data set space allocation failures is to let DB2 determine the amount of space that will be requested when a data set has to be extended. This capability, sometimes called sliding-scale space allocation, was introduced with DB2 for z/OS Version 8. I blogged about it a few weeks ago, and I highly recommend its use. People who have taken advantage of this functionality have expressed great satisfaction with the results: much less DBA time spent managing DB2 data set allocation processing, and a sharp drop in the occurrence of DB2 space allocation-related failures. Check it out.
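
To illustrate, a sketch with hypothetical database and table space names (and an assumption on my part that the relevant subsystem parameter, MGEXTSZ -- a.k.a. OPTIMIZE EXTENT SIZING -- is set to YES):

   -- SECQTY -1 tells DB2 to size secondary extents itself, using the
   -- sliding scale; omitting SECQTY on CREATE TABLESPACE has, as I
   -- understand it, the same effect.
   ALTER TABLESPACE MYDB.MYTS1 SECQTY -1;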