Robert's Blog


Thursday, October 23, 2008

So, What Makes for "Good" DB2 I/O Performance?

Recently, someone e-mailed me a portion of a DB2 monitor accounting report for an application program, and asked me if any of the numbers in the report were of the "red flag" variety (i.e., indicative of a performance problem). In responding, I mentioned that one figure did indeed stand out in a negative way: the 30 millisecond average wait time per synchronous read I/O (meaning: on average, every time DB2 had to bring a particular table or index page - as opposed to a chunk of pages in a prefetch request - into the buffer pool from the disk subsystem on behalf of the application program, said program had to be suspended for 30 milliseconds waiting for the I/O operation to complete). Funny thing is, 30 milliseconds of wait time per synchronous DB2 read was once considered to be good performance. How times have changed.
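By the way, the arithmetic behind that figure is simple: the total wait time charged to synchronous read I/O, divided by the number of synchronous read I/Os. If your monitor can unload its accounting data into DB2 tables (many can), a query along these lines would do the math - the table and column names here are made up for illustration, so check your monitor's documentation for the real ones:

    -- Hypothetical accounting-detail table and columns; substitute your monitor's names.
    SELECT SUM(SYNC_READ_WAIT_SEC) * 1000.0        -- total sync read wait, converted to milliseconds
             / NULLIF(SUM(SYNC_READ_COUNT), 0)     -- divided by the number of synchronous reads
           AS AVG_SYNC_READ_WAIT_MS
      FROM MONITOR.ACCT_DETAIL
     WHERE PLAN_NAME = 'MYPLAN';                   -- the plan (or package) you're investigating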

When I first started working with DB2 in the latter half of the 1980s, IBM's 3380 disk storage device was king of the hill. The associated 3880 storage controllers had little, if any, cache memory (maybe 16 or 32 megabytes), and the cache management algorithms were not very sophisticated. On top of that, data was transmitted between disk subsystem and mainframe DB2 server (DB2 for Linux/UNIX/Windows was not yet on the scene) over bundles of copper wire. It all seemed pretty fast to us at the time, and DB2 users did indeed aim to get the average wait time per synchronous read I/O below 30 milliseconds (20 milliseconds per sync read was indicative of really good performance).

The 1990s saw huge leaps forward in the capabilities of high-end disk storage systems (and saw these devices become options for users of LUW servers as well as for mainframes). Disk controller cache sizes jumped way up to multi-gigabyte territory, and the controllers got maximum bang from that resource thanks to advanced cache-management software that ran on powerful CPUs built into the units (a cool example of sophisticated cache handling was adaptive staging, whereby the disk controller monitored data-access patterns and shifted between cylinder-at-a-time and track-at-a-time staging of data to cache memory depending on whether sequential or random access was predominant at the time). Non-volatile memory for super-fast disk write operations became a standard feature, and fiber-optic disk-to-server connections really opened the throttle with respect to data transfer rates. I well remember the first time - around 1995 or so - that I looked over a DB2 monitor statistics report at an organization that had implemented a disk storage subsystem with gigabytes of controller cache. I was amazed to see that the average wait time per synchronous DB2 read I/O was about 5 milliseconds. Since that time, speed has increased even more, to the point that some DB2-using companies see average sync read wait times in the vicinity of 2 to 3 milliseconds - an order of magnitude better than what I'd seen in the DB2 Version 1 days.

So, what if you are a DB2 user and you see an average wait time per synchronous read I/O that's in the 20- to 30-millisecond range? That is NOT good by today's standards, but what can you do about it? Some thoughts:

  • Spinning disk - even fast disk - ain't enough. To get great I/O performance, you need a lot of hits in the disk controller cache. To get that, you probably need gigabytes of cache memory in front of your disk volumes.
  • Check the size of your DB2 buffer pool configuration. Whaddya got? A couple hundred meg worth of buffer space? HELLO! It's a 64-bit world, folks! A buffer pool config that's less than a gigabyte in size is kind of dinky, in my book. "OK," to me, means at least 2-4 GB, and "big" is north of 10 GB (yeah, you need the server memory to back it up, but you can get hundreds of gigabytes of memory on a high-end system these days). A too-small buffer pool means that your disk storage subsystem will get pounded, I/O-wise (maybe hundreds of I/O operations per second), and even high-performance disk devices can get bogged down with I/O contention. (A quick buffer pool sizing sketch follows this list.)
  • Clustering matters. Did you put much thought into performance implications when you chose clustering keys for your DB2 tables? Locality of reference (i.e., rows to be retrieved by a program being in close physical proximity to each other) can make a very big difference in the number of pages that DB2 has to examine (and bring into the buffer pool if they're not already there) in executing SQL statements. Are your programs getting 20 rows from 1 or 2 pages, or from 20 pages? (See the clustering index sketch after this list.)
  • Data compression can be a performance-booster. It's not just about saving disk space. Mainframers have known this for a long time, and now DB2 for LUW has a great data compression capability, as well. When you compress a table, the number of rows per page will typically increase by a factor of 2 to 3, and that can mean a drop in page I/O requests. (A compression example follows the list.)
  • Do NOT try to fool the DB2 optimizer. I've heard of sites where they fudge DB2 catalog statistics in an attempt to get DB2 to not choose prefetch for data access when doing access path selection. This is typically done because of an impression that prefetch reads "get in the way" of single-page synchronous reads, thereby slowing the latter. Folks, today's DB2 optimizer is the product of 25 years of development effort and experience, and it knows what it's doing. If you don't want DB2 to prefetch a lot of pages when a certain SELECT statement is executed, try telling it that you want the first few rows of the result set ASAP (via the OPTIMIZE FOR n ROWS clause), or that DB2 can quit fetching after a few rows have been retrieved (FETCH FIRST n ROWS ONLY). When, through catalog stats that do not reflect reality, you trick DB2 into thinking that just a few data and/or index pages will be scanned to generate a result set when in fact lots of pages will be examined, you will very likely end up driving lots more single-page synchronous reads than should be occurring for your workload, and that can really gum things up. It's best to be honest with DB2 - keep your stats current and accurate. (An example of that approach follows the list.)
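A quick sketch of the buffer pool point above: enlarging a pool is a one-line change. The pool names and sizes below are placeholders, and the sizes are specified in buffers (pages), so 262,144 four-kilobyte buffers works out to roughly 1 GB:

    -- DB2 for z/OS: VPSIZE is the number of buffers (pages) in the pool (names and sizes are examples only).
    -ALTER BUFFERPOOL(BP2) VPSIZE(262144)

    -- DB2 for LUW: the SIZE value is also expressed in pages.
    ALTER BUFFERPOOL IBMDEFAULTBP SIZE 262144;

Remember that the server memory has to be there to back up whatever size you pick.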
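On the clustering point: in DB2 you designate the clustering key by putting the CLUSTER keyword on one of a table's indexes (and a REORG will put the data back into that order if it has drifted). The table and column names below are invented for the sake of the example:

    -- Cluster the ORDER_ITEM table by customer, so that one customer's rows end up
    -- on as few pages as possible (hypothetical names).
    CREATE INDEX OITEM_IX1
      ON ORDER_ITEM (CUST_ID, ORDER_DATE)
      CLUSTER;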
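On compression, it's a two-step affair: tell DB2 you want the object compressed, then reorganize it so the compression dictionary gets built and the existing rows get compressed. The object names below are placeholders:

    -- DB2 for z/OS: compress at the table space level, then run the REORG utility.
    ALTER TABLESPACE DBX.TSX COMPRESS YES;
    -- REORG TABLESPACE DBX.TSX   (the utility builds the dictionary and compresses the rows)

    -- DB2 for LUW (Version 9): row compression is enabled at the table level.
    ALTER TABLE SALES.ORDER_ITEM COMPRESS YES;
    -- REORG TABLE SALES.ORDER_ITEM   (run via the command line processor or ADMIN_CMD)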
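And on being honest with the optimizer, here's the kind of thing I mean by telling DB2 what you actually want instead of doctoring the statistics (table and column names are, again, just placeholders):

    -- Tell DB2 the program only needs the first 20 rows, and wants them quickly -
    -- that, not fudged catalog statistics, is the right way to discourage big prefetch scans.
    SELECT ORDER_NO, ORDER_DATE, ORDER_TOTAL
      FROM ORDERS
      WHERE CUST_ID = ?
      ORDER BY ORDER_DATE DESC
      FETCH FIRST 20 ROWS ONLY
      OPTIMIZE FOR 20 ROWS;

    -- And keep the catalog statistics current and accurate (z/OS RUNSTATS utility shown here;
    -- DB2 for LUW has a RUNSTATS command of its own).
    -- RUNSTATS TABLESPACE DBX.TSX TABLE(ALL) INDEX(ALL)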
Will technology advances eventually get us to where we expect average wait time per synchronous DB2 read I/O to be under a millisecond? Based on what I've seen over the past 20 years, I'd lean towards a "yes" on that one. We veterans will then have fun regaling the young 'uns with stories of these things on which we used to store DB2 data. "Disks," we called them...
