DB2 for z/OS People: Don't be Memory Misers
When I first started working with DB2 on the mainframe platform, back in the mid-1980s, plenty of people were still running a version of the operating system called MVS/370. That OS limited the size of an address space to 16 MB, and the amount that a subsystem such as DB2 actually had to work with was less than that, since some megabytes were occupied by system code and by common storage areas (i.e., areas of virtual storage that were shared across address spaces). Since server memory was tight, people were frugal in their use of it. Early on, it was common to find organizations running DB2 in production with a total buffer pool configuration of 1000 4K buffers (4 MB of virtual storage) -- often allocated to a single pool, BP0.
MVS/370 was supplanted in the late 1980s by MVS/XA, which upped the maximum address space size to a whopping 2 GB (the XA in the OS name stood for eXtended Addressing). Though the amount of memory available to a DB2 subsystem had expanded by more than two orders of magnitude, many mainframe systems programmers were hesitant to significantly increase the size of DB2 buffer pools. I vividly recall seeing eyes widen when I told people that a DB2 subsystem running under MVS/XA should have 10,000 buffers in the pool at a bare minimum. 10,000 buffers! That's 40 MB! True, but 40 MB was peanuts in a 2 GB address space. On top of that, DB2 was designed to utilize large server memory resources to maximum advantage: the more virtual storage you gave it, the better it performed. Still, widespread acceptance of big buffer pools (meaning at least several hundred megabytes) was slow in coming.
Jump ahead to the late 00s (or whatever you call our present decade), and you see a repeat of that old treat-mainframe-memory-like-gold mindset. This time around, the big change was 64-bit hardware and virtual storage addressing on the z platform, initially exploited by DB2 for z/OS Version 8. Organizations now have System z servers with dozens of gigabytes of central storage, on which they have production DB2 V8 and V9 subsystems running with buffer pool configurations that use a gigabyte (or even less) of virtual storage for buffers. Come on, people! As was true when MVS/XA hit the scene, DB2 doesn't just tolerate large address spaces -- it THRIVES in large address spaces. Here's a modest proposition for you: if you have a production DB2 V8 or V9 subsystem running in a z/OS LPAR (logical partition) that has a central storage resource of X gigabytes, think in terms of using at least 25% of X for DB2 buffers. Recall that folks routinely used 70-80% of a 2 GB address space for DB2 buffers. Twenty-five percent is conservative. So, if your DB2-hosting z/OS LPAR has 40 GB of central storage, go for a 10 GB buffer pool configuration for starters, and be ready to expand further from there.
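To make the 25% rule of thumb concrete, here's a back-of-the-envelope sketch (my own arithmetic, not a product formula) that turns an LPAR's central storage size into a count of 4K buffers -- the unit in which pool sizes are specified:

```python
# Rough sizing sketch: express a fraction of LPAR central storage
# as a count of 4K buffers. The 25% starting fraction is the
# conservative rule of thumb from the text, not a DB2 default.

def target_buffer_count(lpar_central_storage_gb, fraction=0.25, page_size_kb=4):
    """Number of buffers that together occupy `fraction` of central storage."""
    target_bytes = lpar_central_storage_gb * fraction * 1024**3
    return int(target_bytes // (page_size_kb * 1024))

# The 40 GB LPAR example from the text: 25% = 10 GB of buffers
buffers = target_buffer_count(40)
print(buffers)  # 2621440 four-kilobyte buffers, i.e. 10 GB
```

Spread that total across your pools however your workload analysis dictates; the point is simply that a 10 GB aggregate is a starting point, not a ceiling, on a 40 GB LPAR.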
If you're planning on growing your DB2 buffer pool configuration to leverage 64-bit mainframe storage technology, should you do this by just enlarging all of your existing pools (assuming that you are using multiple pools -- and you should be -- for different categories of database objects)? No! Be smart about buffer pool expansion, and add buffers to pools in a way that will give you the biggest bang for your central storage buck. Your principal aim in bulking up buffer pools should be to reduce the rate of DB2 disk I/O operations. If you have two pools of equal size, and the rate of synchronous read I/Os (and/or the rate of prefetch I/Os -- both are important) is 100 per second for pool A and 5 per second for pool B, add buffers to pool A (and if you decide to make both pools bigger, add more buffers to A than to B). The rationale for focusing on disk I/O is simple: it delivers a double-barreled benefit. Fewer disk I/Os means reduced elapsed times for DB2 queries and DB2-accessing programs (these are typically I/O-bound) AND reduced CPU consumption for a given DB2 workload (it doesn't take much CPU time to drive a disk I/O operation, but a small amount of CPU processing resource expended a zillion times can really add up). You can get I/O rates for DB2 buffer pools from several sources, including DB2 monitor products and the good old DB2 DISPLAY BUFFERPOOL command.
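The "add more buffers where the I/O rate is higher" idea can be sketched as a simple proportional split. This is an illustration of the decision logic only -- the pool names and rates are made up, and in practice you'd compute the rates from interval deltas of the counters reported by your DB2 monitor or the DISPLAY BUFFERPOOL command:

```python
# Hypothetical sketch: divide a budget of new buffers among pools in
# proportion to each pool's total read I/O rate (synchronous reads
# plus prefetch reads, per second). Illustration data only.

def apportion_buffers(io_rates, new_buffers):
    """io_rates: {pool_name: read I/Os per second}. Returns {pool: buffer count}."""
    total_rate = sum(io_rates.values())
    return {pool: int(new_buffers * rate / total_rate)
            for pool, rate in io_rates.items()}

# Pool A at 100 I/Os per second, pool B at 5, from the example in the text;
# splitting 1 GB worth of new 4K buffers (262144 of them):
rates = {"BPA": 100.0, "BPB": 5.0}
print(apportion_buffers(rates, 262144))  # the lion's share goes to BPA
```

A pure proportional split is the crudest possible policy -- in real life you'd also weigh page residency times, random versus sequential access patterns, and which applications use which pools -- but it captures the core point: expansion dollars (buffers) should follow I/O rates.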
Something else to keep in mind: mainframe memory tends to cost less than mainframe MIPS. If you want to boost DB2 throughput, leverage the central storage resource at hand before adding engines.
One more thing to keep in mind: fewer I/Os does NOT mean fewer GETPAGEs (a GETPAGE is a DB2 request to view a table or index page). Fewer I/O operations translates into CPU savings, but if you want to take a really big bite out of the CPU cost of a DB2 query or program or workload, you need to achieve a big reduction in GETPAGEs. That is basically a matter of access path analysis and associated query and/or database design changes. And that is a subject for another post. Till then, don't be shy about using more mainframe central storage for DB2 (V8 and above) buffer pools.