Whither DB2 for z/OS Self-Tuning Memory Management?
In my own experience as a DB2 consultant, I've seen organizations put their trust in DB2 for Linux, UNIX, and Windows (LUW) when it comes to automatic management of the server memory resources utilized by DB2 - with positive results. Self-tuning memory management (STMM) has been activated for databases by default since DB2 9.1 for LUW, and DB2 9.5 added several memory-related configuration parameters to the list of those that can be set to AUTOMATIC, thereby turning over to DB2 the task of optimizing the use of server memory for the associated functions. (DB2 9.5 also unified the use of the threaded architecture across platforms, replacing the process model previously used on Linux and UNIX servers - an advantageous change with regard to STMM, as I pointed out in a previous post on the topic.) DB2 for LUW STMM is very broad in scope: memory used for buffer pools, package caching, sort operations, and locking can all be managed dynamically and automatically by DB2. My impression is that STMM has been well received by the DB2 for LUW user community.
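For those who haven't worked much on the LUW side, here's a minimal sketch of what handing these knobs over to DB2 looks like (the database name SAMPLE and the buffer pool IBMDEFAULTBP are just placeholders - substitute your own):

   -- Enable the self-tuning memory manager for the database
   UPDATE DATABASE CONFIGURATION FOR SAMPLE USING SELF_TUNING_MEM ON;
   -- Let DB2 size the shared sort memory, lock list, and package cache dynamically
   UPDATE DATABASE CONFIGURATION FOR SAMPLE USING SHEAPTHRES_SHR AUTOMATIC SORTHEAP AUTOMATIC;
   UPDATE DATABASE CONFIGURATION FOR SAMPLE USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC;
   UPDATE DATABASE CONFIGURATION FOR SAMPLE USING PCKCACHESZ AUTOMATIC;
   -- A buffer pool participates in STMM when its size is AUTOMATIC
   ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC;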
IBM's DB2 for z/OS development organization made an important move with respect to STMM in the form of the AUTOSIZE option of ALTER BUFFERPOOL, introduced with DB2 for z/OS Version 9 (generally available since March of 2007). I say important because this is something of an acid test. Of all the performance tuning knobs and levers exposed by DB2 for z/OS, perhaps none gets as much attention as buffer pool sizing. Will DB2 for z/OS systems people be willing to turn the management of this crucial performance factor over to DB2 itself? If they will, the door will be wide open for DB2 on the mainframe platform, in future releases, to offer users the option of having the DBMS manage most, if not all, aspects of subsystem memory utilization in an automatic and dynamic fashion.
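Enabling the feature is a one-line command (BP1 here is just an example pool name):

   -ALTER BUFFERPOOL(BP1) AUTOSIZE(YES)

With AUTOSIZE(YES) in effect, DB2 works with the z/OS Workload Manager to adjust the pool's size - within a window around the VPSIZE you've specified - based on observed buffer pool performance; AUTOSIZE(NO) puts sizing back in your hands.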
This is sure to be a hotly debated topic within the DB2 for z/OS user community. I'll give you my take: it may take a few years, but I believe that DB2 STMM on the mainframe platform will come to be as broad in scope as it is on LUW servers (encompassing not only the management of buffer pool sizing, but also of the EDM pool, the RID pool, and the sort pool), and that utilization of comprehensive STMM among organizations using DB2 for z/OS will become commonplace.
I believe that a parallel development can be found in mainframe DB2's past: the introduction by IBM in the 1990s of automatic DB2 data set placement within a disk subsystem (made possible by advances in both DB2 and z/OS file management capabilities). For years, DB2 DBAs had carefully assigned DB2 data sets to particular volumes by using specific volume serial numbers (disk volume IDs) in CREATE STOGROUP statements, and then assigning data sets to STOGROUPs to (among other things) physically separate tables from associated indexes, and partitions of a partitioned tablespace from other partitions belonging to the same tablespace.

When DB2 made it possible to leave data set placement up to the operating system via the specification of '*' for VOLUMES in a CREATE STOGROUP statement, a lot of DB2 people were very hesitant to make that leap. Those who did often ended up exercising quite a lot of control over data set placement via so-called ACS routines. Eventually, though, VOLUMES('*') came to be a very common CREATE STOGROUP specification, and many sites eschewed complex ACS routines in favor of very simple set-ups that sometimes involved a single big disk volume pool for all DB2 user tables and indexes.

Why the change in attitudes? Simple: system-managed placement of DB2 data sets worked - people let the system manage placement and found that they still got excellent DB2 I/O response times. Why did it work? One reason was the sophistication of z/OS file management capabilities, but at least as big a reason was a major change in disk subsystem technology that made the physical location of a DB2 data set relative to other data sets more and more of a non-issue: the advent of very large (multi-gigabyte) cache memory resources on disk controllers, along with sophisticated algorithms to optimize the use of that cache.
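To make the contrast concrete, here's a sketch (the STOGROUP names, volume serials, and catalog alias are hypothetical):

   -- The old way: DBA-directed placement on specific volumes
   CREATE STOGROUP SGAPP1 VOLUMES ('VOL001', 'VOL002') VCAT DB2CAT;

   -- The system-managed way: let z/OS decide where the data sets go
   CREATE STOGROUP SGAPP2 VOLUMES ('*') VCAT DB2CAT;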
I believe that STMM will work well on the mainframe platform, and beyond the efforts of the DB2 for z/OS developers to that end, a big factor will be the growing use of the very large server memory resources made possible by the move to 64-bit addressing. The availability of vast amounts of server memory will not make the sizing of buffer pools unimportant, but it stands to reason that sizing buffer pools for good performance in a 1000+ transactions-per-second environment is easier when you have 100 GB (for example) of memory to work with, versus 2 GB.
Will DBAs who have spent lots of time monitoring and tuning buffer pools end up out in the cold when DB2 for z/OS STMM becomes widely used? I think not. They'll have more time for application-enablement work that will boost the value that they deliver to their employing organizations.
Anyway, that's what I think. Your comments would be most welcome.