DB2 for z/OS Data Sharing: Then and Now (Part 2)
In part 1 of this two-part entry, posted last week, I wrote about some of the more interesting changes that I've seen in DB2 for z/OS data sharing technology over the years (about 15) since it was introduced through DB2 Version 4. More specifically, in that entry I highlighted the tremendous improvement in performance with regard to the servicing of coupling facility requests, and described some of the system software enhancements that have made data sharing a more CPU-efficient solution for organizations looking to maximize the availability and scalability of a mainframe DB2 data-serving system. In this post, I'll cover changes in the way that people configure data sharing groups, and provide a contemporary view of a once-popular -- and perhaps now unnecessary -- application tuning action.
Putting it all together. Of course, before you run a DB2 data sharing group, you have to set one up. Hardware-wise, the biggest change since the early years of data sharing has been the growing use of internal coupling facilities, versus the standalone boxes that were once the only option available. The primary advantage of internal coupling facilities (ICFs), which operate as logical partitions within a System z server, with dedicated processor and memory resources, is economic: they cost less than external, standalone coupling facilities. They also offer something of a performance benefit, as communication between a z/OS system on a mainframe and an ICF on the same mainframe is a memory-to-memory operation, with no requirement for the traversing of a physical coupling facility link.
When internal coupling facilities first came along in the late 1990s, organizations that acquired them tended to use only one in a given parallel sysplex (the mainframe cluster on which a DB2 data sharing group runs) -- the other coupling facility in the sysplex (you always want at least two, so as to avoid having a single point of failure) would be of the external variety. This was so because people wanted to avoid the effect of the so-called "double failure" scenario, in which a mainframe housing both an ICF and a z/OS system participating in the sysplex associated with the ICF goes down. Group buffer pool duplexing, delivered with DB2 Version 6 (and subsequently retrofitted to Version 5), allayed the double-failure concerns of those who would put the group buffer pools (GBPs) in an ICF on a server with a sysplex-associated z/OS system: if that server were to fail, taking the ICF down with it, the surviving DB2 subsystems (running on another server or servers) would simply use what had been the secondary group buffer pools in the other coupling facility as primary GBPs, and the application workload would continue to be processed by those subsystems (in the meantime, any DB2 group members on the failed server would be automatically restarted on other servers in the sysplex). Ah, but what of the lock structure and the shared communications area (SCA), the other coupling facility structures used by the members of a DB2 data sharing group? Companies generally wanted these in an external, standalone CF (or in an ICF on a server that did not also house a z/OS system participating in the sysplex associated with the ICF). Why? Because a double-failure scenario involving these structures would lead to a group-wide failure -- this is because successful rebuild of the lock structure or SCA (which would prevent a group failure) requires information from ALL the members of a DB2 data sharing group. If a server has an ICF containing the lock structure and SCA (these are usually placed within the same coupling facility) and also houses a member of the associated DB2 data sharing group, and that server fails, the lock structure and SCA will not be rebuilt (because a DB2 member failed, too), and without those structures, the data sharing group will come down.
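To make the placement discussion a bit more concrete, here is a minimal sketch of the STRUCTURE statements one might code in a CFRM policy (input to the IXCMIAPU administrative data utility) to duplex a group buffer pool across two coupling facilities while steering the lock structure and SCA toward an external CF. The group name (DSNDB0G), the CF names (ICF01 for the internal CF, EXCF01 for the external box), and the sizes (in units of 1 KB) are placeholders for illustration, not recommendations, and the CF definitions that would precede these statements are omitted:

    STRUCTURE NAME(DSNDB0G_GBP0)
      INITSIZE(100000) SIZE(200000)
      DUPLEX(ENABLED)
      PREFLIST(ICF01,EXCF01)

    STRUCTURE NAME(DSNDB0G_LOCK1)
      SIZE(65536)
      PREFLIST(EXCF01,ICF01)

    STRUCTURE NAME(DSNDB0G_SCA)
      SIZE(32768)
      PREFLIST(EXCF01,ICF01)

The order of the CF names in PREFLIST is what expresses the placement preference (a structure is generally allocated in the first suitable CF in its list), and DUPLEX(ENABLED) is what activates duplexing for the group buffer pool, with the secondary GBP maintained in the other coupling facility.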
Nowadays, parallel sysplexes configured with ICFs and no external CFs are increasingly common. For one thing, the double-failure scenario is less scary to a lot of folks than it used to be, because a) actually losing a System z server is exceedingly unlikely (it's rare enough for a z/OS or a DB2 or an ICF to crash, and rarer still for a mainframe itself to fail), and b) even if a double failure involving the lock structure and SCA were to occur, causing the DB2 data sharing group to go down, the subsequent group restart process that would restore availability would likely complete within a few minutes (with duplexed group buffer pools, there would be no data sets in group buffer pool recover pending -- GRECP -- status, and group restart goes much more quickly when there's no GRECP recovery to do). For another, organizations that want insurance against even a very rare outage that would probably last only a few minutes can eliminate the possibility of an outage-causing double failure by implementing system duplexing of the lock structure and the SCA. System duplexing of the lock structure and SCA increases the overhead of running DB2 in data sharing mode (more so than group buffer pool duplexing), and that's why some organizations use it and some don't -- it's a cost versus benefit decision.
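At the CFRM policy level, opting in to system duplexing of the lock structure and SCA is, again, a matter of the DUPLEX keyword -- something along these lines, with the same caveat that the names and sizes are just placeholders:

    STRUCTURE NAME(DSNDB0G_LOCK1)
      SIZE(65536)
      DUPLEX(ENABLED)
      PREFLIST(ICF01,ICF02)

    STRUCTURE NAME(DSNDB0G_SCA)
      SIZE(32768)
      DUPLEX(ENABLED)
      PREFLIST(ICF01,ICF02)

The extra overhead comes from the nature of system-managed duplexing: a request against a duplexed lock structure is driven to both structure instances and the two operations have to be kept in step before the request completes, whereas GBP duplexing adds work only for the writing of changed pages to the secondary structure.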
Another relatively recent development with regard to data sharing setup is the availability of 64-bit addressing in Coupling Facility Control Code, beginning with the CFLEVEL 12 release (Coupling Facility Control Code is the operating system of a coupling facility, and I believe that CFLEVEL 16 is the current release). So, how big can an individual coupling facility structure be? About 100 GB (actually, 99,999,999 KB -- the current maximum value that can be specified when defining a structure in a Coupling Facility Resource Management, or CFRM, policy). For the lock structure and the SCA, this structure size ceiling (obviously well below what's possible with 64-bit addressing) is a total non-issue, as these structures are usually not larger than 128 MB each. Could someone, someday, want a group buffer pool to be larger than 100 GB? Maybe, but I think that we're a long way from that point, and I expect that the 100 GB limit on the size of a coupling facility structure will be increased well before then.
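That ceiling, by the way, is simply the largest value the SIZE keyword will accept in a CFRM policy structure definition. A (purely hypothetical) group buffer pool pushed to the limit would be defined with something like:

    STRUCTURE NAME(DSNDB0G_GBP1)
      SIZE(99999999)
      PREFLIST(ICF01,ICF02)

That's 99,999,999 KB -- a bit under 100 GB.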
DEALLOCATE or COMMIT? DB2 data sharing is, for the most part, invisible to application programs, and organizations implementing data sharing groups often find that use of the technology necessitates little, if anything, in the way of application code changes. That said, people have done various things over the years to optimize the CPU efficiency of DB2-accessing programs running in a data sharing environment. For a long time, one of the more popular tuning actions was to bind programs executed via persistent threads (i.e., threads that persist across commits, such as those associated with batch jobs and with CICS-DB2 protected entry threads) with the RELEASE(DEALLOCATE) option. This was done to reduce tablespace lock activity: RELEASE(DEALLOCATE) causes tablespace locks (not page or row locks) acquired by an application process to be retained until thread deallocation, as opposed to being released (and reacquired, if necessary) at commit points. The reduced tablespace lock activity would in turn reduce the type of data sharing global lock contention called XES contention (about which I wrote in last week's part 1 of this two-part entry). There were some costs associated with the use of RELEASE(DEALLOCATE) for packages executed via persistent threads (the size of the DB2 EDM pool often had to be increased, and package rebind procedures sometimes had to be changed or rescheduled to reflect the fact that some packages would remain in an "in use" state for much longer periods of time than before), but these were typically seen as being outweighed by the gain in CPU efficiency related to the aforementioned reduction in the level of XES contention.
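For reference, here's what that pairing looked like in practice: a DSN BIND subcommand along these lines (the collection and member names are made-up examples) for a package executed through, say, a batch job's persistent thread:

    BIND PACKAGE(BATCHCOLL) MEMBER(PGMA) -
         ACTION(REPLACE) -
         ISOLATION(CS) -
         RELEASE(DEALLOCATE)

The RELEASE(DEALLOCATE) specification only pays off if the thread really is persistent; for a thread that goes away at commit, deallocation and commit amount to the same thing.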
All well and good, but then (with DB2 Version 8) along came data sharing Locking Protocol 2 (also described in last week's part 1 post). Locking Protocol 2 drove XES contention way down, essentially eliminating XES contention reduction as a rationale for pairing RELEASE(DEALLOCATE) with persistent threads. With this global locking protocol in effect, the RELEASE(DEALLOCATE) versus RELEASE(COMMIT) package bind decision is largely unrelated to your use of data sharing. There are still benefits associated with the use of RELEASE(DEALLOCATE) for persistent-thread packages (e.g., a slight improvement in CPU efficiency due to reduced tablespace lock and EDM pool resource release and reacquisition, and more-effective dynamic prefetch), but with the data sharing efficiency gain of old now delivered by Locking Protocol 2 regardless of the RELEASE option, you might decide to be more selective in your use of RELEASE(DEALLOCATE). If you implement a DB2 data sharing group, you may just leave your package bind RELEASE specifications as they were in your standalone DB2 environment.
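If you do decide to be more selective, the DB2 catalog will tell you where you stand today. A simple query like this one (just a sketch) lists the packages currently bound with RELEASE(DEALLOCATE):

    -- RELEASE column: 'D' = DEALLOCATE, 'C' = COMMIT,
    -- blank = the option was not explicitly specified at bind time
    SELECT COLLID, NAME, VERSION, RELEASE
      FROM SYSIBM.SYSPACKAGE
     WHERE RELEASE = 'D'
     ORDER BY COLLID, NAME;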
What new advances in data sharing technology are on the way? We'll soon see: IBM is expected to formally announce DB2 Version "X," the successor to Version 9, later this year. I'll be looking for data sharing enhancements among the "What's New?" items delivered with Version X, and I'll likely report in this space on what I find. Stay tuned.