Robert's Blog

Friday, January 8, 2010

DB2 for z/OS Data Sharing: Then and Now (Part 1)

A couple of months ago, I did some DB2 data sharing planning work for a financial services organization. That engagement gave me an opportunity to reflect on how far the technology has come since it was introduced in the mid-1990s via DB2 for z/OS Version 4 (I was a part of IBM's DB2 National Technical Support team at the time, and I worked with several organizations that were among the very first to implement DB2 data sharing groups on parallel sysplex mainframe clusters). This being the start of a new year, it seems a fitting time to look at where DB2 data sharing is now as compared to where it was about fifteen years ago. I'll do that by way of a 2-part post, focusing here on speed gains and smarter software, and in part 2 (sometime next week) on configuration changes and application tuning considerations.

Speed, and more speed. One of the most critical factors with respect to the performance of a data sharing group is the speed with which a request to a coupling facility can be processed. In a parallel sysplex, the coupling facilities provide the shared memory resource in which DB2 structures such as the global lock structure and the group buffer pools are located. Requests to these structures (e.g., the writing of a changed page to a group buffer pool, or the propagation of a global lock request for a data page or a row) have to be processed exceedingly quickly, because 1) the volume of requests can be very high (thousands per second) and 2) most DB2-related coupling facility requests are synchronous, meaning that the mainframe engine that drives such a request will wait, basically doing nothing, until the coupling facility response is received (this is so for performance reasons: the request-driving mainframe processor is like a runner in a relay race, waiting with outstretched hand to take the baton from a teammate and immediately sprint with it towards the finish line). This processor wait time associated with synchronous coupling facility requests, technically referred to as "dwell time," has to be minimized because it is a key determinant of data sharing overhead (that being the difference in the CPU cost of executing an SQL statement in a data sharing environment versus the cost of executing the same statement in a standalone DB2 subsystem).
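To make the overhead definition above concrete, here is a minimal sketch of the calculation. The CPU-cost figures are invented for illustration; they are not measurements from any real system.

```python
# Hypothetical illustration of data sharing overhead as defined above:
# the relative increase in the CPU cost of executing an SQL statement
# in a data sharing group versus a standalone DB2 subsystem.
# All figures below are invented, not measured.

def data_sharing_overhead(cpu_cost_sharing_us, cpu_cost_standalone_us):
    """Return overhead as a fraction of the standalone CPU cost."""
    return (cpu_cost_sharing_us - cpu_cost_standalone_us) / cpu_cost_standalone_us

# e.g., a statement costing 100 microseconds of CPU standalone and
# 112 microseconds in the data sharing group:
overhead = data_sharing_overhead(112.0, 100.0)
print(f"{overhead:.0%}")  # 12%
```

Minimizing synchronous-request dwell time attacks the numerator of that fraction directly, which is why service times matter so much.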

In the late 1990s, people who looked after DB2 data sharing systems were pretty happy if they saw average service times for synchronous requests to the group buffer pools and lock structure that were under 250 and 150 microseconds, respectively. Nowadays, sites report that their average service times for synchronous group buffer pool and lock structure requests are less than 20 microseconds. This huge improvement is due in large part to two factors. First, coupling facility engines are MUCH faster than they were in the old days. If you know mainframes, you know about this even if you are not familiar with coupling facilities, because coupling facility microprocessors are identical, hardware-wise, to general purpose System z engines -- they just run Coupling Facility Control Code instead of z/OS. Today's z10 microprocessors pack 10 times the compute power delivered by top-of-the-line mainframe engines a decade ago. The second big performance booster with regard to coupling facility synchronous request service times is the major increase in coupling facility link capacity versus what was available in the 1999-2000 time frame. Back then, many of the links in use had an effective data rate of 250 MB per second. Current links can move information at 2 GB (2000 MB) per second.

This big improvement in performance related to the servicing of synchronous coupling facility requests helped to improve throughput in data sharing systems. Did it also reduce the CPU cost of data sharing? Yes, but it's only part of that story. DB2 data sharing is a more CPU-efficient technology now than it was in the 1990s: overhead in an environment characterized by lots of what I call inter-DB2 write/write interest (referring to the updating of individual database objects -- tables and indexes -- by multiple application processes running concurrently on different members of a DB2 data sharing group) was once generally in the 10-20% range. Now the range is more like 8-15%. That improvement wasn't helped along all that much by faster coupling facility engines. Sure, they lowered service times for synchronous coupling facility requests, but the resulting reduction in the aforementioned mainframe processor "dwell time" was offset by the fact that much-faster mainframe engines forgo handling a lot more instructions during a given period of "dwelling" versus their slower predecessors (in other words, as mainframe processors get faster, you have to drive down synchronous request service times just to hold the line on data sharing overhead). Faster coupling facility links helped to reduce overhead, but I think that improvements in DB2 data sharing CPU efficiency have at least as much to do with system software changes as with speedier servicing of coupling facility requests. A tip of the hat, then, to the DB2, z/OS, and Coupling Facility Control Code development teams.
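The point about faster engines offsetting faster coupling facilities can be shown with a bit of arithmetic: the work an engine forgoes while "dwelling" is roughly the synchronous service time multiplied by the engine's speed. The MIPS and microsecond figures below are invented for illustration, not actual hardware specifications.

```python
# Hypothetical arithmetic: instructions an engine cannot execute while
# "dwelling" on a synchronous coupling facility request is roughly
# service time (microseconds) x engine speed (instructions per microsecond).
# Both figures below are invented for illustration.

def instructions_forgone(service_time_us, engine_mips):
    # engine_mips = millions of instructions per second,
    # which is the same as instructions per microsecond
    return service_time_us * engine_mips

# An older, slower engine with a longer service time:
old = instructions_forgone(150, 100)   # 150 us dwell at ~100 MIPS
# An engine ten times faster: the service time must drop by the same
# factor just to forgo the SAME number of instructions per request --
# i.e., just to hold the line on data sharing overhead:
new = instructions_forgone(15, 1000)
print(old, new)  # both 15000
```

This is why faster coupling facility engines and links alone could not account for the drop in overhead, and why the software changes described next matter so much.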

Working smarter, not harder. Over the years, IBM has delivered a number of software enhancements that lowered the CPU cost of DB2 data sharing. Some of these code changes boosted efficiency by reducing the number of coupling facility requests that would be generated in the processing of a given workload. A DB2 Version 5 subsystem, for example (and recall that data sharing was introduced with DB2 Version 4), was able to detect more quickly that a data set that it was updating and which had been open on multiple members of a data sharing group was now physically closed on the other members. As a result, the subsystem could take the data set out of the group buffer pool dependent state sooner, thereby eliminating associated coupling facility group buffer pool requests. DB2 Version 6 introduced the MEMBER CLUSTER option of CREATE TABLESPACE, enabling organizations to reduce coupling facility accesses related to space map page updating for tablespaces with multi-member insert "hot spots" (these can occur when multiple application processes running on different DB2 members are driving concurrent inserts to the same area within a tablespace). DB2 Version 8 took advantage of a Coupling Facility Control Code enhancement, the WARM command (short for Write And Register Multiple -- a means of batching coupling facility requests), to reduce the number of group buffer pool writes necessitated by a page split in a group buffer pool dependent index from five to one.
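The request-count effect of WARM-style batching can be sketched as follows. The page names and the count of five pages per index split come from the description above; the function itself is a toy illustration, not Coupling Facility Control Code internals.

```python
# Toy sketch of the effect of batching coupling facility requests, as
# with the WARM (Write And Register Multiple) command: the same pages
# get written to the group buffer pool either way; only the number of
# coupling facility requests changes. Illustrative only -- the page
# names here are hypothetical labels, not CFCC data structures.

def cf_requests(pages_to_write, batched):
    """Count the CF requests needed to write the given pages to a GBP."""
    if batched:
        return 1 if pages_to_write else 0   # one batched, WARM-style request
    return len(pages_to_write)              # one write-and-register per page

# The five group buffer pool writes driven by a page split in a
# GBP-dependent index (per the text above):
split_pages = ["old_leaf", "new_leaf", "parent", "space_map", "root"]
print(cf_requests(split_pages, batched=False))  # 5 (pre-Version 8)
print(cf_requests(split_pages, batched=True))   # 1 (Version 8 with WARM)
```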

These code improvements were all welcome, but my all-time favorite efficiency-boosting enhancement is the Locking Protocol 2 feature delivered with DB2 Version 8. Locking Protocol 2 lowered data sharing overhead by virtually eliminating a type of global lock contention known as XES contention. Prior to Version 8, if an application process running on a member of a data sharing group requested an IX or IS lock on a tablespace (indicating an intent to update or read from the tablespace in a non-exclusive manner), that request would be propagated by XES (Cross-System Extended Services, a component of the z/OS operating system) to the coupling facility lock structure as an X or S lock request (exclusive update or exclusive read) on the target object, because those are the only logical lock states known to XES. Thus, if process Q running on DB2 Version 7 member DB2A requests an IX lock on tablespace XYZ, that request will get propagated to the lock structure as an X lock on the tablespace. If process R running on DB2 Version 7 member DB2B subsequently requests an IS lock on the same tablespace, that request will be propagated to the lock structure as an S lock on the resource. If process Q on DB2A still holds its lock on tablespace XYZ, the lock structure will detect the incompatibility of the X and S locks on the tablespace, and DB2B will get a contention-detected response from the coupling facility. The z/OS image under which DB2B is running will contact the z/OS associated with DB2A in an effort to resolve the apparent contention situation. DB2A's z/OS then drives an IRLM exit (IRLM being the lock management subsystem used by DB2), supplying the target resource identifier (for tablespace XYZ) and the actual logical lock requests involved in the XES-perceived conflict (IX and IS). 
IRLM will indicate to the z/OS system that these logical lock states are in fact compatible, and DB2B's z/OS will be informed that application process R can get its requested lock on tablespace XYZ. The extra processing required to determine that the conflict perceived by XES is not a conflict at all adds to the cost of data sharing.

With Locking Protocol 2, the aforementioned scenario would play out as follows: the request by application process Q on DB2 Version 8 (or 9) member DB2A for an IX lock on tablespace XYZ gets propagated to the lock structure as a request for an S lock on the object. The subsequent request by application process R on DB2 Version 8 (or 9) member DB2B is also propagated as an S lock request, and because S locks on a resource are compatible with each other, no contention is indicated in the response to DB2B's system from the coupling facility, and the lock request is granted. Because IX and IS tablespace lock requests are very common in a DB2 environment, and because IX-IX and IX-IS lock situations involving application access to a given tablespace from different data sharing members are not perceived by XES as being in-conflict when Locking Protocol 2 is in effect, almost all of the XES contention that would be seen in a pre-Version 8 DB2 data sharing group goes away, and data sharing CPU overhead goes down. I say almost all, because the S-IS tablespace lock situation (exclusive read on the one side, and non-exclusive read on the other), which would not result in XES contention with Locking Protocol 1, does cause XES contention with Locking Protocol 2 (the reason being that Locking Protocol 2 causes an S tablespace lock request to be propagated to the lock structure as an X request -- necessary to ensure that an X-IX tablespace lock situation will be correctly perceived by XES as being in-conflict). This is generally not a big deal, because S tablespace lock requests tend to be quite unusual in most DB2 systems.
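The two-protocol behavior described in the last two paragraphs can be summarized in a small table-driven sketch. The mappings follow the description above (IX propagated as X under Protocol 1 but S under Protocol 2; S propagated as S under Protocol 1 but X under Protocol 2); this is an illustration of the logic, not IRLM or XES code.

```python
# Sketch of how logical tablespace lock states map to the two XES lock
# states (S or X) under Locking Protocol 1 vs. Locking Protocol 2, and
# which cross-member combinations XES therefore flags as contention.
# Mappings follow the description in the text; illustrative only.

PROTOCOL_1 = {"IS": "S", "IX": "X", "S": "S", "X": "X"}
PROTOCOL_2 = {"IS": "S", "IX": "S", "S": "X", "X": "X"}

def xes_contention(lock_a, lock_b, mapping):
    """XES sees contention unless both propagated states are S."""
    return not (mapping[lock_a] == "S" and mapping[lock_b] == "S")

# IX-IS across members: false contention under Protocol 1 (the DB2A/DB2B
# scenario above), but no contention under Protocol 2:
print(xes_contention("IX", "IS", PROTOCOL_1))  # True  (XES contention)
print(xes_contention("IX", "IS", PROTOCOL_2))  # False (no contention)

# S-IS: the one combination that newly drives XES contention under
# Protocol 2, the price of propagating S as X:
print(xes_contention("S", "IS", PROTOCOL_1))   # False
print(xes_contention("S", "IS", PROTOCOL_2))   # True

# X-IX: correctly flagged as contention under Protocol 2, as required:
print(xes_contention("X", "IX", PROTOCOL_2))   # True
```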

So, DB2 for z/OS data sharing technology, which was truly ground-breaking when introduced, has not remained static in the years since -- it's gotten better, and it will get better still in years to come. That's good news for data sharing users. If your organization doesn't have a DB2 data sharing group, make a new year's resolution to look into it. In any case, stop by the blog next week for my part 2 entry on the "then" and "now" of data sharing.

