"What is DB2 data sharing?"
When data sharing was delivered with DB2 for z/OS (OS/390 at the time) Version 4, more than ten years ago, I was part of IBM's DB2 National Technical Support team at the Dallas Systems Center. I was tasked at the time with learning all that I could about data sharing, so that I could help IBM customers (and IBMers) understand, implement, and successfully use the technology. That was a really fun time in my IT career because people had lots of questions about DB2 data sharing, and I very much enjoyed answering those questions (sometimes in-person at locations in Asia, Australia, Europe, and throughout the USA).
Over time, as DB2 data sharing knowledge became more widely diffused, and as people such as my good friend Bryce Krohn shouldered more of the knowledge-transfer load, there were fewer questions for me to answer -- thus my delight in fielding a query a couple of weeks ago. The questioner was the general manager of a company that provides IT training services, and he and I were talking about a request from one of his clients for data sharing education. "So," asked Mr. General Manager, "what is DB2 data sharing?"
[Before I tell you what I told the GM, I'll provide a brief technical response to the question. Data sharing is a shared-disk scale-out solution for DB2 running on an IBM mainframe cluster configuration called a parallel sysplex. In a data sharing group, multiple DB2 subsystems co-own and have concurrent read/write access to a database (there is also a single catalog -- DB2's metadata repository -- that is shared by the DB2 subsystems in the group). Specialized devices called coupling facilities -- existing as standalone boxes and/or as logical partitions within mainframe servers -- provide the shared memory resources in which the group buffer pools (for data coherency protection) and the global lock structure reside (the latter provides concurrency control and data integrity preservation).]
I thought for a second or two, and told the GM: "DB2 data sharing is the most highly available, highly scalable data-serving platform on the market."
Now, it's been more than eight years since I last worked for IBM, so that wasn't a sales line. It is, I think, a factually correct answer. Consider those two qualities, availability and scalability. The thing about data sharing is, you start with the highest-availability standalone data-serving platform -- an IBM mainframe running the z/OS operating system -- and you make it even MORE highly available by clustering it in a shared-disk configuration. Does anyone dispute my contention that DB2 on a mainframe is the alpha dog when it comes to uptime in a non-clustered environment? Seems to me that fans of competing platforms speak of "mainframe-like" availability. The mainframe is still the gold standard with respect to rock-solid reliability -- the platform against which others are measured. It's legendary for its ability to stay up under the most extreme loads, able to cruise along for extended periods of time at utilization levels well over 90%, with mixed and widely-varying workloads, keeping a huge number of balls in the air and not dropping a single one. You put several of those babies in a shared-disk cluster, and now you have a system that lets you apply maintenance (hardware or software) without a maintenance window. It also limits the scope of component failures to a remarkable extent, and recovers from them so quickly that a DB2 subsystem or a z/OS image or a mainframe server can crash (and keep in mind that such events are quite rare) and the situation can be resolved (typically in an automated fashion -- the system restores itself in most cases) with users never being aware that a problem occurred (and I'm talking about the real world -- I have first-hand experience in this area).
How about scalability? You might think that the overhead of DB2 data sharing would reach unacceptable levels once you get to four or five servers in a parallel sysplex, but you'd be wrong there. The CPU cost of data sharing is often around 10% or so in a 2-way data sharing group (meaning that an SQL statement running in a DB2 data sharing group might consume 10% more CPU time than the same statement running in a standalone DB2 environment). Add a third DB2 to the group, and you might add another half percentage point of overhead. Add a fourth member, and again overhead is likely to increase by less than one percent. So it goes, with a fifth DB2 member, a sixth, a seventh, an eighth, a ninth -- the increase in overall system capacity is almost linear with respect to the addition of an extra server's cycles. Again, I'm talking about the real world, in which organizations treat a data sharing group like a single-image system and let the parallel sysplex itself distribute work across servers in a load-balancing way, with no attempt made to establish "affinities" between certain servers and certain parts of the shared database in order to reduce overhead. Intense inter-DB2 read/write interest in database objects (tables and indexes) is the norm in data sharing systems I've worked on, and that's so because DB2 (and z/OS and Coupling Facility Control Code) handles that kind of environment very well. DB2 and parallel sysplex architecture allow for the coupling of up to 32 mainframes in one data-sharing cluster, and no one has come close to needing that amount of compute power to handle a data-serving workload (keep in mind that the capacity of individual mainframes continues to grow at an impressive clip).
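To make the scaling arithmetic above concrete, here is a small sketch that turns those figures into an effective-capacity estimate. The numbers (roughly 10% overhead for a 2-way group, well under one additional percentage point per extra member) are taken from the text as illustrative assumptions, not an official IBM formula.

```python
# Rough illustration of the scaling behavior described above.
# Assumptions (from the text, not measurements): ~10% CPU overhead
# for a 2-way data sharing group, plus ~0.5 percentage point per
# additional member beyond the second.

def effective_capacity(members: int,
                       base_overhead: float = 0.10,
                       per_member_overhead: float = 0.005) -> float:
    """Estimate usable capacity, in single-server units, of an n-way group."""
    if members < 2:
        return float(members)  # standalone DB2: no data sharing overhead
    overhead = base_overhead + per_member_overhead * (members - 2)
    return members * (1.0 - overhead)

for n in (1, 2, 4, 8):
    print(f"{n}-way group: ~{effective_capacity(n):.2f} servers' worth of capacity")
```

Running this shows the near-linear growth the post describes: each added member contributes almost a full server's worth of usable cycles.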
Hey, there are plenty of good data-serving platforms out there, but when it comes to the ultimate in availability and scalability, the king of the hill is a DB2 for z/OS data sharing group on a parallel sysplex.
Any questions?
4 Comments:
Hi there, sorry for the stupid question... if I have an environment in parallel sysplex but no data sharing, what is the impact, and what effort is needed to turn it on? And what considerations should I take into account before activating data sharing? Thanks for your support
Not a stupid question (sorry about the delayed response).
I'll assume that you have a parallel sysplex environment (mainframe servers, coupling facilities, sysplex timers, couple data sets, etc.), and a DB2 for z/OS subsystem that is running in that environment (i.e., on one of the z/OS systems in the parallel sysplex) in standalone mode.
First, I'll point out that you should be running DB2 at the Version 8 or Version 9 level - not only because V7 has gone out of service, but for the important data sharing benefits delivered with V8, as well (particularly the global locking enhancements that significantly reduced data sharing lock contention and associated overhead).
Activating DB2 data sharing does not require a great deal of effort. One of the most important pre-activation activities is the development of a good naming convention that will make the jobs of people responsible for operating, administering, and maintaining the system MUCH easier than would be the case with a poor naming convention. Oftentimes, a good naming convention will lead to a renaming of the "originating member" (the DB2 subsystem for which data sharing is activated). That's not too big a deal, as IBM provides a well-documented procedure for accomplishing this task (it's in a manual for which I'll provide a link momentarily).
Aside from developing the data sharing naming convention, you'll have decisions to make regarding the coupling facility structures used by DB2 (the Shared Communications Area, the lock structure, and the group buffer pools): size, location (i.e., coupling facility assignment), etc. You'll also want to set up a z/OS Automatic Restart Manager (ARM) policy that will enable the system to automatically restart a DB2 subsystem (and perhaps associated subsystems such as CICS) in case of an abnormal termination situation. Additionally, you'll want to enable your DB2 clients (if any access DB2 via the Distributed Data Facility) to access the data sharing group as a generic resource (versus accessing individual DB2 data sharing group members by name).
Actually enabling DB2 data sharing on the originating member is a process of about a dozen steps that should take less than half a day (the planning and preparation for executing this process will probably take 2-3 months, to ensure that you dot all your i's and cross all your t's and do so in a deliberate, unhurried manner).
The impact on your system, in terms of the CPU overhead of 2-way (or more) data sharing is typically not very large - often around 10% or so. The availability impact on your system could be quite positive, in that data sharing will enable you to apply maintenance to DB2 (and to other subsystems and to z/OS) with no requirement for a workload-disrupting maintenance window. In addition, the availability impact of a DB2 subsystem failure (a rare event to begin with) is usually very much reduced in a DB2 data sharing system versus a standalone DB2 environment.
A very good guide for you would be the DB2 Data Sharing Planning and Administration Guide. It's available on IBM's Web site. The DB2 V9 manuals (including the aforementioned Data Sharing Guide) are at http://www-01.ibm.com/support/docview.wss?rs=64&uid=swg27011656. The DB2 V8 manuals are also accessible from this page - just click on the "Version 8" tab near the top of the page.
Also for your consideration: Catterall Consulting offers several DB2 data sharing classes that can be delivered at your location. One of these is a 3-day class called DB2 Data Sharing Implementation. If you'd like information about this course or about consulting services aimed at facilitating a DB2 data sharing implementation project, feel free to send a note to rcatterall@catterallconsulting.com.
Hi Robert
Would you have an updated version on a formula to calculate the data-sharing overhead, please?
When I have calculated the CPU overhead of DB2 for z/OS data sharing, it's been in situations in which I can compare the in-DB2 CPU cost of SQL statement execution (also known as the "class 2" CPU time, because it is reported via DB2 accounting trace class 2) in a standalone DB2 environment with that seen in a DB2 data sharing group. I've generally done this using DB2 monitor accounting detail reports that an organization ran before and after implementing DB2 data sharing. I might be looking, for example, at DB2 monitor accounting detail reports with data grouped by connection type (i.e., all CICS-DB2 work, all call attach batch work, all DRDA work, etc.). In the report generated in the standalone DB2 environment I might see that the average in-DB2 CPU time for CICS-DB2 transactions was 100 milliseconds. In the report generated in a DB2 data sharing environment for the same application workload, I might see that the average in-DB2 CPU time for CICS-DB2 transactions went to 110 milliseconds. That would indicate a figure of about 10% for the CPU overhead of DB2 data sharing (I'd also compare before and after in-DB2 CPU times for batch jobs, DRDA transactions, and other components of the overall DB2 workload).
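The before-and-after comparison described above boils down to simple arithmetic on average class 2 (in-DB2) CPU times. Here is a hedged sketch of that calculation; the connection types and millisecond figures are illustrative values mirroring the example in the reply (100 ms to 110 ms for CICS-DB2 work), not data from an actual accounting report.

```python
# Sketch of the data sharing overhead calculation described above:
# compare average class 2 (in-DB2) CPU times from DB2 accounting
# detail reports taken before and after data sharing activation.

def data_sharing_overhead_pct(standalone_ms: float, sharing_ms: float) -> float:
    """Percent increase in in-DB2 CPU time after enabling data sharing."""
    return (sharing_ms - standalone_ms) / standalone_ms * 100.0

# Average class 2 CPU times (ms) by connection type -- illustrative values only.
before = {"CICS-DB2": 100.0, "call attach batch": 250.0, "DRDA": 40.0}
after = {"CICS-DB2": 110.0, "call attach batch": 268.0, "DRDA": 43.5}

for conn in before:
    pct = data_sharing_overhead_pct(before[conn], after[conn])
    print(f"{conn}: {pct:.1f}% data sharing CPU overhead")
```

In the CICS-DB2 case this yields the 10% figure cited in the reply; doing the same comparison for each connection type gives a per-workload-component view of the overhead.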