CICS is great, but is it needed with DB2 for z/OS?
Opening statement: I like CICS. I really do. IBM's flagship mainframe-based transaction management subsystem has a lot going for it, including:
- Legs - CICS is a veteran player. It's been handling big transactional workloads around the world for more than 30 years - that's 30+ years of transaction management expertise built into the product through regular releases of new versions of the code.
- Scalability - I worked with an organization that went from about 100 CICS-DB2 transactions per second during peak hours to well over 1000 per second over the course of a two- to three-year period, and the system scaled beautifully. Response times remained excellent through the more-than-an-order-of-magnitude growth of the application workload.
- Great tools - With CICS having been on the scene for so long, and being so widely installed, you'd think that there'd be a rich ecosystem of complementary tools from IBM and other vendors, and you'd be right in so thinking. Performance monitors, diagnostic tools, and other products abound.
That said, there is a question that intrigues me. Suppose that your organization wants to build a mainframe DB2-based application system. The system will handle a large workload, in excess of 1000 DB2 database-accessing transactions per second at peak. Excellent performance is a must, and high availability is of critical importance. Here's my question: given this situation, would success depend on front-ending the DB2 for z/OS database with CICS? Here's my answer: no. A high-volume, mission-critical system can be successfully implemented without CICS being in the mix.
Absent CICS, where would the transaction programs originate? Well, think about it. Where would transaction programs originate if the database platform were something other than a mainframe (e.g., DB2 on a Solaris server, Oracle on a Linux server, or SQL Server on a Windows system)? In those cases, you'd have the transactions coming in from application servers running on boxes separate from the one on which the DBMS is housed, right? You would do that for reasons of scalability (often, you can have several application servers running flat-out while the target database server is barely breaking a sweat) and availability (the app servers are workload conduits for the database server, and if one of these "pipes" goes offline, work can continue to flow through the others). Why would you want to do things differently just because the database server happens to be a mainframe? Why not front the mainframe with application servers (e.g., WebSphere, WebLogic, Windows .NET) running on Intel- or RISC-based systems?
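To make that picture a little more concrete, here is a minimal sketch of the kind of data access an application server-side program might perform against DB2 for z/OS over the network, using the IBM JDBC type 4 (DRDA) driver so that the request arrives at DB2 through DDF. The host name, port, location name, credentials, schema, and table are all hypothetical, and in a real WebSphere or WebLogic environment you would typically obtain a pooled connection from a DataSource rather than from DriverManager:

```java
// Hypothetical sketch: app server-side data access against DB2 for z/OS
// through DDF, using the IBM JDBC type 4 (DRDA) driver on the classpath.
// Host, port, location name, credentials, and the CUSTOMER table are
// illustrative only.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DdfQuerySketch {
    // Type 4 URL format: jdbc:db2://<host>:<port>/<DB2 location name>
    private static final String URL = "jdbc:db2://zos.example.com:446/DB2PLOC";

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(URL, "appuser", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT CUST_NAME FROM APPSCHEMA.CUSTOMER WHERE CUST_ID = ?")) {
            ps.setInt(1, 12345); // bind the customer-number parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("CUST_NAME"));
                }
            }
        } // try-with-resources closes the statement and the connection
    }
}
```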
This I will concede: I would not have wanted to implement an enterprise-class, DB2 for z/OS-based transactional application without CICS as recently as a few years ago. Server-side SQL-issuing programs are a key to scalability when transaction volume is high, and absent CICS transaction programs, server-side SQL in a mainframe environment means DB2 stored procedures invoked via CALL statements coming through the DB2 Distributed Data Facility (DDF); a sketch of such a CALL appears after the list below. Not so long ago, DDF and DB2 stored procedures had some limitations that made them less attractive as components of large, mission-critical applications, to wit:
- It was not possible to assign different priorities (with respect to WLM, the z/OS Workload Manager) to different classes of work coming through the DB2 DDF - everything ran at the priority of the DDF address space.
- One could not get detailed DB2 accounting data (i.e., trace data used by a DB2 monitor product to generate a DB2 accounting detail report or to display a screen of DB2 thread detail information) for stored procedure programs, because such detailed data was not available at the DB2 package level, and all work coming through DDF ran under the single DISTSERV plan name.
- Stored procedures could run in only one address space (an address space managed by DB2).
- There was a dearth of vendor tools that could be used to facilitate the development, debugging, and monitoring of DB2 stored procedure programs.
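And here, as promised above, is a minimal sketch of such a DDF-routed CALL issued from a Java client on an application server. The stored procedure name, its parameters, and the connection details are hypothetical, not taken from any real application; the point is simply that the procedure's SQL runs server-side on z/OS while the caller sits on a separate box:

```java
// Hypothetical sketch: invoking a DB2 for z/OS stored procedure through DDF
// from a Java client. The procedure GET_ACCT_BALANCE and its parameters are
// illustrative only.
import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class DdfCallSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://zos.example.com:446/DB2PLOC"; // hypothetical location
        try (Connection con = DriverManager.getConnection(url, "appuser", "secret");
             CallableStatement cs = con.prepareCall(
                     "CALL APPSCHEMA.GET_ACCT_BALANCE(?, ?)")) {
            cs.setString(1, "ACCT-0001");              // IN: account identifier
            cs.registerOutParameter(2, Types.DECIMAL); // OUT: current balance
            cs.execute();                              // one network round trip to DDF
            BigDecimal balance = cs.getBigDecimal(2);
            System.out.println("Balance: " + balance);
        }
    }
}
```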
Fast-forward to today, and those limitations have been addressed:
- Different z/OS workload management priorities can be assigned to different, user-specified classes of DDF-routed application work.
- Detailed DB2 accounting data is available at the package level, whether the package is executed by a stored procedure or by a program running in any other z/OS address space.
- DB2 stored procedure execution can be spread across multiple WLM-managed address spaces, and WLM can start additional instances of a given stored procedure execution environment on demand in response to changes in workload volume.
- The DB2 stored procedure tool scene looks great these days. To give you just a few examples: IBM's DB2 Development Center, a component of the DB2 Management Clients Package (a comprehensive stored procedure development tool); Compuware's Xpediter/DB2 Stored Procedures (a popular tool for stored procedure testing); and CA's TRILOGexpert TriTune (very useful for performance analysis of stored procedures and of DDF-related work in general).
Again, CICS is a great piece of software, and if you have it, go ahead and leverage it; however, if you're wondering whether or not a high-volume, highly available transactional application can be successfully implemented with a DB2 for z/OS data server fronted exclusively by application servers running on Intel- and/or RISC-based computers, wonder no more. It can.