"Big C" or "little c," Champions All
Earlier this week, at its Information on Demand conference in Las Vegas, IBM announced its Data Champions program, along with the first 23 people to be so designated. I am delighted to be among this inaugural class of IBM Data Champions, in large part because of the other people in the group. The 15 DB2-focused honorees (there are also Informix, U2, and Data Studio professionals on the list) are consultants and users who have made tremendous contributions to the worldwide DB2 community. IBM, through this new program, has made us "Big C" (i.e., "official") Champions, but what we have in common - and I suppose that this is the point of the award - is that we have worked to further the efforts of the legions of "little c" (as in not officially designated) champions all over the world who stand up and speak out for DB2 every working day of the year.
Having long been a "little c" DB2 champion myself, I think that I can speak to what motivates so many people to be active DB2 advocates, even when there is no direct corresponding financial reward (as might be earned by an IBM software sales representative). It comes down to having a strongly-held view that DB2 represents the very best in database technology - in terms of advanced functionality, in terms of reliability and availability, in terms of scalability and performance, and in terms of value received for the dollar (or whatever your unit of currency) spent. During my first year with IBM, when I was training to be a Systems Engineer (an in-the-field technical support person), I delivered a presentation that impressed one of my instructors. "You ought to consider going into sales," he said, but I wasn't interested in that line of work. I never was comfortable with trying to persuade someone to buy something. And yet, I've long spoken forcefully in favor of using DB2 for high-visibility, high-stakes applications - whether as an IBMer, as a DB2 user, or as an independent consultant. How to explain the seeming contradiction? Simple: I don't like to "sell," but I'm very much ready to speak to the advantages of building applications on a DB2 foundation. I'm just educating people - telling them what I feel they ought to know. This may sound like sales talk, but it doesn't feel that way to me.
And so it is that all kinds of folks - right now - are championing DB2. Many of these individuals - including, as mentioned previously, my fellow "Big C" Champions - have taken things a step further by working to equip "little c" champions with the knowledge that they need to be successful in their DB2 advocacy efforts. We facilitate that knowledge transfer through our leadership of regional and international DB2 user groups, through our presentations and Webcasts, and through our writing in technical journals and blogs. We help people to learn about new DB2 features, we share best practices, and we encourage people to think big with respect to using DB2 technology in new ways. We're Champions working with champions. That's called community. It's something that money doesn't buy, and it's something that DB2 has in spades. I encourage you to be a part of it.
So, What Makes for "Good" DB2 I/O Performance?
Recently, someone e-mailed me a portion of a DB2 monitor accounting report for an application program, and asked me if any of the numbers in the report were of the "red flag" variety (i.e., indicative of a performance problem). In responding, I mentioned that one figure did indeed stand out in a negative way: the 30-millisecond average wait time per synchronous read I/O (meaning: on average, every time DB2 had to bring a particular table or index page - as opposed to a chunk of pages in a prefetch request - into the buffer pool from the disk subsystem on behalf of the application program, said program had to be suspended for 30 milliseconds waiting for the I/O operation to complete). Funny thing is, 30 milliseconds of wait time per synchronous DB2 read was once considered to be good performance. How times have changed.
When I first started working with DB2 in the latter half of the 1980s, IBM's 3380 disk storage device was king of the hill. The associated 3880 storage controllers had little, if any, cache memory (maybe 16 or 32 megabytes), and the cache management algorithms were not very sophisticated. On top of that, data was transmitted between disk subsystem and mainframe DB2 server (DB2 for Linux/UNIX/Windows was not yet on the scene) over bundles of copper wire. It all seemed pretty fast to us at the time, and DB2 users did indeed aim to get the average wait time per synchronous read I/O below 30 milliseconds (20 milliseconds per sync read was indicative of really good performance).
The 1990s saw huge leaps forward in the capabilities of high-end disk storage systems (and saw these devices become options for users of LUW servers as well as for mainframes). Disk controller cache sizes jumped way up to multi-gigabyte territory, and the controllers got maximum bang from that resource thanks to advanced cache-management software that ran on powerful CPUs built into the units (a cool example of sophisticated cache handling was adaptive staging, whereby the disk controller monitored data-access patterns and shifted between cylinder-at-a-time and track-at-a-time staging of data to cache memory depending on whether sequential or random access was predominant at the time). Non-volatile memory for super-fast disk write operations became a standard feature, and fiber-optic disk-to-server connections really opened the throttle with respect to data transfer rates. I well remember the first time - around 1995 or so - that I looked over a DB2 monitor statistics report at an organization that had implemented a disk storage subsystem with gigabytes of controller cache. I was amazed to see that the average wait time per synchronous DB2 read I/O was about 5 milliseconds. Since that time, speed has increased even more, to the point that some DB2-using companies see average sync read wait times in the vicinity of 2 to 3 milliseconds - an order of magnitude better than what I'd seen in the DB2 Version 1 days.
So, what if you are a DB2 user and you see an average wait time per synchronous read I/O that's in the 20- to 30-millisecond range? That is NOT good by today's standards, but what can you do about it? Some thoughts:
- Spinning disk - even fast disk - ain't enough. To get great I/O performance, you need a lot of hits in the disk controller cache. To get that, you probably need gigabytes of cache memory in front of your disk volumes.
- Check the size of your DB2 buffer pool configuration. Whaddya got? A couple hundred meg worth of buffer space? HELLO! It's a 64-bit world, folks! A buffer pool config that's less than a gigabyte in size is kind of dinky, in my book. "OK," to me, means at least 2-4 GB, and "big" is north of 10 GB (yeah, you need the server memory to back it up, but you can get hundreds of gigabytes of memory on a high-end system these days). A too-small buffer pool means that your disk storage subsystem will get pounded, I/O-wise (maybe hundreds of I/O operations per second), and even high-performance disk devices can get bogged down with I/O contention. (A simple pool-enlargement sketch follows this list.)
- Clustering matters. Did you put much thought into performance implications when you chose clustering keys for your DB2 tables? Locality of reference (i.e., rows to be retrieved by a program being in close physical proximity to each other) can make a very big difference in the number of pages that DB2 has to examine (and bring into the buffer pool if they're not already there) in executing SQL statements. Are your programs getting 20 rows from 1 or 2 pages, or from 20 pages?
- Data compression can be a performance-booster. It's not just about saving disk space. Mainframers have known this for a long time, and now DB2 for LUW has a great data compression capability, as well. When you compress a table, the number of rows per page will typically increase by a factor of 2 to 3, and that can mean a drop in page I/O requests. (A compression example also follows this list.)
- Do NOT try to fool the DB2 optimizer. I've heard of sites where they fudge DB2 catalog statistics in an attempt to get DB2 to not choose prefetch for data access when doing access path selection. This is typically done because of an impression that prefetch reads "get in the way" of single-page synchronous reads, thereby slowing the latter. Folks, today's DB2 optimizer is the product of 25 years of development effort and experience, and it knows what it's doing. If you don't want DB2 to prefetch a lot of pages when a certain SELECT statement is executed, try telling it that you want the first few rows of the result set ASAP (via the OPTIMIZE FOR n ROWS clause), or that DB2 can quit fetching after a few rows have been retrieved (FETCH FIRST n ROWS ONLY) - an example follows this list. When, through catalog stats that do not reflect reality, you trick DB2 into thinking that just a few data and/or index pages will be scanned to generate a result set when in fact lots of pages will be examined, you will very likely end up driving lots more single-page synchronous reads than should be occurring for your workload, and that can really gum things up. It's best to be honest with DB2 - keep your stats current and accurate.
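On the buffer pool point above: enlarging a pool is a one-liner. Here's a minimal sketch - the pool names and sizes are made up, and you'd of course want real server memory behind the increase:
    -- DB2 for z/OS command: grow BP1 to 262,144 buffers (about 1 GB of 4 KB pages)
    -ALTER BUFFERPOOL(BP1) VPSIZE(262144)
    -- DB2 for LUW: same idea, via SQL
    ALTER BUFFERPOOL BP_DATA SIZE 262144;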
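On compression, turning it on is likewise simple - the object names below are invented, and keep in mind that on the mainframe side a subsequent REORG or LOAD builds the compression dictionary:
    -- DB2 for z/OS: compress at the tablespace level
    ALTER TABLESPACE MYDB.MYTS COMPRESS YES;
    -- DB2 9 for LUW: row compression at the table level
    ALTER TABLE SALES_HISTORY COMPRESS YES;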
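And here's what I mean by telling DB2 what you really want instead of fudging the statistics - a sketch with made-up table and column names (both clauses can be coded on one statement):
    SELECT ORDER_NO, ORDER_DATE, TOTAL_AMT
      FROM ORDERS
      WHERE CUST_ID = ?
      ORDER BY ORDER_DATE DESC
      FETCH FIRST 10 ROWS ONLY
      OPTIMIZE FOR 10 ROWS;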
Will technology advances eventually get us to where we expect average wait time per synchronous DB2 read I/O to be under a millisecond? Based on what I've seen over the past 20 years, I'd lean towards a "yes" on that one. We veterans will then have fun regaling young 'uns with stories of these things on which we used to store DB2 data. "Disks," we called them...
DB2 Notes from Warsaw (Day 4)
The 2008 International DB2 Users Group European Conference concluded a little while ago, and I'm looking forward to seeing many of my fellow attendees (and some new people) at the 2009 conference that will take place October 5 - 9 in Rome. This was another good day, from my perspective. Some snapshots:
- Mainframe DB2 is rockin' and rollin'. Roger Miller, a veteran member of the DB2 for z/OS development team at IBM's Silicon Valley Lab, gave his session attendees an update on the topic of DB2 for z/OS performance. He talked up the z10, IBM's top-of-the-line mainframe server. Processing capacity has long been a core strength of the mainframe platform, but this thing is a monster (in a good way, of course). The z10's engines are more than 50% faster than those of its predecessor, and you can get up to 64 of the CPUs in one box (and z/OS manages a large number of processors very effectively). Want a lot of memory on your z10 server? You can get up to 1.5 terabytes. What will people do with all those MIPS? How about running a lot of the native SQL stored procedures that you can deploy in a DB2 V9 environment (a bare-bones example of such a procedure appears after this list)? Roger talked about performance tests of DB2 V9 on a z10 server, with thousands of stored procedure calls per second. Native SQL procedures generally consume 30-40% less CPU time than external SQL procedures, AND - when invoked via DDF (the Distributed Data Facility component of DB2 for z/OS) - they can run on zIIP processors (specialty engines that - unlike general-purpose CPUs - do not factor into mainframe software pricing). Roger also talked about advances in disk I/O technology and mainframe I/O connections that benefit DB2 for z/OS performance: a chunk of 32 4KB pages can be brought into a DB2 buffer pool via prefetch in one millisecond. A number of organizations are migrating their DB2 Version 8 subsystems to Version 9, and Roger indicated that these companies can expect to see CPU reductions of around 3% upon migrating to Version 9 in Conversion Mode, and another 3% or so (for a total CPU efficiency gain of about 6%) once DB2 9 is in New Function Mode and new performance-enhancing features are being exploited. Some of the best performance gains are expected for programs that access LOB (large object) data, as multiple improvements have been made to this component of DB2. Roger also told attendees that the multi-row fetch and insert capability provided with DB2 V8 can dramatically reduce CPU costs for data-intensive programs, with the magnitude of this positive effect perhaps being greatest for distributed database applications that access DB2 via DDF. Roger concluded by offering up some goodies that people can look for in "version next" of DB2 for z/OS, including lots more concurrently active threads, lots more stuff moved above the 2 GB level in the DB2 database services address space, and a hash technique for super-fast data row location.
- The IBM panel did well, collectively and individually. Several leaders of IBM's DB2 development group, including Curt Cotner (IBM Fellow and CTO for DB2) and Matt Huras (chief kernel architect for DB2 for Linux/UNIX/Windows), answered a variety of questions from attendees during a lively 90-minute session. Some of the panel's answers had to do with new features expected to be delivered in the next release of DB2, with Jeff Josten of the DB2 for z/OS team mentioning an ALTER capability that will facilitate conversion of existing tablespaces to the new universal format (a combination of segmented and partitioned). Others had to do with outreach to various application programming communities (Curt spoke of IBM initiatives that are making it easier for people coding in languages such as Perl, Python, PHP, and Ruby to interact with DB2). This being DB2's 25th anniversary year, the panelists were asked to name their favorite all-time DB2 technology advances. On the DB2 for z/OS side, data sharing on the Parallel Sysplex got a mention, as did stored procedures and distributed database application support. Matt Huras cited the change to a threaded process model (from an agent model) for DB2 on Linux and UNIX servers, and John Hornibrook talked up the real-time statistics updates that enable the DB2 for LUW optimizer to make better access path decisions.
- Curt Cotner delivered a good "state of DB2" keynote address. IBM's DB2 CTO talked about the enduring strength of DB2 for z/OS in the large-enterprise market, noting that 59 of the world's top 59 banks run DB2 for z/OS. He also pointed out that UPS took top honors in a recent Winter Corporation survey of the world's largest databases and database workloads, with a peak load of over 1 billion SQL statements executed per hour on a DB2 for z/OS system. The aforementioned (in the above item on the panel discussion) switch to a threaded process model for DB2 on Linux and UNIX servers (it already used a threaded process model in a Windows environment) was cited as an enabler of a more unified design for DB2 on the mainframe and Linux/UNIX/Windows platforms, since DB2 for z/OS has always used that process model (a fact not well known due to nomenclature differences: the pieces of work that execute in a mainframe DB2 address space are called TCBs and SRBs, but they equate to Linux/UNIX/Windows threads). Near the end of his presentation, Curt spoke of the importance of IBM's new Data Studio tool, particularly as a means of enabling DB2 DBAs to much more effectively support application developers whose programs access DB2 databases from Java-based and other application server environments. Previously, it could be very difficult to tie a poor-performing SQL statement noticed by a DB2 DBA to the Java (for example) program that issued it. With Data Studio in the mix, DB2 can build a repository of application metadata that can greatly facilitate the linking of an SQL statement with the issuing program, thereby providing a DBA with information that he (or she) can use to help the associated programmer code a more efficient data-accessing statement.
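Here's the bare-bones shape of a DB2 9 native SQL procedure that I referred to in the first item above - the names and logic are invented for illustration. The key point is what's missing: no EXTERNAL clause and no WLM environment, because the procedure body executes within DB2:
    CREATE PROCEDURE GET_CUST_BALANCE
      (IN  P_CUST_ID  INTEGER,
       OUT P_BALANCE  DECIMAL(11,2))
      LANGUAGE SQL
    BEGIN
      SELECT BALANCE
        INTO P_BALANCE
        FROM CUSTOMER
        WHERE CUST_ID = P_CUST_ID;
    END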
Tomorrow morning, it's back to Atlanta for me. I've enjoyed my first visit to Poland, and I hope to return at some point in the future.
DB2 Notes from Warsaw (Day 3)
I'm blogging about my Day Three of the 2008 International DB2 Users Group European Conference on the morning of Day Four. Yesterday (Wednesday) I delivered my presentation (on what I call ultra-availability). Before that late-afternoon session, I was focused on business intelligence and data warehousing (much of the consulting work I've done over the past several months has been in this area). The day ended with a DB2 25th birthday party sponsored by IBM and held at an old manor house in the country just outside of Warsaw (the long bus ride back to the hotel proved to be a bonus, as I'll explain later).
Some of the take-aways from my Day Three:
- BI is a hot topic in DB2-land. I participated in the Business Intelligence and Data Warehousing Special Interest Group session (a SIG, in IDUG-ese - basically a "birds of a feather" get-together). The discussion was lively and covered a lot of ground. One of the IBMers present reminded people of the special DB2 for z/OS pricing (called DB2 for z/OS Value Unit Edition) available for qualifying BI workloads. The benefits and costs of ETL (extract/transform/load processes) were kicked around, with references to the need (or not) to aggregate data values bound for the data warehouse, differences between operational system and BI system database designs, and the potential integration of ETL with an organization's overall information lifecycle strategy (the latter brought up by consultant Jan Henderyckx, a reliably forward-thinking individual). The need for BI-supporting DB2 people to really understand what business users are trying to accomplish with a data warehouse was emphasized. One of the DB2 for z/OS participants spoke of the performance breakthroughs that can be achieved through the use of materialized query tables (MQT support, a feature well-known by DB2 for LUW people, was delivered on the mainframe platform with DB2 Version 8; a simple MQT example appears at the end of this entry). Near the end of the discussion, participants talked about the growing popularity of "operational BI", especially on the System z platform - a trend driven by users' desire to access detailed as well as aggregated data records, and by the importance of "data immediacy" (time-proximity to data change events).
- The latest on data warehousing and DB2 for z/OS. Willie Favero, one of IBM's top DB2 experts, delivered an excellent session on data warehousing and DB2 for z/OS. He started out by showing text from IBM's 1983 announcement of DB2 which positioned the DBMS as an excellent foundation for decision-support applications (this was before the term "data warehouse" emerged as a label for BI database systems). Willie noted that two of the factors driving an increase in data warehouse activity on the mainframe DB2 platform are SLAs that match those for operational database systems (and so call for maximum uptime, a strength of DB2 and System z), and ever-growing numbers of concurrent queries accessing data warehouses (the z/OS operating system is very good at managing very large numbers of concurrently active tasks). A recently completed performance benchmark run at IBM's lab in Poughkeepsie, New York, was described: a 50-terabyte database, with a 300 billion-row table, and 200-300 query-issuing clients hitting the warehouse at one time. Willie was very pumped up about the results of the benchmark, which will be documented in an IBM "red book" that should be out within the next couple of months. An interesting slide in Willie's presentation showed the evolution of decision support applications from those that used query and reporting to gain insight into what happened, to deep analysis to try to better predict what will happen, to the current leading-edge systems that enable analysis of "right now" events so as to see more clearly things that are happening now. The benefits of DB2 for z/OS hardware-assisted compression were covered: 50% space savings, on average (with savings of 80% or more seen for some tables), with virtually no overhead on read (uncompress) operations. A recent enhancement: starting with DB2 for z/OS V8, the 64 KB compression dictionary that goes with each compressed data set (each partition of a compressed partitioned tablespace has one) is stored "above the bar" (i.e., above the 2 GB level) in the DB2 address space. Willie wrapped up with some discussion of DB2 for z/OS query parallelism: 1) use it, because it works great, 2) let DB2 determine the degree of parallelism for queries, because it does that very well, 3) remember that sysplex query parallelism can deliver massive parallel processing by splitting a query across multiple DB2 subsystems in a data sharing group, and 4) remember that query parallelism is one of the best ways to drive usage of a zIIP processor (a specialized mainframe engine that can offload certain types of work from the general-purpose CPUs but which does not affect mainframe software pricing). (A short sketch of enabling query parallelism appears at the end of this entry.)
- People are asking the right questions about high availability. I delivered a presentation on ultra-availability, pushing people to think beyond merely "great" availability to "ultimate" availability, which might be described as "never down, never lose anything, even in a disaster recovery situation." In discussions with attendees after the session, I heard a lot about the costs - hardware, software, programming - associated with getting closer and closer to ultimate availability for a data-serving system. Super-high availability can indeed be a pricey proposition, but IT people can serve their organizations by developing and costing solutions for ultra-availability (versus settling for less by assuming that the organization won't commit to achieving an audacious availability goal) and letting upper management make the call as to whether or not the potential payoff is worth the required investment.
- You have to love a line of work that can stay fun over a 5-decade career. On the bus ride back to the Hilton following the aforementioned DB2 25th anniversary party, I had the pleasure of sitting next to Tor Stryker (I hope that I have the spelling right), a lead data architect for one of Norway's top insurance companies. Tor is in his fifth decade in IT (all with the same company), and he still gets a tremendous kick out of helping to advance his organization's IT capabilities. He spoke of the days, way back, when programmers had to write code that would move sections of their own programs in and out of server memory because the whole thing couldn't fit within the few kilobytes of available space. Recently, he's helped to extend the functionality of legacy CICS-DB2 programs to Java-based client systems by enabling them to be accessed via DB2 stored procedure calls, and he has been a leader in the development of rules-based applications that can be extended, functionality-wise, much more quickly than old-style monolithic applications, thereby enhancing his company's operational agility for competitive advantage. All of this information was delivered with smiles and enthusiasm that were inspiring. Getting paid to learn and to innovate is indeed a good thing, Tor. Thanks for the reminder.
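As promised in the BI SIG item above, here's a simple sketch of a materialized query table - the table and column names are invented:
    CREATE TABLE SALES_BY_REGION AS
      (SELECT REGION, SUM(SALE_AMT) AS TOTAL_SALES
         FROM SALES
         GROUP BY REGION)
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED;
    -- Populate (or re-populate) the MQT when you're ready
    REFRESH TABLE SALES_BY_REGION;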
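And on Willie's query parallelism advice, enabling it is about as easy as it gets - a special register for dynamic SQL, a bind option for static SQL (the package name below is illustrative):
    -- Dynamic SQL: let DB2 choose the degree of parallelism
    SET CURRENT DEGREE = 'ANY';
    -- Static SQL: bind (or rebind) the package with DEGREE(ANY)
    REBIND PACKAGE(COLL1.PROG1) DEGREE(ANY)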
DB2 Notes from Warsaw (Day 2)
A little rainy in the morning on this, the second day of the 2008 International DB2 Users Group European Conference, then just some high clouds after that. Following are a few interesting (to me, at least) nuggets of information from my day of technical sessions and hallway conversations (it's been said that some of the most useful knowledge that one gains from attending a conference such as this one comes from the so-called "coffee track"):
- The rules of table partitioning really did change a lot with DB2 for z/OS Version 8. Kurt Struyf, a Senior Consultant with Competence Partners in Belgium, delivered a very good presentation on the table-based (versus index-based) partitioning feature that was delivered with DB2 for z/OS Version 8. He went over some of the benefits of table-based partitioning, such as the ability to partition a table's data using one key while clustering data within partitions using another key (with index-based partitioning, the clustering and partitioning indexes are one and the same). He talked about data-partitioned secondary indexes (DPSIs), noting that while they can be good for availability (enabling the avoidance of the BUILD2 phase of online REORG, which in some cases can result in a multi-minute loss of data availability when a subset of a tablespace's partitions are reorganized online), they do not provide an application performance advantage. Kurt also covered the increase in the number of partitions that can be defined for a tablespace (now up to 4096), the dynamic adding of a new partition to an existing partitioned tablespace, and the partition-rotation capability that enables one to replace data in a partition so that what had been the first partition now logically becomes the last (e.g., the partition that had held data for 1997 becomes the receptacle for 2008 data). The presentation also laid out nomenclature changes that came with DB2 V8: a partitioning index is now simply one whose key begins with the column (or columns) of the table's partitioning key - it does not control partitioning (and a table-based partitioned table can have several partitioning indexes). Indexes defined on a table-based partitioned table that are not partitioning indexes are called secondary indexes, and these can be partitioned (with entries in separate files that correspond to the table's partitions) or not (Kurt recommended that partitioning indexes be made partitioned indexes, as well). A rough DDL sketch illustrating these concepts appears at the end of this entry. The session concluded with a description of changes in the output of the DISPLAY DATABASE command that support table-based partitioned tables, and with examples of various recovery and reorganization scenarios involving table-based partitioned tables. Kurt's presentation, and others delivered here this week, will be available to IDUG premier-level members on IDUG's Web site (www.idug.org) within a couple of months, and will be available on the site to basic-level members nine months after that.
- DB2 for z/OS dynamic statement caching is NOT just about avoiding the CPU cost of statement preparation. I had an interesting between-sessions discussion with Thomas Baumann, a long-time (and VERY knowledgeable) DB2 professional who works for Swiss Mobiliar, a large insurance company. Thomas described how Swiss Mobiliar uses the information in the dynamic statement cache (extracted via EXPLAIN STMTCACHE ALL and EXPLAIN STMTCACHE STMTID) to monitor and tune dynamic SQL statements that run on the company's production DB2 for z/OS system, and it seemed to me that while dynamic statement caching was initially touted as a way to avoid the CPU cost of re-preparing previously prepared dynamic SQL statements, the primary benefit of the technology to Swiss Mobiliar is in the area of enhanced performance management of the dynamic SQL workload. (The EXPLAIN STMTCACHE mechanics are sketched at the end of this entry.)
- .NET is a great application server platform to pair with a DB2 database (whether DB2 for z/OS or DB2 for Linux/UNIX/Windows). Frank Petersen of Bankdata (a large supplier of IT services to 15 Danish banks) gave an excellent presentation on the various ways in which a .NET-based application can work very well together with a DB2 database. He compared and contrasted the Java and .NET application environments (both have their strong points), and described the ways in which a .NET program can access DB2 data (focusing primarily on the ODBC, ADO.NET, and DB2 .NET data provider interfaces). Going above and beyond a presenter's call of duty, Frank passed out to all session attendees a CD-ROM containing the source code of a simple .NET-DB2 application program he had written, thereby providing a great starting point for further exploration of this programming technology. Nick Manu, who works in Belgium for insurance giant AXA, was sitting next to me in this session, and he talked about the tremendous performance results he'd seen in working with DB2 as a back-end database for programs running on Linux/UNIX/Windows-based application servers.
- What should DB2 professionals be doing with their time and talents? Jan Henderyckx, leader of the consulting firm BrainWare, delivered a thought-provoking presentation in which he urged DB2 professionals to focus on increasing the value that they deliver to their employing organizations by spending more of their time and creative energy on higher-value data-related activities, as opposed to just administering a DBMS. To this end, Jan suggested that people work to develop a catalog of the database services that should be made available within their respective enterprises, along with risk assessments associated with NOT having those services, together with potential organizational benefits that could be realized if the services were to be established. Among the services promoted by Jan were data governance (including not only data quality but data "lineage," which has to do with where data came from and how it got to where it is), data movement (which includes data replication), metadata management (emphasizing the need for metadata to deliver "actionability" - in other words, to actually be useful to people within the organization), and SOA (with an interesting description of the concept of an enterprise information platform). A challenge mentioned by Jan: getting to a unified approach to data service management and delivery can be complicated by the fact that at many companies various data-related roles are spread across multiple departments within an IT organization. He suggested that virtual teams of data professionals can be effective if the actual organizational structure can't be changed to bring data-related people together within a department.
- The DB2 optimizer: 25 years old (more if you count its developmental years before the general availability of DB2) and getting better all the time. Terry Purcell, optimizer guru with the DB2 for z/OS development organization at IBM's Silicon Valley Lab, gave a presentation on recent enhancements to the DB2 for z/OS optimizer. He began by laying out IBM's ongoing priorities as they pertain to optimizer development: 1) improve access path selection (largely a matter of making SQL statement access path cost estimation ever more accurate and comprehensive), 2) provide more access path choices, and 3) provide better access path stability. Terry then reviewed some of the optimizer-related enhancements delivered with DB2 for z/OS V8 (including materialized query tables, improved star-join processing, backward index scans, and the ability to change a table's clustering index key), and some of the V8 SQL enhancements (such as upping the SQL statement size limitation to 2 MB, the ability to GROUP BY an expression, and support for common table expressions and recursive SQL). After that Terry covered DB2 V9 optimizer-related enhancements (examples include plan stability, which is basically the ability to back up and restore access paths; histogram statistics for SQL statement optimization; and the ability to create an index on an expression), along with V9 SQL-related enhancements (e.g., the MERGE statement, also known as "upsert"; OLAP improvements; and the SQL extensions INTERSECT and EXCEPT). The last part of the presentation covered two of Terry's personal-favorite V9 optimizer enhancements. The first of these is a sort-avoidance technique that can kick in when a statement has a large initial result set and contains an ORDER BY and a FETCH FIRST n ROWS (with "n" being a relatively small value). Rather than sort the large initial result set (say, a million or more rows) just to return the first three rows of the ordered set (if FETCH FIRST 3 ROWS is specified), DB2 V9 will start going through the values in the ORDER BY column, comparing them in good-sized chunks at a time and all the while retaining the three smallest values found (again, assuming a specification of FETCH FIRST 3 ROWS - the number could be any relatively small value). In the end, the rows with the 3 smallest values of the ORDER BY column are identified in a much more efficient manner versus a full-blown sort operation. Terry concluded with a review of the new LASTUSED column of the SYSINDEXSPACESTATS table, which is populated by the DB2 real-time statistics function and which indicates the last time an index was used (and I mean used for data location or for referential integrity purposes - not just accessed because of a row INSERT or DELETE that drives the insert or deletion of an entry in an index leaf page). This should be especially useful when it comes to figuring out which indexes could be dropped without hurting application performance - if you see that a given index hasn't been used in three months, for example, it might be a good candidate for dropping (which would save disk space and reduce the cost of insert and delete and some update operations targeting the associated table). People with dynamic SQL workloads will especially like this feature, since the SYSPACKDEP catalog table just shows indexes used by static SQL statements. (A sample LASTUSED query appears at the end of this entry.)
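Here's the rough table-based partitioning sketch mentioned in the first item above - all names and values are invented, and the DDL is abbreviated. The table is partitioned on a date column, clustered on a different key, and the oldest partition can be rotated around to hold the newest data:
    CREATE TABLE SALES
      (SALE_DATE DATE NOT NULL,
       CUST_ID   INTEGER NOT NULL,
       SALE_AMT  DECIMAL(11,2) NOT NULL)
      IN MYDB.SALESTS
      PARTITION BY (SALE_DATE)
        (PARTITION 1 ENDING AT ('2006-12-31'),
         PARTITION 2 ENDING AT ('2007-12-31'),
         PARTITION 3 ENDING AT ('2008-12-31'));
    -- Cluster on a key other than the partitioning key
    CREATE INDEX SALES_IX1 ON SALES (CUST_ID) CLUSTER;
    -- Reuse the oldest partition for the newest data
    ALTER TABLE SALES
      ROTATE PARTITION FIRST TO LAST
      ENDING AT ('2009-12-31') RESET;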
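The dynamic statement cache mechanics that Thomas described look like this - the statement ID value is made up, and the output of the ALL form lands in the issuer's DSN_STATEMENT_CACHE_TABLE:
    -- Snapshot execution statistics for everything in the dynamic statement cache
    EXPLAIN STMTCACHE ALL;
    -- Get the access path for one cached statement, by its statement ID
    EXPLAIN STMTCACHE STMTID 1234;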
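And the LASTUSED query I alluded to might look something like this - a sketch, with the three-month cutoff being my own arbitrary choice:
    SELECT DBNAME, NAME, LASTUSED
      FROM SYSIBM.SYSINDEXSPACESTATS
      WHERE LASTUSED < CURRENT DATE - 3 MONTHS
      ORDER BY LASTUSED;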
Tomorrow afternoon I'll deliver my presentation (on what I call ultra-availability), and I'll post another blog entry with things learned and discussed during the day.
DB2 Notes from Warsaw
I'm in Warsaw, Poland, this week for the 2008 International DB2 Users Group European Conference. It's been a good first day. This is my first time in Poland, and I have to say that the level of human energy in Warsaw is quite impressive. Construction cranes reach into the sky all around the city center, where multiple new commercial and residential high-rises are under construction (the Hilton, where we're meeting, is itself only about a year old). I've eaten very well, and I'm planning on trying the sushi place adjacent to the hotel sometime later in the week.
A few notes and observations from Day 1:
- The kick-off keynote was first-rate. Marc Woods, a gold medal-winning Paralympic swimmer, delivered an excellent talk on the trials and triumphs he's experienced as a competitive swimmer since losing the lower part of one of his legs to cancer at age 17. I've sat through many a keynote speech in my time. Some stick with me, and some don't. This one will. Two key take-aways: don't be afraid to set audacious goals, and know that some wins require multiple years of planning and effort (so don't give up prematurely).
- IDUG membership is growing nicely. Just prior to Marc's keynote, IDUG President Julian Stuhler addressed the opening session audience and shared some information about the ongoing growth of IDUG membership. There are now more than 12,000 registered IDUG members, representing more than 100 countries. Registering as an IDUG member is easily done at the IDUG Web site (www.idug.org). Basic membership is free, and premier membership, which offers additional benefits, is available for a modest annual fee (or by attending an IDUG conference such as this one going on now in Warsaw).
- How big is a "big" DB2 for z/OS buffer pool? Bigger than you might think. Thomas Baumann, a DB2 professional who has worked for Swiss Mobiliar (a large insurance company) for the past 16 years, delivered one of his typically excellent and information-packed presentations, this time on the subject of DB2 for z/OS virtual storage management (and, in particular, how it relates to the use of dynamic SQL). In his session, Thomas mentioned that Swiss Mobiliar's primary DB2 for z/OS production subsystem has a buffer pool configuration that's in excess of 14 GB, size-wise. Swiss Mobiliar is running with DB2 V8, which brought 64-bit virtual and real storage addressing to the mainframe DB2 platform (up from the old 31 bits). A few months ago, I posted a blog entry in which I urged DB2 for z/OS people to take advantage of 64-bit addressing, especially as it pertains to buffer pool sizing. The folks at Swiss Mobiliar are certainly on board with regard to that message. Thomas made an interesting observation: whereas people once thought of a 100,000-buffer pool as being big (that's 400 MB if we're talking about 4 KB buffers), now it's probably reasonable to think of a 100,000-buffer pool as being of medium size, with 1,000,000 or more buffers (4 GB if 4 KB buffers) being a better threshold for qualification as a "large" pool (and that's just one pool in what would likely be a multi-pool configuration). Thomas's presentation (and all the others delivered here), crammed with performance analysis formulas and rules of thumb, will be available on IDUG's Web site within the next couple of months for IDUG premier member access, and nine months after that for basic-level member access.
- DB2 for z/OS V9 has some very attractive features related to tables and tablespaces. Phil Grainger of CA delivered an informative presentation on this topic. He mentioned that the new SQL statement TRUNCATE TABLE essentially provides a means of doing with SQL what could be done before via an execution of the LOAD utility with a dummy input data set (i.e., an empty file) and the REPLACE option: namely, empty a table of data very quickly (even more quickly than a mass DELETE of a table in a segmented tablespace - a process that just marks space occupied by table rows as being empty, versus actually deleting the rows one-by-one). Staying with that analogy, Phil explained that the new ADD CLONE option of ALTER TABLE, combined with the new EXCHANGE statement, enables one to very quickly do with SQL what could be done by way of the LOAD utility with the REPLACE option and a non-empty input data set. Basically, you create a clone of a base table, then populate that clone with the data with which you want to replace the base table data, and then make the clone the new base table through execution of an EXCHANGE statement (this causes DB2 to update its catalog so that the table name is associated with the data set(s) of what had formerly been the clone table). A short example of TRUNCATE and the clone/exchange sequence appears at the end of this entry. Phil also talked up the benefits of the new universal tablespace (a tablespace that is both segmented and partitioned), and of the new "by growth" table partitioning option (enabling one to get the benefits of partitioning - especially useful for very large tables - without having to specify partitioning range values). Topping it off, we got some good information about reordered row format (now a DB2 standard), that being the label for a feature by which DB2 for z/OS V9 physically relocates varying-length columns to the "back" of table rows, "behind" all the fixed-length columns, whilst continuing to return retrieved rows with the column order as specified in the CREATE TABLE statement (this under-the-covers column reordering enables DB2 to locate data in varying-length columns within a row in a much more efficient manner).
- Trends favoring the use of DB2 for z/OS for data warehousing. Over lunch with a couple of fellow attendees (including Lennart Henang of Henfield AB), I got into a discussion about the rise in data warehousing activity on the mainframe DB2 platform (about which I blogged a few weeks ago). It was mentioned that one driving factor could be the increased interest on the part of many organizations in getting closer and closer to "real time" BI, in which data changes are analyzed for insight very soon after (or even as) they occur, versus taking these changes and batch-loading them into a data warehouse nightly for analysis the next day. When that source data is in a DB2 for z/OS database, that can lead to a desire to have analysis also occur on that platform. In fact, the now-64-bit architecture of DB2 for z/OS (enabling much larger buffer pool configurations), combined with the unparalleled ability of the z/OS operating system to manage concurrent workloads with widely divergent characteristics, is leading some companies to think very seriously about running analysis-oriented queries against the actual DB2 for z/OS database that is accessed by the operational online transaction-processing workload. Think this can't be done? Think again. I'm not saying that you should toss a data warehouse system in favor of just querying the OLTP database (the database design of a data warehouse is generally much better suited to OLAP and other forms of intense analysis than is a typical OLTP database design), but some querying of the operational database for BI purposes could be a useful component of an overall business intelligence application system that would also feature a true warehouse.
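Here's the short TRUNCATE and clone/exchange example promised above - the table names are invented, and note that ADD CLONE requires the base table to reside in a universal tablespace:
    -- Empty the table very quickly
    TRUNCATE TABLE ORDERS_STAGE IMMEDIATE;
    -- Create a clone of the base table, populate it, then swap it in
    ALTER TABLE ORDERS ADD CLONE ORDERS_CLONE;
    -- ... load ORDERS_CLONE with the replacement data ...
    EXCHANGE DATA BETWEEN TABLE ORDERS AND ORDERS_CLONE;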
That's it for now. More to come tomorrow.
On DB2 and Table-Pinning
I recently fielded, via e-mail, an entertaining question about "pinning" a DB2 table in memory (i.e., having the whole table resident in virtual storage on the DB2 server). I use the term "entertaining" not to poke fun at the question (or the questioner), but to convey that I enjoyed answering the question.
Among some people who are familiar with certain non-IBM DBMSs but are relatively new to DB2, there can be a bit of confusion about table-pinning in a DB2 environment. This may be due to the fact that table-pinning is covered in the documentation of some other DBMSs but is not mentioned in any of the DB2 manuals. Thus it is that one might conclude that DB2 does not support the pinning of tables in memory. In fact, DB2 has ALWAYS supported table-pinning - the difference is in nomenclature and in the process by which a DB2 table gets pinned.
In some non-DB2 DBMS environments, a table that is to be pinned in memory must be explicitly marked as such. With DB2, this is not the case. Why? Because some DBMSs do not allow a database administrator or systems programmer to control the size of a buffer pool, nor do they provide support for multiple buffer pools. Instead, there is a single buffer pool (aka a page cache) and the DBMS dynamically determines the amount of memory to be allocated to the pool (based on the available memory on the server and the nature of the database-accessing workload). Given that situation, a table that is not marked as to-be-pinned is unlikely to be entirely resident in memory unless it is quite small and very frequently accessed.
Now, don't get me wrong - DBMS management of buffer pool sizing can be quite useful in some cases, and indeed DB2 Version 9 provides a system-managed buffer pool sizing option. Pinning a table in memory, however, is about the database administrator exerting control, and DB2 DBAs have been able to do this since Version 1 Release 1 back in the early 1980s. How do I pin a DB2 table in memory? Easy - I just assign it to a buffer pool that has enough buffers to hold all the table's pages. Here's a very simple example: I have a DB2 table, called XYZ, that occupies 1000 4K pages (the page size doesn't matter - could be 4K pages, 8K, 16K, whatever). I want to pin this table in memory. I set up buffer pool BP8 (the actual buffer pool name doesn't matter) with 1000 4K buffers. I assign table XYZ (actually, the tablespace in which XYZ is located) to BP8 and, voila, it's pinned in memory. What if I also want to pin table ABC, with 500 4K pages, in memory? No problem. I can set up BP9 with 500 buffers and assign table ABC's tablespace to that buffer pool, or I can add 500 buffers to BP8 and have both XYZ and ABC pinned in that pool.
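In DB2 for z/OS terms, that hypothetical XYZ/BP8 example boils down to two statements (the tablespace name is made up; the buffer pool change for the tablespace takes effect when its data sets are next opened):
    -- Give BP8 enough 4 KB buffers to hold all of table XYZ's pages
    -ALTER BUFFERPOOL(BP8) VPSIZE(1000)
    -- Assign XYZ's tablespace to BP8
    ALTER TABLESPACE MYDB.XYZTS BUFFERPOOL BP8;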
Of course, you could go overboard with table-pinning. I, for one, would not want to pin a 1-terabyte table in memory. On the other hand, I wouldn't confine my pinning plans to itty-bitty tables. In today's 64-bit addressing world, you really can think about pinning a pretty big DB2 table - say, one that has a hundred thousand or more pages. I might actually do that if said table were really critical to the performance of an important application.
How big is too big, when you're talking about table-pinning? The answer to that question doesn't involve rocket science. First of all, I would start out NOT pinning any tables in memory, and would go the pinning route only if I needed to boost the performance of a certain program or programs by eliminating read I/O wait time to the fullest extent possible. If I do decide to pin, the amount of table-pinning that I do (in terms of the number of pages belonging to pinned tables) will depend in large part on the amount of memory that is available on the server. If my server has 50 gigabytes of memory, should I think about pinning 50 gigabytes' worth of tables in memory? Of course not - you don't ever try to use ALL of a server's memory for DB2. You also, however, don't want to under-utilize the server's memory resource (your company paid for the memory, so you want to get maximum bang for that buck). When I'm trying to determine whether or not I'm using too much of a server's memory for DB2 buffers, I take a look at the rate of demand paging to auxiliary storage (that's a mainframe term, but you Linux/UNIX/Windows people will know what I mean). In other words, I want to see how often a referenced page (and I'm talking about ALL pages in memory, not just those holding DB2 data) has to be read in from auxiliary storage on disk (also known as page data sets in mainframe-ese) because it got paged out of virtual storage by the operating system (a system monitor product can provide this information). If the rate of demand paging to auxiliary storage is less than one per second, my assessment would be that you're under-utilizing server memory, and a good way to rectify that situation would be to make more of that memory available to DB2 in the form of additional buffers, and one possible use of the extra buffer pool space would be more table-pinning. If the rate of demand paging to disk is in the single digits of pages per second, I'd still be fine with giving more memory to DB2. If the demand paging rate is in the double digits per second, I might want to pass on bulking up the DB2 buffer pools and doing more table-pinning.
Can you pin DB2 indexes in memory, as well as (or instead of) tables? Of course you can. As with tables, it's a matter of assigning an index to a buffer pool that has enough buffers to hold all of the index's pages.
Something I mentioned earlier bears repeating: it's a good idea to start out NOT pinning any DB2 objects in memory. If you do that (i.e., you don't pin anything), and application performance is good, there's no need to pin. Even if you need to cut down on disk read I/Os in order to improve application response time and/or throughput, try doing that by enlarging the buffer pool before you try pinning anything. If a larger buffer pool doesn't do the trick, you can think about pinning certain DB2 objects in memory.
So, don't be put off by differences in nomenclature and database administration in a DB2 environment versus other DBMSs. I put stuff in the trunk of my car, while my friends in the UK speak of stowing items in the boot, but we mean the same thing. One has always been able to pin DB2 objects in memory, even though the DB2 doc doesn't use that particular term. Go ahead and pin, my friends - but pin wisely.