DB2 Notes from Warsaw
I'm in Warsaw, Poland, this week for the 2008 International DB2 Users Group European Conference. It's been a good first day. This is my first time in Poland, and I have to say that the level of human energy in Warsaw is quite impressive. Construction cranes reach into the sky all around the city center, where multiple new commercial and residential high-rises are under construction (the Hilton, where we're meeting, is itself only about a year old). I've eaten very well, and I'm planning on trying the sushi place adjacent to the hotel sometime later in the week.
A few notes and observations from Day 1:
- The kick-off keynote was first-rate. Marc Woods, a gold medal-winning Paralympic swimmer, delivered an excellent talk on the trials and triumphs he's experienced as a competitive swimmer since losing the lower part of one of his legs to cancer at age 17. I've sat through many a keynote speech in my time. Some stick with me, and some don't. This one will. Two key take-aways: don't be afraid to set audacious goals, and know that some wins require multiple years of planning and effort (so don't give up prematurely).
- IDUG membership is growing nicely. Just prior to Marc's keynote, IDUG President Julian Stuhler addressed the opening session audience and shared some information about the ongoing growth of IDUG membership. There are now more than 12,000 registered IDUG members, representing more than 100 countries. Registering as an IDUG member is easily done at the IDUG Web site (www.idug.org). Basic membership is free, and premier membership, which offers additional benefits, is available for a modest annual fee (or by attending an IDUG conference such as this one going on now in Warsaw).
- How big is a "big" DB2 for z/OS buffer pool? Bigger than you might think. Thomas Baumann, a DB2 professional who has worked for Swiss Mobiliar (a large insurance company) for the past 16 years, delivered one of his typically excellent and information-packed presentations, this time on the subject of DB2 for z/OS virtual storage management (and, in particular, how it relates to the use of dynamic SQL). In his session, Thomas mentioned that Swiss Mobiliar's primary DB2 for z/OS production subsystem has a buffer pool configuration in excess of 14 GB. Swiss Mobiliar is running DB2 V8, which brought 64-bit virtual and real storage addressing to the mainframe DB2 platform (up from the old 31 bits). A few months ago, I posted a blog entry in which I urged DB2 for z/OS people to take advantage of 64-bit addressing, especially as it pertains to buffer pool sizing. The folks at Swiss Mobiliar are certainly on board with that message. Thomas made an interesting observation: whereas people once thought of a 100,000-buffer pool as big (that's 400 MB if we're talking about 4 KB buffers), it's probably now more reasonable to call a 100,000-buffer pool medium-sized, with 1,000,000 or more buffers (4 GB with 4 KB buffers) being a better threshold for qualification as a "large" pool (and that's just one pool in what would likely be a multi-pool configuration). I've put a sketch of the sizing arithmetic, and of the command you'd use to grow a pool, after this list. Thomas's presentation, crammed with performance-analysis formulas and rules of thumb, will (like all the others delivered here) be available on IDUG's Web site within the next couple of months for IDUG premier member access, and nine months after that for basic-level member access.
- DB2 for z/OS V9 has some very attractive features related to tables and tablespaces. Phil Grainger of CA delivered an informative presentation on this topic. He mentioned that the new SQL statement TRUNCATE TABLE essentially provides a means of doing with SQL what could previously be done by executing the LOAD utility with a dummy input data set (i.e., an empty file) and the REPLACE option: namely, emptying a table of data very quickly (even more quickly than a mass DELETE of a table in a segmented tablespace, a process that just marks the space occupied by table rows as empty, rather than actually deleting the rows one by one). Staying with that analogy, Phil explained that the new ADD CLONE option of ALTER TABLE, combined with the new EXCHANGE statement, enables one to do very quickly with SQL what could be done by way of the LOAD utility with the REPLACE option and a non-empty input data set. Basically, you create a clone of a base table, populate that clone with the data with which you want to replace the base table's data, and then make the clone the new base table through execution of an EXCHANGE statement (this causes DB2 to update its catalog so that the table name is associated with the data set(s) of what had formerly been the clone table). Phil also talked up the benefits of the new universal tablespace (a tablespace that is both segmented and partitioned), and of the new "by growth" table partitioning option (enabling one to get the benefits of partitioning, especially useful for very large tables, without having to specify partitioning range values). I've put a short SQL sketch of these statements after this list. Topping it off, we got some good information about reordered row format (now a DB2 standard), the label for a feature by which DB2 for z/OS V9 physically relocates varying-length columns to the "back" of table rows, "behind" all the fixed-length columns, whilst continuing to return retrieved rows with the column order as specified in the CREATE TABLE statement (this under-the-covers column reordering enables DB2 to locate data in a row's varying-length columns much more efficiently).
- Trends favoring the use of DB2 for z/OS for data warehousing. Over lunch with a couple of fellow attendees (including Lennart Henang of Henfield AB), I got into a discussion about the rise in data warehousing activity on the mainframe DB2 platform (about which I blogged a few weeks ago). It was mentioned that one driving factor could be many organizations' increased interest in getting closer and closer to "real time" BI, in which data changes are analyzed for insight very soon after (or even as) they occur, versus batch-loading those changes into a data warehouse nightly for analysis the next day. When the source data is in a DB2 for z/OS database, that can lead to a desire to have analysis also occur on that platform. In fact, the now-64-bit architecture of DB2 for z/OS (enabling much larger buffer pool configurations), combined with the unparalleled ability of the z/OS operating system to manage concurrent workloads with widely divergent characteristics, is leading some companies to think very seriously about running analysis-oriented queries against the actual DB2 for z/OS database that is accessed by the operational online transaction-processing workload. Think this can't be done? Think again. I'm not saying that you should toss a data warehouse system in favor of just querying the OLTP database (the database design of a data warehouse is generally much better suited to OLAP and other forms of intense analysis than is a typical OLTP database design), but some querying of the operational database for BI purposes could be a useful component of an overall business intelligence application system that would also feature a true warehouse.
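First, the buffer pool sketch promised above. The arithmetic is just buffer count times page size: 100,000 4 KB buffers is 400 MB, and 1,000,000 4 KB buffers is 4 GB. Growing (or inspecting) a pool is done with DB2 console commands; here's a minimal sketch, with BP8 as an arbitrary stand-in for one of your 4 KB pools (and assuming you have the real storage to back the enlarged pool):

    -DISPLAY BUFFERPOOL(BP8) DETAIL
    -ALTER BUFFERPOOL(BP8) VPSIZE(1000000)

The DISPLAY shows the pool's current size and activity; the ALTER grows it to 1,000,000 buffers, i.e. 4 GB of 4 KB pages. As a side note, DB2 V8 also introduced the PGFIX(YES) option of ALTER BUFFERPOOL, which fixes a pool's buffers in real storage and can cut the CPU cost of I/O-heavy pools; again, make sure the real storage is there before you use it.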
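And here's the promised SQL sketch of the V9 statements Phil covered, based on the DB2 9 syntax as I understand it. All of the object names (database, tablespace, table) are hypothetical, and note that a clone can only be added to a table residing in a universal tablespace, which ties these features together nicely:

    -- Empty a table very quickly (the SQL equivalent of LOAD REPLACE with an empty input):
    TRUNCATE TABLE PROD.ORDERS REUSE STORAGE IMMEDIATE;

    -- A "by growth" universal tablespace: segmented AND partitioned, with new
    -- partitions added as needed up to MAXPARTITIONS (no range values required):
    CREATE TABLESPACE ORDERTS IN PRODDB MAXPARTITIONS 64 SEGSIZE 32;

    -- Fast whole-table replacement: clone, populate, exchange.
    ALTER TABLE PROD.ORDERS ADD CLONE PROD.ORDERS_NEW;
    -- ... populate PROD.ORDERS_NEW (via LOAD, INSERT, etc.) ...
    EXCHANGE DATA BETWEEN TABLE PROD.ORDERS AND PROD.ORDERS_NEW;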