IBM IOD 2009 - Day 3
Following is some good stuff that I picked up during day 3 of IBM's 2009 Information on Demand conference:
IBM Information Management software executives had some interesting things to say - IBM got some of us bloggers together with some software execs for a Q&A session. A few highlights:
- Interest in DB2 pureScale, the recently announced shared-data cluster for the DB2/AIX/Power platform, is strong. Demo sessions at the conference this week were full-up.
- It used to be that organizations asked IBM about products. These days, companies are increasingly likely to ask about capabilities. IBM is responding by packaging software (and sometimes hardware) products into integrated offerings designed to fulfill these capability requirements.
- New products at the upper end of IBM's information transformation software stack are driving requirements at the foundational level of the stack (where you'll find the database engines such as DB2), and even into IBM's hardware platforms (such as the Power Systems server line).
- Regarding software-as-a-service (SaaS) and cloud computing, IBM sees a "broadening of capabilities" with respect to software delivery and pricing models.
- The IBM folks in the room were pretty keyed up about the company's new Smart Archive offerings, which can - among other things - drive cost savings by using discovery and analytics capabilities to determine which information (structured and unstructured data) an organization has to retain and archive.
- Jeff Jonas, one of IBM's top scientists, talked about the huge increase in the amount of data streaming into many companies' systems (much of it from sensors of various kinds). People may assume that their organization can't handle this influx of information, but Jeff noted that the more data you get into your system, the faster things can go ("It's like a jigsaw puzzle: the more pieces you put together, the more quickly you can correctly place other pieces").
- Jeff also spoke of "enterprise amnesia": a firm has so much information to deal with that it loses track of some of it. That's how a large retailer can end up hiring a person who had previously been fired for stealing from that same company.
After I'd covered DB2 query parallelism in my session, an attendee suggested that in a CPU-constrained mainframe DB2 data warehouse system, adding one or more zIIP engines and turning on query parallelism (something that probably wouldn't be activated in a system with little CPU headroom) could provide a double benefit: more cycles to enable beneficial utilization of DB2's query parallelism capability, and a workload (parallelized queries) that could drive utilization of the cost-effective zIIPs. Spot on - couldn't have said it better myself (I wrote about query parallelism and zIIP engines in a comment that I added to a blog entry that I posted last year).
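In case it's helpful, here's a minimal sketch of how DB2 for z/OS query parallelism is typically enabled; the collection and package names are made-up placeholders:

    -- Dynamic SQL: parallelism is governed by the CURRENT DEGREE special register
    SET CURRENT DEGREE = 'ANY';

    -- Static SQL: rebind the package with the DEGREE(ANY) option
    -- (MYCOLL and MYPKG are hypothetical names)
    REBIND PACKAGE(MYCOLL.MYPKG) DEGREE(ANY)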
Bernie Spang is a man on a mission - IBM's Director of Strategy and Marketing for InfoSphere and Information Management software wants companies to have trusted information. Too often, people confuse "trusted" with "secure." "Secure" is important, but "trusted," in this context, refers to data that is reliable, complete, and correct - the kind of data on which you could confidently base important decisions. Bernie is out to make IBM's InfoSphere portfolio the go-to solution for organizations wanting to get to a trusted-information environment. There's a lot there: data architecting, discovery, master data management, and data governance are just a few of the capabilities that can be delivered by way of various InfoSphere offerings. It's all about getting a handle on the state of your data assets, rationalizing inconsistencies and discrepancies, and providing an interface that leads to agreed-upon "true" values (and this has plenty to do with integrating formerly siloed data stores). If you want to get your data house in order, there's a way to get that done.
Chris Eaton wants mainframe DB2 people to be at ease with DB2 for LUW lingo - Chris, one of the technical leaders in the DB2 for Linux, UNIX, and Windows development organization at IBM's Toronto Lab, knows that there are some DB2 for LUW concepts and terminologies that are a little confusing to mainframe DB2 folks, and he wants to clear things up. SQL data manipulation language statements are virtually identical across DB2 platforms, but there are some differences in the DBA and systems programming views of things on the mainframe and LUW platforms (largely a reflection of significantly different operating system and file system architectures and interfaces). In a session on DB2 for LUW for mainframe DB2 people, Chris explained plenty. Some examples:
- A copy of DB2 running on an LUW server is an instance. A copy of DB2 running on a mainframe server is a subsystem.
- A DB2 for z/OS subsystem has its own catalog. A DB2 for LUW database, several of which can be associated with a DB2 instance, has its own catalog (and its own transaction log - something else that's identified with a subsystem in a mainframe DB2 environment). The instance/database relationship is illustrated in the first sketch after this list.
- So-called installation parameter values are associated with a DB2 subsystem on a mainframe (most of these values are specified in a module known as ZPARM). The bulk of DB2 for LUW installation parameter values are specified at the database level.
- A DB2 for z/OS thread is analogous to a DB2 for LUW agent, and a DB2 for LUW thread is analogous to a mainframe DB2 TCB or SRB (i.e., a dispatchable piece of work in the system).
- A mainframe DB2 data set would be called a file in a DB2 for LUW environment, and a mainframe address space would be referred to as memory on an LUW server.
- The DB2 for LUW lock list is what mainframe people would call the IRLM component of DB2.
- The DB2 for LUW command FORCE APPLICATION is analogous to the -CANCEL THREAD command in a DB2 for z/OS environment (both appear in a sketch after this list).
- Self-tuning memory management (the ability for DB2 to automatically monitor and adjust amounts of memory used for things such as page buffering, package caching, and sorting) works very well on the LUW platform, and Chris recommends use of this feature (a sketch after this list shows how it's switched on).
- Chris favors the use of DMS table spaces (versus SMS) in a DB2 for LUW system, and the use of automatic-storage databases over DMS table spaces for most objects in a DB2 database.
- Chris is big on the use of administrative views as a means of easily obtaining DB2 for LUW performance and system information using SQL (a sample query follows this list).
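To put a little flesh on some of those distinctions, here's a minimal DB2 for LUW command line sketch (SAMPLE is the demo database that ships with DB2, and everything below is illustrative rather than taken from Chris's session):

    db2 list db directory     # the databases cataloged in this instance
    db2 connect to sample
    # each database has its own catalog - the SYSCAT views:
    db2 "select tabname from syscat.tables fetch first 5 rows only"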
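Along the same lines, a sketch of working with database-level configuration parameters and of FORCE APPLICATION next to its mainframe analog (the application handle 1234 is made up):

    db2 get db cfg for sample                          # display database-level configuration values
    db2 update db cfg for sample using LOGPRIMARY 10   # change a parameter at the database level
    db2 list applications                              # find an application's handle
    db2 "force application (1234)"                     # end that application

    # Mainframe analog: -DISPLAY THREAD(*) to get a thread's token,
    # then -CANCEL THREAD(token) to terminate it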
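And a sketch of enabling self-tuning memory and creating an automatic-storage database (the database name and storage paths are hypothetical):

    db2 update db cfg for sample using SELF_TUNING_MEM ON   # let DB2 tune its memory areas itself
    db2 "alter bufferpool IBMDEFAULTBP size automatic"      # put the default buffer pool under STMM control
    db2 "create database mydb automatic storage yes on /db2/data dbpath on /db2/system"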
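Finally, the administrative views are just SQL objects that can be queried like tables. Assuming the SYSIBMADM.BP_HITRATIO view that's available in DB2 9.x, buffer pool hit ratios are one query away:

    -- buffer pool hit ratios via an administrative view (no snapshot API programming required)
    SELECT bp_name, total_hit_ratio_percent
    FROM sysibmadm.bp_hitratio;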