
No Analysis Required!
The Performance Analyst's Paradox.

by Dr. Neil J. Gunther, Computer Dynamics Consulting

Brave New World of Computing
Those of us involved in the Performance Management (PM) of enterprise level computing know that the technology is changing rapidly and so are the requirements of PM. Although most of us recognize this rapid change, none of us is entirely sure how to accommodate those changes into our discipline.

This article focuses on two emerging influences that are already changing the way PM is done and will continue to do so in the future:

  • Zero-point Pricing
  • Autonomous Tuning

These two trends offer many rewards, but they also present their own set of problems for the performance analyst. Performance experts need to be aware of them and to develop PM strategies accordingly. One thing is very clear: the traditional, centralized approach to performance analysis and capacity planning is rapidly being eroded as these trends gain influence.

After examining these trends in some detail, you should nevertheless be convinced that there is still an important role for performance analysis and PM. That role is prescribed by the current lack of predictable performance when products from different vendors are brought together to support distributed business functions.

This lack of predictability is a source of significant hidden cost that is slowly being recognized by those who have bought into distributed applications such as client/server computing. For PM to succeed in the face of trends like zero-point pricing and autonomous tuning, instrumentation for performance monitoring and prediction needs to be enhanced for distributed system development and procurement. The performance analyst is needed more than ever to oversee and analyze such activities as benchmarking, tuning and capacity planning in an environment that is even more complex than that of a monolithic mainframe.

Zero-point Pricing
A dominant trend in the global economic marketplace is for software companies to offer new applications for free, and for electronics companies to offer digital gizmos at prices that do not permit the recovery of development costs. The financial irony is that these companies either are reaping, or are poised to reap, significant profits. How can this possibly happen?

In the case of software applications, Release 1.0 is made available for free. The new application is passed around on diskette, copied to other machines (piracy is encouraged), or downloaded from the Internet. Favorable reaction leads to word-of-mouth endorsement, which leads to further propagation of the (free) product. The reach of this free propagation far outweighs the manufacturing cost of shrink-wrapped diskettes and manuals. At some point, if the technology is attractive, users who have become hooked on it are more likely to be persuaded to buy upgrades, related software and long-term support.

Consumer attention is the most precious resource here and therefore the most valuable commodity. This new marketing paradigm is so antithetical to conventional economic theories that, were he alive to witness it, even Karl Marx might have become a capitalist!

Riding the Wave
One example of a company riding this new economic wave is Netscape Communications Corp. of Mountain View, California. In fact, the company was formed around the success of its zeroth product -- the public-domain Mosaic browser for "surfing" the World Wide Web.

The first release was imperfect but free! Successive releases, however, do carry a price tag and are intended to provide revenue for Netscape and its future product development. The notion of Web piracy is oxymoronic, since surfing the Web and downloading free software simply broaden the market and inflate demand. So much for the good news; what happens when this economic shift is ignored?

Wipe Out!
A case in point is that venerable reference work, the Encyclopedia Britannica. Its owners are desperately in need of capital because they failed to realize they are really in the information business, not the book business. They thought they could maintain a high price tag for the book set in the face of fierce competition from less exhaustive encyclopedias available on the newer dynamic media of CD-ROMs and on-line service providers such as America Online.

How do you sell the consumer an encyclopedia that costs thousands of dollars when they are getting a CD-ROM encyclopedia for free with their new PC? Worse, books lack the fast keyword searching afforded by digital technology. Britannica survived the American Revolution and the Industrial Revolution but it may not survive the Digital Revolution.

No Performance Analysis Required
As a performance analyst, you may think you are above all these economic machinations. But look around you: commodity software and hardware are invading even the most heavily fortified bastions of centralized computing. And it is happening for precisely the same reasons I have just outlined.

This economic trend is creating a paradox for PM. Put bluntly, who needs performance analysis and capacity management when the technology is free? Just throw more "iron" at your performance problems. If you're still unclear about what ramifications this trend has in store for you, consider it from the following perspective.

It is well recognized that much commercially available software shows successively poorer performance with each major release. New releases contain bug fixes and new features (distinguishing which is which is not always obvious). These, in turn, create longer execution paths and greater memory consumption. Would you, as a performance analyst, spend your time studying and solving the performance deficiencies of Windows 95? Unlikely. Why waste time and effort making desktop software more efficient when desktop computing resources are not scarce?

Regressive Technologies
Desktop operating systems from Apple and Microsoft have re-invented virtual memory -- a well-defined concept in computer science that was effectively studied and solved 30 years ago -- and the consumer pays for that re-development. The same commodity pressures shaping the consumer marketplace are also shaping the migration to open systems. In the "plug-and-play" illusion of open systems, the consumer can choose platforms and applications from independent vendors at the most competitive prices. An unfortunate side-effect of this rush for cheaper technology is that coherent PM tends to be a casualty of the migration process.

Unlike centralized proprietary systems, open systems combine a platform from one hardware vendor with an RDBMS, middleware and application software from several different software vendors. The net result for an enterprise is that any commercial gain due to commodity pricing is often eroded by loss of control over performance management and capacity planning.

Many of the traditional, centralized approaches to PM are unavailable in open systems because performance instrumentation is still immature and fragmented. Even if kernel-level performance instrumentation were in better shape, it cannot easily be correlated with the performance of the open RDBMS. That information must be gleaned separately from statistics reported by the RDBMS vendor, but there is no consistency among the various database vendors as to how these statistics are reported and what they mean, let alone how to integrate this information. Conventional PM techniques are therefore rendered impotent in an open environment.

Open systems platforms use commodity parts, e.g., the same microprocessors as those used in PCs and workstations. The commodity price of a microprocessor, however, is primarily determined by its share of the workstation or PC marketplace, and the engineering requirements of that marketplace are often the converse of what the commercial database marketplace requires. For example, the working sets of many PC applications are small compared to those of an RDBMS; the footprint of many corporate RDBMSs exceeds the typical on-chip cache of commodity microprocessors.
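The performance penalty of a working set that overflows the on-chip cache can be illustrated with the standard effective-access-time calculation. This sketch is not from the article; the latencies and hit ratios below are hypothetical round numbers chosen only to show the shape of the effect:

```python
def effective_access_ns(hit_ratio, t_cache_ns=10.0, t_mem_ns=100.0):
    """Average memory access time: t_eff = h * t_cache + (1 - h) * t_mem.

    hit_ratio  -- fraction of references satisfied by the on-chip cache
    t_cache_ns -- cache access latency (hypothetical)
    t_mem_ns   -- main-memory access latency (hypothetical)
    """
    return hit_ratio * t_cache_ns + (1.0 - hit_ratio) * t_mem_ns

# A PC workload whose working set fits on-chip hits the cache almost always;
# an RDBMS whose footprint overflows the cache misses far more often.
print(f"PC app  (h=0.98): {effective_access_ns(0.98):.1f} ns")
print(f"RDBMS   (h=0.70): {effective_access_ns(0.70):.1f} ns")
```

Even this crude model shows the database workload paying roughly three times the average memory latency of the PC workload, which is why a chip tuned for the PC marketplace is a poor fit for the database marketplace.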

Microprocessor speeds are also tending to outpace the speeds of commercially available memory controllers. This conflicts with the RDBMS's need to access its own memory caches to service requests from database back-end servers. Arguably, for these and other reasons, the die has already been cast regarding hardware technology for open systems. The remaining issues lie with software generally, and with the RDBMS in particular.

Zero-point pricing demands that engineers learn a whole new discipline: wastefulness. Time-to-market requires that efficient design be eliminated. That's hard on designers who take pride in writing tight code and building efficient systems but commodity economics demand technological regression.

Autonomous Tuning
Now let's turn to autonomous tuning. Many of the newer PM products coming onto the market employ some form of software agents and knowledge-bases that give the impression of being able to undertake autonomous performance tuning in the sense of so-called expert systems. Some vendors prefer a more moderate approach. For example, consider the following excerpts from Optimizing Windows NT by Russ Blake.

Windows/NT Is Always in Tune...
A major design goal of Windows NT was to eliminate the many obtuse parameters that characterized earlier systems. Adaptive algorithms were incorporated ... so that correct values are determined by the system as it runs.

... (This is followed shortly afterward by an admission of realism)

Windows NT did not achieve the goal of automatic tuning in every single case. A few parameters remain, mainly because it is not possible for us to know precisely how every computer is used.

So, the incorporation of knowledge-bases, production rules and inference engines is an emerging technology, but it is still very immature for the PM of complex distributed systems. Moreover, we can expect the impact of autonomous tuning to be felt more slowly than the effects of zero-point pricing; it has a number of significant hurdles to clear.

Even the most proficient performance analysts are often forced to devise special experiments, resort to their intuition, draw on folklore and talk to other performance analysts to form hypotheses about performance anomalies before taking any action. This kind of diffuse and ambiguous judgment is difficult to convey to apprentice performance analysts, let alone to capture in the structured semantics of conventional knowledge-bases. In this respect, performance analysis is more an art than a science.

Reliably Unreliable
Much of the knowledge base for complex PM has a very short life, typically being invalidated with each new release of hardware or software, the latter occurring more frequently. This ongoing informational turnover makes for unattractive maintenance costs in PM expert systems.

If more PM tasks are to be handed over to AI systems, then reliability (and possibly security) becomes paramount. If a complex decision sequence were incorrectly executed by an AI system, it would be extremely expensive to find and correct the error. The more closely the AI system matches human intelligence, using such devices as self-modifying code, the more difficult it is to "debug" the semantic network.

Nonetheless, even with all these hurdles, AI and autonomous tuning will continue to have a growing impact on the way future PM is accomplished.

So What's Left?
So far, I have discussed two important trends that, I believe, are seriously reshaping the role of PM in the business enterprise: Zero-point Pricing and the emergence of Autonomous Tuning.

These trends represent a rapid movement away from the traditional PM methods associated with centralized computing. That role was predicated on maintaining and upgrading very expensive, and therefore very precious, computing resources. The new precious resource is mind-share, and the way to capture it is to make computing products cheap and ubiquitous.

But just as software and hardware appear to be getting cheaper and more plentiful, the composition of business computing is becoming more complex. The new performance issues have to do with the unexpected interactions of independently developed computing products brought together for the first time in a particular business environment running a wide range of business workloads.

Since the vendors of these computing products are responding to exactly the same time-to-market pressures discussed earlier, they typically do not consider the performance degradation effects that arise when their products are integrated, or when the end-user's computing environment is scaled up to meet business growth. As a consequence, many performance analysts in operations are consumed by trouble-shooting new applications. What little performance information is available from these individual products is rendered less effective by the distribution of processing across many software layers in many places on the enterprise network.

A Question of Scale
Therefore, I would suggest that performance analysis remains a necessity precisely because performance levels cannot be guaranteed by independent vendors as the computing environment is scaled up to meet greater business demand. But the tools and methods of performance analysis will need to change dramatically to enable accurate analysis, primarily in the area of performance instrumentation (see "Thinking Inside the Box," TPC Quarterly Report, Jan. 1995).

A certain degree of autonomous tuning will become workable and accepted as distributed computing environments grow larger and more complex. Only the very simplest administrative functions can be handled by autonomous agents using today's technology, but this will mature over the next 10 years.

Overall, we can expect PM to become less operational and more strategic. Because of the unknown performance attributes inherent in such diverse computing products, pilot studies and benchmarking will be more widely recognized as a necessary precursor to full deployment. From this performance data, predicting the performance impact of scaling up workloads will require performance modeling and capacity planning of the entire network-based configuration. Business enterprises that recognize this as the true goal of PM stand to benefit the most from their investment in distributed computing architectures.
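To make the modeling step concrete, here is a minimal sketch of the kind of capacity-planning calculation that turns pilot-study measurements into a scaling prediction. It assumes a simple open M/M/1 queueing model, and the service demand figure is a hypothetical measurement, not a number from the article:

```python
def response_time(service_demand, arrival_rate):
    """M/M/1 response time: R = S / (1 - rho), where rho = lambda * S.

    service_demand -- seconds of service per transaction (from a pilot study)
    arrival_rate   -- offered load in transactions per second
    """
    rho = arrival_rate * service_demand  # server utilization
    if rho >= 1.0:
        raise ValueError("saturated: utilization must stay below 100%")
    return service_demand / (1.0 - rho)

# Hypothetical pilot-study measurement: 0.10 s of service per transaction.
S = 0.10
for tps in (2, 5, 8, 9):
    print(f"{tps} tps -> R = {response_time(S, tps):.2f} s")
```

The point of even so simple a model is that response time grows nonlinearly with load: doubling the throughput near saturation can multiply response time several-fold, which is exactly the behavior that surprises enterprises who scale up a distributed configuration without modeling it first.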

So, the performance analyst's paradox only remains a paradox for those performance analysts who do not recognize the influences that are reshaping their role in the modern business of computing.

Further Reading
  • Business Week, "The Technology Paradox: How Companies Can Thrive as Prices Dive," pp. 76-84, March, 1995.
  • H. Dreyfus, What Computers Still Can't Do, MIT Press, 1992.
  • N. Gunther and K. McDonell, ''Distributed Performance Management in a Heterogeneous Environment,'' CMG Transactions, Issue #81, pp. 83-90, Summer 1993.
  • P. Maes, "Intelligent Software," Sci. American, Sept. 1995.
© 1995 Neil J. Gunther. All Rights Reserved. No part of this document may be reproduced without prior permission of the author, who can be reached at 415-967-2110. Permission has been granted to the Transaction Processing Performance Council, Inc. to publish the contents of this document in the TPC Quarterly Report.
