Software Quality Insights

A SearchSoftwareQuality.com blog

Apr 1, 2009, 10:27 PM GMT

Application performance testing issues: Cloud, virtual labs, scale-up



Posted by: Jan Stafford
Tags:
application performance
Cloud computing
software development
Software testing
Virtualization

Application performance testing used to be a standalone process, but the emergence of dynamic, complex, mission-critical applications, virtualization and cloud computing calls for folding it into a broader practice, Mark Kremer, CEO of Precise Software Solutions of Redwood Shores, Calif., told me recently. In our discussion, he offered advice on handling the new challenges facing those who must ensure top application performance.

I asked Kremer what complications porting apps to the cloud adds to application performance testing and management. He replied that the dynamic nature of development in the cloud means that application performance must be monitored constantly.

“In physical environments, application performance management assumes quasi-static resource configurations; the computing power, network bandwidth, memory pools and system overhead are invariable over time, or at least until the next configuration upgrade,” Kremer said. “Under these assumptions, time measurements are consistent, because they are all taken under the same conditions. Once an application runs in a cloud, its configuration may change from one invocation to another, or even within a single run, as processes may be transparently moved around the cloud. This ever-changing pool of resources makes time measurements inconsistent, because each is taken under different conditions. Correcting, or normalizing, time measurements to a standard scale is a precondition for self-referencing performance monitoring, and it is a daunting challenge to model and implement.”
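That normalization idea can be sketched in code. The Java snippet below is a minimal illustration, not anything from Precise’s product: it assumes the monitoring agent can obtain a relative CPU-power factor for the host currently executing the application (a hypothetical input; real cloud platforms expose capacity differently) and scales raw elapsed times back to a reference configuration so samples from different hosts stay comparable.

```java
/**
 * Minimal sketch of time normalization, assuming we can obtain a
 * relative "power factor" for the host currently running the code
 * (e.g., host CPU speed divided by a reference CPU speed). All names
 * here are hypothetical.
 */
public final class NormalizedTimer {

    private final double referencePowerFactor; // baseline host = 1.0

    public NormalizedTimer(double referencePowerFactor) {
        this.referencePowerFactor = referencePowerFactor;
    }

    /**
     * Scales a raw elapsed time to the reference configuration.
     * A transaction that took 200 ms on a host twice as fast as the
     * baseline is recorded as 400 ms of "standard" time, so runs on
     * different hosts can be compared directly.
     */
    public long normalize(long elapsedMillis, double currentPowerFactor) {
        return Math.round(elapsedMillis * currentPowerFactor / referencePowerFactor);
    }
}
```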

(For more info on software testing and cloud computing, check out my interview with Eugene Ciurana, director of systems infrastructure at LeapFrog Enterprises, a large U.S. educational toy company.)

The dynamic nature of virtualized environments also requires changes in how application performance is monitored and tested, Kremer said. The development/testing team should keep an internal application clock (app time, if you will) that is invariant to the underlying hardware. He explained:

“For example, a transaction will spend the same time, measured by the application clock, in a Java method regardless of the power of the CPUs used in each invocation. As application performance management evolves to include this concept, developers building applications for virtual or, more commonly, mixed-mode (virtual and physical) environments can get around the semantics of time in virtual environments.”
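One way to read Kremer’s app-time idea is to count logical units of work instead of wall-clock nanoseconds, so a method accrues the same measurement however fast the underlying physical or virtual CPU happens to be. A minimal sketch, with all names hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical "application clock": a monotonic counter advanced by
 * the application itself at well-defined points (e.g., per record
 * processed). Because it counts work rather than seconds, a method
 * accrues the same number of ticks regardless of the CPU power of
 * the physical or virtual host executing it.
 */
public final class AppClock {

    private final AtomicLong ticks = new AtomicLong();

    /** Advance the clock by one unit of logical work. */
    public void tick() {
        ticks.incrementAndGet();
    }

    /** Current application time, in work units rather than seconds. */
    public long now() {
        return ticks.get();
    }
}
```

A profiler built on such a clock would report that a transaction spent, say, 10,000 ticks in a Java method on every invocation, sidestepping the shifting semantics of wall-clock time in virtual environments.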

Talking about application performance in general, Kremer stressed that testing can’t take place only in a lab, because it is so hard to replicate real production environments there. Even when the production environment can be recreated in a lab, performance often still changes once apps are moved into real, dynamic production.

“This dynamic manner of problem resolution analyzes the data behind performance loss by tracking spikes in user behavior, patterns in data accumulation and changes to system configurations,” Kremer said. “Application performance testing relies more on static test models, which makes it tough to replicate real-world production environments.”
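Kremer doesn’t describe a specific algorithm, but tracking spikes in user behavior often comes down to comparing each new sample against a rolling baseline. Here is a toy Java sketch under that assumption; the window size and threshold are illustrative choices only:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Toy spike detector: flag a sample that rises well above the
 * rolling average of recent samples. The window size and threshold
 * factor are illustrative, not recommendations from the interview.
 */
public final class SpikeDetector {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double thresholdFactor;

    public SpikeDetector(int windowSize, double thresholdFactor) {
        this.windowSize = windowSize;
        this.thresholdFactor = thresholdFactor;
    }

    /** Records a sample; returns true if it spikes above the rolling mean. */
    public boolean record(double sample) {
        boolean spike = false;
        if (window.size() >= windowSize) {
            double mean = window.stream()
                                .mapToDouble(Double::doubleValue)
                                .average()
                                .orElse(0.0);
            spike = sample > mean * thresholdFactor;
            window.removeFirst(); // keep the window at a fixed size
        }
        window.addLast(sample);
        return spike;
    }
}
```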

I asked Kremer how scale-up changes what must be tested to ensure stellar application performance. He said that when applications scale up, performance testing must shift from being input-oriented (focusing on test patterns, synthetic transactions and the like) to being throughput-oriented, where the focus is on transaction monitoring, performance baselining and so on.

“As systems scale up, their performance testing paradigm shifts from predefined synthetic tests to monitoring and self-reference,” Kremer added. “For optimal results, IT needs to identify the top, say, 20 transactions of the system and constantly monitor their performance, their components’ performance and the time allocations of the various tiers in the system. Then it must self-reference these measurements hour-to-hour, day-to-day, season-to-season…to detect performance degradation, offending transaction components or performance hot spots.”
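As a rough sketch of that self-referencing approach, one could keep a per-transaction baseline keyed to a time period (the same hour of day, for instance) and flag measurements that degrade past it. The Java class below is illustrative only; the key format and the 25% threshold are assumptions, not anything Kremer specified:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of self-referencing monitoring: keep a per-transaction
 * baseline (e.g., average response time for the same hour of day)
 * and compare each new measurement against it. The 25% degradation
 * threshold is an illustrative value only.
 */
public final class TransactionBaseline {

    private static final double DEGRADATION_FACTOR = 1.25;

    /** Key: transaction name plus period bucket, e.g. "checkout@14:00". */
    private final Map<String, Double> baselines = new HashMap<>();

    /** Stores the reference measurement for a transaction and period. */
    public void recordBaseline(String key, double avgMillis) {
        baselines.put(key, avgMillis);
    }

    /** True if the new measurement degrades past the stored baseline. */
    public boolean isDegraded(String key, double currentMillis) {
        Double baseline = baselines.get(key);
        return baseline != null && currentMillis > baseline * DEGRADATION_FACTOR;
    }
}
```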

That’s all from my interview with Mark Kremer. SearchSoftwareQuality.com news writer Colleen Frye is covering application performance topics, so watch for more articles in the news section. Here’s a sampling: “CareGroup solves application performance issues with APM tool” and “Don’t let poor website performance ruin e-commerce sales.”
