Software Quality Insights


April 1, 2009  10:27 PM

Application performance testing issues: Cloud, virtual labs, scale-up

Jan Stafford

Application performance testing used to be a standalone process, but the emergence of dynamic, complex mission-critical applications, virtualization and cloud computing calls for putting it into a larger practice, Mark Kremer, CEO of Precise Software Solutions of Redwood Shores, Calif., told me recently. In our discussion, he offered some advice about how to handle new challenges facing those who must ensure top application performance.

I asked Kremer what complications porting apps to the cloud add to application performance testing and management. He replied that the dynamic nature of development in the cloud means that application performance must be monitored constantly.

“In physical environments, application performance management assumes quasi-static resource configurations; the computing power, network bandwidth, memory pools and system overhead are invariable over time, or at least until the next configuration upgrade,” Kremer said. “Under these assumptions, time measurements are consistent because they were taken under the same terms. Once an application runs on a cloud, its configuration may change from one invocation to another, or even within the same run, as processes may be transparently moved around the cloud. This phenomenon of ever-changing resources makes time measurements inconsistent, as they have been taken under different conditions. Correcting, or normalizing, time measurements to a standard scale is a precondition for self-referencing performance monitoring, and it is a daunting challenge to model and implement.”
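
To make that normalization idea concrete, here’s a minimal sketch, assuming a single benchmark-style CPU score is available for both the standard reference machine and the machine a run actually landed on. It illustrates the concept only; it is not Precise’s algorithm.

```java
// Hedged sketch: scale raw elapsed times by the capacity of the resources
// actually used, so runs on different cloud configurations become comparable.
// The single CPU-score factor is an assumption for illustration.
public final class TimeNormalizer {

    private final double referenceCpuScore; // benchmark score of the standard machine

    public TimeNormalizer(double referenceCpuScore) {
        this.referenceCpuScore = referenceCpuScore;
    }

    /** Convert a raw measurement into "standard machine" time. */
    public double normalizeMillis(double rawMillis, double actualCpuScore) {
        // A faster-than-reference machine finishes sooner, so its raw time
        // understates the work done; scale it back up (and vice versa).
        return rawMillis * (actualCpuScore / referenceCpuScore);
    }
}
```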

(For more info on software testing and cloud computing, check out my interview with Eugene Ciurana, director of systems infrastructure at LeapFrog Enterprises, a large U.S. educational toy company.)

The dynamic nature of virtualized environments also requires changes in how application performance is monitored and tested, Kremer said. The development/testing team should keep an internal application clock — app time, if you will — that is invariant to the underlying hardware. He explained:

“For example, a transaction will spend the same time, measured by the application clock, in a Java method regardless of the power of the CPUs used in each invocation. As application performance management evolves to include this concept, developers building applications for virtual or, more commonly, mixed-mode — virtual and physical — environments can get around the semantics of time in virtual environments.”
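
To picture such an application clock, here is a minimal sketch of my own: it counts completed units of application work rather than wall-clock time, so the reading is the same whatever hardware the run lands on. This is an illustration of the concept, not Precise’s implementation.

```java
// Hedged sketch of an "app time" clock: elapsed time is measured in logical
// units of work, which are invariant to CPU speed, rather than in seconds.
import java.util.concurrent.atomic.AtomicLong;

public final class AppClock {

    private final AtomicLong ticks = new AtomicLong();

    /** Advance app time by one logical unit of work (e.g., one record processed). */
    public void tick() {
        ticks.incrementAndGet();
    }

    /** App time elapsed, in work units; the same on fast and slow hardware. */
    public long elapsedWorkUnits() {
        return ticks.get();
    }
}
```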

Talking about application performance in general, Kremer stressed that testing can’t take place only in a lab, because it’s so hard to replicate real production environments there. Even if the production environment can be recreated in a lab, performance often still changes once apps are running in real, dynamic production.

“This dynamic manner of problem resolution analyzes the data that causes performance loss by tracking spikes in user behavior, patterns in data accumulation, and changes to the system configuration,” Kremer said. “Application performance testing relies more on static test models, which makes it tough to replicate real-world production environments.”

I asked Kremer how scale-up changes what must be tested to ensure stellar application performance. In response, he said that when applications scale up, performance testing must change from being input-oriented (focusing on test patterns, synthetic transactions and the like) to being throughput-oriented, where the focus is on transaction monitoring, performance baselining and so on.

“As systems scale up, their performance testing paradigm shifts from predefined synthetic tests to monitoring and self-reference,” Kremer added. “For optimal results, IT needs to identify the top, say, 20 transactions of the system and constantly monitor their performance, their components’ performance, and the time allocations of various tiers in the system. Then it must self-reference these measurements hour-to-hour, day-to-day, season-to-season … to detect performance degradation, offending transaction components or performance hot spots.”
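
As a rough sketch of that self-referencing approach, the class below keeps a rolling baseline per transaction and flags runs that degrade past a threshold; the smoothing factor and the 20% alert threshold are my assumptions, not Kremer’s numbers.

```java
// Hedged sketch: baseline each top transaction's response time and compare
// new measurements against it to detect degradation.
import java.util.HashMap;
import java.util.Map;

public final class TransactionBaseline {

    private static final double ALPHA = 0.1;       // smoothing for the rolling baseline
    private static final double DEGRADATION = 1.2; // alert at 20% over baseline

    private final Map<String, Double> baselineMillis = new HashMap<String, Double>();

    /** Record a measurement; returns true if it signals degradation. */
    public boolean record(String transaction, double millis) {
        Double baseline = baselineMillis.get(transaction);
        if (baseline == null) {
            baselineMillis.put(transaction, millis); // first sighting seeds the baseline
            return false;
        }
        boolean degraded = millis > baseline * DEGRADATION;
        // Fold the new measurement into the rolling baseline.
        baselineMillis.put(transaction, baseline + ALPHA * (millis - baseline));
        return degraded;
    }
}
```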

That’s all from my interview with Mark Kremer. SearchSoftwareQuality.com news writer Colleen Frye is covering application performance topics, so watch for more articles in the news section. Here’s a sampling: CareGroup solves application performance issues with APM tool and Don’t let poor website performance ruin e-commerce sales.

March 31, 2009  8:00 PM

New book: Step-by-step Eclipse plug-ins, plus Java testing tool

Jan Stafford

I found a handy tool for automating testing of Java graphical user interfaces (GUIs) while reading Eclipse Plug-ins, third edition (Addison-Wesley) by Eric Clayberg and Dan Rubel. This month, I also got a chance to ask them some questions about the newly-minted third edition of this book, which gives step-by-step directions for plug-in development and descriptions of specific plug-ins. Here are excerpts from our Q&A and some information about the book’s contents and the Java testing tool.

First off, both Clayberg and Rubel are co-founders of Instantiations Inc., maker of GUI-building software and automated testing and code quality tools. They’ve been working with Eclipse since 1999 and developed CodePro on it.

Usage of Eclipse, now in its eighth year of existence, is on the rise, the authors told me. They see potential for greater growth with OSGi and Equinox.

Taking developers beyond the basics to a point where they can create high-quality commercial Eclipse plug-ins is the goal of the book, said Clayberg. “In the world of Eclipse plug-ins, very few people take the time to really go the extra mile, and most plug-ins fall into the open source, amateur category.”

Describing and offering use cases of the Eclipse command framework is one way the book helps developers get up to commercial speed; the command framework replaces the older Action framework. “Throughout the book, use of the older Action framework has been replaced with new content describing how to accomplish the same thing with the new command framework,” said Rubel. In particular, the book covers using commands with views and editors.
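
For readers who haven’t made the switch, here is a minimal sketch of what a command handler looks like; the class name, command ID and dialog text are placeholders of mine, not code from the book.

```java
// Hedged sketch of a handler under the Eclipse command framework. The handler
// is bound in plugin.xml, via the org.eclipse.ui.commands and
// org.eclipse.ui.handlers extension points, to a command ID such as
// "com.example.showFavorites" (a made-up ID), replacing an old Action class.
import org.eclipse.core.commands.AbstractHandler;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.jface.dialogs.MessageDialog;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.ui.handlers.HandlerUtil;

public class ShowFavoritesHandler extends AbstractHandler {

    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        Shell shell = HandlerUtil.getActiveShell(event);
        MessageDialog.openInformation(shell, "Favorites",
                "Command framework handler invoked.");
        return null; // handlers conventionally return null
    }
}
```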

Here are some other ways the book provides updates and detailed information about some beyond-the-basics development steps:

All of the screen shots, text and code examples throughout the book have been updated to the latest Eclipse 3.4 API and Java 5 syntax. New capabilities in Eclipse 3.4 are detailed, including a new overview of using Mylyn and a discussion of the new preferences and the PDE and SWT tools available in the release.

In Chapter 20, you’ll find a step-by-step guide to using GEF, the Graphical Editing Framework from Eclipse.org. This toolkit is designed for building dynamic interactive graphical user interface elements. The authors walk through the process of building a GEF-based view for graphically presenting the relationships between the favorites items and their underlying resources. Then, taking a bigger step, they show how to build a GEF-based editor with the ability to add, move, resize, and delete the graphical elements representing those favorites items.
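
For flavor, here is a bare-bones sketch of the kind of GEF edit part that chapter builds up; the FavoriteModel class and the visual details are my placeholders, not the book’s code.

```java
// Hedged sketch of a GEF edit part that draws one favorites item as a labeled
// rectangle. FavoriteModel (with getName() and getBounds()) is hypothetical.
import org.eclipse.draw2d.ColorConstants;
import org.eclipse.draw2d.IFigure;
import org.eclipse.draw2d.Label;
import org.eclipse.draw2d.RectangleFigure;
import org.eclipse.draw2d.geometry.Rectangle;
import org.eclipse.gef.GraphicalEditPart;
import org.eclipse.gef.editparts.AbstractGraphicalEditPart;

public class FavoriteEditPart extends AbstractGraphicalEditPart {

    @Override
    protected IFigure createFigure() {
        RectangleFigure figure = new RectangleFigure();
        figure.setBackgroundColor(ColorConstants.lightGray);
        figure.add(new Label(((FavoriteModel) getModel()).getName()));
        return figure;
    }

    @Override
    protected void createEditPolicies() {
        // Policies enabling move, resize and delete would be installed here.
    }

    @Override
    protected void refreshVisuals() {
        // Position the figure using bounds stored in the model; the parent
        // edit part owns the layout constraint.
        Rectangle bounds = ((FavoriteModel) getModel()).getBounds();
        ((GraphicalEditPart) getParent())
                .setLayoutConstraint(this, getFigure(), bounds);
    }
}
```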

Clayberg and Rubel practice what they preach. They’ve recently released WindowBuilder Pro v7.0, an Eclipse plug-in tool for Java GUI developers, and continually update their CodePro AnalytiX software that adds enhancements to Eclipse and any Eclipse-based IDE.

Another plug-in the authors have made is WindowTester Pro, the Java GUI testing plug-in I mentioned reading about earlier in this post. As I said, it enables automated testing of Java GUIs that use SWT, JFace, Swing or RCP, handling recording, test generation and code coverage so you don’t have to create and maintain that test code by hand. Among other functions, WindowTester Pro facilitates integrating test case execution into a continuous build system, so your application is tested every time it’s built.
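
A WindowTester Pro test reads roughly like the sketch below. I’m recalling the class and locator names (UITestCaseSWT, IUIContext, MenuItemLocator, ButtonLocator) from the tool’s documentation, so treat them as approximate and check them against your version.

```java
// Hedged sketch of a recorded WindowTester Pro test for an SWT application.
import com.windowtester.runtime.IUIContext;
import com.windowtester.runtime.swt.UITestCaseSWT;
import com.windowtester.runtime.swt.locator.ButtonLocator;
import com.windowtester.runtime.swt.locator.MenuItemLocator;

public class NewProjectWizardTest extends UITestCaseSWT {

    public void testOpenNewProjectWizard() throws Exception {
        IUIContext ui = getUI();
        ui.click(new MenuItemLocator("File/New/Project..."));
        ui.click(new ButtonLocator("Cancel")); // just exercise the wizard and close it
    }
}
```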

Next year, the fourth edition of Eclipse Plug-ins will deliver information on how to fix long-standing issues and jettison old, deprecated APIs.


March 31, 2009  1:41 AM

Recession survival tips for project managers

Jan Stafford

In an economic downturn, project managers have to motivate teams that may be worried and overworked, a situation in which projecting optimism, confidence and an “anything is possible” attitude is a must, according to project management consultant and trainer Michelle LaBrosse. On the reality-check side, LaBrosse recently told me, PMs have to plan ahead more carefully than they ever had to before.

LaBrosse, founder of Cheetah Learning, offered these tips that could help PMs prosper during lean times:

  • PMs who are resourceful, innovative thinkers are desperately needed during a recession, she said. This is the time to lead by example.
  • Assess your projects and commitments to see which are energizing and which are dragging you down. Said LaBrosse: “A simple way to figure it out is to ask yourself: ‘If I had to make the decision today to start this project, would I?’ If the answer is no, stop wasting your valuable resources on it.”
  • Seek out opportunities to do your own formal and informal learning. This isn’t the time to cut back on training. Keep engaging in activities ranging from informational interviews and podcasts to webinars and development courses.
  • This is also a key time to brush the dust off your resume. “Your resume should serve as a timeline of what you’ve been up to,” LaBrosse told SearchSoftwareQuality.com not long ago. “It should tell a story about your growth and experiences since your first real job. Look critically at your resume and make sure it weaves a story that sets you apart from others in your industry. What is your unique selling point? Is it that you are a programmer who also worked as a stand-up comedian? That could communicate that you think fast on your feet or you can defuse situations by using humor. Whatever your story is, make sure it showcases your confidence in being at the top of your game.”
  • Networking is crucial now. Participate in trade associations, like the Project Management Institute. Not only is this a way to gain more skills, it can often be the key to your next job or project. “You need to expand your worldview,” she said.

On her blog, Everyday Project Management, LaBrosse offers these additional tips:

  • Make sure the commitments you are considering pursuing will make sense in another week, another month, and another year. “A good way I have found to do this sense-making is doing a project agreement on goals I am considering pursuing,” she wrote. She offers a free project agreement template, which requires registration.
  • Manage resources so that you can complete your projects with a variety of nonfinancial capital and don’t have to wait for the credit markets to unfreeze to finish a project.

LaBrosse’s company, Cheetah Learning, is based in Carson City, Nev., and offers Project Management Professional Exam training and other services.


March 27, 2009  4:17 PM

Using screen recorders as a lightweight form of documentation

Michael Kelly

Any time I find that I need to create training material or reference material, I ask myself if it makes sense to use some sort of screen capture utility to capture that information. I find that it can be a better medium for some types of content. And, given current technology (both commercial and open source), it can be easier to update than extensive documentation.

For example, on a past project I partnered with another tester and used a screen recorder to create small videos of how to perform common functions on the mainframe. We had a lot of people who needed to use the mainframe in support of their testing, but only rarely. So even if they learned something once, it might be weeks before they did it again. If you’re like me, when you’ve done something only once, a week later you’ve likely forgotten how to do it. Together, we created a series of 10 to 15 small videos (screen recordings with someone talking through the steps) that ran through the most common functions.

I’ve also used this technology to capture a record of my exploratory test execution. Many testers already take screenshots when they test; this is just an extension of that practice. With most tools, you can easily edit what you capture to pull out clips of video and compress them into small, portable files. This is great for attaching bug examples to defect tickets, asking for a peer review or a second opinion, or pulling out an example of a complex test for a lunch and learn.

If you’re interested in giving something like this a try, the tools I use the most for this type of documentation include: BB TestAssistant, iShowU, Snagit, and CamStudio.


March 24, 2009  12:12 AM

Blu Age at EclipseCon: On agile, an agile project, reverse engineering

Jan Stafford

On the eve of EclipseCon in Santa Clara, CA, which opened today, I interviewed NetFective Technology Group president Christian Champagne about agile adoption and legacy application reverse engineering, as well as a legacy app makeover that Blu Age Corp., a NetFective offshoot, helped a company do.

Today at EclipseCon, Blu Age introduced new tools — Blu Age Reverse Modeling and Blu Age Modernization — to its product suite for reverse modeling and re-engineering legacy enterprise applications. These products complement the existing modules, Blu Age Build and Deliver; together they make up a platform for transforming existing code and data into UML 2 models that are independent of the original technological platform and conform to the Object Management Group’s Model Driven Architecture (MDA) standards.

In our interview, Champagne said that there is about an even split between waterfall and agile model users today, but he sees more and more waterfall users evaluating agile. Just as the economic downturn spurred IT managers to use virtualization to consolidate servers, he said, development groups are turning to agile “to become far more efficient, not only for a better return, but to keep their job or survive as a company.”

Champagne sees agile’s iterative development model as the cornerstone of more effective software testing and quality assurance (QA) processes. At each iteration, testing and QA ensure that expected business needs are properly covered, and the next step or iteration of development isn’t started without full acceptance of the current one.

“What is important is that users directly approve the application during the development phase, not when development is almost finished and the budget already consumed,” Champagne said.

Champagne described a recent project in which a Blu Age team helped a company move a legacy application’s existing code and data into UML 2 models. The primary goals were modernizing the application and enabling use of the agile process for reverse modeling. The company couldn’t find an out-of-the-box software package able to modernize the application to meet all of its functional and architectural requirements. After evaluating several options, it chose to do PIM (platform-independent model) extraction with Blu Age’s MDA workbench.

The project’s biggest challenge was changing the mindset of IT and business people.

IT people have to accept that techniques and standards can automate 90% of their traditional activities, leaving only the value-added ones, such as enterprise architecture. Business people must accept responsibility for the applications they ask to have developed: automated, near-instantaneous model transformations under agile put business people directly in front of their own requirements, so they can no longer blame IT for bad delivery. Both sides must accept that they have to work together, and both must manage the accelerating pace of application deliveries.

Once everyone was on board, the project moved forward quickly. The result, said Champagne, is “simplified project deployment, mainly due to the fact that agile and iterative AD facilitate end-user acceptance and increase functional accuracy of business-oriented applications.” Project delivery time was cut 50%.

This project spurred the company to move from waterfall to agile development overall. “At this stage, 60% of their new developments are agile, whatever the size — from 300 man-days up to 3,000 man-days,” Champagne said.

Looking ahead at trends in development in general, Champagne foresees growing usage of powerful but simple-to-use languages like PHP5, as well as automated software packages that can handle complex business needs with SOA (service-oriented architecture), Web services or RIA (rich Internet applications). Overall, he sees developers changing their ways of working and their roles in projects as iterative development is widely adopted.


March 23, 2009  1:07 PM

How test-driven development increases overall usability

Michael Kelly

Test-driven development (TDD) can be the path to not having to reinvent the wheel with every new test. In the test-driven development I’ve done, I’ve found that my tests force me to write a more manageable interface over time. Early in programming a class, I find that I can get away with something simple that might not be intuitive. But the more tests I add (so I can add more and more functionality), the more refactoring I have to do to get the previous tests to pass. This continually forces me to think about the interfaces I have and the best way to test them. That gets me thinking about simplifying my previous crufty code. In the end, I have an interface that’s more usable for me, which I suspect also makes it more usable for others.
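
Here’s a small JUnit sketch of that dynamic, with a hypothetical Cart class of my own invention; the second test is the one that forced the interface to absorb work the first draft had pushed onto callers.

```java
// Illustrative only: the Cart/Item API below is invented for this example.
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class CartTest {

    @Test
    public void addingAnItemIncreasesTheTotal() {
        Cart cart = new Cart();
        cart.add(new Item("book", 1000)); // price in cents
        assertEquals(1000, cart.totalInCents());
    }

    @Test
    public void discountsApplyToTheWholeCart() {
        // Writing this test exposed that an earlier API draft forced callers
        // to compute discounts themselves; refactoring moved that into Cart.
        Cart cart = new Cart();
        cart.add(new Item("book", 1000));
        cart.applyDiscountPercent(10);
        assertEquals(900, cart.totalInCents());
    }

    // Minimal production code driven out by the tests above.
    static class Item {
        final String name;
        final int priceInCents;

        Item(String name, int priceInCents) {
            this.name = name;
            this.priceInCents = priceInCents;
        }
    }

    static class Cart {
        private final List<Item> items = new ArrayList<Item>();
        private int discountPercent;

        void add(Item item) { items.add(item); }

        void applyDiscountPercent(int percent) { discountPercent = percent; }

        int totalInCents() {
            int total = 0;
            for (Item item : items) {
                total += item.priceInCents;
            }
            return total * (100 - discountPercent) / 100;
        }
    }
}
```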

The folks over at UX Booth recently posted an article on “How Test-Driven Development Increases Overall Usability.” As with all of their articles, it’s a well-researched and well-written look at the topic. In the article, they contrast testing at the user interface with testing at the application interfaces. It’s not an in-depth technical article, but it’s an interesting look at the topic. I think they accurately express one of the core ideas of test-driven development—to “make the application more usable to everyone involved.”

Another great example of TDD increasing usability can be found in Dale Emery’s analysis of Brian Button’s article “TDD Defeats Programmer’s Block—Film at 11.” Dale points out Brian’s general pattern of naming his tests using stimulus, result, and context. The naming scheme makes the tests more readable and allows you to readily interpret what the code/tests are trying to accomplish. It’s another, more detailed example of the UX Booth idea of usability at the application interface. Dale further expands on this pattern in a follow-up post on the anatomy of responsibility.
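
In practice, the naming pattern looks something like this sketch; the method names are illustrative, not Brian’s originals.

```java
// Hedged sketch of stimulus/result/context test naming.
import org.junit.Test;

public class AccountWithdrawalTest {

    // stimulus: withdraw; result: balance is reduced; context: sufficient funds
    @Test
    public void withdrawReducesBalanceWhenFundsAreSufficient() { /* ... */ }

    // stimulus: withdraw; result: request is rejected; context: insufficient funds
    @Test
    public void withdrawIsRejectedWhenFundsAreInsufficient() { /* ... */ }
}
```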


March 17, 2009  1:20 PM

Upcoming software testing conference tutorials

Michael Kelly

The Association for Software Testing (AST) recently announced the tutorial lineup for its upcoming conference in Colorado Springs. It’s an impressive lineup, covering both mainstream topics (like agile) and more daring conference topics (like self-learning).

Looking at the AST tutorial lineup got me thinking. How does this compare to other conference tutorial lineups? And what are the current tutorial themes at conferences today? I suppose I could have looked at keynotes or track presentations, but I like tutorial topics because people often pay extra for them. That means they, in theory, might represent the topics people are most interested in.

A quick look at the upcoming Conference for the Association for Software Testing, Software Test and Performance Conference, and STAREAST yields the following breakdown (using my arbitrary topic classifications):

[Chart: tutorial topic breakdown across the three conferences]

Test management covers building and managing the test organization, process improvement and metrics. It’s a broad classification (I know), but I didn’t want 30 categories. Test analysis includes any topic that provides a structured way to design tests, with the exception of exploratory testing, which I broke out into its own category because I wanted to see how often it was offered. Agile includes all the tutorials that have “agile” in the title; automation and performance are built out in a similar way.

So what might this tell us? If you group the numbers, you can see that we spend roughly equal amounts on management (25%), analysis (23% for analysis and exploratory testing), and coding (23% for automation, performance and database). I think that balance is important and I’m happy to see it emerge from the data. It’s a mix that shows the multifaceted problem of software testing.


March 12, 2009  6:36 PM

Figuring out what to document for operations

Michael Kelly

With several agile teams I’ve worked with, thinking about operations has been an afterthought. Even when operations personnel are captured as stakeholders upfront, many times the hand-off from development to production is … well, “less than coordinated.”

I think this happens for a handful of natural reasons:

  • The operations profile (interactions with other systems, infrastructure requirements, technology dependencies, configuration requirements and options, etc.) emerges as development unfolds over multiple iterations. That makes it difficult to look at the intermediate iterations, where the software is possibly going through rapid changes in direction, and invest in that type of documentation at the time. As a result, the operations documentation often isn’t created until later iterations (for example, a transition sprint in Scrum), where it’s a mad dash to remember everything that was built, capture it in one place, and deal with the realization that not much upfront thought was put into how this software would live and breathe in production.
  • Some aspects of operations are emergent. They depend on the architecture and technology decisions that are made as the project unfolds, and they require extensive testing to measure. For example, details about system requirements (how much processing power, memory and disk space you’ll need), performance/availability/reliability characteristics, and troubleshooting guidelines all emerge naturally from the testing process. These aspects often aren’t requirements you can just choose; you have to figure them out through iterations of testing and experimentation.
  • There are many aspects of operations most programmers on the team just won’t know much about. When should backups be done, and how? What needs to be done for monitoring, alarming and alerting? How will that integrate with the technologies currently being used for those activities? Where will log files go and how often will they be checked, stored or cleared?
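
To make just one of those questions concrete, here’s a hedged sketch of log rotation using standard java.util.logging; the path, sizes and logger name are placeholders that the operations team, not the programmers, would dictate.

```java
// Hedged sketch: rotate across app.0.log ... app.9.log, 5 MB each.
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public final class OpsLogging {

    public static Logger configure() throws IOException {
        Logger logger = Logger.getLogger("com.example.app"); // placeholder name
        // %g is the generation number; append=true survives restarts.
        FileHandler handler = new FileHandler("/var/log/example/app.%g.log",
                5_000_000, 10, true);
        handler.setFormatter(new SimpleFormatter());
        logger.addHandler(handler);
        return logger;
    }
}
```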

I call those reasons natural because all of them will necessarily happen to some degree, no matter how much planning you do or how you structure the team. In any project, the early stages, when you most want to document aspects like this, are exactly when decisions are most likely to change. Understanding operational criteria will always require testing and experimentation, which means it happens later in the project. And it’s unrealistic to expect programmers to be experts at programming and also experts at running data centers and administering systems.

So what can we do?

I think there are a couple of things that can be done:

  • As mentioned earlier, many teams invest in the idea of a transition iteration: a period of time dedicated to pulling it all together. They collect the test results, inventory the system requirements, sit down with the operations folks and perform any last-minute tweaks to make the software manageable in the target environment.
  • Other teams work with the operations team upfront to gather stories capturing what operations requires of the application. While those stories still likely won’t get done until later iterations, this can reduce the surprise and can ensure that any big features are part of the technical road map for the product.
  • I also think it’s helpful for the programming team to have exposure to operations in the form of production support for the applications that get deployed. Through needing to support the products in the wild, they develop an appreciation for the common issues around troubleshooting and maintaining the software. This can be done a number of ways. Some teams create separate production support teams and have programmers rotate through that team. Others have the same team that develops the product support it in production. What’s important is that the team gets the gritty exposure to the problems.


March 10, 2009  7:19 PM

Why video modes matter in software testing and configuration

Rick Vanover

In my software testing work, I’ve found that video configuration consistency is a critical factor in a system’s performance and behavior under test conditions, and an important factor in delivering technology as intended. Traditional resolution requirements for a consistent experience are only one part of video’s critical role, which can encompass everything from video adapters, driver versions and video configuration to resolution, refresh rate, console interaction mechanisms and the number of monitors running in that configuration.
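
One cheap way to keep that consistency honest is to record the video configuration alongside test results. The sketch below is my illustration using the standard AWT display APIs, not anything from PassMark.

```java
// Capture resolution, color depth, refresh rate and monitor count for test logs.
import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public final class VideoConfigLogger {

    public static void main(String[] args) {
        GraphicsDevice[] screens =
                GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        System.out.println("Monitors: " + screens.length);
        for (GraphicsDevice screen : screens) {
            DisplayMode mode = screen.getDisplayMode();
            // Refresh rate may report 0 when the platform can't determine it.
            System.out.printf("%s: %dx%d, %d-bit, %d Hz%n",
                    screen.getIDstring(), mode.getWidth(), mode.getHeight(),
                    mode.getBitDepth(), mode.getRefreshRate());
        }
    }
}
```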

I recently talked with David Wren, managing director of PassMark Software, a leading maker of system benchmark software, about the importance of video configuration. Wren says video configuration matters in ensuring a solution functions as expected. One example of where this was an issue on Windows systems is the recent trouble with Nvidia drivers on Windows Vista; some video adapter models have had ten or more driver releases since November 2007.

Beyond driver issues, which Wren says can be notoriously buggy, 2D and 3D video performance can be a point to watch in the test process. For software that isn’t graphics-intensive, such as text-based screen activity, video performance is relatively unimportant; once graphics are introduced, it becomes critical. Examples include playing high-quality video, running games and rendering graphics with design software. PassMark maintains a website of video card performance benchmarks that is updated daily, and the results for the same tests on different platforms vary widely.

These factors and more are especially relevant in today’s media-rich technology landscape. High-quality media will likely perform differently under different video configurations, making benchmarks a critical part of the test process.


March 10, 2009  12:55 PM

Privacy issues with cloud computing

Michael Kelly

Judith Myerson recently published a developerWorks article on cloud computing versus grid computing. It’s a fantastic read if you’re new to the topic, and it provides tons of links for learning more. While introducing the topic, Myerson lays out the basic relationship between the two. When describing some of the basic differences, she points out that one big advantage of cloud computing is on-demand resource provisioning.

Myerson then goes on to discuss some issues to consider. Those include threshold policy, interoperability issues, hidden costs, unexpected behavior, and security concerns. As you continue your research in cloud computing, you’re likely to find that it’s security concerns in particular that get people talking. In a recent Computerworld article on cloud computing not being fully enterprise-ready, author Craig Stedman points out that many cloud computing vendors might not be ready to support corporate IT due to security concerns. A more detailed article on the topic by Jaikumar Vijayan goes further.

In that article, Vijayan takes an in-depth look at the World Privacy Forum’s recent report and highlights some of the bigger issues, such as concerns about data privacy, security and confidentiality. Specific issues discussed include dealing with privacy regulations, data-disclosure terms and conditions, legal protections, the host’s geographic location, and readier access to data for government agencies and parties involved in legal disputes. If Myerson’s article is a must-read introduction to the topic, then Vijayan’s is a must-read on the privacy concerns.

