Software Quality Insights

May 7, 2012  11:03 PM

Scrum Gathering Atlanta: Activities for co-creating a shared vision

Melanie Luna

In his presentation at Scrum Gathering Atlanta, “Creating a Shared Vision: From Compliance to Commitment,” Brad Swanson, Senior Coach and Vice President at agile42, discussed the importance of having a vision and offered five strategies for building a shared one.

In defining the word “vision,” Swanson gave some inspiring examples, such as Kennedy’s vision of landing a man on the moon by the end of the 1960s and the vision of creating a portable digital music player, the iPod. Inspiring visions include concrete, measurable details, he explained.

This interactive session gave participants opportunities to explore different strategies for building a shared vision. Groups prepared posters and gave presentations based on the five strategies for building a shared vision from the book The Fifth Discipline Fieldbook by Peter Senge et al.: Telling, Selling, Testing, Consulting and Co-Creating. Groups practiced Innovation Games techniques such as “Remember the Future” and “Product Box.”

Team members had just minutes to apply techniques and produce analyses and product brochures. “The process you went through as a team is more important than what you ended up with on paper,” Swanson pointed out during the wrap-up.

Participants highlighted key takeaways from the presentation, including “applying the co-creation techniques generated passion and purpose.” One of the participants said that he liked how the product brochure activity, one that required the creation of a physical item, spurred participation from everyone in the group. Another participant commented on how these techniques helped sharpen focus, when so often people can get caught up in the details of a project.

How effective is your organization at implementing a shared vision?

April 25, 2012  2:01 PM

Can exploratory testing be automated?

Yvette Francino

Some might argue that exploratory testing, by its very definition, is the opposite of automated testing; however, testers are learning ways in which automation and exploratory testing can complement one another.

At STAREAST 2012, I spoke to one participant who used tools to facilitate her exploratory test sessions, collecting logs and other information that helped with troubleshooting. Though this is not “automated testing,” it is automation aiding exploratory testing.
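The specific tools aren’t named here, but the basic idea is easy to sketch. Below is a minimal, hypothetical Python helper (the class name, methods and log format are illustrative, not any particular product) that timestamps a tester’s notes during an exploratory session so a developer can later line them up against application logs:

import logging

# Hypothetical session logger: timestamped notes make it easier to
# match a tester's observations against application logs later.
class ExploratorySession:
    def __init__(self, charter, logfile="session.log"):
        logging.basicConfig(filename=logfile, level=logging.INFO,
                            format="%(asctime)s %(levelname)s %(message)s")
        logging.info("SESSION START: %s", charter)

    def note(self, text):
        # An ordinary observation made while exploring.
        logging.info("NOTE: %s", text)

    def bug(self, text):
        # A suspected defect, flagged so it stands out when
        # reconstructing the steps that led to it.
        logging.warning("BUG?: %s", text)

# Usage:
# session = ExploratorySession("Explore checkout with expired coupons")
# session.note("Applied coupon EXPIRED10; total did not update")
# session.bug("HTTP 500 after removing the coupon")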

In her keynote, Dot Graham talked about “automated exploratory testing,” giving an example of how it was used to test Bing. I asked Bob Galen, who presented “Session-Based Exploratory Testing,” about mixing automation and exploratory testing. He warned against the thinking that automation could replace exploratory testing, but did give an example of a case study in which the repository that was created from exploratory test documentation helped to generate automation test ideas.

Listen in below to Dot Graham and Mark Fewster, authors of Experiences of Test Automation, as well as to Bob Galen, as they explain their thoughts on the mixture of exploratory testing and automation.

[Embedded videos]

April 23, 2012  7:56 PM

Test automation success: Breaking down managers’ misconceptions

Melanie Luna

One popular objective for test automation is to automate 100% of manual tests, according to independent test consultant and author Dorothy Graham. However, while some tests are better automated, others are better performed manually. In her keynote at STAREAST 2012, “What Managers Think They Know about Test Automation—But Don’t,” Graham discussed the misconceptions managers often have about automation and identified ways to set realistic goals.

Objectives are vital, as they determine a team’s direction, its funding and whether the project is judged a success or failure, she explained. Unrealistically high goals will ensure failure. One measure of success is ROI, which some managers conflate with the many benefits of automation: tests run more frequently, take less time to run and require less human effort.

To measure ROI, one can use a simple calculation: (benefit – cost) / cost. More information is available on Graham’s website. Even without quantifying ROI this way, good returns can be achieved, but it is essential that the benefits are made visible.
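As a quick worked example of that formula in Python (the dollar figures are invented purely for illustration):

def roi(benefit, cost):
    # Graham's simple ROI formula: (benefit - cost) / cost.
    return (benefit - cost) / cost

# Illustrative numbers only: $50,000 saved in manual test effort
# against $20,000 spent building and maintaining the automation.
print(roi(50000, 20000))  # 1.5, i.e. a 150% return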

“Automation does not find bugs; tests find bugs,” said Graham. Automation in itself is not a cure-all for an organization’s testing needs. Furthermore, it does not replace testers. Automation, like other testing tools, supports the efforts of testers.

Graham emphasized the importance of implementing high quality testware architecture, as she cited poor architecture as the leading cause of abandoned automation efforts. The testers, test execution tools and positive relationships between developers and managers must all work in concert to produce successful results. “Good automation takes time and effort. It doesn’t just come out of the box,” said Graham.

She encouraged testers to educate their managers on the realities of test automation, as their support is critical to project success.

For more on test automation from Dorothy Graham and Mark Fewster, authors of Experiences of Test Automation, see Test automation: Exploring automation case studies in Agile development.

For comprehensive conference coverage, see our Software Testing Analysis and Review conference page.

April 20, 2012  10:25 PM

Tools and techniques for performance testing with Scott Barber

Melanie Luna

Many people think performance testing is a difficult effort riddled with complicated tools and measurements. Scott Barber, Chief Technologist at PerfTestPlus, Inc., suggested leaving the genuinely complex parts to the specialists; there are easy parts that people just getting started with performance testing can take on. His presentation at STAREAST 2012, “Simple and Informative Performance Tests You Can Do Now,” offered several entry points to this often misunderstood aspect of testing.


He discussed the definitions of performance testing floating around, from “performance testing makes websites go fast” to “performance testing optimizes software systems by balancing cost, time-to-market and capacity while remaining focused on the quality of service to system users.” He also clarified some common testing terms: load testing is about expected, anticipated conditions; stress testing, conversely, is about unexpected outcomes, finding when and where something might break.
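A toy sketch may make that distinction concrete. This is not a tool Barber mentioned, just a minimal Python illustration (the URL and user counts are placeholders); real load and stress testing is done with dedicated tools:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://www.example.com/"  # placeholder target

def timed_request(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.time() - start

def run(users, total_requests):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(total_requests)))
    print(f"{users} concurrent users: avg {sum(times) / len(times):.2f}s")

# Load test: the conditions you expect (say, 10 concurrent users).
run(10, 100)

# Stress test: ramp past expectations to find when and where it breaks.
for users in (50, 100, 200):
    run(users, 100)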


Performance testing plays an important role in numerous possible objectives: determining compliance with requirements, evaluating release readiness, assessing user satisfaction, estimating capacity, validating assumptions and generating marketing statements. As such, it hardly makes sense for performance testing to come only at the end of production, explained Barber.


“Performance testing helps stakeholders make decisions regarding product value and project risk; specifically value and risk related to speed, scalability and the stability attributes of a system and its components throughout the product lifecycle,” he said.


He advocated raising the visibility of performance testing within your organization by asking questions that often boil down to “what’s the real goal?”, generating acceptance criteria and setting priorities. Report and talk about performance often, he suggested. He also recommended taking a few minutes to spot-check the performance of competitors’ sites; not many organizations do this, but such a competitive analysis takes little time and offers useful data, as the sketch below shows.
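A competitor spot-check can be as crude as timing how long each home page takes to fetch. A minimal sketch (the URLs are placeholders, and this measures only the base HTML, not the full page render that browser-based tools report):

import time
import urllib.request

# Placeholder URLs: your own site plus a couple of competitors'.
SITES = [
    "https://www.example.com/",
    "https://www.example.org/",
]

for url in SITES:
    start = time.time()
    with urllib.request.urlopen(url, timeout=15) as resp:
        body = resp.read()
    elapsed = time.time() - start
    # Only the base HTML is fetched here, not images or scripts.
    print(f"{url}: {elapsed:.2f}s for {len(body):,} bytes")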


As far as getting started with actual performance testing, he provided links to several online tools:


Tools for determining speed:


Tools for making use of performance snapshots:


These tools can offer data, often with accompanying graphics, in a rather short amount of time, and they aren’t the only ones; more tools become available every day.


Check out this video of Scott Barber at STAREAST 2012:

[Embedded video]

April 20, 2012  4:31 PM

Software testing is an “intentional investigation”

Melanie Luna

When testers “compare the product to specifications,” what they are really doing is identifying intentions, and those intentions often differ from the product. While the tester’s job may be to gather information, this makes testing an intentional investigation, according to STAREAST 2012 keynote speaker Michael Bolton of DevelopSense, Inc.

His speech, “Evaluating Testing: The Qualitative Way,” explored qualitative research methods, compared the job of software testers to that of investigative journalists and offered the notion that testing is ultimately a human activity.

Read more in-depth coverage of this insightful speech here: Software testers as qualitative researchers: STAREAST 2012 keynote.

[Embedded video]

April 19, 2012  1:28 AM

Don’t wait too long to start performance testing

Yvette Francino

As software quality professionals, we all know that the earlier we find defects, the easier they are to fix. However, all too often, performance testing is not done until the very end of a release cycle. In fact, some organizations wait until after the code has been deployed, thinking they can simply monitor performance and fix it when there’s a problem, rather than testing up front.

Eric Gee is facilitating a session here at STAREAST 2012 titled “Performance Testing Earlier in the Software Development Lifecycle.” In this short video clip, Gee previews his session and explains how performance test engineers can partner with architects and developers, testing at the component level to ensure that performance issues are found and addressed as early as possible.

[Embedded video]
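Component-level performance checks of the kind Gee describes can start very small. A minimal sketch, assuming a hypothetical component lookup_account() with an invented 50 ms latency budget:

import timeit

def lookup_account():
    # Stand-in for the real component under test.
    return sum(range(10000))

# Fail fast if the component blows its (invented) 50 ms budget,
# long before full-system load testing begins.
runs = 100
avg = timeit.timeit(lookup_account, number=runs) / runs
assert avg < 0.050, f"lookup_account averaged {avg * 1000:.1f} ms"
print(f"avg {avg * 1000:.2f} ms over {runs} runs")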

April 19, 2012  1:05 AM

Exploratory testing: Tools to help recreate the problem

Yvette Francino

Here at STAREAST 2012, we’ve been hearing quite a lot about the importance of exploratory testing. This technique lets testers really dig into areas of high risk, using the knowledge they gain as they test to creatively uncover problems that aren’t on the “happy path.” However, when they do run across an anomaly or defect, they need to recreate the problem so that the developer can troubleshoot it and find the root cause.

I asked conference attendee Thora Commins which tools she used to easily recreate the issues she uncovered during exploratory testing. Listen in to hear her response:

[Embedded video]

April 19, 2012  12:57 AM

Risk-based testing session at STAREAST 2012

Yvette Francino

“Simplify, simplify, simplify” and “make sure you’re testing exactly what your customers are asking for,” says Mary LeMieux-Ruibal from Cognizant. She and Dr. Mirkeya Capellán from Sogeti facilitated the session, “Creating a Risk-based Testing Strategy,” at STAREAST 2012.

In the article Risk-based testing approaches for Agile development teams, I talk to LeMieux-Ruibal and Capellán about risk-based testing and how it’s used in Agile environments. It was a real pleasure to meet them in person at the conference in sunny Orlando, Fla., and chat face to face. Take a look at this short video clip as they give a quick summary of their session:

[Embedded video]

April 13, 2012  4:03 PM

Exploratory testing in Agile environments

Melanie Luna

How do testers integrate exploratory testing techniques and test automation in an Agile setting? Software testing consultant Lanette Creamer is co-presenting “You Can’t Spell Agile Testing without ‘ET’” with Matt Barcomb at STAREAST on April 18. Listen to this brief podcast for a preview of the session, which addresses exploratory testing in Agile environments.

Listen to the podcast here.

April 12, 2012  2:31 PM

Programming skills needed for test automation

Yvette Francino

We continue to hear more about test automation as more organizations are claiming success with their automation strategies. Just a few months ago at our local SQuAD (Software Quality Association of Denver) meeting, a panel of recruiters advised test professionals to learn some technical skills.

That advice was repeated at last night’s SQuAD presentation, “Test Automation 101,” by Jim Hazen. Much of Hazen’s presentation centered on testers learning how to program, since many automation tools require some degree of programming. “Even codeless and scriptless tools [require some programming skills.] At some point, you’re going to need to dig into the code.”

[Embedded video]

Hazen suggested books and online resources to get started. The first book he mentioned was Experiences of Test Automation: Case Studies of Software Test Automation, the new book by Dot Graham and Mark Fewster. Coincidentally, I’ve been emailing Dot and Mark about meeting at next week’s STAREAST conference to augment the recently published two-part interview I’d done with them:

Test automation: Exploring automation case studies in Agile development

How IT leaders can boost ROI with test automation

Hazen talked about several automation tools and suggested the popular open source tool Selenium for those who’d like to get their feet wet with test automation. However, he also warned that automation takes work, and that believing vendor hype is one of the biggest mistakes groups make when implementing an automated test solution. Organizations that implement automation well can certainly realize a strong ROI, but Hazen warned that 70-80% of organizations fail on their first attempt at test automation.
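For a taste of what that first step looks like, here is a minimal Selenium WebDriver script in Python, assuming the selenium package and a browser driver are installed (the URL and title check are placeholders):

from selenium import webdriver

driver = webdriver.Firefox()  # or webdriver.Chrome(), with its driver installed
try:
    driver.get("https://www.example.com/")
    # The simplest possible automated check: did the expected page load?
    assert "Example" in driver.title, f"Unexpected title: {driver.title}"
    print("PASS:", driver.title)
finally:
    driver.quit()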

As with any effort, it’s important to start with planning and to make sure the staff is properly trained. The message is clear: in this day of Agile development and automated testing, testers need programming skills to remain competitive.
