Software Quality Insights


October 13, 2009  8:44 PM

Project management consultant/author extends savings to SearchSoftwareQuality readers

Profile: Daniel Mondello

Barbee Davis is an experienced project management consultant and author who writes regularly for Community Post, a semi-monthly international publication, on project managers’ concerns and ways to negotiate projects successfully.

She is offering our readers a 30% savings coupon on her latest project management book, published through O’Reilly: 97 Things Every Project Manager Should Know.

To receive the 30% savings, apply promotional code ABF09 on the O’Reilly website.


For more information on Davis, click here.


Related Content
Software expert on Agile’s rise, avoiding project management mistakes
Software project management consultant discusses the importance of agile development, common PM errors, PM career preparation and more.

October 13, 2009  6:25 PM

Is your software test team rigorously incompetent?

Profile: Jan Stafford

What happens when a software development or test team’s manager preaches rigor in processes but doesn’t make sure that team members follow those processes correctly? “Rigorous incompetence,” says veteran tester and author James Bach. In this video, shot at the recent Software Quality Engineering STARWEST Conference, he shares his views on managers who insist their teams look busy but don’t ensure that they’re doing their jobs well.


Related Content
Testing rigorously, learning on your own: A chat with James Bach
Author and founder of Satisfice Inc., a software testing and quality assurance company, describes best practices in testing and learning.

James Bach interview: Dispelling software testing myths
The current drive for more rigorous software testing processes and more test documentation is misguided and wastes time and money, says James Bach, author and software testing veteran.


October 12, 2009  2:24 AM

How software test teams’ people skills affect results

Profile: Michael Kelly

I wasn’t always the brilliant, charismatic, well-loved tester that I am today. Well, I’m not really any of those things today, but I often like to think I am. That’s my problem. Sometimes my ego gets in the way of my best work. I’m also susceptible to all the bad behaviors that come with tight deadlines, layoffs and work I don’t particularly like. Like everyone else, I can let stress tax my professional relationships.

Karen Johnson will be tackling this problem head-on in her talk, “Building Alliances,” at the Pacific Northwest Software Quality Conference (PNSQC), which takes place Oct. 26-28, 2009, in Portland, Oregon. Karen will look at how to create and foster successful professional relationships with teams and people. That means not just fostering a collaborative spirit, but also looking for ways that teams can look out for each other and work together.

“The talk focuses on building healthy relationships at work,” Johnson told me recently. “In theory we have a relationship with everyone we work with, at least at some level. But how many relationships do you have on your team or at your company that you can count on when the going gets tough? The answer is usually a handful of carefully maintained alliances. Those alliances are the solid dependable work relationships my talk is focused on. How do those alliances get built? And how are those relationships maintained?”

Karen Johnson is an independent software test consultant who has been involved in software testing for more than two decades. She has extensive test management experience, and her work often focuses on strategic planning. A frequent speaker at major conferences, she has published numerous articles and recorded webcasts on software testing, many of them here on SearchSoftwareQuality.com. She is also a contributing author to “Beautiful Testing,” an O’Reilly book coming out later this year.

Most recently, Johnson’s focus has been on developing a sense of community for software testers working in the area of regulated software testing. She’s been studying office politics and how they affect people. In her PNSQC session, she’ll reveal her findings on the positive practices of healthy, functioning teams and how those practices build good, collaborative relationships.

“I find it interesting that sometimes people in our software testing community recollect with horror painful people events from their past work experiences,” she said. “And sometimes people talk about successful alliances at work and people they’ve been able to depend on. But what I haven’t heard people talk about is how to go about creating those healthy relationships. After a number of years of working in a variety of companies, I have a collection of positive and negative experiences. My talk draws from both.

“I don’t like politics. I’m not especially good at them, and earlier in my career I figured I could avoid politics altogether. Once I became a test manager, I realized I couldn’t avoid politics. Much of what I wanted to say, and plan to say, comes from my experiences, my stories and, from my heart, what I’ve learned.”

In typical Karen Johnson style, her talk will also be well referenced. Karen told me that in addition to her own experiences, the talk also draws on the work of others. “There is an amazing amount of material on the topic,” said Karen. “These references are especially good reads…”

Clearly, her talk is not focused on the technical aspects of software development. It’s about people. For those who might not feel topics like this are important, Karen offers the following:

“Let me offer this thought: One of the skills I’ve focused on acquiring and sharpening over the years is being observant. When I test software, I’m making observations. When I work with people, I use my observation skills to notice relationships and dynamics. I need to be able to observe changes with people and teams in order to get work done. It’s not just the raw technical skills that get the job done.

“If you want to test software and quietly report defects, I suppose technical skills might be enough. If you want to be someone who has influence to improve a product, you have to be able to work with people. I think it makes sense to have talks at our testing conferences that address people topics.”

Karen will also be teaching a full-day tutorial on SQL at PNSQC. The class is more advanced, focused primarily on joins and queries. For more on the upcoming show, check out the PNSQC website. You can also learn more on Karen Johnson’s website, her Testing Reflections blog, or by checking out some of her Expert Responses and Tips on SearchSoftwareQuality.com.


October 8, 2009  7:31 PM

Testing rigorously, learning on your own: A chat with James Bach

Profile: Jan Stafford

My conversation with James Bach yesterday was certainly the high spot of my StarWest 2009 experience. We talked about topics dear to our hearts, such as critical thinking, self-education, wrong-headed notions about what constitutes good testing practices, Shakespeare and more.

Bach is an author and the founder of Satisfice Inc., a software testing and quality assurance company, but mostly he’s known as an articulate, passionate writer and speaker. His philosophy of testing is controversial because he believes that what software testers know from experience and self-education is as valuable as, if not more valuable than, certifications.

He knows, he told me, that certifications and degrees help testers get jobs from strangers. Both “pieces of paper” serve as verification that the person has the skills needed for the job. Unfortunately, neither really verifies that the holder can think critically or creatively about what’s needed to do the job in various situations or work on a team.

That said, Bach is much more interested in personal fulfillment and the acquisition of real rather than rote learning. He serves up his lifelong self-education journey in his new book, Secrets of a Buccaneer-Scholar. It’s taken years to write, he said, because he had to have the experiences before he could write about them.

Besides self-education, Bach is currently on a quest to get test organizations to re-examine the concept of rigor in software testing. “Too often, they’re doing their testing rigorously, but they’re also doing their testing incorrectly,” he told me. Rigorously doing something wrong compounds problems instead of solving them. He sees too many test managers equating success with the number of tests run and defects found rather than with effectiveness. You can read more of his views on test rigor in Mike Kelly’s interview with him, titled James Bach interview: Dispelling software testing myths.

As I said, our conversation took off on many tangents, touching on our love for Kenneth Branagh’s Shakespeare films and the cool things about testers, one of which is their curiosity. This experience is what actually going to conferences is all about: meeting people in person and talking about a little bit of everything.


October 8, 2009  6:47 PM

Consultant Lloyd Roden: Choosing realistic software test team challenges

Profile: Jan Stafford

Some challenges are not worth taking on, said consultant Lloyd Roden during his StarWest 2009 keynote here at the Disneyland Hotel today. A software testing expert for U.K.-based Grove Consultants, Roden opened his talk on testers’ top challenges with a warning about setting the wrong challenges for a test organization.

“We can’t fight every challenge. You can’t do everything,” Roden said.

Examine the preparation needed and the talents of your team before setting goals, Roden advised. “Climbing Everest would be a challenge, but it would be a stupid one to undertake without preparation and skill,” he said. “Cooking a meal for 20 would be a challenge for some and not for me, because I love cooking and have cooked a lot.”

A good challenge improves the people who take it on and opens their minds to different approaches to achieving the goal at hand. “Bad challenges are harmful and have undesirable consequences,” he said.

After advising testers there to “choose your battles carefully,” Roden gave a personal example, recalling how his daughter asked for pierced ears when she was 14 years old. At first he was against the piercing until his wife explained that piercings didn’t have to be permanent, so he chose not to say no. He did say no to tattoos, which are permanent, until she turned 18; he didn’t feel she was prepared to make a decision with permanent consequences.

One way test managers can know what challenges to set for test teams is to do some testing themselves. Roden often runs into test managers who do no testing, and he believes they’re missing the opportunity to really know what goes on with their teams every day. Remaining outside of testing can lead test managers to set unrealistic goals and challenges.


October 5, 2009  2:30 PM

James Bach interview: Dispelling software testing myths

Profile: Michael Kelly

Software consultant, trainer and author James Bach is a bit of a lightning rod in our industry. Most of his notoriety comes from his stand on software testing certifications. He doesn’t like them and isn’t shy about letting people know it. Bach also challenges other widely held beliefs in our industry.

Bach takes his off-the-certification-track testing tutorial, How to teach yourself testing, to StarWest 2009 in Anaheim this week. He’ll also take on some of the myths about rigor in software testing at the Pacific Northwest Software Quality Conference (PNSQC), which takes place Oct. 26-28, 2009, in Portland, Oregon.

The software myths talk gathers in one place, and in succinct form, arguments that Bach has been making indirectly for years.

“I keep hearing that more rigor is good and less rigor is bad. Some managers who’ve never studied testing, never done testing, probably have never even seen testing up close, nevertheless insist that it be rigorously planned in advance and fully documented. This is a cancer of ignorance that hobbles our craft.”

Bach’s talk is about clearing up what he calls the “silliness and sloppiness” surrounding the notion of rigorous processes.

“Managers want to say ‘let’s get a lot more rigorous in our processes!’ They may say ‘formal’ or ‘repeatable,’ but it’s all the same sort of thing,” he told me. “But getting rigorous is no panacea. It’s actually a bad idea, in many cases, because rigor can interfere with learning by locking us into bad practice. We need to apply rigor without obsession or compulsion, and at the right level, so that our testing is flexible and inexpensive.”

Bach has been a test manager or consulting tester since Apple lured him from a programming career in 1987. He spent about 10 years in Silicon Valley before going independent and traveling the world teaching rapid software testing skills. He is the author of Lessons Learned in Software Testing, and a new book, Secrets of a Buccaneer-Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success.

I asked him to offer an example of one of the myths facing our industry. He shared some of the impact of detailed documentation around testing processes and procedures:

“One myth is that a process becomes more rigorous just because it is written down. Well, as anyone ought to know who works with procedures, what is written is not necessarily what is practiced. In fact, it rarely is, in my experience. Moreover, it is not even possible to write down everything that matters about the processes that skilled people use.

“The impact of this myth is that a huge amount of money and time is wasted trying to chase an unhelpful obsession with documentation. It makes good theater – you look busy – but it doesn’t necessarily help you do better work.”

Bach has recently been training to fly a float plane. During PNSQC he plans to draw some examples from that experience, such as the fact that pilots have to learn many procedures and protocols in order to fly safely. “Rigor is an interesting challenge for a pilot,” he said. “In the talk, I talk about how we use checklists and some of the problems with those checklists.”

A lot of the philosophy behind James’ stance can be found in his latest book:

“Compulsory public education is an example of rigor myths getting drunk and going wild. Many millions of my fellow humans have bought into the idea that education is possible only through schooling. Rigor applied to education is usually presumed to mean that you subordinate your will to that of a schoolmaster, priest, guru or some other authority. I practice a different sort of rigor — education strictly inseparable from life. My life is my education. My life is my own. Education is not some side activity that I do to prepare for life.

“That applies to testing. I am a tester. That is a big part of my life. So, I can’t accept these silly certification programs and bad standards and process guides. Those are examples of rigor; however, they represent bad rigor, and my standards are too high for that. I’m trying to raise the standards of the industry so that it will laugh at bad work instead of enshrining it.”

You can learn more about James Bach’s testing practice on Satisfice.com. You can also follow him on Twitter.


September 29, 2009  10:32 PM

How software project managers can react to recessionary trends

Profile: Jan Stafford

Project management (PM) consultant Michelle LaBrosse shared some quick tips for PM strategies in recessionary times with me recently. These ideas gelled during an interview with Carey Earle, president of Green Apple Marketing, on her syndicated radio program, Your World, Your Way.

“We talked about great ways to use these trends as a launch pad for new ideas, solutions and direction in the workplace,” said LaBrosse, founder of Cheetah Learning, a PM consultancy, and a PM issues blogger.

Many projects are on an “economic slim-fast” diet, LaBrosse said. Not surprisingly, she’s seeing many project managers focus on practices that save their teams time, cut spending, and improve quality and time to market. Businesses are also trending toward novel perks in these days when raises are rare and budgets tight. She’s seen good results when project managers reward teams in inexpensive and creative ways, such as making a premium parking spot available to a top achiever each month.

The secrecy and lack of honest documentation of the early 2000s have caused an about-face, in a trend that LaBrosse and Earle call The Full Monty. “Technology brings us a whole new level of honesty whether we like it or not, but underneath the technology is a new human desire for trust and transparency like never before,” LaBrosse said. “Think: Is your documentation in good shape? Are you leaving a trail that you’re proud of? How can you be more transparent in your business?”

Finally, LaBrosse advises project managers to start noticing trends on their own. “The art and science of noticing the dramatic or subtle changes taking place will help you and your team continue to seek out future opportunities and successes,” LaBrosse said.


September 29, 2009  9:49 PM

At the movies: Exploratory, performance, security testing a kiosk

Profile: Michael Kelly

Whenever I walk into a movie theater, I remember the time I tested a self-service ticket machine. No one was paying me to test the kiosk; I was just killing time, waiting at a theater for someone to join me to watch a movie. The machine looked and functioned much like an ATM: you select your movie, swipe your credit card and print your tickets. What was great about the opportunity is that it allowed me to practice exploratory testing, usability testing, performance testing and security testing all at once.

I discovered that “playing” with the kiosk nicely illustrated what software testers do every day.

The system would allow you to select up to 10 tickets for each type of ticket you could purchase: adult, child and senior. While testing the limits of ticket selection and the proper calculation of the total amount, I noticed that if you maxed out the number of senior- and child-priced tickets, the system would beep at you each time you tried to select more than ten. However, when you attempted to select more than ten adult-priced tickets, there was no beep. It made me wonder about the beep. Was it a usability feature?
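
For what it’s worth, the limit checks I was doing by hand map directly onto an automated boundary test. Here’s a minimal sketch in Python against a hypothetical TicketKiosk model; the real kiosk exposed no such API, and the prices are invented:

    # A hypothetical model of the kiosk's ticket-selection rules; the real
    # machine exposed no API, and these prices are invented.
    class TicketKiosk:
        MAX_PER_TYPE = 10
        PRICES = {"adult": 9.50, "child": 6.00, "senior": 7.00}

        def __init__(self):
            self.counts = {t: 0 for t in self.PRICES}

        def select(self, ticket_type):
            """Add one ticket; return False (the 'beep') once at the limit."""
            if self.counts[ticket_type] >= self.MAX_PER_TYPE:
                return False
            self.counts[ticket_type] += 1
            return True

        def total(self):
            return sum(n * self.PRICES[t] for t, n in self.counts.items())

    def test_each_type_beeps_at_the_limit():
        kiosk = TicketKiosk()
        for ticket_type in kiosk.PRICES:
            for _ in range(TicketKiosk.MAX_PER_TYPE):
                assert kiosk.select(ticket_type)  # the first ten succeed
            assert not kiosk.select(ticket_type)  # the eleventh should "beep"
        # On the real kiosk, the audible feedback was missing for adult tickets.
        assert kiosk.total() == 10 * sum(kiosk.PRICES.values())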

After I was done with my functional analysis of the system, I had a chance to do some usability testing by watching people interact with it. I noticed one case in particular that showed what I consider to be a serious defect. A lady using the system selected her movie, entered her credit card information and waited as the screen displayed the message: “Please wait while processing your transaction.” I assume that at this point the system was attempting to connect to whatever service it uses to process credit cards.

As luck would have it, at that moment credit card processing for the theater went down. I know this due to the very vocal population of customers at the ticket counter. Unfortunately for the lady making her self-service purchase, the ticket machine seemed to have hung as well. It just sat there saying “Please wait while processing your transaction.” No message saying: “Timed out while connecting to service. Please try again.” No message saying: “Trying your transaction again, please wait.” Nothing. It just sat there.

After about five minutes, the lady finally lost her patience and started pushing the cancel button. She pushed it once. She pushed it a second time, harder. She then pushed it five times in rapid succession. She then put all of her weight into the button and held it down for several seconds. This process continued for some time; I counted as she pushed the button over 40 times. Still the screen read: “Please wait while processing your transaction.” So much for the cancel option! She then left the machine and went to the ticket counter for help.
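
For contrast, here’s a rough sketch of the timeout-and-retry messaging the kiosk seemed to lack. The charge and display objects are illustrative stand-ins, not the kiosk’s actual interfaces:

    import socket

    def process_payment(charge, display, attempts=2):
        """Attempt a card charge while keeping the customer informed.

        `charge` is any callable that raises socket.timeout when the card
        processor hangs; `display` needs only a show(text) method. Both
        are illustrative stand-ins, not the kiosk's actual interfaces.
        """
        for attempt in range(1, attempts + 1):
            display.show("Please wait while processing your transaction.")
            try:
                return charge()
            except socket.timeout:
                if attempt < attempts:
                    display.show("Trying your transaction again, please wait.")
        # Fail visibly instead of hanging forever on "Please wait..."
        display.show("Timed out while connecting to service. Please try again.")
        return None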

I found other issues while testing, but what stands out for me when reviewing this experience is not the issues I found, but that the process of finding issues “in the wild” is the same one we use “in the lab.” There was setup and configuration for my testing: show times; my credit card; connectivity to the bank; real users I could observe; and my watch to time transaction response times.

There was interaction with the system: myself and others pushing buttons; the system with the bank; the system with the counter system the clerks used; customers swiping cards; and the system printing tickets and receipts.

There was observation of results: noticing beeps and information on the screen; looking at my receipt and tickets; looking at the time on my watch; listening to customer reactions and the conversations at the counter; and seeing the actions the user took under stress.

I was able to draw conclusions based on those observations: the need for better error messaging in the system; the probability of a bug around the beep for adult tickets; and the possibility that the sticking cancel key was due to multiple people applying fifty pounds of pressure to it for extended periods of time.

Does that testing process sound familiar?

I like this memory because it illustrates all the basic mechanics of software testing, regardless of the type. It doesn’t matter if it’s functional testing, usability testing, performance testing, security testing or even automated testing (a sketch follows the list):

  • Testing almost always requires basic setup and system configuration.
  • Testing requires that someone operate the test system or interact with it in some way.
  • Testing requires that someone observe the results of those interactions.
  • Testing requires that someone evaluate those results and draw conclusions.
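
Those four mechanics also map onto the skeleton of almost any automated check. Here’s a minimal sketch in Python; the kiosk-flavored names are invented for illustration, not a real API:

    # The kiosk object and its methods below are invented for illustration.
    def test_ticket_purchase(kiosk):
        # Setup and configuration: a known showtime and a test credit card.
        kiosk.load_showtimes(["7:00 PM"])
        card = kiosk.issue_test_card()

        # Operate / interact: drive the system the way a customer would.
        kiosk.select_movie("7:00 PM")
        kiosk.select_tickets(adult=2)
        receipt = kiosk.pay(card)

        # Observe: capture what the system actually did.
        observed_total = receipt.total

        # Evaluate: compare the observation against an expectation.
        assert observed_total == 2 * kiosk.price("adult")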

What’s even better is that I learned something while waiting!


September 28, 2009  4:31 PM

Questions to help clarify test status

Profile: Michael Kelly

When I lead testing teams, the teams are typically doing session-based exploratory testing. A big part of session-based exploratory testing is the debrief. When testers complete a testing session (a time-boxed testing effort focused on a specific test mission), they debrief with me as the testing manager. That means I might sit down with each tester two or three times a day to do debriefs.

In each debrief the tester walks me through what they tested and what issues they found, we discuss the impact of their testing on project risks and test coverage, and sometimes we review the notes from their testing. There’s a lot that can get covered in a debrief, so I’ve developed a list of questions to help me make sure I’ve covered everything when I’m debriefing someone.

  • What was your mission for this session?
  • What did you test and what did you find?
  • What did you not test (and why)?
  • How does your testing affect the remaining testing for the project? Do we need to add new charters or re-prioritize the remaining work?
  • Is there anything you could have had that would have made your testing go faster or might have made your job easier?
  • How do you feel about your testing?

I don’t use these questions as a template. Instead I use them to fill in the gaps. I’ll typically open with something generic like, “Tell me about your testing.” Then, after the tester is done telling me about their session, I walk through this list in my head and make sure I have answers to each of these questions. If not, I’ll go ahead and ask at that time.
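
If it helps to see that gap-filling mechanic spelled out, here’s a throwaway sketch in Python. The question list is real; the code around it is purely illustrative:

    # The question list is real; the code around it is just illustration.
    DEBRIEF_QUESTIONS = [
        "What was your mission for this session?",
        "What did you test and what did you find?",
        "What did you not test (and why)?",
        "How does your testing affect the remaining testing for the project?",
        "Is there anything that would have made your testing faster or easier?",
        "How do you feel about your testing?",
    ]

    def remaining_questions(already_covered):
        """Return the questions the tester's narrative hasn't answered yet."""
        return [q for q in DEBRIEF_QUESTIONS if q not in already_covered]

    # After "Tell me about your testing," note what came up unprompted,
    # then ask only what's left.
    covered = {
        "What was your mission for this session?",
        "What did you test and what did you find?",
    }
    for question in remaining_questions(covered):
        print(question)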

Recently, during a class on exploratory testing where I reviewed this list, I was asked why I include the last question, “How do you feel about your testing?” For me, that’s a coaching question. I’m looking for the tester to express something they might need help with. Often they do. They might say something like, “I wasn’t happy with my testing of X or Y.” Or they might say they didn’t feel prepared for the session. I’ll use this information to help them with their testing.

When you first start doing debriefs, they might be slow. Some might take five or ten minutes. But fear not: like anything, the more you and your team do it, the easier it gets. Most debriefs take under five minutes, and some can be as quick as 60 seconds. The trick is just to make sure you’re not forgetting anything as you move quickly through the information.


September 28, 2009  4:26 PM

Tips for figuring out test coverage

Profile: Michael Kelly

Determining test coverage is about figuring out what you’re going to test in the application. When I start this process, I start with a coverage outline. And while I like to develop coverage outlines in Excel, you can use just about any application you’d like. A lot of people use mind-mapping tools, Word, or a graphing tool like Visio or OmniGraffle.

I’ll often start by developing a generic list of items to cover while I’m testing. I typically do this by working through the elements of the SFDPO mnemonic to get things started. The SFDPO mnemonic comes from James Bach, and it’s a heuristic to help you figure out what you need to test. If you are not familiar with it, SFDPO addresses the following:

  • Structure: what the product is
  • Function: what the product does
  • Data: what it processes
  • Platform: what it depends upon
  • Operations: how it will be used

Within each of those areas, there are specific factors you can look at. For example, the following list details what’s included in Structure, an often-ignored area of test coverage (a sketch of seeding an outline from the mnemonic follows these lists):

  • Code: the code structures that comprise the product, from executables to individual routines.
  • Interfaces: points of connection and communication between sub-systems.
  • Hardware: any hardware component that is integral to the product.
  • Non-executable files: any files other than multimedia or programs, like text files, sample data, or help files.
  • Collateral: anything beyond software and hardware that is also part of the product, such as paper documents, web links and content, packaging, license agreements, etc.
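
As promised above, here’s one way the mnemonic might be seeded into a working outline, sketched in Python rather than Excel. The Structure sub-items mirror the list above, and the example entry is invented:

    # Seed a coverage outline from SFDPO as a nested dict. The Structure
    # sub-items mirror the list above; the other areas are left to fill in.
    coverage_outline = {
        "Structure": {
            "Code": [],
            "Interfaces": [],
            "Hardware": [],
            "Non-executable files": [],
            "Collateral": [],
        },
        "Function": {},    # what the product does
        "Data": {},        # what it processes
        "Platform": {},    # what it depends upon
        "Operations": {},  # how it will be used
    }

    # As test ideas occur, hang them off the relevant node:
    coverage_outline["Structure"]["Interfaces"].append(
        "connection to the payment service"
    )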

Using the SFDPO mnemonic, I’ll cover each area in detail to identify what I believe I should be testing. Once I have my initial list, I put it down and walk away from it. I do this for a couple of reasons. Normally, it’s because I’m tired, but also to give myself time away from the list to see if anything new occurs to me while I keep it in the back of my thoughts.

A second approach I use to identify coverage is to look at what test data I already have. I’ll see if there is any data I have access to that’s ready to use, or could be ready to use with very little work. Is there test data lying around from past projects or production that I can use? What coverage does that give me? Is there test data I can create easily with tools or automation? What coverage does that give me? If I find anything interesting, or if the data I find sparks any ideas, I’ll go back and add that to the coverage outline.

Finally, a third approach is to think about specific risks related to the product I’ll be testing. Sometimes I’ll use bug taxonomies to spark my thinking if I have a hard time getting started. These normally help me with generic risks. The one I reference most is the appendix to Kaner, Falk, and Nguyen’s Testing Computer Software. Once the taxonomy gets me going, I can normally think of some additional risks that are more specific to my application.

Regardless of where the ideas come from and how I develop it, once I have a coverage outline I work to get it reviewed with various project stakeholders. That typically involves dialog and trade-offs. I cut out a bunch of the stuff I wanted to test and add a bunch of stuff I didn’t think of. Over time, this outline evolves as my understanding of the application and the risks to the project evolve.

For more information on SFDPO, check out Bach’s original article or his methodology handout, which details the specific product elements covered by the mnemonic. Also, if you don’t have a copy of Testing Computer Software, you can pick one up here.

