Software Quality Insights


July 11, 2009  12:33 AM

CAST 2009 preview: Positioning software testers as service providers

Michael Kelly

Presenting correct information isn’t just a function of how you write your report at the end of a software project. Instead, it is the result of a complex process that starts with analyzing the needs of your stakeholders, moves on to gathering accurate and timely data from your many project sources, and culminates in presenting your findings in the correct format, at the right time and to the proper audience. Joel Montvelisky calls this “Testing Intelligence,” and he is presenting a talk on the topic at this year’s Conference for the Association for Software Testing (CAST), July 13-16th in Colorado Springs.

“Testing intelligence is a term to describe a slightly different perspective to software testing that places the focus on the needs of our project stakeholders instead of on the application under test. The main idea is to position the testing team as a service organization within the development group, whose purpose is to provide the timely and actionable testing-based visibility needed by the project stakeholders to make their tactical and strategic decisions.”

“In principle this is nothing new, but in practice many testing teams tend to get disconnected from the changing needs of the organization during the project and end up working for the sake of their own ‘product coverage needs’ or the old information needs of their project stakeholders.”

Montvelisky is one of the founders and the Product Architect of PractiTest, a company providing a SaaS (Software as a Service) test and quality assurance (QA) management system. He is also a QA consultant specializing in testing processes and a QA instructor for multiple Israeli training centers. A member of the Advisory Board of the Israeli Testing Certification Board (the Israeli chapter of the ISTQB), he publishes articles and a periodic QA blog and is an active speaker at local and international conferences.

According to Montvelisky, the process of gathering testing intelligence starts with correctly identifying your stakeholders, then working with them to understand their needs, and finally providing them with the correct and timely information they need. He thinks there are a number of things that make this a hard, or at least nontrivial, process for testing teams:

“We can start with the fact that many times we are not aware of who all our project stakeholders are; we tend to miss some who are not physically close or who enter the project late in the process. Secondly, we testers are not really trained to work with customers […], so many times we don’t communicate correctly and we assume their needs without consulting with them about what information is important for their decisions, in what format they need it, and when. And finally, we don’t take into account the dynamic nature of our projects. We don’t understand that people require specific information at certain times. Nor do we take into account that as the project evolves the information we need to provide changes.”

Montvelisky started developing his CAST talk when he was a QA Manager working for an enterprise software company. There he realized that many stakeholders were frustrated with the existing bureaucracy of the QA work. The stakeholders thought the QA team’s work was dictated by test-planning documents written months beforehand. Montvelisky noticed that those documents were not staying relevant to the current issues affecting the release.

“In this company we made a mind-shift and decided to set aside time during the project for ‘specific testing tasks’ that would be given to us by the Development Team in real-time. Soon enough the demand for these tasks increased and we realized that we were providing real value to the process by becoming the eyes and ears of the project. After I left this company and became a consultant I took this approach with me and created a process around it to help organizations make the mind-switch and start working more effectively with their stakeholders throughout the project.”

The chance to get feedback is one reason Montvelisky is excited to be presenting at CAST. “It’s not easy to receive hard criticism, but once you learn to take it in a positive light and use these comments to continue developing your work, it makes it one of the most fruitful encounters for people looking to improve and develop ideas in the field.”

I asked Montvelisky where he thought he might get some pushback on his approach:

“In the past, I’ve heard two main areas of criticism to my approach, both of them fair. First, people explain to me that all their professional lives they’ve worked based on what I describe as testing intelligence, and that this is nothing new to them. To these people I usually come asking for their best practices and asking for their inputs in order to improve my approach.”

“Second, people tell me that our job should limit itself to test, and ‘we should be proud’ of it instead of trying to find big names for what we do, leaving this to the marketing team. To these people, I try to explain that every team in the organization needs to contribute value to the process, and if they think that all their value comes from reporting bugs and coverage percentages then they can continue working like that.”

“Having said that, there is a lot more value that can be provided by the Testing Team, and we don’t need to change what we do in order to provide it; we only need to make sure we stay connected with our stakeholders and help them throughout the project, not only at the end of it.”

Montvelisky is currently focusing on a couple of research topics. One of them is related to adding value by correctly utilizing the test management tools in the organization. The other is related to collaboration between testers from different organizations, and different cultures and countries, in order to improve their overall work.

For more on the upcoming show, check out the CAST conference website. For more on Joel Montvelisky and what he’s currently working on, you can follow him on Twitter or his PractiTest QA Blog.

July 11, 2009  12:10 AM

Eight days, 80 testers: Exploratory testing case study at CAST 2009

Michael Kelly

Software consultant Henrik Andersson implemented an exploratory testing training project in an 80-tester group in only eight days, and he lived to talk about it. Next week, he’ll outline the steps he took to quickly set up a pilot program and train testers in exploratory testing theory and practice during the Conference for the Association for Software Testing (CAST), which takes place July 13-16 in Colorado Springs. In his session, he’ll also cover how he made responsibilities and expectations clear.

I recently interviewed Andersson about his session, titled “Implementing Exploratory Testing at a Large Organization.” He said his first reaction upon receiving the assignment in question was that it was impossible to implement exploratory testing on this scale in that time frame. Reaching out to 80 testers is a challenging thing to do, he said, and it takes time to implement such a different way of testing. Yet he decided to rise to the challenge. “If I turned it down I would not likely get another chance,” said Andersson, a consultant and founder of House of Test, based in Sweden and China.

Once he accepted the project, he had to figure out how to do the impossible.

“I came up with a little twist on the initial request. I suggested that we should pick one tester from each test team and tutor them to become Exploratory Testing Champions. This gave me an initial group of nine people. This is what we achieved during the eight days. The Champions would then have the responsibility to tutor the rest of the testers in their teams. The Champions are now continuously working with this in their test teams, and we have established this new role formally.”

Andersson’s case study will show what the Exploratory Testing Champions approach achieved. He also will explain in detail how the project was implemented. He’ll describe the workshops on the theory and practice of exploratory testing that he conducted during the project. He’ll share observations about tutoring a group of people used to working in a completely different way, the positive feelings and feedback he received, what surprised him, and which approaches did not succeed.

Just so you know Andersson is no newbie to testing methodologies, here’s some information about his background. As a software tester and consultant, he has worked in a variety of fields, including telecom, medical devices, defense, insurance, SAP and supply chain systems.

For the past 10 years, Andersson has focused on working in a context-driven fashion, mixing exploratory testing with more traditional methods, such as RUP, the V-model, TMap and others. However, he has never followed any method to the letter. “I always only took the part that has been useful and invented the parts I was lacking,” said Andersson. “I definitely didn’t do the parts that I felt were obstacles or not useful.” Indeed, Andersson enjoys helping organizations move away from “old school” practices.

For more on the upcoming show, check out the CAST conference website. You can learn more about Henrik Andersson and his company House of Test on their website. You can also follow Henrik on Twitter.


July 7, 2009  3:29 PM

Addressing eVoting concerns at this year’s CAST conference

Michael Kelly

Right now, if I do a search on ‘electronic voting’ on Google News, I get 748 results for the past month. The headlines include phrases like “How to trust,” “Technology is not foolproof,” “Electronic voting machines are fallible,” and the ever-present “Electronic voting machines also caused widespread problems in Florida.” There are many legitimate concerns around electronic voting technology.

At this year’s Conference for the Association for Software Testing (CAST), July 13-16th in Colorado Springs, AST eVoting Special Interest Group (SIG) members Geordie Keitt and Jim Nilius will be taking a look at the highly visible testing processes around eVoting systems and will outline their concerns about the way the testing is done today.

“Our talk is about the reasons why an electronic voting system can undergo a rigorous, expensive, careful series of tests and achieve certification, and still be a terrible quality product. The laboratory certification system is not capable of ensuring quality products where the complexity of the system is as poorly represented by its standards as eVoting systems are. We have come up with an interesting model of the certification testing context, which helps us see when the rigors of cert lab testing are appropriate and adaptive and when they are not. We will be asking the conference attendees for help in honing our arguments in preparation for publication. We will be presenting them to National Institute of Standards and Technology (NIST) to suggest changes to the accreditation guidelines for certification labs.”

Geordie Keitt works for ProChain Solutions doing software testing, a career he began in 1995. He’s tested UNIX APIs, MQ Series apps, Windows apps, multimedia training courses, Web marketplaces, a webmart builder IDE, the websites that run bandwidth auctions for the FCC, and now critical chain project scheduling software. Geordie aspires to be a respected and respectful practitioner and teacher of the craft of rapid, exploratory software testing. He is the lead of the AST’s eVoting SIG.

Jim Nilius has over 23 years of experience in software testing, test management and architecture. His most recent role was as Program Manager and Technical Director of SysTest Lab’s Voting System Test Laboratory, an ISO 17025 test and calibration lab accredited by the US Election Assistance Commission under NIST’s National Voluntary Laboratory Accreditation Program, as mandated by the Help America Vote Act of 2002. The lab performs testing of voting systems for federal certification against the 2005 Voluntary Voting System Guidelines. He is a member of AST’s eVoting SIG.

“Jim has a wealth of experience in this domain,” said Geordie Keitt, “and as the Chair of the AST eVoting SIG I wanted to draw out of him as much knowledge as possible and get it out into the open where we can all look at it and try to detect patterns and learn lessons from it.” Keitt and Nilius did an initial presentation of their work at the second Workshop on Regulated Software Testing (WREST). One of the outcomes from that workshop was a diagram of the challenges facing a sapient testing process in a regulated environment.

“We are bringing an immature argument before a group and asking for their help to toughen it. We are testing the founding principles of regulated testing, which predates software testing by a hundred years and has yet to recognize that new software systems are more complicated than mature ones, instead of the other way around. We need help to hone and tighten our argument for it to be effective.”

For more on the upcoming show, check out the CAST conference website. Also, check out the website for the AST eVoting SIG to get involved and see what else they are working on. For more on the work of Geordie Keitt and Jim Nilius, take a look at their outcomes from the second Workshop on Regulated Software Testing.


July 6, 2009  5:32 PM

CAST 2009: Understanding the Principles of the Law of Software Contracts, an interview with Cem Kaner

Michael Kelly

“The American Law Institute just adopted the Principles of the Law of Software Contracts. For now, this will guide judges when they decide disputes involving the marketing, sale (of product or product license), quality, support and maintenance of software. Over the next few years, it will probably guide some new legislation as individual states apply its terms. It will probably also inform some legislative drafting efforts underway in Europe, and probably to come in other countries whose economies are gaining greater influence (e.g. India and China).”

That introduction comes from Cem Kaner as he summarized his upcoming talk at this year’s Conference for the Association for Software Testing (CAST). CAST takes place in a couple of weeks, July 13-16th in Colorado Springs, and Dr. Kaner will be looking to dig into some of the new rules for software contracts adopted by the American Law Institute in more detail.

“Historically, the American Law Institute has had enormous influence. Its membership is politically diverse and primarily judges and tenured law professors. Appellate-level judges routinely reference American Law Institute materials in their published cases. For software, judges need to turn to other judges’ writing even more than in other areas, because so much software law is judge-made. […] The Principles provide a unifying framework, based on judicial opinions around the country over the past 50 years. I was ill and unable to travel to the ALI meeting this year, but the ALI meeting blog reported that the Principles were passed unanimously. This is very rare, and it speaks well to the likely future influence of the document.”

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: “Testing Computer Software,” “Bad Software,” and “Lessons Learned in Software Testing.”

While this might seem like an unlikely talk to some, the subject matter is critical to our industry, and CAST is a perfect venue to get people talking about it. There’s a mix at CAST that is rare at other conferences. Not only do the talks go deep into the subjects they cover, but the average attendee is willing to dig in and do the work necessary to understand the subject matter. Add to that the importance of the Principles of the Law of Software Contracts to the industry and you have the perfect mix.

“At most conferences, I would give a talk about the Principles, a few people would ask short questions, we would call a time limit, and that would be that. Those are ‘marketing talks’ – one-way communication from someone pushing an idea to an audience that politely listens. I strongly prefer CAST’s approach, which encourages more critical questioning and follow-up discussions. Sometimes people come prepared for a serious debate. CAST encourages this, and everyone learns from it. More often, a group of us break away for more discussion at the end of the meeting. Again, I learn a lot from that, and so do the people in the breakout group, who get to discuss the ideas in a way that makes sense to them.

Many testers talk about their unhappiness with their company’s quality standards. They feel as though they are working on bad software, with irresponsible project managers who care a lot about cost, ship date, and personal glory but don’t care whether the final result works. Commercial law creates the playing field for software development and marketing. When you ask why a company can survive when it sells crap software, lies about what it is selling, and treats its customers like dirt, you are partially asking a market question (how can it keep customers?) and partially a legal question.

Given that we have hit a milestone in the adoption of the Principles of the Law of Software Contracts, I think it’s important to let testers know the new rules.”

Dr. Kaner is no stranger to the issues that will surround the Principles of the Law of Software Contracts. He started working on software-quality-related legislation in 1995, when he helped write the Uniform Electronic Transactions Act (UETA). UETA (adopted federally as ESIGN) removed a major barrier to electronic commerce by giving legal force to electronic signatures. Dr. Kaner also worked on the Uniform Computer Information Transactions Act (UCITA), trying to improve it when it was a joint project of the American Law Institute, the National Conference of Commissioners on Uniform State Laws, and the American Bar Association. From 1995 to 2001, he wrote almost 100 status reports on UCC 2B/UCITA, most for the testing community.

“UCITA started as UCC Article 2B, a joint project of the American Law Institute, the National Conference of Commissioners on Uniform State Laws and the American Bar Association to update the Uniform Commercial Code, which is America’s main body of commercial law. The American Law Institute and the National Conference of Commissioners on Uniform State Laws jointly run the UCC’s Permanent Editorial Board, which has such a strong reputation for fairness and thoroughness (amendments take up to 10 years of hearings) that state legislatures look to the Board for all amendments to the UCC.

Unfortunately, the Article 2B project was hijacked by political activists who wrote a bill that radically tilted copyright and contract law in favor of large software vendors. The American Law Institute demanded a rebalancing of the bill. When the National Conference of Commissioners on Uniform State Laws refused, the American Law Institute walked off the project, killing it as a UCC project. The National Conference of Commissioners on Uniform State Laws renamed the bill UCITA and submitted it to state legislatures. Ultimately, two states adopted UCITA, the rest rejected it, and four states even adopted laws that made UCITA-based contracts unenforceable in their states.

This was the first time since the American Civil War that some states passed laws to explicitly reject and make unenforceable contract terms that were lawful in the state in which the contract was written. Once UCITA’s failure was clear, the American Law Institute started a new project (the Principles), to bring computer law in line with the mainstream of American commercial and intellectual property law, balancing the rights of vendors and customers. They elected me as a member, which put me into a better position to comment on laws affecting the software industry. I think I am the least experienced lawyer ever elected to the American Law Institute. Since then, I’ve given a few reports to AST members at CAST and by email, collecting feedback and making suggestions back to the American Law Institute.

Revamping the legal infrastructure is not a guarantee of better times to come. A poorly balanced body of law can wipe out a marketplace or wipe out the companies trying to serve that market. I think this decade’s lethargic software market has been a result of that. I’ve seen a lot of extreme proposals coming from many quarters (right, left, and unclassifiable). The American Law Institute work is the best and most promising that I have seen.”

At CAST, debate on topics presented is encouraged. Given the effect of the Principles on the software development industry, I asked Dr. Kaner what he thought the most likely criticisms might be.

“I think the most likely criticism will be that holding software companies accountable for undisclosed known defects will somehow harm the industry. Sadly, I think this is poorly informed. A bunch of silliness was promoted on the web just before the American Law Institute meeting that claimed this would particularly hurt the open source community. As is so common in this decade’s political propaganda, this was blatantly wrong. The Principles specifically exclude open source software from this type of liability. I think more generally that some people fear that commercial regulation creates a potential to kill the industry. I have seen some proposals, especially from consumer activists and buyers for very large non-software companies, that demand too much. I think the Principles are much more moderate – perhaps too moderate.

The only way to address these types of concerns is with open discussion and facts. I’ll introduce some of the key ideas in my talk and then be available for as much post-talk discussion as people want. I don’t expect everyone to come out loving the Principles, but at least the folks who want a deeper understanding will have a good chance of getting it.”

For more on the upcoming show, check out the CAST conference website. For more on Cem Kaner, you can check out his website, or take a look at what he considers to be his career-best writing and research in his freely available book, “Bad Software.” The Principles are not freely available on the web, but a copy can be purchased from the American Law Institute. Dr. Kaner has summaries of some of the main ideas in the Principles on his blog and in his 2007 CAST presentation “Law of Software Contracting: New Rules Coming.”


July 2, 2009  6:25 PM

CAST 2009: Almog presents controversial new test case definition approach

Michael Kelly

Dani Almog will be outlining a new approach to test case definition at the Conference for the Association for Software Testing (CAST 2009) this month. In his talk “Test Case Definition: A New Structural Approach,” Almog will explore and classify the various definitions of test cases, discuss the implications of the current situation, and will suggest an alternative structural definition for test cases. CAST 2009 takes place July 13-16 in Colorado Springs, CO.

“Based on thorough research of academic articles, journals and books, and by consulting some of my distinguished colleagues around the globe, I will present my findings, aiming at an alternative formal structural definition of the term ‘test case.’ Although it may be a controversial approach, formalizing the term fits my engineering perception of the testing process. Thus, during my talk I will present a newly developed structural definition of a test case.

During the last six years, I was very fortunate to be able to fulfill a dream and implement ideas pertaining to how and when software testing should be developed; in short, finding ways to bridge the gap between the software development approach, the technology, the training (mostly object-oriented thinking) and the way we testing engineers see the same applications: in an intuitive, procedural (the way we use it) manner.

I got full support from top management in the company I worked for (Amdocs) to recruit a very talented and innovative group of young people who assisted me in developing and implementing a full infrastructure for test automation. Later, [this] was rolled out to all the corporate divisions and units. Now that I have retired from Amdocs, I have decided to dedicate the rest of my professional career to exposing and promoting the new approach I have developed, including methodologies and tools.”
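Almog is saving the formal definition for the talk itself, but the flavor of a structural approach can be sketched. The following is purely an illustration of the general idea – a hypothetical record type of my own, not Almog’s model – showing how a test case might become an explicit, machine-readable structure rather than free-form prose:

    from dataclasses import dataclass, field

    # Hypothetical sketch of a structural test case record (illustrative
    # only -- not Almog's actual definition, which his talk presents).
    # Requires Python 3.9+ for builtin generic annotations.
    @dataclass
    class TestCase:
        identifier: str                # unique, traceable ID
        preconditions: list[str]       # system state required before execution
        inputs: dict[str, str]         # named input values fed to the system
        steps: list[str]               # ordered actions to perform
        expected_results: list[str]    # observable outcomes that define a pass
        traces_to: list[str] = field(default_factory=list)  # requirement IDs

    # Example usage: a record like this can be validated, counted, and
    # traced to requirements mechanically, which is the kind of precision
    # a structured definition offers to automation tools.
    login_test = TestCase(
        identifier="TC-042",
        preconditions=["user account exists", "user is logged out"],
        inputs={"username": "demo", "password": "secret"},
        steps=["open login page", "submit credentials"],
        expected_results=["user lands on dashboard"],
        traces_to=["REQ-7"],
    )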

Dani Almog is currently a member of the academic staff at Ben Gurion University in Israel, where he teaches and does research. He teaches software quality engineering and testing, and researches the interaction between development processes and quality/testing – trying to introduce to the academic world some of the achievements and work models developed in industry.

“Coming from a large corporation’s (Amdocs) R&D division, we encountered many issues regarding test case automation – a necessity and a key factor for supporting eight different large software products, often with three different versions distributed among one hundred different very large customers. This situation made us consider all options and alternatives for test automation infrastructure, tools and methodologies. We were given the opportunity to be involved in shaping the future of our new products’ development processes and procedures. All my academic activities since my retirement, including this talk, are derived from this experience, and I am now actually documenting and presenting the best practices of what we have done.”

Almog says that in his talk he’s targeting two different communities. The first is the professional community, which he says is struggling to “improve its skills and outcomes regardless of inferior reputation and image – knowing they never really get the deserved glory.” The second is the academic community, which has “neglected a very exciting field of research and progress.” He suspects both communities might have some criticisms of his talk.

“I welcome the challenge of debate and criticism and believe it will improve my work. I guess the criticism will come from two main channels. From those questioning the relevance of my approach to all different streams and to the new ways software development expands to. And from people who perceive testing as softer and more flexible (exploratory, context-dependent, and others) rather than structured and engineered. I believe that my approach paves the way to more systematic and precise testing, as well as to development of advanced automation tools. I welcome all constructive and relevant criticism. It will help me present a better model.”

For more on the upcoming show, check out the CAST conference website. For a chance to explore the topic in more detail, you can contact Dani Almog on the Software Testing Club or follow his discussions there.


July 1, 2009  3:54 PM

CHATing about CAST 2009: Software testing and cultural history

Michael Kelly

If you’re looking for a cross-discipline topic related to software testing, Rebecca Fiedler and Cem Kaner’s upcoming talk at the Conference for the Association for Software Testing (CAST) – July 13-16th in Colorado Springs – might just be for you. Their talk, “Cultural-Historical Activity Theory: A framework to characterize the activity of software testing,” takes a look at one of the most difficult tasks in software testing – discovering and applying contextual information. Cultural-Historical Activity Theory (or CHAT) has been applied widely to software usability analysis, but not so much to testing. Fiedler and Kaner are hoping to change that.

“Cultural-Historical Activity Theory provides a clear structure for applying a systems-theory approach to human activities. In particular, it is really useful for looking at change in a human system. Perhaps you’re trying to understand a change that has caused your project to go off the rails, or maybe you’ll use it to analyze the introduction of a new tool or technology you’re trying to implement. The Computer-Human Interaction and Computer-Supported Cooperative Work crowds have been using Activity Theory for years. More recently, they’ve begun shifting from user-focused design to context-centered design. It seemed natural, given our advocacy for context-driven testing, to use CHAT to think about the context of testing as well as the context in which the software we’re testing will be used. CHAT helps with that.”

Rebecca Fiedler is an Assistant Professor in the Curriculum, Instruction, and Media Technology Department at Indiana State University. She’s interested in how people learn and how technology can make educational efforts more effective and more accessible to more people. In the testing community, she works with Cem Kaner on the Black Box Software Testing (BBST) courses and AST’s Education SIG. She is also a regular attendee at the Workshop on Teaching Software Testing.

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: “Testing Computer Software,” “Bad Software,” and “Lessons Learned in Software Testing.”

Fiedler says the idea for the talk started when she was doing her dissertation research a couple of years ago:

“I’m interested in how technology can help people learn and so I spent a lot of time at two different institutions watching college students use a specialized software tool for a high stakes task – high stakes as in their graduation depended on it. Academics love theory so I decided to use CHAT to sharpen the focus of my observations, interviews, and analysis. As Cem and I talked about what I was finding in my research, we started asking, ‘Where was their test group? How could they defer that bug? It does what?’ and other tester-like questions. It wasn’t long before Cem realized this would be a great model for testers to use. We started floating this with some of his testing colleagues and got more and more excited about it.”

Here are a few examples of challenges faced by testers that a CHAT-based analysis might help us better understand and thus more effectively work within:

  • Introducing a new metric
  • Introducing a new test tool
  • Interviewing stakeholders to gather their requirements and to discover the conflicts among stakeholders’ requirements
  • Designing tests that are tailored to expose highly important problems
  • Describing failures in ways that are intended to motivate specific stakeholders to demand fixes
  • Gaining insight into the dynamics of a failing project

Fiedler and Kaner have some concern that some people might find CHAT too complex to master in the short time available. That’s one of the reasons they chose CAST as the venue for their talk.

“I like that the CAST format requires audience and speaker interaction. Attendees get to explore a topic until they’ve heard enough. I also like that the conference isn’t over-scheduled, so you can have lunch or dinner and an extended conversation with speakers and other attendees. I’ve presented CHAT before. On the speaker side, I can tell you that it takes a while to convey the richness of the model. On the listener side, it takes a while to appreciate how it can be used. CAST gives us the time we need to talk about this.”

In addition, Fiedler and Kaner indicated they would be willing to take the discussion online after the conference if there were enough interest. “If enough people are interested,” Fiedler said, “we could participate in a discussion forum at AST or TestingClub in which participants apply this to their real examples/situations.”

I asked Fiedler where testers who might not be able to attend the conference could go for more information. She listed off a handful of papers and books that she’s used to help her develop her understanding of the method.

“Yrjo Engestrom (from Helsinki, Finland) developed the CHAT model and first wrote about it in a paper called ‘Learning by Expanding: An Activity-Theoretical Approach to Developmental Research.’ That’s the seminal work, but I thought it was a tough read. A few years ago, Sasha Barab, Michael Evans, and Un-Ok Baek wrote a chapter on using CHAT that appeared in the ‘Handbook of Research on Educational Communications and Technology.’ That chapter was very helpful. […] Right now I’m reading two books and I think I’m going to start recommending them to anyone interested in CHAT. They are ‘Activity-Centered Design: An Ecological Approach to Designing Smart Tools and Usable Systems’ by Geri Gay and Helen Hembrooke and ‘Acting with Technology: Activity Theory and Interaction Design’ by Victor Kaptelinin and Bonnie A. Nardi. Both are grounded in the HCI field, but I think they’ll be helpful to testers, too.”

For more on the upcoming show, check out the CAST conference website. For more on Rebecca Fiedler and online teaching and learning, you can follow her research on her website. For more on Cem Kaner, you can check out his website, or one of his books on software testing.


June 30, 2009  7:37 PM

Good metrics critical to software projects, CAST keynoter says

Michael Kelly

At this year’s Conference for the Association for Software Testing (CAST), taking place July 13-16th in Colorado Springs, Dr. Jonathan Koomey will be delivering the keynote address on “Real-life lessons for responsible use of data and analysis in decision making.”

In the keynote, Dr. Koomey is planning to present a few recent examples of widely-cited statistics that were grossly misleading or wrong. He’ll then use those examples to summarize real-world lessons for attendees so they can immediately improve their use of data and analysis.

“My presentation will delve into often ignored aspects of the art of problem solving,” said Dr. Koomey, “including the crucial distinction between facts and values, the dangers of ill-considered assumptions, the need for transparent documentation, and the importance of consistent comparisons.”
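To make the point about consistent comparisons concrete, here is a toy illustration of my own (not an example from Dr. Koomey’s talk): comparing two test teams by raw defect counts is misleading when the codebases they cover differ greatly in size, and normalizing to a common denominator can reverse the ranking.

    # Toy illustration (mine, not Dr. Koomey's); all numbers are made up.
    teams = {
        "Team A": {"defects": 120, "kloc": 400},  # 400,000 lines of code
        "Team B": {"defects": 45, "kloc": 60},    # 60,000 lines of code
    }

    for name, d in teams.items():
        density = d["defects"] / d["kloc"]  # defects per thousand lines
        print(f"{name}: {d['defects']} defects, {density:.2f} defects/KLOC")

    # Team A: 120 defects, 0.30 defects/KLOC
    # Team B: 45 defects, 0.75 defects/KLOC
    # Raw counts make Team A's code look worse; per KLOC, the picture flips,
    # so the comparison is only meaningful on a common denominator.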

Dr. Koomey is a Project Scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University. He is one of the leading international experts on electricity used by computers, office equipment, and data centers, and is the author or co-author of eight books and more than one hundred and fifty articles and reports on energy and environmental economics, technology, forecasting, and policy. He has been quoted in many major media sources, including The New York Times, The Wall Street Journal, Barron’s, The Financial Times, The Washington Post, Science, Science News, American Scientist and more.

I opened the interview by asking Dr. Koomey about his latest book, “Turning Numbers into Knowledge: Mastering the Art of Problem Solving,” now out in its second edition.

“The book summarizes what I’ve learned over the years in doing analysis and supervising analysts at Lawrence Berkeley National Laboratory. I’ve hired dozens of analysts over the years but I grew frustrated that I kept having to explain to them basic aspects of the art of analysis, like making good tables and graphs, what constitutes complete documentation, and how to structure a technical report. The book, and this talk, grew out of that frustration. It summarizes the craft of research in a way that is useful for inexperienced analysts, but it’s also a good refresher for those who’ve been in the field for a while. And I tried to make it a fun read, with lots of short chapters, cartoons, and funny graphics.”

When I asked what influenced the talk and the book, Dr. Koomey cited a couple of sources that were influential to him. Those included Edward Tufte’s “The Visual Display of Quantitative Information” and William Hughes’ 1997 book, “Critical Thinking: An Introduction to the Basic Skills.” Dr. Koomey also added: “Also, surprisingly enough, I was inspired by ‘Zen in the Martial Arts’ by Joe Hyams. The fluidity and readability of that book influenced the structure of ‘Turning Numbers into Knowledge.’”

In addition to his keynote at CAST, Dr. Koomey is also leading a workshop on his new book.

“The workshop will be an interactive exploration of how managers can encourage, prompt, and cajole their employees to give them the numbers they need to make good decisions. It will also help both managers and analysts hone their own analytical skills. Even if you’re a seasoned analyst, the exercises in the workshop will get you to think afresh about the challenges you face at work, and should help you become more effective at your job.”

This will be Dr. Koomey’s first time speaking at CAST, and I asked him if he was looking forward to getting to know the software testing community. “The essence of good software testing is critical thinking, and I’m happy to be in the company of smart people who use their problem-solving skills in new and innovative ways. I always learn something when I attend conferences like this.”

For more on the upcoming show, check out the CAST conference website.


June 29, 2009  3:16 PM

CAST 2009: How to teach yourself testing, an interview with James Bach

Michael Kelly

At this year’s Conference for the Association for Software Testing (CAST), taking place July 13-16th in Colorado Springs, James Bach will be presenting a tutorial on self-education for software testers. The tutorial, titled “Teach YOURSELF Software Testing,” is about teaching yourself testing, instead of waiting for some testing guru to tell you all the answers. Bach often boasts that he invented testing for himself, and he believes you can too. In his tutorial, Bach plans to share his personal system of testing self-education. Based on his upcoming book “Secrets of a Buccaneer-Scholar,” it’s a system of analyzing experiences and questioning conventional wisdom.

James Bach is a high school dropout who taught himself programming and testing. He’s been a tester, test manager, and consultant since 1987. A founding member of the Context-Driven School of testing, he has taught his class, Rapid Software Testing, around the world. He is co-author of “Lessons Learned in Software Testing,” and author of “Secrets of a Buccaneer-Scholar,” a book about technical self-education which is being published in September.

I first heard Bach talk at a conference in 2000, where he outlined a method of approaching testing problems that I found both engaging and effective. A few years later, I met him at a workshop and (after trying to hack his laptop) was lucky enough to get along well enough with him that he invited me to study software testing with him. I’ve had the pleasure of studying under him and have firsthand experience working through his syllabus of software testing concepts; developing my own understanding of how to identify, articulate, and test my own heuristics; and developing methods for assessing my progress. All of those topics are covered in this tutorial.

I opened the interview with Bach by asking him why he thinks self-education, as opposed to more traditional methods like classes or certification, is so important for someone’s career:

“Technical self-education is traditional, going back hundreds and thousands of years. Electricity, for instance, was discovered and developed by individuals working on their own, outside of any institution. So was chemistry and physics, for the most part. We are in the process of developing the testing craft, and that requires people who innovate.

Testing classes and certifications are pretty bad, for the most part. Of course, I try to teach a good one, but there’s not a whole lot I can do in three days. What I try to do is start a fire in the minds of the testers to pursue their education further without me telling them the answers.

Self-education is available to each of us, all the time. We don’t need a budget. We don’t need anyone’s permission. Institutional education, on the other hand, is expensive and limiting.”

Bach first presented a tutorial along these lines at CAST 2007. He often talks fondly about pulling the material together for the first time as a “bold boast.” A bold boast is a self-education technique.

“A bold boast is a trick I use to get myself going on a project. It’s basically a promise to accomplish some feat, such as to write an article or teach a class. I make the boast that I can teach something, and then my mind gets serious about solving the problems that need to be solved. So, the first time I did this tutorial was when I was writing the Buccaneer book and wanted to develop new material for it, more quickly. I told the CAST organizers ‘I’ll teach a class on self-education for testers,’ not knowing what I was going to do, at first.”

If you follow Bach’s work at all, a lot of what he puts forth in the tutorial mirrors the way he talks about and teaches software testing. So I asked him how the tutorial builds on, or extends, some of his past work.

“I have lots of odd ideas about testing. They run counter to the traditional ‘Factory School’ ideas that you find in most testing textbooks. I teach and demonstrate these ideas, in all my classes, but in this tutorial I get to share how I came up with them in the first place.

I’m nervous when I teach this tutorial, because I expect more from the students than in my normal classes. People who only want quick answers about how to test will be disappointed, because my goal is to show them how to create their own answers. In fact, I show them how they already know a lot of the things they don’t think they know!

In the tutorial, I also talk about how the testing craft is going through a great struggle. The various testing schools are fighting with each other for dominance. The certificationists, of course, being the most visible and aggressive of those. I stand up for the Context-Driven School – against the certificationists and Factory folks – and do my best to recruit testers to our cause. I’m up front about that.”

Bach’s ideas about self-education have, in the past, faced some criticisms. I asked him to anticipate a couple of the more likely criticisms and asked him how he addresses them.

“The most likely criticisms, I think, are these two:

In the tutorial I attack other schools of testing thought instead of trying to find common ground. My response to that is – that’s right. I think those other schools are harming the craft. I don’t see common ground. But I’m glad that our field is not regulated. I’d like to see the other schools go down in flames, but not through any mechanism other than the efficient operation of a well-informed market of ideas.

The second criticism is that it’s all well that the great James Bach can make it up as he goes along, but what about people who aren’t famous? My response to this criticism is that I was developing my own methodology before anyone outside of my team at Apple Computer knew my name. I became well known because some folks found out about what I was doing and thought it was interesting. ANYONE can be a testing methodologist. The market will decide, in the long run, whether it is interested in your methods.”

Bach’s ideas on self-education have been influenced by the works of Jerry Weinberg (also speaking at CAST this year), Herbert Simon, Daniel Kahneman, and Nassim Nicholas Taleb. “I would recommend ‘The Invention of Air’ as a great book that shows the development of one man – Joseph Priestley – and the development of his field of electricity and chemistry through vigorous and collegial self-education.”

“CAST is the conference that attracts the core contributing thinkers in the Context-Driven School. These are the people who, like me, are engaged in the creation of a vibrant testing craft that has roots in many disciplines and in the history of science. No other testing conference is like that. At CAST, I don’t have to apologize for using Joseph Priestley as an example of a good tester.”

For more on the upcoming show, check out the CAST conference website. For more on James Bach’s work, you can check out his website, or either of his books “Lessons Learned in Software Testing,” and “Secrets of a Buccaneer-Scholar.” Bach also runs two very popular blogs, one on testing and one on self-education.


June 23, 2009  7:14 PM

CAST 2009: The challenges of regulation, an interview with Jean Ann Harrison

Michael Kelly

Veteran software testing and quality assurance pro Jean Ann Harrison will present a software testing case study based on her experiences at a medical device company at this year’s Conference for the Association for Software Testing (CAST), slated for July 13-16th in Colorado Springs.

In her session — titled “A Balancing Act: Satisfying Regulators, End Users, Business Leaders and Development” — Harrison plans to provide guidance to testers who have to deal with conflicting priorities between developers, project managers, customers/patients, and regulators.

“Priorities clash, and inevitably software testers are in the middle of a battlefield between developers trying to get their work done and delivered, project managers trying to make a deadline, customers/patients wanting to make sure the product works as expected, and regulators demanding the proper documentation delivered in a sequential timeframe.” Harrison went on to share some of the questions she hopes to answer in the talk. “How do testers balance all these stakeholders’ priorities? How can testers decide which direction to take as the project matures? Which stakeholders take precedence over others?”

With 10 years of experience in software quality assurance and testing and three years testing embedded software on portable devices, Jean Ann Harrison has gained broad experience in various software testing processes and has worked in varied contexts, including large multi-million-dollar corporations, venture capital firms, and start-up companies. Harrison currently works for CardioNet, Inc., where her primary role is testing software embedded in medical devices that provide diagnostic data for physicians to determine their patients’ heart condition.

“I developed the talk through my own learning process as a software tester working in a regulated environment for the first time. What to do, what not to do, what one can expect, and how to handle the demands of a regulated company helped formulate my subject. And most companies producing software usually have some sort of description of what is wanted, needed, or expected when the project is completed. Most companies usually have some method of traceability of software requirements, software design, and product information. In a regulated environment, the role of documentation is the centerpiece of any project.”

When asked to expand a bit on the challenges of regulation, Harrison continued:

“First, a single source location for documentation must be identified, implemented and then monitored. Then, documentation in a regulated environment is distinguished by the level of detail provided, the sequence in which documents are submitted, the identification of appropriate reviewers to approve the documentation, and the historical record maintained for traceability purposes. This process is extremely formalized, and dates of submittals are critical to the project. Non-regulated environments tend to be more relaxed, and even the most formal processes have allowable slips. In regulated environments, slips are not acceptable, and contingency plans must be implemented to explain deviations. If regulated environments do not meet regulators’ demands, their certifications are rescinded.”

One of the things I found most interesting about my interview with Jean Ann Harrison was her biggest influence for the talk, which came not from the field of testing, but instead from political science. Harrison majored in Political Science 25 years ago. She’s found that the analytical thinking skills her professors emphasized play a large part in her success.

“In the four years and loads of courses, exercises were given to force students to practice analytical thinking. Software testers are constantly required to analyze how to do something, how to improve, and what the data is telling them; to analyze different perspectives; to create. Over the years, my analytical skills have evolved, but they certainly were given a solid foundation because two professors teaching the subject of Political Science felt the skill was critical to the coursework. One exercise, given to me in a course called Research Methods, I use today to train and mentor software testers. It is simplistic in nature but very difficult to implement. The exercise requires them to generate a new hypothesis that they personally have not read about, been trained in, or been given any sort of research material on. Then they are required to describe and prove the hypothesis using empirical means.”

When asked why she chose CAST as the venue for her talk, Harrison shared that this year’s theme for the conference, “Serving our stakeholders,” is directly relevant to some of the lessons her current company is learning as it experiences growth. “Each department is learning who the customers are,” Harrison says, “how we can better be of service, and what we can learn from our mistakes.”

For more on the upcoming show, check out the CAST conference website.


June 22, 2009  7:06 PM

CAST 2009: Understanding how much responsibility a testing team should have, an interview with Gerald M. Weinberg

Michael Kelly

For the previous three years, I was either an organizer of the Conference for the Association for Software Testing (CAST) or the President of the AST. So as you can imagine, I watched the conference closely. Last year, when we were able to announce Jerry Weinberg as the keynote speaker it was a great feeling.

At CAST 2008 Weinberg offered a tutorial that sold out so fast we had to add another day to the conference. This year Weinberg will again be offering a tutorial at CAST. The topic is “Ensuring Testing’s Proper Place in the Organization.”

Jerry Weinberg is easily one of the most influential people in my practice as a software tester and consultant. For the last 50 years, Weinberg has worked on transforming software organizations. For example, in 1958, he formed the world’s first group of specialized software testers.

Weinberg is author or co-author of many articles and books, including “The Psychology of Computer Programming” and the 4-volume “Quality Software Management” series. He is perhaps best known for his training of software leaders, including the Amplifying Your Effectiveness (AYE) conference and the Problem Solving Leadership (PSL) workshop.

In this year’s tutorial, Weinberg will help attendees demonstrate the value of testing versus its cost; teach them how to find the points of influence in the organization and how to cope with them; work with them on communicating with executives; and help them better evaluate risk and make it real.

When asked, Weinberg said that the tutorial is focused on “addressing the problem of testing being given too much, too little, or the wrong kind of responsibility. The tutorial will address this problem at the individual, test team, organizational, and societal level.”

At last year’s conference, Weinberg launched his latest book, “Perfect Software and Other Testing Myths.” When asked how much interplay there might be between the book and his tutorial, he responded:

Certainly much of the misplacement of testing starts with the common myths and misunderstandings about testing, so, yes, there is quite a bit of interplay. However, reading the book is not a prerequisite to participating in the tutorial, because all professional testers are well acquainted with these myths and misunderstandings. What they may not understand is how these myths and misunderstandings are contributing to the low esteem in which testing is commonly held – and what they personally can do to achieve their proper role.

When asked how this tutorial built on, or extended, some of the other work he’s done in the past, Weinberg responded: “I’m trying to correct the impression from much writing that ‘Development is Everything; Testing is Nothing.’ Or, even, that ‘Development would be easy if it weren’t for Testing.'”

Given the variety of places Weinberg could deliver this message, including much larger venues, I asked him why he chose to give the tutorial at CAST.

“The people who attend. I was at CAST last year in Toronto, and found it to be a cut above your typical conference (almost on a par with our own AYE conference). I learn at CAST, and I enjoy CAST. That’s why I believe it’s the right place for me to be, teaching and learning.”

For those unfamiliar with AYE, or Amplifying Your Effectiveness, you can learn more about it at their website. For more on the upcoming show, check out the CAST conference website. For more on Jerry Weinberg – his works, conferences, and to interact with him – check out his website and blogs on the topics of consulting and writing. While you’re at it, if you haven’t already taken a look at the new book “Perfect Software and Other Testing Myths” I highly recommend it.

