Software Quality Insights


July 11, 2009  1:41 AM

CAST 2009: Taking a new look at user acceptance testing, an interview with Michael Bolton

Michael Kelly

User Acceptance Testing (UAT) is a part of most testing plans and projects, but when you ask people what it is, they have a hard time defining it. You quickly find that it isn’t obvious what user acceptance testing means.

I talked to Michael Bolton about his views on UAT this week. He’ll discuss that topic at next week’s Conference for the Association for Software Testing (CAST). When the subject turns to UAT, Bolton said, there’s a lot of miscommunication. “The same words can mean dramatically different things to different people,” he said. He wants to help user-acceptance testers “recognize that it’s usually risky to think in terms of what something is, and more helpful to think in terms of what it might be. That helps us to defend ourselves against misunderstanding and being misunderstood.”

Bolton has been teaching software testing on five continents for eight years. He is the co-author of “Rapid Software Testing,” a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. He’s also a co-founder of the Toronto Workshops on Software Testing.

Bolton says the idea for his CAST session first came from a message on a mailing list. On the list, someone suggested the following user acceptance test-driven approach to developing a product:

  • Write a bunch of acceptance tests before writing the program
  • Write the program
  • When the acceptance tests pass, you’re done

“Now to most testers that I know, this sounds crazy. It occurred to me that maybe we should explore alternative interpretations of ‘acceptance tests’ and ‘done,’ but maybe we should also explore the business of exploring alternative interpretations altogether.”
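To make the pattern concrete, here’s a minimal sketch of what an acceptance-test-first workflow can look like. It is purely illustrative and not taken from Bolton’s talk or the mailing-list thread; the order-total feature, its business rule, and the test names are all hypothetical.

```python
# Illustrative only: a tiny acceptance-test-driven workflow.
# Step 1 -- write the acceptance tests before writing the program.
# Step 2 -- write the program (order_total below).
# Step 3 -- when the acceptance tests pass, declare the feature "done."
import unittest


def order_total(prices, discount_code=None):
    """The program under development (written after the tests below)."""
    total = sum(prices)
    if discount_code == "SAVE10":  # hypothetical business rule from the customer
        total *= 0.90
    return round(total, 2)


class OrderTotalAcceptanceTests(unittest.TestCase):
    """Hypothetical acceptance criteria, written first."""

    def test_sums_line_items(self):
        self.assertEqual(order_total([10.00, 5.50]), 15.50)

    def test_applies_discount_code(self):
        self.assertEqual(order_total([100.00], "SAVE10"), 90.00)

    def test_ignores_unknown_discount_code(self):
        self.assertEqual(order_total([100.00], "BOGUS"), 100.00)


if __name__ == "__main__":
    unittest.main()
```

Whether three green checks like these really mean “done” is exactly the question of interpretation Bolton wants to put on the table.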

“The language issue has always interested me. Back when I was a project manager for a really successful commercial software company, I noticed that some people weren’t saying what I thought they meant. Others were saying things that I was pretty sure they didn’t mean. I’ve done a paper on user acceptance testing, and I’ve done some classroom work on it, but this is the first time that I’ve taken it to an audience like CAST.”

Bolton wants his talk and his work in this field to trigger discussion. To learn more about his point of view and join that discussion, check out his CAST 2009 presentation. If you can’t make it to Colorado Springs, you can find Bolton’s past articles and conference presentations on his website. You can also follow his work as it unfolds on his blog or via Twitter.

July 11, 2009  1:30 AM

Mike Dwyer at CAST 2009: New simulation helps teams learn Agile

Michael Kelly

A common criticism of Agile development practices is that they are difficult to scale with large teams. Another common challenge is figuring out where testers fit in an Agile context. At this year’s Conference for the Association for Software Testing (CAST), July 13-16th in Colorado Springs, Mike Dwyer is presenting a workshop titled “Experiencing Agile Integration.”

I interviewed Dwyer recently about the session, in which he talks about a new simulation that allows participants to experience first-hand what it feels like to apply Agile principles. He said the goal of the simulation is to provide a simple environment that reflects the dynamics of expanding Agile, plus enough time for participants to inspect and adapt how they support that expansion, so that the product, team and organization garner optimum value from going Agile.

Mike Dwyer is a principal Agile coach at BigVisible Solutions working with IT organizations as they adopt Agile and Lean methods. He has extensive experience as a manager, a coach and consultant transforming high growth organizations into hyper-productive Agile organizations. In addition, he is a well-known and respected contributor to the Scrum, Agile, and Lean software community.

A product of the BigVisible Solutions team and their shared experiences, the simulation was built after Dwyer and several of his colleagues came off a 280-person, 30-team project. The designers all hold or have held multiple certifications from PMI and Scrum as well as from IIAB, APICS, and other professional organizations. Many of the BigVisible Solutions team members are experienced testers with backgrounds in performance, web, application, system, CI, TDD and exploratory work.

“The simulation mirrors what we have seen many organizations do to ‘pilot’ Agile. In order to provide the experience to the attendees and work toward optimal value for them, we use a simple pattern based on Scrum workframes. That is to work at delivery of value for short periods of time, discuss and learn from what we have done and then apply our learning to our next iteration. Value is optimized by the coach/facilitator working with the teams and individuals to inspect and adapt what they are doing to find better ways for the team to reach its goal.”

For more on the upcoming show, check out the CAST conference website. For more about Mike Dwyer and his work, you can check out his BigVisible blog.


July 11, 2009  1:01 AM

Two experts: Why not to skip some software testing phases

Michael Kelly

Is software testing really necessary? Do we do it just because everyone else does it? Why is software testing important? While ideas about testing vary, the motive is generally the same: someone believes it has value. We test because someone wants us to test. They may be managers, developers, business executives, regulators, or even customers. But how do testers know if they are doing the right thing or if something is lacking?

I explored these ideas recently in interviews with Neha Thakur, a business technology analyst at Deloitte Consulting, India, and Edgardo Greising, manager of the Performance Testing Laboratory for Centro de Ensayos de Software in Uruguay. Both are speakers at next week’s Conference for the Association for Software Testing (CAST).

Thakur will be exploring this topic in her upcoming talk, “Software Testing – To Be or Not To Be.” She is keenly interested in talking with her peers about identifying and serving testing stakeholders. In her own work, she’s discovered the advantages of identifying and involving stakeholders, and she’ll share methods for stakeholder analysis, gaining stakeholder involvement and making sure stakeholders’ needs are being met by the development team.

Thakur has performed automation testing in a variety of contexts, ranging from medical electronics, to storage and networking, to risk and compliance.

“I have always been curious to know and learn about the various aspects of software testing: the prevalent models, technology, tools etc. This curiosity to learn new things helped me go deep in various management topics and to develop a better understanding of the various stakeholders at each level that might impact the project. It also allows me to be proactive in communicating the risks, issues, and information with the respective stakeholders. I think testing is on an evolutionary path and there are still axioms of test management which need to be improvised.”

“Stakeholders believe in facts and figures; they believe in action and not words merely stated. Thus a subjective way of thinking while testing might not be the correct way of approaching an issue. Thinking objectively always helps. [You need] data, facts, and figures to support [your testing].”

Approaching the problem from another angle, Edgardo Greising plans to look at why it can be difficult to get some IT managers to see the value in doing performance testing. According to Greising, this needs to change. In his upcoming CAST talk — titled “Helping Managers To Make Up Their Minds: The ROI of Performance Testing” — Greising plans to explore the risks and costs associated with performance testing.

“I will be talking about the return on investment of a performance test. From my experience, many managers refuse to do performance testing because they think there is a high cost. I always try to illustrate that the cost of not doing performance testing is higher.

Our commercial people have to fight against the cost-myth each time they are visiting potential clients. And, on the other hand, we know that a performance test gives us a lot of information to tune the system and help us avoid system downtime. The objective, then, is to put those things together and show the convenience of performance testing.”

During my interview with Greising, he talked about the ways software performance testing yields great improvements in application health. For one thing, performance testing helps reduce resource consumption and lower response times, he said. “With most projects, we are unable to support half the volume expected in production until the tests show us where the bottlenecks are so we can fix them.”
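As a rough illustration of the kind of evidence Greising describes, here is a minimal load-test sketch: it drives increasing concurrency at an endpoint and reports how response times hold up. It is a generic example rather than anything from Greising’s talk or his lab’s tooling, and the URL, load levels and request counts are hypothetical.

```python
# Minimal load-test sketch: fire concurrent requests at an endpoint and
# report response-time statistics at increasing load levels.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/health"  # hypothetical system under test


def timed_request(_):
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(TARGET_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start


def run_load(concurrent_users, requests_per_user):
    """Simulate one load level and summarize latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = sorted(pool.map(timed_request,
                                  range(concurrent_users * requests_per_user)))
    return {
        "requests": len(timings),
        "median_s": round(statistics.median(timings), 3),
        "p95_s": round(timings[int(0.95 * (len(timings) - 1))], 3),
    }


if __name__ == "__main__":
    # Stepping up the load shows where response times start to degrade --
    # the bottleneck evidence Greising describes using to tune a system.
    for users in (1, 10, 50):
        print(users, "users:", run_load(users, 20))
```

Even a crude script like this makes the cost argument tangible: the numbers it produces are the kind of facts and figures a manager can weigh against the cost of downtime.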

Unfortunately, Greising said, performance testing is rarely found as a regular activity in a systems development or migration project. Then, when application deployment approaches at high velocity, nobody has time to even think about it.

Greising is no stranger to keeping cost in mind when testing. He worked as a salesman and pre-sales engineer for 15 years. For him, balancing cost and risk is just a regular part of testing. To talk about cost justification, you need to talk about risk, he said.

For more on the upcoming show, check out the CAST conference website.


July 11, 2009  12:33 AM

CAST 2009 preview: Positioning software testers as service providers

Michael Kelly

Presenting correct information isn’t just a function of how you write your report at the end of a software project. Instead, it is the result of a complex process that starts with analyzing the needs of your stakeholders, moves on to gathering accurate and timely data from all your project sources, and ends with presenting your findings in the correct format, at the right time and to the proper audience. Joel Montvelisky calls this “Testing Intelligence” and is presenting a talk on that topic at this year’s Conference for the Association for Software Testing (CAST), July 13-16th in Colorado Springs.

“Testing intelligence is a term to describe a slightly different perspective to software testing that places the focus on the needs of our project stakeholders instead of on the application under test. The main idea is to position the testing team as a service organization within the development group, whose purpose is to provide the timely and actionable testing-based visibility needed by the project stakeholders to make their tactical and strategic decisions.”

“In principle this is nothing new, but in practice many testing teams tend to get disconnected from the changing needs of the Organization during the project and end up working for the sake of their own “product coverage needs” or the old information needs of their Project Stakeholders.”

Montvelisky is one of the founders and the Product Architect of PractiTest, a company providing a SaaS (Software as a Service) test and quality assurance (QA) management system. He is also a QA consultant specializing in testing processes and a QA instructor for multiple Israeli training centers. A member of the Advisory Board of the Israeli Testing Certification Board (the Israeli chapter of the ISTQB), he publishes articles and a periodic QA blog and is an active speaker at local and international conferences.

According to Montvelisky, the process of gathering testing intelligence starts by correctly identifying your stakeholders, then working with them to understand their needs, and finally providing them with correct and timely information they need. He thinks there are a number of things that make this a hard process, or at least not a trivial process, for testing teams:

“We can start by the fact that many times we are not aware who are all our project stakeholders, we tend to miss some that are not physically close or that enter the project late in the process. Secondly, we testers are not really trained to work with customers […], so many times we don’t communicate correctly and we assume their needs without consulting with them about what information is important for their decisions, what format they need it, and when. And finally, we don’t take into account the dynamic nature of our projects. We don’t understand that people require specific information at certain times. Nor do we take into account that as the project evolves the information we need to provide changes.”

Montvelisky started developing his CAST talk when he was a QA Manager working for an enterprise software company. There he realized that many stakeholders were frustrated with the existing bureaucracy of the QA work. The stakeholders thought the QA team’s work was dictated by test-planning documents written months beforehand. Montvelisky noticed that those documents were not staying relevant to the current issues affecting the release.

“In this company we made a mind-shift and decided to set aside time during the project for ‘specific testing tasks’ that would be given to us by the Development Team in real-time. Soon enough the demand for these tasks increased and we realized that we were providing real value to the process by becoming the eyes and ears of the project. After I left this company and became a consultant I took this approach with me and created a process around it to help organizations make the mind-switch and start working more effectively with their stakeholders throughout the project.”

The chance to get feedback is one reason Montvelisky is excited to be presenting at CAST. “It’s not easy to receive hard criticism, but once you learn to take it in a positive light and use these comments to continue developing your work, it makes it one of the most fruitful encounters for people looking to improve and develop ideas in the field.”

I asked Montvelisky where he thought he might get some pushback on his approach:

“In the past, I’ve heard two main areas of criticism to my approach, both of them fair. First, people explain to me that all their professional lives they’ve worked based on what I describe as testing intelligence, and that this is nothing new to them. To these people I usually come asking for their best practices and asking for their inputs in order to improve my approach.”

“Second, people tell me that our job should limit itself to test, and ‘we should be proud’ of it instead of trying to find big names for what we do, leaving this to the marketing team. To these people, I try to explain that every team in the organization needs to contribute value to the process, and if they think that all their value comes from reporting bugs and coverage percentages then they can continue working like that.”

“Having said that, there is a lot more value that can be provided by the Testing Team, and we don’t need to change what we do in order to provide it; we only need to make sure we stay connected with our stakeholders and help them throughout the project and not only at the end of it.”

Montvelisky is currently focusing on a couple of research topics. One of them is related to adding value by correctly utilizing the test management tools in the organization. The other is related to collaboration between testers from different organizations, and different cultures and countries, in order to improve their overall work.

For more on the upcoming show, check out the CAST conference website. For more on Joel Montvelisky and what he’s currently working on, you can follow him on Twitter or his PractiTest QA Blog.


July 11, 2009  12:10 AM

Eight days, 80 testers: Exploratory testing case study at CAST 2009

Michael Kelly

Software consultant Henrik Andersson implemented an exploratory testing training project in an 80-tester group in only eight days, and he lived to talk about it. Next week, he’ll outline the steps he took to quickly set up a pilot program and train testers in exploratory testing theory and practice during the Conference for the Association for Software Testing (CAST), which takes place July 13-16 in Colorado Springs. In his session, he’ll also cover how he made responsibilities and expectations clear.

I recently interviewed Andersson about his session, titled “Implementing Exploratory Testing at a Large Organization.” He said his first reaction upon receiving the assignment in question was that it was impossible to implement exploratory testing on this scale in that time frame. To reach out to 80 testers is a challenging thing to do, he said, and it takes time to implement such a different way of testing. Yet, he decided to rise to the challenge. “If I turned it down I would not likely get another chance,” said Andersson, a consultant and founder of House of Test, headquartered in Sweden and China.

Once he accepted the project, he had to figure out how to do the impossible.

“I came up with a little twist on the initial request. I suggested that we should pick one tester from each test team and tutor them to become Exploratory Testing Champions. This gave me an initial group of nine people. This is what we achieved during the 8 days. The Champions would then have the responsibilities to tutor the rest of the testers in their teams. The Champions are now continuously working with this in their test teams, and we have established this new role formally.”

Andersson’s case study will show what the exploratory testing champions approach achieved. He also will explain in detail how the project was implemented. He’ll describe the workshops on theory and practical exploratory testing that he conducted during the project. He’ll share observations about tutoring a group of people used to working in a completely different way, the positive feelings and feedback he received, what surprised him, and what approaches did not succeed.

Just so you know, Andersson is no newbie to testing methodologies; here’s some information about his background. As a software tester and consultant, he has worked in a variety of fields, including telecom, medical devices, defense, insurance, SAP and supply chain systems.

For the past 10 years, Andersson has focused on working in a context-driven fashion, mixing exploratory testing with more traditional methods, such as RUP, the V-model, TMap and others. However, he has never followed any method to the letter. “I always only took the part that has been useful and invented the parts I was lacking,” said Andersson. “I definitely didn’t do the parts that I felt were obstacles or not useful.” Indeed, Andersson enjoys helping organizations transform away from “old school” practices.

For more on the upcoming show, check out the CAST conference website. You can learn more about Henrik Andersson and his company House of Test on their website. You can also follow Henrik on Twitter.


July 7, 2009  3:29 PM

Addressing eVoting concerns at this year’s CAST conference

Michael Kelly

Right now, if I do a search on ‘electronic voting’ on Google News, I get 748 results for the past month. The headlines include phrases like “How to trust,” “Technology is not foolproof,” “Electronic voting machines are fallible,” and the ever-present “Electronic voting machines also caused widespread problems in Florida.” There are many legitimate concerns around electronic voting technology.

At this year’s Conference for the Association for Software Testing (CAST), July 13-16th in Colorado Springs, AST eVoting Special Interest Group (SIG) members Geordie Keitt and Jim Nilius will be taking a look at the highly visible testing processes around eVoting systems, and will outline their concerns for the way the testing is done today.

“Our talk is about the reasons why an electronic voting system can undergo a rigorous, expensive, careful series of tests and achieve certification, and still be a terrible quality product. The laboratory certification system is not capable of ensuring quality products where the complexity of the system is as poorly represented by its standards as eVoting systems are. We have come up with an interesting model of the certification testing context, which helps us see when the rigors of cert lab testing are appropriate and adaptive and when they are not. We will be asking the conference attendees for help in honing our arguments in preparation for publication. We will be presenting them to the National Institute of Standards and Technology (NIST) to suggest changes to the accreditation guidelines for certification labs.”

Geordie Keitt works for ProChain Solutions doing software testing, a career he began in 1995. He’s tested UNIX APIs, MQ Series apps, Windows apps, multimedia training courses, Web marketplaces, a webmart builder IDE, the websites that run bandwidth auctions for the FCC, and now critical chain project scheduling software. Geordie aspires to be a respected and respectful practitioner and teacher of the craft of rapid, exploratory software testing. He is the lead of the AST’s eVoting SIG.

Jim Nilius has over 23 years of experience in software testing, test management and architecture. His most recent role was as Program Manager and Technical Director of SysTest Lab’s Voting System Test Laboratory, an ISO 17025 test and calibration lab accredited by the US Election Assistance Commission under NIST’s National Voluntary Laboratory Accreditation Program and mandated by the Help America Vote Act of 2002. The lab performs testing of voting systems for federal certification against the 2005 Voluntary Voting System Guidelines. He is a member of AST’s eVoting SIG.

“Jim has a wealth of experience in this domain,” said Geordie Keitt, “and as the Chair of the AST eVoting SIG I wanted to draw out of him as much knowledge as possible and get it out into the open where we can all look at it and try to detect patterns and learn lessons from it.” Keitt and Nilius did an initial presentation of their work at the second Workshop on Regulated Software Testing (WREST). One of the outcomes from that workshop was a diagram of the challenges facing a sapient testing process in a regulated environment.

“We are bringing an immature argument before a group and asking for their help to toughen it. We are testing the founding principles of regulated testing, which predates software testing by a hundred years and has yet to recognize that new software systems are more complicated than mature ones, instead of the other way around. We need help to hone and tighten our argument for it to be effective.”

For more on the upcoming show, check out the CAST conference website. Also, check out the website for the AST eVoting SIG to get involved and see what else they are working on. For more on the work of Geordie Keitt and Jim Nilius, take a look at their outcomes from the second Workshop on Regulated Software Testing.


July 6, 2009  5:32 PM

CAST 2009: Understanding the Principles of the Law of Software Contracts, an interview with Cem Kaner

Michael Kelly

“The American Law Institute just adopted the Principles of the Law of Software Contracts. For now, this will guide judges when they decide disputes involving the marketing, sale (of product or product license), quality, support and maintenance of software. Over the next few years, it will probably guide some new legislation as individual states apply its terms. It will probably also inform some legislative drafting efforts underway in Europe, and probably to come in other countries whose economies are gaining greater influence (e.g. India and China).”

That introduction comes from Cem Kaner as he summarized his upcoming talk at this year’s Conference for the Association for Software Testing (CAST). CAST takes place in a couple of weeks, July 13-16th in Colorado Springs, and Dr. Kaner will be looking to dig into some of the new rules for software contracts adopted by the American Law Institute in more detail.

“Historically, the American Law Institute has had enormous influence. Its membership is politically diverse and primarily judges and tenured law professors. Appellate-level judges routinely reference American Law Institute materials in their published cases. For software, judges need to turn to other judges’ writing even more than in other areas, because so much software law is judge-made. […] The Principles provide a unifying framework, based on judicial opinions around the country over the past 50 years. I was ill and unable to travel to the ALI meeting this year, but the ALI meeting blog reported that the Principles were passed unanimously. This is very rare, and it speaks well to the likely future influence of the document.”

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: “Testing Computer Software,” “Bad Software,” and “Lessons Learned in Software Testing.”

While this might seem like an unlikely talk to some, the subject matter is critical to our industry, and CAST is a perfect venue to get people talking about it. There’s a mix at CAST that is rare at other conferences. Not only do the talks go deep into the subjects they cover, but the average attendee is also willing to dig in and do the work necessary to understand the subject matter. Add to that the importance of the Principles of the Law of Software Contracts to the industry and you have the perfect mix.

“At most conferences, I would give a talk about the Principles, a few people would ask short questions, we would call a time limit, and that would be that. Those are “marketing talks” – one-way communication from someone pushing an idea to an audience that politely listens. I strongly prefer CAST’s approach, which encourages more critical questioning and follow-up discussions. Sometimes people come prepared for a serious debate. CAST encourages this, and everyone learns from it. More often, a group of us break away for more discussion at the end of the meeting. Again, I learn a lot from that, and so do the people in the breakout group, who get to discuss the ideas in a way that makes sense to them.

Many testers talk about their unhappiness with their company’s quality standards. They feel as though they are working on bad software, with irresponsible project managers who care a lot about cost, ship date, and personal glory but don’t care whether the final result works. Commercial law creates the playing field for software development and marketing. When you ask why a company can survive when it sells crap software, lies about what it is selling, and treats its customers like dirt, you are partially asking a market question (how can it keep customers?) and partially a legal question.

Given that we have hit a milestone in the adoption of the Principles of the Law of Software Contracts, I think it’s important to let testers know the new rules.”

Dr. Kaner is no stranger to some of the issues that will surround the Principles of the Law of Software Contracts. He started working on software-quality-related legislation in 1995, when he helped write the Uniform Electronic Transactions Act (UETA). UETA (adopted federally as ESIGN) removed a major barrier to electronic commerce by giving legal force to electronic signatures. Dr. Kaner also worked on the Uniform Computer Information Transactions Act (UCITA), trying to improve it when it was a joint project of the American Law Institute, the National Conference of Commissioners of Uniform State Laws, and the American Bar Association. From 1995-2001, he wrote almost 100 status reports on UCC 2B/UCITA, most for the testing community.

“UCITA started as UCC-Article 2B, a joint project of the American Law Institute and the National Conference of Commissioners of Uniform State Laws and the American Bar Association to update the Uniform Commercial Code, which is America’s main body of commercial law. The American Law Institute and the National Conference of Commissioners of Uniform State Laws jointly run the UCC’s Permanent Editorial Board, which has such a strong reputation for fairness and thoroughness (amendments take up to 10 years of hearings) that state legislatures look to the Board for all amendments to the UCC.

Unfortunately, the Article 2B project was hijacked by political activists who wrote a bill that radically tilted copyright and contract law in favor of large software vendors. The American Law Institute demanded a rebalancing of the bill. When the National Conference of Commissioners of Uniform State Laws refused, the American Law Institute walked off the project, killing it as a UCC project. The National Conference of Commissioners of Uniform State Laws renamed the bill, UCITA, and submitted it to state legislatures. Ultimately, two states adopted UCITA, the rest rejected it, and four states even adopted laws that made UCITA-based contracts unenforceable in their states.

This was the first time since the American Civil War that some states passed laws to explicitly reject and make unenforceable contract terms that were lawful in the state in which the contract was written. Once UCITA’s failure was clear, the American Law Institute started a new project (the Principles), to bring computer law in line with the mainstream of American commercial and intellectual property law, balancing the rights of vendors and customers. They elected me as a member, which put me into a better position to comment on laws affecting the software industry. I think I am the least experienced lawyer ever elected to the American Law Institute. Since then, I’ve given a few reports to AST members at CAST and by email, collecting feedback and making suggestions back to the American Law Institute.

Revamping the legal infrastructure is not a guarantee of better times to come. A poorly balanced body of law can wipe out a marketplace or wipe out the companies trying to serve that market. I think this decade’s lethargic software market has been a result of that. I’ve seen a lot of extreme proposals coming from many quarters (right, left, and unclassifiable). The American Law Institute work is the best and most promising that I have seen.”

At CAST, debate on topics presented is encouraged. Given the effect of the Principles on the software development industry, I asked Dr. Kaner what he thought the most likely criticisms might be.

“I think the most likely criticism will be that holding software companies accountable for undisclosed known defects will somehow harm the industry. Sadly, I think this is poorly informed. A bunch of silliness was promoted on the web just before the American Law Institute meeting that claimed this would particularly hurt the open source community. As is so common in this decade’s political propaganda, this was blatantly wrong. The Principles specifically exclude open source software from this type of liability. I think more generally that some people fear that commercial regulation creates a potential to kill the industry. I have seen some proposals, especially from consumer activists and buyers for very large non-software companies, that demand too much. I think the Principles are much more moderate – perhaps too moderate.

The only way to address these types of concerns is with open discussion and facts. I’ll introduce some of the key ideas in my talk and then be available for as much post-talk discussion as people want. I don’t expect everyone to come out loving the Principles, but at least the folks who want a deeper understanding will have a good chance of getting it.”

For more on the upcoming show, check out the CAST conference website. For more on Cem Kaner, you can check out his website, or take a look at what he considers to be his career-best writing and research in his freely available book, “Bad Software.” The Principles are not available for free on the web, but a copy can be purchased from the American Law Institute. Dr. Kaner has summaries of some of the main ideas in the Principles on his blog and in his 2007 CAST presentation “Law of Software Contracting: New Rules Coming.”


July 2, 2009  6:25 PM

CAST 2009: Almog presents controversial new test case definition approach

Michael Kelly

Dani Almog will be outlining a new approach to test case definition at the Conference for the Association for Software Testing (CAST 2009) this month. In his talk “Test Case Definition: A New Structural Approach,” Almog will explore and classify the various definitions of test cases, discuss the implications of the current situation, and suggest an alternative structural definition for test cases. CAST 2009 takes place July 13-16 in Colorado Springs, CO.

“Based on thorough research of academic articles, journals and books, and by consulting some of my distinguished colleagues around the globe, I will present my findings aiming at an alternative formal structural definition of the term ‘test case.’ Although it may be a controversial approach, formalizing the term fits my engineering perception of the testing process. Thus, during my talk I will present a newly developed structural definition of a test case.

During the last six years, I was very fortunate to be able to fulfill a dream and implement ideas pertaining to how and when software testing should be developed; in short, finding ways to bridge the gap between the software development approach, the technology, the training (mostly object-oriented thinking) and the way we testing engineers see the same applications: in an intuitive, procedural (the way we use it) manner.

I got full support from top management in the company I worked for (Amdocs) to recruit a very talented and innovative group of young people who assisted me in developing and implementing a full infrastructure for test automation. Later, [this] was rolled out to all the corporate divisions and units. Now that I have retired from Amdocs, I have decided to dedicate the rest of my professional career to exposing and promoting the new approach I have developed, including methodologies and tools.”

Dani Almog is currently a member of the academic staff at Ben Gurion University, Israel, where he teaches and does research. He teaches software quality engineering and testing, and researches the interaction between development processes and quality/testing, trying to introduce to the academic world some of the achievements and work models developed in industry.

“Coming from a large corporation’s (Amdocs) R&D division, we encountered many issues regarding test case automation, a necessity and a key factor for supporting eight different large software products, often with 3 different versions distributed among one hundred different very large customers. This situation made us consider all options and alternatives for test automation infrastructure, tools and methodologies. We were given the opportunity to be involved in shaping the future of our new products’ development processes and procedures. All my academic activities since my retirement, including this talk, are derived from this experience, and I am now actually documenting and presenting the best practices of what we have done.”

Almog says that in his talk he’s targeting two different communities. The first is the professional community, which he says is struggling to “improve its skills and outcomes regardless of inferior reputation and image – knowing they never really get the deserved glory.” The second is the academic community, which has “neglected a very exciting field of research and progress.” He suspects both communities might have some criticisms of his talk.

“I welcome the challenge of debate and criticism and believe it will improve my work. I guess the criticism will come from two main channels. From those questioning the relevance of my approach to all different streams and to the new ways software development expands to. And from people who perceive testing as softer and more flexible (exploratory, context-dependent, and others) rather than structured and engineered. I believe that my approach paves the way to more systematic and precise testing, as well as to development of advanced automation tools. I welcome all constructive and relevant criticism. It will help me present a better model.”

For more on the upcoming show, check out the CAST conference website. For a chance to explore the topic in more detail, you can contact Dani Almog on the Software Testing Club or follow his discussions there.


July 1, 2009  3:54 PM

CHATing about CAST 2009: Software testing and cultural history

Michael Kelly

If you’re looking for a cross-discipline topic related to software testing, Rebecca Fiedler’s and Cem Kaner’s upcoming talk at the Conference for the Association for Software Testing (CAST) – July 13-16th in Colorado Springs – might just be for you. Their talk on “Cultural-Historical Activity Theory: Framework to characterize the activity of software testing” takes a look at one of the most difficult tasks in software testing – discovering and applying contextual information. Cultural-Historical Activity Theory (or CHAT) has been applied widely to software usability analysis, but not so much to testing. Fiedler and Kaner are hoping to change that.

“Cultural Historical Activity Theory provides a clear structure for applying a systems-theory approach to human activities. In particular, it is really useful for looking at change in a human system. Perhaps you’re trying to understand a change that has caused your project to go off the rails, or maybe you’ll use it to analyze the introduction of a new tool or technology you’re trying to implement. The Computer-Human Interaction and Computer-Supported Cooperative Work crowds have been using Activity Theory for years. More recently, they’ve begun shifting from user-focused design to context-centered design. It seemed natural, given our advocacy for context-driven testing, to use CHAT to think about the context of testing as well as the context in which the software we’re testing will be used. CHAT helps with that.”

Rebecca Fiedler is an Assistant Professor in the Curriculum, Instruction, and Media Technology Department at Indiana State University. She’s interested in how people learn and how technology can make educational efforts more effective and more accessible to more people. In the testing community, she works with Cem Kaner on the Black Box Software Testing (BBST) courses and AST’s Education SIG. She is also a regular attendee at the Workshop on Teaching Software Testing.

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: “Testing Computer Software,” “Bad Software,” and “Lessons Learned in Software Testing.”

Fiedler says the idea for the talk started when she was doing her dissertation research a couple of years ago:

“I’m interested in how technology can help people learn and so I spent a lot of time at two different institutions watching college students use a specialized software tool for a high stakes task – high stakes as in their graduation depended on it. Academics love theory so I decided to use CHAT to sharpen the focus of my observations, interviews, and analysis. As Cem and I talked about what I was finding in my research, we started asking, ‘Where was their test group? How could they defer that bug? It does what?’ and other tester-like questions. It wasn’t long before Cem realized this would be a great model for testers to use. We started floating this with some of his testing colleagues and got more and more excited about it.”

Here are a few examples of challenges faced by testers that a CHAT-based analysis might help us better understand and thus more effectively work within:

  • Introducing a new metric
  • Introducing a new test tool
  • Interviewing stakeholders to gather their requirements and to discover the conflicts among stakeholders’ requirements
  • Designing tests that are tailored to expose highly important problems
  • Describing failures in ways that are intended to motivate specific stakeholders to demand fixes
  • Gaining insight into the dynamics of a failing project

Fiedler and Kaner have some concern that some people might find CHAT too complex to master in the short time available. That’s one of the reasons they chose CAST as the venue for their talk.

“I like that the CAST format requires audience and speaker interaction. Attendees get to explore a topic until they’ve heard enough. I also like that the conference isn’t over-scheduled so that you can have lunch or dinner and an extended conversation with speakers and other attendees. I’ve presented CHAT before. On the speaker side, I can tell you that it takes a while to convey the richness of the model. On the listener side, it takes a while to appreciate how it can be used. CAST gives us the time we need to talk about this.”

In addition, Fiedler and Kaner indicated they would be willing to take the discussion online after the conference if there were enough interest. “If enough people are interested,” Fiedler said, “we could participate in a discussion forum at AST or TestingClub in which participants apply this to their real examples/situations.”

I asked Fiedler where testers who might not be able to attend the conference could go for more information. She listed off a handful of papers and books that she’s used to help her develop her understanding of the method.

“Yrjo Engestrom (from Helsinki, Finland) developed the CHAT model and first wrote about it in a paper called ‘Learning by Expanding: An Activity-Theoretical Approach to Developmental Research.’ That’s the seminal work but I thought it was a tough read. A few years ago, Sasha Barab, Michael Evans, and Un-Ok Baek wrote a chapter on using CHAT that appeared in the ‘Handbook of Research on Educational Communications and Technology.’ That chapter was very helpful. […] Right now I’m reading two books and I think I’m going to start recommending them to anyone interested in CHAT. They are ‘Activity-Centered Design: An Ecological Approach to Designing Smart Tools and Usable Systems’ by Geri Gay and Helen Hembrooke and ‘Acting with Technology: Activity Theory and Interaction Design’ by Victor Kaptelinin and Bonnie A. Nardi. Both are grounded in the HCI field, but I think they’ll be helpful to testers, too.”

For more on the upcoming show, check out the CAST conference website. For more on Rebecca Fiedler and online teaching and learning, you can follow her research on her website. For more on Cem Kaner, you can check out his website, or one of his books on software testing.


June 30, 2009  7:37 PM

Good metrics critical to software projects, CAST keynoter says

Michael Kelly

At this year’s Conference for the Association for Software Testing (CAST), taking place July 13-16th in Colorado Springs, Dr. Jonathan Koomey will be delivering the keynote address on “Real-life lessons for responsible use of data and analysis in decision making.”

In the keynote, Dr. Koomey is planning to present a few recent examples of widely cited statistics that were grossly misleading or wrong. He’ll then use those examples to summarize real-world lessons for attendees so they can immediately improve their use of data and analysis.

“My presentation will delve into often ignored aspects of the art of problem solving,” said Dr. Koomey, “including the crucial distinction between facts and values, the dangers of ill-considered assumptions, the need for transparent documentation, and the importance of consistent comparisons.”

Dr. Koomey is a Project Scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University. He is one of the leading international experts on electricity used by computers, office equipment, and data centers, and is the author or co-author of eight books and more than one hundred and fifty articles and reports on energy and environmental economics, technology, forecasting, and policy. He has been quoted in many major media sources, including the New York Times, Wall Street Journal, Barron’s, The Financial Times, The Washington Post, Science, Science News, American Scientist and more.

I opened the interview by asking Dr. Koomey about his latest book, “Turning Numbers into Knowledge: Mastering the Art of Problem Solving,” now out in its second edition.

“The book summarizes what I’ve learned over the years in doing analysis and supervising analysts at Lawrence Berkeley National Laboratory. I’ve hired dozens of analysts over the years but I grew frustrated that I kept having to explain to them basic aspects of the art of analysis, like making good tables and graphs, what constitutes complete documentation, and how to structure a technical report. The book, and this talk, grew out of that frustration. It summarizes the craft of research in a way that is useful for inexperienced analysts, but it’s also a good refresher for those who’ve been in the field for a while. And I tried to make it a fun read, with lots of short chapters, cartoons, and funny graphics.”

When I asked what influenced the talk and the book, Dr. Koomey cited a couple of sources that were influential to him. Those included Edward Tufte’s “The Visual Display of Quantitative Information” and William Hughes’ 1997 book, “Critical Thinking: An Introduction to the Basic Skills.” Dr. Koomey also added: “Also, surprisingly enough, I was inspired by ‘Zen in the Martial Arts’ by Joe Hyams. The fluidity and readability of that book influenced the structure of ‘Turning Numbers into Knowledge.’”

In addition to his keynote at CAST, Dr. Koomey is also leading a workshop on his new book.

“The workshop will be an interactive exploration of how managers can encourage, prompt, and cajole their employees to give them the numbers they need to make good decisions. It will also help both managers and analysts hone their own analytical skills. Even if you’re a seasoned analyst, the exercises in the workshop will get you to think afresh about the challenges you face at work, and should help you become more effective at your job.”

This will be Dr. Koomey’s first time speaking at CAST, and I asked him if he was looking forward to getting to know the software testing community. “The essence of good software testing is critical thinking, and I’m happy to be in the company of smart people who use their problem-solving skills in new and innovative ways. I always learn something when I attend conferences like this.”

For more on the upcoming show, check out the CAST conference website.

