A year ago, I was working on a project where we were doing a failure modes and effects analysis (FMEA) related to failover and recovery. As I thought about how best to start my analysis, I recalled that I had examined many of the same aspects of the system while planning past performance-testing work. To generate ideas, I did some research to identify sources that could help with my planning. You can take a look at some of the resources I found, or use different taxonomies if you have any you particularly favor.
Here’s an example of how you might use a resource like this. Let’s take the risks listed in chapter three of Performance Testing Guidance for Web Applications. In the following figure from the book, you’ll see a summary of the risks presented in that chapter.
Figure 1: Performance testing risks, from the book Performance Testing Guidance for Web Applications.
I prefer working with the list of questions the authors have outlined in the chapter, but the graphic does a nice job summarizing things. For each specific risk listed, you want to:
- Ask yourself whether you’ve accounted for that risk in your current plan. If you haven’t, figure out whether you should. If you think you should, figure out what type of testing would be most appropriate for you. One nice thing about this particular taxonomy is that the authors give you some guidance there.
- For each risk, move from the generic to the specific. The risk “Can the system be patched or updated without taking it down?” is a great question, and an initial answer might be “yes.” But when I look at the system I currently work with, there are several major systems all working together. I might ask whether I can patch all of them, and in what ways: via software, database, run-time dependencies, services, etc.
- For each risk, ask yourself if there are any slight variations on that risk that might be important to you. Good examples of the practice are two risks listed in the book: functionality could be compromised under heavy usage, and the application may not stay secure under heavy usage. You can vary different parts of the same question. In those two risks, the authors varied the quality criteria — functionality and security — but kept the risk condition, heavy usage, static. You could add other quality criteria or other conditions.
The general idea is that you’re using lists like these to help you generate test ideas. In a way, you’re also using them to test the planning work you’ve done so far to make sure you haven’t forgotten or overlooked anything.
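The mechanical part of this idea generation (walk a published risk list, then vary one part of each question at a time) can be sketched in a few lines of Python. The quality criteria and load conditions below are illustrative placeholders, not taken from the book; you would substitute the risks from whatever taxonomy you are working from.

```python
# Cross quality criteria with load conditions to generate candidate risk
# questions, mirroring the "functionality vs. security under heavy usage"
# variation described above. All list entries here are examples only.

QUALITY_CRITERIA = ["functionality", "security", "usability", "reliability"]
CONDITIONS = [
    "heavy usage",
    "sustained usage over several days",
    "sudden usage spikes",
    "partial infrastructure failure",
]

def risk_questions(criteria, conditions):
    """Yield one candidate risk question per (criterion, condition) pair."""
    for criterion in criteria:
        for condition in conditions:
            yield f"Could {criterion} be compromised under {condition}?"

for question in risk_questions(QUALITY_CRITERIA, CONDITIONS):
    print(question)
```

Most of the generated questions will be discarded on inspection; the point, as with the checklist itself, is to surface the one or two variations your current plan overlooked.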
TechExcel, a decade-old maker of development tools, has released new features in DevSuite 8.0, its application lifecycle management software package. Included are the MyWork dashboard engine and wiki tools, which promise improved team collaboration and status reporting on concurrent software projects. Another bow to collaboration support comes in DevSuite 8.0’s new multilingual capabilities, with user-definable UI names and values for multiple languages.
When a new product or features are announced, I always wonder what user problems or requests spurred the vendor to invest in developing them. So, when I heard about the DevSuite 8.0 additions, I posed those questions to Paul Unterberg, associate director of product management for Lafayette, Calif.-based TechExcel.
First I asked how users got an overview of project status prior to the release of the MyWork dashboard engine. Unterberg responded:
“Before we introduced MyWork, the data for an overview was available to a user or a team based on a report. The user had to log in, select a project, navigate to the report view, and then run their report. This took a lot of effort. Since the data was already in the system, we simplified the process and put it all in one place.”
My next question: How about the before-and-after picture for integrated wiki tools?
“There was no integrated Wiki before DevSuite 8,” Unterberg said. “This meant that people wishing to collaborate on a requirement or document had few options. They could leave notes to each other, but there was always the risk of someone overwriting another person’s changes. The Wiki simplifies the entire process, and eliminates the risk of a user unintentionally erasing another user’s data.”
The overall goal of DevSuite’s integrated set of tools is to marry the strategic and tactical worlds of application development by creating software that lets management and planning processes co-exist seamlessly with specific task-driven development processes. The suite of tools that enables this relationship provides workflow, process automation, searching, reporting and customization capabilities, among other things.
DevSuite also co-exists with various application development methodologies. For instance, teams using both waterfall and agile processes can live in TechExcel’s ALM framework.
“From our perspective, there should be no relationship between an ALM system and the development methodology a team uses,” Unterberg said. “We’ve heard from many customers the horror stories of their former systems that tried to change the way they worked based on what the system could do.”
Instead, he said, it’s better to create processes in the ALM system that change based on how the team works. He described such a situation:
“If a team is agile, for example, they might need less process control and a greater degree of flexibility with how they are able to prioritize work. They might also have the system limit the amount of time they can spend in a certain area; adding a time box to a development iteration, for example. This same functionality might be useless to a non-agile team. A good ALM system should be able to adjust to these needs and give the teams the most flexibility in modeling how work is done.”
Not adding another management layer with ALM is a firm goal for TechExcel, one played out in DevSuite, Unterberg said. Adding different management when adopting ALM is only necessary if a lack of management in a certain area was a driver for the ALM adoption in the first place. “Who is in charge depends greatly on the team and the process they follow,” he concluded. “ALM just enhances, automates and ties that process together.”
User Acceptance Testing (UAT) is a part of most testing plans and projects, but when you ask people what it is, they have a hard time defining it. You quickly find that it isn’t obvious what user acceptance testing means.
I talked to Michael Bolton about his views on UAT this week. He’ll discuss the topic at next week’s Conference for the Association for Software Testing (CAST). When the subject turns to UAT, Bolton said, there’s a lot of miscommunication. “The same words can mean dramatically different things to different people,” he said. He wants to help user-acceptance testers “recognize that it’s usually risky to think in terms of what something is, and more helpful to think in terms of what it might be. That helps us to defend ourselves against misunderstanding and being misunderstood.”
Bolton has been teaching software testing on five continents for eight years. He is the co-author of “Rapid Software Testing,” a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure. He’s also a co-founder of the Toronto Workshops on Software Testing.
Bolton says the idea for his CAST session first came from a message on a mailing list, where someone suggested the following user acceptance test-driven approach to developing a product:
- Write a bunch of acceptance tests before writing the program
- Write the program
- When the acceptance tests pass, you’re done
“Now to most testers that I know, this sounds crazy. It occurred to me that maybe we should explore alternative interpretations of ‘acceptance tests’ and ‘done,’ but maybe we should also explore the business of exploring alternative interpretations altogether.”
“The language issue has always interested me. Back when I was a project manager for a really successful commercial software company, I noticed that some people weren’t saying what I thought they meant. Others were saying things that I was pretty sure they didn’t mean. I’ve done a paper on user acceptance testing, and I’ve done some classroom work on it, but this is the first time that I’ve taken it to an audience like CAST.”
Bolton wants his talk and his work in this field to trigger discussion. To learn more about his point of view and join that discussion, check out his CAST 2009 presentation. If you can’t make it to Colorado Springs, you can find Bolton’s past articles and conference presentations on his website. You can also follow his work as it unfolds on his blog or via Twitter.
A common criticism of Agile development practices is that they are difficult to scale to large teams. Another common challenge is figuring out where testers fit in an Agile context. At this year’s Conference for the Association for Software Testing (CAST), July 13-16 in Colorado Springs, Mike Dwyer is presenting a workshop titled “Experiencing Agile Integration.”
I interviewed Dwyer recently about the session, in which he talks about a new simulation that allows participants to experience first-hand what it feels like to apply Agile principles. He said the goal of the simulation is to provide a simple environment that reflects the dynamics of expanding Agile, with enough time for participants to inspect and adapt how they support the expansion so that the product, team and organization garner optimum value from going Agile.
Mike Dwyer is a principal Agile coach at BigVisible Solutions working with IT organizations as they adopt Agile and Lean methods. He has extensive experience as a manager, a coach and consultant transforming high growth organizations into hyper-productive Agile organizations. In addition, he is a well-known and respected contributor to the Scrum, Agile, and Lean software community.
A product of the BigVisible Solutions team and its shared experiences, the simulation was built after Dwyer and several of his colleagues came off a 280-person, 30-team project. The designers all hold or have held multiple certifications in both PMI and Scrum, as well as in IIBA, APICS and other professional organizations. Many of the BigVisible Solutions team are experienced testers with backgrounds in performance, web, application, system, CI, TDD and exploratory work.
“The simulation mirrors what we have seen many organizations do to ‘pilot’ Agile. In order to provide the experience to the attendees and work toward optimal value for them, we use a simple pattern based on Scrum workframes. That is to work at delivery of value for short periods of time, discuss and learn from what we have done and then apply our learning to our next iteration. Value is optimized by the coach/facilitator working with the teams and individuals to inspect and adapt what they are doing to find better ways for the team to reach its goal.”
Is software testing really necessary? Do we do it just because everyone else does it? Why is software testing important? While ideas about testing vary, the motive is generally the same: someone believes it has value. We test because someone wants us to test. They may be managers, developers, business executives, regulators, or even customers. But how do testers know if they are doing the right thing or if something is lacking?
I explored these ideas recently in interviews with Neha Thakur, a business technology analyst at Deloitte Consulting, India, and Edgardo Greising, manager of the Performance Testing Laboratory for Centro de Ensayos de Software in Uruguay. Both are speakers at next week’s Conference for the Association for Software Testing (CAST).
Thakur will be exploring this topic in her upcoming talk, “Software Testing – To Be or Not To Be.” She is keenly interested in talking with her peers about identifying and serving testing stakeholders. In her own work, she’s discovered the advantages of identifying and involving stakeholders, and she’ll share methods for stakeholder analysis, gaining stakeholder involvement and making sure stakeholders’ needs are being met by the development team.
Thakur has performed automation testing in a variety of contexts, ranging from medical electronics, to storage and networking, to risk and compliance.
“I have always been curious to know and learn about the various aspects of software testing: the prevalent models, technology, tools etc. This curiosity to learn new things helped me go deep in various management topics and to develop a better understanding of the various stakeholders at each level that might impact the project. It also allows me to be proactive in communicating the risks, issues, and information with the respective stakeholders. I think testing is on an evolutionary path and there are still axioms of test management which need to be improvised.”
“Stakeholders believe in facts and figures; they believe in action and not words merely stated. Thus a subjective way of thinking while testing might not be the correct way of approaching an issue. Thinking objectively always helps. [You need] data, facts, and figures to support [your testing].”
Approaching the problem from another angle, Edgardo Greising plans to look at why it can be difficult to get some IT managers to see the value in performance testing. According to Greising, this needs to change. In his upcoming CAST talk — titled “Helping Managers To Make Up Their Minds: The ROI of Performance Testing” — Greising plans to explore the risks and costs associated with performance testing.
“I will be talking about the return on investment of a performance test. From my experience, many managers refuse to do performance testing because they think there is a high cost. I always try to illustrate that the cost of not doing performance testing is higher.
“Our commercial people have to fight against the cost-myth each time they are visiting potential clients. And, on the other hand, we know that a performance test gives us a lot of information to tune the system and help us avoid system downtime. The objective, then, is to put those things together and show the convenience of performance testing.”
During my interview with Greising, he talked about the ways software performance testing yields great improvements in application health. For one thing, performance testing leads to reduced resource consumption and lower response times, he said. “With most projects, we are unable to support half the volume expected in production until the tests show us where the bottlenecks are so we can fix them.”
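Greising’s cost comparison lends itself to back-of-the-envelope arithmetic. The sketch below is a hypothetical illustration of the kind of ROI case he describes; every number in it is made up and would be replaced by your own estimates of test cost, outage probability and outage cost.

```python
# Back-of-the-envelope ROI for a performance test: compare the cost of
# testing against the expected outage loss the test is assumed to avert.
# All figures below are hypothetical placeholders.

def performance_test_roi(test_cost, downtime_probability,
                         downtime_cost, risk_reduction):
    """ROI = (expected loss avoided - test cost) / test cost."""
    loss_avoided = downtime_probability * downtime_cost * risk_reduction
    return (loss_avoided - test_cost) / test_cost

# Hypothetical: a $20k test, a 30% chance of a $500k outage, and testing
# assumed to eliminate 80% of that risk.
roi = performance_test_roi(20_000, 0.30, 500_000, 0.80)
print(f"ROI: {roi:.0%}")  # prints: ROI: 500%
```

The model is deliberately crude (it ignores tuning benefits such as lower response times and reduced hardware spend), so in practice it understates the case Greising is making.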
Unfortunately, Greising said, performance testing is rarely a regular activity in a systems development or migration project. Then, when application deployment approaches at high velocity, nobody has time to even think about it.
Greising is no stranger to keeping cost in mind when testing. He worked as a salesman and pre-sales engineer for 15 years, and for him, balancing cost and risk is just a regular part of testing. To talk about cost justification, you need to talk about risk, he said.
For more on the upcoming show, check out the CAST conference website.
Presenting correct information isn’t just a function of how you write your report at the end of a software project. Instead, it is the result of a complex process that starts with analyzing the needs of your stakeholders, moves on to gathering accurate and timely data from all your project sources, and ends with presenting your findings in the correct format, at the right time and to the proper audience. Joel Montvelisky calls this “testing intelligence” and is presenting a talk on the topic at this year’s Conference for the Association for Software Testing (CAST), July 13-16 in Colorado Springs.
“Testing intelligence is a term to describe a slightly different perspective to software testing that places the focus on the needs of our project stakeholders instead of on the application under test. The main idea is to position the testing team as a service organization within the development group, whose purpose is to provide the timely and actionable testing-based visibility needed by the project stakeholders to make their tactical and strategic decisions.”
“In principle this is nothing new, but in practice many testing teams tend to get disconnected from the changing needs of the Organization during the project and end up working for the sake of their own “product coverage needs” or the old information needs of their Project Stakeholders.”
Montvelisky is one of the founders and the product architect of PractiTest, a company providing a SaaS (Software as a Service) test and quality assurance (QA) management system. He is also a QA consultant specializing in testing processes and a QA instructor for multiple Israeli training centers. A member of the advisory board of the Israeli Testing Certification Board (the Israeli chapter of the ISTQB), he publishes articles and a periodic QA blog and is an active speaker at local and international conferences.
According to Montvelisky, the process of gathering testing intelligence starts by correctly identifying your stakeholders, then working with them to understand their needs, and finally providing them with correct and timely information they need. He thinks there are a number of things that make this a hard process, or at least not a trivial process, for testing teams:
“We can start by the fact that many times we are not aware who are all our project stakeholders, we tend to miss some that are not physically close or that enter the project late in the process. Secondly, we testers are not really trained to work with customers […], so many times we don’t communicate correctly and we assume their needs without consulting with them about what information is important for their decisions, what format they need it, and when. And finally, we don’t take into account the dynamic nature of our projects. We don’t understand that people require specific information at certain times. Nor do we take into account that as the project evolves the information we need to provide changes.”
Montvelisky started developing his CAST talk when he was a QA Manager working for an enterprise software company. There he realized that many stakeholders were frustrated with the existing bureaucracy of the QA work. The stakeholders thought the QA team’s work was dictated by test-planning documents written months beforehand. Montvelisky noticed that those documents were not staying relevant to the current issues affecting the release.
“In this company we made a mind-shift and decided to set aside time during the project for ‘specific testing tasks’ that would be given to us by the Development Team in real-time. Soon enough the demand for these tasks increased and we realized that we were providing real value to the process by becoming the eyes and ears of the project. After I left this company and became a consultant I took this approach with me and created a process around it to help organizations make the mind-switch and start working more effectively with their stakeholders throughout the project.”
The chance to get feedback is one reason Montvelisky is excited to be presenting at CAST. “It’s not easy to receive hard criticism, but once you learn to take it in a positive light and use these comments to continue developing your work, it makes it one of the most fruitful encounters for people looking to improve and develop ideas in the field.”
I asked Montvelisky where he thought he might get some pushback on his approach:
“In the past, I’ve heard two main areas of criticism to my approach, both of them fair. First, people explain to me that all their professional lives they’ve worked based on what I describe as testing intelligence, and that this is nothing new to them. To these people I usually come asking for their best practices and asking for their inputs in order to improve my approach.”
“Second, people tell me that our job should limit itself to test, and ‘we should be proud’ of it instead of trying to find big names for what we do, leaving this to the marketing team. To these people, I try to explain that every team in the organization needs to contribute value to the process, and if they think that all their value comes from reporting bugs and coverage percentages then they can continue working like that.”
“Having said that, there is a lot more value that can be provided by the Testing Team, and we don’t need to change what we do in order to provide it; we only need to make sure we stay connected with our stakeholders and help them throughout the project, and not only at the end of it.”
Montvelisky is currently focusing on a couple of research topics. One of them is related to adding value by correctly utilizing the test management tools in the organization. The other is related to collaboration between testers from different organizations, and different cultures and countries, in order to improve their overall work.
Software consultant Henrik Andersson implemented an exploratory testing training project in an 80-tester group in only eight days, and he lived to talk about it. Next week, he’ll outline the steps he took to quickly set up a pilot program and train testers in exploratory testing theory and practice during the Conference for the Association for Software Testing (CAST), which takes place July 13-16 in Colorado Springs. In his session, he’ll also cover how he made responsibilities and expectations clear.
I recently interviewed Andersson about his session, titled “Implementing Exploratory Testing at a Large Organization.” He said his first reaction upon receiving the assignment in question was that it was impossible to implement exploratory testing on this scale in that time frame. To reach out to 80 testers is a challenging thing to do, he said, and it takes time to implement such a different way of testing. Yet, he decided to rise to the challenge. “If I turned it down I would not likely get another chance,” said Andersson, a consultant and founder of House of Test, headquartered in Sweden and China.
Once he accepted the project, he had to figure out how to do the impossible.
“I came up with a little twist on the initial request. I suggested that we should pick one tester from each test team and tutor them to become Exploratory Testing Champions. This gave me an initial group of nine people. This is what we achieved during the eight days. The Champions would then have the responsibility to tutor the rest of the testers in their teams. The Champions are now continuously working with this in their test teams, and we have established this new role formally.”
Andersson’s case study will show what the exploratory testing champions approach achieved. He also will explain in detail how the project was implemented, describing the workshops on theory and practical exploratory testing that he conducted during the project. He’ll share observations about tutoring a group of people used to working in a completely different way, the positive feelings and feedback he received, what surprised him, and what approaches did not succeed.
Just so you know that Andersson is no newbie to testing methodologies, here’s some information about his background. As a software tester and consultant, he has worked in a variety of fields, including telecom, medical devices, defense, insurance, SAP and supply chain systems.
For the past 10 years, Andersson has focused on working in a context-driven fashion, mixing exploratory testing with more traditional methods, such as RUP, V-model, TMap and others. However, he has never followed any method to the letter. “I always only took the part that has been useful and invented the parts I was lacking,” said Andersson. “I definitely didn’t do the parts that I felt were obstacles or not useful.” Indeed, Andersson enjoys helping organizations transform from “old school” practices.
Right now, if I do a search on ‘electronic voting’ on Google News, I get 748 results for the past month. The headlines include phrases like “How to trust,” “Technology is not foolproof,” “Electronic voting machines are fallible,” and the ever-present “Electronic voting machines also caused widespread problems in Florida.” There are many legitimate concerns around electronic voting technology.
At this year’s Conference for the Association for Software Testing (CAST), July 13-16 in Colorado Springs, AST eVoting Special Interest Group (SIG) members Geordie Keitt and Jim Nilius will take a look at the highly visible testing processes around eVoting systems and outline their concerns about the way that testing is done today.
“Our talk is about the reasons why an electronic voting system can undergo a rigorous, expensive, careful series of tests and achieve certification, and still be a terrible quality product. The laboratory certification system is not capable of ensuring quality products where the complexity of the system is as poorly represented by its standards as eVoting systems are. We have come up with an interesting model of the certification testing context, which helps us see when the rigors of cert lab testing are appropriate and adaptive and when they are not. We will be asking the conference attendees for help in honing our arguments in preparation for publication. We will be presenting them to National Institute of Standards and Technology (NIST) to suggest changes to the accreditation guidelines for certification labs.”
Geordie Keitt works for ProChain Solutions doing software testing, a career he began in 1995. He’s tested UNIX APIs, MQ Series apps, Windows apps, multimedia training courses, Web marketplaces, a webmart builder IDE, the websites that run bandwidth auctions for the FCC, and now critical chain project scheduling software. Geordie aspires to be a respected and respectful practitioner and teacher of the craft of rapid, exploratory software testing. He is the lead of the AST’s eVoting SIG.
Jim Nilius has over 23 years of experience in software testing, test management and architecture. His most recent role was as program manager and technical director of SysTest Lab’s Voting System Test Laboratory, an ISO 17025 test and calibration lab accredited by the US Election Assistance Commission under NIST’s National Voluntary Laboratory Accreditation Program and mandated by the Help America Vote Act of 2002. The lab performs testing of voting systems for federal certification against the 2005 Voluntary Voting System Guidelines. He is a member of AST’s eVoting SIG.
“Jim has a wealth of experience in this domain,” said Geordie Keitt, “and as the Chair of the AST eVoting SIG I wanted to draw out of him as much knowledge as possible and get it out into the open where we can all look at it and try to detect patterns and learn lessons from it.” Keitt and Nilius did an initial presentation of their work at the second Workshop on Regulated Software Testing (WREST). One of the outcomes from that workshop was a diagram of the challenges facing a sapient testing process in a regulated environment.
“We are bringing an immature argument before a group and asking for their help to toughen it. We are testing the founding principles of regulated testing, which predates software testing by a hundred years and has yet to recognize that new software systems are more complicated than mature ones, instead of the other way around. We need help to hone and tighten our argument for it to be effective.”
For more on the upcoming show, check out the CAST conference website. Also, check out the website for the AST eVoting SIG to get involved and see what else they are working on. For more on the work of Geordie Keitt and Jim Nilius, take a look at their outcomes from the second Workshop on Regulated Software Testing.
CAST 2009: Understanding the Principles of the Law of Software Contracts, an interview with Cem Kaner
“The American Law Institute just adopted the Principles of the Law of Software Contracts. For now, this will guide judges when they decide disputes involving the marketing, sale (of product or product license), quality, support and maintenance of software. Over the next few years, it will probably guide some new legislation as individual states apply its terms. It will probably also inform some legislative drafting efforts underway in Europe, and probably to come in other countries whose economies are gaining greater influence (e.g. India and China).”
That introduction comes from Cem Kaner, summarizing his upcoming talk at this year’s Conference for the Association for Software Testing (CAST). CAST takes place in a couple of weeks, July 13-16 in Colorado Springs, and Dr. Kaner will be looking to dig into some of the new rules for software contracts adopted by the American Law Institute in more detail.
“Historically, the American Law Institute has had enormous influence. Its membership is politically diverse and primarily judges and tenured law professors. Appellate-level judges routinely reference American Law Institute materials in their published cases. For software, judges need to turn to other judges’ writing even more than in other areas, because so much software law is judge-made. […] The Principles provide a unifying framework, based on judicial opinions around the country over the past 50 years. I was ill and unable to travel to the ALI meeting this year, but the ALI meeting blog reported that the Principles were passed unanimously. This is very rare, and it speaks well to the likely future influence of the document.”
Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: “Testing Computer Software,” “Bad Software,” and “Lessons Learned in Software Testing.”
While this might seem like an unlikely talk to some, the subject matter is critical to our industry and CAST is a perfect venue to get people talking about it. There’s a mix at CAST that is rare at other conferences. Not only do the talks go deep into the subjects they cover, the average attendee is willing to dig in and do the work necessary to understand the subject matter. Add to that the importance of the Principles of the Law of Software Contracts to the industry and you have the perfect mix.
“At most conferences, I would give a talk about the Principles, a few people would ask short questions, we would call a time limit, and that would be that. Those are “marketing talks” – one-way communication from someone pushing an idea to an audience that politely listens. I strongly prefer CAST’s approach, which encourages more critical questioning and follow-up discussions. Sometimes people come prepared for a serious debate. CAST encourages this, and everyone learns from it. More often, a group of us break away for more discussion at the end of the meeting. Again, I learn a lot from that, and so do the people in the breakout group, who get to discuss the ideas in a way that makes sense to them.
“Many testers talk about their unhappiness with their company’s quality standards. They feel as though they are working on bad software, with irresponsible project managers who care a lot about cost, ship date, and personal glory but don’t care whether the final result works. Commercial law creates the playing field for software development and marketing. When you ask why a company can survive when it sells crap software, lies about what it is selling, and treats its customers like dirt, you are partially asking a market question (how can it keep customers?) and partially a legal question.
Given that we have hit a milestone in the adoption of the Principles of the Law of Software Contracts, I think it’s important to let testers know the new rules.”
Dr. Kaner is no stranger to some of the issues that will surround the Principles of the Law of Software Contracts. He started working on software-quality-related legislation in 1995, when he helped write the Uniform Electronic Transactions Act (UETA). UETA (adopted federally as ESIGN) removed a major barrier to electronic commerce by giving legal force to electronic signatures. Dr. Kaner also worked on the Uniform Computer Information Transactions Act (UCITA), trying to improve it when it was a joint project of the American Law Institute, the National Conference of Commissioners of Uniform State Laws, and the American Bar Association. From 1995 to 2001, he wrote almost 100 status reports on UCC 2B/UCITA, most of them for the testing community.
“UCITA started as UCC-Article 2B, a joint project of the American Law Institute and the National Conference of Commissioners of Uniform State Laws and the American Bar Association to update the Uniform Commercial Code, which is America’s main body of commercial law. The American Law Institute and the National Conference of Commissioners of Uniform State Laws jointly run the UCC’s Permanent Editorial Board, which has such a strong reputation for fairness and thoroughness (amendments take up to 10 years of hearings) that state legislatures look to the Board for all amendments to the UCC.
Unfortunately, the Article 2B project was hijacked by political activists who wrote a bill that radically tilted copyright and contract law in favor of large software vendors. The American Law Institute demanded a rebalancing of the bill. When the National Conference of Commissioners of Uniform State Laws refused, the American Law Institute walked off the project, killing it as a UCC project. The National Conference of Commissioners of Uniform State Laws renamed the bill, UCITA, and submitted it to state legislatures. Ultimately, two states adopted UCITA, the rest rejected it, and four states even adopted laws that made UCITA-based contracts unenforceable in their states.
This was the first time since the American Civil War that some states passed laws to explicitly reject and make unenforceable contract terms that were lawful in the state in which the contract was written. Once UCITA’s failure was clear, the American Law Institute started a new project (the Principles), to bring computer law in line with the mainstream of American commercial and intellectual property law, balancing the rights of vendors and customers. They elected me as a member, which put me into a better position to comment on laws affecting the software industry. I think I am the least experienced lawyer ever elected to the American Law Institute. Since then, I’ve given a few reports to AST members at CAST and by email, collecting feedback and making suggestions back to the American Law Institute.
Revamping the legal infrastructure is not a guarantee of better times to come. A poorly balanced body of law can wipe out a marketplace or wipe out the companies trying to serve that market. I think this decade’s lethargic software market has been a result of that. I’ve seen a lot of extreme proposals coming from many quarters (right, left, and unclassifiable). The American Law Institute work is the best and most promising that I have seen.”
At CAST, debate on topics presented is encouraged. Given the effect of the Principles on the software development industry, I asked Dr. Kaner what he thought the most likely criticisms might be.
“I think the most likely criticism will be that holding software companies accountable for undisclosed known defects will somehow harm the industry. Sadly, I think this criticism is poorly informed. A bunch of silliness was promoted on the web just before the American Law Institute meeting, claiming this would particularly hurt the open source community. As is so common in this decade’s political propaganda, this was blatantly wrong. The Principles specifically exclude open source software from this type of liability. More generally, I think some people fear that commercial regulation creates a potential to kill the industry. I have seen some proposals, especially from consumer activists and buyers for very large non-software companies, that demand too much. I think the Principles are much more moderate – perhaps too moderate.
The only way to address these types of concerns is with open discussion and facts. I’ll introduce some of the key ideas in my talk and then be available for as much post-talk discussion as people want. I don’t expect everyone to come out loving the Principles, but at least the folks who want a deeper understanding will have a good chance of getting it.”
For more on the upcoming show, check out the CAST conference website. For more on Cem Kaner, you can visit his website, or take a look at what he considers to be his career-best writing and research in his freely available book, “Bad Software.” The Principles are not available for free on the web, but you can purchase a copy from the American Law Institute. Dr. Kaner has summaries of some of the main ideas in the Principles on his blog and in his 2007 CAST presentation “Law of Software Contracting: New Rules Coming.”
Dani Almog will be outlining a new approach to test case definition at the Conference for the Association for Software Testing (CAST 2009) this month. In his talk “Test Case Definition: A New Structural Approach,” Almog will explore and classify the various definitions of test cases, discuss the implications of the current situation, and suggest an alternative structural definition for test cases. CAST 2009 takes place July 13-16 in Colorado Springs, CO.
“Based on thorough research of academic articles, journals, and books, and by consulting some of my distinguished colleagues around the globe, I will present my findings, aiming at an alternative formal structural definition of the term ‘test case’. Although it may be a controversial approach, formalizing the term fits my engineering perception of the testing process. Thus, during my talk I will present a newly developed structural definition of a test case.
During the last six years, I was very fortunate to be able to fulfill a dream and implement ideas pertaining to how and when software testing should be developed; in short, finding ways to bridge the gap between the software development approach, the technology, and the training (mostly object-oriented thinking), and the way we testing engineers see the same applications: in an intuitive, procedural manner – the way we use them.
I got full support from top management in the company I worked for (Amdocs) to recruit a very talented and innovative group of young people who assisted me in developing and implementing a full infrastructure for test automation. Later, this was rolled out to all the corporate divisions and units. Now that I have retired from Amdocs, I have decided to dedicate the rest of my professional career to exposing and promoting the new approach I have developed, including methodologies and tools.”
Dani Almog is currently a member of the academic staff at Ben Gurion University, Israel, where he teaches and does research. Dani teaches software quality engineering and testing, and researches the interaction between development processes and quality/testing – trying to introduce to the academic world some of the achievements and work models developed in industry.
“Coming from a large corporation’s (Amdocs) R&D division, we encountered many issues regarding test case automation – a necessity and a key factor for supporting eight different large software products, often with three different versions distributed among one hundred different very large customers. This situation made us consider all options and alternatives for test automation infrastructure, tools, and methodologies. We were given the opportunity to be involved in shaping the future of our new products’ development processes and procedures. All my academic activities since my retirement, including this talk, are derived from this experience, and I am now actually documenting and presenting the best practices of what we have done.”
Almog says that in his talk he’s targeting two different communities. The first is the professional community, which he says is struggling to “improve its skills and outcomes regardless of inferior reputation and image – knowing they never really get the deserved glory.” The second is the academic community, which has “neglected a very exciting field of research and progress.” He suspects both communities might have some criticisms of his talk.
“I welcome the challenge of debate and criticism and believe it will improve my work. I guess the criticism will come from two main channels: from those questioning the relevance of my approach to all the different streams and new directions in which software development is expanding, and from people who perceive testing as softer and more flexible (exploratory, context-dependent, and so on) rather than structured and engineered. I believe that my approach paves the way to more systematic and precise testing, as well as to the development of advanced automation tools. I welcome all constructive and relevant criticism. It will help me present a better model.”
For more on the upcoming show, check out the CAST conference website. For a chance to explore the topic in more detail, you can contact Dani Almog on the Software Testing Club or follow his discussions there.