When testers “compare the product to specifications,” what they often find is that the specifications express intentions, and those intentions differ from the product. While the tester’s job may be to gather information, this makes testing an intentional investigation, according to STAREAST 2012 keynote speaker Michael Bolton of DevelopSense, Inc.
His speech, “Evaluating Testing: The Qualitative Way,” explored qualitative research methods, compared the job of software testers to that of investigative journalists and offered the notion that testing is ultimately a human activity.
Read more in-depth coverage of this insightful speech here: Software testers as qualitative researchers: STAREAST 2012 keynote.
As software quality professionals, we all know that the earlier we find defects, the easier they are to fix. However, all too often, performance testing is not done until the very end of a release cycle. In fact, some organizations wait until after the code has deployed, thinking they can only monitor performance, and fix it when there’s a problem, rather than doing testing up front.
Eric Gee is facilitating a session here at STAREAST 2012 titled “Performance Testing Earlier in the Software Development Lifecycle.” In this short video clip, Gee previews his session and explains how performance test engineers can partner with architects and developers, testing at the component level to ensure that performance issues are found and addressed as early as possible.
Here at STAREAST 2012, we’ve been hearing quite a lot about the importance of exploratory testing. This technique allows testers to really dig into areas of high risk, using what they learn as they test to creatively uncover problems that aren’t on the “happy path.” However, when they do run across an anomaly or defect, they need to be able to reproduce it so that the developer can troubleshoot it and find the root cause.
I asked conference attendee Thora Commins which tools she used to easily recreate the issues she uncovered during exploratory testing. Listen in to hear her response:
“Simplify, simplify, simplify” and “make sure you’re testing exactly what your customers are asking for,” says Mary LeMieux-Ruibal from Cognizant. She and Dr. Mirkeya Capellán from Sogeti facilitated the session, “Creating a Risk-based Testing Strategy,” at STAREAST 2012.
In the article, Risk-based testing approaches for Agile development teams, I talk to LeMieux-Ruibal and Capellán about risk-based testing and how it’s used in Agile environments. It was a real pleasure to meet them in person in sunny Orlando, Fla., and chat at the conference. Take a look at this short video clip as they give a quick summary of their session:
How do testers integrate exploratory testing techniques and test automation in an Agile setting? Software testing consultant Lanette Creamer is co-presenting “You Can’t Spell Agile Testing without ‘ET’” with Matt Barcomb at STAREAST on April 18. Listen to this brief podcast for a preview of the session, which addresses exploratory testing in Agile environments.
We continue to hear more about test automation as more organizations are claiming success with their automation strategies. Just a few months ago at our local SQuAD (Software Quality Association of Denver) meeting, a panel of recruiters advised test professionals to learn some technical skills.
That advice was repeated at last night’s SQuAD presentation, “Test Automation 101,” by Jim Hazen. Much of Hazen’s presentation centered on testers learning how to program, since many automation tools require some degree of programming. “Even codeless and scriptless tools [require some programming skills]. At some point, you’re going to need to dig into the code.”
Hazen suggested books and online resources to get started. The first book he mentioned was Experiences of Test Automation: Case Studies of Software Test Automation, the new book by Dot Graham and Mark Fewster. Coincidentally, I’ve been emailing Dot and Mark about meeting at next week’s STAREAST conference to augment the recently published two-part interview I’d done with them.
Hazen talked about several automation tools and suggested the popular open source tool, Selenium, for those who’d like to get their feet wet with test automation. However, Hazen also warned that automation takes work, and that believing vendor hype is one of the biggest mistakes groups make when implementing an automated test solution. Though organizations that implement well will certainly realize a strong ROI, Hazen warned that 70-80% of organizations fail on their first implementation of test automation.
As with any effort, it’s important to start with planning and to make sure the staff is properly trained. The message is clear: in this day of Agile development and test automation, testers need programming skills to remain competitive.
The conference season kicked off for me in my own back yard at 2012 Mile High Agile on April 3rd in Denver. This was the second annual Mile High Agile conference and, once again, it boasted a full house of engaged participants and an impressive variety of sessions and networking opportunities.
Highlights included a keynote address by Jeff Patton, a prominent speaker, writer, instructor and product design coach, who reminded us of the importance of the conversation and reaching a shared understanding of requirements.
Four sessions were held in each of seven tracks: Agile Technical Practices, Agile Quality Practices, Executive & Leadership, Agile Coaching, Product Management, Agile Boot Camp and Agile Outside the Box, as well as an additional track from sponsors covering a variety of topics. There were also 10-minute Lightning Talks and Birds of a Feather sessions where interested parties could flock together.
In the short video clip below, you’ll hear from Kim Barnes and David Madouros, who give their number one piece of advice for those new to Scrum teams:
As usual, my only problem was that there were so many interesting sessions that it was hard to decide which ones to attend! At this conference, I even participated as a speaker, leading an interactive session in the leadership track about distributed collaboration.
This is just the beginning. Melanie Webb and I will be attending STAREAST 2012 in mid-April and already have had some interesting interviews with speakers, so stay tuned!
SSQ recently published Social media: A guide to enhancing ALM with collaborative tools. The guide shows many examples of the use of social media in the development lifecycle.
Social media also offers specific advantages to business. I spoke with Steve Nicholls, author of the book, Social Media in Business, who writes about four business opportunities available to organizations with social media: communication, communities, collaboration and collective intelligence.
SSQ: Steve, you talk about different types of social media tools. Would you say there are certain types of social media tools that are most beneficial for software development teams?
Steve Nicholls: Most businesses do not think about business goals when they think about social media. Any tools considered need to always be in the context of the business goals. The quick way of saying that is it depends on what you want to achieve. For instance, if it is to raise your profile, you would set appropriate social media goals and then select the best social media tools based on your organizational environment. For example, you’d look at things such as what you are already using, what skills you have, the resources available, etc.
SSQ: In what ways are software development teams able to communicate with their customers by using social media?
Nicholls: Again, the thing to consider is the business goal you’re trying to achieve, then what blend of social media would be needed to achieve it. The organizational goal might be, say, to increase the repeat-customer percentage in order to increase revenue by X%. One marketing strategy could be a more effective customer relationship management strategy. You would then select the social media program that supports those goals. That would be around customer service, the ways you interact with the customer to provide information. It would depend on what you already do as to what tools you would use. There are a number of tools that could be used, but you would want to look at the way you exchange different types of media, the way you hold meetings, and the frequency with which you measure customer satisfaction through surveys, ratings and so on.
SSQ: Are privacy and security a concern when businesses start to use social media? What should organizations watch out for?
Nicholls: Managers and policy makers are right to be worried about privacy and security; these concerns are real. What should you watch out for? Most companies worry about their reputation in the marketplace; employees wasting time on non-work-related social activities, of which there are many; leaks of competitive and confidential information; theft of intellectual property; and legal problems arising from what employees say and do online. Employees have concerns of their own: that employers are spying on them, that social media adds more work to an already full schedule, and that organizational boundaries blur between what is company business and what is private to the employee. Some employees may need social media for their personal lives but find access blocked at their workplace. Mobile devices are also an issue, as a number of the organization’s security protocols can be bypassed using a tablet computer or a smartphone. These issues are best dealt with through a well-crafted and enforced social media policy.
SSQ: What would you say is the biggest takeaway readers will gain from your book?
Nicholls: There are two major takeaways:
1. You need to have a clear model of what social media is and how your company can utilize it, not just among your marketing and IT people, but in your competitive strategy.
There are risks and obstacles with social media, which cannot be underplayed, and there are risks in every area of business; the key is to quantify that risk and weigh it against the potential gains.
2. To implement social media in a systematic way, you need a comprehensive implementation framework, like the 3-Core Project success system, which is easy to learn and provides a step-by-step, low-risk way of moving your organization forward.
There are companies in your market, or, more dangerously, in a related market, that are trying to figure out how to gain a competitive advantage using innovative combinations of social media tools to enhance their business strategy. If you are a leader in your market, or aspire to be one, then you need a full comprehension of the advances in technology and software that are fueling the rise of the Internet revolution. If you are a follower, then you need a model of the best of the best, and not look at your nearest competitor and feel comfortable because they are not doing much either!
Currently the book is available from Amazon.com and Amazon.co.uk and from www.SocialMediainBusiness.com.
Earlier this month, CollabNet announced Agile Assessment, a new consulting service that helps IT organizations assess the status of existing Agile practices and develop strategies to extend the benefits of Agile-based methodologies across the enterprise.
I had the opportunity to speak with David Parker, Vice President of CollabNet’s Scrum Business Line Unit about the announcement.
SSQ: Can you tell us more about the Agile Assessment? Is it a series of questions like a survey, or does a team come in and observe? Does an organization get scored, and then are there recommendations about how the team could incorporate more Agile practices?
David Parker: CollabNet’s Agile Assessment is one of the most critical steps an organization can take to become more Agile. First, it clearly defines the goals you are seeking to accomplish while weighing the business context in which you operate. Second, it delivers a snapshot of your organization, helping you to determine the extent that you have embraced Agile. And finally, it delivers a crisp, prioritized plan that defines the practical steps you should take to become more Agile.
The process by which we build this assessment includes a detailed questionnaire, on-site observation of the development teams and free-form discussions with management stakeholders.
SSQ: The announcement says the service includes: “strategies to extend the benefits of Agile-based methodologies across the enterprise.” What type of “Agile-based methodologies” are we talking about? Scrum? XP? or things like “collaboration,” and “transparency”?
Parker: Although CollabNet has a long history of involvement with Scrum, we do recognize that 1) most organizations adopt a hybrid Agile approach that incorporates aspects of Agile and Waterfall and 2) new methodologies like Lean/Kanban are emerging. As such, our assessment is methodology-agnostic. As mentioned above, a critical part of the assessment is the prioritized action plan that helps companies determine the steps they need to take to become more Agile. From our perspective, it does not matter what flavor of Agile the company is adopting. Our goal is to simply help the organization become more Agile.
SSQ: Agile practices promote face-to-face communication and collaboration. Is the team makeup (co-located or distributed) part of the assessment? What are your thoughts on co-location vs. distributed teams?
Parker: Geographically distributed teams are a reality today. It is the rare enterprise that has not, to some degree, embraced a distributed development model. As such, part of scaling Agile to the enterprise necessitates the use of Agile in a distributed context. Having worked with hundreds of clients with distributed development models, we fully appreciate the challenges it brings. But the benefits of access to deep talent pools and of reduced cost (to a lessening extent) make the net result a positive one for the enterprise. We have proven that Agile can add great value to distributed development organizations.
CollabNet’s Agile Assessment examines the interaction of distributed teams. We look at release planning, sprint planning, backlog management, and organizational context. In this way, the assessment helps organizations understand where things are working well and where there is room for improvement, and it provides a plan for better integrating distributed teams.
SSQ: Once again, on the question of “extending across the enterprise,” can you elaborate? Are we talking about using Agile practices for sales? Marketing? Governance? Operations?
Parker: Although CollabNet’s Agile Assessment focuses on the software development organization, we recognize that the development organizations do not work in a vacuum. The truly Agile enterprise brings multiple stakeholders into the development process, and into IT as a whole. When we think of extending Agile across the enterprise, we are looking at ways to better engage with stakeholders in other parts of the organization.
SSQ: Do the services include an evaluation of the organization’s ALM toolset and/or recommendation for tools?
Parker: Although tools can play an important role in a broad-scale Agile adoption, CollabNet’s Agile Assessment is tools-agnostic. Evaluation of specific tools is not part of this assessment.
SSQ: Are “best practices” for Agile ALM available?
Parker: With hundreds (thousands?) of large-scale ALM and Agile customer engagements, CollabNet has built up a huge repository of expertise in Agile practices, Agile process and ALM. We believe that we are one of the few vendors that can really combine these three for enterprise adoption. Each and every engagement we enter into leverages this vast experience. CollabNet case studies provide real-world examples of our implementations.
SSQ: Are these Agile development coaching services? If so, is a particular methodology such as Scrum recommended?
Parker: The CollabNet Agile Assessment is not a coaching engagement. Rather, it is designed to provide a snapshot of an organization’s adoption of Agile techniques and a prioritized set of recommendations for helping improve agility. We seek to be tools-agnostic, and we do not favor one flavor of Agile over another. That said, CollabNet does have deep expertise in Scrum. We’ve trained more ScrumMasters than any other vendor, and have on staff an impressive set of Certified Scrum Trainers and Coaches. If Scrum is the best fit for an organization, we are certainly well-equipped to help them adopt it.
Recently, SSQ created a quality metrics guide which includes a series of articles, tips and stories related to measuring software quality. It’s a complicated and controversial topic with no easy answers. We asked our readers to weigh in, and I wanted to share a couple of insightful responses we received.
Darek Malinowski, an ALM Solutions Architect at Hewlett-Packard, writes:
In my opinion, we should start from business requirements. First, we should estimate the weight (criticality), or business impact, of each requirement. Business impact relates to loss of money, loss of confidence, loss of market, etc. So the best way to measure quality is requirements coverage by tests weighed against requirement criticality. This can tell us whether the product we deliver to the business provides all the functionality required.
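Malinowski’s idea can be sketched as a simple criticality-weighted coverage calculation. This is my own illustration, not from his note: the requirement names, weights, and the tuple structure are all invented for the example.

```python
# Hypothetical sketch of criticality-weighted requirements coverage.
# Each requirement carries a business-impact weight; coverage is the
# weighted share of requirements that have at least one test.

def weighted_coverage(requirements):
    """requirements: list of (name, criticality_weight, has_tests) tuples."""
    total = sum(weight for _, weight, _ in requirements)
    covered = sum(weight for _, weight, tested in requirements if tested)
    return covered / total if total else 0.0

# Invented sample data: one high-impact requirement left untested
# drags coverage down far more than a low-impact one would.
reqs = [
    ("process payment", 10, True),   # high business impact, tested
    ("export report",    3, True),
    ("change theme",     1, False),  # low impact, untested
]
print(round(weighted_coverage(reqs), 2))
```

A plain (unweighted) coverage number would report 2 of 3 requirements covered; weighting by criticality shows the delivered product covers about 93% of the business impact, which is closer to what management actually cares about.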
Haya Rubinstein works as the Quality and Delivery team lead in SAP in the IMS NetWeaver MDM group. After reading Crystal Bedell’s article, What upper management should know about managing software testing processes, she gave a very detailed answer to our question about how her organization measured software quality.
In my experience, management is interested in the test KPIs even before the release is decided on. There is usually a quality standard they wish to adhere to.
Following is an example of KPIs for a software release:
- 0% defects on the top ten priority functionalities.
- Less than 5% degradation in performance.
- All new features have been tested.
- All new “hard” software requirements were met.
- All quality standards were met or mitigations were agreed on with quality standard owners.
- Over 90% of tests on new features have passed.
- Over 95% of all unit tests have passed.
- Over 95% of all regression tests have passed.
- Over 95% of all quality standard tests have passed.
- No Very High or High priority defects remain open.
- All medium priority defects have been examined by development to ensure they are not a symptom of a more severe issue.
Rubinstein goes on to describe the detailed and disciplined process used throughout the development lifecycle, with test milestones from the planning of a feature through its delivery. She says that at the end of the test cycle, management receives weekly reports aimed at showing compliance with the KPIs, including:
- Any risks to KPI compliance, presented together with the proposed mitigations.
- Compliance with the KPIs above, with current status (number or percentage, color-coded).
- Details of open Very High/High priority defects: defect number, summary of issue, created by, opened-on date, assigned to, current status.
- Approval status for the relevant features according to hard software requirements, and details of defects still open on each feature.
- Link to the full defect report.
- Link to the full test plan report.
- Notes for all open defects.
- An overall status (green, yellow, or red).
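An overall color status like the one in Rubinstein’s report could be rolled up mechanically from the individual KPI results. This sketch is my own invention, not her organization’s rule: the cutoffs (zero misses is green, one miss is yellow, more is red) are assumptions an organization would tune.

```python
# Hypothetical sketch: roll individual KPI pass/fail results up into one
# color-coded status. The cutoffs below are invented for illustration.

def overall_status(kpi_results):
    """kpi_results: dict of KPI name -> bool (True = KPI met)."""
    failed = [name for name, met in kpi_results.items() if not met]
    if not failed:
        return "green"
    if len(failed) == 1:          # one KPI missed: at risk, not blocked
        return "yellow"
    return "red"

print(overall_status({"unit": True, "regression": True}))
print(overall_status({"unit": True, "regression": False}))
```

Keeping the roll-up rule explicit in one place means the weekly report’s headline color is reproducible rather than a judgment call made anew each week.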
What about your organization? What metrics do you use to measure quality?