[kml_flashembed movie="http://www.youtube.com/v/lK97h6zxM68" width="425" height="350" wmode="transparent" /]
Boy, do I feel lucky to be living in the Denver area. Not only do I get to attend monthly meetings of the SQuAD (Software Quality Association of Denver), but today started the two-day SQuAD conference! The morning kicked off with opening remarks by Melissa Tondi, followed by two half-day workshops.
I attended Jon Hagar’s morning session on creating attacks for embedded software. I’d heard Hagar speak about embedded software before and was interested in learning more. This time we got some hands-on time testing a 21-question hand-held game.
“When you’re thrown into a test situation, where do you start?” Hagar asked as he handed us the hand-held games and told us to “test.” When I asked if there was any documentation available, Hagar gave me a thumbs up, though he didn’t seem overly impressed by my quick response, teasing, “You’ve heard me speak before!” He gave us the user instructions, but talked to us about ways of testing that go beyond simply “happy path” testing against the documentation.
For the afternoon session, I attended the QA Leadership Summit led by Bev Berry and Michelle Rocke. Several questions posed to the group spawned discussions on topics including Agile adoption, the challenges QA managers face, the changing roles of testers, and the quality metrics used to make decisions.
The conference continues tomorrow with morning keynotes by Lee Copeland and Michael Bolton, followed by a variety of presentations by industry leaders. If you can’t attend, follow the conference on Twitter with #squadco and stay tuned as I report back with more video and reports later this week.
[kml_flashembed movie="http://www.youtube.com/v/5U_9HvpopMw" width="425" height="350" wmode="transparent" /]
At a recent Software Quality Association of Denver (SQuAD) meeting, Jon Hagar gave a presentation about testing embedded software and demonstrated his take on this emerging skill with an exercise entitled, “Attack of the Killer Robots.”
Embedded software has certain aspects that must be taken into account, such as timing and integration with the hardware, that need to be “attacked,” as Hagar says. Hagar also explained that when you are testing embedded software, the environment, tools, and testing methodologies may be quite different from those used when testing traditional software that runs on a computer.
Read more about testing embedded software and the challenge Hagar gave us in the story, Embedded software test: Attack of the killer robots.
[kml_flashembed movie="http://www.youtube.com/v/x5Rpw45GC6o" width="425" height="350" wmode="transparent" /]
Jon Hagar will be giving a half-day workshop, How to Break Handheld and Embedded Software, at SQuAD’s 2011 Conference taking place March 9-10 in Denver.
It’s conference season again, and I’m looking forward to attending a couple of software quality conferences coming up soon right here in the Denver area. The Software Quality Association of Denver (SQuAD) conference will be March 9-10, and Mile High Agile is April 7. I’ll get to venture a bit farther for STAREAST in Orlando in May. However, I have not yet been able to get out of the US for a software conference.
Fortunately, SSQ contributor and Agile expert Lisa Crispin was able to attend Belgium Testing Days in mid-February and report back on the experience. Not only did she report on several of the presentations she attended, but she also noted some of the cultural differences between the Belgian conference and those she has attended in the US. She writes:
Maybe it’s the fact that cultures mix more frequently in Europe, or that their software industry seems quite progressive, but attending Belgium Testing Days was a new and rewarding experience for me.
Other news from across the ocean includes the expansion of Software Quality Systems’ (SQS) Belfast facility. In an interview with Rob McConnell, SQS Regional Director for Northern Ireland, I learned that the expansion is due to increased demand for “near-shore” outsourcing solutions from US and European clients.
This month we’ve explored automation throughout the application lifecycle, and our contributors have revealed that the role of the software tester is changing.
In his tip, Is automated testing replacing the software tester?, software consultant David Johnson discusses how both Agile methodologies and automation contribute to the changing responsibilities of the software tester.
There are several forces responsible for the changing role of testers, at least for on-shore testing resources. These include:
- Agile development techniques that integrate test automation into the development process.
- Mature test automation tools that simplify test creation, automation, and execution.
In all cases the need for testing has not decreased, but the responsibility has shifted either to development (i.e., TDD) or to functional test automation. As a result, the resource model for traditional testers is significantly reduced, if not eliminated.
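To make that TDD shift concrete, here is a minimal test-first sketch using Python’s built-in unittest module. The `apply_discount` function and its behavior are hypothetical examples for illustration, not taken from any tool or article mentioned here.

```python
import unittest

# Hypothetical function under test. In TDD, the tests below are written
# first, and this implementation is added until they pass.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    # exit=False keeps the runner from calling sys.exit(), so the
    # script can be embedded in larger tooling.
    unittest.main(argv=["tests"], exit=False)
```

Tests like these run automatically on every check-in, which is exactly the regression responsibility the list above describes shifting away from manual testers.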
So what skills do traditional testers need to deal with this transition?
In a recent expert response, Lisa Crispin answers the question: What kind of automation skills should a tester have and what’s the best way to get them? Check out her advice on ways that test engineers can build their knowledge and skills with test automation.
With the advent of cloud computing and utility-based pricing, “as-a-service” solutions are becoming more popular. HP recently added Testing-as-a-Service as an outsourcing alternative for complex testing needs.
For the past 10 years or so, HP has provided Managed Testing Services, which offer a variety of testing options ranging from staff augmentation to full-service testing, with pricing based on person-hours. The difference with the TaaS solution is that pricing is “consumption-based.”
I spoke with Paul Ashwood of HP Enterprise Services, who explained:
The “as-a-service” model is all about consumption of service, which means you need to have a lot of repeatable activity. And when you’re doing the development of an application, there’s a lot of new stuff there that you can’t really price. Where we apply TaaS is more around applications that are already in production where you need to do regression testing. Regression testing is well served by test automation. Test automation and test regression is all very repeatable.
The services that HP is offering in their TaaS Solution include:
1) Accelerated Test Service – Regression test automation. Pricing is based on test cases rather than the cost of people. The service also includes the maintenance and execution of the automated test cases.
2) Performance Test Service – Pricing is based on Performance Test Units (PTUs). PTU values are based on factors such as the number of steps in the test case or the number of virtual users. Clients pay for what they use, rather than having to purchase the tools themselves.
3) Application Security Testing – Leverages WebInspect for penetration testing and the Fortify tool for code scanning. Pricing is per assessment or per website. The client doesn’t need to pay for the tools.
4) SAP Test Service – Accelerated Test and Performance Test specifically for SAP. HP has built out common business scenarios and common test cases for SAP.
The advantage to the client is paying for what is used, rather than paying for people or tools that may not be needed during downtime periods.
This type of solution is typically used post-production for mission-critical applications. As Ashwood points out, the solution does not depend on a relationship with specific people, and the success of their model includes having “test factories” in various countries and the ability to move resources to where the need is.
What is key to these models is the concept of a factory approach. So rather than the client saying, ‘I’m buying 20 of your test automation engineers for the next two years,’ instead what they’re buying is 20,000 executions of automated test cases over the next two years. So the people are not tied to a client anymore.
Obviously, this is a different approach than the tight coupling of developers and testers that is encouraged in Agile development models. Ashwood agrees that this model is not conducive to new development work, but, again, it is cost-effective for specialty testing of mission-critical applications.
Rona Shuchat, Director of Application Outsourcing Services at IDC’s WW Services Research Group, has this to say about TaaS solutions:
The reality around testing as a service, or on-demand testing as some refer to it, is that different vendors are positioning different offerings under the term “TaaS” or “On Demand Testing.” Companies like SOASTA, for example, are pushing a cloud-based test service, also on demand, that is focused on performance testing and load testing, where multiple load generators can be combined to enable scalability out to hundreds or thousands of virtual users. SOASTA promotes its real-time BI engine to capture and report on performance metrics. This service can be purchased for x hours, or monthly or annually.
A company like Skytap is also offering cloud-based, on-demand test lab environments, positioning its cloud service as on-demand virtual data centers that can drive parallel dev/test requirements, or as a test environment on demand to support unit testing, system testing, UAT or performance testing. We also see prepackaged domain solutions being offered under the term “TaaS” for SAP or other ERP solutions (e.g., Infosys, HP, etc.). Perfecto Mobile offers automated testing solutions for mobile devices, enabling mobile developers to test mobile apps and content on a range of real devices over the web, and so on.
For more about Testing-as-a-Service, read Testing-as-a-Service: Outsourcing your specialized software testing.
Recently, WhiteHat Security announced Sentinel PL (PreLaunch), a service for website security testing done before an application is released to production.
Wendy Nather, Senior Analyst at the 451 Group, says:
With Sentinel PL, WhiteHat is addressing the growing need to move security testing farther upstream in the software development lifecycle. Catching problems earlier means lower remediation costs and better-informed risk management in the release process.
It’s undisputed that the earlier in the software development cycle defects are found, the easier and cheaper they are to fix. However, security testing is often done only post-release, if at all. While there is still a need to test security post-release, it makes sense to test whatever can be tested pre-release as well.
The other day I spoke with WhiteHat CSO Bill Pennington and product specialist Ravi Iver about the service and the announcement. Until now, their service began once the code was complete and deployed in a production environment. Customers had told them the service could be improved if some testing could be done in the pre-production environment.
I asked about some of the differences in testing in pre-production versus post-production.
Pennington answered that there were several tradeoffs that needed to be made for production testing to ensure availability, stability, performance and security.
To read more about application security testing, check out our security expert responses, as well as the Security Lesson: Beating Web application security threats. For more information about Testing-as-a-Service (TaaS), read Testing-as-a-Service: Outsourcing your specialized software testing.
The other day I attended Rally Software’s Agile Cafe in Boulder, giving me the opportunity to hear more stories of “real world” Agile. I had attended the Boulder Agile Software Meetup group just the week before and heard about some of the obstacles to Agile acceptance. Two of the speakers at the Agile Cafe shared real-world success stories, but both of them experienced obstacles during their transitions.
Bill Holst, President & Principal Consulting Software Engineer at Prescient Software, talked about two projects done for Colorado Springs Utilities, one done with waterfall and one with Agile. Though the Agile project was successful, it started with a lot of problems. A key factor in its success was that the team stopped halfway through and the customer group, in this case field engineers, reworked the requirements.
[kml_flashembed movie="http://www.youtube.com/v/LDq5mlLebfg" width="425" height="350" wmode="transparent" /]
The second presenter was Adam Woods, Director of Product Development at StoneRiver. Woods described a phased transition to Agile for a major development effort. One of the biggest obstacles encountered was the lack of buy-in and understanding of Agile principles from executives, project leaders and delivery teams. Woods, too, attributes a key to the eventual success to taking the time to stop and regroup when necessary. As the team did this, they inspected and adapted, going back to basic principles. Recognizing the importance of engagement from the product owners, they changed their model, and eventually gained the acceptance needed from all stakeholders.
Jean Tabaka, author of Collaboration Explained, followed up with a keynote, describing important factors for success in Agile transition. I had a chance to catch up with Tabaka after the presentation for a quick summary.
[kml_flashembed movie="http://www.youtube.com/v/yR-dccJMEzQ" width="425" height="350" wmode="transparent" /]
This month on SearchSoftwareQuality.com we are talking about automation in ALM. As I read through the month’s content, I’m recognizing what a vast amount of automation is involved in software development. In the context of software testing, I used to think of “automation skills” as skills with record-and-playback tools, such as HP’s QuickTest Professional or the popular open source tool Selenium. However, there are many more types of automation and a variety of tasks that automation can perform.
Not only is automation used for multiple tasks, but often a specific type of automation can serve multiple purposes.
As Agile Testing co-author Lisa Crispin writes in Automated test scripts: The Smartphones of testing:
Automated test scripts also serve multiple purposes. We write tests up front so that we know what code to write and when it is finished. Creating the tests is a team activity which enhances collaboration and communication. Automated test scripts provide up-to-date documentation about how the system behaves. Automating our regression tests not only provides quick feedback as we check in code changes, keeping regression failures out of production, it frees time for all-important exploratory testing to find out whether the software delivers the expected business value.
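Crispin’s point that automated tests double as up-to-date documentation can be sketched with Python’s built-in doctest module, where the documented examples are themselves executable tests. The `normalize_username` function below is a hypothetical illustration, not something from the article.

```python
import doctest

def normalize_username(name):
    """Strip surrounding whitespace and lowercase a username.

    The examples below are executable documentation: doctest runs them
    and fails if the behavior ever drifts from what is documented here.

    >>> normalize_username("  Alice ")
    'alice'
    >>> normalize_username("BOB")
    'bob'
    """
    return name.strip().lower()

if __name__ == "__main__":
    # Run every example embedded in the docstrings of this module.
    results = doctest.testmod()
    print("doctest failures:", results.failed)
```

Because the documentation and the regression check are the same artifact, the docs can never silently go stale, which is exactly the dual-purpose quality the quote above describes.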
In the end, that’s what we all want, isn’t it? We want to automate the tedious, repetitive tasks and use our minds for the critical thinking.
It’s a new month and with that, we have a new theme at SearchSoftwareQuality.com: Automation in ALM. Throughout the lifecycle, there are ways that project teams are automating their processes and their work to provide efficiency so they can focus on those tasks that require critical thinking.
I’m excited about our lineup for the month, starting with these three pieces we published this week about automation in ALM:
Test automation: When, how and how much
Test automation has often been touted as an important part of an organization’s quality strategy. However, it’s not a silver bullet. In this tip, consultant David Johnson describes important considerations in determining when to invest in test automation, how to implement the program, and how much of your application space should be automated.
UI Testing: Automated and Exploratory
Should user interface (UI) testing be automated or exploratory? In this tip, SSQ contributor Chris McMahon explains that both automated and exploratory testing can be used for UI testing, and that what is most effective is using them together. McMahon explains when each type of testing is appropriate and how, used together, they complement one another to provide the most comprehensive UI test coverage.
Six Tips for Selecting Automated Test Tools for Agile Development
Software consultant Nari Kannan describes the differences between agile application lifecycle management tools and traditional lifecycle management software. Agile ALM tools are more tightly integrated, easier to use, support distributed teams and seamlessly integrate many of the traditional lifecycle management functions.
Promoting Agile internally was the topic at last week’s Boulder Agile Software Meetup Group.
[kml_flashembed movie="http://www.youtube.com/v/5gK5Nq40n3Y" width="425" height="350" wmode="transparent" /]
In this video, Matt Weir speaks about transitioning to Agile and how he learned the most from the skeptics. He found that by listening and working together to address concerns, the skeptics became his best allies in the transition.
Want to read more about the challenges of gaining buy-in when your team is transitioning to Agile and how those are being addressed? Check out Real world Agile: Gaining internal acceptance of Agile methodologies on SearchSoftwareQuality.com.