Mobility testing and the challenges that come with it have become a frequent topic of discussion among software testers. An SSQ reader asked our mobility test expert, Karen Johnson, “How can I possibly test on all the different phones available?” Johnson answers, “It is pretty well impossible to test all the devices, so it’s even more important to have a strategy to maximize the time you have.” While Johnson offers some valid test strategies, there is another alternative to consider: crowdsourcing.
At the recent SQuAD conference, Lee Copeland mentioned in his keynote that crowdsourcing was a new trend in the industry. Copeland suggested crowdsourcing as a source of education for testers: by doing some freelance testing for crowdsourcing groups that specialize in testing, such as uTest or Mob4Hire, testers can keep their skills sharp.
Crowdsourcing is not only good for testers, though; it also gives organizations an alternative for getting some quick testing done, and that can be especially helpful with mobility testing.
Today, crowdsource test vendor uTest announced uTest Express, a service for startups and small businesses to test their mobile applications. uTest Express allows clients to identify the geography, number of testers, device models and carriers on which they want their mobile applications tested. Drawing upon its community of more than 35,000 testers, uTest can execute “in the wild” tests using actual devices in the needed locations, rather than depending on emulators, simulators or remote-access testing. The client is then provided with bug reports that include screenshots and video, as well as feedback on performance and usability.
Mob4Hire is another crowdsource test vendor specializing in mobility testing. Those who want mobile applications tested are matched to testers (or “mobsters”) who have the appropriate mobile devices and want to test.
A lot of people think of ALM as a “heavy-weight” enterprise toolset. In this short video clip, Mik Kersten talks about the variety of ALM tools available, including some that are open source and “light-weight.” “What we’ve learned in the last five years is open source ALM tools speak very well to the developers. We just have to get them connected to the enterprise ALM tools,” says Kersten.
I’ve been writing about my experiences at last week’s SQuAD (Software Quality Association of Denver) Conference, but the presentation I enjoyed the most was not about tools, techniques or the technologies that we work with as QA professionals. It was about the people.
Michael Bolton introduces himself as “not the singer and not the guy from ‘Office Space.'” He’s Michael Bolton, the software tester, and quite the celebrity himself in SQA circles. Bolton delivered the afternoon keynote at the March 10th SQuAD conference, emphasizing the diversity of backgrounds and unique aspects of personality that will ultimately help each tester grow in his or her career.
The keynote started with a faulty microphone, and Bolton demonstrated his point early on: his background in theater allowed him to project his voice so he could be heard clearly in the crowded auditorium.
He quoted Jerry Weinberg as saying, “Quality is not a thing. Quality is ‘value to some person(s)'” and then told us that he and James Bach add the two words: “who matter.” Quality depends on people. It is about adding value to the customer (or people ‘who matter’). You can have the most bug-free piece of code in the world, but it won’t be of any value if no one uses it or cares about it.
Bolton told us that decisions are not based on numbers or data. They are based on the way the decision-makers feel about the numbers. Again, Bolton is reminding us that we don’t operate like robots, programmed with an algorithm to spit out right answers. We are people with feelings, emotions, backgrounds and experiences, and we operate and make decisions using a mixture of data and gut instinct.
“Testing is an investigation of code, systems, people, and the relationships between them,” says Bolton.
Bolton’s message is similar to the message we heard from James Bach at last year’s Star East conference. Recognize your background, your unique experiences and talents. Realize how that uniqueness — that one-of-a-kind person that you are — adds strength and value to your work. A robot or computer can follow repeatable scripts. Demonstrate your skills beyond blindly following a set of steps in a test case. Use your mind, intuition and experience to add value and provide service to people ‘who matter.’
Thursday’s keynote at last week’s SQuAD (Software Quality Association of Denver) conference was delivered by well-known industry leader, Lee Copeland. The theme of the conference was Testing Concepts and Innovations and Copeland kept that in mind with his presentation titled, Today’s Testing Innovations.
Copeland described innovations in five key areas: Process, Agile, Education, Technology & Tools and Process Improvements.
He started by talking about the “context-driven” school, in which it is understood that testers provide a service effective for their particular project. Different test teams will have different missions. With this school of thought, there are no “best practices.” What may be a best practice in one context may not be best in another. “We get to use our brains,” Copeland exclaimed, encouraging testers to figure out their own best practices given the context. “What is the most effective practice right now in this situation?”
In talking about exploratory testing, Copeland quoted James Bach as saying, “The classical approach to test design is like playing 20-questions by writing out all the questions in advance.” Copeland demonstrated by putting the audience through a round of 20-questions. The audience asked questions. “Animal?” No. “Vegetable?” Yes. “Green?” Yes. After about ten questions the audience figured out what he was thinking of: Spinach.
However, it was in using the answers already given that the audience was able to narrow down so quickly what was in Copeland’s head. If the audience were to just write out a bunch of questions first, without hearing any answers, it’s highly unlikely that anyone would have figured out what Copeland had been thinking.
Similarly, when we plan all our tests up front, as often is done with the “classical approach,” we don’t have the benefit of using our findings to help us dig deeper. With exploratory testing, we rely on the knowledge we gain from each test and are able to continue to narrow down our focus to discover the problem areas in an application.
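The 20-questions analogy can be made concrete with a toy sketch (my own illustration, not from Copeland's talk): when each yes/no answer is allowed to guide the next question, the search space halves every round, which is exactly the advantage exploratory testing has over a fixed, pre-written script.

```python
def adaptive_guess(secret, low=0, high=1023):
    """Find a secret number with yes/no questions, using each answer
    to choose the next question (binary search over the range)."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        # "Is it <= mid?" -- the answer narrows the range for the next question
        if secret <= mid:
            high = mid
        else:
            low = mid + 1
    return low, questions

guess, asked = adaptive_guess(700)
print(guess, asked)  # -> 700 10: found in 10 questions (2**10 = 1024 candidates)
```

A pre-written list of ten questions, fixed before hearing any answers, could distinguish only ten specific possibilities out of 1,024; the adaptive version pins down any of them, because each answer feeds the next question.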
Copeland was inspiring as he encouraged the audience to continue to learn and grow in the field and to use the many resources (such as SearchSoftwareQuality.com!) that are available.
I’ve had a couple of opportunities now to do some workshops with embedded software test guru, Jon Hagar. Most recently at last week’s SQuAD conference, we took a look at testing a hand-held 20-questions game.
Hagar had us look at several creative methods for determining what to test. We are all familiar with traditional black-box testing, where we test against user requirements or white-box testing, where we look at code, and test the various code paths. But some of the techniques Hagar described are what I’m beginning to think of as “out-of-the-box testing.”
He had us look at things such as risks and users. What were risky things that could happen to the user? For example, if the user is a small child, could there be a risk that a battery might be swallowed? This may be why changing the battery required a screwdriver. I’ve always thought this was very cumbersome, and in the past, whenever a game required a screwdriver to change its battery, I would have been prone to thinking of it as “poorly designed.” However, recognizing the potential risk to a small child helped me realize that the battery compartment may be purposely difficult to open for safety reasons.
Still, I couldn’t help but question if some of the areas we were exploring were outside the scope of what a tester should be looking at. I challenged Hagar, “Shouldn’t looking at areas of safety and usability be done by product owners or business analysts?” As a developer, I remember getting quite annoyed with testers who would report bugs on areas that were completely out of my control, wishing they would just stick with testing the code.
Hagar answered that it’s true we don’t want testers to go down a rat hole or become obsessed with questions of usability, but that it’s good to at least ask the questions. Bring up areas of concern that perhaps were overlooked by others. If the answer is, “Don’t worry about it,” the tester has to be able to let it go, but thorough testers will explore areas outside typical boundaries.
To read more about embedded software quality and the workshops with Jon Hagar see:
I’ve been attending the Denver SQuAD conference this week, including a half-day QA Leadership Summit, facilitated by Michelle Rocke and Bev Berry.
The summit was attended by QA leaders and managers discussing some of the challenges they face. The changing role of the tester, challenges around Agile adoption, and gathering QA metrics to help with business decisions were just a few of the topics that opened up some lively conversation.
An hour of the afternoon was spent on the concept of branding. The question was posed: What is your QA Brand? How do other groups view the QA group? As a scapegoat? A hero? Is the QA team viewed as a bottleneck or a valuable partner?
As leaders, QA managers are responsible for defining, building and marketing their brand. There should be clarity, consistency and unity in your message. Understand the perceptions and expectations of the groups you interact with. Work with them to understand why they may have the perceptions they do and to ensure that your message and brand are well communicated.
Boy, do I feel lucky to be living in the Denver area. Not only do I get to attend monthly meetings of the SQuAD (Software Quality Association of Denver), but today started the two-day SQuAD conference! The morning kicked off with opening remarks by Melissa Tondi, followed by two half-day workshops.
I attended Jon Hagar’s morning session about creating attacks for embedded software. I’d heard Hagar speak about embedded software before and was interested in learning more. This time we had some hands-on time testing a 20-questions hand-held game.
“When you’re thrown into a test situation, where do you start?” asked Hagar as he handed us the hand-held games and told us to “test.” When I asked if there was any documentation available, Hagar gave me a thumbs up, though he didn’t seem overly impressed by my quick response, teasing, “You’ve heard me speak before!” He gave us the user instructions, but talked to us about ways of testing that would go beyond simply “happy path” testing using documentation.
For the afternoon session, I attended the QA Leadership Summit led by Bev Berry and Michelle Rocke. Several questions were posed to the group which spawned discussions including topics of Agile adoption, challenges QA managers face, the changing roles of testers, and quality metrics used to make decisions.
The conference continues tomorrow with morning keynotes by Lee Copeland and Michael Bolton, followed by a variety of presentations by industry leaders. If you can’t attend, follow the conference on Twitter with #squadco and stay tuned as I report back with more video and reports later this week.
At a recent Software Quality Association of Denver (SQuAD) meeting, Jon Hagar gave a presentation about testing embedded software and demonstrated his take on this emerging skill with an exercise entitled, “Attack of the Killer Robots.”
There are certain aspects that must be taken into account with embedded software, such as timing and integration with the hardware, that need to be “attacked,” as Hagar says. Hagar also explained that when you are testing embedded software, the environment, tools and testing methodologies may be quite different from those used when testing traditional software that runs on a computer.
Read more about testing embedded software and the challenge Hagar gave us in the story, Embedded software test: Attack of the killer robots.
Jon Hagar will be giving a half-day workshop, How to Break Handheld and Embedded Software, at SQuAD’s 2011 Conference taking place March 9-10 in Denver.
It’s conference season again, and I’m looking forward to attending a couple of software quality conferences right here in the Denver area that will be coming up soon. The Software Quality Association of Denver (SQuAD) conference will be March 9-10, and Mile High Agile is April 7th. I’ll get to venture a bit farther for STAREast in Orlando in May. However, I have not been able to get out of the US for a software conference yet.
Fortunately, SSQ contributor and Agile expert Lisa Crispin was able to attend Belgium Testing Days in mid-February and report back on the experience. Not only did she share about several of the presentations she attended, but she also noted some of the cultural differences between the Belgium conference and those she has attended in the US. She writes:
Maybe it’s the fact that cultures mix more frequently in Europe, or that their software industry seems quite progressive, but attending Belgium Testing Days was a new and rewarding experience for me.
Other news from across the ocean includes the expansion of the Software Quality Systems (SQS) facility in Belfast. In an interview with Rob McConnell, SQS Regional Director for Northern Ireland, I learned that the expansion is due to increased demand for “near-shore” outsourcing solutions from US and European clients.
This month we’ve explored automation throughout the application lifecycle, and our contributors have revealed that the role of the software tester is changing.
In his tip, Is automated testing replacing the software tester?, software consultant David Johnson discusses how both Agile methodologies and automation contribute to the changing responsibilities of the software tester.
There are several forces responsible for the changing role of testers, at least for on-shore testing resources. These include:
- Agile development techniques that integrate test automation into the development process.
- Mature test automation tools that simplify test creation, automation and test execution.
In both cases the need for testing has not decreased, but the responsibility has shifted to either development (i.e., TDD) or functional test automation, and the resource model for traditional testers is significantly reduced, if not eliminated.
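To make the TDD shift concrete, here is a minimal sketch of what developer-owned testing looks like, using Python's built-in unittest module. The function and its behavior are illustrative inventions, not anything from Johnson's tip; in TDD, the test class would be written first and the function implemented to make it pass.

```python
import unittest

def normalize_username(raw):
    """Trim whitespace and lowercase a username (hypothetical function
    under test -- in TDD, written *after* the tests below)."""
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_already_normalized(self):
        self.assertEqual(normalize_username("bob"), "bob")
```

Because checks like these run automatically on every build, the verification work that a traditional tester once did by hand moves upstream into the development process itself.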
So what skills do traditional testers need to deal with this transition?
In a recent expert response, Lisa Crispin answers the question: What kind of automation skills should a tester have and what’s the best way to get them? Check out her advice on ways that test engineers can build their knowledge and skills with test automation.