Today Gorilla Logic announced the release of a new open source automation test tool, FoneMonkey 5, for testing iPhone and iPad applications.
I spoke with Stu Stern, president and CEO of Gorilla Logic, who co-founded the company in 2002. A former Sun Microsystems executive, Stern says Gorilla Logic, primarily an Agile development consulting firm, brings an executive perspective and rigorous process to its clients.
“One of the challenges was finding good testing tools for rich applications,” says Stern when talking about the creation of FlexMonkey, their first open source automated test tool for Flex applications.
For more information on mobile application testing, check out these recent SearchSoftwareQuality articles: Tips for application testing on mobile devices and Defining a strategy for testing mobile devices.
uTest is known for its crowdsource testing services, but it is now expanding its market to online dating! Today the company announced QADate, a free online dating service for software testers. Though the site is designed specifically with testers in mind, all technosexuals familiar with software development will enjoy the features offered. From the press release:
Considering that uTest has a community of more than 37,000 QA professionals worldwide, and broad experience matching skilled testers with leading customers like Google, Microsoft, and AOL, this is a natural, inevitable—and perhaps obvious—step in the company’s growth strategy.
QA professionals certainly understand the importance of validating requirements, so there is no doubt that before embarking on a date, they’ll check for compatibility. And I would venture to guess there will be some serious questions about performance and security before any connections are made.
The site, unlike other dating sites, allows users to state their testing preferences. Exploratory testers will undoubtedly be thrilled to find matches who will totally get it when they ask for an IP address and understand questions about their use of anti-virus software. Testers know the importance of a safe connection!
Another unique feature is the ability to track the bugs you find with your date, classifying them from priority one problems (major issues such as foul breath and body odor) down to priority four problems (an accidental burp). Of course, usability testing is subjective, but the QADate community will appreciate the transparency, allowing everyone the opportunity to assess feedback received as dates share the bugs and issues found. It allows for continuous improvement until that perfect connection is made.
How can I possibly test all mobile devices? Try crowdsourcing
uTest releases new apps for the iPhone and iPad
Crowdsource specialist uTest launching new performance, load test offerings
Crowd meets cloud: uTest and SOASTA announce partnership
Agile environments encourage and embrace requirements changes. However, knowing how to effectively manage those changes can be a huge challenge. In March, SearchSoftwareQuality focused on tips from experts about requirements management. In this series of articles we look specifically at managing requirements in Agile environments.
Agile requirements: A conversation with author Dean Leffingwell, part 1
Dean Leffingwell, author of Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise, talks about the differences between Agile and traditional requirements practices and gives advice on what to look for in Agile requirements tools.
Requirements in Scrum environments: Q&A with Dean Leffingwell, part 2
Dean Leffingwell, author of Agile Software Requirements – Lean Requirements Practices for Teams, Programs, and the Enterprise, answers questions about requirements management in Scrum environments.
The value of visible requirements
Chris McMahon describes the experience of migrating requirements data from a difficult-to-use tool to a whiteboard that clearly displays requirements and status.
Getting on the same page: How testers can help clarify requirements
Agile expert Lisa Crispin gives helpful advice to testers on helping to clarify requirements. Programmers, testers and business experts must work together to ensure requirements are well-understood.
Want more? Dean Leffingwell will be presenting at the Virtual Trade Show on April 27th: Beating Key ALM Challenges.
Conference season is in full swing and those of you who are Agile enthusiasts will want to mark your calendars for three upcoming Agile conferences.
On April 7th (that’s just around the corner), Denver is hosting The Mile High Agile Conference. The focus of this one-day event is on “elevating agility.” Promo code “RALLYSAVER30” saves 30% on registration, thanks to Rally Software. Jean Tabaka, of Rally Software, will be delivering the keynote. The conference program includes 50 speakers in these four categories: Adopting and transitioning to Agile, Technical practices and tools, Agile product and project practices, and Executive and enterprise agility.
Agile Development Practices West will be held June 5-10 in Las Vegas, Nevada. Keynote speakers include industry leaders Geoff Bellman, Martin Fowler, Linda Rising and Bob Galen. The conference touts a full lineup of tutorials, workshops, presentations and interactive sessions. Register by April 8th to receive the Super Early Bird Discount.
The annual conference sponsored by Agile Alliance, Agile2011, will be held August 8-12 in Salt Lake City, Utah. This conference promises to be special as it reunites almost all of the 17 signatories of the Agile Manifesto 10 years after they gathered near Salt Lake City and crafted the creed. Keynote speakers include Dr. Barbara Fredrickson, Kevlin Henney and Linda Rising.
We have read and heard about the value of automated testing in Agile environments. We also hear a lot about the importance of exploratory testing, a technique that is purely manual, in Agile environments. Are these two recommendations in conflict with one another?
Most experts agree that both automated and manual testing are important, but depending on the type of testing you’re doing, one may be the better choice. In Agile Testing, SSQ’s Agile expert Lisa Crispin and her co-author Janet Gregory describe four quadrants of testing.
Quadrant one encompasses the automated test efforts including unit tests and component tests. These are typically done by the developer, perhaps using test-driven development (TDD) even before the code is written.
Quadrant two is testing that may be automated or performed manually by developers and testers. This includes functional tests, examples, story tests, prototypes and simulations.
Quadrant three contains manual tests including exploratory testing, usability testing, user acceptance testing and alpha/beta testing.
Quadrant four includes specialty testing that is typically done with tools, such as performance testing and load testing.
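As a small illustration of the quadrant-one, test-first style mentioned above, here is a hedged sketch using Python’s unittest module. The function, its name and its behavior are hypothetical examples, not taken from Crispin and Gregory’s book; in TDD the test class would be written first and would fail until the function exists.

```python
import unittest

def total_price(items, tax_rate):
    """Hypothetical function under test: sum item prices, apply tax."""
    subtotal = sum(items)
    return round(subtotal * (1 + tax_rate), 2)

class TotalPriceTest(unittest.TestCase):
    """In TDD, these tests exist before total_price() is implemented."""

    def test_applies_tax_to_subtotal(self):
        self.assertEqual(total_price([10.00, 5.00], 0.10), 16.50)

    def test_empty_cart_costs_nothing(self):
        self.assertEqual(total_price([], 0.10), 0.00)
```

Run with `python -m unittest` to see the red/green cycle: the tests fail while the function is a stub, then pass once the minimal implementation is in place.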
Steffan Surdek will cover more about the use of automation in testing and throughout the application lifecycle in our upcoming free virtual trade show, Beating Key ALM Challenges, to be held April 27th.
Requirements management has been said to be the most challenging part of software development, often because of miscommunication between the business owners and the development team. This month SearchSoftwareQuality has been providing a series of tips on improving cross-organizational communication and collaboration so that requirements are clear and tracked throughout the application lifecycle.
Trends in ALM: Requirements management tools
In this interview with Forrester analyst Mary Gerush, we hear about five important ALM trends in requirements management tools and explore questions organizations may want to consider when selecting a requirements management tool.
Seven steps for tracking requirements throughout a software release
In this article, Kay Diller shows you in seven simple steps how to develop, document, track and test business requirements throughout a release.
Business requirements: Five steps to exceed business expectations
Kay Diller explains five ideas to help business partners and developers work together to ensure a release that is not only acceptable, but exceeds everyone’s expectations.
Requirements tips for data-centric projects
In this tip, requirements expert Sue Burk describes strategies for working with the business to understand data usage with detailed scenarios, allowing your data-centric projects to be designed right the first time.
Mobility testing and the challenges that come along with it have become a frequent topic of discussion among software testers. An SSQ reader asked our mobility test expert, Karen Johnson, “How can I possibly test on all the different phones available?” Johnson answers, “It is pretty well impossible to test all the devices, so it’s even more important to have a strategy to maximize the time you have.” While Johnson gives some valid test strategies, there is another alternative to consider: crowdsourcing.
At the recent SQuAD conference, Lee Copeland mentioned in his keynote that crowdsourcing was a new trend in the industry. Copeland suggested crowdsourcing as a source of education for testers: by doing freelance testing for crowdsource groups that specialize in testing, such as uTest or Mob4Hire, testers are able to keep their skills strong.
Crowdsourcing is not only good for testers, though; it also gives organizations an alternative for getting quick testing done, and that can definitely be helpful with mobility testing.
Today, crowdsource test vendor uTest announced uTest Express, a service for startups and small businesses to test their mobile applications. uTest Express allows clients to specify the geography, number of testers, device models and carriers on which they want their mobile applications tested. Drawing upon its community of more than 35,000 testers, uTest is able to execute “in the wild” tests using actual devices in the needed locations, rather than depending on emulators, simulators or remote access testing. The client is then provided with bug reports including screenshots and video, as well as feedback on performance and usability.
Mob4Hire is another crowdsource test vendor specializing in mobility testing. Those who want mobile applications tested are matched to testers (or “mobsters”) who have the appropriate mobile devices and want to test.
A lot of people think of ALM as a “heavy-weight” enterprise toolset. In this short video clip, Mik Kersten talks about the variety of ALM tools available, including some that are open source and “light-weight.” “What we’ve learned in the last five years is open source ALM tools speak very well to the developers. We just have to get them connected to the enterprise ALM tools,” says Kersten.
I’ve been writing about my experiences at last week’s SQuAD (Software Quality Association of Denver) Conference, but the presentation I enjoyed the most was not about tools, techniques or the technologies that we work with as QA professionals. It was about the people.
Michael Bolton introduces himself as “not the singer and not the guy from ‘Office Space.'” He’s Michael Bolton, the software tester, and quite the celebrity himself, in SQA circles. Bolton delivered the afternoon keynote at the March 10th SQuAD conference, emphasizing the diversity of backgrounds and unique aspects of personality which will ultimately help each tester grow in his or her career.
The keynote started with a faulty microphone and Bolton demonstrated his point early on by showing us that his background in theater would allow him to project his voice so he could be well-heard in the crowded auditorium.
He quoted Jerry Weinberg as saying, “Quality is not a thing. Quality is ‘value to some person(s)'” and then told us that he and James Bach add the two words: “who matter.” Quality depends on people. It is about adding value to the customer (or people ‘who matter’). You can have the most bug-free piece of code in the world, but it won’t be of any value if no one uses it or cares about it.
Bolton told us that decisions are not based on numbers or data. They are based on the way the decision-makers feel about the numbers. Again, Bolton is reminding us that we don’t operate like a robot, programmed with an algorithm to spit out right answers. We are people with feelings, emotions, backgrounds and experiences and we operate and make decisions using a mixture of data and gut instincts.
“Testing is an investigation of code, systems, people, and the relationships between them,” says Bolton.
Bolton’s message is similar to the message we heard from James Bach at last year’s Star East conference. Recognize your background, your unique experiences and talents. Realize how that uniqueness — that one-of-a-kind person that you are — adds strength and value to your work. A robot or computer can follow repeatable scripts. Demonstrate your skills beyond blindly following a set of steps in a test case. Use your mind, intuition and experience to add value and provide service to people ‘who matter.’
Thursday’s keynote at last week’s SQuAD (Software Quality Association of Denver) conference was delivered by well-known industry leader, Lee Copeland. The theme of the conference was Testing Concepts and Innovations and Copeland kept that in mind with his presentation titled, Today’s Testing Innovations.
Copeland described innovations in five key areas: Process, Agile, Education, Technology & Tools and Process Improvements.
He started by talking about the “context-driven” school, in which testers understand that they provide a service shaped by their particular project. Different test teams will have different missions. In this school of thought, there are no “best practices”: what may be a best practice in one context may not be best in another. “We get to use our brains,” Copeland exclaimed, encouraging testers to figure out their own best practices given the context. “What is the most effective practice right now in this situation?”
In talking about exploratory testing, Copeland quoted James Bach as saying, “The classical approach to test design is like playing 20-questions by writing out all the questions in advance.” Copeland demonstrated by putting the audience through a round of 20-questions. The audience asked questions. “Animal?” No. “Vegetable?” Yes. “Green?” Yes. After about ten questions the audience figured out what he was thinking of: Spinach.
It was by using the answers already given that the audience was able to narrow down so quickly what was in Copeland’s head. If the audience were to just write out a bunch of questions first, without hearing any answers, it’s highly unlikely that anyone would have figured out what Copeland was thinking.
Similarly, when we plan all our tests up front, as often is done with the “classical approach,” we don’t have the benefit of using our findings to help us dig deeper. With exploratory testing, we rely on the knowledge we gain from each test and are able to continue to narrow down our focus to discover the problem areas in an application.
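Copeland’s 20-questions point maps neatly onto adaptive search. A minimal sketch (the number range and secret value are illustrative, not from the talk): when each yes/no question uses the previous answers, a secret number between 1 and 1,000 falls to about ten questions, because every answer halves the remaining possibilities, just as each exploratory test narrows where the bugs can hide.

```python
def adaptive_questions(secret, low=1, high=1000):
    """Guess a number by binary search: each question ('is it above
    the midpoint?') is chosen based on all the answers so far."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret > mid:       # a "yes" answer...
            low = mid + 1      # ...discards the lower half
        else:                  # a "no" answer...
            high = mid         # ...discards the upper half
    return low, questions

# The answer to each question reshapes the next one, so ~log2(1000),
# i.e. ten questions, pin down the secret exactly.
guess, asked = adaptive_questions(secret=742)
```

A scripted question list written entirely in advance cannot react this way when the territory is unknown, which is the analogy’s point: exploratory tests, like adaptive questions, spend each step where the previous answers say it will count most.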
Copeland was inspiring as he encouraged the audience to continue to learn and grow in the field and to use the many resources (such as SearchSoftwareQuality.com!) that are available.