March 1, 2010 7:48 PM
Posted by: Yvette Francino
Last week, I was able to attend a trade show, network with performance test experts, learn about some new performance test tools and hear some very informative presentations — all free and from the comfort of my den. How did I do it? Well, virtually, of course.
SearchSoftwareQuality hosted Build, Bug, and Lifecycle Strategies, a virtual trade show focused on tools, strategies and techniques for executing effective application performance management. There were even prizes given out for visitor participation.
The last virtual seminar I’d attended was hosted by Sun and held in Second Life. Now, I’m not one to knock new technologies, but I have to tell you, I got a little freaked out when I started flying around in Second Life without really knowing how to navigate. I finally made it to the conference room to listen to Scott McNealy’s keynote and couldn’t quite figure out how to sit down. Then I saw some poor guy’s avatar wandering aimlessly across the virtual stage in front of the virtual Scott McNealy. “Someone better help that guy off the stage,” quipped the “live” McNealy from somewhere in cyberspace. Yes, Second Life can take a little getting used to.
The software used for the APM Virtual Trade Show was much more intuitive, and everything worked seamlessly. I’m often confused in real life, so I was a bit concerned about running around in a virtual world with no training. However, despite my typical navigational challenges, I had no trouble hanging out in the Networking Lounge while simultaneously listening to the various presentations and webcasts where experts were enlightening us with advice on performance and load testing.
If you missed the virtual seminar, don’t worry. All presentations will be available for the next month at the following link: Application Performance Management Virtual Seminar: Build, Bug and Lifecycle Strategies.
And to give you very early notice, SearchSoftwareQuality’s next virtual trade show will be about Application Lifecycle Management and is planned for May 19th.
So no need to learn to virtually fly or create a fancy avatar. Enjoy the virtual trade show and plan to join us at the next trade show on May 19th.
February 24, 2010 6:24 PM
Posted by: Yvette Francino
The other day I posted a blog entitled Methodology Wars: Agile or waterfall? Much to my delight, it has generated comments from both camps, as well as from people who fall in between, thinking a middle-of-the-road approach is doable.
I ran into Agile guru Lisa Crispin at the Denver Agile User Group meeting the other night, so I took the opportunity to ask for her opinion on this matter. Obviously, Lisa is a proponent of agile methodologies, but I wanted to know if she thought there were times when agile might not be the best methodology for a software project.
Here’s how she answered:
[kml_flashembed movie="http://www.youtube.com/v/EpkYza3So5s" width="425" height="350" wmode="transparent" /]
Crispin and Janet Gregory co-authored the book, “Agile Testing: A Practical Guide for Testers and Agile Teams.” Crispin will be presenting at the StarEast conference in Orlando, April 26-30. Check out this quick preview of what you can expect to see if you’re able to attend one of her sessions:
[kml_flashembed movie="http://www.youtube.com/v/LUYGFKpBFc0" width="425" height="350" wmode="transparent" /]
I’m looking forward to seeing her there!
February 23, 2010 8:35 PM
Posted by: Yvette Francino
uTest, known for their crowdsourced approach to functional testing, is adding load and performance testing to their offerings. SearchSoftwareQuality got wind of uTest’s news, to be formally announced on Wednesday, February 24, and spoke with Vice President of Marketing and Community Matt Johnston. “We’ll be offering three flavors to our customers: Live Load, Simulated Load, and Hybrid Load,” said Johnston.
Johnston explained that Live Load would entail coordinating with their global team of testers to simultaneously test the system. Simulated Load will use test tools designed to simulate load on a system. Hybrid Load will do both: tools will generate simulated load while testers simultaneously perform functional testing.
“There are certain bugs that only reveal themselves while your application is under load,” Johnston explained. Applications that use Flash or streaming video, for example, need to be checked for audio and video quality while the application is experiencing heavy traffic.
Currently these performance test offerings are primarily for Web-based applications because that’s where demand is greatest, but uTest is willing to dig in and customize performance test efforts for customers with other needs. At some point, Johnston thinks there may be additional interest in the mobile market as it continues to mature, but right now Web-based performance testing is their biggest market.
According to Johnston, uTest’s competitors in the performance test arena are not other groups that offer crowdsource services, but the vendors and consultants that specialize in performance test tools, such as HP’s LoadRunner. Where uTest differs from consultants who specialize in particular tools is that, thanks to the uTest crowdsource model, it has access to a vast array of test tools and performance test experts.
“uTest has over 23,000 testers spread across 163 countries,” Johnston told us. Being a uTest member myself, I reminded Johnston that many of the 23,000 testers were inactive, and Johnston agreed. In any online community the typical makeup is 90 percent inactive (“lurkers”), nine percent active and one percent hyperactive. With uTest, Johnston said the spread is more like 70 percent inactive, 27 percent active and three percent hyperactive, so uTest is a more active community than most.
All uTest testers fill out a personal profile with information about their skill sets, locations and the technologies and tools to which they have access. This information helps uTest match people with the right skillset to the clients. Again, being a member myself — albeit a self-proclaimed “lurker” — I can well attest to uTest’s active community. Even though I rarely sign up to test, I have found the site one that actively encourages networking and professional development.
I asked Johnston about the pricing model for performance test, knowing that when I was a performance test manager at Sun, LoadRunner consultants were very expensive. Johnston said that the price will vary depending on the client’s needs. Though they follow the market and will charge more for expertise in the competitively-priced tools, overall, using uTest will give the client a cost advantage. Due to the wide array of testers, skills, and tools available, there is flexibility in what can be done and the client isn’t locked in to any high-priced contracts.
uTest CEO, Doron Reuveni, will be presenting at the upcoming StarEast conference on April 29th. My SSQ colleague, Dan Mondello, and I will be at the conference and plan to talk with Reuveni and catch up further with Johnston there. So stay tuned for more news and information on crowdsource testing.
February 23, 2010 8:03 PM
Posted by: Yvette Francino
This week CollabNet made a move to strengthen its Agile application lifecycle management (ALM) line by acquiring ScrumWorks creator Danube. CollabNet’s ALM products, TeamForge and Subversion, and Danube’s popular Agile PM product, ScrumWorks, will continue to be offered separately. However, plans are in place to integrate these products in Q2, creating a solution that will offer customers both distributed ALM and agile project management.
I just met with Bill Portelli, CollabNet CEO, and Victor Szalvay, now CTO of CollabNet’s Scrum Business Unit. Before I share some of our conversation, here’s a little more about the products involved. TeamForge is an ALM platform designed for distributed teams. It includes a series of modules that can be used throughout the software development lifecycle and is methodology-agnostic. It is built around the popular Subversion source code management tool, an open source tool founded by CollabNet. Danube’s ScrumWorks is a popular Scrum project and program management tool, with over 150,000 users.
I asked Portelli and Szalvay how their products compared with offerings from ThoughtWorks and Rally. Though those tools can integrate with a variety of other ALM products using APIs, they said, CollabNet will offer a package of distributed ALM tools, allowing more flexibility for organizations that are adopting agile with caution. CollabNet’s ALM products are methodology-agnostic and allow for a “slow step-wise approach and on-ramp towards agility,” said Portelli. The tools are Web-based and allow for technology independence as well, working equally well whether your code base is Java, .NET or something entirely different. The tools are also scalable, working well for large enterprise projects. CollabNet’s product line will help address two issues that have typically been difficult barriers for agile development: distance collaboration and large-scale development.
Distributed agile is a particular interest of mine since I manage a professional development network myself, Beyond Certification, for those who want to learn more about using agile practices in distributed settings.
So, I asked Portelli and Szalvay whether agile purists might feel that agile methodologies were being compromised by bringing in tools that manage processes that might be considered more traditional. “One of the exciting things is that we have not had to compromise the purity,” Szalvay said. Customers will continue to have the option of using only ScrumWorks as a standalone product. However, for those that are transitioning to agile or using hybrid methodologies, it looks like CollabNet will have a fully integrated ALM solution.
Portelli and Szalvay believe that their customers — both the execs and the developers — are pleased with the acquisition. They’ve been asking for a more “flexible and mixed methodology” environment. “Our goal is to win the hearts and minds of developers. Evangelists will see the reality of allowing to onramp people into agile. [Agile] Gurus are positive and congratulatory.”
How are the people at Danube feeling about the acquisition? I found this post by Lyssa Adkins from Danube: Is your Scrum team a dinghy or an ocean liner? She uses the metaphor of sailing the ocean, asking teams whether they’d rather be a dinghy, hanging on for dear life in turbulent waters, or an ocean liner, in control.
Oftentimes, reflecting on themselves through metaphor and not thinking (feeling instead) turns up the resonance for the team. From here, they can see things about themselves they didn’t see before. Better yet, with the metaphor in mind and the focus on the future the metaphor often brings, they can see new possibilities for being better than they are today.
It appears that CollabNet and Danube are joining to become an ocean liner and preparing to sail the Seven Seas.
February 18, 2010 5:23 PM
Posted by: Yvette Francino
There has been a recent flurry of events in the Denver area catering to the agile enthusiast. This is the second event in a week where I’ve been able to personally meet with nationally-known agile leaders. I’m going to have to start carrying around an autograph book!
Last Wednesday I met Mike Cohn at the local Java User Group meeting. Then last night I attended a meeting of the Agile Denver user group, which featured Selenium creator and co-founder of Sauce Labs, Jason Huggins. Huggins and Sauce Labs were the topic of a recent SearchSoftwareQuality news story, Sauce Labs adds business value to Selenium testing with IDE.
In a panel discussion about automation and the use of Selenium, panelists included SSQ contributor Chris McMahon; fellow agile experts Matt Raible, Scott Allman, Thomas Albright and Joe Yakich; and, of course, Huggins. The first question they tackled was, “Do you think UI automation test tools are snake oil?” The question was prompted by the controversial blog post by Michael Feathers claiming user interface (UI) automation does not deliver on its promises. Panelists discussed using UI automation testing tools the right way, which means testing navigation, not business logic.
The panelists also debated the return on investment of automation versus performing techniques such as exploratory testing manually. Most felt that automation could be used effectively in conjunction with exploratory testing, and that automation could handle such functions as test setup or the recreation of defects.
In another session, Huggins gave an overview of Selenium, the popular open source tool providing platform-independent automation testing. He suggested, among other things, taking advantage of Selenium Grid to execute tests much more quickly in parallel on multiple machines, rather than executing them serially on limited hardware.
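If you haven’t seen Selenium test code, here’s a minimal sketch of the kind of navigation-focused check the panel endorsed, written against the Selenium 1 (Remote Control) Java client that was current at the time. The site URL and the “Products” link are hypothetical, and the sketch assumes a Selenium RC server is already running locally:

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class NavigationSmokeTest {
    public static void main(String[] args) {
        // Assumes a Selenium RC server is listening on localhost:4444;
        // the target site and link text below are made up for illustration.
        Selenium selenium = new DefaultSelenium(
                "localhost", 4444, "*firefox", "http://example.com/");
        selenium.start();
        try {
            selenium.open("/");
            // Test navigation, not business logic: follow a link
            // and confirm we landed on the expected page.
            selenium.click("link=Products");
            selenium.waitForPageToLoad("30000");
            if (!selenium.getTitle().contains("Products")) {
                throw new AssertionError("Navigation to Products page failed");
            }
        } finally {
            selenium.stop();
        }
    }
}
```

Roughly speaking, pointing the same test at a Selenium Grid hub instead of a single local server is what lets Grid fan tests out across many machines at once.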
Huggins also talked about future trends in test automation, Selenium and cloud computing, such as using automation for mobile device testing and the use of automation videos for more than testing. Check out this video in which Huggins describes his views on the future of automation combined with screencasting to create test outputs that can be used for marketing, documentation or training.
[kml_flashembed movie="http://www.youtube.com/v/pDMhB_HXFKc" width="425" height="350" wmode="transparent" /]
I’m convinced test automation can provide a lot more than regression tests, but there are some that are disillusioned by the maintenance costs. What experiences have you had with test automation?
February 17, 2010 10:58 PM
Posted by: Yvette Francino
Estimating software can be tricky. Earlier this week, I’d written a tip about using Planning Poker for estimations in an Agile environment. As I was researching this tool, I found Mike Cohn’s name dominating this space as an authority on Agile estimating techniques. I’m used to finding experts on the Internet that can be anywhere in the world. Little did I know, Cohn lives in neighboring Lafayette, Colorado. I was even more surprised when an email popped into my inbox yesterday that said that Cohn would be speaking this week at the Denver Java User Group (JUG) meeting! How lucky for me!
Cohn, Agile coach, trainer and author of several books, including Agile Estimating and Planning, described using “story points” as units of measure. “People are better at relative estimating than absolute estimating,” he explained. Story points allow for relative measurement, rather than using time or size. In this video clip, Cohn compares using story points to the “T-shirt sizes” method of estimating.
[kml_flashembed movie="http://www.youtube.com/v/BAGain3T-xU" width="425" height="350" wmode="transparent" /]
Cohn warned us against “anchoring.” Anchoring, he explained, is when people hear a number and, even if that number is not relevant, it still influences their estimates. Cohn gave a very interesting example of a team that was asked to estimate a project. A control group, without any outside influence, estimated that the project would take 456 hours. A second group was told that the customer, with no knowledge of the project, thought it should be done in 500 hours, but that this should not influence their estimate. The estimate this group came up with was 555 hours. A third group was told the customer thought it should be done in 50 hours. Again, they were told that this should not factor into their estimate. This group came up with an estimate of 99 hours. Even though the groups were told not to be influenced by the customer’s numbers, the experiment implied that their estimates were skewed toward the numbers given by the customers.
Planning Poker is a technique in which each team member uses cards with a range of numbers to estimate effort. Typically the numbers do not progress incrementally, but spread further apart as they get higher; the Fibonacci series (0, 1, 2, 3, 5, 8, 13, 21, …) can be used for this. The reasoning is that the larger the numbers get, the more uncertainty there is. Cohn gave us each a deck of cards and had us do an exercise in which we were given several tasks and then worked in teams to estimate them using the cards. If we didn’t agree on the first pass, we would explain our reasoning and vote again. In all cases, we were able to reach consensus quickly. Cohn has even made a free planning poker tool available for distributed agile teams.
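To make the widening gaps concrete, here’s a toy Java sketch — my own illustration, not Cohn’s — that snaps a raw gut-feel estimate to the nearest card in a deck. The numeric card values match the decks Cohn handed out:

```java
public class PlanningPokerDeck {
    // A common Planning Poker deck: roughly Fibonacci, with gaps
    // widening as the numbers (and the uncertainty) grow.
    private static final int[] CARDS = {0, 1, 2, 3, 5, 8, 13, 20, 40, 100};

    // Snap a raw gut-feel estimate to the nearest card in the deck.
    public static int nearestCard(double rawEstimate) {
        int best = CARDS[0];
        for (int card : CARDS) {
            if (Math.abs(card - rawEstimate) < Math.abs(best - rawEstimate)) {
                best = card;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // 17 points of "gut feel" becomes the 20-point card: at that
        // size, the difference between 17 and 20 is noise anyway.
        System.out.println(nearestCard(17)); // prints 20
    }
}
```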
Advice from Cohn
I caught Cohn at the end of his presentation for a few minutes and asked him if he had some short words of wisdom that might help teams working on software estimates. He shared this advice for times when the team feels the estimate falls somewhere between two of the numbers on the cards:
[kml_flashembed movie="http://www.youtube.com/v/AkbcuiZbh94" width="425" height="350" wmode="transparent" /]
Cohn gave us each a couple of decks of his Planning Poker cards to go home and play with. His decks include four sets of ?, 0, 1/2, 1, 2, 3, 5, 8, 13, 20, 40, 100 and the infinity symbol. I may not be ready for Vegas, but I am so ready to estimate software. Bring it on!
February 17, 2010 10:06 PM
Posted by: Matt Heusser
Headline: “Google admits Buzz social network testing flaws.” The same headline was covered on MSNBC, The Business Insider, and USA Today. Regarding the mess, software testing authority James Bach said publicly: “These problems with Google Buzz could have been averted using the most basic kinds of critical thinking!”
Here’s the issue in a nutshell: Google Buzz is a new social tool that allows you to share your opinions and status with other users. Because it integrates with Gmail, Buzz knows to whom you send email the most. Buzz also had a feature that automatically picked your “friends” — those with whom you share — out of the box for you. It also made your communication stream public by default.
The automatic friending is no problem for a lot of people; but what if you are having a legal dispute or custody battle, don’t want the boss to know that you’re interviewing with another company, or just have two in-laws that don’t get along very well?
Google Buzz — and, arguably, Microsoft Vista before it — actually represents a unique kind of testing challenge, one we don’t deal with much in the industry. You see, Buzz actually works. No, that is not a typo. Buzz works according to its specifications.
The problem is, the specifications were wrong, objectively wrong, in that they were not in the public’s interest.
I’m certain the folks at Google tested the heck out of Buzz. They probably had test scripts that exercised every possible feature, and it all worked per spec. I bet they had metrics and greenbars all over the process.
But they had the wrong tests.
You could claim this was a product management failure. After all, the software did what it was supposed to do, but “what it was supposed to do” was the wrong thing. It is possible, even likely, that in some meeting a tester or developer asked the question, “Do we really want all this information to be public?” and got an answer that was a resounding “Yes!”
So it might be more fair to say that this was a systems-thinking flaw. It expands the idea of testing beyond “conformance to specification” and into another land, one of “fitness for use.”
A test for fitness for use needs to do more than “requirements traceability”; it needs to ask very tough questions about what the software does and whether it does the right things. Those are questions of quality. While I’m reluctant to blame “poor testing” for a product management decision, certainly more testing might have reduced the risk of a bad rollout. Even the folks at Google agree that, in hindsight, they should have had a larger, more public beta.
As social media continues to become more integrated into our daily lives, and mashup platforms that combine Facebook and Twitter, etc., become more common, I expect we’ll have more testing challenges of this nature.
I wonder what will happen to the profession of software testing in the years to come. Will we bury our heads, claiming, “It worked per spec. It’s a security flaw. Talk to project management”? Or will we expand our view of testing to include critical thinking, an investigative approach that cannot be trivially scripted into clicks that “prove” the specification “works” or easily automated away? Only time will tell.
For right now, we should all buckle up; it’s going to be quite a ride.
February 17, 2010 3:12 PM
Posted by: Yvette Francino
I attend a variety of software and quality user group meetings in the Denver area. One of my favorites is the monthly SQuAD (Software Quality Association of Denver) meeting. There’s always a lot of energy and networking opportunities and great speakers! This month was no exception. Scott Allman, a software quality consultant and my former Sun colleague, was there to discuss the tools he uses to make his job as a software test professional easier.
Though most people think of software automation as a record-and-playback type of tool for functional testing, it encompasses a variety of tools that can be used throughout the testing cycle from test preparation through result reporting. Scott pointed out some automated tools that are easy to use, can be downloaded in a matter of minutes — in some cases, seconds — and, best of all, are free.
XAMPP (X=cross-platform, A=Apache HTTP Server, M=MySQL, P=PHP, P=Perl) automates the download and installation of an open source set of Web development tools: Apache HTTP Server, MySQL, PHP and Perl. Using these tools makes it easy to get a website set up on a single computer and test Web code before a larger rollout.
OpenOffice, formerly StarOffice, is an open source productivity suite of tools, arguably comparable to Microsoft Office. As a former Sun employee myself, I’m very familiar with these tools, which offer full functionality for word processing, spreadsheets, presentations and more. As Allman pointed out, spreadsheets can be used for planning, integrating test data, recording results and a variety of other tasks. If you could use only one of the tools he covered during the meeting, Allman said, use the spreadsheet tool.
FitNesse is an acceptance testing framework that makes testing as easy as defining your input data and expected output data in tables. This allows even people with absolutely no programming experience to create automated tests. FitNesse is popular in agile environments, as it encourages early user involvement and collaboration.
For more information on FitNesse, check out the TwoMinuteExample and this article on using FitNesse and other tools when running user interface, unit and integration tests.
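To give a flavor of how those tables connect to code, here’s a sketch patterned on the TwoMinuteExample: a wiki table (shown in the comment) drives a small Java fixture built on the FIT ColumnFixture class.

```java
// A FitNesse wiki table like this one drives the fixture below.
// Input columns set the public fields; the "quotient?" column
// tells FitNesse to call quotient() and check the returned value:
//
//   !|eg.Division|
//   |numerator|denominator|quotient?|
//   |10       |2          |5        |
//   |12.6     |3          |4.2      |
//
package eg;

import fit.ColumnFixture;

public class Division extends ColumnFixture {
    public double numerator;
    public double denominator;

    public double quotient() {
        return numerator / denominator;
    }
}
```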
In the video below, Allman shows a user how to connect FitNesse to Java code.
[kml_flashembed movie="http://www.youtube.com/v/xnl3Zyk-rXs" width="425" height="350" wmode="transparent" /]
FindBugs is a tool that will analyze Java bytecode and find hundreds of different potential types of bugs. This would be an excellent tool for software developers to use in their unit test efforts, Allman said. If the developers don’t know about the tool, testers can play the hero and run the code through the tool, coming up with a long list of potential issues for the team to explore. Yes, the team will eventually catch on that you’re using a tool, but, at least for the first pass, you might be able to pull off that you’re super-human.
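As a taste of what it catches, here’s a contrived Java class — my own example, not from the presentation — containing two classic patterns FindBugs flags out of the box:

```java
import java.util.Map;

public class BuggyExamples {
    // FindBugs flags comparing strings with == (its
    // ES_COMPARING_STRINGS_WITH_EQ pattern): == compares object
    // references, not string contents; equals() is what's wanted.
    boolean isAdmin(String role) {
        return role == "admin";
    }

    // FindBugs also warns about possible null pointer dereferences
    // (its NP_ family of patterns): get() may return null for an
    // unknown id, and name is dereferenced without a null check.
    int nameLength(Map<String, String> users, String id) {
        String name = users.get(id);
        return name.length();
    }
}
```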
Chainsaw is an open source GUI tool, developed by the Apache Software Foundation, that lets you view and analyze log files. This tool uses color-coding techniques to help you filter through logs, preventing you from ever being “stumped” by a problem in your code, Allman said.
Yahoo! Widgets – ScreenShooter
One tool that every tester must have in order to document unexpected behavior is a screenshot tool. Allman thinks many of the Yahoo! Widgets are nifty additions to the tester’s toolkit. He gave us a quick demo of ScreenShooter, which allows you to copy a full screen, a window or any portion of a screen and quickly create an image file.
TestLink is a test management system that gives you a full repository for all your test documentation and results on the Web. The tool is Web-based, so all your data can be accessed by logging into the system from any computer with Internet access. The tool has reporting capabilities as well, and if you’re the type that really likes hard-copy documentation, you can print your reports in a variety of formats.
Emma is a Java code coverage analyzer. This tool allows you to discover which code paths have been exercised by your test efforts. Another tool that would be extremely useful for white-box testing, Emma can measure and report on Java code at any scale, including large enterprise software development projects.
I caught up with Allman at the end of his presentation to ask him about his favorite tool. Listen as he tells me which tool he thinks is most important and why.
[kml_flashembed movie="http://www.youtube.com/v/fY4VBb4eO6E" width="425" height="350" wmode="transparent" /]
February 12, 2010 5:09 PM
Posted by: Yvette Francino
Lately, I’ve been talking with software testers and developers about software methodologies, and I’ve noticed the Agile vs. waterfall camps are very divided.
Miss Manners will tell you to stay away from controversial topics, such as politics or religion, when you’re meeting a new acquaintance. Some people have such strong opinions that they have a hard time discussing them without getting into a heated debate. I’m sure you could add many things to that list. I’ve known sports fans — or are they fanatics? — who have had such heated arguments that nacho wars are a real danger at Super Bowl parties.
And geeks? What do we get all hot and bothered about? All kinds of things. Techies argue about operating systems, tools, databases — just about everything — each certain that their way is the right way. I’m amazed at how personally people can feel about these things!
Yes, the Agile-versus-waterfall debate can be added to the list of controversial things that geeks have strong opinions about.
Many groups are convinced that Agile is the best methodology for any type of development effort. Others feel it works well for small projects, but not for large enterprise development projects. There are even some who speak out against Agile. Software pro, blogger and self-proclaimed software entomologist Dave Whalen makes no secret of his feeling that the Agile movement can be like a cult. On his blog, Whalen complained:
“Like most cults, if anyone dares to have a different opinion or refuses to drink the Agile Kool-Aid, the true-believers will rally their masses and crucify you… Don’t get me wrong – I like Agile. It’s a great process. It works well in SOME, very narrowly defined, situations. NOT every situation!”
I found this “Agile Made Simple” video on YouTube by Rodrigo Coutinho that I think does a great job of explaining the difference between the two models.
[kml_flashembed movie="http://www.youtube.com/v/rhIu-hjvxc4" width="425" height="350" wmode="transparent" /]
As I responded to Dave in his post about cults, my opinion is that no one methodology is the right answer for every software development project. Some of the considerations I think are important:
- Size of the project: I would think smaller projects lend themselves more to agile, because the smaller the team, the easier it is to collaborate.
- Compliance/regulations: Some government projects may require documentation and processes that are not always present with Agile.
- Risk of project: High-risk, life-and-death projects require more rigor and discipline than low-risk ones.
- Existing processes, tools, methodologies, team expertise: There’s always a learning curve and pain involved in changing the way things have been done. Change is good, but you might want to take baby steps, depending on the criticality of the project.
- Team mindset: Is the team excited about trying agile? It seems it would be a lot more effective if there were buy-in rather than a mandate.
I asked an agile expert whether there were certain types of projects, such as the ones I mentioned above, that might be better being done using a waterfall approach. Her opinion was that all projects would benefit from using an agile methodology. She felt smaller, iterative releases would always be an advantage.
SearchSoftwareQuality’s contributors have been participating in the debate, as well as offering useful information about Agile and waterfall. They’ve covered such topics as waterfall-versus-iterative development misconceptions and whether Agile and traditional project management can co-exist. Also, in two articles, software testers Matt Heusser and Lanette Creamer debated the differences between test automation and test-driven development.
When deciding on a methodology, there are a lot of things to consider, a major one being whether to choose Agile, waterfall or something in between. Which is right for you, and why? And no virtual nacho-throwing for those who disagree!