March 3, 2010 8:12 PM
Posted by: Yvette Francino
During a discussion about Scrum at the Boulder Agile User Group (BAUG) meeting recently, Paul Quarles of Oppenheimer Funds shared his experience as a Product Owner demoing to the business in a Scrum environment. I’ll share that story and my take on the moniker, Agilists, in this post.
There are no speakers or formal presentations at BAUG meetings. This is more of a casual sharing of experiences and dilemmas that people are experiencing as they use agile at their workplaces. What’s cool about this group is that people describe what’s happening in the real world — not necessarily what the books tell you or what the classes teach you, but what is really happening. As we all know, real life rarely works out exactly as the books describe. This may be why many people think that software development is not something you can learn from a text book or a class, but something you must experience.
Of the many topics discussed that evening, the use of Scrum came up most often. Everyone was very positive about their experiences with Scrum and was a strong supporter of agile. I’ve been hearing the word “agilist” used to describe agile supporters; but, being a bit of a wordsmith, I’ve decided the word “evagilist” might be a better way to describe these evangelists.
I asked the group if I might take back a “real world agile story” for Software Quality Insights, and that’s when Paul Quarles shared his Product Owner experiences.
In Scrum, there’s a Product Owner, Scrum Master, and a Team, comprised of developers and testers. Scrum uses short iterations ending in a deliverable piece of code that can be demoed. The books say that it’s the Team’s responsibility to demo the work results to the Product Owner. However, Paul — the Product Owner for his Scrum project — says that he is the person who demos the code for the business users. Listen to his account of why this works well for his group.
[kml_flashembed movie="http://www.youtube.com/v/DCboKQ6_gpA" width="425" height="350" wmode="transparent" /]
I plan to gather more of these Real World Agile stories each month. Maybe I could even make a Reagility TV Series!
March 3, 2010 6:28 PM
Posted by: Yvette Francino
Agile software development, Software testing
VersionOne, an Agile-focused application lifecycle management (ALM) vendor, has just released Ultimate Edition, a suite of software testing and management tools. Customizable reporting capabilities and enhanced regression test functionality are the new features in the Ultimate Edition package.
Other VersionOne solutions include Team Edition, a free set of tools for a single team new to agile. It is a web-based tool built on spreadsheets with an agile workflow. The Enterprise Edition builds on the Team Edition, scaling for multiple teams, projects and locations. The Ultimate Edition adds a customizable analytics platform and enhanced regression test management to the Enterprise Edition.
I spoke with VersionOne CEO Robert Holler shortly after the Ultimate Edition announcement. He said that VersionOne uses its own IdeaSpace software to collect input from users about what features they’re interested in. Reporting enhancements have been requested repeatedly, he said, and the new features will give users more choices in reporting.
“They’ll be able to use one of the many role-based best practice dashboards, a configurable analytics grid which is essentially a spreadsheet on steroids or use our wizard-driven user interface allowing roll-your-own reports.”
In talking about the enhancements for regression test management, Holler explained that VersionOne’s Ultimate Edition allows for the creation of a repository of regression tests that can be chosen from the acceptance test buckets and run automatically with builds as scheduled. I asked if the regression test enhancements were a result of user requests as well, but Holler said that was more of a strategic decision for VersionOne.
As for the bundling of the Ideas Management module, I couldn’t really see the advantage for this piece of the announcement. The Ideas Management module is still available as a separate add-on and doesn’t add any improved functionality when packaged with the Ultimate Edition. Holler said that they were trying to move away from the modular approach and provide more of a full-function solution with the bundling.
When asked how he felt about the recent announcement of CollabNet’s acquisition of Danube’s ScrumWorks, Holler felt that it was a positive sign of the industry’s recognition of the agile trend. “There is consolidation going on and incredible interest and momentum around agile,” he said.
VersionOne uses agile methodologies in internal software development. Iterative releases and agile concepts are also used in VersionOne’s marketing and services organizations. “Fundamentally it’s a good way to do business,” Holler said.
March 1, 2010 7:48 PM
Posted by: Yvette Francino
Last week, I was able to attend a trade show, network with performance test experts, learn about some new performance test tools and hear some very informative presentations — all free and from the comfort of my den. How did I do it? Well, virtually, of course.
SearchSoftwareQuality hosted Build, Bug, and Lifecycle Strategies, a virtual trade show focused on tools, strategies and techniques for executing effective application performance management. There were even prizes given out for visitor participation.
The last virtual seminar I’d attended was at Sun and held using Second Life. Now, I’m not one to knock new technologies; but I have to tell you, I got a little bit freaked out when I started flying around in Second Life without really knowing how to navigate. I finally made it to the conference room to listen to Scott McNealy’s keynote and couldn’t quite figure out how to sit down. Then I saw some poor guy’s avatar wandering aimlessly across the virtual stage in front of the virtual Scott McNealy. “Someone better help that guy off the stage,” quipped the “live” McNealy from somewhere in cyberspace. Yes, Second Life can take a little getting used to.
The software used for the APM Virtual Trade Show was much more intuitive, and everything worked seamlessly. I’m often confused in real life, so I was a bit concerned about running around in a virtual world with no training. However, despite my typical navigational challenges, I had no trouble figuring out how to hang out in the Networking Lounge while simultaneously listening to the various presentations and Webcasts where experts were enlightening us with advice on performance and load testing.
If you missed the virtual seminar, don’t worry. All presentations will be available for the next month at the following link: Application Performance Management Virtual Seminar: Build, Bug and Lifecycle Strategies.
And to give you very early notice, SearchSoftwareQuality’s next virtual trade show will be about Application Lifecycle Management and is planned for May 19th.
So no need to learn to virtually fly or create a fancy avatar. Enjoy the virtual trade show and plan to join us at the next trade show on May 19th.
February 24, 2010 6:24 PM
Posted by: Yvette Francino
The other day I posted a blog entitled: Methodology Wars: Agile or waterfall? Much to my delight, it has generated comments from both camps as well as from people who fall in-between, thinking a middle-of-the-road approach is doable.
I ran into agile guru Lisa Crispin at the Denver Agile User Group meeting the other night, so I took the opportunity to ask her opinion on this matter. Obviously, Lisa is a proponent of agile methodologies, but I wanted to know if she thought there were times when agile might not be the best methodology for a software project.
Here’s how she answered:
[kml_flashembed movie="http://www.youtube.com/v/EpkYza3So5s" width="425" height="350" wmode="transparent" /]
Crispin and Janet Gregory co-authored the book, “Agile Testing: A Practical Guide for Testers and Agile Teams.” Crispin will be presenting at the StarEast conference in Orlando, April 26-30. Check out this quick preview of what you can expect to see if you’re able to attend one of her sessions:
[kml_flashembed movie="http://www.youtube.com/v/LUYGFKpBFc0" width="425" height="350" wmode="transparent" /]
I’m looking forward to seeing her there!
February 23, 2010 8:35 PM
Posted by: Yvette Francino
Load testing, Software testing, Software testing tools
uTest, known for their crowdsourced approach to functional testing, is adding load and performance testing to their offerings. SearchSoftwareQuality got wind of uTest’s news, to be formally announced on Wednesday, February 24, and spoke with Vice President of Marketing and Community Matt Johnston. “We’ll be offering three flavors to our customers: Live Load, Simulated Load, and Hybrid Load,” said Johnston.
Johnston explained that Live Load entails coordinating with uTest’s global team of testers to simultaneously test the system. Simulated Load will use test tools designed to simulate load on a system. Hybrid Load will do both: tools simulate the load while testers simultaneously perform functional testing.
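uTest hasn’t shared its tooling, but the Hybrid Load idea is easy to picture in code. Here is a minimal, hypothetical Python sketch (the `handle_request` service and all the numbers are stand-ins of my own, not anything from uTest): a thread pool drives simulated load while a separate “tester” runs a functional check against the same service.

```python
import concurrent.futures
import threading
import time

# Toy stand-in for the system under test; a real run would hit an HTTP endpoint.
request_count = 0
lock = threading.Lock()

def handle_request(payload):
    global request_count
    with lock:
        request_count += 1
    time.sleep(0.001)  # pretend there is some service latency
    return payload.upper()

def simulated_load(workers=20, requests=200):
    """Simulated Load: hammer the service with concurrent requests."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle_request, ["ping"] * requests))

def functional_check():
    """The 'live tester' half of Hybrid Load: verify behavior under load."""
    return handle_request("hello") == "HELLO"

# Hybrid Load: run the functional check while the simulated load is in flight.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    load_future = pool.submit(simulated_load)
    check_passed = functional_check()
    load_results = load_future.result()

print(check_passed, len(load_results))
```

The point of the hybrid flavor is exactly what the sketch shows: the functional verdict is collected while the system is saturated, which is when load-only bugs surface.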
“There are certain bugs that only reveal themselves while your application is under load,” Johnston explained. Applications that use flash or streaming video, for example, need to be checked for quality of audio and video while the application is experiencing heavy traffic.
Currently these performance testing offerings are primarily for Web-based applications because that’s where the greatest demand is, but uTest is willing to dig in and customize performance test efforts for customers with other needs. At some point, Johnston thinks there may be additional interest in the mobile market as it continues to mature, but right now Web-based performance testing is their biggest market.
The competitors in the performance test arena are not other groups that offer crowdsource services, but the vendors and consultants that specialize in performance test tools, such as HP’s LoadRunner, according to Johnston. How uTest differs from consultants that specialize in certain tools is that, thanks to the uTest crowdsource model, they have access to a vast array of test tools and performance test experts.
“uTest has over 23,000 testers spread across 163 countries,” Johnston told us. Being a uTest member myself, I reminded Johnston that many of the 23,000 testers were inactive, and Johnston agreed. In any online community the typical makeup is 90 percent inactive (“lurkers”), nine percent active and one percent hyperactive. With uTest, Johnston said the spread is more like 70 percent inactive, 27 percent active and three percent hyperactive, so uTest is a more active community than most.
All uTest testers fill out a personal profile with information about their skill sets, locations and the technologies and tools to which they have access. This information helps uTest match people with the right skillset to the clients. Again, being a member myself — albeit a self-proclaimed “lurker” — I can well attest to uTest’s active community. Even though I rarely sign up to test, I have found the site one that actively encourages networking and professional development.
I asked Johnston about the pricing model for performance test, knowing that when I was a performance test manager at Sun, LoadRunner consultants were very expensive. Johnston said that the price will vary depending on the client’s needs. Though they follow the market and will charge more for expertise in the competitively-priced tools, overall, using uTest will give the client a cost advantage. Due to the wide array of testers, skills, and tools available, there is flexibility in what can be done and the client isn’t locked in to any high-priced contracts.
uTest CEO Doron Reuveni will be presenting at the upcoming StarEast conference on April 29th. My SSQ colleague, Dan Mondello, and I will be at the conference and plan to talk with Reuveni, and more with Johnston, there. So, stay tuned for more news and information on crowdsource testing.
February 23, 2010 8:03 PM
Posted by: Yvette Francino
Agile software development, application lifecycle management
This week CollabNet made a move to strengthen its Agile application lifecycle management (ALM) line by acquiring ScrumWorks creator Danube. CollabNet’s ALM products TeamForge and Subversion and Danube’s popular Agile PM product, ScrumWorks, will continue to be offered separately. However, plans are in place to integrate these products into a Q2 solution that will offer customers both distributed ALM and agile project management.
I just met with Bill Portelli, CollabNet CEO, and Victor Szalvay, now CTO of CollabNet’s Scrum Business Unit. Before I share some of our conversation, here’s a little more about the products involved. TeamForge is an ALM platform designed for distributed teams. It includes a series of modules that can be used throughout the software development lifecycle and is methodology-agnostic. It is built around the popular Subversion source code management tool, an open source tool founded by CollabNet. Danube’s ScrumWorks is a popular Scrum project and program management tool, with over 150,000 users.
I asked Portelli and Szalvay how their products compared with offerings from ThoughtWorks and Rally. Though those tools can integrate with a variety of other ALM products using APIs, they said, CollabNet will offer a package of distributed ALM tools, allowing more flexibility for organizations that are adopting agile with caution. CollabNet’s ALM products are methodology-agnostic and allow for a “slow step-wise approach and on-ramp towards agility,” said Portelli. The tools are Web-based and allow for technology independence as well, working equally well whether your code base is Java, .NET or something entirely different. The tools are also scalable, working well for large enterprise projects. CollabNet’s product line will help close the gap on issues that have typically been difficult barriers for agile development: distance collaboration and large-scale development.
Distributed agile is a particular interest of mine since I manage a professional development network myself, Beyond Certification, for those who want to learn more about using agile practices in distributed settings.
So, I asked Portelli and Szalvay if agile purists might feel that agile methodologies were being compromised by bringing in tools that manage processes considered more traditional. “One of the exciting things is that we have not had to compromise the purity,” Szalvay said. Customers will continue to have the option of using ScrumWorks as a standalone product. However, for those transitioning to agile or using hybrid methodologies, it looks like CollabNet will have a fully integrated ALM solution.
Portelli and Szalvay believe that their customers — both the execs and the developers — are pleased with the acquisition. They’ve been asking for a more “flexible and mixed methodology” environment. “Our goal is to win the hearts and minds of developers. Evangelists will see the reality of allowing to onramp people into agile. [Agile] Gurus are positive and congratulatory.”
How are the people at Danube feeling about the acquisition? I found this post by Lyssa Adkins from Danube: Is your Scrum team a dinghy or an ocean liner? She uses the metaphor of sailing the ocean, asking teams whether they’d rather be a dinghy, hanging on for life in turbulent waters, or an ocean liner, in control.
Oftentimes, reflecting on themselves through metaphor and not thinking (feeling instead) turns up the resonance for the team. From here, they can see things about themselves they didn’t see before. Better yet, with the metaphor in mind and the focus on the future the metaphor often brings, they can see new possibilities for being better than they are today.
It appears that CollabNet and Danube are joining to become an ocean liner and preparing to sail the Seven Seas.
February 18, 2010 5:23 PM
Posted by: Yvette Francino
There has been a recent flurry of events in the Denver area catering to the agile enthusiast. This is the second event in a week where I’ve been able to personally meet with nationally-known agile leaders. I’m going to have to start carrying around an autograph book!
Last Wednesday I met Mike Cohn at the local Java User Group meeting. Then last night I attended a meeting of the Agile Denver user group, which featured Selenium creator and co-founder of Sauce Labs, Jason Huggins. Huggins and Sauce Labs were the topic of a recent SearchSoftwareQuality news story, Sauce Labs adds business value to Selenium testing with IDE.
In a panel discussion about automation and the use of Selenium, panelists included SSQ contributor Chris McMahon; fellow agile experts Matt Raible, Scott Allman, Thomas Albright and Joe Yakich; and, of course, Huggins. The first question they tackled was, “Do you think UI Automation Test Tools are snake oil?” The question was prompted by the controversial blog post by Michael Feathers claiming user interface (UI) automation does not deliver on its promises. Panelists discussed using UI automation testing tools the right way, which means testing navigation, not business logic.
Panelists also debated the return on investment of automation versus performing techniques such as exploratory testing manually. Most felt that automation could be used effectively in conjunction with exploratory testing, and that automation could perform such functions as the setup of tests or the recreation of defects.
In another session, Huggins gave an overview of Selenium, the popular open source tool providing platform-independent automation testing. He suggested, among other things, taking advantage of Selenium Grid to execute tests much more quickly on multiple machines, rather than executing tests serially on limited hardware.
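Selenium Grid itself manages browser sessions across remote nodes; the speedup Huggins describes comes simply from fanning tests out in parallel instead of running them one after another. This toy Python sketch (the `run_test` stub is my own stand-in for a real Selenium test, not Grid’s actual API) illustrates the serial-versus-parallel difference:

```python
import concurrent.futures
import time

def run_test(name):
    """Stand-in for one Selenium test; a real one would drive a browser."""
    time.sleep(0.05)  # pretend each test takes 50 ms of browser time
    return f"{name}: PASS"

tests = [f"test_{i}" for i in range(8)]

# Serial execution on one machine: total time is roughly the sum of test times.
start = time.perf_counter()
serial_results = [run_test(t) for t in tests]
serial_time = time.perf_counter() - start

# Grid-style execution: the same tests fanned out to four parallel "nodes".
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as grid:
    parallel_results = list(grid.map(run_test, tests))
parallel_time = time.perf_counter() - start

print(sorted(serial_results) == sorted(parallel_results))  # same outcomes
print(parallel_time < serial_time)  # but less wall-clock time
```

In an actual Grid setup, each worker would point a remote WebDriver session at the Grid hub rather than sleeping, but the wall-clock math is the same.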
Huggins also talked about future trends in test automation, Selenium and cloud computing, such as using automation for mobile device testing and the use of automation videos for more than testing. Check out this video in which Huggins describes his views on the future of automation combined with screencasting to create test outputs that can be used for marketing, documentation or training.
[kml_flashembed movie="http://www.youtube.com/v/pDMhB_HXFKc" width="425" height="350" wmode="transparent" /]
I’m convinced test automation can provide a lot more than regression tests, but there are some that are disillusioned by the maintenance costs. What experiences have you had with test automation?
February 17, 2010 10:58 PM
Posted by: Yvette Francino
Mike Cohn, Planning Poker
Estimating software can be tricky. Earlier this week, I’d written a tip about using Planning Poker for estimations in an Agile environment. As I was researching this tool, I found Mike Cohn’s name dominating this space as an authority on Agile estimating techniques. I’m used to finding experts on the Internet that can be anywhere in the world. Little did I know, Cohn lives in neighboring Lafayette, Colorado. I was even more surprised when an email popped into my inbox yesterday that said that Cohn would be speaking this week at the Denver Java User Group (JUG) meeting! How lucky for me!
Cohn, Agile coach, trainer and author of several books, including Agile Estimating and Planning, described using “story points” as units of measure. “People are better at relative estimating than absolute estimating,” he explains. Story points allow for relative measurement, rather than using time or size. In this video clip, Cohn compares using story points to the “T-Shirt sizes” method of estimating.
[kml_flashembed movie="http://www.youtube.com/v/BAGain3T-xU" width="425" height="350" wmode="transparent" /]
Cohn warned us against “anchoring.” Anchoring, he explains, is when people hear a number, and even if that number is not relevant, it still seems to influence their estimates. Cohn gave a very interesting example of a team that was asked to estimate a project. A control group, without any outside influence, estimated that the project would take 456 hours. A second group was told that the customer, with no knowledge of the project, thought it should be done in 500 hours, but that this should not influence their estimate. The estimate this group came up with was 555 hours. A third group was told the customer thought it should be done in 50 hours. Again, they were told that this number should not factor into their estimate. This group came up with an estimate of 99 hours. Even though the groups were told not to be influenced by the customer’s numbers, the experiment implied that their estimates were skewed towards the numbers given by the customers.
Planning Poker is a technique in which each team member uses cards with a range of numbers to estimate effort. Typically the numbers do not progress incrementally but spread further apart the higher they get. The Fibonacci series (0, 1, 2, 3, 5, 8, 13, 21, …) can be used for this; the reasoning is that the larger the numbers get, the more uncertainty there is. Cohn gave us each a deck of cards and had us do an exercise in which we were given several tasks and then worked in teams to estimate those tasks using the cards. If we didn’t agree on the first pass, we would explain our reasoning and vote again. In all cases, we were able to reach consensus quickly. Cohn has even made a free planning poker tool available for distributed agile teams.
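The mechanics of a round can be sketched in a few lines of Python. The deck values below follow the Fibonacci-style spread of Cohn’s cards; the consensus check is my own simplification of the discuss-and-revote loop, not Cohn’s tool:

```python
# A Fibonacci-style Planning Poker deck (larger values, wider gaps).
DECK = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def consensus(votes):
    """A round reaches consensus when every estimator shows the same card."""
    return len(set(votes)) == 1

def poker_round(votes):
    """Return the agreed estimate, or None when the team must revote."""
    assert all(v in DECK for v in votes), "every vote must come from the deck"
    if consensus(votes):
        return votes[0]
    # No consensus: outliers explain their reasoning, then everyone revotes.
    return None

# First pass: estimates diverge, so the team discusses and votes again.
print(poker_round([5, 8, 5, 13]))
# Second pass after discussion: consensus on 8 story points.
print(poker_round([8, 8, 8, 8]))
```

The restricted deck is what prevents false precision: a team can argue 8 versus 13, but never 8 versus 9.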
Advice from Cohn
I caught Cohn at the end of his presentation for a few minutes and asked him if he had some short words of wisdom that might help teams that are working on software estimates. He shares this advice for times when the team feels the estimate is somewhere between two numbers that are being used:
[kml_flashembed movie="http://www.youtube.com/v/AkbcuiZbh94" width="425" height="350" wmode="transparent" /]
Cohn gave us each a couple of decks of his Planning Poker cards to go home and play with. His decks include four sets of ?, 0, 1/2, 1, 2, 3, 5, 8, 13, 20, 40, 100 and the infinity symbol. I may not be ready for Vegas, but I am so ready to estimate software. Bring it on!
February 17, 2010 10:06 PM
Posted by: Matt Heusser
Headline: “Google admits buzz social network testing flaws.” The same headline was covered on MSNBC, The Business Insider, and USA Today. Regarding the mess, software testing authority James Bach said publicly: “These problems with Google Buzz could have been averted using the most basic kinds of critical thinking!”
Here’s the issue in a nutshell: Google Buzz is a new social tool that allows you to share your opinions and status with other users. Because it integrates with Gmail, Buzz knows to whom you send email the most. Buzz also had an added feature that automatically picks your “friends” — those with whom you share — out of the box for you. It also made your communication stream public by default.
The automatic friending is no problem for a lot of people; but what if you are having a legal dispute or custody battle, don’t want the boss to know that you’re interviewing with another company or just have two in-laws that don’t get along very well?
Google Buzz — and, arguably, Microsoft Vista before it — actually represents a unique kind of testing challenge, one we don’t deal with much in the industry. You see, Buzz actually works. No, that is not a typo. Buzz works according to its specifications.
The problem is, the specifications were wrong, objectively wrong, in that they were not in the public’s interest.
I’m certain the folks at Google tested the heck out of Buzz. They probably had test scripts that exercised every possible feature, and it all worked per spec. I bet they had metrics and greenbars all over the process.
But they had the wrong tests.
You could claim this was a product management failure. After all, the software did what it was supposed to do, but “what it was supposed to do” was the wrong thing. It is possible, even likely, that in some meeting a tester or developer asked the question: “Do we really want all this information to be public?” And got an answer that was a resounding, “Yes!”
So it might be more fair to say that this was a systems-thinking flaw. It expands the idea of testing, beyond “conformance to specification” and into another land, one of “fitness for use.”
A test for fitness for use needs to do more than “requirements traceability”; it needs to ask very tough questions about what the software does and whether it does the right things. Those are questions of quality. While I’m reluctant to blame a product management decision on “poor testing,” certainly more testing might have reduced the risk of a bad rollout. Even the folks at Google agree that, in hindsight, they should have had a larger, more public beta.
As social media continues to become more integrated into our daily lives, and mashup platforms that combine Facebook and Twitter, etc., become more common, I expect we’ll have more testing challenges of this nature.
I wonder what will happen to the profession of software testing in the years to come. Will we bury our heads, claiming: “It worked per spec. It’s a security flaw. Talk to project management”? Or will we expand our view of testing to include critical thinking, an investigative approach that cannot be trivially scripted into clicks that “prove” the specification “works” or easily automated away? Only time will tell.
For right now, we should all buckle up; it’s going to be quite a ride.