Agile practices, including continuous integration and documentation in agile environments, were the kick-off topics at STAREast in Orlando, Fla., today.
I’ve been sitting here in the second row of a ballroom at the STAREast conference with the Rebel Alliance. We’ve been listening to two keynote speakers, Jeff Payne and Elisabeth Hendrickson.
Payne started his presentation — “You Can’t Test Quality into Your Systems” — by talking about some of the changes we’re seeing in development, asking whether they were fads or trends. Fads come and go, but trends “change the way we work,” he said. Agile development, unsurprisingly, was one of the trends he mentioned, and one we’re hearing a lot about across the industry.
“Continuous integration is HOT,” said Payne as he spoke about SecureCI, an open source tool that integrates several other open source tools. Continuous integration is the “hub” that ties together development, test, release engineering and management.
Payne quipped that developers originally made up agile as “retaliation of quality people” to get auditors out of their hair. Then the developers found out that, indeed, they’re not off the hook for unit testing and documentation! There’s actually a lot of “rigor” in agile done right. Payne said that people often think that agile means “no documentation.” A common misconception is that you can’t use an agile approach when documentation is required due to regulation. However, if documentation is a requirement, then that documentation is included as a high-priority story in agile. Payne gave an example of a project in which the documentation was stellar, impressing the ranks. When they asked about the methodology, they were told “agile!”
Hendrickson spoke next on “Agile Testing: Uncertainty, Risk, and Why It All Works.”
When describing documentation, Hendrickson hopped all over the stage imitating all the various documentation that needs to be updated in traditional environments. As she described her frustrations, she joked about a product she once tested using Excel as the documentation tool: “I found more bugs in Excel than in the product I was actually testing.” She described the documentation process as “exhausting.”
One of the concepts used to boost documentation efficiency, Hendrickson said, is Acceptance Test Driven Development (ATDD). With ATDD we get the shared understanding of what we’re building and can automate expectations. Hendrickson talked of an “executable specification” saying: “We finally can actually do it. We have real evidence, executed running testing features.” ATDD provides leveraged documentation resulting in executable requirements.
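To make the idea concrete, here’s a minimal sketch of what an “executable specification” can look like in practice. The business rule, names and figures below are all hypothetical, and real ATDD teams typically express these checks in a tool such as FitNesse or Cucumber rather than raw assertions, but the principle is the same: the acceptance criteria are the documentation, and they run.

```python
# Hedged sketch of an ATDD-style executable specification.
# Hypothetical story: "Orders over $100 get a 10% discount."

def order_total(subtotal):
    """Apply the agreed (hypothetical) business rule: 10% off subtotals over $100."""
    if subtotal > 100:
        return round(subtotal * 0.9, 2)
    return subtotal

# Acceptance criteria written as executable checks.
# These double as living documentation of the requirement.
def test_discount_applies_over_threshold():
    assert order_total(200) == 180.0

def test_no_discount_at_or_below_threshold():
    assert order_total(100) == 100

if __name__ == "__main__":
    test_discount_applies_over_threshold()
    test_no_discount_at_or_below_threshold()
    print("all acceptance checks passed")
```

When the rule changes, the specification fails loudly instead of silently going stale — which is exactly the “leveraged documentation” Hendrickson described.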
Hendrickson described how much more quickly feedback is received with continuous integration and automated regression. “The automated system level documentation reduces the feedback latency to the time the developer checks in code until there is evidence that the code works.”
Both keynote speakers seemed to agree that usage of agile development is growing rapidly. Learning how to take advantage of continuous integration and how to use documentation efficiently are two practices that they see contributing to the success of agile projects.
For more STAREast coverage, check out these links:
- James Bach – The buccaneer tester
- STAREast keynotes: Continuous integration and documentation in Agile
- STAREast: Is your software development organization agile?
Survey results from a study on modernization strategies from IT leaders were released April 26, 2010, in a special report, “Clearing Your Path to Modern Applications and Business Agility.” The survey and report were produced by Forrester Consulting and commissioned by Hewlett-Packard in January 2010. Targeted organizations included over 200 companies in North America, Western Europe and Asia Pacific. The targeted companies were described as “significant IT shops,” with 61% having budgets of over $50 million, spanning a wide range of industries, stated Phil Murphy of Forrester in a pre-announcement press release on April 23, 2010.
The report was aimed at finding out what was driving folks to modernization and exploring the barriers they were encountering.
Barriers to application modernization
Primary challenges impacting application development productivity were complexity, cumbersome end-to-end application development processes and difficulty in changing legacy applications.
The three top reasons applications were being considered for retirement or replacement were obsolete technology, the application no longer met business needs and the application was difficult to maintain.
However, the biggest barriers to modernization were business users’ insistence that they needed the legacy applications, and development teams being too busy with other work.
A significant 91% of respondents agreed that they would benefit from an application consolidation/rationalization effort.
How are firms approaching modernization?
Initiatives to improve application productivity included improved SDLC processes, reduced reliance on legacy applications and platforms, formal application portfolio rationalization and improved SDLC tools, including software testing tools.
Primary drivers behind modernization were increased agility, innovation and cost reduction.
Regarding methodology adoption, iterative methodologies such as RUP and agile methodologies such as Scrum showed primary adoption. The survey also showed a 26% adoption of waterfall, which Murphy explained was due to “some of the smaller firms that had zero methodology are adopting a methodology and waterfall is the one that they’re choosing.”
What results have been achieved?
For those firms that have adopted an agile approach, the most significant improvements noted were in productivity, quality and time to market. “Clearly the adoption of agile has had an effect on success rates and ways we normally measure success,” said Murphy.
When asked about the effectiveness of initiatives to improve development productivity, 74% of respondents listed improved SDLC tools, including testing tools, as contributors. Improved SDLC processes, including software testing processes, were also cited by 70% of respondents as contributing to the improvements.
Murphy compared the barriers to modernization to a three-headed beast, listing bloated portfolios, obsolete tools and practices, and excessive “lights-on” costs as the three heads that all needed to be conquered. Fighting one head leaves you vulnerable to the other two. Though 91% of respondents agreed that they would benefit from a formal retirement program, business users are reluctant to let go of legacy applications. However, modernization efforts that have taken place are showing productivity gains from SDLC tooling, SDLC processes and application rationalization.
Murphy concluded, “You have to clean out that portfolio so the overall lights-on IT costs come down.”
Today I checked out the Top 100 Software Testing Blogs and was happy to see Software Quality Insights listed as #13 on that list!
Blogging is a wonderful way to learn and network with others in your field. One of my favorite parts of this job is networking with others by reading and writing blogs. It’s especially fulfilling to have a conversation or learn differing viewpoints via the comments. One of the great things about Web 2.0 is that it allows us to have open group communication. I strongly encourage you to post comments on our blogs, letting us know your point of view or what type of coverage you would like to have on SearchSoftwareQuality.com. We look at our traffic as a way of helping us determine which content is most valuable to you, but that’s only one input.
Here are our top five traffic generators for Software Quality Insights in 2010:
- Methodology wars: Agile or waterfall?
- Eight free tools to automate your software test processes
- Running, debugging, and analyzing load tests using JMeter
- A modern way of gathering requirements: Visualizations
- What’s going on with Google’s social network testing?
Are these topics representative of your interests? Let us know. We don’t want to stay at unlucky #13 for long. Let’s see if we can move that up to #1! Help us out by adding comments, giving us feedback and letting others know about Software Quality Insights.
Today crowdsourced testing organization uTest announced its partnership with cloud-based testing organization SOASTA. Just a little over a month ago I spoke with uTest’s VP of Marketing and Community, Matt Johnston, about uTest’s new load and performance test offerings. Now they are teaming with SOASTA to offer cloud-based load testing. Today’s announcement is interesting in that it combines two relatively new trends in Web-based testing, crowdsourced testing and testing in the cloud, in a way that benefits both companies.
I spoke with Johnston and SOASTA CEO, Tom Lounibos, about the announcement.
Johnston said, “One of the things that’s very appealing to us is that SOASTA has an on-demand offering, meaning that for companies of all sizes, you pay as you go and you only pay for what you use. That’s very much in line with the model uTest employs for functional and load testing services where there’s not a big commitment up front and you can scale up as much as a company wants to meet its needs.”
I asked Lounibos if there were many competitors in the cloud-based testing arena. He answered, “There’s not a lot of on-demand testing companies.” He noted that there were traditional competitors like HP and Rational that offered load test tools and services. “We’re all trying to help customers with reducing the cost and improving the quality. Testing software had traditionally been really expensive. We’re looking for easier, faster, more scalable ways to test Web applications.”
When asked if uTest had considered alternatives and why they had chosen to partner with SOASTA, Johnston answered that uTest was “cognizant of the landscape. We looked around and from a business perspective, in terms of maturity, target audiences and its on-demand business model, SOASTA was the best fit.”
That being said, uTest will work with other load test tools. “uTest is and will remain brand-neutral.” Johnston said that they would support whichever load and performance tool was best for the client. “For companies that are turning to uTest to help make a load test decision, SOASTA is extremely strong for moderate to large volume Web-applications. And we’ll have members of the uTest community who are trained and well-equipped to work with a SOASTA platform.”
Lounibos talked about the history of SOASTA and some of the resource-intensive obstacles the industry has faced that are addressed through a cloud-based testing model. Lounibos realized a real-time OLAP engine was needed to house the vast amount of performance business intelligence that’s available. Customers are able to drill into the data to determine exactly what is running at the time of a performance spike. SOASTA’s unique IP is the ability to give its customers a high level of real-time “performance intelligence.”
Johnston described the partnership as “multi-faceted,” talking first about the training that would be offered to performance engineers.
“We’ll be selecting members of the uTest community to get SOASTA training.” This will allow customers of either SOASTA or uTest access to SOASTA-trained performance engineers. “We will be creating a skilled labor force, equipped to deal with SOASTA product and projects brought in either by SOASTA or uTest. This will provide opportunities for the testing community to get free training and testing-related work that’s more consultative and strategic in nature.”
Additionally, the partnership allows uTest to offer SOASTA’s load testing tools to clients and SOASTA to offer its clients functional test services to be fulfilled through the uTest community of testers. Johnston described the model as “marrying up the services and capacity of the uTest community with the tools of SOASTA.”
Lounibos noted of the union, “uTest has 25,000 professional testers. The attractive thing is that each one of those can be enabled to generate a load using one of our agents. Our customers want to know what the user experience might be in Hong Kong vs. Liverpool vs. Des Moines.”
Johnston agrees. “More and more companies are wanting to get closer to real-world end-user experience. With our liveload offering and with this partnership we can now offer [that to] our customers in ways we couldn’t before.”
Basically, static analysis is a way to find bugs in code without executing it. Anyone who’s compiled code is familiar with the warnings that can be generated. Sometimes these warnings can be ignored, but other times they raise awareness of potential problems in the code. Static analysis goes beyond compiler warnings, examining code paths for potential issues. Defect categories that Ma highlighted in his presentation included:
- Memory corruption, illegal access
- Null pointer dereference
- Resource leak
- Concurrency and deadlock issues
- Incorrect expressions
- Insecure data handling
- Library and API misuse
- Uninitialized data
Ma gave an example of C code that compiled cleanly yet clobbered memory that hadn’t been properly allocated. As a former C developer, I’m quite familiar with this scenario. I went through a period of hitting a lot of memory corruption errors that caused unfortunate system crashes, along with disturbing messages about the program aborting. This came at a time when I was very pregnant, which led to a very geeky nightmare one night: I hadn’t allocated enough memory in my womb for the baby! Oh dear!
Coverity Integrity Manager is the tool Ma used to demo how static analysis can be used by development organizations to catch these types of bugs before any test cases are written. As I wrote in Eight free tools to automate your test processes, FindBugs is an open source static analysis alternative for Java developers.
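The same classes of defect show up in any language. Here’s a hedged Python sketch (the names and data are hypothetical) of a latent null-dereference path, the kind of issue tools such as pylint, or mypy with type hints, can flag without ever executing the code:

```python
# Sketch: a latent "null dereference" a static analyzer can catch.
# All names here are hypothetical, for illustration only.
from typing import Optional

USERS = {"alice": "alice@example.com"}  # stand-in lookup table

def find_user(name: str) -> Optional[str]:
    return USERS.get(name)  # returns None when the user is missing

def email_domain_unsafe(name: str) -> str:
    email = find_user(name)
    # A type-aware analyzer flags this line: email may be None,
    # and None has no .split() -- a runtime AttributeError waiting
    # for the first missing user, with no test case required to see it.
    return email.split("@")[1]

def email_domain_safe(name: str) -> str:
    email = find_user(name)
    if email is None:  # the guard that satisfies the analyzer
        return "unknown"
    return email.split("@")[1]

if __name__ == "__main__":
    print(email_domain_safe("alice"))  # example.com
    print(email_domain_safe("bob"))    # unknown
```

The unsafe version runs fine on every input a happy-path test would try, which is exactly why catching it statically, before any test cases are written, is so valuable.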
Since this presentation was part of an agile conference, I was curious if static analysis had any additional benefits in an agile environment. It seemed to me it would be just as beneficial in traditional environments. I asked Ma about this during the panel Q&A and he confirmed that static analysis was methodology-agnostic. He said, however, that if you run the tool on old, legacy code, be prepared for a very long report to go through!
My advice to testers? If the developers haven’t done the static analysis, then you do it! You’ll find the problem areas before you’ve written a single test case!
Remote access, collaboration and intelligence are among the most sought-after features in debugging tools. DebugLive has put those features in debuggers for .NET and Windows. In this post, I share information on today’s debugging challenges and DebugLive’s new features, gleaned from my conversations with DebugLive CEO Donis Marshall and others.
Today, software teams are dispersed across the globe. The good news is that distributed test and development teams have many ways to collaborate remotely, such as social networking tools, phones, instant messaging and a wide array of communication devices and software. The downside is that IM, email, Facebook and other such software are all external applications, according to Marshall. Using external tools means the security of an internal firewall is lost. Even if security is not an issue, the work required to maintain contact lists and communicate with multiple people using different tools and technologies is substantial.
Marshall’s team saw a need for a single, internal tool that gives the whole software team the same screen for comments and test corrections. So DebugLive was designed to provide remote access to data, while retaining all personal and company firewalls.
“With DebugLive you can debug your Windows-based application from anywhere you can connect online and have multiple users all locked into the same debugging session, trying to achieve the same end goals,” says Scott Gagnon, DebugLive marketing director. Also, Marshall noted that DebugLive complements Microsoft tools, such as WinDBG and Visual Studio, by adding multi-user, remote access, collaboration and other features not available in Redmond’s suites.
Despite the name, DebugLive has features that go beyond debugging. One such feature, offered as a plug-in, is the ability to record video. Though this can be used to record a debugging session, it could serve other purposes as well, such as training or marketing, Marshall said.
Other features in the DebugLive tooling include hyperlinked hints for debugging. Say you want to run a predetermined debugging process, Marshall explained. Instead of typing out statement after statement of code, you simply click and the process runs. DebugLive also uses intelligent selections for next processes: once debugging has begun, DebugLive suggests which command should be run next based on context. Testers can also save their own debugging processes in the tool’s database; these become available to co-workers who are also logged into DebugLive.
Wintellect — a software consultancy, debugging and training company — uses DebugLive in all of its debugging and consulting services. John Robbins, co-founder, particularly likes DebugLive’s collaboration capability. “What was once a solution, laid out in weeks of review, emails, phone chats and flights all over the world can now be solved in real time with a group logged into DebugLive,” he said. Teams are able to examine and debug large core dumps and store all their artifacts in a single repository for easy organization.
Robbins recommends DebugLive to IT organizations bogged down by tech support problems and requests. “These tech guys have been taking on more and more application bug problems, many of which lie outside of their area of expertise,” he said. “Many of them have some sort of background in debugging, but had never thought it would dominate their workloads. DebugLive has helped them in that they can resource numerous members of various software specialties to help resolve complex issues.”
Convenience and cost reduction are two benefits Wintellect has derived from DebugLive. “It is an expensive industry to be in, flights, conference calls, explaining issues to management, explaining them again and again. Now if part of the team works in India and another in the continental US, there is no problem. Just login and start debugging, share screens and send IMs,” said Robbins.
Next on DebugLive’s plate is making its debugger available to developers and testers who aren’t exclusively operating in .NET. The new platform is currently in beta and is expected to launch later in the year. DebugLive is also currently offering a free trial of its product, which you can sign up for through its website.
Last week I had the opportunity to attend SQE’s complimentary “Agile Comes to You” conference in Colorado Springs. A group of vendors that integrate well to create an Agile ALM experience was on hand to offer up presentations and demos of their products.
The agenda started with a Keynote from Rally Software’s Founder and CTO, Ryan Martens, who spoke about ROI in an agile environment. This was followed by vendor presentations, lunch and demos from four vendors.
- AccuRev presented “Automating & Managing Agile Software Development Processes.” AccuRev is an Agile ALM provider that offers configuration management software.
- Coverity spoke about static analysis in a presentation titled “Agile Software Development.” Examples were given of how static analysis can catch defects before code is even executed when bugs are the least costly to fix.
- Rally Software presented “The Power of Feedback,” in which several examples of the importance of feedback in the application lifecycle were demonstrated. Rally Software provides agile project management software.
- AnthillPro’s presentation was titled “The Co-Evolution of Agile and Continuous Integration” and included a history of agile development. AnthillPro’s software handles build and deployment automation for agile teams.
More and more organizations are transitioning to agile, including larger organizations that are looking for a way to handle enterprise-wide software development using agile. These organizations will be looking at Agile ALM solutions to help them handle the complexities of scaling and geographically dispersed teams.
Some books say that if our projects are not “properly” controlled, if our written specifications are not always complete and up to date, if our code is not properly organized according to whatever methodology is fashionable, then, well, they should be. These books talk about testing when everyone else plays “by the rules.” This book is about doing testing when your coworkers don’t, won’t and don’t have to follow the rules. Consumer software projects are often characterized by a budget that is too small, a staff that is too small, a deadline that is too soon …
– Testing Computer Software, 2nd Edition, Kaner et al.
I’ve always been partial to that introduction; it matches my life experience. My own contributions to this area of testing include the idea of the ‘Boutique Tester.’ In other words, the tester air-dropped into a project, making contributions immediately and to the best of his ability. Sure, the tester will leave some artifacts, do some training, and try to leave the organization in better shape for next time, but his focus is on actually adding value to the project through testing right now.
Since I started writing and speaking about Boutique Testers, I’ve had several people approach me and talk about the business model. A few of them are doing it, making a living as freelance software testing experts. James Lyndsay refers to himself as a “Test Strategist.” James Bach just did a blog post and video on the role of the Consulting Software Tester, and my friend John Kotzian just changed his LinkedIn title to “Boutique Tester at uTest.” (He also just started a blog you might enjoy; check it out.)
Once you accept the idea, the next question that logically follows is “okay, but, um … what does a Boutique Tester actually do?”
I don’t want to be overly prescriptive; after all, what the Boutique Tester really does is “figure out what needs to be done and then do that.” But I’d like to at least give an example. So please allow me to paint a picture …
The sales work is done, the contract is signed, and it’s day one. The Boutique Tester arrives on-site. First he likely walks around, meeting the technical staff. He’ll watch what the technical staff is actually doing and how they report that to management. He’ll look at what story is told to the customer and the story that gets told when the customer walks away. He’ll get a copy of the software — or try to — and find out how often builds happen.
Now it’s the good part; time to attack the software. We’re quickly getting past what I can describe in one blog post. So …
Good news! I’m pleased to note that the folks at SearchSoftwareQuality have invited me to start a series of articles on how to attack software with little knowledge of the application, under time and staffing pressure. Nearly any type of tester can use these kinds of articles: Boutique, Contract, Outsourced, or Full-Time Employee.
I’ve already written an article on Quick Attacks for Web Security and have another describing Quick Attacks for Web Applications coming soon. From there we could expand to performing more detailed social attacks for systems security, or to better understanding the application under test and finding possible flaws in business logic more quickly.
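To give a flavor of what those articles cover, here is a minimal sketch of one classic quick attack: hammering an input handler with boundary and hostile values and recording what happens. The function under test, its limits, and the inputs are all hypothetical stand-ins for whatever input-handling code you’re dropped in front of:

```python
# Sketch of a boundary-value quick attack. validate_age and its limits
# are hypothetical stand-ins for real input-handling code under test.

def validate_age(raw: str) -> int:
    value = int(raw)  # raises ValueError on non-numeric input
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# Classic quick-attack inputs: empty string, boundaries,
# just-past-boundaries, garbage, scientific notation, stray whitespace.
QUICK_ATTACK_INPUTS = ["", "-1", "0", "130", "131", "abc", "1e9", " 42 "]

def run_quick_attack() -> dict:
    """Feed every hostile input and record accept/reject behavior."""
    results = {}
    for raw in QUICK_ATTACK_INPUTS:
        try:
            results[raw] = validate_age(raw)
        except ValueError as exc:
            results[raw] = f"rejected: {exc}"
    return results

if __name__ == "__main__":
    for raw, outcome in run_quick_attack().items():
        print(f"{raw!r:8} -> {outcome}")
```

The attack takes minutes, requires no specification, and the surprises in the accept/reject table (does `" 42 "` really get accepted?) become the first bug reports and the first conversations with the developers.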
What would you like to read about?
If a picture’s worth a thousand words, a visualization’s worth a thousand pictures. A “visualization” is a functional software prototype. This form of rapid user interface (UI) prototyping is now being used by business analysts as an effective method for gathering customer requirements.
In the software development lifecycle, it’s the business analyst’s responsibility to work with business stakeholders to determine the requirements of a software application. There are many ways that this can be done. Traditionally, after many long meetings and countless interviews, a big, thick requirements specification would be written, attempting to describe the requirements of the system. This would then get passed to a design team who would create a functional specification which developers would ultimately use to write code. The end result would be an application that often looked very different from what the business had originally envisioned.
Some reasons this approach leads to an end product that can be so different than what the business wants are:
- It’s difficult to describe exactly what you want when you don’t have a starting point.
- Requirements can change over the lifecycle of a product.
- The details of what the business wants need to be continually clarified as the project evolves.
- It’s much more difficult to describe a user interface and functionality in words than it is to work with the actual screens.
These problems are some of the reasons the waterfall methodology has gotten such a bad rap and why so many people are switching to an agile methodology. It’s become recognized that more effective collaboration and communication with the business is required in order to accurately understand exactly what business users want.
But switching to an agile methodology isn’t the only solution to this problem. Another technique that is being used is to gather requirements using visualizations.
SearchSoftwareQuality met with iRise leader Mitch Bishop last week to discuss their product line. Their tools are meant for the business analyst, not the developer. Working with the iRise applications, business analysts are able to collaborate with stakeholders to create working models, going beyond mock-ups or wireframes. Visualizations can be integrated with data to provide an actual functional preview of a finished application.
A quick search revealed this informative blog post listing 41 prototyping tools that can be used for rapid UI generation. iRise was included in the list, described as “A very complex tool used to model business process and prototype application interfaces.” It’s not surprising to me that the tool is listed as “complex” as it does allow for quick prototyping for a variety of product types including Web 2.0, mobile applications and SAP Extensions across several industries.
Being the devil’s advocate that I am, I asked Bishop whether the problem they were trying to solve was already solved by the trend of agile teams. In an agile environment, the product owner works on the same team as developers and testers, producing functional code in short sprints. This cross-functional team addresses the improved collaboration and communication with the business and allows for the continual product review throughout the lifecycle.
Bishop answered that rapid prototyping tools can be used in an agile environment as part of the short sprint. He’s finding that all teams, regardless of methodology, are effectively using visualizations to help better define requirements.
It’s good to know that the industry is finding effective ways to gather requirements. Let’s hope the days of thick requirements specifications are quickly coming to an end.
This week, I had the opportunity to attend the HP StorageWorks Tech Day in Houston. This social media event connected bloggers across technology fields to focus on various server and storage products. One focus of the event was a visit to two of the quality assurance labs at HP’s campus in Houston. There we met with the people who manage and implement all aspects of HP’s lab functions for internal product support.
The first quality assurance lab was for Enterprise Backup Solutions (EBS). In this lab, the entire matrix of supported data protection configurations is available for testing by HP and its partner ecosystem. The EBS lab’s main objective is to prepare the support matrix for various data protection products, storage systems and partner software. The lab builds up and breaks down the various permutations of products and software to continually produce the support matrix. Given the vast array of product lifecycles across servers, storage and software, this is a big challenge.
The second quality assurance lab we visited was the StorageWorks lab. This area provided a different level of support, down to disk array, drive and controller endpoints. A number of engineering QA functions were in action during our visit, the foremost of which was firmware testing. For storage systems, many protocol analyzers were in place: throughout much of the equipment in the lab, Serial Attached SCSI (SAS) protocol analyzers were hooked up to drive slots or blade chassis backplanes. Figure A below shows a SAS protocol analyzer hooked up to a SAS drive slot:
Another quality assurance function that caught my interest was the ability to get to some very low-level functionality within a storage array. While in the storage lab, I noticed that many of the popular storage products such as the MSA P2000 had additional connectivity compared to a normal customer installation. For the MSA P2000, the lab has a special I/O device attached to the controller shown in Figure B below:
This device is quite the storage utility knife. First of all, all devices in the lab with a red circuit board indicate a series of device states. The primary states for red circuit boards are prototype or internal tool. This I/O device shown in Figure B is an HP internal tool and is not available to customers. The CAT-5 cable attached to the end of the interface allows engineers to get access to the device for low level functions, such as firmware updates (which are done daily) and command line options. The device has an additional functionality of allowing engineers to step through the storage processor’s commands in a debug capacity. This allows HP to diagnose how the storage processor is handling a command step-by-step by jogging it through the sequencing on the current command set.
This opportunity to see a number of the quality assurance labs for HP was an interesting experience. While I do not have anything to compare it to, it was an impressive operation.
Disclosure: The event organizer has provided me attendance to the event which included meals, airfare and accommodations. The opinions mentioned above are a result of an in-person demonstration of the technologies discussed. Read my full blogger disclosure here.