Software Quality Insights


March 31, 2009  1:41 AM

Recession survival tips for project managers

Jan Stafford

In an economic downturn, project managers have to motivate teams that may be worried and overworked. In that situation, projecting optimism, confidence and an “anything is possible” attitude is a must, according to project management consultant and trainer Michelle LaBrosse. On the reality-check side, LaBrosse told me recently, PMs have to plan ahead more carefully than they ever have before.

LaBrosse, founder of Cheetah Learning, offered these tips that could help PMs prosper during lean times:

  • PMs who are resourceful, innovative thinkers are desperately needed during a recession, she said. This is the time to lead by example.
  • Assess your projects and commitments to see which are energizing and which are dragging you down. Said LaBrosse: “A simple way to figure it out is to ask yourself: ‘If I had to make the decision today to start this project, would I?’ If the answer is no, stop wasting your valuable resources on it.”
  • Seek out opportunities to do your own formal and informal learning. This isn’t the time to cut back on training. Keep engaging in activities ranging from informational interviews and podcasts to webinars and development courses.
  • This is also a key time to brush the dust off your resume. “Your resume should serve as a timeline of what you’ve been up to,” LaBrosse told SearchSoftwareQuality.com not long ago. “It should tell a story about your growth and experiences since your first real job. Look critically at your resume and make sure it weaves a story that sets you apart from others in your industry. What is your unique selling point? Is it that you are a programmer who also worked as a stand-up comedian? That could communicate that you think fast on your feet or can defuse situations by using humor. Whatever your story is, make sure it showcases your confidence in being at the top of your game.”
  • Networking is crucial now. Participate in trade associations, like the Project Management Institute. Not only is this a way to gain more skills, it can often be the key to your next job or project. “You need to expand your worldview,” she said.

On her blog, Everyday Project Management, LaBrosse offers these additional tips:

  • Make sure the commitments you are considering pursuing will make sense in another week, another month, and another year. “A good way I have found to do this sense making is with doing a project agreement on goals I am considering pursuing,” she wrote. She offers a free project agreement template, which requires registration.
  • Manage resources so you can complete your projects with a variety of nonfinancial capital, so that you aren’t waiting for the credit markets to unfreeze to finish the work.

LaBrosse’s company, Cheetah Learning, is based in Carson City, Nev., and offers Project Management Professional Exam training and other services.

March 27, 2009  4:17 PM

Using screen recorders as a lightweight form of documentation

Michael Kelly

Any time I find that I need to create training or reference material, I ask myself whether it makes sense to use some sort of screen capture utility instead. I find that video can be a better medium for some types of content, and, given current technology (both commercial and open source), it can be easier to update than extensive written documentation.

For example, on a past project I partnered with another tester and used a screen recorder to create small videos of how to perform common functions on the mainframe. We had a lot of people who needed to use the mainframe in support of their testing, but only rarely. So even if they learned something once, it might be weeks before they did it again, and if you’re like me, once you’ve done something only once, a week later you’ve forgotten how. Together, we created a series of 10 to 15 short videos (screen recordings with someone talking through the steps) that ran through the most common functions.

I’ve also used this technology to capture a record of my exploratory test execution. Many testers already take screenshots when they test; this is just an extension of that practice. With most tools, you can easily edit what you capture to pull out clips of video and compress them into small, portable files. This is great for attaching examples to defect tickets, asking for a peer review or a second opinion on something, or even pulling out an example of a complex test for a lunch and learn.

If you’re interested in giving something like this a try, the tools I use the most for this type of documentation include: BB TestAssistant, iShowU, Snagit, and CamStudio.


March 24, 2009  12:12 AM

Blu Age at EclipseCon: On agile, an agile project, reverse engineering

Jan Stafford

On the eve of EclipseCon, which opened today in Santa Clara, Calif., I interviewed NetFective Technology Group president Christian Champagne about agile adoption and legacy application reverse engineering, as well as a legacy app makeover that Blu Age Corp., a NetFective offshoot, helped a company do.

Today at EclipseCon, Blu Age introduced two new tools, Blu Age Reverse Modeling and Blu Age Modernization, to its product suite for reverse modeling and re-engineering legacy enterprise applications. These products complement the existing modules, Blu Age Build and Deliver; together they make up a platform for transforming existing code and data into UML 2 models that are independent of their original technology platform, in line with the Object Management Group’s Model Driven Architecture (MDA) standards.

In our interview, Champagne said that there is about an even split between waterfall and agile model users today, but he sees more and more waterfall users evaluating agile. Just as the economic downturn spurred IT managers to use virtualization to consolidate servers, he said, development groups are turning to agile “to become far more efficient, not only for a better return, but to keep their job or survive as a company.”

Champagne sees agile’s iterative development model as the cornerstone of more effective software testing and quality assurance (QA) processes. At each iteration, testing and QA ensure that the expected business needs are properly covered, and the next iteration of development isn’t started without full acceptance of the current one.

“What is important is that users directly approved the application during the development phase and not when development is almost finished and the budget already consumed,” Champagne said.

Champagne described a recent project in which a Blu Age team helped a company move a legacy application’s existing code and data into UML 2 models. The primary project goals were modernizing the application and enabling use of the agile process for reverse modeling. The company couldn’t find an out-of-the-box software package able to modernize the application to meet all of its functional and architectural requirements. After evaluating several options, the company chose to do PIM (platform-independent model) extraction with Blu Age’s MDA workbench.

The project’s biggest challenge was changing the mindset of IT and business people.

IT people have to accept that techniques and standards can automate 90% of their traditional activities, leaving the value-added work, such as enterprise architecture. Business people have to accept responsibility for the applications they ask to have developed: automated, near-instantaneous model transformations under agile put business people directly in front of their own requirements, so they can no longer blame IT for bad delivery. Both sides must recognize that they have to work together, and both must manage the accelerating pace of application delivery.

Once everyone was on board, the project moved forward quickly. The result, said Champagne, is “simplified project deployment, mainly due to the fact that agile and iterative AD facilitate end-user acceptance and increase functional accuracy of business-oriented applications.” Project delivery time was cut 50%.

This project spurred the company to move from waterfall to agile development overall. “At this stage, 60% of their new devs are agile, whatever the size — 300 Man/days up to 3000 Man/days,” Champagne said.

Looking ahead at development trends in general, Champagne foresees growing use of powerful but simple-to-use languages like PHP5, as well as automated software packages that can handle complex business needs with SOA (service-oriented architecture), Web services or RIA (rich Internet applications). Overall, he sees developers changing their ways of working and their roles in projects as iterative development is widely adopted.


March 23, 2009  1:07 PM

How test-driven development increases overall usability

Michael Kelly

Test-driven development (TDD) can be the path to not having to reinvent the wheel with every new test. In the test-driven development I’ve done, I’ve found that my tests force me to write a more manageable interface over time. Early in programming a class, I can get away with something simple that might not be intuitive. But the more tests I add (so I can add more and more functionality), the more refactoring I have to do to keep the previous tests passing. That continually forces me to think about the interfaces I have and the best way to test them, which gets me thinking about simplifying my previous crufty code. In the end, I have an interface that’s more usable for me, and I suspect that also makes it more usable for others.

The folks over at UX Booth recently posted an article on “How Test-Driven Development Increases Overall Usability.” As with all of their articles, it’s a well-researched and well-written piece. In it, they contrast testing at the user interface with testing at the application interfaces. It’s not an in-depth technical article, but it’s an interesting look at the topic, and I think they accurately express one of the core ideas of test-driven development: to “make the application more usable to everyone involved.”

Another great example of TDD increasing usability can be found in Dale Emery’s analysis of Brian Button’s article “TDD Defeats Programmer’s Block—Film at 11.” Dale points out Brian’s general pattern of naming his tests using stimulus, result, and context. The naming scheme makes the tests more readable and allows you to readily interpret what the code/tests are trying to accomplish. It’s another, more detailed example of the UX Booth idea of usability at the application interface. Dale further expands on this pattern in a follow-up post on the anatomy of responsibility.
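To make that naming pattern concrete, here’s a minimal JUnit 4 sketch. The Account class and test are my own invented illustration (not taken from Brian’s or Dale’s articles), but they show how a test name can carry stimulus, result and context:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class AccountTest {

        // Minimal hypothetical class under test, defined inline so the example compiles.
        static class Account {
            private int balance;
            Account(int openingBalance) { balance = openingBalance; }
            boolean withdraw(int amount) {
                if (amount > balance) return false; // reject overdrafts
                balance -= amount;
                return true;
            }
            int getBalance() { return balance; }
        }

        // Stimulus: withdrawing more than the balance.
        // Result: the withdrawal is rejected and the balance is unchanged.
        // Context: an account funded with 50.
        @Test
        public void withdrawMoreThanBalance_isRejected_onFundedAccount() {
            Account account = new Account(50);

            boolean succeeded = account.withdraw(75);

            assertFalse(succeeded);
            assertEquals(50, account.getBalance());
        }
    }

A failing test named this way reads like a broken requirement rather than a mystery, which is exactly the usability payoff Dale describes.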


March 17, 2009  1:20 PM

Upcoming software testing conference tutorials

Michael Kelly

The Association for Software Testing (AST) recently announced the tutorial lineup for its upcoming conference in Colorado Springs. It’s an impressive lineup, covering both mainstream topics (like agile) and more daring conference topics (like self-learning).

Looking at the AST tutorial lineup got me thinking. How does this compare to other conference tutorial lineups? And what are the current tutorial themes at conferences today? I suppose I could have looked at keynotes or track presentations, but I like tutorial topics because people often pay extra for them. That means they, in theory, might represent the topics people are most interested in.

A quick look at the upcoming Conference for the Association for Software Testing, Software Test and Performance Conference, and STAREAST yields the following breakdown (using my arbitrary topic classifications):

[Chart: Conference Breakdown (tutorial topics by conference)]

Test management covers building and managing the test organization, process improvement and metrics. It’s a broad classification (I know), but I didn’t want 30 categories. Test analysis includes any topic that provides a structured way to design tests, with the exception of exploratory testing, which I broke out into its own category because I wanted to see how often it was offered. Agile includes all the tutorials that have “agile” in the title; automation and performance are built out in a similar way.

So what might this tell us? If you group the numbers, you can see that we spend roughly equal amounts on management (25%), analysis (23% for analysis and exploratory testing), and coding (23% for automation, performance and database). I think that balance is important and I’m happy to see it emerge from the data. It’s a mix that shows the multifaceted problem of software testing.


March 12, 2009  6:36 PM

Figuring out what to document for operations

Michael Kelly

On several agile teams I’ve worked with, operations has been an afterthought. Even when operations personnel are captured as stakeholders upfront, many times the hand-off from development to production is … well, “less than coordinated.”

I think this happens for a handful of natural reasons:

  • The operations profile (interactions with other systems, infrastructure requirements, technology dependencies, configuration requirements and options, etc.) emerges as development unfolds over multiple iterations. That makes it difficult to look at the intermediate iterations, where the software may be going through rapid changes in direction, and invest in that type of documentation at the time. So the operations documentation often isn’t created until the later iterations (a transition sprint in Scrum, for example), where it’s a mad dash to remember everything that was built, capture it in one place, and deal with the realization that not much upfront thought went into how this software would live and breathe in production.
  • Some aspects of operations are emergent. They depend on the architecture and technology decisions made as the project unfolds, and they require extensive testing to measure. For example, system requirements (how much processing power, memory and disk space you’ll need), performance/availability/reliability characteristics, and troubleshooting guidelines all emerge naturally from the testing process. These often aren’t requirements you can simply choose; you have to figure them out through iterations of testing and experimentation.
  • There are many aspects of operations that most programmers on the team just won’t know much about. When should backups be done, and how? What needs to be done for monitoring, alarming and alerting? How will that integrate with the technologies currently used for those activities? Where will log files go, and how often will they be checked, stored or cleared?

I call those natural reasons because all of them will happen to some degree no matter how much planning you do or how you structure the team. Early in a project, when you most want to document aspects like these, decisions are most likely to change. Understanding operational criteria will always require testing and experimentation, which means it happens later in the project. And it’s unrealistic to expect programmers to be experts at programming and also experts at running data centers and administering systems.

So what can we do?

I think there are a few things that can be done:

  • As mentioned earlier, many teams invest in the idea of a transition iteration: a period of time dedicated to pulling it all together. They collect the test results, inventory the system requirements, and sit down with the operations folks to make any last-minute tweaks that render the software manageable in the target environment.
  • Other teams work with the operations team upfront to capture stories for the application’s requirements from an operations perspective. While those stories still likely won’t get done until later iterations, this can reduce the surprise and ensure that any big features are part of the product’s technical road map.
  • I also think it’s helpful for the programming team to have exposure to operations in the form of production support for the applications that get deployed. Through needing to support the products in the wild, they develop an appreciation for the common issues around troubleshooting and maintaining the software. This can be done a number of ways. Some teams create separate production support teams and have programmers rotate through that team. Others have the same team that develops the product support it in production. What’s important is that the team gets the gritty exposure to the problems.
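On the log-file question above, it’s also worth noting that some operational decisions can be captured executably once they’re made, so they don’t live only in a hand-off document. Here’s a minimal sketch using Java’s built-in java.util.logging; the path, size limit and generation count are invented values:

    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;

    public class OpsLogging {
        public static void main(String[] args) throws IOException {
            // Operational decisions made explicit: logs live under /var/log/myapp,
            // each file rotates at 10 MB, and at most 5 generations are kept
            // before the oldest is overwritten.
            FileHandler handler = new FileHandler("/var/log/myapp/app%g.log",
                    10000000, 5, true);
            Logger logger = Logger.getLogger("myapp");
            logger.addHandler(handler);
            logger.info("Application started");
        }
    }

A snippet like this won’t replace a conversation with the operations team, but it turns one of their questions into something the team can review and version alongside the code.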


March 10, 2009  7:19 PM

Why video modes matter in software testing and configuration

Rick Vanover

In my software testing work, I’ve found that video configuration consistency is a critical factor in a system’s performance and behavior under test conditions, and an important factor in delivering technology in the intended fashion. Traditional resolution requirements for a consistent experience are only one part of video’s critical role, which can encompass everything from the video adapter, driver version, video configuration, resolution and refresh rate to any console interaction mechanisms and the number of monitors running in that configuration.
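Before comparing results across machines, it can help to record the video configuration alongside each test run. Here’s a minimal sketch of one way to do that with Java’s standard AWT APIs; the output format is my own, and a refresh rate of 0 means the platform reported it as unknown:

    import java.awt.DisplayMode;
    import java.awt.GraphicsDevice;
    import java.awt.GraphicsEnvironment;

    public class VideoConfigLogger {
        public static void main(String[] args) {
            GraphicsDevice[] screens =
                    GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
            System.out.println("Monitors: " + screens.length);
            for (GraphicsDevice screen : screens) {
                DisplayMode mode = screen.getDisplayMode();
                // Resolution, color depth and refresh rate for each attached display.
                System.out.println(screen.getIDstring() + ": "
                        + mode.getWidth() + "x" + mode.getHeight() + ", "
                        + mode.getBitDepth() + "-bit, "
                        + mode.getRefreshRate() + " Hz");
            }
        }
    }

This won’t capture the driver version, which usually has to come from the operating system, but it’s enough to flag a test box running at the wrong resolution or refresh rate before results are compared.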

I recently talked with David Wren about the importance of video configuration. Wren is the managing director of PassMark Software, a leading provider of system benchmark software. Wren says that video configuration is important when ensuring a solution functions as expected. One example where this was an issue for Windows-based systems was the recent trouble with Nvidia drivers on Windows Vista; some video adapter models have had ten or more driver releases since November 2007.

Beyond drivers themselves, which Wren says can be notoriously buggy, 2D and 3D video performance can be a point to watch in the test process. For software implementations that are not graphics intensive, such as text-based screen activity, video performance is relatively unimportant; once graphics are introduced, it becomes critical. That can include playing high-quality video, running games, or rendering graphics with design software. PassMark maintains a website of video card performance benchmarks that is updated daily, and the results for the same tests on different platforms vary widely.

These factors and more are especially relevant in today’s media-rich technology landscape. High-quality media will likely perform differently where different video configurations are in use, making benchmarks a critical part of the test process.


March 10, 2009  12:55 PM

Privacy issues with cloud computing

Michael Kelly

Judith Myerson recently published a developerWorks article on cloud computing versus grid computing. It’s a fantastic read if you’re new to the topic, and it provides tons of links for learning more. While introducing the topic, Myerson lays out the basic relationship between the two and points out that one of the big advantages of cloud computing is on-demand resource provisioning.

Myerson then goes on to discuss some issues to consider. Those include threshold policy, interoperability issues, hidden costs, unexpected behavior, and security concerns. As you continue your research in cloud computing, you’re likely to find that it’s security concerns in particular that get people talking. In a recent Computerworld article on cloud computing not being fully enterprise-ready, author Craig Stedman points out that many cloud computing vendors might not be ready to support corporate IT due to security concerns. A more detailed article on the topic by Jaikumar Vijayan goes further.

In that article, Vijayan takes an in-depth look at the World Privacy Forum’s recent report and highlights some of the bigger issues: concerns about data privacy, security, and confidentiality. Specific issues discussed include dealing with privacy regulations, data-disclosure terms and conditions, legal protections, the host’s geographic location, and the readier access to data afforded to government agencies and parties involved in legal disputes. If Myerson’s article is a must-read introduction to the topic, then Vijayan’s article is a must-read on the privacy concerns.


March 9, 2009  12:41 PM

Agile, management tools help small team boost software quality

Jan Stafford

Scheduling, documentation, tracking bugs and, most critically, producing great products with a small development staff are the headaches Dan McRae, Comet Solutions Inc.’s software engineering manager, faces every day. I talked with McRae recently about how he’s combined best practices and lifecycle management products to get the upper hand on quality assurance (QA) and deliver strong products on deadline. Indeed, Comet estimates that these tactics have improved software quality by 25% and time-to-market by 10% to 20%.

McRae oversees a team of about 10 developers, two of whom are dedicated to QA, although all do some QA work. They produce about four major and four point releases each year for Albuquerque, N.M.-based Comet, which provides design engineering workgroup software primarily aimed at enabling early simulation.

“Our biggest challenge is that we only have two full-time QA people,” McRae said. “The rest of us generate more features than they can thoroughly test. As manager, I direct what QA’s focus is. Some of those directions are based on word of mouth from our developers; but I also get direction from our chief technical officer, who points to things that are high-risk and need to be addressed. There are always challenges because there are always the latest defects that have the highest priority, and several of those pop before you can fully QA the previous ones.”

Comet’s development team uses agile development and automated tools to reduce the development and QA burden and improve quality overall.

Using waterfall methods for product development became very difficult as the software and users’ needs grew more complex; under waterfall, it took too long to handle customer requirements and changes. Today, McRae’s team uses agile techniques such as daily standups to discuss current issues, two-week iterations, and release planning meetings based on Scrum practices.

While moving to agile processes, the development team found that its homegrown tools couldn’t evolve to improve the team’s ability to track defects, feature releases and scheduling.

“We had been using a home-concocted system for tracking defects and feature requests internally ourselves, and it wasn’t able to help us keep up with changes in customer requirements or QA,” said McRae. “That system didn’t have scheduling abilities, so planning was done on a kind of ad hoc basis.”

After scouting around for better tools, Comet’s team did a trial run on Rally Software’s Agile lifecycle management solutions. The trial led to adoption.

“Rally gave us a more robust system for defect tracking and feature request tracking. It also added scheduling, so we can schedule our development efforts, give a structured process to the collection, evaluation and implementation of feature requests as well as defects,” said McRae. “That gave us the opportunity to look ahead and plan our time out, knowing what features could be done in what time. Rally provided us with flexibility to swap and replace as needed.”

Today, when a new business need arises while development is in progress, McRae can look at the project plan in Rally and “see that we need to add certain features that require certain resources and determine what we can take out that was equivalent. Rally gave us the platform for making that judgment call without the guesswork that was part of the process before.”

Though the team has made great productivity gains, there are always improvements needed. Documentation is one area “that’s always a challenge,” McRae said.

“Any released software needs documentation, but if you don’t have a functioning product then documentation isn’t worth anything. There’s always the balancing act between building features and working on the product and trying to find time and the appropriate resources for the right level of documentation to be done. It’s something we don’t have a specific process to handle yet.”

McRae plans to look at Rally’s features to get some documentation metrics, such as finding which user stories and defects were completed. But he’d love to get his hands on an automated documentation tool. Any suggestions?


March 6, 2009  6:07 PM

Choosing, combining agile, waterfall, other software development models

Ashfaque Ahmed

Having trouble deciding which software development methodology to use in your projects? There are now many tried and tested methods for software development projects. Here are some of my rationales for choosing particular methodologies, focusing mostly on agile, waterfall, incremental iteration and continuous integration.

Compared with projects in other industries, software development projects are very different and, in most cases, more complex. In other industries, many projects are well defined from the very beginning: all the requirements are known, and how and when each task will be executed is known in advance. The main concerns are whether purchased goods will arrive on time, whether a supplied part meets specifications, and so on.

Software development projects, on the other hand, are rarely well defined. In most cases, requirements keep changing during the entire course of the project.

In my experience, there are some exceptions in which requirements are set and rigid. That’s when I’ve used the waterfall model. Waterfall fits when requirements are frozen, and no changes are allowed. Don’t use waterfall if the requirements in your software project are not going to be clear even after, say, 25% of the project has already been executed.

When my customers want additional requirements to be incorporated throughout the project, then I go with the agile model. Agile methods allow additional requirements and/or changes in requirements to be incorporated in any phase of the software development project, whether the project is in the design phase, building phase, testing phase or even just before the deployment phase.

Agile is great in many ways, but I’ve run into serious quality assurance (QA) issues when using agile methods. What can go wrong? Well, going back and changing the design and then incorporating those changes in code, which you often do in agile development, can make the software build unstable. A quick change in design leaves the design vulnerable to defects.

I’ve found that melding agile and waterfall into the incremental iteration model provides both flexibility and QA in software product development. Here, requirements come from two sources: release planning for the next versions of the product, and requests coming from customers.

With the incremental iteration model, we can divide all requirements into manageable groups. For each set of requirements, we make a branch of the base application and develop that branch per its requirements. There can be several branches of the base application, each developed against its own set of requirements, so every branch has a well-defined scope through all phases of the project. No changes in requirements are allowed in a branch until its iteration is complete. Once the iteration is complete, the branch can be merged into the main application base, making the features developed in that branch available to the main application. If those features are not needed in the main application, the branch is kept separate and never merged.

A slightly different model is the continuous integration model. Here, instead of making branches, all the new code developed for the next release of the software goes directly into the main build.

In the continuous integration model, the software design should be kept open, so that whenever new requirements come in, the design allows for integrating the features that fulfill them. In object-oriented programming, we build child classes on parent classes, those child classes have children of their own, and so on; we could end up with more than 20 layers of classes. That is fine as long as the design remains open to all foreseeable requirements. The problem comes when the initial design was not kept open, so the design later cannot accommodate functionality that falls outside the existing base design.
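To illustrate what an open design means here, consider this minimal Java sketch (the payment example is invented): a new requirement arrives as a new class written against an existing interface, so integrating it into the main build touches no existing code.

    // The base design exposes an interface that future requirements can implement.
    interface PaymentMethod {
        void charge(double amount);
    }

    class CardPayment implements PaymentMethod {
        public void charge(double amount) {
            System.out.println("Charging " + amount + " to card");
        }
    }

    // A later requirement (invoicing) plugs in without modifying CardPayment
    // or any code that depends only on the PaymentMethod interface.
    class InvoicePayment implements PaymentMethod {
        public void charge(double amount) {
            System.out.println("Issuing invoice for " + amount);
        }
    }

    public class Checkout {
        public static void main(String[] args) {
            PaymentMethod method = new InvoicePayment(); // selected at integration time
            method.charge(49.95);
        }
    }

When the base design is closed, with behavior hard-coded into one concrete class hierarchy, the same new requirement forces edits to existing classes, and that is when a continuous integration build becomes fragile.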

In most of my projects today, I use the continuous integration model. It can be adopted for any kind of software development and is a good compromise between the rigid waterfall model and the often too-fluid concepts of agile methods. Here we get the benefits of waterfall (better quality, a well-defined process, predictability) along with the benefits of agile methods, such as incorporating new requirements quickly instead of waiting for the entire project to be completed before adding new functionality.

Small iterations are good from another perspective: they make the application easier to test, since each testing cycle is short, and a small scope makes every aspect of project management easier to handle.

There are many software development models from which to choose and ways to mix the best features of those models. For me, a good open design coupled with an incremental iteration model makes a good choice for projects.

