Test-driven development (TDD) can be the path to not having to reinvent the wheel with every new test. In the test-driven development I’ve done, I’ve found that my tests force me to write a more manageable interface over time. Early in programming a class, I find I can get away with something simple that might not be intuitive. But the more tests I add (so I can add more and more functionality), the more refactoring I have to do to keep the previous tests passing. This continually forces me to think about the interfaces I have and the best way to test them, which gets me thinking about simplifying my previous crufty code. In the end, I have an interface that’s more usable for me, which I suspect also makes it more usable for others.
The folks over at UX Booth recently posted an article on “How Test-Driven Development Increases Overall Usability.” As with all of their articles, it’s a well-researched and well-written look at the topic. In the article, they contrast testing at the user interface with testing at the application interfaces. It’s not an in-depth technical article, but it’s an interesting look at the topic. I think they accurately express one of the core ideas of test-driven development—to “make the application more usable to everyone involved.”
Another great example of TDD increasing usability can be found in Dale Emery’s analysis of Brian Button’s article “TDD Defeats Programmer’s Block—Film at 11.” Dale points out Brian’s general pattern of naming his tests using stimulus, result, and context. The naming scheme makes the tests more readable and allows you to readily interpret what the code/tests are trying to accomplish. It’s another, more detailed example of the UX Booth idea of usability at the application interface. Dale further expands on this pattern in a follow-up post on the anatomy of responsibility.
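The stimulus/result/context naming pattern can be sketched in Python’s unittest. The `Stack` class and the test names below are my own illustrative choices, not taken from either article; they just show how the pattern makes each test’s intent readable at a glance:

```python
import unittest


class Stack:
    """A minimal stack used only to illustrate the naming pattern."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()


class StackTest(unittest.TestCase):
    # Each name reads as <stimulus>_<result>_<context>.
    def test_pop_returns_last_pushed_item_when_stack_has_items(self):
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual(stack.pop(), "b")

    def test_pop_raises_error_when_stack_is_empty(self):
        stack = Stack()
        with self.assertRaises(IndexError):
            stack.pop()
```

Reading the test names alone tells you what the class promises, which is exactly the usability-at-the-interface idea.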
The Association for Software Testing (AST) recently announced the tutorial lineup for its upcoming conference in Colorado Springs. It’s an impressive lineup, covering both mainstream topics (like agile) and more daring conference topics (like self-learning).
Looking at the AST tutorial lineup got me thinking. How does this compare to other conference tutorial lineups? And what are the current tutorial themes at conferences today? I suppose I could have looked at keynotes or track presentations, but I like tutorial topics because people often pay extra for them. That means they, in theory, might represent the topics people are most interested in.
A quick look at the upcoming Conference for the Association for Software Testing, Software Test and Performance Conference, and STAREAST yields the following breakdown (using my arbitrary topic classifications):
Test management covers building and managing the test organization, process improvement and metrics. It’s a broad classification (I know), but I didn’t want 30 categories. Test analysis includes any topic that provides a structured way to design tests (with the exception of exploratory testing which I broke out into its own category because I wanted to see how often that was offered). Agile includes all the tutorials that have “agile” in the title — automation and performance are built out in a similar way.
So what might this tell us? If you group the numbers, you can see that we spend roughly equal amounts of attention on management (25%), analysis (23% for analysis and exploratory testing), and coding (23% for automation, performance and database). I think that balance is important, and I’m happy to see it emerge from the data. It’s a mix that reflects the multifaceted problem of software testing.
With several agile teams I’ve worked with, thinking about operations has been an afterthought. Even when operations personnel are captured as stakeholders upfront, many times the hand-off from development to production is … well, “less than coordinated.”
I think this happens for a handful of natural reasons:
- The operations profile (interactions with other systems, infrastructure requirements, technology dependencies, configuration requirements and options, etc.) emerges as development unfolds over multiple iterations. That makes it difficult to invest in that type of documentation during the intermediate iterations, when the software may be going through rapid changes in direction. As a result, the operations documentation often isn’t created until later iterations (for example, a transition sprint in Scrum), where it’s a mad dash to remember everything that was built out, capture it in one place, and deal with the realization that not much upfront thought was put into how this software would live and breathe in production.
- Some aspects of operations are emergent. They depend on the architecture and technology decisions that are made as the project unfolds, and they require extensive testing to measure. For example, details around system requirements (how much processing power, memory and disk space you’ll need), characteristics of performance, availability and reliability, and troubleshooting guidelines all emerge naturally from the testing process. Some aspects of operations aren’t requirements you can simply choose; you have to figure them out through iterations of testing and experimentation.
- There are many aspects of operations most programmers on the team just won’t know much about. When should backups be done, and how? What needs to be done for monitoring, alarming and alerting? How will that integrate with the technologies currently being used for those activities? Where will log files go and how often will they be checked, stored or cleared?
I call those natural reasons because all of them will happen to some degree no matter how much planning you do or how you structure the team. In any project, the early stages, when you most want to document aspects like this, are exactly when decisions are most likely to change. Understanding operational criteria is always going to require testing and experimentation, which means it’s going to happen later in the project. And it’s unrealistic to expect programmers to be experts at programming and also experts at running data centers and administering systems.
So what can we do?
I think there are a couple of things that can be done:
- As mentioned earlier, many teams invest in the idea of a transition iteration: a period of time dedicated to pulling it all together. They collect the test results, inventory the system requirements, sit down with the operations folks, and perform any last-minute tweaks to make the software manageable in the target environment.
- Other teams work with the operations team upfront to capture stories for their requirements from an operations perspective. While those stories still likely won’t get done until later iterations, this can reduce surprises and ensure that any big features are part of the product’s technical road map.
- I also think it’s helpful for the programming team to have exposure to operations in the form of production support for the applications that get deployed. Through needing to support the products in the wild, they develop an appreciation for the common issues around troubleshooting and maintaining the software. This can be done a number of ways. Some teams create separate production support teams and have programmers rotate through that team. Others have the same team that develops the product support it in production. What’s important is that the team gets the gritty exposure to the problems.
In my software testing work, I’ve found that video configuration consistency is a critical factor in a system’s performance and behavior under test conditions, and in delivering technology in the intended fashion. Resolution requirements for a consistent experience are only one part of video’s role, which can encompass everything from the video adapter and driver version to resolution, refresh rate, console interaction mechanisms and the number of monitors running that configuration.
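One practical way to handle those dimensions is to record them per test environment so results can be tagged with the exact configuration they came from. This is a minimal sketch; the field names and sample values are illustrative assumptions, not tied to any particular tool:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class VideoConfiguration:
    """One test-environment record covering the video dimensions above."""
    adapter: str
    driver_version: str
    resolution: tuple  # (width, height) in pixels
    refresh_rate_hz: int
    monitor_count: int

    def label(self):
        # A stable label, handy for tagging test results per configuration.
        w, h = self.resolution
        return (f"{self.adapter}/{self.driver_version}/"
                f"{w}x{h}@{self.refresh_rate_hz}Hz/x{self.monitor_count}")


# A small matrix of configurations to repeat the same tests against.
configurations = [
    VideoConfiguration("AdapterA", "1.0.3", (1920, 1080), 60, 1),
    VideoConfiguration("AdapterA", "1.1.0", (1920, 1080), 60, 2),
]
```

Even a tiny matrix like this makes it obvious when a defect follows one driver version or monitor count rather than the software itself.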
I recently talked with David Wren, managing director of PassMark Software, a leading provider of system benchmark software, about the importance of video configuration. Wren notes that video configuration matters when verifying that a solution functions as expected. One example where this was an issue on Windows-based systems was the recent trouble with Nvidia drivers on Windows Vista; some video adapter models have had ten or more driver releases since November 2007.
Beyond driver issues, which Wren states can be notoriously buggy, 2D and 3D video performance can be a point to watch in the test process. For software implementations that are not graphics intensive, such as text-based screen activity, video performance is relatively unimportant. Once graphics are introduced, though, video performance is critical; examples include playing high-quality video, gaming, and graphics rendering with design software. PassMark maintains a daily-updated website of video card performance benchmarks, and the results for the same tests on different platforms vary widely.
These factors and more are especially relevant in today’s media-rich technology landscape. High-quality media will likely perform differently under different video configurations, making benchmarks a critical part of the test process.
Judith Myerson recently published a developerWorks article on cloud computing versus grid computing. It’s a fantastic read if you’re new to the topic, and provides tons of links for where to learn more. While introducing the topic, Myerson lays out the basic relationship between the two. In the article, when describing some of the basic differences, she points out that one of the big advantages (among several) of cloud computing is on-demand resource provisioning.
Myerson then goes on to discuss some issues to consider. Those include threshold policy, interoperability issues, hidden costs, unexpected behavior, and security concerns. As you continue your research in cloud computing, you’re likely to find that it’s security concerns in particular that get people talking. In a recent Computerworld article on cloud computing not being fully enterprise-ready, author Craig Stedman points out that many cloud computing vendors might not be ready to support corporate IT due to security concerns. A more detailed article on the topic by Jaikumar Vijayan goes further.
In the article, Vijayan takes an in-depth look at the World Privacy Forum’s recent report and highlights some of the bigger issues, like concerns about data privacy, security, and confidentiality. Specific issues discussed include dealing with privacy regulations, data-disclosure terms and conditions, legal protections, issues relating to the host’s geographic location, and the readier access to data afforded to government agencies as well as parties involved in legal disputes. If Myerson’s article is a must-read introduction to the topic, then Vijayan’s article is a must-read for the concerns around privacy issues.
Scheduling, documentation, tracking bugs and, most critically, producing great products with a small development staff are the headaches Dan McRae, Comet Solutions Inc.’s software engineering manager, faces every day. I talked with McRae recently about how he’s combined best practices and lifecycle management products to get the upper hand on quality assurance (QA) and deliver strong products on deadline. Indeed, Comet estimates that these tactics have improved software quality by 25% and time-to-market by 10% to 20%.
McRae oversees a team of about 10 developers, two of whom are dedicated to QA, although all do some QA work. They produce about four major and four point releases each year for Albuquerque, N.M.-based Comet, which provides design engineering workgroup software primarily aimed at enabling early simulation.
“Our biggest challenge is that we only have two full-time QA people,” McRae said. “The rest of us generate more features than they can thoroughly test. As manager, I direct what QA’s focus is. Some of those directions are based on word of mouth from our developers; but I also get direction from our chief technical officer, who points to things that are high-risk and need to be addressed. There are always challenges because there are always the latest defects that have the highest priority, and several of those pop before you can fully QA the previous ones.”
Comet’s development team uses agile development and automated tools to reduce the development and QA burden and improve quality overall.
Using waterfall methods for product development became very difficult as the software and users’ needs became more complex. Under waterfall, it took too long to handle customer requirements and changes. Today, McRae’s team uses such agile techniques as daily standups to discuss current issues, working in two-week iterations and holding release planning meetings based on Scrum practices.
While moving to agile processes, the development team found that its homegrown tools couldn’t evolve to improve the team’s ability to track defects, feature releases and scheduling.
“We had been using a home-concocted system for tracking defects and feature requests internally ourselves, and it wasn’t able to help us keep up with changes in customer requirements or QA,” said McRae. “That system didn’t have scheduling abilities, so planning was done on a kind of ad hoc basis.”
After scouting around for better tools, Comet’s team did a trial run on Rally Software’s Agile lifecycle management solutions. The trial led to adoption.
“Rally gave us a more robust system for defect tracking and feature request tracking. It also added scheduling, so we can schedule our development efforts, give a structured process to the collection, evaluation and implementation of feature requests as well as defects,” said McRae. “That gave us the opportunity to look ahead and plan our time out, knowing what features could be done in what time. Rally provided us with flexibility to swap and replace as needed.”
Today, when a new business need arises while development is in progress, McRae can look at the project plan in Rally and “see that we need to add certain features that require certain resources and determine what we can take out that was equivalent. Rally gave us the platform for making that judgment call without the guesswork that was part of the process before.”
Though the team has made great productivity gains, there are always improvements needed. Documentation is one area “that’s always a challenge,” McRae said.
“Any released software needs documentation, but if you don’t have a functioning product then documentation isn’t worth anything. There’s always the balancing act between building features and working on the product and trying to find time and the appropriate resources for the right level of documentation to be done. It’s something we don’t have a specific process to handle yet.”
McRae plans to look at Rally’s features to get some documentation metrics, such as finding which user stories and defects were completed. But he’d love to get his hands on an automated documentation tool. Any suggestions?
Having trouble deciding which software development methodology to use in your projects? There are a lot of tried and tested methods for software development projects to choose from. Here are some of my rationales for choosing particular methodologies, focusing mostly on agile, waterfall, incremental iteration and continuous integration.
Compared to projects in other industries, software development projects are very different and, in most cases, more complex. In other industries, many projects are well defined from the very beginning: all the requirements are known, and how and when each task will be executed is known in advance. The main concerns on such projects are whether purchased goods will arrive on time, whether the quality of a supplied part meets specifications, and so on.
Software development projects, on the other hand, are rarely well defined. In most cases, requirements keep changing during the entire course of software development project.
In my experience, there are some exceptions in which requirements are set and rigid. That’s when I’ve used the waterfall model. Waterfall fits when requirements are frozen, and no changes are allowed. Don’t use waterfall if the requirements in your software project are not going to be clear even after, say, 25% of the project has already been executed.
When my customers want additional requirements to be incorporated throughout the project, then I go with the agile model. Agile methods allow additional requirements and/or changes in requirements to be incorporated in any phase of the software development project, whether the project is in the design phase, building phase, testing phase or even just before the deployment phase.
Agile is great in many ways, but I’ve run into serious quality assurance (QA) issues when using agile methods. What can go wrong? Going back and changing the design and then incorporating those changes in code, which you often do in agile development, can make the software build unstable. A quick design change leaves the design vulnerable to defects.
I’ve found that melding agile and waterfall into the incremental iteration model provides both flexibility and QA in software product development. Here, requirements may come from two sources: release planning for the next versions of the product, and requests coming from customers.
With the incremental iteration model, we can divide all requirements into manageable groups. For each set of requirements, we make a branch of the base application model and develop that branch as per its requirements. We can have several branches of the base application, each developed against its own set of requirements, so every branch has a well-defined set of requirements for all phases of the project. No changes in requirements are allowed in a branch until the iteration is complete. After that, the branch can be merged into the main application base, making the features developed in the branch available to the main application. If those features are not needed in the main application, the branch is kept separate and never merged.
A slightly different model is the continuous integration model. Here instead of making branches, all the new code developed in the next release of the software goes directly to the main build.
In the continuous integration model, the software design should be based on an open standard, so that whenever new requirements come in, the design allows for integrating the features that fulfill them. In object-oriented programming, we have parent classes on which child classes are built; those child classes have child classes of their own, and we could end up with more than 20 layers of classes. This is fine as long as the design stays open to all foreseeable requirements. The problem comes when the initial design was not kept open, so the design later cannot accommodate functionality that the existing base design does not support.
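A design that stays “open” in this sense can be pictured with a small Python sketch: new requirements extend a base class rather than forcing changes to existing code. The class names here are hypothetical, chosen just to illustrate the idea:

```python
from abc import ABC, abstractmethod


class ReportExporter(ABC):
    """Open base: new formats plug in as subclasses, with no changes
    to existing code or to the callers that use the base type."""

    @abstractmethod
    def export(self, data: dict) -> str: ...


class CsvExporter(ReportExporter):
    def export(self, data: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in data.items())


# A later requirement adds a format by extending, not modifying, the design.
class KeyValueExporter(ReportExporter):
    def export(self, data: dict) -> str:
        return "\n".join(f"{k}={v}" for k, v in data.items())


def publish(exporter: ReportExporter, data: dict) -> str:
    # Callers depend only on the base class, so the design stays open.
    return exporter.export(data)
```

When the base abstraction is chosen well, each new requirement becomes a new subclass; when it isn’t, new requirements force edits across the existing hierarchy, which is exactly the closed-design problem described above.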
In most of my projects today, I use the continuous integration model. This model can be adopted for any kind of software development and is a good compromise between the rigid waterfall model and the often too-fluid concepts of agile methods. Here we get the benefits of waterfall — better quality, a well-defined process, predictability — along with the benefits of agile methods, such as incorporating new requirements quickly instead of waiting for the entire project to be completed before adding new functionality.
Small iterations are good from another perspective as well: they make the application easier to test, keep the testing cycle short, and make every aspect of project management easier to handle.
There are many software development models from which to choose and ways to mix the best features of those models. For me, a good open design coupled with an incremental iteration model makes a good choice for projects.
Last year I was involved in an audit where I was told that my testing team would benefit from using a large enterprise test case management tool. I found the recommendation interesting, because we primarily do exploratory testing. We have some scripted test cases, but not many and really only for regression testing purposes. My experience with most test case management tools (including the one that was recommended) tells me that the tool would not help us.
When I sit back and reflect on my experiences as a test manager, I know where the recommendation comes from. I know that on large IT projects, where we have (literally) armies of testers, managing test artifacts and test execution results can be a huge challenge. I also know the power and influence the larger tools vendors have on our industry. And I know why an auditor, someone who’s likely never done exploratory testing, would make the recommendation. And perhaps we could benefit from some more sophisticated tooling. We do struggle a bit in that area.
This came together for me this morning while I read Joe Farah’s article “CM: The Next Generation – Don’t Let Our Mistakes Hold Us Back” on cmCrossroads. It’s not a testing article, but it is a fantastic look at the problems we as software developers and project teams encounter when we look at tooling. In the article, Farah lists five mistakes and then spends time with each one:
- Mistake #1: We assume that the most expensive, or most prevalent solution is the best
- Mistake #2: We develop ever more complex branching strategies to deal with ever more complex CM
- Mistake #3: Common backplane schemes reduce effort and costs, but they don’t come close to well designed monolithic tools
- Mistake #4: We hold on to a system because of our familiarity with it, regardless of the cost
- Mistake #5: Change-based CM, not File-based
As I read the article, I found myself saying, “Yeah! That’s a testing problem too!” I’m empathetic to the problems he describes, because I believe we have those problems as well. We do assume the most expensive tool is the best solution, and we hold on to tools once they’re implemented, even if they don’t work. We do develop complex test-case-to-requirement hierarchies in an attempt to deal with an often misconceived concept of traceability. We do track test cases and defects, instead of coverage and risk, because they’re easier to track with the tools we have.
Farah summarizes nicely:
Ask yourself, are the processes and tools I’m using today for CM substantially different than the ones I was using 5 years ago, 10 years ago, 15 years ago? Or are they just packaged prettier, with add-ons to make some of the admin-intensive tasks more tolerable? I know of 2 or 3 companies who can (and many that will) say their tools are significantly better than 3 years ago. I know of many, many more who have just spruced up the package, or perhaps customized it for another sector. Nothing wrong with that, but that won’t get us ahead. Let’s hope the 2 or 3 can stick with it and bring about the change needed to drag the rest of the industry forward. That’s exactly what I think is going to happen … and then the pace will quicken.
This is a problem that I’ve heard testing book author and expert Cem Kaner point out on a regular basis when he gives conference talks. While programming methods have changed drastically over the last 20 years, software-testing methods have remained largely the same. The tools that support programmers have changed drastically, but the tools that support testers are, again (with the possible exception of areas like performance, security, and mobile), largely the same.
I’d love to see a couple of testing tool vendors “drag the rest of the industry forward.”
I’m always excited when I stumble across an area which is an intersection of two of my favorite topics. Recently, I started reading Applied Security Visualization by Raffael Marty. In the book, Marty introduces the concepts and techniques of network visualization and explains how you can use that information to identify patterns in the data and possible emerging vulnerabilities and attacks. It’s the perfect merger of data visualization (a topic fellow SearchSoftwareQuality.com expert Karen Johnson has me hooked on) and security.
This morning, I stumbled across an ITworld article Marty published earlier this month on getting started with security visualization. In the article Marty provides three simple must-dos and don’ts:
The three “must-dos” from the article:
- Learn about visualization: It’s important for security people to understand the basics of visualization. Learn a bit about perception and good practices for generating effective graphs. Learn about which charts to use for which kinds of use cases and data. This is the minimum you should know about visualization.
- Understand your data: Visualization is not a magic method that will explain the contents of a given data set. Without understanding the underlying data, you can’t generate a meaningful graph and you won’t be able to interpret the graphs generated.
- Get to know your environment: I can be an expert in firewalls and know all there is to know about a specific firewall’s logs. However, if you give me a visualization of a firewall log, I won’t be able to tell you much or help you figure out what you should focus on. Context is important. You need to know the context in which the logs were generated. What are the roles of the machines on the network, what are some of the security policies, what type of traffic is normal, etc. You can use visualization to help understand the context, but there are things you have to know up front.
And the three “don’ts”:
- Don’t get scared: The topic of security visualization is a big one. You have to know a lot of things from visualization to security. Start small. Start with some data that you know well. Start with some simple use cases and explore visualization slowly.
- Don’t do it all at once: Start with a small data set. Maybe a few hundred log lines. Once you are happy with the results you get for a small data set, increase the size and see what that does to your visualization. Still happy? Increase the size some more until you end up with the complete data set.
- Don’t do it yourself: If you’re in charge of data analysis and you aren’t the data owner (meaning that you don’t understand the application that generates the data intimately well) you should get help from the data owner. Have the application developers or other experts help you understand the data and create the visuals together with you.
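Following the “start small” advice, a first step might be to aggregate a handful of log lines before attempting any chart at all. This is a sketch only; the firewall-style log format below is invented for illustration:

```python
from collections import Counter

# A handful of made-up firewall-style log lines: action, source, dest:port.
log_lines = [
    "DENY 10.0.0.5 -> 203.0.113.9:22",
    "ALLOW 10.0.0.5 -> 203.0.113.9:443",
    "DENY 10.0.0.7 -> 203.0.113.9:22",
    "DENY 10.0.0.5 -> 203.0.113.9:22",
]


def denied_ports(lines):
    """Count denied connections per destination port -- the kind of
    aggregate you would feed into a simple bar chart as a first visual."""
    counts = Counter()
    for line in lines:
        action, rest = line.split(" ", 1)
        if action == "DENY":
            port = rest.rsplit(":", 1)[1]
            counts[port] += 1
    return counts
```

Once an aggregate like this looks sensible on a few hundred lines, growing the data set and handing the counts to a charting tool is a much smaller step.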
If you’d like to read more on the topic (and see some cool examples) check out Raffael Marty’s blog.
In my work as a project manager, test manager and consultant, I’ve often used the earned value management (EVM) technique to provide status reports during the project execution cycle, among other things. In this post, I share some of the things I’ve learned about it, hoping to help people who have found it difficult to use EVM on software development and testing projects.
I’ve found that EVM can be a very useful technique for software development and testing projects. EVM shows how the budget, resources and time are being spent on the project and whether the project is going in the right direction. It also shows what corrections may be needed to bring the project back on track. Without EVM, project tracking, monitoring, control and reporting are done on an ad hoc basis; nobody, including the project stakeholders and the project team itself, knows exactly where the project is or where it is heading at any given time during execution.
In EVM, the stress is on the project baseline. Before the start of the project, a baseline project plan is developed. At this stage, all steps, tasks, resources, time durations for executing each task, and budgets are determined. As project execution progresses, the budget, time and resources consumed by each task are compared against these baseline values, both task by task and for the project as a whole. This comparison shows whether any task is consuming more or less budget, time or resources than planned, which helps in tracking and controlling the project.
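The baseline comparison EVM performs boils down to a few standard formulas over planned value (PV), earned value (EV) and actual cost (AC): schedule variance = EV - PV, cost variance = EV - AC, plus the SPI and CPI ratios. A minimal sketch, with hypothetical numbers:

```python
def earned_value_metrics(pv, ev, ac):
    """Standard EVM comparisons against the baseline.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost (actual cost of work performed)
    """
    return {
        "schedule_variance": ev - pv,  # negative: behind schedule
        "cost_variance": ev - ac,      # negative: over budget
        "spi": ev / pv,                # < 1.0: behind schedule
        "cpi": ev / ac,                # < 1.0: over budget
    }


# Example: work budgeted at 100 units is 40% complete but has cost 50 units,
# so the task is both behind schedule (SPI 0.4) and over budget (CPI 0.8).
metrics = earned_value_metrics(pv=100, ev=40, ac=50)
```

The same function works at the task level or summed across the whole project, which is exactly the two-level comparison described above.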
One challenge with EVM is that it doesn’t work when requirements are not clear or well defined at the initial stage of the project. When requirements keep changing during a software development or testing project, you can’t set realistic baseline values, and if the baseline values are not realistic, the EVM method cannot work.
EVM can be successfully implemented on software development and testing projects. Suppose we are following the waterfall model. In this case, most aspects of the project are fixed at the beginning, which means we have good baseline figures that will not change. In such a case, EVM can be implemented.
Now, let’s talk about EVM and the agile model. Suppose we have realized that a lot of requirement changes will take place throughout the project. Because of those changes, the project plan will also keep changing to incorporate the adjustments required in different phases. So we have no option but to follow an agile method, but how can we define a realistic baseline for our project? There are two approaches:
- In one, you have a project where requirements keep coming and are integrated into the base application model; in other words, an incremental integration model.
- The other model is the incremental iterative model, in which requirements are grouped into separate small sets. Branches in the main application are made, and a new build is made in each of these branches to fulfill these sets of requirements.
In the case of the incremental iterative model, we can divide all requirements into manageable groups. Then, for each set of requirements, we make a branch of the base application model and develop that branch as per its requirements. We can have several branches of the base application model, each developed against its own set of requirements. This way, each branch has a set of requirements for which all phases of the project are well defined, and no changes in requirements are allowed in a branch until the iteration is complete.
In the case of the incremental integration model, additional design is done based on the new requirements, and the new design and the new code developed for the next release of the software go directly into the main build.
Using these methods, we freeze the requirements and can therefore establish a good baseline for each of these small iterative projects. We can then easily set budgets, timelines and resources for each of these smaller projects and for each of their associated tasks. When execution of these iterations starts, we can easily track and monitor them.
Finally, there’s another situation in which EVM is relatively simple to implement: software projects using commercial off-the-shelf (COTS) software. If customization of the software package is not required, or is minimal, then the project team mostly deals with tasks like installing the package’s modules, configuring them as per requirements, creating data and finally testing the entire implemented system. In such a scenario, software development is minimal and most requirements are not prone to change, which makes EVM implementation on such projects viable.
Summing up, I advise project leaders to check out EVM. In my experience, software projects often fail because of requirement creep. Using EVM will make software projects manageable and reduce the chances of project failure.