In my software testing work, I’ve found that video configuration consistency is a critical factor in a system’s performance and behavior under test, and an important part of delivering technology as intended. Traditional resolution requirements are only one piece of video’s role, which can encompass everything from the video adapter, driver version, resolution and refresh rate to console interaction mechanisms and the number of monitors running a given configuration.
I recently talked with David Wren, managing director of PassMark Software, a leading provider of system benchmark software, about the importance of video configuration. Wren notes that video configuration matters whenever you need to ensure a solution functions as expected. One example on Windows-based systems was the recent trouble with Nvidia drivers on Windows Vista: some video adapter models have had ten or more driver releases since November 2007.
Beyond driver issues, which Wren says can be notoriously buggy, 2D and 3D video performance can be a point to watch in the test process. For software that is not graphics intensive, such as text-based screen activity, performance is relatively unimportant. Once graphics are introduced, video performance becomes critical; examples include playing high-quality video, gaming, and rendering graphics with design software. PassMark maintains a website of video card performance benchmarks, updated daily, and the results for the same tests on different platforms vary widely.
These factors and more are especially relevant in today’s media-rich technology landscape. High-quality media will likely perform differently under different video configurations, making benchmarks a critical part of the test process.
Judith Myerson recently published a developerWorks article on cloud computing versus grid computing. It’s a fantastic read if you’re new to the topic, and provides tons of links for where to learn more. While introducing the topic, Myerson lays out the basic relationship between the two. In the article, when describing some of the basic differences, she points out that one of the big advantages (among several) of cloud computing is on-demand resource provisioning.
Myerson then goes on to discuss some issues to consider. Those include threshold policy, interoperability issues, hidden costs, unexpected behavior, and security concerns. As you continue your research in cloud computing, you’re likely to find that it’s security concerns in particular that get people talking. In a recent Computerworld article on cloud computing not being fully enterprise-ready, author Craig Stedman points out that many cloud computing vendors might not be ready to support corporate IT due to security concerns. A more detailed article on the topic by Jaikumar Vijayan goes further.
In the article, Vijayan takes an in-depth look at the World Privacy Forum’s recent report and highlights some of the bigger issues, like concerns about data privacy, security, and confidentiality. Specific issues discussed include dealing with privacy regulations, data-disclosure terms and conditions, legal protections, issues relating to the host’s geographic location, and the readier access to data that hosting can give government agencies and parties involved in legal disputes. If Myerson’s article is a must-read as an introduction to the topic, then Vijayan’s article is a must-read for the concerns around privacy issues.
Scheduling, documentation, tracking bugs and, most critically, producing great products with a small development staff are the headaches Dan McRae, Comet Solutions Inc.’s software engineering manager, faces every day. I talked with McRae recently about how he’s combined best practices and lifecycle management products to get the upper hand on quality assurance (QA) and deliver strong products on deadline. Indeed, Comet estimates that these tactics have improved software quality by 25% and time-to-market by 10 to 20%.
McRae oversees a team of about 10 developers, two of whom are dedicated to QA, although all do some QA work. They produce about four major and four point releases each year for Albuquerque, N.M.-based Comet, which provides design engineering workgroup software primarily aimed at enabling early simulation.
“Our biggest challenge is that we only have two full-time QA people,” McRae said. “The rest of us generate more features than they can thoroughly test. As manager, I direct what QA’s focus is. Some of those directions are based on word of mouth from our developers; but I also get direction from our chief technical officer, who points to things that are high-risk and need to be addressed. There are always challenges because there are always the latest defects that have the highest priority, and several of those pop before you can fully QA the previous ones.”
Comet’s development team uses agile development and automated tools to reduce the development and QA burden and improve quality overall.
Using waterfall methods for product development became very difficult as software and users’ needs became more complex. Under waterfall, it took too long to handle customer requirements and changes. Today, McRae’s team uses such agile techniques as daily standups to discuss current relevant issues, working on a two-week iteration and having release planning meetings based on Scrum practices.
While moving to agile processes, the development team found that its homegrown tools couldn’t evolve to improve the team’s ability to track defects, feature releases and scheduling.
“We had been using a home-concocted system for tracking defects and feature requests internally ourselves, and it wasn’t able to help us keep up with changes in customer requirements or QA,” said McRae. “That system didn’t have scheduling abilities, so planning was done on a kind of ad hoc basis.”
After scouting around for better tools, Comet’s team did a trial run on Rally Software’s Agile lifecycle management solutions. The trial led to adoption.
“Rally gave us a more robust system for defect tracking and feature request tracking. It also added scheduling, so we can schedule our development efforts, give a structured process to the collection, evaluation and implementation of feature requests as well as defects,” said McRae. “That gave us the opportunity to look ahead and plan our time out, knowing what features could be done in what time. Rally provided us with flexibility to swap and replace as needed.”
Today, when a new business need arises while development is in progress, McRae can look at the project plan in Rally and “see that we need to add certain features that require certain resources and determine what we can take out that was equivalent. Rally gave us the platform for making that judgment call without the guesswork that was part of the process before.”
Though the team has made great productivity gains, there are always improvements needed. Documentation is one area “that’s always a challenge,” McRae said.
“Any released software needs documentation, but if you don’t have a functioning product then documentation isn’t worth anything. There’s always the balancing act between building features and working on the product and trying to find time and the appropriate resources for the right level of documentation to be done. It’s something we don’t have a specific process to handle yet.”
McRae plans to look at Rally’s features to get some documentation metrics, such as finding which user stories and defects were completed. But he’d love to get his hands on an automated documentation tool. Any suggestions?
Having trouble deciding which software development methodology to use in your projects? Right now you have a lot of tried and tested methods for software development projects. Here are some of my rationales for choosing particular methodologies, focusing mostly on agile, waterfall, incremental iteration and continuous integration.
Compared to projects in other industries, software development projects are very different and, in most cases, more complex. In other industries, many projects are well defined from the very beginning: all the requirements are known, and how and when each task will be executed is known in advance. The only concerns on such projects are whether purchased goods will arrive on time, whether the quality of a supplied part meets specifications, and so on.
Software development projects, on the other hand, are rarely well defined. In most cases, requirements keep changing during the entire course of software development project.
In my experience, there are some exceptions in which requirements are set and rigid. That’s when I’ve used the waterfall model. Waterfall fits when requirements are frozen, and no changes are allowed. Don’t use waterfall if the requirements in your software project are not going to be clear even after, say, 25% of the project has already been executed.
When my customers want additional requirements to be incorporated throughout the project, then I go with the agile model. Agile methods allow additional requirements and/or changes in requirements to be incorporated in any phase of the software development project, whether the project is in the design phase, building phase, testing phase or even just before the deployment phase.
Agile is great in many ways, but I’ve run into serious quality assurance (QA) issues when using agile methods. What can go wrong? Well, going back and changing design and then incorporating those changes in code — which you often do in agile development — can make the software build unstable. A quick change in design leaves the design vulnerable to defects.
I’ve found that melding agile and waterfall into the incremental iteration model provides flexibility and QA in software product development. Here, requirements may come from two sources: release planning for the next versions of the product, and requests coming from customers.
With the incremental iteration model, we can divide all requirements into manageable groups. For each set of requirements, we make a branch of the base application model and develop that branch as per its requirements. We can have several branches of the base application model, each developed against its own requirement set. This way, every branch has a fixed set of requirements for which all phases of the project are well defined; no requirement changes are allowed in a branch until its iteration is complete. After the iteration is complete, the branch can be merged into the main application base, making its features available to the main application. If those features are not needed in the main application, the branch is kept separate and never merged.
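The branch-per-requirement-set workflow above can be sketched with a version control system. This is a hypothetical illustration using git; the branch and file names are invented, and real projects would branch from an established repository rather than a fresh one:

```shell
set -e
# Work in a throwaway repository so the sketch is self-contained.
tmp=$(mktemp -d); cd "$tmp"
git init -q
git checkout -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base application"

# Branch the base application for one frozen set of requirements.
git checkout -q -b req-set-1
echo "feature for requirement set 1" > feature1.txt
git add feature1.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "implement requirement set 1"

# Iteration complete: merge the branch back so its features join the
# main application. (If the features weren't wanted, we'd simply skip
# this merge and keep the branch separate.)
git checkout -q main
git merge -q req-set-1
ls    # feature1.txt is now part of the main application
```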
A slightly different model is the continuous integration model. Here instead of making branches, all the new code developed in the next release of the software goes directly to the main build.
In the continuous integration model, the software design should be based on an open standard, so that whenever new requirements come, the design allows for integrating features that fulfill them. In object-oriented programming, we have parent classes on which child classes are built; the child classes have child classes of their own, and we could end up with more than 20 layers of classes. This is fine as long as the design stays open to all foreseeable requirements. The problem comes when the initial design was not kept open: later on, the design may not allow for integrating functionality that cannot be built on the existing base design.
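A toy sketch of what "keeping the design open" means in practice: new requirements plug in as subclasses of an existing base, with no branch and no change to the base design itself. The class names here are invented for illustration:

```python
import json


class ReportExporter:
    """Base design shipped in the main build; open for extension."""

    def export(self, data):
        raise NotImplementedError


class CsvExporter(ReportExporter):
    """Original requirement: comma-separated output."""

    def export(self, data):
        return ",".join(str(v) for v in data)


class JsonExporter(ReportExporter):
    """A later requirement integrates directly into the main build:
    no branch, and no modification of ReportExporter."""

    def export(self, data):
        return json.dumps(list(data))


print(CsvExporter().export([1, 2, 3]))
print(JsonExporter().export([1, 2, 3]))
```

The inverse problem described above is a base class that hard-codes its output format: a new format then forces edits to the base design rather than an addition beside it.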
In most of my projects today, I use the continuous integration model. This model can be adopted for any kind of software development and is a good compromise between the rigid waterfall model and the often too-fluid concepts of agile methods. Here we are getting all the benefits of waterfall — better quality, well defined process, predictability — and the benefits of agile methods, such as incorporating new requirements quickly instead of waiting for the entire project to be completed and then adding new functionality to incorporate new requirements.
Using small iterations is good from another perspective. Testing of small iterations makes it easier to test the application. In small iterations, the testing cycle is also small. Small project sizes make all aspects of project management easier to manage.
There are many software development models from which to choose and ways to mix the best features of those models. For me, a good open design coupled with an incremental iteration model makes a good choice for projects.
Last year I was involved in an audit where I was told that my testing team would benefit from using a large enterprise test case management tool. I found the recommendation interesting, because we primarily do exploratory testing. We have some scripted test cases, but not many and really only for regression testing purposes. My experience with most test case management tools (including the one that was recommended) tells me that the tool would not help us.
When I sit back and reflect on my experiences as a test manager, I know where the recommendation comes from. I know that on large IT projects, where we have (literally) armies of testers, managing test artifacts and test execution results can be a huge challenge. I also know the power and influence the larger tools vendors have on our industry. And I know why an auditor, someone who’s likely never done exploratory testing, would make the recommendation. And perhaps we could benefit from some more sophisticated tooling. We do struggle a bit in that area.
This came together for me this morning while I read Joe Farah’s article CM: THE NEXT GENERATION – Don’t Let Our Mistakes Hold Us Back on cmCrossroads. It’s not a testing article, but it is a fantastic look at the problems we as software developers and project teams encounter when we look at tooling. In the article, Farah lists out five mistakes and then spends time with each one:
- Mistake #1: We assume that the most expensive, or most prevalent solution is the best
- Mistake #2: We develop ever more complex branching strategies to deal with ever more complex CM.
- Mistake #3: Common backplane schemes reduce effort and costs, but they don’t come close to well designed monolithic tools
- Mistake #4: We hold on to a system because of our familiarity with it, regardless of the cost
- Mistake #5: Change-based CM, not File-based
As I read the article, I found myself saying, “Yea! That’s a testing problem too!” I’m empathetic to the problems he describes, because I believe we also have those problems. We do assume the most expensive tool is the best solution, and we hold on to tools once implemented, even if they don’t work. We do develop complex test-case-to-requirement hierarchies in an attempt to deal with an often misconceived concept of traceability. We do track test cases and defects, instead of coverage and risk, because they are easier to track with the tools we have.
Farah summarizes nicely:
Ask yourself, are the processes and tools I’m using today for CM substantially different than the ones I was using 5 years ago, 10 years ago, 15 years ago? Or are they just packaged prettier, with add-ons to make some of the admin-intensive tasks more tolerable? I know of 2 or 3 companies who can (and many that will) say their tools are significantly better than 3 years ago. I know of many, many more who have just spruced up the package, or perhaps customized it for another sector. Nothing wrong with that, but that won’t get us ahead. Let’s hope the 2 or 3 can stick with it and bring about the change needed to drag the rest of the industry forward. That’s exactly what I think is going to happen … and then the pace will quicken.
This is a problem that I’ve heard testing book author and expert Cem Kaner point out on a regular basis when he gives conference talks. While programming methods have changed drastically over the last 20 years, software-testing methods have remained largely the same. The tools that support programmers have changed drastically, but the tools that support testers (with the possible exception of areas like performance, security and mobile) are again largely the same.
I’d love to see a couple of testing tool vendors “drag the rest of the industry forward.”
I’m always excited when I stumble across an area which is an intersection of two of my favorite topics. Recently, I started reading Applied Security Visualization by Raffael Marty. In the book, Marty introduces the concepts and techniques of network visualization and explains how you can use that information to identify patterns in the data and possible emerging vulnerabilities and attacks. It’s the perfect merger of data visualization (a topic fellow SearchSoftwareQuality.com expert Karen Johnson has me hooked on) and security.
This morning, I stumbled across an ITworld article Marty published earlier this month on getting started with security visualization. In the article Marty provides three simple must-dos and don’ts:
The three “must-dos” from the article:
- Learn about visualization: It’s important for security people to understand the basics of visualization. Learn a bit about perception and good practices for generating effective graphs. Learn about which charts to use for which kinds of use cases and data. This is the minimum you should know about visualization.
- Understand your data: Visualization is not a magic method that will explain the contents of a given data set. Without understanding the underlying data, you can’t generate a meaningful graph and you won’t be able to interpret the graphs generated.
- Get to know your environment: I can be an expert in firewalls and know all there is to know about a specific firewall’s logs. However, if you give me a visualization of a firewall log, I won’t be able to tell you much or help you figure out what you should focus on. Context is important. You need to know the context in which the logs were generated. What are the roles of the machines on the network, what are some of the security policies, what type of traffic is normal, etc. You can use visualization to help understand the context, but there are things you have to know up front.
And the three “don’ts”:
- Don’t get scared: The topic of security visualization is a big one. You have to know a lot of things from visualization to security. Start small. Start with some data that you know well. Start with some simple use cases and explore visualization slowly.
- Don’t do it all at once: Start with a small data set. Maybe a few hundred log lines. Once you are happy with the results you get for a small data set, increase the size and see what that does to your visualization. Still happy? Increase the size some more until you end up with the complete data set.
- Don’t do it yourself: If you’re in charge of data analysis and you aren’t the data owner (meaning that you don’t understand the application that generates the data intimately well) you should get help from the data owner. Have the application developers or other experts help you understand the data and create the visuals together with you.
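Marty’s “start small” advice can be made concrete: before any graphing, aggregate a few hundred log lines into a summary you actually understand. This is a hypothetical sketch; the firewall log format and field layout are invented for illustration:

```python
from collections import Counter

# A tiny, hand-made sample of firewall-style log lines (invented format).
log_lines = [
    "DENY tcp 10.0.0.5 -> 192.168.1.2:22",
    "ALLOW tcp 10.0.0.5 -> 192.168.1.2:443",
    "DENY tcp 10.0.0.9 -> 192.168.1.2:22",
    "DENY udp 10.0.0.7 -> 192.168.1.3:53",
]

# Count denied connections per destination port -- the kind of summary
# you would later feed into a bar chart or treemap once you trust it.
denied_ports = Counter(
    line.rsplit(":", 1)[1] for line in log_lines if line.startswith("DENY")
)

print(denied_ports.most_common())
```

Once a summary like this matches what you know about the environment, scale the input up and only then reach for a visualization tool.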
If you’d like to read more on the topic (and see some cool examples) check out Raffael Marty’s blog.
In my work as a project manager, test manager and consultant, I’ve often used the earned value management (EVM) technique to provide status reports during the project execution cycle, among other things. In this post, I share some of the things I’ve learned about it, hoping to help people who have found it difficult to use EVM on software development/testing projects.
I’ve found that EVM can be a very useful technique for software development and testing projects. EVM shows how the budget, resources and time are being spent on the project and whether the project is going in the right direction. It also shows what corrections may be needed to bring the project back on track. Without EVM, project tracking, monitoring, control and reporting are done on an ad hoc basis: nobody, including the project stakeholders and the project team itself, knows exactly where the project is or where it is heading at any given time during execution.
In EVM, the stress is on the project baseline. Before the project starts, a baseline project plan is developed; at this stage all steps, tasks, resources, time durations and budgets are determined. As execution progresses, the budget, time and resources actually consumed by each task are compared against these baseline values. This comparison shows whether any task is consuming more or less budget, time or resources than planned, and the same comparison is made for the project as a whole. This information helps in tracking and controlling projects.
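The baseline comparison boils down to a handful of standard EVM formulas. Here is a minimal sketch for a single task; the figures are hypothetical:

```python
# Baseline and actuals for one task (all figures invented for illustration).
planned_value = 6_000   # PV: budgeted cost of work scheduled to date
earned_value = 5_000    # EV: budgeted cost of work actually completed
actual_cost = 5_500     # AC: money actually spent to date

# Variances: negative means behind schedule / over budget.
schedule_variance = earned_value - planned_value   # SV = EV - PV
cost_variance = earned_value - actual_cost         # CV = EV - AC

# Indices: below 1.0 means behind schedule / over budget.
spi = earned_value / planned_value                 # schedule performance index
cpi = earned_value / actual_cost                   # cost performance index

print(schedule_variance, cost_variance, round(spi, 2), round(cpi, 2))
# Here SV = -1000 and SPI ≈ 0.83: the task is behind schedule, and
# CV = -500 with CPI ≈ 0.91: it is also running over budget.
```

The same four numbers, rolled up across all tasks, give the project-wide comparison described above.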
One challenge with EVM is that it doesn’t work when requirements are not clear or well defined at the initial stage of the project. When requirements keep changing during a software development/testing project, you can’t set realistic baseline values, and if the baseline values are not realistic, the EVM method cannot work.
Still, EVM can be successfully implemented on software development/testing projects. Suppose we are following the waterfall model. In this case, most aspects of the project are fixed at the beginning, which means we have good baseline figures that will not change. In such a case, EVM can be implemented.
Now, let’s talk about EVM and the agile model. Suppose we have realized that a lot of changes will take place in requirements throughout the project. The project plan will keep changing to incorporate those requirement changes in different phases, so we have no option but to follow an agile method. But how can we define a realistic baseline for such a project? There are two approaches:
- In one, you have a project where requirements keep coming and are integrated into the base application model; in other words, an incremental integration model.
- The other model is the incremental iterative model, in which requirements are grouped into separate small sets. Branches in the main application are made, and a new build is made in each of these branches to fulfill these sets of requirements.
In the case of the incremental iterative model, we can divide all requirements into manageable groups. Then, for each set of requirements, we make a branch of the base application model and develop that branch as per its requirements. We can have several branches of the base application model, each developed against its own requirement set. This way, every branch has a set of requirements for which all phases of the project are well defined, and no requirement changes are allowed in a branch until its iteration is complete.
In the case of the incremental integration model, the additional design is made based on new requirements. The new design and the new code developed for the next release of the software go directly into the main build.
Using these methods, we freeze the requirements and can thus establish a good baseline for each of these small iterative projects. We can now easily set budgets, timelines and resources for each of these smaller projects and their associated tasks. When execution of these iterations starts, we can easily track and monitor them.
Finally, there’s another situation in which EVM is relatively simple to implement: software projects using commercial off-the-shelf (COTS) software. If customization of the software package is minimal or not required, the project team mostly deals with tasks like installing and configuring the package’s modules as per requirements, creating data and finally testing the entire implemented system. In such a scenario, software development is minimal and most requirements are not prone to change. This makes EVM implementation on such projects viable.
Summing up, I advise project leaders to check out EVM. In my experience, software projects often fail because of requirement creep. Using EVM will make software projects manageable and reduce the chances of project failure.
Earlier this month Lisa Crispin and Janet Gregory, authors of Agile Testing: A Practical Guide for Testers and Agile Teams, published an article called “Testers: The Hidden Resource.” While the article doesn’t appear to be written for testers, it’s nonetheless an interesting read. If you’re still not quite sure where testers might be able to play a role on your projects, it might be a place to start.
One quote I found particularly interesting talked about whole team commitment:
What do we mean by this “whole team” commitment? Testers work with programmers to turn their test ideas into automated tests that become part of the regression test suite. The whole team becomes responsible for keeping the test suite “running green”; that is, keeping the tests passing. The regression suite (including all unit and functional tests) allows the team to refactor continually to keep the code clean and to minimize technical debt. Testers contribute their specialized skills for developing robust test cases, but the entire team gets involved with designing testable code, automating and executing tests. A team commitment to principles, values, and practices that promote quality will result in well-designed code and keep maintenance costs low. Good test coverage means that changes are easier and faster to implement.
I think automated test coverage (unit, acceptance, functional, etc.) is a great place for testers and programmers to come together. I’ve also found that having programmers pair with testers during exploratory testing can help. Not only do the programmers provide meaningful insights into the testing taking place, it also gives the tester an opportunity to illustrate testing techniques that get programmers thinking about risks that can’t necessarily be addressed with automated tests. That in turn can create dialogue around the best way for the project team to address those risks.
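The collaboration Crispin and Gregory describe, turning a tester’s test ideas into automated regression tests, can be sketched in a few lines. This is a hypothetical example; the function under test and its rules are invented for illustration:

```python
import unittest


def apply_discount(price, percent):
    """Function under test: apply a percentage discount, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)


class DiscountRegressionTests(unittest.TestCase):
    """A tester's boundary-value ideas, captured as permanent regression
    tests that the whole team keeps 'running green'."""

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_no_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_out_of_range_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(
        DiscountRegressionTests
    )
    unittest.TextTestRunner(verbosity=2).run(suite)
```

The tester contributes the boundary cases (full discount, zero discount, invalid input); the programmer wires them into the suite that runs on every build.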
In general, I’m not a big fan of “selling” testing to the teams I’m working with when I’m a tester or test manager. Instead I show them what value I (or my team) can provide. If they don’t want to give me the opportunity to show them that value, that’s okay with me. I don’t want to work with people who don’t feel like they need the kind of feedback I can provide. To date I’ve only met a handful of programmers who don’t ask people to test their code. So I’ve found it relatively easy to break down any resistance a programmer might have to involving testers.
After discussing HP’s new Performance Center 9.5 release with voke analyst Theresa Lanowitz, I asked a user, Constellation Energy Group software engineer Srinivasa Margani, for his take on it.
Margani is happiest about two additions to HP Performance Center 9.5: HP Protocol Advisor and HP LoadRunner.
“Protocol Advisor is the best from 9.5. When it comes to the administration side, I liked the project-level privileges. We were looking for this kind of facility for a long time.”
Also, Margani will use 9.5’s Results Trending feature to compare tests. “It’s a very easy way to compare the results and avoids a lot of manual errors.”
HP LoadRunner will take a lot of the guesswork out of running load tests. Margani welcomes all the help he can get in that area, having learned the hard way: “Don’t rush to do a load test. Think twice about pros and cons before kicking off a load test.”
Constellation and Margani started out using Mercury ITG, a change and demand management application, and moved to HP Performance and Quality Center after HP acquired Mercury a few years ago. Margani heads a group of three developers and four system administrators.
In general, Margani has found HP Performance Center easy to implement and user-friendly. “It was never difficult to create or run a load test using HP Performance Center or LoadRunner,” he told me.
On the other hand, the complexity of Performance Center makes it difficult for Margani to get and keep his development team up to speed on how to write test scripts and accurately execute them. He’s had to hire third-party consultants in the past to run the application testing phase for high-profile and time-constrained projects.
HP’s addition of some hefty features to its new HP Performance Center 9.5 release, announced today, is further evidence that HP has done the right things with the acquired Mercury software testing line, according to Theresa Lanowitz, founder of analyst firm voke inc.
“Mercury found a good home in HP,” Lanowitz told me yesterday. “Being on the later release numbers of HP Performance Center and Quality Center speaks to the longevity that these tools have on the market.”
Rather than just focus on the HP release in our conversation, Lanowitz discussed the key trends that it signifies, and that’s what this post covers. That said, here are some of the basics about the announcement:
- HP Performance Center release 9.5, an updated suite of performance testing software, is available now as a product or through HP Software as a Service, the latter approach being a conduit toward creating a consolidated quality management program.
- HP LoadRunner load testing software is now part of Performance Center, enabling checking application performance against business requirements during the testing cycle.
- HP today also updated its Application Lifecycle Management services, bringing more features that help IT organizations create Centers of Excellence (CoE) to increase the quality of applications.
“With this release, HP continues tearing down the walls between development, QA, IT operations and business analysts,” Lanowitz said. She continued:
HP is giving people ways to work together via dashboards, that sort of thing, that allow the lifecycle to not be linear — to not be just about development. It’s about transforming the lifecycle to take on all of these aspects and make sure these barriers are broken down between all the parts of the IT and business organizations.
Lanowitz sees people using Performance Center today to prioritize the apps that they have to build out as an enterprise IT organization and centralize their efforts to make sure they’re using all their skills and resources correctly. Lanowitz expects to see fewer software projects using department-centric methodologies and a blend of best-of-breed tools.
“The economic climate and complexity of projects is creating more interest in standardizing on one set of tools that can be used across the project and, most likely, the enterprise,” Lanowitz said.
The new HP Performance Center 9.5 release, as well as the strong and continued work by IBM on its Rational line and Microsoft on Visual Studio, shows the maturity of the movement away from point software development products and the sticking power of the trend toward wall-less, business-centric application lifecycle management processes.
Each acquired company had a sweet spot where they lived, Lanowitz said. For instance, Mercury was focused on the testing side, and HP has extended from there to operations-centric tools. Rational was more focused on developers, and IBM has expanded to include testers, IT operations managers and more.
In Lanowitz’s view, global lifecycle solutions help businesses build productive, efficient software application organizations whose projects fulfill business needs. They open the door to automating more processes and tearing down productivity-breaking walls between software, IT and business departments.