Software Quality Insights

March 2, 2009  3:24 PM

The next generation of testing tools

Michael Kelly

Last year I was involved in an audit where I was told that my testing team would benefit from using a large enterprise test case management tool. I found the recommendation interesting, because we primarily do exploratory testing. We have some scripted test cases, but not many and really only for regression testing purposes. My experience with most test case management tools (including the one that was recommended) tells me that the tool would not help us.

When I sit back and reflect on my experiences as a test manager, I know where the recommendation comes from. I know that on large IT projects, where we have (literally) armies of testers, managing test artifacts and test execution results can be a huge challenge. I also know the power and influence the larger tool vendors have on our industry. And I know why an auditor, someone who’s likely never done exploratory testing, would make the recommendation. And perhaps we could benefit from some more sophisticated tooling. We do struggle a bit in that area.

This came together for me this morning while I read Joe Farah’s article CM: THE NEXT GENERATION – Don’t Let Our Mistakes Hold Us Back on cmCrossroads. It’s not a testing article, but it is a fantastic look at the problems we as software developers and project teams encounter when we look at tooling. In the article, Farah lists out five mistakes and then spends time with each one:

  • Mistake #1: We assume that the most expensive, or most prevalent, solution is the best
  • Mistake #2: We develop ever more complex branching strategies to deal with ever more complex CM
  • Mistake #3: Common backplane schemes reduce effort and costs, but they don’t come close to well-designed monolithic tools
  • Mistake #4: We hold on to a system because of our familiarity with it, regardless of the cost
  • Mistake #5: Change-based CM, not file-based

As I read the article, I found myself saying, “Yea! That’s a testing problem too!” I’m empathetic to the problems he describes, because I believe we have them as well. We do assume the most expensive tool is the best solution, and we do hold on to tools once implemented, even if they don’t work. We do develop complex test-case-to-requirement hierarchies in an attempt to satisfy an often misconceived notion of traceability. We do track test cases and defects, instead of coverage and risk, because they are easier to track with the tools we have.

Farah summarizes nicely:

Ask yourself, are the processes and tools I’m using today for CM substantially different than the ones I was using 5 years ago, 10 years ago, 15 years ago? Or are they just packaged prettier, with add-ons to make some of the admin-intensive tasks more tolerable? I know of 2 or 3 companies who can (and many that will) say their tools are significantly better than 3 years ago. I know of many, many more who have just spruced up the package, or perhaps customized it for another sector. Nothing wrong with that, but that won’t get us ahead. Let’s hope the 2 or 3 can stick with it and bring about the change needed to drag the rest of the industry forward. That’s exactly what I think is going to happen … and then the pace will quicken.

This is a problem that I’ve heard testing book author and expert Cem Kaner point out on a regular basis when he gives conference talks. While programming methods have changed drastically over the last 20 years, software-testing methods have remained largely the same. The tools that support programmers have changed drastically, but the tools that support testers are, again (with the possible exception of areas like performance, security, and mobile), largely the same.

I’d love to see a couple of testing tool vendors “drag the rest of the industry forward.”

February 26, 2009  1:58 PM

Better security through better visualization

Michael Kelly

I’m always excited when I stumble across an area which is an intersection of two of my favorite topics. Recently, I started reading Applied Security Visualization by Raffael Marty. In the book, Marty introduces the concepts and techniques of network visualization and explains how you can use that information to identify patterns in the data and possible emerging vulnerabilities and attacks. It’s the perfect merger of data visualization (a topic fellow expert Karen Johnson has me hooked on) and security.

This morning, I stumbled across an ITworld article Marty published earlier this month on getting started with security visualization. In the article Marty provides three simple must-dos and three don’ts:

The three “must-dos” from the article:

  • Learn about visualization: It’s important for security people to understand the basics of visualization. Learn a bit about perception and good practices for generating effective graphs. Learn about which charts to use for which kinds of use cases and data. This is the minimum you should know about visualization.
  • Understand your data: Visualization is not a magic method that will explain the contents of a given data set. Without understanding the underlying data, you can’t generate a meaningful graph and you won’t be able to interpret the graphs generated.
  • Get to know your environment: I can be an expert in firewalls and know all there is to know about a specific firewall’s logs. However, if you give me a visualization of a firewall log, I won’t be able to tell you much or help you figure out what you should focus on. Context is important. You need to know the context in which the logs were generated. What are the roles of the machines on the network, what are some of the security policies, what type of traffic is normal, etc. You can use visualization to help understand the context, but there are things you have to know up front.

And the three “don’ts”:

  • Don’t get scared: The topic of security visualization is a big one. You have to know a lot of things from visualization to security. Start small. Start with some data that you know well. Start with some simple use cases and explore visualization slowly.
  • Don’t do it all at once: Start with a small data set. Maybe a few hundred log lines. Once you are happy with the results you get for a small data set, increase the size and see what that does to your visualization. Still happy? Increase the size some more until you end up with the complete data set.
  • Don’t do it yourself: If you’re in charge of data analysis and you aren’t the data owner (meaning that you don’t understand the application that generates the data intimately well) you should get help from the data owner. Have the application developers or other experts help you understand the data and create the visuals together with you.
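Marty’s “start small” advice can be sketched in a few lines of code: parse a handful of firewall-style log lines and aggregate denied connections per source address, the kind of summary that would then feed a bar chart. The log format and addresses below are made up for illustration; real firewall logs differ by vendor.

```python
from collections import Counter

# Hypothetical, simplified firewall log lines: "action src_ip dst_ip dst_port".
# Treat this as a stand-in for a small data set you already understand.
log_lines = [
    "DENY 10.0.0.5 192.168.1.10 22",
    "ALLOW 10.0.0.7 192.168.1.10 443",
    "DENY 10.0.0.5 192.168.1.11 22",
    "DENY 10.0.0.9 192.168.1.10 3389",
]

# Count denied connections per source IP -- the aggregation step that
# precedes any bar chart or treemap.
denies = Counter(
    line.split()[1] for line in log_lines if line.startswith("DENY")
)
for src, count in denies.most_common():
    print(src, count)
```

Once a summary like this makes sense on a few hundred lines, scaling the input up (as Marty suggests) is mostly a matter of swapping in a real log parser.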

If you’d like to read more on the topic (and see some cool examples) check out Raffael Marty’s blog.

February 25, 2009  7:39 PM

Using EVM in waterfall, agile software projects: Hows, pros and cons

Ashfaque Ahmed

In my work as a project manager, test manager and consultant, I’ve often used the earned value management (EVM) technique to provide status reports during the project execution cycle, among other things. In this post, I share some of the things I’ve learned about it, hoping to help people who have found it difficult to use EVM on software development/testing projects.

I’ve found that EVM can be a very useful technique for software development and testing projects. EVM shows how the budget, resources and time are being spent on the project and whether the project is going in the right direction. It also shows what corrections may be needed to bring the project back on track. Without EVM, project tracking, monitoring, control and reporting are done on an ad hoc basis. Nobody, including the project stakeholders and the project team itself, knows exactly where the project is or where it is heading at any given time during execution.

In EVM, the stress is on the project baseline. Before the project starts, a baseline project plan is developed. At this stage all steps, tasks, resources, time durations for each task and budgets are determined. As execution progresses, the budget, time and resources actually consumed by each task are compared against this baseline. This comparison shows whether any task is consuming more or less budget, time or resources than planned; the same comparison is made for the project as a whole. This information helps in tracking and controlling the project.
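The baseline comparison described above is conventionally expressed with the standard EVM quantities: planned value (PV), earned value (EV) and actual cost (AC), from which schedule and cost variances and indices are derived. A minimal sketch in Python (the figures below are invented for illustration):

```python
# Minimal sketch of the standard earned value management (EVM) formulas.
# PV = budgeted cost of work scheduled to date, EV = budgeted cost of work
# actually performed, AC = actual cost incurred, all in the same units.

def evm_metrics(pv, ev, ac):
    """Return the basic EVM variances and indices as a dict."""
    return {
        "schedule_variance": ev - pv,   # SV > 0 means ahead of schedule
        "cost_variance": ev - ac,       # CV > 0 means under budget
        "spi": ev / pv,                 # SPI < 1.0 means behind schedule
        "cpi": ev / ac,                 # CPI < 1.0 means over budget
    }

# Example: $10,000 of work was planned by this point, only $8,000 worth
# is actually complete, and $12,000 has been spent so far.
m = evm_metrics(pv=10_000, ev=8_000, ac=12_000)
print(m)  # behind schedule (SPI 0.8) and over budget (CPI ~0.67)
```

The indices are what make the comparison useful at a glance: an SPI or CPI drifting below 1.0 is the early-warning signal the post describes.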

One challenge with EVM is that it doesn’t work when requirements are not clear or well defined at the initial stage of the project. When requirements keep changing during a software development/testing project, you can’t set realistic baseline values, and without realistic baseline values the EVM method cannot work.

EVM can be successfully implemented on software development/testing projects. Suppose we are following the waterfall model. In that case, most aspects of the project are fixed at the beginning, which means we have good baseline figures that will not change. In such a case, EVM can be implemented.

Now, let’s talk about EVM and the agile model. Suppose we have realized that requirements will change a lot throughout the project. The project plan will then keep changing to incorporate those requirement changes in different phases of the project. So we have no option but to follow an agile method; but how can we define a realistic baseline for our project? There are two approaches:

  • In one, you have a project where requirements keep coming and are integrated into the base application model; in other words, an incremental integration model.
  • The other model is the incremental iterative model, in which requirements are grouped into separate small sets. Branches in the main application are made, and a new build is made in each of these branches to fulfill these sets of requirements.

In the incremental iterative model, we divide all requirements into manageable groups. For each set of requirements, we make a branch of the base application model and develop that branch as per its requirements. We can have several branches of the base application model, each developed against its own set of requirements. Each branch thus has a well-defined set of requirements covering all phases of the project, and no requirement changes are allowed in a branch until its iteration is complete.

In the incremental integration model, additional design is made based on the new requirements. The new design and the new code developed in the next release of the software then go directly into the main build.

Using these methods, we freeze the requirements and thus can set a good baseline for each of these small iterative projects. We can then easily set the budget, timeline and resources for each of these smaller projects and for each associated task. When execution of an iteration starts, we can easily track and monitor it.

Finally, there’s another situation in which EVM is relatively simple to implement: software projects using commercial off-the-shelf (COTS) software. Here, if customization of the software package is not required or is minimal, the project team mostly deals with tasks like installing the package’s modules, configuring them as per requirements, creating data and finally testing the entire implemented system. In such a scenario, software development is minimal and most requirements are not prone to change, which makes EVM implementation viable on such projects.

Summing up, I advise project leaders to check out EVM. In my experience, software projects often fail because of requirement creep. Using EVM can make software projects more manageable and reduce the chances of project failure.

February 25, 2009  2:03 PM

Agile testers: The value of whole team commitment

Michael Kelly

Earlier this month Lisa Crispin and Janet Gregory, authors of Agile Testing: A Practical Guide for Testers and Agile Teams, published an article called “Testers: The Hidden Resource.” While the article doesn’t appear to be written for testers, it’s nonetheless an interesting read. If you’re still not quite sure where testers might be able to play a role on your projects, it might be a place to start.

One quote I found particularly interesting talked about whole team commitment:

What do we mean by this “whole team” commitment? Testers work with programmers to turn their test ideas into automated tests that become part of the regression test suite. The whole team becomes responsible for keeping the test suite “running green”; that is, keeping the tests passing. The regression suite (including all unit and functional tests) allows the team to refactor continually to keep the code clean and to minimize technical debt. Testers contribute their specialized skills for developing robust test cases, but the entire team gets involved with designing testable code, automating and executing tests. A team commitment to principles, values, and practices that promote quality will result in well-designed code and keep maintenance costs low. Good test coverage means that changes are easier and faster to implement.

I think automated test coverage (unit, acceptance, functional, etc.) is a great place for testers and programmers to come together. I’ve also found that having programmers pair with testers when doing exploratory testing can help. Not only do the programmers provide meaningful insights into the testing taking place, but pairing also gives the tester an opportunity to illustrate testing techniques that get programmers thinking about risks that can’t necessarily be addressed with automated tests. That in turn can create dialogue around the best way to address those risks as a project team.

In general, I’m not a big fan of “selling” testing to the teams I’m working with when I’m a tester or test manager. Instead I show them what value I (or my team) can provide. If they don’t want to give me the opportunity to show them that value, that’s okay with me. I don’t want to work with people who don’t feel like they need the kind of feedback I can provide. To date I’ve only met a handful of programmers who don’t ask people to test their code. So I’ve found it relatively easy to break down any resistance a programmer might have to involving testers.

February 25, 2009  1:51 PM

User sees high points in HP Performance Center 9.5; complexity an issue

Jan Stafford

After discussing HP’s new Performance Center 9.5 release with voke analyst Theresa Lanowitz, I asked a user, Constellation Energy Group software engineer Srinivasa Margani, for his take on it.

Margani is happiest about two additions to HP Performance Center 9.5: HP Protocol Advisor and HP LoadRunner.

“Protocol Advisor is the best from 9.5. When it comes to the administration side, I liked the project-level privileges. We were looking for this kind of facility for a long time.”

Also, Margani will use 9.5’s Results Trending feature to compare tests. “It’s a very easy way to compare the results and avoids lot of manual errors.”

HP LoadRunner will take a lot of the guesswork out of running load tests. Margani welcomes all the help he can get in that area, and he learned the hard way: “Don’t rush to do a load test. Think twice about pros and cons before kicking off a load test.”

Constellation and Margani started out using Mercury ITG, a change and demand management application, and moved to HP Performance and Quality Center after HP acquired Mercury a few years ago. Margani heads a group of three developers and four system administrators.

In general, Margani has found HP Performance Center easy to implement and user-friendly. “It was never difficult to create or run a load test using HP Performance Center or LoadRunner,” he told me.

On the other hand, the complexity of Performance Center makes it difficult for Margani to get and keep his development team up to speed on how to write test scripts and accurately execute them. He’s had to hire third-party consultants in the past to run the application testing phase for high-profile and time-constrained projects.

February 24, 2009  2:31 PM

HP Performance Center 9.5 and the suite life

Jan Stafford

HP’s addition of some hefty features to its new HP Performance Center 9.5 release, announced today, is further evidence that HP has done the right things with the acquired Mercury software testing line, according to Theresa Lanowitz, founder of analyst firm voke inc.

“Mercury found a good home in HP,” Lanowitz told me yesterday. “Being on the later release numbers of HP Performance Center and Quality Center speaks to the longevity that these tools have on the market.”

Rather than just focus on the HP release in our conversation, Lanowitz discussed the key trends that it signifies, and that’s what this post covers. That said, here are some of the basics about the announcement:

  • HP Performance Center release 9.5, an updated suite of performance testing software, is available now as a product or through HP Software as a Service, the latter approach being a conduit toward creating a consolidated quality management program.
  • HP LoadRunner load testing software is now part of Performance Center, enabling checking application performance against business requirements during the testing cycle.
  • HP today also updated its Application Lifecycle Management services, bringing more features that help IT organizations create Centers of Excellence (CoE) to increase the quality of applications.

“With this release, HP continues tearing down the walls between development, QA, IT operations and business analysts,” Lanowitz said. She continued:

HP is giving people ways to work together via dashboards, that sort of thing, that allow the lifecycle to not be linear — to not be just about development. It’s about transforming the lifecycle to take on all of these aspects and make sure these barriers are broken down between all the parts of the IT and business organizations.

Lanowitz sees people using Performance Center today to prioritize the apps that they have to build out as an enterprise IT organization and centralize their efforts to make sure they’re using all their skills and resources correctly. Lanowitz expects to see fewer software projects using department-centric methodologies and a blend of best-of-breed tools.

“The economic climate and complexity of projects is creating more interest in standardizing on one set of tools that can be used across the project and, most likely, enterprise,” Lanowitz said.

The new HP Performance Center 9.5 release, as well as the strong and continued work by IBM on its Rational line and Microsoft on Visual Studio, shows the maturity of the movement away from point software development products and the sticking power of the trend toward wall-less and business-centric application lifecycle management processes.

Each acquired company had a sweet spot where it lived, Lanowitz said. For instance, Mercury was focused on the testing side, and HP has extended from there to operations-centric tools. Rational was more focused on developers, and IBM has expanded to include testers, IT operations managers and more.

In Lanowitz’s view, global lifecycle solutions help businesses build productive, efficient software application organizations whose projects fulfill business needs. They open the door to automating more processes and tearing down productivity-breaking walls between software, IT and business departments.

February 20, 2009  7:53 PM

Best practices for requirements in agile: Forget about agile

Beth Pariseau

Yesterday I got together with Robin F. Goldsmith, president of consultancy Go Pro Management Inc. and our resident requirements expert, and asked him, “What are the best practices for gathering requirements in an agile environment?”

His advice? Don’t worry about how agile methodologies change the requirements gathering process. “The best practices for gathering requirements are the same regardless of the methodology you use,” Goldsmith said.

Goldsmith went on to explain that agile isn’t necessarily a totally new way of doing things. “The premise of agile development is to focus on very small pieces and get them done. To my thinking that has always been the approach people have used when they’ve gotten things done,” he said.

The hard part is figuring out which pieces to work on, and for that you need “a structured and systematic way of understanding the big picture,” he said. “So the starting point for meaningful requirements discovery is to start at the top, to get the big picture. Then you analyze and prioritize and select those pieces that you want to drive down to more detail.”

According to Goldsmith, one of the common traps that IT pros fall into when discussing requirements is wanting to provide a solution — a product or system — before determining at a high level what the business really needs. Agile can actually exacerbate this issue.

“The problem with agile development is that it’s driven by a programmer, and the programmer doesn’t know to find out about business requirements. The programmer wants to find out what program they should write,” he said. “The context that they’re in, especially driven from the programming perspective, pushes them into saying, ‘What do you want me to build?’ not ‘What should what I build accomplish?'”

To complete any software project successfully, you need to work closely with your users and help them “understand what they need to accomplish before they try to settle on a solution,” Goldsmith said. “That’s true whether it’s agile or any other methodology.”

Got a question about requirements? Submit your question and Robin Goldsmith will answer it in his Ask the Expert section on

February 20, 2009  2:01 PM

PCI compliance and how PI affects your testing

Michael Kelly

In a recent E-Commerce Times article titled “Beyond the Audit: Maintaining a PCI-Compliant Environment,” Dave Shackleford lays out the fundamentals of maintaining ongoing compliance. In the article, he mentions achieving visibility into the environment, implementing rigorous change control, and managing the scope of what needs to be controlled.

However, early in the article he points out how compliance doesn’t make you secure:

Yet even if you pass the audit, doing so doesn’t automatically render your system secure, or even demonstrate an effort toward improving security. The recent breach at Heartland Payment Systems is, unfortunately, a shining example: The company had been audited and certified PCI-DSS compliant.

Fellow Software Quality Insights blogger Jack Danahy provided additional details on the Heartland Payment Systems story in his “Does PCI compliance make your data secure? Nope.” post.

So what does all this mean to testers? Are we slaves to the auditors? Helpless to help ourselves and our companies in the fight for data security? Are nebulous words and concepts like “change control” and “increased visibility” our only defense? In the words of Jack Danahy, “Nope.”

Here’s what you can do.

As software testers you have a hands-on understanding of the systems your company is using. More importantly, you have a working knowledge of the data those systems are using and how they are using it. In the fight for data security, it’s important that you understand what protected information is and how it needs to be managed. You can test (not document, but actually test) that the data storage is secure — whether it’s in a database, a flat-file, or some other format. You can track the data through a transaction and make sure it’s transported in secure formats and via secure transports. You can ensure that it’s not mistakenly getting written out to unsecured application log files. Because you’re doing the testing, you get to verify that the data is treated in a way that keeps it secure as the system processes it.
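As one concrete illustration of the “actually test” point above, a tester might scan application log output for card-number-like digit runs, using the Luhn checksum to cut down on false positives. The log line and field names below are invented; this is a sketch, not a complete PCI check:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to reduce false positives when hunting card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_like(text: str):
    """Return digit runs that look like card numbers (13-16 digits, Luhn-valid)."""
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_ok(c)]

# Hypothetical log excerpt -- 4111111111111111 is a well-known test PAN.
log_excerpt = "2009-02-20 12:01 order=9913 pan=4111111111111111 status=OK"
print(find_card_like(log_excerpt))  # ['4111111111111111'] leaked to the log
```

Running a scan like this over real application logs after a test pass is one way to turn “data shouldn’t end up in log files” from a documentation claim into a verified result.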

Will doing so keep all your customer data safe? It won’t. But it’s an important step — along with change control, data management policies and restricted access. It’s the step that you control: providing information to the rest of the project team about the PCI-compliance implications of their technical implementation of data management within the system you’re testing.

February 18, 2009  1:46 PM

Aragon nabs Krugle’s cool code search technology

Jan Stafford

In 2007 Krugle was a startup with a code-finding appliance, and I blogged about it, asking “Will Krugle be cool?” Obviously, San Mateo, Calif.-based Aragon Consulting Group does think Krugle’s software code search and analysis tool is cool, because it acquired Krugle this week.

Krugle’s current users won’t be left high and dry, according to Mel Badgett, Aragon’s marketing head. He told me that:

Aragon will also continue to support all current users of Krugle Enterprise and Krugle Search Technology, a broad group that includes developer communities, such as MSDN, and private companies with medium- to large-sized software development organizations.

Aragon will also offer a new product to Krugle users, as it is incorporating Krugle’s tools into its upcoming Next.0 Delivery Platform. The Krugle technology will enhance two Next.0 Delivery Platform features: Test Management and Test Strategist.

“Test Management provides a real-time assessment of project quality and release readiness, a first for the outsourcing world,” Badgett said. “Test Strategist intelligently maps code/code changes to test cases to minimize regressions and defect injection, another first for outsourcing.”

Stay tuned for Krugle in the cloud. Cloud computing is a next step for Next.0, according to Badgett. Aragon has built applications to facilitate management of ERP, CRM and HRM application instances hosted in the cloud. Planning and automation built into Next.0 will make “cloud testing tractable,” Badgett said.

February 18, 2009  1:42 PM

Thinking like a hacker

Michael Kelly

Jeff Feinman recently published a great article in SD Times titled “Think like a hacker.” The article collects statements about security testing from several vendors and ties them together around this basic theme:

From a technology standpoint, there are two main approaches for testing software for security, and they are well known to developers and testers. One is exercising the software from what many call the outside-in approach: testing to see how the application responds to a simulated attack. The second is more of an inside-out approach, which looks for coding patterns that would highlight vulnerabilities in the code.

In the limited security testing I’ve done, I’ve had luck with both approaches. It can be downright scary how poor application security really is. I’m not even a full-time security tester, and I can find my share of security bugs with little effort. And don’t get me wrong, that’s not a testament to my skill. That’s a testament to the state of software security — it’s bad.

As one example, I remember a number of years ago I was asked to evaluate a new Web store implementation for a company that sold products around potentially sensitive customer information (think credit reports, but not quite). As I evaluated the website, I looked at the code for the e-commerce portion of the site. As I reviewed the code, I got the feeling that whoever programmed this section of the site really knew what they were doing. They were pros, and I didn’t feel it was very likely that I’d be able to find any issues in that code without a lot of hard work.

Stepping back from the website a bit, I stopped thinking of security tests I could run, and instead thought about the company and how it would likely have run this project. I didn’t know for certain, since I wasn’t involved, but I’ve worked in similar situations in the past. They had likely outsourced this portion of the site to a company that’s done countless e-commerce implementations.

But then I asked, “What would this company be too cheap to outsource? What would they do themselves because they think it’s easy and there’s no risk?” I reviewed the site a second time, this time with those questions in mind. Instead of focusing on products, I looked at the supporting features. There was an Advanced Search function (everyone thinks they can do search), so I checked that out. Within seconds (really, my first test) I had information from the search function that I could use to access other parts of the site.

That’s why I really like that quote above from the article. In my one simple example of security testing, I took two approaches. First I tested to see how the application responded to simulated attacks (SQL injection, URL hacking, etc.). When that didn’t work, I switched gears and thought about coding and project patterns that would highlight vulnerabilities in the code.
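The “simulated attack” half of that quote can be sketched as a simple probe: send classic SQL injection payloads to an input such as a search field and watch the response for database error signatures. The payloads, error strings and sample responses below are illustrative only, and probing like this should only ever be done with the system owner’s authorization:

```python
# Sketch of an "outside-in" check: classic SQL injection payloads you might
# send to a search parameter, and a scan of the response for database error
# signatures. Payload and signature lists are illustrative, not exhaustive.

PAYLOADS = ["'", "\" OR \"1\"=\"1", "'; --"]

ERROR_SIGNATURES = [
    "sql syntax",            # MySQL-style error text
    "unclosed quotation",    # SQL Server-style error text
    "ora-00933",             # Oracle-style error code
]

def looks_injectable(response_body: str) -> bool:
    """True if the response leaks a database error, hinting at injection."""
    body = response_body.lower()
    return any(sig in body for sig in ERROR_SIGNATURES)

# Simulated responses stand in for real HTTP calls here.
clean = "<html>No results found for your search.</html>"
leaky = "<html>Error: You have an error in your SQL syntax near ''</html>"
print(looks_injectable(clean), looks_injectable(leaky))  # False True
```

A leaked error message is only a hint, not proof of exploitability, which is where the second, inside-out review of the code and project patterns comes in.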

Again from the article:

“You don’t want to be starting to think about testing security as you’re coming into a release candidate,” DeMarines said. “You want to be looking at this fairly upfront when most of the functionality has been implemented in a way that you can test it, and then figure out how to make it resistant to the kinds of threats the enterprise is worried about.”

Upfront indeed … the article provides some great places to start looking at integrating application security into your development process.
