Software Quality Insights


February 24, 2009  2:31 PM

HP Performance Center 9.5 and the suite life

Jan Stafford

HP’s addition of some hefty features to its new HP Performance Center 9.5 release, announced today, is further evidence that HP has done the right things with the acquired Mercury software testing line, according to Theresa Lanowitz, founder of analyst firm voke inc.

“Mercury found a good home in HP,” Lanowitz told me yesterday. “Being on the later release numbers of HP Performance Center and Quality Center speaks to the longevity that these tools have on the market.”

Rather than just focus on the HP release in our conversation, Lanowitz discussed the key trends that it signifies, and that’s what this post covers. That said, here are some of the basics about the announcement:

  • HP Performance Center release 9.5, an updated suite of performance testing software, is available now as a product or through HP Software as a Service, the latter approach being a conduit toward creating a consolidated quality management program.
  • HP LoadRunner load testing software is now part of Performance Center, enabling teams to check application performance against business requirements during the testing cycle.
  • HP today also updated its Application Lifecycle Management services, bringing more features that help IT organizations create Centers of Excellence (CoE) to increase the quality of applications.

“With this release, HP continues tearing down the walls between development, QA, IT operations and business analysts,” Lanowitz said. She continued:

HP is giving people ways to work together via dashboards, that sort of thing, that allow the lifecycle to not be linear — to not be just about development. It’s about transforming the lifecycle to take on all of these aspects and make sure these barriers are broken down between all the parts of the IT and business organizations.

Lanowitz sees people using Performance Center today to prioritize the apps that they have to build out as an enterprise IT organization and centralize their efforts to make sure they’re using all their skills and resources correctly. Lanowitz expects to see fewer software projects using department-centric methodologies and a blend of best-of-breed tools.

“The economic climate and complexity of projects is creating more interest in standardizing on one set of tools that can be used across the project and, most likely, enterprise,” Lanowitz said.

The new HP Performance Center 9.5 release — as well as the strong and continued work by IBM on its Rational line and Microsoft on Visual Studio — shows the maturity of the movement away from point software development products and the sticking power of the trend toward wall-less and business-centric application lifecycle management processes.

Each acquired company had a sweet spot where it lived, Lanowitz said. For instance, Mercury was focused on the testing side, and HP has extended from there to operations-centric tools. Rational was more focused on developers, and IBM has expanded to include testers, IT operations managers and more.

Global lifecycle solutions help businesses build productive, efficient software application organizations whose projects fulfill business needs, Lanowitz said. They open the door to automating more processes and tearing down productivity-breaking walls between software, IT and business departments.

February 20, 2009  7:53 PM

Best practices for requirements in agile: Forget about agile

Beth Pariseau

Yesterday I got together with Robin F. Goldsmith, president of consultancy Go Pro Management Inc. and our resident requirements expert, and asked him, “What are the best practices for gathering requirements in an agile environment?”

His advice? Don’t worry about how agile methodologies change the requirements gathering process. “The best practices for gathering requirements are the same regardless of the methodology you use,” Goldsmith said.

Goldsmith went on to explain that agile isn’t necessarily a totally new way of doing things. “The premise of agile development is to focus on very small pieces and get them done. To my thinking that has always been the approach people have used when they’ve gotten things done,” he said.

The hard part is figuring out which pieces to work on, and for that you need “a structured and systematic way of understanding the big picture,” he said. “So the starting point for meaningful requirements discovery is to start at the top, to get the big picture. Then you analyze and prioritize and select those pieces that you want to drive down to more detail.”

According to Goldsmith, one of the common traps that IT pros fall into when discussing requirements is wanting to provide a solution — a product or system — before determining at a high level what the business really needs. Agile can actually exacerbate this issue.

“The problem with agile development is that it’s driven by a programmer, and the programmer doesn’t know to find out about business requirements. The programmer wants to find out what program they should write,” he said. “The context that they’re in, especially driven from the programming perspective, pushes them into saying, ‘What do you want me to build?’ not ‘What should what I build accomplish?'”

To complete any software project successfully, you need to work closely with your users and help them “understand what they need to accomplish before they try to settle on a solution,” Goldsmith said. “That’s true whether it’s agile or any other methodology.”

Got a question about requirements? Submit your question and Robin Goldsmith will answer it in his Ask the Expert section on SearchSoftwareQuality.com.


February 20, 2009  2:01 PM

PCI compliance and how protected information (PI) affects your testing

Michael Kelly

In a recent E-Commerce Times article titled “Beyond the Audit: Maintaining a PCI-Compliant Environment,” Dave Shackleford lays out the basics for ongoing compliance fundamentals. In the article, he mentions achieving visibility in the environment, implementing rigorous change control, and managing the scope of what needs to be controlled.

However, early in the article he points out how compliance doesn’t make you secure:

Yet even if you pass the audit, doing so doesn’t automatically render your system secure, or even demonstrate an effort toward improving security. The recent breach at Heartland Payment Systems is, unfortunately, a shining example: The company had been audited and certified PCI-DSS compliant.

Fellow Software Quality Insights blogger Jack Danahy provided additional details on the Heartland Payment Systems story in his “Does PCI compliance make your data secure? Nope.” post.

So what does all this mean to testers? Are we slaves to the auditors? Helpless to help ourselves and our companies in the fight for data security? Are nebulous words and concepts like “change control” and “increased visibility” our only defense? In the words of Jack Danahy, “Nope.”

Here’s what you can do.

As software testers you have a hands-on understanding of the systems your company is using. More importantly, you have a working knowledge of the data those systems are using and how they are using it. In the fight for data security, it’s important that you understand what protected information is and how it needs to be managed. You can test (not document, but actually test) that the data storage is secure — whether it’s in a database, a flat-file, or some other format. You can track the data through a transaction and make sure it’s transported in secure formats and via secure transports. You can ensure that it’s not mistakenly getting written out to unsecured application log files. Because you’re doing the testing, you get to verify that the data is treated in a way that keeps it secure as the system processes it.
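One concrete check along these lines: scan the application's log output for card numbers that should never appear in plain text. Below is a minimal sketch in Python, not a complete PCI test; the logs directory, the file pattern and the 13-to-19-digit regex are assumptions you would adapt to your own system, with a Luhn check to filter out most ordinary numbers.

```python
import re
from pathlib import Path

# Candidate primary account numbers (PANs): 13 to 19 consecutive digits.
PAN_PATTERN = re.compile(r"\b\d{13,19}\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used by card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_logs_for_pans(log_dir="logs"):
    """Scan plain-text log files and report lines that look like unmasked card numbers."""
    findings = []
    for log_file in Path(log_dir).glob("**/*.log"):
        text = log_file.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for candidate in PAN_PATTERN.findall(line):
                if luhn_ok(candidate):
                    findings.append((str(log_file), lineno, candidate[:6] + "..."))
    return findings

if __name__ == "__main__":
    for path, lineno, masked in scan_logs_for_pans():
        print(f"Possible unmasked card number in {path}:{lineno} ({masked})")
```

Even a crude scan like this turns "card data shouldn't end up in log files" from a policy statement into something you can verify on every test cycle.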

Will doing so keep all your customer data safe? It won’t. But it’s an important step — along with change control, data management policies and restricted access. It’s the step that you control: providing information to the rest of the project team about the PCI-compliance implications of their technical implementation of data management within the system you’re testing.


February 18, 2009  1:46 PM

Aragon nabs Krugle’s cool code search technology

Jan Stafford

In 2007 Krugle was a startup with a code-finding appliance, and I blogged about it, asking “Will Krugle be cool?” Obviously, San Mateo, Calif.-based Aragon Consulting Group does think Krugle’s software code search and analysis tool is cool, because it acquired Krugle this week.

Krugle’s current users won’t be left high and dry, according to Mel Badgett, Aragon’s marketing head. He told me that:

Aragon will also continue to support all current users of Krugle Enterprise and Krugle Search Technology — a broad group that includes developer communities, such as MSDN, and private companies with medium- to large-sized software development organizations.

Aragon will also offer a new product to Krugle users, as it is incorporating Krugle’s tools into its upcoming Next.0 Delivery Platform. The Krugle technology will enhance two Next.0 Delivery Platform features: Test Management and Test Strategist.

“Test Management provides a real-time assessment of project quality and release readiness — a first for the outsourcing world,” Badgett said. “Test Strategist intelligently maps code/code changes to test cases to minimize regressions and defect injection — another first for outsourcing.”

Stay tuned for Krugle in the cloud. Cloud computing is a next step in Next.0, according to Badgett. Aragon has built applications to facilitate management of ERP, CRM and HRM application instances hosted in the cloud. Planning and automation capabilities that will make “cloud testing tractable” have been built into Next.0, Badgett said.


February 18, 2009  1:42 PM

Thinking like a hacker

Michael Kelly

Jeff Feinman recently published a great article in SD Times titled “Think like a hacker.” The article collects statements about security testing from several vendors and ties them together around this basic theme:

From a technology standpoint, there are two main approaches for testing software for security, and they are well known to developers and testers. One is exercising the software from what many call the outside-in approach: testing to see how the application responds to a simulated attack. The second is more of an inside-out approach, which looks for coding patterns that would highlight vulnerabilities in the code.

In the limited security testing I’ve done, I’ve had luck with both approaches. It can be downright scary how poor application security really is. I’m not even a full-time security tester, and I can find my share of security bugs with little effort. And don’t get me wrong, that’s not a testament to my skill. That’s a testament to the state of software security — it’s bad.

As one example, I remember a number of years ago I was asked to evaluate a new Web store implementation for a company that sold products around potentially sensitive customer information (think credit reports, but not quite). As I evaluated the website, I looked at the code for the e-commerce portion of the site. As I reviewed the code, I got the feeling that whoever programmed this section of the site really knew what they were doing. They were pros, and I didn’t feel it was very likely that I’d be able to find any issues in that code without a lot of hard work.

Stepping back from the website a bit, I stopped thinking of security tests I could run, and instead thought about the company and how it would likely have run this project. I didn’t know for certain, since I wasn’t involved, but I’ve worked in similar situations in the past. The company had likely outsourced this portion of the site to a firm that’s done countless e-commerce implementations.

But then I asked, “What would this company be too cheap to outsource? What would they do themselves because they think it’s easy and there’s no risk?” I reviewed the site a second time, this time with those questions in mind. Instead of focusing on products, I looked at the supporting features. There was an Advanced Search function (everyone thinks they can do search), so I checked that out. Within seconds (really, my first test) I had information from the search function that I could use to access other parts of the site.

That’s why I really like that quote above from the article. In my one simple example of security testing, I took two approaches. First I tested to see how the application responded to simulated attacks (SQL injection, URL hacking, etc.). When that didn’t work, I switched gears and thought about coding and project patterns that would highlight vulnerabilities in the code.
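To make the first of those two approaches concrete, here is a minimal sketch of the kind of simulated-attack probing I'm describing, written in Python with the requests library. The URL, the parameter name and the error signatures are all hypothetical; real penetration testing goes far beyond this, and you should only point it at systems you're authorized to test.

```python
import requests

# Hypothetical target and parameter name -- substitute a system you are authorized to test.
SEARCH_URL = "https://staging.example.com/advanced-search"

# A few classic probe payloads (SQL injection, URL/path hacking).
PROBES = ["'", "' OR '1'='1", "1;--", "../../etc/passwd"]

# Response fragments that often indicate an unhandled error leaking internals.
ERROR_SIGNATURES = ["sql syntax", "odbc", "ora-", "unclosed quotation mark", "stack trace"]

def probe_search(param="q"):
    for payload in PROBES:
        resp = requests.get(SEARCH_URL, params={param: payload}, timeout=10)
        hits = [sig for sig in ERROR_SIGNATURES if sig in resp.text.lower()]
        if hits or resp.status_code >= 500:
            print(f"Payload {payload!r}: HTTP {resp.status_code}, suspicious output: {hits}")

if __name__ == "__main__":
    probe_search()
```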

Again from the article:

“You don’t want to be starting to think about testing security as you’re coming into a release candidate,” DeMarines said. “You want to be looking at this fairly upfront when most of the functionality has been implemented in a way that you can test it, and then figure out how to make it resistant to the kinds of threats the enterprise is worried about.”

Upfront indeed … the article provides some great places to start looking at integrating application security into your development process.


February 9, 2009  2:38 PM

Book gives managers a software testing reality check

Jan Stafford

Software testers and quality assurance managers know that perfect software doesn’t exist.

Unfortunately their project managers, company executives and legal teams often don’t, and these people have “unreasonable expectations,” experience “constant disappointments” and make “disastrous decisions” regarding software testing, said Gerald M. Weinberg, author of many IT and software quality books, in our recent phone conversation.

Weinberg wants those people to read his new book, Perfect Software: And Other Illusions About Testing.

“This book is designed for people who don’t fully know what testing encompasses, but make decisions that affect how and how much software testing is done,” said Weinberg.

The current economic downturn and resulting budget cutbacks in software development make informing decision makers and influencers about when, how and why software must be tested even more urgent. While it seems obvious to software testers that cutting software testing is a sure way to produce faulty software, it’s not as obvious to managers.

“Management thinks of testing as what is done after the software is developed,” Weinberg said. “They schedule testing at the end of the process, and that’s the easiest place to cut if there’s a cost overrun.”

Naturally, that’s when the QA manager should step in and explain why testing shouldn’t be cut; but often the lack of funds defeats such arguments at the end of a project.

Establishing processes wherein software is tested early and continually is a better practice than testing at the end of development. “There should be more people wearing testing hats at each stage of development, because mistakes made early are very hard to find later,” Weinberg said.

Reducing development costs by reducing testing is a penny-wise, pound-foolish approach. More and more, Weinberg said, verdicts in software-related lawsuits and malpractice cases come down to whether the software was tested adequately.

Another common management mistake is not putting software support on the same ledger as development.

“Accurate cost accounting takes into account post-release costs,” Weinberg said. “Usually, the cost of fixing errors doesn’t get attributed to development managers and project managers. It should be.”

Weinberg hopes this book will help managers get “tuned into reality” about software testing. The key is getting it into the right people’s hands, he said, noting that testers and QA pros may want to keep it handy to do just that.

If you’re having trouble getting support for testing in any area or face testing cutbacks, our resident site experts can offer helpful advice. Just send your questions or describe your problem in an email to editor@searchsoftwarequality.com. This software security pro wrote in and got advice on how to get management support for security quality.


February 9, 2009  2:26 PM

Finding some performance testing wisdom

Michael Kelly

Last week O’Reilly’s 97 Things Every Software Architect Should Know: Collective Wisdom from the Experts was released. In a recent excerpt published by Sara Peyton, Thoughtworks CTO Rebecca Parsons provides some insight into performance testing.

In early testing, you may not even try to diagnose performance, but you do have a baseline of performance figures to work from. This trend data provides vital information in diagnosing the source of performance issues and resolving them.

I like this quote because it looks at an often-overlooked aspect of performance testing. Many times when people think about performance testing, they assume they can only test if they have requirements. While requirements are helpful, they aren’t the only tool you have available. Sometimes trending is enough to recognize that you might have found an issue. If something was working with a response time of 5 seconds and degrades to 10 seconds, you don’t need a requirement to know it’s worse. When running your early tests, that’s often good enough to get going.
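Here's a minimal sketch of that kind of trend check, assuming you keep response times from an earlier baseline run; the 50% degradation threshold is an arbitrary number chosen for illustration.

```python
from statistics import mean

def degraded(baseline_times, current_times, threshold=0.5):
    """Flag a regression when average response time grows by more than
    `threshold` (50% by default) relative to the baseline run."""
    return mean(current_times) > mean(baseline_times) * (1 + threshold)

# Example: an operation that used to average ~5 seconds now averages ~10.
baseline = [4.8, 5.1, 5.0, 5.2]
current = [9.7, 10.3, 10.1, 9.9]
print(degraded(baseline, current))  # True -- worth investigating, requirement or not
```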

A bit later in the excerpt, another small gem is highlighted:

Technical testing is also notoriously difficult to get going. Setting up appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time. By addressing performance testing early, you can establish your test environment incrementally, thereby avoiding much more expensive efforts once you discover performance issues.

It’s incredibly difficult to build out even seemingly simple performance test environments. It’s not even that physically setting up the environment is always hard (though many times it can be); it’s often figuring out what’s different from the production environment. You know, those little problems you discover after everyone has said it’s “the same” but it is really different due to minor changes in configuration, versions, or dependencies. Ironing out those kinks, or at least identifying the key differences, can take time.
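One habit that helps with the "it's the same but it's different" problem is capturing each environment's configuration and dependency versions in a comparable form and diffing them automatically. A rough sketch, assuming you can export each environment as a simple name-to-version mapping (how you collect that mapping, from package managers or configuration files, is up to you):

```python
def diff_environments(prod, test):
    """Print packages or settings that differ between production and the test environment."""
    for name in sorted(set(prod) | set(test)):
        prod_val = prod.get(name, "<missing>")
        test_val = test.get(name, "<missing>")
        if prod_val != test_val:
            print(f"{name}: prod={prod_val} test={test_val}")

# Hypothetical snapshots of the two environments.
production = {"openssl": "0.9.8k", "jvm": "1.6.0_11", "app.cache_size": "512"}
test_env = {"openssl": "0.9.8g", "jvm": "1.6.0_11", "app.cache_size": "128"}
diff_environments(production, test_env)
```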

If the rest of the book is like the excerpt, it might be worth picking up. Some other topics covered in the book (as noted by the publisher) include:

  • Don’t Put Your Resume Ahead of the Requirements (Nitin Borwankar)
  • Chances Are, Your Biggest Problem Isn’t Technical (Mark Ramm)
  • Communication Is King; Clarity and Leadership, Its Humble Servants (Mark Richards)
  • Simplicity Before Generality, Use Before Reuse (Kevlin Henney)
  • For the End User, the Interface Is the System (Vinayak Hegde)


February 5, 2009  2:20 PM

Does PCI compliance make your data secure? Nope.

Jack Danahy

Another week, another cascade of information pouring unintentionally out of another unwitting company — this time it is Heartland Payment Systems.

As a result, Heartland customers will get letters letting them know that they should watch out for unexpected transactions; hundreds of man hours are going to be spent understanding the circumstances of the breach; and already the inquisitors of information security are pounding keyboards mercilessly, pillorying the Heartland team for this most recent episode.

Heartland is reported to have been PCI-compliant. That’s the interesting nugget in this story and it’s one we have seen before. PCI compliance didn’t save Heartland from losing so much data that the company could more aptly be called “Heartland Pay-out Systems” for the next several months, as it pays out clean-up, fines, and costs.

Assuming Heartland did what its PCI auditors asked and provided sufficient access for the assessor to do the job: if I were Heartland, I would not be happy. If there was some fundamental communication breakdown, I would be similarly displeased.

I mean, really, what is the purpose of issuing one of the most prescriptive standards on security if measurement of compliance with it is so meaningless? My information about this data breach comes from public sources and not from Heartland or their PCI assessors, but here is what I can glean:

1. Heartland was certified as PCI-compliant. (See the list of PCI-compliant service providers. Heartland is on Page 12.)

2. Part of PCI compliance includes section 3.4, which says that credit card data must be encrypted anytime it is stored. Now, while some will argue that there should be a new requirement calling for encryption of internal networks (which does not exist in PCI currently), any entry-level programmer knows that as soon as I read my input for the credit card number, I am storing it, even if only in memory. (See the complete PCI DSS v1.2 standard, and the sketch after this list for what that kind of at-rest protection looks like.)

3. Malicious code — likely a sniffer — grabbed private unencrypted data (they called it “recorded data,” which sounds a lot like storage to me) off of the wire as the data was being sent for processing. (See here for one of many articles about the breach.)
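As promised in item 2, here is a minimal sketch (in Python, using the third-party cryptography library) of the kind of at-rest protection requirement 3.4 is driving at: the card number is protected as soon as it is read, rather than left lying around in plain text. It is an illustration of the principle, not Heartland's architecture; key management and masking are waved away. Note, too, that at-rest encryption alone would not have stopped a sniffer reading data in transit on an internal network, which is exactly the gap item 3 describes.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# In a real system the key comes from a key-management service, never from the code itself.
key = Fernet.generate_key()
cipher = Fernet(key)

def capture_card_number(raw_pan: str) -> bytes:
    """Encrypt the primary account number as soon as it is read; hold plaintext as briefly as possible."""
    token = cipher.encrypt(raw_pan.encode())
    # Only the ciphertext (plus, for display, a masked form) should ever be stored or logged.
    return token

print(capture_card_number("4111111111111111")[:16])
```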

So, Heartland wasn’t compliant, period, according to my read; but note that I am not a certified reviewer.

Is this their fault or their assessor’s? I have no idea, but I do believe that it is the responsibility of the assessor to have done this simple analysis and identified the weakness.

If Heartland was complicit in this, then I have no defense for them. If, however, in a business climate charged with specialization and outsourced expertise, they relied on their provider to help them validate that they had done enough, then the responsibility and the spotlight should be turned at least equally on the process and the people who gave them the useless rubber stamp.

Does being PCI-compliant make you secure? Nope. Not the same thing at all, but that isn’t the issue here.

Does thinking you are compliant when you are not even close create some substantial risk?

You bet.


February 2, 2009  6:57 PM

Will penetration testing be replaced by preventative tools?

Michael Kelly

I recently read the article “Penetration Testing: Dead in 2009” by Bill Brenner. In the article Mr. Brenner follows a small debate around the idea that over time penetration testing will be largely replaced by preventative checks.

The debate opens with some quotes from Brian Chess from Fortify Software. Fortify creates code analysis tools that scan for security concerns and adherence to good secure coding practices. That potential bias aside, I suspect that Mr. Chess’ statement — that “Customers are clamoring more for preventative tools than tools that simply find the weaknesses that already exist […]. They want to prevent holes from opening in the first place” — is absolutely true. I know I clamor for those tools, and I’m just a lowly test manager.

I’m a big fan of the work companies like Fortify, IBM and HP are doing in this space. If my project team can find a potential issue before we deploy the code, I’m all for it. It can save us time and helps us focus on different and potentially higher-value risks. However, I’ve yet to see a tool that can deal with the complexity of a deployment environment (setup, configuration, code, etc.), and while I’m a big believer in doing everything you can up front (design, review, runtime analysis, etc.), I believe there will always be a role for skilled manual investigation of what gets deployed.

Testing (penetration or otherwise) is about applying skill and judgment to uncover quality-related information about the product. That’s not just code — it’s more than that. Your typical penetration tester today covers more ground than today’s automated tools can. While there are different tools to test various components (some focus on code, some on the network, etc.), and they should absolutely be used, those tools will never be able to uncover all the potential issues with a system. And, what’s sometimes worse, they can lead to a false sense of security.


January 30, 2009  5:21 PM

Using source code analysis tools

Michael Kelly

I found a great article earlier this week on static analysis tools by Mary Brandel. In the article, “How to choose and use source code analysis tools,” she cites some statistics on the static analysis market, including:

  • “The entire software security market was worth about US $300 million in 2007”
  • “The tools portion of that market doubled from 2006 to 2007 to about $180 million”
  • “About half of that is attributable to static analysis tools, which amounted to about $91.9 million”

In the article, Brandel also offers some evaluation criteria for when you start looking at source code analysis tools. These include language support and integration, assessment accuracy, customization, and knowledge base. She also provides some dos and don’ts for source code analysis. I think the most valuable tidbits from that list include:

  • DO consider using more than one tool: The article provides a good story about Lint vs. Coverity, and I’ve found that different static analysis tools find different issues. Each vendor has its own specific focus on vulnerabilities and warnings. (A small sketch of comparing two tools’ findings follows this list.)
  • DO retain the human element: While I’ve yet to work with a team that thinks adding automated tools like this will allow you to remove people, there’s certainly the feeling from the marketing materials that the results are intuitive. That’s typically not the case. You often need to know what you’re looking at or you’ll miss the subtleties in the data. I agree with the “truly an art form” quote. This stuff is hard, and while tools make it easier, it’s still brain-engaged work.
  • DO consider reporting flexibility: At some companies this is a big deal. When working with smaller software development organizations, it doesn’t matter much what the reports look like; the only people looking at them are the people working in the code. However, at a larger company (Fortune 500, for example), information like this normally needs to be summarized and reported up the management chain.
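If you do run more than one tool, you will need some way to merge and compare what they report. Here is a rough sketch that assumes each tool can export its findings as JSON with hypothetical file/line/rule fields; real tools use different report formats, so the loader would need adapting per tool.

```python
import json

def load_findings(report_path):
    """Normalize one tool's JSON report into a set of (file, line, rule) tuples."""
    with open(report_path) as fh:
        return {(f["file"], f["line"], f["rule"]) for f in json.load(fh)}

tool_a = load_findings("tool_a_report.json")
tool_b = load_findings("tool_b_report.json")

print(f"Reported by both tools: {len(tool_a & tool_b)}")
print(f"Only in tool A: {len(tool_a - tool_b)}")
print(f"Only in tool B: {len(tool_b - tool_a)}")
```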

