Software Quality Insights


February 18, 2009  1:46 PM

Aragon nabs Krugle’s cool code search technology

Profile: Jan Stafford

In 2007 Krugle was a startup with a code-finding appliance, and I blogged about it, asking, “Will Krugle be cool?” Obviously, San Mateo, Calif.-based Aragon Consulting Group does think Krugle’s software code search and analysis tool is cool, because it acquired Krugle this week.

Krugle’s current users won’t be left high and dry, according to Mel Badgett, Aragon’s marketing head. He told me that:

Aragon will also continue to support all current users of Krugle Enterprise and Krugle Search Technology, a broad group that includes developer communities, such as MSDN, and private companies with medium- to large-sized software development organizations.

Aragon will also offer Krugle users a new product: it is incorporating Krugle’s tools into its upcoming Next.0 Delivery Platform. The Krugle technology will enhance two Next.0 features, Test Management and Test Strategist.

“Test Management provides a real-time assessment of project quality and release readiness, a first for the outsourcing world,” Badgett said. “Test Strategist intelligently maps code and code changes to test cases to minimize regressions and defect injection, another first for outsourcing.”

Stay tuned for Krugle in the cloud. Cloud computing is the next step for Next.0, according to Badgett. Aragon has built applications to facilitate management of ERP, CRM, and HRM application instances hosted in the cloud, and planning and automation built into Next.0 will make “cloud testing tractable,” Badgett said.

February 18, 2009  1:42 PM

Thinking like a hacker

Profile: Michael Kelly

Jeff Feinman recently published a great article in SD Times titled “Think like a hacker.” The article collects statements about security testing from several vendors and ties them together around this basic theme:

From a technology standpoint, there are two main approaches for testing software for security, and they are well known to developers and testers. One is exercising the software from what many call the outside-in approach: testing to see how the application responds to a simulated attack. The second is more of an inside-out approach, which looks for coding patterns that would highlight vulnerabilities in the code.

In the limited security testing I’ve done, I’ve had luck with both approaches. It can be downright scary how poor application security really is. I’m not even a full-time security tester, and I can find my share of security bugs with little effort. And don’t get me wrong, that’s not a testament to my skill. That’s a testament to the state of software security — it’s bad.

As one example, I remember a number of years ago I was asked to evaluate a new Web store implementation for a company that sold products around potentially sensitive customer information (think credit reports, but not quite). As I evaluated the website, I looked at the code for the e-commerce portion of the site, and as I reviewed it, I got the feeling that whoever programmed that section really knew what they were doing. They were pros, and I didn’t think it very likely that I’d be able to find any issues in that code without a lot of hard work.

Stepping back from the website a bit, I stopped thinking of security tests I could run and instead thought about the company and how it would likely have run this project. I didn’t know for certain, since I wasn’t involved, but I’ve worked in similar situations in the past. They had likely outsourced this portion of the site to a company that’s done countless e-commerce implementations.

But then I asked, “What would this company be too cheap to outsource? What would they do themselves because they think it’s easy and there’s no risk?” I reviewed the site a second time, this time with those questions in mind. Instead of focusing on products, I looked at the supporting features. There was an Advanced Search function (everyone thinks they can do search), so I checked that out. Within seconds (really, my first test) I had information from the search function that I could use to access other parts of the site.

That’s why I really like that quote above from the article. In my one simple example of security testing, I took two approaches. First I tested to see how the application responded to simulated attacks (SQL injection, URL hacking, etc.). When that didn’t work, I switched gears and thought about coding and project patterns that would highlight vulnerabilities in the code.
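
For the simulated-attack half, the first pass can be as simple as throwing classic payloads at an input and watching for error signatures. Here is a minimal sketch in Python using the requests library; the URL, parameter name, payloads, and error signatures are all illustrative, not from the engagement described above:

    # Probe a search endpoint with classic injection payloads and watch the
    # responses for error signatures that suggest unsanitized input handling.
    import requests

    SEARCH_URL = "https://example.com/advanced-search"  # hypothetical target
    PAYLOADS = ["' OR '1'='1", "'; --", "\"><script>alert(1)</script>"]
    SIGNATURES = ["sql syntax", "odbc", "stack trace", "unhandled exception"]

    for payload in PAYLOADS:
        response = requests.get(SEARCH_URL, params={"q": payload})
        body = response.text.lower()
        hits = [s for s in SIGNATURES if s in body]
        if hits or response.status_code >= 500:
            print(f"{payload!r}: HTTP {response.status_code}, signals {hits}")

Real outside-in testing goes far beyond a loop like this, but even this much can flag the kind of low-hanging fruit described above.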

Again from the article:

“You don’t want to be starting to think about testing security as you’re coming into a release candidate,” DeMarines said. “You want to be looking at this fairly upfront when most of the functionality has been implemented in a way that you can test it, and then figure out how to make it resistant to the kinds of threats the enterprise is worried about.”

Upfront indeed … the article provides some great places to start integrating application security into your development process.


February 9, 2009  2:38 PM

Book gives managers a software testing reality check

Profile: Jan Stafford

Software testers and quality assurance managers know that perfect software doesn’t exist.

Unfortunately their project managers, company executives and legal teams often don’t, and these people have “unreasonable expectations,” experience “constant disappointments” and make “disastrous decisions” regarding software testing, said Gerald M. Weinberg, author of many IT and software quality books, in our recent phone conversation.

Weinberg wants those people to read his new book, Perfect Software And Other Illusions About Testing.

“This book is designed for people who don’t fully know what testing encompasses, but make decisions that affect how and how much software testing is done,” said Weinberg.

The current economic downturn and resulting budget cutbacks in software development make informing decision makers and influencers about when, how and why software must be tested even more urgent. While it seems obvious to software testers that cutting software testing is a sure way to produce faulty software, it’s not as obvious to managers.

“Management thinks of testing as what is done after the software is developed,” Weinberg said. “They schedule testing at the end of the process, and that’s the easiest place to cut if there’s a cost overrun.”

Naturally, that’s when the QA manager should step in and explain why testing shouldn’t be cut; but often the lack of funds defeats such arguments at the end of a project.

Establishing processes wherein software is tested early and continually is a better practice than testing at the end of development. “There should be more people wearing testing hats at each stage of development, because mistakes made early are very hard to find later,” Weinberg said.

Reducing development costs by reducing testing is a penny-wise, pound-foolish approach. More and more, Weinberg said, verdicts in software-related lawsuits and malpractice cases come down to whether the software was tested adequately.

Another common management mistake is not putting software support on the same ledger as development.

“Accurate cost accounting takes into account post-release costs,” Weinberg said. “Usually, the cost of fixing errors doesn’t get attributed to development managers and project managers. It should be.”

Weinberg hopes this book will help managers get “tuned into reality” about software testing. The key is getting it into the right people’s hands, he said, noting that testers and QA pros may want to keep it handy to do just that.

If you’re having trouble getting support for testing in any area or face testing cutbacks, our resident site experts can offer helpful advice. Just send your questions or describe your problem in an email to editor@searchsoftwarequality.com. This software security pro wrote in and got advice on how to get management support for security quality.


February 9, 2009  2:26 PM

Finding some performance testing wisdom

Profile: Michael Kelly

Last week, O’Reilly’s 97 Things Every Software Architect Should Know: Collective Wisdom from the Experts was released. In a recent excerpt published by Sara Peyton, ThoughtWorks CTO Rebecca Parsons provides some insight into performance testing.

In early testing, you may not even try to diagnose performance, but you do have a baseline of performance figures to work from. This trend data provides vital information in diagnosing the source of performance issues and resolving them.

I like this quote because it highlights an often overlooked aspect of performance testing. Many times when people think about performance testing, they assume they can’t test unless they have formal requirements. While requirements are helpful, they aren’t the only tool you have available. Sometimes trending is enough to recognize that you might have found an issue: if a feature was responding in 5 seconds and degrades to 10 seconds, you don’t need a requirement to know it got worse. When running your early tests, that’s often good enough to get going.
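
A quick sketch of what that trend check might look like in practice, in Python; the endpoint names, timings, and the 50% threshold are made up for illustration:

    # Compare this build's response times against an earlier baseline and
    # flag anything that has degraded noticeably, no formal requirement needed.
    baseline = {"login": 1.2, "search": 5.0, "checkout": 2.4}   # seconds
    current = {"login": 1.3, "search": 10.1, "checkout": 2.2}   # seconds

    THRESHOLD = 1.5  # flag anything 50% slower than its own baseline

    for endpoint, base in baseline.items():
        now = current.get(endpoint)
        if now is not None and now > base * THRESHOLD:
            print(f"{endpoint}: {base:.1f}s -> {now:.1f}s, investigate")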

A bit later in the excerpt, another small gem is highlighted:

Technical testing is also notoriously difficult to get going. Setting up appropriate environments, generating the proper data sets, and defining the necessary test cases all take a lot of time. By addressing performance testing early, you can establish your test environment incrementally, thereby avoiding much more expensive efforts once you discover performance issues.

It’s incredibly difficult to build out even seemingly simple performance test environments. It’s not that physically setting up the environment is always hard (though it often can be); it’s figuring out what’s different from the production environment. You know, those little problems you discover after everyone has said it’s “the same” but it’s really different due to minor changes in configuration, version, or dependencies. Ironing out those kinks, or even just identifying the key differences, can take time.
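
One way to shrink that hunt is to diff the environments mechanically instead of trusting anyone’s word that they match. A minimal sketch, assuming each environment’s settings can be dumped to a flat key=value file (the file names are hypothetical):

    # Report every setting that differs between two environment dumps,
    # including keys present in one environment but missing from the other.
    def load_config(path):
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip()
        return settings

    prod = load_config("production.properties")
    test = load_config("perf_test.properties")

    for key in sorted(prod.keys() | test.keys()):
        if prod.get(key) != test.get(key):
            print(f"{key}: prod={prod.get(key)!r} test={test.get(key)!r}")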

If the rest of the book is like the excerpt, it might be worth picking up. Some other topics covered in the book (as noted by the publisher) include:

  • Don’t Put Your Resume Ahead of the Requirements (Nitin Borwankar)
  • Chances Are, Your Biggest Problem Isn’t Technical (Mark Ramm)
  • Communication Is King; Clarity and Leadership, Its Humble Servants (Mark Richards)
  • Simplicity Before Generality, Use Before Reuse (Kevlin Henney)
  • For the End User, the Interface Is the System (Vinayak Hegde)


February 5, 2009  2:20 PM

Does PCI compliance make your data secure? Nope.

Profile: Jack Danahy

Another week, another cascade of information pouring unintentionally out of another unwitting company — this time it is Heartland Payment Systems.

As a result, Heartland customers will get letters letting them know that they should watch out for unexpected transactions; hundreds of man-hours are going to be spent understanding the circumstances of the breach; and already the inquisitors of information security are pounding keyboards mercilessly, pillorying the Heartland team for this most recent episode.

Heartland is reported to have been PCI-compliant. That’s the interesting nugget in this story and it’s one we have seen before. PCI compliance didn’t save Heartland from losing so much data that the company could more aptly be called “Heartland Pay-out Systems” for the next several months, as it pays out clean-up, fines, and costs.

Assuming Heartland did what was asked by PCI auditors, and that it provided sufficient access for the assessor to do the job, if I were Heartland I would not be happy. If there was some fundamental communication breakdown, I would be similarly displeased.

I mean, really, what is the purpose of issuing one of the most prescriptive standards in security if measurement of compliance with it is so meaningless? My information about this data breach comes from public sources and not from Heartland or its PCI assessors, but here is what I can glean:

1. Heartland was certified as PCI-compliant. (See the list of PCI-compliant service providers. Heartland is on Page 12.)

2. Part of PCI compliance is Requirement 3.4, which says that credit card data must be encrypted any time it is stored. Now, while some will argue that there should be a new requirement calling for encryption on internal networks (which does not exist in PCI currently), any entry-level programmer knows that as soon as I read my input for the credit card number, I am storing it, even if only in memory. (See the complete PCI DSS v1.2 standard; a minimal sketch of what Requirement 3.4 asks for follows this list.)

3. Malicious code — likely a sniffer — grabbed private unencrypted data (they called it “recorded data,” which sounds a lot like storage to me) off of the wire as the data was being sent for processing. (See here for one of many articles about the breach.)
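
To make concrete what Requirement 3.4 asks for, here is a minimal sketch using the Fernet recipe from the Python cryptography package. The key handling is deliberately simplified for illustration; a real system would pull the key from a key management service, never generate it inline:

    # Encrypt a card number before it is ever persisted, per PCI DSS 3.4:
    # stored cardholder data must be unreadable without the key.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # illustrative; load from a key store in practice
    cipher = Fernet(key)

    card_number = "4111111111111111"  # a standard test card number
    stored_token = cipher.encrypt(card_number.encode())  # store this, not the number

    # Only code holding the key can recover the original value.
    assert cipher.decrypt(stored_token).decode() == card_number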

So, Heartland wasn’t compliant, period, according to my read; but note that I am not a certified reviewer.

Is this their fault or their assessor’s? I have no idea, but I do believe that it is the responsibility of the assessor to have done this simple analysis and identified the weakness.

If Heartland was complicit in this, then I have no defense for them. If, however, in a business climate charged with specialization and outsourced expertise, they relied on their provider to help them validate that they had done enough, then the responsibility and the spotlight should be turned at least equally on the process and the people who gave them the useless rubber stamp.

Does being PCI-compliant make you secure? Nope. Not the same thing at all, but that isn’t the issue here.

Does thinking you are compliant when you are not even close create some substantial risk?

You bet.


February 2, 2009  6:57 PM

Will penetration testing be replaced by preventative tools?

Profile: Michael Kelly

I recently read the article “Penetration Testing: Dead in 2009” by Bill Brenner. In the article Mr. Brenner follows a small debate around the idea that over time penetration testing will be largely replaced by preventative checks.

The debate opens with some quotes from Brian Chess from Fortify Software. Fortify creates code analysis tools that scan for security concerns and adherence to good secure coding practices. That potential bias aside, I suspect that Mr. Chess’ statement — that “Customers are clamoring more for preventative tools than tools that simply find the weaknesses that already exist […]. They want to prevent holes from opening in the first place” — is absolutely true. I know I clamor for those tools, and I’m just a lowly test manager.

I’m a big fan of the work companies like Fortify, IBM and HP are doing in this space. If my project team can find a potential issue before we deploy the code, I’m all for it. It saves us time and helps us focus on different and potentially higher-value risks. However, I’ve yet to see a tool that can deal with the complexity of a deployment environment (setup, configuration, code, etc.), and while I’m a big believer in doing everything you can up front (design, review, runtime analysis, etc.), I believe there will always be a role for a skilled manual investigation of what gets deployed.

Testing (penetration or otherwise) is about applying skill and judgment to uncover quality-related information about the product. That’s not just code; it’s more than that. Your typical penetration tester covers more ground than today’s automated tools can. While there are different tools to test various components (some focus on code, some on the network, etc.), and they should absolutely be used, those tools will never be able to uncover all the potential issues with a system. And, what’s sometimes worse, they can lead to a false sense of security.


January 30, 2009  5:21 PM

Using source code analysis tools

Profile: Michael Kelly

I found a great article earlier this week on static analysis tools by Mary Brandel. In the article, “How to choose and use source code analysis tools,” she cites some statistics on the static analysis market, including:

  • “The entire software security market was worth about US $300 million in 2007”
  • “The tools portion of that market doubled from 2006 to 2007 to about $180 million”
  • “About half of that is attributable to static analysis tools, which amounted to about $91.9 million”

In the article, Brandel also offers some evaluation criteria for when you start looking at source code analysis tools. These include language support and integration, assessment accuracy, customization, and knowledge base. She also provides some dos and don’ts for source code analysis. I think the most valuable tidbits from that list include:

  • DO consider using more than one tool: The article provides a good story about Lint vs. Coverity, and I’ve found that different static analysis tools find different issues as well. Each vendor has its own specific focus on vulnerabilities and warnings. (See the sketch after this list.)
  • DO retain the human element: While I’ve yet to work with a team that thinks adding automated tools like this will let you remove people, the marketing materials certainly give the impression that the results are intuitive. That’s typically not the case. You often need to know what you’re looking at or you’ll miss the subtleties in the data. I agree with the “truly an art form” quote. This stuff is hard, and while tools make it easier, it’s still brain-engaged work.
  • DO consider reporting flexibility: At some companies this is a big deal. When you’re working with a smaller software development organization, it doesn’t matter what the reports look like; the only people looking at them are the people working in the code. At a larger company (a Fortune 500, for example), information like this normally needs to be summarized and reported up the chain.
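
To illustrate the more-than-one-tool point, here’s a small sketch that runs two common Python linters over the same file and compares the volume of findings. It assumes pylint and flake8 are installed, and the target file name is hypothetical:

    # Run two static analysis tools over the same file; each flags a
    # different mix of issues, which is the case for using more than one.
    import subprocess

    def run_tool(*command):
        result = subprocess.run(command, capture_output=True, text=True)
        return [line for line in result.stdout.splitlines() if line.strip()]

    target = "payment_module.py"  # hypothetical file under review
    pylint_out = run_tool("pylint", target)
    flake8_out = run_tool("flake8", target)

    print(f"pylint: {len(pylint_out)} lines of findings")
    print(f"flake8: {len(flake8_out)} lines of findings")

Comparing the two reports side by side usually shows surprisingly little overlap, which is the practical argument for running both.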


January 27, 2009  1:13 PM

Coverity CTO Q&A: New, Microsoft-friendlier tools ease app fixes

Profile: Jan Stafford

Today, the development community gets a first look at Coverity Prevent’s new Microsoft-friendly analysis tools. Yesterday, I talked with Coverity Inc. CTO Ben Chelf about how the new features will help software developers beat problems like deadlocks and race conditions and save time detecting defects. We also touched on Prevent’s role in cloud computing development. After a short bit of background, here’s a Q&A based on our conversation.

Coverity Prevent, Coverity’s flagship static analysis solution, can now give Microsoft developers better tools for finding and fixing defects. With the new features, developers get modeling for Win32 concurrency APIs, Microsoft Windows Vista support and integration with Microsoft Visual Studio.

In addition, Coverity has dropped in some quality and concurrency checkers for C#. On Jan. 19, Coverity introduced Prevent for C#, a tool for identifying critical source code defects in .NET applications.

What gaps in analysis functionality will be filled by Prevent’s new features?

Chelf: The new features add Microsoft-specific checks that have a deep understanding of the Microsoft platform directly to the developer desktop in the developer’s IDE. IT pros now can save money on traditional testing techniques, since many of the problems that were previously discovered in testing or post-release are now discovered as the developer is writing the code. Every IT professional wrestles with testing costs and the time it takes to get a software system out the door, and this technology accelerates that process.

While other companies have desktop plugins for general static analysis solutions, their checking is not Microsoft-specific, so those products tend to suffer from high false positive rates, which can quickly turn off developers and leave the tool as shelfware.

In general, what software testing and quality assurance (QA) problems will this solve?

Chelf: The problem this solves is the class of very difficult-to-reproduce defects. Especially when tracking down concurrency problems, the QA department has a very hard time putting together the exact test suite to make an application fail the way it would fail in production. Those wasted cycles are now eliminated by finding the problems earlier in the development process.

So, for example, how would Prevent help with race conditions?

Chelf: That’s one concurrency problem that can happen when you’re developing a multithreaded application and have multiple things happening simultaneously. The threads in the application are all trying to access the same memory. If they access it at the same time without any kind of protection, the data can be corrupted. Without static analysis capability, the only way to track these things down is to find them in the testing environment. Since multithreaded problems are difficult to diagnose, because you are at the whim of how the different threads are scheduled, it can often take the developers days or weeks to reproduce a problem they encounter.

This new technology helps them find problems more quickly. As they write code, they’re sitting in the IDE, saving their files and checking code into their source code management system from time to time. The Prevent technology gives them another button in the IDE that says, “Analyze my source code.” Then they get automated analysis of all the source code in the system, not only the source code they’re writing. They can do a kind of virtual simulation of the software system, looking for these kinds of problems.
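
The race condition Chelf describes is easy to reproduce in miniature. Here’s a sketch in Python rather than the C/C++/C# code Prevent targets, but the hazard is identical (the iteration counts are arbitrary): two threads update shared memory without protection, and increments get lost.

    # Two threads perform an unprotected read-modify-write on shared state;
    # the final count usually falls short because the threads interleave.
    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1  # load, add, store: not atomic, so updates can be lost

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"expected 200000, got {counter}")  # often less on a busy machine

Wrapping the increment in a lock fixes the sketch; spotting the places where such protection is missing is exactly what a concurrency checker automates.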

How can Prevent’s new features accelerate development in virtual cloud computing environments?

Chelf: As it pertains to the cloud, many applications are moving toward multithreaded designs in order to take advantage of multiple cores on a machine as well as multiple machines in a cluster. However, distributing computation like this introduces a new class of potential coding defects that our technology helps address.

In the multicore era, there are going to be more and more multithreaded applications, and that introduces a host of problems that we’re trying to rid the world of, such as deadlocks and race conditions.


January 26, 2009  2:51 PM

Regulated software testing

Profile: Michael Kelly

Earlier this month, the New York Times ran an article on a report criticizing the F.D.A. on device testing. The article seems to indicate that one of the leading causes for poor testing is manufacturer claims about new devices being like other existing devices already on the market.

The article also points out that the F.D.A. has failed to update its rules for Class III devices for a while now. As near as I can tell, the software (the part I care about) is in those Class III devices. For those not up on their F.D.A. history, the article has a great tidbit that I found very interesting.

Created in 1976, the F.D.A.’s process for approving devices divides the products into three classes and three levels of scrutiny. Tongue depressors, reading glasses, forceps and similar products are called Class I devices and are largely exempt from agency reviews. Mercury thermometers are Class II devices, and most get quick reviews. Class III devices include pacemakers and replacement heart valves.

Congress initially allowed many of the Class III products to receive perfunctory reviews if they were determined to be nearly identical to devices already on the market in 1976 when the rules were changed. But the original legislation and a companion law enacted in 1990 instructed the agency to write rules that would set firm deadlines for when all Class III devices would have to undergo rigorous testing before being approved.

The agency laid out a plan in 1995 to write those rules but never followed through, the accountability office found. The result is that most Class III devices are still approved with minimal testing.

I only found the article because I happened to see a letter to the editor from Stephen J. Ubl, the president and chief executive of the Advanced Medical Technology Association. The letter caught my eye because in May I’ll be facilitating the second Workshop on Regulated Software Testing. His comment about the “extensive review of specifications and performance-testing information” is exactly the type of material I want to see at the workshop.

Regulated device/software testing is a difficult thing to do. For those who want to focus on the testing, there’s a lot of process already, and it can distract from doing the testing. For those who want to make sure the process is followed and that all the right testing takes place, the focus is on process and evidence. Figuring out that balance is always hard, whether you’re the F.D.A. or the company developing the product.


January 23, 2009  9:39 PM

Tools, techniques to avoid common software security mistakes

Profile: Beth Pariseau

Software developers make common and avoidable mistakes that create vulnerabilities and expose their software to ever-present security threats, according to field observations by Vic DeMarines.

Yesterday I spoke with Vic, VP of products at V.i. Laboratories Inc. (V.i. Labs) in Waltham, Mass. V.i. Labs’ products help software providers protect themselves against piracy and associated revenue losses. The company also provides antitampering solutions and products that prevent intellectual property theft. For example, its CodeArmor Intelligence antipiracy product enables software publishers to identify organizations that are using their software illegally. V.i. Labs’ customers include financial services software companies and online gaming providers.

I asked Vic to name some of the most common security mistakes he sees. He said there are three major security threats: piracy, code theft and tampering. “You can’t stop piracy, but you can be more resistant to it,” Vic said. “When developers integrate licensing into an application, they rarely consider making it resistant to reverse engineering or the threat of piracy.” There are basic tools and techniques to help vendors resist that — namely antitamper technologies, obfuscation, or tamper detection and reporting — and not using them is a common mistake. Vic said some of these can be accomplished in-house and others are available on the market.
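
A crude sketch of the tamper detection and reporting idea, in Python for illustration only; commercial products like V.i. Labs’ work on compiled binaries and are far more sophisticated, and the recorded digest here is a placeholder an actual build step would fill in:

    # Self-check at startup: hash this file and compare against a digest
    # recorded at build time, refusing to run if the code has been altered.
    import hashlib
    import sys

    EXPECTED_DIGEST = None  # a build step would record the real SHA-256 here

    def current_digest():
        with open(__file__, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    if EXPECTED_DIGEST is not None and current_digest() != EXPECTED_DIGEST:
        sys.exit("integrity check failed: this program may have been modified")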

Developers also frequently make security mistakes when coding new applications in Microsoft .NET. “Developers need to understand the risk in .NET,” Vic said. The “bad news” with .NET, according to Vic, is that when you compile, people who know where to look can view your source code using freeware tools. This mistake can be avoided without abandoning .NET: developers can put sensitive code in a protected form, using obfuscation techniques or protection tools to keep it from being seen.

I also asked Vic for tips on producing high-quality, secure software in a down economy — how do you “do more with less” when it comes to software security? Vic advised developers to think ahead — if you’re about to design an app, “make security a priority and define how you’re going to test it,” he said. Enlisting an outside security testing team is expensive, so instead have someone in your group who is strong in security “think like a cracker” to determine vulnerabilities.

