Software Quality Insights


December 9, 2010  8:09 PM

Mixing CMMI and Agile

Yvette Francino

The first time I heard about integrating CMMI and agile was at IBM’s Innovate conference last spring. I thought it an odd combination. My experience with CMMI, a process improvement methodology, was that it was extremely documentation-intensive and all about process. When you look at the Agile Manifesto, process and documentation are “right-sided” values, the ones considered less important than the “left-sided” values agile touts: people, working software, customer collaboration and responding to change. Can these two seemingly conflicting methodologies really be combined effectively?

Paul McMahon, author of Integrating CMMI and Agile Development, says ‘yes.’ In a two-part interview, I pose some tough questions for McMahon.

In CMMI and Agile integration: Adding agility to CMMI-mature organizations, part 1, I start by questioning McMahon about combining a traditional model that emphasizes process and documentation with a model that claims success by practicing a rather opposite philosophy. I ask about buy-in from agile organizations in Adding CMMI process maturity to Agile organizations, part 2.

McMahon answers these questions and more, giving me a new perspective on CMMI and insight into how these two approaches, when balanced correctly, can be a great benefit to an organization.

December 6, 2010  2:31 PM

The benefits of test-driven development

Yvette Francino

I’ve heard from many agile experts, including Elizabeth Woodward and Steffan Surdek, two of the authors of A Practical Guide to Distributed Scrum, about the importance of test-driven development (TDD).

Even though I had been a software developer for many years, I wasn’t entirely clear on the difference between unit testing and test-driven development. The primary difference seemed to be that with TDD the tests are written before the code, but I was still unclear as to why that would really make much of a difference.

I explored that question and others in a two-part interview with the author of Test-Driven JavaScript Development, Christian Johansen. In part one, Johansen describes the mechanics of TDD and how it compares to traditional unit testing. In Agile Techniques: Benefits of test-driven development – Part 2, we learn more about the benefits, which include automated tests that provide documentation for the most current code. When asked about the time it takes to write and maintain the test cases, Johansen said:

Writing tests takes time, for sure. However, writing tests also saves time by removing or vastly reducing other activities, such as manual or formal debugging and ad hoc bug fixing. TDD also has the ability of reducing the total time spent writing tests. When retrofitting unit tests onto a system, a programmer will likely occasionally encounter tightly coupled code that is either hard or impossible to test. Because TDD promotes the unit test to the front seat, testability is never an issue.
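To make the red-green rhythm concrete, here is a minimal sketch of one TDD cycle, written in Python with the built-in unittest module. It is my own illustrative example (Johansen’s book uses JavaScript), and the slugify function is a hypothetical stand-in for any small unit of code:

```python
import unittest

# Step 1 (red): write the tests first, before the implementation
# exists. Running them now fails, which proves they can fail.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("software quality"), "software-quality")

    def test_lowercases_input(self):
        self.assertEqual(slugify("Agile TDD"), "agile-tdd")

# Step 2 (green): write just enough code to make the tests pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): clean up the code, re-running the tests after
# every change to make sure nothing breaks.
if __name__ == "__main__":
    unittest.main()
```

Because the tests are written first, the code is shaped by its callers’ needs from the start, which is why, as Johansen puts it, testability is never an issue.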


November 23, 2010  6:50 PM

The works of the “father of software quality” Watts Humphrey remain

Yvette Francino

Recently, Watts Humphrey, an icon of software quality, passed away at the age of 83. Humphrey was probably best known for his work as one of the founding fathers of the Capability Maturity Model (CMM), a popular and well-known process improvement methodology.

Hearing about his death, I was curious about CMM. Had it been updated over the years? Was the process improvement methodology itself undergoing improvement? As a matter of fact, I learned that the Software Engineering Institute (SEI) recently released CMMI for Development, v1.3. The new version includes some additions that take into account the transition many organizations are making to agile software development.

In CMMI for development: An overview of a process improvement model, I give a high-level overview of CMMI for Development, a process improvement model that has continued to evolve over time. Though I cover just the basics, the full 482-page document is available from the SEI website.

Humphrey, of course, left behind a much bigger legacy than his work on the Capability Maturity Model, with 12 books and hundreds of articles on software quality. In fact, just a few months ago, SSQ published a chapter excerpt from his book, Reflections on Management. He was a scholar who will be missed, but his work remains to guide and teach others in the industry.


November 18, 2010  4:57 PM

Software methodologies: Are we still at “war”?

Yvette Francino

The most popular blog post on SQI this year has been Methodology wars: Agile or Waterfall? Most people take software methodology very seriously and can get quite vocal about the benefits of a particular methodology, or quite defensive when questioned about its pitfalls. I’ve heard words such as “zealots” and “cult mentality” bandied about to describe people who feel strongly about a particular methodology. But what exactly are the differences?

This month SSQ is focusing on software methodologies, bringing you a variety of articles that will help you determine which methodology might be best for your organization.

In Waterfall or Agile: Differences between predictive and adaptive software methodologies, David Johnson highlights the differences between a predictive waterfall approach and an adaptive agile approach to software development.

And in Applying lean concepts to software development, Matt Heusser describes practices that originated in manufacturing, such as flow and continuous work, that are becoming more popular in software development.

Whether your software development processes are predictive, adaptive, lean, or a mixture of concepts from various methodologies, yours is not the one and only “right” way to develop code. What’s most important is that it is right for your organization.


November 15, 2010  7:54 PM

Real world Agile: Pair programming

Yvette Francino

At last week’s Boulder Agile User Group meeting, Pivotal Labs’ Mike Gehard described a day in the life of working in a rather unusual environment.

At Pivotal Labs, the culture is very important. They start their mornings with breakfast at 8:45. Their working hours are a very strict 9 a.m. to 6 p.m., and their schedule looks like this:

8:45 Breakfast
9:05 Standup
9:15-ish Team standups
9:15-6:00 Pair programming
Noon Lunch
6:00 End of work day

Having been in corporate America my whole career, I felt this was a rather rigid schedule, particularly for working parents who have children to pick up. However, Gehard showed a photo of an empty workplace at 6:05. Though the schedule is rigid, it means you can count on being done with your work at 6 p.m.

Gehard spoke a lot about the Extreme Programming practice of pair programming and what a big part of the culture it is at Pivotal Labs. I explore the topic further in the tip Pair programming: Two people, one computer.

Listen in to this short video where Gehard describes the importance of pair programming at Pivotal Labs:

Video: http://www.youtube.com/v/tETzWJ6ukxA


November 12, 2010  3:08 PM

Are major post-production software bugs becoming more prevalent?

Yvette Francino

In a recent interview with Jeff Papows, author of Glitch: The Hidden Impact of Faulty Software, one of the reasons Papows noted for the increased number of bugs found in production systems is the sheer volume and ubiquity of technology. He writes in his book:

 It’s difficult to understate the scale at which the IT industry has transformed productivity, stimulated economic growth, and forever changed how people work and live.

The complexity of code was reiterated in an interview I had with IBM’s Sky Matthews: How do you test 10 million lines of code? Matthews talks about the enormous amount of software in the 2011 Chevy Volt.

Massive code complexity came up once again when Coverity Chief Scientist Andy Chou went over the results of an open source integrity report in an interview, covered in the post: Open source or proprietary: Which is higher quality?

Can improvements in processes and methodologies yield quality gains that compensate for increasingly complex systems? Trends show that when developers and testers collaborate closely as a unified team, breaking down silos, quality does improve. That may not be enough to solve all the quality issues that come with complex code, but it’s a step in the right direction.


November 5, 2010  3:24 PM

Enterprise Agile: Handling the challenges of dispersed teams and large-scale projects

Yvette Francino

There’s no denying that agile adoption is on the rise. But can agile methodologies, originally intended for small, co-located teams, be effective when we apply them to large-scale projects and geographically dispersed teams?

This week SSQ brings you a full range of articles and multimedia content covering solutions to the challenges of large-scale agile, particularly on distributed teams.

Included in our Distributed Agile Lesson are a videocast interview, a podcast interview and a tip from well-known agile expert Lisa Crispin. The lesson also hosts several short video clips from expert practitioners of distributed agile, including Janet Gregory and Jon Bach. Additionally, you’ll find a book review of A Practical Guide to Distributed Scrum, along with a video clip from the book’s co-author, Elizabeth Woodward.

In Scaling Agile software development: Challenges and solutions, consultant Nari Kannan provides further insights and advice about implementing agile in the large. And if that’s not enough, requirements expert Sue Burk offers best practices for gathering requirements on a distributed team in an expert response to a user question.

Want more? Mark your calendars for December 14, when SSQ will host a virtual trade show dedicated to providing you information on trends and solutions in large-scale agile.


November 2, 2010  10:06 PM

Open source or proprietary: Which is higher quality?

Yvette Francino

This week, Coverity, a company that provides a static code analysis tool, announced the findings of its annual report on the state of open source software integrity. The report details the analysis of 291 popular open source projects and over 61 million lines of code, including tests of an Android kernel from the popular HTC Droid Incredible.

The report shows that almost half the defects found were considered high risk, with the potential to cause security vulnerabilities or system crashes.
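For readers unfamiliar with static analysis, here is a hypothetical illustration of the kind of crash-risk defect such a tool can flag without ever running the code. The example is my own and in Python for readability; Coverity’s analysis targets languages such as C/C++ and Java:

```python
def get_timeout(config):
    # dict.get() returns None when the key is missing...
    network = config.get("network")
    # ...so this line can raise a TypeError at runtime. A static
    # analyzer flags it as a possible None (null) dereference,
    # the classic high-risk crash defect described in the report.
    return network["timeout"]
```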

I spoke with Coverity Chief Scientist Andy Chou about the report and about the quality of open source code in general.

When asked whether proprietary software was of higher quality than open source, Chou noted that many commercial products are a mix of proprietary and open source.

We often get asked the question: how do these compare? Our thinking has evolved over time. The boundary between the two is quite blurry. If you look at a lot of proprietary commercial software, it often contains open source software, so it’s very difficult to separate the open source and proprietary components these days. If you look at a typical mobile phone operating system, Android is a good example, the whole operating system is open source, but OEMs can add proprietary software on top of it for custom applications or custom devices. So when you put the whole system together, it’s a hybrid of the two. The fact that it’s such a mixture makes it very difficult to separate measurements.

Still, I persisted: aren’t vendors responsible for ensuring the quality of their overall product? Open source code is visible, so shouldn’t it be tested if it’s packaged as a commercial product?

Chou answered:

Often commercial software vendors and OEMs have limited visibility into the quality of the software they’re using and the accountability is quite fragmented. It’s not easy to pinpoint exactly who has a handle on all of the software.

Despite the blurred boundaries, I wanted to know whether the studies showed that software offered by vendors for a cost was of better quality than open source. My assumption had always been that vendors would have more resources than open source providers to hire testers and purchase the tools needed to ensure high quality. Chou answered that there is a wide range of quality in the open source market as well as in the commercial market.

There’s no simple pat answer. The differences between the best and worst are very broad. There’s a spectrum. Same thing with commercial software. Some industries may choose to release early, knowing there are going to be defects, and that’s a business trade-off they’re willing to take.


November 1, 2010  9:11 PM

How do you test 10 million lines of code?

Yvette Francino

Today IBM and GM announced the use of IBM software to help build the 2011 Chevy Volt. The Volt is an example of a “system of systems” talked about at the IBM Innovate conference I attended a few months ago, and it includes:

• Over 100 electronic controllers
• Nearly 10 million lines of software code
• Its own IP address

The Volt is powered by a software-driven lithium-ion battery that drives an electric drive unit, allowing it to go from 0 to 60 mph in about 9 seconds, hit a top speed of 100 mph, and drive 40 miles on battery power alone. IBM provided the software and simulation tools used to design and develop the advanced control systems.

Sky Matthews, CTO for complex and embedded systems within IBM’s Rational division, spoke with SSQ today about the announcement and the testing processes. A number of years ago, when I worked as a developer at IBM, the test group was responsible for finding a certain number of bugs for each KLOC (thousand lines of code). (Industry averages state that there are about 15 to 50 bugs per KLOC.) I asked if “readiness” was still based on finding a certain number of defects per KLOC.
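To put the question in perspective, here is a quick back-of-the-envelope calculation, my own arithmetic applying those industry averages to the Volt’s code base:

```python
kloc = 10_000_000 / 1000            # ~10 million lines = 10,000 KLOC
low, high = 15 * kloc, 50 * kloc    # 15 to 50 bugs per KLOC
print(f"Expected defects: {low:,.0f} to {high:,.0f}")
# Expected defects: 150,000 to 500,000
```

Hundreds of thousands of expected defects is clearly not a workable readiness target, which is what prompted my question.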

Matthews answered that there are some differences in how the system was tested that make it possible to test such a massive amount of software without expecting to find as many defects per KLOC as in days past:

Hardware-in-the-loop simulation

“Simulation to test the functionality of the vehicle at multiple levels,” Matthews said. He explained hardware-in-the-loop testing, in which software is tested against simulated hardware.
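As a rough illustration of the idea, here is a minimal hypothetical sketch in Python, my own example rather than IBM’s or GM’s actual tooling. The real hardware interface is replaced by a simulation, and the control software under test runs against it unchanged:

```python
class SimulatedBattery:
    """Stands in for the real battery hardware during testing."""
    def __init__(self, temperature_c=25.0):
        self.temperature_c = temperature_c
        self.cooling_on = False

    def read_temperature(self):
        return self.temperature_c

    def set_cooling(self, on):
        self.cooling_on = on

def thermal_controller(battery, limit_c=40.0):
    """The software under test: engages cooling above the limit."""
    battery.set_cooling(battery.read_temperature() > limit_c)

# Exercise the same controller code that would drive real hardware.
hot = SimulatedBattery(temperature_c=45.0)
thermal_controller(hot)
assert hot.cooling_on, "cooling should engage above 40 C"
```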

Generating the source code from models

Another difference from past methods, he said, is that “quite a bit of the software and controllers in the vehicles are automatically generated from models. A lot of [the source code] gets generated from the tools and that greatly reduces the number of defects per lines of code.”

Testing the design using model-in-the-loop simulation

Early in the process, they’ll test the design using model-in-the-loop simulation. This involves taking models of algorithms and behavior and running various test cases using just those models. “They’re testing the higher level design abstraction. You can do a lot of verification of the model design using model-in-the-loop simulation.”
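A model-in-the-loop check might look like the following hypothetical sketch, in which no production code is involved at all; the model and the numbers are invented for illustration. A simple behavioral model of the design is run through a test case step by step:

```python
def soc_model(soc, drive_load_kw, regen_kw, dt_hours, capacity_kwh=16.0):
    """Toy state-of-charge model of a battery design (illustrative only)."""
    soc += (regen_kw - drive_load_kw) * dt_hours / capacity_kwh
    return min(max(soc, 0.0), 1.0)  # charge stays within 0..100%

# Test case run against the model alone: under a steady 10 kW load
# with no regeneration, the modeled charge must never leave its bounds.
soc = 1.0
for _ in range(200):  # 200 six-minute steps = 20 hours of driving
    soc = soc_model(soc, drive_load_kw=10.0, regen_kw=0.0, dt_hours=0.1)
assert 0.0 <= soc <= 1.0
```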

Matthews pointed out that this early design testing also helps expedite the testing process, reducing the late-cycle defect discoveries that were traditionally more common. “The more you can test up front with high-level models the more you can save in the back end.”

Improved time to market

The Volt was designed and developed “in 29 months as opposed to over double that for traditional models” according to the press release material.

When asked how the improved time to market was achieved, Matthews attributed the productivity gains to two major factors: model-driven systems engineering (MDSE) and better collaboration facilities within the tools, so that the engineering teams could work together more efficiently.

What about safety?

But with the Toyota scare and other major glitches in complex systems, are consumers wary of buying a car that is dependent on 10 million lines of code?

Matthews believes that the industry is concerned and must assure consumers of safety, and he personally believes the vehicle is much safer with the software than without it. He mentioned stability control, anti-lock brakes, and OnStar as examples of software functions designed to improve safety for drivers and passengers.


October 28, 2010  6:24 PM

Chris McMahon on experience reports, writing and one year at SSQ

Yvette Francino

If you’re a regular reader of SearchSoftwareQuality.com, or of software quality publications in general, you are bound to be familiar with the writing of Chris McMahon. He is one of our most frequent contributors, and certainly one who is valued both for his expertise in software quality and for his writing.

In The software experience report: Record what you learn, McMahon describes how each job gives us an opportunity to learn and grow. He encourages each of us to take the time to record these experiences and share our learnings with others.

McMahon recently reached his one-year mark of writing for SSQ, with over 40 pieces of content. He honored us by noting that accomplishment in a recent post on his popular blog. McMahon acts as a mentor, particularly to people in software quality who like to write. He facilitates a “writing about testing” network and hosts a conference of the same name. When asked why he does it, he comes up with a few answers. My favorite: “I believe strongly in giving away one’s best ideas.”

Other recent content from McMahon:
Breaking the bug reporting rules
The perfect storm: Multiple mishaps lead to disaster

