Software Quality Insights

A SearchSoftwareQuality.com blog


December 6, 2010  2:31 PM

The benefits of test-driven development



Posted by: Yvette Francino
TDD, test-driven development

I’ve heard from many agile experts, including Elizabeth Woodward and Steffan Surdek, two of the authors of A Practical Guide to Distributed Scrum, about the importance of test-driven development (TDD).

Even though I had been a software developer for many years, I wasn’t entirely clear about the difference between unit testing and test-driven development. The primary difference seemed to be that with TDD the tests are written before the code, but I was still unclear as to why that would really make much of a difference.

I explored that question and others in a two-part interview with Christian Johansen, author of Test-Driven JavaScript Development. In part one, Johansen describes the mechanics of TDD and how it compares to traditional unit testing. In Agile Techniques: Benefits of test-driven development – Part 2, we learn more about the benefits, which include automated tests that provide documentation for the most current code. When asked about the time it takes to write and maintain the test cases, Johansen said:

Writing tests takes time, for sure. However, writing tests also saves time by removing or vastly reducing other activities, such as manual or formal debugging and ad hoc bug fixing. TDD also has the ability of reducing the total time spent writing tests. When retrofitting unit tests onto a system, a programmer will likely occasionally encounter tightly coupled code that is either hard or impossible to test. Because TDD promotes the unit test to the front seat, testability is never an issue.
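To make the test-first cycle concrete, here is a minimal sketch in Python (Johansen’s book is about JavaScript, but the workflow is the same in any language, and the function and test names below are purely illustrative): the test is written first and fails, then just enough code is written to make it pass, and the test then serves as a safety net for refactoring.

```python
import unittest

# Step 1 (red): write the test first. It fails because parse_price
# does not exist yet -- that failing test drives the design.
class ParsePriceTest(unittest.TestCase):
    def test_parses_dollar_string_to_cents(self):
        self.assertEqual(parse_price("$12.34"), 1234)

    def test_rejects_garbage_input(self):
        with self.assertRaises(ValueError):
            parse_price("twelve dollars")

# Step 2 (green): write the simplest code that makes the tests pass,
# then refactor while the tests keep watch.
def parse_price(text):
    if not text.startswith("$"):
        raise ValueError("expected a price like '$12.34'")
    dollars, _, cents = text[1:].partition(".")
    return int(dollars) * 100 + int(cents or 0)

if __name__ == "__main__":
    unittest.main()
```

Because the test exists before the code, the code is testable by construction, which is exactly Johansen’s point about never having to retrofit tests onto tightly coupled code.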

November 23, 2010  6:50 PM

The works of the “father of software quality” Watts Humphrey remain



Posted by: Yvette Francino

Recently, Watts Humphrey, an icon of software quality, passed away at the age of 83. Humphrey was probably best known for his work as one of the founding fathers of the Capability Maturity Model (CMM), a popular and well-known process improvement methodology.

Hearing about his death, I was curious about CMM. Had it been updated over the years? Was the process improvement methodology itself undergoing improvements? As a matter of fact, I learned that the Software Engineering Institute (SEI) recently released CMMI for Development, v.1.3. The new version does include some additions, taking into account the transitions many organizations are making to agile software development.

In CMMI for development: An overview of a process improvement model, I give a high-level overview of CMMI for Development, a process improvement methodology that has continued to evolve over time. Though I cover just the basics, the full 482-page document is available from the SEI website.

Humphrey, of course, left behind a much bigger legacy than his work on the Capability Maturity Model, with 12 books and hundreds of articles that influenced thinking on software quality. In fact, just a few months ago, SSQ published a chapter excerpt from his book, Reflections on Management. He was a scholar who will be missed, but his work will remain to guide and teach others in the industry.


November 18, 2010  4:57 PM

Software methodologies: Are we still at “war”?



Posted by: Yvette Francino
agile, software methodologies, waterfall

The most popular blog post on SQI for the year has been Methodology wars: Agile or Waterfall? Most people still take software methodology very seriously and can get quite vocal about the benefits of a particular methodology, or quite defensive when questioned about some of its pitfalls. I’ve heard words such as “zealots” and “cult mentality” bandied about to describe people who feel strongly about a particular methodology. But what exactly are the differences?

This month SSQ is focusing on software methodologies, bringing to you a variety of articles that will help you determine which methodology might be best for your organization.

In Waterfall or Agile: Differences between predictive and adaptive software methodologies, David Johnson highlights the differences between a predictive waterfall approach versus the adaptive agile approach to software development.

And in Applying lean concepts to software development, Matt Heusser describes practices that originated in manufacturing and are becoming more popular in software development, such as the concepts of flow and continuous work.

Whether your software development processes are predictive, adaptive, lean or a mixture of concepts from various methodologies, yours is not the one and only “right” way to develop code. What matters most is that it is right for your organization.


November 15, 2010  7:54 PM

Real world Agile: Pair programming



Posted by: Yvette Francino
agile

At last week’s Boulder Agile User Group meeting, Pivotal Labs’ Mike Gehard described a day in the life of working in a rather unusual environment.

At Pivotal Labs, the culture is very important. They start their mornings with breakfast at 8:45. Their working hours are a very strict 9am to 6pm and their schedule looks like this:

8:45 Breakfast
9:05 Standup
9:15-ish Team standups
9:15-6:00 Pair programming
Noon Lunch
6:00 End of work day

Having been in corporate America for my whole career, I felt this was a rather rigid schedule, particularly for working parents who have children to pick up. However, Gehard showed a photo of an empty workplace at 6:05. Though the schedule is rigid, it means you can count on being done with your work at 6pm.

Gehard spoke a lot about the Extreme Programming practice of pair programming and what a big part of the culture it is at Pivotal Labs. I explore the topic further in the tip Pair programming: Two people, one computer.

Listen in to this short video where Gehard describes the importance of pair programming at Pivotal Labs:

Video: http://www.youtube.com/v/tETzWJ6ukxA


November 12, 2010  3:08 PM

Are major post-production software bugs becoming more prevalent?



Posted by: Yvette Francino

In a recent interview, Jeff Papows, author of Glitch: The Hidden Impact of Faulty Software, noted that one of the reasons for the increased number of bugs found in production systems is the sheer volume and ubiquity of technology. He writes in his book:

 It’s difficult to understate the scale at which the IT industry has transformed productivity, stimulated economic growth, and forever changed how people work and live.

The complexity of code came up again in an interview I had with IBM’s Sky Matthews: How do you test 10 million lines of code? Matthews talks about the enormous amount of software in the 2011 Chevy Volt.

Massive code complexity was noted once again by Coverity Chief Scientist Andy Chou, who went over the results of an open source integrity report in the post Open source or proprietary: Which is higher quality?

Can improvements in processes and methodologies deliver the higher quality needed to compensate for increasingly complex systems? Trends show that when developers and testers collaborate closely as a unified team, breaking down silos, quality does improve. That may not be enough to solve all the quality issues that come with complex code, but it’s a step in the right direction.


November 5, 2010  3:24 PM

Enterprise Agile: Handling the challenges of dispersed teams and large-scale projects



Posted by: Yvette Francino
Distributed Agile

There’s no denying that agile adoption is on the rise. But can agile methodologies, originally intended for small, co-located teams, be effective when we apply them to large-scale projects and geographically dispersed teams?

This week SSQ brings you a full range of articles and multimedia content covering solutions to the challenges of large-scale agile, particularly on distributed teams.

Included in our Distributed Agile Lesson are a videocast interview, a podcast interview and a tip from well-known agile expert Lisa Crispin. The lesson also hosts several short video clips from expert practitioners of distributed agile, including Janet Gregory and Jon Bach. Additionally, you’ll find a book review of A Practical Guide to Distributed Scrum, along with a video clip from the book’s co-author Elizabeth Woodward.

In Scaling Agile software development: Challenges and solutions, consultant Nari Kannan provides further insights and advice about implementing agile in the large. And if that’s not enough, requirements expert Sue Burk shares best practices for gathering requirements on a distributed team in an expert response to a user question.

Want more? Mark your calendars for December 14th, when SSQ will be hosting a virtual trade show dedicated to providing you with information on trends and solutions in large-scale agile.


November 2, 2010  10:06 PM

Open source or proprietary: Which is higher quality?



Posted by: Yvette Francino
Coverity, proprietary, Software Quality

This week, Coverity, a company that provides a static code analysis tool, announced the findings of its annual report on the state of open source software integrity. The report covers the analysis of 291 popular open source projects and over 61 million lines of code, including tests of an Android kernel from the popular HTC Droid Incredible.

The report shows that almost half the defects found were considered high risk, with the potential to cause security vulnerabilities or system crashes.

I spoke with Coverity Chief Scientist Andy Chou about the report and about the quality of open source code in general.

When asked whether proprietary software was higher quality than open source, Chou noted that many commercial products are a mix of proprietary and open source.

We often get asked the question, how do these compare? Our thinking has evolved over time. The boundary between the two is quite blurry. If you look at a lot of proprietary commercial software it often contains open source software so it’s very difficult to separate the open source and proprietary components these days. If you look at typical mobile phones operating system, for example, Android is a good example, the whole operating system is open source, but OEMs can add proprietary software on top of it for custom applications or custom devices. So when you put the whole system together, it’s a hybrid of the two. The fact that it’s such a mixture makes it very difficult to separate measurements.

Still, I persisted, aren’t vendors responsible for ensuring the quality of their overall product? Open source code is visible, so shouldn’t it be tested if it’s packaged as a commercial product?

Chou answered:

Often commercial software vendors and OEMs have limited visibility into the quality of the software they’re using and the accountability is quite fragmented. It’s not easy to pinpoint exactly who has a handle on all of the software.

Despite the blurred boundaries, I wanted to know if the studies showed that software offered by vendors for a cost was of better quality than open source. My assumption had always been that vendors would have more resources than open source providers to hire testers and purchase the tools necessary to ensure high quality. Chou answered that there is a wide range of quality in the open source market as well as in the commercial market.

There’s no simple pat answer. The differences between the best and worst are very broad.  There’s a spectrum. Same thing with commercial software. Some industries may choose to release early, knowing there are going to be defects and that’s a business trade off they’re willing to take.


November 1, 2010  9:11 PM

How do you test 10 million lines of code?



Posted by: Yvette Francino

Today IBM and GM announced the use of IBM software to help build the 2011 Chevy Volt. The Volt is an example of the “system of systems” concept talked about at the IBM Innovate conference I attended a few months ago, and it includes:

• Over 100 electronic controllers
• Nearly 10 million lines of software code
• Its own IP address

The Volt is powered by a software-driven lithium-ion battery powering an electric drive unit, allowing it to go from 0 to 60 mph in about 9 seconds, hit a top speed of 100 mph, and drive 40 miles on battery power alone. IBM provided the software and simulation tools to design and develop the advanced control systems.

Sky Matthews, CTO for complex and embedded systems within IBM’s Rational division, spoke with SSQ today about the announcement and the testing processes. A number of years ago, when I worked as a developer at IBM, the test group was responsible for finding a certain number of bugs for each KLOC (thousand lines of code). (Industry averages state that there are about 15-50 bugs per KLOC.) I asked if “readiness” was still based on finding a certain number of defects per KLOC.
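For a rough sense of scale (my own back-of-the-envelope arithmetic, not a figure from Matthews), applying those historical averages to a 10-million-line code base shows why a per-KLOC readiness target becomes unwieldy at this size:

```python
# Back-of-the-envelope: expected defects if the historical
# 15-50 defects/KLOC averages applied to ~10 million lines of code.
lines_of_code = 10_000_000
kloc = lines_of_code / 1000            # 10,000 KLOC
low, high = 15 * kloc, 50 * kloc
print(f"{low:,.0f} to {high:,.0f} defects")  # 150,000 to 500,000 defects
```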

Matthews answered that there were some differences in how the system was tested, which made it possible to test such a massive amount of software without the expectation of finding as many defects per KLOC as in days past:

Hardware-in-the-loop simulation

“Simulation to test the functionality of the vehicle at multiple levels,” Matthews said. He explained hardware-in-the-loop testing, in which software is tested against simulated hardware.
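As a rough illustration of the concept (a generic sketch with made-up names, not IBM’s or GM’s actual tooling), the controller logic under test runs unchanged, but it reads from and writes to a simulated device instead of real hardware:

```python
# Hardware-in-the-loop style test (illustrative names only): the
# controller logic is real; the "hardware" is a software simulation.
class SimulatedBattery:
    """Stands in for the physical battery pack during testing."""
    def __init__(self, temperature_c=25.0):
        self.temperature_c = temperature_c
        self.cooling_on = False

    def set_cooling(self, on):
        self.cooling_on = on


def thermal_controller(battery, max_temp_c=40.0):
    """Controller under test: enable cooling when the pack runs hot."""
    battery.set_cooling(battery.temperature_c > max_temp_c)


def test_cooling_engages_when_pack_overheats():
    battery = SimulatedBattery(temperature_c=55.0)
    thermal_controller(battery)
    assert battery.cooling_on


def test_cooling_stays_off_at_normal_temperature():
    battery = SimulatedBattery(temperature_c=25.0)
    thermal_controller(battery)
    assert not battery.cooling_on


if __name__ == "__main__":
    test_cooling_engages_when_pack_overheats()
    test_cooling_stays_off_at_normal_temperature()
    print("hardware-in-the-loop style checks passed")
```

In a real hardware-in-the-loop rig the simulated device would be a high-fidelity model running in real time, but the testing idea is the same: exercise the production control software against conditions that would be expensive or dangerous to produce with physical hardware.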

Generate the source code from models

Another difference from past methods, he said, is that “quite a bit of the software and controllers in the vehicles are automatically generated from models. A lot of [the source code] gets generated from the tools and that greatly reduces the number of defects per lines of code.”

Testing the design using model-in-the-loop simulation

Early in the process, they’ll test the design using model-in-the-loop simulation. This involves taking models of algorithms and behavior and running various test cases using just those models. “They’re testing the higher level design abstraction. You can do a lot of verification of the model design using model-in-the-loop simulation.”
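The difference from hardware-in-the-loop testing is that here only the model is exercised; no production code or physical hardware is involved yet. A simplified sketch (illustrative names and numbers, not the Volt’s actual models) might run design-level test cases directly against a behavioral model:

```python
# Model-in-the-loop style check: the "system" under test is just a
# behavioral model -- here, a crude state-of-charge model -- exercised
# with test cases before any controller code is generated.
def soc_model(initial_soc, current_amps, hours, capacity_ah=45.0):
    """Predict battery state of charge (0..1) after drawing current for some time."""
    drained = (current_amps * hours) / capacity_ah
    return max(0.0, min(1.0, initial_soc - drained))

# Design-level test cases run against the model itself.
assert soc_model(1.0, current_amps=0.0, hours=5.0) == 1.0        # no load, no drain
assert soc_model(1.0, current_amps=45.0, hours=1.0) == 0.0       # full draw empties the pack
assert 0.4 < soc_model(1.0, current_amps=22.5, hours=1.0) < 0.6  # half draw leaves about half
print("model-in-the-loop style checks passed")
```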

Matthews pointed out how this early design testing also helps expedite the testing process, reducing the defects found at the end of the cycle, where they traditionally surfaced. “The more you can test up front with high-level models the more you can save in the back end.”

Improved time to market
The Volt was designed and developed “in 29 months as opposed to over double that for traditional models,” according to the press release material.

When asked how the improved time to market was achieved, Matthews attributed the productivity gains to two major factors: model-driven systems engineering (MDSE) and better collaboration facilities within the tools, which let the engineering teams work together more efficiently.

What about safety?

But with the Toyota scare and other major glitches in complex systems, are consumers wary of buying a car that is dependent on 10 million lines of code?

Matthews believes that the industry is concerned and must assure consumers of safety; he personally believes the vehicle is much safer with the software than without it. He mentioned stability control, anti-lock brakes, and OnStar as examples of functions provided by software designed to improve safety for drivers and passengers.


October 28, 2010  6:24 PM

Chris McMahon on experience reports, writing and one year at SSQ



Posted by: Yvette Francino
Chris McMahon, writing about testing

If you’re a regular reader of SearchSoftwareQuality.com or of software quality publications in general, you are bound to be familiar with the writing of Chris McMahon. He is one of our most frequent contributors, and certainly one who is valued both for his expertise in software quality and for his writing.

In The software experience report: Record what you learn, McMahon describes how each job gives us an opportunity to learn and grow. He encourages each of us to take the time to record these experiences and share our learnings with others.

McMahon recently reached his one-year mark of writing for SSQ, with over 40 pieces of content. He honored us by noting that accomplishment in a recent post on his popular blog. McMahon acts as a mentor, particularly for people in software quality who like to write. He facilitates a “writing about testing” network and hosts a conference on the same theme. When asked why he does it, he comes up with a few answers. My favorite: “I believe strongly in giving away one’s best ideas.”

Other recent content from McMahon:
Breaking the bug reporting rules
The perfect storm: Multiple mishaps lead to disaster


October 25, 2010  4:48 PM

Kent Beck on release schedules



Posted by: Matt Heusser
agile, STPcon, TDD, test-driven development, testing

Many readers will recognize Kent Beck as the co-creator of Extreme Programming, as one of the authors of the Agile Manifesto, or for one of his many books on software development and unit testing.

This Tuesday I got to know him as the “man on center stage” for the Software Test Professionals Conference – and also as a context-driven thinker.

Beck began his talk by telling us about the arguments of his youth — how one side would say that practice X was essential for success in software development, while the other would say the same practice was a recipe for disaster. He pointed out that, in his experience, the arguments never seemed to go anywhere; they just ended in hard feelings.

Next Beck wondered aloud: Is it possible that both of those people are wrong? That both of them are right? Or perhaps both are right — for two entirely different contexts.

While many different things can change the context for a development team, Beck chose to focus on one thing for his talk: the speed of the deployment cycle. He showed us a very informal graph of projects in the 1990s with rough percentages of project cycle times. The cycle times he selected ranged from projects that took a year or more to ship to production to those that deployed quarterly, monthly, weekly, daily and even many times a day. He also showed what project release schedules look like today, and a comparison between the two.

The overall conclusion: Project teams today are shipping more often.

His proposal: Changing the deployment cycle causes social, technical, organizational, and business changes. These changes mean the practices the team will need to use in order to be successful will also change.

For example, these are some of the changes Beck suggested teams make when accelerating release schedules.

From annual releases to quarterly:

- Automate acceptance tests
- Institutionalize refactoring as an everyday, continuous practice
- Continuous integration
- Subscription (don’t charge for upgrades, buy support)

From quarterly releases to monthly:

- Developer testing (developers have to stop making so many bugs)
- Stand-up meetings
- Cards on a wall
- Pay per use business model

From monthly to weekly:

- Live, two-way data migration (rollback; make it safe and cheap)
- Defect zero (another good example of how context matters: a great idea for tight releases, a crazy idea for long release cycles)
- Temporary branches
- Kanban
- Bootstrap financing

In addition to adding practices, Beck suggested taking some away, such as large organizational barriers between development, test and operations. He also suggested that traditional paperwork-heavy processes, like formal change control or design documents, become less valuable as deployments shift toward daily or even more frequent builds. (At the daily level, he suggests getting rid of standup meetings, because they are too slow to enable daily releases. Beck suggested keeping everyone in the same room and constantly communicating as an alternative.)

What to do tomorrow

Although he never came out and said it, the general impression I had was that Beck supports more frequent, iterative releases. (Come to think of it, he does say that, often, in just about all of his books.)

One of the things he discussed during his talk, however, was the dogmatic “you will begin (x more often or shorter) releases” attitude of the typical agile consultant, which is usually met with resistance.

Instead of thinking of it as a battle of wills, Beck suggested picking up some of the enabling practices above, implementing them one by one, and seeing if the objections and opposition to more frequent releases just sort of … melt away.

Overall, I found this framework for looking at practices much better than any “it depends” handwaving. It’s practical without being prescriptive.

As for his idea of trying one practice at a time to enable more frequent delivery, I found it refreshing.

I hope you do too.

For even more, you can look at Beck’s entire set of slides on SlideShare for free.

