November 12, 2010 3:08 PM
Posted by: Yvette Francino
In a recent interview with Jeff Papows, author of Glitch: The Hidden Impact of Faulty Software, one of the reasons Papows cited for the increased number of bugs found in production systems was the sheer volume and ubiquity of technology. He writes in his book:
It’s difficult to understate the scale at which the IT industry has transformed productivity, stimulated economic growth, and forever changed how people work and live.
Code complexity came up again in an interview I had with IBM’s Sky Matthews: How do you test 10 million lines of code? Matthews talks about the enormous amount of software in the 2011 Chevy Volt.
Coverity Chief Scientist Andy Chou raised the issue of massive code complexity as well, in an interview about the results of Coverity’s open source integrity report, covered in the post: Open source or proprietary: Which is higher quality?
Can improvements in processes and methodologies raise quality enough to compensate for increasingly complex systems? Trends show that when developers and testers collaborate closely as a unified team, breaking down silos, quality does improve. That may not be enough to solve all the quality issues that result from complex code, but it’s a step in the right direction.
November 5, 2010 3:24 PM
Posted by: Yvette Francino
There’s no denying that agile adoption is on the rise. But can agile methodologies, originally intended for small, co-located teams, be effective when we apply them to large-scale projects and geographically dispersed teams?
This week SSQ brings you a full range of articles and multimedia content covering solutions to the challenges of large-scale agile, particularly on distributed teams.
Included in our Distributed Agile Lesson is a videocast interview, a podcast interview and a tip from well-known agile expert Lisa Crispin. The lesson also hosts several short video clips from expert practitioners of distributed agile, including Janet Gregory and Jon Bach. Additionally, you’ll find a book review of A Practical Guide to Distributed Scrum, along with a video clip from the book’s co-author, Elizabeth Woodward.
In Scaling Agile software development: Challenges and solutions, consultant Nari Kannan provides further insights and advice about implementing agile in the large. And if that’s not enough, requirements expert Sue Burk shares best practices for gathering requirements on a distributed team in an expert response to a user question.
Want more? Mark your calendars for December 14th, when SSQ will be hosting a virtual trade show dedicated to providing you with information on trends and solutions in large-scale agile.
November 2, 2010 10:06 PM
Posted by: Yvette Francino
This week, Coverity, a company that provides a static analysis tool for code, announced the findings of its annual report on the state of open source software integrity. The report covers the analysis of 291 popular open source projects and over 61 million lines of code, including tests of an Android kernel from the popular HTC Droid Incredible.
The report shows that almost half the defects found were considered high risk, with the potential to cause security vulnerabilities or system crashes.
I spoke with Coverity Chief Scientist Andy Chou about the report and about the quality of open source code in general.
When asked whether proprietary software was higher quality than open source, Chou noted that many commercial products are a mix of proprietary and open source.
We often get asked the question, how do these compare? Our thinking has evolved over time. The boundary between the two is quite blurry. If you look at a lot of proprietary commercial software it often contains open source software so it’s very difficult to separate the open source and proprietary components these days. If you look at typical mobile phones operating system, for example, Android is a good example, the whole operating system is open source, but OEMs can add proprietary software on top of it for custom applications or custom devices. So when you put the whole system together, it’s a hybrid of the two. The fact that it’s such a mixture makes it very difficult to separate measurements.
Still, I persisted, aren’t vendors responsible for ensuring the quality of their overall product? Open source code is visible, so shouldn’t it be tested if it’s packaged as a commercial product?
Often commercial software vendors and OEMs have limited visibility into the quality of the software they’re using and the accountability is quite fragmented. It’s not easy to pinpoint exactly who has a handle on all of the software.
Despite the blurred boundaries, I wanted to know if the studies were showing that software offered by vendors for a cost was better quality than open source. My assumption had always been that vendors would have more resources than open source providers to hire the testers and purchase the necessary tools to ensure high quality. Chou answered that there was a wide range of quality in both the open source market as well as the commercial market.
There’s no simple pat answer. The differences between the best and worst are very broad. There’s a spectrum. Same thing with commercial software. Some industries may choose to release early, knowing there are going to be defects and that’s a business trade off they’re willing to take.
November 1, 2010 9:11 PM
Posted by: Yvette Francino
Today IBM and GM announced the use of IBM software to help build the 2011 Chevy Volt. The Volt is an example of a “system of systems” talked about at the IBM Innovate conference I attended a few months ago and includes:
• Over 100 electronic controllers
• Nearly 10 million lines of software code
• Its own IP address
The Volt is powered by a software-controlled lithium-ion battery feeding an electric drive unit, allowing the car to go from 0 to 60 mph in about 9 seconds, hit a top speed of 100 mph, and drive 40 miles on battery power alone. IBM provided the software and simulation tools used to design and develop the advanced control systems.
Sky Matthews, CTO for complex and embedded systems within IBM’s Rational division, spoke with SSQ today about the announcement and the testing processes. A number of years ago, when I worked as a developer at IBM, the test group was responsible for finding a certain number of bugs for each KLOC (thousand lines of code). (Industry averages state that there are about 15-50 bugs per KLOC.) I asked if “readiness” was still based on finding a certain number of defects per KLOC.
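For a rough sense of what those industry averages would imply at the Volt’s scale, here is a quick back-of-the-envelope calculation. The 15-50 defects per KLOC figures are the ones cited above; everything else is simple arithmetic:

# Back-of-the-envelope: defects implied by industry-average defect densities.
lines_of_code = 10 * 1000 * 1000        # ~10 million lines of code in the Volt
kloc = lines_of_code / 1000             # thousands of lines of code

low_defects = kloc * 15                 # low end of the 15-50 defects/KLOC range
high_defects = kloc * 50                # high end of the range

print(f"Implied defects: {low_defects:,.0f} to {high_defects:,.0f}")
# Implied defects: 150,000 to 500,000

Numbers like those are exactly why Matthews’ answer below focuses on changing the process rather than simply hunting for that many bugs after the fact.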
Matthews answered that there are some differences in how the system is tested that make it possible to test such a massive amount of software without the expectation of finding as many defects per KLOC as in days past:
Matthews first pointed to “simulation to test the functionality of the vehicle at multiple levels.” He explained hardware-in-the-loop testing, where software is tested against simulated hardware.
Generate the source code from models
Another difference from past methods, he said, is that “quite a bit of the software and controllers in the vehicles are automatically generated from models. A lot of [the source code] gets generated from the tools and that greatly reduces the number of defects per lines of code.”
Testing the design using model-in-the-loop simulation
Early in the process, they’ll test the design using model-in-the-loop simulation. This involves taking models of algorithms and behavior and running various test cases using just those models. “They’re testing the higher level design abstraction. You can do a lot of verification of the model design using model-in-the-loop simulation.”
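To illustrate what model-in-the-loop testing looks like in practice, here is a minimal sketch. The control model, thresholds, and test values are invented for illustration; they are not GM’s or IBM’s actual design:

# Hypothetical model of a battery current-limiting algorithm, exercised as a
# plain function ("model-in-the-loop") before any embedded code is generated.
def current_limit_model(battery_temp_c: float, requested_amps: float) -> float:
    """Return the current the hypothetical controller would allow."""
    if battery_temp_c >= 60:              # assumed over-temperature cutoff
        return 0.0
    if battery_temp_c >= 45:              # assumed derating region
        return min(requested_amps, 100.0)
    return min(requested_amps, 200.0)     # assumed normal operating limit

# Test cases run against the model alone: (temperature, request, expected output)
cases = [
    (25, 150, 150.0),   # normal operation: request honored
    (50, 150, 100.0),   # hot battery pack: current derated
    (65, 150, 0.0),     # over-temperature: output cut off
]

for temp, requested, expected in cases:
    actual = current_limit_model(temp, requested)
    assert actual == expected, f"temp={temp}: expected {expected}, got {actual}"
print("All model-in-the-loop cases passed")

The point is that the design abstraction itself is executable and verifiable long before hardware, or even generated source code, exists.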
Matthews pointed out that this early design testing also helps expedite the testing process by reducing the number of defects found at the end of the cycle, where they have traditionally tended to surface. “The more you can test up front with high-level models the more you can save in the back end.”
Improved time to market
The Volt was designed and developed “in 29 months as opposed to over double that for traditional models” according to the press release material.
When asked how the improved time-to-market was achieved, Matthews attributed the productivity gains to two major factors: model-driven systems engineering (MDSE) and better collaboration facilities within the tools, which let the engineering teams work together more efficiently.
What about safety?
But with the Toyota scare and other major glitches in complex systems, are consumers wary of buying a car that is dependent on 10 million lines of code?
Matthews believes the industry is aware of the concern and must assure consumers of safety, but he personally believes the vehicle is much safer with the software than without it. He mentioned stability control, anti-lock brakes, and OnStar as examples of software-provided functions designed to improve safety for drivers and passengers.
October 28, 2010 6:24 PM
Posted by: Yvette Francino
If you’re a regular reader of SearchSoftwareQuality.com or of software quality publications in general, you are bound to be familiar with the writing of Chris McMahon. He is one of our most frequent contributors, and certainly one who is valued both for his expertise in software quality and for his skill as a writer.
In The software experience report: Record what you learn, McMahon describes how each job gives us an opportunity to learn and grow. He encourages each of us to take the time to record these experiences and share our learnings with others.
McMahon recently reached his one-year mark of writing for SSQ, with over 40 pieces of content. He honored us by noting that accomplishment in a recent post on his popular blog. McMahon also acts as a mentor, particularly for people in software quality who like to write: he facilitates a “writing about testing” network and hosts a conference of the same name. When asked why he does it, he offers a few answers. My favorite: “I believe strongly in giving away one’s best ideas.”
Other recent content from McMahon:
Breaking the bug reporting rules
The perfect storm: Multiple mishaps lead to disaster
October 25, 2010 4:48 PM
Posted by: Matt Heusser
Many readers will recognize Kent Beck as the co-creator of Extreme Programming, as one of the authors of the Agile Manifesto, or for one of his many books on software development and unit testing.
This Tuesday I got to know him as the “man on center stage” for the Software Test Professionals Conference – and also as a context-driven thinker.
Beck began his talk by telling us about the arguments of his youth — how one side would say that practice X was essential for success in software development, while a second would say the same practice was a recipe for disaster. He pointed out that, in his experience, the arguments never seemed to go anywhere; they just ended in hard feelings.
Next Beck wondered aloud: Is it possible that both of those people are wrong? Is it possible that they are both right? Perhaps they are both right, each for an entirely different context.
While many different things can change the context for a development team, Beck chose to focus on one thing for his talk: the speed of the deployment cycle. He showed us a very informal graph of projects in the 1990s with rough percentages of project cycle times. The main cycle times he selected were projects that took a year or more to ship to production and those that deployed quarterly, monthly, weekly, daily, and even many times a day. He also showed what project release schedules look like today and a comparison between the two.
The overall conclusion: Project teams today are shipping more often.
His proposal: Changing the deployment cycle causes social, technical, organizational, and business changes. These changes mean the practices the team will need to use in order to be successful will also change.
For example, these are some of the changes Beck suggested teams make when accelerating release schedules.
From annual releases to quarterly:
- Automate acceptance tests (see the sketch after these lists)
- Institutionalize refactoring as an everyday, continuous practice
- Continuous integration
- Subscription (don’t charge for upgrades, buy support)
From quarterly releases to monthly:
- Developer testing (developers have to stop making so many bugs)
- Stand-up meetings
- Cards on a wall
- Pay per use business model
From monthly to weekly:
- Live, two-way data migration (make rollback safe and cheap)
- Defect zero (another good example of how context matters: a great idea for tight releases, a crazy idea for long release cycles)
- Temporary branches
- Bootstrap financing
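As a concrete illustration of the “automate acceptance tests” practice in the first list, here is a minimal sketch written for pytest. The checkout function and the scenario are hypothetical stand-ins; in a real project the tests would drive the application through its public interface (API or UI) rather than a stub:

# Hypothetical acceptance tests for an "order checkout" story, written so they
# can run automatically on every build.
import pytest

def checkout(cart_items, discount_code=None):
    """Stand-in for the application under test."""
    total = sum(price for _name, price in cart_items)
    if discount_code == "SAVE10":
        total *= 0.9
    return round(total, 2)

def test_checkout_totals_all_items():
    assert checkout([("book", 20.0), ("pen", 5.0)]) == 25.0

def test_checkout_applies_discount_code():
    assert checkout([("book", 20.0)], discount_code="SAVE10") == pytest.approx(18.0)

Once tests like these run on every build, the team gets the fast feedback that quarterly (and eventually weekly or daily) releases depend on.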
In addition to adding practices, Beck suggested taking some things away, such as large organizational barriers between development, test, and operations. He also suggested that traditional paperwork-heavy processes, like formal change control or design documents, become less valuable as deployments shift toward daily or even more frequent builds. (At the daily level, he suggests getting rid of stand-up meetings, because they are too slow to enable daily releases; Beck suggested keeping everyone in the same room and communicating constantly as an alternative.)
What to do tomorrow
Although he never came out and said it, the general impression I had was that Beck supports more frequent, iterative releases. (Come to think of it, he does say that, often, in just about all of his books.)
One of the things he discussed during his talk, however, was the sort of dogmatic “you will begin shorter, more frequent releases” attitude of the typical agile consultant, which is usually met with resistance.
Instead of thinking of it as a battle of wills, Beck suggested picking up some of the enabling practices above, implementing them one by one, and seeing if the objections and opposition to more frequent releases just sort of … melt away.
Overall, I found the framework for looking at practices much better than any “it depends” handwaving. It’s practical without being prescriptive.
As for his idea of trying one practice at a time to enable more frequent delivery, I found it refreshing.
I hope you do too.
For even more, you can look at Mr. Beck’s entire set of slides on slideshare for free.
October 22, 2010 11:13 PM
Posted by: Matt Heusser
This year’s STPCon had something a little bit different: hands-on sessions in which not only the presenter but also the participants actually tested software.
The first session of the conference was Justin Hunter’s “Let’s Test Together,” which promised to not only introduce a new test design method, but to change the way we (the audience) think about software testing.
It was an interesting session. While I’m afraid I have neither a time machine nor permission to videotape the session, I do have the next best thing: Hunter’s slides plus a summary.
The first thing Hunter did was hand out a series of ‘spools,’ representing the requirements for mortgage-origination software. The software had inputs allowing six types of credit rating, five ranges of income, six property types, and six different locations in the United States where the mortgage could originate (slide five). Hunter asked us to look at each range (not the boundaries) and come up with a number of suggested test cases; a simple 6x5x6x6 yields 1,080 test ideas, and real software of this type would have far more inputs.
Next Hunter pointed out that the assumption behind the 1,080 test cases is that some magical combination of all the inputs is needed to trigger a failure. He then pulled out historical data from the National Institute of Standards and Technology showing that most bugs are tripped by either a single condition (say, all people in income range one) or a combination of two conditions (say, all people in income range one with credit score three). Thus, based on that historical data, if you “just” ran tests covering pairs of conditions, you could find something like 85% of the bugs with ten or eleven test ideas.
To prove it, Hunter ran a computer program to generate those ten tests, then created a board with all 23 (6+5+6+6) options. He had an audience member throw darts at the grid, and, yes, every pair the young gent hit was covered by those ten cases.
We call this solution “all-pairs,” or pairwise testing. After explaining the technique, Hunter drew a chart, from high value to low value, diagramming the kinds of problems pairwise testing is effective at (limiting configurations or input types) as well as the kinds it has little or no application for, such as error and exception handling.
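For readers who want to see the mechanics, here is a minimal greedy sketch of all-pairs generation in Python. The parameter values are made up, matching only the sizes from the example (6, 5, 6, and 6); a real tool such as Hexawise uses far more sophisticated algorithms and also orders the output by expected value:

from itertools import combinations, product

# Invented parameter values matching the sizes in the example (6 + 5 + 6 + 6).
parameters = {
    "credit_rating": ["A", "B", "C", "D", "E", "F"],
    "income_range": ["<30k", "30-60k", "60-90k", "90-120k", ">120k"],
    "property_type": ["single", "condo", "townhome", "duplex", "mobile", "multi"],
    "location": ["NE", "SE", "MW", "SW", "W", "other"],
}
names = list(parameters)

def pairs_in(row):
    """All parameter-value pairs exercised by one candidate test."""
    return set(combinations(zip(names, row), 2))

# Every pair of values that a covering set of tests must include.
candidates = list(product(*parameters.values()))   # all 1,080 combinations
uncovered = set()
for row in candidates:
    uncovered |= pairs_in(row)

tests = []
while uncovered:
    # Greedy step: pick the candidate covering the most still-uncovered pairs.
    best = max(candidates, key=lambda row: len(pairs_in(row) & uncovered))
    tests.append(best)
    uncovered -= pairs_in(best)

print(f"{len(candidates)} exhaustive combinations reduced to {len(tests)} pairwise tests")

A full covering set produced this way is larger than the handful of highest-value tests Hunter demonstrated, but still a tiny fraction of the 1,080 exhaustive combinations.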
It was a neat session and his passion came through. While Justin Hunter did not invent the pairwise idea, you might say it runs in the family: his father, William G. Hunter, was a professor of statistics and a contributor to the design-of-experiments movement in the 1980s.
But wait, there’s more
It turns out that doing the fancy math to generate the table, to figure out those ten test cases, is non-trivial. And, while you could buy a statistics book with pre-cooked templates that can be adapted, that takes a fair bit of work as well.
Hunter has software, Hexawise, that participants could use to speed up the process. Instead of doing the manual work, you let the computer generate the tables; you can also ask for the next ten test ideas, and the next ten after that, on and on, sorted by highest probability of defect detection.
Yes, it does get better than this — you could win the lotto. I’m just afraid the odds on that are a little longer than 1,080 to ten.
But hey, we were in Vegas.
A 14-day trial of Hexawise is available here.
October 22, 2010 1:15 PM
Posted by: Matt Heusser
I’m here at the Software Test Professionals Conference in Las Vegas, Nevada. Yesterday, I had the opportunity to participate in a panel discussion on “How to Reduce the Cost of Testing.” It was fun.
Cutting the costs of testing
So here’s the problem: American business has a mandate for profit growth. Companies can accomplish this in two ways: grow revenue or cut costs. Looking at the testing group as a cost center leaves only one choice: cut costs.
A small group of passionate testers is currently working on a book on the subject, and several chapter contributors happened to be at the conference. So we got the group together for a panel discussion to cover what ideas they had to reduce the cost of testing with integrity. Please allow me to share some of their answers:
Justin Hunter suggested that most bugs come from combining two or more inputs. So we can collapse our tests into only those tests that actually cover pairs of test conditions.
Scott Barber wanted to talk to executives about the other side of the equation – the social value of having software that actually works.
Catherine Powell was interested in the “great game of testing”: given that we have limited time and budget and an infinite number of test ideas, what should we be testing right now in the time we have? The formal tool she suggested for comparing ideas is designed to measure the opportunity cost of various test strategies.
Selena Delesie talked about the cost of quality, and how to discuss it with executives.
Lanette Creamer suggested collaborative testing: basically decreasing the organizational boundaries, and the cost of trade-offs, between business people, developers and testers through pair testing, story kick-offs and so on.
Finally, Petteri Lyytinen (yes, he’s Finnish) offered a small set of ideas to try, such as test-driven development and continuous integration, that would allow teams to decrease the length of a test cycle.
Then what? Lay off your staff?
As moderator, I got to ask the leading question: Let’s say you are successful at reducing the cost of testing, and can now do with eight people what used to take ten. Does that mean that you have just earned the dubious privilege of laying off your staff?
Lanette pointed out that senior management was likely going to cut anyway, so this was advice on how to deal with those cuts. Several others said these ideas were designed to meet management where they are, then move the discussion to include value. (Scott Barber made it clear that if you can’t quantify the value, you can’t decide whether any given cost is “too high,” “about right,” or a good deal.)
How about just preventing the bugs in the first place?
Then the audience got to ask their questions. My personal favorite was the person asking about the traditional manufacturing idea that we should “just” prevent “all” bugs “up front.” The general consensus was that, while those ideas have something to offer, the analogy doesn’t hold: with manufacturing, you need to test at least once for every change in the process.
The thing about software is that every check-in of new code, combined with a new build, is a change in your build process.
So the place to start cutting the cost of software testing is not eliminating practice X, process Y or person Z. It’s actually understanding why those practices and processes are in place, then deciding whether the risk of dropping them is worth taking.
Do you have ideas of your own? Let us know!
October 21, 2010 1:24 PM
Posted by: Yvette Francino
The results of SearchSoftwareQuality’s reader survey are in and Colleen Frye reports the findings in her article, Agile, virtualization help with long-standing challenges. The interest in agile development continues to be on the rise, which is no surprise based on the conferences I attend and the industry reports and articles I read. In fact, at IBM’s Innovate conference in June, keynote speaker Walker Royce labeled the waterfall methodology ‘geriatric.’
Interest in agile is on the rise among respondents, with 42% planning to implement agile processes within the next year vs. 17% in 2009. And Scrum is the dominant methodology (45%), with both XP and XP/Scrum hybrid at 9% and other methods in the single digits.
Though there are still plenty of organizations that have their origins in a waterfall approach, more are starting to adopt agile work practices, such as increased collaboration and more testing earlier in the development cycle.
Agile is not without challenges. Frye notes:
But the more things change, the more they stay the same, an adage that some of the survey results support. For example, respondents cite some age-old issues as the top challenges of adopting agile processes: documentation, communication and resistance to change. However, while documentation was the top challenge for agile adopters for two years in a row, the percentage citing it dropped from 67% to 52% this year.
Take a look at Frye’s article to find out more about the challenges readers face and the trends they’re seeing in software development.