May 3, 2009 8:26 PM
Posted by: MichaelDKelly
In her post on "How app visualization enables picture-perfect requirements," Jan Stafford shared excerpts from her interview with application visualization expert Maurice Martin, president, COO and founder of iRise, a maker of application visualization products. While I'm not as bullish as Martin is on the virtues of application visualization, it's been my experience that visualization — especially around requirements as outlined in Jan's post — can significantly help the testing effort for the project.
In his interview, Martin said he's seen visualization "cut down on change orders and eliminate about 70% of downstream rework." While I can't speak to the numbers, I will say that, anecdotally, while serving as a guest reviewer for HCI projects at Indiana University I've seen a number of different mock-up techniques, from paper prototypes to Flash mock-ups. It's certainly the case that I can provide better feedback on the interactive mock-ups, and I suspect that using an enterprise visualization solution — instead of custom coding — is a more scalable solution for a product team or IT department.
Visualization, like prototypes and mock-ups, allows you to put something in front of a user community sooner. This means you can begin usability studies before you have a finished product. There are multiple ways to address that issue, and visualization is certainly one effective method. Other methods, like embedding users in the project team and delivering working software each iteration so you can get feedback on the actual product, can address some of the same issues, but they can be more difficult in product development, where you might not have a user population you can share the product with each iteration.
When it comes to testing, visualization can help move requirements and design disambiguation to the front of the project. Not all problems lend themselves to this type of visualization, but for those where you can easily model the end product interactively, you can get a lot of value. Not only does visualization create a more interactive platform for disambiguation, it gives testers a better working model of what the product will look like and how it will behave. This creates a shared vocabulary and framework for discussing the product with other project team members.
Visualization also gives testers the ability to raise inconsistencies earlier in the project. If you put a visualization in front of me (or likely any tester), I'll test it. While it's not going to be a full-featured working product, depending on what aspects of the product get modeled, I'll be able to test the proposed processes and workflows, the proposed user interface, and the proposed data model. Not only does this allow a tester to provide quality-related information about the product earlier in the project, it better prepares them for the testing that will take place later in the project, making them more effective when they get the actual product for testing.
April 30, 2009 8:42 PM
Posted by: Rick Vanover
virtual networks
Many IT environments of all sizes are enjoying virtualization in some capacity to save on costs, provision systems more quickly and consolidate equipment. Beyond the faster time to market for virtual systems, virtualization offers built-in safeguards that can be rolled into any test, development or software quality effort.
All virtualization platforms offer some form of virtualized network functionality. I'll talk about VMware's ESX and vCenter Server, as they are the most popular in the space. A virtual switch is where virtual machines are connected to the network, and on the host side that virtual switch has a configuration as well. For test purposes, a fully isolated network can be provisioned for use by virtual machines on the host. Take the following example: an organization's internal IT staff provisions a special virtual local area network (VLAN) to the host system, which is then configured as a port group in the virtual switch. At that point a virtual machine can be configured to access that VLAN for a private, testing-only environment. The figure below shows an example of this configuration, where four virtual machines are assigned to a private VLAN:
The private VLAN is not connected to the rest of the company network, yet the four virtual machines can communicate with each other. I like using isolated virtual networks because they can be populated with other systems to test communications between virtual machines. Further, built-in virtual machine functionality such as cloning and conversion allows copies of live systems to be put into the test environment quite easily. This example is not specific enough for many to take and use in their internal testing, but it should be a springboard for conversation about how network virtualization technology can be applied to your software testing and quality procedures.
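As a rough sketch, that kind of port group can be created from the ESX service console with the esxcfg-vswitch utility. The switch name, port group name and VLAN ID below are hypothetical, so treat this as a starting point rather than a recipe for your environment:

```shell
# Add a port group for the test network to an existing virtual switch
esxcfg-vswitch -A "Test-Isolated" vSwitch1

# Tag the port group with the private VLAN provisioned by the IT staff
# (VLAN ID 105 is made up for this example)
esxcfg-vswitch -v 105 -p "Test-Isolated" vSwitch1
```

Virtual machines assigned to the "Test-Isolated" port group can then talk to each other without touching the rest of the company network.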
April 24, 2009 9:09 PM
Posted by: Jan Stafford
Agile software development, software development, software requirements
Readers of Mike Kelly's post on software security uses for data visualization, the ability to visualize patterns of data, asked us for info on application visualization, too. Application visualization enables software developers, testers and users to see and interact with a fully functional prototype or simulation of a proposed application before coding. We'll be covering both visualization topics more in this blog and on SearchSoftwareQuality.com.
In this post, I share excerpts from my interview with application visualization expert, Maurice Martin, president, COO and founder of iRise, a maker – of course – of application visualization products.
Developers and business analysts seeking to get software requirements right will find visualization far more effective than text documentation and diagrams, according to Martin. Planning to visualize applications in high fidelity before coding can ensure that the right requirements are captured early in the process.
Imagine giving stakeholders a fully-functional, working prototype early in the process; development teams can secure the right feedback early in the process, before any expensive coding happens.
“Instead of a stakeholder review meeting where everyone is flipping through giant binders of text requirements and struggling to understand what’s been written, now everyone can ‘test drive’ the application live,” Martin said. “The magic happens when changes are made to the visualization right in the stakeholder review meeting, cutting days, weeks or even months off the requirements cycle time.”
Application visualization is a paradigm-changing technology and approach for software development, in Martin’s opinion. Business analysts are no longer just reviewing text requirements, they’re “now rapidly iterating functional prototypes directly in front of the business stakeholders,” he told me. The result is quicker capture of more accurate requirements for designing and developing code, building test scripts and writing documentation or training content.
Until business analysts, developers and users can see, interact with and fully experience the application, they can really only guess at the requirements, Martin said. He continued:
"Add to this the fact that the tools and processes used by most BAs are antiquated at best, and it's no wonder project overruns, delays and missing features are all-too-common outcomes. The sad truth is this: the state of the art in requirements definition for many organizations is still a giant text document, maybe annotated with a few static screenshot mock-ups or use case diagrams. Let's face it, the cycle times to create these documents are long and the business stakeholders don't understand, or read, these giant binders. This would be analogous to designing a new car, airplane, building or semiconductor today with a drafting board. It simply doesn't happen in other industries; why are we still designing software this way?"
Visualizations can provide guides for what to build, eliminating confusion and enabling developers to focus on what's important: architecture, design, coding and delivery, Martin said. He's found that visualization can cut down on change orders and eliminate about 70% of downstream rework. In short, visualizations force the business to articulate what it wants in a language everyone can understand.
April 22, 2009 11:50 PM
Posted by: MichaelDKelly
software quality assurance, software testing
At a recent Indianapolis Workshop on Software Testing, I joined a group of software testing pros in examining the practice of test-driven development (TDD). We brainstormed about how to increase adoption by helping teams new to the practice. While there are general principles for introducing change into an organization that come into play, we came up with the following practices to make TDD sticky in most organizations.
One topic that came up often was making sure the team understands why test-driven development is being implemented. Managers need to explain how teams will benefit. A key to success here is finding out what the team's objections and concerns are before you begin implementation. Without that information, objections can't be addressed in advance, and they'll show up later in ways that slow down the project.
Once people understand the goal, the next step will likely be to pilot the practice. Some ideas from the group included starting with the strongest advocates in the group, targeting the programmers with the most influence first, or pulling in external resources that already know how to do it and can share their experience with the team. By starting with a smaller team and growing from there, you're (hopefully) building a success story. You want to create a sub-culture where people want it to succeed.
To help make the tests a more visible and tangible part of the development practice, make sure they are included in code reviews and that when people pair they are still working test-first. You want to create a culture where programmers don’t just challenge each other on code and design, but also on tests. You can also make unit testing tasks a visible part of the development process; include them in stories, on your kanban board, or as part of task-out.
In addition to making sure people understand the goals and getting the practice integrated into your development practices, there are specific tools and metrics that can help. Here are some good practices:
- Get people looking at static analysis metrics — complexity, magic numbers, etc. — for the code before and after test-driven development.
- Get people looking at code coverage for unit tests and notice how it changes as adoption picks up.
- When you start, it can be helpful to have an endpoint in mind and trace how test-driven development can assist you in getting there.
- Try to find and measure other metrics to support the end goal; like time to fix with test-driven development versus without.
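To make the first bullet concrete, here is a minimal Python sketch of a "magic number" counter built on the standard ast module. It is a rough heuristic, not a substitute for a real static analysis tool, and the sample code it inspects is invented for illustration:

```python
import ast

ALLOWED = {0, 1, -1}  # literals not usually considered "magic"

def count_magic_numbers(source: str) -> int:
    """Count numeric literals inside function bodies, skipping common ones."""
    tree = ast.parse(source)
    count = 0
    for func in ast.walk(tree):
        if not isinstance(func, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        for node in ast.walk(func):
            if (isinstance(node, ast.Constant)
                    and isinstance(node.value, (int, float))
                    and not isinstance(node.value, bool)
                    and node.value not in ALLOWED):
                count += 1
    return count

# Hypothetical "before TDD" code, with hard-coded prices in the function
before = "def price(qty):\n    return qty * 19.99 + 4.50\n"

# Hypothetical "after" code, with the literals promoted to named constants
after = (
    "UNIT_PRICE = 19.99\n"
    "SHIPPING = 4.50\n"
    "def price(qty):\n"
    "    return qty * UNIT_PRICE + SHIPPING\n"
)

print(count_magic_numbers(before))  # 2
print(count_magic_numbers(after))   # 0
```

Tracking a number like this (alongside complexity and coverage) before and after adoption gives the team something tangible to point at.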
Some of the tools that came up during the workshop include Heckle, Flog, and Blame. There are tons of tools out there to help teams manage unit tests and provide code and testing metrics. Figure out what works for you, based on your programming language, IDE of choice, and how your team wants to work. As your team starts down the road of implementing test-driven development, recognize that they'll need support and reinforcement. Finally, understand that the change likely won't happen overnight.
These are just a few of the many good ideas offered at the March meeting of the Indianapolis Workshop on Software Testing. Besides yours truly, participants of the workshop included Chris Achard, Patrick Bailey, Anthony Bye, Jason Gladish, Matthew Gordon, Tim Harvey, Frank Jaloma, Courtney Jones, Baher Malek, Joel Meador, Elijah Miller, Steve Pollak, Russell Scheerer, Elizabeth M. Shaw, Dustin Sparks, Miles Z. Sterrett, Jon Strayer, Nick Voll, and Jeff White.
April 21, 2009 8:52 PM
Posted by: MichaelDKelly
Software testing
I’ve worked with a lot of people new to testing. Helping people find an area within testing they like is something I enjoy doing. Testing is fun. It’s challenging. I think it’s got something to offer everyone who’s willing to learn, is comfortable dealing with conflict and doesn’t mind hard work.
So what does that have to do with hedgehogs?
I’m a big Jim Collins fan. I know I’m on the bandwagon, but if you’ve not read Good to Great or Built to Last, please give them a try. There is some great material in there that I think applies to all levels of contributors. For example, I’d like to take a second to look at what Collins calls the hedgehog concept. If you’re not familiar with the hedgehog concept, please take a quick look at it.
It’s ok, I’ll wait.
So let’s apply the hedgehog concept to software testing. You have three circles:
- What you can be the best in the world at
- What drives your economic engine
- What you are deeply passionate about
Best in the world
For companies, we have fairly well-defined measures of success. For individual contributors, it becomes a bit more difficult. The idea here isn't that you would literally become the best in the world; it's that you have the ability to be one of the best, you know what it would look like to be one of the best, and you recognize the depth and breadth of what it would likely take to be the best within that aspect of software testing.
This question reminds me of a conference talk Rob Sabourin gave a couple years ago on the topic of what baseball taught him about metrics. Think of the back of a baseball card and the stats that you would find. Do you know what stats are important for measuring testers? What stats do performance testers care about compared to security testers?
I think what’s important isn’t that you’re actually the best. There’s no real way you’d ever know. Instead, what’s important is that you’re thinking about what being the best would look like. You’re asking yourself how you’d measure it. And as far as your hedgehog concept is concerned, you believe you can be the best in some aspect of your testing.
Figuring out your economics
Economics in this case has a double meaning. It’s not just about how much money you make. However, that’s certainly part of it. You can’t work as a tester if you can’t pay the bills. But it’s more general than that. As a project team member, you play into the “economics” of the project team as well. How do you drive the project economic engine?
Think about the statement “profit per X.” If you substitute the word “value” for “profit,” what X’s might a tester care about?
- Is it value per test?
- Is it value per test idea?
- Is it value per requirement covered?
- Is it value per line of code covered?
- Is it value per defect identified?
- Is it value per document?
- Is it value per project?
- Or is it something else? Or some combination?
How do you affect the project's bottom line? What value do you provide to the project? Do you know how you affect the project's economic engine, and do you know how that in turn drives your personal economic engine? Is the connection clear, and do you actively try to manage it?
Finding your passion
This is typically the easiest circle to figure out. What types of activities or issues keep you at work late, not because you have to, but because you want to? Where’s your passion? Is it performance testing, security testing, usability testing, exploratory testing, test management, testing automation, or some other aspect of testing? Think about when you’re most satisfied in your work. You likely won’t like every aspect of testing, but which aspects give you energy?
Thinking about your hedgehog concept can be a valuable activity in trying to figure out what you want to do with your career. It's also useful in recognizing what value you provide to the project team so you can actively manage how much value you provide. Just as Collins says in Good to Great, you likely won't be able to answer any of these questions right away. They take thought, questioning, and self-discovery. If you can develop a hedgehog concept (and follow it), I suspect you'll have a clearer understanding of what success looks like for you and you'll be more fulfilled as you work to follow it.
April 17, 2009 6:36 PM
Posted by: Jan Stafford
security threats, software security
Software quality assurance (QA) and software security teams have long been separate islands within development organizations. That division is giving data pirates carte blanche to compromise software, cyber-security industry veteran Barmak Meftah told me recently.
“Today, we are witnessing the greatest increase in cybercrime,” said Meftah, Fortify Software’s technology senior vice president. “Yet most organizations continue to bolt security on after the fact. Organizations need to build in security first, not layer it on later.”
Meftah and I talked about how this code-red software environment has pushed software security assurance to the forefront in 2009. He listed these reasons:
- Heightened executive pressure for software security: As software's critical role in running business operations becomes apparent, non-technical executives have become increasingly interested in understanding broader approaches that manage risk, not technology.
- The need for a holistic platform for software security: Traditionally, companies have only been able to buy point products to secure applications during development, QA or in production. That doesn’t work now that data predators, who are continually becoming more sophisticated, use multiple methods to leverage software vulnerabilities to attack organizations. Reacting to this pressure, companies have been forced to adopt multi-prong approaches that include pen testing, static analysis, web application firewalls and so on.
- A business-driven focus on more security from development to production: Eliminating risk leads to preventative, not reactive, security testing. There's a move beyond securing the SDLC alone: security teams are attempting to bring about significant cultural and process changes to elevate security awareness and governance across an organization.
Today, Meftah said, QA teams must pick up responsibility for security and work with development to make fixes. This is a marked change from the recent past, when QA and security managers rarely crossed paths. "They have been different organizations with different objectives," Meftah said. QA has focused on common use cases to uncover problems that typical users may encounter. Security has focused on abuse cases, hoping to uncover a corner case that could allow an adversary to penetrate and exploit a system.
Now, Meftah told me, QA and security teams have to cross the chasm and work on these issues and more together. QA involvement is a key way to shift from a reactive security approach, such as patching and firewalls, toward a preventative approach that builds security in from the beginning.
This conversation left me wondering how many companies are pairing or have paired their QA and security teams and to what extent. Or, perhaps some QA and security managers believe they should continue as separate entities. I’d like to hear from you on this subject, either in comments below, or via email at email@example.com.
April 13, 2009 6:55 PM
Posted by: MichaelDKelly
unit testing
I’ve always found Kent Beck’s Four Rules of Simple Code to be a great framework for when I think about code reviews. Those rules are:
- The code correctly runs all the tests.
- The code clearly expresses the ideas/intentions of the developer.
- The code contains no duplication.
- The code minimizes the number of classes and methods.
This is useful to me because it reminds me to:
- Check to make sure the code has unit tests, and that it passes them.
- Verify that I’m not confused by what the developer is trying to accomplish in the code. I can read and understand it clearly.
- Verify that code isn't duplicated (unless duplication is needed to clearly express the ideas/intentions of the developer).
- Verify that the code has the smallest footprint possible (not really my strong suit, but I can normally spot when something might be refactored into fewer lines of code).
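As a toy illustration of how the rules interact, here is a hypothetical Python snippet (function names and tax logic invented for this sketch) where duplicated logic is factored into one clearly named helper while the same checks keep passing:

```python
# Before: the same tax calculation is duplicated in two functions
# (prices are in integer cents to keep the arithmetic exact).
def invoice_total(cents):
    return cents + cents * 7 // 100

def quote_total(cents):
    return cents + cents * 7 // 100

# After: one helper expresses the intent (rule 2) and removes the
# duplication (rule 3) without adding extra classes or methods (rule 4).
TAX_PERCENT = 7

def with_tax(cents):
    return cents + cents * TAX_PERCENT // 100

# Rule 1: the code correctly runs all the tests.
assert invoice_total(10000) == with_tax(10000) == 10700
assert quote_total(999) == with_tax(999)
```

In a review, spotting the duplicated calculation is the easy part; checking that the tests still pass after the refactoring is what makes the change safe.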
Something that’s new to me is the idea of peer reviewing the unit tests along with the code. While I’ve always looked at what the unit tests cover (behavior, error conditions and code coverage), I’ve never thought to look at the structure of the unit tests.
That’s changed recently. At the March Indianapolis Workshop on Software Testing, Matt Gordon shared a story on unit testing using fixtures. In that experience report, he outlined his two rules for unit testing:
- A test should test one thing; and it should be obvious about what that one thing is.
- Everything should be in the test.
I like these rules, because they help me to focus on making sure I:
- Understand what each test is trying to accomplish.
- Understand where the tests are getting/creating their test data or supporting objects (mocks, etc.).
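Here is a hypothetical example of what those two rules can look like with Python's unittest (the function under test and all names are invented for illustration):

```python
import unittest

def apply_discount(price, rate):
    """Return price reduced by the given rate (0.10 means 10% off)."""
    return round(price * (1 - rate), 2)

class TestApplyDiscount(unittest.TestCase):
    # Rule 1: each test checks exactly one behavior, and its name says which.
    def test_ten_percent_discount_reduces_price(self):
        # Rule 2: the inputs and the expected result live right here in
        # the test, not in a shared fixture someone has to hunt down.
        self.assertEqual(apply_discount(100.00, 0.10), 90.00)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.00, 0.0), 100.00)
```

Run with `python -m unittest` from the project directory. The payoff is that a failing test name alone tells you what broke, without reading the test body.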
Overall, I find simple rules like these very helpful. While they don’t provide specific things to look for (like formatting, secure coding practices, etc.), they get you thinking about overall design, maintainability and quality.
April 10, 2009 1:24 PM
Posted by: MichaelDKelly
I recently received the following question related to debugging:
Debugging takes up a lot of my team’s time. What are some shortcuts, process changes or alternatives that can reduce the need to debug frequently? How can TDD or IDE help?
Figuring out what process changes you might be able to make to reduce the need to debug is what software engineering is all about. That’s a big topic, which can potentially span how you do design, coding, testing, configuration management and project management. That said, I do think I can offer some insights into how test-driven development (TDD) and a good IDE might help.
One of the biggest things TDD offers to the debugging process is more usable code. When you look at well-tested code, where the interface was designed using an evolving suite of tests, it’s often easier to understand what’s happening. In addition, you also have the tests, which not only help document the code’s behavior, but they allow you to make changes with confidence while you’re debugging.
In fact, on past projects where I’ve been one of the programmers, I’ve used my unit tests to help isolate issues — picking a test that’s close to the code I want to isolate and just changing the test again and again until I prove (or disprove) my theory on what the bug is that I’m trying to track down.
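A contrived sketch of that test-driven isolation, with a deliberately buggy date function invented for the example:

```python
def days_in_month(month, year):
    """Buggy on purpose: forgets that years divisible by 400 are leap years."""
    if month == 2:
        if year % 4 == 0 and year % 100 != 0:
            return 29
        return 28
    if month in (4, 6, 9, 11):
        return 30
    return 31

# Start from a nearby test that passes...
assert days_in_month(2, 2004) == 29

# ...then keep changing the inputs until the theory is proven.
# Theory: the leap-year rule mishandles century years divisible by 400.
# February 2000 should have 29 days, but the buggy code returns 28.
assert days_in_month(2, 2000) == 28  # bug isolated: expected 29
```

Once the narrowed test pins down the failing input, fixing the code and flipping that assertion to the correct expectation turns the debugging session into a permanent regression test.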
The IDE you use can also play a role in how effective your debugging will be. While most IDEs have basic debugging features built in (breakpoints, the ability to view the stack, etc.), some IDEs have advanced features that let you define complex conditional logic to establish the criteria that will cause the program to halt. This can be good for finding issues that only occur intermittently.
In addition, how your IDE integrates with runtime and static analysis tools can affect your debugging effectiveness. If you’re tracking down a memory leak, depending on the technology you’re working with, you may want to be able to leverage tools that instrument the code for you, or give you line-by-line metrics. If you’re trying to debug a security issue, a static analysis tool that checks for secure coding practices might save you a lot of time.
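The same idea is available outside the IDE, too. In Python, for instance, the standard library's tracemalloc module gives line-level allocation statistics, which is handy when hunting a leak (the leak below is simulated for the sketch):

```python
import tracemalloc

cache = []  # simulated leak: entries accumulate and are never released

def leaky(n):
    cache.append(list(range(n)))

tracemalloc.start()
for _ in range(100):
    leaky(1000)
snapshot = tracemalloc.take_snapshot()

# The top statistic points at the allocation site of the "leak",
# including the file name and line number.
top = snapshot.statistics("lineno")[0]
print(top)
```

Sorting the snapshot by line number makes the biggest allocation site jump out, which is exactly the line-by-line view described above.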
April 7, 2009 7:17 PM
Posted by: JackDanahy
There is no more critical component to an organization’s capacity for trust than reputation.
Whether you are handing over your money, your customers' information or your own health, the reputation of a prospective bank, partner or hospital will likely be the reason you consider, choose or cut the organization out of contention.
Consider the announcement in January 2009 of a breach at Heartland Payment Systems. Since that story broke, there has been a continuing stream of news on the topic that keeps both Heartland and the breach in the headlines. This ripple effect from the original event deserves consideration by other organizations as they make their own decisions regarding risk, investment, consequences and policy.
To understand how broad the impact has been, I thought it would be useful to use Google to simply look up “Heartland Payment Systems” and see what kind of exposure this single breach was enjoying now, almost three months after the original announcement by Heartland.
The output is pretty illuminating. As one would expect, the first natural search result is the corporate website. Beyond this, it goes downhill pretty fast. Of the remaining nine items in the natural search list, with the exception of a pointer to a secondary company site and the company's Hoovers listing, everything relates to the breach. That's a pretty high percentage.
By way of description, the second item is a website, www.2008breach.com, which is registered to Heartland Payment Systems, on which is a statement from Heartland CEO Robert O. Carr about the breach and about Heartland’s continued role as a payment processor. Mr. Carr also draws attention to the fact that some competitors had been misrepresenting the actual meaning of the announcement by Visa that they had removed Heartland from the PCI-compliant vendors list. This type of disclosure and investment in educating potential victims is laudable, but querying for a vendor and having the second item have “breach” in the URL would likely be a warning flag to someone trying to learn about Heartland.
The other items in the natural list point to articles relating to various writers’ viewpoints on the breach. While some are more objective than others, the actual topics are much broader than I would have suspected:
- Three of the articles are pretty straight news stories on the breach including the idea that it may be the largest breach in history.
- One is a news story on a class action lawsuit that “seeks actual and punitive damages for allegations of negligence and breach of duty.”
- One describes the author’s view that Heartland attempted to hide the “Largest Data Breach in History.”
- One describes the “Big Breach and Lame PR Tactic.”
- One claims that Heartland “Uncovers Malicious Software in its Systems.”
So, what does all this mean? I, for one, am not suggesting that all of this content is correct, or that Heartland does not deserve the opportunity to address any issues and continue on with their business. My point is that reputation is a critical, yet fragile thing. Building it and defending it are not small tasks, and a fall from favor can be swift and absolute.
It should also be noted that three advertisements arrived in the right-hand column of the Google results window when searching for “Heartland Payment Systems.”
- One is for point-of-sale systems for retail use.
- One is a recruiting advertisement for people who want to sell point-of-sale systems.
- The last is from the firm of KaplanFox, who claimed to be investigating “Possible Securities Fraud by Heartland.”
Even the targeted advertisements promote a difficult message for Heartland.
All of this ties directly into managing risk. Since reputation is an invaluable asset to any organization, protecting it with sufficient resources and rigor seems reasonable. Rebuilding a tarnished reputation after a breach requires efforts along all of the avenues cited above, and it is always much harder than building that reputation in the first place: breaches produce headlines, which are free, interesting and popular media, while fixes and cleanup produce little beyond whitepapers, which are costly and unpopular. There was not a single positive article, review or news item on the first full page of results.
From this event and countless others that one can find, the link is clear between reputation and the trusted data that is received from customers and partners. This creates the real requirement that organizations do a comprehensive job of ensuring that data will be protected, and that systems are in place to minimize the risk and impact of any possible breach. Optimally, organizations should mitigate the risk before something bad happens. Not knowing how to do it or where to start is no longer an excuse. It is time to take action.
As a first step, I recommend that you take a step back and better understand what it is that you are protecting. Before you buy any product to help with this, even ours, it is most important to understand how you can use a product to help you. With the proliferation of malware and hacking activity out there, and the obvious toll that breaches take, it is only a matter of time before short-range savings are wiped out by staggering breach-related costs.