Software Quality Insights


June 29, 2009  3:16 PM

CAST 2009: How to teach yourself testing, an interview with James Bach

Michael Kelly

At this year’s Conference of the Association for Software Testing (CAST), taking place July 13-16th in Colorado Springs, James Bach will be presenting a tutorial on self-education for software testers. The tutorial, titled “Teach YOURSELF Software Testing,” is about teaching yourself testing instead of waiting for some testing guru to tell you all the answers. Bach often boasts that he invented testing for himself, and he believes you can too. In his tutorial, Bach plans to share his personal system of testing self-education. Based on his upcoming book “Secrets of a Buccaneer-Scholar,” it’s a system of analyzing experiences and questioning conventional wisdom.

James Bach is a high school dropout who taught himself programming and testing. He’s been a tester, test manager, and consultant since 1987. A founding member of the Context-Driven School of testing, he has taught his class, Rapid Software Testing, around the world. He is co-author of “Lessons Learned in Software Testing,” and author of “Secrets of a Buccaneer-Scholar,” a book about technical self-education which is being published in September.

I first heard Bach talk at a conference in 2000, where he outlined a method of approaching testing problems that I found both engaging and effective. A few years later, I met him at a workshop and (after trying to hack his laptop) got along well enough with him that he invited me to study software testing with him. I’ve had the pleasure of studying under him and have firsthand experience working through his syllabus of software testing concepts; developing my own understanding of how to identify, articulate, and test my own heuristics; and developing methods of assessing my progress. All of those topics are covered in this tutorial.

I opened the interview with Bach by asking him why he thinks self-education, as opposed to more traditional methods like classes or certification, is so important for someone’s career:

“Technical self-education is traditional, going back hundreds and thousands of years. Electricity, for instance, was discovered and developed by individuals working on their own, outside of any institution. So were chemistry and physics, for the most part. We are in the process of developing the testing craft, and that requires people who innovate.

Testing classes and certifications are pretty bad, for the most part. Of course, I try to teach a good one, but there’s not a whole lot I can do in three days. What I try to do is start a fire in the minds of the testers to pursue their education further without me telling them the answers.

Self-education is available to each of us, all the time. We don’t need a budget. We don’t need anyone’s permission. Institutional education, on the other hand, is expensive and limiting.”

Bach first presented a tutorial along these lines at CAST 2007. He often talks fondly about pulling the material together for the first time as a “bold boast.” A bold boast is a self-education technique.

“A bold boast is a trick I use to get myself going on a project. It’s basically a promise to accomplish some feat, such as to write an article or teach a class. I make the boast that I can teach something, and then my mind gets serious about solving the problems that need to be solved. So, the first time I did this tutorial was when I was writing the Buccaneer book and wanted to develop new material for it, more quickly. I told the CAST organizers ‘I’ll teach a class on self-education for testers,’ not knowing what I was going to do, at first.”

If you follow Bach’s work at all, a lot of what he puts forth in the tutorial mirrors the way he talks about and teaches software testing. So I asked him how the tutorial builds on, or extends, some of his past work.

“I have lots of odd ideas about testing. They run counter to the traditional ‘Factory School’ ideas that you find in most testing textbooks. I teach and demonstrate these ideas, in all my classes, but in this tutorial I get to share how I came up with them in the first place.

I’m nervous when I teach this tutorial, because I expect more from the students than in my normal classes. People who only want quick answers about how to test will be disappointed, because my goal is to show them how to create their own answers. In fact, I show them how they already know a lot of the things they don’t think they know!

In the tutorial, I also talk about how the testing craft is going through a great struggle. The various testing schools are fighting with each other for dominance. The certificationists, of course, being the most visible and aggressive of those. I stand up for the Context-Driven School – against the certificationists and Factory folks – and do my best to recruit testers to our cause. I’m up front about that.”

Bach’s ideas about self-education have, in the past, faced some criticisms. I asked him to anticipate a couple of the more likely ones and to explain how he addresses them.

“The most likely criticisms, I think, are these two:

In the tutorial I attack other schools of testing thought instead of trying to find common ground. My response to that is – that’s right. I think those other schools are harming the craft. I don’t see common ground. But I’m glad that our field is not regulated. I’d like to see the other schools go down in flames, but not through any mechanism other than the efficient operation of a well-informed market of ideas.

The second criticism is that it’s all well that the great James Bach can make it up as he goes along, but what about people who aren’t famous? My response to this criticism is that I was developing my own methodology before anyone outside of my team at Apple Computer knew my name. I became well known because some folks found out about what I was doing and thought it was interesting. ANYONE can be a testing methodologist. The market will decide, in the long run, whether it is interested in your methods.”

Bach’s ideas on self-education have been influenced by the works of Jerry Weinberg (also speaking at CAST this year), Herbert Simon, Daniel Kahneman, and Nassim Nicholas Taleb. “I would recommend ‘The Invention of Air’ as a great book that shows the development of one man – Joseph Priestley – and the development of his field of electricity and chemistry through vigorous and collegial self-education.”

“CAST is the conference that attracts the core contributing thinkers in the Context-Driven School. These are the people who, like me, are engaged in the creation of a vibrant testing craft that has roots in many disciplines and in the history of science. No other testing conference is like that. At CAST, I don’t have to apologize for using Joseph Priestley as an example of a good tester.”

For more on the upcoming show, check out the CAST conference website. For more on James Bach’s work, check out his website or either of his books, “Lessons Learned in Software Testing” and “Secrets of a Buccaneer-Scholar.” Bach also runs two very popular blogs, one on testing and one on self-education.

June 23, 2009  7:14 PM

CAST 2009: The challenges of regulation, an interview with Jean Ann Harrison

Michael Kelly

Veteran software testing and quality assurance pro Jean Ann Harrison will be presenting a software testing case study based on her experience with medical devices at this year’s Conference of the Association for Software Testing (CAST), slated for July 13-16th in Colorado Springs.

In her session — titled “A Balancing Act: Satisfying Regulators, End Users, Business Leaders and Development” — Harrison plans to provide guidance to testers who have to deal with conflicting priorities between developers, project managers, customers/patients, and regulators.

“Priorities clash, and inevitably software testers are in the middle of a battlefield: developers are trying to get their work done and delivered, project managers are trying to make a deadline, customers/patients want to make sure the product works as expected, and the regulators demand the proper documentation delivered in a sequential timeframe.” Harrison went on to share some of the questions she hopes to answer in the talk. “How do testers balance all these stakeholders’ priorities? How can testers decide which direction to take as the project matures? Which stakeholders take precedence over others?”

With 10 years of experience in software quality assurance and testing, and three years testing embedded software on portable devices, Jean Ann Harrison has gained broad experience in various software testing processes and has worked in varied contexts, including large multi-million dollar corporations, venture-capital-backed firms, and start-up companies. Harrison currently works for CardioNet, Inc., where her primary role is testing software embedded in medical devices that provide diagnostic data physicians use to determine their patients’ heart conditions.

“I developed the talk through my own learning process as a software tester working in a regulated environment for the first time. What to do, what not to do, what one can expect, and how to handle the demands of a regulated company helped formulate my subject. And most companies producing software usually have some sort of description of what is wanted, needed, or expected when the project is completed. Most companies usually have some method of traceability of software requirements, software design, and product information. In a regulated environment, the role of documentation is the centerpiece of any project.”

When asked to expand a bit on the challenges of regulation, Harrison continued:

“First, a single source location for documentation must be identified, implemented, and then monitored. Documentation in a regulated environment is also distinct in the level of detail provided, the sequence in which it is submitted, the identification of appropriate reviewers to approve it, and the historical record maintained for traceability purposes. This process is extremely formalized, and dates of submittals are critical to the project. Non-regulated environments tend to be more relaxed; even the most formal processes have allowable slips. In regulated environments, slips are not acceptable, and contingency plans must be implemented to explain deviations. If regulated environments do not meet regulators’ demands, the certifications are rescinded.”

One of the things I found most interesting about my interview with Jean Ann Harrison was her biggest influence for the talk, which came not from the field of testing, but instead from political science. Harrison majored in Political Science 25 years ago. She’s found that the analytical thinking skills her professors emphasized play a large part in her success.

“In the four years and loads of courses, exercises were given to force students to practice analytical thinking. Software testers are constantly required to analyze how to do something, how to improve, what is the data telling you, analyze different perspectives, create. Over the years, my analytical skills have evolved but certainly were given a solid foundation because two professors teaching the subject of Political Science felt the skill was critical to the coursework. One exercise that was given to me in a course called Research Methods, I use today to train and mentor software testers. It is simplistic in nature but very difficult to implement. The exercise requires them to generate a new hypothesis that they personally have not read about, been trained in, or been given any sort of research material on. Then they are required to describe and prove the hypothesis using empirical means.”

When asked why she chose CAST as the venue for her talk, Harrison shared that this year’s theme for the conference, “Serving our stakeholders,” is directly relevant to some of the lessons her current company is learning as it experiences growth. “Each department is learning who the customers are,” Harrison says, “how we can better be of service, and what we can learn from our mistakes.”

For more on the upcoming show, check out the CAST conference website.


June 22, 2009  7:06 PM

CAST 2009: Understanding how much responsibility a testing team should have, an interview with Gerald M. Weinberg

Michael Kelly

For the previous three years, I was either an organizer of the Conference for the Association for Software Testing (CAST) or the President of the AST. So as you can imagine, I watched the conference closely. Last year, when we were able to announce Jerry Weinberg as the keynote speaker it was a great feeling.

At CAST 2008 Weinberg offered a tutorial that sold out so fast we had to add another day to the conference. This year Weinberg will again be offering a tutorial at CAST. The topic is “Ensuring Testing’s Proper Place in the Organization.”

Jerry Weinberg is easily one of the most influential people in my practice as a software tester and consultant. For the last 50 years, Weinberg has worked on transforming software organizations. For example, in 1958, he formed the world’s first group of specialized software testers.

Weinberg is author or co-author of many articles and books, including “The Psychology of Computer Programming” and the four-volume “Quality Software Management” series. He is perhaps best known for his training of software leaders, including the Amplifying Your Effectiveness (AYE) conference and the Problem Solving Leadership (PSL) workshop.

In this year’s tutorial, Weinberg will help attendees demonstrate the value of testing versus its cost; teach them how to find the points of influence in the organization, and how to cope with them; work with them on communicating with executives; and help them to better evaluate risk and make it real.

When asked, Weinberg said that the tutorial is focused on “addressing the problem of testing being given too much, too little, or the wrong kind of responsibility. The tutorial will address this problem at the individual, test team, organizational, and societal level.”

At last year’s conference, Weinberg launched his latest book, “Perfect Software and Other Testing Myths.” When asked how much interplay there might be between the book and his tutorial, he responded:

Certainly much of the misplacement of testing starts with the common myths and misunderstandings about testing, so, yes, there is quite a bit of interplay. However, reading the book is not a prerequisite to participating in the tutorial, because all professional testers are well acquainted with these myths and misunderstandings. What they may not understand is how these myths and misunderstandings are contributing to the low esteem in which testing is commonly held–and what they personally can do to achieve their proper role.

When asked how this tutorial builds on, or extends, some of the other work he’s done in the past, Weinberg responded: “I’m trying to correct the impression from much writing that ‘Development is Everything; Testing is Nothing.’ Or, even, that ‘Development would be easy if it weren’t for Testing.'”

Given the variety of places Weinberg could deliver this message, including much larger venues, I asked him why he chose to give the tutorial at CAST.

“The people who attend. I was at CAST last year in Toronto, and found it to be a cut above your typical conference (almost on a par with our own AYE conference). I learn at CAST, and I enjoy CAST. That’s why I believe it’s the right place for me to be, teaching and learning.”

For those unfamiliar with AYE, or Amplifying Your Effectiveness, you can learn more about it at their website. For more on the upcoming show, check out the CAST conference website. For more on Jerry Weinberg – his works, conferences, and to interact with him – check out his website and blogs on the topics of consulting and writing. While you’re at it, if you haven’t already taken a look at the new book “Perfect Software and Other Testing Myths” I highly recommend it.


June 22, 2009  7:05 PM

CAST 2009: Challenging one of the classic ideas around testing, an interview with Doug Hoffman

Michael Kelly

At next month’s Conference of the Association for Software Testing (CAST) in Colorado Springs, Doug Hoffman will call into question one of the most fundamental ideas in software testing: Do tests really pass or fail? I had the opportunity to talk with Hoffman about his conference session, titled “Why tests don’t pass.”

Doug Hoffman has over thirty years’ experience in software quality assurance and holds degrees in Computer Science and Electrical Engineering, as well as an MBA. He is currently working as an independent consultant with Software Quality Methods, LLC. Hoffman is involved in just about every organization having to do with software quality: he’s an ASQ Fellow, a member of the ACM and IEEE, and a Founding Member and a Director of the Association for Software Testing.

When asked to summarize his talk, Hoffman got straight to the point, “The results of running a test aren’t really pass or fail. I think this message will resonate with part of the audience and may inspire others to challenge the idea. CAST is a venue where such discussion is encouraged.”

The idea is expanded on in the summary for his talk:

Most testers think of tests passing or failing. Either they found a bug or they didn’t. Unfortunately, experience repeatedly shows us that passing a test doesn’t really mean there is no bug. It is possible for bugs to exist in the feature being tested in spite of passing the test of that capability. It is also quite possible for a test to surface an error that goes undetected at the time. Passing really only means that we didn’t notice anything interesting.

Likewise, failing a test is no guarantee that a bug is present. There could be a bug in the test itself, a configuration problem, corrupted data, or a host of other explainable reasons that do not mean that there is anything wrong with the software being tested. Failing really only means that something that was noticed warrants further investigation.

“I think all we can really conclude from a test is whether or not further work is appropriate,” Hoffman said. “The talk goes into why I think this, and some of the implications of thinking this way.”
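As a rough illustration of that framing (my sketch, not code from Hoffman’s talk), a test result can be modeled as an observation plus a “further work indicated?” signal rather than a bare boolean:

    // Illustrative Java sketch: a test run records what was noticed and
    // whether follow-up is indicated, instead of a binary pass/fail.
    enum TestOutcome {
        NOTHING_NOTICED,        // what we usually call "pass"
        WARRANTS_INVESTIGATION  // what we usually call "fail": could be a product
                                // bug, a test bug, bad data, or the environment
    }

    class TestResult {
        final String testName;
        final TestOutcome outcome;
        final String observations; // what was noticed, to guide follow-up work

        TestResult(String testName, TestOutcome outcome, String observations) {
            this.testName = testName;
            this.outcome = outcome;
            this.observations = observations;
        }

        boolean furtherWorkIndicated() {
            return outcome == TestOutcome.WARRANTS_INVESTIGATION;
        }
    }

    public class NonBinaryResults {
        public static void main(String[] args) {
            TestResult r = new TestResult("checkout-total",
                    TestOutcome.WARRANTS_INVESTIGATION,
                    "Total off by $0.01; could be rounding in the test data");
            if (r.furtherWorkIndicated()) {
                System.out.println(r.testName + ": investigate - " + r.observations);
            }
        }
    }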

When I asked Hoffman what inspired him to question the binary nature of a test, he said: “I was discussing the value (or lack of value) of pass/fail metrics when it occurred to me how bogus the numbers were, and some of the reasons. That led me to think through what ‘pass’ and ‘fail’ mean.”

So where does this leave teams who use pass/fail metrics? What does Hoffman see as a better alternative? Instead of a world of pass/fail, which doesn’t inspire additional work or thinking about the problem, he sees a system where a result might lead you down the road to additional investigation or bug reporting. With each result you have to ask additional questions before you move on. It challenges the tester to evaluate when they are really done with something, or if they’ve gotten all the value they can from an activity.

“Even with exploratory sessions, we conclude whether or not there are problems to report now and further avenues where we think we’ve detected problems, or not. For discrete test cases it is much clearer whether or not further work is indicated. In any case, most people refer to the software as failing or passing based on these indications.”

“The idea of a test passing/failing, indeed the idea of discrete tests, may be foreign to some people who have only known exploratory testing. In those contexts there may be audience members who challenge the claim that tests don’t pass or fail, because the concepts aren’t applicable.”

So for Hoffman, testers doing exploratory testing face this issue all the time and already have methods for dealing with it. “There also could be criticism that I look at test results as being binary,” said Hoffman. “Others may consider there to be more than two outcomes. Again, I think it depends on how pass and fail are defined.”

In the past, Hoffman has done extensive work around test oracles. An oracle is the principle or mechanism by which we recognize a problem (that is, it’s how you can tell good behavior from bad). When asked how the talk relates to that oracle work, Hoffman replied: “This is one conclusion I’ve drawn from that oracle work. Over the years I stopped talking about passing and failing, but had never consciously realized it.”

For more on the upcoming show, check out the CAST conference website. I also recommend, if you haven’t already, familiarizing yourself with Doug Hoffman’s work, which is available at Software Quality Methods.


June 22, 2009  4:48 PM

CAST 2009: Taking a closer look at scenario testing, an interview with Fiona Charles

Michael Kelly

Fiona Charles will share the details of her scenario testing method at this year’s Conference of the Association for Software Testing (CAST), which takes place July 13-16th in Colorado Springs. Charles has used this approach on several projects where test scenarios were designed based on models derived from system data. I recently had the opportunity to talk with Charles about her upcoming CAST presentation, titled “Modeling Scenarios with a Framework Based On Data.”

Charles teaches organizations to match their software testing to their business risks and opportunities. With 30 years’ experience in software development and integration projects, she has managed testing and consulted on testing on many projects for clients in retail, banking, financial services, health care, and telecommunications.

When asked where the talk came from, Charles said that she’s not seen very much written about scenario testing and that she believes it’s an important way to test in certain circumstances.

For the talk I searched online and in the books in my own testing library, but didn’t find much. I’ve cited the two really useful articles that anyone contemplating scenario testing should read. Cem Kaner’s article on scenario testing is an excellent general introduction to the topic, and Hans Buwalda’s Better Software article on soap opera testing describes one way to model a scenario test.

She went on to describe the specific projects that formed the basis for her presentation, saying:

I originally developed this method of designing scenarios for large-scale systems integration tests. The first one I did was for a retail project where we were building a new customer rewards system and integrating it with both the in-store systems and the enterprise corporate systems. My role was to conduct a black-box functional test of the integrated systems, after each team had tested its own system and immediately before we went live with a pilot. It seemed obvious to me that I needed to build the test around the integrated data: the flows and the frequency with which different types of data changed. There were no context diagrams or dataflow diagrams for the integration, so I began by developing one, which I then used to model the integrated test and spec out the scenarios to run it.

The next project was another retail integration, this time integrating a new custom-built e-store and warehouse management system into a large and complicated suite of enterprise systems. I wanted to extend the method I’d used before by introducing more structure so we could construct scenarios from reusable building blocks. Also, I couldn’t be sure my team would be able to complete the transactions planned for a given day, and it was essential to be able to say exactly what should be expected in the downstream systems, where the data outcomes might take several days to appear. So I was especially keen to have an easy way of generating “dynamic” expected results for my core team to communicate to the extended team, which consisted of people evaluating each store-day’s test results in their own systems downstream. I was lucky to have on my team a programmer/tester who understood what I wanted to do and was committed to implementing it.

He and I worked together over the course of several projects, sharing ideas and spurring each other to build on and extend how we tested integrated systems. I’m a modeler by temperament and preference, and over time I began to abstract reusable models to have a way to talk about what we were doing. One of them I think of as a conceptual framework for building scenarios based on system data: the topic for this talk. The first time I applied it to testing a standalone system was for the project I’m using as the example for the talk, testing a customized Point of Sale (POS) system.
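To make the “dynamic expected results” idea concrete, here is a minimal, hypothetical sketch in Java; the names and structure are mine, not taken from Charles’ actual framework. The point is that downstream expectations are recomputed each day from the transactions the team actually completed, rather than fixed in advance:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: derive expected downstream totals from the
    // scenario transactions that actually ran during a store-day.
    public class ScenarioDay {
        static class Txn {
            final String type;    // e.g. SALE, RETURN
            final double amount;
            Txn(String type, double amount) { this.type = type; this.amount = amount; }
        }

        // Per-type totals from what actually ran, to hand to the people
        // evaluating results in the downstream systems days later.
        static Map<String, Double> expectedDownstreamTotals(List<Txn> completed) {
            Map<String, Double> totals = new HashMap<String, Double>();
            for (Txn t : completed) {
                Double sum = totals.get(t.type);
                totals.put(t.type, (sum == null ? 0.0 : sum) + t.amount);
            }
            return totals;
        }

        public static void main(String[] args) {
            List<Txn> completed = new ArrayList<Txn>();
            completed.add(new Txn("SALE", 120.00));
            completed.add(new Txn("RETURN", -20.00));
            completed.add(new Txn("SALE", 35.50));
            System.out.println(expectedDownstreamTotals(completed));
        }
    }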

When asked why she focused on data as a basis for scenario testing, instead of focusing her test development around more traditional business requirements or the same source documents used by the programmers, Charles elaborated on some of the challenges seen with using that as the only approach.

I’ve never seen documented requirements that came near adequately describing what a system was intended to do. And even when use cases cover most of the intended interactions, they’re usually much too high-level to build tests from. Paradoxically, I find that on the one hand the sources used by the programmers are too high-level, as in use cases, and on the other hand, they mostly don’t take a large enough view of how the whole system operates, either in itself or in the context of the systems it’s integrated with.

I once managed the testing on a bank teller system where a significant downstream output was to the bank’s General Ledger system. Yet the architect’s context drawing didn’t show the GL—and neither he nor any of the programmers could tell us what we needed to know about it to model the test.

That’s one reason I think we need to build our own models for testing, typically based in some way on how we expect a system to be used and the outcomes we hypothesize. That could be a state transition model, one of many other kinds of models, or a combination. The essential thing is to model a test consciously taking a fresh approach, and not blindly accept what we’re given.

I asked Charles to share a bit about her impression of the AST conference. This was her second year being involved in the conference, and I was curious why she chose CAST as the venue for her message. “My hope,” said Charles, “is that CAST primarily attracts testers who are open-minded and curious, not wedded to ‘traditional’ (and to me boring and suboptimal) ways of testing systems. Those are the testers I most want to talk to and learn from.”

One of the key aspects of CAST is that after a topic is presented, debate on issues related to the topic is encouraged. The AST community believes that through dialog and struggling with difficult problems you get better solutions. So I asked Fiona Charles what she thought a likely criticism of her presentation might be.

One criticism I might expect is that this is a structured method with built-in expected results, analogous to scripted testing. I can see an audience of mainly exploratory testers possibly having issues with this. My answer would be that this is one kind of testing, appropriate in contexts where we have to have strictly controlled dynamic data in order to evaluate the aggregated outcomes. That’s not a good context for exploratory testing. It doesn’t preclude it entirely, but exploratory tests do have to be back-fed daily into the expected outcomes.

And finally, when I asked what topics currently had her excited, Charles offered the following:

I am very interested in agile and the practical integration of real testing (rather than mere confirmation) into agile projects. But I have a concern that in the rush to small teams and small projects (which I think is mainly a good thing), larger integration issues are being ignored. Those could become important in the next while. I’m also interested at the moment in how we can get our hands around testing for risks that really matter to businesses. I’m exploring that in the tutorial I’m doing at CAST 2009 with Michael Bolton and in tutorials I’m doing elsewhere, and I’m also writing about it.

For more on the upcoming show, check out the CAST conference website. You can learn more about Fiona Charles on her website, where you can also read her articles on testing and test management.


June 22, 2009  1:55 PM

Requirements-based provisioning useful in all aspects of IT

Rick Vanover

At some point, we have all likely engaged in some level of requirements-based testing (RBT) or development. While RBT is generally fundamental to software quality, it has limitations and drawbacks, as described in this Software Quality Insights tip. In my professional IT work, I currently focus primarily on infrastructure and related technologies. But that is not to say that we cannot take a page from the requirements-based disciplines.

One of the most frustrating situations in the new build process of any system or collection of systems is when others have applied technologies without defining requirements. This frequently arises in disaster recovery, system availability or business continuity arenas. While the development and application teams want to provision a highly-available solution, there can be a disconnect between the internal decisions and what infrastructure teams can provide.

The current IT landscape offers infrastructure professionals many options for protection and availability, made possible by virtualization, load-balancing solutions, and a matured backup software space. My approach has been to have the application and development teams provide the availability requirements for a technology solution. This usually involves defining two important terms:

Recovery Time Objective (RTO) – How much time is required, after any system-level failure, to make the system and its business process available again.

Recovery Point Objective (RPO) – The defined point in time to which data and the business process must be recoverable; effectively, how much recent data the business can afford to lose.

The RTO and RPO work together to determine what availability and recovery would be provided to a business process. Infrastructure professionals can play this game quite well now with the current landscape of tools and technologies. Be sure to engage them as appropriate in the system provisioning process.
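As a simple illustration (my example, not from the original post), a proposed design can be checked against the stated RTO and RPO before provisioning:

    // Java sketch with illustrative numbers: compare a design's measured
    // restore time and backup interval against the stated requirements.
    public class AvailabilityCheck {
        public static void main(String[] args) {
            long rtoMinutes = 240;  // requirement: service restored within 4 hours
            long rpoMinutes = 30;   // requirement: lose no more than 30 minutes of data

            long measuredRestoreMinutes = 120; // from a recovery-runbook rehearsal
            long backupIntervalMinutes = 60;   // worst-case data-loss window

            System.out.println("RTO met: " + (measuredRestoreMinutes <= rtoMinutes));
            // An hourly backup cannot satisfy a 30-minute RPO, exactly the kind
            // of disconnect that surfaces when requirements are defined up front.
            System.out.println("RPO met: " + (backupIntervalMinutes <= rpoMinutes));
        }
    }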


June 12, 2009  8:13 PM

Common mistakes in real-time Java programming

Jan Stafford

When you hit a key on your keyboard, the delay before the letter appears on your screen is mildly annoying. When you’re in a warship under enemy attack, a delay before new radar information shows up would be deadly. In a nutshell, that’s the difference between general-purpose and real-time programming.

The difference, however, can be less obvious in business settings.

Not knowing when real-time applications are needed is a common mistake companies and software developers make, Eric Bruno and Greg Bollella, authors of Real-time Java Programming with Java RTS, told me recently.

“Some companies have real-time requirements but don’t interpret them as such,” said Greg Bollella, a Sun Microsystems distinguished engineer who leads R&D for real-time Java. An example would be financial companies involved in stock trading. “Often, they try to force a general-purpose system to behave as if it is a real-time system.”

For the most part, those efforts fail. “Response times are too slow, and code can become very fragile,” said Bruno, who has broad experience in software design and architecture for financial trading, data, and real-time news delivery.
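For readers curious what that looks like in code, here is a minimal sketch of a periodic task using the RTSJ (javax.realtime) API that Java RTS implements; the 50 ms period, the priority value, and the task body are illustrative assumptions of mine, not an example from the book:

    import javax.realtime.PeriodicParameters;
    import javax.realtime.PriorityParameters;
    import javax.realtime.RealtimeThread;
    import javax.realtime.RelativeTime;

    public class PeriodicRadarTask extends RealtimeThread {
        public PeriodicRadarTask() {
            // Run at high real-time priority, released every 50 ms; a
            // general-purpose thread gets no such timing guarantee.
            super(new PriorityParameters(85),
                  new PeriodicParameters(
                          null,                     // start: immediately
                          new RelativeTime(50, 0),  // period: 50 ms, 0 ns
                          null, null, null, null)); // cost, deadline, handlers: defaults
        }

        public void run() {
            while (true) {
                refreshRadarDisplay();   // must finish within the 50 ms period
                waitForNextPeriod();     // block until the next scheduled release
            }
        }

        private void refreshRadarDisplay() {
            // Domain work would go here.
        }

        public static void main(String[] args) {
            new PeriodicRadarTask().start();
        }
    }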

In this video excerpt of our interview, Bollella and Bruno discuss this common mistake and others made in real-time Java programming:


June 12, 2009  3:34 PM

Why JavaFX fits the bill for RIAs

Jan Stafford

I recently met and talked with authors Jim Clarke and Eric Bruno about JavaFX. They co-wrote, with Jim Connors, the recently released book on that subject, JavaFX: Developing Rich Internet Applications. Clarke and Bruno explain how JavaFX simplifies and improves the RIA development process in their book, and also in this video excerpt from our interview. The book offers an introduction to JavaFX, then heads off into nuts-and-bolts descriptions of how to use its ready-built components and frameworks to build RIAs.


June 12, 2009  3:06 PM

Atlassian Nerd Herder Moore spots app/dev trends

Jan Stafford

In an interview this week, Atlassian engineer/Nerd Herder Pete Moore told me that he got good news and bad news from software developers and engineers at JavaOne 2009. The good news was that developers had moved beyond Code Review 101; the downside was a lack of cloud adoption and some backward thinking about tool purchasing.

A good number of software engineers at JavaOne 2009 told Moore that they still have to fight with management for approval to buy lightweight development tools. Wait, there’s a punchline: The big surprise is that the managers aren’t approving these requests due to lack of money, but rather because the managers “still believe in top-down purchasing of suites and one-size-fits-all,” Moore said. He couldn’t believe that the people holding development purse strings had such an antiquated approach to buying software. Well, actually, he called them “ignorant managers.”

Fortunately, Moore said, Atlassian’s products are priced low enough that “most teams can sidestep the management silliness, because it fits in their discretionary budgets.”

I met Moore at JavaOne, where he showed me an animated 3D tee shirt logo. We talked about Atlassian’s comprehensive Java-based plugin architecture, a subject that drew a lot of interest from attendees in the booth. Here’s an excerpt from our conversation in this video.

Moore spent a lot of time at JavaOne talking about the nuts and bolts of integrating plugins into real life environments. “I think this underlines that engineers still want pragmatic point solutions,” he said when we talked this week. “Best of breed [software] was the catch cry a few years ago, and it’s still what the front lines want, they now just want them to work together!”

This was Moore’s fifth JavaOne, and “it was sensational that I didn’t have anyone who didn’t know what code coverage or peer code review was,” he said. That hasn’t been the case in the past, when he’s had to explain what per-test coverage was, “or worse, the merits of unit testing.”

Two years ago, when Atlassian introduced its Crucible code review tool, “the majority of young developers had never done formal code review, and everyone was talking about pair programming,” he said. “This year, whilst there were still heaps of people who weren’t doing reviews, it seemed that every second person specifically wanted a demo of Crucible.”

Developers haven’t stepped up in another area, though. “I was disappointed not to see more development in the cloud in real life,” Moore said. Engineers like Atlassian’s Bamboo tool, with which one can start agents and do builds in the cloud. “But almost to a person they said, ‘There’s no way we’d be allowed to use that.’ Here’s hoping that next year the story will be different.”


June 11, 2009  5:25 PM

CAST 2009: Test gurus Sabourin, Coulter preview keynotes

Michael Kelly

Next month, the Association for Software Testing (AST) will hold the fourth annual Conference of the Association for Software Testing (CAST) in Colorado Springs, Colorado. This will be the first year I won’t be attending, so I wanted to take the chance to catch up with some of the speakers to talk about their papers and presentations. The first pair of speakers I was able to catch up with are giving the closing keynote for the conference: Rob Sabourin and Tim Coulter.

Rob Sabourin is presently the President of AmiBug.Com Inc., a frequent guest lecturer at McGill University, the author of a short book illustrated by his daughter Catherine entitled “I Am a Bug,” a regular author of articles on software engineering topics, and a regular speaker at just about every software testing conference you’ve heard of.

Tim Coulter is a software developer for The Open Planning Project, has participated in over ten software testing peer workshops, and he brings a fresh perspective to the practice of software testing which you can read on his blog at OneOfTheWolves.com.

Both Sabourin and Coulter are regulars at CAST, and this year they are taking on a rather interesting challenge with their closing keynote. Their talk, “Tim Bits: What I Learned About Software Testing at CAST 2009,” will be an attempt to summarize lessons learned from the 2009 talks and will use a mix of improv and group participation to make the lessons specific and relevant.

“We came up with the idea of ‘Tim Bits’ at a peer conference,” said Sabourin. “I think it was at a Workshop on Performance and Reliability in New York City, where I asked Tim to give us some quick lightning encapsulations of lessons he learned – as a novice – from presentations made by experienced professionals. Tim Bits is also the name of a popular doughnut hole treat at the famous Canadian chain Tim Horton’s, and thus the pun began.”

Sabourin, a speaking veteran, has a history of taking on challenging keynote presentations. He’s done light but lesson-filled talks about software testing based on lessons learned from the Simpsons, Dr. Seuss, and the Looney Tunes gang. Two of the best talks I’ve seen him give are “A Whodunit? Testing Lessons from the Great Detectives” and “Peanuts and Crackerjacks: What Baseball Taught Me about Metrics.” But given that this talk depends on material presented by others in the two or three days before the closing keynote, I asked Sabourin how they plan to prepare.

“I’ve prepared a number of closing keynote-style presentations at STAR conferences, in which I focus on pain points of delegates and how lessons learned from specific conference or tutorial sessions can be applied. So when Tim and I were asked to combine Tim Bits with the ‘Closing Lessons Learned’ to create our closing keynote at CAST, we of course said yes.” Sabourin went on to outline their planning. “Tim and I plan to spend several evenings together in New Jersey the week before CAST preparing our framework. But the actual content will be captured on the fly during CAST.”

When I asked if the on-the-fly preparation was at all intimidating, Sabourin responded: “Not at all! We will be well prepared in advance, and spending time dialoguing with delegates to capture real learnings and applications on the fly during the conference will be fun. I feel that our talk at CAST can be a solid practical constructive step to not only making CAST more useful, but also in demonstrating the power of actively participating in the AST community.”

I also asked Coulter how he felt about the talk, and he said: “I’m extremely excited for this talk. This is going to be up there as one of the coolest things I’ve done so far, in testing or otherwise. The AST community has done so much for me since I started college that I’m happy to do anything I can to give back.”

When asked what they would be working on for next year, the two of them listed off several topics.

“I have been working hard on task analysis of session-based exploratory testing implemented in real projects, especially in frameworks like Scrum,” said Sabourin. “In 2010 I hope to share these experiences. I’m also dedicating a lot of time to visual modeling in test design and testing in turbulent contexts.”

Coulter has been thinking about how to put theory into practice. “I’ve thought testing history would be an exciting thing to research, and if I can get a speech or paper to come out of that I would be more than happy. In total though, I don’t know what’s to come. I envision a talk titled ‘Trying to make it in testing while discovering the (software) world around me,’ but when that’ll come I don’t know.”

For more on the upcoming show, check out the CAST conference website. Another great resource is this site’s info on Rob Sabourin, his book, or his classes. And here’s how to learn more about Tim Coulter, the man behind ‘Tim Bits,’ and his current projects.

