Openbravo just released its ERP 2.50 Professional Subscription for Ubuntu, an integrated open source ERP software stack packaged with the Ubuntu operating system. It is a cost-effective commercial open source product that is cloud-deployable as a virtual appliance and also available on various platforms.
The new package offers a software testing and quality assurance (QA) benefit, too, according to John Fandl, Director of Product Strategy for Openbravo.
“The QA angle centers on the general benefit of standardization in aiding QA efficiency,” Fandl said. “When you can execute a hands-off installation that runs in an hour and automatically creates and pre-configures a full ERP stack, including database, web server and application server, that makes it really easy for enterprises to do a proper QA cycle, with separate development, QA, user acceptance and production environments.”
I asked Fandl to carry that thought through to the testing side of things. He said that installing proprietary ERP stacks is difficult, so the QA function often has to compromise and forgo proper testing. “Considering the complexity of ERP, it’s hard to match QA environments exactly to the production environment,” he said. “For example, they may be testing on a different version of the application server or database than is in production, which can cause surprises when the code is promoted to production. Being able to rely on an efficient, automated full-stack installation that can be pulled from the cloud on demand is a godsend for QA.”
Fandl has a point, Medicity QA director and SearchSoftwareQuality.com site expert John Overbaugh told me.
“There is definitely value in a clean installation of the entire stack of applications. Anytime a technology is difficult to employ, teams will find a way around it, by either mimicking a clean install or by doing a small amount of machine clean-up before starting in again on testing. This often results in an unreliable environment.”
Fandl elaborated: “Inexperienced testers, and especially developers doing unit testing, since QA is not their major focus, may not be as rigorous about testing on a proven-clean environment. Mimicking a clean environment sounds like a time savings, and it does work most of the time…until it doesn’t. The problem comes from subtle differences that arise over time between QA and production environments; differences that ‘shouldn’t matter’ until they do! And the way you find out is with a production problem that you can’t duplicate in the QA environment. Ouch.”
So, I asked Fandl, how does the Openbravo-Ubuntu package help testers get clean installs and avoid these ills?
Fandl told me that fully automating the installation, including the entire stack and all of its dependencies, gives the same result every time, regardless of the starting-point state of the target machine.
“For example, if Tomcat 5.5 is on the machine, the installation package (which knows that Tomcat 6.0 is required for Openbravo ERP 2.50) will automatically retrieve Tomcat 6.0 from the Ubuntu repository and upgrade your server from 5.5 to 6.0 before continuing with the Openbravo application installation. Ubuntu’s Debian-based package management system transparently takes care of these details, so the QA person does not have to be an expert in the underlying stack. This is a great help, especially for business-centric QA staff testing ERP, who may not know how to determine what version of a system component like Tomcat is actually running on the server.”
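Conceptually, the upgrade decision Fandl describes boils down to a version comparison. The sketch below is purely illustrative; it is not APT's or Openbravo's actual installer logic, and the function name is invented for the example.

```python
# Illustrative sketch of a package manager's upgrade decision.
# This is not APT's or Openbravo's real code, just the general idea.
def needs_upgrade(installed: str, required: str) -> bool:
    """True if the installed dotted version is older than the required one."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(required)

# Tomcat 5.5 is on the machine, but Openbravo ERP 2.50 requires 6.0:
if needs_upgrade("5.5", "6.0"):
    print("Fetch Tomcat 6.0 from the repository and upgrade before installing")
```

The point of automating this check is exactly what Fandl notes: the QA person never has to work out which component versions are present, because the tooling decides.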
Openbravo did its initial testing, installing the package from the public repository, on a clean instance of Ubuntu 9.04 set up inside a virtual machine.
With this release, Openbravo is following in the footsteps of other open source ERP and software vendors who are creating easy-to-install stacks. For more information on this trend, check the blogosphere, where Matthew MacKenzie writes about Openbravo and SMB ERP. Also, on the blog How Software Is Built, Scott Swigart and Sean Campbell interview Openbravo CTO Paolo Juvara, who oversees product development. Juvara notes that Openbravo provides a foundation upon which developers and users can customize their software, making components proprietary if they wish.
Pollyanna Pixton believes that businesses should adopt the tenets of the Agile development methodology, and she explained why when I met her last week in San Francisco. We also talked about the new book she co-authored that lays out the Agile business process methodology.
She first explained how she came to that belief. Her early work involved developing control systems for electrical power plants throughout the world. She even created systems for and spent time on oil rigs. On one of those projects, she was asked to be the team leader. Immediately, she chose to be a collaborator and not a master.
“I’d seen the problems inherent in top-down, command-and-control leadership, which doesn’t nurture talent or foster innovation and often stymies rapid growth of an organization,” Pixton told me.
Her first venture as a leader was not only successful, it stoked her interest in business leadership. As a result, in 1996 she founded Evolutionary Systems, a business consulting firm specializing in collaborative leadership. She distilled the experience from her own projects and Evolutionary Systems’ engagements into the book she co-authored, Stand Back and Deliver: Accelerating Business Agility (Addison-Wesley).
“The tools in our book help leaders give ownership and then stand back and let the teams and the talent in an organization deliver on their goals and meet users’ needs,” Pixton said.
Here are a couple of video excerpts of my conversation with Pixton. In the first one she offers tips for winning over Agile-resistant staffers.
Next I asked her about mistakes she sees in organizational processes, even in organizations that have adopted Agile.
After I read this sample chapter of Stand Back and Deliver, I sat down and read the whole book in one sitting. The content is rich and the format easy to read. Best of all, there are a lot of drawn-from-real-life examples, something that – for me, at least – makes the discussion of processes more understandable.
Are you a software developer, tester, quality assurance manager or Agile/waterfall expert? Then I’d like to follow you…on Twitter, that is. In this post, I’ll introduce you to some of the smart software experts I follow on Twitter and share my experience as a reluctant Twitterer.
Just as I began writing about Twitter, I couldn’t get on the site due to a denial-of-service attack. That the attack was made indicates that Twitter has arrived. That I was mildly put out because I wanted to tweet shows that I’ve become a Twitterer.
I started as a reluctant Twitterer. Email, phone and IM communications keep my day hectic enough, I thought. I asked myself and others: “What meaningful communication can take place in 140 characters?” That said, I do write about information technology, so I figured I’d give it a trial run. Maybe I’d be able to write a scathing review. Well, two things have won me over to Twitter:
- Twitter lets me keep up with interesting people I don’t talk to daily.
- The links those people share have taken me to top-notch IT content.
As a computer industry journalist since the 1980s, I’ve covered many beats, ranging from desktops to operating systems to e-software (remember that?) to virtualization to software development. Twitter gives me an easy way to catch up with and continue to learn from my mentors and friends in fields I no longer cover, as well as on my current beat. Here are some examples of both types of people I’m following:
- Bernard Golden is an open source software expert I met when I helmed SearchEnterpriseLinux.com and who also moved into virtualization about the same time I led the launch of SearchServerVirtualization.com.
- Burton Group senior analyst Chris Wolf was and still is my virtualization mentor. He’s actually just about everyone’s virtualization expert, and he knows his way around virtual labs, too.
- I was just introduced to Matt Heusser, creator of the dev/test blog Creative Chaos. Now I read his blog daily, and I hope to be working with him soon.
- I read this chapter of Stand Back and Deliver: Accelerating Business Agility, which led me to reading the whole book in one day. Then I got the inside scoop on the book from co-author Pollyanna Pixton and began following her.
- After doing keyword searches on Twitter, I began following software testers and Agile experts Anne-Marie Charrett and David Alfaro. In last week’s blog post, Testing blog digest, I wrote about Charrett’s blog tip on discovering software bugs. Almost every day, Alfaro’s tweeted links send me to super-informative articles.
I enjoy reading about the lighter side of my band of Twitterers’ days, too. A few minutes ago, Chris Wolf’s update said: “Driving up 95 to NJ. My son sees an oil refinery in Baltimore and asks ‘Is that New Jersey?'”
Finally, I’m forced to admit I’ve grown to love the 140-character tweet limit. Not only does the limit make me boil things down to the real nitty-gritty, it saves me from having to read long-winded posts.
Care to join my Twitter community? I tweet about software testing/QA articles I read, my conversations with experts and more. You’ll find me on Twitter as jlstafford. Please invite me to follow you, too.
This week marks the launch of a new portal of software testing tools run by The Clever Tester Network. While I’ve always been a fan of OpenSourceTesting.org, this looks like a useful new site.
In the press release for the launch, Clever Tester Network Managing Director Andrew Hutchinson states:
“We are very excited about the launch, and hope that the Quality Assurance community finds testertools.com to be a cutting-edge source for all of their software testing needs. As we know in test management, it is important to utilize tools that assist us in maintaining accuracy, time constraints and budget.”
The site boasts over 1,000 tools broken out across 39 categories. Users can submit new tools for the site, along with reviews and rankings. Beyond the standard listings for performance and test automation tools, the site has collections of tools for web traffic analysis, server management, data generation and cloud-based testing.
Just looking through the performance testing tools listed, I see a lot of new names that I don’t recognize. On the other hand, at the time I wrote this review, the “Test Environment Management” category on the site had no tools listed under it. So I’m not sure how well the 39 categories are covered.
“For me, agile goes far beyond being a software development methodology. I view it as a culture and a transformation program,” Alex Adamopoulos, founder and CEO of emergn, told me recently. This point of view led to emergn’s creation of AgilePMO, a unified sourcing framework for companies using agile principles. AgilePMO was released last week, and emergn expects long-lasting industry results to follow. AgilePMO focuses on organizational issues as well as integration and implementation. It differs from traditional templates and check-the-box contracts, said Adamopoulos, in that the established relationship is continuous. This is not a one-time fix-all.
Launched early this year, emergn is a sourcing strategy and agile enablement consultancy located in New York City. The company’s focus was born out of Adamopoulos’ 20 years in IT services, where he continuously saw companies executing an ad hoc mishmash of business management strategies. He saw in Agile not only a development methodology but a template for bringing more adaptability into business processes.
“We are not strictly using agile guidelines just for software development. We have also focused our impact on organizations struggling with IT portfolio management,” Adamopoulos said. “The whole idea in introducing AgilePMO is to introduce an accelerated, more efficient and measurable way to run sourcing.”
AgilePMO is designed for a short implementation timeline. Adamopoulos stressed that the framework is set up to run after implementation, rather than requiring years of consultancy contracts. The goal of Agile and emergn, he said, is to get development and business processes moving along quickly and well, without taking too much precious company time and revenue.
AgilePMO can be used by a broad array of development organizations, Adamopoulos said, but early adopters have been large enterprises. Moving to agile reshapes a company’s sourcing and other strategies, and “those are pretty important programs and ones that aren’t taken lightly,” he said. At this point, “we’ve found that the larger companies are more ready to do those than the smaller players.”
Watch this blog for more highlights from my interview with Adamopoulos.
If you don’t read software testing blogs, you’re missing some great advice and thoughtful ramblings on testing philosophies. I tap into those blogs daily, and here I’m sharing the wealth with this reader’s digest of the testing posts I enjoyed this week.
Why bugs are hard to kill
On Maverick Tester, Anne-Marie Charrett describes the mistakes she’s made when doing offsite exploratory testing under tight deadlines. Then, she reveals how she’s stopped making those mistakes in her list of offsite exploratory testing guidelines to bug reporting.
One tidbit of her advice: Write reports right away, even if you are super-busy. She writes that “it takes longer to write them up at the end, when you have to review heaps of cryptic phrases in Session Tester or in your notebook.”
I love the post’s title, “Do your bugs only glow when it’s dark?” It reminds me of the “putting out fires” metaphor. How many times have I gotten emails from co-workers, site experts and others saying they’re late with a response or a deliverable because they’ve been putting out fires? Hey, I’m guilty, too.
In my own work, I see that most of these fires were started when haste made waste. Why is it so hard to take things one step at a time? Oh yeah, there’s a deadline and not enough time to make it.
When familiarity breeds success
Moving on, two posts on Matthew Heusser’s Creative Chaos blog explore thought-provoking topics: team cohesiveness and memes. In his post on Jelled Teams, he ponders the good results of working on a team that’s been together for over a year. How much creativity and productivity is lost, he asks, when companies often shift people from team to team as casually as they do? Too few managers realize that teamwork flourishes when people know each other well enough to feel comfortable sharing their ideas. When a team works well together, it’s an added-value asset in and of itself.
So, project managers, think twice before breaking up good teams!
I see a connection between that post and Heusser’s musings today in The meme’s the thing. Wikipedia calls a meme “a postulated unit or element of cultural ideas, symbols or practices, and is transmitted from one mind to another through speech, gestures, rituals, or other imitable phenomena.” Good grief! I think Heusser’s short definition is better: “It’s an idea – a concept that spreads from person to person.”
Any married person knows that familiarity and mind-melds go hand in hand. It stands to reason that a team that’s been together a while will start understanding how each member thinks, and the ideas will start flowing. Communities work along the same lines. That’s one reason, I think, the open source software community has made such great strides so quickly. Another is that open sourcers are so communicative and have created vehicles (sites, projects, message boards and so on) that foster collaboration.
Heusser believes that software testers should be thinking along the same lines and said:
“I believe that the communities I belong to…have ways to test software that are significantly better than the status quo, and we have ways to communicate them and techniques to teach them. Yet if our testing ideas are memes, we need to think about ways to package and present them to win.”
Carrying on with the teamwork theme, there’s a nice exchange on the topic of how to handle unhappy testing teams on Jerry Weinberg’s blog, The Secrets of Consulting. A software test manager at an insurance company wrote to Weinberg, and they, along with others, brainstorm on the subject in an informative message chain.
On the lighter side
Once you’re a software tester, you look at everything from that point of view. So, Software Quality Insights blogger and independent consultant Mike Kelly describes the web application flaws he ran into when he was trying to spec a new Ford truck on Ford Motor’s site. In his entertaining post, he concludes that it’s easier to build and buy a Toyota online. This is something Washington has missed when discussing bailouts and the state of U.S. auto companies, I think.
There were plenty of other good reads in testing posts this week, more than I can cover here. Please comment below if you read something good this week or have a favorite testing blog.
In a press release yesterday, IBM announced it would be acquiring Ounce Labs Inc., whose software helps companies reduce the risks and costs associated with security and compliance concerns. IBM will integrate Ounce Labs products into its Rational software business.
For those who might not be familiar, the current lineup of Ounce products include:
- Ounce Core is their security source code analysis engine, used to assess code and enforce rules and policies; it also houses the Ounce security knowledgebase.
- Ounce Security Analyst scans, triages and assigns results, and manages security policies, allowing you to take action on priority vulnerabilities.
- Ounce Portfolio Manager delivers at-a-glance metrics and information to manage risk enterprise-wide.
- Ounce Automation Server augments Ounce Core by integrating and automating scanning, publishing, and reporting in build environments.
- Ounce Developer Plug-Ins help pinpoint vulnerabilities and provide remediation advice for rapid fixes.
For those familiar with the latest offerings of IBM Rational, it comes as no surprise that the Ounce Labs products will be offered as part of the IBM Rational AppScan family of Web application security and compliance testing solutions. The current suite of IBM Rational tools (AppScan and Policy Tester) provides some of the basics around security vulnerability scanning, content scanning and compliance testing, but they aren’t as full featured as their competitors’ products.
When the current Quality Manager suite of tools from Rational came out a year (or so) ago, I was quite happy to see AppScan integrated more closely with the testing products. And over the last several years, Rational has done a better job of integrating their testing and development platforms — moving the tools to a common platform/IDE, etc. Hopefully the addition of the Ounce products will continue that trend of bringing team members together in a common toolset.
For more information on the acquisition, SearchSecurity.com has the full story.
For test and development systems, one longstanding practice for letting developers build test systems or applications has been to use network address translation, or NAT. NAT essentially puts a translating device in front of other systems. Development teams can use NAT in a number of ways, including running a virtual machine behind a host’s network, behind a network appliance or behind firewall rules.
NAT is bad for testing for a number of reasons. The primary one is that, because the test system sits behind a (presumably) protective device, there is no pressure to make security a priority in the test process. Weak passwords, application defaults, unnecessary network configurations and other lapses leave the test system at risk of propagating poor configuration and practice forward in the lifecycle.
Instead of using NAT, many organizations use dedicated networks for test and development purposes. Firewall rules can govern traffic in and out of the network, yet within it the test systems are fully reachable. These dedicated networks can also be configured to be fully isolated, or connected upstream for essentials such as Windows Update for Microsoft systems.
NAT is of limited use in real development cycles. What may not be known is what developers are doing individually with local virtual machines on their desktops, which may be using NAT.
The governing principle is to treat all levels of test and development with the same network rules you would apply in a live environment.
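One rough way to act on that principle is to diff a test network's security settings against production's and flag any gaps. The sketch below is hypothetical; the rule names and values are invented purely to illustrate the comparison, not drawn from any real firewall or policy tool.

```python
# Hypothetical sketch: flag settings where a test network is weaker than
# production. Rule names and values are invented for illustration.
prod_rules = {"ssh_auth": "key-only", "default_passwords": "disallowed",
              "open_ports": (22, 443)}
test_rules = {"ssh_auth": "key-only", "default_passwords": "allowed",
              "open_ports": (22, 443, 8080)}

def rule_gaps(test: dict, prod: dict) -> dict:
    """Return each rule where the test network differs from production."""
    return {name: {"test": test.get(name), "prod": value}
            for name, value in prod.items() if test.get(name) != value}

print(rule_gaps(test_rules, prod_rules))
```

A report like this makes the NAT problem visible: weak passwords and extra open ports that a translating device quietly hides are exactly the differences that later propagate forward in the lifecycle.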
A year ago, I was working on a project where we were doing a failure modes and effects analysis (FMEA) related to failover and recovery. As I was thinking about how to best start my analysis, I recalled that in the past while doing performance testing work I looked at many of the same aspects of the system while planning. As a way to generate ideas, I did some research to identify sources that could help me with my planning. You can take a look at some of the resources I found, or use different taxonomies if you have any that you particularly favor.
Here’s an example of how you might use a resource like this. Let’s take the risks listed in chapter three of Performance Testing Guidance for Web Applications. In the following figure from the book, you’ll see a summary of the risks presented in that chapter.
Figure 1: Performance testing risks, from the book Performance Testing Guidance for Web Applications.
I prefer working with the list of questions the authors have outlined in the chapter, but the graphic does a nice job summarizing things. For each specific risk listed, you want to:
- Ask yourself if you’ve accounted for that risk with your current plan. If you haven’t, figure out if you should. If you think you should, figure out what type of testing would be most appropriate for you. One nice thing about this particular taxonomy is that they give you some guidance there.
- For each risk, move from the generic to the specific. The risk “Can the system be patched or updated without taking it down?” is a great question, and an initial answer might be “yes.” But when I look at the system I currently work with, there are several major systems all working together. I might ask if I can patch all of them. And patch them in what ways; via software, database, run-time dependencies, services, etc.?
- For each risk, ask yourself if there are any slight variations on that risk that might be important to you. Good examples of the practice are two risks listed in the book: functionality could be compromised under heavy usage, and the application may not stay secure under heavy usage. You can also vary different parts of the same question. In those two risks, the authors varied the quality criteria (functionality and security) but kept the condition, heavy usage, static. You could add other quality criteria or other conditions.
The general idea is that you’re using lists like these to help you generate test ideas. In a way, you’re also using them to test the planning work you’ve done so far to make sure you haven’t forgotten or overlooked anything.
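To make the variation step concrete, here is a small sketch that crosses quality criteria with usage conditions to generate candidate risk questions. The specific criteria and conditions listed are placeholders of my own, not the book's taxonomy.

```python
# A sketch of the variation idea above: vary one part of a risk question
# while holding the other static. Criteria and conditions are examples only.
from itertools import product

criteria = ["functionality", "security", "response time"]
conditions = ["heavy usage", "sustained usage over days", "a usage spike"]

def risk_variations(criteria, conditions):
    """Cross each quality criterion with each condition to yield
    candidate risk questions for test planning."""
    return [f"Could {c} be compromised under {cond}?"
            for c, cond in product(criteria, conditions)]

for question in risk_variations(criteria, conditions):
    print(question)
```

Most of the generated questions won't apply to a given system, and that's fine; the list is a prompt for the planning review, not a test plan in itself.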
TechExcel, a decade-old maker of development tools, has released new features in DevSuite 8.0, its application lifecycle management software package. Included are the MyWork dashboard engine and wiki tools, which promise improved team collaboration and status reporting on concurrent software projects. Another nod to collaboration support comes in DevSuite 8.0’s new multilingual capabilities and user-definable UI names and values for multiple languages.
When a new product or features are announced, I always wonder what user problems or requests spurred the vendor to invest in developing them. So, when I heard about the DevSuite 8.0 additions, I posed those questions to Paul Unterberg, associate director of product management for Lafayette, Calif.-based TechExcel.
First I asked how users got views and an overview of project status prior to the release of the MyWork dashboard engine. Unterberg responded:
“Before we introduced MyWork, the data for an overview was available to a user or a team based on a report. The user had to log in, select a project, navigate to the report view and then run the report. This took a lot of effort. Since the data was already in the system, we simplified the process and put it all in one place.”
My next question: How about the before-and-after picture for integrated wiki tools?
“There was no integrated Wiki before DevSuite 8,” Unterberg said. “This meant that people wishing to collaborate on a requirement or document had few options. They could leave notes to each other, but there was always the risk of someone overwriting another person’s changes. The Wiki simplifies the entire process, and eliminates the risk of a user unintentionally erasing another user’s data.”
The overall goal of DevSuite’s integrated set of tools is to marry the strategic and tactical worlds of application development by creating software that lets management and planning processes co-exist seamlessly with specific task-driven development processes. The software tools that enable this relationship provide workflow, process automation, searching, reporting and customization capabilities, among other things.
DevSuite also co-exists with various application development methodologies. For instance, teams using both waterfall and agile processes can live in TechExcel’s ALM framework.
“From our perspective, there should be no relationship between an ALM system and the development methodology a team uses,” Unterberg said. “We’ve heard from many customers the horror stories of their former systems that tried to change the way they worked based on what the system could do.”
It’s better, he said, to create processes in the ALM system that adapt to how the team works. He described such a situation:
“If a team is agile, for example, they might need less process control and a greater degree of flexibility with how they are able to prioritize work. They might also have the system limit the amount of time they can spend in a certain area; adding a time box to a development iteration, for example. This same functionality might be useless to a non-agile team. A good ALM system should be able to adjust to these needs and give the teams the most flexibility in modeling how work is done.”
Not adding another management layer with ALM is a firm goal for TechExcel and is played out in DevSuite, Unterberg said. Adding new management when adopting ALM is only necessary if a lack of management in a certain area drove the ALM adoption in the first place. “Who is in charge depends greatly on the team and the process they follow,” he concluded. “ALM just enhances, automates and ties that process together.”