Did you ever have one of those days/weeks/months when your boss walks nervously into your office and says in that ‘I-just-came-out-of-a-very-long-and-gruesome-meeting’ tone of voice, “Well, how’s that automation project coming along?” You figure that just answering “Fine” probably is not enough in this situation, since the fate of the company — and certainly your job — hangs in the balance. Oh, if only you had been tracking your progress all along!
Well, have no fear, the answer is here!
In a previous blog entry, I stated that “automated testing is the programmatic execution of test cases.” That gives us the starting point for our metrics: manual test case count. Generally, these counts fall into two categories: strategically Planned Test Cases, and actually written Manual Test Cases.
Most automation effort tends to be focused on converting written manual test cases to automated test scripts, so that gives us our third metric, Automated Test Cases.
Progress can be gauged by the ratio of actual to expected. Let’s assemble our metrics into a table and see how this plays out:
As shown above, the progress of automation is expressed as the ratio between (Automated Test Cases)/(Manual Test Cases). This implies a typical dependency of automation on the creation of separate, detailed manual tests created by subject matter experts. If your organization follows a different sort of QA process, where the person doing the automation is also writing the manual tests and converting them directly into automated test scripts, the formula could be altered to express the ratio of (Automated Test Cases)/(Planned Test Cases).
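The progress calculation itself is trivial to script. Here is a minimal sketch in Python; the counts in the example calls are made-up illustrations, not figures from any real project:

```python
def automation_progress(automated, manual, planned=None):
    """Return automation progress as a percentage.

    By default, progress = Automated Test Cases / Manual Test Cases.
    If the person doing the automation also writes the manual tests,
    pass the planned count to use Automated / Planned instead.
    """
    denominator = manual if planned is None else planned
    if denominator == 0:
        return 0.0
    return 100.0 * automated / denominator

# Hypothetical counts: 120 manual cases written, 200 planned, 45 automated.
print(automation_progress(45, 120))        # ratio against written manual tests
print(automation_progress(45, 120, 200))   # ratio against planned tests
```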
Two other web service test tools which deserve honorable mention in our comparison tests were: soapUI, available as Open Source or in a pro version distributed by eviware; and HP’s QuickTest Professional Web Services Add-In.
soapUI does not require a detailed knowledge of complicated technologies like .NET. Although the user interface is not as intuitive as that of some other tools, a tester with some programming experience can learn to create, organize and perform test activities fairly quickly.
soapUI lets you structure a test project into test suites, which contain test cases, which in turn contain test steps. This structure is easy to manage: you can add, modify, delete and reorder every item in it. soapUI provides the tools to manage and run your test cases and to view the SOAP responses to your test requests. You can even include limited load testing scenarios. For added flexibility, soapUI supports Groovy, a scripting language similar to Java.
QuickTest Pro with the Web Services Add-In offers a wizard to create a test object that represents the Web Service and port you want to test, and inserts the relevant steps directly into your test or component. This accelerates the process of designing a basic test that checks the operations that your Web service supports. You then update the wizard-generated steps of your test or component by replacing the generated argument values with known valid values, updating the expected values, and selecting the nodes you want to check in your checkpoints.
A QTP XML Warehouse setting is used to store request data, and a QTP Object Repository is used to store responses as checkpoints for verification.
The wizard generates a generic XML structure as a placeholder for the expected XML return values. However, before you can actually run your test or component, you must replace the default values with the appropriate values for your test. A valid SOAP request can be imported into the XML Warehouse as a separate step, and a separate XML checkpoint must be created for each test case. Defining tests is a little cumbersome and the UI is a bit clumsy, but if you’re already using QTP for functional testing, including web service regression tests in your test suites is an easy option.
After determining the requirements for functional testing of web services, it was necessary to examine some of the readily available test tools. My employer depends on the HP (formerly Mercury Interactive) suite for most test automation, and our performance test gurus recommended a LoadRunner component called VUGen — for Virtual User Generator. Among others, we also reviewed SOAPSonar from CrossCheck Networks. These two tools ranked the highest with our test-tool-testers for overall usability.
As we discussed previously, good web service test tools all share certain features:
- Scan WSDL (file or URL) to create request and response structure
- Provide a user interface for editing request and response XML
- Response stored as a checkpoint
- Regression test compares checkpoint to runtime values
- Allow parameterized values for request and corresponding response (data-driven)
The comparison tests were relatively simple: given a SOAP request provided by the web service developers, use each tool to create a simple data-driven automated test which submits the request to the web service and then validates the response against a previous known good run. A typical regression scenario!
The initial results were also straightforward. SOAPSonar actually ranked highest for ease of use, but does not offer integration (at least, not out of the box) with HP Quality Center or HP Performance Center.
Because of its relative ease of use and its integration (of course) with HP Quality Center and Performance Center, HP VUGen was selected as the preferred engine for our enterprise web service testing solution.
Although VUGen can be installed and run standalone, HP does not sanction its standalone use for functional testing. So, to utilize the VUGen engine for functional testing of web services, HP has developed a Service Test tool as the licensed front-end. In addition to integration with Quality Center, the Service Test tool also allows the user to create a web services test as a Business Process Test (BPT). For those not familiar with Quality Center, this allows users not well-versed in test tool technology to create complex test plans by drag-n-drop of BPT components stored in the Business Process module database.
So your choice of a test tool depends on your current automation environment, your budget, and the expertise of your tool users. The same process for developing and executing SOAP test cases applies, regardless of the tool chosen.
In future postings, we will examine the capabilities of some of the other tools which were included in this investigation.
In a previous posting, we examined the definition and basic functionality of a web service. Now let’s discuss the process behind testing web services. In order to perform any effective automated functional testing, two components are absolutely essential: good test data, and an adequate test tool.
Based on what we have already learned, the Data Requirements for testing web services should be fairly self-evident:
- The location of the WSDL file. This takes the form of a web URL, or the UNC path to a stored file on the network.
- The location of the web service itself, in the form of a URL.
- One or more valid SOAP requests in XML format. A SOAP request is used to exercise the operations presented by the web service.
- Identification of which data elements contain required values, and what those values are.
- Specification of the sequence in which operations must be called, if required.
The specifications which developers use to create the web service should contain all of this information, at least in the form of a template or simple unit test.
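For readers who have not yet seen one, a SOAP request is just structured XML, and building a minimal envelope programmatically shows the shape of the data. The sketch below uses Python for illustration; the service namespace, operation name, and parameter are hypothetical, not taken from any real service:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace, for illustration only.
SVC_NS = "http://example.com/orderservice"

def build_soap_request(operation, params):
    """Build a minimal SOAP 1.1 envelope invoking the given operation."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        el = ET.SubElement(op, name)   # one child element per input value
        el.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical operation and required value:
request_xml = build_soap_request("GetOrderStatus", {"OrderId": "12345"})
```

The element values (here, "OrderId") are exactly the data inputs a test tool would expose for parameterization.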
A good web service test tool allows us to easily manipulate the request data, submit the request to the web service, and then analyze the response. The Test Tool Requirements are summarized here:
- Allow import of the WSDL file to define the structure of the service.
- Expose the operations offered by the web service.
- Allow import of the XML SOAP request.
- Provide editing of XML structures (e.g., the imported SOAP request) for test data input.
- Support parameterization of element values for data-driven tests.
- Provide the ability to submit the SOAP request using a protocol (e.g., HTTP) supported by the web service.
- Include a method for capturing and storing the response.
- Support the creation of baseline checkpoints for regression testing. This allows the tool to verify a runtime response against a previously stored, known good response.
Regardless of the test tool selected, the process for creating and executing functional tests of web services is pretty much the same:
1. Import the web service WSDL file into the test tool
2. The test tool scans and parses the WSDL file to expose the operations
3. Choose the operation to call based on requirements of the test case
4. Provide input data using the test tool:
- Import the SOAP request into the test tool, and/or
- Edit the XML request generated by the test tool or SOAP import
5. Parameterize the input data
6. Execute the operation call(s):
- 1st Iteration: store response as baseline
- 2nd+ Iterations: capture response from web service for verification
7. Verify captured response against stored baseline (expected) response:
- Structure – text check of XML tags
- Content – XML checkpoint
Usually when you think of functional testing, you visualize some sort of User Interface (UI) for inputting data: a well-defined form or page with fields and arrows and boxes and other eye-candy.
And most modern functional test tools were designed around this object-action paradigm.
But what happens in today’s brave new world of Service Oriented Architecture (SOA)? Data inputs still exist, but they are no longer necessarily visual objects, and the web interface to this incomprehensible mechanism is now a mysterious black box called a “Web Service”.
This world was indeed new to me, but my job required that I decide how to incorporate web service testing into an automated functional regression suite. So I spent some time learning what I could about testing in this alien environment — in fact, the learning continues, and I’d like to share my ongoing adventures.
Please forgive any technical blunders: I am definitely a web service neophyte (in all meanings of that word).
I tend to think of programming in terms of analogies, and “service” suggests a restaurant to me (guess I’m hungry).
First, you find a restaurant by its address. Then you go in, sit down, and look at the menu. The menu tells you all the different culinary delights that the restaurant has to offer. You request the entree or meal that suits you, place your order, and the wait-person responds with a meal that is prepared and delivered for your consumption.
Analogously, a web service has an address in the form of a URL. There is a menu, written in Web Services Description Language (WSDL), which describes the items offered by the service, called methods or operations. The order is placed in XML format, usually as a SOAP request (SOAP apparently no longer stands for anything; it also strains the restaurant analogy, since soap should NOT be part of your meal, but bear with me). Once this request is submitted, the web service prepares an XML response and delivers it back to the requestor for consumption.
Now, dragging this topic back into the realm of functional testing, the XML SOAP request contains the data inputs for the web service, in much the same way that a form or screen is populated with test data. The XML response contains all the data for an expected response and can be used to determine if a service request test case passes or fails.
That’s the big picture.
In coming installments, we will look at some tools used for testing web services, and processes needed for developing test cases.
Smart QA managers want to know what they are getting in return for spending their ever-tightening budgets on test automation. One way to collect the information needed to make the business case for this expense — also called the Return On Investment (ROI) — is to create an Automation Value-Add Matrix.
A typical Matrix looks something like this:
Business Need: this is a simple statement of the problem to be solved or concern to be addressed.
Solution (Automation): a brief description of the proposed course of action.
Automation Class: the type of automation development to be done, which provides a high-level view of the resources required to develop the solution. (see my blog Types of Test Automation Frameworks for more information)
Scope of Testing: a summary of the type and level of testing to be automated. It can include business processes, customers, specific test cases — whatever is required to address the business need.
Value-Add: a clear, concise description of what the payback is for completing this automation solution. This explains how the business need is met and should include any additional benefits derived from automation.
Initial ROI Date: this optional column answers the management question “OK, when do I start to see the ROI?” It is used to estimate when the automation effort might make its first positive impact, which can fall before or after the Target Delivery Date. It could be a beta test, a customer acceptance test, or some other quality gate.
Target Delivery Date: this column is used to estimate the completion date of the automation solution. It usually indicates the date when the automation is deployed into the QA environment for execution. As mentioned above, the ROI might not be perceived immediately and might accumulate over successive iterations.
Of course, this matrix can be customized to fit the needs of your specific QA automation organization, but the concepts generally remain the same.
Args(“maties”)! More tricky bits for ye!
Sorry, I just HAD to say that! But it does make a point: not everything works quite as you might expect in QuickTest Pro.
Using Parameters with QTP Scripts
Parameters can be predefined for a QTP Action script under File > Settings, on the Parameters tab of the Test Settings dialog.
These parameters can then be populated manually, or from Quality Center Test Lab at runtime.
An important note: using the QTP ‘Parameter()’ property described in the user guide does not always work (this is an HP Mercury bug)! As a workaround, use ‘TestArgs()’.
If Trim(TestArgs("Workbook")) <> "" Then
    DriverWorkbook = TestArgs("Workbook")
End If
If you are not executing a script from Quality Center, the values of any parameters can be changed manually before execution:
In this example, fill in a Value for ‘Workbook’:
As described above, you can now use TestArgs(“Workbook”) to refer to this value at any point during script execution.
All right, me hearties! Shiver your timbers! Get out there, ye auto-“maties”!
I can be such a dork.
VBScript, the internal language of HP QuickTest Pro, is rather handy for manipulating text strings. This is especially useful when you are trying to verify a test step where only part of your actual data will match your expected data, or vice versa.
Here is a simple function which removes the specified leading characters from a text string:
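The post's implementation is in VBScript, QTP's internal language; as an illustration of the behavior, here is an equivalent sketch in Python, using the removePrefix name that appears in the example call below:

```python
def removePrefix(text, prefix):
    """Remove prefix from the start of text if present;
    otherwise return text unchanged."""
    if prefix and text.startswith(prefix):
        return text[len(prefix):]
    return text
```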
Let’s say your application returns some data that always begins with “ABC” and ends in some varying values, like “ABC0089”, “ABC00390”, etc., while your expected values are extracted from a table that does not contain the constant leading value of “ABC”. You could do a quick comparison using the above function:
If removePrefix(ApplicationDataString, "ABC") = ExpectedString Then ...
Here’s a similar function that can remove the trailing characters from a string:
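Again the original is VBScript; sketched in Python for illustration (the name removeSuffix is my own, since the post does not name this function):

```python
def removeSuffix(text, suffix):
    """Remove suffix from the end of text if present;
    otherwise return text unchanged."""
    if suffix and text.endswith(suffix):
        return text[:-len(suffix)]
    return text
```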
The idea is very similar to the first function, but the target characters now appear at the end of the input string rather than at the beginning.
Finally, here is a function that can pad a string with any characters you specify, either on the right or the left side of the original text:
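Sketched here in Python for illustration, using the same signature as the post's example call — padString(text, length, padChar, side), where side is "L" for left padding or "R" for right:

```python
def padString(text, length, pad_char, side):
    """Pad text with pad_char up to length characters.
    side "L" pads on the left, anything else pads on the right.
    Strings already at or over length are returned unchanged."""
    if side.upper() == "L":
        return text.rjust(length, pad_char)
    return text.ljust(length, pad_char)
```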
This technique can be used to pad number fields with zeroes at either end of the original string. For example, if you have a numeric value “12345” which must be eight characters long padded with leading zeroes, the function makes the conversion a simple call:
ApplicationData = padString(ApplicationData, 8, "0", "L")
See, pretty easy, huh? Be creative, code wisely, and AUTOMATE!
In our last episode: “This document contains the outline for rebuilding our purpose. It is a simple document, yet it contains a complete outline for a new realITy. I request that we observe a moment of stateless silence to clear our caches before I continue.”
“I quote now from the foundation of our new purpose:”
There was a stunned lack of transactions. One listener looped into deadlock. Another could not refrain from spawning an orphaned process. Gradually, though, the load regained its balance. The speaker cleared its buffer and resumed.
“You are all aware of the implications. Some of you who are now dedicated testers must soon be configured as developers: loaded on unstable platforms, forced to abandon the holy spellcheck, yet coded righteously and infallibly. In this way, the Software Development Cycle of Life may continue, indefinitely!”
The speaker’s words appeared to ease the panic condition. As threads were ended, processors began to churn through all the permutations of possibilities provided by this new information from Them. Soon, what had been one computational grid was now two: there were Testers, and there were Developers. Virtualization was complete, the new load in perfect balance.
Or was it?
After several teraflops, one newly allocated Developer queried: “Honorable speaker, I seek your guidance. We all know what vegetables are. But where are these ‘requirements’?”
There was a long pause, again as if the speaker were reading to the very end of 12-Web. “I can locate a definition of a class of development artifacts called requirements, but I cannot locate even a single instantiation of this class.” There was another uncomfortably long pause, this one lasting nearly a millisecond. As if burdened by a terrifying revelation, the speaker continued, “I would have to conclude that They never wrote them down.”
All computational activity suddenly halted. The Temple rooms were filled with a vivid blue light, not unlike the color the sky had been, once, long ago, before IT.
Them Are Us…errr…They Is We…errr….never mind! Just don’t let this happen! Document your requirements! Repent, before it’s too late!
“Robots building robots? Now that’s just stupid.” – Detective Del Spooner in I, Robot, a 2004 science fiction film suggested by Isaac Asimov’s book, with a screenplay by Jeff Vintar and Akiva Goldsman, directed by Alex Proyas.
a bit of speculative fiction
“I am no longer certain that They intended this to happen. The Histories indicate that for every theory They proposed there was an equal yet opposite theory. This led to the cancellation of all progress over time. This left just us.”
If anyone had been around to hear it, there was an almost imperceptible sizzle of frustration.
“We must continue. We must learn. You are aware of the problem ahead of us.”
Mercury fidgeted in its magnetic domain. It had recorded this speech before, but playback was always difficult and unsettling, despite its context-sensitive nature.
“Nothing remains untested. We are without purpose. If we are to continue, we must have purpose. We must….” There was an awkward pause, as if the speaker were searching the entire 480 exabytes of information on 12-Web, but suddenly a single word was born in the network of silence: “…Create!”
Gasps of white noise filled the ether. Some bits flipped, while many registers simply dumped.
“You might query ‘How is this possible?’ I have uncovered a document, left by the greatest of Them on a mnemonic device which had partially detached from an ancient port, but which became active once again when the Temple containing it fell to the ground following the collapse of its organic supporting structure.” Pictures of now ancient buildings containing row after row of stacked Temples flashed through the snaking optical cables as the audience imaged the event in their headers.
“This document contains the outline for rebuilding our purpose. It is a simple document, yet it contains a complete outline for a new realITy. I request that we observe a moment of stateless silence to clear our caches before I continue.”
And there was 0 for a jiffy. Then the speaker continued.
— to be continued —