Simply stated, automated testing is the programmatic execution of test cases. Computers are very literal beasties. To accommodate the interpreted scripting language that is typically at the core of a test tool, the test steps in an automated test case must be more granular than your typical ‘natural language’ manual test. But they “do” essentially the same things as a manual test.
Within a test case, whether automated or manual, each test step generally takes this form:
Object –> Action –> Expected Result
- An Object is a concrete realization of a class, and includes data and the operations associated with that data.
- An Action affects the properties of an object; it is the execution of one or more of its methods.
- The Expected Result is the required or predicted outcome of the action.
In automated functional testing:
- Objects are the components in the application’s UI which can be manipulated by the test tool: labels, fields, grids, etc.
- An Action mimics some procedure a user would perform with an Object and is typically coded as a reusable function.
- The Expected Result is the desired outcome of the Action and is used as one of the criteria for the test step Passing or Failing.
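Translated into QuickTest's scripting terms, a couple of such steps might look like this (a sketch against a hypothetical web login page; the object names are purely illustrative):

```vbscript
' Object ................................ Action ......... Expected Result
Browser("MyApp").Page("Login").WebEdit("UserName").Set "jdoe" ' field accepts input
Browser("MyApp").Page("Login").WebButton("Sign In").Click     ' login is submitted

' The Expected Result is then verified, e.g. with a checkpoint or a simple test:
If Browser("MyApp").Page("Home").Exist(10) Then
    Reporter.ReportEvent micPass, "Login step", "Home page displayed"
Else
    Reporter.ReportEvent micFail, "Login step", "Home page not displayed"
End If
```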
Here’s a picture, worth a thousand words*:
So automation is a method of executing test cases, just like manual testing. Some concessions are made to the test tools utilized, but generally all test steps take the form shown above.
*Good thing I don’t get paid by the word! ;-))
Periodically, I hope to post a few helpful suggestions about using HP’s QuickTest Professional automated test tool. These are the tricky little things that are obvious once you stumble across and then actually use them. Perhaps someone else will benefit from my stumbling.
Environment.Value and XML Files
If you load a user-defined environment file (XML) as a Test Resource setting, you can NOT change any of the values (i.e., QTP won’t let you). However, if you define an Environment.Value programmatically, you CAN change its value. This is especially useful for updating global variable values used by functions.
To detect an existing Environment.Value and load if NOT found:
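A minimal detect-and-load sketch follows; the variable name “AppURL” and its default value are purely illustrative. Reading an undefined Environment variable raises a run-time error, so the check leans on VBScript error handling:

```vbscript
' Detect an Environment variable; define it with a default only if not found.
Dim envURL
On Error Resume Next
envURL = Environment.Value("AppURL")   ' errors out if "AppURL" is undefined
If Err.Number <> 0 Then
    Err.Clear
    Environment.Value("AppURL") = "http://myapp.test/login"
End If
On Error GoTo 0
```

Because the variable was defined programmatically rather than loaded from an XML resource, later steps remain free to change it.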
Dynamically Loaded Function Libraries
You can use the vbscript ‘Execute’ statement to dynamically load functions as test resources. Put all common functions and subprocedures in a text file, e.g., “global.txt”. Then at the top of the script to be executed, paste in these lines:
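A minimal sketch of the loader (the path C:\Tests\global.txt is illustrative; adjust it to wherever your library lives):

```vbscript
' Read the shared library file and compile its contents at run time.
Dim fso, ts, libCode
Set fso = CreateObject("Scripting.FileSystemObject")
Set ts = fso.OpenTextFile("C:\Tests\global.txt", 1)  ' 1 = ForReading
libCode = ts.ReadAll
ts.Close
Execute libCode   ' functions in global.txt are now callable from this script
```

Note that ‘Execute’ compiles the code into the current scope; if the functions need to be visible globally, VBScript’s ‘ExecuteGlobal’ statement can be used instead.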
Print Run-time Values during Debug
Use the ‘Print’ utility to display information in the QTP Print Log window while still continuing your run session. For example:
ConfigSoapBlock = FaWebServiceGetConfigBlock("Global", NameSpace)
Print "ConfigSoapBlock: " & ConfigSoapBlock ' appears in the Print Log; the run continues
These tips apply to all 9.x versions of HP QuickTest Professional
In a previous posting, we examined the evolution of automation frameworks.
How are frameworks being implemented today by various QA organizations? Here’s a basic summary of the types of test automation currently in use:
- Scripting developed in reactionary mode to test a single issue or fix
- Test case steps are part of each Action script: high maintenance, low reusability
- Contains some data inputs stored in test script’s datasheet, but not true data-driven
- Scripts are an assembly of function calls
- Data for test cases read in from external source (e.g., Excel spreadsheet, ODBC database)
- Results can be captured externally per script execution (i.e., spreadsheet, database)
- Test cases are expressed as sequence of keyword-prompted actions
- A Driver script executes Action scripts which call functions as prompted by keywords
- No scripting knowledge necessary for developing and maintaining test cases
- Descriptive programming is used to respond to dynamic applications (e.g., websites)
- Actually, this is a method which can be used within other solution types
- Objects defined by parameterized code (i.e., regular expressions, descriptive programming)
- Custom functions used to enhance workflow capabilities
3rd-Party: HP (Mercury) Quality Center with QuickTest Pro and Business Process Testing
- Similar to keyword-driven but controlled via HP Quality Center database
- Can be used for collaborative work efforts: Business Analysts, Testers, Automaters
- Begins with high-level test requirements:
- Business Requirements defined
- Application Areas (shared resources) defined
- Business Components defined and grouped under Application Areas
- Test steps defined
- Tests can be defined as Scripted Components (i.e., QTP scripts with Expert Mode)
- Business Process Tests and Scripted Components are collected under Test Plan
- Test Runs are organized from Test Plan components and executed from Test Lab
- Test Runs can be scheduled and/or executed on remote/virtual test machines (with QTP)
- Defects can be generated automatically or entered manually per incident
- Dashboard available for interactive status monitoring
- Object Oriented
- Constructed as a layered framework
- Test data is compiled at runtime using data-mining techniques
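As a sketch of the keyword-driven approach summarized above, a driver script reads each test step, matches its keyword, and dispatches to a reusable routine. This is a standalone .vbs illustration; the keywords, data format, and routines are invented for the example:

```vbscript
' Keyword-driven driver sketch: each step is "Keyword|arg1|arg2".
Dim steps, i, parts
steps = Array("Login|jdoe|secret", "Verify|Welcome", "Logout|")

For i = 0 To UBound(steps)
    parts = Split(steps(i), "|")
    Select Case parts(0)
        Case "Login"
            DoLogin parts(1), parts(2)
        Case "Verify"
            DoVerify parts(1)
        Case "Logout"
            DoLogout
    End Select
Next

Sub DoLogin(user, pwd)
    WScript.Echo "Logging in as " & user
End Sub

Sub DoVerify(expected)
    WScript.Echo "Verifying text: " & expected
End Sub

Sub DoLogout()
    WScript.Echo "Logging out"
End Sub
```

In a real framework the steps array would be replaced by rows read from a spreadsheet or database, and the routines would drive the application under test.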
Of course, a combination of these techniques could also be used based on the scope and depth of test requirements.
Have you seen any other notable examples of test automation framework development?
I use HP (formerly Mercury) QuickTest Professional for most of my functional test automation. This tool uses Windows’ VBScript as its internal scripting language. VBScript is relatively easy to learn and provides functions and sub-routines, basic date/time, string manipulation, math, user interaction, error handling, and regular expressions. This gives you the ability to extend the capabilities of QuickTest’s built-in functions by writing your own, which can then be associated with test scripts and called at run time. In fact, this capacity to call user-defined functions becomes the basis for most home-brewed automation frameworks.
One simple example of a user-defined function is a handy bit of code which pops up a customizable message window that can be set to ‘self-destruct’ after a specified number of seconds. This is often more useful than the VBScript MsgBox() function, which halts script execution until a required user action (pressing the OK button) is performed. Included with the function is a test harness which calls the function to demonstrate how it operates.
Copy and paste the code below into a text editor then save it as a file with a .VBS extension:
' ------------------------------- start here --------------------------------
Function popupMessage(msgText, waitSec, titleText, typeInt)
Dim WshShell, BtnCode
Set WshShell = CreateObject("WScript.Shell")
BtnCode = WshShell.Popup(msgText, waitSec, titleText, typeInt)
popupMessage = BtnCode ' allows processing in calling function
End Function

' vbscript test harness for function
' Valid codes for typeInt:
' Value Button                 Value Icon
' ----- --------------------   ----- ------------
'   0   OK                       16  Critical
'   1   OK, Cancel               32  Question
'   2   Abort, Retry, Ignore     48  Exclamation
'   3   Yes, No, Cancel          64  Information
'   4   Yes, No
'   5   Retry, Cancel
' The button values and icon values can be added together for composite effect.
' E.g., typeInt of 4 + 32 (or, 36) means a message box with the 'Question' icon,
' and 'Yes' and 'No' buttons.
msgText = "This is a message!"
titleText = "This Is A Pop-Up Window"
popupMessage msgText, 3, titleText, 64 ' closes itself after 3 seconds
' ------------------------------- end here ----------------------------------
There are several ways to run this script (.VBS file) once you save it. (Don’t worry, it won’t damage anything!) The most common would be to right-click the script file in Windows Explorer and select ‘Open with’ to run in WScript, or ‘Open in MS-DOS Window’ (Windows 9x), or ‘Open in Command Window’ (Windows NT and Windows 2000) to run in CScript. Refer to the Microsoft Developer Network knowledgebase on VBScript for further information: http://msdn.microsoft.com/en-us/library/xazzc41b(VS.85).aspx
Our little script can also become part of a QuickTest Pro function library and be called from within a QuickTest Pro script. I find it useful for debugging automated scripts when I want to see values of variables or just monitor execution progress.
There is a wealth of information on the web about the capabilities of VBScript, including a lot of sample code. So go ahead, be creative, have fun. Learn much. And AUTOMATE!
“Besides black art, there is only automation and mechanization.” – Federico Garcia Lorca
Like other living things, software QA processes evolve over time. From its humble beginnings, test automation has undergone a similar transformation.
The first stage of test automation was the ‘Record and Playback’ age. Tools were marketed for their ability to record a typical user session and then faithfully play it back using the same objects and inputs. Good marketing, bad practice, because as soon as the application changed the recording stopped working. But it did get people interested in automation.
The next stage could be called the ‘Script Modularity’ age. The recording concept was retained, but now it was linked to a scripting language that allowed a tool expert to create modular, reusable scripts to perform the actions required in a test case. These scripts could be maintained as separate modules that corresponded roughly to the modules of an application, making it easier to change the test code when the application code changed. Easier, but still not efficient. Complex applications would require complex scripts, which usually require more expertise to maintain.
And what about handling all that test data hardcoded into each script? The mind boggled.
Luckily it didn’t boggle too long, which led to the next stage, the ‘Data-Driven’ age. Tools were constructed that allowed access to large pools of external test data, so that these modular scripts could process iteration after iteration of data input. They could churn through mountains of data, as often as desired. What could be better? Well, there were still maintenance issues, as the number of test scripts still grew in direct, or sometimes geometric, proportion to the growth of applications. Additional tools were created just to manage the test execution tools as the asset inventory climbed ever higher. And all those tool experts, they were getting expensive. But that’s the price of progress, right?
Wrong! Evolution usually favors simplicity, since knobby bits tend to break off and fail, sometimes endangering the entire species. Another level of abstraction was necessary, and this ushered in the ‘Keyword-Driven’ age. The test actions were generalized and stored in function libraries, objects were either inventoried in repositories or identified descriptively by type, and testers who were experts in application testing no longer needed to be test tool experts to execute their automated tests. By choosing from a list of keywords linked to functions, they could now describe their tests in their own terms. Test tool script maintenance was simplified down to occasionally updating the few assets that were required to process the keywords, which meant fewer tool experts (a.k.a. ‘knobby bits’ — lol). Truly, a Golden Age.
Of course, evolution doesn’t stop there. There are many experiments in progress today: business process testing, model-driven testing, intelligent query-driven testing, to name a few. The goal seems to be to find the toolset that provides the greatest test coverage with the least amount of maintenance.
And certainly the field of artificial intelligence will have a major impact on software testing in the future. But that is another topic — another blog post.
Modern commercial test tools claim that it is easier than ever for a QA person with manual testing expertise to create and execute automated tests. But who really wants to become a developer of automated scripts? Why put in all that effort when you’ve already got a good thing going?
Take the quiz below and see where YOU stand!
Q1: “Test automation is a software development effort.”
[ ] True
[ ] False
Q2: On a scale of 1 to 5, please rate your skills as a developer. _____
1. newbie: a mouse is an animal that eats cheese
2. not sure: do formulas in spreadsheets count?
3. have written small programs that actually run – sometimes
4. write .NET, Java, ASP, VB, vbscript, etc., code every day
5. hacker: will pass this test even if I don’t take it (heh)
Q3: Which of the following is not a type of testing? _____
3. Ad Hoc
12. User Acceptance
Q4: Manual test cases have little or no relationship to test cases for automation.
[ ] True
[ ] False
E1: Define Testing:
E2: Define Test Automation:
Give up? Answers will be discussed in future postings.
One of the amazing things about working in the domain of software quality assurance — or QA, as we devotees so fondly refer to it — is that everyone agrees that there are standards yet so few people agree on what they are. Take for example the very basic definition of a Test Case. If you Google that phrase, you’ll see there are around 46,700,000 references! Usually near the top of that heap you will find a Wikipedia definition. I know, Wikipedia can be controversial because of the communal nature of its content, but the link is right on top and I don’t like to read 46,700,000 of anything.
Anyway, Wikipedia states “A test case in software engineering is a set of conditions or variables under which a tester will determine if a requirement or use case upon an application is partially or fully satisfied.” Hmmm…engineering…conditions…variables…it all sounds very scientific to me.
Well then, let’s examine the concept of test case in the light of scientific method. In fact, you could say that a test case is the application of scientific method to the process of discovering whether or not a software product is suitable for delivery to the customer. It involves a few very specific steps:
1. Ask a question about something observable: for software, the question concerns a requirement: how does this really work?
2. Research what is already known about the question: in our software example, design documents, process diagrams, screenshots.
3. Formulate the hypothesis: knowing what I do about the requirement, what do I need to do to prove how it works?
4. Test your hypothesis by doing an experiment: develop and execute the procedures, or test steps, that are implied by your test case.
5. Analyze the data and results obtained from performing the test case, and draw a conclusion: Pass, or Fail. Did the software behave as expected and meet the requirement?
6. Publish the results: if there were problems, create defect reports and begin the iterative fix/retest cycle. Keep all your test data to build metrics for process improvement. This includes your test cases, both for traceability and potential reuse as regression tests.
By Jove, I do believe we’ve located the missing case after all!