
Friday, October 3, 2008

What is the difference between STLC and SDLC?

Software Development Life Cycle (SDLC): It consists of 4 phases:
i) Requirements Analysis
ii) System Design
iii) Implementation
iv) Testing
i) Requirements Analysis: In this phase the customers of the system provide the business requirements for the system. It is the analyst's job to extract these requirements from the customers and document them with sufficient clarity that the team knows what to build to meet the customer's needs. Analysts also assess the feasibility of the requirements and record it in the FDD.
ii) System Design: The architecture and details of how the system will work are created. These are documented by technical architects who produce documents such as UML diagrams.
iii) Implementation: The actual coding of the solution happens in this stage.
iv) Testing: Finally, the delivered code is tested against the requirements documents to ensure that the system being delivered meets the needs of the customer.
Software Testing Life Cycle (STLC): It consists of 4 phases:
i) Requirements Analysis
ii) System Design
iii) Implementation
iv) Testing
i) Requirements Analysis: Testing begins with the verification of "requirements" documents. Testers need to analyse the requirements documents to ensure that they know exactly what the requirement is and that everyone has the same understanding. Removing ambiguity at this stage will remove bugs from later stages. A commonly used checklist for verifying requirements consists of the following:
• Correct: Simply put, does the requirement correctly reflect what the user wants?
• Complete: Ensure no elements are missing from the requirement. Does the requirement describe all possible values? What about the performance/security/accessibility aspects of the requirement?
• Consistent: Check that there are no contradictions within the requirements.
• Feasible: Can the requirement be delivered given the technology, time and budget constraints?
• Testable: Is the expected result known, and can it be programmatically or visually verified? Words like 'maximise' or 'adequate' cannot be verified and should not be used in requirements.
• Traceable: Is it clear which part of the system this requirement applies to? If it applies to more than one area, is this clear?
• Unambiguous: Look for ambiguous words like 'should', 'can', 'etc.', 'usually', 'and/or', 'quick'.
ii) System Design: Once the System Design and Requirements are available, they can be used to form Test Cases. Each Test Case has a Test Condition, a procedure and an expected outcome. During this phase Testers, Designers and Developers should be working together to ensure that everyone understands the solution and the possible risks and weaknesses of the solution. Developers should be thinking about the unit tests they need to perform against the units they are developing; this includes the identification of any stubs or test harnesses that may be needed. Testers can use some of the Test Case Design Techniques to help them identify test cases that should be created.
iii) Implementation / Development: During the development of the solution, "unit tests" are executed against each of the units being developed. This is the first opportunity to test the application.
iv) Testing: It consists of several types:
a) Integration Testing: consists of checking the data flow between two or more integrated modules.
b) Exploratory Testing: is the simultaneous learning, creating and executing of test cases. In traditional methods the learning and test case creation are done ahead of time, working from functional requirements.
c) Functional Testing: is the process of testing a system to verify that it meets the functional requirements.
d) Load Testing: checks the system under expected load conditions, i.e. how the system behaves when 'concurrent users' access the same application. It also covers testing an application under 'heavy loads', such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
e) Performance Testing: makes sure that the product does not take up too much of the system's resources or too much 'time to execute' a task. Imagine the reaction of the user if a save operation takes more than 5 minutes. This testing also checks that the response time meets the user's requirements. There are no industry-wide standard response times for web applications, although there are some interesting writings on the matter.
f) Reliability Testing: tests the ability of a system to handle 'negative flows' or 'situations'. For example, if a printer is not connected to your application and you give the Print command, the AUT should not hang waiting for a response from the printer; it should give an error message and the system should recover to normal functionality. It is also known as recovery testing.
g) Regression Testing: Regression testing can be defined as the retesting of a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes made. The principle is that, given a tested system and a new version of that system with some change made, a subset of the tests may be sufficient to restore the "tested" status of the system. The value of separating out regression testing as a separate concept from simply re-testing the system completely is that when testing costs are high, being able to get the same verification from less testing is desirable. If the cost of a full system test is low, simply re-testing the system when changes are made is probably the best strategy. The extra work that has to be done to execute a regression test is determining the subset of the full system test required to bring the system back to "tested" status. It is possible, if the changes to the system are extensive enough, that the regression test will in fact be the same as the full system test. A regression test suite is created from selected test cases and scripts. Where possible, regression testing should be automated to reduce the time taken to repeat these tests; in some cases the additional time required to automate the tests will outweigh the benefits.

Explain SDLC?

There are 5 phases in SDLC:
1) Requirement & Analysis
2) Design
3) Coding
4) Testing
5) Maintenance

1) Requirement & Analysis:
The main aim of the requirement analysis phase is to produce a document that properly specifies all the requirements of the customer. The requirement specification document is the primary output of this phase. Proper requirements and analysis are critical for a successful project, so the need for executing this phase properly, to produce an SRS with the least defects, should be evident.

2) Design: During the design phase, software satisfying the elicited user requirements is designed; it is then built, tested and delivered to the customer in the subsequent phases.

High-level design: High-level design is the phase of the life cycle in which the logical view of the computer implementation of the solution to the customer requirements is developed. It gives the solution at a high level of abstraction. During high-level design, the functional architecture of the application and the database design are produced.
The entry criterion is that the SRS has been reviewed and authorized. The input for this phase is the software requirement specification, and the output of this phase is the high-level design document. The exit criterion is that the high-level design document has been reviewed and authorized.

Low-level design: The view of the application developed in the high-level design is broken into modules, and logic design is done for every program. A unit test plan is created and documented as the program specification. An important activity in the detailed design phase is the identification of common routines and programs.
The entry criterion is that the high-level design document has been reviewed and authorized. The exit criterion is that the program specification document has been reviewed and authorized.

Coding:
During the coding phase the required programming language is used to produce the programs. This phase produces source code, executables and the database design.

Testing:
Here the actual testing takes place.

Maintenance:
A successfully developed project will then undergo maintenance.

What is the RAD (Rapid Application Development) Model, and when does a firm go for such a model?

The RAD model is a linear sequential software development process that emphasizes an extremely short development cycle. The RAD model is a high-speed adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. It has the following phases:

Business modelling
Data modelling
Process modelling
Application generation
Testing and Turnover

How do states move from discovery to action?

The purpose of the discovery process is to produce information that
can inform decisions and point to actions for remediation and quality improvement. This
paper has focused on ways to develop a reliable and robust set of discovery methods as a
foundation for an overall quality management system. Moving from the production of
accurate and reliable data to presentation of understandable and actionable information
requires a number of additional techniques and tools.

What are quality improvement activities?
Quality improvement activities are an opportunity for the practice's GPs and staff members to come together as a team to consider quality improvement.

What is a discovery method?

A discovery method is defined as a systematic and organized
activity to assess, review, evaluate or otherwise analyze a process, program, operation,
provider or outcome. The end product of a good discovery method is reliable data that
provides “evidence” to support a conclusion or action either at the individual or system
level. In order to produce systematic and reliable data, certain core features should be
present in a discovery method. These include:
• protocols for data collection
• qualified reviewers/interviewers
• sampling methods that allow conclusions
• standard data collection instruments
• reliable and accurate data
• ability to aggregate, analyze and report data

What is the prototype model (in STLC)? What is the architecture of the prototype model?

1. Test Strategy & Analysis
2. Test Case Design
3. Test Execution
4. Test Log Preparation
5. Defect Tracking
6. Final Report


Explain different Prototype Models Types.
There are four types of Prototype Models based on their development planning:
The Patch-Up Prototype,
Nonoperational Prototype,
First-of-a-Series Prototype and
Selected Features Prototype.
What are advantages of Prototype Model?
Creating software using the prototype model also has its benefits. One of the key advantages of prototype-modeled software is the time frame of development. Instead of concentrating on documentation, more effort is placed on creating the actual software. This way, the actual software can be released in advance. The work on prototype models can also be spread to others since there are practically no stages of work in this model; everyone can work on the same thing at the same time, reducing the man-hours needed to create the software. The work will be even faster and more efficient if developers collaborate more regarding the status of a specific function and develop the necessary adjustments in time for the integration.

Another advantage of having a prototype modeled software is that the software is created using lots of user feedbacks. In every prototype created, users could give their honest opinion about the software. If something is unfavorable, it can be changed. Slowly the program is created with the customer in mind.
What are Disadvantages of Prototype Model ?
Implementing the prototype model for creating software also has disadvantages. Since the software is being built from a concept, most of the models presented in the early stages are not complete. They usually have flaws that developers need to work on again and again. Since the prototype changes from time to time, it's a nightmare to create documentation for this software. There are many things that are removed, changed and added in a single update of the prototype, and documenting each of them has proven difficult.

There is also a great temptation for most developers to create a prototype and stick to it even though it has flaws. Since prototypes are not yet complete software programs, there is always a possibility of a design flaw. When flawed software is implemented, it could mean the loss of important resources.

Lastly, integration could be very difficult for a prototype model. This often happens when other programs are already stable. The prototype software is released and integrated into the company's suite of software, but if there is something wrong with the prototype, changes are required not only in that software; it is also possible that the stable software will have to be changed in order for them to be integrated properly.

What is the difference between the iterative model and the prototype model?

Iterative Model: In this model you can come back to previous phases and make changes accordingly. We receive a final output product at the end of the SDLC.

Prototype Model: Here, we receive prototypes of the product before the final release. We release 4-5 prototypes with some differences between them, take the client's opinion, and modify the final product as per the client's suggestions.

What is the difference between the iterative model and the waterfall model?

Waterfall Model: This is a flow-based model, in which you pass through every phase once and cannot go back to that phase again. That is why it is rarely used nowadays.

Drawback: If there is any change in the requirements, you cannot go back and incorporate it.

Iterative Model: In this model you can come back to previous phases and make changes accordingly. We receive a final output product at the end of the SDLC.

What is a backward compatible design?

The design is backward compatible if it continues to work with earlier versions of a language, program, code, or software. When the design is backward compatible, the signals or data that have to be changed do not break the existing code.
For instance, a (mythical) web designer decides he should make some changes, because the fun of using Javascript and Flash is more important (to his customers) than his backward compatible design. Or, alternatively, he decides, he has to make some changes because he doesn't have the resources to maintain multiple styles of backward compatible web design. Therefore, our mythical web designer's decision will inconvenience some users, because some of the earlier versions of Internet Explorer and Netscape will not display his web pages properly (as there are some serious improvements in the newer versions of Internet Explorer and Netscape that make the older versions of these browsers incompatible with, for example, DHTML). This is when we say, "Our (mythical) web designer's code fails to work with earlier versions of browser software, therefore his design is not backward compatible".
On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or, if he decides that he does have the resources to maintain multiple styles of backward compatible code, then, obviously, no user will be inconvenienced when Microsoft or Netscape make some serious improvements in their web browsers. This is when we can say, "Our mythical web designer's design is backward compatible".

How do you invoke WinRunner on a remote machine?

Steps to call WinRunner on a remote machine:
1) Send a file to a particular folder on the remote machine (this file may contain your test parameters).
2) Write a shell script listener and keep it running at all times on the remote host (this script watches for the file in the folder mentioned in step 1).
3) Write a batch file that invokes WinRunner with the test name, and keep it on the remote machine.
4) Call the batch file through the shell script whenever the file mentioned in step 1 exists.

How do you plan automation testing to implement a keyword-driven methodology using WinRunner 8.2?

Keyword-driven testing refers to an application-independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test.
Suppose you want to test a simple application like Calculator and want to perform 1+3=4; then you need to design a framework as follows:

Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->1
Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->+
Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->3
Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->=
Window->Calculator ; Action-> Verify; Argument->4

These steps correspond to the manual test case execution. Now write functions for all of these common framework actions required by your test cases. Your representation may differ as per your requirements and the tool used.
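A rough TSL sketch of such a keyword driver is given below. It is only an illustration, not the framework itself: the table path, column names and object names are assumptions, and ddt_set_row, set_window, button_press and obj_get_text are standard TSL calls used here purely for demonstration.

# Hypothetical keyword driver: reads one row at a time from a data table
# and dispatches on the Action column. Names below are assumptions.
public function run_keyword_table(in table)
{
    auto row_count, i, win, action, arg, actual;

    ddt_open(table, DDT_MODE_READ);          # open the keyword/data table
    ddt_get_row_count(table, row_count);

    for (i = 1; i <= row_count; i++)
    {
        ddt_set_row(table, i);               # move to the current keyword row
        win    = ddt_val(table, "Window");
        action = ddt_val(table, "Action");
        arg    = ddt_val(table, "Argument");

        set_window(win, 5);                  # activate the target window
        if (action == "Push")
            button_press(arg);               # e.g. press "1", "+", "3", "="
        else if (action == "Verify")
        {
            obj_get_text("Result", actual);  # "Result" object name is an assumption
            if (actual == arg)
                tl_step(action, 0, "value matches");      # 0 is treated as pass
            else
                tl_step(action, 1, "value mismatch");     # non-zero is treated as fail
        }
    }
    ddt_close(table);
}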

WinRunner: Why "Bitmap Check point" is not working with Framework?

A bitmap checkpoint is dependent on the monitor resolution; it depends on the machine on which it has been recorded. Unless you are using a machine with a screen of the same resolution and settings, it will fail. Run it in update mode on your machine once; it will then be updated for your system and will pass from then on.

How do you view the contents of the GUI map?

If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we will be identifying those objects which are to be learned in a window, since we will be working with only those objects while creating scripts.

How do you view the contents of the GUI map?

GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.

What is the difference between the GUI map and GUI map files?

The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created. GUI Map file is a file which contains the windows and the objects learned by the WinRunner with its logical name and their physical description.

If the object does not have a name then what will be the logical name?

If the object does not have a name then the logical name could be the attached text.

What is meant by the logical name of the object?

An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

What is the purpose of loading WinRunner Add-Ins?

Add-Ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the function generator, and while executing the script only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

What are the reasons that WinRunner fails to identify an object on the GUI?

WinRunner fails to identify an object in a GUI due to various reasons. The object is not a standard Windows object. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

Have you integrated your automated scripts from TestDirector?

When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. While creating a test case in TestDirector, we can specify whether the script is automated or manual. If it is an automated script, then TestDirector will build a skeleton for the script that can later be modified into one which could be used to test the AUT.

What are the different modes of recording?

There are two types of recording in WinRunner. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

What is the use of Test Director software?

TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release

How do you analyze results and report the defects?

Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

How do you run your test scripts?

We run tests in Verify mode to test your application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

Have you performed debugging of the scripts?

Yes, I have performed debugging of scripts. We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step Into, Step out functionalities provided by the WinRunner.

How does WinRunner evaluate test results?

Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

Have you created test scripts and what is contained in the test scripts?

Yes I have created test scripts. It contains the statement in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.

How does WinRunner recognize objects on the application?

WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

What is contained in the GUI map?

WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

Explain WinRunner testing process?

The WinRunner testing process involves six main stages:
Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested
Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
Debug Test: run tests in Debug mode to make sure they run smoothly
Run Tests: run tests in Verify mode to test your application.
View Results: determines the success or failure of the tests.
Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

How have you used WinRunner in your project?

Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.

Write and explain a compiled module?

Write TSL functions for the following interactive modes:
i. Creating a dialog box with any message you specify, and an edit field.
ii. Create dialog box with list of items and message.
iii. Create dialog box with edit field, check box, and execute button, and a cancel button.
iv. Creating a browse dialog box from which user selects a file.
v. Create a dialog box with two edit fields, one for login and another for password input.
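As one possible sketch (not the only valid answer), a compiled module could wrap some of these dialogs in reusable functions. create_input_dialog and create_browse_file_dialog are standard TSL functions; the function names, messages and file filter below are illustrative assumptions.

# Illustrative compiled module: reusable wrappers around two of the dialogs above.
public function ask_user_name()
{
    auto name;
    name = create_input_dialog("Please enter your name:");   # dialog with a message and an edit field
    return (name);
}

public function pick_input_file()
{
    auto file;
    file = create_browse_file_dialog("*.txt");   # browse dialog; the "*.txt" filter is an assumption
    return (file);
}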

Why do you use the reload function?

If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of unload and load).
The syntax of the reload function is:
reload ( module_name [ ,1|0 ] [ ,1|0 ] );
The module_name is the name of an existing compiled module.
Two additional optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded. 1 indicates that the module will close automatically. 0 indicates that the module will remain open.
(Default = 0)

How do you load and unload a compiled module?

In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the function until you quit WinRunner or unload the compiled module.
You can load a module either as a system module or as a user module. A system module is generally a closed module that is invisible to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).
load (module_name [,1|0] [,1|0] );
The module_name is the name of an existing compiled module.
Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open.
(Default = 0)
The unload function removes a loaded module or selected functions from memory.
It has the following syntax:
unload ( [ module_name | test_name [ , "function_name" ] ] );
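For example (a sketch only; the module path and the flt_book_ticket function are assumptions, not names from WinRunner itself):

load ("C:\\MyAppFolder\\flt_lib", 0, 0);                  # load as a user module (0) and keep it open (0)
flt_book_ticket("LON", "NYC");                            # call a function defined in the compiled module
unload ("C:\\MyAppFolder\\flt_lib", "flt_book_ticket");   # remove only this function from memory
unload ("C:\\MyAppFolder\\flt_lib");                      # or remove the whole module from memory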

How do you declare arrays?

The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.
class array_name [ ] [=init_expression]
The array class may be any of the classes used for variable declarations (auto, static, public, extern).
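A small illustrative example (the array name and values are arbitrary); TSL arrays are associative, so elements can simply be assigned and then visited with a for...in loop:

static browsers[];                     # static array, local to this test or module
browsers[1] = "Internet Explorer";
browsers[2] = "Netscape";

for (i in browsers)                    # iterate over the indices of the array
    report_msg("Supported browser: " & browsers[i]);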

How do you declare constants?

The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.
The syntax of this declaration is: [class] const name [= expression];

What does auto, static, public and extern variables means?

auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.
static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.
public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.
extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.
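A short sketch showing these declarations together (the names, values and function are arbitrary illustrations):

public const MAX_WAIT = 10;                 # constant; remains defined until you exit WinRunner
public app_name = "Flight Reservation";     # public variable, visible to all tests and modules
static run_count = 0;                       # static variable, local to this test or module

public function count_runs()
{
    auto msg;                               # auto variable, exists only while the function runs
    run_count = run_count + 1;
    msg = "Run number " & run_count & " of " & app_name;
    report_msg(msg);
}

# In another test or module, the public variable can be referenced with:
# extern app_name;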

What is the use of treturn and texit statements in the test script?

The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.
Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.
The syntax is: treturn [( expression )]; texit [( expression )];
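For example, inside a called test (the window name and return values are illustrative; E_OK is the standard TSL success code returned by win_exists):

if (win_exists("Login", 10) != E_OK)        # wait up to 10 seconds for the Login window
    treturn ("login window missing");       # hand control and a value back to the calling test

# ... remaining test steps ...
treturn ("passed");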

What is the use of putting call and call_close statements in the test script?

You can use two types of call statements to invoke one test from another:
A call statement invokes a test from within another test.
A call_close statement invokes a test from within a script and closes the test when the test is completed.
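For example (the test path and parameter are illustrative):

order_num = 3;                                    # illustrative parameter value
call "C:\\tests\\open_order" (order_num);         # runs the test; it stays open in WinRunner
call_close "C:\\tests\\open_order" (order_num);   # runs the test and closes it when it completes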

What is the use of function generator?

The Function Generator provides a quick, error-free way to program scripts. You can:
Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.
Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.
Add Customization functions that enable you to modify WinRunner to suit your testing environment.

Which TSL function will you use to compare two files?

We can compare 2 files in WinRunner using the file_compare function. Syntax: file_compare (file1, file2 [, save file]);
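For example (the file paths are arbitrary, and it is assumed here that E_OK is returned when the files match):

rc = file_compare("C:\\out\\expected.txt", "C:\\out\\actual.txt", "C:\\out\\diff.txt");
if (rc != E_OK)
    report_msg("Files do not match - see diff.txt");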

What is the purpose of tl_step command?

Used to determine whether sections of a test pass or fail.
Syntax: tl_step(step_name, status, description);
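For example (the step name and pass condition are illustrative; a status of 0 is treated as pass and any other value as fail):

if (total == expected_total)
    tl_step("verify total", 0, "Total matches the expected value");        # pass
else
    tl_step("verify total", 1, "Total does not match the expected value"); # fail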

What is the command to invoke an application?

invoke_application is the function used to invoke an application.
Syntax: invoke_application(file, command_option, working_dir, SHOW);
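For example (the paths are illustrative; SW_SHOW is one of the standard Windows show-mode constants):

invoke_application("C:\\Windows\\notepad.exe", "", "C:\\Windows", SW_SHOW);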

How do you write messages to the report?

To write message to a report we use the report_msg statement
Syntax: report_msg (message);

What is the difference between script and compile module?

A test script contains the executable code in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable on their own.
WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of Compiled Module.
By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:
call cso_init();
call( "C:\\MyAppFolder\\" & "app_init" );
Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:
reload ("C:\\MyAppFolder\\" & "flt_lib");
or load ("C:\\MyAppFolder\\" & "flt_lib");

What is a compiled module?

A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.
Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.

How do you use the DataDriver Wizard?

You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.
To create a data-driven test:
• If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
• Choose Tools - DataDriver Wizard.
• If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.
• The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.
• In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, table.
• At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.
• Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate rows in the table. Note that you can also add these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.
2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Data Junction is not automatically included in your WinRunner package; to purchase it, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.
3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.
• The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace.
Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test and adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.
• The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select Show data table now.
2. To perform the tasks specified in previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
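The statements that the wizard adds typically produce a loop along the following lines (a sketch only: the table name, window and field names are placeholders, ddt_set_row is assumed as the row-stepping call, and set_window/edit_set are standard TSL calls used for illustration):

table = "default.xls";                            # data table assigned by the wizard
ddt_open(table, DDT_MODE_READ);
ddt_get_row_count(table, row_count);

for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                        # step to the current data row
    set_window("Flight Reservation", 5);
    edit_set("Name:", ddt_val(table, "Name"));    # recorded statement parameterized with ddt_val
}

ddt_close(table);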

How do you handle object exceptions?

During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle, but they can disrupt the test run and distort results. You can use exception handling to detect a change in a property of a GUI object during the test run, and to recover test execution by calling a handler function and continuing with the test.

What are the steps of creating a data driven test?

The steps involved in data driven testing are:
Creating a test
Converting to a data-driven test and preparing a database
Running the test
Analyzing the test results.

Which TSL functions will you use for searching for text on the window?

find_text ( string, out_coord_array, search_area [, string_def ] );
win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

How do you get text from a screen area?

We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

How do you get text from an object/window?

We use obj_get_text (logical_name, out_text) function to get the text from an object
We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
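For example (the window and object names are placeholders only):

win_get_text("Flight Reservation", full_text);                    # all the text in the window
win_get_text("Flight Reservation", area_text, 10, 10, 200, 40);   # text inside a screen area
obj_get_text("Order No:", order_no);                              # text of a single object
report_msg("Order number read from the screen: " & order_no);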

What check points you will use to read and check text on the GUI and explain its syntax?

• You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.
• You can use a text checkpoint to:
o Read text from a GUI object or window in your application, using obj_get_text and win_get_text
o Search for text in an object or window, using win_find_text and obj_find_text
o Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
o Click on text in an object or window, using obj_click_on_text and win_click_on_text

How do you create parameterized SQL commands?

A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:
SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)
SELECT defines the columns to include in the query.
FROM specifies the path of the database.
WHERE (optional) specifies the conditions, or filters to use in the query. Departure is the parameter that represents the departure point of a flight.
Day_Of_Week is the parameter that represents the day of the week of a flight.
When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:
db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);
The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.

How do you parameterize database check points?

When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

How do you convert a database file to a text file?

You can use Data Junction to create a conversion file which converts a database to a target text file.

How do you record a data driven test?

We can create a data-driven testing using data from a flat file, data table or a database.
Using Flat File: we actually store the data to be used in a required format in the file. We access the file using the file manipulation commands, read data from the file and assign the data to variables.
Data Table: It is an excel file. We can store test data in these files and manipulate them. We use the ‘ddt_*’ functions to manipulate data in the data table.
Database: we store test data in the database and access these data using ‘db_*’ functions

How do you create ODBC query?

We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file will contain the connection string and the SQL statement.

How do you handle ActiveX and Visual basic objects?

WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; these add-ins provide a set of functions to work on ActiveX and VB objects.

How do you modify the expected results of a GUI checkpoint?

We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in the update mode.

How do you edit the expected value of an object?

We can modify the expected value of the object by executing the script in the Update mode. We can also manually edit the gui*.chk file, which contains the expected values and resides under the exp folder, to change the values.

How do you edit checklist file and when do you need to edit the checklist file?

WinRunner has an edit checklist file option under the create menu. Select the Edit GUI Checklist to modify GUI checklist file and Edit Database Checklist to edit database checklist file. This brings up a dialog box that gives you option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is Test specific or a shared one. Select the checklist file, click OK which opens up the window to edit the properties of the objects.

What do you verify with the sync point for screen area and what command it generates, explain syntax?

For screen area verification we actually capture the screen area into a bitmap and verify the application screen area against the bitmap file during execution. Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);

In a situation where both the obligatory and optional properties cannot uniquely identify an object, what method does WinRunner apply?

In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.

What is the name of custom class in WinRunner and what methods it applies on the custom objects?

WinRunner learns custom class objects under the generic object class. WinRunner records operations on custom objects using obj_ statements.

How do you handle custom objects?

A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic object class. WinRunner records operations on custom objects using obj_mouse_ statements.
If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

What is the purpose of location indicator and index indicator in GUI map configuration?

In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
A location selector uses the spatial position of objects.
The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description.
An index selector uses a unique number to identify the object in a window.
The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.

When are the optional properties learned?

An optional property is used only if the obligatory properties do not provide unique identification of an object.

What is the purpose of obligatory and optional properties of the objects?

For each class, WinRunner learns a set of default properties. Each default property is classified obligatory or optional.
1. An obligatory property is always learned (if it exists).
2. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?

You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );

What do you verify with the sync point for object/window property and what command it generates, explain syntax?

• Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
• You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.
• You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:
obj_exists ( object [, time ] ); win_exists ( window [, time ] );
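For example, to wait up to 10 seconds for a window before acting on it (the window and button names are placeholders; E_OK is the standard TSL success code, and set_window/button_press are standard TSL calls used here for illustration):

if (win_exists("Flight Reservation", 10) == E_OK)   # wait up to 10 seconds for the window
{
    set_window("Flight Reservation", 1);
    button_press("Insert Order");
}
else
    report_msg("Window did not appear within 10 seconds");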

What do you verify with the database check point custom and what command it generates, explain syntax?

• When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.
• You can create a custom check on a database in order to:
o check the contents of part or the entire result set
o edit the expected results of the contents of the result set
o count the rows in the result set
o count the columns in the result set
• You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

How do you handle dynamically changing area of the window in the bitmap checkpoints?

The Difference between bitmaps option in the Run tab of the General Options dialog defines the minimum number of pixels that constitute a bitmap mismatch.

What do you verify with the database checkpoint default and what command it generates, explain syntax?

• By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.
• When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.
• You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or where no matching records are found.
• You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check(checklist_file, expected_result);
• You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.
Syntax: db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );
ChecklistFileName ---- A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.
SuccessConditions ----- Contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber --- An out parameter returning the number of records in the database.

What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?

• You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).
• To capture an area of the screen as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.
2. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.
3. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.
4. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width, height );

What do you verify with the bitmap checkpoint for object/window and what command it generates, explain syntax?

• You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.
• When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
• Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.
• To capture a window or object as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.
2. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( object, bitmap, time );
3. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );
4. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be: win_check_bitmap ("Flight Reservation", "Img2", 1);
5. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1);
Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

What information is contained in the checklist file and in which file expected results are stored?

The checklist file contains information about the objects and the properties of each object being verified.
The gui*.chk file contains the expected results and is stored in the exp folder of the test.

What do you verify with the GUI checkpoint for multiple objects and what command does it generate? Explain the syntax.

To create a GUI checkpoint for two or more objects:
• Choose Create - GUI Checkpoint - For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.
• Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
• To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
• The pointing hand remains active. You can continue to choose objects by repeating the previous step for each object you want to check.
• Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.
• The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
• To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores them in the expected results folder. A win_check_gui statement is inserted in the test script.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );
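As an illustration, a multiple-object checkpoint on the Flight Reservation sample window might appear in the script as follows (the checklist name "list1.ckl" and expected results folder "gui1" are typical defaults, shown here only as an assumption):
set_window ("Flight Reservation", 1);
win_check_gui ("Flight Reservation", "list1.ckl", "gui1", 1);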

What do you verify with the GUI checkpoint for object/window and what command does it generate? Explain the syntax.

• You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.
• Creating a GUI Checkpoint using the Default Checks
o You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
o To create a GUI checkpoint using default checks:
1. Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or win_check_gui statement. Syntax: obj_check_gui ( object, checklist, expected_results_file, time );
• Creating a GUI Checkpoint by Specifying which Properties to Check
• You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.
• To create a GUI checkpoint by specifying which properties to check:
o Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
o Double-click the object or window. The Check GUI dialog box opens.
o Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
o Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );

What do you verify with the GUI checkpoint for single property and what command does it generate? Explain the syntax.

You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
button_check_info
scroll_check_info
edit_check_info
static_check_info
list_check_info
win_check_info
obj_check_info
Syntax: button_check_info (button, property, property_value );
edit_check_info ( edit, property, property_value );
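A few hedged examples of single-property checks (object names and expected values are illustrative and assume the Flight Reservation sample application):
set_window ("Flight Reservation", 1);
button_check_info ("Insert Order", "enabled", 1);   # verify the push button is enabled
edit_check_info ("Date of Flight:", "focused", 0);  # verify the edit field does not have focus
list_check_info ("Fly From:", "count", 4);          # verify the list contains four items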

How do you maintain the document information of the test scripts?

Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

What is parameterizing?

In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

What are the synchronization points?

Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window
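A hedged sketch of a synchronization point in TSL, based on the Flight Reservation sample; the object name, property, and value are assumptions:
set_window ("Flight Reservation", 10);
button_press ("Insert Order");
# wait up to 30 seconds for the "Insert Done..." message object to become enabled
obj_wait_info ("Insert Done...", "enabled", 1, 30);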

What are data driven tests?

When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.
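A hedged sketch of such a loop using the ddt_ functions; the table name "default.xls" and the column "Name" are assumptions:
table = "default.xls";                      # data table created by the DataDriver Wizard (name assumed)
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                 # make row i the active row
    set_window ("Flight Reservation", 5);
    edit_set ("Name:", ddt_val (table, "Name"));   # read the "Name" column for this row
}
ddt_close (table);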

What is a checkpoint and what are different types of checkpoints?

Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.
You can add four types of checkpoints to your test scripts:
1. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
2. Bitmap checkpoints take a snapshot of a window or area of your application and compare this to an image captured in an earlier version.
3. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
4. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.

What are the two modes of recording?

There are 2 modes of recording in WinRunner
1. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
2. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

What are the virtual objects and how do you learn them?

• Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests.
• Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.
To define a virtual object using the Virtual Object wizard:
1. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
2. In the Class list, select a class for the new virtual object. For a list class, select the number of visible rows that are displayed in the window; for a table class, select the number of visible rows and columns. Click Next.
3. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.
4. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.
5. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice.
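Once the virtual object has been defined, recording a click on the bitmap produces an ordinary Context Sensitive statement instead of coordinate-based win_mouse_click calls; for example (window and logical names assumed):
set_window ("Calculator", 2);
button_press ("virtual_push_button");   # the bitmap area is now treated as a push button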

How do you find out which is the start up file in WinRunner?

The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.

What is the purpose of different record methods 1) Record 2) Pass up 3) As Object 4) Ignore.?

1) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)
2) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.
3) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were object class.
4) Ignore instructs WinRunner to disregard all operations performed on the class.

What is the purpose of GUI spy?

Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.

How do you make the configuration and mappings permanent?

The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
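A hedged sketch of what such statements in a startup test might look like; the GUI map path and the custom class name are assumptions:
# load the application's GUI map and map a custom class to a standard class
GUI_load ("C:\\guimaps\\flight.gui");
set_class_map ("AcmeGrid", "object");   # treat the custom AcmeGrid class as the generic object class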

What is the purpose of GUI map configuration?

GUI Map configuration is used to map a custom object to a standard object.

How do you configure GUI map?

1. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.
2. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.
3. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

How do you filter the objects in the GUI map?

GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options.
1. Logical name displays only objects with the specified logical name.
2. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
3. Class displays only objects of the specified class, such as all the push buttons.

How do you clear a GUI map files?

We can clear a GUI Map file using the Clear All option in the GUI Map Editor.

How do you select multiple objects during merging the files?

Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.

How do you copy and move objects between different GUI map files?

We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
1. Choose Tools - GUI Map Editor to open the GUI Map Editor.
2. Choose View - GUI Files.
3. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
4. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
5. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.
6. Click Copy or Move.
7. To restore the GUI Map Editor to its original size, click Collapse.

How do you suppress a regular expression?

We can suppress the regular expression of a window by replacing the regexp_label property with the label property.

What is the purpose of regexp_label property and regexp_MSW_class property?

The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description.
The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

How WinRunner handles varying window labels?

We can handle varying window labels using regular expressions. WinRunner uses two hidden properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.
i. The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description.
ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
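For example, a window whose label varies (say, a document name appended to the title) might be given a partial GUI map description like the one below; the names are illustrative, and note that, as mentioned later in this document, regular expressions in GUI maps begin with "!":
FlightWindow:
{
 class: window,
 regexp_label: "!Flight Reservation.*"
}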

When it is appropriate to change physical description?

Changing the physical description is necessary when the property value of an object changes.

When do you feel you need to modify the logical name?

Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long

How do you modify the logical name or the physical description of the objects in GUI map?

You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.

How do you identify which files are loaded in the GUI map?

The GUI Map Editor has a GUI File drop-down list displaying all the GUI map files loaded into memory.

What different actions are performed by find and show button?

To locate an object from the GUI map file in the application, select the object in the GUI Map Editor and click the Show button; the selected object blinks in the application.
To locate an object from the application in a GUI map file, click the Find button, which lets you point to the object in the application. When the object is selected, if it has been learned into the GUI map file, it is highlighted in the GUI Map Editor.

How do you find an object in a GUI map?

The GUI Map Editor provides Find and Show buttons.
To locate an object from the GUI map file in the application, select the object in the GUI Map Editor and click the Show button; the selected object blinks in the application.
To locate an object from the application in a GUI map file, click the Find button, which lets you point to the object in the application. When the object is selected, if it has been learned into the GUI map file, it is highlighted in the GUI Map Editor.

CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN

Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:
• If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
• If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.
• If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.

Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, each issue's status is confirmed: either the report is reproducible as-is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or in the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that features previously proven to function correctly are still working as expected.

Database Testing
Items to check when testing a database
What to test    | Environment              | Tools/technique
Search results  | System test environment  | Black Box and White Box techniques
Response time   | System test environment  | Syntax Testing / Functional Testing
Data integrity  | Development environment  | White Box testing
Data validity   | Development environment  | White Box testing

Increase Capacity Testing

When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When you test the ASP pages, you may want to create a page similar to the original that simulates the same items on the ASP page and have it send the information to a test bed with a process that produces just a small data output. By doing this, your processor is still stressing the system but is not taking up bandwidth by sending the HTML code along the full path. This will not stress the entire code, but it gives you a basis from which to work.

Dividing the requests per second by the total number of users or threads shows how efficiently the server handles each user, and it will tell you at what point the server starts becoming less efficient at handling the load. Let's look at an example. Say your test with 50 users shows your server can handle 5 requests per second, with 100 users it is 10 requests per second, with 200 users it is 15 requests per second, and eventually with 300 users it is 20 requests per second. Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Let's look at the ratios:
05/50 = 0.1
10/100 = 0.1
15/200 = 0.075
20/300 = 0.067
From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process and does give you some indication of how much leeway you have to handle expected peaks.

Stateful testing
When you use a Web-enabled application to set a value, does the server respond correctly later on?

Privilege testing
What happens when an everyday user tries to access a control that is authorized only for administrators?

Speed testing
Is the Web-enabled application taking too long to respond?

Boundary Test
Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values. It is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in the processing; if so, it is useful to determine how to drive it through extremes and special conditions such as zero or an overflow condition.

Boundary timing testing
What happens when your Web-enabled application request times out or takes a really long time to respond?

Regression testing
Did a new build break an existing function? Repeat testing after changes to manage the risks related to product enhancement.
A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results.
Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs which have been fixed once should not turn up again in subsequent program versions.
Regression testing: after every software modification, or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and existing functions are all working correctly.
Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional remain intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.

Having a DB checkpoint, it shows the current values in the form, but it does not show the values that were saved in the table

This looks like it is happening because the data has been written to the database after your checkpoint, so you have to do a runtime record check (Create > Database Checkpoint > Runtime Record Check). You may also have to perform some customization in TSL if the data displayed in the application is in a different format than the data in the database. For example, converting radio buttons to a database-readable form involves the following:

# Flight Reservation
set_window ("Flight Reservation", 2);
# edit_set ("Date of Flight:", "06/08/02");

# retrieve the three button states
button_get_state ( "First", first);
button_get_state ( "Business", bus);
button_get_state ( "Economy", econ);

# establish a variable with the correct numeric value
# based on which radio button is set
if (first)
service="1";

if (bus)
service="2";

if (econ)
service="3";

set_window("Untitled - Notepad",3);

edit_set("Report Area",service);

db_record_check("list1.cvr", DVR_ONE_MATCH,record_num);

the MSW_id value sometimes changes, rendering the GUI map useless

MSW_id values will continue to change as long as your developers are modifying your application. Having dealt with this, I determined that each MSW_id shifted by the same amount, so I was able to modify the entries in the GUI map fairly easily and continue testing.
Alternatively, instead of using the MSW_id, use the "location" property. If you use the GUI Spy it will give you every detail it can; then add or remove what you don't want.

How to do text matching?

You could try embedding it in an if statement. If/when it fails, use a tl_step statement to report the result and then call texit to leave the test. Another idea would be to use win_get_text or web_frame_get_text to capture the text of the object and then do a comparison (using the match function) to determine its existence.
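A hedged sketch of the second approach; the window name and the expected text are assumptions:
set_window ("Flight Reservation", 5);
win_get_text ("Flight Reservation", text);        # capture all the text in the window
if (match (text, "Insert Done"))                  # match() returns the position of the pattern, or 0 if absent
    tl_step ("text check", PASS, "Expected text was found.");
else
{
    tl_step ("text check", FAIL, "Expected text was not found.");
    texit ("expected text missing");              # leave the test
}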

User-defined function that would write to the Print-log as well as write to a file

function writeLog(in strMessage)
{
    # log file path kept from the original example; replace with your own
    static logFile = "C:\\FilePath\\...";
    file_open(logFile, FO_MODE_APPEND);        # open the log file in append mode
    file_printf(logFile, "%s\n", strMessage);  # write the message to the file
    printf(strMessage);                        # write the message to the print log
    file_close(logFile);
}

How to break out of an infinite loop

set_window("Browser Main Window",1);
text="";
start = get_time();
while(text!="Done")
{
statusbar_get_text("Status Bar",0,text);
now = get_time();
if ( (now-start) >= 60 )  # number of seconds after which you want to break
{
break;
}
}

Read and write to the registry using the Windows API functions

function space(isize)
{
auto s;
auto i;
for (i =1;i<=isize;i++)
{
s = s & " ";

}
return(s);
}

load_dll("c:\\windows\\system32\\ADVAPI32.DLL");
extern long RegDeleteKey( long, string<1024> );
extern long RegCloseKey(long);
extern long RegQueryValueExA(long,string,long,long,inout string<1024>,inout long );
extern long RegOpenKeyExA(long,string,long ,long,inout long);
extern long RegSetValueExA(long,string,long,long,string,long);

MainKey = 2147483649; # HKEY_CURRENT_USER
SubKey = "Software\\TestConverter\\TCEditor\\Settings";
# This is where you set your subkey path
const ERROR_SUCCESS = 0;

const KEY_ALL_ACCESS = 983103;
ret = RegOpenKeyExA(MainKey, SubKey, 0, KEY_ALL_ACCESS, hKey); # open the key
if (ret==ERROR_SUCCESS)
{
cbData = 256;
tmp = space(256);
KeyType = 0;
ret = RegQueryValueExA(hKey,"Last language",0,KeyType,tmp,cbData); # replace "Last language" with the key you want to read
}
pause (tmp);
NewSetting = "SQABASIC";
cbData = length(NewSetting) + 1;
ret = RegSetValueExA(hKey,"Last language",0,KeyType,NewSetting,cbData);
# replace "Last language" with the key you want to write

cbData = 256;
tmp = space(256);
KeyType = 0;
ret = RegQueryValueExA(hKey,"Last language",0,KeyType,tmp,cbData);
# verifies you changed the key

pause (tmp);

RegCloseKey(hKey); # close the key

Loading multiple GUI maps from an array

#GUIMAPS
static guiname1 = "MMAQ_guimap.gui";
static guiname2 = "SSPicker_guimap.gui";
static guiname3 = "TradeEntry.gui";
static guiLoad[] = {guiname1, guiname2, guiname3};

Then I just call the function:
#LOAD GUIMAP FILES VIA THE LOAD GUIMAP FUNCTION (this closes ALL open guimaps)
rc = loadGui(guiLoad);
if (rc != "Pass") #Check success of the Gui_Load
{
tl_step("Guiload",FAIL,"Failed to load Guimap(s) for "&testname(getvar));
#This line to test log
texit("Failed to load Guimap(s) for "&testname(getvar));
}

public function loadGui(inout guiLoad[])
{
static i;
static rc;

# close any temp GUI map files
GUI_close("");
GUI_close_all();

for(i in guiLoad)
{
rc = (GUI_load(GUIPATH & guiLoad[i]));
if ((rc != 0) && (rc != E_OK)) #Check the Gui_Load
{
return ("Failed to load " &guiLoad[i]);
}
}
return ("Pass");
}

Text Field Validations

Need to validate text fields against
1. Null
2. Not Null.
3. whether it allows any Special Characters.
4. whether it allows numeric contents.
5. Maximum length of the field etc.

1) From the requirements, find out what the behaviour of the text field in
question should be. Things you need to know are:
what should happen if the field is left blank
what special characters are allowed
is it an alpha, numeric, or alphanumeric field, etc.

2) Write manual tests for doing what you want. This will create a structure
to form the basis of your WR tests.

3) Now create your WR scripts. I suggest that you use data-driven tests and
use Excel spreadsheets for your inputs instead of having user input.
For example, the following structure will test whether the text field will
accept special characters (a TSL sketch follows the outline):

open the data table
for each value in the data table
get value
insert value into text field
attempt to use the value inserted
if result is as expected
report pass
else
report fail
next value in data table

in this case the data table will contain all the special characters
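A hedged TSL realization of the structure above; the window, field, table path, and column name are assumptions:
table = "C:\\data\\special_chars.xls";            # hypothetical data table holding the special characters
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);
    value = ddt_val (table, "SpecialChar");       # "SpecialChar" is a hypothetical column
    set_window ("Customer Details", 5);
    edit_set ("Name:", value);                    # insert the value into the text field
    edit_get_text ("Name:", actual);              # read it back to attempt to use it
    if (actual == value)
        tl_step ("special characters", PASS, "Field accepted \"" & value & "\".");
    else
        tl_step ("special characters", FAIL, "Field rejected \"" & value & "\".");
}
ddt_close (table);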

Object name Changing dynamically?

1.
logicalname:"chkESActivity"
{
class: check_button,
MSW_class: html_check_button,
html_name: chkESActivity,
part_value: 90
}
2.
logical name "chkESActivity_1"

{
class: check_button,
MSW_class: html_check_button,
html_name: chkESActivity,
part_value: 91
}


Replace with:

Logical:"CheckBox" # you give any name as the logical name
{
class: check_button,
MSW_class: html_check_button,
html_name: chkESActivity,
part_value: "![0-9][0-9]" # changes were done here
}

You can then use any of the check box commands, for example:
button_set("CheckBox", ON);  # checks any check box whose part_value ranges from 00 to 99

How to get the information from the status bar without doing any activity/click on the hyperlink?

You can use the statusbar_get_text("Status Bar",0,text); function.
The "text" variable will contain the status bar text.

or

web_cursor_to_link ( link, x, y );

link The name of the link.
x,y The x- and y-coordinates of the mouse pointer when moved to a link,
relative to the upper left corner of the link.

BitMap or GUI Checkpoints

DO NOT use bitmap or GUI checkpoints for dynamic verification. These checkpoints are purely for static verifications. There are, of course, work-arounds, but they are mostly not worth the effort.

How to check whether a specific icon is highlighted (focused) or not?

set_window("Name of the window");
obj_get_info("Name of the object", "focused", out_value);  # retrieve the focused property into out_value

Check out_value and proceed accordingly.

How to force WR to learn the sub-items on a menu...?

If WR is not learning sub-items, the easy way is to manually add those sub-items into the GUI map; of course, you need to study the menu description and always add the PARENT menu name for that particular sub-menu.

How to use a regular expression in the physical description of a window in the GUI map?

Several web page windows have similar html names - they all end in or contain "| MyCompany". The GUI map has saved the following physical description for one of these windows:
{
class: window,
html_name: "Dynamic Name | MyCompany",
MSW_class: html_frame
}

The "Dynamic Name " part of the html name changes with the different pages.

Replace:

{
class: window,
html_name: "!.*| MyCompany",
MSW_class: html_frame
}

Regular expressions in GUI maps always begin with "!".

How can you make a single WinRunner script that supports multiple languages?

Actually, you can have scripts that run for different locales. I have a set of scripts that run for Japanese as well as English locales. The idea is to have objects recorded in the GUI map with a locale-independent physical description. This can be achieved in two ways.
1. After recording the object in the GUI map, inspect the description and ensure that no language-specific properties are used. For example, the html_name property for an object of class html_text_link could be based on the text. You can remove these language-dependent properties if it doesn't affect your object recognition. If it does, you need to find another property for the object that is locale independent. This new property may be something that's already there, or you may need to create it. This leads to the next option.
2. Have developers assign a locale-independent property like 'objname' or something similar to all objects that you use in your automated scripts. Now, modify your GUI map description for the particular object to look for this property instead of the standard locale-dependent properties recorded by WR (these default properties are in GUI Map Configuration).
or
You could also use a GUI map for each locale. Prefix the GUI map name with the locale (e.g. jpn_UserWindow.gui and enu_UserWindow.gui) and load the correct map based on the current machine locale. Specifically, you can use the get_lang() function to obtain the current language setting, then load the appropriate GUI map in your init script. Take a look at the sample scripts supplied with WinRunner (for the flight application). I think those scripts are created for both English and Japanese locales.

After taking care of different GUI maps for different locales, the script also needs some modification. If you are scripting in English and then moving on to any other language (say Japanese), all the user inputs will be in English. Because of this the script will fail, as it is expecting Japanese input for a Japanese-language run. Instead, assign all the user inputs to variables and use them wherever the script needs them. These variables have to be assigned (perhaps after the driver script) before you call the script you want to run. You should have different variable scripts for different languages. Depending on the language you want to run, call the appropriate variable script file. This will help you run the same script for different locales.

After clicking on the "login" button, the web application opens another window; how do you check whether that page opened or not?

When you're expecting "Window1" to come up after clicking on Login, capture the window in the GUI map. No two windows in a web-based application can have the same html_name property, hence this is the property to check.

First try a simple win_exists("window1") in an IF condition.

If that doesn't work, try the function:

win_exists("{class: window, MSW_class: html_frame, html_name: window1}");

How do you have WinRunner insert yesterday's date into a field in the application?

1) Use get_time to get the PC system time in seconds since 01/01/1970

2) Subtract 86400 (the number of seconds in a day) from it

3) Use time_str to convert the result into a date format

4) If the format of the returned date is not correct, use string manipulation to get
the format you require

5) Insert the date into your application


Alternatively you could try the following :

1) In an Excel datasheet create a column with an appropriate name, and in
the first cell of the column use the Excel formula '=TODAY() - 1'

2) Format the cell to give you the required date format

3) Use the ddt_ functions to read the date from the Excel datasheet

4) Insert the retrieved date into your application
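A hedged sketch of the first approach; the layout returned by time_str and the target field are assumptions, so adjust the string handling to the date format your application expects:
yesterday = get_time() - 86400;                 # seconds since 01/01/1970, minus one day
date_text = time_str (yesterday);               # e.g. "Thu Jun 06 14:32:10 2002" (format assumed)
mon  = substr (date_text, 5, 3);                # "Jun"
day  = substr (date_text, 9, 2);                # "06"
year = substr (date_text, 21, 4);               # "2002"
set_window ("Flight Reservation", 5);
edit_set ("Date of Flight:", mon & " " & day & " " & year);   # field name and final format are illustrative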

How to write an email address validation script in TSL?

public function IsValidEMAIL(in strText)
{
auto aryEmail[], aryEmail2[], n;


n = split(strText, aryEmail, "@");
if (n != 2)
return FALSE;

# Ensure the string "@MyISP.Com" does not pass...
if (!length(aryEmail[1]))
return FALSE;

n = split(aryEmail[2], aryEmail2, ".");
if (n < 2)
return FALSE;
# Ensure the string "Recipient@." does not pass...
if (!(length(aryEmail2[1]) * length(aryEmail2[2])))
return FALSE;

return TRUE;
}

How do you handle TSL exceptions?

Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.
The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.
Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file
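A hedged sketch of defining a TSL exception; the handler signature, the error code, and the application path are assumptions, so check the documentation for your WinRunner version:
# handler invoked when the watched error code is returned by a TSL function
public function recover_app (in func, in rc)    # parameter order is an assumption
{
    tl_step ("tsl exception", FAIL, func & " returned " & rc & "; restarting the application.");
    invoke_application ("C:\\app\\myapp.exe", "", "C:\\app", SW_SHOW);   # hypothetical path
}

define_tsl_exception ("AppCrash", "recover_app", E_NOT_FOUND);   # E_NOT_FOUND is only an example error code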

How do you handle pop-up exceptions?

A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception we make WinRunner learn the window and also specify a handler for the exception. The handler can be one of the following:
Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.
User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.

How do you handle unexpected events and errors?

WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.
WinRunner enables you to handle the following types of exceptions:
Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.
TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.
Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.
Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

What are the three modes of running the scripts?

WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process.
Verify
Use the Verify mode to check your application.
Debug
Use the Debug mode to help you identify bugs in a test script.
Update
Use the Update mode to update the expected results of a test or to create a new expected results folder.

How to use the physical description directly, WITHOUT the GUI map

It's easy: just take the description straight out of the GUI map, curly braces and all, put it into a variable (or pass it as a string), and use that in place of the object name.

button_press ( "btn_OK" );
becomes
button_press ( "{class: push_button, label: OK}" );

How to get the resolution settings ?

Use get_screen_res(x,y) to get the screen resolution in WR7.5.
or
Use get_resolution (Vert_Pix_int, Horz_Pix_int, Frequency_int) in WR7.01

WinRunner test script for checking all the links at a time

location = 0;
set_window("YourWindow",5);

while(obj_exists((link = "{class: object,MSW_class: html_text_link,location: "
& location & "}"))== E_OK)
{
obj_highlight(link); web_obj_get_info(link,"name",name);
web_link_valid(link,valid);
if(valid)
tl_step("Check web link",PASS,"Web link \"" & name & "\" is valid.");
else
tl_step("Check web link",FAIL,"Web link \"" & name & "\" is not valid.");
location++;
}

How to use WinRunner to test the login screen ?

When you enter a wrong id or password, you will get a dialog box.
1. Record this dialog box.
2. Use win_exists to check whether the dialog box exists or not.
3. Playback: enter a wrong id or password; if win_exists finds the dialog box,
your application is working correctly.
Enter a good id and password; if win_exists does not find the dialog box,
your application is working correctly.
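A hedged sketch using the idea above; the window and object names, the error dialog title, and the timeouts are assumptions:
set_window ("Login", 10);
edit_set ("Agent Name:", "baduser");
edit_set ("Password:", "badpassword");            # a true password control may need password_edit_set with an encrypted value
button_press ("OK");
if (win_exists ("Flight Reservations", 5) == E_OK)   # title of the error dialog is assumed
    tl_step ("login check", PASS, "Error dialog displayed for invalid credentials.");
else
    tl_step ("login check", FAIL, "No error dialog displayed for invalid credentials.");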

How to use WinRunner to check whether a record was updated, deleted, or inserted?

Using WinRunner's checkpoint features: Create - Database Checkpoint - Runtime Record Check

For new users, how to use WinRunner to test software applications automatically?

A: The following steps may be of help to you when automating tests
1. MOST IMPORTANT - write a set of manual tests to test your application - you cannot just jump in with WR and expect to produce a set of meaningful tests. Also as you will see from the steps below this set of manual tests will form your plan to tackle automation of your application.
2. Once you have a set of manual tests look at them and decide which ones you can automate using your current level of expertise. NOTE that there will be tests that are not suitable for automation, either because you can't automate them, or they are just not worth the effort.
3. Automate the tests selected in step 2 - initially you will use capture/replay using the steps in the manual test, but you will soon see that to produce meaningful and informative tests you need to add additional code to your test eg. use tl_step() to give test results. As this process continues you will soon see that there are operations that you repeatedly do in multiple tests - these are then candidates for user-defined functions and compiled modules
4. Once you have completed step 3 go back to step 2 and you will find that the knowledge you have gained in step 3 will now allow you to select some more tests that you can do.
If you continue going through this loop you will gradually become more familiar with WR and TSL, in fact you will probably find that eventually you do very little capture/replay and more straight TSL coding.

What is meant by Waterfall Model?

The waterfall model is a popular version of the systems development life cycle model for software engineering. Often considered the classic approach to the systems development life cycle, the waterfall model describes a development method that is linear and sequential. Waterfall development has distinct goals for each phase of development. Imagine a waterfall on the cliff of a steep mountain. Once the water has flowed over the edge of the cliff and has begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development. Once a phase of development is completed, the development proceeds to the next phase and there is no turning back

What is user acceptance testing?

Answer1:
The final testing stages by users of a new or changed information system. If successful, it signals the approval to implement the system live. Cosmetic and other small changes may still be required as a result of the test, but the system is considered stable and processing data according to requirements.

Answer2:
A formal product evaluation performed by a customer as a condition of purchase.

Answer3:
In software development, user acceptance testing (UAT) - also called beta testing, application testing, and end user testing - is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers who make final changes before releasing the software commercially

Answer4:
Formal testing conducted to enable a user or other authorised entity to determine whether to accept a system or component. Often known simply as acceptance testing or customer acceptance testing (CAT). Acceptance tests are based upon business requirements.

What to consider for the Test Plan?

1. Why you cannot download a Word version of this test plan.
I have received numerous requests for an MS Word version of the test plan.
However, although the web pages were created directly from a Word document, I no longer have a copy of that original Word document.
Also, having prepared numerous test plans, I know that the content is more important than the format. See the next point for more info on the content of a test plan.
2. What a test plan should contain
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.
A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.
A test plan should ideally be organisation-wide, being applicable to all of an organisation's software developments.
The objective of each test plan is to provide a plan for verifying, by testing the software, that the software produced fulfils the functional or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this generally means the Functional Specification.
The first consideration when preparing the test plan is who the intended audience is; e.g. the audience for a Unit Test Plan would be different, and thus the content would have to be adjusted accordingly.
You should begin the test plan as soon as possible. Generally it is desirable to begin the master test plan at the same time the Requirements documents and the Project Plan are being developed. Test planning can (and should) have an impact on the Project Plan. Plans that are written early will have to be changed during the course of the development and testing, but that is valuable, because it records the progress of the testing and helps planners become more proficient.
What to consider for the Test Plan:

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

3. Standards for Software Test Plans
Several standards suggest what a test plan should contain, including the IEEE.
The standards are:
IEEE standards:
829-1983 IEEE Standard for Software Test Documentation
1008-1987 IEEE Standard for Software Unit Testing
1012-1986 IEEE Standard for Software Verification & Validation Plans
1059-1993 IEEE Guide for Software Verification & Validation Plans