Saturday, September 27, 2008

Software Testing Process

1 The Approach

This section will not detail every testing process but will throw some light on a standard one. Any testing process starts with planning the test (Test Plan), building a strategy (How to Test), preparing test cases (What to Test), and executing test cases (Testing), and ends with reporting the results (Defects).

The above process need not be wholly iterative, but the execution of test cases and the reporting of results are iterative in most cases. So how different is a web testing process from any other testing process? Practically, it is not the process that differs but the priority areas that need to be set for web testing. Key focus areas such as compatibility, navigation, user interaction, usability, performance, scalability, reliability, and availability should be considered during the testing phase.

1.1 The Do’s

This section lists the areas and tasks one has to follow in a web testing process. Though they may be common to other testing processes, it is suggested that these areas be given enough attention.

2 Plan & Strategy

Neatly document the test plan and test strategy for the application under test. The test plan serves as the basis for all testing activities throughout the testing life cycle. Being an umbrella activity, it should reflect the customer's needs in terms of milestones to be met, the test approach (test strategy), the resources required, etc. The plan and strategy should give the customer clear visibility into the testing process at any point of time.

Functional and performance test plans, if developed separately, will give a lot more clarity to functional and performance testing. The performance test plan is optional if the application does not entail any performance requirements.

2.1 The Do’s

Ø Develop the test plan based on an approved Project Plan

Ø Document test plan with major testing milestones

Ø Identify and document all deliverables at the end of these milestones

Ø Identify the resources (Both Hardware/Software and Human) required

Ø Identify all other external systems that will interact with the application. For example, the application may get its data from a mainframe server. Identifying such systems will help one plan for integration testing as well

Ø If performance testing is within the scope of testing, clearly identify the application's performance requirements, such as the number of hits per second, response time, and number of concurrent users. Details of the different methodologies used during the performance testing phase (spike testing, endurance testing, stress testing, capacity testing) can also be documented.

Ø Get the test plan approved.

Ø Include Features to be tested to communicate to the customer what will be tested during the testing life cycle

Ø Include Features not to be tested to communicate to the customer what will not be tested during the testing life cycle (as part of risk management)
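Performance requirements documented in the plan are most useful when the test harness actually checks measured numbers against them. The sketch below, in Python, shows one way to do that; the nearest-rank percentile method, the sample timings, and the 500 ms target are all illustrative assumptions, not from the plan itself.

```python
# Sketch: checking measured response times against a hypothetical
# performance requirement. All thresholds and data are illustrative.

def percentile(samples, pct):
    """Return the pct-th percentile of the samples (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, int(round(pct / 100.0 * len(ordered))))
    return ordered[rank - 1]

def meets_requirement(response_times_ms, target_ms, pct=95):
    """True if the pct-th percentile response time is within target_ms."""
    return percentile(response_times_ms, pct) <= target_ms

# Hypothetical measurements from one load test run, in milliseconds.
times = [120, 135, 150, 160, 180, 210, 250, 300, 320, 900]
print(meets_requirement(times, target_ms=500))  # the 900 ms outlier fails it
```

Checking a percentile rather than the average is a deliberate choice here: averages hide the slow tail that individual users actually experience.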

2.2 The Don’ts

Ø Do not use draft (unapproved) test plans for reference

Ø Do not ignore the test strategies identified in the test plan during testing.

Ø Do not make changes to an approved test plan without an official change request

3 Test Case Design

Do not mix the stages of testing (unit testing, integration testing, system testing, functional testing, etc.) with the types of testing (regression testing, sanity testing, user interface testing, smoke testing, etc.) in the test plan. Identify each uniquely, with its respective input and exit criteria.

Any testing is only as good as its test cases, since the test cases reflect the test engineer's understanding of the application requirements. A good test case is one that identifies the as-yet-undiscovered errors.

3.1 The Do’s

Ø Identify test cases for each module

Ø Write each test case as a single executable step.

Ø Design more functional test cases.

Ø Clearly identify the expected results for each test case

Ø Design workflow test cases so that they follow the application's sequence during testing. For example, a mail application such as Yahoo has to start with the registration process for new users, then signing in, composing mail, sending mail, and so on.

Ø Security is a high priority in web testing. Hence, document enough test cases related to application security.

Ø Develop a traceability matrix to understand the test cases' coverage of the requirements
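The traceability matrix in the last point can be as simple as a mapping from requirements to the test cases that cover them, which then makes coverage gaps mechanically checkable. A minimal sketch, with requirement and test-case IDs made up for illustration:

```python
# Sketch of a traceability matrix: requirement -> covering test cases.
# All IDs below are hypothetical.

traceability = {
    "REQ-001 User registration": ["TC-01", "TC-02"],
    "REQ-002 Sign-in":           ["TC-03"],
    "REQ-003 Compose mail":      ["TC-04", "TC-05"],
    "REQ-004 Send mail":         [],          # no coverage yet
}

def uncovered(matrix):
    """Return the requirements that have no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered(traceability))   # ['REQ-004 Send mail']
```

Running the coverage check as part of test-case review catches uncovered requirements before execution starts, rather than in the test summary afterwards.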

3.2 The Don’ts

Ø Do not write repetitive UI test cases. They will lead to high maintenance, since the UI will evolve over time.

Ø Do not write more than one execution step in each test case.

Ø Do not concentrate on negative paths in user acceptance test cases if the business requirements clearly indicate the application's behavior and how business users will use it.

Ø Do not fail to get the test cases reviewed by the individual module owners on the development team. This will keep the entire team on the same page.

Ø Do not leave any functionality uncovered by the test cases unless it is listed in the test plan under features not tested.

Ø Try not to write test cases for error messages based on assumptions. Document error-message validation test cases only if the exact error message to be displayed is given in the requirements.

4 Testing

This phase is crucial from the customer's standpoint, be the customer internal or external. All the effort put into the earlier phases of testing reaps results only in this phase.

A good test engineer should always work towards breaking the product, right from the first release till the final release of the application (the killer attitude). This section focuses not just on testing but on all the activities related to it, be it defect tracking, configuration management, or testing itself.

4.1 The Do’s

Ø Ensure that the testing activities are in sync with the test plan

Ø Identify technically weak areas where you might need assistance or training during testing. Plan and arrange for these technical trainings to solve the issue.

Ø Strictly follow the test strategies as identified in the test plan

Ø Try to get release notes from the development team containing the details of each release made to QA for testing. These should normally contain the following details:

o The version label of code under configuration management

o Features part of this release

o Features not part of this release

o New functionalities added/Changes in existing functionalities

o Known Problems

o Fixed defects etc.

Ø Stick to the input and exit criteria for all testing activities. For example, if the input criteria for a QA release is sanity tested code from development team, ask for sanity test results.

Ø Update the test results for the test cases as and when you run them

Ø Report the defects found during testing in the tool identified for defect tracking

Ø Take the code from the configuration management (as identified in plan) for build and installation.

Ø Ensure that the code is version controlled for each release.

Ø Classify defects (as P1, P2, P3, P4, or Critical, High, Medium, Low, or any other scheme) by mutual agreement with the development team, so as to help developers prioritize defect fixes

Ø Do a sanity test whenever a release is made by the development team.
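Once defects are classified under an agreed scheme, a small script can tally the open ones per priority so that the highest-priority fixes surface first. A sketch with made-up defect records and a hypothetical P1–P4 scheme:

```python
# Sketch: tallying open defects by agreed priority.
# Defect records and the priority scheme are illustrative.
from collections import Counter

defects = [
    {"id": "D-101", "priority": "P1", "status": "open"},
    {"id": "D-102", "priority": "P3", "status": "open"},
    {"id": "D-103", "priority": "P1", "status": "fixed"},
    {"id": "D-104", "priority": "P2", "status": "open"},
]

def open_by_priority(records):
    """Count open defects per priority so P1s can be fixed first."""
    return Counter(d["priority"] for d in records if d["status"] == "open")

print(open_by_priority(defects))
```

In practice this query would run against the centralized defect tracking tool mentioned below rather than an in-memory list; the point is that a shared classification makes such prioritization queries trivial.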

4.2 The Don’ts

Ø Do not update the test cases while executing them. Track the changes and update them based on a written reference (SRS, functional specification, etc.). People normally tend to update test cases based on the look and feel of the application.

Ø Do not track defects in many places, i.e. in Excel sheets and in other defect tracking tools. This will increase the time needed to track all the defects. Hence, use one centralized repository for defect tracking

Ø Do not get the code from a developer's sandbox for testing if it is an official release from the development team

Ø Do not spend time in testing the features that are not part of this release

Ø Do not focus your testing on non-critical areas (from the customer's perspective)

Ø Even if the defect identified is of low priority, do not fail to document it.

Ø Do not leave room for assumptions while verifying the fixed defects. Clarify and then close!

Ø Do not hastily mark test cases as passed without actually running them, assuming they worked in earlier releases. These preconceived notions can cause big trouble if the functionality suddenly stops working and is later found broken by the customer.

Ø Do not focus on negative paths that consume a lot of time but will be least used by the customer. Though these need to be tested at some point, the idea really is to prioritize tests.

5 Test Results

What comes next after the testing is complete? Can testing then be considered complete?

The answer is no. Any testing activity should always end with the test results, which comprise both the defects found and the results of the test cases executed during testing.

5.1 The Do’s

Ø Ensure that a defect summary report is sent to the Project Lead after each release's testing. At a high level this can discuss the number of open/reopened/closed/fixed defects; drilling down, the report can also contain the priorities of the open and reopened defects.

Ø Ensure that a test summary report is sent to the Project Lead after each release's testing. This can contain the total number of test cases, and how many were executed, passed, failed, not run, and not applicable. (Not run here means test cases that could not be executed due to the non-availability of the production environment, the non-availability of real-time data, or some other dependency, not due to lack of time. Looking at the not-run test cases should thus give a clear picture of which areas were not tested.)

Ø At a high level, if the above details are tracked for all releases, they should give a clear picture of the application's growing stability.

Ø Track metrics as identified during the plan stage
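The summary counts described above are easy to derive mechanically from raw per-test-case outcomes, which keeps the report consistent from release to release. A minimal sketch; the status names and result data are illustrative:

```python
# Sketch: deriving a release test summary from raw test-case outcomes.
# Status vocabulary ("passed", "failed", ...) is an assumption.

def summarize(results):
    """Tally test cases by outcome for a release summary report."""
    summary = {"total": len(results), "passed": 0, "failed": 0,
               "not_run": 0, "not_applicable": 0}
    for outcome in results.values():
        summary[outcome] += 1
    summary["executed"] = summary["passed"] + summary["failed"]
    return summary

results = {"TC-01": "passed", "TC-02": "passed", "TC-03": "failed",
           "TC-04": "not_run", "TC-05": "not_applicable"}
print(summarize(results))
```

Tracking these same fields for every release gives the stability trend mentioned above without any extra bookkeeping.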

5.2 The Don’ts

Ø Do not attempt to overwhelm anyone with huge amounts of test result information; it has to be precise. You need not give the execution steps of every failed test case, as it would be tedious for someone to sit and go through them all

Ø Finally, what counts is how easily the test result information can be interpreted. Hence, do not leave room for assumptions while interpreting the test metrics. Make it simple!

6 Conclusion

In conclusion, testing should focus on 100% test coverage rather than 100% test case coverage, because at times 100% test case coverage cannot fully guarantee that the application is thoroughly tested; it is the test coverage that really matters. Testing is not a monotonous job anymore, as it poses far more challenges than it was once thought to. The quality of the product is ultimately the outcome of how good the testing was, and in the end the effectiveness of the testing is what counts when it comes to customer satisfaction. Testing also gives one an avenue to learn new tools that can help in the testing process. But testing can only show the presence of defects; it cannot certify their absence. Hence it really counts to deliver a defect-free product.

Some Major Test cases for web application cookie testing

The first obvious test case is to check whether your application writes cookies properly to disk.
Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or sensitive data is stored in a cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data stored in the cookie is encrypted.
3) Make sure that there is no overuse of cookies on the site under test. Overuse will annoy users if the browser prompts for cookies too often, and this could result in loss of site traffic and eventually loss of business.
4) Disable cookies from your browser settings: if your site uses cookies, its major functionality will not work with cookies disabled. Try to access the web site under test and navigate through it. See if appropriate messages are displayed to the user, such as “For smooth functioning of this site, make sure that cookies are enabled on your browser”. No page should crash because cookies are disabled. (Make sure that you close all browsers and delete all previously written cookies before performing this test.)
5) Accept/reject some cookies: the best way to check the web site's functionality is not to accept all cookies. If your web application writes 10 cookies, randomly accept some and reject the others, say accept 5 and reject 5. To execute this test case, set your browser options to prompt whenever a cookie is about to be written to disk, and accept or reject each cookie from that prompt. Then try to access the major functionality of the web site. See if pages crash or data gets corrupted.
6) Delete cookies: allow the site to write its cookies, then close all browsers and manually delete all cookies for the web site under test. Access the web pages and check how they behave.
7) Corrupt the cookies: corrupting a cookie is easy, since you know where cookies are stored. Manually edit a cookie in Notepad and change its parameters to some vague values: alter the cookie's content, its name, or its expiry date, and observe the site's functionality. In some cases a corrupted cookie allows its data to be read by another domain. This should not happen with your web site's cookies. Note that a cookie written by one domain, say rediff.com, cannot be accessed by another domain, say yahoo.com, unless the cookie is corrupted and someone is trying to hack the cookie data.
8) Checking the deletion of cookies from your web application page: sometimes a cookie written by a domain, say rediff.com, may be deleted by a different page under the same domain. This is the general case if you are testing an ‘action tracking’ web portal. An action-tracking or purchase-tracking pixel is placed on the action page, and when any action or purchase occurs, the cookie written to disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no more invalid actions or purchases get logged from the same user.
9) Cookie testing on multiple browsers: this is an important case, checking whether your web application writes cookies properly on different browsers as intended and whether the site works properly using those cookies. Test your web application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera, etc.
10) If your web application uses cookies to maintain a user's login state, log in with some username and password. In many cases you can see the logged-in user's ID parameter directly in the browser address bar. Change this parameter to a different value: say, if the previous user ID was 100, make it 101 and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.
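Test cases 2 and 7 above (protecting cookie contents and detecting corrupted cookies) both hinge on the server being able to tell a genuine cookie from a tampered one. One common technique, sketched here with Python's standard library, is to sign the cookie value with an HMAC; the secret key and the cookie contents are illustrative assumptions, and truly sensitive values would still need encryption or server-side storage rather than a mere signature.

```python
# Sketch: HMAC-signing a cookie value so a corrupted/tampered cookie
# is detected on the server. Key and cookie contents are hypothetical.
import hmac
import hashlib

SECRET = b"server-side-secret"   # kept on the server, never sent to clients

def sign_cookie(value: str) -> str:
    """Return 'value|signature' suitable for a Set-Cookie header."""
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{sig}"

def verify_cookie(cookie: str) -> bool:
    """True only if the value part still matches its signature."""
    value, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

cookie = sign_cookie("user_id=100")
print(verify_cookie(cookie))       # True: untouched cookie verifies

# A "corrupted" cookie, as in test case 7: value edited, signature stale.
tampered = "user_id=101|" + cookie.rpartition("|")[2]
print(verify_cookie(tampered))     # False: tampering is detected
```

A site using this scheme would also defeat the user-ID tampering in test case 10, since an edited `user_id` no longer matches its signature.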