
Tuesday, March 31, 2009

Black Box is also called:

1. Behavioral Testing
2. Input/Output Driven Testing
3. Specification-Based Testing
4. Functional (Closed Box) Testing

White Box is also called:

1. Structural Testing
2. Clear Box Testing
3. Open Box Testing
4. Glass Box Testing

Technique for black box testing

Equivalence partitioning is a test case selection technique for black box testing. In this method, testers identify various classes of input conditions, called equivalence classes. These classes are identified such that each member of a class causes the same kind of processing and output to occur.


Basically, a class is a set of input conditions that the system treats in a similar way. This test case selection technique assumes that if the system handles one case in a class erroneously, it will handle all cases in that class erroneously.


This technique drastically cuts down the number of test cases required to test a system reasonably. Using this technique, one can find the most errors with the smallest number of test cases.


To use equivalence partitioning, you will need to:

1. Determine the conditions to be tested
2. Define and design the test cases

Determining the conditions to be tested:


All valid input data for a given condition are likely to go through the same process.
Invalid data can go through various processes and needs to be evaluated more carefully. For example:
Treat the blank entry differently than an incorrect entry.
Treat a value differently if it is less than or greater than a range of values.
If there are multiple error conditions within a function, one error may override the other, which means that the subordinate error does not get tested unless the other value is valid.
Defining and Designing Test Cases:

First, cover the valid input conditions, including as many valid conditions as possible in one test case.
For invalid input, include only one invalid test per test case, in order to isolate the error.
Example: In a company, for the first three digits of every employee ID the minimum number is 333 and the maximum number is 444. For the fourth and fifth digits, the minimum number is 11 and the maximum number is 99.
So, for the first three digits the various test conditions can be:


a. = or > 333 and = or < 444 (valid input)
b. < 333 (invalid input, below the range)
c. > 444 (invalid input, above the range)
d. Blank (invalid input, no value entered)
And for the fourth and fifth digits the various test conditions can be:
e. = or > 11 and = or < 99 (valid input)
f. < 11 (invalid input, below the range)
g. > 99 (invalid input, above the range)
h. Blank (invalid input, no value entered)
Now, while using equivalence partitioning, only one value representing each of the eight equivalence classes needs to be tested.



Now, after identifying the tests, you will need to create test cases to test each equivalence class. Create one test case for the valid input conditions and identify separate test cases for each invalid input.


As a black box tester, you might not know the manner in which the programmer has coded the error handling. So, you will need to create separate tests for each invalid input, to avoid masking the result in the event one error takes priority over another.


Thus, based on the test conditions, there can be seven test cases:

Test case for a and e - (both are valid)
Test case for b and e - (only the first one is invalid)
Test case for c and e - (only the first one is invalid)
Test case for d and e - (only the first one is invalid)
Test case for a and f - (only the second one is invalid)
Test case for a and g - (only the second one is invalid)
Test case for a and h - (only the second one is invalid)
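To make this concrete, here is a minimal Python sketch of the seven test cases above, written against a hypothetical is_valid_employee_id() helper. The function name, its two-argument signature, and the pytest usage are illustrative assumptions, not part of the original example.

import pytest

def is_valid_employee_id(first_three: str, last_two: str) -> bool:
    # Hypothetical validator: first three digits 333-444, last two digits 11-99.
    if not (first_three.isdigit() and last_two.isdigit()):
        return False
    return 333 <= int(first_three) <= 444 and 11 <= int(last_two) <= 99

@pytest.mark.parametrize("first_three, last_two, expected", [
    ("400", "50",  True),   # a and e: both parts valid
    ("300", "50",  False),  # b and e: first part below the range
    ("500", "50",  False),  # c and e: first part above the range
    ("",    "50",  False),  # d and e: first part blank
    ("400", "05",  False),  # a and f: second part below the range
    ("400", "100", False),  # a and g: second part above the range
    ("400", "",    False),  # a and h: second part blank
])
def test_employee_id_equivalence_classes(first_three, last_two, expected):
    assert is_valid_employee_id(first_three, last_two) == expected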

All About Black Box Testing

Black box testing is a test design method. It treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure; in other words, the test engineer need not know the internal workings of the "black box". It focuses on the functionality of the module.

Some people refer to black box testing as behavioral, functional, opaque-box, or closed-box testing. While the term black box is the most popular, many people prefer the terms "behavioral" and "structural" for black box and white box respectively. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.

Personally, we feel that there is a trade-off between the white box and black box approaches to testing a product.

There are some bugs that cannot be found using only black box or only white box testing. If the test cases are extensive and the test inputs are drawn from a large sample space, it is usually possible to find the majority of the bugs through black box testing.

Tools used for Black Box testing: Many tool vendors have been producing tools for automated black box and automated white box testing for several years. The basic functional or regression testing tools capture the results of black box tests in a script format. Once captured, these scripts can be executed against future builds of an application to verify that new functionality hasn't disabled previous functionality.

Advantages of Black Box Testing:

- Tester can be non-technical.

- This testing is most likely to find the bugs that the end user would find.

- Testing helps to identify vagueness and contradictions in the functional specifications.

- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing:

- Chances of repeating tests that have already been done by the programmer.

- The test inputs need to be drawn from a large sample space.

- It is difficult to identify all possible inputs in limited testing time. So writing test cases is slow and difficult.

- Chances of leaving some program paths unexercised during this testing.

- Graph Based Testing Methods: Testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that cover the graph so that each object and relationship is exercised and errors are uncovered.

Error Guessing: Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: Either when reading the functional documents or when you are testing and find an error that you have not documented.

Boundary Value Analysis: Boundary Value Analysis (BVA) is a test data selection technique (Functional Testing technique) where the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values then it will work correctly for all values in between.

- Extends equivalence partitioning

- Test both sides of each boundary

- Look at output boundaries for test cases too

- Test min, min-1, max, max+1, typical values

- BVA focuses on the boundary of the input space to identify test cases

- Rationale is that errors tend to occur near the extreme values of an input variable

There are two ways to generalize the BVA techniques:

By the number of variables: for n variables, BVA yields 4n + 1 test cases (see the sketch after this list).

By the kinds of ranges: Generalizing ranges depends on the nature or type of variables:

- NextDate has a variable Month and the range could be defined as {Jan, Feb, …Dec}
Min = Jan, Min +1 = Feb, etc.

- Triangle had a declared range of {1, 20,000}

- Boolean variables have extreme values True and False but there is no clear choice for the remaining three values
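As a rough illustration (not from the original post), the 4n + 1 count can be generated mechanically: hold every variable at its nominal value except one, which takes its min, min+1, max-1, and max values in turn, and add the single all-nominal case. The variable names below are assumptions for the example.

def bva_test_cases(ranges):
    # ranges: dict mapping variable name -> (min, max); returns a list of test cases.
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                      # the single all-nominal case
    for name, (lo, hi) in ranges.items():
        for value in (lo, lo + 1, hi - 1, hi):   # four boundary values per variable
            case = dict(nominal)
            case[name] = value
            cases.append(case)
    return cases

cases = bva_test_cases({"month": (1, 12), "day": (1, 31)})
print(len(cases))   # 4 * 2 + 1 = 9 test cases for n = 2 variables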

Advantages of Boundary Value Analysis:

- Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1

- Forces attention to exception handling

- For strongly typed languages, robustness testing may produce run-time errors that abort normal execution

Limitations of Boundary Value Analysis: BVA works best when the program is a function of several independent variables that represent bounded physical quantities:

1. Independent Variables:
NextDate test cases derived from BVA would be inadequate: focusing only on the boundaries would place little emphasis on February or leap years.

- Dependencies exist with NextDate's Day, Month and Year.
- Test cases derived without consideration of the function

2. Physical Quantities:

An example of physical variables being tested is telephone numbers: what faults might be revealed by the numbers 000-0000, 000-0001, 555-5555, 999-9998, and 999-9999?

Equivalence Partitioning: Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. EP can be defined according to the following guidelines:

- If an input condition specifies a range, one valid and two invalid classes are defined.

- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.

- If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.

- If an input condition is Boolean, one valid and one invalid class is defined.
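A small sketch of how these guidelines translate into concrete classes, using an assumed "age" field with a valid range of 18-60; the field, values, and set members are illustrative only.

equivalence_classes = {
    "range 18-60":       {"valid": [35],      "invalid": [17, 61]},       # one valid, two invalid
    "specific value 30": {"valid": [30],      "invalid": [29, 31]},       # one valid, two invalid
    "member of set":     {"valid": ["admin"], "invalid": ["stranger"]},   # one valid, one invalid
    "boolean flag":      {"valid": [True],    "invalid": ["not-a-bool"]}, # one valid, one invalid
}

for condition, classes in equivalence_classes.items():
    print(f"{condition}: test {classes['valid']} and {classes['invalid']}")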

Comparison Testing: There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. These independent versions form the basis of a black box testing technique called comparison testing, or back-to-back testing.

Orthogonal Array Testing: The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitable small set of test cases (from a large number of possibilities).
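A tiny sketch of the idea (the parameter names are assumed): with three two-valued parameters, the four rows of the L4(2^3) orthogonal array cover every pair of values between any two parameters, instead of all 2^3 = 8 combinations.

from itertools import combinations

parameters = ["browser", "os", "locale"]                 # hypothetical, each with values 0 and 1
orthogonal_array = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

for i, j in combinations(range(len(parameters)), 2):
    pairs = {(row[i], row[j]) for row in orthogonal_array}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}     # all four value pairs appear
print("4 test cases cover all pair-wise interactions of", parameters)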

Categories of Software Errors:

One common definition of a software error is a mismatch between the program and its specification. In other words, a software error is present in a program when the program does not do what its end user reasonably expects.
Categories of Software Errors:

User interface errors such as output errors or incorrect user messages.
Function errors
Hardware defects
Incorrect program version
Requirements errors
Design errors
Documentation errors
Architecture errors
Module interface errors
Performance errors
Boundary-related errors
Logic errors, such as calculation errors
State-based behavior errors
Communication errors
Program structure errors, such as control-flow errors

Most programmers are rather cavalier about controlling the quality of the software they write. They bang out some code, run it through some fairly obvious ad hoc tests, and if it seems okay, they’re done. While this approach may work all right for small, personal programs, it doesn’t cut the mustard for professional software development.
Modern software engineering practices include considerable effort directed toward software quality assurance and testing. The idea, of course, is to produce high-quality software with a high probability of satisfying the customer's needs.
There are two ways to deliver software free of errors:

Preventing the introduction of errors in the first place.
Identifying the bugs lurking in the program code, seeking them out, and destroying them.
Obviously, the first method is superior. A big part of software quality comes from doing a good job of defining the requirements for the system you’re building and designing a software solution that will satisfy those requirements. Testing concentrates on detecting those errors that creep in despite your best efforts to keep them out.

Software Testing Bug Report Template

In continuation of my previous post, here I'm explaining a simple and effective software bug report template.

If you are using a software test management tool or a bug reporting tool like Bugzilla, Test Director, Bughost, or any other online bug tracking tool, then the tool will generate the bug report automatically. If you are not using any tool, you may refer to the following template for your software bug report:



Name of Reporter:
Email Id of Reporter:
Version or Build:
Module or component:
Platform / Operating System:
Type of error:
Priority:
Severity:
Status:
Assigned to:
Summary:
Description:

Bug Life Cycle

The steps in the defect life cycle vary from company to company, but the basic flow remains the same. Below I'm describing a basic flow for the Bug Life Cycle:


A tester finds a bug. Status --> Open
The test lead reviews the bug and authorizes it. Status --> Open
The development team lead reviews the defect. Status --> Open
The defect can be authorized or rejected by the development team. Status --> Open (for authorized defects) or Rejected (for unauthorized defects)
The authorized bugs then get fixed or deferred by the development team. Status --> Fixed (for fixed bugs) or Deferred (for deferred bugs)
The fixed bugs are re-tested by the testing team. If the bug is resolved, its status is set to Closed; if the bug still remains, it is re-raised and its status is set to Re-opened.
The above-mentioned cycle continues until all the bugs / defects in the application get fixed.
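A minimal sketch of these status transitions as a small state machine. The transition table is my reading of the flow above, not any specific tool's workflow, and the Deferred --> Open step is an assumption about how deferred bugs re-enter a later cycle.

ALLOWED_TRANSITIONS = {
    "Open":      {"Rejected", "Fixed", "Deferred"},  # review, then fix, defer, or reject
    "Deferred":  {"Open"},                           # assumed: deferred bugs re-enter a later cycle
    "Fixed":     {"Closed", "Re-opened"},            # re-test either closes or re-raises the bug
    "Re-opened": {"Fixed", "Deferred"},
}

def change_status(current, new):
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

status = "Open"
for step in ("Fixed", "Re-opened", "Fixed", "Closed"):
    status = change_status(status, step)
print(status)   # Closed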

How to find more bugs while doing Software Testing

Here, in this post, I’m going to tell you some useful tips to find more bugs while doing Software Testing:




Understand the whole application or module in depth before starting the testing.
Put stress on the functional test cases that cover the major risk areas of the application.
If you are going to test the database, your test data set must include the database record IDs along with the various test case conditions.
If it is not the first software testing cycle, use the previous test data patterns to analyze the current set of tests.
Perform the same tests on different test environments. Find the result patterns and then compare your results against those patterns.
Do some standard tests, like putting the "%" sign, "*", or HTML tags into a text box and then checking the results in the output window (see the sketch after this list).
When you are tired, do some monkey testing.
Apart from these tips, one thing I would like to recommend is that you keep thinking, every minute, about how to find a bug in the software. Just be passionate about Software Testing.
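The "standard tests" tip above can be automated as a quick input-robustness check. The sketch below assumes a hypothetical render_comment() function in the application under test; the point is only to show feeding risky characters in and asserting that markup is not echoed back unescaped.

import html

RISKY_INPUTS = ["%", "*", "<b>bold</b>", "<script>alert(1)</script>", "'; --", ""]

def render_comment(text: str) -> str:
    # Hypothetical application code: user input should be escaped before display.
    return f"<p>{html.escape(text)}</p>"

for value in RISKY_INPUTS:
    output = render_comment(value)
    assert "<script>" not in output, f"unescaped markup for input {value!r}"
    print(repr(value), "->", output)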

Which projects may not need independent test staff?

Ans. It depends on the size and nature of the project, the business risks, the development methodology, and the skills and experience of the developers.

What's the role of documentation in QA?

Ans. QA practices must be documented to enhance their repeatability. There should be a system for easily finding and obtaining documents and for determining which document contains a particular piece of information.

What is good design?

Ans. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable. It should also be robust, with sufficient error handling and status logging capability, and it should work correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

What are the common solutions to software development problems?

Ans.

Solid requirements
Realistic schedules
Adequate testing
Sticking to the initial requirements where feasible
Walkthroughs and inspections when appropriate

What are the common problems in the software development process?

Ans.

Poor requirements
Unrealistic schedules
Inadequate testing
Requests to pile on new features after development is underway
Miscommunication

Tell us about some world-famous bugs

Ans. 1. In December 2007, an error occurred in a new ERP payroll system for a large urban school system. More than one third of employees received incorrect paychecks, resulting in overpayments of $53 million. Inadequate testing reportedly contributed to the problems.

2. A software error reportedly resulted in overbilling of 11,000 customers of a major telecommunications company in June 2006. Making the corrections in the bills took a long time.

3. In March of 2002 it was reported that software bugs in Britain's national tax system resulted in more than 100,000 erroneous tax overcharges.

What are the qualities of a good QA or Test manager?

Ans.

Must be familiar with the software development process
Able to maintain the enthusiasm of their team and promote a positive atmosphere
Always looking to prevent problems
Able to promote teamwork to increase productivity
Able to promote cooperation between software, test, and QA engineers
Have the skills needed to promote improvements in QA processes
Have the ability to say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
Have people-judgement skills for hiring and keeping skilled personnel
Be able to run meetings and keep them focused

What are the qualities of a good QA engineer?

Ans.

The same qualities as a good tester
Additionally, they must be able to understand the entire software development process and how it fits into the business approach and goals of the organization.
In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed.
An ability to find problems, as well as to see 'what's missing', is important for inspections and reviews.

What are the qualities of a good test engineer?

Ans.

A 'test to break' attitude
An ability to take the point of view of the customer
A strong desire for quality
Tact and diplomacy
Good communication skills
Previous software development experience, which can be helpful as it provides a deeper understanding of the software development process
Good judgment skills

What is Software Quality Assurance?

Ans. Software QA involves monitoring and improving the entire software development process and making sure that any agreed-upon standards and procedures are followed. It is oriented to prevention.

What is Software Testing?

Ans. Operating a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. It is oriented to detection.

Why does software have bugs?

Ans.

miscommunication or no communication
software complexity
programming errors
changing requirements
time pressures
poorly documented code
software development tools
egos - people prefer to say things like:
• 'no problem'
• 'piece of cake'
• 'I can whip that out in a few hours'

What is Extreme Programming?

Ans. Extreme Programming is a software development approach for risk-prone projects with unstable requirements. Unit testing is a core aspect of Extreme Programming: programmers write unit and functional test code first, before writing the application code. Generally, customers are expected to be an integral part of the project team and to help create and design scenarios for acceptance testing.
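A minimal sketch of the test-first habit mentioned above, using Python's unittest and a made-up discount() function; both the function and the test values are illustrative assumptions.

import unittest

def discount(price: float, percent: float) -> float:
    # Application code, written only after the tests below already existed (and failed).
    return round(price * (1 - percent / 100), 2)

class TestDiscount(unittest.TestCase):    # written first, before discount() was implemented
    def test_ten_percent_off(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()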

How can web based applications be tested?

Ans. Apart from functionality, consider the following:

- What are the expected loads on the server and what kind of performance is expected on the client side?
- Who is the target audience?
- Will down time for server and content maintenance / upgrades be allowed?
- What kinds of security will be required and what is it expected to do?
- How reliable are the site's Internet / intranet connections required to be?
- How does the Internet / intranet connection affect backup systems or redundant connection requirements and testing?
- What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and / or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated?
- How are browser caching and variations in browser option settings to be handled?
- How are flash, applets, java scripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
- From the usability point of view consider the following:

-- Pages should be no more than 3-5 screens long unless the content is tightly focused on a single topic.
-- The page layouts and design elements should be consistent throughout the application / web site.
-- Pages should be as browser-independent as possible, or else generated based on the browser type.
-- There should be no dead-end pages. A link to a contact person or organization should be included on each page.
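As one concrete example of the link-validation point above, here is a rough sketch of a dead-link check using only the Python standard library; the page URL is a placeholder, and real projects typically rely on crawlers or dedicated link-check tools.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href" and value)

def check_links(page_url):
    parser = LinkCollector()
    parser.feed(urlopen(page_url, timeout=10).read().decode("utf-8", "ignore"))
    for href in parser.links:
        target = urljoin(page_url, href)
        try:
            result = urlopen(target, timeout=10).status   # 200 for a live link
        except (HTTPError, URLError) as err:
            result = err                                   # dead or unreachable link
        print(target, "->", result)

check_links("https://example.com/")   # placeholder URL for the site under test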

What if there isn't enough time for thorough testing?

Ans. Consider the following questions to help prioritize the testing effort:

- Which functionality is most important from business point of view?
- Which functionality is most visible to the user?
- Which functionality has the largest financial impact?
- Which aspects of the application are most important to the customer?
- Which parts of the code are most complex?
- Which parts of the application were developed in a rush?
- Which aspects of similar/related previous projects caused problems?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?

When can you stop testing?

Ans.

- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

What is configuration management?

Ans. It covers the processes used to control, coordinate, and track code, requirements, documentation, problems, change requests, designs, tools / compilers / libraries / patches, the changes made to them, and who makes the changes.

What's an inspection?

Ans. It is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.

What is a walkthrough?

Ans. An informal meeting for evaluation or informational purposes.

What is validation?

Ans. It involves actual testing and takes place after verifications are completed.

What is verification?

Ans. It involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. It can be done with checklists, issue lists, walkthroughs, inspection meetings, etc.

What are the components of a bug report?

Ans.


- Application name
- The function or module name
- Bug ID
- Bug reporting date
- Status
- Test case ID
- Bug description
- Steps needed to reproduce the bug
- Names and/or descriptions of file/data/messages/etc. used in test
- Snapshot that would be helpful in finding the cause of the problem
- Severity estimate
- Was the bug reproducible?
- Name of tester
- Description of problem cause (filled by developers)
- Description of fix (filled by developers)
- Code section/file/module/class/method that was fixed (filled by developers)
- Date of fix (filled by developers)
- Date of retest or regression testing
- Any remarks or comments

What is a test case?

Ans. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of a software application is working correctly.
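A minimal sketch of such a test case captured as structured data; the field names and values are assumptions, not a mandated format.

test_case = {
    "id": "TC-LOGIN-001",
    "feature": "Login",
    "precondition": "A registered user exists",
    "steps": ["Open the login page", "Enter valid credentials", "Click 'Sign in'"],
    "input_data": {"username": "qa_user", "password": "********"},
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,      # filled in during execution
    "status": "Not Run",        # Not Run / Pass / Fail
}

print(test_case["id"], "-", test_case["expected_result"])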

What are the contents of test plan?

Ans.


- Title and identification of software including version etc.
- Revision history
- Table of Contents
- Purpose of document and intended audience
- Objective and software product overview
- Relevant related document list and standards or legal requirements
- Naming conventions
- Overview of software project organization
- Roles and responsibilities etc.
- Assumptions and dependencies
- Risk analysis
- Testing priorities
- Scope and limitations of testing effort
- Outline of testing effort and input data
- Test environment setup and configuration issues
- Configuration management processes
- Outline of bug tracking system
- Test automation if required
- Any tools to be used, including versions, patches, etc.
- Project test metrics to be calculated
- Testing deliverables
- Reporting plan
- Testing entrance and exit criteria
- Sanity testing period and criteria
- Test suspension and restart criteria
- Personnel pre-training needs
- Relevant proprietary, classified, security and licensing issues.
- Open issues if any

What is a test plan?

Ans. A document that describes the objectives, scope, approach, and focus of a software testing effort.

What are the steps to perform software testing?

Ans.

- Understand requirements and business logic
- Get budget and schedule requirements
- Determine required standards and processes
- Set priorities, and determine scope and limitations of tests
- Determine test approaches and methods
- Determine test environment, testware, and test input data requirements
- Set milestones and prepare test plan document
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Set up test environment
- Execute test cases
- Evaluate and report results
- Bug Tracking and fixing
- Retesting or regression testing if needed
- Update test plans, test cases, test results, traceability matrix etc.

How can QA processes be introduced in an organization?

Ans. 1. It depends on the size of the organization and the risks involved; e.g., for large organizations with high-risk projects, a formalized QA process is necessary.

2. If the risk is lower, management and organizational buy-in and QA implementation may be slower.

3. The most value for effort will often be in:

- Requirements management processes
- Design inspections and code inspections
- Post-mortems / retrospectives

What is Performance, Stress & Load Testing?

Performance Testing: This is conducted to evaluate the compliance of a system or component against stated performance requirements, as specified in the Service Level Agreement (SLA). Usually the last set of tests performed before implementing the new system, performance testing validates how well the system performs from a speed and data-processing perspective.

The Load, Volume and Stress testing are often grouped under performance tests. However, depending on the size and criticality of the system, these can be viewed as individual test phases.

Load Testing: In load testing, the system under test is subjected to various levels of "load" to test its behaviour. In other words, load testing checks whether the system works well under the specified load requirement.

Stress Testing: Testing conducted to evaluate a system or component at, or beyond, the limits of its specified requirements. It is quite distinct from load testing because here the behaviour of the system is checked at extremes, which helps establish safe load limits for end users. It is quite useful for mission-critical software.
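A rough sketch of a simple load test in Python: fire concurrent requests at the system under test and report throughput and response times. The endpoint, user count, and request count are placeholders, and production load tests normally use dedicated tools such as JMeter or LoadRunner; this only illustrates the idea.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/health"   # hypothetical endpoint under test
USERS, REQUESTS_PER_USER = 20, 10    # the "load level" knobs

def one_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

started = time.perf_counter()
with ThreadPoolExecutor(max_workers=USERS) as pool:
    latencies = list(pool.map(one_request, range(USERS * REQUESTS_PER_USER)))
elapsed = time.perf_counter() - started

print(f"{len(latencies)} requests in {elapsed:.1f}s "
      f"({len(latencies)/elapsed:.1f} req/s), "
      f"avg latency {sum(latencies)/len(latencies)*1000:.0f} ms")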

Testing Methodology Details

Test methodology is the technical approach to how a piece of software is tested. Typically, people refer to black-box and white-box as the methodologies.

Black-box testing is mainly testing at the system level, the way customers may use the software. It is often pretty much the same as system testing. It is the most common way of testing a product when it has end users, but it may not be applicable when the software is not intended for end users, such as an API.

White-box testing is mainly testing in which the testers know the detailed logic and code of the software. It tests the internal logic, conditions, and operations of the code. It is typically used for unit/functional testing and also for software that has no end users (like APIs).

Agile S/w Development Methodology

The agile process evolved in the 1990s. Agile literally means the ability to move freely: the ability to adapt, both to changing requirements and to changing circumstances. Agile methods allow for changing requirements throughout the development cycle and stress collaboration between software developers and customers, as well as early product delivery.
The “Agile Manifesto” establishes a common framework for these processes: value individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. The processes most commonly considered agile include Extreme Programming (XP), Lean Development, Crystal, and Scrum.

Scrum:
Scrum is an agile software development method for project management. The word scrum is derived from the game of rugby. Takeuchi and Nonaka noted that projects using small, cross-functional teams historically produce the best results, and likened these high-performing teams to the scrum formation in rugby. Although Scrum was intended for the management of software development projects, it can also be used to run maintenance teams or as a program management approach.

Scrum terminology
Scrum Master: The person or persons in charge of the tracking and the daily updates for the scrum (equivalent to a project manager).
Scrum Team: A cross-functional team (developers, B.A.s, DBAs, and testers) responsible for developing the product.
Product Owner: The person responsible for maintaining the Product Backlog via continuous interaction with Clients and Stakeholders.
Story: A customer focused description of valued functionality.
Product Backlog: The stories to be completed.
Sprint: A time period (usually 2 to 4 weeks) in which development occurs on a set of stories that the team has committed to.
Burn Down Chart: Daily progress for a sprint over the sprint's length.

Agile s/w development process

Characteristics of Scrum:
A product backlog of prioritized work to be done; this is the set of requirements that you will implement in your product.

Completion of a fixed set of backlog items in a series of short iterations or sprints; this set is sometimes called the sprint backlog, the requirements that you will implement in a particular sprint.

A brief daily meeting, or scrum, at which progress is explained, upcoming work is described, and impediments are raised. This is an all-hands meeting: the architect, developers, testers, and the Scrum Master sit in the same room and discuss progress and issues. The best part is that everyone is involved in the discussion, so you can get on-the-spot clarifications.

A brief sprint planning session in which the backlog items for the sprint will be defined.

A brief sprint retrospective, at which all team members reflect on the past sprint. This happens at the end of each sprint: the team reviews its own performance, the issues faced in the current sprint, and the improvements needed. The retrospective meeting is very important, as its minutes can be taken into account when planning the next sprint.

Currently we are following Scrum in our projects, and we are able to get a shippable product by the end of every sprint.


XP - Extreme Programming:
This is another agile development method that is mostly used in conjunction with Scrum. This method talks about implementing best practices in the development cycle.
