Software Testing Models

One doesn't have to spend much time in the software industry to become familiar with several software development models. Some of the most commonly known include waterfall, iterative, test-first or test-driven development (TFD or TDD), and Extreme Programming (XP). Interestingly, it takes a rather diverse set of software development experiences, and close attention to those experiences, to realize that there are just as many models for testing software as there are for developing it -- and that the testing model a particular project follows need not be dictated by the software development model.


Categories of testing activities

To aid in this discussion, let's agree to think about software testing in terms of five general categories of activities:

1. Researching information to improve or enhance testing -- This information may come from specifications, use cases, technical design documentation, contracts, industry standards, competing applications, or almost anything else that is likely to improve a tester's ability to test the software deeper, faster or better.

2. Planning and/or designing tests -- This category would encompass such activities as writing test cases, developing test strategies, writing test plans, creating manual test scripts and preparing test data.

3. Scripting and/or executing tests -- Here is where tests are actually executed and/or automated. This is what most non-testers think of when they hear someone talk about software testing.

4. Analyzing test results and new information -- Not all tests produce results that clearly pass or fail. Many tests produce data that can only be interpreted through human judgment and analysis. Additionally, changing specifications, deadlines or project environments can make a test that had been clearly passing fail without anything changing in the software. This category is where that analysis occurs.

5. Reporting relevant information -- Reporting defects and preparing compliance reports are what come to mind first for most people, but a tester may need to report all kinds of additional information.

Again, these five categories are intended to be simple in order to make our discussion about testing models easier. They aren't intended to supplant your current terminology.
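For readers who think in code, the five categories are easy to pin down as a simple enumeration. This is only a sketch of the vocabulary used in the rest of this post; the class and member names are mine, not part of any standard:

    from enum import Enum

    class TestActivity(Enum):
        """The five general categories of testing activities described above."""
        RESEARCH = "researching information to improve or enhance testing"
        PLAN = "planning and/or designing tests"
        EXECUTE = "scripting and/or executing tests"
        ANALYZE = "analyzing test results and new information"
        REPORT = "reporting relevant information"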

Testing waterfall-style

Just like developing software using the waterfall model, testing waterfall-style is a fundamentally linear process, except for a minimal feedback loop created by the need to fix some of the problems in the software that are exposed by failing tests. Visually, that feedback loop is equivalent to the small eddy at the bottom of a real waterfall.

Waterfall-style testing is rarely chosen voluntarily anymore. It is commonly a side effect of some logistical challenge that kept the testers from being able to interact with the application or the developers prior to the first -- and what they hope will be the only -- build of the software. Waterfall testing is occasionally appropriate for situations where it is reasonable to hope the software will "just work," such as applying a service release or a patch to a production application.

Testing, iterative-style

Iterative testing is similar to iterative development in that many of the test iterations happen to coincide with development releases. In that regard, it is like a bunch of waterfall testing cycles strung end to end. Testing iterations differ from development iterations in that there can be iterations prior to the first software build, and there can be multiple test iterations during a single software build. Another difference is that unlike a development iteration, a test iteration can seamlessly abort at any point during the iteration to return to a research mode. While a development iteration can also abort and restart at any time, doing so is quite likely to jeopardize the project schedule.

Iterative software testing is extremely common in the commercial market, though it has many variants. The V-Model, the spiral model, and Rational Unified Process (RUP)-based testing are all derivatives of an iterative testing approach. Iterative testing generally works well on projects where software is being developed in pre-planned, predictable increments, and on projects where the software is being developed and released in such rapid or unpredictable cycles that it is counterproductive for testers to plan around scheduled releases.

Testing, agile-style

Agile-style testing more or less eliminates the element of predetermined flow from the test cycle in favor of shifting among the five basic activities whenever doing so adds value to the project. For example, while analyzing the results of a test, the tester may realize that the test was flawed and move directly back to planning and designing tests. In a waterfall or iterative flow, that test redesign would wait until after the current results were reported and preparations were being made for the next test iteration.

Agile-style testing can be implemented as an overall approach or as a complement to any other testing approach. For example, within an iterative test approach, a tester could be encouraged to enter a period of agile testing, side-by-side with a developer, while tracking down and resolving defects in a particular feature.

Agile-style testing is significantly more common than most people realize. As it turns out, this model is what is going on in the heads of many testers all the time, regardless of the external process they are following. Be that as it may, this approach isn't very popular with managers and process improvement specialists: many non-testers misunderstand it, and few testers who follow it can articulate what they are doing in a way that gives stakeholders confidence that the testing is organized and thoughtful.
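One rough way to compare the three styles is to encode each as the set of moves it permits among the five activities. The sketch below is my own interpretation of the flows described above, not a formal model:

    # Each testing style as a map from the current activity to the set of
    # activities that may legitimately come next.
    ACTIVITIES = ["research", "plan", "execute", "analyze", "report"]

    # Waterfall: strictly linear, plus the single "eddy" back from analysis
    # to execution when a fix needs retesting.
    WATERFALL = {
        "research": {"plan"},
        "plan": {"execute"},
        "execute": {"analyze"},
        "analyze": {"report", "execute"},  # the small feedback eddy
        "report": set(),                   # done
    }

    # Iterative: the same linear flow, but reporting loops back to research
    # for the next iteration, and any activity may abort back to research.
    ITERATIVE = {a: {"research"} for a in ACTIVITIES}
    ITERATIVE["research"].add("plan")
    ITERATIVE["plan"].add("execute")
    ITERATIVE["execute"].add("analyze")
    ITERATIVE["analyze"].update({"report", "execute"})

    # Agile: any activity may follow any other, whenever it adds value.
    AGILE = {a: set(ACTIVITIES) for a in ACTIVITIES}

    def can_move(model: dict, current: str, target: str) -> bool:
        """True if the given testing model permits this transition."""
        return target in model.get(current, set())

    # The flawed-test example from above: analysis straight back to design.
    assert can_move(AGILE, "analyze", "plan")
    assert not can_move(WATERFALL, "analyze", "plan")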

For more information, try Googling it!

IEEE 829 Compliant Test Plan Template

You have read about how to write a good test plan; now let's go through the 16 clauses of the IEEE 829 test plan standard:

1. Test plan identifier.
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.


These can be matched against the five characteristics of a basic plan -- scope, resource, time, quality and risk -- with two clauses left over that form part of the plan document itself.

Scope

Scope clauses define what features will be tested. An aid to doing this is to prioritize features using a technique such as MoSCoW (Must have, Should have, Could have, Won't have); a small sketch follows the clauses below.

3. Test Items: The items of software, hardware, and combinations of these that will be tested.

4. Features to Be Tested: The parts of the software specification to be tested.

5. Features Not to Be Tested: The parts of the software specification to be excluded from testing.
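As a toy illustration of MoSCoW feeding clauses 4 and 5, the two scope lists can be derived directly from priority tags. The feature names and tags below are invented:

    # Invented example: deriving the scope clauses from MoSCoW priorities.
    moscow = {
        "login": "Must",
        "password reset": "Should",
        "profile themes": "Could",
        "legacy import": "Won't",  # explicitly out of scope this release
    }

    features_to_be_tested = [f for f, p in moscow.items() if p != "Won't"]
    features_not_to_be_tested = [f for f, p in moscow.items() if p == "Won't"]

    print(features_to_be_tested)      # ['login', 'password reset', 'profile themes']
    print(features_not_to_be_tested)  # ['legacy import']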


Resource

Resource clauses give the overall view of the resources to deliver the tasks.

11. Environmental Needs: What is needed in the way of testing software, hardware, offices etc.

12. Responsibilities: Who has responsibility for delivering the various parts of the plan.

13. Staffing And Training Needs: The people and skills needed to deliver the plan.


Time

Time clauses specify what tasks are to be undertaken to meet the quality objectives, and when they will occur.

10. Testing Tasks: The tasks themselves, their dependencies, the elapsed time they will take, and the resource required.

14. Schedule: When the tasks will take place.

Often these two clauses refer to an appendix or another document that contains the detail.
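To make the link between these two clauses concrete, here is a small sketch that derives each task's earliest start day from its dependencies and elapsed times. The tasks and durations are invented:

    # Invented example: task -> (elapsed days, list of prerequisite tasks).
    tasks = {
        "write test cases": (5, []),
        "prepare test data": (2, ["write test cases"]),
        "build test environment": (3, []),
        "execute system tests": (10, ["prepare test data", "build test environment"]),
        "report results": (1, ["execute system tests"]),
    }

    def earliest_start(task: str) -> int:
        """Earliest start day: every prerequisite must finish first."""
        _, deps = tasks[task]
        return max((earliest_start(d) + tasks[d][0] for d in deps), default=0)

    for name in tasks:
        print(f"{name}: starts day {earliest_start(name)}")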


Quality

Quality clauses define the standard required from the testing activities.

2. Introduction: A high-level view of the testing standard required, including what type of testing it is.

6. Approach: The details of how the testing process will be followed.

7. Item Pass/Fail Criteria: Defines the pass and failure criteria for an item being tested (a small sketch follows at the end of this section).

9. Test Deliverables: Which test documents and other deliverables will be produced.

The associated article on test documentation gives details of the IEEE 829 documentation.
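Clause 7 works best when the criteria are mechanical enough to evaluate without debate. As a minimal illustration (the thresholds are invented, not from the standard):

    # Invented criteria: an item passes only if every planned test ran,
    # nothing failed, and no severe defects remain open.
    def item_passes(planned: int, run: int, failed: int, open_severe: int) -> bool:
        return run == planned and failed == 0 and open_severe == 0

    print(item_passes(120, 120, 0, 0))  # True
    print(item_passes(120, 118, 0, 0))  # False: coverage incomplete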

Risk

Risk clauses define in advance what could go wrong with a plan and the measures that will be taken to deal with these problems. An outline of risk management is in an associated article.

8. Suspension Criteria And Resumption Requirements: This is a particular risk clause to define under what circumstances testing would stop and restart.

15. Risks And Contingencies: This defines all other risk events, their likelihood, impact and countermeasures to overcome them.
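Clause 8 likewise benefits from being concrete. A speculative sketch of a suspension and resumption check, with invented thresholds:

    # Invented thresholds for a suspension/resumption check.
    MAX_OPEN_BLOCKERS = 0      # suspend if any blocking defect is open
    MAX_ENV_OUTAGE_HOURS = 4   # suspend if the test environment is down this long

    def should_suspend(open_blockers: int, env_outage_hours: float) -> bool:
        return (open_blockers > MAX_OPEN_BLOCKERS
                or env_outage_hours > MAX_ENV_OUTAGE_HOURS)

    def may_resume(open_blockers: int, env_outage_hours: float) -> bool:
        """Resumption requirement: the suspending condition has cleared."""
        return not should_suspend(open_blockers, env_outage_hours)

    print(should_suspend(2, 0))  # True: blocking defects are open
    print(may_resume(0, 0))      # True: clear to restart testing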

Plan Clauses

These clauses are parts of the plan structure.

1. Test Plan Identifier: A unique name or code, including a version, by which the plan can be identified in the project's documentation.

16. Approvals: The signatures of the various stakeholders in the plan, to show they agree in advance with what it says.
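Putting the 16 clauses together: if you would rather keep the skeleton in code than in a word processor, here is one minimal sketch of the clauses as a Python dataclass. The field names and loose types are my own choices; the standard dictates content, not representation:

    from dataclasses import dataclass, field

    @dataclass
    class IEEE829TestPlan:
        """One field per clause of the IEEE 829 test plan standard."""
        test_plan_identifier: str
        introduction: str
        test_items: list = field(default_factory=list)
        features_to_be_tested: list = field(default_factory=list)
        features_not_to_be_tested: list = field(default_factory=list)
        approach: str = ""
        item_pass_fail_criteria: str = ""
        suspension_and_resumption: str = ""
        test_deliverables: list = field(default_factory=list)
        testing_tasks: list = field(default_factory=list)
        environmental_needs: str = ""
        responsibilities: str = ""
        staffing_and_training_needs: str = ""
        schedule: str = ""
        risks_and_contingencies: list = field(default_factory=list)
        approvals: list = field(default_factory=list)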

For more information, try Googling it!

What are IEEE 829 Test Plan Standards?

Recently I have been reading up on testing jargon and came across the IEEE 829 test plan standard, something that is not generally well known.

According to the source, these are the clauses that make a test plan IEEE 829 compliant:

1. Test plan identifier.
2. Introduction.
3. Test items.
4. Features to be tested.
5. Features not to be tested.
6. Approach.
7. Item pass/fail criteria.
8. Suspension criteria and resumption requirements.
9. Test deliverables.
10. Testing tasks.
11. Environmental needs.
12. Responsibilities.
13. Staffing and training needs.
14. Schedule.
15. Risks and contingencies.
16. Approvals.

For more information, try Googling it!

Wipro Is the Best Software Testing Company

IT major Wipro Technologies has won the best practice award for software testing, set up by the US-based International Institute for Software Testing (IIST), the company announced on Friday.

The company's testing services division was given the award for innovating a statistical tool that reduces the time taken to test a product.

'As part of our proprietary tools suite code-named StORM (statistics, operations, research, matrix), the tool helps us to reduce test case development effort by about 30 percent,' Wipro vice-president and testing services' head C.P. Gangadharaiah said in a statement.

With over 10 years of experience in independent testing services, Wipro is the largest third-party offshore testing service provider worldwide.

It has also set up the first wireless fidelity (Wi-Fi) pre-certification and certification lab at its Bangalore campus.

The IT bellwether offers software testing services to global customers in partnership with other leading vendors such as Hewlett-Packard, IBM and Microsoft.

'The award is one of the initiatives we have taken to advance the software testing profession and encourage IT firms to adopt best practices. The award also enables firms to share their best practices with the software testing community,' IIST chairman Magdy Hanna said.

Wipro was chosen for the award out of 36 technology firms worldwide that were short-listed by the institute.

WinRunner Frequently Asked Questions, Continued...

1) How does WinRunner evaluate test results?
Ans.
Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
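WinRunner records its checkpoints in TSL, so the snippet below is not WinRunner code; it is only a generic Python illustration of the expected-versus-actual comparison described in this answer, with invented field names:

    # Compare captured expected values against the application's current
    # values and log any mismatches, as a checkpoint report would.
    expected = {"balance": "100.00", "status": "Active"}
    actual   = {"balance": "100.00", "status": "Locked"}

    mismatches = {k: (expected[k], actual.get(k))
                  for k in expected if expected[k] != actual.get(k)}

    for field_name, (exp, act) in mismatches.items():
        print(f"checkpoint mismatch on '{field_name}': expected {exp!r}, got {act!r}")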

2) Have you performed debugging of the scripts?
Ans.
Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug scripts using the Step, Step Into, and Step Out functionality provided by WinRunner.

3) How do you run your test scripts?
Ans.
We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

4) How do you analyze results and report the defects?
Ans.
Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

5) What is the use of Test Director software?
Ans.
TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

WinRunner Frequently Asked Questions

1) How have you used WinRunner in your project?
Ans. Yes, I have used WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.

2) Explain WinRunner testing process?
Ans.
The WinRunner testing process involves six main stages:
i. Create GUI Map File: so that WinRunner can recognize the GUI objects in the application being tested.
ii. Create Test Scripts: by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
iii. Debug Tests: run tests in Debug mode to make sure they run smoothly.
iv. Run Tests: run tests in Verify mode to test the application.
v. View Results: determine the success or failure of the tests.
vi. Report Defects: if a test run fails due to a defect in the application being tested, report information about the defect directly from the Test Results window.

3) What is contained in the GUI map?
Ans.
WinRunner stores information it learns about a window or object in a GUI map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each object in the GUI map file has a logical name and a physical description. There are two types of GUI map files:
i. Global GUI Map File: a single GUI map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI map file for each test created.

4) How does WinRunner recognize objects on the application?
Ans.
WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.
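The GUI map itself is a proprietary WinRunner file, but the lookup described here -- logical name to physical description, then a match on properties -- can be sketched generically. Everything below is an invented illustration, not WinRunner's actual format:

    # Illustrative GUI map: logical name -> physical description (properties).
    gui_map = {
        "Login":     {"class": "window", "label": "Login"},
        "OK Button": {"class": "push_button", "label": "OK"},
    }

    # Pretend inventory of the objects currently present in the application.
    app_objects = [
        {"class": "window", "label": "Login", "handle": 1},
        {"class": "push_button", "label": "OK", "handle": 2},
    ]

    def find_object(logical_name: str) -> dict:
        """Locate an object whose properties all match the map's description."""
        description = gui_map[logical_name]
        for obj in app_objects:
            if all(obj.get(k) == v for k, v in description.items()):
                return obj
        raise LookupError(f"object {logical_name!r} not found in application")

    print(find_object("OK Button"))  # matched by class and label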

5) Have you created test scripts and what is contained in the test scripts?
Ans.
Yes, I have created test scripts. They contain statements in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance a recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.