Testing Tools Material

Published May 2017

Software Testing Material

Software Testing: Testing is a process of executing a program with the intent of finding errors.

Software Engineering: Software engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.

Software engineering is based on Computer Science, Management Science, Economics, Communication Skills and an Engineering approach.

What should be done during testing? Confirm that the product:
• has been developed according to specifications
• works perfectly
• satisfies customer requirements

Why should we do testing?
• Error-free, superior product
• Quality assurance to the client
• Competitive advantage
• Cut down costs

How to test? Testing can be done in the following ways:
• Manually
• Automation (by using tools like WinRunner, LoadRunner, TestDirector …)
• A combination of manual and automation

Software Project: A problem solved by some people through a process is called a project.

Information Gathering – Requirements Analysis – Design – Coding – Testing – Maintenance: together these phases are called a project.

Software Project: Problem → Process → Product.

Software Development Phases: Information Gathering: It encompasses requirements gathering at the strategic business level.

Planning: To provide a framework that enables the management to make reasonable estimates of:


• Resources
• Cost
• Schedules
• Size

Requirements Analysis: Data, functional and behavioral requirements are identified.
• Data Modeling: Defines data objects, attributes, and relationships.
• Functional Modeling: Indicates how data are transformed in the system.
• Behavioral Modeling: Depicts the impact of events.

Design: Design is the engineering representation of the product that is to be built.
• Data Design: Transforms the information domain model into the data structures that will be required to implement the software.
• Architectural Design: Defines the relationships between the major structural elements of the software. Represents the structure of data and program components that are required to build a computer-based system.
• Interface Design: Creates an effective communication medium between a human and a computer.
• Component-Level Design: Transforms structural elements of the software architecture into a procedural description of software components.

Coding: Translation of the design into source code (machine-readable form).

Testing: Testing is a process of executing a program with the intent of finding errors.
• Unit Testing: Concentrates on each unit (module, component …) of the software as implemented in source code.
• Integration Testing: Putting the modules together and constructing the software architecture.
• System and Functional Testing: The product is validated together with other system elements and tested as a whole.
• User Acceptance Testing: Testing by the user to collect feedback.

Maintenance: Change associated with error correction, adaptation and enhancement.

• Correction: Changes software to correct defects.
• Adaptation: Modifies the software to accommodate changes in its external environment.
• Enhancement: Extends the software beyond its original functional requirements.
• Prevention: Changes software so that it can be more easily corrected, adapted and enhanced.

Business Requirements Specification (BRS): Consists of definitions of customer requirements. Also called CRS/URS (Customer Requirements Specification / User Requirements Specification).


Software Requirements Specification (S/wRS): Consists of the functional requirements to develop and the system requirements (s/w & h/w) to use.

Review: A verification method to estimate the completeness and correctness of documents.

High-Level Design Document (HLDD): Consists of the overall hierarchy of the system in terms of modules.

Low-Level Design Document (LLDD): Consists of every sub-module in terms of structural logic (ERD) and backend logic (DFD).

Prototype: A sample model of an application without functionality (screens) is called a prototype.

White Box Testing: A coding-level testing technique to verify the completeness and correctness of the programs with respect to the design. Also called Glass Box Testing or Clear Box Testing.

Black Box Testing: A .exe-level testing technique to validate the functionality of an application with respect to customer requirements. During this test the test engineer validates internal processing depending on the external interface.

Grey Box Testing: A combination of white box and black box testing.

Build: A .exe form of an integrated module set is called a build.

Verification: Is the system being built right or wrong?

Validation: Is it the right system or not?

Software Quality Assurance (SQA): SQA concepts are monitoring and measuring the strength of the development process. Ex: LCT (Life Cycle Testing).

Quality:
• Meet customer requirements
• Meet customer expectations (cost to use, speed in process or performance, security)
• Possible cost
• Time to market

For developing quality software we need LCD (Life Cycle Development) and LCT (Life Cycle Testing).

LCD: Multiple stages of development, where every stage is verified for completeness.

V-Model:

Build: When coding-level testing is over, the completely integration-tested module set is called a build. A build is developed after integration testing (.exe).


Test Management: Testers maintain some documents related to every project. They refer to these documents for future modifications.

V-Model phases and the corresponding test activities:
• Information Gathering & Analysis – Assessment of Development Plan, Prepare Test Plan, Requirements Phase Testing
• Design and Coding – Design Phase Testing, Program Phase Testing (WBT)
• Install Build – Functional & System Testing, User Acceptance Testing, Test Environment Process
• Maintenance – Port Testing, Test Software Changes, Test Efficiency

Port Testing: This is to test the installation process.

Change Request: A request made by the customer to modify the software.

Defect Removal Efficiency: DRE = a / (a + b), where
a = total number of defects found by testers during testing,
b = total number of defects found by the customer during maintenance.

DRE is also called DD (Defect Deficiency). BBT, UAT and the test management process are where the independent testers or testing team will be involved.

Refinement form of V-Model: From a cost and time point of view, the V-model is not applicable to small-scale and medium-scale companies. These organizations maintain a refinement form of the V-model.
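The DRE formula above can be sketched in Python; the function name and the example defect counts are illustrative, not from the text.

```python
# Defect Removal Efficiency: DRE = a / (a + b), where
# a = defects found by testers during testing,
# b = defects found by the customer during maintenance.

def defect_removal_efficiency(found_in_testing: int, found_in_maintenance: int) -> float:
    """Return DRE as a fraction between 0.0 and 1.0."""
    total = found_in_testing + found_in_maintenance
    if total == 0:
        raise ValueError("no defects recorded")
    return found_in_testing / total

# Example: testers found 90 defects, the customer found 10 more.
dre = defect_removal_efficiency(90, 10)
print(f"DRE = {dre:.0%}")  # DRE = 90%
```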


Refinement Form of V-Model (each document on the left is verified by the test stage on the right):
• BRS/URS/CRS – User Acceptance Testing
• S/wRS – Functional & System Testing
• HLDD – Integration Testing
• LLDD – Unit Testing
• Code

Fig: Refinement Form of V-Model

Development starts with information gathering. After requirements gathering, the BRS/CRS/URS is prepared. This is done by the Business Analyst.

During requirements analysis, all the requirements are analyzed. At the end of this phase the S/wRS is prepared. It consists of the functional (customer) requirements plus the system requirements (h/w + s/w). It is prepared by the System Analyst.

During the design phase two types of designs are done: HLDD and LLDD. Tech Leads are involved.

During the coding phase, programs are developed by programmers. During unit testing, they conduct program-level testing with the help of WBT techniques.

During integration testing, the testers and programmers (or test programmers) integrate the modules to test with respect to the HLDD.

During system and functional testing, the actual testers are involved and conduct tests based on the S/wRS.

During UAT, customer-site people are also involved, and they perform tests based on the BRS.

From the above model, small-scale and medium-scale organizations also conduct life cycle testing, but they maintain a separate team only for functional and system testing.

Reviews during Analysis: The Quality Analyst decides on five topics. After completion of information gathering and analysis, a review meeting is conducted to decide the following five factors:


1. Are they complete?
2. Are they correct? (Are they the right requirements?)
3. Are they achievable?
4. Are they reasonable? (with respect to cost & time)
5. Are they testable?

Reviews during Design: After the completion of the analysis of customer requirements and their reviews, technical support people (Tech Leads) concentrate on the logical design of the system. In this stage they develop the HLDD and LLDD.

After the completion of the above design documents, they (Tech Leads) concentrate on reviewing the documents for correctness and completeness. In this review they apply the factors below:
• Is the design good? (understandable or easy to refer to)
• Are they complete? (are all the customer requirements satisfied or not)
• Are they correct? Are they the right requirements? (is the design flow correct or not)
• Are they followable? (is the design logic correct or not)
• Do they handle error handling? (the design should specify the negative flow as well as the positive flow)

Example (login flow): User Information → Login → Inbox for a valid user, or an Invalid User error path.

Unit Testing: After the completion of design and design reviews, programmers concentrate on coding. During this stage they conduct program-level testing with the help of WBT techniques. WBT is also known as glass box testing or clear box testing.

WBT is based on the code. The senior programmers conduct testing on the programs. WBT is applied at the module level. There are two types of WBT techniques:

1. Execution Testing
• Basis path coverage (correctness of every statement execution)
• Loops coverage (correctness of loop termination)


• Program technique coverage (fewer memory cycles and CPU cycles during execution)

2. Operations Testing: Whether the software runs under the customer-expected environment platforms (such as OS, compilers, browsers and other system software).

Integration Testing: After the completion of unit testing, development people concentrate on integration testing, once the dependent modules complete unit testing. During this test, programmers verify the integration of modules with respect to the HLDD (which contains the hierarchy of modules).

There are two types of approaches to conduct integration testing:
• Top-down approach
• Bottom-up approach

Stub: A called program. It sends back control to the main module instead of a sub-module.
Driver: A calling program. It invokes a sub-module in place of the main module.
Top-Down: This approach starts testing from the root.

Diagram: Main calls Sub Module 1 and a Stub (in place of Sub Module 2).

Bottom-Up: This approach starts testing from the lower-level modules. Drivers are used to invoke the sub-modules. (Ex: for login, create a driver that supplies a default user id and password.)

Diagram: a Driver (in place of Main) invokes Sub Module 1 and Sub Module 2.
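The stub and driver ideas above can be sketched in Python. The module names (`login`, `fetch_inbox`, `main`) are hypothetical, chosen to echo the login example; this is an illustration, not a prescribed implementation.

```python
# Illustrative sketch of drivers and stubs in integration testing.
# Suppose the real call chain is: main() -> login(uid, pwd) -> fetch_inbox(uid).

# Bottom-up: 'login' is finished but 'main' is not, so a *driver*
# (a calling program) invokes the sub-module with default test data.
def login(uid, pwd):
    return uid == "admin" and pwd == "secret"

def driver_for_login():
    """Driver: stands in for the not-yet-written main module."""
    assert login("admin", "secret") is True
    assert login("admin", "wrong") is False
    return "login module passed"

# Top-down: 'main' is finished but 'fetch_inbox' is not, so a *stub*
# (a called program) returns control with a canned answer.
def fetch_inbox_stub(uid):
    """Stub: stands in for the not-yet-written sub-module."""
    return ["<inbox not implemented: canned message>"]

def main(uid, pwd, fetch_inbox=fetch_inbox_stub):
    return fetch_inbox(uid) if login(uid, pwd) else "invalid user"

print(driver_for_login())
print(main("admin", "secret"))
print(main("guest", "x"))
```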


Sandwich: This approach combines the top-down and bottom-up approaches of integration testing. In this approach, the middle-level modules are tested using drivers and stubs.

Diagram: a Driver (in place of Main) invokes the middle-level Sub Module 1, with a Stub standing in for its lower-level sub-modules (Sub Module 2, Sub Module 3).

System Testing:
• Conducted by a separate testing team
• Follows black box testing techniques
• Depends on the S/wRS
• Build-level testing to validate internal processing depending on the external interface
• This phase is divided into 4 divisions

After the completion of coding and the corresponding tests (unit & integration), the development team releases a finally integrated set of all modules as a build. After receiving a stable build from the development team, the separate testing team concentrates on functional and system testing with the help of BBT.

This testing is classified into 4 divisions:
• Usability Testing (ease of use or not; low priority in testing)
• Functional Testing (functionality is correct or not; medium priority in testing)
• Performance Testing (speed of processing; medium priority in testing)
• Security Testing (to break the security of the system; high priority in testing)

Usability and functional testing are called core testing; the performance and security testing techniques are called advanced testing. Usability testing is static testing; functional testing is dynamic testing. From the tester's point of view, the functional and usability tests are the most important.

Usability Testing: Checks the user-friendliness of the application or build (WYSIWYG). Usability testing consists of the following subtests:


• User Interface Testing
  – Ease of use (understandable to end users)
  – Look & feel (pleasantness or attractiveness of screens)
  – Speed in interface (fewer events to complete a task)
• Manual Support Testing: In general, technical writers prepare user manuals after completion of all possible test execution and the corresponding modifications. Nowadays, help documentation is released along with the main application.

Flow: Development team releases the build → User Interface Testing → remaining system testing techniques (functionality, performance and security tests) → Manual Support Testing.

Help documentation is also called the user manual. Actually, user manuals are prepared after the completion of all other system test techniques and after resolving all the bugs.

Functional Testing: During this stage of testing, the testing team concentrates on "meet customer requirements". They test whether the functionality for which the system was developed is met or not. For every project, functionality testing is the most important; most of the testing tools available in the market are of this type. Functional testing consists of the following subtests. (Roughly 80% of system testing is functional testing, and 80% of functional testing is functionality/requirements testing.)

Functionality or Requirements Testing: During this subtest, the test engineer validates the correctness of every functionality in the application build through the coverages below. If the team has less time for system testing, they will do functionality testing only.


Functionality or Requirements Testing has the following coverages:
• Behavioral coverage (object properties checking)
• Input domain coverage (correctness of size and type of every input object)
• Error handling coverage (preventing negative navigation)
• Calculations coverage (correctness of output values)
• Backend coverage (data validation & data integrity of database tables)
• URL coverage (link execution in web pages)
• Service levels (order of functionality or services)
• Successful functionality (a combination of all the above)

All the above coverages are mandatory.

Input Domain Testing: During this test, the test engineer validates the size and type of every input object. In this coverage, the test engineer prepares boundary values and equivalence classes for every input object.

Ex: A login process allows a user id and a password. The user id allows alphanumerics, 4–16 characters long. The password allows alphabets, 4–8 characters long.

Boundary Value Analysis: Boundary values are used for testing the size and range of an object.

Equivalence Class Partitioning: Equivalence classes are used for testing the type of the object.

Recovery Testing: This test is also known as reliability testing. During this test, test engineers validate whether the application build can recover from abnormal situations or not. Ex: power failure during a process, network disconnection, server down, database disconnected, etc.
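The login example above (user id: alphanumeric, 4–16 characters; password: alphabetic, 4–8 characters) can be sketched as follows. The validator names are illustrative.

```python
# Input domain testing for the login example: boundary value analysis
# checks sizes at and around the limits; equivalence class partitioning
# picks one representative value per input type.

def valid_user_id(uid: str) -> bool:
    return uid.isalnum() and 4 <= len(uid) <= 16

def valid_password(pwd: str) -> bool:
    return pwd.isalpha() and 4 <= len(pwd) <= 8

# Boundary value analysis: lengths 3, 4, 16 and 17.
for uid in ("abc", "abcd", "a" * 16, "a" * 17):
    print(len(uid), valid_user_id(uid))   # 3 False, 4 True, 16 True, 17 False

# Equivalence class partitioning: one value per class.
print(valid_password("abcd"))    # valid class: alphabetic, in range
print(valid_password("ab1d"))    # invalid class: wrong type (contains a digit)
```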

Flow: Abnormal state → Backup & Recovery Procedures → Normal state.

Recovery testing is an extension of error handling testing.


Compatibility Testing: This test is also known as portability testing. During this test, the test engineer validates the continuity of the application's execution on customer-expected platforms (OS, compilers, browsers, etc.). During compatibility testing two types of problems arise:
1. Forward compatibility
2. Backward compatibility

Forward compatibility: The application as developed is ready to run, but the project technology or environment (such as the OS) does not support running it.


Backward compatibility: The application itself is not ready to run on the existing technology or environment.


Configuration Testing: This test is also known as hardware compatibility testing. During this test, the test engineer validates whether the application build supports different technologies, i.e. hardware devices, or not.

Inter-Systems Testing: This test is also known as end-to-end testing. During this test, the test engineer validates whether the application build can coexist with other existing software at the customer site to share resources (h/w or s/w).

Example: at a local e-Seva center, the Water Bill Automation System (WBAS), Electricity Bill Automation System (EBAS), Telephone Bill Automation System (TPBAS) and a newly added Income Tax Bill Automation System (ITBAS) all share a local database server (the sharable resource) and connect to remote servers.


Second example: a Banking Information System and its Bank Loans component.

In the first example, one system is our application and the other is sharable; in the second example, it is the same system but different components.

• System software level: Compatibility Testing
• Hardware level: Configuration Testing
• Application software level: Inter-Systems Testing

Installation Testing: Testing the application's installation process in the customer-specified environment and conditions.

Diagram: the build, plus the required s/w components to run the application, is installed from the server onto the test engineers' systems (a customer-site-like environment), checking: 1. the setup program, 2. easy interface, 3. occupied disk space.

The following conditions or tests are checked during the installation process:
• Setup program: Does the setup start or not?
• Easy interface: During installation, does it provide an easy interface or not?
• Occupied disk space: How much disk space does it occupy after the installation?


Sanitation Testing: This test is also known as garbage testing. During this test, the test engineer finds extra features in the application build with respect to the S/wRS. Most testers may not encounter this type of problem.

Example: a login screen specified with only User Id and Password fields also contains an extra Forgot Password feature.

Parallel or Comparative Testing: During this test, the test engineer compares the application build with similar types of applications, or with old versions of the same application, to find competitiveness.

This comparative testing can be done from two views:
• Similar types of applications in the market.
• Upgraded versions of the application compared with older versions.

Performance Testing: It is an advanced testing technique and expensive to apply. During this test, the testing team concentrates on the speed of processing. Performance testing is classified into the following subtests:
1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing

Load Testing: This test is also known as scalability testing. During this test, the test engineer executes the application under the customer-expected configuration and load to estimate performance.

Load: the number of users trying to access the system at a time.

This test can be done in two ways:
1. Manual testing.
2. By using a tool, such as LoadRunner.

Stress Testing: During this test, the test engineer executes the application build under the customer-expected configuration and peak load to estimate performance.

Data Volume Testing: A tester conducts this test to find the maximum size of data allowable or maintainable by the application build.
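A minimal load-test sketch (illustrative only, not how LoadRunner works internally): simulate N concurrent virtual users hitting an operation and record the worst response time. The function names and the simulated 10 ms processing delay are assumptions.

```python
# Simulate concurrent load with a thread pool and measure response times.
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(user_id: int) -> float:
    """Stand-in for one virtual user's request; returns its response time."""
    start = time.perf_counter()
    time.sleep(0.01)               # simulated server processing
    return time.perf_counter() - start

def run_load(users: int) -> float:
    """Run `users` concurrent transactions; return the worst response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(transaction, range(users)))
    return max(times)

print(f"worst response under load: {run_load(50):.3f}s")
```

Stress testing differs only in pushing `users` to (and beyond) the peak expected load.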


Storage Testing: Execution of the application under huge amounts of resources, to estimate the storage limitations to be handled by the application, is called storage testing.

Diagram: performance rises as resources increase, but beyond a limit additional resources degrade performance (thrashing).

Security Testing: It is also an advanced testing technique and complex to apply. Conducting this test requires highly skilled persons who have security domain knowledge.

This test is divided into three subtests:
• Authorization: Verifies the user's identity to check whether he is an authorized user or not.
• Access Control: Also called privileges testing. The rights given to a user to do a system task.
• Encryption / Decryption: Encryption converts actual data into a secret code that may not be understandable to others; decryption converts the secret data back into the actual data.

Diagram: the client encrypts the source data before sending and the server decrypts it at the destination; for the response, the server encrypts and the client decrypts.
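The encryption/decryption round trip between client and server can be illustrated with a toy cipher. This is a simple XOR scheme chosen purely for illustration; it is NOT suitable for real security.

```python
# Toy illustration of the encrypt -> send -> decrypt round trip.
# XOR with a repeating key is symmetric: applying it twice restores the data.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key; encryption and decryption are the same op."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"transfer $500"
key = b"secret"

ciphertext = xor_cipher(message, key)      # client encrypts before sending
plaintext = xor_cipher(ciphertext, key)    # server decrypts on receipt

print(ciphertext != message)   # data on the wire is scrambled
print(plaintext == message)    # the receiver recovers the original data
```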

User Acceptance Testing: After the completion of all possible system test execution, the organization concentrates on user acceptance testing to collect feedback. To conduct user acceptance tests, there are two approaches: Alpha (α) testing and Beta (β) testing.

Note: Software development projects are of two types: a software application (also called a project) and a product.

Software Application (Project): Requirements are taken from the client and the project is developed. The software is for only one company and has a specific customer. For this, alpha testing is done.


Product: Requirements are gathered from the market and the product is developed. This software may be used by more than one company and has no specific customer. For this, a β-version or trial version is released in the market for beta testing.

Alpha Testing:
• For software applications, specific to one customer
• By the real customer
• At the development site
• Virtual environment
• Collect feedback

Beta Testing:
• For software products
• By customer-site-like people
• In a customer-site-like environment
• Real environment
• Collect feedback

Testing during Maintenance:

After the completion of UA testing, the organization concentrates on forming a Release Team (RT). This team conducts port testing at the customer site, to estimate the completeness and correctness of the application installation. During port testing, the release team validates the following factors at the customer site:
• Compact installation (fully and correctly installed or not)
• On-screen displays
• Overall functionality
• Input device handling
• Output device handling
• Secondary storage handling
• OS error handling
• Co-existence with other software

The above tests are done by the release team. After the completion of the above testing, the release team gives training and application support at the customer site for a period. While customer-site people use the application, they send Change Requests (CR) to the company. When a CR is received, the following steps are done. Based on the type, there are two kinds of CR:
1. Enhancement
2. Missed Defect


Change Request flow:
• Enhancement → CCB → impact analysis → perform the change → test the s/w change (Tester).
• Missed Defect → Developers → impact analysis → perform the change → review the old test process capability to improve → test the s/w change (Tester).

Change Control Board (CCB): It is the team which handles customer requests for enhancement changes.

Testing Stages vs Roles:

• Reviews in Analysis – Business Analyst / Functional Lead
• Reviews in Design – Technical Support / Technical Lead
• Unit Testing – Senior Programmer
• Integration Testing – Developer / Test Engineer
• Functional & System Testing – Test Engineer
• User Acceptance Testing – Customer-site people with involvement of the testing team
• Port Testing – Release Team
• Testing during Maintenance / Test Software Changes – Change Control Board

Testing Team:

From the refinement form of the V-model, small-scale and medium-scale companies maintain a separate testing team for some of the stages in LCT. Within these teams an organization maintains the roles below.

Quality Control: Defines the objectives of testing.
Quality Assurance: Defines the approach (done by the Test Manager).


Test Manager: Schedules that approach.
Test Lead: Maintains the testing team with respect to the test plan.
Test Engineer: Conducts testing to find defects.

Hierarchy: Quality Control → Quality Assurance → Project Manager / Test Manager → Project Lead / Test Lead → Programmers / Test Engineer (QA Engineer).

Quality Control: Defines the objectives of testing.
Quality Assurance: Defines the approach (done by the Test Manager).
Test Manager: Scheduling and planning.
Test Lead: Applies the plan.
Test Engineer: Follows it.

Testing Terminology:

Monkey / Chimpanzee Testing: Covering only the main activities of the application during testing is called monkey testing (used when time is short).

Gorilla Testing: Covering a single functionality with multiple possibilities (no rules and regulations) to test an issue is called gorilla testing.

Exploratory Testing: Level-by-level coverage of activities in the application during testing is called exploratory testing (covering the main activities first and the other activities next).

Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks whether the build released by the development team is stable enough for complete testing or not.

Flow: Development team releases the build → Sanity Test / Tester Acceptance Test → Functional & System Testing.


Smoke Testing: An extra shakeup in sanity testing is called smoke testing. The testing team rejects a build, with reasons, back to the development team before starting testing.

Bebugging: The development team releases a build with known bugs to the testing team.

Big Bang Testing: A single stage of testing after the completion of all module development is called big bang testing. It is also known as informal testing.

Incremental Testing: A multi-stage testing process is called incremental testing. This is also known as formal testing.

Static Testing: Conducting a test without running the application is called static testing. Ex: user interface testing.

Dynamic Testing: Conducting a test by running the application is called dynamic testing. Ex: functional testing, load testing, compatibility testing.

Manual vs Automation: A tester conducting a test on an application without using any third-party testing tool is manual testing. A tester conducting a test with the help of a software testing tool is automation.

Typically only 40%–60% of testing is automated; candidates are chosen by impact and criticality.

Need for Automation: When tools are not available, teams do manual testing only. If the company already has testing tools, it may follow automation. To verify the need for automation, the following two factors are considered.

Impact of the test: Indicates test repetition. Ex: a Multiply operation taking inputs No1 and No2 and producing Result is re-tested with many data sets.


Criticality: Ex: load testing for 1000 users.

Criticality indicates that a test is complex to apply manually; impact indicates test repetition.

Retesting: Re-executing the application to conduct the same test with multiple test data is called retesting.

Regression Testing: Re-executing our tests on a modified build, to ensure that the bug fix works and that no side effects have occurred, is called regression testing.

Any dependent modules may also cause side effects.

Diagram: tests run on the build; failed tests (e.g. 11 failed, 10 passed) go back to development; on the modified build, the failed tests are re-executed along with the impacted previously-passed tests.
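The retesting/regression distinction described above can be sketched as follows; the `multiply` and `add` functions are hypothetical stand-ins for a fixed module and a dependent module.

```python
# Retesting: the same test re-executed with multiple test data.
# Regression: previously passing tests re-run on the modified build
# to catch side effects in dependent modules.

def multiply(a, b):        # the "modified build": the bug-fixed function
    return a * b

def add(a, b):             # a dependent module that the fix might break
    return a + b

# Retesting the fixed function with multiple data sets:
for a, b, expected in [(2, 3, 6), (0, 9, 0), (-4, 5, -20)]:
    assert multiply(a, b) == expected

# Regression: re-run the previously passing test on the dependent module too.
assert add(2, 3) == 5

print("bug fix verified, no side effects detected")
```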

Selection of Automation: Before a separate testing team starts project-level testing, the corresponding project manager, test manager or quality analyst decides the need for test automation for that project, depending on the factors below.

Type of external interface:
• GUI – Automation. CUI – Manual.
Size of external interface:
• Large – Automation. Small – Manual.
Expected number of releases:
• Several releases – Automation. Few releases – Manual.
Maturity between expected releases:
• More maturity – Manual. Less maturity – Automation.
Tester efficiency:
• Test engineers have knowledge of automation tools – Automation.
• Test engineers have no knowledge of automation tools – Manual.
Support from senior management:


• Management accepts – Automation. Management rejects – Manual.

Test documents hierarchy (document – level – owner):
• Testing Policy – company level – C.E.O.
• Test Strategy – company level – Test Manager / QA / PM.
• Test Methodology – Test Manager / QA / PM.
• Test Plan – Test Lead.
• Test Cases / Test Procedure / Test Script – project level – Test Lead, Test Engineer.
• Test Log / Defect Report – Test Engineer.
• Test Summary Report – Test Lead.

Testing Policy: It is a company-level document, developed by QC people. This document defines the testing objectives for developing quality software. A sample policy document:

Address
Testing Definition: Verification & validation of s/w
Testing Process: Proper test planning before starting testing
Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP
Testing Measurements: QAM, TMM, PCM
CEO Sign

QAM: Quality Assessment Measurements

TMM: Test Management Measurements
PCM: Process Capability Measurements
Note: The test policy document indicates the trend of the organization.

Test Strategy:
1. Scope & Objective: Definition, need and purpose of testing in your organization.
2. Business Issues: Budget controlling for testing.
3. Test Approach: Defines the testing approach between development stages and testing factors. The TRM (Test Responsibility Matrix, or Test Matrix) defines the mapping between test factors and development stages.
4. Test Environment Specifications: Required test documents developed by the testing team during testing.
5. Roles and Responsibilities: Defines the names of jobs in the testing team with their required responsibilities.
6. Communication & Status Reporting: Required negotiation between two consecutive roles in testing.
7. Testing Measurements and Metrics: To estimate work completion in terms of quality assessment, test management and process capability.
8. Test Automation: Possibilities to go for test automation with respect to the project requirements and the testing facilities / tools available (either complete automation or selective automation).
9. Defect Tracking System: Required negotiation between the development and testing teams to fix and resolve defects.
10. Change and Configuration Management: Required strategies to handle change requests from the customer site.
11. Risk Analysis and Mitigations: Analyzing common future problems that appear during testing, and possible solutions to recover.
12. Training Plan: Need of training for the testing team to start / conduct / apply testing.

Test Factor: A test factor defines a testing issue. There are 15 common test factors in S/w Testing.

Ex: QC – quality; PM/QA/TM – test factor; TL – testing techniques; TE – test cases.
PM/QA/TM: Ease of use; TL: UI testing; TE: MS 6 rules.
PM/QA/TM: Portable; TL: compatibility testing; TE: run on different OSs.

Test Factors:
1. Authorization: Validation of users connecting to the application – Security Testing; Functionality / Requirements Testing
2. Access Control: Permission for a valid user to use a specific service – Security Testing; Functionality / Requirements Testing
3. Audit Trail: Maintains metadata about operations – Error Handling Testing; Functionality / Requirements Testing
4. Correctness: Meets customer requirements in terms of functionality – all black box testing techniques
5. Continuity in Processing: Inter-process communication – Execution Testing; Operations Testing
6. Coupling: Co-existence with other applications at the customer site – Inter-Systems Testing
7. Ease of Use: User friendliness – User Interface Testing; Manual Support Testing
8. Ease of Operate: Ease in operations – Installation Testing
9. File Integrity: Creation of internal files or backup files – Recovery Testing; Functionality / Requirements Testing
10. Reliability: Whether the application recovers from abnormal situations, and whether backup files are used – Recovery Testing; Stress Testing
11. Portable: Runs on customer-expected platforms – Compatibility Testing; Configuration Testing
12. Performance: Speed of processing – Load Testing; Stress Testing; Data Volume Testing; Storage Testing
13. Service Levels: Order of functionalities – Stress Testing; Functionality / Requirements Testing
14. Methodology: Follows a standard methodology during testing – Compliance Testing
15. Maintainable: Whether the application is serviceable to customers over the long term – Compliance Testing
(This list maps quality factors to testing techniques.)
Quality Gap: A conceptual gap between quality factors and the testing process is called the quality gap.
Test Methodology: The test strategy defines the overall approach. To convert that overall approach into a corresponding project-level approach, the quality analyst / PM defines the test methodology.
Step 1: Collect the test strategy.

Step 2: Determine project type:

Project Type    Info Gathering & Analysis   Design   Coding   System Testing   Maintenance
Traditional     Y                           Y        Y        Y                Y
Off-the-Shelf   X                           X        X        Y                X
Maintenance     X                           X        X        X                Y

Step 3: Determine application type: Depending on the application type and requirements, the QA decreases the number of columns in the TRM.
Step 4: Identify risks: Depending on tactical risks, the QA decreases the number of factors (rows) in the TRM.
Step 5: Determine scope of application: Depending on future requirements / enhancements, the QA tries to add back some of the deleted factors (rows in the TRM).
Step 6: Finalize the TRM for the current project.
Step 7: Prepare the test plan for work allocation.
Testing Process:

Test Initiation → Test Planning → Test Design → Test Execution (with Regression Testing and a Defect Report feedback loop) → Test Closure → Test Report

PET (Process Experts Tools and Technology): It is an advanced testing process developed by HCL, Chennai. This process is approved by the QA forum of India. It is a refinement of the V-Model.

[Figure: PET process. Development track: Information Gathering (BRS) → Analysis (S/wRS) → Design (HLDD & LLDD) → Coding → Unit Testing + Integration Testing → Initial Build. Testing track (PM / QA, Test Lead, Test Engineers): Test Initiation → Study S/wRS & Design Docs → Test Planning → Test Design → Level-0 (Sanity / Smoke / TAT) on the initial build → Test Automation → Test Batches Creation → Level-1: select a batch and start execution; on any mismatch, suspend that batch and report a defect; after defect fixing, run Level-2 (Regression) on the modified build → otherwise Test Closure → Final Regression / Pre-Acceptance / Release / Post-Mortem / Level-3 Testing → User Acceptance Test → Sign Off.]

Test Planning: After completion of test initiation, the test plan author concentrates on the test plan.

[Figure: test planning. Inputs (Development Plan, S/wRS, Design Documents, TRM) answer what to test, how to test, when to test and who is to test. Flow: Team Formation → Identify Tactical Risks → Prepare Test Plan → Review Test Plan → Test Plan.]

1. Team Formation
In general, the test planning process starts with testing team formation, which depends on the factors below.

• Availability of testers
• Test duration
• Availability of test environment resources
The above three are dependent factors.

Test Duration: Common market test durations for various types of projects:
• C/S, Web, ERP projects (SAP, VB, Java) – small – 3-5 months
• System software (C, C++) – medium – 7-9 months
• Machine-critical software (Prolog, LISP) – big – 12-15 months
System software projects: network, embedded, compilers, etc.
Machine-critical software: robotics, games, knowledge bases, satellites, air traffic.
2. Identify Tactical Risks
After completion of team formation, the test plan author concentrates on risk analysis and mitigations:
1) Lack of knowledge of that domain
2) Lack of budget
3) Lack of resources (h/w or tools)
4) Lack of test data (amount)
5) Delays in deliveries (server down)
6) Lack of development process rigor
7) Lack of communication (ego problems)
3. Prepare Test Plan

Format:

1) Test plan id: Unique number or name
2) Introduction: About the project
3) Test items: Modules
4) Features to be tested: Responsible modules to test
5) Features not to be tested: Which ones, and why not
6) Feature pass/fail criteria: When each of the above features passes or fails
7) Suspension criteria: Abnormal situations during testing of the above features
8) Test environment specifications: Required documents to prepare during testing
9) Test environment: Required h/w and s/w
10) Testing tasks: Necessary tasks to do before starting testing
11) Approach: List of testing techniques to apply
12) Staff and training needs: Names of the selected testing team
13) Responsibilities: Work allocation to the above selected members
14) Schedule: Dates and timings
15) Risks and mitigations: Common non-technical problems
16) Approvals: Signatures of PM/QA and the test plan author
4. Review Test Plan

After completing the test plan, the test plan author concentrates on a review of that document for completeness and correctness. Selected testers are also involved in this review to give feedback. In this review meeting, the testing team conducts coverage analysis:
• S/wRS based coverage (what to test)
• Risks based coverage (from the risk-analysis point of view)
• TRM based coverage (whether this plan covers all tests given in the TRM)

Test Design: 

After completion of the test plan and the required training days, every selected test engineer concentrates on test design for their responsible modules. In this phase the test engineer prepares a list of test cases to conduct the defined testing on the responsible modules. There are three basic methods to prepare test cases for core-level testing:
• Business logic based test case design
• Input domain based test case design
• User interface based test case design

Business Logic based testcase design: In general, test engineers write the list of test cases depending on the use cases / functional specifications in the S/wRS. A use case in the S/wRS defines how a user can use a specific functionality in the application.

[Figure: sources of test cases – BRS → S/wRS (use cases + functional specifications) → HLDD / LLDD → Coding → .exe. Test cases are derived from the use cases and functional specifications in the S/wRS.]

To prepare test cases from use cases, we can follow the approach below:
Step 1: Collect the responsible modules' use cases.
Step 2: Select a use case and its dependencies (dependent & determinant).
  Step 2-1: Identify the entry condition.
  Step 2-2: Identify the input required.
  Step 2-3: Identify the exit condition.
  Step 2-4: Identify the output / outcome.
  Step 2-5: Study the normal flow.
  Step 2-6: Study alternative flows and exceptions.
Step 3: Prepare the list of test cases depending on the above study.
Step 4: Review the test cases for completeness and correctness.
TestCase Format:

After completing test case selection for the responsible modules, the test engineer prepares an IEEE-format document for every test condition.
TestCase Id: Unique number or name
TestCase Name: Name of the test condition
Feature to be tested: Module / feature / service
TestSuit Id: Parent batch id in which this case participates as a member
Priority: Importance of the test case –
  P0 – basic functionality
  P1 – general functionality (input domain, error handling, …)
  P2 – cosmetic test cases
  (Ex: P0 – OS, P1 – different OSs, P2 – look & feel)
Test Environment: Required h/w and s/w to execute the test case
Test Effort: Time to execute this test case, in person-hours (e.g. 20 minutes)
Test Duration: Date of execution
Test Setup: Necessary tasks to do before starting execution of this case
Test Procedure: Step-by-step procedure to execute this test case
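The IEEE-style test case format above can be captured as a simple record. This is a minimal sketch; the class and field names are illustrative, not mandated by any standard:

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the test case format described above.
@dataclass
class TestCase:
    case_id: str            # unique number or name
    name: str               # name of the test condition
    feature: str            # module / feature / service under test
    suite_id: str           # parent batch (test suite) this case belongs to
    priority: str           # "P0" basic, "P1" general, "P2" cosmetic
    environment: str        # required hardware and software
    effort_mins: int        # estimated execution time in minutes
    setup: str              # tasks to do before executing this case
    procedure: list = field(default_factory=list)  # step-by-step actions

p0 = TestCase("TC_LOGIN_01", "Valid login", "Login", "TS_AUTH", "P0",
              "Windows / browser", 20, "Create a test user",
              ["Open login page", "Enter valid credentials", "Submit"])
```

A record like this makes it easy to later filter cases by priority when building Level-0 or Level-1 batches.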


Test procedure table – the first four columns are filled during test design, the last three during test execution:

Step No. | Action | I/p Required | Expected || Result | Defect ID | Comments

TestCase Pass/Fail Criteria: When the test case passes, and when it fails.
Input Domain based TestCase Design: To prepare functionality and error handling test cases, test engineers use the use cases or functional specifications in the S/wRS. To prepare input domain test cases, test engineers depend on the data model of the project (ERD & LLDD).

Step 1: Identify input attributes in terms of size, type and constraints (size – range; type – int, float; constraint – primary key).
Step 2: Identify the critical attributes in that list, which participate in data retrievals and manipulations.
Step 3: Identify the non-critical attributes, which are simply input/output.
Step 4: Prepare BVA & ECP for every attribute:

Input Attribute | ECP (Type): Valid | ECP (Type): Invalid | BVA (Size / Range): Minimum | BVA (Size / Range): Maximum

Fig: Data Matrix
User Interface based testcase design: To conduct UI testing, test engineers write a list of test cases depending on organization-level UI rules and global UI conventions.
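Building the data matrix for a numeric attribute can be sketched as below. This is an illustrative helper (the function name and the choice of one mid-range valid value are assumptions, not from the material):

```python
# Sketch of a Data Matrix entry: for a numeric input attribute with a
# size/range constraint, derive BVA boundaries and ECP valid/invalid classes.
def data_matrix(minimum, maximum):
    return {
        "bva": {"min": minimum, "max": maximum},
        "ecp_valid": [minimum, (minimum + maximum) // 2, maximum],
        "ecp_invalid": [minimum - 1, maximum + 1],  # just outside the range
    }

# Example: an "age" field accepting values from 18 to 60.
matrix = data_matrix(18, 60)
```

Each attribute identified in Step 1 would get one such row; critical attributes (Step 2) deserve the fullest set of boundary and class values.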

To prepare these UI test cases they do not study the S/wRS, LLDD, etc. (Functionality test cases source: S/wRS. Input domain test cases source: LLDD.)
Test cases applicable to all projects:
Testcase1: Spelling checking.
Testcase2: Graphics checking (alignment, font, style, text, size, Microsoft 6 rules).
Testcase3: Meaningful error messages or not. (Error handling testing checks that the related message appears; here we test whether the message is easy to understand.)
Testcase4: Accuracy of data displayed (WYSIWYG) (amount, date of birth).
Testcase5: Accuracy of data in the database as a result of user input. (Testcase4 is at screen level; Testcase5 at database level.)

Example: the form displays Bal as 66.7 while the database table (reached via the DSN) stores 66.666.
Testcase6: Accuracy of data in the database as a result of external factors?

Example: a mail with a .gif attachment is compressed by the mail server on send and decompressed on import; the data must remain accurate end to end.
Testcase7: Meaningful help messages or not? (The first six test cases belong to UI testing; the seventh to manual support testing.)
Review Testcases: After completing test case design with the required documentation [IEEE]

for responsible modules, the testing team along with the test lead concentrates on a review of the test cases for completeness and correctness. In this review, the testing team conducts coverage analysis:
1. Business requirements based coverage
2. Use cases based coverage
3. Data model based coverage
4. User interface based coverage
5. TRM based coverage

Fig: Requirements Validation / Traceability Matrix – each business requirement maps to its sources (use cases, data model, …), and each source maps to the test cases that cover it.
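A traceability matrix can be kept as a simple mapping from requirements to covering test cases; any requirement with an empty list is a coverage gap. A minimal sketch with made-up ids:

```python
# Sketch of a Requirements Traceability Matrix: map each business
# requirement to the test cases that cover it, then flag gaps.
rtm = {
    "BR-01": ["TC-01", "TC-02"],
    "BR-02": ["TC-03"],
    "BR-03": [],  # no coverage yet
}

uncovered = [req for req, cases in rtm.items() if not cases]
```

The same structure supports the reverse question (which requirement does a failing test case trace back to) by inverting the mapping.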

Test Execution:

[Figure: test execution flow. The development site delivers the Initial Build to the testing site → Level-0 (Sanity / Smoke / TAT) → Stable Build → Test Automation → Level-1 (Comprehensive) → Defect Report → Defect Fixing / Bug Resolving (typically 8-9 cycles) → Modified Build → Level-2 (Regression) → Level-3 (Final Regression).]

Test Execution Levels vs Test Cases:
Level 0 – P0 test cases
Level 1 – P0, P1 and P2 test cases, run as batches
Level 2 – selected P0, P1 and P2 test cases with respect to the modifications
Level 3 – selected P0, P1 and P2 test cases on the final build
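The level-to-priority mapping above can be sketched as a small selection helper (a sketch; only Levels 0 and 1 are shown, since Levels 2 and 3 also depend on the modification under test):

```python
# Which test case priorities run at each execution level.
LEVEL_PRIORITIES = {
    0: {"P0"},              # Level-0: sanity / smoke / TAT
    1: {"P0", "P1", "P2"},  # Level-1: comprehensive, run as batches
}

def cases_for_level(level, cases):
    """cases: list of (case_id, priority) tuples; returns matching ids."""
    wanted = LEVEL_PRIORITIES[level]
    return [cid for cid, prio in cases if prio in wanted]

suite = [("TC1", "P0"), ("TC2", "P1"), ("TC3", "P2")]
```

For Levels 2 and 3 the selection would additionally be filtered by the modules affected by the modification.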

Test Harness = Test Environment + Test Bed.
Build Version Control: Builds carry a unique numbering system and are transferred from the development server (softbase) to the test environment over FTP (or SMTP).
After defect reporting, the testing team may receive:
• a modified build, or
• modified programs.
To maintain the original and modified builds, the development team uses version control software. A modified build is installed afresh in the test environment; modified programs are embedded into the old build.

Level 0 (Sanity / Smoke / TAT): After receiving the initial build from the development team, the testing team installs it into the test environment. After completing dumping / installation, the testing team verifies the basic functionality of that build to decide the completeness and correctness of the build before full test execution.

During this testing, the testing team observes the factors below on that initial build.
1. Understandable: Functionality is understandable to the test engineer.
2. Operable: The build works without runtime errors in the test environment.
3. Observable: Process completion and continuation in the build can be estimated by the tester.
4. Controllable: Processes can be started / stopped explicitly.
5. Consistent: Stable navigations.
6. Maintainable: No need for reinstallations.
7. Simplicity: Short navigations to complete a task.
8. Automatable: Interfaces support automation test script creation.
This level-0 testing is also called Testability or Octangle Testing (because it is based on 8 factors).
Test Automation: After receiving a stable build from the development team, the testing team concentrates on test automation. Test automation is of two types: complete and selective.

• Complete automation
• Selective automation (all P0 and carefully selected P1 test cases)

Level-1 (Comprehensive Testing): After receiving a stable build from the development team and completing automation, the testing team starts executing its test cases as batches. A test batch is also known as a test suite or test set. In every batch, the base state of one test case is the end state of the previous test case. During batch execution, test engineers prepare a test log with three types of entries:
1. Passed: all expected values are equal to the actual values.
2. Failed: any expected value varies from the actual value.
3. Blocked: the corresponding preceding test cases failed.

Test case status states: In Queue → In Progress → Passed / Failed / Blocked / Skipped / Partial Pass-Fail → Closed.
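Because each case in a batch starts from the end state of the previous one, a single failure blocks everything after it. A minimal sketch of that logging rule (function and variable names are illustrative):

```python
# Sketch of batch execution: a failed case suspends the rest of the batch,
# so the remaining cases are logged as Blocked rather than executed.
def run_batch(batch, failing):
    """batch: ordered case ids; failing: set of case ids that would fail."""
    log, broken = {}, False
    for case in batch:
        if broken:
            log[case] = "Blocked"   # predecessor failed; base state unreachable
        elif case in failing:
            log[case] = "Failed"
            broken = True           # suspend the rest of the batch
        else:
            log[case] = "Passed"
    return log

log = run_batch(["TC1", "TC2", "TC3"], failing={"TC2"})
```

This mirrors the PET rule of suspending a batch on any mismatch and resuming it only after the defect is fixed.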

Level-2 Regression Testing: Regression testing is actually part of Level-1 testing. During comprehensive test execution, the testing team reports mismatches to the development team as defects. After accepting a defect, the development team modifies the code to resolve it. When they release the modified build, the testing team concentrates on regression testing before continuing the remaining comprehensive testing.
Severity: The seriousness of the defect, defined by the tester through severity (impact and criticality), determines how much regression testing is needed. Organizations commonly use three severities: High, Medium and Low.
High: Without resolving this mismatch, the tester cannot continue the remaining testing (show stopper).
Medium: Testing can continue, but the defect must be resolved.
Low: May or may not be resolved.
Ex:
High: Database not connecting.
Medium: Input domain wrong (accepting wrong values also).
Low: Spelling mistake.
Example: X, Y and Z are three dependent modules. If you find a bug in Z, then regression on Z and its dependent modules is High; on the full Z module, Medium; on part of the Z module, Low.


Severity of resolved bug   Regression selection
High                       All P0, all P1, selected P2
Medium                     All P0, selected P1, some P2
Low                        Some P0, some P1, some P2

(Run on the modified build to ensure the bug is resolved.)
Possible ways to do regression testing:
Case 1: If the development team resolved a bug of high severity, the testing team re-executes all P0, all P1 and carefully selected P2 test cases with respect to that modification.
Case 2: If the development team resolved a bug of medium severity, the testing team re-executes all P0, selected P1 (80-90%) and some P2 test cases with respect to that modification.
Case 3: If the development team resolved a bug of low severity, the testing team re-executes some of the P0, P1 and P2 test cases with respect to that modification.
Case 4: If the development team performs modifications due to project requirement changes, the testing team re-executes all P0 and selected test cases.
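The case analysis above can be sketched as a small selection helper. The labels come from the cases; the function name is illustrative:

```python
# Sketch of severity-based regression selection (Cases 1-3 above).
def regression_selection(severity):
    if severity == "High":
        return {"P0": "all", "P1": "all", "P2": "selected"}
    if severity == "Medium":
        return {"P0": "all", "P1": "selected (80-90%)", "P2": "some"}
    return {"P0": "some", "P1": "some", "P2": "some"}  # Low severity
```

In practice the "selected" and "some" subsets are chosen with respect to the modules touched by the modification.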

Severity vs Priority:
• Severity is with respect to functionality; priority is with respect to the customer.
• Not all defects have the same severity; not all defects have the same priority.
• Severity is the seriousness of the defect; priority is the importance of the defect.
• Severity matters from the project functionality point of view; priority from the customer point of view.

Defect Reporting and Tracking:

During comprehensive test execution, test engineers report mismatches to the development team as defect reports in IEEE format.
1. Defect Id: A unique number or name
2. Defect Description: Summary of the defect
3. Build Version Id: Parent build version number
4. Feature: Module / parent functionality
5. Testcase Name and Description: Failed test case name with description
6. Reproducible: (Yes / No)
7. If yes, attach the test procedure
8. If no, attach snapshots and strong reasons
9. Severity: High / Medium / Low
10. Priority
11. Status: New / Reopen (after 3 reopens, write new programs)
12. Reported by: Name of the test engineer
13. Reported on: Date of submission
14. Suggested fix: Optional
15. Assign to: Name of the PM
16. Fixed by: PM or team lead
17. Resolved by: Name of the developer
18. Resolved on: Date of solving
19. Resolution type
20. Approved by: Signature of the PM
Defect Age: The time gap between "resolved on" and "reported on".
Defect Submission:
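Defect age, as defined above, is a simple date difference. A minimal sketch (dates are made up for illustration):

```python
from datetime import date

# Defect age = gap between "resolved on" and "reported on".
def defect_age(reported_on, resolved_on):
    return (resolved_on - reported_on).days

age = defect_age(date(2017, 5, 1), date(2017, 5, 4))
```

Tracking this per defect feeds the test management measurements (defect aging) discussed later in the auditing section.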

[Fig: defect submission in large-scale organizations – on the testing side, Test Engineer → Test Lead → QA / Test Manager; on the development side, Project Manager → Team Lead → Developers; defects cross between the sides as transmittal reports.]


Defect Submission:
[Fig: defect submission in small-scale organizations – Test Engineer → Test Lead on the testing side; Project Manager → Team Lead → Developers on the development side; defects cross between the sides as transmittal reports.]
Defect Status Cycle:

New → Fixed (Open, Reject, Deferred) → Closed; a closed defect may be Reopened.
Bug Life Cycle:
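The status cycle above can be sketched as a table of allowed transitions (a sketch; the transition set is my reading of the cycle, with Reopen feeding back to Fixed):

```python
# Allowed defect status transitions, following the cycle above.
TRANSITIONS = {
    "New": {"Fixed"},
    "Fixed": {"Closed", "Reopen"},  # fixed as Open / Reject / Deferred
    "Reopen": {"Fixed"},
    "Closed": {"Reopen"},           # a closed defect may be reopened
}

def can_move(current, target):
    return target in TRANSITIONS.get(current, set())
```

A defect tracking tool would reject any status change for which can_move returns False.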


Detect Defect → Reproduce Defect → Report Defect → Fix Bug → Resolve Bug → Close Bug
Resolution Type:

The defect report travels between testing and development, and the developer answers it with a resolution type. There are 12 resolution types:
1. Duplicate: Rejected because the defect is the same as a previously reported defect.
2. Enhancement: Rejected because the defect relates to a future requirement of the customer.
3. H/w Limitation: Rejected because it arises from limitations of the hardware.
4. S/w Limitation: Rejected because of limitations of the s/w technology.
5. Functions as Designed: Rejected because the coding is correct with respect to the design documents.
6. Not Applicable: Rejected due to lack of correctness in the defect.
7. No Plan to Fix It: Postponed for the time being (neither accepted nor rejected).
8. Need More Information: Developers want more information to fix it (neither accepted nor rejected).
9. Not Reproducible: The developer wants more information because the problem is not reproducible (neither accepted nor rejected).
10. User Misunderstanding: Extra negotiation between tester and developer, each arguing the other is wrong.
11. Fixed: A bug was opened to resolve it (accepted).
12. Fixed Indirectly: Deferred to resolve (accepted).
Types of Bugs:


UI bugs (low severity):
• Spelling mistake: high priority
• Wrong alignment: low priority
Input domain bugs (medium severity):
• Object not taking expected values: high priority
• Object taking unexpected values: low priority
Error handling bugs (medium severity):
• Error message is not coming: high priority
• Error message is coming but not understandable: low priority
Calculation bugs (high severity):
• Intermediate results failure: high priority
• Final outputs are wrong: low priority
Service level bugs (high severity):
• Deadlock: high priority
• Improper order of services: low priority
Load condition bugs (high severity):
• Memory leakage under load: high priority
• Doesn't allow customer-expected load: low priority
Hardware bugs (high severity):
• Printer not connecting: high priority
• Invalid printout: low priority
Boundary related bugs (medium severity)
Id control bugs (medium severity): wrong version number, logo
Version control bugs (medium severity): differences between two consecutive versions
Source bugs (medium severity): mismatches in help documents
Test Closure: After completing all possible test case execution and the related defect reporting and tracking, the test lead conducts a test execution closure review along with the test engineers.
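The bug-type list above can be tabulated for triage. This is an illustrative sketch (a subset of the types, with the names abbreviated):

```python
# Triage table drawn from the bug-type list above:
# bug type -> (default severity, high-priority symptom, low-priority symptom).
BUG_TYPES = {
    "UI": ("Low", "Spelling mistake", "Wrong alignment"),
    "Input domain": ("Medium", "Not taking expected values", "Taking unexpected values"),
    "Error handling": ("Medium", "Error message missing", "Error message unclear"),
    "Calculation": ("High", "Intermediate results failure", "Final outputs wrong"),
    "Load condition": ("High", "Memory leakage under load", "Load below expectation"),
}

def default_severity(bug_type):
    return BUG_TYPES[bug_type][0]
```

A tester could use such a table as a starting point when filling the severity field of a defect report, then adjust for the specific symptom.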

In this review the test lead depends on coverage analysis:
• BRS based coverage
• Use cases based coverage (modules)
• Data model based coverage (inputs and outputs)
• UI based coverage (rules and conventions)
• TRM based coverage (whether the PM-specified tests are covered)

Analysis of the deferred bugs: whether the deferred bugs are genuinely postponable or not. The testing team tries to execute the high priority test cases once again to confirm the correctness of the master build.
Final Regression Process (a repeating cycle):
Gather requirements → Effort estimation (person/hr) → Plan regression → Execute regression → Report regression

User Acceptance Testing: After completing the test execution closure review and final regression, the organization concentrates on UAT to collect feedback from the customer or customer-site-like people. There are two approaches:
1. Alpha testing
2. Beta testing
SignOff: After completion of UAT and the resulting modifications, the test lead creates a Test Summary Report (TSR). It is part of the s/w release note. The TSR consists of:
1. Test Strategy / Methodology (what tests)
2. System Test Plan (schedule)
3. Traceability Matrix (mapping requirements and test cases)
4. Automated Test Scripts (TSL + GUI map entries)
5. Final Bug Summary Report, with columns:

Bug Id | Description | Module / Functionality | Found By | Status (Closed / Deferred) | Severity | Comments

Case Study (Schedule for 5 Months):

Deliverable                                 Responsibility                               Completion Time
TestCase Selection                          Test Engineer                                20-30 days
TestCase Review                             Test Lead, Test Engineer                     4-5 days
RVM / RTM                                   Test Lead                                    1 day
Sanity & Test Automation                    Test Engineer                                20-30 days
Test Execution as Batches                   Test Engineer                                40-60 days
Test Reporting                              Test Engineer & Test Lead                    Ongoing during test execution
Communication & Status Reporting            Everyone in the testing team                 Weekly, twice
Final Regression Testing & Closure Review   Test Engineer and Test Lead                  4-5 days
User Acceptance Testing                     Customer-site people (testing team involved) 5-10 days
Test Summary Report (Sign Off)              Test Lead                                    1-2 days

References:
• Testing Computer Software – Cem Kaner
• Effective Methods for Software Testing – William E. Perry
• Software Testing Tools – Dr. K.V.K.K. Prasad
[email protected]

Sample interview questions:
• What are you doing? What type of testing process is going on in your company?
• What type of test documentation is prepared by your organization? What type will you prepare, and what is your involvement in it?
• What are the key components of your company's test plan?
• What format do you prepare for test cases?
• How does your PM select which types of tests are needed for your project?
• When will you go for automation?
• What is regression testing? When will you do it?
• How do you report defects to the development team?
• How do you know whether a defect was accepted or rejected? What do you do when your defect is rejected?
• How will you learn a project without documentation? How will you test without documents?
• What is the difference between defect age and build interval period?
• What do you mean by green box testing?
• Experience with WinRunner; exposure to TestDirector; self-rating, e.g. WinRunner 8/10, LoadRunner 7/10.

Auditing:

During testing and maintenance, the testing team conducts audit meetings to estimate status and required improvements. In this auditing process it can use three types of measurements and metrics.
Quality Measurement Metrics:

These measurements are used by QA or the PM to estimate the achievement of quality in the current project's testing [monthly once].
Product Stability: [Chart: number of bugs vs duration – the first 20% of testing finds 80% of the bugs; the remaining 80% of testing finds the last 20%.]

Sufficiency:
• Requirements coverage
• Type-trigger analysis (mapping between covered requirements and applied tests)
• Defect severity distribution
• Organization trend limit check
Test Management Measurements:

These measurements are used by the test lead during test execution of the current project [weekly twice].
Test Status:
• Executed tests
• In progress
• Yet to execute
Delays in Delivery:
• Defect arrival rate
• Defect resolution rate
• Defect aging
Test Effort:
• Cost of finding a defect (Ex: 4 defects / person-day)
Process Capability Measurements:
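The "cost of finding a defect" figure quoted above is just defects found per person-day of testing effort. A minimal sketch (the numbers are illustrative):

```python
# Test effort metric: defects found per person-day of testing.
def defects_per_person_day(defects_found, testers, days):
    return defects_found / (testers * days)

# Example: 120 defects found by 3 testers over 10 days.
rate = defects_per_person_day(defects_found=120, testers=3, days=10)
```

A falling rate over successive builds is one sign of the product-stability trend shown in the chart above.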

These measurements are used by the quality analyst and test management to improve the capability of the testing process for upcoming projects. (They depend on maintenance-level feedback from old projects.)
Test Efficiency:
• Type-trigger analysis
• Requirements coverage
• Defect escapes: type-phase analysis (what type of defects the testing team missed in which phase of testing)
Test Effort:
• Cost of finding a defect (Ex: 4 defects / person-day)

This topic looks at static testing techniques. These techniques are referred to as "static" because the software is not executed; rather, the specifications, documentation and source code that comprise the software are examined in varying degrees of detail. There are two basic types of static testing: one is people-based and the other is tool-based. People-based techniques are generally known as "reviews", but there are a variety of different ways in which reviews can be performed. The tool-based techniques examine source code and are known as "static analysis". Both basic types are described in separate sections below.
What are Reviews?
"Reviews" is the generic name given to people-based static techniques. More or less any activity that involves one or more people examining something could be called a review. Reviews are carried out in a variety of different ways across different organisations, and in many cases in several ways within a single organisation. Some are very formal, some are very informal, and many lie somewhere between the two. The chances are that you have been involved in reviews of one form or another. One person can perform a review of his or her own work or of someone else's work. However, it is generally recognised that reviews performed by only one person are not as

effective conducted by a group of people all examining the same document (or  whatever as it isreviews that is being reviewed). Review techniques for individuals Desk checking and proof reading are two techniques that can be used by individuals to review a document such as a specification or a piece of source code. They are basically the same processes: the reviewer double-checks double-checks the document or source code on their own. Data stepping is a slightly different process for reviewing source code: the reviewer follows a set of data values through the source code to ensure that the values are correct at each step of the  processing. Review techniques for groups The static techniques that involve groups of people are generically referred to as reviews. Reviews can vary a lot from very informal to highly formal, as will be discussed in more detai det aill short shortly ly.. Two Two ex exam ampl ples es of ty types pes of re revie view w ar aree walkt walkthro hrough ughss and In Inspe spect ctio ion. n. A walkthrough is a form of review that is typically used to educate a group of people about a

Page 41 of 132

 

Software Testing Material technical document. Typically the author "walks" the group through the ideas to explain them and so that the attendees understand the content. Inspection is the most formal of all the formal review techniques. Its main focus during the process is to find faults, and it is the most effective review technique in finding them (although the other types of review also find some faults). Inspection is discussed in more detail below.  Reviews and the test process Benefits of reviews There are many benefits from reviews in general. They can improve software development  productivity  product ivity and reduce development development timescales. They can also reduce testing time and cost. They can lead to lifetime cost reductions throughout the maintenance of a system over its useful life. All this is achieved (where it is achieved) by finding and fixing faults in the  products of development phases before they are used in subsequent phases. In other words, reviews find faults in specifications and other documents (including source code) which can then be fixed before those specifications are used in the next phase of development.

Reviews generally reduce fault levels and lead to increased quality. This can also result in improved customer relations.

Reviews are cost-effective
There are a number of published figures to substantiate the cost-effectiveness of reviews. Freedman and Weinberg quote a ten times reduction in faults that come into testing, with a 50% to 80% reduction in testing cost. Yourdon, in his book on Structured Walkthroughs, found that faults were reduced by a factor of ten. Gilb and Graham give a number of documented benefits for software Inspection, including a 25% reduction in schedules, a 28 times reduction in maintenance cost, and finding 80% of defects in a single pass (with a mature Inspection process) and 95% in multiple passes.

What can be Inspected?

Anything written down can be Inspected. Many people have the impression that Inspection applies mainly to code (probably because Fagan's original article was on "Design and code inspection"). However, although Inspection can be performed on code, it gives more value if it is performed on more "upstream" documents in the software development process. It can be applied to contracts, budgets, and even marketing material, as well as to policies, strategies, business plans, user manuals, procedures and training material. Inspection also applies to all types of system development documentation, such as requirements, feasibility studies and designs. It is also very appropriate to apply to all types of test documentation such as test plans, test designs and test cases. In fact even with Fagan's original method, it was found to be very effective applied to testware.

What can be reviewed?
Anything that can be Inspected can also be reviewed, but reviews can apply to more things than just those ideas that are written down. Reviews can be done on visions, strategic plans and "big picture" ideas. Project progress can be reviewed to assess whether work is proceeding according to the plans. A review is also the place where major decisions may be made, for example about whether or not to develop a given feature.

Reviews and Inspections are complementary. Inspection excludes discussion and solution optimising, but these activities are often very important. Any type of review that tries to combine more than one objective tends not to work as well as those with a single focus. It works better to use Inspection to find faults, and to use reviews to discuss, come to a consensus and make decisions.


What to review / Inspect?

Looking at the 'V' life cycle diagram that was discussed in Session 2, reviews and Inspections apply to everything on the left-hand side of the V-model. Note that the reviews apply not only to the products of development but also to the test documentation that is produced early in the life cycle. We have found that reviewing the business needs alongside the Acceptance Tests works really well. It clarifies issues that might otherwise have been overlooked. This is yet another way to find faults as early as possible in the life cycle so that they can be removed at the least cost.

Costs of reviews
You cannot gain the benefits of reviews without investing in doing them, and this does have a cost. As a rough guide, something between 5% and 15% of project effort would typically be spent on reviews. If Inspections are being introduced into an organisation, then 15% is a recommended guideline. Once the Inspection process is mature, this may go down to around 5%. Note that 10% is half a day a week.

Remember that the cost of reviews always needs to be balanced against the cost of not doing them, and finding the faults (which are already there) much later, when it will be much more expensive to fix them. The costs of reviews are mainly in people's time, i.e. it is an effort cost, but the cost varies depending on the type of review. The leader or moderator of the review may need to spend time in planning the review (this would not be done for an informal review, but is required for Inspection). The studying of the documents to be reviewed by each participant on their own is normally the main cost (although in practice this may not be done as thoroughly as it should). If a meeting is held, the cost is the length of the meeting times the number of people present.
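The cost guidelines above reduce to simple arithmetic; a minimal sketch (the 2-hour, 5-person meeting is a hypothetical figure, the 10% rule comes from the text):

```python
def review_effort_days_per_week(effort_fraction, working_days_per_week=5):
    """Effort spent on reviews, in days per working week."""
    return effort_fraction * working_days_per_week

print(review_effort_days_per_week(0.10))  # 0.5 -> 10% is half a day a week

def meeting_cost_person_hours(meeting_length_hours, attendees):
    """Cost of a review meeting: its length times the number of people present."""
    return meeting_length_hours * attendees

print(meeting_cost_person_hours(2, 5))  # a 2-hour meeting with 5 people: 10 person-hours
```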
The fixing of any faults found, or the resolution of issues found, may or may not be followed up by the leader. In the more formal review techniques, metrics or statistics are recorded and analysed to ensure the continued effectiveness and efficiency of the review process. Process improvement should also be a part of any review process, so that lessons learned in a review can be folded back into development and testing processes. (Inspection formally includes process improvement; most other forms of review do not.)

Types of review

We have now established that reviews are an important part of software testing. Testers should be involved in reviewing the development documents that tests are based on, and should also review their own test documentation. In this section, we will look at different types of reviews, and the activities that are done to a greater or lesser extent in all of them. We will also look at the Inspection process in a bit more detail, as it is the most effective of all review types.

Characteristics of different review types

Informal review

As its name implies, this is very much an ad hoc process. Normally it simply consists of someone giving their document to someone else and asking them to look it over. A document may be distributed to a number of people, and the author of the document would hope to receive back some helpful comments. It is a very cheap form of review because there is no monitoring of metrics, no meeting and no follow-up. It is generally perceived to be useful and, compared to not doing any reviews at all, it is. However, it is probably the least effective form of review (although no one can prove that, since no measurements are ever done!).


Technical review or Peer review

A technical review may have varying degrees of formality. This type of review does focus on technical issues and technical documents. A peer review would exclude managers from the review. The success of this type of review typically depends on the individuals involved: they can be very effective and useful, but sometimes they are very wasteful (especially if the meetings are not well disciplined), and can be rather subjective. Often this level of review will have some documentation, even if just a list of issues raised. Sometimes metrics will be kept. This type of review can find important faults, but can also be used to resolve difficult technical problems, for example deciding on the best way to implement a design.

Decision-making review
This type of review is closely related to the previous one (in fact the syllabus does not distinguish them). In this type of review, which may be technical or managerial, the focus is on discussing the issues, coming to a consensus and making decisions, for example about whether a given feature should be included in the next release or not.

Walkthrough
A walkthrough is typically led by the author of a document, for the purpose of educating the participants about the content so that everyone understands the same thing. A walkthrough may include "dry runs" of business scenarios to show how the system would handle certain specific situations. For technical documents, it is often a peer group technique.

Inspection
An Inspection is the most formal of the formal review techniques. There are strict entry and exit criteria to the Inspection process, it is led by a trained Leader or moderator (not the author), and there are defined roles for searching for faults based on defined rules and checklists. Metrics are a required part of the process.
Characteristics of reviews in general

Objectives and goals
The objectives and goals of reviews in general normally include the verification and validation of documents against specifications and standards. Some types of review also have an objective of achieving a consensus among the attendees (but not Inspection). Some types of review have process improvement as a goal (this is formally included in Inspection).

Activities
There are a number of activities that may take place for any review.

The planning stage is part of all except informal reviews. In Inspection (and possibly other reviews), an overview or kickoff meeting is held to put everyone "in the picture" about what is to be reviewed and how the review is to be conducted. This pre-meeting may be a walkthrough in its own right.

The preparation or individual checking is usually where the greatest value is gained from a review process. Each person spends time on the review document (and related documents), becoming familiar with it and/or looking for faults. In some reviews, this part of the process is optional (at least in practice). In Inspection it is required.

Most reviews include a meeting of the reviewers. Informal reviews probably do not, and Inspection does not hold a meeting if it would not add economic value to the process. Sometimes the meeting time is the only time people actually look at the document. Sometimes the meetings run on for hours and discuss trivial issues. The best reviews (of any level of formality) ensure that value is gained from the meeting.

 

The more formal review techniques include follow-up of the faults or issues found, to ensure that action has been taken on everything raised (Inspection does, as do some forms of technical or peer review). The more formal review techniques collect metrics on cost (time spent) and benefits achieved.

Roles and responsibilities
For any of the formal reviews (i.e. not informal reviews), there is someone responsible for the review of a document (the individual review cycle). This may be the author of the document (walkthrough) or an independent Leader or moderator (formal reviews and Inspection). The responsibility of the Leader is to ensure that the review process works. He or she may distribute documents, choose reviewers, mentor the reviewers, call and lead the meeting, perform follow-up and record relevant metrics.

The author of the document being reviewed or Inspected is generally included in the review, although there are some variants that exclude the author. The author actually has the most to gain from the review in terms of learning how to do their work better (if the review is conducted in the right spirit!).

The reviewers or Inspectors are the people who bring the added value to the process by helping the author to improve his or her document. In some types of review, individual checkers are given specific types of fault to look for to make the process more effective.

Managers have an important role to play in reviews. Even if they are excluded from some types of peer review, they can (and should) review documents at management level with their peers. They also need to understand the economics of reviews and the value that they bring. They need to ensure that the reviews are done properly, i.e. that adequate time is allowed for reviews in project schedules.
There may be other roles in addition to these, for example an organisation-wide co-ordinator who would keep and monitor metrics, or someone to "own" the review process itself - this person would be responsible for updating forms, checklists, etc.

Deliverables
The main deliverable from a review is the changes to the document that was reviewed. The author of the document normally edits these. For Inspection, the changes would be limited to faults found as violations of accepted rules. In other types of review, the reviewers suggest improvements to the document itself. Generally the author can either accept or reject the changes suggested. If the author does not have the authority to change a related document (e.g. if the review found that a correct design conflicted with an incorrect requirement specification), then a change request may be raised to change the other document(s).

For Inspection, and possibly other types of review, process improvement suggestions are a deliverable. This includes improvements to the review or Inspection process itself and also improvements to the development process that produced the document just reviewed. (Note that these are improvements to processes, not to reviewed documents.)

The final deliverable (for the more formal types of review, including Inspection) is the metrics about the costs, faults found, and benefits achieved by the review or Inspection process.

Pitfalls
Reviews are not always successful. They are sometimes not very effective, so faults that could have been found slip through the net. They are sometimes very inefficient, so that people feel that they are wasting their time. Often insufficient thought has gone into the definition of the review process itself - it just evolves over time. One of the most common causes of poor quality in the review process is lack of training, and this is more critical the more formal the review.

Another problem with reviews is having to deal with documents that are of poor quality. Entry criteria to the review or Inspection process can ensure that reviewers' time is not wasted on documents that are not worthy of the review effort.

A lack of management support is a frequent problem. If managers say that they want reviews to take place but don't allow any time in the schedules for them, this is only "lip service", not commitment to quality. Long-term, it can be disheartening to become expert at detecting faults if the same faults keep on being injected into all newly written documents. Process improvements are the key to long-term effectiveness and efficiency.

Inspection

Typical reviews versus Inspection
There are a number of differences between the way most people practice reviews and the Inspection process as described in Software Inspection by Gilb and Graham, Addison-Wesley, 1993. In a typical review, the document is given out in advance, there are typically dozens of pages to review, and the instructions are simply "Please review this." In Inspection, it is not just the document under review that is given out in advance, but also source or predecessor documents. The number of pages to focus the Inspection on is closely controlled, so that Inspectors (checkers) check a limited area in depth - a chunk or sample of the whole document. The instructions given to checkers are designed so that each individual checker will find the maximum number of unique faults.
Special defect-hunting roles are defined, and Inspectors are trained in how to be most effective at finding faults.

In typical reviews, sometimes the reviewers have time to look through the document before the meeting, and some do not. The meeting is often difficult to arrange and may last for hours. In Inspection, it is an entry criterion to the meeting that each checker has done the individual checking. The meeting is highly focused and efficient. If it would not be economic, a meeting may not be held at all, and it is limited to two hours.

In a typical review, there is often a lot of discussion, some about technical issues but much about trivia. Comments are often mainly subjective, along the lines of "I don't like the way you did this" or "Why didn't you do it this way?" In Inspection, the process is objective. The only thing that it is permissible to raise as an issue is a potential violation of an agreed Rule (the Rule sets are what the document should conform to). Discussion is severely curtailed in an Inspection meeting or postponed until the end. The Leader's role is very important to keep the meetings on track and focused, and to keep pulling people away from trivia and pointless discussion.

Many people keep on doing reviews even if they don't know whether it is worthwhile or not. Every activity in the Inspection process is done only if its economic value is continuously proven.


Inspection is more

Inspection contains many mechanisms that are additional to those found in other formal reviews. These include the following:
• Entry criteria, to ensure that we don't waste time Inspecting an unworthy document;
• Training, for maximum effectiveness and efficiency;
• An optimum checking rate, to get the greatest value out of the time spent by looking deep;
• Prioritising the words: Inspect the most important documents and their most important parts;
• Standards are used in the Inspection process; there are a number of Inspection standards also;
• Process improvement is built in to the Inspection process;
• Exit criteria ensure that the document is worthy and that the Inspection process was carried out correctly.
One of the most powerful exit criteria is the quantified estimate of the remaining defects per page. This may be, say, 3 per page initially, but can be brought down by orders of magnitude over time.

Inspection is better

Typical reviews are probably only 10% to 20% effective at detecting existing faults. The return on investment is usually not known, because no one keeps track even of their cost. When Inspection is still being learned, its effectiveness is around 30% to 40% (this is demonstrated in Inspection training courses). Once Inspection is well established and mature, this process can find up to 80% of faults in a single pass, and 95% in multiple passes. The return on investment ranges from 6 to 30 hours for every hour spent.

The Inspection process

The diagram shows a product document infected with faults. The document must pass through the entry gate before it is allowed to start the Inspection process. The Inspection Leader performs the planning activities. A Kickoff meeting is held to "set the scene" about the documents and the process.

The Individual Checking is where most of the benefits are gained; 80% or more of the faults found will be found in this stage. A meeting is held (if economic). The editing of the document is done by the author or the person now responsible for the document. This involves redoing some of the activities that produced the document initially, and it may also require Change Requests to documents not under the control of the editor. Process improvement suggestions may be raised at any time, for improvements either to the Inspection process or to the development process.

The document must pass through the Exit gate before it is allowed to leave the Inspection process. There are two aspects to investigate here: is the product document now ready (e.g. has some action been taken on all issues logged), and was the Inspection process carried out properly? For example, if the checking rate was too fast, then the checking has not been done properly.

A gleaming new improved document is the result of the process, but there is still a "blob" on it. It is not economic to be 100% effective in Inspection. At least with Inspection you consciously predict the levels of remaining faults, rather than fallaciously assuming that we have found them all!

How the checking rate enables deep checking in Inspection

There is a dramatic difference between Inspection and normal reviews, and that is in the depth of checking. This is illustrated by the picture of a document. Initially there are no faults visible.

In typical reviews, the time available and the size of the document determine the checking rate. So for example, if you have 2 hours available for a review and the document is 100 pages long, the checking rate will be 50 pages per hour. (Any two of these three factors determine the third.) This is equivalent to "skimming the surface" of the document. We will find some faults - in this example we have found one major and two minor faults. Our typical reaction is now to think: "This review was worthwhile, wasn't it - it found a major fault. Now we can fix that and the two other minor faults, and the document will be OK." Think: are we missing anything here?

Inspection is different. We do not take any more time, but it is the optimum rate for the type of document that is used to determine the size of the document that will be checked in detail. So if the optimum rate is one page per hour and we have two hours, then the size of the sample or chunk will be 2 pages. (Note that the optimum rate needs to be established over time for different types of document and will depend on a number of factors, and it is based on prioritised words - a logical page rather than a physical page. Of course it doesn't take an hour just to read a single page, but the checking done in Inspection includes comparing each paragraph or sentence on the target page with all source documents, checking each paragraph or phrase against relevant rule sets, both generic and specific, and working through checklists for different role assignments, as well as the time to read around the target page to set the context. If checking is done to this level of thoroughness, it is not at all difficult to spend an hour on one page!)

How does this depth-oriented approach affect the faults found?
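The two ways of deciding what gets checked, described above, can be sketched as follows (using the same numbers as the example):

```python
def review_checking_rate(pages, hours):
    """Typical review: time and document size determine the checking rate."""
    return pages / hours  # pages per hour

def inspection_chunk_size(optimum_rate, hours):
    """Inspection: the optimum rate and the time determine how many
    pages are checked in depth."""
    return optimum_rate * hours  # pages

# 2 hours for a 100-page document -> skim at 50 pages per hour
print(review_checking_rate(100, 2))   # 50.0
# 2 hours at an optimum rate of 1 page per hour -> check 2 pages in depth
print(inspection_chunk_size(1, 2))    # 2
```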
On the picture, we have gone deep in the Inspection on a limited number of pages. We have found the major fault found in the other review, plus two (other) minors, but we have also found a deep-seated major fault, which we would never have seen or even suspected if we had not spent the time to go deep. There is no guarantee that the most dangerous faults are lying near the surface! When the author comes to fix this deep-seated fault, he or she can look through the rest of the document for similar faults, and all of them can then be corrected. So in this example we will have corrected 5 major faults instead of one. This gives tremendous leverage to the Inspection process - you can fix faults you didn't find!

Inspection surprises

To summarise the Inspection process, there are a number of things about Inspection which surprise people. The fundamental importance of the Rules is what makes Inspection objective rather than a subjective review. The Rules are democratically agreed as applying (this helps to defuse author defensiveness), and by definition a fault is a Rule violation.

The slow checking rates are surprising, but the value to be gained by depth gives far greater long-term gains than surface-skimming reviews that miss major deep-seated problems. The strict entry and exit criteria help to ensure that Inspection gives value for money.

The logging rates are much faster than in typical reviews (30 to 60 seconds per item; typical reviews log one thing in 3 to 10 minutes). This ensures that the meeting is very efficient. One reason this works is that the final responsibility for all changes is fully given to the author, who has total responsibility for the final classification of faults as well as the content of all fixes.

More information on Inspection can be found in the book Software Inspection, Tom Gilb and Dorothy Graham, Addison-Wesley, 1993, ISBN 0-201-63181-4.

Static analysis

What can static analysis do?
Static analysis is a form of automated testing. It can check for violations of standards and can find things that may or may not be faults. Static analysis is descended from compiler technology. In fact, many compilers may have static analysis facilities available for developers to use if they wish. There are also a number of stand-alone static analysis tools for various different computer programming languages. Like a compiler, the static analysis tool analyses the code without executing it, and can alert the developer to various things such as unreachable code, undeclared variables, etc. Static analysis tools can also compute various metrics about code, such as cyclomatic complexity.

Data flow analysis
Data flow analysis is the study of program variables. A variable is basically a location in the computer's memory that has a name so that the programmer can refer to it more conveniently in the source code. When a value is put into this location, we say that the variable is "defined". When that value is accessed, we say that it is "used". For example, in the statement "x = y + z", the variables y and z are used, because the values that they contain are being accessed and added together. The result of this addition is then put into the memory location called "x", so x is defined. The significance of this is that static analysis tools can perform a number of simple checks.
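As an illustration of the define/use bookkeeping (a toy sketch, not how a real static-analysis tool is implemented), consider straight-line code represented as (defined_variable, used_variables) pairs:

```python
def data_flow_check(statements):
    """Flag simple data-flow anomalies in straight-line code.

    statements: list of (defined_var, [used_vars]) tuples, one per statement.
    """
    defined, used, anomalies = [], set(), []
    for target, operands in statements:
        for v in operands:
            if v not in defined:
                anomalies.append(f"'{v}' used before it is defined")
            used.add(v)
        if target not in defined:
            defined.append(target)
    for v in defined:
        if v not in used:
            anomalies.append(f"'{v}' defined but never used")
    return anomalies

# The single statement "x = y + z": y and z are used, x is defined.
for anomaly in data_flow_check([("x", ["y", "z"])]):
    print(anomaly)
```

Run on just "x = y + z", the checker reports that y and z are used before being defined and that x is defined but never used — exactly the two kinds of simple check described next.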
One of these checks is to ensure that every variable is defined before it is used. If a variable is not defined before it is used, the value that it contains may be different every time the program is executed, and in any case is unlikely to contain the correct value. This is an example of a data flow fault. Another check that a static analysis tool can make is to ensure that every time a variable is defined, it is used somewhere later on in the program. If it isn't, then why was it defined in the first place? This is known as a data flow anomaly, and although it can be perfectly harmless, it can also indicate that something more serious is at fault.

Control flow analysis
Control flow analysis can find infinite loops, inaccessible code, and many other suspicious aspects. However, not all of the things found are necessarily faults; defensive programming may result in code that is technically unreachable.

Cyclomatic complexity
Cyclomatic complexity is related to the number of decisions in a program or control flow graph. The easiest way to compute it is to count the number of decisions (diamond-shaped boxes) on a control flow graph and add 1. Working from code, count the total number of IFs and any loop constructs (DO, FOR, WHILE, REPEAT) and add 1. The cyclomatic complexity does reflect to some extent how complex a code fragment is, but it is not the whole story.

Other static metrics
Lines of code (LOC, or KLOC for 1000s of LOC) is a measure of the size of a code module. Operands and operators is a very detailed measurement devised by Halstead, but not much used now. Fan-in is related to the number of modules that call (in to) a given module. Modules with high fan-in are found at the bottom of hierarchies, or in libraries where they are frequently called. Modules with high fan-out are typically at the top of hierarchies, because they call out to many modules (e.g. the main menu). Any module with both high fan-in and high fan-out probably needs re-designing.

Nesting levels relate to how deeply nested statements are within other IF statements. This is a good metric to have in addition to cyclomatic complexity, since highly nested code is harder to understand than linear code, but cyclomatic complexity does not distinguish them. Other metrics include the number of function calls, and a number of metrics specific to object-oriented code.

Limitations and advantages
Static analysis has its limitations. It cannot distinguish "fail-safe" code from real faults or anomalies, and may create a lot of spurious failure messages. Static analysis tools do not execute the code, so they are not a substitute for dynamic testing, and they are not related to real operating conditions. However, static analysis tools can find faults that are difficult to see, and they give objective quality information about the code. We feel that all developers should use static analysis tools, since the information they can give can find faults very early, when they are very cheap to fix.

WinRunner 7.0
 Developed by Mercury Interactive

 Functionality testing tool (not suitable for Performance, Usability and Security testing)
 Supports C/S and web technologies (VB, VC++, Java, D2K, Power Builder, Delphi, HTML etc…)
 WinRunner won't support .Net, XML, SAP, People Soft, Maya, Flash, or Oracle applications etc…
 To support .Net, XML, SAP, People Soft, Maya, Flash, Oracle applications etc… we can use QTP (Quick Test Professional)
 QTP is an extension of WinRunner.

WinRunner Recording Process:


 


Learning → Recording → Edit Script → Run Script → Analyze Test Results

Learning: Recognition of objects and windows in your application by the testing tool is called learning.
Recording: A test engineer records our manual process in WinRunner to automate it.
Edit Script: The test engineer inserts the required check points into the recorded test script.
Run Script: The test engineer executes the automated test script to get results.
Analyze Results: The test engineer analyzes test results to concentrate on defect tracking.

[Login form mockup: User Id text box, Password text box (displays *****), Ok button]

Exp: Ok is enabled only after entering both user id and password.
Explain Icons in WinRunner
Note: WinRunner 7.0 provides an auto-learning facility to recognize objects and windows in your project without your interaction.

Test Script: A test script consists of navigational statements and check points. WinRunner's scripting language is called TSL (Test Script Language); like C, every statement ends with a semicolon.


 


Add-in Manager: This window provides a list of WinRunner-supported technologies with respect to our purchased license.
Note: If all options in the Add-in Manager are off, by default WinRunner supports the VB and VC++ interfaces (Win32 API).
Recording Modes: To record our business operations (navigations) in WinRunner we can use

2 recording modes:
1. Context Sensitive mode (default mode)
2. Analog mode
Analog Mode: To record mouse pointer movements on the desktop, we can use this mode. In analog mode the tester maintains a constant monitor resolution and application position during recording and running.
Application areas: digital signatures, graph drawing, image movements.

Note:
1. In analog mode, WinRunner records mouse pointer movements with respect to desktop co-ordinates. For this reason the test engineer keeps the corresponding context sensitive window in its default position during recording and running.
2. If you want to use analog mode for recording, keep the monitor resolution constant during recording and running.
move_locator_track(): WinRunner uses this function to record mouse pointer movements on the desktop in one unit of time.
Syntax: move_locator_track(track number);
By default it starts with 1. It is not based on time, but on operations.
mtype(): WinRunner uses this function to record mouse button operations on the desktop.
Syntax: mtype("<T track number> <K key on the mouse used> + / -");
Ex:

mtype("<T20><kLeft>+");

Track no – the desktop coordinates in which you operate the mouse. It stores the mouse coordinates; actually it is a memory location.
type(): We can use this function to record keyboard operations in analog mode.
Syntax: type("typed characters" / "ASCII notation");
Context Sensitive mode: To record mouse and keyboard operations on our application build, we can use this mode. It is the default mode. In general the functionality test engineer creates automation test scripts in context sensitive mode with the required check points. In this mode WinRunner


 

records our application operations with respect to objects and windows.
Ex:
Focus to window      set_window("Window Name", time);
Text box             edit_set("Edit Name", "Typed Characters");
Password text box    password_edit_set("Pwd Object", "Encrypted Pwd");
Push button          button_press("Button Name");
Radio button         button_set("Button Name", ON); / button_set("Button Name", OFF);
Check box            button_set("Button Name", ON); / button_set("Button Name", OFF);
List/Combo box       list_select_item("List1", "Selected Item");
Menu                 menu_select_item("Menu Name; Option Name");
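Putting these statements together, a recorded script for a simple login window might look like the following sketch (the window and object names "Login", "User Id", "Pwd" and "Ok" are assumed for illustration, not taken from the sample application):

```
# Hypothetical recording in context sensitive mode
set_window("Login", 10);                                # focus the Login window
edit_set("User Id", "naga");                            # fill the User Id text box
password_edit_set("Pwd", password_encrypt("mercury"));  # fill the encrypted password
button_press("Ok");                                     # press the Ok push button
```

Note how each physical action maps to one object-level statement, which is what makes context sensitive scripts readable and maintainable.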

Base State: An application state to start a test is called the base state.
End State: An application state to stop a test is called the end state.
Call State: An intermediate state of the application between base state and end state is a call state.
Functionality Testing Techniques:

Behavioral Coverage (object properties checking).
Input Domain Coverage (correctness of size and type of every input object).
Error Handling Coverage (preventing negative navigation).
Calculations Coverage (correctness of output values).
Backend Coverage (data validation & data integrity of database tables).
Service Levels (order of functionality or services).
Successful Functionality (combination of all the above).
Check points: WinRunner is a functionality testing tool; it provides a set of facilities to cover the sub-tests below.

To automate the above sub-tests, we can use 4 check points in WinRunner:
1. GUI check points
2. Bitmap check points
3. Database check points
4. Text check points
GUI Check Point: To automate behavioral testing of objects we can use this check point. It consists of three sub-options:
1. For Single Property
2. For Object/Window
3. For Multiple Objects
For Single Property: To test a single property of an object we can use this option.
Navigation: select a position in the script, Create menu, GUI check point, For Single Property,

select the testable object (double click), select the required property with its expected value, click Paste.
Ex: Update Order button


 


Action            Update Order
Focus to window   Disabled
Open a record     Disabled
Perform change    Enabled

Syntax: obj_check_info("Object Name", "property", expected value);
Ex: button_check_info("Update Order", "enabled", 0);
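Applied to the full Update Order table above, the single-property checks can be scripted as the following sketch (how the record is opened and changed depends on the application, so those steps are left as comments):

```
set_window("Flight Reservation", 10);
button_check_info("Update Order", "enabled", 0);   # disabled after window focus
# ... open a record through the application ...
button_check_info("Update Order", "enabled", 0);   # still disabled after opening
# ... perform a change in the opened record ...
button_check_info("Update Order", "enabled", 1);   # enabled after the change
```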

If the expected value is numeric, no double quotes are needed; if it is a string, place it between double quotes. By default WinRunner treats any double-quoted value as a string.
Problem:

Focus to window – Item No should be focused; Ok is enabled only after filling Item No and Quantity.
[Form mockup "NagaRaju Shopping": Item No, Quantity, Ok]
Expected: the number of items in Fly To equals the number of items in Fly From minus 1, when you select an item in Fly From.

[Form mockup "NagaRaju Journey": Fly From list, Fly To list, Ok button]

Ex: if you select an item in one list box, the number of items in the next list box decreases by 1.
Problem: Focus to window – Ok should be disabled;


 

enter Roll No – Ok should be disabled; enter Name – Ok should be disabled; enter Class – Ok should be disabled.

[Form mockups: "NagaRaju Shopping" (Item No, Quantity, Ok) and a three-list form (List1, List2, List3, Ok)]

Problem: If Type is A, Age is focused; if Type is B, Gender is focused; if Type is C, Qualification is focused; else Others is focused. (Use a switch statement.)

[Form mockup: Type, Age, Gender, Qualification, Others fields]

switch (x)
{
    case "A": edit_check_info("Age", "focused", 1); break;
    case "B": edit_check_info("Gender", "focused", 1); break;
    case "C": edit_check_info("Qualification", "focused", 1); break;
    default:  edit_check_info("Others", "focused", 1);
}

[Form mockup: List box, Text box, Ok button]

Exp: The selected item in the list box appears in the text box after clicking the Ok button.
Exp: The selected item in the Sample 1 list box appears in the Sample 2 text object after clicking the Display button.

[Form mockups: "Sample 1" (List1, Text1, Ok); "Sample 2" (text object, Display button)]

[Form mockup "NagaRaju Employee": Emp No, Dept No, BSal, Comm, Ok]

Problem: If basic salary >= 10000 then commission = 10% of basic salary; else if basic salary is between 5000 and 10000 then commission = 5% of basic salary; else if basic salary < 5000 then commission = Rs. 200.
Problem: If Total >= 800 then Grade = A; else if Total is between 700 and 800 then Grade = B; else Grade = C.

[Form mockup: Roll No, Grade, Ok]


For Object/Window: To test more than one property of a single object, we can use this option.

Ex: Update Order button
Action            Update Order
Focus to window   Disabled
Open a record     Disabled
Perform change    Enabled & Focused

Syntax: obj_check_gui("object name", "checklist file.ckl", "expected values file.txt", time to create);
In the above syntax the checklist file specifies the list of properties to test for a single object; its extension is .ckl. The expected values file specifies the expected values for those properties; its extension is .txt.
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);
For Multiple Objects: To test more than one property of more than one object in a single checkpoint we can use this option. To create this checkpoint the tester selects multiple objects in a single window.

Ex:
Action            Insert Order   Update Order        Delete Order
Focus to window   Disabled       Disabled            Disabled
Open a record     Disabled       Disabled            Enabled
Perform change    Disabled       Enabled & Focused   Enabled

Navigation: select a position in the script, Create menu, GUI checkpoint, For Multiple Objects, click Add, select the testable objects, right-click to release, specify the expected values for the required properties of every selected object, click OK.
Syntax: win_check_gui("Window Name", "checklist file.ckl", "expected values file", time to create);
Ex: win_check_gui("Flight Reservation", "list3.ckl", "gui3", 1);
Case Study: What type of properties do you check for which objects?

Object Type     Properties
Push button     Enabled, Focused
Radio button    Status (ON/OFF)
Check box       Status (ON/OFF)
List box        Count (number of items in the list box), Value (currently selected value)
Table grid      Rows, Columns, Table content
Text/Edit box   Enabled, Focused, Value, Range, Regular Expression, Date Format, Time Format

Changing Check Points:

WinRunner allows us to perform changes in existing check points. There are 2 types of changes, needed due to sudden project changes or tester mistakes:
1. Change expected values
2. Add new properties to test
Change expected values: WinRunner allows you to change the expected values in existing checkpoints.
Navigation: execute the test script, click Results, perform changes in expected values in the results window as required, click OK, re-execute the test script to get the right result.
Add new properties to test: Sometimes the test engineer adds extra properties to an existing checkpoint, due to incompleteness of the test, through the navigation below.
Navigation: Create menu, Edit GUI Checklist, select the checklist file name, click OK, select the new properties to test, click OK to overwrite, change the run mode to Update, click Run (default values are selected as expected values), click Run in Verify mode to get results, perform changes in the result if required.


Running Modes in WinRunner:
Verify mode: in this mode WinRunner compares our expected values with the actual values.
Update mode: in this run mode, default values are selected as expected values.
Debug mode: to run our test scripts line by line.

During GUI check point creation WinRunner creates checklist files and expected values files on the hard disk. WinRunner maintains test scripts by default in the tmp folder:
Script: c:\program files\mi\wr\tmp\testname\script
Checklists: c:\program files\mi\wr\tmp\testname\chklist\list1.ckl
Expected values: c:\program files\mi\wr\tmp\testname\exp\gui1
Input Domain Coverage: Range and Size


 


Navigation: Create menu, GUI check point, For Object/Window, select the object, select the Range property, enter From & To values, click OK.
Syntax: obj_check_gui("object name", "checklist file.ckl", "expected values file.txt", time to create);
In the above syntax the checklist file (.ckl) specifies the properties to test for a single object, and the expected values file (.txt) stores the From & To values.
Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);

[Form mockup "NagaRaju Sample": Age text box]

Input Domain Coverage: Valid and Invalid Classes
Navigation: Create menu, GUI check point, For Object/Window, select the object, select the Regular Expression property, enter the expected expression (e.g. []*), click OK.
Syntax: obj_check_gui("object name", "checklist file.ckl", "expected values file.txt", time to create);
The checklist file (.ckl) specifies the properties to test, and the expected values file (.txt) stores the expected expression.

Ex: obj_check_gui("Update Order", "list1.ckl", "gui1", 1);
Problem: The Name text box should allow only lower case characters:

[Form mockup "NagaRaju Sample": Name text box]

1. Alphabets in lower case with initial capital only
2. Alphanumeric, starting and ending with alphabets only
3. Alphabets in lower case, but starting with R and ending with o only
4. Alphabets in lower case with an underscore in the middle
5. Alphabets in lower case with a space or underscore in the middle
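As a sketch, the Regular Expression property for each rule above could be written roughly as follows (the patterns are illustrative assumptions and should be verified against WinRunner's regular-expression syntax):

```
# 1. Lower case with initial capital only:        [A-Z][a-z]*
# 2. Alphanumeric, starts and ends with a letter: [a-zA-Z][a-zA-Z0-9]*[a-zA-Z]
# 3. Lower case, starts with R and ends with o:   R[a-z]*o
# 4. Lower case with an underscore in the middle: [a-z]+_[a-z]+
# 5. Lower case with a space or underscore:       [a-z]+[ _][a-z]+
```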


 

Bitmap Check Point: It is an optional checkpoint in a functionality testing tool. The tester can use this checkpoint to compare images, logos, graphs and other graphical objects (like signatures).

This checkpoint consists of two sub-types:
1. For Object/Window (entire image testing)
2. For Screen Area (part of an image testing)
These options support testing on static images only. WinRunner doesn't support dynamic images developed using Flash, Maya, etc.
For Object/Window: To compare our expected image with the actual image in the application build, we can use this option.
Navigation: select a position in the script, Create menu, Bitmap checkpoint, For Object/Window, select the image object.
Syntax: obj_check_bitmap("image object name", "expected image file.bmp", time to create the bitmap check point);
Ex: win_check_bitmap("About Flight Reservation System", "Img1", 1);

Run on different versions: Expected – captured at record time; Actual – captured at run time; Differences – the mismatched regions.
For Screen Area (part of image testing): To compare an expected image part with the actual image in the application build, we can use this option.

Navigation: select a position in the script, Create menu, Bitmap checkpoint, For Screen Area, select the required region of the testable image, right-click to release.
Syntax: obj_check_bitmap("image object name", "image file.bmp", time to create the check point, x, y, width, height);
Ex: win_check_bitmap("About Flight Reservation System", "Img2", 1, 191, 29, 122, 71);

Run on different versions: Expected – record time; Actual – run time; Differences – what the differences are.
Note: TSL supports a variable number of parameters, like function overloading.
For every project's functionality testing the GUI checkpoint is obligatory; whether the bitmap checkpoint is used depends on requirements.
Database Check Point: To conduct backend testing using WinRunner we can use this option.


 


Back End Testing: Validating the completeness and correctness of front-end operations' impact on the backend tables. This process is also known as database testing. In general, backend testing means validation of data and integrity of data. To automate this test, the Database checkpoint provides three sub-options:
1. Default Check (depends on content)
2. Custom Check (depends on rows count, columns count and content)
3. Runtime Record Check (new option in WinRunner 7.0)

[Diagram: Front End (application) connects to the Back End (database) through a DSN]

Default Check: To check data validation and data integrity in the database, depending on content, we can use this option.
DSN: Data Source Name. It is a connection string between the front end and back end; it maintains the connection process.
Steps:
1. Connect to the database
2. Execute the select statement
3. Return results in an Excel sheet
4. Analyze the results manually

[Diagram: the Database Check Point Wizard connects the front end to the back end through the DSN, executes a select statement and captures the results]
In bitmap checking we test between two versions of images.


 

In GUI checking we test the same application against its expected behavior; in database checking we test twice on the original data. To conduct this testing, the test engineer collects some information from the development team:
• Connection string or DSN
• Table definitions or data dictionary
• Mapping between front-end forms and backend tables

Database Testing Process:
1. Create a database checkpoint (current content of the database is selected as Expected).
2. Perform an insert/delete/update operation through the front end.
3. Execute the database checkpoint (current content of the database is selected as Actual).
Navigation: (as with GUI & bitmap checkpoints, start by selecting the position in the script) Create menu, Database Checkpoint, Default check, specify the connection to the database (ODBC / Data Junction), select Specify SQL statement (c:\PF\MI\WR\temp\testname\msqr1.sql), click Next, click Create to select the DSN, write the select statement (select * from orders), click Finish.

Syntax: db_check("checklist file.cdl", "query result file.xls");
Ex: db_check("list5.cdl", "dbvf5");
Criteria: an expected difference – Pass; a wrong difference – Fail.
What was updated: data validation. Who and when updated: data integrity.
New record – green color. Modified record – yellow color.
Custom Check: The test engineer uses this option to conduct backend testing depending on rows count, columns count, table content, or a combination of these three properties.
Default checkpoint: content is the property & content is the expected value.
Custom checkpoint: rows count is the property & the number of rows is the expected value.
During custom check point creation WinRunner provides a facility to select these properties, but in general test engineers mostly use the default check option, because content is also suitable for finding the number of rows and columns.
Syntax: db_check("checklist file.cdl", "query result file.xls");
Ex: db_check("list11.cdl", "dbvf8");
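The three-step database testing process can be sketched in script form as follows (the checkpoint was created before the front-end operation, so its captured content is the Expected side; the form-filling steps are application specific and left as a comment):

```
set_window("Flight Reservation", 10);
# ... fill the order form through the front end ...
button_press("Insert Order");                       # front-end insert operation
obj_wait_info("Insert Done...", "enabled", 1, 10);  # wait until the insert completes
db_check("list5.cdl", "dbvf5");                     # expected vs. current table content
```

When the checkpoint runs, the newly inserted row should appear as an expected difference (green), which counts as a pass.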


 


[Table residue: an example table with columns A and B, rows 1–3, and expected new rows X and Y after front-end operations]

Front end – programmers (programming division); back end – database administrators (DB division). The front-end object names should be understandable to the end user (WYSIWYG).
Runtime Record Checkpoint: Sometimes the test engineer uses this option to find the mapping between front-end objects and backend columns; it is an optional checkpoint.

Navigation: Create menu, Database Checkpoint, Runtime Record Check, specify the SQL statement, click Next, click Create to select the DSN, write a select statement with the doubtful columns (select orders.order_number, orders.customername from orders), select the doubtful front-end objects for those columns, click Next, select any of the options below:
• Exactly one match
• One or more matches
• No match record
Click Finish.
Note: For custom and default check points you have to give ; at the end of the SQL statement, but in a runtime record check point you need not give it.
Syntax: db_record_check("checklist file.cvr", DVR_ONE_MATCH / DVR_ONE_OR_MORE_MATCH / DVR_NO_MATCH, variable);

Ex: db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);
In the above syntax the checklist specifies the expected mapping to test, and the variable stores the number of records matched. If the mapping is correct, the matching values are present.
A runtime record checkpoint allows you to change an existing mapping through the navigation below.


 

Create menu, Edit Runtime Record Checklist, select the checklist file name, click Next, change the query (if you want to test new columns), click Next, change the object selection for new-object testing, click Finish.
Synchronization: To define the time mapping between the testing tool and the application, we can use synchronization point concepts.
wait(): To define a fixed waiting time during test execution, the test engineer uses this function.
Syntax: wait(time in seconds);
Ex: wait(10);

Drawback: This function defines a fixed waiting time, but our applications take variable times to complete, depending on the test environment.

Change Runtime Settings: During test script execution, WinRunner doesn't depend on recording-time timing. To maintain a waiting state in WinRunner we can use the wait() function or change the runtime settings.

It maintains mainly the following information:
Delay: the time to wait between window focusing events.
Timeout: how much time the application should wait for context sensitive statements and checkpoints.
So there are two runtime settings time parameters:
Delay: for window synchronization.
Timeout: for executing context sensitive statements and check points.
Window-based statements can wait up to: Delay + Timeout. Object-based statements can wait up to: Timeout.
Navigation: Settings, General options, Run tab, change Delay & Timeout depending on the requirement, click Apply, click OK.

Ex (delay = 1 sec, timeout = 10 sec):
1. set_window("", 6);                       # window statement: maximum wait = delay + timeout = 11 sec
2. button_press("Ok");                      # object statement: maximum wait = timeout = 10 sec
3. button_check_info("Ok", "enabled", 1);   # checkpoint: maximum wait = timeout = 10 sec


 

Drawbacks of Change Settings: If you change the settings, they apply to each and every test, without per-test control. For this reason the change-runtime-settings option is rarely used; nowadays most test engineers use the For Object/Window Property synchronization point to avoid time-mismatch problems.
For Object/Window Property:
Navigation: select a position in the script, Create menu, Synchronization point, For Object/Window Property, select the object, specify the property with its expected value (e.g. status/progress bar – 100% completed and enabled), specify the maximum time to wait, click OK.
Syntax: obj_wait_info("Object Name", "Property", expected value, maximum time to wait);
Ex:

obj_wait_info("Insert Done...", obj_wait_info( Done...","enabled", "enabled",1,10 10); );

For Object / Window Bitmap:

Sometimes the test engineer defines the time mapping between tool and project depending on images in the application.
Navigation: select a position in the script, Create menu, Synchronization point, For Object/Window Bitmap, select the required image.
Syntax: obj_wait_bitmap("Object Name", "image1.bmp", maximum time to wait);
For Screen Area Bitmap: Sometimes the test engineer defines the time mapping between tool and project depending on an image area in the application.
Navigation: select a position in the script, Create menu, Synchronization point, For Screen Area

Bitmap, select the required image region, right-click to release.
Syntax: obj_wait_bitmap("Object Name", "image1.bmp", maximum time to wait, x, y, width, height);
Text Check Point: To cover calculations and other text-based tests, we can use this option in WinRunner.

To create this type of check point we use the "Get Text" option from the Create menu. It consists of two sub-options:
1. From Object/Window
2. From Screen Area


 

From Object/Window: To capture object values into variables we can use this option.

Navigation: Create menu, Get Text, From Object/Window, select the required object (double click).
Syntax: obj_get_text("Object Name", variable);
Ex: obj_get_text("Flight No:", text);
Syntax: obj_get_info("Object Name", "property", variable);
Ex: obj_get_info("ThunderTextBox_3", "value", v1);

From Screen Area: To capture static text on the application build's screen we can use this option.
Navigation: Create menu, Get Text, From Screen Area, select the required region to capture the value [+ sign], right-click to release.
Syntax: obj_get_text("Object Name", variable, x1, y1, x2, y2);
Ex: obj_get_text("Flight No:", text, 2, 3, 50, 60);

[Form mockup "NagaRaju Sample": inputs Item No and Quantity; outputs Price ($) and Total ($); Ok button]
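A sketch of a calculations-coverage check for this form, capturing the outputs with obj_get_text() (the object names follow the mockup, and tl_step("step name", status, "description") is assumed as the reporting call, with status 0 for pass and 1 for fail):

```
set_window("NagaRaju Sample", 10);
obj_get_text("Quantity", qty);     # capture the entered quantity
obj_get_text("Price", price);      # capture the displayed price
obj_get_text("Total", total);      # capture the displayed total
# expected: total = price * quantity; report the comparison in the test log
if (total == price * qty)
    tl_step("total check", 0, "total is correct");
else
    tl_step("total check", 1, "total is wrong");
```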

Retesting: Re-execution of a test on the same application build with multiple test data is called retesting. In WinRunner, retesting is also called Data Driven Testing (DDT).

The data is driven, i.e. changed on every iteration, to test the application.


 

In WinRunner test engineers conduct retesting in 4 ways:
1. Dynamic test data submission
2. Through a flat file (notepad)
3. From front-end grids (list box)
4. Through an Excel sheet
In the first type the tester supplies the values during test execution (like scanf() in C); the remaining three types run without tester interaction.
Dynamic test data submission: To conduct retesting to validate functionality, the test engineer submits the required test data to the tool dynamically. To read keyboard values during test execution, the test engineer uses the TSL statement below.

Syntax: create_input_dialog(“ Message “); Ex: create_input_dialog(“ Enter Your Account Number : “);
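A sketch of dynamic test data submission for the Multiply problem below (the window and object names "Multiply", "No1", "No2" and "Result" are assumed from the mockup):

```
# Read the inputs from the tester at run time (like scanf() in C)
no1 = create_input_dialog("Enter first number: ");
no2 = create_input_dialog("Enter second number: ");
set_window("Multiply", 10);
edit_set("No1", no1);
edit_set("No2", no2);
button_press("Multiply");
obj_get_text("Result", res);    # expected: res = no1 * no2
```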

[Diagram: keyboard input feeds the test script, which drives the build]
Problem: a Multiply window with No1, No2 and Result. Exp: res = no1 * no2.
[Form mockup: Item No, Quantity, Ok, Price ($), Total ($)]

 


tl_step(): "tl" stands for test log, i.e. the test result. We can use this function to define a user-defined pass or fail message.

Pass – green – status 0; Fail – red – status 1.
password_edit_set("pwd", password_encrypt(y));
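A sketch of tl_step() usage, assuming the common TSL form tl_step("step name", status, "description"); it presumes res, no1 and no2 already hold the captured and input values:

```
if (res == no1 * no2)
    tl_step("multiply check", 0, "result is correct");   # status 0 = pass (green entry)
else
    tl_step("multiply check", 1, "result is wrong");     # status 1 = fail (red entry)
```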

[Form mockups: a login form (User Id, Password, Login, Next); "Sample 1" (Text1, Ok); "Sample 2" (Text2, Display)]

Problem: First enter EmpNo and click the Ok button; the form then displays bsal, comm and gsal.
Exp: gsal = bsal + comm.
If bsal >= 15000 then comm is 15% of bsal.
If bsal is between 8000 and 15000 then comm is 5% of bsal.
If bsal < 8000 then comm is 200.


 


Through a flat file (notepad): Sometimes the test engineer conducts data driven testing depending on multiple test data kept in flat files (like notepad .txt files).

To manipulate file data for testing, the test engineer uses the TSL functions below.
file_open(): To load the required flat file into RAM with the specified permissions.
Syntax: file_open("path of the file", FO_MODE_READ / FO_MODE_WRITE / FO_MODE_APPEND);
file_getline(): We can use this function to read a line from an opened file.
Syntax: file_getline("path of the file", variable);
As in C, the file pointer is incremented automatically.
file_close(): We can use this function to swap an opened file out of RAM.
Syntax: file_close("path of the file");
file_printf(): We can use this function to write the specified text into a file opened in WRITE or APPEND mode.
Syntax: file_printf("path of the file", "format", values or variables to write);

%d – integer, %s – string, %f – floating point, \n – new line, \t – tab, \r – carriage return
substr(): we can use this function to extract a substring from a given string.
Syntax: substr(main string, start position, length of substring);
split(): we can use this function to divide a string into fields.
Syntax: split(main string, array name, separator);
In the above syntax the separator must be a single character.
file_compare(): to compare the contents of two files.
Syntax: file_compare("path of file1", "path of file2", "path of file3");
File3 is optional; it receives the concatenated content of both files.
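Putting the file functions together, a sketch that drives a login window from a comma-separated flat file (the file path, field order and object names are assumed; file_getline() is assumed to return 0 until end of file):

```
file = "c:\\testdata\\users.txt";
file_open(file, FO_MODE_READ);             # load the flat file into RAM
while (file_getline(file, line) == 0)      # read the file line by line
{
    split(line, arr, ",");                 # arr[1] = user id, arr[2] = password
    set_window("Login", 10);
    edit_set("User Id", arr[1]);
    password_edit_set("Pwd", password_encrypt(arr[2]));
    button_press("Login");
}
file_close(file);                          # swap the file out of RAM
```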


 


[Diagram: test values in a .txt file feed the test script, which drives the build]
Problems: the Multiply window (exp: res = no1 * no2), the shopping form (Item No, Quantity, Price, Total) and the login form (User Id, Password, Login, Next).

 


From Front-End Grids (ListBox): Sometimes the test engineer conducts retesting depending on multiple test data held in objects (like a list box). To manipulate this data, the test engineer uses the TSL functions below.
list_get_item(): We can use this function to capture the specified list box item through its item number.

Syntax: list_get_item("ListBox Name", item no, variable);
list_select_item(): We can use this function to select the specified list box item through the given variable.

Syntax: list_select_item("ListBox Name", variable);
list_get_info(): We can use this function to get information about the specified property (like enabled, focused, count) of a list box into the given variable.
Syntax: list_get_info("ListBox Name", property, variable);
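A sketch that retests with every item of a list box using the three functions above (the window and list names follow the journey mockup; the item index is assumed to start at 0):

```
set_window("NagaRaju Journey", 10);
list_get_info("Fly From", "count", n);     # number of items in the list
for (i = 0; i < n; i++)
{
    list_get_item("Fly From", i, item);    # capture the item at index i
    list_select_item("Fly From", item);    # drive the application with it
    button_press("Ok");
}
```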

[Diagram: list box test data feeds the test script, which drives the build]
[Form mockups: "NagaRaju Journey" (Fly From, Fly To, Ok); "Sample 1" (List1, Text1, Ok); "Sample 2" (Text2, Display)]

[Form mockup: Type, Age, Gender, Qualification and Others fields, with a Display button and text box]

Data Driven Testing: In general, test engineers create data driven tests depending on Excel sheet data.

[Diagram: the test script loops over Excel sheet test data and drives the build]

From Excel Sheet: In general, test engineers create retest scripts depending on multiple test data in an Excel sheet. To generate this type of script, the test engineer uses the data driven test wizard. In this type of retesting the Excel sheet is filled with test data in two ways:

1. From database tables using a select statement (back end)
2. Our own test data

Navigation: create a test script for one input, Tools menu, Data Driven Wizard, click Next, browse the path of the Excel sheet, specify the variable name to hold the path of the Excel sheet (by


 

default, table is the variable), select Add statements to create a data-driven test, select Import data from a database, choose replacement of text 1. line by line or 2. automatically, click Next, specify the database connection (ODBC/Data Junction), select Specify SQL statement (mssql1.sql), click Next, click Create to select the DSN (machine data source – flight32), write the select statement to capture database content for testing into the Excel sheet, specify the positions where Excel sheet columns replace values in your test script, select Show data table now, click Finish.

[Figure: Excel sheet with columns Col1, Col2, Col3 driving the test script, where C3 = C1 + C2.]

Problems:
1. Prepare a data driven program to find the factorial of a given number. Write the result into the same Excel sheet.
2. Prepare a TSL script to write list box items into an Excel sheet one by one.
ddt_open(): We can use this function to open an Excel sheet into RAM in the specified mode.

Syn: ddt_open(“path of file”, DDT_MODE_READ / DDT_MODE_READWRITE);
This function returns E_FILE_OPEN when the file has been opened into RAM; otherwise it returns E_FILE_NOT_OPEN.
ddt_update_from_db(): To extend the Excel sheet data depending on dynamic changes in the database (Insert, Delete, Update).

Syn: ddt_update_from_db(“path of excel sheet”, “path of query file”, variable);
In the above syntax the variable stores how many rows were newly altered.
ddt_save(): To save recent modifications to the Excel sheet.
ddt_save(“path of excel sheet”);
ddt_get_row_count(): To find the number of rows in the Excel sheet.

ddt_get_row_count(“path of excel sheet”, variable);
The variable stores the number of rows in the sheet.
ddt_set_row(): To point to a row in the Excel sheet.
ddt_set_row(“path of excel file”, row no);
ddt_val(): To read a value from the specified column in the pointed row.

ddt_val(“path of excel file”, col no);
ddt_set_val(): To write a value into a specified column in the pointed row.
ddt_set_val(“path of excel file”, “col name”, value or variable);
ddt_close(): To swap the Excel sheet out of RAM.
ddt_close(“path of excel file”);

Write a program to write list box items into an Excel sheet one by one.
Test Suite / Test Batch: Arranging all tests in a proper order based on their functionality. It defines which test's output is used as input to other tests.
Batch Testing: In general, test engineers execute their scripts as batches. Every batch consists of a set of dependent tests; in every batch the end state of one test is the base state of the next test. When you execute tests as batches you get a chance to increase the probability of defect detection.
Syntax:
call “test name”();
call “path of the test”();
In the above syntax we can use the first form when the calling and called tests are in the same folder, and the second form when they are in different folders.
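As a hedged sketch of the exercise above (writing list box items into an Excel sheet one by one), the ddt_* functions can be combined like this; the window name "Sample1", list box "List1", sheet "def.xls" and column "Items" are assumed names for illustration, not values from this material:

```
# Sketch: write every item of a list box into an Excel sheet, one row per item.
set_window("Sample1", 5);
list_get_info("List1", "count", n);        # number of items in the list box
table = "def.xls";
rc = ddt_open(table, DDT_MODE_READWRITE);  # load the sheet into RAM
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table");
for (i = 0; i < n; i++)
{
    list_get_item("List1", i, item);       # read the i-th list item
    ddt_set_row(table, i + 1);             # point to the row (rows are 1-based)
    ddt_set_val(table, "Items", item);     # write the item into column "Items"
}
ddt_save(table);                           # persist the modifications
ddt_close(table);                          # swap the sheet out of RAM
```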

[Figure: the main (calling) test invokes the subtest with call TestName().]

Parameter passing: WinRunner allows you to pass arguments from a calling test to a called test (main test to subtest).
Navigation: Open the subtest, File menu, Test Properties, select the Parameters tab, click Add to create parameters, click Apply, click OK, and use those parameters in the required places in the test script.

From the above model the main test passes values to the subtest. To receive those values, the subtest maintains a set of parameter variables.
Data Driven Batch Test: WinRunner allows you to execute batches with multiple test data.
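A minimal sketch of parameter passing, assuming the subtest declares a parameter n in its Test Properties and contains an edit box "Text1" in window "Sample1" (the test path and all names are assumptions):

```
# --- Main (calling) test ---
n = 10;
call "C:\\tests\\subtest"(n);    # passes 10 into the subtest's parameter n

# --- Subtest ---
set_window("Sample1", 5);
edit_set("Text1", n);            # uses the received parameter value
```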

[Figure: the main test calls the subtest with call TestName(n), where n = 10.]

texit(): Test engineers sometimes use this statement in a test script to stop test execution in the middle of the process.
treturn(): We can use this statement to return a value from a called test to the calling test.

treturn(variable or value);   Ex: treturn(10);

Main (calling) test:
n = 10;
temp = call TestName(n);
if (temp == 1)
    printf();
else
    printf();

Subtest:
edit_set(“”, n);
if (condition)
    treturn(0);
else
    treturn(1);

Silent Mode: In general WinRunner raises a pause message when any standard checkpoint fails during test execution. If you want to execute test scripts without any interruption when a checkpoint fails, we can follow the navigation below to define silent mode.

Navigation: Settings, General Options, Run tab, select the “run in batch mode” option, click Apply, click OK.
[Figure: a batch of Test1, Test2, Test3, Test4; when one test fails or an unexpected Sample window appears, execution continues.]
if (win_exists(“sample”) == E_OK)
win_exists(): We can use this function to find the existence of a window on the desktop, whether in minimized, maximized or hidden position.

Syn: win_exists(“window name”, time); – time is optional.
Homework: Log in after 5 seconds. If Next is enabled, go to the next window; else try another user.
Shopping: Prepare the above batch test for ten users whose information is available in an Excel sheet. During this batch execution the tester passes the item number and quantity as parameters.
User Defined Functions: Like programming languages, WinRunner also provides a facility to create user defined functions. In TSL, user defined functions are created by the test engineer to automate repeatable navigation.
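A hedged sketch of the login homework above; the window and object names ("Login", "User", "Password", "Next Window") and the user values are assumptions:

```
# Try to log in; wait up to 5 seconds for the next window, else try another user.
set_window("Login", 5);
edit_set("User", "user1");
edit_set("Password", "pwd1");
button_press("OK");
if (win_exists("Next Window", 5) == E_OK)   # next window appeared within 5 secs
{
    set_window("Next Window", 5);
    # ... continue with the flow ...
}
else                                        # login failed, try another user
{
    set_window("Login", 5);
    edit_set("User", "user2");
}
```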


In the above example, the test engineer creates four automation test scripts to test four different functionalities, based on functionality dependency. Test engineers call this login process the base state.

public/static function name(in/out/inout argument name, …)
{
    # repeatable navigation
    return (value or variable);
}

If you want to create a user defined function in which the end state of one execution is the base state of the next execution, we can use static functions. A static function maintains constant locations for its internal variables during the current test execution; the output of one execution is input to the next.
[Figure: a static variable (static a = 0) retains its value (a = 100) across invocations within the test run.]

Note1: User defined functions allow only context sensitive statements and control statements; they do not allow checkpoints and analog statements.
Note2: In batch testing one test calls another test through its saved test name, while a test invokes a function by its function name. To call a function in a test, that function's compiled module must reside in RAM.


public function add(in a, in b, out c)
{
    c = a + b;
}
Calling test:
x = 6; y = 6;
add(x, y, z);
printf(z);

public function add(in a, in b)
{
    c = a + b;
    return c;
}
Calling test:
x = 6; y = 6;
z = add(x, y);
printf(z);

public function add(in a, inout b)
{
    b = a + b;    # inout: the updated value is visible to the caller
}
Calling test:
x = 6; y = 6;
add(x, y);
printf(y);

in – general arguments

out – return values
inout – both
return – to return one value
Note: User defined functions allow only context sensitive statements and control statements, and do not allow checkpoints and analog statements.
Compiled Module: Open WinRunner and the build, click New in WinRunner, record the repeatable navigation as user defined functions, save that test in the dat folder, File menu, Test Properties, General tab, change the test type to Compiled Module, click Apply, click OK, and write a load() statement for that compiled module in the startup script of WinRunner.

Note: WinRunner maintains a default program as a startup script. This script is executed automatically when you launch WinRunner. In this script we can write a load() statement to load your functions.
load(“Name of the compiled module”, 0/1, 0/1);

0 – user defined compiled module; 1 – system defined compiled module
0 – path appears in the WinRunner Window menu; 1 – hides the path
unload(): We can use this function to unload unwanted functions from RAM.

Syntax: unload(“path of the compiled module”, “unwanted function name”);
reload(): We can use this function to reload previously unloaded functions.
Syntax: reload(“path of the compiled module”, 0/1, 0/1);

0 – user defined compiled module; 1 – system defined compiled module
0 – path appears in the WinRunner Window menu; 1 – hides the path
Predefined Functions: These functions are also known as built in functions or system defined functions. WinRunner provides a facility to search for a required TSL function in a library called the function generator.
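As a short sketch, loading and managing a user defined compiled module from the startup script; the module path and function name are assumptions for illustration:

```
# Load the compiled module so its functions reside in RAM.
load("C:\\wr\\dat\\my_funcs", 0, 0);   # 0: user defined, 0: show path in Window menu

# Later: unload one unwanted function, then reload the unloaded functions.
unload("C:\\wr\\dat\\my_funcs", "login_base_state");
reload("C:\\wr\\dat\\my_funcs", 0, 0);
```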

To search for a required function in the function generator, we can follow the navigation below:
Create menu, Insert Function, From Function Generator, select the required category, select the required function based on its description, enter arguments, click Paste.
invoke_application(): WinRunner allows you to open a project automatically.

invoke_application("path of .exe", "commands", "working directory", SW_SHOW / SW_HIDE / SW_MINIMIZE / SW_RESTORE / SW_SHOWMAXIMIZED / SW_SHOWMINIMIZED / SW_SHOWMINNOACTIVE / SW_SHOWNOACTIVE);
Commands – used in XRunner for Unix OS.
Working directory – at run time, temporary files are stored in this directory. If you do not specify a directory, it defaults to c:\windows\temp.
Executing a Prepared Query:
db_connect(): We can use this function to connect to a database using an existing DSN or connection.
Syntax: db_connect(“Session Name”, “DSN=*******”);
Ex: db_connect("Query1", "DSN=Flight32");
db_execute_query(): We can use this function to execute a required “select” statement on the connected database.
Syntax: db_execute_query(“Session Name”, “select statement”, variable);
Ex: db_execute_query("Query1", "select * from orders where order_number <= "&x, rno);
db_write_records(): We can use this function to write the query results into a specified file.
Syntax: db_write_records(“Session Name”, “File Path”, TRUE/FALSE, NO_LIMIT);
Ex: db_write_records("Query1", "nrdbc1.xls", TRUE, NO_LIMIT);
Extra Functions in WinRunner: Sometimes test engineers add user defined function names to the function generator to maintain user defined functions for future reference.

To do this task, we can use the TSL statements below.
generator_add_category(): We can use this function to create a new category in the function generator.
Syntax: generator_add_category(“Category Name”);
Ex: generator_add_category(“NagaRaju”);
generator_add_function(): We can use this function to add a user defined function name to the all-functions category.
Syntax: generator_add_function(“Function Name”, “Description”, arity, “argument name”, “argument type”, “default value”, …);
Ex: generator_add_function(“name”, “description”, 5, “a”, “browse()”, “”, “b”, “point_window()”, “”, “c”, “point_object”, “”, “d”, “select_list(0 1 2 3 4 5)”, “”, “e”, “type_edit”, “”);

browse() – for a file path
point_window() – for pointing to a window
point_object() – for pointing to an object
select_list() – for selecting from a list of items
type_edit() – for free text entry (used by default when no other type fits)
generator_add_function_to_category():
generator_add_function_to_category(“category name”, “function name”);
Note: We can execute the third function above only after the second function has executed. We can write all three statements in the startup script of WinRunner.
Select TSL functions for:
1. Prepare TSL to execute the prepared query below: select * from orders where order_number <= x and order_number >= y
2. Change the timeout without using settings: setvar(“timeout”, time); – a system category function.
3. Find the parent directory of WinRunner (where WinRunner is installed on your computer): getenv(“M_HOME”); getenv(“M_ROOT”);
4. Point to the system date: get_time() (only time, not date); time_str()
5. What is the difference between invoke_application() and system()? One .exe path is enough for invoke_application(); system() opens an application through the title of the software.
The system category has 8 functions that come up often in interviews:
1. dos_system(): To execute DOS commands.
2. time_str(): To capture the system date with time.
3. get_time(): To capture the system time value.
4. getvar(): To capture system variable values, e.g. timeout, delay.
5. setvar(): To change system variable values.
6. getenv(): To capture environment information, e.g. M_HOME, M_ROOT.
7. system(): To open an application using the title of the software.
8. invoke_application(): To open an application using the .exe path.
Built in functions / Predefined Functions: All TSL functions are available in the “Function Generator”. The test engineer selects the

required function depending on the requirements (the automation needed), through the navigation below.


Create menu, Insert Function, From Function Generator, select a category, select a function name with arguments, click Paste.
Clipboard testing: A test conducted on the selected content of an object is called clipboard testing.

1. edit_get_selection(“obj name”, var) – captures the selected content of the object.
Difference between edit_get_selection() and obj_get_text(): testing only a selected part of an object's content is called clipboard testing; testing the entire content is called general testing.
Open Application: WinRunner provides a facility to open your project automatically (a system category function).
invoke_application("path of .exe", "commands", "working directory", SW_SHOW / SW_HIDE / SW_MINIMIZE / SW_RESTORE / SW_SHOWMAXIMIZED / SW_SHOWMINIMIZED / SW_SHOWMINNOACTIVE / SW_SHOWNOACTIVE);
SW – Show Window; SW_SHOW – gives focus to the window.
Commands – used in XRunner for Unix OS.
Working directory – at run time, temporary files are stored in this directory; by default c:\windows\temp.
Executing a Prepared Query:
db_connect(): We can use this function to connect to a database using an existing DSN or connection.
Ex: db_connect("Query1", "DSN=Flight32");
db_execute_query(): We can use this function to execute a required “select” statement on the connected database.
Ex: db_execute_query("Query1", "select * from orders where order_number <= "&x, rno);

db_write_records(): We can use this function to write query results into a specified file.

Syntax: db_write_records(“Session Name”, “Destination File Path”, TRUE/FALSE, NO_LIMIT);
TRUE – with header; FALSE – without header.
Ex: db_write_records(“Query1”, ”Nr.txt”, TRUE, NO_LIMIT);

db_disconnect(): We can use this function to close the database connection.
Syntax: db_disconnect(“Session Name”);
Ex: db_disconnect("Query1");
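A hedged sketch of the prepared query exercise with two bounds; the values of x and y and the output file name are assumptions, while DSN=Flight32 follows this material's own examples:

```
x = 100;
y = 10;
db_connect("Query1", "DSN=Flight32");
db_execute_query("Query1",
    "select * from orders where order_number <= " & x &
    " and order_number >= " & y, rno);     # rno receives the record count
printf("Rows returned: " & rno);
db_write_records("Query1", "orders.xls", TRUE, NO_LIMIT);  # TRUE: with header
db_disconnect("Query1");
```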

3. win_exists()
4. Open the application, execute the prepared query, db_disconnect()
Learning: In general a test automation process starts with learning. Learning means the recognition of the objects and windows in your application by the testing tool.

WinRunner 7.0 supports auto learning and pre learning.
Auto Learning: During recording, WinRunner recognizes objects and windows with respect to the tester's operations. This type of automatic recognition is called auto learning.
Steps: start recording, recognize the object, catch the entry into the GUI map, generate the script.

[Figure: auto learning – during recording the tester clicks Ok in the build; WinRunner recognizes the object, catches its entry into the GUI map, and generates button_press(“Ok”); in the test script.]
The GUI map entry:
Ok
{
    class: push_button,
    label: "Ok"
}
So before closing WinRunner we have to do two tasks.


Save the script.
Save the GUI map.
A disadvantage of WinRunner is that it does not work without GUI map entries.
Note: If the GUI map is empty, existing test scripts are not able to execute. To maintain

these entries long term along with our test scripts, we can follow two possible administrations.
Global GUI Map File: In this model the test engineer creates one global GUI map file and maintains it explicitly on the hard disk. By default WinRunner allows you to create a global GUI map file.
[Figure: Test1, Test2 and Test3 share one GUI map, saved to and opened from a single .gui file on the hard disk explicitly (using the File menu in the GUI Map Editor).]

Per Test Mode: This is a new option in WinRunner 7.0. In this mode WinRunner implicitly handles the entries in the GUI map.

In this mode WinRunner automatically saves and opens GUI map entries per test. Because of this, WinRunner increases entry redundancy (repetition) when an object or window participates in more than one test. By default WinRunner follows the global GUI map; if you want to change to per test mode, we can follow the navigation below.
Navigation: Settings, General Options, Environment tab, select “GUI map file per test”, click Apply, click OK.

[Figure: per test mode – each test (Test1, Test2, Test3) has its own .gui file, saved and opened implicitly by WinRunner.]

Pre Learning: Sometimes WinRunner 7.0 testers also follow the pre learning concept, before they start recording. Due to this reason, pre learning is only suitable for the global GUI map.
Navigation: Open the project, Create menu in WinRunner, Rapid Test Script Wizard, click Next, show the application main window, click Next (select No Tests), click Next, specify submenu symbols (.., >>, ->), click Next, specify the learning mode (Express or Comprehensive), click Learn, after learning say Yes or No to open the project automatically, click Next, remember the paths of the startup script and GUI map file, click OK.

In general test engineers follow the auto learning concept with a global GUI map file. They do not regularly use auto learning with per test mode, or pre learning.
Difference between auto learning and pre learning:
Auto Learning: during recording; no need for extra navigation; global GUI map file or per test mode.
Pre Learning: before recording; using the Rapid Test Script Wizard; per test mode.

Depending on test requirements, WinRunner test engineers perform changes in the corresponding object or window recognition entries. There are six situations in which to perform changes in GUI map entries. You can change the entries of the GUI map in 6 ways:

1. Wild card character
2. Regular expressions
3. Virtual Object Wizard
4. Mapped to standard class
5. GUI map configuration
6. Selective recording

Wild Card Character: Sometimes window or object labels vary with respect to inputs in your application. To create a data driven test on such windows and objects we can perform changes in the corresponding entries in the GUI map. The wild card characters ! and * can be used to organize entries in WinRunner.

Fax Order No. 6
{
    class: window,
    label: "Fax Order No.6",
    MSW_class: "#32770"
}
After the change:
Fax Order No. 6
{
    class: window,
    label: "!Fax Order No.*",
    MSW_class: "#32770"
}
Regular Expressions: Sometimes in your application build, object / window labels are

varying depending on events at runtime. We change the caught entries with respect to the logical name used at that runtime point.
Start
{
    class: push_button,
    label: "![S][t][ao][a-z]*"
}
for (i = 1; i <= 5; i++)
{
    set_window("Personal Web Manager", 3);
    button_press("Start");
    printf("Button Pressed is: " & i);
}
For number changes: wild card characters. For toggle characters: regular expressions.
GUI Map Configuration: Sometimes in your application more than one object has the same physical description with respect to WinRunner defaults (class and label). To recognize these objects individually we can perform changes in the GUI map configuration.

This is used when one object is not recognized uniquely by the tool; WinRunner then recognizes it by using this feature.
Navigation: Tools, GUI Map Configuration, select the object type, click Configure, select

distinguishable properties into obligatory and optional (in general test engineers maintain MSW_id as optional), click OK.

If the class and label are the same, we select MSW_id (Microsoft Window ID). If the applicable properties and obligatory properties are the same, we use the optional property (MSW_id).
Command1
{
    class: push_button,
    label: Command1,
    MSW_id: 1
}
Note: Here we can maintain MSW_id as assistive, because every two objects have different MSW_ids.
Mapped to Standard Class: Sometimes test engineers do not get the required properties of an object. This option is used when an object is recognized but the required properties are not available on it; we then map the object to a matching standard class to get the required properties.
Navigation: Tools, GUI Map Configuration, select the non-testable object, click OK, click Configure, select mapped to class, click OK.
Virtual Object Wizard: To forcibly recognize non-recognized objects we can use this option.
Navigation: Tools, Virtual Object Wizard, click Next, select the expected type depending on the nature of the object, click Next, mark the non-recognized object area, right click to release, click Next, enter a logical name for the new entry, say Yes/No to create more, click Finish.
Selective Recording: It is a new concept in WinRunner 7.0. If you have more than one application open on the desktop at recording time, WinRunner may also record unnecessary application details in the TSL if you do not specify exactly which application you need. For such situations we specify it explicitly in WinRunner using this path.

Settings -> General Options -> Record tab, click selective recording, select record only on selected applications (off by default), record on Start menu and Windows Explorer, browse the required application path, click OK.
Note: Selective recording is a new concept in WinRunner 7.0. This concept is not applicable to analog mode, because WinRunner records operations with respect to desktop coordinates in analog mode.
User Interface Testing: WinRunner is a functionality testing tool, but it provides a facility to conduct user interface testing. In this user interface automation testing, WinRunner depends on the Microsoft six rules.
Microsoft 6 Rules:

1. Controls are init cap
2. OK/Cancel existence
3. System menu existence

4. Controls are visible
5. Controls are not overlapped
6. Controls are aligned
To apply the above six rules to your application build, WinRunner uses the TSL functions below.
load_os_api(): WinRunner uses this function to establish a path between Windows OS system calls and the application programming interface in order to apply the six rules.
Syntax: load_os_api();
configure_chkui(): To specify which of the six rules the tester wants to test.
Syntax: configure_chkui(TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE);

lbl_chk = TRUE;      # checks capital letters of labels on controls
ok_can_chk = TRUE;   # checks existence of OK/Cancel buttons
sys_chk = TRUE;      # checks existence of the system menu
text_chk = TRUE;     # checks that all text of controls is visible
overlap_chk = FALSE; # checks that controls do not overlap
align_chk = FALSE;   # checks alignment of controls

Note: The order of the rules is mandatory.
check_ui(): WinRunner uses this function to apply the configured rules on the specified window.
Syntax: check_ui(“window name”);
The above three functions are not built in functions; they were developed by Mercury Interactive as a system defined compiled module. A compiled module is a permanent executable form of user defined functions.
Navigation: Open the application build on the desktop, Create menu, Rapid Test Script Wizard, click Next, show the application main window, click Next, select user interface test, click Next, specify submenu symbols (>>, <<, …), click Next, specify the learning mode (Express / Comprehensive), click Learn, after learning say Yes/No to open your application during WinRunner launch, click Next, remember the paths of the startup script and GUI map file, click Next, remember the path of the UI test, click OK, specify TRUE for the required rules, click Run, and analyze the results manually.
Regression Testing: Receive the modified build from the development team, then run GUI regression, bitmap regression and real regression to ensure bugs are fixed and resolved.
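The three UI-testing functions above can be combined as a short sketch; the window name "Sample1" is an assumption:

```
load_os_api();                    # bridge to the Windows OS system calls
# Rule order is mandatory: labels init cap, OK/Cancel, system menu,
# text visible, no overlap, alignment.
configure_chkui(TRUE, TRUE, TRUE, TRUE, FALSE, FALSE);
check_ui("Sample1");              # apply the configured rules on this window
```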

[Figure: the development team releases the modified build; GUI regression and bitmap regression find screen level differences between the old and new builds; a real regression test ensures the modifications.]

Software Testing Material To find screen level changes: GUI Regression, Bitmap Regression. From the above process, test engineer performs GUI regression and bitmap regression before  perform functionality functionality level regression. regression. To perform this preliminary level verification we can use WinRunner concepts in RTSW(Rapid Test Script Wizard). GUI Regression Testing: To find objects properties differences between old build and new  build, we can use this option in RTSW.

[Figure: GUI checkpoints compare the old build against the new build.]
Navigation: Open the old build on the desktop, Create menu, Rapid Test Script Wizard, click Next, show the application main window, click Next, select use existing information, click Next, select GUI regression test, click Next, remember the path of the test script, click Next, click OK, close the old build, open the new build, click Run, and analyze the results manually.

Bitmap Regression Testing: To find image object level differences between the old build and the new build, we can use this option in the RTSW.

[Figure: bitmap checkpoints compare the old build against the new build.]
Navigation: Open the old build on the desktop, Create menu, Rapid Test Script Wizard, click Next, show the application main window, click Next, select use existing information, click Next, select bitmap regression test, click Next, remember the path of the test script, click Next, click OK, close the old build, open the new build, click Run, and analyze the results manually.

Note: After receiving the modified build, the testing team plans functionality regression after completing GUI regression and bitmap regression. In this scenario GUI regression is mandatory and bitmap regression is optional.
Exception Handling: A non-modifiable runtime error is called an exception. To handle exceptions during testing, WinRunner provides three types of handlers:
• TSL exceptions
• Object exceptions
• Pop-up exceptions
TSL Exceptions: These exceptions are raised when a specified TSL statement returns a specified error code.

To create TSL exceptions we can follow the navigation below:
Tools, Exception Handling, select exception type as TSL, click New, enter an exception name, select the expected TSL function, select the expected return code, enter a handler function name, click

OK, click Paste, click OK after reading the suggestion, click Close, record the navigation required to recover from the expected situation as the function body, make it a compiled module, and write a load() statement in the startup script of WinRunner.
public function nagaraju(in rc, in func)
{
    printf(func & ” returns “ & rc);
}
Object Exceptions: TSL exceptions depend on a corresponding TSL statement and return code, but not all negative situations are suitable to define in terms of a TSL statement and return code. Some negative situations are defined by the tester with respect to object properties. An object exception is raised when a specified object property equals the expected value.

[Figure: an object exception – when the object's enabled property goes down in the build, the test script transfers control to the handler.]

To create this type of exception, we can follow the navigation below:
Tools, Exception Handling, select exception type as Object, click New, enter an exception name, select the traceable object, select the property with its expected value, enter a handler function name, click OK, click Paste, click OK after reading the suggestion, click Close, record the recovery navigation, make it a compiled module, and write a load() statement in the startup script of WinRunner.
public function nagaraju(in win, in obj, in attr, in val)
{
    printf(“ Enabled “);
}
Pop-up Exceptions: These exceptions are raised when a specified window comes into focus during test execution. We can use this type of exception to skip unwanted windows during test execution.
Navigation: Tools, Exception Handling, select exception type as Pop Up, click New, enter an exception name, show the unwanted window, specify the handler action, click OK.
To administer exceptions WinRunner provides the TSL functions below:
exception_off(“Exception Name”);
exception_off_all();
exception_on(“Exception Name”);
Note: When you create an exception, it is ON by default.
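A small sketch of administering exceptions around a step where the otherwise-unwanted window is expected; the exception, window and button names are assumptions:

```
exception_off("skip_update_dialog");   # allow the window to appear here
set_window("Sample1", 5);
button_press("Check Updates");
exception_on("skip_update_dialog");    # resume skipping the unwanted window
```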



Rational Robot 



• Developed by Rational
• Also known as SQA Robot
• A functionality testing tool like WinRunner
• Supports client/server and web technologies

• Records our business operations in Rational Basic (RB). RB is like VB.

 

WinRunner vs Rational Robot:
1. Developed by: Mercury Interactive / Rational
2. Records operations in: Test Script Language (TSL) / Rational Basic (RB)
3. Recording language is like: C / VB
4. Learning: auto learning and pre learning / implicit learning (recognizes objects based on MSW_id)
5. Recording: Context Sensitive and Analog modes / Object Oriented and Low Level recording (Record menu, turn to the other mode). Note: in low level recording Robot records the mouse pointer movements along with time.
6. GUI checkpoint: for single property, for object/window, for multiple objects / Insert, TestCase (checkpoint), object properties, save the checkpoint, select the testable object, specify expected values for the required properties; Robot allows one checkpoint per object.
7. Bitmap checkpoint: for object/window, for screen area / Insert, TestCase, Window/Region Image.
8. Database checkpoint: default, custom, runtime / not applicable (record checkpoint).
9. Text checkpoint: Get Text from object/window, from screen area / Insert, TestCase, Alphanumeric (text box / inbox), Clipboard (copied content), Object Data (list, menu, table, data window and ActiveX).
10. Window existence: win_exists(“Window Name”, time); / Insert, TestCase, Window existence, save the checkpoint, select the testable window, click OK.
11. File comparison: file_compare(“path of file1”, “path of file2”); / Insert, TestCase, File comparison, save the checkpoint, browse file1 and file2, click OK.
12. File existence: file_open(“path of file1”, mode); / Insert, TestCase, File existence, save the checkpoint, browse the testable file, click OK.
13. User defined pass/fail: tl_step(), printf() / Insert, specify the result type (Pass, Fail, Warning, None), click OK.
14. Batch testing: call “TestName”(); call “path of test”(); / Insert, Call Test Procedure, select the required subtest, click OK. Note: Robot does not allow parameter passing.
15. Open project: invoke_application(); for .exe files / Insert, Start Application, browse the application path, click OK.
16. Synchronization: wait(), change runtime settings; for object/window property, for object/window bitmap, for screen area (for the last two we have positive/negative region) / delayfor(); no runtime settings but 10 seconds by default; Insert, wait state, object properties.
17. Login: no login window / one login window
18. Saves test names as: Noname1, Noname2, … / Test1, Test2, …

Silk Test 5.0 

• Developed by Segue.
• Functionality testing tool, like WinRunner, Rational Robot and QTP.
• Supports client/server and web technologies.
• Records our business operations in 4Test, a language like Java.
• Follows a single thread of process (learning, recording, checkpoints and script editing are not separate; all are done at a time).

Navigation: Start, Programs, Silk Test, File menu, click New, click the logo, click Next, browse the manual test path, click Next, select a new test frame or an existing test frame, click Next, read the suggestions, click Next, open the application window by window manually, click Return to Wizard, click Next, read the suggestions for recording, click Next, record our business operations, set the mouse pointer on the required object and press Ctrl+Alt to create a checkpoint (property, method, bitmap), click OK, continue recording, insert further checkpoints in the same way, click Done to stop recording, click Next, set the application base state, click Run Test, click Close, analyze the results manually.

URL testing: Enter the base URL, specify the depth to walk, click Press, and analyze the results manually (red = not working, black = working).

QTP (Quick Test Professional)

Some professionals also call it Quick Test Pro. The present version is QTP 6.5. WinRunner does not support ERP and .Net.

• Developed by Mercury Interactive.
• Derived from WinRunner.
• Supports client/server, web applications, ERP and multimedia technologies (dynamic images such as Maya and Flash) for functionality testing.
• Records our business operations in VBScript.
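For orientation, here is roughly what a recorded QTP script looks like. This is a hypothetical sketch: the "Login" window and its controls are invented for illustration (they are not from this material), and the statements run only inside QTP.

```vbscript
' Hypothetical recorded lines - QTP emits one VBScript statement
' per user operation on a recognized object.
Window("Login").WinEdit("Agent Name").Set "testuser"
Window("Login").WinEdit("Password").SetSecure "encrypted-password-value"
Window("Login").WinButton("OK").Click
```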

Learning: Automation starts with learning. Like WinRunner, QTP supports auto learning only. During recording QTP creates recognition entries for objects and windows. [In WinRunner every entry is maintained in the GUI Map Editor. Every entry consists of a logical name and a physical description.

In WinRunner entries are maintained in two ways: Global GUI Map file and Per Test mode.
Global GUI Map file: Advantage: entries can be used in more than one test. Drawback: it does not provide auto save and open.
Per Test mode: Disadvantage: entries cannot be used in more than one test. Advantage: it provides auto save and open.]

QTP maintains entries in the object repository. [A repository is a folder or directory; it is created by the user and saved by the system.] This repository provides auto save and open.
Path: Tools -> Object Repository.
You can change the entries of the GUI Map in 6 ways:
1. Wild card characters
2. Regular expressions
3. Virtual Object Wizard
4. Mapped to standard class
5. GUI Map configuration
6. Selective recording
1. Wild card characters can be used to organize entries in QTP (like WinRunner, using !, *).
2. QTP also supports regular expressions, like WinRunner.


3. Virtual Object Wizard: Used when an object is not recognized by the tool. WinRunner recognizes such objects using this feature, but it takes more time; in QTP the process is the same, with shorter navigation.
4. Mapped to Standard Class: Used when an object is recognized but the required properties are not available for it. Map the object to the closest matching standard class to get the required properties.
5. GUI Map Configuration: Sometimes two objects may have the same logical and physical names. To differentiate one object from the other, WinRunner internally uses the MSWID. In QTP we follow: Tools -> Object Identification -> select the object type -> move the distinguishing properties into the mandatory and assistive lists -> click OK.
Note: Here we can maintain MSWID as an assistive property, because any two objects have different MSWIDs.
6. Selective Recording: In WinRunner, if more than one application is open on the desktop at recording time, unnecessary details about the other applications may be recorded unless you specify in TSL exactly which application you need; for such situations WinRunner lets you specify it explicitly via Settings -> General Options. In QTP, when you click Start Recording it asks whether you want selective recording; if you choose it, a window is displayed in which you select the application and working directories. Like WinRunner, QTP also supports static recording; this option appears when you click Recording.

Recording: QTP records our business operations in VBScript. By default the tool starts recording in General mode. To record mouse pointer movements, use Test menu -> Analog / Low Level recording. In WinRunner two modes are available (Context Sensitive and Analog); in QTP three modes are available (General, Analog and Low Level).

Checkpoints: To conduct functionality testing on applications of different technologies, QTP provides the checkpoints below.
1. Standard checkpoint: To test the behavior and input domains of objects. This checkpoint allows one object at a time.
Select a position in the script -> Insert menu -> Checkpoint -> Standard Checkpoint -> select the testable object -> click OK after confirmation -> select the required properties with expected values -> click OK.

In QTP, for one property you can give two kinds of values: a constant expected value or a parameterized expected value.
2. Bitmap checkpoint: QTP supports comparison of static and dynamic images. The maximum timeout for picture elements is 10 seconds.
3. Database checkpoint: QTP provides a back-end testing facility through this checkpoint, like the WinRunner default check.

Insert -> Checkpoint -> Database Checkpoint -> specify the SQL statement -> click Create to select the DSN -> write the SELECT statement -> click Finish.
4. Text checkpoint: To capture object values into variables. VBScript supports variable declaration.
5. Text Area checkpoint: To capture static text from screens.

Data Driven Testing:

Like WinRunner, QTP also supports retesting with multiple test data. There are three possibilities: dynamic test data, front-end grids and Excel sheets.
1. Dynamic test data: In WinRunner we use create_input_dialog("Dialog Message : ") to submit dynamic test data under data-driven testing. In QTP we use the inputbox("Message") function to read data from the user: var = inputbox("Message").
2. Front-end grids: Depends on list box, menu, ActiveX, table and data window objects; the test engineer conducts retesting through them. If you want to search for VBScript functions, follow this navigation: Insert -> Step -> Method -> select the required object -> click OK after confirmation -> click Next -> enter the arguments -> click Next.
3. Excel sheet: Create a test script for one input -> insert test data into Excel sheet columns -> Tools menu -> Data Driver -> select the position to use or replace with Excel sheet columns -> click Parameterize -> click Next -> select the required column name -> click Finish.

Batch Testing: Like WinRunner, QTP also allows batch testing. To form batches QTP supports WinRunner tests as well. Batch testing can be done in 2 ways:


1. QTP test to QTP test: Insert -> Call to Action -> browse the subtest -> specify parameter data using Excel sheet columns -> click OK.
2. QTP test to WinRunner test: Insert -> Call to WinRunner Test -> browse the path of the test -> click OK.
Note: QTP supports WinRunner 7.0 and higher versions only, because QTP supports auto learning and auto learning is possible from WinRunner 7.0 onwards.
Synchronization points: To define time mapping between QTP and the project we can follow the navigation below.

Insert -> Step -> Synchronization Point [this is exactly equal to the For Object/Window Property option in WinRunner] -> select the indicator object -> click OK after confirmation -> specify the expected property with its value -> specify the maximum time to wait -> click OK.
Recovery Scenario Manager: This concept is equal to exception handling in WinRunner. Through this concept QTP recovers from unexpected run-time scenarios with the required handler.

Tools -> Recovery Scenario Manager -> click New -> click Next -> select the trigger type (pop-up, object state, application crash, test run error) -> define the situation with its handler -> click OK.
Extra features in QTP:
• Faster than WinRunner at creating a test.
• Supports .Net, SAP, PeopleSoft, Oracle Applications, multimedia and XML in addition to what WinRunner supports.
• Records business operations in VBScript.
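The dynamic test-data approach described above can be sketched in QTP-style VBScript. This is a hedged sketch, not code from this material: the "Login" window and its objects are hypothetical, and the script runs only inside QTP.

```vbscript
' Ask the user how many iterations to run, then replay the
' recorded operation once per iteration with fresh data.
Option Explicit
Dim n, i, agent
n = CInt(InputBox("How many test runs?"))   ' dynamic test data
For i = 1 To n Step 1
    agent = InputBox("Agent name for run " & i)
    Window("Login").WinEdit("Agent Name").Set agent   ' hypothetical objects
    Window("Login").WinButton("OK").Click
Next
```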

Test Director 6.0

• Developed by Mercury Interactive.
• Test management tool.
• Works as a client/server application.
• Has two parts, Project Administrator and Test Director, both working against a database (MS Access, SQL Server or Oracle).

Project Administrator: This part is used by the test lead to create new database areas, to store new projects' testing documents, and to estimate the test status of an ongoing project.

Create Database: Start, Programs, TD 6.0, Project Administrator, login as test lead, Project menu, New Project, specify the location of the database (private, common), click Create, click OK.

For each project database, the Test Director tool maintains tables and views.
Estimate Test Status: Start, Programs, TD 6.0, Project Administrator, login as test lead, select the project name in the list, click Connect, click the extension symbol in front of the project name, select the required table in the list, extend the query if required, click Run SQL, analyze the results manually to estimate the test status.
Test Director: This part is used by the test engineer to store the corresponding test documents into the database created by the test lead.

Start, Programs, TD 6.0, Test Director, select the project name, login as test engineer.
• Plan Tests
• Run Tests
• Track Defects

Plan Tests: While writing test cases for their responsible modules, test engineers use this part to store their test cases into the database for future reference.
Create Subject: Plan Tests, click Folder New, enter the responsible module name as the test script, click OK.
Create Sub Subject: Plan Tests, select the subject name, click Folder New, enter the sub subject name, click OK.
Create Test Case: Plan Tests, select the subject name, select the sub subject, click Test New, select the test type, enter the test name, click OK.
Details: After creating a test case, the test engineer maintains the details below for that test case.

Test Case ID, Test Suite ID, Priority, Test Environment, Test Duration, Test Effort, Test Setup and Test Case Pass/Fail Criteria.
Design Steps: After typing the required details for the test case, we can prepare a step-by-step procedure for executing that test case.

Design Steps, click New, enter the step description with expected results, click New to create more steps, click Close.
Test Script: For automated test scripts, Test Director provides a Launch button to open WinRunner.

Click launch, set application base state for that test, record required navigation, insert required check points, click stop recording, click save.

Attachments: To maintain extra information for test cases, the test engineer uses this part. It is optional.

Attachment, click File/Web, browse the file path to attach, click Open.

Run Tests: After receiving a stable build from the development team, concentrate on test execution. TD provides a facility to create an automated test log during test case execution.

Create Batch: Run Tests, click Test Set Builder, click New, enter the suite ID, click OK, select the required tests and add them into the batch, click Close.
Execute Automated Test: Select the automated test in the batch, click Automated, set the application to the base state for that test, click Run, Tools menu, Test Results, File menu, Open, browse the executed test, analyze the results manually, close WinRunner, change the test status to Passed/Failed depending on the results analysis.
Manual Test Execution: Select the manual testing batch, click Manual, click Start Run, set the application to the base state, run every step manually, specify the status for every step, click Close after executing the last step.

Track Defects.

During test execution the test engineer uses this part to report defects to the development team.
Track Defects, click Add, fill the fields in the defect report, click Create, click Close, click Mail, enter the To mail ID, click OK.
Test Director icons:
Filter: To select the required tests or defects in an existing list we can use the filter concept. Navigation: click the Filter icon, specify the filter condition, click OK.
Sort: To arrange defects in a specified order in a list we can use the Sort icon. Navigation: click the Sort icon, select the required field, specify the sort direction (ascending/descending), click OK.
Columns: We can use this icon to select specific columns in the display list. Navigation: click the Columns icon, select the required columns into the visible list, click OK.
Report: To create hard copies of defects we can use this icon. Navigation: click the Report icon, specify the report type (info or table), specify the printout type, click OK, click Print for every page.
Test Grid: Provides a list of all test cases, under all subjects and sub subjects, in a single window.

Quick Test Professional

• Developed by Mercury Interactive.
• Also known as Quick Test Pro.
• Functionality testing tool, like WinRunner.
• Extension of WinRunner.
• Supports client/server, web applications, ERP and multimedia technologies (dynamic images such as Maya and Flash) for functionality testing.
• Records our business operations in VBScript.
• Supports launching WinRunner to execute TSL scripts.

WinRunner vs. Quick Test Professional:

1. Developed by: both by Mercury Interactive.
2. Records operations in: WinRunner – Test Script Language (TSL); QTP – VBScript in Expert View, and hierarchical steps in Tree View.
3. Recording language is like: WinRunner – C; QTP – VB.
4. Learning: WinRunner – supports Auto Learning and Pre Learning to recognize the objects and windows in your application; QTP – supports auto learning only.
5. Entry maintenance location: WinRunner – maintains the recognized entries in the GUI Map; to edit them follow Tools, GUI Map Editor. QTP – maintains the recognized entries in the object repository; to edit them follow Tools, Object Repository.
6. Types of entry maintenance: WinRunner – Global GUI Map file or Per Test mode to maintain entries long term; QTP – global entries with auto save and auto open in the object repository.
7. Wild card characters: both use the wild card characters (!, *) to organize entries when window labels vary with respect to input.
8. Regular expressions: both use regular expression entries when object labels vary.
9. GUI map configuration: WinRunner – Tools, GUI Map Configuration, click Configure; used when more than one object has the same physical description (with MSWID as optional). QTP – Tools, Object Identification, select the object type, specify MSWID as an assistive property, click OK.
10. Mapped to standard class: used when the tool does not return all testable properties for an object. WinRunner – Tools, GUI Map Configuration, click Add. QTP – Tools, Object Identification, click Add; select the non-testable object, specify the environment, click OK.
11. Virtual Object Wizard: used when an object is not recognized by the tool. WinRunner – Tools, Virtual Object Wizard. QTP – Tools, Virtual Objects, New Virtual Object.
12. Selective recording: used when we want to record our business operations on specific applications only. WinRunner – Settings, General Options, Record tab, click Selective Recording. QTP – File menu, New Test, click Start Recording; a selective recording window appears.
13. Recording modes: WinRunner – two modes, Context Sensitive and Analog. QTP – three modes: General, Analog and Low Level recording.
14. Default mode and shortcuts: WinRunner – the default mode is Context Sensitive and F2 is the shortcut key to switch modes. QTP – General mode is the default; Start Recording: F3, Low Level recording: Ctrl+Shift+F3, Analog recording: Ctrl+Shift+F4. In Low Level recording QTP additionally records mouse pointer movements on the desktop along with time.
15. GUI checkpoint: WinRunner – For Single Property, For Object/Window, For Multiple Objects. QTP – select a position in the script -> Insert menu -> Checkpoint -> Standard Checkpoint -> select the testable object -> click OK after confirmation -> select the required properties with expected values -> click OK.
Note: WinRunner checkpoints allow constant values as expected, e.g.
x = create_input_dialog("xx");
button_check_info("OK", "enabled", x);
QTP checkpoints allow both constant and parameterized values as expected (e.g. expected values in an Excel column).
Note 2: The QTP standard checkpoint allows one object at a time.
16. Bitmap checkpoint: WinRunner – For Object/Window, For Screen Area; supports static images only. QTP – Insert, Checkpoint, Bitmap Checkpoint, select the testable image (static or dynamic), click OK after confirmation, click Select Area if required, click OK.
Note 1: QTP supports comparing static and dynamic images when you select the multimedia option in the Add-in Manager.
Note 2: It supports dynamic image play of up to 10 seconds as the maximum.
17. Database checkpoint: WinRunner – Default, Custom and Runtime Record checks. QTP – Insert -> Checkpoint -> Database Checkpoint (like the WinRunner default check) -> specify the SQL statement -> click Create to select the DSN -> write the SELECT statement -> click Finish.
Note: QTP supports database testing with respect to database content.
18. Text checkpoint: WinRunner – Get Text: From Object/Window, From Screen Area, From Selection (web test checkpoint only). Functions generated:
obj_get_text("Object Name", variable);
obj_get_text("Object Name", variable, x1, y1, x2, y2);
web_obj_get_text("object name", "#Row no", "#Column no", variable, "text before", "text after", time);
web_frame_get_text("frame name", variable, "text before", "text after", time);
QTP – Text checkpoint and Text Area checkpoint, for example:
Option Explicit
Dim vname
vname = Window("window name").WinEdit("Object Name").GetVisibleText
19. Data-driven methods: WinRunner – DDT/retesting in 4 ways: dynamic test data submission, through a flat file (Notepad), from front-end grids (list box), through an Excel sheet. QTP – 3 ways: dynamic test data submission, from front-end grids (list box), through an Excel sheet.
20. Dynamic submission: WinRunner –
n = create_input_dialog("Message");
for (i = 1; i <= n; i++)
{
}
QTP –
Option Explicit
Dim vname
vname = inputbox("Message")
For i = 1 To n Step 1
Next
21. Through a flat file: WinRunner – file_open(); file_getline(); file_compare(); file_printf(); file_close(); QTP – data-driven testing through flat files is not applicable.
22. From front-end grids: both – list, menu, ActiveX, label and data window objects.
23. Through an Excel sheet: WinRunner – Tools, Data Driver Wizard. QTP – create a test script for one input -> insert test data into Excel sheet columns -> Tools menu -> Data Driver -> select the position to use or replace with Excel sheet columns -> click Parameterize -> click Next -> select the required column name -> click Finish.
24. Searching for required functions: WinRunner – Create menu, Function Generator. QTP – Insert -> Step -> Method -> select the required object -> click OK after confirmation -> click Next -> enter the arguments -> click Next.
25. Batch testing: WinRunner – call "TestName"(); or call "Path of Test"(); QTP – also supports WinRunner tests in batches; batch testing can be done in 2 ways:
1. QTP test to QTP test: Insert -> Call to Action -> browse the subtest -> specify parameter data using Excel sheet columns -> click OK.
2. QTP test to WinRunner test: Insert -> Call to WinRunner Test -> browse the path of the test -> click OK.
Note: QTP supports WinRunner 7.0 and higher versions only, because QTP supports auto learning and auto learning is possible from WinRunner 7.0 onwards.
26. Reusable navigation: WinRunner – user-defined functions: repeatable navigations in the application are recorded as functions; to make them permanent we can use the compiled module concept. QTP – user-defined actions: repeatable navigations are recorded as actions; to make one reusable follow Insert, New Action, enter the action name with a description, select Reusable Action, click OK, record the repeatable navigation in your application.
Note: To call that reusable action in a required test we can use Insert, Call to Action.
27. Synchronization point: WinRunner – wait(), change runtime settings, For Object/Window Property, For Object/Window Bitmap, For Screen Area Bitmap. QTP – Insert -> Step -> Synchronization Point (exactly equal to For Object/Window Property in WinRunner) -> select the indicator object -> click OK after confirmation -> specify the expected property with its value -> specify the maximum time to wait -> click OK.
28. Exception handling: WinRunner – TSL exception handling: pop-up, object, web (for web applications only). QTP – Recovery Scenario Manager: Tools -> Recovery Scenario Manager -> click New -> click Next -> select the trigger type (pop-up, object state, application crash, test run error) -> define the situation with its handler -> browse a reusable action for recovery -> click Finish.
29. Technology supported: WinRunner – does not support .Net, XML, SAP, PeopleSoft, Oracle Applications and multimedia objects for testing. QTP – supports .Net, XML, SAP, PeopleSoft, Oracle Applications and multimedia objects for testing.
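The synchronization comparison above can be illustrated with a short QTP-style sketch. Assumptions: the "Flight Reservation" window and its OK button are hypothetical names, and WaitProperty is used here as QTP's scripted analogue of WinRunner's For Object/Window Property option; the lines run only inside QTP.

```vbscript
' Wait up to 10 seconds (10000 ms) for the OK button to become
' enabled before clicking it, instead of using a fixed delay.
Window("Flight Reservation").WinButton("OK").WaitProperty "enabled", 1, 10000
Window("Flight Reservation").WinButton("OK").Click
```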


Quality Assurance vs. Quality Control:

• QA is mainly responsible for prevention of defects; QC is for detection of defects.
• QA identifies efficient life cycle models, processes, methodologies etc. according to quality standards; QC is responsible for implementing those life cycles and methodologies for testing the application.
• QA reviews the reports and documents prepared by the QC team or the whole project team; QC prepares the reports and documents according to the standards or guidelines given by the QA team.
• QA's major concern is the process being implemented (are we following the right method for developing or not?); QC's major concern is the product being developed (is the product properly done or not?).
• QA is verification; QC is validation.

ISO (International Organization for Standardization)

ISO is given to all kinds of companies; CMM is given only to software companies; Six Sigma is for all companies. If you implement the 20 clauses (8 sections) then you will get ISO certification.
In the year 1947, non-governmental organizations joined together and formed ISO. There are 145 member countries in ISO; India is among them.
The name ISO is derived from the Greek word "isos", which means equal. It is equal for all in the world: India, USA, and so on.
ISO 9000 – guidelines; 9001, 9002, 9003, 9004 – certifications.
Whenever you want to get a certification, first you have to follow certain guidelines.
9001 – for companies doing design, development, testing and inspection.
9002 – all activities except design (such companies are called production companies).
9003 – testing and inspection only.
9004 – continuous improvement.

9001:2000 (Year or Version)

A new version is released every six years. The latest version is 2000, and we can expect the next version in 2007. Whether it is a hotel or a software company, it can get 9001; by verifying the scope we can confirm what type of company it is.
Nowadays there is no 9002 and 9003; only 9000, 9001 and 9004 are given.


How to get certification:
BVQI – Bureau Veritas Quality International (USA-based company, branch in Hyderabad)
ICL – International Certification Limited (USA-based company, branch in Secunderabad)
STQC – Standardisation Testing and Quality Certification

If you want to get certification, first approach any one of the above companies; they will ask you to implement the 20 clauses. Next they will come to audit and finally certify you. If you do not know how to implement the 20 clauses, they conduct training: a 3-month External Auditor course, and an Internal Auditor course for Rs 25,000 conducted within 4-5 days. The difference between an external (lead) auditor and an internal auditor is that the former can work in two or three companies in a day, while the latter works in only one company.
Format – the structure is studied; they visit all the departments and prepare this.
Check list – what the requirements are.
Procedure – work based on the 20 clauses.
Procedure Manual – prepare the procedure, distribute it to all departments, and inform them to implement it to get the certification.
Whatever work you are doing, you have to prepare documents. The reasons are:
1. Future reference
2. Employees may leave the organization
Generally an auditor should have 10+ years of experience and 5 cycles of implementation.

Documentation hierarchy (top to bottom): Procedure Manual, Procedure, Check List, Format.

NCR – Non-Conformance Report

Types of certification audits:

1. External Audit
2. Surveillance Audit
3. Recertification


1. External Audit: for renewal, every 3 years.
2. Surveillance Audit: every 6 months they come and check, but they inform you before coming. They issue an NCR if you did not comply, and they audit the same issue again after 3 months. After 3 or 4 NCRs they finally cancel the certification.
3. Recertification: if the certification is cancelled, go for recertification.

SEI-CMM (Software Engineering Institute – Capability Maturity Model)
SEI-CMM levels: This is given to software companies only. There are five levels in CMM: levels 1, 2, 3, 4 and 5. There are different CMMs, such as SEI-CMM (also called Software CMM), PCMM and CMMI (CMM for Integration).
In the year 1987 Mark Paulk and Bill Curtis (faculty at Carnegie Mellon University, Pittsburgh, USA) came together and released CMM version 1.0 from the SEI. They had observed that under ISO, software organizations do not get any special treatment, so they formed the SEI and released the CMM.
In CMM, auditors are called assessors. Anybody can become an assessor, but you have to attend training classes in Chennai or Mumbai; institutes such as KPMG conduct this course.
There are two types of companies: disciplined/mature companies and undisciplined/immature companies.

The five levels and their focus areas (from the CMM maturity diagram):

1. Initial – Adhoc
2. Repeatable – Project Management (disciplined process)
3. Defined – Software Change Management
4. Managed – Quality Management (predictable process)
5. Optimized – Continual, hi-tech change

There are five levels of CMM; each level has a number of processes. For example, level 2 has the process Project Management. Each process is called a KPA. If an organization implements all the KPAs of a level, it is assessed at that level. Infosys was assessed at level 4 in Dec 1997 and at level 5 in Dec 1999.


PCMM: People CMM. It also has 5 levels. It mainly deals with HR principles; it gives a structure for selecting and recruiting people.

CMMI: CMM for Integration. It combines SEI CMM, systems engineering principles, and IPD-CMM (Integrated Product Development). A small company can get up to ISO, CMM Level 3, PCMM Level 3 and CMMI. CMMI is the latest model and most companies are trying to get it.

6 σ (Six Sigma)

This is given to all companies. The name comes from the Greek letter 'σ', which stands for Standard Deviation. 6σ is a metric expressed in standard deviations of the process variation: the greater the number before 'σ', the fewer the defects, and the higher the quality and customer satisfaction. ISO, CMM and 6σ are all aimed at customer satisfaction.

At 5σ the defect rate is about 233 per million; at 6σ it is about 3.4 per million (PPM – Parts Per Million).

DMAIC – Define, Measure, Analyze, Improve and Control. Generally a company first does DMAIC and then goes for 6σ. DFSS – Design for Six Sigma – is the variant for software organizations.

In 6σ the roles are Champion, Master Black Belt, Black Belt, Green Belt, White Belt and Orange Belt. The Champion is the owner of the company; a Black Belt holder trains the Green Belt holders.

6σ companies include Satyam, Motorola, Wipro and TCS, but the first company in Hyderabad to get it was GE.

CMM Levels:

What is CMM: It defines how software organizations mature, or improve, in their ability to develop software.
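A short calculation, assuming the conventional 1.5σ long-term shift, reproduces the commonly quoted Six Sigma defect rates (about 233 defects per million at 5σ and 3.4 per million at 6σ). This sketch is not part of the original material; the function name is ours.

```python
import math

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a given sigma level,
    using the one-sided standard normal tail with the 1.5-sigma shift."""
    z = sigma_level - shift
    # P(Z > z) for a standard normal variable, via the complementary error function
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

print(round(dpmo(5), 1))  # roughly 233 defects per million
print(round(dpmo(6), 1))  # roughly 3.4 defects per million
```

Note how steeply the defect rate falls with each extra sigma, which is why moving from 5σ to 6σ is such a large quality improvement.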

This model was developed by the SEI of Carnegie Mellon University in the late 80s. Infosys was assessed at level 4 in Dec 1997 and at level 5 in Dec 1999.

Why CMM: CMM is a software-specific model. It describes how software organizations can take the path of continual improvement, which is essential in this highly competitive world. 'Keep improving' is the CMM mantra.

Level 1: Initial or Ad-hoc. There are no KPAs in this level.
Level 2: Repeatable. There are 6 KPAs in this level. KPAs at this level look at project planning and execution.


Level 3: Defined. There are 7 KPAs in this level. Organizational process is the focus area here.
Level 4: Managed. There are 2 KPAs in this level. The focus is quantitative understanding of data.
Level 5: Optimizing. There are 3 KPAs in this level. The focus here is continual improvement.

As we move from level 1 to level 5, project risk decreases while quality and productivity increase. (A KPA can be compared to a Clause in the ISO standards.)

Level 1: Initial or Ad-hoc. There are no KPAs in this level. Level 1 is the immature state. The software process is characterized as ad-hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort. There is no objective basis for judging product quality or for solving product or process problems, so product quality is difficult to predict. Activities intended to enhance quality, such as reviews and testing, are often curtailed or eliminated when projects fall behind schedule.

Highlights of this level:

- The processes within this level are highly unstable and unpredictable.
- The projects are purely person-dependent, i.e. when the persons involved leave the project or the company, things come to a halt. Performance depends on the capabilities of individuals rather than on organizational capability.

As we move from level 1 to level 5, project risk decreases and quality and productivity increase.

Level 2: Repeatable. There are 6 KPAs in this level. KPAs at this level look at project planning and execution. Repeatable, as the word reveals, means that the processes employed in the project are repeatable. Basic project management principles are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications, using best practices from past projects. Projects in these organizations have installed basic software management controls.

Highlights of this level:
- Realistic project commitments are based on the results observed on previous projects and on the requirements of the current project.
- The project managers track software costs, schedules, and functionality.
- Problems in meeting commitments are identified when they arise.
- The project's process is under the effective control of a project management system, following realistic plans based on the performance of previous projects.

Requirements Management: To establish a common understanding between the customer and the project team.


It involves establishing and maintaining an agreement with the customer on the requirements for the software project.

Goal: Software plans, products, and activities are kept consistent with the system requirements allocated to software.

Software Project Planning: This involves establishing reasonable plans for performing the software engineering and for managing the software project. Software project planning involves developing estimates for the work to be performed, establishing the necessary commitments, and defining the plan to perform the work.

Goal: Software estimates are documented for use in planning and tracking the software project.

Software Project Tracking: To provide adequate visibility into actual progress so that management can take effective action when the software project's performance deviates significantly from the software plans. Software project tracking and oversight involves tracking and reviewing the software accomplishments and results against documented estimates, commitments, and plans, and

adjusting these plans based on the actual accomplishments and results.

Goal: Actual results and performance are tracked against the software plans. A documented plan (the Project Plan) is used for tracking.

Software Subcontract Management: The purpose of Software Subcontract Management is to select qualified software subcontractors and manage them effectively.

Software Quality Assurance: The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and the products being built. Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards, and providing the software project and other appropriate managers with the results of these reviews and audits.

Goal: Software Quality Assurance activities are planned.

Software Configuration Management: The purpose of Software Configuration Management is to establish and maintain the integrity of the products of the software project throughout the project's software life cycle.

A software baseline library is established containing the software baselines as they are developed. Changes to baselines and the release of software products built from the software baseline library are systematically controlled via the change control and configuration auditing functions of Software Configuration Management.


Goal: Software Configuration Management activities are planned. Selected work products are identified and controlled. Changes to work products are controlled.

While Level 2 concentrates on project-level processes, Level 3 looks from the organizational viewpoint.

Level 3: Defined. The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization (e.g. a Software Configuration Management process). All projects use approved and tailored versions of the organization's standard software process for developing and maintaining software. Data and information from projects are regularly and systematically collected and organized so that they can be reused by other projects.

There are 7 KPAs in this level. Organizational process is the focus area here.

Organizational Process Focus: The purpose of Organizational Process Focus is to establish the organizational responsibility for software process activities that improve the organization's overall software process capability.

The important goal of this KPA is that software process development and improvement activities are coordinated across the organization. To do an effective job of identifying and using the best practices, organizations must establish a group with that responsibility and build a plan for how the organization will improve its process. Such a plan should include periodic assessments of the organization's process maturity, leading to plans for improvement in capability. This process engineering is done by the SEPG, which looks out for the interest of every project in the organization.

Organizational Process Definition: The purpose of this KPA is to provide a usable set of software process assets that improve process performance across projects. This involves developing and maintaining the organization's standard software process, along with related process assets. Some goals of this KPA are: the organization has a standard software process; information related to the use of the process by projects is collected and reviewed; descriptions of software life cycles that are approved for use by the projects are documented and maintained; and the organization's software process database is established and maintained.

Training Program: The purpose of this KPA is to develop the skills and knowledge of individuals so they can perform their roles effectively and efficiently.

The Training Program involves first identifying the training needed by the organization, projects, and individuals, then developing or procuring training to address the identified needs. Each software project evaluates its current and future skills needs and determines how these skills will be obtained. Some skills are effectively and efficiently imparted through informal methods, whereas other skills need more formal training methods.


Integrated Software Management: The purpose of Integrated Software Management is to integrate the software engineering and management activities into a coherent, defined software process that is tailored from the organization's standard software process.

Software Product Engineering: The purpose of Software Product Engineering is to consistently perform a well-defined engineering process that integrates all the software engineering activities to produce correct, consistent software products effectively and efficiently. Software Product Engineering involves performing the engineering tasks to build and maintain the software using the project's defined software process and appropriate methods and tools.

Level 4: Managed. There are 2 KPAs in this level. The focus is quantitative understanding of data.
Level 5: Optimizing. There are 3 KPAs in this level. The focus here is continual improvement.

Software Testing: 10 Rules

1. Test early and test often.
2. Integrate the application development and testing life cycles. You'll get better results and you won't have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you'll test everything the same way and you'll get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You'll write a better application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress and load).
9. Review and inspect the work; it will lower costs.
10. Don't let your programmers check their own work; they'll miss their own errors.


Configuration Management

What is configuration management?

Our systems are made up of a number of items (or things). Configuration Management is all about effective and efficient management and control of these items. During the lifetime of the system many of the items will change. They will change for a number of reasons: new features, fault fixes, environment changes, etc. We might also have different items for different customers; for example, version A contains modules 1, 2, 3, 4 & 5 and version B contains modules 1, 2, 3, 6 & 7. We may need different modules depending on the environments they run under (such as Windows NT and Windows 2000). An indication of a good Configuration Management system is to ask ourselves whether we can go back two releases of our software and perform some specific tests with relative ease.

Problems resulting from poor configuration management

Often organisations do not appreciate the need for good configuration management until they experience one or more of the problems that can occur without it. Some problems that commonly occur as a result of poor configuration management include:

- the inability to reproduce a fault reported by a customer;
- two programmers have the same module out for update and one overwrites the other's change;
- being unable to match object code with source code;
- not knowing which fixes belong to which versions of the software;
- faults that have been fixed reappear in a later release;
- a fault fix to an old version needs testing urgently, but the tests have been updated.

Definition of configuration management

A good definition of configuration management is given in the ANSI/IEEE Standard 729-1983, Software Engineering Terminology. This says that configuration management is:

"the process of identifying and defining Configuration Items in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items."

This definition neatly breaks down configuration management into four key areas:

- configuration identification;
- configuration control;
- configuration status accounting; and
- configuration audit.

Configuration identification is the process of identifying and defining Configuration Items in a system. Configuration Items are those items that have their own version number such that when an item is changed, a new version is created with a different version number. So configuration identification is about identifying what are to be the configuration items in a system, how these will be structured (where they will be stored in relation to each other) the


version numbering system, selection criteria, naming conventions, and baselines. A baseline is a set of different configuration items (one version of each) that has a version number itself. Thus, if program X comprises modules A and B, we could define a baseline for version 1.1 of program X that comprises version 1.1 of module A and version 1.1 of module B. If module B changes, a new version (say 1.2) of module B is created. We may then have a new version of program X, say baseline 2.0, that comprises version 1.1 of module A and version 1.2 of module B.

Configuration control is about the provision and management of a controlled library containing all the configuration items. This governs how new and updated configuration items can be submitted into and copied out of the library. Configuration control also determines how fault reporting and change control are handled (since fault fixes usually involve new versions of configuration items being created).

Status accounting enables traceability and impact analysis. A database holds all the information relating to the current and past states of all configuration items. For example, this would be able to tell us which configuration items are being updated, who has them, and for what purpose.

Configuration auditing is the process of ensuring that all configuration management procedures have been followed and of verifying that the current state of any and all configuration items is as it is supposed to be. We should be able to ensure that a delivered system is a complete system (i.e. all necessary configuration items have been included and extraneous items have not been included).

Configuration management in testing
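The baseline idea (program X built from versioned modules A and B) can be modelled in a few lines. This is an illustrative sketch only, with hypothetical names:

```python
# Each baseline of program X records which version of each configuration item it contains.
baselines = {
    "X-1.1": {"module_A": "1.1", "module_B": "1.1"},
    # Module B changed, so a new baseline of X picks up module B version 1.2.
    "X-2.0": {"module_A": "1.1", "module_B": "1.2"},
}

def items_changed(old: str, new: str) -> list[str]:
    """List the configuration items whose version differs between two baselines."""
    return [item for item, version in baselines[new].items()
            if baselines[old].get(item) != version]

print(items_changed("X-1.1", "X-2.0"))  # ['module_B']
```

A real configuration management tool stores this mapping for every release, which is what makes "go back two releases and rerun specific tests" feasible.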

Just about everything used in testing can reasonably be placed under the control of a configuration management system. That is not to say that everything should. For example, actual test results may not be, though in some industries (e.g. pharmaceutical) it can be a legal requirement to do so.

VERIFICATION AND VALIDATION (V&V)

Verification: Are we developing the product right?
Validation: Are we developing the right product?

Verification and Validation is the difference between 'How and What'.

Two types of V&V:

1. Static V&V
2. Dynamic V&V

Static V&V:

1. Technical Review
2. Inspection


3. Code Walkthrough

In static V&V we perform V&V on documents, i.e. on paper. Static verification corresponds to verification and validation of the product while it is static: it covers all quality reviews and the composition of the product, e.g. its structure, size, and shape. That is why it is called Static V&V.

Dynamic V&V: In Dynamic V&V we test the application in real time with executables. That is why it is called Dynamic V&V.

SOFTWARE TESTING

Definition 1: Software Testing is the process of executing a program with the intent of finding bugs.

Definition 2: Testing is a process of exercising or evaluating a system component, by manual or automated means, to verify that it satisfies a specified requirement.

The basic goal of the software development process is to produce software that has no errors. In an effort to detect errors, each phase ends with a V&V activity such as a technical review. But most V&V (review) is based on human evaluation and can't detect all errors. As testing is the last phase in the SDLC (Software Development Life Cycle) before the final software is delivered, it has the enormous responsibility of detecting any type of error.

Two basic approaches to software testing:
1. White Box Testing, or Structural testing, or Glass Box testing
2. Black Box Testing, or Functional testing

The combination of white box and black box testing is called 'Gray box testing'.

White Box Testing:

White Box Testing is done by the developers. Developers have to do:
1. Path testing
2. Condition testing
3. Data flow testing
4. Loop testing

Software engineers can derive test cases that:


1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to assure their validity.

We must go for white box testing because typographical errors are random, and because logical errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.

Usually organizations go for black box testing, because black box testing checks the functionality of the application; the structure of the program is not considered. For better customer satisfaction, we should do white box testing first and then conduct black box testing. We will now discuss black box testing in detail.

 

BLACK BOX TESTING

Black box testing focuses on the functional requirements of a software, taking no consideration of the detailed processing logic. In black box testing, testers attempt to find errors in the following categories:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures
4. Performance errors
5. Initialization and termination errors

Levels of Black Box Testing

Faults occur during any phase in the SDLC. Verification is performed on the output of each phase, but some faults are likely to remain undetected by these methods. These faults reflect in the code.

Testing is usually relied on to detect these faults, in addition to the faults introduced in the coding phase. Due to this, different levels of testing are used in the testing process.

Clients Needs           <--------->  Acceptance Testing
Requirements            <--------->  System Testing
Architecture & Design   <--------->  Integration Testing
Coding                  <--------->  Unit Testing

From the service provider's point of view, the following are to be done:
1. Unit Testing
2. Integration Testing
3. System Testing

  UNIT TESTING  

In unit testing, different modules are tested against the specifications produced during design for those modules. Unit testing is essentially verification of the code produced during the coding phase, so the goal is to test the internal logic of the modules. The module interface is tested to ensure that information properly flows into and out of the program unit under test. Unit testing is the lowest level of testing: individual units of the software are tested in isolation from other parts of the program.

In unit testing, we have to do the following checks:
1. Field level checks
2. Field level validations
3. User interface checks
4. Functionality checks

Field Level Checks:

In field level checks, we have to do 7 types of checks. Here we are checking a particular field in a screen or module, to verify how the field handles:
1. Null characters
2. Unique characters
3. Length
4. Number
5. Date
6. Negative values


7. Default values

For Example, consider a Course Registration form that contains the following fields.

COURSE REGISTRATION FORM SCREEN

Field                            Control type           Check
Option (Add/Modify/Delete)       Drop-down combo box    Funlty chk.
Type of Course                   Drop-down combo box    Funlty chk.
Registration Number              Number field
Student Name                     Text field
Address                          Text field
Phone Number                     Number field
Date                             Date field
Time (Part/Full time)            Drop-down combo box    Funlty chk.
Timing (7-9am/9-11am/7pm-9pm)    Drop-down combo box    Funlty chk.
Student ID                       Automatic generation   Funlty chk.
Batch Code                       Automatic generation   Funlty chk.
Save Button                      Push button            Funlty chk.
Exit Button                      Push button            Funlty chk.

** Funlty chk. – Functionality check.

Based on the above screen, we have to prepare an internal test plan. Based on the internal test plan, we can prepare test cases.

Internal Test Plan

FC --> Functionality check; we have to test the functionality of the screen.
Y  --> Have to write test cases.
N  --> Not necessary to write test cases.

Field name       Null  Unique  Length  Number  Date  -ve  Default  Remarks
Option           FC
Type of course   FC
Student name     Y     N       Y       Y       N     N    N
Address          Y     N       Y       N       N     N    N
Phone number     N     N       Y       Y       N     Y    N
Date             Y     N       Y       Y       Y     Y    N
Time             FC
Timing           FC
Student ID       FC
Batch code       FC
Save button      FC
Exit button      FC

We have to write test cases only for the 'Y' entries; it is not necessary to write test cases for the 'N' entries. The above internal test plan is mainly to reduce the number of test cases. For example, Student name is a text field; for it we have the unit test cases indicated below.

Unit test cases for the Student name field:

Sl no.   Test case                                               Expected result
UTC/001  Enter blank space and proceed (null check)              Should display an error message and set focus back to the Student name field (it should not accept blank).
UTC/002  Skip the field and proceed (null check)                 Should display an error message and set focus back to the Student name field (it should not accept null or blank space).
UTC/003  Enter a name of 20 characters (length check)            Should accept and proceed (we assume 20 as the maximum limit of the Student name field).
UTC/004  Enter a name of 21 characters (length check)            Should display an error message (the maximum limit is 20 characters).
UTC/005  Enter numbers '12345' in the name field (number check)  Should display an error message and set focus back to the field (a text field should not accept numbers).

For the Student name field we write test cases based on the internal test plan: test cases are written only for 'Y' (applicable) checks, not for 'N'. This is to reduce the number of test cases.

Field Level Validation:

Here we have to check:
1. Date range check
2. Boundary value check
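The student-name test cases translate directly into automated checks. The validation function below is a hypothetical sketch of the field under test; the assertions mirror UTC/001 to UTC/005:

```python
def validate_student_name(name: str, max_len: int = 20) -> bool:
    """Field level checks for the Student name field:
    not blank (null check), at most 20 characters (length check),
    and no digits (number check)."""
    stripped = name.strip()
    if not stripped:                          # UTC/001, UTC/002: null/blank check
        return False
    if len(stripped) > max_len:               # UTC/004: length check
        return False
    if any(ch.isdigit() for ch in stripped):  # UTC/005: number check
        return False
    return True

assert validate_student_name("   ") is False      # UTC/001: blank space rejected
assert validate_student_name("") is False         # UTC/002: skipped field rejected
assert validate_student_name("A" * 20) is True    # UTC/003: exactly 20 chars accepted
assert validate_student_name("A" * 21) is False   # UTC/004: 21 chars rejected
assert validate_student_name("12345") is False    # UTC/005: numbers rejected
```

Note that one validation function covers all five manual test cases, which is why field level checks are good early candidates for automation.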

In date range check, we have to check whether the application accepts a date greater than the system date or not.

Date range check – if we have a Date field in a screen, we have to write test cases to check that the date field behaves as a date field.

Sl no.   Test case                                          Expected result
UTC/001  Enter blank space or skip the field (null check)   Should display an error message and set focus back to the date field (a date field should not accept blank space).
UTC/002  Enter a date in DD/MM/YYYY format (date check)     Should accept and proceed.
UTC/003  Enter a date in MM/DD/YYYY format (date check)     Should display an error message and set focus back to the field (it is of DD/MM/YYYY format).
UTC/004  Enter the number '1234567' (number check)          Should display an error message (it should not accept plain numbers).
UTC/005  Enter '-23232324' and proceed (-ve check)          Should display an error message (it should not accept negative numbers).
UTC/006  Enter a date greater than the system date          Should display an error message (it should not accept more than the system date).

In boundary value check, we have to check that a particular field stays within its boundaries. For example, if a number field has a range of 0 to 99, we check whether the field accepts -1, 0, 1 (i.e. <, =, > the lower boundary) and 98, 99, 100 (i.e. <, =, > the upper boundary).
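The <, =, > probes around each boundary translate mechanically into test data. A sketch for a hypothetical number field with range 0 to 99:

```python
def in_range(value: int, low: int = 0, high: int = 99) -> bool:
    """Hypothetical number field that accepts values from `low` to `high` inclusive."""
    return low <= value <= high

# Boundary value analysis: just below, on, and just above each boundary.
cases = [(-1, False), (0, True), (1, True),     # lower boundary: <, =, >
         (98, True), (99, True), (100, False)]  # upper boundary: <, =, >

for value, expected in cases:
    assert in_range(value) is expected, f"boundary case {value} failed"
```

Six values cover both boundaries completely; testing many mid-range values adds little, since off-by-one faults cluster at the edges.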

User Interface Checks:

In user interface checks, we have to check:
1. Shortcut keys
2. Help check
3. Tab movement check
4. Arrow key check
5. Message box check
6. Readability of controls
7. Tool tip validations
8. Consistency of the user interface across the product

For user interface checks we write test cases such as:

Sl no.   Test case                         Expected result
UTC/001  Tab related checks                Should move across all the fields in the screen in a sequence.
UTC/002  Press the arrow keys              Should move across the fields in a sequence.
UTC/003  Press the shortcut keys (Alt+K)   Should open the corresponding screen.
UTC/004  Tool tip check                    Should display the tool tip based on the selection.
UTC/005  Screen title check                Should be visible to the user.
UTC/006  Dialog box content check          Should be clear to the user.
UTC/007  Scroll bar checks                 Should scroll smoothly.

In user interface checks we verify how user-friendly the application is.

Functionality checks:

Here we have to check:
1. Screen functionality
2. Functionality of buttons, computations, and automatically generated results
3. Field dependencies

In functionality checks, we check whether we are able to ADD, MODIFY, DELETE, VIEW, SAVE, EXIT and perform the other main functions of a screen. Here we check whether:
- the combo box drop-down menu appears or not;
- clicking the 'Save' button after entering details actually saves them;
- clicking the 'Exit' button closes the current window;
- automatic results are generated correctly, e.g. when entering a date of birth the system should automatically generate the age based on the system date.

So we have to do these types of functional checks. Let us see sample test cases for functionality checks.

Sl no. | Test case | Expected result | Actual result
UTC/001 | Select the 'Add' option of the combo box | Should open a new Registration form to enter the new student details |
UTC/002 | Select the 'Delete' option of the combo box | Should delete the current student details |
UTC/003 | Select the 'View' option of the combo box | Should display the selected student details |
UTC/004 | Select the 'Modify' option of the combo box | Should allow the user to do the modification |
UTC/005 | Click 'Save' and proceed | Should save the entered details and update the database |
UTC/006 | Click 'Exit' and proceed | Should close the screen |


 

INTEGRATION TESTING

Many Unit Tested Modules are combined into subsystems, which are then tested. The goal is to see if the modules can be integrated properly. This testing activity can be considered testing the design.

 

Integration Testing refers to the testing in which the software units of an application are combined and tested to evaluate the interaction between them. In integration testing we have to check the integration between the modules. Mainly we have to check:
- Data dependency between the modules
- Data transfer between the modules

Types of approaches for Integration Testing:
1. Big Bang approach
2. Top Down approach
3. Bottom Up approach

 

BIG BANG APPROACH: A type of integration testing in which the software components of an application are combined all at once into an overall system and tested. According to this approach, every module is first unit tested in isolation from every other module; after each module is tested, all of the modules are integrated together at once. The Big Bang approach is also called the 'Non-Incremental Approach': all modules are combined and integrated in advance, and the entire program is tested as a whole. If a set of bugs is encountered, correction is difficult; if one error is corrected, a new bug may appear, and the process continues.

Disadvantage: Tracing down a defect is not easy.

TOP DOWN APPROACH: The program is merged and tested from top to bottom. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.


Here we have to create a 'Stub': a dummy routine that simulates the behavior of a subordinate module. If a particular module is not completed or not yet started, we can simulate it just by developing a stub.

Advantage: Testing is done in an environment that closely resembles reality, so the tested product is more reliable. Stubs are functionally simpler than drivers and can therefore be written with less time and labor.

Disadvantage: Unit testing of lower modules can be complicated by the complexity of upper modules.

BOTTOM UP APPROACH: Begins construction and testing with atomic modules (i.e. modules at the lowest levels in the program structure). The program is merged and tested from bottom to top: the terminal module is tested in isolation first, and then the next set of higher-level modules is tested with the previously tested lower-level modules. Here we have to write 'Drivers'. A driver is nothing more than a program that accepts the test case data, passes such data to the module to be tested, and prints the relevant results.

Advantage: Unit testing of each module can be done very thoroughly.

Disadvantage: Test drivers have to be generated for modules at all levels, except for the top controlling module.
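The stub and driver roles described above can be sketched in a few lines. Every module and value here is hypothetical, invented only to show the pattern:

```python
# Bottom-up: a DRIVER feeds test case data to a real low-level module.
def compute_tax(amount):
    """Atomic low-level module under test (hypothetical 10% tax rule)."""
    return round(amount * 0.10, 2)

def tax_driver():
    """Driver: accepts test case data, passes it to the module, reports results."""
    cases = [(100.0, 10.0), (250.0, 25.0)]
    return [compute_tax(amt) == expected for amt, expected in cases]

# Top-down: a STUB simulates an unfinished subordinate module.
def billing_stub(order_id):
    """Stub: dummy routine returning a canned value so its caller can be tested."""
    return 42.0

def invoice_total(order_id, get_billing=billing_stub):
    """High-level module under test; its real subordinate is not ready yet."""
    return get_billing(order_id) + 5.0  # hypothetical fixed handling fee
```

The driver sits above the module it exercises (bottom-up), while the stub sits below the module it serves (top-down); that is the whole difference between the two scaffolding styles.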

SYSTEM TESTING

Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. A complete software build is made and tested to show that all requirements are met.

TYPES OF SYSTEM TESTING

VOLUME TESTING: To find weaknesses in the system with respect to its handling of large amounts of data during a short time period (the focus is the amount of data).

STRESS TESTING: The purpose of stress testing is to test the system's capacity: whether it handles a large number of processing transactions during peak periods (the focus is the moment of peak load).

CONCURRENCY TESTING: Similar to stress testing; here we check the system's capacity to handle a large number of processing transactions in an instant.

PERFORMANCE TESTING: Assessing system performance can be accomplished in parallel with volume and stress testing, because system performance is assessed under all conditions. System performance is generally assessed in terms of response time and throughput rates, under different processing and configuration conditions.
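Response time, as described under performance testing above, can be measured with a simple harness. The workload below is a made-up placeholder for a real transaction:

```python
import time

def measure_response_time(operation, runs=100):
    """Time one operation over several runs; report average and worst case."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return {"average": sum(timings) / runs, "worst": max(timings)}

# Placeholder workload standing in for a real transaction under test.
def sample_transaction():
    sum(i * i for i in range(1000))

stats = measure_response_time(sample_transaction, runs=50)
```

In practice the worst-case figure matters as much as the average, since performance requirements are usually stated as limits that every transaction must meet.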

REGRESSION TESTING: The re-execution of some subset of test cases that have already been executed, to ensure that changes (after a defect fix) have not propagated unintended side effects. Regression testing is the activity that helps ensure that changes do not introduce unintended behavior or additional bugs.

SECURITY TESTING: Attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration. The system is protected in accordance with its importance to the organization, with respect to security levels.

RECOVERY TESTING: Forcing the system to fail in different ways and checking how fast it recovers from failure.

COMPATIBILITY TESTING: Checking whether the system is functionally consistent across all platforms.

SERVER TESTING: Here we have to check volume, stress, performance, data recovery, backup and restore, error trapping and data security as a whole. Here we also have to check PAIN (an e-business concept): P - Privacy, A - Authentication of parties, I - Integrity of transactions, N - Non-repudiation.

WEB TESTING: In web testing we have to do compatibility testing, browser compatibility, video testing (pixel testing of font and alignment), modem speed, web security testing and directory set-up. Web testing is real-time and highly tedious; an automated tool is a must.

ACCEPTANCE TESTING: Performed with realistic data of the client to demonstrate that the software is working satisfactorily. Testing here focuses on the external behavior of the system.

ALPHA TESTING: Alpha testing is conducted at the developer's place by the customer. The software is tested in a natural setting, with the developer 'looking over the shoulder' of the user (i.e. the customer) and recording errors and usage problems. Alpha tests are conducted in a controlled environment.

BETA TESTING: Beta testing is conducted at one or more customer sites by the end users of the software. Here the developer is not present during testing. The client tests the software or system at his own place, recording defects and sending his comments to the development team.

This concludes the detailed description of system testing.

TEST PLAN:

A test plan is a general document for the entire project that defines the scope, the approach to be taken, and the schedules of intended testing activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Test planning can be done well before the actual testing commences, in parallel with the coding and design phases. The inputs for forming the test plan are:
1. Project plan
2. Requirement specification document
3. Architecture and design document

The requirements document and the design document are the basic documents used for selecting the test units and deciding the approaches to be used during testing.

A test plan should contain:
- Test unit specifications
- Features to be tested
- Approaches for testing
- Test deliverables
- Schedule
- Personnel allocation

Test Unit: A test unit is a set of one or more modules, together with associated data, that are from a single computer program and that are the object of testing. A test unit may be a module, a few modules, or a complete system.

Features to be tested: Include all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design document.

Approach for testing: Specifies the overall approach to be followed in the current project. The techniques that will be used to judge the testing effort should also be specified.

Test deliverables: Should be specified in the test plan before the actual testing begins. Deliverables could be:
- Test cases that were used
- Detailed results of testing
- Test summary report

 


In general, the test case specification report, the test summary report and the test log report should be specified as deliverables.

Test summary report: Defines the items tested, the environment in which testing was done, and any variations from the specification observed during testing.

Test log report: Provides a chronological record of relevant details about the execution of the test cases.

Schedule: Specifies the amount of time and effort to be spent on the different activities of testing, and on the testing of the different units that have been identified.

Personnel allocation: Identifies the persons responsible for performing the different activities.

Test Case Execution and Analysis:

The steps to be performed to execute the test cases are specified in a separate document called the 'test procedure specification'. This document specifies any special requirements that exist for setting up the test environment and describes the methods and formats for reporting the results of testing. The outputs of test case execution are the test log report, the test summary report, and the bug report.

Test log: Describes the details of testing.
Test summary report: Gives the total number of test cases executed, the number and nature of bugs found, and a summary of any metrics data.
Bug report: Gives a summary of all errors found.
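A minimal bug-report record of the kind listed among the deliverables might look like the sketch below. The field names and example values are illustrative only, not prescribed by any standard:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """Minimal bug-report record; fields are typical, not prescribed."""
    bug_id: str
    summary: str
    severity: str          # e.g. Critical / Major / Average / Minor / Cosmetic
    priority: str          # e.g. Resolve Immediately / High / Normal / Low / Defer
    status: str = "Open"   # updated as the defect moves through its lifetime

report = BugReport("BUG-101", "'Save' does not update the database",
                   severity="Major", priority="Give High Attention")
```

Keeping severity and priority as separate fields mirrors the distinction drawn later in this material: severity describes the impact of the defect, priority the immediacy of its repair.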

DEFECT CATEGORIES

Defects are mainly classified into two categories.

Defect Category I is again classified into:
1. Defects from specifications: The product built varies from the product specified.
2. Defects in capturing user requirements: The variance is something that the user wanted that is not in the built product, but was also not specified for the product.

Defect Category II has three kinds of defect:
1. Wrong: incorrect implementation.
2. Missing: a user requirement is not built into the product.

3. Extra: an unwanted requirement is built into the product.

Techniques to Reduce the Test Cases

Writing test cases for all possible checks is impractical, so we can reduce the number of test cases by avoiding some unwanted checks. To reduce the number of test cases, there are three methods to be followed:
1. Equivalence Class Partitioning (ECP)
2. Boundary Value Analysis (BVA)
3. Cause Effect Graphing (CEG)

Equivalence Class Partitioning (ECP):

ECP is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. It uncovers classes of errors, thereby reducing the total number of test cases that must be developed.

A group of tests forms an equivalence class if:
* They all test the same thing
* If one test finds a defect, the others will
* If one test does not find a defect, the others will not

Tests are grouped into one equivalence class when:
* They affect the same output variables
* They result in similar operations in the program
* They involve the same input variables

The process of finding equivalence classes is:
* Identify all inputs
* Identify all outputs
* Identify equivalence classes for each input and output
* Ensure that test cases test each input and output equivalence class at least once

Guidelines for finding equivalence classes:
* Look for ranges of numbers
* Look for membership in a group
* Look for equivalent output events
* Look for equivalent operating environments
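As an illustration of ECP, assume a field that accepts ages 18 to 60 (the range is invented for the example). The input domain splits into one valid class and two invalid classes, so three representative test cases suffice where exhaustive testing would need dozens:

```python
def accepts_age(age):
    """Hypothetical validator: accepts ages in the range 18..60 inclusive."""
    return 18 <= age <= 60

# One representative value per equivalence class is enough.
equivalence_classes = {
    "below range (invalid)": (10, False),
    "within range (valid)":  (35, True),
    "above range (invalid)": (75, False),
}

results = {name: accepts_age(value) == expected
           for name, (value, expected) in equivalence_classes.items()}
```

If `accepts_age(35)` behaves correctly, ECP assumes that 36, 40 and 59 will too; if it fails, they are expected to fail in the same way.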

BVA is a test case design technique that complements equivalence partitioning. BVA leads to the selection of test cases that exercise bounding values: rather than selecting arbitrary elements of an equivalence class, BVA leads to the selection of test cases at the 'edges' of the class.
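For a range bounded by values 'a' and 'b', the boundary cases are the bounds themselves plus the values just above and just below each bound. A sketch:

```python
def boundary_values(a, b):
    """Boundary test inputs for a range [a, b]: the bounds themselves,
    plus the values just above and just below each bound."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# Hypothetical field accepting 18..60: six boundary cases cover the edges.
cases = boundary_values(18, 60)
```

Off-by-one errors (writing `<` where `<=` was intended, and the like) cluster at exactly these values, which is why BVA finds defects that mid-range values miss.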

Guidelines for BVA:
1. If an input condition is a range bounded by values 'a' and 'b', test cases should be designed with the values 'a' and 'b', and with values just above and just below 'a' and 'b'.
2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers; values just above and just below the maximum and minimum should be tested.
Apply the above guidelines for output conditions also.

SOME IMPORTANT TESTING HINTS

Testing is the phase where the errors remaining from all the previous phases (i.e. of the SDLC) must be detected. Hence testing plays a very critical role in quality assurance and in ensuring the reliability of software. The success of testing in revealing errors depends critically on the test cases.

What is the difference between Error, Fault, Failure and Bug?

Error: Refers to the discrepancy between a computed or measured value and the theoretically correct value, i.e. the difference between the actual output and the correct output of the software.

Fault: A fault is the basic reason for a software malfunction, i.e. a fault is a condition that causes a system to fail in performing its required function.

Failure: The inability of the system or component to perform a required function according to its specifications. A software failure occurs if the behavior of the software is different from the specified behavior.

Bug: Non-functionality of a functionality.

The presence of an error implies that a failure must have occurred, and the observance of a failure implies that a fault must be present in the system. During the testing process only failures are observed, from which the presence of faults is deduced. The actual faults are identified by a separate activity commonly referred to as 'debugging'. In other words, after testing has revealed the presence of faults, the expensive task of debugging has to be performed to identify them. This is the reason why testing is expensive.

Reason for testing the system separately (Unit, Integration and System Testing):

The reason for testing parts separately is that if a test case detects an error in a large program, it will be extremely difficult to pinpoint the source of the error. It is also difficult to construct test cases so that all the modules are executed; this may increase the chance of a module's error going undetected.


 

What is the need for independent testing / third party testing?

- Sometimes errors occur because the programmer did not understand the specification clearly. Testing of a program by its own programmer will not detect such errors, but independent testing may succeed in finding them.
- Time concerns
- The customer may want third party testing
- Non-availability of testing resources
- It is not easy for someone to test their own program with the proper frame of mind for testing

What are the Testing Principles?
1. All test cases should be traceable to the customer requirements.
2. Testing should be planned long before testing begins.
3. Testing should begin 'in the small' and progress towards testing 'in the large'.
4. To be most effective, testing should be conducted by an independent third party.

What is the lifetime of a bug?
Once you find a defect, the time spent to fix the defect is called the lifetime of the bug.

Attributes of a good test:
1. A good test has a high probability of finding an error.
2. A good test is not redundant.
3. A good test should be 'best of breed'.
4. A good test should be neither too simple nor too complex.

Why does software have bugs? Due to:
1. Software complexity
2. Programming errors
3. Changing requirements
4. Poorly documented code
5. Miscommunication between the groups involved
6. Software development tools or the OS may introduce their own bugs

When to stop testing? We can stop testing when:
- All test cases have been fully executed, with internal acceptance and customer acceptance
- The Beta or Alpha testing period ends
- The bug rate falls below a certain level
- The test budget is depleted
- The test cases are completed with a certain % passed

  What is Error Seeding?

Once the software is considered 100% bug free, just to check the efficiency of the tester, we insert a certain number of bugs into the project at various points and give it to the tester to test.
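Beyond checking tester efficiency, error seeding is also used in the literature to estimate how many real defects remain (a Mills-style estimate). The counts below are made up for illustration:

```python
def estimate_total_defects(seeded_total, seeded_found, real_found):
    """Mills-style estimate: assume real defects are found at the same rate
    as seeded ones, so total_real ~= real_found * seeded_total / seeded_found."""
    if seeded_found == 0:
        raise ValueError("no seeded defects found; cannot estimate")
    return real_found * seeded_total / seeded_found

# Illustrative counts: 20 bugs seeded; the tester found 16 of them and 8 real bugs.
estimate = estimate_total_defects(seeded_total=20, seeded_found=16, real_found=8)
# estimate is 10.0, i.e. roughly 2 real defects are estimated to remain undetected.
```

The estimate rests on the assumption that seeded bugs are as hard to find as real ones, which in practice is only approximately true.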

An efficient tester will find the 'inserted bugs'. Error seeding is done purely to check the efficiency of the tester.

DEFECT CLASSIFICATION

As per the ANSI/IEEE standard 729, the following are the five levels of defect classification:

1. Critical: The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
2. Major: The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component work; however, there are acceptable processing alternatives which will yield the desired result.
3. Average: The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.
4. Minor: The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
5. Cosmetic: The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred or even ignored.

In addition to the defect severity levels defined above, defect priority levels can be used with the severity categories to determine the immediacy of repair. A five-level repair priority scale is also used in common testing practice. The levels are:

Resolve Immediately:

Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used until the repair has been effected.

Give High Attention: The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.

Normal Queue: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.

Low Priority: The defect is an irritant that should be repaired, but it can be repaired after more serious defects have been fixed.

Defer: The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved at all.

 

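The defect metrics defined just below can be computed directly. A sketch with made-up input numbers:

```python
def defect_metrics(effort_hours, test_cases, defects, kloc):
    """Compute the defect metrics defined below; all inputs are illustrative."""
    return {
        "cost_of_a_defect": effort_hours / defects,   # effort spent per defect
        "testing_efficiency": test_cases / defects,   # test cases per defect
        "defect_density": defects / kloc,             # defects per KLOC
    }

m = defect_metrics(effort_hours=120, test_cases=300, defects=30, kloc=15)
# cost 4.0 hours/defect, efficiency 10.0 cases/defect, density 2.0 defects/KLOC
```

Defect density can equally be computed per Function Point by substituting the FP count for KLOC in the denominator.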

Cost of a defect = Total effort spent in testing / Total no. of defects

Testing efficiency = No. of test cases / No. of defects

Defect closure rate = how much time it takes to close the defect

Defect density = No. of defects / KLOC (or per FP)
(KLOC - Kilo Lines Of Code; FP - Function Point analysis)

Software Testing Related Web Sites:

www.softwareqatest.com
www.rstcorp.com
www.mmsindia.com
www.facilita.co.uk
www.autotestco.com
www.kaner.com
www.badsoftware.com
www.model-based-testing.com
www.soft.com
www.jrothman.com
www.webservepro.com
www.testworks.com
www.ftech.com
www.geocities.com
www.aptest.com
www.testing.com
www.stqemagazine.com
www.sqe.com
www.io.com
www.testingstuff.com
www.stickyminds.com
