Real World Software Testing


Real World Software Testing

Agenda
• Software Testing
– Importance

• Challenges and Opportunities

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Quality
– QA – QC

What is Quality?
• Quality from the
  – Customer's viewpoint: fit for use, or other customer needs
  – Producer's viewpoint: meeting requirements

Quality Function
• Software quality includes activities related to both
  – the process, and
  – the product

• Quality Assurance is about the work process
• Quality Control is about the product

What is Quality Assurance?
• Quality assurance activities are work process oriented.
• They measure the process, identify deficiencies, and suggest improvements.
• The direct results of these activities are changes to the process.
• These changes can range from better compliance with the process to entirely new processes.
• The output of quality control activities is often the input to quality assurance activities.
• Audits are an example of a QA activity which looks at whether and how the process is being followed. The end result may be suggested improvements or better compliance with the process.

What is Quality Control?
• Quality control activities are work product oriented.
• They measure the product, identify deficiencies, and suggest improvements.
• The direct results of these activities are changes to the product.
• These can range from single-line code changes to completely reworking a product from design.
• They evaluate the product, identify weaknesses, and suggest improvements.
• Testing and reviews are examples of QC activities, since they usually result in changes to the product, not the process.
• QC activities are often the starting point for quality assurance (QA) activities.

Prevention and Detection
Prevention is better than cure... but not everything can be prevented!

[Diagram: prevention → detection → cure]

QA and QC Summary
• QA: Assurance – Process – Preventive – Quality Audit
• QC: Control – Product – Detective – Testing

Quality... it’s all about the End-User
• Does this software product work as advertised?
  – Functionality, Performance, System & User Acceptance testing

• Will the users be able to do their jobs using this product?
  – Installability, Compatibility, Load/Stress testing

• Can they bet their business on this software product?
  – Reliability, Security, Scalability testing

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Testing
– What is testing
  – Objectives of testing

What is Testing?
• Testing is the process of demonstrating that errors are not present
• The purpose of testing is to show that a program performs its intended functions correctly
• Testing is the process of establishing confidence that a program does what it is supposed to do
• These definitions are incorrect, in that they describe almost the opposite of what testing should be viewed as

What is the Objective of Testing?
• The objective of testing is to find all possible bugs (defects) in a work product
• Testing should intentionally attempt to make things go wrong, to determine if things happen when they shouldn't or things don't happen when they should

What is Testing?
• DEFINITION: Testing is the process of trying to discover every conceivable fault or weakness in a work product
• Testing is a process of executing a program with the intent of finding an error
• A good test is one that has a high probability of finding an as yet undiscovered error
• A successful test is one that uncovers an as yet undiscovered error

The purpose of finding defects is to get them fixed
• The prime benefit of testing is that it results in improved quality. Bugs get fixed.
• We take a destructive attitude toward the program when we test, but in a larger context our work is constructive.
• We are beating up the program in the service of making it stronger.

Secondary benefits include
• Demonstrate that software functions appear to be working according to specification
• Demonstrate that performance requirements appear to have been met
• Data collected during testing provides a good indication of software reliability and some indication of software quality

What is Testing Summary
• Identify defects
– when the software doesn’t work

• Verify that it satisfies specified requirements
– verify that the software works

Mature view of software testing
• A mature view of software testing is to see it as a process of reducing the risk of software failure in the field to an acceptable level [Beizer 90].

What exactly Does a Software Tester Do?
• The Goal of a software tester is to find defects, • and find them as early as possible, • and make sure they get fixed.

What does testing mean to testers?
• Testers hunt errors
  – Detected errors are celebrated - for the good of the work product

• Testers are destructive - but creatively so
  – Testing is a positive and creative effort of destruction

• Testers pursue errors, not people
  – Errors are in the work product, not in the person who made the mistake

• Testers add value
  – by discovering errors as early as possible

How do testers do it?
• By examining the user's requirements, the internal structure and design, the functional user interface, etc.
• By executing the code, the application software executable, etc.

Testing Involves
• Generate test conditions/cases
• Create the required test environment
• Execute tests
• Report and track defects
• Test reporting

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Software Development and Software Testing Life Cycle
– Verification and Validation

Basic form of Testing
• Verification
• Validation

SDLC

[Diagram: SDLC development activities: Requirements → Functional Specs → Design → Coding → Build Software → Build System → Release for Use, with progressive distortion of the original requirements at each hand-off]

Relative cost to fix software defects
[Chart: relative cost to fix a defect rises roughly tenfold per phase: 1 at Requirements, 10 at Specs, 100 at Design, 1000 in Development]

SDLC

[Diagram: the SDLC development activities repeated, as context for the V&V activities on the next slide]

Verification & Validation

[Diagram: V-model. Development activities (Requirements → Functional Specs → Design → Coding → Build Software → Build System → Release for Use) are paired with V&V activities: Requirements Review, Specs Review, Design Review, and Code Review on the way down; Unit Test, Integration Test, System Test, and Acceptance Test on the way up. The test plan is written at the specs stage and revised after design review.]

Test Life Cycle Model

Testing happens throughout the software life cycle

[Diagram: the same V-model, with the reviews (requirements, specs, design, code) grouped as Static Testing and the test executions (unit, integration, system, acceptance) grouped as Dynamic Testing]

What is Verification?
• Verification is the process of confirming whether software meets its specifications
• It is the process of reviewing/inspecting deliverables throughout the life cycle
• Inspections, walkthroughs, and reviews are examples of verification techniques
• Verification is the process of examining a product to discover its defects
• Verification is usually performed by STATIC testing, or inspecting without execution on a computer
• Verification examines product requirements, specifications, design, and code for large fundamental problems, oversights, and omissions
• Verification is a "human" examination or review of the work product
• Verification determines whether each phase was completed correctly
• “are we building the product right?”

What is Validation?
• Validation is the process of confirming whether software meets user requirements
• It is the process of executing something to see how it behaves
• Unit, integration, system, and acceptance testing are examples of validation techniques
• Validation is the process of executing a product to expose its defects
• Validation is usually performed by DYNAMIC testing, or testing with execution on a computer
• E.g., if we use software A on hardware B and do C, then D should happen (expected result)
• Validation determines whether the product as a whole satisfies the requirements
• “are we building the right product?”

Some test techniques
• Static
  – Reviews
  – Inspection
  – Walkthroughs
• Dynamic
  – Behavioural
    • Functional: Equivalence Partitioning, Boundary Value Analysis, etc.
    • Non-functional: Usability, Performance, etc.
  – Structural
    • Control Flow: Statement, Branch/Decision, Branch Condition
    • Data Flow

Verification
• Reviews • Walkthrough • Inspection

Verification “What to Look For?”
• Find all the missing information:
  – Who
  – What
  – Where
  – When
  – Why
  – How

Peer Review
• Simply giving a document to a colleague and asking them to look at it closely; this can identify defects we might never find on our own.

Walkthrough
• An informal meeting, where participants come to the meeting and the author gives the presentation
• Objective:
  – To detect defects and become familiar with the material
• Elements:
  – A planned meeting where only the presenter must prepare
  – A team of 2-7 people, led by the author
  – The author is usually the presenter
• Inputs:
  – The element under examination, objectives for the walkthrough, applicable standards
• Output:
  – Defect report

Inspection
• A formal meeting, characterized by individual preparation by all participants prior to the meeting
• Objectives:
  – To detect defects and collect data
  – To communicate important work product information
• Elements:
  – A planned, structured meeting requiring individual preparation by all participants
  – A team of people, led by an impartial moderator who assures that the rules are followed and the review is effective
  – A presenter ("reader") other than the author
  – Other participants are inspectors who review the work product
  – A recorder who records the defects identified in the work product

Checklists : the verification tool
• An important tool, especially in formal meetings like inspections
• They provide maximum leverage on verification
• There are generic checklists that can be applied at a high level and maintained for each type of inspection
• There are checklists for requirements, functional design specifications, internal design specifications, and code

Fundamental Validation strategies
• White-box testing • Black-box testing

White-box testing
• White-box tests require knowledge of the internal program structure and are derived from the internal design specification or the code.
• They will not detect missing functions (i.e., functions described in the functional design specification but not supported by the internal specification or code).

Black-box testing
[Diagram: black-box view: requirements, input, and events go in; output is observed; the internal structure is not visible]

Black-box testing
• Black-box tests are derived from the functional design specification, without regard to the internal program structure.
• They test the product against the end-user (external) specifications.
• Testing is done without any internal knowledge of the product.
• Black-box testing will not exercise hidden functions (i.e., functions implemented but not described in the functional design specification), and errors associated with them will not be found.

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Software Development and Software Testing Life Cycle
– Verification and Validation

Validation activities
• Low-level testing
– unit (module) testing – integration testing

• High-level testing
– function testing – system testing – acceptance testing

Unit Testing
• Unit testing is the process of testing the individual components of a program
• The purpose is to discover discrepancies between the module's interface specification and its actual behavior

Unit Testing
• Carried out at the earliest stage
• Focuses on the smallest testable units/components of the software
• Typically used to verify the control flow and data flow
• It requires knowledge of the code, hence it is performed by the developers

Unit testing
• Testing a given module (X) in isolation may require:
1) A driver module, which transmits test cases in the form of input arguments to X and either prints or interprets the results produced by X
2) Zero or more "stub" modules, each of which simulates the function of a module called by X. One is required for each module that is directly subordinate to X in the execution hierarchy. If X is a terminal module (i.e., it calls no other modules), then no stubs are required.

Build scaffolding for incomplete programs
• Stubs and drivers are code that is (temporarily) written in order to unit test a program
• Driver – code that is executed to accept test case data, pass it to the component being tested, and obtain the result (or pass/fail)
  – main() { foo(1,1,1,1,1); foo(1,2,1,2,1); }

• A stub is a dummy subprogram. It may do minimal data manipulation, print verification of component entry, and return.
  – assignTA(prof, course) { print "You successfully called assignTA for", prof, course }
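
As a sketch of how a driver and a stub fit together, here is a hedged Python rendering of the idea; assign_ta, save_assignment_stub, and the test values are illustrative, not from any real system:

    # Stub: a dummy stand-in for the database module that assign_ta would call.
    def save_assignment_stub(prof, course):
        # Minimal behavior: confirm the call happened and return a canned success.
        print(f"stub called: save_assignment({prof!r}, {course!r})")
        return True

    # Module under test; the stub is injected in place of the real save routine.
    def assign_ta(prof, course, save=save_assignment_stub):
        if not prof or not course:
            return False
        return save(prof, course)

    # Driver: feeds test cases to the module under test and reports pass/fail.
    def driver():
        cases = [(("Dr. Li", "CS101"), True),   # valid input
                 (("", "CS101"), False)]        # missing professor name
        for args, expected in cases:
            result = assign_ta(*args)
            print("PASS" if result == expected else "FAIL", args)

    driver()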

Integration Testing
• Integration testing is the process of combining and testing multiple components together
• It assures that the software units/components operate properly when combined
• It aims to discover errors in the interfaces between the components and to verify communication between units
• It is done by developers/QA teams

Types of Integration Problems
• Wrong call orders
• Wrong parameters
• Missing functions
• Overlapping functions
• Resource problems (memory, etc.)
• Configuration/version control

Function Testing
• Function testing is the process of attempting to detect discrepancies between a program's functional specification and its actual behavior
• It verifies that the software provides the expected services
• It includes positive and negative scenarios, i.e., valid and invalid inputs

System Testing
• System testing is the process of attempting to demonstrate that a program or system does not meet its original requirements and objectives, as stated in the requirements specification
• It tests business functions and performance goals when the application works as a whole
• It verifies software operation from the perspective of the end user, with different configurations/setups

System Testing
• System testing is performed by a testing group before the product is made available to customers. It can begin whenever the product has sufficient functionality to execute some of the tests, or after unit and integration testing are completed.
• It can be conducted in parallel with function testing. Because the tests usually depend on functional interfaces, it may be wise to delay system testing until function testing has demonstrated some predefined level of reliability, e.g., 40% of function testing is complete.

The steps of system testing are:
• Decompose and analyze the requirements specification
• Partition the requirements into logical categories and, for each component, make a list of the detailed requirements
• For each type of system testing:
  – For each relevant requirement, determine inputs and outputs
  – Develop the requirements test cases

The steps of system testing are:
• Develop a requirements coverage matrix: a table in which an entry describes a specific subtest that adds value to the requirements coverage, the priority of that subtest, and the specific test cases in which that subtest appears
• Execute the test cases and measure logic coverage
• Develop additional tests, as indicated by the combined coverage information

Types/ Goals of System Testing
• Usability testing
• Performance testing
• Load testing
• Stress testing
• Security testing
• Configuration testing
• Compatibility testing
• Installability testing
• Recovery testing
• Availability testing

Usability Testing
• How easy is it for users to bring up what they want? • How easily can they navigate through the menus?

Usability Testing
• Usability testing is the process of attempting to identify discrepancies between the user interface of a product and the human engineering requirements of its potential users
• Usability testing collects information on specific issues from the intended users
• It often involves evaluation of a product's presentation rather than its functionality

Usability Testing
• Usability testing involves having the users work with the product and observing their responses to it
• Unlike beta testing, which also involves the user, it should be done as early as possible in the development cycle
• The real customer is involved as early as possible, even at the stage when only screens drawn on paper are available

Meaningful message from Microsoft's Access.

• Hmmm... what would you do?

Meaningful message from Microsoft's Excel.

Lotus Notes

Rational ClearCase irrational message

Performance testing
• To determine whether the program meets its performance requirements
• The IEEE standard 610.12-1990 (Software Engineering Terminology) defines performance testing as: testing conducted to evaluate the compliance of a system or component with specified performance requirements

Performance testing
• Many programs have specific performance efficiency objectives
• Performance testing focuses on performance parameters such as:
  – transaction response time
  – throughput, etc.

• e.g. in database systems the response time relates to the time to obtain a report after clicking on a specific button

Load Testing
• Load testing is subjecting a system to a statistically representative (usually) load.

Stress Testing
• The IEEE standard 610.12-1990 (Software Engineering Terminology) defines stress testing as: testing conducted to evaluate a system or component at or beyond the limits of its specified requirements

Stress Testing
• Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, CPU, network bandwidth) needed to process that load
• The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful
• The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data)
• Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, etc.
• The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion
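
A minimal sketch of the load/stress idea, assuming a hypothetical process_transaction function standing in for the system under test; the worker counts and failure rule are illustrative:

    import concurrent.futures

    # Hypothetical unit of work standing in for the system under test.
    def process_transaction(txn_id):
        if txn_id % 1000 == 999:              # simulate occasional resource exhaustion
            raise MemoryError("out of buffers")
        return f"ok-{txn_id}"

    def run(n_transactions, workers):
        ok = failed = 0
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(process_transaction, i) for i in range(n_transactions)]
            for future in futures:
                try:
                    future.result()
                    ok += 1
                except Exception:
                    failed += 1               # a graceful failure, not a crash or data loss
        print(f"workers={workers}: {ok} ok, {failed} failed")

    run(5000, workers=10)     # load: a statistically representative level
    run(5000, workers=200)    # stress: concurrency well beyond the specified limit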



Configuration testing
• To determine whether the program operates properly when the software or hardware is configured in a required manner
• It is the process of checking the operation of the software with various types of hardware
• E.g., for applications that run on Windows-based PCs used in homes and businesses:
  – PCs from different manufacturers such as Compaq, Dell, Hewlett-Packard, IBM, and others
  – Components: disk drives, video, sound, modem, and network cards
  – Options and memory
  – Device drivers

Compatibility testing
• Testing whether the system is compatible with the other systems with which it should communicate
• It means checking that your software interacts with and shares information correctly with other software
• E.g., what other software (operating systems, web browsers, etc.) is your software designed to be compatible with?
• E.g., upgrading to a new database program and having all your existing databases load in

Security testing
• It attempts to verify that protection mechanisms built into a system will protect it from improper penetration
• Security is a primary concern when communicating and conducting business online, especially business-critical transactions
• Checking:
  – How does the website authenticate users?
  – How does the website encrypt data?
  – How safe is credit card or user information?
  – How does the website handle access rights?

Installability
• To identify the ways in which the installation procedures lead to incorrect results
• Installation options:
  – New
  – Upgrade
  – Customized/Complete
  – Under normal & abnormal conditions

• Important - Makes first impression on the end user

Recovery testing
• To determine whether the system or program meets its requirements for recovery after a failure

Acceptance Testing
• System testing performed by end users to determine whether the software is ready for final deployment

Acceptance testing
• Acceptance testing is the process of comparing the end product to the current needs of its end users
• It is usually performed by the customer or end user after the testing group has satisfactorily completed usability, function, and system testing
• It usually involves running and operating the software in production mode for a pre-specified period

Acceptance testing
• If the software is developed under contract, acceptance testing is performed by the contracting customer. Acceptance criteria are defined in the contract.
• If the product is not developed under contract, the developing organization can arrange for alternative forms of acceptance testing:
  – ALPHA
  – BETA

Acceptance Testing
• ALPHA and BETA testing are each employed as a form of acceptance testing
• Often both are used, in which case BETA follows ALPHA
• Both involve running and operating the software in production mode for a pre-specified period
• The ALPHA test is usually performed by end users inside the development organization. The BETA test is usually performed by a selected subset of actual customers outside the company, before the software is made available to all customers.

Alpha testing
• At the developer's site, by the customer
• Developer:
  – Looking over the shoulder
  – Recording errors and usage problems
• Controlled environment

Beta testing
• At one or more customer sites, by end users
• Developer not present
• A "live" situation; the developer is not in control
• Customer records problems (real or imagined) and reports them to the developer

Final Candidate testing
• Testing before final release • Also called Golden candidate testing

Retesting
• Why retest?
– Because any software product that is actively used and supported must be changed from time to time, and every new version of a product should be retested

Progressive testing and regressive testing
• Most test cases, unless they are truly throw-away, begin as progressive test cases and eventually become regression test cases for the life of the product.

Regression testing
• Regression testing is not another testing activity
• It is a re-execution of some or all of the tests developed for a specific testing activity, for each build of the application
• It verifies that changes or fixes have not introduced new problems
• It may be performed for each activity (e.g., unit test, function test, system test)

Regression Test
• Testing software that has been tested before.
– Why?
• Peripheral testing for bug fixes
    • Retesting the old version's functionality with the new version

Smoke Testing
• Smoke testing determines whether the system is sufficiently stable and functional to warrant the cost of further, more rigorous testing
• Smoke testing is also called sanity testing

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Software Development and Software Testing Life Cycle
– Verification and Validation

Validation methods
• The following are commonly used Black Box methods :
– equivalence partitioning – boundary-value analysis – error guessing

Equivalence Partitioning
• An equivalence class is a subset of data that is representative of a larger class
• Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class

Equivalence Partitioning
• If we expect the same result from two tests, we consider them equivalent. A group of tests forms an equivalence class if:
  – They all test the same thing
  – If one test catches a bug, the others probably will too
  – If one test doesn't catch a bug, the others probably won't either

Equivalence Partitioning
• For example, a program which edits credit limits within a given range ($10,000-$15,000) would have three equivalence classes:
– Less than $10,000 (invalid)
  – Between $10,000 and $15,000 (valid)
  – Greater than $15,000 (invalid)
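
A sketch of how those three classes become three test cases, assuming a hypothetical is_valid_credit_limit function; one representative value stands in for each whole class:

    # Hypothetical validator for the $10,000-$15,000 credit limit rule.
    def is_valid_credit_limit(amount):
        return 10_000 <= amount <= 15_000

    # One representative test value per equivalence class.
    cases = [
        (5_000,  False),   # class 1: less than $10,000 (invalid)
        (12_000, True),    # class 2: between $10,000 and $15,000 (valid)
        (20_000, False),   # class 3: greater than $15,000 (invalid)
    ]
    for amount, expected in cases:
        assert is_valid_credit_limit(amount) == expected, amount
    print("one test per equivalence class, all behaving as expected")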

Equivalence Partitioning
• Partitioning system inputs and outputs into ‘equivalence sets’
– If the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are <10,000, 10,000-99,999, and >99,999

• The aim is to minimize the number of test cases required to cover these input conditions

Equivalence Partitioning
• Equivalence classes may be defined according to the following guidelines:
– If an input condition specifies a range, one valid and two invalid equivalence classes are defined
  – If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined
  – If an input condition is boolean, then one valid and one invalid equivalence class are defined

Equivalence Partitioning Summary
• Divide the input domain into classes of data for which test cases can be generated
• Attempt to uncover classes of errors
• Based on equivalence classes for input conditions
• An equivalence class represents a set of valid or invalid states
• An input condition is either a specific numeric value, a range of values, a set of related values, or a Boolean condition
• Equivalence classes can be defined as follows:
  – If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined
  – If an input condition specifies a Boolean or a member of a set, one valid and one invalid equivalence class are defined
• Test cases for each input domain data item are developed and executed

Boundary value analysis
• “Bugs lurk in corners and congregate at boundaries…”
Boris Beizer

Boundary value analysis
• A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function
• In the same credit limit example, boundary analysis would test:
  – Low boundary plus or minus one ($9,999 and $10,001)
  – On the boundary ($10,000 and $15,000)
  – Upper boundary plus or minus one ($14,999 and $15,001)

Boundary value analysis
• A large number of errors tend to occur at the boundaries of the input domain
• BVA leads to the selection of test cases that exercise boundary values
• BVA complements equivalence partitioning: rather than selecting any element in an equivalence class, select those at the "edge" of the class
• Examples:
  – For a range of values bounded by a and b, test (a-1), a, (a+1), (b-1), b, (b+1)
  – If input conditions specify a number of values n, test with (n-1), n, and (n+1) input values
  – Apply guidelines 1 and 2 to output conditions (e.g., generate a table of minimum and maximum size)
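
A small helper illustrating the (a-1), a, (a+1), (b-1), b, (b+1) rule as a sketch; the function name is illustrative:

    def boundary_values(a, b):
        """Candidate test inputs for a range bounded by a and b."""
        return [a - 1, a, a + 1, b - 1, b, b + 1]

    # For the credit limit range $10,000 to $15,000:
    print(boundary_values(10_000, 15_000))
    # [9999, 10000, 10001, 14999, 15000, 15001]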

Example: Loan application
Input fields:
• Customer name: 2-64 characters
• Account number: 6 digits, first digit non-zero
• Loan amount requested: £500 to £9000
• Term of loan: 1 to 30 years
• Monthly repayment: minimum £10
Outputs: term, repayment, interest rate, total paid back

Account number
• First character: valid: non-zero; invalid: zero
• Number of digits: valid: 6; invalid: 5 (too few) or 7 (too many)

Condition: Account number
• Valid partitions: 6 digits, first digit non-zero
• Invalid partitions: <6 digits; >6 digits; first digit = 0; non-digit
• Valid boundaries: 100000, 999999
• Invalid boundaries: 5 digits, 7 digits, 0 digits
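
A sketch pulling the account-number partitions and boundaries together as executable checks; the validator is a hypothetical implementation of the stated rule (exactly 6 digits, first digit non-zero):

    def is_valid_account_number(s):
        # Hypothetical rule from the example above.
        return s.isdigit() and len(s) == 6 and s[0] != "0"

    cases = [
        ("100000",  True),    # valid boundary: smallest 6-digit value, first digit non-zero
        ("999999",  True),    # valid boundary: largest 6-digit value
        ("99999",   False),   # invalid boundary: 5 digits
        ("1000000", False),   # invalid boundary: 7 digits
        ("",        False),   # invalid boundary: 0 digits
        ("012345",  False),   # invalid partition: first digit is zero
        ("12a456",  False),   # invalid partition: non-digit character
    ]
    for value, expected in cases:
        assert is_valid_account_number(value) == expected, value
    print("account-number partitions and boundaries covered")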

Error Guessing
• Based on the theory that test cases can be developed from the intuition and experience of the test engineer
• For example, where one of the inputs is a date, a test engineer might try February 29, 2000 or 9/9/99

Error guessing
• Error guessing is an ad hoc approach, based on intuition and experience, to identify tests that are considered likely to expose errors
• The basic idea is to make a list of possible errors or error-prone situations and then develop tests based on the list
• E.g., where one of the inputs is a date, a test engineer may try February 29, 2000
• A = B / C: what if C is 0?
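
A sketch of the two guesses above as executable checks; safe_divide and its return-None policy are hypothetical:

    from datetime import date

    # Guess 1: leap-day handling. Feb 29, 2000 is a real date (2000 is a leap
    # year); Feb 29, 1999 is not, and should be rejected rather than crash.
    assert date(2000, 2, 29).month == 2
    try:
        date(1999, 2, 29)
        print("FAIL: invalid leap day accepted")
    except ValueError:
        print("PASS: invalid leap day rejected")

    # Guess 2: A = B / C with C = 0 should be handled, not crash.
    def safe_divide(b, c):
        return b / c if c != 0 else None   # hypothetical policy for C = 0

    assert safe_divide(10, 2) == 5
    assert safe_divide(10, 0) is None
    print("PASS: division by zero handled")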

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Software Development and Software Testing Life Cycle
– Verification and Validation

White-box methods for internal-based tests
• There are three basic forms of logic coverage:
  – statement coverage
  – decision (branch) coverage
  – condition coverage

Statement coverage
Execute all statements at least once
• This is the weakest coverage criterion. It requires execution of every line of code at least once.
• Many lines check the value(s) of some variable(s) and make decisions based on this. To check each of the decision-making functions of the line, the programmer has to supply different values to trigger different decisions.
• As an example, consider the following:
  IF (A<B AND C=5) THEN do SOMETHING
  SET D=5

Statement coverage
• IF (A<B AND C=5) THEN do SOMETHING
  SET D=5
• To test these lines we should explore the following cases:
  a) A<B and C=5 (SOMETHING is done, then D is set to 5)
  b) A<B and C!=5 (SOMETHING is not done, D is set to 5)
  c) A>=B and C=5 (SOMETHING is not done, D is set to 5)
  d) A>=B and C!=5 (SOMETHING is not done, D is set to 5)

Decision (branch) coverage
Execute each decision direction at least once

• For branch coverage, the programmer can use case (a) and any one of the other three.
• At a branching point, a program does one thing if a condition (such as A<B AND C=5) is true, and something else if the condition is false. To test a branch, the programmer must test once when the condition is true and once when it is false.

Condition coverage
Execute each decision with all possible outcomes at least once

• This checks each of the ways that the condition can be made true or false
• This requires all four cases
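
A sketch of the example as runnable code, showing which of the four cases each criterion demands; the Python rendering of the pseudocode is assumed:

    def example(a, b, c):
        did_something = False
        if a < b and c == 5:
            did_something = True     # "do SOMETHING"
        d = 5                        # "SET D=5"
        return did_something, d

    # Statement coverage: case (a) alone executes every line.
    assert example(1, 2, 5) == (True, 5)    # (a) A<B and C=5

    # Branch coverage: case (a) plus any one false case covers both directions.
    assert example(1, 2, 9) == (False, 5)   # (b) A<B and C!=5

    # Condition coverage: all four cases, so each condition is both true and false.
    assert example(3, 2, 5) == (False, 5)   # (c) A>=B and C=5
    assert example(3, 2, 9) == (False, 5)   # (d) A>=B and C!=5
    print("all four cases exercised")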

Questions?

Real World Software Testing

Software Quality Testing Labs

Agenda
• Software Development and Software Testing Life Cycle
– Verification and Validation

Test Planning
• It is the process of defining a testing project such that it can be properly measured and controlled • It includes test plan, test strategy, test requirements and testing resources

Test Design
• It is the process of defining test procedures and test cases that verify that the test requirements are met • Specify the test procedures and test cases that are easy to implement, easy to maintain and effectively verify the test requirements.

Test Development
• It is the process of creating the test procedures and test cases that verify the test requirements
• Automated testing using tools
• Manual testing

Test basis
• The test basis is the source material (of the product under test) that provides the stimulus for the test. In other words, it is the area targeted as the potential source of an error:
  – Requirements-based tests are based on the requirements document
  – Function-based tests are based on the functional design specification
  – Internal-based tests are based on the internal design specification or code
• Function-based and internal-based tests will fail to detect situations where requirements are not met
• Internal-based tests will fail to detect errors in functionality

Test
• An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component
• A set of one or more test cases

Two basic requirements for all validation tests
• Definition of result
– A necessary part of a test case is a definition of the expected output or result.

• Repeatability

Test Case
• A set of test inputs, execution conditions, and expected results developed for a particular objective
• The smallest entity that is always executed as a unit, from beginning to end
• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly
• A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results
• A test case may also include prerequisites





Test Case
• A "test case" is another name for "scenario". It is a particular situation to be tested, the objective or goal of the test script. For example, in an ATM banking application, a typical scenario would be: "Customer deposits cheque for $1000 to an account that has a balance of $150, and then attempts to withdraw $200". • Every Test Case has a goal; that is, the function to be tested. Quite often the goal will begin with the words "To verify that ..." and the rest of the goal is a straight copy of the functional test defined in the Traceability Matrix. In the banking example above, the goal might be worded as "To verify that error message 63 ('Insufficient cleared funds available') is displayed when the customer deposits a cheque for $1000 to an account with a balance of $150, and then attempts to withdraw $200".
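
A hedged sketch of that ATM scenario as an executable test; the Account class, its cleared-funds rule, and error number 63 are hypothetical stand-ins for the banking application:

    # Hypothetical stand-in for the application under test.
    class Account:
        def __init__(self, balance):
            self.cleared = balance       # cleared funds available for withdrawal
            self.uncleared = 0

        def deposit_cheque(self, amount):
            self.uncleared += amount     # cheques do not clear immediately

        def withdraw(self, amount):
            if amount > self.cleared:
                return "error 63: Insufficient cleared funds available"
            self.cleared -= amount
            return "ok"

    # Goal: verify that error 63 is displayed when the customer deposits a
    # $1000 cheque to an account with a $150 balance, then withdraws $200.
    account = Account(balance=150)
    account.deposit_cheque(1000)
    result = account.withdraw(200)
    assert "error 63" in result, result
    print("scenario verified:", result)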

Test Case Components
• The structure of test cases is one of the things that stays remarkably the same regardless of the technology being tested. The conditions to be tested may differ greatly from one technology to the next, but you still need to know a few basic things about what you plan to test:
• ID #: This is a unique identifier for the test case. The identifier does not imply a sequential order of test execution in most cases. The test case ID can also be intelligent; for example, the test case ID ORD001 could indicate a test case for the ordering process on the first web page.
• Condition: This is an event that should produce an observable result. For example, in an e-commerce application, if the user selects an overnight shipping option, the correct charge should be added to the total of the transaction. A test designer would want to test all shipping options, with each option adding a different amount to the transaction total.





Test Case Components
• Procedure: This is the process a tester needs to perform to invoke the condition and observe the results. A test case procedure should be limited to the steps needed to perform a single test case.
• Expected Result: This is the observable result from invoking a test condition. If you can't observe a result, you can't determine if a test passes or fails. In the previous example of an e-commerce shipping option, the expected results would be specifically defined according to the type of shipping the user selects.
• Pass/Fail: This is where the tester indicates the outcome of the test case. For the purpose of space, I typically use the same column to indicate both "pass" (P) and "fail" (F). In some situations, such as a regulated environment, simply indicating pass or fail is not enough information about the outcome of a test case to provide adequate documentation. For this reason, some people choose to also add a column for "Observed Results."
• Defect Number Cross-reference: If you identify a defect in the execution of a test case, this component of the test case gives you a way to link the test case to a specific defect report.







Sample Business Rule
• A customer may select one of the following options for shipping when ordering products. The shipping cost will be based on the product price before sales tax and the method of shipment, according to the table below. [Shipping cost table not reproduced.]
• If no shipping method is selected, the customer receives an error message, "Please select a shipping option." The ordering process cannot continue until the shipping option has been selected and confirmed.

Characteristics of a good test
• An excellent test case satisfies the following criteria:
– It has a reasonable probability of catching an error
  – It is not redundant
  – It's the best of its breed
  – It is neither too simple nor too complex

Test Execution
• It is the process of running a set of test procedures against a target software build of the application under test, and logging the results
• Automation

Test evaluation
• It is the process of reviewing the results of a test to determine if the test criteria are being met.

What should our overall validation strategies be?
• Requirements-based tests should employ the black-box strategy. User requirements can be tested without knowledge of the internal design specification or the code.
• Function-based tests should employ the black-box strategy. Using the functional design specification to design function-based tests is both necessary and sufficient.
• Internal-based tests must necessarily employ the white-box strategy. Tests can be formulated by using the internal design specification or the code.





Questions?

Real World Software Testing

Software Quality Testing Labs

Effective way to report a defect
"A problem well stated is a problem half solved." - Charles Kettering

Agenda
• Testing • Defect • Defect report
– Summary
  – Description
  – Severity
  – Priority

• Defect tracking

How many testers does it take to change a light bulb?
• None. Testers just noticed that the room was dark. Testers don't fix the problems, they just find them

What is Testing?
• The objective of testing is to find all possible bugs (defects) in a work product
• Testing is the process of trying to discover every conceivable fault or weakness in a work product

The purpose of finding defects
• The purpose of finding defects is to get them fixed
• The prime benefit of testing is that it results in improved quality. Bugs get fixed.

What exactly Does a Software Tester Do?
• The goal of a software tester is to find defects, find them as early as possible, and make sure they get fixed
• "The best tester isn't the one who finds the most bugs or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed." - Testing Computer Software

What Do You Do When You Find a defect?
• Report a defect
• "The point of writing Problem Reports is to get bugs fixed." - Testing Computer Software

What is definition of defect?
• A flaw in a system or system component that causes the system or component to fail to perform its required function. (SEI)
• A defect, if encountered during execution, may cause a failure of the system.

Some typical defect report fields
• Summary
• Date reported
• Detailed description
• Assigned to
• Severity
• Detected in version
• Priority
• System info
• Status
• Reproducible
• Detected by
• Screen prints, logs, etc.

Blank defect report

What is the most important part of the defect report?

Who reads our defect reports?
• Project Manager
• Executives
• Development
• Customer Support
• Marketing
• Quality Assurance
• Any member of the Project Team

Example 1

Example 2

Downstream effects of a poorly written subject line

#  ID      Status  Build   Severity    Priority    Subject
1  310103  Open    6.6.00  2 - High    2 - High    Login box problem
2  310174  Open    6.5.20  3 - Medium  3 - Medium  Result Pane
3  310154  Open    6.5.20  2 - High    1 - High    Admin module - Cannot create a new employee record

What may happen to these defects?
• What if your manager or team lead is in a meeting where they are looking at hundreds of defects for possible deferral?
• The product team cannot make the right decisions about this software error

A better summary line

What is the 2nd most important part of the defect report?

Who reads the description part of the defect report?
• Development
• Project Manager
• Customer Support
• Quality Assurance
• Any member of the Project Team

Example 1

Example 2

What’s missing in these descriptions?
• The product build that the error was found on
• The operating system that it was tested on
• Steps to reproduce

A better description:

How serious is the defect?
Sev. 1 - Show Stopper: Core dumps, inability to install/uninstall the product, product will not start, product hangs or operating system freezes, no workaround is available, data corruption, product abnormally terminates
Sev. 2 - High: Workaround is available, function is not working according to specifications, severe performance degradation, critical to a customer
Sev. 3 - Medium: Incorrect error messages, incorrect data, noticeable performance inefficiencies
Sev. 4 - Low: Misspellings, grammatical errors, enhancement requests, cosmetic flaws

How to decide priority?
Priority 1 - Show Stopper: Immediate fix; blocks further testing; very visible
Priority 2 - High: Must fix before the product is released
Priority 3 - Medium: Should fix if time permits
Priority 4 - Low: Would like to fix, but the product can be released as is

Not all bugs are created equal
Defect          | Severity | Priority
Data corruption | ?        | ?
Misspelling     | ?        | ?

An example of a bug:
• Jane Doe is typing a letter using a word processor program. The program is version 2.01 and runs only on Windows NT. Jane typed something in incorrectly and pressed Escape. When she pressed Escape, the program exited. Jane opened the program again, only to find that ALL of her work was gone. Annoyed with what happened, Jane got up and walked away.

Investigate the bug further?
• Repeat the scenario and see if the user is prompted to save their work
• What happens if you open a file, make changes, and then press Escape? Are you prompted to save?
• What if you are typing text and then open a file on top? Does your work get replaced with the file that was just opened?

What is the bug?
• Jane was not prompted to save her work. When she pressed Escape, all of it was lost when the program exited.

What is the condition?
• When Jane presses Escape.

What part is the Failure?
• The user is not prompted to save changes, nor are they automatically saved. All of the changes are lost.

Every bug has a condition and a failure
• Condition – is usually the circumstances or the steps that were executed prior to the bug appearing. • Failure – is the bug itself.

What we need to write in the defect report
• What are the conditions and the failure?
• Use the failure information for your subject line
• Steps to reproduce the software error
• Determine the severity & priority
• Include the OS & build number
• Information on other conditions tested
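
A sketch of those report fields as a structured record; the field names and values are illustrative, not any real tracker's schema:

    from dataclasses import dataclass

    @dataclass
    class DefectReport:
        summary: str                 # failure-focused subject line
        steps_to_reproduce: list
        severity: int                # 1 = show stopper ... 4 = low
        priority: int                # 1 = immediate fix ... 4 = fix if time permits
        os: str
        build: str
        other_conditions: str = ""

    report = DefectReport(
        summary="Work lost: Escape exits program without prompting to save",
        steps_to_reproduce=[
            "Open the word processor (v2.01, Windows NT)",
            "Type some text",
            "Press Escape",
        ],
        severity=2,
        priority=1,
        os="Windows NT",
        build="2.01",
        other_conditions="Escape after editing an opened file loses changes too",
    )
    print(report.summary)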

Report bugs as soon as possible
[Chart: likelihood of a bug being fixed versus time, from project start to project end; the likelihood falls as the project nears its end, and serious bugs remain likely to be fixed for longer than minor bugs]

Defect Tracking
• Monitoring defects from the time of recording until satisfactory resolution has been determined.

Change Request Life Cycle
[Diagram: change request life cycle: Submitted → Assigned → Resolved → Validated → Close]
• Submitted: the change request is submitted to the database; the change request is open
• Assigned: the change request is assigned to an engineer
• Resolved: the change request is resolved in some approved way (Fix, Reject, Duplicated, Postponed, or returned to UnAssigned)
• Validated: the outcome of the change request is verified, after which the request is closed

References
• Testing Computer Software - Cem Kaner • Software Testing - Ron Patton

Questions?

Effective way to report a defect
"A problem well stated is a problem half solved." - Charles Kettering

Agenda
• Testing • Defect • Defect report
– Summary
  – Description
  – Severity
  – Priority

• Defect tracking

Test Plan Objectives
• To identify the items that are subject to testing
• To communicate, at a high level, the extent of testing
• To define the roles and responsibilities for test activities
• To provide an accurate estimate of the effort required to complete the testing defined in the plan
• To define the infrastructure and support required

Why Planning and not the Plan
• The test plan is simply a by-product of the detailed planning process that's undertaken to create it. It's the planning process that matters, not the resulting document.
• The ultimate goal of the test planning process is communicating the software test team's intent, its expectations, and its understanding of the testing that's to be performed.



Why Planning and not the Plan
• The result of the test planning process must be a clear, concise, agreed-on definition of the product's quality and reliability goals
• It defines the scope and general directions for a testing project
• It describes and justifies the test strategy
• The software test plan is the primary means by which software testers communicate to the product development team what they intend to do

Test plan - Outline
• Test-plan identifier
• Introduction
• Test items
• Features to be tested
• Features not to be tested
• Approach
• Item pass/fail criteria
• Suspension criteria and resumption criteria
• Test deliverables
• Testing tasks
• Environmental needs
• Responsibilities
• Staffing and training needs
• Schedule
• Risks and contingencies
• Approvals

Source: ANSI/IEEE Std 829-1998, Test Documentation

Test Plan Details
1. Test-plan identifier
   – Specify a unique identifier
2. Introduction
   – Objectives, background, scope, and references are included
   – Summarize the software items and software features to be tested. The need for each item and its history may be included.

Test Plan Details
3. Test items
   – Identify the test items, including their version/revision level
   – Supply references to the relevant item documentation, e.g., requirements specification, design specification, user guide, operations guide, installation guide
   – Reference any incident reports relating to the test items

Test Plan Details
4. Features to be tested
   – Identify all software features and combinations of software features to be tested
   – Identify the test-design specification associated with each feature
5. Features not to be tested
   – Identify features and significant combinations of software features which will not be tested, and the reasons

Test Plan Details
6. Approach
   – Describe the overall approach to testing. Specify the major activities, techniques, and tools which are used to test the designated group of features.
   – The approach should be described in sufficient detail to permit identification of the major testing tasks and estimation of the time required to do each one
   – Specify any additional completion criteria and constraints

Test Plan Details
7. Item pass/fail criteria
   – Specify the criteria to be used to determine whether each item has passed or failed testing
8. Suspension criteria and resumption requirements
   – Criteria used to suspend all or a portion of the testing activity
   – Specify the testing activities which must be repeated when testing is resumed

Test Plan Details
9. Test deliverables
   – Identify the deliverable documents, such as:
     - test plan
     - test design specifications
     - test case specifications
     - test procedure specifications
     - test logs
     - test incident reports
     - test summary reports
     - test input data and test output data
     - test tools (e.g., modules, drivers, and stubs)

Test Plan Details
10. Testing tasks
    – Identify the set of tasks necessary to prepare for and perform testing. Identify all intertask dependencies and any special skills required.

Test Plan Details
11. Environmental needs
    – Hardware
    – Software
    – Security
    – Tools
    – Publications

Test Plan Details
12. Responsibilities
    – Identify the groups responsible for managing, designing, preparing, executing, checking, and resolving
    – These groups may include testers, developers, operations staff, user representatives, and the technical support team

Test Plan Details
13. Staffing and training needs
    – Specify test staffing needs by skill level
    – Identify training options for providing the necessary skills

Test Plan Details
14. Schedule
    – Estimate the time required to do each testing task
    – Specify the schedule for each testing task
    – For each resource (i.e., facilities, tools, and staff), specify its period of use

Test Plan Details
15. Risks and contingencies
    – Identify the high-risk assumptions of the test plan. Specify contingency plans for each.
    – E.g., delayed delivery of test items might require increased night-shift scheduling

Test Plan Details
16. Approvals
    – Specify the names and titles of all the persons who must approve the plan

Example – Payroll system
• The system is used to perform following major functions
– Maintain employee information
  – Maintain payroll history information
  – Prepare payroll checks
  – Prepare payroll tax reports
  – Prepare payroll history reports

Plan
Identifier: TP1
Introduction:
1. Objectives
   i. To detail the activities required to prepare for and conduct the system test.
   ii. To communicate to all responsible parties the tasks which they are to perform and the schedule to be followed in performing the tasks.
   iii. To define the sources of the information used to prepare the plan.
   iv. To define the test tools and environment needed to conduct the system test.

Introduction contd.
• Background
– History

• Scope
– This test plan covers a full system test for corporate payroll system.

• References
– List of documents used as sources of information for the test plan.

Test items
• All items which make up the corporate payroll system
• Documents which provide the basis for defining correct operation:
  – Requirements documents
  – Design description
  – Reference manual
• Items to be tested:
  – Program modules, user procedures, operator procedures

Features to be tested
• Database conversion
• Complete payroll processing for salaried employees only
• Complete payroll processing for hourly employees only
• Complete payroll processing for all employees
• Periodic reporting
• Security
• Recovery
• Performance

Features not to be tested
• Certain reports will not be tested because they are not to be used when the system is initially installed
• E.g.:
  – Training schedule reports
  – Salary reports

Approach
• The test personnel will use the system documentation to prepare all test design, case, and procedure specifications
• Personnel from the payroll accounting departments will assist in developing the test designs and test cases

Testing done will be for the following :
• Verification of the converted database using a "database auditor", checking value ranges within a record and required relationships between records
• Counting input and output records
• Payroll processing, using sets of employee records: salaried employees only, hourly employees only, and a merged set of the two
• Security testing: attempted access without a proper password to online data entry and display transactions

Testing done will be for the following :
• Recovery testing
• Performance testing
– Performance will be evaluated against the performance requirements by measuring the run times of several jobs using production data volumes.

• Regression
– Regression testing will be done on each new version to test the impact of the modifications

Item pass/fail criteria
• The system must satisfy the standards requirements for system pass/fail stated in the development standards & procedures
• The system must also satisfy the memory requirement: no greater than 64K
• Entry criteria: integration testing must be completed
• Exit criteria: execution of all test cases and completion of testing

Suspension Criteria & Resumption Requirements
• Suspension: e.g., inability to convert the employee information database will cause suspension of all activities (show-stopper defects)
• Resumption: after suspension has occurred, a regression test will be run on arrival of a build with the defects fixed

Test deliverables
The following documents will be generated by the system test group and delivered to the configuration management group:
• System test plan
• System test design specifications
• System test case specifications
• System test logs
• System test incident report log
• System test incident reports
• System test summary

Testing tasks
• Prepare test case specifications
• Prepare test procedure specifications
• Build the employee information data
• Execute the test procedures

Environmental needs
• Hardware: testing will be done on the XYZ hardware configuration
• Software: the production operating software will be used to execute test cases; communications software

Responsibilities
The following groups have responsibility:
• System test group
• Development project group
• Corporate payroll department

Staffing & training needs
The following staff is needed.
Test group:
• Test Manager: 1
• Senior Test Analyst: 1
• Test Analyst: 2
• Test Technician: 1
Payroll supervisor: 1
Training: the corporate payroll department personnel must be trained to do the data entry transactions.

Schedule
Prepare a task list with dates for each testing task.

Risks & contingencies
• If the testing schedule is significantly impacted by system failure, the development manager has agreed to assign a full-time person to the test group to do debugging

Questions?

Agenda
• Testing • Defect • Defect report
– Summary
  – Description
  – Severity
  – Priority

• Defect tracking

Test Reporting
• Test reports are a means of communicating test status and findings both within the project team and to management
• Test reports are a key mechanism available to the test manager to communicate the value of testing to the project team and IS management

Test Reporting
• Reports usually focus on the test work products produced, the defects identified during the test process, the efficiency of the test, and test status
• The frequency of test reports should be at the discretion of the team and should reflect the extensiveness of the test process. Generally, large testing projects will require much more interim reporting than small test projects with a very limited test staff.

Test Reporting
• Requirements tracing
• Function test matrix
  – This maps the system functions to the test cases that validate each function
  – The function test matrix shows which tests must be performed in order to validate the functions. The matrix will be used to determine what tests are needed and the sequencing of tests. It will also be used to determine the status of testing.

• In a large testing project, it is easy to lose track of what has been tested and what should be tested. The question often arises, "Is the testing comprehensive enough?"
• A simple way of determining what to test is to go through the source documents (Business Requirements, Functional Specifications, System Design Document, etc.) paragraph by paragraph and extract each requirement. A simple matrix is built, with the following format:

Test Assets
• As we develop our software we also create test assets/testware, such as:
  – Test Plan
  – Test Cases
  – Test Data
  – Test Results
  – Defect Reports

Source Document | Section | Requirement | Test Case
Func Spec | 3.1 | An account number must be entered, of 8 digits and a 1-digit checksum. Invalid account numbers must be diagnosed with an appropriate error message. | 147
Func Spec | 3.1 | The account number must exist in the Customer Database; if not, an appropriate error message must be displayed. | 148
Func Spec | 3.2 | Given a valid account number, the customer details must be retrieved from the Customer Database and displayed on the CD-102 screen form. | 149

• Basically, the Traceability Matrix relates each functional requirement to a specific test case. Thus if someone says, "Did we test that the account numbers must be valid?", the matrix indicates which test case does that test.
• The Traceability Matrix is created before any test cases are written, because it is a complete list of what has to be tested.
• Sometimes there is one test case for each requirement; other times, several requirements can be validated by one longer test case.
• Frequently, there are NO business requirements available; although this is a major mistake (as no one has documented what the business needs), such is life. In this case there won't be a source reference document, so there is no "traceability"; the matrix is then simply called a "Test Matrix".

Requirements Tracing/ Function Test Matrix

• The test case numbers can be color coded, or coded with a number or symbol, to indicate the following:
  – Black: test not yet developed
  – Blue: test developed and executed
  – Red: test developed but not executed

Requirements Tracing/ Function Test Matrix

Requirement/Function | Test Case Numbers
A | 1, 2, 9
B | 15, 46, 47
C | 45, 74, 75
D | 73
n | m
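
A sketch of the matrix as data, with a check for functions whose tests have not yet been executed; the identifiers and the executed set are hypothetical:

    # Requirement/function -> test case numbers, as in the matrix above.
    matrix = {
        "A": [1, 2, 9],
        "B": [15, 46, 47],
        "C": [45, 74, 75],
        "D": [73],
    }
    executed = {1, 2, 15, 45}   # hypothetical: tests run so far

    for req, tests in matrix.items():
        remaining = [t for t in tests if t not in executed]
        status = "fully tested" if not remaining else f"awaiting tests {remaining}"
        print(f"{req}: {status}")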

Functional Testing Status Reporting
• Objective: The purpose of this report is to present what functions have been fully tested, what functions have been tested but contain errors, and what functions have not been tested. The report will include 100 percent of the functions to be tested in accordance with the test plan.
• Example: A sample report, showing that 50 percent of the functions tested have errors, 40 percent are fully tested, and 10 percent of the functions have not been tested, is illustrated in the accompanying graph.
• How to Interpret the Report: The report is designed to show status. It is intended for the test manager and/or the customer of the software system. The interpretation will depend heavily on the point in the test process at which the report is prepared. As the implementation date approaches, a high number of functions tested with uncorrected errors, plus functions not tested, would raise concerns about meeting the implementation date.

Functional Testing Status Report

[Bar chart: functional testing status: percent of functions tested with uncorrected errors, fully tested, and not tested]

Functions Working Timeline
• Objective: The purpose of this report is to show the status of testing and the probability that the development and test groups will have the system ready on the projected implementation date.
• Example: The functions working timeline (see the accompanying chart) shows the normal projection for having functions working. This report assumes a September implementation date and shows, from January through September, the percent of functions that should be working correctly at any point in time. The actual line shows that the project is doing better than projected.
• How to Interpret the Report: If the actual is performing better than the planned, the probability of meeting the implementation date is high. On the other hand, if the actual percent of functions working is less than planned, both the test manager and the development team should be concerned, and may want to extend the implementation date or add additional resources to testing and/or development.





Functions Working Timeline

[Line chart: percent of functions working by month, Planned vs. Actual]

Defects Uncovered versus Corrected Gap Timeline
• Objective: The purpose of this report is to show the backlog of detected but uncorrected defects. It merely requires recording defects when they are detected, and recording them again when they have been successfully corrected.
• Example: The graph below shows a project beginning in January with a projected September implementation. One line shows the cumulative number of defects uncovered in test; the second line shows the cumulative number of defects corrected by the development team and retested to demonstrate their correctness. The gap then represents the number of uncovered but uncorrected defects at any point in time.
• How to Interpret the Report: The ideal project would have a very small gap between these two timelines. If the gap becomes large, it indicates that the backlog of uncorrected defects is growing, and the probability of the development team correcting them prior to the implementation date is decreasing. The development team needs to manage this gap to ensure that it remains minimal.
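• A minimal sketch of the bookkeeping behind this report; the per-build counts are made-up numbers.

from itertools import accumulate

detected_per_build  = [30, 25, 20, 15, 10]   # hypothetical data
corrected_per_build = [20, 22, 21, 18, 14]   # hypothetical data

cum_detected  = list(accumulate(detected_per_build))
cum_corrected = list(accumulate(corrected_per_build))

# The gap is the backlog of detected but uncorrected defects per build
for build, (d, c) in enumerate(zip(cum_detected, cum_corrected), start=1):
    print(f"Build {build}: detected={d} corrected={c} gap={d - c}")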





Defect Uncovered versus Corrected Gap Timeline

[Line chart: cumulative defects Detected vs. Corrected across Builds 1-5; the Gap between the two lines is the uncorrected backlog]

Average Age Uncorrected Defects by Type
[Bar chart: average age in days of uncorrected defects, by severity: Critical, Major, Minor]

Defect Distribution Report
• Objective: The purpose of this report is to show how defects are distributed among the modules/units being tested. It shows the total cumulative defects uncovered for each module under test at any point in time.
• Example: The defect distribution report below shows eight units under test and the number of defects that have been uncovered in each of those units to date. The report could be enhanced to show the extent of testing that has occurred on each module. For example, it might be color-coded by the number of tests, or the number of tests might be incorporated into the bar as a number, such as a 6 for a unit that has undergone six tests at the time the report was prepared.
• How to Interpret the Report: This report can help identify modules that have an excessive defect rate. A variation of the report could show the cumulative defects by test: for example, the defects uncovered in test 1, the cumulative defects uncovered by the end of test 2, the cumulative defects uncovered by test 3, and so forth. Modules with abnormally high defect rates frequently have ineffective architecture, and are candidates for rewrite rather than additional testing.





Defect Distribution Report
[Bar chart: number of defects uncovered to date in each of eight units under test, A through H]

Questions?

Agenda
• Testing
• Defect
• Defect report
  – Summary
  – Description
  – Severity
  – Priority

• Defect tracking

Why not just "test everything"?
A system has 20 screens, with an average of 4 menus per screen, 3 options per menu, 10 fields per screen, 2 input types per field (e.g. a date as "Jan 3" or "3/1"; a number as integer or decimal), and around 100 possible values per field.

Total for 'exhaustive' testing: 20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests.
At 1 second per test: 8,000 minutes = 133 hours, roughly 17.7 working days of 7.5 hours (not counting finger trouble, faults, or retests).

At 10 seconds per test: about 34 weeks; at 1 minute: about 4 years; at 10 minutes: about 40 years.
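The arithmetic, reproduced as a quick sanity check (the 17.7-day figure implies 7.5-hour working days):

tests = 20 * 4 * 3 * 10 * 2 * 100   # screens * menus * options * fields * input types * values
assert tests == 480_000

hours = tests / 3600                # at 1 second per test: ~133 hours
working_days = hours / 7.5          # ~17.8 working days of 7.5 hours
print(f"{tests:,} tests -> {hours:.0f} hours -> {working_days:.1f} working days")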

How much to test?
[Graph: Amount of Testing (x-axis) vs. Quantity (y-axis); the Number of Missed Bugs curve falls while the Cost of Testing curve rises, and the Optimal Amount of Testing lies between Under Testing and Over Testing]

How much testing?
• It depends on RISK
– risk of missing important faults
– risk of incurring failure costs
– risk of releasing untested or under-tested software
– risk of losing credibility and market share
– risk of missing a market window
– risk of over-testing, or ineffective testing

Testing addresses risk
• So, risks help us both to determine what to test and how much to test. It is risk that helps the tester to prioritize the tests which WILL be run above all the tests which COULD be run.

What is Risk?
• Risk can be defined as a combination of the likelihood of a problem occurring and the impact it would have on the user

Risk Analysis

Which quadrant's pieces need the most testing?

                  Low Likelihood            High Likelihood
High Impact       Rare, but devastating     Yipe!
Low Impact        Harmless                  Annoying

• Each piece of the system is assigned Impact and Likelihood values
• Pieces with high Impact and high Likelihood need the most testing
• Every piece with high Impact needs attention, even if its Likelihood value is low
• Next we test the high Likelihood pieces

Risk Based Testing
                        Likelihood of Failure
                        Low                 High
Impact of    High       2 - High Risk       1 - Very High Risk
Failure      Low        4 - Low Risk        3 - Moderate Risk
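• A sketch of this classification, with illustrative pieces and an assumed cutoff of 0.5 on a 0 to 1 scale.

def risk_quadrant(impact, likelihood, cutoff=0.5):
    if impact >= cutoff and likelihood >= cutoff:
        return "1 - Very High Risk"   # test first and most thoroughly
    if impact >= cutoff:
        return "2 - High Risk"        # high impact deserves attention even if unlikely
    if likelihood >= cutoff:
        return "3 - Moderate Risk"
    return "4 - Low Risk"

# piece: (impact, likelihood), made-up values
pieces = {"checkout": (0.9, 0.8), "audit log": (0.9, 0.2), "help page": (0.2, 0.3)}
for name, (impact, likelihood) in pieces.items():
    print(name, "->", risk_quadrant(impact, likelihood))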

When should you stop testing?
• When the desired number of test cases has been executed
• When all identified defects have been addressed
• When the cost of testing outweighs the potential cost of not fixing a bug
• When you are confident that the system works correctly
• It depends on the risks for your system

Questions?

Agenda
• Testing
• Defect
• Defect report
  – Summary
  – Description
  – Severity
  – Priority

• Defect tracking

How does a client/server environment affect testing?
• Concurrency
• Configuration
• Portability
• Performance
  – Load
    • Peak load testing
  – Stress
• Database
  – Integrity
  – Validity
  – Volume testing

How can World Wide Web sites be tested?
• Web applications are a special kind of client/server application
• What additional tests are required?

e-Business Application Testing Challenges
• Content is dynamic
  – Personalization means every user gets a different page
• Wide variety of browsers and platforms
  – Every environment has potential problems
• The application keeps changing
  – New builds every day!

Most Common Problems in Web Sites
• Quality/Content
  – Broken links (Error 404)
  – Missing components
• Navigation support
• Browser compatibility

Common Web Site Testing Objectives
 General page layout
  • frames
  • images
  • tables
 Functional testing
  • of each transaction
  • using different sets of valid data
  • across different browsers
 Regression testing
  • hardware and software upgrades
  • web site enhancements

General Page Layout
[Diagram: a sample web page layout showing a frame, an image, and a table]

Functional Testing

[Diagram: an e-commerce application with four transactions (Create an order, View Shopping Cart, Delete an order, Create new account), each exercised with valid and invalid data]

Functional Testing

[Diagram: the same four transactions (Create an order, View Shopping Cart, Delete an order, Create new account) executed side by side in two different browsers]

Regression Testing
[Diagram: the same four transactions (Create an order, View Shopping Cart, Delete an order, Create new account) re-run against Build 1.i and Build 2.0]

Questions?

Agenda
• Testing • Defect • Defect report
– – – – Summary Description Severity Priority

• Defect tracking

Testing Tools
• Capture/Playback: capture user interaction with the application, play it back, and compare against a baseline (automated functional testing tools)
• Test Management: create test documents (test plans, cases), track test execution progress during the test cycle
• Performance/Stress Test: measure performance of the application under expected load and stress conditions
• Defect Tracking: report errors found during test execution, track them through fix and re-test

Why Use Testing Tools?

• No Testing
• Manual Testing
  – Time consuming
  – Low reliability
  – Human resources
  – Inconsistent
• Automated Testing
  – Speed
  – Repeatability
  – Coverage
  – Reliability
  – Reusability
  – Programming capabilities
  – Cost

Which Test Cases to Automate?
 Tests that need to be run for every build of the web site (sanity-level, regression tests)
 Tests that use multiple data values for the same actions (data-driven tests)
 Identical tests that need to be executed using different browsers
 Mission-critical pages
 Transactions with pages that won't change in the short term

Which Test Cases to Automate?
 Time-consuming manual tasks
 Tests that require detailed information from application internals (e.g. SQL, GUI attributes)
 Multi-user scenarios
 Stress/Load tests
 Tests which are inexpensive to automate

More repetitive execution? Better candidate for automation.

Which Test Cases Not to Automate?
 Usability testing
– "How easy is the web site to use?"

 One-time testing
 Ad hoc/random testing
– based on intuition and knowledge of web site

 Tests without predictable results

Improvisation required? Poor candidate for automation.

Test automation is based on a simple economic proposition
• If a manual test costs $X to run the first time, it will cost just about $X to run each time thereafter, whereas:
• If an automated test costs $Y to create, it will cost almost nothing to run from then on.
• $Y is bigger than $X, typically 3 to 30 times as big, with the most commonly cited multiplier being 10. Suppose 10 is correct for your application and your automation tools. Then you should automate any test that will be run more than 10 times.
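• The break-even rule as a sketch; the near-zero cost per automated run is the proposition's assumption, not a universal fact.

def worth_automating(manual_cost_x, automation_multiple_k, expected_runs):
    # Total manual cost grows linearly; automation is a one-off k * X outlay
    return expected_runs * manual_cost_x > automation_multiple_k * manual_cost_x

print(worth_automating(1.0, 10, 11))   # True: more than 10 runs, so automate
print(worth_automating(1.0, 10, 5))    # False: cheaper to keep it manual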

Questions?

Thank You!

CMM- Capability Maturity Model
• It is an industry-standard model for defining and assessing the maturity of a software company's development process.
• It was developed by the SEI (Software Engineering Institute) at Carnegie Mellon University, under the direction of the U.S. DoD.

CMM levels
• Its 5 levels provide a simple means to assess a company's software development maturity and determine the key process areas it could improve to move up to the next level of maturity.
• Level 1: Initial – Ad hoc and chaotic process. A project's success depends on heroes and luck. Unpredictable and poorly controlled.

CMM levels
• Level 2: Repeatable – Project-level thinking. Can repeat previously mastered tasks.
• Level 3: Defined – Organizational-level thinking. Process characterized, fairly well understood, and standardized.
• Level 4: Managed – Process measured and controlled. Process is under statistical control. Product quality is specified quantitatively beforehand.
• Level 5: Optimizing – Focus on process improvement. New technologies and processes are attempted.

Verifying requirements
• To ensure that users’ needs are properly understood before translating them into design. • Written from customer or market perspective. • Properties of good requirements specifications are:
– – – – – – Precise, unambiguous, and clear Consistent Relevant Testable - i.e. measurable Traceable Achievable

Verifying the functional design
• Functional design is the process of translating user requirements into a set of external interfaces.
• Written from an engineering perspective.
• A checklist for the functional design specification covers items like:
  – Check whether each requirement has been implemented
  – What's missing?
  – Watch for vague terms such as "some", "sometimes", "often", "mostly", "most", etc.

Verifying the internal design
• Internal design is the process of translating the functional specification into a detailed set of data structures, data flows, and algorithms.
• The internal design checklist covers items like:
  – Does the design document contain a description of the procedure that was used to do the preliminary design?
  – Is there a model of the user interface to the computing system?
  – Is there a high-level functional model of the proposed computing system?

Verifying the code
• Coding is the process of translating the detailed design specification into a specific set of code.
• Verifying the code involves the following activities:
  – Comparing the code with the design specification
  – Examining the code against a language-specific checklist
• Code checklist – sample items:
  – Data reference errors, e.g. uninitialized variables
  – Data declaration errors, e.g. variables with similar names
  – Computation errors, e.g. a target variable type smaller than the right-hand expression
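• Hypothetical Python fragments showing what each checklist item catches (Python's closest analogue of the narrowing error is an explicit truncation):

def reference_error():
    return total + 1        # data reference error: 'total' was never assigned

def declaration_error(items):
    item_count = len(items)
    itemcount = 0           # data declaration error: two confusable names
    return item_count - itemcount

def computation_error(amount_cents):
    dollars = int(amount_cents / 100)   # computation error: fraction silently dropped
    return dollars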

Coverage
• How do we measure how thoroughly tested a product is ?
– The measure of “testedness” for a software are the degrees to which the collective set f test cases enhance the
• Requirements coverage • Function coverage • Logic coverage

Example
• The following procedure (the original pseudocode, restated as runnable Python) is used by the coverage measures below:

def procedure(a, b, x):
    if a > 1 and b == 0:
        x = x / a
    if a == 2 or x > 1:
        x = x + 1
    return x

Statement coverage
• It is determined by assessing the proportion of statements visited by the set of proposed test cases.
• 100% statement coverage is where every statement in the program is visited by at least one test.
• Disadvantages:
  – It is insensitive to some control structures.
  – It does not report whether loops reach their termination condition.

Decision/Branch coverage
• It is determined by assessing the proportion of decision branches exercised by the set of proposed test cases. 100% branch coverage is where every decision branch in the program is visited by at least one test.
• Disadvantage: it ignores branches within Boolean expressions, such as those introduced by short-circuit operators.

Condition coverage
• It is determined by assessing that each condition in a decision takes all possible outcomes at least once.
• It measures the sub-expressions independently of each other.
• It has better sensitivity to the control flow than decision coverage.
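• Worked test sets for the procedure(a, b, x) example above, showing how the three criteria differ; the values were chosen by hand, and the condition-coverage pair is the classic pitfall case.

# (a, b, x) triples for procedure(a, b, x)
statement_tests = [(2, 0, 3)]              # one test executes every statement
branch_tests    = [(2, 0, 3), (1, 0, 1)]   # adds the False outcome of both decisions
condition_tests = [(1, 0, 3), (2, 1, 1)]   # every atomic condition takes both True and
                                           # False, yet 'x = x / a' never executes:
                                           # condition coverage can miss statements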

Volume Testing
• To determine whether the program can handle the required volumes of data.

Customer name

Number of characters: 1 (invalid) | 2 ... 64 (valid) | 65 (invalid)
Valid characters: A-Z, a-z, space, hyphen, apostrophe; any other character is invalid

Conditions      Valid Partitions   Invalid Partitions   Valid Boundaries   Invalid Boundaries
Customer name   2 to 64 chars      <2 chars             2 chars            1 char
                valid chars        >64 chars            64 chars           65 chars
                                   invalid chars                           0 chars

Account number

Number of digits: 5 (invalid) | 6 (valid) | 7 (invalid)
First character: valid if non-zero, invalid if zero

Conditions       Valid Partitions   Invalid Partitions   Valid Boundaries   Invalid Boundaries
Account number   6 digits           <6 digits            100000             5 digits
                 1st non-zero       >6 digits            999999             7 digits
                                    1st digit = 0                           0 digits
                                    non-digit

Loan amount
499 invalid
Conditions Loan amount Valid Partitions 500 - 9000

500 valid
Invalid Partitions <500 > 9000 0 non-numeric null

9000

9001 invalid

Valid Invalid Boundaries Boundaries 500 499 9000 9001
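• The Loan amount row, turned directly into executable checks; the validator is an assumed implementation of those partitions.

def valid_loan_amount(value):
    if not isinstance(value, (int, float)):   # catches non-numeric and null
        return False
    return 500 <= value <= 9000

assert all(valid_loan_amount(v) for v in (500, 9000))           # valid boundaries
assert not any(valid_loan_amount(v) for v in (499, 9001))       # invalid boundaries
assert not any(valid_loan_amount(v) for v in (0, None, "abc"))  # invalid partitions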

Equivalence Partitioning
• Naturally, we should have reason to believe that test cases are equivalent. Tests are often lumped into the same equivalence class when:
  – They involve the same input variables
  – They result in similar operations in the program
  – They affect the same output variables
  – None of them forces the program to do error handling, or all of them do

