Software Testing

Published on February 2017

Testing is a set of activities that can be planned in advance and conducted systematically to find errors. A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.

Verification and Validation-
Verification- Verification refers to the set of activities that ensure that software correctly implements a specified function, i.e. it refers to the code.
Validation- Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Testing provides the last bastion from which quality can be assessed and errors can be uncovered. Quality cannot be tested in only at the end; it must be incorporated at every step of the software process.

Stakeholders in the testing process- The software developer is always responsible for testing the individual units of the program, ensuring that each performs the function or exhibits the behavior for which it was designed. The developer also conducts integration testing- a testing step that leads to the construction of the complete software architecture. The role of the independent test group (ITG) is to remove the inherent problems associated with developers testing their own work. The developer and the ITG work closely throughout the software project to ensure that thorough tests will be conducted.

Software testing can also be viewed as a spiral that can be divided into the following phases-
a) Unit Testing- Unit testing begins at the vertex of the spiral and concentrates on each unit (i.e. component) of the software as implemented in source code.
b) Integration Testing- The focus is on design and construction of the software architecture.
c) Validation Testing- Requirements established as part of the software requirements analysis are validated against the software that has been constructed.
d) System Testing- The software and other system elements are tested as a whole.
e) Object-Oriented Testing- It involves testing the classes for data; the classes are then integrated into an object-oriented architecture, and a series of regression tests are run to uncover errors due to connections and collaborations between classes and side effects caused by the addition of new classes.

Details of Testing

Unit Testing- Unit testing focuses verification effort on the smallest unit of software design- the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The unit test focuses on the internal logic and data structures within the boundaries of the component. Local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. All independent paths through the control structure are exercised to ensure that all statements in the module have been executed at least once. Common errors found in unit testing include-
1) Incorrect arithmetic precedence
2) Mixed mode operations
3) Incorrect initialization
4) Precision inaccuracy
5) Boundary value failures
For testing a unit, a driver program must be developed. A driver is nothing but a main program that accepts test data, passes the data to the component and prints the relevant results. Stubs serve to replace modules that are subordinate to the component being tested.

Integration Testing-
Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design.
Types of Integration-
1. Incremental Testing- Incremental testing is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct.
2. Top-down Integration Testing- It is an incremental approach to construction of the software architecture. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. It is again of two types-
a) Depth-first integration- It integrates all components on a major control path of the program structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.
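The driver-and-stub arrangement described under unit testing can be sketched as follows. This is a minimal illustration; the component, the discount logic and all names are hypothetical, not taken from the text.

```python
# Hypothetical component under test (would normally live in its own module).
def apply_discount(price, rate_lookup):
    """Compute a discounted price; rate_lookup stands for a subordinate module."""
    rate = rate_lookup()
    return round(price * (1 - rate), 2)

# Stub: replaces the subordinate rate-lookup module with a canned answer.
def stub_rate_lookup():
    return 0.10

# Driver: the "main program" that accepts test data, passes it to the
# component, and prints the relevant results.
def driver(prices):
    results = [apply_discount(p, stub_rate_lookup) for p in prices]
    for p, r in zip(prices, results):
        print(p, "->", r)
    return results

driver([100.0, 80.0, 0.0])
```

The driver supplies the test data and reports results; the stub lets the component be exercised before the real subordinate module exists.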
b) Breadth-first integration- It incorporates all components directly subordinate at each level, moving across the structure horizontally.
3. Bottom-up Integration Testing- Bottom-up integration testing begins construction and testing with atomic modules (i.e. components at the lowest levels) in the program structure. Because components are integrated from the bottom up, the processing required for subordinate components is always available.
4. Regression Testing- Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur and new control logic is invoked. As functions are added they need to be retested to avoid side effects on the whole software. Regression testing may be done manually, by re-executing a subset of test cases, or may use an automated software tool.
5. Smoke Testing- It is a testing approach that is mainly used while software products are being developed. It can be applied to complex, time-critical software products. The following steps are involved-
a) All software components are integrated into a build. A build includes all data files, libraries, reusable modules and engineered components.
b) A series of tests are designed to expose errors that will keep the build from properly performing its functions.
c) The build is integrated with other builds and the entire product is smoke tested.
6. Sandwich Testing- Selection of an integration testing strategy depends on the software characteristics and the project schedule. A combined approach called sandwich testing uses top-down integration tests for the top levels and a bottom-up approach for the bottom levels. Integration testing must always identify critical modules, which have a high level of control, have definite performance requirements, are complex and are error-prone.
7. Object-Oriented Software Testing Strategies (Unit and Integration Testing)
i) Unit Testing- An encapsulated class is the focus of unit testing. Operations within the class are the smallest testable units. An operation should also be tested in each of the classes in which it is used.
ii) Integration Testing- There are three different approaches for integration testing of an OO system.
a. Thread-based testing- It integrates the set of classes required to respond to one input or event of the system.
b. Use-based testing- First the independent classes, called the server classes, are tested, followed by the dependent classes that use them.
c. Cluster testing- A cluster of collaborating classes is exercised by designing test cases that attempt to uncover errors in the collaboration.

Validation Testing (for system requirements)
Validation testing tests the software functions in a manner that can be reasonably expected by the customer. These expectations are defined in the software requirements specification, which contains the visible attributes of the software. A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases. Both the plan and the procedure are designed to ensure that all performance requirements are attained, documentation is correct and usability requirements are met. An important element of the validation process is a configuration review. The purpose of the review is to examine the software configuration and the necessary details that help in the support phase.
Alpha and beta testing-
Acceptance testing- When custom software is built for a customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. An acceptance test can range from an informal test drive to a planned and systematically executed series of tests.
Alpha testing- Alpha testing is conducted at the developer's site by end users.
Beta testing- Beta testing is conducted at the end user's site. Beta testing is a live application of the software in an environment that cannot be controlled by the developer.

System Testing-
It involves testing everything, including hardware, software, people and information. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform their allotted functions. It involves the following steps-
a) Recovery testing- Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. Automatic recovery, reinitialization, checkpointing mechanisms, data recovery and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.
b) Security testing- Security testing verifies that protection mechanisms built into the system will in fact protect it. The tester may attempt to acquire passwords through external means, may attack the system with custom software designed to break down any defences that may have been constructed, may overwhelm the system and so deny service to others, may purposely cause system errors, or may browse through insecure data.
c) Stress testing- Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume. For example, special tests may be designed that generate ten interrupts per second, input data rates may be increased, maximum memory utilization tests may be done, or excessive disk-reading tests may be done.
d) Performance testing- Performance testing is done to test the run-time performance of the software within the context of the integrated system. It is important to monitor execution intervals, log events and sample machine states on a regular basis.
e) Debugging- Debugging occurs as a consequence of successful testing. When a test case uncovers an error, debugging is the action that results in the removal of the error. The debugging process begins with the execution of a test case. Debugging strategies include brute force (using memory dumps and output statements), backtracking (step-wise tracing) and cause elimination (formulating a hypothesis and testing it).

Testing Tactics
Definition- Software testability is how easily a software program can be tested.
Characteristics for testing a system-
a) Operability- If a system is designed and implemented with quality in mind, relatively few bugs will block the execution of tests, allowing testing to progress without fits and starts.
b) Observability- Inputs provided as part of testing produce distinct outputs.
c) Controllability- The better we can control the software, the more the testing can be automated and optimized. Controlling the software and hardware variables is important.
d) Decomposability- The software system is built from independent modules that can be tested independently.
e) Simplicity- There should be functional simplicity, structural simplicity and code simplicity.
f) Stability- Changes to the software are infrequent, controlled when they occur and do not invalidate existing tests.
g) Understandability- The architectural design and the dependencies between external, internal and shared components are well understood.

Black Box and White Box Testing-
Black box testing examines some fundamental aspect of a system with little regard for the internal logical structure of the software. White box testing involves examination of the procedural details; logical paths through the software and collaborations between components are tested by providing test cases that exercise specific sets of conditions and/or loops.
1. White Box Testing- White box testing, sometimes called glass-box testing, is a test case design philosophy that uses the control structure described as part of component-level design to derive test cases. Using the white box testing method, the software engineer can derive test cases that cover all independent paths, exercise logical decisions, execute loops and use internal data structures.
White box testing methodologies-
a) Basis path testing- Basis path testing is a testing technique proposed by McCabe. For basis path testing we use-
Flow graph- It depicts logical control flow. We can derive a flow graph from a flow chart. Each statement is a node and control structures determine the flow paths.
b) Independent path- An independent path is any path through the program that introduces at least one new set of processing statements. It can be derived from a flow graph. For example-
Path 1 : 1-11
Path 2 : 1-2-3-4-5-10-11
c) Cyclomatic complexity- It is a software metric that provides a quantitative measure of the logical complexity of a program. Cyclomatic complexity defines the number of independent paths in the basis set of a program and provides an upper bound on the number of tests that must be conducted. It can be computed as-
V(G) = number of regions of the flow graph, or
V(G) = E - N + 2, where E is the number of edges and N is the number of nodes
d) Deriving test cases- It involves steps like- drawing a flow graph from the flow chart, determining the cyclomatic complexity, determining a set of linearly independent paths and preparing test cases that force execution of each path.
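The V(G) = E - N + 2 computation above can be sketched directly from an edge list. The flow graph below (an if-else inside a loop) is a made-up example, not the graph behind the sample paths in the text.

```python
# Cyclomatic complexity from a flow graph: V(G) = E - N + 2.
def cyclomatic_complexity(edges):
    nodes = {n for e in edges for n in e}   # every node mentioned by an edge
    return len(edges) - len(nodes) + 2

# A small flow graph: an if-else (nodes 3 -> 4 or 5) inside a loop (6 -> 2).
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 6), (5, 6), (6, 2), (2, 7)]
print(cyclomatic_complexity(edges))  # E=8, N=7 -> 8 - 7 + 2 = 3
```

The result, 3, is also the number of linearly independent paths the basis set must contain.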
e) Graph metrics- To prepare a software tool we can store the nodes of the flow graph and compute the independent paths to derive test cases.
f) Control structure testing- It involves condition testing, in which test cases are designed to check the logical conditions. It involves data flow testing, a method that selects test paths of a program according to the locations of definitions and uses of variables. A definition-use (DU) chain of a variable X is [X, S, S'], where the definition of variable X in statement S is live at statement S', i.e. there exists a path from S to S'. All definition-use chains should be covered at least once. It also involves loop testing, which focuses exclusively on loop constructs, covering different types of loops- simple loops, nested loops, concatenated loops and unstructured loops.

2. Black Box Testing-
Black box testing, also called behavioral testing, focuses on the functional requirements of the software. Black box testing attempts to find errors in the following categories- incorrect or missing functions, interface errors, errors in data structures, performance or behavior errors, and initialization or termination errors.
a) Graph-based testing methods- A graph is created which is a collection of nodes that represent objects, links that represent relations between objects, node weights that describe the properties of a node, and link weights that describe some characteristics of a link. A link can be a directed link, symmetric link, bidirectional link or parallel link. Objects to be tested can be a document window, document text, etc.
b) Transaction flow modeling- It involves finite state modeling, data flow modeling and timing modeling.
c) Equivalence partitioning- Equivalence partitioning is a black box testing technique which divides the input domain into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers a class of errors, thereby reducing the total number of test cases that must be developed.
d) Boundary value analysis- A great number of errors occur at the boundaries of the input domain. BVA leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning because it leads to the selection of test cases at the edges of a class.
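Equivalence partitioning and boundary value analysis (items c and d above) can be sketched together. The input domain here is hypothetical: an age field valid for 18 to 65 inclusive.

```python
# Hypothetical input domain: an age field valid in the range 18..65 inclusive.
LOW, HIGH = 18, 65

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence partitioning: one representative per class (below, inside, above).
partitions = [10, 40, 90]
# BVA: test at the edges of the valid class and just outside them.
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

print([is_valid_age(a) for a in partitions])   # [False, True, False]
print([is_valid_age(a) for a in boundaries])   # [False, True, True, True, True, False]
```

Three partition representatives stand in for the whole domain, and the six boundary cases concentrate effort where errors cluster.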
f) Orthogonal array testing- Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing. If there are three inputs X, Y and Z, instead of taking one input at a time it is possible to consider all the different permutations, i.e. 3^3 = 27. It is like testing all sides of a cube.
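The 3-input example can be sketched as follows: itertools.product enumerates all 3^3 = 27 combinations, while a 9-row orthogonal array still covers every pair of factor levels. The array is built here with the Latin-square rule z = (x + y) mod 3, a standard construction used purely for illustration.

```python
from itertools import product

# Exhaustive testing of three inputs with three levels each: 3^3 = 27 cases.
levels = [0, 1, 2]
exhaustive = list(product(levels, repeat=3))
print(len(exhaustive))  # 27

# An L9 orthogonal array covers every PAIR of factor levels in only 9 runs.
l9 = [(x, y, (x + y) % 3) for x in levels for y in levels]
print(len(l9))  # 9

# Every (x, z) pair and every (y, z) pair appears exactly once:
print(len({(x, z) for x, _, z in l9}))  # 9
```

Nine runs instead of twenty-seven, with no pair of levels left untested.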

3. Object-Oriented Testing Methods-
Object-oriented testing is used for testing object-oriented software. It involves the following methods-
a) Fault-based testing- The objective of fault-based testing within an OO system is to design tests that have a high likelihood of uncovering plausible faults. To determine whether faults exist, test cases are designed to exercise the design or code.
b) Integration testing- Integration testing looks for plausible faults in operation calls or message connections. Integration testing applies to attributes as well as operations. Testing should exercise the attributes to determine whether proper values occur for distinct types of object behaviour.
c) Testing class hierarchy- Tests have to be done for both the base and the derived classes.
d) Scenario-based testing- Scenario-based testing concentrates on what the user does. This means capturing the tasks that the user has to perform, then applying them and their variants as tests. Scenarios uncover interaction errors, but to accomplish this the test cases must be more complex and more realistic than fault-based tests.
e) Testing surface and deep structure- Surface structure refers to the externally observable structure of an OO program, that is, the structure that is visible. Deep structure refers to the internal technical details of an OO program, that is, the structure that is understood by examining the design and the code.
f) Testing at class level-
Random testing- If a class contains a number of operations, a variety of operation sequences may be generated randomly. E.g. for a banking application-
Test case r1 : open.setup.deposit.close
Partition testing- Partition testing reduces the number of test cases required to exercise the class by partitioning the input domain. Inputs and outputs are categorized and test cases are designed for each category.
g) Inter-class test case design- Test case design becomes more complicated as integration of the object-oriented system begins. It is at this stage that testing of the collaboration between classes must begin.
Multiple-class testing- For each client class, use the list of class operations to generate a series of random test sequences. The operations will send messages to other server classes. For each message that is generated, determine the collaborating class and the corresponding operations in the server class. For each operation in the server object, determine the messages that it transmits.
For each of the messages, determine the next level of operations that are invoked, and incorporate these into the test sequence.
h) Tests from behavior models- The state diagram for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of a class and those classes that collaborate with it. The tests designed should achieve all-state coverage.

4. Testing for Specialized Environments- Unique guidelines and approaches are required to test different environments.
a) Testing GUIs- Finite state modeling graphs may be used to derive a series of tests that address specific data and program objects involving the GUI.
b) Testing client-server architectures- Testing client-server software occurs at three levels-
i) Individual client applications are tested in a disconnected mode.
ii) Client applications and server software are tested together.
iii) The complete client/server architecture, including network operations and performance, is tested.
Other tests include application function tests, server tests, database tests, transaction tests and network connection tests.
c) Testing documentation and help facilities- Documentation testing can be applied in two steps- i) Review and inspection examines the document for editorial clarity. ii) Live test uses the documentation in conjunction with the actual program.
d) Testing for real-time systems- Comprehensive test case design methods for real-time systems continue to evolve. The steps involved are-
i) Task testing tests each task independently.
ii) Behavioral testing simulates the behavior of the real-time system and examines its behavior as a consequence of external events.
iii) Inter-task testing tests asynchronous tasks that are known to communicate with one another, testing them with different data rates and processing loads to determine if inter-task synchronization errors occur.
iv) Software and hardware are integrated and a full range of system tests is conducted in an attempt to uncover errors at the software/hardware interface.

Product Metrics
1. Software Quality- Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. A few important factors should be kept in mind-
a) Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
b) Specified standards define a set of development criteria that guide the manner in which software is engineered.
c) There is a set of implicit requirements that often goes unspecified, e.g. ease of use.
2. McCall's Quality Factors-
a) Correctness- The extent to which a program satisfies its specification and fulfills the customer's mission objectives.
b) Reliability- The extent to which a program can be expected to perform its intended function.
c) Efficiency- The amount of computing resources and code required by a program to perform its functions.
d) Integrity- The extent to which access to software or data by unauthorized persons can be controlled.
e) Usability- The effort required to learn, operate, prepare input for and interpret output of a program.
f) Maintainability- The effort required to locate and fix an error in a program.
g) Flexibility- The effort required to modify an operational program.
h) Testability- The effort required to test a program to ensure that it performs its intended function.
i) Portability- The effort required to transfer a program from one hardware/software environment to another.
j) Reusability- The extent to which a program can be reused in another application.
k) Interoperability- The effort required to couple one system to another.
3. Quantitative View-
a) Measure- A measure provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process.
b) Metric- A quantitative measure of the degree to which a system, component or process possesses a given attribute.
c) Indicator- An indicator is a metric that provides insight into the software process, the software project or the product itself.
d) Characteristics- Metrics can be characterized by five activities-
i. Formulation- The derivation of software measures and metrics appropriate for the representation of the software being considered.
ii. Collection- The mechanism used to accumulate the data required to derive the formulated metrics.
iii. Analysis- The computation of metrics and the application of mathematical tools.
iv. Interpretation- The evaluation of metrics in an effort to gain insight into quality.
v. Fault detection- Detecting faults using the developed metrics.
4. Goal-Oriented Software Measurement- GQM emphasizes the need to-
a) Establish an explicit measurement goal.
b) Define a set of questions to be answered to achieve the goal.
c) Identify the metrics that help to achieve the goal.
5. Attributes of Effective Software Metrics-
a) Simple and computable
b) Empirically and intuitively persuasive
c) Consistent and objective
d) Consistent in the use of units and dimensions
e) Programming-language independent
f) An effective mechanism for high-quality feedback
6. Metrics for the Analysis Model-
a) Function point metric- It is used for measuring the functionality delivered by a system. The metric is used to-
i) Estimate the cost or effort for design and testing
ii) Predict the number of errors
iii) Forecast the number of components
It depends on-
i) Number of external inputs (EI)
ii) Number of external outputs (EO)
iii) Number of external inquiries (EQ)
iv) Number of internal logical files (ILF)
v) Number of external interface files (EIF)
FP = count total * [0.65 + 0.01 * Σ Fi]
where the Fi are value adjustment factors based on other system characteristics.
b) Metrics for specification quality- The analysis model can also be used to assess completeness, correctness, understandability, verifiability, internal and external consistency, achievability, conciseness, traceability, modifiability, precision and reusability. With
Nu = Nf + Nnf
where Nu = total number of requirements, Nf = number of functional requirements and Nnf = number of non-functional requirements-
Specificity of requirements- Q1 = Nui / Nu, where Nui = number of requirements for which all reviewers had a common (unambiguous) interpretation.
Completeness- Q2 = Nu / (Ni * Ns), where Nu = number of unique functional requirements, Ni = number of input stimuli and Ns = number of state specifications.
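The function point formula above can be sketched as follows. The counts and Fi ratings are made-up illustration values; the per-item weights are the standard average weighting values, which the text does not list.

```python
# Standard average weights for the five information-domain counts (assumption:
# average complexity; low/high weightings would differ).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, value_adjustment_factors):
    # count total = sum of each information-domain count times its weight
    count_total = sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)
    # FP = count total * [0.65 + 0.01 * sum(Fi)]
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

counts = {"EI": 24, "EO": 16, "EQ": 22, "ILF": 4, "EIF": 2}   # made-up counts
fi = [3] * 14                       # fourteen Fi factors, each rated "average"
print(function_points(counts, fi))  # count total = 318, FP = 318 * 1.07
```

With all fourteen Fi rated 3, the adjustment multiplier is 0.65 + 0.42 = 1.07, so FP ≈ 340.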
7. Metric Metricss for design design model modela) Architectu Architectural ral design design metricsmetrics- These These are also called called black black box metrics metrics and These do not require knowledge of any software component. The design metrics arei) St Stru ruct ctur ural al compl complex exit ityyS(i)= f out(i)^2 Fout= No. of outputs ii) Data complexityD(i) = V(i) / [ fout(i) + 1 ] V(i)= No. of input and output variables iii) System complexityC(i)=S(i)+D(i) S(i)= Structural complexity D(i)= Data Complexity  b) Metri Metrics cs based based on tree tree diagram diagram of module module structur structureeSize= n + a n= no of nodes a= no of arcs arc to node ratio= a/n Another form of metric is S1= total no of modules S2= no of modules whose correct function depends on source of  Data input S3= no of modules whose correct function depends on prior  Processing S4= no of database items S5= total no of unique data items S6= no of database segments S7= no of modules with single entry and single exit Program structure D1=1 (OOPS) D1=0 (Structural) Module independence D2= 1- S2/S1 Modules not dependent on processing D3= 1- (S2/S1) Database Size D4= 1- S5/ S4 Database compartmentalization D5= 1- S6/ S4 Module entry entry and exit exit characteristics DSQ = Σ wi Di c) Metri Metrics cs for for object object orie oriente nted d design design-Size, Complexity, Coupling, Sufficiency, Completeness, Cohesion, Primitiveness, Similarity, Similarity, Volatillity 8. Class Class orien oriented ted m metr etrics icsa) Ck metri metrics cs suite suite ( Chidam Chidamber ber and and Kemerer Kemerer))i) Weight Weighted ed method methodss per class class (WML (WML))- Assum Assumee that n meth methods ods Of complexity C1, C2,……..Cn are defined for class C. The specific complexity metric that is chosen should be normalized so that nominal complexity for a method takes on a value of 1.0. WMC= Σ Ci

 

For i = 1 to n
ii) Depth of inheritance tree (DIT)- This metric is the maximum length from a node to the root of the tree. The greater the depth of the class hierarchy, the greater the complexity.
iii) Number of children (NOC)- The subclasses that are immediately subordinate to a class are considered its children. As NOC increases, complexity increases.
iv) Coupling between object classes (CBO)- CBO is the number of collaborations listed for a class on its CRC index card. As CBO increases, the reusability of a class decreases and its complexity increases.
v) Response for a class (RFC)- It is the set of methods that can potentially be executed in response to a message received by an object of that class. As RFC increases, so does design complexity.
vi) Lack of cohesion in methods (LCOM)- LCOM is the number of methods that access one or more of the same attributes. If LCOM is high, so is coupling. Cohesion should be high, whereas LCOM should be low.
b) MOOD metrics suite- Metrics for OO design that are quantitative in nature.
i) Method inheritance factor (MIF)- It measures the degree to which inheritance is used.
MIF = Σ Mi(Cj) / Σ Ma(Cj), summed over all classes Cj
Ma(Cj) = Md(Cj) + Mi(Cj)
Ma(Cj) = the number of methods that can be invoked in association with Cj
Md(Cj) = the number of methods declared in class Cj
Mi(Cj) = the number of methods inherited in Cj
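As an illustrative sketch, the WMC and MIF formulas above can be computed as follows. The class data and function names here are invented for illustration, not taken from any metrics library:

```python
# Sketch of WMC (CK suite) and MIF (MOOD suite) from above.
# The class data and function names are invented for illustration.

def wmc(method_complexities):
    """Weighted methods per class: WMC = sum of Ci over the class's
    n methods (nominal method complexity normalized to 1.0)."""
    return sum(method_complexities)

def mif(classes):
    """Method inheritance factor: sum of inherited methods Mi over
    the sum of available methods Ma = Md + Mi, across all classes."""
    inherited = sum(c["Mi"] for c in classes)
    available = sum(c["Md"] + c["Mi"] for c in classes)
    return inherited / available

print(wmc([1.0, 1.0, 2.0, 3.0]))  # 7.0

# Base declares 4 methods; Derived declares 2 and inherits all 4.
classes = [{"Md": 4, "Mi": 0}, {"Md": 2, "Mi": 4}]
print(mif(classes))               # 0.4
```

A higher WMC flags a class that concentrates complexity; MIF near 0 means inheritance is barely used, MIF near 1 means almost everything is inherited.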
ii) Coupling factor (CF)- Coupling can be defined in the following way-
CF = Σi Σj is_client(Ci, Cj) / ( Tc^2 - Tc ), for i = 1 to Tc and j = 1 to Tc
is_client = 1 if a client-supplier relationship exists between class Ci and class Cj
          = 0 otherwise
Tc = total no. of classes
c) OO metrics by Lorenz and Kidd-
i) Class size (CS)- the no. of operations and the no. of attributes (NOA).
ii) Component-level design metrics- Cohesion involves five concepts and measures:
data slice (data values), data tokens (variables), glue tokens (tokens attached to one or more data slices), superglue tokens (tokens attached to every data slice), stickiness (no. of data slices bound by a glue token)
iii) Coupling metrics- Dhama proposed a metric for module coupling that encompasses data and control flow coupling, global coupling, and environmental coupling. The formula is-
M = Di + ( a * Ci ) + Do + ( b * Co ) + Gd + ( c * Gc ) + W + R

 

Di = no. of input data parameters
Ci = no. of input control parameters
Do = no. of output data parameters
Co = no. of output control parameters
Gd = no. of global variables used as data
Gc = no. of global variables used as control
W = no. of modules called (fan-out)
R = no. of modules calling the module under consideration (fan-in)
d) Complexity metrics- The most important complexity metric is cyclomatic complexity. Operation-oriented metrics proposed are-
i) Average operation size (OSavg)- Although lines of code could be used as operation size, the LOC measure suffers from a set of problems.
ii) Operation complexity (OC)- The complexity of an operation can be computed using any of the complexity metrics applied to conventional software.
iii) Average no. of parameters per operation (NPavg)- The larger the number of operation parameters, the more

complex the collaboration between objects.
iv) User interface design metrics- A typical GUI uses layout entities- graphic icons, text, menus, windows, etc. The metrics are the time required to achieve a specific operation, the no. of operations required, and the no. of data or content objects.
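Dhama's module-coupling formula above can be sketched as follows. The weights a = b = c = 2 and the final indicator mc = k/M (with k = 1) follow a common presentation of the metric and should be treated as assumptions here; all the counts are invented:

```python
# Sketch of Dhama's module-coupling formula from above.
# Weights a = b = c = 2 and the indicator mc = k/M (k = 1) are a
# common convention, assumed here; all counts are invented.

def dhama_m(di, ci, do, co, gd, gc, w, r, a=2, b=2, c=2):
    """M = Di + a*Ci + Do + b*Co + Gd + c*Gc + W + R"""
    return di + a * ci + do + b * co + gd + c * gc + w + r

def module_coupling(m, k=1):
    """mc = k / M: a larger M (more connections) gives a smaller
    value, i.e. heavier coupling."""
    return k / m

M = dhama_m(di=3, ci=1, do=2, co=1, gd=0, gc=0, w=1, r=1)
print(M)                             # 11
print(round(module_coupling(M), 3)) # 0.091
```

Control parameters and global control variables are weighted more heavily than data, reflecting that control coupling is harder to trace than data coupling.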

9. Metrics for source code-
Metrics for source code include Halstead's metrics-
Program length: N = n1 log2 n1 + n2 log2 n2
n1 = no. of distinct operators
n2 = no. of distinct operands
N1 = total no. of operator occurrences
N2 = total no. of operand occurrences
Program volume: V = N log2 ( n1 + n2 )
10. Metrics for testing-
Program level: PL = 1 / [ ( n1 / 2 ) * ( N2 / n2 ) ]
Effort: e = V / PL
Metrics for object-oriented testing include- lack of cohesion in methods (LCOM), percentage of public and private variables (PAP), public access to data members (PAD), no. of root classes (NOR), fan-in (FIN), no. of children (NOC), and depth of inheritance tree (DIT)
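A minimal sketch of the source-code and testing metrics above, using invented operator/operand counts for a small program:

```python
# Sketch of the Halstead-style metrics above, with invented counts.
from math import log2

n1, n2 = 10, 16      # no. of distinct operators / distinct operands
N1, N2 = 50, 70      # total operator / operand occurrences

N = n1 * log2(n1) + n2 * log2(n2)   # program length
V = N * log2(n1 + n2)               # program volume
PL = 1 / ((n1 / 2) * (N2 / n2))     # program level
E = V / PL                          # effort

print(round(N, 1))     # 97.2
print(round(PL, 4))    # 0.0457
print(round(V, 1), round(E, 1))
```

A lower PL (more operators relative to operands reused) drives effort up, since e = V/PL grows as PL shrinks.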
