What is Software


What is Software Testing? And what is the difference between Testing and QA?

What is Software?
A set of programs that takes inputs, processes them, and provides outputs.

Software Application vs. Product?
Software developed for the specific requirements of one customer is called an application or project. Software developed for the overall requirements of the market is called a software product.

What is Software Testing?
Testing is the process of executing test cases with the intent of finding bugs in software. Alternatively: software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test.

Difference between software testing and Quality Assurance (QA)?
QA is the process of verifying or determining whether a product or service meets or exceeds customer expectations.

SDLC (Software Development Life Cycle) Models
After accepting a software proposal, the project manager should choose one of the available SDLC models to follow in the development cycle. There are five SDLC models in common use:

1. Waterfall Model
2. Prototype Model
3. Incremental Model
4. Spiral Model
5. V-Model

Waterfall Model: Used when the customer requirements are clear and complete.
Requirements Gathering —> Analysis & Planning —> Design —> Coding —> Testing —> Release & Maintenance

Prototype Model: Used when the customer requirements are unclear and confusing.

Incremental Model: Used when the customer requirements are clear but not complete, because the client is giving requirements on an installment basis.

Spiral Model: Used when the customer requirements are clear and complete but will be enhanced in the future.

Note: In the above four SDLC models, testing appears as a single stage, and that stage is conducted by the same developers. For this reason, organizations now concentrate on multiple stages of testing and separate testing teams in order to develop quality software.

V-Model: V stands for Verification and Validation.
• It is a model recognized by organizations.
• This model defines the mapping between multiple stages of development and multiple stages of testing.



To decrease project cost, organizations maintain a separate testing team only for system testing, because the system testing stage works as the bottleneck stage in software development.

What is a Test Plan? What Can It Contain?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

1. Title
2. Identification of software including version/release numbers
3. Revision history of document including authors, dates, approvals
4. Table of Contents
5. Purpose of document, intended audience
6. Objective of testing effort
7. Software product overview
8. Relevant related document list, such as requirements, design documents, other test plans, etc.
9. Relevant standards or legal requirements
10. Traceability requirements
11. Relevant naming conventions and identifier conventions
12. Overall software project organization and personnel/contact-info/responsibilities
13. Test organization and personnel/contact-info/responsibilities
14. Assumptions and dependencies
15. Project risk analysis
16. Testing priorities and focus
17. Scope and limitations of testing
18. Test outline – a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
19. Outline of data input equivalence classes, boundary value analysis, error classes
20. Test environment – hardware, operating systems, other required software, data configurations, interfaces to other systems
21. Test environment validity analysis – differences between the test and production systems and their impact on test validity
22. Test environment setup and configuration issues
23. Software migration processes
24. Software CM processes
25. Test data setup requirements
26. Database setup requirements
27. Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
28. Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
29. Test automation – justification and overview
30. Test tools to be used, including versions, patches, etc.
31. Test script/test code maintenance processes and version control
32. Problem tracking and resolution – tools and processes
33. Project test metrics to be used
34. Reporting requirements and testing deliverables
35. Software entrance and exit criteria
36. Initial sanity testing period and criteria
37. Test suspension and restart criteria
38. Personnel allocation
39. Personnel pre-training needs
40. Test site/location
41. Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
42. Relevant proprietary, classified, security, and licensing issues
43. Open issues
44. Appendix – glossary, acronyms, etc.

Not all of these are compulsory in a test plan, but all of them can appear in one. What to include and what to exclude depends on the organization and the process it follows.

What is Unit Testing?
Unit testing is the first testing stage in software development. Validating each and every line of code in a program is called unit testing. All the individual programs in the software should undergo unit testing, to ensure that each single program works correctly on its own. White box testing techniques are used in the unit testing phase. There are four techniques in white box testing:

1. Basic Path Coverage
2. Control Structure Coverage
3. Program Technique Coverage
4. Mutation Coverage
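As a concrete illustration (not from the original text), here is a minimal sketch of a unit test in Python using the standard unittest module; the add function and its expected behaviour are invented for the example:

```python
import unittest

def add(a, b):
    """Hypothetical unit under test: returns the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # Verify the normal case works independently of any other code.
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        # Unit tests should also cover less obvious input combinations.
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```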

White Box Testing Techniques
There are four testing techniques in white box testing:

1. Basic Path Coverage
2. Control Structure Coverage
3. Program Technique Coverage
4. Mutation Coverage

Basic Path Coverage: This is a program testing technique. Using it, the programmer validates the execution of the program without any runtime errors. In this validation, the programmer runs the program more than once to cover every area of the program. The number of times the programmer must run the program to cover the whole program is called its Cyclomatic Complexity. For example, to execute an if-else program you need to run it two times to cover each and every line, so the Cyclomatic Complexity of that program is 2.

Control Structure Coverage: After completing basic path coverage, the programmer validates the program's correctness in terms of inputs, processing, and outputs.

Program Technique Coverage: After completing control structure coverage, the programmer measures the execution speed of the program. If the execution speed is not acceptable, the programmer changes the program's structure without disturbing its functionality. For example, swapping two numbers can be done using three variables or using two variables.

Mutation Coverage: Mutation means a change. Programmers make changes in a tested program to estimate the correctness and completeness of that program's testing.

Note: White box testing techniques are also known as clear box or open box testing techniques.
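A small illustrative Python sketch of both ideas (the functions are invented for the example): the if-else function below has Cyclomatic Complexity 2 (by the standard formula, decisions + 1), so two runs cover both basic paths, and the two swap variants show a program-technique restructuring that preserves functionality:

```python
def classify(n):
    # One if/else decision => Cyclomatic Complexity = 1 + 1 = 2.
    if n >= 0:
        return "non-negative"   # path 1
    else:
        return "negative"       # path 2

# Two runs are enough to cover both basic paths.
assert classify(5) == "non-negative"
assert classify(-5) == "negative"

def swap_three_vars(a, b):
    # Classic swap using a third, temporary variable.
    temp = a
    a = b
    b = temp
    return a, b

def swap_two_vars(a, b):
    # Restructured swap without the extra variable; same functionality.
    a = a + b
    b = a - b
    a = a - b
    return a, b

assert swap_three_vars(1, 2) == swap_two_vars(1, 2) == (2, 1)
```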

Software Integration and Integration Testing Techniques
After writing the related programs and unit testing them, the programmers interconnect those programs to form the software. There are four approaches to integrating programs:

1. Top-down Approach
2. Bottom-up Approach
3. Hybrid Approach
4. System Approach

Top-down Approach: In this approach the programmers interconnect the main program and some sub-programs. In the place of the remaining under-construction sub-programs, the programmers use temporary programs called Stubs.

Bottom-up Approach: In this approach the programmers interconnect sub-programs without the main program. In the place of the under-construction main program, the programmers use a temporary program called a Driver.

Hybrid Approach: This is a combination of the top-down and bottom-up approaches, also known as the sandwich approach. In the place of the under-construction sub-programs and main program, the programmers use the temporary stub and driver programs. A sketch of a stub and a driver follows below.
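To make stubs and drivers concrete, here is a small illustrative Python sketch (the function names and values are invented for the example, not taken from the original text):

```python
# Top-down integration: the real main program calls a STUB that stands in
# for a sub-program (here, a tax calculator) still under construction.
def tax_stub(amount):
    # Temporary stand-in: returns a fixed, known value instead of real logic.
    return 0.10 * amount

def main_program(price):
    # Real main logic under test, wired to the stub for now.
    return price + tax_stub(price)

# Bottom-up integration: a DRIVER stands in for the missing main program
# and exercises a finished sub-program directly.
def real_discount(price):
    # Finished sub-program being integrated.
    return price * 0.9

def driver():
    # Temporary caller that feeds test inputs to the sub-program.
    assert real_discount(100) == 90.0
    print("bottom-up driver: discount sub-program OK")

if __name__ == "__main__":
    assert main_program(100) == 110.0
    driver()
```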

System Approach: In this approach the programmers integrate the programs only after 100% of the coding is complete. This approach is also called the Big Bang Approach.

What is System Testing?
After completion of software integration and integration testing, the development team releases a software build to the test engineering team. The testing team conducts system testing on that software in two sub-levels:

1. Functional Testing
2. Non-Functional Testing

Functional testing concentrates on customer requirements, and non-functional testing concentrates on customer expectations.

Functional Testing: This is a mandatory testing level. During this test the testing team validates the functionality of a software build in terms of the factors below, with respect to customer requirements.

1. Behavioral / GUI: The changes in the properties of objects or controls in the software.
2. Input Domain: Whether the objects take the correct type and size of inputs or not.
3. Error Handling: Whether the software prevents wrong operations or not.
4. Manipulations: Whether the software generates correct output or not.
5. Database Validity: Whether the software's front-end screen operations correctly impact the software's database or not.
6. Sanitation: Finding extra operations in the software with respect to customer requirements.

Checking the above factors on a software build is called functional testing. During this checking the testers use black box (also called closed box) testing techniques.

Non-Functional Testing: After successful completion of functional testing, the testing team concentrates on non-functional testing. During non-functional testing, the testing team concentrates on customer expectations, i.e. software characteristics. Non-functional testing is classified into the sub-topics below.

1. Usability Testing (UI check)
2. Manual Check (help documents testing)
3. Compatibility Testing or Portability Testing
4. Configuration Testing
5. Intersystem Testing
6. Multilanguage Testing
7. Data Volume Testing
8. Installation Testing
9. Performance Testing
10. Load Testing
11. Stress Testing
12. Endurance Testing
13. Security Testing
14. Parallel Testing
15. User Acceptance Testing (UAT)
16. Software Release and Release Testing

Black Box Testing Techniques
System testing uses black box testing techniques. There are four techniques in black box testing:

1. Boundary Value Analysis (BVA)
2. Equivalence Class Partitions (ECP)
3. Decision Tables (DT)
4. State Transition Diagrams (STD)

Boundary Value Analysis (BVA): BVA takes care of size in software testing. For example, when testing a password field where the password should be a minimum of 4 characters and a maximum of 8 characters, the BVA values are:

• Min = 4
• Min - 1 = 3
• Min + 1 = 5
• Max = 8
• Max - 1 = 7
• Max + 1 = 9
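A minimal sketch (illustrative, not from the original) of how these six boundary values could drive a test in Python; the is_valid_password length rule matches the 4-8 character example above:

```python
def is_valid_password(pw):
    # Hypothetical rule under test: length must be between 4 and 8 chars.
    return 4 <= len(pw) <= 8

MIN, MAX = 4, 8
# The six classic BVA lengths: min, min-1, min+1, max, max-1, max+1.
boundary_cases = {
    MIN: True,      # min     -> accepted
    MIN - 1: False, # min - 1 -> rejected
    MIN + 1: True,  # min + 1 -> accepted
    MAX: True,      # max     -> accepted
    MAX - 1: True,  # max - 1 -> accepted
    MAX + 1: False, # max + 1 -> rejected
}

for length, expected in boundary_cases.items():
    assert is_valid_password("x" * length) == expected, length
print("all boundary value cases passed")
```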

Equivalence Class Partitions (ECP): ECP takes care of type in software testing. For example, when testing a username field, the username should be a combination of capital letters and numbers.

Note: If a test is related to a field, testers use the BVA and ECP techniques, because by the principles of testing exhaustive testing is impossible.

Decision Tables (DT): If a test is related to an operation with alternative expectations, testers use the DT technique. For example: if the login username and password are correct, the next window should appear; if either the username or the password is wrong, an error window should open. So here we have two alternatives: the next window and the error window.

State Transition Diagrams (STD): If a test is related to an operation with no alternative expectations, testers use state transition diagrams.
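To make the decision-table idea concrete, here is a small illustrative sketch (the login function and credentials are invented for the example):

```python
def login(username, password):
    # Hypothetical operation under test with two alternative outcomes.
    if username == "admin" and password == "secret":
        return "next window"
    return "error window"

# Decision table: (username correct?, password correct?, expected outcome).
decision_table = [
    (True,  True,  "next window"),
    (True,  False, "error window"),
    (False, True,  "error window"),
    (False, False, "error window"),
]

for user_ok, pass_ok, expected in decision_table:
    username = "admin" if user_ok else "wrong"
    password = "secret" if pass_ok else "wrong"
    assert login(username, password) == expected
print("all decision table rows passed")
```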

How Many Software Testing Techniques Are There?
In total there are three types of testing techniques:

1. Review Techniques
2. White Box Techniques
3. Black Box Techniques

Note:

• The combination of white box and black box techniques is called gray box, validation, or dynamic testing techniques.
• Review techniques are also called verification techniques or static testing techniques.

Review Techniques: The techniques used to verify the documents that will be useful in testing are called review techniques (also verification techniques or static testing techniques). There are three techniques in this group:

1. Walkthroughs
2. Inspections
3. Peer Reviews

White Box Techniques: There are four types:

1. Basic Path Coverage
2. Control Structure Coverage
3. Program Technique Coverage
4. Mutation Coverage

Black Box Techniques: There are four types as well:

1. Boundary Value Analysis (BVA)
2. Equivalence Class Partitions (ECP)
3. Decision Tables (DT)
4. State Transition Diagrams (STD)

What is the Difference Between Testing, QA, and QC?
Testing: The process of executing a system with the intent of finding defects, including test planning prior to the execution of the test cases.

Quality Control: A set of activities designed to evaluate a developed working product.

Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate for a system to meet its objectives.

The key difference to remember is that QA is interested in the process, whereas testing and quality control are interested in the product. Having a testing component in your development process demonstrates a higher degree of quality (as in QA).

Difference between Re-testing and Regression Testing?
This is a very popular interview question, and the candidate should be very careful answering it.

• Re-testing: Testing only a certain part of an application again, without considering how the change affects other parts or the whole application.
• Regression Testing: Testing the application after a change in a module or part of the application, to check whether the code change affects the rest of the application.

What are a Test Bed and Test Data?
A test bed is an execution environment configured for software testing. It consists of specific hardware, network topology, operating system, the configuration of the product under test, system software, and other applications. The test plan for a project should be developed from the test beds to be used.

Test data is data that is run through a computer program to test the software. Test data can be used to test compliance with the effective controls in the software.

What is AUT?
AUT is nothing but "Application Under Test". After the design and coding phases of the software development life cycle, the application comes in for testing; at that point it is referred to as the Application Under Test.

What is Globalization Testing?
Globalization Testing: The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings, using every type of international input possible. Proper functionality of the product assumes both a stable component that works according to design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.

What is Localization Testing?
Localization Testing: Localization is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. This process involves translating all native-language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers, and engineers, to a complex process requiring a localization project manager directing a team of a hundred specialists. Localization is usually done using some combination of in-house resources, independent contractors, and the full-scope services of a localization company.

What is a Traceability Matrix?
Simple definitions for testing:

1. A traceability matrix is a document that defines the mapping between customer requirements and the prepared test cases.
2. It is a document which maps requirements with test cases.

By preparing a traceability matrix, we can ensure that we have covered all functionalities in our test cases. A sample template is shown after the definition below.

A more formal definition (from Wikipedia): A traceability matrix is a table that correlates any two baseline documents that require a many-to-many relationship, to determine the completeness of the relationship. It is often used to map high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, test plan, and test cases. Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists, and it must be determined whether one should be made. Large values imply that the item is too complex and should be simplified.
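An illustrative sample template (the requirement and test case IDs are invented for the example; the original document's template image was not preserved):

    Requirement    TC-01   TC-02   TC-03   TC-04
    REQ-001          X               X
    REQ-002                  X
    REQ-003          X                       X

Each X marks a test case that covers the requirement; a row with no X reveals an untested requirement.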

30 Software Testing Types
There are many testing types, used in different situations by test engineers.

1. Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.
2. White box testing – This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.
3. Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.
4. Incremental integration testing – Bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.
5. Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
6. Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. Black-box-type testing geared to the functional requirements of an application.

7. System testing – The entire system is tested as per the requirements. Black-box-type testing that is based on overall requirements specifications and covers all combined parts of a system.
8. End-to-end testing – Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
9. Sanity testing – Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application is crashing on initial use, the system is not stable enough for further testing, and the build is sent back to be fixed.
10. Regression testing – Testing the application as a whole after the modification of any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.
11. Acceptance testing – Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.
12. Load testing – A performance test to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads, to determine at what point the system's response time degrades or fails.
13. Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting data beyond storage capacity, running complex database queries, or giving continuous input to the system or database.
14. Performance testing – A term often used interchangeably with 'stress' and 'load' testing. Checks whether the system meets performance requirements. Different performance and load tools are used for this.
15. Usability testing – A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user gets stuck? Basically, system navigation is checked in this testing.
16. Install/uninstall testing – Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.
17. Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
18. Security testing – Can the system be penetrated by any hacking approach? Tests how well the system protects against unauthorized internal or external access, and checks whether the system and database are safe from external attacks.
19. Compatibility testing – Testing how well the software performs in a particular hardware/software/operating system/network environment, and in different combinations of the above.
20. Comparison testing – Comparison of product strengths and weaknesses with previous versions or other similar products.

21. Alpha testing – An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.
22. Beta testing – Testing typically done by end-users or others. This is the final testing before releasing the application for commercial purposes.
23. Agile testing – A software testing practice that follows the principles of the agile manifesto, treating software development as the customer of testing. Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit-level testing. Since working increments of the software are released very often in agile software development, there is also a need to test often. This is often done by using automated acceptance testing to minimize the amount of manual labor. Doing only manual testing in agile development would likely result in either buggy software or slipped schedules, because it would most often not be possible to test the whole software manually before every release.
24. GUI software testing – In computer science, GUI software testing is the process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done through the use of a variety of test cases.
25. Volume testing – Volume testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain data volume. This volume can, in generic terms, be the database size, or it could be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will explode your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (any file such as .dat or .xml); this interaction could be reading and/or writing to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file, to test the performance.
26. Sanity testing – A very brief run-through of the functionality of a program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing.
27. Smoke testing – Smoke testing is done by developers before the build is released, or by testers before accepting a build for further testing.
28. Ad hoc testing – A commonly used term for software testing performed without planning and documentation. The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods.
29. Maintenance testing – Testing performed to identify equipment problems, diagnose equipment problems, or confirm that repair measures have been effective. It can be performed at the system level, the equipment level, or the component level.

Question asked at a Microsoft interview: What is the 80/20 rule in Software Testing?
The principle says 80% of defects are caused by 20% of code. The concept here is the Pareto Principle, originally described by Vilfredo Pareto and later formalized by Joseph Juran. Of course, this is just a rule of thumb, but an important one. Whether the percentages are really 70/30 or 90/10, the reality is that most things are caused by a few underlying factors. For software testers, knowing this fact can offer tremendous value. If a tester is simply looking at a list of 100 bugs, it may not be clear if there is any underlying meaning. But if the tester were to group those bugs by some kind of category, it might become apparent that a very large number of bugs come from very few places. Here are a few recommendations for getting the most out of this principle:

• Try to sort bugs by root cause and not by outcome. Grouping all the bugs that made the software crash isn't that helpful; grouping all the bugs that resulted from module XYZ is more helpful.
• Work with developers to look for innovative groupings. For example, 80% of the program's bugs may result from calling the same underlying library. However, that may not be readily apparent from where the bugs occur within the program.
• Remember that bugs may result from flawed procedures. For example, a large number of bugs could be present because a developer is using out-of-date specifications.
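A tiny illustrative sketch of this grouping idea (the module names and counts are invented): tally bugs by root-cause module and see where they concentrate.

```python
from collections import Counter

# Hypothetical bug log: (bug id, root-cause module).
bugs = [
    (1, "billing"), (2, "billing"), (3, "ui"), (4, "billing"),
    (5, "billing"), (6, "network"), (7, "billing"), (8, "billing"),
    (9, "ui"), (10, "billing"),
]

# Group by root cause rather than by symptom.
by_module = Counter(module for _, module in bugs)
for module, count in by_module.most_common():
    share = 100 * count / len(bugs)
    print(f"{module}: {count} bugs ({share:.0f}%)")
# Output shows 'billing' (one module) causing 70% of these bugs.
```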

This principle can be tremendously powerful in reducing the bug count within a program, because solving just a few things can make a program much more stable. By the way, it can help your customers too. From an article titled "Microsoft's CEO: 80-20 Rule Applies To Bugs, Not Just Features": in recent months, Microsoft has learned that 80 percent of the errors and crashes in Windows and Office are caused by 20 percent of the entire pool of bugs detected, and that more than 50 percent of the headaches derive from a mere 1 percent of all flawed code.

What is the difference between Smoke, Sanity and Monkey Testing?

Smoke and Sanity are the most confusing terms, right? But I included one more type here: Monkey testing. Yes, monkey testing is also a popular word/question asked in many interviews, though you may call the same thing by different terminology in your organization. Here we go with those three.

What is Smoke Testing?
Smoke tests get their name from the electronics industry. The circuits are laid out on a breadboard and power is applied; if anything starts smoking, there is a problem. In the software industry, smoke testing is a shallow and wide approach to the application: you test all areas of the application without getting too deep. This is also known as a Build Verification Test or BVT.

What is Sanity Testing?
In comparison, sanity testing is usually narrow and deep: it looks at only a few areas, but at all aspects of that part of the application. A smoke test is scripted, either using a written set of tests or an automated test, whereas a sanity test is usually unscripted.

What is Monkey Testing?
A monkey test is also unscripted, but this sort of test is like a room full of monkeys with a typewriter (or computer) placed in front of each of them. The theory is that, given enough time, you could get the works of Shakespeare (or some other document) out of them. This is based on the idea that random activity can create order or cover all options.

Finally, a sanity test is not the same as a smoke test or a Build Verification Test. The former is used to determine whether a small section of the application is still working after a minor change (which is not a good policy, by the way: you should do a regression test instead).

Difference between Load Testing and Stress Testing?
One of the most common, but unfortunate, misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this semantic abuse is usually that the system is neither properly load tested nor subjected to a meaningful stress test.

What is Load Testing?
Testing the application with the customer-expected load in the customer-expected environment is called load testing.

What is Stress Testing?
Testing the application with the customer-expected load in the customer-expected environment continuously, until the application breaks or crashes, is called stress testing. In simple words: a load test run continuously to find the system's breaking point is called stress testing. This is a very important question in interviews.
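An illustrative sketch of the distinction (the URL, user counts, and thresholds are invented; real load tests use dedicated tools such as JMeter or Locust): a load test holds the expected level, while a stress test keeps ramping up until requests start failing.

```python
import concurrent.futures
import urllib.request

URL = "http://localhost:8000/"  # hypothetical system under test

def hit(_):
    # One request; returns True on success, False on failure.
    try:
        return urllib.request.urlopen(URL, timeout=5).status == 200
    except Exception:
        return False

def run_wave(users):
    # Fire `users` concurrent requests and report the success rate.
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users)))
    return sum(results) / users

# Load test: the customer-expected load (say, 50 concurrent users).
print("load test success rate:", run_wave(50))

# Stress test: keep increasing the load until the system starts failing.
users = 50
while run_wave(users) >= 0.95 and users < 5000:
    users *= 2
print("breaking point reached near", users, "concurrent users")
```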

What is Functional Testing?
Functional Testing: This is a mandatory testing level. During this test the testing team validates the functionality of a software build in terms of the factors below, with respect to customer requirements.

1. Behavioral / GUI: The changes in the properties of objects or controls in the software.
2. Input Domain: Whether the objects take the correct type and size of inputs or not.
3. Error Handling: Whether the software prevents wrong operations or not.
4. Manipulations: Whether the software generates correct output or not.
5. Database Validity: Whether the software's front-end screen operations correctly impact the software's database or not.
6. Sanitation: Finding extra operations in the software with respect to customer requirements.

Checking the above factors on a software build is called functional testing. During this checking the testers use black box (also called closed box) testing techniques.

What is Non-Functional Testing?
Non-Functional Testing: After successful completion of functional testing, the testing team concentrates on non-functional testing. During non-functional testing, the testing team concentrates on customer expectations, i.e. software characteristics. Non-functional testing is classified into the sub-topics below.

1. Usability Testing (UI check)
2. Manual Check (help documents testing)
3. Compatibility Testing or Portability Testing
4. Configuration Testing
5. Intersystem Testing
6. Multilanguage Testing
7. Data Volume Testing
8. Installation Testing
9. Performance Testing
10. Load Testing
11. Stress Testing
12. Endurance Testing
13. Security Testing
14. Parallel Testing
15. User Acceptance Testing (UAT)
16. Software Release and Release Testing

Difference between Functional and Non-Functional Testing?
After completion of software integration and integration testing, the development team releases a software build to the test engineering team. The testing team conducts system testing on that software in two sub-levels:

1. Functional Testing
2. Non-Functional Testing

Functional testing concentrates on customer requirements, and non-functional testing concentrates on customer expectations.

What are the key challenges of testing?
The following are some challenges faced while testing software:

1. Requirements are not frozen.
2. The application is not testable.
3. Tester-developer communication is not happening.
4. Defects in the defect tracking system.
5. Miscommunication or no communication.
6. Bugs in software development tools.
7. Proper decision making and team management.
8. Time pressures.
9. Lack of resources.
10. Lack of tools.
11. Lack of training.

What is IEEE 829? What is its importance?
IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents, but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are a matter of judgment outside the purview of the standard. The documents are:

1. Test Plan: a management planning document that shows:
   • how the testing will be done
   • who will do it
   • what will be tested
   • how long it will take
   • what the test coverage will be, i.e. what quality level is required
2. Test Design Specification: detailing test conditions and the expected results, as well as test pass criteria.
3. Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification.
4. Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed.
5. Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next.
6. Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed.
7. Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed.
8. Test Summary Report: a management report providing any important information uncovered by the tests accomplished, including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from the incident reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose, according to whether or not it has met the acceptance criteria defined by project stakeholders.

What is a CMM Level? What is its importance?
Many of you might be working in CMM-level companies, but most probably don't know what exactly a CMM level is or why it is important. Here are the full details and history.

CMM (Capability Maturity Model) is a model of process maturity for software development: an evolutionary model of the progress of a company's abilities to develop software. In November 1986, the American Software Engineering Institute (SEI), in cooperation with the Mitre Corporation, created the Capability Maturity Model for Software. Development of this model was necessary so that the U.S. federal government could objectively evaluate software providers and their abilities to manage large projects. Many companies had been completing their projects with significant overruns in schedule and budget. The development and application of CMM helps to solve this problem.

The key concept of the standard is organizational maturity. A mature organization has clearly defined procedures for software development and project management. These procedures are adjusted and perfected as required. In any software development company there are standards for the processes of development, testing, and software application, and rules for the appearance of final program code, components, interfaces, etc.

The CMM model defines five levels of organizational maturity:

1. Initial level: a basis for comparison with the next levels. In an organization at the initial level, conditions are not stable for the development of quality software. The results of any project depend totally on the manager's personal approach and the programmers' experience, meaning the success of a particular project can be repeated only if the same managers and programmers are assigned to the next project. In addition, if managers or programmers leave the company, the quality of the produced software will sharply decrease. In many cases, the development process comes down to writing code with minimal testing.
2. Repeatable level: at this level, project management technologies have been introduced in the company. Project planning and management are based on accumulated experience, there are documented standards for produced software, and there is a special quality management group. At critical times, the process tends to roll back to the initial level.
3. Defined level: here, standards for the processes of software development and maintenance are introduced and documented (including project management). During the introduction of standards, a transition to more effective technologies occurs. There is a special quality management department for building and maintaining these standards. A program of constant, advanced training of staff is required for achievement of this level. Starting with this level, the degree of organizational dependence on the qualities of particular developers decreases, and the process does not tend to roll back to the previous level in critical situations.
4. Managed level: there are quantitative indices (for both the software and the process as a whole) established in the organization. Better project management is achieved due to the decreased deviation in different project indices. However, meaningful variations in process efficiency can be distinguished from random variations (noise), especially in mastered areas.
5. Optimizing level: improvement procedures are carried out not only for existing processes, but also for evaluation of the efficiency of newly introduced innovative technologies. The main goal of an organization at this level is permanent improvement of existing processes. This should anticipate possible errors and defects and decrease the costs of software development, by creating reusable components, for example.

The Software Engineering Institute (SEI) constantly analyzes the results of CMM usage by different companies and perfects the model, taking into account accumulated experience.

What are the Common Factors in Deciding When to Stop Testing?
Common factors in deciding when to stop testing:

1. Show-stoppers encountered
2. Too many minor bugs pending to be fixed
3. Deadlines (release deadlines, testing deadlines, etc.)
4. Test cases completed with a certain percentage passed
5. Test budget depleted
6. Coverage of code/functionality/requirements reaches a specified point
7. Bug rate falls below a certain level
8. Beta or alpha testing period ends

All this depends on, and varies from, organization to organization, but these are the general factors to take into consideration.

Is it Possible to Make a Product 100% Bug-Free?
100% bug-free software is not at all possible. If we could say 100% testing was complete, that would be the end of any software. Then why do we still get so many projects? Because we didn't develop a bug-free project; we can't develop a bug-free project. As an example: why do WinRunner and Oracle have so many versions? Why do Yahoo Messenger and Rediffbol have versions? There are so many examples. Here is a real-time one: HSBC is a client of company XXXXXXX. Every day there are problems on the customer side, and every day developers and test engineers clarify and fix bugs found in real usage, which is also called support or maintenance. Finally, what I mean to say is: no project is 100% bug-free. We can do 100% testing, but we can't assure that we have developed a 100% bug-free project or product. We can say testing is complete when deadlines are met, when the project budget is over, and when all high- and medium-severity bugs are fixed.

What Should Be Done if You Don't Have Enough Time for Testing?
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

1. Use risk analysis to determine where testing should be focused.
2. Which functionality is most important to the project's intended purpose?
3. Which functionality is most visible to the user?
4. Which functionality has the largest safety impact?
5. Which functionality has the largest financial impact on users?
6. Which aspects of the application are most important to the customer?
7. Which aspects of the application can be tested early in the development cycle?
8. Which parts of the code are most complex, and thus most subject to errors?
9. Which parts of the application were developed in rush or panic mode?
10. Which aspects of similar/related previous projects caused problems?
11. Which aspects of similar/related previous projects had large maintenance expenses?
12. Which parts of the requirements and design are unclear or poorly thought out?
13. What do the developers think are the highest-risk aspects of the application?
14. What kinds of problems would cause the worst publicity?
15. What kinds of problems would cause the most customer service complaints?
16. What kinds of tests could easily cover multiple functionalities?
17. Which tests will have the best high-risk-coverage to time-required ratio?

The client will also contribute some ideas to utilize the available time. All this depends on the organization and the person.

Huge List of Software Testing Companies in the World
Here is a big list of all the companies providing testing consultation or doing software testing. One of the big Indian companies listed is AppLabs Technologies. I found this list at TestingStuff; you can click on each company to learn the details about it. The list contains brief descriptions of various people and organizations that help companies with testing. Its major goal is to let you know who's available.

15 Top Interview Q & A’s in Manual Testing Interviews

1. Why do most software companies prefer manual testing even though many automation testing tools are present in the market?
2. What is test coverage?
3. What are the types of CMM levels? Explain each level.
4. How do you go about testing a project in manual testing?
5. What is a build? What is build configuration?
6. What are entry criteria and exit criteria?
7. What types of metrics do we prepare in testing?
8. What is independent testing?
9. What is the difference between system integrated testing, integrated system testing, and manual testing?
10. Please explain test matrices.
11. What is gap analysis? Is there any template for it? Please describe gap analysis briefly.
12. Do the test cases differ for functional, integration, system, and user acceptance testing?
13. What is the clear meaning of a test case, the levels in a test case, and the contents of a test case?
14. Could anybody explain 'deferred tests'? When will we use a 'deferred test'? Who will use it?
15. What is risk analysis? What are the types of risks?

Answers:

1. Here are some reasons why companies choose manual testing initially:
• Automation tools have some limitations.
• Tools can't test 100% of the application's features.
• Automation is too costly.
• Automation needs skilled persons, which again makes companies spend much money.
• Tools are very costly (e.g. QTP costs about 6 lakh per license).

2. Test coverage is of two types:

• Features to be tested: the list of all the features within the test engineer's scope.
• Features not to be tested: the list of all the features beyond the test engineer's scope.

Example: low-risk areas, or skipping some functionalities based on time constraints.

3. See the CMM levels explanation given earlier in this document.

4. Just follow the STLC process: Test Initiation, Test Planning, Test Design, Test Execution, Test Reporting, and Test Closure.

5. In a programming context, a build is a version of a program. As a rule, a build is a pre-release version and as such is identified by a build number rather than by a release number. Reiterative (repeated) builds are an important part of the development process. Throughout development, application components are collected and repeatedly compiled for testing purposes, to ensure a reliable final product. Build tools, such as make or Ant, enable developers to automate some programming tasks. As a verb, to build can mean either to write code or to put individual coded components of a program together.

6. Entry criteria:

• All source code is unit tested.
• All QA resources have enough functional knowledge.
• Hardware and software are in place.
• Test plans and test cases are reviewed and signed off.

Exit criteria:

• No defects found over a period of time or testing effort.
• Planned deliverables are ready.
• High-severity defects are fixed.

7. In testing there are two types of metrics:

• Process metrics
• Product metrics

Process metric: a metric used to measure the characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.

Product metric: a metric used to measure the characteristics of the documentation and code.

8. Independent testing is testing by individuals other than those involved in the development of the product or system.

What is Configuration Management?
Configuration management (CM) is the process of controlling, coordinating, and tracking the standards and procedures for managing changes in an evolving software product. Configuration testing, by contrast, is the process of checking the operation of the software being tested on various types of hardware.

What is the heuristic checklist used in Unit Testing?
Here is the checklist most companies are using at the moment:

1. Understand the need for the module/function in relation to the specs.
2. Be sure about the type of values you are passing to the function as input.
3. Have a clear idea about the output that you are expecting from the function, based on point 1 above.
4. Be specific about the test data you are passing to the function in terms of type (in the case of positive testing).
5. Remember that you need to do both positive and negative testing.
6. Be clear about type casting (if any).
7. Have a crystal-clear idea about the type of assertions used (an assertion is used to test/compare the actual result with the expected one).
8. Be clear about how the function is being called, and whether any other function calls are involved in the function you are testing.
9. Perform regression testing for each new build, and keep a log of the modifications you are making to your unit test project (better if you use Visual SourceSafe).
10. It is always better to debug both positive and negative tests to see how the function performs, so that you have a clear understanding of the function you are testing.
11. It is always better to have a separate project for unit testing, referencing just the DLL of the application.

Difference between Subroutine, Procedure and Function?

What is a Subroutine?
A subroutine is a piece of code executed upon demand, separate from the main flow of the program. The interpreter will jump to the code, execute it, and return to the main program. Keywords such as GOSUB and RETURN from the BASIC language, often used with line numbers, provided a programmer-friendly way to achieve this. The assembly language equivalents were less easy to digest, usually comprising a comparison, a jump (or branch), and a return, complete with offsets, memory addresses, or, in more modern assemblers, labels. Subroutines are considered by many to be hard to maintain and difficult to read and digest, and are held in a similar light to the GOTO statement. In other words, they are reserved for use when nothing else will do. It is worth noting that a modern language implementing a modular structure, with named labels rather than line numbers, might be able to include subroutines in an acceptable fashion. However, with functions and procedures available, there may be no call for this approach at all. Generically, the term subroutine can be used to denote a piece of code, separate from the main body, fulfilling a discrete task, in language-neutral terms. For example, a document might refer to the file handling subroutines.

What is a Procedure?
A procedure is a named block of code, like a subroutine, but with some additional features. For example, it can accept parameters, which might be input, output, or pass-through. Traditionally, a procedure returning a value has been called a function (see below); however, many modern languages dispense with the term procedure altogether, preferring to use the term function for all named code blocks. Subsequently, the keyword PROCEDURE exists only in certain languages and has disappeared from many. It is worth noting that languages like C do not use it, and that BASIC-based languages do, whereas Modula- and Pascal-based languages have both the PROCEDURE and FUNCTION keywords, in which a FUNCTION is a PROCEDURE that returns a value.

What is a Function?
As we saw above, a function is considered in some languages to be a procedure that returns a value. However, it is usually the term used to refer to any named code block. It is worth noting that C-based languages use the term function exclusively. Functions can accept parameters, return values, and are usually maintained separately from the main program code. Many programming languages have a special kind of function (in C, the main function) designated as the entry point to a program. BASIC-based languages have no such entry point, since they are entirely procedural, in that execution begins at the top of the program and continues until it is told to stop.
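A small illustrative sketch of the distinction in Python terms (Python has only functions, so a "procedure" is modeled here as a function that returns no value; the names are invented for the example):

```python
def log_message(text):
    # Procedure-style: performs an action, returns no value.
    print(f"[log] {text}")

def square(n):
    # Function-style: computes and returns a value.
    return n * n

log_message("starting")          # called for its side effect
area = square(9)                 # called for its return value
log_message(f"area = {area}")    # prints: [log] area = 81
```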

What is a GA Build in Testing? (Asked at an Oracle Corp interview)
General availability (GA) is the point where all necessary commercialization activities have been completed and the software has been made available to the general market, either via the web or physical media.

Commercialization activities could include, but are not limited to, the worldwide availability of media via dispersed distribution centers, the completion of marketing collateral in as many languages as deemed necessary for the target market, the finishing of security and compliance tests, and so on. The time between RTM and GA can range from a week to months before a generally available release can be declared, because of the time needed to complete all commercialization activities required by GA. Another term with a meaning almost identical to GA is FCS, for First Customer Shipment. Some companies (such as Sun Microsystems and Cisco) use FCS to describe a software version that has been shipped for revenue.

What are Error Guessing and Error Seeding?
Error Guessing is a test case design technique where the tester has to guess what faults might occur and design tests to represent them.

Error Seeding is the process of intentionally adding known faults to a program, in order to monitor the rate of their detection and removal, and also to estimate the number of faults remaining in the program.
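As a worked illustration of the seeding estimate (the numbers are invented, and the formula is the commonly used convention rather than something stated in the original): if testing found 75% of the seeded faults, assume it also found roughly 75% of the real ones.

```python
# Hypothetical numbers for illustration only.
seeded_total = 20   # faults deliberately planted
seeded_found = 15   # planted faults the test effort uncovered
real_found = 60     # genuine (non-seeded) defects uncovered

# Scale the real count by the seeded detection rate to estimate
# the total real defect population, then the remainder.
estimated_total = real_found * seeded_total / seeded_found
estimated_remaining = estimated_total - real_found

print(f"estimated real defects: {estimated_total:.0f}")      # 80
print(f"estimated remaining:    {estimated_remaining:.0f}")  # 20
```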

What is Defect/Bug Leakage?
This is one of the most-asked interview questions for experienced people. Here we go: if the client/customer/end user finds a defect while using the released application/product, it is called defect leakage or bug leakage. In other words: after the release of the application to the client, if the end user finds any type of defect while using that application, it is called defect leakage. Defect leakage is also called a bug leak.
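Teams often quantify this with a leakage ratio; the formula below is a common convention (not stated in the original) and the numbers are invented:

```python
# Defects the test team caught before release vs. those the
# customer found after release (a common leakage metric).
found_internally = 190
found_by_customer = 10

leakage = found_by_customer / (found_internally + found_by_customer)
print(f"defect leakage: {leakage:.1%}")  # 5.0%
```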

Why Projects Fail: The Basic Six Factors
Computer projects fail when they do not meet the following criteria for success:

• The project is delivered on time.
• It is on or under budget.
• The system works as required.

Only a few projects achieve all three. Many more are delivered that fail on one or more of these criteria, and a substantial number are cancelled having failed badly. So what are the key factors for success? Organisations and individuals have studied a number of projects that have both succeeded and failed, and some common factors emerge. A key finding is that there is no one overriding factor that causes project failure; a number of factors are involved in any particular failure, some of which interact with each other. Here are some of the most important reasons for failure.

1. Lack of User Involvement
Lack of user involvement has proved fatal for many projects. Without user involvement, nobody in the business feels committed to a system, and people can even be hostile to it. If a project is to be a success, senior management and users need to be involved from the start and continuously throughout the development. This requires time and effort, and when the people in a business are already stretched, finding time for a new project is not high on their priorities. Therefore senior management need to continuously support the project, to make it clear to staff that it is a priority.

2. Long or Unrealistic Time Scales
Long timescales for a project have led to systems being delivered for products and services no longer in use by an organisation. The key recommendation is that project timescales should be short, which means that larger systems should be split into separate projects. There are always problems with this approach, but the benefits of doing so are considerable. Many managers are well aware of the need for fast delivery, leading to the other problem of unrealistic timescales. These are set without considering the volume of work that needs to be done to ensure delivery. As a result, these systems are either delivered late or have only a fraction of the facilities that were asked for. The recommendation here is to review all project plans to see if they are realistic, and to challenge the participants to express any reservations they may have about them.

3. Poor or No Requirements
Many projects have high-level, vague, and generally unhelpful requirements. This has led to cases where the developers, having no input from the users, build what they believe is needed, without having any real knowledge of the business. Inevitably, when the system is delivered, business users say it does not do what they need it to. This is closely linked to lack of user involvement, but goes beyond it. Users must know what it is they want and be able to specify it precisely. As non-IT specialists, this normally means they need skills training.

4. Scope Creep
Scope is the overall view of what a system will deliver. Scope creep is the insidious growth in the scale of a system during the life of a project. As an example: for a system which will hold customer records, it is then decided it will also deal with customer bills; then these bills will be provided on the Internet, and so on and so forth. All the functionality will have to be delivered at one time, therefore affecting timescales, and all of it will have to have detailed requirements. This is a management issue closely related to change control. Management must be realistic about what it is they want and when, and stick to it.

5. No Change Control System
Despite everything, businesses change, and change is happening at a faster rate than ever before. So it is not realistic to expect no change in requirements while a system is being built. However, uncontrolled changes play havoc with a system under development and have caused many project failures.

This emphasises the advantages of shorter timescales and a phased approach to building systems, so that change has less chance to affect development. Nonetheless, change must be managed like any other factor of business. The business must evaluate the effects of any changed requirements on the timescale, cost, and risk of the project. Change management and its sister discipline of configuration management are skills that can be taught.

6. Poor Testing
The developers will do a great deal of testing during development, but eventually the users must run acceptance tests to see if the system meets the business requirements. However, acceptance testing often fails to catch many faults before a system goes live, because of:

• Poor requirements which cannot be tested
• Poorly planned, or unplanned, tests, meaning that the system is not methodically checked
• Inadequately trained users who do not know what the purpose of testing is
• Inadequate time to perform tests, as the project is late

Users, in order to build their confidence in a system and to utilise their experience of the business, should do the acceptance testing. To do so they need good, testable requirements; well designed and planned tests; adequate training; and sufficient time to achieve the testing objectives.

Conclusion
These six factors are not the only ones that affect the success or failure of a project, but in many studies and reports they appear near, or at, the top of the list. They are all interlinked, but as can be seen they are not technical issues; they are management and training ones. This supports the idea that IT projects should be treated as business projects.
