Document Author: Kurt Schwartz
Approval Date:
Approved Version:
Approver Role:
Approver:
The Catalyst Team has adopted the BearingPoint Delivery Framework as its proposed solution delivery methodology for the VoteCal project. The BearingPoint Delivery Framework Methodology contains the processes, templates, and techniques used to deliver BearingPoint services. BearingPoint Delivery Framework(SM) and BearingPoint(TM) are trademarks or service marks of BearingPoint, Inc. and/or its affiliates.
Table of Contents
1. Introduction
   1.1 Purpose and Objectives
   1.2 Scope
   1.3 Compliance with RFP Requirements
   1.4 Assumptions
   1.5 References
   1.6 Intended Audience and Document Use
   1.7 Document Control
2. Resource Requirements
   2.1 Roles and Responsibilities
   2.2 Training Requirements
   2.3 Other Contributing Groups
Appendix C – Test Tracking Spreadsheet
Appendix D – Sample Test Script
1. Introduction
Software testing is one of the most important parts of developing any system; it validates the functionality of the software and ensures that the defined business requirements and application designs are satisfied. Successful testing discovers errors or "bugs" and provides software developers with the information required to correct or "debug" the software. Software testing is a crucial part of the Secretary of State (SOS) VoteCal project development and implementation effort; as such, testing will account for a significant portion of the integration effort within the project phases related to construction and implementation. The VoteCal testing approach will follow BearingPoint's application development methodology - the BearingPoint Delivery Framework - which incorporates both testing methodology and test planning techniques. An overview of the Delivery Framework is presented in the VoteCal Project Management Plan.

The remainder of this document describes the levels of testing to be performed, the approach we will take to perform the testing, the overall responsibilities for the development of test data and test scripts, the responsibility for the execution of the testing, and the stage in which each level of testing will occur. Although not explicitly called out throughout this plan, the proposed testing methodology is intended to verify that all system components operate as documented and intended, including the following:

Software
  o Configured Items
  o Programmed Items
  o Programs
  o Reports
  o Fail Over
  o Fail Back
Hardware

1.1 Purpose and Objectives
The objective of the Test Plan is to establish the testing management processes and procedures that facilitate project testing execution and delivery for the Catalyst Team Project Manager (PM), SOS, and the entire VoteCal Project team. It serves as the controlling document, to which the VoteCal project team will adhere throughout the life cycle of the project. The Test Plan establishes project management's interpretation of the why, what, how, who, how much, and when of the testing aspects of the project. The Test Plan will become the basis for verifying that the system operates as documented and intended.

1.2 Scope
The scope of the Test Plan covers the required testing activities and methodology for the voter registration system. If SOS chooses to opt for the
EMS, this plan would be adapted to incorporate the EMS. This test plan maps to and will become Deliverable 3.2 - VoteCal System Acceptance Test Plan.
1.3 Compliance with RFP Requirements

Req #: P15
Requirement Description: The Bidder must provide a draft Test Plan that includes a discussion of the proposed Test Methodology and a sample Test Defect Log. The actual detailed Test Plan and Test Defect Log must be submitted, finalized, and approved no later than fifteen (15) State working days prior to the commencement of testing activities. All business functional and technical requirements in this RFP must be traceable to the Test Plan and the Bidder must provide SOS with a Requirements Traceability Matrix (refer to Requirement P5), which will provide a link from each test case back to each of the business functional and technical requirements in the RFP for testing purposes. Bidder must include a discussion of all levels of testing that will be performed and the training to be provided for the SOS testing staff. SOS intends to perform a test with pilot counties (counties to be determined – Bidders should assume a total of 1.5 million voter registration records for the pilot counties). This must be factored into the Bidder's activities, PMP, and schedule. If a Bidder proposes a Commercial off-the-Shelf (COTS) application or a Modified-off-the-Shelf (MOTS) application, out of the box testing must be included to validate the base product is functioning properly. Negative testing scenarios must be included. Bidder must address all levels of testing to be performed, including stress testing and how they will manage these activities including managing of the test environments. The Test Plan must include testing for all configured and programmed items, all programs and reports, and a complete "end-to-end" test including testing of interfaces to the county systems. It will be the decision of the VoteCal Project Manager when acceptance testing has been successfully completed. The final detailed Test Plan will become the basis for verifying that the system operates as documented and intended. NOTE: SOS has contracted with an Independent Verification and Validation (IV&V) contractor to perform independent testing of the delivered applications. Bidder must resolve any discrepancies identified by the IV&V contractor before testing is considered accepted and signed off by SOS. Bidders must factor this activity and working with the IV&V contractor into their work plan.
Proposal Section: VoteCal Test Plan
Page: All
1.4 Assumptions

The following are the Catalyst Team assumptions:
• SOS will provide adequate staffing and take a lead role in user acceptance testing in the timeframes agreed to in the detailed work plan.
• The testing conducted by the IV&V contractor will not require any additional training, environments, or preparation beyond that needed for the activities detailed within this plan.
• Unit testing will be accomplished by the Development Team as part of the integral process of developing and releasing application functionality.
• Testing activities will
be performed against application functionality that has been released by the Development Team.

1.5 References
The Test Plan references materials from the following sources:
• VoteCal Project Management Plan (Volume 1, Section 4, Tab 4A)
• Data Conversion Plan (Volume 1, Section 4, Tab 4L)
• BearingPoint Delivery Framework
• Requirements Traceability Matrix and Gap Analysis (Volume 1, Section 4, Tab 4G)
• Training Management Plan (Volume 1, Section 4, Tab 4K)

1.6 Intended Audience and Document Use
The Test Plan is a component of the Project Management Plan (PMP). It defines the framework for testing management on the VoteCal Project. All project team members are the intended audience.

1.7 Document Control
The Catalyst Team will own the Test Plan and will work with SOS to update and review this plan prior to the start of each subsequent project phase. This plan is a living document, with detail being increasingly filled in, one phase in advance, as the project progresses. During review activities, the plan will be updated as needed as a result of continuous test management process improvement efforts by the project leadership and project team members. The actual Test Plan and Test Defect Log will be submitted, finalized, and approved by SOS no later than fifteen (15) State working days prior to the commencement of testing activities. At the same time, we will provide a Requirements Traceability Matrix that provides traceability to the Test Plan and, specifically, links from each test case back to each of the business functional and technical requirements for testing purposes.
2. Resource Requirements
There are various team member resources and stakeholders involved in managing and executing the testing process. In some cases, one individual may perform multiple roles in the process. All project team members will receive initial training on their testing responsibilities from their manager/lead when they join the project. Project meetings are used to brief team members on any changes to the process.

2.1 Roles and Responsibilities
The table below defines the roles and responsibilities for testing and the related deliverables:
Deliverable                    | Catalyst Team Role | SOS Team Role
Test Plan                      | Primary            | Support
Test Scripts                   | Primary            | Support
System/Integration Testing    | Primary            | Support
Regression Testing             | Primary            | Support
User Acceptance Testing        | Support            | Primary
User Acceptance Training       | Primary            | Support
Performance Testing            | Primary            | Support
IV&V Testing                   | Support            | Primary
IV&V Training                  | Support            | Primary
Further discussions are needed with SOS to better define the planned IV&V testing. At this time it is not clear whether IV&V testing will be done as a part of acceptance testing or whether additional test cycles are required. Review of the IV&V SOW did not provide clarification of these testing activities. The Catalyst Team understands that we will be required to resolve any discrepancies identified by the IV&V contractor before testing is considered accepted and signed off by SOS.

The following list describes the key project team members and their roles in the testing process:
• SOS Project Manager: responsible for all day-to-day activities and ensuring that the project is completed on time and within budget. Will determine when testing has been successfully completed.
• Catalyst Project Manager: responsible for all day-to-day activities and ensuring that the project is completed on time and within budget.
• Testing Manager (Testing Lead): responsible for creation of the Test Plan, including necessary templates, timelines, and the test schedule. Monitors progress of the various test phases and reports on findings. Ensures 100% test coverage of the defined test scripts and test requirements. Works with the Development Team Lead to resolve defects and retest as necessary. Validates and signs off on all test results. Monitors test results and tracks defects.
• Tester: responsible for creating System Test scripts, setting up System Test data, and conducting System Testing. Develops test scripts, identifies appropriate testing data, and performs penetration testing, validity checks, overflow checks, and other checks for potential programming weaknesses. Works with the end-user test group to identify potential weaknesses and process flaws in the functional areas being tested.
• Development Manager (Technical Lead): responsible for directing the technical infrastructure and data integrity aspects of the project, and supporting and
customizing VoteCal while in production. Oversees the development efforts of the development team members.
• Development Team: responsible for developing the technical infrastructure and data integrity aspects of the project, and supporting and customizing VoteCal while in production.
• SOS Subject Matter Experts (Acceptance Testers): SOS Elections Division personnel who will ultimately use the system in a production environment will be responsible for participating in user acceptance testing. The Subject Matter Experts will work in conjunction with the VoteCal Test Team in executing the tests and recording the results and defects, including judging the completeness and accuracy of the business functionality – screens, reports, and interfaces – and judging whether the user interface is acceptable for all groups/types of users.

2.2 Training Requirements
One of the ultimate goals of the project is to support the VoteCal project testing team members in their effort to become self-sufficient after the solution is implemented. As part of our methodology, the Catalyst Team promotes knowledge sharing and knowledge transfer throughout the entire Delivery Framework life cycle. The Catalyst Team’s philosophy and approach in working with SOS is to produce reusable templates and repeatable processes during each phase of the project. This approach helps SOS team members to replicate the processes, activities, and deliverables required to support future enhancements, upgrades, and implementations. Knowledge transfer is an integral part of the BearingPoint Delivery Framework methodology. As part of the project activities, the Catalyst Team monitors the progress that SOS employees make related to the use of this methodology for future deployments and implementations. Helping SOS achieve self-sufficiency in replicating the implementation process is one of the most important components of the Catalyst Team’s VoteCal implementation methodology. Figure 4M-1 depicts the key components in the Catalyst Team’s plan to prepare SOS staff for performing user acceptance testing. The same components play a role in preparing SOS for end user and ultimately system support responsibilities.
Figure 4M-1: SOS VoteCal User Acceptance Training
The following table highlights the key components and topics that constitute the required training for SOS project testing team members on the software, methodologies, and tools used for the VoteCal project.
Audience: SOS UAT Testers
Course Description: VoteCal Testing
Prerequisites: (None)
Content: User Acceptance Test Scenarios
Length of Class: 2 Days
For further information on the training methodology and courses, please refer to the Training Management Plan.

2.3 Other Contributing Groups
The following are the other contributing groups that contribute to the processes in this plan:
• SOS Information Systems Division (ISD) – ISD is responsible for providing access and support relating to the data center environment that will host the system servers and the servers that support the project documentation repositories.
• County IT Staff – responsible for providing remote or on-site access to local EMS servers as these become integrated components of VoteCal.
3. Testing Methodology

3.1 Methodology Overview
The BearingPoint Delivery Framework is a Rational Unified Process (RUP) based methodology and shares RUP's common phases and the step-wise, iterative approach necessary to avoid pitfalls often encountered in large system implementations. Because the BearingPoint Delivery Framework is built on our experience, the Catalyst Team brings a methodology that goes a step further in avoiding risk. While RUP leaves some activities to the discretion of the practitioner, we have taken additional steps and "methodized" activities that are often overlooked. The BearingPoint Delivery Framework methodology is driven by experienced professionals and composed of sets of tasks designed to deliver a complete solution. The tasks are grouped by phases, similar to the RUP methodology. The BearingPoint Delivery Framework, however, is designed based on BearingPoint's experience and goes beyond system design.

The testing methodology incorporates both positive and negative testing scenarios into each type of testing, providing complete coverage of the possible outcomes to verify the solution performs to specification. Negative testing, or testing for fail conditions, is where the expected outcome is a failed test. Positive testing, or testing for pass conditions, is where the expected outcome is a passed test. Through application of both types of testing scenarios, the application can be fully tested to verify it is operating to specification.
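As an illustration only (not part of the formal methodology), the following is a minimal sketch of how a paired positive and negative scenario might be expressed as automated checks. The register_voter function, its validation rule, and ValidationError are hypothetical placeholders, not actual VoteCal components.

```python
# Illustrative sketch: "register_voter" and "ValidationError" are hypothetical placeholders.
import pytest

class ValidationError(Exception):
    """Raised when a registration record fails validation."""

def register_voter(record: dict) -> str:
    """Toy stand-in for a registration service; rejects records missing a birth date."""
    if not record.get("date_of_birth"):
        raise ValidationError("date_of_birth is required")
    return "REGISTERED"

def test_positive_valid_record_is_registered():
    # Positive test: the expected outcome is a successful registration.
    assert register_voter({"name": "Pat Doe", "date_of_birth": "1970-01-01"}) == "REGISTERED"

def test_negative_missing_birth_date_is_rejected():
    # Negative test: the expected outcome is a rejected (failed) registration.
    with pytest.raises(ValidationError):
        register_voter({"name": "Pat Doe"})
```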
3.2 Testing Across the Project Phases
The BearingPoint Delivery Framework is structured by phases and further divided into tasks and subtasks. Tasks and subtasks are activities performed throughout the project, and each phase may require a set of unique or like tasks and subtasks.

Figure 4M-2: BearingPoint Delivery Framework Methodology
3.2.1 Project Initiation and Planning

The Project Initiation and Planning phase focuses on the requirements of the new system and on your business issues, including the collection and validation of the requirements.

3.2.2 Design
The Design phase is focused on fine-tuning the detailed business and technical system requirements and developing the appropriate technical design documents and test plans for each application component. Thus, the BearingPoint Delivery Framework approach delivers detailed testing specifications that are used in the Development phase. System designs are created to support actual construction of your new system; as a result, the overall architecture and technology needed to support your new system is now taken into consideration. Activities include developing the models for your new system so that we can further refine technical requirements and work to eliminate identified risks. To verify that all requirements are in the design, specific functions are evaluated against and traced to the requirements. These functions are translated into test cases that validate the system is performing to specification. The test cases are linked to each of the business functional and technical requirements through the Requirements Traceability Matrix. (Please refer to Volume 1 – Tab 4F - Requirements Traceability Matrix and Gap Analysis for additional details on requirements traceability.)
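To make the traceability idea concrete, here is a hedged sketch of a coverage check over a requirement-to-test-case mapping. The requirement and test case identifiers are hypothetical and do not reflect the actual RTM format.

```python
# Hedged sketch: a minimal traceability check over hypothetical requirement and
# test case IDs; it is not the actual Requirements Traceability Matrix format.
requirements = {"REQ-001", "REQ-002", "REQ-003"}      # business/technical requirement IDs
test_case_trace = {
    "TC-0101": {"REQ-001"},                           # each test case lists the requirements it covers
    "TC-0102": {"REQ-001", "REQ-002"},
}

covered = set().union(*test_case_trace.values())
uncovered = requirements - covered
if uncovered:
    # Any requirement with no linked test case is a traceability gap to resolve.
    print("Requirements with no linked test case:", sorted(uncovered))   # e.g. REQ-003
```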
3.2.3 Development and Testing

The Development and Testing phases focus on the development and testing of the VoteCal system components. Their purpose is to construct and configure, in an iterative fashion, the solution elements defined during the Design phase. In constructing your new system, we focus on forging your new VoteCal solution through the use of technology, software components, configuration and integration, and systems testing. Integration, system, and regression testing are executed within this phase. These forms of testing are all initiated in and performed throughout the build phase, continuing until development activities have ceased and the code base has stabilized.

3.2.4 Pilot Deployment and Testing and Deployment and Cutover
The Pilot Deployment and Testing and Deployment and Cutover phases focus on the delivery of a complete and functional system to the production environment. Full migration of the production environment is performed, along with acceptance testing, training end users, and preparing for knowledge transfer. We revisit previous phases to secure all inputs for ongoing maintenance and performance. Performance, security, and fail over testing are also executed within this phase.
3.3 Types of Testing

Different types of testing will be required during the course of the project. The different types of system testing are described below; each is designed to test a different aspect of the system. Most of the testing types will be the responsibility of the Catalyst Team, but some (e.g., User Acceptance Testing) will be the responsibility of the SOS Team with support from the Catalyst Team.

3.3.1 Out of the Box Testing
The Catalyst Team believes that out of the box testing is not appropriate for the proposed solution. Although our solution reuses significant portions of a similar system delivered in Illinois, enough new development and modification is required that out-of-the-box test results would not be applicable to VoteCal. Our testing methodology will instead incorporate detailed testing of all solution components and business functions during the project lifecycle.

3.3.2 GUI Testing
The Catalyst Team intends to incorporate GUI testing throughout the various types of testing being conducted as part of the project. GUI testing activities are based not only on verifying the system operates as documented and intended, but also on ensuring that usability has been incorporated into the design and development of the system. Usability includes validating that the GUI is laid out in a logical manner and provides the end user with a sense that the interface was designed to facilitate ease of use and adoption. The development team will develop a set of GUI guidelines to be followed when developing the system, and testing activities will incorporate tasks to validate the interface is implemented consistently and is functioning to specification. These are the key GUI testing activities that are incorporated into the project phases:
• Design – GUI standards are defined and incorporated into the system user interface design specifications
• Development and Testing – GUI standards are developed into the system and testing activities are employed to confirm consistency, functionality to specification, and usability
• Pilot Deployment and Testing and Deployment and Cutover – Application of GUI standards is confirmed by the client as part of user acceptance testing
GUI testing will be conducted through definition and execution of manual scripts to validate the GUI functionality.
3.3.3 System/Integration Testing
This section details the BearingPoint Delivery Framework methodology for system/integration testing. The discussion of the methodology includes detailing scope, approach, process, test data, testing scripts, and approval for the system/integration testing process.

3.3.3.1 Scope

System/integration testing is the process in which Catalyst Team personnel, SOS SMEs (Subject Matter Experts), and other personnel assigned by SOS conduct integrated tests of developed applications to verify that system functions are properly integrated and correctly working together. The modules to be tested are grouped into major business functional areas of VoteCal, and any unlisted items are included in the major business functional areas listed below. The System/Integration Test process will test all integration modules, including the following business functions of the VoteCal application:

Core Components – All functions (modules) included in the implementation of VoteCal, including:
  o On-line Functions
  o External System Interfaces (DMV, CDCR, CDPH, NCOA, NVRA)
  o Batch Programs
  o Correspondence (Ad-hoc and Batch)
  o Reports (Ad-hoc and Batch)
  o Live XML Web-service interfaces
  o Batch Data Exchange interfaces
  o Data Conversion Effectiveness
  o EMS Vendor Upgrades – Simulated County EMS Systems will be connected in order to pre-test; Pilot County EMS Systems will be connected to test.
3.3.3.2 Approach

System/integration testing will be performed to test the fully integrated system. The data used will be a snapshot of the production data, and then updated as needed. The business functions will be tested until they meet the System/Integration Testing acceptance criteria outlined below. The tested business functions will then be deemed ready for acceptance testing. System testing will be performed on all business functions, including external system interfaces and batch job scheduling. The Catalyst Team and SOS testing project teams, with assistance from the user community, will develop test data, test scripts, and expected results. This system/integration testing approach overlaps with the activities carried out under the Data Conversion Plan. The Data Conversion Plan describes where key testing checkpoints are inserted between the Data Conversion and Pilot Testing steps.
Test scripts are organized according to module and business processes. They list the tests that are planned and link back to the detailed requirements definition matrix under the "Unit Test Script" and "System Test Script" columns, defining the test coverage. Each test is described in detail according to test steps and specific pass/fail evaluation criteria that are outlined as an expected outcome for each test step. The combination of steps and expected outcomes can be thought of as a test script.

The BearingPoint Delivery Framework methodology for system/integration testing generally requires that staff perform system test cases, using a representative sampling of data spanning expected real-world variations, prior to client business staff involvement. While it is not possible to test all mathematical combinations of data variation, the Catalyst Team will test with multiple conditions appropriate to the specific test. Our goal is to perform test cases that touch every line of code, in order to be confident we have a quality product prior to delivery to SOS. Code coverage is tracked by tracing the test scripts back to the requirements to ensure that all modules and functions are tested. Tracing the test scripts back to the requirements assures that all functionality existing in the base system and added by current development is tested. In addition, code reviews will be conducted by developers to ensure that web development standards are met and adhered to. The test cases will be designed to execute and test code logic, conditional paths, and parameters.

In some cases, the development of specific functionality, such as standard reports and some correspondence, may not be completed prior to the start of the system test process. In these cases, when functionality becomes available, Catalyst Team staff will perform the test cases with a representative sampling of data variations prior to delivering the system test scripts to SOS for acceptance testing. The Catalyst Team and SOS testing project teams anticipate that conversion programs, designed to transfer data from the legacy system to the development and test databases, will be executed to support the system testing process.

System/integration testing will include:
• Business Functionality Testing – This test will verify the functionality of the entire process (such as County Search) as a unit and ensure that all business requirements have been met. Test scripts will be executed to ensure that the individual functions (modules) of each business application (such as Maintain County) work in unison with each other and with other business applications.
• Concurrency Testing – Concurrency testing verifies that record locking, processing, and updates work correctly and adequately when multiple users access the same record(s).
• Black Box Testing – Black box testing is conducted with no knowledge of the internal design or structure of the system component being tested. The tester selects valid and invalid test case input, determines what the appropriate output should be for each input, and executes the tests, validating that the actual output matches the expected output.
• White Box Testing – White box testing is conducted with knowledge of the internal design or structure of the system component being tested. The tester determines, through analysis of the programming code, all of the logic paths through the component and develops test case inputs that exercise all of the logic paths. For each test case input, the tester determines what the appropriate output should be. The tests are executed, validating that the actual output matches the expected output.
• Negative Testing – Negative testing is the creation of test cases where the failure of the test is the expected result. Negative testing attempts to show that a component of an application does not do anything that it is not supposed to do. This is also often referred to and executed as a bounds test.
• Positive Testing – Positive testing is the creation of test cases where the passage of the test is the expected result. Positive testing attempts to show that a given component of an application does what it is supposed to do.
One of the key components that will be thoroughly tested as part of system/integration testing is the VoteCal Match Engine. Very specific test data sets will be created for use with this process. The data sets will be designed to produce very specific expected outcomes, allowing for the assessment that the VoteCal Match Engine is working to specification. The following are some of the data set scenarios that will be applied:
• Match: Data resulting in a match with no changes
• Update: Data resulting in a match with updates
• Add: Data resulting in no match
• Duplicate: Data resulting in intentional duplicates (created through direct database injection)
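For illustration only, the following is a hedged sketch of how these four scenarios might be encoded as test data with expected outcomes. The record fields, outcome codes, and the match_engine callable are hypothetical placeholders, not the actual Match Engine interface.

```python
# Hedged sketch: the four Match Engine data-set scenarios encoded with expected outcomes.
# "match_engine", the record fields, and the outcome codes are hypothetical placeholders.
MATCH_SCENARIOS = [
    # (scenario name, incoming record, records already in VoteCal, expected outcome)
    ("Match",     {"ssn4": "1234", "dob": "1970-01-01", "last": "Doe"},
                  [{"ssn4": "1234", "dob": "1970-01-01", "last": "Doe"}],  "MATCH_NO_CHANGE"),
    ("Update",    {"ssn4": "1234", "dob": "1970-01-01", "last": "Doe-Smith"},
                  [{"ssn4": "1234", "dob": "1970-01-01", "last": "Doe"}],  "MATCH_WITH_UPDATE"),
    ("Add",       {"ssn4": "9999", "dob": "1985-06-15", "last": "New"},
                  [],                                                       "NO_MATCH_ADD"),
    ("Duplicate", {"ssn4": "1234", "dob": "1970-01-01", "last": "Doe"},
                  [{"ssn4": "1234", "dob": "1970-01-01", "last": "Doe"},
                   {"ssn4": "1234", "dob": "1970-01-01", "last": "Doe"}],  "DUPLICATE_DETECTED"),
]

def run_match_scenarios(match_engine):
    """Execute each scenario and compare the engine's result to the expected outcome."""
    for name, incoming, existing, expected in MATCH_SCENARIOS:
        actual = match_engine(incoming, existing)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{name}: expected {expected}, got {actual} -> {status}")
```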
Through use of these and other test cases, we will be able to consistently validate that the Match Engine component is operating to specification.

3.3.3.3 External Agency Interface Testing

External agency interfaces are primarily batch file processing. As such, testing the import and processing of data from external agencies will require coordination with the specific agencies, such as DMV, CDCR, CDPH, the NCOA service provider, and SSA (through the DMV), to obtain test data. Once test data files are collected for a specific external agency, the test methodologies defined in this plan will be applied. Because the data from an external agency may not be sufficient to test all desired scenarios, additional test cases will need to be defined to provide for scenarios such as negative test results.

3.3.3.4 Pilot Testing

System/integration testing will incorporate pilot testing that involves testing of VoteCal external system interfaces against pilot county election management systems. The Catalyst Team will work with SOS to identify counties that will participate in the pilot testing effort.

Pilot site selection objectives:
• Include at least one each from a large, midsize, and small county.
• Include at least one county using each type of EMS.
• Exclude counties with a one-off EMS solution; their pilot testing activities will be combined with implementation activities during the implementation phase.
• The counties selected should have strong cooperative relationships with their EMS vendor.
• The counties selected should have a positive relationship with SOS.
The Catalyst Team has an approach that allows for early VoteCal external interface development and testing and that leads to a smooth transition to pilot testing against county EMS. The VoteCal application components will initially be tested against dummy election management systems (EMS) with dummy test data. This first phase of testing will provide for the simulation of interfaces to external systems and will allow for validation of the proper functioning of the web service calls. Once this testing has been successfully completed, and the pilot counties have been identified and prepared for their participation in the testing effort, the external interface target systems will be switched from the dummy systems and data to the real county EMS systems.
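As a hedged illustration of the dummy-EMS idea, the sketch below stands up a simulated county EMS endpoint that returns a canned acknowledgment, so outbound interface calls can be exercised before real county systems are connected. The port, URL, and XML payload are illustrative assumptions, not the actual interface specification.

```python
# Hedged sketch: a dummy county EMS endpoint used to exercise outbound web-service
# calls before real county systems are connected. The port and XML payload are
# illustrative only, not the actual interface specification.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DummyEmsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and log the simulated request body sent by the interface under test.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print("Received simulated EMS request:", body[:200])
        response = b"<ack><status>RECEIVED</status></ack>"   # canned response with known content
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)

if __name__ == "__main__":
    # Point the interface under test at http://localhost:8080/ during simulation.
    HTTPServer(("localhost", 8080), DummyEmsHandler).serve_forever()
```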
3.3.3.5 Process

This section details the system/integration testing process flow.

Figure 4M-5: System/Integration Test Management Process Flow
The System Testing Process includes defect tracking, creation of appropriate test data and test scripts, and final approval.

ST-1 Testing Team Develops Test Scripts
Upon successful completion of IUT, the testing team will develop test scripts for system testing.

ST-2 Testing Team Conducts Tests
The testing team tests the application test scripts.

ST-3 Testing Team Notifies Development Team Leader of Problems or Successful Test
The testing team will create a Jira defect report to communicate problems encountered. The Development and Testing Team leads will review the defect reports. If the testing is successful, the development team leader will be notified of the successful test.

ST-4 Development Team Leader Assigns Modifications
The development team leader assigns modifications/corrections to a developer.

ST-5 Developer Corrects Errors
The developer makes modifications/corrections and notifies the development team leader. Upon the development team leader's approval, test scripts will be sent back to the testing team for retest, and the testing team will once again conduct the system testing (ST-2). Note: Steps ST-2 through ST-5 will be repeated as many times as necessary.

ST-6 System Test Completed
The testing team leader records when system testing of a business function is completed.

ST-7 Project Management Plan is Updated
Upon successful completion of system testing, the testing team leader will ensure that the project workplan is updated to reflect the completion of the respective system testing.
EMS Integration
Portions of this system/integration testing process that relate to EMS vendor system integration must be carried out throughout the implementation phase as each county is connected. The Data Conversion Plan and Implementation Plan describe where key testing checkpoints are inserted between the Pilot Testing and Implementation steps.

3.3.3.6 Defect Tracking

The Catalyst Team System Test Team will use Jira to record and maintain defect reports. A Defect Report is the mechanism each individual tester will use to record application defects. Defect is a generic term that identifies any issue discovered with the new application. Defects may take the form of application problems (sometimes referred to as "bugs"), missing features or functionality, slow performance, cosmetic flaws, or desired enhancements. Testers will log each defect discovered into Jira. Please refer to Appendix B for a sample Jira screen capture form.

3.3.3.7 Training

SOS and Catalyst Team personnel will be trained as required. Section 2.2 Training Requirements of this plan details the training methodology. Additionally, the SOS team members will have attended discovery sessions. Through participation in discovery sessions and the specified training, the SOS team members will have developed a basic familiarity and understanding of the application functionality. Testers will reference the Detailed Design Documents for clarification on the expected result of the test scripts. Additional details will be outlined in the Training Plan.

3.3.3.8 Test Data

System/Integration testing will utilize a test database instance separate from but identical to the Production database. Data from conversion will be utilized for testing against system test scripts. A separate Test VoteCal application build will be generated prior to the system test and will be rebuilt as necessary. A separate test server will host the VoteCal application and DLLs (Dynamically Linked Libraries). The Test Build will be modified with the coding
changes based on defects found in System/Integration Testing and will be version controlled using Subversion.

VoteCal test data will be created in several different ways. One of the key methods involves the careful creation of data sets that contain specific characteristics that will provide for testing specific scenarios and have expected outcomes. Another key method involves creating test data from real data sets derived from the SOS legacy systems. Once the data has been obtained, it will be transmogrified (replacing attributes such as last name, first name, social security number, date of birth, etc. across all records in the data set) to the point that the test data records are anonymous and do not resemble real records from the originating system. This technique provides the added benefit that, although the complete records do not resemble those in the originating system, the data is representative of the real data the system will be processing once fully implemented.

3.3.3.9 Test Scripts

The Independent Unit Testing (IUT) test cases will evolve into more detailed system/integration test cases. The Testing team, with assistance from SOS SMEs, will create detailed system test scripts for all VoteCal processes. The Catalyst Team is fully responsible for the creation of system test scripts. Please refer to Appendix D for a sample test script.

3.3.3.10 Approval

The system project test team (Catalyst Team and SOS IT staff and SMEs) will perform the system tests, execute the test cases, and confirm that all processes in VoteCal work as designed and meet requirements, thus releasing the system for User Acceptance Testing. The SOS and Catalyst Team project managers will review and accept the results of system test. The following table describes the severity levels of errors and the number of each type of error a business function may have in order to continue on to UAT.
Level | Example | Number Allowed to Exit System/Integration Testing
1 | Specific business function is unavailable. | 0
2 | A VoteCal screen, in a specific business function, terminates unexpectedly. | 1 screen per business function
3 | A calculation or screen button, dropdown, etc. in a specific business function does not function. | 10 objects per business function
4 | Cosmetic error (e.g. colors, fonts, pitch size), but overall calculation/function is correct. | 25 errors per business function
5 | A change of the screen functionality, a new screen, etc. is desired. | No limit

Note: Severity ranges from highest at Level 1 to lowest at Level 5.
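For illustration, the hedged sketch below applies the exit thresholds from the table above to defect counts for a single business function. The defect-count dictionary is a hypothetical example; the thresholds mirror the table.

```python
# Hedged sketch: applying the system/integration exit criteria above to the open
# defect counts of one business function. Limits mirror the table; None = no limit.
SYSTEM_TEST_EXIT_LIMITS = {1: 0, 2: 1, 3: 10, 4: 25, 5: None}

def may_exit_system_testing(open_defects_by_level: dict) -> bool:
    """Return True if open defect counts are within the allowed limit for every level."""
    for level, limit in SYSTEM_TEST_EXIT_LIMITS.items():
        if limit is not None and open_defects_by_level.get(level, 0) > limit:
            return False
    return True

# Example: one unexpected screen termination (Level 2) and 12 cosmetic errors (Level 4).
print(may_exit_system_testing({1: 0, 2: 1, 3: 0, 4: 12, 5: 3}))   # True
print(may_exit_system_testing({1: 1}))                             # False: Level 1 must be zero
```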
3.3.4 User Acceptance Testing
This section details the BearingPoint Delivery Framework methodology for User Acceptance Testing (UAT). The discussion of the methodology includes detailing scope, approach, process, test data, testing scripts, and approval for the user acceptance testing process.

3.3.4.1 Scope

UAT is the process where the user community will test various areas of the new VoteCal application to ensure it performs correctly. Entire processes will be tested as a single unit to verify proper integration of the various business functions. Identification of application problems, missing features, and future enhancements is the most critical of the tasks in the UAT process. The applications cannot be ready for production without the diligent testing efforts of the UAT team. The identification of application problems through final correction will be carefully controlled and monitored throughout the test process.

3.3.4.2 Approach

UAT is one of the most critical stages in the software application implementation process. UAT will provide SOS the opportunity to conduct rigorous tests of the new VoteCal application in order to confirm system design and functionality prior to accepting the system for implementation. This stage requires significant coordination and cooperation between SOS SMEs, Operations staff, the SOS project UAT team, and the Catalyst Test team. The Catalyst Test team will provide the detailed test scripts used in the system test as the basis for conducting the user acceptance test. The Catalyst Test team will be responsible, with assistance from SOS project management, SMEs, and functional testers, for constructing the test schedule and distributing the test scripts. SOS will have the flexibility to add or modify UAT test scripts to meet individual testing requirements. Similar to the conversion for system testing, existing data will be converted from the legacy system into the VoteCal system to allow users to perform tests using current data.

SOS will conduct the UAT using an iterative approach. The test script is processed all the way through, and each time a defect or deviation from a requirement is detected, it is recorded in Jira, the defect tracking tool. The defects are corrected and the functions reported in each defect are retested. When corrections have been made and operation is confirmed, the entire process is repeated. This continues until the script can be processed without any significant defects. SOS personnel who will ultimately use the system in a production environment will perform UAT. This approach to application testing provides three significant benefits:
• Personnel who will use the system in a production environment receive an advance look at the application, reducing resistance to change during implementation
• Personnel conducting the test receive a form of "on the job training" during this timeframe, reducing the learning curve when the application is put into production
• SOS has an opportunity to conduct a test and identify application problems, missing features, and future enhancements
3.3.4.3 Process

This section details the user acceptance testing process flow. SOS user community members, who will ultimately use the system in a production environment, will perform UAT. The Catalyst Team will assist in problem identification,
perform any necessary software corrections, and coordinate subsequent regression testing as required. The Catalyst Team will develop the UAT plan and will have primary responsibility for UAT training, preparing SOS and county staff to effectively participate in user acceptance testing. Following completion of training, SOS will be in a position to drive user acceptance testing from the provided UAT plan, internally and at the county level. The Catalyst Team will support SOS in the execution of UAT. The UAT process includes scheduling, defect tracking, training of the testers, creation of appropriate test data and test scripts, and final approval to implement the applications. The proposed flow of the Acceptance Test Management Process is illustrated below:

Figure 4M-6: User Acceptance Test Management Process Flow
The following procedures describe the approach to the UAT process:

AT-1 Tester Completes Defect Report
The Defect Report is created and entered into Jira by each individual tester to record application defects. "Defect" is a generic term that identifies any issue discovered with the new application. Defects may take the form of application problems (sometimes referred to as "bugs"), missing features or functionality, slow performance, cosmetic flaws, or enhancements. Testers will complete a report for each defect discovered. Completed Defect Reports will then be routed to the Testing Lead assigned by SOS for review.
AT-2 Testing Lead Reviews Defect Report
The Testing Lead, with the assistance of designated SOS personnel, will review the Defect Reports submitted by the testing team. The purpose of this review is to ensure the Testing Lead understands the defect being reported. If the defect is not clear or is missing information, the Testing Lead must discuss the defect with the tester. The Testing Lead will ensure the defect reported is properly classified and is indeed a defect, not a tester misunderstanding of application functionality or incorrect use of the application. The Testing Lead will consolidate similar defects to minimize the recording of duplicate defects in Jira.

AT-3 Reject Defect Report/Review with Tester
Upon identification of Defect Reports that are not actual defects, the Testing Lead or other approved personnel will usually review the defect with the tester to ensure the tester understands why the defect was not recorded. This will assist the tester in understanding application functionality and the defect recording process.

AT-4 Testing Lead Inserts/Updates Defect in Jira
The Testing Lead will input reviewed and approved defects into the error database in Jira. The project development team will train the Testing Lead identified by SOS in this process. The Acceptance Test Coordinator (ATC) will also update the defect status as PASS or FAIL when a defect has been retested.

AT-5 SOS Project Team Evaluates Defect Reports
The SOS project team will generate defect reports on a periodic basis as needed to identify new defects and determine the status of existing defects. Near the end of the testing period, these reports will be produced more frequently, oftentimes daily. As required, the SOS project team will review the new defects to ensure they understand the nature of the problem and clarify any questions related to the defects with the Testing Lead. If the SOS project team reviews defects and determines that a defect is not a problem or requires re-classification, they will meet with the Testing Lead and discuss these defect changes.

AT-6 Update the Defect as Rejected
If the SOS project team determines that the defect is not actually a problem, the defect will be marked as Rejected in Jira. A defect will be rejected primarily due to a user misunderstanding of functionality (i.e., there is not really a problem). The Testing Lead will then have the ability to generate reports from Jira of rejected defects for review. If the problem is determined to be a request for an enhancement, the request will be reviewed with the Project Manager for determination of an enhancement date and any appropriate follow-on activities.

AT-7 Assign Repair to Developer
The Catalyst Team project team will assign fixes to developers once the defect has been reviewed and approved. These reviews will take place on an as-needed basis, and more frequently as the testing period ends, oftentimes daily. The Catalyst Team project team will update the defect in Jira to reflect the assignment and the new status (i.e., ready for repair) and print defect lists by developer for distribution to the team.

AT-8 Provide Additional Defect Detail
Developers will attempt to re-create the defect once they have been assigned to repair it. If they are unable to re-create the defect, additional detail will be required to gain a clear understanding of the problem. The Testing Lead will coordinate gathering additional detail from the tester and provide this detail to the SOS and Catalyst Team project teams.

AT-9 Developer Repairs Module
The developer will take the necessary steps to repair the module and perform a unit test to ensure the repairs are correct prior to forwarding the module to the Catalyst Team project test team for validation.

AT-10 Project Test Team Tests Modules and Updates Defect Status
The Catalyst test team will update Jira to reflect that the module has been repaired and is ready for Catalyst independent testing. If the Catalyst Team test team confirms the repairs, it will update the defect to indicate it is ready for a new build. If the Catalyst Team test team finds the repairs have not been made or additional problems have been created, it will update the defect status in Jira to reflect that the repairs are not complete and return the module to the developer with the results of their testing.

AT-11 Release for Build
As developers repair defects and the Catalyst test team confirms the results, repaired modules will be ready for inclusion in the next build. However, additional defects for a given module that have not been corrected may prohibit inclusion of the module in the next build. The Catalyst Team test team will review modules for outstanding defects to ensure the module is in a state adequate to be included in the next build.

AT-12 Create New Build for Acceptance Test
The developer assigned responsibility for application builds, known as the "Buildmaster", will create a new build based upon direction from the Catalyst Team project test team. This individual will also be responsible for confirming that all ancillary tasks required for the build have been completed. The Catalyst Team project team will email the SOS Testing Lead when the next build is scheduled. After the build is complete, the Catalyst test team lead will distribute release notes and a defect report with a list of defects ready for re-testing to the SOS Testing Lead. The SOS Testing Lead will then notify the SOS user acceptance test team of the modules and defects that are ready for re-testing.

AT-13 UAT Team Re-Tests Defect
The SOS Testing Lead will coordinate the re-testing of defects in the user acceptance testing environment. UAT testers will notify the SOS Testing Lead upon confirmation that a defect has been corrected.

AT-14 Test Lead Closes Defect
The SOS Testing Lead or other approved personnel will then update Jira to close the defect. If the tester indicates that the defect remains, the SOS Testing Lead or other approved personnel will update the defect to indicate failure. The SOS project team can then review these defects. The SOS Testing Lead will be responsible for creating status reports of the defects and reporting them to the Catalyst Team and SOS testing teams.
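For illustration, the hedged sketch below models the defect lifecycle implied by steps AT-1 through AT-14 as a set of allowed status transitions. The status names are illustrative assumptions, not the project's actual Jira workflow configuration.

```python
# Hedged sketch: defect lifecycle implied by AT-1 through AT-14, expressed as allowed
# status transitions. Status names are illustrative, not the actual Jira workflow.
ALLOWED_TRANSITIONS = {
    "Reported":          {"Reviewed", "Rejected"},              # AT-2 / AT-3, AT-6
    "Reviewed":          {"Ready for Repair"},                  # AT-7
    "Ready for Repair":  {"Repaired", "Needs More Detail"},     # AT-9 / AT-8
    "Needs More Detail": {"Ready for Repair"},
    "Repaired":          {"Ready for Build", "Ready for Repair"},  # AT-10 pass / fail
    "Ready for Build":   {"In Build"},                          # AT-11, AT-12
    "In Build":          {"Retest"},                            # AT-13
    "Retest":            {"Closed", "Failed Retest"},           # AT-14
    "Failed Retest":     {"Ready for Repair"},
}

def is_valid_transition(current: str, new: str) -> bool:
    """Check whether a proposed status change follows the defined lifecycle."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

print(is_valid_transition("Retest", "Closed"))        # True
print(is_valid_transition("Reported", "In Build"))    # False: must be reviewed and repaired first
```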
3.3.4.4 Defect Tracking

The SOS UAT team and the Catalyst Team project team will use Jira to record and maintain defect reports. A Defect Report is the mechanism each individual tester will use to record
application defects. Defect is a generic term that identifies any issue the UAT team discovers within the new applications. Defects may take the form of application problems (sometimes referred to as "bugs"), missing features or functionality, slow performance, cosmetic flaws, or desired enhancements. Testers create a Jira defect report for each defect discovered. Completed Defect Reports will then be routed to the Testing Lead for review.

3.3.4.5 Training

Formal classroom training on application functionality will be provided to SOS SMEs, Testing Leads, and key users by the Catalyst Team project team prior to the beginning of UAT. The testers will be trained in the process and procedures of UAT. The testers will learn how to perform testing, including how to read and interpret test scripts. They will be taught how to interpret test results and how to record them. The testers will be introduced to Jira, the defect reporting tool, for "Problem Incident Reporting" and will be taught how to report defects. In addition to the test procedures, the Testing Leads will be trained to use Jira to record and maintain defect reports.

3.3.4.6 Test Data

The Database Administrator (DBA) will provide a test database instance with characteristics identical to the Production database, including enhancements from the development instance and data from the data conversion process. In essence, the test database will be a copy of the conversion database made at a given point in time; it will not be kept synchronized with the conversion database. Data from conversion will be utilized for testing against test scripts. A separate test server will host the VoteCal application and DLLs. The Test Build will be modified with the coding changes based on defects found in UAT and will be version controlled using Subversion.

VoteCal test data will be created in several different ways. One of the key methods involves the careful creation of data sets that contain specific characteristics that will provide for testing specific scenarios and have expected outcomes. Another key method involves creating test data from real data sets derived from the SOS legacy systems. Once the data has been obtained, it will be transmogrified (replacing attributes such as last name, first name, social security number, date of birth, etc. across all records in the data set) to the point that the test data records are anonymous and do not resemble real records from the originating system. This technique provides the added benefit that, although the complete records do not resemble those in the originating system, the data is representative of the real data the system will be processing once fully implemented.

3.3.4.7 Test Scripts

Detailed system test cases will evolve into user acceptance test cases. During testing, SOS will have the flexibility to add test cases and modify existing test cases to meet their acceptance requirements. Any additions or modifications to the test cases will need to be documented by SOS. Please refer to Appendix D for a sample test script.

3.3.4.8 Approval

The SOS UAT team will execute the test cases and confirm that the new VoteCal system conforms to the requirements set forth. The SOS Project Manager will review and accept the test results of the user acceptance test. The User Acceptance project test team will perform the User Acceptance tests, execute the test cases, and confirm that all processes in
VoteCal work as designed and meet requirements, thus accepting the system for production. UAT is only part of the Go-Live criteria, which are further outlined in the Roll-out Plan, so only the criteria to accept the code for production are presented here. The following table describes the severity levels of errors and the number of each type of error a business function may have in order to continue on to production.
Level | Example | Number Allowed to Exit User Acceptance Testing
1 | Specific business function is unavailable. | 0
2 | A VoteCal screen, in a specific business function, terminates unexpectedly. | 0 screens per business function
3 | A calculation or screen button, dropdown, etc. in a specific business function does not function. | 5 objects per business function
4 | Cosmetic error (e.g. colors, fonts, pitch size), but overall calculation is correct. | 15 errors per business function
5 | A change of the screen functionality, a new screen, etc. is desired. | No limit

Note: Severity ranges from highest at Level 1 to lowest at Level 5.

3.3.5 Regression Testing
This section details the BearingPoint Delivery Framework methodology for regression testing. The discussion of the methodology includes detailing scope and approach for the regression testing process.

3.3.5.1 Scope

During the problem correction process, appropriate regression testing is conducted. By regression testing, we mean selective re-testing to detect faults introduced during the modification and correction effort, both to verify that the modifications and corrections have not caused unintended adverse effects, and to verify that the modified and related (possibly affected) system business functions still meet their specified requirements. Regression testing plays a key role in providing a retest function when aspects of the system environment change that have the potential to adversely impact the functionality of the system. Examples include system maintenance through applying a patch to software such as the operating system, development environment, or application database.

3.3.5.2 Approach

The Catalyst test team will be responsible for developing test plans and all test materials for regression testing, as well as for executing all tests and certifying their completion prior to user testing for all functionality being delivered. In addition, when a new business function is delivered for production, regression testing must also be performed to verify that the already installed, related (possibly affected) system business functions still meet their specified requirements.
When a new business function is delivered or an unexpected programming change is made in response to a problem identified during user testing, the Catalyst test team will develop a regression test plan based on its understanding of the program and the change being made. The test plan has two objectives: first, to validate that the change/update has been properly incorporated into the program; and second, to validate that there has been no unexpected change to the unmodified portions of the program. Thus, the Catalyst test team will be expected to:
- Create a set of test conditions, test cases, and test data that will validate that the business function/change has been incorporated correctly.
- Create a set of test conditions, test cases, and test data that will validate that the unmodified/related portions of the program or application still operate correctly.
- Manage the entire cyclic process.
The Catalyst test team will execute the regression test and certify its completion in writing to SOS prior to passing the modified application to the users for retesting. In designing and conducting such regression testing, the Catalyst Team will assess the risks inherent in the modification being implemented and weigh those risks against the time and effort required to conduct the regression tests. In other words, the Catalyst test team will design and conduct reasonable regression tests that are likely to identify any unintended consequences of the modification while taking schedule and economic considerations into account.
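As a purely illustrative sketch (not part of the BearingPoint Delivery Framework tooling), selecting the regression scripts for a modified business function might look like the following. The script names, business function tags, and data structures are assumptions introduced for the example.

```python
# Illustrative sketch: selecting regression test scripts for a code change.
# The mapping of scripts to business functions is hypothetical sample data.

# Each test script is tagged with the business functions it exercises.
TEST_SCRIPTS = {
    "TS-010 Maintain County": {"Maintain County"},
    "TS-021 Voter Search": {"Voter Search", "Maintain County"},
    "TS-035 Batch Voter Load": {"Batch Processing"},
}

def select_regression_scripts(changed_functions, related_functions):
    """Return scripts that validate the changed functions plus the related
    (possibly affected) functions, per the two regression objectives above."""
    targets = set(changed_functions) | set(related_functions)
    return sorted(
        name for name, functions in TEST_SCRIPTS.items()
        if functions & targets
    )

# Example: a fix to "Maintain County" with "Voter Search" flagged as related.
print(select_regression_scripts(["Maintain County"], ["Voter Search"]))
```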
3.3.6 Performance Testing
This section details the BearingPoint Delivery Framework methodology for performance testing, including the scope, approach, test scripts, and approval for the performance testing process. The proposed VoteCal System was thoroughly performance tested as part of the implementation effort in Illinois; the same testing process will be repeated here.
3.3.6.1 Scope
Performance tests will be conducted to ensure that the configuration adequately meets the performance requirements identified in the RFP. Performance testing includes stress/load testing and volume testing. Performance testing will be performed to demonstrate and validate that the system and its components meet the performance criteria and do not introduce unacceptable degradation in system performance. Before system performance testing is performed, the performance-related requirements will be clearly defined and documented here. The following requirements that relate to system performance and this implementation of VoteCal were extracted from the confirmed requirements matrix.
Requirement | RFP Section | Catalyst Team's Response to Requirement
{TBD} | {TBD} | {TBD}
{TBD} | {TBD} | {TBD}
The final version of this document will populate the preceding table with the system sizing and performance requirements identified in the RFP.
3.3.6.2 Approach
Different types of performance tests will be conducted. This section highlights the types of performance testing.
3.3.6.2.1 Stress and Load Testing
Stress testing typically verifies software functionality in an environment where system resources are saturated. It is used to determine that a function, or group of functions, behaves properly when performed repeatedly over an extended period of time and in an environment where the system resource limits have been stretched. The VoteCal application will run on industrial-strength, high-end hardware with a top-tier SQL Server database that can handle extremely large transaction volumes in a production environment. We do not anticipate crossing the transaction-volume threshold that would require expensive and time-consuming stress testing of the VoteCal application using software specially designed for that purpose. Stress tests are indicated only where large transaction volumes, in the range of thousands per second with thousands of users, are anticipated. The public website functions for polling place and registration status are the one exception; these will be addressed with targeted use of the Segue performance testing tool to simulate stress. For applications with fewer than 1,000 seats, volume testing is the most effective way to assure that the application can process the volumes that can reasonably be forecast.
3.3.6.2.2 Volume Testing
Volume testing verifies that the VoteCal system responds to a large number of transactions and large amounts of data within acceptable time limits. Day-to-day usage of the VoteCal system by SOS staff will not produce a large volume of online transactions, as is inherent in voter registration business processing; rather, the high-volume transactions are processed in batch mode. Volume testing will be conducted for both batch and online processes. Volume testing will be performed primarily for the batch processes, using two times the anticipated volume of batch transactions and production data. In addition, we will conduct concurrent testing of the online application, in which a group of testers simultaneously executes test scripts and performs various inquiries. This will help ensure the system satisfies the performance requirements when subjected to simultaneous usage. During the concurrent tests, measurements will be taken to ensure that VoteCal meets or exceeds the performance requirements presented above. The concurrent testers will record the measurements and will be coordinated by phone or Internet during the testing process.
3.3.6.3 Test Data
VoteCal performance testing data will be created in several different ways. One key method involves the careful creation of data sets that contain specific characteristics, provide for testing specific scenarios, and have expected outcomes. Another key method involves creating test data from real data sets derived from the SOS legacy systems. Once the data has been obtained, it will be transmogrified (replacing attributes such as last name, first name, social security number, date of birth, etc. across all records in the data set) to the point that the test data records are anonymous and do not resemble real records from the originating system. This technique provides the added benefit that, although the
complete records do not resemble those in the originating system, the data is representative of the real data the system will be processing once fully implemented.
3.3.6.4 Test Scripts
Specific test scripts that verify the performance-related requirements identified in the RFP will be created and executed. Scripts related to the execution of the batch processes, and scripts that generate concurrent requests for both inquiry and transaction maintenance, will be developed. Also, SOS will identify IT and functional users who will test the VoteCal system, in the training room with a stopwatch if necessary, to verify the response time under concurrent requests. Please refer to Appendix D for a sample test script.
3.3.6.5 Approval
SOS and the Catalyst Team will verify and accept the results of the performance tests prior to the applications being placed into production.
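As an illustration of the concurrent testing and response-time measurement described in Sections 3.3.6.2.2 and 3.3.6.4 above, the sketch below drives simultaneous inquiry requests and records their timings. The URL, worker count, and threshold are placeholders introduced for the example; the project's formal measurements would be taken with SilkPerformer and the procedures described above.

```python
# Illustrative sketch: driving concurrent inquiries and recording response times.
# The URL, worker count, and 3-second threshold are placeholder assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://votecal-test.example/status"  # hypothetical test endpoint
WORKERS = 20                # simulated concurrent testers
REQUESTS_PER_WORKER = 10
THRESHOLD_SECONDS = 3.0     # placeholder performance requirement

def timed_request(_):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET_URL, timeout=30).read()
    except Exception:
        return None  # failures would be recorded separately in a real test
    return time.perf_counter() - start

def run_concurrent_test():
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(timed_request, range(WORKERS * REQUESTS_PER_WORKER)))
    timings = [r for r in results if r is not None]
    if timings:
        slow = sum(1 for t in timings if t > THRESHOLD_SECONDS)
        print(f"requests: {len(timings)}, max: {max(timings):.2f}s, "
              f"avg: {sum(timings)/len(timings):.2f}s, over threshold: {slow}")

if __name__ == "__main__":
    run_concurrent_test()
```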
3.3.7 Security Testing
This section details the BearingPoint Delivery Framework methodology for security testing, including the scope of and approach to the security testing process. The Catalyst Team will engage team members from the BearingPoint Solutions Group who specialize in security to perform security testing. The BearingPoint team has extensive experience in performing internal and external assessment services focusing specifically on vulnerability mitigation and aggressive network penetration testing (also known as Black Team Assessments).
3.3.7.1 Scope
This section details our conceptual approach to security testing. BearingPoint estimates that the specified security testing effort will comprise three weeks for two full-time employees (FTEs). The Catalyst Team will work with SOS to define and refine the scope of the testing effort, using what is provided in this section as a starting point. We will work with SOS to negotiate mutually acceptable rules of engagement for the security testing effort.
3.3.7.2 Approach
BearingPoint will evaluate the protection provided by the security architecture and security controls through the use of automated tools that probe for remote network security vulnerabilities. BearingPoint's testing activities will consist of attempts to electronically transgress the external firewalls, routers, and other network devices protecting the SOS network. The testing activities are typically pursued in the stages described below. BearingPoint intends to deploy a three-phase approach to this external penetration testing assessment. Figure 4M-7 below depicts the three phases of this approach.
Figure 4M-7: Penetration Testing Phased Approach
3.3.7.2.1 Network Discovery
Network discovery consists of mapping and identifying the active devices on the network that directly support the SOS VoteCal system. During this stage of the activity, BearingPoint will attempt to characterize the target network and VoteCal system components, begin to develop an understanding of the network architecture, and determine the devices and services available on the target application. BearingPoint will also determine the network architecture and security mechanisms used to protect the application, both inside and out.
3.3.7.2.2 Vulnerability Scanning
Vulnerability scanning consists of scanning the active devices for vulnerabilities. Activities will focus on scanning the devices discovered during network discovery for potential vulnerabilities. The Catalyst Team will test TCP and UDP services, including all common services such as FTP, Telnet, Sendmail, DNS, SMTP, and SNMP, as well as the most commonly used and exploited UDP ports inside the network. Vulnerability scanning is the automated process of proactively identifying vulnerabilities of computing systems in a network in order to determine if and where a system can be exploited and/or threatened. While public servers are important for communication and data transfer over the Internet, they open the door to potential security breaches by threat agents, such as malicious hackers. Vulnerability scanning employs software that seeks out security flaws based on a database of known flaws, tests systems for the occurrence of these flaws, and generates a report of the findings that an individual or an enterprise can use to tighten the network's security. Vulnerability scanning typically refers to the scanning of systems that are connected to the Internet, but it can also refer to system audits on internal networks that are not connected to the Internet in order to assess the threat of rogue software or malicious employees in an enterprise. Our internal security audit process is focused on the computers, servers, infrastructure, and underlying software comprising the target. The test can be performed either by a team with no prior knowledge of the site (black box) or with full disclosure of the topology and environment (crystal box), depending on the customer's preference. This phase of testing involves network enumeration, in which target hosts are identified and analyzed and the behavior of security devices such as screening routers and firewalls is analyzed. Vulnerabilities within the target hosts are then identified and verified, and their implications assessed. Vulnerability scans will be conducted from IP addresses that have been pre-established with the customer. Open-source tools such as UnicornScan, Nmap, Dig, Google, etc. will be used in the evaluation of all systems to provide a more complete view of the site's security (an illustrative scan sketch follows the list below). Testing will be performed from a number of network access points (as agreed upon in the Rules of Engagement (RoE)), representing each logical and physical segment. As an example, our 2007 vulnerability assessment and management target list for a major Federal Agency included:
- firewalls
- intrusion detection systems
- routers
- switches
- external penetration testing
- war dialing
- Oracle and SQL Server configuration testing
- VoIP infrastructure assessment
- mainframe assessment
- Unix server farm assessment
- Windows server assessment
- wireless access point reviews
- external third-party connection assessments
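The scan sketch referenced above is shown here for illustration only. It drives the open-source Nmap tool for host discovery and TCP service scanning; the target range is a placeholder (a documentation address block), and the actual targets, options, and timing would be governed by the agreed Rules of Engagement.

```python
# Illustrative sketch: invoking Nmap (recent releases) for host discovery and
# TCP service scanning. The target range is a placeholder, not a real SOS network.
import subprocess

TARGETS = "192.0.2.0/28"   # placeholder documentation range

def discover_and_scan(targets):
    # Host discovery only (ping scan), no port scan.
    subprocess.run(["nmap", "-sn", targets], check=True)
    # TCP connect scan of common ports with service version detection,
    # writing XML output for later analysis.
    subprocess.run(
        ["nmap", "-sT", "-sV", "--top-ports", "100", "-oX", "scan.xml", targets],
        check=True,
    )

if __name__ == "__main__":
    discover_and_scan(TARGETS)
```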
Our assessment process also involves an on-site in-brief of activities during this phase, interviews with site personnel, and a review of detailed network information that compares our findings against what the client "thinks" the network topology looks like. We review policies and procedures, conduct on-site security evaluations of systems as required/requested, and determine the existence of rogue systems based on the Rules of Engagement. We also determine and assist in the mitigation of data exfiltration when it is discovered, using deep scanning and the traffic baselines we have developed in our work. We have adopted a hybrid approach combining two proven testing methods: the NSA-approved INFOSEC Evaluation Methodology (IEM) and the Open Source Security Testing Methodology Manual (OSSTMM). The IEM was developed by analyzing processes implemented throughout the evaluation community, including the NSA, government agencies, and industry. The IEM process involves many specific and repeatable stages:
- Identify vulnerabilities and exposures in the customer network enclave, at the network boundary, and at the network device, server, and workstation level
- Transfer knowledge of best practices and industry standards to the end user
- Perform first-order prioritization of vulnerabilities based on the customer's mission and input
- Identify exposed information through the identification and verification of customer information system assets
- Identify vulnerabilities to systems that process, store, or transmit critical information
- Validate the actual INFOSEC posture of these systems
- Recommend solutions to eliminate or mitigate identified vulnerabilities based on security objectives
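To illustrate the first-order prioritization stage listed above, findings could be ranked by weighing a standard severity score against the criticality of the affected asset to the customer's mission. The scores, host names, and weights in this sketch are made-up sample values, not project findings.

```python
# Illustrative sketch: first-order prioritization of vulnerability findings.
# CVSS scores and asset criticality ratings are sample values, not real findings.

findings = [
    {"host": "web-01", "issue": "outdated TLS configuration", "cvss": 5.3},
    {"host": "db-01",  "issue": "missing database patch",      "cvss": 8.1},
    {"host": "app-02", "issue": "verbose error messages",      "cvss": 4.0},
]

# Hypothetical mission-criticality weights assigned with customer input.
ASSET_CRITICALITY = {"db-01": 1.0, "web-01": 0.8, "app-02": 0.5}

def prioritize(items):
    """Rank findings by CVSS score weighted by the asset's mission criticality."""
    return sorted(
        items,
        key=lambda f: f["cvss"] * ASSET_CRITICALITY.get(f["host"], 0.5),
        reverse=True,
    )

for finding in prioritize(findings):
    print(finding["host"], finding["issue"], finding["cvss"])
```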
3.3.7.2.3 Vulnerability Exploitation
Vulnerability exploitation consists of attempts to exploit vulnerabilities to confirm a system's susceptibility to attack. Generally, this final activity will not pursue exploits that may result in a denial of service unless specifically requested and agreed to by the Catalyst Team and SOS. The following is a detailed description of the vulnerability scanning tools employed by the BearingPoint Security Solutions Group:
- Nikto: Nikto is an open source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 3200 potentially dangerous files/CGIs, versions on over 625 servers, and version-specific problems on over 230 servers. Scan items and plugins are frequently updated and can be automatically updated (if desired). It uses Whisker/libwhisker for much of its underlying functionality. It is a great tool, but its value is limited by its infrequent updates; the newest and most critical vulnerabilities are often not detected.
- Paros proxy: A Java-based web proxy for assessing web application vulnerability. It supports editing/viewing HTTP/HTTPS messages on the fly to change items such as cookies and form fields. It includes a web traffic recorder, web spider, hash calculator, and a scanner for testing common web application attacks such as SQL injection and cross-site scripting.
- WebScarab: In its simplest form, WebScarab records the conversations (requests and responses) that it observes, and allows the operator to review them in various
ways. WebScarab is designed to be a tool for anyone who needs to expose the workings of an HTTP(S)-based application, whether to allow the developer to debug otherwise difficult problems or to allow a security specialist to identify vulnerabilities in the way that the application has been designed or implemented.
- WebInspect: SPI Dynamics' WebInspect application security assessment tool helps identify known and unknown vulnerabilities within the Web application layer. WebInspect can also help check that a Web server is configured properly, and attempts common web attacks such as parameter injection, cross-site scripting, directory traversal, and more.
- Whisker/libwhisker: Libwhisker is a Perl module geared towards HTTP testing. It provides functions for testing HTTP servers for many known security holes, particularly the presence of dangerous CGIs. Whisker is a scanner that used libwhisker but is now deprecated in favor of Nikto, which also uses libwhisker.
- Burp Suite: Burp Suite allows an attacker to combine manual and automated techniques to enumerate, analyze, attack, and exploit web applications. The various Burp tools work together effectively to share information and allow findings identified within one tool to form the basis of an attack using another.
- Wikto: Wikto is a tool that checks for flaws in web servers. It provides much the same functionality as Nikto but adds various interesting pieces of functionality, such as a Back-End miner and close Google integration. Wikto is written for the MS .NET environment, and registration is required to download the binary and/or source code.
- Acunetix Web Vulnerability Scanner: Acunetix WVS automatically checks your web applications for vulnerabilities such as SQL injection, cross-site scripting, and weak password strength on authentication pages. Acunetix WVS boasts a comfortable GUI and the ability to create professional website security audit reports.
- Watchfire AppScan: AppScan provides security testing throughout the application development lifecycle, easing unit testing and security assurance early in the development phase. AppScan scans for many common vulnerabilities, such as cross-site scripting, HTTP response splitting, parameter tampering, hidden field manipulation, backdoors/debug options, buffer overflows, and more.
- N-Stealth: N-Stealth is a commercial web server security scanner. It is generally updated more frequently than free web scanners such as Whisker/libwhisker and Nikto, but take its web site with a grain of salt: the claims of "30,000 vulnerabilities and exploits" and "dozens of vulnerability checks added every day" are highly questionable. Also note that essentially all general VA tools such as Nessus, ISS Internet Scanner, Retina, SAINT, and Sara include web scanning components, though they may not all be as up to date or flexible. N-Stealth is Windows-only, and no source code is provided.
- Tenable Nessus vulnerability scanner: A world leader in active scanners, featuring high-speed discovery, configuration auditing, asset profiling, sensitive data discovery, and vulnerability analysis of your security posture. Nessus scanners can be distributed throughout an entire enterprise, inside DMZs, and across physically separate networks.
- SAINT vulnerability scanner: Lets you exploit vulnerabilities found by the scanner with the integrated penetration testing tool, SAINTexploit. Shows you how to fix the vulnerabilities and where to begin remediation efforts, starting with the exploitable vulnerabilities.
Lets you scan and exploit both IPv4 and IPv6 addresses. Shows you whether the network is compliant with PCI security standards. Correlates CVE, CVSS, the
presence of exploits, and more. Allows you to design and generate vulnerability assessment reports quickly and easily. Lets you present the findings of even the largest network scans in an easy-to-read format with colorful charts. Shows you whether your network security is improving over time by using the trend analysis report. Gives you the following cross-references, which can be automatically correlated in reports: CVE, IAVA, OSVDB, BID, CVSS 2.0 (a PCI requirement), and SANS/FBI Top 20. Gives you the option to store the vulnerability data locally or remotely; your vulnerability data does not need to be sent across the Internet as it does with some other scanners. Provides automatic updates at least every two weeks, or sooner for a critical vulnerability announcement. Lets you manage and schedule scans across large enterprises with the SAINTmanager™ remote management console.
3.3.8 Fail Over/Fail Back Testing
This section details the BearingPoint Delivery Framework methodology for fail over and fail back testing, including the scope, approach, process, testing scripts, and approval for the fail over/fail back testing process. The VoteCal system features a tight architecture that supports performance and redundancy requirements through SQL Server clusters and Application/Web Server pairs.
3.3.8.1 Scope
The purpose of fail over and fail back testing is to validate that the failure of application components (namely hardware devices) will not bring the entire system down, and to verify the fail back process for bringing the system back to a normal state of operation after a fail over event.
3.3.8.2 Approach
The Catalyst test team will be responsible for developing test plans and all test materials for fail over and fail back testing. The test team will work with SOS IT staff in executing all tests. The test team will document and validate the fail over and fail back testing procedures and results.
3.3.8.3 Process
The Catalyst test team will use a number of approaches to conduct fail over testing, including the following:
- While conducting application testing scenarios, the SQL Server service will be stopped on one of the servers, simulating a database outage or complete hardware failure.
- While conducting application testing scenarios, the Web Server service will be stopped on one of the servers, simulating a web server outage or complete hardware failure.
- While conducting application testing scenarios, the Ethernet network cable will be removed from the server, simulating a network outage or complete hardware failure.
Through execution of these scenarios, the fail over test will be considered successful if the VoteCal system remains responsive to user requests.
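To judge whether the system "remains responsive to user requests" while a component is taken down, a simple availability probe could poll the application throughout each scenario, as in the illustrative sketch below. The URL, duration, and polling interval are placeholder assumptions; the formal verification would follow the documented procedures and test scripts.

```python
# Illustrative sketch: polling the application while a fail over scenario runs.
# The URL, duration, and interval are placeholder assumptions.
import time
import urllib.request

PROBE_URL = "http://votecal-test.example/health"  # hypothetical endpoint
DURATION_SECONDS = 300   # length of the fail over scenario
INTERVAL_SECONDS = 5

def probe_during_failover():
    failures = 0
    end = time.time() + DURATION_SECONDS
    while time.time() < end:
        try:
            urllib.request.urlopen(PROBE_URL, timeout=10).read()
            status = "ok"
        except Exception:
            failures += 1
            status = "UNAVAILABLE"
        print(time.strftime("%H:%M:%S"), status)
        time.sleep(INTERVAL_SECONDS)
    print("probe failures during scenario:", failures)

if __name__ == "__main__":
    probe_during_failover()
```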
Fail back testing for each defined scenario, or group of related scenarios, will include testing the documented procedures for bringing the system or systems back to full operation in a normal production environment, and reporting the test results.
3.3.8.4 Testing Scripts
Specific test scripts that verify the fail over related requirements will be created and executed. Scripts related to the execution of the batch processes, and scripts that generate concurrent requests for both inquiry and transaction maintenance, will be developed as required. Also, SOS will identify IT and functional users who will test the VoteCal system, in the training room, to verify that the system remains responsive during the execution of the tests. Fail back procedures will be tested to determine that, when the specified and documented fail back procedures are executed, the systems are returned to a normal operational state. Please refer to Appendix D for a sample test script.
3.3.8.5 Approval
SOS and the Catalyst Team will verify and accept the results of the fail over and fail back tests prior to the applications being placed into production.
4. Testing Process Management
The progress of the system and user acceptance testing will be maintained in spreadsheets. The Test Tracking Spreadsheet is used to track individual testers against the functional areas to which they are assigned; see Appendix C for an example of the tracking spreadsheet. The Test Script Tracking Spreadsheet is used to monitor and record the progress of individual test scripts by functional area. Please refer to Appendix D for an example of a test script.
4.1 Test Tracking Spreadsheet
Separate copies of the Test Tracking Spreadsheet will be created, one each for the System Testing and User Acceptance Testing scenarios. The spreadsheet includes the following:
- A row for each Functional Area
- A column for each Tester
Please refer to Appendix C for more details and a sample of the Test Tracking Spreadsheet.
4.2 Test Script Tracking Spreadsheet
As test scripts are written and coding nears completion, test scripts will be tracked on a test script tracking spreadsheet. The purpose of this spreadsheet is to track the history of the test effort by showing the progress and status of the test scripts. This spreadsheet will have the following columns:
- Functional Area
- Script Name
- Tester(s) Assigned
- Date Started
- Date Completed
- Status
- Testing Lead Signoff
Please refer to Appendix D for more details and a sample of the test script tracking spreadsheet.
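If the test script tracking spreadsheet is exported to CSV with the columns listed above, progress could be summarized as in the following sketch. The file name, the export step, and the status values counted are assumptions for illustration, since the plan specifies Excel spreadsheets as the working format.

```python
# Illustrative sketch: summarizing test script status from a CSV export of the
# Test Script Tracking Spreadsheet. File name and export format are assumptions.
import csv
from collections import Counter, defaultdict

def summarize(path="test_script_tracking.csv"):
    by_area = defaultdict(Counter)
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            by_area[row["Functional Area"]][row["Status"]] += 1
    for area, counts in sorted(by_area.items()):
        total = sum(counts.values())
        done = counts.get("Complete", 0)
        print(f"{area}: {done}/{total} complete, {counts.get('Failed', 0)} failed")

if __name__ == "__main__":
    summarize()
```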
5. Test Environment Management
The Catalyst Team understands the importance of having multiple environments for development, testing, training, and production. Multiple environments are important in system integration projects to:
- Ensure the integrity of production code and data
- Permit rigorous testing of code before it is migrated to production
- Allow for training to occur in a stable environment prior to the production implementation
- Isolate the impacts of problems
Five separate and isolated environments are required to support the entire software development and deployment life cycle:
- Development
- User Acceptance Testing (UAT)
- Training
- Integration Testing and Staging / Pre-Production
- Production
To ensure that no subtle problems arise in the application because of hardware differences or clustering incompatibilities, we have used the same components throughout the environment, only reducing the number of clustered and backup servers and the required Storage Area Network capacity. To simplify the management of the various configurations, we have architected our environments so that the Integration Testing, User Acceptance Testing, and Training environments are three identical implementations of the same specification.
6. Requirements Traceability
As with System/Integration Testing, it is critical to have traceability of User Acceptance Testing test scripts and results back to the business and technical requirements. As described in the Requirements Management Plan, the Requirements Traceability Matrix will provide for traceability between the requirements managed within IBM Rational RequisitePro and the test scenarios. IBM Rational RequisitePro will contain each of the business functional and technical requirements in the RFP, and any others introduced via
the change control process. Please refer to the Requirements Management Plan for a more detailed description of the complete requirements management process. The following diagram, copied from the Requirements Management Plan, details the requirements traceability between test case scenarios, functional requirements, and features.
Figure 4M-8: Requirements Traceability Diagram
6.1 Requirements Testing Metrics
Acceptance testing will provide metrics similar to those used for System Testing, including:
- Testable Requirements (TR): the total number of requirements that must be validated.
- Validated Requirements (VR): each TR is linked to one or more test scripts that exercise that particular requirement. Once every test script linked to a requirement has passed, that requirement is passed. VR is the count of all passed requirements.
- Success Rate: VR divided by TR, expressed as a percentage.
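As a worked illustration of these metrics, the sketch below computes TR, VR, and the Success Rate from a requirement-to-script mapping. The requirement IDs, script names, and pass results are made-up sample data, not project figures.

```python
# Illustrative calculation of the requirements testing metrics defined above.
# The requirement-to-script mapping and pass results are made-up sample data.

# Each testable requirement (TR) is linked to the test scripts that exercise it.
requirement_scripts = {
    "REQ-001": ["TS-010", "TS-011"],
    "REQ-002": ["TS-021"],
    "REQ-003": ["TS-035", "TS-036"],
}
passed_scripts = {"TS-010", "TS-011", "TS-021", "TS-035"}  # sample results

tr = len(requirement_scripts)  # Testable Requirements
vr = sum(                      # Validated Requirements: all linked scripts passed
    1 for scripts in requirement_scripts.values()
    if all(s in passed_scripts for s in scripts)
)
success_rate = 100.0 * vr / tr
print(f"TR={tr}, VR={vr}, Success Rate={success_rate:.1f}%")  # TR=3, VR=2, 66.7%
```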
7. Tools
The following is a listing of the tools used to support the testing and other related processes for the VoteCal project:
- Catalyst Team VoteCal Solution Framework: The Catalyst Team's baseline artifacts provide a comprehensive jump-start for the VoteCal solution.
- CruiseControl.net: Continuous integration server
- FxCop: Analyzes managed code assemblies, facilitating the creation of test cases that support identification and testing of component logic branches.
- Jira: Issue tracking
- IBM Rational RequisitePro: Requirements tracking, test case, and test script management
- Microsoft Visio: Visual modeling of as-is and to-be models
- Microsoft Visual Studio 2005 Professional (.NET Framework 2.0): Software development, debugging, and unit testing
- NAnt: Automated builds
- NUnit: Unit testing
- Segue (Borland) SilkPerformer: Performance testing
- Subversion: Software version control
8. Glossary
Acceptance Testing – Formal testing performed by users prior to approving the system for production readiness (sometimes called user acceptance testing).
Black Box Testing – Black box testing utilizes an external perspective of the test object to derive test cases. The tester selects valid and invalid input and determines the correct output. There is no knowledge of the test object's internal structure.
Business Function – A group of one or more modules that, when grouped together, allows a user to perform a complete business transaction from beginning to end, such as TBD. Business functions will be tested by Catalyst Team and VoteCal Functional Testers during System/Integration Testing and by SOS during User Acceptance Testing (UAT).
Component – A discrete element, such as the "Address" tab within Maintain County. Components will be tested by the Developers and Catalyst Team Functional Testers during Unit and Independent Unit Testing.
Defect – A generic term that identifies any problem discovered within the application. Defects may take the form of application problems (sometimes referred to as "bugs"), missing features or functionality, slow performance, cosmetic flaws, or desired enhancements.
Defect Report – The mechanism each individual tester will use to record application defects; defect reports are created using Jira.
GUI – Graphical User Interface.
Independent Unit Testing – The testing carried out by individual programmers (or test team members) on code they did not write.
Instance – Used to denote a dynamic set of interacting objects; in essence, a "cut" or "build" of code that is made available to be tested.
Integration – The process or device used to connect two or more systems that were not originally designed to work together.
Integration Test – Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.
Jira – A software tool to record, track, and report defects by priority, category, type, project module, functional area, initiator, and impact area.
Module – A set of objects and components that, when combined, form a workable, testable functional unit such as "Maintain County". Modules will be tested by Catalyst Team Developers and Functional Testers during Unit and Independent Unit Testing.
Negative Testing – Testing for fail conditions; the expected outcome is a failed test. For example, a name field requires input of alpha characters and the test script calls for input of numeric or special characters.
Object – The lowest level of testable code. Objects will be tested by developers during Unit Testing using automated testing tools.
Positive Testing – Testing for pass conditions; the expected outcome is a passed test. For example, a name field requires input of alpha characters and the test script calls for input of alpha characters.
Problem Incident Reporting (PIR) – An automated tool for reporting, tracking, and resolving problems. It is also used to manage enhancement requests and potential change orders.
Representative Sampling – Testing performed by dividing the population (the "total dataset") into subpopulations (strata, or "representative data") and taking random samples of each stratum. Also referred to as Stratified Sampling or Proportional Sampling.
Subject Matter Expert (SME) – A person who is a recognized expert in a particular field or has extensive knowledge of a particular subject.
System Testing – The final testing stage on a completed project (prior to client acceptance testing), when all hardware and software components are tested as a whole.
Unit Testing – The testing carried out by individual programmers on their own code prior to releasing application components.
User Acceptance Testing (UAT) – Formal testing performed by users prior to accepting the system (sometimes called acceptance testing). Accepting the system means that SOS has determined that it meets the acceptance criteria and is ready for promotion to the Production environment.
White Box Testing – White box testing utilizes an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs.
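The positive and negative testing definitions above can be illustrated with a small, hypothetical validator and two unit-test-style checks. The validator below is not part of VoteCal; it exists only to show the two test types described in the glossary.

```python
# Illustrative sketch of positive vs. negative testing using a hypothetical
# name-field validator (alpha characters only), per the glossary examples above.
import unittest

def is_valid_name(value):
    """Accept only non-empty alphabetic input (spaces and hyphens allowed)."""
    stripped = value.replace(" ", "").replace("-", "")
    return bool(stripped) and stripped.isalpha()

class NameFieldTests(unittest.TestCase):
    def test_positive_alpha_input_accepted(self):
        # Positive test: expected outcome is a pass with valid alpha input.
        self.assertTrue(is_valid_name("Santa Cruz"))

    def test_negative_numeric_input_rejected(self):
        # Negative test: expected outcome is rejection of numeric/special input.
        self.assertFalse(is_valid_name("12345!"))

if __name__ == "__main__":
    unittest.main()
```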
Appendix A – Test Defect Log
Defect Tracking ID | Short Desc | Long Desc | Severity | Project Impact | Opened By | Test ID | Stage Found In | Defect Type | Assigned To | Additional Participants | Target Completion Date | Status | Actual Completion Date | Comments | Resolution
Key/Instructions:
Field Name | Description/Instructions
Defect Tracking ID | Enter the Defect Tracking ID. This may be the Test Case ID with a "D" in front of it or a sequential number, depending upon client request.
Short Description | In a few words, enter a short description of the defect that would succinctly describe it.
Long Description | Enter a long, detailed description of the defect that can be used to correct the issue.
Severity | Enter 1-4 (see chart below). Severity refers to the extent of the anomaly in more objective engineering or business terms (importance of the fix to the client).
Project Impact | Enter 1-4 (see chart below). Project Impact refers to the impact required in resolving the defect to the project's resources, schedule, or other commitments.
Opened By | Who opened this defect ticket? It may be either initials or a full name.
Test ID | Indicate the ID of the test script/case that was being executed when this defect was identified.
Stage Found In | What stage was this defect found in? Unit Test, Multi-Unit Test, System Test, Performance Test, Acceptance Test, and Production Regression are valid entries.
Defect Type | Defect type includes: missed requirement, incorrectly implemented requirement/design, code error, documentation error, etc. This is a drop-down field; however, additional defect types can be added.
Assigned To | Who was assigned to resolve this defect? It may be either initials or a full name.
Additional Participants | Stakeholders and other team members required to participate in the resolution and sign-off.
Target Completion Date | What is the completion date by which this defect should be resolved?
Status | This should contain values such as: Open, In Progress, Postponed (an open item that will be addressed at a later date), Transitioned (client agreement to leave the defect intact, but transition the project), Monitor (an open item that cannot be reproduced), Cancelled (this was not a defect and therefore did not require a resolution), and Resolved.
Actual Completion Date | What is the completion date on which this defect was resolved?
Comments | Any comments that would help the software developers resolve this problem. This may include the suspected cause of the defect.
Resolution | What was the resolution for the defect?
Appendix B – Jira (Defect Reporting)
Atlassian’s Jira Software will be used to track defects and report on the status and progress of the testing. Jira is a web-based team defect and change tracking tool for problem incident reporting related to system development and testing. An instance of Jira is hosted at Catalyst’s Data Center and will be available to the Sacramento-based development team. It can be reached through any internet connection which will allow for use by EMS Vendors, County Staff, or Project Team Members working off-site. Our proposed system architecture calls for a Tools server to be deployed as part of the VoteCal infrastructure. Depending on the preferences of the SOS for security and control and on the availability of the infrastructure environment, Catalyst will relocate the Jira instance to this VoteCal Tools server. The Catalyst Development Team is already familiar with the Jira product and will train SOS staff, the testing team, and other project team members as needed.
8.1 Jira Issue Creation
After Jira is configured with project information such as component names and team members, new Bugs / Defects / Issues will be created and categorized by system component and version. Priorities, team members, and due dates will be assigned to each issue.
8.2 Jira Issue Tracking Capabilities
Users can attach screen shots and files as needed. They can select issues to watch even when not assigned. Some features, such as voting (sometimes used by open source developers), are not applicable to the VoteCal project.
8.3 Jira Issue Reporting
Each user will customize their own view of issues and will filter and sort by various attributes such as priority, component, assignee, due date, etc.
8.4 Jira Dashboard
Project managers and leads will use the dashboard feature of Jira as a tool to help track overall progress.
Appendix C – Test Tracking Spreadsheet
Rows for each Functional Area: These are the areas that will be tested. Multiple test scripts will be created for each of these functional areas.
Column for each Tester: These are the people who will do the actual testing of the functional areas. Testers can be assigned to multiple areas depending on their functional knowledge and subject matter expertise.
Appendix D – Test Script Tracking Spreadsheet
Key/Instructions:
Field Name | Description/Instructions
Functional Area | These are the process areas tested. There will be multiple test scripts created for each of these functional areas.
Script Name | This is the name of the script, which is usually descriptive of the expected actions that the script is testing for that functional process, e.g., address change, schedule payroll, add beneficiary.
Tester(s) Assigned | This is the name of the tester(s) assigned to execute the test script.
Date Started | This is the date the test script was initially executed by the tester(s).
Date Completed | This is the date the test script was successfully executed without error.
Status | This is the status of the script (Not Started, In Progress, Complete, or Failed).
Testing Lead Signoff | This is the signoff for the Testing Lead to verify the test script has successfully completed.
Appendix D – Sample Test Script
Shown below is an example of a test script. Test scripts will be developed using Excel spreadsheets and will be similar in format to the table shown below.
Ref. | Scenario | Action | Expected Results | Pass/Fail | Defect ID | Test Date | Initials | Test Results / Comments
1.1 | Menu Navigation | Move the cursor over the Main Menu to pull down the detailed menu items. | Displays a pull-down menu of the detailed menu items. If more than one level of menu exists, the submenu will appear to the right side as a pull-down menu. Click on the menu item to open the VoteCal screen. | | | | |
1.2 | Search on County Name | On the application toolbar there is an editable search text box; enter the county name and click on the magnifying glass to search for the county. | Search successful: the application module/screen populates with details for the county searched. Search failed: displays a message stating that no record can be identified for the search criteria entered. When the search returns more than one county, the results are listed in a selection screen; the selection screen displays details based on the application module selected. | | | | |
1.3 | Search With Wild Card Characters | Enter search criteria prefixed/suffixed by "%" to match a varying number of characters (e.g., S%). | This will list all county records whose names begin with "S", such as Sacramento, Santa Cruz, San Joaquin, etc. | | | | |