
White Paper

BMC® Service Level Management 7.0

Architecture

November 2006

Copyright 1991–2006 BMC Software, Inc. All rights reserved. BMC, the BMC logo, all other BMC product or service names, BMC Software, the BMC Software logos, and all other BMC Software product or service names, are registered trademarks or trademarks of BMC Software, Inc. All other trademarks belong to their respective companies. BMC Software, Inc., considers information included in this documentation to be proprietary and confidential. Your use of this information is subject to the terms and conditions of the applicable end user license agreement or nondisclosure agreement for the product and the proprietary and restricted rights notices included in this documentation.

Restricted Rights Legend. U.S. Government Restricted Rights to Computer Software. UNPUBLISHED -- RIGHTS RESERVED UNDER THE COPYRIGHT LAWS OF THE UNITED STATES. Use, duplication, or disclosure of any data and computer software by the U.S. Government is subject to restrictions, as applicable, set forth in FAR Section 52.227-14, DFARS 252.227-7013, DFARS 252.227-7014, DFARS 252.227-7015, and DFARS 252.227-7025, as amended from time to time. Contractor/Manufacturer is BMC Software, Inc., 2101 CityWest Blvd., Houston, TX 77042-2827, USA. Any contract notices should be sent to this address.

Contacting Us. If you need technical support for this product, contact Customer Support by email at [email protected]. If you have comments or suggestions about this documentation, contact Information Development by email at [email protected].

This edition applies to version 7.0 of the licensed program.

BMC Software, Inc.

www.bmc.com

Contents

Overview
Terminology
High-level architecture
Main components
SLM data model
    SLM definition module data model
    Collector component data model
    SLM processing data model
Administration components
Service target definition
    Request-based and availability service targets
    Performance monitoring service targets
    Compliance-only service targets
Agreement definition
    Service target weighting
    Review periods
Contract definition
Service target processing
    Request-based and availability processing
    Performance-monitoring processing
    Measurement form
Milestone processing
    Request-based milestones
    Availability milestones
    Performance-monitoring milestones
    Agreement/compliance milestones
SLA compliance processing
    Compliance and compliance history
    Request-based calculation
    Availability and performance-monitoring calculations
    Compliance only
    MSP
Business time design
Collector module design
    Data collector
    Collection point
    Service target manager
    Service target processor
    Data store
    AR configuration interface
    Plug-in functionality
    Communications
    Configuring the collector
    Data collection
    Web services
    Service target processor
    Service target manager
    Collection point plug-ins
SLM engine overview
    SLM rules and associations
    Rule actions
Supplementary information
    Collector database schema
    SLM processing module significant forms
    SLM definition forms


Overview
The BMC Service Level Management (SLM) 7.0 product combines service support metrics with infrastructure metrics and event data in a single SLM product. SLM lets users set agreements and service target goals on these data using a common mechanism. The architecture reflects this seamless integration: SLM is one product in both the presentation layer and the data layer. The goal of the SLM product is to let businesses track performance and availability targets for their infrastructure components and service desk. It gives businesses both a high-level and a detailed picture of where problems exist, so they can correct them and maintain high-quality service. Three entities are used in monitoring service levels: service targets, agreements, and contracts. The following diagram shows the relationships among these three entities:
Figure 1: Contracts, agreements, and service targets

The diagram shows a contract (contract parties, effective dates, accounting codes, purchase price) containing one or more agreements (SLA, OLA, or underpinning contract: compliance target, review periods, penalties and rewards, milestones), each of which references one or more service targets (goals, costs, terms and conditions expressed as a KPI expression or qualification, measurement rules, milestones and actions).

Terminology
The following definitions are commonly used terms in SLM 7.0:

Data source—Input data that the SLM module processes. Data sources can be based on BMC Remedy AR System, such as incident and change requests, or on sources other than AR System, such as Patrol, ART, SIM, and so on.

Service target—Defines the "goal" of what is being measured, and the target value of the goal. Also defines the instances of the data source that are being monitored, through its terms and conditions.

Agreement—Defines compliance of one or more service targets over time periods such as daily, weekly, or monthly. The agreement specifies a target compliance percentage that must be met over these time periods.

Templates—Saved instances of components that can be reused while defining service targets and agreements. Templates are shared, but can be customized by the object that uses them.

Service target processor—The engines that evaluate data source data to determine whether a service target is met. There are two engines: one based on AR System workflow for service support data, and one Java-based engine for infrastructure data.

SLA processor—Performs scheduled compliance calculations over a large amount of data, over a period of time.

Collector—Component that runs on the same server as AR System. Its purpose is to manage the collection points and to retrieve and evaluate data from them for each available data source.

Collection point—Distributed component responsible for the collection of data. There can be one or more collection points, and each can be installed on a separate server. The collection point uses a pluggable architecture that allows data to be collected from multiple data sources.

Collection node—Logical entity that combines a collection point with a data source it communicates with. An example is a collection point installed on server "A" to collect data from a Patrol agent.

System metric—Key performance indicator that is available to be collected from a data source.


High-level architecture
The following diagram presents a high-level view of SLM architecture.
Figure 2: High-level architecture diagram showing the various SLM components, their purposes, and their relationships.

Service Level Management: High-Level Architecture

The diagram is organized into four areas:

Configuration of agreements and service targets—Definition, configuration, and storage of SLAs, OLAs, and underpinning contracts; service targets; and milestones.

Data collection and service target processing—High-volume infrastructure data sources (Performance Manager, SIM, ART, SNMP) feed the collector component, whose service target processor handles infrastructure data. AR System data sources (Incident and Problem Management, Change Management, Asset Management, and others) are handled by the service target processor for time-based data (requests, incidents). Service target results for both infrastructure data and AR-based data are stored in the AR System database.

SLA processing and milestones/actions—Periodic SLA processing (daily, weekly, monthly, quarterly), plus milestone and action processing with actions such as email, pager, run process, create event, and workflow.

Viewing and accessing results—Dashboard, reports, SLA performance data, the SIM portal, and web services.

The legend distinguishes AR System processes, non-AR System processes, components using third-party technology, and AR System storage.
All definition and configuration is performed through a common interface. The data is stored in the AR System database. Service target processing for data collected from AR System data sources is performed within AR System, and the results are stored in AR System database forms.


The high-volume, high-frequency infrastructure data from other data sources is processed by the collector, and only the results are pushed into AR System. The service target results from both kinds of data sources are used by the SLA compliance processor to compute periodic compliance results.

Main components
The main components of the SLM product are:

SLM Data Repository—The AR System database is the repository of all the configuration and run-time data for SLM.

Configuration—The user interface and workflow needed to configure service targets, agreements, contracts, templates, and other options.

Collector—Java application that receives and interprets data from the various infrastructure monitoring data sources using plug-ins. It communicates with AR System through the AR System API to read and update service target information.

Data sources and plug-ins—The products that SLM integrates with as shipped are shown in Figure 3. The data sources based on AR System use the native AR System interface to communicate with the main SLM data repository, while the collector acts as a gateway for the other data sources.

SLM Engine—A C++ binary that runs under the armonitor service and creates the filters that process service targets for AR System data sources.

Dashboard—A user interface that displays the latest SLM data (service target and compliance status) using charts and tables.

Reports—A set of predefined Crystal reports that run on SLM data.
Figure 3: Supported data sources and SLM components


SLM data model
The SLM data model comprises four main components:

Definition or configuration component forms—Store information about the definition of service targets, agreements, contracts, milestones, and actions.

Collector component forms—A set of view forms needed by the collector module.

Real-time processing forms—Used for calculations of measurements and compliance, and for triggering milestones.

Administrator console forms—Used to store preferences and other administrative information.


SLM definition module data model
The following diagram illustrates a high-level view of the SLM Definition Module data model.
Figure 4: High-level data model of the service target definition module

Service Target Entity Relationships

The entity-relationship diagram centers on SLM:ServiceTarget (title, goal type, terms and conditions, start/stop/exclude qualifications, goal and warning values, business entity, category, collection node list, and KPI ID list) and shows its relationships to SLM:GoalSchedule (goal weekday, start and end times, goal and warning/alarm values, cost), SLM:ConfigDataSource, SLM:ConfigGoalType, SLM:Milestone (execution conditions, repeat settings, rule definition GUID), SLM:Category, SLM:RuleDefinition, SLM:TemplateMeasurementCriteria, the core BusinessEntity form, and the view forms SLM:CollectionNodes and SLM:SystemMetrics. The legend distinguishes AR System forms, SLM forms, and view forms.

Figure 5: High-level data model of the agreement definition module

Agreement (SLA) Entity Relationships

The diagram shows SLM:SLADefinition (title, expiration date, notification date, agreement type, compliance target) related to SLM:ServiceTarget through SLM:Association, to SLM:Contract through SLM:SLAAssociation, and to SLM:Milestone and SLM:PenaltyRewards (penalty/reward compliance ranges, type, and amount).
Forms include:

SLM:ServiceTarget—Stores the definition information for service targets, such as name, description, data source, terms and conditions, goals, and so on.

SLM:SLADefinition—Stores the definition of the agreement, such as name, description, expiration date, target compliance, review periods, and so on.

SLM:PenaltiesRewards—Defines a sliding scale of penalties and rewards, based on the compliance target.

SLM:Contract—Defines the overarching contract between service providers and customers, including one or more agreements.

SLM:GoalCost—Repository for custom and template goal and cost schedules (time ranges by weekday).

SLM:Category—Stores the hierarchy of folders seen in the tree control on the console and dashboard.

SLM:Milestone—Defines milestones, including name, description, type, condition, and so on.

SLM:RuleAction—Base definition of actions related to a milestone. Actions of different types can be derived from this base object.


SLM:ConfigGoalTypes—User-defined set of goal labels, such as Incident Response Time, Application Availability, and so on. These goal labels apply to four major goal categories: Request-based, Performance Monitoring, Availability, and Compliance Only.

SLM:ConfigDataSource—Stores configuration information for each data source for the SLM application.

BusinessEntity—Core AR System object that defines a set of time segments as "available" or "unavailable". This object is used to define business hours, holidays, and blackout periods.

SLM:SystemMetrics—View form on the system_metrics table, which stores the definitions of KPIs discovered on a collection node.

SLM:CollectionNodes—View form on the collection_node table, which stores the definitions of collection nodes.


Collector component data model
The collector component creates a set of tables in the underlying database used by AR System. These tables are used directly by the collector, and also by AR System through view forms. See the collector database schema section of this paper for detailed descriptions of these tables and their use.
Figure 6: Data model for the collector component.

See Supplementary Information for details of these tables and descriptions of each field.


SLM processing data model
After the service targets and agreements are defined, the "processing" data model of SLM can be addressed. This includes service target processing, milestones, and compliance processing. The forms in this component are:

SLM:Measurement—Tracks the measurement records and the performance of a service target, recording the latest service target status and the timestamps of the changes. Different types of information are stored on this form, all of it related to service target processing.

SLM:MeasurementChild—Tracks each change in state on the measurement for performance-monitoring and availability service targets, such as from available to unavailable. Because each change in state is known, SLA compliance processing can, at any time, calculate the compliance performance of each service target for a review period and sum up its contribution to the SLA compliance percentage. MeasurementChild records also make it possible to adjust the SLA compliance value through approved changes to the existing data.

SLM:EventSchedule—Tracks when milestones should execute. Each record in this form has a timestamp indicating when a milestone should execute, and for which instance of service target and application.

SLM:SLACompliance—Tracks all the compliance calculations. A record is created in this form for each combination of agreement, review period, and contract. As each review period completes, the record is marked as done and a new one is started.

SLM:ComplianceHistory—Tracks compliance data for each service target contained in an agreement.
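The SLM:MeasurementChild description above implies a straightforward calculation: given every state change within a review period, availability is up time divided by up time plus down time, with unknown time ignored. The following is an illustrative sketch only; the function name and record shape are assumptions, not SLM APIs.

```python
def availability_percent(state_changes, period_end):
    """Compute availability from state-change records.

    state_changes: chronologically sorted (timestamp, state) pairs,
    where state is "Up", "Down", or "Unknown" and timestamps are in
    seconds from the start of the review period.
    """
    up = down = 0.0
    boundaries = state_changes + [(period_end, None)]
    for (t0, state), (t1, _) in zip(boundaries, boundaries[1:]):
        duration = t1 - t0
        if state == "Up":
            up += duration
        elif state == "Down":
            down += duration
        # "Unknown" intervals contribute to neither numerator nor denominator
    return 100.0 * up / (up + down) if (up + down) else 100.0
```

Because each interval between state changes is accounted for individually, the compliance value can be recomputed at any time, including after approved corrections to historical records.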


Figure 7: Data model for SLM processing (service target & compliance) entities

SLM Processing Entity Relationships

The diagram shows SLM:SLACompliance (keyed by SLA, review period, and contract instance IDs) and SLM:SLAComplianceHistory (keyed additionally by service target instance ID) related to SLM:ServiceTarget, SLM:Measurement (keyed by service target and application instance IDs, with measurement status and application form), SLM:MeasurementChild, and SLM:EventSchedule (SLA reference instance ID, service target instance ID, scheduled time, and rule event). The application data sources (forms) include incident, change request, service request, and unavailability records.
Administration components
A set of forms involved in the administration console options are used to configure items for the definition and processing modules. These are mostly items that can be configured before setting up service targets and agreements, because they do not change frequently. An administrator generally configures them.

SLM:ConfigDataSource—UI and repository for all the data sources that work with SLM. Any custom AR System form that needs to work with SLM has an entry in this form, as do other data source types such as Performance Monitoring and Service Impact Manager. Other significant configuration settings related to the data sources are also stored in this form. In conjunction with the back-end form SLM:Object, all forms working with the SLM application need to be registered with their own GUID.

SLM:ConfigGoalTypes—Repository for the mapping of internal goal types to user-defined goal labels. Users see and use the user-friendly labels, such as Request Time and Application Performance, but the application works off the internal goal types so it can process each service target accordingly.


SLM:ConfigPreferences—Stores general application settings that apply across the product, including the service target identification prefix and the location of log files.

SLM:ConfigReviewPeriods—Stores the review periods that users select in the agreement definition to determine the time spans for compliance calculations.

SLM:ConfigSLAGroups—Stores the service target group information for the service target group feature, which is specific to request-based service targets.

SLM:ConfigSLAOwners—Stores "aliases," or names for groups of user IDs, that can be used in alert actions. This enables emails and alerts to be sent to a group.

SLM:ConfigSLMComments—Stores preconfigured comments created by users to add to their measurement and compliance records. Comments typically include reasons for missing a compliance target or goal.

Service target definition
Service targets are used to track goals for each data source. The definition of a service target must include the data source to which it applies and a goal that indicates when a target is missed. Service targets can be defined in SLM according to the following types:

Request-based—Service targets that apply to data sources such as incident requests, infrastructure changes, customer service issues, and so on. A service target "attaches" to a ticket or request based on the defined "terms and conditions," and tracks the time elapsed between the start condition and the end condition to determine whether that time is within the specified goal. There is no limit to the number of service targets that can attach to a record. The data source for this type of service target is an AR System form that must be configured to work with SLM. Some of the predefined data sources for this type of service target are:

• BMC Remedy Service Desk
• BMC Remedy Change Management
• BMC Remedy Asset Management
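The attach-then-measure behavior described above can be sketched as follows. This is an illustrative sketch only: the function name, the ticket shape, and the use of a predicate for terms and conditions are assumptions, not the actual SLM filter-based implementation.

```python
from datetime import datetime, timedelta

def evaluate_request_based_svt(ticket, terms_and_conditions,
                               start_time, stop_time, goal):
    """Return "Met"/"Missed", or None if the service target does not attach."""
    if not terms_and_conditions(ticket):
        return None  # terms and conditions do not match: no attachment
    elapsed = stop_time - start_time
    return "Met" if elapsed <= goal else "Missed"

status = evaluate_request_based_svt(
    {"Priority": "High"},               # the ticket record
    lambda t: t["Priority"] == "High",  # terms and conditions qualification
    datetime(2006, 11, 1, 9, 0),        # start condition satisfied
    datetime(2006, 11, 1, 12, 30),      # stop condition satisfied
    goal=timedelta(hours=4),
)
# 3.5 hours elapsed, within the 4-hour goal
```

In the real product this logic is carried out by generated filters and a filter guide, as described in the "Request-based and availability service targets" section.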

Performance Monitoring—Service targets that are evaluated on system metrics coming from infrastructure items, such as servers and applications. The data sources in this case are network management products that produce relatively high-volume, high-frequency data about the availability and performance of machines, services, and applications. This type of service target is usually processed outside of AR System, in the collector module; see the collector section of this white paper.


The data sources for this type of service target are:

• BMC Performance Manager
• BMC Performance Manager Express
• BMC Transaction Management ART
• BMC Transaction Management REM
• BMC Service Impact Manager

Availability—Service targets that measure the availability of assets and services over a long period of time. The service target tracks the up and down time of these assets, based on defined available and unavailable qualifications.

Compliance-Only—Service targets used to process the compliance of SLA data that has already been evaluated against a specific goal. This data exists in the database, but a view form is needed to access it. Using the config data source, configure a specific form that has access to this information; by using specific field IDs to point to the data, SLM can evaluate the data points for compliance. In this case, the service target is a placeholder that defines the data source and the conditions under which the service target applies. No goals or milestones are defined.

Service Impact Manager—Used for service targets that have BMC Service Impact Manager as the data source. This goal type is used to track service targets based on the status of a CI: a component or a service monitored in the SIM service model. Service targets of this type are also processed by the collector. While defining these service targets, however, special features are available, such as the ability to use the CMDB CI Viewer to navigate the CI relationship tree and select a CI. Service target definition can also be cross-launched in context from the SIM Service Model Editor and the Impact Portal.
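To summarize what every service target type has in common, the following is an illustrative sketch of a simplified service target definition. The field names are assumptions for illustration only and do not reflect the actual SLM:ServiceTarget form schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceTarget:
    """Simplified service target definition (illustrative, not SLM schema)."""
    title: str
    goal_type: str              # "Request-based", "Performance Monitoring",
                                # "Availability", "Compliance-Only", or "SIM"
    data_source: str            # AR System form or collection node name
    terms_and_conditions: str   # qualification selecting applicable records
    goal: Optional[str] = None  # omitted for Compliance-Only placeholders

svt = ServiceTarget(
    title="High-priority incident response",
    goal_type="Request-based",
    data_source="Incident Request",
    terms_and_conditions="'Priority' = \"High\"",
    goal="4 hours",
)
```

Note that `goal` is optional here to mirror the compliance-only case above, where the service target is only a placeholder for a data source and its applicability conditions.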

Request-based and availability service targets
In SLM 7.0, fewer auto-generated filters are built than in previous versions of Remedy SLA. Instead of building a set of filters per service target to process a ticket, one filter (the association filter) is now built for each service target, and a filter guide (one per application form) is used to process the service target. One generated filter per service target:

zSLMGen:SVTID_MeasStdAssoc—Filter that attaches a service target to an instance (record) of the application form, based on the terms and conditions of the service target. This filter creates a record in the SLM:Measurement form with a measurement status of Attached.


One filter guide and 18-20 filters per application form:
zSLMGen:<<application>>_MeasReqAvailStrt2Stp (GUIDE)
zSLMGen:<<application>>_MeasReqReStart`!
zSLMGen:<<application>>_MeasReqStart`!
zSLMGen:<<application>>_MeasReqStartExclude`!
zSLMGen:<<application>>_MeasReqExclude`!
zSLMGen:<<application>>_MeasReqNOTExcludeNOTStop`!
zSLMGen:<<application>>_MeasReqStop`!
zSLMGen:<<application>>_MeasReqReOpen`!
zSLMGen:<<application>>_MeasReqDetach`!
zSLMGen:<<application>>_MeasAvailAvail`!
zSLMGen:<<application>>_MeasAvailNOTAvail`!
zSLMGen:<<application>>_MeasAvailUnavail`!
zSLMGen:<<application>>_MeasAvailNOTUnavail`!
zSLMGen:<<application>>_MeasAvailUnknown`!
zSLMGen:<<application>>_MeasAvailNOTUnknown`!

One filter per milestone:
zSLMGen:<SVTID>_1010_<MilestoneName>_BR`!

The execution order of the filter guide and milestone filters can be changed so that it does not interfere with your custom workflow.

Performance monitoring service targets
For this type of service target, no auto-generated filters are created, because the processing is done by the collector, which accesses the service target definition directly to pick up all relevant information. The concept of terms and conditions still exists for this goal type, but it is called a Key Performance Indicator (KPI). The customer must pick a KPI from a tree UI component, which displays all the available KPIs from a selected collection node. To help customers, there are three different views for creating a Key Performance Indicator, depending on the type of KPI expression needed for the performance-monitoring service target.

Single KPI Expression—This is the simplest view because it uses only one KPI. Use the tree to select a status or numeric KPI; the KPI appears in the Key Performance Indicators field to the right. The user interface displays the appropriate configuration fields, depending on the KPI type. A numeric KPI uses a number for the Warning and Alarm options. A status-type KPI uses a selection list that allows users to select from OK, Warning, Alarm, and Offline.


Arithmetic Expression—This view allows users to create an arithmetic expression using MAX, MIN, SUM, and AVG over a set of numeric KPIs. Use the tree to select numeric KPIs; each KPI appears in the Key Performance Indicators field to the right. This type allows the use of the Single and Scheduled Goal/Cost UI. Because this view is only for numeric KPIs, the Warning and Alarm fields ask for an integer to compare against when determining the performance of the KPI.

Boolean Expression—This view allows the user to create a boolean expression that combines a Single KPI Expression and an Arithmetic Expression. The expression is complete with goals. Because the result of this evaluation is either True or False, the user cannot use the Single and Scheduled Goal/Cost option. Use the tree to select the numeric KPIs you need; each KPI appears in the Key Performance Indicators field to the right. Build either a Single KPI Expression or an Arithmetic Expression. The Operator and Value fields are required before you can click the Add button to move the KPI component to the Expression field.

A selected KPI appears in a string format that follows this convention:

-> (leading character)
Collection Node Name:
Collection Node ID:
KPI Type (numeric or status):
KPI Hierarchy (Name0 on SLM:SystemMetrics)\
KPI Hierarchy (Name1 on SLM:SystemMetrics)\
KPI Hierarchy (Name2 on SLM:SystemMetrics)\
KPI Hierarchy (Name3 on SLM:SystemMetrics)\
KPI Hierarchy (Name4 on SLM:SystemMetrics)\
KPI Hierarchy (Name5 on SLM:SystemMetrics)\
KPI Hierarchy (Name6 on SLM:SystemMetrics)\
KPI Hierarchy (Name7 on SLM:SystemMetrics)\
KPI Hierarchy (Name8 on SLM:SystemMetrics)\
KPI Hierarchy (Name9 on SLM:SystemMetrics)\
-- (trailing character)

Because the KPI string uses certain characters as delimiters, it is important that the names used to identify a node in the tree do not contain any of the following:

"\", ":", "- -", "->", ">", "<", "<=", ">=", "=", "!=", "AVG(", "MIN(", "MAX(", "SUM(".

After the expression is built, a command is called to validate the constructed expression: a record is pushed to the SLM:Landscape form with a WebServiceType of Validate Expression. If the call returns an error when the service target is saved, a message appears.
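The delimiter restriction above can be checked mechanically before a name is used in a KPI string. This is an illustrative sketch only; the function name is an assumption and is not part of SLM.

```python
# Tokens that the KPI string format reserves as delimiters or operators
RESERVED_TOKENS = ["\\", ":", "- -", "->", ">", "<", "<=", ">=",
                   "=", "!=", "AVG(", "MIN(", "MAX(", "SUM("]

def is_valid_node_name(name):
    """True if the name contains none of the reserved delimiter tokens."""
    return not any(token in name for token in RESERVED_TOKENS)
```

A name such as "CPU Utilization" passes, while "node->cpu" or "server:01" would corrupt the generated KPI string and must be rejected.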


Compliance-only service targets
The terms and conditions definition for this type of data source works the same as for request-based service targets. The compliance-only data source is an AR System form; a qualification based on fields on that form defines the terms and conditions. When creating a compliance-only service target, the wizard view needs only the first tab. Service target based milestones and the other tabs used for performance-monitoring are not supported.

Agreement definition
An agreement is the container that customers use to evaluate how their service targets are performing over time. It allows customers to see how service target commitments perform over review periods (daily, weekly, monthly, or quarterly). Additionally, customers can give a weight to each service target to give some service targets more value or importance in their line of business. Agreements allow milestones to perform traditional SLM actions to notify or perform work to meet contractual commitments. Compliance processing takes all the service targets related to an agreement and looks at their performance from the start of the compliance record until it ends. The formula used for each service target type varies as follows:
• Ticket-based service targets: Met / (Met + Missed)
• Compliance-only: Met / (Met + Missed)
• Availability: Up Time / (Up Time + Down Time). Unknown time is ignored, and all the time from all the records related to the service target is counted. All time calculations are done using Business Entity calls.
• Performance-monitoring: 100 − (Down Time / (review period time − Unknown time))



There is only one measurement record per performance-monitoring service target. The total time is known at the start of the review period, and this formula ensures that the percentage decreases from 100% as the measurement accumulates down time and unknown time. Availability is unable to do this because one service target may include more than one measurement record. When viewing the performance of an agreement that has not yet completed, it should be read as "how the agreement has performed so far," based on the number of records and the availability of its asset or service. The review period is the compliance of the agreement for a segment of time, independent of past or future data points.
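The per-type formulas can be sketched as follows (illustrative only; the function and parameter names are assumptions, and the performance-monitoring down-time fraction is scaled to a percentage here so the result is comparable to the others):

```python
def compliance_percent(svt_type, met=0, missed=0, up=0.0, down=0.0,
                       period=0.0, unknown=0.0):
    """Compliance percentage for one service target, per type."""
    if svt_type in ("ticket-based", "compliance-only"):
        total = met + missed
        return 100.0 * met / total if total else 100.0
    if svt_type == "availability":
        total = up + down                   # Unknown time is ignored
        return 100.0 * up / total if total else 100.0
    if svt_type == "performance-monitoring":
        # Counts down from 100% as down time accumulates
        return 100.0 - 100.0 * down / (period - unknown)
    raise ValueError("unknown service target type: " + svt_type)

print(compliance_percent("ticket-based", met=9, missed=1))     # 90.0
print(compliance_percent("availability", up=90.0, down=10.0))  # 90.0
```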


Service target weighting
When adding service targets to an SLA, users can give each service target a contribution percentage to the SLACompliance value. This allows an Urgent service target to be worth more than a Low service target. To give a service target more importance in the SLACompliance value, users assign it a weighted value. The weighted contribution percentage is each service target's weight divided by the total weight of all service targets.
Service Target          Weighted value   Weighted calculation   Weighted contribution percentage
Service Target Urgent   20               20 / (20+10+5+5)       50%
Service Target High     10               10 / (20+10+5+5)       25%
Service Target Medium   5                5 / (20+10+5+5)        12.5%
Service Target Low      5                5 / (20+10+5+5)        12.5%

This method allows weighted contribution percentages to be calculated dynamically, based on the relative importance of each service target. If a user adds another service target and gives it a weighted value, the weighted percentages are recalculated automatically; users do not need to manually update the percentages to sum to 100%. When an SLA compliance calculation is performed using the performance percentage of each service target, the SLA compliance is determined using each service target's weighted compliance contribution percentage.
Service Target          Period performance   Compliance calculation   Compliance percentage gained
Service Target Urgent   90%                  50% x 90%                45%
Service Target High     90%                  25% x 90%                22.5%
Service Target Medium   85%                  12.5% x 85%              10.625%
Service Target Low      75%                  12.5% x 75%              9.375%
Total Compliance %                                                    87.5%


The SLA compliance is the sum of each service target's performance multiplied by its weighted contribution percentage. For this period, the SLA compliance comes to 87.5%.
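The weighting arithmetic can be sketched in a few lines of Python (illustrative names; not SLM code):

```python
def weighted_contributions(weights):
    """Each service target's weight divided by the total weight."""
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}

def sla_compliance(weights, performance):
    """Sum of each target's performance times its weighted contribution."""
    contrib = weighted_contributions(weights)
    return sum(contrib[name] * performance[name] for name in weights)

weights = {"Urgent": 20, "High": 10, "Medium": 5, "Low": 5}
performance = {"Urgent": 90.0, "High": 90.0, "Medium": 85.0, "Low": 75.0}
print(sla_compliance(weights, performance))  # 87.5
```

Because the contributions are derived from the weights each time, adding a new service target automatically rebalances the percentages, which matches the behavior described above.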

Review periods
SLA compliance processing gathers data from each service target defined in an agreement for a time segment, or review period, to calculate an overall performance percentage. Users can define a review period of type daily, weekly, monthly, or quarterly. For each review period, the SLA compliance percentage is calculated at regular intervals to provide a current evaluation of the service targets' performance. For example, for an agreement with a daily review period, the compliance record is updated every hour to show the current performance of all the service targets under it. The next day, when the review period ends, a final update is done and the compliance record is closed out. A new record is created to track the next day's compliance performance, independent of the previous day's result.
Review period   Automated calculation intervals
Daily           Every hour (at five minutes after the hour)
Weekly          Every four hours (from midnight: 00:05:00, 04:05:00, and so on)
Monthly         Every day
Quarterly       Every day

Contract definition
A contract is the highest-level object in the SLM hierarchy. It is an entity used to define contractual information for the agreements: who the agreement is with, what dates it covers, and who the service provider is. The contract object is derived from the contract base in Asset Management. If Asset Management and SLM are both installed, all contracts are available in a single repository. Contracts in SLM are not only a container for multiple agreements and a way for customers to group all agreements for a customer together, but also a way to drive row-level security for Managed Service Providers (MSPs). Grouping agreements in a contract ensures that the compliance records for these agreements are separated by contract.


Data model for the Contracts module:

Figure 8: Entity relationships for the Contract module. The entities shown are CTR:ContractBase (an ITSM Asset Management object) and the SLM objects SLM:ContractBase, SLM:Contract, and SLM:SLADefinition, each with primary key InstanceId, together with SLM:SLAAssociation, whose keys are InstanceId 1 and InstanceId 2 and which carries an AssociationType field.

The Customer field is used to restrict access to the contract. It is added to field 112, along with Unrestricted Manager. Once this field is populated, anyone who is not in that group and is not an Unrestricted Manager cannot see the contract from the console.

The menus for the company, organization, department, supplier, and Type fields on the contract form are dynamic. Users can configure them and point them to the desired form. These menus can be configured by changing the data in the Configure Contract Menus option, under the Application Settings option of the Administration Console. When an ITSM application is installed with SLM, these menus automatically point to the ITSM menus for these entities. The SLM:Contract:OnOpen:SetContractMenus active link reads the value from the configuration form and dynamically sets the menus. The pre-defined sample menus are stored in SLM:SampleContractMenus.

Service target processing
As noted in the high level architecture diagram in Figure 1, service target processing is done at two places: AR System for AR System data sources and in the collector for the non-AR System, metric data sources. The following two sections describe these processes.


Request-based and availability processing
For Request-based and availability service targets, processing is performed using filters on the application form and the SLM:Measurement form. The auto-generated filters on the application form create and update the record in the SLM:Measurement form. Filters on the measurement form perform further calculations. When an AR System-based data source is configured to work with SLM, the SLM engine creates a set of shared filters that are used to capture events for SLM processing. These events contain a date and time element that allows SLM to track the duration of tasks to make sure that you keep your contractual level of service. The SLM:Measurement form takes these events and tracks the progress of the service targets.

Request-based processing

The following diagram shows the request-based measurement life cycle: the service target is attached, then starts measuring, continues measuring, and finally stops measuring, with Exclude periods possible along the way.
When a user submits a request in the application form, such as HPD:HelpDesk, the auto-generated associate filter executes and a measurement entry is created. The measurement status is set to "Attached." The Start filter executes when the Start When qualification is met. It sets the OverallStartTime to the current time, and sets the MeasurementStatus to "In Process." Note: If the start reference time option is used, the start time is taken from a user-defined date and time field on the application form. When OverallStartTime is populated with the current time, SVTDueDate is calculated by adding the GoalSchedGoalTime to the OverallStartTime, using the business time functions. The Exclude filter executes if the Exclude When qualification is met. It sets the DownStartTime field to the current time, and sets the MeasurementStatus to "Pending."

The NotExclude filter fires when the NOT(Exclude When) qualification is met. It sets the DownStopTime to the current time, and the MeasurementStatus to "In Process."


When DownStopTime is populated with the current time, DownElapsedTime is calculated by taking the difference between the DownStopTime and the DownStartTime. This time is then added to the previous DownElapsedTime. When UpStopTime is populated with the current time, UpElapsedTime is calculated by taking the difference between the UpStopTime and the UpStartTime. It is then accumulated with the previous UpElapsedTime.

The Stop filter executes when the Stop When qualification is met. It sets the OverallStopTime to the current time. When OverallStopTime is populated with the current time, OverallElapsedTime is calculated as the difference between OverallStopTime and OverallStartTime. UpTime is calculated as the difference between OverallElapsedTime and DownElapsedTime. MetMissedAmount is calculated as the difference between GoalSchedGoalTime and UpTime. MeasurementStatus is set to "Met" if UpTime is less than or equal to GoalSchedGoalTime, and to "Missed" if UpTime is greater than GoalSchedGoalTime. MeasurementDone is set to Yes.

Note: An entry placed in SLM:EventSchedule executes when the SVTDueDate time has passed, setting MeasurementStatus to "Missed Goal" to signify that the goal of the service target has not been satisfied. However, the Measurement record is not completed until the Stop Measuring condition is satisfied.
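The elapsed-time arithmetic above can be sketched with plain clock arithmetic (the product uses business time functions; the field and class names here are illustrative, not the actual SLM workflow):

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """Sketch of the timing fields on an SLM:Measurement-style record."""
    overall_start: int        # OverallStartTime (epoch seconds)
    goal_seconds: int         # GoalSchedGoalTime
    down_elapsed: int = 0     # accumulated DownElapsedTime (Exclude time)

    def due_date(self) -> int:
        # SVTDueDate = OverallStartTime + goal (business time ignored here)
        return self.overall_start + self.goal_seconds

    def stop(self, overall_stop: int):
        """Apply the Stop filter arithmetic and return (UpTime, status)."""
        overall_elapsed = overall_stop - self.overall_start
        up_time = overall_elapsed - self.down_elapsed
        status = "Met" if up_time <= self.goal_seconds else "Missed"
        return up_time, status

m = Measurement(overall_start=0, goal_seconds=7200, down_elapsed=1800)
print(m.stop(9000))  # (7200, 'Met'): the 1800 s of Exclude time is not counted
```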

Special cases for request-based processing
This section describes special cases for request-based processing.
Service target group processing

Service target groups are a special feature that works only for request-based service targets. This scenario applies when fields on the incident or change request change so that the terms and conditions of the currently attached service target no longer apply. A new service target is attached; however, time calculations must be carried over rather than restarted. This typically happens, for example, when there are different service targets for high, medium, and low priority requests of a certain category, grouped under one service target group. If an incoming ticket has a high priority, a high service target is attached and starts processing. If, after some time has elapsed, the request changes to a low or a medium priority, a new service target is attached. However, the new service target inherits the start and exclude times from the first service target and does not behave as if it just started.


The following diagrams demonstrate the most common scenarios that might occur:

Diagram: Service target 100 (HIGH) starts (Measurement Status = "In Process"), then service target 200 (LOW) attaches (Measurement Status = "Pending").

In this diagram: When service target 200 attaches, it inherits the start time from service target 100. Milestones are removed from service target 100. Milestones for service target 200 are calculated to start from the start time of service target 100.
Diagram: Service target 100 (HIGH) starts; service target 200 (LOW) attaches while service target 100 is in an Exclude (Pending) state.

In this diagram: When service target 200 is attached, service target 100 is in an Exclude state. Milestones are removed from service target 100. Service target 200 is in an Exclude state with the down time inherited from service target 100. When service target 200 changes from pending, milestones for service target 200 are calculated to start from the start time of service target 100, with an additional offset of the time both service targets were in a pending state.


Diagram: Service target 100 (HIGH) is in an Exclude state when service target 200 (LOW) starts.

In this diagram: When service target 200 attaches, it inherits the start time from service target 100. Milestones are removed from service target 100. Milestones for service target 200 are calculated to start from the start time of service target 100, which in this case is the same as the start time of service target 200. In this scenario, nothing is inherited by service target 200.
Diagram: Service target 100 (HIGH) starts, then service target 200 (LOW) attaches.

In this diagram: When service target 200 is attached, it inherits the start time from service target 100, and the up time and down time calculated by service target 100. The measurement record is removed from service target 100. Milestones are removed from service target 100. Milestones for service target 200 are calculated to start from the start time of service target 100, with an offset from the down time calculated by service target 100. The time might be inconsistent because the down time was calculated using the business time from service target 100. If the business times used do not match, the result could be unpredictable. To be sure that the time calculation is done correctly, review the time value in the SLM:Measurement form.


Diagram: Service target 100 (HIGH) starts and stops; service target 200 (LOW) then attaches and inherits the start time from service target 100.

In this diagram: Service target 100 has been processed and closed. When service target 200 is attached, it inherits the start time from service target 100, and the up time and down time calculated by service target 100. The measurement record is removed from service target 100. The "dead" time between the two service targets' working times is calculated as Exclude time, using the business time tag from service target 100. Milestones for service target 200 are calculated to start from the start time of service target 100, with an offset from the down time calculated by service target 100 (Exclude time and the "dead" time). The time might be inconsistent because the down time was calculated using the business time from service target 100. If the business times used do not match, the result could be unpredictable. To be sure that the time calculation is done correctly, review the time value in the SLM:Measurement form.
Diagram: Service target 100 (HIGH) starts and stops; service target 100 then attaches again and inherits the start time from the original measurement.

In this diagram: Service target 100 has been processed and closed. When service target 100 is attached again, it inherits the start time, up time, and down time calculated by the original service target 100 measurement. The original measurement record is removed. The "dead" time between the two working periods is calculated as Exclude time, using the business time tag from service target 100. Milestones for the new service target 100 are calculated to start from the start time of the original service target 100, with an offset from the down time calculated by service target 100 (Exclude time and the "dead" time).


Diagram: Service target 100 (HIGH) starts; service target 100 is then closed and service target 200 attaches as closed.

In this diagram: Service target 100 has been processed and closed. At the same time, the ticket is changed so that service target 200 is attached in a closed state. When service target 200 is attached, it inherits the start time from service target 100, and the up time and down time calculated by service target 100. The measurement record is removed from service target 100. Milestones are not created for the new service target 200 because it is closed. Service target 200 is evaluated using the calculated time from service target 100, but using the goal from service target 200.
Diagram: Service target 100 (HIGH) starts and closes; later, service target 200 attaches as closed.

In this diagram: Service target 100 has been processed and closed. Later, service target 200 is attached in a closed state. When service target 200 is attached, it inherits the start time from service target 100, and the up time and down time calculated by service target 100. The measurement record is removed from service target 100. Milestones are not created for the new service target 200 because it is closed. Service target 200 is evaluated using the calculated time from service target 100, but using the goal from service target 200.


Start Time for request-based service targets

In cases when a ticket is submitted into the system later than when it was requested, such as for a phone request, the service target needs to be effective from an earlier time. When the service target starts, the start time is taken from a field on the application form that stores the time when the "Start When" measurement criteria were met. If this field is blank, the system time is used as the start time when the "Start When" measurement criteria are met. To specify the start time for the service target, it must be configured in the SLM:ConfigDataSource form. Specify the name of the application form field that holds the start time information in the Start Time for Time-Based service targets field on the SLM:ConfigDataSource form. The measurement calculation then uses the contents of this field as the start time of the measurement instead of the timestamp of the ticket submission.
Reopen request

Reopening a request is the default behavior in request-based processing and does not need to be configured. If a request is resolved, or meets its stop condition, its measurement goes to a closed state with either a Met or a Missed status. If the request is opened again after being resolved or closed, it moves back to an open state, and a new measurement record is created that finds the previously closed measurement record and inherits its elapsed times and start time. The old record is deleted and the new measurement record becomes active. The old stop time is used to calculate the Exclude time, or "dead time," between the measurement records. This time is not counted towards the completion of the service target. If this behavior is not wanted, put safeguards in place to ensure that the ticket cannot be set to a state in which it is considered open again.
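A sketch of the inheritance just described (the dict keys are illustrative, not the actual SLM:Measurement field names):

```python
def reopen_measurement(closed, reopen_time):
    """Create a new active measurement that inherits from a closed one.

    The gap between the old stop time and the reopen time becomes
    Exclude ("dead") time, so it does not count towards completion.
    """
    dead_time = reopen_time - closed["overall_stop"]
    return {
        "overall_start": closed["overall_start"],        # inherited start
        "up_elapsed": closed["up_elapsed"],              # inherited elapsed time
        "down_elapsed": closed["down_elapsed"] + dead_time,
        "status": "In Process",
    }

closed = {"overall_start": 0, "overall_stop": 100,
          "up_elapsed": 80, "down_elapsed": 20}
print(reopen_measurement(closed, 160)["down_elapsed"])  # 80 (20 + 60 dead time)
```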
Reference Goal From Application form

A service target can use a goal specified on the application form instead of on the service target. This goal can be time elapsed in seconds, or a due date and time. This way the goal can vary based on the request that the service target attaches to. This is particularly useful for a change request where a planned completion date of the request can be the goal for a service target instead of a fixed goal. To specify the Reference Goal for the service target the administrator must configure it in the SLM:ConfigDataSource form by setting the Reference Goal for Time-Based SVTs or Reference End Goal for Time-Based SVTs field. This configuration can be turned on or off per service target by using the Use Goal defined on the Application Form field on the Goal tab of the service target.


Availability processing

The following diagram shows the availability measurement life cycle: the service target is attached while the CI is available, and the measurement then alternates between Available and Unavailable states; when the CI is neither up nor down, it has a status of "Unknown." A measurement child record is created for each state transition.
The Association filter is executed when the terms and conditions qualification is met. An entry is created in the SLM:Measurement form with MeasurementStatus set to Attached. Because there is always a status for an availability service target, the status changes to the appropriate state and the OverallStartTime is set to the current time. The Up filter is executed when the CI is available. When the qualification is met, UpStartTime is set to the current time, and MeasurementStatus is set to In Process. The Unavailability filter is executed when the CI is unavailable. When the qualification is met, zD_UpStopTime is set to the current time.

The Unknown filter is executed when the NOT(Availability) AND NOT(Unavailability) qualification is met, and MeasurementStatus is set to Unknown. The unavailable time is updated.

When the CI leaves an Unknown state, it is either available or unavailable, and the appropriate filter executes to update the measurement record with the status and time.

The NotDown filter is executed when the NOT(Unavailability) qualification is met, and zD_DownStopTime is set to the current time.

In all these scenarios, a MeasurementChild record is created for each change in state. This record is required for SLA compliance calculations and to provide data for the Dashboard.
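The state transitions above can be sketched as a small tracker that emits one child-record-style entry per state change (illustrative names; not SLM workflow):

```python
class AvailabilityTracker:
    """Sketch of availability processing: each state change closes the
    previous interval as a MeasurementChild-style record."""

    def __init__(self, start_time, state="Unknown"):
        self.state = state
        self.since = start_time
        self.children = []     # closed intervals, one per state change

    def transition(self, new_state, at):
        if new_state == self.state:
            return             # no state change, nothing to record
        self.children.append({"state": self.state,
                              "start": self.since, "stop": at})
        self.state, self.since = new_state, at

t = AvailabilityTracker(0, state="Available")
t.transition("Unavailable", 60)
t.transition("Available", 90)
print(len(t.children))  # 2
```

Summing the durations of the "Available" and "Unavailable" child intervals, and ignoring "Unknown" ones, yields exactly the inputs the availability compliance formula needs.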






Performance-monitoring processing
Performance-monitoring service targets are evaluated by the collector (see the details below in the collector design section), but the results are stored in a database table that is visible from AR System as a view form. A record is created in SLM:Measurement to track each such service target and only the state changes are updated on this record. This enables the measurement form to keep track of the last known status of the service target and also have data about how long each state lasted each time. This is also used for triggering milestones.

The following diagram shows the performance-monitoring measurement life cycle: a measurement record is created when the service target is saved, and the measurement then moves among Met, Warning, and Missed states, with a status of "Unknown" when data is missing. A measurement child record is created for each state change.

Measurement form
For all types of service targets, the latest status of the service target measurement is available on the SLM:Measurement form. This form is used to track the state changes during the service target processing life cycle. The significant fields on this form are:

Note: For an expanded list of fields, see the SLM processing module forms section.

SVTInstanceID—Character field that uniquely identifies the service target that is being used to track this ticket.
SLMId—Character field that uniquely identifies the service target as seen on the definition of the service target, such as SLM00100.
ApplicationInstanceID—Character field that uniquely identifies the ticket that is being tracked.
SLA Group—Field under the General tab that determines which group the Measurement record belongs to.
OverallStartTime—The inherited Start Time from the found Measurement record.


UpElapsedTime—Time that the service target has been tracked as Up. This is the time counted towards the time spent working on the ticket for this service target.
UpTime—Final accumulated time spent on the ticket.
GoalSchedGoalTime—Goal of the service target.
MeasurementStatus—Status of the service target. For a request-based service target, status values are:
• Attached—When a service target is attached to a ticket.
• In Process—When the Measurement Start condition is met on the ticket.
• Pending—When the Measurement Exclude condition is met. The time in this state is excluded from the final overall elapsed time calculation.
• Missed Goal—When the service target due date and time has passed, but the Stop qualification is not met.
• Met—When the Measurement Stop qualification is met before the service target due date and time has passed.
• Missed—When the Measurement Stop qualification is met after the service target due date and time has passed.
• Detached—When the ticket is modified so that the terms and conditions of the service target no longer match the ticket.
• Invalid—When the service target has been deleted, or a wrong value is populated in the ticket form (a value that does not match the configuration in the SLM:ConfigDataSource form).

Milestone processing
Milestones are an integral part of the SLM product because they help users take action before service targets and compliance targets are missed. Milestones can be defined for agreements and each type of service target, except compliance-only. The processing of milestones, such as when they trigger, is different in each case.

Request-based milestones
For request-based service targets, milestones can broadly be of the following types:

Percentage of time from start or end of the measurement
In this type of milestone, the specified percentage is converted into seconds, based on the goal hours and minutes. For example, if the service target has a 2-hour goal, a 50% milestone should trigger at 1 hour, or 3600 seconds, from the start time of the measurement. The appropriate number of seconds is calculated based on the percentage specified and whether it is measured from the start or the end of the measurement.


Hours and minutes from start or end of the measurement
This type of milestone is very similar to the percentage of time, except that the milestone executes a specific number of hours and minutes after the measurement start condition is met.

Start, Stop, or Exclude qualification occurs on the Application form
This is a very simple milestone where no time criteria are involved; the milestone actions occur as soon as any of the start, stop, or exclude conditions are met by the request.

Custom condition occurs on the Application form
This milestone is very similar to the start, stop, or exclude qualifications. The only difference is that it executes based on a user-defined custom qualification.

Each request-based milestone generates one filter that executes on the Application form. The SLM:EventSchedule form is used to track the time for executing time-based milestones. Every time a service target measurement starts for a request, such as when the OverallStartTime changes on the SLM:Measurement form, an entry is created in the SLM:EventSchedule form with a due date and time for the milestone. An escalation monitors the entries in this form to make sure that the milestones execute at the correct time. When the milestone executes, the entry in SLM:EventSchedule is deleted.
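The time-based conversions above can be sketched as follows (illustrative; SLM performs this conversion in workflow, and the function name is an assumption):

```python
def milestone_offset(goal_seconds, percent=None, hours=0, minutes=0,
                     from_end=False):
    """Seconds after the measurement start at which a milestone fires.

    Either a percentage of the goal or a fixed hours/minutes offset
    can be given; from_end measures the offset back from the goal.
    """
    if percent is not None:
        offset = int(goal_seconds * percent / 100)
    else:
        offset = hours * 3600 + minutes * 60
    return goal_seconds - offset if from_end else offset

print(milestone_offset(7200, percent=50))  # 3600: 50% of a 2-hour goal
print(milestone_offset(7200, hours=1))     # 3600: fixed 1-hour offset
```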


Figure 9: Milestone processing for request-based service targets


Availability milestones
Availability milestones execute based on the calculation of the cumulative availability percentage or down time of assets stored in the SLM:Measurement form. There are three options on which an availability milestone can execute:

Availability Percentage—A condition based on the Available % field on the SLM:Measurement form can be specified for the milestone to execute, for example, 'Available %' < 90.
Down Count—A condition based on the Down Count field on the SLM:Measurement form can be specified for the milestone to execute.
Down Time—A condition based on the Down Time field on the SLM:Measurement form can be specified for the milestone to execute.

Each availability milestone generates one filter on the join form between the SLM:Measurement form and the application form, for example, SLM:ServiceRequest_SLA.

Performance-monitoring milestones
Performance-monitoring service target milestones are based on how long the service target measurement has been in an Alarm (Missed) or Warning state. There are two types of milestones:

Hours/Minutes from Alarm Start—Executes the milestone when the specified time has elapsed from when the service target measurement went into a Missed state. When an alarm event occurs, that is, when the service target processor of the collector evaluates a service target and detects a Missed state, it populates the DownStartTime and Missed status on the SLM:Measurement form. This triggers workflow to create an entry in the SLM:EventSchedule form with a due date and time of X hours and minutes from that time, where X is the milestone hours and minutes.

Hours/Minutes from Warning Start—Follows the same logic as the Alarm case, except that it monitors when the service target measurement WarningStartTime field is set by the service target processor.

In both cases, the milestone entries in the SLM:EventSchedule form have the due date and time for the milestone to execute, and are triggered by the same escalation as request-based milestones.

Agreement/compliance milestones
For agreements, the milestones are triggered based on the compliance calculations. Here the types of milestones supported are based on each review period—daily, weekly, monthly or quarterly. For each of these the following milestones can be triggered:


Compliance at risk—The milestone executes when the calculated compliance percentage is less than the compliance at risk field on the agreement. One filter is created on the SLM:SLACompliance form that monitors the value of the Met Percent field and compares it to the compliance at risk value on the agreement to trigger the milestone.

Compliance target missed—The milestone executes when the calculated compliance percentage is less than the target compliance on the agreement. One filter is created on the SLM:SLACompliance form that monitors the value of the Met Percent field and compares it to the contents of the compliance target field on the SLM:SLADefinition form to trigger the milestone.

Compliance percentage less than—The milestone executes when the calculated compliance percentage is less than the value specified on the milestone. One filter is created on the SLM:SLACompliance form that monitors the value of the Met Percent field and compares it to the contents of the SLACompliancePercentage field on the SLM:Milestone form to trigger the milestone.

SLA compliance processing
SLA compliance gathers data from each service target defined in an SLA for a time segment or review period to calculate an overall performance percentage. Users can define a review period of type Daily, Weekly, Monthly, and Quarterly. For each review period, the SLA compliance percentage is calculated at regular intervals to provide a current evaluation of the service target’s performance.
Review period   Automated calculation intervals
Daily           Every hour (at five minutes after the hour)
Weekly          Every four hours (from midnight: 00:05:00, 04:05:00, and so on)
Monthly         Every day
Quarterly       Every day

For example, for an SLA record with a review period of type Daily, an SLA compliance record is created. This record is updated every hour to hold the current performance of all the service targets under it. The next day, the current SLA compliance record is closed out and a new record is created. Every time the SLA compliance data is calculated, SLA compliance milestones are checked; if the milestone conditions are met, the actions are performed. A running total of the Impact Cost of each service target is calculated, based on the cost defined in the service target definition.


When the time of the review period is reached, the current SLA compliance record performs its final calculation and the record is closed out (MeasurementDone = “Yes”). The penalty or reward is calculated at this point and can be seen through Dashboards and reports. A new SLACompliance record is then created with the Start Time and Last Sample Time set to $TIMESTAMP$. If the agreement status is not enabled, compliance calculations will not take place.

Compliance and compliance history
To improve processing efficiency when determining the performance percentage of each service target, an SLAComplianceHistory record is created per review period that holds each service target's current calculated value since the last sampled time. The ComplianceHistory record references the SLAInstanceId and the ComplianceInstanceId. This data is also used for information displayed on dashboards, and holds the Status and Impact Cost of the most current service target records processed.

Request-based calculation
The SLACompliance percentage for a request-based service target is determined by dividing the total count of Met measurement records by the count of all resolved measurement records in that review period.


Records that have reached a status of Missed Goal are considered Missed.

Measurement record calculation (performed for the interval from the Last Sampled Time to the Calculate Now date/time):
• Retrieve the SLM:SLAComplianceHistory counts from the last Calculate Now action.
• Count all Met records since the Last Sampled Time and add them to the values retrieved from the history.
• Count all Missed records since the Last Sampled Time and add them to the values retrieved from the history.
• Push the new totals to the history record.
• Calculate the Met percentage.
• Multiply by the service target's Weighted Contribution percentage.
• Keep a running summation of each service target's SLA compliance contribution.

The Impact Cost calculation is a summation of each Missed measurement record's impact cost (SVTImpactCost). This running total is also kept in the SLM:SLAComplianceHistory record.
Note: Request-based records might give inaccurate data in cases where measurement records are reopened after the measurement has already been counted in a past review period.
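The per-target calculation steps above can be sketched as follows. The function name and signature are illustrative, not the product's API; the history counts stand in for the values carried in SLM:SLAComplianceHistory:

```python
def request_based_compliance(history_met, history_missed,
                             new_met, new_missed, weight_pct):
    """One service target's contribution to SLA compliance.

    history_* are running counts from the last Calculate Now action;
    new_* are records counted since the Last Sampled Time.
    """
    met = history_met + new_met            # new totals pushed to history
    missed = history_missed + new_missed
    resolved = met + missed
    # Met percentage over all resolved records in the review period.
    met_pct = 100.0 * met / resolved if resolved else 100.0
    # Scale by the service target's Weighted Contribution percentage.
    contribution = met_pct * weight_pct / 100.0
    return met, missed, met_pct, contribution

met, missed, pct, contrib = request_based_compliance(40, 5, 8, 2, 50)
# 48 met of 55 resolved, roughly 87.3% met; at 50% weight, roughly 43.6
```

A running summation of each target's `contribution` across the agreement yields the overall compliance percentage.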


Availability and performance-monitoring calculations
Compliance for Availability of Asset or Service and for Performance-Monitoring service targets is determined by the availability time divided by the total time. The availability status can be Available, Met, or Warning. The compliance percentage counts down from 100%: the Unavailable or Missed MeasurementChild records are summed and divided by the total review period time minus the Unknown time, giving the Unavailable percentage. Because this number only increases over time (unless customers modify existing down time), the compliance percentage only goes down.

To calculate the total time for a review period, a "days of the month" lookup is performed. For example, suppose day 1 of a monthly review period is 2/12 and the total number of seconds for the period is needed; the next period begins on 3/12. A lookup on February finds that this year there are 28 days in February, so this monthly SLACompliance period spans 29 days.

In a typical scenario of an asset or service moving from an available state to an unavailable state and back, a measurement record tracks the overall time of the asset or service. To provide information for SLACompliance, a series of MeasurementChild records is needed to track the start and stop times of each transition (within business hours).
[Figure: As an asset or service alternates between up and down states, a series of MeasurementChild records tracks each transition. Each record holds a Start Time, a Stop Time, an Elapsed Time, and a Status of Available or Unavailable.]


When using the MeasurementChild records to determine an availability service target's availability percentage, there are four scenarios that must be considered. These scenarios relate to how compliance calculations need to segment the MeasurementChild records to fit within the review period time frame. For each scenario, the Impact Cost is calculated for each down MeasurementChild record. On the service target definition, the cost is defined in terms of Cost per Min. Because the down time tracked in MeasurementChild is kept in seconds, the down time is converted to minutes. This value is stored in the TotalImpactCost field on the SLAComplianceHistory form.


The four scenarios, relative to the Last Sampled Time and the Calculate Now date/time, are:

Scenario 1—The MeasurementChild record started before the Last Sampled Time and has completed. Its Elapsed Time cannot be used, because the sampling time falls after the start of the record. Time for this segment is calculated from the Last Sampled Time to the Stop Time of the record.

Scenario 2—The MeasurementChild record completed within the Calculate Now time frame. Its Elapsed Time is used.

Scenario 3—The MeasurementChild record started after the Last Sampled Time, but the record has not completed, so its Elapsed Time has not yet been calculated. Time for this segment is calculated from the Start Time to the Calculate Now date/time.

Scenario 4—The MeasurementChild record started before the Last Sampled Time and has not completed. Time for this segment is calculated from the Last Sampled Time to the Calculate Now date/time.
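All four scenarios reduce to clamping each MeasurementChild record to the sampling window. A minimal sketch, using epoch seconds and a hypothetical function name:

```python
def segment_seconds(start, stop, last_sampled, calc_now):
    """Seconds of one down MeasurementChild record that fall inside the
    window [last_sampled, calc_now].

    `stop` is None for a record that is still open (scenarios 3 and 4).
    The counted time runs from max(start, last_sampled) to the lesser of
    the stop time (when present) and the Calculate Now date/time.
    """
    effective_start = max(start, last_sampled)
    effective_stop = calc_now if stop is None else min(stop, calc_now)
    return max(0, effective_stop - effective_start)

# Window: Last Sampled Time = 100, Calculate Now = 200.
s1 = segment_seconds(50, 150, 100, 200)    # scenario 1: counted from 100 to 150
s2 = segment_seconds(110, 160, 100, 200)   # scenario 2: full elapsed time used
s3 = segment_seconds(120, None, 100, 200)  # scenario 3: from start to calc now
s4 = segment_seconds(50, None, 100, 200)   # scenario 4: whole window counted
```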


Compliance only
Because users can arrange their data in any way they choose and the AR System view form can take any shape, SLM defines fixed field IDs that reference database table fields. The following table lists the fixed field IDs:
Field              Field ID    Field type  Field definition
SLOResultsStatus   301378400   Integer     0 = Met; not 0 = Missed
ImpactCost         300834800   Real        Impact Cost of missing a transaction
EpochTimeSeconds   301643200   Integer     Time in seconds since 1970

MSP
The MSP feature allows SLM data to be segmented for an environment in which multiple tenants use one database. This is accomplished using the Row Level Access field (field ID 112). From the Contract object, users can associate a contract with multiple agreements, but each contract can be tied to only one AR System group. The association record that ties the Contract object to these SLA objects holds the group information.
Figure 10: Partitioning compliance data based on contract/customer

[Figure: A contract for Customer ABC (Field 112 = GroupABC) and a contract for Customer XYZ (Field 112 = GroupXYZ) are both associated with one agreement (98% compliance, Daily and Weekly review periods, Field 112 = GroupABC;GroupXYZ). Both groups can see the agreement, but the daily and weekly compliance records created for each group are visible only to that group.]


By default, all agreements have field 112 set to PUBLIC unless a contract is tied to them. When a contract with an AR System group is applied, the PUBLIC SLA compliance record is not deleted. The status of these records is set to closed by setting a flag to signify that the records will no longer be processed. On the Contract object, if the MSP group is changed, the existing SLA compliance records are closed out, but not deleted. If the MSP group is later changed back to an earlier designation, the old records are not reopened; a new set of compliance records is created. When configuring a data source to enable MSP, users must define a field on the application form that holds the group information. The content of this field is copied to the measurement records and propagated to the MeasurementChild records. SLA compliance records can only access those associations and MeasurementChild records that have the same group ID.

Business time design
Business time design in SLM is based on the AR System business time module. Most of the detailed information about this is available in the AR System manuals. SLM uses the following forms for its business time functionality:

Business time segment—Defines a schedule activity using one or more Recurrence Times. An activity can be defined as one where time is either available or unavailable. This form can be used to define "Blackouts", "Maintenance", or any similar business activity. It can also be used to override another activity. It is a core form, like business time and holidays.

Business segment-entity association—This form contains the association between entities and activities, and the schedules that apply to those entities. The relationship is many-to-many, so an entity can have many schedule activities applied to it and many entities can share a schedule.

Business segment-entity association details—This is a join form between the business activity and the association form, used to view the details.

Business time shared entity—This form has the attributes Entity Type and POIID (parent object instance ID) for the association to different instances of object types.

Business time can be applied to service targets. There are two ways to define business time for a service target:
• Select a business time entity on the service target itself; the same entity then applies to all the records of that service target.
• For request-based service targets only, configure a field on the application form that holds the business entity information. The service target measurements then inherit this business entity from the request or ticket to which they apply.


SLM uses the business time commands provided by AR System to calculate the due date and the elapsed time on the SLM:Measurement form. These commands are:
Application-Bus-Time2-Assoc-Subtract
Application-Bus-Time2-Assoc-Add
Application-Bus-Time2-Assoc-Diff

See BMC Remedy Action Request System 7.0 Configuring guide for additional information on business time.

Collector module design
The collector component provides SLM with raw metric values used to evaluate Performance-Monitoring style service targets. In many environments, the set of metrics available is very large, so the collector needs to be able to efficiently manage the collection of the required data. To achieve this, the data collector uses a distributed architecture, delegating the work of collecting data to light-weight distributed components called collection points. The collection points can be placed close to the source of data for efficiency, and to simplify the management of secure networks. The data collection at each collection point is controlled by the data collector, a component that is tightly integrated into SLM. The data collector controls the discovery of available metrics from each of the collection points, and the scheduled collection of enabled data.
Figure 11: Collector main components


Data collector
The data collector is a component that runs on the same server as the SLM engine. It manages the collection points and retrieves data from them. There can be only one data collector.

Collection point
The collection point is a distributed component that is responsible for the collection of data. There can be one or more collection points, and each one can be installed on a separate server. A collection point can be installed on the same server as the collector. The collection point uses a pluggable architecture to allow data to be collected from multiple data sources.

Service target manager
The service target manager facilitates the processing of threshold service targets. It loads threshold service target definitions from AR System so the service target processor can process them. It also informs the service target processor whenever new data arrives from the data collector and is ready to be processed.

Service target processor
The service target processor performs the evaluation of threshold service target expressions. It also performs milestone and cost calculations on the service target results.

Data store
The data store is the repository for data created by the collector. This includes configuration information about collection points, collection nodes, and system metrics. It also includes raw data values for system metrics collected from the collection points, and results produced by the service target processor from processing performance-monitoring service targets.

AR configuration interface
The AR Configuration interface is the user interface for configuring the collector module. This includes operations for adding, removing, and modifying collection points and collection nodes.


Plug-in functionality
Plug-in functionality is provided for the data collector to construct service targets and calculate service target compliance, and also to provide data source status information. The functionality includes:
• Configure data sources
• Supply KPI landscape information
• Collect data
• Check data source status

Configure data sources
The plug-in API provides methods to configure the plug-in as a whole, or an instance of the plug-in, called a collection node. The information required to configure a collection node is determined by the plug-in and typically includes the data source user name, password, communication protocol, port, and so on. The plug-in level configuration applies to all collection nodes of the particular plug-in, and should not normally be configured through the collection node creation interface.

Supply KPI landscape information
Upon request through the plug-in API, the plug-in returns the KPI landscape it sees in the data source. The landscape view can be either dynamic or static. A dynamic view requires communication between the plug-in and its data source, usually through data source API calls. A static view usually involves a configuration file that contains the landscape information.

Collect data
The plug-in returns a list of data samples for the specified KPI and time interval. If no data is found, an empty list is returned. Depending on the data source, the plug-in might implement a mechanism to retain historical data samples for certain time periods. The data returned must be in numeric form. If the KPI in the data source contains status information, such as OK, WARNING, ALARM, and so on, a status map is used to convert the status code from the data source to one of the four status values used in SLM (OK, WARNING, ALARM, and OFFLINE).

Check data source status
The plug-in API provides methods to check data source connectivity and throws exceptions should landscape discovery or data collection fail. The error message associated with the failure is shown in the collection node configuration console.


Communications
The collector and collection point communicate using HTTP and HTTPS. The protocol is similar to standard SOAP web services, but is a proprietary mechanism. For greater network security, the collector initiates all collector and collection point communication. The collection point uses an embedded Jetty server for handling communication requests. By default, the server uses HTTP on port 7089. You can configure the port by modifying the port property in the Service.properties file on the collection point.

Authentication
Communication between the collector and collection points uses a mechanism similar to web session management to handle authentication. When a collection point is added to the collector, an authentication key is given to the collection point. Each time the collector tries to connect to the collection point, it must provide the same key to gain a “session” key. This session key must be provided with each request, otherwise the request is denied. This mechanism prevents effective communication with the collection point without knowledge of the authentication key.
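The authentication-key/session-key exchange can be modeled with a small sketch. The class and method names here are invented for illustration and do not reflect the product's internal API:

```python
import secrets

class CollectionPoint:
    """Toy model of the collection point's authentication handling."""

    def __init__(self, auth_key):
        self._auth_key = auth_key    # key given when the point is added
        self._sessions = set()

    def open_session(self, presented_key):
        # The collector must present the shared authentication key...
        if presented_key != self._auth_key:
            return None
        # ...and receives a session key for use with subsequent requests.
        session = secrets.token_hex(16)
        self._sessions.add(session)
        return session

    def handle_request(self, session_key):
        # Every request without a valid session key is denied.
        return session_key in self._sessions

cp = CollectionPoint("shared-secret")
sess = cp.open_session("shared-secret")
```

Without knowledge of the authentication key, a caller can never obtain a session key, so all of its requests are denied.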

Encrypted communication
Using HTTPS encrypts the data in transit. Although HTTP is the default communication method, HTTPS communication is enabled by installing a security certificate on the server and modifying the Service.properties file on the collection point. To use HTTPS as the communication method, the collector must be told to use HTTPS. This is configured as part of the connection properties for the collection point.

Configuring the collector
The collector is configured using AR System configuration forms. These forms make web service calls to the collector. The following functions are performed through this configuration:

Add collection point—Enter a name for the collection point, the host name of the server where it is installed, and the port number chosen when installing the collection point. The default port number is 7089. Enter the Authentication Key, which acts like a password and helps ensure that only this server can communicate with the collection point.

Get node types—Each collection point can have a number of node types, depending on the plug-ins installed. Before you add a node, you must get the node types available for the collection point.


Add collection node—Each data source from which data is collected becomes a node, such as Patrol agents on different servers. A node is associated with a collection point, which communicates with it using the plug-in for that node type. Configuring a node typically includes providing a data source user name, password, communication protocol, port, and so on.

Discover KPIs—After a collection node has been configured for a collection point, you can initiate discovery to find all the available KPIs at the node.
Figure 12: Discovery operation timing

1. The AR System UI sends a discovery web services request to the collector.
2. The collector sends a request to the collection point/node to perform discovery.
3. A request acknowledgement is sent to the UI.
4. The discovery result is sent to the collector, where the metrics are saved to the database.
5. The AR System UI is informed through the AR System API that the discovery operation is complete.

Data collection
Configuration
The configuration of the metrics to collect is done by the service target manager (documented separately). The service target manager analyzes the expression of each service target and sets the startTime field of any required system metric to a non-zero value. The value used is the earliest startTime of any of the service targets that use the system metric. This non-zero value indicates to the collector that the metric must be collected. Only values that occur after the startTime are collected.

Scheduling
The collector scheduler controls the collection of metric values. Each collection node defined in the system has a corresponding CollectionNodeScheduleJob that controls collection from the node. These schedule jobs are instantiated when the collection node is created. The jobs are scheduled to run at regular intervals. The interval for each job is independent and is controlled by the ScheduleFrequency property of the collection node entry. When a collection node is created with ScheduleFrequency undefined (0), the scheduler assigns a default interval of 300 seconds (5 minutes).

The scheduler attempts to ensure that the scheduled jobs for different collection nodes do not overlap by offsetting the start time of each schedule. If there are many jobs with different schedules, it might not be possible to guarantee that the schedule jobs never overlap; in this case, one of the jobs might be delayed.
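One simple way to stagger job start times is sketched below. This is an assumption about the approach, not the scheduler's documented policy: offsets are spread evenly across the smallest interval so jobs with the same frequency never start together.

```python
def assign_offsets(intervals):
    """Assign a start-time offset (in seconds) to each node's schedule job
    so that jobs do not all fire at the same instant. A naive sketch."""
    if not intervals:
        return []
    step = min(intervals) / len(intervals)
    return [round(i * step) for i in range(len(intervals))]

DEFAULT_INTERVAL = 300  # seconds, used when ScheduleFrequency is 0
offsets = assign_offsets([DEFAULT_INTERVAL, 600, 900])
```

With many jobs and mixed frequencies, such a scheme can only reduce, not eliminate, overlap, which matches the text's note that a job might occasionally be delayed.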

Collection
When the CollectionNodeScheduleJob runs, it queries the system metric table in the database to determine which metrics must be collected from the collection node. Any metrics from the node that have a StartTime value greater than zero will be collected.

For each metric to be collected, the scheduled job determines the time range of the data to be collected. The start time for the collection period is typically the greater of the last timestamp for the metric and the StartTime for the metric. The start time might also be constrained by the missing-data limit or the history-supported limit. These limits prevent an attempt to collect data for a period that has already been judged to be missing data, or for a period that is beyond the limit of available history data from the node. See the following section for more details about missing data.

The end time for the collection period is typically the lesser of the current time and the start time + MaxCollectionInterval, where MaxCollectionInterval is 1 hour by default. This makes sure that the amount of data requested in a single collection job is constrained.

The collection job compiles the list of metrics and their associated start and end times, and sends the request to the collection node. The plug-in then retrieves the requested history data from the collection node and returns the results to the collector. The results are processed by filling any missing data gaps and saving to the metric numeric value table in the database. Finally, any component that has registered to be notified of incoming data is signaled with a message that contains a list of the metrics collected, along with the start and end times of the collected data.
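The time-range rules above can be expressed compactly. The function and parameter names are illustrative; timestamps are epoch seconds:

```python
def collection_window(now, last_timestamp, metric_start_time,
                      missing_data_limit=None, max_interval=3600):
    """Compute the [start, end) range for one metric's collection request.

    Start is the greater of the last collected timestamp and the metric's
    StartTime, optionally constrained by a missing-data limit. End is the
    lesser of the current time and start + MaxCollectionInterval
    (1 hour by default).
    """
    start = max(last_timestamp, metric_start_time)
    if missing_data_limit is not None:
        start = max(start, missing_data_limit)
    end = min(now, start + max_interval)
    return start, end

start, end = collection_window(now=10_000, last_timestamp=2_000,
                               metric_start_time=1_000)
```

Here the one-hour cap limits the request to 3600 seconds of data even though far more history is available.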

Status mapping
Any collected value that is of type status must undergo a status mapping procedure. This procedure allows each data source type to define a separate set of status levels. These are then mapped to the standard set of status levels used within the SLM application. The mappings are defined per collection node and are stored in the status_map mapping table in the database. When the collector receives a value of type "status", it performs a lookup in the mapping table to convert the external data source status value to the internal SLM status value.
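The lookup amounts to a two-step translation: an external label to an SLM label, then to whatever internal code SLM stores. The external labels and integer codes below are hypothetical; only the four SLM status names come from the text:

```python
# Internal SLM status levels (integer codes assumed for illustration).
SLM_STATUS = {"OK": 0, "WARNING": 1, "ALARM": 2, "OFFLINE": 3}

# One mapping per collection node, as stored in the status_map table.
# The external labels here are hypothetical data-source statuses.
NODE_STATUS_MAP = {
    "GOOD": "OK",
    "DEGRADED": "WARNING",
    "CRITICAL": "ALARM",
    "NO_CONTACT": "OFFLINE",
}

def map_status(external_value, node_map=NODE_STATUS_MAP):
    """Convert a data-source status value to the internal SLM status."""
    return SLM_STATUS[node_map[external_value]]
```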


Missing data
The collector provides data to the service target processor in a way that allows the processor to recognize conditions where data is not available. In retrieving data values, the collector and collection points make sure that all data is collected in order. The collector will not ask the collection point to retrieve data with a timestamp earlier than data already collected. Any gaps in the retrieved data are presumed to be where data is unavailable or missing, and will not be requested again.

When a missing data situation is recognized by the collector, it pads the section of missing data with "guessed" values that indicate the missing data condition to the service target processor. These guess values are identified by the "isGuessed" field of the value record being set to 1. In most cases, the value used in the guess is the same as the last collected good (non-guessed) value. Where no previous good value exists, the value of the guess is set to -111111.

In determining whether a missing data condition exists for a particular metric, a time period known as the "MissingDataInterval" is used. If no value has been received for an amount of time equal to or greater than the MissingDataInterval, then a missing data guess is created. The MissingDataInterval is set to be the same as the frequency of the system metric. If the frequency of the system metric is not defined (that is, it is 0), then the MissingDataInterval is set to be the same as the collection frequency of the collection node.
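The padding behavior can be sketched as follows. This is a simplified model assuming samples arrive on a fixed interval grid; the tuple layout and function name are illustrative, with only the isGuessed convention and the -111111 sentinel taken from the text:

```python
GUESS_SENTINEL = -111111  # used when no previous good value exists

def pad_missing(samples, start, end, interval):
    """Fill gaps in a sample series with guessed values.

    `samples` is a list of (timestamp, value) pairs on an `interval`-second
    grid. Each output tuple is (timestamp, value, is_guessed); guesses
    repeat the last good value, or the sentinel when none exists.
    """
    sample_map = dict(samples)
    out = []
    last_good = None
    for t in range(start, end, interval):
        if t in sample_map:
            last_good = sample_map[t]
            out.append((t, last_good, 0))       # real collected value
        else:
            guess = last_good if last_good is not None else GUESS_SENTINEL
            out.append((t, guess, 1))           # guessed value, isGuessed = 1
    return out

# A sample is missing at t=60; it is padded with the last good value.
padded = pad_missing([(0, 10.0), (120, 12.0)], 0, 180, 60)
```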

Web services
The data collector provides web services to allow AR System (or other external entity) to interact with the collector. These web services use the SOAP protocol and are provided using the Apache Axis SOAP framework (see http://ws.apache.org/axis/).

Service target processor
Service target processor components
The processor comprises the following:

Processor threads, which listen for events coming from the service target manager when data is ready for a particular service target to be processed.

The Evaluator component, which extracts system metric values from the database using the Data Access Object (DAO) layer, and optionally builds the expressions for alarm and warning levels. It also calculates and stores the impact cost when applicable.

The Milestone object, which detects whether a service target state change has occurred, based on the previous result of the evaluation of the expression, and determines whether a milestone has to be triggered (going through the measurement record).


The service target processor stores service target results in the selected repository. The service target processor produces the following output:
• Service target ID
• Service target guessed-value flag (actual versus guessed value)
• Service target result status: OK or Not Breached (0), Warning (1), Breached (2), Unknown (3)
• Service target value (not available for guessed data)
• Missing data rule (if the data has been guessed, the missing data rule is set to the rule selected by the user)
• Cost
• Timestamp

Service target expression
The terms and conditions user interface for performance-based service targets contains an expression builder where users can create a service target expression using different operators. Expression building is an essential part of performance-based service targets. Note: In this section, service target terms and conditions are referred to as the service target expression. The service target expression can be built using a combination of operators. The resulting expression must be well formed, containing matching parentheses and valid metric comparisons. The results of evaluating all segments in the expression indicate whether the service target is breached. If the expression evaluates to true, a breach has occurred. Examples of service target expressions:
ServiceTarget name: BackupServer1 (Boolean expression)
BackupServer1 - Terms and Conditions =
(Node1/NT_CPU/NT_CPU/CpuProcessorUtilization > 80) &&
(Node1/NT_Memory/NT_Memory/MemoryUtilization > 85)

ServiceTarget name: ExchangeServers (Arithmetic expression)
ExchangeServers - Terms and Conditions =
AVG(Node1/Exchange/Exchange/RoundTrip,
    Node2/Exchange/Exchange/RoundTrip,
    Node3/Exchange/Exchange/RoundTrip,
    Node4/Exchange/Exchange/RoundTrip,
    Node5/Exchange/Exchange/RoundTrip)
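Assuming collected metric values are available keyed by metric path, evaluating expressions like the two examples above amounts to looking up each path and applying the operators. The metric values and helper names below are illustrative (the product uses JFormula, described in the next section, not this sketch):

```python
# Hypothetical collected values, keyed by metric path.
metrics = {
    "Node1/NT_CPU/NT_CPU/CpuProcessorUtilization": 92.0,
    "Node1/NT_Memory/NT_Memory/MemoryUtilization": 88.0,
}

def evaluate_boolean(cpu_path, mem_path, cpu_threshold, mem_threshold):
    """BackupServer1-style expression: (cpu > t1) && (mem > t2).
    A result of True means the service target is breached."""
    return (metrics[cpu_path] > cpu_threshold) and \
           (metrics[mem_path] > mem_threshold)

def avg(*paths):
    """AVG operator over a set of metric paths, as in ExchangeServers."""
    return sum(metrics[p] for p in paths) / len(paths)

breached = evaluate_boolean(
    "Node1/NT_CPU/NT_CPU/CpuProcessorUtilization",
    "Node1/NT_Memory/NT_Memory/MemoryUtilization",
    80, 85)
```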

Service target evaluation
The service target evaluation component uses JFormula to parse and evaluate the service target expressions. JFormula is a third-party library for evaluating mathematical expressions in string form. Expression parsing and validation occur first at the user interface level, when a user defines the service target expression. After the expression is saved, AR System calls a method in the evaluator class using web services to perform a syntax check. The service target processor validates the service target expression during initialization of the evaluator object, and makes sure that the service target's system metrics set by the service target manager are complete and that the expression is well formed. The service target result calculation involves the following factors:
• Alarm and warning thresholds
• Boolean versus arithmetic expression
• Missing data rules

Service target manager
The service target manager is responsible for receiving events from the data collector about newly available metrics, applying the new metrics to the service targets that are waiting to be processed, and then passing the service targets that have all the needed KPIs to the service target processor for processing. Additional responsibilities include loading and unloading service targets, making sure all loaded service targets are updated with any metrics that might have come in, and making sure that the service target pool and the system metrics being collected remain in sync with the latest service target definitions.

Design
The service target manager maintains a pool of service targets that are being actively processed, a mapping of the system metrics that these service targets are interested in, and a collection of service target processors. Because the service target manager is the coordinator between the collector and the service target processor, it uses a singleton design and strives for efficiency by making decisions as quickly as possible, and then delegating. To accomplish this, the service target manager is multi-threaded and is therefore carefully synchronized to avoid data corruption and deadlock states. The design accommodates multiple service target processors running concurrently; in SLM 7.0, only a single service target processor is used.

Startup
On startup, the service target manager calls the AR System API to retrieve all threshold service targets and load them and their corresponding goals. On loading each service target, the data store is checked to see if metrics exist and all available metrics are retrieved and set for each service target that requires them. Any service targets that are ready to be processed are set in the processing queue and an event is triggered to start service target processing. Any metrics not currently being collected are set to be collected so the data will be available from the Collector for future processing. The service target manager then enters its main loop for regular processing.

General processing
After startup, the service target manager waits for messages from the collector about new data that has been collected. As data arrives, it is processed and applied to the service targets that are waiting. The service target manager then checks for any service targets that are ready to be processed and notifies a service target processor to run the calculations for each one.

Load service target
A save operation in the user interface triggers a Load service target request to be sent to the collector. The request causes a specific service target to be loaded. The service target manager looks for the service target in the running pool. If the service target is already loaded, it is unloaded to stop it from processing while the necessary data is updated. Any change in the KPIs used is detected, and collection is set appropriately: any metrics no longer used by any service target are set to not be collected, and any new KPIs are set to start being collected.

Unload service target
The service target is located in the running pool, and its metrics are removed from the metric-to-service-target map. Any metrics no longer needed by any service target are then set to not be collected, and the service target is deactivated.


Collection point plug-ins
BMC Performance Manager (PATROL Agents)
The plug-in uses a platform-independent API (PEMAPI) to communicate directly with classic Patrol agents. All calls through PEMAPI are single-threaded. A Java Native Interface (JNI) wrapper for PEMAPI directs Java calls to the PEMAPI C-language functions.
Discovery

Discovery finds the hierarchical structure of Patrol name space, which is
Application |_ Instance |_ Parameter

Parameter has both numeric and status data, while Instance and Application can have only status data.
Data collection

Numeric metric data is retrieved directly from the agent's history. Because the agent does not store status in its history table, a local status data collector is scheduled for each targeted agent to retrieve status metric data every minute. It keeps data samples for up to one hour, or sixty data samples if the number of samples in one hour is less than sixty.
The collection frequency and the data retention time can be configured in patrol_datasources.xml, found under the collection point install dir /config:

<!-- frequency to collect status sample, default to 60 seconds -->
<property-def name="status-collection-freq" type="string" defaultValue="60" hidden="true"/>
<!-- how long status sample will be kept, default to 3600 seconds -->
<property-def name="status-collection-window" type="string" defaultValue="3600" hidden="true"/>

Status collector

The BMC Performance Manager plug-in employs a status data collector to regularly retrieve status parameter values from Patrol agents. This is required because Patrol agents do not store historical information for parameter status. A status metric is added to the status data collector the first time a getData request is received for the metric.


The status data collector has a mechanism to automatically detect and remove unused status metrics. The algorithm tracks each metric's idle time to determine whether the metric is still in use. If a metric has not been requested for a period of two times the data retention window, it is removed from the collector.
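The removal test reduces to a comparison of idle time against twice the retention window. A minimal sketch with an illustrative function name, using epoch seconds:

```python
DEFAULT_RETENTION_WINDOW = 3600  # seconds; the documented one-hour default

def should_remove(last_requested, now, retention_window=DEFAULT_RETENTION_WINDOW):
    """A status metric idle for two times the data retention window is
    dropped from the status data collector."""
    return (now - last_requested) >= 2 * retention_window
```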
Agent security

If a targeted agent runs at a specified security level, a dedicated collection point must be installed on a separate machine to communicate with agents at that security level. Before the collection point is installed, the host machine must be configured to support Patrol agent security. One way to do this is to install a classic Patrol agent with the desired security level using the BMC common installer, which sets up the required Patrol security environment. After the installation, this Patrol agent does not need to run.

BMC Performance Manager Express (BMC Portal)
The plug-in uses the Patrol Express web API to communicate with Patrol Express servers, and it collects system metric data only from infrastructure elements. Patrol Express elements can have status data; parameters can have both status and numeric data.
Discovery

Discovery finds the hierarchical structure of Patrol Express infrastructure elements:

Element
 |_ Application
    |_ Instance
       |_ Parameter

or

Application
 |_ Instance
    |_ Parameter

There can be a maximum of two levels of Application. Only Parameter can have numeric and status data.
Status collector

Numeric metric data is retrieved directly from Patrol Express. Status data is collected using the same mechanism as the BMC Performance Manager plug-in.


BMC TM Application Response Time
The ART plug-in makes requests to the ART server using the ART Web Service API. The SLM 7.0 plug-in supports two versions of the ART server: 2.8 and 2.9.
Web services API

ART defines the following web services, which are used in the plug-in:
• sccsystem—Provides an authentication method and simple utility methods. The plug-in uses it to establish a session with the ART server.
• sccentities—Provides read access to the two main entities, Project and Location. The plug-in uses it to build landscape information.
• sventities—Provides read access to Monitors and Transactions. The plug-in uses it to build landscape information.
• svdata—The plug-in uses this service to query transaction data.
Discovery

Discovery builds the ART landscape into the following hierarchical structure:

Project
 |_ Transaction
    |_ ClientSet Location
    |  |_ Measure
    |_ ServerSet Location
       |_ Measure

Only Measure can have numeric data.

SNMP Trap
The SNMP Trap plug-in does not send requests to a data source; instead, it listens for incoming SNMP traps from SNMP agents. The plug-in is an SNMP manager that listens on port 162 when loaded by the collection point. If two collection points are installed on the same machine, one of the SNMP plug-ins must be changed to a different port.
SNMP Trap version

The SLM 7.0 implementation of the plug-in accepts only SNMPv1 and SNMPv2c traps. SNMPv3 traps are not supported.
Discovery

The plug-in loads landscape information from the snmp.conf configuration file. Every time the discovery method is called, snmp.conf is reloaded, so you do not have to restart the collection point to discover new landscape information after snmp.conf is modified. The following is a sample snmp.conf file:
# Default status mapping: 0-ok 1-warn 2-alarm 3-offline
# For snmpv1 trap, TrapID is "EnterpriseID GenericTrapNumber SpecificTrapNumber"
# For snmpv2c trap, TrapID is "snmpTrapOID"
# Format 1: label; status/numeric; TrapID; value; TrapID; value;...
# Format 2: label; status[:StatusMapping]/numeric; TrapID; parameterOID;
#   where [:StatusMapping] is in the form of :parameterValue:statusValue:...
#   where statusValue is 0(OK), 1(WARN), 2(ALARM), or 3(OFFLINE)
#   for example, map "on" to OK and "off" to ALARM with :on:0:off:2

SNMPAgent 172.18.53.130 172.18.52.141 localhost
# v1 trap, linkup - ok, linkdown - alarm
linkStatusV1; status; 1.3.6.1.4.1.1824 2 0; 0; 1.3.6.1.4.1.1824 3 0; 2
# v2 trap
linkStatusV2; numeric; 1.3.6.1.6.3.1.1.5.3; 100; 1.3.6.1.6.3.1.1.5.4; 200
# Format 1 status example
enterpV1; status; 1.3.6.1.4.1.1824 6 1; 0; 1.3.6.1.4.1.1824 6 2; 2
enterpV2; status; 1.3.6.1.4.1.1824.0.1; 0; 1.3.6.1.4.1.1824.0.2; 2
# Format 2 status example
statusV1; status:on:0:off:2; 1.3.6.1.4.1.1824.1 6 0; 1.3.6.1.4.1.1824.1.0.0.1;
# Format 2 numeric example
numericV1; numeric; 1.3.6.1.4.1.1824.1 6 0; 1.3.6.1.4.1.1824.1.0.0.1;
numericV2; numeric; 1.3.6.1.4.1.1824.1; 1.3.6.1.4.1.1824.1.0.0.1;

SNMPAgent wqiu-hou
enterpV1; status; 1.3.6.1.4.1.1824 6 1; 0; 1.3.6.1.4.1.1824 6 2; 2
enterpV2; status; 1.3.6.1.4.1.1824.0.1; 0; 1.3.6.1.4.1.1824.0.2; 2

The entries in the file tell the plug-in how a system metric maps to one or more traps. For example, if an SNMP trap collection node is created with "localhost" in the agent IP field, the following parameters are discovered in the landscape:
• linkStatusV1
• linkStatusV2
• enterpV1
• enterpV2
• statusV1
• numericV1
• numericV2

If the agent IP in the collection node cannot be matched in the snmp.conf file, the landscape for this collection node will be empty. The snmp.conf file can be updated while the collection point is running.
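A Format 1 entry like those in the sample file can be parsed along these lines. This is a hypothetical sketch of the mapping logic, not the plug-in's actual parser.

```python
def parse_format1(line):
    """Parse a Format 1 snmp.conf entry:
         label; status/numeric; TrapID; value; TrapID; value; ...
    and return (label, kind, {TrapID: value})."""
    parts = [p.strip() for p in line.split(";") if p.strip()]
    label, kind = parts[0], parts[1]
    pairs = parts[2:]
    # TrapIDs and values alternate after the first two fields.
    mapping = {pairs[i]: float(pairs[i + 1])
               for i in range(0, len(pairs), 2)}
    return label, kind, mapping

label, kind, mapping = parse_format1(
    "linkStatusV1; status; 1.3.6.1.4.1.1824 2 0; 0; 1.3.6.1.4.1.1824 3 0; 2")
```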


The SNMPv1 trap and the SNMPv2c trap differ in how a trap is identified. To uniquely identify a trap, a TrapID is defined as follows.

For SNMPv1:

TrapID = EnterpriseID[space]GenericTrapNumber[space]SpecificTrapNumber
Example: 1.3.6.1.4.1.1824 5 0

For SNMPv2c:

TrapID = SNMPTrapOID
Example: 1.3.6.1.4.1.1824.1.0.0.1
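The two TrapID formats reduce to simple string construction, sketched here for illustration:

```python
def trap_id_v1(enterprise_oid, generic_trap, specific_trap):
    """SNMPv1 TrapID: enterprise ID, generic trap number, and
    specific trap number joined by single spaces."""
    return f"{enterprise_oid} {generic_trap} {specific_trap}"

def trap_id_v2c(snmp_trap_oid):
    """SNMPv2c TrapID is simply the snmpTrapOID value."""
    return snmp_trap_oid

v1 = trap_id_v1("1.3.6.1.4.1.1824", 5, 0)     # "1.3.6.1.4.1.1824 5 0"
v2 = trap_id_v2c("1.3.6.1.4.1.1824.1.0.0.1")
```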

Service Impact Manager plug-in
The SIM plug-in is a pluggable data collection component that feeds SIM CI status data to an SLM collector. It follows the general contract between a collection point and a collector, and it is installed in the same way as the other SLM data collection plug-ins.
Impact Manager Web Service Server

The plug-in is designed as a client that communicates with the Impact Manager instance by connecting to the Impact Manager Web Service (IIWS) server and accessing BMC II Web Services. The IIWS server is part of the Impact Manager installation. Before using the Web Service API, make sure the IIWS server is correctly configured. The cell configuration entries in the mcell.dir file of the %installDirectory\mcell directory provide the cell information that the SIM plug-in uses to make the connection.
Status collection

A background process collects the status of the target CIs every minute from the moment a CI is registered. A CI is registered when the collector queries its status for the first time, and it is deregistered (stopping status collection) when no data request has been made for the CI in a certain period of time (two hours by default). The plug-in keeps one hour of data in an internal reservoir for each CI; data points beyond the one-hour window are purged. The elapsed time is measured between the most recent data point and the oldest data point.
SIM Status Mapping

SIM defines nine types of status: None, Blackout, Unknown, OK, Info, Warning, Minor, Impacted, and Unavailable. SLM can only have four status values: OK, Warning, Alarm, and Offline. The default mappings between those values are defined in the sim_datasources.xml file of the collectionpointInstallDirectory\config directory.
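Collapsing nine SIM statuses into four SLM statuses amounts to a lookup table. The mapping below is illustrative only; the shipped defaults are defined in sim_datasources.xml and may differ.

```python
# Guessed mapping for illustration; see sim_datasources.xml for the
# real default mappings.
SIM_TO_SLM = {
    "None": "OK", "Blackout": "Offline", "Unknown": "Offline",
    "OK": "OK", "Info": "OK", "Warning": "Warning",
    "Minor": "Warning", "Impacted": "Alarm", "Unavailable": "Offline",
}

def map_sim_status(sim_status):
    """Collapse one of the nine SIM statuses into the four SLM statuses."""
    return SIM_TO_SLM[sim_status]
```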


Transaction Management plug-in
The Transaction Management (TM) plug-in collects transaction management metrics using the web services deployed on the JBoss transaction management presentation server. The SOAP calls are separated into four web services:
• txmsystem
• configuration
• statistics
• events

The SLM plug-in uses the txmsystem, configuration, and statistics web services; the events web service is not used. The TM plug-in performs the following:
• Discovers the transaction management metrics landscape.
• Collects either status or numeric values for the metrics.

TM landscape

The TM landscape has three classes: service, activity, and transaction.

Service—A business service can have one or more sub-business services. Business services with no sub-business services can contain business activities and/or transactions.

Activity—A business activity can be a member of one or more business services. A business activity can have one or more transactions.

Transaction—A transaction can be a member of one or more business activities and/or business services. A transaction can also be a member of a business service without being a member of a business activity.

Landscape structure—The TM plug-in discovers all services, activities, and transactions that are visible to the user. It arranges the landscape in a hierarchical tree that lists all first-level services. All sub-services are promoted to the first level. The second level lists activities and/or transactions, and the third level lists transactions under activities.
Data collected

Two numeric metrics are collected, both in seconds: EndUser, the end-user response time, and BackEnd, the back-end response time. Services, activities, and transactions all have status values: OK, Warning, or Error.
Status mapping

The TM status can be OK, Warning, or Error. SLM has four status values: OK, WARNING, ALARM, and OFFLINE.


The default mappings between those values are defined in the txm_datasources.xml file of the collectionpointInstallDirectory\config directory. The following is an example of the status mapping section in the file:
<property-def defaultValue="0" name="STATUSMAP:0:OK" type="string" hidden="true"/>
<property-def defaultValue="1" name="STATUSMAP:1:WARN" type="string" hidden="true"/>
<property-def defaultValue="2" name="STATUSMAP:2:ERROR" type="string" hidden="true"/>

This maps TM OK to SLM OK, TM Warning to SLM WARNING, and TM Error to SLM ALARM (internal status value 2). There is no OFFLINE mapping.
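The STATUSMAP property-defs shown above can be turned into a lookup table with a few lines of XML parsing. This sketch assumes only the property-def layout visible in the excerpt; the internal status codes follow the text (0=OK, 1=WARNING, 2=ALARM, 3=OFFLINE).

```python
import xml.etree.ElementTree as ET

# Internal SLM status codes as described in the text.
SLM_STATUS_NAMES = {0: "OK", 1: "WARNING", 2: "ALARM", 3: "OFFLINE"}

xml_excerpt = """<datasource>
  <property-def defaultValue="0" name="STATUSMAP:0:OK" type="string" hidden="true"/>
  <property-def defaultValue="1" name="STATUSMAP:1:WARN" type="string" hidden="true"/>
  <property-def defaultValue="2" name="STATUSMAP:2:ERROR" type="string" hidden="true"/>
</datasource>"""

def load_status_map(xml_text):
    """Build {TM status name: SLM status name} from STATUSMAP entries."""
    mapping = {}
    for prop in ET.fromstring(xml_text).iter("property-def"):
        name = prop.get("name")
        if name.startswith("STATUSMAP:"):
            _, _, tm_name = name.split(":")
            mapping[tm_name] = SLM_STATUS_NAMES[int(prop.get("defaultValue"))]
    return mapping

status_map = load_status_map(xml_excerpt)
```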

SLM engine overview
The main function of the Business Rule Interpretation Engine (the SLM engine) is to interpret stored rules when they are created and construct the filters needed to implement those rules. The engine is written in C++ and runs continuously as a Windows process or a UNIX daemon under armonitor, similar to the Assignment Engine and the Approval Engine. The command line that starts the engine is in the armonitor.cfg file in the conf subfolder of the AR System installation folder. The SLM engine starts when AR System starts.

Alongside the SLM engine, the Application Dispatcher engine (arsvcdsp) acts as a "command controller" for all engines running under AR System. It receives commands queued in the Application Pending form and, based on the category, sends a signal to "wake up" the engine that needs to process each command. The dispatcher is installed as part of AR System and works with any engine that runs under armonitor.

The SLM:RuleDefinition form is the main form that the SLM engine references. It contains a reference to each object needed to create a filter. The meta-data that the engine needs to generate the filter structures is stored in back-end forms. The SLM:RuleDefinition form triggers the building of these filters by issuing the following application command:
Application-Command BR-BRIE Create-Rule

BR-BRIE is the category of this command, and Create-Rule is the command received by the engine. When a filter executes this command, an entry is created in the Application Pending form. This triggers the Application Dispatcher to evaluate the command and wake the SLM engine to start processing.
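The dispatch flow can be pictured as a small command router. This is a conceptual sketch, not the arsvcdsp implementation; the function names are invented.

```python
ENGINES = {}

def register_engine(category, handler):
    """Associate an engine callback with a command category."""
    ENGINES[category] = handler

def dispatch(pending_entries):
    """Route each queued Application Pending entry to the engine
    registered for its category, the way the Application Dispatcher
    wakes the SLM engine for BR-BRIE commands."""
    for entry in pending_entries:
        handler = ENGINES.get(entry["category"])
        if handler:
            handler(entry["command"])

built_rules = []
register_engine("BR-BRIE", built_rules.append)

# A filter issuing "Application-Command BR-BRIE Create-Rule"
# effectively queues this entry:
dispatch([{"category": "BR-BRIE", "command": "Create-Rule"}])
```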


See the following figure for the data model for the back-end forms or “meta-data” used by the SLM engine and how it translates to an auto-generated filter.
Figure 13: SLM Engine backend meta-data objects

[Entity-relationship diagram: SLM:RuleDefinition is linked through SLM:Association records to SLM:Object, SLM:RuleEventData, SLM:RuleCondition, and the action forms SLM:RuleActionSequence, SLM:RuleActionProcess, SLM:RuleActionSetValue, SLM:RuleActionCallGuide, and SLM:RuleActionSetValueItem. Each form is keyed by InstanceId.]
In looking at an auto-generated filter through BMC Remedy Administrator, each part of the filter can be mapped to data in one of the back-end forms shown in Figure 13.


• Form Name in SLM:Object, with an association record between the rule InstanceId and the ID on SLM:Object
• InstanceName in SLM:RuleDefinition
• ARExecutionOrder in SLM:RuleDefinition
• Action record in SLM:RuleAction, with an association record between the rule InstanceId and the InstanceId of the RuleAction
• Condition field in SLM:RuleCondition, with an association record between the rule InstanceId and the InstanceId of the RuleCondition
• RuleEventData record in SLM:RuleEventData, with an association record between the rule InstanceId and the InstanceId of the RuleEventData

SLM rules and associations
Every auto-generated filter in SLM is a "Rule" and has an entry in the SLM:RuleDefinition form. Each rule must have one event, one condition, and one or more actions. These rules must be associated with a form that is registered by an ID (a unique GUID) in the SLM:Object form. The SLM:RuleDefinition form contains a list of all auto-generated filters built by SLM. Using the GUID from the InstanceId (field 179) on this form, you can search the SLM:Association form to view the event, condition, action, and object associated with the rule.
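The lookup described above can be sketched as a query over association triples; the in-memory layout and GUIDs are hypothetical stand-ins for SLM:Association records.

```python
# (InstanceId 1 = rule GUID, InstanceId 2 = related GUID, association type)
ASSOCIATIONS = [
    ("RULE-GUID-1", "EVENT-GUID-7", "event"),
    ("RULE-GUID-1", "COND-GUID-3", "condition"),
    ("RULE-GUID-1", "ACTION-GUID-9", "action"),
    ("RULE-GUID-2", "COND-GUID-4", "condition"),
]

def related_records(rule_instance_id):
    """Return {association type: related InstanceId} for one rule,
    as found by searching SLM:Association on the rule's GUID."""
    return {assoc_type: other
            for rule_id, other, assoc_type in ASSOCIATIONS
            if rule_id == rule_instance_id}

links = related_records("RULE-GUID-1")
```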
Figure 14: Association lookup for a rule definition

Rule actions
Actions are somewhat different from other associations to a rule because a sequence of actions can be associated with one rule. The action forms are designed so that all actions derive from the SLM:RuleAction base action form. The main actions supported are:

• Set Value Action
  o Set Value
  o Set Fields
  o Push Field
• Alert Action
• Run Process Action

An action of type Action Sequence is used internally to form a list of actions in a sequence, that is, a set of milestone actions. The Alert action is implemented internally as a Push Field action that pushes the alert data into the SLM:RuleActionNotifier back-end form, which in turn sends an alert or email.


Figure 15: Rule Actions Entity Relationships

[Entity-relationship diagram: the SLM:RuleAction base form (InstanceId, InstanceName, Action Type) connects through the SLM:RuleActionProcess_base and SLM:RuleActionSetValue_base join forms to SLM:RuleActionRunProcess and SLM:RuleActionSetValue; SLM:RuleActionSetValueItems holds the individual set-value items, SLM:RuleActionSequence groups actions in a sequence, and SLM:Association relates the records.]


Supplementary information
This section contains information about SLM tables.

Collector database schema
The following tables are in the collector database schema.

AGGREGATION_SCHEDULE table
This table holds the data aggregation intervals.
Column     Data type  Nullable  Unique  Description
ID         Char       No        Yes     Unique identifier (PK)
TIMESTAMP  Integer    No        No      Time last run, in seconds since epoch
SCHEDULE   Char       No        Yes     Data aggregation schedule


COLLECTION_NODE table
This table holds the collection node definitions.
Column                        Data type  Nullable  Unique  Description
ID                            Char       No        Yes     Unique identifier (PK)
CPID                          Char       No        No      ID of the collection point that this collection node belongs to (FK)
REQUESTID                     Char       No        Yes     Used by SLM as PK; same as ID
NAME                          Char       No        Yes     Unique name for the CN
HISTORYLIMIT                  Integer    No        No      Time delta representing how much history data is available from this CN
MISSINGDATALIMIT              Integer    No        No      Time that must elapse from the last received metric value until the data is considered missing
TYPE                          Char       No        No      Plug-in type (ART, SNMP, and so on)
SCHEDULEFREQUENCY             Integer    No        No      How often data is requested by the collector from the collection point
NODECONNECTIONPROPERTYSTRING  Char       Yes       No      Data needed to connect to the underlying application represented by this CN
ISENABLED                     Integer    Yes       No      Whether the CN is active
STATUS                        Integer    Yes       No      Code representing the state of this CN
STATUSMESSAGE                 Char       Yes       No      Text associated with the status


COLLECTION_POINT table
This table holds the collection point definitions.
Column          Data type  Nullable  Unique  Description
ID              Char       No        Yes     Unique identifier (PK)
REQUESTID       Char       No        Yes     Used by SLM as PK; same as ID
NAME            Char       No        Yes     Unique name for the CP
CONNECTIONDATA  Char       No        No      Data needed for the collector to connect to the CP
AUTHKEY         Char       Yes       No      Private key for data communication between the CP and the collector
ISENABLED       Integer    Yes       No      Whether the CP is active
STATUS          Integer    Yes       No      Code representing the state of this CP
STATUSMESSAGE   Char       Yes       No      Text associated with the status


METRIC_AGGREGATION table
This table holds the result of data aggregation.
Column      Data type  Nullable  Unique  Description
ID          Char       No        Yes     Unique identifier (PK)
SMID        Char       No        No      ID of the system metric that this entry is for (FK)
STARTTIME   Integer    No        No      Beginning of the aggregation time period
ENDTIME     Integer    No        No      End of the aggregation time period
MINVAL      Double     Yes       No      Minimum value for the time period
MAXVAL      Double     Yes       No      Maximum value for the time period
AVGMODE     Double     Yes       No      The average for numeric data; the mode for status data
TYPE        Char       Yes       No      Data type of the system metric (numeric or status)
OCCURANCES  Integer    No        No      Number of times the metric was seen in the time interval
SCHEDULE    Char       No        Yes     Unique schedule that defines the collection interval for this aggregation entry


METRIC_NUMERIC_VALUE table
This table holds the data values that are being collected.

Column     Data type  Nullable  Unique  Description
ID         Char       No        Yes     Unique identifier (PK)
SMID       Char       No        No      ID of the system metric that this metric value is for (FK)
TIMESTAMP  Integer    No        No      Time that this metric was generated, in UTC
TYPE       Char       No        No      Metric data type (numeric or status)
VALUE      Double     No        No      Actual data value
ISGUESSED  Bit        No        No      Whether the value was generated (guessed) rather than received


SLO_RESULT table
This table holds the results of the service target engine evaluating the service target expression and applying goals and missing-data rules.

Column                  Data type  Nullable  Unique  Description
ID                      Char       No        Yes     Unique identifier (PK)
REQUESTID               Char       No        Yes     Used by SLM as PK; same as ID
TIMESTAMP               Integer    No        No      Time that this result is for
STATUS                  Integer    No        No      The result of the expression evaluation
MISSINGDATARULEAPPLIED  Integer    No        No      Whether missing data was present for any KPI used in the computation
IMPACTCOST              Double     No        No      The cost calculated for this outage
VALUE                   Double     No        No      The result of the computation from the service target engine
SLOID                   Char       Yes       No      The ID of the service target that the result is for (FK)
TIMEDATESTR             Char       Yes       No      Text representation of the timestamp
NAME                    Char       Yes       No      Name of the service target that this result is for
MISSINGDATARULE         Integer    Yes       No      The rule that was applied if missing data was used in the expression evaluation


SLO_TIMESTAMP_MAP table
This table holds information about each service target’s next processing time.
Column     Data type  Nullable  Unique  Description
ID         Char       No        Yes     Unique identifier (PK)
SLOID      Char       No        Yes     ID of the service target that this entry is for (FK)
TIMESTAMP  Integer    No        No      Time that the next service target result is for

STATUS_MAP table
This table holds the mapping of external application statuses to the internal representation.

Column               Data type  Nullable  Unique  Description
ID                   Char       No        Yes     Unique identifier (PK)
CNID                 Char       No        No      ID of the CN that this mapping is for (FK)
SOURCESTATUSVALUE    Integer    No        Yes     Status value in the source
SOURCESTATUSNAME     Char       No        No      Source status label
INTERNALSTATUSVALUE  Integer    No        No      Status value used internally


SYSTEM_METRIC table
This table holds the discovered KPI definitions.
Column              Data type  Nullable  Unique  Description
ID                  Char       No        Yes     Unique identifier (PK)
CNID                Char       No        No      The collection node that this system metric was discovered from (FK)
REQUESTID           Char       No        Yes     Used by SLM as PK; same as ID
FULLNAME            Char       No        Yes     The complete and unique name for the KPI that this system metric represents
HASHISTORICAL       Integer    No        No      Whether the KPI represented by this system metric has historical values
TYPE                Integer    No        No      Data type of the system metric
FREQUENCY           Double     No        No      How often the metric is collected
STARTTIME           Double     No        No      The time when this system metric was first requested from the collection node
NAME0-NAME9         Char       Yes       No      Flattened, non-unique presentation of FULLNAME (collection node data is not included); used by SLM to display a tree in the expression builder
ISENABLED           Bit        No        No      Whether this system metric is available for use
LASTDISCOVERYTIME   Integer    No        No      Last time the collector was able to discover this system metric
DISCOVERYFAILCOUNT  Integer    No        No      Number of times the collector was unable to discover the system metric


SLM processing module significant forms
The following forms are in the SLM processing module.

SLM:Measurement form
SVT_InstanceID (490008000)—Unique GUID used internally to identify a service target.

Object ID 1 (490008100)—Object ID of the service target form.

ApplicationInstanceID (490009000)—Request ID, or any unique field used to identify the application records, such as instanceId from Incident.

Object ID 2 (490009100)—Object ID of the application form name.

GoalSchedGoalTime (300272700)—Stores the goal time, in seconds, as a copy of the Goal field on the SLM:Goal Schedule form.

SVTDueDate (300364900)—Stores the due date/time, computed by adding GoalSchedGoalTime to OverallStartTime.

OverallStartTime (300364400)—Stores the initial start time of a request, that is, the time when the initial Start When qualification is met. For example, OverallStartTime = 8/3/2004 3:03:03 PM.

OverallStopTime (300364500)—Stores the stop time of a request, that is, the time when the Stop When qualification is met. For example, OverallStopTime = 8/8/2004 3:03:03 PM.

OverallElapsedTime (300436500)—Stores the length of time between the start and stop times of a request: OverallElapsedTime = OverallStopTime - OverallStartTime. With the example values above, OverallElapsedTime is 432000 seconds (5 days).

DownStartTime (300364600)—Stores the start of the exclude time of a request, that is, the time when the Exclude qualification is met. For example, DownStartTime = 8/5/2004 5:05:05 PM.

zD_DownStopTime (300364700, display-only field)—Stores the end of the exclude time of a request, that is, the time when NOT(Exclude qualification) is met. For example, DownStopTime = 8/5/2004 6:05:05 PM.

DownElapsedTime (300364800)—Stores the total exclude time of a request: DownElapsedTime = DownStopTime - DownStartTime. With the example values above, DownElapsedTime is 3600 seconds.

UpStartTime (300440300)—Stores each start time of a request, that is, each time the Start When qualification is met. For example, UpStartTime = 8/5/2004 5:05:05 PM.

zD_UpStopTime (300440200, display-only field)—Stores the end of each up period, that is, the time when NOT(Start When qualification) is met. For example, UpStopTime = 8/5/2004 6:05:05 PM.

UpElapsedTime (300440400)—Stores the total up time of a request: UpElapsedTime = UpStopTime - UpStartTime. With the example values above, UpElapsedTime is 3600 seconds.

UpTime (300436600)—Stores the length of time between the start and stop times of a request, excluding the exclude (down) time. For example, if OverallElapsedTime = 120 seconds and UpElapsedTime = 90 seconds, UpTime is 90 seconds.

MetMissed Amount (300365300)—Stores the length of time between GoalSchedGoalTime and UpTime.

MissedGoalStartTime (301491900)—For request-based service targets, stores the time when MeasurementStatus = "Missed Goal". For example, MissedGoalStartTime = 8/5/2004 5:05:05 PM is the time when the measurement status became "Missed".

MeasurementStatus (300365100)—Stores the progress (status) of a service target for a request. The status can be Attached, In Process, Pending, Met, Missed, Invalid, Missed Goal, or Detached.

MeasurementDone (300364200)—Stores whether the measurement is finished: Yes or No. The status is set to Yes when the Stop When qualification is met.

Availability% (300433200)—Tracks keeping an asset or service available during a certain percentage of time. For example, IT commits that a group of servers will be up and running 90 percent of the time for a period of six months.

AvailabilityDownCount (300450800)—Tracks the number of times that an asset or service was unavailable. The goal of the service target is not to exceed a certain number of down occurrences. For example, an asset cannot be down more than five times in a period of six months.

AvailabilityDownHrTime and AvailabilityDownMinTime (300497500, 300497600)—Track the duration that an asset or service was unavailable. The goal of the service target is not to exceed the committed amount of down time. For example, the service target can track the number of hours and minutes that an asset has been up and running, or down and non-functional.
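The elapsed-time fields combine by simple subtraction, as in this worked sketch (arbitrary epoch-second values, chosen for illustration):

```python
# Times in seconds since epoch; values are made up for illustration.
OverallStartTime = 0         # initial Start When qualification met
OverallStopTime  = 432000    # Stop When qualification met, 5 days later
DownStartTime    = 180000    # Exclude qualification met
DownStopTime     = 183600    # NOT(Exclude qualification) met, 1 hour later

OverallElapsedTime = OverallStopTime - OverallStartTime    # total elapsed
DownElapsedTime    = DownStopTime - DownStartTime          # excluded time
UpTime             = OverallElapsedTime - DownElapsedTime  # counted time
```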


SLM:SLACompliance
CalculateNow (300425200)—Flag used to trigger the start of the compliance calculation.

CalculateNowDateTime (301496300)—Date/time value of when the calculation is done.

CalculateNowDateTime_int (301578000)—Integer date/time value of when the calculation is done.

CommentAddedText (301922500)—Comments from the dashboard.

ComplianceAtRisk (301470400)—Percentage value, from the agreement definition, at which the compliance is considered at risk.

ComplianceStatus (301553000)—Possible statuses are Compliant, At Risk, and Breached.

ComplianceStatusPrevious (301922200)—HTTP path to the icon that signifies the previous compliance status.

ContractId (301136800)—ID of the contract for which this compliance record is created. There is one compliance record per contract and per agreement.

ContractName (301801800)—Name of the contract to which this compliance record pertains.

CurrentServer (301762100)—Current server used for the reference table for compliance-only targets.

EventFrequency (301363300)—Shows the review period frequency: Daily, Weekly, Monthly, or Quarterly.

instanceId (179)—Instance ID of the compliance record. This value is referenced by the ComplianceHistory record.

Interval Met Count (300497100)—Temporary variable used to store values for calculation.

Interval Missed Count (300497200)—Temporary variable used to store values for calculation.

keywordCnc (302255800), keywordDelimiter (302256300), keywordEvent (302255900), keywordEventNew (302256000), keywordSlot (302256100), keywordSlotList (302256200)—Login information used to send SIM events.

Last Sampled Time (300835500)—The last time the compliance calculation was done.

LastSampledTime_int (301577900)—Used by the compliance logic to determine the time slot for the next calculation.

LastSLAComplianceCalc (300575400)—Signifies whether this is the first compliance value, as used by the dashboard.

MeasurementDone (300364200)—Yes if the compliance record is closed.

Met Count (300440500)—Count of all the met tickets for ticket-based service targets.

Met Percent (300440700)—Calculated compliance percentage.

Missed Count (300440600)—Count of all the missed tickets for ticket-based service targets.

Missed Percent (300450400)—Calculated percentage of missed tickets.

objectId2 (490009100)—Contract instance ID.

Path (8)—Location in the tree of the agreement to which the compliance record pertains.

ReferenceForm (301759000)—Reference form for compliance-only targets; the record looks to this form for data.

ResetNow (300440800)—Flag signaling that the compliance record is ready to be closed and that another record should be created for the next review period.

ReviewPeriodEndTimeStamp (301800100)—The total time, in seconds, of the review period. Used to enable performance-monitoring service targets to be calculated in descending percentage from 100%.

ReviewPeriodInstanceId (301494900)—InstanceId of the review period used for this compliance record.

ReviewPeriodName (300501400)—Name of the review period used for this compliance record.

SLA_Performance Goal (300316100)—Goal from the agreement for the compliance value. If the compliance falls below this value, the compliance is considered breached.

SLA_SLA ID (300314700)—ID of the agreement for this compliance record.

SLA_SLA Type (300260900)—Category of the agreement: Service Level Agreement, Operational Level Agreement, or Underpinning Contract.

SLAInstanceID (490008000)—Instance ID of the agreement for this compliance record.

SLAName (300411500)—Name of the agreement for this compliance record.

SLAPath (301593800)—Path for the location of the agreement in the tree control object.

SLMCommentsComment (300452100), SLMCommentsDate (300426900), SLMCommentsDetail (300442700), SLMCommentsGroup (300411600), SLMCommentsPerson (300442600)—Comments added from the dashboard.

SLMDescription (300260300)—Description text from the agreement.

SLMStatus (300314900)—Status from the agreement.

SLMType (301724500)—Type from the agreement.

slotListDelimiter (302256400)—Used when logging to SIM to send the changed SLM status.

StartTime (300365600)—Time the compliance record was created.

StartTimeTIMESTAMP (301804400)—Start time as an integer.

StopTime (300365700)—Time the compliance record is closed.

Total MetMissed Count (300450300)—Total number of tickets counted for the current review period.

TotalImpactCost (301539300)—Total cost of all the service targets in this review period.

UnknownTime (301754300)—Time from the MeasurementChild records that is considered Unknown.
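The roll-up of these fields into a compliance status can be sketched as below. The percentage formula and the exact threshold comparisons are assumptions for illustration; the real logic lives in the compliance calculation workflow.

```python
def compliance_status(met_count, missed_count, goal_pct, at_risk_pct):
    """Compute a Met Percent value from ticket counts and classify it
    against the SLA_Performance Goal and ComplianceAtRisk thresholds
    (assumed comparison logic)."""
    total = met_count + missed_count
    met_percent = 100.0 * met_count / total if total else 100.0
    if met_percent < goal_pct:
        return met_percent, "Breached"
    if met_percent < at_risk_pct:
        return met_percent, "At Risk"
    return met_percent, "Compliant"

pct, status = compliance_status(met_count=90, missed_count=10,
                                goal_pct=85.0, at_risk_pct=95.0)
```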


SLM definition forms
The following form is used for SLM definition.

SLM:ServiceTarget form
Each entry below lists the field name, its field ID in parentheses, and a description where one is defined.

AlarmOperator (301474900): Operators that apply to the Alarm value.
AlarmStatus (301475100): List of Alarm status values for status KPIs: OK, Warning, Alarm, or Offline.
Applies To (300523400): The application form, from the Config Data Source, for request-based and availability service targets.
BuildStatus (300543500): Status of the filters built: Built Successfully, Could Not be Built, or Build In Progress.
BusinessEntityID (301413200): Business Entity ID for the selected Entity.
BusinessEntityTags (300830700): Name of the selected Business Entity.
CategoryId (301461600): ID that points to the folder that the service target belongs to in the SLM:Category form.
Cost (300462600): Impact cost per minute of the service target being missed.
DataMissedAfterThisNumberOfMinutes (301465400)
DataSourceName (300260200): Form name corresponding to the Applies To from the Config Data Source.
DataSourceNameTrunc (301450300): Form name corresponding to the Applies To from the Config Data Source.
EffectiveFrom (300272100): Date and time when to start the service target processing.
ExcludeQualification (300273200): Qualification to specify when the time measurement should be excluded from the elapsed service target time.
FieldContainingBusinessHoursFieldID (300504700)
FieldContainingBusinessHoursSel (300502200)
FieldContainingEntityFieldID (301413300)
FieldContainingHolidayScheduleFieldID (300504800)
FieldContainingHolidayScheduleSel (300502100)
GoalGUID (301267400)
GoalKPIType (301270800)
GoalTypes (300905300): Selection field that maps to the internal allowed goal types: Request-Based, Performance-Monitoring, Availability, and Compliance-Only.
GoalValueAlarm (301263300): Value of the selected KPI that defines when the service target will be Missed.
GoalValueWarning (301263200): Value of the selected KPI that defines when the service target will be Warning.
instanceId (179): Unique GUID identifying each service target.
KPIIDExpression (301367900): Holds the KPI information used to create the KPI expression.
KPIIDList (301368000)
KPIQualification (301668300)
MeasCriteriaDescription (301648900): Description of the measurement template, a pre-defined template that can be selected instead of entering the start, stop, and exclude qualifications.
MeasCriteriaTemplateId (301271000): Unique ID (GUID) of the template.
MissingDataHours (300477200)
MissingDataRule (301465300): Defines the service target status if no data is received to evaluate the target for performance monitoring.
objectId (490000100): Object ID of the service target (SLM_SLODEFINITION) form as registered in the SLM:Object form.
ProcessingFrequency (301516100): Frequency of processing of the service target for the Performance-Monitoring type.
ProcessingFrequencyHours (301516200)
ProcessingFrequencyMinutes (301516400)
QualBuilderFormName (301576000): Enables the qualification builder to access another form to launch when the user clicks the Define button.
ReferenceEndGoalforRequestBasedSVTsFieldID (301387400)
ReferenceEndGoalforRequestBasedSVTsSel (301387500)
ReferenceFormID (300260400)
ReferenceTimeGoalforRequestBasedSVTsFieldID (301362600)
RestartServiceTarget (301473300): Specifies whether the measurement should restart when a certain condition is met. The restart qualification is defined on the Data Source.
RestartWhen (301472900)
SLA_Appform_ApplicableType (300695300)
SLA_MainGroup (300478200): Specifies service targets in the same group.
SLACategory (300260900): A category of Service Level Agreement, Operational Level Agreement, or Underpinning Contract.
SLMDescription (300260300): Detailed description of the service target.
SLMGoalType (300315600): User-defined goal descriptions, configured in the Configure Goal Type option in the Administrator Console.
SLMId (300314700): Unique generated ID, such as SLM00100.
SLMStatus (300314900): Settings of Enabled, Disabled, or Invalid.
StartQualification (300273000): Qualification to specify when to start the service target time measurement.
StartTimeforRequestBasedSVTsFieldID (301270100)
StopQualification (300273100): Qualification to specify when to end the service target time measurement.
TargetHours (300397900): Single goal time in hours.
TargetMinutes (300398100): Single goal time in minutes.
TermsAndConditions (300271400): A qualification or expression that defines the criteria for the data source instances that the service target applies to.
Title (490000400): The name describing the service target.
UseEntityAsDefinedOnTheAppFormSel (301472400)
UseGoalAsDefinedOnTheAppFormSel (301339900): Used to take the goal definition from a field on the application form instead of from the service target. The field is configured on the Data Source.
UseGoalCostSelection (300431700): Goal for the service target to be met: hours and minutes for Request-Based, status or numeric values for Performance-Monitoring. Can be a single goal or a schedule.
UseStartTimeAsDefinedOnTheAppFormSel (301563900): Used to take the start time of the service target measurement from a field on the application form, as configured on the Data Source.
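To make the goal fields concrete, the following sketch shows how a request-based service target's due time could be derived from TargetHours and TargetMinutes, and how a measurement might be classified as Met or Missed once the stop qualification fires. This is an illustrative simplification only, not the actual SLM engine logic; the function names are hypothetical, and real processing also applies business hours, holiday schedules, and the ExcludeQualification.

```python
from datetime import datetime, timedelta

# Illustrative sketch only (not the actual SLM engine): the goal elapses
# TargetHours/TargetMinutes after the StartQualification is met.
def due_time(start, target_hours, target_minutes):
    return start + timedelta(hours=target_hours, minutes=target_minutes)

def classify(start, stop, target_hours, target_minutes):
    # "Met" if the stop qualification fired before the goal elapsed,
    # otherwise "Missed" (a simplified SLMStatus-style outcome).
    return "Met" if stop <= due_time(start, target_hours, target_minutes) else "Missed"

start = datetime(2006, 11, 1, 9, 0)
print(classify(start, datetime(2006, 11, 1, 12, 30), 4, 0))  # Met
print(classify(start, datetime(2006, 11, 1, 14, 0), 4, 0))   # Missed
```

In the real product, the start, stop, and exclude conditions are the StartQualification, StopQualification, and ExcludeQualification fields described above, or come from a pre-defined measurement template (MeasCriteriaTemplateId).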
