Thursday, March 17, 2016

Integration

INTRODUCTION TO INTEGRATION

Understand the Integration Capabilities

If our application needs to respond to requests from external applications, the integration is known as a service. If our application needs to initiate the request by making a connection to an external service, the integration is known as a connector.

Integration Wizards

The development of most integrations is supported by wizards, which guide us through creating connectors and services and dramatically accelerate development.
The connector wizards use metadata, such as a WSDL file, an EJB class, or a data table definition, to create the connector and mapping rules.
DesignerStudio > Integration > Connectors > ...
Classes and properties used for mapping the request and response data are typically created as well.
The service wizard lets us create a service for our application, allowing external systems to send requests to it.
DesignerStudio > Integration > Services > Service Wizard
The wizard creates service and deployment records as well as listeners if they are required for the specific service.

Invoke a Connector

A connector is invoked either from a data page or from an activity. If we call a connector from a flow, we can use the Integrator shape to reference the activity.
Use the connector with a data page if we want to fetch data from the service, and with an activity if we want to push data to the service.
We can specify the type of connector to use as well as the request and response data transforms to map the request and response. The data page can be used on its own, or referenced by or copied to a property.
If we use an activity, we can explicitly call a connector. We use data transforms to map application data to the request and from the response.

Configure Resource Settings

When we migrate an application from one server or environment to another, references to the external systems that the application connects to, such as endpoint URLs and JNDI servers, might change.

Some of these references are in rules, which may belong to locked RuleSet versions. To avoid having to unlock and update those rules, and to avoid manually updating these references after each migration, the Global Resource Settings feature is often implemented.

Diagram the Connector Processing Model

  1. Before the connector is invoked from a data page or activity, a data transform is used to map data between our application and integration clipboard pages.
  2. Then the connect rule is invoked. Connect rules are protocol specific and implement the interface to the remote service. They specify the target service, and provide data mapping configuration details for outbound and inbound content.
  3. The service client is initialized based on the connect rule type.
  4. The request data is mapped into the format required by this type of connector using the mapping rules specified in the connect rule. Don’t confuse this mapping with the data transforms. This mapping is between the clipboard and the format required by this type of connection.
  5. The request is sent to the external system.
  6. When the response is received, the response data is mapped using the mapping rules specified in the connect rule.
  7. The service client is finalized, and control returns to the data page or activity.
  8. Finally, a data transform is used to copy all or part of the response from the integration clipboard data structure to our application (a generic sketch of this round trip follows the list).
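To make these eight steps concrete, here is a minimal stand-alone sketch of the same round trip in plain Java. It is an analogy only, not the Pega engine's API: every class and method name in it is made up, the "wire format" is a hard-coded string, and the comments map each part back to the steps above.

// Illustration only: a generic connector round trip, not the Pega engine's internals.
// All names (Supplier, SupplierRequest, SupplierResponse, callRemoteSystem) are made up.
public class ConnectorRoundTrip {

    static class Supplier { String id; String status; }     // "application" clipboard page
    static class SupplierRequest { String supplierId; }     // integration request page
    static class SupplierResponse { String status; }        // integration response page

    // Steps 1-8 of the processing model, collapsed into one method.
    static Supplier refreshStatus(Supplier appPage) {
        // 1. Outbound data transform: application page -> integration request page
        SupplierRequest req = new SupplierRequest();
        req.supplierId = appPage.id;

        // 2-5. Connect rule: serialize to the wire format and send it to the external system
        String wireRequest = "<getStatus><id>" + req.supplierId + "</id></getStatus>";
        String wireResponse = callRemoteSystem(wireRequest);

        // 6. Connect rule mapping: wire format -> integration response page
        SupplierResponse resp = new SupplierResponse();
        resp.status = wireResponse.replaceAll(".*<status>(.*)</status>.*", "$1");

        // 7-8. Inbound data transform: copy what the application needs back to its own page
        appPage.status = resp.status;
        return appPage;
    }

    // Stand-in for the protocol-specific client (SOAP, REST, ...) and the remote system.
    static String callRemoteSystem(String requestXml) {
        return "<getStatusResponse><status>ACTIVE</status></getStatusResponse>";
    }

    public static void main(String[] args) {
        Supplier s = new Supplier();
        s.id = "SUP-100";
        System.out.println(refreshStatus(s).status);         // prints ACTIVE
    }
}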

Diagram the Service Processing Model

  1. The service listener is responsible for sensing that an incoming request has arrived. This functionality is sometimes provided by the underlying Web or Application Server, and sometimes provided by a PRPC Listener.
  2. The service listener receives the request and instantiates the “Service API” to provide communication with PRPC. Then, via the Service API, control is handed to PRPC.
  3. PRPC looks up the service package and related service rule, using the access group that is specified in the service package.
  4. It then establishes the service requestor, and optionally performs authentication based on security credentials that are passed in the request. Once authenticated, service processing continues using the authenticated user’s access group, not the access group that is contained in the service package.
  5. The request is mapped onto the clipboard according to the specifications contained in the service rule.
  6. Control is passed to the service activity, which provides the logic for the service.
  7. When your service activity completes, control is returned to the Service API.
  8. The service rule maps the clipboard data to form the response data.
  9. The Service API then cleans up the requestor that it previously created, and returns control to the service listener.
  10. Finally, the service listener sends the response.
The service activity is the primary activity that is executed by the service. It contains the ‘business logic’ with the request as input and creates the Clipboard structure that represents the response.
The service rule is the first step toward exposing our service activity as a service. It specifies the service activity, defines the external API of our service (the ‘signature’ that external applications use to communicate with this service), and specifies the data mapping for the request and response.

The service package is a data instance, not a rule. It specifies the access group used to locate the service rule and provides “service requestor” authentication, pooling, and deployment options.

The job of the service listener is to sense when a request has arrived. In many integration types, this functionality is provided by the underlying Web or Application Server.

For those types where it is not, PRPC provides Listener data classes that allow us to specify the details of listener initialization.

The data instances should be associated with our application’s ruleset, since it is our application that provides the service. There are no special considerations with regard to the enterprise class structure when creating a service. Note that it is a best practice to use authentication.
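As an analogy only (this is not how PRPC implements services), the sketch below uses the JDK's built-in HTTP server to show the same division of responsibilities: the server plays the service listener, the query-string handling plays the request and response mapping, and handleOrder() plays the service activity. The port and URL are arbitrary.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Analogy only: HttpServer plays the "service listener", handleOrder() plays the
// "service activity", and the string handling plays the service rule's data mapping.
public class MiniService {

    // The "service activity": business logic with the request as input.
    static String handleOrder(String orderId) {
        return "Order " + orderId + " accepted";
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/order", exchange -> {
            // Request mapping: pull the order id out of the query string.
            String query = exchange.getRequestURI().getQuery();          // e.g. id=42
            String orderId = query == null ? "unknown" : query.replace("id=", "");

            // Invoke the "activity", then map its result to the response body.
            byte[] body = handleOrder(orderId).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();                                    // the "listener" now waits for requests
        System.out.println("Listening on http://localhost:8080/order?id=42");
    }
}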

ERROR HANDLING AND DEBUGGING

There are techniques that allow an application to detect and possibly recover from errors when interacting with another system. These techniques vary depending on the role of the application in the integration.

Understand Error Handling in Connectors

It is important that we understand the types of errors that can occur when integrating with other applications or systems so that we can quickly identify and fix the error.
There are two basic types of errors that can occur when integrating through a connector:
  • Transient – these errors typically don’t last very long and correct themselves over time.
  • Permanent – these errors can also occur and are typically due to a configuration error or an error in the remote application logic.
Depending on how the connector is invoked, there are different mechanisms for determining whether there has been an error. Each of these mechanisms is described below.
  • If the connector is invoked from a data page, an error while loading the page adds a message to it. Check for messages in the Post Load Processing Activity and handle the error there.
  • If the connector is invoked from an activity, use the transition step status to check whether the connector failed and, if so, handle the error (a generic retry sketch follows this list).
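As an illustration of the transient-versus-permanent distinction (generic Java, not Pega's error-handling mechanism), the sketch below retries only errors it classifies as transient and gives up immediately on anything else. Treating IOException as the transient category is an assumption made for the example.

import java.io.IOException;
import java.util.concurrent.Callable;

// Illustration only: retry transient failures, fail fast on permanent ones.
public class RetryExample {

    // Assumption for the sketch: I/O problems (timeouts, broken connections) are
    // transient; anything else (bad configuration, bad data) is permanent.
    static boolean isTransient(Exception e) {
        return e instanceof IOException;
    }

    static <T> T callWithRetry(Callable<T> connector, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return connector.call();
            } catch (Exception e) {
                if (!isTransient(e) || attempt == maxAttempts) {
                    throw e;                        // permanent error, or out of attempts
                }
                Thread.sleep(1000L * attempt);      // simple backoff before the next try
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String result = callWithRetry(() -> "response from the remote system", 5);
        System.out.println(result);
    }
}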
In addition to checking for and handling errors directly after the connector is invoked, it is also possible to use the Error Handling Flow feature available for most connectors.
Note that the Error Handling Flow is not executed if the error was handled in a transition in the activity or if the connector is invoked from a data page.
The Error Handling Flow feature is enabled by default and is triggered if the exception is not caught elsewhere.
Unless we specify a different flow, the standard flow Work-.ConnectionProblem is used.
Let’s walk through the ConnectionProblem flow to understand its behavior.
If the exception is a resource unavailable exception, the case is automatically assigned to a workbasket named IncompleteConnections.

The RetryConnection Service Level Agreement (SLA) then goes into effect. It retries the connector from the original flow a maximum of five times: once at the goal event, once at the deadline event, and three times at the passed-deadline events.

Based on the RetryConnection activity, if the connector succeeds during any of those five attempts, the ConnectionProblem flow assignment is cancelled, the ConnectionProblem flow ends and the original flow resumes.

However, if the connector fails again during any of those five attempts for the same reason, meaning with the same resource unavailable exception, the assignment is kept in the workbasket so that the next SLA event can retry it.

The SLA stops trying after the fifth unsuccessful attempt. In that case, the assignment stays in the IncompleteConnections workbasket until a user or system administrator either restarts the flow or cancels the assignment.

The other path in the standard ConnectionProblem flow handles the situation in which the connector returns an exception other than the resource unavailable exception. In that case, the ConnectionProblem flow spins off the FlowProblems flow and ends.

If a new exception is thrown while retrying after the previous resource unavailable failures, the assignment in the IncompleteConnections workbasket is reassigned to the FlowProblems flow.

Depending on its parameter settings, the FlowProblems flow can either route the work item to a problem flow workbasket or notify an operator about the issue. From there, the step that caused the issue can be retried, the initial flow can resume or be restarted from the beginning, or the assignment can simply be cancelled.

Of course, we can also create a new flow for handling connector errors or customize the standard ConnectionProblem flow by creating our own version of it or any of its parts to meet our needs.


Simulate a Connector


Connector simulations are very useful when the data source is not available or when we want to dictate the response returned. Connector simulators can be set up for most connector types.
The Integration landing page (DesignerStudio > Integration) gives us an overview of the available connectors and their simulators.
The landing page shows for which connector types simulations are available and whether they are active. Connector simulations can be temporarily enabled and disabled from the landing page using the Enable/Disable Simulations button. It is also possible to permanently disable all active simulators in one step using the Clear Simulations button. Note that SQL connectors cannot be simulated and that no connector simulations are available out-of-the-box.
The connector simulator configuration can be accessed directly from the landing page by clicking the “X Simulations link” or from the Simulations button on the connector rule.
Simulation activities can be defined on a global or user session level. If global is selected the connector is simulated for all users on the system. If user is selected the connector is simulated for the current user only.
It is possible to define several simulation activities for each connector, but only one can be active per level. Hence, a connector can have one simulation activity active as a global simulation and another one active for the user session. In such a case the user session simulation overrides the global simulation.
If we’re using our connector with a data page we have the option to simulate the data source instead of the connector.
Place the connector simulation activity in the same class as the connector it simulates. It is worth considering placing the connector simulation activities in a separate RuleSet that is not moved to the production system.
The step page of the connector simulation activity is the service page of the connector it simulates, so it is easy to set response properties. Note that the properties are set directly on the service page; the stream and parse rules that some connector types define on the service page are not used in simulation mode.
We can use pre-conditions to return different responses based on, for example, values in the request.
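The idea behind a simulator can be pictured outside Pega as a simple stub swap. The sketch below is an illustration only, with made-up names: the caller works against an interface, and a canned implementation is substituted when simulations are enabled, much as an active simulation activity stands in for the real connector.

// Illustration of the simulation idea with made-up names; in Pega the switch is the
// simulation activity registered and enabled on the Integration landing page.
public class SimulationSketch {

    interface SupplierConnector {
        String getStatus(String supplierId);
    }

    // The real implementation would call the remote service.
    static class RealSupplierConnector implements SupplierConnector {
        public String getStatus(String supplierId) {
            throw new IllegalStateException("remote system not reachable in this environment");
        }
    }

    // The simulator returns a canned response so development and testing can continue.
    static class SimulatedSupplierConnector implements SupplierConnector {
        public String getStatus(String supplierId) {
            return "ACTIVE";   // dictated response; could branch on supplierId like a pre-condition
        }
    }

    public static void main(String[] args) {
        boolean simulationsEnabled = true;   // in Pega: the Enable/Disable Simulations toggle
        SupplierConnector connector = simulationsEnabled
                ? new SimulatedSupplierConnector()
                : new RealSupplierConnector();
        System.out.println(connector.getStatus("SUP-100"));
    }
}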

Understand Error Handling in Services


It is critical to catch errors and exceptions during execution of a service rule so that they can be communicated to the client application in a clean way. It is a best practice to configure clipboard property values to communicate service errors to the calling application.
Many service types have an exceptions or faults tab. Service SOAP, for example, provides a Faults tab.
Other service types allow a service error condition to be defined on the Response tab.
When the service encounters a processing error and a condition evaluates to true, the error message defined for that condition is returned to the calling application. The following options are available for defining an error response message.
  • When – the specified When rule returns true.
  • Queue When – if the specified When rule returns true, the request is queued and a PRPC-specific SOAP fault that includes the queue ID of the request is returned. Asynchronous processing is covered in another lesson.
  • Mapping Error – an error occurs while mapping incoming data from the request message to the clipboard.
  • Security Error – authentication fails.
  • Service Error – a valid instance of the service activity cannot be found.
If the mapping, security, and service errors are not defined, the system returns standard exceptions.
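The intent of these conditions can be sketched in plain code: detect what went wrong and hand the caller a structured error instead of a raw exception. The sketch below is an illustration only; the categories mirror the options above, but the class, error codes, and messages are invented for the example.

// Illustration only: turn internal failures into a structured error response
// instead of letting the raw exception reach the calling application.
public class ServiceErrorMapping {

    static class ServiceResponse {
        String result;
        String errorCode;
        String errorMessage;
    }

    static ServiceResponse process(String requestPayload, boolean authenticated) {
        ServiceResponse response = new ServiceResponse();
        try {
            if (!authenticated) {                                              // "Security Error"
                response.errorCode = "SECURITY";
                response.errorMessage = "Authentication failed";
            } else if (requestPayload == null || requestPayload.isEmpty()) {   // "Mapping Error"
                response.errorCode = "MAPPING";
                response.errorMessage = "Request could not be mapped";
            } else {
                response.result = "Processed: " + requestPayload;              // normal path
            }
        } catch (RuntimeException e) {                                         // "Service Error"
            response.errorCode = "SERVICE";
            response.errorMessage = e.getMessage();
        }
        return response;
    }

    public static void main(String[] args) {
        ServiceResponse r = process("", true);
        System.out.println(r.errorCode + ": " + r.errorMessage);   // MAPPING: Request could not be mapped
    }
}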

Debug Connectors and Services

Tracer

We can use the Tracer to capture session information about the connector’s actions from the moment the session starts. The Tracer can monitor any active requestor session. When using it for a connector, we start it for our requestor session before running the connector activity.

A service, however, runs in a new requestor session, and the processing is usually so quick that it can be hard to catch the event to trace it. Therefore, the Trace option available in the rule form’s Actions menu is more convenient for tracing service rules. Using this option we can trace a specific rule, both when unit testing the service using the Run option in the Actions menu and when tracing real service requests invoked from an external client application.
In the Tracer, we can use the following options when tracing services.
  • The services option adds steps when the service invocation begins and ends. Nested within those steps are entries that show when the inbound and outbound data mapping begins and ends.
  • The parse rules option adds steps when the parse rules begin and end their processing.
  • The stream rules option indicates when HTML and XML stream rules begin and end their processing.

Clipboard

We can use the Clipboard tool to examine property values and messages associated with them.

For connectors, it is easy to examine the clipboard, since connectors typically run in our requestor context. We can create or select a work item, move it through the flow, and examine the clipboard before the connector is invoked and after it obtains a response.

The session data of a service requestor is also accessible on the Clipboard. However, because the duration of a service request is so short, it is nearly impossible to examine the Clipboard if it is invoked externally and therefore not run in our requestor context. To see the Clipboard of a service rule, we must invoke the service using the Run option in the Actions menu.

Log File

A message is written to the Alert Log file when some processing events exceed their threshold. Alert messages are available for both services and connectors.

During a service execution, the following operations can contribute to a long transaction time:
  • Inbound data mapping
  • Outbound data mapping
  • Data parsing
  • Service activity execution
If one of those time thresholds is exceeded, a PEGA0011 alert message is reported in the alert log.

Similarly, a PEGA0020 alert message is reported when a connector call to an external system has exceeded an elapsed time threshold.

Besides the Alert Log, we have the Pega Log, in which system errors, exceptions, and debug statements are gathered. While testing our integration interactions, we can increase log level settings, add more loggers, and then examine the results in the log file. Use the DesignerStudio > System > Tools > Logs > Logging Level Settings landing page to update the log level settings.

For connectors, we need to set a logging level for the Invoke activity.

Below is a list of Java classes with their matching service types. We can set logging levels for the classes appropriate for the service we are testing.

Service Type – Classes
  • All – com.pega.pegarules.services
  • SOAP or .Net – com.pega.pegarules.services.soap.SOAPService, com.pega.pegarules.services.soap.SOAPResponseHandler, com.pega.pegarules.services.soap.SOAPUtils
  • EJB – com.pega.pegarules.services.jsr94
  • Java – com.pega.pegarules.services.jsr94
  • JMS – com.pega.pegarules.services.JMSListener
  • MQ – com.pega.pegarules.services.MQListener
  • File – com.pega.pegarules.services.file
  • Email – com.pega.pegarules.services.EmailListener

CONFIGURING A SOAP CONNECTOR

Use the Create a SOAP Connector Wizard

The wizard walks us through the process of gathering the details of the service and defining a data model from a WSDL. The wizard then creates the records needed to interact with an external SOAP service.
From the Designer Studio menu select Integration > Connectors > Create SOAP Integration to start the wizard. A SOAP integration requires a WSDL that describes the remote service.
We can provide a URL indicating where the WSDL is hosted. The system prompts us for credentials if the server hosting the WSDL requires authentication.
Alternatively, we can upload the WSDL as a file. Many WSDLs do not stand alone but reference other WSDLs or schemas defining shared data models. The wizard only allows a single file, so WSDLs that don’t stand alone cannot be uploaded as a file and need to be referenced via a URL.
Next we have to select the operations that we want to be able to call in our application.
If the service requires authentication click the Edit Authentication link to configure it. It is possible to configure a single Authentication Profile for the service or different profiles for each selected operation. We can specify an existing profile or provide the required information such as authentication scheme, user name, and password to have an Authentication Profile generated by the wizard.

You can use the Test button if you want to test the connection for an operation.

Click Next to display the final screen, where we can configure several key components.

In the Integration field, enter the name of this integration service. The short name is a unique identifier and defaults into the Description field. It is also used in the class, RuleSet, and authentication profile names created.

The Reuse Layer determines where this connector can be reused.
  • Global Integrations indicates that the integration can be used across organizations and applications.
  • Implementation Integration Class indicates that the integration can be used within this application only.
  • Organization Integration Class indicates that the integration can be reused across applications within the organization.
If implementation or organization integration is selected, we can either create a new RuleSet for this integration above the reuse layer or use the integration RuleSet of the reuse layer.

The setting we use for the reuse layer determines the base class and RuleSet in which the connector rules are created.

Reuse Layer    | Base class           | RuleSet Option | Connector RuleSet   | RuleSet Prerequisite
Implementation | Org-App-Int-Supplier | New            | SupplierIntegration | AppInt
Implementation | Org-App-Int-Supplier | Reuse Layer    | AppInt              | OrgInt
Organization   | Org-Int-Supplier     | New            | SupplierIntegration | OrgInt
Organization   | Org-Int-Supplier     | Reuse Layer    | OrgInt              | Base Pega RuleSet
Global         | Int-Supplier         | N/A            | SupplierIntegration | OrgInt has SupplierIntegration as a prerequisite
If the integration already exists at the specified reuse layer, the message “This integration already exists at the reuse layer specified. If you proceed the integrations will be merged” appears, and proceeding creates a new RuleSet version of the integration RuleSet.
Click Preview to view a summary of records the wizard will create for this integration.
Click Create to create the integration. Depending on what RuleSet options we selected and the security settings in use for the application and its rulesets, a password (or passwords) may be required to proceed.
We can use the Undo Generation button to remove the records generated.
Select DesignerStudio > Application > Tools > All Wizards to see a list of all wizards. This allows us to complete a wizard in progress or undo generation of a completed wizard.

Handle WSDL Changes

If the WSDL changes after the integration has been created, we can re-run the wizard. The wizard then creates a new RuleSet version of the integration RuleSet and creates new and updated rules in that RuleSet version.

Updating Connectors Generated Prior to Pega 7

Prior to Pega 7, the Connector and Metadata Accelerator was used to create a SOAP connector. If our application uses connectors generated prior to Pega 7, we cannot use the new wizard to update those interfaces if they change. This is because of changes to how the base class name is constructed and to the way value lists were used prior to Pega 7. For such cases, the old wizard is still available under DesignerStudio > Integration > Connectors > Create Other Integration.

Understand the Records Generated

The SOAP connector wizard we just used created the following rules.
  • Classes and properties representing the request and response data
  • Connect SOAP rule
  • XML Stream and Parse XML rules for request and response mapping
In addition to the above rules, an Authentication Profile record is created if we configured it in the wizard.
Classes and properties are created to hold the request and response parameters.


Connect SOAP

Most of the fields on the service tab were populated from the WSDL file.

If the operation requires authentication and it was configured in the wizard, an Authentication Profile is referenced in the Authentication section.

The Service Endpoint URL (shown in the Connection section) varies depending on the connector’s environment (development, QA, or production). The Global Resource Settings feature allows us to specify the endpoint URL with a system variable, rather than using a rule that may be maintained in a locked ruleset.
The wizard defaults the response timeout to 120,000 milliseconds. Typically we want to adjust this to a much lower value, depending on the SLA of the service. In particular, if this service is invoked as part of a user’s interactive session, this value should be reduced to a few seconds.
The error handling properties hold status and exception information returned by the connector, and the processing options are advanced settings that allow us to set up the connector to operate asynchronously.
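For comparison outside Pega, the sketch below shows the same principle with the JDK 11+ HTTP client: short, explicit connect and response timeouts so an interactive caller fails fast instead of waiting two minutes. The endpoint URL and the specific timeout values are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Illustration of tight client-side timeouts with the JDK 11+ HTTP client; the URL is a placeholder.
public class TimeoutExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(3))        // fail fast if the endpoint is unreachable
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.org/supplier-service"))
                .timeout(Duration.ofSeconds(5))               // overall response timeout for an interactive call
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}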

The Request tab


Entries in the request headers specify any data that needs to be included in the SOAP envelope header.

Entries in the request parameters specify the data that will be mapped into the body of the SOAP request message. In this particular case the body is mapped using an XML stream rule.

The Response Tab


We use the target property if we want to store the entire original SOAP response with the case. This might be useful or required for auditing or history purposes.

Response header entries are used to map the incoming SOAP envelope header.

The response parameters allow us to specify how to map the data from the body of the reply message to Clipboard properties. In this particular case the body is mapped using an XML parse rule.

The Faults Tab


A SOAP fault is an error in a SOAP communication resulting from an incorrect message format, header-processing problems, or incompatibility between applications. When a SOAP fault occurs, a special message is generated that contains data indicating where the error originated and what caused it. The properties on the Faults tab allow us to map the fault details returned and handle the error appropriately.
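For reference, a SOAP fault is just a well-known element in the response body carrying a code and a reason. The sketch below parses a made-up fault with the standard SAAJ API (javax.xml.soap, included with Java 8); the code and reason it prints are essentially the details the Faults tab lets us map.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.MimeHeaders;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPFault;
import javax.xml.soap.SOAPMessage;

// Parse a (made-up) SOAP 1.1 fault with the standard SAAJ API.
public class SoapFaultExample {
    public static void main(String[] args) throws Exception {
        String faultXml =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "  <soapenv:Body>" +
            "    <soapenv:Fault>" +
            "      <faultcode>soapenv:Server</faultcode>" +
            "      <faultstring>Supplier not found</faultstring>" +
            "    </soapenv:Fault>" +
            "  </soapenv:Body>" +
            "</soapenv:Envelope>";

        MimeHeaders headers = new MimeHeaders();
        headers.addHeader("Content-Type", "text/xml");
        SOAPMessage message = MessageFactory.newInstance().createMessage(
                headers, new ByteArrayInputStream(faultXml.getBytes(StandardCharsets.UTF_8)));

        SOAPBody body = message.getSOAPBody();
        if (body.hasFault()) {
            SOAPFault fault = body.getFault();
            // These are the pieces a connector maps on its Faults tab:
            // where the error originated (code) and what caused it (reason).
            System.out.println("Fault code:   " + fault.getFaultCode());
            System.out.println("Fault string: " + fault.getFaultString());
        }
    }
}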

The Advanced Tab


The SOAP connector architecture uses an Axis client to make SOAP calls and process the response. The client properties allow us to specify settings for the Axis client.

If the connector requires security we can configure it on the Advanced tab.

Compensating actions are intended for use when the connector succeeds, but a subsequent step in the process determines that the action of the connector should be reversed. Typically the compensating action sends a message to the external system by executing another connector that undoes the action of the original connector. Compensating actions are not intended to help recover from system failures.
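As a generic illustration of the pattern (the operations and identifiers are invented), the sketch below places an order and then cancels it with a second call when a later step decides the action should be reversed.

// Illustration of a compensating action with made-up operations:
// the first call succeeds, a later step fails, so we invoke the reversing call.
public class CompensationSketch {

    static String placeOrder(String item) {
        System.out.println("Remote system: order placed for " + item);
        return "ORD-1";                                    // id returned by the external system
    }

    static void cancelOrder(String orderId) {              // the compensating connector
        System.out.println("Remote system: order " + orderId + " cancelled");
    }

    static boolean laterStepSucceeds() {
        return false;                                      // pretend a downstream check fails
    }

    public static void main(String[] args) {
        String orderId = placeOrder("widget");             // the original connector succeeded
        if (!laterStepSucceeds()) {
            cancelOrder(orderId);                          // reverse it; this is not recovery from a system failure
        }
    }
}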

XML Stream

The XML stream rule maps data from the Clipboard to the SOAP request.

The Mapping Tab


Here, we map the supplier to the update supplier service request as we can see in the tree structure.

The XML Tab


The XML tab defines how the properties will be mapped into the XML message structure.

Parse XML


The XML parse rule maps the response to the Clipboard.

Mapping Tab



Here, the list of suppliers is mapped from the get supplier list service response.
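The split between an XML Stream rule (clipboard to XML) and a Parse XML rule (XML to clipboard) is the same serialize/deserialize split that any XML binding shows. The sketch below is an analogy only, using the standard JAXB API that ships with Java 8; the Supplier element and its fields are made up.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Analogy only: marshalling plays the XML Stream rule, unmarshalling plays the Parse XML rule.
public class XmlRoundTrip {

    @XmlRootElement(name = "Supplier")
    @XmlAccessorType(XmlAccessType.FIELD)
    public static class Supplier {
        public String name;
        public String status;
    }

    public static void main(String[] args) throws Exception {
        JAXBContext context = JAXBContext.newInstance(Supplier.class);

        // "XML Stream rule": object (clipboard page) -> XML request body
        Supplier out = new Supplier();
        out.name = "Acme";
        out.status = "ACTIVE";
        StringWriter xml = new StringWriter();
        context.createMarshaller().marshal(out, xml);
        System.out.println(xml);

        // "Parse XML rule": XML response body -> object (clipboard page)
        Supplier in = (Supplier) context.createUnmarshaller().unmarshal(new StringReader(xml.toString()));
        System.out.println(in.name + " / " + in.status);
    }
}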
