Sunday, December 12, 2010

Contactless Credit Cards and Electronic Pickpocketing

A recent development in the credit card market is contactless cards - just wave the card at the terminal to pay.
Providers include:
  • Visa: Visa Contactless, payWave
  • MasterCard: PayPass
  • American Express: ExpressPay

In Australia, major banks such as Commonwealth Bank, ANZ, Macquarie Bank and NAB are offering them.
Transaction limits: Visa payWave - AU$100; MasterCard PayPass - AU$35

Benefits:
  • Greater convenience - no need to carry cash in hand
  • Greater speed to pay
  • Innovative experience
  • Greater security – while purchasing goods, your card will never leave your hands. This reduces the risk that your card details may be copied or compromised in any way.

How Does a Contactless Credit Card Work?
The cards contain RFID chips; communication between the card and the terminal takes place over radio waves, with the data transmission encrypted. It works as long as the card is 4 centimetres or less from the terminal. No more swiping or inserting into a card reader; simply tap and go.

Risks & explanations

These chips encode basic information (e.g., account numbers, expiration dates) that can be picked up by point-of-sale RFID readers, eliminating the need for cards to be physically handled or swiped. One possible drawback to this technology is that unauthorized persons might use an RFID reader of their own (as little as a portable reader and a netbook computer) to surreptitiously glean that same information and engage in card "skimming".
Luckily, the data streams emitted by contactless cards do not include information such as PINs, CVV security codes or, in newer cards, the customer name. Without those pieces of information, a card skimmer should not be able to use the stolen card numbers to print counterfeit cards or engage in Card Not Present transactions.
Payment companies claim that the process of making purchases with the cards involves verification procedures based on powerful encryption that make each transaction unique. Most cards transmit a dummy number that does not match the number embossed on the card, and that number can be used only together with a verification token, which is encrypted before being sent.
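To make the "each transaction is unique" idea concrete, here is a toy sketch of a per-transaction token. This is purely illustrative and is not the real EMV dynamic-cryptogram scheme used by payWave/PayPass; the names (`card_key`, `dummy_pan`, `atc`) and the HMAC construction are assumptions for the example.

```python
import hmac
import hashlib

def transaction_token(card_key: bytes, dummy_pan: str, atc: int) -> str:
    """Derive a one-time verification token from a per-card secret key,
    the dummy card number the card transmits, and an application
    transaction counter (atc) that increments on every tap."""
    msg = f"{dummy_pan}:{atc}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"per-card-secret-provisioned-by-issuer"
t1 = transaction_token(key, "4111111111111111", atc=41)
t2 = transaction_token(key, "4111111111111111", atc=42)
# Each tap yields a different token, so a skimmed token cannot be replayed
assert t1 != t2
```

The point of the counter is that a skimmer who captures one token has captured something that is already spent: the terminal and issuer expect the next counter value, not a repeat of the old one.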
Alternatively, a stainless steel wallet shields the card ;)

They are already popular in the transport world:
  • Octopus card in Hong Kong (1st ever)
  • Oyster card in London
  • Navigo pass in Paris
  • Suica in Tokyo
  • SL Access card in Stockholm
  • Clipper card in San Francisco
  • Delhi Metro rail

In2Pay Contactless Payment - even iPhone App
Visa and DeviceFidelity collaborated to combine Visa's contactless payment technology, Visa payWave, with DeviceFidelity's In2Pay technology, which uses a phone's microSD memory slot to turn the phone into a mobile contactless payment device. On the iPhone, which has no microSD slot, this is delivered through an app plus a companion accessory; the approach otherwise applies to any mobile phone with a microSD slot.
The In2Pay solution transforms any mobile phone with a microSD memory slot into a mobile contactless transaction device, offering a full-featured user interface that supports multiple mobile operating systems.

The In2Pay microSD v2 is Trusted Service Manager (TSM) ready, allowing TSM client software on mobile devices to interact with the In2Pay Secure Element through a new Java-based Application Programming Interface (the In2Pay API). The patented TSM-ready architecture of In2Pay v2 allows TSM providers to support the In2Pay solution without modifying TSM servers designed for embedded or SIM-based NFC solutions.

With In2Pay v2, DeviceFidelity builds on the plug-and-play features of previous versions to meet the growing market demand for a mobile contactless solution that can interact with the wallet solutions of established TSM vendors and can be issued through multiple delivery channels.

RFID - Radio Frequency Identification 
CVV - Card Verification Value (normally the 3-digit security code on the back of the card)

Tuesday, November 23, 2010

When to use OSB & BPEL?

Use OSB for:
  • Endpoint routing (providing location transparency) so that we do not care about the physical location of the endpoint.
  • Endpoint abstraction (interface transparency) so that we do not care about the exact data formats required by the endpoint because the OSB will take care of transformations.
  • Load balancing so that we do not care about which of multiple service implementations will actually service a request. 
  • Throttling so that we do not care about how use of services is restricted.  
  • Enrichment so that we do not care about how additional data is provided to the request to match the expected request and response formats.
  • Simple synchronous composition so that we do not care if our abstract service call is actually made up of two or more physical service calls.
  • Protocol conversion so that we do not care what physical transports are being used.
  • Sync/async abstraction so that we can treat services as fire and forget or query response according to the needs of the client.
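Two of the OSB capabilities above - endpoint routing with load balancing, and enrichment - can be pictured with a toy "bus" sketch. This is purely illustrative and is not Oracle Service Bus code; the endpoint names and payload fields are invented for the example.

```python
# Toy service-bus sketch (illustrative only; not OSB code).
ENDPOINTS = {"billing": ["http://host-a/billing", "http://host-b/billing"]}
_round_robin: dict = {}

def route(service: str) -> str:
    """Endpoint routing + load balancing: callers see a logical service
    name, never a physical URL; the bus rotates across implementations."""
    i = _round_robin.get(service, 0)
    urls = ENDPOINTS[service]
    _round_robin[service] = (i + 1) % len(urls)
    return urls[i]

def enrich(msg: dict) -> dict:
    """Enrichment: supply data the backend expects but the caller omitted."""
    return {**msg, "currency": msg.get("currency", "AUD")}

assert route("billing") == "http://host-a/billing"
assert route("billing") == "http://host-b/billing"
assert enrich({"amount": 10})["currency"] == "AUD"
```

The caller's code never changes when an endpoint moves or a second implementation is added - that is the location and interface transparency the bullets above describe.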

Use BPEL for:
  • Complex composition of parallel flows that involve more than a couple of services.
  • Long running compositions that may run for minutes, hours or days.
  • Asynchronous compositions that require correlation of requests and responses.
  • Process abstraction that enables us to track processes and their interactions with multiple services.
  • Human workflow
What is EAI?


An EAI system typically makes use of:
   1. A centralized broker that handles security, access, and communication. This can be accomplished through integration servers (like the School Interoperability Framework (SIF) Zone Integration Servers) or through similar software like the Enterprise service bus (ESB) model that acts as a SOAP-oriented services manager.
   2. An independent data model based on a standard data structure, also known as a Canonical data model. It appears that XML and the use of XML style sheets has become the de facto and in some cases de jure standard for this uniform business language.
   3. A connector, or agent model where each vendor, application, or interface can build a single component that can speak natively to that application and communicate with the centralized broker.
   4. A system model that defines the APIs, data flow and rules of engagement to the system such that components can be built to interface with it in a standardized way. This aids orchestration.

Advantages:
  • Real-time information access among systems
  • Streamlines business processes and helps raise organizational efficiency
  • Maintains information integrity across multiple systems
  • Ease of development and maintenance
Disadvantages:
  • High initial development costs, especially for small and mid-sized businesses (SMBs)
  • Requires a fair amount of up-front business design, which many managers are not able to envision or not willing to invest in
Most EAI projects start off as point-to-point efforts and quickly become unmanageable as the number of applications increases.

SOA is a loosely coupled approach that emerged to replace EAI (though it didn't kill EAI off).

Canonical Data Model & the pattern

Canonical Data Model

Canonical implies the simplest form possible, based on a standard, common view within a given context. The Canonical Data Model is an enterprise design pattern used to communicate between different data formats in Enterprise Application Integration (EAI): it provides common data naming, definitions and values within a generalized data framework, and is intended to reduce costs and standardize on agreed data definitions associated with integrating business systems.

A typical migration from point-to-point (P2P) interfacing to message based integration (MOM) begins with a decision on the middleware to be used to transport messages between endpoints. Often this decision results in the adoption of an Enterprise Service Bus (ESB) or Enterprise Application Integration (EAI) solution. Most organizations also adopt a set of standards for message structure and content (message payload). The desire for consistent message payload results in the construction of an enterprise/business domain Canonical Model or adoption of an XML message standard used as the basis for message objects.

The goal of the Canonical Model is to provide a dictionary of reusable common objects and definitions at an enterprise or business domain level to enhance system interoperability. "A Canonical Data Model allows developers and business users to discuss the integration solution in terms of the company's business domain, not a specific package implementation. For example, packaged applications may represent the common concept of a customer in many different internal formats, such as 'account', 'payer', and 'contact'. Defining a Canonical Data Model is often the first step to resolving cases of semantic dissonance between applications." Enterprise integration models provide a foundation for a decoupled, consistent, reusable integration methodology which can be implemented using messaging supported by middleware products. Message payloads (business data content) in the form of XML schema are built from the common model objects thus providing the desired consistency and re-usability while ensuring data integrity.
This is done as "data format and transformation" in two steps: first, the adapter converts information from the application's format to the bus's common format; then, semantic transformations are applied (e.g., converting zip codes to city names, splitting/merging objects from one application into objects in the other applications, and so on). For example, when integrating two disparate systems (say, a mainframe and Siebel), the canonical model acts as the common language that each one translates to in order to speak to the other.
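The two-step flow above (adapter conversion, then semantic transformation) can be sketched as follows. The mainframe and Siebel field names here are invented for illustration; only the pattern - application format to canonical format, enrich, canonical to target format - reflects the text.

```python
# Hypothetical field names; the mainframe (MF) and Siebel formats
# below are invented purely to illustrate the canonical-model pattern.

def mf_to_canonical(rec: dict) -> dict:
    """Adapter step: mainframe fixed-field record -> canonical customer."""
    return {"customer_id": rec["CUSTNO"].strip(),
            "name": rec["CUSTNAME"].strip().title(),
            "zip": rec["ZIP"]}

ZIP_TO_CITY = {"2000": "Sydney", "3000": "Melbourne"}  # toy lookup table

def enrich(canonical: dict) -> dict:
    """Semantic step: e.g. resolve a zip code to a city name."""
    canonical["city"] = ZIP_TO_CITY.get(canonical["zip"], "Unknown")
    return canonical

def canonical_to_siebel(c: dict) -> dict:
    """Adapter step: canonical customer -> Siebel-style payload."""
    return {"Id": c["customer_id"], "Name": c["name"], "City": c["city"]}

mf_record = {"CUSTNO": "00042  ", "CUSTNAME": "ACME PTY LTD ", "ZIP": "2000"}
siebel = canonical_to_siebel(enrich(mf_to_canonical(mf_record)))
```

Neither endpoint knows the other's format: adding a third system means writing one new adapter to the canonical model, not one per existing system.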

Canonical Schema Pattern

In order for a service consumer to send data (related to a particular business entity, e.g. a purchase order), it needs to know the structure of the data, i.e. the data model. The interaction between services often requires exchanging business documents complying with certain standards, e.g. an XML Schema document (XSD). Once the service consumer knows the required data model, it can structure the data accordingly. However, under some conditions the service consumer may already possess the required data relating to a particular business document, but the data does not conform to the data model specified by the service provider. This disparity among the data models creates a requirement for data model transformation (so that the message is transformed into the structure dictated by the service provider). This runtime data model transformation adds processing overhead and complicates the design of service compositions.

In order to avoid the need for data model transformation, the Canonical Schema pattern dictates the use of standardized data models for those business documents that are commonly processed by the services in a service inventory. Here, the Standardized Service Contract design principle advocates that service contracts be based on standardized data models. This is achieved by analyzing the service inventory blueprint to find the commonly occurring business documents exchanged between services. These business documents are then modeled in a standardized manner; for example, in the case of web services, they are modeled as XML schemas. Once a standardized data representation layer exists in a service inventory, different service contracts can make use of the same data models if they need to exchange the same business documents. This eliminates the need for data model transformation and the processing overhead associated with it. Commonly used elements can also be composed as reusable complex types, e.g., between a Checkout Service and a Payment Service, the LineItems element recurs and can be reused.
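The reusable-complex-type idea at the end can be sketched with dataclasses standing in for XML schema types. The field names are illustrative, not from any real service contract; the point is that both service messages compose the same LineItem type instead of defining their own copies.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LineItem:
    """The standardized, shared complex type (illustrative fields)."""
    sku: str
    quantity: int
    unit_price: float

@dataclass
class CheckoutRequest:
    """Checkout Service message composing the shared LineItem type."""
    cart_id: str
    items: List[LineItem]

@dataclass
class PaymentRequest:
    """Payment Service message reusing the same LineItem type."""
    order_id: str
    items: List[LineItem]

items = [LineItem("ABC-1", 2, 9.95)]
checkout = CheckoutRequest("cart-7", items)
# No model transformation: the payment message accepts checkout's items as-is
payment = PaymentRequest("ord-7", checkout.items)
assert payment.items == checkout.items
```

Had each service defined its own incompatible line-item shape, the composition would need a transformation step between the two calls - exactly the overhead the pattern removes.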

Diagram A 
Service A is using a different data model as compared to Service B for the same business document. When messages are exchanged, runtime data model transformation needs to be performed.

Diagram B
Both services are using the same data model for representing a particular business document. As a result, no data model transformation is required when messages are exchanged.

Sunday, November 21, 2010

Exposing Ora BPEL processes as Web services in OSB layer

With Oracle Service Bus's native transport for Oracle BPEL Process Manager (BPEL transport), you can expose BPEL processes as Web services in the service bus layer, letting other services invoke BPEL processes; thus letting you include BPEL processes in your service oriented architecture (SOA).
Communication between Oracle BPEL Process Manager and Oracle Service Bus is done over SOAP only. OSB and Oracle BPM do not provide full support for SOAP RPC encoding. The BPEL transport supports:
  • SOAP 1.1 (SOAP 1.2 is supported only from Oracle Service Bus to Oracle BPEL Process Manager using synchronous communication)
  • SOAP headers

BPEL transport has the following restrictions:
  • No attachments
  • No WS-Security or WS-RM
Oracle BPEL Process Manager supports transaction propagation through its API, and the BPEL transport is transactional to support transaction propagation when Oracle BPEL Process Manager is deployed on Oracle WebLogic Server. For example, if a process begins in a service outside of Oracle BPEL Process Manager, Oracle Service Bus can propagate the transaction to Oracle BPEL Process Manager through the BPEL transport to complete the transaction.

Communicating to and from Oracle BPEL Process Manager through Oracle Service Bus
1. Synchronous: Invoking Processes in Oracle BPEL Process Manager
  a. Create a Business Service in Oracle Service Bus that represents the BPEL process you want to invoke.
  • Create a WSDL-based business service. Generate the WSDL from Oracle BPEL Process Manager.
  • Select the bpel-10g transport in the business service configuration.
  • Set the Endpoint URI
  • Configure the remainder of the business service

  b. Create a Proxy Service in Oracle Service Bus that invokes the business service

 2.  Synchronous: Calling External Services from Oracle BPEL Process Manager
  • Create a Business Service in Oracle Service Bus that represents the external service you want to invoke
  • Create a Proxy Service in Oracle Service Bus that invokes the business service.
    • You must create the proxy with a SOAP WSDL to invoke the business service. When defining your proxy service, for the Service Type select WSDL Web Service, and select the desired port or binding.
    • Select the sb transport in the proxy service configuration.
    • To invoke the proxy service from Oracle BPEL Process Manager, export the proxy service's effective WSDL and import it into your Oracle BPEL Process Manager development environment. Invoke the proxy service from Oracle BPEL Process Manager as you normally would.
 3. Asynchronous: Invoking Processes in Oracle BPEL Process Manager
  • Create two proxy services in Oracle Service Bus: one that invokes the business service and another that handles the callback.
    Request Proxy Service
    • Since the callback will be sent on a different connection in asynchronous communication, you must establish the callback address in the request proxy. This callback address will be passed to the callback proxy and callback business services so that the message is sent back to the correct client.
      As part of the business service configuration, you select a Callback Proxy. At run time, the BPEL transport uses this proxy as the callback proxy.
      For approaches to setting a callback address when you do not select a callback proxy in the business service, see the "Setting a callback address" reference below.
    Callback Proxy Service
    • Configure the proxy to use a Service Type of WSDL SOAP or Any SOAP Service and the SB or HTTP transport. Use the SB transport if you want transaction propagation from Oracle BPEL Process Manager to Oracle Service Bus.
      If you select this proxy service as the business service's callback proxy, the BPEL transport provides the correct callback URI at run time.
  • Create two business services in Oracle Service Bus: one that makes the request to the Oracle BPEL Process Manager process you want to interact with and another that handles the callback.
    Request Business Service
    • Create a WSDL-based business service. Generate the WSDL from Oracle BPEL Process Manager. Select a Service Type of WSDL Web Service, and select the appropriate binding or port in the WSDL.
    • Select the bpel-10g transport in the business service configuration.
    • Set the role to Asynchronous Client.
    • Set the Endpoint URI.
    • Use the Callback Proxy field on the bpel-10g transport configuration page to select the callback proxy you created.
    Callback Business Service
    Configure the business service you need to handle the callback.
 4. Asynchronous: Calling Service Providers from Oracle BPEL Process Manager


Creating and Configuring the Services

  • Create two proxy services in Oracle Service Bus: one for the request that invokes the business service and another that handles the callback.
    Request Proxy Service
    • Configure the proxy service to use the sb transport.
    • Since the callback will be sent on a different connection in asynchronous communication, you must establish a callback address so that the message is sent back to the correct client.
    Callback Proxy Service
    • Configure the proxy service to pass the callback address to the business service. The callback URI is provided in the request. Use URI rewriting to extract the callback URI and forward it to the business service.
  • Create two business services in Oracle Service Bus: a request business service that invokes the external service and a callback business service.
    Request Business Service
    • Configure the business service to invoke the external service.
    Callback Business Service
    • The callback business service receives the callback address from the callback proxy. The URI rewriting performed by the callback proxy service determines which BPEL process to send the response to.
      Create a WSDL-based business service. Generate the WSDL from Oracle BPEL Process Manager. Select a Service Type of WSDL Web Service, and select the appropriate binding or port in the WSDL
    • Select the bpel-10g transport in the business service configuration.
    • Set the Endpoint URI to bpel://callback. The callback URI is provided by the callback proxy service.
    • Set the role to Service Callback on the bpel-10g in transport configuration tab

If the callback address is always known, for example when the client and BPEL service are linked together because of a trading partner agreement, you can provide the exact callback address in the callback business service instead of using bpel://callback.
BPEL thru OSB: Associating messages with correct conversation

Ora BPM thru OSB
Calling BPEL from OSB - good one 
Asynchronous BPEL to BPEL Through Oracle Service Bus Example
WS-Addressing Reference
Working with Proxy Services
Setting a callback address

Other related topics:

Ora BPEL thru OSB - associating message with correct conversation

Associating Messages with the Correct Conversation
When using stateful services, the messages sent synchronously between Oracle Service Bus and Oracle BPEL Process Manager are known as a conversation. Oracle BPEL Process Manager supports the following mechanisms for ensuring that messages are correctly associated with each other as part of a conversation. These mechanisms are independent of each other, and you may choose to use both to ensure correct association.
  • BPEL Correlation – BPEL correlation is part of the BPEL specification. When a WSDL-based business service in Oracle Service Bus sends a message to a BPEL process, the BPEL engine examines the message to find the target BPEL process instance.
  • Opaque Correlation using WS-Addressing – When a conversation is initiated by a client through Oracle Service Bus to a BPEL process, the BPEL engine looks in the WS-Addressing SOAP header for the "messageID" value to use as the ID for the new conversation. The conversation ID is carried through the conversation as the "RelatesTo" value.
"MessageID" and "RelatesTo" are used to store the conversation ID in conversations between Oracle Service Bus and Oracle BPEL Process Manager, ensuring related messages remain in the same conversation.
The BPEL transport does not let you specify whether a given operation is a start or continue operation. Instead, the BPEL transport looks for the "MessageID" and "RelatesTo" properties and sets them accordingly.
The following describes how the BPEL transport uses "MessageID" and "RelatesTo" in synchronous and asynchronous conversations:
  • Synchronous conversation: In the initial request, the "MessageID" determines the conversation ID. In the remaining communication, the BPEL transport provides the conversation ID as the "RelatesTo" value.
    If there is no value assigned to "MessageID" or "RelatesTo," the transport assumes either no conversation is occurring or that Oracle BPEL Process Manager is handling the correlation.
  • Asynchronous callbacks - In the initial request, the "MessageID" determines the conversation ID. In the remaining communication, the BPEL transport provides the conversation ID as the "RelatesTo" value in the callback.
    If there is no value assigned to "MessageID" or "RelatesTo," the transport assumes either no conversation is occurring or that Oracle BPEL Process Manager is handling the correlation.
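The MessageID/RelatesTo convention described above can be sketched as a toy correlation table. This is not the actual BPEL transport implementation - just a minimal model of how a transport might associate follow-up messages with the conversation their initial request started.

```python
# Toy correlation table mirroring the MessageID/RelatesTo convention.
conversations: dict = {}

def on_message(headers: dict, body: str):
    relates_to = headers.get("RelatesTo")
    if relates_to is None:
        # Initial request: MessageID establishes the conversation ID.
        conv_id = headers.get("MessageID")
        if conv_id is None:
            # No conversation, or BPEL PM is handling the correlation itself.
            return None
        conversations[conv_id] = [body]
        return conv_id
    # Follow-up/callback: RelatesTo carries the conversation ID forward.
    conversations.setdefault(relates_to, []).append(body)
    return relates_to

cid = on_message({"MessageID": "uuid-123"}, "initial request")
on_message({"RelatesTo": "uuid-123"}, "callback")
assert conversations["uuid-123"] == ["initial request", "callback"]
```

Note the fall-through when neither header is present: as the text says, the transport then assumes either no conversation is occurring or that correlation is handled elsewhere.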

Correlation illustrated

Related blogs

OSB proxy - SB Transport

The SB transport allows Oracle products (e.g., Oracle BPM) to synchronously invoke an Oracle Service Bus proxy service using RMI. The inbound transport allows clients to access SB proxy services using RMI; the outbound transport allows the invocation of SB proxy services using RMI. Services may be accessed over the T3, IIOP, HTTP, T3S, IIOPS, or HTTPS protocol, depending on the configuration of the target server.

SB transport supports:
  • Propagation of the transaction context. The transaction originated in the client Oracle Service Bus server can optionally be propagated to the SB proxy service.
  • Propagation of the security context. By default, the security context associated with the SB client thread is used to invoke the SB proxy services. This may require enabling domain trust between domains.
  • Invocation of SB proxy services, with custom identities, by the outbound endpoint using a service account.
  • Specification of a timeout value for non-transactional invocations. The client request returns if Oracle Service Bus does not respond within the specified interval.
  • Association of a dispatch-policy for both request and response connections
  • Optimization of RMI call and call-by-reference when routing to a SB business service without a JNDI provider.
  • The following service types:
    • WSDL service
    • Any SOAP service
    • Any XML service
  • The following messaging patterns:
    • Request (one-way) and request-response for the inbound transport.
      For an Oracle Service Bus client, the messaging pattern is by default inherited from the pipeline of the SB outbound transport.
      For a non-Oracle Service Bus client, the default messaging pattern is request-response.
    • Request and request-response for the outbound transport Environment Values.
  • The following default values for the Quality of Service (QoS):
    • Exactly-Once for non-Oracle Service Bus clients
    • Best-Effort for Oracle Service Bus clients

Oracle BPEL PM - Dehydration Store?

BPEL is the standard for assembling a set of discrete services into an end-to-end process flow, radically reducing the cost and complexity of process integration initiatives. BPEL is an OASIS standard executable language for specifying actions within business processes with web services; BPEL processes export and import information exclusively through web service interfaces. Leaders in this arena include Oracle BPEL PM, IBM WebSphere Process Server, etc.

Oracle BPEL PM utilizes a database to store metadata and instance data at runtime. The process of updating process state in the database is called dehydration, and the data lives in what is known as the dehydration store, which is simply a database schema (also called the BPEL schema or BPEL tables). The dehydration store holds process state data, especially for asynchronous BPEL processes. It exists in the x_SOAINFRA schema created by running RCU.

This is separate and independent from any database objects used by your BPEL processes for storing application or business data. For performance reasons, the BPEL schema does not use foreign keys, so master-detail relationships are not obvious from the schema definition; these dependency relationships are maintained by the BPEL engine.

Oracle BPEL Process Manager uses the dehydration store database to maintain long-running asynchronous processes and their current state information in a database while they wait for asynchronous callbacks. Storing the process in a database preserves the process and prevents any loss of state or reliability if a system shuts down or a network problem occurs.

The database schema DDL can be found at \Oracle_SOA1\rcu\integration\soainfra\sql\bpel. With proper knowledge of this schema, administrators can bypass the BPEL Console and write SQL queries directly against the store, or use the BPEL Process Manager API.

Oracle BPEL Process Manager Console provides a user-friendly, Web-based interface for management, administration, and debugging of processes deployed to the BPEL server. BPEL Process Manager API provides an exhaustive set of classes to find, archive, delete instances in various states, delete callback/invoke messages across different domains, or query on the status of specific domain, process, or instance. In production environments, administrators need strong control over management tasks. Via a PL/SQL query or BPEL API against the BPEL Dehydration Store database, it is possible to automate most of these administrative tasks.

Key classes for performing administrative tasks are:

Class/Interface - Description
  • Class WhereConditionHelper - Provides methods such as whereInstancesClosed(), whereInstancesStale(), and whereInstancesOpen(), which construct a where clause that searches for the respective instances.
  • Interface IBPELDomainHandle - Allows the developer to perform operations on a running BPEL process domain. Provides methods such as archiveAllInstances(), deleteAllInstances(), deleteInstancesByProcessId(), deployProcess(), undeployProcess(), deleteAllHandledCallback(), and deleteAllHandledInvoke().
  • Interface IInstanceHandle - Allows the user to perform operations on an active instance. Provides methods such as isStale(), getState(), getModifyDate(), and delete().
  • Class Locator - Allows the user to search for processes, instances, and activities that have been deployed and instantiated within an Orabpel process domain. Provides methods such as listInstances() and listActivities(), and can take where clauses as parameters.
Tables and their relationships are:

The TASK table stores tasks created for an instance. The TaskManager process keeps its current state in this table. Upon invoking the TaskManager process, a task object is created with a title, assignee, status, expiration date, etc. When updates are made to the TaskManager instance via the console, the underlying task object in the database is changed.
Table name - Description
CUBE_INSTANCE - Contains one entry for each BPEL instance created. It stores instance metadata such as creation date, last-modified date, current state, process ID, etc. An important column is cikey: each BPEL instance is assigned a unique ID, which is the instance ID you see in the BPEL Console. It is incremented in sequence as BPEL instances are created, and this key cuts across many of the dehydration tables.
The process state codes and their meanings are:
  0 - Initiated
  1 - Open and Running
  2 - Open and Suspended
  3 - Open and Faulted
  4 - Closed and (Pending or Cancel)
  5 - Closed and Completed
  6 - Closed and Faulted
  7 - Closed and Cancelled
  8 - Closed and Aborted
  9 - Closed and Stale
CUBE_SCOPE - Stores the scope data for an instance: BPEL scope variable values and some internal objects that help route logic throughout the flow.
INVOKE_MESSAGE - Stores incoming (invocation) messages, i.e. messages that result in the creation of an instance. This table stores only the metadata for a message (for example, current state, process identifier, and receive date). The message states and their meanings are:
  0 - UNRESOLVED: message is not yet given to BPEL PM
  1 - RESOLVED: message is given to BPEL PM but not yet processed
  2 - HANDLED: message is processed
  3 - CANCELLED: message processing cancelled
DLV_MESSAGE - Stores callback messages. All non-invocation messages are saved here upon receipt; the delivery layer then attempts to correlate the message with the receiving instance. This table stores only the metadata for a message (e.g. current state, process identifier, receive date).
WORK_ITEM - Stores activities created by an instance. Every activity in a BPEL flow has a row in the WORK_ITEM table, which holds the metadata for the activity (current state, label, and expiration date, the latter used by wait activities). When the engine needs to be restarted and instances recovered, pending flows are resumed by inspecting their unfinished work items.
SCOPE_ACTIVATION - Scopes that need to be routed/closed/compensated are inserted into this table. In case of system failure, the engine can pick up and re-perform any scopes that should have been done before the failure.
DLV_SUBSCRIPTION - Stores delivery subscriptions for an instance. Whenever an instance expects a message from a partner (for example, a receive or onMessage activity), a subscription is written out for that specific receive activity. Once a delivery message is received, the delivery layer attempts to correlate it with the intended subscription.
AUDIT_TRAIL - Stores a record of actions taken on an instance. As an instance is processed, each activity writes events to the audit trail as XML, which is compressed and stored in a raw column.
AUDIT_DETAILS - Stores audit trail events that are large in size; details are separated from the AUDIT_TRAIL table because of their size. The auditDetailThreshold property (in Oracle BPEL Control under Manage BPEL Domain > Configuration) governs this: if a detail is larger than the value specified for this property, it is placed in this table; otherwise it is placed in AUDIT_TRAIL.
XML_DOCUMENT - Stores process input and output XML documents. Separating document storage from the metadata enables the metadata to change frequently without being impacted by the size of the documents.
WI_EXCEPTION - Stores exception messages generated by failed attempts to perform, manage or complete a work item. Each failed attempt is logged as an exception message.
PROCESS_DESCRIPTOR - Stores the BPEL process deployment descriptor (bpel.xml).
Record of events (informational, debug, error) encountered while interacting with a process.
INVOKE_MESSAGE_BIN - Stores the invoke payload of a process. This table has a foreign-key relationship with the INVOKE_MESSAGE table.
DLV_MESSAGE_BIN - Stores the received payload of a callback message. The metadata of a callback message is kept in the DLV_MESSAGE table; this table stores only the payload, as a BLOB. This separation allows the metadata to change frequently without being impacted by the size of the payload (which is stored here and never modified). This table has a foreign-key relationship with DLV_MESSAGE.
WFTASKStores human workflow tasks run time meta data like taskid,title,state,user or group assigned, created and updated dates.
WFTASKMETADATAStores task meta data. Content in this table comes from '.task' file of BPEL project
WFASSIGNEEStores task assignee information
WFMESSAGEATTRIBUTEStores task input payload parameters
WFATTACHMENTStores task attachments
WFCOMMENTSStores task comments

In a production environment, it will be necessary to archive this information before deleting it, often for hundreds of instances at a time. Fortunately, you can achieve this using PL/SQL or EJB.

Datastore for Dehydration Store:
Oracle BPEL Server obtains database connections through an application server JTA data source. By default, Oracle BPEL Server is configured to use the Oracle Database Lite dehydration store. For stress testing and production, Oracle recommends Oracle Database 10g/11g; the same is recommended when BPEL processes involve large attachments.

Domain and Process Configuration Property Settings

There are two types of processes in Oracle BPEL PM. They impact the dehydration store database in different ways:
  • Transient processes: do not incur any intermediate dehydration points during process execution. If there are unhandled faults or system downtime during process execution, instances of a transient process leave no trace in the system. Instances of transient processes cannot be saved in-flight (whether they complete normally or abnormally). Transient processes are typically short-lived, request-response style processes, e.g. a synchronous process.
  • Durable processes: incur one or more dehydration points in the database during execution because of the following activities:
    • Receive activity
    • OnMessage branch in a pick activity
    • OnAlarm branch in a pick activity
    • Wait activity
    Instances of durable processes can be saved in-flight (whether they complete normally or abnormally). These processes are typically long-living and initiated through a one-way invocation. Because of out-of-memory and system downtime issues, durable processes cannot be memory-optimized. The asynchronous process you design in Oracle JDeveloper is an example of a durable process.
Idempotent BPEL Property
A BPEL invoke activity is by default idempotent, meaning that the BPEL process does not dehydrate instances immediately after invoke activities.
  • false: the activity is dehydrated immediately after execution and recorded in the dehydration store. This provides better failover protection, but at the cost of some performance, since the BPEL process accesses the dehydration store much more frequently.
  • true (default): if Oracle BPEL Server fails, it performs the activity again after restarting, because the server does not dehydrate immediately after the invoke and no record exists that the activity executed.
This setting can be configured for each partner link in the bpel.xml file.
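A sketch of how this might look in a 10g bpel.xml deployment descriptor (the process and partner link names here are hypothetical):

```xml
<BPELSuitcase>
  <BPELProcess id="loanApprovalProcess" src="loanApprovalProcess.bpel">
    <partnerLinkBindings>
      <!-- hypothetical partner link; idempotent=false forces dehydration
           immediately after each invoke on this partner link -->
      <partnerLinkBinding name="creditRatingService">
        <property name="idempotent">false</property>
      </partnerLinkBinding>
    </partnerLinkBindings>
  </BPELProcess>
</BPELSuitcase>
```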

BPEL Process Manager API
Managing a BPEL Production Environment 
Purging strategies for dehydration store
jaisy-OrabpelInterface - JMX monitoring for Oracle Bpel Process Manager
Ora BPEL PM Performance Tuning
Ora BPEL Webinar
SOA Best Practices: The BPEL Cookbook
Pattern-based Evaluation of Oracle-BPEL - also good to understand xml block to each bpel component
Migrating dehydration for oc4j server in Ora BPM
Oracle BPEL PM - Components
Ora BPEL thru OSB
Exposing Ora BPEL processes as Web services in OSB layer
When to use OSB & BPEL?
BPEL 10g Purging Strategy
BPEL 10g Partitioning

WS-ReliableMessaging aka WS-RM

WS-ReliableMessaging describes a protocol that allows SOAP messages to be reliably delivered between distributed applications in the presence of software component, system, or network failures.

An Application Source (AS) wishes to reliably send messages to an Application Destination (AD) over an unreliable infrastructure. To accomplish this they make use of a Reliable Messaging Source (RMS) and a Reliable Messaging Destination (RMD). The AS sends a message to the RMS. The RMS uses the WS-ReliableMessaging (WS-RM) protocol to transmit the message to the RMD. The RMD delivers the message to the AD. If the RMS cannot transmit the message to the RMD for some reason, it must raise an exception or otherwise indicate to the AS that the message was not transmitted.

The AS and RMS may be implemented within the same process space or they may be separate components. Similarly, the AD and RMD may exist within the same process space or they may be separate components.
The important thing to keep in mind is that the WS-RM specification only deals with the contents and behavior of messages as they appear "on the wire". How messages are sent from the AS to the RMS, how they are delivered from the RMD to the AD, whether messages are persisted on-disk or held in memory, etc.; none of these considerations are part of the WS-RM specification.

The WS-RM protocol defines and supports a number of Delivery Assurances. These are:
  • AtLeastOnce - Each message will be delivered to the AD at least once. If a message cannot be delivered, an error must be raised by the RMS and/or the RMD. Messages may be delivered to the AD more than once (i.e. the AD may get duplicate messages).
  • AtMostOnce - Each message will be delivered to the AD at most once. Messages may not be delivered to the AD, but the AD will never get duplicate messages.
  • ExactlyOnce - Each message will be delivered to the AD exactly once. If a message cannot be delivered, an error must be raised by the RMS and/or the RMD. The AD will never get duplicate messages.
  • InOrder - Messages will be delivered from the RMD to the AD in the order that they are sent from the AS to the RMS. This assurance can be combined with any of the above assurances.
 This is implemented in prominent application servers such as WebLogic, WebSphere, GlassFish, and SAP NetWeaver.
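The RMS/RMD interaction can be sketched in Python (a toy simulation with invented class names; a random drop simulates the unreliable transport, sequence numbers and acknowledgements provide the delivery assurance):

```python
import random

random.seed(7)  # make the lossy transport reproducible for this demo

class RMDestination:
    """Simplified RMD: acknowledges sequence numbers, drops duplicates (ExactlyOnce)."""
    def __init__(self):
        self.delivered = {}  # seq -> message; setdefault ignores retransmitted duplicates

    def receive(self, seq, msg):
        if random.random() < 0.3:            # simulate a lost message or lost ack
            return None                      # no ack -> the RMS will retransmit
        self.delivered.setdefault(seq, msg)  # de-duplicate
        return seq                           # acknowledgement

class RMSource:
    """Simplified RMS: retransmits until every message is acknowledged (AtLeastOnce)."""
    def __init__(self, rmd, max_retries=50):
        self.rmd, self.max_retries = rmd, max_retries

    def send(self, messages):
        pending = dict(enumerate(messages, start=1))  # seq -> message
        for _ in range(self.max_retries):
            if not pending:
                return
            for seq, msg in list(pending.items()):
                if self.rmd.receive(seq, msg) == seq:
                    del pending[seq]          # acknowledged, stop retransmitting
        raise RuntimeError("delivery failed")  # must be surfaced to the AS

rmd = RMDestination()
RMSource(rmd).send(["a", "b", "c"])
# delivering in sequence order also gives the InOrder assurance
print([rmd.delivered[s] for s in sorted(rmd.delivered)])  # ['a', 'b', 'c']
```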

More detailed reference

Oracle BPEL PM - Components

  • BPEL Designer - a graphical and user-friendly way to model, edit, design, and deploy BPEL processes. BPEL Designer also lets you view and modify the BPEL source code. This is the SOA Composite Editor in Oracle JDeveloper.
  • Oracle BPEL Server - the server to which you deploy the BPEL processes you design; it contains the human workflow, technology adapter, and notification service components. The default is Oracle WebLogic Server; WebSphere can also be configured.
  • Oracle BPEL Console - the console from which you run, manage, and test your deployed BPEL processes. It provides a Web-based interface for management, administration, and debugging of processes deployed to Oracle BPEL Server.
  • Dehydration store - by default Oracle Database Lite; enterprise databases such as Oracle 10g/11g and SQL Server can also be configured, using a JTA data source.
Also, read related topic:

Saturday, November 20, 2010

Compensational Transaction - maintaining Integrity in SOA

Transaction processing techniques play a major role in preserving data consistency in critical areas of computing. The reliability provided through transactional guarantees is required in many types of applications, found for instance in workflow systems, mobile systems, and lately also in SOA systems such as those based on web services.

Web services provide interoperable application-to-application communication, allowing new applications to leverage existing software functions in a platform independent fashion. The transactional behaviour of a function accessed through a web service depends on the underlying implementation of the web service. Often a database system will provide the required local transactional behaviour. However, when an application combines multiple web services in order to complete a given task, coordination of the participating web services is required in order to preserve data consistency.
Traditionally, two-phase commit (2PC) based protocols have been used to achieve such coordination (e.g. X/Open DTP, CORBA OTS). Because of the loosely coupled nature and autonomy requirements of web services, however, 2PC-based protocols may not be appropriate in this environment, for the following reasons:
  • The application uses multiple non-extended-architecture (XA) resources.
  • The application uses more than one atomic transaction, for example, enterprise beans that have Requires new as the setting for the Transaction field in the container transaction deployment descriptor.
  • The application does not run under a global transaction.
A web service is either a stand-alone service or a composite service relying on other web services to perform its task. Individual web service invocations may commit early without further coordination, provided the effect of the invocation can be semantically reversed at some later point by executing a compensating transaction. Typically, compensating transactions are not a focus in the design of web service transaction models, and implementing this functionality is left to the application developer, here the web service developer.

This is especially relevant for long-running transactions, which avoid locks on non-local resources, use compensation to handle failures, and potentially aggregate smaller atomic transactions. In contrast to rollback in ACID transactions, compensation restores the original state, or an equivalent one, and is business-specific. The compensating action for making a hotel reservation is cancelling that reservation, possibly with a penalty.

Interactions between web services are typically handled through conversational transactions that involve participation from several web services. The unit of business at each web service represents a subtransaction, also called a component transaction.
The transactional behaviour of a single subtransaction is typically provided locally by an underlying database system. Additionally, transactional behaviour of the conversational transaction must be guaranteed through coordination and management of the set of subtransactions. If one or more subtransactions abort, the conversational transaction may or may not need to be cancelled depending on the business logic of the service. It is totally up to the web service starting the conversation to decide if all-or-nothing semantics should be enforced.
Within classical transaction processing, dependent transactions would have to be aborted if the dependent-upon transaction aborted. Resorting to cascading rollback of dependent transactions is however not acceptable in web services, both since dependent transactions may themselves be committed, and also since rollback may be disallowed by autonomous web services. Autonomous web services typically consider the results of a committed subtransaction as final and durable.

Here, the Compensating Transaction enters the stage - to semantically undo the results of the early committed subtransactions. A compensating transaction preserves database integrity without aborting other transactions.

Compensation Transaction -> semantically undoes the partial effects of a transaction T without performing cascading abort of dependent transactions, restoring the system to a consistent state.
The web services designer is responsible for determining the compensation rules, which are used to dynamically generate compensating transactions during runtime.

This can be automated by:
  • In BPEL, using invocable Compensating Transaction
  •  Coordinator protocols like OASIS Business Transaction Processing, and WS-CAF - mediate the successful completion or use of compensation in a long-running transaction.
  • Including a Rules Engine (which, on an Event, performs an Action if a Condition is satisfied)
  • Custom solution using the database as a rules repository (using triggers for events)
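The compensation pattern described above can be sketched in Python (a hypothetical, framework-free "saga" runner; the booking functions are invented for illustration):

```python
class SagaError(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs; on failure, run the compensations
    of all completed steps in reverse order to semantically undo them."""
    done = []
    try:
        for action, compensate in steps:
            action()               # subtransaction commits early
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):  # semantic undo, newest first
            compensate()
        raise SagaError("saga aborted; completed steps were compensated")

log = []
def book_hotel():    log.append("hotel booked")
def cancel_hotel():  log.append("hotel cancelled")   # business-specific undo, may carry a penalty
def book_flight():   raise RuntimeError("no seats")  # this subtransaction fails
def cancel_flight(): log.append("flight cancelled")

try:
    run_saga([(book_hotel, cancel_hotel), (book_flight, cancel_flight)])
except SagaError:
    pass
print(log)  # ['hotel booked', 'hotel cancelled']
```

Note that, unlike rollback, the compensation leaves a trace: the hotel booking really happened and was then cancelled, which is exactly the "committed then semantically undone" behaviour the text describes.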

Also, read related blogs:

Transactions - ACID

ACID is the set of properties that guarantees database transactions are processed reliably.

A - atomicity, 
C - consistency, 
I - isolation, 
D - durability

 For example, a transfer of funds from one bank account to another, even though that might involve multiple changes (such as debiting one account and crediting another), is a single transaction.

Atomic - requires that database modifications must follow an "all or nothing" rule. Each transaction is said to be atomic.

Consistent - ensures that any transaction the database performs takes it from one consistent state to another. The consistency rule applies only to integrity rules within its scope. For example, when a referenced record is deleted, mechanisms to preserve consistency include:
  • abort the transaction, rolling back to the consistent, prior state;
  • delete all records that reference the deleted record (this is known as cascade delete); or,
  • nullify the relevant fields in all records that point to the deleted record.
Isolation - requirement that other operations cannot access data that has been modified during a transaction that has not yet completed.
The question of isolation occurs in case of concurrent transactions (multiple transactions occurring at the same time). Each transaction must remain unaware of other concurrently executing transactions, except that one transaction may be forced to wait for the completion of another transaction that has modified data that the waiting transaction requires. If the isolation system does not exist, then the data could be put into an inconsistent state => leads to dirty reads.

Durability - the ability of the DBMS to recover committed transaction updates after any kind of system failure (hardware or software).
It is the guarantee that once the user has been notified of a transaction's success, the transaction will not be lost: the transaction's data changes will survive system failure, and all integrity constraints have been satisfied, so the DBMS won't need to reverse the transaction. Many DBMSs implement durability by writing transactions into a transaction log that can be reprocessed to recreate the system state right before a later failure. A transaction is deemed committed only after it is entered in the log.
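The transaction-log approach can be sketched in Python (a toy key-value store, not a real DBMS; the log format is invented):

```python
import os, tempfile

class SimpleDB:
    """Durability via a write-ahead log: a change counts as committed once it
    is flushed to the log; recovery replays the log to rebuild state."""
    def __init__(self, log_path):
        self.log_path, self.data = log_path, {}
        if os.path.exists(log_path):           # crash recovery: replay the log
            with open(log_path) as f:
                for line in f:
                    k, v = line.rstrip("\n").split("=", 1)
                    self.data[k] = v

    def put(self, key, value):
        with open(self.log_path, "a") as f:
            f.write(f"{key}={value}\n")
            f.flush()
            os.fsync(f.fileno())               # committed only after it hits disk
        self.data[key] = value                 # then apply in memory

log = os.path.join(tempfile.mkdtemp(), "wal.log")
db = SimpleDB(log)
db.put("acct:phil", "100")
del db                                         # "crash": in-memory state is lost
recovered = SimpleDB(log)                      # recovery replays the log
print(recovered.data)  # {'acct:phil': '100'}
```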

Achieving ACID:
  • Locking vs multiversioning
  • Distributed transactions
Locking vs multiversioning 
Locking means that the transaction marks the data that it accesses so that the DBMS knows not to allow other transactions to modify it until the first transaction succeeds or fails. The lock must always be acquired before processing data, including data that are read but not modified. Non-trivial transactions typically require a large number of locks, resulting in substantial overhead as well as blocking other transactions. For example, if user A is running a transaction that has to read a row of data that user B wants to modify, user B must wait until user A's transaction completes. Two phase locking is often applied to guarantee full isolation.
In multiversioning, the database provides each reading transaction the prior, unmodified version of data that is being modified by another active transaction. This allows readers to operate without acquiring locks: writing transactions do not block reading transactions, and readers do not block writers. Going back to the example, when user A's transaction requests data that user B is modifying, the database provides A with the version of that data that existed when user B started his transaction. User A gets a consistent view of the database even if other users are changing data. Snapshot isolation is similar to multiversioning, with reads performed against a snapshot of the data.
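The multiversioning behaviour can be sketched in a few lines of Python (a toy store, not a real DBMS; a logical clock stands in for transaction start/commit timestamps):

```python
class MVCCStore:
    """Each key keeps (commit_ts, value) versions; a reader sees the newest
    version committed at or before its snapshot timestamp."""
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)
        self.clock = 0

    def write(self, key, value):
        self.clock += 1      # commit timestamp of this write
        self.versions.setdefault(key, []).append((self.clock, value))

    def snapshot(self):
        return self.clock    # a reader's start timestamp

    def read(self, key, snap_ts):
        # newest version visible in the reader's snapshot
        return max(v for v in self.versions[key] if v[0] <= snap_ts)[1]

db = MVCCStore()
db.write("balance", 100)
snap = db.snapshot()        # user A starts reading here
db.write("balance", 50)     # user B commits a change afterwards, without blocking A
print(db.read("balance", snap))           # 100 -- A still sees the pre-B value
print(db.read("balance", db.snapshot()))  # 50  -- a new reader sees B's commit
```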
Distributed transactions
Two-phase commit (2PC) is the commonly used solution. In the first phase, one node (the coordinator) interrogates the other nodes (the participants); only when all reply that they are prepared does the coordinator, in the second phase, formalize the transaction.
A distributed transaction is a unit of work performed across an enterprise system (for example, spanning more than one database), treated in a coherent and reliable way independent of other transactions. It follows the ACID properties.
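A toy Python sketch of the two phases (the participant's vote logic is simulated; a real resource manager would durably prepare before voting yes):

```python
def two_phase_commit(tx, participants):
    """Phase 1: ask every participant to prepare; Phase 2: commit only if all voted yes."""
    votes = [p.prepare(tx) for p in participants]   # phase 1: collect votes
    if all(votes):
        for p in participants:
            p.commit(tx)                            # phase 2: formalize
        return "committed"
    for p in participants:
        p.rollback(tx)                              # any "no" vote aborts everyone
    return "rolled back"

class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "idle"
    def prepare(self, tx):    # vote: am I durably ready to commit?
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit
    def commit(self, tx):
        self.state = "committed"
    def rollback(self, tx):
        self.state = "rolled back"

ok = two_phase_commit("tx1", [Participant("db1"), Participant("db2")])
bad = two_phase_commit("tx2", [Participant("db1"), Participant("db2", can_commit=False)])
print(ok, "|", bad)  # committed | rolled back
```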

Also, refer related blogs:
Snapshot Isolation
Compensation Transaction

Snapshot Isolation in concurrent Transactions

Snapshot isolation is a guarantee that all reads made in a transaction will see a consistent snapshot of the database (in practice it reads the last committed values that existed at the time it started), and the transaction itself will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot.
In practice snapshot isolation is implemented within multiversion concurrency control (MVCC), where generational values of each data item (versions) are maintained.
Snapshot isolation is called "serializable" mode in Oracle.
Suppose V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 ≥ 0). Both balances are currently $100. Phil initiates two transactions concurrently: T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2.
If the database guaranteed serializable transactions, the simplest way of coding T1 is to deduct $200 from V1, and then verify that V1 + V2 ≥ 0 still holds, aborting if not. T2 similarly deducts $200 from V2 and then verifies V1 + V2 ≥ 0. Since the transactions must serialize, either T1 happens first, leaving V1 = -$100, V2 = $100, and preventing T2 from succeeding (since V1 + (V2 - $200) is now -$200), or T2 happens first and similarly prevents T1 from committing.
Under snapshot isolation, however, T1 and T2 operate on private snapshots of the database: each deducts $200 from an account, and then verifies that the new total is zero, using the other account value that held when the snapshot was taken. Since neither update conflicts, both commit successfully, leaving V1 = V2 = -$100, and V1 + V2 = -$200.
Snapshot isolation permits this anomaly, known as write skew.
If built on MVCC, snapshot isolation allows transactions to proceed without worrying about concurrent operations and, more importantly, without needing to re-verify all read operations when the transaction finally commits. The only information that must be stored during the transaction is a list of updates made, which can be scanned for conflicts fairly easily before being committed. The write-skew anomaly can be avoided by:
  • Materialize the conflict: add a special conflict table, which both transactions update in order to create a direct write-write conflict.
  • Promotion: have one transaction "update" a read-only location (replacing a value with the same value) in order to create a direct write-write conflict (or use an equivalent promotion, e.g. Oracle's SELECT FOR UPDATE).
Materialize the conflict: by adding a new table which makes the hidden constraint explicit, mapping each person to their total balance. Phil would start off with a total balance of $200, and each transaction would attempt to subtract $200 from this, creating a write-write conflict that would prevent the two from succeeding concurrently. This approach violates the normal form.
Alternatively, we can promote one of the transaction's reads to a write. For instance, T2 could set V1 = V1, creating an artificial write-write conflict with T1 and, again, preventing the two from succeeding concurrently. This solution may not always be possible.
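The anomaly itself can be reproduced in a toy Python model (snapshots and write sets are simulated with plain dicts; V1/V2 follow the example above):

```python
db = {"V1": 100, "V2": 100}

def withdraw(snapshot, account, amount):
    """Check the V1 + V2 >= 0 constraint against a private snapshot,
    then return the write set (or None to abort)."""
    new = dict(snapshot)
    new[account] -= amount
    if new["V1"] + new["V2"] >= 0:        # constraint holds in the snapshot
        return {account: new[account]}    # write set: only the touched account
    return None                           # abort

snap = dict(db)                 # both transactions start from the same snapshot
w1 = withdraw(snap, "V1", 200)  # T1: sees 100 + (-100) = 0 >= 0, passes
w2 = withdraw(snap, "V2", 200)  # T2: sees (-100) + 100 = 0 >= 0, passes
assert not (w1.keys() & w2.keys())  # disjoint write sets: no write-write conflict
db.update(w1)                   # so under snapshot isolation both commit...
db.update(w2)
print(db, db["V1"] + db["V2"])  # {'V1': -100, 'V2': -100} -200 -- constraint violated
```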

What is MVCC?
Multiversion concurrency control (MVCC) keeps multiple generational versions of each data item so that readers can be given a consistent view without blocking writers. The same idea can be used to implement transactional memory (shared memory allowing a concurrent group of load and store instructions to execute in an atomic way).

For detailed study:
Snapshot Isolation

related blog entries:
ACID Properties
Compensation Transaction

Wednesday, November 10, 2010

Adabas - grand-old database beast

ADABAS (acronym for Adaptable DAta BAse System) is Software AG's primary database management system and one of the fastest OLTP databases.
Features: 24x7 operation, Parallel Sysplex support, real-time replication capability, SQL and XML access, and other leading-edge capabilities.
Historically, ADABAS was used in conjunction with Software AG's programming language NATURAL, so many legacy (e.g. mainframe) applications that use ADABAS as the back-end database are developed with NATURAL as well.
It has proven very successful in providing efficient access to data and maintaining the integrity of the database, and is now widely used in applications that require very high volumes of data processing or in high-transaction online analytical processing environments.

Technical Info:
Inverted list database - content based indexing of records => quicker search but slower storage

  • Files - major organizational unit (similar to ~tables)
  • Records - content unit within the organizational unit (~rows)
  • Fields - components of a content unit (~columns)
  • No embedded SQL engine; popular external query mechanism ADASql
  • Search facilities may use indexed fields or non indexed fields or both
  • No implicit referential integrity constraint => parent-child relations must be maintained by application code
  • Supports two methods of denormalization: repeating groups in a record ("periodic groups"); and multiple value fields in a record ("multi-value fields")

Netstring format

A netstring is a formatting method for byte strings that uses a declarative notation to indicate the size of the string.
Netstrings store the byte length of the data that follows, making it easier to unambiguously pass text and byte data between programs that could be sensitive to values that could be interpreted as delimiters or terminators (such as a null character).

Eg: the text "hello world!" encodes as:
12:hello world!,
And an empty string encodes as:
0:,
Since the format is easy to generate and to parse, it is easy to support by programs written in different programming languages. In practice, netstrings are often used to simplify exchange of bytestrings, or lists of bytestrings. For example, see its use in the Simple CGI and the Quick Mail Queuing Protocol (QMQP).
Netstrings avoid complications that arise in trying to embed arbitrary data in delimited formats. For example, XML may not contain certain byte values and requires a nontrivial combination of escaping and delimiting, while generating multipart MIME messages involves choosing a delimiter that must not clash with the content of the data.
Note that since netstrings pose no limitations on the contents of the data they store, netstrings can not be embedded verbatim in most delimited formats without the possibility of interfering with the delimiting of the containing format.
In the context of network programming it is potentially useful that the receiving program is informed of the size of the data that follows, as it can allocate exactly enough memory and avoid the need for reallocation to accommodate more data.
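The format is simple enough to sketch in a few lines of Python (helper names are mine):

```python
def encode(data: bytes) -> bytes:
    """Encode a byte string as a netstring: <decimal length>:<data>,"""
    return str(len(data)).encode() + b":" + data + b","

def decode(buf: bytes):
    """Decode the first netstring in buf; return (payload, remaining bytes)."""
    head, sep, tail = buf.partition(b":")
    if not sep:
        raise ValueError("malformed netstring: missing ':'")
    n = int(head)                      # declared byte length of the payload
    if tail[n:n + 1] != b",":
        raise ValueError("malformed netstring: missing trailing ','")
    return tail[:n], tail[n + 1:]

print(encode(b"hello world!"))   # b'12:hello world!,'
print(encode(b""))               # b'0:,'
# the length prefix lets a receiver split concatenated netstrings unambiguously
payload, rest = decode(b"12:hello world!,0:,")
print(payload, rest)             # b'hello world!' b'0:,'
```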

Sunday, November 7, 2010

SOA Composite Editor for JDeveloper - installation

Oracle SOA Composite Editor - JDeveloper extension for SOA technologies: SOA Composite Assembly, BPEL PM, Mediator, Human Task, Business Rules, Adapters.

    1. Help -> Check for Updates
       In the Update Wizard, select Search Update Centers and ensure Oracle Fusion Middleware Products is checked. Select Oracle SOA Composite Editor and click Next to begin downloading.
       If this doesn't work, add a new update site with the link:
       and it will show the editor for updating JDeveloper.

    2. From file - directly download the file at
       In the Update Wizard,
       choose the install-from-file radio option. The file is about 230MB.

Installing JDev for BPM

BPELscript - javascript-like language for BPEL

BPELscript is a language for specifying BPEL processes: a programming language that omits the XML overhead of BPEL but offers the same features.
BPELscript provides:
  1. a compact syntax inspired by scripting languages such as JavaScript and Ruby
  2. the full coverage of all features provided by BPEL
  3. a translation from and to WS-BPEL 2.0
The translation to WS-BPEL 2.0 ensures that BPELscript can be executed on any workflow engine that supports WS-BPEL 2.0.

namespace pns = "";
namespace lns = "";
@type ""
import lServicePT = lns::"loanServicePT.wsdl";
process pns::loanApprovalProcess {
    partnerLink customer = (lns::loanPartnerLT, loanService, null),
                approver = (lns::loanApprovalLT, null, approver),
                assessor = (lns::riskAssessmentLT, null, assessor);
    try {
        parallel {
            @portType "lns::loanServicePT" @createInstance
            request = receive(customer, request);
            signal(receive-to-assess, [$request.amount < 10000]);
            signal(receive-to-approval, [$request.amount >= 10000]);
        } and {
            @portType "lns::riskAssessmentPT"
            risk = invoke(assessor, check, request);
            signal(assess-to-setMessage, [$risk.level = 'low']);
            signal(assess-to-approval, [$risk.level != 'low']);
        } and {
            approval.accept = "yes";
        } and {
            join(receive-to-approval, assess-to-approval);
            @portType "lns::loanApprovalPT"
            approval = invoke(approver, approve, request);
        } and {
            join(approval-to-reply, setMessage-to-reply);
            @portType "lns::loanServicePT"
            reply(customer, request, approval);
        }
    }
    @faultMessageType "lns::errorMessage"
    catch(lns::loanProcessFault) { |error|
        @portType "lns::loanServicePT" @fault "unableToHandleRequest"
        reply(customer, request, error);
    }
}
How it works:
It uses a translator based on the ANTLR v3 (ANother Tool for Language Recognition) parser generator, which employs a predicated-LL(*) parsing strategy. This solution uses the implicit tree structure behind the input sentences to construct an abstract syntax tree (AST), a highly processed and condensed version of the input. The translator maps each input sentence of the source language to an output sentence by embedding actions (e.g. code) within the grammar or tree; these actions are executed according to their position within the grammar or tree. To support easy handling of implicit declarations, the translation is broken down into multiple passes.