Wednesday, February 26, 2020

Call Custom function in OIC - Random Number Generator

Oracle Integration Cloud has a lot of out-of-the-box functions available. But there are cases where the available functions do not match our requirements. In such cases, we can create custom functions and make them available for use in our mapper or integration layer.

The only constraint in OIC is that custom functions must be written in JavaScript (and not Java, unlike other products such as SOA and OSB) and registered as a JavaScript library.

Registering a Library:

Let's take a simple example of generating random numbers in the provided range and using that in our integration.


Here's the JavaScript code -


function getRandomNum(min, max) {
  // Returns a random integer in [min, max)
  var rnd = Math.floor(Math.random() * (max - min)) + min;
  return rnd;
}

This function takes two input parameters, min and max. Each call returns a random integer between min (inclusive) and max (exclusive).

A few points to note here -


  • Logic must always be wrapped in a function if you want to use it in OIC.
  • You must assign the return value to a variable and return that variable.

The following code will not work as intended -
  return Math.floor(Math.random() * (max - min)) + min;
You have to assign the expression to a variable, say rnd, and return it.
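Putting the pieces together, here is a minimal standalone sketch (runnable outside OIC) that exercises the function and checks its range before you upload the .js file:

```javascript
// The function exactly as registered in the OIC library:
// returns a random integer in [min, max) -- max itself is never produced.
function getRandomNum(min, max) {
  var rnd = Math.floor(Math.random() * (max - min)) + min;
  return rnd;
}

// Quick local sanity check of the range.
for (var i = 0; i < 1000; i++) {
  var n = getRandomNum(5, 99999);
  if (n < 5 || n >= 99999) {
    throw new Error("value out of range: " + n);
  }
}
```

Note that because of Math.floor, the upper bound is exclusive; if you need max to be a possible result, use `Math.random() * (max - min + 1)` instead.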

Save the code in the first snippet as a .js file (RandomNumGenerator.js) on your local machine. Now, we will register the .js file as a library in OIC.

1. Log in to the OIC console, go to Integrations -> Libraries and click the Register button at the top right corner.
2. In the Register Library pop-up screen, choose the .js file that you saved above. Give the custom library a proper name and description and click Create.



3. The library has been registered successfully. You can now see the function you created in the left functions pane, along with some input and output parameters that you need to define.


4. Classification Type defines whether the library will be used in Orchestration or XPath. I chose Orchestration.
5. Define the type of each input and output parameter, which can be Number, String or Boolean.

Now the final library looks as follows. We are done with registering the library.



Invoking the library in your integration:

In this example, let's create a simple App Driven Orchestration integration. It will be exposed as a synchronous REST service that takes two query parameters as input and returns a JSON response containing the random number.

1. I have created the integration with name PG_CUSTOMFUNCTION.
2. The REST trigger (named GetRandomNumber) details look as follows -

Resource: /getRandomNum

Method: GET

Query Parameters:
  • minVal
  • maxVal

Response Media Type:
  • application/json

Response sample:
    { "randomVal" : "" }
3. From the Actions palette, drag and drop a JavaScript action after the initial trigger. Give it a name (genRandomNum) and description.
4. A JavaScript editor screen opens. Click on + function to choose the JavaScript library that you registered.

5. Now map the input values of the JavaScript library by clicking the edit icon next to Value.

6. Map the query param minVal to the JS function's input min.

7. Similarly, map the query param maxVal to the JS function's input max. Save and close the action window.

8. Now map the output of the JS function to the final output randomVal as follows.



9. Final integration looks as follows -



Activate the integration and test it. In the below REST URL, I submitted minVal as 5 and maxVal as 99999.

https://test.oracletest.com/ic/api/integration/v1/flows/rest/PG_CUSTOMFUNCTION/1.0/getRandomNum?minVal=5&maxVal=99999

I received the following JSON response.

{
"randomVal" : "53152"
}

You will get different random values each time you test the service.
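For completeness, the test call can also be scripted. This is only a sketch - the host in the URL is the placeholder from the example above, and the authentication header and fetch availability depend on your environment:

```javascript
// Build the integration's endpoint URL from the two query parameters.
// Base URL taken from the example above; credentials are omitted.
function buildRandomNumUrl(minVal, maxVal) {
  var base = "https://test.oracletest.com/ic/api/integration/v1/flows/rest/PG_CUSTOMFUNCTION/1.0/getRandomNum";
  return base + "?minVal=" + encodeURIComponent(minVal) +
         "&maxVal=" + encodeURIComponent(maxVal);
}

// In Node 18+ (or a browser), the activated integration could then be called:
// fetch(buildRandomNumUrl(5, 99999), { headers: { Authorization: "Basic <credentials>" } })
//   .then(function (r) { return r.json(); })
//   .then(function (body) { console.log(body.randomVal); });
```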



Tuesday, February 25, 2020

Types of Integrations in OIC

If you are a SOA-experienced person new to OIC, I have covered OIC in relation to SOA below. Have a look if interested.

This post will help you understand the differences between the various integration types in OIC. I will mostly cover the differences in technical features, so that it is helpful during development.

Types of integrations in OIC –

Basic Routing – Any simple app-to-app integration can be done through basic routing.
- No complex orchestration supported; simple app-to-app integration with mappings
- Data enrichment can be done by querying data from other systems
- Looping not supported
- Error handling not supported
Features: You would find only connections and triggers in this type of integration.

App Driven Orchestration – Supports complex orchestrations with multiple applications involved.
- Supports multi-step integration
- Looping, error handling and many advanced concepts are supported
- Provision for callbacks and email notifications
- For app- or business-event-specific integrations
Features: You would find actions like scopes, assign, logger, notifications, for-loop, etc., along with connections and invokes.

Scheduled Orchestration – As the name suggests, this type of integration can be scheduled to trigger at a specified time or frequency.
- Most of the features of App Driven Orchestration are supported here as well
- Majorly used for file-based integrations or batch processing
Features: All features as in App Driven Orchestration.

File Transfer – Basic integration that helps transfer files from one location to another.
Features: All features as in App Driven Orchestration.

Publish to OIC – Supports publishing any kind of message to OIC.
- This is more like basic routing, with just connections available to publish messages to OIC through the IC Messaging Service connection.
- No other orchestration can be done here.

Subscribe to OIC – Supports subscribing to messages from OIC.
- Allows subscribing to messages and passing them along to downstream services.
- Data enrichment and app connections are allowed.
- Not as extensive as App Driven Orchestration.

The above two integration types can be related to working with messaging services in WebLogic, like Java Messaging Service (JMS).

Note: Based on the integration type you choose to develop, the activities related to that integration are shown in the right palette.

Oracle Integration Cloud in relation to SOA

Are you an Oracle SOA (Service Oriented Architecture) developer trying to understand OIC (Oracle Integration Cloud) from a development perspective?

If you are already a pro in SOA, then OIC is a cakewalk for you. But if you are just acquainted with SOA and want to relate or compare it with OIC, you are at the right place.

This post will help SOA developers to understand OIC easily in SOA terminology.

1. Connection - A connection in OIC has multiple references in SOA.
  • DB Connection - Comparable to a data source created in WebLogic. A DB-based connection holds the details needed to connect to the database: host name, port, service name/SID, username, password, etc.
  • REST Connection - Can be related to an empty REST adapter created in SOA without any service or schema details associated.
  • SOAP Connection - A SOAP/web service adapter in SOA with a WSDL associated.
  • FTP Connection - Similar to an FTP outbound connection pool created in SOA's WebLogic server. It contains the FTP server's host address, port, username, password, security policy, etc.
  • Oracle ERP Cloud Connection - An ERP Cloud adapter in SOA with the ERP services catalog URL, username and password defined.
Similarly, various other types of connections can be created.

2. Trigger and Invoke - These are the roles that can be assigned to a connection. The Trigger role means the connection can be exposed as a "Service" in SOA terms and accept input like a "Receive" activity. The Invoke role means the connection can be invoked as a "Reference" partner link in SOA and sent input through an "Invoke" activity.

3. Integrations - These can be compared to "Composites" in SOA. Just as each composite can be designed as an Empty Composite, Composite with BPEL, etc., integrations can be designed as Basic Routing, App Driven Orchestration, Scheduled Orchestration, etc.

4. Packages - Packages are like Applications in SOA. All integrations in a package can be bundled together and moved to different environments.

5. Libraries - Unlike SOA, where custom Java classes can be written and imported into a component, OIC does not support Java libraries. OIC only supports JavaScript libraries. You can write all your functions in JavaScript, add them as a library and use them in OIC integrations.

OIC is a much simpler product than SOA in terms of the richness of components.
OIC can be used in scenarios where you want simple integrations with services or APIs already present in the cloud or on-premise, where message sizes are not too large and not a lot of orchestration is needed.

Trimming and Padding characters in XSLT

We had a specific requirement from a service provider to ensure that the value sent for a field is always of a fixed length. If the value from the front-end application is shorter than the required length, we, in BPEL, have to append or prepend a character to the value until the length is reached.
This is also called padding a field.

Note: This blog does not explain how to create an XSLT and basics of XSLT.

I would take an example and explain how it is achieved.

Prepend characters, or left padding - The field length needed is 5 characters. If the value of the field is shorter than 5 characters, prepend the character 'a' to the start of the value.

I have used the following code snippet - 


<xsl:template name="prepend-pad">
<!-- recursive template to right-justify and prepend the value with whatever padChar is passed -->
 <xsl:param name="padChar"/>
 <xsl:param name="padVar"/>
 <xsl:param name="length"/>
 <xsl:choose>
  <xsl:when test="string-length($padVar) &lt; $length">
   <xsl:call-template name="prepend-pad">
    <xsl:with-param name="padChar" select="$padChar"/>
    <xsl:with-param name="padVar" select="concat($padChar,$padVar)"/>
    <xsl:with-param name="length" select="$length"/>
   </xsl:call-template>
  </xsl:when>
  <xsl:otherwise>
   <xsl:value-of select="substring($padVar,string-length($padVar)-$length+1)"/>
  </xsl:otherwise>
 </xsl:choose>
</xsl:template>
 

The above template is a reusable function which can be called from multiple places in the XSLT.
padChar - the character with which you want the padding to happen.
padVar - the actual value which needs padding.
length - the length to which you need to pad.

All I need to do is call the above template from my XSLT for whichever field padding needs to be applied to.

Calling the template is done as follows -
<xsl:call-template name="prepend-pad">
 <xsl:with-param name="padChar" select="'a'"/>
 <xsl:with-param name="padVar" select="'blog'"/>
 <xsl:with-param name="length" select="5"/>
</xsl:call-template>

This would give me the following results -

padVar | padChar | length | result
-------|---------|--------|-------
blog   | a       | 5      | ablog
g      | a       | 5      | aaaag


Append characters, or right padding - The field length needed is 5 characters. If the value of the field is shorter than 5 characters, append the character 'a' to the end of the value.


<xsl:template name="append-pad">
<!-- recursive template to left-justify and append the value with whatever padChar is passed -->
 <xsl:param name="padChar"/>
 <xsl:param name="padVar"/>
 <xsl:param name="length"/>
 <xsl:choose>
  <xsl:when test="string-length($padVar) &lt; $length">
   <xsl:call-template name="append-pad">
    <xsl:with-param name="padChar" select="$padChar"/>
    <xsl:with-param name="padVar" select="concat($padVar,$padChar)"/>
    <xsl:with-param name="length" select="$length"/>
   </xsl:call-template>
  </xsl:when>
  <xsl:otherwise>
   <xsl:value-of select="substring($padVar,1,$length)"/>
  </xsl:otherwise>
 </xsl:choose>
</xsl:template>
 

Call the above template from XSLT similar to the first example.

This would give me the following results -

padVar | padChar | length | result
-------|---------|--------|-------
blog   | a       | 5      | bloga
g      | a       | 5      | gaaaa

In this way, we can ensure that the field values are padded according to the requirement.
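For intuition, the same recursive logic can be sketched in JavaScript (hypothetical helper names; the XSLT templates above are what actually runs in BPEL):

```javascript
// Left-pad (prepend-pad): recurse until padVar reaches length,
// then keep the rightmost `length` characters.
function prependPad(padChar, padVar, length) {
  if (padVar.length < length) {
    return prependPad(padChar, padChar + padVar, length);
  }
  return padVar.slice(padVar.length - length);
}

// Right-pad (append-pad): recurse until padVar reaches length,
// then keep the leftmost `length` characters.
function appendPad(padChar, padVar, length) {
  if (padVar.length < length) {
    return appendPad(padChar, padVar + padChar, length);
  }
  return padVar.slice(0, length);
}

console.log(prependPad("a", "blog", 5)); // "ablog"
console.log(prependPad("a", "g", 5));    // "aaaag"
console.log(appendPad("a", "blog", 5));  // "bloga"
console.log(appendPad("a", "g", 5));     // "gaaaa"
```

Note that, like the XSLT substring calls, both helpers also truncate a value that is already longer than the target length.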


Thursday, February 20, 2020

Distributed Polling in SOA

In a clustered environment, the most common issue we see while polling the database for records is that the same record gets retrieved twice. This issue does not occur in development environments, because most development environments are single-node rather than clustered.

But as we move our BPEL code with a DB poller to higher environments which are clustered (multiple nodes in an active-active setup), this issue is common.

To resolve this issue, the solution is to set up distributed polling in the DB adapter. This feature issues SELECT FOR UPDATE SKIP LOCKED on the rows fetched, which prevents other nodes from retrieving the same records.

Let's see how distributed polling is done, and also discuss other JCA properties which help tune the adapter processing.

Here are a few properties from the activation-spec of the .jca file which need discussion -

<property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
<property name="PollingInterval" value="5"/>
<property name="MaxRaiseSize" value="2"/>
<property name="MaxTransactionSize" value="4"/>
<property name="NumberOfThreads" value="2"/>
<property name="ReturnSingleResultSet" value="false"/>
<property name="RowsPerPollingInterval" value="20"/>       

When distributed polling is checked, with the above properties set in the .jca file and the DB adapter deployed, the following process takes place.
  1. Based on the NumberOfThreads configured, polling threads are created. Each thread initiates a transaction and searches for matching rows in the database with SELECT FOR UPDATE SKIP LOCKED issued.
  2. If no matching records are found, the thread releases the transaction and sleeps until the PollingInterval (PI) duration is met.
  3. When matching rows are found, each thread (which has already started a transaction and found rows) issues a FETCH of a certain number of rows from the database. This "certain number of rows" is defined by MaxTransactionSize (MTS).
  4. Once the rows are fetched, the thread does not send all the rows as-is to the destination. It loops over the fetched rows, groups them based on the MaxRaiseSize (MRS) set, and then sends them to the destination.
  5. After sending to the destination, the thread compares the number of rows delivered to the destination with RowsPerPollingInterval (RPPI).
If rows delivered >= RPPI, the thread sleeps for the duration of the PollingInterval and then wakes up.
If rows delivered < RPPI, the thread continues the fetch, loop and deliver process.

Let's take the example of the values set in the above code snippet and see how polling happens.
  1. 2 threads are created, as NumberOfThreads is set to 2.
  2. Each thread initiates a transaction and searches for matching rows. Say there are 10 matching rows returned from the DB cursor.
  3. Each thread issues a fetch for only 4 rows, as MaxTransactionSize is set to 4.
  4. Once 4 rows are fetched, the thread loops and groups them into batches of 2 rows based on MaxRaiseSize.
  5. 2 rows are delivered to the destination.
  6. The thread compares the rows delivered (2) with RowsPerPollingInterval (20).
  7. As rows delivered < RPPI, the thread loops through the next batch and sends it to the destination.
Here's how the processing looks -
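The fetch/group/deliver behaviour in the walkthrough above can also be sketched as a small simulation (illustrative only; the real work happens inside the DB adapter):

```javascript
// Simulate one polling cycle of a single thread:
// fetch up to MTS rows, group them into MRS-sized batches,
// and deliver batches until RPPI rows have been delivered.
function pollOnce(rows, mts, mrs, rppi) {
  var fetched = rows.slice(0, mts);        // FETCH limited by MaxTransactionSize
  var batches = [];
  var delivered = 0;
  for (var i = 0; i < fetched.length && delivered < rppi; i += mrs) {
    var batch = fetched.slice(i, i + mrs); // group by MaxRaiseSize
    batches.push(batch);                   // "deliver" the batch
    delivered += batch.length;
  }
  return batches;
}

// 10 candidate rows, MTS=4, MRS=2, RPPI=20 (values from the snippet above)
var result = pollOnce([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4, 2, 20);
console.log(result); // [ [ 1, 2 ], [ 3, 4 ] ] - two raises of 2 rows each
```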


The RowsPerPollingInterval property is used to throttle the polling threads. If the RPPI property is not set, the polling threads will continue to fetch and process rows until there are no more available. If the RPPI property is set, the smaller the value, the slower the processing. Another way to view the RPPI value is with the following scenarios:

MTS = 10 and RPPI = 10, then each thread will only process one MTS batch before sleeping
MTS = 10 and RPPI = 20, then each thread will process two MTS batches before sleeping
MTS = 10 and RPPI = 30, then each thread will process three MTS batches before sleeping
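Assuming each fetch returns a full MTS batch, the scenarios above reduce to a simple formula, ceil(RPPI / MTS):

```javascript
// Number of MTS-sized batches a thread processes before sleeping,
// given the RPPI throttle (assumes full batches are always available).
function batchesBeforeSleep(mts, rppi) {
  return Math.ceil(rppi / mts);
}

console.log(batchesBeforeSleep(10, 10)); // 1
console.log(batchesBeforeSleep(10, 20)); // 2
console.log(batchesBeforeSleep(10, 30)); // 3
```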

This way we can ensure distributed polling in db adapter.




Pipeline Alerts in OSB

Alerts in OSB are used to notify team members about any issues, abnormalities or noteworthy conditions in services that need immediate or future action.

Two types of alerts can be configured in OSB - Pipeline alerts and Service Level Agreement (SLA) alerts.

This post will provide basic examples of how to set up pipeline alerts.


Pipeline Alerts:

These are simple alerts set up in the OSB pipeline based on the message context/body/payload. We can consider these alerts to be more for business purposes or error handling - to notify business users about any discrepancies in the data being submitted, or any other errors to be reported.

Say, for example,
a) In an order processing flow in OSB, if any orders are processed with a price of more than $1000, an alert notification needs to be raised.
b) If there are specific business errors in the flow, an alert notification needs to be raised.

Here, I have created a simple OSB project to show how to work with pipeline alerts. This post will not cover the basic OSB creation process.

This OSB service takes two int elements and an Operation parameter as input. If the operation is Sum, addition is done on the input parameters and the response is returned. If the operation is Sub, subtraction is done. If the operation is anything else, an alert needs to be raised.

Here's a screenshot of how the pipeline looks. SumNode and SubNode are the two valid nodes that return results. The Default node raises an alert with an invalid-operation error. The highlighted part is the alert that has been added. I will show in detail how to create the pipeline alert.



Right-click on the project and create an alert destination. Give it a specific name.


In the next screen, ensure that alert logging is enabled. This helps to view the alerts in the alerts dashboard.


In the pipeline, in the default operation node, drag and drop an Alert activity. You will see the following alert properties.


Let's start filling in each of the above properties.

Content is the actual data you want to capture in the alert. In my case, I want to show the operation that caused the issue.
 

Summary can be some descriptive heading for the alert.

Severity - There are several levels present in the drop-down. Select the one with which you want the alert to be notified.


Finally, for the destination, choose the alert destination you created in the initial steps.


We are done with the alert setup. Now test the OSB service with an operation other than Sum and Sub; you will find the alert in the console.

Navigation to check the alert - Go to the EM console -> select the service-bus under SOA -> click on the Alerts History tab.

In Alert Type, choose Pipeline Alerts. You can also filter by alert summary name. Click Search. You can find the alerts as follows.


Click on the Alert Summary for any one of the notifications; you can find all the details that have been logged.