JAX-RS REST with Swagger in IBM Integration Designer

Over the last two or three years we have seen an evolution in the way we interact with our services, driven largely by mobile devices, which have neither the processing power of the servers in our data centers nor unconstrained bandwidth. Enterprise services in a Service Oriented Architecture use(d) HTTP/SOAP as an interoperability protocol, and a lot of software products expose web service APIs via SOAP, but with the growing business demand to go mobile these interfaces don’t play very well with our devices. On top of that, the computational cost of SOAP message serialization is notable, which is why HTTP/JSON has taken over the mobile and cloud world. Though JSON has many obvious advantages as a data interchange format – it is lightweight, human readable, well understood, and typically performs well – it also has its issues (I will save those for another article).

To everyone working with web services, the standard definition file is a WSDL: essentially a service contract between client and server with well-defined objects and schemas, containing either document-oriented or procedure-oriented information. In WebSphere Integration Developer (WID), now IBM Integration Designer (IID), you can easily create a web service HTTP/SOAP binding from a WSDL file, or use an HTTP binding with the built-in JSONDataHandler to expose your service as a RESTful service in a couple of clicks. Sadly, at this point no one knows how to use our RESTful API, and you have to spend time writing additional documentation or defining a WADL file, which is an incredibly time-consuming way to create descriptions.

Fortunately there is swagger.io, a framework that helps fill the gap. Swagger is a specification for documenting REST APIs: it specifies the format (method, URL and representation) used to describe REST web services, and it also provides tools to generate the documentation from application code.

IBM BPM 8.5.x uses WAS 8.5.x as its runtime platform, which, as part of the official Java EE 6 platform, supports JAX-RS. Integration Designer supports neither JAX-RS nor Swagger in its auto-generated bindings (there is an RFE for this feature). Luckily it does support web application development, and that’s what I’m going to use: create a JAX-RS web interface, declare Swagger annotations, and call out via an SCA reference partner to my mediation.

We are going to start with the interface, the objects and the stand-alone reference component. The name of the stand-alone reference partner is “CarPartner”.

[Image: jaxrs-1]

[Image: jaxrs-1a]

Now let’s create a Web Application Module (war)

[Image: jaxrs-1b]

and link it to our Mediation Module

[Image: jaxrs-1c]

When cleaning/building your workspace, IID recompiles your application, so any modifications made in the default web module will be erased; that’s another reason to use a separate custom web module. From here on you can start creating your JAX-RS interface and writing the code needed to initialize Swagger and make the connection to your SCA component.

To give Swagger knowledge of the structure of the schema objects used by the JAX-RS interface, we have to generate JAXB POJO/SDO implementation classes, because Swagger is not designed to understand XSDs. You can do that by switching from the default Business Integration perspective to the Java perspective and using the built-in wizard to generate the Java classes.
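To make the walkthrough concrete, assume the business object is a simple Car. The essential shape of the generated class is a plain typed bean that Swagger can introspect; the fields below are hypothetical, and real wizard output also carries SDO plumbing:

// Hypothetical sketch of a generated Car business object class;
// what matters to Swagger is a strongly typed bean with getters/setters.
public class Car {

    private int id;
    private String brand;
    private String model;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getBrand() { return brand; }
    public void setBrand(String brand) { this.brand = brand; }

    public String getModel() { return model; }
    public void setModel(String model) { this.model = model; }
}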

[Image: jaxrs-1d]

If you are using Maven or Gradle you can get the Swagger libraries from the Maven repository. For the purposes of this walkthrough I have extracted the necessary jar files from the pom.xml via

mvn install dependency:copy-dependencies

and copied them into the WAR/lib folder. There are also two additional libraries for CORS filter support, which are needed to grant swagger-ui access to the API.
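For reference, the equivalent Maven dependencies look something like this; the versions are illustrative, so check the repository for current ones:

<!-- Illustrative dependency set; versions are examples only -->
<dependencies>
  <!-- Swagger core and JAX-RS integration -->
  <dependency>
    <groupId>io.swagger</groupId>
    <artifactId>swagger-jaxrs</artifactId>
    <version>1.5.0</version>
  </dependency>
  <!-- CORS filter so swagger-ui can call the API from another origin -->
  <dependency>
    <groupId>com.thetransactioncompany</groupId>
    <artifactId>cors-filter</artifactId>
    <version>2.4</version>
  </dependency>
</dependencies>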

[Image: jaxrs-1e]

Let’s assemble everything. There are various ways to initialize Swagger. First, update web.xml with the following entries:

<servlet>
  <description>
    This is the description for the sample servlet
  </description>
  <display-name>Gateway</display-name>
  <servlet-name>com.mygateway.GatewayApplication</servlet-name>
</servlet>
<servlet-mapping>
  <servlet-name>com.mygateway.GatewayApplication</servlet-name>
  <url-pattern>/rest/*</url-pattern>
</servlet-mapping>
<filter>
  <filter-name>CORS</filter-name>
  <filter-class>com.thetransactioncompany.cors.CORSFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CORS</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Create an Application class with the following entries (more information can be found in the Swagger documentation):

package com.mygateway;

import io.swagger.jaxrs.config.BeanConfig;
import io.swagger.jaxrs.listing.ApiListingResource;
import io.swagger.jaxrs.listing.SwaggerSerializers;
import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.core.Application;

public class GatewayApplication extends Application {

    public GatewayApplication() {
        BeanConfig beanConfig = new BeanConfig();
        beanConfig.setTitle("PoT SWAGGER - REST GATEWAY");
        beanConfig.setDescription("Test API REST Interface with Swagger Annotations");
        beanConfig.setVersion("0.0.1");
        beanConfig.setSchemes(new String[] { "http" });
        beanConfig.setHost("localhost:9080");
        beanConfig.setBasePath("/Gateway/rest");
        beanConfig.setResourcePackage("com.mygateway");
        beanConfig.setScan(true);
        beanConfig.setPrettyPrint(true);
    }

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();

        // JAX-RS interface
        classes.add(GatewayService.class);
        // Swagger initialisation classes
        classes.add(ApiListingResource.class);
        classes.add(SwaggerSerializers.class);

        return classes;
    }
}

My JAX-RS class (GatewayService.java) has the same three operations as my SCA interface. Once the HTTP call comes in, a call-out is made via the SCA partner “CarPartner” to my mediation and the response is fed through an SDO/JSON data formatter.

@Api(value = "/cars", description = "RESTful API to interact with the Car Service.")
@Path("/cars")
@Stateless
public class GatewayService {

    @ApiOperation(value = "Get Car by ID", notes = "Get model details by ID", response = Car.class)
    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getById(
            @ApiParam(value = "Car ID", required = true) @PathParam("id") int id)
            throws Exception {

        // Invoke SCA Stand-alone Reference Partner
        Service service = (Service) ServiceManager.INSTANCE.locateService("CarPartner");
        DataObject car = (DataObject) service.invoke("getCarById", Integer.valueOf(id));
        DataObject res = (DataObject) car.get(0);

        if (res == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }

        // Use SDO to JSON Converter
        com.ibm.bpe.message.JsonConverter jc = com.ibm.bpe.message.JsonConverter.getInstance();
        String resp = jc.convertObject2JSON(res);

        return Response.ok(resp).build();
    }
...
...
}

The response class definition points to the SDO Java implementation generated through the wizard.

 response = Car.class

You might ask why I’m using the generated Java classes only for Swagger presentation purposes. Well, many ways lead to Rome. For the response you could take one of several approaches: convert EMF EObjects or SDO EDataObjects to Java POJOs and then to JSON, create a dynamic EMF model from the XSDs, or serialize JSON to XML to SDO. I still believe a simple SDO/JSON transformation gives the best performance. On the other hand Swagger needs a strongly typed definition of your objects, whereas the DataObject implementation is dynamic; hence we use the generated classes for the Swagger presentation layer.
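For comparison, the POJO route boils down to something like this sketch; it assumes the generated Car class from above and the jackson-databind library on the classpath, neither of which the approach taken in this article requires:

import com.fasterxml.jackson.databind.ObjectMapper;

public class PojoJsonSketch {

    // Copying the DataObject values into the Car POJO would happen
    // before this step; Jackson then handles the POJO-to-JSON leg.
    public static String toJson(Car car) throws Exception {
        return new ObjectMapper().writeValueAsString(car);
    }
}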

After deploying the application to Process Server we can invoke the REST service.
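For example, with the host and base path configured in the BeanConfig above, fetching the car with ID 1 (a hypothetical ID) looks like this:

curl http://localhost:9080/Gateway/rest/cars/1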

[Image: jaxrs-1g]

Now it’s time to see the service definition in my local swagger-ui.
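Behind the scenes, swagger-ui reads the definition served by ApiListingResource under the configured base path, here /Gateway/rest/swagger.json. Abbreviated, and reflecting the BeanConfig values from earlier, it looks something like this:

{
  "swagger": "2.0",
  "info": {
    "title": "PoT SWAGGER - REST GATEWAY",
    "description": "Test API REST Interface with Swagger Annotations",
    "version": "0.0.1"
  },
  "host": "localhost:9080",
  "basePath": "/Gateway/rest",
  "paths": {
    "/cars/{id}": {
      "get": {
        "summary": "Get Car by ID",
        ...
      }
    }
  }
}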

[Image: swagger2]


WebSphere JDBC adapter and hierarchical objects with Oracle

At my client’s we have used the JDBC adapter extensively in the past to integrate with Oracle tables. In fact, the solution in place pretty much hinges on Oracle tables for marketing campaigns and product configuration details.

So far we have always been conservative in our approach, creating integration flows one table at a time or working against Oracle views, trying not to ask too much of the adapter.

New business requirements for a future release demand a bit more from our JDBC connectivity. We have to retrieve, update and create from multiple related tables, which is mostly fine, but we ran into a little rough patch doing the inserts.

It is worth mentioning that none of this would have been an issue with DB2, which supports identity columns. The JDBC adapter understands these identity fields and generates the business objects accordingly. But with Oracle things are a little different.

Oracle implements auto generated primary keys with a combination of Sequences and Triggers. The sequence supplies the ID value and the trigger fires before the insert populating the primary key with it.

Let’s take a simple example of two related tables, AUTHOR and BOOK.
AUTHOR.ID will be the primary key and BOOK.AUTHID will be the foreign key relating book rows to their author. I’m using Oracle XE 10g and created the tables using the web ui.
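If you’d rather script the setup than use the web UI, it amounts to something like the DDL sketch below; the key columns match the example, the other columns are illustrative, and AUTHOR_SEQ is the sequence the trigger shown later relies on:

-- DDL sketch: key columns as per the example, other columns illustrative
CREATE SEQUENCE AUTHOR_SEQ;

CREATE TABLE AUTHOR (
  ID   NUMBER PRIMARY KEY,
  NAME VARCHAR2(100)
);

CREATE TABLE BOOK (
  ID     NUMBER PRIMARY KEY,
  TITLE  VARCHAR2(200),
  AUTHID NUMBER REFERENCES AUTHOR(ID)
);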

You can start by creating a new WID integration project and dropping an outbound JDBC adapter in the assembly. Go through the wizard as usual, but make sure you click the ‘Edit Query…’ button and check the ‘Prompt for additional configuration settings when adding business objects’ check box, as shown in the image below:

After you run the query you can start adding tables. Add the AUTHOR table first accepting all defaults.

Next add the BOOK table and build the child/parent relationship as in the following image:

Complete the wizard (I don’t generate business graphs; clear the checkbox).

Now is when the manual changes happen. Open the generated BO for the AUTHOR table; I’m using the system schema, so the BO is called SystemAuthor.

You have to manually add the UID annotation to the ‘id’ attribute and supply the sequence name, as shown below. This is done by right-clicking the metadata element.

You can repeat the process for the BOOK object for good measure, though for this case it isn’t strictly required.

Next we have to modify the Oracle trigger:

CREATE OR REPLACE TRIGGER  "BI_AUTHOR"
  before insert on "AUTHOR"
  for each row
WHEN (new.id is NULL)
begin
    select "AUTHOR_SEQ".nextval into :NEW.ID from dual;
end;

The manual change, the WHEN (new.id is NULL) condition, is required because the JDBC adapter, once told that the ID field is generated, will query the sequence and populate the attribute with the retrieved value. It also synchronises any foreign keys on child objects. Without this change, just before the insert operation the trigger would fire, replace the value of the auto-generated field with the next value in the sequence, and put the parent/child relationship out of sync.

You can also modify the trigger for BOOK inserts. We don’t have child objects of BOOK in this example to worry about but it makes sense to do it for completeness and to be ready for them.

You can now deploy and test. One thing to watch out for: the integration test client will populate the foreign key on the child object with a default value unless you manually unset it. So make sure you explicitly set any authid book attribute to unset, as below:

The response should look like this:

You can also check your tables and verify that the rows have been inserted as expected.

Best regards

Gaby

SQL Exception: Unable to insert null into process_template

This is an old foe that has reared its ugly head again at my client’s.

One of the coding standards we have in place states that in a BPEL process, invoke activities must catch their corresponding interface faults, and if necessary throw a process specific fault to be caught at the top level of the BPEL.

A developer was very diligent following this standard, which is good news, but he forgot to populate the namespace of the fault he was throwing.

Version 7 does this for you, which is neat, but in v6 if you don’t specify the namespace as in the image below, it will be left empty.

If you leave the fault namespace blank, the process might not behave as expected, but it will deploy and start on your integrated test environment.

However, when deployed to a real environment using Oracle as the persistence mechanism this process will not start.

Enabling detailed trace on the server (com.ibm.bpe.*=all) revealed a rather terse ORACLE SQL EXCEPTION: UNABLE TO INSERT NULL into PROCESS_TEMPLATE(null,templateId,).

Only ‘thanks’ to past experience and recollection was I able to diagnose and correct the problem.

SCA Recursion

A service I’m designing for a client will retrieve mortgage details to be displayed by the web tier.

It is common for mortgages to consist of a main account which itself is composed of a number of sub-accounts, and it is not inconceivable for the sub-accounts themselves to have further sub-accounts.

To retrieve the entire tree my service has to call the mortgage account back-end multiple times while drilling down into accounts for sub-accounts to retrieve.

All those main and sub-accounts share the same data type, so recursion seemed a good fit: I receive a request with a mortgage account Id, call the back-end service, which responds with account details including a list of sub-account Ids, and for each sub-account I call myself, appending responses to the list of mortgage details in the final response.

The end result is a response structure with nested mortgage details.

Let’s start by taking a look at the assembly diagram:

The MortgageAccount untyped component represents the back-end. I will be manually emulating this, so an untyped component will suffice.

RetrieveMortgageDetails is a BPEL microflow which references the back-end mock and also references itself.

I’ve implemented two interfaces, one for the microflow and another one for the mock back-end.

Both interfaces have a message input with a mortgage account Id but the response objects are slightly different.

The microflow returns a MortgageDetails object, which has as an attribute a list of MortgageDetails, as follows:

The mock back-end returns a MortgageAccount object, which has as an attribute a list of sub-account Ids:

The microflow is implemented as follows:

The first invoke calls the back-end, then for each sub-account Id in the response I call myself, appending the response to the subAccount attribute in my response to the calling client. Of course the ‘calling client’ will be myself when traversing the nested sub-accounts, and the real client once all sub-accounts are retrieved.
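Rendered as plain Java, the logic of the microflow looks roughly like the sketch below. This is illustrative only: the real implementation is BPEL calling itself through an SCA reference, all names are made up, and the hard-coded sub-account ids simply mirror the test run shown next:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RecursionSketch {

    static class MortgageDetails {
        String accountId;
        List<MortgageDetails> subAccounts = new ArrayList<MortgageDetails>();
    }

    // Stand-in for the MortgageAccount back-end: returns sub-account Ids
    static List<String> getSubAccountIds(String id) {
        if (id.equals("1"))   return Arrays.asList("1.1", "1.2");
        if (id.equals("1.1")) return Arrays.asList("1.1.1", "1.1.2");
        return Collections.emptyList();
    }

    static MortgageDetails retrieveMortgageDetails(String id) {
        MortgageDetails details = new MortgageDetails();
        details.accountId = id;
        for (String subId : getSubAccountIds(id)) {
            // the recursive self-invocation, one call per sub-account
            details.subAccounts.add(retrieveMortgageDetails(subId));
        }
        return details;
    }
}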

Here’s the result of a test run:

You can see how the response has a main mortgage account (id 1) which has two sub-accounts (1.1 and 1.2). You can also see that sub-account 1.1 has two nested sub-accounts (1.1.1 and 1.1.2).

It’s Store-and-Forward, Jim, but not as we know it.

You might have heard of v7’s formal support for store-and-forward and be wondering what this post is about. Well, it’s not about v7’s support for it.

The product feature relates to its ability to recognise when a target system is unavailable, work upstream until it finds the first asynchronous interaction point, and store requests there until human intervention, via a dedicated Business Space widget, reopens the information flow.

The key points here are: 1) only asynchronous interactions are supported, and 2) human intervention is necessary to restore the flow.

I’m working against a different scenario here. What I want to do is to automatically switch from synchronous to asynchronous processing of selected interactions when a target system is not available. I want synchronous consumers of my service to always get a synchronous response, either complete or partial, depending on whether my respective target systems are reachable or not.

I also want a retry mechanism that will continue accepting requests and process them all ‘offline’ once the failed external system is restored.

And I want the business process to pick up where it left off and carry on with its activity sequence.

So, imagine you apply for a credit card online, and there are 3 key steps to process your application. First we score your risk, then we validate your application and last we fulfill your order.

You can write a short running BPEL process to orchestrate those three services and give the web front end a synchronous response.

Now, suppose risk scoring is a third party service that’s notorious for being down for housekeeping a few hours every day.

Clearly we can’t fulfill your application without having scored your risk, but neither do we want to just tell you to come back later, much later, and that we are sorry but you just wasted your time filling in a form.

What we want is to tell you that your application has been received, it is being processed, and you can look forward to your new credit card arriving in the post real soon (or if you haven’t qualified, a communication to that effect).

So, let’s look at a simple prototype of the short running process, without any store-and-forward capabilities.

Not a lot going on here. We receive a credit card application request, we prepare requests to a number of external systems, we invoke them in sequence and we reply to the client.

The external systems are implemented as mediation modules and stubbed with Java SCA components. I log the message and create a response from these components.

For the scoring service I did a bit more work. I configured a jndi string binding that I can manipulate through the admin console and depending on its value I throw a modeled fault. This is so I can emulate the system being unavailable.

I assume you can complete these tasks without assistance.

You can then run some basic tests and confirm that all your modules are hanging together and everything behaves as it should.

So now we can start thinking about how to approach the case when the scoring service is offline.

The first thing you’re going to need is a new module with a long running process implementing a new ScoringService interface with a one way operation taking the same input parameter type as the actual scoring service mediation.

You can think about this asynchronous LRP as a ‘wrapper’ to the synchronous scoring service.

So, this LRP is called asynchronously (there is no reply) and is instantiated once and only once. You will have to work on your correlation properties/set, so requests are routed to the running instance.

On initial request, an instance is created, the request is placed in a list and an attempt is made to call the scoring service. This call is likely to fail (we wouldn’t be here at all otherwise), so the fault handler executes, which puts the process in receive mode. Every additional request is appended to the list and every time we end up putting the process in receive mode again.

If we haven’t received a request for some time, we time out the fault handler so we can probe the Scoring Service.

At some point the Scoring Service will be up and running again and for each pending request we will invoke it, get its response, remove the current pending request from the list and invoke the credit card application short running process, letting it know the scoring activity is now complete (we pass in the score result).

Note that ‘resuming’ the credit card application process does not technically resume anything. It simply creates a new instance but with the scoring data already present.
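In Java-style pseudocode the wrapper’s behaviour amounts to the sketch below. It is a sketch only: the real artifact is a BPEL long running process with a correlation set and a receive/timeout inside the fault handler, and every name here is invented:

import java.util.LinkedList;
import java.util.Queue;

public class ScoringWrapperSketch {

    private final Queue<Object> pending = new LinkedList<Object>();

    // Called for the initial and every subsequent request; in the BPEL
    // version correlation routes them all to the single running instance.
    public void onScoreRequest(Object request) {
        pending.add(request);
        drain();
    }

    private void drain() {
        while (!pending.isEmpty()) {
            try {
                Object score = callScoringService(pending.peek());
                pending.remove();
                // 'resume' = start a new short running process instance
                // with the scoring data already present
                startApplicationProcess(score);
            } catch (RuntimeException serviceStillDown) {
                // BPEL equivalent: go back into receive mode and wake on
                // the next request or on a probe timeout
                return;
            }
        }
    }

    private Object callScoringService(Object request) { return new Object(); }

    private void startApplicationProcess(Object score) { }
}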

Next you have to modify the short running process so it can detect that the Scoring Service is down, call the async ‘wrapper’ and reply a partial response to the client.

When this short running process is called from the UI and the Scoring Service is up, it behaves exactly as before, and the UI receives a complete response.

When the Scoring Service is down, the fault handler runs, the long running process is called, and the UI receives a partial response.

When this short running process is called by the long running one, the scoring invoke is not attempted, the process proceeds with validating and fulfilling the credit card application, and the reply goes back to the long running process, which you can use for generating customer communications.

This approach keeps the business logic in a single place (the short running process), and effectively deals with offline treatment of requests when a given system is down.

It also addresses resource management, by creating a single long running process, rather than one for each pending request.

And because a long running process state is persisted, all those pending requests survive a server restart, so nothing is ever lost.

ttfn – gabz

WID Heap Status

This is a back to basics tip, but I’m surprised by the number of questions about this from experienced developers.
I like to keep an eye on how much Java heap WID is using, particularly while building big workspaces.

So I check ‘Show heap status’ in General Preferences:

Then you’ll see it at the bottom right corner:

WID’s default max heap is 512M. I normally change it to 1024M by editing eclipse.ini and changing -Xmx512m to -Xmx1024m (note that -Xmx sets the maximum heap; -Xms only sets the initial size).
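The tail of eclipse.ini then looks something like this; everything after -vmargs is passed to the JVM, and the surrounding entries vary by WID version:

-vmargs
-Xmx1024m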

Short but sweet

Building a reusable subflow

A lot has been written already about applying common logic to messages using a variety of approaches, but they typically do logging using a message logger or trace primitive. They shy away from using a Service Invoke within a subflow, and for good reason: the SI primitive’s output terminal returns the response from its invocation, and it’s tricky to propagate the original request to the parent flow, particularly when you don’t know what this original request looks like!

This post focuses on how to model your data and build a reusable subflow to perform message enrichment without knowing the specific message type you’re operating on.

And the reason we are using a subflow, as opposed to a service gateway pattern, is that a subflow living in a shared library can be selectively reused as if it were a mediation primitive. So we can create a new module, or a new component in an existing assembly, or add it as a primitive to an existing mediation flow. It is a very flexible approach.

The linchpin that enables us to build generic subflows lies in the design and modeling of interfaces and data objects. We would like the subflow to be message type agnostic; we want it to operate within the context of various interactions. In other words, we make no assumptions about the shape and size of the parent flow we might find ourselves within.

But we must have some certainties. Like, if we use the subflow (as in this example) for message enrichment, at some point we will need to navigate to elements in the message body.

In order to guarantee those certainties we will enforce a set of best practices at design time.

  1. Every interface operation has a single input, a single output, and a common set of faults.

  2. Every message type inherits from a base type. This base type contains attributes common to all messages. These are our certainties.

The picture above shows the input parameter type for the retrieveCustomerDetails operation. We create the RetrieveCustomerDetailsRequest business object by inheriting from BaseRequest. BaseRequest has an attribute that we choose to name ‘header’, of type RequestHeader, which itself inherits from CommonHeader.

Any attribute we model into the CommonHeader BusinessObject is guaranteed to be present in the message. We just need to figure out how to navigate to them (later).

The same is true for faults: create faults that inherit from a BaseFault, give BaseFault an attribute named consistently (this example uses ‘header’), and make its type a superset of CommonHeader. This will ensure your certainties are also present in faults.
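In XSD terms this is plain type extension. A sketch for the request side, assuming a tns prefix bound to the library namespace and with customerId as an illustrative operation-specific element:

<!-- Every request type extends BaseRequest... -->
<xsd:complexType name="RetrieveCustomerDetailsRequest">
  <xsd:complexContent>
    <xsd:extension base="tns:BaseRequest">
      <xsd:sequence>
        <xsd:element name="customerId" type="xsd:string"/>
      </xsd:sequence>
    </xsd:extension>
  </xsd:complexContent>
</xsd:complexType>

<!-- ...and BaseRequest carries the guaranteed 'header' attribute -->
<xsd:complexType name="BaseRequest">
  <xsd:sequence>
    <xsd:element name="header" type="tns:RequestHeader"/>
  </xsd:sequence>
</xsd:complexType>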

We can start building the subflow now. Create the subflow in a shared library so you can reuse it from any project depending on it.

The purpose of the subflow in this particular example will be to populate a regionId attribute, which is present in all messages (because it is a CommonHeader attribute).

We can call this subflow EnrichRegionId. Below is a screen capture of the complete subflow in the mediation flow editor.

At the heart of the subflow there is a Service Invoke primitive, the purpose of which is to retrieve a region code from an external system.

The subflow has an input node and three output nodes: the input represents the request message as passed in by the parent flow, the first output propagates the enriched request message to the parent flow, and the remaining two output nodes are used for propagating faults.

For those three nodes, their terminal message types are set to anyType.

What this means is that, at development time, the tooling is told that any WSDL type can be fired in or out of these terminals (at runtime this is always true, terminal type information exists for the primary benefit of the tooling).

This is one of the V7 improvements I like the most. I can force V6 into something similar but it is nowhere near as elegant.

Now, let’s take a look at what’s going on in the subflow.

I have no idea which message type I will receive, the service invoke primitive halfway down the flow loses my request (its output terminals all relate to its own operation), and I have to find a way to propagate it to the parent flow.

The solution is to keep the /body part of the message in context throughout the flow, using a message element setter.

I used the correlation context out of habit; I don’t have a response flow to worry about here. The context type is xsdAny, so any body type can be safely stored there. One thing to watch out for is developers modeling context data at the parent flow level. This can break things and has to be worked around (by asking developers to include an xsdAny attribute in their context objects that the subflow can use to store the body), but let’s keep it simple.

So that’s my request body stored away. Now we have to operate on attributes of a /body type which is mostly unknown.

All we know are the certainties we designed into our data model.

We also know that, for as long as we use the default binding style on our interfaces (document literal wrapped) the SMO path into the body payload will be predictable.

Here’s what the SMO representation of an updateCustomerDetails request looks like:

We can get hold of /body, and the ‘header’ attribute is our designed-in certainty, but we know nothing about what’s between them. All we know is that we are going to find an attribute called ‘header’ two levels down from /body.

The good news is that we can use custom XPath to navigate to it:
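With document-literal wrapped messages it amounts to a wildcard hop over the unknown wrapper and parameter elements; any guaranteed header field (regionId here, as an example) can be addressed with an expression of roughly this shape, prefixed with the correlation context location when reading from the stored copy:

/body/*/*/header/regionId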

In my XML map, the one that provides the Service Invoke with its input parameters, I use a custom XPath transform to retrieve a field from the body in the correlation context:

You can see how I hop over the uncertainties to get to what I need.

After the service invoke, I use a message element setter to enrich the body and propagate it to the parent flow:

1 enriches the message body currently in context and 2 copies it to the body propagated to the parent flow.

Here’s a closer look at 1:

The external system used to retrieve the region id is mediated, so the service invoke uses an internal interface which can return the same faults used throughout the solution. This means I can simply wire those faults to the corresponding output nodes. I’m treating service invoke timeouts and failures as serviceFaults so I can transform and wire those too.

At parent flow level, adding the subflow is very simple. Just drop it in the request flow editor, wire it up and accept the automatic casting proposed by WID (those Set Message Type primitives are automatically generated):

The response flow is inconsequential. Simple wires for message pass-through: