Coach repeating table pagination toolkit

My first contribution to the IBM BPM Wiki.

A toolkit for paginating repeating tables in coaches. Not the only one available, but the only one that does it just the way I like it.

http://bpmwiki.blueworkslive.com/display/samples/Coach+Table+Pagination+Toolkit

Using Teamworks code to write JavaScript

Here’s a little trick I learned recently that might be of use to someone else out there.

The basic requirement is to dynamically change the options in a combo box, based on data held in Teamworks variables. More specifically, I want the options in ComboBox0 to change depending on what I select in ComboBox1. And I don’t want to wait for an AJAX round trip every time I change the selection.

So, suppose we have two local variables, both of which are lists of NameValuePair, and suppose they are initialised with data.

And here’s the code that initialises them (lazy, I know, but it’ll do):

tw.local.itemList1 = new tw.object.listOf.NameValuePair();
tw.local.itemList1.insertIntoList(tw.local.itemList1.listLength, new tw.object.NameValuePair());
tw.local.itemList1.insertIntoList(tw.local.itemList1.listLength, new tw.object.NameValuePair());
tw.local.itemList1[0].name = "name1";
tw.local.itemList1[0].value = "value1";
tw.local.itemList1[1].name = "name2";
tw.local.itemList1[1].value = "value2";
tw.local.itemList2 = new tw.object.listOf.NameValuePair();
tw.local.itemList2.insertIntoList(tw.local.itemList2.listLength, new tw.object.NameValuePair());
tw.local.itemList2.insertIntoList(tw.local.itemList2.listLength, new tw.object.NameValuePair());
tw.local.itemList2[0].name = "name3";
tw.local.itemList2[0].value = "value3";
tw.local.itemList2[1].name = "name4";
tw.local.itemList2[1].value = "value4";

Then I create a coach with two Combo Box controls.

One control (ComboBox1) with manual data; its options include the value "one", which the script below keys on.

The other control (ComboBox0) bound to itemList1. I’m only binding it so it’s initially populated.

And finally, the interesting part: a block of custom HTML with the code that does the business:

<script type="text/javascript" src=<#= tw.system.model.findManagedFileByPath("jquery-1.4.2.min.js", TWManagedFile.Types.Web).url #> > </script>
<script>
<#
  // Server-side Teamworks JavaScript. Build the option lists as JSON string
  // fragments: each optionItem holds two fragments, and when the outer list
  // is written into the page below, the default Array.toString() joins
  // everything with commas, yielding a valid list of JSON object literals.
  var optionList1 = [];
  var optionList2 = [];
  for (var i = 0; i < tw.local.itemList1.listLength; i++) {
    var optionItem = [];
    optionItem.push("{\"text\":\"" + tw.local.itemList1[i].name + "\"", "\"value\":\"" + tw.local.itemList1[i].value + "\"}");
    optionList1.push(optionItem);
  }
  for (var i = 0; i < tw.local.itemList2.listLength; i++) {
    var optionItem = [];
    optionItem.push("{\"text\":\"" + tw.local.itemList2[i].name + "\"", "\"value\":\"" + tw.local.itemList2[i].value + "\"}");
    optionList2.push(optionItem);
  }
  // Everything after the closing hash runs in the browser: it rebuilds
  // ComboBox0's options whenever the selection in ComboBox1 changes.
#>
$(function(){
  $('#ComboBox1').bind('change', function(e){
    var items = [];
    if ($('#ComboBox1 option:selected').val() == "one") {
      items = [<#= optionList1 #>];
    } else {
      items = [<#= optionList2 #>];
    }
    $('#ComboBox0').empty();
    $.each(items, function(index, opt){
      $('#ComboBox0').append($('<option></option>').val(opt.value).html(opt.text));
    });
  });
});
</script>

So, what exactly are we doing here?

First I’m importing jQuery, because I’m lazy and I find it helpful. But don’t get caught up in the jQuery stuff; it’s not the point of this post.

The code in question is divided into two parts:

First, the JavaScript that runs on the server: the chunk enclosed by pointy hashes (<# … #>).

And second, the JavaScript that runs in the browser (the rest).

This is the conceptual leap I had to make: in Teamworks, not all JavaScript is equal. There’s the TW stuff and the plain old JS stuff. One executes on the server, before the coach is served; the other is JS as we know it, for the browser.

With this in mind we can do all sorts of funky stuff.
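
For instance, here’s a minimal sketch (reusing the itemList1 variable from above) showing the two layers side by side. The server-side expression runs first, and its output is spliced into the page before the browser ever sees it:

<script>
// The browser receives this line as: var firstName = "name1";
var firstName = "<#= tw.local.itemList1[0].name #>";
alert("First item is " + firstName);
</script>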

And the best way to grok what’s happening here is to look at the source as served to the browser. So I run the coach, ask IE to show me the page source, and here’s the relevant bit of code:

<DIV class="customHTML" id="CustomHTML0"><script type="text/javascript" src=http://host.com/teamworks/webasset/2064.70c17442-cee9-46f5-857a-b58c9baf0e6a/W/jquery-1.4.2.min.js > </script>
<script>

$(function(){
  $('#ComboBox1').bind('change', function(e){
    var items = [];
    if ($('#ComboBox1 option:selected').val() == "one") {
      items = [{"text":"name1","value":"value1"},{"text":"name2","value":"value2"}];
    } else {
      items = [{"text":"name3","value":"value3"},{"text":"name4","value":"value4"}];
    }
    $('#ComboBox0').empty();
    $.each(items, function(index, opt){
      $('#ComboBox0').append($('<option></option>').val(opt.value).html(opt.text));
    });
  });
});
</script></DIV>

The key thing to notice is how the items assignments work. The served source shows how, within the conditional statement, the ‘items’ array is populated with JSON object literals, but those literals were actually created on the server, from tw variables!

So basically, what we are doing within the pointy hashes is generating the JavaScript that ultimately runs in the browser.

And once you crack that concept, well, that’s where the fun starts, really :-)

Custom Coach Overrides and Single Sign-On

At my client, we have defined and built a business process using IBM BPM 7.5.1, which includes coaches, and we display these coaches within our existing IBM Portal-based solution.

There were a few challenges along the way in integrating BPM and Portal, including the configuration of Single Sign-On (SSO). None of them caused real pain, apart from one little, almost trivial issue that took us quite some time and effort to sort out.

We want the BPM coaches to look and feel just like all other Portal content, so we lifted the relevant CSS classes from the Portal solution and used them to override the default coach CSS. Of course that was fun to do; there was a lot of Firebug involved, but eventually things were looking rather good, to the point where our coaches are now indistinguishable from any other portal content in the system.

But once we successfully completed the SSO configuration, we lost the custom look and feel. This was puzzling: we could see the customised coaches when testing the BPD from Process Designer, and we could see them running on the playback server, but they reverted to the default coach appearance when going through Portal.

So eventually I had to roll up my sleeves, pump some Bassdrive into my skull and really dig in.

I fired up Fiddler, started monitoring HTTP traffic from my browser, and discovered that my requests for the custom_coach.css file resulted in a 302 redirect to the Teamworks login page.

Interesting: SSO worked fine for everything but the CSS file. Teamworks did not see that particular request as coming from a trusted source.

And then I saw it. All other requests were to http://bpmhost.company.com:webport/teamworks/blah, but the CSS one was to http://bpmhost:appserverport/teamworks/csspath.

When Teamworks serves the coach, it sends the browser links to a bunch of things, including the CSS file in question, and this link was being composed using the machine hostname, without the fully qualified domain, and with its app server port rather than the web server port.

There are two problems with this. One is the port: it was forcing the browser to talk directly to a particular app server in the cluster when we really want it to go through IHS, though this wasn’t breaking things. The other is the unqualified hostname. This did break stuff.

The way SSO seems to work, when the browser receives the LTPA cookie, it is allowed to propagate it to servers in the same domain. So, after logging in to portalhost.company.com, the browser was successfully requesting coach content from bpmhost.company.com but failing to retrieve the CSS file from the bare bpmhost.

It turns out that there is a configuration file called 99Local.xml in the system folder of the BPM installation that controls such things. It defines URL prefixes for quite a number of moving parts, including Teamworks web resources. So a global replace of all hostnames to include the domain, and of the port to point at the web server (so the prefixes read http://bpmhost.company.com:webport/teamworks rather than http://bpmhost:appserverport/teamworks), followed by a server restart, did the trick.


BPM 7.5.1 – Waging war against cyclic dependencies

I’m wearing the hat of toolkit producer these days and I’m responsible for delivering a number of outbound JDBC Advanced Integration Services.

Another team member is working on another toolkit, which also implements JDBC AISs.

Both toolkits need the JDBC connector module to compile, and both toolkits will bundle the JDBC RAR (embedded adapter).

This arrangement creates two interesting problems: one at Integration Designer (ID) workspace compile time, the other at deployment time.

When I create the outbound JDBC import in my toolkit implementation module, the JDBC connector project with the adapter RAR is created, and in order for the toolkit to compile, this project has to be associated with the toolkit or with one of its dependencies.

So I associated the connector with my toolkit. Splendid.

Now, when I bring the other team member’s toolkit, which also has the connector associated with it, into my workspace, things get hairy. Process Center tells me the connector already exists in my workspace, so I said “fine, makes sense, don’t bother loading it into my workspace a second time then” and deselected it from the projects to bring in.

But then the connector ends up associated with only one of the toolkits, and my ID workspace can’t build. Joy.

To make matters worse, the runtime starts moaning about duplicate contributions, along the lines of “CWYBC_JDBC is already installed”.

Now, we could install the RAR standalone on the server, and that would resolve the deployment issue, but we would still need a way to build the toolkits.

So what we did was factor out the connector project into a third toolkit and make both my toolkit and the other team member’s depend on it.

This approach worked fine: the connector module is associated with a single toolkit, which solves the build problem, and, somewhat counter-intuitively, even though both our toolkit implementation modules’ dependencies ask for the connector project to be embedded, deployment works OK as well.

Business Objects and Lombardi Coaches

We’re working with IBM BPM 7.5 these days, or rather making it work, and a little idiosyncrasy of the product had us scratching our heads for a few hours.

You may or may not know that the Lombardi part of the product uses variables to represent business process data. You would usually define variables local to the business process definition (BPD) and use data mappings to select what data will be used as inputs and outputs to the various activities within the process.

You can define your own data types (business objects) for these variables. So, by way of example, we defined an Individual, with string attributes for first and last names. Nothing fancy.

Then we added a private variable to the BPD named individual, of type Individual. Note that we did not specify a default value for the variable.

Suppose you now create a very simple BPD that uses a Coach to populate the individual’s name and a simple JavaScript service to write the names to the log.

The log activity has a simple system script service to write the individual’s attributes to the system log:
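
A sketch of what that script might look like (the attribute names firstName and lastName are illustrative; the post only says the type has string attributes for first and last names):

// server script in the Log activity; log.info() writes to the system log
log.info("Individual: " + tw.local.individual.firstName + " " + tw.local.individual.lastName);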

When you run this, the first activity lets you populate the individual variable’s values.

What happens next is not that predictable. One would assume that the UI form took the user-supplied values for fname and lname and used them to initialise and populate the individual.

Not quite.

Once the Coach task completed, the process moved on to the log activity and failed. Crashed and burned.

The BPD instance failed, and the error is in the Log activity.

The details of the error are self-explanatory: the individual variable is undefined.

It turns out that complex type variables need to be explicitly initialised, particularly when Coaches are involved!

http://publib.boulder.ibm.com/infocenter/wle/v7r1/topic/wle/modeling/topic/init_complex_vars.html

So, the IBM-documented approach is to use a script activity to initialise them:
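
Something along these lines, placed in a script activity before the Coach (a minimal sketch, using the same illustrative attribute names as above):

// initialise the complex variable before the Coach tries to use it
tw.local.individual = new tw.object.Individual();
tw.local.individual.firstName = "";
tw.local.individual.lastName = "";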

But we also found out in our tests that giving the variable a default value solves the problem too, even if the values are empty strings:
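
The default value is just another snippet of Teamworks JavaScript. It looks something like this (a sketch of the kind of expression the Designer generates when you give a complex variable a default; treat the exact form and attribute names as assumptions):

var autoObject = new tw.object.Individual();
autoObject.firstName = "";
autoObject.lastName = "";
autoObject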

So, I hope this tip comes in useful to someone else, and you can look forward to more posts on BPM 7.5.

I’m sure we’re not going to be short of material for quite some time…

ttfn

WebSphere JDBC adapter and hierarchical objects with Oracle

At my client’s we have used the JDBC adapter extensively in the past to integrate with Oracle tables. In fact, the solution in place pretty much hinges on Oracle tables for marketing campaigns and product configuration details.

So far we have always been conservative in our approach, creating integration flows one table at a time or working against Oracle views, trying not to ask too much of the adapter.

New business requirements for a future release demand a bit more from our JDBC connectivity. We have to retrieve, update and create rows across multiple related tables, which is mostly fine, but we ran into a little rough patch doing the inserts.

It is worth mentioning that none of this would have been an issue with DB2, which supports identity columns. The JDBC adapter understands these identity fields and generates the business objects accordingly. But with Oracle, things are a little different.

Oracle implements auto-generated primary keys with a combination of sequences and triggers: the sequence supplies the ID value, and the trigger fires before the insert, populating the primary key with it.

Let’s take a simple example of two related tables, AUTHOR and BOOK. AUTHOR.ID will be the primary key and BOOK.AUTHID will be the foreign key relating book rows to their author. I’m using Oracle XE 10g and created the tables using the web UI.

You can start by creating a new WID integration project and dropping an outbound JDBC adapter into the assembly. Go through the wizard as usual, but make sure you click the ‘Edit Query…’ button and check the ‘Prompt for additional configuration settings when adding business objects’ checkbox.

After you run the query you can start adding tables. Add the AUTHOR table first accepting all defaults.

Next, add the BOOK table and build the parent/child relationship between the two tables (BOOK.AUTHID referencing AUTHOR.ID).

Complete the wizard (I don’t generate business graphs, so I clear that checkbox).

Now is when the manual changes happen. Open the generated BO for the AUTHOR table; I’m using the system schema, so the BO is called SystemAuthor.

You have to manually add the UID annotation to the ‘id’ attribute and supply the sequence name (AUTHOR_SEQ in this example). This is done by right-clicking the metadata element.

You can repeat the process for the BOOK object for good measure, though for this case it isn’t strictly required.

Next we have to modify the Oracle trigger:

CREATE OR REPLACE TRIGGER  "BI_AUTHOR"
  before insert on "AUTHOR"
  for each row
WHEN (new.id is NULL)  -- the manual change: fire only when no ID was supplied
begin
    select "AUTHOR_SEQ".nextval into :NEW.ID from dual;
end;

The manual change, the WHEN (new.id is NULL) clause, is required because the JDBC adapter, once told that the ID field is generated, will query the sequence and populate the attribute with the retrieved value. It also synchronises any foreign keys on child objects. Without this change, the trigger would fire just before the insert operation, replace the value of the auto-generated field with the next value in the sequence, and leave the parent/child relationship out of sync.

You can also modify the trigger for BOOK inserts. We don’t have child objects of BOOK in this example to worry about but it makes sense to do it for completeness and to be ready for them.

You can now deploy and test. One thing to watch out for: the integration test client will populate the foreign key on the child object with a default value unless you manually unset it, so make sure you explicitly set the authid attribute of each book to unset.

The response should come back with the generated ID values populated on the author and propagated to its books.

You can also check your tables and verify that the rows have been inserted as expected.

Best regards

Gaby

SQL Exception: Unable to insert null into process_template

This is an old foe that has reared its ugly head again at my client’s.

One of the coding standards we have in place states that, in a BPEL process, invoke activities must catch their corresponding interface faults and, if necessary, throw a process-specific fault to be caught at the top level of the BPEL.

A developer was very diligent in following this standard, which is good news, but he forgot to populate the namespace of the fault he was throwing.

Version 7 does this for you, which is neat, but in v6, if you don’t specify the namespace explicitly, it will be left empty.

If you leave the fault namespace blank, the process might not behave as expected, but it will still deploy and start on your integration test environment.

However, when deployed to a real environment using Oracle as the persistence mechanism this process will not start.

Enabling detailed trace on the server (com.ibm.bpe.*=all) revealed a rather terse ORACLE SQL EXCEPTION: UNABLE TO INSERT NULL into PROCESS_TEMPLATE(null,templateId,).

Only ‘thanks’ to past experience and recollection was I able to diagnose and correct the problem.
