As many know I am a proponent of the platform IBM i. Many see it as an old legacy platform which cannot play in the modern world of computing. Nothing could be farther from the truth. It is one of the most advanced platforms out there.

And what about the programming language which has been with the platform since its beginnings? RPG has evolved into a modern language and one of the best for business logic. Because of its ILE capabilities it has access to the whole ILE environment, including ILE C and C++ modules and libraries. Thus it can play in any game it needs to play … including the web and microservice game of today.

But first let us have a look at the following case.

The Case

We have an online auction house for our famous online game. The game started decades ago, when we only had to cover in-game trades. We have a program for managing these trades: ACT001. It covers all the necessary attributes of an in-game auction.

And then … things started to evolve. Other media channels (like WhatsApp, Discord or Telegram) sprang up, and we want to take advantage of these new possibilities. We would like to inform players about new auctions of interest, so we need to provide these media channels with data. There is also a player by the name of “extranet” which needs some of the data.

So we have more information to manage which might evolve into multiple programs for managing data and we have different channels where data enters our system. Additionally we have multiple systems and interfaces which need to be provisioned with our data.

After some years most RPG programs look like this:

begsr savact;

  write actdtaf01 data;

  call act002; // WhatsApp
  call act003; // Telegram
  call act004; // Extranet

endsr;

And the situation is more like this:

[Figure: unstructured program calls]

Every auction data management program needs to call every distribution program which leads to a maintenance nightmare. One solution would be to decouple things.

Decoupling: Pros and Cons

Pros for decoupling components (programs):

  • defined interface between components
  • easier to extend
  • easier to maintain
  • easier to scale
  • easier to handle errors (and replay events)
  • … and some more

Of course, as with everything else in life, there are not only pros but also cons:

  • you can no longer tell from the management program where the data flows to
  • some events must be processed in the correct order
  • may need more infrastructure (software)
  • … and probably some more

But in my opinion the pros outweigh the cons big time.


Some may say:

That is pretty easy. Put the calling of the distribution program in a trigger. It will be executed every time anything changes.

Yes. That may be a solution. But if a distribution program is called and the call results in an error, what happens to the update of the data? Will it be rolled back? But there is nothing wrong with the data itself. Perhaps the receiving system is simply not online.

Why should we not update our own system and move on?

We could do that, but then what about the update of the other system that needs our data? Now we need a way to handle the error case(s) … for every distribution program.

So a trigger may be one viable solution … but there are also other solutions.

Another solution

If we distribute our data to other systems it may not be needed there immediately and may not always need to be in sync with our own system. So most of the time we may be able to do the distribution asynchronously.

Synchronously: We tell someone that our data has changed and wait while they update their system with it.

Asynchronously with one receiver: We tell someone that our data has changed and that they should update their system. We don’t wait for the update but just move on and do our own work.

Asynchronously with multiple receivers: We tell someone that our data has changed and that they should inform everybody who is interested in the data change. And we just move on.
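To make the three variants concrete, here is a minimal Java sketch (all names are hypothetical; in our case the real publisher would be an RPG program or a trigger). The synchronous call waits for the receiver, while the asynchronous variants just hand the event off and move on:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DistributionModes {

    // Synchronously: call the receiver and wait until it has done its work.
    public static void notifySync(Runnable receiver) {
        receiver.run(); // blocks until the receiver is finished
    }

    // Asynchronously, one receiver: hand the event off and move on.
    public static void notifyAsync(ExecutorService worker, Runnable receiver) {
        worker.submit(receiver); // fire and forget
    }

    // Asynchronously, multiple receivers: hand the event to everyone interested.
    public static void notifyAllAsync(ExecutorService worker, List<Runnable> receivers) {
        receivers.forEach(worker::submit);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService worker = Executors.newFixedThreadPool(2);
        List<String> log = new CopyOnWriteArrayList<>();

        notifySync(() -> log.add("extranet updated"));           // we waited for this one
        notifyAsync(worker, () -> log.add("whatsapp notified")); // we did not wait
        notifyAllAsync(worker, List.of(
                () -> log.add("telegram notified"),
                () -> log.add("discord notified")));

        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(log.size() + " receivers handled the change"); // prints "4 receivers handled the change"
    }
}
```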

The one-to-one situation of data publisher and data receiver can already be done with little effort natively on IBM i.

The magic keywords are: data queues!

They are easy to manage and use and are very performant. But in our case they are of little help as we have many different data receivers.
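For readers who want to picture how a data queue behaves, an in-process analogy is a blocking queue shared between one sender and one receiver job. This is only a sketch of the semantics, not the real API; on IBM i you would use the QSNDDTAQ/QRCVDTAQ APIs against a *DTAQ object (or the DataQueue class from the IBM Toolbox for Java), and the entry content here is made up:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DataQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // One publisher, one receiver: the 1-to-1 case data queues cover natively.
        BlockingQueue<String> dtaq = new ArrayBlockingQueue<>(100);

        Thread receiverJob = new Thread(() -> {
            try {
                // Like QRCVDTAQ with a wait time: block until an entry arrives.
                String entry = dtaq.take();
                System.out.println("received: " + entry);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiverJob.start();

        // Like QSNDDTAQ: put the entry on the queue and move on immediately.
        dtaq.put("AUCTION CHANGED");
        receiverJob.join();
    }
}
```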

For the 1-to-n situation we can use a concept called pub/sub. One publisher publishes an event and multiple subscribers may subscribe to this type of event. This concept is quite generic and may be used pretty much everywhere. Frontend, backend … it doesn’t matter.
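The pub/sub idea can be sketched in a few lines of Java (hypothetical names, in-process only; a real broker such as Apache Artemis does the same across systems and adds persistence and delivery guarantees). The key point is that the publisher does not know who the subscribers are:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class PubSubSketch {
    // topic name -> subscribers interested in that type of event
    private final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> subscriber) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(subscriber);
    }

    // The publisher does not know (or care) who receives the event.
    public void publish(String topic, String event) {
        topics.getOrDefault(topic, List.of()).forEach(s -> s.accept(event));
    }

    public static void main(String[] args) {
        PubSubSketch broker = new PubSubSketch();
        broker.subscribe("auction.changed", e -> System.out.println("WhatsApp: " + e));
        broker.subscribe("auction.changed", e -> System.out.println("Telegram: " + e));
        broker.subscribe("auction.changed", e -> System.out.println("Extranet: " + e));

        // One publish, three deliveries: the 1-to-n case.
        broker.publish("auction.changed", "auction updated");
    }
}
```

Note that adding a new receiver is now a new subscription, not a change to every management program.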

Does this solution rule out the use of a database trigger? No. Actually, a trigger might be a good way to fire this event and act as the data/event publisher.

There is currently no ILE solution for pub/sub that I know of. But that does not matter much, as we want to do things asynchronously anyway (pretty much fire and forget), so it does not matter whether it is ILE or not. The only thing which should be ILE compatible is the data publisher. And we have that covered with an open source service program available on

There are many systems available which cover the pub/sub concept. A very famous, stable and widely used system is part of the Apache software ecosystem: Apache ActiveMQ Artemis (usually just called Artemis).

A good thing about Apache Artemis is that it is written in Java and thus can run on IBM i without any problems.

In the next episode we take a look at Apache Artemis and how to run it on our beloved IBM i.

Happy integrating!