This document describes how process limits, work package size, and parallel execution work in censhare asset automation.


At the end of the article you can find example configurations:

  • Example 1 - how a non-persistent command works

  • Example 2 - how a persistent command works

  • Example 3 - Optimize PDF creation command (aa-render-pdf) for maximum parallel processing - single censhare application server

  • Example 4 - Optimize PDF creation command (aa-render-pdf) for maximum parallel processing - multi censhare application server (master and x nodes, all with direct DB connection)

  • Example 5 - Optimize PDF creation command (aa-render-pdf) for maximum parallel processing - multi censhare application server (master and x remotes, only master has direct DB connection)

Important hints

  • a censhare server sends events

  • if a command is listening to an event, a task is generated

  • an event is discarded if no command is listening to it

  • a command can start working based on time or on events

  • every command has its own queue for incoming events – visible in AdminClient/Status/Commands

  • there is also an event queue in the Oracle database = the event task table

  • note the difference between the command event queue and the event task

All of these options are configured in a single line of the command setup; the individual parameters are described below.

Work package size

Our tooltip: de: xml-queue-size = Maximale Anzahl der Events pro Instanz; en: xml-queue-size = Maximum number of events per instance

  • An instance is a command instance. One instance is created at server start; it can be called the root instance of a command. By default, this instance is in a wait state.

  • Events are submitted to the command in packages.

  • The XML queue size sets the maximum number of events submitted per package.

  • A package can also contain fewer events than the XML queue size (see the sketch after this list).
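
The package splitting itself is easy to picture. Below is a minimal Python sketch (not censhare code); the function and variable names are purely illustrative:

    # Illustrative sketch only: group incoming events into work packages
    # of at most xml_queue_size events each.
    def split_into_packages(events, xml_queue_size):
        """Return the events grouped into packages of at most xml_queue_size."""
        return [events[i:i + xml_queue_size]
                for i in range(0, len(events), xml_queue_size)]

    events = [f"event-{n}" for n in range(1, 41)]    # 40 events arrive at once
    packages = split_into_packages(events, 20)       # XML queue size = 20

    print(len(packages))                             # 2 packages of 20 events each
    print(len(split_into_packages(events[:7], 20)))  # 1 package - it may also hold fewer events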

Parallel execution

Our tooltip: de: fork-command = Parallele Ausführung durch mehrere Instanzen; en: fork-command = Parallel execution with multiple instances

Parallel execution ON: multiple instances of a command can be started = parallel processing.

  • A task for the command starts a new command instance, a sub-command instance

  • The root command instance remains in a wait state and waits for new tasks

  • A further task starts yet another command instance, an additional sub-command instance

  • The root command instance still remains in a wait state and continues to wait for new tasks


Parallel execution OFF: only one instance, the root instance of the command, exists = serial processing.

  • tasks for a command wake up the root instance of the command

  • the root command instance changes to the active state and processes the events. No further command instances are created.

  • Once the command has finished, the next events are processed (see the sketch after this list).
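
The difference between the two modes can be modelled in a few lines. The following Python sketch is a conceptual illustration only; process_package and root_instance are invented names, not censhare internals:

    # Conceptual model only: how work packages are handled with
    # parallel execution (fork-command) switched on or off.
    from concurrent.futures import ThreadPoolExecutor

    def process_package(package):
        # Stand-in for the real work a command instance would do.
        return f"processed {len(package)} events"

    def root_instance(packages, fork_command):
        if fork_command:
            # Parallel ON: the root instance forks one sub-instance per package
            # and immediately goes back to waiting for new tasks.
            with ThreadPoolExecutor() as pool:
                return list(pool.map(process_package, packages))
        # Parallel OFF: the root instance wakes up, processes the packages
        # itself one after another, and only then accepts the next events.
        return [process_package(p) for p in packages]

    packages = [["e"] * 20, ["e"] * 20]
    print(root_instance(packages, fork_command=True))   # two sub-instances work in parallel
    print(root_instance(packages, fork_command=False))  # the root instance works serially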

Persistent events

Our tooltip: de: persistent-events = Ereignisse werden in die Event-Task-Tabelle geschrieben, gehen somit nicht verloren und können auch von anderen Application-Servern abgearbeitet werden; en: persistent-events = Events are stored in the event task table. Therefore, they cannot be lost and can also be processed by other application servers

A censhare server sends events, and if a command is listening to an event, a task is generated.

Persistent events off:

  • tasks are written into the command queue only

  • tasks are lost if the command fails to process them, if the server stops working, or if it is shut down


Persistent events on:

  • tasks are written into the command queue and into the event task table

  • an active task is labeled and counted in the event task table

  • tasks can be posted again after a failure (see the sketch after this list)
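
A rough Python sketch of the difference, assuming a plain list stands in for the command queue and another for the event task table (all names are illustrative, not censhare code):

    # Illustrative sketch only: where a task is recorded depending on the
    # persistent-events setting, and why persistence allows reposting after a failure.
    command_queue = []   # in-memory queue of the command (lost on server shutdown)
    event_task = []      # stand-in for the event task table in the database

    def submit_task(task, persistent_events):
        command_queue.append(task)
        if persistent_events:
            # The task is additionally written to the event task table, so it
            # survives a crash and can be picked up again (or by another server).
            event_task.append({"task": task, "status": "NEW"})

    def simulate_server_crash():
        command_queue.clear()  # in-memory state is gone
        # The event task table survives: everything not yet finished can be posted again.
        return [row["task"] for row in event_task if row["status"] != "DONE"]

    submit_task("render asset 4711", persistent_events=True)
    print(simulate_server_crash())   # ['render asset 4711'] -> can be re-posted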

Process limit

Our tooltip: de: event-count-limit = Maximale Anzahl der Events, die insgesamt parallel bearbeitet werden können. Nur aktiv bei persistenten Ereignissen und paralleler Ausführung; en: event-count-limit = Maximum number of events that can be processed in parallel in total. Only active with persistent events and parallel execution

  • only used if persistent events are switched on

  • At most this number of tasks is labeled as 'EXECUTING' in the event task table

  • If parallel execution is active and further instances are forked later, a new instance only fetches half of the process limit from the event task table

  • If parallel execution is active and multiple instances are running, all command instances honor this limit jointly (see the sketch after this list)
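
The interplay of the limit and the 50% rule can be illustrated with a small Python model (illustrative names only, not censhare internals):

    # Conceptual model only: the process limit caps how many tasks may be
    # labeled 'EXECUTING' in the event task table at the same time.
    event_task = [{"id": n, "status": "NEW"} for n in range(1, 41)]   # 40 waiting tasks

    def label_executing(limit):
        """Label NEW tasks as EXECUTING until 'limit' tasks are in progress in total."""
        executing = sum(1 for t in event_task if t["status"] == "EXECUTING")
        labeled = []
        for t in event_task:
            if executing >= limit:
                break
            if t["status"] == "NEW":
                t["status"] = "EXECUTING"
                executing += 1
                labeled.append(t["id"])
        return labeled

    process_limit = 10
    print(len(label_executing(process_limit)))       # 10 tasks in progress, the rest wait
    print(len(label_executing(process_limit)))       # 0 - the joint limit is already reached

    # Some tasks finish, freeing slots in the joint limit ...
    for t in event_task[:10]:
        t["status"] = "DONE"
    # ... and a newly forked instance only requests 50% of the process limit:
    print(len(label_executing(process_limit // 2)))  # 5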

Examples

Example 1

Parameters: work package size 20, parallel execution on, persistent events off, process limit 10

TASK: 40 Events occur simultaneously

Important: The process limit has no effect because persistent events are set to OFF.

The tasks are split into two work packages (2 x 20). The root command forks two new command instances. These two instances process their tasks without any limit.

Then 20 new events are sent to the command: one work package with 20 tasks (1 x 20) is generated and a further command instance is forked from the root command. Now three command instances are using server resources to process their tasks. If further events arrive for the command, further command instances are forked from the root command and consume server resources.

In this example, the command runs without any limit. In the worst case, this can cause performance problems or even complete server downtime.
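
The numbers in this example can be reproduced with a few lines of Python (purely illustrative, not censhare code):

    import math

    work_package_size = 20
    events_first_wave, events_second_wave = 40, 20

    instances = math.ceil(events_first_wave / work_package_size)    # 2 packages -> 2 forked instances
    instances += math.ceil(events_second_wave / work_package_size)  # +1 package -> a 3rd instance
    print(instances)  # 3 instances busy; with persistent OFF the process limit of 10 is ignored,
                      # so every further package simply forks yet another instance.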

Example 2

Parameters: work package size 20, parallel execution on, persistent events on, process limit 10

TASK: 40 Events occur simultaneously

Important: The process limit is honored because persistent events are set to ON.

The tasks are split into two work packages (2 x 20). The root command forks two new command instances. Each of these instances tries to label its 20 tasks as 'EXECUTING' in the event task table, but only until a total of 10 tasks is in progress. The other tasks are ignored for now.

Then 20 new events are sent to the command: one work package with 20 tasks (1 x 20) is generated and a further command instance is forked from the root command. This new instance also tries to label all of its 20 tasks as 'EXECUTING' in the event task table, but again only until a total of 10 tasks is in progress. The other tasks are once more ignored for now. Once one of the command instances has finished its tasks, it is closed. The event task table is then checked for remaining tasks. If there are any, a new command instance is forked, which tries to label only 50% of the process limit (5 in this example) as 'EXECUTING' in the event task table.

In this example, the command runs with a limit. Only in a few cases can this cause performance problems; if it does, you can reduce the process limit.
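
The same arithmetic with the process limit enforced, again as a rough Python model with illustrative names:

    # Rough model of example 2; names are illustrative, not censhare internals.
    process_limit = 10

    in_event_task = 40 + 20                   # both waves end up in the event task table
    executing = min(process_limit, in_event_task)
    print(executing)                          # 10 tasks labeled EXECUTING, all others wait

    # One instance finishes and is closed; the event task table still holds work,
    # so a new instance is forked and labels only 50% of the process limit:
    print(process_limit // 2)                 # 5 tasks grabbed by the newly forked instance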

Example 3

Optimize PDF creation command (aa-render-pdf) for maximum parallel processing - single censhare application server

Parameters:

  • 1 censhare Render Client with 20 InDesign Server instances, logged into a single censhare application server

  • Work package size 1, parallel execution on, persistent events on, process limit 40

TASK: 40 Events occur simultaneously

Important: The process limit is honored because persistent events are set to ON. This example is valid for every render command, e.g. for aa-place-collection, as well.

The tasks are split into 40 work packages (40 x 1). The root command forks 40 new command instances. Each of these instances tries to label its task as 'EXECUTING' in the event task table, until a total of 40 tasks is in progress. As only 20 renderer instances are available, 20 commands have to wait for free instances.

Then 20 new events are sent to the command: if all instances are in use, the events are stored in the event task table.

Once the active command instances have finished their tasks, they are closed. The event task table is then checked for remaining tasks. If there are any, new command instances are forked and try to label only 50% of the process limit (20 in this example) as 'EXECUTING' in the event task table. This leads to a maximum of 20 new commands, so all available InDesign Server instances are used.
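
A rough Python model of the numbers in this example (illustrative names, not censhare code):

    # Rough model of example 3; all names are illustrative.
    process_limit = 40
    renderer_instances = 20        # InDesign Server instances behind the Render Client
    events = 40                    # 40 events, work package size 1 -> 40 forked instances

    executing = min(events, process_limit)           # 40 tasks labeled EXECUTING
    rendering = min(executing, renderer_instances)   # but only 20 can render at a time
    print(executing, rendering)                      # 40 20 -> 20 commands wait for a free instance

    # After the first instances finish, new forks only request 50% of the limit,
    # which again matches the number of available renderer instances:
    print(process_limit // 2)                        # 20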

Example 4

Optimize PDF creation command (aa-render-pdf) for maximum parallel processing - multi censhare application server (master and x nodes, all with direct DB connection)

Parameters:

  • 1 censhare Render Client with 1 InDesign Server instance, logged into the master censhare application server

  • 1 censhare Render Client with 1 InDesign Server instance, logged into the node server (this setting is only valid if a Render Client + IDS instance is available and logged in for each node server)

  • Work package size 1, parallel execution on, persistent events on, process limit 1

  • Ignore remote events set to enabled=false (so that no matter where the event is posted by the user, app1 or app2, both Render Clients including their IDS instance(s) are fully loaded at any time)

TASK: 40 Events occur simultaneously

Important: The process limit is honored because persistent events are set to ON. This example is valid for every render command, e.g. for aa-place-collection, as well.

The tasks are split into work packages of 1. The root command forks 40 new command instances. Each of these instances tries to label its task as 'EXECUTING' in the event task table, up to the configured process limit per application server. As only 2 renderer (IDS) instances are available, the remaining commands have to wait for free instances.

Example 5

Optimize PDF creation command (aa-render-pdf) for maximum parallel processing - multi censhare application server (master and x remotes, only master has direct DB connection)

Parameters:

  • 1 censhare Render Client with 1 InDesign Server instance, logged into the master censhare application server

  • 1 censhare Render Client with 1 InDesign Server instance, logged into the remote server (this setting is valid regardless of whether a Render Client (with IDS instances) is logged into the other remote servers)

  • Work package size 1, parallel execution on, persistent events on, process limit 1

  • Ignore remote events set to enabled=true (to avoid that a job is rendered e.g. on the master located in Munich while the event was posted by a user logged onto the remote server in India)

TASK: 40 Events occur simultaneously

Important: The process limit is honored because persistent events are set to ON. This example is valid for every render command, e.g. for aa-place-collection, as well.

The tasks are split into work packages of 1. The root command forks 40 new command instances. Each of these instances tries to label its task as 'EXECUTING' in the event task table, up to the configured process limit. As only 1 renderer (IDS) instance is available, the remaining commands have to wait for free instances.

How this works with Timer events

A command can also be started based on time or cron events.

  • The number of tasks is determined by the result of the search. The search and its result limit are defined in the filter section of the command.

  • If persistent event processing is switched off, the command generates the events and writes them into the command queue. The event task table is not used. If you configure a very short timer interval and the command cannot process the search result before the next timer event occurs, the command event queue grows and you should extend the time between command calls.

  • If persistent event processing is switched on, the command generates the events and writes them into the event task table. The event task table is used.

  • In both cases, events are submitted up to the limit of the search filter.

  • 'Parallel execution on' is honored: further command instances can be forked from the root command and will use server resources.

Too many search results can cause performance problems or even complete server downtime. To prevent this, switch off parallel execution and/or reduce the filter limit.

Example: a command is configured as parallel but not persistent. The limit in the asset filter is set to 50. All 50 results are written into the command queue, and 50 sub-commands start at the same time to work on this queue.
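
As a rule of thumb, the queue grows whenever one timer run produces more work than fits into one timer interval. A back-of-the-envelope Python sketch, assuming for simplicity that the results are processed one after another (all names and numbers are illustrative):

    # Back-of-the-envelope check (not a censhare API): does one timer run
    # produce more work than fits into one timer interval?
    def queue_grows(timer_interval_s, filter_limit, seconds_per_task):
        work_per_run_s = filter_limit * seconds_per_task
        return work_per_run_s > timer_interval_s

    # Assumed values: 50 results per run, 3 s per task, timer fires every 60 s.
    print(queue_grows(timer_interval_s=60, filter_limit=50, seconds_per_task=3))   # True - the queue grows
    # Extending the interval (or reducing the filter limit) resolves it:
    print(queue_grows(timer_interval_s=300, filter_limit=50, seconds_per_task=3))  # False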