Friday, June 20, 2014

On Joget Workflow Scalability

Since v3.0, we have done quite a number of code profiling passes and optimized the identified bottlenecks, so there is a significant performance improvement in v4.

In terms of customer implementations, we have a customer in Asia that implemented a cluster of up to 5 servers on AWS (Amazon Web Services) to handle up to 1,500 concurrent users. Another customer in Europe has several thousand total users running on a single on-premise server.

As Joget Workflow is a platform and not an end-user app in itself, its scalability and performance depend on a number of factors, e.g. the complexity of the apps and use cases, usage patterns, and the tuning of the OS/DB/JVM/app server stack.

The best approach would be to perform profiling/sampling on the specific apps/use cases that are considered slow; that would give a good indication of any possible bottlenecks or resource contention issues.
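For example, one lightweight sampling technique on a Java stack is to take a few thread dumps under load and tally thread states; many BLOCKED threads across repeated dumps usually point at lock contention rather than raw CPU load. The sketch below runs on a small inlined sample dump (fabricated purely for illustration); on a real server you would capture a dump from the running Joget JVM with the JDK's `jstack <pid>` instead:

```shell
# Create a tiny sample thread dump (fabricated data for illustration).
# On a live server: jstack <joget-jvm-pid> > dump.txt
cat > dump.txt <<'EOF'
"http-8080-1" daemon prio=10
   java.lang.Thread.State: RUNNABLE
"http-8080-2" daemon prio=10
   java.lang.Thread.State: BLOCKED (on object monitor)
"http-8080-3" daemon prio=10
   java.lang.Thread.State: BLOCKED (on object monitor)
"http-8080-4" daemon prio=10
   java.lang.Thread.State: WAITING (parking)
EOF

# Tally threads per state -- a quick signal of contention vs. CPU work.
runnable=$(grep -c 'State: RUNNABLE' dump.txt)
blocked=$(grep -c 'State: BLOCKED' dump.txt)
echo "RUNNABLE=$runnable BLOCKED=$blocked"   # prints: RUNNABLE=1 BLOCKED=2
```

Repeating this a few times during a slow period and comparing which locks the BLOCKED threads are waiting on is often enough to narrow down a contention problem before reaching for a full profiler.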

There is a very general guideline available, but it is difficult to recommend specific environments. From what we know, each of our customers has a very different infrastructure to support different requirements, because there are potentially many factors involved, for example:
  1. Total number of users
  2. Maximum expected concurrent users
  3. Number of apps running on the platform
  4. Complexity of each of the apps
  5. Amount of data generated in each app 
  6. Network infrastructure
  7. etc.
For the standard Enterprise Edition, you can perform vertical scaling by increasing server resources as the load grows. If there are bottlenecks, it is important that the deployment is tuned and optimized, e.g. Java VM tuning, app server tuning, database optimization, etc. There is an article on this in the Knowledge Base.
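As a purely hypothetical illustration of JVM tuning (the file location assumes a Tomcat bundle, and the sizes are placeholders, not Joget recommendations), heap and GC options are typically set in `bin/setenv.sh`:

```shell
# Hypothetical bin/setenv.sh for a Tomcat-based Joget deployment.
# The heap sizes and GC choice below are illustrative starting points
# only -- size them from your own profiling data, not from this sketch.
export CATALINA_OPTS="${CATALINA_OPTS:-} -Xms2g -Xmx2g"           # fixed heap avoids resize pauses
export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseConcMarkSweepGC"     # low-pause collector (JDK 6/7 era)
export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError"
```

Setting `-Xms` equal to `-Xmx` is a common choice for server workloads so the JVM does not pause to grow the heap under load.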

If a single node is not enough, then horizontal scaling can be done by clustering and load-balancing multiple copies of Joget on separate application servers. There is a clustering guide document as well; note that clustering is only available in the Large Enterprise Edition.
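To give a feel for the load-balancing side, here is a minimal config sketch assuming an nginx front end and two hypothetical Joget nodes (the host names and ports are placeholders; consult the clustering guide for the actual supported setup):

```nginx
# Hypothetical nginx front end balancing two Joget nodes.
upstream joget_cluster {
    ip_hash;                          # simple session affinity by client IP
    server joget1.example.com:8080;
    server joget2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://joget_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Some form of session affinity (here the `ip_hash` directive) or shared session state is generally needed so that a logged-in user keeps hitting the same node.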

To summarize, the structure would very much depend on your environment and usage. Perhaps some things to consider:
  1. How many total and concurrent users are there? Will this grow in the future?
  2. In your current environment, is the current infrastructure sufficient for the load? Would it be possible to increase the server resources?
  3. If the needs outgrow one server node, do you want to consider implementing the Large Enterprise Edition for clustering and/or load balancing? 
  4. Another possible approach is to partition the apps. Are there specific apps that incur the highest load? If so, you might want to separate those apps onto different servers.