Kubernetes: What the Hype is All About, and a Practical Tutorial on Deploying Joget for Low-Code Application Development

EDITOR’S NOTE November 2020: This post was originally published in June 2019. Please refer to the Knowledge Base for the latest tutorial.

Introduction to Containers, Docker and Kubernetes

Container technologies such as Docker and Kubernetes are essential in modern cloud infrastructure, but what are they and how do they work? This article presents a quick introduction to the key concepts. To help you understand the concepts in a more practical manner, the introduction is followed by a tutorial on setting up a local development copy of Kubernetes. We will then deploy a MySQL database and the Joget application platform to provide a ready environment for visual, rapid application development.


Containers are a way of packaging software so that application code, libraries and dependencies are packed together in a repeatable way. Containers share the underlying operating system, but run in isolated processes.

At this point you might be asking how a container differs from a virtual machine (VM) running on a VM platform (called a hypervisor) such as VMware or VirtualBox. Virtual machines include the entire operating system (OS) running on virtual hardware, and are good for isolating the whole environment. For example, you could run an entire Windows Server installation on top of a Mac computer running macOS. Containers, on the other hand, sit above the OS and can share libraries, so they are more lightweight and thus more suitable for deployment at a larger, more efficient scale. The diagram below illustrates the difference visually.

Difference between virtual machines and containers


Docker is an open source tool to create, deploy and run containers. In Docker, you essentially define a Dockerfile that is like a snapshot of an application that can be deployed and run wherever a Docker runtime is available, whether in the cloud, on your PC, or even within a VM. Docker also supports repositories such as Docker Hub where container images are stored to be distributed.
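As an illustrative sketch (the base image and application file here are hypothetical examples, not part of this article), a Dockerfile for a simple Java web application might look like this:

```dockerfile
# Start from an existing base image that provides Tomcat and a Java runtime
FROM tomcat:9-jre8

# Copy the application archive into Tomcat's deployment directory
COPY myapp.war /usr/local/tomcat/webapps/

# Document the port the application listens on
EXPOSE 8080

# Command to run when a container is started from this image
CMD ["catalina.sh", "run"]
```

Running `docker build` against a file like this produces an image that can then be started with `docker run` wherever a Docker runtime is available.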

While Docker is not the only container technology available (alternatives include CoreOS rkt, Mesos and LXC), it is dominant and the de facto standard in the industry right now.


If Kubernetes sounds Greek to you, it’s because it literally is. Kubernetes is the Greek word for “captain” or “helmsman of a ship”. Kubernetes, shortened to K8s (the eight letters between “K” and “s” are replaced by the number 8), is an open source container orchestration platform. What does orchestration mean in this case? While containers make it easier to package software, they do not help in many operational areas, for example:

  • How do you deploy containers across different machines? What happens when a machine fails?
  • How do you manage load? How can containers be automatically started or stopped according to the load on the system?
  • How do you handle persistent storage? Where do containers store and share files?
  • How do you deal with failures? What happens when a container crashes?

An orchestration platform helps to manage containers in these areas, and more.

Originally created by Google based on their need to support massive scale, Kubernetes is now under the purview of Cloud Native Computing Foundation (CNCF), a vendor-neutral foundation managing popular open source projects.

There are alternatives to Kubernetes (such as Docker Swarm, Mesos, Nomad, etc) but Kubernetes has seemingly won the container orchestration war having been adopted by almost all the big vendors including Google, Amazon, Microsoft, IBM, Oracle, Red Hat and many more.

Get Started with Kubernetes

So far you have learned that Docker and Kubernetes are complementary technologies. You package your applications into Docker containers, and these containers are managed by Kubernetes.

Using Docker is pretty straightforward. You basically need to install the Docker environment, after which you will be able to launch container images using a “docker run” command. A simple tutorial for running a Joget Workflow container image is available at https://dev.joget.org/community/display/KBv6/Joget+Workflow+on+Docker.

Understanding and installing Kubernetes is a more complicated proposition. There are several basic and essential concepts that need to be understood:
  1. A Kubernetes cluster consists of one or more nodes. Nodes are machines (VMs, physical servers, etc) that run the applications.
  2. A Pod is the smallest Kubernetes object that contains one or more containers, storage resources, network IP and other configuration.
  3. A Service defines a set of Pods and how they are accessed.
  4. A Volume is a shared storage for containers, and many different types are supported.
  5. These Kubernetes objects are defined in YAML format in .yaml files.
  6. A command line interface tool, kubectl, is used to manage these objects via the Kubernetes API.
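To make these objects concrete, here is a minimal illustrative sketch of a Deployment and a Service in a .yaml file (the names and image here are placeholders, not part of this tutorial):

```yaml
# Deployment: asks Kubernetes to keep 2 replicas of a Pod running,
# each containing a single container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
---
# Service: exposes the set of Pods matching the app=hello label
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 8080
```

Applying such a file with `kubectl apply -f hello.yaml` would instruct Kubernetes to keep two replicas of the Pod running and expose them through the Service.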

Simplified view of Kubernetes objects

There are many more concepts in Kubernetes, but the basic ones above should suffice to get started with Kubernetes.

There are many Kubernetes solutions available for different requirements from different providers, ranging from community tools for local testing, to production environments from cloud providers and enterprise vendors.

For the purpose of this tutorial we’ll use Minikube, a tool that runs a single-node Kubernetes cluster in a virtual machine for local development and testing. We’ll be using a Mac running macOS, but you can adapt the instructions for your OS.

Install VirtualBox

The first step is to install a VM platform. We’ll use the open source VirtualBox as the VM platform. Follow the download and installation instructions at https://www.virtualbox.org/wiki/Downloads

Install kubectl

The next step is to install the Kubernetes command-line tool, kubectl, which allows you to run commands against Kubernetes clusters e.g. deploy applications, inspect resources, view logs, etc.

1. Download and set executable:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl \
&& chmod +x ./kubectl

2. Move the binary to your PATH:

sudo mv ./kubectl /usr/local/bin/kubectl

3. Test to ensure the version you installed is up-to-date:

kubectl version

Full instructions are at https://kubernetes.io/docs/tasks/tools/install-kubectl/

Install Minikube

Now let’s install Minikube, a tool that runs a single-node Kubernetes cluster in a virtual machine on your laptop.

1. Download and set executable:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
 && chmod +x minikube

2. Move the binary to your PATH:

sudo mv minikube /usr/local/bin

Full instructions are available at https://kubernetes.io/docs/tasks/tools/install-minikube/

Start Minikube

1. Start Minikube and create a cluster:

minikube start

The output will be similar to this:

😄  minikube v1.1.0 on darwin (amd64)
💿  Downloading Minikube ISO ...
131.28 MB / 131.28 MB [============================================] 100.00% 0s
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳  Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
💾  Downloading kubeadm v1.14.2
💾  Downloading kubelet v1.14.2
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

Test Minikube Installation

1. Run a sample HTTP application

kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080

2. Expose the service so that external connections can be made

kubectl expose deployment hello-minikube --type=NodePort

3. Inspect the pod

kubectl get pod

4. Once the STATUS is Running, test the service using curl

curl $(minikube service hello-minikube --url)

5. Delete the service and deployment

kubectl delete services hello-minikube
kubectl delete deployment hello-minikube

Full instructions are available at https://kubernetes.io/docs/setup/minikube/#quickstart

Deploy MySQL on Kubernetes

To deploy a MySQL database image, we’ll use example YAML files provided on the Kubernetes website, k8s.io.

1. Create persistent storage using PersistentVolume and PersistentVolumeClaim

kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml

2. Deploy the MySQL image

kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
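The deployment file applied above roughly defines a Service and a Deployment along the following lines (an abridged sketch; refer to the linked page for the authoritative version):

```yaml
# Headless Service so other Pods can reach the database by the DNS name "mysql"
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
# Deployment running a single MySQL container backed by the PersistentVolumeClaim
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
```

Note that the database files live on the PersistentVolumeClaim, so the data survives even if the MySQL Pod is deleted and recreated.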

3. Inspect the deployment

kubectl describe deployment mysql
kubectl get pods -l app=mysql
kubectl describe pvc mysql-pv-claim

4. Run MySQL client to test

kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword

Full instructions are available at https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/

Deploy Joget on Kubernetes

Once the MySQL database is running, let’s run a Docker image for Joget Workflow Enterprise that connects to that MySQL service.

1. Deploy the Joget image using an example YAML file from the Joget Knowledge Base.

kubectl apply -f https://dev.joget.org/community/download/attachments/42599234/joget-deployment.yaml
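The exact contents are maintained in the linked Knowledge Base file, but a deployment of this kind can be sketched roughly as follows (the image name and environment variable below are illustrative assumptions, not the actual file):

```yaml
# Deployment running the Joget container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: joget
spec:
  replicas: 1
  selector:
    matchLabels:
      app: joget
  template:
    metadata:
      labels:
        app: joget
    spec:
      containers:
      - name: joget
        image: jogetworkflow/joget     # illustrative image name
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_HOST             # points at the "mysql" Service from the previous section
          value: mysql
---
# Service exposing Joget outside the cluster via a NodePort
apiVersion: v1
kind: Service
metadata:
  name: joget
spec:
  type: NodePort
  selector:
    app: joget
  ports:
  - port: 8080
```

The key idea is that the Joget container reaches the database simply by the Service name mysql, which Kubernetes resolves via its internal DNS.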

2. Inspect the deployment

kubectl describe deployment joget
kubectl get pods -l app=joget

3. Once the STATUS is Running, get the URL for the service

minikube service joget --url

4. Access the URL in a browser with an additional /jw in the path to reach the Joget App Center:

Joget App Center

You now have a running installation of Joget Workflow, and you’ll be able to visually build a full app in 30 minutes without coding.

Scale Joget Deployment

Now we can demonstrate how Kubernetes can be used to manually increase and decrease the number of Pods running.

1. Scale the deployment to 2 pods (called replicas)

kubectl scale --replicas=2 deployment joget

2. Examine the running pods, and you should see 2 pods running Joget

kubectl get pods

NAME                     READY   STATUS    RESTARTS   AGE
joget-7d879db895-c9sbb   1/1     Running   0          27s
joget-7d879db895-wpnsf   1/1     Running   0          37m
mysql-7b9b7999d8-lk9gq   1/1     Running   0          65m

3. Scale the deployment down to 1 pod

kubectl scale --replicas=1 deployment joget

4. Examine the running pods, and you should now see 1 pod running Joget.

kubectl get pods

Words of Caution on Kubernetes

This tutorial using Minikube is very simplistic and is meant for learning the basic concepts behind containers, orchestration and Kubernetes. In a real production environment there are many more things for you to consider, for example:
  1. How do you manage installations on multiple nodes? You would probably need to have some sort of automation tools like Ansible or Puppet.
  2. How do you monitor the Kubernetes cluster? You would need something like Prometheus.
  3. How do you manage real persistent storage? You would use shared PersistentVolumes (e.g. using NFS or another storage solution such as Ceph).
  4. How do you manage security e.g. handle passwords? You would need to use secrets for passwords.
  5. How do you manage your Docker images? You would want to run your own private Docker registry.
  6. How do you handle clustering and ensure applications are scaled according to load? You will need to set up autoscaling and ensure that the Joget and/or database images are preconfigured for replication. There is an example OpenShift blog post entitled How to Automatically Scale Low Code Apps with Joget and JBoss EAP on OpenShift.
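As an illustration of point 4 above, a password could be kept in a Secret and referenced from a Deployment instead of being hard-coded in the YAML (a sketch with placeholder names and values, not part of this tutorial):

```yaml
# Secret holding a database password
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
type: Opaque
stringData:
  password: changeme        # placeholder value; never commit real passwords
---
# In a Deployment's container spec, the Secret would be referenced like this:
# env:
# - name: MYSQL_ROOT_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: mysql-credentials
#       key: password
```

This keeps credentials out of the Deployment definition itself, so the same manifest can be reused across environments with different Secrets.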
So, in actual fact, Kubernetes is hard! And Kubernetes is not a complete stack; there are many missing pieces that you would need to fill in yourself.

This is where the many products and solution providers in the Kubernetes ecosystem play a role. For example, Red Hat OpenShift Container Platform is packaged as Kubernetes for the enterprise, providing many of the missing components such as improved monitoring, logging, a container registry, a web console UI and build automation, along with commercial support. Cloud providers like Amazon, Microsoft, Google and many others provide managed solutions, so you can concentrate on your container images with fewer worries about managing the actual Kubernetes platform.


In this article we introduced containers, Docker and Kubernetes. We also presented the difficulties in using Kubernetes in a real production environment. If Kubernetes is so hard, then why would you want to use it? Kubernetes might not be for everyone but it does provide tremendous value if done right, which is why the IT industry is consolidating around it. Kubernetes offers amazing capabilities to deploy and manage at scale, so it is especially suitable for organizations that require large scale deployment of applications.

We also covered a tutorial installing a local copy of Kubernetes and deploying Joget with MySQL. While Kubernetes deals with infrastructure deployment issues, Joget addresses application development challenges. With Joget on Kubernetes, you will be able to visually build applications in minutes.

