
Mule on OpenShift: Part 2 - Build & Deploy

Posted by Sohrab Hosseini on 15 March 2019

docker, kubernetes, tech, mule, platform, openshift, container, anypoint

Since the publication of this blog post, MuleSoft has released Mule 4.2. In this version, they have introduced a "start-up performance improvement" that effectively causes a tight coupling between the wrapper and the runtime container. As such, the Unwrapped approach outlined in this post will not work in Mule 4.2+.

To re-include the wrapper, you need these changes.


In part 1 of this series, we discussed the different deployment models that we have used in the past to deploy containerised Mule applications on OpenShift Container Platform. Here we expand on the topic by discussing best practices around building and deploying such applications.

Container Base Image

In a microservices architecture, you often find yourself deploying many containers that share the same technology stack. Running hundreds of Mule application containers is a typical sight in our projects.

Mule Application Container Image

In order to effectively leverage Docker's layer-sharing capabilities, we normally establish a Mule runtime base image, upon which the Mule application images are built. But before that, we need to choose a base image to use as the foundation.

OpenShift Container Platform users have access to a large catalogue of Red Hat container images. These images are supported by Red Hat and are regularly patched for defects and security exploits. Additionally, many are optimised to run seamlessly on OpenShift. 

Mule targets the Java 8 runtime, so a quick solution would be to use Red Hat's OpenJDK 8 image. This image comes packed with these features:

  • Detects and runs Maven builds during source-to-image (s2i) assemble phase
  • Detects and runs runnable JARs, e.g. Spring Boot application JARs
  • Detects and runs compiled classes (outside a JAR)
  • Container-aware JVM tuning, especially respecting container memory/CPU limits
  • Jolokia agent for JMX interactions over REST
  • Hawkular agent for monitoring
  • Prometheus JMX exporter agent for monitoring

This image covers a lot of bases and we often find it useful for getting Java applications up and running quickly. However, its broad appeal does have some drawbacks. Primarily, the build tools in the image are not needed at runtime and may even constitute additional attack vectors for rogue applications.

Our preference is often a "minimal" image containing only the software necessary to run the application. Our current choice is rhel-minimal (formerly rhel-atomic), a minimal Red Hat Enterprise Linux container image. By default, it comes with only the essentials and does not include other software such as Python, systemd or even Yum.

Starting with this image, we need to install and configure a few things based on our requirements (a sketch of such a Dockerfile follows the list below):

  • OpenJDK 8 (only JRE)
  • Any internal Certificate Authorities
  • Default Timezone
  • Init/signal translators like tini (for older versions of Docker)
  • Etc.
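As an illustration only, a Dockerfile for such a base image could look roughly like the one below. The image name and tag, package names, file paths and the way tini is obtained are all assumptions that will vary between environments and RHEL releases.

```dockerfile
# Minimal RHEL base image (exact name/tag depends on your registry and RHEL release)
FROM registry.access.redhat.com/rhel7-atomic

# JRE only (no build tools), plus timezone data and the system CA tooling, via microdnf
RUN microdnf install java-1.8.0-openjdk-headless tzdata ca-certificates \
 && microdnf clean all

# Default timezone (example value)
ENV TZ=Australia/Melbourne

# Trust an internal Certificate Authority; the RHEL OpenJDK packages normally
# pick up the system trust store refreshed by update-ca-trust
COPY internal-ca.crt /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust

# tini as PID 1 to forward signals and reap zombies (for older Docker versions);
# assumed to be vendored next to the Dockerfile
COPY tini /usr/local/bin/tini
RUN chmod +x /usr/local/bin/tini
ENTRYPOINT ["/usr/local/bin/tini", "--"]
```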

We also do not want to lose the nice features listed for the Red Hat OpenJDK image above, so they are added back in. The Fabric8 Java Docker images are usually upstream of the Red Hat images, so their source code is used as inspiration for re-introducing some of those features.

Mule Runtime Image

With a minimal and hardened base image at hand, we can now turn our attention to building the runtime image for Mule.

This boils down to installing our desired version of the Mule runtime into the image. Mule is distributed as an archive so the installation process is fairly trivial.

As part of the runtime installation, we often check the libraries included with Mule for known exploits and remove any that score higher than our security posture is comfortable with. OWASP dependency-check is rather useful in this regard, but other application/container scanning tools such as CoreOS Clair or Black Duck should also be considered and integrated into the CI/CD process.

A feature we inherit from the Fabric8 images is run-java.sh, a universal Java start-up script. It is container-friendly and checks certain file paths and environment variables to configure itself. You will normally find the following in our images; a sketch of the resulting runtime layer follows the list:

  • classpath file configured for Mule runtime (See part 1 for more info)
  • run-java-options file reproducing the system properties normally found in the Tanuki wrapper
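Below is a heavily simplified sketch of this runtime layer. The Mule version, the archive name, the directory layout and the /deployments convention (borrowed from the Fabric8 images) are assumptions you would adapt to your own base image.

```dockerfile
# Build on the hardened base image from the previous section (name is illustrative);
# it is assumed to already carry run-java.sh, re-introduced from the Fabric8 images
FROM our-registry.example.com/mule/base-java8:latest

ARG MULE_VERSION=4.1.4
ENV MULE_HOME=/opt/mule

# Install the Mule runtime from its standalone archive (vendored or fetched in CI)
COPY mule-standalone-${MULE_VERSION}.tar.gz /tmp/
RUN mkdir -p ${MULE_HOME} \
 && tar -xzf /tmp/mule-standalone-${MULE_VERSION}.tar.gz -C ${MULE_HOME} --strip-components=1 \
 && rm /tmp/mule-standalone-${MULE_VERSION}.tar.gz

# run-java.sh configuration: a classpath file pointing at the Mule runtime libraries
# and a run-java-options file reproducing the wrapper's system properties
COPY classpath run-java-options /deployments/

# S2I scripts providing the B2I assemble and run behaviour described below
COPY s2i/assemble s2i/run /usr/libexec/s2i/

CMD ["/deployments/run-java.sh"]
```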

Another feature you often find in OpenShift images is source-to-image (S2I). This is a Red Hat invention to automate the build and image creation for popular frameworks, such as Maven for Java, NPM/Yarn for Node.js, Pip for Python, etc.

Technically speaking, S2I is a collection of shell scripts for building the image (assemble) and running the application once the container starts (run).

I personally have some mixed feelings about S2I. While I find it suitable for interpreted runtimes such as NodeJS or Python, I find it hard to reconcile its sensibilities with compiled builds such as Java or Go. For these use cases, I prefer what I call B2I, where the binary is created outside the image and then given to the runtime image to bake in. This removes the need for build frameworks to be present in the image.

To enable B2I, an assemble script is included with the Mule runtime image. Our scripts typically come with these features (a simplified sketch follows the list):

  • Ability to accept the Mule archive as a binary source
  • Ability to retrieve the Mule archive from a URL, if one is provided in a specific environment variable
  • Align the Log4j configuration to enterprise standards
  • Strip out any sensitive data that a developer may have left in the repository by mistake
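To make the first two bullets concrete, a stripped-down assemble script might look like the sketch below. The paths, the environment variable name and the archive handling are all illustrative.

```bash
#!/bin/bash
# B2I assemble sketch: place a pre-built Mule application archive into the runtime's apps directory.
set -euo pipefail

APPS_DIR="${MULE_HOME:-/opt/mule}/apps"
S2I_SRC_DIR="/tmp/src"                 # where OpenShift injects the binary source by default
ARCHIVE_URL="${MULE_ARCHIVE_URL:-}"    # hypothetical variable name for URL-based retrieval

mkdir -p "${APPS_DIR}"

if [ -n "${ARCHIVE_URL}" ]; then
  # Retrieve the Mule application archive from a URL, if one was provided
  curl -fsSL -o "${APPS_DIR}/$(basename "${ARCHIVE_URL}")" "${ARCHIVE_URL}"
else
  # Otherwise expect the archive to have been streamed in as the binary source
  cp "${S2I_SRC_DIR}"/*.jar "${APPS_DIR}/"   # Mule 4 archives are JARs; Mule 3 used ZIPs
fi

# Not shown: align the Log4j configuration with enterprise standards and strip out
# any sensitive values a developer may have left in the repository by mistake.
```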

Please note we update the Log4j configuration to always log to standard output, as is the convention for containerised applications. These logs can then be captured by the container orchestrator and processed alongside the rest of the logs.

Finally, the run script in the image simply invokes the run-java.sh script.

Building Mule Application Images

Now that we have a Mule runtime base image, each Mule application image is simply a thin layer on top that contains the application archive.

OpenShift provides a number of custom resources for building images. A BuildConfig defines how a Build is configured. A Build produces an Image, which is then stored in an ImageStream. These constructs, in addition to a Continuous Integration (CI) tool, allow us to produce all required images.

For a Mule application, the CI tool executes the following steps:

  1. Check out the source code from source control
  2. Perform a Maven build to unit test the application and package it as a Mule archive
  3. Perform an OpenShift build with Source Strategy, using the Mule archive as the Binary Source

Since we took care of most things when building the base image, each individual application image build is as simple as the steps above.
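For illustration, a BuildConfig for such a binary build might look roughly like the one below; all names and tags are examples. The CI tool would then trigger it with something along the lines of oc start-build customer-api --from-file=target/customer-api-1.0.0-mule-application.jar.

```yaml
# Example BuildConfig: bakes a pre-built Mule archive into the runtime base image
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: customer-api
spec:
  source:
    type: Binary
    binary: {}
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: mule-runtime:4.1     # the Mule runtime base image built earlier
  output:
    to:
      kind: ImageStreamTag
      name: customer-api:latest    # the resulting application image
```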

Deployment

Once an application container image is ready, its deployment is no different from any other image on OpenShift. In fact, this is the very value proposition of containers: simplified and unified deployment, regardless of the technology stack, which is abstracted away by the container specification.

In the case of Mule applications, we attach a few volumes to the application pod:

  • Secret containing Mule application configuration
  • Secret containing the Mule licence
  • PersistentVolume for the Mule working directory, especially if the application contains a batch process, to support graceful recovery after a pod crash

As for resources, I have found (using Mule 3.x in the Mule:Unwrapped model) that, depending on the application size, the JVM plus Mule application may not even start with less than 500 MB of memory. JVM start-up is also a CPU-intensive activity, so sufficient CPU resources must be allocated.
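Putting the volumes and resource settings together, the relevant fragment of a Deployment (or DeploymentConfig) could look like the following sketch; the names, mount paths and sizes are assumptions to tune for your own applications.

```yaml
# Illustrative Deployment fragment for a Mule application pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: customer-api
  template:
    metadata:
      labels:
        app: customer-api
    spec:
      containers:
        - name: customer-api
          image: image-registry.example.com/mule/customer-api:latest
          volumeMounts:
            - name: app-config          # application configuration
              mountPath: /etc/mule/config
              readOnly: true
            - name: mule-license        # Mule licence file
              mountPath: /etc/mule/license
              readOnly: true
            - name: mule-work           # working directory, e.g. batch queues
              mountPath: /opt/mule/.mule
          resources:
            requests:
              memory: 512Mi
              cpu: 500m
            limits:
              memory: 1Gi
              cpu: "1"
      volumes:
        - name: app-config
          secret:
            secretName: customer-api-config
        - name: mule-license
          secret:
            secretName: mule-license
        - name: mule-work
          persistentVolumeClaim:
            claimName: customer-api-work
```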

Mule Application Development

Before I leave you, here are some tips for Mule developers to ease application build and deployment in a containerised environment.

  • Ensure 12-factor application principles are followed
  • Enforce the same port to be used by all applications when exposing APIs
    • This will reduce misunderstandings and deployment failures
  • Introduce a liveness check (and a readiness check, if applicable)
    • Enforce the same port and path for all applications (see the sketch after this list)
    • Prefer using a separate HTTP listener for this so it does not affect normal traffic to the API
  • Ensure the application design does not preclude the application from running as multiple replicas
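For the health-check tips, the container spec fragment could look like the sketch below; the port, path and timings are simply examples of the values you would standardise across all applications, assuming a dedicated HTTP listener serving /health on port 8081.

```yaml
# Illustrative probe configuration for a Mule application container
livenessProbe:
  httpGet:
    path: /health
    port: 8081
  initialDelaySeconds: 60   # allow for JVM and Mule runtime start-up time
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 8081
  initialDelaySeconds: 30
  periodSeconds: 10
```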

 

If you have made it this far, I congratulate you on your perseverance (otherwise if you just skipped to the end, you need to take a hard look at the choices that brought you here). As a bonus, I have created a GitHub repo that contains an example of a Mule base image and demonstrates the concepts introduced in this post.

 

If you like what you read, join our team as we seek to solve wicked problems within Complex Programs, Process Engineering, Integration, Cloud Platforms, DevOps & more!

 

Have a look at our open positions at Deloitte. You can search and see which ones we have in Cloud & Engineering.

 

Have more enquiries? Reach out to our Talent Team directly and they will be able to support you best.
