
Accelerating Mule ESB Development With Project Templates

Posted by Admin on 27 May 2014


For a recent Mule ESB project we needed to pump out a lot (100+) of ESB service operations. Most of them followed the same pattern: synchronous request-response to a single provider API. With so many similar flows to knock out, our thoughts quickly turned to ways to automate development. How much of our code could we generate? Thanks to strong coding and naming standards and the clean and open nature of Mule XML configuration, the answer turned out to be 'almost all of it'.

Looking at the example flows, we found several kinds of tasks to automate:

  • Create new Mule XML flow files.
  • Augment existing Mule XML files with extra connectors and endpoints.
  • Generate XSLT skeletons and test data files from XML schemas.
  • Template JUnit test code.

We chose the venerable Apache Ant tool to handle these jobs because of its great support for file system management tasks, token replacement and XSLT transformations.

New Flow Files

New files were the easiest. Our code followed strict naming conventions so creating new flow files was simply a case of changing the relevant names. Mule's very simple and clean XML format helped greatly here - there were no embedded service-specific XML namespaces or other tricky code generation steps.

We used Ant's copy task with embedded filtersets to do simple text token replacement. The replacement values came from Ant properties, so we could supply them in properties files or on the command line (handy for scripting bulk code generation).
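
To make this concrete, here is a minimal sketch of such a target. The template file, token names and properties (@OPERATION@, ${operation.name} and so on) are illustrative, not lifted from the original project:

    <target name="generate-flow"
            description="Create a new flow file from the template">
      <!-- Token values come from Ant properties, e.g.
           -Doperation.name=getCustomer on the command line. -->
      <copy file="templates/flow-template.xml"
            tofile="src/main/app/${operation.name}-flow.xml">
        <filterset begintoken="@" endtoken="@">
          <filter token="OPERATION" value="${operation.name}"/>
          <filter token="PROVIDER.URL" value="${provider.url}"/>
        </filterset>
      </copy>
    </target>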


Augmenting Existing Flow Files

Our example projects declared Mule connectors and global endpoints in a central mule-config.xml file - one per application. This left us in a tricky position: we had to add new endpoint elements to this file while preserving the original content (including comments and formatting). The files are XML documents, so we couldn't simply append text to the end without producing invalid XML.

The solution here was XSLT. We started with an XSLT 'identity transformation' (one that copies its input to its output unchanged). Then we matched the last Mule endpoint element in the file, copied it, and appended our new endpoint at that point. Because we were treating the XML as XML (not plain text), the output always remained well-formed. Using an 'xsl:choose' we were even able to check whether our new endpoint had already been added (to stop us generating duplicate code).
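
A simplified sketch of such a transform (the parameter names and the shape of the endpoint element are illustrative; a real Mule config uses the Mule core namespace as declared here):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:mule="http://www.mulesoft.org/schema/mule/core">

      <xsl:param name="endpoint.name"/>
      <xsl:param name="endpoint.address"/>

      <!-- Identity transformation: copy everything through unchanged. -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Copy the last global endpoint, then append the new one,
           unless an endpoint with that name already exists. -->
      <xsl:template match="mule:endpoint[last()]">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
        <xsl:choose>
          <xsl:when test="//mule:endpoint[@name = $endpoint.name]"/>
          <xsl:otherwise>
            <mule:endpoint name="{$endpoint.name}" address="{$endpoint.address}"/>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:template>
    </xsl:stylesheet>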

We then used Ant's 'style' task to execute the XSLT and passed in Ant properties as XSLT input parameters.
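
Invoking it might look like the following (paths and property names are again illustrative; note that 'style' is an alias for Ant's xslt task):

    <target name="add-endpoint"
            description="Append a new global endpoint to mule-config.xml">
      <style in="src/main/app/mule-config.xml"
             out="${build.dir}/mule-config.xml"
             style="templates/add-endpoint.xsl">
        <param name="endpoint.name" expression="${endpoint.name}"/>
        <param name="endpoint.address" expression="${endpoint.address}"/>
      </style>
      <!-- Replace the original only once the transform has succeeded. -->
      <move file="${build.dir}/mule-config.xml"
            tofile="src/main/app/mule-config.xml"/>
    </target>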


XSLT Skeletons and XML Test Data

As an output of the previous design phases we had XML Schemas describing each operation's input and output. Schemas were not enough, though: for our code generation we needed XML documents that matched those schemas. Tools like Eclipse or OxygenXML can generate these sample documents, but not without manual interaction with a GUI - obviously a show-stopper for code generation!

Eventually we came across an open source library called JLibs. A simple Java main class wrapping the library was all it took to generate sample XML documents for our schemas. We then passed each sample document through a code-generation XSLT to produce the actual XSLT skeleton (how's that for meta-programming?).
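
A minimal sketch of such a wrapper, based on JLibs' XSParser/XSInstance API (the class name and argument handling here are illustrative):

    import javax.xml.namespace.QName;
    import javax.xml.transform.stream.StreamResult;

    import jlibs.xml.sax.XMLDocument;
    import jlibs.xml.xsd.XSInstance;
    import jlibs.xml.xsd.XSParser;

    import org.apache.xerces.xs.XSModel;

    /** Generates a sample XML instance document for a schema's root element. */
    public class SampleXmlGenerator {
        // args: schema file, root element namespace, root element local name
        public static void main(String[] args) throws Exception {
            XSModel xsModel = new XSParser().parse(args[0]);

            XSInstance xsInstance = new XSInstance();
            xsInstance.minimumElementsGenerated = 1;
            xsInstance.maximumElementsGenerated = 1;
            xsInstance.generateOptionalElements = Boolean.TRUE;

            QName rootElement = new QName(args[1], args[2]);
            XMLDocument sampleXml =
                    new XMLDocument(new StreamResult(System.out), true, 4, null);
            xsInstance.generate(xsModel, rootElement, sampleXml);
        }
    }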

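
Wiring the generator into the build is then a standard Ant java task; the classpath reference and file layout below are assumptions:

    <target name="generate-sample-xml">
      <java classname="SampleXmlGenerator" fork="true" failonerror="true"
            output="generated/${operation.name}-sample.xml">
        <classpath refid="codegen.classpath"/>
        <arg value="schemas/${operation.name}.xsd"/>
        <arg value="${schema.namespace}"/>
        <arg value="${root.element}"/>
      </java>
    </target>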

JUnit Tests

Code with no tests is bad. Generated code with no tests is even worse. The solution? Generate some tests...

We already had a Mule FunctionalTestCase written for our example flow. With some refactoring we reduced the code to be generated down to the names of the input and output files. These few fields were simple enough to template using the same text token replacement with Ant's copy and filterset tags.

To guard against developer laziness, the generated test classes contained some basic assertions (non-null response, no error status field etc.) and then a 'fail' assertion. This forces developers to open up the code and add their own operation-specific assertions.
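
As a minimal sketch, a generated test might look like the following, assuming Mule 3's FunctionalTestCase and a VM inbound endpoint (the endpoint URI, file names and class name are illustrative, not from the original project):

    import static org.junit.Assert.*;

    import org.junit.Test;
    import org.mule.api.MuleMessage;
    import org.mule.api.client.MuleClient;
    import org.mule.tck.junit4.FunctionalTestCase;
    import org.mule.util.IOUtils;

    public class GetCustomerOperationTest extends FunctionalTestCase {

        @Override
        protected String getConfigResources() {
            return "get-customer-flow.xml";
        }

        @Test
        public void testGetCustomer() throws Exception {
            String request =
                    IOUtils.getResourceAsString("get-customer-request.xml", getClass());

            MuleClient client = muleContext.getClient();
            MuleMessage response = client.send("vm://get.customer.in", request, null);

            // Generated baseline assertions.
            assertNotNull(response);
            assertNull(response.getExceptionPayload());

            // Deliberate failure: forces the developer to open the class
            // and add operation-specific assertions.
            fail("TODO: add operation-specific assertions for getCustomer");
        }
    }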

In Conclusion

By combining the techniques above we were able to remove about 70% of the coding effort involved in producing a new service operation. Developers could focus purely on the operation-specific data mappings and test cases. Even if a particular operation didn't follow the pattern exactly (e.g. it involved more complex service orchestration), the generated code provided a solid start for development.

 
