Jenkins in a Box

Posted on 2015-02-28

At my work we'd had experience with CI before. We had a Jenkins instance running, but we never actively maintained the job configuration. Over time we only ever stripped parts out of the build, and we grew to hate its automated emails about failing tests we didn't understand and didn't feel responsible for. The real problem was our attitude towards CI and the evergreen excuse of never "being given" time to fix it properly. There were some attempts at rectifying this, but nothing stuck. It was time to take a stand. This article is about how I used Docker and some other tools to right this wrong.

Backstory

Our web applications had been under continuous development for about five years at this point, and all but one of the five (perhaps six) Java developers who came before me had left the company. Our development process was quite ad hoc: implementing features as they came along and dropping everything to fix the bugs we had created in the previous cycle. Everyone agreed we needed to improve our testing and delivery process, but there was so much work to be done! We didn't have time to research tools and define a process, right? It's a hard sell to anyone (including myself, in the beginning) that pausing feature development is worth it to assess and improve the way you work. In the meantime you make small improvements, like optimizing the Maven build, but it felt like trying to fill a bucket full of holes. Then one spare Friday I wrote a shell script to automate our deployments, and after seeing the time invested pay itself back many times over (after only a few weeks of use), I was sold on the idea of more automation. Being a software engineer, it should have been a no-brainer from the start. I had enough clout to claim the time, and off I went.

The Excavation

The first challenge was decoupling the web application from its external dependencies so that we could have a truly isolated environment for running tests. After many years the application was interwoven with its dependencies to such an extent that separating them felt akin to an archaeological excavation. Previously, tests ran against the development database with certain assumptions about its data. I wanted tests to be reproducible and not to break for incidental reasons. Builds would ideally run in a clean environment, something that could be provisioned and torn down endlessly.

Part 1. SQL Dumps & Imports

Our SQL tables were a mix of Hibernate auto-generated tables and manually created ones, and our code base contains non-ORM and MySQL-specific queries. For these reasons, switching to an embedded database would have been too big a change to the application; I believed it would be safer to limit the number of changes to the production code. So instead, the MySQL database would be (re)created using a DDL export made with mysqldump and populated with data using LOAD DATA INFILE. I analyzed our production data to identify the minimum data needed and added some extras useful for test cases. Next I created a SQL script that made a sub-selection into temporary tables, which were then exported with SELECT ... INTO OUTFILE. The reason for the temporary tables was that I could join the tables containing transitive relationships without excessive locking. tar.bz2'd, all of these files were small enough to bundle into our code repository. The commands used for the entire process were documented and added to version control, because the whole point of this endeavor was reproducibility.
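For reference, the whole export/import cycle boils down to a handful of shell commands. The sketch below is illustrative only: the database, table, and file names are placeholders, but the shape matches what's described above.

```bash
# Recreate the schema from a DDL-only dump (no data, just table definitions)
mysqldump --no-data --routines webapp > schema.sql      # names are illustrative
mysql webapp_test < schema.sql

# Export the hand-picked sub-selection from the temporary tables as flat files
mysql webapp -e "SELECT * FROM tmp_users INTO OUTFILE '/tmp/users.tsv'"

# Bulk-load the data into the freshly created test schema
mysql webapp_test -e "LOAD DATA INFILE '/tmp/users.tsv' INTO TABLE users"
```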

Part 2. Images

Our website lets users upload images to customize their tiles and backgrounds. We use vfs to abstract the remote filesystem, so switching to local storage was simple. However, there were more images to take care of. Some of the most essential image files are not part of the web app but live on our CDN. After years of managing these assets manually, exactly which images were still in use had become a bit vague. (I also wrote a batch job to compare and delete unreferenced images from the asset servers, perhaps an interesting topic for another time.) To avoid having to scour all the sources and the newly created sub-selection of production data for image references, I created a proxy web application that first checks the local filesystem for an image before fetching it from the CDN and caching it locally. This proxy now runs on the test environment, providing images on an as-required basis, which keeps the environment somewhat lean.
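The lookup order the proxy follows is simple enough to sketch in a few lines of shell. This is just the idea, not the actual web application; the cache path and CDN host are placeholders.

```bash
# Sketch of the proxy's lookup order: serve from the local cache if present,
# otherwise fetch once from the CDN and keep the copy for next time.
fetch_image() {
    local path="$1"                       # e.g. tiles/1234.png (placeholder)
    local cache="/var/image-cache/$path"
    if [ ! -f "$cache" ]; then
        mkdir -p "$(dirname "$cache")"
        curl -sf "https://cdn.example.com/$path" -o "$cache" || return 1
    fi
    cat "$cache"
}
```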

Part 3. Email, Solr etc.

The registration process depends on our mail server. That is actually quite easily replaced with GreenMail. A thin web interface to the inboxes allows integration tests to complete the full registration process without mocking out the email sending. Finally, the Solr instance backing the site has a custom schema and config that we keep in version control, so spinning up an empty Solr is a script-friendly task. The rest of the dependencies were stubbed into insignificance because they weren't considered core components.
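Spinning up that empty Solr amounts to roughly the following; the paths assume a Solr 4-era example distribution and a default core name, both of which are placeholders for our actual layout.

```bash
# Copy the version-controlled schema and config into a fresh core, then start Solr.
# SOLR_HOME and the core name are placeholders.
cp solr-conf/schema.xml solr-conf/solrconfig.xml "$SOLR_HOME/example/solr/collection1/conf/"
(cd "$SOLR_HOME/example" && java -jar start.jar -Djetty.port=8983)
```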

Docker as a Platform

A lot has been said about Docker and Docker Inc. All I can tell you is that I am very pleased with the platform and how it has helped us realize our continuous integration ambitions. It didn't have to serve external users, so security concerns (something Docker has been faulted for) were minimal. Docker fit our requirements: from inside a container it looks like you're on your own Linux machine, minus System V. We can spawn more build slaves as needed, and installation and management through the docker client and daemon are straightforward. So, was the choice for Docker influenced by the hype? Sure. Then again, I might not even have considered the scale of build automation we have now if Docker's popularity hadn't made me aware of the benefits and possibilities. Having said that, I wouldn't call it production-ready yet: since version 1.4 random crashes have been very rare, but they still happen. The attention and hype give me confidence that Docker and Rocket will be around for a while.
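Spawning an extra build slave really is a one-liner; the image name and SSH port mapping below are illustrative, not our exact setup.

```bash
# Start another build slave; Jenkins connects to it over SSH on the mapped port.
docker run -d --name build-slave-2 -p 2202:22 registry.internal/build-slave:latest
```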

Jenkins Config

Jenkins talks to the build slaves (containers) as regular SSH slaves. There is a Jenkins Docker plugin, but it would occasionally lose track of the container and start a new one, and its configuration wasn't very well documented. After trying a regular SSH slave setup, I found that it meets all our needs. The main takeaway for us has been: Jenkins doesn't need to know it's talking to a container. Our Jenkins instance itself also runs in a container, with its exploded war mounted as a volume.
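For the Jenkins master itself this comes down to something like the run command below; the host paths and image name are illustrative placeholders.

```bash
# Run the Jenkins master in a container, with the exploded war and the
# job configuration mounted from the host.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /srv/jenkins/war:/opt/jenkins/war \
  -v /srv/jenkins/home:/opt/jenkins/home \
  registry.internal/jenkins-master:latest
```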

Jenkins automates building and tracks changes in your code repository, but its configuration is manual. If you branch your code you'll have to clone the Jenkins job, which isn't hard, but it is tedious. Maybe someone branched and didn't tell you, and they won't care about running tests until you're asked to merge their branch. With jenkins-autojobs these rogue developers get the benefit of all the automated goodness anyway.
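jenkins-autojobs is driven by a small YAML file that points at the repository and a template job; once that exists, keeping Jenkins in sync with the branches is roughly a single command, which can be run on a schedule. The config filename below is a placeholder.

```bash
# Scan the git repository and create/update a Jenkins job for every branch,
# based on the template job described in the YAML config (filename is a placeholder).
jenkins-makejobs-git autojobs.yaml
```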

Expanding Tests: Selenium

Our Docker image includes Tomcat, Firefox, Chrome and Xvfb. That can only mean one thing: headless browser testing! Inside the container is a script used to deploy the applications to Tomcat; it is launched from inside the Maven build (aptly, during the integration-test phase).
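In outline, that in-container deploy script looks something like the sketch below; the display number, Tomcat paths, and wait-for-startup URL are all assumptions, not our literal script.

```bash
#!/bin/bash
# Outline of the in-container deploy script launched from the Maven build.
Xvfb :99 -screen 0 1280x1024x24 &      # virtual display for Firefox/Chrome
export DISPLAY=:99

cp target/*.war /opt/tomcat/webapps/   # deploy the freshly built application
/opt/tomcat/bin/catalina.sh start

# Block until Tomcat answers so the Selenium tests don't start against nothing
until curl -sf http://localhost:8080/ > /dev/null; do sleep 2; done
```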

One of the most satisfying moments of my professional career has been discovering my first "real" bug through a browser test, some weeks after I wrote the test. It's somehow more satisfying than a failing unit test, which usually just catches a "dumb" mistake. Browser tests can fail even when all the wires are connected properly but something more fundamental is wrong. Or... they can fail because someone changed a CSS class... There is good and bad.

Bonus: Development Containers

Guess what you get after creating a fully featured build-slave Docker image? That's right: a fully featured development Docker image. The definition can be copied and built on every developer's machine or distributed through a custom registry. The user-friendliness needed some enhancing, so a lot of time went into writing utility scripts.
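Getting a development environment is then just a build and a run against the same definition; the tag and mounted source path below are illustrative.

```bash
# Build the same image definition locally and start it with the source tree mounted.
docker build -t dev-env .
docker run -it -v "$HOME/src/webapp:/workspace" dev-env bash
```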

Conclusion

I don't think any existing packaged CI solution would have been easier to tailor to our application than building our own; the application was never built with containers in mind. Still, our current solution doesn't feel like lock-in. The custom shell scripts are short and composable, meaning they can be understood and replaced if need be. The participants in the automated build process are now all well defined and packaged into a portable container format, which made reproducible development environments an easy next step. Thanks to setting up CI with Docker, we have made our application more modular (even though, right now, it's really just one giant module with a few small auxiliary ones). This brings us one step closer to using a hypervisor or something like CoreOS as a production platform. All of this in return for something that has already paid for itself in terms of stability and quality assurance.
