Rails Continuous Integration in the Cloud with CircleCI

Every once in a while I come across a tool or service that convinces me right from the start. The last one, for example, was Nitrous.io. This time it’s CircleCI.

Here at SQUAR we’re doing all our server-side development with Rails. Rails itself does a great job making test-driven development actually work, but without continuous integration (CI), test automation is only half the fun (and delivers even less of the value, for what it’s worth).

I’ve been a big fan of Jenkins since forever and it never occurred to me that I might some day abandon it. Here’s how CircleCI made me do it anyway:

  • Go to circleci.com, sign up with GitHub, and a few clicks later your project is all set up and already being CI’ed.
  • CircleCI practically figures out how to build and test our project on its own, so from this point onwards every push to GitHub (master or branch) gets tested. And I receive an email if my tests didn’t pass.
  • It integrates nicely not only with GitHub, but also on GitHub. For example, the pull request page tells me whether the tests for that branch passed or failed.
  • CircleCI infers permissions and collaborators from GitHub, so there are no accounts or permissions to set up on CircleCI. None at all.
  • It has a hook built in that picks up artifacts from a special folder and publishes them on the web on every build page. That makes it a breeze to, for example, integrate test coverage reports by just dropping the HTML-rendered results into this folder and letting CircleCI do the rest (see the sketch right after this list).
  • Another good one that I haven’t gotten around to setting up yet is the Heroku integration. Setting up a continuous deployment chain that pushes changes from particular branches to Heroku after tests pass is reduced to a few convenient steps.
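
To make the last two points concrete, here is a rough sketch of what both can look like in a circle.yml file. The test/post and deployment/heroku sections follow CircleCI’s documented conventions, but the coverage path and the Heroku app name below are made up for illustration – this is not our actual configuration:

    # circle.yml – illustrative sketch only
    test:
      post:
        # copy the HTML coverage report (e.g. SimpleCov's output) into
        # CircleCI's artifact folder so it shows up on every build page
        - cp -R coverage $CIRCLE_ARTIFACTS/coverage
    deployment:
      staging:
        branch: master              # deploy whenever tests pass on master
        heroku:
          appname: my-staging-app   # made-up Heroku app name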

With Jenkins this would have been quite a bit more of a hassle to set up. Not to mention that CircleCI runs in the cloud and is completely on demand (where are you with that, Jenkins?), with plans that will get you relatively far starting as low as 19 USD/month.

OK, the above was the short and over-excited rundown of my first experience. I need to make a few more remarks to put it into the right perspective:

  • It didn’t work quite as out-of-the-box as I described above. I had to make one customization to our build to make it actually work. But this change was well documented, and all I needed to do was drop a circle.yml file with a few lines of code into my repository (I’ll sketch what such a file can look like right after this list).
  • I also had to jump through a few hoops to make CircleCI pick up our coverage reports. But support was super helpful, and I was actually happy to experience the great customer service instead of being upset that it didn’t work right away.
  • We’re only using it for Rails at the moment and it’s great at that. Given that we’re doing quite a bit more (e.g. Android and iOS apps), we’ll have to figure out how much of that we can set up with CircleCI as well. I have a feeling we’ll still end up using Jenkins for whatever jobs CircleCI can’t get done. Because no matter what I said about Jenkins above, it’s still by far the most versatile of ‘em all.
  • There’s also Travis. It’s very established for CI’ing open-source projects and has recently become available for private repositories as well. From all I know it’s actually better and more mature, and it even offers CI for iOS apps (which is rare and awesome). But it does set you back a whopping 129 USD/month for the cheapest plan. I don’t doubt that’s money well invested once you’re at a certain scale, but it felt like a bit too much for us as a startup that’s hardly 3 months old. Other than that, I believe Travis would have impressed me as an old Jenkins veteran just as much…
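
For reference, a circle.yml with a handful of lines is typically all such a customization takes. The snippet below is only a hypothetical example of the general shape – the Ruby version pin and the overridden test command are made up for illustration, not the actual change we needed:

    # circle.yml – hypothetical build customization
    machine:
      ruby:
        version: 2.0.0          # pin the Ruby version the build runs on
    test:
      override:
        - bundle exec rspec     # replace CircleCI's inferred test command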

If you’re building your tool stack in the cloud, then you should give CircleCI a try. I believe you’ll be stunned, and thankful, and you’ll never want to go back.

Eating your own dog food

Early feedback is important. The earlier in the development life cycle feedback comes in, the faster you can iterate, figure out what is working and what isn’t, improve, and iterate again. You should release early and release often.

Releasing early and often usually aims at release cycles of something like two weeks. Depending on your kind of system this can be shorter, but especially for native apps, much shorter release cycles aren’t really feasible. An even quicker way to get feedback is to put your software into the hands of your own colleagues and selected testers – constantly. Within your own organization, nobody prevents you from releasing continuously, as often as multiple times a day, without the overhead of an official release. You can then take the feedback of your own peers to iterate even faster. In modern tech slang this has become known as “eating your own dog food”.

Here at Klamr we try to get ongoing development into the hands of all our colleagues as fast as possible. The key to doing this is continuous integration; that’s where everything ties in. Here’s how we do it:

  1. Jenkins: We’re using Jenkins as our continuous integration server and use it to automate most of our tasks. For each project there is a Jenkins job that pulls the latest code regularly, builds it, tests it, and then distributes it. Jenkins is amazingly easy to set up and configure, yet incredibly flexible and powerful. Ever since we started using it, it has grown with us into dozens of very different jobs for pretty much every project we’re working on.
  2. Git branching strategy: while working on new features we need to decide when exactly changes should be made available internally. The general requirements are to never break builds altogether and to never break core functionality. We don’t pull every single change that is made anywhere in the project. Instead, we hook our Jenkins jobs into our Git branching strategy so that the responsibility for deciding when a change is ready lies with our engineers: they control it by pulling changes into certain branches when they are ready.
  3. Schedule your distribution: depending on the project we either distribute immediately on every new change, or nightly. This is configured in Jenkins. My personal rule of thumb: the more transparent new versions are for your (internal) users, the quicker and easier distributions/deployments are, and the less frequent commits to your distribution branch are, the better it is to distribute changes immediately. When starting a new project, I generally begin with immediate distribution; once problems appear that can be solved by slowing down, I switch to nightly distributions. Everything running server-side, like a web app, is completely transparent for users (just as it is in your production environment), so new versions don’t disrupt anybody – a good candidate for very frequent distributions. An iOS application, on the other hand, needs to be installed manually, so pushing out 20 new versions every day tends to be disruptive for everybody. The last thing we want is to make our co-workers feel disrupted and annoyed; that just leads to less and worse feedback.
  4. Distribute: the actual deliveries are all automated, but differ quite a bit depending on the type of software. Some examples of what we do:
    • Backend application: this gets built and deployed to internal servers. This is the most complex deployment process we’ve got; things like database migrations in particular make it less than trivial.
    • Web application: our klamr.to web application is deployed to an internal, protected web server on every new change. It is connected to our live database, so everybody in the company can use this web application instead of our live production web application. Changes here are sometimes available only minutes after they have been finished.
    • Android: our Android app is distributed in two ways: new APK files are sent out directly via email (Android makes installing new APKs directly from email attachments so much easier than iOS does) and via the service Appaloosa Store. The latter has some nice advantages, like providing a custom store app and push notifications for new versions.
    • iOS: our iOS app is distributed via TestFlight. There are a few catches for iOS, for example that you need to build on a machine running Mac OS. That’s why we have a separate Jenkins instance just for building the iOS app; most other Jenkins jobs run on one Linux-based instance hosted on Amazon EC2. Also, devices must be explicitly registered in your ad-hoc provisioning profile, and Apple restricts the number of internal devices to 100. No rocket science once it’s all set up, but a few extra hoops to jump through.
  5. Real data: it’s important to allow internal users to use these early builds against their real production data. Our web application, for example, runs on an internal URL but is configured against our production servers and database. This allows us to test-drive new features early on with our real accounts. It leads to much better feedback than asking people to test features on isolated servers with fake data, and it helped a lot with internal acceptance.
  6. Automate: the key to all of this is automation. If it’s not automated, regular distribution either doesn’t happen, or it wastes valuable engineering time. And as mentioned already above, this all ties into continuous integration. Much of the process and infrastructure described above should be in place anyways to continuously build and test your software in an automated way.
  7. Release notes: for us it proved incredibly helpful to automate release notes for each internal distribution. Remember that one of the main reasons to do all this in the first place is to get early feedback. Without release notes, nobody can know what has changed or which part of your apps to pay attention to. We’re not doing this everywhere, but where we do, we generate the notes from Git commit messages (see the sketch after this list). They aren’t suitable for end users, but they are more than good enough for internal users.
  8. Respect: although these builds are only internal, we treat them with a lot of respect. This means we try never to break them (see above), we try to make using and updating them as easy as possible for our co-workers, and our engineers react quickly to any kind of feedback that comes in.
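
To give an idea of what the automated release notes from point 7 can look like: the distribution job can collect all commit messages since the previously distributed build and attach them to the email or build page. The sketch below is a hypothetical Ruby script, assuming each distributed build is marked with a Git tag – it’s an illustration, not our exact setup:

    #!/usr/bin/env ruby
    # Hypothetical sketch: turn the commit messages since the last
    # distributed build (marked with a Git tag) into internal release notes.
    previous_tag = `git describe --tags --abbrev=0`.strip
    notes = `git log --pretty=format:'- %s' #{previous_tag}..HEAD`
    File.write('RELEASE_NOTES.txt', notes)
    # tag the current build so the next run picks up from here
    system('git', 'tag', "internal-build-#{Time.now.strftime('%Y%m%d-%H%M')}")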

Regular internal distribution helps us to keep the feedback cycle as short as possible, sometimes even down to minutes. Automation of all the tasks involved helps us to keep moving fast, even as the number of systems and their complexity grows. I would highly recommend trying to automate as much as possible right from the start.

Are you eating your own dog food? What is your experience with this? Are you using different techniques and tools? Leave a comment – I’m very interested to hear what you’re doing.