Canary, a side-project

Over the past several years, my friend Brandon Tindle and I have been discussing and prototyping various ideas around HTTP monitoring. I'm happy to say that we've finally gotten brave enough to start shipping, even if it is a slightly embarrassing technology preview. If you'd like to see it, head over to Canary, check it out for a few moments, and come back here to learn about its underpinnings.

The goal of Canary is not to build yet another HTTP monitoring service - it's to re-imagine how one could be built today and build it for the long term. Some of the principles we've established are as follows:

  • be as Unix-like as possible. Do one thing well and be open
  • measurements are the primitive, everything else is a function of that. Canary will ultimately become a distributed measurement engine, and any UI, alerting, etc will be secondary
  • no more black box measurements - everything should be measured with curl
  • measurements should happen as quickly as possible, no more than 10s apart
  • the system should be as open as possible - the measurement data should be free to the public (if authorized by the user) for analysis
  • UIs should leverage modern technologies - realtime delivery of measurement data and state changes to the browser is possible now. Do that
  • different UIs for different cases - a contextual, tactical response UI that helps you understand whether you are in a failure state and whether you are recovering is necessarily different from the tooling you'd use to look at long-term trends or to run a postmortem
  • make room for others to innovate. We'll never get it all correct, but by keeping to our core values and letting others use our data and APIs, very interesting things can happen
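To make the "measure everything with curl" principle concrete, here is a hypothetical sketch of a single measurement using curl's `--write-out` timing variables. The target URL is a placeholder, not an actual Canary endpoint:

```shell
# One HTTP measurement: fetch the page, discard the body, and report
# the timing of each phase of the request via curl's --write-out format.
# https://example.com/ is a placeholder target.
curl --output /dev/null --silent \
  --write-out 'code=%{http_code} dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  https://example.com/
```

Because the output is just a formatted line of timings, a scheduler could run this every 10 seconds and ship each line off as a raw measurement.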

The current preview doesn't fully deliver on any of these points beyond getting data to you quickly. It is barely usable and limited to a single site. However, if you are interested in seeing what we cook up, I invite you to follow us on Twitter, where we'll make announcements as we iterate.

Brief thoughts on Docker

I've been playing with Docker recently, digging in to see if we can use it to decouple our ever growing number of applications at GitHub from the underlying infrastructure.

Initial experiments are promising. I built an image containing a minimal rbenv configuration and was able to get a trivial app up and running - all while watching an episode of Sons of Anarchy. I even opened up an amazing pull request when the action slowed.

I am most excited about the Dockerfile. By bundling one with an application, I can specify the preferred execution environment rather than having to fight with whatever is already in play. I can reduce surprise and help enforce consistency. My deployment can be bundled with most of the information needed for a successful execution. The Procfile describes the processes that can be run; the Dockerfile describes the runtime. Mix in some environment variables and you might be in business. There are plenty of details to work out, but this has so much potential.
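As a rough illustration of that split, here is a hypothetical Dockerfile for a small Ruby app - the base image, package names, and command are illustrative, not my actual setup:

```dockerfile
# Hypothetical Dockerfile: pin the runtime the app expects,
# instead of depending on whatever is installed on the host.
FROM ubuntu:12.04

RUN apt-get update && apt-get install -y ruby rubygems
RUN gem install bundler

# Ship the app's code and its declared dependencies into the image.
ADD . /app
WORKDIR /app
RUN bundle install

# The Procfile still says *what* to run; the Dockerfile defines
# the environment it runs in. This CMD mirrors a Procfile web entry.
CMD ["bundle", "exec", "rackup", "-p", "8080"]
```

With something like this checked in next to the Procfile, the app carries its own runtime description wherever it gets deployed.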

I encourage everyone to explore Docker, as this or something very much like it will gain traction and change things in the near future.