How (And Why) To Release Hourly, Not Yearly

Hiya guys! Patrick (patio11) here. You signed up for periodic emails from me about making and selling software.

Continuous rollout means exposing features to your customers essentially as soon as they're done, rather than batching them together into "releases." (This is related to continuous deployment, but that is more of the developer-centric model of the practice. This is less a technical challenge and more of a business decision for you.) Let's talk about why you want to do it and how you can get started.

Features That Aren't Shipped Might As Well Not Exist, So Minimize Them

Traditionally, software is created in a pipelined process, often called waterfall development, where a particular unit of work goes through stages like:

  • planning
  • coding
  • testing
  • release
  • maintenance

Each of these stages is scheduled and managed roughly independently of the others. You could think of this as sort of a factory metaphor, where a chassis proceeds down the assembly line to various stations, with things gradually getting bolted onto it, and at the end of the assembly line you first have something which constitutes a car rather than a collection-of-car-parts. This process is terribly wasteful in software development. (You can read about how software borrows -- and misapplies -- metaphors from traditional manufacturing in the Lean Startup book, if you'd like.)

For a variety of operational reasons, batching features up into defined releases is counterproductive for software companies. Many companies are beginning to redo engineering and business processes such that this doesn't happen, and features are rolled out to customers "as soon as they're done." In practice, this generally means going from yearly/quarterly release cycles to weekly/biweekly release cycles as a first step, and then eventually getting to doing multiple releases every day.

Why? Well, consider the case of Paessler Software, a client of mine which makes downloadable network monitoring software. Dirk Paessler, the CEO, burns with missionary zeal on this topic. Paessler previously released one new version of the software a year, generally around August. To hear Dirk tell it, this meant:

  • The software stopped visibly improving every May, as the team went into a feature freeze for the August release
  • The engineering team experienced repeated crunch to hit the release date, impairing productivity
  • The employees were unhappy because it was difficult to take a vacation in summer without impacting the schedule
  • The schedule was inflexible because Marketing, Sales, and Engineering all focused their ongoing efforts around the yearly release, such that blowing the date meant e.g. disappointing customers/partners/media who had been promised something delivered on that date

This story isn't unique to Paessler. I've heard similar things from Fog Creek and a host of other established software companies: scheduled releases are basically the devil. We've put up with them for a while because they're the devil we know.

The Alternative To Doing Infrequent Releases: Very Frequent Releases

The alternative to taking 15 new features and 30 bug fixes and calling them version N+1 of the program is to release those features and fixes incrementally, in batch sizes as small as possible. My more established consulting clients generally do a bi-weekly release cycle. In my software businesses, and those of many of the hip-hop-happening Rails/Django crowd making social networks for lemurs, a release can literally be as small as a single changed line of code.

Why do this?

  • It reduces engineering risk, because a single feature or three going off the rails no longer threatens the ability to hit the announced ship date
  • It increases customer satisfaction because they experience the software as getting better all the time
  • It decouples marketing and engineering, to the happiness of both departments. Engineering no longer has Marketing breathing down their neck about "When? When? When?", and Marketing no longer has to worry about having to pull e.g. a planned email campaign or media buy because of a late blocking issue.
  • It smoothes sales, because decoupling the marketing/sales pushes from the quarterly/yearly release cycle tends to result in more even sales throughout the year rather than one or four massive spikes. (This eases cash-flow management for the business and lets you make investments like e.g. new hires faster and with more confidence)
  • It increases product quality, because customer feedback comes in about new features immediately (when they're fresh in the team's mind and pliable) rather than months later (when their original designs are largely forgotten and they've solidified into the code base, raising the cost of revising them)

Note that these are business problems, not just technical problems.

What Can We Build To Make This Easier?

One of the major objections to doing continuous rollout is that you don't want to ship buggy software to customers. This objection is often dealt with very defensively, for example by scheduling weeks of exhaustive testing prior to OKing releases. (My inner cynic wonders how often those weeks of testing have actually caught 100% of the bugs at your companies, since every software project I've ever worked on shipped with bugs in it regardless of how much upfront testing was done.)

Customers have different preferences for bugs versus speed of delivery, though. Some customers are very conservative with their technology adoption and would prefer never, ever, ever seeing a bug. Some customers have pressing business problems they need resolved today, and if you gave them a solution which was built out of bubblegum and baling wire that only worked 75% of the time, they'd name their children after you. We should cater to our customers' preferences in this.

One way to do this is by delivering different versions of the software to different customers. There are, broadly speaking, two ways to do this, depending on whether you're running a SaaS app or something installable.

Feature Flags for SaaS: One Software Build In Production, Multiple Experiences

I'm a big fan of A/B testing, where you ship two versions of the software but show a given user only one version, to see whether the difference in behavior motivates them to take some action of interest to you. It turns out that you can re-use concepts from A/B testing to make software better.

For example, suppose we have new functionality for our site/software, but we're not quite sure yet whether it is 100% stable. We can do a staggered release to the userbase, perhaps releasing it to 10% of accounts randomly, to hand-picked guinea pigs (like our internal accounts or our devoted beta testing group), or to some combination thereof. We could even enable or disable the feature globally, at will. Hence the name for the practice: feature flags.

The pseudocode for this looks something like:

if current_user.has_access_to?(:new_feature_X)
  # enable the new feature
else
  # do things the old way
end

This way, we upper bound the technical risk of shipping, because the new code added can only impact the 10% (or less) of the userbase which we choose to expose to it, rather than 100% of the userbase. Additionally, since we're hosting this software, we can dynamically turn features on or off in real time. This isn't a license to go completely Cowboy Coder, but if our monitoring or customer support detects a significant problem we weren't aware of, we can open up our dashboard, temporarily disable the new feature (or remove it from the affected users' accounts), and then fix it at our leisure rather than going into Oh No The World Is Burning emergency maintenance mode.

I know a half-dozen companies which have independently implemented libraries to do this internally. The consensus is that it takes one developer approximately one to two weeks to get the library to "functional", and after a week of using it, your developers are never going back. It makes their lives so much easier for so little additional work (one if statement) that they'd be crazy to.
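A stripped-down version of such a library, in the spirit of those internal implementations, might look like the Ruby sketch below. The FeatureFlags class is hypothetical and stores flags in memory for illustration; real implementations typically persist flags in Redis or the database so toggles survive restarts and apply across all app servers.

```ruby
# Minimal in-memory feature flag store (hypothetical sketch).
class FeatureFlags
  def initialize
    @flags = {}  # feature name => { percentage:, users: }
  end

  # Turn a feature on for N% of accounts and/or a hand-picked user list.
  def activate(feature, percentage: 0, users: [])
    @flags[feature] = { percentage: percentage, users: users }
  end

  # The "world is burning" kill switch: turn the feature off everywhere.
  def deactivate(feature)
    @flags.delete(feature)
  end

  # A user sees the feature if hand-picked, or if their numeric ID falls
  # into the active percentage bucket. (Real libraries hash the ID so
  # buckets aren't correlated with signup order.)
  def active?(feature, user_id)
    flag = @flags[feature]
    return false unless flag
    return true if flag[:users].include?(user_id)
    (user_id % 100) < flag[:percentage]
  end
end

flags = FeatureFlags.new
flags.activate(:new_feature_x, percentage: 10, users: [4242])
puts flags.active?(:new_feature_x, 4242)  # hand-picked internal account: true
puts flags.active?(:new_feature_x, 250)   # 250 % 100 = 50, not under 10: false
```

The entire customer-facing surface area is that one `active?` check wrapped around the new code path, which is why teams report adoption is nearly free once the library exists.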

Many of you reading this are familiar with source control merge conflicts, where multiple team members are producing multiple features in parallel, and periodically work totally stops while somebody (generally a senior engineer who drew the short straw) gets to try to integrate all the disparate development branches into something which actually functions. A nice side-effect of using feature flags is that much development can be done on the same branch without stepping on each other's toes, allowing merges to be more frequent and less of a productivity-destroying ordeal.

Note that this gives you additional customer support options. Ever had a customer with a feature request you've had to answer with "Sorry, we're totally going to do that, but you'll have to wait until August for it"? Now you can tell them "We have a feature available on an experimental basis which meets your needs. Should I turn it on for you? I'd love to talk about your experience in using it, and if it doesn't work out, we can turn it off again." Happier customers giving you better and more immediately useful feedback: pure win.

There's some off-the-shelf OSS software which encapsulates feature flags for you (though, again, it's only a week or two to scratch-implement). Examples:

  • Gargoyle for Django, which has the UI built-in for you. (SoundCloud swears by this.)
  • Rollout for Rails
  • There exist libraries for .NET, Java, etc., but I'm less familiar with them. Check your local Googles for [feature flags $YOUR_TECH_STACK]

This is the capsule summary of implementing feature flags as a practice, by the way. When you actually do it, you'll have to decide things like "What happens to changes in the DB schema?" (Many companies will tell you: "We make them very carefully, infrequently, and keep them backwards compatible for the next few releases while we shake bugs out.") But it's fairly easy to dip your toes in the water: take one smallish feature out of your next scheduled release, code it in your deployment branch with a feature flag, and release it outside the normal release cycle to your internal users. After you haven't broken it for a day or three, release it to everyone all at once. Be amazed at how awesome this is, then try it again.

Build Channels For Installable Software: Multiple Builds In Production, Multiple Experiences

Feature flags are a great solution for SaaS, where you have total control over all customers' accounts with a mouse click, but they're less than ideal when customers install the software on their desktop, server, or mobile device. There is a good alternative for these scenarios, and you might have used it in Chrome already. Warning: this is very technically challenging to get up and running with.

The basic idea is that:

  1. We make our software capable of self-updating. (This can be totally automated or require user interaction.)
  2. We offer our users multiple update schedules to pick from, called Channels.

We then communicate with users like this: "The Bleeding Edge Channel gets a software update every night. It will break occasionally. On the plus side, you get the newest, best version of the software every morning. The Stable Channel gets a software update every two weeks. It's what we use on our machines -- it will rarely have very serious bugs in it, but generally has almost all of the new features available. Finally, the Managed Deployment Channel gets updated quarterly. It's the best choice for enterprises who want a predictable release schedule."
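Stripped to its essentials, the server-side decision of who gets which build can be sketched in a few lines. This is a hypothetical illustration of the channel policy described above, not Chrome's actual update machinery (which is far more elaborate):

```ruby
require "date"

# Hypothetical channel policy: days between updates for each channel.
CHANNEL_INTERVAL_DAYS = {
  bleeding_edge: 1,   # new build every night
  stable:        14,  # new build every two weeks
  managed:       90,  # roughly quarterly, for enterprises wanting predictability
}

# Decide whether a client on a given channel is due for an update,
# given the date of its last update.
def update_due?(channel, last_updated_on, today = Date.today)
  (today - last_updated_on) >= CHANNEL_INTERVAL_DAYS.fetch(channel)
end

puts update_due?(:bleeding_edge, Date.today - 1)  # true: nightly build is a day old
puts update_due?(:stable, Date.today - 7)         # false: only halfway through the cycle
```

The hard engineering work isn't this policy check; it's everything behind it -- maintaining multiple build branches, automated builds and testing per branch, and a safe self-update mechanism on the client.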

This gives you the best of both worlds: all the predictability of infrequent releases, for the customers who need that, but vastly improved productivity for your development team and company as a whole.

  • Feature delivery by Engineering is now continuous throughout the year, rather than artificially concentrated around release dates. This means their workload remains roughly constant, eliminating the crunch-and-recover cycle, resulting in happier and more productive employees
  • The cycle between feature delivery and customer feedback tightens, allowing you to iterate faster and more effectively
  • You get reports of bugs earlier, from customers they impact less, rather than later, from all customers at once. This increases the perceived quality of your software and decreases your support (and maintenance) costs.

Is it easy to get started with doing release channels? Nope. Every company I've ever talked with said this was a substantial engineering investment to get going. Quoting Dirk from Paessler again:

The big challenge was to set up a development environment that allowed us to work on three channels with a two digit count of developers at the same time. This environment had to have automated builds, automated testing, versioning, and had to support several development IDEs (in our case Delphi, Visual Studio, Aptana, our in-house translation and localization software, our in-house build system, our manual/documentation software, ISS setup programs, automated build and testing routines, etc.). Most of the time we are working on more than three branches of the software, too. This whole process took about a year to create.

I am personally convinced that "continuous" will become the new "normal" in software development. It already is for SaaS products. It already is for Chrome and Firefox. It already is for various open-source projects.

Why? In addition to the advantages described above, I believe that this new mode of development will be the only way to handle the ever-more-complex software projects in this world. Smaller steps, more often.

I'll stake any amount of money on Dirk being right.

Automated Testing: You're Smart. Do It.

I'm a Rails developer. Automated testing -- starting with unit tests and gradually getting more elaborate -- is not just the best practice in the Rails community, it is doctrine. You get your MacBook and you run your unit tests and you will like them or you'll be sentenced to write XML parsers for understaffed J2EE projects.

I fought this conclusion for years, but eventually decided to try it. And, holy cow, unit testing improved my engineering practices like no other change I've ever made. In particular, focusing my testing efforts on complicated code with subtle interactions:

  • got me into the discipline of writing less complicated code with more explicit interactions (which is both easier to test and, honestly, just breaks less)
  • prevented some truly catastrophic bugs from shipping
  • gave me more confidence to quickly address problems in the live service without worrying that the new re-deploy would cause core functionality to fail

It certainly isn't a panacea, but if you're not already testing, I encourage you to give it a try. Start small: write a few tests for happy and not-so-happy combinations of inputs to That Code. You know, That Code, the code that breaks more frequently than all the other code and that causes you the most prematurely lost hairs. We've all got That Code. Just spend an hour or two on it to start. Then, as you discover more problems in That Code, make sure they don't re-appear by writing a failing test that reproduces the issue prior to fixing it.
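That loop -- reproduce the bug with a failing test, then fix it -- can be tiny. Here's a toy sketch; the quantity_from_order_line helper and its bug are invented for illustration:

```ruby
# A hypothetical sliver of That Code: parse "widget x3" style order lines
# into a quantity. The original regex was /x(\d+)\s*$/ (lowercase only),
# so the reported bug was that "widget X3" silently became quantity 1.
def quantity_from_order_line(line)
  match = line.match(/x(\d+)\s*$/i)  # the fix: case-insensitive match
  match ? match[1].to_i : 1
end

# The regression tests, written to fail against the buggy version first:
raise unless quantity_from_order_line("widget x3") == 3   # always worked
raise unless quantity_from_order_line("widget X3") == 3   # the reported bug
raise unless quantity_from_order_line("widget") == 1      # default quantity
```

Three assertions, one fix, and that particular bug can never silently come back -- which is the entire point of regression testing.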

A lot has been written on good testing practices, and honestly I'm not an expert on them. Just treat this as one more +1 from somebody you trust: if you're not already testing, starting is one of the best things you can do this year, engineering-wise. It will make continuous rollout much, much easier than it would be otherwise, but even if you never do continuous rollout it provides huge incremental value for the business.

Got Questions?

Dirk and I are preparing an interview about these topics, because we're mutually passionate about them and think software companies will benefit. Do you have a question about either the business case or the engineering details? Drop me an email and I'll try to address it in the interview, coming soon to an inbox near you.

Until next time.

Regards,

Patrick McKenzie