DIY dark-launching feature toggle in 16 lines of Ruby

Dark launching and soft launching functionality are important ingredients of continuous shipping. By now, even two days of code changes piling up feel like more than necessary to me. Often enough, that’s because pending code changes aren’t complete enough yet to get into production, let alone be shown to anyone.

As a solution to this there are approaches like dark launches and soft roll-outs, but often enough they require code and tooling changes that get delayed until very late. That’s quite a bummer, since dark-launching is so helpful in moving fast, from both a technical and a product management point of view.

  • Developers are happy when they can get code into Production (“nice, it works, check!”)
  • Product Managers are happy when they can get early feedback, at least from a few select customers (and without having to sacrifice anything by publicly releasing something unfinished)
  • (The right group of) Users are happy if they get early access to features

Sounds like reason enough to stop NOT doing it right away, huh? ;)

I was in this very situation on a relatively new project, with a million things to do, and I decided to hack a DIY solution together and see where that got me. Turns out it only took a few lines of actual code and is already much, much better than nothing. Here’s the premise:

  • I just wanted to be able to hide access to a feature, i.e. hide the link in the nav bar that leads users to it (yeah, I know, but it’s ok in this case, and I bet in a lot of cases it is)
  • We’re on Heroku so I thought Heroku config vars would be a great way to control it (no code deploy necessary, but also no overhead with databases and backend access etc., Heroku already provides everything)
  • Toggling only on a per-user basis (we were THAT small, yes), no other fanciness like user groups, geographic distribution, load balancing or whatever (yet)

And here’s the code that made it work:

Quick run-down:

  • Called the class DarkLaunch, more because I liked the sound of it than for its correctness ;)
  • It has this one feature toggle method that can be used to surround links etc., a la “if DarkLaunch.feature_visible(…)”. It returns true whenever a particular user should see the feature at this moment, and false otherwise
  • It always returns false if there’s no current_user (since we’re toggling on a per-user basis)
  • It always returns true for Development and Test (which leads to other problems, but for the moment I liked having everything visible on Dev)
  • Each feature gets an identifier (like UPLOAD_PHOTOS) that is used when calling feature_visible()
  • It expects a Heroku config var named FEATURE_UPLOAD_PHOTOS
  • This config var is expected to contain a comma-separated list of the user IDs that should have access to the feature
  • feature_visible returns true if the ID of the given user is in that list
  • Or once we’re ready to make a feature public to everybody, we can just set the variable to “PUBLIC”. In that case it always returns true, without checking user IDs anymore
  • And otherwise it returns false, blocking the feature for everybody else
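
The whole rollout is then controlled from the command line, for instance like this (feature name and IDs are made up):

```shell
# Dark-launch the photo upload feature to users 17 and 42 only
heroku config:set FEATURE_UPLOAD_PHOTOS=17,42

# Once we're happy, open it up for everybody
heroku config:set FEATURE_UPLOAD_PHOTOS=PUBLIC

# Or pull it back entirely
heroku config:unset FEATURE_UPLOAD_PHOTOS
```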

Usage is then dead simple, as long as “launching” is as simple as showing something on the UI or hiding it:
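
For illustration, guarding a nav link in an ERB view could look like this (link text and path helper are made up):

```erb
<%# Only users on the dark-launch list get to see the link %>
<% if DarkLaunch.feature_visible("UPLOAD_PHOTOS", current_user) %>
  <%= link_to "Upload photos", photo_uploads_path %>
<% end %>
```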

It’s a bit of a quick hack, of course, and far from a complete or well-done (or flexible, or …) solution in so many ways. But it was great to see that a few lines of code added so much value to rolling out a feature. Feel free to let me know what you think or if you’re interested in using more of this. And who knows, maybe it becomes a little Gem… :-)


The Web / Mobile Feedback Loop

Backlogs for Web and mobile products don’t exclusively contain new features. One eye should always be on what has been done and how that is working out. A proper feedback loop gives valuable input that helps to determine what should be done next.

On the one hand, of course, there are the high-level goals and vision that define new features and the larger chunks of upcoming work (which just reminds me of this great article about how Spotify did prioritization in their early days). But then there’s more. For example, there are bugs, there are A/B testing results, there’s the Google Analytics account that somebody should actually have a look at, and so on. Most people know most of these, but mostly, they aren’t managed really well all together. So I thought a good start would be to list all those sources of input on the feedback loop that may (or may not) affect our priorities:

  1. The product vision (this is what your management and product managers want to do, the longer term goals, this isn’t actually on the feedback loop, I just wanted to have it on the list)
  2. Business figures (e.g. your sales numbers; I dare say this input is usually indistinguishable from #1 (because it comes from the same people?), but I’d argue that it’s “feedback”, unlike #1)
  3. Analytics (the likes of Google Analytics)
  4. Feedback that is built into your product (without being explicit feedback, it’s basically extracted from normal usage of the app)
  5. A/B Testing (e.g. Optimizely or a variety of other ways to do them)
  6. Explicit customer feedback (lots of sources here incl. all the feedback your customer support and sales teams gather, but there’s also tools you can use that allow your customers to give feedback online, e.g. murm.io (for specific feedback on your existing features) or tools a la Uservoice and ZenDesk)
  7. Crash reporting tools (Crashlytics, Crittercism, …)
  8. Dogfooding (your own company using your product, often this is a much smaller feedback loop since it allows you to get feedback on unfinished work that wasn’t even released yet)
  9. External ratings (e.g. what your users say about your app on Google Play and iTunes)
  10. Customer opinions out on the web (blogs, social media, very similar to the point before but wide-spread on the Internet)
  11. Beta testers and special user groups (there’s a bunch of tools that help you, e.g. Testflight)

This was just a first shot and I’m merely thinking out loud.

It’d also be interesting to see how all of these can be managed more effectively than by having different people “keep an eye on it” or having 13 different tools at our disposal to log in to and check regularly. I’d greatly welcome less overhead in collecting them, a better way to manage and follow up on them and make them part of the development process, and a lot more transparency for teams and stakeholders around them.

I’d be interested to hear what others think or whether there’s anything missing on the list above.

Fighting Scope Creep with the Techcrunch Test

No doubt scope creep is one of the biggest dangers to any software project. The possibility to build everything is just too tempting, and too often we think perfectionism is a virtue. Before we realize it, we’ve lost focus, gotten off track, and blown up the project far enough to be in trouble.

The Techcrunch Test (as I call it) originated for me during my work on a social local mobile app. Part of the job was to prioritize features properly and – at least equally important – find the right sizing for each feature. Thanks to iterative and incremental approaches, it’s not necessary to be complete and perfect on a feature the first time around. However, cutting features down enough that first time is often easier said than done. More often than not there were heated discussions about what needed to be done and what could safely be cut out and de-prioritized.

Techcrunch, one of the world’s most popular technology blogs, was one of the publications we were hoping to appear on. We knew we would have achieved something once Techcrunch started writing about us. (If you’re working in a different space, replace Techcrunch with a publication or authority that matters to you.)

The Techcrunch Test helps bring you back down to earth whenever you’re tempted to build too much or set the wrong priorities. For a feature X – or a part of a feature – ask yourself and your team the following question:

If, 3 months from now, we have failed and Techcrunch were to write about our failure, would Techcrunch say the following:

“If only they had introduced feature X they would have become successful!”

?

I promise you, in most cases the answer to this question will be No. And especially if you’re already doubting the importance of a feature anyway, the answer will almost always be No.

It’s that simple. Once you’re at the point of using the Test, pretty much nothing you’re putting into this question will appear important enough to be built afterwards. And there you go: don’t build it. Instead, focus on what really matters, focus on what sets you apart, focus on what is at the core of what you’re trying to achieve, focus on what brings you forward.

Focus on what Techcrunch would praise you for, some day.