The Web / Mobile Feedback Loop

Backlogs for Web and mobile products don’t exclusively contain new features. One eye should always be on what has been done and how that is working out. A proper feedback loop gives valuable input that helps to determine what should be done next.

On the one hand, of course, there are the high-level goals and the vision that define new features and the larger chunks of upcoming work (which just reminds me of this great article about how Spotify did prioritization in its early days). But then there’s more. For example, there are bugs, there are A/B testing results, there’s the Google Analytics account that somebody should actually have a look at, and so on. Most people know most of these sources, but they rarely get managed well together. So I thought a good start would be to list all those sources of input on the feedback loop that may (or may not) affect our priorities:

  1. The product vision (the longer-term goals your management and product managers want to pursue; this isn’t actually part of the feedback loop, I just wanted to have it on the list)
  2. Business figures (e.g. your sales numbers; I dare say this input is usually indistinguishable from #1 (because it comes from the same people?), but I’d argue that it’s “feedback”, unlike #1)
  3. Analytics (the likes of Google Analytics)
  4. Feedback that is built into your product (without being explicit feedback, it’s basically extracted from normal usage of the app)
  5. A/B Testing (e.g. Optimizely or a variety of other ways to do them)
  6. Explicit customer feedback (lots of sources here incl. all the feedback your customer support and sales teams gather, but there are also tools you can use that allow your customers to give feedback online, e.g. murm.io (for specific feedback on your existing features) or tools a la Uservoice and ZenDesk)
  7. Crash reporting tools (Crashlytics, Crittercism, …)
  8. Dogfooding (your own company using your product, often a much shorter feedback loop since it lets you get feedback on unfinished work that hasn’t even been released yet)
  9. External ratings (e.g. what your users say about your app on Google Play and iTunes)
  10. Customer opinions out on the web (blogs, social media, very similar to the previous point but spread all over the Internet)
  11. Beta testers and special user groups (there’s a bunch of tools that help you, e.g. Testflight)

This was just a first shot and I’m merely thinking out loud.

It’d also be interesting to see how all of these can be managed more effectively than having different people “keep an eye on it” or having 13 different tools at our disposal to log in and check regularly. I’d greatly welcome less overhead in collecting them, a better way to manage and follow up on them and make them part of the development process, and a lot more transparency around them for teams and stakeholders.
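To make that a little more concrete, here is a minimal sketch of what a single aggregated feedback feed could look like. Everything in it is a hypothetical stand-in (the source names, the sample items); real adapters would call the APIs of whatever analytics, crash reporting, support, and store review tools you actually use.

```python
# A minimal sketch of one aggregated feedback feed. All sources and items
# below are hypothetical stand-ins; real adapters would query the APIs of
# the respective tools (analytics, crash reporting, support tickets, ...).
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class FeedbackItem:
    source: str        # e.g. "crash-reports", "store-reviews", "support"
    summary: str       # one-line description that's quick to scan
    created_at: datetime


def fetch_crash_reports() -> List[FeedbackItem]:
    # Placeholder: a real adapter would query your crash reporting tool.
    return [FeedbackItem("crash-reports", "NPE on login screen (42 users)",
                         datetime(2014, 3, 3, 9, 15))]


def fetch_store_reviews() -> List[FeedbackItem]:
    # Placeholder: a real adapter would pull recent store reviews.
    return [FeedbackItem("store-reviews", "2 stars: 'app drains my battery'",
                         datetime(2014, 3, 2, 18, 40))]


def aggregated_feed() -> List[FeedbackItem]:
    items = fetch_crash_reports() + fetch_store_reviews()
    # Newest first, so the team sees fresh signals at the top of one list.
    return sorted(items, key=lambda item: item.created_at, reverse=True)


if __name__ == "__main__":
    for item in aggregated_feed():
        print(f"[{item.source}] {item.created_at:%Y-%m-%d} {item.summary}")
```

The point isn’t the code itself, it’s having one place to look instead of 13 tools to log in to.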

I’d be interested to hear what others think or whether there’s anything missing on the list above.

Fighting Scope Creep with the Techcrunch Test

No doubt scope creep is one of the biggest dangers to any software project. The temptation to build everything is just too strong, and too often we mistake perfectionism for a virtue. Before we realize it, we’ve lost focus, gotten off track, and blown up the project far enough to be in trouble.

The Techcrunch Test (as I call it) originated for me during my work on a social, local mobile app. Part of the job was to prioritize features properly and, at least equally important, to find the right sizing for each feature. Thanks to iterative and incremental approaches, it’s not necessary to get a feature complete and perfect the first time around. However, cutting a feature down enough that first time is often easier said than done. More often than not there were heated discussions about what needed to be done and what could safely be cut out and de-prioritized.

Techcrunch, one of the world’s most popular technology blogs, was one of the publications we were hoping to appear in. We knew we would have achieved something once Techcrunch started writing about us. (If you’re working in a different space, replace Techcrunch with a publication or authority that matters to you.)

The Techcrunch Test helps bring you back down to earth whenever you’re tempted to build too much or to set the wrong priorities. For a feature X, or a part of a feature, ask yourself and your team the following question:

If, 3 months from now, we have failed and Techcrunch were to write about our failure, would Techcrunch say the following:

“If only they had introduced feature X they would have become successful!”

?

I promise you, in most cases the answer to this question will be No. And especially if you’re already doubting the importance of a feature anyway, the answer will almost always be No.

It’s that simple. Once you’re at the point of using the Test, pretty much nothing you put into this question will seem important enough to be built afterwards. And there you go: don’t build it. Instead, focus on what really matters, focus on what sets you apart, focus on what is at the core of what you’re trying to achieve, focus on what brings you forward.

Focus on what Techcrunch would praise you for, some day.

Building the Right Product with Hypothesis-Driven Development

In my previous post about Making Continuous Delivery work with Scrum and Sprints I wrote about how to shorten release cycles significantly by changing your process and adding the necessary amount of test and release automation.

A comment challenged that by basically saying “Well, this might help you build your product right (and in shorter cycles), but building the right product is a whole different question. And maybe the more important one.” Hard to disagree.

I wanted to dig deeper. These days you can’t go wrong by starting in the vicinity of Lean Startup if you’re looking for how to build the right product efficiently. As an engineer I’m familiar with a lot of X-driven development techniques, but then I came across one I hadn’t heard about before: Hypothesis-Driven Development.

The basic idea is simple:

  • Instead of requirements, you formulate assumptions, or hypotheses
  • At the same time you define a measurable signal that will tell you whether you were right or wrong in a reasonably short amount of time

This sounds like a great starting point for a structured approach that factors the question of the right product into your development.
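To illustrate what a hypothesis with a measurable signal attached could look like, here’s a minimal sketch. The hypothesis text, metric, and numbers are invented for illustration and not taken from the article; in practice the observed value would come from your analytics or A/B testing tool.

```python
# A minimal sketch of a hypothesis plus the measurable signal that will
# confirm or refute it. All names and numbers below are made up.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    statement: str      # the assumption replacing a classic requirement
    metric: str         # the signal we agree to measure
    baseline: float     # where the metric stands today
    target: float       # what we expect if the hypothesis holds
    window_days: int    # how long we give ourselves to find out


def evaluate(h: Hypothesis, observed: float) -> str:
    """Decide whether the observed signal supports or refutes the hypothesis."""
    if observed >= h.target:
        return f"SUPPORTED: {h.metric} reached {observed:.1%} (target {h.target:.1%})"
    if observed <= h.baseline:
        return f"REFUTED: {h.metric} stayed at {observed:.1%} (baseline {h.baseline:.1%})"
    return f"INCONCLUSIVE: {h.metric} at {observed:.1%}, between baseline and target"


if __name__ == "__main__":
    h = Hypothesis(
        statement="Letting users share posts via email increases weekly sharing",
        metric="share rate per weekly active user",
        baseline=0.08,
        target=0.10,
        window_days=28,
    )
    # Pretend we measured 9.1% after the window; a real signal would come
    # from an analytics or A/B testing tool.
    print(evaluate(h, observed=0.091))
```

The useful part is agreeing on the metric, the target, and the time window before building anything, so there’s no arguing afterwards about what “success” meant.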

But of course building the right product and building the product right aren’t mutually exclusive. Nor would I say one is more important than the other. They both are. Where hypothesis-driven development guides you to be intentional about your assumptions and the need to test them, good old-fashioned engineering techniques like test-driven development and test automation make sure you’re implementing your hypotheses right. Without being able to successfully (bug-free and all) deliver an increment of your software that aims at testing an assumption, you’re not going to get the right answers either.

The article I stumbled upon was http://agile.dzone.com/articles/hypothesis-driven-development which also links to a great presentation about Replacing Requirements with Hypotheses.

Eating your own dog food

Early feedback is important. The earlier in the development life cycle feedback comes in, the faster you can iterate, figure out what is working and what is not working, improve, and iterate again. You should release early and release often.

Releasing early and often usually aims at release cycles of something like 2 weeks. Depending on your kind of system, this can be shorter, but especially for native apps, much shorter release cycles aren’t really feasible. An even quicker way to get feedback is to put your software into the hands of your own colleagues and selected testers, constantly. Within your own organization, nobody prevents you from releasing continuously, as often as multiple times a day, without the overhead of an official release. You can then take the feedback from your own peers to iterate even faster. In modern tech slang this has become known as “eating your own dog food”.

Here at Klamr we try to get ongoing development into the hands of all our colleagues as fast as possible. The key to doing this is Continuous Integration; that’s where everything ties in. Here’s how we do it:

  1. Jenkins: We’re using Jenkins as our continuous integration server and use it to automate most of our tasks. For each project there is a Jenkins job that regularly pulls the latest code, builds it, tests it, and then distributes it (a minimal sketch of such a build-and-distribute script follows after this list). Jenkins is amazingly easy to set up and configure, yet incredibly flexible and powerful. Ever since we started using it, it has grown with us into dozens of very different jobs for pretty much every project we’re working on.
  2. GIT branching strategy: while working on new features we need to decide when exactly changes should be made available internally. The general requirements are to never break builds altogether and to not break core functionality. We don’t pull every single change that is made anywhere in the project. Instead, we hook our Jenkins jobs into our GIT branching strategy and give our engineers the responsibility to decide which changes are ready: they stay in control by pulling changes into certain branches only when those changes are ready.
  3. Schedule your distribution: depending on the project we either distribute immediately on every new change, or nightly. This is configured in Jenkins. My personal rule of thumb: the more transparent new versions are for your (internal) users, the quicker and easier distributions/deployments are, and the less frequent commits to your distribution branch are, the better it is to distribute changes immediately. When starting a new project, I generally start with immediate distribution. Once problems appear that can be solved by slowing down, switch to nightly distributions. Everything running server-side, like a web app, is completely transparent for users (just as it is in your production environment); new versions don’t disrupt anybody. That’s a good candidate for very frequent distributions. An iOS application, on the other hand, needs to be installed manually, so pushing out 20 new versions every day tends to be disruptive for everybody. The last thing we want is to make our co-workers feel disrupted and annoyed; that just leads to less and worse feedback.
  4. Distribute: the actual deliveries are all automated, but differ quite a bit depending on the type of software. Some examples of what we do:
    • Backend application: this gets built and deployed to internal servers. This is the most complex deployment process we’ve got; things like database migrations in particular make it less than trivial.
    • Web application: our klamr.to web application is deployed on every new change to an internal, protected web server. It is connected to our live database, so everybody in the company can use this web application instead of our live production web application. Changes here have sometimes been finished for only minutes before they become available.
    • Android: our Android app is distributed in two ways: new APK files are sent out directly via email (Android makes installing new APKs directly from email attachments so much easier than iOS does) and via the service Appaloosa Store. The latter has some nice advantages like providing a custom store app and push notifications for new versions.
    • iOS: our iOS app is distributed via Testflight. There are a few catches for iOS, for example that you need to build on a machine running Mac OS. That’s why we have a separate Jenkins instance just for building the iOS app; most other Jenkins jobs run on one Linux-based instance hosted on Amazon EC2. Also, devices must be explicitly registered in your ad-hoc provisioning profile, and Apple restricts the number of internal devices to 100. No rocket science once it’s all set up, but a few extra hoops to jump through.
  5. Real data: it’s important to allow internal users to use these early builds against their real Production data. Our web application, for example, runs on an internal URL but is configured against our Production servers and database. This allows us to test drive new features early on with our real accounts. It leads to much better feedback than asking people to test features on isolated servers with fake data and has helped a lot with internal acceptance.
  6. Automate: the key to all of this is automation. If it’s not automated, regular distribution either doesn’t happen or it wastes valuable engineering time. And as mentioned above, this all ties into continuous integration: much of the process and infrastructure described here should be in place anyway to continuously build and test your software in an automated way.
  7. Release notes: for us it has proven incredibly helpful to automate release notes for each internal distribution. Remember that one of the main reasons to do all this in the first place is to get early feedback. Without release notes, nobody can know what has changed and which parts of your apps to pay attention to. We’re not doing this everywhere, but where we do, we use GIT commit comments (see the second sketch after this list). They aren’t suitable for end users, but they are more than good enough for internal users.
  8. Respect: although these builds are only internal, we treat them with a lot of respect. This means we try to never break them (see above), we try to make using and updating them as easy as possible for our co-workers, and our engineers react quickly to any kind of feedback that comes in.
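
As referenced in point 1, here is a minimal sketch of the kind of script a Jenkins job could invoke after pulling the latest code. The individual commands are placeholders, not our actual setup (they differ per project anyway); the point is the fail-fast structure: never distribute a build whose tests didn’t pass.

```python
# A minimal sketch of a Jenkins-invoked build-and-distribute step.
# The commands below are placeholders; substitute whatever builds,
# tests, and distributes your particular project.
import subprocess
import sys

STEPS = [
    ["./gradlew", "test"],            # placeholder: run the automated tests
    ["./gradlew", "assembleRelease"], # placeholder: produce the build artifact
    ["./scripts/distribute.sh"],      # placeholder: push it to internal users
]


def run_pipeline() -> int:
    for step in STEPS:
        print("running:", " ".join(step))
        try:
            result = subprocess.run(step)
        except OSError as error:
            print(f"could not run step: {error}", file=sys.stderr)
            return 1
        if result.returncode != 0:
            print("step failed, aborting distribution", file=sys.stderr)
            return result.returncode
    print("all steps passed, build distributed to internal users")
    return 0


if __name__ == "__main__":
    sys.exit(run_pipeline())
```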
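And for point 7, a minimal sketch of how internal release notes could be generated from GIT commit comments. The tag name is an assumption made up for this example; a real job would move such a tag forward after every successful internal distribution.

```python
# A minimal sketch of generating internal release notes from GIT commit
# messages since the last internal build.
import subprocess
from typing import List


def commits_since(last_tag: str, branch: str = "HEAD") -> List[str]:
    """One-line commit messages between the previous internal build and now."""
    output = subprocess.run(
        ["git", "log", f"{last_tag}..{branch}", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in output.splitlines() if line.strip()]


def release_notes(last_tag: str) -> str:
    lines = commits_since(last_tag)
    if not lines:
        return "No changes since the last internal build."
    return "Changes in this build:\n" + "\n".join(f"  * {line}" for line in lines)


if __name__ == "__main__":
    # 'internal-last' is a hypothetical tag the job would move forward
    # after each successful internal distribution.
    print(release_notes("internal-last"))
```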

Regular internal distribution helps us to keep the feedback cycle as short as possible, sometimes even down to minutes. Automation of all the tasks involved helps us to keep moving fast, even as the number of systems and their complexity grows. I would highly recommend trying to automate as much as possible right from the start.

Are you eating your own dog food? What is your experience with this? Are you using different techniques and tools? Leave a comment, I’m very interested to hear what you’re doing.