October 17, 2019

Maximizing Return on Investment in Your Product

At Cronofy, we endeavour to spend our time well. Time is the lifeblood of any company: capital either buys you more time or allows you to bring on more people to increase your bandwidth. Time will always move forward and you want your product to move at the same pace. A lot of time can be wasted doing the wrong thing, and that's why we aim to commit as little as needed to test an idea at each stage along its journey to becoming a feature. It doesn't matter whether your engineers are 10x if they're implementing the wrong thing.
Garry Shutler
CTO and co-founder


Do as little as possible, but no less 👩‍🔬

Optimizing for feedback is built into the DNA of the company. It's what leads us to punch above our weight. Here's how we approach developing an idea into a feature before a single line of code is written:

  • Have a discussion about whether something is technically feasible
  • Add an entry to the "future" roadmap, perhaps with a paragraph or two to clarify what the feature aims to achieve if it isn't clear from the title
  • Wait and listen to customers and prospects (often someone will have triggered the initial idea)
  • Test the waters with prospects, being careful to apply the Mom Test, without committing to anything

The aim of all this is to build evidence that there is a need to be addressed. By this point, at worst a few hours have been spent on some initial investigation into feasibility. The conversations with customers and prospects would have happened anyway; we've just got some more value out of them.

This doesn't mean that we never build anything speculatively. Sometimes you have the conviction to build a car rather than a better horse, but over 90% of our features are proved this way before any significant commitment is made.

Having something in the roadmap gives us a place to collate ideas we come across incidentally. That could be quotes or use cases from customers, links to related documentation, anything that fleshes out either the idea or how it might be realized.

Output

  • A seed of an idea to be tested
  • A place to record findings
  • Total time risked: 1-2 hours

Bottom out the why ❓

When we've gathered enough confidence that there is a real need, we'll spend more time fleshing out why the feature should exist: what need is it filling? What problem is it solving? This provides useful context to anyone coming into contact with the feature later down the line, be that an engineer implementing it, a marketer describing it, or an account executive recommending it to a prospect or customer.

Even these few paragraphs or pages are open to scrutiny. Does it fully explain the "why", particularly without the context of any verbal conversations that have taken place? Do we have enough examples of the problems it should be solving? This initial draft will generally be written by one person and reviewed by another. Once both are happy, we share it in Slack so that anyone can provide feedback. If someone is expected to bring particular insight then their feedback will be requested explicitly, but ideally they'd be one of the two original authors.

This will take a day of collective time, and usually there will still be no commitment to delivering the feature. It primarily serves to allow people to test the water with more authority about what something might do, to allow engineers to evolve the system with some idea of what it may need to be capable of in the medium term in case there are opportunities to make that easier, and to better inform how much time it may take. This enables the collection of more precise feedback and insight from customers and prospects, and potentially saves time further down the line by reducing how hard the feature may be to implement if it comes to it.

Output

  • A better frame for discovery
  • A place to record findings
  • Total time risked: 1 day

Maybe now start on the "how" 📝

Up until now, we've been building a clear picture of the capability gap that's being filled. Inevitably some idea of how the problem will be solved has formed, but we try to keep that as abstract as we can for as long as we can.

The next stage is to flesh out the initial brief. It's easy to get carried away with everything a feature could address, but what we want to get to is what it must address. Hopefully, the chances of developing something that will not be used have been minimized through the previous activities, but there's still a possibility. The aim of this stage is to generate a brief to be implemented.

In the discovery phase many use cases may have been recorded, in which case these may be narrowed down or distilled to the most common ones we must solve. These become the yardstick by which the brief is measured. If it cannot be seen how they are covered by the brief, it is insufficient in scope or contains insufficient detail. Conversely, if there is functionality in the brief that doesn't apply to any use case, that's a sign it can, and probably should, be cut.

Depending on what the feature involves, some prototypes can help bring clarity. These may be visual mockups for something end-user facing, or example request-responses for API calls. The key is that they be lightweight: a sketch, not a render. They are intended to add context to the brief, not form a specification. Not only do we not want to paint ourselves into a corner, we don't want to discourage people from adding their perspective. A visual experience may be sub-optimal once realized or much harder to implement than anticipated; an API design may prove inefficient when the reality of implementation hits. We don't want to restrict a change in direction at a later date in the face of reality; we're making sure that such a change is made with knowledge of the underlying goal in mind.
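To illustrate how lightweight such a request-response prototype can be, it might be nothing more than a few lines pasted into the brief. The endpoint, field names, and values below are entirely hypothetical, invented for this illustration, and not part of the real Cronofy API:

```text
POST /v1/widget_preferences        (hypothetical endpoint, illustration only)

Request:
{
  "widget_id": "wgt_12345",
  "color": "blue"
}

Response: 200 OK
{
  "widget_id": "wgt_12345",
  "color": "blue",
  "updated_at": "2019-10-17T10:00:00Z"
}
```

A sketch like this is enough to discuss naming, shape, and workflow with customers, without anyone mistaking it for a committed specification.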
Once the brief is settled – again a couple of people will have worked on it together – it is shared internally. We will usually call out more people for explicit feedback than in the previous phase.

Output

  • A strong definition of the capability gap to be filled
  • A shared understanding of what we expect to deliver
  • Total time risked: 3 days

Share more widely 📡

We've gained sufficient confidence that there is a need to be satisfied, we know approximately what we are going to do to solve it, and we should have a good idea of how long it will take to deliver. The next step is to work out when we need to deliver it.

At this point we have something concrete rather than abstract, so it is a good time to share with the prospects and customers who showed an interest in the feature. This conversation also serves as a check that they still have the need, that they intend to use Cronofy to satisfy it, and when that is expected to happen.

To this point, only a few days have been invested in the feature. Usually the delivery will involve weeks, or even months, of effort. We want to avoid knowingly spending time developing a feature that won't be used for several months. It's much better to deliver a feature with a lower perceived value that will be used straightaway than a high-value feature that won't be used for several months. We want a return on the investments we make as soon as possible, in all things. The ultimate feedback comes from usage, something which cannot be replicated. Plus, customer priorities change all the time, the same way ours do. Doing something as late as possible, but no later, helps minimize this risk.

Again, at this point, a feature may pause on its journey through the lifecycle. We may have a customer who needed clarity on what we could deliver before they proceeded to the next phase of their due diligence, and they may not intend to integrate Cronofy for another six months.

Output

  • Confirmation of satisfying a customer requirement
  • An earmark of when this may be needed, usually no more precise than a given quarter
  • Total time risked: 5 days

Do we implement yet? ✋

We've tried so hard, and come so far. But in the end, does it even matter? Perhaps not. We've invested a week of cumulative effort so far, but it is much more expensive to implement, and then have to maintain, a feature that is not needed or at best has a niche market. Every change has a mostly intangible long-tail cost which, however pessimistic you try to be when estimating it, is higher than you expect. Particularly given our context as an API provider: as soon as we release a feature we're supporting it for a long time, whether we like it or not.

What do we implement? 🎯

With a customer or three champing at the bit for a feature, we can finally justify the investment in development, and the commitment to maintain it for the foreseeable future. But how do we go about implementing it?

Referring to the brief, we can start defining one or more specifications. The brief will outline the capability required for this phase and the total time we're expecting to invest to implement it, but there is often a smaller, not-quite-viable feature set that covers a specific end-to-end experience. This is often the best place to start to dig into the known unknowns, and to shake the tree for unknown unknowns. An end-to-end example will result in something tangible for everyone involved and give a better idea of how the constituent parts will hang together. Ideally this will be possible to deliver in a week or two; we're still trying to keep iterations very small, and we're now tackling the unknown of how the feature will fit into the system.

A specification is a more detailed version of a subset of the brief. It will often use the mockups or draft documentation within the brief as a starting point, but it is not tied to them. We're usually several months older and wiser since the brief was written and we want to do the best job we are capable of today. Ideally a specification will be developed by at least one person close to the implementation. We want as much context as possible to be in the mind of the implementer, as no form of communication is perfect. Specifications are cheaper to iterate on than code, and serve as a great check between what was written in the brief and what was intended. Again, our collective thinking may have moved forward, making the brief out of date; defining the work as close to the point of implementation as possible leads to the best implementation we are capable of. We don't treat this as a gated process: each specification can be implemented straight away.
We find this a useful process to ensure everything is thought through, and trust each other to solicit feedback as needed. While the specification is precise and focused, the brief acts as a guide to the implementer of what they may need to account for in the near future. This helps when evaluating implementation options, as a more informed choice can be made about which path is most likely to work out in the future. Predictions are never going to be perfect, but by being informed we hope to avoid spending time backtracking that could be spent improving other parts of the feature or working on something different altogether.

This end-to-end implementation will rarely be made available to customers, but it will be released to production. As much as unknowns are risky, the impact of a change on production is also a risk. The larger the change, the larger the risk, and so we minimize this by releasing often: even if something is not in active use, we then know it hasn't impacted any other feature unexpectedly.

Beyond automated tests, each change required on the journey to implement a feature will be measured against the related brief and specification. For example, specifications will often include drafts of our public documentation, so a common test is to ignore the code and follow the documentation, as naively as you can manage, to see if it guides you through the journey to a successful result. This can be great feedback on either the documentation or the errors returned by the API. The further towards sharing the feature we are, the more thorough this kind of check will be.

Output

  • Specifications to implement and measure the implementation against
  • A functional implementation that exercises things end-to-end to surface surprises
  • Total time risked: 2-3 weeks

Now we implement, right? 🛠

Yes, we've minimized the risks as much as we can without actually implementing the feature:

  • We've validated a capability gap that customers need
  • We've built up shared context around the capability and how we're going to address it
  • We've implemented something that, while limited, does work
  • We've released it to production to know it doesn't break anything else

The next milestone is to get something that fulfills the majority of the brief and could be shared with the known customers for their feedback. As with everything we do, we want the out-of-the-box experience to be a strong starting point – if not perfect – for 80% of cases, but give people the power to lift the hood and tweak to suit their needs. Perhaps counter-intuitively, that final result is best approached by starting from the foundations of a feature and working upwards, rather than starting from a more abstract idea and adding configuration options.

Once that foundation is laid, we will usually reach out to the customers we discovered earlier to get their feedback. The initial implementation may be a bit daunting to use at first, but we're going to be much more engaged in helping customers use the feature at this early stage, hearing where the rough edges are first hand. The documentation will improve throughout the process through having to explain it to another audience. We should also gain insight into common implementation patterns. This should lead to more sensible defaults, or a higher-level API that is easier to approach, than anything we could have arrived at by educated guess. The benefit to the customer is that they get their hands on the feature first, and they can influence what the earliest implementation is capable of.

Output

  • A useful feature!
  • Total time risked: 4-8 weeks

Hello, World! 👋

Once the majority of the brief is satisfied, a feature will usually be made more widely available by being publicized as an alpha feature. There may be some known capabilities we want to add, or rough edges when it comes to the overall development experience, but it will be both useful and stable to the best of our knowledge.

The goal of the alpha phase is to garner feedback from customers outside of the initial cohort. This can come from other customers we didn't realize had a requirement satisfied by the feature, or from new prospects. The feature having more visibility helps grease the wheels for a wider pool of feedback.

Development hasn't stopped. We'll continue until we've satisfied the entirety of our brief, and may have responded to feedback from customers, or, in the case of something relying on external parties, to the exotic edge cases you only find in the wild. Even several years in, I'm still occasionally surprised by what we come across!

Output

  • A very useful feature!
  • Total time risked: 6-12 weeks

Beta, baby ✊

Our brief satisfied, and customer feedback received and addressed, the feature will be moved to beta. Our guarantee of stability is even stronger than in the alpha phase, and the supporting tooling and documentation is up to the standard we expect of ourselves. The goal of this phase is to verify all those things to be true, generally by having several customers using the feature in production without issue. We generally do not expect to need to make any changes during this phase; we're happy the feature is done according to its current brief.

We're looking for end-user success as the proving point for this stage, not just implementation within customer products. By now we will know the feature can be used by our customers to improve their workflows, but we're looking for their use of it to be a success in the wild. As a company we'll be building marketing collateral for an appropriately sized launch campaign. The ultimate goal of a feature is not only to solve problems for customers, but to generate new leads from people looking to solve similar problems.

Ready for launch 🚀

The feature is proven and hardened. Marketing and sales collateral has been created. The next step is to truly announce it to the world as ready for everyone to make use of. This may sound like the end, but it's usually only the beginning. There are often many variants of the feature within the original brief that were cut out of scope. We won't develop those speculatively; we'll start the process all over again, but with a much shorter overall timeline as the foundation of the feature is already in place, in use, and built with half an eye on such things coming in the future.

Reducing uncertainty every step of the way 🔦

The essence of this approach is to reduce our exposure in terms of time spent without a return on our investment. We look at each stage and ask "what's the largest risk?" and then, "what's the cheapest way we can mitigate it?" With judicious application of the Pareto Principle we strive to maximize the amount of time we spend with a known return at the end. That approach is applied to everything we do, not just product development.