Doing just enough and choosing the right tests for Lean product development

In traditional product development, people love the ‘big-bang’ approach – they like to have something gold-plated, perfect, fully realised and feature-complete, with all the bells and whistles designed and delivered from day one.

The problem with this is that it takes a lot of time, money, and effort. And you don’t need all the bells and whistles to prove that your product idea is worth pursuing – you only need to do just enough.

Lean product development is built around this idea of doing just enough – we build just enough, document just enough, design just enough, and test just enough to deliver a product or prove a hypothesis.

For example, central to Lean is delivering a Minimum Viable Product (MVP), a version of the product with just enough features to validate the idea. It’s the opposite of the ‘big-bang’ approach.

Anything delivered in excess of what’s defined in the MVP is called waste in Lean terms. And when trying to bring an innovative product to life quickly and efficiently, you have to do all you can to eliminate waste.

Lean seeks to do this by focusing on pace, rather than perfection, meaning you deliver sooner, save money, and accelerate your learnings.

Part of this is about developing a great hypothesis that articulates the assumptions about your product, your customers, or the market. In the last blog, we demonstrated how to write a great hypothesis that’s:

  • Exact so you have a clear idea of what success looks like;
  • Discrete so you’re only answering one question and not several at once;
  • Testable so that your assumptions can be proven true or false and you can realistically do that in your work environment, budget, and timeframe.

In this blog, we’re going to focus on that last point – testing – and how you can apply the Lean approach of doing just enough to test your hypotheses. We’ll look at the different types of experiments you can run and share our tips on how to approach testing.

Different types of experiments for different types of hypotheses

When we talk about running experiments, people naturally jump to running clickable prototypes.

But there are many different types of test that you can run, so the key is identifying the right test for the right hypothesis.

As we mentioned in the last blog, a good hypothesis should ask a question about either your product’s desirability (do customers want it?), feasibility (can we build it?), or viability (can we make money from it?).

Therefore, different questions or assumptions require different tests:

DESIRABILITY

  • Create an online ad for a product that doesn’t exist and see how many people click on it
  • Invite customers to use a feature that’s not yet ready and see how many people sign up (this is known as a fake door)
  • Create an explainer video for your product and record customer feedback to it
  • Build a clickable prototype of your product that you can put in front of customers and record their feedback

FEASIBILITY

  • Perform a Wizard of Oz test where you effectively present the illusion of an operational solution to users to gauge their response
  • Build a single-feature Minimum Viable Product (MVP), a basic prototype of your idea to validate your hypothesis with real technology or real data
  • Conduct role play sessions like the ‘speed boat’, in which the team assesses its ability to build a product through the metaphor of a boat trying to reach an island

VIABILITY

  • Conduct interviews with customers and industry experts to gather qualitative data about whether customers will pay what you need to make the product financially viable
  • Create a brochure for your product, including its price point and key features, and solicit customer feedback on it
  • Run a social media campaign to gauge customer interest in a product idea and, in the case of crowdfunding, even raise funds to realise the idea

Each of these tests has pros and cons, so it’s crucial that you bring it back to your hypothesis and select the test that will actually answer the question you’re asking.

Testing artificial intelligence feasibility for a legal firm

We recently did some Lean product development work with a legal firm. The engagement centred on validating a technology-based hypothesis: was it feasible (and viable) to use AI to classify documents in a corpus for due diligence?

This process is typically done manually by paralegals. In the legal world, reducing that time-consuming manual work represents a significant cost saving – and would free up paralegals to do more valuable activities.

In just six weeks, we built a Lean MVP using a representative sample of 1,000 legal documents to validate whether AI classification of the documents, paired with a ‘Tinder swipe’ style frontend for the paralegals, would be usable and reduce the time the review process takes.
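To make the idea of AI document classification a little more concrete, here is a minimal, self-contained sketch of a bag-of-words Naive Bayes classifier. The categories, training snippets, and scoring below are illustrative assumptions made up for this blog, not the classifier we actually built for the client:

```python
import math
from collections import Counter, defaultdict

# Illustrative training data: tiny labelled snippets standing in for the
# kinds of due-diligence categories a real legal corpus might use.
TRAINING = [
    ("employment agreement between employer and employee salary", "employment"),
    ("employee benefits and employment termination notice", "employment"),
    ("lease of premises rent payable by the tenant to landlord", "property"),
    ("landlord grants the tenant a lease over the property", "property"),
]

def train(examples):
    """Count word frequencies per label for a Naive Bayes model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the most probable label using log-probabilities with add-one smoothing."""
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # prior
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log(
                (word_counts[label][word] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAINING)
print(classify("notice of termination of employment", *model))  # "employment" on this toy data
```

A production version would use far richer features and models, but the shape is the same: learn from a labelled sample, then score each new document against the candidate categories.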

The outcome was effectively a business case – yes, AI can be used to classify documents and, with a suitable frontend, the paralegals would want to use the platform.

And, accounting for hosting costs, build costs, and increased billing hours, the return on investment for this solution would start within a year and generate around £500,000 annually. Not to mention the less quantifiable benefits of improved output quality, reduced opportunities for human error, and faster results for the client.
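As a rough illustration of how quickly a return like this pays back an initial build, here is a back-of-the-envelope calculation. The build and hosting figures are assumptions invented for this sketch; only the roughly £500,000 annual return comes from the engagement above:

```python
# Illustrative payback calculation -- build and hosting figures are assumed,
# not the actual (non-public) numbers from the engagement.
build_cost = 150_000       # one-off build cost (GBP, assumed)
annual_hosting = 20_000    # recurring hosting cost (GBP, assumed)
annual_benefit = 500_000   # approximate annual return cited above (GBP)

net_annual_return = annual_benefit - annual_hosting        # 480,000 per year
payback_months = build_cost / (net_annual_return / 12)     # months to recoup the build

print(f"Payback in {payback_months:.1f} months")
```

With these assumed figures the build cost is recouped in under four months, which is how an ROI can comfortably start within the first year.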

Some tips for testing

From here, we can offer some advice on how to approach testing and what to do with the results:

  1. Constantly bring it back to the vision – People will always want to add more, whether it’s questions the hypothesis should answer or the number and type of tests you do. Don’t be afraid to push back by returning to the guiding question: Do we need to do that? What value does it add? How does it help us meet our core customer need? Does it need to be perfect to get the results we need?
  2. Act according to the results of the experiment – Stop if the experiment tells you to stop. If the experiment indicates that there is no feasibility, desirability or viability, it's not worth sinking more time or money into the idea. Either drop the idea, or pivot and change to one that is going to add value. It’s crucial to get senior stakeholder buy-in at the testing phase and ensure they understand the Lean approach so you can do the right thing and act according to the results of the experiment.
  3. Don’t just experiment without adding value – Every experiment should add value to your business in some way. In today's rapidly changing world, just experimenting (or trying to innovate) without adding value is a waste (excess). Make sure that you add value with every experiment you run and, if necessary, pivot early to deliver just enough to get the most value from your experiment.

Final thoughts on testing

Testing is a key part of the Lean product development process – it generates the learnings that fuel the engine of Lean.

But it’s just one part of the process, and you need to be able to take the results from your test and feed them back into the learning loop that drives further product innovation.

If you want to know even more about how to put it into action, you can join our upcoming webinar: “The Bare Necessities: How a Lean approach to product development can deliver more value with less work”.

In this online session, Alice Kavanagh, Tim Walpole and I will demonstrate how we’ve used rapid cycles of hypothesis-driven learning and experimentation to help clients deliver more value with less work.

Register for this event