Step 1: Define Critical Hypotheses to Test
Before creating the experiment, define the critical hypothesis you want to test. The key is to focus exclusively on the essential hypotheses and not get distracted by secondary ones. For my first experiment, I chose to test the single assumption underlying the problem and market hypotheses of the pivot I was engaged in: was managing cloud costs a product or a feature? Focusing upfront on the most essential hypotheses ensures the tightest possible experiment and produces higher-quality validated learning.
Step 2: Design an Experiment
There are many different types of experiments you can run in a B2B business. Some I considered included:
- The Concierge - Offer a people-led service to solve a specific problem for a business. It is essentially consulting executed with a tight scope of work that validates your critical hypotheses.
- The Letter of Intent - Get one or more companies to commit to purchasing a product given the delivery of specific functionality. The functionality can be sold using a video MVP, slide deck, or a proof of concept.
- The MVP - Build an MVP and run an experiment with trial customers.
- The Marketing Campaign - Design a marketing experiment for a yet-to-be-developed product or service, and evaluate the learning based on the response to the campaign.
- The Resale - Resell an existing product or service that delivers a value proposition that allows you to test critical hypotheses.
I settled on a combination of The Concierge and The Resale, and designed an experiment that delivered a people-led service backed by existing products already on the market.
Step 3: Define Pass/Fail Criteria
For an experiment to be scientific, it must be both measurable and empirical. Define upfront the exact process by which you will run the experiment, the number of customers involved, the data to collect, and how you will interpret the data at the conclusion of the experiment. Tightly defined criteria allow you and your team to be dispassionate when it comes time to interpret the results.
My first experiment was designed to run about a week per customer, across a total of five companies. It first tested whether customers would pay for a cloud cost consulting service, and if so, whether the work executed to deliver the service could in fact be automated with technology.
Step 4: Pull in Secondary Hypotheses
Now that you have a clear experiment defined to prove or disprove the critical hypotheses, you can optionally pull in secondary hypotheses. The key is to identify hypotheses that can be tested without substantially altering the experiment. In reviewing my validation board, I found 5-6 hypotheses that were either indirectly tested by the experiment, or could be tested by slightly altering it or its pass/fail criteria.
Step 5: Find an Advisor
Before running the experiment, I found it useful to engage an outside observer to provide support - ideally someone with minimal vested interest in the outcome. Engage the advisor to review the experiment, and make adjustments as appropriate based on their feedback. Schedule regular checkpoints with your advisor to review the data throughout the experiment lifecycle. A good advisor should be the Scully to your Mulder.
Note: Having an advisor does not mean you are allowed to have strong opinions. Do your best to divest yourself of any point of view regarding the results.
Step 6: Run the Experiment
Now it’s time to roll up your sleeves and run your experiment. Stick to your plan, share data openly across your team, and follow the data wherever it takes you. Once you have acquired the desired validated learning, be prepared to rinse and repeat.