How We Validate Hypotheses in Admitad Startups: Tracker’s Review
Table of contents
How Admitad Startups tests hypotheses
Example of running a validation cycle
Collabica case study
Problem 1. Lack of interest in MVP
Problem 2. Weak copy for our mailout
Problem 3. Wrong customer portrait
Our findings
The most common mistakes with testing
Why it is normal to fail with your hypotheses
In conclusion
Testing hypotheses in our studio. What mistakes are common during the validation process and why we encounter them. Our tracker Vera Ivanova shares her priceless expertise
People often ask me, “So what does a tracker do?” As a tracker in Admitad Startups, my job is to ensure that a startup’s team overcomes current obstacles. I help entrepreneurs set goals, build hypotheses, and test them.
I once heard that when choosing between something urgent and something important, our brain picks neither and goes for whatever is easiest. As I see it, my task is to keep startuppers from falling into this trap and get them to attend to what matters instead.
In this article, I’ll explain how Admitad Startups validates hypotheses in such a way that we, indeed, pay attention to the most important and urgent things.
How Admitad Startups tests hypotheses
We use the HADI cycle — a scheme for validating ideas. The acronym comes from the names of its key steps:
- Hypothesis. We pose an idea following this template: “If I perform action X, I might get result Y.”
- Action. After coming up with a hypothesis, we carry out action X.
- Data. We analyze the information produced by result Y of our action.
- Insight. Now we know what actually happens when we perform action X. Based on this insight, we can change our hypothesis.
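To make the write-it-down habit concrete, here is a minimal sketch of how a team could log a single hypothesis as it moves through the four HADI steps. It is purely illustrative: the HypothesisRecord class, its fields, and the sample values are assumptions made for this example, not a tool our studio actually uses.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HypothesisRecord:
    """One pass through the HADI cycle, written down as 'If X, then Y'."""
    action_x: str                         # H: the action we plan to take
    expected_y: str                       # H: the result we expect
    deadline_weeks: int                   # the agreed timeframe for the check
    observed_data: Optional[str] = None   # D: what actually happened
    insight: Optional[str] = None         # I: what we learned / how we adjust

    def statement(self) -> str:
        return f"If we {self.action_x}, we might get {self.expected_y}."

# Example based on the respondent-search hypothesis discussed later in this article
h = HypothesisRecord(
    action_x="launch the Respondent.org website",
    expected_y="10 CustDev interviews in 2 weeks",
    deadline_weeks=2,
)
print(h.statement())

# After the deadline, we fill in the Data and the Insight,
# then start a new cycle with an adjusted hypothesis.
h.observed_data = "fewer respondents than expected signed up"
h.insight = "this channel alone is too slow; try additional traffic sources"
```

However you record it, the point is that every hypothesis gets an explicit expectation, a deadline, and a written outcome, so you can later tell which action worked and which did not.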
Hypotheses are used to remove the specific limitations a startup encounters. Basically, the team identifies an obstacle and poses an idea of how to remove it. Then they carry out the plan and see whether their idea holds true.
Depending on what result the team wants to deliver, hypotheses can be strategic or tactical.
- Strategic hypotheses are all about the long term. You can’t just validate them in a snap. As a rule, they are subdivided into many smaller hypotheses or steps to achieve the stated strategic goal.
- Tactical hypotheses are about instant results. They allow you to get closer to testing the strategic ones. If the desired result can be obtained in just one action, it means that the hypothesis is tactical.
An example of a strategic hypothesis is, “If I study this market, then I can build a product for it.” However, market analysis is a huge multi-step process, so you can’t just perform a single action. It requires extensive work. Another example is, “If I conduct 10 interviews, then I’ll find the consumers’ pain points.” But to do that, you first need to understand what segment of the audience your product targets, where you can find these people, how you can contact them, etc.
A tactical hypothesis would be, “If I make this button on my website red, conversion will double” or “If I launch a landing page, I will get 10 applications.” There are no hidden variables in these ideas, so they can be checked almost instantly. The result is produced right after performing action X.
Whether a hypothesis is strategic or tactical, running a validation cycle takes a long time. Even the launch of a landing page consists of multiple subtasks, so preparing and performing your test might be a lengthy process. I want to stress this because first-time founders usually underestimate how much time everything takes, thinking they could be done in a week or two. But we’ll talk about timeframes in a bit — for now, let’s look at some examples for reference.
Example of running a validation cycle
Depending on the stage of the product’s development, a team creates different types of hypotheses because each stage has its own obstacles. A hypothesis might concern the product, market, team, hiring process, etc.
For instance, at the seed stage, projects typically conduct CustDev interviews. The main challenge at this point is that a startup doesn’t know where to search for respondents. That’s why the first hypothesis would be about where to find them. Say, it sounds like, “If we launch the website called ‘Respondent.org’, we will conduct 10 interviews in 2 weeks.”
Seems easy to do, but this hypothesis hides lots of subtasks. We will need to:
- Launch a website
- Schedule interviews with respondents
- Conduct our interviews
- Draw conclusions from the data we gathered, etc.
These are our milestones. Let’s imagine we agreed that the team will have them completed by our next tracker session. But success on the first try is rare; in 2 weeks, it might turn out that our Respondent.org website has failed to bring us as many people as we wanted. In this case, we understand that we need to 1) analyze why it didn’t work out and 2) take some extra action.
Producing the result (in this case, attracting 10 respondents) takes more time than we thought, which delays the validation process. So we change the hypothesis and run the HADI cycle all over again, learning what does and doesn’t work along the way.
At this stage, it is common to run into a wall. Don’t get discouraged, though. At an early stage of a startup’s development, most hypotheses suck. Imagine that you are in an open field. You need to blindly choose a direction and just go, exploring the environment without actually knowing where it will get you. There will be lots of “Oops, wrong turn, let’s start all over again” moments, since wasting some time and making wrong assumptions is simply inevitable. It might feel exhausting, but you need to be prepared for it.
The next iterations of testing hypotheses will have a similar structure.
- Step 1. We come up with a hypothesis related to the obstacle and/or limitation typical for the current startup stage.
- Step 2. Within this hypothesis, we set intermediate milestones and start implementing the tasks.
- Step 3. After the agreed deadline, we look at whether we succeeded or not, why our validation sucked (if it did), and try to draw some kind of conclusion.
Collabica case study
We are currently working on Collabica — an application that helps Shopify stores add products from the catalog to their assortment and sell them cross-store, increasing the average order value.
Recently, the founder proposed a hypothesis about finding customers, “If we offer our MVP via an email newsletter, we will bring in 10 clients in 2 weeks.” But in the process, we ran into 3 problems that greatly delayed the testing of this hypothesis.
Problem 1. Lack of interest in MVP
It turned out that potential customers showed little interest in our MVP. They thought it inconvenient since it required a lot of manual work. However, they would be willing to try the full product — which is still in development, so we couldn’t immediately show it to our potential clients and convert them.
In total, several dozen interviews brought us only one customer. Having an MVP instead of full-fledged software greatly reduced our conversion.
Problem 2. Weak copy for our mailout
Emails that we sent to our mailing list went right into the spam folder. This happened because we did the mailout from a brand-new account that hadn’t been properly warmed up yet. Also, the messages were so generic that people had no interest in opening and reading them.
Therefore, we needed to learn how to rewrite the emails so that they looked more personalized, making a potential client at least want to open them.
Problem 3. Wrong customer portrait
The first clients that Collabica converted were tiny shops with little to no traffic that produced a single sale from our catalogs. And that was only half the trouble: another category of leads was people who had no Shopify store at all.
It means that our marketing efforts brought in leads, but not the ones that we needed. Our mailout just failed to target well-established sellers. Therefore, we had to reconsider our customer profile and start looking for retailers with high traffic.
Our findings
In the end, we realized that our search field had turned out to be too wide and had to be narrowed down. We also may have underestimated other traffic sources. There’s more than one way to skin a cat, and email might just not be ours.
As a result, within our strategic hypothesis about bringing in 10 clients in 2 weeks, we built another dozen hypotheses about where to get leads from. So the whole process stretched out over several months.
The most common mistakes with testing
You might think that one day, you’ll be done with this damned validation and just move on. But here’s the thing. There’s no special moment for building hypotheses. It’s a never-ending process that keeps going and going — even well into a company’s development.
A hypothesis can be formed about any element of your business, be it your product, your marketing efforts, your sales team, even the founder. A simple guess like, “John isn’t good with this task, let’s give it to Mary instead” is also a hidden hypothesis that says, “If I delegate this task to Mary, our sales might grow by at least a few percent.”
So the mistake numero uno is that our teams fail to see hypotheses behind their assumptions. Very often, when a person says, “Change this one thing, and it will work beautifully”, they do not have this “If X — then Y” framework in their mind. But from a tracker’s point of view, even the founder’s grumbling about employee productivity is a hypothesis. We just don’t know for a fact that Mary will work better than John.
Why is this a serious mistake? If you do not write down your hypothesis, the incoming flow of tasks will quickly become overwhelming. Instead of the HADI cycle, you’ll have an endless grind with no clear results. In such a disorganized workflow, it is impossible to tell which action worked and which didn’t.
Another mistake is the lack of communication, especially when tasks are tied to different teams. You know how these things go: John expected Mary to send him a document, Mary forgot, and John thought, “Alright, I guess I’ll just succumb to these unfortunate circumstances that I absolutely cannot influence.” Thankfully, a tracker’s job is to facilitate communication within the team and make sure that everyone hears and understands each other.
Sometimes startuppers are scared of formulating hypotheses because these might turn out to be stupid. For perfectionists who want their hypothesis to be 100% precise, that possibility is utterly off-putting. Yet even the most primitive idea about the future application, platform, or website is better than none, and even the stupidest assumption makes the validation process more productive once it is turned into “If X — then Y”.
The last mistake is having unrealistic expectations when setting deadlines. At an early stage, there are many unknown variables, so anything can throw the team off schedule. Still, founders sometimes get stubborn: they are confident they can do the task quickly even though our studio’s experience suggests otherwise.
Why it is normal to fail with your hypotheses
A founder is a strong believer, a person of ideas. When you become invested in a concept, you develop blind spots in the very areas responsible for your understanding of the world and of how your product fits into it. Because of these blind spots, the founder does not question their beliefs. Instead, they will swear black and blue that the channel works well, that the product is needed in the market, and so on, even though there is no factual evidence.
On the one hand, this is a good thing. Any other person would have abandoned the idea after one or two unsuccessful attempts to implement it. Founders are a completely different species. They think, “It did not work out? Let’s try again.” This is exactly what allows them to keep building their startups. On the other hand, it takes its toll, and this toll is that founders stop seeing reality.
They form certain opinions about the environment they are in, but these are not always true to life. At times, passionate and impressionable startuppers lack an objective set of eyes to properly assess the circumstances. So it is necessary to bring a founder’s ideas about how this market and this reality work closer to the actual state of things.
To make this happen, a tracker needs to ask questions, making sure they are on the same page with the founder. Something like:
- “I see that you believe this channel works. But listen, I don’t quite get it. Can you explain how you confirmed it?”
- “We have now agreed that we need to work on this metric. But could you please tell me how this action you suggested contributes to it?”
It’s called hacking people’s defense mechanisms. Essentially, it’s all about acknowledging the possibility of failure. It’s about saying, “If everything goes well, we will keep up with our plans. But these plans might also fail spectacularly. You never know for sure.”
In conclusion
In the Admitad Startups studio, we use the HADI cycle to validate ideas. It is a 4-step process that keeps going regardless of the stage of business development.
- Build a Hypothesis (“If I do X, I might get Y”)
- Perform an Action
- Collect and analyze Data
- Form an Insight
Startup founders are strong believers in their concepts, which is why they sometimes get carried away. Building and testing hypotheses provides guidance and anchors them in reality, bringing their understanding of their startup’s achievements closer to how it actually performs in the market.
To avoid common mistakes in the process of validation, it is important to communicate with your team. Remember:
- Any unconfirmed idea, assumption, or opinion is a hypothesis, and you need to back it with real data and action.
- Even the stupidest hypothesis is better than none, so don’t be scared of looking like a fool if it fails. You’ll learn something anyway.
- Your tracker has already monitored dozens of projects. They know the results of many what-ifs, so it is more constructive to trust their experience (especially when it comes to estimating timeframes).
I hope I answered all your questions on how Admitad Startups tests its startup hypotheses. Thank you for reading!