Startups are a game of making high-impact decisions under high uncertainty. In this opaque environment, many startups have opted to make quick, intuitive decisions and “fail fast.”
And there’s some sense to this strategy. Speed is important. All else equal, you’d rather be the startup that can get through 10 decisions in a week than the startup that just gets through one. Each decision and the execution upon it provides the opportunity to learn and iterate.
This is all well and good, but there’s an underlying assumption in many startups that high uncertainty outcomes shouldn’t be measured much (if at all). Implicit in the assumption is the belief that high uncertainty outcomes can’t be measured. Or at least not reliably.
For this reason, many startups have been allergic to forecasts and processes around decision making. Making clear and thoughtful predictions using a formal model reduces speed. And if there’s a distrust around “predicting” uncertain outcomes, it’s easy to see these models as a waste of time.
While I agree that processes can reduce speed, forecasting and measuring to reduce uncertainty can be a highly effective use of time.
It’s easier to reduce uncertainty around a decision that you’re highly uncertain about than it is to reduce uncertainty on a decision you have near perfect foresight on.
If you have literally zero certainty about an outcome, just about any information you gather will reduce uncertainty and is therefore likely to be valuable. But for a decision whose outcome you’re nearly certain about, reducing the remaining uncertainty becomes increasingly costly, because there’s not much uncertainty left to reduce.
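One way to make this intuition concrete (a toy sketch of my own, not anything formal from the research above) is to borrow from information theory: a 50/50 belief carries a full bit of uncertainty, while a 95% belief carries far less, so there’s simply more uncertainty available to be reduced in the first case.

```python
import math

def entropy_bits(p):
    """Shannon entropy (in bits) of a yes/no belief held with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty left at all
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A coin-flip belief vs. near-certainty about a decision's outcome.
total_doubt = entropy_bits(0.5)    # 1.0 bit: maximum uncertainty, most to gain
near_certain = entropy_bits(0.95)  # ~0.29 bits: little left to learn

print(round(total_doubt, 2), round(near_certain, 2))  # prints: 1.0 0.29
```

The numbers just restate the argument: measurement buys the most when you start from near-total ignorance, and progressively less as your foresight improves.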
So the question shouldn’t be whether forecasting makes sense, but when it makes sense. For which decisions would reducing uncertainty be valuable? And how can startups set themselves up to reliably measure and reduce their uncertainty on the decisions that matter?
There’s a popularized image of a singular CEO boy genius who makes unilateral decisions for their startup. And if business is good, it’s assumed the decision maker is good.
But does this really suffice for evidence of the decision maker’s performance?
If a startup is not tracking the process, forecasts, and results of the decisions that make for a startup’s success (or failure), how do we know that the CEO (or any decision maker in the startup) is making effective decisions?
Maybe startups don’t spend much time on decisions or developing a decision making process, because they haven’t been tracking the data that would indicate there’s a problem to address.
And maybe the decision makers in power don’t really have all that much incentive to rock the boat. How safe is their job if they put tracking in place only to reveal that they’re not quite the savvy decision makers they made themselves out to be?
I started to think deeply about decision making when I came face-to-face with the results of a few bad decisions in my own startup. As I reflected on the process leading up to those decisions, I began to wonder whether the mistakes could have been avoided.
So I dove into any and all research I could find about decision making, and it’s taken me down a year-long rabbit hole that I’m still sorting through.
When I started, I held my intuition and decision making in pretty high regard. But the more I read, the more I realized that I’d deluded myself.
I’d bought into the unilateral decision maker ideal, and I must have been trying to act the part. But after reading the work from Tetlock, Kahneman, Dalio, Taleb, and many others, I began to see that my intuition was blinded by a cornucopia of biases.
So I got to work. I began searching for processes and quantitative models that have been proven to reduce human bias and net better forecasts and decisions than expert intuition alone.
But as I researched, a problem became clear. The tools on the market were crap.
If a tool wasn’t bloated with features irrelevant to the decisions startups make, it was horribly complex and difficult to learn. And since crowdsourced expertise is a key part of effective decision making, the training costs for a startup would be hard to swallow.
And there was nothing like a one-stop-shop. A startup would have to stitch together at least a handful of disjointed tools if they wanted to apply the lessons backed by science.
As I worked to sort through the mess, I wondered if this couldn’t be easier. Why isn’t there a software solution that’s purpose-built for effective decision making in startups?
Well for one, it’s a hard problem to solve. There’s a lot of tools, processes, and quantitative models that would need to be incorporated, and in a way that requires as little training for the decision makers as possible.
And secondly, it’s not really a problem that most startups believe needs solving. Some might not be aware there’s a problem in the first place!
But a journey isn’t fun if you don’t have to go way the fuck uphill, right?
Lord of the Rings wouldn’t have been much of a story if Frodo had walked over to his stove, thrown the ring in the fire, high-fived Sam, and rolled credits.
So while I’m confident this is a problem worth solving, I’m not sure I’m the right person to solve it. Not yet, at least. I have more to learn and another startup that I’m committed to. I don’t think this is a part-time gig, and I’m not in a place to forgo my other duties.
But it’s something I’m noodling on. And it’s something I plan on testing in my own startup and life.