The Impact of Survivorship Bias on Innovation Policy
There is often more to learn from failure than from success
In our recent review of best practices for innovation licensing, a number of interesting themes came up that we were not able to explore in full detail. One of these relates to the idea of survivorship bias.
You may have come across this idea through the textbook example of World War II fighter planes. If you are an engineer seeking to reinforce an airframe and you study the pattern of bullet holes on fighter planes that returned from a dogfight, the proper response is to reinforce where the bullet holes are not. The logic is simple: if you’re looking at it, it survived. Any bullet holes that you can see are not critical, since the pilot made it back to base. Planes that are hit in critical locations do not return to base, so you will never see those bullet holes.
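To make the logic concrete, here is a minimal simulation of the effect. Everything in it is invented for illustration (the locations, the uniform hit distribution, the survival rule); the point is only to show how the sample of returning planes inverts the true picture:

```python
import random

# Illustrative model: each plane takes one hit in a random location.
# Hits to "critical" locations (engine, cockpit) down the plane; hits
# elsewhere (wings, fuselage) are survivable. All of these assumptions
# are made up for the sake of the example.
LOCATIONS = ["engine", "cockpit", "wings", "fuselage"]
CRITICAL = {"engine", "cockpit"}

random.seed(42)
actual = {loc: 0 for loc in LOCATIONS}    # hits across *all* planes
observed = {loc: 0 for loc in LOCATIONS}  # holes seen on returning planes

for _ in range(10_000):
    hit = random.choice(LOCATIONS)  # hits land uniformly at random
    actual[hit] += 1
    if hit not in CRITICAL:         # only non-critical hits make it home
        observed[hit] += 1

print("hits actually taken:   ", actual)
print("holes seen at the base:", observed)
```

In the simulated sample, engines and cockpits show zero holes despite being hit just as often as everything else: the engineer at the base sees bullet holes only in the places where bullet holes do not matter.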
In other words, correcting failures is less about hearing out the challenges experienced by those who succeeded than it is about engaging with failures and mapping the empty spaces between successes. Luckily for us, engaging with failed technology commercialization attempts and failed policy initiatives is a lot easier than recovering a downed airframe from behind enemy lines.
As I get deeper into the literature on effective innovation and technology commercialization policy, I see a significant gap with respect to failure analysis across the board: almost nobody studies failed startups or failed innovation policies, precluding any possibility of learning to avoid whatever issues caused those failures. We can study companies that survive the gauntlet of early-stage and deep tech commercialization in as much depth as we want, but in doing so we will not learn about insurmountable challenges. Any company that succeeded was, by definition, able to overcome everything that was thrown at it.
There are many well-accepted concepts that I have come across in my review of early-stage innovation resources that I suspect are based on survivorship bias rather than on any actual cause-and-effect relationship.
In this post, I explore some of the empty spaces between success stories. This post is not so much about drawing concrete conclusions—after all, I lack the data to do so—as it is about trying to shine a light on some dark corners of innovation policy in which real insight is to be found, if only we would go looking.
Self-fulfilling prophecies
I have written previously about the importance of startups in the process of technology commercialization, and there is a wealth of literature and practical examples at the ecosystem level making a strong case that startups are better vehicles for disruptive innovation than large, established companies. Smaller companies are agile, can move quickly, and can go after small or even as-yet nonexistent markets, whereas incumbents are encumbered by procedural overhead and a mandate to deliver short-term profits over long-term disruption. This is not a new idea: Christensen wrote about it in great detail in his iconic 1997 book “The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail,” and numerous authors have since built on it. Innovation is rarely profitable in the short term.
On the other hand, if you look at the fraction of successful attempts, venture-backed startups appear to be poorly suited to technology commercialization outside of biotech, an argument made eloquently in Dan Breznitz’s book “Innovation in Real Places,” among others. Arora et al. (2007) have singled out new drugs as being more effectively commercialized by established companies, and numerous authors have pointed to the ineffectiveness of the traditional VC model in cleantech and climate tech. Malek et al. (2014) highlight the importance of dedicated accelerator resources in cleantech, demonstrating clearly that bringing these innovations to market requires managing complex stakeholder relationships that small companies rarely have the resources to handle on their own.
There is nothing wrong with any of these analyses. The problem lies in the conclusion policy makers draw from them: that if you want to commercialize a new drug, or bring a climate innovation to market, you need to do it via a large, established company. By not engaging with the cause of the performance difference, we create self-fulfilling prophecies, reinforcing where the bullet holes are rather than where they are not. We have over-indexed on the outcomes we can see, rather than attempting to understand the process failures we cannot.
The common theme linking the tech sectors where startups struggle even when venture-backed is the existence of significant regulatory hurdles or a need to engage with complex bureaucracy. Bringing a new drug to market involves a massive regulatory burden and careful trials and approvals (and rightly so). Bringing cleantech to market often means engaging with municipal, provincial, and federal government stakeholders simultaneously, none of whom talk to each other and all of whom have constructed procurement processes that are difficult to navigate without internal know-how. It is no surprise the VC model fails here, since most VCs have no idea how to navigate these issues either.
In short, I do not think the disproportionate failure rate of small companies in these spaces is an inherent feature of small companies, nor is it necessarily a feature of specific technology sectors, so much as it is a failure to learn from and correct the issues at play. Large companies with dedicated internal resources are better equipped to navigate bureaucracy, but in concluding that we should therefore rely on established companies in these technology sectors, we filter out true disruption, since it is often not in an established company's best interest to disrupt.
It is not all bad: as noted earlier, there are systems in place that (at least as of 2014) were assisting with cleantech commercialization in startups. More recently, Phil de Luna and Deep Sky have been providing a platform and means to get a diverse set of cleantech projects off the ground, essentially wrapping lots of small companies in a larger umbrella aimed at bridging the scale gap, a direct response to the actual failure modes of cleantech startups. Perhaps medical innovation is in need of something similar?
Failure analysis in public policy
This idea generalizes to public policy initiatives: failure analysis is one of the more valuable things the public sector can do with respect to innovation, given that it holds all of the related data.
I wrote previously about the failures of both Innovative Solutions Canada (ISC) and the Digital Adoption Program (DAP). These programs were undersubscribed, but instead of conducting an analysis to learn why, both had their budgets cut until they were in line with what was already being spent. ISC is still technically alive, but I have yet to come across any public analysis of what went wrong and why. Instead of cutting underperforming programs and learning from their failures, we create new ones; at last count there were 140 of them.
As far as I can tell, part of the issue is that failure is embarrassing and invites public criticism, compounded by the mismatch between innovation impact timelines and election cycles. As a result, the KPIs by which we evaluate the individuals involved are misaligned with any willingness to acknowledge or learn from program failure. Relatedly, the metrics we collect for most innovation programs are short-term (jobs, revenues, etc.) and are not actually connected to useful outcomes.
ISC failed because of scale. It tried to copy the SBIR, but did not account for the fact that demand-driven policy levers require actual demand to work. Importing an American policy verbatim into a system an order of magnitude smaller, and with a lower risk tolerance, was always a nonstarter.
The DAP failed because of a mismatch between its eligibility criteria and the companies that could actually have benefited from it. By gating access at $500k in revenues, it excluded the entirety of pre-revenue technology development, for which $15,000 in digital adoption support is actually useful; and by offering a flat $15,000, it made the grant too small to be worth the effort for companies in the scale-up phase. There were not enough companies that were both eligible and for which the grant had value to fill the program, so it was shut down before reaching its original budget allocation.
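A rough sketch of that mismatch as two filters that barely overlap: the $500k gate and the flat $15,000 grant come from the program, but the revenue values and the "worth the effort" threshold below are entirely hypothetical, chosen only to illustrate the shape of the problem.

```python
# Hypothetical sketch of the DAP mismatch: model each company by annual
# revenue and ask (a) was it eligible, and (b) was a flat $15,000 grant
# worth the application effort? Revenue values and the threshold are
# invented; only the $500k gate and the $15,000 amount are real.
ELIGIBILITY_GATE = 500_000   # minimum revenue to qualify
GRANT = 15_000               # flat grant amount
WORTH_IT_FRACTION = 0.01     # assume the grant matters if >= 1% of revenue

hypothetical_revenues = [0, 0, 50_000, 200_000, 600_000,
                         2_000_000, 10_000_000, 50_000_000]

eligible = [r for r in hypothetical_revenues if r >= ELIGIBILITY_GATE]
would_benefit = [r for r in hypothetical_revenues
                 if GRANT >= WORTH_IT_FRACTION * r]

overlap = set(eligible) & set(would_benefit)
print(f"eligible: {eligible}")
print(f"grant is meaningful for: {would_benefit}")
print(f"both eligible and meaningfully helped: {sorted(overlap)}")
```

With these made-up numbers, only companies between $500k and $1.5M in revenue land in both sets, a narrow band that is the under-subscription problem in miniature.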
Thoughtful failure analysis should not be a purely post-mortem activity, however, because policy success and failure are not binary. Even existing programs that are generally considered effective in their niche, like SR&ED, IRAP, and others, have their stories of failures, applicants who fell through the cracks, and value that was lost as a result, but there is no engagement with or connection to the companies that do not get in, that close down, or that leave the country in response. We celebrate the numerator of successes without considering the denominator of attempts. I have no doubt that if we went one by one through Senator Deacon’s list, we would find the same story repeated over and over.
Without a willingness to acknowledge and understand all outcomes, and to iterate on and improve our innovation policy frameworks, we will continue down unproductive paths, as has been obvious for a while now as we continue to stagnate in OECD rankings.
Learning from failure
If there is one thing I know about the innovation community, it is that being a part of it instills a “pay-it-forward” ethos and a willingness to share scar tissue in support of the next generation. That is in no small part why CanInnovate exists at all.
What I wrote above is mostly speculation informed by anecdotes and first-hand experience. It is not evidence backed by data, and at present it cannot be, since data on failures is usually not collected or made public. As an ecosystem, we need to address this through long-term data collection on the impact of policy initiatives, without being afraid to show the world that we sometimes fail.
If you have attempted to bring an innovation to market in Canada and failed, I want to hear from you. Send me an email with a summary of your story, along with any reflections you want to share on why your attempt did not work out. I will collect these stories, and once I have enough, I will anonymize and share some of the lessons in future posts.
Help us reinforce where the bullet holes aren’t.