Bootstrapping an Innovation Ecosystem
A self-sustaining framework for investing in emerging technology
Simulating venture philanthropy
This post presents a project that has been in the works for a long time, one that combines everything I’ve written about risk tolerance, venture philanthropy, and the Simple Agreement for Innovation Licensing (SAIL) into a single, cohesive framework for investing in emerging technology. Using data from PitchBook, I built a simulation model that back-tests the performance of venture philanthropy as an investment strategy for emerging technology. A pre-print of the paper is available at the link below.
The results in the paper validate the thesis that investing in emerging technologies is primarily a numbers game, and that attempting to pick winners actively drives underperformance. It shows that there is good reason to believe that venture philanthropy will be self-sustaining if (and only if) it operates at the right scale, and it explains clearly why neither the public nor private sector can address the problem alone—risk must be shared for this to work. It also shows that investing in emerging technology is worth doing even if it fails more than 96% of the time.
Putting risk tolerance under the microscope
It’s now well established that Canada is great at research but struggles to realize socioeconomic value from the results. For as long as CanInnovate has existed, I have argued that this problem is structural and rooted in risk intolerance. The thesis is not all that complex, and relies on three assumptions:
that emerging technologies follow an extremely skewed distribution of value in which a minority of technologies produce a majority of the return;
that it is practically impossible to predict which technologies will ultimately be in the valuable minority in the early stages; and
that the value of the small number of successes more than offsets the cost of the majority of failures.
The first two assumptions are, I think, well established at this point. The third one requires validation. The purposes of this paper were to demonstrate, using real-world performance data from university-based startups around the world, that the third assumption holds true, and to identify precisely why neither the public nor the private sector alone has yet addressed the failure to bring emerging technologies to market.
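As a rough, back-of-envelope illustration of what the third assumption requires, consider the headline failure rate of roughly 96% quoted above. The sketch below uses a hypothetical 40x average multiple for the winners; that figure is an assumption for illustration, not a result from the paper.

```python
# Back-of-envelope check of the third assumption. The ~96% failure rate is the
# headline figure quoted above; the 40x winner multiple is a hypothetical value
# chosen for illustration, not a result from the paper.
failure_rate = 0.96
win_rate = 1 - failure_rate            # ~4% of investments succeed

break_even_multiple = 1 / win_rate     # average multiple the winners must return to break even
print(f"Winners must average {break_even_multiple:.0f}x just to break even")

hypothetical_winner_multiple = 40
expected_multiple = win_rate * hypothetical_winner_multiple
print(f"Expected portfolio multiple at 40x winners: {expected_multiple:.2f}x")
```

In other words, at that failure rate the winners need to return roughly 25x on average just to break even, which is why the skewness of the value distribution is doing all of the work.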
If these assumptions are sound, it follows that the only “winning” move is to invest in almost everything, accepting the risk that most will fail. A number of previous simulation models have demonstrated convincingly that in these conditions, all that matters for investment performance is that you do not miss the valuable minority. You can find a simple model that demonstrates this here, and a more sophisticated one here. Their conclusions are broadly the same: performance depends heavily on the number of investments made, and it takes a large portfolio and patience to cut through the noise of high failure rates and long development cycles.
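The linked models are not reproduced here, but the mechanism is easy to sketch. The minimal Monte Carlo below assumes a 4% hit rate and a fixed 40x payoff for winners (a crude stand-in for the skewed value distribution, not parameters from the paper) and shows how strongly outcomes depend on portfolio size.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_portfolio(n_investments, n_trials=10_000, hit_rate=0.04, winner_multiple=40.0):
    """Distribution of portfolio multiples for equal-sized bets.

    Each investment returns 0x with probability 1 - hit_rate and winner_multiple
    otherwise. All parameters are illustrative assumptions, not estimates from the paper.
    """
    winners = rng.binomial(n_investments, hit_rate, size=n_trials)  # winners per simulated portfolio
    return winners * winner_multiple / n_investments                # portfolio multiple on invested capital

for n in (5, 20, 100, 500):
    multiples = simulate_portfolio(n)
    print(f"n={n:4d}: median multiple {np.median(multiples):.2f}x, "
          f"P(zero winners) = {np.mean(multiples == 0):.1%}")
```

Small portfolios are dominated by the all-zero outcome; only at larger sizes does the portfolio multiple converge toward its expected value.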
Risk intolerance is a self-fulfilling prophecy of underperformance: risk-intolerant investors try to pick winners and as a result make only a few bets, increasing the chances of missing a home run. Given the rarity of truly valuable technologies and the second assumption above, this undersampling effectively guarantees that the home runs are missed, which causes their investment portfolios to underperform. This then becomes the rationale for risk intolerance in the future, trapping an ecosystem in a negative feedback loop that results in a shallow pool of investment opportunities. Recent research featured at The Odin Times captures the problem simply:
Pool quality acts as a ceiling or constraint - you can’t pick winners that don’t exist in your pool.
Thus, a small improvement in pool quality yields greater returns than a much larger improvement in picking skill, as picking skill compounds on pool quality to produce alpha.
Put another way, your ability to pick well only matters if there are good options to pick from.
The only way out of this cycle is to embrace risk and accept that a majority failure rate is a positive indicator that we are investing broadly enough to make sure we are not missing the valuable opportunities. Risk and failure are key elements of strategy, rather than things to be minimized.
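A rough independence approximation (mine, not the paper’s) makes the same point in closed form: if a fraction p of opportunities are home runs, the probability of capturing at least one in n independent bets is 1 − (1 − p)^n, so the portfolio size needed for a given level of confidence can be read off directly.

```python
import math

p = 0.04            # illustrative home-run rate, consistent with a ~96% failure rate
confidence = 0.95   # target probability of landing at least one home run

# P(at least one home run in n independent bets) = 1 - (1 - p)**n; solve for n.
n_required = math.ceil(math.log(1 - confidence) / math.log(1 - p))
print(f"~{n_required} investments for a {confidence:.0%} chance of at least one home run")
print(f"P(at least one home run in 5 bets) = {1 - (1 - p)**5:.1%}")
```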
The need for Canadian data
Aside from the conclusions and caveats noted in the paper, one task remains: creating a version of the simulation model that is specific to Canada. The model built for this paper is based on startup companies from the US, Europe, the UK, and the Nordic countries, with a majority of the companies based outside North America. While the innovation ecosystems of the EU and the UK are arguably closer to Canada’s than that of the United States, this still presents an obvious issue when trying to make quantitative predictions about the Canadian ecosystem.
To address this, more Canadian data is needed: specifically, the founding date, the complete fundraising history, the dilution taken in each round, and the eventual outcome (where resolved) of Canadian startups that set out to commercialize publicly funded research. If you know of a source for any of this data (or even just a subset of it), please get in touch.
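For concreteness, the sketch below shows one hypothetical way such records could be structured; the field names are illustrative, not a schema from PitchBook or from the paper.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical record layout for the Canadian dataset described above.
# Field names are illustrative, not a schema from PitchBook or the paper.

@dataclass
class FundingRound:
    close_date: date
    amount_cad: float      # capital raised in the round
    dilution_pct: float    # ownership sold in the round

@dataclass
class StartupRecord:
    name: str
    founded: date
    research_institution: str                 # source of the publicly funded research
    rounds: list[FundingRound] = field(default_factory=list)
    outcome: Optional[str] = None             # e.g. "exit", "shutdown", or None if unresolved
```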
Many thanks to Innovation Support Services at the University of Ottawa and to Intellectual Property Ontario for funding this research.