On the importance of mission and vision in innovation
A clear overarching vision is an essential prerequisite to aligning policy missions toward long-term support for Canadian innovation
Mission, vision, values, and other buzzwords
One of the best pieces of advice I ignored from mentors in the first year of building Northern Nanopore Instruments (aside from “don’t try to do this in Canada”) was to establish a clear mission, vision, and values to guide our development.
At the time, I dismissed this as tech-bro valley-speak nonsense. I was far too busy with technical concerns and figuring out how we were going to fund our work to worry about the softer side of business development. I very slowly learned that I was completely wrong, and that an overarching target outcome (vision), a clear and concrete goal that pushes toward it (mission), and heuristics to guide complex decision-making (values) are foundational elements, not just of building a company, but of achieving almost anything.
Variations on this theme are ubiquitous. At the personal level, Stephen Covey frames it in his 7 Habits of Highly Effective People as “Begin with the end in mind”. Silicon Valley has “mission, vision, values” at the level of organizations, and innovation policy has “mission-oriented innovation” at the level of nations. In the Canadian context, various reports and stakeholders have called for “whole of government leadership”. Fundamentally these are all the same idea, and have the same purpose: to align efforts toward a common goal and enable efficient cooperation between independent components of a complex system.
Mission-oriented innovation is currently in vogue globally, and The Observatory of Public Sector Innovation (OPSI) has an excellent report entitled “13 Reasons why missions fail” that examines this trend. Focusing on the level of nations, they examine various failings of innovation policy, pinpointing why having a mission, while necessary, is not sufficient. You can find many of these issues here in Canada.
In this article, I discuss a few of the reasons they identify and connect them to the Canadian context, focusing on those failings that I see as most impactful. I suggest skimming through the definitions provided in the OPSI article as a primer to the rest of this.
The cost of siloed and orphaned missions
Innovation is a pipeline with many phases. Starting from ideation, technologies must survive research, either academic or industrial; be brought into a commercial entity, be that a small startup or established corporation; then be developed into a revenue-generating product or service; and finally establish themselves in the market. There are no shortcuts. Every stage needs to happen, in order, and each stage requires a different set of activities and support structures that pick up where the previous one ends. Innovation ecosystems that do this well are ones that establish an overarching vision that ensures the missions of the various agencies that support each stage are aligned in achieving it, without leaving any gaps.
This is probably the most significant policy failing in the Canadian context: there is no unifying vision, and little consideration for the existence of gaps between program mandates. NSERC aims to support research; NRC IRAP, SR&ED, the SIF, and others support industrial R&D and commercialization; BDC provides funding to support Canadian companies that otherwise have a hard time finding funding via Canadian VCs. But there is little to no communication between them and significant gaps between the stages of development at which they kick in. While a handful of provincial organizations do most of the heavy lifting in the valley of death, there is little cohesion or communication between them.
It takes a fundamentally different approach to support academic research than it does to support a nascent company. It requires an entirely different framework still to enable scaleup activity, and yet another to incentivize participation by larger organizations in incubating innovative technologies. It makes good sense to have different agencies focusing on different stages of the pipeline, but without lining up inputs and outputs at each stage to ensure continuity of support, the result is overall disorganization and inefficiency as intellectual property leaks out of gaps in the pipeline.
As a concrete example of misaligned program mandates, consider the split of eligible expenses between the CanExport SMEs program and the NRC IRAP IP Assist program. CanExport subsidizes the cost of preparing and filing patents, but is limited to a few target jurisdictions and draws from a shared pool of funding that also covers marketing and travel, forcing a choice between marketing and IP protection. IRAP IP Assist will pay for IP strategy advice, but not for filing patents or anything that results in a legal opinion. In effect, getting support at the federal level for complete IP strategy development requires independently securing funding from two different government agencies.
To add to the problem, both programs are gated by revenue minima, meaning that firms must somehow actually go to market and generate revenues before getting support to establish the IP strategy that would allow them to do so.
This is not a problem that can be solved by creating yet another innovation support agency to fill the gaps. The funding exists and is sufficient to the task, it is simply being deployed inefficiently due to a lack of cohesion between program mandates and a lack of clear vision to guide rectification of the issue. This requires a careful look at existing program mandates across every level of government to address, not yet another siloed mission.
Attempting surgery with a Swiss army knife
I’ve written before about the challenges involved in having even highly competent generalists conducting due diligence on specialized technology portfolios. This problem is evident across most of the federal level support organizations. When specialist knowledge is required to evaluate a new technology portfolio, a mismatch between the specific expertise of the evaluator and the technology leads to a perceived amplification of risk, and under-investment in valuable IP portfolios. OPSI identifies this as “ill-equipped mission teams.”
This goes hand in hand with what OPSI calls “mission portfolio blindness”. When a clear vision is lacking, each individual within the pipeline has a relatively myopic view of the overall goal and falls back on things like local KPIs to measure their contribution. Since these KPIs are proxies for actual impact, it can quickly result in chasing short-term metrics over the actual desired outcome - a desired outcome which is not even defined, in Canada’s case. This in turn results in tracking metrics that fail to capture the overall success of the program (OPSI’s “non-systemic mission evaluation”), instead relying on proxy metrics like job creation and short-term revenues that completely fail to capture the long-run impact of innovation policy.
Innovation support structures that do this well handle it by involving specialists when they are required. This does not mean bringing in outside experts for ad hoc due diligence. No matter how good the consultant, an investment decision should never be made by someone who is not actually invested themselves. Rather, it means partnering with experts who have skin in the game: the VC community, in particular. Israel does this very effectively, and more recently, Ontario Centres of Excellence have adopted a variation on this model, wherein they invest by following a lead VC whose investment thesis aligns with the technology portfolio in question.
Election cycle myopia
A perennial challenge in designing a policy framework with a more than 10-year time to impact relates to funding cycles: the annual budget cycle and the election cycle, both of which fall under OPSI’s heading of “politically dependent missions”.
Being uncertain of their budget for the next fiscal year, most government agencies are forced to deploy resources in annual spikes of uncertain magnitude, making it impossible for firms to budget support for the next fiscal year. On a longer timescale, elections that involve significant political priority changes lead to even larger uncertainties and budget changes. Together, these cyclical challenges have a cooling effect on innovation, leading to risk-avoidant behavior and slow development in Canadian firms that must wait to understand their available resources before making hiring and investment decisions. The problem is a mismatch of timelines: annual and election cycles are much shorter than the time it takes an innovative idea to have economic impact, and uncertainties on these timescales make Canada an unattractive place to undertake long-term technology development.
While some Canadian funding programs at the academic R&D level span multiple years, these are all focused only on the first step in the innovation pipeline. Organizations that take over after the academic stage should allow for similar long-term funding as a means to instill confidence in Canadian innovators that they will be supported throughout the process.
The best-known example of an innovation agency that navigates this effectively is DARPA. For the purposes of this discussion, the important point is that DARPA development programs are funded for the long term up front: funding is allocated from the start with the intention of being spent over the timescale of a decade, and is from that point onward largely independent of the ensuing government fiscal years and election outcomes. This allows DARPA to commit to long-term development, secure in the knowledge that the rug will not be pulled from under them by political shifts.
While I do not think the DARPA model should or even could be transplanted into the Canadian context, there are elements that can, and divorcing innovation funding from short-term cycles tops the list. With funding that spans multiple years committed from the start, firms can budget and plan development roadmaps that go beyond the next government fiscal year. In the Canadian context, use of Crown Corporations as a means to separate budgets from cyclical issues is a good way to start.
Wrapping Up
There is an enormous amount of funding available in Canada for innovation, spread across every level of government, in a dizzying array of programs. The level of available resources is not the issue. Rather, the lack of a cohesive framework to guide and measure the impact of this funding across long timescales leads to inefficient resource allocation and severely diluted impact. Cost-neutral reform and refactoring of the system to align efforts across the entire innovation pipeline that goes beyond simply forming yet another innovation agency to fill a perceived gap is required.
Change starts at the top. Until Canadian policy-makers establish a vision to guide the individual program mandates that make up our innovation pipeline, the agencies behind those mandates will continue to underperform the potential impact of their funding.