Feedback on Senator Colin Deacon's Review of Federal Programs for Business Innovation
The report is a refreshingly frank assessment of the challenges that Canada has created for itself
Two weeks ago, Senator Colin Deacon and his team released a report entitled “Federal Programs for Business Innovation” that identified approximately 150 funding programs that seek to enable business innovation in Canada.
While I knew before reading the report that Canadian public innovation programming was a mess driven by fragmented policy initiatives and disconnected programs, the reality is worse than expected. Having 150 separate innovation programs borders on farcical. The next time you hear someone complain that “Canadian culture” is to blame for our low productivity, point them at this report.
The report is a great starting point for an important national debate, asking good questions about the issues that underlie the ineffectiveness of Canada’s Rube Goldberg machine of innovation programming, and correctly identifying many of the policy issues that stand between Canada and economic benefit arising from publicly-funded research.
In this article, I provide feedback on the answers to the seven questions asked in the report, and add some additional answers of my own.
Questions and Answers
Globally, what best practices do the most successful government-funded innovation programs use to effectively catalyze business investment in innovation?
The report correctly identifies failures of procurement, and of mechanisms for bringing innovation into government, as a key problem preventing catalysis of innovation in the private sector, and points to the SBIR program in the US as a model for doing so. The report indicates that all departments should have a mandate to spend part of their budget on developing innovation programs.
However, before taking any action on this, it is important to note why similar programs attempted in Canada have already failed. Anyone who has attempted to navigate the Canadian government’s procurement system will tell you that the process is complex to the point of being completely inaccessible to small companies, favoring large incumbents that can afford to hire people specifically to manage it. Issues with procurement have been thoroughly studied by a number of industry groups, including in an excellent report by the Council of Canadian Innovators, and do not need further elaboration here.
Even if this were addressed, however, without people inside the government who understand the nature of the companies they are supporting, and early-stage innovation generally, there will always be friction. Aimless procurement for the sake of procurement metrics, the approach taken to date, is not useful to anybody. Past results make it evident that it is insufficient to simply mandate that a portion of the budget be spent on innovation. The government must first build an innovative mindset by onboarding people with first-hand experience of the process of innovation.
It is important to note that the SBIR in the US does not operate in a vacuum, and that simply transplanting the program will certainly fail. Whole-of-government embrace of innovation was built into the American public psyche first, through the Bayh-Dole Act and similar legislation. Transplanting the SBIR into Canada without first laying a similar foundation simply will not work. The problems lie upstream of the details of program design and must be addressed in order. It is also the case that Canada lacks the scale of the USA, a critical part of what makes the SBIR work: in the US, any technology can usually find a related SBIR call for proposals. Not so in Canada.
A model that has worked in places more directly comparable to Canada than the US is to build systems that enable public funding to follow private investment, taking a soft approach to guiding innovation through the choice of capital deployment partners rather than directly choosing to enable specific innovations. I have written about this model in detail before.
Finally, the report mentions the use of accountability measures to track the efficacy of related programming. While this is a critical element of any solution to these challenges, it is important to acknowledge that the impacts of innovation are usually time-delayed from the point of implementation, in some cases by years, and that any system designed to measure the impact of innovation policy must be built around this delay. Most of the metrics used to track innovation today involve immediate revenue generation and job creation, and as such completely fail to capture the long-term impact of innovation. This especially devalues innovation that is still in its early stages, directly leading to missed opportunities for value creation from Canadian deep tech in particular.
How do the most successful Government innovation programs ensure that program delivery aligns with the pace and realities of business?
The key takeaway in the report hits the nail on the head: “it is counterproductive to create programs that support companies at one stage but then introduce consequential policy changes that negatively impact those same companies at other stages.” Policy stability is critical, especially for deep tech, as was noted by Lisa Lambert of Quantum Industry Canada in my most recent article.
I would go one step further, however, and add that it is counterproductive to create programs that support companies at one stage, and then fail to communicate with the programs that could support them at the next stage when it comes time to hand them off.
The most important consequence of the fragmentation of Canadian innovation supports into so many disconnected programs is a lack of communication between them. NSERC funds academic research and NRC IRAP is supposed to fund early industrial research, but NRC IRAP has no idea what is being developed in NSERC projects, has no awareness of related spinout activity, and in most cases will not get involved until after revenues have been demonstrated, leaving a gap through which IP is lost. If there is one thing that successful government innovation programs do well, it is communicating between the different stages of the innovation pipeline. Canada has quite literally none of this.
This need for communication is not limited to government programs. It is critical that the public sector and private sector are aligned as well, providing a means for companies to seek different forms of and sources for support depending on their needs and stage of development. This public-private model for innovation advancement is in active use in other countries, and can be readily adapted to the Canadian context.
Have application criteria been simplified to be clear, justifiable, and accessible, while also leveraging arm’s length private sector due diligence?
In the course of both my academic work and my work developing a deep tech startup, I have never once been able to reuse information between government applications. The most extreme example was an application for matching research funding between a federal and a provincial program that were ostensibly designed to work together: you either get both, or neither. Both the federal and provincial agencies required essentially the same information in aggregate, but phrased and split it across their questions differently enough that I had to write the entire application twice. Across my entire career applying for funding from both academic and private-sector programs, I do not recall a single time I have been able to refer back to information in a previous application.
Europe addresses this issue with the Only Once Principle. Canada needs the same thing. As previously noted, the government needs to build inter-program communication into everything it does, and the redundant, wasted effort imposed by the current approach is a key reason why. If a similar Only Once mandate were adopted in Canada, the required inter-program communication would be a natural by-product.
The report also identifies side-car investing as a key area for overhead reduction. This is an excellent idea: asking generalists to perform due diligence on technology portfolios that require specialist expertise is generally a waste of time and effort, and offloading due diligence to specialist private-sector players with a financial stake in the outcome has proven a sound strategy in other countries. However, it should be explicitly noted that this approach is most effective at much earlier stages than is currently the practice in Canada. Technology portfolios are lost to Canada long before they need $10M+ investment, and Israel’s example, as well as that of the quantum sector in Canada, proves that proportionately greater impact can be achieved with much smaller investments made earlier in the process.
Are the most innovative companies being supported, regardless of their technology, industry or location?
A major failing of Canada’s innovation supports, especially with respect to deep tech, lies in gatekeeping. Most federal programs require minimum revenues or a minimum FTE count for eligibility. This reflects a failure by the Canadian government to take adequate risk, and it results in the complete exclusion of deep tech from most of the 150 programs identified. While this gatekeeping is sometimes explicit (for example, CanExport requiring $100k of revenues, or what used to be the Digital Adoption Program requiring $500k), it is often implicit instead. For example, the requirement that money be spent before SR&ED decides whether or not to reimburse it is a soft version of that same revenue requirement, one that ensures that about half of the $4.2B spent by SR&ED goes to just 20 firms, about half of which are not even headquartered in Canada.
Regional concentration of innovation is a natural and largely unavoidable outcome of building a successful innovation ecosystem. Regions that effectively support technological innovation will attract investment, and the availability of that investment will in turn attract those seeking funding for technological innovation. A regional concentration of innovators also allows their networks to operate more efficiently, and all of this combines to create a regional magnet for talent. This positive feedback loop is the story of how Silicon Valley came to be, as well as of Canada’s own technology hub around Waterloo.
I do not see any value in fighting against the tendency toward regional concentration. Rather, any effort to ensure that all companies can access support regardless of location should be focused on ensuring mobility of those seeking support, rather than trying to disperse local innovation ecosystems.
What Key Performance Indicators (KPIs) are used to measure the achieved outcomes of these programs, and do these KPIs influence government decision-making?
I have written at length about the inefficacy of the metrics currently collected by innovation supports in measuring actual program outputs. In most cases, a careful review of innovation program metrics will find that they are mainly concerned with short-term proxies for impact that are completely disconnected from actual target long-term outcomes (if those long-term target outcomes are even defined).
The metrics become the targets. If we measure success in the number of patents filed, then we will end up with a ton of patents, and no economic activity based on those patents. If we measure success in the number of jobs created, we miss the impact of private-academic partnership programs like Mitacs. If we measure success in terms of year-over-year revenue growth, then we exclude all of pre-revenue deep tech until it is far too late to be a serious player in the space. This should all sound familiar, because it is exactly what Canada does, and it should be abundantly clear to all concerned that it does not work.
Innovation is a long road, and innovation based on deep tech doubly so. The investment made in establishing the Perimeter Institute is only now starting to pay off, decades later. Even though it catalyzed Canada’s quantum sector and is at the heart of the reason Canada leads in quantum technologies, any government metric that assessed its impact in the intervening time would likely have deemed it a failure.
Any attempt to address this needs to build into KPIs the acknowledgement that impact is delayed from investment, sometimes by years or decades, and that this delay is different depending on the nature of the investment.
Lastly, it is critical to keep in mind that asking program participants to assess the efficacy of programs in their current form will always paint a rosier picture than the reality. No company that depends on federal programming to get off the ground is going to speak out publicly against those programs in any meaningful way (some programs, which will remain nameless, even make recipients sign a contract stating that any public mention of them must first be approved by their communications team; yes, really). As such, participant reviews should never be used to assess program efficacy: the reality will always be strictly worse than such feedback indicates.
Is federal procurement effectively supporting innovative companies, helping to verify product viability and providing early revenue?
This one can be answered with a resounding no, especially in the early stages of deep tech. Since I and others have written about this in detail before, I will simply refer to a substantial body of previous work on the subject. The report correctly identifies the failure of the ISC and brings up the example of Finland as a leader in this area. I have little to add to this beyond what I have already written.
Are there successful programs that unlock the commercial value of government-funded R&D (e.g., intellectual property, data, etc.) for the benefit of the Canadian economy?
The key point raised by the report is again spot on: “Canada will never realize the potential of its investments in world-leading research and education if the essential innovation inputs (ideas and talent) keep leaving the country, unlocking that value elsewhere in the world.”
Not all innovation supports are ineffective, though it is safe to say that all could be better. The questions raised in this report are well thought out, and they serve as a good guide to how these programs can be improved in general.
Policy Fragmentation
The mere fact that 150 separate programs exist at all, requiring a collaboration between Statistics Canada, the office of Senator Deacon, and TBS to build a comprehensive list, is at the core of the problem. This represents the logical extreme of Canada’s approach to addressing gaps in programming: just make another program to fill the gap. Instead of using effective performance metrics to identify and fix weaknesses in existing programs, our government, across multiple political mandates and all the major political parties, has simply left ineffective programs intact and grouted in new, ever-more-narrowly focused funding mandates that only serve to complicate navigation of an already Byzantine system. Apparently, no one has paused to wonder why such gaps exist in the first place.
The best thing that Canada can do to address this is to change or entirely cut existing programs before seeking to enact new ones. The amount of inefficiency and waste that exists in current innovation programming leaves room for major improvements without major additional expenditures, but this requires first that we consider our innovation system as a whole and make decisions informed at the system level, rather than trying to address issues of performance of individual programs in isolation.
I applaud Senator Deacon and his team for taking the first step toward doing just that.