AI Adoption in Canada: A Conversation with Daniel Munro and Creig Lamb
The directors of Shift Insights share their recommendations for human-centred, values-based AI adoption in the Canadian public and private sectors
This week I interviewed Daniel Munro and Creig Lamb. Among their several hats, Daniel and Creig co-direct Shift Insights, where they work to “cut through complexity to provide clarity about the social, technological, and economic challenges and opportunities facing Canada”. In that role, they recently released “Northern Potential: Advancing Canada’s AI Ambitions”, a report that examines AI adoption in both the public and private sectors in Canada that is well worth a read.
Canada’s history with AI should be familiar to most readers: from leading the world in the research phase, to taking a backseat on bringing the technology to market, and now considering how best to adopt and use the technology in the context of a push for technological sovereignty after the fact. Daniel and Creig raise several key questions that Canada must answer as we work to adopt AI: asking what we hope to achieve, and why. They share insights into how Canada can begin to answer these questions as a guide to driving effective AI adoption in a way that is consistent with Canadian societal values, and make concrete suggestions through which the Canadian public sector can overcome practical challenges standing between Canada and the efficiency gains promised by AI. The values-based, human-centred approach they suggest in the report should be required reading for policy makers involved in any aspect of AI adoption.
While our conversation touches on several aspects of the report itself, we cover more ground, including issues of data sovereignty, intellectual property, and the “low-innovation equilibrium” in which Canada finds itself. The interview is not a substitute for the report, and I suggest reading both together to get the full benefit of their contribution.
Your email client will probably truncate this post. My key takeaways are presented at the end, so be sure to read the web version if you want to get the whole story. Many thanks to Daniel and Creig for taking the time to share their insights.
Interviewer’s note: Daniel Munro and Creig Lamb approved the final version of the section entitled “Interview with Daniel Munro and Creig Lamb” and had editorial input on that section, with the option to rephrase and expand on the ideas discussed in the interview without changing or removing any intended meaning. The key takeaways presented at the end are my own commentary, and do not necessarily represent the views of Daniel Munro, Creig Lamb, or Shift Insights.
Interview with Daniel Munro and Creig Lamb
KB: Tell us about yourselves and give the audience some context for where your perspectives are coming from.
DM: I do three main things. I’m co-director with Creig of Shift Insights, which is a policy research shop that examines the social, technological and economic challenges and opportunities facing Canada with a view to helping us become a more prosperous and just society. We look at a wide range of policy issues related to innovation, education and training, skills, and technology development and adoption. It’s in that capacity that the two of us put together this report for the CSA Public Policy Centre. I’m also Director of Research and Innovation at Actua, which is a youth STEM outreach organization. We deliver STEM camps, clubs, and workshops to youth all across the country. And I’m a senior fellow in the Innovation Policy Lab at the Munk School of Global Affairs. I’m actually a political philosopher by training. The innovation and skills policy work came more recently, if 15 or 20 years ago could be called recent. That’s ancient history to someone Creig’s age. In any case, in the first part of my career I was a political philosopher and I spent a little bit of time in academia. I still keep the political philosophy hat in my back pocket and I keep one foot in academia thinking about innovation and technology ethics.
CL: My main policy hat is shared with Dan. I’m the co-director of Shift Insights and the chief economist. I come at it from an economics background primarily. I do a lot of the data analysis and all of those components. Before that I was at the Brookfield Institute which is now The Dais. That’s where Dan and I met and worked on a bunch of projects together. I was leading a lot of their tech innovation skills and automation research at the time. And in my spare time, I design lamps.
DM: That’s the humble way of putting it. The real way of putting it is that you’re a designer and entrepreneur focused on lighting solutions.
KB: Let’s start with a little bit of context on the report you published on AI adoption in Canadian public and private sectors. How did that come about, and what are the goals of the report?
DM: When the government appointed a minister for artificial intelligence and digital innovation and gave him the job of developing an AI strategy for Canada, Creig and I thought that given what we had done on innovation policy and especially technology adoption in both the public and private sector, it might be useful to distill what we had learned in the form of helpful guidance for the minister to develop the strategy. In talking with the CSA policy group, the scope expanded to offering advice for diffusion and adoption in both the public and private sectors, since our previous work has given us some insights into what does and doesn’t work in those areas. And so what was initially supposed to be something like a moderate-length letter to the minister with some helpful hints turned into a longer report that came out a few weeks ago.
Initially, we thought about releasing something shorter much more quickly, in part because it wasn’t going to be based on new research. It was going to be a curation of the kinds of things that we already knew that might be helpful. We actually anticipated a four to six week turnaround, but organizations move at the speed of organizations, and as we got into the work we realized something more robust would be more helpful, so it took us months instead of weeks and we produced a longer report. We’re grateful to the CSA group for supporting the work and shepherding the review, design, and publication with us. We think it turned out to be better than it would have been had we just thrown a letter up on our website.
CL: I would add that while the intention was for quick advice, we quickly realized that the pieces that we were summarizing and contextualizing are challenges that the government has been tackling for decades in some form or another. It remains relevant even in the new context of AI. I think even as different mandates and different policies come out, these will probably continue to be challenges that the government will be tackling in some way, shape, or form over the next decade.
DM: The core question that occupied our thoughts and organized the writing was this: in light of disappointing innovation performance in the private sector and a string of challenges with technology adoption in the public sector, what is the minister going to do differently such that we don’t get the same result that we’ve seen time and time again? AI has features that are different from previous technologies, but it’s similar enough as a technology, and in terms of the background conditions that drive or block adoption. We’ve seen this story before and we don’t want to repeat it. But we’ve got to do something different in order to get a new story. That was the orienting question for us.
KB: You suggest in the report that Canada has this window of opportunity to translate early leadership in AI research into sustained economic and social benefits. What do you think is a realistic niche in which Canada can lead in AI, given that we don’t have the capacity to compete head to head with the American hyperscalers and others in the space?
DM: Creig can agree or disagree with me, but I’ll start by saying that I share the pessimism. We’re not Pollyannaish about this by any means. We share the skepticism in that we’ve seen this story time and again. In this case we have moved about as slowly as we always move, so the window might be closing.
To zoom out a little bit, just to put it into context, when we’re thinking about AI in Canada, we need to make a couple of distinctions. We’re thinking about it in two ways. One is from a “tech-making” perspective, we might call it, and the other is a “tech-taking” perspective. Tech-making focuses on the technological innovation, commercialization, and sales side of things, and what can be done to spur better performance. The tech-taking side of things itself has two dimensions: adoption in the public sector for the sake of productivity and efficiency gains (as well as pursuit of key public sector values), and adoption in the private sector. So there’s actually three moving parts here.
We’re less pessimistic and less skeptical about the prospects for tech-taking and using technology, both in the public and private sector. We are probably a little bit more skeptical about the tech-making side of things. I do think that there might be some niche areas where we can offer things. One that really comes to mind is AI for supply chain management. Think about the Scale AI innovation cluster. We might ask whether we can develop ways of using AI to spot possibilities and challenges in global supply chains and then use that information to manage those supply chains better. We might also be able to export AI-for-supply-chain solutions for use by other companies and countries. That’s an area where I think the window may be closing, but we might still have some opportunity to do things there - especially given ongoing disruption and changes in global supply chains.
I also think that there are probably opportunities to do niche things with a Canadian focus. Even if we don’t develop things that have global export potential, there are some things that we can do domestically around health technology. Given concerns about health data and privacy, “Canada first” solutions might be the way for health tech. You may not be able to export what you develop in that area, but there’s probably enough technical know-how to do Canada-focused things, and enough political will that would be quite receptive to Canadian-made solutions for health. There’s probably some defence related things as well.
CL: To bolster your point on the tech taking side of things, looking at it from a broad economic perspective, we’re really thinking about how to improve things like productivity with the goal of gaining the widest benefit in terms of wages, equity, distribution, etc. Canada has had a particularly deplorable track record on adopting technology writ large both in the private and public sectors, but there are some signals that AI with its ease of use and subscription-based models that require relatively low upfront investments may be able to break some of these trends. So we do think that there is an opportunity to facilitate the adoption of AI and promote diffusion in a more concerted way across the private sector that might look a bit different than some other forms of digital technology.
I think the opportunity, and where most of our effort should lie, is focusing on other sectors in which we have a competitive advantage or that are strategically important or that simply employ a lot of people and focus on making them more competitive through the use of AI. While I will always say that the development of technology in our homegrown tech sector is important we will never out compete the giants. There are places on the margin that we can grow, but I think really it is about helping the average firm recognize how they can compete more internationally and hire more people and just be more productive. Hopefully that wealth gets distributed in the form of wages and things like that.
KB: That leads into a tension that you explicitly call out in the report, which is the idea that the minister has a mandate to ensure Canada has the enabling infrastructure, data storage, compute, energy, etc., to use AI effectively. You acknowledge in the report and in the preceding discussion that we are probably going to be dependent on foreign hyperscalers for AI service provision, both in the public and private sectors. At the same time, there is a lot of talk about “technological sovereignty” as a guiding principle of policy development. What does sovereignty mean in this context, given our external dependencies?
DM: You’re asking a political philosopher to define sovereignty. Do you have two hours?
I’ll give you a really pithy answer. My understanding, or my way of thinking about sovereignty, centres on whether or not you can meaningfully shape your own future. When we’re thinking about meaningfully shaping our Canadian future, that doesn’t necessarily exclude all external influences or inputs. It just means that when you pull everything together, you need to be able to say that you had or have the power to make decisions about where you want to go. I think there’s a tendency to think that sovereignty requires absolute control over all components, products and uses, which is simply unrealistic given the reality of fragmented global value chains, how components are pulled together to produce products and services, and how no country can produce everything itself that it might require. But that’s not resignation to an uncomfortable reality. Relying on others for inputs can in many cases enhance our power to shape our own future, by allowing us not to get bogged down in things that we don’t do well. Ultimately, the core question should be: “can we meaningfully shape our own future in this space.”
That means that on some things, it may be okay to rely on the external hyperscalers. In other cases, it might not be okay to rely on external hyperscalers. It comes down to what kinds of data you’re using, where that data is stored, and the privacy requirements around it. It depends on whether someone can push you in a direction you don’t want to go because they control some vital component or input you can’t get elsewhere. In some areas, I’d lean more towards the “tech nationalist” approach – when it comes to health applications and health data – versus other areas where we’re simply crunching public data for economic reports or trends.
I think the second question that I would ask is simply “what is it that we want to achieve with AI or with any other technology”?
In the tech-taking space, we want to achieve efficiency, productivity, maybe some other goals like more accurate answers to questions that people have about government services. What are the tools that we need in order to do that, and can we maintain meaningful control over what the future of that activity looks like?
When it comes to tech-making, that’s where it gets a lot trickier. If we want to have homegrown firms that develop solutions both for domestic consumption and for global export, how do we maintain IP? What data do we draw on, and to what extent, if at all, can we use the products of the hyperscalers?
That’s how I would think about AI sovereignty, which is not an answer. It’s more like principles for how to think through what you want to do in any given case. I’m going to do that to you again on your other questions. I’m not going to give you an answer. I’m going to give you criteria.
CL: This is how we think about things: we come up with a framework and then send it out to the world. I would add, though, that I often wonder, from a public policy lens, how much we also impose our views, and views around things like tech nationalism, on individual firms, and whether they actually coincide with the views of the individual firms. Firms are probably mainly concerned with making a go at it and making a living and being competitive. The question of whose infrastructure they’re using probably matters less for the individual firm than the public sector writ large. So I wonder sometimes how much we impose those viewpoints on specific firms, and in doing so, to what degree do we put them at a competitive disadvantage in an industry in which they were already starting 10 steps behind.
That doesn’t answer anything, either, it just adds another question.
KB: I want to go back to a point raised by Daniel around intellectual property. Part of the minister’s mandate is to maintain broader societal values and democratic norms as we adopt AI. Large language models ingest massive amounts of data, and for the most part ignore copyright to get it. On the other hand, if you’re going to build an AI system that has any kind of chance at competing with what’s out there already, you pretty much have to play that game. Is there a path to using LLMs that is consistent with IP rights, or does this new technology require us to fundamentally rethink what copyright means?
DM: I’ll give you half an answer and tell you why I won’t give the other half answer. The half answer is that if we take an application or use-specific perspective on some things, I think there are ways to develop AI solutions that don’t necessarily violate copyright or expose intellectual property or personal data to risk.
I’ll make it more concrete. If you’re designing a chat bot for the Canada Revenue Agency that’s intended to answer questions about Canada Revenue Agency procedures, rules, and that kind of thing, you will be drawing on data that doesn’t require IP protection: CRA policies, procedures, and things like that feed into that. It could help answer questions that users might have about how to file their taxes or what exemptions would be permitted. So, on one hand, for certain kinds of applications, I think we can develop those without violating these other things. To be sure, the base model on which a CRA chatbot is built may have been trained on copyrighted data and thus have antecedent IP violations built in. That’s a problem. I’m not sure how you disentangle those things, to be honest. It’s certainly something that the government will need to keep in view.
I will say I lean more towards protecting copyright and intellectual property than I do towards changing rules simply to facilitate development of some new technology. That might be a personal bias. Creig is the second most creative and stylish person I know. The most creative and stylish person I know is my wife, who’s an artist. In the arts community, the attitude towards generative AI is deep skepticism. The pervasive view there is that AI is inextricably built on IP theft. And in those specific use-cases, I think that view is right. I think both analytically, but also for personal reasons, I lean more towards the idea that, if there is a tension between these two things, between protecting IP and developing AI, we should protect IP rather than allow violations to facilitate the development of AI services and products. That might suck for AI developers, but we all have to innovate under constraints.
CL: I concur with that. I think this report came at it from the lens that what already exists is a given, that we can’t dial it back, and then considering what we can do in the current environment in terms of adding additional measures and protections. That’s the subject for a whole other report that I would be willing to dive into.
KB: You gave a few examples of applications, and the mandate given to the minister is to push towards efficiency and effectiveness in the public sector. On the other hand, LLMs match patterns and produce likely text, not necessarily correct text. How do you balance the drive for efficiency against the reality that AI systems hallucinate and have built-in biases? Is it possible, in the framework suggested by the guiding principles that you develop in the report, to identify the kinds of tasks that can be handed off to AI versus the stuff that should remain in human hands?
DM: This is the other question where I’m going to give you two criteria rather than an answer.
In the public sector, when we’re thinking about whether or not to adopt AI to make some government function or service more efficient, yes, there’s the concern about whether or not the technology will hallucinate, and produce inaccurate answers with an enormous amount of confidence, which is actually the real problem. It’s not just that the outputs are wrong sometimes, it’s that they’re wrong and there’s no function for admitting they’re wrong. So one principle or a question that I would ask is whether, for any given task, the use of AI is more or less accurate than the current way of doing that task. Again, take the CRA example. Is a CRA chatbot any more or less accurate than getting a CRA agent on the phone? Because we know they’re not perfect. Let’s say that 4 or 5% of answers from a chatbot are inaccurate. If it turns out that this is more accurate than the people who were doing that job, we should be cautiously OK with that, and deal with reality rather than perfection, given that existing systems are also imperfect.
There is one caveat, before I give you another principle. Human-centered approaches are better at learning from error and improving than chatbots. If you tell a human agent that they got something wrong, they have a capacity to integrate the better answer into the mix pretty quickly. By contrast, getting a chatbot to acknowledge and learn from errors is a much more involved process. And if it produces a better answer later, it’s still not doing that because it has selected the right answer as the right answer, it selects the right answer because it’s the most likely answer, or it’s been given a specific instruction to produce a specific response even if its underlying model doesn’t recognize it as the most likely response. In short, I think the human capacity for learning from error is still superior to that of chatbots.
The second principle for thinking about this is to reject the idea that one is replacing the other and think about whether there are ways that the technology can complement the existing human approach to collectively produce a better outcome. To beat this example into the ground, maybe what you get is a CRA agent who answers a question, but before actually giving it, checks it against what the chatbot would produce. If the answers align, then your likelihood of accuracy is probably much higher. If they don’t align, then the CRA agent has to either think about whether they’re wrong or whether the chatbot is.
The funny part about that is that it would be a lot less efficient, but it might be more accurate. So this is one of these cases where you have to step right back and ask what it is that we want to achieve through the adoption of technology. We often say we want better efficiency, but we might also want to improve public sector accuracy. Those might not always go hand in hand.
CL: When you were saying in terms of supplementing existing human tasks and increasing efficiency, we’re really talking about fundamental automation, and that is how we’ve gained efficiencies and productivity growth throughout the entire history of technology adoption. There’s a famous example of the ATM and the bank teller where everyone thought the ATM was going to replace the bank teller completely, but all it really did was remove the transactional, basic, routine-oriented handling of cash as a job task for the bank teller. Their job became actually more complex and value added in terms of customer relations and financial advice and things like that. Automation allows people to remove some of the routine things and do tasks above and beyond what technology can do, which overall leads to efficiency gains. The more we can see people eliminating some of the routine tasks that would take them all day so that they can do more complex things that day, the more we will see efficiency gains throughout the government.
I think it would be a useful exercise for people to go through within government and look for areas where the most efficiencies can potentially be gained, with an eye to some quick wins to gain momentum with the least amount of potential risk.
I think we cannot separate the skills question from this area. As we’ve learned with AI, and the more I talk to folks who are using AI and employing AI within their companies, the more I realize it actually takes quite a bit of labor to verify the output and check and ensure that hallucination hasn’t occurred and actually understand the mechanisms, to some degree, of how AI is working, how to ask it questions, and how to query it properly. I think there’s going to have to be a lot of effort put toward building that skillset within the government. That’s been a shortcoming of the government for a long time.
DM: I’ll add one thing, not to the last thing that Creig said, but the one previous, riffing off the example of introducing ATMs and how that actually allowed tellers to move up the value chain (James Bessen, Learning by Doing). I 100% agree that when done well, you introduce a technology that can automate the lower value tasks and thereby liberate employees to do higher value tasks. This is good for them, good for the business.
Here’s where I’m going to get myself in trouble: I’m not convinced that that’s the perspective or orientation that the government is taking around AI. What we’re seeing is an AI strategy at the same time that we’re seeing an attempt to reduce the size of the public sector. There’s this hyper focus on the number of public servants we have, or the ratio of public sector employees to population. And there’s a sense among many people that the number is too high or the ratio is out of whack, so we need to fire a bunch of civil servants. My sense is that the government isn’t thinking in the sort of James Bessen “ATM-moving-employees-up-the-value-chain” way. They’re thinking about how AI can fill the roles of public sector workers that they want to take off the payroll, and that concerns me.
As we note in the report, the public service has many values that it must uphold. These include respect for people, respect for democracy, integrity, stewardship, excellence, equity and accessibility. Reducing the size of the public sector by adopting AI might get you some efficiency gains or cost savings. Maybe. Big maybe. We also need to think about the impact on these core values. For what it’s worth, upholding these values likely requires a lot of inefficiency and redundancy. The public sector isn’t the private sector and we shouldn’t treat technology adoption in these sectors as equivalent processes.
CL: The automation side of things is kind of where I got my start in public policy and innovation research. I took a historical perspective on this, and throughout history, the most successful and long lasting versions of automation have been labor-augmenting, not labor-replacing. I think looking at technology as augmenting human capacity rather than replacing it is actually the smartest way to do it, and clearly the most ethical, and that’s the most important piece, but even if you’re looking at it from the perspective of pure efficiency gain, you should think about augmenting tasks, not replacing entire jobs.
KB: Let’s dive into the skills question. Clearly there is a difference between the stated intention and the approach that’s being taken to public sector adoption of AI. Is this the result of missing skills in the government to properly utilize this technology? What gaps do you see?
DM: I think Creig can probably speak to this better, but I’ll take a swing at it anyway. With respect to using AI in the public sector, there’s obviously some skills required in order to do that. In the report, we lay out three categories of skills that need to be considered.
One category is technical skills: you need people who have the technical and technological skills to develop, implement, and maintain AI solutions.
Then you need user skills. To go back to our CRA example, the CRA agent might not have to be an expert in how the chat bot actually works, but needs to be able to make the right queries, offer the right prompts, and have the capacity to critically evaluate whatever comes out the other end.
Finally, you need management skills: the skills to properly manage technology transformations, which is partly, if not largely, what a lot of this is about. Management skills to procure solutions, to manage the transformations, to ensure that employees have the other categories of skills, and things like that. So you need technical skills, user skills, and management skills to do all of this right.
One of the things that we’re concerned about is that at the same time that the government has shortages, especially in the technical skills side of things and to some extent on the management side, they’re actually cutting the public service. The Treasury Board says 30% of digital roles in the government are currently vacant. There’s already a technical skills gap. At the same time that they have those vacancies, they’re letting people go. What I’m concerned about is the signal that sends to people who might fill those roles that jobs in the public sector are precarious. The government already faces challenges around whether they can pay the same rates to technical people as the private sector. If you’re on the technical side of things, you might have good software skills, and you’re thinking about where to go. If the government says they need some technical people, but you’re also reading that the government is just getting rid of people, seemingly left, right, and center, why would you go work for the government? My concern is that they’re almost shooting themselves in the foot here, where stated need does not align with the signals they are sending.
CL: I think that’s a good point. You’ve seen it time and time again. As the government increases its focus on digital technical talent, it’s increasingly competing with the private sector. What’s the value proposition for people with either highly technical digital skills or just general digital literacy, who could work in all sorts of private sector roles that would pay more, probably have more flexible hours, all sorts of benefits, maybe even in cities that they want to live in, or remotely? So why would they work for the government? Part of the answer is job security, but if the government is sending signals around a lack of public sector job security or a shrinking workforce, that undermines the value of that component.
I think the management side is another really big piece: building up skills around procurement and oversight for digital transformations. We’ve seen so many challenges with respect to that. How can they improve their capacity and skills in areas that enable them both to identify where digital transformation should happen and then to manage the procurement, development, and diffusion of the results?
KB: Your report discusses how Canadian firms have been stuck in a “low innovation equilibrium”. Can you elaborate on what that means? Given that this issue predates the recent AI explosion, has the situation been changed at all, or could it be changed, by AI?
DM: My first experience with the phrase “low innovation equilibrium” came from a paper written by Peter Nicholson in which he tries to explain why business innovation is so low in Canada. His answer to that question is that businesses are only as innovative as they need to be, because, he says that a lot of Canadian businesses have been able to maintain reasonable profits without having to innovate. Canada has the conditions, whether it’s market conditions, the cost of labour, or what have you, in which they’ve been able to maintain reasonable levels of profit without innovating. If you can maintain profits without innovating, why rock the boat? Firms simply don’t have any incentives to innovate.
Creig and I take this a step further. We have a conceptual model that breaks a couple of these things apart. We agree with the Nicholson line of thinking, but when we think about why firms do or don’t innovate, one side of the equation is whether or not they have incentives to do so, and the incentives in Canada tend to be weak. There is that reasonable profitability. There’s weak competition. If you’re not facing a lot of competition, why would you innovate? So there’s weak incentives.
The other side of the equation is limited capacity. Even if you’re a firm that wants to innovate you might lack the resources: financing, expertise, etc. In the past, a lot of government efforts to spur innovation have focused only on that limited capacity issue, for example by offering financing or tax credits or expertise around entrepreneurship and digital technology adoption or whatever. I think this is changing, but I think in a lot of cases, government policy doesn’t spend a whole lot of time thinking about the incentive side, or at least if they think about it, they haven’t taken any sort of strong measures to address it.
There is some noise around competition and improving the competitive environment, but the actual steps to do that haven’t really been as robust as I think they need to be. My worry is that to the extent that AI might contribute to innovation and productivity, a lot of firms are facing these weak incentives and in some cases, limited capacity to actually do things. If you want to spur adoption and innovation in AI, you have to begin thinking about improving those incentives before addressing the limited capacity issues. Focus on the drivers before we focus on the supports. We haven’t done that in the past, and I think we’re making that mistake again with AI. We’ve used the same conceptual framework for a number of things that we’ve done.
CL: While AI has particularly strong hype around it, the available data on adoption over the past few years would suggest that AI isn’t much different in terms of incentives and capacity challenges. We’re facing similar issues in terms of adoption. Adoption is relatively low and concentrated among a select few industries and large firms, those with the incentives and capacity to adopt, so it doesn’t feel particularly different from other waves of digital technology.
DM: With one caveat though. When you look at the Canadian survey of business conditions, across the economy, 12% of firms say that they have adopted AI in the past 12 months or will be adopting it in the next 12 (which is up from 6%, so it’s accelerating, but it’s still just 12%). This is what you get when you ask people who are authorized to speak about what firms are doing. When you talk to employees, though, the numbers are over 50%. Officially, 12% of firms might have plans and strategies and be thinking about this intentionally, but meanwhile something like half of employees in the economy are using AI, whether their firms are on board or not. This “shadow AI use” is a source of risk, obviously, so there’s actually a funny thing happening with AI, or at least certain kinds of AI, where firms are as conservative as they have always been with every other digital technology, but employees seem to be racing ahead in ways that we haven’t seen with previous technologies.
CL: This is a really good point. We’ve also seen a corresponding thing that does change the incentive structure. Companies are realizing that this is happening and are deciding to permit the use rather than developing a more deliberate strategy. The incentive is to let adoption happen by itself and just pay for the subscription or whatever else needs to happen to make it possible. That’s an interesting point of differentiation from previous technologies.
KB: In the report, you allude to fragmentation in the approach to public sector AI adoption: you identify vertically siloed departments, thousands of legacy systems, and poor application health, and you suggest a centralized body as a means to deal with it. What drives that fragmentation, and what does the centralized approach that you suggest look like practically?
CL: I think what drives the fragmentation is just the nature of the bureaucracy and the risk aversion and lack of communication and all of these sorts of bureaucratic limitations and rules. I have personally noted, on projects where I was working with two departments, that government departments working on very similar things often don’t know anything about each other’s efforts.
I think another piece is just the size and the mandates of these departments and the number of things that they’re doing. It’s really difficult for them to continue to keep on top of what everyone else is doing and coordinate and consolidate efforts.
There also tends to be a tendency to reinvent and build the same thing over and over again, as opposed to checking who’s doing what and partnering.
DM: For context, I think the fragmentation in the public sector predates any technology adoption. I mean, that’s just a feature of bureaucracies that are oriented to different functions and given different lines of power. So the fragmentation predates AI. The fragmentation or differences in technology adoption and systems and things like that is just a reflection, a symptom, of that deeper fragmentation.
The upshot is that six different departments trying to do the same thing end up with six different ways of doing it, because they don’t talk and because they each have their own unique preferences about what exactly they want a technology to do or a solution to be. And so you end up with these thousands of legacy systems that differ across departments.
An AI strategy isn’t going to solve the underlying problem of public sector fragmentation. Our hope is that if you can create or establish some kind of central body that will support and incentivize communication across departments, across ministries on uses of technology that appear to be similar, that you might get something that is a little more consistent across the different parts of the public sector. There is a suggestion about central coordination, information sharing, shared systems, and procurement of solutions that can be used by different parties in the report where AI is concerned. There’s a hope that that kind of thing can be done, but we’re adults about this. We know that this isn’t going to address the underlying problem. That’s a much, much bigger problem.
CL: You’re right. I do recognize that in a lot of instances, funding mechanisms for departments drive that kind of siloed approach. I also understand that there are different privacy issues across departments that require these sorts of siloed and fragmented legacy tools and systems, or at least explain this kind of thing. I think it is the hope that a central body could cut through a bit of that noise and identify common areas and common issues across departments and identify certain areas where it makes sense to adopt AI and ways to mitigate the risks that are common across departments, allowing insights from one department to inform another.
The issue is that we’ve also seen this happen and seen this done a number of times with central bodies and it often doesn’t play out in the way we think it will. Sometimes we end up with something that lacks the teeth it might need to be effective, and so it ends up being ignored.
KB: It strikes me that something that has an overarching awareness and oversight of all of the various departments is an ideal use case for an AI system. I’m curious if that’s something you thought about: is there a path to having an AI system actually do the convening work, or would we just be creating an Ouroboros of fragmentation?
CL: I don’t know why I didn’t think of that. It would be great for the central body to use AI.
DM: I think AI might be able to help with this, but having a well-staffed central body with the right mandate and authority to develop policies, share ideas and best practices, coordinate procurement and implementation, is the key piece. I can think of a couple of things that are already in place that suggest some precedent or pathways for a central body. We’ve got the Canada Digital Service. They should be a candidate for being part of something like this. But they’re wildly understaffed. They will do a project here and there and they can’t keep track of the whole. If you overlay some kind of function that has that sort of broader perspective across the whole public sector and not just individual projects that might be helpful.
You’ve also got the Treasury Board, which has developed the policies on automated and algorithmic decision making, so it’s not like it’s an idea that comes out of nowhere. There are things that could be transformed into this role. Maybe AI becomes part of this as well to help with fragmentation – but I think we need the agent and authority before we bring in the technology to help.
KB: Your report talks about Innovative Solutions Canada and compares it to the SBIR program in the US. This is a comparison I’ve made in other contexts. What creates the divide between the technological driver that SBIR has been for the US for the last 40 odd years versus the relatively lower impact that ISC has had in the Canadian ecosystem?
CL: Kyle, I know you’ve written about this. I like your take, so I’m going to mirror back some of your takes to you. I have a few things that I will add to what you’ve said.
Innovative Solutions Canada was obviously modeled after SBIR, and there have been recommendations to adopt an SBIR-type program floating around, both provincially and federally, for a long time. SBIR is one of the mainstays of successful innovation policy out there, along with DARPA and others.
You can see that they replicated some of the foundational principles with Innovative Solutions Canada. 21 departments are mandated to spend 1% on procurement and R&D. As you’ve seen, these departments were consistently underspending their mandate by two thirds. In 2024, the budget got slashed to about a third of what it was before, to match what departments were actually spending. ISC didn’t really have the teeth of SBIR, which is another kind of pre-commercial procurement, where departments across the U.S. that spend $100 million or more on R&D have to spend roughly 3% of their R&D budget on small and medium-sized enterprises.
I think there are three explanations I have for this. One is just fundamental risk intolerance. You can see in governments across Canada that there’s a lower tolerance for the kind of risk that comes with pre-commercial procurement and working with small innovators. You can see that in the kinds of companies that the Canadian government works with and our preferred vendor lists. They really skew toward major corporations that handle a wide array of government activities and can provide a turnkey solution. Our ability to work with one-offs and tolerate project failures is fairly low. I think there’s a bit of a cultural thing, and a bit of a government culture thing specifically, that would need to be addressed somehow.
There’s also the issue of governance of SBIR versus ISC. Why would a program that’s being mandated to spend a certain amount not actually achieve its mandated spend? If you’re being forced to do something, why would you not do it? Why can’t it be enforced? I think this comes down to both a governance issue and a capacity issue.
Canadian government departments just have very low capacity for even basic things like moving away from “waterfall” procurement to more agile procurement. So working on pre-commercial procurement with smaller firms on risky projects is not within the realm of knowledge and expertise of most departments. The string of big failures on the procurement side of things is evidence of this.
It’s not like the US doesn’t have this issue, but you can see a slightly different governance structure: ISC is a central body that mandates government departments. SBIR has that central body, but then it has 12 other bodies within individual departments that manage and facilitate the guidance and oversight of the day-to-day of SBIR implementation within each department. The DOD, the Department of Health, etc. each have their own SBIR bodies that help facilitate this. I think that helps these departments actually work with smaller firms and commit to spending on pre-commercial procurement.
You can’t just mandate. You have to have these dispersed networks of people with expertise embedded within the organization, working on a day-to-day basis with the people managing the programs, the projects, and the individual procurements, to actually reach the numbers that you want.
And finally, as you’ve pointed out, there is the issue of scale. SBIR funds more than 5,000 projects in phase one and phase two. ISC funded something like 30 in the same timeframe. ISC is 1% of R&D spend and SBIR is 3.2%.
Altogether, ISC is a sort of “uncanny valley” type of program where on the surface, it looks like it was the SBIR, but as you start to dig, you realize it is not, and the differences turn out to be really important.
KB: Is there anything I should have asked you but did not?
DM: The only thing that I’d add or emphasize, that we say in the report, is that we’d really like to see not a technology first strategy on AI, but a values first approach. Rather than taking this technology and trying to find places where it might add value or advance something that we care about, instead ask “what is it that we want to achieve, and to what extent and how can this technology help us do that?”
Whenever people ask my friend and colleague, Danny Breznitz, what Canada’s innovation policy or strategy should be, he always says that the first thing we need to do is answer the question “what do we want from innovation?” Only then can we talk about an innovation strategy. I think that applies here as well. What is it that we want? What is it that we want to achieve?
We’re going to disagree about that, obviously. We have different values, we have different priorities. But once we come to some kind of answer or conclusion about that in the public sector and in the private sector, only then can we figure out what role AI could play in getting there.
A values-first rather than a technology-first approach to this is the way to go.
CL: That’s a really good point. Even on the low innovation equilibrium piece, I think sometimes we assume that it’s a problem because people think that innovation is intrinsically good. They want to bolster their R&D numbers so that they look good on an international stage or something. But if the end goal is to have people earning incomes and companies earning profits to facilitate those incomes, and if that’s happening with or without innovation, does it really matter?
DM: Innovation is just a mechanism. It’s an instrumental good. We have to ask the more fundamental question about what it is that we want to achieve, and then figure out how innovation fits into that.
Key Takeaways
The report by Shift Insights, and the commentary in the interview above, paint a complex picture of AI adoption in Canada. Aside from the usual challenges of fragmentation and siloed attempts to move forward that characterize pretty much all of Canada’s attempts to innovate, the core of what I took away from our conversation was a need to take a step back and answer a few fundamental questions before trying to do anything else.
What are we trying to achieve?
Daniel describes himself as a political philosopher, while Creig is an economist. It’s a powerful combination: throughout their report and this interview, they use a values-based framework to inform their recommendations, grounded in the realities of the incentives that dictate behaviour. Instead of prescribing a particular solution to Canada’s AI adoption challenges, their approach is to ask thoughtful questions that help us understand why we care about AI adoption in the first place and how the incentives might be misaligned, and only then to make concrete recommendations as to what needs to be done. It is an approach that I deeply appreciate, and use in my own work anywhere that systemic change is part of the path forward.
While this approach shows up throughout both the interview and their report, it appears most explicitly in our discussion of sovereignty. There is a huge amount of commentary making its way through the innovation space on what it means to have technological sovereignty and data sovereignty, and while these concepts seem straightforward on the surface, they are fairly difficult to define in practice. Daniel gets to the heart of what sovereignty means in the context of AI in just two questions:
“can we meaningfully shape our own future in this space?”
and
“what is it that we want to achieve with AI, or with any other technology?”
In other words: part of the reason for the difficulties Canada faces in its AI adoption journey is because we have not agreed on what is important. Daniel makes clear that he does not necessarily have answers for these questions to suggest; rather, that these are “principles for how to think through what you want to do in any given case.”
This idea generalizes far beyond AI, a lesson that I learned more slowly than I care to admit. Early in building my own company, mentors and advisors tried hard to educate me on the importance of clearly articulating “mission, vision, and values”. As a physicist who had not yet made the transition to business thinking, I mostly ignored their advice as tech-bro nonsense. In the end, it turns out I was wrong, and the tech-bros were right (broken clocks, etc.). Daniel and Creig’s questions invite us to articulate a vision (what we want to achieve long term), a mission (how we plan to get there), and values (the principles we agree are important to uphold in making decisions en route). Only once we have agreed on what these are does it make sense to decide how to go about adopting AI, or building Canadian data sovereignty, or, frankly, almost anything that requires significant change.
It is a framework that is particularly effective for complex decision-making, since values, in particular, provide guidelines that allow us to make decisions with ambiguous implications, secure in the knowledge that the decision was made with purpose and that we have guardrails with which to course-correct when things get off track en route.
Shaping our future
I am personally skeptical of Canada’s ability to lead in the AI space, and have expressed as much in contributions to the innovation discourse. As far as I can tell, AI is just the latest in an often-repeated story in Canada: we contribute foundational, groundbreaking research to the world, and then sit back and watch as someone else makes it practically useful.
That being said, Daniel and Creig make a strong case that there are still opportunities for AI to create value for Canada and to direct our own future in the space, even if those opportunities may not involve setting the global agenda. They see models that are purpose-built for Canada as being an important area of focus, built using proprietary datasets that are specific to the society they serve. Health data is a key example, as is anything that similarly depends on sensitive, personally identifying data for an associated AI model to be useful:
“[T]here are probably opportunities to do niche things with a Canadian focus. Even if we don’t develop things that have global export potential, there are some things that we can do domestically […]. Given concerns about health data and privacy, “Canada first” solutions might be the way [….]”
The suggestion from the interview is to focus on models trained on Canadian data, geared toward solving domestic challenges.
While the suggestion is sound, it reinforces the importance of addressing issues of control over data that have plagued the space to date. Our ability to shape our future in AI hinges on being able to control the data on which that AI is trained.
Challenges around data sovereignty go beyond personal information, however. As I have written about previously, our ability to shape our future depends on control over our intellectual property, and AI does not have a great track record of respecting it. A key issue that arises when considering Canadian-specific data is the need to fine-tune existing models rather than training them from scratch, given the volume of data involved. Even if the data that we use to train a Canadian-specific model is managed in a way consistent with our values, the base models almost certainly have not been. Per the interview:
“[T]he base model […] may have been trained on copyrighted data and thus have antecedent IP violations built in. That’s a problem. I’m not sure how you disentangle those things, to be honest. It’s certainly something that the government will need to keep in view.”
I will say I lean more towards protecting copyright and intellectual property than I do towards changing rules simply to facilitate development of some new technology.
Despite this preference, Creig takes a pragmatic view:
“[T]his report came at it from the lens that what already exists is a given, that we can’t dial it back, and then considering what we can do in the current environment in terms of adding additional measures and protections.”
Is productivity the end goal?
Canada is stuck in a “low-innovation equilibrium”, an idea that has been around for some time. Unlike most commentators on the issue, however, Daniel and Creig do not take for granted that an innovative Canadian economy and the associated productivity are the end goals, asking that we go one step further and first ground ourselves by articulating why this matters. As Creig puts it:
“I think sometimes we assume that [low levels of private innovation is] a problem because people think that innovation is intrinsically good. […] But if the end goal is to have people earning incomes and companies earning profits to facilitate those incomes, if that’s happening with or without innovation, does it really matter?”
Daniel echoes the point:
“Innovation is just a mechanism. It’s an instrumental good. We have to ask the more fundamental question about what it is that we want to achieve, and then figure out how innovation fits into that.”
Especially where the public sector is concerned, the answer to these questions has practical consequences for what it means to adopt AI at all. Daniel and Creig observe that we need to go through an additional layer of reflection on what we want, since there are tradeoffs to be made even after we have decided to proceed with AI:
“[Suppose a CRA agent] answers a question, but before actually giving it, checks it against what the chatbot would produce. If the answers align, then your likelihood of accuracy is probably much higher. If they don’t align, then the CRA agent has to either think about whether they’re wrong or whether the chatbot is. The funny part about that is that it would be a lot less efficient, but it might be more accurate. So this is one of these cases where you have to step right back and ask what it is that we want to achieve through the adoption of technology. We often say we want better efficiency, but we might also want to improve public sector accuracy. Those might not always go hand in hand.”
In other words: efficiency may come at the cost of accuracy, and vice versa. There is not necessarily a “right” balance, and even if there is, it will be highly context-dependent. A clear goal and values to guide us toward it are required to arrive at an answer that will serve Canadians, and getting these right has real consequences:
“There’s this hyper focus on the number of public servants we have, or the ratio of public sector employees to population. And there’s a sense among many people that the number is too high or the ratio is out of whack, so we need to fire a bunch of civil servants. […] They’re thinking about how AI can fill the roles of public sector workers that they want to take off the payroll, and that concerns me.”
Only when we come to a consensus on how we want AI to benefit Canadians will we be able to move forward with AI adoption, grounded in the values we have collectively decided are important. I do not think that blindly chasing productivity for productivity’s sake is the answer. The report by Shift Insights makes several recommendations on how to approach the necessary questions that should be required reading for anyone in the public service who is involved in AI adoption efforts.
Many thanks to Daniel and Creig for taking the time to share their insights, and for their contribution to the national conversation.