The Economics of Health Innovation: Assessing Value and Impact with Sasha van Katwyk
Sasha van Katwyk shares his insights on improving public and private investment decisions on emerging health technologies
This week I interviewed Sasha van Katwyk, Senior Health Economist and Managing Principal at the Institute of Health Economics (IHE). As Sasha describes it, “IHE is a not-for-profit research institute that […] sits at the intersection of academia, government, and industry”, providing insight to policy makers and innovators alike on the value and potential of new health technologies.
This interview is the second instalment in a series on “purposeful research”, focused on finding ways to identify research projects that address real societal needs. My goal in this interview was to learn how health innovation is valued and evaluated by the different stakeholder groups in the adoption process, and to better understand what tools can be applied early in the innovation process to focus healthcare R&D investment where it can achieve the greatest impact.
Sasha’s unique perspective at the intersection of academic research, private sector health innovation, and healthcare policy will be of interest to anyone operating anywhere in that continuum.
Your email client will probably truncate this post. My key takeaways are presented at the end, so be sure to read the web version if you want to get the whole story. Many thanks to Sasha for taking the time to share his insights.
Interviewer’s note: Sasha van Katwyk approved the final version of the section entitled “Interview with Sasha van Katwyk” and had editorial input on that section, with the option to rephrase and expand on the ideas discussed in the interview without changing or removing any intended meaning. The key takeaways presented at the end are my own commentary, and do not necessarily represent the views of Sasha van Katwyk or the Institute of Health Economics.
Interview with Sasha van Katwyk
KB: Tell us about yourself and give readers some context for where your perspectives are coming from.
SVK: I’m an economist by training, specializing in risk assessment to inform social protection policy. I use simulation modelling to allow for the combination of disparate evidence sources to inform economic decision-making and forecast outcomes. I first used that skill set while working in South Africa to do simulation models for cash transfer programs; think welfare payments, anti-poverty initiatives, and taxation policy, things like that. When I moved back to Canada, I got into health research because there’s a growing area of simulation modelling in health economics. That’s where I first got involved in health policy, about 10 years ago. I worked within a research institute that was academically oriented, doing in-hospital trials and hospital-level policy-making. I then moved to the Institute of Health Economics (IHE) where I’m working on policy analysis and research for provinces, innovation investment centres, and formulary committees, all asking questions about health system capacity and decision analysis.
KB: What does IHE do, and how do you fit into it?
SVK: IHE is a not-for-profit research institute that we often say sits at the intersection of academia, government, and industry because those are the three sets of collaborators that we usually work with. Our work is a mix of grant-funded research and service-specific consulting. We also provide independent third-party analysis and advice for health policy and investment choices.
In my role as Managing Principal, I’m involved in a couple of work portfolios: Health Systems Modelling, where our clients are either provinces or specific health condition foundations; Innovation Support, where we work with SMEs and independent researchers to define their value proposition and identify the best evidence generation strategies to build their economic case for system-level adoption; and Methods Advancement, where we just nerd out on the latest mathematical modelling methods and service delivery strategies in order to keep the organization at the forefront of quality analytics and delivery.
KB: Sitting at the intersection of those three stakeholder groups, what kind of disconnects and misunderstandings do you see in terms of how they are approaching health policy?
SVK: Everybody has their blind spots, which lead to misunderstandings and incorrect assumptions about what others’ priorities and needs are.
With researchers, the main blind spot is that they don’t understand the set of constraints under which policymakers operate. Researchers tend to present problems and solutions in a way that’s very “blue sky” policy-oriented, that imagines system-wide change. Even when the changes seem relatively modest to them, they’re often not aware that implementing what they are proposing—what they identify as the optimal solution—would require coordination and collaboration across many agencies within a government body and would consume an enormous amount of political capital to achieve. Because researchers don’t fully appreciate the policy environment they are working within, and the political and operational pressures within government agencies, there’s a disconnect between how they frame problems, how they present potential solutions, and what is actually practical from a policymaker’s perspective.
From the policymaker’s perspective, in my experience, they often don’t have access to the operational or technical details of the problem and aren’t operating with a deep understanding of the environment they are regulating. They don’t know these problems in depth. Often this is because, at the level where decision-making actually happens, policymakers think in terms of the policy levers at their disposal. If a problem can’t be addressed by pulling one or some combination of those levers, it’s very hard for change to happen without legislation, and now you’re demanding a lot of the agency or department you’re engaging with. As a result, there is a strong institutional gravity toward existing options. If the problem is not solvable through a relatively straightforward policy lever, it is difficult to sustain policymaker engagement.
For innovators, especially independent or small-scale groups, I think the main blind spot is that they don’t understand the decision-making process and what policymakers are using to assess the value and viability of a technology that they might want implemented. In part that’s not their fault, but it’s still a significant blind spot. Innovators often envision that the system works one way, but in reality it operates very differently. As a result, innovators are often unaware of what actually drives meaningful change at the system level.
KB: In IHE’s role at the intersection of all those blind spots, what have you found to be effective at convening all of those viewpoints and getting them onto the same page?
SVK: We are most successful when there is alignment with policymakers who know they want to resolve a problem and come to us with an understanding of the levers at their disposal, and are looking for solutions that fit within those levers. We frequently have to get them to that place before any math can actually happen, and that is often where the really intense work lies. Once we have this clarity, it is easier for us to then look at the research, do the deep dive, develop a nuanced understanding of the nature of the problem, bring in the researcher perspective on the true nature of the problem and the available solutions, and assess where those solutions fit within what is legislatively possible, what is practical, and what is financially feasible at the policy level.
We’ve definitely produced analyses that are missing one of the major actors, and we’re trying to speak to that actor as part of the process. That can be useful, but it’s much more challenging if there isn’t already a high level of clarity about what is possible, who is already invested, and who is willing to commit capital to the effort. When you don’t have that buy-in to actually implement change, it makes the whole process significantly more difficult.
KB: Walk me through the decision-making process that occurs when a policymaker is evaluating a novel health technology. Where are these policymakers sourcing problems, and once a possible solution is identified, what does the evaluation process look like?
SVK: That’s a really big question. There are a couple of caveats that need to be made as part of answering it. One is that not all health technologies are pitched as a solution to a clearly defined problem. It is often the case that there are technologies that have industry champions or policy-oriented champions who have identified a specific health technology and have the necessary network within government to drive a decision forward somewhat independent of a formal problem-identification process. That absolutely happens. So an important caveat is that when policymakers already have a solution in mind, the process is very different than when they are recognizing a problem, or appreciate that a problem exists but don’t necessarily know what the solution is or what technologies could be implemented. It can also be the case that even though a particular problem has not been formally identified as a policy priority, an innovator approaches the government with a health technology solution. You’re running into very different versions of the government decision-making process in each of those cases.
The other caveat is that when we say “health technology,” we need to be more specific. There are different processes for different kinds of health technologies. If we’re talking about pharmaceuticals, for instance, there’s a relatively well-established federal process for regulatory approval and health technology assessment. That process then filters down to the provincial level, which is where coverage and reimbursement decisions are actually made. Broadly speaking, it involves a national-level process of going through Health Canada for regulatory approval, then the Canadian Drug Agency for clinical and health technology assessment, and then the pan-Canadian Pharmaceutical Alliance (pCPA) for price negotiation, and only at that stage do provinces begin making reimbursement decisions.
There are other technologies—for example, digital technologies or medical devices—that might or might not go through Health Canada, depending on how they are classified. I’m not going to pretend to be an expert in how different health technologies are classified, because it can be a gray zone in some cases.
Another unfortunate complication in answering your question is that the means by which decisions are made about whether and how to adopt a health technology are often quite ad hoc. Every province has its own process, and the level of development and maturity of that process is highly variable. In some cases, the decision is not even made at the provincial level; instead, it may be made at the health system or regional health authority level, with organizations making adoption decisions independently. Even when it is a provincial decision, the kinds of evidence being presented and what the pathway toward adoption looks like can vary substantially, and sometimes the process is not especially well defined, even within a given province or territory. It might not even be clear to policymakers themselves what the exact process is in a given case.
Basically, every part of your question has a number of asterisks attached to it. It depends what you mean by health technology, it depends what you mean by decision-maker, and it depends what you mean by adoption.
KB: Let’s simplify by using concrete examples of the kinds of analysis that IHE does — walk me through the inputs and outputs of your simulation models, and how they help inform decisions downstream.
SVK: The primary work that we do that is relevant here is what we call “health technology assessment” (HTA). Health technology assessment is a set of methods used to evaluate the clinical effectiveness, applicability, and economic impact of a health technology. As economists, we are acutely aware that every dollar spent on one patient is a dollar we can’t spend caring for another, so we’re ultimately interested in knowing how every dollar can achieve the most health benefit, or what we call cost-effectiveness. So, when we do HTA, we are trying to identify which of several competing options is going to be the most cost-effective to the purchaser. HTA doesn’t have to be conducted in a single-purchaser health system environment, but that is typically how we apply it.
Within an HTA, we first need to understand the current standard of care that the health technology in question would affect. Specifically, we want to understand what the current standard of care costs and what the associated health outcomes are. From there, we then need to know where the new health technology fits within that standard: is it supplementing the current process, or is it replacing some part of it? What are the implications for total costs to the health system and to society, and what differences are expected in health outcomes for patients, caregivers, and other parts of the health system?
For example, we might see benefits in terms of reduced resource use within a hospital setting, which might not necessarily be fully captured within a health system costing model, but would still be highly relevant to hospital operations. As another example, if you can improve ER wait times, that doesn’t necessarily change the overall cost of running that ER in a substantial way, but in terms of efficiency, throughput, and staff well-being, it might be very influential. So those are additional outcomes that might be relevant. Again, that all falls into the broader categories of measuring costs, measuring health outcomes, and measuring functional or system-level outcomes that may be important to decision-makers.
From there, we build a simulation model, which is a mathematical representation of the current standard of care, in which we simulate patients moving through that process and through the healthcare system. We capture variability in health outcomes and variability in costs based on patient characteristics, health system characteristics, differences in physician practice patterns, and differences in patient response to treatment or diagnostics. In other words, we account for heterogeneity within the population and within health system performance. We then create a base-case model that represents the current state of affairs—the current version of the world. This allows us to estimate what it currently costs the health system to deliver care and what patient outcomes look like under the status quo.
Starting from that base case, we then introduce the new health technology into this simulated version of the world. To do this, we draw on a variety of data sources: clinical trials, pilot studies, observational data, or in some cases early effectiveness estimates. We run the model and observe how outcomes and costs change relative to the base case.
We run the model many times to account for heterogeneity in outcomes and uncertainty in the treatment effect and other model parameters. This gives us a full picture of the expected health outcomes, system outcomes, and costs. From there, we conduct the economic analysis. At this point we get into cost-effectiveness analysis or cost–benefit analysis, which are structured ways of translating those differences in costs and outcomes into decision-relevant metrics, based on what we understand the priorities of the decision-maker to be. This allows us to quantify the trade-offs involved in adopting this technology versus not adopting it, and to more clearly understand which version of the world we would prefer to implement. Of course, this usually involves uncertainty, so we need to explicitly account for that as part of the decision-making process.
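As a toy illustration of the kind of probabilistic cost-effectiveness comparison described here, the following is a minimal Monte Carlo sketch. All numbers (QALY gains, costs, the willingness-to-pay threshold, the patient count) are invented for illustration and are not from the interview; a real HTA model would capture far richer patient pathways and heterogeneity.

```python
import random

random.seed(42)

WTP = 50_000  # hypothetical willingness-to-pay threshold, $ per QALY


def simulate_patient(qaly_mean, cost_mean):
    # Heterogeneity: each simulated patient draws their own outcome and cost
    # (distribution parameters are invented for illustration).
    qalys = max(0.0, random.gauss(qaly_mean, 0.5))
    cost = max(0.0, random.gauss(cost_mean, 2_000))
    return qalys, cost


def run_arm(n, qaly_mean, cost_mean):
    results = [simulate_patient(qaly_mean, cost_mean) for _ in range(n)]
    mean_qalys = sum(q for q, _ in results) / n
    mean_cost = sum(c for _, c in results) / n
    return mean_qalys, mean_cost


# Base case (current standard of care) vs. new technology.
q0, c0 = run_arm(10_000, qaly_mean=5.0, cost_mean=20_000)
q1, c1 = run_arm(10_000, qaly_mean=5.6, cost_mean=26_000)

# Incremental cost-effectiveness ratio: extra dollars per extra QALY.
icer = (c1 - c0) / (q1 - q0)
verdict = "cost-effective" if icer < WTP else "not cost-effective"
print(f"ICER: ${icer:,.0f} per QALY ({verdict} at ${WTP:,}/QALY)")
```

With these invented inputs, the new technology costs more but delivers enough additional QALYs that its incremental cost per QALY falls below the threshold.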
That overall process is the health technology assessment. We present that information to the decision-maker, and the decision-maker then makes a decision. Sometimes the cost-effectiveness results are clear and it’s a relatively straightforward decision, but more often the decision is complicated by evidence uncertainty, heterogeneity, and data limitations, all of which introduce decision risk. Part of the process is helping the decision-maker understand that risk, understand their options, and identify whether there are ways to mitigate that risk—for example, through phased implementation or conditional adoption.
KB: How do you define “optimal” when you evaluate standards of care?
SVK: It depends on what kind of intervention we’re trying to model. Let’s take a really simple example: a diagnostic tool. There’s no shortage of new health technologies that aim to improve diagnostic efficiency and diagnostic accuracy. There’s also a ceiling on how effective a diagnostic tool can be—you can only be so accurate and you can only be so fast. So that’s a good example of what we mean by “optimal”: the upper bound on achievable performance, given technological and clinical constraints.
A more complicated example is something like diabetes, where there are multiple points in the care pathway where you can intervene. Once somebody has diabetes, there are many ways we can treat and manage those patients, and we can model that standard of care. But we could also intervene earlier through prevention, screening, or better self-management to prevent cases from occurring or to delay progression. The optimization question becomes: what combination of interventions across prevention, management, and treatment produces the greatest health benefit for the population we have and with the resources available?
When we model this, “optimal” usually means the most efficient allocation of resources across those different parts of the pathway—where an additional dollar spent produces the most health benefit. Preventing a case entirely often represents something close to an upper bound on potential benefit, because you avoid all downstream costs and health losses associated with that case. But in reality, the optimal solution usually involves some mix of prevention, monitoring, and treatment, and the model helps us understand where along that pathway investments have the greatest value.
We can’t always calculate a perfectly optimal system in a literal sense, but we can usually identify which interventions are likely to produce the greatest benefit at the margin, and that’s usually what decision-makers actually need to know.
KB: Clearly the process you’re describing is only as good as the data that informs it. Understanding that the quality and quantity of available data will reflect the stage of development of a new health technology, where is that data typically coming from?
SVK: Evidence collection is the biggest part of any health technology assessment, and usually we’re combining data from many sources. At a minimum, we are collecting evidence about the health technology itself, which ideally is coming from trials. From that, we might get data on the effectiveness of the technology. We also need data comparing those results to some standard of care or alternative technologies that are already in use.
That can get complicated, because sometimes a trial conducted in another jurisdiction, like the United States, may compare the technology to a standard of care that is different from the standard of care in Canada. They might be using different drugs, different technologies, or different clinical pathways. In these cases, there is some extrapolation and statistical adjustment required to make sure we are comparing apples to apples and that the results are applicable to the jurisdiction where the decision is being made.
Then there’s the system-level data, which we try to collect from administrative databases or fee schedules. We then have to evaluate that data, clean it, and transform it into something that allows us to simulate the care pathway and the patient population. Often, administrative data has gaps that we need to fill—we may need to impute missing data, forecast certain parameters, or better characterize populations that are not well captured in the data but still need to be represented in the model. Sometimes this involves making assumptions informed by alternative data sources, including qualitative or observational data about those populations.
Finally, we may also use data from the published literature that looks at things like patient experience, physician practice patterns, and health system implementation in different settings. Health systems are not implemented uniformly, and outcomes can differ significantly depending on factors like urban versus rural settings, socioeconomic characteristics, and real-world challenges like differences in clinician or patient adherence to guidelines. We need to account for those factors if we want the model to reflect the real-world population and to support equitable decision-making.
The fact that there are so many different sources of data that need to be brought together is actually why simulation modelling is used in the first place. If all of the data came from a single source, we could often answer the question using a simpler statistical analysis. It is because we have to combine clinical data, administrative data, literature, and assumptions about how the system operates that we build a model of the care pathway and the population, and then run that model many times — what we call probabilistic sensitivity analysis — to generate a range of possible outcomes. That allows us to understand not just what we think will happen, but what could happen under different assumptions and accounting for uncertainty in our data, and only then can we draw conclusions for decision-makers.
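To make probabilistic sensitivity analysis concrete, here is a minimal sketch: each iteration draws a full parameter set from its uncertainty distribution and scores the decision with a net monetary benefit. All distributions and the willingness-to-pay threshold are hypothetical, and the one-line NMB stands in for what would be a full pathway model in practice.

```python
import random

random.seed(0)

WTP = 50_000  # hypothetical willingness-to-pay, $ per QALY


def draw_parameters():
    # Each PSA iteration samples every uncertain parameter from its
    # distribution (all distributions invented for illustration).
    return {
        "qaly_gain": random.gauss(0.4, 0.15),     # incremental QALYs
        "extra_cost": random.gauss(12_000, 3_000),  # incremental cost, $
    }


def net_monetary_benefit(params):
    # NMB = WTP * incremental QALYs - incremental cost.
    # Positive NMB means the new technology is cost-effective at this WTP.
    return WTP * params["qaly_gain"] - params["extra_cost"]


n_iter = 5_000
nmbs = [net_monetary_benefit(draw_parameters()) for _ in range(n_iter)]
prob_ce = sum(nmb > 0 for nmb in nmbs) / n_iter
print(f"Probability cost-effective at ${WTP:,}/QALY: {prob_ce:.0%}")
```

Rather than a single point estimate, the output is a distribution of outcomes, from which a decision-maker can read off the probability that adoption is the right call under current uncertainty.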
KB: I understand you’re building an early healthcare assessment tool to be able to start making predictions about the potential value of these healthcare technologies earlier in the process. How do you mitigate the inevitable data quality issues that come with early-stage technologies in general?
SVK: I would describe early health technology assessment (eHTA) not necessarily as a single tool, but rather as a set of methods. We’re taking the same HTA framework but applying it earlier in the product development process, recognizing that at earlier stages we have less data available, which means there will be higher levels of uncertainty in our conclusions. That introduces a unique set of challenges, but when we’re talking about early-stage innovation, there’s a bunch of valuable insights eHTA can still offer.
We still assess the current standard of care and the current care pathway. What we don’t necessarily know at that stage is the true treatment effect of the new health technology. Often, the technology is early enough in development that we don’t even know exactly where it will fit in the health system. For example, it might ultimately be best suited to a specific sub-population, a specific indication, or a specific point in the care pathway.
With that in mind, what we do in eHTA is develop analyses that are not meant to definitively conclude whether the technology will be cost-effective — it’s too early for that. Instead, we focus on answering a few key questions.
First: given the current standard of care, what is the unmet need?
Another way of asking this is: to what extent is the current standard of care not optimal compared to best achievable outcomes or ideal patient health? If we are already achieving very good outcomes, then there may be limited room for improvement, and therefore limited value in introducing a new technology. We can quantify this as “headroom” — both therapeutic headroom, how much health improvement is possible, and economic headroom, how much we are willing to pay for that improvement. This helps determine whether the unmet need is large enough to justify the cost and complexity of introducing a new technology.
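A headroom calculation can be sketched in a few lines. The inputs below (current outcomes, best achievable outcomes, and the willingness-to-pay threshold) are entirely hypothetical placeholders for what would, in practice, come from modelling the current standard of care.

```python
# Hypothetical inputs for a headroom calculation.
current_qalys = 6.2     # average QALYs per patient under current care
best_achievable = 7.5   # best achievable outcome (e.g., perfect diagnosis)
wtp_per_qaly = 50_000   # willingness-to-pay threshold, $ per QALY

# Therapeutic headroom: how much health improvement is possible.
therapeutic_headroom = best_achievable - current_qalys

# Economic headroom: the most the system would pay, per patient,
# for a technology that fully closed that gap.
economic_headroom = therapeutic_headroom * wtp_per_qaly

print(f"Therapeutic headroom: {therapeutic_headroom:.1f} QALYs per patient")
print(f"Economic headroom: up to ${economic_headroom:,.0f} per patient")
```

If the economic headroom is small, there is little room for any new technology to be both effective enough and affordable enough to justify development.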
Second: given the current standard of care, how much more effective would a new technology have to be, at the point it is ready for adoption, for it to be considered cost-effective?
At this stage, we may not yet know how effective the technology actually is — it may not have been trialled yet, or it may still be at the prototype stage. So instead, we ask: given the current standard of care, what level of improvement would be required for any new technology to be considered cost-effective at a given price? This is what we call threshold analysis. We essentially map combinations of cost and effectiveness and identify the boundary at which the technology would be considered cost-effective. That allows developers to see whether the required improvements are realistically achievable and whether the technology is likely to be economically viable before they invest heavily in development and trials.
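The core arithmetic of a threshold analysis is simple to sketch: at the cost-effectiveness boundary, the willingness-to-pay times the QALY gain equals the incremental cost. The prices and threshold below are invented for illustration.

```python
# Threshold analysis sketch: for a set of candidate incremental costs,
# what QALY gain would a technology need to deliver to sit on the
# cost-effectiveness boundary? All numbers are hypothetical.
wtp_per_qaly = 50_000  # willingness-to-pay threshold, $ per QALY


def required_qaly_gain(incremental_cost):
    # At the boundary: wtp_per_qaly * gain == incremental_cost.
    return incremental_cost / wtp_per_qaly


for price in (5_000, 15_000, 30_000):
    gain = required_qaly_gain(price)
    print(f"At ${price:,} incremental cost, need >= {gain:.2f} "
          f"QALYs gained to be cost-effective")
```

Mapping this boundary across the plausible cost range shows a developer, before any trial, whether the required effectiveness is realistically achievable.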
The third type of analysis applies when a technology is closer to the clinical trial stage. Clinical trials are extremely expensive, and it is not necessarily sufficient to run a trial, publish the results, and present those results to a decision-maker. A purchaser may say: “You’ve shown that the technology is effective, but you haven’t shown whether it is cost-effective, and you haven’t estimated the budget impact.” In other words, decision-makers require evidence that goes beyond clinical effectiveness alone.
So the question becomes: what evidence does the decision-maker actually need in order to make a decision, and what data should be collected during the trial to support that decision? For example, we might know that a technology improves life expectancy, but we don’t yet have good data on quality of life gains from better patient outcomes, which means we cannot calculate cost-effectiveness. That tells us that collecting quality-of-life data during the trial is critical. In other cases, quality-of-life data may already be well established, and instead it may be more important to collect data on adverse events, diagnostic accuracy, or resource use. The biggest evidence gaps create the greatest evidence uncertainty, which in turn carries the highest decision risk. We can price that risk.
We call this value of information analysis — estimating the value of collecting additional information, in terms of how much it improves decision-making. It helps determine whether it is worth the cost of collecting certain data as part of a trial.
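One standard quantity in this family of methods is the expected value of perfect information (EVPI): the gap between deciding under current uncertainty and deciding with uncertainty resolved. The Monte Carlo sketch below uses a simplified adopt-versus-reject decision with invented distributions; it is an illustration of the general idea, not IHE's actual workflow.

```python
import random

random.seed(1)

WTP = 50_000  # hypothetical willingness-to-pay, $ per QALY


def sample_nmb():
    # Net monetary benefit of adopting vs. status quo, per patient, under
    # current uncertainty (all distributions invented for illustration).
    # The "reject" option has NMB 0 by definition.
    gain = random.gauss(0.2, 0.25)     # uncertain incremental QALYs
    cost = random.gauss(8_000, 2_000)  # uncertain incremental cost, $
    return WTP * gain - cost


draws = [sample_nmb() for _ in range(20_000)]

# Deciding now: commit to whichever option has the higher *expected* NMB.
value_current_info = max(sum(draws) / len(draws), 0.0)

# With perfect information we could observe the true state of the world
# first, then pick the better option in every state.
value_perfect_info = sum(max(d, 0.0) for d in draws) / len(draws)

evpi = value_perfect_info - value_current_info
print(f"EVPI: ${evpi:,.0f} per patient")
```

A per-patient EVPI, scaled by the affected population, gives an upper bound on what further research (such as trial data collection) is worth to the decision-maker.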
So those are three broad types of eHTA, and they correspond to different stages of technology development. At the earliest stage, when you may not even have a prototype yet, eHTA is mainly about determining whether there is sufficient unmet need and potential value to justify investing in development at all. At the next stage, when you have a technology concept, threshold analysis helps determine whether it is economically feasible — whether it could be effective enough and inexpensive enough to be adopted. Finally, when you are preparing for trials and evidence generation, value of information analysis helps determine what data you should collect so that, after the trial, you have the evidence needed to support adoption. That helps ensure that you only need to run one trial, not two or three, to generate the evidence required by decision-makers.
KB: I am seeing a lot of interest from policymakers in identifying research that addresses a societal need. Could eHTA be flipped on its head and used to map out where there’s lots of economic or performance headroom? If so, it could be a key input into how health research funding could be more efficiently allocated.
SVK: Exactly right. These methodologies can be, as you say, flipped on their head: instead of starting with a solution and asking what problem it solves, we can start by identifying the biggest problems for which we need better solutions. The same methods can be applied in that direction.
We can use headroom analysis, threshold analysis, or value of information analysis to assess a health domain and identify where the largest sources of unmet need are, where the largest performance gaps exist between current care and best achievable outcomes, and where the biggest areas of uncertainty are that currently limit decision-making in the health system.
In the case of value of information analysis, we can actually quantify how much a health system should be willing to invest to close the gap between the current standard of care and the optimal standard of care. By quantifying that, we can help guide funding agencies, innovation accelerators, and health systems toward the areas where even small improvements in health technology or care delivery would translate into large health or economic gains. In other words, it provides a way to prioritize research and innovation funding based on potential system-level value, rather than just scientific interest or technological feasibility.
We’ve definitely seen excitement from policymakers on that front. The challenge from their perspective is that there are many competing health priorities, so the real question becomes where to start and how to prioritize across disease areas and interventions.
KB: Based on what you’re seeing from the results of applying this process so far, versus how you’re seeing actual healthcare research spending being allocated, do you have a sense of the kind of efficiency improvements we could see if our research funding were more effectively allocated at the front end?
SVK: It’s a complicated question. I think the simple answer is that, in my experience, when I look at how health system priorities are set, it’s largely done by looking at the biggest ticket items. This usually means that conditions with the highest prevalence and the highest healthcare system cost get the most attention. That may seem logical, but once you look more closely, you realize that there are significant gains to be had in less common health conditions that are currently given much less attention. There is enormous headroom in these areas.
Note that these are not necessarily rare conditions, because rare conditions are often already very expensive to treat and therefore do receive attention. Rather, I’m talking about things like preventive interventions and population health measures aimed at improving the overall health of the general population. Those areas are often under-invested in, even though they can have substantial downstream effects: they prevent late-stage disease and delay disease progression, which can produce very large long-term benefits. From an economic standpoint, that is often where some of the largest gains can be realized, but in practice I see fiscal pressure and immediate budget impact driving a lot of prioritization.
I’ve seen that there’s a lot of patient advocacy that informs priorities. This is a good thing — it’s important to a democratic health system. But it’s often the case that when a new technology is developed, suddenly there’s a tremendous amount of advocacy about that solution and about the importance of implementing it, and so all of a sudden a disease gains a level of attention and prioritization because we found a new solution to it. That puts priority onto a condition based on the existence of a new technology, rather than prioritizing technology development based on where the most need for health system investment actually lies.
KB: Coming back then to the innovator and the researcher side of the equation, what should innovators and researchers be thinking about before having a conversation with you and with IHE?
SVK: The most common blind spots we see when working with innovators are, first, that the innovator doesn’t know who their purchaser is, and second, that they don’t know what their purchaser values. It’s not really the innovator’s fault that they don’t know these things. Our health system isn’t particularly transparent about how decisions are made. The process differs by province, sometimes by health authority, and the regulatory and decision-making processes are evolving, with new approaches being introduced over time. We don’t make it easy, so it’s understandable that innovators often don’t know these things.
The main thing I would say is that every innovation is different, every pathway into the health system is different, and the decision-making environment is often difficult to navigate from the outside. That’s really where we see our role — helping innovators and researchers understand that landscape, understand what evidence is going to matter, and think through how their technology can realistically fit into the system. We’re always happy to have those conversations early, because the earlier you start thinking about these questions, the better your chances are of generating the right evidence and ultimately getting your innovation adopted.
Key Takeaways
Recent changes to provincial research funding mandates make clear that Canada is starting to reorient its investment in research toward projects and technologies that respond to a clear societal need. This is a non-trivial task, and funding agencies have mostly used the existence of a partner organization willing to co-fund the research as the primary signal of societal need, reflected mainly in grant eligibility criteria (for example, NSERC Alliance and I2I). However, this approach leads to a very narrow definition of societal need: there are many challenges for which no champion organization exists yet, and focusing entirely on private sector involvement leaves opportunities for valuable and impactful research on the table. Sasha’s commentary on eHTA makes clear that we have a much larger toolkit at our disposal for identifying research worth funding than basing it entirely on existing private sector involvement. Where health technology is concerned, it is actually possible to optimize our investment of research resources in a way that maximizes impact. As Sasha puts it:
“[I]nstead of starting with a solution and asking what problem it solves, we can start by identifying the biggest problems for which we need better solutions. […] In other words, [eHTA] provides a way to prioritize research and innovation funding based on potential system-level value, rather than just scientific interest or technological feasibility.”
Part of why this works in the healthcare space specifically is that the process of adoption of novel technologies is (at least somewhat) systematized, through the regulatory pathways that govern whether and how new technologies become part of the public health system in Canada.
Health regulation is often viewed by early-stage innovators and investors primarily as a hurdle to be overcome. It is expensive, complex, and disproportionately impacts new startups that do not yet have the internal processes or experience to navigate it effectively. Sasha’s commentary makes clear that it can also be the basis for effective decision-making and can inform optimization of both public and private sector health tech investment. By creating predictable pathways to impact, regulation contributes to making it possible to assess the potential value of technologies much earlier, in ways that simply would not be possible in a less regulated space.
Starting from the idea stage, eHTA can be used to progressively refine the direction of research and decide at each step whether and how to move forward:
“At the earliest stage, when you may not even have a prototype yet, eHTA is mainly about determining whether there is sufficient unmet need and potential value to justify investing in development at all. At the next stage, when you have a technology concept, threshold analysis helps determine whether it is economically feasible — whether it could be effective enough and inexpensive enough to be adopted. Finally, when you are preparing for trials and evidence generation, value of information analysis helps determine what data you should collect so that, after the trial, you have the evidence needed to support adoption. That helps ensure that you only need to run one trial, not two or three, to generate the evidence required by decision-makers.”
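The value of information analysis Sasha mentions can be made concrete with a toy calculation. The sketch below computes the expected value of perfect information (EVPI), a standard quantity in this kind of analysis: the gap between what we could achieve if all uncertainty were resolved before deciding and what we achieve deciding on current evidence. All numbers here are illustrative assumptions, not figures from the interview or from IHE.

```python
import random
import statistics

random.seed(0)
n = 10_000

# Simulated net monetary benefit per patient for two options, under
# uncertainty about the new technology's true effectiveness.
# The means, spreads, and sample size are all illustrative assumptions.
nb_standard = [random.gauss(100_000, 5_000) for _ in range(n)]   # standard of care
nb_new = [random.gauss(103_000, 20_000) for _ in range(n)]       # new technology

# Decision under current evidence: commit to whichever option has the
# better expected net benefit across all simulated futures.
value_current = max(statistics.mean(nb_standard), statistics.mean(nb_new))

# With perfect information we could pick the best option in each
# simulated future individually.
value_perfect = statistics.mean(max(a, b) for a, b in zip(nb_standard, nb_new))

# EVPI: the most a decision-maker should pay (per patient) for research
# that fully resolves the uncertainty before the adoption decision.
evpi = value_perfect - value_current
print(f"EVPI per patient: ${evpi:,.0f}")
```

A trial worth running is one whose cost is below the value of the uncertainty it resolves, which is how this kind of analysis helps ensure that one well-designed trial can generate the evidence decision-makers need.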
This is not to say that regulation is inherently a good thing — simply that, if it is consistent and well-understood, it can inform high-quality decision-making.
It is apparent, though, that Canada’s regulatory landscape in the health space suffers from the same fragmentation that plagues every other aspect of its innovation pipeline:
“Every province has its own process, and the level of development and maturity of that process is highly variable. In some cases, the decision is not even made at the provincial level; instead, it may be made at the health system or regional health authority level, with organizations making adoption decisions independently. Even when it is a provincial decision, the kinds of evidence being presented and what the pathway toward adoption looks like can vary substantially, and sometimes the process is not especially well defined, even within a given province or territory. It might not even be clear to policymakers themselves what the exact process is in a given case.”
Obviously, this is not a landscape that can reasonably be understood by innovators and researchers who have not navigated it before. However, it is one that can and should be used by research funding agencies to identify health research that responds to a societal need, and one that can be the basis of a tight feedback loop between research funders, the researchers applying for funding, and the innovators taking that research beyond the lab:
“If we are already achieving very good outcomes, then there may be limited room for improvement, and therefore limited value in introducing a new technology. We can quantify this as “headroom” — both therapeutic headroom, how much health improvement is possible, and economic headroom, how much we are willing to pay for that improvement.
We essentially map combinations of cost and effectiveness and identify the boundary at which the technology would be considered cost-effective. That allows developers to see whether the required improvements are realistically achievable and whether the technology is likely to be economically viable before they invest heavily in development and trials.
[…]
Wherever the biggest evidence gaps are, the resulting evidence uncertainty usually leads to the highest decision risk. We can price that risk.”
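To make the headroom and threshold ideas concrete, here is a minimal back-of-envelope sketch. Every figure in it (the willingness-to-pay threshold, the QALY estimates, the projected cost) is an illustrative assumption, not a value from the interview:

```python
# Hypothetical headroom / threshold analysis sketch. All inputs are
# illustrative assumptions.
WTP_PER_QALY = 50_000        # willingness-to-pay threshold ($/QALY), assumed
current_qalys = 6.2          # expected QALYs under the standard of care, assumed
best_case_qalys = 7.0        # QALYs if the condition were managed perfectly, assumed

# Therapeutic headroom: how much health improvement is possible at all.
therapeutic_headroom = best_case_qalys - current_qalys        # 0.8 QALYs

# Economic headroom: the maximum incremental cost a payer could justify
# if the new technology captured all of that improvement.
max_justifiable_cost = therapeutic_headroom * WTP_PER_QALY    # $40,000

# Threshold analysis: given a projected incremental cost, the minimum
# QALY gain the technology must deliver to be considered cost-effective.
projected_cost = 25_000
min_required_qalys = projected_cost / WTP_PER_QALY            # 0.5 QALYs

print(f"Max justifiable incremental cost: ${max_justifiable_cost:,.0f}")
print(f"Minimum QALY gain at ${projected_cost:,}: {min_required_qalys:.2f}")
```

Even this crude version of the calculation answers the question Sasha raises: a developer can check, before investing in trials, whether the required improvement is realistically achievable at a viable price.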
This kind of analysis is critical both for funders and innovators. I cannot count the number of times I’ve engaged with an early-stage health tech company that is excited about a new technology and has already started building without giving any thought to the economics of what it is trying to build.
On the funder side, the alternative is a more reactive approach in which research breakthroughs lead research funding investment decisions:
“[I]t’s often the case that when a new technology is developed, suddenly there’s a tremendous amount of advocacy about that solution and about the importance of implementing that solution, and so all of a sudden a disease gains a level of attention and a level of prioritization because we found a new solution to it, which puts priority onto a condition based on the existence of a new technology rather than prioritizing technology development based on where the most need for health system investment actually lies.”
While not a problem in and of itself, it is unlikely that a reliance on serendipity will lead to economically optimal use of scarce resources. Effective research funding policy must consider both focused discovery and opportunistic engagement with unplanned breakthroughs.
Not only does eHTA help decide what is worth pursuing at the research funding stage, it makes what is pursued more likely to succeed at each step of the process while reducing cost and risk en route. It’s not about picking winners (many early health technologies can and will still fail to achieve their potential impact for all kinds of reasons unrelated to their economic and technological merits); it is about being thoughtful in matching spending to real need so that when there are successes, they are impactful at a societal level.
As we shift our research priorities toward addressing unmet Canadian challenges, analysis like eHTA should inform the investment of at least a portion of our research funding. The existence of a private sector company is not the only signal of demand for innovation — and with a reasonable expectation that technologies will be cost-effective ahead of time and a clear understanding of the risks, research funders allocating resources informed by eHTA can be confident that the private sector will step into the niche at the appropriate time in the development process.
Many thanks to Sasha for sharing his insights.