Introduction and context: German policy innovation and international expertise
In late 2019, Germany passed the Digital Healthcare Act,
which, among other things, created a regulatory and reimbursement pathway for digital health applications in the German market.
The “Fast-Track” pathway
establishes market access for certain categories of digital health applications (known by their German acronym, DiGA)—namely those that meet the definition of lower risk medical devices and are primarily used by patients rather than physicians. When such products meet prespecified requirements related to safety, functionality, quality, data protection, data security, and interoperability, they are eligible for regulatory review and subsequent entry into a directory of regulated, reimbursable DiGA maintained by the German Federal Agency for Drugs and Medical Devices (BfArM).
Those products that meet the requirements and can demonstrate so-called positive care effects are then reimbursed by all of Germany’s statutory health insurers, which cover over 90% of the population (more than 73 million individuals). Notably, the definition of positive care effects explicitly includes both traditional clinical outcomes (such as improvements in morbidity, mortality, and quality of life) and a group of positive patient-centred outcomes, known as structural and procedural effects, that digital tools might be uniquely well suited to address. Possible categories of structural and procedural effects include access to care, health literacy, adherence, and coordination of care.
The Fast-Track pathway, a combined regulatory and reimbursement process, explicitly provides for flexibility in both how and over what period of time researchers can present evidence for positive care effects. Researchers can use a variety of study designs to demonstrate such effects, but the evidence must broadly constitute some form of quantitative, comparative study showing that the use of a given DiGA (usually in addition to standard care) is better than its absence.
Both the US business environment and the US regulatory environment are hospitable to the use of real-world data (RWD) and, in many contexts, have a longer tradition of using RWD than is the case in Europe. This is due both to the private sector landscape, in which a number of large, data-driven health technology companies have access to large amounts of patient-level data, and to the progressive regulatory environment, with roadmaps for real-world evidence (RWE) dating back to 2018.
Against this backdrop, the Digital Medicine Society (DiMe) and the Health Innovation Hub of the German Federal Ministry of Health (hih) convened a set of roundtable discussions in 2020 and 2021 to bring together experienced international experts in evidence generation for digital medicine products, broadly defined to include tools driven by high-quality hardware and software that support the practice of medicine, including treatment, recovery, disease prevention, and health promotion for individuals and across populations. The expert roundtables included regulators and public servants, practicing physicians, health policy researchers, clinical trialists, digital medicine experts, epidemiologists, health-care economists, decision scientists, industry representatives from companies working on RWE (both for their own products and as technical consultants to such companies), non-profit organisations in the health-care and entrepreneurship sectors, and representatives from both public and private health insurance providers.
Given a growing international research community with experience and expertise in using RWD and novel study methodologies to generate high-quality information, the roundtable’s conveners saw an opportunity to connect the experience of international experts with the innovative policy environment in Germany in order to (1) accelerate and stimulate innovative approaches to digital medical product evaluation, and (2) promote international harmonisation of best evidentiary practices.
This Viewpoint highlights findings from these discussions, which aimed to identify key opportunities to articulate better practices and highlighted methodological challenges and outstanding questions for this emerging field of research.
Advancing these topics and international agreement on definitions and best practices will be vital to the safe, effective, and evidence-based deployment of DiGA in Germany, and can serve as a model for international adoption in two key ways. First, because the Fast-Track pathway explicitly creates opportunities for the generation of RWE, if it proves successful, it might serve as a model for other RWE-driven regulatory programmes in the future, such as the US Food and Drug Administration (FDA)’s Digital Health Software Precertification Program (which is currently only in its pilot stage). Second, as the Fast-Track provides for evidence-based price negotiations after the first year of a product’s marketing, it is expected to help drive the coverage of DiGA towards more value-based reimbursement, an outcome that many countries might be keen to emulate, if successful.
Real-world data and real-world evidence
The FDA defines RWD as “data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources” and defines RWE as “the clinical evidence regarding the usage and potential benefits or risks of a medical product derived from analysis of RWD.”
Both concepts are highly relevant to the new German regulation, which explicitly provides for their use. RWD can be collected through a variety of sources and tools as part of routine care or as digitally enabled add-ons—eg, using digital tools to collect patient-reported outcome measures (PROMs) or patient-reported experience measures (PREMs). Many PROMs use risk-adjusted instruments to turn qualitative symptoms into a numerical score, making them actionable for triage and for orienting patients towards the most appropriate care pathway.
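For illustration only, a minimal sketch of such scoring and triage logic might look like the following (the item names, response scale, and cut-off are hypothetical and do not correspond to any specific validated instrument):

```python
# Hypothetical illustration: item names, the 0-3 response scale, and the cut-off
# are assumptions, not part of any specific validated PROM.
FATIGUE_ITEMS = ["tiredness", "concentration", "sleep_quality"]
TRIAGE_THRESHOLD = 7  # illustrative cut-off for escalation

def prom_score(responses: dict) -> int:
    """Convert item-level Likert responses (0-3) into a single numerical score."""
    return sum(responses[item] for item in FATIGUE_ITEMS)

def triage(responses: dict) -> str:
    """Orient the patient towards a care pathway based on the score."""
    if prom_score(responses) >= TRIAGE_THRESHOLD:
        return "escalate to clinician review"
    return "continue self-management programme"

# Example: triage({"tiredness": 3, "concentration": 2, "sleep_quality": 3})
# returns "escalate to clinician review" (score 8 >= threshold 7)
```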
More generally, digital tools lend themselves uniquely well to the collection of such RWD, as many aspects of a product’s meta-data (eg, frequency and duration of use) can be collected without additional patient or provider effort. Such tools have many use cases, from replacing traditional instruments (eg, digitising pen-and-paper PROMs) to more advanced uses, such as triaging patients towards the most appropriate care pathway.
In the USA, the FDA has provided myriad examples of how RWE can be used for regulatory submissions
and has collaborated with academic researchers in its use,
and public–private partnerships have laid out a roadmap for developing study endpoints in real-world settings.
Other organisations have specifically developed patient-facing resources on the subject.
However, the use of RWE in Europe has been limited to a handful of promising, but still largely exploratory, initiatives.
Digital health applications in Germany and beyond
In Germany, DiGA are defined as lower risk medical devices (class I or IIa according to Europe’s Medical Devices Regulation,
which fully came into force in May, 2021) that have a primarily digital mechanism of action, do more than just collect data, are used primarily by the patient, and support “the recognition, monitoring, treatment or alleviation of diseases or the recognition, treatment or alleviation or compensation of injuries or disabilities.”
Importantly, the DiGA Fast-Track makes clear that the choice of a comparison group for any DiGA study should be based on “the reality of healthcare” and the regulation provides for explicit use of “retrospective data sources such as billing data of a health insurance fund” and the use of historical controls, approaches that lend themselves directly to the use of RWD for RWE generation.
Furthermore, the Fast-Track allows for studies that are “clinical or epidemiological studies” as well as those “using methods from other scientific fields such as healthcare research, social research or behavioural research”,
laying a clear path for the presentation of evidence collected outside of traditional randomised controlled trials (RCTs).
Notably, the Fast-Track creates two opportunities for applications to be included in the BfArM directory: products can either be permanently or provisionally listed. The latter category represents a preliminary approval decision based on early evidence and the submission of a so-called evaluation concept (ie, an evidence-generation plan) for the first year on the market, which must be prepared by a manufacturer-independent (third-party) scientific institution (panel 1).
Panel 1
Examples of regulated digital health applications
An example of an early digital health application that completed the Fast-Track process is the digital therapeutic Elevida (GAIA; Hamburg, Germany), a digital health application for individuals with multiple sclerosis who also experience fatigue. For this product, evidence of positive care effects was generated from a randomised controlled trial of 275 patients with multiple sclerosis and fatigue. The trial compared the use of the Elevida application plus standard multiple sclerosis care (the intervention group) with standard multiple sclerosis care alone (the control group). A significantly lower Chalder Fatigue Scale score was found in the intervention group compared with the control group after 12 weeks (the primary assessment timepoint), and differences were also detectable at 24 weeks.
Many analogous products have gone through other regulatory approval processes internationally. For example, reSET (Pear Therapeutics; Boston, MA, USA) had the first de-novo approval of a digital therapeutic by the US Food and Drug Administration.
In the case of reSET, observational real-world evidence studies have been used to examine efficacy and product usage. Other examples include the use of BlueStar (Welldoc; Columbia, MD, USA) for people with diabetes and EaseVRx (AppliedVR; Los Angeles, CA, USA) for treating pain.
The opportunity: novel approaches for evidence generation to support broad acceptance of digital health applications
Of the 19 DiGA that were approved as of Aug 1, 2021, only five had submitted a final and complete evidence packet and were therefore eligible for permanent listing in the DiGA directory. All five of these presented evidence from traditional RCTs as part of their formal approval process. Yet RWE and evidence generation outside of the context of traditional RCTs represent a tremendous opportunity for efficient, agile, patient-oriented learning about DiGA. In the case of Germany’s Fast-Track pathway, non-RCT approaches to evidence generation are unequivocally sanctioned by the regulator,
yet have rarely been used in practice. This might, in part, be due to the real or perceived risks of regulatory uncertainty in pursuing such approaches. If RWE is facilitated by regulatory policy but there is little or no track record of success, a manufacturer risks making an unsuccessful RWE attempt and then having to invest in a more traditional study design as well, which might be riskier or costlier overall. Indeed, regulatory uncertainty has been discussed as a disadvantage for first-in-class products in traditional medical device markets in the past.
Encouragingly, regulatory positions on both sides of the Atlantic are evolving to support high-quality RWE approaches. For example, the use of RWE in traditional device development has gained good traction,
and the FDA’s recent Data Modernization Action Plan
focuses on creating the infrastructure necessary within the FDA to embrace new approaches to science-based regulation of evolving technologies by interacting with data in new ways. In addition, the FDA’s Digital Health Center of Excellence is working to “strategically advance science and evidence for digital health technologies that meets the needs of stakeholders.”
Furthermore, methods and approaches in Europe are advancing in a number of diverse health-care contexts. For example, the Innovative Medicines Initiative’s GetReal Institute
is focused on “facilitating the adoption and implementation of RWE in health care decision-making in Europe”, while specific initiatives such as Mobilise-D
are focusing on best practices for using RWE to generate digital mobility endpoints in chronic obstructive pulmonary disease, Parkinson’s disease, multiple sclerosis, hip fracture recovery, and heart failure. Germany, in particular, has also begun to collect RWD for certain pharmaceutical products (such as gene therapies) through newly established registries, indicating an ongoing interest in such initiatives.
What is needed? Articulating best practices and methodological challenges
In several areas, international agreement on best practices for the execution of clinical studies will be vital to ensure that medical product developers can benefit from internationally accepted standards after investing in the development of new bodies of evidence for digital health products. As is the case for other medical products such as drugs and devices, local factors must be considered, but overarching approaches should transfer.
Ongoing discussion of such best practices for evidence generation represents a meaningful opportunity for dialogue in the short term and can lead to the articulation of guidance or standards from the evidence generation community in the longer term. Core areas for agreement as well as priorities for future research span multiple dimensions (panel 2).
Panel 2
Topic areas where precompetitive collaboration, research, and the development of best practices will speed broad acceptance of high-quality evidence to support digital health applications
Missing data
Handling and understanding the implications of missing data during study design and evaluation
Study endpoints
Selecting, defining, validating, and establishing both clinical and non-clinical endpoints
Comparator group
Determining whether a comparison of application plus standard of care versus standard of care alone is sufficient, and whether washout periods are indicated
Multimodal interventions
Testing individual modules or components of digital health applications alone—when, why, and how?
Study question
Understanding and standardising hypothesis testing around whether digital health products are complements or substitutes to existing standards of care
Equity
Disambiguating digital application use from phone ownership in the evaluation of safety and effectiveness
Generalisability
Characterising the generalisability and transportability of findings to broad populations
Confounders
Controlling for clinical professionals who play a critical role in deploying digital tools—especially in the context of research studies—and might be differentially supportive of the product in the clinical study context compared with the real world
Fit for purpose
Generating a clear, broadly accepted conceptual framework for when certain approaches are acceptable with respect to data, study design, analytical methods, etc
Missing data
A key area is the establishment and broad acceptance of best practice for both handling and understanding the implications of missing data. These best practices should be clear at both the study design and evaluation stage of evidence generation.
Researchers leading the UK Biobank study have begun to probe the question of missing data, specifically as it relates to the definition of missing accelerometry data and minimum acceptable wear time of digital sensor products in real-world studies.
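As a minimal sketch of how such a prespecified rule might be operationalised (the 72 h threshold, 5 min epochs, and column names below are illustrative assumptions rather than UK Biobank’s actual criteria), participants whose device wear falls below a minimum could be flagged before analysis:

```python
# Minimal sketch, assuming epoch-level accelerometer data with a boolean 'worn'
# flag per participant; the threshold and epoch length are illustrative choices.
import pandas as pd

MIN_WEAR_HOURS = 72   # illustrative minimum wear time; prespecify per study protocol
EPOCH_MINUTES = 5     # length of each accelerometer epoch

def valid_wear_hours(epochs: pd.DataFrame) -> float:
    """Total hours of valid device wear for one participant."""
    return epochs["worn"].sum() * EPOCH_MINUTES / 60.0

def wear_time_report(epochs_by_participant: dict) -> pd.DataFrame:
    """One row per participant: observed wear hours and a missing-data exclusion flag."""
    rows = []
    for pid, epochs in epochs_by_participant.items():
        hours = valid_wear_hours(epochs)
        rows.append({"participant": pid,
                     "wear_hours": hours,
                     "exclude_for_insufficient_wear": hours < MIN_WEAR_HOURS})
    return pd.DataFrame(rows)
```

The appropriate threshold is a study-design decision and should be justified, for example in a DiGA’s evaluation concept, rather than chosen post hoc.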
Study endpoints
A number of other study design considerations will also be crucial. These include selection, definition, and establishment of both clinical and non-clinical endpoints that can be used to support positive structural and procedural effects, such as those included in German regulations. It will also be vital to adapt PROMs and PREMs for digital health contexts and create guidelines for the migration of PROMs and PREMs to digital platforms.
For the collection of PROM and PREM data (as well as more broadly in the selection, definition, and establishment of endpoints), it is and will remain vital to consider the patient’s perspective.
Indeed, the International Consortium for Health Outcomes Measurement defines relevant outcomes as “the results of treatment that patients care about most…They’re real-world results, like physical functioning or level of pain”,
a reminder that focusing on the things that matter most to patients is a core goal of providing high-value health care.
The establishment of new, digital study endpoints is an area where much progress has been made recently and where we expect standard practices to be increasingly taken up by the research community. For example, DiMe’s crowdsourced, publicly available Library of Digital Endpoints represents growth in these new measures, with 225 unique digital endpoints in use in medical product development at the time of writing.
The expansion of open access tools like this will create important public goods for RWE researchers in the context of app evaluation and beyond.
Comparator group
Other priorities will include establishing best practices and fit-for-purpose methods for defining a comparator group for a digital intervention. Example questions include: would a trial consider a treatment group consisting of an application plus standard of care versus a control group of standard of care only—or should these groups be structured in other ways? Should investigators plan for a clinical trial with or without washout periods during which patients do not receive any therapy before beginning an intervention? How can innovations in data science and biostatistics, such as the use of synthetic control arms and the resulting externally controlled trials, support clinical counterfactuals in the digital medicine setting?
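As one hedged illustration of how an external control arm might be constructed (the column names, covariate set, and choice of inverse probability of treatment weighting are assumptions for this sketch, not a prescribed method), an externally controlled comparison could weight an external cohort to resemble DiGA users on measured covariates:

```python
# Hedged sketch of an externally controlled comparison using inverse probability of
# treatment weighting (IPTW); 'treated' (1 = DiGA users, 0 = external controls),
# 'outcome', and the covariate list are assumed column names for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_mean_difference(df: pd.DataFrame, covariates: list) -> float:
    """Weighted mean outcome difference between DiGA users and an external control
    cohort, balancing measured covariates via stabilised IPTW."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    e = model.predict_proba(df[covariates])[:, 1]    # estimated propensity scores

    p = df["treated"].mean()                          # marginal probability of treatment
    weights = df["treated"] * p / e + (1 - df["treated"]) * (1 - p) / (1 - e)

    t = df["treated"] == 1
    mean_treated = (df.loc[t, "outcome"] * weights[t]).sum() / weights[t].sum()
    mean_control = (df.loc[~t, "outcome"] * weights[~t]).sum() / weights[~t].sum()
    return mean_treated - mean_control
```

In practice, any such analysis would be prespecified and accompanied by diagnostics such as covariate balance checks and sensitivity analyses for unmeasured confounding.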
Multimodal interventions
Relatedly, the research community will need a better understanding of when and how to test individual modules or components of digital health applications alone upon rollout, and when and how to test applications over various treatment durations.
Study question
For payers and providers, understanding and standardising hypothesis testing around whether digital health products are complements or substitutes to existing standards of care will be vital to both coverage and treatment decisions. The nature of these products should therefore be considered during the study design process, as all stakeholders will benefit from the establishment of high standards in the research process.
Equity
Equity considerations must also be addressed. Rather than simply assessing the average impact across the entire population, evaluation should focus on the variability of impact across different populations, ensuring that health disparities are not reinforced or newly introduced.
Generalisability
Similarly, investigators should take steps to characterise the generalisability and transportability of findings with respect to the characteristics of the target population, the clinical teams interacting with the tools in question, and the location and context of the study.
Initiatives like the US National Institutes of Health’s
All of Us Research Program,
which is building a diverse database to inform thousands of studies on a multitude of health conditions, represent important steps towards building real-world datasets that include otherwise under-represented groups and account for heterogeneity in individuals and their environments. Such projects will support both equity and generalisability in the application of RWD to both digital health applications and medical innovation more broadly.
Confounders
An ongoing challenge will be separately evaluating a digital application from the clinical support system underlying its intended uses—in particular (but not exclusively) for those products that include support by health-care professionals, such as health coaches or nurses, who might be differentially supportive of products in the clinical study context compared with the real world.
The Implementation of a Randomized Controlled Trial to Improve Treatment with Oral Anticoagulants in Patients with Atrial Fibrillation (IMPACT-AFib) study is currently tackling this challenge. It is a multiarmed study examining provider-only, patient-only, and combined educational interventions, seeking to address the underuse of oral anticoagulants in patients with atrial fibrillation using the FDA’s Sentinel database.
Helpfully, the RWD that are expected to be used in RWE studies will also be largely available in the clinical practice setting beyond initial product evaluation. Researchers will be able to study confounding factors more easily in data-rich clinical practice settings relative to other research settings, where such data might not be digitised or captured at all.
Fit for purpose
Finally, for many digital applications, the burden of explaining why a high-quality RCT is not feasible or practical is likely to fall on manufacturers. RWE is usually easier and less costly to generate, but there should be a clear conceptual framework for when certain approaches are acceptable. This can be shaped by product risk or centred on the nature of the scientific question that a study intends to answer. Feasibility might also be shaped by the adoption and use of digital tools themselves—which will determine the volume and representativeness of available data—as well as practical considerations, such as lags in the availability of claims data to investigators.
Future outlook
Even with consensus on best practices, there remains a need to clarify differences between evidence standards needed for regulatory approval versus payer coverage—especially outside of the German setting (where regulatory and reimbursement approval are both part of the Fast-Track process for market access). Furthermore, the requirements and contours of health technology assessments (HTAs) are expected to be more dynamic
in nature, requiring new approaches to HTAs for digital products. The concept of dynamic HTAs also includes the possibilities of flexible reimbursement based on ongoing assessment of a technology’s performance.
Further challenges include data governance for health data and how data can be used and reused (especially in the European regulatory context), as well as operational challenges such as establishing digital formularies for applications (although early examples have emerged in the USA).
In addition, there must be a cultural shift in academic and industry research, as well as within both the payer and provider communities, to embrace the rigorous use of RWE and its ability to generate high-quality evidence in certain contexts. Myriad examples of how RWE has been used for evidence generation in medical device validation and indication expansion illustrate how transparency and methodological rigour can accompany RWE in practice.
Encouragingly, new methods and techniques in data science for causal inference and biostatistics have emerged in recent years, key innovations that have the potential to add rigour to the use of RWD for the generation of RWE.
In the case of externally controlled trials, such methods have already been shown to reduce inflated false positive error rates of standard single-arm trials in other settings, such as cancer research.
Public agencies have also begun to issue guidance on technical issues related to causal inference; for example, the National Institute for Health and Care Excellence in the UK has issued guidance on estimating treatment effects, with a particular focus on mitigating selection bias at the design stage of evidence generation.
Beyond the data science and regulatory communities, innovation in methodological approaches to improve the speed and rigour of the development of the next generation of digital health products requires action from all stakeholders.
Industry thought leaders must approach evidence generation with rigour and transparency, holding themselves to the highest possible standards. In many cases, there will be opportunities to follow the lead of regulators in areas where clear guidance and policy are already established. At the same time, regulators and other public bodies must continue to show leadership and communicate more clearly with product companies and investigators.
In thinking about moving products from research to patients at scale, payers will need to be open to considering new value propositions with appropriate evidence as well as new, high-quality methods of evidence generation. To facilitate these new evaluation methods using RWD, payers also need to make their RWD accessible to researchers and to encourage approaches such as “coverage with evidence development”
and managed access models, whereby patients can access therapeutics earlier while studies are still ongoing. Vitally, the patient’s perspective must be considered as new forms of data are collected and established for research purposes. Those collecting RWD must prioritise patient preferences regarding the information captured and considered during decision making. They must also ensure that data capture poses the least possible burden to patients and that patient data and information are appropriately protected.
Finally, health-care providers will increasingly encounter publications and information based on RWE. Learning to incorporate such evidence into clinical practice and appreciating how to assess such studies will be a non-trivial task for practicing medical professionals moving forward. Here, medical journals and professional societies can take a leading role in educating health-care providers about high-quality evidence generation as well as the specific considerations that are appropriate for understanding how RWD and RWE are used broadly and in the evaluation of digital tools in particular. Professional societies can also show leadership by incorporating high-quality RWE into practice guidelines and educating their members on appropriate use within their respective medical specialties.
Where gaps remain, multistakeholder, precompetitive collaboratives like the DiMe–hih partnership will be key to convening experts, addressing evidence and trust gaps, and subsequently driving the dissemination of best practices and examples. All of these efforts will be bolstered by new, RWE-based public funding models for clinical studies that engage broad-ranging stakeholder groups.