As health care costs continue their upward climb in the face of persistent quality gaps and pervasive disparities, the need for effective interventions remains paramount. However, identifying which interventions are most effective – and for whom, in which context – can itself be an expensive and time-consuming endeavor. The debate about which evaluation methods to use, and when, ranges from the merits of randomized controlled trials (RCTs) to the potential of quasi-experimental designs to the wealth of opportunities presented by data emerging from a vast number of new sources, including electronic clinical data.

For too long, the conversation in our field has been framed too simplistically: either only RCTs can tell us which interventions work, or the delivery of public health and health care services is too complicated for anything but observational evaluation designs. We reject both of these statements!

We in health services research (HSR) need to keep innovating in our methods and designs to keep up with the innovation around us in health care and public health financing, payment, and delivery. Addressing our complex world requires a complex set of interventions, which in turn require more sophisticated – and yes, complex – evaluations to understand not just whether an intervention worked, but how, for whom, and why. Indeed, answering these questions will allow us to better understand an intervention's impact and how it might be applied in other settings.

As such, there is a host of effective study designs and methods to consider when evaluating complex health interventions, including cluster-randomized stepped-wedge trials, controlled before-and-after studies, and natural experiments. The choice of design depends very much on the research question and study context; each design carries its own merits, limitations, and trade-offs. Other research methods, such as qualitative methods, realist evaluation, and observational studies, also make an important contribution to the evaluation of complex interventions by getting at why an intervention, or its components, may or may not have worked. Regardless of what design, methods, or mix of methods is used, it is vitally important to be clear on who the audience for the evaluation results is and how those results will be used.

This brings us to real-world evidence and the potential to harness what is happening in health and health care through advances in information technology and analytics, from electronic health records to artificial intelligence. The availability of vast amounts of data, along with new tools to process them, provides an unprecedented opportunity. We can learn more, know more, and do more – and by learning as we go, we can adapt and truly improve our efforts.

The bottom line? There is much to be learned from these varied data sources, and evaluations of complex interventions are likely to benefit from a combination of approaches.

Committed to supporting the use of sound evidence to inform policy and practice, AcademyHealth has been engaged in a number of efforts to advance the evaluation of complex health interventions. On a global level, we have participated in collaborative activities with partners, including a symposium on evaluating system innovations in health care and public health in London in 2015 and a global seminar on learning from improvement in Salzburg in 2016. Here at home, we have focused on strengthening the rigor and relevance of evaluations through smaller convenings, conference sessions at our Annual Research Meeting, and commissioned publications, including a meeting report, Evaluating Complex Health Services Interventions: Challenges, Goals, and Proposals for Progress, and an evaluation guide, Evaluating Complex Health Interventions: A Guide to Rigorous Research Designs. AcademyHealth also launched a collaborative project with the National Pharmaceutical Council (NPC) focused on methods transparency for real-world evidence, including observational studies and patient registries.

These research and evaluation issues are central for many working in health and health care. In an August 2017 New England Journal of Medicine review article, Evidence for Health Decision Making — Beyond Randomized, Controlled Trials, Thomas Frieden, former Director of the Centers for Disease Control and Prevention, argued for an expanded evidence base for policymaking. A number of organizations are developing guidance aimed at strengthening evaluation efforts. For example, the Patient-Centered Outcomes Research Institute (PCORI) is developing a set of methodology standards for evaluating complex interventions, and the Office of the Assistant Secretary for Planning and Evaluation is developing a primer that will provide guidance for evaluators, program administrators, and policy decision makers on addressing evaluation challenges.

Given the breadth of work on this issue and new opportunities for leveraging real-world evidence, we look forward to hearing from leading experts Anne Beal, M.D., M.P.H., Global Head of Patient Solutions at Sanofi, and Bernard Hamelin, M.D., Global Head of Medical Evidence Generation at Sanofi. In next week's blog, Dr. Beal and Dr. Hamelin will address the following question: How do we extract learning from our fast-moving, complicated world while remaining confident that our evaluation findings are robust and can support future decisions? After all, people's lives and health depend on us.

Staff

Marya Khan, M.P.H.

Senior Manager - AcademyHealth

