Artificial intelligence (AI) has the potential to advance health research and care. The field of health services research (HSR), in turn, is positioned to demonstrate how AI can be harnessed and used equitably to assess what works, for whom, in what context, and at what cost within our health system. To do so, however, the field needs a common understanding of the current state of AI use and study in HSR, along with a clear view of the associated challenges and opportunities that can guide its next steps.
In June 2024, AcademyHealth convened experts with diverse vantage points on the HSR ecosystem to explore three domains of interest:
- AI in Health Care focuses on the longstanding presence of AI-driven tools in health care and their wide range of uses in improving health care delivery.
- HSR with AI demonstrates how AI technology can be used to enhance and support the conduct of research.
- HSR on AI highlights how the field of HSR can contribute to generating evidence on the use of AI to improve health outcomes for all.
Recognizing the wealth of work already dedicated to AI use in health care, AcademyHealth’s convening and related activities prioritized the latter two domains. These activities were designed to connect HSR professionals across settings and sectors, to support the sharing of foundational knowledge, and to integrate their perspectives on critical and emerging topics for the field.
Following these activities, AcademyHealth reviewed the ideas and insights participants generated and developed a new resource summarizing themes from their input, including their reflections on key players as well as on current challenges, opportunities, and potential next steps for the field. Their feedback pointed to the following areas as especially salient and high priority.
- Trust and transparency: Among the many challenges associated with trust and transparency, participants raised concerns about knowledge gaps around AI use – including the many ways AI already supports health care and research work. For example, patients may not realize that they regularly interact with AI-enabled scheduling bots when setting up appointments. Factors contributing to these concerns include the lack of transparency around AI tool development and broader public mistrust in science and the health care system. Among possible next steps for addressing mistrust and distrust, participants suggested approaches for easing concerns among patients and providers (e.g., communicating information about the use of AI in plain language rather than technical jargon) as well as among researchers (e.g., establishing an independent, third-party “seal of approval” indicating that an AI tool was equitably and ethically designed).
- AI literacy: Health services researchers at all levels of expertise need sufficient AI literacy to safely and effectively integrate AI into health research and care delivery. Information on the state of AI literacy across the field remains limited, and participants reported little awareness of tools or trainings tailored specifically to HSR professionals’ needs. Potential next steps for bridging literacy gaps include creating safe spaces for experimentation that support workforce training and give researchers opportunities to use and evaluate AI tools in an ethical way. In addition, tools to assess existing literacy across the field can inform the development of resources that meet learning needs and help researchers keep pace with the evolution of AI technology.
- Regulations, guidelines, and governance: Participants raised the need for clear regulations and guidelines in this space, while recognizing that the landscape is complicated and that different groups and agencies have differing scope and authority to set and enforce policies. To improve clarity and cohesion across the field, it will be important to establish methods for evaluating AI tools, create implementation guidance and guardrails for AI use, and develop ethical guidelines to ensure that no harm is done.
This summary also highlights themes from participants’ feedback regarding other important considerations for the field: the potential for exploring new and emerging approaches to evaluating AI (e.g., using non-traditional or Indigenous methods to examine cultural interrelatedness), and the importance of supporting equitable AI use and outcomes by taking intentional steps to address health, digital, and AI literacy.
These activities were a first step toward meeting a need participants articulated for the field: convening diverse partners and collaborators for shared dialogue to help inform future AI-related activities. The field of HSR is primed to play a critical role in building the needed evidence on AI use and its impact, and doing so will require agile approaches to evaluating these technologies and educating people about them. As AcademyHealth continues to explore these themes, we look forward to engaging with partners similarly committed to supporting and preparing the field for its next steps. If you're interested in following along or learning more as our work develops, please connect with us.