Artificial Intelligence (AI) has taken the world by storm, disrupting many aspects of our everyday lives, including health services research. Its applications continue to multiply as AI tools become more deeply integrated into everyday work. As early adopters, RAND and AcademyHealth wanted to share our experience and lessons learned from using AI tools and technologies to support and streamline meeting discussions.
As part of the Agency for Healthcare Research and Quality (AHRQ) EvidenceNOW Managing Urinary Incontinence (MUI) initiative, five grantees are testing novel ways to implement and disseminate evidence-based urinary incontinence (UI) interventions for women in primary care practices. RAND and AcademyHealth are contracted to support and evaluate grantees’ efforts, including providing technical assistance, developing and sharing resources, conducting summary analyses, and assessing the effectiveness of the interventions to inform future practice and policy recommendations.
At the 2024 annual meeting of grantees, our goal was to facilitate dynamic learning and engagement across the grantees using ChatGPT and other AI-enabled tools. The RAND team facilitated a session on UI intervention sustainability, where breakout groups engaged with sustainability topics by reviewing evaluation findings (e.g., quotes from qualitative interviews), comparing challenges, facilitators, and strategies across their projects, and identifying recommendations for each other’s projects and future initiatives. Instead of capturing and sharing insights from breakouts in the traditional way, our process included the following steps.
- First, we used Otter.AI to transcribe each of the breakout discussions in real time. After the sessions ended, we exported the raw Otter.AI transcripts and sent them all to the RAND team.
- Once we compiled all the transcripts, a team member analyzed them using an internal version of ChatGPT. Our team developed a Python script to facilitate secure interactions with the large language model (LLM) and to run multiple prompts against more than one transcript file at a time (see the illustrative sketch after this list).
- In this instance, we asked ChatGPT to run the following prompts/queries for each of the breakout group transcripts:
| Prompt_ID | Prompt |
| --- | --- |
| summary | Generate a 3-5 bullet point summary of this transcript. For each bullet point, produce only one sentence. |
| haiku | Generate a haiku poem that summarizes this transcript. |
| haiku_three | Generate three different haiku poems that summarize the main point of the transcript. |
- We then populated PowerPoint slides with the AI-generated outputs and used these slides for the large-group discussion. During the discussion, the breakout group facilitator asked participants to reflect on the GPT-generated summary and haiku using the following questions:
- For those who were in this breakout group, how well do these bullets summarize what you discussed? What might you add? What might you elaborate on? Is there anything missing?
- For those who weren’t in this breakout group, which of these points resonate most for your project?
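To give a sense of what this batching workflow can look like, here is a minimal sketch of a script that runs every prompt from the table above against every transcript file in a folder. It is an illustration under stated assumptions, not the script our team used: the `openai` Python client, the model name, the folder name, and the function names are placeholders, and an institution-hosted deployment would use its own secure client configuration.

```python
# Illustrative sketch only; adapt the client setup to your institution's secure LLM deployment.
from pathlib import Path

from openai import OpenAI  # assumes the `openai` Python package

# Prompts keyed by the Prompt_ID values from the table above.
PROMPTS = {
    "summary": (
        "Generate a 3-5 bullet point summary of this transcript. "
        "For each bullet point, produce only one sentence."
    ),
    "haiku": "Generate a haiku poem that summarizes this transcript.",
    "haiku_three": (
        "Generate three different haiku poems that summarize "
        "the main point of the transcript."
    ),
}


def run_prompts(transcript_dir: str, model: str = "gpt-4o") -> dict:
    """Run every prompt against every transcript file and collect the outputs."""
    client = OpenAI()  # hypothetical setup; point this at your internal endpoint
    results = {}
    for path in sorted(Path(transcript_dir).glob("*.txt")):
        transcript = path.read_text(encoding="utf-8")
        for prompt_id, prompt in PROMPTS.items():
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": prompt},
                    {"role": "user", "content": transcript},
                ],
            )
            results[(path.name, prompt_id)] = response.choices[0].message.content
    return results


if __name__ == "__main__":
    # Print each output labeled by transcript file and prompt, ready to paste into slides.
    for (filename, prompt_id), output in run_prompts("breakout_transcripts").items():
        print(f"--- {filename} | {prompt_id} ---\n{output}\n")
```

Keeping the prompts in a simple dictionary like this makes it easy to add, remove, or reword prompts between sessions without touching the rest of the workflow.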
Benefits/Challenges of Using AI for Engagement
Overall, using ChatGPT saved time during report-backs (by having it summarize key takeaways from the transcripts) and generated focused discussion topics for participants.
During the report-back, we noticed that attendees were intrigued and excited about this new approach and were quick to point out any inconsistencies in the AI-generated summaries. When the summaries were vague, the facilitator asked participants to elaborate on what they had discussed, which refined the summaries and clarified the key takeaways.
Using AI during this meeting also allowed more time for participant reflection. Traditionally, participants would need to summarize their breakout discussions themselves; with AI-generated summaries, we could facilitate discussions more efficiently because participants immediately had material to react to. Additionally, the ChatGPT-generated haikus provided a creative and amusing way to encapsulate the breakout discussions.
Lessons Learned and Tips for Those Interested in Using AI for Meetings:
- Our team had the benefit of using an internal version of ChatGPT that does not release input data into the broader web environment. Before inputting project data, verify that you are working within your institution's and your funder's policies on data privacy, data security, and acceptable AI use.
- This process may not run smoothly on the first try: practice with a few trial runs to make sure all the technology works. We tested the transcription and Python workflow before the actual meeting day, and we recommend a “dry run” of the full process so that everything runs without a hitch during the meeting!
- After generating ChatGPT output, ensure the facilitators are familiar with it before presenting it to the larger group. We had limited time between sessions, so facilitators had only a few moments to read the AI-generated text. When we do this again, we will ensure facilitators have sufficient time to digest the output, which should allow for richer reflection and group discussion.
- Get creative and find ways to make the ChatGPT output memorable! We asked GPT to create haikus as an element of fun, but other options could include summaries in the form of song lyrics or dad jokes.
- Finally, don’t be afraid to take the plunge! If all else fails and the AI outputs are completely wrong, that’s okay: imperfect output can still spark productive discussion, because it is often easier to pinpoint why something is wrong and correct it than to start from scratch. We used the output as a facilitation tool, asking participants to verify and elaborate on it. We encourage facilitators and researchers not to take AI-generated content at face value and to always check outputs for vagueness, inconsistencies, and bias.