Article

Input Needed On Using AI To Create Lay Summaries Of Trial Results

By CISCRP Staff | Feb 19, 2025

Lay summaries (LS) of clinical trial results play a critical role in improving transparency and accessibility in medical research by transforming complex scientific information into understandable content for the public and patients. Under the European Clinical Trial Regulation (EU CTR) 536/2014, clinical trial sponsors must publicly provide an LS within one year of a trial’s completion (within six months for pediatric trials). With the EU CTR now in effect, there is increased demand to create LS as efficiently as possible.

AI has emerged as a promising tool to increase the efficiency of LS creation and reduce resource demands, particularly through the use of large language models (LLMs). However, applying AI to LS creation requires careful oversight and thoughtful integration to ensure that patient-facing content remains accurate, unbiased, and culturally sensitive.

Background And Development Of The Considerations Document

The increasing potential of AI to streamline LS development has raised critical questions about its responsible use. To address these needs, a working group has developed a draft document outlining considerations for the use of AI in drafting LS.

The working group comprises experts in clinical trial transparency, medical writing, technology, plain language communication, and patient engagement from more than 15 organizations in the U.S. and EU. Facilitated by the Center for Information and Study on Clinical Research Participation (CISCRP), the group’s goal is to provide actionable recommendations for integrating AI into LS creation processes while maintaining high standards for quality and accountability. The draft was developed through a consultative process and is aligned with recognized industry standards, including the Good Lay Summary Practice guidance. This document is now available for public review with a comment period open through Feb. 18.

Opportunities And Challenges Of Using AI For LS

AI offers a range of opportunities for improving LS development:

  • Efficiency: AI can reduce the time required to draft summaries, allowing organizations to deliver clear, patient-friendly content more quickly.
  • Consistency: Automated processes can help ensure uniformity in tone and structure across summaries.
  • Accessibility: AI models trained on plain language principles can support health literacy.

Despite these benefits, the challenges of using AI in LS cannot be overlooked:

  • Accuracy: AI may misinterpret complex clinical data or introduce factual errors (e.g., AI hallucinations). Proper human oversight is necessary.
  • Bias: AI systems can replicate and amplify biases present in their training data.
  • Transparency: Undisclosed AI involvement in creating LS could erode public trust.
  • Ethics and Compliance: Ensuring compliance with privacy regulations and ethical standards is essential when deploying AI in patient-facing communications.

To address these challenges, the considerations document emphasizes a balanced approach: AI should complement, not replace, human expertise. Every AI-generated LS should undergo thorough review and validation by qualified professionals to ensure accuracy and relevance.

The considerations document outlines six key points to guide responsible AI use in LS development:

  1. Humans Must Be in the Loop: AI should augment, not replace, standard LS processes. Human involvement is crucial in areas such as model development, review and revision, and health literacy refinement. Existing personnel roles remain vital, with AI experts playing a newly integrated role.
  2. Researcher Involvement: Collaborating with researchers ensures accurate interpretation and plain language communication of results. While AI can expedite LS creation, expert oversight mitigates risks of misinformation or misinterpretation.
  3. Data Privacy: AI tools must respect data privacy by using aggregated rather than individual data, adhering to established reporting standards.
  4. Disclosure of AI Use: Transparency about AI involvement is essential to maintain public trust. Clear guidelines outline where, when, and how AI use should be disclosed.
  5. Trust: Ensuring accuracy, consistency, and the absence of bias is critical to maintaining credibility. Misinformation or biased outputs can harm public confidence.
  6. Technology Considerations: AI systems evolve rapidly, necessitating governance through internal standards, testing, monitoring, and continuous improvement to align with ethical and regulatory standards.

Public Comment Period: An Opportunity To Shape Best Practices

The public comment period is a vital step in refining the considerations document. Feedback from clinical research professionals, medical writers, patient advocates, and technology experts is crucial to ensuring that the recommendations are comprehensive, actionable, and applicable across diverse contexts.

Comments are invited from a wide range of stakeholders, including research professionals, patients, and advocacy groups. Reviewers can provide input via an online survey hosted on the CISCRP website. The survey allows for detailed comments on specific sections of the document as well as general feedback. Respondents are also encouraged to identify their stakeholder group(s) to provide valuable context for their input.

How To Participate

The public comment period runs until February 18, 2025. 

  1. Review the Document: The draft document is available on the CISCRP website for download.
  2. Provide Feedback: Use the online survey to submit your comments. The survey includes fields for line-specific feedback, proposed revisions, and general observations.
  3. Share the Opportunity: Spread the word within your networks to ensure broad participation from diverse perspectives.

Following the close of this period, all feedback will be reviewed and adjudicated by the working group. Insights from the comments will inform revisions, and the final document will be publicly released as a resource for the clinical research community.

Looking Forward

AI holds significant promise for advancing the development of lay summaries, but its use must be guided by principles of accuracy, transparency, and ethical responsibility. The considerations document represents a collaborative effort to establish a framework for responsible AI integration in LS processes. Your input is invaluable to ensuring this resource meets the needs of all stakeholders and supports the ultimate goal of improving patient and public understanding of clinical research results.

We encourage you to participate in this public comment period and contribute your expertise to this important initiative. Together, we can ensure that the use of AI in lay summary creation is both innovative and responsible, fostering greater trust and transparency in clinical research.

Note: Generative AI was used to assist with the creation of the first draft of this article.

By: Behtash Bahador, MS, and Kim Edwards, Ph.D., CISCRP

Acknowledgements:

The authors are members of the working group developing the “Considerations for the Use of Artificial Intelligence in the Creation of Lay Summaries of Clinical Trial Results.” This article represents the current thinking of the group and the outcomes of the group’s work. The authors thank and acknowledge the respective contributions of all work group members.

About the Authors:

Behtash Bahador, MS, is the director of health literacy at the nonprofit organization CISCRP and holds a Master of Science in health communication from the Tufts University School of Medicine. Since 2014, he has collaborated with a range of stakeholder groups to establish and implement patient- and community-centric initiatives across the life cycle of clinical research.

Kim Edwards, Ph.D., is the senior director of health communication services at CISCRP, where she oversees the creation of easy-to-understand trial resources for patients, participants, and the public. Kim earned her BS in neuroscience from Trinity College and her Ph.D. in developmental and brain sciences from the University of Massachusetts Boston.