Onisha Etkins and Karen Goldstein co-authored this article.

Artificial Intelligence has emerged as a pivotal tool in public health, helping with disease forecasting and processing vast amounts of health data. But it also introduces the risk of worsening existing health inequities or creating new ones, especially for systemically marginalized groups. This concern is prompting calls for action, from policymakers pushing for fair AI regulation to companies like Dove speaking out against AI’s demonstrated bias. While progress is underway in public health communications, where health equity and inclusiveness are guiding tenets for building trust, there remains a need for more guidance on using AI responsibly.

Public health communicators—especially those in nonprofits or resource-strapped settings—are understandably eager to use AI to boost their effectiveness. The technology offers opportunities like tailoring messages to specific audiences, identifying influential media and tracking campaign success. However, it’s crucial to use AI carefully, both to avoid bias in its results and to address any bias in its users. By using established guidelines, such as the CDC’s Inclusive Communication Principles, as a framework, we can ensure that AI enhances health communications while staying true to the goal of health equity.

Based on our experience helping clients navigate this uncharted territory, we have identified four key principles to support health communicators in their efforts to mitigate bias when using AI.

Address bias in AI tool development

Reducing the risk of biased outcomes begins with ensuring bias is identified and minimized at every stage of AI development. Large language models can be thoughtfully customized to counteract known underlying biases: using advanced prompting and custom data drawn from diverse sources to ensure a broad understanding of health topics, regularly evaluating algorithms for fairness and accuracy through checks-and-balances systems, and actively involving diverse perspectives in the development process. For example, to reach Black women for a campaign addressing cancer disparities, we would want to incorporate data specific to this group from a variety of sources, including published studies and theoretical frameworks, surveys that oversample underrepresented groups, qualitative research with the target audiences and social media data from multiple platforms. Promoting a culture of transparency and accountability within teams, so that biases are openly addressed and mitigated, is also critical.
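To make the prompting-and-custom-data idea concrete, here is a minimal, hypothetical sketch in Python. The system prompt wording, the `build_messages` helper and the `call_model` placeholder are all illustrative assumptions, not a specific tool or the workflow described above; the point is simply that a general-purpose model can be steered with explicit instructions and community-specific research rather than relied on as-is.

```python
# Hypothetical sketch: steering a general-purpose LLM toward a specific
# audience by combining a bias-aware system prompt with supplementary
# context drawn from community-specific research. `call_model` is a
# placeholder for whichever LLM client a team actually uses.

SYSTEM_PROMPT = (
    "You are drafting public health messages about cancer screening. "
    "Ground every claim in the supplied research excerpts, avoid "
    "generalizations that treat Black women as a monolithic group, and "
    "flag any statement you cannot support with the provided sources."
)

def build_messages(task: str, research_excerpts: list[str]) -> list[dict]:
    """Attach community-specific research so the model is not relying solely
    on whatever its general training data says about this audience."""
    context = "\n".join(f"- {excerpt}" for excerpt in research_excerpts)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{task}\n\nResearch excerpts:\n{context}"},
    ]

# Example usage (call_model stands in for the team's LLM API of choice):
# draft = call_model(build_messages(
#     "Draft a 280-character message encouraging mammogram scheduling.",
#     ["Survey finding: cost and clinic hours were the top cited barriers.",
#      "Focus group theme: messages from trusted community clinics resonated."],
# ))
```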

Assess data gaps with traditional market research (where possible)

Traditional market research methodologies—like surveys, focus groups and interviews—provide tailored insights into attitudes, motivations and behaviors often missed by digital data sources. Particularly for underrepresented populations, this type of research deepens understanding of cultural nuances and social contexts, helping to fill gaps or correct biases in the data that AI systems draw on. For example, a recent study showed that AI language models underperformed in identifying depression among Black individuals because they missed the nuance of how depression is expressed on social media within this group. In that case, supplementing the social media analysis with qualitative research across Black communities (e.g., African American, Black Caribbean) would have surfaced specific cultural nuances that could have been used to train and improve the models’ language interpretation. Integrating diverse datasets from qualitative and quantitative approaches enriches AI algorithms, enhancing the fairness of AI-driven decisions and supporting more accurate outcomes.
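One way a team might spot the kind of gap described above is to break a model's performance out by subgroup before deciding where additional qualitative research is needed. The sketch below is purely illustrative: the field names and sample records are assumptions, and in practice the records would come from a labeled evaluation set compared against human review.

```python
# Hypothetical sketch: disaggregating model accuracy by subgroup so that
# underperformance for one community is visible rather than averaged away
# in a single overall metric.

from collections import defaultdict

def accuracy_by_group(records: list[dict]) -> dict[str, float]:
    """Compare model predictions against human labels within each subgroup."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["predicted_label"] == r["true_label"]:
            correct[r["group"]] += 1
    return {group: correct[group] / total[group] for group in total}

# Example usage with made-up evaluation records:
sample = [
    {"group": "African American", "predicted_label": 0, "true_label": 1},
    {"group": "African American", "predicted_label": 1, "true_label": 1},
    {"group": "Black Caribbean", "predicted_label": 1, "true_label": 1},
]
print(accuracy_by_group(sample))  # low-scoring groups flag where to add research
```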

Enhance AI to acknowledge its biases, communicate them to users and consider them when producing results

When developers implement algorithms that can identify and acknowledge biases, AI systems can communicate this information—including data sources and methods of analysis—transparently to users. Furthermore, AI can be programmed to take these biases into account when generating outputs, helping ensure that decisions and recommendations are fair and accurate. Paired with human oversight, this approach fosters trust and accountability in AI systems, promoting ethical and responsible usage across applications and industries.
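A simple way to picture this is to make the disclosure travel with the output. The sketch below is a hypothetical data structure, not a standard API or any particular vendor's feature: every generated recommendation is returned together with its data sources and the known limitations surfaced during evaluation.

```python
# Hypothetical sketch: pairing every AI-generated recommendation with a
# plain-language disclosure of its data sources and known limitations.
# Field names and example values are illustrative only.

from dataclasses import dataclass

@dataclass
class DisclosedOutput:
    content: str                   # the generated recommendation itself
    data_sources: list[str]        # where the underlying data came from
    known_limitations: list[str]   # biases or gaps surfaced during evaluation

    def render(self) -> str:
        """Format the output so the caveats are shown alongside the content."""
        notes = "\n".join(f"  - {n}" for n in self.known_limitations)
        return (f"{self.content}\n\nSources: {', '.join(self.data_sources)}\n"
                f"Known limitations:\n{notes}")

print(DisclosedOutput(
    content="Recommended channel mix: community radio and Instagram.",
    data_sources=["2023 audience survey", "social listening across 3 platforms"],
    known_limitations=["Survey under-represents adults without internet access"],
).render())
```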

Educate AI users on crafting queries to minimize biased responses

By understanding how to frame questions effectively, users can guide AI systems, minimize the potential for biased outputs and ensure that AI generates relevant and objective information. This may involve suggesting follow-up questions users can draw from based on their input, or offering demonstrations of how to frame prompts for specific use cases. Empowering users with the skills to navigate AI systems promotes critical thinking and enhances the overall quality of interactions with AI technology.
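The "follow-up questions" idea could be as lightweight as the hypothetical helper below. The heuristic, question list and example prompt are all assumptions made for illustration; the point is to nudge users toward supplying the audience, sources and framing constraints an AI system needs to avoid over-generalized output.

```python
# Hypothetical sketch: prompting users to add context before submitting an
# underspecified query, so the AI has less room to fall back on biased defaults.

FOLLOW_UPS = [
    "Which specific audience (age, region, community) is this message for?",
    "What sources should the answer be grounded in?",
    "Are there stigmatizing terms or framings to avoid?",
]

def frame_query(raw_query: str) -> str:
    """Return the user's query plus suggested follow-up questions when the
    request looks too short to carry the necessary context."""
    if len(raw_query.split()) >= 12:   # crude, illustrative threshold
        return raw_query
    hints = "\n".join(f"- {q}" for q in FOLLOW_UPS)
    return f"{raw_query}\n\nBefore running this, consider adding:\n{hints}"

print(frame_query("Write a post about vaccines."))
```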

As industry leaders can attest, public health organizations—including nonprofits, foundations and federal agencies—are working diligently to reduce bias in AI while leveraging its powerful capabilities for health communication programs. One recent example of how we’ve harnessed machine learning’s efficiency while prioritizing inclusivity and relevance is our work on a national substance use recovery campaign aimed at young adults in systemically marginalized communities. Using a customized large language model, we leveraged social media data, community-based research and socio-historical context to understand audiences, tailor strategies and messages, and ensure a health equity-based approach.

AI holds immense potential in public health communications, and confronting inherent biases remains essential to ensuring inclusivity and health equity. While integrating these principles is an ongoing process that will evolve alongside AI use, the public health community can work together to responsibly harness its potential in health communications every step of the way.

***

Onisha Etkins, MS, PhD, is Research Director at JPA Health. Karen Goldstein, MPH, is Senior Vice President at JPA Health.