
Integrating Artificial Intelligence into Your Health Practice

Artificial Intelligence

 0:00
Good afternoon, everyone. Welcome to today’s webinar hosted by Patagonia Health. Today’s topic is integrating artificial intelligence into your health practice. If you aren’t familiar with the Zoom webinar platform, take a look at the control panel at the bottom of your screen. Here, you can configure your audio settings and ask questions if you have any.

 

Speaker Introductions

 0:26
As a quick background about our speakers today:

  • Aaron Davis is the Director of the Center for Public Health Initiatives at Wichita State University. He applies business strategy and systems thinking to advance local public health efforts, including EHR implementation and informatics. Aaron supported the creation of public health AI guidance and co-leads AI trainings and a community of practice for Kansas local health departments (LHDs). He was named a de Beaumont Foundation 40 Under 40 leader.

  • Tatyana Lin is the Director of Business Strategy and Innovation at the Kansas Health Institute, where she leads initiatives that strengthen public health through cross-sector partnerships and innovation. She specializes in health in all policies, AI in public health, and health impact assessments. Tatyana co-authored a national AI policy resource and co-leads an AI and public health community of practice for the National Network of Public Health Institutes.

We’re really excited to have them both here today. Thank you so much for joining us. Without further ado, I’ll pass it over to you.

 

Opening Remarks

 1:57
Good morning, everyone. Both Aaron and I are excited to be here with you for the next hour. Thank you for making time in your busy schedules. We hope this information will be helpful to you and your organizations.

As we continue, please feel free to put your comments and questions in the chat. We have a packed agenda, but we’ll do our best to get to your questions. If we don’t, we’ll respond later in writing, in the comments or chat, or provide answers through Patagonia Health after the webinar.

We also want to thank our host, Patagonia Health, for having us here today on this important topic. I’m very excited to be part of this webinar and to talk about advancing AI in healthcare and public health. Aaron and I have spent the last three years working on this together.

 

Kansas Health Institute Background

 2:58
Just a few words of context about the organization I’m with. I’m at the Kansas Health Institute in Topeka, Kansas. If you’re in the Midwest, please stop by and visit our offices.

We’re a statewide organization that also works nationally. Our goal is to improve the health of Kansans and contribute to work across different policy areas. We are nonpartisan—we don’t take positions on policy issues—but we conduct a lot of research and work closely with communities. We’ve been engaged in AI for the past three years.

 

Wichita State Background

 3:37
Hi, I’m Aaron Davis. I see a couple of Kansas folks on the call today—thank you for joining us.

The one thing I’ll add is that I love systems disruption, and I think artificial intelligence is one of those disruptors. I’m especially interested in figuring out how we can advance the public health field at the local government level.

At Wichita State, I work with the Community Engagement Institute. We do anything that helps support community efforts, engaging on all levels and across projects. Thanks for letting us be here. I’ll turn it back to you, Tatyana.

 

The AI Journey

 4:23
So what did our AI journey look like?

To help organizations in Kansas and across the country become more comfortable with AI and develop organizational policies, we created AI policy guidance as part of a public health infrastructure grant. We’re also working with partners to strengthen opportunities for policy development specific to public health.

We co-authored a policy statement with the American Public Health Association, focusing specifically on the workforce implications of AI. We’ve been running a community of practice here in Kansas and, at the national level, through the National Network of Public Health Institutes.

We are also testing different tools for public health functions, including tools used in webinars like this one. We share what we’ve tested, provide knowledge about those tools, and help build the capacity of peer organizations.

 

National Reach and Interest

 5:45
Here’s a snapshot map of our activities as of May. Since then, we’ve had additional projects in June and July. As you can see, we’ve worked across the East Coast, West Coast, and Midwest.

What this map really shows is that there is a lot of interest in AI across organizations and a big appetite for information. We encourage everyone here to continue peer networking—share your policies, share what you’ve learned in trainings, and support each other.

We also value our partnerships with national organizations, which help share resources and lessons more widely.

 

AI Policy Guide

 6:45
If you’re on your journey of developing AI policy, we recommend using the guide we developed. It includes:

  • Examples of provisions from policy reviews across states and counties.

  • Sample language you can adapt, such as for data privacy or transparency sections.

  • Guidance on how to get started writing a policy and what structures to put in place to make the process smoother.

We’d also like to recognize our partners at Health Resources in Action, who contributed to developing the guide, and the CDC for funding it through the Public Health Infrastructure Grant.

 7:50
Aaron has dropped a link to this resource in the chat, and there’s also a QR code on the screen. Feel free to scan it with your phone and visit the resource.

 

Webinar Objectives

 8:10
Here’s what you can expect today:

  1. Learn new ideas for applying AI to daily tasks.

  2. Understand key ethical considerations when using AI.

  3. Gain practical tips on creating effective prompts for AI systems.

 8:47
Here’s our agenda:

  • Set a foundation for the conversation.

  • Walk through examples from counties across the country.

  • Talk about what to consider before using AI tools.

  • Share specific AI tools we’ve found helpful.

  • Discuss next steps.

And yes—you will have access to the slides and QR codes after the presentation.

 

Participant Poll

 9:25
We’d like to hear from you. On your screen, you’ll see a poll question: How comfortable are you with using AI tools in your work?

1 = Not comfortable at all
5 = Very comfortable

Scan the QR code or go to menti.com and enter the code on your screen.

 11:01
Thank you for participating. Here are the results:

  • The majority of you said curious but a little hesitant.

  • Second most common response: comfortable but still learning.

  • About 40% said not comfortable.

  • A small group said very comfortable.

If you’re hesitant, please share in the chat what gives you pause. If you’re very comfortable, share how you got there—what helped you become confident with AI?

From our experience, this is where most people are right now: some curious and learning, others still uncomfortable.

 

What Is AI?

 12:12
We could talk about AI for hours, but here are the basics.

  • Artificial Intelligence: any program or technology that mimics or simulates human intelligence.

  • Machine Learning / Deep Learning: subsets of AI in which computers learn from data rather than explicit rules, enabling them to recognize complex patterns and process natural language.

  • Generative AI: systems that create new content—text, images, audio, or video—based on prompts.

Not all AI is the same. Some systems need little oversight, while others require heavy human involvement, especially in decision-making.

AI is not new. It dates back to the 1950s with Alan Turing’s imitation game, later known as the Turing Test. The term “artificial intelligence” was coined in 1956.

Fast forward: IBM’s Deep Blue in the 1990s and Watson in 2011. OpenAI released GPT-3 in 2020 and ChatGPT in 2022, which made AI widely accessible and sparked today’s boom.

 

AI Literacy Model

 15:00
When we think about hesitancy, this AI literacy model is useful:

  1. Understand – Learn what AI is, its opportunities, and its limitations.

  2. Explore – Start experimenting with tools like ChatGPT or Microsoft Copilot.

  3. Integrate – Adopt AI solutions strategically at the organizational level.

  4. Adopt & Scale – Fully embed AI across the organization, if appropriate.

Today, we’ll focus mostly on the first two stages.

 

AI Policy Landscape

 16:43
There’s a lot happening in AI policy at the federal, state, and local levels.

  • Federal level: Recent executive orders emphasized removing barriers to AI development so the U.S. can be a leader in the global AI race. Some documents reference AI in relation to education and workforce.

  • State level: Many laws are being proposed and passed, especially around health and government use of AI. The National Conference of State Legislatures (NCSL) tracks this activity.

  • Themes: human oversight, workforce implications, transparency, and taxpayer accountability.

Keep an eye on what’s happening both nationally and in your state—it will directly impact how AI is applied in your organizations.

 

Applications in Public Health

 20:07
AI can support the 10 Essential Public Health Services:

  • Assessment: speeding up surveillance, forecasting, analyzing social determinants of health.

  • Policy Development: analyzing policy documents quickly, as CDC did during COVID-19.

  • Assurance: supporting workforce training and identifying resource needs.

 

Case Study: Restaurant Inspections

 22:06
The Chicago Department of Public Health developed a predictive AI model for restaurant inspections using 11 years of historical data (over 92,000 free-text reports).

The system identified common food safety violations (like pest infestations and improper storage) and flagged high-risk locations, helping target resources more effectively.

Considerations: historical data accuracy and potential bias in inspections across neighborhoods. Still, this was highlighted in APHA publications as a promising approach.
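
To make the general approach concrete, here is a minimal sketch of how such a text-based risk model could work, using a standard TF-IDF and logistic regression pipeline. It is illustrative only and not the Chicago model; the example reports and labels are hypothetical placeholders.

```python
# Illustrative sketch only (not Chicago's actual model): score free-text
# inspection reports for risk of a critical violation with a simple
# TF-IDF + logistic regression pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "evidence of pests near dry storage, food left uncovered",
    "all surfaces clean, refrigeration temperatures logged correctly",
    "raw chicken stored above produce in walk-in cooler",
    "hand-washing stations stocked, no violations observed",
]
had_critical_violation = [1, 0, 1, 0]  # labels would come from historical records

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, had_critical_violation)

# Rank a new report so inspectors can prioritize high-risk sites;
# a human still decides where inspection resources actually go.
new_report = "droppings observed behind prep line, cold holding above 41F"
risk = model.predict_proba([new_report])[0, 1]
print(f"Estimated risk of a critical violation: {risk:.2f}")
```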

 

Case Study: Health Chatbots

 25:00
AI chatbots can support health education.

One example: Layla Got You, a chatbot developed with community input from Black and Hispanic women. It provides sensitive family planning and sexual health information to women ages 16–25.

The chatbot operates 24/7, answers common questions, and reduces the burden on public health staff.
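
As a rough illustration of how a chatbot can field routine questions at any hour, here is a minimal keyword-matching sketch. It is not how Layla Got You is implemented; the questions, answers, and matching logic are hypothetical placeholders, and anything unmatched is escalated to staff.

```python
# Hypothetical sketch (not the Layla Got You implementation): a keyword-based
# FAQ responder that answers routine questions around the clock and hands
# anything it cannot match to a human staff member.
FAQ = {
    ("hours", "open"): "The clinic is open Monday through Friday, 8 a.m. to 5 p.m.",
    ("appointment", "schedule"): "You can schedule online or call the front desk.",
    ("cost", "price", "fee"): "Many services are free or offered on a sliding scale.",
}

def answer(question: str) -> str:
    """Return a canned answer if a keyword matches; otherwise escalate."""
    q = question.lower()
    for keywords, response in FAQ.items():
        if any(word in q for word in keywords):
            return response
    return "I'm not sure about that. A staff member will follow up with you."

print(answer("How much does a visit cost?"))                # sliding-scale answer
print(answer("Can I talk to someone about side effects?"))  # escalated to staff
```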

 

Case Study: Real-Time Translation

 27:12
Language access is a challenge across the country. Some health departments are using AI-powered translation tools like Pocketalk.

  • Provides real-time HIPAA-compliant translations.

  • Used in Johnson County, Kansas, and Niagara County, New York.

  • Proven especially useful during mass vaccination campaigns.

If your department uses Pocketalk or similar tools, we’d love to hear your experience.

Additional Strategies for Using AI

 29:11
Other areas where AI can help:

  • Policy Analysis: Summarizing key points, reviewing legislation.

  • Research: Summarizing literature. (Example: Petal AI for document analysis.)

  • Writing & Editing: Plain-language rewrites, proofreading, narrative improvement.

  • Administrative Tasks: Drafting agendas, summarizing meeting minutes.

  • Communications: Social media posts, hashtags, brainstorming ideas.

  • Critical Thinking: Serving as a “devil’s advocate” to test arguments.

Participant Reflections

 30:58
We’d like to hear from you: what tasks have you used AI for in your work?

 33:20
Participants provided some examples: proofreading, email writing, social media, workflows, plan reviews, quarterly reports, and feedback.

Please also share the benefits you’ve seen (efficiency, accuracy, creativity) and any pitfalls you’ve encountered.

Considerations Before Using AI

 34:01
Before using AI, keep these points in mind:

  1. Rationale – Be clear about why you’re using AI. Not every task needs AI. Ask:

    • What problem am I trying to solve?

    • Have other solutions been tried?

    • Is AI the right tool for this?

  2. Ethics – Think about transparency, privacy, equity, and workforce impact.

  3. Quality – Always review AI outputs carefully and maintain human oversight.

Key Ethical Considerations

 Speaker 35:30
Let’s talk a little bit about ethical considerations. Today we’ll just touch on a few.

There are several to keep in mind:

  • Bias mitigation: AI systems can reproduce bias because they’re trained on data. The quality of the output depends on both the training data and the algorithm’s design.

  • Human oversight: We need a “human in the loop” to verify AI outputs.

  • Data privacy: Many organizations, especially health departments, deal with sensitive data. Avoid putting confidential or identifiable data into public AI tools unless your IT department has cleared and approved them.

  • Transparency: Be clear with stakeholders about when you’re using AI and for what purposes.

  • Explainability: Some AI systems are “black boxes.” We can’t always explain how they reach conclusions. That’s not always a problem (for example, writing an email), but it is critical when AI is used to determine eligibility for services.

  • Environmental impact: AI requires significant energy and resources. This is an often-overlooked consideration.

 

Bias Mitigation

 Speaker 37:43
Biases in AI come in different forms. Some are built into the data and algorithms, while others are introduced by users.

  • Systemic bias: If a model is trained on biased data, its outputs may reflect that bias.

  • Interpretation bias: The AI may not interpret prompts correctly without enough context.

  • Human bias: Users bring personal perspectives when asking questions or evaluating outputs.

The key is to recognize biases and work to mitigate them. Aaron will show later how prompts can help reduce bias.

 

Human Oversight

 Speaker 38:42
Human oversight is crucial. For example, the FDA recently rolled out an AI system, Elsa, designed to help speed up the drug review process. But the system had limitations—it sometimes produced errors or hallucinations, meaning it generated incorrect information.

Humans must review outputs to catch errors and ensure accuracy.

Oversight can be tailored to the risk level of the task (see the sketch after this list):

  • Low risk: Summarizing articles, drafting simple documents. These require only light staff review.

  • Medium risk: Prioritizing outreach to communities. This needs stronger human review, rationale for accepting outputs, and bias spot checks.

  • High risk: Eligibility decisions for services or candidate screenings. These require mandatory human approval, multi-level review, signed protocols, and audits. Some states have even prohibited AI use for these high-stakes tasks.
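
One way to put these tiers into practice is to write them down as a shared configuration that staff consult before using an AI tool. The sketch below is a hypothetical encoding of the tiers above, not a prescribed standard.

```python
# Hypothetical encoding of the oversight tiers described above; the task
# examples and review steps mirror the list and are not a prescribed standard.
OVERSIGHT_POLICY = {
    "low": {
        "examples": ["summarize an article", "draft a simple document"],
        "required_review": ["light staff review"],
    },
    "medium": {
        "examples": ["prioritize outreach to communities"],
        "required_review": ["stronger human review", "documented rationale", "bias spot check"],
    },
    "high": {
        "examples": ["eligibility decisions for services", "candidate screening"],
        "required_review": ["mandatory human approval", "multi-level review",
                            "signed protocol", "audit"],
    },
}

def review_checklist(risk_level: str) -> list[str]:
    """Return the review steps an AI-assisted task must pass for its tier."""
    return OVERSIGHT_POLICY[risk_level]["required_review"]

print(review_checklist("medium"))
# ['stronger human review', 'documented rationale', 'bias spot check']
```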

Data Privacy

 Speaker 41:01
Data privacy is critical. Currently, there is no single comprehensive regulation governing how AI systems must disclose or handle the data you put into them. Here are some best practices (a small redaction sketch follows the list):

  • Before use: Identify what type of data you’re working with. De-identify sensitive information. Avoid using confidential data in public AI tools.

  • During use: Use dummy data whenever possible. Minimize the amount of data shared. Disable data training features if the tool allows.

  • After use: Review outputs to avoid unintentional disclosure. Document how data was used. Monitor changes in the AI platform’s privacy policies.
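
As one small example of the “before use” step, here is a minimal redaction pass that masks a few obvious identifiers before text goes into a public AI tool. The patterns are illustrative, assume U.S.-style formats, and are not a substitute for your organization’s formal de-identification procedures.

```python
# Illustrative redaction pass for the "before use" step. The patterns assume
# U.S.-style phone numbers, SSNs, and dates and are NOT a complete
# de-identification method; follow your organization's privacy procedures.
import re

PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.\w+",
    "PHONE": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
    "DATE": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
}

def redact(text: str) -> str:
    """Mask obvious identifiers so they are not sent to a public AI tool."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

note = "Client Jane Doe, DOB 04/12/1988, phone 785-555-0142, email jdoe@example.com."
print(redact(note))
# Client Jane Doe, DOB [DATE], phone [PHONE], email [EMAIL].
# Note: the name is untouched; simple patterns miss many identifiers.
```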

Prompting and Generative AI

 Speaker 42:25
Now Aaron will take us through prompting.

 Speaker 42:29
Thanks. When we talk about generative AI systems like ChatGPT, Microsoft Copilot, or Gemini, the way we interact with them matters a lot. These tools rely on prompts—the instructions or tasks we give them.

If you sit at a restaurant and say, “Feed me,” you might get food, but you don’t know what. If you specify, “A cheeseburger, medium-well, with lettuce, tomato, pickle, and fries,” you’ll likely get exactly what you want. Prompting AI works the same way.

If you’re not sure what you want, you can have a conversation with the system to refine your request. The better we are at prompting, the better our results—and the less time and energy we spend.

 

Tips for Better Prompts

When prompting AI, keep in mind:

  • Provide context—enough detail to guide the system.

  • Keep prompts neutral to avoid introducing bias.

  • Never include sensitive or personal data.

  • Refine prompts based on the system’s responses.

  • Use structured frameworks: specify the action, purpose, and expected outcome.

For example (a small prompt-builder sketch follows these examples):

  • Weak prompt: “Write me an email.”

  • Strong prompt: “Write a concise, professional, and friendly email to a colleague, requesting their review of a new FAQ sheet on billing procedures. Express appreciation and set a clear deadline.”
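
Here is a minimal sketch of that structured framework in code. The field names (context, action, purpose, expected output) mirror the tips above and are illustrative; the helper simply assembles the pieces into one prompt you could paste into any of the tools mentioned.

```python
# Minimal sketch of the structured framework above; the field names mirror
# the tips and are illustrative, not a formal standard.
def build_prompt(context: str, action: str, purpose: str, expected_output: str) -> str:
    return (
        f"Context: {context}\n"
        f"Action: {action}\n"
        f"Purpose: {purpose}\n"
        f"Expected output: {expected_output}"
    )

prompt = build_prompt(
    context="A colleague needs to review a new FAQ sheet on billing procedures.",
    action="Write a concise, professional, and friendly email requesting their review.",
    purpose="Get feedback before the FAQ sheet is published; express appreciation.",
    expected_output="An email of roughly 120 words with a subject line and a clear deadline.",
)
print(prompt)  # Paste the assembled prompt into ChatGPT, Copilot, or Gemini.
```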

AI Use Cases

Generative AI uses typically fall into three categories:

  1. Assessing & Analyzing – reviewing drafts, critiquing comprehensiveness, summarizing misconceptions.

  2. Brainstorming – generating ideas, creative approaches, project planning.

  3. Creating – producing agendas, meeting notes, checklists, or even full documents.

 

Time-Saving Potential

 Speaker 48:59
Research shows that people working on computers spend about 30% of their time on tasks that could be automated or improved with AI. Used effectively, AI can save about six hours per week per person, which works out to roughly 300 hours over a 50-week working year.

The key is using AI responsibly: not applying it everywhere, but targeting the right tasks.

 

Case Study: Gamma AI for Presentations

 Speaker 50:00
One tool we highlight is Gamma AI, which creates presentations.

For example, if you’re preparing a community health assessment presentation at the last minute, you can input your requirements into Gamma AI, and it will generate a full PowerPoint in seconds.

It saves hours, but still requires human oversight. For instance, the system once generated an image where people appeared to be drinking wine instead of water. It’s a reminder that while AI can create useful drafts, humans must review and refine outputs.

 

Resources and Next Steps

 Speaker 54:17
Thanks, Aaron. We’ve dropped a few resources in the chat, including links to helpful guides.

To answer a question: yes, the tool I just showed is called Gamma AI. For video creation, there are several tools, such as Canva AI and Sora (OpenAI’s video-generation model, available through ChatGPT).

 Speaker 55:04
Another helpful tool is Petal AI, which allows document review directly within files.

 

Building AI Literacy

 Speaker 55:20
Here are some next steps for your team:

  1. Clarify your rationale: What problem are you trying to solve with AI?

  2. Identify specific areas for AI use and available tools.

  3. Build AI literacy—join webinars, take classes (like the Johns Hopkins course), and network with peers.

  4. Develop an AI policy: Use the guide provided to structure your organizational approach.

  5. Start small with use cases like document review to build comfort.

 

Workshops Offered

 Speaker 56:32
We also offer additional workshops, including:

  • Prompt engineering (crafting effective prompts)

  • AI policy development

  • AI tools for different purposes (logic models, quality improvement, public health communications)

  • Responsible and ethical AI use

 

Closing and Audience Q&A

 Speaker 57:20
We have about three minutes left. Please share questions in the chat or unmute yourself. Also, please scan the QR code to provide feedback.

 Speaker 57:59
Yes, some organizations restrict AI use. If your city, county, or organization has restrictions, we’re happy to present to leadership or provide additional information.

 Speaker 58:33
Thank you also for mentioning the Johns Hopkins course. I’m taking it as well—it’s been great so far.

 Speaker 58:49
We’ll make sure the slides and links are shared afterward for those who had trouble copying from the chat.

 

Final Remarks

 Speaker 59:09
Thank you for being such an engaged audience and for spending time with us. We hope to connect again in the future.

Host 59:20
Thank you so much, Aaron and Tatyana, for that great presentation. We really appreciate it.

For anyone not yet using Patagonia Health: we are an integrated EHR, practice management, and billing solution specifically designed for public and behavioral health organizations.

Have a great day, everyone!




Patagonia Health is the preferred EHR, Practice Management, and Billing solution for public and behavioral health providers. We empower you with the tools you need to simplify admin work and transform care in your community.
