AAOE Virtual AI Summit
Legal Considerations for AI in Healthcare
Video Transcription
All right, can you hear me, Jessica? I made you a co-host, so you should be able to turn your camera on. Okay, great. You can see me, I can see you. Okay, so now, share screen. Yeah, let's go ahead and make sure we can get your screen shared. What did we say was the better way to go? PowerPoint slideshow would be the better option. Do you already have your PowerPoint slideshow started? Yes, I think it's going to select the right one. Okay, I think you can probably just... yeah, perfect. Great. Let me just make sure I can toggle the way that I want to. Okay, I'm going to have to use the mouse, but that is totally fine. Yes, it appears I can go back and forth with my own computer, and I have the polls ready to go too, since that's your first one. Great. Yeah, that's my first time writing any poll questions. I was like, these are probably really easy, but as you said, just getting people to hit something might help. Well, I'll tell you, we haven't had any polls yet in the event, and people just went back-to-back-to-back, so I think it'll be good to reignite a little energy. They're getting a break, and they'll get some interactivity happening. Great. And I don't know if you can message me in any way, but I always get afraid that I'll be the one who doesn't have anything to say. At the same time, it's only 45 minutes, and I'm probably going to go up until then, if that's okay. Yeah, it kind of worked out, because one session went over and the other one ended right on time. Stuff like this with a live event fluctuates, and people expect that. If you'd like, I can give you a five-minute warning in the chat, and you'll see a prompt come up; that way you don't feel like I'm verbally rushing you. Right, it'll give me a chance to see, okay, I still have some more ground I need to cover, so let me do that. Awesome. And the recording and the presentation will be posted to the site, correct? Yes. Okay, great. I tried to embed links and put citations in everything, so hopefully all the links work when people want to use them. Yes, it'll be available and accessible, because I'll upload it as a PowerPoint, but if someone has trouble, I can always grab it and turn it into a PDF, and links usually work really well in PDFs too. Great, great. And you guys are in Indiana, right? So it's one o'clock there? It's two o'clock; we're on Eastern. Okay, I couldn't tell. Yes, and can you make me a co-host again? If you don't, I can't see all the people in the chat or respond. Yes, I have made you a co-host, and thank you; I actually didn't know that you couldn't see the chat if you weren't a co-host. Interesting. Yeah, you can see it, you just can't chat with anyone or put questions in there. Oh gosh, thank you for letting me know. Yeah, I thought I had gotten you on the last one, and there were so many people who initially jumped in that I missed you. No worries.
Kathy, this is Rachel Carey with Whiteford, and Kathy is our Chief Marketing and Membership Officer at AAOE. She's sort of my backup in case we have a technical emergency or something. Okay, yes. Yeah, and I definitely want to follow up with you after this presentation. I'm trying to help our members put together an AI policy, and I'd love to get your input, if you're willing, with as much time as you want to give on that. I drafted a couple of things based on best practices, but I really don't know the legal side of healthcare at all. Yeah, and thankfully there are a lot of tools and things from legitimate sources that are already prepped that we can pull from, so I would love to help. Awesome, thank you. I'll definitely send that to you. Great. All right, well, if you're ready, I'm going to go ahead and launch the session. Okay, sounds great. Hello, everybody. While we have everyone coming into the room and back from break, I'm going to go ahead and introduce our next speaker. The next session we have here is Legal Considerations for AI in Healthcare. As AI continues to grow in healthcare, it raises important legal and regulatory issues, so to guide us through these complex topics, we have Rachel Carey, who is counsel at Whiteford, Taylor & Preston, LLP. Rachel's extensive experience in healthcare law will provide critical insights into the legal landscape surrounding AI. So please join me in welcoming Rachel Carey. Thank you so much, Jessica, and thank you, everyone, for joining me here today. I am a healthcare attorney based out of Richmond, Virginia, and I have been following the developments in AI for a while. I think I started looking at things in 2015 for clinical settings, and then when I worked in the managed care space, in the Medicaid medically complex area for one of the larger insurers here, we had a lot of AI tools that people had questions on. And now, with how the landscape has been over the last four years, there's been so much development that it's hard to keep up with. We'll see that the legal and regulatory framework is trying to catch up with everything that's happening, and it's always changing. It'll be interesting to see where we go, especially between now and the election, but it's definitely more of a work in progress than anything definitive. That said, I'm here to help us start trying, so let's go ahead and get into it. With that, I think everybody can see my screen. I'm going to go to the next slide, which is our first poll question. It's really just meant to get an idea of everybody's top concerns, and maybe something we can spend more time on, or that I'd be willing to answer more questions about. So go ahead and put in your answers for that; obviously, there are no wrong answers for this one. Jessica, you can let me know when everyone's responded. So we have responses coming in. So far, the top is consent requirements, with the next in line being where to find the regulatory framework, and then understanding bias following closely behind. Right, right. Well, that's great. So consent requirements lead; thankfully, we have a specific section in the presentation for that. And I'm glad we have a good mix here.
The presentation tries to give a comprehensive view of everything, and the materials cross-reference each other, earlier and later, so hopefully there's plenty to cover all the concerns here. So do I just have to hit end poll? Yeah, I'll go ahead and share the results so everyone can see as well. Thanks. Do I just hit stop sharing? I can handle that, so you can move forward. Yep, I got it. Thanks. Okay. All right, go to the next slide. Okay, great. So this is just an outline, and it probably corresponds with what we had up for the poll question. We're going to do a brief look at current AI uses in healthcare, because I saw some of the other topics being addressed today, and with the limited time, we only have 45 minutes, we'll go through some of that quickly, but make sure we have the terms under control. Next, we'll go to the enforcement cases we're seeing with the FTC, ONC, OCR, FDA, and so on, and then into the actual regulatory framework: what is on the books, and what is more guidance-related, best practices, things like that. Between sections three and four is where we'll see the consent requirements everybody was asking about, so look for it there, and obviously I'll alert everyone to that. Then we'll look at limitations for managed care entities, which I think will be an interesting one for you all, just because I don't think everyone follows this as closely or knows the outlets where the managed care companies get their guidelines when CMS puts them out. And at the end, we'll have the trusted and established frameworks that regulators are pointing everyone to use. So, great. What is artificial intelligence? Ultimately, it's the very basic idea of using machines and computers to do things humans would do, with predictive pattern recognition applied to data sets. Tools are generally broken up into two distinct categories: non-generative AI and generative AI. Non-generative AI is more the actual recognition of things: image recognition, language processing, putting out what is spoken and showing it as a transcript on a stream, chatbots, fraud detection; again, those pattern-recognition tasks. Generative AI uses algorithms, on top of those data sets and predictive pattern recognition, to produce new information. ChatGPT is probably the one that's gotten the most recognition nationwide in its usage, but there are other companies putting out document drafting services (I know there are a lot for the legal realm), interpretation services, and all sorts of analytic tools where you can feed in data sets or elements and get new information. On the screen here are the main areas where healthcare is using AI today. We see a lot of diagnosing patients; there are a lot of tools there, and that area has the most FDA-approved tools, able to read CT scans, x-rays, and the like to help diagnose a patient accurately and more quickly.
A lot of transcribing of medical documents: technology that helps convert speech into medical records and keeps consistent language in them. Drug discovery and development has been a big area as well. And then, again, more administrative efficiency: chatbots, patients calling in and asking where they can find something or whether they can schedule something, those sorts of areas. This, I thought, was just a good picture of specific AI tools being used in the orthopedic space right now, and I wanted to go over some of the terms in the middle there. The way the image is presented, within each of these there's a term of art in the AI space that people look to, and each one is a little more advanced than the last. Machine learning is the one I think most people are familiar with: you feed it a data set or an image, and it uses pattern recognition to teach itself, get better, and produce better results. Deep learning is a step up from that and uses what's called a neural network. There's been a big surge of research in what they call convolutional neural networks for diagnostic image recognition and classification, like tumor detection; the idea is based on a grid pattern and multiple layers of that. Again, I'm not really a developer, so that is not my area of expertise, and I'm sure some of the other presenters today are more in the developer space and can tell you the mathematics and actual inner workings of it. But if anyone has any questions on any of the tools up there, the citation is there; I'm not going to go into that.
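For anyone curious what that grid-pattern, multiple-layers idea looks like in practice, here is a minimal sketch of a convolutional image classifier. It assumes PyTorch, and the layer sizes, class names, and fake input are all invented for illustration; it shows the general architecture only, not any particular diagnostic product.

import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    # A toy convolutional network: stacked grid filters, then a classifier head.
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Each Conv2d layer slides small grids (kernels) over the image to learn
        # local patterns; stacking layers builds up from edges to larger structures.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # (batch, 32, 16, 16) for a 64x64 input
        return self.classifier(x.flatten(1))

# One fake 64x64 grayscale "scan" in, two class scores out.
scores = TinyImageClassifier()(torch.randn(1, 1, 64, 64))

A real diagnostic model would be trained on large labeled data sets, which is exactly where the bias and data-quality concerns discussed below come in.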
The citation is below, or I can certainly follow up and answer any questions there. But the main idea here is that all of these tools are meant to help speed things along, add accuracy, and help with some of the hurdles we have in healthcare today, burnout, efficiency, and cost, with things taking longer and costing more. That's the general idea of why people are interested in these, which I'm sure I don't have to tell any of you. But obviously we're here because there are big risks that come with all of these, in terms of bias and producing hallucinations or incorrect material; none of these tools is 100% foolproof. Regulators have caught on to that and are starting to introduce enforcement, fines, and other consequences that come along with using these tools. So we want to make sure we have everything together: if you decide to go down this route, what do we look for, how do we best stay in compliance and guard ourselves against any ramifications, and how do we do the best work and protect our patients? So, current enforcement against AI in healthcare: there are limited guardrails right now, and it's patchwork. Nothing is completely solidified as to where we're going, and it's hard, especially between now and the election, to see whether any routes that have been taken might be reversed. But the current administration, the Biden-Harris administration, has taken several steps to at least get committees created and to get the agencies under them to start looking at this and actually pushing enforcement. I think we're at the next poll question. I don't think this one will be very hard for anybody: from everyone's understanding, what are the legal responsibilities you have to look at when implementing an AI diagnostic tool? Please pick the one incorrect answer, and hopefully this will not be too hard. I haven't written poll questions before, so this is my first time at it; maybe as I do this more, I'll come up with better and new ones. No, they're great. So far, it's looking like the majority of people are saying "practices can fully delegate all diagnostic responsibilities to the AI system without any legal"... implementing. Oh my goodness, I can't talk. Implications.
That is correct. I'm glad the audience saw that, because obviously that's why we're here today: there are big risks in fully delegating and not overseeing our third parties, and in the event that anyone tries to partner or create their own tools, you have to guard against this as well. You can't just delegate compliance to a third-party vendor. A big message from all of this is that you have to be thinking methodically, have a real plan, document (obviously), and treat this as an ongoing effort to keep getting better and to monitor everything that's going on. Hey, Rachel, we have a question in the chat. Do you want me to share that with you now, or do you want to hold those until the end? Let's hold till the end. I will definitely follow up with anybody on any questions; I just want to make sure all the material is covered and that we get to everything. All right. First up on current enforcement are FDA warnings and recalls. There's been an increasing number of artificial intelligence and machine learning devices cleared or approved in the past five years through multiple different pathways: the premarket approval pathway, authorization via De Novo premarket review, and clearance via the 510(k) pathway. But with all of these, there come ones that, even with market approval, maybe aren't what they seem to be, or that show issues later on. The FDA posts a list of any recalls for anything it has approved, including AI devices. Between 2019 and 2021, around 10% of radiological devices (71 of 755) had recalls, as did 14 of 285 machine-learning devices. Of these recalls, two were designated Class III, which is low risk; the rest were Class II, and none received Class I, the highest level of risk. The FDA, as you see on this slide, has put out more guidance and blog advice about what it wants to see for developing and monitoring AI. It worked with an international group to put together the principles it thinks are essential for transparency in AI: the Good Machine Learning Practices. Two to note there: GMLP 7, which focuses on the performance of the human-AI team developing the tool, and, what a number of people were concerned about with the first poll question, clear and essential information being delivered to the patient. The FDA defines transparency as the degree to which appropriate information is clearly communicated to the relevant audience. It also lists four elements of effective transparency: ensuring that the impact on risks and patient outcomes is communicated; that the intended user and audience know the context in which the tool is being used; training and successful communication about the tool with the user and the patient; and understanding the environment in which the tool is going to be used. There's a difference between consent beforehand and using a tool ongoing in a process where the patient maybe hasn't seen it or realized it was being used, and you then let them know and obtain consent afterwards. That was the idea behind that element.
The next elements we have here are enforcement under the Federal Trade Commission, HIPAA violations, which are enforced through HHS's Office for Civil Rights (OCR), and then ONC for the bias and discrimination piece at the end (I should have put that there). As you can see under the Federal Trade Commission, the Flo period and ovulation tracker and the GoodRx violation are two of the big ones you might have heard about. The GoodRx case was the first enforcement action under the FTC's Health Breach Notification Rule. In addition to the allegation that GoodRx wasn't disclosing the way it was actually using the product, the FTC said the company engaged in advertising activities that used pixels to collect and share users' information about prescription medications and health conditions, which was then delivered to well-known third-party social media and advertising platforms without the actual users who put it in there ever consenting or being made aware in any way. As for the ovulation tracker, there's been heightened scrutiny here after the Supreme Court's overturning of Roe v. Wade, and the Biden administration has looked at protecting this more: OCR and HHS have put out new, heightened requirements to protect reproductive health and privacy interests for these patients, due to the ongoing tensions between the federal government and states looking for that information to advance whatever their agendas are. With Easy Healthcare and its ovulation tracker, the FTC indicated the developer violated Section 5 by disclosing user information and, again, not really getting the appropriate consents to deliver that information to third parties. On this last piece, bias and discrimination are usually looked at by ONC under HHS, but ultimately what ONC, and a lot of the other entities that are stakeholders in artificial intelligence, are really looking at is bias that is pretty much inherent and baked into any tools you're using. This is a link to a study that revealed one of many biases that have been found in algorithms and AI tools; if you're looking for a use case, I went ahead and put that in there. One thing about privacy and consents: it's not always as straightforward as you might think, because apps and developers, especially third parties, aren't always HIPAA-regulated. That's why the FTC steps in there, and why it has the Health Breach Notification Rule: to catch players that operate outside of HIPAA. But it's still not always very clear, because you can have a situation where a patient gives their input and data to you, and you are a covered entity; but if they also share their information with something that is not a covered entity, then it could possibly become non-HIPAA-covered. So in a lot of these arrangements, you do have to track, think out loud, or just diagram how front-end users and practitioner-side users are actually going to put in data, whether they know what they're doing, and follow where the information goes, to see where things would fall for regulators and enforcement like this.
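On the baked-in bias point, the simplest first-pass audit is just to compare a tool's error rates across demographic groups and flag large gaps. Here is a minimal sketch in plain Python with hypothetical field names and toy data; a real evaluation would use established fairness tooling and, ideally, an independent assessor, as discussed near the end of the session.

from collections import defaultdict

def error_rates_by_group(records):
    # records: dicts with hypothetical "group", "prediction", "actual" fields.
    # A large gap between groups is a red flag that bias may be baked into
    # the tool or the data set it was trained on.
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: the tool misses far more often for group B than for group A.
sample = [
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "A", "prediction": 0, "actual": 0},
    {"group": "B", "prediction": 0, "actual": 1},
    {"group": "B", "prediction": 1, "actual": 1},
]
print(error_rates_by_group(sample))  # {'A': 0.0, 'B': 0.5}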
So next, these are the main executive orders we've seen that have laid out the land here so far. We have two from the Trump administration that promoted the idea of looking at AI more, upping the game, getting an approach together, and actually looking to incorporate AI more in government. Then, under the Biden administration, February 2023 is when we really started seeing attention given to combating bias in any of these tools. And the last one, from October 2023, is the first direction given to HHS to actually create a task force and look at ways to make anybody receiving federal financial assistance, which they also take to mean receiving any payments from government programs, combat racial and other bias in these algorithmic tools. The White House also put out its Blueprint for an AI Bill of Rights. That was actually prior to that executive order; it was put out in 2022, and it has five big elements they want to see addressed and think are essential. They believe everyone deserves protection from unsafe and ineffective AI systems. They want to see proactive and continuous measures to design them fairly and safely. They want developers and users to take privacy with the utmost attention and assess it on an ongoing basis, and then notice and consent, in the same vein as the poll question. But again, these are just guidelines; they aren't actually set in stone anywhere as of right now. And as for human alternatives, the idea is that if these tools fail, there is some actual human element to step in for whatever the AI is being used in lieu of; they want to see that fallback there. Congress right now hasn't had much traction in coming up with a comprehensive legislative scheme for AI, which is certainly frustrating for a lot of the stakeholders, and I'm sure for you all as well if you've been following; it's not likely we're going to get one any time soon. With that, we're pretty much left with the states making their own attempts at combating this, and a patchwork of different levels of AI regulation. This year alone, 22 states have introduced legislation on AI health use; there were only 11 in 2023 and only three in 2022, so you're seeing it ramp up a lot. These are just the high-notoriety ones, the ones getting a lot of press. Colorado is probably the most significant, as it now has a consumer protection element that requires developers and deployers of high-risk AI to document steps to avoid algorithmic discrimination, set to take effect on February 1st, 2026. I think there are some concerns, given the way the law is drafted, about whether that is really going to happen, but it is probably the best attempt we've seen at actually getting buy-in requirements from deployers and users to step up and do the compliance that's needed.
But if we continue to see these patchwork elements, one state here, one state there, it will certainly be difficult for players operating in multiple states to comply with all the different levels, to the point that you might start seeing developers adopt more regional or state-specific business plans. So there's more to see there. Utah, which I don't think is up there, is another one; it requires disclosure when AI health tools are being used by around 30 different types of healthcare professionals. But that one is having the same problem as the Colorado law, in that there may be delays due to ambiguous drafting. Next, we have the state attorneys general stance. Ultimately, they feel they're in a good position to enforce federal laws and any state laws that come about. They want to be involved, and they delivered a letter in June 2023, and again in 2024, about how important they think this is. I think they're basically at a point where they're ready to act on whatever the next signal is. And, I believe there's a link down there, the Federation of State Medical Boards' House of Delegates adopted an overall stance that puts out the essential elements they want to see when providers are using AI tools, with a lot of ideas similar to what we saw in the White House Blueprint. One other thing I wanted to emphasize here: while the Federation would like to see AI used in a way that supports healthcare providers, they don't see it as any sort of replacement for physicians, and they ultimately feel that any physician should make reasonable efforts to identify and address biases before using AI systems in patient care. So, harking back to the physician, or anyone who wants to use these tools to assist in clinical decisions, they're really putting the emphasis on being an active user and understanding how these tools are made and their drawbacks. Jumping back to the federal side, these are other areas of law that already existed and are being used by federal agencies to put out more modern rules combating bias and the other areas of concern with AI in healthcare: the Civil Rights Act for the non-discrimination concerns, and FTC Act Section 5, which was noted in those enforcement cases and is the consumer protection element ("you said you were going to do this"), along with the notice elements we were talking about with GoodRx. The FTC has also put out its own guidelines; they're not healthcare-specific, but they're there in addition to the other entities I mentioned before. The FDA is a funny one, in that you have to think about what the tool you're using is actually doing; depending on that, it could be regulated by the FDA or not. This slide shows which elements or tasks a tool performs and whether it will be regulated by the FDA. Your less clinical-judgment-based ones are essentially not going to be regulated by the FDA.
So anything that just helps with billing, scheduling, and those sorts of administrative pieces is not. The health management one is kind of iffy: identifying patients, but not making any real generative or clinical decisions or suggestions, is in the middle here. And anything that suggests a diagnosis for patients and helps with clinical decision-making beyond just identifying is regulated by the FDA. These are ideas to help you think through what would make something regulated by the FDA and what the agency looks for in appropriate AI tools. Anything intended to analyze a medical image or a signal could be exempt or not, depending on what the image or pattern of inputs is. Similarly, the relevance of patient-specific data could also trigger FDA regulation. In terms of the intention for the AI tool, if it does any of the elements in this third line, it will likely fail an exemption and have to meet full FDA regulations. And again, the environment in which it can be used matters: they're basically saying that if a tool is going to be used in a time-sensitive, rushed environment, that's going to fail any exemption, and they're going to want to look at it further (a rough sketch pulling these criteria together follows at the end of this passage). Going back to the traditional ideas of things already in place being used in more modern rules: sorry, I think I actually flipped these slides, but we'll go from there. ONC, I'm not sure if you've heard of it, is the Office of the National Coordinator for Health Information Technology under HHS; they've updated their name to another acronym, which you'll see here, but I'm still calling it ONC for now. They have put out a rule around what they call predictive decision support interventions. They've defined that term, along with what they call source attributes, which are the underlying quality information. Under this rule, they are requiring 31 source attributes, requiring that the developer has established intervention risk management, and requiring that developers actually have this information readily available for patients to see and use in two places. So, again, this goes along with those ideas of transparency and making everything available. One thing to note, though: I'm still not really sure about the patient-facing piece. It's like when you get one of those limited warranties with a device that runs multiple pages; I'm not sure how effective that is at getting lay people and patients to understand. But it's at least something they're putting out there, and a step in the right direction. We'll see that again with consent: what really gets the point across? I'm not sure this completely gets there, but at least it's there.
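To pull the FDA triage criteria from above into one place, here is a hypothetical decision helper. The attribute names are invented, and the logic encodes only the rough rules of thumb from the slides (administrative tools out; image analysis, diagnostic suggestions, or time-critical use in); it is an illustration, not a substitute for an actual regulatory analysis.

from dataclasses import dataclass

@dataclass
class ToolProfile:
    # Hypothetical yes/no answers about what the tool actually does.
    administrative_only: bool = False
    analyzes_medical_images: bool = False
    suggests_diagnosis_or_treatment: bool = False
    used_in_time_critical_setting: bool = False

def likely_fda_regulated(tool: ToolProfile) -> bool:
    # Rough triage of the criteria above; illustrative only, not a legal test.
    if tool.administrative_only:
        return False  # billing/scheduling helpers fall outside FDA regulation
    # Image/signal analysis, diagnostic suggestions, and time-sensitive use
    # each tend to defeat an exemption and trigger full FDA review.
    return (tool.analyzes_medical_images
            or tool.suggests_diagnosis_or_treatment
            or tool.used_in_time_critical_setting)

# Example: a scheduling chatbot versus a scan-reading assistant.
print(likely_fda_regulated(ToolProfile(administrative_only=True)))      # False
print(likely_fda_regulated(ToolProfile(analyzes_medical_images=True)))  # True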
So, HIPAA considerations. Certain tools may not be required to comply with HIPAA, but tools that are developed internally, or where you execute a business associate agreement so the vendor steps into your shoes (which I suggest for most, so they help you, as a partner, comply with your HIPAA obligations), basically have to follow the HIPAA regulations on minimum necessary information, proper disclosure, and the like. It's important to keep that in mind. And on that note, the Office for Civil Rights came out with a rule back in May, based on Section 1557 of the Affordable Care Act, directing users of AI to put efforts toward eliminating discrimination based on race, color, national origin, sex, age, and disability. The rule really aims at making users take active steps to identify these issues and then think about how they're going to eliminate them. OCR clarified that in using these tools, the unique facts and circumstances of each individual must be considered, citing studies where reliance on certain algorithms has resulted in racial and ethnic disparities. That's another factor we're going to keep seeing: data sets are very important to why a tool gives a certain suggestion, and you have to look at the person individually when making a final decision about how you're going to treat a patient. In terms of measuring compliance, they would look at things case by case for any allegations brought to them. They'd probably look at the size of the entity an OCR complaint is being brought against and the resources that entity has; but because this is so new, don't rely on being a smaller provider with limited resources to get you off the hook by any means. They'd also look at what the tool's intended use was and whether it was used in that manner; whether the product information was requested by, and delivered from, the developer to the user; whether there was actual understanding of what was needed to run the tool successfully; and whether individual practitioner assessment was still done in that specific case. Something I'll come back to; I'm just monitoring our time here. What time are we ending? We have about seven minutes left. Seven minutes, okay. Then I'm going to go through this pretty quickly. Areas of liability to consider are product liability, intellectual property, and medical malpractice. I think product liability is probably an area you're all pretty familiar with. On intellectual property, just a reminder that when you're working with a third party, the data is generally assumed, and written into contracts, to be owned by the user; if there's any reason the developer needs it, you should write specific exceptions into any contracts you have. Okay, so this is the consent section. We might have to end on this one, but go ahead and do the last poll question: how does informed consent apply to the use of AI in making clinical decisions? Please pick the one incorrect answer again. So far, it's a sweep.
Everyone seems to be picking C. Good, that is correct. Awesome. So, informed consent is usually established, in a lot of states, by case law and then by specific regulations put out by boards and the like. Those requirements definitely existed prior to AI being a big thing here, and it's not like there's any exception in this realm either. In terms of what's required, you're seeing the Federation of State Medical Boards and other frameworks and guidelines really pushing this, but it's something you have to think through: does the patient really understand what's going on? You have to explain how the AI works, along with its drawbacks and the accuracy of the tool; you have to distinguish between what the tool is doing and the fact that, at the end of the day, the physician or practitioner is still responsible for the clinical decision-making; and you have to disclose the recommendations made by the AI tool and whether the treating physician ultimately agrees or disagrees with those recommendations. I put Virginia there as an example of a state standard, but it doesn't definitively say what you need in order to comply with informed consent. It's something you have to continually talk through, and if you adopt AI, you'll have to continually assess what is used, asked, and disclosed to determine whether that meets informed consent. If you have any doubts about whether the patient really understands, you should look at it. And it is hard, because sometimes with informed consent, especially if patients don't interact or ask any questions, you're just not sure; but proper documentation is always where you go: what was discussed, the elements there on the slide, whether those were explained, and whether there were no questions and they agreed. That way you're at least documenting that you did everything you could. These next few sections I'm going to go through really quickly: limitations on managed care using AI. You might have seen certain cases, and we've had cases come into our firm as well, of managed care entities using AI tools to deny claims and prior authorizations. People really want CMS to come down in that area, and it has started to, on what it is willing to allow for these tools. Ultimately, you have to stay patient-focused: utilizing the patient's experience and specific condition, and still using the treating physician's specific information about the patient; that has to be incorporated into prior authorization decisions. The rule put out last year basically says plans can't just ignore any of that, it has to be incorporated, and any adverse determination has to be re-reviewed by another practitioner in the same field. And this slide goes to the specific memo put out to managed care entities earlier this year, which gives more specifics, if you want to go to that memo, about what plans can do, how they can make these coverage determinations, and the limitations on them.
So, at this point, we're basically looking at how you decide, and what you need to assess, when evaluating any of these tools from a legal perspective. This next slide lays out the main areas you really need to talk through. Source data: is this data representative of the patients we serve? Patient requests: the patient might not own all the output data, but they are still entitled to access and a copy of anything that is made part of their records, just like any other medical record. Transparency, again tied to informed consent. Data protection, which is probably what you're all used to in terms of showing compliance with privacy and security. And third-party operations and their algorithms and automation tools. These are some of the essential questions that need to be asked when working with a third party, or for anything you develop yourselves. A lot of tools still coming out are under-tested and don't really put forth the mechanisms they use to assess what's going on. Whenever you're looking at a third party, you want to see whether there's a safety assessment and a bias evaluation, and whether they used an independent entity to do those assessments. And, Rachel, we're right at time here. Okay. So I just want to note, we have the next session starting in just a moment, but Rachel's slides are all available as well, and I believe they have your contact information. Yes. I'm going to drop that into the main lobby chat so that if you have follow-up questions for Rachel, you're able to ask them. And I will really respond; I hate it when speakers don't respond. So I will look into your question, and I will respond. Amazing. Yeah, Rachel's been super communicative with me as well. I'm dropping that in the lobby chat now. And I just want to say thank you so much, Rachel, for putting this together. It was a very thorough presentation with really helpful, good information for our members and attendees to think about. So, again, thank you so much, and we are going to be moving on here to the next session. All right, thank you. Again, there are links and everything in there as well, and yes, please reach out to me with any questions. Thank you all for your time, and I hope it was informative and useful. Yes, thank you so much. Thanks.
Video Summary
The video transcript covers a discussion on Legal Considerations for AI in Healthcare led by Rachel Carey. It delves into topics such as FDA warnings, HIPAA violations, bias, and discrimination in AI tools. The importance of transparency, patient consent, and proper documentation is highlighted. The presentation also explores the limitations of managed care entities using AI tools for prior authorizations. Rachel emphasizes the necessity for healthcare providers to understand, assess, and comply with the legal frameworks surrounding AI in healthcare. The session wraps up with key areas to consider when evaluating the use of AI tools in a healthcare setting. Rachel offers to address any follow-up questions and provides her contact information for further assistance.
Keywords
AI in Healthcare
Legal Considerations
FDA warnings
HIPAA violations
bias and discrimination
transparency
patient consent
prior authorizations
legal frameworks