2025 Legal Considerations for AI In Healthcare
Video Transcription
Recording. Awesome, go ahead, Rachel.

Great. I'm just about to hit the beginning slide. I think everything is showing on my end, and since we had a run-through before, I think we're good to go. My name is Rachel Carey...

Before you go on, I'm sorry, Rachel, we're actually just seeing the app, not the slides. I think they might have opened up on a different monitor. There we go. Thank you so much, Rachel.

No, I appreciate it. I didn't want everybody to sit through this without anything to view, so thank you for getting me on the right page. So let's get going. We have a lot to get through, and I want to make sure we cover everything that is useful, because that is the point of doing these: real-world application for you. As explained, I am counsel here at Whiteford, Taylor & Preston. I'm a healthcare attorney, and I do pretty much everything healthcare except med mal. I do board representations, but I mainly focus on corporate, regulatory, transactional, and compliance work, plus administrative enforcement, meaning recoupments or audit enforcement and helping clients through that process. I work with everyone from larger health systems to individual providers, and a lot of my practice has focused on ambulatory surgery centers, which is why I have been working with orthopedic executives on growing that business, and on AI, which is why we're here today. If I don't get to a question, please follow up with me. I do answer questions. It's an evolving field, as we will see; things are changing almost daily. So please follow up, and I will get back to you.

This outline just helps you keep track when you're looking through the slides on your own, so you can quickly see which section to go to. We are going to cover informed consent generally (informed consent rules are different in each state) along with the regulatory framework. We'll go over some of the big states, but there have been a lot of bills this year. Here in Virginia we just passed one; it's the 24th, and if the governor doesn't veto it today, it will go through. Section six has some very practical items if you're going to engage an AI vendor and contract with them, so I definitely urge you to look through sections six and seven for real contracting advice and for regular compliance-monitoring advice.

With that, we will get going. I wanted to lay out, without taking up too much of the presentation time, what artificial intelligence is, some key terms, and where we are today with certain devices, particularly the more common ones you're seeing in practice right now. AI, generally, is using computers to complete tasks that would otherwise be done by a human, using pattern recognition across data sets. With AI, you can have the big open-ended tools like ChatGPT, or, even within ChatGPT or other tools, you can limit those data sets.
Then with AI, it's also very important to look at whether something is non-generative AI or generative AI. Making this distinction becomes important, especially when you are contracting with a vendor that's providing AI, or with vendors in general. Everybody has business associate agreements with vendors that might be reviewing patient data or anything like that, and I've certainly seen complaints from attorneys who receive a BAA that says the vendor can't use AI at all, without the drafters realizing what that sweeps in. Under natural language processing, something as simple as taking records that are in Spanish and translating them is AI. We have had AI incorporated into these tools for a long time that maybe just wasn't thought about. So if you were to put language in a business associate agreement saying the vendor will not use any AI, you would really be hindering certain tasks from going through. It is important to have that language in there and to really think it through.

I think everybody is familiar with ChatGPT as true generative AI, and there are certain document-drafting tools out there now. In terms of more sophisticated examples, radiology and diagnostics is where we've seen a lot of the rise, and it's drawing a lot of the concern and a lot of the regulatory changes we're seeing now: radiology, tumor screening, dermatology, and the patient-management functions, like scheduling or just interacting on a website ("I'm trying to find this; can you tell me this?"). We've seen a lot of that. Another place you're probably interacting with AI yourselves is with the insurers and managed care organizations, which use AI and algorithms to run claims and documentation for prior auths on the front end, and on the back end as well, which we'll talk more about. I wanted to highlight those differences and the main points of where AI is used today.

When I got this chart from this journal, because it's specific to ortho, I really wanted to show you some of the more complex functions you could look at: tools that are more specific to ortho and more in the diagnostic, treatment, and clinical realm. It gives you a good overview, and I thought it would be useful for you to explore those more if you wanted.

Another common AI tool that I keep seeing and hearing about is ambient listening. You'll probably hear that term a lot. There are sensors put in the room, and I wouldn't say I know that they are definitely recording, but through this ambient listening they're able to generate all the notes that would have been taken for the visit: the diagnoses, treatment plans, everything.
Now, talking to some of my colleagues, it's unclear whether everything is recorded, and in terms of storage, it shouldn't be stored unless that is addressed in some sort of contract. So if you're going to be doing ambient listening, these are the important questions to ask: Is it actually recording? The notes can be generated very quickly, but is the recording kept? Is it cloud-stored, and if so, is the recording retained in the cloud, and for how long? Ambient listening is one of the tools I get a lot of questions on, because it's easily built into any of the electronic health records, and it seems to be picking up very quickly.

Up front, I wanted to go through some cautionary tales, based on things that have already happened and the regulators that are already going after some of these AI devices. Obviously the FDA clears a lot of these devices, which must go through its standard pathways, but as early as 2020 it was already recalling AI-powered software. The FDA put out more recent guidance on the blog it maintains, from its Director of the Digital Health Center of Excellence. To be honest, I don't know if he's still there; the FDA has had a lot of layoffs, so I don't really know what's going on, but my understanding is that the staff responsible for AI review is in flux right now, and the agency's stance, and the manpower it will have to really scrutinize AI going forward, seem very much in question. That's not to say the advice they've already given, about how to develop and evaluate tools for safety and to help limit your liability, wasn't fair. It's still worth looking at what they've put out.

With that, I did want to say the FDA signed on with a number of international players to adopt the Good Machine Learning Practices, which focus on the performance of the human-AI team and on providing users with clear and essential information. They go on to talk about the transparency they see as needed in these tools: really honing in on the tools communicating what is generated by a person versus what is generated by AI, and communicating it effectively to relevant audiences. That was probably the main current-enforcement point I wanted to show for the FDA, but you can always go to the website too; there are plenty of recalls on there.

Other current enforcement of note falls under the FTC's Health Breach Notification Rule. If you've heard of the GoodRx case and the Flo period and ovulation tracker case, these both involved AI being used to gather sensitive information around reproductive health without properly disclosing to patients that this was happening.
Some of these apps, too, were used in a way where there was no doctor-patient relationship; users engaged with them on their own. So the apps were not a covered entity or a business associate of a covered entity, and what they were doing put them in a place where they weren't really covered under HIPAA. The FTC has recognized this loophole: some of these apps are using AI without what the FTC considers proper disclosure letting users know how AI is being used and how they're tracked. Those were two very notable cases that I think are still ripe for either continued enforcement or possible pullback; we'll have to see how the new administration takes it up at the FTC. With HHS and the FDA, we're already seeing, as I'll get into, a shift in approach, but it's still worth highlighting now.

Then there are HIPAA violations. We've already seen AI technology that maybe doesn't meet the basic requirements of HIPAA, or clients whose business associate agreements didn't require the AI providers to meet those minimums, or didn't address risks that AI raises that other vendors don't. One of the big concerns around AI providers is the possibility of re-identifying data, which often isn't spoken to in their business associate agreements. We have seen some cases under that. And on the far right there is bias and discrimination, and how that is, accidentally or not, essentially built into a lot of AI. HHS has put out its stance on this and what it expects, which we'll go through; some of it has been drawn back, but it is a real issue, and they have put out guidance and warnings on it.

So we'll continue on now. The new administration's tagline for its approach to AI right now is innovation over regulation, and you'll definitely see that in what they've struck down and what they've put in place. As we go through this list of past executive orders, keep in mind the differences among executive orders, regulations, and statutes. Executive orders can move a lot more quickly; the president doesn't really need anyone else's say to issue them. Regulation requires drafting and a comment period, and statutes are passed by Congress. So if there are regulations overlapping an executive order, there might be some tension there, but the regulations wouldn't automatically be undone by the executive order. Just keep that in mind as I go through this.

The first one here, on maintaining American leadership in AI, was issued during the first Trump administration, so I wasn't too surprised to see it upheld. The one from December 2020 was also from the first Trump administration, so I was likewise not surprised to see it remain. But the two Biden-era AI executive orders were rescinded.
Those focused more on equity and promoting non-discrimination, which goes to a lot of the bias testing I was talking about on the prior slide. The other thing I want to say is that just because these were rescinded, and there might not be executive orders on these topics anymore, doesn't mean there aren't state laws or other requirements coming, or that making sure bias is rooted out and monitored won't help limit liability to third parties. Even with the executive orders gone, bias can definitely open you up to a decent amount of liability in other areas. This slide goes to the main Biden executive order that had HHS come up with a plan to root out bias and promote equity; I just want to make sure everyone realizes that is no longer in place.

There was also the White House AI Bill of Rights. It wasn't an executive order, so I wasn't really sure of its status, but if you go to that link now, you'll get an error page and be redirected to the White House homepage. Are these all good areas to focus on? Yes, I would say so. But I don't think it's a live document anymore.

The latest executive order, issued pretty much as soon as he took office, is Removing Barriers to American Leadership in Artificial Intelligence. It looks at AI as a matter of national competitiveness, essential to economic success, and it criticizes too much focus on bias in AI. So for all providers, I think you still need to be aware of bias and monitor it (there's a monitoring plan at the end of this presentation), but at what cost is a personal risk analysis you have to do when you're incorporating AI.

The last thing I want to say here is that the states are coming out with their own AI laws, which can be more restrictive than the federal level, while Congress hasn't gotten its act together to put forward a comprehensive proposal for AI legislation. They want to learn a little more first, and obviously things move very slowly at that level, so we have not seen anything there.

These are probably the major states I wanted to highlight as having more robust, more notable AI laws. Colorado and California have even tightened up what they have in place; the Colorado law isn't going into effect until next year, and it was expected to become more restrictive over time anyway, so I wasn't too surprised to hear that. Utah is one where there's a law specifically requiring providers to disclose that they're using AI. And as I said, Virginia is about to pass one if it does not get vetoed today. Certain industry stakeholders have called on Congress to preempt state laws by passing a federal one, but I don't have much faith in that, considering the current flavor has been to return more power to the states.
So unless the state laws go against those executive orders to such a degree that it really challenges what he said, and there's some push from the executive branch to rein things in or put pressure on Congress, I'm not sure Congress will get its act together in time to do that.

Next, state attorneys general. They feel that healthcare AI is a high-risk area. They support advancement, but also transparency and government oversight. In June 2023, a bipartisan coalition of AGs issued a comment letter putting out their view that AGs are in a good place to bring these cases and help regulate this area. And in January 2024, a coalition of 26 Democratic attorneys general issued a letter to the FCC commenting on staying on guard about emerging issues with AI. So I think the AGs are ready to go after this when they feel there's a particular threat in their state that's brought to their attention.

State medical boards: this is definitely important. The Federation of State Medical Boards issued "Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice." It stresses that AI is not designed to replace physicians and professional judgment, reiterates that AI produces falsehoods and has issues such that it can't replace a physician, and lists areas, one through seven, to look at when doing compliance monitoring for any AI incorporated into your practice.

Because there is currently no comprehensive AI law from Congress, this slide highlights other existing laws that have been used to go after certain AI. One is the Civil Rights Act, which again goes to bias and has been raised against a lot of the major insurers over the algorithms they use to review claims and prior authorizations. The Affordable Care Act has similar requirements to root out bias in operations, and there's the rule that came out specifically addressing AI (I think it's 504); we're not sure it will hold, given the ongoing criticism of the ACA and the current administration's attitude toward going after the ACA. There's also FTC Act Section 5 along with the Health Breach Notification Rule, and then the FDA's overall authority to regulate AI as a medical device.

This slide breaks down, if someone approaches you, or you're using AI, or you're developing tools in-house (I know bigger health systems are developing certain AI tools in-house), whether a tool needs FDA clearance, or whether there's some sort of exemption, because that is possible as well. Basically, the more administrative the tool, the more likely you don't need anything from the FDA. When you get into health management, like listening tools that suggest diagnosis codes or project hospital stays, that's when it really becomes "we're not really sure." Ambient listening, to me, is on the brink there.
A lot of these AI tools aren't just listening and transcribing notes anymore, so that's when you really have to look at whether a tool should be FDA-cleared or not. I just wanted to put that out there. This slide helps you decide whether something needs FDA clearance. They really want you to look at the intended use: Is it intended to interpret and analyze medical images or data? Does it display or analyze medical information? What recommendations does it put out to healthcare professionals? This helps you run through whether a tool should get clearance, because I wouldn't count out that people are coming to you trying to sell things that aren't cleared and should be. For the most part, these tools aren't marketed with any research or backing saying, "Hey, we've done this research about bias, this is what we do to root it out, this is our monitoring." They don't do any of that, at least not that I've heard; someone can let me know if they've encountered a vendor that has, because that would be a new one for me. This is just to help you recognize, if someone approaches you, whether they should be talking more about FDA clearance.

This regulation addresses what has to be disclosed to you about how an AI device or algorithm pulls data. What you'll see is that it covers decision support interventions, and the source attributes dictate the minimum information that has to be provided for AI. The short of it is that the tool basically needs to tell you what data it's using, where it comes from, and where you can find that information.

This is what I was talking about earlier with the ACA: Section 1557, the non-discrimination rule. It provides that a covered entity cannot discriminate on the basis of race, color, national origin, sex, age, or disability, and it makes it the responsibility of any covered entity to take reasonable steps to locate and root out that sort of discrimination in any of the tools and settings it uses. It also requires that you come up with a plan to mitigate this sort of risk. It was finalized, I think, sometime last year, and it's pretty consistent with a number of Supreme Court cases that have come out, if you all follow SCOTUS. I think it remains to be seen whether it will be challenged by the current administration, or whether they will look to replace it with something else or, at a minimum, not enforce it; I think enforcement will be scant. When OCR put it out, though, they did say compliance will be measured on a case-by-case basis. Obviously you can't expect the same compliance from a much smaller provider as from a statewide health system; they don't have the same tools. But you shouldn't neglect your efforts based on lack of size.
It's definitely something where you need to at least be able to demonstrate that you've put in thought ahead of time and did what you could. For those who have followed the Supreme Court's Loper Bright decision last year, that decision takes away some of regulators' ability to reign supreme. The one thing I will say about this regulation, though, is that it doesn't really engage in much interpretation. Loper Bright was about challenging regulations that took an interpretive stance on statutes, because the Court doesn't want regulators effectively making new law; that's the basis of the decision. This rule in particular isn't doing a whole lot of interpretation so much as setting out rules to operationalize the laws at play, and that goes for most of the regulations I've named in this presentation. So I'm not sure we'll see anything challenged on that basis so much as we'll see lighter enforcement than we expected when the rule was passed last year.

This next one is interesting, knowing that Loper Bright happened and that the current administration might be light on enforcing Section 1557 of the ACA. One area where I think everyone hopes they won't be light on enforcement is the limitations on managed care organizations and insurers using algorithms and AI for prior auths and claims management. There have been cases here, if you've followed TeamHealth suing UnitedHealthcare over software that automatically down-coded certain ER claims. There was also, I think, Cigna or Humana using nH Predict, or NaviHealth (I'm not sure who puts that product out, and it might actually be another insurer's tool; I don't want to accidentally name the wrong one), to determine lengths of stay. That triggered a lot of scrutiny, and CMS went back and looked at the new rules under Section 1557 of the ACA, along with the very similar Section 504 of the Rehabilitation Act, and put out additional guidance for the MA plans on how they can use these tools. The main takeaway is that CMS stressed how much an individual patient and their personal circumstances need to be considered in making recommendations; you can't just hit a button and go forward with whatever the recommendation is. And if there's going to be an adverse decision, a physician or other appropriate healthcare professional has to finalize that recommendation. Now, do I think that's going to prevent plans from using these AI tools? No. But hopefully it's an additional burden for them to document appropriately, and something you can use in your repertoire of tools to challenge their denials.
Where this information came out, I just wanted to highlight, was in an HPMS memo, through the Health Plan Management System. If you're ever interested, they do these blast memos, or they did before Trump came back into office and halted a lot of regulator communication; I think those will probably pick back up at some point. The memo expands on the requirements here. For supplemental benefits, it also limits things further: there can't be any sort of hidden criteria, especially when it comes to the supplemental benefits, under that last circle there. And it recognizes that certain standards of medical necessity might not be appropriate for a particular case. Again, it really stresses that you can't just run someone's facts through an algorithm and think that will stand; you have to show and demonstrate that you're taking into account the individual case at hand.

So now we're going to move into HIPAA and the other real-life considerations you need to think about when deciding to use these tools. HIPAA is obviously one of the big ones, and there's a slide on the other liability you need to think about as well. HIPAA is completely at play here, meaning you have to make sure that if you're using AI, it's for a permissible use; that you're using the minimum necessary amount of data to perform whatever task you're asking the vendor or AI tool to do; and that whatever you have in place in the business associate agreement ensures they are in compliance and basically stepping into your shoes. Remember that a business associate agreement doesn't relieve you of any of your responsibilities as a covered entity; it basically just means you can get the vendor to pay for a lot of things a regulator might slap you with. So you really want to make sure they are coming up to the level of responsibility you carry when you're doing these agreements.

On specific considerations: HHS clarified some of this in the most recent proposed rule; I think the comment period opened in January and just closed this month. That rule stressed AI and training data and everything we're talking about here today. Two big things: computer processing is not automatically a permissible use, so it's one where you need to get permission, and you need to make sure the business associate is actually taking any data you give it and maintaining the necessary level of security. On the malware encryption point: I haven't seen this come up much in agreements near me, but they're saying that if malware encrypts data without anyone seeing it, that could possibly be viewed as a disclosure as well. Obviously those cases are just a nightmare in general, but realize that this kind of malware incident could itself be seen as a disclosure.
Again, when you're doing these agreements, you want to ask: what is the minimum necessary, is the AI only using the minimum necessary, and what does it have access to? Is it using de-identified data, and is there a chance it could somehow re-identify that data? Those are the questions you want to think through when you're engaging a vendor for this.

Other types of liability, even without considering the proposed rule that just closed, include general products liability: design flaws, failure to warn, and user error (are you using the tool the way the AI company intended it to be used?). Intellectual property is a big one too: who owns the generated data if it's generative AI, and who owns the algorithm and the device? And then medical malpractice as well.

Realize that data is valuable information. Something we're not seeing a lot of, but that I discuss with other attorneys, is getting paid for it, or somehow using it as a negotiating tool. If you're giving the vendor information that's making its product better, I would definitely ask to get paid, and if not, at least get something bargained for somewhere else, like broader indemnification or other terms more favorable to you, since you are helping them improve their product for other customers. Keep that among your negotiating tactics: whenever you're using the tool, you're making their product better to sell somewhere else.

In terms of who owns output data, I'd always make it clear that you own any data that is generated. The general rule is that the licensee, the customer, should own all output data provided by the AI solution and maintain control over any results of the system; if there's going to be any carve-out, it specifically needs to be in the contract.

Informed consent: the Federation of State Medical Boards addresses this in the responsible-AI-use document I was telling you about. Informed consent is obviously a little different in each state. In Virginia it's a bit more complicated, because it's really been set by case law, and the standard generally is what a reasonably prudent provider in Virginia, in the same field, would disclose in a similar circumstance at that time. So that's very state-specific, but for general considerations, the Federation document suggests providers should consider or explain how the AI works; determine whether the data the system was trained on is representative of the patient population it's being used on; describe the predictive accuracy and the processing; distinguish the roles of the AI versus the provider; compare the AI results against what the provider would have personally concluded; and explain why, if they're going to take a different route than what the AI suggested.

So now we move to selecting a vendor.
I wanted to highlight a Fierce Healthcare article I came across. Among 2,400 hospitals surveyed in 2023, 65% reported using AI predictive models, mainly for health trajectories, scheduling, treatment recommendations, billing support, and health monitoring. Among those hospitals using predictive AI, 61% evaluated the models for accuracy with their data, but of that 61%, only 41% were also evaluating the AI and the data for bias. We're going to get into what we mean when we say bias. The point is that while these tools can certainly help with efficiency and generally make things somewhat better, it's not clear they're making all the right decisions all the time, even now, and anybody who adopts AI really has a responsibility to look into bias. Maybe you won't get caught under the regulations, depending on how the regulators in this administration enforce things, but on medical malpractice grounds alone, if the tools you're using feed into diagnosis and they aren't right, you're opening yourself up to liability.

So when you're selecting a vendor (and realize that a lot of providers are approached by their current EHR provider), these are the areas you should think about when they come to you. What source data are they using? How do they define transparency? Are they using third-party vendors themselves, are those vendors using AI, and are they required to be held to the same standards you would want the vendor held to? How do patients interact and make requests involving the AI, if there is any? What are the data protection and security practices, and what is the process behind the algorithms and automated decision tools?

Next, at least for me, sometimes it's just nice to have a list of questions ready, so I wanted to give you a list of questions you could take and get answers to. Even if you don't know what to do with those answers yet, you can at least pose the questions, go back to your team, look them over, and decide whether something is shady or you just need more information, and go from there. At the bottom here, one I added recently is about cloud-based systems: what information is retained by the AI provider, is certain information stored in the cloud, and how long are records maintained? That might change your expectations for security and the questions you ask. As I said earlier, though, a lot of the developers and people marketing these AI tools are not readily giving out this information, and a lot of them really don't have it when they come to talk to you.
So again, it's definitely worth posing this list of questions and saying, "Come back with answers, and then we'll consider it." It's not something vendors are readily coming around with and touting as a sign of being an ethical, good partner to do business with.

Another thing to consider with some of these devices is alert fatigue. Depending on how the devices work, you don't want a situation where so many alerts or so much information is pushed to providers that they aren't considering it as much as they should. Think of too many call bells; it's just something to add to your considerations for these devices.

When you're actually presented with a contract by a vendor, these are the very practical things to consider. What is their business associate agreement? Are you going to use yours? Do you have one that you think covers the AI portion, or are you looking to use theirs, and are you able to vet it as thoroughly as you should, specifically for AI? Make sure there's AI-specific language in whatever agreement you decide to use, yours or theirs. I would definitely look at the services and standards: the measurable indicators they list as success. Are they meeting certain metrics, and what metrics do they have to meet to fulfill their obligations under the agreement? Make sure there are clear safeguards they commit to maintain, and clear consequences for failing those safeguards.

The main thing you see in anything dealing with cybersecurity or data privacy is incredibly low caps for failures. Realize that it's very likely a modest issue in this area could completely tank a practice of decent size. Something I see a lot, with cybersecurity vendors generally and obviously with AI providers, is wanting to limit their indemnification cap to something like the last 12 months of service fees. You should ask for way more than that, and if you need to justify your reasoning, you can pull plenty of fairly recent cases where even minor breaches ended up taking down a practice. Depending on your position, how much you need the AI tool, and where you're at, it might ultimately come down to a specific cap number, but you at least want to realize the consequences of not reviewing and negotiating this. You really should put in some time for that. And for any breaches or failures, make sure there is very clear written and incorporated language with strict timelines for notifying you, because you're the covered entity; you're the one who needs to report.
Make sure they're not just going to hand you this AI tool without any training, maintenance, and support; that goes for any vendor you work with. And I would take some time to feel out whether you're comfortable with the staff that would be doing that for you. Going back to indemnification: along with cybersecurity and privacy, make sure the indemnification provisions contemplate IP infringement. You don't want to take any hits because they didn't properly secure the IP for whatever they're using. And misuse of data, if you can get that in under the indemnification provision, is a good one to include as well.

Ask about their cybersecurity insurance; that will help with the caps. They really shouldn't be putting anything lower than their insurance, at a minimum, in terms of the caps, so I would check for that as well. On limitations of liability, really look at anything where they're trying to limit something that could be far more devastating for you. And beyond cybersecurity, look at the general insurance they carry.

Scrutinize the representations and warranties. The warranty piece means they are standing behind future performance as well. I see a lot of vendors trying to represent only and not warrant anything for the future; I would not let them strike that. They should represent that they have IP ownership now and warrant that there will be no patent infringement in the future, because it really is their responsibility to maintain that. Also include under reps and warranties that they have adopted, and will maintain going forward, HIPAA-compliant designs and procedures. Make sure the role of the AI is very clear in the materials, that there are requirements for provider training, and that it's very clear to you in the contract how much liability you would carry for using this AI, and that you're comfortable with whatever amount of liability you've decided to accept.

Okay, we're going to go through this last part quickly, because we only have five minutes. I did a better job getting through the material this time, though. We've talked a lot about bias and the dangers of having it in AI and algorithms. To be clear, it's not about deliberate human choices. It's a design flaw, or a flaw based on the data or training the AI receives, which then gets incorporated because these are learning models: you feed it only a certain type of data, it learns, and it continues to incorporate that learning for future cases. The different ways bias comes in are listed here. The one I just talked about is AI trained on only one specific type of data, which makes it less useful for other populations. That's a big part here: when you're looking at a tool, you really want to ask about and scrutinize the patient population. What patient population are you using it on? Was it trained on that population? Are there differences?
And how accurate is it for one population versus another? An example to note: early facial recognition was about 98% accurate for white men but only about 70% accurate for women of color. You can see how something like that, translated to, say, what counts as a normal blood pressure reading, could really skew your evaluation of the patient. Another thing that tends to happen is that a lot of the data that's openly usable, like de-identified Medicare data, is skewed toward a population of a certain age. Those are the things you really want to ask about, so you know whether something is baked in that needs to be trained out.

OCR, the Office for Civil Rights under HHS, has set out what a provider is responsible for in addressing bias. It can be flexible based on the scale of the provider and its operations, but ultimately, if you're trying to say you don't know, they need you to get up and know. They put a responsibility under their rules on providers to ask about bias based on race, color, national origin, sex, age, or disability, and if you know, or otherwise should have known, they're basically saying you need to follow up. They list outlets where any provider can educate themselves, from federal rules and bulletins to the Agency for Healthcare Research and Quality to medical journals. And if you were going to get a fine, these are the sorts of things they would take into consideration in setting the level of punishment: your size and resources, whether there was any tool or attempt to comply beforehand, and whether you actually asked the developer for product information on the front end.

OCR expressed strong support for the National Institute of Standards and Technology's framework for artificial intelligence, which is on this last slide; you can access it at the link in the materials going out to you. It's not binding law or required, but OCR thinks it basically gives you a good compliance framework to root out bias and show efforts to comply under their rules.

So I think that is mostly it. I'm surprised I made it through everything. I know it's a lot to take in. Again, please follow up with any questions; I will respond, I promise, and I appreciate everybody's time. If you want to discuss this more, I'm more than happy to do so with you all at another time.

Awesome. Thank you so much, Rachel, for your time today and all this interesting information that our members can use in their practice. We don't have any questions in the chat today, so I will say thank you, Rachel, for sharing. Please reach out to her with any questions. As I posted in the chat, the recording will be available within the next few days, and we will send out an email with the recording as well as the presentation Rachel has beautifully made. Thank you to all our attendees for joining us today, and thank you so much, Rachel, for the presentation.
I hope you guys all have an amazing rest of your day. Yes, thank you. Bye.
Video Summary
In this comprehensive presentation, healthcare attorney Rachel Carey discusses the integration and implications of artificial intelligence (AI) in the healthcare sector, focusing on compliance, practical applications, and regulatory challenges. She highlights the critical role AI plays in various healthcare functions, such as diagnostics, billing, and patient management, while emphasizing the importance of informed consent and regulatory oversight.

Carey addresses different types of AI, distinguishing between non-generative and generative AI, with specific focus on ambient listening devices and their implications for patient privacy and data security. She also outlines the evolving regulatory landscape, including past executive orders and state laws, emphasizing the need for healthcare providers to adapt to these changes proactively.

The presentation underscores the challenges of AI bias and the responsibilities of healthcare providers to monitor and mitigate it, especially given the potential for legal and ethical liabilities. Carey provides practical advice on selecting AI vendors, ensuring compliance with HIPAA and other legal obligations, and negotiating contracts that protect provider interests. Overall, the presentation serves as a guide to navigating the complex AI regulatory environment in healthcare, urging stakeholders to stay informed and exercise due diligence in their AI implementations.
Keywords
artificial intelligence
healthcare sector
regulatory challenges
patient privacy
AI bias
HIPAA compliance
data security
informed consent
healthcare providers