AAOE Virtual AI Summit
Keynote
Video Transcription
All right, welcome, everybody. That should help; sorry if you heard a little echo in the background. Thank you so much for joining us today. This is AAOE's Virtual AI Summit. My name is Jessica Thornburg, and I am the education manager for AAOE. Before we begin, I want to take a quick moment to thank the sponsors for today's event, without whose support it wouldn't have been possible. Some of our session sponsors for today are AI Health, Gale AI, iScribe, Social Climb, and Whiteford. The next sponsor I want to introduce is Molly Van Oort, who you can see on the call here. Molly is with NextGen, our title sponsor today. So, Molly, I welcome you to say a few words.

Sure. Ladies and gentlemen, hello and welcome to the AAOE Virtual AI Summit. I'm Molly Van Oort, director of specialty solutions at NextGen Healthcare. It's my honor to be here today as we discuss the transformative potential of artificial intelligence in the field of orthopedics. We are on the brink of a new era in which AI is poised to revolutionize the way we diagnose, treat, and manage orthopedic conditions. First, let's take a moment to reflect on the incredible progress of recent years. AI has already begun to make its mark in orthopedics, with applications ranging from imaging and diagnostics to surgical planning and rehabilitation. Machine learning algorithms are now capable of analyzing medical images with remarkable accuracy, enabling earlier and more precise diagnoses. This not only improves patient outcomes but also reduces the burden on healthcare providers. AI is also playing a crucial role in personalized medicine. By analyzing vast amounts of patient data, AI can help tailor treatment plans to the individual needs of each patient. This approach not only improves the effectiveness of treatments but also minimizes the risk of complications and side effects.
However, as with any technological advancement, there are challenges to overcome. One of the primary concerns is the ethical implications of AI in healthcare. Ensuring patient privacy and data security is paramount, and we must be vigilant in addressing these issues as we continue to integrate AI into our practices. Additionally, there is a need for ongoing education and training, like today's webinar, to ensure that healthcare professionals are equipped to work alongside AI technologies effectively. Despite these challenges, the future of AI in orthopedics is incredibly promising. As we continue to innovate and refine these technologies, we look forward to a future where AI not only enhances our capabilities but also transforms the way we deliver care to our patients. By embracing these advancements, we have the opportunity to improve patient outcomes, reduce healthcare costs, and ultimately enhance the quality of life for countless individuals. Thank you for your time today. I look forward to the insightful discussions and presentations that will follow. Together, let's explore the future of AI in orthopedics and unlock its potential. Thank you.

Thank you so much, Molly. Before I introduce our keynote sponsor, who will be introducing our keynote speaker, I want to cover a couple of quick housekeeping notes. Throughout today's event, the Q&A box is available for you to ask our speakers questions during the sessions. We'll handle most of those during a Q&A segment at the end of each session, unless a speaker sees something timely and wants to address it earlier. So make sure you're using that. And if you use the chat function, make sure to select "Everyone" so we can all see your messages to each other. With that said, I want to introduce Laura Potta, who is with Infinx, our keynote sponsor today.
Laura, welcome, and thank you so much.

Hi, everyone. Let me see, turn on my camera. How are you? Thank you, everyone, for joining today. I'm excited to introduce our next speaker, Samantha Towler at TOA. But just to give you some information about myself: I am Laura Potta with Infinx Healthcare, where I oversee our customer success department. Infinx Healthcare is an end-to-end revenue cycle company, and we work with Samantha Towler at Tennessee Orthopedic Alliance. They've been a longstanding, amazing client of ours. Sam is the MRI patient services supervisor at TOA and has been with TOA for the last seven years. We've had a great relationship and partnership with TOA, and she's been fantastic in helping get some of these AI initiatives set up with her team as well as ours. She will be speaking on how our AI-powered patient access software has helped her practice, especially on the MRI side of things: reducing front-end denials, increasing revenue and reimbursements because of that reduction, increasing staff productivity, and improving overall patient and physician satisfaction scores. So with AI, obviously, we can do a lot. AI cannot replace a human by any means, but it can definitely help with workflow and throughput. So, without further ado, I'd like to introduce you to Sam, and I'll let her take it from here.

Sorry, Laura, Sam's session is actually later. This is Matteo's session, the keynote session.

I'm sorry.

You were on such a roll, and I didn't want to interrupt.

Matteo. So, okay. I'm sorry. Well, let me start over, because I thought that Sam was next. Nothing like virtual meetings. Well, I already gave an intro for Sam; I'll do it again when she's on later. But Matteo, I'm pleased to introduce you and excited to hear what you have to say.
We're just excited to be AAOE sponsors and to help with initiatives like this. AI in the orthopedic space is huge. It's growing, and it's definitely a buzzword within the industry. It's a lineup of great presentations today, a lot of good stuff on the agenda. So, excited. Apologies that I went out of turn.

No, we're just excited to have you here. Thank you so much, Laura, and thanks for your sponsorship. We're excited as well from the AAOE standpoint. And Matteo, I'm going to pass it over to you to start your presentation.

Sure. Well, hi, everyone. Greetings from sunny Munich at the moment. I'll quickly start sharing my screen, as it's already got the content we need. All right. I will not be able to see you, I don't think. So, welcome to today's keynote session. My name is Matteo. As you can tell, I'm Italian, but my accent certainly is not; I'm basically Irish-Italian. I've been working in the field of design and innovation for just over 20 years now, predominantly on the consumer side, and I moved into pharma and healthcare in the last eight to nine years. I focus a lot on user research and concept validation for new business ventures, typically digital ones. The things I typically work on are exploring, defining, and validating the new businesses and services that tech companies tend to offer. If you were ever to work with me, this might manifest as large-scale international user research studies with patients, customers, physicians, nurses, you name it, or as innovation workshops that follow either a Google design sprint or a lean startup approach. Currently, I work at Siemens Healthineers as a senior design strategist working on new digital ventures.
I'm heavily embedded with the UX department, which sits under technology excellence, and with the strategy team. Previously, I was at Accenture, and before that at Amgen and a few design agencies such as Pilotfish and Design Affairs. For today's conversation, just to give you some probing ideas, you can ask any questions in the chat, not just the ones we have here. For example: Where could I start with generative AI, or AI in general, for my personal physician activities and responsibilities? What key skills and roles do I need as a healthcare professional to leverage AI? That's a good one. How do I address healthcare disparities and improve healthcare equity? And how do I help my organization stay ahead of the curve with AI, since it's quite a fast-moving topic?

So let's begin with some of the challenges that set the scene. These challenges are probably familiar to you, they're no surprise, but just to drive the point home: radiology typically faces quite a large shortfall of staff. And this is not just radiology; it applies to a lot of physicians and nurses, with ever-increasing demands on healthcare providers, infrastructure, and industries. Generally, there is a shortage of staff everywhere; we're seeing the same in cardiology and elsewhere. This can be compounded where staff are not trained in the same manner, or are not incentivized to stay for a long-term career. For example, we are now seeing doctors get burnt out and eventually drop out of the healthcare system altogether. That brings me to the next topic: about 63% of physicians report having one or more burnout symptoms.
This is, of course, also quite prevalent with nurses and technicians, or technologists, if you're familiar with that terminology. Generally, it's widespread, and it exacerbates the previous point.

So let's have a look at some of the potential of AI in healthcare. What I wanted to do here was select some interesting use cases that are already proving successful, to show you that this is not all just hype. It can be in some areas, but generally there are a lot of positive results coming through; depending on how the AI is implemented, rolled out, and tested in a pilot, it can prove quite successful. For example, AI has reduced emergency room waiting times by roughly 20% and increased patient satisfaction. We've seen quite a lot of studies, not just from the Mayo Clinic but from others, including Siemens Healthineers, where AI is used to triage certain symptoms, manage scheduling, and so on, and ultimately to engage with the patient, for example through a patient portal app.

Then there's early cancer detection, and I think this is probably the main focus for the healthcare industry. AI detects early-stage lung cancers, and not just lung cancer: we're also talking mammography and other areas, even coronary calcifications if you're looking at heart-related topics, with quite a good level of accuracy, around 94%, outperforming junior residents, for example. It can even outperform a traditional radiologist or physician. And you can look at a few case studies or pilots that have been done. One is by Transpara, where they've shown in the Scandinavian countries that cancers can be detected about 20% earlier, which has also reduced burnout rates for physicians and reduced stress and cognitive load.
Then, on enhanced diagnostic accuracy, another example of AI being used in imaging and diagnostics is diabetic retinopathy. We see an accuracy of about 94% when it's deployed, compared to 91%, which is not dramatically different; the point is that it performs at the same level as a typical physician. This is in collaboration with Google; the Gemini and PaLM models they're using are quite effective at delivering those results.

Then, for personalized medicine, AI is being used to tailor cancer treatments in the planning stages. This comes just after the report has been compiled and we effectively have a diagnosis. We see an increase in treatment efficacy of about 30% from that personalized medicine approach: understanding the patient context, how they have reacted to certain drug treatments, and their history. This has been demonstrated by Memorial Sloan Kettering Cancer Center, which has shown a 30% improvement in treatment when this is applied in planning.

Then, of course, symptom trackers, or symptom-monitoring AI, have been employed for triaging but also for monitoring, where they can quickly detect early symptoms or early complications and bring patients back into the system. This is also a way of engaging with the patient, monitoring them, and making sure that adverse events don't happen too late and uncontrollably. A startup called Babylon Health effectively pioneered this, not only in the UK but also in Rwanda.

Last, but by no means least, is triaging for hospital resource management. AI here has been used across several different use cases and vendors, and there are some case studies.
One was by Siemens, using CrewPlace to reduce resource utilization. Here you typically use a combination of monitoring the patient, the staff availability, and the scanner to reduce staff utilization. So that's just to show you that AI is being quite heavily utilized and is showing good efficiency gains, both clinically and operationally.

This brings us to a quick landscape overview. I've simplified it; of course, it can be a lot more complicated. In this end-to-end value chain you've got triaging, exam preparation, image acquisition, reading, reporting, and treatment and follow-up, as well as a backbone of constant administration and clinical operations. What I have identified, through various collaborations both at Siemens and externally with the Board of Innovation and other partners, are the following opportunity areas. First, clinical context: understanding the patient history, perhaps seeing a pattern based on previous reports, and also based on the patient's demographics and the general symptoms they have reported. Then there are note-taking agents, which are a very common theme now with generative AI; these LLMs act as agents to facilitate certain activities or transactions. There are also educational agents that can support physicians in upgrading their skills. A similar activity, but many different types of agents; this is called a multi-agent approach. Then, assisted image acquisition.
Of course, this is where traditional machine learning and deep learning play a role in helping technologists or technicians acquire a more appropriate image for a physician, cardiologist, or radiologist to read. Then there is typically post-processing, which might help in segmenting that image or otherwise processing the study. And then reading and diagnosis: a lot of AI is used there, not only to provide differential diagnoses and to detect findings, which is typically done with image processing, but also to highlight and prioritize those findings, and to standardize how people report within a department, for example, the variation between how I might report and how you might report. So it reduces the variability of reporting across physicians, radiologists, and healthcare practitioners generally.

Peer-to-peer communication is also something I've seen a lot of startups take on, with predictive analytics and some machine learning. They're looking at supporting physicians with questions like: how do I present this case, or my list of patients, to a tumor board, and how do I go through it efficiently with the right images, the right reports, the right evidence, while making it interactive? Patient communication is a huge one; another type of agent, which can facilitate anything from nutritional advice to drug adherence, anything along those lines. And, of course, personalized treatment.

Then we come to the more operational side, the items in orange. These could be orchestrating the scheduling and giving you clinical insight into how you, your team, your department, or even a specific scanner or device is performing. And fleet management here as well.
We've also seen AI being rolled out in the back end of IT systems and in reimbursement and billing, which is a huge burden on a lot of patients and physicians.

So, opportunity number one, I would say, is leveraging patient history and data. Typically there's a lot of incomplete data: radiologists report that about 20-25% of reports lack enough information, both before and after reading. This can affect a lot of different topics, but effectively physicians don't have the time to consult multiple reports or multiple scans, and of course that compounds the whole situation further. Possible solutions we're already seeing from different vendors, both EMR vendors and diagnostic imaging vendors and others, include something that summarizes the patient context and provides an interactive dashboard you can query: something very easy to read, quick, and that gives you a certain level of confidence about why the patient has come in and what you might need to follow up on, such as additional pathology reports, tests, or scans.

The next one would be a symptom tracker: leveraging patient history and symptoms as patients report them, either in the emergency room or at home, to make sure you personalize the protocols as you move into the diagnostic side of the value chain. You can also tailor the scanning protocols to meet the requirements; if the patient has a pacemaker, for example, and it's been detected in the dashboard, the protocol can be adjusted automatically. Companies like Ada Health, and DeepSea is a good one, are doing exactly this. Among the potential benefits of this approach, early detection and reduced workload are the main ones.
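[Editor's illustration] To make the patient-context idea above concrete, here is a toy sketch of rolling prior reports and device flags into one quick summary for the reading physician. The data structure and field names are invented for illustration; they are not from any EMR or vendor mentioned in the talk.

```python
# Hypothetical patient-context roll-up: collapse prior report
# impressions and known implanted devices into a single summary,
# including a protocol warning (e.g. pacemaker -> adjust MRI protocol).

def summarize_context(reports: list, devices: list) -> dict:
    """Build a quick-scan summary from prior reports and device flags."""
    # Keep only impressions that actually carry a finding.
    findings = [r["impression"] for r in reports
                if r["impression"] != "unremarkable"]
    return {
        "prior_studies": len(reports),
        "notable_findings": findings,
        "protocol_warnings": (["adjust-mri-protocol"]
                              if "pacemaker" in devices else []),
    }

summary = summarize_context(
    [{"impression": "unremarkable"},
     {"impression": "small meniscal tear"}],
    ["pacemaker"],
)
```

A real system would pull this from structured EMR data and free-text reports via NLP; the sketch only shows the shape of the output a dashboard might query.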
Companies like Epic and DeepSea are using this to manage physician workload and prioritize which patients need to be seen first. If someone comes in with heart issues, you of course need to understand how severe and urgent that is, depending on the different pathways. And it expedites not only triage but also exam preparation. Ada and K Health also use LLMs and NLP to interpret symptoms, triage patients, and provide recommendations for follow-up further along the pathology and value chain.

The next opportunity is leveraging the operational data that is often embedded in different systems, scanners, and EMRs, and automating as much as possible. Scheduling scanners is quite a logistical challenge; it's actually a huge burden for a lot of physicians and nurses. For example, an MRI center can lose about $300,000 per year per MRI scanner from no-shows alone, from patients simply not showing up. So AI can be used both to engage with patients and gauge how likely they are to be a no-show, to help ensure they turn up, and to schedule and reschedule as needed: automated scheduling based on staffing availability, pairing the patient's engagement and availability with the staff and the appropriate scanner. Good examples here are Notable Health and Siemens Healthineers' CrewPlace Enterprise options. And then, as mentioned previously, we have protocols that use and leverage previous data, very similar to the previous point, but I wanted to repeat it here; companies like DeepSea are a good example.

And then the second-to-last opportunity: augmented human decision-making.
This comes into the part of the value chain where we're talking about reading and reporting. Reading quality is typically impacted by communications, by the number of cases, and by the volume of imaging. Physicians, and specifically radiologists, are often overwhelmed by a lack of information about the patient: they sometimes have to do a reading with no prior report, or have difficulty accessing a prior report or context about why the patient was sent in. They might also be disrupted by having to do an intervention, for example, or by colleagues, and lose track of their train of thought or whatever they were reading at the time. And of course the sheer volume and complexity of cases are increasing, as is the amount of screening, so the volume is dramatically impacting this.

Possible technology solutions, which we're already seeing: AI prioritizing the worklist based on post-processing of the actual study. This could mean sending it through multiple AI algorithms; for example, a knee exam could be sent through both a bone AI and a ligament-or-tissue AI, which can then interpret multiple topics. Then automated finding detection and segmentation: being able to detect fractures, calcifications, or cancer nodules, for example, and annotate them very clearly and visually for the radiologist, so that they can simply verify or double-check the results. Companies here: Transpara is a good one for mammography, and RedBrick AI is another good one, specifically for spinal and bone-related topics. Among the benefits, improving early detection is a major one.
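[Editor's illustration] The worklist-prioritization idea above amounts to re-sorting a reading queue by AI-assigned urgency. The following sketch uses an invented urgency table and invented study fields; it only shows the sorting logic, not any vendor's actual scoring.

```python
# Hypothetical radiology worklist: each study carries an AI-assigned
# finding flag; the queue is re-sorted so suspected-critical cases
# reach the radiologist first, with wait time as a tiebreaker.

URGENCY = {"suspected-malignancy": 3, "fracture": 2, "none": 0}

def prioritize(worklist):
    """Sort studies by AI urgency (desc), then by wait time (desc)."""
    return sorted(worklist,
                  key=lambda s: (URGENCY.get(s["ai_finding"], 1),
                                 s["minutes_waiting"]),
                  reverse=True)

studies = [
    {"id": "A", "ai_finding": "none", "minutes_waiting": 90},
    {"id": "B", "ai_finding": "suspected-malignancy", "minutes_waiting": 10},
    {"id": "C", "ai_finding": "fracture", "minutes_waiting": 45},
]
order = [s["id"] for s in prioritize(studies)]
```

Here the suspected-malignancy case jumps ahead of a routine study that has waited longer, which is exactly the trade-off these tools make.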
The number of times patients get a delayed diagnosis in mammography is quite severe, because breast cancer can progress quite quickly if it's not caught early enough. So of course you can prevent that, improve your own and the department's reputation, and increase satisfaction from the patient's point of view. Then there's decision support and report standardization, which I mentioned previously: you want to reduce the variability in how I write a report and make sure things are presented in a similar way, so that whoever has to read it can carry on from it. It can also help with differential diagnoses: you might query what comorbidities or previous issues the patient had in previous scans and reports, and get support for a differential diagnosis as well.

And the last one: elevating patient experience and support. Patients are typically quite unfamiliar with healthcare terminology, and about 45% of patients request additional information. This typically means they will go to a radiologist or physician for a follow-up even though the report might say no findings, or unremarkable findings; they want to understand why they've been asked for a follow-up if nothing has been found. What we're already seeing here as possible technologies are clinical note-taking patient agents: effectively LLMs that transcribe and summarize the dialogue between physician and patient. This could be the GP or the radiologist discussing the results with the patient, for example, with the agent supporting that conversation and communicating it directly to the patient, and maybe to the referrer as well.
So you can see things that have been sent to your GP as well; a good example here is Abridge. Then clinical administrative agents. There's also a shift from individual agents to multi-agent systems, where a chain of mini-agents does multiple tasks for you end to end. These are already starting to appear: Hippocratic, for example, has a patient-facing clerical agent that will discharge you from a ward, provide you with your medication, maybe even offer medication-adherence reminders and nutritional and physiotherapy advice, and perhaps handle reimbursements and complaints. The next step is to connect some of these agents together, especially the ones that are very adjacent, and Hippocratic already offers a very nice example of where that will probably go. The benefit of elevating patient communication is streamlining the actual communication with the patient, which can at times be fragmented and confusing. Having something ready to hand, easy to use, and effectively controlled, with the right training, helps both physician and patient: eliminating manual note-taking and keeping patients informed, which improves satisfaction and maybe adherence as well. And the last one, again, is automating the more mundane tasks: referral letters, billing, or discharging a patient from a ward.

And now I come to the end of the presentation. I've been working with a few external people, as well as other companies, and I've developed my own version of a code of ethics. I have worked a lot with Merck in Darmstadt, Germany; Merck, as you might know, is a pharmaceutical company.
They have a great drive and passion for digital ethics, and I attended one of their events last year, their first one ever, actually. This code has been inspired by that experience, as well as by some of my own experience on different AI pilots at Siemens.

The first principle, I would say, is patient privacy and confidentiality. It sounds like an understatement, but it's amazing how often things are not anonymized and protected enough, informed consent being one of them. We sometimes assume there is informed consent, and that also needs to be automated and considered carefully.

Transparency and explainability. With generative AI specifically, and LLM models, this is perhaps the most obvious topic at the moment: communicating the limitations. For example, if you type into ChatGPT a request for something effectively illegal, say, how do I rob a bank, it will simply not provide a response. That level of throttling, so to speak, is both communicating the limitation and applying it. We see that with ChatGPT, where it says it's not able to provide information on that topic, and it can even notify certain individuals. And then, ensure the models can be explained. For the most part, machine learning is pretty much one-to-one: you put in some rule or coding, and it's pretty clear what the output is. But with deep learning, and even more so with LLMs, it becomes much harder to explain. That needs to be considered by both providers and manufacturers.

Bias and fairness. Monitor biases; these typically come from the training data, however you built the foundation model, or if you're training the foundation model or a local LLM yourself.
This is a key topic to keep in mind, because it's quite easy to have a dataset that represents a typical male in his mid-thirties and not consider or factor in gender, societal topics, and population demographics.

Accessibility is another one. Typically these AI services are only accessible to the wealthier demographic, and I think it's quite important to change that. One of the good examples I've seen is a few hospitals using AI specifically for lung cancer screening, where they are targeting specifically the underprivileged, the less affluent demographic of society. That's a really good raison d'être, a good reason for being, in terms of doing good in the world, and it connects to the other principles as well.

Clear accountability. Sometimes it's not very clear who is responsible: where the lines of responsibility lie in decision-making. That needs to be quite clear if you're building products, systems, and services. And auditing: logs for auditing are actually quite important, specifically if you want to be ISO-compliant for certain AI systems.

The last two, I would say, are the ones closest to my heart. First, being ethical, which means augmenting human decision-making: rather than replacing physicians, aim to do good. Understand how this could impact the job of a nurse, a physician, a technologist, what would motivate them to use it, and what would motivate them not to. The second is human-centered by design: try to tackle the real problem rather than provide a glitzy, glamorous, very trendy solution that does not stick in the end, and make it usable from the very beginning.
ChatGPT, I think, was a great example: it was incredibly usable, but it was very unclear from the beginning what problem it was trying to address. So again, part of that requires some exploration, but that is part of it. I will stop sharing. Hopefully I have not lost you here, and if there are any questions, I'm happy to answer.

Thanks, Matteo. I'm going to give it just a few minutes for any questions to come through, but I do want to mention that while you were presenting, some attendees submitted in the chat the things they're looking to solve with AI. Some of the examples are: automated coding of notes to produce claims and coding; along with that, catching missed documentation or missed CPT codes from an encounter; clinical documentation improvement; and scheduling, insurance eligibility verification, and prior authorization. And I see that somebody has raised a hand; if you can drop anything into the Q&A or the chat, that would be really helpful.

So, should I try to answer some of those?

I would just say, if you're seeing anything on the horizon in particular that maybe you didn't cover in the initial presentation, it'd be great to have your perspective, especially from the global point of view, because you're seeing things at a little bit of a different level than we are here in the States.

Yeah. So, if I remember, there's one called, I think, Glass AI, don't quote me on it. But basically there are already LLM models that will take in the ICD-10 codes, the actual codes you generate when you're doing a specific healthcare activity, and prepare that for reimbursement by filling in a form. And then there's another one called Corti, I think it's Corti AI.
It might already be in my presentation, actually, to be honest, because I remember, yeah, Corti AI. So that's a good example of using those codes and automating the billing and reimbursement process. At the moment, I'm not working on that at Siemens. I know Epic is working on that, at least they publish topics on it, because of course it's tied into their system as well. So there are different players trying to do that, and it depends at what scale you're trying to do it: is it more of a local practice or an individual level, or is it more at a large hospital level? I'm not as experienced in reimbursements, but this probably goes back to the meta topic: where Gen AI and specifically LLMs come into play, I think, is on the administrative side. If you're looking at LLMs, the greatest benefit will be on those administrative tasks, not on the clinical ones, such as detection of pathologies and annotation, segmentation, and measurement of those pathologies; that will typically be machine learning and deep learning. That's where I see players like Microsoft potentially creating a brand new industry, maybe even more around operational and administrative topics, because that has always been somewhat out of focus. It's not the glamorous side, but it's the side that most physicians and patients, I believe, struggle with the most. When you talk to a physician, at least the ones I've talked to over the last decade, what they typically complain about is the administration. I remember one statistic: 40% of their day is spent reviewing EMR reports or EMR software, so medical records, and looking through patient history data.
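The coding-to-reimbursement step described above can be sketched roughly like this. This is a minimal illustration only: the fee schedule, charges, and claim shape below are invented for the example and do not come from any real payer or from the vendors mentioned, and a real pipeline would generate the codes themselves from the clinical note.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    patient_id: str
    icd10_codes: list  # diagnoses documented during the visit
    cpt_codes: list    # procedures performed

# Illustrative stand-in for a payer's fee schedule (made-up charges).
FEE_SCHEDULE = {"99213": 92.47, "73560": 41.20}

def build_claim(encounter: Encounter) -> dict:
    """Assemble a reimbursement claim from the codes an encounter produced."""
    missing = [c for c in encounter.cpt_codes if c not in FEE_SCHEDULE]
    if missing:
        # A human (or the model) must resolve codes the schedule doesn't cover.
        raise ValueError(f"No fee schedule entry for {missing}")
    return {
        "patient_id": encounter.patient_id,
        "diagnoses": encounter.icd10_codes,
        "lines": [{"cpt": c, "charge": FEE_SCHEDULE[c]} for c in encounter.cpt_codes],
        "total": round(sum(FEE_SCHEDULE[c] for c in encounter.cpt_codes), 2),
    }

claim = build_claim(Encounter("P-001", ["M17.11"], ["99213", "73560"]))
print(claim["total"])  # 133.67
```

The point of the sketch is that the form-filling itself is deterministic once the codes exist; the part an LLM would take over is extracting those codes from free-text documentation upstream of this step.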
And another 20 to 30% on top of that is administrative tasks as well. So you could say they're not really there doing purely clinical work; they're doing quite a lot of administrative tasks, and the whole goal is to reduce and automate that. Awesome. And just to note regarding the chat: because of the way this event is set up, there were actually two simultaneous chats happening, and we moved everything to the Zoom chat, because from a logistics standpoint we were getting a little feedback when we tried to monitor both, and I didn't want the recording to be compromised. So if you dropped any questions in the other chat, if you open up the actual Zoom session you should be able to see the chat in Zoom, and that's a good place to put things. We'll do that for all the sessions moving forward as well. I also want to share that in the session itself, we have the presentation that Matteo put together as a downloadable file, so anything that was referenced or linked within the presentation, you'll have access to click and check out as well. Maybe then, just to take it again at a broader level: I'm currently involved a lot with NVIDIA and another provider, looking at different topics like manufacturing, for example how you use Gen AI or LLMs there to update and maintain machinery and to send out technicians. There are a bunch of use cases there too. The way to look at it is: there's always going to be the clinical side, and there's going to be the operational clinical side, how you manage your clinical team.
And then there's going to be the administrative IT side, where you've got infrastructure. Those are the three layers I always think about: the front end and the back end of a stage, basically. The front stage of a theater, you could say, is where a physician practices their art, and of course that's where there's a lot of visible value, but you're going to get just as much in the backstage. And this is where I see a lot of opportunity, perhaps more in the mundane, boring tasks, because those are traditionally quite small. Taking some codes and filling a form for reimbursement is a relatively simple step that a large language model can actually do. Protocol selection, for example, is also a good one. There are a bunch of use cases and providers that will take the patient history, look through it, and get a certain level of understanding: okay, this person has a pacemaker, therefore I can't use protocol A, I need to offer only protocols B, C, and D. That already streamlines the physician side, for example where the technologist at the scanner is sometimes not as familiar with that technology as a radiologist or a more senior physician. I think that's a really good example of tying the front and the back together. And there's always this skeptical side to using AI, specifically LLMs and ChatGPT. What I've noticed is that, I mean, we've spoken to doctors who are using it for research purposes as well, sometimes worryingly and sometimes in a nice way, where they're using it to actually explore differential diagnoses, or even to review what has been written and rewrite their own work.
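The pacemaker example above, filtering out protocols a patient's history rules out, can be sketched as a contraindication check. The rule table and patient flags below are hypothetical; in practice the flags would be extracted from the EMR (possibly by an LLM over the history) rather than hard-coded.

```python
# Illustrative contraindication table: protocol name -> conditions that rule it out.
CONTRAINDICATIONS = {
    "protocol_A": {"pacemaker"},         # e.g. an MRI sequence unsafe with implants
    "protocol_B": {"contrast_allergy"},  # e.g. a contrast-enhanced protocol
    "protocol_C": set(),
    "protocol_D": set(),
}

def eligible_protocols(patient_flags: set) -> list:
    """Return protocols whose contraindications don't intersect the patient's flags."""
    return sorted(
        name for name, contra in CONTRAINDICATIONS.items()
        if not (contra & patient_flags)
    )

print(eligible_protocols({"pacemaker"}))
# ['protocol_B', 'protocol_C', 'protocol_D']
```

The value for the technologist at the scanner is that the hard part, knowing which history items matter, is encoded once, and the system only ever narrows the menu; the final choice stays with the clinician.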
So you can have everything from very simple cases to very complex, more clinical ones, but there's always this question: will AI replace physicians? From the pilots I've run and the various fairs I've attended, there was one last year, RSNA, the Radiological Society of North America, it was quite clear that AI, certainly machine learning and deep learning, is not at the level to replace physicians. It's too specific: it does one very small task, more or less 80% of the way, and generally it's not ready to do the full thing. But the comment that kept being passed around, which a lot of doctors made, was that those who are not using it in their practice in the next five years will be losing out on a lot of value, because they will still be doing those relatively mundane tasks. For example, lung nodule detection is quite an important one: it's an important clinical activity, but detecting nodules by reading lots of CT images is very time consuming, a mundane exercise that does need to be done but still takes time. If you can automate that and allow physicians to do, say, three or more interventional procedures that day, first of all you're increasing the revenue of that team, that provider, that hospital, because the hospital gets higher reimbursements on interventions than on reading a report. And you're also able to cover much more of your local demographic: if you can triage and then detect, you can in theory screen more of your population, and maybe even detect things earlier. So there are a lot of advantages.
It's just going to take some time, but I feel there will be a cultural revolution here, just like we had with the internet and with industrialization; it will eventually take over. It might not happen as soon as people are saying. If you think of autonomous driving, when that came out people thought it would arrive in the next three to five years, and it's still taking 10 to 15 years plus. So it's on its way; we just always overestimate how soon it will happen. We had a question come in the chat, and I know that Kathy has addressed it too, but I think it would be interesting to hear your perspective as well, because you've been part of other startups: for organizations looking to deploy AI, what steps do you suggest leaders take to secure funds to experiment with AI? To experiment, well, the first thing is that not all clinical activities can be reimbursed, right? So the first step really is to understand in which area you're going to apply AI where reimbursement is covered. Mammography, in some states, is covered for the use of AI, and the use of ultrasound, ABUS for example; in some states it's not. So that is very specific: first map out the reimbursement policies that are available and what they cover before you do any AI activity. That's probably what I would do first. In terms of funding, this is probably a bit out of my area, because I'm not involved with government funding, even here in the European Union, but typically there is some funding support; if I think of the German government, there are grants and support measures in place, even expert advice.
I know one consultancy here in Germany that offers ISO support to get your product ISO certified; there's a specific ISO standard, I think it's ISO 42001, related to deploying and building AI systems. And that can be supported by the government here in Germany, at least in the sense that they give you a consultant and cover part of the rate; I don't know the exact deal, but effectively they do that. In terms of funding, I wouldn't really know, but in terms of proving it internally, say you have budgets at the hospital and you need to make the case internally: I would first see which policies can be reimbursed, then tie those with a strong argument to improving clinical accuracy or efficiency, general operational efficiency overall, or even better, something related to patient care, like earlier detection, higher patient satisfaction, or even physician satisfaction, something you can measure. We are doing that on a couple of pilots, where we're measuring physician cognitive load and seeing whether the stress and difficulty have declined with the use of AI. And we have seen some improvements; I think one pilot showed about an 11% reduction in cognitive load overall. For patient satisfaction, it depends on your turnaround time. For example, lung cancer screening in the UK takes quite long; I think they have a backlog of about a month, maybe even three months. There you could actually measure patient satisfaction: implement some sort of patient portal where, if patients get their report read by AI and a radiologist sooner, they can give you feedback. In the US, and maybe in Germany, the lead times for these reports are typically quite short.
I remember a couple of hospitals in the US that read a CT within three days. So the patient is really not going to see much benefit in turnaround. But what you might be able to support there is: can you convert that report into a nice report that the patient can understand and read? That's where LLMs can play a role, for example. I think there's a company called Rad AI; I saw them at the RSNA fair. They use LLMs in their reporting, and they have a foundation model they've trained; I think the foundation model is actually from Microsoft. They then have a local LLM that is trained on the reports of a given radiologist. So I'm John the radiologist, I send this company about a thousand reports, they train that model, and then every time I write a report, whether I dictate or do click reporting, it uses the vernacular and terminology that I use and tries to standardize it as much as possible. But you can also convert that into a patient version very easily, where another LLM basically standardizes the communication from a hospital or department perspective to the patient. And you have to be careful there; there's a caveat, which is that of course you don't want to automatically communicate anything serious. You don't want to say, you've been diagnosed with five lung nodules of five millimeters, or you have metastases in your bones. So it's usually, typically, no findings or an incidental finding of relatively limited clinical value: for example, a mild bone fracture probably caused by overuse or activity, something that wouldn't cause too much alarm to a patient. So again, you can use that.
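The caveat described above, never auto-communicating serious findings, is essentially a routing guardrail in front of the patient-facing rewrite. Here is a minimal sketch of that idea; the keyword list and the rewrite stub are illustrative placeholders and not how Rad AI or any real vendor implements it, and a production guardrail would use far more robust clinical NLP than substring matching.

```python
# Findings containing any of these terms are never auto-summarized for the patient.
SERIOUS_TERMS = ("metastas", "malignan", "nodule", "mass")

def route_report(finding: str) -> str:
    """Decide whether a finding may go to the automated patient-friendly rewrite."""
    text = finding.lower()
    if any(term in text for term in SERIOUS_TERMS):
        return "physician_review"   # serious: stays with the care team
    return "patient_summary"        # low-severity: safe to hand to the rewrite LLM

def patient_rewrite(finding: str) -> str:
    """Stand-in for an LLM call that rewrites clinician prose in plain language."""
    return f"Your scan showed: {finding}. Your care team will follow up if needed."

for f in ["Incidental mild soft-tissue swelling", "Two pulmonary nodules, 5 mm"]:
    route = route_report(f)
    print(f"{route}: {patient_rewrite(f) if route == 'patient_summary' else 'held'}")
```

The design choice worth noting is that the guardrail sits outside the LLM: the rewrite model is never even asked to summarize a serious finding, rather than being trusted to soften one appropriately.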
So, back to your question: try to prove the value, map out the reimbursement rates, be clear about what the problem is, and then bring in vendors. You're not always going to do it on your own. There are some very talented, very technical IT and radiology departments; I'm thinking of one, if they're watching now, the Essen Hospital in Germany. They have a very talented radiology team: they're radiologists, but with incredible programming ability, and their IT team is also exceptional. They've built their own foundation model and their own LLMs, and they're able to understand the patient context much more innately and quickly by dialoguing with a dashboard, for example. So you can do it with your internal team. Try to understand the problem, build that argument clearly, and tie it to some sort of KPI: productivity increase, improved or more consistent accuracy, or screening. Triaging, for example, is a good one; Transpara and Therapics will do triaging quite well, reading lots and lots of mammography exams and triaging them. In theory, those types of vendors give you broader access to your population, so you can screen more of it, which is great because your reimbursement rate goes up, but you're also doing good: you're detecting cancer earlier within your district or the area you cover. So there's always an argument to be built, and I would say, don't do it alone; although if you have a great team like the Essen team, fine, but even they are working with Siemens, for example. And I know another hospital, the Mayo Clinic, is working with Google on cardiac AI; I think there they're using mostly deep learning, from what they've published.
I think there are some LLM models there, but generally it's more deep learning, and cardiac related. So yes, you will have to strategically partner, and the way to strategically partner is to do a pilot. Make sure the scope is not too broad, because your expectations will be inflated and you will go through growing pains at the beginning. That's probably my biggest advice: always do a pilot with multiple vendors, and lower your expectations. For example, if your normal turnaround time is 24 hours, don't demand a 24-hour turnaround in the pilot. Ask instead: will the patient really be hit badly if they get it in 48 or 72 hours? Probably not. That's how you can reduce your expectations a little at the start. Then, once the system is up and running and the teething pains have gone, you can start to really refine the scope: okay, I want you to report all calcifications in the heart by location, and I want it in a nice, beautiful PDF within 24 hours or less. Then of course you can be more stringent. But the main issue you will always face, with any provider, whether it's your own team or an external partner like Accenture, BCG, or Siemens, is integration. There are a lot of integration issues. Products typically work great on their own, with the datasets they used in their lab.
If I think of certain AI companies doing chest CT or chest x-ray AI for lung screening, they test in a very, I would say not clinical, that's the wrong word, in a very sterile environment, with a specific demographic group at a specific hospital, and they prove that it's 89, 90% effective, but that was a very specific sandbox. Your sandbox and their sandbox might be totally different. Your patient demographic might be prone to certain exposures, for example certain pathogens, dust, or coal in the air. For example, in Milwaukee, in the Rust Belt in the US, some of those cities are prone to certain lung diseases that are not common in other parts of the US. So you have to keep that in mind. Awesome. I just want to be cognizant that we have two more minutes, but there was one more question in the chat: have you seen AI be able to fill discrete data fields in an EMR from ambient listening of an assessment, for those clinicians that have point-and-click EMR models? Yes. There is a prototype we saw from the Essen team that did exactly that. I don't know the context exactly, but they had an EMR system that was able to pull data from the patient records and history and basically provide a summary and an overview. We actually published this at RSNA last year, as a collaboration between Siemens and the Essen Hospital. I might be able to dig it up and put it in the PPT if I can find it, but it should be on the RSNA website somewhere for last year, 2023. But yes, it can be done. I think that's actually where LLM models have surprised a lot of people, because typically with machine learning you have to have a direct connection.
With LLM models you still have to connect them, but sometimes it's a bit of a magical black box, and that's why there's a problem there with transparency: we don't quite know how it does it all the time. So it can be quite surprising, but it's good that it can do that, because of course that's a major problem a lot of physicians have. Awesome. Well, Matteo, thank you so much for spending the time and going through all these concepts with us. A lot of what you talked about, I think we'll be getting into at a deeper level with other speakers presenting today, so it was a great kickoff for our event, and we especially appreciate you taking time in your evening, as you're six hours ahead of us. We'll be starting the next session here momentarily, which is AI and Orthopedics: Transforming Healthcare Practice with Dr. Grant Muller. So thank you, and we'll see you all in the next session shortly. Thank you very much. Thank you. Thanks.
Video Summary
In the video transcript, Matteo discusses the transformative potential of artificial intelligence (AI) in the field of healthcare, specifically focusing on orthopedics. He highlights how AI is revolutionizing the way we diagnose, treat, and manage orthopedic conditions. Matteo mentions that AI has already made significant progress in applications like imaging and diagnostics, surgical planning, and rehabilitation. He emphasizes that AI can analyze medical images accurately, leading to earlier and more precise diagnoses, thus improving patient outcomes and reducing healthcare provider burden. Matteo also addresses the challenges of integrating AI in healthcare, such as ethical implications, patient privacy, data security, ongoing education, and training for healthcare professionals. He suggests that the future of AI in orthopedics is promising, with the potential to enhance patient outcomes, reduce healthcare costs, and improve the quality of life for individuals. Lastly, Matteo outlines a code of ethics for deploying AI in healthcare, focusing on patient privacy, transparency, explainability, fairness, accessibility, clear accountability, auditing, ethical augmentation of human decision-making, and human-centered design. He also offers insights on securing funds and experimenting with AI, the importance of partnerships and integration, and the potential for AI to fill discrete data fields in electronic medical records (EMR) from ambient listening of assessments.
Keywords
artificial intelligence
healthcare
orthopedics
diagnostics
surgical planning
patient outcomes
ethical implications
data security
electronic medical records
human-centered design