AAOE Virtual AI Summit
AI In Orthopedics
Video Transcription
I'm just waiting for Grant to jump in. Well, for everyone to jump in, right, because none of the attendees are here yet. There we go. So Grant, I'm going to make you a co-host and then I'm going to start the webinar. Give me one second. I'm at my house, so I want to make sure my background is blurred, because there's junk behind me. Yep, it's blurred. All right. Awesome. Hello, everyone. We're going to give it just a moment while people jump in, and then I'll introduce our next speaker. All right, looks like we've got a lot of people in here, so thank you. Our next session is AI in Orthopedics: Transforming Healthcare Practice. We're honored to have Dr. Grant Moeller, the CEO of Gale AI, help lead this discussion. Dr. Moeller is going to be sharing valuable insights into the types of AI learning and their practical applications in the orthopedic practice. He's actually an orthopedic physician, so he's going to have some great insight for everybody. And so without further ado, I'm going to pass it over to Dr. Moeller.

Well, thank you for the introduction. That was a great talk by Mateo, and I think it's a good segue, because I'm going to dive more into the nuts and bolts of a lot of the topics he covered. I'm really going to give you an explanation of how AI works, but from a 40,000-foot view. I've given this talk to a couple of test audiences, and being here in Austin, Texas, I'm surrounded by Google developers and Microsoft developers, and they told me they picked up on everything. Some of my non-tech friends said they understood about 60% of the talk. So don't be afraid to stop and ask questions, because with some of these topics, like I said, I'm trying to break them down so they're really understandable from a non-technical point of view, and I'm using a lot of analogies and things like that.

Also, just a quick plug: I'm not really going to talk much about my company, but some of the questions asked earlier are exactly what Gale AI does. We read op notes and clinic notes, we do it in about three seconds, and we can give you the reasoning, your ICD and CPT codes, as well as CDI. So I was about to jump out of my seat when I saw that question asked. And from my experience of integrating into EMRs, it's not that hard to fill out some of these discrete fields. A lot of it comes down to the actual EMR, and some are more willing than others to allow you access to certain APIs and endpoints. It's really going to be EMR-specific as to what they're going to let you do.

But on that note, let me share my screen. Everybody should be able to see our title slide. As I went more and more into the nuts and bolts, I decided this is now called "AI as Explained by an Orthopedic Surgeon," so take that how you will. As Jessica said, my name is Grant Moeller. I'm an orthopedic surgeon, a self-taught computer programmer, and kind of an IT nerd my whole life. I've been practicing, and about four to five years ago I started this company. We've built it from the ground up. I'm all about process improvement, and there are so many things in medicine we can tackle; with all the tools we have at our disposal today, I'm doing my best to make everything more efficient and a little bit less painful.
And so for this talk, I've got a couple of learning objectives. We're going to talk about the different types of learning, including supervised and unsupervised learning. We're going to talk about problems with AI, and then there's some of it I'm not going to go into as much, because I feel like everybody else is talking about it. What I really like to do is teach you building blocks and then build on those building blocks toward bigger concepts, in the hope that this gets the wheels turning in your head and you can start to think: how can I use this for my own benefit in my practice?

And so when we think of AI, what do you think of? Because when I hear AI, I hear a lot of different things. I think of AI as machine learning. I think of it as natural language processing. I see computer vision, large language models, and generative AI. I'm going to go through all of these topics, like I said, as a brief overview, and then we'll dive a little more in depth. If you have any questions, please stop and ask.

So for our first topic, we're going to talk about machine learning. What is machine learning? Typically in programming, we give the computer explicit rules on how to behave and how to function, and if it encounters a scenario it's never seen before, it usually isn't going to function well. But with machine learning, we give a program an algorithm, and then we give it all these data points that have been labeled. We call that a data set. Some of these data sets are very precise, some are very large, but the model learns from these data sets. And really, a lot of it is prediction. Some people say machine learning is just fancy math or fancy statistics, but we've gotten to the point where it's more and more accurate and it can do more and more. We're going to talk about that on this slide.

With types of learning, think about machine learning almost like a child. When a child learns, you've got to give feedback so it knows what's right and what's wrong. With something called reinforcement learning, we give the model a reward, so it starts to learn: okay, if I do this differently, I get to the reward quicker. That's reinforcement learning. Also, when we're sitting there watching it, that's called supervised learning. If you're letting the model or the AI make predictions or choices, with supervised learning you usually have a user sitting there who can say, yes, you're right, or no, you're wrong. And what's crazy is a lot of people don't realize you've been doing this for a while, whether you knew it or not. You see this little comic on the right from XKCD that says, to prove you're human, click on all the photos that show where you would hide in the event of a robot uprising. Anytime you're using Google or Microsoft and it asks you to select the stop signs or the traffic lights to prove you're human, it doesn't decide you're human based on whether you get it right or wrong. What they're actually looking at is your response time and the movement patterns of your cursor; they can determine whether you're human from how you react to the question.
Now, with you selecting the stop sign and the red light, you're doing supervised learning, and you're giving their models validated data sets. You're saying, this is a stoplight; this is not a stoplight. I think everybody finds it interesting when they realize, oh, I've been training AI for a while. This is also part of something called human in the loop, where, like I said, we have real-time feedback from users. I like human in the loop because, to me, it gives me more trust in a model we've developed if I know someone has been overseeing it. I think you can develop a faster model with smaller data sets if you have somebody overseeing it. Some of these giant companies just have tons and tons of data, so they can do it faster, but for the little guys out there, it's all about speed, efficiency, and trust. And then, too, you can influence the model you're training a little bit, the way you want. In our company, I always tell our practices we can customize this to you: we can code how you want it to code. Not to get off on too much of a tangent, but this might give you some insight into our landscape, which you already know. A lot of times I'll have users asking me if it can automatically apply a certain modifier. Well, I have RCM staff and coders, and they can't even come to terms on when that modifier is supposed to be used. If humans can't agree on when we're supposed to do this correctly, how do you expect me to teach a computer? That's why I usually leave it up to the practice, and I say, we'll code how you want us to code.

Now, this is a graph on reinforcement learning. This is from Jonathan Boogan, our chief data engineer. In this picture, you've got the little stick person, and they're trying to find the pot of gold. When they first start, they're just wandering around aimlessly, and eventually they get to the pot of gold. Now, when the computer sees this, it says, okay, here's the direction I went and here's how close I was to the goal. You can see that as we get closer to the goal, the arrow gets bigger, meaning this was a more important step. Over time, the computer begins to really optimize this, and now it's not taking any wasted steps; it's getting to the goal as fast and as efficiently as possible. Like I said, that's one type of learning, called reinforcement learning.

There are other types of learning, like unsupervised learning, where you typically give a model just tons and tons of data. Think about if we looked at all the data on Amazon for a day and asked the model, show me the top five products purchased. We don't really need a human to go through all of that. It's just going to look at everything, rank it, run the statistics, and say, hey, these are the top five products purchased. That's unsupervised learning. We've also got something called self-supervised learning, and this is something I struggled a little bit to understand, but a good example of self-supervised learning is when you have a sentence missing a couple of words and you ask the model to fill in those missing words. What it's going to do is find those words, but as it's finding them, it's also teaching itself about those words and the relationships between those words and other data points.
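To make that fill-in-the-blank idea a little more concrete, here is a minimal Python sketch (my own illustration, not from the talk; the sentences are made up). The point is that nobody has to label anything: every sentence generates its own question-and-answer pairs by hiding one word at a time.

# A toy illustration of self-supervised learning: the training examples
# come from the text itself, by masking one word at a time.
sentences = [
    "the patient reported ankle pain after a fall at home",
    "the radiograph showed a distal radius fracture",
]

def make_masked_examples(sentence, mask_token="[MASK]"):
    words = sentence.split()
    examples = []
    for i in range(len(words)):
        masked = words.copy()
        masked[i] = mask_token
        # input = the sentence with a hole in it, target = the word that was hidden
        examples.append((" ".join(masked), words[i]))
    return examples

for s in sentences:
    for masked_input, target in make_masked_examples(s):
        print(masked_input, "->", target)

Large language models are pre-trained on essentially this trick, just at enormous scale.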
So with self-supervised learning, the model is learning as it completes the task. We also have something called few-shot, one-shot, and zero-shot learning. With few-shot learning, you would show a model, say, three pictures of a dog — maybe a golden retriever, a German shepherd, and a Rottweiler — and then you show it a couple of pictures of a dog and a horse and say, tell me which one is the dog and which one is the horse. With just a couple of examples, it can start to learn what a dog looks like. With one-shot, you show it one picture. This is typically used where there's less data and fewer examples available. You might show it one picture of a dog, then show it a picture of a horse, and it reasons, well, it's got a head, it's got a tail, it's got four legs, maybe it's a dog. And then there's zero-shot, where you have no examples at all. You just say, here are our guiding principles: dogs look about like this, they've got fur, they're usually smiling, there are four legs, give or take a leg, and a tail. Then you show it pictures and it tries to make a decision based off your guiding principles. So really, you don't give any examples, only ideas.

One of the last types of learning we're going to talk about is transfer learning, and I don't want to go too deep in the weeds, but this is what's really exciting to me. Transfer learning is where you teach a model, or you teach a robot, something, and then it can pass that knowledge to another robot or another model. Where this is useful is when you look at these robots from Tesla, Honda, and Boston Dynamics. You've got these robots that can run, they can dance, and it's like, how did they do this? Did we have the robot dance eight million times? Obviously not, because that's going to be a lot of power and a lot of energy. What they actually do is train these robots in a virtual environment, where they can simulate super fast, and once that model has learned, they transfer the model that's learned how to balance onto the actual robot.

So now that we've learned a little bit about different learning methods, now what? Now we're going to build a little bit on basic concepts like neural networks and natural language processing. This is a picture just to show you the power of some of these LLMs and LVMs: I used OpenAI, I typed "college baby diploma scared," and it generated this image in about 15 seconds, and it's not bad. The only mistake it made is that it's got two tassels, but maybe for baby college graduation they have two tassels. And so with neural networks — these are early networks for how machines and computers learn. I've got this picture on the right: you see a neural network on the far right, and you've got a neuron kind of in the middle. So what is a neural network? Well, in the picture on the far right, your data comes in, it goes through a couple of different layers, and then it's spit out. To me, it's like a neuron in the body: the nucleus starts the signal, the signal travels down the axon through the myelin sheath, then it gets to the axon terminal, where it usually performs a function — it passes to the next neuron, or the finger moves, or you smell something, things like that.
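To put a little code behind that data-in, layers, data-out picture — and behind the supervised learning idea from a couple of slides back — here is a minimal PyTorch sketch. PyTorch is my choice for illustration, not something named in the talk, and the features, labels, and training data are all invented.

import torch
import torch.nn as nn

# A small feed-forward network: data in, a few layers of "decisions," prediction out.
model = nn.Sequential(
    nn.Linear(4, 16),   # 4 input features, e.g. age, BMI, A1C, smoker (made up)
    nn.ReLU(),
    nn.Linear(16, 16),
    nn.ReLU(),
    nn.Linear(16, 2),   # 2 output classes, e.g. infection vs. no infection (made up)
)

# Supervised learning: labeled examples plus feedback on right vs. wrong.
X = torch.randn(32, 4)          # 32 fake patients, 4 features each
y = torch.randint(0, 2, (32,))  # fake labels
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):            # each pass nudges the weights toward fewer mistakes
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(torch.randn(1, 4)).argmax().item())  # predicted class for a new example

The training loop is the "feedback" part: each pass adjusts the weights so the network makes fewer mistakes on the labeled examples.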
And so everyone talks about deep learning. Well, the difference between a shallow neural network and a deep one really just comes down to how many layers there are in the network. With a deep network, you have more than three layers. Data comes in, it goes through those layers, where the network makes decisions and sorts it, and then it's passed along as output. These are usually forward-moving: the data just comes in and gets spit back out. There are a couple of other kinds of neural networks, like recurrent neural networks and generative adversarial networks, but for this talk we don't really need to get into that. So that's just a basic building block of how machine learning works — it's almost mimicking a neuron in the brain.

And now we have natural language processing. What is natural language processing? It's really teaching a computer to understand text as more than just the letters. Think back to early Microsoft Word, where you hit Control-F, you type a word, and it finds that word. Now, with natural language processing, you may type "fish" and it finds fish, but it may also find suckerfish, it may find tuna — it understands the differences and the relationships between the words. It looks at emotion, intent, and context. So with natural language processing and machine learning models, we can do analytics, we can do information extraction, we can determine relationships between the words. In orthopedics — and I don't want to dive too much into this, but just to reinforce the learning point — you can do chart reviews. You could say, hey, show me all the patients who came to our clinic for ankle pain, and it's going to go through those charts and find the words associated with ankle pain. Like we said earlier, people are using it for research. My primary focus is billing and coding. You can do things like comb through charts and see if there are early predictors of infection that we didn't think about. I see the question at the bottom about MAKO and things like that — I'm actually going to talk about that in two slides, so I'll get to it in a second.

But with natural language processing, there's a lot going on in this next slide, and if you don't understand all of it, it's not the end of the world; it just explains how a computer sees language versus how we see it. When we read a book, we read left to right, top to bottom. With a computer, we may send it a paragraph of text, but it's not going to read it like us. It does something called preprocessing, and that's what you see on the right. When you feed a computer a paragraph of text, it goes through a bunch of different steps. The first is something called tokenization, and tokenization is just how the computer breaks the text up: does it take a paragraph at a time, a sentence at a time, every four words at a time, or one word at a time? Then, depending on how you break up your text, it goes through something called text cleaning. Typically a computer doesn't care about uppercase versus lowercase, and it usually doesn't care about punctuation, so it's going to remove all of that so it doesn't get confused and it uses less processing power.
Then it's going to do something called part-of-speech tagging, which is just where it labels everything as a noun, adjective, adverb, and so on. Then it's going to remove what are called stop words. Stop words, if you think about it, are the filler words we have in English. If you say "the cow jumped over the moon," do we really need "the"? The computer is going to read it as "cow jumped over moon," which is essentially the same sentence as "the cow jumped over the moon." So it's going to remove the, a, an, things like that. And then lemmatization is the concept where, to a computer, have, having, had, and has are all the same word. Lemmatization converts the word to its base form, whereas stemming converts it to the stem they all have in common. So for instance, with that example — have, having, had, has — the lemmatized word is probably going to be "have," but the stemmed word is just going to be "ha," because that's the part they all share. This is just how computers standardize and clean up language before passing it on to the model.

So what can we do with this? With NLP there are a lot of different functions, and like I said, I don't want to get too far into the weeds, but there are things like named entity recognition, where it can start to learn what people's names are, names of practices, things like that. When it sees these names, it assigns more importance and more weight to that sentence — it spends more processing power, more thought, on a sentence if, say, it contains a patient's name or it contains "Dr. Smith." We also have something called feature extraction, where if you feed it a ton of charts and say, hey, show me all the charts with infection, not only is it going to show you all the charts with infection, it's also going to tell you, hey, guess what — in these infected charts, most of these patients had a hemoglobin A1C of 12.6 and they were smokers. So we can start to draw more information and more relationships out of the data than we originally intended to. And lastly, something NLP is capable of is sentiment analysis. This is where it can look at the overall tone: is this a good paper or a bad paper, is this a happy review, is this a positive review? So it can look at all the positive reviews, scan the internet, and push those positive reviews to the front page of your website, so you're not going to see any negative reviews. I always laugh when I go to a doctor's website and they haven't cleaned it up, and there's a bunch of one-star reviews on the front page for the doctor.
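Circling back to that preprocessing pipeline for a second, here is a minimal sketch of tokenization, stop-word removal, lemmatization, part-of-speech tags, and named entities using spaCy — the library is my choice for illustration, not something named in the talk, and it assumes the small English model has been downloaded.

import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Dr. Smith evaluated the patient for ankle pain after having fallen at home.")

# Tokenization, part-of-speech tags, lemmas, and stop-word flags in one pass
for token in doc:
    print(token.text, token.pos_, token.lemma_, "stop-word" if token.is_stop else "")

# "Cleaned" text: lowercased lemmas with stop words and punctuation removed
cleaned = [t.lemma_.lower() for t in doc if not t.is_stop and not t.is_punct]
print(cleaned)

# Named entity recognition: spans the model thinks are people, places, etc.
for ent in doc.ents:
    print(ent.text, ent.label_)

The output is the standardized, stripped-down version of the sentence that the downstream model actually sees.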
But moving on to computer vision. If natural language processing is the ears and the mouth for a computer, computer vision is the eyes. We're teaching it to see images for what they are. Traditionally, it would break images up into thousands of pixels, but now, as you can see on the right — this is a picture from a self-driving car — it identifies the car, it identifies the stop sign, it sees the pedestrian (and knows you'll get 50 points if you hit the pedestrian). It shows us things for what they are and what the relationships are. And in orthopedics, this is stuff we're already using. If anybody here has a gait lab, anytime you're analyzing gait or motion patterns, that's kinematics. Mateo could probably talk about this until the end of time, but anybody who's getting MRIs and CTs and getting 3D models — that's all image segmentation generated from the CT or MRI. You can do that with fracture modeling, you can look at ACL tunnel placement, things like that. There are a lot of different startups looking at, say, an AP pelvis: we can look at the chondral loss in the femoral heads and make a prediction about when that patient will need a hip replacement. And for the question that was asked earlier — this is one of the core principles of the VELYS robot and the MAKO robot. This is what a lot of these companies are doing: their robots are robots, but it's really their software they're selling now, and the software is doing more, getting smarter, helping the surgeon. I actually had a section on augmented reality that I took out of the slides — I'm happy to email my slides on that to anybody who wants them — but Microsoft has come out with something called the HoloLens. It's kind of like the Apple Vision Pro, and I have some friends who are starting to wear it in surgery; they get a small screen in their view and they're able to do CT overlays.

I'm going to pick up the pace a little bit because I realize I've been running behind. So, generative AI — everybody knows about generative AI; we've got ChatGPT. This came out of a key development around 2017, when Google put out a paper called "Attention Is All You Need," which introduced the Transformer. The reason this is important is that Transformers actually look at the relationships between the words, using something called an encoder and a decoder. On that earlier slide with neural nets, the data just came in and passed through once. With an encoder and decoder, a word comes in, it's encoded as an idea in a shared space, and then the decoder turns that idea into output. A good example is translating from English to another language. As you know, English is a very idiomatic language, so six years ago, when you used Google Translate, you'd get roughly the right words but the grammar would be completely off. Now it takes that English phrase, translates it into a thought or an idea, and then the decoder outputs it with the correct grammar and syntax.

This is a slide from NVIDIA showing foundational models, like Mateo talked about earlier. We have these large foundational models that we can do so much with, and they're all based on these attention mechanisms — billions of parameters trained on essentially the entire internet. One of the reasons this has come about is Moore's law: roughly every two years we double our computing power while the cost is cut in half. To give you a sense of scale, GPT-3 was trained with about 175 billion parameters, and the newer models, like GPT-4o, are reported to be even larger.
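Circling back to that attention mechanism for a second: at its core it is just a weighted average, where each word decides how much to pay attention to every other word. Here is a minimal NumPy sketch of scaled dot-product attention — my own illustration of the mechanism described in the 2017 paper, not code from the talk.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # How much should each position "attend" to every other position?
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns the scores into weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # The output is a weighted blend of the values
    return weights @ V

x = np.random.randn(5, 8)   # 5 tokens, each an 8-dimensional vector (made-up numbers)
print(scaled_dot_product_attention(x, x, x).shape)   # (5, 8)

A real Transformer stacks many of these, with learned projections producing Q, K, and V, but the weighted-average idea is the heart of it.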
But some of the pitfalls of AI: it can be trained on outdated data, and it can be prone to hallucinations. When we say hallucination, we mean the model produces its own rules and facts. In this example on the right, somebody is trying to figure out why the cheese is sliding off their pizza, and Google Gemini suggests, you know what, add an eighth of a cup of non-toxic glue and it'll make the cheese stickier. And it says it so matter-of-factly that we'd usually believe it. Another thing we talked about earlier is that the cost-benefit analysis has to be there. One of the things with my company is we try to keep everything super cheap — we charge a dollar a note to read it — but sometimes the computing cost of reading the note that quickly costs us more than a dollar. So you have to figure out: is this process worth using AI on? And then lastly, with HIPAA, you need to make sure you have a BAA in place and that the company you're using is not using your PHI to train.

Just real quick, the FDA has put out guidance: your AI may be classified as a medical device, depending on whether it's effectively practicing medicine. If it's just sitting there to help you with recommendations and show images, that's generally fine, but if it's making a diagnosis, it's probably going to be regulated as a medical device. Briefly, there are now things called RAGs, or retrieval augmented generation. What these do is basically force ChatGPT to read the CMS guideline or the insurance guideline before it comes to a prediction, so you use those rules and policies as a kind of check rein for your AI. And I love this quote from AWS: the LLM is like an overly enthusiastic new employee who refuses to stay informed of current events but always answers every question with absolute confidence. Kind of reminds me of some surgeons I know. But always trust, but verify, and focus on QA.
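To make the retrieval augmented generation idea concrete, here is a minimal sketch — entirely my own, with made-up guideline snippets, a toy word-overlap retriever, and a stub standing in for whatever LLM you actually call. The pattern is: fetch the relevant policy text first, then tell the model to answer only from that text.

# Retrieval augmented generation in miniature: look up the rules first,
# then hand them to the model as the only source it is allowed to use.
guidelines = {
    "modifier 59": "Modifier 59 snippet: use only for distinct procedural services ... (made up)",
    "e/m level": "E/M snippet: level of service is driven by MDM or total time ... (made up)",
}

def retrieve(question, k=1):
    # Toy retriever: rank snippets by word overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(
        guidelines.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt):
    # Stub standing in for a real LLM call; swap in whatever model you actually use.
    return "[model answer would go here]\n--- prompt that was sent ---\n" + prompt

def answer(question):
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the guideline text below. If it is not covered, say so.\n"
        "Guidelines:\n" + context + "\n\nQuestion: " + question
    )
    return call_llm(prompt)

print(answer("When can we append modifier 59?"))

In practice the retriever is usually a vector search over your own policy documents, but the principle is the same: the guidelines act as the check rein.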
I've got about three or four slides left, so I'm going to go pretty quickly, but there are products out there for this. This is Microsoft Presidio, and this is an example of it. It uses the principles we've talked about — named entity recognition and pattern recognition — and what it does is go through your text, find the PHI, and randomize it. That's something we use because we don't ever want to store any PHI. And then — Mateo was talking about AI replacing physicians — this is actually a paper Google put out. They did a study where they created a chatbot that responds with empathy, makes informed decisions, and asks intelligent questions, and they blinded patients as to whether they were talking to a real primary care doctor or AMIE, the chatbot. The patients preferred AMIE in 26 out of 28 categories, and when they had specialists rate the performance, the specialists preferred AMIE in 38 out of 42 categories. So it's a little scary. It's not there yet, but it may be the future.

And you know, not everything needs AI. I just put this here: we've got AI toothbrushes; we had Google create a robotic arm that finds Waldo and then puts a creepy little hand on him; we've got AI moonwalking shoes; and there's a cat pain detector, so you can aim your camera at your cat's face and it'll tell you whether they're happy or not. But where can we use AI in ortho? This is kind of the wrap-up, and it's what the rest of the day is about: ambient notes, scheduling, coding. I have a slide in here from my company, but long story short, we're leaving about $200 billion on the table between undercoding, administrative costs, and the cost of fighting rejected claims. We can also look at patient outcomes, telemonitoring, and denial appeals.

So, in my opinion, where do we go from here? Well, I think Mateo said it well, too: we tackle the mundane, we let people focus on patient care and what's important, and we use AI to augment existing processes. And I love this tweet: you know what the problem with AI is? I don't want to do laundry and dishes, I want to do art and writing — but right now AI is the one doing the art and writing and I'm having to do the manual labor. So just to really end this: I think AI has tons of potential and possibility, and we need to figure out where to focus and tackle the low-hanging fruit. And on that note, I hope I didn't run over too much.

No, you're great. Thank you so much. I dropped Dr. Moeller's email in the chat, so if anyone has follow-up questions, feel free to reach out. He had it up in his slides as well, which can be downloaded from the session. We're going to move on to our next session with NextGen right now. So Dr. Moeller, thank you so much, and we'll see everyone in just a moment. Thanks.
Video Summary
The video transcript features Dr. Grant Moeller discussing the implementation of artificial intelligence (AI) in orthopedics. Dr. Moeller explains various aspects of AI, such as machine learning, natural language processing, computer vision, and generative AI. He emphasizes the importance of using AI to tackle mundane tasks, improve efficiency, and enhance patient care in orthopedic practice. Dr. Moeller also discusses the potential applications of AI in chart reviews, research, billing, coding, image segmentation, and more. He highlights the need to address pitfalls of AI, ensure compliance with regulations like HIPAA, and focus on quality assurance. Additionally, Dr. Moeller presents examples of AI products and shares insights on the future of AI in healthcare. He concludes by underscoring the vast potential of AI and the importance of leveraging it to optimize healthcare workflows and outcomes.
Keywords
artificial intelligence
orthopedics
machine learning
natural language processing
computer vision
patient care
healthcare workflows