The Neurotransmitters: Clinical Neurology Education

Artificial Intelligence in Neurology with Dr. Braydon Dymm

February 04, 2024 Michael Kentris Season 1 Episode 34


Discover what happens when artificial intelligence (AI) meets the complex world of neurology as we chat with Dr. Braydon Dymm, a stroke neurologist interested in the intersection of AI, neurology, and education.

We examine current uses of AI in clinical workflows, from interpreting scans to summarizing patient histories, showcasing the invaluable ways AI can enhance patient care. We also explore how AI tools are reshaping medical education and clinical practice. Imagine junior residents using AI to broaden their diagnostic skills, pushing the boundaries of their clinical reasoning. The discussion doesn't shy away from the tougher questions either—what does it mean for accountability in the medical profession when AI's advice is ignored, and how does such reliance alter relationships between doctors and patients? Join us as we navigate a whole new world together!

You can find Dr. Braydon Dymm on Twitter/X at @BraydonDymm

  • Check out our website at www.theneurotransmitters.com to sign up for emails, classes, and quizzes!
  • Would you like to be a guest or suggest a topic? Email us at contact@theneurotransmitters.com
  • Follow our podcast channel on 𝕏 @neuro_podcast for future news!
  • Find me on 𝕏 @DrKentris


The views expressed do not necessarily represent those of any associated organizations. The information in this podcast is for educational and informational purposes only and does not represent specific medical/health advice. Please consult with an appropriate health care professional for any medical/health advice.

Dr. Michael Kentris:

Hello and welcome back to The Neurotransmitters for another episode where you can get all of your clinical neurology education. I am very excited today; my introduction is all over the place. I'm joined today by Dr. Braydon Dymm. Thank you so much for joining us today.

Dr. Braydon Dymm:

Thank you for having me. I'm excited to be here.

Dr. Michael Kentris:

So is it all right if I call you Braydon for this episode?

Dr. Braydon Dymm:

Yes, of course.

Dr. Michael Kentris:

So you've been so kind to volunteer your time and come on and talk with us about a very hot topic in kind of the world at large, but in medicine specifically and even more specifically in neurology, and that is artificial intelligence. So just give us a little bit of your background and kind of how you became a bit involved with AI and medicine.

Dr. Braydon Dymm:

Yeah, absolutely. So I don't have much of a technical background myself. I actually took a traditional pathway to becoming a doctor. In undergrad I majored in biochemistry, then I went to medical school, and then to residency in neurology at the University of Michigan. I've always been more technical, and so in my residency, as part of my senior year, I was elected to be the technology czar, so I was always helping out my attendings with their Grand Rounds PowerPoints. Then I went to fellowship, I did my stroke fellowship, and now I'm a neurohospitalist. I have a very keen interest in AI in medicine and AI in neurology, and I like promoting it. I like making people aware of it. I like to consider myself very interested in the intersection of AI, neurology, and education.

Dr. Michael Kentris:

Awesome. No, that sounds very, very interesting. So I guess I have a very marginal computer science background. I fiddled around with some programming back in high school but I haven't really touched it much since then, which I guess is going on 20 years now. But for those who may be less familiar, when someone says AI or artificial intelligence, what do they generally mean by that?

Dr. Braydon Dymm:

Sure. So right now it seems kind of like a buzzword, and it's not always clear what we mean. When people talk about AI, they're often referring to a range of technologies, from simple automated systems to complex learning algorithms. Google Health technology researchers actually recently published an overview of this in JAMA covering the three eras of AI. The first era, AI 1.0, has existed since the 1950s. That's symbolic AI and probabilistic models that translate human knowledge into computational rules. In healthcare, this is used in our rule-based clinical decision support tools and all of those annoying EMR pop-ups. In the 21st century, we moved into the AI 2.0 era, when the machine learning techniques known as deep learning took center stage. This is usually when models learn from ground-truth labeled examples to automatically sort and classify things such as medical images. For several years now, this has already made a huge impact in everyday life and in healthcare. I use deep learning-based methods in every stroke code, when the head CT is processed into either no hemorrhage or hemorrhage suspected.
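To make the "AI 1.0" idea concrete, a rule-based clinical decision support tool is essentially a list of hand-written if-then rules checked against structured data. A minimal sketch in Python; the drug names, thresholds, and alert wording here are invented for illustration, not real dosing guidance:

```python
# Minimal sketch of an "AI 1.0"-style rule-based decision support check.
# The drugs, thresholds, and alert text are invented examples.

def renal_dosing_alert(order):
    """Return an alert string if any hard-coded rule fires, else None."""
    rules = [
        (lambda o: o["drug"] == "gabapentin" and o["crcl"] < 30,
         "Reduce gabapentin dose: CrCl < 30 mL/min"),
        (lambda o: o["drug"] == "enoxaparin" and o["crcl"] < 30,
         "Adjust enoxaparin dosing: CrCl < 30 mL/min"),
    ]
    for condition, alert in rules:
        if condition(order):
            return alert  # this string becomes the EMR pop-up
    return None

print(renal_dosing_alert({"drug": "gabapentin", "crcl": 25}))
```

Because every rule is written out explicitly, the logic is fully inspectable and debuggable, which is exactly the contrast with the "black box" models discussed later in the conversation.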

Dr. Michael Kentris:

Yeah, there have been a number of papers on CT heads for acute strokes and acute hemorrhages. I think that's been one of the most published use cases so far. Is that correct?

Dr. Braydon Dymm:

That is definitely one of the most popular use cases. They also just published a study on LVO detection. In stroke codes, you're looking for the large vessel occlusion that would make the patient a candidate for thrombectomy. The use of these automated alert systems that can read the CTA and tell you, or tell the emergency physician, hey, there's a large vessel occlusion here, maybe you should call your neurointerventionalist, actually shortens that door-to-groin time and gets the patient the care they need faster.

Dr. Michael Kentris:

Gotcha, excellent, so you said that was the second stage.

Dr. Braydon Dymm:

Right. So the new, exciting, emerging era, AI 3.0, introduces us to the foundational models and generative AI. This is more than just a buzzword. I really believe these new technologies carry transformative potential, but also new challenges, such as the propensity for hallucinations, or generating plausible but incorrect information. I would have preferred the word confabulation as more neurologically accurate, but AI researchers have already run away with the term hallucinations in the literature, so it's already gotten there.

Dr. Michael Kentris:

Right, it's got a little bit more hold on the public language than confabulation does most likely.

Dr. Braydon Dymm:

Right, right. So these models really stand out for their general abilities, handling various tasks without the need for retraining on new data sets. They can adapt their behavior based on simple text instructions, which means that instructing them to "write this note for a specialist consultant" versus "write this prior authorization for insurance approval" allows the model to respond competently to really any request.
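That instruction-following behavior means the "program" is just the text you prepend: only the instruction changes between tasks. A toy illustration; the `build_prompt` helper and the example note are hypothetical, and the model call itself is deliberately omitted:

```python
# Toy illustration: the same clinical text, steered to different tasks purely
# by the instruction string. The actual generative-model call is not shown.

def build_prompt(instruction, note_text):
    """Assemble the text a generative model would receive."""
    return f"{instruction}\n\n---\n{note_text}"

note = "68M with acute left MCA syndrome, NIHSS 14."  # invented example note
consult = build_prompt("Write this note for a specialist consultant.", note)
prior_auth = build_prompt("Write this prior authorization for insurance approval.", note)
print(consult)
print(prior_auth)
```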

Dr. Michael Kentris:

Yeah, it's very, very fascinating and I know for a lot of people in healthcare who may be listening. I think I might have seen you post this just recently, or maybe somebody else on Twitter, slash X, talking about ambient AI, doing just that with note generation, which I've played around with a little bit in kind of a beta format, and it is quite time-saving.

Dr. Braydon Dymm:

Yeah, so for the last 10 or 15 years we've had these rudimentary speech-to-text models, and it's often parodied as a source of major frustration: you try to say something and it misinterprets you, and it's just really not working. But they've actually been working on it for the last 10 years, and now I think the technology has gotten good enough to really be ready for prime time and to save you time, rather than you having to fight with it and go back and edit all of the mistakes it made. My program director, Zach London, loves to point out dictation errors, which can sometimes be really funny.

Dr. Michael Kentris:

And I saw he actually has a board game out now called Dictation Errors.

Dr. Braydon Dymm:

Yep, exactly. Yeah, I haven't had a chance to play it, but that looks really fun.

Dr. Michael Kentris:

It does. It does. So, yeah, it seems like AI is everywhere. Anyone who's been online any time in the last year, year and a half has seen all of these different software platforms that are now "with AI." It's like the low-fat craze of the early 90s come again, but with AI. You see all this integration. Even the platform that we're currently recording this podcast on has quote-unquote AI integrations, and, to your point, it does do that speech-to-text. All the transcripts for the show are primarily AI generated, and then we go back, because it will misspell people's names or certain technical terms; it's not a specialized AI. So I was hoping you could speak a little bit to that. We hear a lot about general AI versus more specific use cases. What's the difference between those?

Dr. Braydon Dymm:

Yeah, sure. So the term general AI sometimes gets referred to as artificial general intelligence. This is a hypothetical advance in the future that could understand, learn, and apply its intelligence broadly, kind of like a human, but we're really not there yet. Even the best models have severe limitations. One of those, in the context of generative AI, is that every time you use it, it has to reset its memory, so it really doesn't learn over time as you teach it and get it to do things. So what we have now are various forms of narrow AI, or selective AI, which are specialized in particular tasks, like language processing for our notes and the rudimentary image recognition.

Dr. Michael Kentris:

Gotcha. Now, I know one of the things we see is that they talk about training these different selective AI systems on datasets. How does that relate to past deep learning? I know my brother's a computer science engineer and he studied deep learning; I have a very limited understanding of it. That versus, say, neural networks, versus the new hotness that we keep seeing, these large language models or LLMs. How are these all related to one another, or are they related at all?

Dr. Braydon Dymm:

Yeah, that's a great question. So, as a neurologist, I have a particular interest in these neural networks. Neural networks really are the foundational technology for many types of AI, including those new large language models, or LLMs, that are leading the generative AI hype wave that we really can't get away from. Neural networks were inspired by the structure of the brain and consist of layers of interconnected nodes that process data in a way that is somewhat akin to our neurons. Of course, actual neurons are orders of magnitude more complex, and these AI neural networks are really just doing fancy linear algebra with matrix multiplications of a ridiculous amount of numbers. LLMs are a specific type of neural network designed to read and generate human language. They go from input to output by analyzing the input text, finding patterns and associations based on the training, and then generating a response that aligns with those patterns.
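That "fancy linear algebra" point can be shown in a few lines: a toy feed-forward network is nothing but matrix multiplications with a simple nonlinearity in between. The layer sizes and random weights below are arbitrary, chosen purely for illustration:

```python
import numpy as np

# Toy feed-forward neural network: the "layers of interconnected nodes" are
# literally matrix multiplications followed by a nonlinearity.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))    # 4 input features -> 8 hidden nodes
W2 = rng.normal(size=(8, 2))    # 8 hidden nodes -> 2 output scores

def forward(x):
    hidden = np.maximum(0, x @ W1)  # ReLU activation of the hidden layer
    return hidden @ W2              # raw output scores

x = rng.normal(size=(1, 4))         # one example with 4 input features
print(forward(x).shape)             # -> (1, 2)
```

Training a real network then consists of nudging the numbers in `W1` and `W2` to reduce error on labeled examples; an LLM is the same idea scaled up to billions of weights over text.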

Dr. Michael Kentris:

Now one of the criticisms that I've read about with respect to AI in general, but specifically with use cases in medicine, is that once you kind of go below that surface level input, you don't really know what's going on there.

Dr. Braydon Dymm:

That's a big foundational problem that the current literature on the use of AI in medicine is trying to address. It's called the black box problem. For that AI 1.0, that good old-fashioned AI, you can follow its logic. You can see exactly: okay, here are the inputs, here are the rules, and here are the outputs. If the outputs aren't what you want, you can debug it and make it what you would like. So it's completely understandable.

Dr. Braydon Dymm:

These large language models, they're black boxes in the sense that you really can't look inside them. You have no idea what's happening in between the input and the output. So if you give it a patient scenario and ask it to come up with an assessment and plan, you don't really know how it's generating that. There's this assumption that, well, these large language models have read the entire internet, and so somewhere in there is information about how patients present and what kind of diagnosis they must have and what kind of plan must be generated from that. But there's not really a great way to debug this, other than doing some fancy programming and adding extra layers and extra tools, or just retraining a brand new model from scratch.

Dr. Michael Kentris:

Got you. And I think that is one of the criticisms of something like, say, ChatGPT, which is less specifically trained, out of the box if you will, where it does look at all of this online data through a certain date, but it doesn't necessarily discriminate on the quality of information.

Dr. Braydon Dymm:

Right, exactly, and that's one thing that we're trying to innovate upon. If you can imagine, every website, every article, every book is being funneled into this AI and is being treated as equal. That's not necessarily the best way for your doctor to know what's going on, because there are certain texts and certain information that should be more highly valued than others. We all know that various things on the internet are not that trustworthy. So one thing that people are trying to do is to take these foundational models, the models that are very general, with broad training on all sorts of text, and do a fine-tuning process just on medical information, or just on patient interactions, to really get them to be more specific to those use cases.
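The fine-tuning idea, start from a broadly trained model's weights and continue training on domain data, can be sketched with a tiny stand-in model. Everything here is a toy: the data is random noise and the "model" is logistic regression; only the two-stage training pattern is the point.

```python
import numpy as np

# Conceptual sketch of fine-tuning: train a tiny "foundation" model on broad
# data, then continue gradient descent from those weights on a small
# domain-specific dataset. All data here is random; only the pattern matters.

rng = np.random.default_rng(1)

def train(X, y, w, lr=0.1, steps=200):
    """Logistic-regression gradient descent starting from weights w."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)  # gradient step
    return w

# Stage 1: broad "pretraining" on general-purpose data
X_gen = rng.normal(size=(500, 5))
y_gen = rng.integers(0, 2, 500).astype(float)
w_base = train(X_gen, y_gen, np.zeros(5))

# Stage 2: fine-tune the *same* weights on a small medical-domain dataset,
# typically with a smaller learning rate and fewer steps
X_med = rng.normal(size=(50, 5))
y_med = rng.integers(0, 2, 50).astype(float)
w_tuned = train(X_med, y_med, w_base, lr=0.01, steps=50)
print(w_tuned.shape)  # -> (5,)
```

Real LLM fine-tuning works the same way in spirit: the pretrained weights are the starting point, and the medical corpus supplies the second-stage gradient updates.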

Dr. Michael Kentris:

Gotcha. And I know we talked a little bit about specific use cases for, say, radiology. I've played around with ChatGPT, feeding in some standard clinical vignettes. I gave it a case vignette, and this was probably about four or five months ago, for a classic Guillain-Barré case: someone comes in with subacute ascending weakness and numbness, with reduced reflexes, blah, blah, blah.

Dr. Michael Kentris:

And it pops out a differential for you. GBS was on the differential, but it also included other things, like multiple sclerosis, that most neurologists wouldn't see as very likely. I think it put GBS at three or four on its differential. I'm sure as these different models become more and more selective, we'll probably continue to see this iterative improvement. But I don't know, in your experience, and I'm sure you've been experimenting with these tools as well, what are your thoughts as far as using these to, let's say, supplement someone's diagnostic process?

Dr. Braydon Dymm:

Yeah, absolutely. So I've actually used this with my residents. I'm in a program in Charleston, West Virginia, where we have a residency program, and with our junior residents, I'm trying to teach them the value of a differential diagnosis. Too often they might anchor on the very first thing that comes to mind: oh, this is a stroke; oh, this could be a seizure. But we really should be thinking more broadly than that, and trying to come up with the possibility of, well, what if that first thing was a coincidence? Or what if they have a rare condition that's not the first thing that always comes up?

Dr. Braydon Dymm:

And so I've actually used this in a lecture where I'll say, okay, here's a case, here's a standard case, try to come up with a differential. Our residents will come up with two or three things, and then I'll plug it into the differential diagnosis generator, and it'll come up with six or seven things. I think this is actually really valuable as an educational tool, because that way I can show them, hey, there's really a lot more to think about. And if the differential diagnosis generator is coming up with things that are not correct, well then, explain to me why it's not correct. If they have lost the reflexes, then why is the suggestion of multiple sclerosis less likely?

Dr. Michael Kentris:

Right, yeah, and it is.

Dr. Michael Kentris:

It is interesting, because, I don't know if you listen to the Bedside Rounds podcast as well, but he does a lot of work there talking about AI, and I think they had published a paper in NEJM about a specifically trained case, where they trained it off all the past New England Journal cases, and it was performing nearly as well, or maybe even slightly better, than most of the experienced internists on these case reviews.

Dr. Michael Kentris:

So it is one of those things where you do wonder: yes, it's a good training tool, but what are the implications of this? For instance, you mentioned the electronic medical record pop-up boxes for safety protocols. When are we going to have to worry about, well, the AI is suggesting this diagnosis that I don't think is likely, am I going to have to click "insufficient evidence" or "benefit outweighs risk" on these pop-up boxes going forward? I think that has some interesting implications, and I was wondering if you had any thoughts on that.

Dr. Braydon Dymm:

Yeah, so this is something that's been looked at: where is the liability for doctors using these new tools? There was a two-by-two grid of the tool being more accurate or less accurate, and the doctor being correct or incorrect, and where does the liability fall in each of these situations? Of course, if the tool is not really that great, it's less accurate, and the doctor makes a good decision, then the patient's happy and everyone's happy, because we ignored the incorrect tool.

Dr. Braydon Dymm:

But if we start getting to the point where these tools are showing really high reliability, and the tool suggests something and the doctor ignores it, then at what point could the patient turn around and say, hey, my doctor is ignoring the AI, ignoring the best evidence that we have available, how is that good patient care? That's something we're going to have to take a really critical look at. And when is that going to come up? When would the first doctor be sued for ignoring the AI recommendation? I don't know.

Dr. Michael Kentris:

Yeah, it's a very intriguing kind of social question, and it makes me think a little of your earlier mention of using it as a training tool. I see all the memes online. I think we're of a similar age, where back in middle school your math teacher would tell you you're not always going to have a calculator in your pocket to do long division or something like that.

Dr. Michael Kentris:

And we're now getting to that point. A lot of the way we are trained involves memorization of large amounts of information, both in undergraduate and graduate medical education. For instance, I'm reading through a lot of things about different spinal cord disorders right now, and specific genetic mutations, and I've never seen one of these cases in my career; it's not something, as you said, that readily comes to mind. Or think back to, say, the couple dozen different types of Charcot-Marie-Tooth disease that exist out there. The majority of us, even as specialists in neurology, probably could not differentiate between these beyond the first three or four types, clinically at least, not without a bit of digging and research. So what will be the utility of memorizing all this information? Would trainees' time be spent more effectively on other tasks? I don't know. Is that a conversation that's come up in your investigation so far?

Dr. Braydon Dymm:

Not quite as much, but that question is not new, because we have the internet, and that question has come up for Step 1. The Step 1 exam is really that rote memorization of the Krebs cycle, of biochemistry, and a lot of things that are not really useful for everyday clinical practice. The argument has been that it creates a solid foundation. As a medical student, you really need to build up this foundation of biochemistry and pathology, all of the diseases and pathology that get tested on Step 1, and that allows us to think more deeply about our clinical cases in Step 2 and Step 3. And so, right, are we always going to have a calculator on us in the setting of our clinical work?

Dr. Braydon Dymm:

Actually, yes, we do have our smartphones now, and our smartphones have calculators on them, so we do have access to a calculator at all times. In addition, we pretty much have access to the internet at all times. I can't remember the last time in my clinical work that the internet has completely gone out. Even if there's a Wi-Fi problem, you still have access to LTE and can access it with your phone. So, yeah, we will have access to these tools, and I think we should be using them and integrating them into our education and into our jobs. There's a really good quote from Dr. Isaac Kohane, who's a bioinformatics expert and pediatric endocrinologist. He was quoted in NEJM AI, which is a relatively new subjournal of the New England Journal of Medicine. He says that if you're not using the AI, you're like those kids coming out of college in the 1990s who still hadn't learned how to use a word processor.

Dr. Michael Kentris:

Yeah. Yeah, I've seen similar things written by people online. It's like you're not going to be replaced by AI, but you will be replaced by someone who uses AI.

Dr. Braydon Dymm:

Right, that's the tagline. That is definitely the tagline to really promote the idea that this is helpful. This can make you more efficient. This can make you more accurate in your diagnoses. If you use this tool, maybe it can also help you communicate with patients better. Maybe it can help you translate your language into more patient-friendly language. There was another study that looked at patient consent forms. It found that ChatGPT can write better patient consent forms than the current ones out there. It really raises the question: why haven't we been doing this already? It's really an opportunity to use these technologies for patient benefit wherever we can find it.

Dr. Michael Kentris:

That's a great point. I know we've talked about specific imaging use cases and uses in training. In what ways do you find that you're integrating it more into your actual clinical practice these days? With challenging cases, are you plugging the vignette in? Are you training up your own models? How are you integrating it into your workflow?

Dr. Braydon Dymm:

Sure. Integrating AI into neurology, I think it really comes back down to that 1.0, 2.0, 3.0 paradigm. We're all forced to use that 1.0 paradigm with our clinical decision support tools and our EMR pop-ups. Then I've been really impressed with some of the 2.0 developments I've already mentioned: these non-contrast head CT scans can reliably, but not totally perfectly, be sorted into hemorrhage versus no hemorrhage. EKGs have already gotten to a point where they're relatively useful for a preliminary read, although you'll want to confirm with the cardiologist before diagnosing somebody with new-onset atrial fibrillation. EEG, on the other hand, has proven extremely difficult. When I was in residency, it was near useless in identifying spikes or seizures on our long-term EEGs. But there's maybe a hint of promise here. Last year I did see a study, this was in JAMA Neurology, from researchers in Norway, and they claim to achieve human expert-level performance in fully automated interpretation of routine EEGs.

Dr. Michael Kentris:

Yes, that was an interesting study. For those who don't know, the signal-to-noise ratio in EEG is very poor. I think that was by Sándor Beniczky's group, and he's been big into AI development for EEG for quite some time. They were able to categorize recordings into abnormal epileptiform, abnormal non-epileptiform, and then, I think, focal versus generalized as well. Don't quote me on that, general public. It is quite fascinating, and you are able to use it as a screening tool.

Dr. Michael Kentris:

Quantitative EEG has been something that's been going on for quite some time, and it's been showing continued iteration. From the time I did my fellowship in 2017-2018 up through now, they've just continued to improve these different recognition platforms for spikes and sharp waves and all that kind of stuff. While the false positive rate is quite high, if you take 24 hours of material and you're able to screen through it in a matter of 5 or 10 minutes, that's still, as you said, a tool to enhance the workflow. It doesn't take the person out of the workflow, but it's able to streamline it and make it faster and more efficient.

Dr. Braydon Dymm:

Right, exactly. Then the new generative AI, this 3.0, I think has shown incredible promise for diagnostic assistance. Imagine we get an AI that can read the electronic medical record, including the outside sources, and come up with a summarized pre-charting summary for you. This is something that plagues outpatient neurologists: patients come in with all these records, and you have to sort through all of it. It can really take some time. But if you had an AI that could competently do it for you, that would be a really valuable tool.

Dr. Michael Kentris:

Right. I immediately think of all the duplicative things that are in charts, the things that may be irrelevant, and a lot of things that are, we all know, inaccurate. How many times have you asked a patient, I see here that you had a myocardial infarction back in 2020, and they're like, no, I didn't. Well, someone wrote it here. It's like that never happened. It's one of those things where, I think, you still can't take the human out of it, because sometimes the data that's put in is just wrong.

Dr. Braydon Dymm:

Yeah, that's the rule: garbage in, garbage out. We have to try to distinguish what the real, true information is here. In order to trust these systems, you'd want to know, okay, where is this information coming from? If you could see, okay, this summary came from this outpatient note, and you could reference that outpatient note to confirm it, then that's how the trust in these systems would go up. Because I think the trust in these systems is still not quite there yet, and for good reason. They still suffer from a few different issues, like that hallucination issue, that make you raise your eyebrow and say, I'm not really sure I want to cede all of the decision-making power just yet.

Dr. Michael Kentris:

Right. I think of that one clip from Idiocracy where you step onto a platform and a bunch of lights flash and they tell you the diagnosis.

Dr. Braydon Dymm:

Yeah, that'd be really fantastic. I think they're trying to make these little mall kiosks where you go in and it scans you, and then you come out more healthy.

Dr. Michael Kentris:

Right, yeah, and it is interesting. I tend to be, I don't want to say a pessimist, but let's say a skeptic by nature. Whenever you see, no hate necessarily, but the quote-unquote tech bros start pushing into the healthcare space, you do sometimes see some of these developments proceeding with, let's just say, less concern for some of the ethical considerations. An example being the full-body MRIs that have been in the news for the last couple of years. We talk about incidentalomas, incidental findings that perhaps prompt further workup down these different pathways: biopsies, procedures that have risks of complications, etc. How do you think something like this free and open access to non-invasive diagnostics could pair with something like AI in terms of risk stratification, to say, oh, we shouldn't touch that, or this has more aggressive features, or things of that nature?

Dr. Braydon Dymm:

Sure.

Dr. Braydon Dymm:

So, on that topic of these full-body MRIs, I completely agree with you. It sounds really awesome. It sounds like it would be the perfect tool to check your entire body out and catch anything at the earliest stages. Unfortunately, we're not quite there yet. You mentioned the incidentalomas, and the fact of the matter is that the vast majority of the population is actually fairly healthy and doesn't have these cancers. So for the proportion of people who are going to be really stressed out and really fearful that their health is in critical danger because an automated report said that there is a mass somewhere, the value of that system just doesn't justify all of this stress and all of the cost.

Dr. Braydon Dymm:

I trust our current guidelines, the age-based and risk-score-based screening guidelines. That's exactly where we are. Those are validated, based on cost-effectiveness and population studies. Maybe we could get to the point where these incidentalomas have certain signatures that are picked up by future AI, so that you wouldn't have to worry about them; they'd be correctly called as benign beforehand. But we're definitely not there yet.

Dr. Michael Kentris:

Yeah, yeah, and that's the thing, right, we just don't have the data sets to accurately identify these things without sticking a needle into it and taking a piece of tissue.

Dr. Braydon Dymm:

Exactly.

Dr. Michael Kentris:

So something I just saw in the news earlier this week and I think probably a lot of people did. We're recording this in the first week of February, but there was a tweet from Elon Musk talking about the first Neuralink implant. So what are your thoughts about implantable hardware, ai integration, all that kind of dystopian nightmare stuff?

Dr. Braydon Dymm:

So I'm a little conflicted, because on the one hand, I'm really excited about the progress of technology and developing things that help patients and help people, but I think the hype with this sort of thing is way out of control. This is not something where your average person is going to schedule an appointment with a neurosurgeon and plop a chip onto their parietal cortex to enhance their sensory abilities. This is not consumer technology. This is really something that should be available for those with severe neurological illness that would really justify the risks of a neurosurgery and hardware implantation. And this is not the first time; Elon Musk is not the first person to invent hardware that interfaces with the human brain. There's a whole field of brain-computer interfaces. I'm excited for the development of it, but the hype is way out of control.

Dr. Michael Kentris:

Yes, I would agree. I don't want to be all negative. I think it holds a lot of promise for people with things like ALS, or Lou Gehrig's disease, and spinal cord injuries, who have different degrees of paralysis, and things like that. So I think it certainly holds a lot of promise for select populations. I don't know if you were one of these, but I've in the past been a bit of an anime nerd, and I think of Ghost in the Shell and things like that. We're not talking about fiber optic eyeballs and artificial muscle implants and all this cyborg-level technology. As Heinlein would say, we don't have sufficiently advanced technology to the point that it appears like magic.

Dr. Braydon Dymm:

But yeah, yeah, I wish we were living in that cyberpunk future. We're not there yet, but maybe Elon Musk can take us there.

Dr. Michael Kentris:

So one of the hot takes as far as AI integration. We know there are different devices, as you said, that are involved with neuromodulation and these different brain-computer interfaces. Classic examples would be people with Parkinson's disease or medically refractory epilepsy who have these implanted devices. Some of them are closed loop, some of them are open loop, that is to say, sensing and stimulating, and these are able to be interrogated just like you would a pacemaker or other implantable devices. They're not necessarily on Wi-Fi, but that's the thing, right? Any device that has an open connection to it could theoretically be hacked. So if we start putting in these devices that, as you said, might affect sensory perception or motor control, and someone hacks into one and turns it off or changes it, I think that has some pretty big societal implications, especially if we see, let's say, 20 years from now, widespread adoption of these things.

Dr. Braydon Dymm:

Yeah, absolutely, that's a major risk. I've actually heard about that for those with diabetes who have these closed-loop insulin delivery systems. If you can imagine, the worst-case scenario would be somebody on their phone using Bluetooth to hack into the insulin delivery system and inject all of the insulin available, putting that patient into a hypoglycemic episode that could potentially be deadly. That's not the cyberpunk future; that's here and now. That's a real risk.

Dr. Michael Kentris:

That sounds like a great episode of CSI. But in the more immediate, non-dystopian future, where do you see AI being taken, and how do you think people can best prepare for it and make sure they don't get left behind, like our forefathers in the word processor era?

Dr. Braydon Dymm:

Sure. I think it's paying attention, taking a look at what's out there and what's being offered, and not being shy about trying things out. I think that's the best way to really stay up to date, along with following the latest literature on this. The New England Journal of Medicine has its own AI subjournal that you can follow, as well as people on social media who promote these sorts of things and raise awareness of them. But at a certain point we will all be taken into this brave new world. I believe this might happen at the level of an institution, at the level of a hospital system, where we'll be introduced to, maybe, a copilot. We'll log into our computer one day and a pop-up will say, "Hey, welcome, we've got this new copilot system." From then on, you're just forced to learn it like a new tool, and it'll be integrated into your everyday workflow.

Dr. Michael Kentris:

Yeah. Yeah, you're right, it's very likely that it's coming for us. As you said, it's to an extent already here. If you're not within a couple of years of retirement, it probably behooves you to familiarize yourself so that you can make sure that you can get the best utility out of it once it is validated and more reliable in some of these newer use cases.

Dr. Braydon Dymm:

Yeah, absolutely. I think we're still very far away from this Star Wars C-3PO walking around and doing our clinical exams and procedures for us. But in the immediate future, I think we can expect AI to assist us with tasks such as predicting disease risk by analyzing vast amounts of health data from our electronic medical records. There are projects that utilize deep learning to comb through the EHRs to predict things, with promising results in diseases like diabetes, schizophrenia, and various cancers. And if you copy and paste the patient record, the history and the exam, into an AI model, it can already help you come up with a differential and plan. You can either start using it right now, or you could wait until it's integrated into your EMR for everybody in the hospital system to use. The thing is that patients have already caught on to this. We're used to patients going to Dr. Google and coming in with fears of multiple sclerosis or cancer. Are doctors prepared for patients going to Dr. ChatGPT and coming away with a more highly valued second opinion?

Dr. Michael Kentris:

Right, that does beg the question a little bit; I'm probably using that wrong. But in terms of plugging in our history and examination, I think that's going to put the emphasis back on the information gathering. I think about this analog-to-digital interface, which is a lot of what neurophysiology is about: the interface between the messy analog system that is people and the data processing in the digital space. How do I take this story that might be wandering all over the map and condense it into a coherent timeline, which sometimes the ambient AI is trying to do? It still depends on you as the physician, as the examiner, asking the right questions to elicit the right information. How many times have you been taking a history and going through your diagnostic algorithm, asking this question, that question, and then you get a yes to something? All the dominoes fall into place and you start really chugging along towards a diagnosis.

Dr. Michael Kentris:

This is one of the things we also see. How many times have you gotten a chart that says "pulses intact in bilateral feet" when the person has a below-the-knee amputation? It's that garbage in, garbage out. If we don't take the time to speak to our patients and examine our patients properly, if we put garbage data into the AI, we're still going to get a garbage differential diagnosis back. I think that, in a way, might be beneficial, because it's going to put the physician back at the bedside with the patient, spending more of that time there instead of on all these administrative tasks that have bloated over the last 40-plus years. Spending more time with the patients, getting accurate information, and being able to integrate that more effectively. At least, that's my hope.

Dr. Braydon Dymm:

I think that's a great vision for how these tools will really help us with our burnout. A huge source of burnout is all of the clinical documentation and the interfacing with EHRs. We should try to remember, at our core, what skill really is the most valuable: our patient interaction. If we can spend more time with the patient to gather this information, and that information is automatically being documented and processed, that takes care of the part we don't really care for. I don't really care, at the end of the day, to take all my notes and then summarize them, reorganize them, and put them in multiple different places in the chart. I love that vision of really getting back to the most valuable way we can use our skills as doctors: to ask the right questions, to do the physical exam, and to explain things to the patient, human to human.

Dr. Michael Kentris:

Absolutely, well said. Any final thoughts on AI or technology in general as you see it over the next five to ten years?

Dr. Braydon Dymm:

Sure. I hear a lot of concern along the lines of "oh, AI will take our jobs," that it's something to worry about. There are a few reasons why I'm not concerned about that. I really don't think robots are about to take our jobs as doctors anytime soon. First of all, as human doctors, we walk around in the real world along with our patients. Robodoc is not about to whip out the reflex hammer and do formal strength testing; that is extremely challenging, and nobody has any idea when we might make those breakthroughs. Second, there's been a growing doctor shortage as the population grows, but the pipeline for new doctors has remained stagnant. These technologies promise to make our system more efficient, so more patients can get the care they need. Third, I think that for regulatory reasons there will always need to be a human in the loop. Is the DEA really going to hand out licenses to software to allow automated prescribing of morphine or lorazepam? I just don't see that happening.

Dr. Michael Kentris:

That would definitely be a Brave New World situation right there, and I mean that in the classic sense of the novel. We met on Twitter. Where can people find you online, and what should they reach out to ask you about?

Dr. Braydon Dymm:

Yeah, absolutely. I'm on X now, formerly Twitter. Luckily enough, I got my handle just as my name is: it's @BraydonDymm, which I'm sure was spelled out in the intro to the podcast. You can find me there. I like to tweet a lot about AI and medicine and neurology and education. I've done a collaboration with MedEd Models on the use of AI for the everyday clinician and in medical education. We've shared prompting strategies for getting the best results in various use cases, ranging from generating multiple-choice questions to feedback and planning, and then when not to use it, such as when writing a letter of recommendation.

Dr. Michael Kentris:

Excellent, excellent. I think you have some upcoming speaking engagements as well.

Dr. Braydon Dymm:

That's right. Well, this will come out after, but I will have been at the International Stroke Conference as a social media ambassador. Then I will be at the AAN annual meeting in April, where I will be on a panel, bright and early at 7 AM on Wednesday morning, talking about AI for neurologists.

Dr. Michael Kentris:

Excellent. Yeah, I know last year the AI panel was standing room only, so even with the early hour, I fully expect there to be a lot of buzz about it.

Dr. Braydon Dymm:

That's exciting. This will be my first time being there as an attending, and my first time being there as a presenter rather than just with an abstract. I'm very excited.

Dr. Michael Kentris:

That's awesome, that's great to hear. You can find me also on X, formerly Twitter, at DrKentris, K-E-N-T-R-I-S. You can also check out our website, theneurotransmitters.com, for more neurology education materials. Dr. Braydon Dymm, thank you again so much for joining us today. I really appreciate you taking the time; I feel like I learned a lot today.

Dr. Braydon Dymm:

Thank you so much. It was really a pleasure talking to you, and I'd be happy to come back anytime.

Dr. Michael Kentris:

Yeah, we'll have to keep you on the speed dial for all of our hot take updates.

Dr. Braydon Dymm:

Yeah, and tech fixes. If you've got a tech problem, I'm your guy.

Dr. Michael Kentris:

Awesome. Thank you so much.

Dr. Braydon Dymm:

All right, take care.

Dr. Michael Kentris:

You too.
