I'll try to keep it G-rated.
G-rated. No, you don't have to keep it G-rated. It has to be like PG-13. Yeah. It has to be not explicit <laugh>, which is, uh, I can't imagine that I would do anything explicit, though I do swear a lot. You
Get pretty worked up about these topics,
I do too. Okay. That's, that's why we like each other.
Um, because we're both, let's see,
Oh, it's more than that. Some
Fiery. Fiery. I think we both have, well, I have heterodox opinions that exist on the periphery of what is acceptable at the BI. Yeah.
I would, I would say I err in that direction as well, however,
But I'm more spreading inward.
Yeah. I know. Oh, well, I don't know why I'm even recording this. I've made some big life decisions lately that you will not agree with. I decided I'm not gonna change who I am. <laugh>
Oh. To, like, fit in better at the BI. 'Cause I feel like I'm kind of a weirdo here, but I made that decision. I was like, I'm not gonna change who I am. I'm doing good things. No,
Who you are. I know. But that was a, well,
Here, I'm gonna stop. We'll talk
Cut. Oh, it is. I'm not gonna swear. Can you introduce yourself?
Yeah. I am Shoshana Herzig. I'm a, uh, clinician investigator at Beth Israel Deaconess Medical Center. I do pharmacoepidemiology and I love studying hospitalized patients. It's an area that hasn't been well investigated up until, I don't know. Well, that's not really true. I guess the Institute of Medicine. You can cut
All this out. I'm gonna cut it all out. <laugh> Um, why are we talking today?
I don't know. You don't know. You keep asking me to do these things, and I keep saying, oh, I don't know, Adam. I don't know if I have anything
To add. Why do you think we're talking today?
I think we're talking today because, um, anytime math comes up, you seem to associate me with math, and Bayes' theorem comes up a lot. I don't know. So I think that we're talking today because we are moving very quickly towards computer involvement, AI, I should say, AI involvement, in the practice of medicine and clinical diagnosis, and really incorporation of AI into every single facet of medicine, including medical publication, for example, which has come up at my editorial board meetings lately. I bet it has. And so, how this is gonna change the face of medicine, how AI is being applied, and kind of what my thoughts are, as a researcher, an epidemiologist, on top of all
That. So, as a brilliant epidemiologist, very numbers-minded, I want to ask you an unfair question: what does it mean to make a diagnosis?
You know, uh, it means that we have come to a high-probability conclusion as to what is causing a patient's given presentation, symptoms, syndrome, et cetera.
Does the probability at which you're willing to make that diagnosis differ based on what disease states you're considering?
You're saying, do I adjust my confidence level accordingly? Yeah. So, depending,
'Cause is there a magical threshold at which you'd say, this is a diagnosis? And would that be different for, say, a community-acquired pneumonia, yeah, versus an aortic dissection?
Well, I think, you know, there are certain things we like to say you have to have a high index of suspicion for, where you're gonna consider them and you're gonna work really hard to rule them out, because if you don't, the patient dies, or there's devastation as a result. Um, and then there are other things where, you know, sometimes that has to do with the treatment. Let's say the treatment is benign; maybe you don't need to be totally sure that's what they have, because the treatment has very little risk, and it's a win-win. And so maybe you're gonna call it that and just try treating
Like pneumonia. Yeah. That's exactly what I was thinking about.
This is getting into, like, management
Or, you know, or UTI. I mean, how often do we decide that a patient with dementia who comes into the hospital delirious has a UTI, and we're willing to call it that, even though probably nine out of 10 of those UTI diagnoses made in that context are not
Actually accurate. Yeah. Because really what we're doing is weighing the risk of exposure versus the potential benefits. Correct. Yes. Or the risk of non-exposure versus... Okay. That's right. So I want to talk briefly about clinical decision support and your experience with it. Do you routinely use clinical decision support tools?
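The exposure-risk-versus-benefit logic in this exchange is essentially the classic treatment-threshold idea (Pauker and Kassirer). A minimal sketch, with made-up harm and benefit numbers purely for illustration:

```python
def treatment_threshold(harm: float, benefit: float) -> float:
    """Probability of disease above which treating beats withholding.

    harm    = expected cost of treating a patient who does NOT have the disease
    benefit = expected gain from treating a patient who DOES have the disease
    Classic Pauker-Kassirer result: threshold = harm / (harm + benefit).
    Numbers below are invented, not clinical recommendations.
    """
    return harm / (harm + benefit)

# A benign treatment (tiny harm) pushes the threshold toward zero,
# which is why we "just try treating" a presumed pneumonia or UTI:
low_risk = treatment_threshold(harm=1, benefit=20)    # ~0.048
# A risky treatment demands near-certainty before acting:
high_risk = treatment_threshold(harm=10, benefit=12)  # ~0.455
```

The asymmetry in the thresholds is the whole point of the exchange above: the bar for calling something a diagnosis moves with the consequences of acting on it.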
Um, do I implement them or do I actually, as a clinician?
It depends what you're talking about. So it's impossible as a clinician to avoid interaction with clinical decision support in a computerized ordering system anymore. It's just not possible.
Like interaction checkers, you mean? Right. That would be the classic example that you can't avoid.
Uh, yeah. Interaction checkers, dosage prompts, or clinical decision support. Yeah. So, like, anytime you go to enter any drug within our system, there's a dropdown with pre-selected dosages that you can choose from. Now, you can select other, but those dosages in and of themselves are clinical decision support. It's making it harder for a physician to do the wrong thing
In a way that can be harmful. You don't know how much trouble I got in when I tried to appropriately dose Bactrim based on people's weight. Like, some of our, this is gonna be cut out, but some of our colleagues told me they were uncomfortable with this. And I was like, you guys know that Bactrim is weight-dosed everywhere in the world; this is just a quirk of our CPOE. Huh? They fixed it now. Oh, wow. But you used to only be able to do one DS BID, and if you did any more than that,
Wow. So, yeah, I mean, clinical decision support is baked into every single thing that we do. So I think it's not possible to not be interacting with clinical decision support as a modern-day practicing clinician. But in terms of, are you talking about, like, risk stratification tools, or what do you
Think about? Yes. So do you, I should have phrased my question better. Do you use clinical decision support as an adjunct to your diagnostic process?
Um, uh, gimme an example of what, what you would be thinking about in that
Realm. Well, there's many. I mean, I, in the
Emergency, like, do I use HAS-BLED, for example? Or do I use a Wells
Score. Yeah, Wells is kind of the classic. I just wanted to make sure. Centor criteria, the 4Ts score, all of those.
So I do, um, I use it. You know, the clinical decision support that I probably use the most is when I'm considering a HIT, heparin-induced thrombocytopenia, diagnosis.
Um, that's probably the one that I use the most. Wells I use sometimes. But, you know, this starts getting into some of the problems with, you know, Bayes' theorem and trying to develop clinical diagnosis algorithms or programs: the prior probability of disease, or the prevalence of disease in any given population, really depends on the population in very nuanced ways. Right? So, you know, from a study, from Wells, you know what the average risk is for a patient of that profile who's presenting to the emergency department, for example. And there are now Wells scores for hospitalized patients. But the point is that not all of these risk stratification tools meant to support clinician decision making can be applied right to the setting in which the physician is applying them. You know, the prior probabilities change vastly depending on the patient population you're actually applying it
To. And that would be spectrum bias, if you wanted to put the diagnostic word
On that. Yeah. Or, as de Dombal said, I think he had a nice term for it, geographic portability or something. Um, which, yeah, that's exactly right.
And it's the same idea. I mean, de Dombal noted that it even worked better at neighboring hospitals, but the further he got, even within England, it worked less and less well.
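De Dombal's geographic portability problem is, at bottom, Bayes' theorem with a shifted prior. A small sketch, with all numbers hypothetical, of how the same finding yields very different post-test probabilities at two hospitals with different baseline prevalences:

```python
def posttest_prob(pretest: float, lr: float) -> float:
    """Bayes' theorem in odds form: post-test odds = pretest odds x LR."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Hypothetical numbers, purely for illustration: the same positive
# finding (likelihood ratio 4) lands on very different priors at two
# hospitals, so an unadjusted tool ports poorly between them.
hospital_a = posttest_prob(pretest=0.05, lr=4)  # ~0.17
hospital_b = posttest_prob(pretest=0.30, lr=4)  # ~0.63
```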
Yeah. Well, and you know, there are two interesting issues embedded in that. Um, one is the idea that patients with the same disease can present very differently depending on where they are. There are cultural differences in expression of,
Right. Pain, you know. There are a million reasons that patients with the same disease can present differently depending on geography or various things. Um, and then there's the idea that different diseases have different prevalences in different geographic areas. So, even if patients are presenting with exactly the same constellation of symptoms, if the prevalence of disease is really different in one place than another, then the information that you gain from some diagnostic test actually doesn't change your likelihood that that patient has that disease, pro or con, if the prevalence is super low. You know, like, mm-hmm. <affirmative>. So it's kind of like
Temporally as well, right? As time goes by. For example, we saw this with the aspirin trials, right? People are now started on, for example, statins, which may minimize the effect of certain exposures. So our prediction tools change over
Time. That's right. That's totally right. And so you'd have to be constantly changing the inputs, unless, which is what we're gonna talk about today more, unless you have a system that instantaneously, in the moment, is constantly updating its own information. Right. Um, and that, I think, is the biggest difference, and what would've blown de Dombal's mind. You read his work and it's operating under the paradigm of the time, which is that in order to get a computer to do these things, you have to know how humans think, and then you have to teach the computer how to think like humans think. When in reality, what's happening with AI is computers are almost now understanding how we think better than we could even tell you how we think. They're just using so much information to come to an answer in a way that we might not have ever even thought to come to that answer, because they're able to recognize this is an important piece of information, even if we don't recognize that it's an important piece of information.
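The point about prevalence making a test informative or nearly useless can be made concrete with positive predictive value. A hedged sketch, with invented test characteristics:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes: P(disease | positive test)."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test (90% sensitive, 95% specific) means
# something wildly different depending on where it is applied:
ppv(0.90, 0.95, prev=0.20)   # ~0.82 where the disease is common
ppv(0.90, 0.95, prev=0.001)  # ~0.018 where the disease is rare
```

With a super-low prior, even a quite specific positive result leaves the patient overwhelmingly likely not to have the disease, which is exactly the geographic and temporal portability problem being described.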
This I'm not gonna put in the episode, can I ask, how is it that you and I get this, but most of our colleagues do not seem to get this because it is driving me crazy. Like Rich Schwartzstein was like, I'm not even gonna go to the thing he's putting on. He does not get it. <laugh> one bit.
Yeah. I, I don't know. Is it just, is it just that they haven't read enough about
It? I think they haven't engaged with it. I don't think they've tried to engage with it in
A meaningful way. I think they just dismiss it immediately. And it's like, you know, um, one of the more interesting examples of this that I remember, like, a long time ago, so I'm old now. This was like,
Yeah, I'm old too. If it makes you
Feel any better, <laugh>. This was back in 2007, when I was taking a class at the School of Public Health, and I was in a pharmacoepidemiology class. And, you know, part of the standard teaching in risk prediction research has been, you know, you want to choose your hypothesized predictors a priori, based on your clinical knowledge. Like, get out your doctor hat and say, these are the things that I think predict something. But around the time that pharmacoepidemiology was really blooming, um, and propensity scores were coming out, we realized that you don't actually have to be parsimonious with the predictors that you choose. And that's because propensity scoring gives you much more power if you have a common exposure. Um, but not only that, it's better if you're not parsimonious. 'Cause what we found is, if you tell a computer, mine through this data and figure out which of these variables are gonna be the most predictive of a certain exposure, the computer will identify things that you never would've thought of. Like, in one of the times that this was done, in the development of these high-dimensional propensity scoring approaches, where they select from, like, all data points that are available within a medical record system, things were coming out that we never would've thought of as humans to use as predictors.
Can you give me an example?
I will, I can, and I'm going to. Oh, great. So the example was, you know, um, if I remember correctly, this was a study where we were trying to predict use of statins on the part of a physician. And it turned out that it wasn't just high cholesterol, high blood pressure, blah, blah, blah. It was things like looking at how many visits the patient has had in the prior year, how many healthcare encounters, mm-hmm, have they had, mm-hmm. <affirmative>. So actual measures of engagement with the healthcare system, um, were just as good, if not better, than these predictors that any physician would've given you off the top of their head. And, like, it's making use of all of the information in a way that humans would not have even thought of, but that turned out to, like, massively increase your predictive ability.
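The data-mining step described here, ranking a large pool of candidate covariates by how strongly each predicts the exposure, can be sketched in toy form. This is not the actual hdPS algorithm (which prioritizes variables by estimated confounding bias); it is a simplified illustration with fabricated records, in which a healthcare-engagement proxy outranks the textbook predictor:

```python
import math

def rank_covariates(records, exposure_key, candidate_keys):
    """Rank candidate binary covariates by how strongly they predict the
    exposure -- a toy stand-in for the prioritization step in
    high-dimensional propensity score (hdPS) methods.  Score is the
    absolute log relative frequency of exposure, with smoothing."""
    scores = {}
    for k in candidate_keys:
        exp_with = sum(1 for r in records if r[k] and r[exposure_key])
        n_with = sum(1 for r in records if r[k])
        exp_without = sum(1 for r in records if not r[k] and r[exposure_key])
        n_without = sum(1 for r in records if not r[k])
        p1 = (exp_with + 0.5) / (n_with + 1)        # smoothed P(exposed | covariate)
        p0 = (exp_without + 0.5) / (n_without + 1)  # smoothed P(exposed | no covariate)
        scores[k] = abs(math.log(p1 / p0))
    return sorted(candidate_keys, key=lambda k: scores[k], reverse=True)

# Fabricated records: statin use tracks prior healthcare engagement
# ("many_visits") more tightly than the obvious predictor ("high_chol").
records = [
    {"statin": 1, "many_visits": 1, "high_chol": 1},
    {"statin": 1, "many_visits": 1, "high_chol": 0},
    {"statin": 1, "many_visits": 1, "high_chol": 1},
    {"statin": 1, "many_visits": 1, "high_chol": 0},
    {"statin": 0, "many_visits": 0, "high_chol": 1},
    {"statin": 0, "many_visits": 0, "high_chol": 0},
    {"statin": 0, "many_visits": 0, "high_chol": 1},
    {"statin": 0, "many_visits": 0, "high_chol": 0},
]
rank_covariates(records, "statin", ["high_chol", "many_visits"])
# → ["many_visits", "high_chol"]
```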
I assume you've read the Mayo study on, uh, EKGs predicting amyloid. Have you seen that? No. So they used a machine learning algorithm, um, trained with the final diagnosis, and just by looking at the EKGs it was more predictive than cardiac MRI, than CMR. Wow. An ML algorithm. Something that can be operationalized right now.
And, you know, it makes sense if you think about it, right? You'd of course expect to have low,
The voltage is gonna be a little bit lower, but it's also gonna change in imperceptible ways that a human would never notice.
That's totally right. Uh, you know, the other thing that jumped out at me, um, in de Dombal's work: he was talking about the steps to diagnosis. There's information acquisition, yes, and then there's information interpretation, and then there's application of that interpretation to the care of the patient. And he very quickly dismisses computers as ever having a role in the information acquisition component. But we're already seeing a million examples of that. Right. So, you know, starting from years back with the emergence of telemedicine, and the fact that you can have a stethoscope that's on a patient's chest and a physician across the country can be listening to it. So that's a very primitive example of it. But recently, this is, like, the coolest thing that's happened to me lately. So I wear contact lenses, and it's so annoying having to get these contact lens exams for the purposes of ordering contact lenses when I'm, like, a young, fairly health... well, young is debatable, but fairly healthy
I think you're young, as someone gaining on you in age,
<laugh> individual without ever having had any eye problems. Why do I need to go in, pay $150, have a contact lens appointment? And so a company, that I won't use the name of, is now doing contact lens appointments through your phone. Yeah. And you set your phone up at a certain distance from yourself, and provided you're low risk, which they have ways to calculate based on prior history, et cetera, et cetera, you can use your phone, and it snapshots your eye. And there's not a doctor on the other end assessing it; your own phone then assesses whether your eyes look okay, whether they see any gross abnormalities, and you use the phone itself. Taking EKGs from your Apple Watch. You know, like, the number of ways that computers now actually collect data in the absence of human hands is just incredible. So I really think that computers are gonna be involved in every single aspect of those three facets of clinical diagnosis that de Dombal spoke about.
So it's interesting that you picked up on that in that paper, because I had... do you have a sense of what he was reacting to there?
Um, well, I was thinking that he was reacting to the idea that humans might be superfluous at some point, and that he was getting a little defensive about it and basically saying, like, no, we're always gonna be necessary, 'cause we're always gonna have to collect information.
So he was reacting to really even 10 years before, to the generation of, like, Ledley and Lusted, Keeve Brodman. In that paper, he makes a joke about people going around with, like, a papal edict talking about the end of humans. There were people who thought that you could diagnose just by doing a review of systems, and it had failed so miserably that I think de Dombal is reacting to their overclaims and being very cautious. Like, he's being very cautious because he developed a legitimately powerful system.
Yes. Yeah, he did. And it is scary to think about, you know, the idea that, is there ever going to be a time where we are superfluous? And I don't know the answer to that. I think it's
Possible. Can I, uh, this is what I would say. I would say that until three months ago, I don't think there was any point in entertaining it, because of the nature of the hypothetico-deductive process. Right. But now we need to actually test it. Right? Yeah. I don't know the answer, but it's something that we need to investigate.
Yeah, I totally agree. Um, I could see that happening, and I agree with you. My thinking on that has evolved. Like, you know, five years ago I would've said, no, there will never be a time; it's always going to be computers and AI augmenting our human clinical decision making. But it doesn't seem crazy to me now for it not to be augmentation at some point.
We need to talk about the randomized controlled trial we're doing with the Stanford AI people. Meza here.
Is this... this isn't what you submitted to JAMIA?
No, no. We're, it's a trial we're designing now, but I'll tell you about it after.
Oh, is it like de Dombal's work, but upgraded with
Um, that's so much <laugh>. I know. Anyway, um, back to our discussion. Yes. I had always assumed that if something was going to replace humans, it would be modeled, as you were saying, on very large data sets, and be effectively a Bayesian network. Right. Somehow model reality. But these LLMs seem to have changed that.
Large language models. Generative AI.
Yeah. It's changed that discussion, because, in a way, if you look at the history, if you look at what de Dombal did, there is no inherent reason that you couldn't scale that up. It would just take a ton and a ton of data inputs, more than any one human ever could do, and more than could be modeled. Right? That's right. That's right. So I assumed, you know, it was gonna happen, just a long time in the future <laugh>.
Yeah. Yeah. And that type of model still relies on physicians recognizing what the inputs need to be. Whereas the beauty of AI is it will pick up on things that
Are more predictive, that we didn't know about. And that's the other really interesting thing. And one of the things that makes people scared about using AI for research is that it'll come up with things that turn out to be predictors, or things that fly in the face of what we know. Ice cream. You saw that? No.
Read the Atlantic headline. It's about ice cream being protective against diabetes.
It's an amazing article. But you don't know why that is. So, like, you don't know if that's just collinear with something else, if that's a proxy for something else, or whether it actually is. Right. Because you don't know. Like, I mean,
Isn't that true about humans too though? It
Is. It is. But at least with humans, you can kind of unpack things a bit more. A lot of times with AI, you can't really unpack how it got to that.
I agree. But I would say that humans tell just-so stories all the time. Like, physiologic reasoning will often tell stories, mm-hmm <affirmative>, that physiology will then prove are not true. And we still talk about, like, a contraction alkalosis, even though we've known for 40 years that's not the case. Right? Yeah. Because that heuristic is helpful to us in helping the patient, not because it's accurate. Mm-hmm. <affirmative>. So I don't know that it's that different.
All right. Let's talk a little bit about what de Dombal did. Before we... this is gonna take a lot of editing. Of course, it's what I expected. But what did de Dombal do to build, uh, the abdominal pain model, to build AAPhelp?
Yeah. So, um, in order to build that, he had to recognize what the pertinent elements of the history truly are. In other words, the most discriminating elements, yes, I suppose, of the history. Because, you know, he talks a lot about how the more junior the person is, the more superfluous many of the questions that are asked are. So it seems like he and his colleagues decided what the important inputs would be. Um, and then started... well, actually, this was not totally clear to me, and I'm curious,
I may not have sent you the paper where they described how they did it.
Paper. Oh yeah. So I can tell you what they did. What they did is, for two years, they collected all the epidemiologic data. Again, they did pre-specify all the things, and they had multiple physicians examine every single patient who came in. So that's what they kept. And then they calculated concurrence.
That's what I kept wondering. Yeah. So I didn't get that
Paper. And yeah, I didn't send that to you.
That was the big question mark for me: what was their gold standard? Really, concurrence?
Agreement. So, just, if the majority of people thought this is acute appendicitis, then that got logged in the system as a case of acute
Appendicitis. Oh, no. They used pathologic diagnoses, surgical diagnoses as the overall gold standard. I meant, for deciding what went into the final model, they based it on things that were
Oh, that's what they did.
Agreed upon highly. And they didn't do a kappa, 'cause kappas didn't exist yet. They just did concurrence. Okay. Though kappas are very similar. So
Okay. So they basically said, should this go in or not?
Yeah, exactly. And they're like, look. It is quite a brilliant thing. It's like when we have our templated exams: unless I hear that Kaji did the exam, I don't usually trust what it says, because there's so much variability in how people perform exams. And de Dombal kind of proved that scientifically, right? Like, what is an acute abdomen? Well, it turns out surgeons couldn't really agree, yeah, as to what the criteria for an acute abdomen were, mm-hmm. <affirmative>. But they could agree on whether, for example, they had a fever. Shocker. Mm-hmm.
<affirmative>. Yep. Um, yeah. So is that, I mean, you just answered
That. Yeah, I was good. I answered my own question. Yeah.
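On concurrence versus kappa: raw percent agreement ignores the agreement two raters would reach by chance alone, which is what Cohen's kappa corrects for. A small sketch with invented surgeon ratings, just to show the two statistics diverging:

```python
def percent_agreement(a, b):
    """Raw concurrence: fraction of cases where both raters agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement -- the statistic that postdates
    de Dombal's group, who used raw concurrence instead."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    labels = set(a) | set(b)
    # Agreement expected by chance, from each rater's label frequencies:
    p_chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Invented ratings: "appy" = appendicitis, "nsap" = non-specific abdominal pain.
surgeon_1 = ["appy", "appy", "nsap", "appy", "nsap", "nsap"]
surgeon_2 = ["appy", "nsap", "nsap", "appy", "nsap", "appy"]
percent_agreement(surgeon_1, surgeon_2)  # ~0.67
cohens_kappa(surgeon_1, surgeon_2)       # ~0.33
```

Two-thirds raw agreement looks respectable, but once chance agreement on a two-category task is subtracted, the kappa is much more modest.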
<laugh>. Yeah. The papers that you sent me were, um, the one that was basically looking at how well the
The... yeah. And by the way, what you wrote in your email was different from what the paper covered. That's what I think is
What I asked. Oh, essentially, I swapped the papers. Yes. Okay. Sorry.
No, no, that's okay. That's why
I was, there were like 15 papers. Yeah.
So, yeah. I had trouble knowing exactly where that all came from, but I got to see how it performed. And then I read his, like, summative one. The best one was that one, the last one. The last
One. The last one is brilliant. You saw the year it was published, right?
Or something. Nah, '78 was the last one. '78. But look, he's, like, quoting Kahneman and Tversky. He was brilliant. He even saw the future.
That's right. He totally did. And even like the security concerns, right? Yeah. And maintaining anonymity and
Well, okay, let's talk about that. What do you think de Dombal's legacy was?
Um, because this was used until 2003,
Really? Mm-hmm. <affirmative>
Until CTs effectively became common in all UK emergency rooms, AAPhelp continued. It existed online; you can see cached versions of what the website looked like. Anyone could log in and put the information in.
His legacy, which I think is actually kind of what we're talking about as potentially being challenged now. I felt the biggest legacy was the idea that we can and should work together with computers to deliver better medicine. Um, when I say that, I think that that is potentially going to be challenged: the should-we-even-be-part-of-that-equation part. Um, I think, you know, maybe not even that long down the road, there's gonna be some question about, like, as we were just discussing, we could hit a point where humans are not necessary in that. I could see that happening. And maybe not in all of medicine, but in certain parts of medicine. Can
Oh yeah, definitely radiology. I'm curious about your thoughts on this. We will probably not put this in, are there specialties? Cause I have a,
Are gonna go out first? 'Cause I think there are, and I think there's actually a good way of thinking about the way they deal with uncertainty to explain that.
How about in our field, in internal medicine? Which subspecialties are most at risk?
Yeah. Agree. Okay. What else?
I have, it's so mathematical. It's, so
There's another one that I think is in like, really high risk.
Let me think about what, what, we've got cardiology that's not going anywhere.
Um, I think the procedures
Procedural well, yeah, yeah, yeah. Um, GI
Procedures. Yeah. And if you think about the nature of a lot of abdominal pain syndromes, like,
We... oncology. I think oncology. Oh. So, the way that I see this: are you familiar with the concept of stochastic versus epistemic uncertainty?
No. I mean, I've heard those words
Before. So, stochastic uncertainty is scientific uncertainty, uncertainty that can be modeled and predicted. Epistemic uncertainty, you've quoted Rumsfeld before, is the unknown unknowns. Yep. When I go and interview a patient, it is almost all epi... well, it's a combination, but there's a lot of epistemic uncertainty. I don't know what's in front of me, and I need to use cognitive processes in my head to figure that out. In oncology, when you're coming up with a treatment plan for a patient, mm-hmm <affirmative>, there is no epistemic uncertainty. It is all stochastic. You have the information that you need in front of you. You need to look at the data and pick the best treatment plan. Yeah. I think a computer can probably do that now. Yep.
And I think psychiatry is probably the most protected field, mm-hmm <affirmative>. Psychiatry is all epistemic. Mm-hmm. <affirmative>. Even its diagnoses are epistemic, mm-hmm <affirmative>. They're made by humans; there are no clear gold standards. Yep. I would think that by the time you can replace a psychiatrist, you've replaced every human.
Right. I think general internal medicine is much closer to psychiatry then. Oh
No. Yeah. Yeah. I mean, if you think about it, they already are having bots... I mean, aren't they using bots for therapy already, on calls like suicide risk lines? Mm-hmm. <affirmative>. And for teens, you know, like, just to have someone to talk to. That's a great point. Aren't there whole dating sites where
You can have a conversation? You read about that?
Well, I've heard of it. I don't know how I heard about it. But, you know, where you can have a whole conversation with someone and not realize that they're a
Bot. I'm not referring to therapy though. I'm referring to the diagnostic, the cognitive work of diagnosis.
I mean, I think chatbots are gonna get eerily good. They already are eerily good.
Yeah. My point, I guess I was thinking of it along the lines of how procedures, or what's gonna keep GI docs or
Cardiologists. No, it's not. They're gonna train PAs who can do procedures, who use an AI. Yeah. Why would you pay a gastroenterologist for that?
Oh, cut this out. It's very cynical. <laugh>. No. Um, anyway, that's how I think about the stochastic versus epistemic uncertainty for the cognitive work of the clinical reasoning process. Mm-hmm. <affirmative>. And anyway, oncologists signed their own death warrant a long time ago when they started to build these guidelines. Like, they've made it so that a computer could replace that. Well,
I'll tell you the other thing that comes up in de Dombal's work that I think is going to be a hard thing to tackle with just a computer: interpreting what a patient means when they say certain things. Uhhuh <affirmative>. Right. Uhhuh <affirmative>. It is really hard sometimes to decipher what it is a patient is actually saying. Um, and, I mean, I guess if a human can do it, we're using cues, and maybe we aren't able to describe what cues those are that we're using. So by virtue of that, maybe computers could do the same thing. But I just think that there's so much, um, in kind of the way that we interpret the information that we're receiving, there's so much inherent, like, subjectivity, or there's
Fuzziness, and there's not... yeah. There are no clear boundaries.
Right. Um, and that's actually describing epistemic uncertainty. Mm-hmm. <affirmative>. We see that all the time in our job. Yeah. Uh, I think it's a fair question, though, to ask how the computer will work, rather than saying it can't.
But sometimes your interpretation is based on your years of experience with that patient. Right? Yeah. It's, 'cause I know this person, I know he's an under-reporter of pain, or I know she's an over-reporter of something. You know, like, it's just,
Or it's based on reading the room: the patient's worried wife who is beside him, that's right, who's making a face while he's like, I haven't had any chest pain. That's right. Or the fact that if a farmer comes in, you know that something terrible is going on.
Or a veteran. Yeah. <laugh>. If a veteran is there and his wife is beside him, <laugh>, my pretest probability of something bad is very high. <laugh>
Okay. So, de Dombal's legacy. Well, I'm still not sure what you think his legacy is. Like, he is the father of all modern clinical decision support tools, but you think that those days are over, or approaching their end?
I think that, as he conceived of them, those days are approaching their end. Um, well, alright. I do think that there will always be Bayesian-type aspects to clinical reasoning. And, um, I'd have to think a little bit about specifically where that will continue to be so important.
Diagnostic tests, I imagine. Isn't that kind of gonna be the classic area?
Yes. And so, you know, I think where those things fall apart is, you know, you can know the sensitivity and specificity of a test, and as we've talked about on prior episodes that you've done, even that is actually not an inherent property, you know, it's not a fixed property of
The test. Can you teach medical students that? Because whenever I tell them that, I get in fights, and it always goes back to what their textbook
Says. But even assuming that we can capture sensitivity and specificity, the estimate of the prior probability, which is the prevalence of disease, um, is so dependent on a million aspects of that patient sitting in front of you. So, like, you can know the prevalence among a population, so, in the US, what's the prevalence of X, Y, or Z. But that doesn't tell you the prevalence of that disease in a male patient in his sixties, yeah, with Ashkenazi Jewish background, who also happens to smoke, and who... so, like, you never truly know the prevalence of disease; you can go narrower and narrower and narrower. So that's where I think computers could, you know, they can do a million different permutations of what the prevalence of disease is in tinier and tinier subgroups of patient populations, and really refine that prior probability. Um, they could also tell you what the test performance characteristics are, how sensitivity and specificity
Varies and how they work at your institution. If they can talk to that's each other, they can tell you how they work on a national level. You
Know, we're always psyched when we, like I was psyched when I found out we had a hospital antibiogram, right? So I knew exactly what prevalence of different organisms was at our own specific hospital. Like, you know, so having so computers will facilitate more accurate inputs, um, for doing the same types of calculations that Daba was, um, using in his, uh, early versions of these diagnostic reasoning tools.
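The Bayesian updating being described here can be sketched in a few lines of Python. The sensitivity, specificity, and prevalence numbers below are made-up illustrative values, not real test characteristics; the point is just how the same positive result yields very different posteriors as the prior is refined for narrower subgroups:

```python
def posterior_probability(sensitivity: float, specificity: float, prior: float) -> float:
    """Bayes' theorem for a positive diagnostic test result.

    P(disease | positive) = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    """
    true_pos = sensitivity * prior          # P(positive and diseased)
    false_pos = (1 - specificity) * (1 - prior)  # P(positive and not diseased)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 95% specific.
# Refining the prior (prevalence in ever-narrower subgroups)
# changes what a positive result actually means.
for label, prior in [("general population", 0.01),
                     ("narrower subgroup", 0.10),
                     ("high-risk subgroup", 0.30)]:
    post = posterior_probability(0.90, 0.95, prior)
    print(f"{label}: prior {prior:.0%} -> posterior {post:.0%}")
```

With these illustrative numbers, a positive result in the general population still leaves the disease unlikely, while the same result in a high-risk subgroup makes it probable, which is exactly why refining the prior matters as much as knowing the test's characteristics.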
I'm gonna push you a little further on this, because I want to clarify what you're saying. Yep. In the clinical reasoning world, there's actually kind of a belief, for many of the reasons that you and I both know, that Bayesian clinical reasoning is, not a dead end, but always going to be relatively small. Do you actually think that's true? Do these new computers portend the end of Bayesian reasoning? Or is it that the computers are gonna be so much better at Bayesian reasoning?
That's what I'm, that's what I'm saying.
Yeah. It's that I think they will be able to do it much, much better.
Okay. I mean, that seems uncontroversial. Yeah. Well, it probably is controversial among some people.
I would love for you to be more controversial. Well, no, here's my next question then, because I wonder how much we, well, God, we don't even have that much time to talk, but there's so much to talk about right now. It's 2023, April going on May; this episode won't be out for another month, so God knows what's gonna happen between now and then. What is the role of human cognition in clinical reasoning?
No, like from here on out, right. Looking, looking to the future, or in the near term: what do you think the role of human cognition will be in clinical reasoning?
So I think we've already alluded to this a bit, which is that I think there are nuances of, of, uh, information acquisition mm-hmm. <affirmative> that it will behoove us to have humans involved in. Although you could argue that at some point, maybe the patient will interact directly with the computer, and at the end of the day, we gotta go on whatever information the patient is giving us. So,
So information acquisition
Information acquisition, I think, is one area where we may need to continue to be involved, maybe for longer than in some other parts. Um,
And by information acquisition, you mean talking to the patient, mostly?
That is, yes. Yes. But you know, scientists, we like to say fancy words for very simple concepts. <laugh> Yes, talking to the patient. Um, I think that counseling and the delivery of information, I feel like that's always going to be, it's always going to feel better getting hard-to-hear information from someone you have a long-term relationship with, who can really sit down and explain it to you. And I don't know, I mean, I do know that ChatGPT is already able to give pretty wonderful explanations of things.
But I wrote a prompt that does, oh my God, what is it called, that does motivational interviewing. A great prompt, because one of our very cynical colleagues was like, it couldn't motivationally interview somebody. So I wrote a prompt to make it do it, and then she tried to break its will, and it just kept motivationally interviewing, very, very respectfully and well.
Well, do you know what I think is very interesting too? The intersection of the pandemic with all of this. And what I mean by that is that I never thought we would get to a point where it would be like the Jetsons, you know, where you don't leave your house, you do everything from a computer screen, and there's no need for human interaction. Because I was always like, people are always gonna need human interaction. But the pandemic took away so much human interaction, and it hasn't fully come back, and we don't know that it ever will. And I think now, you know, people still get interaction, but they get it from the people they specifically want to get it from, as opposed to being forced to. And where I'm going with all of this is that I think the pandemic has made us all much more comfortable relating to other individuals through a computer screen mm-hmm. <affirmative>, such that the gap between the human element and what a computer can give has been narrowed.
Right. And this is what I would say: we both have children, and my kids, who are two and four, are going to grow up interacting with chatbots, chatbots more powerful than the ones now. That's right. So you and I, I don't think, would ever feel comfortable being counseled by a chatbot, but I don't think that's gonna hold. Society changes; it's changed with technology before. There's no reason to think it won't continue to change.
That's right. Yep. There's just less human presence in general, and people are just more and more comfortable with the lack thereof.
I think. All right. So this is kind of what I want to focus on. We're both medical educators of, of, um, certain stripes. I don't know, we're both medical educators. So I was just, this is, I'm gonna cut this part out, but I was at Cleveland Clinic, and I asked who had used ChatGPT, and every time I ask this question, more and more hands go up. Um, we are now in a position where we have early learners, and you and I are old, my hair is gray. Um, yeah. <laugh>
We know, we know how to train clinicians that think really well. And believe it or not, it's not what they teach in EBM classes. It's not learning Bayesian analysis. It's seeing lots of patients, building up illness scripts, thinking critically about our patients, in a way that we learn to collect that information and put it into a narrative, however the hell the human mind works. Well, we're in a position where our early learners are now going to be doing diagnosis with chatbots very early on. Mm-hmm. <affirmative> What does that mean for teaching? What does that mean for clinical reasoning, and what does that mean for teaching diagnosis and clinical reasoning?
Yeah, that's right. I mean, is there value to going through the act of getting it wrong so many times? Um, because if
There is, if you don't have a computer that can do it better.
So I, I guess my question back to you is: are you getting at whether we're actually going to be able to assess independent thought mm-hmm. <affirmative> in the future? Or are you getting at the idea that if they're fed the answers early on, they'll never learn it? Because some of what, you know, PBL, problem-based learning, right, the whole point of it is to muddle through things on your own.
It's, um, a desirable difficulty, right? That's the cognitive principle of how we learn. That's right. So it raises this question: how do we teach people? And then, yes, how do we, I don't care about assessment as much, but how do we assess people, or do we even assess people? Like, what happens when we take this away from being an internist?
Oh my gosh, and Adam, that applies in every single area, right? Like, this comes up in medical publishing now too, right? Because if someone used ChatGPT, then is that really their work? Does ChatGPT need to be listed as an author? And if I have ChatGPT write a grant for me and I don't tell anyone I did that, you know, and then I'm getting the grant,
People are doing that.
How, how is anyone gonna test me to know if I really do know what I'm talking about, or if ChatGPT did it? Like, is the only way we're ever actually gonna be able to assess whether a scientist's thoughts were their own independent thoughts to have them up on a stage, without any, without any, you know, computer-based help?
That has been proposed: bring back oral exams for internal medicine. That may be the only way to know.
I think it may be the only way.
Um, but then it raises, oh my God, it raises so many questions. I think that medical educators who haven't seriously engaged with ChatGPT do not realize what is coming. Because I don't have any answers either.
Exactly. I know the problems, but I don't know the answers, and there are probably problems that I haven't even foreseen.
Yep, that's right. And you know, so in one of the meetings for a journal that I'm on, we were talking about, you know, what is our policy on ChatGPT? And there were a lot of different ideas batted around, and I liked the idea of allowing it but asking people to disclose it. Yeah. Because there's no way that we can anticipate right now all of the many applications that are coming down the pike. So you don't wanna start listing, it's okay for this but not for this, because tomorrow there are already 10 more applications that we haven't already mentioned. So, you know, I think right now we're in a major learning zone, where we need to understand all the many ways this can actually be used and applied. And that's gonna continue to evolve and change.
And that's what I think about clinical reasoning: we need to understand how it's useful for clinical reasoning. I will tell you right now, it is useful for clinical reasoning. Yeah. I think the question is: what are those limits? What are the limitations?
You know, so back to your question about what de Dombal's legacy is: I think the idea of having the computer provide its thinking around something, and then giving that information to the clinician and allowing them to take it into account, that I think is fundamentally a great concept. Right? Because I do think that there need to be checks and balances in any system, and we get that to some extent, like, um, on teaching services, you know, in hospitals, systems are set up so that there's never just one person involved in the care of a patient, right? There are pharmacists making sure your dosage is right, there are nurses making sure that, you know, I'm grossly oversimplifying roles, but I just think that you never wanna solely rely on one type of processor.
And we've talked about this before; that's a problem with diagnosis as it is: there's usually no one to check us, right, or challenge us.
And so, if I had to come up with an answer to your question about what his legacy is, I think it's the idea that humans and computers could work together to achieve a better diagnostic process, and that computers can actually help make humans better at what they do. So I really loved that, in his work, he found that physicians were able to improve with that information, and that when they shut it off,
They, they got worse again.
When they shut it off, they got worse again. And then they turned it back on, and they got better. And so I think that, for as long as humans are involved in the delivery of healthcare and medical care, there are certainly going to be ways that computers can help us do the roles that we're still doing, better. Okay.
Anything else you wanna add?
Uh, I don't think so. I think we covered, I think we covered most stuff.