What the next generation of doctors needs to know about AI

AI is helping doctors treat patients in American hospitals. But many new doctors say they haven’t been trained in how to use it. Now, Stanford University is mandating AI training for all its medical students.
Guest
Dr. Lloyd Minor, dean of the Stanford University School of Medicine.
Also Featured
Dr. Raja-Elie E. Abdulnour, pulmonary and critical care physician at Brigham and Women's Hospital in Boston. Editor-in-chief of NEJM Journal Watch.
Josh Cheema, advanced heart failure and transplant cardiologist at Northwestern Medicine.
Shinjini Kundu, assistant professor of radiology at the Washington University in St. Louis School of Medicine.
I. Glenn Cohen, professor at Harvard Law School.
The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show.
Transcript
Part I
MEGHNA CHAKRABARTI: Dr. Raja-Elie E. Abdulnour's patient was in the ICU and on a ventilator. Abdulnour is a pulmonary and critical care physician at Boston's Brigham and Women's Hospital, and all of his medical training told him that his patient had pneumonia, but it wasn't clear which type.
Dr. Abdulnour believed the patient had streptococcal pneumonia, one of the most common, contagious and serious forms of the disease.
If not treated swiftly, the prognosis was dire: possible sepsis and lung failure.
DR. RAJA-ELIE E. ABDULNOUR: We do a test. The test suggests that this patient has streptococcal pneumonia, so a lung infection due to streptococcus, and everybody's excited. Okay, this is a positive test. We know the cause of the patient's breathing trouble. Let's go ahead and change our antibiotics so we can just target that bacteria.
CHAKRABARTI: But then Dr. Abdulnour paused. He realized that he had seen the exact same patient a few months earlier, with the exact same symptoms. The team had seen the exact same test results and already tried treatments for streptococcal pneumonia, but the patient wasn't getting better.
ABDULNOUR: And so I asked myself, could it be that today the positive test is in fact a sign of a previous infection? Because if that's the case, then we need to hold our horses and actually keep our antibiotics to be broad instead of very narrow.
CHAKRABARTI: Now, false positives in tests like these are uncommon. They happen in only about 3% to 4% of patients.
Dr. Abdulnour was the most senior physician in the ICU that day. And he needed another opinion, so he turned to artificial intelligence, a program called OpenEvidence. The tool rapidly searches through vast medical databases, and it told Dr. Abdulnour that yes, his test result could be a false positive, and it gave him sources explaining why.
ABDULNOUR: I looked at every word. I made sure that everything it said made sense to me. I looked at the references and then I realized that it's a good answer, and I told the team, we need to hold our horses.
CHAKRABARTI: The care team gave the patient the broad-spectrum antibiotics instead of the narrowly focused ones, and the patient's condition improved to the point where they came off the ventilator.
Later testing showed that the AI program had made the right call.
ABDULNOUR: This is an example of how using OpenEvidence, or AI in general, changed the way I practice. It changed my decision making and that of the team as well, and that's happening every day.
CHAKRABARTI: Two-thirds of physicians already use AI in some capacity; that's according to the American Medical Association. The number of doctors using it nearly doubled between 2023 and 2024, in just one year. In addition to scanning medical research, AI is helping diagnose patients, analyze things like chest X-rays, and schedule appointments.
Dr. Abdulnour uses it to transcribe clinical visits. He sets up his phone and the AI scribe listens to his conversation with patients and then creates a draft doctor's note. In seconds.
ABDULNOUR: I've heard some people say, finally, I come home and I can spend time with my family. I can spend time with my kids.
I'm able now to do more research and I'm able actually to talk more to my patients. And patients themselves are also saying the same thing. They're saying, finally, my doctor's looking at me instead of looking at the screen.
CHAKRABARTI: AI is transforming medicine at a blistering pace. After all, here is a field that relies on all sorts of data, tests, numbers, and experience to treat patients.
It's like streams of input coming from all sides to a doctor, as they try to help someone. So as a powerful processing and analytic tool, AI is poised to make the practice of medicine better, but it could also make it worse. How will insurance companies use AI to determine who gets their treatment? Are new AI tools adequately vetted for bias against certain groups of patients?
After all, that bias comes from the humans who created the data sets that AI tools are trained on. How much should a doctor let AI guide their decision making, and at what point in treatment should the human physician step in? These are questions that no other generation of new doctors has ever had to face, and yet med schools around the country continue to thrust new graduates into a world where they have to learn about the potential and risks of AI on the job.
Wouldn't it be better to start learning about it in medical schools? One study found that last year just 14% of American medical schools had developed a form of generative AI curriculum, compared to 60% of undergraduate programs. Med schools are indeed trying to catch up. And in September, Stanford unveiled an AI in medical education website.
It features reading materials for students, contact information for experts to talk about how they use AI, and videos from a symposium earlier this year, all about AI in medicine.
In the next 30 minutes, everyone in this audience is going to know all about the basics of large language models. Let's see if we can make it happen. I'm mostly kidding. Of course, we're going to be scratching the surface.
I am not a computer programmer. I have never trained a large language model, and the tools that I'm suggesting today are things that anybody can use to maximize their learning.
Physicians have a lot of questions about their malpractice risk in general, and even more about their malpractice risk when it comes to AI.
Very recently, Bill Gates gave an interview. He said that within 10 years, AI will replace many doctors and teachers. Humans will not be needed for most things. I don't know, folks, the gauntlet has been thrown. What do you think? What do you think is actually needed or not?
CHAKRABARTI: Those talks are all available on Stanford's new AI in Medical Education website.
The curriculum will now be required for all Stanford Medical and Physician Assistant students. So this hour we're going to talk about what a curriculum looks like regarding AI and the practice of medicine, and why you as a patient of a doctor or medical professional somewhere should care about what med students are taught.
And I'm joined by Dr. Lloyd Minor. He is the Dean of Stanford Medical School. Dean Minor, welcome to On Point.
DR. LLOYD MINOR: Thank you, Meghna. It's great to be with you today.
CHAKRABARTI: Okay, so before we get to the nitty gritty of what's in Stanford Med School's AI curriculum, I was wondering if you could start with sort of the thinking behind why the university or why the med school in particular needed to really move quickly to establish some sort of baseline for AI education in medical school graduates.
MINOR: Yes. You mentioned it and you covered it very well in your introduction. These tools are out there, they're available. It's amazing, isn't it, that large language models just became available to the general public three years ago. And yet now they're being adopted and used in so many different sectors and in so many different ways.
And certainly, in medical education and in medical practice, the examples you mentioned and others provide compelling evidence that large language models are available and that they're already adding value. There are risks, and we should talk about those during our discussion today, but they're available.
They're out there, but how do we really use them appropriately? How do we use them in a way that gets us the most information we can for the benefit of our patients, to help us develop a diagnosis and prepare a treatment plan, while also being aware of some of the pitfalls that can come up from using large language models?
CHAKRABARTI: You know, what's interesting to me is that these AI tools are, quite frankly, just about everywhere, but since we're focusing on medicine, they're kind of hot off the presses, if I can use an archaic analogy.
They're so brand new that, how do you even go about teaching a tool that may have only recently been introduced into Stanford Medical Center's own daily operations?
MINOR: Sure. And you mentioned in the introduction that our website, Stanford Medicine AI in Medical Education, is publicly available. It's not behind a firewall. Anyone can access the educational materials that we put together, and those are constantly being updated. I think the first step in the education is to convey some general understanding of how the large language models work. What they do, how they bring information to the fore, how, in interacting with a large language model, you can help make sure you're getting the right information that's most meaningful to you.
Or that's most meaningful for the way you're going to give information to a patient at the time you do the query with the large language model. Those are the principles behind the curriculum that we put together.
First, understand how do they work? How are they different from a simple internet search? How are they trained? What are the differences between some of the more commonly available models? In the introduction, you mentioned OpenEvidence, one example where the model's been trained and curated based upon published and peer-reviewed medical literature, in contrast to large language models that are trained on a universe of information and data.
So those are all topics we try to cover in the curriculum, as well as to encourage our students to use the models. It's only through active engagement with a large language model that you really understand its capabilities and its limitations. I should also mention, we've developed a couple of special tools specifically for the education of our students and our trainees.
One we refer to as Clinical Mind AI. What that is, is a large language model in an interactive environment, where a student actually conducts an interview with, if you will, an artificial patient, a patient who's been constructed based on the way the scenario has been put together. The student gets to take the history, gets to interact with the person, and then there's feedback given to the student in real time and afterwards.
Did you ask the most pertinent questions? Did you ask them in a way that really encouraged the patient to feel comfortable in providing answers? These are things that in the past we've done, of course, with humans, and we still do. We haven't supplanted the role of medical educators, physicians and others in the educational process.
But we've added these tools to be a benefit during the learning process.
We haven't supplanted the role of medical educators, physicians and others in the educational process. But we've added these tools to be a benefit during the learning process.
Dr. Lloyd Minor
CHAKRABARTI: So we have about 30 seconds before our first break. Dr. Minor, can you just tell me a little bit about why you think med schools have fallen behind on this? Because I echoed that fact earlier that 60% of undergraduate programs have already started including some kind of AI education in their curricula.
MINOR: Several reasons. First, I think there's an understandable reluctance to change a curriculum that is designed to prepare people to provide medical care. That's a tall order; that's a high responsibility. And we don't want to make changes without studying their effects and understanding how those effects may be mitigated if they're not advantageous.
Part II
CHAKRABARTI: Dr. Minor, I have to say that one of the reasons why we were excited to turn to Stanford Medical School is that Stanford played a big role in a major five-part series we did several years ago about AI and health care. It was called Smarter Health: Artificial Intelligence and the Future of Health care.
And for listeners, by the way, just go to our On Point podcast feed and look up the words Smarter Health and you'll see the award-winning, actually, five-part series there. It was really excellent. But it suddenly occurs to me, Dr. Minor, that this was back in 2022 that we did our series.
Three years later, in 2025, some of the tools that we talked about back in the Smarter Health series have already been transformed, or made multiple orders of magnitude smarter and more capable. And I'm wondering: what's the approach to the AI curriculum at the med school now that accounts for how rapidly these tools are changing?
Are you teaching students baseline understanding of how the LLMs work or something more detailed than that?
MINOR: It's both. You're right, things change almost by the minute, not to mention by the day, the month and the year, and certainly large language models are going to continue to evolve.
We've seen that just over the three years that they've been commonly available. The publicly available large language models today function a lot differently than they did when they first became available to the public. And one thing we introduce in the curriculum is: stay tuned.
Things will continue to evolve, but the principles of how to ask a query, how to engage in a dialogue with a large language model, those principles have remained relatively tried and true. Some have changed as the models have become more sophisticated. But certainly, we're going to need to update our curriculum.
We already know that. It's one reason we've made the curriculum largely web-based, in addition to in-person instruction, and we made it in a way that we can constantly update as more information becomes available.
CHAKRABARTI: Do you mind if we just step back for a second, Dr. Minor? I'm not going to take a guess as to when you finished your medical education.
But what was the state of the art, in terms of computational tools, when you first became a full white-coat-wearing young resident?
MINOR: It's a great question. When I went to medical school, not only did we have to memorize, of course, the mechanisms of action, the names of drugs that were used to treat a variety of different conditions, but also, we had to memorize the dosages, how many milligrams per kilogram?
That was ridiculous. The human brain is not built to retain arcane numbers, and it very much led to a number of dosing errors. Sometimes those errors were caught by great pharmacists who saw that a dose was really out of the appropriate range, but there were a lot of medication errors back then.
There were errors made because maybe the physician prescribing a medication didn't know that another medication the patient was on affected the dosing, and so a patient received too much or too little of the new medication. Again, pharmacists, others would oftentimes pick those up, but not always.
Now, everything I just mentioned has been totally transformed. Almost all medication ordering today is done electronically. You're interfacing either with an AI enabled system or with a system that already has been trained to adjust dosages based upon the patient's weight, to look for drug interactions, to flag those interactions to the physician or the other health care provider ordering the medication at the time it's ordered, and to further flag the pharmacist.
A lot of what had to be remembered, or done through a series of human-generated steps, is now handled by checks and balances built into the software, whether or not it's generative-AI enabled, that controls the way we actually issue a prescription for a medication.
That's just one example, and there were other examples where you had to rely upon your memory. Of course, when I went to medical school, it was in the days before the internet. We didn't even have the ability to do a simple internet search on a condition. You'd have to go to a textbook.
And that was good in a sense, in that it was active learning. But it really did not provide the richness of information that's available at everyone's fingertips today. Take, for example, the very illustrative case that you mentioned at the beginning of this program, of the patient who had streptococcal pneumonia but then probably had another bug causing the pneumonia that led to the readmission.
So those are things that, yes, skilled physicians would oftentimes pick up, but not always. And that's really, I think, the power of these AI tools.
CHAKRABARTI: Yes, agreed. And just to underscore something you said a little bit earlier: there's this vast body of what American health care is outside of the doctor's office or the surgical suite, in terms of prescriptions, insurance, et cetera. And I think AI is highly poised to make significant improvements in those areas. But what about what happens inside the doctor's office?
For example, we talked to Josh Cheema, a cardiologist at Northwestern Medicine who frequently uses AI in his practice. One problem that Dr. Cheema anticipates other physicians dealing with in the future, possibly even today, I'll say, is explaining to the patient how AI factors into their care, especially as AI tools become more accurate and prevalent in medicine.
JOSH CHEEMA: Also, a lot of these algorithms are what we call a black box, and that really is the case.
We don't know exactly what aspects the AI is using to make a determination. And we're not doing this now, I don't know any program in the country that's doing this, but let's say that we used an AI to determine whether or not somebody should get a heart transplant. Not just to think about whether they're sick enough, but to literally say yes or no. If the algorithm said no, we would need to be able to explain why.
CHAKRABARTI: And Dr. Cheema also told us that explaining why AI comes to a conclusion is going to be important. He's a cardiologist; imagine you're a patient and you get some very bad news about your heart from him, informed by an AI. You're going to want to know why the conclusion was made. But Dr. Cheema says it's not necessarily going to be about explaining the complicated computer algorithms to patients.
CHEEMA: I think there are medications out there whose mechanism of action we simply do not know. We do not know how acetaminophen, or Tylenol, works. We do not know how metformin, the most commonly used medication for diabetes, works. But we know they work.
We use them all the time. And so I care more that these AI systems have been validated externally. If we train one at Northwestern, I want to test it in different populations before we use it in other places. Now, whether I can or cannot explain to you exactly how the algorithm works or what it's looking at, to me that's less important than being really rigorous about the validation.
CHAKRABARTI: Dr. Minor, what do you think about that?
MINOR: I agree, and on the topic of validation we really take it one step further, because the statistical methods that underlie large language models mean that it's very hard for anyone, even the experts designing the models, to go in and explain exactly why a particular response was generated from a particular input.
And that's why you can ask the same question, you can pose the same conditions on multiple different occasions, and you may get slightly different responses. That's different from a more deterministic model. If you ask a simple machine-learning-based model to calculate the dosage of a drug based upon a patient's weight and height and other circumstances, you want to make sure that you're getting the same response every time. And that's the case because with a deterministic model, you can go through the algorithm and figure out every step.
And if there's a problem with it, you can then alter the steps to make sure the algorithm is right and appropriate. Large language models are much more statistically complicated, and so they don't lend themselves to that sort of deterministic, predictive understanding of how they're working.
That doesn't mean for a moment that they're not useful. I think what was said before is right: patients should always be aware of whatever technologies we're using. That includes probably the most common use of AI in medical practice today, one I think will be even more common in the future.
And that is using ambient AI to transcribe or prepare a note from a clinical encounter. That's been so liberating for physicians and other health providers, and it's been wonderful for patients. There's nothing more disconcerting for a patient than when you go in to see a health care provider and the first thing they start to do is type into an electronic medical record. They're not looking you in the eye. They're not really communicating with you; they're focusing on entering information into the medical record. With ambient AI, that becomes unnecessary, because in the background there is a note being prepared.
Patients always need to know that's the case. They need to know what's going to happen to the recording after the note is prepared, and they need to be able to view the note along with the physician to make sure it's accurate. Those are a few examples. There are going to be many others.
It is very important, though, for trust and for the appropriateness of medical care that our patients understand how large language models are being used and why they're being used, and that they have a say in their use in the care that a physician is providing.
CHAKRABARTI: Yeah, so Dr. Minor, you literally said the phrase that I was going to ask you about, which is the way that technology has already wormed its way in between a doctor and her patient.
Because the example you gave is one that I think many people are familiar with. They step into the doctor's office or they're at the hospital and they never actually see their doctor's face, because they're just looking at the screen the whole time, in part to put in those medical billing codes, et cetera, et cetera.
And not only is it disconcerting, but trust, as you mentioned, is a major part of moving toward successful outcomes, right? Because if the patient doesn't have trust in their care team, they're just less likely to comply with what the care team says. And to that point, you're facing a reality as the dean of a medical school where we are not just using technology anymore; we are immersed in a world where technology is almost in the air that we breathe, in terms of how much we're using it, how we don't even know when we're using it, but it's helping to facilitate our lives.
Could there be a negative outcome, in terms of AI actually making doctors less skillful? That's an issue that was raised by Dr. Raja-Elie E. Abdulnour from Brigham and Women's in Boston, who we heard from earlier. Here's what Dr. Abdulnour says.
ABDULNOUR: A real-world example is a research study where they looked at endoscopists.
So, gastroenterologists doing procedures, doing colonoscopies. In some institutions they have incorporated an AI tool that helps them detect adenomas in the colon. These are pre-cancerous lesions. And in that study, what they did is that they removed the AI, and then they looked at whether removing the AI affected the ability of the physician to find these adenomas.
And what the researchers found is that the adenoma detection rate dropped by 30%. This was, to date, one of the best examples of how AI can deskill.
CHAKRABARTI: That's an example, Dr. Minor, of over-reliance on AI to do the fundamental work of medicine. Your thoughts?
MINOR: It's a great point and we've had this dilemma before.
For example, electrocardiograms. Virtually every electrocardiogram done today, whether it's done in a physician's office or in a hospital, immediately comes out with some sort of computer-generated read, based upon trained algorithms that have looked at millions of electrocardiograms. That doesn't mean that the physician or the cardiologist or whoever is ordering and looking at the electrocardiogram shouldn't be interpreting it themselves and checking to see whether the reading actually corresponds to this particular situation.
And we've had that available now for well over a decade. But the tools that are available today, the large language models, the visual models that are looking for adenomas in the example you just mentioned, are going to take that to a whole new level. With the models that are used to interpret radiology studies, chest X-rays, CT scans, MRIs, we're going to have to be very careful about making sure that physicians still have the knowledge base that is needed to actually question a model. To understand the context of what the model is producing, relative to other things that are going on with the patient.
We're going to have to be very careful about making sure that physicians still have the knowledge base that is needed to actually question a model.
Dr. Lloyd Minor
And so for us, and I think for other medical schools in the United States, that has not diminished the knowledge-based aspects of the curriculum, which still form the core of medical education.
How long that'll be the case, and how we'll train students and other health care providers in the future, remains to be seen, based upon how the models evolve and how they're used in practice.
But today, and you mentioned this at the top of the program, why is it that a larger percentage of undergraduates than medical students are learning about large language models? Part of it is this understandable reluctance to move away from aspects of the curriculum that have produced consistently high-quality physicians and other health providers.
CHAKRABARTI: Dr. Minor, do you mind if I ask, what was your specialty when you were in clinical practice?
MINOR: Sure. I'm an otolaryngologist, and I focused in particular on disorders of the ear and of the balance system. And I'll tell you an example. I first realized that large language models were really going to change things in the spring of 2023, six to nine months after large language models first became available through ChatGPT and then other models. I was asked to give a review of some of the discoveries I had made earlier in my career, describing an inner ear disorder called Superior Canal Dehiscence Syndrome and developing a surgical procedure to fix it. And I put a lot of time into organizing this review of both the work that I had done and the work that had been done since I moved into other areas of leadership.
The last thing I did was to ask a large language model: What is Superior Canal Dehiscence Syndrome? And in the response I got, I recognized a lot of the words. The response was really good in terms of the way the information was assimilated, and that told me this isn't an incremental advance.
This is a fundamental leap forward.
Part III
CHAKRABARTI: Dr. Minor, you mentioned earlier a phrase which I think is vitally important, about when to question the models, right? That this has to be part of medical school education now as well. And that's also something that was echoed by Glenn Cohen. He is a deputy dean at Harvard Law School who studies ethical issues raised by AI adoption in medicine.
GLENN COHEN: Often this is presented as an inevitability. The idea that actually these will be integrated and your job as a physician is to be ready for it when it comes in, as though it's like part of the physical plant, like the four walls of the structure. But actually, I think it's really important to empower physicians and nurses and others in the health care system to understand that they have a stake in this and they should have a stake and a say in the governance and the adoption decision.
CHAKRABARTI: Cohen argues that if medical schools teach AI as something students just need to adapt to, they risk sending clinicians into the workforce as passive recipients or even victims of technology.
COHEN: AI makes a lot of mistakes. AI is bad at explaining what it's doing, and oh, by the way, when it comes to bias, AI is often a little bit racist, but guess what?
All those things are also true of your physician. The real question is, what combination of physician and artificial intelligence improves along those dimensions in a way that makes medicine both more effective and more safe? But also, more just and more equitable in terms of people who experience the health care system.
And to me, that is a heavily empirical question.
CHAKRABARTI: And because those questions need to be answered with real data over long periods of time, Cohen says doctors need to know how to evaluate an AI tool.
COHEN: It's not like you're buying an AI; it's much more like you're hiring an AI. And just as you would never stop your evaluation of a physician or a nurse after the interview stage, you wouldn't say, they seem great in the interview, I'm sure it's going to be great.
You would look at their performance over time and try to find out if there are problems over time. The same is true with AI. We have this kind of obligation to be monitoring it over time, and not just treat early results, or even preclinical results, as really good indicia of quality and of how it's actually going to work out for patients.
CHAKRABARTI: And even when the results look impressive, Cohen argues that the deeper question is whether those discoveries change actual outcomes. Are patients living longer, or is an AI system simply flagging more cancers that are too far along, or too expensive, to treat?
Is technology expanding access or amplifying existing inequality? Those questions point to the overarching promise and peril of AI.
COHEN: Rather than making really good doctors even better, the big value, ethically speaking, would be to democratize the expertise of even pretty mediocre doctors in America. Just being able to enable people in rural places, people who have poor access to health care, people in low- and middle-income countries, to access the kind of knowledge that even mediocre U.S. physicians have would be a game changer.
And so you have to ask yourself, when we're teaching students about it and asking them to imagine a future with AI and imagine themselves as part of an AI workforce and part of the people building AI, we want to also leave them with the idea that their goals and their targets should also be guided by ethics.
CHAKRABARTI: That's Glenn Cohen, deputy dean and professor at Harvard Law School. Dr. Minor, what do you think?
MINOR: I agree, and I would amplify two points. One, absolutely, we should be checking what we're actually getting from the AI we're using, and that's going on today. We mentioned before the example of ambient AI used to transcribe a note from an encounter with a patient.
That note has to be reviewed by the physician, and oftentimes it's reviewed by the patient as well, before it's finalized and corrected. Those corrections are actually used to train the large language model to make it better the next time. But there has to be the human interface, the human interaction, and a checking of the results that are coming from the model.
The second point I would make, in follow-up to what was just said, is that I do believe that AI, already, but even more in the future, will help to democratize care and extend access to care. And let me mention one thing we are doing. I grew up in Little Rock, Arkansas, and a few years ago I met Alice Walton, who lives in Bentonville.
I do believe that AI already, but even more in the future, will help to democratize care and extend access to care.
Dr. Lloyd Minor
She's the daughter of Sam Walton, the founder of Walmart, and Alice has done a number of things in Bentonville and in that region, including the Crystal Bridges Museum. But I met Alice in the context of her wanting to start a medical school. And over a relatively short period of time, about three years from concept to enrollment of the first class this summer, the Alice Walton School of Medicine was founded. There's a wonderful inaugural dean, Dr. Sharmila Makhija, and I'm the board chair for the medical school. We've been developing and deploying our AI-based curriculum with AWSOM, the medical school in Arkansas. One of the goals of that school, and of the health care system being built in conjunction with it, is to extend access: to bring specialty care, primary care and community-based care to communities that traditionally haven't had them.
And technology's going to play a big role in that, because there isn't going to be a cardiologist or a dermatologist near every community that needs to be served with these resources. And that's where, when appropriately deployed, I think AI can have a big impact.
CHAKRABARTI: Yeah, total agreement on that, because again, I see a lot of upside for AI, especially in medicine.
The number of uncertain variables that walks into a doctor's office when a patient comes in is vast, and any tool that helps a medical professional, whether doctor, nurse, PA or whomever, get a better handle on reducing that uncertainty is really important. But at the end of the day, it's still about decision making, right?
The same truth about being a doctor 50 years ago is true now, but perhaps in even sharper relief. So taking it back to the medical student, Dr. Minor, I wonder if you don't mind me taking a minute or two to share a personal example about decision making, at Stanford Medical Center, actually.
Is that okay?
MINOR: Please.
CHAKRABARTI: Yeah, go ahead. So in 2022, when we did our Smarter Health series, our second episode was all about a tool that had been launched at Stanford Hospital in 2020 called the Advance Care Planning model. I'm sure you're familiar with it.
Basically, every time a patient gets checked into Stanford Hospital, the ACP model evaluates them and their medical records.
This goes on behind the scenes, obviously, and it issues a piece of information about the probability of that person dying within the next year. Now, we talked to Dr. Steven Lin at Stanford, and he said it's not about predicting death. It is a tool to help physicians plan what kind of care a person might need if they are approaching the end of their life. So it's a really interesting and important tool. Okay, so here's the on-the-ground experience, if I may. Totally ironically, a few months after we did this episode, my mother had a major illness and was at Stanford Hospital for three weeks, and I just thought, oh my God.
Like she's being evaluated in part with the ACP model. And there was a point in her care when she was in very, very critical condition. She had cancer. And the attending physician in the ICU, when my mother first arrived there, came to my brother and me and said, overnight, we're not so sure about your mother's chances of survival, et cetera, et cetera.
And so we want to just talk with you about end-of-life options. And it was a very rational thing for the doctor to share with us. I can't say for sure, but I think it was informed in part by the ACP model's flagging of my mom. But the thing is, nothing in that model told the model or the doctor what her oncologist, my mother's oncologist, had said: that she actually had a good long-term survival probability.
The acute issue that had landed her in the ICU was treatable and separate from the cancer, et cetera, et cetera. So my brother and I had to step in and say, her oncologist said she could live for a while; please do everything you can. Do you see what I'm saying? I don't know how much AI went into the doctor's advice to us, and I do not fault her one bit.
But as a patient, it made me realize that we had information that the physician or the AI did not, and it strongly determined my mother's care. She left the ICU eventually, and not in a box, to be frank. How are doctors supposed to think about this? That they still won't have all the information about a patient, even when AI feels very knowledgeable and very accurate, given its computational and analytical power.
MINOR: First, I'm very pleased that your mom got better, and that through the care she received, and through the care you and your brother provided, she was able to improve, leave the hospital and get the care that she needed during what sounds like a very critical period of her illness.
You're exactly right, and I think the example that you describe brings up so many important points. At the very basic level, one of the goals of the software that you mentioned is to make sure that for patients who are in the ICU, there's some documentation that a goals-of-care discussion has taken place at some point.
Now, oftentimes that's done well before a patient gets severely ill. And in the case that you described of your mom, the oncologist saying, I believe this is a treatable cancer with long-term survival, that should be available to anyone taking care of the patient, including the emergency medicine physician at three o'clock in the morning who has never met your mom, or the other patients they are seeing, before that encounter.
That's the baseline goal: to make sure that those discussions take place, to make sure the discussions are appropriately informed, but at the end of the day, absolutely not to make the decision. The decision has to be made by the patient and their family, informed by a discussion with health care providers, but absolutely not made by a large language model or any other intermediary.
CHAKRABARTI: So let me put a finer point on it, because we raised this issue of decision making in a medical context with Dr. Shinjini Kundu, a neuroradiologist and assistant professor of radiology at Washington University in St. Louis.
Radiology has obviously been extremely quick to adopt AI, but Dr. Kundu says that, for now, it is critical for doctors to understand that they are the final decision makers.
SHINJINI KUNDU: So every physician takes the Hippocratic Oath, which says that they must uphold the highest standards for patient care and do no harm.
But AI tools are not bound by the same moral obligations. It's really up to the physician to contextualize the outputs and understand how they might apply to the patient. And I actually think that patients, as well as trainees, find it comforting to know that we can use AI like a second opinion, but they also want to know that a human physician is the one who's interpreting it, contextualizing it, and making decisions together with the patient.
Patients, as well as trainees, find it comforting to know that we can use AI like a second opinion, but they also want to know that a human physician is the one who's interpreting it, contextualizing it, and making decisions together.
Dr. Shinjini Kundu
CHAKRABARTI: Dr. Minor, my question for you is this: with every passing year, the students who come into medical school will, almost by default, arrive with AI use as an automatic skill, since it's so prevalent already. How does the curriculum at Stanford approach this question of how a physician should balance what AI brings to the overall information used to treat a patient, versus how and when a doctor should step in and say, no, I'm going to reduce the weight that I give to the information from the AI?
MINOR: I think it begins even before the student starts medical school, with making sure that students going into medicine have a real motivation based upon empathy, based upon a desire to interact with patients and families, and to be the type of information provider and caregiver that really demonstrates a humanistic approach to medical care.
I'm optimistic that technology, and AI specifically, will help to return that humanism, and help make it possible for physicians to devote more of their time to interacting with patients and less of their time to interacting with the information systems and the technology systems that surround us today and that are not going away.
We wouldn't give up electronic medical records. Moving away from a paper-based system was absolutely the right thing to do, though there were a number of unintended consequences. Hopefully AI will help to resolve some of those, so that when you go in to see a physician, they're actually looking you in the eye and communicating with you, and not typing into a computer.
CHAKRABARTI: Sorry, Dr. Minor. Forgive the interruption please. It's just that we only have a minute and a half left. I did want to sneak in one more question for you.
MINOR: Go ahead.
CHAKRABARTI: So there you are at Stanford Medical School: Stanford University, Silicon Valley. How do you resist some of the techno-utopianism that comes out of Silicon Valley and the AI companies now? They are promising literally the world, but working with the human body will forever remain a realm of uncertainty.
Does the curriculum warn students against the overpromises of AI?
MINOR: Look, we should always listen to what others are saying. That doesn't mean that we should agree with them or adopt what they're saying. But there's a lot we can learn from people who have transformed technologies, who are bringing information science to a wholly new level.
But at the end of the day, it comes down to why people go into the practice of medicine. And that should still be driven by a desire to really interact with people, to have relationships with patients and families that no one else has, and to make sure that the sanctity of those relationships is preserved, no matter what the technology is.
The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.
This program aired on December 1, 2025.

