Sorry to those who may have seen this post already; something glitched out on my posting of this yesterday and so it didn’t go out to my subscribers as it was supposed to. Lisa
Last Thursday I was sitting in my classroom at the UC Berkeley Haas School of Business with my co-instructor, Dr. Jeff Rideout; we co-teach an MBA-level class called “The Changing Healthcare Economy” and together we were watching a great guest lecture by Dr. Andrew Litt, Dell Computer’s Chief Medical Officer.
The focus of Dr. Litt’s presentation was the imperative to automate healthcare through the proliferation of a variety of forms of information technology. In particular, he spoke about how essential it is for doctors to have access to analytics at the point of care that give the physician detailed information in order to more rapidly diagnose medical conditions and prescribe appropriate treatments. Dr. Litt illustrated the importance of such technology by showing a picture of a brain MRI where the symptomatology evident in the image, absent any other information about the patient except age and sex, could lead a physician to one of more than 20 different diagnoses. Only by adding information, like medical history and specificity of symptoms–information that is often unavailable to the radiologist reading such an image–is it possible to narrow the diagnosis down to the right one. This is particularly important where the physician is less experienced and more prone to make a mistake without a robust set of data.
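To make the point concrete, here is a minimal, purely illustrative sketch of the idea: an imaging finding alone is compatible with many diagnoses, and each added piece of patient context prunes the differential. The candidate diagnoses, the “clue” names and the scoring rule below are all invented for illustration; they are not drawn from Dr. Litt’s talk or from any real decision-support product.

    # Hypothetical sketch: narrowing a long differential diagnosis
    # as more patient context becomes available.

    # Each candidate diagnosis is paired with context clues that support it.
    # (Invented examples; a real tool would have 20+ candidates and far richer data.)
    DIFFERENTIAL = {
        "multiple sclerosis": {"age_under_50", "episodic_neuro_symptoms"},
        "small vessel ischemic disease": {"age_over_60", "hypertension"},
        "CNS lymphoma": {"immunocompromised"},
        "migraine-related lesions": {"headache_history"},
    }

    def narrow_differential(patient_context: set) -> list:
        """Rank candidate diagnoses by how many of their supporting
        clues appear in what we know about the patient."""
        scored = [
            (len(clues & patient_context), diagnosis)
            for diagnosis, clues in DIFFERENTIAL.items()
        ]
        # Keep only candidates with at least one supporting clue, best first.
        return [dx for score, dx in sorted(scored, reverse=True) if score > 0]

    if __name__ == "__main__":
        # With only the image, every candidate remains in play; once history
        # is added, the list shrinks to the diagnoses the context supports.
        print(narrow_differential({"age_under_50", "episodic_neuro_symptoms"}))

The point of the sketch is simply that the data the radiologist often never sees (history, symptoms) is exactly what does the narrowing.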
I love teaching my Berkeley class because the students come to it with such a different perspective and way of thinking than I do, or than so many of the people with whom I interact. While many of the students have had at least minimal exposure to both healthcare and finance issues, they ask such insightful questions because they have not been immersed in the same world that I have been for more years than I care to admit. It is amazing how one-dimensional one’s thinking can get. For instance, it would not occur to me to really question the validity of using technology to improve the quality of a physician’s performance. I just believe that technology can, by and large, improve the delivery of healthcare. But one particular student in class last Thursday raised his hand and said, “At what point do physicians get so reliant on technology that they start to get dumber? I read a study about airline pilots that showed that the more they relied on autopilot and the like, the worse they got at flying planes. At what level of technology does this happen to doctors too?”
Well, that was a damned good question, and one I had not really thought of or heard before. It is not exactly rocket science to think that if you take the human touch out of medicine it will be of lesser quality, but I had not taken a step back myself and thought about this issue from a practical standpoint. In fact, the way I usually think about it is quite the opposite. I once had a doctor for a boss who frequently used to say, “by definition, 50% of doctors are worse than average.” Again, a pretty intuitive thought, but not the way we usually put doctors in perspective. My thinking has generally been that it is the bottom 50% we are fixing by bringing technology into the picture, while we further augment the skills of the top 50%. Philip (the student with the good question) reminded me that we could accidentally send the top 50% down to the lower echelon if we take technology too far.

When I got home I read up on the study on airline pilots. Here’s what one article about it says (a few excerpts; you can read the full piece by clicking HERE):
Automated flight systems and auto-pilot features on commercial aircraft are causing “automation addiction” among today’s airline pilots and weakening their response time to mechanical failures and emergencies, according to a new study by safety officials. This dangerous trend has cost the lives of hundreds of passengers in some 51 “loss of control” accidents over the past five years, the report found.
Rory Kay, an airline captain and co-chairman of a Federal Aviation Administration committee on pilot training, told the Associated Press that pilots are now experiencing “automation addiction.” “We’re seeing a new breed of accident with these state-of-the art planes,” Kay said. “We’re forgetting how to fly.”
The technology behind the auto-pilot on commercial aircrafts only requires pilots to do approximately three minutes of flying — during take-off and landing – which has contributed heavily to the number of “loss of control” accidents, such as the crashing of Air France flight 447, which nosedived 38,000 feet into the Atlantic in June of 2009. As flight 447 soared through powerful storms over the Atlantic, the plane’s autopilot suddenly disengaged and a stall warning activated. The senior co-pilot then said: “What’s happening? I don’t know, I don’t know what’s happening.” The pilots then pulled the plane’s nose up, when the correct procedure during a stall is the exact opposite: nose down. The co-pilot was yelling “climb, climb, climb!” but was interrupted by the captain, who said: “No, no, no — don’t climb.” The plane slammed into the ocean, killing all 228 on board. A report by France’s Bureau of Investigations and Analysis indicated that there were no mechanical problems with the plane, which would not have crashed had the pilot responded correctly.
The new draft study by the FAA says that pilots often “abdicate too much responsibility to automated systems.” It also found that in more than 60 percent of accidents pilots had trouble manually flying the plane or made mistakes with automated flight controls….FAA’s report recommends that pilots take control of the airplane more often in order to keep their skills sharp — so they are prepared to react when the computers cannot.
What an interesting issue. The very tools that were supposed to enhance the performance of pilots have actually led to their decline…to their becoming “dumber.” This is not the first example of this problem, of course.

For one thing, I have definitely become dumber since I started using my computer to spell-check everything I write and calculate any math I need to do. I can barely remember how to spell or determine a percentage anymore; thanks to Al Gore I can look up everything I know on the Internet and don’t need to remember anything. No doubt in my mind that people who rely solely on email and texting to communicate have become dumber; the English language has become so butchered as a result of the use of technology that all I can think when I get a text is, “vowels must feel so left out.”

Many attribute the decline in the quality of the stock market to computer-based or high-frequency trading. There is a huge movement afoot in healthcare to apply comprehensive predictive algorithms to massive amounts of data in order to deliver the answer into the hands of a doctor. In the financial markets they don’t even bother with the last step anymore – the step where a person makes the final decision about the action to take. About 70% of stock trades are made by computers using comprehensive predictive algorithms applied to mass quantities of data with no human intervention whatsoever. The result? Well, many experts attribute the so-called Flash Crash, the day the Dow Jones Industrial Average suffered its second largest intraday trading drop and then recovery (May 6, 2010), to algorithmic trading gone wild. Many financial experts believe that removing the human from most of the trading process has produced a level of volatility that has, by and large, rendered the stock market sub-functional, dumber, and subject to future crashes caused by what amounts to the love child of Gordon Gekko and the Terminator.
When asked about this issue of technology dumbing down doctors, Dr. Litt acknowledged the risk, but responded by saying that physicians are so intensively trained that the training would counteract the effect. But that, to me, is a questionable position to take. Pilots get pretty intensive training too. And no doubt, upon graduating flight school, some are better than others…the proverbial 50% are better than average while 50% remain worse than average. Doctors will have a similar outcome, with both great and mediocre ones coming out of medical school into the light of day. The problem arises when previously better-than-average doctors let their skills get rusty because they let IBM’s Watson, or any of the other algorithmic tools hitting the market, tell them what is happening with their patients every time. Watson may be right most of the time, once implemented, but not even a computer is right every time. And yet when most of us get an answer from a computer we tend to believe it, even when our head tells us something isn’t right.
Medicine is very much an art, not a science, even today. Every treatment plan is better informed when there is an understanding of the patient’s personal situation, socioeconomic profile and ability to accept and comply with treatment. Patients already report that they hate it when a physician spends the visit looking at a screen instead of their face. While physicians’ performance can and must benefit greatly from more access to data, better decision-support tools and the output of next-generation sensors, they will still need to look the patient in the eye and consider the human factors when providing a diagnosis and treatment plan. If doctors start acting like the medical equivalent of an ATM, just letting the machines spit out the diagnoses and prescriptions, they risk losing the knowledge and intuitive sense that help them figure out and address the particularly complex situations. We may be able to replace bank tellers with ATMs, but every once in a while we still seek the counsel and help of the old-fashioned human version to get the task done right. It is not hard to imagine the first medical liability lawsuit where the patient’s claim is that the diagnosis came straight from the computer program, was flat-out wrong, and probably would have been obvious had the physician only questioned the results based on what they should have known.
The bottom line, of course, is that it is essential to put technology in the hands of doctors to enhance care, but also to modify their training and ongoing education to ensure the right balance is maintained…that they keep their skills sharp and evolving despite whatever assistance technology may provide. Technology: we all love it, but beware the dangers of automation addiction, because it can make us stupid. No doubt there’s money in it for whoever programs the computer that develops the 12-step program to treat it.
darn, I was hoping this really was chapter two on technology addiction, nice one,
Thanks Kate. Chapter two is when the cyborg doctors take over the earth!
Lisa:
Interestingly, I came on to get to know you. I recently got a call from your assistant asking me to send an Exec Summ about our company. We are a medical informatics company that, yes, assists physicians with a decision support technology (as well as life scientists, hospitals, insurance companies and patients), and it is governed by a pilot’s daughter (me). While my reply may cost me an investment from Psilos, it’s worth it to me. I don’t know how many people read this, but if this post helps a handful, I’m good with that.
Airplanes haven’t really changed that much. The lift, instruments, take-offs and landings of any airliner today aren’t that far off from a Kittyhawk. New landing gear, better navigation systems, better aerodynamics, faster propulsion, bigger – certainly. But the mechanics of flight haven’t dramatically changed.
One could say the same about the human body. But when physicians use computational analysis, they aren’t looking for direction on how to treat as an answer. They are looking for the effects of a disease or condition based on an individual’s other conditions, the several drugs they’re on and – what needs to be taught in medical schools – genomic influences or risks. There are billions of pages of information on clinical trials that we call the scientific literature. Combine this with medical records, genomics, biologic, chemical…blah, blah – about 100 different massive data libraries that can probably not only heal a condition, but potentially a disease. And in that way, the human body has changed more than the airplane. From HIV to multiple forms of cancer, to untreatable forms of an age-old disease, TB – cellular structure and response does change.
Pilots carry a black bag in the airlines, a green one in the Air Force. In them are the manuals needed to fly a plane. The manuals needed to quickly and accurately treat a single patient using evidence and individualized care (based on all of the specifics about that person) would be enough to fill two or more Empire State Buildings. Genetic data alone is enormous. Ask someone at Illumina.
What I find more interesting about your student’s question is: “Isn’t that what we want? Automated physicians.” The systems in place that guide treatment direction are purchased for physicians, not by them – by hospitals and insurance providers seeking to control costs without killing or maiming anyone, which is to say, to control liability. After all, the system isn’t sitting with the patient and has never been to medical school, so it can’t actually treat the way autopilot can fly a plane. But universal healthcare means that access to services has got to be reduced. And health systems are doing this using hundreds of millions of dollars’ worth of technologies.
It is my greatest, most sincere hope that one day you will be able to tell a similar student that health administration is automated to make it cost-efficient, and that doctors practice medicine. As a science.
Jayme, many valid points here. I think the question the student was raising, however, was how do you address the potential problem that some doctors will rely too much on what the technology suggests, e.g. decision support technology, and not apply their judgment and experience to override it when the inputs don’t jibe with their real-life knowledge. Technology is without a doubt key to getting costs under control, improving patient care and reducing inappropriate practice variation. It just isn’t 100% of the answer. Lisa
Lisa:
I agree – decision support isn’t the singular solution and should be used to expand knowledge, not replace it. It absolutely shouldn’t be 100% of the answer; ironically, it is taking away from clinical judgment, which has to be constantly fed with the latest information on diseases, treatments and outcomes. Today treatment is determined by access, which is based on insurance coverage. If this becomes the “standard of care,” why go to medical school? The pendulum is in full swing, no doubt, from one extreme to the other, and it must come to a balance that centers care on treating the individual. I look forward to continuing our conversation personally.
All the Best –
Jayme