Scott R. Steele, Justin A. Maykel, Bradley J. Champagne and Guy R. Orangio (eds.)Complexities in Colorectal SurgeryDecision-Making and Management10.1007/978-1-4614-9022-7_36
© Springer Science+Business Media New York 2014
36. Facing Our Failures
Department of Surgery, University of Minnesota Medical School, Minneapolis, MN, USA
Division of Colon and Rectal Surgery, Department of Clinical Surgery, Temple University Health System, Red Bank, NJ, USA
In today’s complex medical environment, the potential for error is high. Accountability for patient outcomes extends beyond the individual to include multiple people and the healthcare organization’s systems. As we struggle to accept and attribute accountability and to learn from mistakes without blaming individuals, we inevitably encounter issues of fairness, blame, and guilt. A “Just Culture” framework may prove valuable for managing these contentious and emotional issues equitably. Physicians who are highly trained, skilled, and caring, and who competently manage the technical aspects of patient complications, are often poorly prepared to face the emotional toll that accompanies harm-producing mistakes. This chapter provides advice and specific recommendations to help physicians and their organizations face their failures.
Professionalism has evolved over the course of history, but only in the past century has the medical profession accepted the principles that a physician is accountable for patient outcomes, including failures, and that the public has the right to know those outcomes.
More recently, it has been recognized that medical failures may also arise from system errors, inadequate communication, and poor organizational culture. Consequently, healthcare organizations are now also held accountable for their systems, processes, facilities, infrastructure, and teams of providers and staff who must be properly trained and deployed.
More accurate methods to assess and track quality and safety are available, but honest self-reporting by providers and healthcare organizations is essential to assure all failures and vulnerabilities are evident. Only then can we optimally use lessons learned from our shortcomings to improve quality and safety outcomes.
By following the principles of “Just Culture,” healthcare organizations can effectively and equitably manage the many issues arising from medical failures, mitigate the emotional and physical toll on physicians and healthcare workers, and use their influence to modify the education and residency training programs to better prepare physicians to meet the public’s demands for safety, transparency, and accountability.
Specific immediate measures for physicians to take following a significant medical failure are outlined. Long-term recommendations to optimize full mental and physical recovery of physicians and other caregivers are detailed.
“Every surgeon carries about him a little cemetery, in which from time to time he goes to pray, a cemetery of bitterness and regret, of which he seeks the reason of his failures.” (René Leriche)
Facing Our Failures
Key Concept: Many things you will encounter are not taught, but must be prepared for to help both you and your patient.
The unexpected ringing of the bedside phone jars you awake at 1:35 am. The tense voice of a nurse reports that your patient is not doing well and is being transferred to the intensive care unit. Never mind that when you last saw him 8 h ago, he had seemed to be making an uneventful recovery 4 days after a low anterior resection of a rectal cancer. Now, fully alert, you recognize this is a life-threatening complication. As you are driving the 15 min to the hospital, you speak to the ICU team and learn the patient’s status is deteriorating rapidly. Options are weighed, orders are given, and you push the accelerator pedal harder as you mentally review the patient’s other medical conditions, the operative details, the intraoperative decision-making, and the initial hospital course looking for a hint of what has happened…of what has gone wrong. At this point, fear and self-doubt creep into your consciousness….
Change the details and virtually all physicians can relate to such a scenario. We are, after all, human. Perfection is not possible and failures will occur. It goes without saying that the patient who is harmed by our shortcomings is the primary victim of medical failures, but that is not our focus here. Instead, our purpose is to help physicians and medical organizations understand how to mitigate the emotional turmoil associated with medical errors and how to learn from our failures and improve our outcomes. Regrettably, these subjects are lacking or minimized during medical school and residency training, and practicing physicians rarely discuss them in a supportive and useful way. Our intent is to at least partially fill this void. While this chapter applies to all care providers, it is written from the perspective of the physician and specifically for the surgeon.
Historical Context: Professionalism and Accountability
Key Concept: A historical review provides the context to understand how most physicians of today have come to accept the belief that professionalism and accountability are inextricably linked.
Until recently, medical practitioners were the antithesis of what we now call professionals, and accountability for outcomes was not a part of their ethos. More rigorous, scientific training and changing societal expectations compelled physicians to accept ever-increasing accountability for patient outcomes as part of their professional responsibility. As noted below, this fundamental change did not occur quickly or easily. In the early 1900s, a few visionary and tenacious physicians insisted that patient outcomes be reviewed openly and objectively so they could learn from each other’s mistakes in a nonpunitive and educational forum. These principles were embodied in what became known as Morbidity and Mortality (M&M) conferences. At the time, this was a radical idea viewed by many physicians as a direct threat to their autonomy. Nonetheless, the idea was gradually embraced and in 1983, the Accreditation Council for Graduate Medical Education (ACGME) made M&M conferences a required component of all training programs.
Antiquity Through the Middle Ages
The notion that physicians are accountable for medical outcomes may have originated from the Hammurabi Law Code of the Babylonians, circa 1780 BC. The Code included 17 laws detailing a physician’s responsibilities and establishing the concept of civil and criminal liability for improper and negligent medical care. It is the first known written attempt to regulate medical practice and to call for accountability: “…if a physician performed a major operation on a lord…and has caused the lord’s death…they shall cut off his hand.”
The ancient Babylonians’ effort to require accountability of medical practitioners was unique. Several millennia passed before subsequent societies and governments developed comparable codes. In medieval Europe, barber-surgeons, largely illiterate men whose only training consisted of a short apprenticeship, provided bloodletting, crude wound care, teeth extraction, abscess drainage, and enema administration in addition to cutting hair and trimming beards. They performed these very limited procedures without specialized knowledge, formal surgical training, or any oversight.
Middle Nineteenth Century
As late as the mid-nineteenth century, elective surgery was extremely rare. The germ theory was still unknown, and anesthesia to control the associated pain was in its infancy and generally not available. Half the patients who underwent “serious procedures” such as an amputation died, usually from surgical infections. Most Civil War military surgeons learned the essentials of trauma surgery in battlefield locations, working in isolation without help or supervision. Conducting a speedy operation was critical to the patient’s surviving the procedure, but there was little to be done to prevent a subsequent lethal infection. The dismal outcomes improved as the military developed more efficient evacuation and transportation of wounded soldiers to crude field hospitals where help was available and where ether and chloroform, or a mixture of the two, was used as a drip anesthetic. Military surgeons formed societies to track outcomes and share information that improved survival over the course of the prolonged and brutal Civil War. Unfortunately, this concept of assessing outcomes to make changes to improve care of patients did not transfer to most civilian practices.
Late Nineteenth Century
The beginnings of what we might now call “modern surgery” date to the latter quarter of the nineteenth century, when anesthesia and antisepsis were accepted in major medical centers in Europe. The most famous master surgeon of the time was Theodor Billroth, an Austrian, who had adopted Lister’s antiseptic procedures and developed new operative techniques for major abdominal surgery that he performed safely and with excellent outcomes. His apprentices underwent several years of rigorous, scientific training. By contrast, formal medical training simply did not exist in America, and elective surgery was so rare that in 1889 there were fewer than ten physicians in the entire country whose practice was restricted to surgery (p. 98). The future of surgery in America seemed bleak, and it is understandable that accountability for outcomes of elective surgery was a moot point in the chaotic, unregulated, low-volume surgical “system” of the late 1800s.
William Osler and William Stewart Halsted were charged to open the new Johns Hopkins Hospital in 1889 and its associated medical school in 1893 in Baltimore, Maryland. They insisted upon radical changes to bring “modern medical and surgical training” to the United States. Admission to medical school required that the student had first excelled while earning an undergraduate degree, and the medical school curriculum included laboratory experiments, anatomic dissections, reading original medical journal articles, and discussing the issues raised by the articles with the faculty. Osler and Halsted instituted the graduated responsibility residency system to train young physicians and surgeons at the newly opened hospital. The “pyramid system” used in surgery assured intense competition among trainees, as only a select few were allowed to complete the entire program (p. 107). Their model was soon followed by a few other institutions, but most medical schools were still proprietary and of poor quality. Accountability for outcomes was not a concern for most.
Early Twentieth Century
Abraham Flexner, a research staff member of the Carnegie Foundation for the Advancement of Teaching, was directed to assess medical education in North America. To do so, he visited all 155 medical schools in the United States and Canada, most of which were proprietary, “for-profit” organizations. His comprehensive, scientific review, delivered in 1910, was highly critical of the nonscientific approach used in the American system of medical education. Flexner advocated formal analytic reasoning coupled with a strong clinical phase of training in academically oriented hospitals as the two essential elements needed to train physicians. He considered research an important but subsidiary element that could lead to improved patient outcomes. The changes he recommended to improve the standards, organization, and curriculum of North American medical schools had a profound impact, causing many medical schools to close and most of those remaining to enact fundamental reforms. As a result, quality improved and medicine was, for the first time, growing into a respected profession in the United States. Accountability for outcomes was still a vague concept, but the future of surgery was no longer in doubt.
By the time of the Flexner Report, major hospitals were performing increasing numbers of complex operations. For example, at the Massachusetts General Hospital (MGH) in Boston, annual surgical volumes averaged 39 procedures between 1836 and 1846, but in 1914 more than 4,000 surgical procedures were done at MGH. The strict requirements for a university undergraduate degree followed by rigorous medical school education and prolonged, focused residency training established surgeons as medical professionals who were dedicated to their patients’ well-being and deserving of their patients’ absolute trust. The public generally agreed that physician autonomy was an undisputed right, and physicians defended such authority as essential to do their jobs, including overseeing outcomes.
Accountability was generally left to autocratic chairs of departments and strong-willed hospital administrators who made unilateral decisions behind closed doors, both to assign blame for events that went wrong and to determine the consequences to the trainee or surgeon, including dismissal. The patient, their family, or other representatives of the public rarely asked questions and were rarely, if ever, made aware of failures or allowed to be part of deliberations where poor results were discussed. This approach to accountability was not only arbitrary and inequitable but also led surgeons to remain silent and hide errors, both from their superiors and from their patients, rather than risk punishment or loss of authority.
In the early 1900s, Dr. Ernest Codman at Boston’s MGH developed a case report system to track outcomes. He proposed a simple but profound idea, “The common sense notion that every hospital should follow every patient it treats, long enough to determine whether or not the treatment has been successful, and then to inquire, ‘If not, why not?’ with a view to preventing similar failures in the future” (italics from Codman). He was convinced that reviewing details of bad outcomes would improve patient care, prevent repetition of errors that lead to complications, and modify physician behavior and judgment. This approach required a fundamental shift in thinking about medical error and failures, a challenging adjustment for the autocratic medical establishment to accept. In fact, Codman resigned his position at MGH in 1914 because they refused to accept the “End Result” system or his suggestion that the system be used to evaluate surgeon competence and determine promotions. Nonetheless, Codman’s ideas contributed to the increased standardization of hospital practices, and by 1917, at least some hospitals and physicians were willing to review autopsy findings together and to discuss their errors at so-called “morbidity and mortality” (M&M) conferences (p. 269).
Middle Twentieth Century
The time from the 1920s to the 1980s is considered the golden age of surgery by some. There seemed to be no limit to what scientifically based and technically adept surgeons working with biomedical engineers could do to improve the lives of every person. No human condition seemed too daunting for innovative, surgical teams to tackle. Cancers could be cured, open-heart surgery evolved from rare and high risk to routine and relatively safe procedures, complex neurosurgery was done with low morbidity, joint replacements became commonplace, and organ transplantation was highly successful. Military surgeons and other investigators found ways to successfully manage major trauma, massive blood loss, shock, malnutrition, and infection.
As these new developments revolutionized patient care, they also increased the complexity and dangers of delivering that care. Physicians, prompted in part by their own recognition of the increased potential for error, by patient advocates, and by increasingly common and expensive malpractice lawsuits, slowly accepted more accountability for poor patient outcomes. M&M conferences, begun in 1917, evolved to become an accepted method for hospitals and training programs to meet their responsibility to be accountable for adverse outcomes. As a result, the ACGME made M&M conferences a required component of all training programs by 1983. The underlying objective was the same as that recommended by Codman, i.e., to enable confidential, peer review of adverse outcomes in an open, objective, nonpunitive, and educational forum with the goal of improving patient outcomes [8, 9].
Assessing Safety and Quality: Highlights
Key Concept: In the past three decades, patient advocates increasingly raised concerns that our profession was failing to consistently provide high-quality, safe outcomes.
Critical analysis of national data confirmed their concerns and put pressure on physicians and researchers to find more reliable methods to meet the public’s expectations of error-free care. Soon, new methods were developed to more accurately track outcomes in a risk-adjusted manner and compare one organization’s results to others across the country. This effort led to countless new organizations devoted to improving safety and quality of medical care. Inevitably, new rules and regulations were written and new terminology emerged.
Public Perception and Influence
Until recently, Americans rarely questioned the authority, treatment decisions, or outcomes achieved by their physicians. They trusted their doctors and believed no other country could match the capabilities of the American healthcare system. While people generally understood that the practice of medicine is imperfect and that failures and complications are inevitable consequences of caring for sick patients, they also assumed that the medical world supervised a standardized, highly effective system to monitor results and prevent errors. The medical profession generally reinforced these societal assumptions, proudly pointing to M&M conferences, certifying examinations by various specialty boards, and numerous hospital rules and processes as examples of how the profession monitors its members to assure the public of safe, high-quality outcomes. Unfortunately, the safety net we had traditionally relied upon did not always keep pace with the evolution of modern medicine and its increased complexity and risk. Sporadic reports of tragic cases of errors resulting in major harm or death prompted some to question how this could occur in the American health system. For example, after the in-hospital death of an 18-year-old patient in 1984, a lawsuit now referred to as the “Libby Zion case” was directed against a teaching hospital in New York. Contentious issues arising from the case included alleged lack of appropriate supervision of trainees and excessively long resident work hours resulting in poor decision-making because of fatigue. Ultimately, a New York state regulation was passed to limit resident physicians’ work to 80 h per week. In July 2003, the ACGME adopted similar regulations for all accredited medical training institutions in the United States.
A major public challenge to the optimistic belief that American medicine was “best in the world” attracted the attention of Congress in the mid to late 1980s. There was a growing public perception that the surgical care provided in the 133 hospitals overseen by the Department of Veterans Affairs (VA) was characterized by excessive surgical mortality and morbidity. Public pressure forced Congress to review the matter. After confirming the safety concerns were legitimate, Congress passed Public Law 99–166 mandating that the VA annually report its surgical outcomes on a risk-adjusted basis to account for patient comorbidities and compare them to national averages. This was no small task, since there was no risk adjustment model for surgical specialties nor were there national averages to use for comparison! To the great credit of the surgeons, statisticians, and other researchers at the VA, the National VA Surgical Risk Study (NVASRS) was launched in 44 VA medical centers to correct these two deficits and simultaneously improve surgical quality across their system. The success of their efforts led to establishment of an ongoing program, the National Surgical Quality Improvement Program (NSQIP), in 1994. The VA centers reported a 27 % decrease in operative mortality and a 45 % drop in morbidity rates from 1991 to 2000 as a result of their efforts, a resounding success by any standard.
Institute of Medicine Report
Somewhat surprisingly, the concerns about surgical quality in the VA had little impact on the private sector. Both the medical profession and the public apparently assumed the poor quality was confined to the VA system. Thus it is understandable that the 2000 Institute of Medicine (IOM) report, To Err Is Human (p. 31), shocked the lay public when it bluntly concluded that health care in the United States is not as safe as it should be – and can be. Using estimates from two major studies, they concluded that at least 44,000 people, and perhaps as many as 98,000 people, died in United States’ hospitals each year as a result of medical errors that could have been prevented (p. 31). Many failures were noted to be system errors and not the fault of a single healthcare worker or physician. Intensive care units, operating rooms, and emergency departments were the sites with the highest rates of preventable errors associated with major consequences. The 2000 IOM report drew attention not only to the loss of lives but also to the many other burdens incurred because of these preventable errors. This included tangible costs estimated at $17 billion to $37.6 billion per year and intangible items like the loss of trust and disability incurred by patients as well as the guilt, frustration, and loss of morale among well-intentioned physicians and other health professionals (p. 41).
While many doctors and medical organizations initially responded to the 2000 IOM report with disbelief, our increasingly educated and technology-savvy citizenry trusted the IOM as a highly credible source. It is an independent, nonprofit, nongovernmental organization that uses unbiased, evidence-based, authoritative information to advise health and science professionals, policy-makers, leaders in society, and the public at large. Citizen and patient advocates endorsed the IOM report and demanded changes from the healthcare industry. Their advocacy coupled with additional studies confirming the conclusions of the 2000 IOM report forced our profession to acknowledge the fact that our healthcare system and our individual practices are not as error-free as we believed or as safe as our patients assumed. The safeguards, policies, and approaches we had relied on to provide our patients with optimal outcomes are insufficient. Simply put, we fail too often. It was clear that societal norms were changing and our profession and medical industry would be held fully accountable for our outcomes including failures. An increasingly skeptical public no longer trusted our profession to monitor itself in relative isolation behind closed doors. They expected individual physicians and healthcare organizations to find more reliable ways to achieve error-free care, to know their own outcomes, and to be transparent about their outcomes, both good and bad. This reality galvanized more researchers and clinicians to work together to achieve these goals.
Health services researchers recognized the need for hospitals and surgeons to have a reliable method to track surgical outcomes on a risk-adjusted basis to account for patient comorbidities and compare them to national averages. This information is essential if we are to assume responsibility to be accountable for both good and bad outcomes. Given the success of the VA-NSQIP experience, a pilot study was initiated at Emory University, the University of Michigan, and the University of Kentucky. It confirmed the methodology used by the VA for NSQIP was applicable to non-VA hospitals. As a result, the American College of Surgeons (ACS), with funding from the Agency for Healthcare Research and Quality, began a pilot program in 2001 in 18 private and university hospitals to determine if morbidity and mortality would be decreased. The favorable outcomes led the ACS to enroll more academic medical centers and private hospitals into the program and to work with the Centers for Medicare and Medicaid Services (CMS) to improve surgical quality. Newer versions tailored to more surgical specialties are now available through ACS-NSQIP.