Evidence, Information, and Knowledge: The Basic Elements of Safe Surgical Care




INTRODUCTION









  • Evidence, information and knowledge are components of professional activity that contribute to safe surgery if identified and applied from a system improvement perspective



  • A structured team approach is needed to improve effective use of evidence in surgical care





SURGEONS: SEEKERS AND USERS OF EVIDENCE, INFORMATION, AND KNOWLEDGE







It has been said that “The great questions in medicine never change; the answers do with regularity.”1 To address this complexity, surgeons uniquely interface with evidence, information, and knowledge (EIK). Unlike internists, who diagnose and treat through thoughtful analysis, surgeons frequently diagnose (exploratory laparotomy) and treat (colectomy) through action. Their work is as much physical and technical as it is cerebral. As tactical, tactile experts, surgeons respond to data (ie, the patient’s condition) and information (ie, the diagnosis applied by an internist or other generalist after a complete analysis of those data), drawing on knowledge (ie, their training, application of evidence, and experience) to address the problem in real time. Like pilots, surgeons respond to what is in front of them to address a need. Evidence may not be as immediate a resource for these practitioners, given the performative and sometimes unpredictable nature of their roles. Surgeons routinely use published evidence prior to incision to formulate a plan of action and to select tools and equipment. However, given the variations of human anatomy and physiology, they are frequently met with situations where the best plans are not adequate, and they must draw from the tacit realm of their complex knowledge base to consider what could go wrong.2 In addition, surgeons may not have the luxury of deliberation: patients don’t always arrive with as complete a diagnosis as surgeons might prefer, given the timing and urgency of the action required. In this space, the use of EIK needs to be explored as an asset that can both detract from and enhance safety, with an eye toward the complexity that duality presents.




EIK DEFINITIONS AND WHY THEY ARE IMPORTANT







Both patient safety and EIK definitions remain underdeveloped and are therefore messy.3 Establishing shared mental models around definitions will streamline work and ensure progress. Cocreating a set of definitions also surfaces the value of theory as an element of EIK initiatives. This philosophical work can help deter the misuse, or lack of application, of foundational constructs in the development of larger improvement projects, which can derail program effectiveness and sustainability.4 The goal of this chapter, however, is not to debate definitions. Therefore, the following definitions are suggested to anchor the conversation:





  • Evidence: Scientifically sound, fully researched, and validated information and collected data that have been analyzed to gain understanding of, and to validate, a hypothesis. Evidence here is treated as an explicit resource (either hard copy or electronic) published and packaged for use by others.



  • Information: Data that are processed, repurposed, and printed for a distinct use.



  • Knowledge: What an individual knows. It is broader, deeper, and richer than information or data.5 It is multifaceted, dynamic in nature, context-specific, and embedded in the actions of experts. Its value emerges over time and is influenced by the individual processing it.6,7




Failures in EIK Can Result in Harm



Tragedy can serve as a catalyst for learning. In health care, classic incidents of medical error (such as those involving Betsy Lehman, Josie King, and Libby Zion) and news stories regularly provide opportunities to explore factors that contribute to healthcare failure.8-10 While events associated with harm illustrate a broad range of problems in care processes, few examples in surgery, or in health care generally, explicitly discuss the confluence of EIK failures as a contributor to harm. This chapter addresses that gap. It discusses EIK in a distinct space outside the context of patient information, care activity data, and health information systems. The authors present several examples revealing the lack of reliable procedures to identify, access, disseminate, and share (ie, manage) EIK, and how that lack contributes to care failure and ultimately system breakdown. Several recommendations to infuse EIK expertise into care delivery are suggested, notably teamwork designed to capitalize on partnership with expert searchers. The authors advocate for such partnership to enhance EIK use in surgical care. The chapter concludes with research questions to seed discussion, clarify the problem, and initiate conversations to envision improvements.



Case Examples of General EIK Failures



Given that EIK (as defined above) is a new concept as a safety factor, the illustrations below demonstrate how failures associated with each of its components can result in patient harm.





  • Evidence failure. In June 2001, Ellen Roche, a healthy volunteer in a National Institutes of Health (NIH)-funded clinical trial exploring asthma treatments at Johns Hopkins University Asthma and Allergy Center in Baltimore, Maryland, died as a result of her involvement in the trial. While she participated as articulated by the protocol, she experienced unanticipated asthma-like symptoms once she received the test treatment. She died 2 weeks after her initial engagement in the study. Johns Hopkins acknowledged responsibility for the incident and initiated an in-depth internal investigation. They were transparent about their findings, even sharing their report and the associated documents freely on the Internet.11 No distinct cause for the death was identified, although weaknesses in the process were noted. Among the gaps acknowledged in their analysis was an incomplete review of the literature on the non-FDA-approved use of hexamethonium, the substance being tested on Roche. While the trial’s primary investigator had tried to fully review the applicable literature, he missed older evidence describing the risks of inhaled hexamethonium. Therefore, a misinformed decision was made about its use in the study. This evidence-identification failure illustrates the potential for harm, as depicted in the fishbone analysis of diagnostic error exacerbated by poor evidence seeking and access (see Figure 24.1), even though it is only 1 of the factors contributing to the incident.12



  • Information failure. Jessica Barnett was a Canadian teen who suffered dizzy spells for years. She and her parents became engaged in the attempt to determine her diagnosis. In the course of seeking treatment, they discovered information that could have initiated a new course of action by her clinical team to identify Jessica’s condition. That information was rebuffed. Jessica died due to a failure to diagnose and treat her condition: long QT syndrome (LQTS). LQTS is not a diagnosis general practitioners encounter often, so it is possible that the practitioners involved had a gap in their experience with, and use of, the information and data needed to correctly determine its presence in their patients (Graber: personal communication). This example surfaces several biases that impede effective information transfer. The patient’s family had information from reliable sources (ie, evidence) on the Internet, which they found supported their belief that Jessica had LQTS. They offered the information to members of Jessica’s clinical team, who didn’t act on it. This failure, exacerbated by cognitive biases and a lack of patient-centered and family-centered care, culminated in patient harm.13



  • Knowledge failure. The acceptance of what peers, attendings, and professors articulate as “best practice” based on what they know may also be uninformed and therefore unsafe.14 Reverence for peer expertise, the hidden curriculum, and a hierarchy that incentivizes its use without question can detract from safety as well. Lack of knowledge, a minimized culture of inquiry to navigate silos and production pressure, and inadequate tools to uncover and activate knowledge contributed to the Roche incident. In addition, a lack of expectation that correct evidence could be supplied by the patient’s family was a barrier to its use. The clinical team’s bias was a higher regard for their own years of training (eg, overconfidence) than for the family’s work researching other diagnostic possibilities. This lack of attention to patient and family knowledge about the condition of a loved one, and the failure to recognize their efforts to understand it, contributed to the Jessica Barnett failure.





Figure 24.1


Potential failures in the EIK seeking process.15 (Reprinted with permission of the publishers from Jones B, Graber M, Alligood E. Analyzing breakdowns in the EIK pathway. In: Zipperer L, ed. Patient Safety: Perspectives on Evidence, Information and Knowledge. Farnham, UK: Ashgate; 2014:247.)





Bias as a Factor in EIK Failure



Biases in a variety of forms have been noted as contributors to error, and they are factors in EIK failure. Bias in the evidence base, or publication bias, has been discussed as a problem endemic to the research dissemination process. Cognitive biases have garnered recent attention as contributors to error, most notably in discussions of diagnostic error.16 They affect EIK seeking and use: they can prevent seeking from being initiated, prematurely shut down the process, and misdirect the application of pertinent EIK.



Publication Bias


Publication bias arises when the body of published research reflects a particular agenda, most commonly the preferential publication of studies with positive results. It complicates the EIK use process because it affects the material that EIK seekers retrieve and view. A full discussion of publication bias, methodological error, and peer review in health care is outside the scope of this chapter.17 However, it is worth noting that publication bias can remain latent to the individuals using the materials. Investment is required to identify publication bias at the care level; that investment consists of embedding an expert to assess the value of published resources for the team.



Cognitive Bias


Cognitive biases are unrecognized influences that affect decision making. Their impact on information seeking in general is a focus of research in search system design.18 Table 24.1 builds on that research to describe the potential negative impact of a sample of cognitive biases on EIK use in surgical care: availability, confirmation, overconfidence, and premature closure. These 4 examples are drawn from the authors’ experiences to serve as touch points for this discussion of EIK failure.




TABLE 24.1 Examples of Cognitive Biases That Affect Information and Evidence Seeking



Whereas evidence access and use errors due to bias or other factors may not be the sole cause of patient harm, they are apt to contribute to problems. As in the commonly applied system safety metaphor of the Swiss cheese model, it is the combination of a number of small missteps that results in failure.20 How EIK factors affect reliable surgical care has yet to be determined. Therefore, interventions to mitigate the influence of weak EIK processes on safety, and the design of effective solutions to ensure the reliability of the process on the frontline, present opportunities for innovation and partnership.



Patient care is based on the surgeon’s accumulated training, education, professional development activities, and experience. This experience consists of information provided by mentors, teachers, colleagues, and textbooks; evidence provided by formal trials in published form; and the surgeon’s own knowledge, or knowledge accumulated through active proximity to the work of others. This information and evidence are then modulated by the general context of a surgeon’s work and the specific context of a given patient.21 In this realm of knowledge, physicians are hampered by missing evidence, faulty knowledge, and the cognitive biases already presented.22



The mitigation of inadvertent, unintentional harm to patients due to human error has been the focus of the majority of recent patient safety efforts. A great deal of thought and effort has been devoted to understanding cognitive errors and to designing system defenses to prevent both decision-based and action-based errors.22 A more subtle issue, but one that has equally deleterious patient effects, is the application of outdated or blatantly wrong information, techniques, or pharmacological treatment. Multiple studies have shown that many patients do not receive the currently accepted evidence-based best practice.23,24 Similarly, experts believe that it takes some 17 years for evidence-based changes in care to be broadly implemented.25 As a case in point, the Society of Cardiovascular Anesthesiologists in combination with the Society of Thoracic Surgeons established guidelines for blood conservation in cardiac surgery in 2007; a survey in 2012 found that, while 80% of cardiac anesthesiologists and perfusionists had read the guidelines, fewer than 20% had implemented any.26 There is, therefore, a significant disconnect between the available information and evidence and the knowledge that physicians in general and surgeons in particular apply in their work each day.




USE OF “WHAT IS KNOWN” VS “WHAT IS EVIDENCE-BASED”: MESSY OR DANGEROUS?







All of medicine is flawed not just by the unknown but by unknown unknowns.27 When evidence is lacking, the “best therapy” may come to be based on opinion, with great variation among individuals. Despite the uncertainty surrounding these best therapies, physicians may not question the basis for tradition-based action, and may present its value to students or patients based on that experience rather than on evidence. Reasons for this may include production pressure, anecdotal instances of effectiveness, and the assumption that an evidence base exists. When negative outcomes are associated with what might be called best practice, they are dramatic and unsettling, even if extremely rare.



In spite of the best efforts of successful programs to standardize and support evidence-based care, much of what is taught to medical students and residents has little evidence to support it; techniques are based on experience and familiarity rather than true evidence. For example, the need to perform a bowel prep prior to colorectal surgery is codified in textbooks and in virtually every surgical training program. This practice, however, is not supported by evidence from observational cohort studies or randomized controlled trials (RCTs). The developing evidence, rather, shows that patients requiring emergency colorectal surgery due to trauma, who thus do not have a bowel prep, fare no worse than elective, prepped patients. Based on this evidence, RCTs have now been done, demonstrating that patients without bowel prep do as well as, and may do better than, those with the bowel prep.28 Unfortunately, evidence from RCTs may not be able to compete in an expert’s mind with dogma and actions learned years ago.



There are many examples of unsupported therapy being viewed as best practice. Radiation of the thymus gland (resulting in many cases of thyroid cancer in teens) and restriction of oral intake until bowel sounds return were both employed as best practices at one time but are now known to be ineffective. Pioneers such as Henrik Kehlet, who was open to the idea that the conventional wisdom regarding management of colorectal surgery patients could be wrong, continue to question all aspects of what is done surgically and remain ever cognizant that the information available, even when present in textbooks, could be quite wrong.28,29



The issue of basing treatment on information not supported by evidence (current, complete, or otherwise) is compounded by publication bias. For many proposed treatments, there are multiple studies, some of which are contradictory or inconclusive. The results vary due to differences in research design and statistical methodology, and due to both alpha (false-positive) and beta (false-negative) errors. With such conflicting results, each surgeon must decide which studies he or she believes; once a choice is made, the individual finds it extremely hard to “reverse direction” and change practice, even in the face of mounting evidence.30 This difficulty arises largely from the already discussed cognitive biases, which are hard to identify and eliminate, and it is compounded by a lack of time to fully identify and understand new and developing evidence. These individual biases can be minimized through publication of systematic reviews, meta-analyses, and guidelines developed by committees of experts, with the caveat already noted that even the best-intentioned review and guideline processes can be undermined by poor literature reviews.



However, there are many situations in which new evidence has not yet been incorporated into guidelines, and conditions for which no guideline or review exists. In these situations, surgeons are hampered not by a lack of evidence but by a lack of awareness of the accumulating evidence. The sheer volume of publications, even when limited to specialty-specific journals (as discussed later), is overwhelming. New strategies are required to significantly add to the reliability of the information and evidence available to clinicians, to counteract these weaknesses in the use of EIK and the negative effects of the wrong EIK.




AWASH IN EIK: THE BENEFITS OF A TEAMING-UP STRATEGY







In the National Library of Medicine’s PubMed medical literature database, 1,181,513 English-language citations were published in 2015. Of those, 189,369 dealt with surgical practice. If a surgeon devoted 6 hours to reading every day, he or she would need to read 1.5 articles per minute to cover all 189,369 articles. If reading only material on 1 surgical procedure, such as cholecystectomy (including open and laparoscopic), more than 1089 articles would need to be read in 1 year. In 2000, when there were far fewer demands on surgeon time, the average American surgeon read for a median of 10 hours a month.31 To cover 1 specific area of practice, a surgeon would need to read over 9 articles each hour during those 10 hours each month. While not impossible, it is improbable that many surgeons have the time to read that many articles in an hour, or can effectively comprehend and integrate even the critical information contained in the material. This sea of material includes evidence and information, which in turn open the door to knowledge (if seekers are apt to take the step from the explicit to the tacit).
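The reading-load figures above can be verified with a short back-of-the-envelope calculation. This is an illustrative sketch only; the citation counts and the 6-hour reading day are taken from the text, and the reading pace assumed is uniform across the year:

```python
# Figures quoted in the text (PubMed, 2015).
SURGICAL_CITATIONS_2015 = 189_369   # English-language citations on surgical practice
CHOLECYSTECTOMY_ARTICLES = 1089     # articles on a single procedure in 1 year

# Pace needed to read every surgical article at 6 hours/day, every day:
minutes_per_year = 365 * 6 * 60
articles_per_minute = SURGICAL_CITATIONS_2015 / minutes_per_year
print(f"{articles_per_minute:.2f} articles per minute")  # roughly 1.5, as stated

# Pace needed to cover 1 procedure within the surveyed median
# of 10 reading hours per month:
articles_per_hour = CHOLECYSTECTOMY_ARTICLES / (12 * 10)
print(f"{articles_per_hour:.1f} articles per hour")      # just over 9, as stated
```

The calculation confirms the chapter’s point: even a narrowly focused reading program outstrips the time surgeons realistically have available.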



Evidence


In 2000, Sackett defined evidence-based medicine (EBM) as “the integration of best research evidence with clinical expertise and patient values.”32 In recent editorials, several authors have discussed what they believe limits the application of EBM to practice.33-35 These discussions revolve around the questionable value of poorly designed RCTs, publication bias, and the value of case studies to surgeons. Acknowledging that there are some poorly designed studies, an effective approach should embrace a combination of evidence and the surgeon’s experience. The surgeon, or the surgical team under his or her direction, should rigorously evaluate all of the sources that inform decision making. Even in the best of times, the need for this evaluation on a PRN (as-needed) basis could burden the time-challenged surgeon. Rather than accept the results of poorly designed studies that need extra review time to determine their applicability, the professional community and editorial boards may wish to advocate for an improved set of standards for the quality of reported research. The development of protocols to aid in this improvement strategy should include a critical analysis of the EIK used to produce them, so that publication bias, confirmation bias, and poorly designed and reported studies do not influence the conclusions. The suggestions and recommendations of colleagues deserve the same thorough examination.

Jan 6, 2019 | Posted in ABDOMINAL MEDICINE
