The Risks of Health Information Technology: For Every Function There Is an Equal and Opposite Malfunction




INTRODUCTION









  • Illustrate a range of hazards to patient safety arising from electronic information systems



  • Compare several conceptual frameworks for analyzing errors and risks of health information technology



  • Review strategies for reducing adverse events related to electronic devices in healthcare settings



  • Information technology is a cannon rolling around the deck of health care; properly controlled, it is a powerful tool for effectiveness, efficiency, and safety, yet its weight and momentum demand constant attention





NEWTON’S LAW OF DEVICES







Health care is not a system, it's an ecology[a] sustained by information. Health information technology (HIT) comprises the electronic devices and software programs that support health care, give access to it, and watch over it. This chapter looks at how these tools advance and also threaten patient welfare. The discussion centers on electronic health records (EHRs), but these are tightly entwined with other programs such as monitoring, communications, and decision support.



[a] Another label might be, "complex adaptive system." The behavior of a standard system can be calculated from a given set of inputs. The behavior of an ecology is harder to predict.



A few decades ago, a treatise on healthcare safety would have been dominated by human factors such as knowledge, judgment, and skill; perhaps drug purity. Compared to other high-performance industries, health care was slow to implement computers for everyday tasks, let alone complex decision making. But, today, technology—both as a solution and as a hazard—is of central concern to the field of patient safety. HIT is both the salvation and nemesis of clinicians overwhelmed by an information deluge. It is taken for granted by trainees, exasperates midcareer providers, and is cited as a reason for early retirement by senior practitioners.



It is treacherous to analyze technology while standing on the slope of an adoption curve. See Figure 23.1.




Figure 23.1


Innovation adoption curve. (Data from Rogers E. Diffusion of Innovations. New York: The Free Press of Glencoe; 1962:162.)





The pace of development, implementation, revision, and abandonment of HIT is so intense it is hard to find a stable platform from which to make durable observations. (Ironically, this follows decades when HIT was just party talk among quirky enthusiasts.) The speed of change creates challenges when contemplating the impact of HIT upon safety, access, cost, care effectiveness, and the experiences of patients and providers.



Compare the adoption of pharmaceuticals. Before a drug is released there are trials, and after release there is surveillance, through which safety and effectiveness become methodically understood by communities of researchers, clinicians, and regulators. Development and regulatory processes are often criticized for being sluggish, as is the speed of disseminating therapies that seem promising.[b] It takes cleverness to predict whether knowledge about 1 chemical can be extrapolated to a related chemical, so the slightest modification of a proven entity into a new drug might demand starting research over at Phase I.



[b] It is a canon of popular wisdom that it takes "X years" for a proven therapy to become a standard of care, with "X" being 7 or 17, or something like that. But, in fact, the speed of diffusion is variable for different innovations.



In contrast, imagine if drugs were software.




Twenty thousand new drugs would be released every month.2 Half the population between 14 and 40 would have an idea for a new remedy, or how to customize an old one. New drug companies (some funded with millions of dollars) would be formed and dissolved by investors every week. Physicians and patients would be on their own to test and monitor effectiveness and safety. End-user agreements would immunize manufacturers from liability. Information about dosage, timing, formulation, interactions, and side effects would come from outdated user guides and online discussion forums. New versions, patches, updates, revisions, plug-ins, look-alikes, counterfeit preparations, and off-label protocols would be released daily and distributed on the Internet. Market-leading products would be determined by the effect of pop-up ads on customers with no background in physiology, pharmacology, or epidemiology. Advertisers would make claims that knowledgeable people would find ludicrous. Many products would be free (but with ads), some would be expensive, and all existing stock would become nonfunctional without warning when a new version came out. A federal stimulus package would incentivize the widest possible adoption of new products by the largest possible number of users in the shortest possible time; failure to prescribe them would incur penalties upon physicians. Finally, if patients changed providers, all their current products would stop working.




This is parody. But, many hazards of HIT are analogous to those for drugs and devices in relevant ways. There are only so many pathways by which patients can be injured. Wrong dose, wrong patient, wrong diagnosis, wrong treatment, wrong time, and similar errors arise from HIT just as easily as from human and material factors.



HIT safety needs to be approached with strategies different from those used for human performance or for drug and device safety, for a number of reasons:





  • HIT events arise from interactions among software, hardware, users, and the environment.



  • Testing HIT in vitro is difficult and does not always identify flaws that may become apparent upon deployment.



  • Informational risk is a unique category of threat.



  • HIT errors occur more frequently than other types of healthcare errors, and are constantly—often unconsciously—intercepted by humans in the system.



  • Malicious intent is a more prominent component of HIT threats than other common healthcare risks.



  • The cycle of HIT development, acquisition, modification, and replacement is rapid, demanding high tempo and agility from safeguards.



  • Personal information tools (eg, cell phones, email) are commonly commingled with professional tools and are extremely difficult to govern.




Newton’s Law of Devices says, “For every function, there is an equal and opposite malfunction.” This reminds us that all systems generate unintended consequences. A risk assessment for HIT begins by examining what a tool does when it’s working properly, and anticipating the effects of it working unexpectedly. This slogan is not flippant. It means it is logically impossible to build software that functions exactly as intended.



This pessimistic reality has an optimistic corollary, in the form of the Law of Safety Learning Systems: “For every hazard, those affected learn to recognize and prevent it, accommodate for it, circumvent it, and reduce its impact.” This equilibrating tendency is the chief reason why so many imperfect and cranky HIT products can be tolerated—and even prove valuable—in the high-stakes environment of health care.




INFORMATIONAL RISK







Modern society is grappling with the risks of information tools, including the special risks of information itself.[c] In the pre-IT era of safety analysis, informational error was often commingled with other, nonspecific "human factors" as contributors to adverse events. It was easy to understand how a surgeon might ligate the wrong vessel, or a nurse might forget to check a wristband: Human error, fatigue, haste, poor training, lapse in process, inexperience … But, it's a different matter if the wristband is printed with the wrong bar code.



[c] Ancient societies were acutely aware that knowledge was powerful. This is reflected in attitudes toward literacy and sorcery.



Even in today’s IT-in-your-face era, with daily news stories of information and information-bearing devices being used in unintended or destructive ways, there is a disconnect between users’ awareness of these risks and our insatiable appetite for the devices that expose us to them. This might be a sign of how deeply the passion for information and connection is embedded in the human psyche.



But, information itself can harm us in many ways. We are hurt when confidences are exposed or privacy is lost; when we rely on data that are false, distorted, or incomplete; when we are deliberately or accidentally misled, and act upon error; when we fail to learn, or lose facts we need for successful living.



Human Error Trapping



From earliest history, physicians have depended on information for success, trust, and reputation. The story of medicine is one of information being discovered, taught, learned, disputed, curated, and marketed (or sometimes hoarded) by individual experts. The culture of medicine is one of privileged information, both knowledge of the art and data about the patient. The fundamental transaction of the physician and patient is grounded in belief, on both sides, that the practitioner possesses information that can help the patient’s health problem. The traditional mode by which medical information was transmitted was the spoken word. This expectation colors the culture of health care today.



The twentieth century marked a shift of the stewardship of medical information from individual human experts to open repositories; a shift in the mode of communication from direct, face-to-face, verbal narrative and paper documents (in classroom and exam room), to the plethora of electronic communication channels we recognize in the twenty-first century.



In high-performance systems like health care, critical decisions and actions are still controlled by humans. This means that almost everything in the workflow—diagnoses, orders, prescriptions, reports, messages, results, and so on—is subject to human oversight.



The benefit for HIT safety that comes from human oversight is proven countless times daily. Today's software and devices generate errors at a breathtaking rate. Interactions between user, application, and interface are fraught with complexity. They are subject on the human side to errors of distraction and inattention, slips and lapses, misperception, misinterpretation, inadequate training, and other operator failings. On the device side they are subject to programming and design flaws, mechanical and electrical interference faults, response lag and downtime, hardware failures, and plain old bugs. Patients are insulated from the risks of this fragile system the same way auto passengers are saved from crashes by the competence of drivers. Think of the 100 times each journey when a crash might have happened, and how training, practice, mechanical design, traffic controls, and human vigilance regularly thwart opportunities for disaster. This same process is what saves patients from a continuous barrage of HIT hazards.



However, the history of auto safety shows that human skill is not enough. The most effective innovations have been in technology, materials, and the infrastructure in which automobiles operate. This precedent is important for HIT.



Extrinsic Risks—Malice



Technology has both intrinsic and extrinsic risks. Health care’s altruistic culture is more attuned to intrinsic dangers of the care process itself (like those itemized in a typical “informed consent”). Cyber risk differs from other healthcare risks because it introduces a much larger component of malice. Of course, willful misbehavior has always been a consideration in patient safety (alcoholism, assault, impairment, drug diversion, fraud), but proportionately this has been tiny compared to the large component it represents in the cyber domain.



HIT can be a doorway through which external, electronic pathogens invade the care space. These include numerous species of malware, viruses, worms, Trojans, spyware, phishing attacks, ransomware, social media scams, and fraud; and other programs designed to hijack, cripple, or destroy information systems, steal data or money, and hurt people. Symantec, a global cyber security firm, identified 317 million new pieces of malware in 2014.3 Apart from any intrinsic hazards of using HIT, the external threat of cyber risk is a growing problem for patient safety.



A 2015 study of 150,000 phishing emails by Verizon partners found that 23% of recipients open phishing messages, and 11% open the attachments. The first user does this within an average of 82 seconds from the time a phishing campaign is launched.4 One of the common pathogens transmitted in this fashion is ransomware: a program that encrypts the contents of a computer's files, giving a criminal the opportunity to demand money in return for the password to decrypt them. Hospitals where this has occurred have suffered shutdowns of their information systems for periods of days or more.5 And, this is in cases when the ransom was paid and the perpetrator "honorably" provided the password.



While preventing accidents has always been a concern for healthcare organizations, proliferating HIT has forced more attention upon both deliberate and naïve sabotage. Technology provides ready means and opportunities to those tempted to mischief. HIT sabotage is easy to execute and propagates readily. It’s easy for hackers with even rudimentary skills to penetrate and damage information systems and data. Recipes for malware and tools for password cracking are easily found all over the Internet.



Moreover, there is something about electronic technology that attracts meddlesome curiosity. The category of “electronic pranks” has taken a larger place among threats demanding the attention of IT risk managers. Healthcare devices and databases offer many targets for both opportunistic and planned attacks, from random snooping to outright terrorism. Unfortunately, even practical jokes in high-performance environments can sometimes have the same consequences as premeditated felonies. And, sadly, the most common source of willful sabotage of IT systems is disgruntled employees. See Table 23.1.6




TABLE 23.1 "Insider" Threats and Remedies



A rigorous discussion of privacy and security in HIT would triple the length of this chapter. That omission acknowledged, there are a few points to make before moving on.



Privacy and Security



In the HIT world, privacy is separate from safety, but closely related. Describing the harm of a privacy breach is a problem for philosophy and legal systems. Privacy injury is determined by individual circumstances, local culture, and the personal values of those whose privacy was violated and the recipients of confidential information. Nevertheless, Western cultures recognize invasion of privacy as akin to battery. We hold healthcare practitioners accountable for safeguarding the privacy of protected health information (PHI), under laws and ethics older than Hippocrates. The human factor in protecting privacy has always meant adhering to a set of behaviors and principles that respect patients’ rights. Unfortunately, education, personal integrity, and adherence to rules are not sufficient to protect PHI in environments where information is managed electronically.



Privacy protection (against unauthorized access and disclosure of PHI, whether deliberate, accidental, or incidental) is 1 of 3 overarching imperatives for HIT governance. Electronic data are also vulnerable to loss and corruption, and inaccessibility at critical times. These 3 priorities are defined under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) as confidentiality, integrity, and availability.7
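As an illustrative aside (not drawn from the chapter or from any specific HIPAA tooling), these 3 imperatives can be expressed as simple annotations on data assets. The asset names and the 1-3 rating scale below are hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch: rating each data asset against the 3 HIPAA
# imperatives: confidentiality, integrity, and availability. The 1 (low)
# to 3 (critical) scale and the example assets are illustrative assumptions.

@dataclass
class AssetRating:
    name: str
    confidentiality: int  # harm if disclosed without authorization
    integrity: int        # harm if corrupted or altered
    availability: int     # harm if inaccessible at a critical moment

assets = [
    AssetRating("medication orders", confidentiality=2, integrity=3, availability=3),
    AssetRating("psychotherapy notes", confidentiality=3, integrity=2, availability=1),
    AssetRating("appointment schedule", confidentiality=1, integrity=2, availability=2),
]

# A governance review might sort assets by their worst exposure first.
for a in sorted(assets, key=lambda a: max(a.confidentiality, a.integrity, a.availability), reverse=True):
    print(f"{a.name}: C={a.confidentiality} I={a.integrity} A={a.availability}")
```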



Security is the means by which these are addressed. IT professionals take a standard approach to data protection, involving measures in 3 domains: physical, technical, and administrative. See Table 23.2.




TABLE 23.2 Security Safeguards



Most measures that advance security have collateral benefits for patient safety and care effectiveness. However, nothing in technology is categorically beneficial, any more than in medicine itself. Everything has side effects, including security controls. A perfectly secure system would not allow anybody to use it. In any security-conscious environment, there is always tension between the policy that is most secure and the policy that is most practical for users.



Well-intended security measures can have unintended effects on safety. Access controls (of which passwords are a component) can be barriers to critical information. A nurse whose password expired while on vacation is not able to work until it is reset. A consultant without credentials for a facility's EHR cannot enter orders in an emergency. Systems that require multiple logins to navigate between modules impair efficiency and induce users to create workarounds. Sharing passwords, or defeating them with trivial choices (for example, "1234"), opens systems to penetration. Balancing security against usability is part of the task of system configuration.



Healthcare IT systems are particularly vulnerable to extrinsic attacks because of their complexity, the large numbers of people with legitimate access to them, the large numbers of applications connected to them, and the exploitation value of medical information.



The weakest link in any cyber security plan is human users. This vulnerability is divided 3 ways:





  • Innocent mistakes, slips, and lapses



  • Deliberate exploitation by rogue employees



  • Users falling for external exploits such as scams, impersonation, and phishing




Of these, the last is most often the source of major breaches. Practices that should be addressed in any risk assessment include sharing passwords, using weak passwords, reusing passwords (poor password discipline), using insecure Internet connections, failing to securely destroy paper or electronic files, using unencrypted storage devices, misplacing unencrypted devices, leaving workstations unattended, falling for phishing scams, and connecting nonsecure devices (eg, personal phones) to secure networks.
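One way to operationalize such a review, purely as a sketch with hypothetical finding names, is a checklist that tallies which of these practices an audit actually observed:

```python
# Sketch of an audit checklist built from the risky practices listed above.
# The practice names and the simple observed/not-observed tally are
# illustrative assumptions, not a standard assessment instrument.

RISKY_PRACTICES = [
    "shared passwords",
    "weak passwords",
    "reused passwords",
    "insecure internet connections",
    "files not securely destroyed",
    "unencrypted storage devices",
    "misplaced unencrypted devices",
    "workstations left unattended",
    "phishing message clicked",
    "personal devices on secure network",
]

def summarize_audit(observed: set) -> None:
    """Print each practice with a FINDING/ok flag for follow-up."""
    for practice in RISKY_PRACTICES:
        flag = "FINDING" if practice in observed else "ok"
        print(f"{flag:7} {practice}")

summarize_audit({"shared passwords", "workstations left unattended"})
```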



Cyber security will increasingly be at the forefront of research and effort by vendors, users—and criminals.



Intrinsic Risks of HIT



A larger concern is the risk HIT introduces to health care by way of its own design and operations. The Health Information Technology for Economic and Clinical Health (HITECH) Act resulted in adoption of EHRs by over 75% of eligible providers by 2014. A secondary effect of putting so much technology into so many sites of care has been a flood of aftermarket adverse effects. Unfortunately, even the dominant products in the EHR category have fallen short of advertised capabilities, industry expectations, and customer requirements. Moreover, end-user experience has revealed more design flaws, malfunctions, hazardous conditions, errors, near misses, and instances of patient harm attributable to electronic systems than could possibly be cataloged in a book chapter. Patient safety risks and functional shortcomings of HIT have become manifest in ways that were foreseeable years ago, and also in ways that could not have been imagined.



EHRs often get off to a rough start. In 2005, a tertiary pediatric hospital reported increased mortality among patients 5 months after implementation of “a commercially sold CPOE program that operated within the framework of a general, medical-surgical clinical application platform.” The authors mention that the system “was rapidly implemented hospital-wide over 6 days.”8



In another 2005 article, Koppel et al enumerated 22 different categories of medication errors in a mature, teaching-hospital computerized provider order entry (CPOE) system, with errors occurring almost daily.9 Among the issues identified were:





  1. Information errors




    • Accepting the dose on the screen



    • Duplicating orders



    • Automatic orders linked to procedures



    • Automatic discontinuations



    • Diluent interactions not captured



    • Delayed recognition of contraindications



    • Failure to capture information from all systems



  2. Human-machine interface flaws




    • Can’t clearly identify the patient



    • Can’t view all meds on a single screen



    • Log-in/log-out failures



    • Extra steps required to “activate” orders



    • Automatic cancellation of presurgical orders



    • Downtime delays



    • Orders near midnight interpreted as “tomorrow”



    • Cumbersome interface makes charting difficult




This was one of the first systematic reports (by a group of sociologists) that looked at how errors could be induced by systems intended to improve care. The immediate lessons were that building software for health care is not simple, and a lot more is required for safety and effectiveness than just the concepts of automating work processes and digitizing information. These lessons are rapidly being learned by users, but have not yet been adequately appreciated by developers and purchasers.
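One item on the Koppel list, orders near midnight interpreted as "tomorrow," is easy to reproduce in miniature. The sketch below is hypothetical scheduling logic, not drawn from any actual CPOE product: bucketing an order by calendar day delays the first dose by nearly a day, while scheduling from the next administration time does not.

```python
from datetime import datetime, timedelta

# Hypothetical miniature of the "orders near midnight interpreted as
# 'tomorrow'" flaw from the Koppel list. Illustrative logic only; it is
# not drawn from any actual CPOE product.

ADMIN_HOUR = 8  # a daily administration time of 08:00 is an assumption

def naive_first_dose(order_time: datetime) -> datetime:
    """Bucket the order by calendar day and always start 'tomorrow'."""
    tomorrow = order_time.date() + timedelta(days=1)
    return datetime(tomorrow.year, tomorrow.month, tomorrow.day, ADMIN_HOUR)

def safer_first_dose(order_time: datetime) -> datetime:
    """Start at the next occurrence of the administration hour instead."""
    candidate = order_time.replace(hour=ADMIN_HOUR, minute=0, second=0, microsecond=0)
    return candidate if candidate > order_time else candidate + timedelta(days=1)

just_after_midnight = datetime(2016, 3, 2, 0, 5)
print(naive_first_dose(just_after_midnight))   # 2016-03-03 08:00: ~32-hour wait
print(safer_first_dose(just_after_midnight))   # 2016-03-02 08:00: ~8-hour wait
```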



In a landmark 2004 article, Ash et al identified a variety of errors induced by patient care information systems.10 These were categorized as:





  1. Errors in entering and retrieving information. These included problems caused by human-computer interfaces that are not suitable for a highly interruptive environment, and cognitive overload from overemphasizing structured and complete information entry or retrieval.



    Some PCIS systems require data entry that is so elaborate that the time spent recording patient data is significantly greater than it was with its paper predecessors. What is worse, on several occasions during our studies, overly structured data entry led to a loss of cognitive focus by the clinician. Having to go to many different fields, often using many different screens to enter many details, physicians reported a loss of overview.



    Rather than helping the physician build a cognitive pattern to understand the complexities of the case, such systems overload the user with details at odds with the cognitive model the user is trying to develop.



    Other issues were fragmentation of attention and data, “overcompleteness” [clutter and noise], overstandardization, decreased readability, and temptation to use technical workarounds (eg, copy/paste).



  2. Errors in the communication and coordination process. These included misrepresenting clinical work as a linear, clear-cut, and predictable workflow.





[EHR] systems often appear to be imbued with a formal, stepwise notion of healthcare work: a physician orders an intervention, a nurse subsequently arranges for or carries out the intervention, and then the physician obtains the information about the result.



Yet it has become common knowledge that it is inherently difficult for formal systems to accurately handle or anticipate the highly flexible and fluid ways in which professional work is executed in real life. CarePath or workflow systems are plagued by the ubiquity of exceptions.




Other problems were found to be system inflexibility, the imposition of urgency on clinical processes, unrealistic demands (giving rise to workarounds), systems that do not reflect actual practices (eg, in handoffs and transfers), misrepresenting communication as merely information transfer, loss of feedback (eg, direct human interaction), oversupervision (eg, irrelevant decision support load), and most critically, crippling the vital aspect of teamwork (redundancy) that facilitates error trapping and fault tolerance.



These and other issues will be elaborated in following sections.




SYSTEMATIC APPROACHES TO QUANTIFYING RISKS OF HIT







An epidemiologic approach to understanding risks begins by naming and counting things. This is a nontrivial exercise, as biologists from Aristotle to Linnaeus to Crick would attest. Technology evolves faster than biology; it’s pointless to attempt a formal description of its scope, which will be different next week. The Office of the National Coordinator for Health Information Technology (ONC) has defined HIT as: “The application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making.”11



By itself, this definition doesn’t offer much help to risk managers. What’s needed is a richer classification system that lets safety researchers get a grasp on the form, magnitude, and locations of the risks; the pathways by which HIT causes harm; and the remedies that reduce them.



This undertaking is complicated because HIT is pervasive. But, the subtitle of this chapter is the clue. In risk assessment, whether it’s purely for technology or more broadly includes human and structural factors, 1 way to identify hazards is to study the way each component works when it’s functioning properly, and then envision what would happen if it malfunctioned. Complex systems break in many ways.
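A minimal sketch of that exercise, in the spirit of a failure mode and effects analysis, follows. The component functions and malfunctions are hypothetical examples (echoing anecdotes cited elsewhere in this chapter), not a standard hazard catalog.

```python
# Sketch of the "for every function, envision the malfunction" exercise.
# The functions and failure modes below are hypothetical examples echoing
# anecdotes cited in this chapter, not a standard hazard catalog.

functions_to_malfunctions = {
    "match wristband bar code to patient record": [
        "wristband printed with the wrong bar code",
        "scanner reads an adjacent patient's band",
        "lookup times out and staff bypass the scan",
    ],
    "route critical result to ordering clinician": [
        "result routed to the wrong unit or inbox tab",
        "result auto-marked as reviewed before it is read",
        "interface downtime silently queues the message",
    ],
}

for function, malfunctions in functions_to_malfunctions.items():
    print(f"Function: {function}")
    for failure_mode in malfunctions:
        print(f"  Anticipated malfunction: {failure_mode}")
```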



Before the publication of the Institute of Medicine’s report, To Err is Human (2000),12 the notion of using a taxonomy to classify data about medical error was dismissed by many as a pointless curiosity. (And, HIT error as a category was essentially invisible in 2000.) But, after the IOM’s discovery of medical errors (or their escape from the closet), interest in studying patient safety grew, and systems for gathering reports were developed in a number of organizations. Initially, most of these were not designed to capture details of events specifically related to HIT.



Ten years later, the technical and clinical communities—and the legal community—began to pay more attention to a small number of researchers and clinicians who were raising alarms about HIT risks. Reports of HIT events began to circulate in the form of narratives.



In parallel, public awareness grew of general cyber risks, and HIT safety became visible above the horizon. Today there is sensitivity—if not expertise—about IT vulnerability across almost every segment of society. In health care, a sense of potential danger has followed the proliferation of electronic technology in both professional and private life, along with increasing revelations about cyber disasters in news and entertainment media (hacking, privacy breaches, spying, data loss, extortion, infrastructure attacks, etc), and a worldwide political predicament that has accentuated public anxiety about—well, everything.



HIT risk is now a prominent topic, and researchers and regulators are applying the same methods to it as to the general problem of medical error. Several taxonomies for classifying adverse HIT events are in active use. Some are noteworthy because of the large numbers of events they have collected in searchable databases; some for their face validity or usefulness for producing actionable insights. A useful event-reporting system must fulfill at least 2 criteria:





  • Typology: Its classification categories must make sense to people who observe events; they must be intuitive, unambiguous, and logical; they must be broad enough to make generalizations and narrow enough to make useful distinctions (coding).



  • Causality: It must be able to generate hypotheses about causation and insights into remedies that are not trivial or obvious from reports (analysis).
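As a minimal illustration of these 2 criteria, an event record might pair coded category fields (typology) with free-form causal hypotheses (analysis). The field names and category values below are hypothetical, not drawn from any of the taxonomies discussed next; the narrative echoes a misrouted-result anecdote reported later in this chapter.

```python
from dataclasses import dataclass, field

# Hypothetical event record serving both criteria: coded, intuitive
# categories for typology, plus room for causal hypotheses for analysis.
# Field names and values are illustrative only.

@dataclass
class HITEventReport:
    event_type: str                  # coded category, e.g., "result misrouted"
    workflow_phase: str              # where in the information flow it arose
    narrative: str                   # observer's free-text account
    contributing_factors: list = field(default_factory=list)  # causal hypotheses
    harm_occurred: bool = False

report = HITEventReport(
    event_type="result misrouted",
    workflow_phase="output/notification",
    narrative="Critical blood gas value delivered to the wrong unit.",
    contributing_factors=["interface routing table", "no delivery confirmation"],
    harm_occurred=True,
)
print(report.event_type, "->", report.contributing_factors)
```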




HIT Event-Reporting Systems



A number of patient safety organizations have developed or implemented systems for collecting HIT event reports. Some of these incorporate HIT events within general patient safety reporting systems. See Table 23.3.




TABLE 23.3 HIT-Related Event-Reporting Systems



Some reviewers have made good use of HIT-specific taxonomies to explore the landscape within the borders of HIT risk. There is so much to learn that even this narrow scope of attention can be productive, just as it can be valuable to report on “events involving syringes,” “events involving pharmacists,” or “events occurring on weekends.” But, dedicating a reporting system to only “events involving HIT” sacrifices the ability to unify reports across domains and potentially discover common themes that might not be apparent in vertical systems.



This is a shortcoming of what is potentially the most ambitious medical event reporting system, the AHRQ Common Formats for Patient Safety Organizations.27 Among a growing portfolio of vertical reporting templates, 1 designed to collect HIT events is the AHRQ HIT Hazard Manager.6 It contains 6 major categories:





  1. Usability



  2. Data quality



  3. Decision support



  4. Vendor factors



  5. Local implementation



  6. Other factors




However, the AHRQ Common Formats do not have a general section specific to "causation"; each vertical domain contains causes appropriate to itself. See Figure 23.2.




Figure 23.2


AHRQ Hazard Manager screen shot. (Reproduced from Walker JM, Hassol A, Bradshaw B, Rezaee ME. Health IT Hazard Manager Beta-test: final report. AHRQ Publication No. 12-0058-EF. Rockville, MD: Agency for Health Care Research and Quality; May 2012.)





Another dedicated HIT taxonomy was developed by Magrabi et al. They used reports submitted to the US Food and Drug Administration Manufacturer and User Facility Device Experience (MAUDE) database[d] as the source from which they extracted empirically derived categories of HIT-related events. This schema captures causes within a small number of "contributing factors." See Table 23.4 and Figure 23.3.




TABLE 23.4 Magrabi Taxonomy




Figure 23.3


A schematic view of the Magrabi taxonomy. (Reproduced with permission from Magrabi F, Ong MS, Runciman W, Coiera E. Using FDA reports to inform a classification for health information technology safety problems. J Am Med Inform Assoc. 2012;19(1):45-53. By permission of Oxford University Press.)





[d] The FDA/MAUDE database is a fine-grained code set comprising thousands of items, divided into 3 major categories: (1) Patient problems (diagnoses and outcomes); (2) Device problems (mechanical and functional issues); and (3) Device component or accessory (identification). It lacks any way of capturing causal pathways and is far too cumbersome to use as a format for reporting typical HIT events.



Using the Magrabi taxonomy, the ECRI Institute ranked the most frequent safety issues, in descending order, among about 20,000 HIT events (2015). See Table 23.5.28




TABLE 23.5 Most Frequent IT Safety Issues (descending order): ECRI, 2015



Managers and planners like frequencies, rates, and weighted lists because they drive attention and motivation. This might be called a “public health approach” to adverse events. But, frequency isn’t necessarily the best criterion for prioritizing prevention and mitigation. Priority for a risk manager is based on a complex product of the likelihood of the event, the seriousness of its consequences, the effort required to address it, the degree of confidence in the remedy, and similar calculations. In this list, what’s striking—and potentially misleading, depending on the approach taken to mitigation—is the large “human” contribution to what is being labeled HIT error. An unsophisticated risk manager might diagnose the problem as simply a lack of user training, which would surely be mistaken.
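The "complex product" described here can be made concrete with a toy priority score. The multiplicative form, the 1-5 scales, and the handling of effort and confidence below are assumptions for illustration, not a validated instrument.

```python
# Toy risk-priority score combining the factors named above. The formula
# and scales are illustrative assumptions, not a validated instrument.

def priority_score(likelihood: int, severity: int, effort: int, remedy_confidence: float) -> float:
    """likelihood/severity/effort on 1-5 scales; remedy_confidence in (0, 1].
    Higher scores suggest higher priority; low effort and a trusted remedy
    raise priority, since those risks are cheapest and surest to retire."""
    return likelihood * severity * remedy_confidence / effort

# A frequent nuisance vs. a rare but serious failure mode:
print(priority_score(likelihood=5, severity=1, effort=1, remedy_confidence=0.9))  # 4.5
print(priority_score(likelihood=1, severity=5, effort=2, remedy_confidence=0.8))  # 2.0
```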



Starting with expert consensus rather than observational reports, a committee convened by the National Quality Forum published a list of key measurement areas for HIT safety, ranked in priority order. See Table 23.6.29




TABLE 23.6 NQF Key Measurement Areas for HIT Safety



The Joint Commission published a conceptual model intended as a general framework for patient safety research related to HIT. See Figure 23.4.30




Figure 23.4


Improving hospital patient safety through HIT. A conceptual model to guide research. (Reproduced with permission from Paez K, Roper RA, Andrews RM. Health information technology and hospital patient safety: a conceptual model to guide research. Jt Comm J Qual Patient Saf. 2013;39:415-25. Copyright © Elsevier.)





As with other models dedicated to HIT events, The Joint Commission’s breaks out functions performed by electronic systems as a separate species from the same functions performed by humans or nonelectronic systems (eg, paper charts and forms, card files).



Taking a different approach from a taxonomy, the "Socio-Technical Model" outlined by Sittig and Singh has been cited as a framework for exploring safety risks related to HIT.31 It suggests 8 dimensions from which to view HIT events. See Table 23.7.




TABLE 23.7 Socio-Technical Model of Health Information Technology (Sittig & Singh)



In a sense, this model is a meta-taxonomy, simply outlining domains of interest, rather than drilling into the processes within them. It helps by prompting analysts to consider a 360° view of an event. But, this outline doesn’t give guidance about causation, or offer a way to understand HIT events within a general model of medical error. It also shares the “flatness” of the taxonomies, in that it doesn’t directly facilitate multidimensional visualization.



As an illustration of these limitations, The Joint Commission used the socio-technical model to categorize 120 HIT-related “sentinel events” identified between January 1, 2010, and June 30, 2013.32 They found:





  1. Human-computer interface (33%)—ergonomics and usability issues resulting in data-related errors



  2. Workflow and communication (24%)—issues relating to health IT support of communication and teamwork



  3. Clinical content (23%)—design or data issues relating to clinical content or decision support



  4. Internal organizational policies, procedures, and culture (6%)



  5. People (6%)—training and failure to follow established processes



  6. Hardware and software (6%)—software design issues and other hardware/software problems



  7. External factors (1%)—vendor and other external issues



  8. System measurement and monitoring (1%)




Although this overview gives a stratospheric picture of issues associated with serious events, it is not nearly granular enough to generate actionable next steps.



The isolated, “HIT-event” approach artificially dichotomizes healthcare operations (such as order entry, documentation, communication, and decision making) that can be done in a multitude of ways—electronic, otherwise, or both. This creates a blind spot in analysis when events that might share the same causal pathway are reported as if they were different problems. For example, failing to follow up a test result might in one case be blamed on technology, but the general category of failed follow-up has nontechnological causes as well, and the remedy might not be within technology at all. For this reason, embedding HIT event reports within a general event reporting system has advantages for analysts.



HIT within General Models of Medical Error



The World Health Organization has published a broad theoretical overview of medical error, the International Classification of Patient Safety.33 While this is one of the deepest theoretical models of safety events and would accommodate HIT events within its scope, it does not explicitly call these out among its key concepts and terms.34 See Figure 23.5.




Figure 23.5


Conceptual framework for the international classification of patient safety. (Reproduced with permission from the World Health Organization. © WHO, 2009. All rights reserved. WHO/IER/PSP/2010.2.)





The taxonomies used by CMPA, COPIC, CRICO, The Joint Commission, NQF, and PIAA are all comprehensive event-reporting tools, which include more or less detailed subsections for HIT events.



COPIC’s system for classifying HIT-related events is embedded within its general taxonomy of medical error. Table 23.8 is the HIT section from that system, which in total comprises 7 different “axes” (dimensions) on which data are captured, including an axis for “causation” with more than 150 possible choices.




TABLE 23.8 HIT-Related Section of Victoroff Taxonomy (COPIC)



One problem for building general-purpose systems for analyzing patient safety events is that any classification of HIT elements must integrate with the schema for other types of events (like procedural misadventures), along with appropriate causation elements. This is challenging, because the causal pathways for HIT events are somewhat different from those of “human error,” yet at the same time HIT errors and human errors often cause each other.



All Taxonomies Come Up Short



A vexing problem for all these classification systems is that they do not give enough help in distinguishing types from causes. For example, an event in which “wrong diagnosis” was part of the causal pathway might be classified as a “diagnostic error event” in one taxonomy, and a “cognitive error event,” “failure to review patient data,” “failure to communicate a result,” “system failure” or other things in other taxonomies. Likewise, “communication failure” might describe both a type of event and the cause of an event, such as “diagnostic error.” This problem of circularity and recursion is currently a challenge for all 2-dimensional classification systems and is not well-addressed by any existing taxonomy.



Many safety events are informationally complex, with multiple intermediate outcomes, stages in evolution and chronology, branching, cascading and recursive causation (in which outcomes of one process become causes of another and even itself), multiple human and material agents with different contributions on several levels, mitigating factors that work positively, negatively or bidirectionally, and so on. So far, no patient safety taxonomy is adequately designed to capture and visualize this complex dimensionality. As a result, the output of even the best databases consists mainly of tabulations in the form of graphs and charts from which the most that can be hoped for is an occasional actionable insight that is not self-evident.
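One direction for such a model, sketched here with hypothetical node names, is to capture an event as a directed graph of contributing factors rather than a flat category; branching, cascading, and even recursive causation then become representable. The chain below loosely follows the copy/paste anecdote reported later in this chapter.

```python
# Sketch: a safety event as a directed graph of contributing factors,
# instead of a flat category. Node names are hypothetical; edges mean
# "contributed to," and cycles capture recursive causation.

edges = {
    "cumbersome interface": ["copy/paste workaround"],
    "copy/paste workaround": ["stale medication list"],
    "stale medication list": ["delayed toxicity recognition"],
    "delayed toxicity recognition": ["longer admission"],
    "longer admission": ["more chart clutter"],
    "more chart clutter": ["cumbersome interface"],  # recursive loop
}

def downstream(factor: str) -> set:
    """All factors reachable from `factor`; the visited-set handles cycles."""
    seen, stack = set(), [factor]
    while stack:
        for nxt in edges.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(downstream("copy/paste workaround")))
```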



Graphs and charts are valuable for setting priorities and tracking trends, but require expert interpretation for discovering remedies. Many confounding variables and data collection biases make today’s epidemiology of HIT errors problematic to interpret.





  • Lack of harmony among error-reporting taxonomies



  • Imprecision and ambiguity among categories of error types



  • Individual variance in the data capture process



  • Bona fide disagreements among experts in how to interpret event narratives



  • Missing data on many events



  • Regional differences in what, when, why, and how many events are reported



  • Differences between software and hardware products, and even between implementations and uses of the same product in different institutions



  • Differences among the constituencies of event-collecting organizations with regard to the kinds of facilities, practitioners, and technologies they represent




The job of a risk manager is to devise preventive policies based on analysis of causal mechanics, rather than just epidemiology. It is toward this step that the next generation of event analysis should be aimed. Currently, no taxonomy has much power to suggest remedies (although the one in use by the CMPA has some promising visualization capabilities).



Mining Malpractice Claims



Medical malpractice claims are another potential source of reports about HIT events. Medical devices used for diagnosis and treatment, and of course devices designed for implantation in patients, are regular targets of complaints. However, so far there have been relatively few claims alleging liability against EHR systems for directly causing injury due to some kind of product defect. This is probably for several reasons:





  • The main function of many EHRs is documenting the care process (including charge capture), rather than guiding medical decisions.



  • There are very few autonomous technologies that impact patient care without a human being standing somewhere along the final path; blame for errors is easier to cast at a person than a machine.



  • HIT is complicated; it is difficult even for experts sometimes to parse a chain of events in a way that unambiguously explains how an event occurred.



  • HIT products involve a great number of human developers, installers, support technicians, trainers, supervisors, and users, of which the physicians generally have the best liability insurance.




Malpractice claims and suits are weak in terms of sensitivity for identifying safety events in general (because there are many more errors than suits), but they are strong in their ability to identify cases where there was harm. As far as technology is concerned, however, they are a shallow data pool, because the vast majority of professional liability claims do not name a technology as being at fault. Granted, information of one sort or another is in the direct causal pathway of many claims. But information technology is not commonly identified as a perpetrator—even when it is. This will likely change.



Two advantages of studying malpractice claims (and preclaim occurrence reports, when available) are that (1) these contain rich data, often investigated at a level of detail not possible with many other event reports; and (2) they come with a sense of practical urgency (ie, they entail costs) and can motivate corrective actions that other kinds of reports may not.



Looking for HIT errors, CRICO found the following EHR-related categories in a review of medical malpractice claims between 2012 and 2014.35 See Table 23.9.




TABLE 23.9 EHR-Related Etiologies Across Settings



Some specific anecdotes from CRICO data between 2012 and 2014:





  • Fentanyl order is altered by a decimal point; patient died.



  • The EHR automatically "signed" a test result that in fact had not been read; neither physician nor patient received the results showing liver cancer coexisting with the patient's lung cancer, and the patient was not treated.



  • Patient complained of “sudden onset of chest pains with burning epigastric pain, some relief with antacid”; complaint field was too small; entry noted only as “epigastric pain”; no EKG done; patient experienced a serious cardiac event later that day.



  • Obstetrics patient requested tubal ligation at the time of her fourth planned cesarean delivery. Noted on paper record in office but not transferred to new EMR. Covering physician delivered the baby but did not know of request for tubal ligation; patient became pregnant 6 months later.



  • Critical blood gas value misrouted to the wrong unit; patient expired from respiratory failure.



  • Critical ultrasound result routed to the wrong tab in the EHR; physician never saw the result until a year later; patient experienced delayed diagnosis of cancer.



  • Physician not able to access nursing emergency department (ED) triage note, which would have changed management; patient died of subarachnoid hemorrhage.



  • History copied from a previous note that did not document patient’s new amiodarone medication; delayed recognition of amiodarone toxicity.




It is important to recognize that the allegations in these claims were for traditional legal causes of action, such as "failure to diagnose," "failure to treat," and "wrong treatment, wrong site, wrong patient." The HIT component was a "contributor," rather than the final "cause." Today's implication is still that clinicians are responsible for the effects of the tools they use. This assumption is stubborn but unrealistic. The plaintiff's bar has not yet awakened to the realization that many technology failures happen in ways that are opaque to users, and that accountability for them lies with programmers, vendors, installers, trainers, network administrators, or even interfering applications from another vendor. But, a new era of technology liability will shortly be upon us.



Process of Care Workflow vs Information Flow



CRICO organizes its analysis of adverse events according to where they occur among 9 stages in a traditional, “process of care workflow.” See Table 23.10.




TABLE 23.10 CRICO Process of Care Workflow



These are not a chronology, but rather, logical phases in patient care. Analyzing events this way is useful when diagnosing and preventing errors and hazards, because each stage (especially in hospitals) tends to involve a specific organizational department or team that has an expert perspective on its own environment, problems, and solutions.



While this scheme adds a level of insight into errors in general, it has drawbacks for analyzing HIT events. Patient flow, and the movement of materials like pill bottles, syringes, and equipment, obey the laws of physical things. Information follows a different set of rules, because it is virtual. It can be in several places at once. It is invisible when it's not being worked with. It can change at the speed of light. It can alter its shape. It can travel through walls. It has super powers. And, information is vulnerable to kinds of risks different from those of physical things such as hemostats and wheelchairs.



A different strategy for analyzing HIT events might be organized around logical stages in the information-handling workflow. See Table 23.11.




TABLE 23.11 HIT Events Organized by Phases in Information-Handling Workflow



This scheme (reminiscent of Magrabi et al) reflects natural segmentations in technology function, rather than stages in the process of patient care. It is the principle this chapter will use for much of what follows.



A Donabedian Model for Evaluating HIT Risks



Another framework worth mentioning for organizing data about HIT hazards is Donabedian's classic triad of structures, processes, and outcomes.36



Structures


Structure data for HIT might look at the availability of specific tools, for example the presence or absence of EHRs or e-Prescribing. Selected statistics (as of February 2016) from the Office of the National Coordinator for Health Information Technology (ONC) Health IT Dashboard include those shown in Table 23.12.37




TABLE 23.12 ONC Health IT Dashboard – Selected Findings38



While this kind of data speaks to the success of the federal stimulus strategy, it offers no help in understanding the impacts of HIT upon safety. As with malpractice data, it is a long stretch to infer outcomes from the mere presence of a class of technology.



Processes


Process data are the most useful category of actionable lessons about how HIT works and fails. The best form is case reports of patient safety events, which are now being collected in large enough numbers to have validity across installations; they come in 2 flavors: aggregated statistics and individual narratives. Of these, the narratives are far more useful for practical purposes. While data across many institutions help focus priorities and flag areas of potential concern, the downside to aggregate event data is the difficulty of extracting relevant take-away points for a given organization. This is because of critical differences between EHR products, versions, configurations, functionality, and user practices from site to site, and even between departments of large institutions using the same software. Heterogeneity of HIT environments dilutes the validity of large datasets.



For the foreseeable future, patient safety workers should look to the anecdote cloud for the deepest and most actionable knowledge about HIT. The slowly growing body of analyzed and published case reports is helpful, but in the technology universe, the typical lag time between study conception, design, data collection, analysis, submission, and publication is so long that findings that are a few years old sometimes become either self-evident or stale. Technology quality experts have learned to take advantage of alternative data sources for the most current signals about bugs, glitches, design flaws, and user satisfaction, rather than wait for traditional scientific channels. This approach sounds like anathema to proponents of evidence-based medicine, but it illustrates how the methods of safety science differ from those of clinical science.



User groups attached to EHR products, both independent and vendor-sponsored, are often felt by members to be valuable forums for sharing stories about best practices, solutions, and problems.



Outcomes


A growing body of research reports both positive and negative patient safety outcomes associated with EHRs and other HIT applications. These are discussed throughout this chapter, under sections related to HIT functionality.



However, there is certainly a large body of events for which outcomes are not known. One category in particular is errors/events that do not result in harm. It is hard to measure events that didn’t happen. So, estimating the effect of human interference on the trajectories of potential errors involves lots of assumptions. In a study of 85 EHR-related cases reported to the Pennsylvania Patient Safety Authority in 2012, “77 (91%) were classified as ‘no harm’ (ie, an error did occur, but there was no adverse outcome for the patient) and 7 (8%) were reported as ‘unsafe conditions’ that did not result in a harmful event.”39



It should also be mentioned that there is logically a category of serendipitous events that can be classified as “errors that caused lucky outcomes.” (For example, mistakenly taking an x-ray of the wrong body part that reveals an important finding.) These, like many kinds of interesting error types, are going to appear in the anecdote cloud before they are assembled into published case reports.



A serious challenge for evidence-based analysis of HIT outcomes is that, for software and devices, the cycle of data collection, review, and publication is usually longer than the cycle of product debugging, updating, and enhancement. This means that the most rigorous studies will tend to be of applications that have already become obsolete, or which have undergone significant modifications. Nontraditional event reporting and mitigation pathways with much more rapid response times are going to be needed for HIT, as they have been implemented for software maintenance, updates, and support in general use.



Why Does Event Classification Matter?



The foregoing suggests some important conclusions about HIT event analytics.





  • It’s hard to make inferences about risks and mitigation if you don’t know the names of the risks you are trying to mitigate.



  • There is no general agreement about how to analyze HIT-related safety events.



  • It’s necessary to understand HIT-related causes both within a hierarchy of “machine” events and also in the larger context of “human,” “structure,” and “process” events.



  • There is no simple way to map concepts and categories between existing taxonomies.



  • It is not clear that statistics about HIT events are generalizable across settings.




A patient safety event, whether or not it involves HIT, cannot be understood as a linear, or even a 2-dimensional process. It is a self-interacting cluster. So far, the best available analytic and visualization schemes are flat matrices that do not do justice to the complexity of the things they are meant to study. The world awaits a better conceptual model that may expose discoveries that are not yet appreciated. Ultimately, designers of next-generation classification systems might do well to consider 2 principles:





  1. Patient safety events are multidimensional, causes and effects are circular, and remedies are highly dependent on contexts. Therefore, any framework for classifying them needs to provide multiple ways to display the data, arranged along different lines of sight.



  2. Reports of HIT events need to be integrated into a general taxonomy that captures non-HIT errors, hazards, and incidents.





THE SAFETY BENEFITS OF HIT







It is important not to demonize HIT. Its promise to improve patient safety is beyond imagining. Despite justifiable attention to risks, it is hard to find serious critics who argue that a future without HIT would be better for health, health care, and health costs. We know our children will smile at the primitive systems we use today, as they enjoy dazzling benefits from the systems of tomorrow.



But, this is hard to prove in advance. The peer-reviewed literature, necessarily behind the adoption curve, is still sparse regarding clinical benefits of HIT compared to the traditional categories of procedures, devices, and drugs. Partly, this is because safety is a difficult outcome to study; it often requires estimating the nonoccurrence of low-frequency events. Also, because HIT is so intertwined with human and organizational systems of care, it is challenging to isolate the effects of a given technology.



Because health care is an information-centered enterprise, almost any process within it might benefit from an information system that made it faster, cheaper, more accurate, more measurable, more accessible, and more amenable to being shared across a network of users. After all, this is what information technology has achieved in almost every enterprise where it has been applied, and there is no reason to doubt that its long-term impact on health care will be the same.



A growing body of literature shows positive effects associated with individual applications, or specific functions within EHRs and other information technology in hospital settings. A review of 154 articles from 2007 through 2010 found 92% “positive overall conclusions” (where HIT was associated with improvements in care). However, the authors acknowledged their findings were subject to publication bias.40



A systematic review of 236 studies by the RAND Corporation in 2014 assessed the effect of HIT on “healthcare quality, safety, and efficiency” in ambulatory and nonambulatory settings.41 The results were lukewarm, at best. The reviewers made several valuable observations:





  • Overall, a majority of studies (77%) that evaluated the effects of HIT reported findings that were at least partially positive.



  • This agrees with previous HIT literature reviews suggesting that HIT can improve healthcare quality and safety.



  • [However] Much HIT literature suffers from methodological and reporting problems that limit the ability to draw conclusions about why the intervention and/or its implementation succeeded or failed to meet expectations, and their generalizability to other contexts.




Perhaps the longest-studied and most favorably rated applications are electronic prescribing and computerized order entry, particularly when they are enhanced with basic decision support features like dose calculators, drug interaction alerts, and duplicate checking.42 Patient reminder ("tickler") systems and some notification systems (eg, transmitting hospital discharge summaries to ambulatory physicians) have also shown positive outcomes in observational studies. Ambulatory systems for engaging patients in their own care, adhering to therapy, and keeping appointments have shown some degree of effectiveness. Thousands of applications aimed at patients suggest benefits, in studies of varying quality, for physiologic monitoring (eg, glucose, blood pressure), telemedicine, lifestyle coaching, and so on. Communication technologies that allow patients and physicians to email, chat, text, and video conference with each other are still in early stages, but logic and the reports of early adopters are extremely promising. More than applications aimed solely at professionals, patient-provider communication heralds the greatest revolution in the structure of the delivery system since the telephone.
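As a flavor of the basic decision support named above, a duplicate-order check is nearly trivial to express. The sketch below is hypothetical (made-up orders and simple name-string matching); real systems check ingredients and therapeutic classes, not just drug-name strings.

```python
from typing import Optional

# Hypothetical duplicate-checking sketch, one of the basic CPOE decision
# support features named above. Order strings and the first-token matching
# rule are illustrative assumptions, not any vendor's logic.

active_orders = ["warfarin 5 mg daily", "metoprolol 25 mg BID"]

def duplicate_alert(new_order: str, active: list) -> Optional[str]:
    """Return an alert string if the new order's drug name matches an
    active order; None otherwise."""
    new_drug = new_order.split()[0].lower()
    for order in active:
        if order.split()[0].lower() == new_drug:
            return f"Possible duplicate: '{new_order}' vs active '{order}'"
    return None

print(duplicate_alert("Warfarin 2.5 mg daily", active_orders))
# -> Possible duplicate: 'Warfarin 2.5 mg daily' vs active 'warfarin 5 mg daily'
```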



Bates and Gawande suggest that HIT can reduce medical errors and adverse events by (1) preventing them; (2) facilitating responses to them; and (3) tracking and providing feedback about them.43



A 2015 study of 45,235 cardiovascular, pneumonia, and surgery patients showed a reduction in predicted versus observed rates of adverse events by 17% to 30% in those “exposed to a fully functional EHR.” The primary outcomes evaluated were the occurrence rates of 21 in-hospital adverse events, classified by 4 clinical domains: hospital-acquired infections, adverse drug events, general events (such as falls and pressure ulcers), and postprocedural events. Among all study patients, the occurrence rate of adverse events was 2.3%.44



A 2012 systematic review of 148 randomized, controlled trials found that clinical decision support (CDS) systems were able to modestly improve healthcare processes and outcomes.45 A 2005 review of CDS systems (diagnostic, reminder, disease management, drug dosing, and prescribing) found improvements in practitioner performance.46 A 2009 Texas study indicated that hospitals with automated notes and records, order entry, and CDS had fewer complications, lower mortality rates, and lower costs.47 A review of literature from 1996 to 2005 showed (as long as 2 decades ago) hints of positive impacts from HIT on chronic illnesses such as diabetes, heart disease, and mental illness. Technologies mentioned included EHRs, computerized prompts, population management (including reports and feedback), specialized decision support, electronic scheduling, and personal health records.48 A systematic review by Jones et al of 236 studies from 2010 to 2013 found “strong evidence” supporting CDS and CPOE. A noteworthy comment from the authors: “However, insufficient reporting of implementation and context of use makes it impossible to determine why some health IT implementations are successful and others are not.”49



In a 2014 study from the Pennsylvania Patient Safety Authority, hospitals using "advanced EHRs" had a 27% overall decline in patient safety events, largely because of a 30% drop in events due to medication errors.39



There is also a body of negative studies that failed to show patient benefits from specific apps or EHR components. Some of these may simply have been too small to detect an effect, or may have suffered from other methodological problems. For example, in a 2007 retrospective, cross-sectional analysis, EHR use in ambulatory practices was associated with improvement in only 2 of 17 quality indicators (avoiding benzodiazepine use in depressed patients and avoiding routine urinalysis during annual physicals).50 A 2010 study by Metzger et al found rates of 40%-80% effectiveness among CPOE programs in detecting potential adverse events.51



It would be unreasonable to discard intuitively promising solutions on the basis of a couple of bad reviews. For technology, many biases (commercial, regulatory, academic, and attitudinal) guarantee a trend toward favorable reports, but more important, we are actually learning how to build and use better systems. Still, negative studies are at least as useful as positive ones, and techno-skepticism is a healthy balancing force in a culture accustomed to embracing every new gadget with a marketing budget.




DOES HIT PREVENT MALPRACTICE CLAIMS?







One study of malpractice claims looked for an association with use or nonuse of EHRs by physicians in Colorado.52 The results were inconclusive; the data were from a single state's physician population, and claims were only collected (from 1982) through 2009, which was still during the early phase of wide-scale EHR adoption. To be valid, malpractice claims data need 2-7 years to stabilize following alleged events, which means a significant lag after the adoption of a technology before conclusions can be drawn about whether it reduces claims and suits.
