5

Pediatric endoscopy training and ongoing assessment

Catharine M. Walsh, Looi Ee, Mike Thomson, and Jenifer R. Lightdale

Introduction

Achieving proficiency in gastrointestinal endoscopy requires the acquisition of related technical, cognitive, and integrative competencies. Given the unique nature of performing endoscopy in infants and children, its training and assessment must be tailored to pediatric practice to ensure delivery of high-quality procedural care. This chapter outlines current evidence regarding pediatric endoscopy training and assessment.

Training

Learning to perform endoscopy largely occurs during formalized pediatric gastroenterology training programs of at least two years' duration. The traditional endoscopy teaching method is based upon the apprenticeship model, with trainees learning fundamental skills under the supervision of experienced endoscopists in the course of patient care. More recently, novel instructional aids have been utilized with the aim of accelerating learning, facilitating instruction, and helping trainees attain base levels of proficiency prior to performing procedures in the clinical environment.

Endoscopy skill acquisition

With regard to learning procedures such as endoscopy, skill acquisition has been described by Fitts and Posner [1] as a sequential process involving three major phases: cognitive, associative, and autonomous. In the cognitive stage, a learner develops an initial mental understanding of the procedure through instructor explanation and demonstration. Performance during this stage is often erratic and error filled, and feedback should focus on correct procedural technique and the identification of common errors. Subsequently, in the associative phase, the learner translates knowledge acquired during the cognitive stage into appropriate motor behaviors; tasks are gradually executed more efficiently, with fewer errors and interruptions. Feedback during the associative stage should aim to help learners self-identify errors and their associated corrective actions [2].
Finally, with ongoing practice and feedback, the learner transitions to the autonomous stage, in which motor performance becomes automated such that skills are performed without significant cognitive or conscious awareness.

Endoscopy training aids

A relatively recent trend towards ensuring both quality of training and patient safety has prompted educators to seek complementary methods of teaching endoscopy that enhance apprenticeship approaches. In particular, magnetic endoscopic imaging has been developed to provide real-time, three-dimensional views of the colonoscope shaft configuration and its position within the abdomen during an endoscopic procedure [3]. A meta-analysis of 13 randomized studies found that use of magnetic endoscopic imaging during real-life colonoscopy is associated with a lower risk of procedure failure, reduced patient pain scores, and a shorter time to cecal intubation, compared with conventional endoscopy [4]. With regard to training, research indicates that use of an imager may enhance learners' understanding of loop formation and loop reduction maneuvers [5].

Simulation-based training provides a learner-centered environment in which learners can master basic techniques, and even make mistakes, without risking harm to patients [6,7]. Mastery of basic skills in a low-risk, controlled environment, prior to performance on real patients, enables trainees to focus on more complex clinical skills [6]. Additionally, within the simulated setting, learners can rehearse key aspects of procedures at their own pace, training can be structured to maximize learning, and errors can be allowed to occur unhindered, with the goal of allowing trainees to learn from their mistakes [8]. However, it is important to recognize that simply providing trainees with access to simulators does not guarantee that the simulators will be used effectively.
Instead, there are a number of best practices in simulation-based education – including feedback, repetitive practice, distributed practice, mastery learning, interactivity, and a range of task difficulty – that must be employed by the educator to optimize learning [9–13]. Additionally, feedback must be carefully deployed at the end of the simulation with the intention of promoting successful procedural mastery [10,12]. Indeed, terminal feedback, defined as feedback given by a trainer to a trainee on completion of a task, is more effective than either feedback given during task performance, which can lead to overreliance on feedback by the learner, or withheld feedback, which has been shown to handicap learning [14,15]. In short, the simulated setting allows educators to employ a number of strategies, such as allowing errors to occur and withholding feedback until a task is complete, that could be detrimental to patient safety if employed in the clinical setting.

Training the pediatric endoscopy trainer

There is increasing recognition that teaching endoscopic skills should be performed by individuals with formally developed skills and learned behaviors, including awareness of the principles of adult education, the components of good training, best practices in procedural skills education, and the appropriate use of beneficial educational strategies such as feedback [16,17]. The ability to teach endoscopy is an important skill that can be improved with instruction. In turn, "train the trainer" courses have been developed to enhance endoscopy teaching [18]. These courses are now mandatory for adult gastroenterology endoscopy trainers in the United Kingdom [19] and are increasingly being implemented in other jurisdictions, such as Canada [20].

Assessment

Assessment of endoscopic procedural performance is ideally an ongoing process that occurs throughout the learning cycle, from training to accreditation to independent practice.
This requires thoughtful integration of both formative and summative assessments to simultaneously optimize the learning and certification functions of assessment. Formative assessment is process focused: it aims to provide trainees with feedback and benchmarks, enables learners to self-reflect on performance, and guides progress from novice to competent (and beyond) [21,22]. In contrast, summative assessment is outcome focused: it provides an overall judgment of competence, readiness for independent practice, and/or qualification for advancement [22]. Summative assessment supports professional self-regulation and accountability; however, it may not provide adequate feedback to direct learning [22,23].

Over the past two decades, there has been a profound shift in postgraduate medical education from a time- and process-based framework that delineates the time required to "learn" specified content (e.g., a two-year gastroenterology fellowship) to a competency-based model that defines desired training outcomes (e.g., performing upper and lower endoscopic evaluation of the luminal GI tract for screening, diagnosis, and intervention [24]) [25–27]. Assessment is an integral component of competency-based education, as it is required to monitor progression throughout training, document trainees' competence prior to entry into unsupervised practice, and ensure maintenance of competence. Nevertheless, procedural assessment in pediatric gastroenterology continues to focus predominantly on the number of procedures performed by a learner, together with a "gestalt" view of the learner's competence formed by a supervising physician [28]. This type of informal global assessment is fraught with the bias inherent in subjective judgment and is not designed to aid in the early identification of trainees requiring remediation. A major limitation of using procedural numbers to determine competency is the demonstrated wide variation in the rate at which trainees acquire skills [29,30].
Furthermore, a host of factors have been shown to affect the rate at which trainees develop skills, including training intensity [29], disruptions in training [31], the use of training aids (e.g., magnetic endoscopic imagers [3]), the quality of teaching and feedback received, and a trainee's innate ability [32]. Reflective of these concerns, current pediatric credentialing guidelines outline "competence thresholds" as opposed to absolute procedural number requirements. A competence threshold is the minimum recommended number of supervised procedures a trainee must perform before competence can be assessed [33]. There is tremendous variability in current credentialing guidelines with regard to competence thresholds for pediatric upper endoscopy and colonoscopy [34–36]. In large part, this variability reflects a current lack of evidence for determining competence thresholds for pediatric endoscopy. As such, today's guidelines for the procedural numbers at which a learner can be assessed for competency in upper endoscopy are based principally on expert opinion [37].

In contrast, current colonoscopy guidelines are empirically based. However, most rely on an early study of competency by Cass et al. [38], which assessed 135 adult gastroenterology trainees from 14 programs and determined that performance of 140 supervised colonoscopies was required to achieve a 90% cecal intubation rate. More recent studies of adult colonoscopy competency have found that competence thresholds are achieved by 275 and 250 procedures when utilizing criteria including cecal intubation rate, time to intubation, and competency benchmarks on the Mayo Colonoscopy Skills Assessment Tool (MCSAT) [39] and the Assessment of Competency in Endoscopy (ACE) tool [40], respectively, while it may take upwards of 400 procedures for some trainees to achieve competence.
To date, the largest study to prospectively examine this question analyzed 297 trainees over one year in the UK and found that 233 colonoscopies are required to achieve a 90% cecal intubation rate [29]. In addition, a regression analysis of 10 adult studies, encompassing 189 trainees, estimated that 341 colonoscopies are required to achieve a 90% cecal intubation rate [41].

Assessment based on quality metrics

Current pediatric endoscopy training programs increasingly require learners to monitor quality measures, such as independent terminal ileal intubation rate and patient comfort, for use as part of a global or summative assessment of trainees. Additionally, quality metrics are being used by practicing endoscopists as formative assessment tools to help promote improvement in care delivery [42]. However, the application of quality metrics to pediatric endoscopy requires pediatric-specific measures, which have yet to be formally developed. Currently, there are limited data on the applicability of adult-derived quality metrics to pediatric practice and on their impact on clinically relevant outcomes. For example, with regard to cecal intubation, the reported successful completion rate for pediatric endoscopists varies from 48% to 96% [43–48]. Perhaps of even more pertinence to pediatric procedures, the reported terminal ileal intubation rate varies from 11% to 92.4% [43,45–49]. Additional research is required to further delineate and define pediatric-specific quality indicators that can be used for assessment and quality assurance purposes, and to validate them in a longitudinal, prospective fashion [50].

Direct observational assessment tools

In recent years, accreditation bodies and endoscopy training and credentialing guidelines have all placed greater emphasis on the continuous assessment of trainees as they progress towards competence. To this end, direct observational assessment tools have emerged to support a competency-based education model that defines desired training outcomes.
It is critical to ensure that assessment tools are psychometrically sound and supported by strong validity evidence. A number of endoscopy assessment tools have been developed in the adult context [51]; however, they are not pediatric specific, and validity evidence for their use in assessing pediatric endoscopists remains limited. Walsh et al. [52] developed the Gastrointestinal Endoscopy Competency Assessment Tool for pediatric colonoscopy (GiECATKIDS), which comprises a task-specific seven-item global rating scale that assesses holistic aspects of the skill and a structured 18-item checklist that outlines key procedural steps. Developed using Delphi methodology by 41 pediatric endoscopy experts from 28 North American hospitals, the GiECATKIDS addresses performance of all components of a colonoscopy procedure, including pre-, intra-, and postprocedural aspects. In a study of 116 colonoscopies performed by 56 pediatric endoscopists (25 novice, 21 intermediate, and 10 experienced) from three North American academic hospitals, the GiECATKIDS was found to be a reliable and valid measure that can be used in a formative manner throughout training [53]. The GiECATKIDS has also been found to have strong interrater reliability; excellent test–retest reliability; evidence of content, response process, and internal structure validity; discriminative validity (the ability to detect differences in skill level); validity evidence of associations with other variables thought to reflect endoscopic competence (e.g., ileal intubation rate); and educational usefulness [53]. Ultimately, the integration of rigorously developed assessment tools, such as the GiECATKIDS, will provide a means to document progress throughout the training cycle.
In addition, these tools can be used to support trainees' learning through the provision of instructive feedback, allow program directors to monitor skill acquisition to ensure trainees are progressing, facilitate identification of skill deficits, and help ensure readiness for independent practice [51,54]. Looking to the future, the universal adoption of robust assessment tools by pediatric gastroenterology training programs across jurisdictions would generate aggregate data that could be used to develop average learning curves for pediatric endoscopists. These data could also be used to define milestones for pediatric endoscopists at different levels of training and to help establish minimal performance-based benchmark criteria for competence in pediatric endoscopic procedures to support competency-based training.

Conclusion

Differences between pediatric and adult endoscopic practice highlight the need for pediatric-specific approaches to training and assessment. Intense efforts have been made over the past decade to define the competencies required to carry out pediatric endoscopic procedures and to develop tools to support competency-based assessment. In addition, new instructional aids, such as magnetic endoscopic imaging and simulation, have been introduced with the aim of enhancing training quality and accelerating skill acquisition. Ultimately, competency assessment metrics should be inextricably woven into a core endoscopy curriculum to ensure optimal integration of teaching, learning, feedback, and assessment throughout the entire spectrum of training in pediatric gastrointestinal procedures.