39
The Importance of Skills Assessment and Recording Personal Outcomes in the Future of Training

Peter B. Cotton1, Sachin Wani2, Roland M. Valori3, and Jonathan Cohen4

1 Medical University of South Carolina, Charleston, SC, USA
2 University of Colorado Anschutz Medical Campus, Aurora, CO, USA
3 Gloucestershire Hospitals NHS Foundation Trust, Gloucestershire, UK
4 NYU Grossman School of Medicine, New York, NY, USA

Endoscopy is now a major part of most gastroenterologists' professional lives. Interest in assessing and documenting the skill levels of endoscopists has increased substantially in recent years, for many reasons. Patients and potential patients have become more aware that endoscopic expertise varies and that outcomes can be compromised by poor performance. Their awareness has risen partly because the profession has (rather belatedly) started to expose some dirty linen, and to publish papers showing imperfect performance. While cognitive elements play an important role in determining outcomes, research has focused on technical aspects, which are (somewhat) easier to measure. In the past, most published series came from expert centers extolling their skills, while those less skilled neither measured their outcomes nor publicized them, for obvious reasons.

Colonoscopy provides an excellent example. Many studies by experts reported cecal intubation rates close to 98% [1]. Then a British multicenter audit showed an adjusted cecal intubation rate of only 77% [2], and a study in the United States showed that fewer than half of 104 endoscopists were reaching the cecum in 90% of cases [3]. These technical deficiencies were then emphasized by reports of "miss rates" from back-to-back colonoscopies [4], missed colorectal cancers, and huge variations in adenoma detection rates [5]. The advent of CT colonography provided additional evidence of, and reasons for, lesions missed at optical colonoscopy [6]. One leading authority editorialized that "colonoscopy is not as good as gold" [7] even when done by experts, and likened the difficulty (and importance) of choosing a colonoscopist to that of selecting a roofing contractor [8].

ERCP is another procedure of significant interest in this context. It is technically challenging and carries substantial risks, which are increased by technical failure. Experts claim very high technical success rates [9], but the less experienced rarely present their data. A comprehensive audit from Britain showed that only 77% of trained endoscopists achieved a cannulation rate of >80%, and that the rate averaged only 66% for senior trainees with >200 cases [10].

The wish to understand and document performance is also driven by medicolegal and employment issues. In the United States, insurance payments may eventually be linked to outcomes, and patients with high-deductible plans are shopping around for value. These facts clearly show that there are substantial quality challenges in endoscopy, and that it behooves us as professionals to study and understand the problems, and to put in place mechanisms to document and enhance performance.

The initial training period

This process begins in the training environment, which until recently has been largely unstructured. Trainees were apprenticed to various mentors, who might or might not (often not) have much interest in teaching, or any training in how to teach. Trainees varied in their dedication to self-study, and were not motivated by any formal assessment process. Competency was assumed when the allotted training years had passed. It is self-evident that being involved in a certain number of cases is no guarantee of competence.
Trainers vary in their practice spectra, skills, and enthusiasm, and trainees vary in their learning rates. Furthermore, there has been no definition of a "case," i.e., how much of a procedure has to be done for it to count.

These concerns parallel a growing movement in medical education. There is increasing emphasis on standardizing competence assessments and on demonstrating readiness for independent practice, as medical training in North America transitions from an apprenticeship model to competency-based medical education. The Accreditation Council for Graduate Medical Education (ACGME) has replaced its reporting system with the Next Accreditation System (NAS), a continuous assessment reporting system focused on ensuring that specific milestones are reached throughout training, that competence is achieved by all trainees, and that these assessments are documented by training programs [11].

The only way to assess skill levels is by sequential, rigorous, and objective assessment based on clear criteria or objectives. Competence is defined as the minimum level of skill, knowledge, and/or expertise, derived through training and experience, required to safely and proficiently perform a task or procedure [12]. This means that the practitioner should be allowed to perform the specific procedure without supervision, but there are levels of complexity. Society endoscopy guidelines specify competence thresholds, rather than absolute procedure volume requirements, as the means of determining competence in endoscopic procedures, with thresholds varying between guidelines (Tables 39.1 and 39.2). The most recent document on privileging and credentialing in endoscopy from the American Society for Gastrointestinal Endoscopy (ASGE) suggests that at least 225 hands-on endoscopic ultrasound (EUS) cases and 200 supervised independent ERCP procedures (including 80 independent sphincterotomies and 60 biliary stent placements) should be performed before learner competence is assessed [12].

It should also be noted that such guidelines and thresholds do not account for the variable rates at which trainees learn and acquire endoscopic skills [13]. Thus, the recommended volume thresholds have generally been accompanied by the caveat that a minimum volume of procedures cannot ensure competence. The thresholds remain valuable in guiding training programs as to the minimum case volume they need to offer trainees, and in indicating when programs can realistically begin to make summative skill assessments of trainees based on objective criteria.
We recognize that relying solely on minimum procedure volumes has a number of limitations, as it requires several assumptions regarding training, specifically: (1) all trainees learn at the same speed; (2) trainees learn all skills at the same speed; (3) all trainers are equivalent educators; (4) trainees are exposed to procedures of similar complexity and with comparable opportunities for supervised, hands-on learning; and (5) trainees acquire cognitive endoscopy skills at the same rate as technical skills. As these assumptions are clearly unrealistic, it is imperative that we use more rigorous methodologies to assess competence [11].

Thus, the first step is to set the stage appropriately by clearly defining the overall goal of the training period. What is to be learned, and to what level of performance? We can take ERCP as an example. It is a platform, not a single procedure, and there have been several attempts to categorize its various procedures into levels of "difficulty," building on the pioneering work of Schutz and Abbott [14]. The latest ASGE workshop [15] extended the concept to consider clinical complexity as well as perceived technical challenge, and proposed four grades [16]. We have proposed compressing these into three levels of performance and practice: Basic, Advanced, and Tertiary (Table 39.3). The Basic procedures are the well-established and validated biliary contexts and techniques that anyone offering ERCP should be able to address to a reasonable level. Advanced and Tertiary procedures (some of which are not so well validated) require extra training and experience. Some of the 3-year gastrointestinal (GI) fellowships in the United States include ERCP, but can aim only to reach competence in Basic-level cases. Extra training (including fourth-year programs in the United States) is needed to address the more complex cases, during which success rates in Basic cases should also improve. Similar grading schemes are being developed for upper endoscopy, colonoscopy, and EUS.

Secondly, the training path should be dissected into constituent parts that can be addressed individually. This means designing a curriculum for the specific needs of the trainee, covering all relevant aspects. Cognitive aspects are important but are often overlooked in the rush to learn technical skills. Practitioners must have an appropriate knowledge base covering the conditions they may encounter, the risks of endoscopy (and how they are minimized and managed), their own limitations, and potential alternatives. The technical elements can be dissected similarly and addressed sequentially.

Thirdly, all of these elements should be tested. The cognitive aspects can be assessed with standard tests, and are addressed to a certain extent in specialty examinations such as the GI boards in the United States. Assessing technical skills during training is more difficult, but it is made easier if the constituent steps to proficiency are defined and tested against a predetermined competency framework. Conceptually, competence in advanced endoscopy should be considered in three broad domains: (1) technical (psychomotor), (2) cognitive (knowledge and recognition), and (3) integrative (expertise and behavior) [17]. Competence does not happen; it develops over time.
As such, the ACGME's NAS requires training programs to continuously monitor trainee development from "not yet assessable" to "ready for unsupervised practice" (the target) or beyond (aspirational) [11].

Tools for direct observation and assessment of endoscopy skills

For skills acquisition in endoscopy, the use of direct observation assessment tools can help identify areas where trainees need more training, inform feedback, and create a framework for teaching to guide their development [18]. This type of direct observation assessment supports judgments about readiness to progress. As there may be wide variation in skills among endoscopists with similar experience [19], these tools inform a more rounded assessment of individual trainees' trajectories toward competence, alongside simpler measures such as the number of procedures performed and quality metrics, which do not enable focused feedback. To optimize trainee development and ensure competence prior to independent practice, direct observation assessment tools with strong validity evidence are critical [18]. Assessment tools should encompass the full breadth of technical, cognitive, and integrative competencies required for the performance of high-quality endoscopy [17]. Ideally, they should support both assessment for learning and assessment of learning [20]. Formative assessment serves to support and drive learning; formative assessment tools should therefore be integrated into endoscopic training curricula to provide trainees with timely, specific, and actionable feedback that promotes self-reflection, identifies learning gaps, and guides future instruction [18]. Summative assessment should use tools with sufficient psychometric rigor, as they are used for high-stakes purposes such as certification. When designing an assessment program, using tools with strong validity evidence will provide better measures of competence and more useful data for both learners and training programs [18].

Colonoscopy has been the primary focus for the development of endoscopy assessment tools, which vary widely in their validity evidence. A recent systematic review evaluated the strength of the validity evidence supporting available colonoscopy direct observation assessment tools using the unified framework of validity [18]. This analysis identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, and 1 both); all assessed technical skills, while 10 also assessed cognitive and integrative skills. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence of the available tools. The ACE was refined by the American Society for Gastrointestinal Endoscopy from the Mayo Colonoscopy Skills Assessment Tool, originally designed by a group of Mayo Clinic endoscopists [21, 22] (https://www.asge.org/docs/default-source/education/doc-ace-white-paper-epublished(1).pdf?sfvrsn=881f4a51_6). The ACE has strong discriminative validity evidence, and minimally acceptable criteria for competency have been established through longitudinal analysis of learning curves [23, 24]. The tool is limited, however, in that it does not assess nontechnical skills, focuses predominantly on intraprocedural elements of performance, and was initially designed with local expertise from only one institution. Studies evaluating validity evidence for the ACE did not include surgical or non-physician endoscopists.
Additionally, the ACE lacks reliability data and consequently has poor internal structure evidence [18].
Table 39.1 Guidelines for assessment of EUS competence* [11].

                                   ASGE (United States)   FOCUS (Canada)      ESGE (Europe)        BSG (United Kingdom)
Year of publication                2017                   2016                2012                 2011
Total number of supervised cases   225                    250                 NR                   250
Pancreaticobiliary indication      NR                     100                 NR                   150 (75 pancreatic cancer)
Luminal indication (mucosal)       NR                     25 rectal EUS       NR                   80 (10 rectal EUS)
Subepithelial lesion               NR                     NR                  NR                   20
EUS-FNA                            NR                     50 (10 CPB, CPN)    50 (30 pancreatic)   75 (45 pancreatic)

* These numbers represent the minimum cases needed to be completed before competence can be assessed.
ASGE, American Society for Gastrointestinal Endoscopy; FOCUS, Forum on Canadian Endoscopic Ultrasound; ESGE, European Society of Gastrointestinal Endoscopy; BSG, British Society of Gastroenterology; NR, not reported; CPB, celiac plexus block; CPN, celiac plexus neurolysis.
Table 39.2 Guidelines for assessment of competence in ERCP* [11].

Society guidelines and their thresholds for assessment of competence:
ASGE: 200 supervised ERCP procedures.
Gastroenterological Society of Australia (CCRTGE) and Canadian Association of Gastroenterology: 200 unassisted ERCPs with native papillary sphincters, 80 independent sphincterotomies, and 60 stents.
British Society of Gastroenterology: at least 300 ERCPs with a cannulation rate of >80% (last 50 cases); must be competent in sphincterotomy, stone extraction, and stenting.

* These numbers represent the minimum cases needed to be completed before competence can be assessed. Note that this reference simply quotes the requirement for 80 sphincterotomies and 60 stents that appears in the Australian guidelines; these numbers have not been validated in the United States.
Table 39.3 Levels of ERCP complexity. (Adapted from ASGE workshop, reference [15].)

Basic, levels 1 and 2:
  Deep cannulation of duct of interest, sampling
  Biliary stent removal/exchange
  Biliary stone extraction <10 mm
  Treat biliary leaks
  Treat extrahepatic benign and malignant strictures
  Place prophylactic pancreatic stents

Advanced, level 3:
  Biliary stone extraction >10 mm
  Minor papilla cannulation and therapy
  Removal of internally migrated biliary stents
  Intraductal imaging, biopsy, needle aspiration
  Manage acute or recurrent pancreatitis
  Treat pancreatic strictures
  Remove pancreatic stones, mobile and <5 mm
  Treat strictures, hilar and above
  Manage suspected sphincter dysfunction (±manometry)

Tertiary, level 4:
  Remove internally migrated pancreatic stents
  Intraductal guided therapy (PDT, EHL)
  Pancreatic stones, impacted and/or >5 mm
  Intrahepatic stones
  Pseudocyst drainage, necrosectomy
  Ampullectomy
  Whipple, Roux-en-Y, bariatric surgery