and Ureteroscopy (URS)

Fig. 16.1

The Uro-Scopic Trainer. (Courtesy of Limbs & Things, United Kingdom)

The Scope Trainer (Mediskills, UK) is another benchtop simulator that features an expandable bladder, normal-length ureters, and two kidneys with renal pelves and calyces. Brehmer initially evaluated 14 urologists of mixed endoscopic experience on task-specific skills performed on both live patients and the Scope Trainer [21]. All participants considered the simulator representative of live flexible ureteroscopy, suggesting high content validity. Construct validity was also demonstrated: participants with subspecialty endourological training scored significantly higher than the rest of the cohort on a task-specific checklist. Brehmer then explored the ability of the Scope Trainer to improve dexterity for semirigid ureteroscopy [22]. Using a validated OSATS protocol [23], his group evaluated 26 urology residents' semirigid ureteroscopy skills after a focused training course on the benchtop model. Trainees demonstrated a statistically significant improvement in scores and reported increased familiarity with the procedure.

The Center for Research in Education and Simulation Technologies (CREST) developed an endoscopic urinary tract model (Simagine Health, USA) that was the first to utilize 3D printing techniques for urologic models. Organosilicate models were made with intraluminal tissue-analogous textures to simulate the human renal collecting system (Fig. 16.2). This benchtop model was initially piloted by Kishore et al. with a cohort of residents at their own institution; the study demonstrated good validity evidence but was not blinded [23]. The model was subsequently “taken on the road” by Argun et al., who evaluated residents on core ureteroscopic skills in a blinded, three-institution study using a similar OSATS-for-endourology evaluation tool [24]. Refinements to the tool were made by evaluating the validity evidence for individual items from the pilot study. This curriculum and model have high construct and internal validity, and the model has since been used at multiple AUA and industry-sponsored hands-on courses.


Fig. 16.2

The CREST ureteroscopy trainer. (Courtesy of Simagine Health, USA)

White et al. examined validity evidence for the Adult Ureteroscopy Trainer (Ideal Anatomic Modeling, USA), a benchtop model that was created with rapid prototyping based on the collecting system of a patient with recurrent nephrolithiasis [25]. A mix of 46 resident and faculty urologists were asked to perform ureteroscopy and basket manipulation of a lower pole stone in the Trainer while supervised by an experienced endourologist. Afterwards, participants completed a questionnaire to evaluate their experience. Results demonstrated robust face, content, and construct validity, as well as high fidelity. Other groups have replicated this process in creating phantoms of the human kidney with intact collecting systems using 3D printing [26].

Villa et al. described the Key-Box (Porgès-Coloplast, France), a benchtop model that is unique in that it does not attempt significant anatomic or tissue fidelity [27]. It is instead composed of a set of maze-like boxes designed to be traversed with a flexible ureteroscope. Rather than reproducing human anatomy, the model creates an environment in which trainees must navigate complex spaces, building the dexterity necessary for future ureteroscopy in humans. Villa randomized 16 medical students to either a 10-day training period with this new model or a non-training control [28]. The endoscopic skills of both groups were then assessed by an expert endourologist using a scale developed by Matsumoto [29]. The group with simulation experience scored significantly higher in all measures, including task completion time. The authors concluded that despite its low fidelity, the Key-Box offers a compelling starting point for benchtop ureteroscopy training.

Blankstein described a simulation curriculum based on a ureteroscopy trainer designed by Cook Medical (Cook Medical, USA) [30]. This benchtop model includes a distensible bladder, simple and complex calyceal systems, and a modeled tortuous ureter. Fifteen residents at various stages of training were enrolled in a 2-week course that included didactic lectures, individualized feedback, and simulation training. Trainee performance was recorded on video and then reviewed by two blinded experts. When compared to an initial baseline assessment, postcourse evaluations revealed improvements in task completion times and overall performance scores. Scores correlated with trainee ureteroscopy experience, and 80% of participants rated the benchtop model as realistic. Collectively, the simulator was felt to have high face, content, and construct validity.

The advantages of all of these models and their associated curricula are their relatively low cost and standardized nature, though portability and usability vary widely among the systems described above. Like the biologic systems, they still require subjective means of assessment.

Virtual Reality-Based Simulators

The URO Mentor (Simbionix, Israel) sought to enhance benchtop simulator design by augmenting the user’s experience with a virtual reality (VR) component. Initially described by Michel, the simulator is composed of a computer workstation, proprietary software, and a mannequin with associated cystoscopes and ureteroscopes (Fig. 16.3) [31]. These tools not only allow trainees to simulate a variety of endourological procedures, but the system also captures performance data, potentially reducing the high cost of expert supervision needed for conventional benchtop models. Watterson et al. randomized 20 novice trainees to either no training or individualized instruction on the URO Mentor [32]. Before and after the intervention, the simulated endoscopic skills of the two groups were assessed both subjectively by blinded observers and objectively by data collected through the simulator. Post-testing revealed significant improvements in all measurements in the training group, with high correlation between the simulator and blinded observer ratings. Though encouraging, such results must be interpreted with caution, as performance on a simulator does not necessarily translate to operative skill.


Fig. 16.3

The URO Mentor simulator. (Courtesy of 3D Systems, USA)

Since that initial report, multiple other groups have examined validity evidence for the URO Mentor. Wilhelm et al. evaluated 21 medical students in simulated proximal ureteral stone manipulation, with or without training on the URO Mentor [33]. This group’s results largely mirrored those of the Watterson trial [32]. Jacomides et al. performed a similar study, though they enrolled a mix of medical students, junior residents, and senior residents, all of whom underwent simulation training over several sessions [34]. Post-intervention assessment revealed a benefit for all groups, with medical students demonstrating the greatest degree of improvement. In light of resource constraints, simulation time may therefore be most effective when focused on those with the least endourological experience.

Ogan et al. measured the effect of URO Mentor simulation on subsequent human cadaveric ureteroscopy for both medical students and residents [35]. Trainees were evaluated at baseline on the URO Mentor, underwent 5 h of supervised simulator training, were re-evaluated on the URO Mentor, and then performed diagnostic ureteroscopy on a human cadaver while supervised by experienced endourologists. Interestingly, post-training URO Mentor and cadaveric ureteroscopy performance scores correlated strongly for medical students, but not for residents. In the resident group, cadaveric ureteroscopy scores correlated more closely with postgraduate year. The authors speculated that VR-based measurements may not be the most appropriate tools for assessing the performance of experienced operators, like the residents who had volunteered for their study. They noted that the high cost (approximately $60,000) of the URO Mentor platform may be offset by reductions in operative time.

Knoll et al. compared the performance of experienced and inexperienced urologists during simulated treatment of a lower calyceal stone, also on the URO Mentor simulator [36]. Performance was graded on completion time, stone contact time, complications, and treatment success. Among the 20 participants, performance differed significantly between those with fewer than 40 and those with more than 80 previous flexible ureteroscopy cases, suggesting robust construct validity. In a larger study, Dolmans et al. asked 89 urologic trainees and faculty to perform endoscopic manipulation of a distal ureteral stone using the URO Mentor [37]. Afterwards, the participants completed a questionnaire about their experience. Of the respondents, 25% rated the realism of the URO Mentor ≥3.5 on a 5-point scale, 82% felt that it was a useful educational tool, and 73% reported that they would purchase the URO Mentor “if financial means were available” [37]. The system’s advantages are standardization and objective assessment; poor force feedback, inaccurate tool-tissue responses, and per-unit cost are its downsides.

Cross-Platform Comparisons

Given the challenges inherent in validating a ureteroscopy simulation platform in isolation, it is hardly surprising that even less data exists to support meaningful cross-platform comparisons. The data that do exist, however, are informative. Misha et al. compared the effects of simulation training using two conventional benchtop simulators – the Uro-Scopic Trainer and the Endo-Urologie-Modell (Karl Storz, Germany) – to training with the VR-based URO Mentor [38]. Twenty-one urologists without ureteroscopic experience rotated through all three simulators, each time being graded on endoscopic performance by an expert endoscopist. At the end of the session, they completed an evaluation questionnaire. Interestingly, no difference was seen in the degree of improvement the urologists experienced from one station type to another. Participants did, however, rate the URO Mentor as having the highest face validity, though the participants’ lack of ureteroscopic experience diminishes the weight of that judgment. For example, participants often noted that the URO Mentor offered an opportunity to simulate the challenge posed by respiratory variation.

Chou et al. recruited 16 first-year medical students to undergo didactic training on ureteroscopy, followed by randomization to focused simulator practice with either the Uro-Scopic Trainer or the URO Mentor [39]. Two months later, the participants performed an endoscopic mid-ureteral stone procedure on an ex vivo kidney/ureter model, which was assessed by an expert endoscopist. No statistically significant difference was detected between the two groups, suggesting that didactics followed by training on either platform may be similarly effective.

Matsumoto has explored trainee performance in ureteroscopy simulation in a variety of settings. In a 2002 study, this group randomized 40 fourth-year medical students to a didactic session, training with the Uro-Scopic Trainer, or training with a low-fidelity model [29]. The low-fidelity model was constructed from a Penrose drain, a cup, molded latex, and two straws, with a total production cost of $20 CAD. Afterwards, the participants were graded by blinded examiners on their ability to basket-extract a mid-ureteral stone. Despite the $3,700 CAD cost of the Uro-Scopic Trainer, students assigned to that group performed no better than those who had used the low-fidelity simulator. Both of these groups scored higher than those who received the didactic session alone. In a later study, Matsumoto evaluated the ability of 16 residents to extract a distal ureteral stone using the URO Mentor [40]. This performance was then compared to their ability to complete a similar task using the Uro-Scopic Trainer. Trainees with more experience scored higher than their junior colleagues, and for both groups, performance was comparable across platforms.


Even more recent technological innovations have introduced new opportunities for improvement in surgical simulation. Dai et al. described the role of crowdsourced feedback in surgical education, leveraging platforms like Amazon Mechanical Turk [41]. Crowdsourcing involves the use of a large cohort of non-experts to perform a specific task, such as evaluating technical performance. In their analysis of the existing surgical literature, crowd and expert evaluations correlated closely, and non-expert evaluation was also faster and more cost-effective. Conti et al. explored this possibility for ureteroscopic simulation in particular [42]. In their study, video recordings of 30 residents performing ureteroscopic stone treatment were submitted to the Crowd-Sourced Assessment of Technical Skills (C-SATS, Inc., Seattle, WA) platform for crowd-based assessment. The videos were also scored by faculty endourologists blinded to resident level of training. Both groups used a previously validated evaluation tool intended for ureteroscopy. Not only did the crowd-sourced evaluations fail to correlate with the expert evaluations, but the expert evaluations themselves had poor interobserver reliability. The authors concluded that video-only evaluation of ureteroscopic skill may be inappropriate. On the other hand, the study highlighted one of the main advantages of the crowd: while expert evaluation turnaround ranged from 1 to 9 weeks, crowd workers completed 2,488 evaluations in 36 h.
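The rater-agreement statistic at the heart of such comparisons is straightforward to compute. As a minimal illustration (the scores below are invented for this sketch and are not data from the Conti study), a Pearson correlation between mean crowd and mean expert ratings per video can be calculated with standard-library Python:

```python
# Hypothetical sketch of a crowd-vs.-expert rating comparison.
# All scores are invented; real studies use validated assessment
# tools and per-item scoring, not these numbers.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Mean rating per resident video on a 1-5 scale (invented):
crowd_scores = [3.8, 4.1, 2.9, 3.5, 4.4, 3.1]
expert_scores = [3.2, 4.5, 3.8, 2.9, 4.0, 3.6]

r = pearson_r(crowd_scores, expert_scores)
print(f"crowd vs. expert Pearson r = {r:.2f}")
```

A coefficient near 1 would indicate close crowd-expert agreement, while values near zero would reflect the kind of non-correlation described above; in practice, studies of rater agreement typically also report intraclass correlation or Cronbach's alpha.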

Nontechnical Skills (NTS)

Nontechnical skills encompass the cognitive skills, social skills, and personal resource factors that collectively enhance interprofessional collaboration and teamwork and help prevent miscommunication. With a 2013 Journal of Patient Safety study concluding that between 210,000 and 400,000 deaths per year in the United States are due to preventable errors such as communication breakdowns, the impact of NTS on patient care is increasingly being studied [43]. Communication skills are critical in the operating room during ureteroscopy, as basketing and guidewire handling often depend on the surgeon and assistant working together. The demand for reducing the surgeon’s reliance on an assistant for stone basketing has led to the marketing of technologies like the Lithovue Empower (Boston Scientific, USA) single-surgeon basketing device. However, the role of the assistant will likely persist in the near future, and the surgeon’s ability to work efficiently in a team environment remains indispensable. Nontechnical skill-based literature within urology is in its infancy, but a study by Brunkhorst et al. has demonstrated that incorporating NTS training within a curriculum has measurable benefits [44]. Additional focus and training in this area are needed. Some of the groundwork has been established by the US Agency for Healthcare Research and Quality, which developed Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS®), an evidence-based curriculum that appears to improve teamwork and communication among healthcare professionals [45].

Future Directions

Simulation in ureteroscopy will no doubt benefit from burgeoning technological advances enabling higher-fidelity virtual and augmented reality and lower-cost, higher-quality models. As simulation becomes more integrated into ureteroscopy training, the field must address some of the inherent limitations of the existing scientific literature. The updated validity paradigm requires many existing simulators to be reassessed, with additional validity evidence gathered around intended use in intended populations. Studies with participant demographics matched to the intended simulator end-user are needed (i.e., conclusions from a simulator validated with senior medical students cannot be applied to residents or attendings). Studies are also needed to translate simulator and training performance into improved patient outcomes, to help justify the resource investment. Governing bodies and specialty societies are well positioned to assume responsibility for creating, standardizing, implementing, and gathering validity evidence for simulation curricula; the Netherlands has already implemented a progressive program of formal, national-level training curricula [46]. Finally, there is a growing trend toward outsourcing assessment to crowdsourced human evaluators or automating it with data-driven sensors embedded in physical and/or VR models, reducing the growing burden of expert assessment [41].


While simulation does not replace the need for first-hand ureteroscopy experience, the available technologies and curricula seek to augment skill acquisition in this surgery. A wide range of simulation modalities of varying fidelity have been described, each with unique benefits and drawbacks. A sound understanding of modern validity evidence theory, the intended learner audience, and how a particular simulator fits into that framework is necessary to guide appropriate simulator choice. As the field of simulation in ureteroscopy matures, the authors expect formal, standardized curricula incorporating both technical and nontechnical skills to be increasingly adopted, and technology to continue to mature, bringing more cost-effective, higher-fidelity simulator models to this forward-looking discipline.
