Image-Guided Surgery and Emerging Molecular Imaging




Recent technologic advances have ushered in an era of surgery with a focus on development of minimally invasive surgical techniques. Specifically, robotic platforms, with robotic-assisted instrumentation, have helped overcome previous barriers to widespread adoption of laparoscopic surgery. Along these lines, image guidance will soon be incorporated into many laparoscopic/robotic procedures to improve surgeon ease, accuracy, and comfort with these complex operations. Thus, we explore recent advances in image-guided surgery and emerging molecular imaging technologies for minimally invasive urologic surgery.


Key points








  • Image-guided surgery and molecular imaging remain areas of intense basic science and clinical research.



  • Image-guided surgery technologies have become standards of surgical care in other specialties, including neurosurgery and orthopedics, but remain in their infancy in urologic and other soft tissue–based surgical specialties.



  • Current research endeavors into the combined application of image-guided surgery and robotic urologic surgery present the unique challenges of soft tissue registration, tissue deformation, operative navigation, and incorporation into the surgical workflow.



  • Although progress has been made, continued collaboration between engineers and surgeons is required to achieve the ultimate goals of improved ease and accuracy of performing surgery leading to improved patient outcomes.



  • The incorporation of molecular imaging into minimally invasive surgery remains in the early stages of development but is likely to grow in importance as molecular markers specific to urologic malignancies are developed.






Introduction


With recent advances in imaging and surgical instrumentation, there has been a transition away from traditional open surgery toward new minimally invasive approaches. Open surgery has its distinct advantages in providing the surgeon with unrestricted visual, force, and tactile feedback, often at the expense of large incisions and surgical trauma. The emergence of robotic surgery with mechatronically enhanced or robotic-assisted instruments has significantly improved the capabilities of the minimally invasive surgeon, allowing surgical procedures to be carried out with unprecedented accuracy and efficiency. However, one of the drawbacks of minimally invasive surgery is the lack of haptic sensing and feedback to assist with tissue discrimination. Accordingly, extensive work is being done in the area of image guidance to integrate enhanced visual information and thereby improve ease, accuracy, and surgeon comfort with performing complex robotic surgeries.


Traditional approaches to surgery, both open and robotic, are planned preoperatively using high-fidelity axial medical imaging, such as computed tomography (CT) and magnetic resonance imaging (MRI). These images are typically reviewed off-line and often only as a 2-dimensional (2D) display of the 3-dimensional (3D) anatomy. The surgeon then uses innate knowledge of human anatomy combined with the ability of the human brain to align objects in 3D space in what is referred to as mental coregistration of the imaging onto the body and organs. This, in essence, serves as a road map of anatomic relationships to facilitate the proposed surgery. Conversely, image-guided surgery (IGS) provides in situ, real-time covisualization of either preoperative or intraoperative data along with the actual anatomy, and the imaging is displayed in a spatially accurate manner coordinated (registered) to the actual anatomy. Thus, the overall goals of IGS are to provide a fully planned procedure before its execution, integrate either real-time intraoperative imaging or preoperative imaging for enhanced accuracy, and to track anatomic changes and tissue deformation during surgery. In its most simplistic form, image guidance is used to improve the surgeon’s awareness of both the anatomy of the target organ and its relationship with surrounding structures. This review explores recent advances in IGS and emerging molecular imaging technologies aimed at improving accuracy, precision, safety, and surgeon confidence during urologic robotic surgery.








General principles of IGS


IGS can be divided into 2 broad categories, which utilize either preoperatively obtained images or intraoperative, real-time imaging. In the first method, preoperatively obtained images for a specific patient are actively integrated into the workflow and visual display for the operation. Therefore, the images are used to map surgical position and orientation rather than functioning simply as a reference atlas. The second method involves active intraoperative imaging with real-time production of imaging (ie, fluoroscopy, ultrasound, CT, MRI) and requires operating room–based imaging modalities, special instrumentation, and ancillary personnel. With intraoperative CT or MRI, the drawbacks of increased cost, additional personnel, and the limited availability of these imaging modalities within the operating room setting will likely restrict development and widespread adoption of techniques within this realm of IGS. Thus, most research has focused on using preoperative imaging to create an augmented reality, with these images superimposed onto the surgical field of view. Imaging is registered with the patient’s intraoperative anatomy to actively display organ, instrumentation, and vital structure locations. This type of surgical navigation presents a unique set of challenges to overcome, primarily involving image registration, tracking, and deformation adjustment.


Registration


Central to any IGS system is the process of registration, that is, determining the mathematical relationship between objects in the preoperative images and their physical locations in the operating room. Basic registration is premised on aligning imaging and anatomy in a 3D coordinate space system. A 3D-rendered surface allows for easier understanding of the spatial relationship between a surgical target and other structures the surgeon may wish to avoid. Registration may be done based on anatomic landmarks (points) or markers (fiducials) inserted before image acquisition that can be seen precisely on the imaging study and also identified within the patient in the operating room. Using high-speed computer algorithms, rigid 3D point-to-point alignment is performed. Thus, subsurface anatomy and the location of important surrounding structures can be displayed to the surgeon on a video screen or as a virtual overlay onto the patient (ie, augmented virtual reality). The most common example of rigid fiducial-based registration is found in image-guided brain and spine interventions, in which the bony structures’ relationships to other vital structures are available to the surgeon before the patient is brought to the operating room. However, abdominal organs pose a particular challenge, as they are not accessible preoperatively for placement of fiducials and also lack easily identifiable landmarks that can function as points. Therefore, most work with registration for soft tissue applications has been based on surface registration, whereby a captured topographic surface (physical space) is matched to a surface that has been extracted from preoperative images (image space). With this technique, large numbers of surface point coordinates are captured by sweeping a tracked tool over the surface of the target organ or assembling a surface using reflected laser beam geometry captured from a laser range scanner.
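The rigid 3D point-to-point alignment described above is classically solved in closed form with an SVD-based least-squares fit (the Kabsch/Horn method). A minimal sketch in Python with NumPy, using hypothetical fiducial coordinates, illustrates the idea:

```python
import numpy as np

def rigid_register(image_pts, physical_pts):
    """Least-squares rigid (rotation + translation) alignment of
    corresponding fiducial points via the SVD-based Kabsch method."""
    P = np.asarray(image_pts, dtype=float)      # N x 3, image space
    Q = np.asarray(physical_pts, dtype=float)   # N x 3, physical space
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t                                 # maps image -> physical: R @ p + t

# Hypothetical check: a pure translation is recovered exactly.
img = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.0]])
phys = img + np.array([5.0, -2.0, 1.0])
R, t = rigid_register(img, phys)
# R is the identity; t is [5, -2, 1]
```

At least 3 non-collinear fiducials are needed for a unique solution; in practice more are used, and the residual fit error at the fiducials themselves is monitored alongside the target registration error discussed below.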


Localization and Tracking Techniques


Localization is the process by which the surgeon is able to identify and display surgical tool tip locations within the viewing field and in relationship to the registered imaging and vital structures, which commonly have not yet been encountered. This requires accurate 3D tracking of the tool, most commonly performed with 1 of 2 methods: optical or electromagnetic tracking. Optical tracking uses a specialized camera system, which repetitively determines the tool tip position by recording the geometric alignment of special trackers (geometrically arranged optical sources or optical reflectors) placed on the proximal end of the tool. These systems have submillimeter accuracy but require line of sight between the camera and trackers, which can be difficult to maintain because of constraints of the operating room environment. Electromagnetic tracking uses a magnetic field sensor device and a wire-based electromagnetic tracker on the patient and instruments, allowing active tracking by the system. Advantages of this method include good accuracy and no requirement for direct line of sight; however, variations in electromagnetic field strength and the presence of large metallic objects, such as the robot or the operating table, can result in error.
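Whichever tracker is used, localizing the tool tip reduces to composing the reported pose of the proximal marker body with a fixed, precalibrated marker-to-tip offset expressed in the marker's own frame. A simplified sketch (all coordinates hypothetical):

```python
import numpy as np

def tool_tip_position(R_marker, p_marker, tip_offset):
    """Tracking systems report the marker body's pose (rotation R, position p);
    the tip is found by rotating the calibrated marker-to-tip offset into the
    tracker frame and adding the marker position."""
    return R_marker @ np.asarray(tip_offset, float) + np.asarray(p_marker, float)

# Hypothetical example: the marker sits 200 mm up the shaft from the tip,
# the tool is rotated 90 degrees about z, and the marker is observed at
# (100, 50, 300) mm in the tracker frame.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
tip = tool_tip_position(Rz90, [100.0, 50.0, 300.0], [0.0, 0.0, -200.0])
# tip -> [100., 50., 100.]
```

Because the offset is applied through the measured rotation, small angular errors at the marker are amplified at the tip in proportion to the offset length, which is one reason tracker accuracy is quoted at the tool tip rather than at the marker.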


Registration Error and Tissue Deformation


A target is a point with known locations in both real and virtual space that is not used in the creation of the transformation matrix. The difference between the transformed location and its actual location in real space is the target registration error (TRE). This serves as the true assessment of registration quality. However, to be able to obtain this metric, the points must be definitively identified in both spaces, which can be challenging given issues with exposure of the target, tissue deformation, and validation. The most significant barrier to real-time image overlay is that of tissue deformation, which can occur from a variety of sources, including respiratory and patient motion, changes to perfusion, and surgical manipulation. When considering that for many procedures, millimeter or even submillimeter precision is required for safe and effective performance, the ability to account for tissue change (deformation) is essential, especially for image-guided tumor resection. Naturally, the process of surgical manipulation, dissection, and resection results in significant changes to the target tissue, which must be taken into account to ensure accuracy and reduce error.
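For a rigid registration defined by a rotation R and translation t, the TRE at a held-out target is simply the Euclidean distance between the transformed image-space point and its true location in physical space. A minimal illustration with made-up coordinates:

```python
import numpy as np

def target_registration_error(R, t, target_image, target_physical):
    """TRE: distance between where the registration predicts a held-out
    target point and where it actually lies in physical space."""
    predicted = R @ np.asarray(target_image, float) + np.asarray(t, float)
    return float(np.linalg.norm(predicted - np.asarray(target_physical, float)))

# Hypothetical registration: a pure 5 mm shift along x. The target's true
# physical position disagrees with the prediction by 0.3 mm in y.
R = np.eye(3)
t = np.array([5.0, 0.0, 0.0])
tre = target_registration_error(R, t, [10.0, 20.0, 30.0], [15.0, 20.3, 30.0])
# tre == 0.3 (mm)
```

The key point in the text holds in this sketch as well: computing TRE requires the target's true physical coordinates, which is exactly what exposure, deformation, and validation problems make difficult to obtain intraoperatively.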


Intraoperative Imaging


Real-time intraoperative imaging presents a strategy for avoiding issues related to tissue deformation, as the information being obtained is live and dynamic. Examples of intraoperative imaging techniques are MRI, fluoroscopy, CT, cone-beam CT, and ultrasound scan (US); however, to date, no single modality has proven to be the answer to IGS. Intraoperative MRI is costly, slow, and a major intrusion into the surgical process. Fluoroscopy and CT involve significant radiation and accompanied risk to the patient and operating room staff. Cone-beam CT based on flat panel digital detector technology is a modified technique to allow for use in the operating room. X-rays are delivered in a cone rather than the conventional fan shape used in helical scanners. This allows for acquisition of a large area in a single pass of the C-arm using a digital flat panel detector. This modality is not without significant limitations, as it is expensive and not widely available at most hospitals and also produces images of a lower quality than customarily obtained with conventional CT scan. US is mostly a 2D imaging modality that lacks clear tissue discrimination and results in overall poor image quality compared with axial 3D imaging, thereby limiting its utility in IGS. However, in the future, by combining real-time intraoperative imaging with high-resolution axial 3D preoperative images, the strengths of both can be preserved and the weaknesses mitigated.




Image-guided partial nephrectomy


To date, most work and progress toward true image guidance in the urologic field has occurred during robotic-assisted laparoscopic partial nephrectomy. Given the nature of the surgery, partial nephrectomy seems ideally suited for potential improvements with image guidance. First, image guidance may allow easier and more accurate identification of important landmark structures, such as the renal hilum and its major blood vessels, and their respective relationships to the target. Furthermore, image guidance may improve the precision of tumor resection to ensure complete tumor resection while achieving maximal nephron sparing. Finally, IGS for partial nephrectomy may increase surgeon comfort with this complex operation, thereby potentially increasing the application of nephron-sparing surgeries.


Initial work on image-guided partial nephrectomy focused on the simplest form of registration, whereby 3D reconstructions were manually overlaid to the best fit of the operative view by the surgeon using knowledge of human anatomy. Manual registration overlay, although an improvement over surgeon mental coregistration, falls short of true IGS and does not allow error calculation. Therefore, fiducial-based registration has been explored by several groups. Teber and colleagues used 3D cone-beam imaging with fiducial markers to create 3D reconstructions of the organ that were subsequently registered to the real-time image. This technique produced a high level of accuracy, with a TRE of 0.5 mm in ex vivo studies ( Fig. 1 ). Although the fiducial insertion technique used is valid in the laboratory, insertion of barbed fiducials and the need for intraoperative scanning have limited clinical application of this technique.




Fig. 1


With the guidance of augmented reality, the renal tumor was identified through the surrounding fat.

( From Teber D, Guven S, Simpfendörfer T, et al. Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. Eur Urol 2009;56(2):335; with permission.)


Considerable work has been done using surface-based registration for robotic partial nephrectomy. In surface-based methods, digitization and capture of a patch or cloud of the surface anatomy is accomplished through a stylus or a range scanner. This extracted surface is then fit to a surface segmented from preoperative images. Registered preoperative images are displayed using the IGS software application, as multiplanar reformatted slices or as a rendered volume. These visualizations allow surgeons to see their current surgical position in addition to presenting a predicted map based on preoperative imaging of vital anatomy beneath the organ surface or beyond the optically visualized field.
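Fitting a captured surface patch to a surface segmented from preoperative imaging is commonly solved with an iterative closest point (ICP) scheme. The published systems use more elaborate, robust variants, but the core loop, alternately pairing each captured point with its nearest model point and solving the best rigid fit for that pairing, can be sketched as follows (synthetic point clouds, brute-force matching for clarity):

```python
import numpy as np

def icp(source, target, iters=10):
    """Minimal iterative closest point: repeatedly match each captured
    surface point to its nearest model point, then solve the best rigid
    fit (SVD/Kabsch step), accumulating rotation R and translation t."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest-neighbor correspondences (brute force for clarity)
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        matched = tgt[d2.argmin(axis=1)]
        # best rigid fit for the current pairing
        cs, cm = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cm - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: a small rigid shift of a 3D grid "patch" is recovered.
tgt = np.array([[i, j, k] for i in range(3)
                for j in range(3) for k in range(3)], float)
src = tgt + np.array([0.1, -0.05, 0.08])
R_est, t_est = icp(src, tgt)
```

ICP of this kind only converges to the correct pose from a reasonable initial alignment, which is one reason the clinical systems described here still depend on a good initial registration before surface refinement.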


Early research from our laboratory used surface-based registration via a tracked da Vinci robotic instrument as a topography-defining stylus to create an intraoperative model of the surface anatomy of the kidney ( Fig. 2 ). To perform this, accurate knowledge of the location of the instrument tip in 3D space is required and can be obtained via either intrinsic or extrinsic tracking. The intrinsic method uses the kinematic chain and tracking capabilities inherent to the da Vinci surgical system ( Fig. 3 A). This approach is limited by inaccuracy of measurements of the da Vinci passive and active robotic joint positions, which has been found to exceed 10 mm. Extrinsic tracking uses commercially available magnetic or optical tracking systems to ascertain the positions of the surgical instrument (see Fig. 3 B). We developed a hybrid approach combining both intrinsic and extrinsic tracking (optical tracking, Polaris Spectra), which significantly decreased the tracking error to less than 2 mm (see Fig. 3 C). Although surface registration through a tracked tool is feasible, error and the lack of a rapid ability to recapture and reregister remained limitations. Laser range finders present an alternative approach, which has been used with success in open surgery ( Fig. 4 ). However, to date, no suitable laparoscopic laser range finder has undergone trial for renal surgery.




Fig. 2


Surface capture of the kidney registered to segmented CT of the kidney and mass. White lines represent surface tracking done with tracked da Vinci tool. Blue and red lines represent hilar structures. Gray surface model represents segmented kidney surface from preoperative CT scan, including large lower pole tumor (right side of figure).

( From Herrell SD, Galloway RL, Su LM. Image-guided robotic surgery: update on research and potential applications in urologic surgery. Curr Opin Urol 2012;22(1):50; with permission.)

Mar 3, 2017 | Posted in UROLOGY
