Introduction
It has been said that the key to a successful open surgery is exposure. Similarly, the imaging system used in endoscopic surgery, whether laparoscopic or robotic-assisted laparoscopic, is key to success. In this chapter, the history of laparoscopic and imaging systems is reviewed. In addition, the difference between analog and digital image processing is explained. Three-dimensional imaging systems, as well as the da Vinci robotic system (Intuitive Surgical, Sunnyvale, CA), are described. Furthermore, advances in scopes and cameras, including high-definition (HD) and augmented reality (AR) imaging systems, are explained.
History of the laparoscope
Surgical scopes are among the oldest surgical instruments. The first illuminated scope, named the Lichtleiter or “light conductor,” consisted of a viewing tube, candle, and series of mirrors and was developed by Philip Bozzini in 1804. Due to its impracticality, the device did not find favor among the surgeons of that time. However, it served as a source of inspiration to other inventors. Antonin Jean Desormeaux was the first urologist to view inside the bladder in 1855. Utilizing the principles of incandescent lighting, Julius Bruck designed the first scope illuminated by an electrical light source in 1867. He employed a platinum wire loop heated with electricity until it glowed. The main drawback of this design was the amount of heat generated by the light source, which could be conducted along the metal tubing of the scope to the tip. This heat represented a significant risk of burns to both the patient and the surgeon. In 1877, Maximilian Nitze used a lens system to widen the field of view (FOV) and succeeded in creating the first cystoscope as an instrument to visualize the urinary bladder through the urethra. The modern fiberoptic endoscope was invented by the British physicist Harold Hopkins in 1954. Hopkins used the term fiberscope to describe the bundle of glass or other transparent fibers used to transmit an image. The main advantage of the fiberscope was that the illumination source could be kept away from the scope, with a significant reduction in the amount of heat transmitted to the scope tip. However, the resolution of the fiberscope was limited by the number of fibers used. Therefore, Hopkins invented the rod-lens system in the 1960s, and it was patented in 1977. Hopkins’ rod-lens system filled the air spaces between the lenses with glass rods, so that most of the optical path is glass rather than air, with resultant clarity and brightness up to eighty times greater than what was offered at the time ( Fig. 2.1 ). Hopkins’ rod-lens system remains the current standard for rigid endoscopes where high image resolution is required. Over time, with advances in fiberoptics and magnifying lenses, sophisticated surgical scopes have evolved. Developments in scopes and cameras are detailed in the following two sections.
Scopes and technology
Since the 1960s, the classic laparoscope has been composed of an outer ring of fiberoptics used to transmit light into the body and an inner core of rod lenses through which the illuminated visual scene is relayed back to the eyepiece ( Fig. 2.1 ). Laparoscopes are distinguished by the number of rod lenses, the diameter of the scope, and the angle of view. Laparoscopes are available in sizes ranging from 1.9 to 12 mm, but 5 mm is the most common size for pediatric patients, and 10 mm is the most common size for adults. Furthermore, viewing angles between zero and 70 degrees are possible, with zero and 30 degrees being the most commonly used. The zero-degree laparoscope offers a straight-on panoramic view. The 30-degree scope employs an angled lens that can be used to view around corners and allows more room for manipulating laparoscopic instruments during surgery.
To replicate the panoramic view of the human eye, which has an FOV of close to 180 degrees, the panomorph lens was developed. It uses multivisualization software to widen the FOV to 180 degrees, rather than the traditional FOV of less than 70 degrees ( Fig. 2.2 ).
Further miniaturization of charge-coupled device (CCD) chip technology and digital imaging allowed the CCD chip camera to be placed at the distal end of the endoscope. With this design, the image is captured immediately by the CCD chip and converted into an electrical signal for transmission. These systems are called digital video endoscopes; they allow the signal to be transmitted directly to an image display unit with minimal loss of image quality and distortion, without the need to attach a camera head to the eyepiece of the scope or a fiberoptic cable for the light source ( Fig. 2.1 B). As a result, digital flexible cystoscopes, ureteroscopes, and laparoscopes with durable deflection mechanisms have been developed (e.g., EndoEye, Olympus America, Melville, NY) ( Figs. 2.1 and 2.3 ).
Cameras and technology
Technologic advances, specifically improvements in how optical information is captured, transmitted, and produced as an image, have greatly enhanced laparoscopic and endoscopic surgery. Initially, an optical image is converted to an electronic signal that carries information regarding both color and luminance. This signal is then transmitted to a video monitor, where it is scanned to produce an image on the screen. The standard analog signal, in the form of National Television System Committee (NTSC) video, uses a limited bandwidth that includes both color and luminance information in a single, or composite, signal. There are many disadvantages to this system. First, processing the color and luminance information separately and then combining both segments of information to create a video signal results in what is referred to as “signal noise” or “crosstalk.” This is accompanied by a decrease in resolution, grainy images, and loss of information around the edges of the video image. In addition, images and signals in NTSC video are processed as voltages ( Fig. 2.4 ). Therefore, it is inevitable that small errors in recording and reproducing these voltages accumulate with each generation of video image. As a result, multiple copies of an analog image will reveal a decrease in quality of the video images.
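To make the idea of generational degradation concrete, the following minimal sketch simulates repeated copying of an analog voltage signal versus a digital one. It is purely illustrative, with arbitrary noise levels and signal values, and does not model any particular video hardware.

```python
import random

original = [0.5] * 1000  # a flat mid-gray test signal (arbitrary units)

def copy_analog(signal, noise=0.01):
    """Re-recording an analog voltage adds a small random error to every sample."""
    return [v + random.gauss(0, noise) for v in signal]

def copy_digital(signal):
    """A digital copy reproduces the stored numbers exactly."""
    return list(signal)

def rms_error(signal):
    """Root-mean-square deviation of a copy from the original signal."""
    return (sum((a - b) ** 2 for a, b in zip(signal, original)) / len(signal)) ** 0.5

analog, digital = original, original
for _ in range(10):  # ten successive copy generations
    analog = copy_analog(analog)
    digital = copy_digital(digital)

print(f"analog error after 10 copies:  {rms_error(analog):.4f}")   # grows with each generation
print(f"digital error after 10 copies: {rms_error(digital):.4f}")  # exactly 0
```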
Recently, digital imaging has revolutionized image processing and display. A digital converter changes all video signals into discrete binary values (0s and 1s) ( Fig. 2.4 ). Once the video information is digitized, it can be merged with other formats, such as audio or text data, and manipulated without any loss of information. This conversion to a digital signal prevents crosstalk production and image quality degradation. There are two formats of digital imaging. The first is called Y/C or super-video (S-video), which allows the color and luminance information to be carried as two separate signals with less crosstalk production, yielding cleaner and sharper images than those generated by composite signals. The second is known as the red-green-blue (RGB) format, which is also a component signal. The main difference between the Y/C format and the RGB format is that in the RGB format, the video information (color and luminance) is separated into four signals: red, green, blue, and a timing signal. Additionally, each color signal carries its own luminance information, requiring four separate cables (red, green, blue, and sync). The separation of each video signal is performed electronically in the camera head. In contrast to the NTSC and Y/C formats, the RGB format requires less electronic processing because the color and luminance information are separate from the beginning. Therefore, RGB image quality is greatly enhanced when compared with the other two formats (NTSC and Y/C).
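The luminance/color separation that distinguishes these formats can be sketched as follows. The sketch uses the standard Rec. 601 luma weighting that underlies NTSC-family video; the pixel values are arbitrary examples.

```python
def rgb_to_yc(r, g, b):
    """Split an RGB pixel into luminance (Y) and two color-difference signals,
    as in Y/C-style component video (Rec. 601 luma weighting)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: brightness information only
    cb = b - y                               # blue color-difference signal
    cr = r - y                               # red color-difference signal
    return y, cb, cr

# An RGB camera head would instead transmit r, g, b (plus sync) on separate lines,
# so no downstream luminance/color separation step is needed.
r, g, b = 200, 120, 60                       # arbitrary example pixel
y, cb, cr = rgb_to_yc(r, g, b)
print(f"Y = {y:.1f}, Cb = {cb:.1f}, Cr = {cr:.1f}")
```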
Analog medical cameras have been available since the mid-1970s; however, their use in operative applications was limited by their weight and by the inability to disinfect them. Although the idea of coupling an endoscope with a camera was first described in 1957, it was impractical because cameras of the time were too large and cumbersome. The situation changed with the development of compact CCD cameras in the 1980s, when the endoscope could be coupled with CCD cameras and TV monitors so that the entire operating room team could watch the surgery. This allowed for the development of more complex laparoscopic instruments and procedures in which more than one hand is required for operation.
The first solid-state digital camera was based on a silicon CCD chip covered with an array of light-sensitive elements known as pixels. The solid-state digital camera converts the incoming light from a visual scene into a digital signal that can be stored, processed, or transmitted with greater efficiency and reliability than the analog camera. In addition, these cameras are lightweight, fully immersible, sterilizable, and shielded from electrical interference that may be created by cutting or coagulating currents during laparoscopic procedures.
A significant improvement in CCD camera technology has been the development of the three-chip camera, which contains three individual CCD chips for the primary colors (red, green, and blue) ( Fig. 2.5 ). Color separation is achieved using a prism system overlying the chips. The three-chip camera design produces less crosstalk, with enhanced image resolution and improved color fidelity when compared with an analog camera. A further development in digital camera technology was the invention of a single monochrome CCD chip with alternating red, green, and blue illumination to form a color image, rather than using three chips with three separate color filters. This design reduces the space requirements ( Fig. 2.5 ). Recently, complementary metal oxide semiconductor (CMOS) technology has replaced CCD sensor technology in digital endoscopes, with superior image resolution, better contrast discrimination, lower power usage, lower cost, and a 50% weight reduction.
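The sequential-illumination approach can be illustrated with a short sketch: three monochrome frames, each captured while the scene is lit by one primary color, are stacked into a single color image. This is only a schematic of the idea, with hypothetical frame sizes and random data standing in for sensor output, not the internals of any particular camera.

```python
import numpy as np

def combine_sequential_frames(frame_r, frame_g, frame_b):
    """Stack three monochrome exposures, taken under red, green, and blue
    illumination in turn, into a single color image (H x W x 3)."""
    return np.stack([frame_r, frame_g, frame_b], axis=-1)

# Hypothetical 480 x 640 monochrome frames from one sensor, one per illumination color.
h, w = 480, 640
frame_r = np.random.randint(0, 256, (h, w), dtype=np.uint8)
frame_g = np.random.randint(0, 256, (h, w), dtype=np.uint8)
frame_b = np.random.randint(0, 256, (h, w), dtype=np.uint8)

color = combine_sequential_frames(frame_r, frame_g, frame_b)
print(color.shape)  # (480, 640, 3)
```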
The classical laparoscope cannot obtain high-magnification and wide-angle images simultaneously, which represents a challenge when both close-up and wide-angle views are required during sophisticated laparoscopic procedures. This is because when high magnification is required, the laparoscope is advanced closer to the organ, which results in the loss of the wide angle of view. Therefore, the multiresolution foveated laparoscope (MRFL) was recently introduced. Using two probes (a high-magnification probe and a wide-angle probe), the MRFL system can capture both high-magnification close-up and wide-angle images ( Figs. 2.6 and 2.7 ). At a working distance of 120 mm, the wide-angle probe provides surgical area coverage of 160 × 120 mm² with a resolution of 2.83 line pairs per millimeter (lp/mm). Moreover, the high-magnification probe has a resolution of 6.35 lp/mm and images a surgical area of 53 × 40 mm². The advantage of the MRFL camera system is that high-magnification images and a wide FOV can be obtained simultaneously without the need to move the laparoscope in and out of the abdominal cavity, thus improving efficiency and maximizing safety by providing superior situational awareness. In addition, the MRFL system provides a large working space with reduced laparoscopic instrument collision, since the laparoscope is held farther away because of the magnification. In vivo evaluation verified the great potential of the MRFL for incorporation into laparoscopic surgery with improved efficiency and safety.
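Using the figures quoted above, a quick back-of-the-envelope comparison of the two probes can be written as follows. This is simple arithmetic for illustration only; the assumed interpretation (multiplying field size by resolution to estimate resolvable line pairs, and taking the ratio of horizontal field widths as the relative magnification) is not part of the published MRFL specification.

```python
# Figures quoted above for the MRFL at a 120 mm working distance.
wide = {"field_mm": (160, 120), "res_lp_per_mm": 2.83}   # wide-angle probe
high = {"field_mm": (53, 40),   "res_lp_per_mm": 6.35}   # high-magnification probe

for name, probe in (("wide-angle", wide), ("high-magnification", high)):
    w_mm, h_mm = probe["field_mm"]
    lp = probe["res_lp_per_mm"]
    print(f"{name}: {w_mm} x {h_mm} mm field, "
          f"~{w_mm * lp:.0f} x {h_mm * lp:.0f} resolvable line pairs")

# Relative magnification of the close-up view over the wide view (ratio of horizontal field widths).
print(f"approximate magnification ratio: {160 / 53:.1f}x")
```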
During traditional laparoscopic surgery, an assistant is needed to control the laparoscope. Directing an assistant to control the camera can be challenging and may prolong the operating time. Therefore, the earliest master-slave robotic surgical platforms controlled the laparoscope, freeing the surgeon to operate with both hands and eliminating the need to rely on expert surgical assistants. Subsequently, autonomous camera navigation systems were invented to automatically keep surgical tools such as forceps and graspers in view. These systems use different methods for detecting operator intent and tracking the tool tips relative to the camera. These methods include eye gaze tracking, instrument tracking, kinematic tracking, image-based tracking, magnetic tracking systems, and inertial measurement units. Recently, Weede et al. developed a test system that applies a Markov model to predict the motions of the tools so that the camera follows them. The system is trained using data from previous surgical interventions so that it can operate more like an expert laparoscope operator. Furthermore, Yu et al. proposed algorithms for determining how to move the laparoscope from one viewing location to another using kinematic models of a robotic surgery system.
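None of the cited systems is reproduced here, but the basic image-based tracking idea can be sketched with a hypothetical proportional controller that steers the camera so the detected tool tip stays near the image center. The image size, gain, and function names are assumptions chosen for illustration.

```python
def camera_velocity_from_tool(tool_px, image_size=(1920, 1080), gain=0.002):
    """Proportional controller: return pan/tilt rates (rad/s) that drive the
    tracked tool tip toward the image center. Positive pan = right, positive tilt = down."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    err_x, err_y = tool_px[0] - cx, tool_px[1] - cy   # pixel error from the image center
    return gain * err_x, gain * err_y

# Example: the tool tip is detected in the upper-left quadrant of the image,
# so the camera is commanded to pan left and tilt up.
pan_rate, tilt_rate = camera_velocity_from_tool((600, 300))
print(f"pan {pan_rate:+.2f} rad/s, tilt {tilt_rate:+.2f} rad/s")
```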
Another device recently developed to overcome camera-handling difficulties during laparoscopic and robotic surgery is the RoboLens, a robotic system that employs an effective low-cost mechanism with a minimum number of actuated degrees of freedom (DOF), enabling spherical movement around a remote center of motion located at the insertion point of the laparoscope shaft. Hands-free operator interfaces are designed for user control, including a voice command recognition system and a smart six-button foot pedal ( Fig. 2.8 ). The operational and technical features of the RoboLens were evaluated during laparoscopic cholecystectomy operations on human patients. The RoboLens accurately followed the trajectory of the instruments with a short response time.
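The geometry of motion about a remote center can be illustrated with a short kinematic sketch: for a scope pivoting about a fixed trocar point, pan, tilt, and insertion depth determine the tip position on a sphere. The coordinate convention and the example values below are hypothetical and are not taken from the RoboLens control software.

```python
import math

def scope_tip_position(pan_deg, tilt_deg, insertion_mm, rcm=(0.0, 0.0, 0.0)):
    """Position of the laparoscope tip for a scope pivoting about a fixed remote
    center of motion (the trocar insertion point). Spherical convention:
    pan about the vertical axis, tilt measured from straight down,
    insertion along the scope shaft."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = rcm[0] + insertion_mm * math.sin(tilt) * math.cos(pan)
    y = rcm[1] + insertion_mm * math.sin(tilt) * math.sin(pan)
    z = rcm[2] - insertion_mm * math.cos(tilt)   # more negative z = deeper into the abdomen
    return x, y, z

# Example: 20 degrees of pan, 30 degrees of tilt, 100 mm of shaft inside the body.
print(scope_tip_position(20, 30, 100))
```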
Currently, laparoendoscopic single-site (LESS) surgery is a further refinement of minimally invasive laparoscopic procedures. The main difficulty with LESS is the limited space for the laparoscope and other instruments. The Miniature Anchored Robotic Videoscope for Expedited Laparoscopy (MARVEL) is a wireless camera module (CM) that can be fixed under the abdominal wall to overcome crowding of instruments during LESS surgery. The MARVEL system includes multiple CMs, a master control module (MCM), and a wireless human-machine interface (HMI). The CMs feature a wirelessly controlled pan/tilt camera platform that enables a full hemispheric FOV inside the abdominal cavity, wirelessly adjustable focus, and a multiwavelength illumination control system. The MCM provides near-zero-latency wireless video communication, digital zoom, and independent wireless control of multiple MARVEL CMs. The HMI gives the surgeon full control over the functionality of the CMs. To insert and fix the MARVEL inside the abdominal cavity, the surgeon first inserts each CM into the end of a custom-designed insertion/removal tool ( Fig. 2.9 ). A coaxial needle is used to secure the CM during insertion and removal. The CM is secured to the abdominal wall without using a separate videoscope for assistance. The surgeon can control the CMs using a wireless joystick, which controls the pan/tilt movement, illumination, adjustable focus, and digital zoom of all the in vivo CMs. Each CM wirelessly sends its video stream to the MCM, which displays the images on high-resolution monitors.
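To make the control path from the HMI through the MCM to a CM concrete, the following sketch shows one way such a control message could be structured and serialized for the wireless link. The field names, value ranges, and the use of JSON are hypothetical illustrations and are not drawn from the MARVEL documentation.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraCommand:
    """Hypothetical control message from the HMI, routed by the MCM to one camera module."""
    cm_id: int         # which in vivo camera module to address
    pan_deg: float     # pan/tilt platform setpoints
    tilt_deg: float
    focus: float       # 0.0 (near) to 1.0 (far)
    zoom: float        # digital zoom factor
    led_level: float   # illumination intensity, 0.0 to 1.0

def encode(cmd: CameraCommand) -> bytes:
    """Serialize the command for the wireless link (JSON here purely for illustration)."""
    return json.dumps(asdict(cmd)).encode()

packet = encode(CameraCommand(cm_id=2, pan_deg=15, tilt_deg=-10, focus=0.4, zoom=1.5, led_level=0.8))
print(packet)
```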
Most recently, Tamadazte and associates introduced their multiview vision system, which aims to combine the advantages of stereovision, a wide FOV, and increased depth perception with low cost and no need for in situ registration between images or additional incisions. The system is based on two miniature high-resolution cameras positioned like a pair of glasses around the classical laparoscope ( Fig. 2.10 ). The cameras are based on two 5 mm × 5 mm × 3.8 mm CMOS sensors with a resolution of 1600 × 1200 pixels, a frame rate of 30 frames/second, a low noise-to-signal ratio, an exposure control of +81 dB, and an FOV of 51 degrees with low TV distortion (≤ 1%). This device is no more invasive than standard endoscopy, since it is inserted through the laparoscope’s trocar ( Fig. 2.10 ).
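The depth information that stereovision adds can be illustrated with the classic pinhole-stereo relation Z = f·B/d. In the sketch below, the focal length is derived from the quoted 1600-pixel width and 51-degree FOV, while the 10 mm baseline and the example disparity are purely assumed values for illustration.

```python
import math

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic pinhole stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

# Focal length in pixels from the quoted sensor width (1600 px) and 51-degree horizontal FOV.
focal_px = (1600 / 2) / math.tan(math.radians(51 / 2))   # roughly 1677 px
baseline_mm = 10.0   # assumed separation between the two cameras (hypothetical)

# Example: a structure whose image shifts by 80 pixels between the two views.
print(f"estimated depth: {depth_from_disparity(80, focal_px, baseline_mm):.0f} mm")
```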