Laparoscopic/robotic camera and lens systems





Introduction


It has been said that the key to a successful open surgery is exposure. Similarly, the imaging system used in endoscopic surgery, whether laparoscopic or robotic-assisted laparoscopic, is key to success. In this chapter, the history of the laparoscope and its imaging systems is reviewed. In addition, the difference between analog and digital image processing is explained. Three-dimensional imaging systems, including that of the da Vinci robotic system (Intuitive Surgical, Sunnyvale, CA), are described. Furthermore, advances in scopes and cameras, including high-definition (HD) and augmented reality (AR) imaging systems, are explained.


History of the laparoscope


Surgical scopes are among the oldest surgical instruments. The first illuminated scope, named the Lichtleiter or “light conductor,” consisted of a viewing tube, candle, and series of mirrors and was developed by Philip Bozzini in 1804. Because of its impracticality, the device did not find favor among the surgeons of that time; however, it served as a source of inspiration to other inventors. Antonin Jean Desormeaux was the first urologist to view inside the bladder, in 1855. Utilizing the principles of incandescent lighting, Julius Bruck designed the first scope illuminated by an electrical light source in 1867, employing a platinum wire loop heated with electricity until it glowed. The main drawback of this design was the amount of heat generated by the light source, which could be conducted along the metal tubing of the scope to its tip, posing a significant risk of burns to both the patient and the surgeon. In 1877, Maximilian Nitze used a lens system to widen the field of view (FOV) and succeeded in creating the first cystoscope, an instrument for visualizing the urinary bladder through the urethra. The modern fiberoptic endoscope was invented by the British physicist Harold Hopkins in 1954. Hopkins used the term fiberscope to describe the bundle of glass or other transparent fibers used to transmit an image. The main advantage of the fiberscope is that the illumination source could be kept away from the scope, with a significant reduction in the amount of heat transmitted to the scope tip; however, the resolution of the fiberscope was limited by the number of fibers used. Hopkins therefore developed the rod-lens system in the 1960s, and it was patented in 1977. The rod-lens system used long glass rods in place of the air gaps of a conventional lens relay, with the thin air spaces between the rods acting as the lenses, with resultant clarity and brightness up to eighty times greater than what was offered at the time ( Fig. 2.1 ). Hopkins’ rod-lens system remains the current standard for rigid endoscopes where high image resolution is required. Over time, with advances in fiberoptics and magnifying lenses, sophisticated surgical scopes have evolved. Developments in scopes and cameras are detailed in the following two sections.




Fig. 2.1


A, Traditional Hopkins’ rod-lens technology. B, Videoscope technology.

(Courtesy Olympus America, Center Valley, Penn.)


Scopes and technology


Since the 1960s, the classic laparoscope has been composed of an outer ring of fiberoptics used to transmit light into the body and an inner core of rod lenses through which the illuminated visual scene is relayed back to the eyepiece ( Fig. 2.1 ). The different types of laparoscopes are defined in terms of the number of rods, size of laparoscope, and angle of view. Laparoscopes are available in sizes ranging from 1.9 to 12 mm, but 5 mm is the most common size for pediatric patients, and 10 mm is the most common size for adults. Furthermore, viewing angles between zero and 70 degrees are possible, with zero and 30 degrees being the most commonly used. The zero-degree laparoscope offers a straight-on panoramic view. The 30-degree scope employs an angled lens, which can be used to view around corners, and can allow space for manipulation of laparoscopic instruments during surgery.


To replicate the panoramic view of the human eye, which has an FOV of close to 180 degrees, the panomorph lens was developed. It uses multivisualization software to widen the FOV to 180 degrees rather than the traditional FOV of less than 70 degrees ( Fig. 2.2 ).




Fig. 2.2


A, Field of view with a classic laparoscope. B, Field of view with a panomorph laparoscope.

(Modified from Roulet P, Konen P, Villegas M. 360° endoscopy using panomorph lens technology. Proc SPIE Int Soc Opt Eng. 2010;7558.)


Further miniaturization of charge-coupled device (CCD) chip technology and digital imaging allowed the CCD chip camera to be placed at the distal end of the endoscope. With this design, the image is captured immediately by the CCD chip and converted into an electrical signal for transmission. These systems, called digital video endoscopes, allow the signal to be transmitted directly to an image display unit with minimal loss of image quality and distortion, without the need to attach a camera head to the eyepiece of the scope or a fiberoptic cable for the light source ( Fig. 2.1 B). As a result, digital flexible cystoscopes, ureteroscopes, and laparoscopes with durable deflection mechanisms have been developed (e.g., EndoEye, Olympus America, Melville, NY) ( Figs. 2.1 and 2.3 ).




Fig. 2.3


EndoEye technology. This technological advance allowed for the development of the flexible laparoscope. CCD, Charge-coupled device.

(Courtesy Olympus America, Center Valley, Penn.)


Cameras and technology


Technologic advances, specifically improvements in how optical information is captured, transmitted, and produced as an image, have greatly enhanced laparoscopic and endoscopic surgery. Initially, an optical image is converted to an electronic signal that carries information about both color and luminance. This signal is then transmitted to a video monitor, where it is scanned to produce an image on the screen. The standard analog signal, in the form of National Television Systems Committee (NTSC) video, uses a limited bandwidth that combines color and luminance information into a single, or composite, signal. There are several disadvantages to this system. First, processing the color and luminance information separately and then combining the two to create a single video signal results in what is referred to as “signal noise” or “crosstalk.” This is accompanied by a decrease in resolution, grainy images, and loss of information around the edges of the video image. In addition, images and signals in NTSC video are processed as voltage waveforms ( Fig. 2.4 ). Small errors in recording and reproducing these voltages therefore inevitably accumulate with each generation of video image, so successive copies of an analog image show a progressive decrease in quality.
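As a rough illustration of this generational-loss argument, the short Python sketch below (not from this chapter; the noise level and sample values are invented) adds a small random voltage error at each analog re-recording, whereas a bit-for-bit digital copy is reproduced exactly.

import random

def analog_copy(signal, noise=0.01):
    # Re-record a voltage waveform; each generation adds a small random error.
    return [v + random.gauss(0.0, noise) for v in signal]

def digital_copy(bits):
    # Digital values are reproduced exactly, so no quality is lost per copy.
    return list(bits)

original = [0.5, 0.8, 0.3, 0.9]            # hypothetical voltage samples
copy = original
for _ in range(10):                        # ten generations of analog copying
    copy = analog_copy(copy)

drift = max(abs(a - b) for a, b in zip(original, copy))
print(f"worst-case analog drift after 10 generations: {drift:.3f} V")
print("digital copy identical:", digital_copy([0, 1, 1, 0]) == [0, 1, 1, 0])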




Fig. 2.4


A, Representation of analog video imaging in which video signals remain as voltage waveforms. B, In contrast, digital video systems convert the analog video information to a digital format, which must be converted back to analog information before it is viewed on the video monitor. Conversion to a digital signal gives the digital video image immunity to noise buildup or image quality degradation. CCD, Charge-coupled device.

(From Marguet CG, Springhart WP, Preminger GM. New technology for imaging and documenting urologic procedures. Urol Clin North Am. 2006;33:397-408.)


Recently, digital imaging has revolutionized image processing and display. An analog-to-digital converter changes the video signal into discrete binary values (0s and 1s) ( Fig. 2.4 ). Once the video information is digitized, it can be merged with other formats, such as audio or text data, and manipulated without any loss of information. This conversion to a digital signal prevents crosstalk and image quality degradation. There are two formats of digital imaging. The first is called Y/C or super-video (S-video), which carries the color and luminance information as two separate signals, producing less crosstalk and cleaner, sharper images than those generated by composite signals. The second is known as the red-green-blue (RGB) format, which is also a component signal. The main difference between the Y/C and RGB formats is that in the RGB format the video information is separated into four signals: red, green, blue, and a timing (sync) signal. Each color signal carries its own luminance information, requiring four separate cables (red, green, blue, and sync). The separation of each video signal is performed electronically in the camera head. In contrast to the NTSC or Y/C format, the RGB format requires less electronic processing because the color and luminance information are separate from the beginning. Therefore, RGB image quality is greatly enhanced compared with the other two formats (NTSC and Y/C).
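To make the relationship between component (RGB) and luminance/chrominance (Y/C-style) signals concrete, the sketch below splits an RGB pixel into a luminance value and two color-difference values and then recovers the original components. The weighting coefficients are the standard ITU-R BT.601 (NTSC-era) values, not figures taken from this chapter.

def rgb_to_luma_chroma(r, g, b):
    # Split an RGB pixel into luminance (Y) and two color-difference signals.
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = b - y                                # blue color difference
    v = r - y                                # red color difference
    return y, u, v

def luma_chroma_to_rgb(y, u, v):
    # Recover the original RGB components from Y, U, V.
    b = u + y
    r = v + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

print(rgb_to_luma_chroma(0.9, 0.4, 0.1))                        # bright orange-ish pixel
print(luma_chroma_to_rgb(*rgb_to_luma_chroma(0.9, 0.4, 0.1)))   # round-trips to (0.9, 0.4, 0.1)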


Analog medical cameras have been available since the mid-1970s; however, their use in operative applications was limited because they were heavy and could not be disinfected. Although the idea of coupling an endoscope with a camera was first described in 1957, it was impractical because cameras of the time were too large and cumbersome. The situation changed with the development of compact CCD cameras in the 1980s, when the endoscope could be coupled with a CCD camera and a TV monitor so that the entire operating room team could watch the surgery. This allowed the development of more complex laparoscopic instruments and procedures in which more than one hand is required for operation.


The first solid-state digital camera was based on a silicon CCD chip covered with image sensors, known as pixels. The solid-state digital camera converts the incoming light from a visual scene into a digital signal that can be stored, processed, or transmitted with greater efficiency and reliability than is possible with an analog camera. In addition, these cameras are lightweight, fully immersible, sterilizable, and shielded from electrical interference that may be created by cutting or coagulating currents during laparoscopic procedures.


A significant improvement in CCD camera technology has been the development of the three-chip camera, which contains three individual CCD chips, one for each primary color (red, green, and blue) ( Fig. 2.5 ). Color separation is achieved with a prism system overlying the chips. The three-chip design produces less crosstalk, with enhanced image resolution and improved color fidelity compared with the analog camera. A further development in digital camera technology was the single monochrome CCD chip that uses alternating red, green, and blue illumination to form a color image, rather than three chips with three separate color filters; this design reduces space requirements ( Fig. 2.5 ). More recently, complementary metal oxide semiconductor (CMOS) technology has replaced CCD sensor technology in digital endoscopes, offering superior image resolution, better contrast discrimination, lower power usage, lower cost, and a 50% reduction in weight.
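The frame-sequential color idea described above can be sketched as follows; the tiny 2 × 2 “scene” and its pixel values are invented purely for illustration and do not represent any particular sensor.

def capture_frame(illumination):
    # Stand-in for a monochrome CCD exposure taken under one illumination color.
    scene = {"red":   [[10, 200], [30, 40]],
             "green": [[20, 180], [35, 50]],
             "blue":  [[15, 100], [25, 60]]}
    return scene[illumination]

def sequential_color_image():
    # Three successive monochrome exposures are stacked into (R, G, B) pixels.
    r, g, b = (capture_frame(c) for c in ("red", "green", "blue"))
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
            for i in range(len(r))]

for row in sequential_color_image():
    print(row)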




Fig. 2.5


Schematic representation of three-CCD chip and one-CCD chip designs. Red, green, and blue light is directed to three separate CCDs by a prism. CCD, Charge-coupled device.

(Courtesy Olympus America, Melville, NY. From Lipkin ME, Scales CD, Preminger GM. Video imaging and documentation. In Smith AD, Preminger G, Badlani G, Kavoussi LR, eds. Smith’s Textbook of Endourology. 3rd ed. Oxford, UK: Wiley-Blackwell; 2012:19-37.)


The classic laparoscope cannot obtain high-magnification and wide-angle images simultaneously, which is a challenge when both close-up and wide-angle views are required during sophisticated laparoscopic procedures. When high magnification is needed, the laparoscope is advanced closer to the organ, which results in loss of the wide angle of view. Therefore, a multiresolution foveated laparoscope (MRFL) was recently introduced. Using two probes (a high-magnification probe and a wide-angle probe), the MRFL system can capture both high-magnification close-up and wide-angle images ( Figs. 2.6 and 2.7 ). At a working distance of 120 mm, the wide-angle probe provides surgical area coverage of 160 × 120 mm² with a resolution of 2.83 line pairs per millimeter (lp/mm), whereas the high-magnification probe has a resolution of 6.35 lp/mm and images a surgical area of 53 × 40 mm². The advantage of the MRFL camera system is that high-magnification images and a wide FOV can be obtained simultaneously without moving the laparoscope in and out of the abdominal cavity, improving efficiency and maximizing safety by providing superior situational awareness. In addition, the MRFL system provides a large working space with reduced laparoscopic instrument collision, since the magnification allows the laparoscope to be held farther away. In vivo evaluation verified the great potential of the MRFL for incorporation into laparoscopic surgery with improved efficiency and safety.
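As a back-of-the-envelope check on the MRFL figures quoted above, the sketch below converts each probe’s resolution in line pairs per millimeter into an approximate smallest resolvable feature (about 1 / (2 × lp/mm)) and an approximate count of resolvable elements across its field; the conversion is a standard rule of thumb rather than a calculation from the MRFL paper.

def probe_summary(name, width_mm, height_mm, lp_per_mm):
    feature_mm = 1.0 / (2.0 * lp_per_mm)           # smallest resolvable feature
    elements_w = width_mm * lp_per_mm * 2           # resolvable elements across the width
    elements_h = height_mm * lp_per_mm * 2          # resolvable elements across the height
    print(f"{name}: field {width_mm} x {height_mm} mm, "
          f"~{feature_mm:.2f} mm features, "
          f"~{elements_w:.0f} x {elements_h:.0f} resolvable elements")

probe_summary("wide-angle probe", 160, 120, 2.83)
probe_summary("high-magnification probe", 53, 40, 6.35)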




Fig. 2.6


Conceptual idea for operation of MRFL in laparoscopic surgery. MRFL, Multiresolution foveated laparoscope.

(From Qin Y, Hua H, Nguyen M. Characterization and in vivo evaluation of a multiresolution foveated laparoscope for minimally invasive surgery. Biomed Opt Express. 2014;5:2548-2562.)



Fig. 2.7


A, Schematic layout of a dual-resolution, foveated laparoscope for minimally invasive surgery. The scope consists of a wide-angle imaging probe and a high-magnification probe. The two probes share the same objective lens, relay lens groups, and scanning lens groups. B, Multiresolution foveated laparoscope (MRFL) prototypes in comparison with a commercially available standard laparoscope.

(From Qin Y, Hua H, Nguyen M. Characterization and in vivo evaluation of a multiresolution foveated laparoscope for minimally invasive surgery. Biomed Opt Express. 2014;5:2548-2562.)


During traditional laparoscopic surgery, an assistant is needed to control the laparoscope. Directing an assistant to control the camera can be challenging and may prolong the operating time. The earliest master-slave robotic surgical platforms therefore controlled the laparoscope, freeing the surgeon to operate with both hands and eliminating the need to rely on expert surgical assistants. Subsequently, autonomous camera navigation systems were developed to automatically keep surgical tools such as forceps and graspers in view. These systems use different methods to detect operator intent and track the tool tips relative to the camera, including eye gaze tracking, instrument tracking, kinematic tracking, image-based tracking, magnetic tracking systems, and inertial measurement units. Recently, Weede et al. developed a test system that applies a Markov model to predict the motions of the tools so that the camera follows them. The system is trained using data from previous surgical interventions so that it can operate more like an expert laparoscope operator. Furthermore, Yu et al. proposed algorithms for determining how to move the laparoscope from one viewing location to another using kinematic models of a robotic surgery system.
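The prediction idea can be illustrated with a toy first-order Markov model over quantized tool-motion directions; this is only a conceptual sketch with made-up training data, not Weede et al.’s actual implementation.

from collections import Counter, defaultdict

def train(moves):
    # Count transitions between successive quantized tool moves.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(moves, moves[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current):
    # Return the most frequently observed move following `current`.
    options = transitions.get(current)
    return options.most_common(1)[0][0] if options else current

# Hypothetical training data: directions a grasper tip moved in past cases.
recorded = ["right", "right", "up", "right", "up", "up", "left", "up"]
model = train(recorded)
print(predict_next(model, "right"))   # camera controller pre-positions toward this move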


Another device recently developed to overcome camera-handling difficulties during laparoscopic/robotic surgery is the RoboLens, a robotic system that employs an effective low-cost mechanism with a minimum number of actuated degrees of freedom (DOF), enabling spherical movement around a remote center of motion located at the insertion point of the laparoscopic stem. Hands-free operator interfaces are provided for user control, including a voice command recognition system and a smart six-button foot pedal ( Fig. 2.8 ). The operational and technical features of the RoboLens were evaluated during laparoscopic cholecystectomy operations on human patients, in which the RoboLens accurately followed the trajectory of the instruments with a short response time.
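The remote-center-of-motion constraint can be pictured with a short geometric sketch: the scope pivots about a fixed fulcrum at the entry port, so a pan angle, a tilt angle, and an insertion depth fully determine the tip position. This is the generic RCM geometry under assumed coordinates, not RoboLens’s actual kinematic model.

import math

def scope_tip(rcm, pan_deg, tilt_deg, insertion_mm):
    # Tip position for a scope pivoting about the fixed RCM (fulcrum) point.
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    # Direction of the scope axis below the fulcrum (spherical coordinates).
    dx = math.sin(tilt) * math.cos(pan)
    dy = math.sin(tilt) * math.sin(pan)
    dz = -math.cos(tilt)                      # pointing into the abdomen
    return tuple(c + insertion_mm * d for c, d in zip(rcm, (dx, dy, dz)))

port = (0.0, 0.0, 0.0)                         # fulcrum at the trocar site
print(scope_tip(port, pan_deg=30, tilt_deg=20, insertion_mm=100))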




Fig. 2.8


First prototype of designed robotic cameraman, RoboLens v1.1, in operational configuration.

(From Mirbagheri A, Farahmand F, Meghdari A, et al. Design and development of an effective low-cost robotic cameraman for laparoscopic surgery: RoboLens. Scientia Iranica. 2011;18:105-114.)


Laparoendoscopic single-site (LESS) surgery is a further refinement of minimally invasive laparoscopic procedures; its main difficulty is the limited space available for the laparoscope and other instruments. The Miniature Anchored Robotic Videoscope for Expedited Laparoscopy (MARVEL) is a wireless camera module (CM) that can be fixed under the abdominal wall to overcome crowding of instruments during LESS surgery. The MARVEL system includes multiple CMs, a master control module (MCM), and a wireless human-machine interface (HMI). Each CM features a wirelessly controlled pan/tilt camera platform that enables a full hemispheric FOV inside the abdominal cavity, wirelessly adjustable focus, and a multiwavelength illumination control system. The MCM provides near-zero-latency wireless video communication, digital zoom, and independent wireless control of multiple MARVEL CMs. The HMI gives the surgeon full control over the functionality of the CMs. To insert and fix the MARVEL inside the abdominal cavity, the surgeon first places each CM into the end of a custom-designed insertion/removal tool ( Fig. 2.9 ). A coaxial needle is used to secure the CM during insertion and removal, and the CM is anchored to the abdominal wall without the assistance of a separate videoscope. The surgeon controls the CM with a wireless joystick, which governs the pan/tilt movement, illumination, adjustable focus, and digital zoom of all the in vivo CMs. Each CM wirelessly sends its video stream to the MCM, which displays the images on high-resolution monitors.
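A minimal sketch of how a wireless joystick could be mapped to such a pan/tilt platform is shown below; the axis ranges, gain, update rate, and angle limits are invented for illustration and are not the MARVEL system’s actual parameters.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def update_pan_tilt(state, joystick_x, joystick_y, dt_s, gain_deg_per_s=45.0):
    # Rate control: joystick deflection (-1..1) sets angular velocity.
    pan = clamp(state["pan"] + gain_deg_per_s * joystick_x * dt_s, -90.0, 90.0)
    tilt = clamp(state["tilt"] + gain_deg_per_s * joystick_y * dt_s, 0.0, 90.0)
    return {"pan": pan, "tilt": tilt}

camera = {"pan": 0.0, "tilt": 45.0}
for _ in range(30):                           # 30 control ticks at ~30 Hz
    camera = update_pan_tilt(camera, joystick_x=0.5, joystick_y=-0.2, dt_s=1/30)
print(camera)                                 # new commanded pan/tilt angles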




Fig. 2.9


A, Functional diagram of the MARVEL system, including the MCM and the MARVEL robotic CM. B, Customized insertion/removal tool used for attaching the MARVEL platform within the peritoneal cavity. MARVEL provides its own imaging during attachment, eliminating the need for a cabled laparoscope during any portion of the procedure. CM, Camera module; MARVEL, miniature anchored robotic videoscope for expedited laparoscopy; MCM, master control module.

(From Castro CA, Alqassis A, Smith S, et al. A wireless robot for networked laparoscopy. IEEE Trans Biomed Eng. 2013;60:930-936.)


Most recently, Tamadazte and associates introduced a multiview vision system designed to combine the advantages of stereovision, a wide FOV, increased depth of vision, and low cost, without requiring in situ registration between images or additional incisions. The system is based on two miniature high-resolution cameras positioned like a pair of glasses around the classic laparoscope ( Fig. 2.10 ). The cameras are built around two 5 mm × 5 mm × 3.8 mm CMOS sensors with a resolution of 1600 × 1200 pixels, a frame rate of 30 frames/second, a low noise-to-signal ratio, an exposure control of +81 dB, and an FOV of 51 degrees with low TV distortion (≤ 1%). The device is no more invasive than standard endoscopy, since it is inserted through the laparoscope’s trocar ( Fig. 2.10 ).
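The depth benefit of stereovision mentioned above follows from simple triangulation: with two cameras a known baseline apart, a feature’s horizontal disparity between the left and right images gives its depth as Z = f × B / d. The focal length, baseline, and disparities in the sketch below are assumed values for illustration, not specifications of the Tamadazte system.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    # Triangulated depth (mm) of a feature seen by both cameras.
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the two views")
    return focal_px * baseline_mm / disparity_px

focal_px = 1200.0        # focal length expressed in pixels (assumed)
baseline_mm = 8.0        # spacing between the two miniature sensors (assumed)
for disparity in (120.0, 96.0, 60.0):
    depth = depth_from_disparity(focal_px, baseline_mm, disparity)
    print(f"disparity {disparity:5.1f} px -> depth {depth:6.1f} mm")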

