Infection Complications After Abdominal Organ Transplantation


Greater infectious risk

- Critical illness at the time of transplantation
- Prior colonization with antimicrobial-resistant pathogens
- Induction therapy – lymphocyte depletion
- High-dose corticosteroids
- Plasmapheresis (not well studied)
- High rejection risk (HLA mismatch, desensitization)
- Early graft rejection
- Graft dysfunction
- Technical complications
- Anastomotic leak
- Bleeding
- Wound infection/poor wound healing
- Prolonged intubation/intensive care unit stay
- Surgical, vascular, or urinary catheters

Lower infectious risk

- Immunologic tolerance
- Good HLA match
- Technically successful surgery
- Good graft function
- Appropriate surgical prophylaxis
- Effective antiviral prophylaxis
- PCP prophylaxis
- Appropriate vaccination



Urinary tract infection is the most common infectious complication after renal transplantation [2]. After the procedure, urine flow alterations may develop because of ureteral stenosis or vesicoureteral reflux. In addition, some renal transplant patients have underlying urological abnormalities (e.g., neurogenic bladder or chronic vesicoureteral reflux) that increase the risk of posttransplant urinary tract infection. Kidneys transplanted from cardiac death donors develop delayed graft function more frequently than other renal transplants, which in turn increases the need for dialysis and the incidence of infection [2].

Living donor and cardiac death donor liver transplantations carry an increased risk of biliary complications and ischemic cholangiopathy, which increase the risk of biliary infections [3, 4]. Biliary reconstructions other than duct-to-duct carry a higher risk of biliary infections and peritonitis [5].

Regarding pancreas transplantation, bladder drainage is associated with a higher risk of infection than enteric (intestinal) drainage [6]. Because pancreas transplantation involves intestinal manipulation, the risk of peritonitis and abdominal collections is high.

Small bowel transplantation (intestinal transplantation) is the treatment of choice for patients with intestinal failure and complications of parenteral nutrition. Many patients who undergo intestinal transplantation have a heavily scarred abdominal wall from multiple abdominal procedures and previous bowel resections, which may cause technical difficulties during surgery and complications later on. Very often, small bowel and liver transplantation are combined when irreversible liver damage develops as a result of long-term parenteral nutrition. The small bowel is rich in lymphoid tissue, which increases the risk of allograft rejection. Patients undergoing small bowel transplantation have a higher incidence of infectious complications than other SOT recipients because of the very high load of microorganisms in the intestinal graft and because they require higher degrees of immunosuppression [7, 8]. Intra-abdominal abscesses also occur frequently as a consequence of bacterial translocation or peritoneal contamination during surgery [9].



2.2 Infection Risk and Classifications


Generally speaking, the risk of infection is determined by the intensity of the exposure to infectious agents (epidemiological exposures) and the net state of immunosuppression.

Epidemiological exposures can be divided into four overlapping categories: (1) donor-derived, (2) recipient-derived, (3) community, and (4) nosocomial exposures.


2.2.1 Donor-Derived Infections


This group comprises infections transmitted with donor organs generally in the form of latent infections (usually viruses such as cytomegalovirus – CMV), unrecognized colonization/infection of biliary or urinary tract, unknown bacteremia, or surgical contamination at procurement or preservation. Infected organ donors have been found to transmit bacteria and fungi carrying resistance to routine surgical antimicrobial prophylaxis [10, 11]. In addition, unexpected clusters of donor-derived infections in transplant recipients have been recognized including those due to West Nile virus, lymphocytic choriomeningitis virus (LCMV), rabies, HIV, hepatitis B and hepatitis C viruses, herpes simplex virus, tuberculosis, endemic fungi, and Chagas’ disease [12, 13].


2.2.2 Recipient-Derived Infections


SOT recipients infected with latent or unrecognized pathogens before transplantation may experience reactivation of such agents after surgery, generally during the period of maximum immunosuppression (1–6 months, see below). Common recipient-derived pathogens include viruses (Herpesviridae, hepatitis B or C), M. tuberculosis, and, in certain geographical areas, endemic fungi (Histoplasma capsulatum, Coccidioides immitis, Paracoccidioides brasiliensis) and some parasites (Strongyloides stercoralis, Trypanosoma cruzi). Travel for tourism or other reasons, including medical purposes, has increased worldwide in recent years and poses a serious challenge for physicians who care for SOT recipients, because unusual epidemiological exposures may result in atypical infection syndromes [14].


2.2.3 Community-Acquired Infections


Patients with a favorable posttransplant clinical course and good graft function may present in the late period after transplantation (see below) with the same common community-acquired infections seen in non-SOT individuals. The most common are respiratory infections (generally in winter, triggered by a respiratory virus) [15] and urinary tract infections (most common in women and kidney transplant recipients). Atypical and severe manifestations are frequent in this setting. Furthermore, uncommon exposures related to employment, hobbies, travel, pets, or marijuana use (Aspergillus species) may produce unusual infection syndromes [16].


2.2.4 Hospital-Acquired Infections


These infections arise from surgical or perioperative complications; the pathogens involved are therefore typically those of nosocomial infections. Indeed, the major threat posed by such infections is antimicrobial resistance.


2.2.5 Net State of Immunosuppression


The concept of the “net state of immunosuppression” comprises all factors that may contribute to risk for infection [16]. Preexisting disease processes have an important role. Renal failure and dialysis are associated with poor responses to bacterial infections and colonization with hospital-acquired flora. Cirrhosis and portal hypertension reduce acute inflammatory responses (specific antibody formation, chemotaxis) and predispose to bacterial and fungal infections [17]. Breaches in mucocutaneous integrity (e.g., vascular and urinary catheters) and fluid collections (hematoma, ascites, effusions) favor microbial seeding. These infectious hazards must be added to the effects of immunosuppressive therapy. Multiple mechanisms of tolerance (e.g., central vs. peripheral deletion or anergy) have been demonstrated in patients with induced or spontaneous immunologic graft tolerance. Some gaps in function (e.g., NK cells, antiviral immunity) persist for months to years [16].


2.2.6 Timing of Infection


The natural epidemiology of infections following SOT has been well characterized [18]. Most infectious complications occur during the first year after transplantation, which is traditionally divided into:

1.

The first month posttransplantation (early): these infections are almost always hospital acquired, nosocomial bacteria and Candida spp. being the most common causative agents. However, unexplained early infectious syndromes (hepatitis, pneumonitis, encephalitis, rashes, leukopenia) may reflect donor-derived infection.

 

2.

One to six months posttransplantation (intermediate): viral pathogens and graft rejection are responsible for the majority of febrile episodes in this period. However, depending on the antiviral prophylaxis strategy, CMV and other herpesvirus infections may emerge later. Recipient-derived latent infections (e.g., tuberculosis, endemic fungi, T. cruzi) may reactivate in this period, and new exposures to opportunistic agents (e.g., Nocardia spp., Rhodococcus equi, tuberculosis, mold fungi, Cryptococcus neoformans, Strongyloides) may result in severe infections. In patients taking TMP–SMZ prophylaxis, some opportunistic infections, such as Pneumocystis pneumonia (PCP) and infections with Listeria monocytogenes, Toxoplasma gondii, and susceptible Nocardia species, are prevented; however, they can emerge after prophylaxis is discontinued [19].

 

3.

More than 6 months posttransplantation (late): beyond the sixth month after AOT, the level of immunosuppression in the majority of patients has been reduced to minimal levels. As a result, patients are no longer at high risk for opportunistic infections. In a minority of patients, such as those with a “never-do-well” graft, the opportunistic infections that characteristically occur during months 1–6 may still occur. Late-onset CMV disease may also manifest during this period, especially in CMV D+/R- AOT recipients who had received a prolonged course (6 months) of antiviral prophylaxis [20].

 

The timeline is used to establish a differential diagnosis for infectious syndromes at various stages after transplantation. Infections occurring outside the usual period or of unusual severity suggest excessive immunosuppression or epidemiological hazard. The timeline is “reset” to the period of greatest risk for opportunistic infection with the treatment of graft rejection or intensification of immune suppression (e.g., bolus corticosteroids or T-cell depletion) [16]. Changes in immunosuppressive regimens, routine prophylaxis, and improved graft survival may also alter the timeline somewhat.


2.3 Major Etiologies and Their Management


The main infectious pathogens in AOT recipients and their management are summarized in this section.


2.3.1 Viruses



2.3.1.1 CMV


CMV is a ubiquitous beta-herpesvirus that infects the majority of humans. The seroprevalence rates of CMV range from 30 to 97 % [21, 22]. In immunocompetent individuals, CMV infection manifests as an asymptomatic or self-limited febrile illness, after which CMV establishes lifelong latency in a small percentage of myeloid and dendritic cell progenitors, which serve as reservoirs for reactivation and as carriers of infection to susceptible individuals.

CMV is a major cause of morbidity and a preventable cause of mortality in solid organ transplant (SOT) recipients [23]. Without a prevention strategy, CMV disease typically occurs during the first 3 months after SOT; this onset has been delayed in SOT patients receiving CMV prophylaxis [24, 25].

In SOT recipients, CMV may be responsible for primary infection, reactivation, and/or disease.

1.

CMV infection: primary infection usually occurs in a seronegative recipient who receives a graft from a seropositive donor (D+/R- mismatch). Seronegative recipients may also acquire CMV infection through blood transfusions. After primary CMV infection, progression to severe disease is frequent in SOT recipients.

 

2.

CMV reactivation: reactivation of CMV in seropositive recipients may occur during the intermediate period. Depending on the immunological host response, reactivation may progress to disease or clear spontaneously.

 

3.

CMV disease: CMV infection or reactivation accompanied by clinical signs and symptoms. CMV disease is categorized into (1) CMV syndrome, which manifests as fever and/or malaise, leukopenia, or thrombocytopenia, and (2) tissue-invasive CMV disease (e.g., gastrointestinal disease, hepatitis, nephritis, pancreatitis). CMV has a predilection to invade the allograft, likely in part due to an aberrant immune response within the allograft.

 

CMV also has numerous indirect effects due to its ability to modulate the immune system. CMV has been associated with other infections such as bacteremia, invasive fungal disease, and Epstein–Barr virus-associated posttransplant lymphoproliferative disease [26]. CMV infection is an important contributor to acute and chronic allograft injury, including chronic allograft nephropathy or tubulointerstitial fibrosis in kidney recipients and chronic biliary stenosis in liver recipients [27].

CMV replication may be detected [23] by nucleic acid testing, antigen testing, and/or culture. Depending on the method used, CMV infection can be termed CMV DNAemia or RNAemia, CMV antigenemia, or CMV viremia. Because molecular methods are more sensitive, faster, and easier to perform than the other tests, they are currently the most widely used techniques for detecting CMV replication. However, there is still heterogeneity in the molecular methods used, which are generally laboratory-developed polymerase chain reaction assays detecting CMV DNA or RNA, in the specimen analyzed (plasma or whole blood), and in how the quantitative amount of viral replication is reported (IU/mL or copies/mL). The lack of an international reference standard and variation in assay design [23] have prevented the establishment of broadly applicable cutoffs for clinical decision-making, particularly for preemptive strategies.

As stated above, a major risk factor for CMV disease after SOT is primary infection. In CMV D+/R- patients, given the absence or weakness of the specific T-cell immune response, viral replication is intense, producing the abovementioned direct and indirect effects.

Prevention of CMV disease is based on two types of strategies: universal prophylaxis (administration of antiviral therapy to all patients at risk) and preemptive therapy (weekly screening for CMV replication and initiation of antiviral treatment when a preestablished level of CMV DNAemia is reached) [28]. A recent comprehensive meta-analysis failed to show the presumed superiority of universal prophylaxis over the preemptive strategy in terms of protection from the direct and indirect effects of CMV [29]. However, current guidelines recommend universal prophylaxis in CMV D+/R- and R+ recipients, for 3–6 months in liver and kidney recipients and for a minimum of 6 months in patients undergoing intestinal transplantation [23]. Oral valganciclovir at a dosage of 900 mg/day is currently the agent most used for this purpose, given its efficacy, feasibility, and lower rate of adverse events compared with ganciclovir. There are limited data to support the use of CMV immunoglobulin (CMV Ig) for prophylaxis when appropriate antivirals are given; some centers use these products in conjunction with antiviral prophylaxis, primarily in high-risk thoracic and intestinal transplant recipients.

The main disadvantages of universal prophylaxis are (i) the occurrence of late CMV infection which seems to be associated with a higher rate of severe disease and (ii) high exposure to antiviral agents with a substantial risk of resistance selection [23]. Evidence exists in support of a hybrid approach, initial universal prophylaxis followed by preemptive therapy [29]. The major disadvantage of preemptive therapy is the need for weekly testing for CMV DNAemia. However, recent studies suggest that the determination of CMV viremia may be done less frequently if the patient has recovered the CMV-specific T-cell response [30, 31]. Recent guidelines also suggest the use of some of the currently available techniques to determine the CMV-specific T-cell response in order to tailor the CMV preventive strategy to the specific patient risk [23].

Therapy for CMV disease in this population does not differ from that in other SOT patients. Intravenous ganciclovir (5 mg/kg every 12 h) is preferred as initial treatment, mainly in severe disease, followed by oral valganciclovir (900 mg every 12 h). Some experts prefer to continue valganciclovir at the prophylaxis dosage for another 2 weeks after symptom resolution and clearance of viremia.


2.3.1.2 EBV


The incidence of posttransplantation lymphoproliferative disease (PTLD) related to Epstein–Barr virus (EBV) is highest in intestinal recipients, in whom it reaches up to 32 %, compared with 3–12 % in liver transplant recipients and 1–2 % in kidney recipients [32]. Caillard et al. recently described a temporal sequence of sites of PTLD involvement in adult renal allograft recipients: disease localized to the graft occurs within the first 2 years, CNS disease between years 2 and 7, and gastrointestinal disease between years 6 and 10, becoming the predominant site of late disease [33]. Although PTLD in SOT recipients is most often of recipient origin [34], PTLD limited to the graft and occurring early after transplant is predominantly of donor origin [35].

More than half of patients are symptomatic, presenting with a mononucleosis-like picture with lymph node swelling, isolated PTLD lesions in the gastrointestinal tract, disseminated disease, or lymphoma. Risk factors for PTLD are primary EBV infection and therapy with OKT3 or antithymocyte globulins. When PTLD is suspected, CT of the neck, chest, abdomen, and pelvis should be considered to identify occult lesions, and biopsies should be obtained from suspicious lesions. Patients with active EBV disease typically have an elevated EBV viral load, which may be detected before clinical onset and may remain persistently high even after resolution of PTLD.

The high rates of morbidity and mortality attributed to PTLD have prompted efforts aimed at the prevention of EBV [32]. Serial monitoring of the EBV viral load using quantitative PCR assays has been shown to predict occurrence of PTLD in transplant populations and is increasingly being used to guide initiation of preemptive therapy. However, specific target levels of load and specific sites to sample (blood versus tissues), as well as therapeutic preemptive treatment regimens, need to be clarified. The management of patients with PTLD is controversial. Reduction of immune suppression is uniformly recommended in order to enhance a cytotoxic T lymphocyte response. Concerns over the development of rejection can limit this approach. While frequently used, antiviral therapies (primarily nucleoside analogues – e.g., ganciclovir – and immunoglobulins) are probably of limited benefit for the treatment of EBV PTLD. The use of anti-CD20 monoclonal antibody (rituximab) may be necessary for patients who fail to respond to reduction of immunosuppression or in whom the PTLD lesions are judged to be malignant [32].


2.3.1.3 Other Herpesviridae


Among the Herpesviridae, herpes simplex virus (HSV) and varicella-zoster virus (VZV) may reactivate or cause primary infection, with a more aggressive course than in non-SOT recipients, including hemorrhagic and visceral lesions. Some experts recommend acyclovir prophylaxis for seronegative patients or for those who experienced several reactivations before transplantation [36].

VZV infection should be prevented in seronegative patients who have contact with individuals with varicella or herpes zoster by the administration of hyperimmune globulin. VZV vaccine should be given to seronegative patients before transplantation.

HHV-8 is associated with Kaposi’s sarcoma (KS), Castleman’s disease, primary effusion lymphoma, and a nonmalignant but highly fatal disease characterized by fever, hemophagocytosis, myelosuppression, and multiorgan failure. In one study, the incidence of KS was 0.28 % after kidney transplantation; KS was diagnosed a median of 24 months after SOT, and mortality was 28.5 %. Primary HHV-8 infection was found to be an important risk factor.


2.3.1.4 BKV


The human BK polyomavirus (BKV) is linked to two major complications in transplant recipients: polyomavirus-associated nephropathy (PyVAN) in 1–10 % of kidney transplant patients and polyomavirus-associated hemorrhagic cystitis (PyVHC) in 5–15 % of allogeneic hematopoietic stem cell transplant (HSCT) patients. Both diseases occur only sporadically in patients with non-kidney SOT or with inherited, acquired, or drug-induced immunodeficiency.

Primary infection with BKV occurs in the first decade of life, as evidenced by increases in BKV seroprevalence to 90 % or more. The route of natural BKV transmission is not understood but is likely respiratory or oral. Subsequently, BKV colonizes the reno-urinary tract, the principal site of latent infection, most likely via a primary viremia. In healthy BKV-seropositive immunocompetent individuals, reactivation with asymptomatic urinary shedding of BKV is detectable in up to 10 %. In individuals with impaired immune function, particularly after SOT or HSCT, asymptomatic high-level urinary BKV replication is observed, with high-level BKV viruria and the appearance of “decoy cells” in urine cytology. High-level BKV viruria only rarely leads to viremia and PyVAN in non-kidney SOT. In kidney transplant (KT) recipients, however, approximately one-third of patients with high-level viruria/decoy cells develop BKV viremia and, in the absence of any intervention, progress to histologically proven PyVAN. This progressively affects graft function and increases the risk of graft loss from <10 % to more than 90 %.

Because effective and safe antiviral therapies are lacking, screening for BKV replication has become the key recommendation to initiate and guide a stepwise reduction of immunosuppression. This intervention allows for expanding BKV-specific cellular immune responses, curtailing of BKV replication in the graft, and clearance of BKV viremia.

In KT recipients, current guidelines [37] recommend screening for BKV replication at least every 3 months during the first 2 years posttransplant and then annually until the fifth year posttransplant. Screening for BKV replication can be done either by testing urine for high-level BKV viruria/decoy cells or by testing plasma for BKV viremia. Monthly plasma screening is preferred in many centers as it detects clinically more significant replication and provides a widely accepted trigger for therapeutic intervention. Detecting BKV viremia can guide more specific histopathology studies. The definitive diagnosis of PyVAN should be sought by demonstrating PyV cytopathic changes in allograft tissue and confirmed by immunohistochemistry or in situ hybridization (“proven PyVAN”).

Reducing immunosuppression should be considered for KT patients with sustained plasma BKV loads and is mandatory in KT patients with proven PyVAN. In patients with sustained high-level plasma BKV load despite adequately reduced immunosuppression, the adjunctive use of antiviral agents may be considered. The proposed options include cidofovir, leflunomide, fluoroquinolones, and IV IgG. However, there are no robust data regarding the benefit of these agents. Retransplantation can be considered for patients after loss of a first kidney allograft due to PyVAN, but frequent screening for BKV replication is recommended.


2.3.2 Bacteria


Bacterial infections, especially those involving Gram-negative bacteria, represent a major complication of AOT, occurring in 20–80 % of recipients, and they contribute to longer hospital stays and increased hospital costs [38]. Three-fourths of bacterial infection episodes occur in the first month after transplantation. In liver transplant (LT) recipients, intra-abdominal (including the biliary tree) and surgical site infections are the most common types of infection, followed by lower respiratory tract infections, catheter-related bloodstream infections, and urinary tract infections [39]. After KT, surgical site infections and urinary tract infections are the leading bacterial infections [40].

Perioperative antibiotic prophylaxis has been shown to be effective in reducing the rate of surgical site infections after SOT [41]. Every transplant program should establish the drug, timing, and duration of perioperative prophylaxis on the basis of local epidemiology and the type of surgery (duration, complications, etc.). In the case of donor infection with a multidrug-resistant (MDR) pathogen, the antibiotics used for prophylaxis should be active against the isolated pathogen, and therapy may be prolonged for 7–14 days after transplantation depending on the type of infection [13, 42]. The use of broad-spectrum antibiotic prophylaxis in recipients colonized with an MDR pathogen should likewise be weighed carefully against the potential for toxicity and the emergence of further resistance [43]. In LT recipients, some experts have proposed selective intestinal decontamination (SID) with antibiotics, but its efficacy and safety in the prevention of bacterial infections are still a matter of debate. A recent study from Spain did not confirm that fluoroquinolones administered from the time of transplantation have any protective effect against the development of early bacterial infections after LT [44]. Moreover, most pathogens recovered from infected LT recipients undergoing SID were resistant to quinolones [44].


2.3.2.1 MDR Pathogens


Given the frequent exposure to antibiotics for treatment and prevention, the high rate of invasive procedures, the use of indwelling devices, and prolonged hospitalization, patients undergoing AOT are at very high risk of acquiring infections due to MDR bacteria. Studies performed a decade ago reported Gram-positive bacteria as the leading cause of bacterial disease in SOT recipients, with a high rate of methicillin-resistant Staphylococcus aureus (MRSA) [38]. More recently, however, Gram-negative bacteria have overtaken Gram-positive bacteria, with Enterobacteriaceae being the leading pathogens, particularly in the AOT setting. Along with the reemergence of Gram-negative bacteria in this setting, carbapenem-resistant Enterobacteriaceae (CRE) have spread worldwide over the last decade, becoming a serious healthcare problem [45].

Among CRE, the most common pathogen is carbapenem-resistant Klebsiella pneumoniae (CR-KP). SOT has been shown to be an independent risk factor for CR-KP infection [46]. The emergence of CR-KP has been best evaluated in LT recipients for whom the incidence of CR-KP infection after transplant, in endemic areas, is approximately 5 %. Infection mainly occurs early after LT, although some authors have also reported a late occurrence with 50 % of episodes observed after the first month from transplant [47]. The abdomen is frequently the portal of entry of CR-KP infections in this setting with a high rate of bloodstream involvement (>80 %) [45]. The overall mortality of LT recipients infected with CR-KP varies between 25 and 78 % [47, 48].

Given the poor outcomes and limited therapeutic options, prevention of CR-KP infection in the AOT setting is of paramount importance. To prevent the spread of CR-KP in healthcare facilities, a number of recommendations have been made, including optimizing compliance with hand hygiene and contact precautions, educating healthcare personnel, minimizing the use of indwelling devices, implementing antimicrobial stewardship programs, and actively screening for CR-KP colonization [49]. However, there is currently no agreement on universal screening of asymptomatic LT candidates and recipients, as there are no data regarding its benefit [50]. Likewise, there is no agreement on the management of colonized patients, as there are no data on the efficacy of the proposed approaches, such as deferring LT, selective intestinal decontamination, antibiotic prophylaxis active against CR-KP, and/or empirical anti-CR-KP therapy. On the other hand, there is concern about toxicity and the emergence of further antibiotic resistance as a consequence of overuse of antibiotics active against CR-KP [51].

At our center we recently prospectively analyzed the impact of colonization before and after LT, along with other variables, on the development of CR-KP infection in all patients undergoing LT from June 2010 to December 2013. Of the 237 patients who underwent LT, 196 were non-CR-KP carriers, 11 were CR-KP carriers at LT, and 30 acquired colonization at a median of 14 days after LT. The CR-KP infection rates in these three groups were 2, 18.2, and 46.7 % (p < 0.001), respectively. Four variables were independently associated with infection: need for renal replacement therapy, mechanical ventilation for >48 h, histological evidence of hepatitis C recurrence, and CR-KP colonization [52]. In our study, CR-KP colonization acquired after LT conferred a higher risk of CR-KP infection than colonization present at LT; thus, we believe that in endemic areas active screening for CR-KP colonization should be performed not only before but also after LT. Empirical therapy active against CR-KP may be indicated only for symptomatic colonized patients with a complicated post-LT clinical course.

In a recent single-center retrospective case-control study of 13 KT recipients who developed CR-KP infection during 2006–2010, CR-KP infections were significantly associated with recent exposure to broad-spectrum antibiotics, were more likely to have been managed on an inpatient basis, and were more likely to have required source control. CR-KP infection was also significantly associated with earlier mortality: 6 of 13 (46 %) patients with CR-KP infection, and none of the controls, died within 6.5 months of infection onset [53]. The authors concluded that investigations into ways to better prevent CR-KP infection are urgently needed.

Studies in the general population of patients with CR-KP infection have shown that source control and prompt initiation of adequate therapy are associated with better survival [54, 55]. Available clinical data on the treatment of CR-KP infection indicate that (1) monotherapy is associated with lower success rates than combination therapy and with an increased risk of resistance to “second-line” antibiotics (i.e., colistin, tigecycline, and fosfomycin) and (2) carbapenem-containing combinations are more effective than non-carbapenem-containing regimens, especially against isolates with MICs <4 mg/L. The MIC “ceiling” above which carbapenems lose their benefit is currently unknown, but benefit has been observed in case series against isolates with MICs up to 16 mg/L. The selection of specific second-line agents should be individualized according to local resistance patterns, the site of infection, and the patient’s specific toxicity risks. Antibiotic loading doses should be considered for any patient with suspected CR-KP infection, especially if the patient is critically ill with evidence of impending or florid sepsis. Contemporary pharmacokinetic studies of carbapenems, colistin, and tigecycline have used loading doses to optimize drug activity and exposure early in the course of treatment.


2.3.2.2 Clostridium difficile


C. difficile is an anaerobic, Gram-positive, spore-forming bacillus that causes inflammatory diarrhea via two exotoxins, toxin A and toxin B, which trigger a cytotoxic response, neutrophilic infiltration, and cytokine release [56]. The incidence of C. difficile infection (CDI) is estimated to be 3–19 % in liver recipients, 3.5–16 % in kidney recipients, 1.5–7.8 % in pancreas–kidney recipients, and 9 % in intestinal recipients [57]. The incidence of CDI in SOT recipients is highest within the first 3 months after the procedure, probably because of more frequent antimicrobial exposure, intense immunosuppression, and increased exposure to the healthcare setting. Late-onset CDI occurs months to years after the transplant and is usually associated with either antimicrobial exposure or intensified immunosuppression to treat graft rejection. In a recent retrospective cohort study of all kidney and liver transplant recipients diagnosed with CDI at a single center over 14 years, 170 patients developed 215 episodes of CDI [58]. Of these, 162 episodes (75 %) were cured, 13 patients (8 %) died during hospitalization, and 49 patients (29 %) died within 1 year; no deaths were attributed to CDI. A recurrent episode was a major predictor of treatment failure [58].

The laboratory gold standard for C. difficile toxin detection in stool is the cell cytotoxicity assay, and the gold standard for detecting toxin-producing C. difficile is toxigenic culture. However, cytotoxicity assays have fallen out of favor because they are relatively labor intensive and involve a delay of at least 24 h before interpretation [56, 59]. Currently, more hospitals are converting to a two-step algorithm based on newer molecular methods. Screening stool for the presence of glutamate dehydrogenase (GDH), a common cell wall protein produced by both toxigenic and nontoxigenic C. difficile, is the foundation of many of the new protocols. GDH testing allows rapid and cost-effective screening; however, because GDH does not differentiate toxigenic from nontoxigenic strains, subsequent toxin testing (by ELISA or nucleic acid amplification testing [NAAT]) is required for GDH-positive stool specimens.

The severity of CDI can be divided into three categories: mild to moderate, severe, and severe with complications [60], although there are no validated methods for objectively categorizing patients in this way. Mild-to-moderate CDI typically presents as diarrhea, possibly accompanied by mild abdominal pain and mild systemic symptoms. Severe CDI includes profuse diarrhea with abdominal pain, leukocytosis, and fever or other systemic symptoms. Patients of advanced age and those with hypoalbuminemia are at increased risk for severe disease. Severe disease with complications comprises the features of severe disease accompanied by life-threatening conditions such as paralytic ileus, toxic megacolon, refractory hypotension, and/or multiorgan failure secondary to CDI. Because disease severity may progress rapidly, clinicians should reassess frequently and adjust therapy accordingly.

The first intervention in any patient with CDI is cessation of the inciting antimicrobial agent whenever possible. Published guidelines support basing the initial antibiotic choice on the severity of CDI [57, 60]. Oral metronidazole (500 mg three times daily) is recommended for mild-to-moderate disease in both the general population and SOT recipients. A major disadvantage of metronidazole in SOT recipients, however, is its interaction with medications such as tacrolimus and sirolimus, whose levels should therefore be monitored during treatment. Oral vancomycin (125 mg four times daily) is the preferred therapy for severe CDI; several studies have demonstrated improved response rates with vancomycin compared with metronidazole in severe disease. In contrast to metronidazole, vancomycin does not reach adequate fecal levels when given intravenously and should never be administered intravenously to treat CDI.

In 2011, fidaxomicin was FDA approved for the treatment of CDI. Fidaxomicin is a macrocyclic antibiotic (designated a macrolide in the United States and a macrocycle in Europe) with minimal systemic absorption, high colonic concentrations, and limited impact on the normal gut flora. It has been evaluated in patients with no more than one prior episode of CDI; the data show a clinical response similar to that of vancomycin 125 mg orally every 6 h but lower rates of recurrent infection [61]. Limitations of fidaxomicin include drug acquisition costs and a lack of data in SOT recipients. In severe CDI with complications, decreased gastrointestinal motility may limit the efficacy of oral vancomycin by preventing the drug from reaching the site of infection. In these patients, oral vancomycin 500 mg every 6 h may be warranted in an attempt to increase the probability that adequate colonic levels of vancomycin are achieved as quickly as possible. Several case reports also support vancomycin administered by retention enema in cases of ileus [57].
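The severity-tiered choice of initial therapy described above can be summarized as a lookup. This is a hedged sketch of the regimens quoted in the text [57, 60], with category labels of our own choosing; it is an illustration, not clinical guidance:

```python
def initial_cdi_therapy(severity, ileus=False):
    """Map CDI severity to the initial antibiotic regimen described in the
    text [57, 60]. Illustration only -- not a substitute for clinical judgment."""
    if severity == "mild-to-moderate":
        # In SOT recipients, monitor tacrolimus/sirolimus levels during treatment
        return "oral metronidazole 500 mg three times daily"
    if severity == "severe":
        # Never give vancomycin intravenously for CDI; fecal levels are inadequate
        return "oral vancomycin 125 mg four times daily"
    if severity == "severe-complicated":
        regimen = "oral vancomycin 500 mg every 6 h"
        if ileus:
            # Decreased motility may keep oral drug from reaching the colon
            regimen += " plus vancomycin retention enema"
        return regimen
    raise ValueError("unknown severity: %r" % severity)
```

Because severity may progress rapidly, any such mapping would need to be re-evaluated at each reassessment rather than applied once at diagnosis.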
Surgical intervention within the first 48 h of failure to respond to medical therapy, bowel perforation, or multiorgan failure may reduce mortality in patients with severe disease [57]. Serum lactate levels and the peripheral WBC count may help determine the timing of surgical intervention. Lactate levels rising to 5 mmol/L and WBC counts rising to 50,000 cells/µL are associated with perioperative mortality; thus, intervention before these cutoffs are reached should be considered.
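The laboratory cutoffs cited above can be encoded directly. A minimal sketch, with the thresholds taken from the text [57]; note that the point of the cutoffs is that surgical evaluation should be considered before they are reached:

```python
# Thresholds associated with perioperative mortality, per the text [57]
LACTATE_CUTOFF_MMOL_L = 5.0   # serum lactate, mmol/L
WBC_CUTOFF_PER_UL = 50_000    # peripheral WBC count, cells/microliter

def exceeds_mortality_cutoffs(lactate_mmol_l, wbc_per_ul):
    """True if either cutoff has been reached; clinically, surgical
    consultation should already be under way before this returns True."""
    return (lactate_mmol_l >= LACTATE_CUTOFF_MMOL_L
            or wbc_per_ul >= WBC_CUTOFF_PER_UL)
```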


Oct 6, 2016 | Posted in GASTROENTEROLOGY | Infection Complications After Abdominal Organ Transplantation
