Abstract
Uremia is a clinical syndrome with a wide range of signs and symptoms. It arises with advanced kidney failure and is characterized by the accumulation of numerous organic chemicals that would normally be cleared by healthy kidneys. Despite an ever-expanding catalog of retained chemicals, or uremic solutes, a fundamental understanding of the specific cause(s) of uremia remains elusive.
Keywords
ADMA, ESRD, indoxyl sulfate, metabolomics, microbiome, p-cresol sulfate, TMAO, uremia, uremic solute, uremic toxin
Outline
Uremia: The Clinical Syndrome
Classic Signs and Symptoms
Uremia After Dialysis Initiation
Uremic Cardiovascular Toxicity
Uremic Metabolic Toxicity
Uremia and Solute Retention
Solute Production
Uremic Inflammation and Oxidative Stress
Current Catalog of Uremic Solutes
Metabolomics Studies
Proving Causality in Uremia
Uremic Solutes Associated With Adverse Clinical Outcomes
Potential Mechanisms of Solute Toxicity
Treatment of Uremia
Conclusions
In a monograph on the history of uremia research, Richet wrote, “one hundred and fifty years after its birth, ‘uremia’ remains a clinico-chemical enigma.” This description remains apt to this day and summarizes some of the key features of the topic that will be discussed in more detail in this chapter. First, uremia is a clinical syndrome, with a wide range of signs and symptoms, rather than a well-circumscribed disease. Second, because uremia arises with advanced kidney failure, it is characterized by the accumulation of numerous organic chemicals that would normally be cleared by healthy kidneys. Indeed, the etymology of uremia derives from the Greek ouron (urine) and haima (blood), or urine in the blood. And third, despite an ever-expanding catalog of retained chemicals, or uremic solutes, a fundamental understanding of the specific cause(s) of uremia remains elusive.
Although the essence of Richet’s comment endures, two notes on evolving terminology warrant mention. First, uremia was historically understood to subsume all of the signs and symptoms that arise with kidney failure. Over time, its meaning has shifted such that the term is now often used to signify only the adverse effects of renal failure that cannot be explained by derangements in volume, inorganic ions, or known renal endocrine functions (i.e., calcitriol and erythropoietin synthesis). For the purpose of this chapter, we will largely adhere to this narrower conception of uremia, recognizing that signs and symptoms in patients can be multifactorial and difficult to parse. For more comprehensive discussions of these topics, the reader should refer to Chapter 9 and Chapter 10. The second note on terminology relates to the impact the advent of widespread renal replacement therapy has had on the uremic syndrome. Dialysis prevents immediate death and attenuates some of the acute symptoms of uremia. However, despite achieving adequate dialysis as defined by current standards of care, many patients continue to manifest some signs and symptoms of uremia, a phenomenon Depner described as the “Residual Syndrome.” This chapter will address both variants of uremia, before and after dialysis initiation, highlighting commonalities and differences when appropriate.
We will begin with a consideration of uremia as a clinical syndrome, highlighting the burden of symptoms and reduced health-related quality of life (HRQoL) experienced by patients with end-stage renal disease (ESRD) on dialysis. Next, we will examine the widely held hypothesis that retained uremic solutes are the fundamental cause of uremia. For consistency we will favor the term uremic solute throughout; this term is sometimes used interchangeably with uremic toxin in the literature, although the latter term is more properly reserved for solutes that are known to exert adverse biological effects. Because the list is always expanding, we do not provide an exhaustive survey of all or even most published uremic solutes. Rather, our goal is to provide a conceptual framework for categorizing different solute types, appreciating why they accumulate with kidney disease, and understanding the limitations of current extracorporeal approaches to their removal. In addition, we discuss factors beyond retention, including diet, the gut microbiome, and oxidative stress, that contribute to increased solute levels in ESRD. Throughout, we review how impactful clinical trials and emerging technologies have modified our understanding of uremia and identified new areas for future inquiry.
Uremia: The Clinical Syndrome
Classic Signs and Symptoms
Uremia affects nearly every organ system (Table 18.1). The term often connotes prolonged illness, but uremia can arise with either acute or chronic renal failure. Common manifestations include loss of appetite, altered smell and taste, nausea, vomiting, progressive weakness and fatigue, neuropathy, impaired sleep, altered mentation, pruritus, and reduced platelet function. Uremic frost, which reflects the excretion of urea through the skin, and uremic fetor, which reflects the breakdown of urea to ammonia in saliva, are now rare as dialysis is usually initiated before these signs develop. Similarly, uremic stupor, coma, and death are uncommon in modern healthcare settings, except when dialysis is deferred in the context of palliation.
Sign/Symptom | Comment |
---|---|
Anorexia | Uremic component includes reduced appetite and altered taste and smell; compounded by dietary and fluid restrictions, medications, and comorbidities (e.g., diabetic gastroparesis). |
Nausea and vomiting | In conjunction with anorexia, a very common trigger for dialysis initiation; uremia can affect both central (CNS chemoreceptor) and gastrointestinal (delayed emptying) mechanisms. |
Fatigue | Often with significant contributions from comorbid illness such as anemia and cardiovascular disease as well as deconditioning; reduced muscle mass and resultant frailty are common. |
Neuropathy | Distal polyneuropathy characterized by sensory symptoms, initially paresthesias, but also pain with progression; more advanced neuropathy can also cause motor symptoms. |
Altered mental status | Can range from subtle cognitive deficits to coma; often accompanied by fine action tremor, asterixis, or hyperreflexia. Most common EEG finding is prominence of slow waves. |
Insomnia | Includes difficulty initiating sleep, maintaining sleep, and waking too early; ESRD disturbs circadian sleep–wake rhythm; exacerbated by sleep-disordered breathing, restless legs syndrome, and pruritus, all of which are common with uremia. |
Pruritus | Large variation in presentation; can be generalized or localized to the back, face, arms, etc.; often worse at night, but in some patients most severe during dialysis. |
Bleeding | Primarily due to platelet dysfunction, typically with normal platelet count and normal prothrombin and partial thromboplastin times. |
Pericarditis | Sometimes subdivided into “uremic” (develops before or shortly after dialysis initiation) or “dialysis-associated”; can be serous or hemorrhagic; classic ECG manifestations of diffuse ST elevations uncommon. |
Amenorrhea and sexual dysfunction | Includes disturbances in menstruation and fertility in women, erectile dysfunction in men, and loss of libido in both; associated with disruption in hypothalamic–pituitary–gonadal axis. |
Clinical context matters a great deal when evaluating uremia’s contribution to a patient’s presentation. For example, uremia is sometimes invoked as the cause of coma in the intensive care unit, when in fact hypoxic or ischemic brain injury, infection, hypercarbia, liver disease, and sedating medications may play a more significant role. The confounding effects of medications always require careful consideration. For example, the accumulation of renally cleared medications, such as morphine, oxycodone, gabapentin, and pregabalin, can cause nausea, sedation, or abnormal neurological findings that may be mistaken for uremia. Patient-specific factors such as age, frailty, cognitive and cardiopulmonary reserve, and other comorbidities affect how uremia manifests. In addition, although we outline a conceptual distinction between uremia and derangements in fluid, electrolytes, and renal endocrine functions, this compartmentalization is often not feasible in clinical practice. For example, fatigue in uremic patients is often compounded by congestive heart failure and anemia. There is no gold standard for isolating the effects of uremia or grading its severity. Thus the recognition and assessment of this protean syndrome can only be performed at the bedside, on an individualized basis.
Variability in the timing of uremia onset adds to this complexity. Uremia exists along a continuum of kidney failure, with subtle deficits detectable at relatively modest levels of kidney dysfunction. In clinical practice, however, the term is generally reserved for the appearance of signs and symptoms severe enough to trigger consideration of dialysis initiation. This generally does not occur until the estimated glomerular filtration rate (eGFR) is less than 15 mL/min/1.73 m². The results of an important clinical trial provide valuable perspective on this transition period. The IDEAL Study randomized 828 adults with eGFR 10 to 15 mL/min/1.73 m² to early (eGFR 10 to 14 mL/min/1.73 m²) or late (eGFR 5 to 7 mL/min/1.73 m²) dialysis initiation. For the early-start group, the median time to dialysis initiation was 1.8 months compared with 7.4 months for the late-start group. Notably, 322 (75.9%) of the late-start group initiated dialysis before their eGFR had reached the target range, such that the mean eGFR at dialysis initiation was 9.8 mL/min/1.73 m². The reasons for starting dialysis at an eGFR >7 mL/min/1.73 m² in the late-start group are informative: 234 for uremia, 28 for fluid overload, 25 for physician discretion, and 4 for hyperkalemia. Thus among individuals randomized to late-start, uremia was the major indication for dialysis initiation and was evident in the majority of patients by the time eGFR had fallen to ∼7 mL/min/1.73 m². During a median follow-up of 3.6 years, there was no difference in mortality between the early- and late-start groups, supporting the use of uremic symptom onset as a major criterion for dialysis initiation.
Uremia After Dialysis Initiation
Dialysis attenuates, but does not eradicate, uremic symptoms in most patients. This “residual syndrome” is characterized by high symptom prevalence among patients on dialysis, with rates for individual symptoms ranging from 30% to 70% (Table 18.2). This burden of persistent uremic symptoms contributes significantly to poor HRQoL, with additional contributions from the physiological and psychological burden of dialysis therapy as well as other comorbid illnesses. Individual self-reported symptoms, including anorexia, poor sleep quality, and pruritus, as well as reduced HRQoL, have all been associated with increased mortality in ESRD. Independent of these associations, symptoms and HRQoL are clearly important outcomes in and of themselves. Indeed, HRQoL is often a greater concern than survival for patients; in one study, when asked about switching to intensive hemodialysis for the possibility of improving HRQoL or survival, 94% of patients would consider the switch to improve energy and 57% to improve sleep, but only 19% would consider it to extend survival by 3 years.
Symptoms | Prevalence |
---|---|
Anorexia | 49% |
Nausea | 33% |
Pruritus | 55% |
Excessive daytime sleepiness | 44% |
Difficulty concentrating | 54% |
Fatigue | 71% |
Pain | 47% |
In addition to a syndrome of attenuated yet persistent symptoms, uremia after dialysis initiation is characterized by chronic end-organ toxicity. Two examples that have received significant attention, both independent predictors of mortality in ESRD, are uremic cardiovascular disease (CVD) and uremic alterations in energy metabolism. For both topics, the reader is also referred to more comprehensive discussions in Chapter 12 and Chapter 13.
Uremic Cardiovascular Toxicity
CVD is highly prevalent in patients with ESRD, with a risk for CVD mortality 10- to 100-fold higher than in the general population. In part, this reflects shared risk factors for CVD and chronic kidney disease (CKD), particularly diabetes and hypertension. However, traditional risk factors do not fully account for this excess mortality, and clinical trials have shown no benefit to standard CVD treatments such as angiotensin-converting enzyme inhibitors or statins in dialysis patients. Derangements in erythropoiesis and mineral metabolism that arise with progressive kidney failure likely play a role in the pathogenesis of CVD in ESRD, but interventions that raise hemoglobin or lower parathyroid hormone levels have also shown no benefit. Thus there is significant interest in understanding how the uremic milieu contributes to accelerated CVD pathogenesis. From a clinical perspective, uremic CVD is characterized by left ventricular hypertrophy, myocardial fibrosis, medial vascular calcification, endothelial dysfunction, and sympathetic nervous system derangements. Left ventricular hypertrophy is particularly common, present in up to 80% of individuals at dialysis initiation. Approximately 50% of CVD mortality in dialysis patients is due to sudden death, although the relative contribution of malignant ventricular arrhythmias, bradyarrhythmias, asystolic arrests, or other events is unknown.
Uremic Metabolic Toxicity
Protein-energy wasting (PEW) is the term endorsed by the International Society of Renal Nutrition and Metabolism to describe the syndrome of reduced protein and energy reserves in ESRD. It has consequences throughout the body, but can be conceptualized at least in part as chronic toxicity to skeletal muscle and adipose deposits. Common features of PEW include hypoalbuminemia, low body mass index, and low serum cholesterol levels. PEW is often also characterized by insulin resistance, increased resting energy expenditure, and inflammation, all of which are believed to contribute to its pathogenesis. Anorexia, acidosis, and decreased physical activity are additional etiological factors. Despite amino acid losses of 4 to 8 g/day with peritoneal and hemodialysis, the initiation of maintenance dialysis can improve nutritional status in uremic patients. However, increasing the intensity or frequency of treatment above standard adequacy goals does not further improve nutritional indices.
In summary, uremia is a clinical syndrome that arises with severe acute or chronic kidney failure. It can affect nearly every organ system, but there is no objective indicator of its onset or severity. How and when uremia manifests depends on patient-specific factors, and significant uremic symptoms are a major impetus for dialysis initiation. Although dialysis improves some features of uremia, many patients continue to experience both persistent uremic symptoms and chronic end-organ toxicity.
Uremia and Solute Retention
Several observations suggest that retained solutes are the primary cause of uremia. First, uremia occurs with severe renal failure regardless of the etiology of kidney disease. Thus it is an acquired syndrome attributable to the loss of kidney function itself, rather than a specific manifestation of long-standing diabetes mellitus, systemic lupus erythematosus, autosomal dominant polycystic kidney disease, and so on. Second, severe renal failure leads to the accumulation of numerous solutes that are usually cleared by healthy kidneys, sometimes many-fold above normal levels. Third, solute clearance with dialysis improves, but does not fully ameliorate the uremic syndrome. Fourth, unlike dialysis, kidney transplantation results in dramatic improvement, even normalization, of both uremic solute levels and uremic symptoms.
Normal Kidney Function
Understanding solute retention requires an understanding of normal kidney function. The kidneys generate ∼140 L/day of glomerular filtrate, and then reabsorb the vast majority of filtered sodium and water, glucose, amino acids, and other urinary constituents required to maintain volume and nutritional homeostasis. Second only to the heart in mitochondrial abundance, the kidney consumes an enormous amount of energy for this reclamation across electrochemical gradients. Although this cycle of filtration and reabsorption seems energetically wasteful, it permits very high rates of clearance of freely soluble waste products such as creatinine (∼120 mL/min/1.73 m²). In addition to glomerular filtration, normal renal excretory function includes tubular secretion, a critical mechanism for the excretion of protein-bound waste products that do not readily cross the glomerular filtration barrier. As with tubular reabsorption, tubular secretion is energy-intensive and is mediated by specific apical and basolateral membrane-spanning transporters. Studies in knockout mice have begun to map the secretion of specific uremic solutes to specific organic anion transporters (OATs) expressed in the renal tubular epithelium. Finally, the kidneys also take up and catabolize some circulating small molecules, such as asymmetric dimethylarginine (ADMA) and S-adenosylhomocysteine, as well as numerous peptides. Thus, through a combination of filtration, reabsorption, secretion, and metabolism, the kidneys have a broad and heterogeneous impact on circulating solute levels, minimizing the loss of desired nutrients while facilitating the clearance of waste products.
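To make the filtration arithmetic concrete, the standard clearance relationship from renal physiology (not specific to this chapter; the numbers below are representative values assumed for illustration) is

\[ C_x = \frac{U_x \times \dot{V}}{P_x} \]

where \(U_x\) and \(P_x\) are the urine and plasma concentrations of solute \(x\) and \(\dot{V}\) is the urine flow rate. For creatinine, assuming \(U_{cr} \approx 100\) mg/dL, \(P_{cr} \approx 1\) mg/dL, and \(\dot{V} \approx 1.2\) mL/min yields \(C_{cr} \approx (100 \times 1.2)/1 = 120\) mL/min, in keeping with the normal value noted above.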
Renal Failure and Solute Retention
Solutes normally cleared by the kidneys accumulate as kidney function declines. Indeed, kidney dysfunction is most often detected and monitored on the basis of serum creatinine levels, a marker of glomerular filtration. However, the relative rise in creatinine does not necessarily predict the relative rise in other uremic solute levels. This discrepancy is magnified among patients with ESRD on dialysis. Both hemodialysis and peritoneal dialysis, which are discussed in detail in Section III, differ fundamentally from endogenous kidney function. For both, the rate of clearance of freely soluble solutes, such as creatinine, is substantially less than normal, with standard thrice-weekly hemodialysis and continuous ambulatory peritoneal dialysis achieving time-averaged creatinine clearances of <10 to 15 mL/min/1.73 m². The rate of clearance of protein-bound solutes is even worse because it is the free rather than the total solute level that drives the diffusion gradient across the hemodialysis or peritoneal membrane. For example, one study showed predialysis elevations of phenylacetylglutamine (122-fold), hippurate (108-fold), indoxyl sulfate (116-fold), and p-cresol sulfate (41-fold), four solutes normally cleared by tubular secretion, significantly greater than for urea (5-fold) and creatinine (13-fold). Dialysis is also less effective for the clearance of relatively large molecules (often referred to as middle molecules), generally composed of peptides or small proteins. Some of these molecules, such as β2-microglobulin (11.8 kDa), cystatin C (13.3 kDa), leptin (16 kDa), and select advanced glycation end-products (AGEs), undergo glomerular filtration followed by uptake and catabolism within tubular epithelial cells, and thus can accumulate substantially with renal failure. Finally, a limitation specific to hemodialysis is the intermittency of treatment, which stands in contrast to the continuous clearance provided by endogenous kidney function. Whereas hemodialysis removes waste products from circulating blood, its access to the intracellular space or different tissue beds is contingent on rapid equilibration of levels across these different compartments. Thus for sequestered solutes, or solutes that do not rapidly equilibrate across compartments, hemodialysis may transiently lower plasma concentrations without making a large impact on total body solute excess.
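A simplified calculation (with assumed numbers, ignoring binding kinetics, convective removal, and dialyzer-specific factors) illustrates why protein binding limits diffusive removal. If only the free fraction \(f\) of a solute contributes to the concentration gradient across the membrane, then

\[ K_{\text{bound}} \approx f \times K_{\text{unbound}} \]

so a solute that is 95% protein bound (\(f = 0.05\)), dialyzed in a circuit that clears a comparable unbound solute at 200 mL/min, would have an effective diffusive clearance on the order of \(0.05 \times 200 = 10\) mL/min.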
Residual Clearance
Due to the relative inefficiency of dialysis in replicating the excretory function of the kidney, residual kidney function (RKF) in dialysis patients can make an important contribution to solute clearance. For example, among individuals on hemodialysis with a mean residual urea clearance of 2.5 mL/min, RKF provided 20% of total urea removal, 34% of the total p-cresol sulfate removal, and 66% of the total hippurate removal. In addition, nonrenal clearance may also be an important route for the excretion of some uremic solutes in patients with renal failure. Creatinine excretion through the gastrointestinal tract is known to increase with advancing renal failure, attributable to increased degradation by gut microbes. To what extent similar mechanisms augment the nonrenal clearance of other uremic solutes is poorly characterized. Conversely, uremia may reduce the nonrenal clearance of some solutes by impairing the expression or function of hepatic and intestinal solute transporters.
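A rough calculation shows why even this small residual clearance matters (the equivalent continuous dialytic urea clearance of ∼10 mL/min used here is an assumed, illustrative figure rather than a value from the study cited above):

\[ \frac{K_{\text{residual}}}{K_{\text{residual}} + K_{\text{dialytic}}} = \frac{2.5}{2.5 + 10} = 0.2 \]

that is, approximately 20% of total urea removal, consistent with the proportion quoted above. For poorly dialyzed solutes such as protein-bound compounds, the dialytic term is smaller and the residual contribution is correspondingly larger.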
In summary, “normal” kidney function includes glomerular filtration, tubular reabsorption and secretion, and metabolism. Acting continuously and cooperatively, these functions both retain valuable circulating nutrients and excrete waste products with a range of physicochemical properties, including protein binding, size, and sequestration. With kidney failure, loss of these functions results in solute retention, with relative levels of accumulation that may not correlate with the levels of filtration markers such as creatinine and urea. Because renal replacement modalities are best suited for small, soluble solutes, the accumulation of large and/or protein-bound solutes can be very high, for example, >100-fold above normal. Even a low level of RKF can have a large impact on uremic solute burden.
Solute Production
Solute Production From Food
Carbohydrates, lipids, and proteins are largely composed of glucose, fatty acids, and amino acids, respectively. The catabolism of these molecules converges on the citric acid cycle (Fig. 18.1), also known as the tricarboxylic acid cycle or Krebs cycle. This series of chemical reactions receives carbon end-products from glycolysis, the β-oxidation of fatty acids, and various amino-acid catabolic pathways, ultimately yielding CO₂, H₂O, and reducing equivalents that drive adenosine triphosphate (ATP) production via oxidative phosphorylation. Despite this convergence of biochemical pathways, amino acid metabolism differs in two important ways. First, unlike glucose and fatty acids, which are composed entirely of carbon, oxygen, and hydrogen, every amino acid has a nitrogen-containing amino group. Second, unlike glucose and fatty acids, which can be stored as glycogen or triglyceride when ingested in excess of energetic needs, amino acids cannot be stored (muscle is a reservoir of amino acids that can be catabolized under starvation conditions, but muscle mass does not increase simply as a function of protein intake). Thus the daily ingestion and breakdown of amino acids necessitate ongoing nitrogen excretion to maintain net balance. Most aquatic species simply release ammonia into their surroundings. Because free ammonia is toxic, humans and other mammals shuttle tissue-derived amino groups as glutamine or alanine to the liver, where the amino groups are converted to urea, which is then excreted by the kidneys.
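The scale of this obligatory nitrogen excretion can be illustrated with a standard nutritional approximation (protein is roughly 16% nitrogen by weight; the intake shown is an assumed, typical value):

\[ 70 \ \text{g protein/day} \times 0.16 \approx 11 \ \text{g nitrogen/day} \]

nearly all of which must ultimately be excreted, largely as urinary urea nitrogen, to maintain neutral nitrogen balance in an adult who is neither gaining nor losing lean mass.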
Given the kidneys’ fundamental role in maintaining whole-body nitrogen balance, nitrogenous waste products represent a large fraction of the uremic solutes identified to date. These include urea as well as numerous amino acid derivatives. Because the underlying ring structures of tryptophan, phenylalanine, and tyrosine make these amino acids hydrophobic, many of their derivatives are protein-bound. As will be discussed in more detail, gut microbes perform key enzymatic steps in the production of some of these compounds. In addition to protein, other dietary sources contribute to the pool of nitrogen that must be excreted to maintain whole-body balance. For example, uric acid, an end-product of purine nucleotide breakdown, and creatinine, a breakdown product of muscle-derived creatine phosphate, are abundant nitrogenous waste products in urine. Trimethylamine is synthesized from dietary nutrients such as choline and carnitine.
Finally, non-nitrogen-containing dietary constituents also produce uremic solutes. Because of an odd carbon number, side chains, or other structural features, some organic acids cannot be fully metabolized to CO₂ and H₂O and thus require renal excretion. Acidosis in the context of methanol or ethylene glycol poisoning illustrates this possibility, with dangerous accumulation of metabolites such as formic acid and oxalic acid even in individuals with normal kidney function. More commonly, oxalate is derived from plants or from the catabolism of ascorbic acid (vitamin C) and can reach very high levels in ESRD. Sugar alcohols such as sorbitol, mannitol, and myoinositol can also reach high levels in ESRD; myoinositol in particular can accumulate significantly, as it is normally degraded by the kidney.
Solute Production and the Gut Microbiome
Modern sequencing methodologies have enabled a more comprehensive assessment of the gut microbial community, circumventing the bias imposed by traditional culture-based methods. One approach has been targeted sequencing of 16S ribosomal DNA in stool or other gut-derived samples. This gene is present in all bacteria and archaea, and slight differences in sequence provide information about microbial diversity at the family, genus, and sometimes even species level. Complementary to 16S ribosomal DNA-based taxonomic profiling, another approach has been to sequence as many fragments of DNA as possible in a given biological sample in a nontargeted fashion. This latter, “metagenomic” approach is more computationally challenging, but provides information about the full breadth of genes, and thus the range of enzymatic capabilities in a given microbial community. Studies using these tools have affected many areas of biomedicine, with several common themes: the gut microbial community is abundant and diverse, it harbors metabolic capacities that are distinct from the host, it varies across individuals and disease states, it has diverse impacts on host metabolism, and it represents a potential target for therapeutic intervention.
The gut microbiome plays a large role in shaping the biochemicals absorbed from diet. Indeed, a major evolutionary driving force for harboring large microbial communities is that they can degrade dietary substances that are otherwise nondigestible, maximizing host nutrition. Perhaps quantitatively most important is the gut microbial digestion of polysaccharides, such as cellulose and starch, yielding short-chain fatty acids, which can then be absorbed and utilized by the host. The gut microbiome also affects amino acid and lipid metabolism. The effect on amino acids is at least twofold. First, amino acids synthesized de novo by bacteria are absorbed and contribute to the host amino acid pool. This effect can be enormous in ruminant animals, which can survive with ammonia and urea as their only sources of dietary nitrogen. In humans, the quantitative importance of microbial amino acid synthesis is much less, ranging from 1% to 20% for select amino acids. Second, bacterial metabolism of luminal protein yields amino acid derivatives that are then absorbed into the host circulation. Bacterial metabolism of tryptophan and phenylalanine/tyrosine, followed by additional conjugation steps in the liver, yields uremic solutes such as indoxyl sulfate and p-cresol sulfate, respectively. One study elegantly underscored the requirement of gut microbes for this process, demonstrating normal levels of indoxyl sulfate and p-cresol sulfate among ESRD patients who had undergone a colectomy. Using mass spectrometry (MS), this study showed that many other compounds, most of which remain unidentified, were elevated only among ESRD patients with intact colons, implicating colonic microbes in their production. Similar results have been demonstrated in mice with renal failure in germ-free (no gut microbes) versus control conditions. Gut microbes have also recently been identified as potential mediators of the proatherosclerotic effects of dietary lipids. More specifically, they convert dietary choline into trimethylamine, which is then oxidized in the liver to yield trimethylamine N-oxide (TMAO), a potential cardiovascular toxin.
Thus the gut microbiome participates in the production of some, and perhaps many, uremic solutes (see Fig. 18.1). In turn, the uremic milieu appears to modify the gut microbiome, with small studies suggesting bacterial overgrowth in the small intestine. One study utilizing 16S rRNA gene analysis showed marked differences in the taxonomy of stool microbes in individuals with ESRD compared with healthy controls. ESRD could modify the gut flora via several mechanisms, including alterations in diet, increased delivery of nitrogenous compounds into the gut lumen, increased frequency of antibiotic use, other medications such as iron, and reduced integrity of the gut epithelial barrier. Given the gut microbiome’s impact on host metabolism, as well as its contribution to uremic solute production, understanding how the microbiome and its functional capacity change as kidney function declines warrants further investigation.
Uremic Inflammation and Oxidative Stress
Progressive loss of kidney function results in increased systemic inflammation and oxidative stress, which in turn may contribute to various features of the uremic illness, particularly chronic toxicities such as CVD and PEW. Thus uremia is both a cause and consequence of inflammation and oxidative stress. Here, we discuss how this interrelationship overlaps with the solute-centric approach outlined in this chapter while also providing an alternative perspective on uremia.
The etiology of increased inflammation and oxidative stress in ESRD is likely multifactorial, and remains incompletely understood (see Chapter 14, Inflammation in Chronic Kidney Disease, for a more comprehensive discussion). Ikizler and Himmelfarb have shown that biomarkers of inflammation and oxidative stress increase in patients with CKD stage 3 to 5, remain persistently elevated despite initiation of hemodialysis, and fall substantially after renal transplantation, supporting a causal role for the loss of kidney function in triggering these alterations. These observations align with the rise, persistent elevation, and fall of poorly dialyzed uremic solutes across these clinical scenarios, raising the possibility that solute retention may contribute to the genesis of inflammation and oxidant stress in patients with CKD. Indeed, middle molecules normally cleared by the kidney include cytokines, for example, interleukin-6 (IL-6), as well as complement peptides and AGEs; loss of renal clearance can result in the retention and subsequent accumulation of these proinflammatory molecules. In addition, and as discussed in more detail later, other uremic solutes such as p-cresol sulfate and indoxyl sulfate may induce aberrant inflammation and oxidative stress, either through tissue injury or by stimulating immune cells.
In turn, this milieu can itself drive solute production, creating a deleterious cycle. This process of self-amplification is exemplified by AGEs, a heterogeneous group of compounds derived from the nonenzymatic ligation of proteins and other macromolecules with carbonyl compounds. These carbonyl compounds include sugars (hence the term glycation) as well as small carbonyl precursors produced from the oxidation of sugars and lipids. AGEs can come from diet or be produced endogenously. The increase in oxidative stress in ESRD can significantly increase endogenous AGE production, even in the absence of hyperglycemia or diabetes mellitus, compounding the loss of renal AGE clearance. AGEs can accumulate in cells and tissue, where they cross-link with proteins and interfere with macromolecule function. In addition, AGEs can signal through the receptor for advanced glycation end-products (RAGE), activating proinflammatory and prooxidant signaling pathways. In addition to AGE formation, oxidative stress produces other macromolecule modifications, including lipid peroxidation, protein thiol oxidation, protein tyrosine nitration, and DNA strand breaks. Although most glycated or oxidized macromolecules are not cleared by the kidney, their breakdown products, such as pentosidine and N-carboxymethyllysine (AGEs), 4-hydroxynonenal and malondialdehyde (lipid peroxidation), and 3-nitrotyrosine (tyrosine nitration), are solutes that can accumulate in ESRD (see Fig. 18.1).
Although the burden of inflammation and oxidation in ESRD can be considered through the framework of solute retention and production, it also provides a different perspective on the uremic illness. The toxicity of these processes is likely mediated primarily through their effects on macromolecule structure and function, in circulation and within tissue, rather than through the inherent toxicity of the smaller solutes derived from macromolecule degradation. For example, oxidation of lipoproteins has been hypothesized to enhance the atherogenicity of low-density lipoprotein (LDL) cholesterol and impair high-density lipoprotein (HDL)-mediated reverse cholesterol transport, accelerating the development of CVD. Further, mechanisms beyond solute retention and production contribute to inflammation and oxidative stress in ESRD. Increased exposure to infectious agents, whether from overt infection or from subclinical pathogen exposure due to periodontal disease, biofilms on hemodialysis or peritoneal dialysis catheters, or impaired gut epithelial-barrier integrity, is one potential culprit, as is exposure to bioincompatible hemodialysis membranes or impure dialysate. Whatever the etiology, the cumulative burden of inflammation and oxidative stress correlates strongly with adverse outcomes in ESRD.
Current Catalog of Uremic Solutes
Hemodialysis was initially performed using membranes that provided limited clearance of solutes with molecular weights (MWs) >1 kDa. Because these treatments could improve some symptoms of uremia, for example, waking comatose patients or relieving vomiting, there is wide acceptance that some important uremic toxins are small. However, the incomplete resolution of uremia with dialysis raised the question of whether increased clearance of small molecules like urea or increased clearance of relatively larger “middle molecules” would be beneficial. The Hemodialysis (HEMO) study was a landmark randomized clinical trial of 1846 prevalent hemodialysis patients that used a two-by-two factorial design to examine the effects of standard or high-dose dialysis (as defined by urea clearance, Kt/V) and low-flux or high-flux dialyzers (with the latter providing enhanced clearance of some middle molecules). Unfortunately, neither the higher dose (spKt/V urea of 1.71 ± 0.11 vs. 1.32 ± 0.09) nor use of a high-flux membrane (β2-microglobulin clearance of 34 ± 11 mL/min vs. 3 ± 7 mL/min) improved the primary outcome of all-cause mortality over a mean follow-up of 2.84 years, reinforcing the need to identify and target the specific solutes that cause uremia.
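For context, the urea-based adequacy metric used in the HEMO study, Kt/V, is a dimensionless ratio; in its simplest single-pool form (with illustrative numbers assumed here),

\[ Kt/V = \frac{K \times t}{V} \]

where \(K\) is the dialyzer urea clearance, \(t\) the session length, and \(V\) the urea distribution volume. For example, \(K = 250\) mL/min, \(t = 240\) min, and \(V = 40\) L gives \((250 \times 240)/40{,}000 = 1.5\). In clinical practice, spKt/V is typically estimated from pre- and postdialysis urea concentrations rather than computed from these terms directly.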
Because a detailed mechanistic understanding of uremia is lacking, a common approach is to classify solutes according to the physical characteristics that affect solute retention and dialytic clearance, such as solubility, protein binding, and size. For example, in a 2003 report of the European Uremic Toxin Work Group, Vanholder and colleagues summarized the existing literature at that time on 90 uremic compounds, dividing them into water-soluble, low-molecular-weight solutes (MW <500 Da), protein-bound solutes, and middle molecules (MW ≥500 Da). This compendium has been updated over time and is available online (http://www.uremic-toxins.org/); of the ∼130 compounds currently listed, approximately half are categorized as water-soluble, one-quarter as protein-bound, and one-quarter as middle molecules. Table 18.3 highlights select solutes representative of each class of molecules. As already discussed in the overview of solute production, many of these solutes are nitrogenous waste products. Among water-soluble solutes, these include arginine derivatives (guanidines), nucleotide breakdown products, and TMAO, whereas protein-bound solutes include tryptophan (indoles) and phenylalanine/tyrosine derivatives (phenols). By contrast, middle molecules are composed almost entirely of peptides or small proteins. Markers of inflammation and oxidative stress are represented in all three categories.
Class | Name | Comment |
---|---|---|
Water-soluble | Urea | Used to measure dialysis adequacy |
 | ADMA | Guanidine; inhibits NO synthase |
 | TMAO | Produced by gut bacteria |
 | Pseudouridine | Nucleoside |
 | Oxalate | Organic acid |
 | Myoinositol | Sugar alcohol |
 | Malondialdehyde | From degradation of oxidized lipids |
Protein-bound | p-Cresol sulfate | Phenol; produced by gut bacteria |
 | Indoxyl sulfate | Indole; produced by gut bacteria |
 | Pentosidine, N-carboxymethyllysine | Advanced glycation end-products |
Middle molecules | β2-microglobulin | Causes dialysis-related amyloidosis |
 | Parathyroid hormone, fibroblast growth factor-23 | Hormones that regulate bone and mineral metabolism |
 | Leptin, ghrelin | Hormones that regulate appetite and energy homeostasis |
 | Interleukin-6, tumor necrosis factor alpha | Cytokines |
The advent of high-throughput analytical approaches has expanded the list of known uremic solutes. Nevertheless, the fundamental question of proving which solutes are toxic remains, and significantly less progress has been made in this regard. In this section, we provide an overview of emerging approaches to solute discovery, the challenge of proving causality in uremia, and strategies to prioritize among a multiplicity of solutes. Finally, we provide a focused discussion of potential mechanisms of solute toxicity.
Metabolomics Studies
Metabolomics refers to the systematic analysis of small molecules, typically <1500 Da, in a biological specimen. Nuclear magnetic resonance (NMR) spectroscopy and MS have served as the two main workhorses in metabolomics. NMR uses the magnetic properties of select atomic nuclei (¹H, ¹³C, or ³¹P) to determine the structure and abundance of metabolites in a biological specimen. MS methods resolve compounds on the basis of their mass, or more precisely their mass-to-charge ratio after ionization, and are typically coupled to up-front separation techniques including gas and liquid chromatography. Although all of these analytical tools have been in use for decades in uremia research, recent advances have increased their speed, sensitivity, and specificity, enabling the simultaneous measurement of hundreds to thousands of analytes in a given blood sample. Metabolomics studies of ESRD, spearheaded by Niwa and others, have identified dozens of novel uremic solutes, rapidly expanding the total list to >270 molecules. Although offering clear advantages for solute discovery, metabolomic approaches also have notable limitations. Current platforms do not provide comprehensive coverage of the metabolome, and for any given method there is heterogeneity in how well individual analytes are measured. Further, metabolomics usually provides semiquantitative results, such that follow-up studies are required to determine the absolute concentrations of newly identified uremic solutes. Proteomics, the comprehensive study of proteins in a biological specimen, has also been utilized in uremia research. Given some technical constraints, however, these studies have been limited to relatively small numbers of patient samples, although this is likely to change with emerging, higher-throughput proteomic approaches.
Proving Causality in Uremia
In analogy with Koch’s postulates for identifying infectious pathogens, Bergstrom suggested that proof of a solute’s toxicity requires a demonstration that reducing its concentration improves uremia (or some specific feature of uremia), and that raising its concentration in normal people recapitulates the illness. No purported uremic toxin has met this standard. In a classic study, investigators added urea to the dialysate of three patients to maintain blood concentrations between 181 and 600 mg/dL, for periods of 7 to 90 days. Serial electromyography and electroencephalography did not show evidence of progressive neuropathy or encephalopathy during urea loading. A blood urea nitrogen concentration of less than 300 mg/dL was generally well tolerated, and symptoms such as malaise, vomiting, headache, and bleeding only emerged with urea levels exceeding this threshold, providing compelling evidence that urea alone is not responsible for the uremic illness.
Although Bergstrom’s framework is intellectually appealing, it is unclear whether the approach it embodies is either realistic or appropriate for the investigation of causality in uremia. First, the uremic illness is multifaceted, and it is likely that different solutes are responsible for different symptoms. Second, uremic symptoms are difficult to quantify, and symptoms such as fatigue are multifactorial with contributions from comorbid illness. Third, uremia subsumes both acute and chronic complications. Long-term studies of chronic end-organ toxicities rather than acute symptoms would be difficult to conduct, and in some cases would be unethical. Fourth, some uremic solutes may need to act in combination, or in the setting of advanced kidney failure, to exert toxicity. For example, solutes might coactivate common pathways or displace each other from protein binding, yielding higher free concentrations. Finally, ideal animal models for the study of uremia are lacking. In vivo assessment of uremic symptoms such as fatigue, nausea, change in taste, or pruritus is difficult. Arguably, model systems are more helpful for the study of end-organ toxicities, such as left ventricular hypertrophy, vascular calcification, or alterations in energy metabolism. Even these studies, however, are limited by an inability to faithfully recapitulate the uremic milieu in small animals. For all of these reasons, the goal of mapping uremic symptoms and end-organ toxicities to individual solutes or groups of solutes in humans is a formidable challenge.
Uremic Solutes Associated With Adverse Clinical Outcomes
One approach to prioritizing among numerous uremic solutes is to identify which solutes predict adverse clinical outcomes. In general, these epidemiological studies comprise dialysis cohorts with stored baseline specimens and longitudinal follow-up of clinical outcomes. Some issues unique to dialysis patients include the timing of sampling in relation to dialysis (day of the week; pre- or posthemodialysis session), frequent administration of heparin (which activates lipoprotein lipase, releasing free fatty acids that can displace bound uremic solutes and increase their free concentrations), and dialysis facility-level factors that can introduce confounding (facilities with poor sample-handling techniques may have deficiencies in other process measures that cause poor outcomes). In addition, CVD is common in dialysis patients but may be undiagnosed or not captured at baseline, such that new CVD “events” may simply represent preexisting disease. In longitudinal follow-up, cause-specific event rates may be inaccurate, particularly among the substantial fraction of individuals who succumb to sudden death.
Notwithstanding these challenges, several studies have sought to identify solutes associated with CVD outcomes or mortality in ESRD (Table 18.4). Elevated levels of p-cresol sulfate, indoxyl sulfate, ADMA, and TMAO have been associated with these outcomes in some, but not all, studies performed to date. Typically, only one or a handful of molecules were examined in any given study, making it difficult to compare the relative strength of associations. Metabolomic approaches enable simultaneous measurement of many solutes. One study examined the associations of 165 metabolites with 1-year cardiovascular mortality among 500 incident dialysis patients. Long-chain acylcarnitines were significantly associated with death, but there was no association for the 24 uremic solutes measured, including p-cresol sulfate, indoxyl sulfate, ADMA, and TMAO. The different results across these studies may relate to whether free or total solute levels were measured, how outcomes were defined, differences between incident and prevalent dialysis patients, and differences in duration of follow-up.