Hemodialysis

Hemodialysis (HD) sustains life for more than 2.5 million patients throughout the world. Without it, most would die within a few weeks, thus highlighting the need among caregivers to gain an in-depth understanding of all aspects of HD, including its target, the uremic syndrome. This chapter reviews the history of dialysis; epidemiology of the HD patient population; physical, chemical, and clinical principles of HD; and complications associated with this treatment.

Routine use of HD to preserve life for patients with end-stage kidney disease (ESKD) has been performed only for the past 50 years. Several early pioneers laid the foundation. Thomas Graham (1805−1869), a Scottish professor of chemistry, invented the fundamental process of separating solutes in vitro using semipermeable membranes and coined the word “dialysis.” In 1916, John Jacob Abel dialyzed rabbits and dogs with a so-called vividiffusion device using celloidin membranes and a leech extract, hirudin, as an anticoagulant. He was the first to dialyze a living organism and to use the term “artificial kidney.” In 1924 in Germany, Georg Haas was the first to dialyze a human but was only marginally successful because of toxicity from his crude anticoagulant.

In 1944, Willem Kolff succeeded in using extracorporeal dialysis to support patients with acute kidney failure. His success was partly attributable to the invention of cellophane, the discovery of antibiotics, and the availability of heparin. Kolff is often called the “father of HD,” and his method became the standard for temporary replacement of kidney function in patients with acute kidney failure. However, HD could not support patients with prolonged or permanent loss of kidney function because of the difficulty with vascular access, subsequently solved by the creation of the arteriovenous (AV) fistula.

Although HD became technically feasible, it remained expensive and inefficient and was offered only to those who were free of comorbid conditions and felt to be most likely to contribute to society. Because HD was so successful in preventing death from kidney failure, Congress, after much debate, passed a law in 1972 approving public funding for dialysis and kidney transplantation, regardless of a patient’s means, education, employment, or comorbidities. This law paved the way for access to life-sustaining kidney replacement for all U.S. patients.

The Hemodialysis Population

Incidence and Prevalence

According to the U.S. Renal Data System (USRDS), 135,972 patients developed ESKD in the United States in 2021, with an unadjusted incidence rate of 399/million population. Of these, 3.1% were preemptively transplanted, 83.8% were started on in-center HD, 12.7% on peritoneal dialysis (PD), and only 0.4% on home HD. Fig. 62.1A shows that the adjusted treated incidence rate of ESKD in the United States has overall been declining since 2006, with the largest decrease of 5.5% in 2019 (see Fig. 62.1B).

Fig. 62.1

(A) Adjusted incidence rates of end-stage kidney disease (ESKD) in the United States with time.

(B) Annual percentage change in adjusted incidence rates of ESKD with time. (C) Prevalent counts (×1000) and adjusted prevalence of ESKD in the United States with time.

Modified from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

The prevalent count and prevalence rate of ESKD have overall risen in the United States, with a 38% increase in new patients between 2001 and 2019 (see Fig. 62.1C). However, in 2020 the prevalent count leveled off for the first time, and prevalence growth that year was almost −2%, a change attributed to the COVID-19 pandemic. At the end of 2021, 808,536 patients had ESKD, with an adjusted prevalence of 2219/million population. Of these patients, 31.8% had functioning transplants and the remainder were supported by dialysis, with 9.9% of patients on home modalities (8.3% on PD; 1.6% on home HD).

Both the prevalence and incidence of ESKD vary widely with age (Fig. 62.2), sex (Fig. 62.3), and race and ethnicity (Fig. 62.4), with a predilection for older age, males, and African-Americans; it should be noted that data suggest Native Hawaiians and Other Pacific Islanders experience the highest incidence of ESKD but are not included in USRDS data due to differences in collection and reporting of race for this group. Over the past decade, the incidence rate of ESKD was declining for people ≥65, African-Americans, Native Americans, and Hispanics until the COVID-19 pandemic, when people ≥75 and African-American individuals experienced increased incident ESKD (see Figs. 62.2A and 62.4A). In contrast, the prevalence of ESKD was overall rising, suggesting improved survival for those on dialysis, until 2020, when it decreased for all groups.

Fig. 62.2

(A) Incidence and (B) prevalence of end-stage kidney disease (ESKD) (rate/million by year) with age from 2012 to 2021.

Modified from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

Fig. 62.3

Incidence and prevalence of end-stage kidney disease (ESKD) by sex in 2020.

Modified from U.S. Renal Data System. 2022 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2022.

Fig. 62.4

(A) Incidence and (B) prevalence of end-stage kidney disease (ESKD) with race and ethnicity (rate/million by year).

Black, Black or African American; NA, Native American.

Modified from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

Worldwide in 2021, the Jalisco and Aguascalientes states in Mexico, Taiwan, and Brunei Darussalam had the highest incidence rates for ESKD at 603, 522, and 507 per million population (pmp), respectively, followed by the United States (410 pmp), Singapore (380 pmp), and Indonesia (314 pmp). Jalisco experienced the greatest rise in incidence of treated ESKD with 18.7 pmp average annual increase between 2011 and 2021, followed by the Republic of Korea (18.6 pmp), Indonesia (18.3 pmp), Taiwan (10.4 pmp), Singapore (9.6 pmp), and Greece (7.6 pmp). In contrast, average yearly ESKD incidence rates decreased in a handful of countries, most pronounced in Italy (−12.1 pmp), Serbia (−7.2 pmp), and Turkey (−3.6 pmp) from 2011 to 2021. The prevalence of ESKD in 2021 was greatest in Taiwan (3839 pmp), followed by Singapore (2577 pmp) and the United States (2436 pmp). However, the worldwide prevalence of ESKD varies greatly, with the lowest rates reported in Bangladesh, South Africa, El Salvador, Italy, Belarus, and Montenegro at <500 pmp. The largest proportionate prevalence rate increase between 2011 and 2021 was seen in Indonesia (about 12-fold), and it more than doubled in the Republic of Korea and Russia. It should be noted that 2021 data were not available for some countries. The broad ranges of reported ESKD prevalence likely result from variable access to treatment and to preventive and maintenance health care.

Causes of End-Stage Kidney Disease

The causes of ESKD in the United States are listed in Table 62.1. Since 1980, the percentage of incident ESKD attributable to diabetic kidney disease has increased from almost 0% to nearly 46% in 2020, with an incidence rate of ESKD from diabetes mellitus (DM) of 181 pmp in that year (Fig. 62.5). This is primarily due to increased acceptance of patients with DM into dialysis programs, but with a potential contribution from improved survival of these patients so that they live long enough to develop ESKD. Diabetes remains the most common cause of ESKD in the United States and in many other countries, also exceeding 45% of incident cases in Brunei Darussalam, Singapore, Malaysia, Hong Kong, Taiwan, Colombia, and Jalisco (Mexico). Remarkably, DM accounted for <15% of new ESKD cases in South Africa, Aguascalientes (Mexico), Italy, El Salvador, and Romania in 2021.

Table 62.1

Causes of End-Stage Kidney Disease in Incident and Prevalent Patients

Data from U.S. Renal Data System. 2022 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2022.

Primary Kidney Disease | Incident No. | Incident % of Total | Prevalent No. | Prevalent % of Total
Diabetes mellitus | 59,474 | 45.6 | 309,030 | 38.3
Hypertension | 37,168 | 28.5 | 213,824 | 26.5
Glomerulonephritis | 8395 | 6.4 | 117,603 | 14.6
Cystic kidney disease | 3397 | 2.6 | 41,042 | 5.1
Other urologic | 1679 | 1.3 | 17,997 | 2.2
Other/Unknown cause | 20,409 | 15.6 | 108,424 | 13.4
Total | 130,522 | 100 | 807,920 | 100
Fig. 62.5

Causes of end-stage kidney disease (ESKD) in the United States.

Modified from U.S. Renal Data System. 2022 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2022.

Mortality

The survival of patients undergoing HD in the United States was slowly improving for a decade, despite increasing comorbidities, with the adjusted mortality rate for prevalent HD patients down by almost 13% from 2010 to 2019. However, in 2020 mortality suddenly increased by 17% in a single year (Fig. 62.6). This is thought to be due to the disproportionate effects of COVID-19 on patients with chronic kidney disease (CKD), who experienced double the rate of COVID-19 hospitalizations in 2020 compared with patients without CKD; COVID-19 was the third-highest cause of death for patients receiving HD that year when the cause of death was known or reported. Patients who started HD in 2017 have an 81% 1-year survival, 70% 2-year survival, and 40% 5-year survival (Fig. 62.7). The 5-year survival rates are nearly identical between the 2007 and 2017 cohorts, again likely due to the COVID-19 pandemic. Compared with age-matched persons without kidney disease, patients undergoing HD have a markedly reduced life expectancy (Fig. 62.8). At age 60, the average person in the United States can expect to live for about 20 more years, but the median remaining years of life for a 60-year-old patient on HD in 2021 is only about 5 years. Remarkably, the mortality of patients with ESKD undergoing dialysis is higher than that of patients with cancer, heart failure, or stroke (see Fig. 62.8B).

Fig. 62.6

Mortality rates for patients on dialysis.

Modified from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

Fig. 62.7

Probability of survival for hemodialysis patients by cohort year.

Modified from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

Fig. 62.8

(A) Life expectancy and (B) adjusted all-cause mortality rates per 1000 person years for end-stage kidney disease (ESKD) patients in 2021.

CVA , Cerebrovascular accident; DM , diabetes mellitus; HF , heart failure; MI , myocardial infarction; RT , renal transplant; TIA , transient ischemic attack.

Modified from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

Causes of death for patients with ESKD in 2021 are listed in Table 62.2. Deaths from cardiovascular disease (CVD) accounted for 38%, with 32% from arrhythmia or cardiac arrest and only 2% from myocardial infarction (MI). COVID-19 accounted for 7% of deaths, with another 6% from other infections. Voluntary withdrawal from dialysis accounted for another 12% of deaths. Patients who withdraw from dialysis tend to be older, female, and white, and have higher rates of medical events and greater morbidity in the period preceding withdrawal. A nonuniform approach to advance care planning, with insufficient exploration of goals of care before initiating dialysis in those with multiple comorbidities and poorer quality of life, may contribute to the rising incidence of voluntary withdrawal.

Table 62.2

Causes of Death for Prevalent Hemodialysis Patients (2021)

Data from U.S. Renal Data System. 2023 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2023.

Cause of Death % of Total
Cardiovascular disease 38%
  • Arrhythmia, cardiac arrest 32%
  • Myocardial infarction 2%
  • Other cardiovascular 4%
Withdrawal from dialysis 12%
COVID-19 7%
Septicemia or other infection 6%
Malignancy 2%
Other causes 8%
Missing or unknown causes 27%

Transition from Chronic Kidney Disease, Stage 5

The high mortality of patients undergoing HD results in part from many associated comorbid conditions, including CVD, DM, and hypertension (HTN), which take a toll on patients with CKD even before they develop ESKD and require dialysis. One large U.K. cohort of patients with CKD stage G3 or higher experienced six times higher cardiovascular (CV) mortality and four times higher non-CV mortality than those without CKD. A U.S. observational study showed that patients with advanced CKD hospitalized with an acute MI experienced twice the rates of in-hospital mortality and heart failure compared with those without CKD. In 2021, patients with CKD stage G3 and DM or CVD experienced more hospitalizations than CKD stage G3 patients without these comorbidities—33% greater if DM, 151% greater if CVD, and 269% greater if both. Reducing comorbidities is therefore critical in caring for patients with CKD.

In addition, replacing hormone deficiencies such as erythropoietin and activated vitamin D, minimizing mineral bone metabolism disorders, and preventing malnutrition are pivotal in optimizing health and well-being of patients with advancing CKD. Another critical element in preparing patients with CKD for ESKD care is preemptive planning for potential transplantation, as well as for permanent access to support dialysis, beginning when the estimated glomerular filtration rate (eGFR) has declined to 20 to 25 mL/min/1.73 m².

Psychologic support is another important but often overlooked and poorly understood part of caring for a patient on HD. Depression, disabling fatigue, severe restless legs syndrome, insomnia, anxiety, and prolonged recovery time after dialysis are common in patients undergoing HD. These conditions are associated with poorer quality of life, malnutrition, low level of physical function, presence of inflammation, hospitalization, and death. Unfortunately, little is known about the epidemiology, pathogenesis, and effective treatment of these factors. Diagnosing depression may be difficult in patients on dialysis because some of the symptoms used to make the diagnosis may be present due to uremia. Once the diagnosis is made, patients and providers may be resistant to treatment because of increased pill burden, potential toxicity of medications, and lack of mental health resources. Study results are mixed on the efficacy of selective serotonin reuptake inhibitors (SSRIs), intensive HD, and cognitive-behavioral therapy, likely because many studies are small and/or uncontrolled.

Because of the multifaceted care required before starting dialysis, timely referral to a nephrologist is essential. Studies have documented higher hospitalization rates, more symptoms of depression, worse anemia and biochemical disturbances, lesser use of PD, more use of dialysis catheters, and greater mortality when nephrology referral is delayed, but some studies have found no difference in mortality risk and in ESKD risk. Potential confounders may be factors underlying late referral, such as patient nonadherence, multiple comorbidities, advanced age, and rapid decline in kidney function. The widely varying definitions of early referral, ranging from 1 to 6 months before starting dialysis to as early as CKD stage G3 (eGFR <60 mL/min/1.73 m²), may have also contributed to the disparate findings. Studies of patients with non–dialysis-dependent CKD followed by nephrologists have suggested that in patients with CKD stage G3a (eGFR 45–60 mL/min/1.73 m²), death occurs more frequently than ESKD does, whereas the opposite is true for patients with CKD stage G5 (eGFR <15 mL/min/1.73 m²).

The competing risks of death and ESKD reach parity at CKD stage G3b (eGFR 30–45 mL/min/1.73 m²) to G4 (eGFR 15–30 mL/min/1.73 m²). In patients managed only by primary care physicians, the mortality rate for patients with CKD stage G3a is comparable with that for earlier CKD stages and rises progressively with more advanced stages of CKD. Referral to nephrologists at CKD stage G3a or G3b was not associated with a change in the risk for death and/or progression to ESKD, whereas referral at CKD stage G4 was associated with a lower mortality risk but not with the risk of progression to ESKD. It is unclear why nephrology care was not associated with the risk for ESKD, but these findings have suggested that the most appropriate referral to nephrology for established CKD may be at stage G3b or G4.
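The eGFR cut points that define the CKD G stages discussed above can be collected into a small helper. This is an illustrative sketch only (the function name and boundary-handling convention are assumptions, not part of the source); the cut points follow the KDIGO staging ranges quoted in the text.

```python
def ckd_g_stage(egfr):
    """Return the KDIGO CKD G stage for an eGFR in mL/min/1.73 m^2.

    Cut points match those quoted in the text (G3a 45-60, G3b 30-45,
    G4 15-30, G5 <15). Values falling exactly on a boundary are
    assigned to the less severe stage, a convention chosen for this
    sketch only.
    """
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"


# For example, an eGFR of 40 falls in stage G3b (30-45), the range
# where the competing risks of death and ESKD approach parity.
```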

Current clinical practice guidelines have suggested that dialysis be initiated when patients become symptomatic from uremia, often occurring at an eGFR of 5–10 mL/min/1.73 m². In some patients, volume overload and hyperkalemia not responsive to conservative medical management and/or unexplained progressive decline in nutritional status despite aggressive dietary intervention may dictate earlier initiation of dialysis. The goal for patients with kidney failure is a smooth transition from CKD to ESKD, avoiding the complications of overt uremia. The results of several studies have supported these guidelines. The IDEAL (Initiating Dialysis Early and Late) study, the only published randomized controlled trial (RCT) on this topic, reported comparable survival and clinical outcomes in patients randomized to early start (eGFR >10 mL/min/1.73 m² by the Cockcroft-Gault equation) versus late start of dialysis (eGFR 5–7 mL/min/1.73 m²). However, 75% of the patients randomized to a late start of HD actually initiated when the eGFR was above 7 mL/min/1.73 m² because of intervening symptoms attributed to uremia, volume overload, or nutritional decline. Though a potential weakness of the IDEAL study was the use of the Cockcroft-Gault equation to estimate the GFR, a secondary analysis of trial data using the MDRD (Modification of Diet in Renal Disease) and CKD-EPI (Chronic Kidney Disease–Epidemiology) formulas also demonstrated no significant effect of the timing of dialysis initiation on survival. Earlier initiation of dialysis is not cost-effective and may be harmful, possibly because of an accelerated loss of residual kidney function (RKF), more use of dialysis catheters, myocardial stunning, depression, and provider inexperience at a time when lifesaving benefits from dialysis are low. Current guidelines have suggested a patient-centered approach in determining when to initiate dialysis, balancing the prevention of uremic complications with quality of life and patient preferences.
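The Cockcroft-Gault equation used in the IDEAL trial is simple to compute. The sketch below (function name and example values are illustrative assumptions) also notes its main caveat: it estimates creatinine clearance in mL/min rather than a GFR indexed to 1.73 m² of body surface area.

```python
def cockcroft_gault(age_years, weight_kg, serum_cr_mg_dl, female):
    """Cockcroft-Gault estimate of creatinine clearance (mL/min).

    CrCl = (140 - age) * weight / (72 * SCr), multiplied by 0.85 for
    women. Note that this estimates creatinine clearance, not GFR,
    and is not normalized to 1.73 m^2 of body surface area -- one
    reason the IDEAL investigators re-analyzed their data with the
    MDRD and CKD-EPI formulas.
    """
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl


# A hypothetical 60-year-old, 70-kg man with serum creatinine 8.0 mg/dL:
# (140 - 60) * 70 / (72 * 8.0) is roughly 9.7 mL/min, within the
# trial's "late start" range.
```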

Vascular Access

A trouble-free and reliable method of accessing the vascular space is paramount for successful long-term application of HD. The process begins with thoughtful and timely access planning, well before HD is required. The arteriovenous fistula (AVF) remains the best long-term HD access option. However, synthetic arteriovenous grafts (AVGs) and even central venous catheters (CVCs) have important roles as vascular access options in the diverse HD population. Even well-established vascular accesses need routine monitoring and maintenance. Troubleshooting access problems is a critical task that is best managed by a multidisciplinary team, which traditionally has included nephrologists, dialysis nurses, surgeons, and interventional radiologists. Interventional nephrology has grown as a procedurally oriented subspecialty and allows nephrologists, who are often most attuned to the needs of the patient, to take the reins of vascular access management and intervention.

Background

The success of long-term HD in managing ESKD has hinged on innovations in vascular access devices and surgical techniques. The Quinton-Scribner shunt, an external constant-flow arteriovenous (AV) conduit that could be repeatedly accessed, was revolutionary: despite limitations related to thrombosis, infection, and eventual vascular damage, it allowed HD to be performed over a relatively longer period of time. In 1966, Brescia and colleagues, inspired by the large veins that develop in patients with traumatic AVFs, published the use of a surgically created endogenous AVF, revolutionizing HD vascular access.

A suitable vein must be in proximity to an appropriate artery for creation of an AVF. With vascular disease being a common comorbidity in ESKD, some patients are not candidates for an AVF and must rely on an artificial conduit, an AVG, for a durable HD access. Vascular conduits made from expanded polytetrafluoroethylene (ePTFE, commonly called Gore-Tex), self-sealing polyurethane conduits, biologically derived conduits, and even hybrid graft/catheter devices have all expanded AV access options for patients whose natural veins are not suitable for an AVF. While an AVG traditionally does not provide the same long-term success as a Brescia-Cimino or other autogenous AVF with respect to duration of function and incidence of complications (discussed later), studies using more contemporary patient cohorts have indicated that the gap in key clinical outcomes between AVFs and AVGs may not be as wide as previously described. Additionally, with higher primary patency rates for AVGs, patients already on HD may shorten their time using a CVC for HD.

Wide-bore catheters placed in large, central veins provide an additional means of vascular access for HD. Single-lumen and then double-lumen CVCs led to the use of these devices to support patients on HD who lacked AV access. Advances in material science with more flexible plastics and the advent of subcutaneously tunneled cuffed catheters have led to the widespread use of “permanent” catheters, offering a longevity of use that makes them practical. However, CVCs are sometimes used for longer periods of time than necessary and in some cases in lieu of an AVF or AVG. Almost 85% of patients in the United States started HD with a CVC in 2021, and even among patients with predialysis nephrology care, the rate of CVC use as the first HD access remains unacceptably high. By comparison, other countries where HD is prevalent have much lower incident CVC rates. Nonetheless, there will always be patients undergoing HD in whom a CVC will be the appropriate or only feasible long-term vascular access.

Before any referral for surgical placement of an AV shunt, clinicians should have a thorough discussion with the patient about dialysis modalities, life expectancy on dialysis, and the option of maximal medical therapy without dialysis. If HD access is deemed appropriate, a careful plan should be laid out. It should begin with protecting existing veins. Nephrologists should champion a “preserve the vein” program in patients with advanced CKD likely to accept HD and in those already on HD. The nondominant arm, using as distal a target vein as feasible, is often the preferred initial site for an AV shunt. Choosing distal veins preserves more proximal vessels for future HD accesses. Vein preservation should include avoidance of peripherally inserted central catheters (PICCs), which are inserted through the cephalic or basilic vein near the antecubital fossa and advanced into the central venous system. Used to facilitate medication infusions or blood draws, PICCs are associated with a high rate of venous stenosis, and loss of these outflow veins prohibits the use of that arm for future HD vascular access. Traditional cardiac pacemakers and implantable defibrillators that require transvenous wire leads running through the left subclavian and innominate vein may also lead to HD vascular access issues by increasing the risk of stenoses within these vessels. Leadless pacemakers such as the Micra device are now available and should be considered in patients who are on or may require HD.

An ideal HD vascular access possesses the characteristics shown in Table 62.3. As no single type of vascular access fulfills all of these desired requirements, vascular access is the most challenging part of managing long-term HD. In general, compared with AVGs, a working AVF will have a higher patency rate at 1 year. However, AVFs have a higher primary failure rate than AVGs. Vascular access-associated infections are lowest for AVFs and highest for CVCs. Overall health care costs are also associated with the type of HD vascular access. U.S. data show that the annual cost of care is highest in HD patients with a CVC, though direct, access-only costs are highest for those with an AVG. Finally, overall survival is highest in those with an AVF versus those with a CVC for HD access. However, many of the above observations are confounded by indication. Even differences in mortality associated with access type are in part explained by patient factors. Therefore the decision of which HD access is best requires a patient-centered approach—longevity of the AV access must be balanced against anticipated patient survival. Access maturation rates and maturation time must be balanced against the risks of a longer time using a CVC. Patient expectations must be weighed against the procedural morbidity and the number and frequency of secondary procedures that many AV accesses require.

Table 62.3

Characteristics of Ideal Hemodialysis Vascular Access Compared With Commonly Available Types

Desired Characteristic Autogenous AV Fistula AV Graft Central Venous Catheter
Good primary patency rate ✓✓ ✓✓✓
Instant usability ✓✓ ✓✓✓
Long survival ✓✓✓ ✓✓
Low thrombosis rate ✓✓✓ ✓✓
Low infection rate ✓✓✓ ✓✓
High blood flow rate on hemodialysis ✓✓✓ ✓✓✓
Patient comfort ✓✓✓
Patient bathing/hygiene ✓✓✓ ✓✓✓
Minimizes needles ✓✓✓
Minimal cosmetic effect ✓✓ ✓✓

AV, Arteriovenous.

Types of Vascular Access

Arteriovenous Fistulas

This preferred type of vascular access is created by connecting a vein to an artery, which requires that the two vessels be in proximity to each other. Fistula creation is most commonly a surgical procedure that can often be done with local or regional block anesthesia. Both the artery and vein must have adequate luminal diameter for the procedure to be successful, though other complex, poorly understood physiologic requirements must also be met for AVF maturation. In theory, the increased arterial pressure and flow into the venous system lead to compensatory changes in the vascular system, including dilation and elongation of the vein, thickening of the venous walls, and augmented artery-like flow rates through the fistula and outflow vein. When these venous changes occur (the term “arterialized” is often used), an AVF is deemed mature (ready to use) and should be easily accessible by physical examination. It may take 1 to 2 months for an AVF to mature enough for use; however, there should be early examination findings within the first few weeks to indicate progression. Functionally, a mature AVF should provide a robust blood flow (400–500 mL/min) that is equal to or greater than the anticipated HD blood flow rate (Qb), with enough distance between the two needle cannulation sites so that recirculation—dialyzed blood from the venous return needle pulled back into the arterial needle—does not occur. If the HD prescription calls for a Qb of only 200 to 300 mL/min, as is typical in Japan, then an AVF with a lower flow rate would be acceptable. The “rule of 6” serves as an easy way to remember physical examination guidelines to determine whether an AVF is ready for use: (1) ideally 6 mm in diameter; (2) 6 cm of overall needle-accessible length; and (3) no more than 6 mm below the skin surface.
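Access recirculation, mentioned above, is conventionally quantified with a urea-based three-sample method. The formula is the standard clinical one, though it is not given in the text, and the names below are illustrative:

```python
def access_recirculation_pct(systemic_bun, arterial_bun, venous_bun):
    """Percent access recirculation from the classic urea method.

    R = (S - A) / (S - V) * 100, where S is the systemic (peripheral)
    BUN, A the arterial-line BUN drawn at the dialyzer inlet, and V
    the venous-line BUN at the dialyzer outlet. R is 0% when the
    arterial-line sample matches systemic blood (A == S), i.e., no
    dialyzed blood is being pulled back into the arterial needle.
    """
    return (systemic_bun - arterial_bun) / (systemic_bun - venous_bun) * 100.0


# Example: systemic BUN 70, arterial-line 60, venous-line 20 mg/dL
# gives (70 - 60) / (70 - 20) * 100 = 20% recirculation.
```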

The original procedure described by Brescia and colleagues was a side-to-side (side of vein onto side of artery) AVF using the radial artery and the cephalic vein at the wrist; it has become the preferred access when these vessels are adequate. However, patients starting dialysis are generally older, with more comorbidities that contribute to poor vascular health. Thus upper arm veins provide alternatives when distal vessels are felt to be inadequate in size or quality or when a distal AVF has failed. The cephalic and basilic veins are considered superficial veins of the arm and are often used in AVF creation. The basilic vein courses up the arm and perforates the deep fascia to join the brachial vein somewhere between the midbiceps and axilla. The basilic vein AVF often uses the brachial artery as its inflow and typically requires an additional transposition procedure (moving the vessel out of its native bed and location) because of its awkward location for needle cannulation along the medial side of the biceps. The brachiobasilic fistula can be performed in stages, with anastomosis followed by a transposition surgery after the vein has matured. This procedure appears to have advantages over an AVG in terms of longevity. The basilic vein has also been transposed to the forearm with successful outcomes. Alternatives to using the cephalic or basilic vein as outflow for an AVF are the brachial vein (considered a deep vein of the arm) and the median antecubital vein. The former is anastomosed to the brachial artery and requires a second procedure to superficialize and transpose, whereas the latter (variants of the Gracz fistula) has potential dual outflow through both the cephalic and basilic veins. These AVFs have advantages and disadvantages that are beyond the scope of this chapter.

In upper arm AVFs, end-to-side anastomoses are most often used. Studies suggest similar patency rates for side-to-side versus end-to-side anastomoses but possibly a higher incidence of arterial steal phenomenon and venous hypertension in the arm with the side-to-side technique. Computational fluid dynamics suggests that the steeper (sometimes near 90-degree) angle of the anastomosis seen in an end-to-side AVF results in turbulence that may lead to undesired remodeling of the vein. In native veins, the higher turbulence from steeply angled connections predisposes to stenosis. For example, the cephalic arch at the confluence of the cephalic vein and the axillary vein is a common location for recurrent stenosis requiring intervention in patients with an AVF or AVG with outflow through the cephalic vein (Fig. 62.9). Typical vascular anatomy of the arm and the locations of commonly placed AVFs are shown in Figs. 62.10A and 62.10B.

Fig. 62.9

Angiography of a cephalic arch stenosis (white arrow) that developed in a patient with a left upper arm AV fistula.

The high blood flow rate at a location where the vein turns 90 degrees produces turbulent flow, which promotes intimal hyperplasia. Lesions like this need repeated angioplasty and sometimes benefit from stent placement or use of drug-eluting balloons.

Photo courtesy of Dr. Suresh Appasamy, Interventional Nephrology, University of California, Davis.

Fig. 62.10

(A) Vasculature of the right upper limb outlining superficial and deep veins and major arteries.

(B) Location of anastomoses for typical wrist (radiocephalic) and upper arm (brachiocephalic and brachiobasilic) arteriovenous (AV) fistulae. (C) Location of typical AV grafts.

Medical device and procedural advancements have offered another avenue for creating an AVF. These percutaneously created AVFs use novel devices placed in the appropriate vessels under ultrasound and fluoroscopic guidance to connect an artery and an adjacent vein with a short pulse of radiofrequency energy. The ulnar or radial artery and its adjacent veins are the usual target vessels for this type of AVF. This technique potentially allows interventional radiologists and nephrologists to participate in AVF creation. Currently, there are limited data on outcomes, either short or long term. However, it is apparent that patient selection for percutaneous AVF is critical to success. Early reports suggest that additional follow-up procedures, such as angioplasty, are often needed to achieve maturation.

Despite the advantages of AVFs, the prevalence of AVF use in the United States has remained behind that of other countries. The primary failure rate of AVFs is high in the United States, around 25% to 37% in observational studies. To lower this rate, a thorough physical examination and preoperative ultrasonographic imaging of the vascular anatomy are used in the evaluation and planning of fistula placement. Commonly termed “vein mapping,” this study should interrogate both veins and arteries. Lower limits for artery and vein internal diameters are 2 mm and 2 to 3 mm, respectively. However, even when vessel size appears appropriate, the AVF may still fail to mature. Intimal hyperplasia not readily seen with imaging and arterial lesions that limit augmentation of flow after AVF creation are likely additional factors that lead to a nonmaturing AVF. Surgical technique is also a well-recognized variable in AVF outcomes. Data from the Dialysis Outcomes and Practice Patterns Study (DOPPS) have shown a marked difference in the training of surgeons in Europe compared with the United States. The much greater volume of procedures completed by European surgeons in training may lead to more expertise and comfort in performing AVF procedures.

To increase AVF use in HD in the United States, a “fistula first” initiative began in 2006, aimed at educating providers and patients on the advantages of this form of HD access. The dogmatic placement of AVFs in all patients, however, has disadvantages. A meta-analysis of more than 12,000 patients found the failure rate for AVFs increasing over time, raising concern about overly aggressive “fistula-first” approaches. The major disadvantage of attempting an AVF in all patients, when primary failure rates are high and additional procedures are needed to reach AVF maturation (assisted patency), is the longer time with a CVC, during which higher mortality is observed. Therefore patient planning for HD access should include consideration of the next step if an AVF is deemed unlikely to succeed from the onset or if there is primary failure of an AVF attempt.

Arteriovenous Grafts

When an AVF has failed or is unlikely to succeed upon initial access planning, there are two common alternatives to consider: 1. attempt an AVF at another site (often in the upper arm), or 2. place an artificial AVG. Vascular access using ePTFE conduits was the most common type of AV vascular access in the United States in the 1980s and 1990s, partly due to ease of placement, the short time required between placement and use, and perhaps factors related to reimbursement. An AVG still requires an arterial inflow vessel and a patent venous outflow. Grafts can be placed as a straight conduit or as a loop between the vessels. Common vascular connections include radial artery to basilic or cephalic vein, brachial artery to basilic or cephalic vein, and brachial artery to brachial vein ( Fig. 62.10C ). Cannulation for HD is into the graft material itself, so dilation and remodeling of the vein are not needed. Thus AVGs have distinct advantages: They do not need to mature in the same manner as AVFs, so they can be used earlier, and they have a lower primary failure rate than AVFs. Although the low primary failure rate of AVGs is an early advantage, the primary patency rate at 12 months is only around 50% (i.e., half of AVGs will have failed or will need intervention to remain patent at 1 year). When considering the assisted patency (or secondary patency) rate of AVGs, there is greater parity with AVFs. Another factor to consider is the greater risk of access-related infection with an AVG compared with an AVF, though access-related infection remains highest with CVC use. Grafts also allow surgeons more possible sites for HD access, including the femoral vessels and even the chest wall using subclavian vessels. Placing an AVG in the leg may appear extreme, but an observational report suggests that a thigh AVG may still be better than a CVC.
Regardless of location, most ePTFE grafts require about 4 weeks of healing after placement before cannulation is recommended, to allow for fibrosis or incorporation of the graft into the surrounding soft tissue (so that a surrounding hematoma does not occur upon needle cannulation) and for endothelialization of the intraluminal surface.

Other AVG materials include alternative synthetic materials, modified biological materials, and tissue-engineered products. An example of an alternative plastic for AVGs is polyurethane, known for its self-sealing property when punctured, which obviates the need for graft incorporation and could allow almost immediate cannulation after surgical placement. This graft may play a role in limiting or avoiding the use of a CVC. Although seemingly an ideal material, polyurethane AVGs have had mixed clinical results, with some reports of increased infections and difficulty with cannulation and others of equivalency to traditional grafts. A commercially available modified biological AVG is the Artegraft, a collagen graft made from acellular bovine carotid artery, whose maker touts earlier cannulation, lower thrombosis, and similar primary and secondary patency rates compared with traditional ePTFE grafts. Tissue-engineered AVGs coming onto the market are created from human cells stimulated to develop collagen and other structures of vascular tissue, grown on a scaffold that eventually dissolves. The cells are then washed away, leaving an acellular vascular graft. Possible benefits include lower risk of thrombosis, less infection, and lower immunogenicity. Outcome studies of these engineered grafts are still limited, and the overall cost-to-benefit balance of using AVGs from this technology is still unclear.

Even with an AVG as a vascular access option, sometimes the lack of a good venous outflow limits placement of an AV shunt. An available device for patients with limited venous outflow options uses a typical ePTFE graft at the point of the arterial anastomosis and transitions to a large-bore, single-lumen catheter at the venous outflow end. The catheter end is then tunneled up the arm and placed into the central venous circulation, much like a typical CVC, establishing a central venous drainage. This Hemodialysis Reliable Outflow (HeRO) device has a potential role in some patients as an AV access of last resort.

Possibly stimulated by areas of turbulent blood flow or changes in wall shear stress, a host of complex biological responses occur, particularly in the outflow vein distal to the AVG anastomosis. The resulting hyperplasia leads to a high incidence of stenosis in the vein immediately downstream from the anastomosis with the graft ( Fig. 62.11 ) and is the leading cause of graft thrombosis. This high incidence of stenosis is a rationale for monitoring AVG flow to allow preemptive intervention before thrombosis, though studies have not universally concluded that ultimate AVG survival benefits from monitoring. The inflammatory response and hyperplasia in AVGs have focused attention on devices and drugs that may prevent or reduce eventual stenosis. In addition to balloon angioplasty, stents are frequently used as a tool to prevent restenosis and maintain patency. Whether drug-eluting stents and drug-delivering angioplasty balloons will have a role in AVG survival has yet to be determined. Use of drug elution from the graft material itself is also under investigation in animals.

Fig. 62.11

Angiography of stenosis in the outflow vein of a left upper arm AV graft (gray arrow).

This typical lesion in patients with an AV graft is felt to be due to turbulent flow as blood moves from graft to vein. These areas of intimal hyperplasia generally respond to angioplasty but tend to recur, often requiring stenting or drug-eluting balloons to maintain long-term patency. There are also numerous intragraft stenoses.

Photo courtesy of Dr. Suresh Appasamy, Interventional Nephrology, University of California, Davis.

Starting HD with an AVG to avoid CVC time and converting to an AVF at the first sign of graft dysfunction combines the merits of the AVG (its lower primary failure rate) with those of the AVF (improved longevity and resistance to infection). By starting with an AVG, the outflow vein is subjected to the higher pressures and flows that an AVF would similarly produce, thus maturing the vein. When the patient encounters the first episode of AVG dysfunction, the outflow vein is converted to a traditional AVF by connecting the already “arterialized” outflow vein to an artery, taking the AVG out of the circuit. The new AVF may be ready for cannulation immediately or soon after creation.

Central Venous Catheters

Catheters have transformed the application of HD in both acute kidney failure and long-term ESKD settings. Technologic advances have moved CVC designs from large, single-lumen catheters to the present dual-lumen devices. With advances in plastics, CVCs are now more flexible, leading to less vascular trauma, yet are more resistant to degradation by cleansing solutions such as alcohols. Catheters are placed within the venous system and do not require arterial flow, so HD can be provided to those who otherwise would not be able to obtain or sustain an AV shunt. Most importantly, the development of the subcutaneous Dacron cuff allowed a portion of the catheter to be tunneled subcutaneously before exiting the skin, which provides for longer use and protection against infection. Tunneled CVCs have a twofold to threefold lower infection rate than nontunneled CVCs.

Catheters have been both a blessing and a curse. Without a doubt, many patients would face death from kidney failure without the availability of catheter technology for emergent HD access. However, the ease of placement, ability for immediate use, and general convenience for the patient (fear of needles is common) have led to overreliance on CVCs. The USRDS report analyzing 2021 data showed that even among patients with more than 12 months of nephrology care before starting HD, about 72% started with a CVC. Observational studies, such as those using data from DOPPS, attribute some, but not all, of the higher HD mortality in the United States compared with similar patients in Europe to the high prevalence of CVC use. It is also quite evident that health care is extremely fragmented in the United States, so predialysis planning is often difficult if not outright impossible.

Low blood flow rates (Qb) through the CVC are a common problem faced by dialysis personnel, resulting in a negative “arterial pressure” and a machine alarm that stops the blood pump. Catheter blood flow rates, even with some of the large-lumen catheters, reach a realistic maximum of 350 mL/min. In some countries this may be perfectly adequate, but in the United States, with shorter treatment times and larger patients, any decrease in blood flow may result in inadequate HD clearance. The use of intraluminal tissue plasminogen activator (tPA), 2 mg in one or both ports, is often the approach to a catheter with poor flow. This usually adds about 30 to 45 minutes to a patient’s treatment time, as the tPA is allowed to dwell before the CVC is tried again. The success rate of tPA varies and is probably influenced by patient and facility factors. A more chronic problem that can result in poor CVC flow is the development of a fibrin sheath that envelops the intravascular portion of the CVC. Fibrin sheaths are likely not affected by anticoagulants or antiplatelet agents and cannot be remedied with tPA. Treating a fibrin sheath generally involves stripping the tough scar tissue–like material off the CVC or removing the CVC and using a large angioplasty balloon to disrupt the sheath. Fibrin sheaths often recur.

To avoid thrombosis, catheter lumens are filled or locked with various substances, most commonly high-dose unfractionated heparin solutions (1000–10,000 units per mL) with enough volume to fill both lumens at the termination of HD. Many studies have shown that a fraction of the heparin lock leaks into the circulation and can rarely result in bleeding. Some studies have shown no difference in catheter patency between 1000 units per mL and 5000 units per mL heparin solutions. An alternative to locking with heparin is sodium citrate, whose calcium-chelating effect may theoretically decrease thrombosis. One study demonstrated that locking the catheter with tPA once a week instead of heparin, compared with heparin at all three weekly sessions, reduced both thrombosis and the incidence of bacteremia. However, implementing this strategy would be cost prohibitive. Interestingly, using a silicone catheter port cap (Tego) that does not require removal when attaching the HD blood lines and is flushed with saline was comparable with a more traditional catheter lock.

Catheter-related infection is a major complication of CVC use in HD, resulting in high morbidity, costly hospitalizations, and mortality. The frequency of bacteremia associated with HD CVCs has been estimated at 2 to 4 episodes per 1000 patient-catheter days, 10- to 20-fold higher than the rates estimated in patients with an AVF. These catheter-related infections can result in more complex infections such as osteomyelitis, endocarditis, or septic arthritis despite antibiotic therapy.
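To put this rate in clinical perspective, a simple back-of-the-envelope conversion (illustrative arithmetic only; the 2 to 4 episodes per 1000 catheter-day rates are taken from the text above) expresses the risk per catheter-year:

```python
# Back-of-envelope conversion of catheter-related bacteremia rates
# from episodes per 1000 patient-catheter days to episodes per
# catheter-year. The conversion itself is simple arithmetic.

DAYS_PER_YEAR = 365

def episodes_per_catheter_year(rate_per_1000_days: float) -> float:
    """Convert an infection rate per 1000 catheter-days to per catheter-year."""
    return rate_per_1000_days / 1000 * DAYS_PER_YEAR

low = episodes_per_catheter_year(2)   # ~0.73 episodes per catheter-year
high = episodes_per_catheter_year(4)  # ~1.46 episodes per catheter-year
print(f"Expected bacteremia episodes per catheter-year: {low:.2f} to {high:.2f}")
```

At these rates, a patient dialyzing through a CVC for a full year would be expected to experience roughly one episode of bacteremia.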

Patients with a catheter-related infection can present in various fashions; notably, ESKD patients are considered immunocompromised and may not have classic signs and symptoms of infection. Some patients simply feel poorly when they arrive for HD, while others present with tachycardia and hypotension. Fever in any HD patient with a CVC should prompt a workup with high suspicion for a catheter-related infection. Skin-associated bacteria are the most common cause of tunneled HD catheter infections, with gram-positive organisms accounting for 83% of isolates, including Staphylococcus epidermidis (48%) and Staphylococcus aureus (28%), in a large observational study. Since most CVCs used in HD patients outside of the hospital are tunneled, the point of entry for bacteria may be intraluminal during the uncapping of the ports, through touch contamination or via bacteria already on the port hubs. Therefore a “scrub-the-hub” protocol, using an antiseptic pad to vigorously remove residue and bacteria, is championed by most HD providers.

When a CVC infection is diagnosed, timely and appropriate antibiotic therapy, adequate duration of therapy, and determining whether the catheter requires removal or exchange are critical for eradicating the infection and preventing metastatic infection. The type of organism from blood cultures (we feel cultures drawn through the catheter ports and peripherally should ideally be part of the initial workup) dictates whether antibiotics alone will lead to full resolution; for example, S. epidermidis may clear with antibiotics alone. Other bacteria, such as S. aureus, generally require CVC exchange or removal with a “line holiday” before insertion of a new catheter. The combined use of antibiotic-containing catheter lock solutions with systemic antibiotics may increase the chance of preserving the existing CVC when removal is not an option. When the infection is isolated to the exit site and does not involve the tunnel, studies have suggested that, with antibiotics, CVC exchange can be successfully undertaken.

Given the high infection rate of CVCs, attempts at prevention have been studied, and a few prophylactic interventions show promise. The use of mupirocin ointment at the exit site may reduce the incidence of infection. Locking CVCs with antibiotic-containing solutions (such as gentamicin) after all HD treatments, or even just once weekly, may also help reduce the incidence of CVC-related infections. With the prophylactic use of antibiotics, selection of antibiotic-resistant bacteria is always a concern. Catheters impregnated with bacteria-inhibiting agents have been less promising in preventing infection. Since many CVC infections are theorized to arise from bacteria found on the hub of the catheter ports, the silicone cap that does not require removal at each HD treatment (Tego, mentioned earlier) and chlorhexidine-impregnated catheter hub caps (ClearGuard HD) are innovations needing more real-world data.

Even if the thrombotic and infectious complications could be solved, CVCs may not support the blood flow rates (Qb) needed to achieve an adequate HD dose in some patients. Additionally, long-term CVC use results in vascular damage that often leads to central vein stenosis and in situ thrombosis. In the end, CVCs must remain an interim solution or the access of last resort in the patient on long-term HD.

Maintenance of Vascular Access Function

After some form of AVF or AVG is in use, maintaining patency and adequate function is paramount; after all, it is the patient’s lifeline. Minimizing injury to a vessel or conduit that is repeatedly punctured by large-gauge needles is no small feat. Thus good cannulation technique is important for AV access longevity. One 9-month nonrandomized study compared the rope ladder technique, in which cannulation occurs along the entire length of the fistula, with the buttonhole technique, in which repeated cannulation occurs in the exact same spot. Unsuccessful cannulation occurred more frequently in the buttonhole group, but there were fewer complications, including hematoma and aneurysm formation, compared with the rope ladder group. More recent data have suggested that the buttonhole technique is associated with a higher rate of infection without evidence of improved fistula survival. At present, the buttonhole technique has fallen out of favor due to its higher rate of bacteremia but remains an important option for patients performing their own cannulation for home HD. Guidelines do not provide definitive recommendations on cannulation methodology. Other strategies to extend access life span include flow and pressure monitoring of AVFs and AVGs, looking for early signs of access dysfunction, and interventions to correct problems found during monitoring.

Monitoring and Surveillance

The rationale for an AV access monitoring protocol is based on a few assumptions: 1. that the natural history of AVF or AVG dysfunction and impending failure can be predicted by observable changes in access pressure or flow that occur beforehand, 2. that if such changes in pressure or flow are reliable and observed regularly, an intervention is available to prevent emergent loss of access function, and 3. that intervening on the problem will salvage the AV access and extend its lifespan.

A consistently applied program of AV access monitoring is recommended by most guidelines. The first step is a good physical examination of the AV shunt. This can be taught to dialysis technicians, nurses, and nephrologists. For example, if an AVF has a bounding pulsation with increasing aneurysm size (or does not flatten when the arm is raised above the head in a lower arm, radiocephalic fistula), then a venous outflow or central vein stenosis should be suspected. In some teaching interventional nephrology programs, the physical examination findings correlate well to angiographic findings. However, the reality is that most dialysis clinics run a tight schedule between patient shifts, and a thorough physical examination of the AV access is rarely performed.

Other hints to access dysfunction may come from information gathered during patient rounds. For example, if a decline in HD urea clearance cannot be readily explained, a poor access flow rate may be the cause. Also, if there is routine difficulty in cannulation or prolonged hemostasis time (particularly if bleeding from the AV shunt at home is reported), arterial inflow or venous outflow stenosis may be the culprits, respectively.

Access pressure can be monitored by noting the venous chamber pressure while the patient is on HD, measured either static (blood pump not running) or dynamic (blood pump running at a predetermined rate). To be a useful tool, AV access pressure monitoring must be done serially, using the same needle size, blood tubing set, and Qb (if dynamic venous pressure is used) each time. The change or trend in measurements over time is more important in predicting a stenosis than any absolute threshold value. Clinical studies suggest that AVGs may benefit from venous pressure monitoring, since most problems in AVGs occur in the native vein within the first few centimeters of the anastomosis. Conversely, arterial inflow problems in an AVG are less readily identified with venous pressure monitoring. Venous pressures are less useful in detecting AVF problems, since fistulae have multiple draining side veins and continue to remodel over time.
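As an illustration of trend-based (rather than threshold-based) surveillance, the sketch below flags a rising trend in serial dynamic venous pressures using a simple least-squares slope. The window, units, and the 5 mmHg-per-session threshold are hypothetical, chosen only to show the idea; they are not guideline values.

```python
# Hypothetical sketch: flag a rising trend in serial dynamic venous
# pressures (each measured at the same needle gauge, tubing set, and Qb).
# The slope threshold below is illustrative, not a guideline value.

def rising_trend(pressures: list[float], min_rise_per_session: float = 5.0) -> bool:
    """Least-squares slope across serial sessions; flag if it exceeds threshold."""
    n = len(pressures)
    if n < 3:
        return False  # too few sessions to call a trend
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(pressures) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, pressures)) \
            / sum((x - x_mean) ** 2 for x in xs)
    return slope > min_rise_per_session

stable = [120, 125, 118, 122, 121]    # mmHg, illustrative: no trend
climbing = [120, 132, 141, 155, 168]  # steadily rising: suspect outflow stenosis
print(rising_trend(stable), rising_trend(climbing))
```

The point of the sketch is that the stable series would never trip a fixed threshold check either, while the climbing series is flagged long before any single value looks alarming on its own.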

Access flow rates can also be measured while the patient is on HD. Shunt blood flow measurement using an ultrasound-based indicator dilution method (Transonic, Ithaca, NY) has proven to be quite accurate. Another method for measuring access flow uses dialysate-side changes in conductivity, the sodium-dialysance methodology built into some HD machines. Both methods are also capable of measuring recirculation. As with venous pressure monitoring, the benefit of flow monitoring for long-term access outcomes is not clear-cut. A nonrandomized report comparing three monitoring schemes in a single HD program demonstrated significant improvement in short-term AVG and AVF survival using blood flow monitoring versus dynamic pressure monitoring or no monitoring. In addition to access survival, there were fewer missed HD sessions and reduced overall patient costs when using the blood flow monitoring protocol. Some studies of routine flow monitoring found benefit in secondary patency in AVFs but not in AVGs. However, a review of surveillance trials did not definitively support routine monitoring of HD access. Therefore the role of access surveillance is a topic of ongoing research. Nonetheless, a formal monitoring program puts more care team “eyes” on the HD access.

Prophylactic Therapy for Access Maintenance

Access loss occurs from thrombosis and hyperplasia-related stenosis, but agents well known to prevent thrombosis in other vascular diseases have not consistently improved vascular access survival. A multicenter trial of clopidogrel plus aspirin in the Veterans Affairs Health System was discontinued because of bleeding in the treatment group. The Dialysis Access Consortium (DAC) Study Group supported RCTs in both AVFs and AVGs. For AVFs, the group reported that clopidogrel did not increase the number of usable fistulas. For AVGs, dipyridamole plus low-dose aspirin (compared with aspirin alone) started immediately after AVG placement did improve graft patency at 1 year but did not result in any longer-term benefit. Thus there are no clear guidelines on general systemic antiplatelet or anticoagulant use for access survival in patients with AVFs or AVGs.

The myointimal proliferation and resulting stenosis that occur, particularly with AVGs, have also been a focus of prevention, especially because this process may be the final common pathway of access failure. Strategies to prevent cell proliferation in AVGs have been adapted from studies of other vascular diseases, such as cardiac disease. For example, a retrospective study of angiotensin-converting enzyme (ACE) inhibitors in patients with AVGs suggested a reduced risk of graft loss compared with patients not receiving this class of medication. Interventions shown to reduce proliferation in other vascular diseases are being applied to HD vascular access. Brachytherapy has been used for some time; it is safe and shows some promise in reducing stenosis in AVGs. Far-infrared light therapy, which improves endothelial function, was shown in an RCT to improve patency rates of new AVFs when administered repeatedly and may become a future tool to improve AVG survival. The use of antiproliferative medications such as paclitaxel, delivered locally by drug-eluting stents or by medication-loaded angioplasty balloons, holds promise for AVG failures related to outflow vein hyperplasia, which accounts for a large proportion of lesions that lead to graft thrombosis.

Vascular Access Timing and Decision Making

With the emphasis on personalized care, not every patient with advancing CKD should start dialysis, and in those with limited life expectancy at the start of HD, placing an AVF or AVG before ESKD may lead to unnecessary morbidity. Efforts are better aimed at optimizing quality of life, even if it means a short time on HD with a CVC. In patients with better life expectancy who accept long-term HD, careful discussion, planning, and placement of HD vascular access would ideally be done with a nephrologist in a structured and orderly manner when eGFR falls below 20 mL/min/1.73 m2. Unfortunately, many patients starting HD in the United States have little or no pre-ESKD nephrology care. Additionally, GFR decline in CKD is nonlinear and often takes an unpredictable course, making access planning a challenge even for those under the care of a nephrologist.

Although registry data and retrospective studies of large databases have consistently found AVFs associated with the best outcomes, the correct access for an individual patient is more nuanced than placing an AVF in all patients. The decision is complex, as clinical outcomes and risk analysis must consider the morbidity associated with each access type, success rate of the access, and patient choice. A detailed model incorporating many of these variables provides insight into the decision-making process that must also take into account patient factors such as age, sex, comorbidities, and competing risk of death. Studies examining older adults on HD also found that a single-minded, fistula-first approach to vascular access may not be optimal.

General Principles of Hemodialysis: Physiology and Biomechanics

Native kidney solute removal involves filtration and secretion in highly complex, tightly regulated, metabolically active processes. By comparison, solute and fluid removal by HD remains crude, even with the technologic evolution of HD since its inception. HD works by diffusion, based on a transmembrane concentration gradient, and convection, based on a transmembrane hydrostatic pressure gradient ( Fig. 62.12 ). Selectivity of solute removal is achieved by the membrane pore size, which lets smaller molecules through and rejects larger ones. The solute removal rate depends not only on the concentration gradient and the characteristics of the membrane but also on the membrane surface area and the blood and dialysate flow rates on opposite sides of the membrane. In this section, we introduce basic concepts that provide a foundation for understanding the biomechanics of HD.

Fig. 62.12

Diffusion versus convection.

(A) Diffusion across a semipermeable membrane. The driving force for solute diffusion is the transmembrane concentration gradient. Small solutes with higher concentrations in the blood compartment, such as potassium, urea, and small uremic toxins, diffuse through the membrane into the dialysate compartment. Dialysis dissipates this concentration gradient (i.e., the molecular concentration gradient decreases with dialysis). Larger solutes and low-molecular-weight proteins, such as albumin, diffuse poorly across the semipermeable membrane. (B) Convection across a semipermeable membrane. The driving force for convection, commonly called ultrafiltration, is the transmembrane hydrostatic pressure. When applied to the blood compartment, solvent flows across the membrane into the dialysate compartment, bringing along solutes. For solutes with a sieving coefficient close to 1, there is no change in concentrations in the blood compartment with time.

From Meyer TW, Hostetter TH. Uremia. N Engl J Med. 2007;357(13):1316−1325.

Native Kidney Versus Artificial Kidney

Both native and artificial kidneys are excretory organs, and both use semipermeable membranes. The native kidney’s glomerular membrane, while more complex than that of an HD dialyzer, allows filtration of water-soluble solutes of a certain size and charge. Most large, soluble molecules found in the blood are the products of complex intracellular synthetic processes, may serve active signaling/regulatory processes, and are frequently bound to albumin or other transport proteins. Their loss by filtration through the kidneys is undesirable, so they are generally not well filtered. To a certain degree of success, the artificial membranes of dialyzers mimic this filtration function, with median pore sizes and distributions that limit loss of albumin and important larger solutes. However, the native kidney and dialyzer filtration barriers work in completely different ways. The natural kidney selectively filters by convection, with plasma water and the solutes in it pushed by pressure from the blood space into the urinary space. In contrast, the dialyzer removes solutes by simple diffusion, driven mostly by the concentration gradient of the substance between blood water and the dialysate.

In the native kidney, filtered small water-soluble solutes that are important to retain, like major electrolytes, bicarbonate, small proteins, and peptides, are efficiently reabsorbed by the proximal tubule, where they are either metabolized and their subunits reused or directly transported back into circulation. Dialyzers lack this vital function, so losses of some diffusible solutes (electrolytes, bicarbonate, glucose) are prevented by including the solutes in the dialysate, limiting or eliminating the gradient for diffusive loss.

For many small water-soluble substances, native kidney and dialyzer removal are affected by other important physiologic factors. Urea, for example, is freely filtered by the glomerulus but highly reabsorbed in the native kidney tubule. In dialyzers, there is no reabsorption of urea once it crosses the membrane, and the rapid equilibration of urea across red blood cell (RBC) membranes facilitates additional urea removal during HD. In contrast, creatinine is freely filtered, minimally reabsorbed, and actively secreted by tubules in the native kidney. In HD, creatinine also crosses the dialyzer membrane, but there is little additional transport across RBC membranes within the short time that blood transits through the dialyzer. Consequently, creatinine clearance by the native kidneys is higher, and by the dialyzer lower, than their respective urea clearances.

The native kidney also surpasses the dialyzer membrane in removing protein-bound substances. For example, hippurate (56% protein bound) is minimally cleared by HD, but in HD patients with residual kidney function (RKF), native kidney excretion of hippurate was 6.6 times the native kidney urea clearance, suggesting substantial active tubular secretion of the substance. Other metabolic functions, such as erythropoietin (EPO) production and 1α-hydroxylation of 25-hydroxyvitamin D, are unique to the native kidney. Despite the differing processes of small-solute elimination by the native kidney and the dialyzer, we can develop concepts and methods for measuring dialyzer clearance of small solutes and compare it with similar clearances by the native kidney. This allows us to quantify HD.

Clearance

HD removes small, water-soluble solutes and fluid. Early cellulose-based dialyzer membranes removed solutes with molecular weights <3000 Da. Urea, with a molecular weight of about 60 Da, was readily measured and observed to increase in patients with kidney failure. The observation that HD worked extremely well to reverse life-threatening “uremia” led to the inescapable conclusion that at least some of these small retention solutes were toxic. Early studies made it clear that urea itself was not excessively toxic, but its removal mirrored that of other, unnamed toxic substances. This association allows us to legitimately measure the HD removal, or clearance, of urea as a representative of the small-solute toxins of kidney failure.

The concept of clearance is pervasive in nephrology and dialysis discussions. Clearance in physiology is best defined as the volume of plasma from which a substance is completely removed in a defined time period (hence the units of clearance are volume per time). Clearance is recognized as the best measure of first-order processes such as diffusion and filtration (convective clearance), which are the primary processes of removal in HD and native kidney function, respectively. First-order removal processes are driven by the concentration of the solute, rendering the removal rate directly proportional to the plasma concentration. Clearance (K) is the proportionality constant, as shown in this equation:

K = removal rate / concentration (Eq. 1)

In a simple flowing system, the removal rate is the difference between the inflow concentration (Cin) and the outflow concentration (Cout) multiplied by the flow (Q):

Removal rate = (Cin − Cout) × Q (Eq. 2)

The extraction ratio (E) of a substance is the ratio of the concentration difference between dialyzer inflow and outflow to the inflow concentration:

E = (Cin − Cout) / Cin (Eq. 3)

Substituting Eqs. 2 and 3 into Eq. 1, we get:

K = Q × E (Eq. 4)

Thus in a system with a given, constant flow (Q), both clearance (K) and extraction ratio (E) are constant over time, despite marked changes in concentration of the solute in the system. This also implies that clearance (K) is directly related to blood flow (Q).
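These relationships can be illustrated with a short numeric sketch (Python); the flow and concentration values below are illustrative, not taken from the text:

```python
def extraction_ratio(c_in, c_out):
    """E = (Cin - Cout) / Cin (Eq. 3)."""
    return (c_in - c_out) / c_in

def clearance(q, c_in, c_out):
    """K = Q x E (Eq. 4), in the same units as the flow Q."""
    return q * extraction_ratio(c_in, c_out)

# Illustrative values: Qb = 300 mL/min, inlet BUN 100 mg/dL, outlet 30 mg/dL.
k = clearance(300.0, 100.0, 30.0)
print(round(k))   # 210 (mL/min)

# Halving both concentrations leaves E, and therefore K, unchanged:
assert clearance(300.0, 50.0, 15.0) == k
```

Note that K scales directly with Q, consistent with the statement that clearance is directly related to blood flow.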

Although clearance is independent of solute concentration, the converse is not true (i.e., concentrations depend on clearance, and solute concentrations are used to measure clearance). During a period of steady-state kinetics, in which generation equals removal, the concentration of a solute is inversely proportional to its clearance if its generation rate is fixed. Because dialysis is simpler than native kidney function and removes solutes primarily by diffusion, the calculation of clearance is nearly the same for all easily dialyzed substances if one assumes that these solutes are distributed in a single water space or “pool” in the patient. Generation rates of various solutes may differ, but if each is relatively constant from week to week (such as what we see in urea and creatinine generation), then the measured clearance of a representative solute can be used to reflect the effectiveness of the dialysis clearance of all easily dialyzed solutes.

This principle, which forms the basis of established standards for measuring and prescribing HD, has logical merit, but its applicability has been challenged and may require modification for solutes that are strongly sequestered in remote body compartments. In addition, after adjustment for body size and possibly sex, all patients appear to require the same weekly clearance. Furthermore, the dose requirement or need for HD does not seem to vary from time to time in the same (anuric) patient, provided a minimum threshold clearance is delivered during each treatment.

Clearance Versus Removal Rate

The clearance of a solute must be distinguished from its absolute removal rate. Clearance is best envisioned as a measure of removal expressed as a fraction of the remaining solute and is therefore independent of the concentration (see Eq. 1 in the prior section). Two substances may have the same clearance, but if one is present at half the concentration of the other, the removal rate will also be half. In practical terms, it is impossible to compare dialyzers by measuring removal rates alone because removal depends on the solute concentration. Measurement of clearance eliminates this requirement, allowing use of a single term to make valid comparisons among dialysis modalities and native kidney function.

Similarly, finding a lower concentration of a solute in a patient does not indicate that the clearance is higher; it may simply reflect a lower generation rate. Fig. 62.13 shows two theoretical patients, both with the same 36 L of total body water but different urea generation rates (40 and 30 g/hour). At a steady-state urea clearance of 20 mL/min, the patients have different urea removal rates and serum urea concentrations. If the clearance were to drop suddenly to 15 mL/min in both patients, each would reach a new, higher steady-state serum urea concentration (mg/mL). Once steady state is achieved again, the removal rate of urea simply reflects its generation rate in each patient. The same concepts apply to the intermittent clearance delivered by the dialyzer during HD.
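The steady-state arithmetic (C = G/K, from Eq. 1 with removal equal to generation) can be sketched briefly in Python, using the theoretical generation rates above:

```python
def steady_state_conc(g_mg_per_min, k_ml_per_min):
    """At steady state, removal equals generation, so C = G / K (from Eq. 1)."""
    return g_mg_per_min / k_ml_per_min   # concentration in mg/mL

# The two theoretical patients: urea generation 40 and 30 g/hour,
# clearance falling from 20 to 15 mL/min.
for g_g_per_hr in (40, 30):
    g = g_g_per_hr * 1000 / 60           # convert to mg/min
    c_before = steady_state_conc(g, 20)
    c_after = steady_state_conc(g, 15)
    print(f"G = {g_g_per_hr} g/h: C rises from {c_before:.1f} to {c_after:.1f} mg/mL")
```

Both patients see the same fractional rise in concentration (20/15), even though their absolute removal rates differ.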

Fig. 62.13

Clearance versus removal rate of a substance is illustrated here in two patients with the same 36 L volume of distribution of urea and the same constant clearance of 20 mL/min.

Patients 1 and 2 have urea generation rates of 40 g/hour and 30 g/hour, respectively. When the clearance in both patients suddenly drops to 10 mL/min, the absolute removal rate of urea falls, but at different rates despite the identical change in clearance. See text for more information.

Serum Urea Concentration Versus Urea Clearance

The serum urea concentration has proven to be a poor surrogate for uremic toxicity. Urea concentration reflects a balance between generation and clearance. Whereas urea clearance by the dialyzer should correlate with the clearance of other small, dialyzable solutes that are presumably responsible for uremic toxicity, the generation of urea as an end product of protein catabolism correlates poorly with uremic toxicity; cell culture and animal studies have suggested that urea only mildly disrupts cellular function and metabolic processes. In fact, patients on HD with higher urea generation rates have better outcomes, probably as a reflection of better appetite and higher protein intake. It is difficult to dissect the clearance factor from the generation factor in a single blood urea nitrogen (BUN) measurement, but as explained later, the two can be differentiated by modeling the change in BUN during an HD treatment. For purposes of measuring the dose and adequacy of HD, only the relative change in urea concentration during HD is used to model clearance; the absolute concentrations are ignored. Thus despite its lack of intrinsic toxicity, urea measurements during HD can be used to assess dialysis adequacy as a surrogate for the clearance of other small, easily dialyzed solutes, some of which must be toxic.

Factors That Affect Clearance in A Flowing System

Diffusive clearance in a flowing system depends on the blood and dialysate flow rates, as well as the membrane permeability for the targeted solute. The membrane permeability constant (K0) for a given solute is a function of the biomaterial characteristics, including median pore size, pore-size distribution, and membrane thickness. Multiplying K0 by the surface area available for diffusion (A) yields the permeability or mass transfer-area coefficient (K0A) of the dialyzer. K0A is expressed in mL per minute and is independent of solute concentration. The predictable exponential decline in the concentration gradient of a readily diffusible solute along the dialyzer membrane from blood inflow to outflow (Fig. 62.14A) is the basis for the calculation of K0A and is widely used in mathematical models of solute kinetics. For countercurrent dialysate and blood flow, where Kd is dialyzer clearance, Qb is blood flow, and Qd is dialysate flow, the following equation is applicable:

K0A = [QbQd / (Qb − Qd)] × ln{Qd(Qb − Kd) / [Qb(Qd − Kd)]} (Eq. 5)
Fig. 62.14

(A) Flow-limited clearance.

A logarithmic decline in small solute concentrations, indicated by the arrows on either side of the membrane, is depicted from the dialyzer inlet at the left to the blood outlet at the right. This decline is based on rapid diffusion of solute across the membrane and forms the basis for Eq. 5. Removal is maximized by countercurrent flow of blood and dialysate. (B) Membrane-limited clearance: the diffusive force is still the solute concentration gradient, but solute concentrations along the membrane from dialyzer inflow to outflow are relatively constant for both blood and dialysate because transport across the membrane is limiting and relatively low.

Analogous to clearance, which expresses the dialyzer removal rate normalized to the inflow solute concentration, K0A expresses dialyzer performance normalized to specific blood flow (Qb) and dialysate flow (Qd) rates. K0A is sometimes called the “intrinsic clearance” of a dialyzer and can be viewed as the maximum clearance possible for a particular solute at infinite Qb and Qd. Note that K0A is both dialyzer and solute specific. It is the best parameter for comparing dialyzers, with higher values indicating more effective solute removal. Modern high-efficiency dialyzers have K0A urea values >500 mL/min, with many well over 1000 mL/min.

A useful rearrangement of this equation, solving for Kd, provides a measure of dialyzer clearance at any Qb and Qd, as shown in the following:

Kd = Qb × (e^Z − 1) / (e^Z − Qb/Qd), where Z = K0A(Qd − Qb) / (QbQd) (Eq. 6)

As one can see, Qb has a direct effect on clearance, so when attempting to maximize clearance per unit time, optimizing Qb plays an important role. In practice, HD Qb reaches a maximum of about 500 mL/min given the limitations of needle gauge and tubing caliber. Increasing Qd also increases Kd, but the effect is modest. For example, for a dialyzer K0A urea of 1000 mL/min and a Qb of 400 mL/min, increasing Qd from 600 to 800 mL/min increases Kd by only 15 mL/min (Fig. 62.15).
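The dependence of Kd on Qb and Qd can be checked numerically with a short Python sketch of Eq. 6, using the example values above:

```python
import math

def dialyzer_clearance(k0a, qb, qd):
    """Diffusive dialyzer clearance Kd (mL/min) for countercurrent flow (Eq. 6).

    Assumes qb != qd and ignores ultrafiltration; all flows in mL/min.
    """
    z = math.exp(k0a * (qd - qb) / (qb * qd))
    return qb * (z - 1) / (z - qb / qd)

# Example from the text: K0A urea = 1000 mL/min, Qb = 400 mL/min.
kd_600 = dialyzer_clearance(1000, 400, 600)
kd_800 = dialyzer_clearance(1000, 400, 800)
print(round(kd_600), round(kd_800))   # 318 333: only ~15 mL/min gained
```

Raising Qd by 200 mL/min buys roughly 15 mL/min of urea clearance, whereas clearance tracks Qb much more closely.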

Fig. 62.15

For small, flow-limited solute removal, like urea, across a membrane, an increase in blood flow rate increases dialyzer clearance to a much greater extent than a similar increase in dialysate flow rate.

This model assumes no convective clearance from ultrafiltration and a dialyzer mass transfer-area coefficient (K0A) for urea of 1000 mL/min, as expected for a modern high-flux dialyzer.

The expression of clearance in Eq. 6 does not include the contribution of convective solute removal by ultrafiltration (Qf), which most patients on HD require to remain euvolemic. Convective clearance results from bulk movement of solute across the membrane as plasma water is driven off by hydrostatic pressure. Simultaneous filtration across the same membrane used for HD removes additional solute, but the amount removed is inversely related to the efficiency of diffusion for that solute. For example, if the dialyzer removes a solute efficiently by diffusion, with an extraction ratio approaching 100%, adding Qf contributes minimally to the overall solute removal rate, which cannot exceed 100% of the blood inflow. Mathematically, the effect of Qf on clearance is expressed as follows:

Kd = Qb([Cin − Cout] / Cin) + Qf(Cout / Cin) (Eq. 7)

where Qb is the blood flow rate, Cin is the inflow concentration, Cout is the outflow concentration, and Qf is the ultrafiltration rate, in mL per minute. As Cout approaches zero, the diffusive component of clearance is maximized and the Qf component vanishes. Nonetheless, any quantification of solute clearance on HD should include the additional clearance provided by Qf, since the cumulative effect of convective clearance can be substantial over the course of a typical HD treatment.
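A minimal Python sketch of this combined diffusive-plus-convective clearance, using the standard form Kd = Qb(Cin − Cout)/Cin + Qf(Cout/Cin) with illustrative numbers:

```python
def clearance_with_uf(qb, qf, c_in, c_out):
    """Kd = Qb*(Cin - Cout)/Cin + Qf*(Cout/Cin); flows in mL/min."""
    return qb * (c_in - c_out) / c_in + qf * (c_out / c_in)

# Illustrative: Qb = 300 mL/min, Qf = 10 mL/min, Cin = 100, Cout = 30 mg/dL.
print(round(clearance_with_uf(300, 10, 100, 30), 1))   # 213.0

# As Cout approaches zero, the Qf term vanishes and clearance approaches Qb:
print(clearance_with_uf(300, 10, 100, 0))              # 300.0
```

Here ultrafiltration adds about 3 mL/min to the 210 mL/min of diffusive clearance, a small per-minute increment that still sums to meaningful extra removal over a multi-hour treatment.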

Other Determinants of Clearance

Treatment-related variables already discussed include the permeability of the membrane to solutes based primarily on size, membrane surface area, blood and dialysate flow rates, and, of course, treatment time. A few other variables that affect clearance during HD are worth mentioning ( Table 62.4 ). Solute-related variables include the physical and chemical properties of the substance to be removed (size, charge, protein binding) and its distribution in the body (intracellular, extracellular, interstitial).

Table 62.4

Factors Influencing Effective Clearance

Solute-Related
  Molecular size
  Molecular charge
  Macromolecular binding
  Body distribution and sequestration

Treatment-Related (in Order of Importance)
  Small molecules: blood and dialysate flow > membrane surface area > treatment time > membrane permeability
  Large molecules: membrane permeability > treatment time > membrane surface area > blood and dialysate flow

Solute molecular size determines membrane permeability, but in flowing systems it also determines where along the dialyzer diffusion takes place. Smaller molecules tend to be cleared rapidly at the proximal end of the dialyzer, leaving the more distal end available to enhance clearance further as Qb is increased, making small-solute clearance flow limited (see Fig. 62.14A). The concentration of larger molecules tends to remain constant along the length of the dialyzer, so their clearance is minimally influenced by blood or dialysate flow rates (Fig. 62.14B). Note that flow-limited clearance for small solutes and membrane-limited clearance for larger solutes occur at the same time in a dialyzer. The relation between flow and clearance is shown graphically in Fig. 62.16 for both limiting scenarios.

Fig. 62.16

Flow-limited versus membrane-limited clearances.

Flow limits clearance when the membrane is not fully exposed to inflow solute concentrations (see Fig. 62.14A). This typically occurs for small, easily dialyzed solutes. In contrast, larger solutes tend to saturate the membrane along its entire length even at lower blood (or dialysate) flow rates (see Fig. 62.14B). For these less easily dialyzed solutes, further increases in flow have no influence on clearance. K0A, mass transfer-area coefficient.

The molecular activity of a solute determines its capacity for movement across the HD membrane. About 90% of whole blood volume is water, which is where water-soluble solutes reside and from which they are dialyzed. Thus Qb through the dialyzer for water-soluble solutes should be expressed as blood water flow, or about 90% of whole blood flow. Similarly, blood concentrations should be expressed as blood water concentrations, which are about 7% higher than whole serum concentrations. Note that BUN is a misnomer; serum urea nitrogen is what is actually measured.
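These water-content corrections are simple proportional adjustments; the brief sketch below (Python) applies the approximate factors quoted above:

```python
def blood_water_flow(qb_whole_ml_min):
    """Effective flow of blood water: roughly 90% of whole blood flow."""
    return 0.90 * qb_whole_ml_min

def blood_water_conc(c_serum):
    """Blood water concentration: roughly 7% higher than the serum value."""
    return 1.07 * c_serum

# A prescribed Qb of 400 mL/min delivers about 360 mL/min of blood water:
print(round(blood_water_flow(400)))   # 360
```

Kinetic models use these adjusted values rather than whole-blood flow and serum concentrations.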

For charged molecules, the Gibbs-Donnan effect may limit diffusion by electrostatically reducing the solute’s effective activity in blood. For blood equilibrated with dialysate, this effect is attributable to nondialyzable plasma proteins, mostly albumin, which carries a net negative charge (≈17 mEq/mmol albumin). The asymmetric charge distribution across the membrane effectively “captures” a small fraction of the positively charged sodium (Na) ions on the plasma side, reducing their potential for diffusion. Correcting for this charge gradient, the effective Na concentration on the blood side of the membrane is about 3 mEq/L lower than its measured concentration, thereby affecting the movement, or dialysance, of Na and other positively charged solutes.

In general, the size of the molecule is the most important intrinsic physical feature governing its removal. The rate of movement, or flux (J), of smaller molecules is higher than the flux of larger molecules. Other factors such as plasma protein binding, shape, charge, and sequestration in the intracellular compartment must be considered when attempting to predict solute clearance.

Dialyzer Clearance Versus Whole-Body Clearance

A distinction must be made between clearance across the dialyzer and clearance across the patient. For each, the removal rate (Eq. 1) is the same, but the denominator differs. Dialyzer clearance expresses solute removal as a fraction of the blood concentration (adjusted for blood water content, about 90%) at the dialyzer inflow port. In contrast, whole-body clearance expresses removal as a fraction of the average concentration throughout the body, which is substituted for the dialyzer inflow concentration in the denominator of the standard clearance formula. Whole-body concentrations are higher than serum concentrations during HD because of solute disequilibrium, a delay in the diffusive movement of solute from remote body compartments into the patient’s circulating blood during HD; there may also be nonuniform distribution of solutes. Consequently, whole-body clearances are always lower than dialyzer clearances.

Typical solutes that exhibit disequilibrium distribute preferentially in the intracellular compartment and diffuse slowly across the cell membrane to the extracellular compartment. Such solutes are sometimes labeled as “difficult to dialyze.” For example, HD is not recommended for removal of digoxin in patients with digoxin intoxication, even though it is water-soluble and easily removed in vitro by dialysis. Removal in vivo is limited by sequestration in remote tissue compartments. Solutes such as digoxin have a large apparent volume of distribution, often larger than total body water volume.

In normal physiologic conditions, urea is often termed an “ineffective osmole” and is considered to distribute evenly into total body water (V). Even though this solute normally diffuses rapidly across body membranes, it exhibits a concentration difference between water compartments during the high clearance rate of HD, displaying disequilibrium. Whole-body urea clearance can be calculated by using the equilibrated postdialysis BUN in the denominator in Eq. 1 , requiring the patient to wait 30 to 60 minutes after the completion of HD ( Fig. 62.17 ). Experience has shown that urea sequestration is predictable and has led to the development of correction factors to allow the use of the immediate post-HD BUN, obviating the need to wait for equilibration to take place.
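The dialyzer-versus-whole-body distinction can be shown numerically; in this Python sketch the removal rate and concentrations are illustrative, with the equilibrated (mean body) concentration set higher than the dialyzer inflow concentration to represent disequilibrium:

```python
# Identical removal rate, two different denominators (Eq. 1):
removal = 180.0     # mg/min of urea leaving via the dialyzer (illustrative)
c_inflow = 1.00     # mg/mL at the dialyzer inflow port
c_equil = 1.20      # mg/mL averaged over body water (higher: disequilibrium)

dialyzer_k = removal / c_inflow      # mL/min
whole_body_k = removal / c_equil     # mL/min

print(round(dialyzer_k), round(whole_body_k))   # 180 150
```

Because the whole-body (equilibrated) concentration exceeds the inflow concentration during HD, the whole-body clearance is necessarily the smaller of the two.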

Fig. 62.17

Equilibrated postdialysis blood urea nitrogen (BUN), the basis for eKt/V.

Precise measurements of BUN every 15 minutes during a typical patient’s 2.5-hour hemodialysis show a logarithmic decrease in concentration during the treatment and a rapid rebound that is complete approximately 1 hour later. The double-pool model shown in Fig. 62.23 and the solid line in the graph predict the concentrations accurately. The equilibrated postdialysis BUN is an extrapolated value shown as the large solid circle.

In quantifying HD by way of urea clearance (K), we express dialyzer clearance relative to the volume of distribution of urea, which is the patient’s total body water (V). Because HD is intermittent, we must also enter a unit of time (t). Incorporating these factors, we arrive at the familiar term Kt/V, which can be viewed as the fractional clearance of urea per HD treatment. Note that Kt/V is dimensionless, since all unit terms cancel. Also, because HD recirculates and repeatedly clears the same body water, the cleared volume (Kt) can exceed total body water, so Kt/V can be >1.0.
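A minimal sketch of the Kt/V arithmetic (Python; illustrative single-pool values, ignoring urea generation and ultrafiltration during the treatment):

```python
def kt_over_v(k_ml_per_min, t_min, v_liters):
    """Single-pool Kt/V: dimensionless fractional urea clearance per treatment."""
    return (k_ml_per_min * t_min) / (v_liters * 1000)

# Illustrative: K = 250 mL/min, t = 240 min, V = 36 L.
print(round(kt_over_v(250, 240, 36), 2))   # 1.67 -> Kt (60 L) exceeds V (36 L)
```

Here Kt is 60 L of cleared volume against 36 L of body water, illustrating how a recirculating process yields Kt/V > 1.0.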

Effect of Red Blood Cells Passing Through the Dialyzer

In the blood compartment, solutes may diffuse slowly or not at all out of RBCs during their transit through the dialyzer. The water content of RBCs is about 64% by volume, and this water can contribute additional transport of water-soluble toxins during HD. Clearance of a solute therefore depends on how readily it moves out of RBCs. For example, creatinine and uric acid are small molecules with clearances similar to that of urea when measured in vitro in a saline solution. Measured in vivo, however, urea clearance is higher than that of creatinine and uric acid because RBCs possess urea channels that provide facilitated transport across their membranes, augmenting the urea available for clearance across the dialyzer.

Sequestration in Remote Compartments

Potassium (K) and phosphorus (P) are cleared easily in vitro, but their whole-body removal is limited by cellular and bone sequestration in vivo, explaining the need for dietary restriction and the use of intestinal binders to diminish absorption. Phosphorus is rapidly removed from the intravascular compartment, causing intermittent hypophosphatemia during conventional HD (high Qb and Qd, relatively short treatment duration of 3–4 hours). However, P moves from sequestered locations back into the blood space in the 2 to 4 hours after HD, producing a rapid rebound in serum P to near predialysis levels (Fig. 62.18). Thus sequestration and the consequent postdialysis rebound account for the failure of conventional HD alone to normalize interdialytic concentrations of P. Intermittently low concentrations of serum P and other sequestered solutes during HD may also contribute to postdialysis disequilibrium symptoms. One can speculate that the magnitude of sequestration for the toxic solutes responsible for “uremia” cannot be as pronounced as for P; otherwise, HD would not be successful.

Fig. 62.18

Effect of sequestration on serum phosphate levels and removal during dialysis.

Measurements of serum inorganic phosphorus concentrations were taken at 15-minute intervals during both high-flux and standard hemodialysis. The rapid flux of phosphorus due to its relatively high mass transfer-area coefficient caused levels to fall into the hypophosphatemic range (below the lower dotted line ) during most of the 4-hour dialysis. A marked rebound continued for 4 hours after dialysis ended.

Modified from DeSoi CA, Umans JG. Phosphate kinetics during high-flux hemodialysis. J Am Soc Nephrol. 1993;4:1214−1218.

Components of the Extracorporeal Circuit

The HD system for a single patient in the 1940s was about the size of a twin bed. Modern HD machines are about the size of a three- to four-drawer filing cabinet. Central to the dialysis delivery system is the artificial kidney or dialyzer, which acts as the point of exchange between blood and dialysate. The system is designed to deliver blood and properly constituted dialysate to the dialyzer, where diffusion and convection occur. Technologic advances have provided online monitors that accurately measure and regulate blood and dialysate flow rates, circuit pressures, and dialysate composition and temperature. Additional advances include automated safety mechanisms designed to detect blood leaks and air in the circuit and online devices that monitor vascular access, hematocrit (HCT), and dialysis adequacy during each treatment ( Fig. 62.19 ).

Fig. 62.19

Components of a typical modern hemodialysis blood and dialysate circuit.

Much of the dialysate circuit is not readily seen but is internal to the machine. A typical “3 stream” dialysate (water, acid, and bicarbonate concentrates) proportioning system and the use of precise balance chambers to ensure accurate volumetric control of ultrafiltration highlight the advances in modern dialysis machinery.

Blood Circuit

During HD, a steady flow of blood is obtained from a central venous catheter (CVC), an arteriovenous fistula (AVF), or an arteriovenous graft (AVG). If a CVC is used, blood enters the extracorporeal circuit from ports along the sides of the double-lumen catheter (arterial lumen) and returns through the port at the distal tip (venous lumen). Alternatively, an AVG or AVF is cannulated with two needles, with blood flowing from the arterial needle into the blood tubing and dialyzer and returning to the patient through the venous needle. The driving force for the blood circuit is a peristaltic roller pump, which sequentially compresses the pump segment of the tubing against a curved rigid track, forcing blood forward through the tubing. Elastic recoil refills the pump tubing after each roller has passed, readying it for the next roller. Because of this elastic recoil, and because most pumps have only two or three rollers, dialyzer blood flow is pulsatile. Increasing the number of rollers makes the flow less pulsatile but increases the risk of hemolysis and damage to the pump segment.

Pressure monitors are located proximal to the blood pump and immediately distal to the dialyzer. The proximal or “arterial” pressure monitor measures the negative pressure created in the prepump tubing and guards against excessive suction on the vascular access. Accepted arterial inflow pressures range from −20 to −80 mm Hg but may reach −200 mm Hg when Qb is high. The distal or “venous” pressure monitor gauges the resistance to blood return between the dialyzer and the vascular access. Acceptable values range from +50 to +200 mm Hg. When the upper or lower limits of arterial or venous pressures are exceeded, an alarm sounds and the blood pump turns off. Excessively negative or “low” arterial pressures may be caused by kinks in the tubing, improper arterial needle position, hypotension, or arterial inflow stenosis. Excessively positive or “high” venous pressures may be caused by blood clotting in the dialyzer, kinking or clotting in the venous blood lines, improper venous needle position, venous needle infiltration, or venous outflow stenosis. Accurate measurement of the arterial and venous pressures is essential for determining the transmembrane pressure (TMP), the pressure difference between the blood and dialysate compartments within the dialyzer, which partly determines the ultrafiltration rate (UFR). Excessively high pressures anywhere in the blood compartment may rupture the dialyzer membrane or disconnect the blood circuit, leading to an abrupt decrease in circuit pressure. The automatic shutoff of the blood pump in this circumstance is potentially lifesaving.
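The alarm logic described above can be sketched as follows; the limit values and function names are hypothetical illustrations of the principle, not any machine's actual firmware:

```python
# Hypothetical alarm-window check (illustrative limits, not a real machine API).
ARTERIAL_RANGE = (-200, -20)   # mm Hg, prepump (negative pressures)
VENOUS_RANGE = (50, 200)       # mm Hg, post-dialyzer (positive pressures)

def pump_should_stop(arterial_mm_hg, venous_mm_hg):
    """Return True if either pressure is out of range (blood pump shuts off)."""
    arterial_ok = ARTERIAL_RANGE[0] <= arterial_mm_hg <= ARTERIAL_RANGE[1]
    venous_ok = VENOUS_RANGE[0] <= venous_mm_hg <= VENOUS_RANGE[1]
    return not (arterial_ok and venous_ok)

print(pump_should_stop(-60, 120))    # False: both pressures within limits
print(pump_should_stop(-250, 120))   # True: excessive arterial suction
```

An out-of-range reading on either side trips the alarm and stops the pump, mirroring the fail-safe behavior described in the text.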

Two additional safety devices, the venous air trap and air detector, are located in the blood line distal to the dialyzer. Air may enter the blood circuit through loose connections, improper arterial needle position, or the saline infusion line. The venous air trap prevents any air that has entered the blood circuit from returning to the patient. If air is detected in the venous line after the air trap, the machine sounds an alarm and a relay switch turns off the blood pump. Excessive foaming of blood also triggers the air detector. These safety features prevent air embolism, which can lead to stroke, cardiac and/or respiratory failure, and death if not immediately recognized. However, microbubbles formed during HD may escape detection and lodge in organs such as the brain and lungs, possibly contributing to the higher incidence of pulmonary hypertension and cognitive decline observed in HD patients. Ensuring a high blood level in the venous air trap, avoiding extremely high Qb, and adequately priming a dry-packed dialyzer may reduce the incidence of such microemboli.

Hemodialyzers

A hemodialyzer, or dialyzer, is often called an “artificial kidney.” Its configuration allows blood and dialysate to flow, preferably in opposite directions, through individual compartments separated by a semipermeable membrane. By convention, blood entering the dialyzer is designated arterial and blood leaving the dialyzer is venous. The many available dialyzers differ mainly in the composition, configuration, and surface area of the membrane. Dialyzers influence the efficiency and quality of HD through their membranes, which determine the K0A value (see “Factors That Affect Clearance in a Flowing System” earlier), and through the blood and dialysate flow rates, which determine the clearance values (Table 62.5).

Table 62.5

Key Factors That Affect the Solute Clearance of a Hemodialyzer

Effect on clearance: ↑ = increases, ↓ = decreases

Properties of the membrane
  Membrane porosity: ↑
  Membrane thickness: ↓
  Membrane surface area: ↑
  Membrane charge: varies
  Membrane hydrophilicity: ↑

Properties of the solute
  Increased molecular weight and size: ↓
  Charge: varies
  Lipid solubility: ↓
  Protein binding: ↓

Blood side
  Unstirred blood layer: ↓
  Increased blood flow: ↑

Dialysate side
  Dialysate channeling and unstirred layer: ↓
  Increased dialysate flow: ↑
  Countercurrent direction of flow: ↑

Virtually all commercial dialyzers in the United States are hollow fiber dialyzers constructed with a cylindric plastic casing (usually polycarbonate) that encloses several thousand hollow fiber semipermeable membranes stretched from one end to the other and anchored at each end by a plastic potting compound, usually polyurethane. The semipermeable membrane and potting material separate the blood compartment from the dialysate compartment, where dialysate flows between and around each fiber. The blood compartment or fiber bundle volume ranges from 60 to 150 mL. Blood flows to or from the open end of each fiber through a removable header attached to the blood tubing. Apart from lowering blood priming volume, the hollow fiber design also improves efficiency of solute exchange by increasing the contact area between blood and dialysate. The arterial port design also influences the distribution of blood flow through the hollow fibers and can reduce dialysis efficiency. Thrombosis and the need for potting compound are major disadvantages of the hollow fiber design. The potting compound absorbs chemicals used to disinfect newly manufactured dialyzers (e.g., ethylene oxide) or reprocessed dialyzers (e.g., formaldehyde, peracetic acid, and glutaraldehyde) and acts as a reservoir for these chemicals, allowing them to leach out slowly during HD into the patient’s blood.

Membrane Composition

The biomaterials used to make the hollow fiber dialyzer dictate its clearance and UF characteristics, as well as its biocompatibility. Two major classes of membrane material are available commercially: (1) cotton fiber (cellulose-based) membranes and (2) synthetic membranes. Unmodified cellulose-based membranes contain many free hydroxyl groups, which activate blood components (see “Membrane Biocompatibility”). Treating the cellulose polymer with acetate and tertiary amino compounds improves membrane biocompatibility, presumably through covalent binding of the hydroxyl groups to form acetylated cellulose and aminated cellulose, such as Cellosyn or Hemophan.

The major polymers in synthetic membranes are polyacrylonitrile, polysulfone, polycarbonate, polyamide, polyether sulfone, and polymethylmethacrylate. Although these membranes are thicker, they can be rendered more permeable than the cellulose membranes, yielding greater fluid and solute removal. The larger pore sizes in the synthetic membranes more efficiently remove higher molecular weight substances such as β-2 microglobulin (β 2 M).

Membrane Biocompatibility

Dialyzer membranes may activate white blood cells, platelets, and the complement cascade via the alternative pathway to generate anaphylatoxins C3a and C5a. The degree to which the membrane activates blood components determines its “biocompatibility.” Bioincompatible membranes may cause allergic reactions, hypoxemia, transient neutropenia (due to leukosequestration), altered immunity, tissue damage, anorexia, protein catabolism, or an inflammatory state. Because HD is repetitive, the effects of low-grade, subclinical membrane interactions during each treatment may be cumulative, eventually resulting in adverse clinical outcomes such as infection, accelerated atherosclerosis, frequent hospitalization, and death.

A membrane’s absorptive capacity can also influence its biocompatibility. Some synthetic membranes, such as polyacrylonitrile, polyamide, and polymethylmethacrylate, are more hydrophobic, bind proteins to a greater extent, and may ameliorate bioincompatible inflammatory reactions through their ability to bind anaphylatoxins and cytokines. Therefore measurements of these elements in blood may not reflect the true capability of a membrane for inciting the complement cascade and inducing an inflamed state. In general, though, synthetic membranes and modified or “substituted” cellulose membranes are more biocompatible than unmodified cellulose membranes, but issues may still arise, such as increased blood levels of bisphenol A (BPA) in patients treated with a polysulfone membrane.

Because bacterial contaminants in product water (see “Water Treatment”) can also activate blood components, it is difficult to determine the relative contribution from bioincompatible membranes versus nonsterile or inadequately purified water to the inflamed state seen in HD patients. With increasing use of modified cellulose and synthetic membranes and closer attention to water quality, the distinction has become even more difficult. Studies evaluating the relative biocompatibility of substituted cellulose versus synthetic membranes have reported no difference. Ongoing efforts to improve biocompatibility include coating the membrane with heparin or vitamin E.

Membrane Permeability and Surface Area

Dialyzers perform two important functions: elimination of unwanted solutes and removal of excess fluid. The thickness, porosity, composition, and surface area of a membrane determine its ability to clear solutes and remove water. In general, thinner and more porous membranes provide more efficient transport of solutes and fluid. The dialyzer urea K0A (or clearance) describes its ability to eliminate low-molecular-weight substances; the vitamin B12 and β2M K0A (or clearance) describe its capacity to remove higher molecular weight substances; and the ultrafiltration coefficient (KUF) describes its ability to remove water ( Table 62.6 ).

Table 62.6

Characteristic Values for Standard, High-Efficiency, and High-Flux Dialyzers

| Parameter | Standard | High Efficiency | High Flux |
|---|---|---|---|
| Blood flow rate (mL/min) | 250 | ≥350 | ≥350 |
| Dialysate flow rate (mL/min) | 500 | ≥500 | ≥500 |
| Urea K0A | 300−500 | ≥600 | Variable |
| Urea clearance (mL/min) | <200 | >210 | Variable |
| Urea clearance/body weight (mL/min/kg) | <3 | >3 | Variable |
| Vitamin B12 clearance (mL/min) | 30−60 | Variable | >100 |
| β2M clearance (mL/min) | <10 | Variable | >20 |
| Ultrafiltration coefficient, KUF (mL/h/mm Hg) | 3.5−5.0 | Variable | >20 |
| Membrane material | Cellulose | Variable | Variable |
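The relationship among K0A, flow rates, and delivered clearance can be illustrated with the classic countercurrent mass-transfer equation for a dialyzer. The following Python sketch is illustrative only (the function name and example flows are assumptions, and ultrafiltration is ignored), but it reproduces the orders of magnitude in Table 62.6:

```python
import math

def urea_clearance(k0a, qb, qd):
    """Estimate dialyzer urea clearance (mL/min) from the mass-transfer area
    coefficient K0A and the blood (qb) and dialysate (qd) flow rates, using
    the standard countercurrent relation (no ultrafiltration assumed)."""
    z = (k0a / qb) * (1 - qb / qd)
    e = math.exp(z)
    return qb * (e - 1) / (e - qb / qd)

# A high-efficiency dialyzer (K0A 600) run at Qb 350 and Qd 500 mL/min:
print(round(urea_clearance(600, 350, 500)))  # 242 mL/min, consistent with >210
# A standard dialyzer (K0A 300) at Qb 250 and Qd 500 mL/min stays below 200:
print(round(urea_clearance(300, 250, 500)))
```

Note how clearance rises less than proportionally with K0A: diffusion becomes flow limited, which is why high-efficiency treatment also requires the higher Qb and Qd listed in the table.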

Most dialyzers have a membrane surface area of 0.8 to 2.1 m². The desirable increase in solute transport associated with larger membranes can be achieved by increasing the length or number of hollow fibers or by decreasing their diameter, but each maneuver has undesirable effects when carried too far. Lengthening the fiber increases the shear rate and resistance to blood flow, increasing the risk of hemolysis. Increasing the number of hollow fibers increases surface area but expands the extracorporeal blood volume, which can compromise the patient’s hemodynamic stability. Smaller-diameter fibers can offset this disadvantage, but as fiber diameter decreases, resistance to blood flow increases, enhancing not only filtration but also backfiltration and clotting. As fibers thrombose, the effective surface area decreases. The design and geometry of the hollow fiber dialyzer represent a delicate balance among these factors.
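The sensitivity of flow resistance to fiber diameter follows from Poiseuille's law for laminar flow, in which resistance varies inversely with the fourth power of the radius. A brief illustrative calculation (the specific diameters are hypothetical) shows why even modest narrowing is costly:

```python
def relative_resistance(r_new, r_old):
    """Poiseuille resistance for laminar flow in a cylindrical fiber scales
    as 1/r**4, so the resistance ratio is (r_old / r_new) ** 4."""
    return (r_old / r_new) ** 4

# Shrinking the fiber inner radius from 100 to 90 microns (a 10% reduction)
# raises flow resistance by roughly half:
print(round(relative_resistance(90, 100), 2))  # 1.52
```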

High-Efficiency and High-Flux Dialyzers

Historically, low dialyzer membrane permeability limited HD efficiency, requiring more than 6 hours per treatment. As dialyzer design improved, treatment times decreased. In the late 1980s with the advent of more permeable dialyzer membranes, more precise UF control, and more reliable vascular access to achieve adequate blood flow, treatment times decreased to 2 to 3 hours three times weekly in the United States. The improved dialyzers ushered in the era of high-efficiency and high-flux HD.

The distinction between high-efficiency and high-flux dialyzers is imprecise, and sometimes these terms are used interchangeably. Both types of dialyzers have improved solute and fluid clearance compared with standard dialyzers and take advantage of higher Qb and Qd to reduce dialysis time while maintaining an adequate dose (see Table 62.6 ). High-efficiency dialyzers have a high K0A and high small-molecule clearance (such as urea) compared with standard dialyzers. High-flux dialyzers have a highly permeable membrane for larger molecules, such as vitamin B12 and β2M, and have a higher KUF than high-efficiency dialyzers, but not necessarily high urea clearances.

High-efficiency and high-flux dialyzers contain a substituted cellulose or synthetic membrane. Both membranes improve dialyzer permeability; substituted cellulose membranes can be made thinner to increase porosity and surface area, and synthetic membranes can be manufactured with more and larger pores. Both high-efficiency and high-flux HD require the use of bicarbonate dialysate (see “Dialysate Composition”). Additionally, the high KUF of these dialyzers creates the potential for hemodynamic instability with pressure-controlled filtration and thus requires volume-controlled filtration (see “Dialysate Circuit”).

Because of their relative porosity, high-flux dialyzers can remove larger molecules such as β2M, which are not removed at all by standard cellulose dialyzers. Removal of β2M reduces the risk of carpal tunnel syndrome and other complications in long-term patients. Initial results suggest that removal of other large molecules may offer additional benefits, such as an enhanced response to EPO, greater leptin removal possibly leading to better appetite, and perhaps lower mortality and hospitalizations. However, potential adverse consequences include more efficient removal of amino acids, albumin, and drugs such as vancomycin.

Despite early promise in patients undergoing maintenance HD, RCTs or crossover trials comparing high-flux HD with standard HD have found no difference in the incidence of hypotension and intradialytic symptoms, BP control, neuropsychologic function, hemoglobin (Hgb) concentration, use of erythropoiesis-stimulating agents (ESAs), or markers of inflammation, oxidative stress, or nutritional status. Three large RCTs on the impact of membrane permeability on CV outcomes, the Hemodialysis (HEMO) Study, Membrane Permeability Outcome (MPO) Study, and EGE Study, detected no significant difference in mortality or morbidity between patients treated with standard versus high-flux membranes. Post hoc subgroup analyses demonstrated that high-flux HD reduced CV events in patients with longer dialysis vintage in the HEMO Study, reduced mortality in patients with low serum albumin (<4.0 g/dL) and DM in the MPO Study, and reduced CV mortality in patients with an AVF or DM in the EGE Study. A meta-analysis of available data concluded that high-flux HD may reduce CVD mortality by 15% but does not alter infection-related or all-cause mortality.

Convection versus diffusion

Further increases in dialyzer flux may not lead to improved middle molecule clearance or outcomes, as some solutes that accumulate in kidney failure are protein bound or sequestered inside the cell and must diffuse across cell membranes. The removal of such solutes remains time dependent. Initial experience has suggested that hemofiltration (HF) and hemodiafiltration (HDF) may augment the removal of larger molecules and protein-bound solutes through increased convective clearance. However, RCTs comparing HF and HDF with HD have found no difference in Hgb or ESA resistance, serum phosphate levels, health-related quality of life, cardiovascular (CV) parameters such as left ventricular (LV) mass and pulse wave velocity, or intradialytic hypotension (IDH), although some studies have reported less IDH with HDF. Two of the three largest RCTs, the Convective Transport Study (CONTRAST) and Turkish Online Haemodiafiltration Study (OL-HDF), have reported no difference in CVD and all-cause mortality, but subgroup analyses have suggested a benefit in patients treated with greater convective and replacement volumes (>22 L and 17.4 L, respectively, so-called “high-efficiency” HDF). The ESHOL (Estudio de Supervivencia de Hemodiafiltracion On-Line) Study, which achieved about 23 L of convective volume per session, reported lower CVD mortality, all-cause mortality, and hospitalization in patients randomized to HDF. Meta-analyses have concluded that convective therapies may reduce IDH and CVD mortality, but additional high-quality RCTs addressing the effect of convective volumes are needed.

Further technologic advances have allowed the development of medium- or high-cutoff (“high retention onset”) dialyzer membranes, which have larger but more homogeneous pore sizes that clear molecules up to 45 kDa yet prevent the excessive loss of albumin. Clinical data have suggested that these membranes clear larger middle molecules, such as advanced glycation end products and inflammatory mediators, comparably to HDF, and may improve the residual syndrome (symptoms that remain in patients despite adequate dialysis) and reduce CVD risk. An economic analysis of RCT data demonstrated cost savings with maintenance HD using a medium cutoff dialyzer, driven by fewer hospitalizations compared with the control group using a high-flux dialyzer.

Dialysate Circuit

Another major function of the HD system is the preparation and delivery of dialysate to the dialyzer. Most dialysis clinics use a single-pass delivery system (which discards the dialysate after a single passage through the dialyzer) and a single-patient delivery system (which prepares dialysate individually and continuously at each patient station by mixing liquid concentrates with a proportionate volume of purified water). To ensure that dialysate concentrates are diluted safely and accurately, the delivery system has many built-in safety monitors. Some clinics use a central multipatient delivery system whereby the dialysate is mixed in an area separate from patient care and then piped into each patient station, or the concentrate is piped to each station before mixing. These centralized systems lower patient care costs but require additional effort and cost to modify dialysate electrolyte concentrations, such as calcium (Ca) and K, for individual patients.

The HD machine warms purified water to physiologic temperature and then deaerates it under a vacuum. Because the patient is exposed to 100 to 200 L of dialysate during each treatment, the dialysate must be warmed to avoid hypothermia and is maintained at 35°C to 37°C. If the dialysate is too hot, protein denaturation (>42°C) and hemolysis (>45°C) occur. To ensure safety, if the dialysate temperature falls outside the limits of 35°C to 42°C, an alarm sounds and a bypass valve diverts the dialysate directly to the drain, automatically bypassing the dialyzer. Without deaeration, dissolved air would come out of solution in the dialysate as negative pressure is applied during HD, leading to malfunction of the blood leak detector and conductivity detector.

The heated and deaerated product water is then mixed proportionately with the concentrate to produce dialysate. Improperly proportioned dialysate can cause severe or fatal electrolyte disturbances. Because the primary solutes in the dialysate are electrolytes, the electrical conductivity of dialysate varies directly with the concentration of solutes. On the basis of this principle, the conductivity monitor downstream from the proportioning pump continuously measures the electrical conductivity of the product solution to ensure proper proportioning. It has a narrow range of tolerance, is usually redundant, and must be calibrated periodically using standardized solutions or by laboratory measurements of dialysate electrolytes. Changes in temperature, the presence of air bubbles, or malfunction of the sensor (usually an electrode) can alter dialysate conductivity.

The dialysate pump, located downstream from the dialyzer, controls dialysate flow and pressure. Although many dialyzers require a negative dialysate pressure for filtration, the circuit also must be able to generate positive dialysate pressures within the dialyzer because positive pressure is required to limit filtration when using dialyzers with a high KUF or under conditions that increase blood compartment pressure. The dialysate circuit regulates the pressure by controlled constriction of the dialysate outflow tubing while maintaining a constant flow rate. In addition, the dialysate delivery system controls the filtration rate, either indirectly by altering the TMP (pressure-controlled UF) or directly by modifying the actual filtration rate (volume-controlled UF). Earlier systems used manual pressure-controlled filtration, requiring dialysis personnel to calculate and enter the TMP, closely monitor the filtration rate, and recalculate and adjust the TMP as needed. For dialyzers with a KUF >6 mL/h/mm Hg, dialysate delivery systems must have built-in balance chambers and servomechanisms to accurately control the volume of the fluid removed during HD (volume-controlled filtration) and prevent excessive fluid gain or removal.
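With pressure-controlled UF, the TMP follows directly from the prescribed fluid removal and the dialyzer KUF. A simple illustrative calculation (the values are hypothetical, and oncotic pressure and pressure gradients along the fiber are ignored) shows why high-KUF dialyzers demand volume control:

```python
def required_tmp(uf_goal_ml, treatment_h, kuf):
    """TMP (mm Hg) needed to remove uf_goal_ml of fluid over treatment_h
    hours with a dialyzer ultrafiltration coefficient kuf (mL/h/mm Hg)."""
    ufr = uf_goal_ml / treatment_h  # ultrafiltration rate, mL/h
    return ufr / kuf                # mm Hg

# Removing 3 L over 4 h with a standard dialyzer (KUF 5 mL/h/mm Hg):
print(round(required_tmp(3000, 4, 5)))   # 150 mm Hg
# The same goal with a high-flux dialyzer (KUF 40 mL/h/mm Hg) needs <20 mm Hg,
# so tiny pressure errors translate into liters of unintended fluid removal:
print(round(required_tmp(3000, 4, 40)))  # 19 mm Hg
```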

When blood is detected in the dialysate, the blood leak monitor located in the dialysate outflow tubing sounds an alarm and shuts off the blood pump. Blood in the dialysate usually indicates membrane rupture and may be caused by a TMP exceeding 500 mm Hg or by a membrane damaged by bleach or heat disinfection when reprocessing dialyzers for reuse. Although a rare complication, membrane rupture can be life-threatening because it allows blood to come into contact with nonsterile dialysate.

Online Monitoring

In addition to delivering dialysate to the dialyzer and having many built-in safety features, modern HD machines also record and store real-time data such as vital signs, Qb and Qd, arterial and venous pressures, delivered dialysis dose, plasma volume, thermal energy loss, and access recirculation. Linking computerized medical information systems with dialysis delivery systems can improve patient care by allowing the integration of patient data while maintaining treatment records. These capabilities may be most important among patients receiving in-center self-care or home-based HD, where there are fewer personnel present at the point of care.

Monitoring Clearance

Online monitoring of clearance may provide the best assessment of dialysis adequacy. Online monitors record urea clearance by measuring the urea concentration in the dialysate, either continuously or periodically; determine dialyzer Na clearance by pulsing the dialysate with Na and measuring dialysate conductivity at the dialyzer inlet and outlet (ionic dialysance); or determine clearance of uremic solutes by measuring ultraviolet light absorbance of spent dialysate. Most online methods for monitoring urea or Na kinetics provide Kt/V based on whole-body clearance in addition to dialyzer clearance. Online urea monitoring requires repeated calibration and has not gained popularity. Online clearance monitoring removes the expense and risks of blood sampling, reduces personnel time, allows frequent determination of the delivered dose, and provides real-time measurements for instant feedback. However, reported clearance values may differ from blood-side measurements, and adjustments are applied by the instrument’s software to match the urea Kt/V more closely. Drawbacks of online clearance monitoring include the need for multiple measurements of Kd to obtain an average for the treatment, accurate monitoring of treatment time, the need for blood urea sampling to allow determination of the protein catabolic rate (a marker of nutrition), and the need to measure or estimate V to allow the calculation of Kt/V from the online Kd measurements.
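The arithmetic behind online Kt/V is straightforward once an average Kd, the treatment time, and an estimate of V are in hand. A minimal sketch (the dialysance readings and V are hypothetical):

```python
def online_ktv(kd_readings_ml_min, minutes, v_liters):
    """Kt/V from repeated online dialysance (Kd) readings: average the Kd
    values over the treatment, multiply by time, divide by the urea
    distribution volume V."""
    k_avg = sum(kd_readings_ml_min) / len(kd_readings_ml_min)  # mL/min
    kt_liters = k_avg * minutes / 1000.0  # total volume cleared, L
    return kt_liters / v_liters

# Four ionic-dialysance readings over a 240-min treatment, V estimated at 35 L:
print(round(online_ktv([230, 225, 215, 210], 240, 35), 2))  # 1.51
```

Note that V must still be measured or modeled separately, which is one of the drawbacks listed above.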

Monitoring Hematocrit and Relative Blood Volume

The HCT can be measured online during HD using an ultrasound determination of plasma protein concentration or, more commonly, an optical measure of Hgb concentration or HCT. Patients who are prone to IDH and cramping may benefit from online monitoring of HCT because their symptoms are usually caused by decreased circulating blood volume when the UF rate exceeds intravascular refilling. The degree of hemoconcentration reflects the immediate magnitude of intravascular volume depletion. Theoretically, altering the filtration rate during HD to minimize excessive hemoconcentration may reduce symptoms during HD and optimize the dry weight (DW). In practice, using just the relative blood volume to guide the filtration rate has not been very successful in ameliorating symptoms, likely because of inaccuracies in measurement, the varied compensatory CV responses to volume depletion within and among individual patients, and a dialysis-induced reduction in arteriolar tone and LV function (myocardial stunning).
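Optical monitors convert serial HCT readings into a relative blood volume using the principle that red cells remain confined to the circulation during UF, so hemoconcentration tracks volume. A minimal sketch of that conversion (the values are illustrative, and the device-specific corrections are ignored):

```python
def relative_blood_volume(hct_start, hct_now):
    """Relative blood volume as a percent of the pre-dialysis value:
    RBV(t) = Hct(0) / Hct(t) * 100, since the red cell mass is constant
    while plasma water is removed."""
    return 100.0 * hct_start / hct_now

# An HCT rising from 30% to 34% during a treatment implies the blood volume
# has fallen to about 88% of its starting value:
print(round(relative_blood_volume(30, 34), 1))  # 88.2
```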

Online determination of relative blood volume varies significantly among devices and underestimates the true decline in blood volume because of intravascular translocation of blood from the microcirculation (capillaries and venules), which has a lower HCT, to the larger vessels. What has shown more promise is identifying the pattern of the relative blood volume decline and using a computer-controlled biofeedback system to modify the filtration rate continuously during HD, in combination with clinical assessment such as symptoms and bioimpedance spectrometry. The absolute or total blood volume may be more predictive of intradialytic symptoms; automated online monitoring of absolute blood volume is not currently available, although mathematic models may allow derivation of the total blood volume from the relative blood volume.

Computer Controls

Solute removal during HD reduces plasma osmolarity, favoring fluid shift into cells and opposing net fluid removal. Raising the dialysate sodium concentration (NaD) helps preserve plasma osmolarity, offers the theoretic benefit of reducing IDH and cramping, and may allow continued fluid removal. Computer-controlled Na modeling (“sodium ramping”) changes the NaD automatically during HD, usually starting at 150 to 155 mEq/L and stepping down to 135 to 140 mEq/L near or at the end of HD. Because many patients have predialysis hyponatremia, Na modeling produces an overall positive Na balance during HD despite the stepdown, causing increased thirst, excessive interdialytic weight gain (IDWG), and HTN, although the latter is not a consistent finding. These unwanted effects can instead be avoided, and intradialytic symptoms perhaps diminished because of lower UF requirements, by individualizing NaD to maintain a gradient of 0 to −2 mEq/L with respect to the predialysis plasma Na concentration. Such individualization may be accomplished through the use of online conductivity monitoring or by estimating each patient’s inherent plasma Na concentration (Na set point) using an average predialysis Na. The plasma Na can be measured with direct potentiometry or mathematically corrected for the Gibbs-Donnan effect, the Na that is not available for diffusion because of trapping by negatively charged proteins. In addition, measured NaD deviates significantly from prescribed levels in more than 40% of assays, ranging from 13 mEq/L higher to 6 mEq/L lower than prescribed, making it difficult to interpret studies evaluating the clinical effects of varying NaD. Given our incomplete understanding of the effects of altering NaD on morbidity and mortality, the indiscriminate use of Na ramping and Na modeling in its current form should be abandoned, and individualization of NaD should be done cautiously (see later “Dialysate Composition—Sodium”).
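Individualization of NaD can be sketched in a few lines. The function and the default −1 mEq/L gradient below are illustrative assumptions; a real prescription would also apply the Gibbs-Donnan correction and account for the documented drift between prescribed and delivered NaD:

```python
def individualized_na_d(predialysis_na_readings, gradient=-1.0):
    """Choose a dialysate Na (NaD, mEq/L) that holds a 0 to -2 mEq/L gradient
    against the patient's Na set point, estimated here as the average of
    recent predialysis plasma Na values (the set point is fairly stable
    within a given patient)."""
    set_point = sum(predialysis_na_readings) / len(predialysis_na_readings)
    return set_point + gradient

# Three monthly predialysis values averaging 137 mEq/L yield a NaD of 136:
print(round(individualized_na_d([137, 138, 136])))  # 136
```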

Ultrafiltration modeling provides a variable rate of fluid removal during HD according to a preprogrammed profile (e.g., linear decline, stepwise changes, or exponential decline of the filtration rate with time). Theoretically, altering the UFR during HD allows time for the blood compartment to refill from the interstitial compartment, leading to less IDH and cramping. As with Na modeling, stand-alone UF modeling is relatively crude, and altering the UFR in response to blood volume monitoring may be of more benefit (see previous, “Monitoring Hematocrit and Relative Blood Volume”). The effects of Na and UF modeling may be difficult to separate because they are often used together.

Technologic advances include the development of HD machines with biofeedback systems, allowing for computer-controlled adjustments of treatment parameters based on real-time input from the online monitors. The most common system monitors blood volume and adjusts the UFR and dialysate conductivity to prevent the blood volume from decreasing below a preset value during HD. Small studies have demonstrated that this device ameliorates symptoms in both hypotension-prone and non-hypotension-prone patients.

Automated control of dialysate temperature (TD) to maintain isothermic dialysis (constant body temperature) has shown promise and is superior to thermoneutral dialysis (using a constant TD) in reducing IDH without incurring a Na load. Other studies have suggested that individualized dialysate cooling (0.5°C lower than body temperature) reduces episodes of IDH by 70%, raises mean arterial BP during HD, and abrogates both HD-associated cardiomyopathy and brain white matter changes. Although these online monitors and automated biofeedback systems are expensive, they have the potential to reduce IDH, detect vascular access dysfunction, and increase dialysis efficiency while minimizing blood sampling. By improving patient care, they may prove to be cost-effective in the long run.

Dialysate

During HD, blood in the blood compartment flows in one direction and dialysate flows in the opposite direction in the dialysate compartment (see Fig. 62.14A ). This countercurrent flow optimizes the concentration gradient for solute removal. Dialysate preparation and composition are critical to the success of HD. The solution must be prepared from properly treated water, but sterility is not required because the semipermeable membrane excludes large particles such as bacteria and viruses. The concentrations of vital solutes added to dialysate typically reflect those normally maintained by the native kidneys ( Table 62.7 ). The dialysate is essentially a physiologic salt solution that creates a gradient for removal of unwanted solutes and maintains a constant physiologic concentration of extracellular electrolytes (see later “Dialysate Composition”).

Table 62.7

Solutes Present in the Dialysate

| Solute | Concentration |
|---|---|
| Sodium | 135−145 mEq/L |
| Potassium | 0−4 mEq/L |
| Chloride | 102−106 mEq/L |
| Bicarbonate | 25−40 mEq/L |
| Acetate | 2−8 mEq/L |
| Calcium | 0−3.5 mEq/L |
| Magnesium | 0.5−1.0 mEq/L |
| Dextrose | 100−200 mg/dL |

Water Treatment

Because HD patients are exposed to as much as 600 L of dialysate water/week, treating the water used to generate dialysate is essential to avoid exposure to harmful substances such as aluminum, chloramines, fluoride, endotoxins, and bacteria. Technical advances such as high-flux dialyzers, reuse or reprocessing of dialyzers, and bicarbonate-based dialysate have made high water quality even more imperative. To avoid complications, tap water is softened, exposed to charcoal to remove contaminants such as chloramines, filtered to remove particulates, and then filtered again under high pressure (reverse osmosis or RO) to remove other dissolved contaminants ( Fig. 62.20 ). A complete review of this topic is beyond the scope of this chapter, and readers are referred to reviews on the topic.

Fig. 62.20

Schematic of a typical configuration of a reverse osmosis water treatment system.

Tap water undergoes filtration to remove gross particulate matter and then is softened before exposure to charcoal (carbon tanks) to remove contaminants such as chloramine. A second filtration process removes particulate matter, as well as microbiologic organisms. Finally, water is filtered under high pressure to remove dissolved contaminants such as aluminum (reverse osmosis). Product water is then either stored in a water tank or piped directly to each dialysis station.

Hazards associated with dialysis water

Improperly treated water contains potentially harmful substances and can cause patient injury or death. Accumulation of aluminum may cause osteomalacia, microcytic anemia, or dialysis-associated encephalopathy with dementia or movement disorders. Treating water to keep aluminum levels <10 μg/L has markedly reduced aluminum-associated diseases. Chlorine is added to municipal water as a bactericidal agent and interacts with organic material in the water to form chloramines (which may also occur naturally or be added directly to municipal water). Direct exposure of blood to chloramines causes acute hemolysis and methemoglobinemia. Fluoride can cause cardiac arrhythmias and death acutely and osteomalacia chronically. Excess Ca and magnesium (Mg) have been linked to the hard water syndrome, a constellation of symptoms including nausea, vomiting, weakness, flushing, and labile BP. Close communication with water suppliers is critical to anticipate changes in feed water quality from added chemicals and environmental conditions, such as flooding or contamination, because alterations in the water purification process may be required. With the advent of large-pore, high-flux membranes, efforts at improving water purity have focused on further reducing bacterial endotoxins, which can cause febrile reactions, hypotension, and chronic inflammation.

Essential components of water purification

Temperature-blending valves proportion incoming hot and cold tap water to yield a water temperature of about 77°F, the optimal temperature for the carbon tank and most RO membranes (below which the flow rate and efficiency of the RO system are reduced; temperatures above 100°F may damage the RO membrane). Multimedia depth filters then remove particulate matter from the water (see Fig. 62.20 ). Using cation exchange resins that contain Na, the water softener then removes Ca, Mg, and other polyvalent cations from the feed water, preventing these cations from depositing on and damaging the RO membrane. Next, granular activated carbon in the carbon filtration tank adsorbs chlorine, chloramines, and other organic substances from the water. Activated carbon is very porous and has a high affinity for organic material, but if not serviced properly or exchanged frequently, it can become contaminated with bacteria. Downstream, water is then filtered through a 5-μm cartridge filter to prevent carbon particles from fouling the RO pump and membrane. Finally, the water is delivered to the RO unit, which applies high hydrostatic pressure to force water through a highly selective semipermeable membrane that rejects 90% to 99% of monovalent ions, 95% to 99% of divalent ions, and microbiologic contaminants larger than 200 Da. The water exiting the RO unit is termed the “permeate” or “product water” and in most clinics can be used safely for HD or for mixing bicarbonate or acid concentrates.

When there is heavy ionic contamination of feed water, the product water from the RO unit is further polished with a mixed-bed ion exchange system (deionization system) and then passed through an ultrafilter to remove any bacterial contamination from the ion exchanger. The cationic resin exchanges hydrogen ions for other cations in descending order of affinity: Ca, Mg, K, Na, and then hydrogen. The anionic resin exchanges hydroxyl ions for other anions in descending order of affinity: nitrites, sulfates, nitrates, chloride, bicarbonate, hydroxyl, and fluoride. When the resin is exhausted, previously adsorbed ions, especially those of lower affinity, can elute into the effluent and reach levels that are more than 20 times their usual concentration in tap water, causing severe toxicity and even death. Because of this danger, the deionization system is rarely used alone in treating water for HD and requires stringent monitoring of product water.

Microbiology of hemodialysis systems

Despite municipal treatment of tap water and the extensive HD water treatment system, water used for HD still can become contaminated with bacteria and endotoxins, mainly with water-borne, gram-negative bacteria and nontuberculous mycobacteria (NTM). This is due to removal of normally protective chlorine and chloramines and the predisposition for biofilm deposition at low-flow and stagnation points in the water treatment circuit. Although NTM do not produce endotoxins, they are more resistant to germicides than gram-negative bacteria and can survive and multiply in product water that contains little organic matter. In 1984, the Centers for Disease Control and Prevention (CDC) found NTM in the water of 83% of surveyed dialysis centers.

Routine disinfection and surveillance of water treatment equipment, product water, and dialysate are critically important to optimize dialysis water quality. Because of dialyzer reprocessing and the use of high-flux dialyzers, the patient may be exposed to bacterial and endotoxin contaminants in improperly handled product water, either through direct contact of product water with the blood compartment during reprocessing or through backleak of endotoxin into the blood compartment during HD. Therefore high-level disinfection to kill all microorganisms (except bacterial spores) is necessary, as well as stricter standards for water quality. The Association for the Advancement of Medical Instrumentation (AAMI) has adopted the International Organization for Standardization (ISO) guidelines for dialysis water quality, recommending <100 colony-forming units (CFU)/mL of bacteria and a maximal endotoxin concentration of <0.25 endotoxin units (EU)/mL, with action levels at ≥50% of maximal levels. In addition to routinely scheduled disinfection, water treatment equipment and affected HD machines must be disinfected when action levels are detected. For ultrapure dialysate, even more stringent criteria are in place, including a bacterial count of <0.1 CFU/mL and an endotoxin level of <0.03 EU/mL. A meta-analysis has reported that ultrapure dialysate use is associated with less inflammation and oxidative stress, higher serum albumin and Hgb, and lower ESA requirement. The only RCT to evaluate the effect of ultrapure dialysate on fatal and nonfatal CV events has found no benefit, although its power to detect a difference may have been reduced by a lower-than-expected endotoxin level (0.15 ± 0.22 EU/mL) in the conventional dialysate group. Ultrapure dialysate may also be desirable because of the potential to reduce cost through decreased ESA use.
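The AAMI/ISO limits and 50% action levels lend themselves to a simple surveillance check. A sketch for standard (not ultrapure) dialysis water, with the function name and return labels as illustrative assumptions:

```python
# Limits quoted in the text for standard dialysis water: <100 CFU/mL bacteria
# and <0.25 EU/mL endotoxin, with action levels at 50% of each maximum.
def classify_water(cfu_per_ml, eu_per_ml):
    """Return 'fail', 'action', or 'pass' for a standard dialysis water sample."""
    if cfu_per_ml >= 100 or eu_per_ml >= 0.25:
        return "fail"    # out of specification: do not use; disinfect
    if cfu_per_ml >= 50 or eu_per_ml >= 0.125:
        return "action"  # action level reached: disinfect equipment
    return "pass"

print(classify_water(20, 0.05))   # pass
print(classify_water(60, 0.05))   # action
print(classify_water(120, 0.30))  # fail
```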

Dialysate contaminated with bacteria and endotoxins can cause pyrogenic reactions with chills, fever, and hypotension. Headache, myalgia, nausea, and vomiting also may be present. Typically, the symptoms begin 30 to 60 minutes into the HD treatment. The source of the reaction is unlikely to be the microorganisms per se, as they are too large to cross an intact dialyzer membrane. Instead, bacterial pyrogens such as lipopolysaccharide, peptidoglycans, exotoxins, and their fragments are thought to be responsible. Pyrogenic reactions are typically seen in association with reprocessing of dialyzers because contaminated water gains direct access to the blood compartment during reprocessing. In the absence of reuse, pyrogenic reactions are rare and occur only with high-level bacterial contamination of the dialysate or bicarbonate solution. Although the larger pore size in high-flux dialyzers may increase backfiltration and allow endotoxins to enter the blood compartment, synthetic membranes also adsorb endotoxins, thereby attenuating the effect of imperfectly processed dialysate. However, even in the absence of pyrogenic reactions, low levels of dialysate contamination with microbes may result in chronic inflammation, manifesting as higher serum C-reactive protein levels, increased oxidative stress, lower serum albumin and Hgb levels, and ESA resistance, reversed by the use of ultrapure dialysate.

Monitoring water quality

Because of the potential complications that can occur when improperly treated water is used for HD, water quality monitoring is crucial. Source and product water must be assayed routinely to ensure that product water meets standards (in the United States following AAMI guidelines adopted from the ISO) for heavy metal and other ionic contaminants. The frequency of scheduled testing depends on the quality of the water source, type of water treatment system used, and seasonal variation in chemicals added to municipal water to ensure potability.

Samples of source water, water obtained from critical points in the water treatment system, product water, dialysate, and bicarbonate solution must be cultured at least monthly to ensure that bacterial contamination is below the limits set forth by AAMI standards. In addition, water is most commonly tested with the limulus amebocyte lysate assay to determine the degree of endotoxin contamination.

Hemodialysis Adequacy

Historical Perspectives

With early HD, success was measured simply as patient survival, without metrics of how much HD was delivered. As HD evolved and the prophylactic aspect of dialysis was better appreciated, questions related to quantification of HD, in other words, its adequacy, led to the National Cooperative Dialysis Study (NCDS) funded by the National Institutes of Health (NIH), the first large study aimed at answering this question. This clinical trial of HD adequacy aimed to control the average predialysis BUN at 50 mg/dL versus 100 mg/dL. The ultimate finding was a strong association among urea clearance, Kt/V, and clinical outcomes. Subsequent observational studies have repeatedly confirmed the higher risk of mortality when the fractional clearance during each HD, expressed as Kt/V, falls below 1.2. Another controlled trial of HD dose and adequacy sponsored by the NIH in the late 1990s (the HEMO Study) showed no further benefit from increasing the dialyzer single-pool Kt/V above 1.3/treatment three times weekly. This study also showed that previously reported benefits from doses above 1.3 observed in uncontrolled studies were subject to bias from regression to the mean and from a newly recognized dose-targeting bias. Failure to achieve the targeted dose is apparently a risk factor in itself, independent of the actual dose. Together, these findings have led the medical community and the U.S. Medicare program to issue guidelines for HD adequacy that have become standards of care worldwide.

The persistently high mortality rate in the dialysis population, although often unrelated to HD itself, has spurred interest in dialysis adequacy and its methods of measurement over the years. Although discussions focused on the role of nontraditional toxins in patient outcomes are worthwhile, they should not distract from the basic understanding of how small solute removal is measured. This section reviews the rationale and methods for measuring HD adequacy, focusing on mathematic models of solute kinetics that have been effectively put into clinical practice in nearly all HD clinics. In discussing HD adequacy, it is important to distinguish the adequacy of solute removal per treatment from the global well-being of the patient, placing emphasis on the importance of a more personalized approach to HD prescriptions.

Uremia: The Syndrome Reversed by Dialysis Therapy

The clinical syndrome of “uremia” resulting from kidney failure is a toxic state caused by the accumulation of solutes normally excreted by the kidney, including water-soluble, freely filtered solutes. The relationship between this syndrome and the kidneys was not obvious in antiquity because urine output often remains unchanged even when kidney excretory function is poor. After urea was discovered more than 200 years ago, investigators noted a significant difference in serum urea concentrations between the kidney artery and vein, suggesting a role for the kidney in excreting urea. Elevated concentrations of urea and other organic solutes in the serum of patients with this syndrome suggested that this was a kidney disease, but it was not until HD reversed the syndrome that this hypothesis could be proven. Clinicians can be confident that the immediate, life-threatening aspect of uremia is a toxic state caused by small-molecule accumulation because it is rapidly reversed by HD, a process that does little else than remove small solutes by diffusion across a semipermeable membrane (therefore fulfilling Koch’s postulates).

Conventional teaching states that urea itself is not highly toxic. Prior studies demonstrate that cellular, vascular, and most organ functions are not adversely affected by urea per se. However, for the purposes of HD adequacy, we view urea as a representative solute for the numerous, much more toxic substances that share its molecular properties and behave similarly during dialysis.

Measuring Hemodialysis Adequacy

Measuring urea concentrations in the serum as a method for assessing the effectiveness or adequacy of HD treatments has been replaced by measuring the clearance of urea. Clearance of a solute is not dependent on the serum concentration of the solute, which can be influenced by its production. In the case of urea, production increases with improved protein intake, often a reflection of overall better well-being. Clearance can be measured instantaneously across the dialyzer or as an integrated parameter over time. For native kidney function, the latter is achieved by collecting timed urine specimens, with calculations based on the assumption of a steady state. For intermittent HD, collection of dialysate over an entire HD session is possible but highly impractical. Alternatively, we estimate the clearance of an HD treatment by sampling blood levels of urea before and after HD. Notably, we are not in a steady state when dealing with intermittent HD. Therefore the use of pre- and post-HD urea in determining dialysis adequacy is sometimes termed “urea kinetic modeling.” The kinetic (derived from Greek kinein, to move) nature is a fitting characterization of quantifying urea clearance on HD because urea is not only being removed but also produced. The magnitude of urea concentration reduction during each HD session can be translated to urea clearance, much like the decrease in drug levels after a loading dose can be used to measure the drug’s clearance. Application of well-established pharmacokinetic principles to urea kinetics provides an estimate of the elimination constant for urea (K/V), which is essentially the slope of the decrease in concentration expressed on a logarithmic scale, as shown in Fig. 62.21 . K is the urea clearance and V is the volume of urea distribution, which is the patient’s total body water volume.
If one incorporates the treatment time element and ignores fluid removal and generation of urea during HD, the log of the ratio of pre- to post-HD BUN values can be simply translated to Kt/V (see Fig. 62.21 ).
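The simplified relationship above can be sketched in code. This is a minimal illustration that, as stated, ignores fluid removal and urea generation during HD; the function name and BUN values are hypothetical.

```python
import math

def simple_ktv(pre_bun: float, post_bun: float) -> float:
    """Simplified single-pool Kt/V, ignoring ultrafiltration and
    urea generation during HD: Kt/V = ln(pre-HD BUN / post-HD BUN)."""
    return math.log(pre_bun / post_bun)

# Illustrative values: pre-HD BUN 70 mg/dL, post-HD BUN 20 mg/dL
print(round(simple_ktv(70, 20), 2))  # 1.25
```

A roughly 70% fall in BUN thus corresponds to a simplified Kt/V near the 1.2 minimum discussed earlier; formal modeling corrects this figure for ultrafiltration and urea generation.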

Fig. 62.21

Intradialysis urea kinetics: origin of Kt/V.

Left, The nonlinear decrease in blood urea nitrogen during dialysis (solid line) becomes a straight line when plotted on a log scale (dashed line). The fractional rate of decrease is a constant, k = K/V, where k is the elimination constant, K is the clearance, and V is the urea distribution volume. Right, The solution to the equation describing first-order kinetics shows that delivered Kt/V primarily depends on the predialysis and postdialysis blood urea nitrogen values (see text and legend for Fig. 62.20 for definition of variables shown). This oversimplified equation is expanded in Fig. 62.20 to include the other important variables.

Because substantial fluid volume is removed during most HD treatments and a significant amount of urea is generated, especially during longer treatments, a better model of urea mass balance, shown in Fig. 62.22 , is required to measure Kt/V accurately. In addition to the change in urea volume and urea generation, this model can be extended to include the interdialysis interval and the effects of residual kidney urea excretion (K ru ). The latter, in contrast to the dialyzer clearance, is a continuous clearance that has minimal effect on urea removal during HD but provides a marked benefit between treatments when dialyzer clearance is zero. In addition, solute sequestration, or delayed transport of dialyzable solutes between water compartments during HD leading to post-HD rebound (see Fig. 62.17 ), can be incorporated in the model if a second compartment is included, as shown in Fig. 62.23 . The single-compartment model (see Fig. 62.22 ), however, remains the standard for measuring dialysis in most centers, not only because of the complexities of the two-compartment model but also because the errors in the single-compartment model caused by ignoring two-compartment effects tend to cancel each other out. Therefore both single- and multicompartment models give similar results for Kt/V in the usual clinical range when HD is provided thrice weekly. A two-compartment model with formal numeric analysis is available on the Internet and may become useful for measuring nonstandard dialysis schedules, such as daily HD, or for more prolonged treatments given thrice weekly.

Fig. 62.22

Single-pool model of urea mass balance in a hemodialysis (HD) patient.

When the patient is not undergoing dialysis (most of the time for conventional HD), K d is 0 and removal is determined solely by K ru . V is the urea distribution volume, equated to body water space; C is urea concentration; and dV is the rate of fluid gain (negative during dialysis, positive between dialyses). During HD, total clearance (K) is the sum of K d and K ru . An explicit solution is available to the differential equation that describes the rate of urea accumulation or loss (dVC/dt) as the difference between generation (G) and removal (KC).

Fig. 62.23

Double-pool model of urea mass balance in a hemodialysis patient.

Addition of a second compartment to the diffusion model of urea mass balance shown in Fig. 62.22 accounts for the postdialysis rebound in urea concentration shown in Fig. 62.17 and, in general, is considered a more accurate model. K c is the coefficient of mass transfer between compartments, analogous to dialyzer K 0 A. Solution of the differential equation requires numeric analysis and is not commonly applied in dialysis clinics.

Formal Urea Kinetic Modeling

Formal urea kinetic modeling considers the above aspects of urea and its clearance, using an iterative process of fitting known data to a model until a good fit (usually <1% error) is achieved. Changes in volume from UF, urea generation, and K ru are incorporated in the modeling. Although it requires a computer or programmable calculator, formal urea modeling provides not only the urea clearance Kt/V but also urea production, which can be translated to a normalized protein catabolic rate, useful in estimating nutrition. Urea modeling can be used as the monthly monitor of HD adequacy. Equations (such as the Daugirdas II equation) that give results comparable in accuracy to formal modeling are often used instead.
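The Daugirdas second-generation (Daugirdas II) equation mentioned above estimates single-pool Kt/V from the BUN ratio, session length, ultrafiltrate volume, and post-dialysis weight. The sketch below implements that published formula; all numeric inputs are hypothetical.

```python
import math

def daugirdas_ii(pre_bun, post_bun, hours, uf_liters, post_weight_kg):
    """Daugirdas second-generation estimate of single-pool Kt/V:
    spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF/W
    where R = post/pre BUN ratio, t = session length (hours),
    UF = ultrafiltrate volume (L), W = post-dialysis weight (kg)."""
    r = post_bun / pre_bun
    return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_liters / post_weight_kg

# Illustrative values: pre 70, post 20 mg/dL, 4-h session, 3 L UF, 70 kg
print(round(daugirdas_ii(70, 20, 4.0, 3.0, 70.0), 2))  # 1.5
```

Note that the ultrafiltration term raises the estimate above the simple logarithmic ratio, reflecting the additional convective clearance.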

The first step of urea kinetic modeling is ensuring accurate pre- and post-HD BUN measurements. Pre-HD sampling of blood is straightforward, but the post-HD BUN is prone to variability, and measurement errors are more significant when the BUN is low, as with post-HD levels. Although it decreases more slowly toward the end of HD, the BUN rebounds rapidly as soon as the blood pump is stopped (see Fig. 62.17 ). The early rapid phase of upward rebound is determined by both access recirculation and cardiopulmonary recirculation. Efforts should be made to sample after access-related rebound is complete but before cardiopulmonary rebound begins. The Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines recommend slowing the blood pump to 100 mL/min for 15 seconds (to permit access rebound) or stopping the dialysate flow for 3 minutes and then drawing the sample from the dialyzer inflow port. Access recirculation dilutes the post-HD BUN, causing a falsely high Kt/V, which can endanger the patient because of inadequate dialysis. Sampling after cardiopulmonary rebound has begun gives a falsely low Kt/V.

Solute Generation

In addition to measuring HD adequacy, urea modeling allows the measurement of two patient parameters that independently influence the patient’s risk of mortality—urea generation (G) and the patient’s volume of urea distribution (V). Accumulation of urea results from both amino acid catabolism (a measure of protein nutrition) and failure of kidney excretion. Although these dual effects on urea concentrations complicate the interpretation of any single measured level, mathematic modeling of urea mass balance allows separation of the two and an estimate of urea distribution volume. Both higher urea generation rates and higher urea volumes are associated with lower mortality. For patients dialyzed thrice weekly, diurnal variations in urea generation have little effect, but for nocturnal dialysis, the reduction in urea generation at night can cause a significant error, with overestimation of Kt/V and underestimation of V, if G is modeled as a constant.

Blood concentrations are the net effect of solute generation and elimination. If one attributes uremic toxicity to the concentration of accumulated solutes (concentration-dependent toxicity), it might seem logical that the clearance (Kt/V) should sufficiently balance the generation rate to maintain a safe, low concentration. However, during the NCDS, attempts to demonstrate this relationship by reducing the dose of dialysis in patients who ate poorly caused an unfortunate vicious cycle of uremia-induced anorexia and malnutrition that eventually led to early discontinuation of the study. Similarly, observational studies have shown consistently enhanced survival in patients who eat more, even when Kt/V is held constant, and patients who generate more creatinine have a similarly higher rate of survival. It appears that the relationship between diet and toxin generation and elimination is complex and poorly understood.

Urea modeling essentially provides a measure of the urea elimination constant, which can be considered the fractional rate of urea disappearance during HD (K/V). To calculate K, one must know V, or vice versa. Because the prescribed K should be the same as the delivered K, and prescribed K can be determined from Eq. 5 , modeled V is easily determined. By convention, V is expressed after dialysis because it is less variable. Comparison of modeled V from dialysis to dialysis can be used as a quality assurance measure, and values should not differ by more than 15%. Causes of a discrepancy include access recirculation, dialyzer malfunction (from clotting or fouling of the hollow fibers), blood pump variances, and blood sampling and measurement errors.
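The treatment-to-treatment consistency check on modeled V described above can be expressed as a simple quality-assurance test; the function name, tolerance handling, and volumes below are illustrative.

```python
def v_consistency_check(v_current_l, v_previous_l, tolerance=0.15):
    """Quality-assurance check on modeled urea distribution volume (V):
    returns True when the modeled V differs from the prior treatment's
    value by no more than the stated 15% tolerance."""
    change = abs(v_current_l - v_previous_l) / v_previous_l
    return change <= tolerance

print(v_consistency_check(35.0, 37.0))  # within 15%: True
print(v_consistency_check(35.0, 45.0))  # >15% discrepancy: False
```

A failed check would prompt review of the access, dialyzer, blood pump, and sampling technique listed above.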

Urea Volume of Distribution

Several studies have shown that various measures of body size, including V, are associated independently with mortality ( Fig. 62.24 ). Survival rates in larger patients are higher than in smaller patients for reasons that are not entirely clear but may be related to nutrition and the caloric buffer afforded by muscle and fat. Because body size expressed as V is the size-normalizing factor for urea clearance in the Kt/V expression, larger patients require higher clearances and are therefore at higher risk for underdialysis. However, correction for the favorable influence of large size on mortality tends to mitigate this risk, as shown in Fig. 62.25A . Kt/V is a more powerful predictor of mortality than body size, and correction of Kt/V for the independent (and opposing) risk associated with body size renders it an even more powerful predictor of mortality ( Fig. 62.25B ).

Fig. 62.24

Risk of death as a function of dialysis dose and body size.

The risk of death in hemodialysis patients decreases with increased dialysis dose (Kt/V) and may be further stratified by urea volume as a measure of body size. Larger patients, in general, have a lower death risk.

Fig. 62.25

Risk of mortality related to body size and dialysis dose.

These data were obtained from a large observational study of 43,334 patients. (A) Hazard ratio analysis was adjusted for case mix. (B) Hazard ratio analysis included an interaction term between Kt and body mass index and was adjusted for case mix. BSA, Body surface area; L/Rx, liters per treatment.

From Lowrie EG, Li Z, Ofsthun N, Lazarus JM. Body size, dialysis dose and death risk relationships among hemodialysis patients. Kidney Int. 2002;62:1891−1897.

The HEMO study uncovered a potentially size-independent effect of sex on the response to higher doses of Kt/V. Although mortality was not affected by administering a higher dialysis dose in the 1846 randomized patients as a whole, when females were analyzed post hoc, a borderline significant improvement in outcomes was seen at the higher dose. The counterbalancing effect was a nonsignificant higher mortality in males. However, sex was difficult to separate from size because the two are so closely linked, especially regarding V. If body surface area is considered the more appropriate denominator for dosing dialysis, females, and perhaps smaller males, would clearly require more dialysis than larger males when the dose is measured as Kt/V ( Fig. 62.26A and 62.26B ). Similarly, malnourished patients who lose weight have an automatic increase in Kt/V unrelated to the effort of dialysis, simply because the denominator in the Kt/V expression decreases. This dose increase in patients at higher risk of death may explain the reverse J-shaped relationship between Kt/V and survival in observational studies. Although the latest update of the KDOQI guidelines for HD adequacy raises the idea of normalizing Kt by body surface area, the use of body weight (in essence, body water, V) to normalize dialysis dose still predominates.

Fig. 62.26

(A) Standard Kt/V in the conventional and high-dose HEMO study subjects, by sex.

(B) Surface area normalized standard Kt/V in the conventional and high-dose HEMO study subjects, by sex. Conversion to surface area was based on an anthropometric estimate of V in each patient. V, Urea distribution volume.

From Daugirdas JT, Greene T, Chertow GM, Depner TA. Can rescaling dose of dialysis to body surface area in the HEMO study explain the different responses to dose in women versus men? Clin J Am Soc Nephrol. 2010;5(9):1628−1636.

Treatment Time

Average treatment times for thrice-weekly HD are lower in the United States than in many other nations. This stems in part from patients’ desire to be at the dialysis clinic for the shortest time possible. Other times, patients shorten their treatments because of discomfort they experience toward the end of the procedure. Muscle cramps, fatigue, and general malaise increase in intensity as more fluid and solute are removed. Paradoxically, shortening the treatment typically accentuates these symptoms because the rate of removal must increase if the patient is to remain in solute and water balance. Extending treatment time (t) or increasing the dialysis frequency tends to alleviate these symptoms. Sometimes, a temporary trial of either an extended duration or increased frequency of HD is sufficient to persuade the patient.

Although the NCDS showed only a borderline significant effect of HD duration, most population studies have shown that a longer duration is associated with enhanced survival. Like the NCDS, the HEMO study failed to show a significant benefit from longer treatment time, but it did not specifically target dialysis duration, and the range of treatment times in the study was limited. Prospective observational studies and clinical experience favor prolonged treatments, which also result in lower UF rates. A prospective, cluster-randomized pragmatic study comparing usual care (3–4 hours per HD) with long (>4.25 hours per HD) duration in incident patients on thrice-weekly HD was attempted, but the study’s inability to recruit sufficiently into the longer-duration arm resulted in termination of the trial.

Alternative Measures of Dialysis Adequacy

Urea Reduction Ratio

Urea reduction ratio (URR) is simply the ratio of the change in BUN by HD to the pre-HD BUN:

URR = (C0 − C)/C0

where C0 and C are the pre- and post-HD BUN levels, respectively. This ratio has the advantage of simplicity. However, it is the least accurate measure of HD adequacy and has many limitations. For example, as the frequency of HD increases, and presumably its efficiency improves, URR decreases. It is not possible to add URR values to show a cumulative weekly effect, and as the frequency extends to continuous dialysis, URR falls to zero. URR does not take into account interdialytic fluid accumulation, urea generation, RKF, or the additional clearance afforded by UF. On the positive side, in addition to its simplicity, URR has a curvilinear relationship with Kt/V ( Fig. 62.27 ), paralleling the relationship between outcome and Kt/V. Although efforts have been made to convert Kt/V to a URR equivalent or to use the solute removal index, a more reliable index of HD dose, these approaches have not been popular. Other efforts to report the reciprocal of Kt/V as a concentration equivalent, targeting low concentrations instead of high clearances, have not been applied, partly because Kt/V has become ingrained in the practice of dialysis quantification.
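A brief sketch of the URR calculation, together with the simplified conversion to Kt/V that underlies the curvilinear relationship noted above (ignoring ultrafiltration and urea generation); the function names and BUN values are illustrative.

```python
import math

def urr(pre_bun, post_bun):
    """Urea reduction ratio: URR = (C0 - C) / C0."""
    return (pre_bun - post_bun) / pre_bun

def ktv_from_urr(ratio):
    """Simplified Kt/V from URR, ignoring ultrafiltration and urea
    generation: Kt/V = -ln(1 - URR)."""
    return -math.log(1 - ratio)

r = urr(70, 20)                  # pre-HD 70, post-HD 20 mg/dL
print(round(r, 2))                # 0.71
print(round(ktv_from_urr(r), 2))  # 1.25
```

Because of the logarithm, equal increments in URR correspond to progressively larger increments in Kt/V, which is the curvilinearity shown in Fig. 62.27.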

Fig. 62.27

Curvilinear relationship between urea reduction ratio (URR) and Kt/V, stratified by degree of ultrafiltration during dialysis.

Whereas the URR (see text) falls with increasing fluid removal during dialysis (ΔWt) from 0% to 10% of body weight, Kt/V increases. The latter more appropriately accounts for the increase in clearance caused by ultrafiltration. Curves are derived from formal urea modeling.

Conductivity Clearance

The average clearance of small, dialyzable solutes is easily derived from measurement of the pre- and post-HD BUN levels, using urea as the representative molecule. The instantaneous clearance of urea could also be measured by drawing blood samples from the dialyzer inlet and outlet, though this is not needed when using formal kinetic modeling or equations to determine treatment Kt/V. A novel approach to estimating instantaneous urea clearance across the dialyzer, which can then be used to calculate the treatment Kt/V, is the conductivity clearance method.

This method, which is entirely dialysate sided, takes advantage of the fact that Na moves across the modern dialyzer membrane in a manner nearly equivalent to urea. Thus, by measuring the dialyzer Na or ionic dialysance (capacity for ionic movement across the membrane), we can assume that urea is being cleared in a similar manner. Machines with this capability measure the change in electrical conductivity at the dialysate inlet and outlet, before and after short but abrupt changes in the dialysate concentration. With these measures of Na ionic dialysance, usually performed three to six times per HD session, we obtain an instantaneous clearance (K). With treatment time and an input for V, we can arrive at the Kt/V for that treatment. This method has the advantage of not needing blood specimens, and the results are immediately available. The use of an estimated V, usually from one of many available anthropometric calculators, is the main shortcoming of this method. Currently, guidelines do not fully support the use of conductivity clearance for routine HD adequacy monitoring.

Comparison of Hemodialysis and Peritoneal Dialysis Doses

The minimum recommended weekly dose of PD expressed as Kt/V is 1.7 (see other sections of this book for discussion of PD adequacy). Importantly, this PD dose is for continuous ambulatory PD, so it represents a continuous clearance. Although the minimum per-treatment HD dose, given thrice weekly, appears greater than the minimum PD dose, patient outcomes are similar across the two modalities, even when adjusted for the lower comorbidity of the average PD patient. Furthermore, solute kinetic analyses have shown that dialysis efficiency improves with increased frequency of treatments, and one can imagine a continuous therapy as the ultimate high-frequency treatment ( Fig. 62.28 ). These observations, together with acknowledgment of little or no benefit from more intense or more prolonged intermittent dialysis, have led to the conclusion that intermittent treatments are less efficient than continuous treatments and have stimulated efforts to define a continuous equivalent clearance expression for HD. This would allow a standard method of comparing clearance for intermittent HD of any frequency and duration, and even of comparing intermittent with continuous dialysis modalities.

Fig. 62.28

Effect of frequency on peak and average solute concentrations.

Two-compartment formal kinetic modeling of a solute with low K c predicts that both peak and mean concentrations decrease significantly as the frequency of treatments increases, despite no change in the weekly dialyzer clearance × treatment time (Kt).

Standard Clearance and Standard Kt/V

This effort to develop a common adequacy “language” that allows comparisons within and between dialysis modalities culminated in the idea of a continuous equivalent expression for urea clearance:

EKR = G/TAC

where EKR is the concept of an equivalent kidney clearance; when urea generation (G) is constant and stable, the time-averaged urea concentration (TAC) should be as well. This leads to the concepts of a standard K and a “standard Kt/V.” The latter redefines clearance as the removal rate factored for the predialysis concentration, placing more emphasis on the predialysis BUN as a risk factor for uremia. Because the predialysis BUN is always higher than the mean BUN, the standard Kt/V is always lower than the continuous urea clearance and is comparable with fractional clearances achieved with continuous PD. Despite its somewhat arbitrary definition, the matching of doses with PD has generated interest in standard Kt/V (stdKt/V) as an expression of HD dose that is independent of frequency. Clinical application of HD more frequently than thrice weekly has generated a need for quantification that accounts for the improved efficiency of more frequent treatments. For patients in a steady state of urea mass balance, in which generation equals removal, dialyzed according to any schedule of treatments, stdKt/V is defined as follows:

stdKt/V = (urea removal rate)/(peak concentration) = G/(average predialysis BUN)

where G is the patient’s urea generation rate derived from formal urea modeling. KDOQI guidelines call for a minimum stdKt/V of 2.0/week, significantly higher than the minimum PD dose but considered safe in the absence of controlled trials of dialysis frequency. An explicit mathematic formula for calculating standard Kt/V based on per-HD single-pool Kt/V has greatly simplified the calculation, and subsequent refinements have allowed inclusion of the effects of UF during dialysis and the patient’s residual native kidney urea clearance:

stdKt/V = [10080 × (1 − e^(−eKt/V))/t] / [(1 − e^(−eKt/V))/(eKt/V) + 10080/(N × t) − 1]

where N is the number of dialysis treatments per week, t is the treatment time in minutes, and eKt/V is the equilibrated Kt/V per treatment.
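The standard Kt/V formula above can be implemented directly. The sketch below assumes eKt/V per treatment, session length t in minutes, and N treatments per week as inputs; the example values are illustrative.

```python
import math

def std_ktv(e_ktv, t_min, n_per_week):
    """Fixed-volume standard Kt/V from per-treatment equilibrated Kt/V,
    session length t (minutes), and N treatments per week:
    stdKt/V = [10080*(1 - exp(-eKt/V))/t] /
              [(1 - exp(-eKt/V))/(eKt/V) + 10080/(N*t) - 1]
    (10080 = minutes per week)."""
    removed = 1 - math.exp(-e_ktv)
    num = 10080 * removed / t_min
    den = removed / e_ktv + 10080 / (n_per_week * t_min) - 1
    return num / den

# Illustrative case: eKt/V 1.2, 240-min sessions, 3 per week
print(round(std_ktv(1.2, 240, 3), 2))  # 2.16
```

A conventional thrice-weekly schedule at eKt/V 1.2 thus lands just above the 2.0/week KDOQI minimum noted above.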

Because urea is relatively nontoxic, peak levels probably do not mediate uremic toxicity, so an alternative explanation for the inefficiency of infrequent HD was developed on the basis of the sequestration of compartmentalized solutes other than urea. A two-compartment model that accounts for sequestration gives a pattern of average clearances that closely match stdKt/V values, as in Fig. 62.29 , providing further theoretic support for the clinical application of stdKt/V in recipients of frequent HD.

Fig. 62.29

Effect of increased dose versus increased frequency on effective clearance.

The effective clearance, expressed as “standard Kt/V” on the vertical axis, tends to plateau, despite increases in dialyzer clearance, expressed as single-pool Kt/V (spKt/V) on the horizontal axis. Two different models of solute kinetics show similar diminishing returns as the delivered dose increases. A marked increase in effective clearance can be achieved only by increasing the frequency of treatments.

Nocturnal Hemodialysis and Home Hemodialysis

An alternative to thrice-weekly, in-center HD is nocturnal dialysis. As the term implies, HD is performed during the night, usually while the patient sleeps. Nocturnal HD can be performed in center, thrice weekly, with the major advantage of increased treatment duration of 6 to 8 hours. Observational studies have found regression of LV mass, better bone mineral indices, and improved quality of life with this modality.

Home HD, which can be applied during the waking hours for 2 to 4 hours per treatment, 5 or 6 times weekly, provides more frequent treatments while controlling costs and patient burden. Nocturnal home HD is a variant of this, with longer treatment times and perhaps only three or four treatments/week. The advantage of more frequent and/or longer treatments includes patient freedom during the day to conduct normal life activities. Several studies have shown improvements in BP, nutrition, stamina, and health-related quality of life, which are presumably responsible for patient acceptance of a procedure that requires considerably more patient effort and time than standard in-center HD. Controlled trials have confirmed improvements in LV mass, BP, need for phosphate binders, and some aspects of quality of life. However, recruitment of patients for home training was difficult, more vascular access interventions were required, and the nocturnal group of patients in one study experienced a more rapid decline in RKF. Observational studies have reported similar patient and treatment survival rates, regardless of home modality.

Short Daily Hemodialysis

The incentive to shorten HD duration is often patient generated, but when combined with an increase in frequency, weekly HD adequacy may be significantly improved. For example, per Fig. 62.29 , thrice-weekly HD with a treatment Kt/V of 1.2 provides weekly small-solute clearance equivalent to that of five-times-weekly HD with a treatment Kt/V of 0.50, which would be a substantially shorter treatment. The increased frequency also allows more balanced fluid removal over the week. Controlled studies of short daily HD have shown improvements in cardiac hypertrophy, health-related quality of life, and cardiac function. All studies of short daily HD have shown decreases in IDWG and predialysis BP, but such changes are to be expected when the interdialytic interval is shortened to nearly half that of thrice-weekly HD.

Accounting for Native Kidney Function

Clearance of urea, the principal metric of dialyzer function (K d ), can be augmented by residual kidney urea clearance (K ru ). Addition of the two urea clearances is reasonable for continuous dialysis therapies (i.e., PD) but cannot be calculated for intermittently dialyzed patients in whom the two clearances do not occur simultaneously. Because continuous clearances are more efficient than intermittent clearances, an adjustment is required before K d and K ru can be summed. One can inflate K ru so that it can be added to K d as a total intermittent clearance. More commonly, K d is deflated to a continuous clearance, stdKt/V, and then K ru is added to it. For example, if the patient’s HD achieved stdKt/V determined by Eq. 12 is 2.2/week, the K ru is 4 mL/min, and the patient’s V is 35 L, the two can be added to yield a total continuous urea clearance in the form of native kidney function (units of mL/min) or the form of dialysis weekly standard Kt/V:

2.2/week × (35,000 mL / 10,080 min/week) + 4 mL/min = 11.6 mL/min

or

4 mL/min × (10,080 min/week / 35,000 mL) + 2.2/week = 3.4 weekly stdKt/V
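The two conversions in the worked example above can be sketched as follows; the function names are illustrative, and the inputs are the values used in the text (stdKt/V 2.2/week, V = 35 L, K ru = 4 mL/min).

```python
MIN_PER_WEEK = 10080  # minutes per week

def total_clearance_ml_min(std_ktv_per_week, v_ml, kru_ml_min):
    """Deflate the dialysis stdKt/V to a continuous clearance (mL/min)
    and add residual kidney urea clearance (Kru)."""
    return std_ktv_per_week * v_ml / MIN_PER_WEEK + kru_ml_min

def total_std_ktv(std_ktv_per_week, v_ml, kru_ml_min):
    """Inflate Kru to weekly stdKt/V units and add the dialysis stdKt/V."""
    return kru_ml_min * MIN_PER_WEEK / v_ml + std_ktv_per_week

# Worked example from the text
print(round(total_clearance_ml_min(2.2, 35000, 4), 1))  # 11.6 (mL/min)
print(round(total_std_ktv(2.2, 35000, 4), 1))           # 3.4 (weekly stdKt/V)
```

Both forms express the same total continuous urea clearance, one in native-kidney units and one in dialysis units.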

RKF confers a survival advantage far in excess of that associated with the dialyzer’s urea clearance. Per unit of clearance, RKF is “worth” more than that provided by the dialyzer, most likely because of continued excretion of solutes that are eliminated poorly, if at all, by HD, as well as additional salt and water excretion between HD sessions and maintenance of some kidney synthetic functions. Studies suggest that RKF may last longer than realized, even in patients on HD. Additionally, prescribing HD to complement the amount of K ru , in a schema now termed “incremental HD,” may slow the decline of native kidney function; incorporation of K ru into the HD prescription remains underused in many incident HD patients.

The Dialysis Prescription

Goals of Hemodialysis

The goal of HD is to replace the kidneys’ excretory function. To accomplish this, blood and dialysate are circulated in opposite directions (countercurrent) on opposite sides of a semipermeable membrane in the dialyzer (see Fig. 62.14A ), allowing unwanted solutes such as potassium (K), urea, and phosphorus (P) to diffuse from the blood into the dialysate and permitting addition of solutes such as bicarbonate and Ca from the dialysate into the blood. The solute concentrations in dialysate are selected with the goal of restoring the levels normally maintained by the native kidneys (see Table 62.7 ). The elimination of excess volume is achieved via ultrafiltration (UF) by controlling the hydrostatic pressure gradient across the semipermeable membrane (see “Dialysate Circuit”). The rate of solute and fluid accumulation varies and depends on each patient’s nutritional and metabolic status, adherence to dietary restrictions, and RKF. Thus the HD prescription must be individualized. The components of the HD prescription that may be adjusted are listed in Box 62.1 .

Box 62.1

Components of the Dialysis Prescription

Duration

Frequency

Vascular access

Dialyzer (membrane, configuration, surface area, sterilization method)

Blood flow rate

Dialysate flow rate

Ultrafiltration rate

Dialysate composition (see Table 62.7 )

Anticoagulation

Dialysate temperature

Intradialytic medications

Hemodialysis Session Length and Frequency

After optimizing Q b and Q d and selecting a dialyzer with a large mass transfer coefficient, the clearance of any solute, such as urea, can be augmented with longer or more frequent HD sessions. However, because diffusive solute clearance depends on the solute concentration on the blood side, the efficiency of solute removal declines over the course of the HD session, yielding diminishing returns for total solute removal, as suggested by urea concentrations, when HD treatments extend beyond 4 to 5 hours (see “Hemodialysis Adequacy”). Conversely, reducing session length below 3 hours accentuates the effects of intermittence, exacerbates solute disequilibrium, reduces clearance of larger molecules for which removal is more time dependent (such as β 2 M), increases the UFR, and increases the potential for hypotension and myocardial stunning. Although more frequent HD lessens the impact of rapid solute removal and improves clearance, it incurs added expense and resource use, more vascular access dysfunction, and potentially patient and caregiver burnout.

Additional benefits of longer or more frequent HD sessions include optimal volume homeostasis and enhanced removal of high-molecular-weight, sequestered, or protein-bound solutes (see “Hemodialysis Adequacy”). Longer or more frequent HD reduces the UFR and may reduce intradialytic symptoms (see “Complications for Patients on Maintenance Hemodialysis”), decrease postdialysis fatigue, improve BP control, and ameliorate myocardial stunning. Sequestered solutes such as phosphate require more time to equilibrate among the various volume compartments, leading to improved total removal and lower serum concentrations with more HD time. More frequent HD also may mitigate the greater CV morbidity and mortality observed at the end of the long interdialytic interval in patients receiving conventional thrice-weekly HD. The Frequent Hemodialysis Network (FHN) Daily Trial, the largest RCT to compare thrice-weekly with 6-days/week in-center HD, confirmed that more frequent HD improves BP and phosphate control, decreases LV mass and LV end-diastolic volume, and results in favorable effects on the composite endpoints of death or change in LV mass and death or change in self-reported physical health. However, more frequent HD did not significantly improve cognitive function, depressive symptoms, or serum albumin, nor did it decrease use of ESAs. The FHN and Canadian studies comparing frequent nocturnal HD with thrice-weekly HD, hampered by small numbers of trial participants, demonstrated beneficial effects only on BP and phosphate control and possibly LV mass. Because of the increased burden and cost of longer or more frequent HD sessions, along with concerns regarding vascular access dysfunction with more frequent HD and (in the smaller, nocturnal trial) more rapid decline in RKF, more frequent HD has not been routinely adopted. The current conventional practice in the United States is to prescribe HD thrice weekly for 3 to 4 hours per session.
Longer duration or more frequent HD is often used for larger patients, those with severe HTN not responding to maximal antihypertensive therapy, or those with volume overload and IDH preventing fluid removal. Prolonging session length and/or adding one session of HD or UF to avoid the long interdialytic interval can help patients who are struggling to maintain their health and well-being on a conventional thrice-weekly HD schedule, while possibly mitigating risks that might develop with daily or near-daily therapy.

Dialyzer Choice

In choosing a dialyzer, the most critical determinants are (1) capacity for solute clearance, (2) capacity for fluid removal, and (3) degree of biocompatibility. The ideal dialyzer membrane would have high clearance of low-molecular-weight (LMW) and mid-molecular-weight uremic toxins, adequate UF, high biocompatibility, and a small blood volume compartment to reduce adverse hemodynamic effects.

Urea is the solute most often used to evaluate dialyzer solute clearance characteristics because of its relevance to kinetic models of dialysis adequacy (see “Hemodialysis Adequacy”). In clinical practice, physicians rely on industry-derived determinations of in vitro dialyzer clearance of LMW and mid-molecular-weight solutes. Gibbs-Donnan effects, membrane adsorption of solute, protein binding of solute, and solute aggregation are not taken into account in determining in vitro dialyzer clearances and will reduce in vivo clearances. The variable relation between a solute’s diffusive and convective clearance (removal by UF) further complicates the determination of solute clearance of different dialyzers. Solutes larger than 300 Da have lower diffusive clearance than smaller solutes, such as urea and K, and may rely primarily on convective clearance. For patients with large IDWG requiring more UF during each HD session, simple comparisons of the in vitro diffusive solute clearances may be misleading.

Another factor in dialyzer selection is its capacity to remove fluid or ultrafiltration coefficient (K UF ), measured in mL/min/mm Hg. The manufacturer performs in vitro tests to determine the K UF of each dialyzer model; in vivo values may vary by as much as 10% to 20%.

As discussed (see “Membrane Biocompatibility”), dialyzer membranes variably activate the coagulation cascade and blood components, with synthetic membranes generally being the most inert and hence most biocompatible, but even synthetic membranes vary in the degree of biocompatibility. Additionally, activated thrombin is adsorbed on the dialyzer membrane, creating a nidus for platelet adhesion and further thrombin deposition. The propensity for thrombogenesis may be another important factor in dialyzer selection, especially when anticoagulation during HD is not feasible. It is unclear whether dialyzers bonded with heparin will reduce thrombosis during heparin-free dialysis. A randomized crossover study suggested that such dialyzers are superior to both saline flushes and infusion in preventing intradialytic thrombosis, whereas others have found comparable thrombotic rates with the use of saline flushes or a polysulfone membrane.

An additional consideration in dialyzer selection is whether it will be reused, because the chemicals used in reprocessing dialyzers may damage some membranes. Bleach, commonly used to strip protein off the membrane and improve dialyzer appearance, may increase the pore size of some synthetic membranes after repeated use. This results in plasma protein loss during each HD session that rivals the loss seen in nephrotic patients. Heat disinfection may result in cracks in the dialyzer headers.

Blood and Dialysate Flow Rates

Configuring Q d countercurrent to Q b maximizes the concentration gradient throughout the length of the dialyzer (see Fig. 62.14A and Table 62.5 ). When flows are in the same direction (co-current), small solute clearance decreases by about 10%. Increasing Q d reduces the accumulation of waste products in the dialysate and provides a higher gradient between blood and dialysate for optimal diffusion. Dialysate flowing along the membrane tends to adhere to it to create an unstirred layer, or boundary layer, which reduces the rate of diffusion across the membrane, but this effect decreases with higher Q d . Dialysate also tends to channel or move along the path of least resistance (streaming effect), resulting in nonuniform flow and bypassing some of the membrane area. As Q d increases or turbulence occurs at the membrane surface, the unstirred layer becomes thinner and channeling is minimized. This also increases the dialyzer’s mass transfer-area coefficient or K 0 A, although the effect is less in vivo than in vitro. These findings prompted an increase in Q d from 500 to 800 mL/min when the Q b is prescribed at 350 to 500 mL/min. Advances in dialyzer technology led to modification of the hollow fiber shape and insertion of inert spacer yarns to reduce channeling and unstirred layers and further improve dialyzer performance. With the newer dialyzers, increasing Q d above 600 mL/min yields minimal increases in urea, phosphate, and β 2 M clearance but may still have a significant impact on the clearance of protein-bound solutes.
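The diminishing return from raising Q d can be sketched with the classic countercurrent (Michaels) relation between clearance, flows, and K 0 A. The relation and the K 0 A value of 1000 mL/min used here are illustrative assumptions for a high-efficiency dialyzer, not figures from this chapter, and they describe idealized aqueous in vitro conditions.

```python
import math

def diffusive_clearance(k0a, qb, qd):
    """Countercurrent diffusive clearance (mL/min) from the classic
    mass transfer-area coefficient relation (aqueous, in vitro)."""
    if qb == qd:                          # limiting case of equal flows
        return qb * k0a / (k0a + qb)
    n = k0a / qb                          # number of transfer units
    r = qb / qd                           # flow ratio
    e = math.exp(n * (1 - r))
    return qb * (e - 1) / (e - r)

k_at_500 = diffusive_clearance(1000, 400, 500)   # Qd = 500 mL/min
k_at_800 = diffusive_clearance(1000, 400, 800)   # Qd = 800 mL/min
```

For a Q b of 400 mL/min, raising Q d from 500 to 800 mL/min increases the modeled urea clearance by only about 9%, in keeping with the modest gains described above for modern dialyzers.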

Dialyzer blood flow usually ranges from 200 to 500 mL/min, depending on the type of vascular access, and influences the efficiency of solute removal (see Table 62.5 ). As Q b increases, more solute is presented per minute to the membrane and solute removal increases. Urea removal increases steeply as Q b increases to 300 mL/min, but the rate of increase is less steep as Q b approaches 400 to 500 mL/min because of increased resistance and turbulence within the hollow fibers, resulting in nonlinear flow and reduced clearance. Compared with the dialysate side, boundary layer and streaming effects are less prominent on the blood side because of the geometric advantages of flow in hollow fibers, the scrubbing effects of RBCs, and less variance in Q b .

Anticoagulation

Blood clotting during HD results in blood loss and reduces solute clearance through decreased dialyzer surface area. To prevent clotting, an anticoagulant is usually delivered into the blood circuit before the dialyzer via a peristaltic pump or syringe pump.

Unfractionated heparin is the most commonly used anticoagulant in the United States. It may be given as a bolus at the start of HD (common fixed dose 1000–4000 IU or weight-based dose 25–50 IU/kg) followed by a continuous infusion (1000–1500 IU/hour) until 15 to 60 minutes before the end of the session (terminated sooner in patients with an AV access), or as intermittent boluses as needed. Disadvantages of the bolus method include an increase in nursing time and episodes of overanticoagulation and underanticoagulation. In patients at risk of bleeding ( Table 62.8 ), low-dose heparin (500–1000 IU bolus followed by 500–750 IU/hour), regional anticoagulation, dialyzers coated with heparin, or no anticoagulation may be appropriate.
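The weight-based dosing arithmetic quoted above can be illustrated with a small helper. This is a sketch, not dosing guidance: the rounding to the nearest 500 IU is our own assumption for syringe practicality, and the 25–50 IU/kg bounds come from the range quoted in the text.

```python
def heparin_loading_dose(weight_kg, iu_per_kg=25):
    """Weight-based heparin bolus using the 25-50 IU/kg range quoted
    in the text, rounded to the nearest 500 IU (the rounding step is
    an assumption, not from the chapter). Illustrative only."""
    if not 25 <= iu_per_kg <= 50:
        raise ValueError("outside the quoted 25-50 IU/kg range")
    return round(weight_kg * iu_per_kg / 500) * 500
```

For example, heparin_loading_dose(80) yields 2000 IU, which falls within the common fixed-dose range of 1000–4000 IU mentioned above.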

Table 62.8

Guidelines for Anticoagulation in Hemodialysis Patients at High Risk for Serious Bleeding

Anticoagulation for Hemodialysis: Clinical Condition

No anticoagulation or regional anticoagulation:
Active bleeding
Recent intracerebral hemorrhage
Significant coagulopathy or thrombocytopenia
Major surgery within 7 days
Intracranial surgery within 14 days
Biopsy of visceral organ within 72 hours
Pericarditis

Low-dose heparin:
Major surgery beyond 7 days
Biopsy of visceral organ beyond 72 hours
Minor surgery 8 hours prior
Minor surgery within 72 hours

Low-dose heparin or no anticoagulation:
Major surgery 8 hours prior

In regional anticoagulation, the anticoagulant is infused into the arterial line of the blood circuit before the dialyzer, followed by infusion of a neutralizing agent into the venous line after the dialyzer. Regional citrate anticoagulation, a common strategy in continuous kidney replacement therapy, uses citrate as the anticoagulant and Ca as the neutralizing agent, with a Ca-free dialysate. Citrate binds Ca in the blood, an important cofactor in the coagulation cascade, thereby inhibiting clotting in the dialyzer. Infusion of Ca after the dialyzer restores coagulation. Regional anticoagulation also may be accomplished with heparin as the anticoagulant and protamine as the reversing agent. Both methods are labor intensive and prone to error in inexperienced hands, requiring frequent monitoring of ionized Ca or partial thromboplastin time when using the citrate-Ca or heparin-protamine combination, respectively. Citrate anticoagulation can result in hypocalcemia if Ca replacement is inadequate and in metabolic alkalosis as citrate is metabolized, although in intermittent short-duration HD metabolic alkalosis may not be an issue. Rebound of anticoagulation may be seen after completion of HD with regional heparinization because heparin has a longer half-life than protamine. Because of the monitoring required and the risk of serious complications, regional anticoagulation is not commonly used in the outpatient HD setting. However, if a simplified treatment protocol could be perfected, regional citrate anticoagulation would be desirable in the outpatient setting because citrate may reduce inflammation, lower bleeding risk, and improve clearance through less dialyzer clotting when compared with heparin. Currently, low-dose heparin and anticoagulation-free HD remain the more commonly used strategies in outpatients.

During anticoagulation-free dialysis, several strategies may help prevent clotting: (1) rinsing the circuit before HD with heparinized saline; (2) using a less thrombogenic dialyzer; (3) flushing the circuit with 100 to 200 mL of 0.9% sodium chloride every 30 minutes during HD; (4) avoiding blood or platelet transfusions through the circuit; (5) maintaining a high Q b to decrease sludging of blood in the hollow fibers; and (6) limiting UF as feasible, because hemoconcentration in the hollow fibers increases thrombotic risk. In the hypercoagulable patient, or if higher Q b and limited UF are not possible, these measures are unlikely to prevent clotting. The remaining options, then, are regional citrate anticoagulation or the use of heparin-coated dialyzers, although the latter may be inferior to citrate in reducing dialyzer clotting.

Alternative anticoagulants include low-molecular-weight heparin (LMWH), hirudins, prostacyclin, dermatan sulfate, and argatroban. Of these, LMWH is more widely used in Europe, as the complexity of use, expense, lack of sufficient experience, and equivalency to heparin have deterred widespread use of the other anticoagulants. For the rare patient with confirmed heparin-induced thrombocytopenia, lepirudin, bivalirudin, argatroban, and citrate anticoagulation are viable alternatives. Finally, substituting citric acid for acetic acid in the dialysate may augment the effect of heparin use, improve clearance, and increase dialyzer reuse, presumably because of decreased clotting, but may increase cramps and hypotension.

Dialysate Composition

The dialysate composition is crucial to attaining desired blood purification and to achieving fluid and electrolyte homeostasis. To reach these endpoints, dialysate contains the solutes listed in Table 62.7 in concentrations comparable with those of plasma. Addition of electrolytes and glucose to dialysate reduces or eliminates their concentration gradients and prevents excessive loss during HD. Potassium is nearly always individualized; Na, Ca, and bicarbonate concentrations may also be individualized, although they may be standardized for most patients in a facility where centralized dialysate production and distribution are used. Because the dialysate glucose concentration is comparable with plasma, osmotic forces do not drive fluid removal as they do in PD. Most modern HD platforms use a three-stream method of creating the final dialysate, mixing a large volume of treated water with small amounts of acid and bicarbonate concentrates. The acid concentrate contains Na, K, Cl, Mg, Ca, dextrose, and a small amount of acetate. The bicarbonate concentrate contains Na, Cl, and bicarbonate.
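The three-stream proportioning described above is, at its core, a simple mass balance: each concentrate contributes its solute diluted by the total mixed volume. The 1 : 1.72 : 42.28 mixing ratio and the concentrate strengths below are hypothetical values chosen only to illustrate the arithmetic; real proportioning ratios and concentrate compositions are machine and manufacturer specific.

```python
def final_concentration(c_acid, c_bic, parts=(1.0, 1.72, 42.28)):
    """Final dialysate concentration of one solute from a three-stream
    mix: each concentrate's contribution is diluted by the total parts.
    parts = (acid, bicarbonate, water) volume ratios (hypothetical)."""
    pa, pb, pw = parts
    total = pa + pb + pw
    return (c_acid * pa + c_bic * pb) / total

# hypothetical concentrate Na contents (mEq/L) for illustration only
na_final = final_concentration(4590, 975)
```

With these assumed values the final dialysate Na comes to roughly 139 mEq/L; the same mass balance applies to any solute carried by either concentrate.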

Sodium

Because Na is the major determinant of the tonicity of extracellular fluids, the dialysate sodium concentration (Na D ) influences CV stability during HD. Historically, Na D was kept lower than the serum Na concentration (SNa), at 130 to 135 mEq/L, to facilitate diffusive Na loss during HD and prevent interdialytic HTN, exaggerated thirst, and excessive IDWG. However, with the advent of high-flux dialyzers in the late 1960s and more efficient solute removal, headaches, nausea, vomiting, seizures, hypotension, and cramps became more common and were attributed to the low-Na dialysate, but were more likely caused by the use of acetate as a source of base (see later). This prompted a progressive increase in Na D —first to that of SNa and subsequently higher—with an improvement in symptoms. The pendulum has now swung back, with some, but not all, studies demonstrating that a high Na D leads to thirst, IDWG, and HTN, leading to a resurgent interest in reducing Na D concentrations and abandoning the use of Na modeling or “ramping” (see “Components of the Extracorporeal Circuit—Computer Controls”) in most patients. To complicate matters further, data from DOPPS have suggested a differential effect of Na D on outcomes; HD patients with the lowest predialysis SNa have lower mortality when dialyzed against a high Na D , despite an increase in IDWG.

The ongoing debate and uncertainty regarding optimal Na D stem from multiple confounding factors, such as the lack of RCTs, inaccuracy of delivered versus prescribed Na D , differences in dietary Na intake and urinary Na excretion, varying tissue Na stores (which affect vascular stiffness), and differences in predialysis plasma osmolarity not attributable to plasma Na, which can result in hypotension during HD as osmolarity decreases. Conflicting data suggest that one size does not fit all, leading to growing interest in an individualized approach. Although a computer-controlled biofeedback system using conductivity to lower the plasma Na level to 135 mEq/L may offer the added benefits of improved fluid balance and BP control without sacrificing hemodynamic stability, these methods add complexity and/or increase demands on staff time. A reasonable alternative may be to apply a constant Na D of 136–138 mEq/L empirically or to align the Na D with the average predialysis SNa to achieve the same ends, because many dialysis patients tend to be hyponatremic.

Potassium

Unlike Na, only 2% of the 3000 to 3500 mEq of total body K is distributed in the extracellular space. Although colonic excretion of K increases threefold in patients with ESKD and eliminates about 30% of dietary intake, the remaining K accumulates between HD treatments and can become life threatening. By using a dialysate potassium concentration (K D ) lower than that of plasma, excess K is removed during HD, mainly through diffusion down its concentration gradient. However, K flux from the intracellular to extracellular compartment is usually slower than the efflux of K into the dialysate. The use of a high dialysate bicarbonate concentration (BIC D ) and/or creating a large bicarbonate gradient during HD may enhance K shifting into the intracellular compartment, potentially creating significant intradialytic hypokalemia; this is followed by ∼30% rebound in the serum K concentration (SK) 3 to 4 hours after completion of HD. Life-threatening hypokalemia typically occurs during the first 2 hours of HD, when a high predialysis K favors a precipitous decline in its concentration, leading to arrhythmias through hyperpolarization of cardiac membrane potential, QT prolongation, and increase in ventricular late potentials.

Minimizing the risk for intradialytic hypokalemia and postdialysis rebound is made more complex by the high variability in K removal among patients and between treatments for the same patient despite an identical HD prescription. The intracellular distribution of K leads to a variable volume of distribution (V D ) such that the greater the total body K content, the lower the V D and the higher the fractional decline in K concentration during HD. Factors such as amelioration of acidosis, stimulation of insulin release by dialysate glucose, release of catecholamines in response to hemodynamic events, and decline of plasma tonicity all favor intracellular shifting, thus reducing the gradient for K removal during HD.

Several large epidemiologic studies have sought to clarify the risks associated with varying K D . The optimal predialysis SK with respect to survival appears to be 4.6 to 5.3 mEq/L, with poor nutritional status contributing to death at lower SK and fatal arrhythmias at predialysis SK higher than 5.6 to 6 mEq/L. Dialyzing against a K D lower than 2 or 3 mEq/L appears to increase the risk for sudden death, especially in patients with a predialysis SK <5 mEq/L. A more recent DOPPS study did not find a difference in risk for sudden death between K D concentrations of 2 mEq/L versus 3 mEq/L and was unable to interpret data for K D lower than 2 mEq/L because of the small number of patients in this category, likely reflecting changes in practice since the previous studies. Although some studies have suggested a survival benefit in hyperkalemic patients dialyzed against low K D , other studies have found no difference in survival, even in patients with predialysis SK levels >6.5 mEq/L. In addition, lower K D seems to have little effect on predialysis SK, given the 2- to 3-day interval in between sessions. Individualizing the K D , depending on the unique situation of each patient, may be crucial in navigating between increased mortality from predialysis hyperkalemia and sudden death from hypokalemia during and after each HD session.

The prescribed K D is guided by the predialysis SK and the considerations discussed earlier. Because of increased sudden death during and after HD when using a K D of 0 mEq/L, presumably due to a rapid decline in SK, its use should be abandoned. Most patients should dialyze against a K D of 2 or 3 mEq/L. Patients with increased total body K from diet, medications, hemolysis, tissue breakdown, catabolism, or gastrointestinal (GI) bleeding may require a lower K D . However, concentrations of 1 mEq/L should be used only when a compelling reason exists, because of the higher risk for arrhythmias and death, and only after exhausting all efforts targeting dietary K restriction, use of K exchange resins, and discontinuation of medications that interfere with aldosterone production and GI elimination of K (e.g., ACE inhibitors, angiotensin receptor blockers [ARBs], and aldosterone antagonists). Patients on digoxin in particular must dialyze against a K D of at least 2 mEq/L because of the greater propensity for digoxin toxicity and death with predialysis SK <4.3 mEq/L and intradialytic hypokalemia. Potassium modeling with a gradual stepdown in K D , keeping the blood to K D gradient constant, may optimize the rate of removal and minimize the risk for arrhythmias. However, experience with and data for this approach are scant and consist of small studies using electrocardiography to detect repolarization abnormalities (e.g., prolonged QT interval or QT dispersion) as surrogate markers for sudden death. The validity of these tools as surrogate markers has been questioned in the cardiology literature. Randomized trials are needed to inform this dilemma.
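The qualitative rules above (never 0 mEq/L, usually 2 or 3, reserve 1 mEq/L for compelling refractory hyperkalemia, at least 2 mEq/L on digoxin) can be collapsed into a rule-of-thumb helper. The SK cut point used below is our own illustrative assumption layered on the chapter's guidance; the text does not give an explicit SK-to-K D mapping.

```python
def suggest_dialysate_k(pre_sk, on_digoxin=False, compelling_hyperkalemia=False):
    """Illustrative K_D (mEq/L) selection following the text's
    qualitative rules; the 4.5 mEq/L cut point is an assumption."""
    if compelling_hyperkalemia and not on_digoxin:
        return 1            # only after diet/resin/medication measures fail
    kd = 3 if pre_sk < 4.5 else 2   # hypothetical threshold
    if on_digoxin and kd < 2:
        kd = 2              # never below 2 mEq/L on digoxin
    return kd
```

The helper encodes the text's constraints but not clinical judgment; it is a teaching sketch, not a prescribing tool.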

With the availability of new oral K binding resins—patiromer sorbitex calcium and sodium zirconium cyclosilicate—usually taken on nondialysis days, many patients are able to remain on ACE inhibitors, ARBs, and aldosterone antagonists. The use of oral sodium polystyrene sulfonate has largely fallen out of favor given the rare but serious side effect of intestinal necrosis and generally poor tolerability.

Calcium

Historically, patients with ESKD were dialyzed against a higher Ca concentration (Ca D ) of 3 to 3.5 mEq/L (1.5–1.75 mmol/L) to help control hyperparathyroidism and prevent Ca loss and subsequent bone mineral loss. However, with the shift from aluminum to Ca-containing phosphate binders and the increased use of vitamin D analogs in the 1980s and 1990s came more frequent hypercalcemia and concern for accelerated vascular calcification, so the trend shifted to a lower Ca D . Although KDIGO and KDOQI guidelines currently recommend a Ca D of 2.5–3 mEq/L (1.25–1.5 mmol/L), these recommendations are mostly opinion based, reflecting limited understanding of Ca mass balance in patients undergoing HD. Data from retrospective and small randomized studies have suggested that 2.5 mEq/L is the fulcrum for Ca D , below which Ca is removed from the patient and above which Ca diffuses into the patient during HD, although there is wide variability among patients. Patients whose Ca D was reduced from 3–3.5 mEq/L down to ≤2.5 mEq/L had decreased serum calcium (SCa) and higher parathyroid hormone (PTH) concentrations, correction of low bone turnover (adynamic bone disease), and improvement in vascular calcification, although other studies have reported unchanged or worsening vascular calcification with lower Ca D . Vascular calcification was assessed variably with measures of aortic calcification, coronary artery calcification, arterial stiffness, carotid intima-media thickness, and carotid femoral pulse wave pressures. None of the studies addressed interdialytic Ca mass balance (which may vary with the selection of available phosphorus binders, vitamin D analogs, and calcimimetic agents) and its contribution to vascular calcification.

Dialysate Ca may also affect hemodynamic stability during HD by lowering ionized Ca concentrations, resulting in impaired LV contractility and possibly reduced peripheral vascular tone. Levels <2.5 mEq/L or a higher SCa to Ca D gradient are associated with an increased risk of IDH, potentially placing patients at risk of myocardial stunning and increased risk of sudden death. Using a higher BIC D may exacerbate IDH because an increase in pH during HD may further reduce the ionized Ca concentration, as well as affect the proportioning of Ca D from concentrate.

Whether or how vascular calcification and IDH relate to morbidity and mortality is not clear. Reports from studies of patients treated with lower Ca D (≤2.5 mEq/L) have ranged from increased sudden death or increased heart failure hospitalization to enhanced or no difference in survival. Most of these studies were small; the largest involved more than 43,000 patients but used a retrospective case-control design with potential for residual confounding. Phosphorus is a potential confounder that likely contributes to vascular calcification yet has not been adequately addressed in studies on Ca D .

Given the complexities discussed and the observed wide variations in predialysis SCa, individualizing Ca D may be ideal. Patients prone to IDH or at risk for sudden death may benefit from increasing the Ca D to 3–3.5 mEq/L at the expense of increased risk for hypercalcemia, vascular calcification, and low bone turnover. Until a Ca kinetic model accounting for the various factors discussed is available to guide optimal Ca D prescription, one rational approach would be to maintain near-neutral Ca mass balance during HD, which can be accomplished with a Ca D of 2.5 mEq/L in patients with predialysis SCa <8.75 mg/dL and a Ca D of 3 mEq/L in those with predialysis SCa >9.15 mg/dL.
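The near-neutral mass balance approach in the last sentence maps directly onto a small helper using the SCa thresholds quoted above; for the intermediate zone (8.75–9.15 mg/dL), where the text gives no recommendation, the sketch deliberately returns no value and defers to clinical judgment.

```python
def neutral_balance_ca_d(pre_sca):
    """Ca_D (mEq/L) targeting near-neutral intradialytic Ca mass
    balance, using the predialysis SCa (mg/dL) thresholds quoted
    in the text. Returns None for the intermediate zone."""
    if pre_sca < 8.75:
        return 2.5
    if pre_sca > 9.15:
        return 3.0
    return None   # 8.75-9.15 mg/dL: individualize
```

As with the K D helper earlier, this encodes a single rational default from the text, not the full individualization the chapter advocates.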

Magnesium

Similar to K, only 1% to 2% of Mg is in the extracellular compartment. Because two-thirds is in bone, Mg flux during HD is difficult to predict. With the current US practice of using a dialysate Mg concentration (Mg D ) of 0.75 to 1 mEq/L, up to one-third of HD patients have low serum Mg concentrations (SMg), sometimes exacerbated by concurrent proton pump inhibitor use. Raising the Mg D to 1.5 mEq/L will normalize predialysis SMg in most patients and will make some mildly hypermagnesemic. Traditional practice has been to avoid Mg supplementation in patients with advanced CKD because of their reduced ability to eliminate it, given concern for life-threatening hypermagnesemia with bradyarrhythmias and neurotoxicity (decreased deep tendon reflexes and muscle weakness); these manifestations rarely occur with an SMg below 4 to 5 mg/dL. Higher SMg may inhibit vascular calcification and suppress PTH but may also reduce bone mineralization, leading to adynamic bone disease. Low SMg may predispose to IDH, increase osteoclast activation (raising the risk of osteitis fibrosa), and increase the propensity for ventricular arrhythmias by depolarizing resting membrane potentials. Epidemiologic studies have suggested that low SMg is associated with mortality, but the absolute level differed among studies and ranged from <1.3 to <2 mg/dL, with a large study suggesting that the optimal SMg may be between 2.7 and 3.1 mg/dL. Adjusting for comorbidities and indices of malnutrition, inflammation, and atherosclerosis markedly attenuated the risk conferred by low SMg, but residual increased risk persisted in malnourished and/or inflamed HD patients. Only one study found an increased risk of death with SMg above 3.1 mg/dL. Whether a higher Mg D would reduce mortality is unclear. For now, it is reasonable to consider a higher Mg D for patients with persistent IDH and/or those at risk of cardiac arrhythmias or vascular calcification, and a lower Mg D if adynamic bone disease is a concern.

Bicarbonate

Correction of metabolic acidosis during HD is achieved by increasing the dialysate concentration of a base equivalent to promote diffusion into the blood. Historically, bicarbonate was introduced into dialysate by bubbling carbon dioxide through it to lower the pH and prevent precipitation of Ca and Mg salts. In the 1960s, acetate was introduced as a source of bicarbonate and became the standard for 2 decades. Acetate offered the advantage of a low incidence of bacterial contamination, lack of precipitation with Ca and Mg, and ease of storage. However, it became a hemodynamic stressor when high-efficiency and high-flux HD was introduced in the 1980s because the higher rate of acetate diffusion into blood exceeded the metabolic capacity of the liver and skeletal muscle. Acetate accumulation led to acidosis, vasodilation, and hypotension. These complications prompted a resurgence of bicarbonate-based dialysate, which persists today.

The major complications of bicarbonate dialysate are bacterial contamination and the precipitation of Ca and Mg salts. Gram-negative halophilic rods require sodium chloride or sodium bicarbonate to grow and thrive in bicarbonate dialysate. With regular disinfection of bicarbonate containers, these bacteria have a latency period of 3 to 5 days, an exponential growth phase at 5 to 8 days, and maximal growth rate at 10 days, which compare favorably with a latency of 1 day, exponential growth at 2 to 3 days, and maximal growth by 4 days in a contaminated container. Thus disinfecting the containers and mixing the bicarbonate daily help prevent bacterial contamination. The use of commercially available dry powder cartridges offers an alternative solution to this problem.

Bicarbonate and the acid concentrate, which contains all solutes other than bicarbonate, are separated until use to minimize the formation of insoluble Ca and Mg salts with bicarbonate. The acid concentrate derives its name from the small amount of acetic acid (4–8 mEq/L in the final dilution) used to ensure the solubility of divalent cations. The dialysate delivery system draws up the components separately and mixes them proportionately with purified water to form the final dialysate. This technologic advance allowed the widespread reintroduction of bicarbonate as a dialysate buffer in the 1970s. Because some precipitation of Ca and Mg salts still occurs, the dialysate delivery system must be rinsed periodically with an acid solution to eliminate any buildup.

In many HD centers, the bicarbonate concentration is fixed at 32, 35, or 38 mEq/L, allowing the use of a central bicarbonate delivery system that pumps bicarbonate concentrate from a central tank to individual patient stations. An advantage of this approach is fewer back injuries among personnel, but a major disadvantage is the inability to individualize the BICD. Using dry powder cartridges or individual bicarbonate containers at each patient station allows prescriptions to be individualized.

Although correction of metabolic acidosis is desirable to reduce protein catabolism, bone demineralization, inflammation, and insulin resistance, overcorrection to metabolic alkalosis during HD may predispose patients to hemodynamic instability, paresthesias, muscle twitching, cramping, and reduced cerebral blood flow, possibly through alkalosis-induced lowering of the serum K and ionized Ca levels as well as increased tissue calcium phosphate deposition. Several large observational studies have reported that very low (<17 mEq/L) and very high (>27 mEq/L) predialysis bicarbonate levels were associated with mortality and hospitalization, but after adjusting for case mix and markers of inflammation and malnutrition, only the association between very low bicarbonate levels and adverse outcomes remained. A higher predialysis bicarbonate is likely a marker for poorer nutritional status. Patients with mild to moderate acidosis (bicarbonate 20–23 mEq/L) appear to have the best survival, perhaps reflecting more robust dietary protein intake, although newer studies have found no association between serum bicarbonate levels and mortality. Instead, a predialysis serum pH of ≥7.40 is associated with death from CV causes and, unexpectedly, a higher BICD is associated with infection-related mortality. These findings are difficult to reconcile, and no large-scale, event-driven RCTs have compared high versus low BICD or oral bicarbonate supplementation in ESKD.

Although lowering BICD for patients with predialysis hyperbicarbonatemia is prudent, especially if postdialysis metabolic alkalosis is present, it may not improve outcomes. Instead, causes of malnutrition and inflammation should be identified and corrected if possible. Whether patients with very low serum bicarbonate concentrations would benefit from raising BICD is unclear. In addition, kinetic studies have indicated that raising BICD increases serum bicarbonate levels at the end of and for 2 hours after HD but does not affect predialysis levels. Postdialysis metabolic alkalosis and its effect on serum K and ionized Ca levels, however, could explain a fraction of the excess mortality observed after the first HD treatment of the week.

Residual confounding from inaccurate reporting of comorbidities, variability of serum bicarbonate measurements, disparity between prescribed and delivered BICD, and incomplete accounting of total base (acetate or citrate in the acid bath is converted to bicarbonate in the body) may have contributed to the disparate findings discussed previously. Data have suggested that the base equivalent in the acid bath has negligible influence on the serum bicarbonate. If mortality risk is related to a high dialysate-to-blood bicarbonate gradient and abrupt changes in serum bicarbonate levels, bicarbonate modeling and individualizing BICD may, in theory, be of benefit. Until a definitive answer is available, oral bicarbonate supplementation may be preferable to raising the BICD to correct very low predialysis serum bicarbonate levels (<18 mEq/L).

Glucose

Historically, high dialysate glucose concentrations were used to provide osmotic pressure for fluid removal and prevent hypoglycemia. However, this can lead to hyperglycemia and reduce K removal by stimulating insulin production and shifting K intracellularly. With technologic advances allowing alteration of hydrostatic pressure to enhance UF, a glucose-free or lower dialysate glucose concentration of 100 to 200 mg/dL has become the current standard. Glucose-free dialysate results in more hypoglycemia in diabetic patients during HD, especially in those with better glycemic control. However, patients with DM dialyzed against a dialysate glucose level of 200 mg/dL had more frequent episodes of hyperglycemia and evidence of increased vagal tone, which may increase the risk for IDH. Given the paucity of data regarding optimal dialysate glucose, continued use of physiologic dialysate glucose concentrations (100–200 mg/dL) seems reasonable.

Dialysate Temperature

The dialysate temperature (TD) is generally maintained between 35°C and 37°C at the inlet of the dialyzer (see “Dialysate Circuit”). If the TD is kept constant at 37°C or at the patient’s core body temperature at the start of HD (thermoneutral dialysis), the patient’s core temperature actually increases during HD. The average predialysis core body temperature is around 36.6°C and increases by 0.7°C during HD when the TD is set at 37°C. The pathogenesis is incompletely understood, but fluid removal and blood volume contraction may lead to vasoconstriction and reduced heat loss from the skin. With progressive heat accumulation, a reflex dilation of the peripheral blood vessels occurs and leads to IDH.

Isothermic dialysis (no intradialytic change in core temperature) using a blood temperature monitor with computer-controlled modulation of TD improves hemodynamic stability in hypotension-prone patients. If such technology is not available, empirically cooling the TD to 35°C to 36°C, or 0.5°C to 1°C below predialysis core body temperature, also alleviates IDH but may increase patient discomfort from cold and reduce clearance by inducing flow disequilibrium and impeding solute diffusion across membranes. The hemodynamic effects of dialysate cooling appear similar to those of Na modeling and high NaD but without the undesirable side effect of positive Na balance. However, a small randomized crossover trial suggested that dialysate cooling may be less effective than Na modeling in ameliorating IDH.

Small mechanistic studies have suggested that dialysate cooling ameliorates IDH through increased baroreceptor variability, with an attendant rise in systemic vascular resistance, cardiac output, and stroke volume during HD. Dialysate cooling does not seem to improve vascular refilling. Regardless of the mechanism, dialysate cooling in hypotension-prone patients is associated with a lower risk for CVD mortality, preserved LV function, and reduced injury of brain white matter, but it does not appear to affect hospitalizations, CV events, or all-cause mortality.

Ultrafiltration Rate and Dry Weight

Another main goal of HD is to maintain fluid balance by establishing a DW and applying UF during each HD to remove the IDWG. Ultrafiltration, or fluid removal, is the result of a combination of positive hydrostatic pressure in the blood compartment and “negative” pressure created in the dialysate compartment. Adjusting the amount of negative pressure within the dialysate compartment controls the TMP; the higher the TMP, the greater the UFR.
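The hydrostatic relationship described above can be sketched as a simple calculation: the ultrafiltration rate is approximately the dialyzer's ultrafiltration coefficient (KUF) multiplied by the TMP, so the machine sets TMP to deliver the prescribed fluid removal. The linear model, the function name, and the KUF value below are illustrative assumptions for teaching, not a clinical prescription.

```python
# Illustrative sketch: UFR (mL/h) ~= KUF (mL/h/mmHg) x TMP (mmHg).
# KUF and the target values are hypothetical example numbers.

def required_tmp(target_uf_ml: float, session_hours: float,
                 kuf_ml_per_h_per_mmhg: float) -> float:
    """Transmembrane pressure (mmHg) needed to remove target_uf_ml of
    fluid over session_hours with a dialyzer of the given KUF."""
    ufr_ml_per_h = target_uf_ml / session_hours
    return ufr_ml_per_h / kuf_ml_per_h_per_mmhg

# Example: removing 3 L over 4 h (UFR = 750 mL/h) with a hypothetical
# high-flux dialyzer (KUF = 40 mL/h/mmHg) needs a TMP of 18.75 mmHg.
print(required_tmp(3000, 4, 40))
```

The inverse reading of the same formula explains the sentence above: for a fixed KUF, raising the TMP raises the UFR proportionally.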

Traditionally, DW is defined as the lowest body weight a patient can tolerate without becoming hypotensive, using clinical examination and evaluation as a crude estimate. A more rigorous definition is the body weight at which extracellular volume is physiologic, since both volume depletion and volume overload are associated with significant morbidity and mortality. However, physiologically appropriate extracellular volume and body weight are difficult to determine clinically, especially because patients undergoing HD vary widely in their response to fluid removal.

Although healthy individuals can tolerate a loss of 20% of their circulating blood volume before becoming hypotensive, HD patients are highly variable; some become symptomatic with as little as a 2% decline in their blood volume. This variability likely results from disparate cardiac responses and rates of vascular refilling from the interstitial and intracellular spaces, as well as a myriad of other factors. Autonomic dysfunction, diastolic dysfunction, increased core temperature, and intradialytic hypocalcemia, hypokalemia, alkalosis, and myocardial stunning may all lead to an impaired cardiac response and impaired constriction of resistance and capacitance vessels during volume depletion. Dialytic removal of solutes, malnutrition, and inflammation may slow vascular refilling through decreased osmotic and oncotic pressures and increased vascular permeability. Hence, patients on HD may become symptomatic before their physiologic weight is reached. Pedal edema and BP are unreliable tools for determining DW because they correlate poorly with volume status as estimated by multifrequency bioimpedance.

Newer technologies to help determine the optimal DW and improve tolerance of HD include continuous online blood volume determination during HD coupled with computer-controlled UFR and UF modeling (see “Online Monitoring”) and bioimpedance analysis (BIA). Although continuous blood volume determination may reduce IDH, it cannot accurately assess the extracellular volume compartment or detect patients with impaired vascular refilling; therefore it is less useful for determining an optimal DW.

Bioimpedance analysis shows promise in establishing DW and in reducing intradialytic symptoms but is not widely used because of its complex underlying principles and the lack of a gold standard method for determining DW to allow full validation. Briefly, an electrical current is applied to the body and the resistance (opposition to flow of the current) and reactance (opposition to passage of the current) are measured. The resistance is used to estimate the volume of extracellular fluid; the reactance is used to estimate the volume of intracellular compartments. Two small RCTs have suggested that multifrequency bioimpedance spectroscopy is superior to clinical evaluation in determining physiologic DW, as evidenced by improvements in BP control, LV mass index, and arterial stiffness, and lower mortality. Occurrences of IDH and access thrombosis were comparable, but the percentage of patients with RKF declined from 20% to 10% in the BIA group. A larger RCT using a fluid management protocol for patients new to HD, with or without BIA to determine DW, also did not improve preservation of RKF, though the authors noted a significantly slower RKF decline than expected, attributed to the fluid assessment protocol itself, which included measurement of RKF.

Studies have linked greater UFR with higher mortality, although the threshold has been debated, ranging from more than 10 to more than 13 mL/kg/hour. Some studies have suggested that a continuum of incremental risk exists, starting at 6 mL/kg/hour. Strategies to reduce the UFR include reducing IDWG through dietary Na restriction and isonatric or hyponatric dialysis, judicious use of diuretics in patients with RKF, extending session length, and/or increasing the frequency of HD sessions. Although a higher UFR, greater IDWG, and shortened session length are closely intertwined, each may independently confer mortality risk through mechanisms that are poorly understood. The HD treatment itself may induce LV dysfunction, independent of the hemodynamic effects of UF.
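The weight-normalized UFR cited in these studies is simple arithmetic: fluid removed (approximating 1 kg of IDWG as 1 L) divided by post-dialysis body weight and session length. The sketch below works through an example; the function name and patient values are illustrative, not from a specific study.

```python
# Worked example of the UFR metric (mL/kg/hour) discussed above.
# 1 kg of interdialytic weight gain is treated as ~1000 mL of fluid.

def ufr_ml_per_kg_per_h(idwg_kg: float, postdialysis_weight_kg: float,
                        session_hours: float) -> float:
    """Ultrafiltration rate normalized to body weight and time."""
    uf_volume_ml = idwg_kg * 1000
    return uf_volume_ml / (postdialysis_weight_kg * session_hours)

# A 70-kg patient gaining 3.5 kg between treatments:
# over a 3.5-h session the UFR is ~14.3 mL/kg/h, above the 13 mL/kg/h
# threshold; extending the session to 4.5 h lowers it to ~11.1 mL/kg/h.
print(ufr_ml_per_kg_per_h(3.5, 70, 3.5))
print(ufr_ml_per_kg_per_h(3.5, 70, 4.5))
```

The example makes the clinical point above concrete: for a fixed IDWG, lengthening the session is the most direct way to bring the UFR under a given threshold.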

Current strategies to determine DW rely on clinical evaluation and maintaining a high index of suspicion for volume overload, with periodic empiric reductions of the patient’s end-dialysis weight by 0.2 to 0.3 kg/session when excess volume is suspected. Subtle clinical indicators include persistence of HTN despite escalation of antihypertensive medications, reduced appetite, and very low IDWG. In hypotension-prone patients, UF modeling; avoiding intradialytic hypocalcemia, hypomagnesemia, and alkalosis; lowering TD; increasing the duration or frequency of HD; reducing dietary Na intake; and possibly separating UF from diffusive clearance during HD may be of benefit. Sequential UF and diffusive clearance provides initial isolated UF with iso-osmotic removal of fluid, followed by diffusive clearance with or without additional fluid removal. Maintaining constant plasma osmolarity during UF prevents further depletion of the blood volume from fluid shifts into the interstitial and intracellular spaces, although sequential UF was found to be inferior to Na modeling and dialysate cooling in preventing IDH.

Dialyzer Reuse

Hemodialyzer reuse peaked in the 1990s, primarily because of the potential benefits of improved biocompatibility and reduced cost. Reprocessing dialyzers for repeated use took advantage of the dialyzer membrane coating with plasma proteins that occurs during the first treatment, effectively camouflaging hydroxyl groups that can activate blood components. It also reduced the incidence of first-use syndrome, thought to be mediated in part by ethylene oxide, a commonly used sterilant that is absorbed by the potting compound of the dialyzer and can induce an immunoglobulin E (IgE)-mediated anaphylactic reaction (see “Hemodialyzers”).

Automated devices that reprocess dialyzers are safer and result in fewer febrile reactions compared with manual reprocessing. During the cleaning process, bleach or hydrogen peroxide is used to improve dialyzer appearance. However, bleach also strips proteins off the membrane and can damage it (negating the improved biocompatibility afforded by the protein-coated membrane), reduces phosphorus clearance through increased negative membrane charge, and increases albumin loss. After cleaning, dialyzer integrity is assessed by measuring the volume of the fiber bundle in the blood compartment (fiber bundle volume) and by pressurizing the dialyzer to ensure that the fibers are structurally intact (pressure test). For a dialyzer to be accepted for reuse, the fiber bundle volume must be >80% of the initial value and the dialyzer must hold >80% of the maximal operating pressure. The dialyzer is then packed with chemical disinfectants such as peracetic acid or formaldehyde or subjected to heat disinfection with or without citrate. Over the decades, peracetic acid gained popularity over formaldehyde; by 2002, they were used by 72% and 20% of reprocessing centers, respectively, with only 4% of centers using heat disinfection.
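The two acceptance criteria above (fiber bundle volume and pressure test, each compared against an 80% reference) amount to a simple pass/fail check, sketched below. The function and variable names are illustrative; real reprocessing equipment applies these thresholds per AAMI standards, not via this code.

```python
# Sketch of the reuse acceptance logic described above: a reprocessed
# dialyzer passes only if BOTH criteria exceed 80% of their reference.
# Names and example numbers are hypothetical.

def passes_reuse_check(fbv_ml: float, initial_fbv_ml: float,
                       held_pressure: float,
                       max_operating_pressure: float) -> bool:
    """Accept the dialyzer if the measured fiber bundle volume is >80%
    of its initial value AND it holds >80% of the maximal operating
    pressure during the pressure test."""
    fbv_ok = fbv_ml > 0.80 * initial_fbv_ml
    pressure_ok = held_pressure > 0.80 * max_operating_pressure
    return fbv_ok and pressure_ok

# Example: initial FBV 100 mL, maximal operating pressure 500 mmHg.
print(passes_reuse_check(85, 100, 450, 500))  # both criteria met
print(passes_reuse_check(78, 100, 450, 500))  # FBV below 80%, rejected
```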

Close scrutiny of the safety of dialyzer reuse practices has yielded conflicting results when comparing reuse with nonreuse and comparing the various disinfectants, largely because the studies were nonrandomized and uncontrolled. Overall, however, data have suggested that when reuse is applied meticulously and complies with AAMI standards, risk-adjusted mortality rates are similar and the various disinfectants are comparable.

Though dialyzer reuse was a common practice in the 1990s (80% of clinics in 1997), the trend declined in the 2000s, falling to 24% of U.S. dialysis units in 2012 and an estimated <10% in 2018. This sharp decline was due to a change in practice patterns among some large dialysis chain providers favoring single use, the wide availability and lower cost of synthetic, more biocompatible dialyzers, lingering concerns regarding long-term exposure to chemical disinfectants, and the potential for infectious or pyrogenic reactions from flawed reuse practices.

Dialyzer reuse is now uncommon in the United States and is prohibited in some countries, but in some developing countries it remains common practice, mainly for economic reasons. Proponents of dialyzer reuse cite environmental concerns: abandoning reuse altogether would add to the estimated 903,000 tons of mostly plastic waste generated annually by dialysis units worldwide. However, this must be balanced against the environmental impact of pollution from, and packaging associated with, reuse chemicals. Additional research on the best management of the medical waste associated with HD is needed.
