Individualizing the Dialysate to Address Electrolyte Disturbances in the Dialysis Patient




Patients with end-stage renal disease (ESRD) depend on dialysis to maintain fluid and electrolyte balance. Dialysis allows solutes to diffuse between blood and dialysate such that, over the course of the procedure, plasma composition is restored toward normal values. The makeup of the dialysate is of paramount importance in accomplishing this goal. In most outpatient settings, patients receive hemodialysis using dialysate prepared in bulk and delivered via a central delivery system, so that the composition of the dialysate is the same for all patients. While most patients tolerate the procedure when administered in this fashion, many suffer from hemodynamic instability or symptoms of dialysis disequilibrium.


One strategy to improve clinical tolerance of dialysis is to adjust the dialysate composition according to the individual characteristics of the patient. This chapter focuses on the changes that take place in electrolyte and acid–base balance during hemodialysis and peritoneal dialysis, and discusses how the dialysate can be manipulated to improve patient tolerance. Individualizing the dialysate composition is likely to gain increasing importance given the advancing age and increasing number of comorbid conditions found in ESRD patients.


Keywords


hemodialysis, peritoneal dialysis, dialysate, sodium modeling, hyponatremia, hypokalemia, hyperkalemia, hemodynamics, metabolic acidosis, metabolic alkalosis, uremia, metabolic bone disease, hyperphosphatemia


Introduction


The goal of dialysis in patients with end-stage renal disease is to restore the composition of the body’s fluid environment to normal. Dialysis is performed by creating an artificial situation in which the blood is separated from a disposable second solution (the dialysate) by a semipermeable membrane (the dialyzer or the peritoneal membrane). Solutes will diffuse from the blood through the membrane in proportion to their membrane permeability and the concentration gradient. The physician is able to decrease or prevent removal of a solute by adding it to the dialysate, thereby decreasing the concentration gradient for diffusive movement. One may also introduce solute into the dialysate in a concentration in excess of that usually found in the plasma water so that diffusive movement of this solute into the patient is favored. This chapter focuses on the changes that take place in electrolyte and acid–base balance during hemodialysis and peritoneal dialysis.




Sodium


Hemodialysis


The patient with end-stage renal disease is dependent on dialysis to remove sodium from the body in an amount that matches sodium intake so that balance can be maintained. Sodium is free to cross the dialysis membrane primarily by the process of diffusion, and to a lesser extent by convection. The concentration of sodium in the dialysate plays a pivotal role in determining whether sodium balance is maintained at a level that avoids volume overload and yet provides adequate cardiovascular stability during the procedure.


As dialysis has evolved, there has been continued interest in adjusting the dialysate sodium concentration in an attempt to improve the tolerability of the procedure. In the early days of dialysis, a low-sodium dialysate was typically utilized to reduce the complications (such as hypertension and congestive heart failure) of chronic volume overload. However, with reduced dialysis treatment times it became apparent that such therapy contributed to hemodynamic instability by exacerbating the decline in plasma osmolality (particularly early in the dialysis procedure) and intravascular volume. Subsequent studies demonstrated that raising dialysate sodium to between 139 and 144 mEq/liter was associated with improved hemodynamic stability and general tolerance to the procedure.


There was concern that an increased dialysate sodium concentration would produce a dipsogenic effect, resulting in increased weight gain and poor blood pressure control. Studies addressing this issue confirmed that a higher dialysate sodium modestly increased interdialytic weight gain. However, this excess weight was found to be readily removed with improved tolerance to ultrafiltration.


More recently, there has been interest in varying the concentration of sodium in the dialysate during the procedure so as to minimize the potential complications of a high-sodium solution while retaining the beneficial hemodynamic effects. A high dialysate sodium concentration is used initially, with a progressive reduction toward isotonic or hypotonic levels by the end of the procedure. This method allows for a diffusive Na influx early in the session in order to prevent the rapid decline in plasma osmolality due to the efflux of urea and other small-molecular-weight solutes. During the remainder of the procedure, when the reduction in osmolality accompanying urea removal is less abrupt, the lower dialysate Na level minimizes the development of hypertonicity and any resultant excessive thirst, fluid gain, and hypertension in the interdialytic period (Figure 93.1).




Figure 93.1


Use of a low sodium dialysate is more commonly associated with intradialytic hypotension. In the initial period of dialysis the extracellular urea concentration falls, creating an osmotic driving force for water movement into the cell due to the higher intracellular urea concentration. This drop in extracellular osmolality and movement of water into the intracellular space is exacerbated in the setting of a low dialysate Na concentration. As a result, plasma volume falls and the risk of hypotension increases. A high sodium dialysate helps to minimize the development of extracellular hyposmolality, allowing for better refilling of the intravascular compartment. Plasma volume remains better preserved and the risk of hypotension is reduced.


As outlined in Table 93.1, several studies have compared the hemodynamic and symptomatic effects of a dialysate in which the sodium concentration is varied during the procedure to one in which the sodium concentration is fixed. Dumler et al. used a dialysate sodium of 150 mEq/liter during the initial 3 hours of dialysis, during which all ultrafiltration was performed. The dialysate sodium was decreased to 130 mEq/liter for the last hour. The control group was dialyzed against a sodium concentration fixed at 140 mEq/liter. Use of the high/low sodium hemodialysis was associated with a smaller decline in systolic pressure and fewer symptomatic hypotensive episodes.



Table 93.1

Summary of Recent Studies Examining Effects of Na Gradient Protocols

Study (reference) | Design | Intervention (dialysate Na, mEq/liter) | Results
Dumler et al. | 10 patients, crossover | Fixed (140) vs high (150)/low (130); Uf only with 150 | 50% decrease in cramping episodes (no statistical comparison possible)
Raja et al. | 10 patients, crossover | Fixed (135 and 140) vs high (145)/low (135) vs low (135)/high (145) | No difference in hypotensive episodes between high/low and 140, but both better than 135 and low/high protocols
Daugirdas et al. | 7 patients, crossover | Fixed (143, 135) vs gradient (160 to 133) | No difference in hypotensive episodes or cramps among 3 groups
Acchiardo et al. | 39 patients, crossover | Fixed (140) vs gradient (149 to 140; linear, exponential, step) | 50% reduction in hypotensive episodes and cramps with gradient protocol
Sandowski et al. | 16 patients (16–32 years of age), crossover | Fixed (138) vs gradient (149 to 138; linear, exponential, step) | Decrease in intra- and interdialytic morbidity with gradient; no differences in symptomatic hypotension
Levin et al. | 11 symptomatic and 5 asymptomatic patients, crossover | Fixed (140) vs ramped Na (155–160 to 140) and Uf, each individually tailored | Significant decrease in dialysis morbidity with ramped protocol
Sang et al. | 23 patients, crossover | Fixed (140) vs gradient (155 to 140, linear or step) | Decrease in cramps and hypotension with gradient, but only 22% of patients with significant benefit


Other investigators have varied dialysate sodium according to a sodium-gradient protocol in which the sodium is set to decrease from a high to a low level over the course of a dialysis session. The mixed results from these experiences are outlined in Table 93.1. Raja et al. and Daugirdas et al. found no measurable benefit. Acchiardo et al. found a reduction in hypotensive episodes, and Sandowski et al. had similar results in young patients. The linear and step sodium modeling programs have been found to be better in lowering the risk of intradialytic headache compared to the exponential program. The linear program was the only individual program that alleviated interdialytic cramps. The most striking reduction in the risk for post-treatment hypotension occurred with the step program.


Differences in the incidence of symptomatic hypotension during dialysis or in the degree of interdialytic weight gain between the fixed and variable sodium protocols have been difficult to demonstrate. Levin et al. studied a group of patients who were specifically selected because of the frequent occurrence of symptoms upon dialysis—such as headaches, cramps, and lightheadedness. In a crossover trial, these patients were assigned either to a fixed sodium dialysate and a constant rate of ultrafiltration or to a gradient protocol in which the initial sodium concentration and ramping pattern were individually adjusted to minimize thirst. Use of patient-specific sodium gradient profiles was associated with improvement in all patients with headache and in 70% of patients with lightheadedness. The majority of patients reported an increase in thirst, but there were no differences in interdialytic weight gain or in pre-dialysis and postdialysis mean arterial pressure.


Utilizing a more general dialysis population, Sang et al. compared a linear or step sodium gradient (155–140 mEq/liter) protocol to a fixed sodium dialysate (140 mEq/liter). In this study, sodium modeling was associated with a significant reduction in cramps and symptomatic hypotension. However, these benefits were accompanied by increased thirst, fatigue, and weight gain between dialysis sessions, as well as by a higher predialysis blood pressure. The authors concluded that only 22% of patients had a significant benefit from the modeling programs. Finally, a study by Movilli et al. found improved blood volume preservation using a pattern of high-to-low sodium change (160–133 mEq/liter). The changes in blood pressure were similar between this high-to-low variation and conventional dialysis.


In summary, the available data suggest that in most chronic dialysis patients changing the dialysate Na during the course of treatment offers little advantage over a constant dialysate Na of between 140 and 145 mEq/liter. The inability to clearly demonstrate superiority of Na modeling may be due to the fact that the time-averaged concentration of Na was similar in many of the comparative studies. For example, a linear decline in dialysate Na from 150 to 140 mEq/liter will produce approximately the same postdialysis serum Na as a dialysate Na of 145 mEq/liter used throughout the procedure. In addition, the optimal time-averaged Na concentration (whether administered in a modeling protocol or with a fixed dialysate concentration) is likely to vary from patient to patient, as well as in the same patient at different treatment times. This variability is supported by studies demonstrating wide differences in the month-to-month pre-dialysis Na concentration in otherwise stable dialysis patients.
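The time-averaged concentration argument can be made concrete with a short calculation. The sketch below is illustrative only (the helper function and the linear-ramp assumption are ours, not drawn from any of the cited studies):

```python
# Illustrative: time-averaged dialysate Na of a linear ramp vs a fixed bath.
# A ramp falling linearly from 150 to 140 mEq/L averages 145 mEq/L, which is
# why its postdialysis serum Na approximates that of a fixed 145 mEq/L bath.

def time_averaged_na_linear(start_na: float, end_na: float) -> float:
    """Time-averaged Na (mEq/L) of a dialysate ramped linearly start -> end."""
    return (start_na + end_na) / 2.0

ramp_avg = time_averaged_na_linear(150.0, 140.0)  # 145.0 mEq/L
print(f"linear ramp average: {ramp_avg:.1f} mEq/L (vs fixed 145.0 mEq/L)")
```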


Nevertheless, in selected patients Na modeling may be of benefit (Table 93.2). Patients initiating dialysis with marked azotemia are often deliberately dialyzed so as to decrease the urea concentration slowly over the course of several days in order to avoid the development of the dialysis disequilibrium syndrome. The use of a high-/low-Na dialysate in these patients may minimize fluid shifts into the intracellular compartment and decrease the tendency for neurologic complications. Na modeling may also be beneficial in patients suffering frequent intradialytic hypotension, cramping, nausea, vomiting, fatigue, or headache. In such patients, the modeling protocol can be individually tailored to minimize increased thirst, weight gain, and hypertension.



Table 93.2

Indications and Contraindications for Use of Na Modeling (High/Low Programs)

A. Indications
  • Intradialytic hypotension
  • Cramping
  • Initiation of hemodialysis in the setting of severe azotemia
  • Hemodynamically unstable patient (as in the intensive care unit setting)

B. Contraindications
  • Intradialytic development of hypertension
  • Large interdialytic weight gain induced by high Na dialysate
  • Hypernatremia



Combining dialysate Na profiling with a varying rate of ultrafiltration may provide additional benefit in particularly symptomatic patients. Ultrafiltration profiling is the deliberate use of a high rate of ultrafiltration in the initial part of the treatment, when the volume of interstitial fluid available for vascular refilling is maximal, followed by a sequential decrease in the rate to parallel the anticipated fall in interstitial fluid volume. Use of this combined approach may be of particular benefit in ensuring hemodynamic stability in patients with acute renal failure in the intensive care unit.


When prescribing a Na gradient protocol it is important to monitor the patient for evidence of a progressive increase in total body Na. Use of a low dialysate Na during the terminal phase of the procedure does not necessarily guarantee negative Na balance. In some patients, a high/low Na protocol can lead to large interdialytic weight gain or cause intradialytic hypertension (Table 93.2). Such adverse effects are more likely to occur when the time-averaged Na concentration is greater than the pre-dialysis serum Na concentration. In one report, this complication was avoided when the time-averaged Na concentration was kept 0.5–0.8 mmol/liter lower than the patient’s pre-dialysis serum sodium concentration.
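A simple screening check follows from the report cited above. The sketch is hypothetical (the stepwise profile format and the flagging logic are our assumptions, not a validated algorithm); it compares the time-averaged Na of a proposed profile with the patient's pre-dialysis serum Na:

```python
# Hypothetical check for Na loading with a gradient protocol. Per the report
# cited in the text, keeping the time-averaged dialysate Na 0.5-0.8 mmol/L
# below the pre-dialysis serum Na avoided large interdialytic weight gain
# and intradialytic hypertension.

def time_averaged_na(profile):
    """Time-averaged Na of a profile given as (duration_hr, na_mEq_L) steps."""
    total_time = sum(t for t, _ in profile)
    return sum(t * na for t, na in profile) / total_time

profile = [(1.0, 150.0), (2.0, 145.0), (1.0, 138.0)]  # example 4-hour step protocol
predialysis_na = 140.0

twa = time_averaged_na(profile)
if twa > predialysis_na:
    print(f"TWA {twa:.1f} mEq/L exceeds pre-dialysis Na: risk of Na loading")
else:
    print(f"TWA {twa:.1f} mEq/L is {predialysis_na - twa:.1f} below pre-dialysis Na")
```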


A Na gradient protocol can be administered in such a way that the amount of Na exchanged is the same as with a fixed Na dialysate while better preserving blood volume during ultrafiltration. Coli et al. have described a procedure termed profiled hemodialysis. This technique is based on a mathematical model in which baseline patient characteristics are used to construct a patient-specific dialysate sodium profile prior to each treatment. Initial experience with this procedure has shown improved cardiovascular hemodynamics when compared to a fixed dialysate sodium concentration, despite the same total mass of sodium being removed.


In hypertensive patients, adjusting the protocol to achieve negative Na balance may be of therapeutic benefit in the long-term control of blood pressure. In this regard, Flanigan et al. compared a fixed dialysate Na of 140 mEq/liter to a gradient protocol in which the dialysate Na was lowered in an exponential fashion from 155 to 135 mEq/liter and then held constant at 135 mEq/liter for the final half-hour of the procedure. Ultrafiltration was discontinued during the final half hour of the session. Use of the variable Na dialysate permitted a 50% reduction in the dose of antihypertensive medication without significant changes in pre-dialysis blood pressure or interdialytic weight gain. Although not specifically measured, use of the terminal low-sodium period likely caused a decrease in total-body exchangeable Na, thus accounting for improved blood pressure control in Na-sensitive patients. Several other trials have noted that a modest reduction in the dialysate Na can be an effective maneuver in lowering blood pressure in dialysis patients.


Sodium loading during the dialysis procedure stimulates thirst and can result in expansion of extracellular fluid volume, with increased cardiac output and blood pressure. Volume-independent mechanisms may also play a role in the hypertensive effect of a salt load. For example, increased sympathetic outflow has been linked to hypernatremia in the Dahl salt-sensitive rat. Increased brain Na and osmolality are associated with increased angiotensin II levels. In addition to potential vasoconstrictive and vasculotoxic effects, angiotensin II stimulates sympathetic outflow by binding to AT 1 receptors centrally. In vitro, a high medium Na concentration results in hypertrophy of cardiac myocytes and vascular smooth muscle cells. Increased medium Na also leads to endothelial cell stiffness and decreased nitric oxide production in the presence of aldosterone. Taken together, these observations suggest that sodium overload and hypernatremia induced by a high dialysate Na concentration may play an important role in hypertension and abnormal vascular function in dialysis patients.


The ability to achieve neutral sodium balance requires the dialysate sodium concentration to be individualized such that with each treatment a constant end-session plasma sodium concentration is reached. In dialysis patients, interdialytic sodium and water loads vary from one patient to another and from treatment to treatment. Water balance can be achieved by making total ultrafiltrate volume equal to interdialytic weight gain. If over time end-dialysis weight and plasma sodium concentration are kept constant (assuming no change in sodium distribution volume), one can assume that the patient will be in sodium balance. As currently practiced, dialysate sodium concentrations (whether fixed or varied) are not chosen with the primary aim of achieving sodium balance. In fact, pre-dialysis serum sodium levels may vary between 125 and 147 mEq/liter in dialysis patients, but for any individual patient the standard deviation of the pre-dialysis serum sodium value is less than or equal to 2.5 mEq/liter. As a result, a dialysate sodium concentration that achieves neutral sodium balance for one patient may cause salt loading when applied to another. This approach risks a pathologic excess in the total sodium mass, which over time can lead to clinical manifestations of volume overload such as hypertension and congestive heart failure.


To properly calculate the dialysate sodium concentration required to maintain sodium balance, measurement of plasma water sodium concentration at the beginning of the treatment is required. Hemodynamics were recently compared in 27 patients dialyzed against dialysate Na set at 138 mEq/liter versus dialysate Na set to match the average pre-dialysis Na concentration multiplied by the Donnan coefficient of 0.95 (individualized Na dialysate). This latter step was undertaken to better approximate the ionic concentration of Na rather than its total concentration. The ionic concentration of Na better reflects the amount available for movement across a dialysis membrane because a small amount of total Na is unavailable for diffusive flux due to Donnan effects and complexes formed with certain anions. Use of the individualized Na dialysate was associated with less interdialytic weight gain, decreased thirst, and improved blood pressure control.
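The individualized prescription described above reduces to a single multiplication; a minimal sketch (the function name is ours, the 0.95 Donnan coefficient is the study's):

```python
# Individualized dialysate Na as used in the study described above:
# dialysate Na = average pre-dialysis plasma Na x Donnan coefficient (0.95),
# approximating the ionic (diffusible) Na rather than the total concentration.

DONNAN_COEFFICIENT = 0.95

def individualized_dialysate_na(predialysis_plasma_na: float) -> float:
    return predialysis_plasma_na * DONNAN_COEFFICIENT

# A patient averaging 140 mEq/L pre-dialysis would be prescribed a 133 mEq/L bath.
print(f"{individualized_dialysate_na(140.0):.0f} mEq/L")
```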


The ionic activity of sodium can also be measured through the use of conductivity measurements. Locatelli et al. have recently described the use of a biofeedback system that allows for the automatic determination of plasma water and dialysate conductivity such that blood sampling can be avoided. With these measurements—along with session time, desired weight loss, and expected end-treatment plasma water conductivity—the dialysate conductivity is automatically adjusted in order to achieve the prescribed final plasma water conductivity. Application of this conductivity kinetic model to patients treated with a variant of hemodiafiltration achieves near-zero hydrosodium balance and improves intradialytic cardiovascular stability. Such techniques may also be useful in patients with a low pre-dialysis Na concentration who are prone to positive sodium balance when dialyzed against a fixed dialysate Na. Newer technology will allow for plasma water and dialysate conductivity to be measured repetitively during the procedure, allowing automatic adjustment of the dialysate sodium on-line throughout the procedure.


With increased ability to individualize the dialysate sodium concentration, one can envision a scenario in which a patient initiated on hemodialysis is initially treated with a dialysate sodium concentration designed to achieve negative sodium balance. Once the patient becomes normotensive or requires minimal amounts of antihypertensive medication, the dialysate sodium can be adjusted on a continual basis to ensure that sodium balance is maintained. In this manner, management of sodium balance would be made similar to management of fluid intake. Achieving the optimal total body sodium content will likely become just as important as determining an accurate dry weight.


Peritoneal Dialysis


Ultrafiltration in peritoneal dialysis is achieved by instilling into the peritoneal cavity a fluid that is hyperosmolar relative to plasma. The creation of this osmotic gradient results in net movement of free water into the peritoneal cavity. The degree of hypertonicity, the dwell time, and peritoneal transport characteristics are the main determinants of volume removal. The magnitude of ultrafiltration is determined clinically by subtracting the volume of fluid instilled in the peritoneal cavity from the effluent volume.
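The clinical measurement just described is simple bookkeeping; a one-function sketch (names and example volumes are ours):

```python
# Net ultrafiltration in peritoneal dialysis: effluent (drained) volume minus
# the instilled volume. A negative result indicates net fluid absorption.

def net_ultrafiltration_ml(instilled_ml: float, drained_ml: float) -> float:
    return drained_ml - instilled_ml

# e.g. 2000 mL instilled, 2800 mL drained -> 800 mL of net ultrafiltration
print(net_ultrafiltration_ml(2000.0, 2800.0))
```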


The osmotic driving force of peritoneal dialysate can be adjusted by changing the dialysate glucose concentration. A 1.5% glucose dialysate is only slightly hypertonic to plasma, and small volumes of ultrafiltration are expected. In contrast, ultrafiltration rates of up to one liter per hour have been obtained with a 4.25% solution. The osmolality of the infused dialysate declines over time due to the movement of water into the peritoneal cavity and the absorption of glucose from the peritoneal cavity. Because the sieving coefficient across the peritoneal membrane is less than one, the sodium concentration in the ultrafiltrate during peritoneal dialysis is usually less than that of the extracellular fluid. As a result, there is a tendency toward water loss and the development of hypernatremia. Commercially available peritoneal dialysates have a sodium concentration of 132 mEq/liter to compensate for this tendency toward dehydration. The effect is most pronounced with increasing frequency of exchanges and with increasing dialysate glucose concentrations. Use of the more hypertonic solutions with frequent cycling can result in significant dehydration and hypernatremia. As a result of stimulated thirst, water intake and weight may increase, resulting in a vicious cycle.


There are three components that determine sodium transport in peritoneal dialysis. The components that account for removal of sodium from the body are diffusion (due to the concentration gradient between blood and dialysate) and convection (due to ultrafiltration). Acting to oppose sodium removal is peritoneal absorption. Lymphatic and intestinal tissue fluid absorption leads to the convective movement of sodium from the dialysate to blood. There is a strong positive correlation between net ultrafiltration volume and the total mass of sodium removed from the body. Ultrafiltration not only increases sodium removal by convection but increases the diffusive flux by secondarily increasing the concentration gradient from blood to dialysate. The more favorable gradient results from the sodium-sieving effect such that the fluid entering the peritoneal space is hypotonic and dilutes the dialysate sodium concentration.


The use of a peritoneal solution containing the standard sodium concentration of 132 mEq/liter creates a relatively small concentration gradient for sodium diffusion. As a result, more sodium is removed by convection than by diffusion. Differences in the osmolality of the solutions utilized and in dwell time account for the variability in sodium removal among the different modalities of peritoneal dialysis. Automated forms of peritoneal dialysis (APD) are characterized by rapid exchanges and short dwell times. Although hypertonic exchanges during APD lead to a more pronounced fall in dialysate sodium concentration, there is little time available for diffusion. It is for this reason that sodium removal in APD is lower than in continuous ambulatory peritoneal dialysis, in which dwell times are much longer. Patients treated with APD may achieve adequate ultrafiltration with hypertonic solutions but have inadequate sodium removal.


Sodium and water removal is more difficult to achieve in patients with increased membrane permeability. These patients have increased diastolic blood pressures and are prone to volume overload. In addition, patient and technique survival may be decreased in this setting. Sodium transport in this patient group is characterized by a high peritoneal absorption rate and a decreased ultrafiltration rate; the diffusive flux of sodium is not significantly different in these patients. Typically, such high-transport patients are placed on APD to minimize dialysate dwell time. An alternative strategy is to lower the sodium concentration in the dialysate. Studies examining ultrafiltration and sodium kinetics using a dialysate sodium concentration of either 102 mmol/liter (383 mOsm/kg) or 105 mmol/liter (348 mOsm/kg) have shown that a low-sodium dialysate is more effective in removing excess sodium than a conventional sodium solution.


Enhanced sodium removal may be the result of an increased diffusive flux of sodium or may be secondary to enhanced convective transport due to the higher glucose concentrations used in low-sodium solutions. Alternatively, the decreased sodium concentration in the dialysate may lead to less sodium absorption. In short-term clinical trials, use of a low-sodium dialysate (98 mmol/liter and 120 mEq/liter) has been shown to be effective in reducing both body weight and blood pressure in volume-overloaded patients on continuous ambulatory peritoneal dialysis. Limited data also suggest that this strategy may be of use in patients receiving APD.


An additional strategy for enhancing sodium removal and preventing volume overload in patients with increased membrane permeability is to substitute icodextrin for glucose as the osmotic agent. Icodextrin is a glucose polymer that is isosmolar to plasma but still capable of generating an ultrafiltrate through the process of colloid osmosis. This process is based on the principle that water is transported from capillaries in the direction of impermeable large solutes rather than down an osmotic gradient (as occurs with glucose-containing solutions). Water movement driven by colloid osmosis occurs through small pores, whereas the osmotic effect of glucose drives water movement primarily through ultra-small transcellular pores. As a result of this difference, sodium sieving does not occur with icodextrin.


In a six-hour dwell, the 7.5% icodextrin solution generates an ultrafiltrate volume that is higher than that generated by 1.5% dextrose, despite having a lower osmolality (285 vs 347 mOsm/kg). With prolonged dwell times of 8–12 hours, the icodextrin solution provides equivalent or higher ultrafiltrate volumes than the 4.25% dextrose solution (486 mOsm/kg). In prospective randomized studies, icodextrin has been found more effective in reducing extracellular water and in removing sodium than standard glucose solutions. The ability to maintain a colloid osmotic pressure for prolonged periods makes this solution ideal for overnight dwells in patients on continuous ambulatory peritoneal dialysis and for daytime dwells in those on automated peritoneal dialysis regimens.


In patients with ultrafiltration failure who would otherwise be transferred to hemodialysis, use of icodextrin has been shown to extend the time that patients remain on peritoneal dialysis by many months. In addition, use of icodextrin is associated with less weight gain, improved lipid control, and less hyperinsulinemia as compared with dextrose-containing solutions. It is likely that icodextrin will become the preferred agent for the long dwell in most peritoneal dialysis patients.




Potassium


Regulation of Potassium in Renal Disease


Acute kidney injury may lead to marked decreases in distal delivery of salt and water, which may secondarily decrease distal K + secretion. For this reason, hyperkalemia tends to occur more commonly in oliguric renal failure. Hyperkalemia is much less common in non-oliguric renal failure, since distal delivery of salt and water is plentiful.


Chronic kidney disease is more complicated than acute renal failure. In addition to the decreased GFR and secondary decrease in distal delivery, there is nephron dropout and a smaller number of collecting ducts to secrete K + . However, this is counterbalanced by an adaptive process in which the remaining nephrons develop an increased ability to excrete K + . In a study of normokalemic patients with stage 4 chronic kidney disease, the fractional excretion of K + was 126% compared with 26% in normal controls. The fractional excretion of Na + in the two groups was 2.3% and 15%, respectively. Following the intravenous administration of amiloride, the fractional excretion of K + decreased by 87% in the patients with chronic kidney disease compared with 19.5% in control patients. These findings support the idea that patients with chronic kidney disease are able to maintain a normal serum K + concentration through an adaptive increase in renal K + secretion that is largely amiloride sensitive.
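For reference, the fractional excretion figures quoted above come from the standard clearance ratio; a minimal sketch (variable names and the example spot values are ours):

```python
# Standard fractional excretion formula: the fraction of the filtered load of
# a solute x that appears in the urine, from spot urine and plasma samples.
#   FE_x (%) = (U_x * P_Cr) / (P_x * U_Cr) * 100

def fractional_excretion(urine_x: float, plasma_x: float,
                         urine_cr: float, plasma_cr: float) -> float:
    return (urine_x * plasma_cr) / (plasma_x * urine_cr) * 100.0

# Hypothetical spot values chosen to reproduce an FE_K near the 126% reported;
# an FE_K above 100% implies net tubular K+ secretion exceeding the filtered load.
print(f"FE_K = {fractional_excretion(60.0, 4.8, 50.0, 5.0):.0f}%")  # ~125%
```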


Once a patient reaches end-stage renal disease the capacity for renal potassium excretion is no longer present. Interestingly, despite this limitation, total body potassium (in particular, intracellular potassium) in patients with end-stage renal disease is low or normal. The low intracellular potassium content has been attributed to decreased activity of the Na,K-ATPase, which is a characteristic finding in uremia. Studies in red blood cells taken from uremic patients have shown that the diminished activity of the pump can be reversed when cells are incubated in normal plasma. There is also an improvement in the activity of the pump following dialysis. Red blood cells taken from normal individuals and incubated in uremic plasma acquire the defect.


Decreased potassium concentration, increased sodium concentration, and decreased resting membrane potential have been demonstrated in skeletal muscle from uremic patients. After seven weeks of hemodialysis, these physiologic parameters were restored to normal. These observations suggest the presence of a circulating inhibitor of the Na + ,K + -ATPase in some uremic patients. In other patients, there may be a decrease in the number of pump sites rather than decreased activity. A decrease in pump activity or a decrease in the total number of pumps may account for the impaired extrarenal potassium disposal reported in some uremic patients.


In the absence of renal function, the cellular uptake of potassium becomes an important defense against the development of hyperkalemia. Studies of patients on dialysis have shown a defect in this extrarenal mechanism of potassium disposal. Fernandez et al. compared the disposition of an oral potassium load (0.25 mEq/kg body weight) in a group of dialysis patients and in normal controls. The normal controls excreted 67% of the potassium load within 3 hours and translocated 51% of the retained potassium intracellularly. In contrast, the dialysis patients did not excrete any of the potassium, and only 21% of the retained potassium was translocated intracellularly. The increment in plasma potassium was significantly different between the two groups: the plasma potassium concentration increased by 1.06 mEq/liter in the dialysis patients, whereas only a 0.39-mEq/liter increase was noted in the control group. The impairment in potassium disposal persists even when the potassium load is accompanied by oral glucose, although glucose-induced stimulation of insulin attenuates the maximal rise in potassium levels.


In patients with renal failure, a significant proportion of daily potassium excretion occurs via the gastrointestinal tract. Gastrointestinal losses are important in maintaining potassium balance in chronic dialysis patients because hemodialysis removes approximately 80–100 mEq/treatment (300 mEq/week), yet dietary potassium intake is usually 400–500 mEq/week. In a balance study performed in patients maintained on peritoneal dialysis, 25% of the daily potassium intake was lost via the feces. The amount of potassium excreted in the stools correlates directly with wet stool weight. Therefore, constipation should be avoided because it decreases the gastrointestinal elimination of potassium and increases the tendency toward hyperkalemia.
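The weekly arithmetic in this paragraph can be laid out explicitly. A sketch using the text's own figures (the residual renal term is our assumption, set to zero for an anuric patient):

```python
# Weekly potassium balance for a thrice-weekly hemodialysis patient, using the
# ranges quoted in the text; extrarenal (mostly stool) losses close the gap.

intake_per_week = 450.0        # mEq (text: 400-500 mEq/week)
dialysis_removal = 3 * 100.0   # mEq (text: ~80-100 mEq per treatment)
residual_renal = 0.0           # assumed anuric for this illustration

gap = intake_per_week - dialysis_removal - residual_renal
print(f"{gap:.0f} mEq/week must be eliminated extrarenally (mostly stool)")
```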


The mechanism of the increased gastrointestinal potassium loss is not known. The process appears to be due to active secretion, as it is unrelated to plasma potassium or total body potassium. In fact, hemodialysis patients continue to have enhanced rectal potassium secretion even after dialysis, when their plasma potassium is less than that of controls. Potassium transport in the large intestine was studied in patients with end-stage renal disease using a rectal dialysis technique. Rectal potassium secretion was found to be threefold greater in end-stage renal disease patients as compared to control patients with normal renal function. When barium (a potassium channel inhibitor) was placed in the lumen, colonic potassium secretion was reduced by 45% in the end-stage renal disease patients, while no effect was seen in the control group. Immunostaining with an antibody directed to the α-subunit of the high-conductance potassium channel protein revealed greater expression of the channel in surface colonocytes and crypt cells in the end-stage renal disease patients, while only low levels of expression were observed in the control group. These data are consistent with increased expression of potassium channels as the mechanism for the adaptive increase in colonic potassium secretion in patients with end-stage renal disease.


Elevated levels of plasma aldosterone may play an important role in stimulating the gastrointestinal excretion and cellular uptake of potassium in patients with end-stage renal disease. Exogenous administration of mineralocorticoids has been shown to decrease the serum potassium in anuric dialysis patients, presumably by increasing colonic potassium excretion. In a prospective study, fludrocortisone administered at 0.1 mg/d was compared with no treatment in 21 hyperkalemic hemodialysis patients. At the end of 10 months, the serum K + concentration in the two groups was not statistically different. However, there was a decrease in serum K + compared with pretreatment values in patients who received the drug.


A recent study examined the effects of glycyrrhetinic acid food supplementation on the serum K concentration in a group of maintenance hemodialysis patients. This substance inhibits the enzyme 11β-hydroxysteroid dehydrogenase II, which is found not only in the principal cells of the renal collecting duct but also in epithelial cells of the colon. The enzyme converts cortisol to cortisone, thereby ensuring that the mineralocorticoid receptor remains free to interact only with aldosterone, since cortisone has no affinity for the receptor. In 9 of 10 patients given the supplement there was a persistent decrease in the measured predialysis serum potassium concentration. In addition, treatment with the supplement significantly decreased the frequency of severe hyperkalemia. These beneficial effects occurred without weight gain or increases in systemic blood pressure, suggesting that glycyrrhetinic acid supplementation may be of benefit in enhancing colonic K secretion and minimizing the risk of hyperkalemia in dialysis patients.


Angiotensin-converting enzyme inhibitors and angiotensin receptor blockers have both been reported to cause hyperkalemia in patients treated with hemodialysis and peritoneal dialysis. The development of hyperkalemia with these drugs may be due to decreased colonic potassium excretion resulting from lower circulating levels of aldosterone or decreased activity of angiotensin II. In this regard, enhanced colonic potassium excretion in renal failure has been attributed to upregulation of angiotensin II receptors in the colon—suggesting that angiotensin II has a direct effect in stimulating colonic potassium excretion. Blocking the mineralocorticoid receptor with spironolactone given at a dose of 25 mg/day does not raise the serum potassium concentration in hemodialysis patients.


Hemodialysis


Dialysis is required to maintain normal or near-normal serum potassium concentrations in patients with end-stage renal disease. The removal of excess potassium by dialysis is achieved by using a dialysate with a potassium concentration lower than that of plasma, creating a gradient favoring potassium removal; the rate of removal is largely a function of this gradient. Typically, one should not expect more than about 80–100 mEq of potassium removal even with the use of a potassium-free dialysate. The plasma potassium concentration falls rapidly in the early stages of dialysis, but as it falls potassium removal becomes less efficient. Because potassium is freely permeable across the dialysis membrane, movement of potassium from the intracellular space to the extracellular space appears to be the limiting factor in potassium removal. Important factors dictating the distribution of potassium between these two spaces include changes in acid–base status, tonicity, glucose and insulin concentration, and catecholamine activity (Table 93.3).



Table 93.3

Factors Affecting Potassium Removal during Hemodialysis

A. Shift K into cells, thereby ↓ dialytic K removal
  • Exogenous insulin
  • Glucose-containing dialysate (vs glucose-free dialysate)
  • Beta agonists
  • Correction of metabolic acidosis during dialysis

B. Shift K to the extracellular space or impair cell K uptake, thereby ↑ dialytic K removal
  • Beta blockers
  • Alpha adrenergic receptor stimulation
  • Hypertonicity
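To illustrate why transcellular potassium movement limits dialytic removal, the following toy two-compartment simulation shows extracellular potassium falling quickly while cumulative removal is capped by slow cellular efflux. All volumes and rate constants are illustrative assumptions, not fitted parameters:

```python
# Toy two-compartment model of intradialytic K kinetics (illustrative only).
# The dialyzer clears K from the extracellular pool toward the bath; the
# intracellular pool refills it slowly. Simple Euler integration, 1-min steps.

V_ECF, V_ICF = 15.0, 25.0   # liters, assumed distribution volumes
K_DIAL = 0.20               # L/min, assumed effective dialytic K clearance
K_CELL = 0.5                # mEq/min per unit of normalized gradient (assumed)
k_ecf, k_icf = 5.5, 140.0   # mEq/L, starting concentrations
bath = 2.0                  # mEq/L, dialysate K

removed = 0.0
for minute in range(240):   # 4-hour treatment
    diffusive = K_DIAL * (k_ecf - bath)               # mEq/min to dialysate
    efflux = K_CELL * (k_icf / 140.0 - k_ecf / 5.5)   # crude cellular refill term
    k_ecf += (efflux - diffusive) / V_ECF
    k_icf -= efflux / V_ICF
    removed += diffusive

# Plasma K settles near 3 mEq/L while total removal stays under ~100 mEq,
# consistent with the limits quoted in the text.
print(f"end plasma K: {k_ecf:.2f} mEq/L, total removed: {removed:.0f} mEq")
```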



The movement of potassium between the intra- and extra-cellular spaces is influenced by changes in acid–base balance that occur during the dialysis procedure. Extracellular alkalosis causes a shift of potassium into cells, whereas acidosis results in potassium efflux from cells. During a typical dialysis, there is net addition of base to the extracellular space—which promotes cellular uptake of potassium and therefore attenuates the removal of potassium during dialysis. With routine dialysis, the change in blood pH is of small magnitude and the effect on potassium removal is not profound.


By contrast, dialysis in patients who are acidotic will result in less potassium removal because potassium is shifted into cells as the serum bicarbonate rises. Weigand et al. described five patients in whom the serum potassium concentration decreased during dialysis even though the dialysate potassium concentration was higher than the original serum potassium concentration. The decline in potassium concentration occurred in association with a marked rise in pH. In one patient, the decline in potassium concentration was of such magnitude that she became quadriplegic and developed respiratory failure. There appears to be no difference in potassium removal whether acetate or bicarbonate is chosen as the dialysate buffer.


Conversely, the serum potassium concentration can influence the net addition of base. Redaelli et al. found that a potassium-free dialysate was associated with less bicarbonate uptake compared to a dialysate that contained a potassium concentration of 2 mEq/liter. It was postulated that a lower potassium dialysate that results in a high plasma-to-dialysate potassium concentration gradient causes less hydrogen ion movement from the intracellular space to the extracellular space and hence less downward titration of the extracellular bicarbonate concentration. As a result, the concentration gradient favoring diffusion of bicarbonate from the dialysate to the extracellular space is reduced. This relationship should be considered when dialyzing an acidotic patient.


Insulin is known to stimulate the cellular uptake of potassium, and it can therefore influence the amount of potassium removed during dialysis. This effect was demonstrated in studies comparing potassium removal using glucose-containing and glucose-free dialysates. Use of a glucose-free dialysate resulted in greater potassium removal than treatment with a glucose-containing bath. A glucose-free dialysate would be expected to result in lower levels of insulin; as a result, there is increased movement of potassium to the extracellular space, where it becomes available for dialytic removal.


Changes in plasma tonicity can affect the distribution of potassium between the intra- and extra-cellular spaces. Administration of hypertonic saline or mannitol is sometimes used in the treatment of hypotension during dialysis. These agents would be expected to favor potassium removal during dialysis because the resultant increased tonicity would favor potassium movement into the extracellular space. There are no studies addressing whether there is any significant clinical benefit with this approach.


Beta-adrenergic stimulation is known to shift potassium into cells and thus lower the extracellular concentration. Inhaled beta stimulants have been reported to be effective in the acute treatment of hyperkalemia. Thus, such therapy prior to dialysis may lower the total amount of potassium removed during the dialytic procedure. Allon et al. found that the cumulative dialytic potassium removal was significantly lower in patients treated with nebulized albuterol 30 minutes prior to the procedure compared to patients in whom the albuterol treatment was omitted.


Alterations in serum potassium concentration during dialysis can conceivably have important effects on systemic hemodynamics. A decrease in serum potassium concentration during hemodialysis would be predicted to increase systemic vascular resistance. Hypokalemia has been shown to increase resistance in skeletal muscle, skin, and coronary vascular beds—possibly through effects on the electrogenic Na-K pump in the sarcolemmal membranes of vascular smooth muscle cells. In addition, decreased serum potassium concentration may enhance the sensitivity of the vasculature to endogenous pressor hormones.


Despite the potential for hypokalemia to increase systemic vascular resistance, Pogglitsch et al. found that the incidence of hypotensive episodes was in fact reduced when supplemental potassium was administered during the final 30 minutes of dialysis. One explanation for this seemingly paradoxical finding rests on the known interaction between hypokalemia and the autonomic nervous system. For example, hypokalemia has been found to be associated with dysautonomia in patients with hyperaldosteronism. It is reasonable to speculate that in patients with advanced renal failure, who already have a propensity for autonomic insufficiency, a fall in plasma potassium may uncover or cause impairment in sympathetic responses.


In support of this suggestion, Henrich et al. found that hypokalemic dialysis was accompanied by a fall in plasma catecholamine concentration compared to dialysis in which the serum potassium concentration was held constant. Moreover, despite similar reductions in blood pressure, the isokalemic dialysis group had a significant increase in heart rate after dialysis whereas the hypokalemic group demonstrated no significant change. Further studies are needed to investigate the effects of fluctuations in serum potassium concentration during dialysis on the autonomic nervous system.


Changes in serum potassium concentration during dialysis may also influence systemic hemodynamics through effects on myocardial performance. Dialysis is associated with an increase in contractility, which can be attributed to an increase in ionized serum calcium. Increased ionized calcium is most closely related to improved ventricular contractility, but modifying effects of concomitant decreases in potassium may also be important. Haddy et al. have demonstrated that the inotropic effect of increased serum calcium concentration is enhanced by simultaneous decreases in plasma potassium concentration. In this regard, Wizemann et al. found that improvement in myocardial contractility during a series of isovolemic dialysis maneuvers was related to a simultaneous increase in plasma calcium and a decrease in plasma potassium concentration. In the presence of an elevated plasma potassium concentration, a high plasma calcium concentration failed to exert a significant inotropic effect.


An increase in peripheral vascular resistance secondary to the development of hypokalemia could have potential detrimental effects on dialysis efficiency. This decrease in efficiency would result from decreased blood flow to urea-rich tissues (such as skeletal muscle) and in effect increase the amount of body-wide recirculation. In support of this possibility, Dolson et al. found that a dialysate potassium concentration of 1.0 mmol/liter compared to 3.0 mmol/liter resulted in lower values for both the urea reduction ratio and Kt/V in 14 patients with end-stage renal disease. By contrast, Zehnder et al. found no effect of dialysate potassium on dialysis adequacy. Although more studies are needed in this area, it is likely that any effect of a low dialysate potassium concentration to decrease dialysis adequacy is small in magnitude. In addition, increasing the dialysate potassium concentration to improve dialysis adequacy will increase the risk of hyperkalemia during the interdialytic period.


Most patients dialyzed with a fixed potassium dialysate tolerate the procedure well and do not suffer from complications of hypokalemia or hyperkalemia. Nevertheless, there are clinical conditions in which an individualized dialysate potassium concentration may be useful. Patients with underlying heart disease, particularly in the setting of digoxin therapy, are prone to arrhythmias as hypokalemia develops toward the end of a typical treatment. The risk of arrhythmias is also increased in the early stages of a dialysis session, when the plasma potassium concentration may still be normal but rapidly declining. The sudden reduction in the plasma potassium concentration during the initial portions of the dialysis procedure has recently been shown to unfavorably alter the QTc (a marker of risk of ventricular arrhythmias) even in dialysis patients without obvious heart disease. Patients who have suffered a cardiac arrest in the dialysis unit are more likely to have been dialyzed against a 0- or 1.0-mEq/liter potassium dialysate compared to patients without this complication.


With these considerations in mind, Redaelli et al. have studied the effects of modeling the dialysate potassium concentration in such a way as to minimize the initial rapid decline in the plasma potassium concentration. Patients with frequent intradialytic premature ventricular complexes were dialyzed using a dialysate with either a fixed potassium level (2.5 mEq/liter) or an exponentially declining potassium level (from 3.9 to 2.5 mEq/liter) that maintained a constant blood-to-dialysate potassium gradient of 1.5 mEq/liter throughout the procedure. In the fixed dialysate group, the blood-to-dialysate potassium gradient decreased over the treatment from 3.0 mEq/liter to 1.4 mEq/liter. The variable potassium dialysate decreased premature ventricular complexes, a finding most evident during the first hour of the procedure. The total drop in the serum potassium concentration was no different between the fixed and variable potassium dialysate groups.
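The profiled prescription amounts to making the bath potassium track the falling plasma potassium at a fixed offset. A sketch under stated assumptions (the exact exponential decay shape is our simplification; the 1.5 mEq/liter gradient and the 3.9 to 2.5 mEq/liter bath range come from the study described above):

```python
# Dialysate K profiling in the spirit of the study above: the bath declines
# exponentially from 3.9 toward 2.5 mEq/L so that the blood-to-dialysate K
# gradient stays near 1.5 mEq/L as plasma K falls during the session.

import math

GRADIENT = 1.5                   # mEq/L, constant gradient (study value)
BATH_START, BATH_END = 3.9, 2.5  # mEq/L (study values)
SESSION_MIN = 240                # assumed 4-hour session

# Choose the decay constant so the bath lands ~0.05 mEq/L above BATH_END by
# session end (the study specifies only that the decline was exponential).
TAU = SESSION_MIN / math.log((BATH_START - BATH_END) / 0.05)

def dialysate_k(t_min: float) -> float:
    return BATH_END + (BATH_START - BATH_END) * math.exp(-t_min / TAU)

def tracked_plasma_k(t_min: float) -> float:
    """Plasma K implied by the profile (bath K plus the constant gradient)."""
    return dialysate_k(t_min) + GRADIENT

for t in (0, 60, 120, 180, 240):
    print(f"t={t:3d} min  bath {dialysate_k(t):.2f}  plasma ~{tracked_plasma_k(t):.2f} mEq/L")
```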


In addition to decreasing arrhythmias, maintenance of a constant blood-to-dialysate potassium gradient may prove useful in patients who tend to develop worsening hypertension during the course of the dialysis procedure. As noted earlier, hypokalemia increases resistance in skeletal muscle, skin, and coronary vascular beds, and may enhance the sensitivity of the vasculature to endogenous pressor hormones. In chronic dialysis patients, postdialysis rebound hypertension is greater with a 1.0-mEq/liter than with a 3.0-mEq/liter potassium dialysate. Although not yet studied, preventing the initial rapid decline in the plasma potassium concentration with a ramped dialysate potassium may help attenuate the hypertensive response some patients exhibit toward the end of a dialysis treatment.


In summary, due to the kinetics of potassium movement from the intra- to the extra-cellular space one can expect only up to 70–90 mEq of potassium to be removed during a typical dialysis session. As a result, one should not overestimate the effectiveness of the dialytic procedure in the treatment of severe hyperkalemia. The total amount removed will exhibit considerable variability and will be influenced by changes in acid–base status, changes in tonicity, changes in glucose and insulin concentration, and catecholamine activity. Given the tendency for the plasma potassium to rise in the immediate postdialysis time period, the most efficient way of removing excess potassium stores would be to prescribe two- to three-hour periods of dialysis separated by several hours.


Studies examining the hemodynamic effect of potassium fluxes during hemodialysis are limited. More importantly, deliberate alterations in dialysate potassium concentration to effect hemodynamic stability would not be without risk. Use of low-potassium dialysate concentration may contribute to arrhythmias, especially in those patients with underlying coronary artery disease or those taking digoxin. On the other hand, use of dialysate with high potassium concentration may predispose patients to pre-dialysis hyperkalemia. In patients at high risk for arrhythmias on dialysis, modeling the dialysate potassium concentration so as to maintain a constant blood-to-dialysate potassium gradient throughout the procedure may be of clinical benefit.


Peritoneal Dialysis


Potassium is cleared by peritoneal dialysis at a rate similar to that of urea. With continuous ambulatory peritoneal dialysis (CAPD) and 10 liters of drainage per day, approximately 35–46 mEq of potassium is removed per day. Daily potassium intake is usually greater than this amount, and yet significant hyperkalemia is uncommon in these patients. Presumably, potassium balance is maintained by increased colonic secretion of potassium and by some residual renal excretion. Given these considerations, potassium is not routinely added to the dialysate.


Maximal removal of potassium with peritoneal dialysis is approximately 10 mEq/hr even in the setting of severe hyperkalemia. It should be noted that removal rates with Kayexalate enemas far exceed this value and may approach 30 mEq/hr. In patients undergoing frequent exchanges, hypokalemia may develop. In these instances, potassium can be added to the dialysate to achieve a final concentration of 2–3 mEq/liter. This is particularly important in patients receiving digoxin because the development of hypokalemia can precipitate arrhythmias.




Clinical Disorders of Potassium in the Dialysis Patient


Hypokalemia


In the hemodialysis patient, hypokalemia can be a sign of poor oral intake and severe malnourishment. On occasion, hypokalemia can result from K + binding in the gastrointestinal tract. In one report (not in a dialysis patient), a serum K + concentration of 0.9 mmol/L was found in a three-year-old girl following several days of oral and rectal administration of bentonite given as a home remedy for constipation. Bentonite, also called montmorillonite or fuller’s earth, is a type of clay primarily composed of hydrated aluminum silicate. Clay eating (geophagia) can be a manifestation of pica and has been reported to cause hypokalemic paralysis during pregnancy and in the postpartum period. Clay eating is also practiced by some dialysis patients and should be considered in the setting of unexplained hypokalemia.


Cation exchange resins are frequently used to manage hyperkalemia in patients with chronic kidney disease. Abuse of these resins can result in hypokalemia, hypomagnesemia, and occasionally metabolic alkalosis. The most commonly used resin is sodium polystyrene sulfonate, but calcium polystyrene sulfonate is also available. Following oral administration of these drugs, sodium or calcium is released from the resin in exchange for hydrogen ions (H + ) in the gastric juice. As the resin passes through the rest of the gastrointestinal tract, H + is exchanged for other cations such as potassium, which is present in greater quantities, particularly in the colon. The affinity of cations for these resins is as follows: Ca 2+ > Mg 2+ > K + > Na + > H + . In addition to differences in affinity and concentration, cation binding to the resin is influenced by the duration of exposure, which is primarily dictated by gut transit time.


The primary complication of using sodium polystyrene sulfonate is the development of sodium overload. Absorption of the liberated sodium can lead to hypertension, congestive heart failure, and occasionally hypernatremia. Since the resin also binds divalent cations, hypomagnesemia and hypocalcemia can develop when using this agent. Decreased plasma levels of magnesium and calcium are more likely to occur in patients taking diuretics or those with poor nutrition. Use of the resin can also lead to metabolic alkalosis when administered with antacids or phosphate binders such as magnesium hydroxide or calcium carbonate: as magnesium and calcium bind to the resin, the base is free to be absorbed into the systemic circulation. Chronic use of the resin has also been associated with small and large bowel ulcerations.


Hyperkalemia


Inadequate dialysis is an important consideration in the workup of hyperkalemia. Frequently missed treatments or shortening of the treatment time on a repetitive basis are frequent causes. In patients who are deemed to be otherwise compliant, one should consider recirculation within the vascular access as a potential cause of the disorder.


Dietary indiscretion is one of the most common reasons for hyperkalemia in the dialysis patient. In the presence of normal renal and adrenal function, it is difficult to ingest sufficient K + in the diet to produce hyperkalemia; rather, dietary K + usually contributes to hyperkalemia in the setting of impaired kidney function. Dietary sources particularly enriched with K + include melons, citrus juice, and commercial salt substitutes containing potassium. Other hidden sources of K + reported to cause life-threatening hyperkalemia include raw coconut juice (K + concentration of 44.3 mmol/L) and Noni juice. While clay ingestion can cause hypokalemia due to K + binding in the gastrointestinal tract, river bed clay is K + enriched (100 mEq K + in 100 gm of clay) and can cause life-threatening hyperkalemia in chronic kidney disease patients. Ingestion of burnt match heads (cautopyreiophagia) can also be a hidden source of K + ; this activity was found to add an additional 80 mmol of K + to one dialysis patient’s daily intake and produced a plasma K + concentration of 8 mmol/liter.


Hyperkalemia can also occur as an iatrogenic complication in the hospital setting. A 16-day-old infant with newly diagnosed maple syrup urine disease was placed on continuous venovenous hemofiltration to treat markedly elevated levels of leucine, isoleucine, and valine. To treat a decrease in serum K + , a 10 ml vial containing 20 mEq KCl was injected into a 5-liter bag of replacement fluid. Within four minutes, ventricular premature beats developed that rapidly deteriorated into ventricular fibrillation. The serum K + concentration was 9.6 mEq/l. The rapid development of hyperkalemia was attributed to injecting the KCl into the dependent portion of the hanging 5-liter bag through a port immediately adjacent to the exit port. As a result of poor mixing, the concentrated KCl was delivered directly to the patient, resulting in life-threatening hyperkalemia.


Severe hemolysis can produce an endogenous K + load sufficient to cause hyperkalemia, particularly in the setting of impaired renal function. A chronic dialysis patient with a prosthetic aortic valve developed severe hemolysis and hyperkalemia following the abrupt onset of atrioventricular nodal reentrant tachycardia. The hemolysis and release of K + were attributed to fragmentation of red blood cells by the prosthetic valve due to the hemodynamic turbulence brought on by the arrhythmia.


Blood transfusions can be a contributing factor in the development of hyperkalemia. The risk of transfusion-associated hyperkalemia is related to not only the number of red blood cell transfusions but also the rapidity in which the units are given. Concomitant conditions such as low cardiac output, metabolic acidosis, hypocalcemia, hyperglycemia, and hypothermia increase this risk.


Whole blood and packed red blood cells (PRBCs) are stored in an anticoagulant preservative solution and have a shelf-life of approximately 35 days. The duration of storage can be extended to 42 days through the addition of an additive solution containing varying concentrations of adenine, dextrose, and other substances. During storage, K + leaks into the supernatant due to aging of red blood cell membranes and decreased synthesis of adenosine triphosphate. The magnitude of this leak increases with the duration of storage.


Irradiation of blood to inactivate T-lymphocytes and minimize the risk of graft-versus-host disease enhances K + leakage from red cells due to subtle membrane injury. Depending on the conditions, the supernatant of stored red blood cell units may contain more than 60 mEq/l of K + . If fresh PRBCs are unavailable, the risk of post-transfusion hyperkalemia can be minimized by washing the cells and decreasing the amount of additive solution. These maneuvers are of particular use in neonatal patients undergoing surgery for congenital heart disease who require irradiated blood due to the concomitant presence of cell-mediated immunodeficiency disorders.


Renin-angiotensin system blockers are frequently used to treat hypertension in patients with end-stage renal disease. Hyperkalemia is a potential concern with these drugs even in the functionally anephric patient to the extent that aldosterone levels fall and colonic K + excretion decreases. Indeed, a small number of patients undergoing dialysis have been described who developed hyperkalemia in association with ACEI and ARB therapy. By contrast, this complication did not occur in a prospective crossover study of 69 maintenance hemodialysis patients treated with either ACEI or ARB therapy alone or in combination.


Acid–Base


During the course of advancing renal failure, the ability of the kidney to regenerate consumed bicarbonate becomes progressively impaired. As a result, daily acid production leads to a fall in serum bicarbonate concentration. In the long term, the serum bicarbonate concentration eventually stabilizes despite continuing positive acid balance. A stable, though low, serum bicarbonate concentration is maintained at the expense of other buffer stores such as bone bicarbonate. The goal of dialysis in acid–base balance is to transfer sufficient base to the patient to neutralize metabolic acid production and thus correct the metabolic acidosis and prevent depletion of body buffer stores. Base transfer across the dialysis membrane has been achieved using bicarbonate- or acetate-containing dialysate.


Hemodialysis


The early use of bicarbonate as the base in dialysis solutions required a cumbersome system in which CO 2 was continuously bubbled through the dialysate to lower pH in order to prevent the precipitation of calcium and magnesium salts. As a result, in the 1960s acetate became the standard dialysate buffer used to correct uremic acidosis and to offset the diffusive losses of bicarbonate during hemodialysis.


Acetate was an effective buffer in dialysis patients because it is metabolized to bicarbonate, primarily in muscle and liver. However, over the next several years reports began to accumulate linking routine use of acetate with cardiovascular instability and hypotension during dialysis. This intolerance to acetate was found to be particularly common in patients with decreased muscle mass, in whom acetate influx would be expected to exceed the capacity to convert acetate to bicarbonate. In particular, critically ill patients undergoing acute hemodialysis (especially with the use of large-surface-area dialyzers) were found to exhibit vascular instability when exposed to acetate in dialysis fluid. Acetate intolerance became more of an issue with the introduction of high-efficiency dialyzers in the 1980s. In this setting, accumulation of acetate was associated with nausea, vomiting, fatigue, decreased myocardial contractility, peripheral vasodilation, and arterial hypoxemia. A more detailed discussion of acetate dialysate is available in the third edition of this book and elsewhere (Table 93.4).

