Biomarkers in Acute and Chronic Kidney Diseases






  • Chapter Outline

  • BIOMARKER DEFINITION

  • PROCESS OF BIOMARKER DISCOVERY, ASSAY VALIDATION, AND QUALIFICATION IN A CLINICAL CONTEXT

    • Phase 1: Discovery of Potential Biomarkers through Hypothesis-Generating Exploratory Studies

    • Phase 2: Development and Validation of an Assay for the Measurement or Identification of the Biomarker in Clinical Samples

    • Phase 3: Demonstration of the Biomarker's Potential Clinical Utility in Retrospective Studies

    • Phase 4: Performance of Prospective Screening Studies

    • Phase 5: Continued Assessment of the Validity of the Biomarker in Routine Clinical Practice

  • ANALYSIS OF BIOMARKER PERFORMANCE

  • CHARACTERISTICS OF AN IDEAL BIOMARKER FOR KIDNEY DISEASE

  • ACUTE KIDNEY INJURY MARKERS

  • GLOMERULAR INJURY MARKERS

    • Serum Glomerular Filtration Markers

    • Urinary Glomerular Cell Injury Markers

  • URINARY TUBULAR INJURY MARKERS

    • Urine Microscopy

    • α1-Microglobulin

    • β2-Microglobulin

    • Glutathione S-Transferase

    • Hepcidin-25

    • Interleukin-18

    • Kidney Injury Molecule-1

    • Liver-Type Fatty Acid–Binding Protein

    • Netrin-1

    • Neutrophil Gelatinase–Associated Lipocalin

    • N-Acetyl-β-D-Glucosaminidase

    • Proteinuria

    • Albuminuria

    • Urinary Cystatin C

    • TIMP-2 and IGFBP-7

  • CHRONIC KIDNEY DISEASE BIOMARKERS

    • Plasma Asymmetric Dimethylarginine

    • Fibroblast Growth Factor-23

    • Urinary Monocyte Chemoattractant Protein-1

  • URINARY RENAL FIBROSIS MARKERS

    • Connective Tissue Growth Factor

    • Transforming Growth Factor-β1

    • Collagen IV

  • COMBINATIONS OF MULTIPLE BIOMARKERS

  • FDA CRITICAL PATH INITIATIVE: A NEED FOR BETTER BIOMARKERS

  • THE KIDNEY HEALTH INITIATIVE

  • FUTURE OF BIOMARKERS


Kidney disease is a global health problem, and both acute kidney injury (AKI) and chronic kidney disease (CKD) are increasing in incidence. In the United States, the incidence of AKI of all severities has risen steadily, and it is increasingly recognized that AKI predisposes to progression of CKD toward end-stage kidney disease (ESKD), which ultimately requires dialysis or kidney transplantation. According to the World Health Organization, approximately 850,000 patients develop ESKD every year. Treatment of ESKD poses a major challenge to health care systems and the global economy, and the burden of kidney disease is greatest in developing countries, where it is compounded by inadequate socioeconomic and health care infrastructures. Importantly, progression may be curtailed if kidney disease is diagnosed early; detection and management of acute and chronic kidney diseases in their early, reversible, and potentially treatable stages are therefore of paramount importance. Biomarkers that help diagnose kidney injury, predict disease progression, and gauge the effectiveness of therapeutic interventions will be important adjuncts to standard management strategies.


Many novel high-throughput technologies in genomics, proteomics, and metabolomics now make it possible to interrogate hundreds or even thousands of potential biomarkers at once, without prior knowledge of the underlying biology or pathophysiology of the system being studied. As a result, there is renewed interest in discovering novel biomarkers for use in drug development and patient care. Despite notable achievements, however, only a few biomarkers (blood urea nitrogen [BUN] level, serum creatinine concentration, urinalysis results, albuminuria, and proteinuria) are routinely used to diagnose and monitor kidney injury. These commonly used "gold standard" biomarkers of kidney function are not optimal for detecting injury or dysfunction early enough to allow prompt therapeutic intervention. Although additional candidate biomarkers have been reported, none has yet been validated sufficiently to justify its use in patient care decisions, although a few appear quite promising.




Biomarker Definition


In 2001, the U.S. Food and Drug Administration (FDA) standardized the definition of a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to therapeutic intervention." The National Institutes of Health further classified biomarkers on the basis of their utility (see Table 30.1). Biomarkers can serve a wide range of functions in drug development, clinical trials, and therapeutic management. There are many classes of biomarkers: prognostic, predictive, pharmacodynamic, and surrogate; these categories are not mutually exclusive, and definitions of each type appear in Table 30.1. Examples of biomarkers include proteins, lipids, genomic or proteomic patterns, imaging determinations, electrical signals, and cells present in urine. Some biomarkers also serve as surrogate endpoints: a surrogate endpoint is a biomarker intended to substitute for a clinical endpoint and is expected to predict clinical benefit (or harm, or lack of either) on the basis of epidemiologic, therapeutic, pathophysiologic, or other scientific evidence. An ideal biomarker is easily measurable, reproducible, sensitive, cost effective, easily interpretable, and present in readily available specimens (blood and urine).



Table 30.1

Biomarker Definitions

Biomarker: A characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to therapeutic intervention.

Prognostic biomarker: A baseline patient or disease characteristic that categorizes patients by degree of risk for disease occurrence or progression, informing about the natural history of the disorder in the absence of a therapeutic intervention.

Predictive biomarker: A baseline characteristic that categorizes patients by their likelihood of response to a particular treatment, predicting either a favorable or unfavorable response.

Pharmacodynamic biomarker: A dynamic assessment that shows that a biologic response has occurred in a patient who has received a therapeutic intervention. Pharmacodynamic biomarkers may be treatment-specific or broadly informative of disease response, with the specific clinical setting determining how the biomarker is used and interpreted.

Clinical endpoint: A characteristic or variable that reflects how a patient fares or functions or how long a patient survives.

Surrogate endpoint biomarker (type 2 biomarker): A marker intended to substitute for a clinical endpoint. A surrogate endpoint is expected to predict clinical benefit, harm, lack of benefit, or lack of harm on the basis of epidemiologic, therapeutic, pathophysiologic, or other scientific evidence.




Process of Biomarker Discovery, Assay Validation, and Qualification in a Clinical Context


The primary challenges in developing biomarkers for kidney injury and toxicity are discovery of candidate markers, design of an assay, validation of the assay, and qualification of the biomarker for use in specific clinical contexts. The process of biomarker identification and development is arduous and involves several phases. For simplicity, this process can be divided into the following five phases (adapted and modified from Pepe and colleagues).


Phase 1: Discovery of Potential Biomarkers through Unbiased or Hypothesis-Generating Exploratory Studies


The primary goal of phase 1 is to identify potential leads using various technologies and then to confirm and prioritize them. The search for biomarkers often begins with preclinical studies that compare tissue or biologic fluids from diseased animals (e.g., animals with kidney injury) with those from healthy animals to identify genes or proteins that are upregulated or downregulated in diseased tissue relative to control tissue. When biologic samples such as blood and urine are readily available from humans, it is possible to forgo the animal model stage. Innovative discovery technologies include microarray-based gene expression profiling, microRNA expression profiling, and proteomic and metabolomic profiling of biologic fluids based on mass spectrometry and other technologies. The candidate-marker approach, especially when informed by the pathophysiology of the disease for which the biomarker is being evaluated, should not be ignored.


Once a promising biomarker is discovered, the validation process begins: an assay must be developed and validated. Validation is laborious and expensive, requiring access to patient samples with complete clinical annotation and long-term follow-up, as described in the discussion of phase 2. In addition, each biomarker must be qualified for a specific application. This is especially true for kidney diseases, in which a single biomarker may not satisfy all the requirements of an ideal biomarker, as described in the section on phase 4. Incorporation of several novel biomarkers into a panel may enable simultaneous assessment of site-specific kidney injury or of the several mechanisms contributing to clinical syndromes.


Phase 2: Development and Validation of an Assay for the Measurement or Identification of the Biomarker in Clinical Samples


The primary goal of phase 2 is to develop and validate a clinically useful assay for a biomarker that has the ability to distinguish a person with kidney disease/injury from persons with healthy kidneys in a high-throughput fashion. This phase involves development of an assay, optimization of assay performance, and evaluation of the reproducibility of the assay results within and among laboratories. Defining reference ranges of biomarker values is a crucial step before the biomarker can be used clinically. It is important to characterize how the levels of these markers vary with patient age, sex, and race or ethnicity, and how biomarker values are related to known risk factors.


Phase 3: Demonstration of the Biomarker’s Potential Clinical Utility in Retrospective Studies


In phase 3 of biomarker development, the primary objectives are to (1) evaluate the biomarker's potential in samples from a completed clinical study, (2) test its diagnostic potential for early detection, and (3) determine its sensitivity and specificity at defined threshold values for use in prospective studies. For instance, if biomarker levels differ significantly between subjects with acute or chronic kidney injury and control subjects only at the time of clinical diagnosis, the biomarker shows little promise for population screening or early detection. In contrast, if levels differ significantly hours, days, or years before clinical symptoms appear, the biomarker's potential for early detection is greater. This phase also involves comparing the biomarker with other novel biomarkers or with existing "gold standard" biomarkers and defining its performance characteristics (i.e., sensitivity and specificity) using receiver operating characteristic curve analysis. The latter process is particularly challenging in kidney disease, given uncertainties in the sensitivity and specificity of the gold standard itself.


Phase 4: Performance of Prospective Screening Studies


The primary aim of phase 4 studies is to determine the operating characteristics of the biomarker in a relevant population by measuring detection rate and false referral rate. In contrast to phase 1, 2, and 3 studies, which are based primarily on stored specimens, studies in phase 4 involve screening subjects prospectively and demonstrating that clinical care is changed as a result of the information provided by the biomarker analysis.


Biomarker Qualification Process


An application for FDA qualification of a novel biomarker requires specification of the biomarker's intended use in nonclinical and clinical contexts, together with collection of evidence supporting qualification. This can be a joint and collaborative effort among regulatory agencies, pharmaceutical companies, and academic scientists.


Steps involved in the biomarker qualification pilot process, as described by Dr. Federico Goodsaid when he was at the FDA, are as follows: (1) submission to an FDA interdisciplinary pharmacogenomic review group of a request to qualify the biomarker for a specific use; (2) recruitment of a biomarker qualification review team (containing both nonclinical and clinical members); (3) assessment of the biomarker context and available data in a voluntary data submission; (4) evaluation of the qualification study strategy; (5) review of the qualification study results; and (6) acceptance or rejection of the biomarker for the suggested use.


Data are shared between the FDA and the pharmaceutical industry or academic laboratories through voluntary exploratory data submissions (VXDSs). Submission of exploratory biomarker data through VXDSs allows interaction between reviewers at the FDA and researchers in industry or academia regarding study designs, sample collection and storage protocols, technology platforms, and data analysis. This pilot process for biomarker qualification allowed the Predictive Safety Testing Consortium to apply simultaneously to both U.S. and European drug authorities for qualification of new nephrotoxicity biomarkers (kidney injury molecule-1, albumin, total protein, cystatin C, clusterin, trefoil factor 3, and β2-microglobulin) as predictors of drug-mediated nephrotoxicity. The FDA and the corresponding European authority (the European Medicines Agency, or EMA) reviewed the application separately and decided whether each would consider the new biomarkers "fit for purpose" in preclinical research. Some of these markers were proposed for qualification as biomarkers of clinical drug-induced nephrotoxicity once further supportive human data are submitted.


It is notable that the process described here is specific to the FDA and the United States; the biomarker validation and approval process varies significantly around the world. As of June 2014, the FDA had not approved a new biomarker for the diagnosis or clinical management of acute or chronic renal dysfunction, although biomarkers have been approved for clinical use in a number of other countries in Europe and Asia.


Phase 5: Continued Assessment of the Validity of the Biomarker in Routine Clinical Practice


Phase 5 addresses whether measurement of the biomarker alters physician decision making and/or reduces mortality or morbidity associated with the given disease in the population.




Analysis of Biomarker Performance


The widely accepted measure of biomarker sensitivity and specificity is the receiver operating characteristic (ROC) curve, which displays the proportion of subjects with and without disease correctly identified at various cutoff points. An ROC curve is a graphic display of the trade-off between the true-positive rate (sensitivity) and the false-positive rate (1 − specificity, where specificity is expressed as a value from 0 to 1) when the biomarker is a continuous variable (Figure 30.1). Sensitivity is plotted on the ordinate and 1 − specificity on the abscissa, and each point on the curve represents the true-positive and false-positive rates associated with a particular test value. The diagonal, represented by the equation true-positive rate (sensitivity) = false-positive rate (1 − specificity), corresponds to the set of points for which there is no selectivity in predicting disease; the area under this line of "unity" is 0.5, indicating no advantage over the flip of a coin. The performance of a biomarker can be quantified by calculating the area under the ROC curve (AUC), which equals the probability that a randomly sampled case has a larger biomarker value (or risk score) than a randomly sampled control. Although this definition makes the AUC easy to interpret, the interpretation is not always clinically meaningful, because cases and controls do not present to clinicians in random pairs. Thus, although an ideal biomarker would have an AUC of 1.0 (a clinical rarity), the AUC itself lacks direct clinical relevance. Despite these flaws, the AUC is widely reported and familiar to clinicians. Its shortcomings extend to the assessment of the incremental change in AUC (ΔAUC) when a new marker is added to a group of previously established predictors: the clinical impact of a ΔAUC of 0.02 is often unclear, and the statistics and P values behind such calculations remain problematic.
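To make these definitions concrete, the following minimal sketch computes ROC points and the AUC directly from the probabilistic definition given above (the proportion of case-control pairs in which the case has the larger value). The biomarker concentrations and outcome labels are hypothetical illustrations, not data from any study discussed in this chapter.

```python
# Minimal sketch: ROC points and AUC for a continuous biomarker.
# All biomarker values and outcome labels below are hypothetical.

def roc_points(values, labels):
    """Return (false-positive rate, true-positive rate) pairs, one per observed cutoff."""
    cases = [v for v, y in zip(values, labels) if y == 1]
    controls = [v for v, y in zip(values, labels) if y == 0]
    points = []
    for cutoff in sorted(set(values), reverse=True):
        tpr = sum(v >= cutoff for v in cases) / len(cases)        # sensitivity
        fpr = sum(v >= cutoff for v in controls) / len(controls)  # 1 - specificity
        points.append((fpr, tpr))
    return points

def auc(values, labels):
    """AUC = probability that a randomly sampled case exceeds a randomly
    sampled control, with ties counted as one half."""
    cases = [v for v, y in zip(values, labels) if y == 1]
    controls = [v for v, y in zip(values, labels) if y == 0]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical urinary biomarker concentrations (ng/mL); 1 = AKI, 0 = no AKI.
values = [12, 35, 45, 22, 40, 50, 15, 30]
labels = [0, 1, 0, 0, 1, 1, 0, 1]
print(roc_points(values, labels))
print(f"AUC = {auc(values, labels):.2f}")  # 0.5 = chance; 1.0 = perfect discrimination
```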




Figure 30.1


Receiver operating characteristic (ROC) curves. AUC, area under the curve.


Other important parameters related to biomarker performance, primarily with respect to the testing of larger or specific populations, are positive and negative predictive values. The positive predictive value is the proportion of persons who test positive for a disease and truly have the disease, whereas negative predictive value represents the proportion of persons who test negative and do not have the disease. There is considerable interest in developing algorithms that use a composite of values of several biomarkers that are measured in parallel for the purpose of increasing diagnostic potential or predicting disease course and patient outcomes.
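Because predictive values, unlike sensitivity and specificity, depend on disease prevalence, a brief sketch may help; the sensitivity, specificity, and prevalence figures below are hypothetical.

```python
# Sketch: predictive values from sensitivity, specificity, and prevalence.
# The test characteristics and prevalences are hypothetical.

def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    ppv = tp / (tp + fp)  # fraction of positive tests with true disease
    npv = tn / (tn + fn)  # fraction of negative tests truly disease free
    return ppv, npv

# The same test performs very differently at different disease prevalences:
for prevalence in (0.05, 0.30):
    ppv, npv = predictive_values(0.90, 0.85, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
# prevalence 5%:  PPV ~ 0.24, NPV ~ 0.99
# prevalence 30%: PPV ~ 0.72, NPV ~ 0.95
```

As the output illustrates, a test with fixed sensitivity and specificity yields a much lower positive predictive value when the disease is rare, which is why predictive values matter most when screening larger or specific populations.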


More recently, the Net Reclassification Index (NRI) and the Integrated Discrimination Improvement Index (IDI) have been used to evaluate the added value of new biomarkers. They have gained popularity in part because of the aforementioned difficulty in interpreting the true clinical meaning of significant but small changes in AUC (e.g., a 0.04 increase), and because new biomarkers often must have exceptional discriminatory power to increase the AUC once it has reached a certain level. Briefly, reclassification analysis defines risk categories in advance and then recalculates each patient's risk using the previously established predictors together with the new biomarker. The reclassification rate is simply the proportion of the population whose risk category changes with the new biomarker; a low reclassification rate means that treatment decisions will rarely be altered by the new biomarker. The categorical NRI expands on the reclassification rate: using the predefined risk strata, it counts as an improvement the movement of an event (e.g., AKI) into a higher risk category after addition of the biomarker to a pre-existing clinical model, as well as the movement of a non-event (e.g., no AKI) into a lower risk category; worsened reclassification is the reverse (events moving down and non-events moving up). By definition, the categorical NRI does not depend on the underlying performance of the clinical model and is very sensitive to the number of risk categories and the thresholds chosen for them. Pencina and colleagues have stated that medium effect sizes have an NRI between 0.4 and 0.6 and large effect sizes an NRI greater than 0.6. NRI values can range from −2.0 to 2.0, with the ideal being 2.0: 1.0 (100%) improved reclassification of events plus 1.0 (100%) improved reclassification of non-events.
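As an illustration, the following sketch computes a categorical NRI from hypothetical predicted probabilities; the risk-stratum cutoffs (10% and 30%) are arbitrary choices for the example, not established clinical thresholds.

```python
# Sketch: categorical NRI from predicted probabilities before and after adding
# a biomarker to a clinical model. Probabilities, outcomes, and the risk-stratum
# cutoffs (10% and 30%) are hypothetical.

def risk_category(p, cutoffs=(0.10, 0.30)):
    """Map a predicted probability to a risk stratum (0 = low, 1 = medium, 2 = high)."""
    return sum(p >= c for c in cutoffs)

def categorical_nri(p_old, p_new, events):
    up_e = down_e = up_ne = down_ne = 0
    n_events = sum(events)
    n_nonevents = len(events) - n_events
    for old, new, event in zip(p_old, p_new, events):
        moved = risk_category(new) - risk_category(old)
        if event:
            up_e += moved > 0      # events reclassified upward (improvement)
            down_e += moved < 0    # events reclassified downward (worsening)
        else:
            up_ne += moved > 0     # non-events reclassified upward (worsening)
            down_ne += moved < 0   # non-events reclassified downward (improvement)
    return (up_e - down_e) / n_events + (down_ne - up_ne) / n_nonevents

# Hypothetical predicted AKI probabilities without / with the new biomarker:
p_old = [0.05, 0.25, 0.15, 0.35, 0.08, 0.20]
p_new = [0.04, 0.45, 0.09, 0.50, 0.12, 0.15]
events = [0, 1, 0, 1, 0, 0]
print(f"categorical NRI = {categorical_nri(p_old, p_new, events):.2f}")  # 0.50 here
```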


The use of a category-free NRI has been on the rise; this version of the NRI credits any upward or downward movement in predicted risk, regardless of risk-stratum thresholds. The Integrated Discrimination Improvement (IDI) is defined as the difference in discrimination slopes between the unadjusted and biomarker-adjusted clinical models, with large effect sizes having an IDI of 0.10 or greater and medium effect sizes an IDI between 0.05 and 0.10. The NRI and IDI have gained rapid acceptance in the renal and nonrenal biomarker literature and are potentially more sensitive metrics than the AUC; however, caution is warranted regarding their clinical utility, because these metrics have not been universally accepted by statisticians.
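A companion sketch for the IDI, computed as the change in discrimination slope, is shown below using the same hypothetical probabilities as in the NRI example.

```python
# Sketch: IDI as the change in discrimination slope (mean predicted probability
# in events minus mean in non-events) after adding the biomarker. The predicted
# probabilities reused here are the same hypothetical values as above.

def discrimination_slope(probs, events):
    n_events = sum(events)
    mean_events = sum(p for p, e in zip(probs, events) if e) / n_events
    mean_nonevents = sum(p for p, e in zip(probs, events) if not e) / (len(events) - n_events)
    return mean_events - mean_nonevents

def idi(p_old, p_new, events):
    return discrimination_slope(p_new, events) - discrimination_slope(p_old, events)

p_old = [0.05, 0.25, 0.15, 0.35, 0.08, 0.20]
p_new = [0.04, 0.45, 0.09, 0.50, 0.12, 0.15]
events = [0, 1, 0, 1, 0, 0]
print(f"IDI = {idi(p_old, p_new, events):.3f}")  # 0.10 or more: suggested large effect
```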




Characteristics of an Ideal Biomarker for Kidney Disease


Characteristics of an ideal biomarker for kidney disease are described in Table 30.2 . For AKI, the biomarker should be (1) organ specific and should allow differentiation among intrarenal, prerenal, and postrenal causes of AKI as well as acute glomerular injury; (2) able to detect AKI early in the course and to predict the course of AKI and, potentially, the future implications of AKI; (3) able to identify the cause of AKI; (4) site specific and able to inform pathologic changes in various segments of renal tubules during AKI as well as to correlate with the histologic findings in kidney biopsy specimens; (5) easily and reliably measured in a noninvasive or minimally invasive manner; (6) stable in its matrix; (7) rapidly and reliably measurable at the bedside; and (8) inexpensive to measure.



Table 30.2

Characteristics of an Ideal Kidney Biomarker

Functional Properties

  • Shows rapid and reliable increase in response to kidney diseases

  • Is highly sensitive and specific for acute and/or chronic kidney disease

  • Shows good correlation with degree of renal injury

  • Provides risk stratification and prognostic information (severity of kidney disease, need for dialysis, length of hospital stay, and mortality)

  • Is site-specific to detect early injury (proximal, distal, interstitium, or vasculature) and identify pathologic changes in specific segments of renal tubules

  • Is applicable across different races and age groups

  • Allows recognition of the cause of kidney injury or disease (e.g., ischemia, toxins, sepsis, cardiovascular disease, diabetic nephropathy, lupus, or combinations)

  • Is organ-specific and allows differentiation among intrarenal, prerenal, and extrarenal causes of kidney injury

  • Noninvasively identifies the duration of kidney failure (acute kidney injury, chronic kidney injury)

  • Is useful to monitor the response to therapeutic interventions

  • Provides information on the risk of complications from comorbid conditions (especially in chronic kidney disease)

Physicochemical Properties

  • Is stable over time across different temperature and pH conditions, with clinically relevant storage conditions

  • Is rapidly and easily measurable

  • Is not subject to interference by drugs or endogenous substances



In CKD (unlike AKI), the timing and nature of the insult are very hard to estimate, making the search for early biomarkers for CKD very difficult. An ideal biomarker for CKD shares many of the requirements described earlier for AKI biomarkers, including providing insight into (1) the location of the injury (e.g., glomerular, interstitial, tubular), (2) the disease mechanism, (3) the progressive course of the disease, and (4) the risk of complications from comorbid conditions such as cardiovascular disease and diabetes.




Acute Kidney Injury Markers


In the cardiac sciences, the discovery of biomarkers such as the troponins, which reflect early cardiomyocyte damage rather than decreased cardiac function, has enabled the development and implementation of novel therapeutic strategies to reduce coronary insufficiency and associated morbidity and mortality. By contrast, the diagnostic delay associated with kidney biomarkers such as serum creatinine concentration has impaired the ability of nephrologists to conduct interventional studies in which the intervention is implemented early in the disease process. Although the last decade has seen a revolution in diagnostic criteria for AKI, with the RIFLE (Risk, Injury, Failure, Loss, End-Stage Kidney Disease) classification and the Acute Kidney Injury Network (AKIN) definition of AKI being harmonized into the Kidney Disease: Improving Global Outcomes (KDIGO) classification (Table 30.3), these criteria remain limited by their reliance, at some level, on the serum creatinine concentration. This limitation stems from creatinine's role as a functional biomarker: it can rise in prerenal azotemia when there is no tubular injury and can remain unchanged despite significant tubular injury, particularly in patients with good underlying kidney function and significant renal reserve. Nonetheless, these criteria have advanced our understanding of the epidemiology of AKI, and these standardized consensus definitions have allowed comparison and aggregation of data across a large number of studies. Biomarkers of AKI can serve several purposes and are no longer thought of simply as replacements for serum creatinine. Table 30.4 summarizes several potential uses of AKI biomarkers, and Figure 30.2 summarizes the kidney-specific locations of the AKI biomarkers discussed in this section.



Table 30.3

Kidney Disease: Improving Global Outcomes (KDIGO) Staging of AKI

Stage 1
Serum creatinine criteria: 1.5-1.9 times baseline, or ≥0.3 mg/dL (26.5 µmol/L) increase
Urine output criteria: <0.5 mL/kg/hr for 6-12 hr

Stage 2
Serum creatinine criteria: 2.0-2.9 times baseline
Urine output criteria: <0.5 mL/kg/hr for ≥12 hr

Stage 3
Serum creatinine criteria: ≥3.0 times baseline, or increase in serum creatinine to ≥4.0 mg/dL (≥353.6 µmol/L), or initiation of renal replacement therapy, or (in patients <18 years) decrease in estimated glomerular filtration rate (eGFR) to <35 mL/min/1.73 m²
Urine output criteria: <0.3 mL/kg/hr for ≥24 hr, or anuria for ≥12 hr
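As one way to make the staging rules concrete, the sketch below encodes the creatinine criteria from Table 30.3. The urine output criteria (which can assign a stage independently) are omitted, and treating the ≥4.0 mg/dL criterion as requiring a concurrent acute rise is one reading of the table rather than a definitive implementation.

```python
# Sketch: creatinine-based KDIGO staging from Table 30.3 (urine output criteria
# omitted). Treating the >=4.0 mg/dL criterion as requiring a concurrent acute
# rise is one reading of the table, not a definitive implementation.

def kdigo_stage_creatinine(scr_mg_dl, baseline_mg_dl, rise_48h_mg_dl=0.0,
                           on_rrt=False, age_years=None, egfr=None):
    """Return KDIGO AKI stage (0 = no AKI by creatinine criteria)."""
    ratio = scr_mg_dl / baseline_mg_dl
    if (on_rrt
            or ratio >= 3.0
            or (scr_mg_dl >= 4.0 and ratio >= 1.5)
            or (age_years is not None and age_years < 18
                and egfr is not None and egfr < 35)):
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or rise_48h_mg_dl >= 0.3:
        return 1
    return 0

print(kdigo_stage_creatinine(1.8, 1.0))                      # 1.8x baseline -> 1
print(kdigo_stage_creatinine(1.3, 1.0, rise_48h_mg_dl=0.3))  # absolute rise -> 1
print(kdigo_stage_creatinine(2.2, 1.0))                      # 2.2x baseline -> 2
print(kdigo_stage_creatinine(4.2, 2.0))                      # >=4.0 with rise -> 3
```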


Table 30.4

Potential Utilization for Biomarkers of AKI and CKD

AKI


  • Early detection of AKI



  • Differential diagnosis of AKI (e.g., distinguishing between volume-mediated AKI [prerenal] and intrinsic tubular injury [ATN])



  • Predicting outcomes of AKI at the time of clinical diagnosis (need for RRT, development of post-AKI CKD, short- and long-term mortality)



  • Predicting recovery from AKI



  • Ascertaining the nephron-specific location and etiology of renal injury



  • Monitoring the effects of an intervention

CKD


  • Early detection and diagnosis of CKD



  • Predicting the progression of CKD (rapid vs. slow progression)



  • Predicting outcomes of CKD at the time of clinical diagnosis (development of ESKD, short- and long-term mortality)



  • Predicting cardiovascular disease/outcomes among patients with CKD



  • Monitoring the effects of an intervention


AKI, Acute kidney injury; ATN, acute tubular necrosis; CKD, chronic kidney disease; ESKD, end-stage kidney disease; RRT, renal replacement therapy.



Figure 30.2


Biomarkers in relation to their site of injury in the nephron. GST, glutathione S-transferase; IGFBP7, insulin-like growth factor–binding protein-7; IL-18, interleukin-18; KIM-1, kidney injury molecule-1; L-FABP, liver-type fatty acid–binding protein; NAG, N-acetyl-β-D-glucosaminidase; NGAL, neutrophil gelatinase–associated lipocalin; TGF-β1, transforming growth factor-β1; TIMP-2, tissue inhibitor of metalloproteinases-2.

(Adapted from Koyner JL, Parikh CR: Clinical utility of biomarkers of AKI in cardiac surgery and critical illness. Clin J Am Soc Nephrol 8:1034-1042, 2013.)


Urine and serum biomarkers each have advantages and disadvantages. Serum biomarkers are often unstable and difficult to measure because of interference from serum proteins. By contrast, urinary biomarkers are relatively stable and easy to assay; however, their concentrations are greatly influenced by the patient's hydration/volume status and other conditions that affect urine volume. To overcome this challenge, urinary biomarker concentrations have often been normalized to the urinary creatinine concentration, on the assumptions that the urinary creatinine excretion rate is constant over time and that biomarker production or excretion has a linear relationship with it. Waikar and colleagues have challenged these assumptions, especially in AKI, in which the urine creatinine excretion rate is not constant and its changes over time greatly influence the normalized value of a putative urinary biomarker. They suggested that the most accurate method to quantify biomarkers is timed collection of urine samples to estimate the renal excretion rate; however, this approach is not practical for routine clinical care. Ralib and colleagues delved further into this issue by demonstrating that the ideal method for quantifying urinary AKI biomarkers depends on the outcome of interest: absolute biomarker concentrations best diagnosed AKI at the time of intensive care unit (ICU) admission, whereas normalization to urinary creatinine improved the prediction of incipient AKI. A potential explanation for the failings of normalization is that it often amplifies the signal. For example, when the glomerular filtration rate (GFR) falls in immediate response to a tubular injury, the amount of biomarker produced increases while the urinary creatinine concentration decreases; the normalized value therefore increases by more in the short term than can be explained by the increase in absolute biomarker production.
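The arithmetic behind these competing approaches is simple; the sketch below contrasts creatinine normalization of a spot sample with an excretion rate from a timed collection, using hypothetical concentrations and volumes.

```python
# Sketch: two ways to express a spot-urine biomarker measurement.
# All concentrations and volumes are hypothetical.

def normalized_to_creatinine(biomarker_ng_ml, urine_creatinine_mg_dl):
    """Spot-urine biomarker expressed per mg of urinary creatinine (ng/mg)."""
    creatinine_mg_ml = urine_creatinine_mg_dl / 100.0  # mg/dL -> mg/mL
    return biomarker_ng_ml / creatinine_mg_ml

def timed_excretion_rate(biomarker_ng_ml, urine_volume_ml, hours):
    """Excretion rate (ng/hr) from a timed collection -- the approach described
    as most accurate above, but impractical for routine care."""
    return biomarker_ng_ml * urine_volume_ml / hours

# The same measured biomarker concentration yields very different normalized
# values when urinary creatinine concentration changes with urine flow:
print(normalized_to_creatinine(50.0, urine_creatinine_mg_dl=100.0))  # 50 ng/mg
print(normalized_to_creatinine(50.0, urine_creatinine_mg_dl=25.0))   # 200 ng/mg
print(timed_excretion_rate(50.0, urine_volume_ml=120.0, hours=4.0))  # 1500 ng/hr
```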


Because AKI and CKD share functional and structural features, there are overlapping as well as distinct classes of functional and structural biomarkers. Among the functional markers, the GFR is often used as the gold standard. Although the true GFR, as determined with agents that are freely filtered and undergo minimal tubular handling (iothalamate, iohexol, inulin), is a sensitive measure of changes in kidney function, tests using these agents are invasive and laborious. Moreover, because of renal reserve, changes in the GFR may not indicate structural injury until significant injury has occurred. Structural markers of tubular injury, by contrast, are expressed by tubular cells, and subtle changes in epithelial cells lead to release of these markers into the urine. It is becoming increasingly clear that many of these biomarkers signal both acute and chronic kidney disease and may also be used to monitor progression from AKI to CKD. A challenge is to define the level of marker release at which injury is clinically significant in either the acute or the chronic setting. Failure to identify the separate contributions of CKD and AKI to these biomarker values will lead to inappropriate clinical decisions and/or poor results in clinical studies.




Glomerular Injury Markers


Serum Glomerular Filtration Markers


During the course of injury, kidney function may be impaired with reduction in the GFR and accumulation of several nitrogenous compounds in the blood. Serum creatinine and BUN concentrations are routinely used as markers of kidney injury, but it is important to recognize these parameters as markers of kidney dysfunction rather than as direct markers of injury.


As discussed elsewhere in this text, the estimated GFR (eGFR), using creatinine as a biomarker, is most reliable in CKD under steady-state conditions; in the acute setting its use is more problematic, for reasons already discussed. In healthy persons, the GFR is in the range of 90 to 130 mL/min/1.73 m². By definition, patients with stage 4 or 5 CKD have a GFR below 30 mL/min/1.73 m². Complications of CKD are more pronounced at lower GFRs, and mild to moderate CKD may progress to ESKD.


In AKI, the GFR is only indirectly linked to kidney injury, and changes in the GFR reflect a late consequence in a sequence of events associated with a primary insult to the kidney. Furthermore, because of renal reserve, a large amount of functioning renal tissue can be lost without significant changes in the GFR. The functional effects of renal reserve on the GFR can be demonstrated in kidney donors, who often have only modest changes in serum creatinine levels and the GFR after donating one kidney, even though half of the renal mass is lost.


Ideally, a serum GFR marker should be freely filtered, with no tubular reabsorption or secretion, and should maintain a constant plasma level when kidney function is stable. The GFR can be determined using exogenous or endogenous filtration markers. Measurement of the GFR with an exogenous marker (inulin, iothalamate, or iohexol) provides reliable results and represents the gold standard; however, the process is time consuming and expensive and can be performed only in specialized settings. Once the GFR falls below 60 mL/min/1.73 m², renal functional impairment can be estimated adequately from the serum creatinine level using various equations that calculate the eGFR. Although these equations have traditionally been less accurate for patients with higher GFRs, newer formulas have been constructed using more patients with normal and near-normal GFRs.


Creatinine (see also Chapter 26)


Determination of the eGFR using endogenous creatinine is cost effective but can be problematic. Creatinine is a breakdown product of creatine and phosphocreatine, which are involved in the energy metabolism of skeletal muscle. Creatinine is freely filtered by the glomerulus but is also to a lesser extent (10% to 30%) secreted by the proximal tubule. Under normal conditions, the daily synthesis of creatinine of approximately 20 mg per kg of body weight reflects muscle mass and varies little.


Accumulated data from various studies indicate that the creatinine concentration is not an ideal marker for diagnosing AKI for a variety of reasons, including the following:



  • 1.

    Creatinine production and its release into the circulation vary greatly with age, gender, muscle mass, certain disease states, and, to a lesser extent, diet. For example, in rhabdomyolysis, serum creatinine concentrations may rise more rapidly, owing to the release of preformed creatinine from the damaged muscle. Also, body creatinine production, as measured by 24-hour urinary excretion, decreases with older age, falling from a mean of 23.8 mg per kg of body weight in men aged 20 to 29 years to 9.8 mg per kg of body weight in men aged 90 to 99 years, largely because of the reduction in muscle mass.


  • 2.

    Serum creatinine concentrations are not specific for renal tubular injury. For example, intravascular volume depletion/“prerenal” factors (severe dehydration, blood volume loss, altered vasomotor tone, or age-related decrease in renal blood flow) and postrenal factors (obstruction or extravasation of urine into the peritoneal cavity) may falsely elevate serum concentrations in the absence of parenchymal damage. Thus, a decrease in the eGFR inferred from an increase in serum creatinine level may not distinguish among prerenal, intrinsic renal, and postrenal causes of impaired kidney function; this may not be the case for some biomarkers of renal tubular injury. Even in cases in which serum creatinine is elevated as a consequence of direct renal injury, it cannot be used to determine the location of the injury (glomerular vs. tubular, or proximal tubular vs. distal tubular).


  • 3.

    Static measurement of serum creatinine level does not reflect the real-time changes in the GFR resulting from acute changes in kidney function because creatinine accumulates over time. Given the large amounts of functional kidney reserve in healthy persons and the variable amounts of kidney reserve in patients with mild to moderate disease, creatinine is not a sensitive marker.


  • 4.

    Drug-induced reduction in tubular secretion of creatinine might result in underestimation of kidney function. Medications such as cimetidine and trimethoprim inhibit creatinine secretion and increase the serum creatinine concentration without affecting the true GFR.


  • 5.

    The creatinine assay is subject to interference by intake of certain drugs or by certain pathophysiologic states, including hyperbilirubinemia and diabetic ketoacidosis.



Similarly, the use of serum creatinine levels in CKD is also limited by several patient-dependent and patient-independent variables (including age, race, sex, and comorbid conditions). Serum creatinine concentration can significantly decrease in advanced kidney disease without relation to its renal clearance. The sensitivity of serum creatinine levels in determining kidney function can be improved by serial measurements of timed creatinine clearance (usually, but not always, 24-hour collections). However, many individuals find this collection cumbersome, and errors (e.g., skipped voids) typically lead to underestimation of function.


Serum creatinine is stable during long-term storage, after repeated thawing and refreezing, and for up to 24 hours in clotted whole blood at room temperature. The Jaffé reaction–based (alkaline picrate) assay is routinely used in clinical laboratories to measure creatinine. However, Jaffé methods overestimate the serum creatinine concentration by approximately 25% because of interference from noncreatinine chromogens, particularly proteins. Interference from glucose and acetoacetate is especially important because diabetic patients are particularly prone to development of CKD. As a result, eGFRs calculated from Jaffé creatinine values are lower than those calculated from values obtained with other approaches. Expert professional bodies have recommended that all methods of creatinine measurement become traceable to a reference method based on isotope dilution mass spectrometry. Several modifications of the Jaffé method have been made to improve specificity by decreasing the influence of interfering substances. Enzymatic methods of measuring creatinine have been widely adopted by clinical laboratories as an alternative to alkaline picrate assays; although various substances do interfere with enzymatic assays, they are reported to be subject to less interference than Jaffé methods. The high-performance liquid chromatography (HPLC)–based assay has evolved as a potential alternative for measuring serum creatinine, and several studies have demonstrated that HPLC methods have greater analytical specificity than conventional methods. This approach has severe limitations with respect to throughput, however.


Finally, over the last decade there has been a dedicated national effort within the United States to standardize serum creatinine assays by establishing calibration traceability to an isotope dilution mass spectrometry (IDMS) reference standard. Before standardization, there was large variability in serum creatinine results among clinical laboratories, with roughly 10% to 20% bias reported in the literature. This process, begun in 2005, has standardized assays, reducing variation in estimates of renal function and yielding more accurate eGFR values when used in conjunction with the IDMS-traceable MDRD (Modification of Diet in Renal Disease) study equation.
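For reference, a minimal sketch of the 4-variable IDMS-traceable MDRD study equation follows; it is intended only to show the form of the calculation, and clinical laboratories should rely on their validated implementations.

```python
# Sketch of the 4-variable IDMS-traceable MDRD study equation (serum creatinine
# in mg/dL; result in mL/min/1.73 m²). Shown only to illustrate the form of the
# calculation; clinical laboratories use validated implementations.

def egfr_mdrd_idms(scr_mg_dl, age_years, female, black):
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(f"{egfr_mdrd_idms(1.2, 60, female=True, black=False):.0f} mL/min/1.73 m²")  # ~46
```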


Blood Urea Nitrogen


Blood urea is a low-molecular-weight waste product derived from dietary protein catabolism and tissue protein turnover, and its level varies inversely with the GFR. Urea is freely filtered, and a variable amount (approximately 30% to 70%) is reabsorbed, predominantly in the proximal tubule, with recycling between tubule and interstitium in the kidney medulla. The normal range of urea nitrogen in blood or serum is 5 to 20 mg/dL (1.8 to 7.2 mmol urea per liter). This wide reference range reflects the influence of nonrenal factors on BUN, including dietary protein intake, endogenous protein catabolism, fluid intake, and hepatic urea synthesis. BUN concentrations also increase with excessive tissue catabolism, especially with fever, severe burns, trauma, high corticosteroid dosage, tetracycline use, chronic liver disease, and sepsis. In addition, any factor that increases the tubular reabsorption of urea, including decreased effective arterial volume (i.e., impaired renal perfusion) and/or obstruction of urinary drainage, increases the BUN concentration. Because of these limitations, BUN is not a sensitive or specific marker for acute or chronic kidney disease. However, for patients with advanced CKD (e.g., CKD stages 4-5), some authorities have suggested averaging the urea and creatinine clearances to obtain a more accurate estimate of the true GFR, because at these lower levels of renal function creatinine clearance overestimates the GFR (owing to tubular secretion of creatinine) and urea clearance underestimates it. BUN is measured by spectrophotometry. These limitations of creatinine and BUN as markers have driven a great deal of interest in identifying improved biomarkers of kidney injury.
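A brief sketch of the averaging approach mentioned above, using hypothetical 24-hour collection values; clearance is computed as (urine concentration × urine flow) / plasma concentration.

```python
# Sketch: averaging creatinine and urea clearances in advanced CKD, where
# clearance = (urine concentration x urine flow) / plasma concentration.
# All collection values are hypothetical.

def clearance_ml_min(urine_mg_dl, plasma_mg_dl, urine_volume_ml, minutes):
    urine_flow_ml_min = urine_volume_ml / minutes
    return urine_mg_dl * urine_flow_ml_min / plasma_mg_dl

# Hypothetical 24-hour (1440-minute) collection:
crcl = clearance_ml_min(urine_mg_dl=60.0, plasma_mg_dl=4.0,
                        urine_volume_ml=1800, minutes=1440)
urcl = clearance_ml_min(urine_mg_dl=300.0, plasma_mg_dl=80.0,
                        urine_volume_ml=1800, minutes=1440)
print(f"CrCl {crcl:.1f} mL/min, urea clearance {urcl:.1f} mL/min, "
      f"averaged estimate {(crcl + urcl) / 2:.1f} mL/min")
```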


Cystatin C


For the last 10 to 15 years, there has been a tremendous amount of research investigating serum cystatin C as a marker of GFR, and urinary cystatin C excretion has been proposed as a tubular injury marker. In 1961, Butler and Flynn studied the urine proteins of 223 individuals by starch gel electrophoresis and found a new protein fraction migrating behind the gamma globulins; this protein was subsequently named cystatin C. Cystatin C is a low-molecular-weight (13-kDa) protein that is produced at a constant rate by all nucleated cells, carries a positive charge at physiologic pH, and is eliminated exclusively by glomerular filtration. It is neither secreted nor reabsorbed by the renal tubules but undergoes almost complete catabolism by proximal tubular cells, so little, if any, appears in the urine under normal circumstances. Any impairment of proximal tubular reabsorption can therefore lead to marked increases in urinary cystatin C levels in humans and animals. Numerous studies have examined the diagnostic potential of both serum and urinary cystatin C in acute and chronic kidney disease in humans.


Chronic Kidney Disease


Because of its short half-life (approximately 2 hours) and the other properties described earlier, some researchers believe that serum cystatin C levels reflect the GFR better than the creatinine concentration. Initially, it was thought that serum cystatin C levels would be unaffected by gender, age, population ancestry, and muscle mass, but over the last several years multiple studies have demonstrated that these factors are in fact associated with altered levels of the biomarker. Notably, cystatin C levels are associated with factors similar to those associated with creatinine; levels may be elevated in males, in taller and heavier patients, and in those with higher lean body mass. However, unlike serum creatinine, which is usually lower in older adults because of decreased muscle mass, cystatin C values were elevated in more than 50% of subjects older than 80 years in a subset of more than 7500 participants in the third National Health and Nutrition Examination Survey (NHANES III).


Despite these minor limitations, cystatin C remains an excellent biomarker of CKD and performs on par with, if not better than, serum creatinine in some instances. Equations for estimating GFR and CKD classification are discussed elsewhere in this text (see Chapter 26 ). In a prospective cohort study of 26,643 Americans enrolled in the Reasons for Geographic and Racial Differences in Stroke (REGARDS) study, Peralta and colleagues demonstrated that cystatin C–based eGFR improves CKD classification/definition as well as risk stratification (for development of ESKD and death) relative to creatinine-based eGFR. This correlation with mortality was not novel because cystatin C demonstrated a stronger risk relationship with mortality than creatinine concentration or eGFR in older adults with cardiovascular disease. In the Cardiovascular Health Study cohort of 4637 community-dwelling elderly, higher serum cystatin C concentrations were associated with a significantly elevated risk of death from cardiovascular causes (hazard ratio [HR], 2.27 [95% CI, 1.73 to 2.97]), myocardial infarction (HR, 1.48 [95% CI, 1.08 to 2.02]), and stroke (HR, 1.47 [95% CI, 1.09 to 1.96]) after multivariate adjustment. In the same study, higher serum creatinine values were not independently associated with any of these three outcomes. Furthermore, a study in the general population suggests that cystatin C level has a stronger association with cardiovascular disease outcomes than creatinine concentration or estimated GFR, especially among the elderly. Thus, serum cystatin C levels may be a better marker of kidney function than serum creatinine concentration, especially in older adults.


In addition to older adults, cystatin C has proved superior to serum creatinine in patients infected with human immunodeficiency virus (HIV). Choi and colleagues demonstrated that cystatin C–based eGFRs outperformed serum creatinine levels in the ability to predict 5-year all-cause mortality in a cohort of 922 HIV-infected individuals. These findings are mirrored by those from a later study of 908 HIV-infected women, which demonstrated that CKD risk factors are associated with an overestimate of GFR by serum creatinine relative to cystatin C and that cystatin C significantly improves mortality risk prediction when added to a clinical model that already includes serum creatinine.


This concept of using cystatin C in concert with, rather than in place of, serum creatinine has been gaining momentum. Using cross-sectional analyses of data from 5352 participants in 13 previously published studies, Inker and colleagues developed estimating equations based on cystatin C alone and on cystatin C and creatinine combined, and then validated these equations in a cohort of 1119 participants from five other studies. The combined equations outperformed the creatinine-alone equations and in some instances yielded an NRI of 0.194 (P < 0.001). Although this work was performed predominantly in Caucasians, limiting its broad applicability, it demonstrates the potential of cystatin C (and other biomarkers) to augment the diagnostic scope of serum creatinine rather than to replace it. Given the mounting clinical evidence and the emergence of automated, relatively inexpensive assays, it is increasingly apparent that cystatin C should become a routine part of the nephrologist's laboratory assessment of CKD, and its role in the management of patients with CKD is likely to grow.


Acute Kidney Injury


Given its success as a marker of glomerular filtration, several groups have investigated serum cystatin C as a potential biomarker of AKI. In a single-center mixed ICU population of 85 subjects (44 of whom had RIFLE-classified AKI), Herget-Rosenthal and colleagues demonstrated that serum cystatin C had excellent diagnostic value, predicting AKI 24 and 48 hours prior to serum creatinine (AUCs of 0.97 and 0.82, respectively). These data were followed up by a study of 442 patients from two separate ICUs demonstrating that plasma cystatin C increased earlier than serum creatinine and was able to significantly predict several adverse patient outcomes, including sustained AKI, death, and dialysis. Similarly, in a study of 202 diverse ICU patients, in 49 of whom development of AKI was based on urine output and/or serum creatinine RIFLE Failure criteria, serum cystatin C levels showed excellent predictive value for AKI. However, the serum cystatin C concentration did not rise earlier than the serum creatinine concentration.


Outside the ICU, cystatin C levels were shown to be capable of detecting a decrease in the GFR after contrast agent administration earlier than the serum creatinine value in adult patients who underwent coronary angiography. In a prospective study of 87 patients who underwent elective catheterization, contrast medium–induced nephropathy occurred in 18 patients, and ROC analysis showed a higher AUC for cystatin C level than for serum creatinine concentration (0.933 vs. 0.832; P = 0.012). When a cutoff value of more than 1.2 mg/L was used, cystatin C level before catheterization exhibited 94.7% sensitivity and 84.8% specificity for predicting contrast medium–induced nephropathy.


Serum cystatin C has been studied as a biomarker of both early AKI (rising earlier than serum creatinine) and AKI severity, with several smaller studies providing mixed results. The larger multicenter Translational Research Investigating Biomarker Endpoints in AKI (TRIBE-AKI) study investigated several aspects of serum cystatin C after both adult and pediatric cardiac surgery. In 1147 adults, Shlipak and colleagues demonstrated that preoperative serum cystatin C values outperformed serum creatinine and creatinine-based eGFRs in forecasting postoperative AKI. After adjustment for clinical variables known to contribute to AKI, serum cystatin C had a C statistic (a measure akin to the AUC of an ROC curve) of 0.70 and an NRI of 0.21 in comparison with serum creatinine (P < 0.001). However, when the same group investigated the sensitivity and rapidity of AKI detection (defined as 25%, 50%, and 100% increases from preoperative values) by postoperative changes in serum cystatin C, they did not find a clear advantage over changes in serum creatinine; in fact, they concluded that serum cystatin C was less sensitive for AKI detection, although it did appear to identify a subset of patients with adverse outcomes. This failure of postoperative serum cystatin C in adults contrasts starkly with results in 288 children undergoing cardiac surgery, in whom Zappitelli and colleagues demonstrated that serum cystatin C measured within the first 6 postoperative hours was associated with both stage 1 and stage 2 pediatric AKI. Additionally, postoperative serum cystatin C values were associated with adverse patient outcomes, including duration of mechanical ventilation and length of ICU stay. Unlike in the adult population, however, preoperative values were not associated with postoperative AKI.


β-Trace Protein


β-Trace protein (BTP), also referred to as prostaglandin D synthase, has emerged as another promising biomarker of GFR. BTP is a small protein with a molecular weight of 23 to 29 kDa, depending on the size of its glycosyl moiety, and belongs to the lipocalin protein family, whose members are primarily involved in the binding and transport of small hydrophobic ligands. It is produced primarily in the central nervous system, and its concentration in cerebrospinal fluid is more than 40-fold higher than in serum. BTP is eliminated primarily by glomerular filtration, and its concentrations in urine range from 600 to 1200 µg/L.


The first observation of elevated BTP values in association with impaired kidney function was reported by Hoffman and associates in 1997. Since then, several research studies have been conducted to evaluate the sensitivity and specificity of BTP as a marker of GFR and to compare it with serum creatinine in patients with CKD and in kidney transplant recipients. In two separate cohort studies, one adult and one pediatric, serum cystatin C was shown to outperform BTP for the detection of decreased renal function (as measured by inulin clearance), and both markers were shown to outperform serum creatinine alone.


Another study, by Donadio and colleagues, evaluated the relationship between serum concentration of BTP and GFR in comparison with cystatin C levels. Serum concentrations of BTP progressively increased with reduced GFR, and strong direct correlations were found between GFR and serum concentrations of BTP ( r = 0.918) and cystatin C ( r = 0.937). Importantly, no statistically significant difference was found between BTP and cystatin C as indicators of moderately impaired kidney function.


In a later study, Foster and colleagues investigated the association of BTP, serum cystatin C, and creatinine-based eGFR with all-cause mortality in a subset of patients from the NHANES cohort, analyzing data from 6445 adults (enrolled from 1988 to 1994) with follow-up through December 2006. All three markers were associated with increased mortality after adjustment for demographics. When the mortality risk of the fifth (highest) quintile was compared with that of the third (middle) quintile, however, only the associations with BTP (HR, 2.14; 95% CI, 1.56 to 2.94) and serum cystatin C (HR, 1.94; 95% CI, 1.43 to 2.62) remained statistically significant (creatinine-based eGFR: HR, 1.31; 95% CI, 0.84 to 2.04). These effects remained significant for both cardiovascular disease– and coronary heart disease–associated mortality. Similarly, in data from the Atherosclerosis Risk in Communities (ARIC) study, BTP outperformed creatinine-based eGFR (using the CKD-EPI [Chronic Kidney Disease Epidemiology Collaboration] equation) in predicting mortality and the development of kidney failure.


Concentrations of BTP are not affected by commonly used immunosuppressive medications such as prednisone, mycophenolate mofetil, and cyclosporine. This feature is especially useful for evaluating kidney function in kidney transplant recipients, in whom cystatin C concentrations may be falsely elevated by steroid treatment. Unlike serum creatinine values, BTP concentrations are not associated with age or race. Several new GFR estimation equations based on BTP have been developed for use in kidney transplant recipients, but they require external validation in larger and more diverse patient groups. In contrast to creatinine assays, however, BTP assays are not yet widely available or standardized.


Urinary Glomerular Cell Injury Markers


Defects in podocyte structure have been reported in many glomerular diseases, which have been classified as “podocytopathies.” Injured podocytes have been reported in immunologic and nonimmunologic forms of human glomerular disease, including hemodynamic injury, protein overload states, injury from environmental toxins, minimal change disease, focal segmental glomerulosclerosis, membranous glomerulopathy, diabetic nephropathy, and lupus nephritis. Podocytes may be injured in many forms of human and experimental primary glomerular disease and in secondary forms of focal segmental glomerulosclerosis, including that caused by hypertension, diabetes, and tubulointerstitial disease. Before detachment from the glomerular basement membrane, podocytes undergo structural changes, including effacement of foot processes and microvillous transformation.


Podocyte Count


After undergoing the aforementioned structural changes, podocytes detach from the glomerular basement membrane and are excreted into the urine. Urinary excretion of viable podocytes has been studied extensively in several renal diseases. Numerous studies have reported that the number of podocytes shed is significantly higher in patients with active glomerular disease than in healthy controls and in patients with inactive disease. Importantly, the urinary podocyte number correlates with disease activity (assessed by renal biopsy) and has been shown to decline with treatment. For example, Nakamura and colleagues found podocytes in the urine of patients with type 2 diabetes with microalbuminuria and macroalbuminuria, but not in the urine of patients with diabetes without albuminuria, suggesting that urinary podocytes may reflect the active phase of diabetic nephropathy. Numerous studies have linked podocytopenia and disease severity in immunoglobulin A (IgA) nephropathy and diabetic nephropathy. Additionally, in a study of 42 preterm neonates receiving indomethacin, the number of podocytes excreted in the urine was higher than in controls not receiving a known nephrotoxin, potentially linking urinary podocytes with nephrotoxin-induced kidney injury. Thus, urinary podocyte levels may reflect real-time changes in disease activity.


The methods used to count urinary podocytes, however, are limited by several factors: (1) cytologists are needed to perform the counting, (2) the process is very time consuming, and (3) urine sediments contain whole viable podocytes as well as cell debris, and the latter may not necessarily reflect disease status. An improved and standardized laboratory method is urgently needed to facilitate measurement of urinary podocyte number. Alternative methods that indirectly assess the number of podocytes in urine include detection of messenger RNA (mRNA) and protein levels of podocyte-specific proteins by polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA), respectively.


Podocalyxin


Podocalyxin is the most commonly used marker protein for detecting podocytes in urine. A highly O-glycosylated and sialylated type I transmembrane protein of approximately 140 kDa, podocalyxin is expressed in podocytes, hematopoietic progenitor cells, vascular endothelial cells, and a subset of neurons. Podocalyxin participates in a number of cellular functions through its association with the actin cytoskeleton, ezrin, and the Na+-H+ exchanger regulatory factor 1 and 2 (NHERF-1 and NHERF-2) proteins. Urinary podocalyxin has been reported as a marker of activity in a number of diseases, including IgA nephropathy, Henoch-Schönlein purpura, diabetic nephropathy, lupus nephritis, poststreptococcal glomerulonephritis, focal segmental glomerulosclerosis, and preeclampsia, and has been reported to be the most reproducible urinary marker of podocyte injury. Measurements of podocalyxin protein in the urine by ELISA also correlated with histologic changes and disease activity in children with IgA nephropathy, Henoch-Schönlein purpura, lupus nephritis, membranoproliferative glomerulonephritis, and poststreptococcal glomerulonephritis. Several studies have also shown that the number of podocalyxin-positive cells in the urine falls after various therapeutic interventions in patients with focal segmental glomerulosclerosis, lupus nephritis, Henoch-Schönlein purpura, IgA nephropathy, poststreptococcal glomerulonephritis, and diabetic nephropathy. Unfortunately, because podocalyxin is expressed on a number of cell types, the presence of podocalyxin in the urine is not always reflective of urinary podocytes.


Nephrin


Nephrin, a transmembrane protein of the immunoglobulin superfamily, is a component of the filtration slit diaphragm between neighboring podocytes. Immunohistochemical analysis and in situ hybridization have shown that nephrin is primarily expressed in glomerular podocytes. On the basis of these observations, it has been proposed that nephrin is a key component of the glomerular filtration barrier and plays a pivotal role in preventing protein leakage. Various experimental models of diabetes and hypertension show alterations in nephrin mRNA or protein levels in glomeruli. In experimental models of diabetes, glomerular nephrin mRNA expression was reduced, but treatment with an angiotensin-converting enzyme (ACE) inhibitor or angiotensin II antagonist abrogated this reduction. Langham and associates examined renal biopsy specimens from 14 patients with type 2 diabetes and nephropathy who had been randomly assigned to receive treatment with either the ACE inhibitor perindopril (4 mg/day) or placebo for the preceding 2 years. They reported that glomeruli from placebo-treated patients with diabetic nephropathy showed a significant reduction in nephrin expression compared with those from control subjects. This finding is in line with experimental models demonstrating that urinary nephrin excretion is increased in the setting of active podocyte/glomerular injury and that excretion is attenuated in the presence of renin-angiotensin-aldosterone system (RAAS) blockers. In both placebo- and perindopril-treated patients, a close inverse correlation was observed between the magnitude of nephrin gene expression and the degree of proteinuria. In accordance with these observations, nephrin has been detected in urine (nephrinuria) in several experimental and human proteinuric diseases, including hypertension, diabetes, and preeclampsia. Because nephrin is known to be expressed in pancreatic β-cells, there was speculation that β-cells may release nephrin into the serum, which is ultimately excreted in the urine. However, Patari and colleagues demonstrated that nephrin was absent in the sera of nephrinuric patients. Thus, urinary nephrin is most likely produced by the kidneys.




Urinary Tubular Injury Markers


Microscopic examination of the urine has been used for many years to gain insight into the severity of glomerular and tubular injury. Other components of the urine have been used to quantitate tubular cell injury in a more specific and sensitive fashion. These markers have been demonstrated to be extremely valuable in detecting kidney injury in the setting of AKI. Moreover, some of these biomarkers, such as interleukin-18 (IL-18), kidney injury molecule-1 (KIM-1), neutrophil gelatinase–associated lipocalin (NGAL), and liver-type fatty acid–binding protein (L-FABP), have been shown to be potentially useful in a variety of contexts in both acute and chronic kidney injury. Here, the utility of urine microscopy is described briefly and some of the emerging biomarkers of tubular injury are discussed.


Urine Microscopy


Urine microscopy with sediment examination is a time-honored test that is routinely used to assist in the diagnosis of kidney injury. The urine of patients with tubular injury typically contains proximal tubular epithelial cells, proximal tubular epithelial cell casts, granular casts, and mixed cellular casts. Patients with predominantly prerenal azotemia occasionally have hyaline or fine granular casts in their urine. Several studies have shown that increased urinary cast excretion correlates well with AKI. Marcussen and associates demonstrated that patients with tubular injury had a higher number of granular casts than those with prerenal azotemia.


There has now been a resurgence of interest in urinalysis sediment scoring systems for the diagnosis of AKI. Several of these systems have shown excellent specificity for AKI and correlate well with AKI severity. However, their widespread acceptance has been hampered by the relatively modest sensitivity of urine microscopy for detecting AKI. Urine microscopy remains a user-dependent tool with substantial interphysician variability, a feature that likely contributes to its suboptimal sensitivity for AKI. Three of the most widely reported urine microscopy scoring systems are reviewed in Table 30.5, and a worked example of one system follows the table.



Table 30.5

Review of Urine Microscopy Scoring Systems

Chawla et al., 2008

  • Grade 1: No casts or RTEs
  • Grade 2: At least 1 cast or RTE but <10% of LPF
  • Grade 3: Many casts or RTEs (between 10% and 90% of LPF)
  • Grade 4: Sheet of muddy brown casts and RTEs in >90% of LPF

Perazella et al., 2010

  • 0 points: No casts or RTEs seen
  • 1 point each: 1-5 casts per LPF or 1-5 RTEs per HPF
  • 2 points each: ≥6 casts per LPF or ≥6 RTEs per HPF

Bagshaw et al., 2011

  • 0 points: No casts or RTEs seen
  • 1 point each: 1 cast or 1 RTE per HPF
  • 2 points each: 2-4 casts or RTEs per HPF
  • 3 points each: ≥5 casts or ≥5 RTEs per HPF

LPF, Low-power field; HPF, high-power field; RTE, renal tubule epithelial (cell).
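

To make the scoring arithmetic in Table 30.5 concrete, the following minimal Python sketch implements the Perazella et al. 2010 system; the function names and example counts are illustrative only, and in practice the counts per field come from a trained observer examining the sediment.

```python
# Minimal sketch of the Perazella et al. 2010 urine sediment score
# (Table 30.5). Cutoffs are taken directly from the table; everything
# else (names, example values) is illustrative.

def points_per_field(count: int) -> int:
    """0 points for none, 1 point for 1-5, 2 points for >=6 per field."""
    if count == 0:
        return 0
    return 1 if count <= 5 else 2

def perazella_score(casts_per_lpf: int, rtes_per_hpf: int) -> int:
    """Total score (0-4): casts counted per low-power field (LPF) plus
    renal tubule epithelial cells counted per high-power field (HPF)."""
    return points_per_field(casts_per_lpf) + points_per_field(rtes_per_hpf)

# Example: 3 casts per LPF (1 point) and 7 RTEs per HPF (2 points) -> 3
print(perazella_score(3, 7))
```

Higher scores were associated with more severe AKI in the studies cited above; the sketch is meant only to show how the tabulated cutoffs translate into a composite score.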


Several later studies have examined, with varying degrees of success, the potential of combining urine microscopy with other biomarkers of tubular injury. In the near future, urine microscopy, a current mainstay in the clinical diagnosis of AKI, could be used in concert with markers of glomerular function and validated biomarkers of tubular injury to diagnose AKI.


α1-Microglobulin


α1-Microglobulin is a low-molecular-weight glycoprotein of approximately 27 to 30 kDa and a member of the lipocalin superfamily. It is synthesized primarily by the liver and circulates both in free form and complexed with IgA. α1-Microglobulin has been detected in human serum, urine, and cerebrospinal fluid. Urine and serum values are elevated in patients with renal tubular diseases. α1-Microglobulin is freely filtered at the glomerulus and completely reabsorbed and catabolized by the normal proximal tubule; megalin mediates its uptake in the proximal tubule. Therefore, an increase in the urinary concentration of α1-microglobulin indicates proximal tubular injury or dysfunction. Urinary levels of α1-microglobulin are influenced by age: the normal range is less than 13 mg per g of creatinine in individuals younger than 50 years and less than 20 mg per g of creatinine in those 50 years or older. In comparison with β2-microglobulin, α1-microglobulin is more stable over a range of urine pH levels, making it a more practical urinary biomarker.
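

Because these reference limits are indexed to urine creatinine, interpreting a measured α1-microglobulin concentration involves a simple normalization step. The short Python sketch below illustrates the arithmetic, using the age-specific limits quoted above; the function names and example values are hypothetical.

```python
# Minimal sketch: index urinary alpha-1-microglobulin to urine creatinine
# and compare against the age-dependent upper limits given in the text
# (<13 mg/g creatinine below age 50; <20 mg/g at age 50 or older).
# Function names and example values are illustrative only.

def a1m_mg_per_g_creatinine(a1m_mg_per_l: float, creatinine_g_per_l: float) -> float:
    """Creatinine-indexed alpha-1-microglobulin (mg per g of creatinine)."""
    return a1m_mg_per_l / creatinine_g_per_l

def is_elevated(a1m_mg_per_g: float, age_years: int) -> bool:
    """Apply the age-specific upper reference limit."""
    upper_limit = 13.0 if age_years < 50 else 20.0
    return a1m_mg_per_g > upper_limit

value = a1m_mg_per_g_creatinine(18.0, 1.0)  # 18 mg/g creatinine
print(is_elevated(value, 45))  # True: exceeds the 13 mg/g limit
print(is_elevated(value, 62))  # False: below the 20 mg/g limit
```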


Acute Kidney Injury


α1-Microglobulin quantitation in the urine has been reported as a sensitive biomarker of proximal tubule dysfunction in both adults and children. In a small cohort of 73 patients with AKI, 26 of whom required renal replacement therapy (RRT), Herget-Rosenthal and colleagues compared levels of α1-microglobulin, β2-microglobulin, cystatin C, retinol-binding protein, α-glutathione S-transferase (α-GST), lactate dehydrogenase, and N-acetyl-β-D-glucosaminidase (NAG) early in the course of AKI. They found that urinary cystatin C and α1-microglobulin had the highest ability to predict the need for RRT; urinary α1-microglobulin had an AUC of 0.86 for this outcome. These findings are similar to the results reported by Zheng and associates, who measured α1-microglobulin levels in 58 children undergoing cardiac surgery and found that levels were higher in those in whom AKI (AKIN criteria) developed. Four hours after cardiopulmonary bypass, α1-microglobulin provided an AUC of 0.84 (95% CI, 0.72 to 0.95); a cutoff value of 290 mg/g provided a sensitivity of 90% and a specificity of 79%. However, follow-up studies have reported mixed results, with Martensson and colleagues finding no difference in α1-microglobulin levels between those with and those without AKI in the setting of sepsis and septic shock in a small, prospective, single-center study of 45 subjects.
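

Studies such as these summarize biomarker performance as an AUC together with the sensitivity and specificity at a chosen cutoff. The sketch below shows, on invented data, how those three quantities are computed; it is illustrative only and does not reproduce any study's dataset.

```python
# Minimal sketch of the ROC-style analysis used in these studies:
# compute the AUC and the sensitivity/specificity at a fixed cutoff
# (here 290, echoing the mg/g cutoff cited above). Data are invented.
import numpy as np
from sklearn.metrics import roc_auc_score

biomarker = np.array([120.0, 310.0, 95.0, 450.0, 300.0, 510.0, 60.0, 250.0])
aki = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = AKI developed

auc = roc_auc_score(aki, biomarker)  # rank-based area under the ROC curve

cutoff = 290.0
positive = biomarker >= cutoff
sensitivity = positive[aki == 1].mean()     # true-positive rate at the cutoff
specificity = (~positive)[aki == 0].mean()  # true-negative rate at the cutoff

print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}")
```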


α1-Microglobulin levels measured on arrival at the emergency department (ED) have also been shown to correlate with the development of AKI, with an AUC of 0.88; a cutoff value of 35 mg/g provided reasonable sensitivity (80%) and specificity (81%). However, α1-microglobulin did not remain an independent predictor of AKI in the multivariate model (odds ratio [OR], 1.85; 95% CI, 0.80 to 4.31). In addition, α1-microglobulin has been reported as a useful marker of proximal tubular damage and recovery in early infancy and has been shown to correlate with tubular atrophy and interstitial fibrosis on renal transplant biopsy 1 year after transplantation.


Chronic Kidney Disease


Fewer studies have investigated α1-microglobulin in the setting of CKD, although limited data suggest that this marker may correlate with disease activity and proximal tubule damage in diabetic nephropathy as well as idiopathic membranous nephropathy. Limitations associated with the use of α1-microglobulin include the variation in serum levels with age, gender, and clinical conditions, including liver diseases, ulcerative colitis, HIV infection, and mood disorders, as well as the lack of international standardization. Urinary α1-microglobulin is measured by an immunonephelometric assay.


β2-Microglobulin


β2-Microglobulin is an 11.8-kDa low-molecular-weight polypeptide. It is present on the cell surfaces of all nucleated cells and in most biologic fluids, including serum, urine, and synovial fluid. β2-Microglobulin is normally excreted by glomerular filtration and then almost completely (approximately 99%) reabsorbed and catabolized by the normal proximal tubule in humans; megalin mediates its uptake in the proximal tubule. In healthy individuals, approximately 150 to 200 mg of β2-microglobulin is synthesized daily, with a normal serum concentration of 1.5 to 3 mg/L. Pathologic states that impede the uptake of β2-microglobulin by renal tubular cells result in increased urinary β2-microglobulin levels. For spot urine collections, the concentration of β2-microglobulin in healthy individuals is typically 160 µg/L or less, or 300 µg per g of creatinine or less. Unlike serum urea levels, β2-microglobulin levels are not influenced by food intake, making this polypeptide an attractive marker for malnourished patients with low serum urea levels. In patients with CKD, increases in serum β2-microglobulin levels reflect the decrease in glomerular function. In patients with ESKD, serum levels of β2-microglobulin are usually in the range of 20 to 50 mg/L. β2-Microglobulin accumulation is linked to toxicity because the molecule precipitates and forms fibrillary structures and amyloid deposits, particularly in bone and periarticular tissue, leading to the development of carpal tunnel syndrome and erosive arthritis. Elevations of β2-microglobulin have been reported in several AKI and CKD clinical settings, including cadmium toxicity and following cardiac surgery, liver transplantation, and renal transplantation. In idiopathic membranous nephropathy, β2-microglobulin level was identified as a superior independent predictor of the development of renal insufficiency. Other studies have reported that β2-microglobulin performs as well as, if not better than, serum creatinine for the detection of AKI in critically ill children or after cardiac surgery in adults.


Serum concentrations of β2-microglobulin should be interpreted cautiously because they are altered significantly in various diseases, including rheumatoid disorders and several types of cancer. Initially, it was believed that the increase in β2-microglobulin levels in CKD was due solely to declining kidney function, but later studies have shown that other factors, including increased synthesis of β2-microglobulin, may contribute in patients with ESKD. Another significant drawback associated with the use of urinary β2-microglobulin as a marker of kidney injury is its instability in urine at room temperature, particularly when the pH is less than 5.5; for this reason, the urine should be alkalinized and frozen at −80°C immediately after collection.


Glutathione S-Transferase


Two subtypes of the enzyme glutathione S-transferase (GST) predominate in the kidney: α-GST, found mainly in proximal tubular cells, and π-GST, found predominantly in distal tubular epithelial cells. Elevation of urinary α-GST has been reported in several animal models after treatment with nephrotoxic drugs or after ischemic renal injury. However, in a prospective study of patients with sepsis admitted to the ICU, α-GST levels did not differ between patients in whom AKI developed and those without AKI. π-GST levels were higher in all patients with sepsis than in healthy volunteers but were not predictive of AKI as defined by the AKIN criteria. In one prospective study, the value of tubular enzyme levels in predicting AKI was assessed in 26 critically ill adults admitted to the ICU. AKI developed in 4 patients, and ROC analysis showed that γ-glutamyl transpeptidase, π-GST, α-GST, alkaline phosphatase, and NAG had excellent discriminating power for AKI (AUCs of 0.950, 0.929, 0.893, 0.863, and 0.845, respectively). Both α-GST and π-GST have demonstrated only limited ability to detect AKI following adult cardiac surgery. However, in a small single-center study of 123 subjects, π-GST did demonstrate the ability to identify which patients with AKIN stage 1 AKI would go on to progress to stage 3 or need RRT (AUC = 0.86; P = 0.002); to date, these data have not been validated in a larger cohort.


In kidney transplant recipients, increased levels of α-GST have been associated with cyclosporine A toxicity, whereas π-GST elevation has been associated with acute allograft rejection. In a cross-sectional study of patients with diabetes, the relationships between the urine albumin/creatinine ratio and urinary levels of collagen IV, α-GST, and π-GST were assessed. Levels of all three markers were directly (albeit weakly) correlated with the urine albumin/creatinine ratio, but a progressive increase in the proportion of patients with abnormal biomarker levels across the normoalbuminuria, microalbuminuria, and macroalbuminuria groups was observed only for collagen IV and π-GST.


Hepcidin-25


Hepcidin-25, a 2.8-kDa hormonal regulator of iron metabolism, is produced in the liver, heart, and kidney. Hepcidin binds the transmembrane iron exporter ferroportin and induces its internalization and degradation, thereby downregulating iron uptake and reducing extracellular iron availability from stores. Given this link to iron metabolism and the fact that free iron is released in the setting of the ischemia-reperfusion injury and oxidative stress that occur with cardiopulmonary bypass, urinary hepcidin-25 has been investigated as a marker of kidney injury following cardiac surgery. Ho and colleagues identified urinary hepcidin-25 in a nested case-control study of 44 adults who underwent cardiac surgery. Using surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) on urine samples from 22 individuals in whom at least RIFLE Risk category AKI developed and 22 individuals whose creatinine did not increase more than 10% from baseline during the postoperative period (no AKI), these researchers found that hepcidin-25 was dramatically upregulated in the urine of patients with no AKI. Taking this a step further, the same group quantified the concentration of hepcidin in the urine (normalized to urine creatinine) and demonstrated that concentrations were higher in those in whom postoperative AKI did not develop (P < 0.0005). In a multivariate analysis, urinary hepcidin-25 concentration on postoperative day 1 was significantly associated with the avoidance of AKI, providing an AUC of 0.80. The data from this small study have been corroborated by another modest-sized cohort of 100 adults undergoing cardiopulmonary bypass, in which Haase-Fielitz and associates demonstrated that 6 hours after bypass, urinary hepcidin-25 levels were lower in the nine subjects in whom RIFLE-based AKI developed than in those with no AKI (AUC = 0.80; P = 0.004). These results warrant preliminary investigations in other AKI settings as well as prompt validation in larger cohorts of cardiac surgery patients.


Interleukin-18


IL-18 is an 18-kDa proinflammatory cytokine that is activated by caspase-1 and is produced by renal tubular cells and macrophages. Animal studies indicate that IL-18 is a mediator of acute tubular injury, including both neutrophil and monocyte infiltration of the renal parenchyma. Studies have shown that caspase-1 knockout mice are protected from ischemic AKI, as are wild-type mice injected with an IL-18–neutralizing antiserum, demonstrating that IL-18 is an important mediator of ischemic AKI. Others have shown that IL-18 plays a major role in macrophage activation, with mice engrafted with IL-18–deficient bone marrow experiencing less AKI than those with IL-18–replete marrow. Similarly, in IL-18 knockout mice with AKI, tumor necrosis factor-α, inducible nitric oxide synthase, macrophage inflammatory protein-2, and monocyte chemoattractant protein-1 mRNA expression are all decreased, speaking to the deleterious impact of IL-18 in AKI. In the human kidney, IL-18 is induced and cleaved mainly in the proximal tubules and released into the urine. IL-18 has been shown to participate in a variety of renal disease processes, including ischemia-reperfusion injury, allograft rejection, infection, autoimmune conditions, and malignancy. IL-18 is easily and reliably measured in urine by commercially available ELISA and microbead-based assays.


Acute Kidney Injury


Several studies have demonstrated the usefulness of IL-18 as a biomarker for the detection of AKI. Originally, Parikh and associates studied a group of 72 patients and reported that urinary IL-18 levels were significantly higher in patients diagnosed with acute tubular necrosis (ATN) than in patients with prerenal azotemia or urinary tract infection and in healthy control subjects with normal renal function. Since then, several large multicenter studies have investigated the ability of IL-18 to detect AKI in a variety of clinical settings.


The TRIBE-AKI Consortium measured IL-18 in 1219 adults who underwent cardiac surgery. Identification of those at high risk for postoperative AKI required the presence of one of the following: (1) emergency surgery, (2) preoperative serum creatinine higher than 2 mg/dL, (3) left ventricular ejection fraction less than 35%, (4) New York Heart Association class III or IV heart failure, (5) age older than 70 years, (6) preexisting diabetes mellitus, (7) concomitant coronary artery bypass grafting (CABG) and valve surgery, or (8) repeat cardiac surgery. Those with preoperative AKI, kidney transplants, ESKD, or a preoperative serum creatinine higher than 4.5 mg/dL were excluded. After the cohort was divided into quintiles according to IL-18 level, the highest quintile of IL-18 was associated with a 6.8-fold higher risk of AKI, defined as a postoperative doubling of serum creatinine or requirement for acute dialysis, compared with the lowest quintile. The first postoperative concentration of IL-18 (0-6 hours) provided an AUC of 0.74, which increased to 0.76 after IL-18 values were combined with a clinical model of factors known to affect AKI risk. The TRIBE-AKI pediatric cohort (311 children) reported results in line with the adult study: the highest quintile of IL-18 values was associated with a 9.4-fold increased risk of AKI (doubling of serum creatinine or dialysis) in comparison with the lowest quintile. The effect was slightly attenuated after adjustment for clinical factors known to affect AKI risk (adjusted OR, 6.9; 95% CI, 1.7 to 28.8).
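

Analyses like the TRIBE-AKI quintile comparison rest on a simple construction: rank subjects by biomarker level, cut the distribution into fifths, and compare the odds of AKI in the top fifth against the bottom fifth. The sketch below illustrates the unadjusted version of that calculation on simulated data; the variable names and toy outcome model are invented, and real analyses additionally adjust for clinical covariates.

```python
# Minimal sketch of an unadjusted quintile-based odds ratio, as in the
# quintile analyses described above. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
il18 = rng.lognormal(mean=3.0, sigma=1.0, size=1000)  # toy urinary IL-18 values
# Toy outcome: higher AKI probability above the median biomarker level.
aki = rng.random(1000) < (0.05 + 0.25 * (il18 > np.median(il18)))

# Quintile membership: 0 = lowest fifth, 4 = highest fifth.
edges = np.quantile(il18, [0.2, 0.4, 0.6, 0.8])
quintile = np.digitize(il18, edges)

def odds(mask: np.ndarray) -> float:
    events = aki[mask].sum()
    return events / (mask.sum() - events)

or_top_vs_bottom = odds(quintile == 4) / odds(quintile == 0)
print(f"Unadjusted OR, highest vs. lowest quintile: {or_top_vs_bottom:.1f}")
```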


In a secondary analysis of the adult cohort, Koyner and associates demonstrated that the IL-18 value at the time of AKI diagnosis can identify those with early AKI (AKIN stage 1) who will progress to more severe stages (AKIN stage 2 or 3). Of the 380 adults in whom at least stage 1 AKI developed, 45 went on to have stage 2 or 3 disease. In the entire cohort, those whose IL-18 values were in the fifth quintile were at increased risk for development of progressive AKI (OR, 3.63; 95% CI, 1.64 to 8.03), and this effect was only slightly attenuated after adjustment for the clinical model (OR, 3.00; 95% CI, 1.25 to 7.25).


Finally, in a separate secondary analysis of this cohort, IL-18 concentrations collected in the immediate postoperative period were associated with long-term mortality following cardiac surgery. This later investigation provided a median follow-up of 3.0 years (interquartile range [IQR], 2.2 to 3.6), during which 139 of the 1199 subjects died (50 deaths per 1000 person-years). After adjustment for clinical factors known to affect mortality, patients without AKI (n = 792) whose IL-18 values were in the third tertile were at increased risk of long-term mortality in comparison with those whose values were in the first tertile (adjusted HR, 1.23; 95% CI, 1.02 to 1.48). This effect was magnified in the 407 subjects with perioperative AKI, in whom third-tertile values carried an adjusted HR of 3.16 (95% CI, 1.53 to 6.53) relative to the reference group. Thus, IL-18 provides additional prognostic information about long-term postoperative mortality in patients with and without AKI.


When investigated in the setting of critical illness and ICU admission, IL-18 has not demonstrated the same robust results. In a study of 451 critically ill subjects, 86 of whom developed AKI within the first 48 hours, Siew and colleagues demonstrated that urine IL-18 did not reliably predict AKI. Although IL-18 levels at the time of ICU admission were higher in those who went on to have AKI, the AUC was 0.62 (95% CI, 0.54 to 0.69) and improved only marginally, to 0.67, after exclusion of those with known preexisting CKD (restricting the analysis to those with an eGFR > 75 mL/min). Despite this inability to reliably detect AKI, urine IL-18 levels did correlate with other adverse patient outcomes, including the need for RRT and 28-day mortality. This modest performance in the setting of critical illness has been corroborated by other studies, including a post hoc analysis of data from the EARLYARF trial. In this prospective observational study in two large general ICUs (n = 529), IL-18 provided an AUC of only 0.62 for the diagnosis of AKIN stage 1 AKI but once again performed much better at forecasting the need for RRT or death (within 7 days). Unlike other biomarker studies, this IL-18 study did not demonstrate improved predictive power when the cohort was stratified according to preadmission CKD stage. In a separate post hoc analysis of the same EARLYARF cohort, urinary IL-18 concentrations were shown to be significantly higher in patients with prerenal azotemia (defined as AKI that recovered within 48 hours of ICU admission and was associated with a fractional excretion of sodium < 1%; n = 61) than in those with no AKI (n = 285). There was a trend toward higher values in those with intrinsic (non-prerenal) AKI (n = 114) than in those with prerenal AKI (P = 0.053).


In a single-center study of 339 mixed surgical and medical ICU patients, Doi and colleagues demonstrated that IL-18 values were significantly elevated in those with both established and newly diagnosed AKI at the time of ICU arrival. Although biomarker concentrations and AUCs were higher in those with established AKI (AUC = 0.78) than in those with newly diagnosed AKI (AUC = 0.59), concentrations and AUCs in both of these subgroups differed significantly from those in the no-AKI cohort. In this same study, IL-18 levels were significantly higher in nonsurvivors. These results were fairly similar to those reported by Nickolas and colleagues, who measured urinary biomarkers in 1635 ED patients at the time of admission and compared the values with adjudicated AKI outcomes; prerenal AKI was defined as RIFLE Risk category AKI that returned to baseline within 72 hours in a clinical setting suggesting transiently decreased effective circulating volume. These researchers demonstrated that IL-18 values for patients with more severe intrinsic AKI were significantly higher than those for patients with prerenal AKI. However, there was no difference in values between those with prerenal AKI and those with no AKI.


When measured at the time of kidney transplantation, IL-18 level accurately predicted delayed graft function (AUC = 0.90) and the rate of decline in serum creatinine concentration. In patients with diabetic kidney disease and proteinuria, IL-18 levels in renal tubular cells are higher than in patients with nondiabetic proteinuric disease. To understand the utility of IL-18 and urinary NGAL in predicting graft recovery after kidney transplantation, Hall and colleagues conducted a prospective, multicenter, observational cohort study of recipients of deceased-donor kidney transplants. They collected serial urine samples from 91 patients for 3 days after transplantation. After adjustment for recipient and donor age, cold ischemia time, urine output, and serum creatinine concentration, NGAL and IL-18 concentrations accurately predicted the need for dialysis in transplant recipients. Furthermore, NGAL and IL-18 concentrations predicted graft recovery up to 3 months later. In further follow-up of this cohort, urine IL-18 concentrations collected at the time of surgery were correlated with graft outcomes 1 year after transplantation: IL-18 values above the median on the first postoperative day were associated with poor graft function, defined as a GFR < 30 mL/min or return to RRT (adjusted OR, 5.5; 95% CI, 1.4 to 21.5).


In a study by Ling and associates involving patients who underwent coronary angiography, urinary IL-18 and NGAL concentrations were significantly increased 24 hours after the procedure in those in whom contrast medium–induced nephropathy developed but not in the control group. ROC curve analysis demonstrated that both IL-18 and NGAL performed better than serum creatinine in the early diagnosis of contrast nephropathy (P < 0.05). Importantly, elevated urinary IL-18 concentrations 24 hours after contrast administration were also found to be an independent predictive marker for later major cardiac events (relative risk [RR] = 2.1).


Chronic Kidney Disease


There are promising data on IL-18 in the setting of CKD. In the Women's Interagency HIV Study, urine IL-18 levels were independently associated with more rapid loss of renal function after multivariate adjustment. In this cohort study of 908 HIV-infected women, urine IL-18 was the only biomarker measured (KIM-1 and the albumin to creatinine ratio [ACR] were also assessed) that was associated with worsening renal function over time, as measured by cystatin C–based eGFR. Urine IL-18 predicted an increased relative risk of renal function decline of between 1.4 and 2.16, depending on the model used. In a follow-up study, the same group measured urine IL-18 in 908 HIV-infected and 289 uninfected women in the Women's Interagency HIV Study. This cross-sectional cohort study demonstrated that, in multivariate-adjusted linear regression analysis, IL-18 concentrations were significantly higher (by 38%; P < 0.0001) in subjects with HIV. Additionally, these researchers found that urine IL-18 concentrations were significantly associated with higher HIV RNA levels, lower CD4 cell counts, hepatitis C infection, and high-density lipoprotein (HDL) cholesterol levels, pointing to a more extensive role for IL-18 in the setting of HIV-related kidney care. These promising HIV results contrast with those of the Consortium for Radiologic Imaging for the Study of Polycystic Kidney Disease (CRISP), which measured IL-18 in 107 patients with autosomal dominant polycystic kidney disease and found that, although mean IL-18 increased over the 3-year follow-up period, there was no association between tertiles of IL-18 values and change in total kidney volume or eGFR.


Kidney Injury Molecule-1


Kidney injury molecule-1 (KIM-1 in humans, Kim-1 in rodents), also referred to as T cell immunoglobulin and mucin domains–containing protein-1 (TIM-1) and hepatitis A virus cellular receptor-1 (HAVCR-1), is a type I transmembrane glycoprotein with an ectodomain containing a six-cysteine immunoglobulin-like domain, two N-glycosylation sites, and a mucin domain. In an effort to identify molecules involved in kidney injury, Bonventre's group originally discovered Kim-1 using representational difference analysis (a PCR-based technique) in rat models of acute ischemic kidney injury. Importantly, KIM-1 was shown to be significantly expressed in human kidneys after ischemic injury, specifically in proximal tubular cells, whereas it was virtually absent or present at low levels in healthy kidneys. KIM-1 has evolved as a marker of proximal tubular injury, the hallmark of virtually all proteinuric, toxic, and ischemic renal diseases. KIM-1 has been shown to be a highly sensitive and specific marker of kidney injury in several rodent models, including models of injury due to ischemia, cisplatin, folic acid, gentamicin, mercury, chromium, cadmium, contrast agents, cyclosporine, ochratoxin A, aristolochic acid, D-serine, and protein overload.


In 2002, the Bonventre group published the first clinical study linking urinary levels of KIM-1 with AKI, demonstrating that tissue expression of KIM-1 correlates with the severity of acute tubular necrosis and with corresponding levels of KIM-1 ectodomain in the urine of patients with clinically significant AKI. Since then, numerous other studies have examined the ability of KIM-1 to detect AKI in a variety of settings, including cardiac surgery, critical illness, and general hospitalized patients, with mixed results.


Later, KIM-1 was investigated in several larger multicenter trials. In the TRIBE-AKI adult cardiac surgery cohort, the fifth quintile of urinary KIM-1 values was associated with a risk of AKI (defined as postoperative doubling of serum creatinine concentration or need for acute dialysis) 6.2-fold greater than that for the lowest quintile. This risk remained significant (4.8-fold) after adjustment for the clinical model used (age, race, sex, cardiopulmonary bypass time, nonelective surgery, preoperative GFR, diabetes, hypertension, and study center), but the effect was completely attenuated after urinary IL-18 and plasma and urine NGAL were included in the model. As for long-term mortality in the TRIBE cohort, the third tertile of perioperative KIM-1 concentrations was associated with increased mortality. In patients without AKI (n = 792), the third tertile had an adjusted HR of 1.83 (95% CI, 1.44 to 2.33), whereas in the 407 subjects with AKI, the adjusted HR was slightly higher, at 2.01 (95% CI, 1.31 to 3.1). In the pediatric cohort, those with KIM-1 values in the fifth quintile were at increased risk of AKI in the unadjusted analysis; however, this effect failed to remain significant after the pediatric adjustment model was applied.


The data on KIM-1 in the setting of critical illness–related AKI have been just as mixed as the perioperative results, with KIM-1 providing an AUC of 0.66 (95% CI, 0.61 to 0.72) for the diagnosis of AKI in samples from the EARLYARF study. The results were less impressive with regard to the prediction of dialysis (AUC = 0.62) or death (AUC = 0.56) within the first week following ICU admission. In the cohort of 529 mixed ICU patients, KIM-1 outperformed other biomarkers in its ability to detect AKI at the time of ICU admission in those with a preadmission GFR < 60 mL/min (AUC, 0.70; 95% CI, 0.58 to 0.82). In a separate post hoc analysis of this cohort, creatinine-normalized KIM-1 levels demonstrated a significant stepwise increase across those with no AKI (median, 170 µg/mmol creatinine [IQR, 69 to 445]), those with prerenal AKI (median, 291 µg/mmol [IQR, 121 to 549]), and those with intrinsic AKI lasting more than 48 hours (median, 376 µg/mmol [IQR, 169 to 943]). In addition, KIM-1 demonstrated the ability to forecast the development of intrinsic AKI at the time of ED arrival, with an AUC of 0.71 (95% CI, 0.65 to 0.76; P < 0.001). KIM-1 values again increased in a stepwise fashion, being lowest in those with no AKI or CKD and rising in the following order: stable CKD, prerenal AKI, and intrinsic AKI. Additionally, KIM-1 values were able to forecast inpatient mortality and the need for RRT.


The usefulness of KIM-1 has been demonstrated not only as a urinary marker but also as a tool for evaluating kidney injury in kidney biopsy specimens by immunohistochemical methods. For example, Van Timmeren and associates found that the level of KIM-1 protein expression in proximal tubular cells correlated with tubulointerstitial fibrosis and inflammation in kidney tissue specimens from 102 patients who underwent kidney biopsy for a variety of kidney diseases. In a subset of patients whose urine was collected near the time of biopsy, urinary KIM-1 levels correlated with tissue KIM-1 expression, which was present in 100% of biopsy samples from patients with deterioration in kidney function and histologic changes indicative of tubular damage. In biopsy specimens from transplanted kidneys, KIM-1 staining was detected in 100% of patients with deterioration of kidney function and pathologic changes indicating tubular injury, in 92% of patients with acute cellular rejection, and in 28% of patients with normal biopsy findings. In contrast, Hall and associates demonstrated that urinary KIM-1 levels did not correlate with peritransplantation or 1-year graft function. Similarly, Schroppel and colleagues investigated KIM-1 expression in perioperative samples collected from both living- and deceased-donor kidneys and found no significant correlation between KIM-1 expression and the occurrence of delayed graft function.


Chronic Kidney Disease


KIM-1 also shows promise as a useful biomarker in CKD. In addition to serving as a marker of proximal tubule dysfunction, KIM-1 is upregulated in the later phases of AKI in animal models and plays an important role in renal repair; thus, it may be a major player in the pathophysiology of the transition from AKI to CKD and repair. This potential as a marker of CKD was evident in a nested case-control study involving 686 participants from the Multi-Ethnic Study of Atherosclerosis (MESA). Cases were defined as patients with a baseline eGFR higher than 60 mL/min in whom CKD stage 3 subsequently developed and/or who had a rapid decline in kidney function over the 5-year study period. Each doubling of the KIM-1 level (pg/mL) was associated with 1.15-fold increased odds (95% CI, 1.02 to 1.29) of development of CKD stage 3 or rapid GFR decline. Similarly, patients in the highest decile of KIM-1 values at study entry had a twofold higher risk of this same end point than those in the remaining 90%. This ability of KIM-1 to predict the development and progression of CKD was independent of the presence of albuminuria. These results contrast with those of Bhavsar and associates, who measured KIM-1 in a similar case-control substudy of the ARIC study. New-onset CKD stage 3 developed in 143 of the 286 subjects, but KIM-1 did not display the ability to forecast or identify those at risk for CKD development or progression.
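

The "per doubling" framing used in the MESA analysis means the model is linear in log2(KIM-1), so the odds multiply by 1.15 for each successive doubling. The following lines make that arithmetic explicit; the fold change chosen for the example is arbitrary.

```python
# Per-doubling odds ratio arithmetic: with OR = 1.15 per doubling of
# KIM-1 (value from the MESA analysis above), a k-fold higher level
# multiplies the odds by 1.15 ** log2(k). The 8-fold example is arbitrary.
import math

or_per_doubling = 1.15
fold_change = 8  # 2**3, i.e., three doublings
odds_multiplier = or_per_doubling ** math.log2(fold_change)
print(f"{odds_multiplier:.2f}")  # ~1.52
```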


Despite these mixed results in community-based studies of CKD, KIM-1 has shown promise in a variety of other clinical settings, including children with chronic renal tubular damage from vesicoureteral reflux, patients with HIV infection, and adults with IgA nephropathy or diabetic nephropathy. In patients with IgA nephropathy, urinary KIM-1 levels were significantly higher than in healthy controls. Furthermore, urinary KIM-1 levels correlated positively with serum creatinine concentration and proteinuria and inversely with creatinine clearance. Similarly, tubular KIM-1 expression as determined by immunohistochemical analysis correlated closely with urinary levels (r = 0.553; P = 0.032). Sundaram and associates evaluated the potential of KIM-1, L-FABP, NAG, NGAL, and transforming growth factor-β1 (TGF-β1), together with conventional renal biomarkers (urine albumin level, serum creatinine concentration, and serum cystatin C–estimated GFR), to detect nephropathy early in patients with sickle cell anemia. Only KIM-1 and NAG showed strong correlations with albuminuria; the other markers showed no association.


Liver-Type Fatty Acid–Binding Protein


Urinary fatty acid–binding protein 1 (FABP1) has been proposed as a useful biomarker for the early detection of AKI and the monitoring of CKD. Also known as L-type or liver-type fatty acid–binding protein (L-FABP), the term used in this book, FABP1 was first isolated in the liver as a binding protein for oleic acid and bilirubin. L-FABP binds selectively to free fatty acids and transports them to mitochondria or peroxisomes, where the fatty acids are β-oxidized; it thereby participates in intracellular fatty acid homeostasis. The FABPs are expressed in a variety of tissues; nine different types have been reported to date: liver (L), intestinal (I), muscle and heart (H), epidermal (E), ileal (Il), myelin (M), adipocyte (A), brain (B), and testis (T). L-FABP is expressed in the proximal tubules of the human kidney, where it is localized in the cytoplasm. Increased cytosolic L-FABP in proximal tubular epithelial cells may derive not only from endogenous expression but also from circulating L-FABP that is filtered at the glomeruli and reabsorbed by tubular cells.


Susantitaphong and associates published a meta-analysis of the performance of L-FABP in 15 prospective cohort studies and two case-control studies. Although they were able to meta-analyze only 7 of the cohort studies, they demonstrated that L-FABP was 74.5% sensitive (95% CI, 60.4% to 84.8%) and 77.6% specific (95% CI, 61.5% to 88.2%) for the diagnosis of AKI. The results were more promising for the prediction of in-hospital mortality. They concluded that, although many of the studies were of low quality and were performed in varied clinical settings, L-FABP may be a promising biomarker for the early detection of AKI. In this discussion we highlight some of the larger and more recent clinical investigations of L-FABP.


Portilla and colleagues demonstrated that L-FABP predicts the development of AKI within 4 hours of surgery in children undergoing cardiac surgery. Others have attempted to validate this finding in the setting of cardiac surgery, with mixed success. The TRIBE-AKI Consortium published the results of the largest study investigating L-FABP in the setting of adult cardiac surgery, demonstrating that after adjustment for a clinical model of factors known to affect the development of AKI, L-FABP did not correlate with the development of AKI in either the pediatric (n = 311) or the adult (n = 1219) cohort. Although L-FABP levels were statistically higher in adults with AKI than in those without, the L-FABP concentration (ng/mL) measured up to 6 hours postoperatively provided an AUC of only 0.61, and performance was only marginally better with the 6- to 12-hour measurement. Similarly, in the pediatric cohort, although the fifth quintile of L-FABP concentrations at the earliest postoperative time point (0-6 hours) was significantly associated with the development of AKI (OR, 2.9; 95% CI, 1.2 to 7.1), this effect disappeared after adjustment for the clinical model (OR, 1.8; 95% CI, 0.7 to 4.6).


Siew and colleagues reported the performance of L-FABP in 380 critically ill subjects from medical, surgical, trauma, and cardiac ICUs, 130 of whom developed AKI (AKIN stage 1). L-FABP levels were higher in those with AKI (P = 0.003) but discriminated incident AKI with an AUC of only 0.59 (95% CI, 0.52 to 0.65). L-FABP did, however, predict the composite endpoint of death or RRT, and in multivariate regression it significantly predicted the need for acute RRT (HR, 2.36; 95% CI, 1.30 to 4.25). These findings mirror those of Doi and associates, who published a prospective single-center observational cohort study examining the performance of L-FABP in 339 mixed ICU patients. In their study, L-FABP outperformed NGAL, IL-18, NAG, and other biomarkers in the detection of AKI, defined by RIFLE Risk category. Furthermore, L-FABP predicted 14-day mortality with an AUC of 0.90. This study, which followed a smaller study (n = 145) by the same group of investigators, has paved the way for L-FABP to be validated for clinical use in Japan.


In a cross-sectional study of general hospitalized patients that included 92 participants with AKI and 68 control subjects (26 healthy volunteers and 42 hospitalized patients: 29 about to undergo coronary catheterization and 13 in the ICU with no AKI), Ferguson and colleagues demonstrated that urinary levels of L-FABP were significantly higher in subjects with AKI than in hospitalized control patients without AKI, with an AUC of 0.93 (95% CI, 0.88 to 0.97); sensitivity was 83% and specificity was 90% at a cutoff value of 47.1 ng per mg of creatinine. Nickolas and associates examined L-FABP at the time of ED arrival and found that it had only fair discriminatory power with regard to AKI. In their cohort of 1635 subjects, L-FABP provided an AUC of 0.70 (95% CI, 0.65 to 0.76); however, there was a clear and significant stepwise increase in L-FABP concentrations across the spectrum of AKI (normal < CKD < prerenal AKI < intrinsic AKI).


Because L-FABP is also expressed by the liver, liver injury is a potential contributor to increased urinary levels of L-FABP during AKI. However, studies in patients with CKD, AKI, and sepsis have shown that serum L-FABP levels do not influence urinary levels and that urinary L-FABP levels are not significantly higher in patients with liver disease than in healthy subjects.


Urinary L-FABP has been investigated as an early diagnostic and predictive marker for contrast medium–induced nephropathy. In a study of adult patients with normal serum creatinine concentrations who underwent percutaneous coronary intervention, serum NGAL rose at 2 and 4 hours after cardiac catheterization, whereas urinary NGAL and urinary L-FABP increased significantly after 4 hours and remained elevated up to 48 hours. Nakamura and associates demonstrated that baseline urinary L-FABP levels were significantly higher in patients in whom contrast medium–induced nephropathy developed after coronary angiography; however, the investigators did not evaluate the diagnostic performance of urinary L-FABP in predicting AKI.


Chronic Kidney Disease


To date, investigations of L-FABP in the setting of CKD have been limited. Small studies of urinary L-FABP excretion in diabetic nephropathy have yielded mixed results, with some reporting that renin-angiotensin-aldosterone system blockade is associated with decreased urinary L-FABP concentrations and preserved GFR, and others finding no correlation. Further investigation is needed to elucidate the role of L-FABP in the setting of CKD.


Netrin-1


Netrin-1 is a 50- to 75-kDa, laminin-like protein, initially recognized as a chemotropic factor, that plays an essential role in guiding neurons and axons to their targets. Studies have now revealed diverse roles of netrin-1 beyond axonal guidance, including development of various organs, angiogenesis, adhesion, tissue morphogenesis, inflammation, and tumorigenic processes. Netrin-1 is expressed in several tissue types, including brain, lung, heart, liver, intestine, and kidney.


A study by Wang and colleagues showed rapid induction of netrin-1 in tubular epithelial cells in response to ischemia-reperfusion injury of the kidney in animal models. In this study, netrin-1 was excreted in the urine as early as 1 hour after the kidney insult, increased more than fortyfold by 3 hours, and reached peak levels (approximately fiftyfold) before the elevation of blood creatinine and BUN concentrations. Importantly, this rapid increase in netrin-1 expression appeared to be regulated at the translational level, because netrin-1 gene transcription actually decreased after ischemia-reperfusion injury. The researchers also tested the sensitivity and specificity of netrin-1 in animal models of toxin-induced kidney injury, using cisplatin, folic acid, and endotoxin (lipopolysaccharide). These kidney insults resulted in increased urinary excretion of netrin-1, supporting a potential role as an early biomarker of hypoxic and toxic renal injuries. In a later study, using exogenous administration of netrin-1 in a murine model of ischemia-reperfusion AKI, the same group demonstrated that netrin-1 regulates the inflammatory response in AKI via inhibition of cyclooxygenase-2 (COX-2)–mediated prostaglandin E2 production, and that it regulates COX-2 expression through modulation of nuclear factor-κB (NF-κB) activation.


Although most investigations of netrin-1 have focused on cellular and animal models, it has been increasingly investigated in humans. Ramesh and colleagues demonstrated significantly higher urine levels of netrin-1 in patients with established AKI due to various causes (n = 16) than in healthy volunteers. In a later study, the same group evaluated the potential of netrin-1 to predict AKI in patients undergoing cardiopulmonary bypass, analyzing serial urine samples collected from 26 patients in whom AKI developed after cardiopulmonary bypass and 36 patients in whom it did not. By ROC analysis, the investigators demonstrated that netrin-1 could predict AKI at 2 hours, 6 hours, and 12 hours, with AUCs of 0.74, 0.86, and 0.89, respectively. The levels of urinary netrin-1 6 hours after cardiopulmonary bypass correlated with the severity of AKI as well as the length of hospital stay, and netrin-1 remained a powerful independent predictor of AKI.


Netrin-1 seems to be a promising early biomarker for AKI, but additional studies need to be conducted in larger cohorts with AKI due to various causes to further evaluate its potential.


Neutrophil Gelatinase–Associated Lipocalin


Neutrophil gelatinase–associated lipocalin (also known as lipocalin 2 or lcn2) is one of the most extensively studied biomarkers of AKI. Compared with serum creatinine measurement or urine output, NGAL has many of the characteristics required of a good biomarker for AKI. It is a 25-kDa protein of 178 amino acids belonging to the lipocalin superfamily. Lipocalins are extracellular proteins with diverse functions, including the transport of small lipophilic substances through membranes, thereby maintaining cell homeostasis. NGAL is a glycoprotein bound to matrix metalloproteinase-9 in human neutrophils. It is expressed in various tissues in the body, such as the salivary glands, prostate, uterus, trachea, lung, stomach, and kidney, and its expression is markedly induced in injured epithelial cells, including those of the kidney, colon, liver, and lung.


Transcriptome profiling studies in rodent models identified NGAL as one of the most upregulated genes in the kidney very early after tubular injury. Mishra and associates demonstrated that NGAL values were significantly elevated within 2 hours after injury in mouse models of renal ischemia-reperfusion. In addition, urinary NGAL was detectable within 1 day after cisplatin administration, suggesting its sensitivity in other models of tubular injury.


Acute Kidney Injury


Many clinical studies followed these important observations in animals. Mishra and associates first demonstrated the value of NGAL as a clinical marker in a prospective study of 71 children undergoing cardiopulmonary bypass. In this study, both serum and urinary NGAL levels were elevated within 2 hours in patients in whom AKI developed, and a cutoff NGAL value of 50 µg/L was 100% sensitive and 98% specific in predicting AKI. Following this seminal study, several other groups investigated NGAL in the setting of cardiac surgery, with several demonstrating that both urine and serum NGAL values predicted AKI earlier than serum creatinine and correlated with AKI severity. However, no study has replicated the near-perfect performance reported by Mishra and associates. Given the wealth of studies reporting on NGAL over the last decade, we highlight here the larger, multicenter trials.


Urine NGAL


In the setting of the ED, urine NGAL originally appeared to perform quite well: in the first study, by Nickolas and associates, of 635 patients, a cutoff of 130 µg per g of creatinine carried a sensitivity of 90% and a specificity of 99.5% for AKI, defined as RIFLE Risk category. In this single-center prospective study, urine NGAL also predicted the future need for nephrology consultation, admission to the ICU, and need for RRT. In a follow-up multicenter study of 1635 subjects, urine NGAL provided an AUC of 0.81 (95% CI, 0.76 to 0.86) for the prediction of AKI (RIFLE Risk category) and a net reclassification improvement (NRI) of 26.1%, demonstrating the ability to improve the classification of both AKI events and nonevents. Additionally, urine NGAL values increased significantly in a stepwise fashion across the following groups: patients with no AKI, patients with CKD, patients with prerenal AKI, and patients with intrinsic AKI.
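

The NRI quoted here summarizes how often adding the biomarker moves subjects into more appropriate risk categories: events reclassified upward and nonevents reclassified downward count in favor, and movements in the opposite directions count against. A minimal sketch of the categorical calculation follows, using invented category assignments.

```python
# Minimal sketch of a categorical net reclassification improvement (NRI):
# NRI = [P(up | event) - P(down | event)] + [P(down | nonevent) - P(up | nonevent)]
# Categories: 0 = low, 1 = intermediate, 2 = high risk. Data are invented.
import numpy as np

old_cat = np.array([0, 1, 1, 2, 0, 1, 2, 0, 1, 2])  # model without the biomarker
new_cat = np.array([1, 2, 0, 1, 0, 2, 1, 0, 2, 2])  # model with the biomarker
event   = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])  # observed outcome

up = new_cat > old_cat
down = new_cat < old_cat

nri_events = up[event == 1].mean() - down[event == 1].mean()
nri_nonevents = down[event == 0].mean() - up[event == 0].mean()
print(f"NRI = {nri_events + nri_nonevents:.2f}")
```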


Lieske and colleagues measured urine NGAL in 363 ED patients and determined that NGAL provided an AUC of 0.70 for the detection of AKIN-defined AKI, with only modest sensitivity (65%) and specificity (65%). In addition to demonstrating that NGAL levels increased with the severity of AKI, these researchers showed that pyuria (urinary white blood cells) was associated with increased urinary NGAL levels. Urine NGAL has been studied less extensively in the pediatric ED but demonstrated similar potential in a smaller study (n = 252) with a lower AKI incidence (n = 18, 7.1%).


Urine NGAL has also been studied in the setting of critical illness by Siew and colleagues, who showed that urine NGAL was able to predict AKI within the first 24 hours (AUC = 0.71) and 48 hours (AUC = 0.64) of ICU admission. In this single-center prospective study of 451 critically ill adults, urine NGAL was independently associated with the development of AKI even after adjustment for factors known to be correlated with AKI (including severity of illness and sepsis). NGAL performed similarly when measured in a post hoc analysis of the EARLYARF trial. Data from this prospective observational study performed in two general (mixed) ICUs in New Zealand demonstrated that urine NGAL could modestly predict the development of AKI (AKIN stage 1) (AUC = 0.66) but was also able to forecast the need for RRT (AUC = 0.79) and death (AUC = 0.66) within the first 7 ICU days (P < 0.001 for all three). Additionally, urine NGAL performed better in predicting AKI on ICU arrival in those with higher baseline eGFRs (AUC = 0.70 for those with eGFRs of 90-120 mL/min vs. AUC = 0.64 for those with eGFRs < 60 mL/min); this improved ability to detect the future development of AKI in patients with higher eGFRs has also been demonstrated in the setting of cardiac surgery. In a separate post hoc analysis of the EARLYARF study, urine NGAL values again increased significantly across the spectrum of AKI: values were lowest in subjects without AKI, intermediate in subjects with transient AKI lasting less than 48 hours, and highest in subjects with AKI lasting longer than 48 hours. In the discovery phase of the multicenter prospective observational Sapphire trial, with 522 participants, urine NGAL provided an AUC of 0.66 (95% CI, 0.60 to 0.71) for the development of RIFLE Injury or Failure within the first 36 hours of study enrollment; the AUC increased to 0.71 (95% CI, 0.66 to 0.76) for the development of RIFLE Injury or Failure within the first 12 hours.


In the setting of cardiac surgery, urine NGAL has provided similarly mixed results. In the TRIBE-AKI adult cohort, the highest quintile of urine NGAL values obtained 0 to 6 hours after surgery was associated with an increased risk of AKI (defined as doubling of serum creatinine or need for RRT); however, this effect was no longer significant after adjustment for factors known to contribute to AKI risk. In addition to providing an AUC of 0.67 for the detection of AKI in this cohort of 1219 adults, urine NGAL levels were significantly associated with the composite endpoint of inpatient mortality or receipt of RRT, as well as with length of ICU stay and length of hospitalization. Urine NGAL did not display the ability to detect AKI progression in the 380 adults in whom at least AKIN stage 1 AKI developed: although subjects with urine NGAL values in the fifth quintile at the time of the serum creatinine increase were at increased risk for progressive AKI (e.g., going from AKIN stage 1 to AKIN stage 3), this effect was no longer significant in the adjusted analysis. These results contrast with those in the pediatric cohort (n = 311), in which patients with urine NGAL values in the fifth quintile remained at significantly increased risk for development of AKI (doubling of serum creatinine or need for RRT) even after adjustment for the clinical model (OR, 4.1; 95% CI, 1.0 to 16.3). Additionally, urine NGAL levels correlated with the length of mechanical ventilation, ICU stay, and hospitalization.


In a separate secondary analysis that examined the long-term mortality of the adult TRIBE cohort, subjects whose urine NGAL values were in the third tertile (n = 407) were at increased risk of death during the median 3.0-year follow-up. The adjusted HR for this group (compared with those in the first tertile) was 2.52 (95% CI, 1.86 to 3.42). A similar effect was not seen in patients with values in the third tertile who did not have AKI (n = 792; HR, 0.90; 95% CI, 0.50 to 1.63).


Urine NGAL has been investigated in several other smaller studies in more niche cohorts and has demonstrated some promise in detecting AKI in critically ill neonates, in predicting hepatic impairment or death in individuals with cirrhosis/hepatorenal syndrome, and in predicting both delayed graft function and 1-year graft survival in kidney transplant recipients. However, these findings require validation in larger, multicenter investigations.


Plasma NGAL


Plasma NGAL has been examined in many of the same studies as urine NGAL, including in the settings of the ED, the ICU, and cardiac surgery. Di Somma and colleagues demonstrated that plasma NGAL from a specimen drawn at the time of ED arrival provided an AUC of 0.80 for the future development of AKI. In this multicenter prospective cohort study, the AUC improved to 0.90 when ED physician clinical judgment was added to the plasma NGAL level. This combination of physician clinical judgment and NGAL outperformed both physician judgment alone and serum creatinine alone, yielding a significant NRI of 32.4%.
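
The net reclassification improvement (NRI) cited here quantifies how often adding a marker moves patients into more appropriate risk categories: events should move up, nonevents down. Below is a minimal sketch of a category-based NRI; the cutoffs and simulated risk estimates are our own illustrative assumptions, not the methods of the Di Somma study.

```python
# Minimal sketch of a category-based net reclassification improvement (NRI).
# The cutoffs and simulated risk estimates are invented for illustration.
import numpy as np

def categorical_nri(risk_old, risk_new, event, cutoffs=(0.1, 0.3)):
    """NRI = P(up|event) - P(down|event) + P(down|nonevent) - P(up|nonevent)."""
    cat_old = np.digitize(risk_old, cutoffs)  # risk category before the marker
    cat_new = np.digitize(risk_new, cutoffs)  # risk category after the marker
    up, down = cat_new > cat_old, cat_new < cat_old
    ev, ne = event == 1, event == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

# Invented example: model risks before and after adding a biomarker term.
rng = np.random.default_rng(1)
event = rng.integers(0, 2, size=200)
risk_old = np.clip(0.2 * event + rng.normal(0.2, 0.15, size=200), 0, 1)
# The new model nudges events up and nonevents down, so the NRI is positive.
risk_new = np.clip(risk_old + 0.1 * (2 * event - 1) * rng.random(200), 0, 1)

print(f"NRI = {categorical_nri(risk_old, risk_new, event):.1%}")
```

The category-free (continuous) NRI reported later in this section works the same way but counts any upward or downward movement in predicted risk rather than movement between predefined categories.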


In the setting of critical illness, plasma NGAL was measured as part of a post hoc analysis of the multicenter EARLYARF study (n = 528), in which it provided an AUC of 0.74 (95% CI, 0.69 to 0.79) for the development of AKIN stage 1 AKI (n = 147) during the subsequent ICU stay. This study defined functional AKI according to the AKIN criteria but also defined structural AKI in terms of urine NGAL concentrations. Plasma NGAL performed even better (AUC = 0.79) at predicting urine NGAL–defined structural AKI (n = 213). In addition to strong associations with creatinine- and urine NGAL–based definitions of AKI, plasma NGAL was associated with the need for RRT (n = 19) but not with inpatient mortality (n = 53). In the Sapphire trial, which also examined the performance of plasma NGAL in the setting of ICU-associated AKI, NGAL provided an AUC of 0.64 (95% CI, 0.58 to 0.70) for the detection of RIFLE Injury or Failure within the first 12 hours of study enrollment; this significant ability to forecast more severe forms of AKI did not change appreciably when detection of the same level of AKI over the first 36 hours was assessed (AUC = 0.64; 95% CI, 0.58 to 0.71).


It has been postulated that NGAL's performance in the setting of critical illness is attenuated in part by the preponderance of sepsis-related AKI in the ICU: because NGAL is derived in part from neutrophils, levels are inherently higher in those with sepsis. De Geus and colleagues published a prospective observational cohort study of 663 patients admitted to the ICU in whom plasma NGAL was measured four times during the first 24 hours. These investigators demonstrated that plasma NGAL levels were significantly higher in patients with sepsis than in those without sepsis and that, when the cohort was stratified according to the presence of sepsis (n = 80, 12% of the cohort), plasma NGAL detected AKI remarkably well in both subgroups (AUC of 0.76 in those with sepsis vs. 0.78 in those without sepsis). These data corroborate the work of Noiri's group, which demonstrated in a prospective observational study of 139 critically ill patients that plasma NGAL levels are highest in those with sepsis-associated AKI, lower in those with non-sepsis AKI, and lower still in those with no AKI. Future investigations of plasma NGAL will need to take this association with sepsis into account as normal ranges and clinically validated cutoffs are constructed for the use of NGAL in interventional trials of early AKI.


Although its ability to detect AKI in the setting of critical illness requires further investigation, plasma NGAL has been shown to predict recovery from AKI in preliminary studies. In 181 patients with community-acquired pneumonia and at least RIFLE Failure AKI, plasma NGAL measured on the first day that Failure criteria were met was able to predict failure of recovery of renal function. Individuals with high plasma NGAL levels were less likely to recover, with an AUC of 0.74. However, this performance was not significantly different from that of a clinical model consisting of age, serum creatinine, and severity-of-illness scores. This potential ability to detect nonrecovery of AKI is yet another aspect of NGAL and other biomarkers that requires further investigation.


Plasma NGAL has also been extensively studied in the setting of cardiac surgery. In the TRIBE-AKI adult cohort (n = 1219), plasma NGAL levels were significantly higher in those in whom AKI developed (defined as doubling of serum creatinine or need for RRT) in the early postoperative period. Those in the fifth quintile of plasma NGAL values up to 6 hours after surgery (>293 ng/mL) were at a 7.8-fold higher risk for development of AKI than those in the first NGAL quintile (<105 ng/mL). This effect was attenuated after adjustment for factors known to be associated with AKI but remained significant (OR, 5.0; 95% CI, 1.6 to 15.3), although it was no longer significant after adjustment for serum creatinine. Additionally, plasma NGAL was significantly associated with increased length of ICU and hospital stays as well as with a composite of in-hospital death or dialysis. In this same adult cohort, plasma NGAL measured at the time of a clinical AKI/serum creatinine increase demonstrated a remarkable ability to detect individuals with progressive AKI (e.g., going from AKIN stage 1 to stage 2 or 3) (n = 380). After adjustment for the clinical model, patients in the fifth quintile of plasma NGAL values (>322 ng/mL) were nearly eight times more likely to have progressive AKI (OR, 7.72; 95% CI, 2.65 to 22.49) than those in the first two quintiles. Plasma NGAL improved the reclassification of both those with and those without progressive AKI (events and nonevents), providing a category-free NRI of 0.69 ( P < 0.0001). The results in the TRIBE-AKI pediatric cohort were less promising: plasma NGAL did not predict severe AKI (defined as doubling of serum creatinine or need for RRT) in the early postoperative period. However, the fifth quintile of NGAL (>259 ng/mL) measured within the first 6 postoperative hours was significantly associated with the development of RIFLE Risk AKI, with an adjusted OR of 2.3 (95% CI, 1.0 to 5.5), although this falls well short of the near-perfect performance published in the original Mishra paper.


In addition to large trials, smaller trials have also investigated NGAL in a variety of AKI settings, so much so that Haase and colleagues conducted a pooled prospective analysis (n = 2322; 1452 after cardiac surgery and 870 with critical illness) that designated subjects as NGAL-positive (NGAL+) or NGAL-negative (NGAL−) and as creatinine-positive (creatinine+) or creatinine-negative (creatinine−), with creatinine+ defined as RIFLE Risk AKI. After analyzing NGAL data from ten separate prospective observational studies, the group demonstrated that individuals who were NGAL+ but creatinine− needed acute dialysis more than 16 times more often than those who were NGAL−/creatinine− (OR, 16.4; 95% CI, 3.6 to 76.9; P < 0.001). The study also demonstrated incremental increases in ICU stay, hospital stay, and mortality among the four study groups in the following order: NGAL−/creatinine−, NGAL+/creatinine−, NGAL−/creatinine+, NGAL+/creatinine+.
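
To make the reported odds ratio concrete, the sketch below computes an OR for acute dialysis from a 2 × 2 table comparing the NGAL+/creatinine− and NGAL−/creatinine− groups. The counts are invented for illustration; they are not the actual counts from the Haase pooled analysis.

```python
# Hypothetical worked example: odds ratio for acute dialysis, comparing
# NGAL+/creatinine- patients with NGAL-/creatinine- patients.
# All counts below are invented; they are not from the Haase pooled analysis.
dialysis_npos, no_dialysis_npos = 12, 288    # NGAL+/creatinine- group
dialysis_nneg, no_dialysis_nneg = 3, 1200    # NGAL-/creatinine- group

# OR = (a/b) / (c/d): the ratio of the odds of dialysis in the two groups.
odds_ratio = (dialysis_npos / no_dialysis_npos) / (dialysis_nneg / no_dialysis_nneg)
print(f"OR = {odds_ratio:.1f}")  # -> OR = 16.7 with these invented counts
```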


The function of NGAL as a diagnostic marker of contrast medium–induced nephropathy has also been evaluated. In a prospective study of 91 children undergoing coronary angiography, both urine and plasma NGAL levels were found to be significantly increased within 2 hours of contrast medium administration in the group in which contrast medium–induced nephropathy developed but not in the control group. By comparison, AKI detection using increases in serum creatinine concentration was possible only later, 6 to 24 hours after contrast agent administration. When a cutoff value of 100 ng/mL was used, both urine and serum NGAL levels at 2 hours predicted contrast medium–induced nephropathy, with AUCs of 0.91 and 0.92, respectively. In several studies of adults undergoing procedures requiring contrast agents, early rises in both urine (4-hour) and plasma (2-hour) NGAL levels were documented, compared with a much later increase in plasma cystatin C levels, providing support for the use of NGAL as an early biomarker for contrast medium–induced nephropathy. A meta-analysis found an overall AUC of 0.89 for prediction of AKI when NGAL was measured within 6 hours after contrast agent administration and AKI was defined as a 25% or greater increase in serum creatinine concentration.


The origin of the plasma and urinary NGAL rises after AKI requires further clarification. Gene expression and transgenic animal studies have demonstrated upregulation of NGAL in the distal nephron segments, specifically in the thick ascending limb of Henle and the collecting ducts; however, most of the injury in AKI occurs in the proximal tubules. The source of plasma NGAL in AKI is also not well defined. For instance, in animal studies, direct ipsilateral renal vein sampling after unilateral ischemia indicates that NGAL synthesized in the kidney does not enter the circulation. The increase in plasma NGAL observed in AKI may instead reflect the fact that NGAL is an acute-phase reactant and may be released from neutrophils, macrophages, and other immune cells. Yndestad and colleagues reported strong immunostaining for NGAL in cardiomyocytes within the failing myocardium in experimental and clinical heart failure. Furthermore, any impairment in GFR resulting from AKI would be expected to decrease renal clearance of NGAL, with subsequent accumulation in the systemic circulation. However, the relative contribution of these mechanisms to the rise in plasma NGAL concentration after AKI has yet to be investigated. NGAL levels are also influenced by various medical conditions, such as CKD, hypertension, anemia, systemic infections, hypoxia, inflammatory conditions, and cancers, making NGAL relatively less specific for kidney injury. Additionally, there is some evidence that NGAL degrades over time in stored samples, with concentrations decreasing by nearly 50% within the first 6 months of storage at −80°C. These degradation issues also affect other biomarkers (including NAG and KIM-1), and their effects on clinical results remain unclear and are an area of continued investigation. Nevertheless, NGAL remains a very promising candidate biomarker for early diagnosis of AKI and potential prediction of outcome.


Chronic Kidney Disease


In addition to extensive investigation in the setting of AKI, NGAL has been increasingly investigated in the setting of CKD. Some of this work was inspired by animal data demonstrating that NGAL, like KIM-1, was highly upregulated by the persistent inflammation and late immune response following AKI and potentially contributes to the development of post-AKI CKD. Moving this concept into humans, Nickolas and associates reported on the correlation of NGAL with histologic changes in native kidney biopsy specimens from subjects with CKD. The group demonstrated that NGAL levels were inversely correlated with eGFR while being directly correlated with both interstitial fibrosis and tubular atrophy.


In a case-control substudy of the ARIC cohort (n = 286), urine NGAL did not initially correlate with baseline eGFR. However, patients in the fourth quartile of urine NGAL were at a more than twofold higher risk for development of incident stage 3 CKD during the follow-up period. It should be noted, however, that this effect was attenuated after adjustments were made for urine creatinine and urine albumin. These adjusted-model data corroborate the findings from the MESA cohort, in which no association was found between urine NGAL levels and the development of incident CKD stage 3. In that 1:1 nested case-control study, NGAL levels were not associated with the development of CKD stage 3 or a decrease in eGFR of more than 3 mL/min per year over a 5-year follow-up period. Finally, in an analysis from the Chronic Renal Insufficiency Cohort (CRIC), Liu and colleagues demonstrated a strong association between baseline urine NGAL and the risk of CKD progression (defined as a 50% reduction in MDRD-calculated eGFR or development of ESKD) over a mean follow-up of 3.2 years. However, although this effect was significant in an unadjusted analysis, urine NGAL offered no improved prediction after adjustment for baseline age, race, eGFR, proteinuria, diabetes, and other factors known to affect CKD progression (C-statistic of 0.847 for both models). Thus, in this cohort of 3386 individuals with CKD, urine NGAL was no better at predicting CKD outcomes than more traditional markers.


These findings from ARIC, MESA, and CRIC contrast directly with those of a prospective observational cohort study of 158 white patients with baseline CKD stage 3 or 4, which demonstrated that urine NGAL (adjusted for urine creatinine) was associated with CKD progression. Forty patients reached the primary endpoint of all-cause mortality or need for RRT during the 2-year follow-up. Baseline urine NGAL was associated with this composite primary endpoint, with every increase in urine NGAL of 5 µg/mmol associated with a 27% increase in the risk of death or RRT. These findings are similar to those of Bolignano and colleagues, who performed a prospective observational study of 96 subjects with CKD with a median follow-up of 18.5 months. They demonstrated that both urine and serum NGAL values were associated with a composite endpoint of either doubling of baseline serum creatinine or development of ESKD. Conversely, in a 4-year follow-up study of 78 patients with type 1 diabetes conducted to evaluate the potential of urinary NGAL to predict progression to diabetic nephropathy, NGAL levels were not associated with decline in GFR or with development of ESKD and death after adjustment for known promoters of progression.


N-Acetyl-β-D-Glucosaminidase


NAG is a lysosomal brush border enzyme that resides in the microvilli of tubular epithelial cells. Damage to these cells results in shedding of this enzyme into the urine. NAG has a high molecular weight, 130 kDa, and hence plasma NAG is not filtered by the glomeruli. Its excretion into urine correlates with tubular lysosomal activity. Increased urinary concentrations of NAG have been found in patients with AKI, chronic glomerular disease, diabetic nephropathy, exposure to nephrotoxic drugs, delayed renal allograft function, environmental exposure, contrast medium–induced nephropathy, and sepsis, and following cardiopulmonary bypass. In a prospective study involving 201 hospitalized patients with AKI, patients with higher concentrations of urinary NAG and KIM-1 were more likely to die or require dialysis. The results of this study suggest the utility of NAG in combination with KIM-1 in predicting adverse clinical outcomes in patients with AKI. In another study, urinary NAG concentrations were significantly higher in patients with contrast medium–induced nephropathy than in patients without such nephropathy within 24 hours of the administration of a contrast agent.


Similarly, in a two-center Japanese study of 77 patients undergoing cardiac surgery, NAG values were elevated in those in whom postoperative AKI developed. In this study, biomarker performance improved significantly when NAG was combined with L-FABP (an AUC improvement from 0.75 to 0.81). The same group published a single-center study investigating the performance of NAG in predicting the development of AKI (RIFLE) in a mixed medical-surgical ICU. NAG did not perform as well in this cohort of 339 subjects, providing an AUC of 0.62 for the development of RIFLE Risk AKI. In a cohort of 635 ED patients, an NAG value over 1.0 units/g provided an AUC of 0.71 (95% CI, 0.62 to 0.81) for the development of AKI during the subsequent hospital admission. However, this effect was attenuated in a multivariate analysis that included other novel and traditional biomarkers of AKI (creatinine, BUN, NGAL, etc.).


In the setting of CKD, a study of patients with type 1 diabetes and nephropathy by Vaidya and colleagues showed that lower levels of urinary KIM-1 and NAG were associated with regression of microalbuminuria. Similarly, in a nested case-control study from the Diabetes Control and Complications Trial, baseline NAG concentrations were shown to predict microalbuminuria and macroalbuminuria. To date, there are few data on the role of NAG in CKD progression.


There are some limitations in the use of NAG as a marker of kidney injury. Inhibition of NAG enzyme activity has been reported in the presence of metal ions and at higher urea concentrations in the urine. Moreover, increased urinary levels of NAG have been reported in several nonrenal diseases, including rheumatoid arthritis and hyperthyroidism, as well as in conditions with increased lysosomal activity without cellular damage. Because of concerns about its specificity, the clinical utility of NAG as a biomarker has been limited.


Proteinuria


In a healthy person, urinary protein excretion is less than 150 mg/day and consists mainly of filtered plasma proteins (60%) and tubule-derived Tamm-Horsfall proteins (40%). Proteinuria can result from at least three different pathophysiologic mechanisms: glomerular (increased permeability of the glomerular filtration barrier to protein because of glomerulopathy, raised glomerular capillary hydrostatic pressure, or an altered glomerular filtration coefficient), overflow (increased production of low-molecular-weight plasma proteins, e.g., immunoglobulin light chains in myeloma), and tubular (decreased tubular reabsorption of filtered proteins or increased production of tubular proteins by damaged tubules). Proteinuria mechanisms and consequences are discussed in Chapter 53.


Proteinuria is diagnosed when total urinary protein is greater than 300 mg/24 hours. Methods for detecting and monitoring proteinuria are discussed in Chapter 26.


Several publications highlight the diagnostic power of total urinary protein for AKI in various drug-induced nephrotoxicities, including those of cisplatin and nonsteroidal antiinflammatory drugs. Low eGFR is a known risk factor for AKI, but the utility of proteinuria in combination with eGFR to predict the risk of AKI is now being investigated. In a large cohort of nearly 1 million adult Canadians, James and colleagues demonstrated an independent association among eGFR, proteinuria, and the incidence of AKI. This group reported that patients with normal eGFR levels (≥60 mL/min per 1.73 m²) and mild proteinuria (urine dipstick trace to 1+) have 2.5 times the risk of admission to hospital with AKI compared with patients with no proteinuria. The risk rose to 4.4-fold in patients with heavy proteinuria (urine dipstick ≥2+). Adjusted rates of admission with AKI and of kidney injury requiring dialysis remained high in patients with heavy dipstick proteinuria independent of eGFR. These findings confirm previous reports suggesting that eGFR and proteinuria are potent risk factors for subsequent AKI.


Albuminuria


Albuminuria is recognized as one of the most important risk factors for progression of chronic kidney disease. Albumin is a major serum protein slightly larger than the pores of the glomerular filtration membrane, so albuminuria is best known as a biomarker of glomerular dysfunction; the appearance of albumin in large amounts in urine represents compromised integrity of the glomerular basement membrane. In smaller amounts, however, the presence of albumin in the urine may reflect tubular injury. Albuminuria is classified in the KDIGO classification system as stage A1 (urine albumin excretion [AE] < 30 mg/day or urine albumin-to-creatinine ratio [ACR] < 30 mg per g creatinine), stage A2 (previously termed microalbuminuria; urine AE 30 to 300 mg/day or urine ACR 30 to 300 mg per g creatinine), and stage A3 (previously termed macroalbuminuria; urine AE > 300 mg/day or urine ACR > 300 mg per g creatinine). In a number of clinical studies, albuminuria has been shown to be a sensitive biomarker of drug-induced tubular injury. It is routinely used as a marker of kidney damage for making a CKD diagnosis at eGFRs above 60 mL/min/1.73 m².
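
As a compact restatement of these KDIGO categories, the sketch below maps a urine ACR (mg per g creatinine) onto its albuminuria stage; the thresholds are those given above, and the function itself is simply our illustration.

```python
# KDIGO albuminuria staging by urine ACR (mg per g creatinine),
# using the thresholds described in the text above.
def kdigo_albuminuria_stage(acr_mg_per_g: float) -> str:
    if acr_mg_per_g < 30:
        return "A1 (normal to mildly increased)"
    if acr_mg_per_g <= 300:
        return "A2 (moderately increased; formerly microalbuminuria)"
    return "A3 (severely increased; formerly macroalbuminuria)"

print(kdigo_albuminuria_stage(45.0))  # -> A2
```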


Guidelines of the National Kidney Foundation (NKF) and the American Heart Association (AHA) include microalbuminuria and an increase in total urinary protein excretion as risk factors for renal and cardiovascular diseases, respectively. Both NKF and AHA guidelines suggest measurement of the urine ACR in an untimed spot urine sample. Ideally, the urine ACR should be assessed in at least three different samples to decrease intraindividual variation. Albuminuria is a continuous risk factor for ESKD and cardiovascular mortality with no lower limit, even after adjustment for eGFR and other established risk factors. Urine albumin has been used as a biomarker for monitoring CKD progression and potential therapeutic efficacy, although the FDA does not accept albuminuria as a surrogate marker.


Using microalbuminuria as a marker, Levin and colleagues demonstrated that N-acetylcysteine may attenuate contrast medium–induced glomerular and tubular injury. In the last several years, urine albumin excretion has been increasingly investigated as a biomarker for AKI. The TRIBE-AKI Consortium measured preoperative albuminuria in 1159 adult patients, organizing the cohort into clinical risk categories on the basis of the preoperative urine ACR: 10 mg/g or less (≤1.1 mg/mmol), 11 to 29 mg/g (1.2 to 3.3 mg/mmol), 30 to 299 mg/g (3.4 to 33.8 mg/mmol), and 300 mg/g or greater (≥33.9 mg/mmol). The incidence of AKI, defined as AKIN stage 1, increased across the ACR categories, with patients whose urine ACR was greater than 300 mg/g having a relative risk of 2.36 (95% CI, 1.85 to 2.82) in comparison with the group whose urine ACR was less than 10 mg/g. This association was slightly attenuated after adjustment for variables known to affect proteinuria and AKI (RR, 2.21; 95% CI, 1.66 to 2.73). These adult data contrast with pediatric TRIBE-AKI data (n = 294), which demonstrated no association between preoperative urine ACR and the development of postoperative AKI. The adult ACR data support the use of an additional biomarker to aid cardiac surgery AKI prediction models and supplement other data that point to the use of proteinuria/albuminuria as a biomarker of AKI both preoperatively and postoperatively.
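
The paired thresholds above (e.g., 300 mg/g ≈ 33.9 mg/mmol) follow directly from the molar mass of creatinine (about 113.1 g/mol, so 1 g of creatinine is about 8.84 mmol); dividing an ACR expressed in mg/g by 8.84 yields mg/mmol. A quick check in code:

```python
# Converting a urine ACR from mg per g creatinine to mg per mmol creatinine.
# Creatinine has a molar mass of ~113.12 g/mol, so 1 g = 1000/113.12 = 8.84 mmol.
MMOL_PER_G_CREATININE = 1000 / 113.12

def acr_mg_per_mmol(acr_mg_per_g: float) -> float:
    return acr_mg_per_g / MMOL_PER_G_CREATININE

for acr in (10, 30, 300):
    print(f"{acr} mg/g = {acr_mg_per_mmol(acr):.1f} mg/mmol")
# -> 10 mg/g = 1.1 mg/mmol; 30 mg/g = 3.4 mg/mmol; 300 mg/g = 33.9 mg/mmol
```

These values reproduce the category boundaries used by the TRIBE-AKI Consortium above.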


In the postoperative setting, the TRIBE-AKI cohort showed that urine albumin concentrations (mg/L) and dipstick proteinuria values obtained within 6 hours of adult cardiac surgery correlated with the future development of AKI. Compared with the lowest quintile, the highest quintile of albuminuria and the highest group of dipstick proteinuria were associated with the greatest risk of AKI (adjusted RR, 2.97; 95% CI, 1.20 to 6.91, and adjusted RR, 2.46; 95% CI, 1.16 to 4.97, respectively). However, only postoperative urine albumin concentration (mg/L) was associated with improved risk stratification when added to the clinical model (AUC increased from 0.75 to 0.81; P = 0.006). Despite its known utility in other settings, a higher early postoperative urine ACR (mg/g) was not statistically associated with AKI risk. The poor performance of the urine ACR in the context of adult cardiac surgery may be explained by variations in urine creatinine excretion within and between individuals, which could be especially prominent when renal function is not in a steady state. Urinary albumin (mg/L) in the early postoperative period was also highly predictive of long-term mortality in the TRIBE-AKI adult cohort. Specifically, of patients with perioperative AKI (n = 407), those in the second tertile of albuminuria values were at increased risk of death over the 3.0-year follow-up period (adjusted HR, 2.28; 95% CI, 1.06 to 4.88), and this effect was further magnified in the third tertile (adjusted HR, 2.85; 95% CI, 1.36 to 5.99). There was no increased mortality across any of the tertiles of urine albumin concentration in the 792 subjects without perioperative AKI.


In the TRIBE-AKI pediatric cohort, perioperative values of the urine ACR (mg/g), and not albuminuria (mg/L), were found to be predictive of AKI. In children younger than 2 years, a first postoperative urine ACR of 908 mg/g or higher (103 mg/mmol, highest tertile) predicted the development of AKIN stage 2 or 3 AKI with an adjusted RR of 3.4 (95% CI, 1.2 to 9.4) in comparison with the first tertile. In children 2 years or older, a postoperative urine ACR of 169 mg/g or higher (19.1 mg/mmol, highest tertile), regardless of preoperative values, predicted stage 1 AKI after adjustment for clinical factors such as age, race, sex, preoperative eGFR, and type of cardiac surgery (adjusted RR, 2.1; 95% CI, 1.1 to 4.1). Although urine albumin concentration and urine ACR remain established and readily available laboratory tests, the diversity of results in investigations of postoperative AKI indicates that further studies are needed before either may be used in clinical practice.


Urinary Cystatin C


Urinary cystatin C tracks the function of proximal tubular cells. In healthy individuals, urinary levels of cystatin C are almost undetectable; any damage to proximal tubular cells can impede reabsorption and enhance urinary excretion of cystatin C. Several clinical studies have sought to establish the potential of urinary cystatin C levels for prediction of kidney injury and its prognosis. Herget-Rosenthal and associates analyzed data for 85 ICU patients at high risk for development of AKI, using the RIFLE classification to define AKI. The investigators reported that serum cystatin C signaled AKI 1 to 2 days before changes in serum creatinine, with AUC values of 0.82 and 0.97 on day 2 and day 1, respectively; they also demonstrated that urine cystatin C served as a marker of AKI severity, correlating with the future need for RRT. Urinary cystatin C concentrations, normalized to urinary creatinine, of more than 11.3 mg/mmol were significantly associated with proteinuria. Attempts to validate urine cystatin C as a marker of ICU-associated AKI have produced mixed results. Siew and colleagues measured urine cystatin C in 380 ICU patients (mixed surgical, medical, trauma, and cardiac) and demonstrated no difference in concentrations between those with and those without AKI ( P = 0.87). More encouraging data from the EARLYARF trial showed that, in a cohort of 529 subjects, urinary cystatin C had limited ability to detect AKI (AUC = 0.67 on ICU arrival), with no significant difference in its ability to detect AKI in patients with GFRs above and below 60 mL/min. Additionally, in a separate post hoc analysis of the same study, urinary cystatin C values exhibited a stepwise and significant ( P < 0.001) increase in the following order: patients with no AKI, patients with prerenal AKI, and patients with intrinsic AKI (defined as AKIN stage 1 lasting more than 48 hours).
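
Normalizing a spot urinary biomarker to urinary creatinine, as in the Herget-Rosenthal cutoff of 11.3 mg/mmol quoted above, simply corrects the measured concentration for urine dilution. A minimal sketch with invented sample values:

```python
# Sketch: normalizing a spot urinary cystatin C concentration to urinary
# creatinine, as in the 11.3 mg/mmol cutoff quoted above.
# Both sample values below are invented for illustration.
urine_cystatin_c_mg_per_l = 2.1     # hypothetical spot measurement
urine_creatinine_mmol_per_l = 4.0   # hypothetical spot measurement

normalized = urine_cystatin_c_mg_per_l / urine_creatinine_mmol_per_l
print(f"{normalized:.2f} mg/mmol creatinine")  # -> 0.53, well below 11.3
```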


In contrast to these mixed ICU-associated AKI data, several small studies investigating urinary cystatin C in the setting of cardiac surgery have shown promise. However, these results were not validated in the TRIBE-AKI study. In unadjusted analyses of the adult cohort, several quintiles of urine cystatin C were significantly associated with the development of either mild (AKIN stage 1) or severe (doubling of creatinine or need for RRT) AKI at both the 0- to 6-hour and 6- to 12-hour postoperative time points. However, these small associations were completely attenuated after adjustment for the clinical model. Similarly, in the TRIBE pediatric cohort, no quintile remained significantly associated with AKI (mild or severe) in the adjusted analyses. Urinary cystatin C demonstrated similar results when measured in a cohort of 1635 ED patients, providing an AUC of 0.65 (95% CI, 0.58 to 0.72) for the future development of AKI. However, in a multivariate analysis that included traditional (creatinine) and more modern (urine NGAL, KIM-1, IL-18, and L-FABP) biomarkers, urinary cystatin C was not a significant contributor to the prediction of the composite outcome of inpatient RRT or death. Finally, when investigated in a prospective multicenter observational cohort study of deceased-donor kidney transplants, urinary cystatin C values from the first postoperative day were modestly correlated with 3-month allograft function, whereas the AUC for predicting delayed graft function at the 6-hour postoperative time point was 0.69.


A number of studies have reported increased urinary cystatin C levels in patients with proteinuria, suggesting the possibility of tubular damage as a consequence of protein overload. Currently, cystatin C has several disadvantages as a biomarker, including lack of international standardization and the expense of the assay. Although serum cystatin C has been demonstrated to be a reliable biomarker of eGFR, cystatin C synthesis is increased in smokers, patients with hyperthyroidism, those receiving glucocorticoid therapy, and those with elevations of inflammatory markers such as white blood cell count and C-reactive protein, and the impact of these factors on urinary cystatin C in the setting of AKI has not been fully investigated. Several different commercial assays are available to measure cystatin C; the commercially available immunonephelometric assay has the advantage of providing rapid, automated measurement, with results available in minutes. In addition, preanalytic factors, such as routine clinical storage conditions, freeze-thaw cycles, and interfering substances such as bilirubin and triglycerides, do not affect cystatin C measurement.


TIMP-2 and IGFBP-7


Tissue inhibitor of metalloproteinase-2 (TIMP-2) and urinary insulin-like growth factor–binding protein 7 (IGFBP-7) have been shown to serve as biomarkers of AKI in the setting of critical illness. They were originally discovered as part of a three-center discovery cohort of 522 subjects with AKI stemming from sepsis, shock, major surgery, and trauma. More than 300 potential markers were evaluated, and TIMP-2 and IGFBP-7 were the two that best predicted the development of KDIGO stage 2 or 3 AKI. This finding was then validated in a prospective international multicenter observational study of 728 subjects. In this validation study, TIMP-2 and IGFBP-7 remained the top two performing biomarkers for the prediction of RIFLE Injury or Failure within the first 12 to 36 hours of study enrollment, providing AUCs of 0.77 and 0.75, respectively. When the two biomarker values were multiplied together, they demonstrated an improved ability to detect this same endpoint (AUC = 0.80). However, we are cautious about the interpretation of these data because the study did not supply information about the combination of TIMP-2 and IGFBP-7 with other biomarkers of AKI (e.g., NGAL, KIM-1, L-FABP, IL-18). This is the first and only publication to demonstrate that either of these markers plays a role in diagnosing AKI, and the findings will require validation in follow-up studies. These biomarkers are unique in that they play a role in cell cycle arrest: both IGFBP-7 (through p53 and p21) and TIMP-2 (through p27) block the effect of cyclin-dependent protein kinase complexes, resulting in short periods of G1 cell cycle arrest. A commercial assay for these biomarkers is currently available in Europe, but the TIMP-2 and IGFBP-7 test has not been approved for clinical use in the United States.
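
A minimal sketch of the multiplicative combination described above follows. Dividing the product by 1000, which yields units of (ng/mL)²/1000, mirrors what we understand to be the commercial assay's reporting convention; that convention, along with the input concentrations, should be treated as our assumption rather than a detail from the validation study.

```python
# Sketch: combining the two cell cycle arrest markers by multiplication.
# The /1000 scaling to (ng/mL)^2/1000 reflects the commercial reporting
# convention as we understand it (an assumption); the concentrations below
# are invented for illustration.
def timp2_igfbp7_score(timp2_ng_per_ml: float, igfbp7_ng_per_ml: float) -> float:
    return timp2_ng_per_ml * igfbp7_ng_per_ml / 1000.0

score = timp2_igfbp7_score(timp2_ng_per_ml=5.2, igfbp7_ng_per_ml=120.0)
print(f"[TIMP-2] x [IGFBP-7] = {score:.2f} (ng/mL)^2/1000")  # -> 0.62
```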
