In this issue, Hee Yeon Kim et al. [1] further extend an evolving understanding of the ALT reference range in ways that affirm the importance of conceiving of laboratory data as probability estimates influenced by clinical circumstances. Their work also demonstrates latent clinical information in laboratory data that outdated reporting methods leave untapped but that an “intelligent” medical laboratory could provide.

While some laboratory results have binary outcomes, others fall across a range typically distributed within a “healthy population.” But there are pitfalls in treating the cutoffs as if they cleanly separated “health” from “disease,” as demonstrated by serum concentrations (levels) of alanine aminotransferase (ALT). ALT levels are not normally distributed [2], differ with race, size, and age [2, 3], and can vary widely in specificity and sensitivity for liver disease depending upon how stringently the reference range is defined [4, 5]. While the relative contributions of obesity and the commonly—but not inevitably—linked metabolic syndrome to cardiovascular disease risk are still not entirely clear [6], Kim’s article demonstrates that important clinical information may be hiding within the normal ranges set by the laboratory. In this study, ALT levels within the established normal range but above a more stringently defined upper limit predicted the metabolically obese phenotype even in patients who were not themselves overtly overweight. Even for clinicians adroit at using standardized thresholds to interpret diagnostic testing [7], reporting a value against a single reference range thus simplifies this potentially useful information into oblivion.

In a paper-and-pencil era, an experienced clinician might interpret ALT in light of individual patient characteristics—ALT levels that would provoke a biopsy if obtained from a small Asian woman with hepatitis B might not raise concern when obtained from a large African-American man. But in the era of Big Data, should the clinical laboratory simply broadcast numbers and ranges, leaving the recognition of meaningful patterns entirely to the clinician?

The importance of laboratory data is unquestionable—the British National Health Service estimates that nearly two-thirds of diagnoses already depend to some degree on clinical laboratory information [8]—but its potential outstrips current applications. In the USA, where patient records are frequently scattered among non-interfacing health systems, the medical information in a community’s laboratory archive provides an often unrecognized continuity. With the addition of a small amount of individual data—patient diagnosis codes, ethnicity, prescriptions, BMI—laboratory measurements could yield far more clinical information for the clinician. In automatically performing operations on the raw data, the laboratory moves beyond broadcasting and archiving (“memory”) to intelligent function (“brain”), automatically generating clinical meaning from its data, such as indicating that an ALT between 20 and 40 might suggest the presence of the metabolic syndrome. If this seems unnecessarily complicated now, wait until the coming proliferation of genomic data expands individuated latent medical meaning exponentially beyond the scope of any single practitioner.
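To make the “brain” function concrete, the following is a minimal sketch, in Python, of how a laboratory might annotate a raw ALT value rather than merely broadcast it. The 20–40 band and the wording of the notes are illustrative assumptions drawn from the discussion above, not validated decision rules.

```python
# Minimal sketch of the "brain" function: the laboratory annotates a raw ALT
# value instead of merely broadcasting it.  The 20-40 band and the note
# wording are illustrative assumptions, not validated decision thresholds.

def annotate_alt(alt: float) -> str:
    """Return the raw ALT value plus a laboratory-generated interpretive note."""
    if alt >= 40:
        note = "above the conventional upper limit; evaluate for liver disease"
    elif alt >= 20:
        note = ("within the conventional reference range but above a stringent "
                "upper limit; may suggest a metabolically obese phenotype")
    else:
        note = "within a stringently defined reference range"
    return f"ALT {alt:.0f}: {note}"


print(annotate_alt(28))
```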

The simplest improvements in laboratory access and presentation of data take advantage of “memory” to avoid duplication. For example, an intelligent laboratory would retain invariant information for an individual patient (such as hemochromatosis genotype) and, if such a test is ordered, first search its archives for previous results otherwise unknown to the ordering physician. Re-reporting rather than re-assaying saves both money and phlebotomy.
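A minimal sketch of this “memory” behavior follows, assuming a simple keyed archive; the archive layout, test names, and function are hypothetical and for illustration only.

```python
# Hypothetical sketch of the "memory" function: before an invariant test is
# re-assayed, the laboratory searches its archive and re-reports any prior
# result.  The archive layout, test names, and function are illustrative.

INVARIANT_TESTS = {"HFE genotype", "ABO blood group"}

def handle_order(archive: dict, patient_id: str, test_name: str) -> str:
    previous = archive.get((patient_id, test_name))
    if test_name in INVARIANT_TESTS and previous is not None:
        # Re-report rather than re-assay: no new phlebotomy, no new charge.
        return f"Archived result re-reported: {previous}"
    return "No prior result on file; proceed to specimen collection and assay"


archive = {("MRN-001", "HFE genotype"): "C282Y homozygous"}
print(handle_order(archive, "MRN-001", "HFE genotype"))
```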

For data that vary over time, an intelligent laboratory can reduce testing by including in every new report the difference from the last such assay, or the average over a designated interval. A cross-covering doctor would respond differently to a midnight call reporting a platelet count of 30,000 knowing that the patient’s 6-month average was 25,000 than knowing it was 250,000.
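One way such time-aware reporting could look is sketched below, assuming the laboratory holds a dated result history for the analyte; the data layout and the roughly 6-month (182-day) window are illustrative choices, not a prescribed format.

```python
# Sketch of time-aware reporting: each new result is returned together with the
# change from the previous assay and the mean over a look-back window (182
# days here).  The data layout and window are illustrative assumptions.

from datetime import date, timedelta
from statistics import mean

def contextualize(history, new_value, today, window=timedelta(days=182)):
    """history: non-empty list of (date, value) pairs for the same analyte."""
    history = sorted(history)
    last_value = history[-1][1]
    recent = [value for when, value in history if today - when <= window]
    return (f"Platelets {new_value:,.0f}"
            f" | change from last result: {new_value - last_value:+,.0f}"
            f" | {window.days}-day mean: {mean(recent):,.0f}")


history = [(date(2024, 1, 5), 24_000), (date(2024, 3, 2), 26_000)]
print(contextualize(history, 30_000, today=date(2024, 6, 1)))
# Platelets 30,000 | change from last result: +4,000 | 182-day mean: 25,000
```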

A laboratory can also save time and improve outcomes if abnormal results automatically trigger the appropriate follow-up testing. For instance, a laboratory that recognizes a first positive hepatitis B surface antigen result could automatically trigger viral load measurement from the same specimen.
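A rule table is one simple way such reflex testing could be expressed; the rules and names in the sketch below are hypothetical illustrations, not an actual laboratory protocol.

```python
# Sketch of reflex testing: a qualifying result automatically adds a follow-up
# order on the same specimen.  The rule table and names are hypothetical
# illustrations, not an actual laboratory protocol.

REFLEX_RULES = {
    # (test, qualifying result) -> follow-up test on the same specimen
    ("HBsAg", "positive"): "HBV DNA viral load",
}

def reflex_orders(test, result, prior_results):
    """Return follow-up orders triggered by a first qualifying result."""
    follow_up = REFLEX_RULES.get((test, result))
    is_first = (test, result) not in prior_results
    return [follow_up] if follow_up and is_first else []


print(reflex_orders("HBsAg", "positive", prior_results=set()))
# ['HBV DNA viral load']
```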

Once “hard-wired” into a reporting system, these improvements require no additional testing since they rely on existing results already in the laboratory’s memory, waiting to be utilized.

The greater a laboratory’s ability to translate results from its database into individualized meta-data, the more intelligent it becomes. For a patient with diagnostic codes indicating chronic liver disease, an individualized reporting screen could include a Model for End-Stage Liver Disease (MELD) score automatically updated whenever the underlying data are refreshed. With respect to ALT results, a laboratory that takes into account the age, BMI, and ethnicity of the patient could adjust the reference range to the individual in question and to the specific needs of the clinician. An on-screen report could be soft-wired to include an adjustable reference range bar. By toggling the bar, the clinician could adjust the range and, for each such setting, see the new sensitivity and specificity estimates for predicting liver disease—or the metabolic syndrome phenotype—given the patient’s result. Alternatively, reporting could be supplemented with a visual analog reference range bar with a density proportional to the probability of liver disease, demarcated in another color indicating the probability of metabolic syndrome, individualized for that particular patient’s age and weight, and further adjusted for lipid, glycosylated hemoglobin, or other pertinent values as they become available. If a mutual fund’s webpage can update anticipated annual income as users toggle between various investment returns, how difficult would it be for an intelligent laboratory containing a patient’s transaminase and platelet levels to present an aspartate aminotransferase-to-platelet ratio index (APRI) sliding scale for the assessment of fibrosis risk, allowing the physician to optimize positive or negative predictive values depending upon the clinical interest [9, 10]? In short, laboratories must go beyond reporting specific laboratory values to create the meta-data that arise from the interactions of information in an individuated laboratory archive and actively present these results to the ordering clinician.
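As one concrete example, the APRI is simple arithmetic on values already in the archive. The sketch below uses the published formula (AST divided by its upper limit of normal, multiplied by 100, divided by the platelet count in 10^9/L); the report wording and the 0.5/1.5 cutoffs are offered only as commonly cited illustrations, not validated thresholds for any individual patient.

```python
# Sketch of an individualized APRI report generated from values already in the
# archive.  The formula is the published APRI (AST / AST upper limit of normal
# x 100 / platelets in 10^9/L); the cutoffs and wording are offered only as
# commonly cited illustrations, not validated decision rules.

def apri(ast, ast_uln, platelets_10e9_per_l):
    """Aspartate aminotransferase-to-platelet ratio index."""
    return (ast / ast_uln) * 100 / platelets_10e9_per_l

def apri_report(ast, ast_uln, platelets):
    score = apri(ast, ast_uln, platelets)
    if score < 0.5:
        note = "low; significant fibrosis less likely"
    elif score > 1.5:
        note = "high; significant fibrosis more likely"
    else:
        note = "indeterminate; consider additional assessment"
    return f"APRI {score:.2f}: {note}"


print(apri_report(ast=80, ast_uln=40, platelets=90))
# APRI 2.22: high; significant fibrosis more likely
```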

Websites already allow clinicians to calculate coronary disease risk or to assess the response to treatment for alcoholic hepatitis, generating personalized meta-data at the level of their personal electronic devices. But with the proliferation of high-volume genomics, redundant data entry will become increasingly impractical, since no clinician will be able to identify all of the pertinent interactions with a patient’s medical needs. As the intelligent storehouse of the patient’s entire database, the laboratory is the information nexus where meaningful clinical information is most simply and easily generated. Few treating physicians would be sufficiently facile with the genomics pertinent to the management of acute respiratory distress syndrome, for example, to identify the most relevant data latent in a stored genome. In contrast, the simple addition of the ARDS diagnosis to the patient’s profile could prompt the laboratory to select and report the clinically meaningful information [11].

A report on the “Precision Medicine: Personal Genomes and Pharmacogenetics” meeting in November 2013 suggests a world in which clinicians cannot keep track of genomic data relevant to care without the circumscribing context of individualized clinical information [12]. But a prescription entered into the intelligent laboratory database could instantly trigger reporting of sequences pertinent to that patient’s drug metabolism, and a diagnosis of neuropathy could generate a review of the growing number of sequences relevant to its cause. Rapidly increasing knowledge of the mutations underlying disease phenotypes and responses to clinical treatment will necessitate regular updates, but reports would be individualized for relevance.

If the future is near, it is not too soon to re-conceive the clinical laboratory not as a mere broadcaster of results, but as a patient’s memory and a physician’s auxiliary brain. The intelligent laboratory will provide not only individual test results but clinically important patient-specific information. It will prompt the physician when old data—or new meaning [1]—can be extracted from the archive and will assume a more robust, more active presence in clinical care.