Cite this article as: Delehanty, M. Medicine Studies (2010) 2: 161. doi:10.1007/s12376-010-0052-2
Given that many imaging technologies in biology and medicine are non-optical and generate data that is essentially numerical, it is a striking feature of these technologies that the data generated using them are most frequently displayed in the form of semi-naturalistic, photograph-like images. In this paper, I claim that three factors underlie this: (1) historical preferences, (2) the rhetorical power of images, and (3) the cognitive accessibility of data presented in the form of images. The third of these can be argued to provide an epistemic advantage to images, but I will further argue that this is often misleading and that images can in many cases be less informative than the corresponding mathematical data.
Imaging technologies seem, by their very name, to refer to the production of images, yet it is a striking feature of many imaging technologies that their output need not be images. When we examine the means by which PET images are produced, for instance, we see that the fact that they even are images is accidental. While a photograph may be measured and subjected to quantitative analysis subsequent to its production, PET images require that extensive mathematical transformation occur to produce the data that can then be represented in the form of an image. In the case of PET, the result of signal detection, data correction, and reconstruction is a numerical value assigned to each voxel. The final conversion of this data into the form of a vaguely naturalistic image is simply a matter of assigning a color (or gray level) to particular ranges of numerical values and then displaying the data in a 2-D or 3-D array. It could just as easily be represented in other ways. For instance, the change in the average voxel intensity within some defined region or regions of interest over time could be displayed in graphical format. Some neuroscientists and cognitive scientists, in fact, prefer to represent their data in this way.1 Yet, virtually every scientific paper that reports data from functional imaging studies contains at least some photograph-like images. Given that there is a choice between data display formats, why are images the dominant form? In this paper, I will claim that three factors contribute to this: (1) historical preferences, (2) rhetorical power, and (3) cognitive accessibility. The third of these is particularly important, since it is central to the question of whether there is an epistemic advantage to using images. However, the cognitive accessibility of images can sometimes lead to a tendency to ignore the essential numerical nature of this data. 
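The color-assignment step just described is, computationally, nothing more than a table lookup. The following sketch (with invented voxel values, bin edges, and palette) illustrates how little the image format is dictated by the data itself:

```python
import numpy as np

# Hypothetical 2x2 "slice" of reconstructed PET data: one numerical
# value per voxel (the values here are invented for illustration).
voxel_values = np.array([[0.12, 0.47],
                         [0.85, 0.33]])

# Displaying the data as an image is just a lookup: partition the
# intensity range into bins and assign each bin a color (names stand
# in for RGB triples or gray levels).
bin_edges = [0.25, 0.5, 0.75]                     # four bins over [0, 1]
palette = np.array(["blue", "green", "yellow", "red"])

bin_index = np.digitize(voxel_values, bin_edges)  # which bin each voxel falls in
image = palette[bin_index]                        # the "image": a color per voxel

# The same numbers could instead be summed over a region, graphed over
# time, or tabulated; nothing in the data itself forces the image format.
```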
When we look more closely at how the value of this data can be maximized, we see that it is essential to recognize it as numerical. By examining the great advances that have been made over the last decade using increasingly sophisticated forms of multi-voxel pattern analysis (MVPA), I hope to show that the cognitive accessibility of images actually provides quite limited epistemic advantages.
What Can We See in the Data?
Some types of data often seem to us to essentially wear their reliability on their sleeve. We tend, for instance, to take photographs, video recordings, and photograph-like images such as X-rays as reliable forms of evidence for certain sorts of visually accessible features of the world.2 This is, in part, a consequence of the fact that the processes involved in producing these forms of evidence usually are reliable. However, it is also partially explained by our familiarity with these sorts of images and the sorts of things they often represent. We are all highly trained in reading these types of images, even if we are not always expert in identifying or interpreting specific kinds of content. The layperson looking at a photograph of a face, for instance, will recognize it as a face though that same person looking at a photograph of a tissue sample stained to reveal macrophages may have no idea what they are looking at.3
This familiarity with certain visual formats not only allows us to identify their content, but, importantly, also often allows us to judge the reliability of the image.4 We know both what a (real) face looks like and what a reliable photograph of a face looks like. Some sorts of variation we know to be permissible—we do not think that a black-and-white photograph is generally unreliable, for instance.5 However, if we see a very blurry photograph, we will be more inclined to question its reliability because we can tell that something has gone wrong in the production of the image. Our ability to read the content of certain types of visual images is sometimes but not always connected to our ability to read their reliability. This is especially evident in the photographic representations of visual illusions such as the Ames room in which people of the same height appear to be dramatically taller or shorter than each other depending on their position in an oddly shaped room that appears perfectly rectangular from the limited perspective of the viewer.6 In this case, the unrecognized unreliability arises from the absence of some depth cues, not from unfamiliarity with what people of different height look like in a “normal” photograph. In other cases, we may be relatively unfamiliar with the content of an image and still be able to recognize it as less than maximally reliable. In the case of the tissue sample, for instance, some people would likely still be able to identify blurriness as a problem, though they would probably not pick up on other problematic features related to content and to parts of the experimental set-up upstream of the production of the photograph.7 Thus, while it is obviously not always correct to do so, it is easy for us to interpret reliable-appearing photographs or photograph-like images8 as being, in fact, reliable.9
This connection between our expertise in reading photographs and our inclination to interpret them as being reliable is undoubtedly connected to the persuasive power of images, a topic that will be returned to in Sect. 3. It also highlights the importance of distinguishing between the production of data (the relationship between the object and the representation) and the use of data (the relationship between the representation and the human user—or observer—of that representation). There is a great deal to be said about the former, but that is not the issue at hand here. It is the fact that the same data, obtained by a specific process for a particular sample, can be displayed in a variety of different ways that is my focus here. It is an important feature of the use of data that different representations of the same data may be interpreted very differently by the user.
In order for an image or any type of data to be reliable, it must satisfy two criteria: (1) there must be a granularity match between the instrument and the description of the world at which a particular question is directed, and (2) the structure of the object must be preserved in the data within finite error bounds (Delehanty, forthcoming). Whether these criteria are met in any given case is in part dependent on the question to which an answer is sought, but the range of possible questions that can be reliably answered using a given instrument is itself constrained by the nature of the processes involved in the instrument.10 This account of reliability does not distinguish between numerical data or images: both can be described in terms of error bounds and granularity.
An essential part of reliability, though, is allowing certain objects, properties, or features to be discriminated. The ability of the user of the data to make certain types of discriminations is affected both by the data production process (constraints imposed by features of the object-representation relationship) and by the data display format. Features of the data production process obviously limit the types of discriminations that can be made using a given instrument. For instance, if one object contains twice the radioactivity of a second, but my method of data production cannot discriminate between x and 2x over this range of radioactivity, the same value will be assigned to both objects and I will not be able to discriminate between the two (at least not with respect to their level of radioactivity). Situations such as this can arise either from the detector used by the instrument or from the mathematical or statistical processing. If a particular detector can (or is set to) collect only a specific range of wavelengths or can register only a maximum number of radioactive counts per second, then it obviously cannot be used to make discriminations that would require data about other wavelengths or distinguishing between two different radiodensities both of which produced counts above the maximal rate. Alternatively, mathematical or statistical features such as partial volume effects may be the limiting factor.
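The count-rate case can be made concrete. In this sketch (with invented numbers and a hypothetical saturation limit), two objects whose radioactivity differs by a factor of two are assigned the same recorded value, so no downstream processing or display format can recover the difference:

```python
import numpy as np

# Two objects, the second with twice the radioactivity of the first
# (counts per second; values invented for illustration).
true_counts = np.array([50000.0, 100000.0])

# Suppose, hypothetically, the detector saturates at 40,000 counts/s:
# any rate above the maximum simply registers as the maximum.
MAX_RATE = 40000.0
recorded = np.minimum(true_counts, MAX_RATE)

# Both objects receive the same recorded value; the x vs. 2x
# discrimination is lost at the point of data production.
```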
For the purpose of examining whether visual images have any kind of epistemic advantage, it will be important to keep in mind not only that any display format can be used more or less effectively, but that, in the case of the sorts of imaging technologies that are the focus of this paper, the use of different data display formats does not indicate a difference in the object-representation relationship, but only in that between the representation and the user. The data collected using the instrument is the same no matter what form of data display is chosen.12 The numerical value associated with each pixel or voxel is not changed when we represent it in a different format or by a different color within a certain format. Thus, if images are to provide some kind of advantage, it will be in terms of their use by the viewer rather than in terms of their content.
Given the above, images are potentially able to play an important epistemic role13 in virtue of their cognitive accessibility. Images make many features of the data set (overall patterns, relationships between parts of the images and between large and small scale structure) more easily accessible to the human cognitive system than do other types of display such as linear strings of numbers. This is particularly true for very large, complex data sets such as those produced by PET and confocal microscopy. It might well be possible for me to extract as much information from a string of numbers identifying the number of blades of grass in each of two halves of a 1 square inch patch of lawn as from an image of the area with different colors used for different numbers of blades, but as the number of data points increases, so does the efficiency of the visual over the numerical display. A graphical representation of the patch of lawn divided into quadrants rather than halves would, at least for most people, probably make it easier to identify the spatial relationship (directly vertical, directly horizontal, etc.) between the most grassy and least grassy quadrants. For the tens of thousands of voxels in the average PET image, there is no question of our being able to identify areas of high or similar activity by looking at strings of numbers, let alone being able to tell what region of the brain those areas correspond to. But this information is very readily picked up by even a quick scan of the PET image.
Why Images? Some Other Perspectives
Before turning to a more complete examination of the contribution of cognitive accessibility to the epistemic role of images, I want to very briefly acknowledge some of the answers that other disciplines have to offer to the question of why images are a preferred form of biologic evidence, even when other options are available. A considerable amount of work has been done on the history, sociology, anthropology, and rhetoric of scientific images (e.g. Dumit 2004; Lynch and Woolgar 1990; Cartwright 1995; Jones and Galison 1988; Elkins 1999; Kevles 1996; Abraham 2003; Breidbach 2002), and these perspectives are crucial to a complete answer to this question. While it is clearly impossible for me to address all of the responses that these disciplines have suggested, I want to very briefly sketch a couple of possibilities that seem to be particularly important. These are (1) the historical importance of visual evidence and images in medicine and biology, and (2) our affinity for and attraction to images together with the rhetorical power of images.
N. J. Berrill has claimed that biology is and has always been an “eminently and inherently visual” science (1984, 4). Evelyn Fox Keller claims that while various branches of biology take different forms of evidence to be explanatory and there has often been conflict between those, in the tradition of natural history, that give preference to observation (whether direct or via imaging technologies) and those that are more theoretical and give preference to mathematical models, there is a common attraction to the use of visual representations that resemble what we get by direct observation—i.e. naturalistic images (2002, 202). Data that are the output of mathematical models—cellular automata, for example—become more acceptable to a broad range of biologists and gain persuasive power when the results are displayed in ways that bear visual resemblance to the objects and processes they are supposed to represent (2002, 272). Essentially, it seems, most biologists like to watch natural objects doing things. Advances in biologic imaging, including confocal microscopy, in the last 15 years or so are widely considered to have revolutionized14 cell biology. While this claim is true simply in virtue of the enormous advances that have been made in the types of questions that can be asked and the ease with which they can be addressed, it is often justified at least in part by making reference to the fact that these advances have allowed us to watch events occurring inside cells. It is not only that we now have the ability to easily ask many questions that were previously difficult or impossible to address: it is that we can see—or watch—things happen. I will have more to say about the difference between seeing and watching later, but for now it is sufficient to notice that having specifically visual access to objects and events of interest is a longstanding desire in biology.
Affinity for and Rhetorical Power of Images
The source of the preference of biologists for visual access to the world that was discussed in the last section is not entirely clear. One plausible way of accounting for it is by reference to the fact that, when used under appropriate conditions, visual perception is usually reliable. We learn to trust the results of our eyes, under most conditions, and ways of investigating the world that seem to be like straightforward, unaided visual observation may more easily be taken to also be trustworthy in virtue of this apparent similarity. In essence, seeing is believing, and if we can come up with new ways of seeing then we might at least be inclined to think that we should believe what we see in these new ways too. While of course no scientist naively believes that our eyes or imaging technologies always produce veridical data, the phrase “seeing is believing” appears in several paper titles (Hearst 1990; Orr-Weaver 1995; Monteith 2000; Herschman et al. 2000) as well as in a letter to the editors of Nature in which the author suggests that our natural tendency to go from seeing to believing is now being inverted through the use of digital manipulation of image data (Greene 2005).
The editorial on which the author of this letter is commenting brings up another reason why images may have persuasive power: we are simply drawn to attractive images. We like to look at them and we like to make them: “Tweaking images is also seductive in a way that adjusting statistics is not, because of the natural human desire to create an aesthetically pleasing picture” (Pearson 2005).18 This sometimes leads us into questionable digital manipulation practices, but it also leads to such things as the calendars of extraordinarily beautiful scientific images that are often put out by companies such as Zeiss that make microscopes. The beauty of the images may, in some cases, be an end in itself, but it may also serve other purposes. In 2003, the American Association for the Advancement of Science together with the journal Science organized the first annual Science and Engineering Visualization Challenge. The report on the outcome of the 2004 version clearly states that the contest was designed to foster “the ability to convey the essence and excitement of research in digitized images, color diagrams, and even multimedia presentations” since this increases public attraction to and understanding of science and since, in the report’s words, “the general public that ultimately supports the global research enterprise … everybody benefits” (Supplee and Bradford 2004). Joseph Dumit, an anthropologist of science, suggests that images can do this in virtue of their ability to serve multiple purposes and hold several different meanings simultaneously. A single PET image can represent not only the actual blood flow in a slice of a specific individual’s brain over a particular time period, but also the pattern of blood flow in some type of person (e.g. schizophrenics), the viability of PET as a research tool for certain disciplines and types of questions, and (perhaps most importantly for the public perception and support of science), the value and importance of research in neuroscience more generally (2004, 4).
As suggested by the multiplicity of meanings and roles they can play, images are important not only for the reception of science by the general public, but for the evaluation of individual pieces of research and research projects by journal editors and grant review boards. Dumit interviewed a number of prominent PET researchers about various aspects of their use of images and found that most claimed that it was crucial to include brain images (as opposed to only graphical or other statistical data) in articles submitted for publication or grant applications since the failure to do so significantly reduces your chance of getting your work published or funded (2004, 57). It is important to note that the quality of the data does not change between these different display formats, but apparently the appeal, power, or apparent importance of the data does. One feature of naturalistic images that potentially contributes to the authority that they may hold outside of a very specific scientific or medical context is their resemblance to photographs. Despite the enormous complexity of producing a PET image, by or for the layperson such images are often interpreted as being essentially photographs of the brain. As such, they inherit the presumed objectivity and reliability of a photograph19 and serve as persuasive evidence for the (multiple) claims that they are used to support. The combination of presumed objectivity and reliability together with the ease with which these images come to hold multiple meanings gives them enormous power. Others have written extensively about the power that visual images (scientific and otherwise) exert on public discourse and emotion (e.g. Cartwright 1995) as well as on the esthetics of scientific images (e.g. Stafford 1991, 1994, 1996; Elkins 1999), but I will not discuss their work here. Instead, I will now turn to the feature that contributes to the potential epistemic roles of images: cognitive accessibility.
The fact that visual representations (including not just photograph-like images but diagrams, maps, graphs, etc.) can present us with large amounts of complex data in a way that is more easily available to our cognitive apparatus than is data in a straightforward numerical format is uncontroversial. Even very simple types of visual array, such as arranging numbers from highest to lowest, make it much easier for most people to identify certain features of the data (Tufte 1983; Tufte 1997). Faced with tens of thousands of numbers in a PET data set listed in a linear sequence proceeding from the first to the last slice (along the z-axis) and, within a slice, from left to right (along the x-axis) and from bottom to top (along the y-axis), no human could hope to identify regions of higher or lower activity within any reasonable period of time. We might be able to scan the list and eventually come up with a set of the highest numbers, but to keep track of the position on the x-, y-, and z-axes to which each value belonged and which values were adjacent to other high values, while theoretically possible, would be enormously difficult and time-consuming, especially if we were not to add some sort of visual representation (lines, symbols, etc.) to mark the spatial location to which each number belonged. The epistemic value of cognitive accessibility, then, is not that images contain spatial information that is not present in the corresponding numerical data, but that they make it much easier to get it into our heads, that is, to produce belief or knowledge. In general, the larger and more complex the data set, the greater the epistemic advantages of using some form of visual representation.
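For a machine, of course, the bookkeeping that defeats the human reader is trivial: recovering the spatial arrangement from the linear sequence is a single reshape. A minimal sketch, with invented dimensions far smaller than a real scan:

```python
import numpy as np

# A toy data set listed as one linear sequence, ordered slice by slice
# (z), then row by row (y), then left to right (x). A real PET volume
# has tens of thousands of voxels; these dimensions are for illustration.
nx, ny, nz = 4, 4, 2
flat = np.arange(nx * ny * nz, dtype=float)  # stand-in intensity values

# Recovering the spatial arrangement is pure bookkeeping:
volume = flat.reshape(nz, ny, nx)

# Each value is unchanged; only its presentation differs.
z, y, x = 1, 2, 3
assert volume[z, y, x] == flat[z * ny * nx + y * nx + x]
```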
However, this advantage holds in general for any type of visual representation. Is there any special advantage to photograph-like images compared to other visual formats such as graphs? Recall that some scientists who use PET are reported to prefer graphs to semi-naturalistic brain images. An important caveat that was left out earlier, however, is helpful in identifying what advantage images specifically might have. Even scientists who prefer to analyze their data using various graphical representations use the images at earlier stages in order to get an overall sense of the data in order to judge whether the experiment worked or showed some characteristic(s) that might suggest that something had gone wrong with the experiment.20 What the images do very effectively is to give the user a sense of the overall characteristics of the data: how both adjacent and distant parts of the image compare to one another and what both the global and local characteristics of the data are. Looking at the image, for instance, makes it easy to see whether two regions of interest (ROIs) are active at the same time or if one region becomes active following the other. The same information is present if the data is presented in the form of two graphs, a time course of activation for each of the two ROIs, but in this case additional work must be done (e.g. using the same scale and aligning one graph above the other) in order to pick out this larger scale feature of the data. Under some circumstances, looking at the image may help to initially identify a ROI. While a ROI may sometimes be defined prior to an imaging study in terms of anatomic structure or Talairach coordinates, in other cases the ROI(s) may not be identified until the imaging data has been acquired. In such a case, a ROI will often be defined as an area that is differentially active between control and test individuals or between a baseline state and a response to some stimulus. 
Delineating the boundaries of a ROI in this situation involves identification of an area of high activity and so requires a very simple kind of “seeing as”. Although the area may have no pre-specified shape, it still requires that we recognize specific features as characteristic of a ROI: a patch of either uniform color (or a mixture of the colors representing the highest activity levels) and the boundaries at which the color shifts to one representing a lower activity.21
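The ordering of activation that a viewer reads off an image can equally be recovered from the time courses themselves. A minimal sketch with invented data for two hypothetical ROIs:

```python
import numpy as np

# Invented mean-intensity time courses for two ROIs.
t = np.arange(10)                    # acquisition time points
roi_a = np.exp(-(t - 3) ** 2 / 2.0)  # activity peaking at t = 3
roi_b = np.exp(-(t - 6) ** 2 / 2.0)  # activity peaking at t = 6

# Which region becomes active first is read directly off the numbers;
# no alignment of graphs or images is needed.
peak_a = t[np.argmax(roi_a)]
peak_b = t[np.argmax(roi_b)]
```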
This advantage that images have over representation of portions of the data in other formats such as graphs may sometimes be reversed, however. The central point about cognitive accessibility is that some types of representation allow us to more easily make certain sorts of discrimination. Sometimes it is easier to discriminate more local features of the data if the extra, non-local data is removed from view. The presence of excess information can make it harder (though, again, not impossible) to pick out specific features of interest. Thus, if what you are most interested in is relatively subtle differences in the timing of activation of a specific ROI between two populations, it may very well be easier to make the relevant discriminations by looking at time courses for that ROI in the two groups and eliminating all of the data from other areas. The effect of using different color schemes in Fig. 2 is essentially the same: it highlights some differences while obscuring others and so facilitates some discriminatory tasks while making others more difficult.
Representation of the data in numerical form can also offer some potential advantages. In particular, numerical data is capable of representing an infinite number of different values. Of course, no instrument offers infinite precision so that the full advantage of numerical representations is never actually needed. But a numerical format does have the capacity to fully capture the granularity of the instrument: the apparent granularity of the numerical representation will be equal to that of the instrument, whereas the apparent granularity of the data represented as an image will usually be less than that of the instrument. This situation arises because the human visual system is able to discriminate only a very limited number of shades of gray and a larger, but still finite, number of colors. Therefore, when PET or other data is represented as a gray scale or pseudocolor image, each gray level or color must be used to represent a range of numerical values. This is not necessarily a bad thing—in fact, by eliminating some differences that are not relevant to answering the question of interest, we can more easily pick out those that are relevant.22 However, it does mean that finer-grained distinctions can be made over the full range of data values using the numerical data rather than an image. The full granularity of the instrument could be captured in a set of images if we iteratively selected small regions of a larger scale image in which only a limited segment of the full color range was present and redefined the intensity range associated with each color such that a smaller and smaller range was used in each iteration. We could eventually capture the full granularity of the instrument, but it would require a large number of images and the fact that the same color would represent different intensities in different images would eliminate any cognitive advantage.
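The loss of granularity at the display step can be shown directly. In this sketch the instrument is assumed, purely for illustration, to resolve 10,000 intensity levels while the display offers only 256 gray levels:

```python
import numpy as np

# Two intensity values that the instrument can tell apart
# (assumed instrument granularity: 10,000 levels).
values = np.array([1234, 1240])

# Mapping to a 256-level gray scale collapses ranges of values
# onto single gray levels.
N_LEVELS, N_GRAYS = 10000, 256
gray = values * N_GRAYS // N_LEVELS

# The distinction survives in the numerical data but not in the image:
# both values land on the same gray level.
```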
Thus, the cognitive advantage of images over other forms of data display is relative. The existence and extent of the advantage is dependent on the sorts of discriminations that need to be made and, even in cases where there is a significant advantage to using an image, the image is easier to use but does not allow anything to be done that could not in principle be accomplished using other forms of data. Were this the end of the story, the conclusion that images, at least sometimes, provide an epistemic advantage would be justified. However, it is not the end of the story. While our human ability to extract information from images does often mean that images are an effective way to identify patterns or other features of significance in the data, the development of increasingly sophisticated tools for multi-voxel pattern analysis (MVPA) over the last decade provides an example where the epistemic advantage falls clearly to the numerical data and is lost in the image.
Conventional analysis of fMRI data has involved assessment of the activation status of individual voxels in an attempt to characterize the relationship between some particular cognitive function and particular areas of the brain. After identifying areas in which multiple voxels are activated, data in a ROI (region of interest) are normally spatially smoothed and activity is averaged across the voxels in the ROI. In this way, spatially extended areas of activation are identified, but sensitivity to more fine-grained patterns of activation is lost. So too is any information carried by voxels with sub-threshold levels of activation.23 Thus, although these techniques are effective at reducing noise, they are also very effective at reducing signal sensitivity. MVPA methods are particularly valuable in that they allow for increased sensitivity by searching for patterns of change in activity across all voxels, including voxels that do not show a statistically significant change in response to experimental conditions.24 This approach uses pattern-classification algorithms (trained on independent data sets) to decode information that is present in any changes in these patterns under experimental conditions. These methods are complex but their details need not be elaborated upon for the point I wish to make.25 The crucial point is that these patterns are not available in the data displayed as images. Indeed, it is notable that papers making use of these methods (e.g. Kamitani and Tong 2005; Haynes and Rees 2005; Haynes et al. 2007; Kriegeskorte et al. 2008; Pereira et al. 2009) do not use the photograph-like images that are predominant in other publications. The univariate tools that are used to generate images such as the one in Fig. 2 effectively eliminate the possibility of identifying the majority of the information that MVPA tools can extract using the (nearly) raw numerical data.
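The point can be illustrated with a toy version of pattern classification. The sketch below uses a simple nearest-centroid classifier on synthetic data (all sizes, effect magnitudes, and noise levels are invented); each individual voxel's condition effect is small relative to its noise, yet the distributed pattern across voxels is readily decoded:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI" data: 50 voxels, two conditions (A and B). Each
# voxel's mean differs only slightly between conditions, but the small
# differences form a consistent spatial pattern.
n_vox, n_train, n_test = 50, 20, 20
pattern = rng.choice([-0.1, 0.1], size=n_vox)  # per-voxel condition effect

def trials(n, sign):
    """Simulate n trials: the condition pattern plus voxel-wise noise."""
    return sign * pattern + rng.normal(0.0, 0.5, size=(n, n_vox))

train_a, train_b = trials(n_train, +1), trials(n_train, -1)
test_a, test_b = trials(n_test, +1), trials(n_test, -1)

# Nearest-centroid pattern classifier, trained on independent data.
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def classify(x):
    d_a = np.linalg.norm(x - centroid_a)
    d_b = np.linalg.norm(x - centroid_b)
    return "A" if d_a < d_b else "B"

hits = sum(classify(x) == "A" for x in test_a) \
     + sum(classify(x) == "B" for x in test_b)
accuracy = hits / (2 * n_test)
# The condition is decodable from the multi-voxel pattern even though
# no single voxel shows a large effect on its own.
```

Published MVPA work uses more sophisticated classifiers (e.g. linear support vector machines) and careful cross-validation; the nearest-centroid rule here is only meant to show that information carried by individually sub-threshold voxels can be recovered from the numbers while remaining invisible in a thresholded image.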
There are clear reasons that explain the preference for using photograph-like images as the output for most biologic and medical imaging technologies. While the general idea that “a picture is worth a thousand words” is supported to some extent by the range of contexts in which images do provide an advantage in terms of the accessibility of the information contained within them, this rule has important exceptions, as demonstrated by the example of MVPA. These technologies produce mathematical data, a fact disguised by the presentation of their output as images. In many cases, the raw or near-raw data provides advantages that are lost in some of the later processing steps that precede the production of the image. Thus, the cognitive accessibility of the image does not provide the epistemic advantage it is often presumed to confer.
Julie Fiez (personal communication) reports that it is her experience that preferences for photograph-like images vs. other forms of data display vary considerably among researchers in her field.
However, Daston and Galison (1992) point out that what counts as objectivity has changed over time and is reflected in the practices of scientific image-making. It is not always clear which objects, if reliable images are produced of them, serve as reliable information about some phenomenon or feature of the world. For instance, in examining the number of immune cells of a particular type that are present in different layers of the skin in normal as opposed to scar tissue, should I always photograph and count random fields of view, or ought I instead to require that fields be randomly selected but meet some additional criteria—perhaps that they not contain any tears or that any tear not cover more than a certain percentage of the area of the field? Some such criteria undoubtedly enhance the reliability of the data, but it is not obvious just what sorts of constraints there are on the criteria that count as legitimate.
This need for some knowledge or interpretive framework to see something as, for instance, a face, rather than just seeing a mixture of different colored areas was noted by Hanson (1965). Kuhn (1970) also discusses seeing versus seeing as in the context of the theory-ladenness of observation.
Photoshop and other types of digital manipulation have obviously changed this to some extent, but we still tend to take photographic images presented as evidence by credible sources to be reliable.
It may be unreliable for certain purposes, but even then we can usually identify the purposes for which it is unreliable (e.g. for discriminating between green and blue eyes).
For instance, they would be unlikely to identify a photograph where all the cells in the tissue were uniformly stained as indicating a problem with the staining technique.
Hereafter I will use the term “photograph” to refer to actual photographs as well as other images such as X-rays that are produced by processes that bear a physical similarity to optical photography.
An additional difficulty in identifying photographs as reliable or not is digital manipulation of images after their initial production. This is a very real concern today, given the ease with which images can be altered, and has been addressed in several recent pieces in Science and Nature (Ottino 2003; Pearson 2005; Greene 2005). Part of the difficulty lies in establishing what degree or type of manipulation is legitimate (i.e., does not compromise the truthfulness or reliability of the data and may, in fact, aid the viewer in making relevant discriminations) and what constitutes fraud or misrepresentation of what was originally perfectly reliable data. These sorts of conditions on the selection and manipulation of data, however, are not specific to images but apply to all sorts of data production methods.
More accurately, this constraint is enforced not only by the instrument (which has defined start and end points, as described for the case of PET), but also by the experimental set-up, including elements upstream of the instrument.
“Correct” here refers to the discrimination(s) needed to answer the question of interest in any given case. It will sometimes be the case that display formats that make some features of the data more easily discriminable by the user also obscure, or make it impossible to discriminate, other features. For instance, if what is needed to answer a particular question is the ability to discriminate relatively small differences within a specific, limited range of intensity values, a different pseudocolor may be assigned to small intensity intervals within this range and larger intervals outside of it. This will, in effect, visually eliminate some differences that occur outside of the intensity range of primary interest. This is, of course, purely a matter of the representation-user relationship; the differences are not eliminated from the numerical data (the object-representation relationship is unchanged by this sort of manipulation).
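The effect of such a binning choice can be illustrated with a small sketch. All bin edges, intensity values, and color names below are invented for illustration; they are not drawn from any actual PET display pipeline:

```python
# Hypothetical sketch: a pseudocolor lookup table with narrow bins inside a
# range of interest (here 60-80) and one coarse bin on either side of it,
# illustrating how a display choice can sharpen some discriminations while
# visually collapsing others. All numbers and color names are illustrative.

def make_pseudocolor_map(edges, colors):
    """Return a function mapping an intensity value to a color label.

    edges  -- sorted upper bin boundaries
    colors -- one color per bin, plus a final color for values >= edges[-1]
    """
    assert len(colors) == len(edges) + 1
    def lookup(value):
        for edge, color in zip(edges, colors):
            if value < edge:
                return color
        return colors[-1]
    return lookup

# Narrow 5-unit bins inside the 60-80 range of interest; everything below
# 60 falls into one coarse bin, everything at or above 80 into another.
edges = [60, 65, 70, 75, 80]
colors = ["black", "blue", "green", "yellow", "red", "white"]
to_color = make_pseudocolor_map(edges, colors)

# Small differences inside the range of interest remain discriminable...
print(to_color(62), to_color(72))   # blue yellow
# ...while a much larger difference outside it is visually eliminated.
print(to_color(10), to_color(55))   # black black
```

Note that the underlying intensity values 10 and 55 remain distinct in the numerical data; only their displayed colors coincide, which is precisely the representation-user point at issue.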
This will not be true of all other types of instruments. An instrument that uses X-ray or photographic film as the detector, for instance, does not first represent the data in numerical form. The image format in such cases is not optional in the sense that it is with something like PET. The data can be converted to numerical format (e.g. by scanning or otherwise digitizing the image) and then represented in other formats, but in this case it is not strictly accurate to claim that the same data is displayed as an image or in other forms.
Wimsatt (1991) similarly identifies visual representations as the simplest and most inferentially productive means of analyzing multidimensional data and processing information about motion.
This term (or “revolution”) is used in almost every paper that makes reference to the period that is usually taken to begin with the discovery and cloning of green fluorescent protein (GFP).
This is no longer true, however, since X-ray data is now often collected digitally rather than on photographic film.
Along similar lines, in discussing the representational style of electron micrographs, Rasmussen (1997) claims that it was strongly influenced by the way that previous types of cytological images were presented.
The idea that we like to create pictures that resemble the world around us can be traced back as far as Aristotle’s claim that humans have a natural tendency toward mimetic activity (Poetics xxx).
Though awareness of the extent to which photographs are digitally manipulated has undoubtedly reduced the degree to which photographs are seen as reliable and objective representations of the world.
Julie Fiez (Associate Professor, Departments of Psychology and Neuroscience, University of Pittsburgh), personal communication.
The same information could be extracted from the numerical data but it would almost certainly require using a computer and some sort of pattern recognition tool to identify the required features. Except perhaps in very simple cases, we could not easily identify these features from numerical PET data on our own.
Trying to minimize the range of values represented by each color by using as many colors as the human visual system can discriminate would only reduce the ease with which we could make any discriminations.
Voxels which have sub-threshold responses and do not meet conventional criteria for statistical significance have been shown to contain useful information about the cognitive tasks under investigation. For example, see Haxby et al. (2001), Friston et al. (2008), and Ramsey et al. (2010).
Depending on the experimental question and design, it is often possible to apply MVPA methods to data which has undergone only standard pre-processing (slice-scan-time correction, motion correction, and trend removal) with no intervening univariate analysis (Mur et al. 2009).
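The multivariate point in these two notes can be conveyed with a deliberately simplified sketch. The voxel values, condition labels, and the nearest-centroid rule below are all invented for illustration; published MVPA studies typically train linear classifiers (e.g. support vector machines) on real preprocessed data:

```python
# Illustrative sketch of the multivariate idea behind MVPA: no single voxel
# separates two conditions by a large (supra-threshold) margin, but the
# joint pattern across voxels discriminates them. A toy nearest-centroid
# classifier stands in for the linear classifiers typically used.

def centroid(patterns):
    """Element-wise mean of a list of equal-length voxel patterns."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def nearest_centroid(pattern, centroids):
    """Return the label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# Toy training patterns: each voxel differs between conditions A and B by
# only a small (sub-threshold) amount, in different directions per voxel.
cond_a = [[1.0, 0.9, 1.1, 1.0], [1.1, 1.0, 1.2, 0.9]]
cond_b = [[0.9, 1.1, 0.9, 1.1], [1.0, 1.2, 1.0, 1.2]]
centroids = {"A": centroid(cond_a), "B": centroid(cond_b)}

# A new pattern carrying the condition-A signature is classified correctly,
# even though no individual voxel difference is large on its own.
print(nearest_centroid([1.05, 0.95, 1.15, 0.95], centroids))  # A
```

The point of the sketch is only that the discriminating information lives in the distributed numerical pattern, which a univariate, voxel-by-voxel threshold (and hence the thresholded image) would discard.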