Studies of visual illusions in both humans and nonhumans – and resulting cross-species comparisons – are extremely important: Because accurate processing of information from the external world is critical to survival, susceptibility to illusions might seem counter-productive, and thus the reasons for, and origins of, its existence are of considerable interest. Such susceptibility has been found to be widespread in humans, with many explanations put forth for its cause and possible function (e.g., Nakayama et al., 1995). Considerable effort has also been expended in examining responses to visual illusions in a wide variety of nonhuman species, for precisely the reasons given by Lõoke et al. (2022).

Kanizsa illusory figures are among those most often studied – the visual part of our brain views, for example, a specific pattern of three black pac-men on a white surface, ignores the gaps among them, and completes contour lines to form what we see as a triangle, but one that is nonexistent. Precisely because Kanizsa figures do not physically exist, studying how they may be interpreted by nonhumans can provide valuable information on visual processing across species (e.g., Nakayama et al., 1995).

But how can scientists determine exactly what a nonhuman experiences? In most instances, we cannot simply ask nonhumans to describe what they are processing (but see Pepperberg & Nakayama, 2016). Most studies on nonhumans use an experimental design similar to that of Lõoke et al. (2022): Subjects are first trained to distinguish a real object that resembles the targeted illusory figure from one or more objects that differ from that figure in various ways. Subjects are then tested on their ability to choose the illusory figure over one constructed of disordered pac-men that do not represent any of the figures involved in training (see review in Pepperberg & Nakayama, 2016). The validity of the results thus depends on the stimuli used for training and testing; that is, care must be taken with experimental design so that the results of such studies on visual illusions in nonhumans are compelling. For that reason, several concerns arise about the design of Lõoke et al.’s (2022) study that could lead to a reappraisal of the strength of their conclusions with respect to what dogs (Canis lupus familiaris) do or do not process regarding Kanizsa figures.

The most important issue in the Lõoke et al. (2022) study is the likelihood that the initial training could have skewed the results; that is, that the dogs were not actually susceptible to the illusion but, because of their training experience, simply responded to some familiar, local cue. Specifically, the authors used an equilateral triangle as a positive training stimulus, and ten additional shapes as negative stimuli: a square, a cross, a circle, an arch, a half-moon, a heart, a hexagon, and the letters “L,” “T,” and “C.” Such training might have led the dogs, when tested, simply to search for the local cue of an apical 60° angle. Other than possibly the heart, none of the negative stimuli included anything resembling a 60° angle, and in the heart (for which no example is given, though in the most commonly used representation the angle is 90°, not 60°) that angle would be rotated 180° from the apical position in the Kanizsa figure. Had the authors included a shape such as a six-sided star as a negative stimulus, they could have avoided this problem.

The ability to evaluate the authors’ conclusion is also impeded by the lack of detailed information provided about the test stimuli and the dogs’ responses to these stimuli. Each dog made some errors; seeing exactly which types of figures they confused with the correct response might be enlightening. Lacking access to a figure demonstrating all ten possible orientations and locations of the rotated pac-men in the test situation, and another figure or table giving the exact numbers of responses to each test figure by each subject, a reader cannot eliminate the possibility that the subjects were searching for an apical 60° angle. Did test figures include a non-triangle with an apical 60° angle and, if so, was that the figure on which the dogs most commonly erred? The authors briefly note the importance of local versus global cues, but do not fully address this issue in their study.

What is also unclear is the connection between extended training and test scores. The authors note that some dogs needed retraining during testing. However, the identity of these dogs is not presented, making it impossible for a reader to determine whether a link exists between retraining and test scores. Did the dog that failed require retraining during the test phase? If so, possibly its overall understanding of the task was less strong than that of the other subjects. Given that at least one dog that needed retraining did succeed, possibly even more emphasis was placed on its response to the apical 60° angle. This additional information is needed to evaluate the results of the study, particularly given that any time a dog needed retraining – i.e., made multiple mistakes on the training stimuli during a test – the test was eliminated from consideration.

Another concern with the lack of detail provided about the test stimuli involves the issue of replication. Replication of this project – and the ability to extend the protocol to additional subjects or species (e.g., brachycephalic dogs, as suggested by the authors) – is not possible because critical information is missing. Given concerns about replicating the results of other studies in the behavioral literature, greater transparency is needed.

It is noteworthy that a Grey parrot (Psittacus erithacus) identified Kanizsa figures by vocally stating what it had observed (Pepperberg & Nakayama, 2016), a finding not cited by Lõoke et al. (2022). The Pepperberg and Nakayama study is one of the few to investigate avian illusory perception. Notably, the parrot’s perception of illusory figures was not limited to Kanizsa triangles but extended to several other illusory figures. Thus, one could not argue that the parrot was fixated on specific local angles or other non-essential aspects of the stimuli. Note that Pepperberg and Nakayama also present a detailed critique of various earlier experiments involving the techniques used by researchers such as Lõoke et al. for studying optical illusions in nonhumans.

In sum, the issues reviewed here are not meant to argue that the dogs studied by Lõoke et al. (2022) could not have been susceptible to the Kanizsa illusion but merely to contend that, lacking appropriate supporting evidence, the data do not justify the conclusions offered by Lõoke et al. Given the material presented in their paper, it is possible to argue that the dogs could have responded based on a simplistic heuristic (“look for the apical 60° angle”) learned during training rather than based on their susceptibility to the actual visual illusion. Presentation of additional material to support the authors’ claims would strengthen their case significantly, particularly given that they argue for the superiority of their findings compared to previous studies on illusory figures in dogs (e.g., Byosiere et al., 2019, 2020). Indeed, failure to find evidence for such susceptibility would be even more interesting, as these illusory contours are likely processed by some of the phylogenetically oldest parts of the central visual pathway (see review in Nakayama et al., 1995), and thus the ability would be expected to be widespread. Currently, we are still left to wonder about the visual processing abilities of our canine companions.