The role of pictorial cues and contrast for camouflage

Shadows that are produced across the surface of an object (self-shadows) are potentially an important source of information for visual systems. Animal patterns may exploit this principle for camouflage, using pictorial cues to produce false depth information that manipulates the viewer’s detection/recognition processes. However, pictorial cues could also facilitate camouflage by matching the contrast (e.g. due to shadows) of 3D backgrounds. Aside from studies of countershading (patterning that may conceal depth information), the role of self-shadows in camouflage patterns remains unclear. Here we investigated whether pictorial cues (self-shadows) increase the survival probability of moth-like prey presented to free-living wild bird predators relative to targets without these cues. We manipulated the presence of self-shadows by adjusting the illumination conditions to produce patterned targets under directional lighting (lit from above or from below; self-shadows present) or diffuse lighting (no self-shadows). We used non-patterned targets (uniform colour) as controls. We manipulated the direction of illumination because it has been linked with depth perception in birds; objects lit from above may appear convex while those lit from below can appear concave. As shadows influence contrast, which also determines detectability, we photographed the targets in situ over the observation period, allowing us to evaluate the effect of visual metrics on survival. We found some evidence that patterned targets without self-shadows had a lower probability of survival than patterned targets with self-shadows and targets with uniform colour. Surprisingly, none of the visual metrics explained variation in survival probability. However, predators increased their foraging efficiency over time, suggesting that predator learning may have overridden the benefits afforded by camouflaging coloration.


Introduction
For an animal to appear camouflaged, its body coloration must generally match the colour, luminance and pattern of the background (Endler 1978; Stevens and Merilaita 2009). However, most animal body shapes are three-dimensional (3D), so camouflage must also function to conceal the body's shadows (Kelley et al. 2022). This is problematic because an animal's body shape interacts with the illumination conditions, producing shadows across the body's surface (self-shadows) and shadows that are projected onto the substrate (cast shadows). Shadows are an important source of visual (e.g. depth) information (Gregory 1990), and unless an animal can disguise these cues, it will remain detectable or recognisable (Penacchio et al. 2015). While a number of studies have revealed the importance of features such as luminance, spatial frequency and background heterogeneity for background matching (Hanlon et al. 2008; Merilaita et al. 2001; Michalis et al. 2017; Price et al. 2019), the role of depth information has received little empirical attention outside of human vision (Adams et al. 2019; Egan et al. 2016; Sharman et al. 2018) and studies of countershading (patterning that is thought to conceal depth cues; Cott 1940; Donohue et al. 2020; Penacchio et al. 2015; Poulton 1888; Thayer 1896).
The idea that depth information could be important for camouflage has a long history. Poulton (1890) suggested that prey patterning might interfere with predator 3D shape perception when he noted that the cylindrical chrysalis of the purple emperor butterfly (Apatura iris) was shaded to produce an extraordinary impression of life-like flatness. Thayer (1909) suggested that animals might use surface shading (referred to as 'countershading coloration') to counteract the effect of overhead illumination falling on a 3D surface, hence removing the cues used to recover 3D shape. The reversal of this principle, the use of pictorial cues to generate the impression of 3D form, was later suggested by Cott (1940), who proposed that shading could produce 'the illusion of surfaces differentially inclined towards the light'. The wings of Lepidoptera often have patterns of shading that (to human vision) provide depth information consistent with 3D objects (e.g., leaves) or textures (e.g., bark) present in the animal's natural surroundings. In an early exploration of such effects, Schwanwitsch (1938) reported that the wing patterns of the dewy ringlet butterfly (Erebia lappona, now E. pandrose) offered a 'remarkable picture', generating visual effects that split the wing into separate surfaces.
Pictorial depth cues are highly effective at producing a percept of depth in humans (Fig. 1A) because of the way in which brains reconstruct 3D information from a 2D retinal image. For example, ancient Greek painters from the Hellenistic period (323-31 BC) used shading to illustrate object contours and to generate the impression of scene depth on a flat canvas (Cook 1976). While animals obtain depth information in a number of ways (Howard and Rogers 2008), an important (monocular) depth cue is an object's shading (note that 'depth' can refer to the distance to an object, as well as to the distance between parts of an object, i.e., its shape). Shading can be a reliable depth cue for most visual systems, particularly under directional illumination (i.e., sunshine): light typically comes from a single, overhead light source (the sun), so the upper surfaces of objects reflect more light (i.e., appear brighter) than the lower surfaces (a notable exception is the deep sea, where illumination is provided by bioluminescence; Johnsen 2014). Correspondingly, changes in light intensity (i.e., shading) across the surface of an object provide important information on the slant of a surface, and hence on an object's shape. A classic demonstration of this is provided by changing the apparent direction of illumination (Fig. 1A): objects that are shaded as if lit from above appear as bumps (convex), while those that are shaded as if lit from below appear as dents (concave) (Ramachandran 1988).
A large variety of non-human animals are known to use shading cues to resolve object shape, including chimpanzees (Tomonaga 1998), macaques (Arcizet et al. 2009), pigeons (Cook et al. 2012), starlings (Qadri et al. 2014), and cuttlefish (Zylinski et al. 2016). For example, birds use both the direction of illumination and shading information to discriminate between 3D convex and concave surfaces (Cook et al. 2012; Firkins and Kelley 2022; Qadri et al. 2014). Outside the field of countershading coloration, surprisingly few studies have investigated whether pictorial cues are important for animal camouflage. However, there are some notable exceptions. In cuttlefish, animals placed on a background of 3D hemispheres showed similar body patterning when placed on an equivalent background of 2D shaded circles, suggesting that they perceive the shaded circles as 3D objects (Zylinski et al. 2016). Costello et al. (2020) found that prey (butterfly-like) targets with false hole markings survived for longer in the presence of avian predators than targets with intact wings. However, the false hole markings did not include pictorial cues such as edge enhancement, so the role of pictorial cues for camouflage remains unclear outside of human visual systems.
In this study, we presented moth-like targets to free-living, wild bird predators to investigate whether pictorial cues influence the survival probability of prey. Images of tree bark were printed on paper and pinned on trees to represent the camouflaged wing patterns of moths. We manipulated the pictorial information available in the targets by capturing images of the same section of bark at night, under different illumination conditions. These images were then re-coloured to match the average colour of tree bark before the targets were presented to birds under natural daytime illumination. We produced targets with pictorial cues (self-shadows; photographed under directional illumination), targets with no pictorial cues (photographed under diffuse lighting, to remove shadows), and targets with no patterning (uniform colour). Although we cannot infer how the birds perceive the targets, we included two directional illumination treatments: bark photographed when lit from above and bark photographed when lit from below. As birds can use the direction of illumination to resolve surface curvature (Cook et al. 2012; Firkins and Kelley 2022; Qadri et al. 2014), a difference in survival between these treatments could indicate that the direction of self-shadows provides visual information that influences detection/recognition processes. Pictorial cues could also increase survival through simple background matching (e.g. self-shadows could match the 2D background texture), rather than because these cues produce a percept of object depth. We therefore photographed the targets in situ to test the role of visual metrics (mean luminance, contrast and edge disruption, modelled for the visual system of avian predators) in explaining survival probability.

Generating 'moth-like' paper targets that vary in pictorial depth
Following previous studies of camouflage using artificial targets, we captured images of natural backgrounds to generate the 'wing' patterns (Michalis et al. 2017). Photographs of marri (Corymbia calophylla) bark were taken in Kings Park, Perth (Western Australia), using a tripod and a DSLR camera (Nikon 7100) fitted with a macro lens (Nikkor 60 mm). Images were captured at the same distance from each tree (~ 40 cm), and a custom-made greyscale standard was included in each image for subsequent image calibration. The average reflectance of the greyscale standard was 21% (sd = 0.59%), measured using a spectrophotometer (Ocean Optics USB4000; Ocean Insight, FL). We wanted to produce targets with the same pattern but different shadow information. Because this required controlling the illumination conditions, we performed the photography at night, using a removable macro ring flash (Voking VK-110) to alter the illumination while photographing the same section of tree bark. To create directional lighting, we removed the ring flash and photographed the same section of marri bark from 30 cm directly above (Treatment A: directional illumination from above; Fig. 1B) or 30 cm directly below (Treatment B: directional illumination from below; Fig. 1C). We then fitted the ring flash back onto the camera and photographed the bark under less directional lighting (hereafter 'diffuse'; Treatment D: no self-shadows; Fig. 1D). Treatment C had no patterning and was coloured to match average marri bark as viewed by an avian predator (see below for details).
Use of a ring flash in line with the camera lens obliterates all pictorial information based on surface angle as long as two conditions are met: (i) the surfaces have diffuse reflectance (bark is generally highly diffuse except at close viewing distances, though no surface is perfectly Lambertian), and (ii) the surfaces are all the same distance from the light source (which is true for tree bark at the spatial scale of the lepidopteran models). This 'diffuse' treatment is therefore extreme, in that it blocks even more shadow information than typical natural diffuse lighting, acting as a maximally conservative control for pattern independent of lighting anisotropy. Marri bark is highly textured, and although the furrows are predominantly vertical, changing the illumination from overhead to below caused a noticeable shift in the direction of the self-shadows at the scale of the targets (Fig. 1B & C). A total of twelve marri trees were photographed under each of the three lighting conditions.
To calibrate and re-colour the bark treatment images for the visual system of avian predators, we first needed a measure of the average colour and luminance of marri bark under natural daylight conditions. We therefore took a further series of photos of marri bark from the same location, in the shade of the trees, on a sunny day. Each of these images was linearized (with respect to the grey standard) using the open-source Multispectral Image Calibration and Analysis toolbox, MicaToolbox (Troscianko and Stevens 2015), for ImageJ (Schneider et al. 2012). This procedure yields an image in which there is a linear relationship between pixel values and light intensity, allowing for objective measures of reflectance (Troscianko and Stevens 2015). We then performed a camera calibration procedure, which produced a cone-catch model specific to our camera and the outdoor lighting conditions. We used the visual sensitivity of the blue tit (Cyanistes caeruleus) (400-700 nm) in this model because it is an insectivorous woodland bird and because the spectral sensitivities of the local Western Australian birds are not known.
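The role of the grey standard in this step can be sketched as follows. This is an illustrative outline only (MicaToolbox performs the actual calibration); the function name and example values are hypothetical, and we assume pixel values have already been linearised with respect to the camera's response curve.

```python
def to_reflectance(linear_pixels, standard_mean, standard_reflectance=0.21):
    """Anchor linearised pixel values to absolute reflectance: the grey
    standard's mean pixel value is mapped to its known reflectance
    (21% for the custom standard used here)."""
    scale = standard_reflectance / standard_mean
    return [p * scale for p in linear_pixels]
```

For example, if the grey standard averages 84 linearised units, a pixel value of 168 corresponds to roughly 42% reflectance.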
We used MicaToolbox to measure the average colour of marri bark as viewed by a blue tit (i.e., the cone catch of the long- (L), medium- (M), and short- (S) wavelength-sensitive cones, and the cone catch of the double (luminance-sensitive) cones). A printer calibration, using an iterative technique in MicaToolbox, was performed to measure and select a printed colour (using the same printer, settings and paper used for the targets) and corresponding RGB values that were as close as possible to the blue tit cone catch values of average marri bark. We then re-coloured the night-time images to this average marri bark colour (RGB values = 120, 129, 115) by applying an image multiplier to each colour channel. Samples of re-coloured bark from 11 individual marri trees (one set of images was unsuitable), lit from three different lighting angles (illumination from above, illumination from below, or diffuse illumination), were then adjusted for scale (1:1) and printed (using a Konica Minolta C658 PS colour printer) on A4 all-weather printer paper (Rite in the Rain, Tacoma, WA, USA). Triangular targets (50 mm wide × 25 mm high) were cut out of the bark images as random samples of each treatment, avoiding the edges that showed distortion due to the tree's curvature. Our use of random samples of bark from 11 different trees was intended to prevent the birds from developing a search image for any one pattern type (Pietrewicz and Kamil 1979). A set of uniform, non-patterned targets (Treatment C) was constructed by printing A4 sheets of all-weather paper (as described above) in the average marri bark colour (Fig. 1E).
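The per-channel multiplier used for re-colouring amounts to scaling each channel so that its mean matches the target value. A minimal sketch of this logic (the function and its pixel-list representation are illustrative; the published images were processed in MicaToolbox):

```python
def recolour(image, target_rgb=(120, 129, 115)):
    """Re-colour an image to a target mean colour by multiplying each
    channel by (target mean / current channel mean), clipping to the
    0-255 range. `image` is a list of (R, G, B) pixel tuples."""
    n = len(image)
    means = [sum(px[c] for px in image) / n for c in range(3)]
    mult = [target_rgb[c] / means[c] for c in range(3)]
    return [tuple(min(255, round(px[c] * mult[c])) for c in range(3))
            for px in image]
```

Because every pixel in a channel is scaled by the same factor, the within-channel pattern (relative shading) is preserved while the mean shifts to the target colour.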

Field predation experiments to determine the survival probability of moth-like targets
The field experiments were conducted in Wilga State Forest near Balingup, Western Australia, in September/October 2020 and April/May 2021. Field experiments were run as three transects (one in spring 2020, two in autumn 2021) in different locations around Balingup, giving a total of 20 blocks (transect 1: 7 blocks; transect 2: 8 blocks; transect 3: 5 blocks). To maximise the camouflage of the targets, targets were placed only on marri trees. Each transect was conducted in native forest by selecting a non-linear series of 60 marri trees situated at least 5 m apart. We ensured that transects were located > 500 m away from other habitats (e.g., forested areas, farmland) or forms of disturbance (e.g., roads). Each block consisted of 15 targets of each treatment, and the targets were labelled 1-15 to assist in randomly allocating treatments to trees (see below). At the start of each block, the targets were laid out on a pin board and a live mealworm was pinned on top of each target (see Supplementary Material 1 for images of targets photographed in the field). Our preliminary trials revealed that the birds did not consume mealworms that were dead. Mealworms were pinned approximately one-third of the way down the body from the head, in the upper middle section of each target. Although we did not note whether the mealworms were alive at each check, those that were not predated typically remained alive for the duration of the experiment. We considered the effect of the mealworm being placed on top of the target by determining whether the contrast between the mealworm and the target differed among treatments (see Results section). Targets were orientated upwards and pinned randomly with respect to the pattern of the tree bark. The first block of each transect was laid out at approximately 3 pm, and a randomisation code (custom written in R) was used to pre-assign a treatment code to each tree to prevent any potential treatment order effects.
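Our randomisation code was written in R; the logic it implements can be sketched as follows (this Python version is an illustrative reconstruction, not the original script):

```python
import random

def assign_treatments(n_trees=60, treatments=("A", "B", "C", "D"),
                      per_treatment=15, seed=1):
    """Randomly pre-assign treatment codes to trees, with equal numbers
    per treatment, so that treatment order along the transect carries
    no systematic bias."""
    assert n_trees == per_treatment * len(treatments)
    codes = [t for t in treatments for _ in range(per_treatment)]
    rng = random.Random(seed)  # fixed seed shown only for reproducibility
    rng.shuffle(codes)
    return codes  # codes[i] is the treatment for tree i
```

Re-running this assignment for each block gives every tree a fresh, random treatment while keeping the design balanced (15 targets per treatment per block).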
After setting out the targets at 3 pm, we checked them at dusk (~ 5 pm; after 2 h), at dawn (~ 7 am; after 15 h) and at midday (after approx. 21 h) the following day. At each check, the experimenter noted whether the mealworm was present (no predation) or absent (predation). When predation had occurred, a photograph was taken of the target and the bark background, and the target was removed. Our in-situ observations and footage (obtained using a motion-sensor camera trap) revealed that birds consumed the whole mealworm and left the pin intact. We observed golden whistlers (Pachycephala pectoralis), scarlet robins (Petroica boodang) and rufous treecreepers (Climacteris rufus) taking the mealworms from the targets during the experiments. While we initially attempted to record the weather during the experiment (to include as a random effect), this proved difficult because conditions were highly changeable between sunny and cloudy, even during the time (~ 1 h) it took to check the targets. This source of variation could not be avoided and will have introduced additional variance into the data.
We noted signs of activity from other predators, including any ants that were observed (small ants took > 14 h to move the mealworm) and evidence of predation by spiders (evident from the presence of a hollow exoskeleton). We found that bull ants (Myrmecia spp.) could remove whole mealworms from the pins (< 1 h), so we noted any bull ant sightings on or around the tree. At the last check (midday), all uneaten mealworms and targets were removed, marking the end of the block. The following block started at 3 pm the next day, so blocks in each transect were run consecutively. Fresh targets were placed on the same trees within each transect, but at a different random location and orientation (with respect to the sun) on each tree. We conducted a total of seven blocks for transect 1, eight for transect 2 and five for transect 3, giving N = 1194 observations.
To consider the effect of the visual metrics of the targets on survival probability, we needed images where a predation event had not occurred (i.e. mealworm still present), as well as images where a predation event had occurred. We obtained these additional images by photographing a sub-sample of 20 targets per block (n = 5/treatment) for targets where predation had not occurred. This process was randomised by photographing all target codes that were a multiple of three (i.e., targets 3, 6, 9, 12 & 15 in each treatment group), resulting in a total of 621 images. Photographs were taken using a Nikon DSLR 7100 fitted with a 60 mm Nikkor macro lens. Images were captured at approximately 60 cm from the tree, and a 20% greyscale standard was included in each image for subsequent image linearization (see below). We also ensured that a large proportion (> 75%) of the bark background was present in the image, allowing us to determine whether any differences in visual information (e.g., luminance of the bark) might account for variation in the survival probability of the targets.

Analysis of target images and bark backgrounds
We used MicaToolbox (Troscianko and Stevens 2015) to linearise the raw images from the camera, generating a calibrated image with normalised reflectance for each channel (red, green, blue). Each image was then scaled for size (using the standard present in each photograph). We selected two regions of interest (ROIs) for subsequent analyses: the area of the target and a sample of the bark background. The latter was selected to be close to the target, ensuring that there was no shadowing from surrounding vegetation. The calibrated image was then converted to a (chart-based) cone-catch model, using the blue tit as the observer and average daylight (D65; 400-700 nm wavelength range) as the illuminant. The ROIs were then measured from the cone-catch image, yielding luminance values (means and standard deviations) based on the spectral sensitivity of the blue tit's short, medium and long wavelength-sensitive cones, and of the luminance-sensitive double cones (Osorio and Vorobyev 2005).
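Conceptually, a cone-catch value weights a surface's reflectance spectrum by a receptor's spectral sensitivity and the illuminant. A minimal sketch of this standard calculation (MicaToolbox performs it per pixel from the calibrated image; the coarse three-band spectra in the example are purely illustrative):

```python
def cone_catch(reflectance, sensitivity, illuminant):
    """Quantum catch of one cone class: the reflectance spectrum weighted
    by the receptor's spectral sensitivity and the illuminant, summed over
    wavelength bands and normalised so a perfect white reflector gives 1."""
    num = sum(r * s * i for r, s, i in zip(reflectance, sensitivity, illuminant))
    den = sum(s * i for s, i in zip(sensitivity, illuminant))
    return num / den
```

A spectrally flat grey surface (e.g. 50% reflectance at every wavelength) thus yields a catch of 0.5 for any cone class under any illuminant.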
We also measured the level of GabRat edge disruption in the target ROI, since previous studies (with human vision) have found that this metric is a useful predictor of survival time (Troscianko et al. 2017). Briefly, GabRat uses a Gabor filter (a measure of the spatial frequency of the image) to calculate the ratio of the false edges present in a pattern (edges that intersect the prey's body outline) to the real edges (edges that lie parallel with the body outline). Higher GabRat values indicate more false edges, which should make prey harder to detect (Cuthill et al. 2005).

Differences in survival probability among treatments
We used survival analyses to test the effect of treatment on survival time. Survival time was the time (in hours) between placing a target on the tree and the check at which an event was recorded. Data were censored (censoring retains information about an item's survival up to the point at which observation ended) if the mealworm was present at the last check (i.e. not predated), if the target was missing (birds sometimes removed the target and pin; both were always relocated at the base of the tree), or if the mealworm was taken by a non-avian predator (e.g., ant, spider). We used the coxme package (Therneau 2020) in R (R Core Team 2018) to fit mixed-effects Cox proportional hazards regression models, entering treatment (4 levels) as the independent variable, and survival time and censorship status (0: censored, 1: predation event) as the dependent variable.
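The censoring rules above can be summarised as a small encoding step. This sketch (with hypothetical status labels; our actual models were fitted in R with coxme) shows how each observation would enter the Cox model as a (time, event) pair:

```python
def survival_record(status, check_time, final_check=21):
    """Encode one target for the survival model.
    status: 'predated' (bird took the mealworm, detected at check_time),
            'other'    (ant/spider predation or missing target -> censored),
            'survived' (mealworm present at the final check -> censored).
    Returns (survival_time_h, event), with event = 1 for avian predation."""
    if status == "predated":
        return (check_time, 1)
    if status == "other":
        return (check_time, 0)
    return (final_check, 0)
```

Censored targets still contribute information: a target censored at 15 h is known to have survived at least that long, which the partial likelihood uses directly.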
To account for our sampling design, we included tree identity (180 levels; to account for repeated sampling of the same trees) as a random (intercept) effect. Transect (3 levels) and block (20 levels) were also included as random effects. We tested for an effect of treatment using likelihood ratio tests (LRTs) to compare the full model (containing the effect of treatment) to a model that contained only the random effects. In these regression models, a positive coefficient (β) is associated with increased hazard (lower survival probability) relative to the baseline, while a negative coefficient represents decreased hazard (higher survival probability). The exponential of the coefficient is the hazard ratio (HR, or effect size), which represents the ratio of hazards between two treatments, in this case from avian predation. The exponential of the standard deviation of a random effect provides a relative risk score associated with that effect (Therneau 2020).
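As a worked example of the coefficient-to-hazard-ratio conversion:

```python
import math

def hazard_ratio(beta):
    """Convert a Cox model coefficient to a hazard ratio; values above 1
    indicate higher hazard (lower survival) relative to the baseline."""
    return math.exp(beta)

# e.g. beta = 0.31 gives HR ~ 1.36, i.e. roughly a 36% higher
# risk of predation than the baseline treatment
```

The percentage increase in risk reported in the Results is simply (HR - 1) × 100.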
We tested the assumptions of the survival models for the fixed and random effects using Cox proportional hazards models. While the effects of treatment and transect met the assumption that relative hazard remains constant over time, this was not the case for block, because birds increased their foraging efficiency over successive blocks. Diagnostic plots of the deviance residuals also indicated that these models were suboptimal given the structure of the data. We therefore conducted a second series of analyses using generalised linear mixed models (GLMMs) in the lme4 package (Bates et al. 2015), using the censored data (i.e., survival status at 21 h) as a binomial response variable. The fixed and random effects were entered as described for the survival models, and we tested the significance of the main effects using LRTs.

Visual metrics of the targets
We considered whether the visual metrics influenced survival times, and how they differed among the treatments, using a reduced dataset (n = 621 observations) containing the image data. These data comprised all targets that had experienced predation (n = 536) and the smaller, random sample of those that had not (n = 85). Using the same modelling procedure described above, we entered the effects of the difference in mean luminance between the target and the background (= mean target luminance - mean background luminance) and the contrast between the target and the background (= SD of target luminance - SD of background luminance). We used these metrics because visual systems detect objects based on differences in luminance (i.e. contrast) rather than on absolute luminance (Land and Nilsson 2012). Negative values were multiplied by -1 to give absolute values for each metric. We also considered the GabRat (edge disruption) of the target in the luminance channel. Because birds are thought to use separate photoreceptors for luminance and colour vision (Osorio and Vorobyev 2005), luminance was calculated as the cone catch of the blue tit's double cones.
The covariates were the difference in mean luminance between the target and the background, the contrast between the target and the background, and the GabRat (edge disruption). These three covariates were log10-transformed to improve their distributions and the fit of the models. Each term was added as a covariate to the model and compared to a baseline model (using LRTs) that contained the fixed effect of treatment and the random effects of block and transect. The random effect of tree ID had very low variance (< 0.001) and was excluded from these analyses. We also considered whether the presence of the mealworm may have influenced survival probability by determining whether the difference in mean luminance or contrast (both absolute and log-transformed values) between the mealworm and the target differed among treatments. The full results of the statistical models are given in Supplementary Material 2.
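The two luminance-based covariates can be sketched as follows. This is an illustrative reconstruction (the actual values were derived from double-cone catch images in MicaToolbox), and it assumes the sample standard deviation:

```python
import math

def visual_metrics(target_lum, background_lum):
    """Compute the two luminance covariates from per-pixel double-cone
    catch values: |mean difference| and |SD difference| (contrast),
    each log10-transformed as entered in the models."""
    def mean(xs):
        return sum(xs) / len(xs)

    def sd(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

    lum_diff = abs(mean(target_lum) - mean(background_lum))
    contrast = abs(sd(target_lum) - sd(background_lum))
    return math.log10(lum_diff), math.log10(contrast)
```

Taking absolute values before the log transform mirrors the procedure described above (negative differences multiplied by -1); note that a target perfectly matching its background would give a difference of zero, for which the log is undefined, so in practice such values would need special handling.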

Effect of pictorial depth cues on survival probability
There were N = 869 predation events out of a total of N = 1194 observations, with target survival decreasing from 71% after two hours (95% CI: 69-74%), to 41% after 15 h (CI: 38-43%), and to 25% (CI: 23-28%) after 21 h. Patterned targets without self-shadows (D: diffuse illumination) had a 22.4% higher risk of predation (β = 0.23, se = 0.10, z = 2.27, P = 0.023) relative to those with pictorial depth cues (Treatment A: illumination from above; Fig. 2), but the model containing the treatment effect was not a significantly better fit than a model containing only the random effects (LRT: Chisq = 5.61, df = 3, P = 0.13; see Supplementary Material 2 for the full model output). Survival probability varied over consecutive days (blocks), with survival being highest in the first two blocks and lowest in the intermediate blocks (five and six) (Fig. 3). The GLMMs revealed similar findings; while Treatment D (no self-shadows: diffuse illumination) had a significant effect in the model (β = 0.47, se = 0.21, z = 2.22, P = 0.027; Supplementary Material 2), the overall effect of treatment was non-significant when compared with the null model (LRT: Chisq = 4.84, df = 3, P = 0.18).

Effect of visual metrics on target survival
Analysis of the reduced dataset, comprising visual information on the targets and backgrounds (N = 621 observations), found that the model containing the fixed effect of treatment was significantly better than the null model containing only the random effects (LRT: Chisq = 9.14, df = 3, P = 0.028). Specifically, patterned targets without self-shadows (D: diffuse illumination) had a 36% higher risk of predation (β = 0.31, se = 0.12, z = 2.49, P = 0.013) relative to those with self-shadows (Treatment A: illumination from above), but there was no difference in risk for Treatments B (illumination from below: β = 0.01, se = 0.13, z = 0.11, P = 0.91) or C (uniform colour: β = -0.01, se = 0.13, z = -0.04, P = 0.97) relative to Treatment A.

Differences in the visual metrics of the targets among treatments
The difference in mean luminance between the targets and the bark background varied among treatments (F3,617 = 4.56, P = 0.004; Fig. 4a); treatment C (uniform bark colour) had a higher difference in mean luminance than treatments A (illumination from above; luminance difference = 0.08; P = 0.004) and D (diffuse illumination; luminance difference = -0.07; P = 0.014) but did not differ from B (illumination from below; luminance difference = 0.06, P = 0.10). Contrast with the background also differed among treatments (F3,617 = 20.4, P < 0.001; Fig. 4b); all treatments differed from one another (P < 0.05), except treatments A and B (difference in contrast = -0.07, P = 0.58). GabRat values were lower for treatment C than for the other treatments (F3,617 = 90.5, P < 0.001; Tukey: P < 0.05; Fig. 4c). There was no difference among treatments in the difference in mean luminance between the mealworm and the target (F3,36 = 1.38, P = 0.27), but there was a significant difference in mealworm-target contrast between treatments C and B (F3,36 = 3.71, P = 0.020; Tukey P = 0.01), though not among the other treatments (all P > 0.05).

Discussion
We found no evidence that pictorial cues play a role in camouflage, and no evidence that the direction of prey self-shadows provides important information for object detection/recognition. There was also no effect of the visual metrics (either when tested alone or in addition to the effect of treatment) on survival probability. This latter finding is surprising because it is well established that contrast is important for background matching and that high-contrast prey are more detectable than low-contrast prey (Donohue et al. 2020). However, the success of any given prey's camouflage strategy depends not only on visual detection and recognition (sensory processes) but also on predator learning and search images (perceptual processes) (Stevens et al. 2012; Troscianko et al. 2013).
That we found no effect of the visual metrics on survival suggests that our experimental design may have inadvertently incorporated the perceptual components of camouflage rather than detection probability alone. Because targets were placed (in randomised positions) on the same trees in each block, the birds quickly learned which trees to focus their foraging efforts on. This is also evident in the large differences in survival among (repeated) blocks of the experiment. Importantly, several studies have found that predators may develop search images for some prey camouflage patterns more readily than others (Stevens et al. 2012; Troscianko et al. 2013, 2018, 2021). For example, Stevens et al. (2012) found that prey with distractive markings had a faster decline in detection times (indicative of more rapid learning) than controls. Similarly, Troscianko et al. (2013) found that (human) predators learn most efficiently when multiple prey are presented simultaneously (as in the current study) and when prey are high contrast. Thus, any benefits of being high contrast (i.e. with self-shadows) may have been offset by the predators' enhanced ability to learn these phenotypes.
Our experimental design also incidentally incorporated other sources of variance, including the use of random and unique samples of tree bark for the targets to avoid pseudoreplication (rather than an idealised pattern design for each treatment). In addition, the work was performed in outdoor conditions, where illumination varies during the day and with changes in the weather. This is important because our treatments (specifically treatment A: self-shadows) assume directional, overhead illumination, which would only occur under sunny conditions at midday. Nonetheless, we found some evidence that, independent of background matching and disruptive coloration, patterns without shadows (diffuse illumination) had the lowest probability of survival of all the targets.
Patterns without self-shadows were rapidly found by the birds, perhaps because some component of the visual metrics that we did not measure made them more detectable. While patterned targets without self-shadows had reduced contrast variation relative to patterned targets with self-shadows, the lowest contrast variation was observed in non-patterned targets, which did not experience significantly greater levels of predation. While it is surprising that targets without patterning (uniform colour) survived as well as patterned prey, other studies have found that disruptive patterning can be equally as effective as matching the average coloration of the background (Fraser et al. 2007). However, we again note that the heightened foraging efficiency of the birds likely dominated our findings and lessened any potential effects of camouflage.
We found no evidence for the idea that, if the direction of self-shadows provides important visual information, then prey with bark patterning illuminated from above (i.e. that might appear convex to viewers that resolve shape from shading) should have higher survival than those illuminated from below (i.e. that could appear concave). While adjusting the illumination direction is commonly used to investigate whether shadows provide depth information for birds (Cook et al. 2012; Firkins and Kelley 2022; Qadri et al. 2014), it is likely that these principles are less applicable to textured surfaces such as tree bark. This is because the fissures of marri bark (C. calophylla) are primarily vertical, so that the 'lit from above' and 'lit from below' conditions would produce similar luminance information (see Supplementary Material 3 for an illustration). Another criticism of our approach could be that neither of these treatments would match the background very well, because the shadows differ between the target and background, irrespective of any role of depth perception. However, our targets were not designed to be background matched; we were specifically interested in comparing the survival of prey with and without self-shadows. Nonetheless, it is also feasible that there is an interaction between self-shadows and background matching, perhaps because the role of self-shadows in camouflage depends on the structure of the background (e.g. surface relief) and/or the illumination conditions (e.g. directional versus diffuse illumination). However, our field-based study could not experimentally control these factors.
To manipulate the prey's self-shadows for the same surface, we had to perform the photography at night under artificial illumination. However, natural light is less directional than artificial illumination and also varies during the day, so that Treatment A (lit from above) would only be optimally matched to the bark background under overhead illumination (i.e. around midday). The same principles apply when considering how countershading colouration functions for camouflage via self-shadow concealment: variation in illumination conditions (weather, time of day, etc.) means that optimal camouflage would only be achievable for a short period of time (Kiltie 1988). While prey with countershading coloration optimised for the lighting conditions have the highest survival under those specific illumination conditions (Cuthill et al. 2016), prey with sub-optimal patterning still have a survival advantage over non-patterned prey (Penacchio et al. 2017). In nature, for camouflage strategies that rely on self-shadow concealment or pictorial cues, a perfect match between coloration and illumination conditions is biologically implausible.
The treatment patterns may have influenced predator learning in this experiment if it was easier for predators to form a search image for some pattern phenotypes (e.g. patterned targets without shadow cues) than for others. Although our experimental design attempted to reduce this possibility, visual metrics such as colour matching and edge disruption can influence the detection probability of phenotypes that predators have previously encountered (Troscianko et al. 2021). Studies of learning in the context of aposematic coloration have found that while colours seem to be learned most rapidly, the presence of patterning may provide additional information that aids discriminatory ability (Aronsson and Gamberale-Stille 2012). Patterns with high-contrast features may initially deter naïve avian predators (Halpin et al. 2020), yet studies with humans as predators have found that high-contrast disruptive patterns have decreased detection rates but are learnt more quickly than low-contrast patterns, leading to a trade-off between detection and learning (Troscianko et al. 2013). Differential learning would amplify differences in the benefits of camouflage among treatments (detection of some patterns would be enhanced through the formation of treatment-specific search images), but given the lack of effect of visual metrics on survival, this remains speculative.
Animals can obtain depth information from multiple sources, including stereopsis (retinal disparity), accommodation (optical focusing cues), motion parallax/optic flow, occlusion, and pictorial cues (Howard and Rogers 2008). These different sources of depth information are fundamental for the visual guidance of movement, such as striking at prey or escaping from predators. However, stereopsis is a rare strategy, and passerines are likely to use motion parallax, accommodation, and textural cues to search for prey. In the context of depth information and camouflage, there is intriguing evidence for a role of depth perception in the camouflage patterns adopted by cuttlefish. Individuals show a unique colour change ('3D body pattern') in response to 3D objects in a scene, which differs from their colour change response ('Disruptive' pattern) to a 2D background comprising the same visual information (El Nagar et al. 2021). However, whether this patterning produces a percept of depth in cuttlefish predators (or prey) remains unknown. Demonstrating that pictorial depth cues enhance camouflage by invoking a perception of depth (Egan et al. 2016; Sharman and Lovell 2019), rather than by matching depth cues in the background, is a significant future challenge but will undoubtedly increase our understanding of the importance of depth information for animal camouflage strategies.

Fig. 1 Pictorial cues, such as self-shadows, can produce the impression of 3D form. (A) Artists use shading to produce the illusion of a sphere on a flat surface (left image). The assumption that light comes from overhead means that objects with bright upper surfaces are typically perceived as convex 'bumps' (middle image), while those with darker upper surfaces (right image) are usually perceived as concave 'dents'. These basic principles were used in the current experiment to manipulate self-shadow information by photographing the same section of bark under different illumination directions. (B) Section of marri bark lit from above. (C) The same section of bark lit from below. (D) Bark photographed under diffuse lighting, which removes shadow cues. Images (B)-(D) were re-coloured to match the average bark colour. (E) Average colour of marri bark as viewed by an avian predator

Fig. 2 Kaplan-Meier survival curves showing the probability of survival over the 22-hour observation period for moth-like targets presented to free-living avian predators. The targets had wing patterning with self-shadows (Treatment A, blue line: illuminated from above; Treatment B, pale green line: illuminated from below), no self-shadows (Treatment D, purple line: diffuse illumination), or no patterning (Treatment C, dark green line: uniform bark colour). The total number of predation events for each treatment was 209, 211, 224 and 226 for Treatments A, B, C and D, respectively
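For readers unfamiliar with the method, the Kaplan-Meier (product-limit) estimator underlying such survival curves can be sketched as follows. This is a minimal illustration of the estimator itself, not the authors' analysis code; the function and variable names are hypothetical:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of survival probability.

    times  : observation time for each target
    events : 1 if the target was predated at that time,
             0 if it survived to the end of observation (censored)
    Returns a list of (time, estimated survival probability) pairs.
    """
    curve, s = [], 1.0
    # Step through the distinct event (predation) times in order
    for t in sorted({ti for ti, ei in zip(times, events) if ei == 1}):
        at_risk = sum(1 for ti in times if ti >= t)   # targets still present
        deaths = sum(1 for ti, ei in zip(times, events)
                     if ti == t and ei == 1)          # predated at time t
        s *= 1 - deaths / at_risk                     # product-limit update
        curve.append((t, s))
    return curve

# Example: five targets, two of which survive the observation period
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
print([(t, round(s, 3)) for t, s in curve])  # → [(1, 0.8), (2, 0.6), (3, 0.3)]
```

Censored observations (targets surviving to the end of the 22-hour period) still contribute to the at-risk count at earlier times, which is why the estimator is preferred over a naive proportion for this kind of field data.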

Fig. 4 Differences in absolute mean luminance (A), absolute contrast (B) and GabRat values (C) between the treatment targets (A-D) and the bark background. Significant differences between treatments in (A) are denoted with asterisks (*P < 0.05; **P < 0.01; ***P < 0.001). For (B), there were significant pairwise differences among all treatments except for A and B, and in (C), Treatment C had significantly lower GabRat values than the other targets (post-hoc Tukey tests; P < 0.05). Note that outliers (> 3 standard deviations above the mean) were removed when producing these figures (a = 7, b = 10, c = 2)
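The outlier rule stated in the caption (dropping values more than three standard deviations above the mean) can be expressed as a simple filter. The sketch below is an illustrative reconstruction under that stated rule, not the authors' code:

```python
import statistics

def drop_high_outliers(values, k=3):
    """Remove values more than k standard deviations above the mean."""
    m = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if v <= m + k * sd]

# Hypothetical visual-metric values with one extreme observation
metric = [1.0] * 20 + [2.0] * 20 + [1000.0]
print(len(drop_high_outliers(metric)))  # → 40 (the extreme value is removed)
```

Note that the rule is one-sided (only unusually large metric values are excluded), matching the caption's "above the mean" wording.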