Introduction

Perceptual processing is limited by computational capacity. From the enormous amount of information constantly entering our perceptual system, we have to select the input that is most relevant for our current goals (e.g., Treue, 2003). This can be achieved by selective sampling of information. For example, if a visual feature of an object (e.g., its reflectance) has to be judged, an optimal sampling strategy would be to look at the parts of this object that are most diagnostic for that feature (i.e., the brighter parts; Toscani, Valsecchi, & Gegenfurtner, 2013). In active touch perception, such sampling strategies are even more prominent. When judging different features of an object by manual exploration, people apply different stereotyped Exploratory Procedures (EPs; Lederman & Klatzky, 1987). To perceive, for example, the softness of an object, they apply the EP of "pressure," which is to squeeze the object between two fingers or palpate it with one finger or a tool. In contrast, when people want to judge the object's roughness, they move the hand laterally over the object’s surface ("lateral motion"). Lederman and Klatzky (1987) have shown that people perceive a feature with the sense of touch most precisely when they apply the EP that is habitually used for that feature. Furthermore, several studies have shown that movement parameters of one EP, such as finger force in perception of softness and shape (Drewing, 2012; Kaim & Drewing, 2011; Srinivasan & LaMotte, 1995) or finger velocity in the perception of spatial frequency (Gamzu & Ahissar, 2001), are tuned to optimize perceptual precision.

Additional top-down mechanisms shape the processing and selection of the gathered sensory information. There is evidence from research on visual and auditory perception that sensory processing is not hard-wired but flexible, and adapts to the behavioral context (see reviews in Carrasco, 2011; Gilbert & Li, 2013; Gilbert & Sigman, 2007; Summerfield & Egner, 2009). In visual and auditory perception, several overlapping forms of top-down influence, such as attention, expectation, and task-driven processing, have been characterized and investigated (Gilbert & Sigman, 2007). Attention usually refers to the selection of information for enhanced processing depending on spatial location, or on components of a scene such as objects or features (Gilbert & Li, 2013). Expectation is mostly thought of as a brain state representing prior knowledge (Summerfield & Egner, 2009) that is used to facilitate the choice of the most likely interpretation of the incoming signal. Task-driven processing refers to enhanced processing of task-relevant information. It is difficult to distinguish between most of these forms of top-down influence, for example between feature-oriented attention and task-driven processing, or between object expectation and object-oriented attention (Gilbert & Sigman, 2007). Behaviorally, however, all forms of top-down influence have in common that they improve perceptual performance: for example, (valid) expectations can increase participants’ speed and accuracy at detecting visual stimuli (e.g., Posner, 1980) or improve the recognition of objects (Bar, 2004).

In the present study we do not attempt to disentangle the different forms of top-down mechanisms; we treat these concepts at a rather general level, which is sufficient for our purpose. We do, however, focus specifically on top-down influences that do not involve a change in the gathering of information. For instance, in visual perception, attention can be directed towards a target by looking at it, which is referred to as “overt attention.” In contrast, “covert attention” implies that the focus of attention is shifted mentally, without eye movements (Posner, 1980). In this study, we investigated how top-down influences affect active touch perception beyond changes in the exploratory movements. Accordingly, we use the term “haptic covert attention” to refer to top-down effects that do not depend on changes in exploratory movements.

Studies addressing covert attention in touch perception have focused on passive touch. Early studies examining the effect of spatial attention on passive touch (tactile) perception showed that participants detected vibrations (which could occur at three locations on their body) equally well when the target body site was cued and when they had to divide attention between the three body sites (Shiffrin, Craig, & Cohen, 1973). Likewise, when tactile patterns were delivered to two fingers of different hands, participants could identify them equally well with and without cueing. However, several stimuli (targets and distractors) delivered to two fingers on the same hand were shown to interfere (Craig, 1985; Franzen, Markowitz, & Swets, 1970).

There is also evidence that covert attention can be directed to certain tactile stimulus features (Sinclair, Kuo, & Burton, 2000). In two experiments in which participants passively perceived vibratory or grating stimuli that could vary in frequency and duration, they received valid, neutral, and invalid cues about which dimension would change on a given trial. The authors found differences in performance (percent correct regarding which stimulus was longer/had higher frequency) between all cueing conditions for both vibratory and grating stimuli, such that performance was best for valid cues, worst for invalid cues, and intermediate for neutral cues. These results indicate that tactile attention can be allocated separately to perceived frequency (induced by vibrations or gratings) and stimulus duration. Allocation of covert attention improved performance (compared to the case in which attention had to be divided between the two features) but also had a cost when allocated to the wrong feature (as indicated by the decreased performance in the invalid cueing condition).

The behavioral effects of tactile attention are thought to be mediated by the modulation of neural activity in the somatosensory cortex (for a review see Gomez-Ramirez, Hysaj, & Niebur, 2016). Attending to the tactile information of a bimodal visuo-tactile stimulus increased the firing rates of neurons in the primary (SI) and secondary (SII) somatosensory cortices compared to when attention was directed to the visual information (Hsiao, O’Shaughnessy, & Johnson, 1993). When attention had to be directed to one of two tactile features (orientation and vibratory frequency), higher firing rates were observed in SII neurons selective for that specific feature (Gomez-Ramirez, Trzcinski, Mihalas, Niebur, & Hsiao, 2014). Furthermore, it has been shown that tactile attention increased the synchronization of neural spiking activity of feature-selective cells in SII, which correlated with performance (Gomez-Ramirez et al., 2014). On a macroscopic level, attention to a tactile stimulus is associated with oscillations in the γ-band (Bauer, 2006), whereas increased amplitude in the sensorimotor α- and β-bands was found to be linked to the spatially and temporally specific suppression of the unattended stimulus (Haegens, Luther, & Jensen, 2012; Haegens, Nacher, Luna, Romo, & Jensen, 2011; Jones et al., 2010; van Ede, de Lange, Jensen, & Maris, 2011; van Ede, Jensen, & Maris, 2010). In general, attention-related modulations of brain activity seem to increase along the hierarchy of somatosensory processing (e.g., Burton et al., 1999; Chen et al., 2010; Hsiao et al., 1993).

The results of previous work addressing top-down influences on touch perception are, however, restricted to passive touch (not involving movement of the sensory organs). In daily life we are used to actively exploring objects with the sense of touch. In active touch, kinesthetic information is generated in addition to cutaneous signals. Thus, in order to enhance the processing of more complex object features such as shape, top-down influences have to be coordinated across the two somatosensory subsystems: cutaneous and kinesthetic (Driver & Spence, 1998). Furthermore, Gibson (1962) argued that active touch is more than just the sum of passive touch and kinesthesia. For instance, he observed that when pressure is applied to two loci of the stationary hand, two sensory impressions arise, whereas when the same pressure pattern is obtained by active touch, a single object is perceived. Gibson (1962) proposed that active touch is necessary to extract the spatial and temporal invariants of sensory stimulation that correspond to object properties in order to perceive them. Recent findings also support this view (Smith, Chapman, Donati, Fortier-Poisson, & Hayward, 2009). Smith et al. (2009) showed that force profiles that were perceived as Gaussian ridges and troughs when explored actively were indistinguishable when played back to the stationary finger, indicating that the exploratory movement was necessary to interpret the sensory signal. There is also evidence that active and passive touch are processed differently at the neural level (Simões-Franklin, Whitaker, & Newell, 2011): active touch elicited greater activation of the somatosensory cortex and more distributed brain activity in areas outside the somatosensory region, likely related to the motor component of the task, compared to passive touch. Moreover, self-generated (active) movements lead to suppression of predicted, and hence expected, somatosensory signals (Bays, Flanagan, & Wolpert, 2006; Chapman, Bushnell, Miron, Duncan, & Lund, 1987; Chapman, Jiang, & Lamarre, 1988; Seki, Perlmutter, & Fetz, 2003; Voss, Ingram, Wolpert, & Haggard, 2008; Voudouris & Fiehler, 2017; Williams, Shenasa, & Chapman, 1998). It seems paradoxical that tactile perception is attenuated during movement, even though movement is necessary for active haptic perception. How tactile suppression affects behavior is still controversial. For instance, it has been shown that tactile suppression during movement does not extend to task-relevant haptic features when they are explored (Juravle, McGlone, & Spence, 2013). However, there is also evidence that performance might decrease (Cheeseman, Norman, & Kappers, 2016; Vitello, Ernst, & Fritschi, 2006; Ziat, Hayward, Chapman, Ernst, & Lenay, 2010) or increase (Frissen, Ziat, Campion, Hayward, & Guastavino, 2012) with active touch as compared to passive touch. Furthermore, as mentioned before, active exploration of objects allows discrimination performance to be improved by applying specialized exploratory movements and adjusting movement parameters. Taken together, these findings suggest that attentional effects found in passive touch need not occur in the same form in active touch, and it is even doubtful whether attentional mechanisms play a major role in enhancing performance in active touch, since the same effect can be achieved by movement control.

Here, we studied the effect of feature-selective covert attention on active touch perception of multidimensional 3D objects. Each object comprised two object features, namely shape and roughness. Perceived roughness is related to the physical microstructure of a surface and mostly determined by the spatial pattern of skin deformation (Taylor & Lederman, 1975). Other factors such as vibratory cues (Cascio & Sathian, 2001; Gamzu & Ahissar, 2001) and tangential forces (Smith, Chapman, Deslandes, Langlais, & Thibodeau, 2002) can also contribute to the perception of roughness. The peripheral neural correlate of roughness perception of rather coarse textures (macro scale, 0.5–5 mm inter-element spacing; Bensmaia, 2016) is the spatial pattern of activation of SAI (slowly adapting type I) afferents (referred to as the spatial code; Blake, Hsiao, & Johnson, 1997; Connor, Hsiao, Phillips, & Johnson, 1990; Connor & Johnson, 1992). The firing rate of vibration-sensitive RA (rapidly adapting) and PC (Pacinian corpuscle) afferents (referred to as the temporal code; Weber et al., 2013) complements the spatial code and extends the range of perceivable textures to the micro scale (Weber et al., 2013). To perceive roughness, people usually apply the EP of "lateral motion" (repeated movement of the finger or hand over the surface of an object; Lederman & Klatzky, 1987). Haptic shape perception is related to the macrostructure of an object. To perceive the global shape of an object, people apply the EP of “enclosure” (static maximum contact with the envelope of the object alternating with shifting of the object in the hand; Lederman & Klatzky, 1987). The exact shape is usually perceived by “contour following” (often non-repetitive dynamic contact with the contour of the object; Lederman & Klatzky, 1987). The features shape and roughness were chosen because they can be explored simultaneously with the same EP. To reinforce the perception of shape, the exploratory movement had to be similar to the typical EPs for shape perception ("enclosure" and "contour following"). For this purpose, we designed and positioned the test objects so that participants could grasp them with the index finger and the thumb ("enclosure") and move their fingers over the surface (similar to "contour following"). In this way they also performed "lateral motion," through which they could explore the roughness of the test objects. Thus, information sampling was kept constant across the two tasks, and any observed differences in performance should be due only to different processing of the same sensory signals.

In the present study, we aimed to isolate the effect of covert attention. For this purpose, feature-specific adjustment of exploratory movements had to be minimized. We therefore used a post-cueing paradigm and varied participants’ expectation of the upcoming task (i.e., whether they expected to judge the object’s shape or its roughness) to manipulate covert attention. In each trial, participants actively explored two sequentially presented objects whose shape and roughness were parametrically varied. Only after the exploration of both objects were participants told which task they had to perform (shape or roughness discrimination). Participants’ expectation of the upcoming task was manipulated via the frequency with which a certain task occurred, a manipulation that has previously been applied successfully in other tasks (Voss et al., 2008). Within one session, one task was performed in 20%, 50%, or 80% of the trials. We assumed that if a task occurred in 80% of the trials, participants would expect this task more than the other, so that sensory information diagnostic for the corresponding object feature would be selectively processed (i.e., a shift in covert attention). In contrast, if the task was performed in only 20% of the trials, it would be less expected than the other, and thus performance should be worse compared to the more expected task. If the distribution of the two tasks was 50:50, participants should not build up a strong expectation for either task, because both occurred equally often. We predicted that expectation would modulate perceptual processing of the gathered signals such that discrimination thresholds would be lowest when the task was more expected, highest when it was less expected, and intermediate when both tasks were equally expected.

The main experiment was preceded by two pilot experiments that aimed to assess the discrimination thresholds for shape and roughness discrimination separately. From these results, we chose the ranges and step-sizes for the test objects used in the main experiment.

Materials and methods

Participants

In total, 38 participants volunteered to participate in the experiments: ten (five females, average age 23.2 years) participated in a pilot study on roughness discrimination, ten (seven females, average age 24.7 years) in a pilot study on shape discrimination, and 18 (nine females, average age 22.2 years) in the main experiment. All participants were right-handed, had normal or corrected-to-normal visual acuity, and were naïve to the purpose of the experiment. None of them reported any sensory or motor impairment of the right hand. We additionally confirmed that participants had no sensory deficits by measuring a two-point discrimination threshold of 3 mm or lower at the right index finger (Johnson & Phillips, 1981).

The study was approved by the local ethics committee LEK FB06 at Giessen University and was conducted in accordance with the Declaration of Helsinki of 2008. Written informed consent was obtained from each participant. Participants were reimbursed for their participation (€8/h).

Apparatus

The experiments were conducted at a visuo-haptic workbench (Fig. 1) combined with three-dimensionally (3D) printed plastic test objects. The workbench consisted of a PHANToM 1.5A haptic force-feedback device, a 22-in. computer screen (Samsung, 120 Hz, 1,280 × 1,024 pixels), stereo glasses, a mirror, and a force sensor (ME-Messsysteme GmbH), which comprised a measuring beam (LCB 130) and a measuring amplifier (GSV-2AS, resolution 0.05 N, temporal resolution 682 Hz) and was used to measure the vertical forces participants exerted during exploration of the test objects. The plastic test objects were placed in front of the participant on the force sensor. The mirror prevented a direct view of the test objects and of the participant's hand. Instead, participants viewed a schematic 3D representation of the real scene, which was displayed on the monitor and viewed via the stereo glasses and the mirror (40-cm viewing distance, head stabilized by a chin rest). The virtual visual scene was spatially aligned with the real scene. The finger was represented by a sphere of 8-mm diameter. No visual information about the finger was available while participants were exploring the test objects (i.e., when the exerted force exceeded 0.1 N). Exerted forces were measured via the force sensor. The PHANToM was used to measure the position of the participant's finger and (in the pilot experiment on roughness discrimination) to restrict the exploration movement by force feedback. The right index finger of the participant was connected to the PHANToM arm via a custom-made adapter (Fig. 2a) consisting of a metallic pin with a round end and a plastic fingernail connected to the pin via a magnet. The plastic fingernail was affixed to the dorsal side of the participant's finger via adhesive deformable glue pads. The finger pad was left uncovered by the adapter, and the finger could be moved with six degrees of freedom in a 38 × 27 × 20 cm workspace. Custom-made software controlled the experiment, collected responses, and recorded the relevant parameters every 3 ms. White noise presented via headphones masked possible sounds arising from the exploration of the test objects.

Fig. 1 Visuo-haptic workbench. Two plastic test objects were placed in front of the participant on the force sensor, next to each other along the x-axis. The test objects were schematically displayed on the screen (only the line along their bottom surfaces was shown) and appeared, through the stereo glasses and mirror, spatially aligned with the real scene

Fig. 2 Multidimensional test objects and their presentation. (A) The test objects were placed close to each other along the x-axis in front of the participant onto custom-made pedestals. (B) Schematic illustration of the surface texture of the test objects (all measurements in mm). (C) Schematic illustration of the shape of the test objects. The range of the shape of the test objects is indicated by the most and the least oblong test objects. Where necessary, 90° angles are marked (all measurements in mm)

Stimuli and set-up

All test objects were printed using a 3D printer (Object30Pro, Stratasys, material VeroClear, nominal resolution 600–1,600 dpi). They combined both features: shape and roughness. We used feature values in a range similar to that used by Roland and Mortensen (1987), who investigated the perception of haptic micro- and macrostructure. The specific selection of the test objects (ranges, number of steps, and step size) was based on two pilot experiments in which the discrimination thresholds for shape and roughness had been assessed independently. The test objects were then chosen so as to efficiently sample the dynamic range of the psychometric functions.

The basic shape of the test objects was cuboid; shape differences were represented by variations in oblongness (similar to Roland & Mortensen, 1987). We created a set of cuboids with the same diagonal of d = 56.57 mm and a width of w = 30 mm, and placed them in front of the participant so that the diagonal was at the same height and vertically rotated in the same way for all cuboids (Fig. 2a). In this way, participants could start their exploration of each test object by grasping the lower edges at the same two points in space. This facilitated grasping the test objects with only sparse visual information and prevented the initial grip aperture from providing any information about the oblongness of the test object. Because participants explored only the top of the test objects, we reduced the cuboids to pseudo-cuboids, i.e., triangular prisms with an upper angle of 90° (schematic in Fig. 2c). Below the lower edges of the prism, we added 2-mm-high prolongations that completed the angles of the edges at 90°, giving participants the impression of touching cuboids. Variations in the degree of oblongness were created by varying the side a (Fig. 2c) of the pseudo-cuboid (implying a variation of the other side b, given that d was constant). We used two standards (a = 31.2 mm and 26.8 mm) and six comparisons (a = 40 mm, 35.6 mm, 31.2 mm, 26.8 mm, 22.4 mm, and 18 mm). Both sides of each pseudo-cuboid (a and b) were always larger than the approximate finger width (i.e., ≥ 18 mm) in order to promote finger movement across each side during exploration. The maximal side length was a = 40 mm; in this case a = b, and the cross-section of the implied cuboid was square. The total distance of an index finger–thumb exploration across the two test-object sides a and b was 71.63 mm ≤ a + b ≤ 80 mm.
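Because the upper angle of the pseudo-cuboid is 90° and the diagonal d is fixed, side b follows from side a by the Pythagorean theorem; the following worked relation (our own illustration, not part of the original text) reproduces the stated range of exploration distances:

\[
b = \sqrt{d^{2} - a^{2}}, \qquad d = 56.57\ \text{mm}.
\]

For the extreme comparisons, a = 40 mm yields b ≈ 40 mm (so a + b ≈ 80 mm), and a = 18 mm yields b ≈ 53.6 mm (so a + b ≈ 71.63 mm).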

Roughness was added to the test objects via the application of a surface texture defined by a 1D-square-wave function with an amplitude of 0.3 mm and a variable groove width (scheme in Fig. 2b). Analogous to the test objects of Roland and Mortensen (1987), we used textures with groove widths between 0.25 and 0.85 mm. We used two standards (textures with groove widths of 0.51 mm and 0.42 mm) and six comparisons for roughness discrimination (groove widths of 0.25 mm, 0.34 mm, 0.42 mm, 0.51 mm, 0.59 mm, and 0.67 mm).

We combined the shape and roughness standards in two test objects: one with side a = 31.2 mm and a texture with a groove width of 0.51 mm, and the other with side a = 26.8 mm and a texture with a groove width of 0.42 mm. Comparison test objects were produced for each combination of shape and roughness parameters, resulting in a set of 36 test objects.

The test objects were placed on custom-made pedestals (3D printed, Fig. 2a), which were approximately as long as the diagonal d (56.57 mm) and as wide as the width of the test objects (30 mm). They were rotated by 45° relative to the frontal plane of the participant, and their upper surface was inclined by -15° relative to the horizontal plane to allow comfortable grasping of the test objects. Lying on the pedestals, the test objects were elevated 24 mm from the force sensor and the corners of the test objects were free, so that participants did not touch anything else (e.g., the force sensor) when touching the test objects. Movement of the test objects was restricted by sidewalls of the pedestal and a central round plug (Fig. 2a), for which each test object had a corresponding hole. Visually, each test object was represented as a line to indicate its position. The lines (length d) were aligned with the middle axis of the bottom surface of the test objects. The endpoints of these lines were aligned with the edges of the test objects, also indicating the initial grasping points (Fig. 2a). Additionally, a second line below (prolonged shadow of the upper line) indicated the orientation of the test objects in the horizontal plane (since the upper lines disappeared as soon as participants touched the test objects to avoid informative visual input).

Design

The experimental design comprised one within-participant variable: Frequency of task (20%, 50%, or 80% of trials). In three experimental sessions, trials for the two tasks (roughness and shape discrimination) were presented with different, complementary frequencies (20:80, 50:50, and 80:20). In every trial, one of two standards for one of the two features was paired with one of six corresponding comparisons. Depending on the task on a given trial, participants decided, after exploring the two test objects, which one felt rougher or more oblong. The sessions took place on three different days, and their order was balanced across participants according to a Latin square.

We used the method of constant stimuli and a Two-Interval Forced Choice task (2IFC) to determine individual psychometric functions for roughness and shape discrimination.

Procedure

Every comparison (six in total) was paired with every standard (two in total): eight times in the 20% condition, 32 times in the 80% condition, and 16 times in the 50% condition. This resulted in 480 trials in the sessions in which the task distribution was 20:80 or 80:20 (shape:roughness), and 384 trials in the sessions in which the task distribution was 50:50. We used the same multidimensional test objects for shape and roughness discrimination, so that the test objects did not provide any information about the upcoming task. The feature values of the two standards were fixed. For the comparisons, the value of the judged feature was fixed on a given trial, whereas the value of the feature that was not judged was randomly assigned. If, for instance, the shape standard with a = 31.2 mm was compared with the comparison with a = 40 mm, the groove width of the standard was 0.51 mm and the groove width of the comparison was randomly chosen for that trial from among the six possible values (0.25–0.67 mm). The order of the trials in each session was randomized; the position of the test objects (left–right) and the order in which the test objects were explored (first–second) were balanced.
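For illustration, the trial counts follow directly from this factorial structure. The sketch below is our own hypothetical code (not the authors' experiment software) for constructing the trial list of a session with an 80:20 shape:roughness split:

```python
# Hypothetical sketch: build and shuffle the trial list for one session with an
# 80:20 shape:roughness task split (6 comparisons x 2 standards per task).
import itertools
import random

comparisons = range(6)                         # indices of the six comparison objects
standards = range(2)                           # indices of the two standards
repetitions = {"shape": 32, "roughness": 8}    # 80% vs. 20% task

trials = []
for task, n_rep in repetitions.items():
    for std, comp in itertools.product(standards, comparisons):
        trials += [{"task": task, "standard": std, "comparison": comp}
                   for _ in range(n_rep)]

random.shuffle(trials)
assert len(trials) == 480                      # 6 x 2 x (32 + 8) trials
```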

A coarse scheme of the procedure is outlined in Fig. 3. In every trial, participants sequentially explored two test objects (a standard and a comparison) and decided which one felt rougher or more oblong. The beginning of a trial was signaled by a tone presented via the headphones. The test object that had to be touched first was schematically displayed as a single line on the computer screen. Participants explored each test object by grasping it at the left and right lower corners with the index finger and thumb, and by moving the two fingers along the sides upwards until they met at the upper corner. They were instructed not to move their fingers back after the start of the exploration. After the first test object had been explored, the schematic of the second test object was displayed. As soon as both test objects had been explored, participants had to lift their finger above the test objects in order to receive the task instruction (judge shape or roughness). The instruction was indicated by corresponding labels ("rougher"/"more oblong") on virtual decision buttons displayed above the test objects. The decision buttons were implemented as areas in the virtual scene above the stimuli, which were activated when the finger was moved into them. Participants reported which test object had felt rougher or more oblong by moving their index finger to one of the two decision buttons. They did not receive any feedback on their performance, to avoid explicit learning. Between trials, participants rested their finger in the left corner of the 3D scene until the experimenter had manually exchanged the test objects.

Fig. 3 Experimental procedure. At the beginning of a trial, participants waited until the experimenter changed the test objects. The appearance of the schematic representation of one test object on the screen, coupled with a tone, signaled the participant to start the exploration. After the first test object was explored, the schematic representation of the second test object appeared on the screen. The task instruction (judge shape or roughness) was displayed after both test objects were explored. Then, participants indicated their decision and moved their finger back to the waiting position. The timing of the single trial phases was not restricted

Before the first session of the experiment, participants read a short description of the experiment in which they were instructed to explore both the shape and the roughness of the test objects on every trial. They were also informed that the task instruction would appear only after the exploration of the test objects, and that it would vary between trials. In addition, we manually demonstrated how to explore and judge the shape and the roughness using two example multidimensional test objects that were not included in the experiment. We used complete cuboids as example test objects, which intentionally differed from our pseudo-cuboid test objects: we wanted participants to believe that they were actually judging cuboids. The diagonal was marked on the plastic with a permanent pen to explain the orientation of the test objects in the set-up. Before every session, participants completed a practice block of ten trials to familiarize them with the test objects, set-up, and tasks. The relative frequencies of the two judgment tasks in the practice block were the same as in the subsequent session, to ensure that a stable expectation was built up before the experiment proper. Studies using different stimulus probabilities report successfully inducing different expectations even without a training session (e.g., Akatsuka, Wasaka, Nakata, Kida, & Kakigi, 2007; Voss et al., 2008), so it can be assumed that participants build up expectations very quickly based on trial probability. The experiment consisted of three sessions (one for each expectation condition). Sessions were 3–4 h long and were completed on different days. They were interspersed with 1-min pauses every 50 trials.

Data analysis

To estimate the just-noticeable differences (JNDs) for roughness and shape discrimination, we calculated, for each participant, condition, and standard, the percentage of trials in which each comparison test object was perceived as rougher/more oblong than the corresponding standard. We fitted cumulative Gaussian functions to these psychometric data using the psignifit4 toolbox (Schuett, Harmeling, Macke, & Wichmann, 2016) and estimated the JND as the 84% discrimination threshold (i.e., the difference between the stimulus values at the 84% and 50% points of the psychometric function). In this way, the JND corresponds to the standard deviation of the cumulative Gaussian function (cf. Helbig & Ernst, 2007). Individual JNDs for shape and roughness discrimination were entered into separate two-way repeated-measures ANOVAs with Frequency of task (20%, 50%, 80%) and Standard as within-participant variables. We further performed paired t-tests to compare the JNDs in the different conditions pair-wise. For these analyses, the JNDs were averaged over the standards. The p-values were Bonferroni-corrected for three comparisons.
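The fits themselves were obtained with the psignifit4 toolbox; as a minimal, self-contained illustration of the underlying idea, the following sketch (simplified least-squares code using scipy rather than psignifit4, with made-up response proportions) fits a cumulative Gaussian and reads off the PSE and the JND:

```python
# Illustrative sketch, not the authors' analysis code: fit a cumulative Gaussian
# to hypothetical roughness-discrimination data and extract PSE and JND.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """P('comparison judged rougher') modeled as a cumulative Gaussian."""
    return norm.cdf(x, loc=pse, scale=sigma)

groove = np.array([0.25, 0.34, 0.42, 0.51, 0.59, 0.67])     # comparison values (mm)
p_rougher = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.95])  # made-up proportions

(pse, sigma), _ = curve_fit(psychometric, groove, p_rougher, p0=[0.42, 0.1])

# Since Phi(1) ~ 0.84, the 84% threshold lies one sigma above the PSE,
# so the JND defined this way equals the standard deviation sigma.
jnd = sigma
print(f"PSE = {pse:.3f} mm, JND = {jnd:.3f} mm")
```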

To test whether participants adjusted their movement parameters to the different conditions, we extracted relevant movement parameters (Gamzu & Ahissar, 2001; Kaim & Drewing, 2010; O’Malley & Goldfarb, 2002). For every trial, we calculated the exploration time and the averages of the vertical force and of the velocities along the three axes. We then averaged these values over the trials of one session, obtaining five average movement parameters per participant and Frequency of task condition (80:20 shape:roughness, 80:20 roughness:shape, 50:50 shape:roughness). On these data, we performed one-way repeated-measures ANOVAs with the within-participant variable Frequency of task. As sanity checks, we also tested the goodness of fit of the psychometric functions by comparing the deviance to the χ² distribution with degrees of freedom equal to the number of comparison test objects (critical value χ²(6, 95%) = 12.59; Wichmann & Hill, 2001), and estimated the Points of Subjective Equality (PSEs) for the two standards.
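The goodness-of-fit criterion can be made explicit with a short sketch (our own illustration; the deviance itself is computed as in Wichmann & Hill, 2001):

```python
# Assumed sketch: a fit counts as "good" if its deviance falls below the 95th
# percentile of the chi-square distribution with df = number of comparisons (6).
from scipy.stats import chi2

def fit_is_good(deviance, n_comparisons=6, alpha=0.05):
    return deviance < chi2.ppf(1 - alpha, df=n_comparisons)

print(round(chi2.ppf(0.95, df=6), 2))   # 12.59, the critical value cited above
```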

Pilot experiments

In the pilot study on roughness discrimination, the test objects were 110 × 42 × 4 mm cuboids with a central, embedded texture (the same as in the main experiment) of 40 × 80 mm. The test objects were mounted behind each other along the z-axis (10 mm in-between) on the force sensor and their outlines were visually represented on the screen. Participants explored each test object by laterally moving the finger once from the right to the left side. The exploration length was restricted to 60 mm by virtual barriers rendered by the PHANToM. The exploration was considered terminated when the finger reached the virtual barrier. In the pilot experiment on shape discrimination, the test objects were the same as in the main experiment, but with flat surfaces. The set-up and exploration of the test objects were the same as in the main experiment.

In each pilot experiment, we used two standards, each paired with 11 comparisons. For shape discrimination, we used the standards a = 30 mm and a = 28 mm, each paired with five comparisons with smaller a and five with larger a than the corresponding standard (step size of 2 mm between neighboring comparisons). For roughness discrimination, we used standards with groove widths of 0.42 mm and 0.59 mm, each paired with five comparisons with a smaller and five with a larger groove width than the corresponding standard (step size of 0.04 mm between neighboring comparisons). Every comparison was paired 20 times with the corresponding standard, resulting in two standards × 11 comparisons × 20 repetitions = 440 trials. Each pilot experiment was carried out in a single session of 3–4 h on one day.

Results

Pilot experiments

For roughness discrimination, we found an average JND of 0.11-mm groove width (Weber Fraction of 22%). For shape discrimination, we found an average JND of 7.83 mm in the short side a (Weber Fraction of 27%). JNDs of the different standards did not significantly differ for roughness, t(9) = 0.96, p = .361, or for shape discrimination, t(9) = 1.41, p = .193. More than 90% of fits (per participant, judged feature, standard) were good according to the defined criterion (deviance values below the critical χ2); all were included in the analysis.

Main experiment

The goodness of fit was similarly high to that in the pilot experiments (deviance values below the critical χ² in more than 90% of fits; all fits were included in the analysis). In Fig. 4, exemplary psychometric functions of one participant are plotted separately for the discrimination of shape and roughness under the three expectation conditions (high, equal, and low expectation). In Fig. 5, the JNDs (averaged over standards) are plotted as a function of the Frequency of task. The two-way repeated-measures ANOVAs with the factors Frequency of task and Standard as within-participant variables did not reveal a significant interaction (shape: F(2,34) = 0.56, p = 0.579; roughness: F(2,34) = 0.69, p = 0.511), indicating that the main effect of Frequency of task was not significantly modulated by the Standard. For both features (shape and roughness), we observed a main effect of Frequency of task on the JNDs, indicating that the expectation to judge a certain feature of the test objects affected the processing of the incoming signal (shape: F(2,34) = 12.84, p < .001; roughness: F(2,34) = 14.99, p < .001). Next, we tested whether the JNDs differed between the individual conditions. When participants judged the infrequent feature (Frequency of task = 20%), the JNDs were significantly higher than when this feature was judged frequently (Frequency of task = 80%; shape, t(17) = 4.67, p < .001; roughness, t(17) = 4.75, p < .001; all p-values Bonferroni-corrected), and higher than when both features were judged equally often (Frequency of task = 50%; shape, t(17) = 3.11, p = .019; roughness, t(17) = 3.62, p = .006). However, we did not find significant differences in performance between judging the frequent feature and judging both features equally often (Frequency of task = 80% vs. 50%; shape, t(17) = 1.38, p = .556; roughness, t(17) = 1.39, p = .546). As predicted by Weber’s law, we found a main effect of the Standard for both features (shape: F(1,17) = 7.98, p = 0.012; roughness: F(1,17) = 8.17, p = 0.011). The JNDs were on average higher for the higher standard stimulus value (roughness: standard 0.423 mm, JND = 0.105 mm, and standard 0.508 mm, JND = 0.115 mm; shape: standard 26.8 mm, JND = 7.02 mm, and standard 31.2 mm, JND = 7.68 mm).
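For completeness, the following sketch shows how such a two-way repeated-measures ANOVA and Bonferroni-corrected paired t-tests can be computed; the tooling and numbers are assumptions for illustration (synthetic data), not the authors' analysis scripts:

```python
# Assumed sketch with synthetic JNDs: two-way repeated-measures ANOVA
# (Frequency of task x Standard) plus Bonferroni-corrected paired t-tests.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
df = pd.DataFrame([
    {"participant": s, "frequency": f, "standard": st,
     "jnd": rng.normal(0.11 + (0.02 if f == 20 else 0.0), 0.02)}
    for s in range(18) for f in (20, 50, 80) for st in (1, 2)
])

print(pg.rm_anova(data=df, dv="jnd", within=["frequency", "standard"],
                  subject="participant"))

# Pairwise tests on JNDs averaged over standards, Bonferroni-corrected (x3).
m = df.groupby(["participant", "frequency"])["jnd"].mean().unstack("frequency")
for a, b in [(20, 50), (20, 80), (50, 80)]:
    t, p = ttest_rel(m[a], m[b])
    print(f"{a}% vs {b}%: t = {t:.2f}, p_bonf = {min(p * 3, 1.0):.3f}")
```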

Fig. 4 Psychometric functions of one example participant for the different expectation conditions in the roughness (left) and the shape (right) task. The probability of the standard (one of two) being judged rougher/less oblong is plotted as a function of groove width/short side of the cuboid. For each feature (shape and roughness), cumulative Gaussian functions were fitted separately for the sessions with high, equal, and low expectation of the corresponding discrimination task. The physical value of the standard and chance performance (proportion “rougher” = 0.5) are indicated by dashed black lines

Fig. 5 Average just-noticeable differences (JNDs) with standard errors of the mean as a function of expectation, separately for the discrimination of shape and roughness

To exclude the possibility that the observed differences in JNDs were due to the different number of repetitions of comparisons in the three conditions, we reanalyzed the data. We randomly sampled as many trials per standard-comparison pairing in the 50% and 80% conditions as we had presented in the 20% condition. Using these data, we recalculated psychometric functions, re-estimated the JNDs as described above, and recalculated the statistical analyses 100 times. With an equal number of trials in each condition, we replicated all of our results reported above in at least 89% of repetitions.
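A sketch of this trial-equating control analysis (assumed implementation; data structures and names are hypothetical):

```python
# Assumed sketch: subsample the 50% and 80% conditions down to the 8 repetitions
# per standard-comparison pairing available in the 20% condition, then refit.
import numpy as np

rng = np.random.default_rng(2024)

def subsample(trials, n_per_pairing=8):
    """trials: dict mapping (standard, comparison) -> array of binary responses."""
    return {pair: rng.choice(np.asarray(resp), size=n_per_pairing, replace=False)
            for pair, resp in trials.items()}

# for repetition in range(100):
#     reduced_50 = subsample(trials_50)   # hypothetical per-condition raw data
#     reduced_80 = subsample(trials_80)
#     ...refit the psychometric functions, re-estimate the JNDs, rerun the tests...
```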

Our analysis revealed significant differences in JNDs between the expectation conditions despite the small number of repetitions per stimulus level. Nevertheless, we repeated our analyses by modelling the responses of the whole population by means of a generalized linear mixed model (GLMM; Moscatelli, Mezzetti, & Lacquaniti, 2012). In this way, we could extend the fitting of the psychometric functions to the repeated responses of all participants considered jointly. The population analyses (Table 1) confirmed that the precision of shape and roughness perception was modulated top-down: by comparing the confidence intervals, we found that for both features the JNDs were significantly higher in the 20% condition than in the 50% and 80% conditions, whereas the JNDs of the latter two conditions did not differ significantly.
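In the GLMM approach of Moscatelli et al. (2012), the binary responses of all participants are modeled jointly with a probit link and participant-specific random effects. A sketch of the standard formulation (our notation, simplified):

\[
P(\text{``comparison judged rougher/more oblong''}) = \Phi\bigl(\beta_0 + \beta_1 x + u_{0j} + u_{1j} x\bigr),
\]

where x is the comparison value, β0 and β1 are fixed effects, and u0j, u1j are the random intercept and slope for participant j. At the population level, PSE = −β0/β1 and, for the 84% criterion used here, JND = Φ⁻¹(0.84)/β1 ≈ 1/β1; confidence intervals for these quantities can then be compared across conditions.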

Table 1 Generalized linear mixed model results

On average, the PSEs did not differ from the standards’ physical values, nor did they differ between the Frequency of task conditions (all ps > 0.1).

The movement parameters (exploration time, force, velocity on three axes) are plotted in Fig. 6. The corresponding averages and standard deviations are listed in Table 2. The analysis of the movement parameters revealed no significant effect of Frequency of Task (all p’s > 0.05), suggesting that exploratory movement was similar under the different expectation conditions.

Fig. 6 Average parameters of the exploratory movement with standard errors as a function of the expected task. The expected tasks correspond to the following Frequency of task conditions: shape = 80:20 shape:roughness, dual = 50:50 shape:roughness, roughness = 80:20 roughness:shape

Table 2 Movement parameters

Though we did not find any significant differences in the parameters of the exploratory movements between the different expectation conditions, we performed several post hoc analyses to inspect the differences in force in more detail, because here we found the largest relative difference (23%) between the condition in which participants expected to judge roughness and the condition in which they expected to judge shape (Table 2). Namely, participants on average exerted numerically slightly more force (0.23 N more) when expecting to judge roughness. If this difference were indeed due to systematic adjustments resulting in improved performance, forces and JNDs should correlate, i.e., participants should increase force when they expect to judge roughness and/or decrease force when they expect to judge shape. We thus correlated individual forces with individual JNDs for the conditions in which participants expected to judge either roughness or shape (Fig. 7). We did not find a significant correlation in either condition (roughness: p = 0.207; shape: p = 0.207). If anything, the numerical trends point in the opposite direction: in roughness discrimination, the participants with the best performance tended to exert rather low forces, and in shape discrimination, performance tended to improve with increasing force. However, since we are considering null hypotheses, we additionally calculated the Bayes factor (K) for the correlation coefficients R, which quantifies the support for the alternative hypothesis over the null hypothesis. In our case, K = 0.4 for roughness and K = 0.39 for shape. According to Jeffreys’ (1961) interpretation scale, values of K between 1 and 3.16 are “barely worth mentioning,” and values below 1, as observed here, favor the null hypothesis.
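One way to obtain such a Bayes factor for a correlation is sketched below; the tooling choice and the numbers are our assumptions for illustration, not necessarily what the authors used:

```python
# Assumed sketch: Bayes factor for the Pearson correlation between individual
# mean forces and individual JNDs (data are made up for illustration).
import numpy as np
import pingouin as pg

rng = np.random.default_rng(7)
force = rng.normal(0.8, 0.2, size=18)   # hypothetical mean forces (N)
jnd = rng.normal(0.11, 0.02, size=18)   # hypothetical roughness JNDs (mm)

result = pg.corr(force, jnd, method="pearson")
print(result)   # output includes BF10; values below 1 favour the null hypothesis
```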

Fig. 7 Individual just-noticeable differences (JNDs) as a function of individual force, separately for the conditions in which participants expected to judge roughness or shape. Regression lines are plotted in red

In addition, we investigated whether the differences in JNDs that constitute our main result are also present in the subgroup of participants for whom the pattern of force was the opposite of the one observed in the full sample. If so, we can dissociate expectation, force, and the effect of expectation on the JNDs, making it reasonably certain that the differences in force do not explain our results. To do so, we repeated our main analysis in the subset of participants (N = 5) for whom the differences in force were reversed, i.e., participants who exerted less force when they expected to judge roughness than when they expected to judge shape (red dots in Fig. 8; average force = 0.62 vs. 0.76 N, respectively, a 21% difference). Average JNDs as a function of expectation for these five participants are plotted in Fig. 9. We then performed one-way repeated-measures ANOVAs on the JNDs with the factor Frequency of task, separately for roughness and shape discrimination, for this subset only. The ANOVAs revealed significant main effects of Frequency of task for both features (roughness: F(2,8) = 8.92, p = 0.009; shape: F(2,8) = 7.48, p = 0.015), as in the full sample (Fig. 5). Also as in the full sample, when participants did not expect to judge roughness, the JNDs were higher than when both features had to be judged equally often (t(4) = 3, p = 0.04). Similarly, JNDs were higher when participants did not expect to judge shape than in the condition in which both features had to be judged equally often (t(4) = 2.8, p = 0.0489). These results strongly suggest that even if there were small differences in force between the conditions, they did not cause the differences in performance.

Fig. 8 Individual forces when participants expected to judge shape as a function of the force when they expected to judge roughness. The unity line is plotted in black. Participants who exerted more force when expecting to judge shape than when expecting to judge roughness are highlighted in red

Fig. 9 Average just-noticeable differences (JNDs) with standard errors of the mean as a function of expectation, separately for the discrimination of shape and roughness, for the five participants for whom the pattern of force was the opposite of that observed in the full sample (higher force when expecting to judge shape)

Discussion

In the present study, we investigated whether and how covert attention influences discrimination performance in active touch perception of object shape and roughness. Participants explored multidimensional test objects that parametrically varied in roughness and shape. They were told only after the exploration which task they had to perform (shape or roughness discrimination), which successfully prevented them from adjusting their exploratory movements to the task. We systematically manipulated their expectation of the task by varying the frequency with which each task occurred in one session of the experiment (20%, 50%, and 80%) and assumed that participants would shift the focus of covert attention to the expected feature. We showed that if participants expected a certain task less than the other (20% condition), their discrimination thresholds for this task were significantly higher than in the condition in which they expected both tasks equally (50% condition) and than when they expected this task more than the other (80% condition). Our results suggest that active touch perception of 3D objects can be affected by covert attention.

Participants’ performance was significantly worse in the task they were not expecting compared to the case in which both tasks had equal probability. In the expected (80%) task, performance was on average better than in the equally probable (50%) task, but this comparison did not reveal a significant difference. Our results are consistent with findings on attentional effects in passive tactile perception showing that attentional costs are larger than benefits (Gomez-Ramirez et al., 2016). In a passive tactile discrimination task, Sinclair et al. (2000) investigated the effect of attention on the processing of tactile vibration frequency and duration. They showed both costs and benefits of feature-oriented tactile attention: performance increased when attention was allocated to the validly cued feature but decreased when attention was allocated to the invalidly cued feature, compared to the case in which attention was divided between features. The beneficial performance differences were on average much smaller than the performance differences related to the costs of selective attention. In a study investigating spatial attention in a similar task, a comparable asymmetry was observed in reaction times (Forster & Eimer, 2005). In our experiment, we used different probabilities of task occurrence to direct participants' covert attention to different features, and a post-cueing paradigm to prevent feature-specific adjustments of the exploratory movements. This might have led to a weaker influence of covert attention on shape and roughness discrimination than in a classical cueing paradigm, and may explain why we did not find a benefit of selective feature attention. The fact that we find similar effects of selective covert attention on active touch perception indicates that covert attention can be coordinated between the cutaneous and kinesthetic afferent subsystems. Furthermore, even though active touch provides the possibility of adjusting the exploratory movements (overt attention), our results suggest that performance can additionally be modulated by covert attention in a similar way as in passive tactile perception, at least in the absence of movement adjustments.

We claim that the improved performance in the expected task as compared to the unexpected task is due to feature-selective covert attention. Alternatively, one could speculate that the difference in performance between the expected and unexpected conditions arose because participants adjusted their exploratory movements to their expectation of the upcoming task. However, the differences in the relevant movement parameters between the expectation conditions did not reach statistical significance, suggesting that the movement itself was not tuned to the expected object feature. Though not significant and small in absolute terms (0.23 N), the difference in force between the expectation conditions was relatively large (23%): participants exerted numerically more force when expecting to judge roughness than when expecting to judge shape. However, we did not find any correlation between JNDs and force at the individual level in either condition. Furthermore, we could replicate our main results in the subsample of participants who exhibited the opposite behavior (more force when expecting to judge shape). These results are consistent with the finding that contact force does not affect roughness discrimination performance in haptic perception (e.g., Kwok, Darkins, Oddo, Beccai, & Wing, 2010). Therefore, we can reasonably exclude the possibility that the performance difference was due to the gathering of information; rather, it was due to feature-selective covert attention.

Adaptation and perceptual learning are possible bottom-up influences that might have given rise to the difference in performance we found between the expected and unexpected conditions. We believe that adaptation or mere exposure to certain stimulus variations (Watanabe, Nanez, & Sasaki, 2001) is unlikely to cause the effect, because in each expectation condition participants were exposed to the same set of features: all test objects contained both features, roughness and shape. It is possible that performing one task more often in the expected-task condition led to perceptual learning, so that performance in the expected task increased during a session and/or between sessions. The order of the sessions was, however, balanced, so that putative learning effects between sessions should have canceled out. We also find the difference in performance between the expected and unexpected conditions when we consider only the first 96 trials of the expected condition, i.e., the same number of trials as in the unexpected condition (roughness: t(17) = -4.16, p < 0.001; shape: t(17) = -4.80, p < 0.001). In this case, the number of trials that could have driven perceptual learning is equated. It could still be argued that perceptual learning occurred because the 96 trials in which the frequent feature was compared were closer together in time than the 96 trials in which the infrequent feature was compared. We think, however, that this is unlikely, because perceptual learning without feedback usually requires many more repetitions than we applied in our task (e.g., ten times more in Watanabe et al., 2001). Taken together, we think that the difference in performance between expected and unexpected feature processing is due to covert attention rather than to adaptation or perceptual learning.

We cannot exclude the possibility that participants performed the shape and the roughness tasks sequentially rather than simultaneously in order to reduce the interference between the tasks. For instance, they might have judged the shape of the test objects from the initial grip, using the orientation of the orthogonal surfaces of the test objects, and then focused on the object roughness during the exploratory movement. If participants had applied such a sequential strategy, we would have expected no difference in performance between the high and low expectation conditions. In contrast, we observed a significant difference (depending on the expectation), suggesting that the orientation of the objects' edges was not sufficient to judge object shape.

The goal of the present study was to investigate covert attention in active touch. The limitation of this approach is that, on the one hand, by including movement to study active touch we also risk introducing differential exploration, which has to be excluded in order to study covert attention. On the other hand, in order to isolate the effect of covert attention, it is necessary to restrict exploratory movements, which reduces the active nature of haptic exploration. In free exploration, the exploratory movements and movement parameters are adjusted to optimize performance (Drewing, 2012; Gamzu & Ahissar, 2001; Kaim & Drewing, 2011; Lederman & Klatzky, 1987). The length of the exploration also varies, which affects discrimination thresholds (Drewing, Lezkan, & Ludwig, 2011; Giachritsis, Wing, & Lovell, 2009; Lezkan & Drewing, 2014; Louw, Kappers, & Koenderink, 2000; Metzger, Lezkan, & Drewing, 2018). We designed the task specifically to achieve a reasonable trade-off between these conflicting goals. We restricted exploration by positioning the stimuli such that a single, simple exploratory movement was enough to extract both features, and we instructed and trained participants to perform it. In this way, as opposed to passive touch, participants actively moved the hand and fingers in order to sense the stimuli, instead of being touched by them (Gibson, 1962). Thus, more sources of information than in passive touch (kinesthetic information, information elicited by movement such as vibrations, efference copies of motor commands) were available even with this restricted exploratory movement. At the same time, the possibility for task-specific adjustments of the movements was greatly reduced. Additionally, we used a post-cueing paradigm to limit the selection of information during the exploration, and we manipulated covert attention via the different frequencies with which a certain task (roughness vs. shape discrimination) could occur. Although our instructions did not constrain the force, speed, or duration of the exploration (promoting active exploration), these parameters turned out to be rather similar across the expectancy conditions, presumably because of how the task was designed. Moreover, our analyses show that the differences in the JNDs between the expectancy conditions could not be explained by differences in exploratory behavior. Hence, we empirically support the notion that, with our task demands, it was possible to eliminate differential exploration to an extent that allows drawing conclusions about covert attention in active touch.

The discrimination threshold we found for roughness perception was higher than reported in the literature. Nefs, Kappers, and Koenderink (2001) reported a Weber fraction of 6.4–11.8%, while we found one of 22%. However, in the study by Nefs et al. (2001), participants stroked over a 10-cm long stimulus twice, resulting in a total exploration length of 20 cm (3.33 times longer than in our experiment). The difference between studies is in line with previous results showing that spatiotemporal extension of the exploration leads to a decrease in discrimination thresholds (e.g., Drewing, Lezkan, & Ludwig, 2011; Giachritsis, Wing, & Lovell, 2009; Lezkan & Drewing, 2014; Louw, Kappers, & Koenderink, 2000; Metzger, Lezkan, & Drewing, 2018).

Taken together, our results suggest that akin to visual, auditory, and passive tactile perception, active touch perception of complex, multidimensional objects is influenced by covert attention. More precisely, expectations about the upcoming task influenced the discrimination of simultaneously perceived shape and roughness information of an object acquired by active touch. In line with effects of selective spatial and feature attention in passive touch perception, attentional costs also seem to be larger than their benefits in active touch perception. Although active touch perception is in many ways different from passive touch perception, our results suggest that it is influenced by covert attention in the same way, at least in the absence of adjustment of exploratory movements.