Over the past decades, dynamic interceptive actions such as catching a ball or hitting a projectile have received a great deal of attention within research on human behavior in the domains of psychology, human movement science, sport science, and performance analysis (Davids, Araújo, Vilar, Renshaw, & Pinder, 2013). Dynamic interceptive actions have proven popular methodological vehicles in psychological research because they yield insights into the intertwined and complex relationship between processes of cognition, perception, and action during performance in complex environments (Davids, Savelsbergh, Bennett, & van der Kamp, 2002; Panchuk, Davids, Sakadjian, MacMahon, & Parrington, 2013). Coordinating movements during such interceptive actions demands precise information, predicated on perceptual expertise. Successful interception of an object involves moving an effector (e.g., a hand or a foot) into the right place at the right time, and skilled athletes can satisfy these rigorous task constraints with an extraordinary degree of precision. For example, elite batters in the sport of cricket can hit projected balls with a margin of timing error of 2 to 3 ms (Regan, 1997). The speed constraints of many ball sports lead performers toward the intrinsic limitations of their visuo-motor systems, indicating that they cannot completely rely on ball flight information alone to coordinate interceptions. It has been argued that information emerging prior to ball flight—from the actions of a pitcher or bowler, for example—is pertinent for successful interception in fast ball sports (Pinder, Davids, Renshaw, & Araújo, 2011a; Pinder, Renshaw, & Davids, 2009). Advanced information sources are exemplified by visual information from the movement kinematics of another person’s actions used to project an object such as a ball (with a throw, kick, or hit) toward a catcher. 
Coordination of interceptive actions, under rigorous time constraints especially, encompasses the process of visual anticipation—that is, the ability to make accurate predictions from partial or incomplete advance sources of visual information (Poulton, 1957).

Van der Kamp, Rivas, van Doorn, and Savelsbergh (2008) proposed that skilled performers can regulate interceptive actions by coupling them to different sources of information that become partially available at different times in dynamic performance contexts, such as prior to and after the point of ball projection. Through this process, skilled catchers take advantage of the informational richness of the performance environment (from sources of advanced visual information, including the orientation of a thrower’s hand, or from ball flight trajectory) to functionally adapt their interceptive behaviors. These ideas emphasize the importance of the relationship between a performer and a specific performance environment, considered crucial in research from an ecological dynamics perspective (Davids & Araújo, 2010).

A common methodology for studying visual anticipation processes and interceptive actions involves presentation of video-projected images of an individual’s actions (e.g., a cricketer bowling a ball or a tennis player hitting a serve toward an observer). Images of action can be manipulated as a source of advanced visual information to examine how participants might use visual anticipation processes under different task constraints. For example, this type of methodology allows decision-making and gaze behaviors to be examined using controlled systematic experimental designs. A point of contention is that participants’ simulated behavioral responses to the presentation of information have typically been somewhat reductionist, tending to rely on verbal, written, button-pressing, or micromovement responses (e.g., Jackson & Mogan, 2007; Müller, Abernethy, & Farrow, 2006; Rowe, Horswill, Kronvall-Parkinson, Poulter, & McKenna, 2009). An important issue with these designs concerns the decoupling of perception and action in experimental work (Dicks, Button, & Davids, 2010; Panchuk et al., 2013).

A significant challenge for researchers in psychology has been to design experimental task constraints for studying dynamic actions that are representative of a performance environment from which one is sampling (Brunswik, 1956). Representative experimental designs examine psychological processes at the level of the performer–environment relationship, ensuring that the perceptual information available to regulate actions is typical of a specific performance environment (Brunswik, 1956; for a detailed review, see Pinder, Davids, Renshaw, & Araújo, 2011b). Pinder et al. (2011b) highlighted two critical features, functionality of the research and action fidelity, in a theoretical framework for representative experimental designs. Functionality of the task constraints enables performers to regulate actions with information sources that are representative of a performance environment. Therefore, when researchers design experiments, they should ensure that key perceptual variables, available in a performance environment to regulate their actions, are maintained in experimental task constraints, so that behaviors examined can be generalized to a specific performance environment. For example, catching a ball from a thrower requires information from the thrower’s movement kinematics, prior to ball flight, for successful interception. The implication is that, when catching behaviors are studied, these kinematic perceptual variables must be included in an experimental task. This type of functionality must be combined with action fidelity, which enables the performer to organize the same action that would be required in actual performance environments. High levels of action fidelity are observed when a performer’s response remains similar in experimental and actual performance conditions (Pinder et al., 2011b).

In psychology experiments, it is very difficult to maintain both functionality and fidelity, and hence to achieve a representative experimental design. Using a "live" thrower or bowler to project an object requires having the same individual available to perform the projection action throughout extended periods of experimental data collection, which can raise issues concerning validity, costs, time demands, and potential repetitive strain injury. A "live" performer could also introduce unintended variability into the projection action (Schorer, Baker, Fath, & Jaitner, 2007), making independent variables hard to control across experimental conditions or participants. Some of these limitations can be overcome by using traditional ball projection machines, so that stable ball flight trajectories can be maintained over trials without incurring injury risk, costs, or inordinate time demands on skilled individuals to “deliver” the ball to participants (e.g., bowling or pitching a ball). Yet the use of this type of ball projection technology can introduce a series of new limitations associated with the removal of advanced perceptual information from a thrower’s actions (for a more detailed review, see Pinder, Renshaw, Davids, & Kerhervé, 2011c). Recently, d’Avella, Cesqui, Portone, and Lacquaniti (2011) developed advancements in projection technology enabling a controlled range of different ball trajectories (varying distance, height, and flight duration). Although significant, this apparatus still neglected the availability of advanced visual information sources, despite the authors’ acknowledgment of the importance of these sources of information when studying behavior during catching performance (Cesqui, d’Avella, Portone, & Lacquaniti, 2012).

Here, we describe a detailed methodological approach for resolving these potential problems and maintaining perception–action coupling when performance of dynamic interceptive actions, such as ball catching, is studied. We describe the methodology behind the design of a custom-built, inexpensive, integrated ball projection technology, which provides visual information on advanced movement kinematics of a throwing action, prior to ball projection, when catching behaviors are studied.

Integrated projection systems such as the apparatus described here may have a significant future in sport performance development programs and in experimental research projects. We discuss the merits of such a system, consider potential limitations, and elucidate relationships with other emerging technologies, to enable investigation and practice of dynamic interceptive actions. As well as its low cost, the methodology and apparatus presented here also support further integration with other experimental equipment typically used for studying human movement behaviors, such as a VICON camera system, digital projection systems, eye movement registration systems (e.g., ASL), electromyography (EMG), and force platforms, enabling interceptive actions to be studied comprehensively.


Apparatus design

The apparatus (see Fig. 1) integrated a Spinfire Pro 2 (Spinfiresport, Tennis Warehouse, Victoria, Australia) tennis ball projection machine with a PC (Windows XP, Microsoft, U.S.), video projector (Benq MP776s, Benq, Australia), and free-standing projection screen (Grandview, Grandview Crystal Screen, Canada). Customized software and modifications to the ball projection machine enabled synchronization of ball release and video projection. The stimuli, software, and hardware for the apparatus are outlined in detail below.

Fig. 1

The custom-built ball projection device and setup


Ball projection machine

The Spinfire Pro 2 ball projection machine was selected as a mid-range priced device offering all the features required for the integrated design. It projects tennis balls at a range of velocities between 32 and 128 km/h (8.94–35.76 m/s), with an internally oscillating mechanism enabling quicker changes in speed and direction than whole-body oscillation designs. It also features "extreme grip" counter-rotating wheels, allowing the machine to remain silent during ball propulsion. The machine can vary oscillation horizontally and vertically, enabling the direction of projected balls to be changed to a specified location. A battery life of 3–8 h enables a high level of use without recharging. The machine is controlled by a membrane touch panel with a backlit LCD display, enabling changes in ball speed, direction, and elevation.

Modifications to the ball machine

In our design, the original Spinfire machine was modified by fitting two solenoids (one on each side) in the ball feed chute, which held the ball inside the chute until the solenoids were released (Fig. 2a). The solenoids were wired to a data acquisition box (National Instruments multifunction card), which connected to the PC via USB cable (Fig. 2a) and triggered ball release via custom-developed software, which we describe below. A coaxial cable with BNC connector was also wired to the instruments box and could be linked to external systems for recording movement behaviors, such as a VICON MX camera system (VICON, Oxford, U.K.) or the ASL eye movement registration system (ASL, Massachusetts, U.S.). The cable sends a timing impulse (generated via the software) to external systems such as the VICON Nexus software (VICON, Oxford, U.K.) or Noraxon EMG 2400 T G2 (Arizona, U.S.) systems to either start recording or mark a timing pulse line into the data at the point the trial starts. A photoreceptive cell was also placed at the mouth of the ball feed chute to allow measurement of the time delay between triggering the solenoid and ball release (see Feed Delay below).
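To illustrate the trigger sequence just described, the following is a minimal, hypothetical sketch in Python (the actual system is driven from LabVIEW through a National Instruments card; the `write_line` callback and channel names below are our own stand-ins, not the real API):

```python
class ProjectionTrigger:
    """Sketch of the ball-release trigger logic: raise the timing pulse
    on the BNC output (so external recorders such as VICON or EMG mark
    the trial start), then energize both solenoids to free the ball."""

    def __init__(self, write_line):
        # write_line(channel, level) stands in for a DAQ digital write.
        self.write_line = write_line

    def release_ball(self):
        self.write_line("sync_bnc", 1)       # timing pulse to external systems
        self.write_line("solenoid_left", 1)  # both solenoids release together
        self.write_line("solenoid_right", 1)

# Record the output sequence instead of driving real hardware.
events = []
ProjectionTrigger(lambda ch, lvl: events.append((ch, lvl))).release_ball()
```

In the real apparatus, the pulse travels over the coaxial BNC cable to start, or mark a timing line in, the external recording systems.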

Fig. 2

Position of the solenoids in the ball feed chute (a) and on a mounting bracket on the side of the chute (b)

Testing the ball projection machine

There were a number of practical issues associated with using ball projection technology that needed to be accounted for prior to moving on to experimental testing. These issues and their resolutions are described below.

  1.

    Tennis ball variability

    The design, manufacture, and condition of tennis balls can create a large degree of variability in how they are projected from a machine. For example, extensive use of a tennis ball over trials can lead to changes in pressure over time, leading to variability in the spatiotemporal properties of its flight trajectory. The variability of the projection technology is typically captured in terms of the reliability with which characteristics of ball trajectory, such as average velocity and endpoint spatial location, can be reproduced. Previous research on dynamic interceptive actions, such as one-handed catching, has typically projected tennis balls to a 1-m2 target area adjacent to the shoulder of the catching arm, which is used to assess variability of the end point of ball trajectory (see, e.g., Davids et al., 2002). To minimize variability of ball trajectory due to the condition of the tennis balls, we developed norms based on the speed settings of the ball machine that were to be used for the test (see Table 1). Each ball used for testing was projected from the machine. If the ball’s velocity did not fall within one SD of the mean value, or the ball did not hit a 1-m2 target, it was retested. If a ball failed twice, it was discarded.

    Table 1 Velocity and delay measurements at three different ball speed settings
  2.

    Feed delay

    For accurate synchronization between the time of ball release from the projection machine and the video image of an action, the temporal delay between the solenoids releasing the ball and the ball reaching the release point on-screen needed to be determined. To calculate this delay, a photoreceptive cell was placed at the point of ball release (the machine projection mouth) (Fig. 3b). Customized software, developed in LabVIEW (see Fig. 5), measured the time between the solenoids releasing the ball and the ball passing the photoreceptive cell. The software enables the delay to be calculated for different ball speeds or ball types prior to each study to ensure accurate synchronization (see Table 1), and these values are used when developing script files for the projection software (Fig. 4).

    Fig. 3

    a The data acquisition box mounted on the side of the ball projection machine. b The photoreceptor mount at the mouth of the ball feed chute

    Fig. 4

    Example and explanation of one line of script

    Fig. 5

    Flowcharts of software for a delay measurement and b main program

  3.

    Ball placement in feed chute

    Pilot testing revealed that balls placed into the feed chute had a less variable delay (8.1 ms), as compared with balls that were dropped into the chute (22.0 ms).

  4.

    Noise from solenoid triggering

    When the solenoids release the ball, there is an audible sound associated with projection that potentially can be used by participants as an acoustic signal that the ball is being released. To minimize possible cuing effects associated with this sound, participants can wear ear plugs or ear muffs or listen to white noise through headphones. However, researchers need to be careful when manipulating acoustic informational constraints during experiments. As was mentioned earlier, Egon Brunswik’s (1956) theory of representative design proposes that the informational constraints of a practice simulation should be representative of those that exist in a performance environment. It is somewhat artificial and dysfunctional for learners to become reliant on acoustic information from a ball projection machine for initiating the timing of a catching or hitting movement, for example, because that informational constraint might not be present in some performance environments, such as when facing a bowler in cricket or a pitcher in baseball. Even some types of acoustic information available during practice in ball games (such as the sound of an opponent’s bat or racquet hitting a ball, a ball rotating in the air after a spin bowler's delivery, or a bowler's footfalls in cricket) may be masked in a competitive performance environment by the sound of a noisy crowd and sounds made by opponents (e.g., grunting in the tennis serve). Many skilled performers in sport understand this potential timing benefit for their opponents. For example, when serving in table tennis, elite players are known to stamp their foot at the same time that they hit the ball in order to mask acoustic information that would enable a receiver to decide on the type of spin being imparted to the ball.
Therefore, it is clear that some additional nonvisual informational constraints associated with ball projection technology might need to be masked during experiments and practice, especially when they cannot be relied upon by athletes, such as a batter in cricket, during competitive performance.
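The screening and calibration procedures in points 1 and 2 above can be summarized in a short sketch (written in Python for readability; the actual software is LabVIEW, and the function names here are our own):

```python
import statistics

def feed_delay_norm(trigger_times, photocell_times):
    """Feed-delay calibration: mean and SD of the interval between
    energizing the solenoids and the ball passing the photoreceptive
    cell, computed per speed setting before a study (cf. Table 1)."""
    delays = [p - t for t, p in zip(trigger_times, photocell_times)]
    return statistics.mean(delays), statistics.stdev(delays)

def screen_ball(attempts, mean_velocity, sd_velocity):
    """Ball screening rule: accept if velocity is within 1 SD of the norm
    and the ball hits the 1-m2 target; retest once on failure; discard
    after two failures. `attempts` holds (velocity, hit_target) pairs."""
    for velocity, hit_target in attempts[:2]:
        if abs(velocity - mean_velocity) <= sd_velocity and hit_target:
            return "accept"
    return "retest" if len(attempts) < 2 else "discard"
```

For example, with a speed-setting norm of 20.5 ± 1.0 m/s, a ball projected at 20.0 m/s into the target is accepted, whereas one projected at 25.0 m/s is retested and, on a second failure, discarded.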

Projector and screen

A free-standing image projection screen (Grandview, 1.84 × 2.44 m) had a 15-cm hole cut into its surface, enabling a ball to be projected through it unobstructed (Fig. 1). The hole location was chosen to align with the release point in an image of an actor throwing balls; however, the hole can be placed at different points on the screen, depending on the visual image being presented. A video projector (BenQ MP776st) was placed in front of the screen and adjusted to ensure that the height of the actor's image on the screen was identical to the actor's actual height. The projector was connected to the PC running the software via a SCART lead. To display the image on the screen and run the projection software simultaneously, a dual-display setup must be used.

The visual image

The software integrated external video footage (.avi format), allowing a variety of visual images to be used, such as an actor throwing a ball, a cricket bowler, a tennis server, or a hockey player shooting at a goal, ensuring that the system was versatile enough to examine the organization of movement responses to various interceptive actions. The creation of video stimuli involved two steps:

  1.

    Video images were recorded from the perspective of the intended recipient of the action (e.g., a participant facing a bowler). During filming, the location and speeds of the ball need to be recorded to ensure accurate synchronization of the video images and ball projection speeds and trajectories. In the study by Panchuk et al. (2013), which used the apparatus outlined here, video images of throwing actions were selected only if they matched the required velocity (±1 SD) and the ball traversed the 1-m2 spatial target area at which balls would be projected during the experimental procedure. Panchuk et al.’s study collected data from 1,260 trials, of which only 37 (2.9 %) were discarded due to technical issues, such as the flight path not being projected into the correct 1-m2 target area. The projected ball speeds were also recorded to ensure that the speed of the ball projected from the image of the actor matched that of the ball released from the projection machine (Panchuk et al., 2013). Here, we further tested the variability of the ball flight path. When participants attempted to perform one-handed catches, with the ball arriving just above the shoulder height of the dominant hand within the designated target area, only 46 of a total of 2,010 trials (2.3 %) were disregarded due to an inaccurate trajectory. These data demonstrate a remarkable capacity to maintain a stable ball trajectory, satisfying both experimental control needs and training requirements. When video-based projection technology is designed, it is critical that the actor's actions in the video image and the resultant ball trajectories from the machine are closely matched. Hence, this relationship should be constantly checked during experimental testing, since desynchronization may lead to inappropriate perception–action couplings emerging in participants (Panchuk et al., 2013).

  2.

    The video footage can then be edited using third-party video-editing software such as Final Cut Pro X (Apple Inc, California, U.S.). First, the release point of the ball must be placed in an identical spatial location for each video image of an action to ensure that the video image and projection hole can be aligned correctly. Second, video images can be selectively edited to allow for a variety of experimental manipulations to be created, such as point-light displays and spatial and temporal occlusion (e.g., editing advanced information from the actions of a ball thrower or from ball flight information).


Script for ball projection machine

The software uses a written script that can be adapted for each type of video image being presented (see Fig. 4). The script can be written in Notepad (Windows) or TextEdit (Mac) and saved as a .txt file. The script contains five pieces of information: (1) the video filename, (2) the time point in the video at which the stimulus will start, (3) the time during the video when the ball needs to be released (i.e., synchronized with the moment in the video image at which the ball is released by the actor), (4) the time when the video image ends, and (5) the delay (e.g., the solenoids being released 150 ms prior to the time of ball release). Multiple lines of script can be added to one .txt file (e.g., 30 trials in one file).
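As a concrete illustration, one script line could be parsed as follows. This is a hedged sketch: the field order follows the description above, but the comma-separated layout, the filename, and the function name are our assumptions, since the exact file format is not reproduced here.

```python
def parse_script_line(line):
    # Fields (1)-(5): filename, video start, on-screen release time,
    # video end, and solenoid lead time; all times in seconds.
    fields = [f.strip() for f in line.split(",")]
    video, start, release, end, delay = (
        fields[0], *map(float, fields[1:5]))
    return {
        "video": video,
        "start": start,
        "release": release,
        "end": end,
        "delay": delay,
        # Solenoids fire `delay` s before the on-screen release so the
        # real ball and the on-screen ball emerge together.
        "fire_at": release - delay,
    }

trial = parse_script_line("throw01.avi, 0.0, 2.40, 3.50, 0.150")
```

With a 150-ms measured feed delay, the solenoids for this trial would be triggered at 2.25 s on the video timeline.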

Ball projection interface and controls

Using LabVIEW, software was created to control the integrated machine (see Fig. 5), and a custom interface was designed. The program first enables the script (Fig. 4) to be loaded. For each trial, a ball is loaded into the ball chute; the operator then presses the “next trial” button on the interface. The “play trial” button is pressed to start the presentation of the stimulus, and ball release occurs at the time specified in the script file. Information presented on the interface includes the trial number, the line of script in use, the intended firing frame, the delay time, and the corrected firing frame.
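The "play trial" sequence can be sketched as a simple scheduler (again, a hypothetical Python illustration of the LabVIEW control flow; `schedule` stands in for the program's timing loop, and times are offsets in seconds from the start of playback):

```python
def play_trial(trial, schedule):
    """Schedule the three timed events of one trial: start the video,
    fire the solenoids `delay` s ahead of the on-screen release, and
    stop the video at the scripted end point."""
    t0 = trial["start"]  # playback begins at this point in the video file
    schedule(0.0, "start_video")
    schedule((trial["release"] - t0) - trial["delay"], "fire_solenoids")
    schedule(trial["end"] - t0, "stop_video")

# Record the schedule instead of driving a real timing loop.
log = []
play_trial({"start": 0.5, "release": 2.4, "end": 3.5, "delay": 0.15},
           lambda t, event: log.append((round(t, 3), event)))
```

The corrected firing time shown on the interface corresponds to the second scheduled event, offset by the measured feed delay.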


The research methods outlined in this article for studying behaviors during performance of interceptive actions provide an avenue for increasing the representativeness of experimental designs in psychology, movement science, and sport science. This methodological design provides two important advantages over existing approaches when movement behaviors are studied. First, this method maintains functionality by presenting perceptual information via advanced actions from a performance environment. Yet it can be adapted and manipulated to study important processes, such as cognition, perception, and action, in a systematic fashion. Second, the visual images can be integrated and synchronized with time of ball release, supporting action fidelity by enabling participants to coordinate a natural movement, such as catching or hitting a ball, in response to images of a ball being bowled, pitched, or thrown toward them. Reliance on reductionist methods, such as verbal reports, pointing, or micromovements by participants to simulate actions, is avoided with the implementation of this integrated projection system. These methodological characteristics result in processes of perception and action remaining coupled during task performance. Measurement of responses, such as gaze behaviors or movement kinematics, can be examined in a systematic and controlled manner, resulting in increased levels of representative task design (Brunswik, 1956).

The integrated ball projection system we have presented overcomes the limitations of previous research methods used to investigate dynamic interceptive actions with regard to the availability of advanced visual information from an actor performing an action to deliver a ball toward a participant. Although the integrated ball projection technology discussed in this article has used a 2-D display, its flexible design means that it can be integrated with the projection of 3-D displays in virtual or mixed reality environments. The limitations associated with using a 2-D image display of action are related to the availability of depth information and the presence of informational constraints provided by variables such as the expansion rate of the “projected” object. Additionally, the optical flow experienced by participants interacting with a 2-D screen is somewhat different from that experienced in 3-D virtual or mixed reality environments. For example, in a virtual reality environment (VE), the performer can move to positions within the scene to gain different views of the opponent or background layout. At present, however, these potential advantages of mixed reality environments over 2-D displays are theoretical assumptions that have not been empirically verified in a direct test of performance of dynamic interceptive actions in sport contexts. Regardless, the integrated ball projection system presented here is future-proofed, because it can be reconfigured to allow various modifications depending on the researcher's aims and needs. The implication is that, if researchers develop the capacity to create and project 3-D images in a display area for performance of interceptive actions, the technology can be modified to accommodate these advances.

Of course, this type of development can raise other challenges, such as how to integrate representative haptic and proprioceptive information during task performance. The ball projection technology discussed here currently retains an advantage over VE technology, since it ensures that haptic information is provided to participants through direct interaction with a projected ball in flight. This source of information allows learners to perceive force and velocity information from interception of an approaching projectile, allowing participants to produce actual movement patterns that are needed during actual performance of catching or hitting actions. Feedback on the performance outcomes of an interceptive action is directly available to the participant, as well as the sensory consequences of these actions. Currently, sources of haptic information associated with interceptive actions are challenging to replicate using VE, unless a considerable amount of money is spent.

This apparatus uses technical equipment similar to that found in commercially available systems such as ProBatter (ProBatter Sports, LLC, Milford, CT), which is currently used for batting training in baseball and cricket. That technology has an approximate price of $50,000, which is relatively expensive as compared with this design, costing $6,500. The design is also cheaper than implementing VE systems, which can require various components such as specialist software and hardware, including head-mounted displays and caves, costing up to $330,000 (Miles, Pop, Watt, Lawrence, & John, 2012). The economical design of this apparatus makes it more widely accessible for research institutes with limited budgets, and its flexible configuration means that it can be easily adapted to various methodological designs involving different actions and behavioral responses.

The apparatus has already been used for studying catching behaviors, yet it could be applied to the study of various other interceptive actions, including hitting objects and locomotor pointing. This method also enables integration with other third-party equipment, such as VICON Nexus, ASL eye movement analysis systems, or Myopack EMG systems. This capacity allows researchers to precisely examine the timing of key events within the video simulation alongside other behavioral measures in a multilevel analysis of movement coordination processes (Panchuk et al., 2013). That study showed how the integrated projection technology improved catching success and resulted in the observation of different patterns of movement kinematics, as compared with catching against a traditional ball projection machine with no advanced perceptual information from the actions of a thrower (Panchuk et al., 2013).

Another current limitation of this type of projection technology is having just one hole in the screen from which the ball is released. Croft, Button, and Dicks (2010) demonstrated how cricket batters can fixate gaze at the anticipated point of emergence of a ball from a projection machine. The current projection technology has a specific release point (the hole in the screen), and research involving gaze behaviors needs to verify whether the pickup and use of information in that simulation task actually replicates behaviors observed in the performance context. Despite this limitation, the integrated projection technology still has advantages over other commonly used methods by allowing natural actions to emerge, rather than relying on responses such as button pressing or micromovements.