
Sports Engineering, Volume 21, Issue 4, pp 419–427

Automated LED tracking to measure instantaneous velocities in swimming

  • Josje van Houwelingen
  • Raf M. Antwerpen
  • Ad P. C. Holten
  • Ernst Jan Grift
  • Jerry Westerweel
  • Herman J. H. Clercx
Open Access
Original Article

Abstract

In this paper, a video-based method to automatically track the instantaneous velocity of a swimmer is presented. Single cameras were used to follow a marker (LED) attached to the body. The method is inspired by particle tracking techniques, traditionally used in the field of fluid dynamics to measure local velocities of a fluid flow. During the validation experiment, a white LED was attached to the hip of a swimmer together with a speedometer. The swimmer performed four different stroke types. The velocity profiles captured with LED tracking showed less noise than the speedometer measurements. Only at times when the marker disappeared above the water surface, due to body roll in front crawl and backstroke swimming, did the LED tracking fail to capture the athlete's motion. The algorithm was tested in a 2D case with a single LED as a proof of principle, but should be suitable for implementation in a 3D or multiple-LED analysis.

Keywords

LED tracking · Instantaneous velocity · Swimming · Automated

1 Introduction

The need for measuring the intra-cyclic velocity variations (IVVs) and velocity profiles in an experiment including many swimming trials [1] forms the framework of this study.

The most widely used method to assess the instantaneous velocity of a swimmer is a speedometer, also known as a tachometer [2, 3, 4, 5, 6, 7]. A thin, non-stretching cable is attached to the waist of the swimmer, and the rotation speed within the instrument is measured as the cable is pulled out while swimming. Although the use of a speedometer is straightforward, the method offers only 1D information and restricts the swimmer's motion. Other techniques have been developed as well. For example, [8] measured the instantaneous velocity by a three-dimensional reconstruction of the location of the centre of mass of the swimmer using multiple cameras and anthropometric data calculations, which is associated with a high computational effort. During the last decade, the use of inertial sensors (i.e., accelerometers) to measure kinematic variables in swimming has increased [9]. However, inertial sensor data are noisy, which complicates the calculation of velocity by integration. When applied in real time, the use of inertial sensors also suffers from problems with data transfer through the water (surface), in addition to the integration needed to obtain velocity.

The use of video recordings is a common method in sport practice to analyse performance in both training and competition. Video recordings are attractive because they are a versatile tool that serves as input for a variety of post-processing analyses. Since a visual record is obtained, validation and troubleshooting of methods are simplified. With increasing digital capacity and knowledge of image processing, it is possible to improve video-based data collection. In swimming practice, video recordings are used in a variety of ways: for race analysis (software: SwimWatch, SwimOptimum), qualitative technique analysis, and quantitative 2D or 3D analysis of kinematic parameters (software: Qualisis, Simi [10, 11, 12, 13, 14, 15, 16, 17, 18]) of strokes, starts and turns [19]. The use of markers (on body landmarks) is a useful addition to video analysis to obtain quantitative information. However, detecting the marker is often performed manually or only partially automated, making the analysis costly and time consuming and less feasible for practical usage [20]. Some automated tracking approaches exist (also markerless) ([20, 21, 22, 23] and software for this purpose: Simi, Qualisis), but they are either time consuming or not easily applicable in an aquatic environment. Other areas in which (3D) motion analysis is frequently used and developed are the gaming and film industries. Demands for such applications initiated the development of a real-time 3D motion analysis system (PRIMAS) by Furne et al. [24]. Fast and accurate marker tracking algorithms are essential for the real-time application of this system.

The aim of this study was to determine the instantaneous position and velocity of the swimmer. A tracking algorithm is presented, and the application for tracking a single LED in a two-dimensional space is shown to illustrate the potential of the method. The velocity time series for the four competitive stroke types (butterfly, backstroke, breaststroke and front crawl) were compared with the results of velocity measurements with a speedometer.

2 Methods

2.1 Setup and participant

The experiment was conducted in the 50 m indoor training pool of the Pieter van den Hoogenband Swimming Stadium at Innosportlab in Eindhoven, the Netherlands, which has a camera system along the length of the pool. One swimmer, competent in all four competitive strokes, participated voluntarily and gave written informed consent. The ethical officer of the University of Technology Eindhoven approved the design of this study. The swimmer wore a swimming trunk with a small socket on the hip containing a waterproof white LED. It was assumed that the hip velocity provides a good representation of the forward velocity profile (measured at the centre of gravity), although small differences between the two have been reported [28, 29]. The motion of the swimmer (including the LED) in the sagittal plane was captured by four cameras (Basler, sc1400gc, 50 fps, resolution: 788 \(\times\) 524 pixels), located in the side wall of the pool at a depth of 0.55 m below the water surface, at positions 2.5, 5, 10 and 15 m from the beginning of the lane (see Fig. 1). The swimmer was instructed to follow the line on the bottom of the pool at 3.75 m from the side wall. At that level, the total recording range was approximately 17 m and the images of the different cameras partly overlapped. Two-dimensional tracking was used, since lateral motion was assumed to be negligible. The calibration for translating pixels to metres, and the coupling and post-processing of the recordings of the different cameras, was provided by Innosportlab de Tongelreep (Eindhoven, the Netherlands). Applying the calibration required the selection of a sagittal plane of interest, here chosen at 3.55 m, approximately coinciding with the distance to the LED. The raw video recordings were collected in a binary sequence (.seq) file. This bulk file was read in Matlab and single frames were extracted during the tracking procedure. No preprocessing of the single frames was required for the analysis.
Positions of the LED and the instantaneous horizontal (forward) and vertical velocity of the LED were obtained using an automated LED-tracking algorithm, which was programmed in Matlab (R2015b). The algorithm is presented in Sect. 2.2.
Fig. 1

Experimental setup to measure the instantaneous velocity of the swimmer by means of LED tracking. Four cameras in the wall of the pool record the swimmer. The velocity (V) of the swimmer is also measured by a speedometer (S) mounted on the starting block

The swimmer was instructed to swim 25 m of each stroke type twice (butterfly, backstroke, breaststroke, and front crawl), during which the motion was captured with the cameras. To enable comparison with the velocity profile obtained by the LED tracking, the velocity of the swimmer was also measured by a speedometer (Swimsportec, length: 25 m, velocity range: 0–3 m/s captured linearly on a 5 V range) attached to the waist of the swimmer with a cord. The speedometer data were captured at a sample rate of 32.5 Hz using the Swim Analysis Software 3.0.4 developed by Swimsportec.

2.2 LED tracking algorithm

To track the LED throughout the video recordings, an algorithm consisting of different cases was used. In advance of the tracking, a template of the typical light spot emitted by the white LED was created by selecting a small area in a typical image containing the LED. A single standard template T (\(13 \times 17\) pixels) was used to perform the tracking on all recordings.

The tracking was started by manually selecting the position of the LED in the first frame that captured the LED (case 1 in Fig. 2).
Fig. 2

Schematic of the algorithm used for tracking the LED. The tracking is started by following cases 1–4 consecutively. When the LED remains clearly visible, the code continues in case 4. Otherwise, it returns to case 2 and searches for the LED again from there

In the second frame, an area of \(100 \times 100\) pixels around the position of the first point was used to search for the new position of the LED (case 2 in Fig. 2). The size of this area is a trade-off: correlating too many pixels is computationally expensive, while for too small a window the LED might move out of the search area. Three normalized cross-correlations between the area of interest and the template of the LED were performed on the separate RGB (red–green–blue) colour components, denoted by \(i = R,G,B\). The normalized cross-correlation was defined as [26]:
$$\begin{aligned} c_i(x,y) = \frac{\sum _{k,l} \left[ I_i(k,l) - \bar{I}_{x,y} \right] \left[ T(k-x,l-y) - \bar{T} \right] }{\left( \sum _{k,l} \left[ I_i(k,l) - \bar{I}_{x,y} \right] ^2 \sum _{k,l} \left[ T(k-x,l-y) - \bar{T} \right] ^2 \right) ^{\frac{1}{2}}}, \end{aligned}$$
(1)
with T the template of the LED with size \((K \times L)\), \(\bar{T}\) the mean of the template, I the area of interest on the image, \(\bar{I}_{x,y}\) the mean of \(I_i\) in the region under T, in which \(k = x - \frac{1}{2}K, x - \frac{1}{2}K + 1, ..., x + \frac{1}{2}K\) and \(l = y - \frac{1}{2}L, y - \frac{1}{2}L + 1, ..., y + \frac{1}{2}L\), and \(c_i(x,y)\) the correlation coefficient at the position \((x,y)\) relative to the predicted position [30]. The three resulting cross-correlation coefficients were multiplied, giving \(c(x,y) = c_R \times c_G \times c_B\). The peak in the correlation corresponds to the displacement of the LED. To further increase accuracy, a Gaussian peak detection was applied to find the peak in the correlation. To this end, a Gaussian curve,
$$\begin{aligned} f(x,y) = a \exp \left( - \frac{(x-x_0)^2}{2\sigma _x^2} - \frac{(y-y_0)^2}{2\sigma _y^2}\right) , \end{aligned}$$
(2)
with a the height of the peak, \((x_0,y_0)\) the position of the maximum and \(\sigma _{x,y}\) the characteristic width of the Gaussian curve, was fitted through the results using a non-linear least-squares algorithm available in Matlab (lsqcurvefit). This allowed the algorithm to find the maximum with sub-pixel accuracy [26]. The peak corresponds to the displacement of the LED and thus to the new position.
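The original implementation is in Matlab; as an illustration, the template-matching step of Eqs. (1) and (2) can be sketched in Python/NumPy as below. The function names (`ncc_map`, `track_led`) are ours, and the full 2D Gaussian fit via lsqcurvefit is replaced here by a lighter per-axis three-point Gaussian estimator, a common sub-pixel refinement in particle tracking.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation (Eq. 1) of a single colour channel,
    evaluated at every fully overlapping template position."""
    K, L = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    H, W = image.shape
    out = np.zeros((H - K + 1, W - L + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            win = image[x:x + K, y:y + L]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            out[x, y] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

def track_led(rgb_roi, rgb_template):
    """Combine the per-channel correlations (c = cR * cG * cB) and locate
    the peak, refined per axis with a three-point Gaussian estimator."""
    maps = [ncc_map(rgb_roi[..., ch], rgb_template[..., ch]) for ch in range(3)]
    c = maps[0] * maps[1] * maps[2]
    px, py = np.unravel_index(np.argmax(c), c.shape)

    def subpix(f_m, f_0, f_p):
        # Gaussian three-point estimator; zero offset if a sample is non-positive
        if min(f_m, f_0, f_p) <= 0:
            return 0.0
        lm, l0, lp = np.log(f_m), np.log(f_0), np.log(f_p)
        d = lm - 2 * l0 + lp
        return 0.5 * (lm - lp) / d if d != 0 else 0.0

    dx = subpix(c[px - 1, py], c[px, py], c[px + 1, py]) if 0 < px < c.shape[0] - 1 else 0.0
    dy = subpix(c[px, py - 1], c[px, py], c[px, py + 1]) if 0 < py < c.shape[1] - 1 else 0.0
    return px + dx, py + dy, c
```

The returned correlation map `c` is also what the PCE detection criterion of Sect. 2.2.1 operates on.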

When two previous positions are known, a prediction of the next position of the LED can be made, \(\mathbf {x}_{n+1} = 2\mathbf {x}_n - \mathbf {x}_{n-1}\) (case 3 in Fig. 2), with \(\mathbf {x}_{n+1}\) the predicted position and \(\mathbf {x}_n\) and \(\mathbf {x}_{n-1}\) the two previous positions. The same cross-correlation technique and peak detection were applied, but now using an area of interest around \(\mathbf {x}_{n+1}\) with the size of the template image.

The main part of the tracking relies on the following procedure (case 4 in Fig. 2). The last part of the track (up to nine positions) was smoothed using a moving average filter. Then, an approximate location of the LED in the next frame was estimated based on this smoothed track. The prediction was made with:
$$\begin{aligned} \mathbf {x}_{n+1}&= \mathbf {x}_n + \frac{1}{3} \left( \vert \mathbf {a} \vert + 2 \vert \mathbf {b} \vert \right) \frac{\mathbf {d}}{\vert \mathbf {d} \vert }, \nonumber \\ \mathbf {d}&= 2 \mathbf {b} \frac{\mathbf {a} \cdot \mathbf {b}}{\vert \mathbf {b} \vert ^2} - \mathbf {a}, \end{aligned}$$
(3)
with \(\mathbf {x}_n\) the coordinates of the last point in the track, \(\mathbf {a} = \mathbf {x}_{n-1} - \mathbf {x}_{n-2}\) and \(\mathbf {b} = \mathbf {x}_n - \mathbf {x}_{n-1}\) the vectors between the last three data points in the track, and \(\mathbf {d}\) a directional vector which contains information about the curvature of the track. A more conventional prediction based directly on the acceleration gave errors due to the large fluctuations in swimming; a weighted average of \(\mathbf {a}\) and \(\mathbf {b}\) gave better results. Similar to case 3, an area of interest with the size of the template image was selected around the predicted position in the new frame. The new position was found by again applying the cross-correlation and the Gaussian peak-detection fit.
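Equation (3) can be written directly from the last three track points. The following is an illustrative Python sketch (the original is Matlab; the function name is ours):

```python
import numpy as np

def predict_next(x_nm2, x_nm1, x_n):
    """Curvature-aware prediction of the next LED position (Eq. 3).
    a and b are the last two displacement vectors; d encodes the local
    direction of the track, and the step length is the weighted average
    (|a| + 2|b|) / 3."""
    a = np.asarray(x_nm1, dtype=float) - np.asarray(x_nm2, dtype=float)
    b = np.asarray(x_n, dtype=float) - np.asarray(x_nm1, dtype=float)
    d = 2 * b * (a @ b) / (b @ b) - a
    step = (np.linalg.norm(a) + 2 * np.linalg.norm(b)) / 3.0
    return np.asarray(x_n, dtype=float) + step * d / np.linalg.norm(d)
```

For a straight track the prediction reduces to simple extrapolation, while for a curved track the direction vector d bends the step towards the recent heading.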
Finally, the noise in the track was reduced by smoothing the velocities (in pixels/s) using a moving average filter (the Matlab function nanfastsmooth) over three subsequent velocity data points:
$$\begin{aligned} \mathbf {v}_{\text {smooth}}(n) = \frac{1}{3}(\mathbf {v}_{n+1} + \mathbf {v}_{n} + \mathbf {v}_{n-1}). \end{aligned}$$
(4)
The new positions were calculated by integrating the velocity.
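The smoothing and re-integration step can be illustrated as follows. This is a NumPy sketch under our own naming; the paper uses the Matlab routine nanfastsmooth, whereas this version simply applies Eq. (4) at the interior points of the track:

```python
import numpy as np

def smooth_track(positions, fps):
    """Three-point moving average of the frame-to-frame velocities (Eq. 4),
    followed by re-integration of the smoothed velocities to positions."""
    x = np.asarray(positions, dtype=float)
    v = np.diff(x, axis=0) * fps                 # raw velocities (pixels/s)
    v_s = v.copy()
    v_s[1:-1] = (v[:-2] + v[1:-1] + v[2:]) / 3.0 # Eq. (4), interior points only
    # integrate the smoothed velocity back to positions, anchored at x[0]
    return x[0] + np.concatenate([np.zeros((1,) + x.shape[1:]),
                                  np.cumsum(v_s, axis=0) / fps])
```

A uniformly moving track passes through unchanged, since its velocities are already constant.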

2.2.1 Constraints

As long as three previous positions are known, the next position can be determined using case 4. In practice, however, this procedure was not straightforward. Additional difficulties might arise, such as the LED disappearing due to body roll, the view being blocked by passing body parts, or reflections of the LED in the water surface compromising identification of the correct LED position in the image. Therefore, some constraints were added to overcome these difficulties and increase the robustness of the automated tracking.

In this 2D experiment, with an LED on the hip, no difficulties were expected for breaststroke swimming. In all other strokes, the hand could block the LED, which could lead to an incomplete velocity profile. In general, owing to the speed of the hand, this was of such short duration (\(\sim 1\) frame) that no hindrance was observed. In backstroke and front crawl swimming, the LED disappeared for a longer period of time (\(\sim 15\) frames) when it reached the water surface during the body roll, which caused some interruption of the acquired velocity signal.

To quantify these mishaps in detecting the LED location, the peak-to-correlation energy (PCE) of the cross-correlation was used:
$$\begin{aligned} PCE = \frac{\vert c_{\max } \vert ^2}{\sum _{x,y}{\vert c(x,y) \vert ^2}}, \end{aligned}$$
(5)
with \(c(x,y)\) the result of the normalized cross-correlation, \(c_{\max }\) the peak value and the denominator the correlation energy [31]. When the PCE was below a threshold (set to 0.08), i.e., when \(c_{\max }\) was low compared to the noise, it was assumed that the LED was not properly detected and the code returned to case 2. As a new starting point (prediction), the last predicted position plus the average horizontal displacement up to that point was used. In the vertical, a fixed minor displacement in the negative vertical direction was added, to favour tracking of the LED instead of its reflection in the water surface. As long as the PCE remained below the threshold value, the tracking continued with case 2. Note that the threshold in case 2 (0.025) must be chosen differently from that in cases 3 and 4, since the window size was larger and the characteristics of the enlarged image, and thus of \(c(x,y)\), were completely different.
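The PCE check of Eq. (5) is compact enough to sketch directly (Python; function names are ours, thresholds from the text):

```python
import numpy as np

def pce(c):
    """Peak-to-correlation energy (Eq. 5): sharpness of the correlation
    peak relative to the total correlation energy."""
    c = np.asarray(c, dtype=float)
    return np.abs(c).max() ** 2 / (np.abs(c) ** 2).sum()

def led_detected(c, threshold=0.08):
    """Detection criterion for cases 3 and 4 (threshold 0.08 in the text;
    case 2 uses 0.025 because of its larger search window)."""
    return pce(c) >= threshold
```

A single sharp peak yields a PCE close to 1, while a flat, noise-like correlation map yields a PCE of order 1/N for N map entries, well below the threshold.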

Another issue arose when multiple peaks appeared in \(c(x,y)\): the maximum of \(c(x,y)\) satisfied the PCE threshold, but did not necessarily correspond to the peak of interest. The presence of a second peak was detected when \(R_{\max }/c_{\max } > 0.8\), with R the residual of the Gaussian peak detection. The location of this second peak was determined with a Gaussian peak fit. When the location of the second peak was closer to the predicted position, this location was set as the new position in the track.

2.2.2 Data analysis

The output of the tracking algorithm was a (\(3 \times n\)) array containing the frame numbers and horizontal and vertical positions in pixel coordinates (sub-pixel accurate) for a single camera. These arrays were converted to positions in real-world coordinates (m) using the calibration. The velocities were calculated using a first-order backward finite difference method:
$$\begin{aligned} \mathbf {v}_n = (\mathbf {x}_n - \mathbf {x}_{n-1})f, \end{aligned}$$
(6)
with f the frame rate (50 fps).
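In code, Eq. (6) is a one-liner on the calibrated track (an illustrative Python sketch; the function name is ours):

```python
import numpy as np

def velocities(positions_m, fps=50.0):
    """First-order backward difference (Eq. 6): v_n = (x_n - x_{n-1}) * f,
    applied to calibrated real-world positions in metres."""
    x = np.asarray(positions_m, dtype=float)
    return np.diff(x, axis=0) * fps
```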

To obtain a single data array for each trial, the individual arrays of each of the four cameras were coupled. The cameras were synchronized and had an overlapping view at the level of the swimmer, which was useful for coupling. The cut-off (and coupling) of the arrays was chosen at the frame number halfway into the overlap region. A possibly missing data point in this part of the signal was replaced by the data point of the other camera.
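The coupling can be sketched as follows. This is a simplified Python illustration under assumptions of ours (the function name and the (frame, x, y) row layout are hypothetical): instead of cutting exactly at the halfway frame of each overlap, it keeps the first camera's value per frame and fills NaN gaps from a later camera when available, which reproduces the gap-filling behaviour described above.

```python
import numpy as np

def couple_cameras(tracks):
    """Merge per-camera lists of (frame, x, y) rows into one trial array.
    Cameras are synchronized; earlier cameras take precedence, and a NaN
    position is replaced by another camera's value when available."""
    merged = {}
    for track in tracks:                       # cameras ordered along the pool
        for frame, x, y in track:
            f = int(frame)
            if f not in merged or np.isnan(merged[f][0]):
                merged[f] = (x, y)
    frames = sorted(merged)
    return np.array([[f, *merged[f]] for f in frames])
```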

The velocity data of the speedometer contained a significant amount of noise, as can be observed in Fig. 3, and were therefore filtered before further analysis. A traditional fourth-order Butterworth filter (cut-off frequency 5 Hz) was applied [32]. The measurements with the speedometer and cameras were synchronized by shifting the signals such that the characteristic peaks for each of the four strokes coincided.
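Such a filter is a one-liner with SciPy. The sketch below is ours, not the authors' code: the text specifies only the filter order and cut-off, and we additionally apply the filter forward and backward (filtfilt) so that no phase lag is introduced that would disturb the synchronization.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_speedometer(v, fs=32.5, fc=5.0, order=4):
    """Fourth-order Butterworth low-pass (cut-off 5 Hz) for the noisy
    speedometer signal, applied zero-phase with filtfilt."""
    b, a = butter(order, fc / (fs / 2.0), btype="low")
    return filtfilt(b, a, v)
```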

For comparison of the data, an averaged velocity profile of a stroke was created. The areas around characteristic peaks within the velocity profile were selected manually to distinguish single stroke cycles. The location of the local minimum was then obtained automatically by fitting a second-order polynomial through each selected area and differentiating. Each stroke cycle was subsequently resampled to 100 sample points, and the cycles were averaged.
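The resampling and averaging step can be sketched as follows (Python; `cycle_bounds` is a hypothetical input holding the start/end sample indices produced by the peak selection described above):

```python
import numpy as np

def average_stroke_profile(velocity, cycle_bounds, n_samples=100):
    """Time-normalize each stroke cycle to n_samples points by linear
    interpolation, then average across cycles, returning the mean profile
    and its standard deviation (as in the Fig. 4 profiles)."""
    v = np.asarray(velocity, dtype=float)
    resampled = []
    for start, end in cycle_bounds:
        t_old = np.linspace(0.0, 1.0, end - start)
        t_new = np.linspace(0.0, 1.0, n_samples)
        resampled.append(np.interp(t_new, t_old, v[start:end]))
    resampled = np.array(resampled)
    return resampled.mean(axis=0), resampled.std(axis=0)
```

For perfectly repeated cycles the standard deviation vanishes; in practice it reflects the inter-cyclic variability discussed in Sect. 4.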

3 Results

The settings (template image of the LED, thresholds, window size) of the LED tracking algorithm were kept constant throughout the analysis of the different recordings. During the analysis, the LED was tracked automatically; only the starting position was selected manually. Note that this step could also be automated, but this was not implemented in this study. After periods in which the LED was lost, it was captured again automatically. In Table 1 the typical computation times are shown. The calculations were performed within the Matlab interface (tic toc function) on a single processor. The analysis of the swimmer's passage through all frames of a single camera took on the order of 30 s.
Table 1

Typical computation time per frame of operations in the algorithm, retrieved from the computer used for the analysis. The total time varied by about 50%

Execution              Time (s/frame)
Select frame           0.014
Detect target          0.023
Predict next position  0.001
Apply calibration      0.001
Visualize results      0.130
Total                  0.17

Fig. 3

Typical velocity profiles obtained with the LED tracking and speedometer data, measured from start up to the last stroke captured within the video recording. The LED tracking captures a horizontal (\(v_x\)) and vertical (\(v_y\)) velocity component. In addition to the velocity profiles obtained with tracking, the acceleration profile in the x direction is also shown. For the front crawl trial the filtered speedometer result is also shown in red. Stroke cycles are indicated with the vertical dashed lines

In Fig. 3 typical velocity profiles of a breaststroke trial (a) and front crawl trial (b) are shown for both the LED tracking and the speedometer data. For the tracking results of the breaststroke, the acceleration profile in the x direction is shown (derivative of the velocity profile). Comparing the peaks of maximum velocity and maximum acceleration, there is a time shift of \(\sim 0.15\) s. When the LED is lost from the camera view, the algorithm returns to case 2 and the computational time is slightly increased. These trials included the push-off start, the underwater phase and \(\sim 7\) stroke cycles for both the breaststroke and front crawl. The LED tracking provided a horizontal (\(v_x\)) and vertical (\(v_y\)) velocity component, whereas the speedometer measures a single component, approximately in the forward direction (\(\sim v_x\)). As can be observed, the raw speedometer signal contains more noise. Although some minor parts of the velocity profile are missing in the LED tracking of the front crawl due to body roll, a clear stroke cycle pattern is visible, helped by the additional \(v_y\) component, in which the stroke pattern is clearly distinguishable. Difficulties were experienced with the analysis of the velocity profile obtained with the speedometer: the signal degrades, i.e., becomes noisier, as the distance to the speedometer increases. It is therefore of added value to have the corresponding LED tracking data, which could be incorporated in the analysis of the speedometer data.

In Fig. 4 the averaged time-normalized velocity profiles for each stroke obtained with the LED tracking are presented. A number of complete stroke cycles were used in the averaging procedure to obtain the averaged velocity profiles and their standard deviations. The results are shown for the butterfly (Fig. 4a), backstroke (Fig. 4b), breaststroke (Fig. 4c), and front crawl (Fig. 4d), respectively.
Fig. 4

Averaged time-normalized velocity profiles for each stroke obtained with the LED tracking. The blue lines correspond with the \(v_x\) profile and red lines with the \(v_y\) profile. The coloured dashed lines indicate the standard deviation. The fine black lines in the back show the single stroke cycles used in the averaging. The vertical black dashed lines with numbers indicate stroke phases, which were recognizable in the video recordings. Butterfly: 1 1st kick, 2 arms stretched, 3, 4 insweep, 4, 5 upsweep + 2nd kick, 5 release. Backstroke: 6 release right arm, 6, 7 1st downsweep left arm + release right arm, 7 catch, 7, 8 upsweep left arm, 8, 9 2nd downsweep left arm, 9 start 1st downsweep right arm + release left arm, 10 start 2nd downsweep right arm. Breaststroke: 11, 12 leg propulsion, 12 legs extended, 13 catch, 14 release, 14–11 recovery legs (and arms). Front crawl: 15–16 downsweep right arm, 16 catch + start insweep left arm, 17, 18 upsweep right arm, 18, 19 downsweep left arm, 19 catch + start insweep left arm

In the velocity profiles of the backstroke and the front crawl, a gap in the velocity data can be observed where the LED disappears above the water surface due to the body roll. This results in larger inaccuracies around that gap. In butterfly and breaststroke swimming, the single stroke cycles are selected from \(v_x\). During front crawl and backstroke swimming, however, the forward velocity variations are considerably smaller, which renders the selection of stroke cycles based on the horizontal velocity \(v_x\) alone more difficult. Moreover, fluctuations such as noise might appear more pronounced in \(v_x\). Therefore, characteristic points in the vertical velocity \(v_y\) are used for the identification of single strokes.

Looking at the velocity profiles given in Fig. 4 and the video recordings attached as supplementary material, some actions of the swimmer can be identified, which are indicated by the numbered, dashed vertical lines. However, the actual onset of an action is first reflected in the peaks of the acceleration profile, which are shifted in time relative to the peaks in the velocity profile. The observed effects in the velocity profile might therefore be misplaced in time (\(\sim 0.15\) s, based on comparison with the acceleration profile (Fig. 3a)). The \(v_x\) profile of the butterfly stroke (Fig. 4a) shows three characteristic peaks, which correspond to the first kick, the sweeping in of the hands, and the second kick in combination with the sweeping up of the arms, respectively [33]. The action of the legs and arms in breaststroke swimming is represented by the first and second peak in \(v_x\) (Fig. 4c), respectively. The steep deceleration after the second peak corresponds to the high drag experienced in the arm and leg recovery phase [6, 33]. With some effort, and considering that we are looking at cyclic motion, two broad peaks corresponding to the asymmetric action of the arms can be distinguished in the \(v_x\) profiles of the backstroke and front crawl (Fig. 4b, d) [33]. However, the size of these intra-cyclic velocity variations in \(v_x\) is comparable to the variations related to inter-cyclic velocity fluctuations. In \(v_y\) the variations are larger and stroke actions are more distinguishable. The minima before the acceleration around 0–0.2 (normalized time) in \(v_y\) for backstroke and front crawl coincide with the start of the left arm stroke and right arm stroke, respectively.

In Fig. 5 the average velocity profiles of the speedometer measurements are shown, with the average velocity profile of the LED tracking measurements added in grey in the background. The averaging is performed on the same strokes as in Fig. 4. In Table 2, an overview of the means and standard deviations of the two measurement methods is given. In general, the averaged velocity of the LED tracking was slightly higher and the maximum and averaged deviations were smaller.
Fig. 5

Averaged time-normalized velocity profiles for each stroke obtained with the speedometer. The blue lines correspond to the \(\sim v_x\) profile. The blue dashed lines indicate the standard deviation. The grey lines in the back show the mean \(v_x\) profile measured with the LED tracking

Two striking differences between the results of the LED tracking and speedometer are the missing second peak in the velocity profile of the butterfly and the lower second peak within the breaststroke signal of the speedometer. The ratio between the intra-cyclic velocity variations and the inter-cyclic fluctuations (captured in the standard deviation) in \(v_x\) of the backstroke and front crawl is too low to identify clear differences. Meanwhile, \(v_y\) contains large variations during the stroke cycle, which, fortunately, are captured with the LED tracking.
Table 2

Overview of the mean and standard deviations of the velocity profiles obtained with the LED tracking/speedometer

Stroke type    Cycles #  \(\bar{v}_x\) (m/s)  \(\bar{\sigma}_x\) (m/s)  \(\sigma_{x,\max}\) (m/s)  \(\max(\vert v_x - \bar{v}_x \vert)\) (m/s)
Butterfly      7         1.27/1.16            0.06/0.10                 0.15/0.17                  0.21/0.34
Backstroke     8         1.11/1.06            0.07/0.10                 0.18/0.20                  0.27/0.30
Breaststroke   14        0.94/0.85            0.07/0.09                 0.13/0.18                  0.22/0.41
Front crawl    9         1.25/1.17            0.06/0.13                 0.19/0.23                  0.19/0.52

4 Discussion

The advantages of LED tracking over a speedometer include a visual record that is inherently obtained, the distinction between the velocity components \(v_x\) and \(v_y\), the collection of position and acceleration data, and the fact that swimmers are not hindered by the measurement device itself. The only requirement is that the pool must be equipped with a (calibrated) camera system, a fast computer and suitable LEDs. The approach in this study was inspired by particle tracking velocimetry (PTV) used in the field of experimental fluid dynamics, a Lagrangian method that measures the displacement of small submerged particles through subsequent frames to determine the particle velocities (\(\mathbf {v}(\mathbf {x},t) = \varDelta \mathbf {x} / \varDelta t\), with \(\mathbf {v}\) the velocity vector at position \(\mathbf {x}\), \(\varDelta \mathbf {x} = \mathbf {x}_{n+1} - \mathbf {x}_{n}\) the displacement and \(\varDelta t\) the time between frames n and \(n+1\)). PTV methods can yield results at a sub-pixel level (\(\sim 0.3\) pixel), which offers a high potential for accurate measurements [26, 27]. Keeping in mind the successes of the PTV technique in experimental fluid dynamics, the application of these ideas to (LED) marker tracking of human motion is interesting. It is possible to expand this technique towards multiple-LED tracking for a full 3D analysis. This would simplify video-based 3D motion and coordination analysis [34] to understand the relationship between body actions and acceleration. In combination with augmented reality tools, automatic LED tracking may become a powerful interactive method to optimize interventions in stroke training.

The LED tracking shows similarities with the results found in the literature, especially for the butterfly and breaststroke profiles [5, 6, 29, 33]. However, compared to the literature and to the speedometer data in this experiment, it is striking that the second peak in breaststroke is larger than the first peak. Analysis of the video footage shows that the LED analysis is correct, and that the observed differences with the speedometer are due to inaccuracy of the speedometer. Also, the second peak in butterfly swimming is more pronounced than described in the literature and is completely missing from the speedometer data in this experiment. Presumably, the speedometer system is not sensitive to sudden accelerations when the cord has slack, so that not all variations in horizontal speed are observed, or additional fluctuations even appear due to unwanted extra degrees of freedom of the cord.

In previous studies investigating the velocity profile of backstroke and front crawl, two broad peaks are identified that correspond to the actions of the arms, with several smaller peaks on top, probably due to the action of the legs [29, 33]. These peaks were not clearly identifiable in this study, but the recordings (see Online Resource 1–4) demonstrate that the LED tracking was not compromised. Of course, the technique of the swimmer might give a distorted view of the capability of selecting individual strokes in the backstroke and front crawl. For example, it is assumed that an experienced swimmer swims more efficiently, with fewer velocity variations [1, 33, 35]. Therefore, it would be useful to perform similar measurements for different swimmers.

It was observed that the mean velocity obtained with the LED tracking was slightly higher (up to \(\sim 0.1\) m/s) for all strokes compared to the speedometer. A minor deviation might be explained by the assumptions that (i) the participant swims at the centre of the lane (3.75 m), and (ii) the LED moves only in the sagittal plane.

From the butterfly and breaststroke results, it can be concluded that the standard deviation in LED tracking was mainly determined by the inter-cyclic differences of the athlete. The additional inaccuracies due to noise were larger for the speedometer. Especially in the speedometer measurement of the backstroke and front crawl trials there were difficulties in selecting proper periods. In all likelihood, small errors arise when choosing the start of a single stroke, which could result in a small shift of the time-normalized velocity profile.

In this LED tracking approach, the only signal loss occurred with the body roll. Therefore, this technique seems useful for the analysis of breaststroke and butterfly swimming. Perhaps such a loss of signal can be avoided by performing a 3D analysis with multiple cameras.

Concerning the computational times of the LED tracking (Table 1), at this stage the algorithm is suitable for quickly analysing velocity profiles in experiments. To perform real-time analysis at the pool side for application in day-to-day training or in competition, the algorithm must be further optimized, and a suitable interface combining video capture, tracking computations and visualization is advisable for user-friendliness. Much time can be saved by porting the Matlab code to a compiled language (such as C/C++) and parallelizing the computation over multiple processors. A fast laptop PC should then be sufficient to perform the analysis. In fact, the method has already been used successfully in an experiment on breaststroke, in which the intra-cyclic velocity variations and velocity profiles were the focus of investigation [1].

5 Conclusion

In this study, the potential of a technique for measuring the instantaneous velocity of a swimmer by automatically tracking an LED marker was described and compared with speedometer measurements. Although some velocity profiles were incomplete due to body roll during the backstroke and front crawl stroke cycles, the LED tracking technique is more convenient in that a visual record is obtained, which can be used to check the accuracy. In general, the LED tracking shows little noise, and individual strokes can be distinguished better, aided by the additional information of the vertical velocity component. Since the settings were kept constant throughout the analysis of all recordings, the technique is shown to be robust. Moreover, without much optimization the technique runs close to real time, which makes it attractive for practical use. Extension towards 3D LED tracking is straightforward.

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that there are no conflicts of interest.

Supplementary material

Online Resource 1: Illustrative movie of the LED tracking result in butterfly. (mp4 7077 KB)

Online Resource 2: Illustrative movie of the LED tracking result in backstroke. (mp4 9733 KB)

Online Resource 3: Illustrative movie of the LED tracking result in breaststroke. (mp4 9520 KB)

Online Resource 4: Illustrative movie of the LED tracking result in front crawl. (mp4 7440 KB)

References

  1. van Houwelingen J, Roerdink M, Huibers AV, Evers LLW, Beek PJ (2017) Pacing the phasing of leg and arm movements in breaststroke swimming to minimize intra-cyclic velocity fluctuations. PLoS One 12(10):e0186160
  2. Craig A, Pendergast D (1979) Relationships of stroke rate, distance per stroke, and velocity in competitive swimming. Med Sci Sports 11(3):278
  3. Manley P, Atha J (1992) Intra-stroke velocity fluctuations in paced breaststroke swimming. In: Biomechanics and medicine in swimming VI. E & FN Spon, London, pp 151–160
  4. Alberty M, Sidney M, Huot-Marchand F, Hespel J, Pelayo P (2005) Intracyclic velocity variations and arm coordination during exhaustive exercise in front crawl stroke. Int J Sports Med 26(06):471
  5. Chollet D, Seifert L, Leblanc H, Boulesteix L, Carter M (2004) Evaluation of arm-leg coordination in flat breaststroke. Int J Sports Med 25(07):486
  6. Leblanc H, Seifert L, Trouny-Chollet C, Chollet D (2007) Intra-cyclic distance per stroke phase, velocity fluctuations and acceleration time ratio of a breaststroker's hip: a comparison between elite and nonelite swimmers at different race paces. Int J Sports Med 28:140
  7. Feitosa W, Costa M, Morais J, Garrido N, Silva A, Lima A, Barbosa T (2013) In: Shiang T, Ho W, Huang P, Tsai C (eds) 31st International conference on biomechanics in sports (ISBS conference proceedings archive), Taipei, Taiwan
  8. Psycharakis S, Naemi R, Connaboy C, McCabe C, Sanders R (2010) Three-dimensional analysis of intracycle velocity fluctuations in front crawl swimming. Scand J Med Sci Sports 20(1):128
  9. Stamm A, Thiel D, Burkett B, James D (2011) Towards determining absolute velocity of freestyle swimming using 3-axis accelerometers. Proc Eng 13:120
  10. Andrews C, Bakewell J, Scurr J (2011) Comparison of advanced and intermediate 200-m backstroke swimmers' dominant and non-dominant shoulder entry angles across various swimming speeds. J Sports Sci 29(7):743
  11. Cortesi M, Fantozzi S, Gatta G (2012) Effects of distance specialization on the backstroke swimming kinematics. J Sports Sci Med 11(3):526
  12. Gatta G, Cortesi M, Lucertini F, Piero B, Sisti D, Fantozzi S (2015) Path linearity of elite swimmers in a 400 m front crawl competition. J Sports Sci Med 14(1):69
  13. Komar J, Leprêtre P, Alberty M, Vantorre J, Fernandes R, Hellard P, Chollet D, Seifert L (2012) Effect of increasing energy cost on arm coordination in elite sprint swimmers. Hum Mov Sci 31(3):620
  14. Osborough C, Payton C, Daly D (2010) Influence of swimming speed on inter-arm coordination in competitive unilateral arm amputee front crawl swimmers. Hum Mov Sci 29(6):921
  15. Cornett A, White J, Wright B, Willmott A, Stager J (2011) Racing start safety: head depth and head speed during competitive swim starts into a water depth of 2.29 m. Int J Aquat Res Educ 5(1):4
  16. Fischer S, Kibele A (2016) The biomechanical structure of swim start performance. Sports Biomech 15(4):397
  17. Vantorre J, Seifert L, Fernandes R, Vilas Boas J, Chollet D (2010) Comparison of grab start between elite and trained swimmers. Int J Sports Med 31(12):887
  18. Takagi H, Sugimoto S, Nishijima N, Wilson B (2004) Swimming: differences in stroke phases, arm-leg coordination and velocity fluctuation due to event, gender and performance level in breaststroke. Sports Biomech 3(1):15
  19. Mooney R, Corley G, Godfrey A, Osborough C, Quinlan L, OLaighin G (2015) Application of video-based methods for competitive swimming analysis: a systematic review. Sports Exerc Med 1(5):133–150
  20. Barris S, Button C (2008) A review of vision-based motion analysis in sport. Sports Med 38(12):1025
  21. Ceccon S, Ceseracciu E, Sawacha Z, Gatta G, Cortesi M, Cobelli C, Fantozzi S (2013) Motion analysis of front crawl swimming applying CAST technique by means of automatic tracking. J Sports Sci 31(3):276
  22. Slawson S, Conway P, Justham L, West A (2010) The development of an inexpensive passive marker system for the analysis of starts and turns in swimming. Proc Eng 2(2):2727
  23. Magalhaes F, Sawacha Z, Di Michele R, Cortesi M, Gatta G, Fantozzi S (2013) Effectiveness of an automatic tracking software in underwater motion analysis. J Sports Sci Med 12(4):660
  24. Sabel J, Van Veenendaal H, Furnee E (1994) In: Optical 3D measurement techniques II: applications in inspection, quality control, and robotics, SPIE vol 2252. The International Society for Optical Engineering, Zurich, Switzerland, pp 530–530
  25. Trangbaek S, Rasmussen C, Andersen TB (2015) On the development of inexpensive speed and position tracking system for swimming. Sports Technol 8(1–2):30
  26. Adrian R, Westerweel J (2011) Particle image velocimetry, vol 30. Cambridge University Press, Cambridge
  27. Cowen E, Monismith S (1997) A hybrid digital particle tracking velocimetry technique. Exp Fluids 22(3):199
  28. Maglischo C, Maglischo E, Santos T (1987) The relationship between the forward velocity of the center of gravity and the forward velocity of the hip in the four competitive strokes. J Swim Res 3(2):11
  29. Costill D, Lee G, D'Acquisto L (1987) Video-computer assisted analysis of swimming technique. J Swim Res 3(2):5
  30. Lewis J (1995) Fast template matching. In: Vision Interface, Canadian Image Processing and Pattern Recognition Society, Quebec City, Canada, pp 120–123
  31. Xue Z, Charonko J, Vlachos P (2013) In: PIV13, 10th international symposium on particle image velocimetry, Delft, The Netherlands
  32. Butterworth S (1930) On the theory of filter amplifiers. Wirel Eng 7(6):536
  33. Barbosa T, Marinho D, Costa M, Silva A (2011) In: Klika V (ed) Biomechanics in applications. InTech
  34. Schreven S, Beek P, Smeets J (2015) Optimising filtering parameters for a 3D motion analysis system. J Electromyogr Kinesiol 25(5):808
  35. Nigg B (1983) Selected methodology in biomechanics with respect to swimming. Biomech Med Swim, pp 72–80

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Josje van Houwelingen (1), Email author
  • Raf M. Antwerpen (1)
  • Ad P. C. Holten (1)
  • Ernst Jan Grift (2)
  • Jerry Westerweel (2)
  • Herman J. H. Clercx (1)

  1. Department of Applied Physics, Eindhoven University of Technology and J.M. Burgers Centre for Fluid Dynamics, Eindhoven, The Netherlands
  2. Laboratory for Aero and Hydrodynamics, Delft University of Technology and J.M. Burgers Centre for Fluid Dynamics, Delft, The Netherlands