Measuring the complex behavior of animals that often act in groups has been a challenge. In classic work, Altmann (1974) discussed the strategies of managing the complexity of assessing social animal behavior by, among other things, using randomly timed sampling sessions and limiting the number of continuously tracked (focal) animals. Since then, tracking individual animals and humans using video from digital cameras has become relatively straightforward. However, the automated assessment of the activity of large groups of animals in their natural environments has yet to see the same advances as individual tracking. Although manually assessing group movement is usually possible, automating this process with computer vision software can greatly increase the speed of data collection and expose the details of group movement structure to an extent that was not available before. Here, we briefly cover the progress of group activity detection software and how a new tool—SwarmSight—advances the state of the art.

Many animals are social and exhibit complex group behavior. For example, some fish gather into shoals while migrating, searching for food, and avoiding predators (Cushing & Jones, 1968). Many birds accomplish the same goals, and also optimize group aerodynamic efficiency, through flocking (Higdon & Corrsin, 1978). Insects like locusts form swarms in response to overcrowding (Collett, Despland, Simpson, & Krakauer, 1998), and bees, wasps, and termites swarm when searching for a new colony (McGlynn, 2012). Some insect colonies, like those of stingless bees, release many individuals to form a swarm to defend against raiding species (David, 2006). These examples demonstrate that group dynamics play an important role in animal feeding, migratory, and defensive behaviors.

Group behavior is also present in humans, who interact with each other in a group setting as pedestrians, as drivers in car traffic, and as shoppers in retail establishments. The latter two categories play important economic roles in modern human society. For example, in 2010 traffic congestion in the U.S. resulted in economic losses of approximately US$101 billion (Lindsey, 2012), whereas in 2015, U.S. non-online sales totaled approximately US$4.4 trillion (U.S. Department of Commerce, 2015). Humans also form complex social relationships and behave differently when they are members or leaders of larger groups (Dyer, Johansson, Helbing, Couzin, & Krause, 2009). Insights into some aspects of human leadership can be gained from studying animal and agent-based computer models (Quera, Beltran, & Dolado, 2010), which are easier to manipulate experimentally. Thus, the tools that are used to assess animal group behavior can be applied to study specific human group behaviors.

With minimal use of technology, entomologists, ornithologists, and psychologists can visually observe movement or flight behaviors, use handheld counters, and record their scores and observations in computer worksheets (Altmann, 1974; Boch, Shearer, & Petrasovits, 1970; Wyatt, 1997). However, when the number of individuals in the group is too large to be tracked reliably by a human observer, only more extreme manifestations of the behavior of interest may be feasible to track. For example, when assessing defensive bee behavior, sting attacks may be recorded, whereas changes in fairly stereotypical, erratic flight behavior might be scored only qualitatively (Jones et al., 2012; Pickett, Williams, & Martin, 1982). However, this method does not provide for a detailed analysis of the temporal progression of erratic flight behavior. To obtain more precision, an individual animal’s behavior can be tracked and assessed by means of watching videos of the recorded behavior and manually scoring behavior on a frame-by-frame basis (Dyer, Johansson, Helbing, Couzin, & Krause, 2009; Grüter, Kärcher, & Ratnieks, 2011). However, that type of scoring is tedious, time consuming, and error prone.

Computer systems have been used to automate some of these tasks and do not suffer from cognitive load, attention, subjectivity, and fatigue limitations (B. R. Martin, Prescott, & Zhu, 1992; P. Martin & Bateson, 1993; Noldus, Spink, & Tegelenbosch, 2001; Olivo & Thompson, 1988; Spruijt & Gispen, 1983). Despite the advantages, existing motion detection software packages continue to have significant drawbacks that limit their usefulness in studying animal group behavior.

Some software is not designed for scientific research applications or requires programming knowledge. One way to assess group activity automatically is to extract the motion component from a prerecorded or live video. Software like iVMD (IntelliVision, 2015) can be used in this way, but it is designed for efficiently reviewing surveillance footage. Video players like VLC media player (VideoLAN, 2015) can also show regions that change from frame to frame. However, without programming against the tool’s application programming interface, the frame-by-frame motion data are difficult to access. Similarly, general-purpose scientific computing packages like MATLAB (MathWorks, 2015) or Python (Python Software Foundation, 2015) have been used to extract motion data (Hashimoto, Izawa, Yokoyama, Kato, & Moriizumi, 1999; Ramazani, Krishnan, Bergeson, & Atkinson, 2007; Togasaki et al., 2005), but they also require programming knowledge.

Other software would be impractical to use with natural-scene videos. OpenControl (Aguiar, Mendonça, & Galhardo, 2007) is open-source software that uses a background subtraction algorithm to detect movement and can be used to track the locations of single animals and control maze actuators. However, the software requires setting a motionless reference frame, against which all other frames are compared. This makes the software difficult to use in natural environments like forests or open fields, where ambient lighting conditions can change due to wind or clouds, and thus shift the reference frame.

Individual-tracking software can become computationally expensive when extended to large groups of individuals. MCMC (Khan, Balch, & Dellaert, 2006), another software package implementing accurate algorithms for tracking multiple targets, has been tested with groups of up to 20 individuals, but it may not be computationally practical for more numerous groups. If many individuals enter and leave the video scene, tracking them with such software may not be more useful than simple motion data extraction.

Finally, available commercial software can be expensive and is generally not open-source. EthoVision (Noldus et al., 2001) is a sophisticated software package with an activity detection feature (Noldus Information Technology, 2015), which has been used to assess the locomotor effects of insecticide on carabid beetles (Tooming, Merivee, Must, Sibul, & Williams, 2014) and the effect of antiviral drugs on the locomotor activity of ferrets (Oh, Barr, & Hurt, 2015). Though successfully applied, the EthoVision software is expensive, and its source code is not available for inspection and modification.

To address the problems of cost, platform availability, and customization, and to create a uniquely tailored user-friendly interface for assessing the temporal progression of animal group activity in natural environments, we created the open-source SwarmSight, which runs on Microsoft Windows and works with a wide range of video formats.

In the following sections, we describe the algorithm that SwarmSight implements, and demonstrate its validity for detecting motion in a battery of synthetic motion videos. We then demonstrate a wide range of possible scientific applications by applying SwarmSight to the assessment of the group activity of stingless bees, wild birds, and hissing cockroaches.

Assessing movement activity with SwarmSight

The SwarmSight algorithm detects changes in pixel color between video frames. In the computer vision literature, this technique is referred to as background subtraction via thresholded frame differencing (Courtney, 1997; Hashimoto et al., 1999; Jain, Martin, & Aggarwal, 1979; Yalamanchili, Martin, & Aggarwal, 1982). The results of the algorithm correspond well to motion, because a moving object appears in new pixels and disappears from the pixels it previously occupied. The changed pixels can be counted, and the number of changed pixels correlates with the speed and number of moving objects in view (see Fig. 1).
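The frame-differencing idea can be captured in a few lines of Python. The sketch below is an illustrative reimplementation, not SwarmSight's actual C# source, and it treats frames as 8-bit grayscale arrays for simplicity:

```python
import numpy as np

def changed_pixels(prev, curr, threshold):
    """Count pixels whose absolute intensity change from the previous
    frame exceeds the threshold (thresholded frame differencing)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))  # int cast avoids uint8 wraparound
    return int(np.sum(diff > threshold))

# A 5-pixel-tall bar moving one column to the right changes 10 pixels:
prev = np.full((5, 8), 255, dtype=np.uint8)
prev[:, 2] = 0                          # bar occupies column 2
curr = np.full((5, 8), 255, dtype=np.uint8)
curr[:, 3] = 0                          # bar has moved to column 3
print(changed_pixels(prev, curr, 64))   # -> 10 (5 appearing + 5 vacated)
```

As in Fig. 1, the count sums the pixels where the object appears and the pixels it vacates, which is why it tracks both the number and the speed of moving objects.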

Fig. 1

Motion versus changing pixels. (a) An object moving from left to right will result in changed pixels when the object appears at the new location. (b) If the color difference between overlapping pixels is below a threshold, those pixels will not change. (c) Changed pixels when the object disappears from the screen

SwarmSight algorithm and implementation

To compute the activity metric of a video, the software uses the ffmpeg (Bellard, 2015) library to extract frames from video files. One major advantage of SwarmSight is that this underlying, open-source library supports over 50 video codecs (Bellard, 2015) and enables our software to read a wide range of video file formats, including the common .mov, .avi, and .mp4.

Once the video frames are extracted, SwarmSight computes the activity metric of each video frame. The activity metric of a frame is the sum of the pixel-by-pixel average color changes from the previous frame. Specifically, each frame, except for the first, is assigned the activity metric, which is computed as follows. Consider a video with n > 1 frames. Let c_{f,x,y} represent the change in color of the pixel located at coordinates (x, y) of frame f ∈ {2, …, n}, and let R_{f,x,y}, G_{f,x,y}, and B_{f,x,y} represent the eight-bit integer RGB components of that pixel. Then compute the interframe color distance

$$ c_{f,x,y}=\frac{\left|R_{f-1,x,y}-R_{f,x,y}\right|+\left|G_{f-1,x,y}-G_{f,x,y}\right|+\left|B_{f-1,x,y}-B_{f,x,y}\right|}{3}, $$

which reflects the pixel’s color change from the previous frame. Now, let a f represent the activity metric for any frame f > 1, and let t ∈ {0 … 255} represent a user-defined threshold. Then compute a f :

$$ a_f=\sum_{x,y}\begin{cases} 1, & c_{f,x,y}>t, \\ 0, & c_{f,x,y}\le t. \end{cases} $$

In other words, for each frame, count the pixels at which the color change from the previous frame exceeds the user-defined threshold. The resulting activity metric is highly correlated with scene motion. The threshold value t is user-selected, and its optimal value depends strongly on the environment depicted in the video. A low threshold will amplify background movements or even video compression artifacts, whereas a high threshold may diminish the desired signal. We demonstrate how to find an optimal threshold in Experiment 2 below.
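Taken together, the two equations amount to a short computation per frame pair. The following NumPy sketch is a hypothetical reimplementation (SwarmSight itself is written in C#), with frames as H × W × 3 arrays of 8-bit RGB values:

```python
import numpy as np

def activity_metric(prev_rgb, curr_rgb, t):
    """a_f: number of pixels whose interframe color distance c_{f,x,y}
    (mean absolute per-channel RGB difference) exceeds the threshold t."""
    # int cast avoids uint8 wraparound; mean over axis 2 averages R, G, B
    c = np.abs(curr_rgb.astype(int) - prev_rgb.astype(int)).mean(axis=2)
    return int(np.sum(c > t))

prev = np.zeros((2, 2, 3), dtype=np.uint8)   # all-black frame
curr = prev.copy()
curr[0, 0] = (255, 255, 255)                 # c = 255: well above threshold
curr[1, 1] = (30, 0, 0)                      # c = 10: below threshold 20
print(activity_metric(prev, curr, 20))       # -> 1
```

Note that the comparison is strict, so a pixel whose distance equals the threshold exactly does not count as changed.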

Software limitations

The limitations of the software stem from the fact that it measures the aggregate movement of all objects in the video scene. This can be a problem when the movement of extraneous objects cannot be excluded by choosing a different recording angle or by using the “region-of-interest” feature (see below). It should be noted that, unless the video scene contains only one individual, the software is not designed to measure individual animal activity levels, but aggregate, group activity levels instead.

When analyzing the activity metric produced for each video, researchers should treat it as a relative measure. The absolute value of the metric is informative only when compared with values obtained from other parts of the same video, or from videos captured under the same conditions. For example, a close-up video of a scene will result in more pixels changing per frame, whereas a zoomed-out video of the same scene will register fewer changed pixels per frame. However, if the camera perspective and motion threshold parameter do not change over the course of a video, the progression of changed pixels per frame will reflect only the progression of movement in the scene.

Low wind conditions

If the video depicts flying insects, the background wind should be minimal. Wind can move the flying insects involuntarily, which may affect the activity metric. Additionally, if the scene contains undesirable objects that move in the wind and are confined to one part of the scene (e.g., leaves in a corner), the “Region of Interest” tool can be used to exclude this motion. In Experiments 2 and 3 below, we use the tool to exclude peripheral moving leaves, branches, and sponges. Note that if the target and undesirable objects overlap, the tool cannot exclude the undesirable movement. Depending on demand from the scientific community, we could add other types of region inclusion/exclusion features in future versions of the software.

Stationary, stable video

The metric will register any movement, including camera movements, changes in perspective, or zoom. To ensure that the metric reflects only the desired movement, the videos should be shot on a tripod or other stable surface, so that only the objects whose motion is being assessed are moving.

Furthermore, if the activity metrics of multiple videos will be compared, then each video should be shot under the same lighting conditions, perspective, and zoom level. If this is not the case, differences between the video conditions could confound the changes in the activity metric. If new videos cannot be taken, it may be possible to adjust the sensitivity threshold and set equivalent regions of interest for each video to mitigate this problem. The software includes a region-of-interest feature, which allows the user to select a rectangular region of the video to restrict the pixels that will be used for computing the activity metric (see Fig. 2). When comparing a video that was recorded from a closer distance to one that was recorded farther away, the region-of-interest feature can be used on the distant video to exclude movement activity from pixels that are not visible in the closer video. Furthermore, the motion threshold parameter for the distant video would need to be decreased to adjust for the reduction in motion that is being captured by each camera pixel. This could be achieved by manually calibrating the threshold on the distant video until the baseline movement activity levels recorded in both videos are the same. Thus, the movement activity can be appropriately compared from videos recorded under different zoom conditions when only the pixels present in both videos are included and the zoom-adjusted threshold is used. However, the calibration procedure above will not be effective when the zoom differences between the compared videos are relatively large (e.g., more than several degrees). We recommend using it only if the videos cannot be recorded from the same perspective.
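The manual calibration step described above can be framed as a simple search. The sketch below is a hypothetical helper (not a SwarmSight feature): given a function that reports the distant video's baseline activity at a candidate threshold, it scans the threshold range for the value whose baseline best matches the near video's.

```python
def match_baseline_threshold(near_baseline, distant_baseline_at,
                             thresholds=range(1, 256)):
    """Return the distant-video threshold whose baseline activity
    (mean changed pixels/frame) is closest to the near video's baseline.
    distant_baseline_at maps a candidate threshold to that baseline."""
    return min(thresholds,
               key=lambda t: abs(distant_baseline_at(t) - near_baseline))

# Toy model of a distant video whose baseline falls off linearly with threshold:
toy_baseline = lambda t: max(0.0, 120.0 - 0.8 * t)
print(match_baseline_threshold(40.0, toy_baseline))  # -> 100 (120 - 0.8*100 = 40)
```

In practice, `distant_baseline_at` would run the activity computation over a quiet stretch of the distant video at each candidate threshold, mirroring the manual slider adjustment.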

Fig. 2

Screenshot of the SwarmSight user interface, showing an example video and a drawn region of interest. The sponge moves due to the wind, but its motion is excluded because the “region-of-interest” feature is used to limit the area where motion (yellow squares) is measured

After the video starts playing, the user can change the activity settings (Fig. 2) and select the Highlight Motion checkbox to see which pixels are changing. This option makes it easy to spot fast-moving objects like flying insects and can serve as an aid to counting them manually.

Minimally compressed preference

Ideally, the videos should be minimally compressed. Higher compression tends to introduce compression artifacts, which under low-threshold conditions appear as changed pixels, confounding the activity metric. Native compression by most digital cameras is acceptable.

Setting up and using the software

Installation and system requirements

The software can be freely downloaded and installed from the project website. It is a Microsoft .NET application written in C#, and its open-source code is available for inspection and modification. The software runs on Microsoft Windows with the .NET 4.5 framework installed.

Video requirements

The software uses the ffmpeg library (Bellard, 2015), which supports a very wide variety of video file formats. The video formats produced by most digital cameras are supported.

Activity assessment

Once the software starts, the user can select the video file to analyze, set the sensitivity threshold, and draw a rectangular region of interest to exclude unnecessary or spurious movements.

As the video plays, a chart below the video screen displays the frame-by-frame logarithm of the activity metric. Using the logarithm reduces the effect of extreme movement events (the linear metric is preserved in the statistics calculations). Using the controls to the right of the chart, the raw frame activity data can be exported to a comma-separated value (CSV) file for further processing. The CSV file is saved in the same directory and with the same filename as the video file, but with the .csv extension. CSV files can be opened with Excel, R, MATLAB, and most other statistical software packages.
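Downstream analysis of the exported CSV is straightforward in any scripting language. The sketch below uses Python's standard library; the column name `ChangedPixels` is an assumption for illustration, so check the header row of your own export:

```python
import csv
import math

def read_activity(csv_path, column="ChangedPixels"):
    """Load one per-frame activity column from a SwarmSight CSV export.
    'ChangedPixels' is an assumed column name, not a documented one."""
    with open(csv_path, newline="") as fh:
        return [int(row[column]) for row in csv.DictReader(fh)]

def log_activity(values):
    """Log-transform for charting; the +1 keeps still frames (0 pixels) finite."""
    return [math.log10(v + 1) for v in values]

# Example: log_activity([0, 9, 99]) -> [0.0, 1.0, 2.0]
```

The log transform here mirrors the on-screen chart's compression of extreme movement events; statistics should still be run on the raw (linear) values, as the software does.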

Validation results

To verify that the tool’s activity metric is correlated with object motion, we assessed motion using both synthetic and live field videos. In addition, we demonstrate that the tool can be used to detect and quantitatively compare insect swarm activity differences, track the progression of swarm activity over time, and reveal interesting temporal dynamics in the feeding behavior of birds and the nest-finding behavior of hissing cockroaches.

Experiment 1: Synthetic motion videos

Method

To assess whether the tool’s metric correlates with the motion depicted in a video file, we created two synthetic videos and used the tool to assess the video motion. The first video (Fig. 3) consists of six events, occurring sequentially at 1-s intervals. In the first event, a 5 × 5 pixel box appears on a white background. Then, 1 s later, the box moves 1 pixel to the right. After another second, the box moves 2 pixels, then 3 pixels, and so on, until it moves 5 pixels. The movement pattern is implemented using a box colored with graded shades of gray: the 5 × 5 pixel box is divided into a 1 × 5 pixel band of 100 % black (Fig. 3, right), a 2 × 5 pixel band of 50 % black, and a 2 × 5 pixel band of 25 % black. Thresholds of 1, 64, 128, and 255 were used to demonstrate the threshold effect. A threshold of 1 requires a pixel to change by only one shade to register as changed. A threshold of 255 is at the theoretical maximum change (white to black), and because the comparison is strict, no pixels should register as changed.
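The expected pixel counts for this test can be reproduced directly with frame differencing. The sketch below assumes grayscale shade values for the bands (0, 128, and 191 on a white background of 255) that approximate 100 %, 50 %, and 25 % black; the video's exact shade values are not specified:

```python
import numpy as np

ROWS = [0, 128, 128, 191, 191]   # assumed shades: one 100% row, two 50%, two 25%

def frame_with_box(x):
    """White 7x20 frame with the banded 5x5 box at column offset x."""
    f = np.full((7, 20), 255, dtype=np.uint8)
    for r, shade in enumerate(ROWS):
        f[r + 1, x:x + 5] = shade
    return f

def changed(prev, curr, t):
    """Pixels whose absolute change exceeds the threshold t."""
    return int(np.sum(np.abs(curr.astype(int) - prev.astype(int)) > t))

blank = np.full((7, 20), 255, dtype=np.uint8)
box = frame_with_box(2)
print(changed(blank, box, 1))               # 25: the whole box appears
print(changed(blank, box, 64))              # 15: 100% and 50% bands only
print(changed(blank, box, 128))             # 5: only the 100% black band
print(changed(box, frame_with_box(3), 1))   # 10: leading + trailing columns
```

With these shades, the change from white is 255 for the 100 % band, 127 for the 50 % band, and 64 for the 25 % band, which reproduces the threshold cutoffs reported below.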

Fig. 3

Effects of threshold on the appearance and movement of 5 × 5 pixel multishade box. (Left) The pattern of appearance and movement of the box. (Center) Resulting changed pixels as registered by the SwarmSight algorithm. (Right) Illustration of the multishade regions of the box and the predicted threshold detection regions

To show that the software activity metric is proportional to the number of moving objects, we created a second synthetic video in which three different boxes progressively move one by one, until all three are moving at the same time (Fig. 4, left). In Frames 1 and 2, the three objects do not move. In Frames 3–6, one box starts to move at 1 pixel/frame. In Frames 7–11, a second box starts moving at the same speed and direction. Finally, in Frames 12–15, all three objects are moving. We expected that with each additional moving object, the number of changed pixels detected per frame would increase by 10 pixels.
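The expected linearity can likewise be checked with a small sketch. Solid black 5 × 5 boxes are assumed here for simplicity; the exact appearance of the three boxes in the video is not specified:

```python
import numpy as np

def changed(prev, curr, t=1):
    """Pixels whose absolute change exceeds the threshold t."""
    return int(np.sum(np.abs(curr.astype(int) - prev.astype(int)) > t))

def scene(shifts):
    """White frame with three 5x5 black boxes, each shifted right by its entry."""
    f = np.full((25, 40), 255, dtype=np.uint8)
    for i, s in enumerate(shifts):
        f[i * 8:i * 8 + 5, 5 + s:10 + s] = 0
    return f

# One, two, then three boxes move 1 pixel/frame: +10 changed pixels per mover.
print(changed(scene([0, 0, 0]), scene([1, 0, 0])))  # -> 10
print(changed(scene([1, 0, 0]), scene([2, 1, 0])))  # -> 20
print(changed(scene([2, 1, 0]), scene([3, 2, 1])))  # -> 30
```

Each 1-pixel move contributes a 5-pixel leading edge and a 5-pixel trailing edge, so the metric grows in steps of 10 with each additional mover.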

Fig. 4

Activity metric versus number of moving objects. (Left) Three objects are moved to the right at 1 pixel/frame in succession. (Right) The number of changed pixels detected by SwarmSight is proportional to the number of moving boxes

Results

The first test performed as expected. At a threshold of 1 (Fig. 3, center), when the 5 × 5 box appeared, a change of 25 pixels was registered. When the box moved 1 pixel, a 10-pixel change was detected, consisting of a 5-pixel change at the leading edge of the box and a 5-pixel change at the trailing edge that the box had previously occupied. As the threshold increased to 64, the software registered 15 pixels when the box appeared, reflecting the top two bands of the box: 100 % black and 50 % black. At a threshold of 128, only the 5-pixel 100 % black band registered. At threshold 255, no pixels registered. The changes produced by the moving box were all registered correctly.

The second test with multiple moving objects also demonstrated the expected behavior. As the first 5 × 5 box started to move, the software registered a 10-pixel/frame change (Fig. 4). As each additional object started to move, the activity metric increased by 10 pixels/frame, showing a change of 30 pixels/frame when all three objects were moving.

Experiment 2: Threshold selection in stingless bee video

Method

One useful property of the software is that it enhances the visibility of moving objects in videos. To demonstrate this with a flying insect swarm, we tested how different threshold levels affected the visibility of flying stingless bees Tetragonisca angustula when compared to unprocessed video frames. We tested two videos of a hive of T. angustula, one showing a zoomed-in version of the hive (Fig. 5, top row), and one showing the zoomed-out version (Fig. 5, bottom row). The threshold was set using the Motion Threshold slider control available in SwarmSight (Fig. 2, top right).

Fig. 5

Examples of detected bee motion, in yellow. A threshold between 20 and 50 provides the best signal-to-noise ratio for emphasizing flying stingless bees in these videos

Results

The results demonstrated that the software can be used to distinguish flying insects from the background. In Fig. 5, the control, unprocessed video frames can be seen on the far right, with the motion-detected frames arrayed to the left at various thresholds, showing detected motion in yellow.

Specifically, note that at low thresholds (Fig. 5, threshold 5), the software registers noise that is not indicative of insect motion, and at high thresholds (Fig. 5, threshold 100), it does not emphasize motion enough. At medium-range thresholds, the bees are easily visible. For example, at threshold 20 in the top row of Fig. 5, showing the zoomed-in view, the bee wings and bodies are outlined in yellow. In contrast, at threshold 50 in the bottom row, the small bees are visible as yellow dots. As mentioned in the previous section, because the threshold can be chosen by the user, a researcher can play the video in question and adjust the threshold via the slider to discover the optimal value for that video. Once discovered, the same threshold can be used to compare the activity present in several videos shot under the same conditions.

Experiment 3: Detection of change in activity and its progression

Method

SwarmSight software was used to assess how flying-insect activity changes over time in response to a treatment. Here we examined a video of an entrance to a T. angustula hive before and after a treatment with a chemical mixture of the stingless bee’s alarm pheromone: Citral (Sigma Aldrich, B1334), 6-methyl-5-hepten-2-one (Sigma Aldrich, M48805), and benzaldehyde (Sigma Aldrich, B1334) (Wittmann, Radtke, Zeil, Lübke, & Francke, 1990). We used the SwarmSight software to assess bee swarm activity during the 30-s video and observe its progression over time. To confirm that the activity metric was proportional to the actual flying bees, we compared the SwarmSight results to the number of visible flying bees in the video frames sampled at 1-s intervals.

To test whether the software can detect significant changes in behavior from nest entrance videos, we used the software to analyze 107 T. angustula nest entrance videos, consisting of control and treatment groups. The control groups were administered behaviorally inert mineral oil (Breed, Guzmán-Novoa, & Hunt, 2004), whereas the treatment groups were administered the alarm pheromone mixture. The average activity metrics of the two groups were compared.
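As a sketch of that control-versus-treatment comparison (using mock numbers, not the study's data), a pooled two-sample t statistic can be computed from the per-video mean activity levels. The paper reports a two-tailed t test; the exact computational details below are our assumption:

```python
import numpy as np

def two_sample_t(a, b):
    """Student's pooled two-sample t statistic for two groups of
    per-video mean activity values (an illustrative sketch)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled variance across the two groups
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (b.mean() - a.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Mock per-video mean activity (changed pixels/frame), control vs. treatment:
control = [110, 95, 102, 98, 105]
treated = [240, 260, 231, 255, 249]
print(round(two_sample_t(control, treated), 1))  # -> 24.9
```

The resulting statistic would then be referred to a t distribution with n − 2 degrees of freedom to obtain the two-tailed p value.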

To demonstrate that the software can be used for measuring the progression of group activity of birds and of other insects, we used videos of wild birds and hissing cockroaches. The wild-bird video showed rock doves (Columba livia), mourning doves (Zenaida macroura), and Gambel’s quail (Callipepla gambelii) discovering and consuming newly placed wild-bird food (Global Harvest Foods, Ltd., 2015), which was spread evenly across a 6-foot × 6-foot area on concrete pavement. This video was chosen because it contained a period without any birds as well as three phases showing waves of arriving birds. To demonstrate that SwarmSight can detect such arrival waves, the activity metric was expected to contain distinct, corresponding increases in activity during each arrival wave.

Finally, to demonstrate that the software can be used with nonflying animals, we selected a video of a group of Madagascar hissing cockroaches (Gromphadorhina portentosa) locating an artificial nest. The video scene displayed a round dish containing four insects with a covered center area: the artificial nest. The insects were placed outside of the nest and allowed to crawl freely. The video contained two phases of insect activity separated by a long phase of relative dormancy. We expected the SwarmSight results to indicate the three phases.

Results

The results of the activity progression experiment showed a clear increase in the detected movement activity of the bees after the treatment with alarm pheromone (Fig. 6, left panel, small dots). Furthermore, the progression of the response can be observed shortly after the treatment. The increase was confirmed visually from the motion-annotated frames captured before (Fig. 6, top right) and after (Fig. 6, bottom right) the treatment, as well as from manually counted bee numbers (Fig. 6, left panel, large yellow dots).

Fig. 6

Progression of the activity metric before and after treatment of a stingless bee colony with alarm pheromone. (Left) Small dots: Changed pixels/frame. Large dots: Manually counted numbers of flying bees. (Right) Motion-annotated frames showing flying bees

The comparison of the average activity in the behaviorally inert mineral oil (MO) condition videos with that in the alarm pheromone (AP) condition videos shows that significant differences in behavior can be detected using the SwarmSight software. The activity averages of the AP treatment and control (MO) groups differed significantly (n = 107, two-tailed t test, p < .003). Figure 7 shows a screenshot of SwarmSight’s computations for the average activity levels of two example videos (one MO video in the top graph and one AP video in the middle graph), as well as the details of the activity comparison results (bottom panel).

Fig. 7

SwarmSight results comparing responses to mineral oil versus alarm pheromone. The top right graph shows the activity progression for the mineral oil (control) treatment; the middle right graph shows the activity progression for the alarm pheromone treatment. The regions highlighted in lavender show the ranges of frames selected for comparison. The middle graph excludes the large spike in activity due to the movement of a hand that administers the treatment. The bottom right panel shows a graphical comparison of the two videos and the details of the comparison results

The activity metric produced from the wild-bird video (Fig. 8) makes it easy to establish the onset of feeding, its duration (including any interruptions), and the progression of the feeding behavior.

Fig. 8

Wild-bird feeding activity over time. (A) Signal showing no activity. (B) The onset of feeding as food is discovered by the birds shows up as a gradual increase of the activity metric. (C) An interruption of feeding and group takeoff shows up as a large spike in the activity metric. (D) The main feeding phase, in which waves of birds land and take off (spikes) at the food source. Note the gradual increase and decrease in overall activity as the food source is exhausted. (E) A false landing by the birds, even though the food source is exhausted. (F) Activity of lingering birds at the end of feeding

SwarmSight was also able to measure the progression of hissing cockroach activity as the animals located a nest. As expected, Fig. 9 shows a series of activity bursts, a period of dormancy, and a final activity burst.

Fig. 9

Hissing cockroach activity over time. (A & C) Phases of hissing cockroach video showing bursts of activity. (B) Phase showing a period of relative dormancy

Experiment 4: Performance assessment

Method

To assess the performance of SwarmSight, we measured the frames per second processed by the program. The threshold was set to 17 and the quality to 100 %; the testing system was an Apple MacBook Air laptop with a 2-GHz Intel Core i7 processor and 8 GB of RAM, running Windows 7 Professional in a Parallels Desktop virtual machine.

Results

We found that for high-definition videos filmed using the 1,080p standard (1,920 × 1,080 pixels), the frame rates are in the 9- to 11-fps range (Table 1), and for medium-sized videos (640 × 480 pixels), the frame rate increases to 61 fps. Overall, the program processes about 20.8 million pixels per second.

Table 1 Numbers of frames per second processed by SwarmSight for videos with different resolutions
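These figures are consistent with a simple back-of-envelope model that divides the reported pixel throughput by the pixels per frame; the model's slight overestimate for the smaller frame size suggests some fixed per-frame overhead:

```python
PIXELS_PER_SEC = 20.8e6   # overall processing rate reported above

def est_fps(width, height):
    """Estimated frames/s if pixel throughput were the only cost."""
    return PIXELS_PER_SEC / (width * height)

print(round(est_fps(1920, 1080), 1))  # -> 10.0, within the measured 9-11 fps
print(round(est_fps(640, 480), 1))    # -> 67.7, vs. the measured 61 fps
```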

Discussion

SwarmSight measures movement in a video scene, and we have demonstrated that it automates the quantitative assessment of the temporal progression of activity levels in flying-insect swarms, bird flocks, and other animal groups. It was also shown to be useful for detecting hard-to-see flying insects and for assisting traditional methods of insect tracking and counting. Generally, the software is most useful in situations in which the change of aggregate movement over time in a region of a video provides information that is valuable to investigators.

In addition to assessing insect and bird group behavior, the software should be useful in assessing human group behavior. For example, a video of pedestrians walking on a street or a sidewalk, filmed from an upper floor of a building, could be analyzed using the software. If the pedestrian traffic is low, individual pedestrians could be distinguished by sudden increases and subsequent decreases of pixel change activity. During high traffic flow, the changes in relative pedestrian activity levels over time could be assessed and compared. The same method for measuring street pedestrian traffic could be used to measure walking shopper behavior in retail stores or malls. For example, a section of a hallway or an aisle could be video-recorded and baseline shopper movement activity assessed. Various interventions, such as the placement or alteration of signage, could then be assessed by comparing their effects on the video activity levels. A similar technique could be used to assess and track the changes in activity of automobile traffic.

Furthermore, SwarmSight could be used to test theoretical models of swarm or large-group behavior. For example, a group behavior model described by Couzin, Krause, Franks, and Levin (2005) predicts that only a small fraction of individuals in a group need to possess information in order to influence the behavior of the larger group. For instance, in Dyer et al. (2009), just 5 % of the individuals in a large, 200-person group needed to be informed for the larger group to converge onto the correct target. The videos recording the movement of the individuals could be analyzed using SwarmSight. Analysis of a video of the whole group would be expected to show the group movement activity over time. This would reflect the aggregate speed of the group and show when the group started and stopped moving. Additionally, any transient increases or decreases of the group speed would be reflected in the pixel change signal. Such analysis could complement manually performed measures of the group movement and its change over time.

The leadership behavior in simulated agent-based flocks, such as those in Quera et al. (2010), may also be assessed with SwarmSight. Though computer simulations benefit from the ability to observe any of the simulated state variables (Kelton & Law, 2000), some very large or complex simulations may not be feasible or cost-effective to resimulate. In such cases, if a video of the completed simulation is available and the progression of movement in a region of the video contains information that is useful to investigators, our software could be used to analyze it. In this way, an aggregate-movement metric could be extracted from a video of a complex simulation without rerunning it.

One advantage of SwarmSight is its simple, easy-to-use interface, which makes the tool quick to learn and enables its use in teaching. For example, graduate and undergraduate students in an Animal Behavior Methods class at Arizona State University received a 10-min presentation about bees and the software. Following the presentation, the students were able to use the software effectively to analyze a video of a stingless bee nest raid. The software presentation and the example video are available for download.

Overall, SwarmSight provides a powerful, yet easy-to-use, tool for assessing motion from natural and laboratory video scenes that has applications in behavioral experiments, as well as serving as a teaching tool for classroom-based behavioral experiments.

In the future, the authors plan to add additional filters and to incorporate machine-learning classifiers for more advanced insect tracking and behavior classification, and to utilize onboard graphics processing units (GPUs) to increase performance. For example, supplying a support vector machine (Cortes & Vapnik, 1995) trained on previously labeled examples of target insects, birds, humans, or other animals with pixel color and motion data could enable the software to count the number of target individuals per frame. In some cases, this metric would be even more useful than the aggregate-movement metric of the current version. Furthermore, a GPU implementation of the above algorithm might execute one or two orders of magnitude faster than the CPU implementation (Asano, Maruyama, & Yamaguchi, 2009).