Analysis of Crowd Dynamics with Laboratory Experiments

  • Maik Boltes
  • Jun Zhang
  • Armin Seyfried
Chapter
Part of The International Series in Video Computing book series (VICO, volume 11)

Abstract

For the proper understanding and modelling of crowd dynamics, reliable empirical data are necessary for analysis and verification. Laboratory experiments give us the opportunity to analyze selected parameters independently of undesired influences and to adjust them to high densities seldom seen in field studies. The setup of the experiments, the extraction of the pedestrians' trajectories and the analysis of the resulting data are discussed. Two strategies for the time-efficient automatic collection of accurate pedestrian trajectories from stereo recordings are presented. One strategy uses markers for detection; the other is based on a perspective depth field. Measurement methods for quantities like density, velocity and specific flow are compared. The fundamental diagrams obtained from the trajectories of different experiments are analyzed.

Keywords

Specific Flow · Stereo Camera · Bidirectional Flow · Dense Crowd · Fundamental Diagram

4.1 Introduction

Design of egress routes for buildings and large scale events is one application for models of pedestrian streams [1, 2, 3, 4, 5]. Typical questions regarding the capacity of facilities for pedestrians are: Is the width of a door or a corridor large enough to evacuate a certain number of people in a given time? How long does it take to clear a building? The methods and tools available to evaluate or to dimension these facilities can be categorized into legal regulations, handbooks [6, 7, 8] and computer simulations [9, 10, 11, 12]. Legal regulations are based on prescriptive methods using static rules which depend on the occupancy of the building. Two examples of such rules are the minimal width of doors depending on the number of people in the room and the maximal length of an escape route. But static rules cannot capture the dynamics of an evacuation process, and methods with a higher fidelity are necessary to resolve the development over time. Handbooks, for example, use macroscopic models of streams and provide a description of the evacuation process in time and space. These models forecast when and where congestion will occur. However, macroscopic models give only a coarse description of pedestrian flow by modeling the streams as an entity with quasi-constant size and density. Microscopic models instead describe the individual movement of all pedestrians and are thus able to resolve the dynamics on an individual scale. Regardless of the fidelity of the models, the basic aim should be to describe quantitatively the transport properties of pedestrian streams.

Quantities describing the performance of facilities and the transport properties of pedestrian streams are borrowed from the physics of fluids. The flow J gives the throughput at a certain cross section and is defined as the number of persons N passing the cross section in the time interval Δt. To express the degree of congestion the density \(\rho = N/A\), which is the number of persons in an area A, is used. The velocity v specifies how long it takes to reach the exit of the building. These quantities depend on each other, and the empirical relation between them (J(ρ) or v(ρ)) is commonly called the fundamental diagram. This relation is a basis for quantifying the transport properties of driven systems in stationary states. One major problem is that the empirical database is rudimentary and inconsistent. Even basic questions and relations are queried and discussed contradictorily in the literature [13]: e.g., how the maximal possible flow through a bottleneck depends on the width of the opening [14, 15], at which density jams occur [16], or whether the fundamental diagrams for uni- and bidirectional streams differ [17].
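As a minimal illustration of these definitions (the function and variable names below are ours, not the authors'), the basic quantities and the hydrodynamic relation connecting them can be written as:

```python
def flow(n_persons, delta_t):
    """Flow J = N / delta_t: persons passing a cross section per second."""
    return n_persons / delta_t

def density(n_persons, area):
    """Density rho = N / A: persons per square meter."""
    return n_persons / area

def specific_flow(rho, v):
    """Hydrodynamic relation J_s = rho * v: flow per meter of width."""
    return rho * v

# Example: 30 persons pass a cross section within 12 s -> J = 2.5 /s;
# the same 30 persons occupying 20 m^2 -> rho = 1.5 /m^2.
```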

Several research groups have started performing experiments in the last decade to address these as well as other questions. Before introducing our contribution we give an overview of the activities by country, citing recent articles without any claim to comprehensiveness. These are in China [18, 19, 20, 21, 22, 23, 24, 25], in France the Pedigree project [26, 27, 28, 29], in the Netherlands [30, 31], in Japan [32, 33, 34, 35] and in Germany [36, 37]. We want to note that these experiments and field studies, in their diversity, reflect the complexity of pedestrian traffic.

Since 2005 we have performed, in cooperation with the universities of Cologne and Wuppertal, more than 300 experiments to improve the database for model validation. We focused on experiments under well-controlled laboratory conditions for several reasons. Pedestrians are subject to many influences which cannot be controlled in field studies. To study the influence of a single parameter it is helpful to control external influences (light, sound, ground, etc.), boundaries and initial conditions. Even under laboratory conditions this is difficult to achieve, see e.g. [31]. Moreover, the variability allows a survey of a parameter range, e.g. for the bottleneck width or length, or the density inside a corridor. Another reason for performing controlled experiments is the interest in high densities, which are seldom observable in field studies. With an increasing number of participants we improved the methods to extract the individual walking paths automatically from video footage.

For the design of the experiments we started with simple geometries (corridors, bottlenecks, etc.) and flow types (unidirectional, bidirectional). Then we extended the experiments to consider more complex scenarios with bends, stairs and merging streams. The experiments are designed to ensure that the influence of one parameter on the quantity of interest can be studied. The data can then be used to develop and to systematically validate mathematical models.

4.2 Experiments and Data Capturing

4.2.1 Experiment Overview

Performing experiments under laboratory conditions gives the opportunity to analyze parameters of interest under well-defined, constant conditions. For self-initiated experiments the location and the composition of the test persons (e.g. culture, fitness, age, gender, size) can be determined. For this reason, series of pedestrian experiments have been designed and carried out since 2005.

The first experiments were designed to study the relation between density and velocity of single file movement [38]. With the same experimental setup the influence of motivation and culture [39] was analyzed.

After the one-dimensional experiments, we conducted experiments on plane ground focusing on bottlenecks and corridors; 99 runs with up to 250 test persons were performed [40].

For the development of an evacuation assistant in the project Hermes [41] the experimental database was expanded by experiments on different types of stairs, straight corridors, corners and T-junctions. 170 runs with up to 350 people were made in artificial environments and additionally inside the facilities of the stadium for which the evacuation system was developed.

In Sect. 4.3 some details of these experiments will be described.

For capturing the experiments on video, the cameras can be chosen appropriately to the coverage area and ceiling height. Overhead recordings perpendicular to the floor allow a view without occlusion over a range of body heights, so that individual detection and tracking can be performed without having to estimate a person's route. To obtain constant lighting conditions the experiments were primarily made indoors with uniform artificial light. The extraction process of the route of every person is outlined in the following sections.

4.2.2 Trajectory Extraction

The goal of the extraction process is trajectories, \(p_{i}(t)\), \(i \in [1,N]\), as exact as possible for all N persons at any time t, in particular in crowded scenes. For this reason markers are used to improve the robustness of the automatic extraction wherever applicable.

For the same reason of exactness, all automatic results are inspected by humans, who are able to correct the trajectories directly within our software. Under laboratory conditions almost no errors occur, but in real facilities like stairways in a stadium the number of incorrect detections increases. Problems include the varying lighting conditions and the distance of up to 13 m to the heads of the persons. The manual visual inspection of the automatically extracted trajectories is one reason for the off-line detection from video recordings. The more important reason for the downstream extraction from recordings is the possibility of a later visual analysis of effects evaluated from the trajectory data. In exchange for these advantages we accept the problems of huge recording space requirements and privacy protection.

Before extracting metric information the video has to be calibrated. For the correction of the lens distortion a pinhole camera model with distortion is adopted (considering radial and tangential distortion up to fourth order). The perpendicular view and cameras with square pixels allow an easy specification of a pixel-to-meter ratio considering the perspective view. For more information we refer to [42].
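Such a distortion model can be sketched in NumPy as follows; this is a minimal illustration of the standard radial (up to fourth order) and tangential terms applied to normalized image coordinates, and the coefficient values used in the example are placeholders, not the actual calibration results:

```python
import numpy as np

def distort(points, k1, k2, p1, p2):
    """Apply radial (k1*r^2 + k2*r^4) and tangential (p1, p2) distortion
    to an (n, 2) array of normalized image coordinates."""
    x, y = points[:, 0], points[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x ** 2)
    yd = y * radial + p1 * (r2 + 2.0 * y ** 2) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)

# A point on the optical axis is unaffected by distortion; off-axis
# points are pulled inward for negative k1 (barrel distortion).
```

During calibration the inverse of this mapping is applied to every frame so that straight lines in the scene appear straight in the image.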

4.2.2.1 Detection with Markers

For the description of the detection process we restrict ourselves to one experiment performed in the project Hermes [41]. Details of the artificial setup of this experiment, a merging flow through a T-junction with a corridor width of 2.4 m and 303 test persons, can be found in Sect. 4.3.3.

The experiments have been recorded with two synchronized stereo cameras of type Bumblebee XB3 (manufactured by Point Grey). For the T-junction experiment they were mounted a = 784 cm above the floor with the viewing direction perpendicular to the floor.

The overlapping field of view of the stereo system is \(\alpha = 64^{\circ}\) at the average head distance of about 6 m from the cameras. Thus all pedestrians within the observed height range can be seen without occlusion at any time. The cameras have a resolution of 1,280 × 960 pixels and a frame rate of 16 frames per second, \(\varDelta t = 1/16\,\mathrm{s}\).

The marker has a simple structure so that it can be detected from distances of up to 13 m. All pedestrians wear a white bandana with a centered black dot of 4 cm diameter (Fig. 4.1).

The recognition of the marker is done by detecting directed isolines of the same brightness and subsequently analyzing the size, shape, arrangement and orientation of approximating ellipses.
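The final acceptance step on the fitted ellipses can be sketched as follows; all threshold values here are invented for illustration, since the chapter does not state the actual parameters:

```python
import math

def ellipse_eccentricity(major, minor):
    """Eccentricity of an ellipse with semi-axes major >= minor > 0."""
    return math.sqrt(1.0 - (minor / major) ** 2)

def is_marker_candidate(area_px, major, minor,
                        min_area=30, max_area=400, max_ecc=0.8):
    """Accept a fitted ellipse as a black-dot candidate if its pixel area
    is plausible for a 4 cm dot at the given camera distance and its
    shape is close to circular. Thresholds are illustrative only."""
    if not (min_area <= area_px <= max_area):
        return False
    return ellipse_eccentricity(major, minor) <= max_ecc
```

In the actual pipeline such a shape test is combined with the arrangement and orientation checks mentioned above.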

Perspective Depth Field
Fig. 4.1

(Color online) Left: Rectified image of one stereo camera of a T-junction experiment. Right: Color coded disparity map restricted to the distance of the upper body part (570–735 cm). The background is greyed out

For the detection with markers the perspective depth field, which can be obtained from a stereo camera, is only used for background subtraction and the measurement of the head distance to the camera.

This depth field h contains the distance of every pixel to the camera and is inversely proportional to the disparity map, d ∝ 1∕h, which describes for every pixel the offset between the two views of the stereo camera. The disparity map is calculated with the semi-global block matching algorithm [43] implemented in the computer vision library OpenCV [44]. The mask size for the matched blocks has been set to 11 to get a smooth depth field. The drawback is a blurry depth field in which the shapes of objects are less sharp. Figure 4.1 shows on the right an overlay of the disparity map on the left picture. The disparity map is restricted to the distance of the upper body part, color coded from red to blue according to the distance of 570–735 cm to the camera. The greyed-out part indicates the background, whose determination is described below.
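The inverse relation between depth and disparity can be made concrete with a small helper; the focal length and baseline values in the example are placeholders, not the calibration of the cameras used:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance h via h = f * b / d, i.e. h inversely proportional to
    the disparity d. Invalid (zero) disparities map to infinity."""
    disparity = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0.0,
                        focal_px * baseline_m / disparity,
                        np.inf)
```

A block-matching algorithm such as the semi-global matcher cited above produces the disparity array; this helper then converts it into the metric depth field used in the following steps.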

Background Subtraction
A prior background subtraction reduces the number of false positive detections. For the background subtraction the camera distance h is used directly, without generating a rectified depth field. No laborious plan-view statistic is needed because of the perpendicular view of the stereo recordings. Pixels with coordinate \(u = (u_{x},u_{y}) \in {\mathbb{R}}^{2}\) at frame f are part of the background, and thus ignored in the detection process, if
$$\displaystyle{ h_{bg}(u) - h(u,f) < 40\,\mathrm{cm}. }$$
(4.1)
The perspective depth field of the background h bg is captured once with the scene deserted, or is set to a cautiously adapted maximum distance over all frames
$$\displaystyle{ h_{bg}(u) \approx \max _{f}(h(u,f)). }$$
(4.2)
The distance threshold of 40 cm cannot be increased to gain robustness, since people near walls would then be eliminated because of the omitted plan-view statistic. This effect can already be seen in Fig. 4.1 at the walls forming the junction. Small regions of missing values inside h bg are interpolated linearly within the row. Small regions inside the segmented foreground are added to the background to remove noise and regions that cannot be occupied by a person.
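Equations (4.1) and (4.2) translate directly into array operations; the following is a minimal NumPy sketch (distances in cm, function names ours):

```python
import numpy as np

def background_depth(depth_stack):
    """Eq. (4.2): approximate the background depth h_bg per pixel as the
    maximum camera distance observed over all frames (axis 0)."""
    return np.nanmax(depth_stack, axis=0)

def foreground_mask(h_bg, h_frame, threshold_cm=40.0):
    """Complement of Eq. (4.1): a pixel is foreground if it lies at
    least 40 cm closer to the camera than the background."""
    return (h_bg - h_frame) >= threshold_cm
```

The interpolation of missing values and the morphological cleanup of small regions described above would follow as post-processing of this mask.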
3D Position
To calculate the position in the 3D real world, a coordinate transformation from pixel positions to real positions and an inverse perspective transformation have to be performed. For this, the distance to the camera is needed. For planar experiments, where we used monocular cameras, the color of a part of the marker corresponds to a height range [42]. But because we also made experiments on stairs within the Hermes project, this approach can no longer be used. The distance, or for planar experiments the height of the pedestrian, \(h^{\prime}(u_{i}) = a - h(u_{i})\), is now set according to the disparity of the center pixel of the black dot, \(u_{i} \in {\mathbb{R}}^{2},i \in [1,N]\). Only with the pedestrian's height can the correct position on the plane ground be calculated. Because of the perspective distortion, the maximum error without considering the height would be [42]
$$\displaystyle{ \frac{\max _{i}(h^{\prime}(u_{i})) -\min _{i}(h^{\prime}(u_{i}))} {2} \tan \frac{{\alpha }^{\circ }} {2} \approx 16\,\mathrm{cm}. }$$
(4.3)
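The numerical value of Eq. (4.3) can be reproduced with a short computation; the head-height spread of about 51 cm between the tallest and shortest participant is our assumption, chosen to match the quoted result:

```python
import math

# Field of view of the stereo system (from the text) and an assumed
# spread of head heights among the participants.
alpha_deg = 64.0
height_spread_m = 0.51  # assumption: max_i h'(u_i) - min_i h'(u_i)

# Eq. (4.3): maximal perspective positioning error when the individual
# height is ignored.
max_error_m = height_spread_m / 2.0 * math.tan(math.radians(alpha_deg / 2.0))
# max_error_m evaluates to roughly 0.16 m, the 16 cm quoted in the text.
```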
The height oscillates with the step frequency; it is minimal at the position where the body weight shifts from one leg to the other [45].

The distribution of the maximum height of each pedestrian matches the distribution of the heights obtained from questionnaires handed out to the participants.

4.2.2.2 Detection Without Markers

We are also developing a markerless detection, which facilitates field studies and the easier realization of moderated experiments in real environments. Studying the influence of group structures (e.g. football fans or visitors of a classical concert, sober or drunken persons) realistically can only be accomplished there. With moderated experiments in real environments, where we use independent gatherings which happen anyway (e.g. works meetings), we can increase the amount of trajectory data with less time and effort. Besides this, the markerless detection can further improve the robustness of the marker-based detection described above.

Related Work

Techniques for the markerless detection of single pedestrians in crowds using monocular cameras are not as robust as techniques using stereo cameras. Publications like [46, 47, 48, 49, 50, 51] all report a false detection rate of more than 10 %. Typically, decreasing the rate of missed detections increases the number of false positives. For our purpose this means a lot of manual work, because we need nearly error-free data for further analysis.

Detection techniques, such as [52] or [53] for stereo cameras, depend on accurate segmentation of foreground objects from the background. For dense crowds such as in our experiments these methods would not be applicable or would only detect groups of people. Other techniques use motion patterns of human beings [54, 55] like periodic leg movement, additionally take skin colour [56] into account or use a face detector [57], which is only applicable from the side view because of visibility. The side view is also needed by Hou and Pang [58], because they assign a region to one person if the region has the same distance to the camera. In our experiments the video recordings were done overhead to avoid occlusions, because we want to know the exact position of every person at any time, also in crowded scenes, so that often no extremity or skin is visible. This perpendicular view also frees us from a decelerating plan-view statistic like in [59].

Algorithms using motion like [60] cannot be adopted, because in our experiments dense situations and thus stagnant flow often occur.

In [61] a method for tracking people in dense situations with multiple cameras is suggested. The combined data from several views is used to calculate the height and thus the position of people's heads.

A robust detection and tracking algorithm, also for crowded scenes, is described in [62]. The detection process is based on a clustering procedure using biometrically inspired constraints.

In [63] the detection is done by searching for clusters inside the point cloud of a depth map which are arranged in a sphere whose size is proportional to a human head. For people walking close together, van Oosterhout et al. obtain a precision of 0.97 when taking tracking into account, which is nearly as high as our precision.

Detection Process

For the following example we use the same recordings as for the detection with markers, but ignore the markers.

To detect pedestrians without markers in dense crowds we utilize the depth field described in Sect. 4.2.2.1. People are identified only by the shape of the top part of their body, especially the head and shoulders. Because similarly shaped objects may occur, the background subtraction has to be performed beforehand as described above.

Directed Isolines and Approximating Ellipses

To extract features identifying pedestrians inside the depth field, directed isolines of equal distance to the camera are used. The step size of the iso-value scanning the depth field is 5 cm. Beforehand, the depth field is adapted by replacing values covered by the background mask with the furthest value belonging to the foreground.

In Fig. 4.2 red isolines surround regions further away. They can be ignored. Green isolines encircle regions which are nearer to the camera. To improve the visibility of the isolines the color coding of the disparity map is replaced by a grey scale one.

Fig. 4.2

(Color online) Left: Isolines of the same distance to the camera at intervals of 5 cm (colored according to the orientation) drawn on the grey scaled perspective depth field. Right: Pyramidal grouped ellipses identifying pedestrians. Green ellipses correspond to the isoline nearest to the camera, followed by red and blue

Fig. 4.3

Zoomed view of a part of Figs. 4.1 and 4.2: disparity, isolines and pyramid of approximating ellipses

The remaining isolines, namely those enclosing a number of pixels between a minimum and a maximum and having a small ratio between isoline length and enclosed area (to eliminate isolines with big dents), are approximated by ellipses. The ellipses allow easier access to the global shape. Small ellipses with a large eccentricity are discarded; large ellipses may have a bigger eccentricity, since they can enclose multiple people. The values used for the selection of isolines and ellipses were chosen heuristically, considering the number of pixels covered by one person.

Ellipses Pyramid

By scanning the depth field downwards in steps of 5 cm, a pyramid of ellipses is built up for every person (Fig. 4.3). For every new depth level, an ellipse is assigned to the pyramid whose center lies inside the new ellipse. If no pyramid fits, the new ellipse starts a new pyramid. If multiple ellipses on the same level cover the same pyramid, the one whose center is closest to the pyramid center is chosen. New ellipses can cover multiple pyramids if the pyramids already contain a substantial number of ellipses from previous depth levels. Otherwise the small pyramids are rejected or, if there are only small ones, the pyramid with the closest center is chosen.

At the end we neglect pyramids with a
  • Small number of ellipses,

  • Large second ellipse (corresponding to the head),

  • Small third ellipse (to reject e.g. lifted arms),

  • Small last ellipse (corresponding to the body).

We prefer strict deletion to avoid false detections, since it is not necessary to detect a person in every frame for tracking. The values again are chosen heuristically, taking people's shape into account. The first ellipse is not analyzed, because its location varies too much, especially due to the varying depth at which heads are detected for the first time. This is also the reason why the center of the second ellipse represents a pedestrian and thus is tracked. The resulting pyramids are shown on the right of Fig. 4.2. The ellipses are colored according to their level in the pyramid: the topmost ellipse is green, the second red and the third blue. The latter ellipses can cover more than one person.
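The rejection heuristic above can be sketched as follows; all threshold values are invented placeholders, since the chapter states only that they were chosen heuristically from people's shape:

```python
def keep_pyramid(ellipse_areas,
                 min_levels=4, max_head_area=900,
                 min_third_area=150, min_body_area=400):
    """Filter an ellipse pyramid, given the ellipse areas in pixels from
    top to bottom. Rejects pyramids with too few ellipses, a too-large
    second ellipse (head), a too-small third ellipse (e.g. a lifted
    arm) or a too-small last ellipse (body). Thresholds illustrative."""
    if len(ellipse_areas) < min_levels:
        return False
    if ellipse_areas[1] > max_head_area:    # second ellipse ~ head
        return False
    if ellipse_areas[2] < min_third_area:   # third ellipse
        return False
    if ellipse_areas[-1] < min_body_area:   # last ellipse ~ body
        return False
    return True
```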

Fig. 4.4

Left: Tracked people by markers and their smooth path during the last second. Right: Tracked people without markers and their more unsettled path during the last second

After analyzing the complete video recording, trajectories which do not cross the whole test area, or which contain only few frames in which the supposed pedestrian is identified, are rejected. The right of Fig. 4.4 shows the tracked people and their unsettled paths during the last second, in comparison to the left picture showing the paths of the people detected by markers.

4.2.2.3 Tracking

For tracking the detected pedestrians, the robust pyramidal iterative Lucas-Kanade feature tracker [64] is used. This tracker extends the Lucas-Kanade method for calculating the optical flow by introducing successive Gaussian pyramids of successive images B and propagating tracking results from a low resolution level to the next higher level as an initial guess.

The tracker searches with sub-pixel accuracy in regions of the same size in recursive Gaussian pyramids
$$\displaystyle{ {B}^{L}(u_{ x},u_{y}) =\sum _{ i=-1}^{1}\sum _{ j=-1}^{1}{2}^{-(2+\vert i\vert +\vert j\vert )}{B}^{L-1}(2u_{ x} - i,2u_{y} - j) }$$
(4.4)
with \(B^{0} = B\) and \({u}^{L} = {2}^{-L}u\), starting in \(B^{L-1}\) with \({u}^{L-1} = 2{u}^{L}\).

The size of the tracked region is adapted to the head size, which can be deduced from the person's height or the distance to the camera. The number of pyramid levels L is set to four. The size of the region at the last level \(B^{L-1}\) is 50 % bigger than the head length of around 21 cm, so that the region at the first level \(B^{0}\) has the size m of the marker used: \(m = 1.5 \cdot 21\,\mbox{ cm}/{2}^{3} \approx 4\,\mbox{ cm}\).
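The downsampling step of Eq. (4.4) can be sketched in a few lines of NumPy; this is a plain illustration, not the tracker's actual implementation, and the border handling by edge replication is our assumption:

```python
import numpy as np

def pyramid_level(img):
    """One Gaussian pyramid level per Eq. (4.4): convolve with the 3x3
    kernel of weights 2^-(2+|i|+|j|) (1/4 centre, 1/8 edges, 1/16
    corners) and subsample by a factor of two."""
    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]]) / 16.0
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h // 2, w // 2))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Patch centred on pixel (2y, 2x) of the finer level.
            patch = padded[2 * y:2 * y + 3, 2 * x:2 * x + 3]
            out[y, x] = np.sum(kernel * patch)
    return out
```

Since the kernel weights sum to one, a uniform image remains uniform at every level, which is a quick sanity check of the weights \(2^{-(2+\vert i\vert +\vert j\vert )}\).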

If the result for a tracked head is not feasible, we extrapolate the next position and, for the detection with markers, adjust the position to the center of the marker considering the pixel brightness.

Merging Trajectories

The precision of the marker-based trajectories is sufficiently high that overlapping camera views allow a combination of trajectory sets. Since we use stereo recordings, all cameras are synchronized. Thus we do not need a temporal adjustment and only have to minimize the distance between the pedestrians' positions for every frame. Using the method of least squares we find the associated trajectory pairs for each overlapping view and minimize the average distance error between the two point clouds by adapting the extrinsic parameter set of one view. The method for finding the least-squares solution, i.e. the optimal translation vector and rotation matrix, is based on the singular value decomposition [65]. After applying the transformation, the average distance between a corresponding trajectory pair is 1 ± 0.4 cm and the maximum error 5 ± 2.6 cm for all planar experiments measured in 2D. For experiments in 3D space, we get an average error of 3 ± 0.5 cm and a maximum error of 8 ± 2.6 cm. The maximum error appears towards the boundary of the camera's view. To reduce the influence of this error we interpolate linearly between two matching trajectories, so that the part of a trajectory closer to the respective boundary has less influence on the combined trajectory. All trajectories in Sect. 4.3 are combined results of two camera views.
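The SVD-based least-squares alignment can be illustrated with the standard Kabsch procedure; the following is our own minimal 2D sketch of that technique, not the authors' code:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping point cloud P
    onto Q (Kabsch/SVD method): minimizes sum ||R @ p + t - q||^2 over
    corresponding rows of the (n, d) arrays P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])  # exclude reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Applied per frame to the matched pedestrian positions of two overlapping views, such a fit yields the corrected extrinsic parameters of one view.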

4.2.3 Results of Trajectory Extraction

Quantitative results of the detection with and without markers are shown in Table 4.1. The one false positive of the marker-based detection traces back to an image area resembling the simply structured marker. The strict heuristic of the markerless detection deletes three correct trajectories.

The combination of both methods has no false detections. It uses the detection with markers and accepts a detection only if the center of the dot lies inside the second highest ellipse of that pedestrian's pyramid.

Figure 4.4 shows the path of each detected pedestrian during the last second (left: detection with markers; right: without markers). Even though the detection result is good for both methods, the marker-based method yields smoother trajectories. The smoothness is important for the analysis, e.g. for the microscopic velocity calculation. To quantify the smoothness, the average microscopic acceleration can be used (see Table 4.1). For successively detected points \(X_{j}\) along a trajectory \(p_{i}\), the microscopic acceleration at position \(X_{j}\) is
$$\displaystyle{ \vert \vert (X_{j+1} - X_{j}) - (X_{j} - X_{j-1})\vert \vert \,/\,{(\varDelta t)}^{2}. }$$
(4.5)
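Assuming trajectories stored as arrays of positions, Eq. (4.5) can be evaluated directly with second differences; this NumPy sketch (function name ours) defaults to the 1/16 s frame interval of the recordings:

```python
import numpy as np

def microscopic_acceleration(points, delta_t=1.0 / 16.0):
    """Eq. (4.5): ||(X_{j+1} - X_j) - (X_j - X_{j-1})|| / (delta_t)^2
    for the inner points of a trajectory given as an (n, 2) array."""
    steps = np.diff(points, axis=0)      # X_{j+1} - X_j
    second = np.diff(steps, axis=0)      # second differences
    return np.linalg.norm(second, axis=1) / delta_t ** 2
```

Averaging this quantity over all inner points of all trajectories gives the values reported in Table 4.1.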
Table 4.1

Comparative results of the detection methods

Method       | Detected | False positive | False negative | avg. acc. [m/s²]
With markers | 304      | 1              | 0              | 1.2 ± 0.7
Markerless   | 300      | 0              | 3              | 6.5 ± 6.8
Combination  | 303      | 0              | 0              | 1.2 ± 0.7

Fig. 4.5

Side view of mean pyramids binning by the angle to the optical axis. The horizontal lines representing the ellipses are surrounded by two lines at the major and minor radius. The middle vertical lines show the pyramid axis concatenating the center of the ellipses stack

Figure 4.5 shows, from the side view, six bins of all pyramids detected during the experiment, binned according to their angle to the optical axis. The lowest ellipses are neglected if they cover more than one person. All pyramids were adjusted to one camera distance before a mean pyramid for each bin was calculated. The middle lines concatenate the centers of the ellipses of successive height levels and depict the pyramid axes. The outer two lines surround the ellipse stack at the major and minor radius. One can see that the center line of the mean pyramids is tilted according to the angle, but less than one would expect from the perspective view, because the isolines underlying the lower ellipses have to cover the higher ones due to the omitted plan-view statistic. The radii increase slightly for larger angles. The size of the radii can be read from the top diagram axis.

4.3 Experimental Setup of the Studied Experiments

Four pedestrian experiments will be introduced in this section. Experiments on uni-, bidirectional and merging flow were performed in hall 2 of the fairground Düsseldorf (Germany) in 2009 with up to 400 students (age: 25 ± 5.7 years, height: 1.76 ± 0.09 m, free velocity: 1.55 ± 0.18 m/s), whereas the experiment on bottleneck flow was performed in 2006 in the wardroom of the “Bergische Kaserne Düsseldorf” with a test group composed of soldiers. For the Hermes project, people were recruited through announcements at universities offering 50 Euro per day for participation. Consequently, most of the participants were students; no further selection of participants was undertaken. During the experiments, they were asked to move normally and purposefully, but without pushing. They were free to speak, and the sound was also recorded.

4.3.1 Unidirectional Flow

Fig. 4.6

Setup and snapshot of unidirectional flow experiment. Note that the gray area in the sketch shows the location of the measurement area in the analysis

Fig. 4.7

Trajectories of pedestrians in two runs of the unidirectional flow experiment. The distance between the outermost trajectories and the boundary differs with the density

Figure 4.6 shows the sketch and a snapshot of the experiment to study unidirectional flow in a straight corridor with open boundaries. Three corridor widths (1.8, 2.4 and 3.0 m) were chosen and 28 runs were carried out in all. To regulate the pedestrian density in the corridor, the widths of the entrance \(b_{\mathrm{entrance}}\) and the exit \(b_{\mathrm{exit}}\) were changed in each run. Figure 4.7 shows the trajectories extracted from two runs of the experiment. For more details of the setup and data capturing we refer to [42, 66].

4.3.2 Bidirectional Flow

Fig. 4.8

Setup and snapshot of bidirectional flow experiment. Note that the gray area in the sketch shows the locations of the measurement areas in the analysis. Lane formation can be observed from the snapshot

Figure 4.8 shows the sketch and a snapshot of the experimental setup to study bidirectional flow in a straight corridor. 22 runs were performed with corridor widths of 3.0 m and 3.6 m, respectively. The widths of the left entrance \(b_{l}\) and the right entrance \(b_{r}\) were changed in each run to regulate the density inside the corridor and the ratio of the opposing streams. To vary the degree of disorder, the participants got different instructions on which exit to choose. Three different types of setting were adopted in these experiments:

\(b_{l} = b_{r}\), choose exits freely: In this type of experiment, the widths of the entrances \(b_{l}\) and \(b_{r}\) were set to be the same. The test persons were not given any instruction about the exit to choose and could choose freely. Five runs were carried out under these conditions in a corridor with a width of 3.6 m.

Fig. 4.9

Trajectories of pedestrians in two runs of bidirectional flow experiment. Different types of lane formation, stable separated lanes and dynamical multi-lanes, can be observed

\(b_{l} = b_{r}\), specify exits in advance: Again the same widths \(b_{l}\) and \(b_{r}\) were chosen. But the instruction given to the test persons at the beginning of the experiments was changed: the participants were asked to choose an exit at the end of the corridor according to a number given to them in advance. Persons with odd numbers should choose the left exit, while those with even numbers were asked to choose the exit on the right side.

\(b_{l} \neq b_{r}\), specify exits in advance: In this case the widths of the entrances \(b_{l}\) and \(b_{r}\) were different, and the participants were instructed to choose an exit at the end of the corridor according to a number, as in the previous setting.

Fig. 4.10

Setup and a snapshot of the merging flow experiment in a T-junction. Note that the gray areas in the sketch show the locations of the measurement areas in the analysis

Fig. 4.11

Trajectories of pedestrians in two runs of the merging flow experiment in a T-junction. The utilization of the space near the merging area differs under various densities

Figure 4.9 shows the pedestrian paths for two runs of the experiment with and without instruction. More details of the experiment setup can be found in [17].

4.3.3 Merging Flow

Figure 4.10 shows the sketch of the experiment to study merging flow in a T-junction and a snapshot. Two pedestrian streams from opposite sides of the T-shaped corridor join and form a single stream. In these experiments, all three parts of the corridor were set to the same width \(b_{\mathrm{cor}}\). 12 runs were carried out with \(b_{\mathrm{cor}}\) of 2.4 and 3.0 m, respectively. To regulate the pedestrian density, the width of the entrance was changed in each run. The left and right entrances were always set to the same width \(b_{\mathrm{entrance}}\); in this way we guarantee the symmetry of the two branches of the stream. The number of pedestrians in the left and right branches of the T-junction was approximately equal. The number was set such that the overall duration of all experiments was similar and long enough to assure a stationary state. Figure 4.11 shows the paths of the pedestrians in the T-junction under low and high density conditions.

4.3.4 Bottleneck Flow

Figure 4.12 shows a snapshot and a sketch of the setup. The experimental setup allows analyzing the influence of the bottleneck width and length (Fig. 4.13). In one experiment the width b was varied from 0.9 to 2.5 m at fixed corridor length. In the other the corridor length l was changed (0.06, 2.0 and 4.0 m) while the width was fixed at b = 1.2 m. For more details of the experimental setup and data capturing we refer to [40, 67].

Fig. 4.12

Setup and snapshot of the bottleneck flow experiment. The length l and width b of the bottleneck were changed in each run to analyze the flow through it

Fig. 4.13

Trajectories of pedestrians in two runs of the bottleneck flow experiment. The influence of bottleneck length and width on pedestrian movement, as well as lane formation, can be observed

4.4 Measurement Methods

The trajectories gained by the methods described in Sect. 4.2.2 are the basis for measuring the fundamental diagram J(ρ) or v(ρ). In this section, we study how the way of analyzing the trajectories influences the relation between v, ρ and J. To determine these variables one can use time-averaged density, velocity or flow; however, different definitions may introduce different measurement errors. Here we use four definitions, including macroscopic and microscopic methods, to measure observables like flow, velocity and density.

Fig. 4.14

Illustration of the different measurement methods. Method A is a local measurement at a cross-section with position x, averaged over a time interval Δt, while Methods B–D measure at a certain time and average the results over a space Δx. Note that for Method D the Voronoi diagrams are generated according to the spatial distribution of pedestrians at a certain time

  • Method A

    For Method A, a reference location x in the corridor is taken (as shown in Fig. 4.14) and mean values of flow and velocity are calculated over a time interval Δt. We refer to this average by \(\langle \rangle _{\varDelta t}\). The time t_i and the velocity v_i of each pedestrian passing x can be determined directly. Thus, the flow \(\langle J\rangle _{\varDelta t}\) and the time mean velocity \(\langle v\rangle _{\varDelta t}\) can be calculated as
    $$\displaystyle{ \langle J\rangle _{\varDelta t} = \frac{N_{\varDelta t}} {t_{N_{\varDelta t}} - t_{1_{\varDelta t}}}\qquad \mathrm{and}\qquad \langle v\rangle _{\varDelta t} = \frac{1} {N_{\varDelta t}}\sum _{i=1}^{N_{\varDelta t} }v_{i}(t) }$$
    (4.6)
    where N_Δt is the number of persons passing the location x during the time interval Δt. \(t_{1_{\varDelta t}}\) and \(t_{N_{\varDelta t}}\) are the times when the first and the last pedestrian pass the location within Δt; their difference can be smaller than Δt. The time mean velocity \(\langle v\rangle _{\varDelta t}\) is the mean value of the instantaneous velocities v_i(t) of the N_Δt persons, according to Eq. (4.6). We calculate v_i(t) from the displacement of pedestrian i in a small time interval Δt′ around t (note that Δt ≫ Δt′):
    $$\displaystyle{ v_{i}(t) = \frac{\|{\boldsymbol x_{i}}(t +\varDelta {t}^{{\prime}}/2) -{\boldsymbol x_{i}}(t -\varDelta {t}^{{\prime}}/2)\|} {\varDelta {t}^{{\prime}}} \, }$$
    (4.7)
  • Method B

    The second method measures the mean value of velocity and density over space and time. The spatial mean velocity and density are calculated by taking a segment with length Δ x in the corridor as the measurement area. The velocity \(\langle v\rangle _{i}\) of each person is defined as the length Δ x of the measurement area divided by the time he or she needs to cross the area (see Eq. (4.8)),
    $$\displaystyle{ \langle v\rangle _{i} = \frac{\varDelta x} {t_{i,\mathrm{out}} - t_{i,\mathrm{in}}}\, }$$
    (4.8)
    where t_i,in and t_i,out are the times at which person i enters and exits the measurement area, respectively. The density \(\langle \rho \rangle _{i}\) for each person is calculated with Eq. (4.9):
    $$\displaystyle{ \langle \rho \rangle _{i} = \frac{1} {t_{i,\mathrm{out}} - t_{i,\mathrm{in}}} \cdot \int _{t_{i,\mathrm{in}}}^{t_{i,\mathrm{out}} } \frac{{N}^{{\prime}}(t)} {\varDelta x \cdot \varDelta y} dt\, }$$
    (4.9)
    where Δy = b_cor is the width of the measurement area and N′(t) is the number of persons in this area at time t.
  • Method C

    With the third measurement method, which we call the classical method, the density \(\langle \rho \rangle _{\varDelta x}\) is defined as the number of pedestrians divided by the area of the measurement section:
    $$\displaystyle{ \langle \rho \rangle _{\varDelta x} = \frac{N} {\varDelta x \cdot \varDelta y}\, }$$
    (4.10)
    The spatial mean velocity is the average of the instantaneous velocities v_i(t) of all pedestrians in the measurement area at time t:
    $$\displaystyle{ \langle v\rangle _{\varDelta x} = \frac{1} {N}\sum _{i=1}^{N}v_{ i}(t)\, }$$
    (4.11)
  • Method D

    This method is based on Voronoi diagrams [68], a decomposition of a metric space determined by the distances to a specified set of objects in the space. To each object one associates a Voronoi cell, consisting of all points that are at least as close to this object as to any other object. At any time the positions of the pedestrians form such a set of objects, from which the Voronoi diagrams (see Fig. 4.14) are generated. The cell area A_i can be thought of as the personal space belonging to pedestrian i. The density and velocity distributions over space, ρ_xy and v_xy, are then defined as
    $$\displaystyle{ \rho _{xy} = 1/A_{i}\qquad \mathrm{and}\qquad v_{xy} = v_{i}(t)\qquad \mbox{ if $(x,y) \in A_{i}$}\, }$$
    (4.12)
    where v_i(t) is the instantaneous velocity of each person, see Eq. (4.7). The Voronoi density and velocity for the measurement area are then defined as [69]
    $$\displaystyle\begin{array}{rcl} \langle \rho \rangle _{v} = \frac{\iint \rho _{xy}dxdy} {\varDelta x \cdot \varDelta y} \,& & {}\end{array}$$
    (4.13)
    $$\displaystyle\begin{array}{rcl} \langle v\rangle _{v} = \frac{\iint v_{xy}dxdy} {\varDelta x \cdot \varDelta y} \,& & {}\end{array}$$
    (4.14)
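As a sketch of how these definitions translate into code, the following Python functions implement Methods A–C on per-pedestrian trajectories stored as (n_frames, 2) position arrays sampled at a fixed frame rate. All function and variable names, the data layout, and the assumption of movement in the positive x direction are ours, not part of the original measurement software; Method D additionally requires the Voronoi cell areas A_i (e.g. from scipy.spatial.Voronoi) and is omitted here:

```python
import numpy as np

FPS = 16                 # frame rate of the experiment cameras
DT_PRIME = 10 / FPS      # Δt' = 0.625 s (ten frames), as used in Sect. 4.5.1

def instantaneous_velocity(xy, frame, fps=FPS, dt_prime=DT_PRIME):
    """Eq. (4.7): displacement over a small interval Δt' centred on `frame`.
    `xy` is an (n_frames, 2) array of one pedestrian's positions in metres."""
    half = int(round(dt_prime * fps / 2))
    i0, i1 = max(frame - half, 0), min(frame + half, len(xy) - 1)
    return float(np.linalg.norm(xy[i1] - xy[i0]) / ((i1 - i0) / fps))

def method_a(trajs, x_ref, fps=FPS):
    """Method A, Eq. (4.6): flow and time-mean velocity at the
    cross-section x = x_ref (movement in +x direction assumed)."""
    times, vels = [], []
    for xy in trajs.values():
        cross = np.nonzero((xy[:-1, 0] < x_ref) & (xy[1:, 0] >= x_ref))[0]
        if cross.size:                       # first crossing of x_ref
            f = int(cross[0]) + 1
            times.append(f / fps)
            vels.append(instantaneous_velocity(xy, f, fps))
    times = np.sort(np.array(times))
    flow = len(times) / (times[-1] - times[0])
    return float(flow), float(np.mean(vels))

def method_b(trajs, x_in, x_out, width, fps=FPS):
    """Method B, Eqs. (4.8)-(4.9): per-pedestrian mean velocity and density
    in the segment x_in <= x <= x_out of a corridor of the given width."""
    dx = x_out - x_in
    n_frames = max(len(xy) for xy in trajs.values())
    inside = np.zeros(n_frames)              # N'(t): persons inside per frame
    spans = {}
    for pid, xy in trajs.items():
        idx = np.nonzero((xy[:, 0] >= x_in) & (xy[:, 0] <= x_out))[0]
        if idx.size:
            inside[idx] += 1
            spans[pid] = (int(idx[0]), int(idx[-1]))
    result = {}
    for pid, (f_in, f_out) in spans.items():
        if f_out > f_in:
            v = dx / ((f_out - f_in) / fps)                     # Eq. (4.8)
            rho = inside[f_in:f_out + 1].mean() / (dx * width)  # Eq. (4.9)
            result[pid] = (v, rho)
    return result

def method_c(trajs, x_in, x_out, width, frame, fps=FPS):
    """Method C, Eqs. (4.10)-(4.11): classical density and mean
    instantaneous velocity in the measurement area at one frame."""
    vels = [instantaneous_velocity(xy, frame, fps)
            for xy in trajs.values()
            if frame < len(xy) and x_in <= xy[frame, 0] <= x_out]
    rho = len(vels) / ((x_out - x_in) * width)
    return rho, float(np.mean(vels)) if vels else 0.0
```

For the corridor analysis below, x_ref = 0 and a rectangle from x = −2 m to x = 0 spanning the corridor width correspond to the measurement locations described in Sect. 4.5.1.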

4.5 Results of Analysis

4.5.1 Effects of Measurement Methods

Fig. 4.15

Selection of the stationary state from the time series of density. The beginning and the end of the stationary state are selected manually and represented by two vertical lines

Fig. 4.16

The relation between density and flow measured from the same set of trajectories but with different methods. The density in (a) is calculated indirectly using \(\rho =\langle J\rangle _{\varDelta t}/(\langle v\rangle _{\varDelta t}\cdot b_{\mathrm{cor}})\), while the flows in (b), (c) and (d) are obtained from J = ρ v b. The legends in (b), (c) and (d) are the same as in (a)

To analyze the effect of the measurement methods, we calculate the fundamental diagram from the unidirectional experiments with corridor width b_cor = 1.8 m. For Method A we choose the time interval Δt = 10 s, \(\varDelta {t}^{{\prime}} = 0.625\) s (corresponding to ten frames) and the measurement position x = 0 (see Fig. 4.7). For the other three methods a rectangle spanning the width of the corridor and extending 2 m from \(x = -2\) m to x = 0 is chosen as the measurement area. We calculate the densities and velocities for each frame at a frame rate of 16 fps. All data presented below are obtained from the same set of trajectories. To determine the fundamental diagram only data in the stationary state, selected manually from the time series of density (see Fig. 4.15), were considered. For Method D we use one frame per second to decrease the number of data points and to represent the data more clearly.

Figure 4.16 shows the relation between density and flow obtained with the different methods. With Method A the flow and the mean velocity are obtained directly. To get the relation between density and flow, the equation
$$\displaystyle{ \rho =\langle J\rangle _{\varDelta t}/(\langle v\rangle _{\varDelta t} \cdot b_{\mathrm{cor}}) }$$
(4.15)
was adopted to calculate the density. For Methods B, C and D the mean density and velocity are obtained directly, since they are averages over space. The fundamental diagrams obtained with the different methods show a similar trend: the pedestrian flow fluctuates little at low densities and strongly at high densities, and the fluctuations for Methods A and D are smaller than those for the other methods. However, there is one major difference between the results. The fundamental diagrams obtained with Methods A and C are smooth, while those obtained with Methods B and D show a clear discontinuity at a density of about 2 m−2. The averaging over a time interval in Method A and the large scatter of Method C blur this discontinuity. Method D reduces the scatter in density and velocity [69]; this reduced fluctuation, combined with a good resolution in time and space, reveals a phenomenon that is not observable with Methods A and C. Consequently, we mainly use the Voronoi method to analyze the experiments in the following.
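Equation (4.15) is simply the hydrodynamic relation J = ρ v b solved for ρ; a minimal helper (the function name is our own, for illustration):

```python
def density_from_flow(flow, mean_speed, width):
    """Eq. (4.15): density implied by Method A's measured flow <J> and
    time-mean velocity <v> via the hydrodynamic relation J = rho * v * b."""
    return flow / (mean_speed * width)

# e.g. <J> = 1.25 1/s and <v> = 1.0 m/s in a b = 1.8 m corridor
# give rho = 1.25 / (1.0 * 1.8), about 0.694 persons per square metre
```

Read the other way, the same relation J = ρ v b yields the flows shown in Fig. 4.16b–d from the directly measured densities and velocities.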

4.5.2 Comparison of Fundamental Diagrams for Various Flows

Fig. 4.17

Comparison of the fundamental diagrams of unidirectional flow in corridors for different widths

Figure 4.17 shows the relation between density and specific flow obtained with the Voronoi method. The fundamental diagrams of unidirectional pedestrian flow in the same type of corridor but with three different widths are compared and agree well with each other: the specific flow in the corridors is independent of the corridor width. At about ρ = 2.0 m−2 the specific flow reaches its maximum value, which is called the capacity of the facility. This result is in conformance with Hankin's findings [70]: above a certain minimum width of about 4 ft (about 1.22 m), the maximum flow in subways is directly proportional to the width of the corridor. In the range of densities reached in the experiment, our results support the concept that the specific flow J_s = J/b is independent of the width of the facility.

Fig. 4.18

Comparison of the fundamental diagrams of unidirectional flow and bidirectional flow

Further, we compare the fundamental diagrams of uni- and bidirectional flow in Fig. 4.18. At densities ρ < 1.0 m−2 no significant difference exists. For ρ > 1.0 m−2, however, the difference between uni- and bidirectional flow becomes more pronounced and a qualitative difference can be observed. In the bidirectional case a plateau forms, starting at a density ρ ≈ 1.0 m−2, where the flow becomes almost independent of the density. Such plateaus are typical for systems containing 'defects' which limit the flow and have been observed e.g. on bidirectional ant trails [71], where they are a consequence of the interaction of the ants. In our experiments the defects are conflicts between persons moving in opposite directions. These conflicts occur only between two persons, but the resulting reduction of velocity influences the following people. Note that the unidirectional data for ρ > 2.0 m−2 were obtained with a slight change of the experimental setup: to reach these densities, a bottleneck was built at the end of the corridor. This may limit the comparability of the fundamental diagrams for ρ > 2.0 m−2.

With the Voronoi method the measurement area can be chosen smaller than the space occupied by a single pedestrian. We calculate the Voronoi density, velocity and specific flow over small regions (\(\varDelta x \times \varDelta y = 10 \times 10\,\mathrm{cm}\)) for each frame and average them over the stationary state separately. This yields profiles of density, velocity and specific flow over the experimental area (see Fig. 4.19). These profiles provide new insights into the spatial characteristics and the sensitivity of the quantities to other potential factors. The density profile shows conspicuously high densities at the corner of the T-junction, indicating critical spots under crowded conditions. The region with the highest density is a small triangular area where the left and right branches merge. Moreover, the density profile shows clear boundary effects: except for the merging area at the corner, the densities in the middle of the corridors are significantly higher than near the boundaries. The spatial variation of the velocity is different. No boundary effect occurs for the velocity, and the profile is independent of the corridor width, especially in the exit corridor; the velocity becomes larger after the merging of the streams and increases persistently along the movement direction. The specific flow profile shows that the highest flow occurs at the center of the exit corridor and protrudes from the exit corridor into the area where the two branches start to merge. This indicates that the merging process in front of the exit corridor restricts the flow; the causes of this restriction must lie outside the region of highest flow. These profiles demonstrate that density and velocity measurements are sensitive to the size and location of the measurement area. For the comparison of measurements (e.g. for model validation or calibration) it is necessary to specify precisely the size and position of the measurement area.
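The profile computation can be approximated on a grid without an explicit Voronoi construction, since assigning every grid-cell centre to its nearest pedestrian is exactly the Voronoi decomposition. A sketch for a single frame (names and the rectangular domain are illustrative, cells are implicitly clipped to the domain, and an exact variant would use scipy.spatial.Voronoi for the cell areas A_i):

```python
import numpy as np

def voronoi_profiles(positions, speeds, xlim, ylim, cell=0.1):
    """Grid approximation of the Voronoi profiles, Eqs. (4.12)-(4.14):
    A_i is estimated as the number of grid cells owned by pedestrian i
    times the cell area; rho_xy = 1/A_i and v_xy = v_i on every cell
    belonging to pedestrian i's Voronoi region."""
    xs = np.arange(xlim[0] + cell / 2, xlim[1], cell)
    ys = np.arange(ylim[0] + cell / 2, ylim[1], cell)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # nearest pedestrian for every grid-cell centre (= Voronoi assignment)
    d2 = ((pts[:, None, :] - positions[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    area = np.bincount(owner, minlength=len(positions)) * cell ** 2  # ~A_i
    rho = (1.0 / area[owner]).reshape(gx.shape)            # Eq. (4.12)
    vel = np.asarray(speeds, float)[owner].reshape(gx.shape)
    return rho, vel
```

Averaging rho over the grid reproduces the Voronoi mean density of Eq. (4.13); averaging the per-frame profiles over the stationary state gives maps like those in Fig. 4.19.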

Fig. 4.19

The profiles of density, velocity and flow in T-junction for one run of the experiments. (a) Density profile. (b) Velocity profile. (c) Specific flow profile

Fig. 4.20

Comparison of the fundamental diagrams of merging flow in different parts of a T-junction

In Fig. 4.20 we compare the fundamental diagrams of merging flow in a T-junction with corridor width b_cor = 2.4 m. The data labeled 'T-left' and 'T-right' are measured in the areas where the streams prepare to merge, while the data labeled 'T-front' are measured in the region where the streams have already merged. The locations of the measurement areas are illustrated in Fig. 4.10; for ease of comparison, all measurement areas have the same size (4.8 m²). The fundamental diagrams of the left and right branches match well, i.e. whether the stream turns right or left has no influence on the fundamental diagram. However, for densities ρ > 0.5 m−2 the velocities in the branches (T-left and T-right) are significantly lower than the velocities measured after the merging (T-front). This discrepancy becomes more distinct in the relation between density and specific flow: in the main stream (T-front) the specific flow increases with the density up to ρ = 2.5 m−2, while in the branches it remains nearly constant for densities between 1.5 and 3.5 m−2. Thus there seems to be no unique fundamental diagram describing the relation between velocity and density for the complete system. For this difference we can only offer assumptions regarding the causes. One is based on the behavior of the pedestrians. Congestion occurs at the end of the branches, where the region of maximum density appears. Pedestrians standing in the jam in front of the merging cannot perceive where the congestion disperses or whether the jam persists after the merging. In such a situation it is questionable whether urging or pushing brings any benefit, and an optimal usage of the available space becomes unimportant. The situation changes completely once the location of dissolution becomes apparent: then a certain urge or an optimal usage of the available space makes sense and can lead to a benefit, and pedestrians move in a comparatively active way. This may be the reason why, at the same density, the velocities after the merging are higher than those in front of it. Whether this explanation is plausible could be answered by comparing these data with experimental data from a corner without merging.

Fig. 4.21

(a) Variation of the flow J with bottleneck length l and (b) variation of the flow J with bottleneck width b

In Fig. 4.21 the flow from the bottleneck experiment is compared with previous measurements using Method A. The black line in Fig. 4.21b represents a constant specific flow of 1.9 (m s)−1. The difference between the flow at l = 0.06 m and at l = 2.0 and 4.0 m is ΔJ ≃ 0.5 s−1. The data points of Müller's experiments [72] lie significantly above the black line; his setup features a large initial density of around 6 m−2 and an extremely short corridor. The discrepancy between Müller's data and the empirical line J = 1.9 b is roughly ΔJ ≃ 0.5 s−1. This difference can be attributed to the short corridor length, but may also be due to the higher initial density in Müller's experiment.

4.6 Conclusion

We have performed a large series of experiments to selectively analyze parameters independent of undesired influences.

Different strategies, with and without markers, for collecting precise trajectories from overhead video recordings of these experiments and of future field studies have been discussed. Using the perspective depth field directly from stereo recordings is a fast method for the perpendicular view. The automatic extraction of all trajectories, which we need especially to verify microscopic models and to analyze the movement microscopically, has a small error.

To obtain smoother trajectories without markers, a subsequent smoothing could be performed, or the axes of the stack of pyramidal ellipses could be taken into account to get more stable points along the pedestrians' routes.

Experiments on uni- and bidirectional flow in a straight corridor, merging flow in a T-junction and pedestrian flow through bottlenecks have been presented in more detail. Four different measurement methods for obtaining quantities from pedestrian trajectories were adopted and their influence on the fundamental diagram has been investigated. The results obtained with the different methods agree well in the density ranges observed in the experiments; the main differences are the range of the fluctuations and the resolution in time and space. The Voronoi method permits the deepest and most precise insight into the temporal progress and spatial distribution of velocity, density and flow.

Selected analysis results have been presented. Fundamental diagrams for the same type of facility but different corridor widths agree well and can be unified in one diagram for the specific flow. The comparison of the fundamental diagrams of the straight corridor and the T-junction indicates that fundamental diagrams of different facilities are not comparable. Moreover, the measurement of density and velocity strongly depends on the size and location of the measurement area, as can be observed from the profiles of density, velocity and specific flow measured with the Voronoi method. The influence of the length and width of a bottleneck on the flow has been shown and compared with previous studies.

Acknowledgements

This study was performed within the project funded by the German Research Foundation (DFG) KL 1873/1-1 and SE 1789/1-1 and the project Hermes funded by the Federal Ministry of Education and Research (BMBF) Program on “Research for Civil Security – Protecting and Saving Human Life”.

References

  1. Schreckenberg, M., Sharma, S.D. (eds.): Pedestrian and Evacuation Dynamics. Springer, Berlin/Heidelberg (2002)
  2. Galea, E.R. (ed.): Pedestrian and Evacuation Dynamics. CMS, London (2003)
  3. Klingsch, W.W.F., Rogsch, C., Schadschneider, A., Schreckenberg, M. (eds.): Pedestrian and Evacuation Dynamics. Springer, Berlin/Heidelberg (2010). http://www.springer.com/math/applications/book/978-3-642-04503-5
  4. Peacock, R.D., Kuligowski, E.D., Averill, J.D. (eds.): Pedestrian and Evacuation Dynamics. Springer, Berlin/Heidelberg (2011). doi:10.1007/978-1-4419-9725-8
  5. Grayson, S., et al. (eds.): Interflam 2007: Proceedings of the Eleventh International Conference, vol. 1. Interscience Communications (2007). http://books.google.de/books?id=0vszPwAACAAJ
  6. Predtechenskii, V.M., Milinskii, A.I.: Planning for Foot Traffic Flow in Buildings. Amerind Publishing, New Delhi (1978). Translation of Proekttirovanie Zhdanii s Uchetom Organizatsii Dvizheniya Lyudskikh Potokov, Stroiizdat Publishers, Moscow, 1969
  7. Nelson, H.E., Mowrer, F.W.: In: DiNenno, P.J. (ed.) SFPE Handbook of Fire Protection Engineering, 3rd edn., chap. 14, pp. 367–380. National Fire Protection Association, Quincy (2002)
  8. Weidmann, U.: Transporttechnik der Fussgänger. Tech. Rep. Schriftenreihe des IVT Nr. 90, Institut für Verkehrsplanung, Transporttechnik, Strassen- und Eisenbahnbau, ETH Zürich (1993). Zweite, ergänzte Auflage
  9. Thompson, P.A., Marchant, E.W.: Fire Saf. J. 24(2), 131 (1995). doi:10.1016/0379-7112(95)00019-P
  10. TraffGo HT GmbH: Handbuch PedGo 2, PedGo Editor 2 (2005). http://www.evacuation-simulation.com
  11. Kretz, T., Hengst, S., Vortisch, P.: In: Sarvi, M. (ed.) International Symposium of Transport Simulation (ISTS08). Monash University, Melbourne (2008). http://arxiv.org/abs/0805.1788
  12. Hostikka, S., Korhonen, T., Paloposki, T., Rinne, T., Matikainen, K., Heliövaara, S.: Development and validation of FDS+Evac for evacuation simulations. Tech. Rep., VTT Technical Research Centre of Finland (2008)
  13. Schadschneider, A., Klingsch, W., Klüpfel, H., Kretz, T., Rogsch, C., Seyfried, A.: Evacuation dynamics: empirical results, modeling and applications. In: Encyclopedia of Complexity and System Science, vol. 5, pp. 3142–3176. Springer, Berlin/Heidelberg (2009)
  14. Seyfried, A., Passon, O., Steffen, B., Boltes, M., Rupprecht, T., Klingsch, W.: Transp. Sci. 43(3), 395 (2009). doi:10.1287/trsc.1090.0263
  15. Liddle, J., Seyfried, A., Steffen, B., Klingsch, W., Rupprecht, T., Winkens, A., Boltes, M.: Microscopic insights into pedestrian motion through a bottleneck, resolving spatial and temporal variations (2011). http://arxiv.org/abs/1105.1532
  16. Seyfried, A., Portz, A., Schadschneider, A.: In: Bandini, S., Manzoni, S., Umeo, H., Vizzari, G. (eds.) Cellular Automata, 9th International Conference on Cellular Automata for Research and Industry, ACRI, Ascoli Piceno, September 2010. Lecture Notes in Computer Science, vol. 6350, pp. 496–505. Springer, Berlin/Heidelberg (2010). doi:10.1007/978-3-642-15979-4_53
  17. Zhang, J., Klingsch, W., Schadschneider, A., Seyfried, A.: J. Stat. Mech. Theory Exp. 2012(2), P02002 (2012). http://stacks.iop.org/1742-5468/2012/i=02/a=P02002
  18. Wong, S.C., Leung, W.L., Chan, S.H., Lam, W.H.K., Yung, N.H., Liu, C.Y., Zhang, P.: J. Transp. Eng. 136(3), 234 (2010). doi:10.1061/(ASCE)TE.1943-5436.0000086
  19. Liu, X., Song, W., Zhang, J.: Phys. A Stat. Mech. Appl. 388(13), 2717 (2009). doi:10.1016/j.physa.2009.03.017
  20. Fang, Z., Song, W., Zhang, J., Wu, H.: Phys. A Stat. Mech. Appl. 389, 815 (2010). doi:10.1016/j.physa.2009.10.019
  21. Ma, J., Song, W.G., Liao, G.X.: Chin. Phys. B 19(12), 128901 (2010). doi:10.1088/1674-1056/19/12/128901
  22. Lam, W.H.K., Lee, J.Y.S., Cheung, C.Y.: Transportation 29, 169 (2002). doi:10.1023/A:1014226416702
  23. Lee, J.Y.S., Lam, W.H.K.: Transp. Res. Rec. 1982, 122 (2006). doi:10.3141/1982-17
  24. Ren-Yong, G., Wong, S.C., Yin-Hua, X., Hai-Jun, H., Lam, W.H.K., Keechoo, C.: Chin. Phys. Lett. 29(6), 068901 (2012). http://stacks.iop.org/0256-307X/29/i=6/a=068901
  25. Li, X., Dong, L.Y.: Chin. Phys. Lett. 29(9), 098902 (2012). doi:10.1088/0256-307X/29/9/098902
  26.
  27. Moussaïd, M., Helbing, D., Garnier, S., Johansson, A., Combe, M., Theraulaz, G.: Proc. R. Soc. B 276(1668), 2755 (2009). doi:10.1098/rspb.2009.0405
  28. Jelić, A., Appert-Rolland, C., Lemercier, S., Pettré, J.: Properties of pedestrians walking in line: stepping behavior. Phys. Rev. E 86, 046111 (2012)
  29. Moussaïd, M., Guillot, E.G., Moreau, M., Fehrenbach, J., Chabiron, O., Lemercier, S., Pettré, J., Appert-Rolland, C., Degond, P., Theraulaz, G.: PLoS Comput. Biol. 8, e1002442 (2012). doi:10.1371/journal.pcbi.1002442
  30. Hoogendoorn, S.P., Daamen, W.: Transp. Sci. 39(2), 147 (2005). doi:10.1287/trsc.1040.0102
  31. Daamen, W., Hoogendoorn, S.: Procedia Eng. 3, 53 (2010). doi:10.1016/j.proeng.2010.07.007. First International Conference on Evacuation Modeling and Management
  32. Yanagisawa, D., Kimura, A., Tomoeda, A., Nishi, R., Suma, Y., Ohtsuka, K., Nishinari, K.: Phys. Rev. E 80, 036110 (2009)
  33. Yanagisawa, D., Tomoeda, A., Nishinari, K.: Phys. Rev. E 85, 016111 (2012). doi:10.1103/PhysRevE.85.016111
  34. Nagai, R., Fukamachi, M., Nagatani, T.: Physica A 367, 449 (2006). doi:10.1016/j.physa.2005.11.031
  35. Isobe, M., Adachi, T., Nagatani, T.: Physica A 336, 638 (2004). doi:10.1016/j.physa.2004.01.043
  36. Kretz, T., Grünebohm, A., Schreckenberg, M.: J. Stat. Mech. 10, P10014 (2006). doi:10.1088/1742-5468/2006/10/P10014
  37. Plaue, M., Chen, M., Bärwolff, G., Schwandt, H.: In: Stilla, U., Rottensteiner, F., Mayer, H., Jutzi, B., Butenuth, M. (eds.) Photogrammetric Image Analysis. Lecture Notes in Computer Science, vol. 6952, pp. 285–296. Springer, Berlin/Heidelberg (2011). doi:10.1007/978-3-642-24393-6_24
  38. Seyfried, A., Steffen, B., Klingsch, W., Boltes, M.: J. Stat. Mech. Theory Exp. P10002 (2005). doi:10.1088/1742-5468/2005/10/P10002
  39. Chattaraj, U., Seyfried, A., Chakroborty, P.: Adv. Complex Syst. 12(3), 393 (2009). doi:10.1142/S0219525909002209
  40. Seyfried, A., Boltes, M., Kähler, J., Klingsch, W., Portz, A., Rupprecht, T., Schadschneider, A., Steffen, B., Winkens, A.: In: Klingsch, W.W.F., et al. (eds.) Pedestrian and Evacuation Dynamics, pp. 145–156. Springer, Berlin/Heidelberg (2010). doi:10.1007/978-3-642-04504-2_11. http://arxiv.org/abs/0810.1945
  41. Holl, S., Seyfried, A.: inSiDE 7(1), 60 (2009). http://inside.hlrs.de/pdfs/inSiDE_spring2009.pdf
  42. Boltes, M., Seyfried, A., Steffen, B., Schadschneider, A.: In: Klingsch, W.W.F., et al. (eds.) Pedestrian and Evacuation Dynamics, pp. 43–54. Springer, Berlin/Heidelberg (2010). doi:10.1007/978-3-642-04504-2_3
  43. Hirschmüller, H.: IEEE Trans. Pattern Anal. Mach. Intell. 30, 328 (2008). doi:10.1109/TPAMI.2007.1166
  44. Bradski, G.R., Pisarevsky, V.: In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, p. 2796 (2000). doi:10.1109/CVPR.2000.854964
  45. Boltes, M., Seyfried, A., Steffen, B., Schadschneider, A.: In: Peacock, R.D., et al. (eds.) Pedestrian and Evacuation Dynamics, pp. 751–754. Springer, Berlin/Heidelberg (2011). doi:10.1007/978-1-4419-9725-8
  46. Leibe, B., Seemann, E., Schiele, B.: In: Computer Vision and Pattern Recognition, San Diego, pp. 878–885 (2005)
  47. Brostow, G.J., Cipolla, R.: In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, vol. 1, pp. 594–601. IEEE Computer Society, Washington, DC (2006). doi:10.1109/CVPR.2006.320
  48. Tu, P., Sebastian, T., Doretto, G., Krahnstoever, N., Rittscher, J., Yu, T.: In: European Conference on Computer Vision, Marseille, pp. 691–704 (2008)
  49. Cheriyadat, A.M., Bhaduri, B.L., Radke, R.J.: In: Computer Vision and Pattern Recognition Workshops, Anchorage, pp. 1–8. IEEE Computer Society, Los Alamitos (2008). doi:10.1109/CVPRW.2008.4562983
  50. Saadat, S., Teknomo, K., Fernandez, P.: Fire Technol. 1–18 (2010). doi:10.1007/s10694-010-0174-9
  51. Johansson, A., Helbing, D., Al-Abideen, H.Z., Al-Bosta, S.: Adv. Complex Syst. 11, 4 (2008). http://arxiv.org/abs/0810.4590
  52. Rittscher, J., Tu, P., Krahnstoever, N.: In: Schmid, C., Soatto, S., Tomasi, C. (eds.) IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, vol. 2, pp. 486–493. IEEE Computer Society (2005)
  53. Hu, W., Zhou, X., Tan, T., Lou, J., Maybank, S.: IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 663 (2006)
  54. Cutler, R., Davis, L.: IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 781 (2000)
  55. Pai, C., Tyan, H., Liang, Y., Liao, H., Chen, S.: Pattern Recognit. 37(5), 1025 (2004). doi:10.1016/j.patcog.2003.10.005
  56. Darrell, T., Gordon, G., Harville, M., Woodfill, J.: Int. J. Comput. Vis. 37(2), 175 (2000)
  57. Muñoz Salinas, R., Aguirre, E., García-Silvente, M.: Image Vis. Comput. 25, 995 (2007). doi:10.1016/j.imavis.2006.07.012
  58. Hou, Y.L., Pang, G.K.H.: In: Real, P., Díaz-Pernil, D., Molina-Abril, H., Berciano, A., Kropatsch, W.G. (eds.) International Conference on Computer Analysis of Images and Patterns, Seville. Lecture Notes in Computer Science, vol. 6854, pp. 93–101. Springer (2011)
  59. Harville, M.: Image Vis. Comput. 22(2), 127 (2004). doi:10.1016/j.imavis.2003.07.009
  60. García-Martín, Á., Hauptmann, A., Martinez, J.M.: In: 8th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), Klagenfurt, p. 5 (2011)
  61. Eshel, R., Moses, Y.: In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Anchorage, vol. 1 (2008). doi:10.1109/CVPR.2008.4587539
  62. Kelly, P., O'Connor, N.E., Smeaton, A.F.: Image Vis. Comput. 27(10), 1445 (2009). doi:10.1016/j.imavis.2008.04.006
  63. van Oosterhout, T., Bakkes, S., Kröse, B.: In: International Conference on Computer Vision Theory and Applications (VISAPP), Vilamoura, pp. 620–625 (2011)
  64. Bouguet, J.Y.: OpenCV Documents (1999)
  65. Arun, K.S., Huang, T.S., Blostein, S.D.: IEEE Trans. Pattern Anal. Mach. Intell. 9, 698 (1987). doi:10.1109/TPAMI.1987.4767965
  66. Zhang, J., Klingsch, W., Schadschneider, A., Seyfried, A.: J. Stat. Mech. Theory Exp. (2011). http://arxiv.org/abs/1102.4766
  67. Liddle, J., Seyfried, A., Klingsch, W., Rupprecht, T., Schadschneider, A., Winkens, A.: In: Traffic and Granular Flow (2009). http://arxiv.org/abs/0911.4350
  68. Voronoi, G.M.: Journal für die reine und angewandte Mathematik 133, 198 (1908)
  69. Steffen, B., Seyfried, A.: Physica A 389(9), 1902 (2010). doi:10.1016/j.physa.2009.12.015
  70. Hankin, B.D., Wright, R.A.: Oper. Res. Q. 9, 81 (1958)
  71. John, A., Schadschneider, A., Chowdhury, D., Nishinari, K.: J. Theor. Biol. 231, 279 (2004). http://arxiv.org/abs/cond-mat/0409458
  72. Müller, K.: Zur Gestaltung und Bemessung von Fluchtwegen für die Evakuierung von Personen aus Bauwerken auf der Grundlage von Modellversuchen. Dissertation, Technische Hochschule Magdeburg (1981)

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Jülich, Germany
  2. Computer Simulation for Fire Safety and Pedestrian Traffic, Bergische Universität Wuppertal, Wuppertal, Germany
