Learning Crowd Steering Behaviors from Examples
Crowd steering algorithms generally use empirically selected stimuli to determine when and how an agent should react. These stimuli consist of information from the environment that potentially influences behavior, such as an agent's visual perception, the states and actions of neighboring agents, the locations of nearby obstacles, etc. The various approaches, whether rule-based, example-based, or other, all define their responses in terms of these particular sensory inputs.
In our latest experiments we aim to determine which of a set of sensory inputs affect an agent's behavior, and to what degree. Using videos of real and simulated crowds, the steering behavior (i.e., trajectories) of the people is tracked and time-sampled. At each sample, the stimuli surrounding each person and the person's actions are recorded. These samples are then used as input to a regression algorithm that learns a function mapping new input states (stimuli) to output values (speed, direction). A series of simulations is conducted, each with different time-varying stimuli, in order to extract all the necessary information.
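The sampling-and-regression pipeline above can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' implementation; the feature names and the synthetic data are hypothetical stand-ins for the per-sample stimuli and recorded actions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row is one time sample of one tracked person. The four stimulus
# features here are illustrative assumptions: [distance to nearest
# neighbor, bearing of that neighbor, distance to nearest obstacle,
# distance to goal].
stimuli = rng.uniform(size=(500, 4))

# Recorded actions at the same samples: [speed, change in direction].
# The linear relations below are toy stand-ins for real tracked data.
actions = np.column_stack([
    1.4 * stimuli[:, 0],          # move faster when neighbors are far
    0.5 * (stimuli[:, 1] - 0.5),  # steer relative to neighbor bearing
])

# Learn a function from stimuli to actions (multi-output regression).
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(stimuli, actions)

# Map a new input state (stimuli) to output values (speed, direction).
new_state = np.array([[0.8, 0.2, 0.9, 0.5]])
speed, direction = model.predict(new_state)[0]

# Feature importances hint at which stimuli influence behavior most,
# which is the question the experiments are designed to answer.
print(model.feature_importances_)
```

A forest regressor is used here only because its feature importances give a direct, if rough, ranking of how strongly each stimulus affects the learned behavior; any multi-output regressor could play the same role.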
Identifying the most important factors affecting good steering behaviors can help in the design of better rule-based or example-based simulation systems. In addition, it can help improve crowd evaluation methods.