International Conference on Interactive Digital Storytelling

Interactive Storytelling, pp. 130–141

Authoring Background Character Responses to Foreground Characters

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9445)

Abstract

This paper presents a flexible and intuitive authoring interface for specifying the behaviors of background characters and their reactions to user-controlled foreground characters. We use an event-centric behavior authoring paradigm and provide metaphors for altering the behavioral responses using conditions, modifiers, and contexts. The execution of an event (an interaction between multiple characters in the scene) is governed using authored conditions on the state of the participating characters, as well as the history of their past interactions. Our system monitors the ongoing simulation and the actions of foreground characters to trigger plausible reactions in the background characters as events, which satisfy user-authored conditions. Modifiers allow authors to vary how events are perceived by specific characters, to elicit unique responses. Contexts provide a simple mechanism to add behavior modifiers based on the current location of the characters. We demonstrate the benefits of our approach by authoring a virtual populace, and show the design of simple background activity, to more complex multi-agent interactions, that highlight the ease and flexibility of specification.

Keywords

Behavior trees · Behavior authoring · Background characters

1 Introduction

As virtual simulation technologies evolve in graphics quality and animation complexity, users’ expectations of a more immersive and realistic experience increase as well. The resulting user experience is mostly determined by two main factors: how the world is presented aesthetically, and how its inhabitants react to the user-controlled character’s actions. The second is of critical importance in interactive virtual worlds, where the smallest details can determine how immersive the atmosphere is. Higher-quality productions pay special attention to background activity that complements and completes the foreground scene. These productions also have longer and more complex development cycles, which certainly benefit from technologies that abstract away such complexities by providing a complete component. However, the availability, portability, and licensing of quality third-party components prove to be a challenge for smaller-scale developers, who find themselves needing to create their own non-mission-critical elements, potentially compromising key aspects of the intended final product.

To alleviate these production cycles, we present ABAS, a tool which enables an author to easily design and modify character behaviors, and how these synthesize with their environment and other agents. ABAS provides a flexible event-centric model which can simply be reconfigured and tested anew after each modification, without the need for new builds or code changes. The event is the cornerstone of the presented authoring model, and it can be easily molded using conditions, modifiers, and contexts. Using our intuitive and flexible specification interface, end users can quickly specify and generate unique, heterogeneous background character behaviors in response to foreground (player-controlled) characters. These behaviors depend on user-authored conditions and modifiers, the spatial context in which the actions take place, and even the history of interactions between characters. We demonstrate ABAS by authoring different activities in a virtual populace in 15th-century Plasencia. Users may take control of any character in the scene, thus promoting it to the role of foreground character. Every character remembers past interactions and elicits appropriate behavioral responses depending on the context of the situation, per the conditions and modifiers authored using our specification interface.

2 Related Work

We refer the readers to surveys in behavior authoring [1, 2] and provide a brief overview of prior work below.

Simulation-Centric Authoring. Commercial software such as Massive, Golaem, Alice, MayaCrowds, Houdini, etc. is simulation-centric: designers author the responses of an autonomous agent to external stimuli and tweak simulation parameters to mold the emergent crowd behavior. This mode of authoring requires the animator to work within the limits imposed by the simulation framework, which may be sufficient for generic crowds in the background of a shot (e.g. stadiums). However, authoring precision is particularly important for crowd shots that mandate interactions with foreground characters, or when the behavior of a crowd must be carefully choreographed. As a result, designers resort to manual methods that provide precise control, at the expense of an increased authoring burden.

Data-Centric Authoring. The work presented in [3] synthesizes synchronized multi-character motions and crowd animations by performing editing and stitching operations on a library of motion capture data. However, the approach is limited to the database of pre-recorded clips, as large deformations and time warping may yield unnatural results. Also, editing an individual motion may change the entire crowd animation, which is undesirable when orchestrating crowd activities with multiple constraints. Motion Patches [4] are environmental building blocks annotated with motion data, which informs what actions are available to animated characters within the block. Motion patches can be edited and connected together [5], or precomputed by expanding a search tree of single-character motions [6], to synthesize complex multi-character interactions.

The aforementioned contributions rely on pre-recorded motion building blocks which can synthesize animations of long duration, with many characters in complex environments. In comparison, this paper aims to generalize motion patches to represent logical behavioral constructs which can be stitched together with precise authorial control, while using simulation to automatically synthesize interaction instances that meet author constraints. In addition, this paper further refines the markup interface implementation presented in [7, 8], in a more abstracted and intuitive way for a designer.

Parameterized Behavior Trees (PBTs) [9] provide a graphical authoring paradigm for specifying multi-actor behaviors and conform well to the event-centric authoring model which we use in our system. ADAPT [10, 11] is an open-source platform for animating and directing virtual avatars using PBTs. Our authoring interface leverages the ADAPT platform to animate the avatars described in the results and shown in the video. Domain-independent planners [12, 13, 14] have recently gained prominence for automatically generating behavior-rich animations at the expense of authorial control, and are a promising research direction that complements this work.

3 Framework

3.1 Preliminaries

The following are the foundational blocks of our framework:
  • SmartObject. An object or avatar that is able to interact with other smart objects using predefined affordances.

  • Affordance. Affordances define the different possible ways in which smart objects can interact with each other.

  • Trait. A specific characteristic of an agent, associated with a real or boolean value, which contributes to the agent’s state.

  • State. A collection of traits \(\mathbf {T}\), with unique values, initialized at the beginning of each simulation.

  • Behavior. A logical interaction between two or more smart objects, defined as a PBT [9] which uses affordances as leaf nodes.

  • Condition. A specific query on an agent’s state, which must be satisfied for an event to be considered valid.

  • Modifier. A discrete value which increases, neutralizes, or decreases the impact of a behavior on its surroundings.

  • Context. A geographically delimited area within the simulation. Contexts allow authors to further modify how behaviors are perceived, and can be added freely during the design phase.

  • Event. A structure which encapsulates a particular behavior tree definition, its conditions, modifiers, type, and priority.

3.2 Agent Model and Event-Centric Deliberation

The agent model in this implementation allows for two types of characters: background (non-controlled) agents, and foreground, user-controlled characters. Ideally, the scene’s plot develops around a foreground agent, through which the user issues commands to interact with the environment and other surrounding agents. In return, and based on the authors’ designs, non-controlled agents react accordingly. Actions the user is allowed to perform are represented as events at the system level. The entire authoring system orbits around the idea that every behavior is encapsulated in an event. An Event is a structure which carries all the information required for an agent to deliberate on its viability and potentially execute its associated behavior, defined as a Parameterized Behavior Tree [9].

From the character state perspective, each smart object has a generic set of Traits. A Trait is implemented as a property with a real number assigned to it. These values are assigned randomly on instantiation, which allows for a changing experience in each simulation. Example Traits include Friendliness, Courage, and Hostility, among others. The viability of an Event is determined by whether all its conditions are met by the evaluating agent. These Conditions might test specific Trait levels. Besides traits, each agent keeps track of past interactions with other agents. This information is taken into consideration during each deliberation cycle, allowing behaviors to change based on each agent’s memory. Figure 1 summarizes the behavior cycle for background characters in the scene.
Fig. 1.

General agent model and selection
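The trait and memory model described above can be sketched as follows. This is a minimal illustration: the class, method, and trait names are our assumptions, not the paper's implementation.

```python
import random

# A sketch of the trait and memory model: traits are randomized per
# instantiation, and memory accumulates impressions of other agents.
TRAIT_NAMES = ["Friendliness", "Courage", "Hostility"]

class AgentState:
    def __init__(self, rng=random):
        # Trait values are assigned randomly on instantiation, so every
        # simulation run produces a slightly different population.
        self.traits = {name: rng.random() for name in TRAIT_NAMES}
        # Memory of past interactions: other agent id -> trait impressions.
        self.memory = {}

    def remember(self, other_id, trait, delta):
        impressions = self.memory.setdefault(other_id, {})
        impressions[trait] = impressions.get(trait, 0.0) + delta

agent = AgentState()
agent.remember("npc_7", "Hostility", 0.3)
```

Conditions can then query either the randomized traits or the remembered impressions during deliberation.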

Perception. This phase keeps track of other agents in the field of view (modeled as a foveal cone of 135 degrees up to a distance of 10 m), and the actions they may perform. These actions are categorized as: (a) global, which impacts all background characters that see it; (b) targeted, meant for a specific character; and (c) neutral. For example, an author could mark a ‘Warning Shot’ as global; when executed, it updates every perceiving agent’s estimate of the source character’s hostility. This is provided as input to the deliberation module to elicit appropriate responses in the characters.
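The visibility test above can be sketched in 2D as follows; the function and parameter names are assumptions, with only the 135-degree cone and 10 m range taken from the text.

```python
import math

# A minimal 2D sketch of the foveal-cone perception test: a target is
# visible if it lies within 10 m and within +/- 67.5 degrees of heading.
def in_field_of_view(agent_pos, agent_heading_deg, target_pos,
                     fov_deg=135.0, max_dist=10.0):
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist or dist == 0.0:
        return False
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between heading and bearing to target.
    diff = (angle_to_target - agent_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

A target 5 m straight ahead would pass this test; one directly behind the agent, or 20 m away, would not.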

The perceived interactions are recorded in the agent’s memory, then used in the deliberation phase to determine a reaction. The conditions that determine these responses are defined using an XML interface (described below), and can also be tailored for unique agents using modifiers. Additionally, every agent is aware of the context it is currently in. A context is defined as a specific geographical place in the environment, and can be used to enhance, reduce, or neutralize behaviors’ modifiers. As an example, an agent carrying a weapon in a combat training field would not increase perceived hostility levels in other characters; the same would not hold in a non-combat-controlled environment. These concepts are also available for design and are optional in every model, hence an author need not worry about this design complexity until it becomes necessary.

Deliberation. All perceived entities, and the current state of the agent, are taken into consideration when selecting the next group of candidate events to be executed. The first step in this process is to filter all candidate events based on their conditions and modifiers. Conditions are evaluated against the state of the current agent, or the perceived character’s state, and return TRUE or FALSE. For example, let us assume there exists an event ‘GreetMainCharacter’ with the following conditions: (1) Target.IsForegroundPlayer, (2) Target.Friendliness >0.5, (3) Self.Idle.

All conditions are created at design time and are not bound to any Event until the author assigns them to it. At the code level, a condition is an Attribute associated with either a function or a property, which is then evaluated against a target value at runtime. Once a condition has been instantiated, it can be reused in any defined event, hence the overhead of conditions does not grow significantly as the number of available Events increases. Assuming the agent’s current state satisfies all needed conditions, ‘GreetMainCharacter’ becomes available as a candidate event. Now, assume another event has also been satisfied, for instance ‘Wander’; it is likewise included in the candidate Events queue. When deciding which event is to be executed, we must consider four things: (1) Is this agent currently selected by the user? (2) Is the character currently executing an event? (3) Does the first candidate event have higher priority than the currently executing event? (4) Is the chosen event the same one already being executed? Being a foreground character, or taking part in an event of higher or equal priority than the top candidate, prevents an agent from starting a new event. Should two or more events share the same priority, the one with the higher number of constraints is executed. If two or more also share the same number of constraints, one is picked randomly. Algorithm 1 is the highest-abstraction-level routine of the event-deliberation system, selecting potential candidates who satisfy the required conditions based on their state. Every event may include modifiers that can persistently or non-persistently modify how its execution is perceived by other agents. This enables a more flexible design which allows the author to further decide how a behavior will impact other agents’ perceptive memory.
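The selection rules above can be sketched as follows, assuming a simple dictionary representation of events. This is an illustrative reconstruction of the priority and tie-breaking logic, not the paper's Algorithm 1.

```python
import random

# A sketch of event selection: order by priority, break ties by the
# number of constraints, and break remaining ties randomly. A currently
# executing event can only be interrupted by strictly higher priority.
def select_event(candidates, current=None, rng=random):
    """candidates: list of dicts with 'name', 'priority' (int), and
    'conditions' (the already-satisfied constraints of the event)."""
    if not candidates:
        return current
    best_priority = max(e["priority"] for e in candidates)
    if current is not None and current["priority"] >= best_priority:
        return current
    top = [e for e in candidates if e["priority"] == best_priority]
    most = max(len(e["conditions"]) for e in top)
    top = [e for e in top if len(e["conditions"]) == most]
    return rng.choice(top)

wander = {"name": "Wander", "priority": 1, "conditions": ["Self.Idle"]}
greet = {"name": "GreetMainCharacter", "priority": 2,
         "conditions": ["Target.IsForegroundPlayer",
                        "Target.Friendliness>0.5", "Self.Idle"]}
chosen = select_event([wander, greet])
```

With both example events valid, the higher-priority ‘GreetMainCharacter’ wins; if the agent were already executing it, the lower-priority ‘Wander’ could not interrupt.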

Act. Finally, the events that are chosen by the deliberation phase are used to invoke appropriate reactions in the background characters, implemented as PBT’s.

4 Authoring

This section describes the interface which will allow an author to design reactive background agents.

4.1 Event

Formally, an event \(\mathbf {e}= \left\langle \varphi ,\mathbf {C},\mathbf {M},\mathbf {t},\mathbf {p}\right\rangle \) is a collection of a behavior, defined as a PBT \(\varphi \), a set of conditions \(\mathbf {C}\), modifiers \(\mathbf {M}\), type \(\mathbf {t}\), and priority \(\mathbf {p}\). An event instance is represented by the event \(\mathbf {e}\) in addition to the set of smart objects \(\mathbf {S}\) which will execute its behavior (Fig. 2).
Fig. 2.

Behavior tree

The event’s type (global, targeted, or neutral) lets a perceiving agent determine whether or not it should take the behavior’s modifiers into account, and then update its perception of the source agent. For example, a conversation between agents A and B will probably not need to update how an unrelated agent C feels about these two characters. A global event such as a warning shot, however, requires that all surrounding agents respond. Once an Event takes place, it changes the state of all its participants. This change of state might make certain designed events eligible for execution on the next deliberation cycle, which creates responsive agents. Another feature taken into consideration when evaluating candidate behaviors is the context the character finds itself in at that particular time, hence the deliberation process is also context-sensitive. Finally, because each character’s trait values are uniquely initialized, each reaction may differ per agent. Events may have multiple participants: for example, GoTo would involve a single character, while Converse may involve two or more agents.
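The event tuple defined above can be illustrated as a small data structure. The Python names below are our assumptions (the actual system is authored in XML), and the priority ordering follows the four levels defined under Event Priorities.

```python
from dataclasses import dataclass, field
from enum import IntEnum

# Ascending priority order, as defined in the Event Priorities paragraph.
class Priority(IntEnum):
    INDIVIDUAL = 0
    SOCIAL = 1
    USER_COMMAND = 2
    SELF_PRESERVATION = 3

# A sketch of e = <phi, C, M, t, p>: behavior, conditions, modifiers,
# type, and priority.
@dataclass
class Event:
    behavior: str                                   # phi: the PBT to run
    conditions: list = field(default_factory=list)  # C
    modifiers: list = field(default_factory=list)   # M
    event_type: str = "neutral"                     # t: global/targeted/neutral
    priority: Priority = Priority.INDIVIDUAL        # p

converse = Event("ConverseTree", event_type="targeted",
                 priority=Priority.SOCIAL)
```

An event instance would additionally carry the set of participating smart objects \(\mathbf {S}\).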

4.2 Conditions

The author can define reusable conditions which will be evaluated at the Deliberation phase. For an Event to be valid, all of its conditions must be true. A condition represents constraints on the state of the participating agent. Example conditions include: (1) Self.Friendliness >0.3, (2) Target.IsForeground\(=\)true.

This is the main way an author specifies how an event is validated at any given time. Still, a valid event is not guaranteed to be executed, since deliberation involves more than validity alone. Conditions can be nested and marked as non-mandatory. A non-mandatory condition does not terminate the deliberation loop upon failing to be satisfied, allowing the event to still be considered, but no character is added to the candidate agents list until it satisfies a subsequent non-failing condition. The ability to nest conditions is useful in two ways. The first is reducing verbosity at design time. The second, combined with the mandatory property, allows multi-agent events to select candidate agents based on their traits. For example, assume one nested condition requires IsForegroundAndDistractedAndAttractive, another IsThiefAndIsBored, and a third IsThiefAndViolent. Once a condition is satisfied, the complying agent is added to a candidates queue and the condition is not evaluated again. The deliberation cycle then continues with two possible outcomes: the other conditions are satisfied by two other characters, or they are not. Should the first outcome occur, the event might be executed based on its priority level, allowing the event to take place and the foreground character to be robbed by a thief and a hostile third character. The Act module will discern among traits and assign roles accordingly, possibly having the thief with higher hostility threaten the foreground character while the other approaches from behind to take his wallet.
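The multi-agent role selection in the robbery example can be sketched as follows. The predicates and agent records are illustrative assumptions; the real system evaluates authored XML conditions against agent traits.

```python
# A sketch of candidate selection for a multi-agent event: each role has
# a condition, each agent fills the first open role it satisfies, and a
# satisfied condition is not evaluated again. The event only proceeds if
# every role is filled.
def match_roles(agents, role_conditions):
    roles = [None] * len(role_conditions)
    for agent in agents:
        for i, condition in enumerate(role_conditions):
            if roles[i] is None and condition(agent):
                roles[i] = agent  # satisfied once; not evaluated again
                break
    return roles if all(r is not None for r in roles) else None

agents = [
    {"name": "victim", "foreground": True, "thief": False, "violent": False},
    {"name": "bored_thief", "foreground": False, "thief": True, "violent": False},
    {"name": "violent_thief", "foreground": False, "thief": True, "violent": True},
]
roles = match_roles(agents, [
    lambda a: a["foreground"],                  # the distracted target
    lambda a: a["thief"] and not a["violent"],  # the bored pickpocket
    lambda a: a["thief"] and a["violent"],      # the hostile distractor
])
```

If any role stays unfilled, the event is simply not a candidate this cycle; the Act module would then assign concrete behaviors to the matched roles.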

Event Priorities. All Events have a priority value determined at authoring time. This value categorizes each event; for example, Events of a social nature may have higher priority than others belonging to another group. At Deliberation time, the higher the priority, the higher an event’s place in the candidates queue. We define four event priorities in ascending order: Individual, Social, User Command, and Self-Preservation.

4.3 Modifiers

Each Event can have modifiers assigned to it by the author. A modifier is a simple structure which increments, decrements, or neutralizes the effect of a character’s Trait at deliberation time. Each time a character interacts with another, the behavior can have an impact on the way the characters perceive each other; this impact is defined with Modifiers. A friendly behavior might increase the friendliness level in a single character, while a rude one might decrease it. A modifier can be marked as Persistent or not, and affects a character’s memory with regard to another character. A non-persistent modifier is forgotten as soon as the deliberation cycle is over, hence any action which might have impacted a character’s perception is disregarded in subsequent iterations.
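The persistence distinction can be sketched with a small memory structure; the class and method names are assumptions.

```python
# A sketch of modifier bookkeeping: persistent impressions survive the
# deliberation cycle, non-persistent ones are dropped at its end.
class Memory:
    def __init__(self):
        self.persistent = {}  # source agent -> {trait: accumulated delta}
        self.transient = {}

    def apply(self, source, trait, delta, persistent=True):
        store = self.persistent if persistent else self.transient
        impressions = store.setdefault(source, {})
        impressions[trait] = impressions.get(trait, 0.0) + delta

    def perceived(self, source, trait):
        return (self.persistent.get(source, {}).get(trait, 0.0)
                + self.transient.get(source, {}).get(trait, 0.0))

    def end_cycle(self):
        # Non-persistent modifiers are forgotten once deliberation ends.
        self.transient.clear()

mem = Memory()
mem.apply("stranger", "Friendliness", 0.2, persistent=True)
mem.apply("stranger", "Hostility", 0.5, persistent=False)
before = mem.perceived("stranger", "Hostility")
mem.end_cycle()
after = mem.perceived("stranger", "Hostility")
```

After `end_cycle()`, the transient hostility impression is gone while the persistent friendliness impression remains.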

4.4 Context

The author has the ability to mark areas in the simulation’s geography and have modifiers assigned to them. This allows emphasizing, decrementing, or neutralizing the effects of any behavior’s modifiers in a particular spatial location. Contexts use modifiers in the same manner as events, and provide an exocentric way of authoring background character responses. At runtime, should an agent be within the boundaries of a context while performing a behavior, any modifiers assigned to that context are also applied.
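One way a context might adjust a behavior's modifiers is sketched below, using the training-field example from Sect. 3.2. The rectangular region and per-trait multiplier scheme are assumptions; the paper only states that contexts can enhance, reduce, or neutralize modifiers.

```python
# A sketch of a context: a rectangular region with per-trait multipliers
# (a multiplier of 0 neutralizes a modifier entirely).
class Context:
    def __init__(self, x0, y0, x1, y1, scales):
        self.bounds = (x0, y0, x1, y1)
        self.scales = scales  # trait -> multiplier

    def contains(self, pos):
        x0, y0, x1, y1 = self.bounds
        return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def effective_delta(trait, delta, pos, contexts):
    # Apply every enclosing context's multiplier for this trait.
    for ctx in contexts:
        if ctx.contains(pos):
            delta *= ctx.scales.get(trait, 1.0)
    return delta

# Carrying a weapon inside the training field raises no perceived
# hostility; outside it, the modifier applies in full.
training_field = Context(0, 0, 20, 20, {"Hostility": 0.0})
inside = effective_delta("Hostility", 0.4, (5, 5), [training_field])
outside = effective_delta("Hostility", 0.4, (50, 5), [training_field])
```

Because contexts are evaluated at runtime, the same authored behavior yields different perceived impacts in different locations without any change to the event definition.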

4.5 Authoring Interface

We provide a simple specification interface to easily author reactive behaviors in background characters. The following is a basic example of a simple reaction:
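A sketch of what such a definition might look like in the XML specification interface, based on the description that follows; the element and attribute names here are assumptions, not the system's actual schema:

```xml
<Event name="SayHello" type="Targeted" priority="Social">
  <Conditions>
    <Condition target="Target" property="IsForeground" value="true"/>
    <Condition target="Target" property="Friendliness"
               operator="GreaterThan" value="0.5"/>
    <Condition target="Target" property="IsTargetingMe" value="true"/>
  </Conditions>
  <Modifiers>
    <Modifier trait="Friendliness" value="0.2" persistent="true"/>
  </Modifiers>
</Event>
```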
This example shows how an Event called SayHello might be executed by an agent after deliberation time. From the deliberating agent’s perspective, the conditions are: the perceived character is a foreground agent (selected by the user), his “friendliness” trait is greater than 0.5, and he is targeting (looking at) the agent. Should all these conditions be satisfied, this event will be included in a queue, with priority Social. If there are no other competing events, this event will be selected for execution, producing a modification in the way the foreground agent perceives this character, by increasing the perceived friendliness of this character by 0.2.
Fig. 3.

Simulation Captures: (a) A foreground character walks by the town square. (b) He is spotted by two idle background NPCs. (c) One background agent calls for the foreground character’s attention. (d) The foreground character responds to the call. (e) The second agent approaches the distracted foreground character, (f) and pickpockets the foreground character. (g) The thief returns to its original spot and calls its accomplice. (h) The distracting agent approaches the thief.

5 Results

We use ADAPT [11] within the Unity 3D engine as the underlying platform to animate characters in an interactive virtual environment. ADAPT provides an implementation of Parameterized Behavior Trees (PBTs) [9] which are used to define events (multi-character interactions) in the scene. ABAS adds a specification layer to ADAPT and provides an intuitive means of creating rich background behaviors for interactive applications.

We demonstrate the effectiveness of ABAS by authoring an active community around Plasencia, a town in 15th century Spain. Designing different behaviors and responses takes a matter of minutes. Figure 4 illustrates how different agents, each with their own unique state, react to a foreground character’s hostile gesture. The highlighted character is performing a gesture to which the other three agents have different reactions. These reactions are governed by how the background character may have previously interacted with the foreground one, along with any defined modifiers or context-specific reactions. Figure 3 illustrates an ongoing simulation showcasing different foreground character interactions, and the corresponding responses from background characters.

Foreground Character Actions. A new foreground character action is added by creating a new event definition whose event priority is UserCommand. This allows the player to assume control of any character in the scene and issue a command to trigger the corresponding event definition. A foreground character event has no conditions associated with it.

Background Character Behaviors. We repeat the same process for background characters, and the triggering of these events can additionally be controlled using conditions, modifiers, and contexts. For example, we can create an event for the Wandering behavior with a condition that the character must be idle in order to participate in this event.
Fig. 4.

Unique reactions - A foreground character tries to scare two agents. One gets surprised, while the other one gets scared.

Adding Reactive Behaviors. Consider a new event WaveTo, a user command that makes a foreground character wave at another character. It has a modifier Friendliness which adds a persistent 0.3 friendliness level to the target’s memory. We define a second event RespondToCall, which simply elicits a response from a character, causing him to approach another character who might be calling for attention. For this, we specify the following conditions: the target is a foreground character, this agent is not currently in an event, his or her friendliness level is above 0.5, and finally, it is being targeted by the other character. Note that this has a Social event priority, which is higher than Individual, so a Social reaction might interrupt an ongoing Individual event. Second, this is a Targeted event, which means it will only influence the character being targeted. The supplementary video showcases this example, along with more complex illustrations.

6 Conclusion and Future Work

ABAS empowers content creators to easily author background character behaviors and responses in a centralized and intuitive way, while still producing unique and heterogeneous activity. Both the design process and mindset are approached entirely from an event-centric perspective. Every authored block can be easily tested and changed without performing a new build. By allowing for perceptive memory, context, and dynamic agent-specific trait values, this implementation differentiates itself from others which only allow customizing behavior tree hooks. These more general-purpose engines do not offer, for instance, the perceptive and interactive memory that can produce dynamic interactions. In future work, we would like to replace the current XML specification interface with a graphical platform that will ease the authoring process. We will also expand the available portfolio of entities and affordances, translating into a more interactive environment. This includes, but is not limited to, object manipulation, ownership, places of interest, and influences, as well as more generic event definitions that operate at the crowd level.

References

  1. Kapadia, M., Shoulson, A., Durupinar, F., Badler, N.: Authoring multi-actor behaviors in crowds with diverse personalities. In: Ali, S., Nishino, K., Manocha, D., Shah, M. (eds.) Modeling, Simulation and Visual Analysis of Crowds, vol. 11, pp. 147–180. Springer, New York (2013)
  2. Kapadia, M., Shoulson, A., Boatright, C.D., Huang, P., Durupinar, F., Badler, N.I.: What’s next? The new era of autonomous virtual humans. In: Kallmann, M., Bekris, K. (eds.) MIG 2012. LNCS, vol. 7660, pp. 170–181. Springer, Heidelberg (2012)
  3. Kim, M., Hyun, K., Kim, J., Lee, J.: Synchronized multi-character motion editing. ACM TOG 28(3), 79 (2009)
  4. Lee, K.H., Choi, M.G., Lee, J.: Motion patches: building blocks for virtual environments annotated with motion data. ACM TOG 25, 898–906 (2006)
  5. Kim, M., Hwang, Y., Hyun, K., Lee, J.: Tiling motion patches. In: ACM SIGGRAPH/EG SCA, pp. 117–126 (2012)
  6. Shum, H.P., Komura, T., Shiraishi, M., Yamazaki, S.: Interaction patches for multi-character animation. ACM TOG 27, 114 (2008)
  7. Vilhjálmsson, H.H., et al.: The behavior markup language: recent developments and challenges. In: Pelachaud, C., Martin, J.-C., André, E., Chollet, G., Karpouzis, K., Pelé, D. (eds.) IVA 2007. LNCS (LNAI), vol. 4722, pp. 99–111. Springer, Heidelberg (2007)
  8. Mateas, M., Stern, A.: A behavior language: joint action and behavioral idioms. In: Prendinger, H., Ishizuka, M. (eds.) Life-Like Characters, pp. 135–161. Springer, Heidelberg (2004)
  9. Shoulson, A., Garcia, F.M., Jones, M., Mead, R., Badler, N.I.: Parameterizing behavior trees. In: Allbeck, J.M., Faloutsos, P. (eds.) MIG 2011. LNCS, vol. 7060, pp. 144–155. Springer, Heidelberg (2011)
  10. Shoulson, A., Marshak, N., Kapadia, M., Badler, N.I.: ADAPT: the agent development and prototyping testbed. In: ACM SIGGRAPH I3D, pp. 9–18 (2013)
  11. Shoulson, A., Marshak, N., Kapadia, M., Badler, N.I.: ADAPT: the agent development and prototyping testbed. IEEE TVCG 20(7), 1035–1047 (2014)
  12. Kapadia, M., Singh, S., Reinman, G., Faloutsos, P.: A behavior-authoring framework for multiactor simulations. IEEE CGA 31(6), 45–55 (2011)
  13. Kapadia, M., Falk, J., Zünd, F., Marti, M., Sumner, R.W., Gross, M.: Computer-assisted authoring of interactive narratives. In: ACM SIGGRAPH I3D, pp. 85–92 (2015)
  14. Shoulson, A., Gilbert, M.L., Kapadia, M., Badler, N.I.: An event-centric planning approach for dynamic real-time narrative. In: ACM SIGGRAPH Motion in Games, pp. 99:121–99:130 (2013)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Computer Science Department, Rutgers University, New Brunswick, USA
