1 Introduction

Automated analysis of media content, such as movies, has traditionally focused on extracting and using low-level features from shots and scenes for analyzing narrative structures and key events [10, 11]. For humans, however, a movie is not just a collection of shots or scenes; it is the characters that usually play the most important role in storytelling [18]. More recently, character-centric representations of movies, such as character networks, have emerged as an effective approach to media content analysis [15, 16, 22]. A character network usually has the major characters as its nodes, and the edges summarize the relationships between character pairs. Such networks have been shown to facilitate a number of movie analysis tasks, including character analysis [16], story segmentation [22] and major character identification [15]. Existing methods build a single, static character network for the entire movie. While static graphs offer a convenient summary of the overall interactions among characters, they cannot capture the evolution of a movie’s dynamic narrative.

In this paper, we present an unsupervised approach to building a dynamic character network via online face clustering. We refer to this network as the character interaction graph (CIG), where each movie character is represented as a node, and an edge represents the pairwise interaction between characters. The dynamic aspect of the CIG offers an effective way to capture the variations in character interactions over time - particularly helpful for story segmentation and event localization. Our approach (see Fig. 1 for an overview) has two main components - online face clustering, and simultaneous creation of the CIG using the resulting clusters. Building on our previous work on online face clustering [8], we develop a new algorithm to create (and update) a CIG via clustering, i.e., by utilizing information from the cluster dynamics. We demonstrate the usefulness of the CIGs for two important movie analysis tasks: (i) semantic segmentation of a movie into acts, and (ii) major character discovery. Performance is evaluated on a database of six full-length Hollywood movies containing more than 5000 face tracks. Results are compared with relevant past work and manual annotations, where our CIG-based approach shows superior performance.

Fig. 1

Overview of the proposed approach: A movie is processed at shot level. For each shot, face tracks are created, and our online clustering algorithm either groups a face track with an existing cluster or creates a new one. In this example, at shot 0, three face tracks are created and grouped into two clusters. A new cluster is added in shot 1 as it belongs to a new character. The face track in shot 2 is added to an existing cluster belonging to the same character. The CIG is updated after each shot is processed. Note that the CIG for the (i − 1)th shot is obtained after the (i − 1)th shot is processed

In summary, the contribution of this work is two-fold: (i) We propose an unsupervised approach to building a dynamic character graph via online face clustering. This is the first work on dynamic CIG construction. (ii) We demonstrate the superior performance of our CIG-based approach on two important movie analysis tasks - three-act segmentation and major character identification.

The rest of this paper is organized as follows. Section 2 discusses relevant literature for character network-based movie analysis and online face clustering. Section 3 describes our approach to dynamic CIG creation via online face clustering. Section 4 proposes the methodologies to apply CIG for two movie analysis tasks. Section 5 presents extensive results, and Section 6 concludes the article with summary and discussion on future work.

2 Related work

Following the two major components of our approach, we discuss past work related to character graph construction and face clustering in multimedia content.

2.1 Character network construction

Character networks are useful for multimedia content analysis due to their wide applicability in story summarization, segmentation, character identification, and character-based search and indexing [15, 16, 21, 22]. Character networks have been constructed using movie scripts [16], spoken dialogs [15], manually-labeled data [22], and supervised face recognition [22].

Ramakrishna et al. [16] used scripts to construct a character network, where an edge between two characters (nodes) is added if the characters have consecutive dialogs. This network is used to examine character analytics based on gender, race and age [16]. Weng et al. [22] constructed a character network, called the RoleNet, that captures the co-occurrence statistics of movie characters via face recognition. This network is used to identify the lead characters and communities, and for story segmentation [22]. Park et al. [15] built a network by aligning scripts and subtitles. This network is employed for classification of major and minor characters, community clustering and sequence detection [15]. Along similar lines, Tran and Jung constructed CoCharNet [21] using manual annotations to encode information about character co-occurrences.

The work most related to ours is that of Yeh and Wu [28], where a character network is constructed using face clustering. This work clusters faces and constructs a character network in an iterative fashion. However, it requires prior knowledge of the number of clusters, and is an offline method. To the best of our knowledge, this is the only prior work that uses (offline) face clustering for constructing character graphs.

2.2 Face clustering in videos

Offline methods.

The problem of unsupervised face clustering is relatively less studied than its supervised counterpart, i.e., face recognition. The dominant approach to face clustering involves learning a suitable distance measure between face pairs [9, 17, 20, 30]. Several methods use partial supervision to improve performance [3, 23]. While image-based clustering is more common, face clustering in videos can achieve significant improvement by exploiting temporal information about the faces [1, 25,26,27]. Temporal constraints have been used in frameworks based on hidden Markov random fields (HMRF) [26] and unsupervised logistic discriminative metric learning (ULDML) [2], with applications to face clustering in movies and TV series. A constrained multiview face clustering technique combined constrained sparse subspace representations of faces with constrained spectral clustering [1]. Recent clustering approaches use convolutional neural networks (CNN) to learn robust face representations by using aggregated deep features [19], deep features with pairwise constraints [30], and deep features with triplet loss [29].

Online methods.

The approaches discussed above are all offline methods, i.e., they assume the availability of the entire data at once. In an online setting, a clustering algorithm does not have the luxury of ‘seeing’ the entire data simultaneously. To the best of our knowledge, there is only one existing work on online face clustering in videos [14]. This work created small tracklets of faces from the video, and clustered them in an online fashion based on temporal coherence and the Chinese restaurant process (TCCRP) [14]. An extension of this work is the Temporally Coherent Chinese Restaurant Franchise (TCCRF) [13], which jointly models short temporal segments. These online methods tend to create multiple clusters for the same person, thereby degrading the completeness of the clusters [14].

3 Proposed approach

Overview.

In our dynamic CIG construction approach, we process a movie stream at shot level, where a shot is a contiguously recorded sequence of frames. Our approach consists of two main components: (i) face track creation and clustering, and (ii) CIG formation and update. Both components are executed simultaneously in an online fashion by processing one shot at a time. As a shot appears, all faces are detected frame by frame and face tracks are created. Our online clustering algorithm then assigns the face tracks to either an existing cluster or to a new one. The information about the cluster updates, including the formation of new clusters, is used to create a dynamic CIG. Figure 1 presents an overview of the proposed method. Below, we describe each component in detail.

3.1 Face track creation and clustering

Face track creation.

Consider a movie \({\mathscr{M}}\) comprising T frames: \({\mathscr{M}}=\{I_{t}\}_{t=1}^{T}\). We define the ith shot Si as a sequence of consecutive frames \(\{I_{t_{(i-1)} + 1}, {\ldots } I_{t_{i}}\}\), where ti is the ith shot boundary. The shot boundary ti corresponding to Si is detected by computing the pixel differences between consecutive frames (as they appear) and comparing the difference to a predefined threshold. The accuracy of shot boundary detection is not critical to the performance of our method, hence we use this simple frame-differencing method.
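Below is a minimal sketch of this frame-differencing step, assuming OpenCV is used for decoding. The differencing measure (mean absolute pixel difference) and the value of diff_thresh are illustrative placeholders, not values specified in this paper.

# Minimal sketch of shot boundary detection by frame differencing.
# OpenCV (cv2) is assumed for decoding; `diff_thresh` is a placeholder value.
import cv2
import numpy as np

def shot_boundaries(video_path, diff_thresh=30.0):
    """Return frame indices t_i at which a new shot starts."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_gray, t = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev_gray is not None:
            # mean absolute pixel difference between consecutive frames
            if np.mean(np.abs(gray - prev_gray)) > diff_thresh:
                boundaries.append(t)
        prev_gray = gray
        t += 1
    cap.release()
    return boundaries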

Once we have detected the boundaries of Si, a standard face detector [7] is employed to detect the faces in each frame of Si. This frame-level face detection can be done in parallel with the search for shot boundaries. The face detector returns the bounding box of each face detected in every frame. To build a robust representation of these faces, we use a pretrained CNN, FaceNet [17]. Each face fp is forward-passed through FaceNet to obtain its corresponding d-dimensional feature vector vp.
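As an illustration of the detection-plus-embedding step, the sketch below uses dlib's frontal face detector and the facenet-pytorch implementation of FaceNet; these specific libraries are our assumption and merely stand in for the detector [7] and feature extractor [17] cited above. Each detected face is cropped, resized and mapped to a unit-norm feature vector.

# Sketch of per-frame face detection and embedding. dlib and facenet-pytorch
# are illustrative stand-ins for the detector [7] and FaceNet features [17].
import dlib
import numpy as np
import torch
from PIL import Image
from facenet_pytorch import InceptionResnetV1

detector = dlib.get_frontal_face_detector()
embedder = InceptionResnetV1(pretrained='vggface2').eval()

def detect_and_embed(frame_rgb):
    """Return a list of (bounding_box, unit-norm feature) pairs for one frame."""
    results = []
    for rect in detector(frame_rgb, 1):
        box = (rect.left(), rect.top(), rect.right(), rect.bottom())
        crop = Image.fromarray(frame_rgb).crop(box).resize((160, 160))
        x = torch.from_numpy(np.array(crop)).permute(2, 0, 1).float()
        x = (x - 127.5) / 128.0              # FaceNet-style input scaling
        with torch.no_grad():
            v = embedder(x.unsqueeze(0)).squeeze(0)
        v = v / v.norm()                     # unit-norm features (cf. Eq. 3)
        results.append((box, v.numpy()))
    return results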

To create face tracks, we use a simple yet effective strategy to combine the faces detected in consecutive frames. Let us define two faces detected in two consecutive frames as fp and fq. The overlap a(⋅) between the two faces is defined as:

$$ \mathit{a}(p, q) = \frac{\text{area}(\mathbf{f}^{p}\cap \mathbf{f}^{q})}{\max (\text{area}(\mathbf{f}^{p}), \text{area}(\mathbf{f}^{q}) )} $$
(1)

where area(f) is the area of the rectangular bounding box of f. The squared distance between the feature vectors vp and vq is defined as \(\delta (p,q) = \|\mathbf {v}^{p} - \mathbf {v}^{q}\|_{2}^{2}\). If a(p, q) > 0.85 and δ(p, q) ≤ 1.0, i.e., if the faces have more than 85% spatial overlap and a squared feature distance of at most 1.0 in consecutive frames, they are considered to be of the same person (see Fig. 2). Detected faces that overlap this way in consecutive frames are combined to form a face track, and the sequence of features corresponding to each of these faces is called a feature track.
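The track-linking rule above can be sketched as follows: detections in consecutive frames are chained into the same track whenever their bounding-box overlap a(p, q) exceeds 0.85 and their squared feature distance is at most 1.0. The data layout (per-frame lists of (box, feature) pairs) and function names are our own.

# Illustrative sketch of linking per-frame detections into face tracks using
# the overlap ratio of Eq. (1) and the squared feature distance delta.
import numpy as np

def overlap(box_p, box_q):
    """Overlap a(p, q) between two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_p[0], box_q[0]), max(box_p[1], box_q[1])
    ix2, iy2 = min(box_p[2], box_q[2]), min(box_p[3], box_q[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / max(area(box_p), area(box_q))

def link_tracks(frames, a_thresh=0.85, delta_thresh=1.0):
    """frames: list of per-frame detections, each a list of (box, feature)."""
    tracks = []        # each track is a list of (box, feature) tuples
    active = []        # tracks extended in the previous frame
    for detections in frames:
        new_active = []
        for box, feat in detections:
            matched = None
            for track in active:
                last_box, last_feat = track[-1]
                if (overlap(box, last_box) > a_thresh and
                        np.sum((feat - last_feat) ** 2) <= delta_thresh):
                    matched = track
                    break
            if matched is None:
                matched = []
                tracks.append(matched)
            matched.append((box, feat))
            new_active.append(matched)
        active = new_active
    return tracks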

Fig. 2

Example of 85% spatial overlap between face pairs in two consecutive frames which are combined to create face tracks

Online face clustering.

The next task is to cluster the face tracks as they appear in each shot. For this subtask, we use our recently developed online clustering algorithm [8]. We assume the availability of all face tracks in a single shot at a given time. Our goal is to assign a face track belonging to a person who has appeared earlier to the correct existing cluster, and to form a new cluster for a face track belonging to a person appearing for the first time.

Let us consider a shot Si containing K face tracks \(\{\mathcal {F}_{k}\}_{k=1}^{K}\). Each face track \(\mathcal {F}_{k}\) is associated with a feature track \(\mathcal {V}_{k} = \{\mathbf {v}^{1}_{k}, \mathbf {v}^{2}_{k}, {\ldots } \mathbf {v}^{{N_{k}}}_{k}\}\), where Nk is the number of faces in \(\mathcal {F}_{k}\). Also consider that we have already processed the previous (i − 1) shots and have obtained L clusters corresponding to L unique characters. The clusters are represented by their cluster centers \(\mathcal {C} = \{\mathbf {c}_{l}\}_{l=1}^{L}\), where \(\mathbf {c}_{l} \in \mathbb {R}^{d}\) is the feature vector obtained by averaging all features across all face tracks within the lth cluster. Note that the number of clusters and the clusters themselves are dynamic, and they evolve as each shot is processed. We now define two matrices as follows:

  • A temporal constraint matrix \(\mathbf {Q} \in \mathbb {R}^{K\times K}\) is defined as

    $$ \mathbf{Q}(p,q) = \begin{cases} 0 & \text{if } \mathcal{F}_{p} \text{ and } \mathcal{F}_{q} \text{ overlap in time} \\ 1 & \text{otherwise} \end{cases} $$
    (2)

    where, p, q ∈{1,2,…,K}. The matrix Q enforces a temporal constraint on the face tracks such that if two face tracks have any overlap in time, they are considered to belong to two different characters, and hence, are assigned to different clusters.

  • A similarity matrix \(\mathbf {D}\in \mathbb {R}^{L\times K}\) that measures the similarity between a face track (represented by \(\mathcal {V}_{k}\)) and a cluster center cl for a given shot.

    $$ \mathbf{D}(l,k) = d(\mathbf{c}_{l}, \mathcal{V}_{k}) = 4 - \frac{1}{N_{k}}\sum\limits_{j=1}^{N_{k}} \|\mathbf{v}_{k}^{j} - \mathbf{c}_{l}\|_{2}^{2} $$
    (3)

    where l = 1,2,…,L, and k = 1,2,…,K. The second term is an average squared distance, whose maximum value is 4 (since each feature is a unit vector). Subtracting this distance from 4 yields a similarity value in [0, 4]. A sketch of constructing both matrices is given after this list.
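Both matrices follow directly from Eqs. (2) and (3); the small sketch below assumes each face track is stored as a (start_frame, end_frame, features) tuple, a layout of our own choosing.

# Sketch of building the temporal constraint matrix Q (Eq. 2) and the
# similarity matrix D (Eq. 3) for one shot. The track layout is an assumption.
import numpy as np

def build_Q(tracks):
    """tracks: list of (start_frame, end_frame, features) tuples."""
    K = len(tracks)
    Q = np.ones((K, K))
    for p in range(K):
        for q in range(K):
            sp, ep, _ = tracks[p]
            sq, eq, _ = tracks[q]
            if max(sp, sq) <= min(ep, eq):   # the two tracks overlap in time
                Q[p, q] = 0.0
    return Q

def build_D(centers, tracks):
    """centers: (L, d) array of cluster centers; features are unit vectors."""
    L, K = len(centers), len(tracks)
    D = np.zeros((L, K))
    for l in range(L):
        for k in range(K):
            feats = tracks[k][2]             # (N_k, d) array of face features
            D[l, k] = 4.0 - np.mean(np.sum((feats - centers[l]) ** 2, axis=1))
    return D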

Algorithm 1 Online face clustering

Given \(\{\mathcal {V}_{k}\}_{k=1}^{K}\), our task is to assign them to either one of the L clusters or create new clusters, if required. This is done by simply computing the similarities between \(\mathcal {V}_{k}\) for all k and \(\{\mathbf {c}_{l}\}_{l=1}^{L}\).

$$ (\hat{l}, \hat{k}) = \underset{l,k}{\arg\max} \ (\mathbf{D} \odot \mathbf{W}) $$
(4)

where \(\mathbf {W} \in \mathbb {R}^{L\times K}\) is a weight matrix (initialized with all ones) and ⊙ denotes the element-wise product. If \(\max \limits _{l,k}(\mathbf {D} \odot \mathbf {W}) \geq \tau \), where τ is a user-defined threshold, \(\mathcal {V}_{\hat {k}}\) is assigned to the \({\hat {l}^{th}}\) cluster. Consequently, we update \(\mathbf {c}_{\hat {l}}\) by averaging over the existing and the newly added face track. On the other hand, if \(\max \limits _{l,k}(\mathbf {D} \odot \mathbf {W}) < \tau \), a new cluster is created assuming a new character has appeared, and we add it to the set: \(\mathcal {C} \leftarrow \mathcal {C} \cup \mathbf {c}_{new}\). Note that since W is initialized as a matrix of all ones, it has no effect on the clustering of the first face track. For subsequent assignments, W is updated to add temporal constraints. After \(\mathcal {V}_{\hat {k}}\) is assigned to a cluster, we update D and W as follows:

  • Case I: \(\mathcal {V}_{\hat {k}}\) is assigned to an existing cluster \(\hat {l}\)

    $$ \mathbf{W}(\hat{l},:) \leftarrow \mathbf{Q}(\hat{k},:) $$
    (5)

    This updated W makes D ⊙ W zero, in the \(\hat{l}^{th}\) row, for all face tracks having any temporal overlap with \(\mathcal {V}_{\hat {k}}\).

    $$ \mathbf{D}(\hat{l},k) = d(\mathbf{c}_{\hat{l}}, \mathcal{V}_{k}) \quad \text{for } k \in [1,\vert \mathbf{ind} \vert] $$
    (6)
  • Case II: \(\mathcal {V}_{\hat {k}}\) is assigned to a new cluster

    $$ \begin{array}{@{}rcl@{}} \hat{l} &=& \vert\mathcal{C}\vert+1 \end{array} $$
    (7)
    $$ \begin{array}{@{}rcl@{}} \mathcal{C} &\leftarrow& \mathcal{C} \cup \mathbf{c}_{new} \end{array} $$
    (8)
    $$ \begin{array}{@{}rcl@{}} \mathbf{W}(\hat{l},:) &\leftarrow& \mathbf{Q}(\hat{k},:) \end{array} $$
    (9)
    $$ \begin{array}{@{}rcl@{}} \mathbf{D}(\hat{l},k) &=& d(\mathbf{c}_{new}, \mathcal{V}_{k}) \quad \text{for } k\in[1,\vert \mathbf{ind} \vert] \end{array} $$
    (10)

where ind = [1,2,⋯,K]. As \(\mathcal {V}_{\hat {k}}\) is processed and assigned to a cluster, its id is removed, i.e., the \(\hat {k}^{th}\) element of ind, the \(\hat {k}^{th}\) column of D and W, and the \(\hat {k}^{th}\) row and column of Q are removed.

This process goes on until all tracks in Si are processed, and then we move to the next shot. We also keep track of the clusters that are updated during each shot. This information is later used to create and update the CIG. Algorithm 1 summarizes our proposed online face clustering algorithm.
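Putting the pieces together, the following is a minimal sketch of the per-shot assignment loop (Eqs. (4)-(10)). It reuses D and Q as defined above and assigns one face track at a time; the variable names and the running-mean update of the cluster centers are our own simplifications of the procedure described in the text.

# Sketch of the per-shot online assignment (Algorithm 1). `feats[k]` is the
# (N_k, d) feature track of face track k, `centers` the list of current
# cluster centers, and `counts` the number of tracks merged into each center.
import numpy as np

def sim(center, feats_k):
    """Similarity of Eq. (3): 4 minus the average squared distance."""
    return 4.0 - np.mean(np.sum((feats_k - center) ** 2, axis=1))

def assign_shot(centers, counts, feats, D, Q, tau):
    K = len(feats)
    W = np.ones((len(centers), K))
    ind = list(range(K))                 # ids of not-yet-assigned tracks
    labels = {}                          # track id -> cluster id
    while ind:
        S = D * W
        l_hat, k_hat = np.unravel_index(np.argmax(S), S.shape)
        if S[l_hat, k_hat] < tau:        # Case II: start a new cluster
            centers.append(np.mean(feats[ind[k_hat]], axis=0))
            counts.append(0)
            D = np.vstack([D, [sim(centers[-1], feats[j]) for j in ind]])
            W = np.vstack([W, np.ones(len(ind))])
            l_hat = len(centers) - 1
        labels[ind[k_hat]] = l_hat       # record the assignment (Case I or II)
        # running-mean update of the winning cluster center
        centers[l_hat] = (counts[l_hat] * centers[l_hat]
                          + np.mean(feats[ind[k_hat]], axis=0)) / (counts[l_hat] + 1)
        counts[l_hat] += 1
        # temporal constraint (Eqs. 5, 9) and similarity refresh (Eqs. 6, 10)
        W[l_hat, :] = Q[k_hat, :]
        D[l_hat, :] = [sim(centers[l_hat], feats[j]) for j in ind]
        # drop the assigned track: column k_hat of D and W, row/column of Q
        keep = [j for j in range(len(ind)) if j != k_hat]
        D, W = D[:, keep], W[:, keep]
        Q = Q[np.ix_(keep, keep)]
        ind.pop(k_hat)
    return labels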

3.2 CIG construction

We now describe the method to construct and update the CIG based on the online face clustering results. Each node in the CIG represents a single cluster corresponding to a character, and each edge captures the interaction between the two characters it connects. In our approach, the CIG is created in parallel to the online face clustering process, where new nodes are added to the CIG and the edge weights are updated after each shot is processed.

We define the relationship between two characters p and q in terms of their temporal co-occurrence in the same or consecutive shots. Considering an adjacency matrix A, the relationship between p and q is formally defined as follows.

$$ \mathbf{A}(p, q) = \sum\limits_{i} \left[ \mathbb{I}(p \in \mathcal{S}_{i} \ \& \ q \in \mathcal{S}_{i-1}) + \mathbb{I}(p \in \mathcal{S}_{i} \ \& \ q \in \mathcal{S}_{i}) + \mathbb{I}(p \in \mathcal{S}_{i} \ \& \ q \in \mathcal{S}_{i+1}) \right] $$
(11)

where \(\mathbb {I}(.)\) is the indicator function. This count defines the strength of the edge between nodes p and q in the CIG, and is denoted by the element A(p, q). A diagonal element A(p, p) denotes the number of times character p appears in two consecutive shots. To construct and update A in an online fashion, we begin with an empty A and keep populating it with new rows and columns (corresponding to newly added nodes and edges) as new shots are processed. The dimension of A thus increases as new characters are discovered, and consequently, new nodes are added to the CIG. According to our definition of character relationship in (11), we need to look at the shots immediately before and after a given shot. Since we cannot peek into the future shot, at shot Si (i > 2) we update A for Si−1.

Our clustering algorithm yields the updated cluster ids \(\mathcal {U}_{i-2}\), \(\mathcal {U}_{i-1}\), and \(\mathcal {U}_{i}\) pertaining to the shots Si−2, Si−1, and Si. We append \(N_{new}^{i-1}\) rows and \(N_{new}^{i-1}\) columns to A (all new elements initialized to 0), where \(N_{new}^{i-1}\) is the number of new clusters added during the (i − 1)th shot. Then A(p, q) is updated as follows.

$$ \begin{array}{@{}rcl@{}} \mathbf{A}(p,q) \leftarrow \mathbf{A}(p,q) &+& \mathbb{I}(p \in \mathcal{U}_{i-1} \& q \in \mathcal{U}_{i-2}) \\ &+& \mathbb{I}(p \in \mathcal{U}_{i-1} \& q \in \mathcal{U}_{i-1}) \\ &+& \mathbb{I}(p \in \mathcal{U}_{i-1} \& q \in \mathcal{U}_{i}) \end{array} $$
(12)
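A minimal sketch of this online update of A: grow the adjacency matrix by the number of clusters created in shot Si−1 and increment counts for the cluster ids updated in shots Si−2, Si−1 and Si, as in (12). Variable names are illustrative.

# Sketch of the online CIG update of Eq. (12). U_prev2, U_prev and U_curr are
# the sets of cluster ids updated in shots S_{i-2}, S_{i-1} and S_i; n_new is
# the number of new clusters created during S_{i-1}.
import numpy as np

def update_cig(A, U_prev2, U_prev, U_curr, n_new):
    old = A.shape[0]
    grown = np.zeros((old + n_new, old + n_new))   # append zero rows/columns
    grown[:old, :old] = A
    A = grown
    for p in U_prev:
        for q in U_prev2:
            A[p, q] += 1
        for q in U_prev:
            A[p, q] += 1     # includes the diagonal term A(p, p) when q == p
        for q in U_curr:
            A[p, q] += 1
    return A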

Algorithm 2 summarizes the entire process of online clustering and CIG creation as they are performed in parallel. Figure 3 shows an example of a CIG created using the proposed approach for the movie Hope Springs. The CIG has 6 pure clusters corresponding to the 6 characters discovered by our online clustering algorithm, and a noisy cluster denoted by ‘X’. The edges depict the relationships between the characters, where thicker edges denote stronger interaction. The numbers represent the character importance scores, described in detail in Section 4.2.

Algorithm 2 Online clustering and CIG construction
Fig. 3

Character clusters (left) and the constructed CIG (right) for the movie Hope Springs. The CIG shows 7 nodes corresponding to the 7 clusters discovered by our algorithm. The node marked ‘X’ denotes a noisy cluster. The numbers below each node in the CIG denote the importance (σ(p)) of the characters

4 Applications to movie analysis

In this section, we demonstrate the usefulness of the CIGs for two important movie analysis tasks: (i) Three act segmentation: detecting high level semantic structures in a movie, and (ii) Major character identification. Below, we describe in detail how CIG can facilitate these tasks.

4.1 Three act segmentation

Popular films and screenplays are known to follow a well-defined storytelling paradigm. The majority of movies consist of three main segments or acts (see Fig. 4): Act I - introduces the main characters and presents a key incident or plot point that drives the story, Act II - consists of a series of events including a key event which prepares the audience for the climax, and Act III - includes the climax and the resolution of the story [5, 12]. Discovering these high-level semantic units automatically can help in movie summarization and detection of key events [6].

Fig. 4

The three-act narrative structure of a movie

Our objective is to segment a movie into its three acts by detecting the two act boundaries shown in Fig. 4. Consider the CIGs \(\mathbf {A}_{S_{i-1}}\) and \(\mathbf {A}_{S_{i}}\) obtained after shots Si−1 and Si, respectively. The difference between two CIGs is computed using the graph edit distance (GED) as follows:

$$\text{GED}(\mathbf{A}_{S_{i-1}}, \mathbf{A}_{S_{i}}) = \varDelta \eta + \varDelta e$$

where, Δη is the number of new nodes added to \(\mathbf {A}_{S_{i}}\), and Δe is the number of edges that are modified to obtain \(\mathbf {A}_{S_{i}}\) from \(\mathbf {A}_{S_{i-1}}\).

Using this measure, we compute how the CIG for a given movie changes over time between consecutive shots. A window of length Tw is used to sum all the GED scores within the window to incorporate a longer context and get a measure of overall interaction around each shot. Let this CIG difference be denoted as yged, where \(y^{ged}_{i}\) represents the changes in interaction around shot Si. We detect act boundary I as follows

$$ t_{b1} = \frac{{\sum}_{i\in \mathcal{B}_{1}} t_{i}y^{ged}_{i}}{{\sum}_{i\in \mathcal{B}_{1}} t_{i}} $$
(13)

where ti is the time at the center of Si, and \(\mathcal{B}_{1}\) is a predefined interval. The interval \(\mathcal{B}_{1}\) is chosen by leveraging information from film grammar [5], which suggests that act boundary I lies within 25 to 30 minutes from the start of the movie. We thus set \(\mathcal{B}_{1}\) to contain all shots in the interval of 22 to 40 minutes from the start of the movie. Act boundary II, tb2, is detected in a similar fashion with the interval \(\mathcal{B}_{2}\) spanning 14 to 34 minutes before the end of the movie.
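The whole boundary-detection step can be sketched as follows: compute the GED between consecutive CIG snapshots, sum it over a window of length Tw around each shot, and apply Eq. (13) over the prior interval. The windowing (taken here as ±Tw/2 in time around each shot) and the variable names are our own interpretation.

# Sketch of act boundary detection from a sequence of CIG snapshots.
# adjs[i] is the adjacency matrix after shot i (earlier matrices are smaller,
# since fewer characters have been discovered); times[i] is the shot-center
# time in seconds; [lo, hi] is the prior interval (e.g. 22-40 min for Act I).
import numpy as np

def ged(A_prev, A_curr):
    """Graph edit distance: number of new nodes plus modified edges."""
    n_prev, n_curr = A_prev.shape[0], A_curr.shape[0]
    padded = np.zeros_like(A_curr)
    padded[:n_prev, :n_prev] = A_prev
    return (n_curr - n_prev) + np.count_nonzero(A_curr != padded)

def act_boundary(adjs, times, lo, hi, T_w=60.0):
    times = np.asarray(times, dtype=float)
    raw = np.array([0.0] + [ged(adjs[i - 1], adjs[i]) for i in range(1, len(adjs))])
    # windowed sum of GED scores around each shot
    y = np.array([raw[np.abs(times - t) <= T_w / 2].sum() for t in times])
    sel = (times >= lo) & (times <= hi)            # shots inside the interval
    return np.sum(times[sel] * y[sel]) / np.sum(times[sel])   # Eq. (13)

For act boundary I, lo and hi would correspond to 22 and 40 minutes (in seconds) from the movie start; for act boundary II, to the interval of 14 to 34 minutes before the end of the movie.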

4.2 Major character identification

Another important task in movie analysis is to identify its major characters. Past work on major character discovery using character networks usually relies on betweenness, centrality and the sum of edge weights [16, 21, 22]. We instead compute the eigenvector centrality of each character in our CIG.

The eigenvector centrality ep of a character (node) p measures the influence that node p has on the CIG, and is defined as follows:

$$ e_{p} = \frac{1}{\zeta}\sum\limits_{q=1}^{\vert\mathcal{C}\vert} e_{q} \mathbf{A}(p,q) $$
(14)

where ζ is the largest eigenvalue of A, and A(p, q) denotes the weight of the edge between nodes p and q. We then define a node importance measure σ(p) for node p as follows:

$$ \begin{array}{@{}rcl@{}} \sigma(p) = \frac{e_{p}}{{\sum}_{q} e_{q}}. \end{array} $$

It is easy to see that the higher the value of σ(p), the more important the node (character) is. We use the values of σ(p) to rank the movie characters in terms of their importance. For example, Fig. 3 shows these node importance scores for the characters in Hope Springs.
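Since Eq. (14) defines e as the principal eigenvector of A, both the centrality and the importance score σ(p) can be computed directly from the adjacency matrix; the short sketch below (our own illustration) assumes A is symmetric, as implied by the co-occurrence definition in (11).

# Sketch of computing eigenvector centrality (Eq. 14) and the node importance
# score sigma(p) from the CIG adjacency matrix A.
import numpy as np

def character_importance(A):
    """Return sigma(p) for every node of a symmetric adjacency matrix A."""
    eigvals, eigvecs = np.linalg.eigh(A)           # A assumed symmetric
    e = np.abs(eigvecs[:, np.argmax(eigvals)])     # principal eigenvector
    return e / e.sum()                             # sigma(p), sums to 1

# Example usage: rank characters by importance
# sigma = character_importance(A)
# ranking = np.argsort(-sigma)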

5 Performance evaluation

In this section, we present results and performance comparisons for the different components of our proposed method. First, we present results on the performance of the online face clustering algorithm as it is a critical component of the CIG construction algorithm, and its accuracy determines the quality of the CIG. Direct evaluation of a CIG is not very meaningful, as CIGs may have different characteristics by construction. Hence, we evaluate the usefulness of the CIGs via two movie analysis tasks - act segmentation and major character discovery.

5.1 Evaluating clustering performance

Databases:

We use two databases that are commonly used to benchmark face clustering algorithms: (i) the Buffy database (BF2006) [4, 26], containing 229 face tracks of 6 characters (17,337 faces altogether) extracted from episode 2, season 5 of the TV series Buffy the Vampire Slayer. The database includes the frame number, bounding box coordinates, track id, and the character name for each face. (ii) The Notting Hill database (NH2016) [24], which contains 277 face tracks of 7 characters (19,278 faces altogether) from the movie Notting Hill. It contains the frame numbers, bounding boxes, track ids, features and character names for each face in the database.

Experimental details:

For each video, we obtain shot boundaries, create face tracks, extract deep features and cluster the faces using our proposed algorithm (see Algorithm 1). We use FaceNet [17] to extract features from each face in a face track. The value of the threshold parameter τ is set to 2.80 and 2.85 for the BF2006 and the NH2016 databases, respectively. For BF2006, we obtain a cluster for each of the 6 characters, and for NH2016, we obtain a cluster for 6 out of the 7 characters.

Comparison with existing methods:

We compare with two baselines (a Gaussian mixture model (GMM) with FaceNet features, and Kmeans with FaceNet features), and several state-of-the-art face clustering methods: (i) ULDML [2], (ii) a recently proposed constrained clustering method, the coupled HMRF (cHMRF) [24], and (iii) an aggregated CNN feature-based clustering (aCNN) [19]. The performance of all methods is compared in Table 1 in terms of clustering accuracy (expressed in %), which compares the predicted cluster labels with the ground truth labels. Note that all the methods in Table 1 are offline methods, where the entire data, the information about the face tracks, and the cluster counts are provided as input to the algorithms. For the online method, however, no information about the face tracks or cluster counts is available. The performance of our algorithm on the BF2006 database is superior to that of cHMRF and ULDML, and is comparable to Kmeans. On the NH2016 database, our algorithm outperforms all its offline counterparts, achieving a clustering accuracy of 93.8%.

Table 1 Comparison with the state-of-the-art (offline) clustering methods in terms of clustering accuracy (%)

We next compare with the only existing online face clustering algorithm, TCCRP [14]. We combined TCCRP with FaceNet features, and used a tracklet length of 10. The comparison is made in terms of the homogeneity score, completeness score and their harmonic mean, i.e., the V-measure (see Table 2). Table 2 shows that TCCRP has higher cluster homogeneity, but this is achieved at the cost of over-clustering (note the large number of clusters created by TCCRP), thereby degrading completeness. Our method achieves significantly higher completeness and V-measure while discovering a more accurate number of clusters.
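For reference, homogeneity, completeness and the V-measure are standard clustering metrics and can be computed, e.g., with scikit-learn as sketched below; this is an illustration, not part of the original evaluation code, and the label arrays are dummies.

# Computing homogeneity, completeness and V-measure with scikit-learn.
from sklearn.metrics import homogeneity_completeness_v_measure

labels_true = [0, 0, 1, 1, 2]      # ground-truth character id per face track
labels_pred = [0, 0, 1, 2, 2]      # cluster id assigned by the algorithm
h, c, v = homogeneity_completeness_v_measure(labels_true, labels_pred)
print(f"homogeneity={h:.3f}, completeness={c:.3f}, V-measure={v:.3f}")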

Table 2 Comparison with the existing online clustering method

5.2 Evaluating CIGs for act segmentation

Database:

We use a database of six full-length Hollywood movies: Good Deeds, Hope Springs, Joyful Noise, Resident Evil, Step Up Revolution, and The Vow. These movies are known to have a well-defined three-act structure [6]. The act boundaries of the movies were annotated by three film experts. Each expert independently marked the act boundaries of each movie, and the experts then decided on a final time stamp (at the precision level of seconds) through discussion [6].

Experimental set up:

We run the DLib face detector [7] on each frame of the movie and create face tracks. To remove false detections and very short tracks, we set the feature-distance threshold δ for track creation to 1.0 and the spatial overlap threshold to 0.95. The online clustering threshold τ is set to 3.0 for all the movies. Face tracks of length less than 15 are also discarded.

Results and discussion:

We detect the two act boundaries (see Fig. 4) in each movie using the CIGs as described in Section 4.1, and compute the error as the distance (in seconds) from the expert annotations. The window length Tw is set to 60 s. We compare the performance of our CIG-based approach with that of an existing multimodal approach proposed in earlier work [6]. We also create a simple baseline for comparison: the baseline sets the first act boundary at the 25th minute of the movie, and the second act boundary at the 25th minute from the end of the movie. Table 3 presents the results of act boundary detection for the proposed method along with the baseline and the multimodal approach [6]. Our CIG-based approach performs best in terms of overall error, even though it uses information from only the visual stream. We also note that detecting act boundary II is more difficult as it has higher variability across movies. Figure 5 presents an example of the CIG distance plot and the detected act boundaries for the movie Hope Springs.

Table 3 Results on act boundary detection: performance measured in terms of the distance from human expert annotated labels (in seconds)
Fig. 5

Act boundary detection result for the movie Hope Springs

5.3 Evaluating CIGs for major character identification

For this task, we use the same six movies as in the three-act segmentation task described in the previous section. The experimental settings remain the same.

Results and discussion:

We first run our online face clustering algorithm on each movie. Some of the clusters thus obtained may be noisy, i.e., they may contain faces from multiple characters. Such noisy clusters are formed due to (i) the presence of minor characters who do not appear on-screen long enough, and (ii) some wrongly clustered faces of major characters. Since ground truth face labels are not available for the movies, we sought manual validation of the clusters to evaluate the performance of our method. Two human annotators labeled all the clusters formed for each movie, and identified each cluster as either a valid character cluster or a noisy cluster. The results are presented in Table 4.

Table 4 Face track statistics and clusters formed using online face clustering

After the clusters are formed, we compute σ(p) for each cluster of a given movie, and identify the top 5 clusters (using the σ(p) values) as the five major characters in the movie. To validate the results, we again sought manual evaluation. Two human annotators watched each movie and, based on the Internet Movie Database (IMDb) major cast list and the storyline, identified the top 5 characters in each movie. Table 5 presents the corresponding results, where ‘X’ denotes a noisy cluster. The results show that the top two characters are always retrieved correctly, and in most cases, our CIG-based approach is able to retrieve four out of the top five characters.

Table 5 Major character identification results on six full-length movies (‘X’ denotes noisy cluster)

6 Conclusion

We proposed an unsupervised approach to building a dynamic network of movie characters through online face clustering. This is significantly different from the existing body of work that builds a single, static character network using supervision from text, metadata, or human annotation. We demonstrated that the dynamic CIGs can successfully detect high-level semantic structures (acts) in movie narratives, and can also identify the major characters in a movie with high precision. Apart from the applications presented in this paper, dynamic CIGs are also expected to be useful for extracting character-level analytics, movie summarization, indexing and navigation.

Future work will be directed towards expanding the database used for validation, and leveraging the subtitle and audio information available for the movies to achieve better clustering accuracy. A scheme of splitting and fusing the clusters formed online can be useful to improve the quality of clusters, and in turn, can improve the quality of the CIG.