Abstract
An effective approach to automated movie content analysis involves building a network (graph) of its characters. Existing work usually builds a static character graph to summarize the content using metadata, scripts or manual annotations. We propose an unsupervised approach to building a dynamic character graph that captures the temporal evolution of character interactions. We refer to this as the character interaction graph (CIG). Our approach has two components: (i) an online face clustering algorithm that discovers the characters in the video stream as they appear, and (ii) simultaneous creation of a CIG using the temporal dynamics of the resulting clusters. We demonstrate the usefulness of the CIG for two movie analysis tasks: narrative structure (act) segmentation and major character retrieval. Our evaluation on full-length movies containing more than 5000 face tracks shows that the proposed approach achieves superior performance on both tasks.
1 Introduction
Automated analysis of media content, such as movies, has traditionally focused on extracting and using low-level features from shots and scenes for analyzing narrative structures and key events [10, 11]. For humans, however, a movie is not just a collection of shots or scenes. It is the characters that usually play the most important role in storytelling [18]. More recently, character-centric representations of movies, such as character networks, have emerged as an effective approach to media content analysis [15, 16, 22]. A character network usually has the major characters as its nodes, where the edges summarize the relationships between character pairs. Such networks have been shown to facilitate a number of movie analysis tasks including character analysis [16], story segmentation [22] and major character identification [15]. The existing methods build a single, static character network for the entire movie. While static graphs offer a convenient summary of the overall interactions among characters, they cannot capture the evolution of a movie's dynamic narrative.
In this paper, we present an unsupervised approach to building a dynamic character network via online face clustering. We refer to this network as the character interaction graph (CIG), where each movie character is represented as a node, and an edge represents pairwise interaction between characters. The dynamic aspect of the CIG offers an effective way to capture the variations in character interactions over time, which is particularly helpful for story segmentation and event localization. Our approach (see Fig. 1 for an overview) has two main components: online face clustering, and simultaneous creation of the CIG using the resulting clusters. Building on our previous work on online face clustering [8], we develop a new algorithm to create (and update) a CIG via clustering, i.e., utilizing the information from the cluster dynamics. We demonstrate the usefulness of the CIGs for two important movie analysis tasks: (i) semantic segmentation of a movie into acts, and (ii) major character discovery. Performance is evaluated on a database of six full-length Hollywood movies containing more than 5000 face tracks. Results are compared with relevant past work and manual annotations, where our CIG-based approach shows superior performance.
In summary, the contribution of this work is twofold: (i) We propose an unsupervised approach to building a dynamic character graph via online face clustering. This is the first work on dynamic CIG construction. (ii) We demonstrate superior performance of our CIG-based approach on two important movie analysis tasks: three-act segmentation and major character identification.
The rest of this paper is organized as follows. Section 2 discusses relevant literature on character network-based movie analysis and online face clustering. Section 3 describes our approach to dynamic CIG creation via online face clustering. Section 4 proposes methodologies for applying the CIG to two movie analysis tasks. Section 5 presents extensive results, and Section 6 concludes the article with a summary and a discussion of future work.
2 Related work
Following the two major components of our approach, we discuss past work related to character graph construction and face clustering in multimedia content.
2.1 Character network construction
Character networks are useful for multimedia content analysis due to their wide applicability in story summarization, segmentation, character identification, and character-based search and indexing [15, 16, 21, 22]. Character networks have been constructed using movie scripts [16], spoken dialogs [15], manually-labeled data [22], and supervised face recognition [22].
Ramakrishna et al. [16] used scripts to construct a character network, where an edge between two characters (nodes) is added if the characters have consecutive dialogs. This network is used to examine character analytics based on gender, race and age [16]. Weng et al. [22] constructed a character network, called the RoleNet, that captures the co-occurrence statistics of movie characters via face recognition. This network is used to identify the lead characters and communities, and for story segmentation [22]. Park et al. [15] built a network by aligning scripts and subtitles. This network is employed in the classification of major and minor characters, community clustering and sequence detection [15]. Along similar lines, Tran and Jung constructed a CoCharNet [21] using manual annotations to encode information regarding character co-occurrences.
The work most closely related to ours is that of Yeh and Wu [28], where a character network is constructed using face clustering. This work clusters faces and constructs a character network in an iterative fashion. However, it requires prior knowledge of the number of clusters, and is an offline method. To the best of our knowledge, this is the only prior work that uses (offline) face clustering for constructing character graphs.
2.2 Face clustering in videos
Offline methods.
The problem of unsupervised face clustering is relatively less studied than its supervised counterpart, i.e., face recognition. The dominant approach to face clustering involves learning a suitable distance measure between face pairs [9, 17, 20, 30]. Several methods have proposed using partial supervision to improve performance [3, 23]. While image-based clustering is more common, face clustering in videos can achieve significant improvement by exploiting the temporal information about the faces [1, 25,26,27]. Temporal constraints have been used in frameworks based on hidden Markov random fields (HMRF) [26] and unsupervised logistic discriminative metric learning (ULDML) [2], with applications to face clustering in movies and TV series. A constrained multi-view face clustering technique used a constrained sparse subspace representation of faces with constrained spectral clustering [1]. Recent clustering approaches use convolutional neural networks (CNN) to learn robust face representations by using aggregated deep features [19], deep features with pairwise constraints [30], and deep features with triplet loss [29].
Online methods.
The approaches discussed above are all offline methods, i.e., they assume the availability of the entire data at once. In an online setting, a clustering algorithm does not have the luxury of 'seeing' the entire data simultaneously. To the best of our knowledge, there is only one existing work on online face clustering in videos [14]. This work created small tracklets of faces from the video, and clustered them in an online fashion based on temporal coherence and the Chinese restaurant process (TCCRP) [14]. An extension of this work is the Temporally Coherent Chinese Restaurant Franchise (TCCRF) [13], which jointly models short temporal segments. These online methods tend to create multiple clusters for the same person, thereby degrading the completeness of the clusters [14].
3 Proposed approach
Overview.
In our dynamic CIG construction approach, we process a movie stream at the shot level, where a shot is a contiguously recorded sequence of frames. Our approach consists of two main components: (i) face track creation and clustering, and (ii) CIG formation and update. All the components are executed simultaneously in an online fashion by processing one shot at a time. As a shot appears, all faces are detected frame by frame and face tracks are created. Our online clustering algorithm then assigns the face tracks to either an existing cluster or a new one. The information about the cluster updates, including the formation of new clusters, is used to create a dynamic CIG. Figure 1 presents an overview of the proposed method. Below, we describe each component in detail.
3.1 Face track creation and clustering
Face track creation.
Consider a movie \({\mathscr{M}}\) comprising T frames: \({\mathscr{M}}=\{I_{t}\}_{t=1}^{T}\). We define the i^{th} shot S_{i} as a sequence of consecutive frames \(\{I_{t_{(i-1)}+1}, \ldots, I_{t_{i}}\}\), where t_{i} is the i^{th} shot boundary. The shot boundary t_{i} corresponding to S_{i} is detected by computing the pixel differences between consecutive frames (as they appear) and comparing the difference to a predefined threshold. The accuracy of shot boundary detection is not critical to the performance of our method, hence we stick to this simple frame-differencing method.
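The frame-differencing detector just described can be sketched as follows. This is a minimal, numpy-only illustration; the threshold value is an assumed placeholder, not the one used in our experiments:

```python
import numpy as np

def shot_boundaries(frames, threshold=30.0):
    """Detect shot boundaries by mean absolute pixel difference between
    consecutive frames. `threshold` is an assumed placeholder value."""
    boundaries = []
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(np.float32)
                      - frames[t - 1].astype(np.float32)).mean()
        if diff > threshold:
            boundaries.append(t)  # frame t starts a new shot
    return boundaries
```

Each returned index marks the first frame of a new shot; more robust detectors (histogram-based, learned) could be substituted without affecting the rest of the pipeline.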
Once we have detected the boundaries of S_{i}, a standard face detector [7] is employed to detect the faces in each frame of S_{i}. This frame-level face detection can be done in parallel to searching for shot boundaries. The face detector returns the bounding box of each face detected in every frame. To build a robust representation of these faces, we use a pre-trained CNN, called FaceNet [17]. Each face f^{p} is forward-passed through the FaceNet to obtain its corresponding d-dimensional feature vector v^{p}.
To create face tracks, we use a simple yet effective strategy to combine the faces detected in consecutive frames. Let f^{p} and f^{q} denote two faces detected in two consecutive frames. The overlap a(⋅,⋅) between the two faces is defined as:
$$ a(p,q) = \frac{\text{area}(f^{p} \cap f^{q})}{\min\left(\text{area}(f^{p}), \text{area}(f^{q})\right)} $$(1)
where area(f) is the area of the rectangular bounding box of f. The squared distance between the feature vectors v^{p} and v^{q} is defined as \(\delta(p,q) = \|\mathbf{v}^{p} - \mathbf{v}^{q}\|_{2}^{2}\). If a(p, q) > 0.85 and δ(p, q) ≤ 1.0, i.e., if the faces have more than 85% overlap and less than unit feature distance in consecutive frames, they are considered to be of the same person (see Fig. 2). Detected faces that overlap this way in consecutive frames are combined to form a face track, and the sequence of features corresponding to each of these faces is defined as a feature track.
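The track-linking rule can be sketched as below. The box format (x1, y1, x2, y2) and the exact definition of the overlap ratio (intersection area over the smaller box) are assumptions made for illustration:

```python
import numpy as np

def overlap(box_p, box_q):
    """Spatial overlap a(p, q) between two face bounding boxes given as
    (x1, y1, x2, y2). Taken here as intersection area over the smaller
    box area -- an assumption about the exact definition."""
    ix1, iy1 = max(box_p[0], box_q[0]), max(box_p[1], box_q[1])
    ix2, iy2 = min(box_p[2], box_q[2]), min(box_p[3], box_q[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / min(area(box_p), area(box_q))

def same_person(box_p, v_p, box_q, v_q):
    """Link two faces detected in consecutive frames into one track if
    the boxes overlap by more than 85% and the squared distance between
    their feature vectors is at most 1.0."""
    delta = float(np.sum((np.asarray(v_p) - np.asarray(v_q)) ** 2))
    return overlap(box_p, box_q) > 0.85 and delta <= 1.0
```

Faces chained by `same_person` across consecutive frames form one face track, and their feature vectors form the corresponding feature track.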
Online face clustering.
The next task is to cluster the face tracks as they appear in each shot. For this subtask, we use our recently developed online clustering algorithm [8]. We assume the availability of all face tracks in a single shot at a given time. Our goal is to assign a face track belonging to a person who has appeared earlier to the correct existing cluster, and to form a new cluster for a face track belonging to a person appearing for the first time.
Let us consider a shot S_{i} containing K face tracks \(\{\mathcal {F}_{k}\}_{k=1}^{K}\). Each face track \(\mathcal {F}_{k}\) is associated with a feature track \(\mathcal {V}_{k} = \{\mathbf {v}^{1}_{k}, \mathbf {v}^{2}_{k}, {\ldots } \mathbf {v}^{{N_{k}}}_{k}\}\), where N_{k} is the number of faces in \(\mathcal {F}_{k}\). Also consider that we have already processed the previous (i − 1) shots and have obtained L clusters corresponding to L unique characters. The clusters are represented by their corresponding cluster centers \(\mathcal {C} = \{\mathbf {c}_{l}\}_{l=1}^{L}\), where \(\mathbf {c}_{l} \in \mathbb {R}^{d}\) is the feature vector obtained by averaging all features across all face tracks within the l^{th} cluster. Note that the number of clusters and the clusters themselves are dynamic, and they evolve as each shot is processed. We now define two matrices as follows:

A temporal constraint matrix \(\mathbf {Q} \in \mathbb {R}^{K\times K}\) is defined as
$$ \mathbf{Q}(p,q) = \begin{cases} 0 & \text{if } \mathcal{F}_{p} \text{ and } \mathcal{F}_{q} \text{ overlap in time} \\ 1 & \text{otherwise} \end{cases} $$(2)
where p, q ∈{1,2,…,K}. The matrix Q enforces a temporal constraint on the face tracks such that if two face tracks have any overlap in time, they are considered to belong to two different characters, and hence, are assigned to different clusters.

A similarity matrix \(\mathbf {D}\in \mathbb {R}^{L\times K}\) that measures the similarity between a face track (represented by \(\mathcal {V}_{k}\)) and a cluster center c_{l} for a given shot.
$$ \mathbf{D}(l,k) = d(\mathbf{c}_{l}, \mathcal{V}_{k}) = 4 - \frac{1}{N_{k}}\sum\limits_{j=1}^{N_{k}} \|\mathbf{v}_{k}^{j} - \mathbf{c}_{l}\|_{2}^{2} $$(3)
where l = 1,2,…,L, and k = 1,2,…,K. The second term is an average squared distance, whose maximum value is 4 (since each feature is a unit vector). Subtracting this distance from 4 thus yields a similarity value in [0,4].
Given \(\{\mathcal {V}_{k}\}_{k=1}^{K}\), our task is to assign them to either one of the L clusters or create new clusters, if required. This is done by simply computing the similarities between \(\mathcal {V}_{k}\) for all k and \(\{\mathbf {c}_{l}\}_{l=1}^{L}\).
$$ (\hat{l},\hat{k}) = \underset{l,k}{\arg\max}\ (\mathbf{D} \odot \mathbf{W}) $$(4)
where \(\mathbf {W} \in \mathbb {R}^{L\times K}\) is a weight matrix (initialized with all ones) and ⊙ denotes element-wise product. If \(\max \limits _{l,k}(\mathbf {D} \odot \mathbf {W}) \geq \tau \), where τ is a user-defined threshold, \(\mathcal {V}_{\hat {k}}\) is assigned to the \({\hat {l}}^{th}\) cluster. Consequently, we update \(\mathbf {c}_{\hat {l}}\) by averaging over the existing and the newly added face track. On the other hand, if \(\max \limits _{l,k}(\mathbf {D} \odot \mathbf {W}) < \tau \), a new cluster is created assuming a new character has appeared, and we add it to the cluster set: \(\mathcal {C} \leftarrow \mathcal {C} \cup \mathbf {c}_{new}\). Note that since W is initialized as a matrix of all ones, it has no effect on the clustering of the first face track. For the subsequent assignments, W is updated to add temporal constraints. After \(\mathcal {V}_{\hat {k}}\) is assigned to a cluster, we update D and W as follows:

Case I: \(\mathcal {V}_{\hat {k}}\) is assigned to an existing cluster \(\hat {l}\)
$$ \mathbf{W}(\hat{l},:) \leftarrow \mathbf{Q}(\hat{k},:) $$(5)
This updated W will make D ⊙ W zero in the \(\hat {l}^{th}\) row for all the face tracks having any temporal overlap with \(\mathcal {V}_{\hat {k}}\).
$$ \mathbf{D}(\hat{l},k) = d(\mathbf{c}_{\hat{l}}, \mathcal{V}_{k}) \quad \text{for } k \in [1, \vert \mathbf{ind} \vert] $$(6)
Case II: \(\mathcal {V}_{\hat {k}}\) is assigned to a new cluster
$$ \hat{l} = \vert\mathcal{C}\vert+1 $$(7)
$$ \mathcal{C} \leftarrow \mathcal{C} \cup \mathbf{c}_{new} $$(8)
$$ \mathbf{W}(\hat{l},:) \leftarrow \mathbf{Q}(\hat{k},:) $$(9)
$$ \mathbf{D}(\hat{l},k) = d(\mathbf{c}_{new}, \mathcal{V}_{k}) \quad \text{for } k\in[1,\vert \mathbf{ind} \vert] $$(10)
where ind = [1,2,…,K]. Once \(\mathcal {V}_{\hat {k}}\) is processed and assigned to a cluster, its id is removed, i.e., the \(\hat {k}^{th}\) element of ind, the \(\hat {k}^{th}\) column of D and W, and the \(\hat {k}^{th}\) row and column of Q are removed.
This process goes on until all tracks in S_{i} are processed, and then we move to the next shot. We also keep track of the clusters that are updated during each shot. This information is later used to create and update the CIG. Algorithm 1 summarizes our proposed online face clustering algorithm.
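A simplified sketch of this assignment loop is given below. It follows the D, W, Q bookkeeping described above, but uses a running-average centre update and seeds a new cluster from the first unassigned track; both are simplifying assumptions relative to Algorithm 1, and the default τ is only indicative:

```python
import numpy as np

def cluster_shot(tracks, overlaps, centers, tau=2.8):
    """Cluster the face tracks of one shot (simplified sketch).
    `tracks`: list of (N_k, d) arrays of unit-norm features;
    `overlaps[p][q]`: True if tracks p and q overlap in time;
    `centers`: list of current cluster centres (mutated in place).
    Returns the cluster id assigned to each track."""
    K = len(tracks)
    Q = 1.0 - np.asarray(overlaps, dtype=float)  # Q(p,q)=0 iff temporal overlap
    ind = list(range(K))                         # ids of unassigned tracks
    assign = [None] * K

    def sim(c, V):  # similarity in [0, 4]: 4 minus mean squared distance
        return 4.0 - float(np.mean(np.sum((np.asarray(V) - c) ** 2, axis=1)))

    D = np.array([[sim(c, tracks[k]) for k in ind] for c in centers],
                 dtype=float).reshape(len(centers), K)
    W = np.ones_like(D)

    while ind:
        if D.size and (D * W).max() >= tau:
            l_hat, j = np.unravel_index((D * W).argmax(), D.shape)
        else:
            # no existing cluster is similar enough: start a new one
            j = 0
            centers.append(tracks[ind[0]].mean(axis=0))
            l_hat = len(centers) - 1
            D = np.vstack([D, [sim(centers[l_hat], tracks[k]) for k in ind]])
            W = np.vstack([W, np.ones(len(ind))])
        k_hat = ind[j]
        assign[k_hat] = int(l_hat)
        # running-average centre update (the paper averages over all features)
        centers[l_hat] = (centers[l_hat] + tracks[k_hat].mean(axis=0)) / 2.0
        D[l_hat, :] = [sim(centers[l_hat], tracks[k]) for k in ind]
        # temporal constraint: l_hat cannot absorb tracks overlapping k_hat
        W[l_hat, :] = Q[k_hat, ind]
        # drop the processed track from ind, D, W
        ind.pop(j)
        D = np.delete(D, j, axis=1)
        W = np.delete(W, j, axis=1)
    return assign
```

Two temporally overlapping tracks can never join the same cluster, because the W update zeroes out their similarity to the cluster that just absorbed the other track.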
3.2 CIG construction
We now describe the method to construct and update the CIG based on the online face clustering results. Each node in the CIG represents a single cluster corresponding to a character, and each edge captures the interaction between the two characters it connects. In our approach, the CIG is created in parallel to the online face clustering process, where new nodes are added to the CIG and the edge weights are updated after each shot is processed.
We define the relationship between two characters p and q in terms of their temporal co-occurrence in the same or consecutive shots. Considering an adjacency matrix A, the relationship between p and q is formally defined as follows:
$$ \mathbf{A}(p,q) = \sum\limits_{i} \mathbb{I}\left(p \text{ and } q \text{ co-occur in shot } S_{i} \text{ or in the consecutive shots } S_{i-1}, S_{i}\right) $$(11)
where \(\mathbb {I}(.)\) is the indicator function. This count defines the strength of the edge between nodes p and q in the CIG, and is denoted by the element A(p, q). A diagonal element A(p, p) denotes the number of times character p appears in two consecutive shots. To construct and update A in an online fashion, we begin with an empty A and keep populating it with new rows and columns (corresponding to newly added nodes and edges) as new shots are processed. The dimension of A thus increases as new characters are discovered, and consequently, new nodes are added to the CIG. According to our definition of character relationship in (11), for each shot we need to know the characters appearing in the shots immediately before and after it. Since we cannot peek into the future shot, at shot S_{i} (i > 2), we update A for S_{i−1}.
Our clustering algorithm yields updated cluster ids \(\mathcal {U}_{i-2}\), \(\mathcal {U}_{i-1}\), and \(\mathcal {U}_{i}\) pertaining to the shots S_{i−2}, S_{i−1} and S_{i}. We append \(N_{new}^{i-1}\) rows and \(N_{new}^{i-1}\) columns to A (all new elements initialized to 0), where \(N_{new}^{i-1}\) is the number of new clusters added during the (i − 1)^{th} shot. Then A(p, q) is incremented for every character pair (p, q) that co-occurs in S_{i−1}, or across the consecutive shot pairs (S_{i−2}, S_{i−1}) and (S_{i−1}, S_{i}).
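A minimal sketch of one such update step follows, under one possible reading of the co-occurrence rule: every pair of characters seen in the same or consecutive shots has its edge incremented, and the diagonal counts consecutive-shot appearances. The growing of A mirrors the row/column appending described above:

```python
import numpy as np

def update_cig(A, node_count, prev_chars, curr_chars):
    """One CIG update step (a sketch, under an assumed reading of the
    co-occurrence rule). `prev_chars` / `curr_chars` are the sets of
    cluster ids seen in shots S_{i-1} and S_i. A grows when new
    characters appear; returns the updated (A, node_count)."""
    n = max(node_count, 1 + max(prev_chars | curr_chars, default=-1))
    if n > node_count:
        # append zero rows/columns for newly discovered characters
        grown = np.zeros((n, n))
        grown[:node_count, :node_count] = A
        A = grown
    window = prev_chars | curr_chars
    for p in window:
        for q in window:
            if p != q:
                A[p, q] += 1  # co-occurrence in same or consecutive shots
    for p in prev_chars & curr_chars:
        A[p, p] += 1          # diagonal: appears in two consecutive shots
    return A, n
```

Calling this once per shot (with the previous and current character sets) reproduces the online growth of the adjacency matrix.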
Algorithm 2 summarizes the entire process of online clustering and CIG creation, performed in parallel. Figure 3 shows an example of a CIG created using the proposed approach for the movie Hope Springs. The CIG has 6 pure clusters corresponding to the 6 characters discovered by our online clustering algorithm, and a noisy cluster denoted by 'X'. The edges depict the relationships between the characters, where thicker edges denote stronger interaction. The numbers represent the character importance scores, described in detail in Section 4.2.
4 Applications to movie analysis
In this section, we demonstrate the usefulness of the CIGs for two important movie analysis tasks: (i) three-act segmentation, i.e., detecting high-level semantic structures in a movie, and (ii) major character identification. Below, we describe in detail how the CIG can facilitate these tasks.
4.1 Three act segmentation
Popular films and screenplays are known to follow a well-defined storytelling paradigm. The majority of movies consist of three main segments or acts (see Fig. 4): Act I introduces the main characters and presents a key incident or plot point that drives the story; Act II consists of a series of events, including a key event which prepares the audience for the climax; and Act III includes the climax and the resolution of the story [5, 12]. Discovering these high-level semantic units automatically can help in movie summarization and detection of the key events [6].
Our objective is to segment a movie into its three acts by detecting the two act boundaries shown in Fig. 4. Consider the CIGs \(\mathbf {A}_{S_{i-1}}\) and \(\mathbf {A}_{S_{i}}\) obtained at shots S_{i−1} and S_{i} respectively. The difference between two CIGs is computed using the graph edit distance (GED) as follows:
$$ \text{GED}\left(\mathbf{A}_{S_{i-1}}, \mathbf{A}_{S_{i}}\right) = {\Delta}\eta + {\Delta} e $$(13)
where Δη is the number of new nodes added to \(\mathbf {A}_{S_{i}}\), and Δe is the number of edges that are modified to obtain \(\mathbf {A}_{S_{i}}\) from \(\mathbf {A}_{S_{i-1}}\).
Using this measure, we compute how the CIG for a given movie changes over time between consecutive shots. A window of length T_{w} is used to sum all the GED scores within the window, so as to incorporate a longer context and obtain a measure of overall interaction around each shot. Let this CIG difference be denoted by y^{ged}, where \(y^{ged}_{i}\) represents the change in interaction around shot S_{i}. We detect act boundary I as follows:
$$ t_{b1} = \underset{t_{i}\, \in\, {\mathscr{B}}_{1}}{\arg\max}\ y^{ged}_{i} $$(14)
where t_{i} is the time at the center of S_{i}, and \({\mathscr{B}}_{1}\) is a predefined interval. This interval is chosen leveraging information from film grammar [5], which suggests that act boundary I lies within 25 to 30 minutes from the start of the movie. We thus set \({\mathscr{B}}_{1}\) to contain all the shots within an interval of 22 to 40 minutes from the start of the movie. Act boundary II, t_{b2}, is detected in a similar fashion with an interval \({\mathscr{B}}_{2}\) spanning 14 to 34 minutes before the end of the movie.
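The two steps, GED computation between consecutive CIGs and the windowed boundary search, can be sketched as follows. Padding the smaller adjacency matrix with zeros, treating A as symmetric, and centring the T_w window on each shot are assumptions made for this illustration:

```python
import numpy as np

def graph_edit_distance(A_prev, A_curr):
    """GED between consecutive CIGs: number of new nodes plus number of
    modified edges (the smaller matrix is zero-padded; A is symmetric,
    so each edge is counted once via the upper triangle)."""
    n_prev, n_curr = A_prev.shape[0], A_curr.shape[0]
    padded = np.zeros_like(A_curr)
    padded[:n_prev, :n_prev] = A_prev
    delta_nodes = n_curr - n_prev
    delta_edges = int(np.count_nonzero(np.triu(padded != A_curr, k=1)))
    return delta_nodes + delta_edges

def act_boundary(ged, shot_times, start_s, end_s, win=60):
    """Pick the act boundary as the shot with maximal windowed GED score
    whose centre time (seconds) lies in the film-grammar interval
    [start_s, end_s]. The window sums GED over shots within `win`
    seconds of each candidate shot (an assumed reading of T_w)."""
    ged = np.asarray(ged, dtype=float)
    times = np.asarray(shot_times, dtype=float)
    y = np.array([ged[np.abs(times - t) <= win / 2.0].sum() for t in times])
    candidates = np.where((times >= start_s) & (times <= end_s))[0]
    return times[candidates[np.argmax(y[candidates])]]
```

Running `act_boundary` once with the 22-40 minute interval and once with the interval before the movie's end yields the two detected boundaries.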
4.2 Major character identification
Another important task in movie analysis is to identify its major characters. Past work on major character discovery using character networks usually relies on measures such as betweenness centrality and the sum of edge weights [16, 21, 22]. We instead compute the eigenvector centrality of each character in our CIG.
The eigenvector centrality e_{p} of a character (node) p measures the influence that node p has on the CIG, and is defined as follows:
$$ e_{p} = \frac{1}{\zeta} \sum\limits_{q} \mathbf{A}(p,q)\, e_{q} $$(15)
where ζ is the largest eigenvalue of A, and A(p, q) denotes the weight of the edge between nodes p and q. We then define a node importance measure σ(p) for node p based on its eigenvector centrality e_{p}. The higher the value of σ(p), the more important the node (character) is. We use the values of σ(p) to rank the movie characters in terms of their importance in the movie. For example, Fig. 3 shows these node importance measures for the characters in a movie.
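Since A is symmetric with non-negative weights, the centrality vector is simply the eigenvector associated with the largest eigenvalue, obtainable with a standard symmetric eigendecomposition. The sketch below normalizes the scores to sum to one as an assumed form of σ(p):

```python
import numpy as np

def character_importance(A):
    """Importance scores from eigenvector centrality: the eigenvector of
    the largest eigenvalue zeta of the symmetric adjacency matrix A.
    Normalizing the scores to sum to one is an assumed form of sigma(p)."""
    vals, vecs = np.linalg.eigh(np.asarray(A, dtype=float))
    e = np.abs(vecs[:, -1])  # eigh sorts eigenvalues ascending
    return e / e.sum()
```

Ranking the nodes by these scores orders the characters by their importance in the movie.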
5 Performance evaluation
In this section, we present results and performance comparisons for the different components of our proposed method. First, we present results on the performance of the online face clustering algorithm, as it is a critical component of the CIG construction algorithm and its accuracy determines the quality of the CIG. Direct evaluation of a CIG is not very meaningful, as CIGs may have different characteristics by construction. Hence, we evaluate the usefulness of the CIGs via two movie analysis tasks: act segmentation and major character discovery.
5.1 Evaluating clustering performance
Databases:
We use two databases that are commonly used to benchmark face clustering algorithms: (i) the Buffy database (BF2006) [4, 26], containing 229 face tracks of 6 characters (17,337 faces altogether) extracted from episode 2, season 5 of the TV series Buffy the Vampire Slayer. The database includes the frame number, bounding box coordinates, track id, and the character name for each face. (ii) The Notting Hill database (NH2016) [24], which contains 277 face tracks of 7 characters (19,278 faces altogether) from the movie Notting Hill. It contains the frame numbers, bounding boxes, track ids, features and character names for each face in the database.
Experimental details:
For each video, we obtain shot boundaries, create face tracks, extract deep features and cluster the faces using our proposed algorithm (see Algorithm 1). We use FaceNet [17] to extract features from each face in a face track. The value of the threshold parameter τ is set to 2.80 and 2.85 for the BF2006 and the NH2016 databases, respectively. For BF2006, we obtain a cluster for each of the 6 characters, and for NH2016, we obtain a cluster for 6 out of the 7 characters.
Comparison with existing methods:
We compare with two baselines (a Gaussian mixture model (GMM) with FaceNet features, and K-means with FaceNet features) and several state-of-the-art face clustering methods: (i) ULDML [2], (ii) a recently proposed constrained clustering method, the coupled HMRF (cHMRF) [24], and (iii) an aggregated CNN feature-based clustering (aCNN) [19]. The performance of all the methods is compared in Table 1 in terms of clustering accuracy (expressed in %), which compares the predicted cluster labels with the ground truth labels. Note that all the methods in Table 1 are offline methods, where the entire data, information about the face tracks and the cluster counts are provided as input to the algorithms. For the online method, however, no information about the face tracks or cluster counts is available. The performance of our algorithm on the BF2006 database is superior to that of cHMRF and ULDML, and comparable to K-means. On the NH2016 database, our algorithm outperforms all its offline counterparts, achieving a clustering accuracy of 93.8%.
We next compare with the only existing online face clustering algorithm, TCCRP [14]. We combine TCCRP with FaceNet features, and use a tracklet length of 10. Comparison is made in terms of the homogeneity score, the completeness score and their harmonic mean, i.e., the V-measure (see Table 2). Table 2 shows that TCCRP has higher cluster homogeneity, but this is achieved at the cost of over-clustering (note the large number of clusters created by TCCRP), thereby degrading completeness. Our method achieves significantly higher completeness and V-measure while discovering a more accurate number of clusters.
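For reference, these scores can be computed from the label contingency table as follows. This is a generic sketch of the standard definitions (natural-log entropies), not the exact evaluation code used here:

```python
import numpy as np

def v_measure(truth, pred):
    """Homogeneity, completeness and their harmonic mean (V-measure),
    computed from the contingency table of true classes vs clusters."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    n = len(truth)
    classes, truth_i = np.unique(truth, return_inverse=True)
    clusters, pred_i = np.unique(pred, return_inverse=True)
    cont = np.zeros((len(classes), len(clusters)))
    for c, k in zip(truth_i, pred_i):
        cont[c, k] += 1
    p = cont / n

    def entropy(x):
        x = x[x > 0]
        return -np.sum(x * np.log(x))

    h_c, h_k = entropy(p.sum(1)), entropy(p.sum(0))
    # conditional entropies H(C|K) and H(K|C)
    h_ck = -np.sum(p[p > 0] * np.log((p / p.sum(0, keepdims=True))[p > 0]))
    h_kc = -np.sum(p[p > 0] * np.log((p / p.sum(1, keepdims=True))[p > 0]))
    hom = 1.0 if h_c == 0 else 1.0 - h_ck / h_c
    com = 1.0 if h_k == 0 else 1.0 - h_kc / h_k
    v = 0.0 if hom + com == 0 else 2 * hom * com / (hom + com)
    return hom, com, v
```

Splitting one true character across many clusters keeps homogeneity at 1 but drives completeness (and hence the V-measure) down, which is exactly the over-clustering behaviour noted for TCCRP.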
5.2 Evaluating CIGs for act segmentation
Database:
We use a database of six full-length Hollywood movies: Good Deeds, Hope Springs, Joyful Noise, Resident Evil, Step Up Revolution, and The Vow. These movies are known to have a well-defined three-act structure [6]. The act boundaries for the movies were annotated by three film experts. Each expert independently marked the act boundaries for each movie, and a final time stamp (at the precision level of seconds) was then decided through discussion [6].
Experimental set up:
We run the DLib face detector [7] on each frame of the movie, and create face tracks. To remove false detections and very small tracks, we set the feature-distance threshold δ for track creation to 1.0, and the spatial overlap threshold α to 0.95. The online clustering threshold τ is set to 3.0 for all the movies. Face tracks of length less than 15 are also discarded.
Results and discussion:
We detect the two act boundaries (see Fig. 4) in each movie using the CIGs as described in Section 4.1, and compute the error in terms of the distance (in seconds) from the expert annotations. The parameter T_{w} is set to 60 s. We compare the performance of our CIG-based approach with that of an existing multimodal approach proposed in earlier work [6]. We also create a simple baseline for comparison: the baseline places the first act boundary at the 25th minute of the movie, and the second act boundary at 25 minutes before the end of the movie. Table 3 presents the act boundary detection results for the proposed method along with the baseline and the multimodal approach [6]. Our CIG-based approach performs the best in terms of overall error, even though it uses information from only the visual stream. We also note that detecting act boundary II is more difficult, as it has higher variability across movies. Figure 5 presents an example of the CIG distance plot and the detected act boundaries for the movie Hope Springs.
5.3 Evaluating CIGs for major character identification
For this task, we use the same six movies as in the three-act segmentation task described in the previous section. The experimental settings remain the same.
Results and discussion:
We first run our online face clustering algorithm on each movie. Some of the clusters thus obtained may be noisy, i.e., they may contain faces from multiple characters. Such noisy clusters are formed due to (i) the presence of minor characters who do not appear on-screen long enough, and (ii) some wrongly clustered faces of major characters. Since ground truth face labels are not available for the movies, we sought manual validation of the clusters to evaluate the performance of our method. Two human annotators labeled all the clusters formed for each movie, and identified each cluster as either a valid character cluster or a noisy cluster. The results are presented in Table 4.
After the clusters are formed, we compute σ(p) for each cluster of a given movie, and identify the top 5 clusters (using the σ(p) values) as the five major characters in the movie. To validate the results, we again seek manual evaluation. Two human annotators watched each movie and, based on the Internet Movie Database (IMDb)^{Footnote 2} major cast list and the storyline, identified the top 5 characters in each movie. Table 5 presents the corresponding results, where 'X' denotes a noisy cluster. The results show that the top two characters are always retrieved correctly, and in most cases, our CIG-based approach is able to retrieve four of the top five characters.
6 Conclusion
We proposed an unsupervised approach to building a dynamic network of movie characters through online face clustering. This is significantly different from the existing body of work, which builds a single, static character network using supervision from text, metadata, or human annotation. We demonstrated that dynamic CIGs can successfully detect high-level semantic structures (acts) in movie narratives, and can also identify the major characters in a movie with high precision. Apart from the applications presented in this paper, dynamic CIGs are also expected to be useful for extracting character-level analytics, movie summarization, indexing and navigation.
Future work will be directed towards expanding the database used for validation, and leveraging the subtitle and audio information available for movies to achieve better clustering accuracy. A scheme for splitting and fusing the clusters formed online could improve the quality of the clusters and, in turn, the quality of the CIG.
Notes
www.imdb.com/
References
Cao X, Zhang C, Zhou C, Fu H, Foroosh H (2015) Constrained multiview video face clustering. IEEE Trans Image Process 24(11):4381–4393
Cinbis R G, Verbeek J, Schmid C (2011) Unsupervised metric learning for face identification in tv video Computer Vision (ICCV), 2011 IEEE International Conference on, IEEE, pp 1559–1566
Du M, Chellappa R (2012) Face association across unconstrained video frames using conditional random fields European Conference on Computer Vision, Springer, pp 167–180
Everingham M, Sivic J, Zisserman A (2006) Hello! my name is... buffy–automatic naming of characters in tv video
Field S (2007) Screenplay: The foundations of screenwriting, Delta
Guha T, Kumar N, Narayanan SS, Smith SL (2015) Computationally deconstructing movie narratives: an informatics approach Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, IEEE, pp 2264–2268
King DE (2009) Dlibml: A machine learning toolkit. J Mach Learn Res 10:1755–1758
Kulshreshtha P, Guha T (2018) An online algorithm for constrained face clustering in videos 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, pp 2670–2674
Le QV (2013) Building highlevel features using large scale unsupervised learning Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, IEEE, pp 8595–8598
Li Y, Lee SH, Yeh CH, Kuo CCJ (2006) Techniques for movie content analysis and skimming: tutorial and overview on video abstraction techniques. IEEE Signal Proc Mag 23(2):79–89
Li Y, Narayanan S, Kuo CC J (2004) Contentbased movie analysis and indexing based on audiovisual cues. IEEE transactions on circuits and systems for video technology 14(8):1073–1085
McKee R (1997) Substance, structure, style, and the principles of screenwriting. New York: HarperCollins, New York
Mitra A, Biswas S, Bhattacharyya C (2014) Temporally coherent bayesian models for entity discovery in videos by tracklet clustering. arXiv preprint arXiv:1409.6080
Mitra A, Biswas S, Bhattacharyya C (2017) Bayesian modeling of temporal coherence in videos for entity discovery and summarization. IEEE Trans Pattern Anal Mach Intell 39(3):430–443
Park SB, Oh KJ, Jo GS (2012) Social network analysis in a movie using character-net. Multimed Tools Appl 59(2):601–627
Ramakrishna A, Martínez VR, Malandrakis N, Singla K, Narayanan S (2017) Linguistic analysis of differences in portrayal of movie characters. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol 1, pp 1669–1678
Schroff F, Kalenichenko D, Philbin J (2015) FaceNet: A unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 815–823
Sharff S (1982) The elements of cinema: toward a theory of cinesthetic impact. Columbia University Press, New York
Sharma V, Sarfraz MS, Stiefelhagen R (2017) A simple and effective technique for face clustering in TV series
Sun Y, Chen Y, Wang X, Tang X (2014) Deep learning face representation by joint identification-verification. In: Advances in Neural Information Processing Systems, pp 1988–1996
Tran QD, Jung JE (2015) CoCharNet: Extracting social networks using character co-occurrence in movies. J UCS 21(6):796–815
Weng CY, Chu WT, Wu JL (2009) RoleNet: Movie analysis from the perspective of social networks. IEEE Trans Multimed 11(2):256–271
Wolf L, Hassner T, Maoz I (2011) Face recognition in unconstrained videos with matched background similarity. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 529–534
Wu B, Hu BG, Ji Q (2017) A coupled hidden markov random field model for simultaneous face clustering and tracking in videos. Pattern Recogn 64:361–373
Wu B, Lyu S, Hu BG, Ji Q (2013) Simultaneous clustering and tracklet linking for multi-face tracking in videos. In: 2013 IEEE International Conference on Computer Vision (ICCV), IEEE, pp 2856–2863
Wu B, Zhang Y, Hu BG, Ji Q (2013) Constrained clustering and its application to face clustering in videos. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, pp 3507–3514
Xiao S, Tan M, Xu D (2014) Weighted block-sparse low rank representation for face clustering in videos. In: European Conference on Computer Vision, Springer, pp 123–138
Yeh MC, Wu WP (2014) Clustering faces in movies using an automatically constructed social network. IEEE MultiMedia 21(2):22–31
Zhang S, Gong Y, Wang J (2016) Deep metric learning with improved triplet loss for face clustering in videos. In: Pacific Rim Conference on Multimedia, Springer, pp 497–508
Zhang Z, Luo P, Loy CC, Tang X (2016) Joint face representation adaptation and clustering in videos. In: European Conference on Computer Vision, Springer, pp 236–251
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A part of this work was done when both the authors were at IIT Kanpur.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Kulshreshtha, P., Guha, T. Dynamic character graph via online face clustering for movie analysis. Multimed Tools Appl 79, 33103–33118 (2020). https://doi.org/10.1007/s11042-020-09449-6