1 Introduction

Single person pose estimation has made remarkable progress over the past few years. This is mainly due to the availability of deep learning based methods for detecting joints [1–5]. While earlier approaches in this direction [4, 6, 7] combine the body part detectors with tree structured graphical models, more recent methods [1–3, 8–10] demonstrate that spatial relations between joints can be directly learned by a neural network without the need of an additional graphical model. These approaches, however, assume that only a single person is visible in the image and that the location of the person is known a priori. Moreover, the number of parts is defined by the network, e.g., full body or upper body, and cannot be changed. For realistic scenarios, such assumptions are too strong and the methods cannot be applied to images that contain a number of overlapping and truncated persons. An example of such a scenario is shown in Fig. 1.

In comparison to single person human pose estimation benchmarks, multi-person pose estimation introduces new challenges: the number of persons in an image is unknown and needs to be correctly estimated, the persons occlude each other and might be truncated, and the joints need to be associated to the correct person. The simplest approach to tackle this problem is to first use a person detector and then estimate the pose for each detection independently [11–13]. This, however, does not resolve the joint association problem when two persons are next to each other or truncated. Other approaches estimate the poses of all detected persons jointly [14, 15]. In [2] a person detector is not required. Instead, body part proposals are generated and connected in a large graph. The approach then solves the proposal labeling, joint-to-person association, and non-maximum suppression problems jointly. While the model proposed in [2] can be solved by integer linear programming and achieves state-of-the-art results on a very small subset of the MPII Human Pose Dataset, its complexity makes it infeasible for practical applications. As reported in [5], the processing of a single image takes about 72 h.

In this work, we address the joint-to-person association problem using a densely connected graphical model as in [2], but propose to solve it only locally. To this end, we first use a person detector and crop image regions as illustrated in Fig. 1. Each region contains sufficient context, but only the joints of persons that are very close. We then solve the joint-to-person association for the person in the center of each region by integer linear programming (ILP). The labeling of the joints and the non-maximum suppression are directly performed by a convolutional neural network. We evaluate our approach on the MPII Human Pose Dataset for multiple persons, where we slightly improve the accuracy of [2] while reducing the run-time by a factor between 6,000 and 19,000.

Fig. 1. Example image from the multi-person subset of the MPII Pose Dataset [16].

2 Related Work

Human pose estimation has generally been addressed under the assumption that only a single person is visible. For this, earlier approaches formulate the problem in a graphical model where interactions between body parts are modelled in a tree structure, combined with local observations obtained from discriminatively trained part detectors [17–23]. While the tree-structured models provide efficient inference, they struggle to model long-range characteristics of the human body. With the progress in convolutional neural network architectures, more recent works adopt CNNs to obtain stronger part detectors but still use graphical models to obtain coherent pose estimates [4, 6, 7].

The state-of-the-art approaches, however, demonstrate that graphical models are of little importance in the presence of strong part detectors, since the long-range relationships of the body parts can be directly incorporated into the part detectors [1–3, 5, 8–10]. In [3, 8, 9] multi-staged CNN architectures are proposed where each stage of the network takes as input the score maps of all parts from its preceding stage. This provides additional information about the interdependence, co-occurrence, and context of parts to each stage, and thereby allows the network to implicitly learn image dependent spatial relationships between parts. Instead of a multi-staged architecture, [5] proposes to use a very deep network that inherently results in large receptive fields and therefore allows contextual information around the parts to be used. All of these methods report impressive results for single person pose estimation without an additional graphical model for refinement.

In contrast to single person pose estimation, multi-person pose estimation poses a significantly more complex problem, and only a few works have focused on this setting [2, 5, 11–14, 22, 24–26]. [22, 24] perform non-maximum suppression on the marginals obtained using a graphical model to generate multiple pose hypotheses in an image. These approaches, however, only work in scenarios where the persons are significantly distant from each other, and they consider only fully visible persons. The methods in [11–13] first detect the persons in an image using a person detector and then estimate the body pose for each person independently. [24] employs a similar approach for 3D pose estimation of multiple persons in a calibrated multi-camera scenario: the approach first obtains the number of persons using a person detector and then samples the 3D pose of each person from the marginals of a 3D pictorial structure model. For every detected person, [13] explores a range of tree structured models, each containing only a subset of upper-body parts, and selects the best model based on a cost function that penalizes models containing occluded parts. Since the search space of the models increases exponentially with the number of body parts, the approach is very expensive for full body skeletons. [14, 15] define a joint pose estimation model for all detected persons, and utilize several occlusion cues to model interactions between people. All of these approaches rely on a standard pictorial structure model with a tree structure and cannot incorporate dependencies beyond adjacent joints.

More recently, [2] proposed a joint objective function for multi-person pose estimation. The approach does not require a separate person detector or any prior information about the number of persons, and unlike earlier works it can tackle any type of occlusion or truncation. It starts by generating a set of class independent part proposals and constructs a densely connected graph from the proposals. It then uses integer linear programming to label each proposal with a body part and to assign the proposals to unique individuals. The optimization problem proposed in [2] is theoretically well founded, but it is NP-hard and prohibitively expensive to solve for realistic scenarios. The number of part proposals is therefore limited to 100, which means that the approach can estimate the poses of at most 7 fully visible persons with 14 body parts per person. Despite this restriction, the inference takes roughly 72 h for a single image [5]. In [5], the authors build upon the same model and propose stronger part detectors and image dependent spatial models, along with an incremental optimization approach that significantly reduces the optimization time of [2]. The approach, however, is still too slow for practical applications, since it requires 8 min per image and still limits the number of proposals to a maximum of 150.

3 Overview

Our method solves the problem of joint-to-person association locally for each person in the image. To this end, we first detect the persons using a person detector [27]. For each detected person, we generate a set of joint candidates using a single person pose estimation model (Sect. 4). The candidates are prone to errors since the single person models do not take into account occlusion or truncation. In order to associate each joint to the correct person and also to remove the erroneous candidates, we perform inference locally on a fully connected graph for each person using integer linear programming (Sect. 5). Figure 2 shows an overview of the proposed approach.
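Before detailing the individual components, the overall pipeline can be summarized in a few lines of Python. This is only a schematic sketch: all helpers are passed in as hypothetical placeholders for the components described in Sects. 4 and 5, and are not part of any released code.

```python
def estimate_poses(image, person_detector, cpm, crop_with_context,
                   top_candidates, solve_local_jpa, N=5, tau=0.2):
    """Schematic multi-person pipeline: detect persons, generate joint
    candidates per person, and solve the association locally (Fig. 2)."""
    poses = []
    for box in person_detector(image):             # Fig. 2 (a)
        region = crop_with_context(image, box)     # crop with sufficient context
        score_maps = cpm(region)                   # Sect. 4, Fig. 2 (b)
        candidates = top_candidates(score_maps, N, tau)
        poses.append(solve_local_jpa(candidates))  # Sect. 5, Fig. 2 (c)+(d)
    return poses
```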

Fig. 2. Overview of the proposed method. We detect persons in an image using a person detector (a). A set of joint candidates is generated for each detected person (b). The candidates build a fully connected graph (c) and the final pose estimates are obtained by integer linear programming (d). (best viewed in color)

4 Convolutional Pose Machines

Given a person in an image \(\mathbf {I}\), we define its pose as a set \(\mathcal {X} = \{\mathbf {x}_{j}\}_{j=1 \ldots J}\) of \(J=14\) body joints, where the vector \(\mathbf {x}_j \in \mathcal {X}\) represents the 2D location \((u,v)\) of the \(j^{th}\) joint in the image. The convolutional pose machines (CPM) consist of a multi-staged CNN architecture with \(t \in \{1 \dots T\}\) stages, where each stage is a multi-label classifier \(\phi _{t}(\mathbf {x})\) that is trained to provide confidence maps \(s^j_t \in \mathbb {R}^{w \times h}\) for each joint \(j = 1 \dots J\) and the background, where w and h are the width and the height of the image, respectively.

The first stage of the architecture uses only the local image evidence and provides the confidence scores

$$\begin{aligned} \phi _{t=1}(\mathbf {x}|\mathbf {I}) \rightarrow \{s^j_1(\mathbf {x}_j = \mathbf {x})\}_{j=1 \dots J+1}, \end{aligned}$$
(1)

whereas, in addition to the local image evidence, all subsequent stages also utilize the contextual information from the preceding stages to produce confidence score maps

$$\begin{aligned} \phi _{t>1}(\mathbf {x} | \mathbf {I}, \psi (\mathbf {x}, \mathbf {s}_{t-1})) \rightarrow \{s^j_t(\mathbf {x}_j = \mathbf {x})\}_{j=1 \dots J+1}, \end{aligned}$$
(2)

where \(\mathbf {s}_t \in \mathbb {R}^{w \times h \times (J+1)}\) corresponds to the score maps of all body joints and the background at stage t, and \(\psi (\mathbf {x}, \mathbf {s}_{t-1})\) indicates the mapping from the scores \(\mathbf {s}_{t-1}\) to the context features for location \(\mathbf {x}\). The receptive field of the subsequent stages is increased to the extent that the context of the complete person is available. This allows the network to model complex long-range spatial relationships between joints, and to leverage the context around the person. The CPM architecture is completely differentiable and allows end-to-end training of all stages. Due to its multi-stage nature, the overall CNN architecture consists of many layers and is therefore prone to the problem of vanishing gradients [3, 28, 29]. In order to address this problem, [3] uses intermediate supervision by adding a loss function at each stage t. The CNN architecture used for each stage can be seen in Fig. 3. In this paper we exploit the intermediate supervision of the stages during training for multi-person human pose estimation, as we will discuss in the next section.
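To make the staged computation of (1) and (2) concrete, the following is a minimal PyTorch sketch of a CPM-style network. The layer sizes and kernel widths are illustrative assumptions and do not reproduce the exact architecture of [3]; the sketch only shows how each refinement stage consumes the shared image features together with the score maps of the preceding stage.

```python
import torch
import torch.nn as nn

class CPMSketch(nn.Module):
    """Minimal sketch of a multi-stage CPM-style network; the layer
    configuration is illustrative, not the exact architecture of [3]."""
    def __init__(self, J=14, T=6, feat=128):
        super().__init__()
        self.features = nn.Sequential(            # shared image features
            nn.Conv2d(3, feat, 9, padding=4), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(feat, feat, 9, padding=4), nn.ReLU())
        self.stage1 = nn.Conv2d(feat, J + 1, 1)   # local evidence only, Eq. (1)
        self.refine = nn.ModuleList([             # stages t > 1, Eq. (2)
            nn.Sequential(nn.Conv2d(feat + J + 1, feat, 11, padding=5),
                          nn.ReLU(), nn.Conv2d(feat, J + 1, 1))
            for _ in range(T - 1)])

    def forward(self, img):
        f = self.features(img)
        scores = [self.stage1(f)]                 # s_1
        for stage in self.refine:                 # each stage sees f and s_{t-1}
            scores.append(stage(torch.cat([f, scores[-1]], dim=1)))
        return scores                             # one loss per stage (Sect. 4.1)
```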

Fig. 3. CPM architecture proposed in [3]. The first stage (a) utilizes only the local image evidence whereas all subsequent stages (b) also utilize the output of preceding stages to exploit the spatial context between joints. The receptive field of stages \( t \ge 2 \) is increased by having multiple convolutional layers at the 8 times down-sampled score maps. All stages are locally supervised and a separate loss is computed for each stage. We provide multi-person target score maps to stage 1, and single-person score maps to all subsequent stages.

4.1 Training for Multi-person Pose Estimation

Each stage of the CPM is trained to produce confidence score maps for all body joints, and the loss function at the end of every stage computes the \(l_2\) distance between the predicted confidence scores and the target score maps. The target score maps are modeled as Gaussian distributions centered at the ground-truth locations of the joints. For multi-person pose estimation, the aim of the training is to focus only on the body joints of the detected person, while suppressing the joints of all other overlapping persons. We do this by creating two types of target score maps. For the first stage, we model the target score maps as a sum of Gaussian distributions for the body joints of all persons appearing in the bounding box; the primary person appears roughly at the center of this box. For the subsequent stages, we model only the joints of the primary person. An example of target score maps for different stages can be seen in Fig. 4.
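As a sketch, the two types of target score maps can be generated as follows, assuming NumPy; the Gaussian width sigma is an assumed value, since the text does not specify it.

```python
import numpy as np

def target_maps(joints_per_person, primary_idx, stage, h, w, sigma=7.0):
    """Sketch of the target score maps from Sect. 4.1 (sigma is assumed).
    joints_per_person: list of (J, 2) arrays of (u, v) joint locations.
    Stage 1 targets sum Gaussians over all persons in the crop; all later
    stages keep only the primary person."""
    persons = (joints_per_person if stage == 1
               else [joints_per_person[primary_idx]])
    J = joints_per_person[0].shape[0]
    maps = np.zeros((J, h, w), dtype=np.float32)
    vv, uu = np.mgrid[0:h, 0:w]                   # pixel coordinate grids
    for joints in persons:
        for j, (u, v) in enumerate(joints):
            maps[j] += np.exp(-((uu - u)**2 + (vv - v)**2) / (2 * sigma**2))
    return maps
```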

Fig. 4. Example of target score maps for the head, neck and left shoulder. The target score maps for the first stage include the joints of all persons (left). The target score maps for all subsequent stages only include the joints of the primary person.

Fig. 5. Examples of score maps provided by different stages of the CPM. The first stage of the CPM uses only local image evidence and therefore provides high confidence scores for the joints of all persons in the image, whereas all subsequent stages are trained to provide high confidence scores only for the joints of the primary person while suppressing the joints of other persons. The primary person is highlighted by a yellow dot in the first row. (best viewed in color)

Figure 5 shows some examples of how the inferred score maps evolve as the number of stages increases. In [3], the pose of the person is obtained by taking the maximum of the inferred score maps, i.e., \(\mathbf {x}_{j} = \text {argmax}_\mathbf{{x}} s^j_{T}(\mathbf {x})\). This, however, assumes that all joints are visible in the image; it results in erroneous estimates for invisible joints and can wrongly associate joints of other nearby persons to the primary person. Instead of taking the maximum, we sample N candidates from each inferred score map \(s^j_{T}\) and resolve the joint-to-person association and outlier removal by integer linear programming, as sketched below.
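The following sketch realizes this sampling under the assumption that candidates correspond to the strongest local maxima of each score map; the peak-suppression neighborhood nms_size is an assumed detail, and the threshold tau anticipates \(f_{\tau }\) of Eq. (21) in Sect. 5.2.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def sample_candidates(score_map, N=5, tau=0.2, nms_size=5):
    """Sketch: take the N strongest local maxima of one inferred score
    map s^j_T as joint candidates, dropping low-confidence peaks."""
    peaks = (score_map == maximum_filter(score_map, size=nms_size))
    vs, us = np.nonzero(peaks & (score_map >= tau))   # threshold weak peaks
    order = np.argsort(score_map[vs, us])[::-1][:N]   # keep top-N by score
    return [((us[i], vs[i]), float(score_map[vs[i], us[i]])) for i in order]
```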

5 Joint-to-Person Association

We solve the joint-to-person association using a densely connected graphical model as in [2]. The model proposed in [2], however, aims to resolve joint-to-person associations together with proposal labeling globally for all persons, which makes it very expensive to solve. In contrast, we propose to solve this problem locally for each person. We first briefly summarize the DeepCut method [2] in Sect. 5.1, and then describe the proposed local joint-to-person association model in Sect. 5.2.

5.1 DeepCut

DeepCut aims to solve the problem of multi-person human pose estimation by jointly modeling the poses of all persons appearing in an image. Given an image, it starts by generating a set \(D\) of joint proposals, where \(\mathbf {x}_d \in \mathbb {Z}^2\) denotes the 2D location of the \(d^{th}\) proposal. The proposals are then used to formulate a graph optimization problem that aims to select a subset of proposals while suppressing the incompatible proposals, label each selected proposal with a joint type \(j \in \{1 \dots J\}\), and associate them to unique individuals.

The problem can be solved by integer linear programming (ILP), optimizing over the binary variables \(x \in \{0,1\}^{D \times J}\), \(y \in \{0,1\}^{\left( {\begin{array}{c}D\\ 2\end{array}}\right) }\), and \(z \in \{0,1\}^{\left( {\begin{array}{c}D\\ 2\end{array}}\right) \times J^2}\). For every proposal d, a set of variables \(\{x_{dj}\}_{j = 1 \dots J}\) is defined where \(x_{dj} = 1\) indicates that the proposal d is of body joint type j. For every pair of proposals \(dd'\), the variable \(y_{dd'}\) indicates that the proposals d and \(d'\) belong to the same person. The variable \(z_{dd'jj'} = 1\) indicates that the proposal d is of joint type j, the proposal \(d'\) is of joint type \(j'\), and both proposals belong to the same person \((y_{dd'} = 1)\). The variable \(z_{dd'jj'}\) is constrained such that \(z_{dd'jj'} = x_{dj}x_{d'j'}y_{dd'}\). The solution of the ILP problem is obtained by optimizing the following objective function:

$$\begin{aligned} \min _{(x,y,z) \in X_{D}} \left\langle \alpha , x \right\rangle + \left\langle \beta , z \right\rangle \end{aligned}$$
(3)

subject to

$$\begin{aligned} \forall d \in D~\forall jj' \in \left( {\begin{array}{c}J\\ 2\end{array}}\right) :&\quad x_{dj} + x_{dj'} \le 1 \end{aligned}$$
(4)
$$\begin{aligned} \forall dd' \in \left( {\begin{array}{c}D\\ 2\end{array}}\right) :&\quad y_{dd'} \le \sum _{j \in J} x_{dj}, \quad y_{dd'} \le \sum _{j \in J} x_{d'j} \end{aligned}$$
(5)
$$\begin{aligned} \forall dd'd'' \in \left( {\begin{array}{c}D\\ 3\end{array}}\right) :&\quad y_{dd'} + y_{d'd''} - 1 \le y_{dd''} \end{aligned}$$
(6)
$$\begin{aligned} \forall dd' \in \left( {\begin{array}{c}D\\ 2\end{array}}\right) ~\forall jj' \in J^2 :&\quad x_{dj} + x_{d'j'} + y_{dd'} - 2 \le z_{dd'jj'} \nonumber \\&\quad z_{dd'jj'} \le \min (x_{dj}, x_{d'j'}, y_{dd'}) \end{aligned}$$
(7)

and, optionally,

$$\begin{aligned} \forall dd' \in \left( {\begin{array}{c}D\\ 2\end{array}}\right) ~\forall jj' \in J^2 :&\quad x_{dj} + x_{d'j'} -1 \le y_{dd'} \end{aligned}$$
(8)

where

$$\begin{aligned} \alpha _{dj}&= \log \dfrac{1-p_{dj}}{p_{dj}} \end{aligned}$$
(9)
$$\begin{aligned} \beta _{dd'jj'}&= \log \dfrac{1-p_{dd'jj'}}{p_{dd'jj'}} \end{aligned}$$
(10)
$$\begin{aligned} \left\langle \alpha , x \right\rangle&= \sum _{d \in D} \sum _{j \in J} \alpha _{dj} x_{dj} \end{aligned}$$
(11)
$$\begin{aligned} \left\langle \beta , z \right\rangle&= \sum _{dd' \in \left( {\begin{array}{c}D\\ 2\end{array}}\right) } \sum _{j,j' \in J} \beta _{dd'jj'} z_{dd'jj'}. \end{aligned}$$
(12)

The constraints (4)–(7) enforce that optimizing (3) results in valid body pose configurations for one or more persons. The constraints (4) ensure that a proposal d can be labeled with only one joint type, while the constraints (5) guarantee that a pair of proposals \(dd'\) can belong to the same person only if neither is suppressed, i.e., \(x_{dj} = 1\) and \(x_{d'j'} = 1\). The constraints (6) are transitivity constraints and enforce for any three proposals \(dd'd'' \in \left( {\begin{array}{c}D\\ 3\end{array}}\right) \) that if d and \(d'\) belong to the same person, and \(d'\) and \(d''\) also belong to the same person, then the proposals d and \(d''\) must belong to the same person as well. The constraints (7) enforce that for any \(dd' \in \left( {\begin{array}{c}D\\ 2\end{array}}\right) \) and \(jj' \in J^2\), \(z_{dd'jj'} = x_{dj}x_{d'j'}y_{dd'}\). The constraints (8) are only applicable for single-person human pose estimation, as they enforce that two proposals \(dd'\) that are not suppressed must be grouped together. In (9), \(p_{dj} \in (0,1)\) are the body joint unaries and correspond to the probability of a proposal d being of joint type j. In (10), \(p_{dd'jj'}\) corresponds to the conditional probability that a pair of proposals \(dd'\) belongs to the same person, given that d and \(d'\) are of joint type j and \(j'\), respectively. In [2] this ILP formulation is referred to as the Subset Partitioning and Labelling Problem, since it partitions the initial pool of proposal candidates into unique individuals, labels each proposal with a joint type j, and inherently suppresses the incompatible candidates.
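The constraints (7) are an instance of the standard linearization of a product of binary variables: for binary variables a, b, c, a binary variable w satisfies \(w = abc\) exactly when

$$\begin{aligned} w \le a, \quad w \le b, \quad w \le c, \quad a + b + c - 2 \le w, \end{aligned}$$

which is (7) with \(a = x_{dj}\), \(b = x_{d'j'}\), \(c = y_{dd'}\), and \(w = z_{dd'jj'}\). This keeps the objective (3) linear despite the cubic coupling between the variables.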

5.2 Local Joint-to-Person Association

In contrast to [2], we solve the joint-to-person association problem locally for each person. We also do not label generic proposals as part of the ILP formulation, since we use a neural network to obtain detections for each joint as described in Sect. 4. We therefore start with a set of joint detections \(D_J\), where every detection \(d_j\) at location \(\mathbf {x}_{d_j} \in \mathbb {Z}^2\) has a known joint type \(j \in \{1 \dots J\}\). Our model requires only two types of binary random variables \(x \in \{0,1\}^{D_J}\) and \(y \in \{0,1\}^{\left( {\begin{array}{c}D_J\\ 2\end{array}}\right) }\). Here, \(x_{d_j} = 1\) indicates that the detection \(d_j\) of joint type j is not suppressed, and \(y_{{d_j}{{d'}_{j'}}} = 1\) indicates that the detection \(d_j\) of type j and the detection \({d'}_{j'}\) of type \(j'\) belong to the same person. The objective function for local joint-to-person association takes the form:

$$\begin{aligned} \min _{(x,y) \in X_{D_J}} \left\langle \alpha , x \right\rangle + \left\langle \beta , y \right\rangle \end{aligned}$$
(13)

subject to

$$\begin{aligned} \forall {d_j}{{d'}_{j'}} \in \left( {\begin{array}{c}D_J\\ 2\end{array}}\right) :&\quad y_{{d_j}{{d'}_{j'}}} \le x_{d_j}, \quad y_{{d_j}{{d'}_{j'}}} \le x_{d'_{j'}} \end{aligned}$$
(14)
$$\begin{aligned} \forall {d_j}{d'_{j'}}{d''_{j''}} \in \left( {\begin{array}{c}D_J\\ 3\end{array}}\right) :&\quad y_{{d_j}{d'_{j'}}} + y_{{d'_{j'}}{d''_{j''}}} - 1 \le y_{{d_j}{{d''}_{j''}}} \end{aligned}$$
(15)
$$\begin{aligned} \forall {d_j}{d'_{j'}} \in \left( {\begin{array}{c}D_J\\ 2\end{array}}\right) :&\quad x_{d_j} + x_{d'_{j'}} -1 \le y_{{d_j}{{d'}_{j'}}} \end{aligned}$$
(16)

where

$$\begin{aligned} \alpha _{d_j}&= \log \dfrac{1-p_{d_j}}{p_{d_j}} \end{aligned}$$
(17)
$$\begin{aligned} \beta _{{d_j}{d'_{j'}}}&= \log \dfrac{1-p_{{d_j}{d'_{j'}}}}{p_{{d_j}{d'_{j'}}}} \end{aligned}$$
(18)
$$\begin{aligned} \left\langle \alpha , x \right\rangle&= \sum _{d_j \in D_J} \alpha _{d_j} x_{d_j} \end{aligned}$$
(19)
$$\begin{aligned} \left\langle \beta , y \right\rangle&= \sum _{{d_j}{d'_{j'}} \in \left( {\begin{array}{c}D_J\\ 2\end{array}}\right) } \beta _{{d_j}{d'_{j'}}} y_{{d_j}{d'_{j'}}}. \end{aligned}$$
(20)

The constraints (14) enforce that the detections \(d_j\) and \({d'_{j'}}\) can be connected \((y_{{d_j}{{d'}_{j'}}} = 1)\) only if neither is suppressed, i.e., \(x_{d_j} = 1\) and \(x_{d'_{j'}} = 1\). The constraints (15) are transitivity constraints as before, and the constraints (16) guarantee that all detections that are not suppressed belong to the primary person. Comparing (3)–(8) with (13)–(16), the number of variables is reduced from \(({D \times J}+{\left( {\begin{array}{c}D\\ 2\end{array}}\right) }+{\left( {\begin{array}{c}D\\ 2\end{array}}\right) \times J^2})\) to \(({D_J}+{\left( {\begin{array}{c}D_J\\ 2\end{array}}\right) })\). Similarly, the number of constraints is also drastically reduced.
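To make the reduction concrete with illustrative numbers: with the limit of \(D = 100\) proposals and \(J = 14\) joints used in [2], the global model requires

$$\begin{aligned} D \times J + \left( {\begin{array}{c}D\\ 2\end{array}}\right) + \left( {\begin{array}{c}D\\ 2\end{array}}\right) \times J^2 = 1400 + 4950 + 970200 = 976550 \end{aligned}$$

binary variables, whereas the local model with \(N = 5\) candidates per joint, i.e. \(D_J = N \cdot J = 70\) detections, requires only \(70 + \left( {\begin{array}{c}70\\ 2\end{array}}\right) = 2485\).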

In (17), \(p_{d_j} \in (0,1)\) is the confidence of the joint detection \(d_j\), interpreted as a probability. We obtain it directly from the score maps inferred by the CPM as \(p_{d_j} = \mathrm {f}_{\tau }(s^j_T(\mathbf {x}_{d_j}))\), where

$$\begin{aligned} \mathrm {f}_{\tau }(s) = {\left\{ \begin{array}{ll} s &{} \text {if} \quad s \ge \tau \\ 0 &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
(21)

and \(\tau \) is a threshold that suppresses detections with a low confidence score.

In (18), \(p_{{d_j}{d'_{j'}}} \in (0,1)\) corresponds to the conditional probability that the detection \(d_j\) of joint type j and the detection \(d'_{j'}\) of joint type \(j'\) belong to the same person. For \( j = j' \), it is the probability that both detections \(d_j\) and \(d'_{j'}\) belong to the same body joint. For \(j \ne j'\), it measures the compatibility between two detection candidates of different joint types. Similar to [2], we obtain these probabilities by learning discriminative models based on appearance and spatial features of the detection candidates. For \(j = j'\), we define a feature vector

$$\begin{aligned} f_{{d_j}{d'_{j'}}} = \{\bigtriangleup \mathbf {x}, \exp (\bigtriangleup \mathbf {x}), (\bigtriangleup \mathbf {x})^2\}, \end{aligned}$$
(22)

where \(\bigtriangleup \mathbf {x} = (\bigtriangleup u, \bigtriangleup v)\) is the 2D offset between the locations \(\mathbf {x}_{d_j}\) and \(\mathbf {x}_{d'_{j'}}\). For \(j \ne j'\), we define a separate feature vector based on the spatial locations as well as the appearance features obtained from the joint detectors as

$$\begin{aligned} f_{{d_j}{d'_{j'}}} = \{\bigtriangleup \mathbf {x}, ||\bigtriangleup \mathbf {x}||, \arctan \left( \dfrac{\bigtriangleup v }{\bigtriangleup u}\right) , \mathbf {s}_{T}(\mathbf {x}_{d_j}), \mathbf {s}_{T}(\mathbf {x}_{{d'}_{j'}}) \}, \end{aligned}$$
(23)

where \(\mathbf {s}_{T}(\mathbf {x})\) is a vector containing the confidences of all joints and the background at location \(\mathbf {x}\). For both cases, we gather positive and negative samples from the annotated poses in the training data and train an SVM with an RBF kernel using LibSVM [30] for each pair \(jj' \in \left( {\begin{array}{c}J\\ 2\end{array}}\right) \). We use Platt scaling [31] to map the SVM outputs to probabilities \(p_{{d_j}{d'_{j'}}} \in (0,1)\). After optimizing (13), the pose of the primary person is given by the detections with \(x_{d_j}=1\).
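The resulting ILP is small enough to be written down directly. The following is a minimal sketch using gurobipy (Sect. 6.1 reports that the Gurobi Optimizer is used to solve (13)); the unary and pairwise probabilities are assumed to be precomputed as described above, and detections with zero confidence are assumed to have been dropped so that the log-odds in (17) and (18) are well defined.

```python
import math
from itertools import combinations

import gurobipy as gp
from gurobipy import GRB

def solve_local_jpa(p_unary, p_pair):
    """Sketch of the local joint-to-person association ILP (13)-(16).

    p_unary: {detection: p} with p in (0,1), from Eq. (21);
    p_pair:  {(d, d'): p} with p in (0,1), from the Platt-scaled SVMs,
             assumed to contain every sorted pair of detections."""
    dets = sorted(p_unary)
    m = gp.Model("local-jpa")
    x = {d: m.addVar(vtype=GRB.BINARY) for d in dets}
    y = {e: m.addVar(vtype=GRB.BINARY) for e in p_pair}
    # Log-odds costs, Eqs. (17) and (18).
    alpha = {d: math.log((1.0 - p) / p) for d, p in p_unary.items()}
    beta = {e: math.log((1.0 - p) / p) for e, p in p_pair.items()}
    m.setObjective(gp.quicksum(alpha[d] * x[d] for d in dets) +
                   gp.quicksum(beta[e] * y[e] for e in y), GRB.MINIMIZE)
    for d, d2 in y:
        m.addConstr(y[d, d2] <= x[d])                  # (14)
        m.addConstr(y[d, d2] <= x[d2])
        m.addConstr(x[d] + x[d2] - 1 <= y[d, d2])      # (16)
    for a, b, c in combinations(dets, 3):              # (15): transitivity
        m.addConstr(y[a, b] + y[b, c] - 1 <= y[a, c])
    m.optimize()
    return [d for d in dets if x[d].X > 0.5]           # primary person's joints
```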

6 Experiments

We evaluate the proposed approach on the Multi-Person subset of the MPII Human Pose Dataset [16] and follow the evaluation protocol proposed in [2]. The dataset consists of 3844 training and 1758 testing images with multiple persons. The persons appear in highly articulated poses with a large amount of occlusions and truncations. Since the original test data of the dataset is withheld, we perform all intermediate experiments on a validation set of 1200 images. The validation set is sampled according to the split proposed in [4] for the single person setup, i.e., we choose all multi-person images that are part of the validation set proposed in [4] and use all other images for training. In addition, we compare the proposed method with the state-of-the-art approach [2] on their selected subset of 288 images, and also compare with [5] on the complete test set. The accuracy is measured by the average precision (AP) for each joint using the scripts provided by [2].

6.1 Implementation Details

In order to localize the persons, we use the person detector proposed in [27]. The detector is trained on the Pascal VOC dataset [32]. For the quantitative evaluation, we discard detected persons with a bounding box area less than or equal to \(80\times 80\) pixels, since small persons are not annotated in the MPII Human Pose Dataset. For the qualitative results shown in Fig. 7, we do not discard the small detections. For the CPM [3], we use the publicly available source code and train it on the Multi-Person subset of the MPII Human Pose Dataset as described in Sect. 4. As in [3], we add images from the Leeds Sports Dataset [33] during training, and use a 6 stage \((T=6)\) CPM architecture. For solving (13), we use the Gurobi Optimizer.

6.2 Results

Fig. 6. Impact of the parameter \(\tau \) in (21) on the pose estimation accuracy.

We first evaluate the impact of the parameter \(\tau \) in \(f_{\tau }(s)\) (21) on the pose estimation accuracy, measured as mean AP on the validation set containing 1200 images. Figure 6 shows that the function \(f_{\tau }\) improves the accuracy as \(\tau \) is increased up to \(\tau = 0.3\). For \(\tau > 0.4\), the accuracy drops since a high threshold also discards correct detections. For the following experiments, we use \(\tau = 0.2\).

Table 1. Pose estimation results (AP) on the validation set (1200 images) of the MPII Multi-Person Pose Dataset.
Table 2. Comparison of pose estimation results (AP) with state-of-the-art approaches on 288 images [2].
Table 3. Pose estimation results (AP) on the withheld test set of the MPII Multi-Person Pose Dataset.

Table 1 reports the pose estimation results under different settings of the proposed approach on the validation set. We also report the median run-time required by each setting. Using only the CPM to estimate the pose of each detected person achieves \(45.2\,\%\) mAP and takes only 2 s per image. Using the proposed Local Joint-to-Person Association (L-JPA) model with 1 detection candidate per joint \((N=1)\) to suppress the incompatible detections improves the performance from \(45.2\,\%\) to \(49.2\,\%\) with a very slight increase in run-time. Increasing the number of candidates per joint increases the accuracy only slightly. For the following experiments, we use \(N=5\). When we compare the numbers with Fig. 6, we observe that CPM+L-JPA outperforms CPM for any \(0 \le \tau \le 0.4\).

The accuracy also depends on the person detector used. We use an off-the-shelf person detector without any fine-tuning on the MPII dataset. In order to evaluate the impact of the person detector's accuracy, we also estimate poses when the person detections are given by the ground-truth torso (GT Torso) locations of the persons provided with the dataset. This results in a significant improvement in accuracy from \(49.3\,\%\) to \(76.9\,\%\) mAP, showing that a better person detector would improve the results further.

Fig. 7. Some qualitative results for the MPII Multi-person Pose Dataset.

Table 2 compares the proposed approach with other approaches on the selected subset of 288 test images used in [2]. Our approach outperforms the state-of-the-art method DeepCut [2] (\(54.7\,\%\) vs. \(53.5\,\%\)) while being significantly faster (10 s vs. 57,995 s). If we use \(N=1\), our approach requires only 3 s per image with a minimal loss of accuracy, as shown in Table 1, i.e., our approach is more than 19,000 times faster than [2]. We also compare with the concurrent work [5]. While [5] achieves a higher accuracy than our method, our method is significantly faster (10 s vs. 230 s). In contrast to [5], we do not fine-tune the person detector on the MPII Multi-Person Pose Dataset and envision that doing so would lead to further improvements. We therefore also compare with two additional approaches [2, 6] when using GT bounding boxes of the persons; the results for [6] are taken from [2]. Our approach outperforms both methods by a large margin.

Finally, in Table 3, we report our results on all test images of the MPII Multi-Person Pose Dataset. Our method achieves \(43.1\,\%\) mAP. While the approach [2] cannot be evaluated on all test images due to the high computational complexity of its model, [5] reports a higher accuracy than our model. However, if we compare the run-times in Tables 2 and 3, we observe that the run-time of [5] doubles on the more challenging test set (485 s per image). Our approach, on the other hand, requires only 10 s in all evaluation settings and is around 50 times faster. If we use \(N=1\), our approach is 160 times faster than [5]. Using the torso annotations (GT Torso) as person detections again results in a significant improvement of the accuracy (\(62.2\,\%\) vs. \(43.1\,\%\) mAP). Some qualitative results can be seen in Fig. 7.

7 Conclusion

In this work we have presented an approach for multi-person pose estimation under occlusions and truncations. Since the global modeling of the poses of all persons is impractical, we demonstrated that the problem can be formulated as a set of independent local joint-to-person association problems. Compared to global modeling, these problems can be solved efficiently while still handling severe occlusions and truncations effectively. Although the accuracy can be further improved by using a better person detector, the proposed method already matches the accuracy of a state-of-the-art method while being 6,000 to 19,000 times faster.