
1 Introduction

Image clustering is a commonly used unsupervised analytical technique in practical computer vision applications [17]. The aim of image clustering is to discover the natural and interpretable structure of image representations, so as to group images that are similar to each other into the same cluster. Depending on the number of sources from which images are collected or the number of feature types used to describe them, existing clustering methods can be divided into single-view image clustering (SVIC) [1, 16, 32, 36] and multi-view image clustering (MVIC) [3, 4, 22, 47, 48]. Recently, MVIC [3, 48, 51] has been attracting increasing attention due to the flexibility of extracting multiple heterogeneous features from a single image. Compared to SVIC, MVIC has access to more characteristics and structural information of the data, and the features from diverse views can potentially complement each other and yield better clustering performance.

Existing MVIC methods can be roughly divided into three groups: multi-view spectral clustering [19, 30, 31], multi-view matrix factorization [4, 22, 37], and multi-view subspace clustering [13, 45, 49]. Multi-view spectral clustering [47] constructs multiple similarity graphs to achieve a common or similar eigenvector matrix on all views, and then generates consensus data partitions, hinging crucially on single-view spectral clustering [29]. Owing to the straightforward interpretability of matrix factorization [20], multi-view matrix factorization methods [4, 22] integrate information from multiple views towards a compatible common consensus, or decompose the heterogeneous features into specified centroid and cluster indicator matrices. Different from the above strategies, multi-view subspace clustering [13] employs the complementary properties across multiple views to uncover the common latent subspace and quantify the genuine similarities. Some other kernel-based MVIC methods [10, 42] exploit a linear or non-linear kernel on each view. Note that SVIC methods (e.g., k-means [16] and spectral clustering [29]) can also be leveraged to deal with the multi-view clustering problem. A common practice is to perform clustering on either a single-view feature or a simple concatenation of the multiple features [47, 48].

Fig. 1. The pipeline of HSIC. Common binary representation learning and discrete cluster structure learning are jointly and efficiently solved by alternating optimization.

Although SVIC and MVIC methods have achieved much progress on small- and middle-scale data, both become intractable (because of unaffordable computation and memory overhead) when dealing with large-scale data of high dimensionality, which is a typical case in the era of ‘big data’. As pointed out in [15, 41], we argue that real-valued features are the essential bottleneck restricting the scalability of existing clustering methods. To address this issue, inspired by recent advances in compact binary coding (a.k.a. hashing) [5, 23, 24, 27, 34, 39, 40, 43], we aim to develop a feasible binary clustering technique for large-scale MVIC. Specifically, we transform the original real-valued Euclidean space into a low-dimensional binary Hamming space, based on which an efficient clustering solution can then be devised. In this way, the time-consuming Euclidean distance measures (typically of \(\mathcal {O}(Nd)\) complexity, where N and d respectively denote the data size and dimension) on real-valued data are replaced by extremely fast XOR operations (of \(\mathcal {O}\)(1) complexity) on compact binary codes. Note that the proposed method is also potentially promising in practical use cases where computation and memory resources are limited (e.g., on wearable or mobile devices).
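The sketch below (not from the paper; plain NumPy, with hypothetical sizes N, d, and K) illustrates the contrast between Euclidean distance computation on real-valued features and Hamming distance computation on bit-packed binary codes via XOR and popcount.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 10000, 1024, 128                 # data size, real dimension, code length

X = rng.standard_normal((N, d))            # real-valued features
q = rng.standard_normal(d)                 # a real-valued query

B = rng.integers(0, 2, size=(N, K), dtype=np.uint8)   # binary codes (as 0/1 bits)
b = rng.integers(0, 2, size=K, dtype=np.uint8)        # a binary query

# Euclidean distances: O(N*d) floating-point operations.
euc = np.linalg.norm(X - q, axis=1)

# Hamming distances: pack bits into bytes, then XOR + popcount per row.
Bp, bp = np.packbits(B, axis=1), np.packbits(b)
ham = np.unpackbits(np.bitwise_xor(Bp, bp), axis=1).sum(axis=1)
```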

As shown in Fig. 1, we develop a Highly-economized Scalable Image Clustering (HSIC) framework for efficient large-scale MVIC. HSIC jointly learns effective common binary representations and robust discrete cluster structures. The former maximally preserves both sharable and view-specific/individual information across multiple views; the latter significantly promotes the computational efficiency and robustness of clustering. The joint learning strategy is superior to optimizing each objective separately, since it facilitates collaboration between the two objectives. An efficient alternating optimization algorithm is developed to address the joint discrete optimization problem. The main contributions of this work include:

(1) To the best of our knowledge, HSIC is the first work capable of large-scale MVIC, where common binary representations and robust binary cluster structures are obtained in a unified learning framework.

(2) HSIC captures both sharable and view-specific information from multiple views to fully exploit the complementarity and individuality of heterogeneous image features. The sparsity-inducing \(\ell _{21}\)-norm is imposed on the clustering model to further alleviate its sensitivity to outliers and noise.

(3) Extensive experimental results on four image datasets clearly show that HSIC reduces the memory footprint and computational time by up to 951 and 69.35 times, respectively, over the classical k-means algorithm, while consistently outperforming state-of-the-art approaches.

Notably, two works [15, 41] in the literature are most relevant to ours: [15] introduced a two-step binary k-means approach, in which clustering is performed on the binary codes obtained by Iterative Quantization (ITQ) [14], and [41] integrated binary structural SVM with k-means. Our HSIC fundamentally differs from them in the following aspects: (1) [15] and [41] are SVIC methods, while HSIC is specifically designed for MVIC; (2) [15] divides the clustering task into two unconnected procedures, which completely severs the important tie between binary coding and cluster structure learning. Meanwhile, the binary codes learned by [41] are too weak to achieve satisfactory results because they lack adequate representative capability. More importantly, neither method can make full use of the complementary properties of multiple views for scalable MVIC, as also shown in [50].

In the next section, we will introduce the detailed framework of our HSIC and then elaborate on the alternating optimization algorithm. The analysis in terms of computational complexity and memory load will also be presented.

2 Highly-Economized Scalable Image Clustering

Suppose we have a set of multi-view image features \(\mathcal X\) = \(\{\varvec{X}^1,\cdots ,\varvec{X}^m\}\) from m views, where \(\varvec{X}^v = [\varvec{x}_1^v, \cdots , \varvec{x}_N^v] \in \mathfrak {R}^{d_v \times N}\) is the accumulated feature matrix from the v-th view. \(d_v\) and N denote the dimensionality and the number of data points in \(\varvec{X}^v\), respectively. \(\varvec{x}_i^v \in \mathfrak {R}^{d_v \times 1}\) is the i-th feature vector from the v-th view. The main objective of unsupervised MVIC is to partition \(\mathcal X\) into c groups, where c is the number of clusters. In this work, to address the large-scale MVIC problem, our HSIC aims to perform binary clustering in a much lower-dimensional Hamming space. Particularly, we perform multi-view compression (i.e., project multi-view features onto the common Hamming space) by learning a compatible common binary representation via the complementary characteristics of multiple views. Meanwhile, robust binary cluster structures are formulated in the learned Hamming space for efficient clustering.

As a preprocessing step, we first normalize the features from each view as zero-centered vectors. Inspired by [26, 40], each feature vector is encoded by the simple nonlinear RBF kernel mapping, i.e., \(\psi (\varvec{x}_i^v) = [exp(-\Vert \varvec{x}_i^v - \varvec{a}_1^v\Vert ^2/\gamma ),\cdots , exp(-\Vert \varvec{x}_i^v - \varvec{a}_l^v\Vert ^2/\gamma )]^{\top }\), where \(\gamma \) is the pre-defined kernel width, and \(\psi (\varvec{x}_i^v)\in \mathfrak {R}^{l\times 1}\) denotes an l-dimensional nonlinear embedding for the i-th feature from the v-th view. Similar to [25, 26, 40], \(\{\varvec{a}_i^v\}_{i=1}^l\) are l anchor points randomly selected from \(\varvec{X}^v\) (\(l=1000\) is used for each view in this work). In the following, we introduce how to learn the common binary representation and the robust binary cluster structure, respectively, and finally arrive at a joint learning objective.
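A minimal sketch of this anchor-based RBF embedding is given below (plain NumPy; setting \(\gamma\) to the mean squared point-to-anchor distance is an assumed heuristic, since the paper only states that \(\gamma\) is pre-defined).

```python
import numpy as np

def rbf_embedding(X, anchors, gamma=None):
    """X: (d, N) features of one view; anchors: (d, l) anchor points.
    Returns the (l, N) nonlinear embedding psi(X)."""
    xx = (X ** 2).sum(axis=0)                              # squared norms of features, (N,)
    aa = (anchors ** 2).sum(axis=0)                        # squared norms of anchors, (l,)
    sq = aa[:, None] + xx[None, :] - 2.0 * anchors.T @ X   # pairwise squared distances, (l, N)
    if gamma is None:
        gamma = sq.mean()                                  # assumed choice of kernel width
    return np.exp(-sq / gamma)

# Usage: l = 1000 anchors sampled from the view itself (d_v = 512, N = 5000 here).
rng = np.random.default_rng(0)
Xv = rng.standard_normal((512, 5000))
anchors = Xv[:, rng.choice(Xv.shape[1], 1000, replace=False)]
psi_Xv = rbf_embedding(Xv, anchors)                        # shape (1000, 5000)
```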

(1) Common Binary Representation Learning. We consider a family of K hashing functions to be learned in HSIC, which quantize each \(\psi (\varvec{x}_i^v)\) into a binary representation \(\varvec{b}_i^v = [b_{i1}^v,\cdots ,b_{iK}^v]^T \in \{-1,1\}^{K\times 1}\). To eliminate the semantic gaps between different views, HSIC generates the common binary representation by combining multi-view features. Specifically, HSIC simultaneously projects features from multiple views onto a common Hamming space, i.e., \(\varvec{b}_i = sgn\big ((\varvec{P}^v)^{\top } \psi (\varvec{x}_i^v)\big )\), where \(\varvec{b}_i\) is the common binary code of the i-th features from different views (i.e., \(\varvec{x}_i^v\), \(\forall v=1,...,m\)), \(sgn(\cdot )\) is an element-wise sign function, \( \varvec{P}^v = [\varvec{p}_1^v, \cdots , \varvec{p}_K^v] \in \mathfrak {R}^{l\times K}\) is the mapping matrix for the v-th view and \(\varvec{p}_i^v\) is the projection vector for the i-th hashing function. As such, we construct the learning function by minimizing the following quantization loss:

$$\begin{aligned} \min _{\varvec{P}^v,\varvec{b}_i} \sum _{v=1}^m \sum _{i=1}^N \Vert \varvec{b}_i - (\varvec{P}^v)^{\top } \psi (\varvec{x}_i^v)\Vert _F^2. \end{aligned}$$
(1)

Since different views describe the same subject from different perspectives, the projection \(\{\varvec{P}^v\}_{v=1}^m\) should capture the shared information that maximizes the similarities of multiple views, as well as the view-specific/individual information that distinguishes individual characteristics between different views. To this end, we decompose each projection into the combination of sharable and individual projections, i.e., \(\varvec{P}^v = [\varvec{P}_S, \varvec{P}_I^v]\). Specifically, \(\varvec{P}_S \in \mathfrak {R}^{l\times K_S}\) is the shared projection across multiple views, while \(\varvec{P}_I^v \in \mathfrak {R}^{l\times K_I}\) is the individual projection for the v-th view, where \(K = K_S+K_I\). Therefore, HSIC collectively learns the common binary representation from multiple views using

$$\begin{aligned}&\min _{\varvec{P}^v, \varvec{B},\alpha ^v} \sum _{v=1}^m (\alpha ^v)^r \big (\Vert \varvec{B} - (\varvec{P}^v)^{\top } \psi (\varvec{X}^v)\Vert _F^2+\lambda _1 \Vert \varvec{P}^v\Vert _F^2\big ),\nonumber \\ s.t.~&\sum _v \alpha ^v=1,\alpha ^v>0, \varvec{B} = [\varvec{B}_s; \varvec{B}_I]\in \{-1,1\}^{K \times N}, \varvec{P}^v = [\varvec{P}_s, \varvec{P}_I^v], \end{aligned}$$
(2)

where \(\varvec{B} = [\varvec{b}_1,\cdots ,\varvec{b}_N]\), \(\varvec{\alpha }=[\alpha ^1,\cdots ,\alpha ^m]\in \mathfrak {R}^m\) weighs the importance of different views, \(r>1\) is a constant managing the weight distributions, and \(\lambda _1\) is a regularization parameter. The second term is a regularizer to control the parameter scales.

Moreover, from an information-theoretic point of view, the information provided by each bit of the binary codes should be maximized [2]. Based on this point and motivated by [14, 44], we adopt an additional regularizer for the binary codes \(\varvec{B}\) based on the maximum entropy principle, i.e., \(\max ~var[\varvec{B}] = var[sgn\big ((\varvec{P}^v)^{\top } \psi (\varvec{x}_i^v)\big )]\). This additional regularization on \(\varvec{B}\) ensures a balanced partition and reduces the redundancy of the binary codes. Here we replace the sign function by its signed magnitude, and formulate the relaxed regularization as follows

$$\begin{aligned} \max \sum _k \mathbb {E}[\Vert (\varvec{p}_k^v)^{\top } \psi (\varvec{x}_i^v)\Vert ^2] = \frac{1}{N} tr\big ((\varvec{P}^v)^{\top } \psi (\varvec{X}^v) \psi (\varvec{X}^v)^{\top }\varvec{P}^v\big ) = g(\varvec{P}^v). \end{aligned}$$
(3)

Finally, we combine problems (2) and (3) and reformulate the overall common binary representation learning problem as follows:

$$\begin{aligned}&\min _{\varvec{P}^v, \varvec{B}} \sum _{v=1}^m (\alpha ^v)^r \big (\Vert \varvec{B} - (\varvec{P}^v)^{\top } \psi (\varvec{X}^v)\Vert _F^2 + \lambda _1 \Vert \varvec{P}^v\Vert _F^2 - \lambda _2 g(\varvec{P}^v)\big )\nonumber \\ {}&s.t.~ \sum _v \alpha ^v=1,\alpha ^v>0,\varvec{B} = [\varvec{B}_s; \varvec{B}_I] \in \{-1,1\}^{K \times N}, \varvec{P}^v = [\varvec{P}_s, \varvec{P}_I^v], \end{aligned}$$
(4)

where \(\lambda _2\) is a weighting parameter.

(2) Robust Binary Cluster Structure Learning. For binary clustering, HSIC directly factorizes the learned binary representation \(\varvec{B}\) into the binary clustering centroids \(\varvec{Q}\) and discrete clustering indicators \(\varvec{F}\) using

$$\begin{aligned} \min _{\varvec{Q}, \varvec{F}} \Vert \varvec{B} - \varvec{Q}\varvec{F}\Vert _{21},~s.t.~\varvec{Q}\varvec{1} = \varvec{0},\varvec{Q} \in \{-1,1\}^{K\times c},\varvec{F} \in \{0,1\}^{c\times N}, \sum _j f_{ji} = 1, \end{aligned}$$
(5)

where \(\Vert \varvec{A}\Vert _{21} = \sum _i\Vert \varvec{a}^i\Vert _2\), and \(\varvec{a}^i\) is the i-th row of matrix \(\varvec{A}\). The first constraint of (5) ensures that the clustering centroids are balanced, as with the binary codes. Note that the \(\ell _{21}\)-norm imposed on the loss function could be replaced by the F-norm, i.e., \(\Vert \varvec{B} - \varvec{Q}\varvec{F}\Vert _F^2\). However, the F-norm based loss amplifies the errors induced by noise and outliers. Therefore, to achieve more stable and robust clustering performance, we employ the sparsity-inducing \(\ell _{21}\)-norm. It is also observed in [12] that the \(\ell _{21}\)-norm not only preserves the rotation invariance within each feature, but also controls the reconstruction error, which significantly mitigates the negative influence of representation outliers.
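The tiny example below (not from the paper; plain NumPy) illustrates why the row-wise \(\ell_{21}\) loss is less sensitive to outliers than the squared F-norm: a corrupted row contributes linearly to the former but quadratically to the latter.

```python
import numpy as np

def l21_norm(A):
    """Sum of the l2 norms of the rows of A, i.e., sum_i ||a^i||_2."""
    return np.linalg.norm(A, axis=1).sum()

U = np.zeros((4, 5))
U[0] = 100.0                       # one heavily corrupted row (an outlier)
print(l21_norm(U))                 # ~223.6: grows linearly with the outlier magnitude
print((U ** 2).sum())              # 50000.0: the squared F-norm is dominated by it
```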

Algorithm 1. The learning procedure of HSIC.

(3) Joint Objective Function. To preserve the semantic interconnection between the learned binary codes and the robust cluster structures, we incorporate common binary representation learning and discrete cluster structure construction into a joint learning framework. In this way, the unified framework can interactively enhance the quality of the learned binary representation and cluster structures. Hence, we have the following joint objective function:

$$\begin{aligned}&\min _{\varvec{P}^v,\varvec{B},\varvec{Q},\varvec{F},\alpha ^v} \sum _{v=1}^m (\alpha ^v)^r \big (\Vert \varvec{B} - (\varvec{P}^v)^{\top } \psi (\varvec{X}^v)\Vert _F^2 + \lambda _1 \Vert \varvec{P}^v\Vert _F^2 - \lambda _2 g(\varvec{P}^v)\big )+\lambda _3\Vert \varvec{B}- \varvec{Q}\varvec{F}\Vert _{21}, \nonumber \\ {}&s.t.~\sum _v \alpha ^v=1,\alpha ^v>0,\varvec{B} = [\varvec{B}_s; \varvec{B}_I] \in \{-1,1\}^{K \times N}, \varvec{P}^v = [\varvec{P}_s, \varvec{P}_I^v],\nonumber \\ {}&~~~~~~\varvec{Q}\varvec{1} = \varvec{0}, \varvec{Q} \in \{-1,1\}^{K\times c},\varvec{F} \in \{0,1\}^{c\times N}, \sum _j f_{ji} = 1, \end{aligned}$$
(6)

where \(\lambda _1\), \(\lambda _2\) and \(\lambda _3\) are trade-off parameters to balance the effects of different terms. To optimize the difficult discrete programming problem, a newly-derived alternating optimization algorithm is developed as shown in the next section.

2.1 Optimization

The solution to problem (6) is non-trivial as it involves a mixed binary integer program with three discrete constraints, which lead to an NP-hard problem. In the following, we introduce an alternating optimization algorithm to iteratively update each variable while fixing others, i.e., update \(\varvec{P}_s\rightarrow \varvec{P}_I^v \rightarrow \varvec{B} \rightarrow \varvec{Q} \rightarrow \varvec{F} \rightarrow \varvec{\alpha }\) in each iteration.

Due to the intractable \(\ell _{21}\)-norm loss, we first rewrite the last term in (6) as \(\lambda _3 tr\big (\varvec{U}^{\top } \varvec{D}\varvec{U}\big )\), where \(\varvec{U} = \varvec{B} - \varvec{Q}\varvec{F}\), and \(\varvec{D} \in \mathfrak {R}^{K\times K}\) is a diagonal matrix whose i-th diagonal element is defined as \(\varvec{d}_{ii} = \frac{1}{2\Vert \varvec{u}^i\Vert _2}\), where \(\varvec{u}^i\) is the i-th row of \(\varvec{U}\).
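A short sketch of this reweighting matrix is given below (plain NumPy; the small eps guard against zero rows is an added safeguard, not part of the paper).

```python
import numpy as np

def reweighting_matrix(B, Q, F, eps=1e-8):
    """B: (K, N) binary codes, Q: (K, c) centroids, F: (c, N) indicators.
    Returns the (K, K) diagonal matrix D with d_ii = 1 / (2 ||u^i||_2)."""
    U = B - Q @ F                                   # residual U = B - QF
    row_norms = np.linalg.norm(U, axis=1)           # ||u^i||_2 for each row i
    return np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))
```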

(1) \(\varvec{P}_s\)-Step: When fixing other variables, we update the sharable projection by

$$\begin{aligned}&\min _{\varvec{P}_s} \sum _{v=1}^m (\alpha ^v)^r \big ( \Vert \varvec{B}_s - \varvec{P}_s^{\top } \psi (\varvec{X}^v)\Vert _F^2 + \lambda _1 \Vert \varvec{P}_s\Vert _F^2 - \frac{\lambda _2}{N} tr\big (\varvec{P}_s^{\top } \psi (\varvec{X}^v) \psi ^{\top }(\varvec{X}^v)\varvec{P}_s\big )\big ). \end{aligned}$$
(7)

For notational convenience, we rewrite \(\psi (\varvec{X}^v)\psi ^{\top }(\varvec{X}^v)\) as \(\tilde{\varvec{X}}\). Taking the derivative of \(\mathcal L\) with respect to \(\varvec{P}_s\) and setting \(\frac{\partial \mathcal L}{\partial \varvec{P}_s} = 0\), we obtain the closed-form solution of \(\varvec{P}_s\), i.e.,

$$\begin{aligned} \varvec{P}_s = (\varvec{A}+\lambda _1 \sum _{v=1}^m (\alpha ^v)^r\varvec{I})^{-1} \varvec{T} \varvec{B}^{\top }, \end{aligned}$$
(8)

where \(\varvec{A} = (1-\frac{\lambda _2}{N})\sum _{v=1}^m (\alpha ^v)^r \tilde{\varvec{X}}\) and \(\varvec{T} = \sum _{v=1}^m (\alpha ^v)^r \psi (\varvec{X}^v)\).
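A minimal sketch of this closed-form update is shown below (plain NumPy). Note that, for the shapes to match \(\varvec{P}_s \in \mathfrak{R}^{l\times K_S}\), the shared code block \(\varvec{B}_s\) is used on the right-hand side; this is an interpretation of Eq. (8), which writes \(\varvec{B}\).

```python
import numpy as np

def update_Ps(psi_X, Bs, alpha, r, lam1, lam2):
    """psi_X: list of m arrays of shape (l, N), the embedded views psi(X^v);
    Bs: (K_S, N) shared code block; alpha: (m,) view weights.
    Returns P_s of shape (l, K_S) following Eq. (8)."""
    l, N = psi_X[0].shape
    w = alpha ** r                                           # (alpha^v)^r
    A = (1.0 - lam2 / N) * sum(wv * (Xv @ Xv.T) for wv, Xv in zip(w, psi_X))
    T = sum(wv * Xv for wv, Xv in zip(w, psi_X))             # (l, N)
    lhs = A + lam1 * w.sum() * np.eye(l)                     # (l, l)
    return np.linalg.solve(lhs, T @ Bs.T)                    # (l, K_S)
```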

(2) \(\varvec{P}_I^v\)-Step: Similarly, when fixing other parameters, the optimal solution of the v-th individual projection matrix can be determined by solving

$$\begin{aligned}&\min _{\varvec{P}_I^v} \Vert \varvec{B}_I - (\varvec{P}_I^v)^{\top } \psi (\varvec{X}^v)\Vert _F^2 + \lambda _1 \Vert \varvec{P}_I^v\Vert _F^2 - \frac{\lambda _2}{N} tr\big ((\varvec{P}_I^v)^{\top } \tilde{\varvec{X}}\varvec{P}_I^v\big ), \end{aligned}$$
(9)

and its closed-form solution can be obtained by \(\varvec{P}_I^v = \varvec{W}\psi (\varvec{X}^v)\varvec{B}^{\top }\), where \(\varvec{W} = \left( (1-\frac{\lambda _2}{N})\tilde{\varvec{X}}+\lambda _1 \varvec{I}\right) ^{-1}\) can be calculated beforehand.

(3) \(\varvec{B}\)-Step: Problem (6) w.r.t. \(\varvec{B}\) can be rewritten as:

$$\begin{aligned} \min _{\varvec{B}} \sum _{v=1}^m (\alpha ^v)^r \big (\Vert \varvec{B} - (\varvec{P}^v)^{\top } \psi (\varvec{X}^v)\Vert _F^2 \big )+\lambda _3 tr\big (\varvec{U}^{\top } \varvec{D}\varvec{U} \big ), ~s.t.~\varvec{B} \in \{-1,1\}^{K \times N}. \end{aligned}$$
(10)

Since \(\varvec{B}\) only has ‘1’ and ‘-1’ entries and \(\varvec{D}\) is a diagonal matrix, both \(tr(\varvec{B} \varvec{B}^{\top })\) = \(tr(\varvec{B}^{\top }\varvec{B}) = KN\) and \(tr\left( \varvec{B}^{\top }\varvec{DB}\right) \) = \(N*tr(\varvec{D})\) are constant terms w.r.t. \(\varvec{B}\). Based on this and with some further algebraic computations, (10) can be reformulated as

$$\begin{aligned} \min _{\varvec{B}} -tr\Big (\varvec{B}^{\top }\big (\sum _{v=1}^m (\alpha ^v)^r (\varvec{P}^v)^{\top } \psi (\varvec{X}^v) + \lambda _3 \varvec{Q}\varvec{F}\big )\Big ) + const,~s.t.~\varvec{B} \in \{-1,1\}^{K \times N}, \end{aligned}$$
(11)

where ‘const’ denotes the constant terms. This problem has a closed-form solution:

$$\begin{aligned} \varvec{B} = sgn \left( \sum _{v=1}^{m}(\alpha ^v)^r \big ((\varvec{P}^v)^{\top }\psi (\varvec{X}^v)\big ) + \lambda _3 \varvec{QF}\right) . \end{aligned}$$
(12)
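A direct sketch of the update in Eq. (12) is given below (plain NumPy; ties at exactly zero are broken towards +1, an implementation detail not specified in the paper).

```python
import numpy as np

def update_B(psi_X, P, alpha, r, lam3, Q, F):
    """psi_X: list of (l, N) embedded views; P: list of (l, K) projections [P_s, P_I^v];
    Q: (K, c) binary centroids; F: (c, N) indicators. Returns B in {-1, 1}^{K x N}."""
    w = alpha ** r
    S = sum(wv * (Pv.T @ Xv) for wv, Pv, Xv in zip(w, P, psi_X))   # weighted encodings
    S = S + lam3 * (Q @ F)                                         # pull towards centroids
    return np.where(S >= 0, 1, -1)                                 # element-wise sign
```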

(4) \(\varvec{Q}\)-Step: First, we degenerate (6) into the following computationally feasible problem (by removing the terms irrelevant to \(\varvec{Q}\) and \(\varvec{F}\), and relaxing the first constraint on \(\varvec{Q}\) into a penalty):

$$\begin{aligned} \min _{\varvec{Q}, \varvec{F}}tr\big (\varvec{U}^{\top } \varvec{D}\varvec{U} \big )+\nu \Vert \varvec{Q}^{\top }\varvec{1}\Vert _F^2,~s.t.~\varvec{Q}\in \{-1,1\}^{K\times c},~\varvec{F} \in \{0,1\}^{c\times N}, \sum _j f_{ji} = 1. \end{aligned}$$
(13)

With a sufficiently large \(\nu > 0\), problem (13) is equivalent to the corresponding subproblem of (6). Then, by fixing the variable \(\varvec{F}\), problem (13) becomes

$$\begin{aligned} \min _{\varvec{Q}} \mathcal L(\varvec{Q}) = -2tr(\varvec{B}^{\top } \varvec{D} \varvec{QF})+\nu \Vert \varvec{Q}^{\top }\varvec{1}\Vert _F^2 + const,~s.t.~\varvec{Q}\in \{-1,1\}^{K\times c}. \end{aligned}$$
(14)

Inspired by the efficient discrete optimization algorithms in [35, 38], we develop an adaptive discrete proximal linearized optimization algorithm, which iteratively updates \(\varvec{Q}\) in the (p+1)-th iteration by \(\varvec{Q}^{p+1} = sgn(\varvec{Q}^p-\frac{1}{\eta }\nabla \mathcal L(\varvec{Q}^p))\), where \(\nabla \mathcal L(\varvec{Q})\) is the gradient of \(\mathcal L(\varvec{Q})\), \(\frac{1}{\eta }\) is the learning step size with \(\eta \in (C,2C)\), and C is the Lipschitz constant. Intuitively, because of the very special \(sgn(\cdot )\) function, if the step size \(1/\eta \) is too small or too large, the solution of \(\varvec{Q}\) will get stuck in a bad local minimum or diverge. To this end, a proper \(\eta \) is adaptively determined by enlarging or reducing it according to the change of \(\mathcal L(\varvec{Q})\) between adjacent iterations, which accelerates convergence.
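The sketch below (plain NumPy) illustrates this proximal linearized update with one simple step-size adaptation rule: accept the new iterate and enlarge the step when the loss decreases, otherwise shrink the step. The concrete adaptation schedule and the default values of nu, eta, and iters are assumptions, not the authors' exact settings.

```python
import numpy as np

def update_Q(B, D, F, Q0, nu=1e-3, eta=1.0, iters=20):
    """B: (K, N) codes, D: (K, K) diagonal weights, F: (c, N) indicators,
    Q0: (K, c) initial binary centroids. Returns the updated Q."""
    K = B.shape[0]
    ones = np.ones((K, 1))

    def loss(Q):
        # L(Q) = -2 tr(B^T D Q F) + nu ||Q^T 1||_F^2  (Eq. (14), up to constants)
        return -2.0 * np.sum((D @ Q @ F) * B) + nu * np.linalg.norm(Q.T @ ones) ** 2

    Q, prev = Q0.copy(), loss(Q0)
    for _ in range(iters):
        grad = -2.0 * (D @ B @ F.T) + 2.0 * nu * (ones @ (ones.T @ Q))  # gradient of L(Q)
        Q_new = np.where(Q - grad / eta >= 0, 1, -1)                    # proximal sign step
        cur = loss(Q_new)
        if cur <= prev:                       # improvement: accept and allow a larger step
            if np.array_equal(Q_new, Q):
                break
            Q, prev, eta = Q_new, cur, eta * 0.9
        else:                                 # no improvement: keep Q and take a smaller step
            eta *= 2.0
    return Q
```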

(5) \(\varvec{F}\)-Step: Similarly, when fixing \(\varvec{Q}\), the problem w.r.t. \(\varvec{F}\) turns into

$$\begin{aligned} \min _{\varvec{f}_i} \sum _{i=1}^N \varvec{d}_{ii}\Vert \varvec{b}_i - \varvec{Q}\varvec{f}_i\Vert _{21},~s.t.~\varvec{f}_i \in \{0,1\}^{c\times 1}, \sum _j f_{ji} = 1. \end{aligned}$$
(15)

We can divide the above problem into N subproblems and independently optimize the cluster indicators in a column-wise fashion, i.e., one column of \(\varvec{F}\) (i.e., \(\varvec{f}_i\)) is computed at a time. Specifically, we solve each subproblem by exhaustive search, similar to the assignment step of the conventional k-means algorithm. For the i-th column \(\varvec{f}_i\), the optimal solution of its j-th entry is efficiently obtained by

$$\begin{aligned} f_{ji} = \left\{ \begin{aligned} 1&,&j = \arg \min _k H(\varvec{d}_{ii}\varvec{b}_i,\varvec{q}_{k}), \\ 0&,&otherwise, \end{aligned} \right. \end{aligned}$$
(16)

where \(\varvec{q}_{k}\) is the k-th column (centroid) of \(\varvec{Q}\), and \(H(\cdot ,\cdot )\) denotes the Hamming distance. Since computing the Hamming distance is remarkably faster than the Euclidean distance, the assignment vectors \(\varvec{f}_i\) can be obtained efficiently to constitute the matrix \(\varvec{F}\).
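Below is a minimal sketch of this assignment step (plain NumPy). Since \(\varvec{d}_{ii}\) rescales all distances of a fixed column i equally, the argmin in Eq. (16) reduces to a plain Hamming nearest-centroid search, computed here via the identity \(H(\varvec{b}, \varvec{q}) = (K - \varvec{b}^{\top}\varvec{q})/2\) for \(\pm 1\) codes.

```python
import numpy as np

def update_F(B, Q):
    """B: (K, N) codes in {-1,1}; Q: (K, c) centroids in {-1,1}.
    Returns the (c, N) one-hot indicator matrix F."""
    K, N = B.shape
    c = Q.shape[1]
    ham = (K - Q.T @ B) / 2.0                 # (c, N) Hamming distances to all centroids
    assign = np.argmin(ham, axis=0)           # nearest centroid per sample
    F = np.zeros((c, N), dtype=int)
    F[assign, np.arange(N)] = 1               # one-hot columns
    return F
```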

(6) \(\varvec{\alpha }\)-Step: Let \(\displaystyle h^v = \Vert \varvec{B}-\left( \varvec{P}^v\right) ^{\top }\psi (\varvec{X}^v)\Vert _F^2 + \lambda _1 \Vert \varvec{P}^v\Vert _F^2 -\lambda _2 g(\varvec{P}^v)\); then problem (6) w.r.t. \(\varvec{\alpha }\) can be rewritten as

$$\begin{aligned} \min _{\alpha ^v} \sum _{v=1}^{m} (\alpha ^v)^r h^v,~s.t.~\sum _v \alpha ^v=1,\alpha ^v>0. \end{aligned}$$
(17)

The Lagrange function of (17) is \(\min \mathcal L(\alpha ^v,\varvec{\zeta }) = \sum _{v=1}^{m} (\alpha ^v)^r h^v-\varvec{\zeta }(\sum _{v=1}^{m} \alpha ^v-1)\), where \(\varvec{\zeta }\) is the Lagrange multiplier. Taking the partial derivatives w.r.t. \(\alpha ^v\) and \(\varvec{\zeta }\), respectively, we can get

$$\begin{aligned} \left\{ \begin{aligned} \frac{\partial \mathcal L}{\partial \alpha ^v}&= r(\alpha ^v)^{r-1}h^v-\varvec{\zeta },\\ \frac{\partial \mathcal L}{\partial \varvec{\zeta }}&= \sum _{v=1}^{m} \alpha ^v-1. \end{aligned}\right. \end{aligned}$$
(18)

Following [47], by setting \(\nabla _{\alpha ^v,\varvec{\zeta }} \mathcal L\) = \(\varvec{0}\), the optimal solution of \(\alpha ^v\) is \(\frac{(h^v)^{\frac{1}{1-r}}}{\sum _v(h^v)^{\frac{1}{1-r}}}\).
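The sketch below (plain NumPy) implements this closed-form weight update; it assumes each \(h^v > 0\), since the fractional power \((h^v)^{1/(1-r)}\) is otherwise undefined.

```python
import numpy as np

def update_alpha(psi_X, P, B, lam1, lam2, r):
    """psi_X: list of (l, N) embedded views; P: list of (l, K) projections;
    B: (K, N) common codes. Returns the (m,) view-weight vector alpha."""
    h = []
    for Xv, Pv in zip(psi_X, P):
        N = Xv.shape[1]
        g = np.trace(Pv.T @ (Xv @ Xv.T) @ Pv) / N              # g(P^v) from Eq. (3)
        h.append(np.linalg.norm(B - Pv.T @ Xv) ** 2
                 + lam1 * np.linalg.norm(Pv) ** 2 - lam2 * g)
    h = np.asarray(h) ** (1.0 / (1.0 - r))                      # (h^v)^{1/(1-r)}
    return h / h.sum()                                          # normalize to sum to one
```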

To obtain a locally optimal solution of problem (6), we update the above six variables iteratively until convergence. To deal with the out-of-sample problem in image clustering, HSIC generates the binary code for a new query image \(\hat{\varvec{x}}\) from the v-th view (i.e., \(\hat{ \varvec{x}}^v\)) by \({\varvec{b}}^{v} = sgn\left( (\varvec{P}^v)^{\top }\psi (\hat{\varvec{x}}^v)\right) \), and then assigns it to the j-th cluster determined by \(j = \arg \min _k H({\varvec{b}}^{v},\varvec{q}_k)\) in the fast Hamming space. For multi-view clustering, the common binary code of \(\hat{\varvec{x}}\) is \(\varvec{b} =sgn \left( \sum _{v=1}^{m}(\alpha ^v)^r (\varvec{P}^v)^{\top }\psi (\hat{\varvec{x}}^v)\right) \), and its optimal cluster assignment is then determined in the same manner as in (16). The full learning procedure of HSIC is illustrated in Algorithm 1.
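A small sketch of this out-of-sample extension is shown below (plain NumPy), reusing the Hamming identity for \(\pm 1\) codes from the \(\varvec{F}\)-step.

```python
import numpy as np

def assign_new_sample(psi_x_views, P, alpha, r, Q):
    """psi_x_views: list of (l,) embedded views of one new image;
    P: list of (l, K) projections; Q: (K, c) binary centroids.
    Returns the index of the assigned cluster."""
    w = alpha ** r
    s = sum(wv * (Pv.T @ xv) for wv, Pv, xv in zip(w, P, psi_x_views))
    b = np.where(s >= 0, 1, -1)                  # common binary code of the new image
    K = Q.shape[0]
    ham = (K - Q.T @ b) / 2.0                    # Hamming distances to the c centroids
    return int(np.argmin(ham))
```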

2.2 Complexity and Memory Load Analysis

(1) The major computational burden of HSIC lies in the compressive binary representation learning and the robust discrete cluster structure learning. The computational complexities of calculating \(\varvec{P}_S\) and \(\varvec{P}_I^v\) are \(\mathcal {O}(K_SlN)\) and \(\mathcal {O}(m(K_IlN))\), respectively. Computing \(\varvec{B}\) consumes \(\mathcal {O}(KlN)\). Similar to [15], constructing the discrete cluster structures requires \(\mathcal {O}(N)\) bit-wise operations over \(\kappa \) iterations, where each distance computation requires only \(\mathcal {O}(1)\). The total computational complexity of HSIC is \(\mathcal {O}(t((K_S+mK_I+K)lN+\kappa N))\), where t and \(\kappa \) are empirically set to 10 in all the experiments. In general, the computational complexity of optimizing HSIC is linear in the number of samples, i.e., \(\mathcal {O}(N)\). (2) As for memory cost, HSIC has to store the mapping matrices \(\varvec{P}_s\) and \(\varvec{P}_I^v\), demanding \(\mathcal {O}(lK_S)\) and \(\mathcal {O}(lK_I)\) memory, respectively. Notably, the learned binary representation and discrete cluster centroids only need a bit-wise memory load of \(\mathcal {O}(K(N+c))\), which is much less than the \(\mathcal {O}(d(N+c))\) real-valued storage required by k-means.

3 Experimental Evaluation

In this section, we conduct multi-view image clustering experiments on four scalable image datasets to evaluate the effectiveness of HSIC with four frequently-used performance measures. All experiments are implemented in Matlab 2013a on a standard Windows PC with an Intel 3.4 GHz CPU.

3.1 Experimental Settings

Datasets and Features: We perform experiments on four image datasets, including ILSVRC2012 1K [11], Cifar-10 [18], YouTube Faces (YTBF) [46] and NUS-WIDE [9]. Specifically, we randomly select 10 classes from ILSVRC2012 1K with 1,300 images per class, denoted as ImageNet-10, for the middle-scale multi-view clustering study. Cifar-10 contains 60,000 tiny color images in 10 classes, with 6,000 images per class. A subset of YTBF contains 182,881 face images from 89 different people (more than 1,200 each). Similar to [38], we collect a subset of NUS-WIDE covering the 21 most frequent concepts, resulting in 195,834 images with at least 3,091 images per category. Because some images in NUS-WIDE are labeled with multiple concepts, we select only the most representative label as the true category for simplicity. Multiple features are extracted from all datasets. Specifically, for ImageNet-10, Cifar-10 and YTBF, we use three different types of features, i.e., 1450-d LBP, 1024-d GIST, and 1152-d HOG. For NUS-WIDE, five publicly available features are employed, i.e., 64-d color histogram (CH), 225-d color moments (CM), 144-d color correlation (CORR), 73-d edge distribution (EDH) and 128-d wavelet texture (WT).

Metrics and Parameters: We adopt four widely-used evaluation measures [28] for clustering, including clustering accuracy (ACC), normalized mutual information (NMI), purity, and F-score. In addition, both computational time and memory footprint are compared to show the efficiency of HSIC. To fairly compare different methods, we run the provided codes with default or fine-tuned parameter settings according to the original papers. For binary clustering methods, a 128-bit code length is used on all datasets. For the hyper-parameters \(\lambda _1\), \(\frac{\lambda _2}{N}\), and \(\lambda _3\) of HSIC, we first employ a grid search on ImageNet-10 to find the best values (i.e., \(10^{-3}\), \(10^{-3}\), and \(10^{-5}\), respectively), which are then directly adopted on the other datasets for simplicity. We empirically set r and \(\delta = \frac{K_S}{K}\) (i.e., the ratio of shared binary codes) to 5 and 0.2, respectively, in all experiments. The multi-view clustering results are denoted as ‘MulView’. For each method, we report the average clustering results over 10 random initializations.

We conduct the following experiments from three perspectives. Firstly, we verify various characteristics of HSIC on the middle-scale dataset, i.e., ImageNet-10. Here we compare HSIC with both SVIC and MVIC methods (including real-valued and binary ones). Secondly, three large-scale datasets are exploited to evaluate HSIC on the challenging large-scale MVIC problem. Remark: Based on the results on ImageNet-10 (see Table 2), the real-valued MVIC methods only obtain comparable results to k-means, but they are very time-consuming. Moreover, when applying those MVIC methods (e.g., AMGL and MLAN) to larger datasets, we encounter the ‘out-of-memory’ error. Therefore, the real-valued MVIC methods are not compared on the three large-scale datasets. Thirdly, some empirical analyses of our HSIC are also provided.

3.2 Experiments on the Middle-Scale ImageNet-10

We compare HSIC with several state-of-the-art clustering methods, including SVIC methods (i.e., k-means [16], k-Medoids [33], Approximate kernel k-means [8], Nyström [6], NMF [20], LSC-K [7]), MVIC methods (i.e., AMGL [31], MVKM [4], MLAN [30], MultiNMF [22], OMVC [37], MVSC [21]) and two existing binary clustering methods (i.e., ITQ+bk-means [15] and CKM [41]). Additionally, two variants of HSIC are also compared to show its efficacy, i.e., HSIC with F-norm regularized binary clustering (HSIC-F), and HSIC with two separate steps of binary code learning and discrete clustering (HSIC-TS). Similar to [21, 22], for all the SVIC methods, we simply concatenate the feature vectors of all views for ‘MulView’ clustering.

Table 1. Performance comparisons on ImageNet-10. Bold numbers indicate the best single-view and multi-view clustering results.
Table 2. Time costs (in seconds) of different methods on ImageNet-10.

Table 1 reports the performance of all clustering methods. From Table 1, we observe that, in most cases, HSIC achieves comparable SVIC results and superior MVIC results in comparison with all the real-valued and binary clustering methods. This indicates the effectiveness of HSIC in common representation learning and robust cluster structure learning, especially for MVIC. Furthermore, HSIC is clearly superior to HSIC-F and HSIC-TS, which demonstrates the robustness and effectiveness of the joint learning framework.

The computational costs are reported in Table 2. From its last three columns, we can see that the binary clustering methods reduce the computational time compared with real-valued ones such as k-means and LSC-K, thanks to the highly efficient distance calculation in the Hamming space. In particular, our HSIC is much faster than the compared real-valued and binary clustering methods, which also demonstrates the efficiency of the developed optimization algorithm. Specifically, for MVIC, HSIC achieves a clear speed-up of 40.20 times over k-means. As for memory footprint, k-means and HSIC require 361 MB and 2.73 MB, respectively, i.e., HSIC reduces memory usage by a factor of about 132.

Table 3. Performance comparisons on the three large-scale datasets. Bold numbers indicate the best single-view and multi-view results.
Fig. 2. t-SNE visualization of 5 randomly selected classes from ImageNet-10. The two rows show the real-valued features and the 128-bit HSIC-based binary codes, respectively.

Why does HSIC Outperform the Real-Valued Methods? Table 1 clearly shows that HSIC achieves competitive or superior clustering performance compared to the real-valued clustering methods. The favorable performance mainly comes from: (1) HSIC greatly benefits from the proposed effective discrete optimization algorithm, such that the learned binary representations can eliminate some redundant and noisy information in the original real-valued features. As can be seen in Fig. 2, the similarity structures within clusters are enhanced in the coding space; meanwhile, some disturbances in the original features are excluded, refining the learned representation. (2) For image clustering, binary features are more robust to local changes, since small variations caused by varying environments are absorbed by the quantized binary codes. (3) HSIC is a unified interactive learning framework for the optimal binary codes and clustering structures, which is shown to be better than the disjoint learning approaches (e.g., LSC-K, NMF, MVSC, AMGL and MLAN).

Table 4. Time costs (in seconds) on the three large-scale multi-view datasets.
Table 5. Memory footprint of ‘MulView’ k-means and HSIC on the three large-scale datasets. ‘Reduction’ denotes the factor of memory reduction relative to k-means.

3.3 Experiments on Large-Scale Datasets

To show the strong scalability of HSIC on the large-scale MVIC problem, we compare HSIC with several state-of-the-art scalable clustering methods on three large-scale multi-view datasets. The clustering performance is summarized in Table 3. From these results, we have the following observations: (1) Generally, MVIC performs better than SVIC, which implies the necessity of incorporating the complementary traits of multiple features for image clustering. In particular, HSIC achieves competitive or better SVIC results and consistently the best MVIC performance. This is mainly due to the adaptive weight learning strategy and the exploitation of sharable and individual information from heterogeneous features. (2) From the last three columns of Table 3, we observe that HSIC and its variants tend to be better than the real-valued methods, which shows that the binary codes learned by HSIC are competitive with the real-valued features. (3) Compared with HSIC-TS and HSIC-F, HSIC achieves superior performance in most cases, which further reflects the advantages of the unified learning strategy and the robust binary cluster structure construction.

The comparisons of running time and memory footprint are reported in Tables 4 and 5, respectively. From Table 4, we observe that HSIC is the fastest method in most cases. Table 5 shows that HSIC significantly reduces the memory load for large-scale MVIC compared to k-means. The memory cost of HSIC is similar to that of other binary clustering methods but clearly less than that of the real-valued methods. Moreover, as shown in Tables 4 and 5, for MVIC on NUS-WIDE with 5 views, HSIC can cluster nearly one million (\(195,834\times 5\)) features in 81 seconds using only 5.52 MB of memory, while k-means needs about 29 minutes with 961 MB of memory. Thus, HSIC can effectively address large-scale MVIC with much less computational time and memory footprint.

3.4 Empirical Analysis

Component Analysis: We evaluate the effectiveness of different components of HSIC in Fig. 3. Specifically, in addition to ‘HSIC-TS’ and ‘HSIC-F’, we construct ‘HSIC-U’ by removing the balanced and independence constraints on the binary codes and clustering centroids. HSIC-‘view’ and ITQ-‘view’ respectively refer to the SVIC results obtained using HSIC and ITQ+bk-means on the ‘view’-specific features. From Fig. 3, we observe that each component contributes essentially to the improved performance, and removing any component degrades it.

Fig. 3. Performance of different clustering methods vs. code length on Cifar-10.

Fig. 4. Performance of different clustering methods vs. number of clusters on Cifar-10.

Effect of Code Length: Figure 3 also shows how the performance changes with increasing code length. In general, longer codes may provide more information and thus higher clustering performance. Specifically, both ITQ- and HSIC-based methods tend to achieve improved performance with an increasing number of bits. Moreover, HSIC-based methods are superior to the baseline k-means when the code length exceeds 32 bits. HSIC achieves the best clustering results across different code lengths, because it can effectively coordinate the importance of different views and mine the semantic correlations between them.

Effect of Number of Clusters: All the above experiments are evaluated with the ground-truth numbers of clusters. However, if the number of clusters is unknown, how will the performance change with different cluster numbers? To this end, we perform experiments on Cifar-10 to evaluate the stability of different methods w.r.t. the number of clusters. Figure 4 illustrates the performance changes when varying the number of clusters from 5 to 40 with an interval of 5. Interestingly, the performance (i.e., ACC, NMI and F-score) of HSIC-based methods increases when the cluster number grows from 5 to 10, but then drops sharply with more than 10 clusters, suggesting that 10 is the optimal number of clusters. Notably, ‘purity’ cannot properly trade off clustering quality against the number of clusters [28]. Importantly, the clustering performance of HSIC is in most cases better than that of all the compared methods, and HSIC-based methods hold the top three results. This shows that HSIC is adaptive to different cluster numbers and can potentially be used to predict the ‘optimal’ number of clusters.

4 Conclusion

In this paper, we proposed a highly-economized multi-view clustering framework, dubbed HSIC, to jointly learn compressive binary representations and robust discrete cluster structures. Specifically, HSIC collaboratively integrated the heterogeneous features into common binary codes, where both the sharable and the individual information of multiple views was exploited. Meanwhile, a robust cluster structure learning model was developed to improve the clustering performance. Moreover, an effective alternating optimization algorithm was introduced to guarantee high-quality discrete solutions. Extensive experiments on large-scale multi-view datasets demonstrate the superiority of HSIC over the state-of-the-art methods in terms of clustering performance, with significantly reduced computational time and memory footprint.