1 Introduction

Similarity search (Wang et al. 2015; Gionis et al. 1999; Qin et al. 2015; Yu et al. 2015; Liu et al. 2015; Gao et al. 2015; Liu et al. 2015; Zhang et al. 2010; Wang et al. 2014; Bian and Tao 2010) is one of the most critical problems in information retrieval as well as in pattern recognition, data mining and machine learning. Generally speaking, effective similarity search approaches try to construct an index structure in the metric space. However, as the dimensionality of the data increases, how to implement similarity search efficiently and effectively has become a significant topic. To improve retrieval efficiency, hashing algorithms are deployed to learn a hash function that maps data from the Euclidean space to the Hamming space. Hashing algorithms with binary coding techniques mainly have two advantages: (1) binary hash codes save storage space; (2) computing the Hamming distance (an XOR operation) between the stored data and a query is efficient in the retrieval procedure of similarity search, and the time complexity of searching the hash table is near O(1).
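To make the XOR-based matching concrete, a minimal sketch (our illustration, not part of any evaluation code in this paper) of computing the Hamming distance between two codes packed into integers:

```python
def hamming_distance(code_a: int, code_b: int) -> int:
    """Hamming distance between two binary codes packed into integers."""
    # XOR leaves a 1 exactly at the bit positions where the codes differ,
    # so counting the set bits gives the Hamming distance.
    return bin(code_a ^ code_b).count("1")

# 8-bit example: 10110010 and 10011010 differ in two bit positions.
assert hamming_distance(0b10110010, 0b10011010) == 2
```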

Current hashing algorithms can be roughly divided into two groups: random projection based hashing and learning based hashing. Among the random projection based hashing techniques, the most well-known one that preserves similarity information is probably Locality-Sensitive Hashing (LSH) (Gionis et al. 1999). LSH simply employs random linear projections (followed by random thresholding) to map data points that are close in a Euclidean space to similar codes. It is theoretically guaranteed that, as the code length increases, the Hamming distance between two codes asymptotically approaches the Euclidean distance between their corresponding data points. Furthermore, kernelized locality-sensitive hashing (KLSH) (Kulis and Grauman 2009) has been proposed and successfully utilized for large-scale image retrieval and classification. However, in realistic applications, LSH-related methods usually require long codes to achieve good precision, which results in low recall, since the collision probability that two codes fall into the same hash bucket decreases exponentially as the code length increases.

However, random projection based hash functions are effective only when the binary hash code is long enough. To generate more effective and compact hash codes, a number of projection learning methods for hashing have been introduced. By mining the structure of the data and encoding it in an objective function, a projection learning based hashing algorithm obtains its hash function by solving the optimization problem associated with that objective. Spectral Hashing (SpH) (Weiss et al. 2009) is a representative unsupervised hashing method, which learns compact binary codes that preserve the similarity between documents by imposing balanced and uncorrelated constraints on the learned codes. Furthermore, principled linear projections such as PCA Hashing (PCAH) (Wang et al. 2012) have been suggested for better quantization than random projection hashing. Moreover, Semantic Hashing (SH), which is based on a stack of Restricted Boltzmann Machines (RBM) (Salakhutdinov and Hinton 2007), was proposed in Salakhutdinov and Hinton (2009). In particular, SH involves two steps: pre-training and fine-tuning. During these two steps, a deep generative model is greedily learned, in which the lowest layer represents the high-dimensional data vector and the highest layer represents the learned binary code for that data. Liu et al. (2011) proposed an Anchor Graph-based Hashing method (AGH), which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, the Anchor Graphs used in Liu et al. (2011) were defined with tractable low-rank adjacency matrices. In this way, AGH allows constant-time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. More recently, Spherical Hashing (SpherH) (Heo et al. 2012) was proposed to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. The authors also developed a new distance function for binary codes, the spherical Hamming distance, for the final retrieval tasks. Iterative Quantization (ITQ) (Gong et al. 2013) was developed for more compact and effective binary coding. In particular, ITQ uses a simple and efficient alternating minimization scheme to find an orthogonal rotation of zero-centered data that minimizes the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. Additionally, Boosted Similarity Sensitive Coding (BSSC) (Shakhnarovich 2005) was designed to learn a compact and weighted Hamming embedding for task-specific similarity search, using boosted binary regression stumps as hashing functions to map the input vectors into binary codes. A similar idea to BSSC is applied in Evolutionary Compact Embedding (ECE) (Liu and Shao 2015), which combines Genetic Programming with a boosting scheme to generate high-quality binary codes for large-scale data classification tasks. Besides, Self-taught Hashing (STH) (Zhang et al. 2010), in which a two-step scheme is effectively applied to learn hash functions, has also been successfully utilized for visual retrieval. More hashing techniques can be found in Wang et al. (2015), Cao et al. (2012), Song et al. (2014), Song et al. (2013), Liu et al. (2012), Wang et al. (2015), Lin et al. (2013).

Nevertheless, the above-mentioned hashing methods have their limitations. Although the random projection based hashing methods, such as LSH, KLSH and SKLSH (Raginsky and Lazebnik 2009), can produce relatively effective codes, such simple linear hash functions cannot reflect the underlying relationship between the data points. Meanwhile, since long codes are required for acceptable retrieval results via random projection based hashing, the storage space and the cost of computing Hamming distances become expensive. On the other hand, most learning based hashing algorithms, e.g., Shakhnarovich (2005), Weiss et al. (2009), Liu et al. (2011), only focus on the relationship between data points or sets rather than considering the combination of the intra-latent structure of the data and the inter-probability distribution between the high-dimensional Euclidean space and the low-dimensional Hamming space.

To overcome these limitations, in this paper we propose a novel NMF-based approach called Latent Structure Preserving Hashing (LSPH), which can effectively preserve the probabilistic distribution of the data and capture much of the locality structure of the high-dimensional data. In particular, nonnegative matrix factorization can automatically learn the intra-latent information and the part-based representations of the data, while the probabilistic distribution preserving term aims to maintain the similarity between the high-dimensional data and the low-dimensional codes. Moreover, combined with the binary code representation, the part-based latent information obtained by NMF-based hashing, i.e., LSPH, can be regarded as independent latent attributes of the samples. In other words, the binary codes determine whether the high-dimensional data hits the corresponding latent attributes or not. Given an image, this kind of data-driven attribute allows us to describe the image well and may also benefit zero-shot learning (Jayaraman and Grauman 2014; Lampert et al. 2014; Tao 2015; Yu et al. 2013) for unseen image classification/retrieval in future work.

Specifically, since NMF by itself cannot completely discover the latent structure of the original high-dimensional data, we provide a new objective function that preserves as much of the probabilistic distribution structure of the high-dimensional data as possible in the low-dimensional map. Meanwhile, we propose an optimization framework for this objective function and derive the updating rules. To make the optimization tractable, the training data are relaxed into a real-valued range and the real-valued representations are then converted into binary codes. Finally, we analyze the experimental results and compare them with several existing hashing algorithms. The outline of the proposed LSPH approach is depicted in Fig. 1.

Fig. 1

The outline of the proposed method. Part-based latent information is learned from NMF with the regularization of data distribution. We propose two different versions of our algorithm, i.e., single layer LSPH and multi-layer LSPH. Specifically, ML-LSPH generates deep data representations which can theoretically lead to better performance for retrieval tasks with more complex data

LSPH is a linear hashing technique with a single-layer generative network and data distribution preserving constraints. Although it is an efficient binary coding method for large-scale retrieval tasks, such a single-layer generative network may lead to several limitations in the following cases, as mentioned in [1]: (1) when it learns data which lie on or near a nonlinear manifold; (2) when it learns syntactic relationships of given data; and (3) when it learns hierarchically generated data. The single-layer LSPH is clearly not suited to such cases. For instance, LSPH with a single-layer network can handle data with small intra-class variations, such as face images. However, for more complex data with extremely different viewpoints, additional degrees of freedom are required. In large-scale image retrieval tasks, the sources of data can be highly varied and even samples belonging to the same category can differ significantly. Naturally, the single-layer LSPH is not well suited to similarity search on such heterogeneous databases.

Therefore, in this paper, we also propose an extension of LSPH called multi-layer LSPH (ML-LSPH), which combines the multi-layer generative network of [1] with distribution preserving constraints. ML-LSPH can deeply learn part-based latent information of the data and preserve the joint probabilistic distribution for deep data representations. By applying a sigmoid function at each layer, ML-LSPH becomes a nonlinear architecture. Similar to recent deep neural networks (Hinton et al. 2006; Masci et al. 2011; Krizhevsky et al. 2012), ML-LSPH generates deep data representations which can theoretically lead to better performance than single-layer LSPH for retrieval tasks with more complex data in realistic scenarios. However, ML-LSPH is computationally more expensive in both the training and test phases than single-layer LSPH. Thus, there exists a trade-off between ML-LSPH and LSPH in terms of performance and computational complexity, and the choice between the two versions depends on the requirements of the application. Besides, since ML-LSPH is a generalized framework of LSPH, it reduces to LSPH if the number of layers is set to 1. We evaluate LSPH and ML-LSPH on three large-scale datasets: SIFT 1M, GIST 1M and TinyImage, and the results show that our methods significantly outperform the state-of-the-art hashing techniques. It is worthwhile to highlight several contributions of the proposed methods:

  • LSPH can learn compact binary codes uncovering the latent semantics and simultaneously preserving the joint probability distribution of data.

  • We utilize multivariable logistic regression to generate the hashing function and handle the out-of-sample extension.

  • To tackle the data with more complex distribution, a multi-layer extension of LSPH (i.e., ML-LSPH) has been proposed for large-scale retrieval as well.

The rest of this paper is organized as follows. In Sect. 2, we give a brief review of NMF. The details of LSPH and ML-LSPH are described in Sects. 3 and 4, respectively. Section 5 analyzes the computational complexity, and Sect. 6 reports the experimental results. Finally, we conclude this paper and discuss future work in Sect. 7.

2 A Brief Review of NMF

In this section, we mainly review related algorithms, focusing on Nonnegative Matrix Factorization (NMF) and its variants. NMF was proposed to learn the nonnegative parts of objects. Given a nonnegative data matrix \(X = [\mathbf {x}_1,\cdots , \mathbf {x}_{N}]\in {\mathbb {R}}_{\ge 0}^{M \times N}\), where each column of X is a data sample, NMF aims to find two nonnegative matrices \(U\in {\mathbb {R}}_{\ge 0}^{M \times D}\) and \(V\in {\mathbb {R}}_{\ge 0}^{D\times N}\) with full rank whose product approximately represents the original matrix X, i.e., \(X\approx UV\). In practice, we always set \(D < \min (M,N)\). The target of NMF is to minimize the following objective function

$$\begin{aligned} {\mathcal {L}}_{NMF}=\Vert X-UV\Vert ^{2}, ~ \text {s.t.}~ U,V\ge \mathbf {0}, \end{aligned}$$
(1)

where \(\Vert \cdot \Vert \) is the Frobenius norm. To optimize the above objective function, an iterative updating procedure was developed in Lee and Seung (1999) as follows:

$$\begin{aligned} V_{ij} \leftarrow \frac{\left( U^TX\right) _{ij}}{\left( U^TUV\right) _{ij}} V_{ij},~ U_{ij} \leftarrow \frac{\left( XV^T\right) _{ij}}{\left( UVV^T\right) _{ij}} U_{ij}, \end{aligned}$$
(2)

and normalization

$$\begin{aligned} U_{ij} \leftarrow \frac{U_{ij}}{\sum _{i}U_{ij}}. \end{aligned}$$
(3)

It has been proved that the above updating procedure converges to a local minimum of \({\mathcal {L}}_{NMF}\). The matrix V obtained by NMF is usually regarded as the low-dimensional representation, while the matrix U denotes the basis matrix.
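For reference, a minimal numpy sketch of the multiplicative updates in Eqs. (2)–(3) (our own illustration rather than the authors' implementation; the random initialization and the small constant eps are assumptions added for numerical stability):

```python
import numpy as np

def nmf(X, D, n_iter=200, eps=1e-10, seed=0):
    """Plain NMF via the multiplicative updates of Lee and Seung.

    X : (M, N) nonnegative data matrix, one sample per column.
    D : number of latent factors, D < min(M, N).
    Returns the basis U (M, D) and the low-dimensional codes V (D, N).
    """
    rng = np.random.default_rng(seed)
    M, N = X.shape
    U = rng.random((M, D))
    V = rng.random((D, N))
    for _ in range(n_iter):
        # Eq. (2): element-wise multiplicative updates keep U and V nonnegative.
        V = V * (U.T @ X) / (U.T @ U @ V + eps)
        U = U * (X @ V.T) / (U @ V @ V.T + eps)
    # Eq. (3): normalize each basis column to sum to one.
    U = U / (U.sum(axis=0, keepdims=True) + eps)
    return U, V
```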

Furthermore, there also exist several variants of NMF. Local NMF (LNMF) (Li et al. 2001) imposes a spatially localized constraint on the bases. In Hoyer (2004), sparse NMF was proposed and, later, NMF constrained with a neighborhood preserving regularization (NPNMF) (Gu and Zhou 2009) was developed. Besides, researchers also proposed graph regularized NMF (GNMF) (Cai et al. 2011), which effectively preserves the locality structure of the data. Beyond these methods, Zhang et al. (2006) extended the original NMF with the kernel trick to kernelized NMF (KNMF), which can extract more useful features hidden in the original data through kernel-induced nonlinear mappings. Additionally, a hashing method based on multiple-kernel NMF was proposed in Liu et al. (2015), where an alternate optimization scheme is applied to determine the combination of different kernels.

In this paper, we present a Latent Structure Preserving NMF framework for hashing (i.e., LSPH), which can effectively preserve the intrinsic probability distribution of the data and simultaneously reduce the redundancy of the low-dimensional representations. Specifically, since the solution of standard NMF only focuses on minimizing the factorization error in Eq. (1), the obtained low-dimensional representation V lacks information about the data relationships. In fact, most previous NMF extensions rely on a locality-preserving regularization to guarantee that, if two high-dimensional data points are close, their low-dimensional representations obtained from NMF remain close. However, this kind of regularization may lead to a low-quality factorization, since it only focuses on locality information and ignores the overall data distribution. In a realistic scenario with noisy data, locality-preserving regularization may even degrade performance. Rather than locality-based graph regularization, we measure the joint probability of the data with the Kullback-Leibler divergence, which is defined over all potential neighbors and has been proved to effectively resist data noise (Maaten and Hinton 2008). This kind of measurement reveals the global structure, such as the presence of clusters at several scales. To make LSPH more capable of handling data with more complex distributions, the multi-layer LSPH (ML-LSPH) is also proposed, in which more discriminative, high-level representations can be learned from a multi-layer network with the distribution preserving regularization term. To the best of our knowledge, this is the first time that multi-layer NMF based hashing has been successfully applied to feature embedding for large-scale similarity search. A preliminary version of LSPH was presented in Cai et al. (2015). In this paper, we include more details and experimental results and extend LSPH to ML-LSPH for more complex data in realistic retrieval applications.

3 Latent Structure Preserving Hashing

In this section, we mainly elaborate the proposed Latent Structure Preserving Hashing algorithm.

3.1 Preserving Data Structure with NMF

NMF is an unsupervised learning algorithm which can learn a parts-based representation. Theoretically, it is expected that the low-dimensional data V given by NMF can retain the locality structure of the high-dimensional data X. However, in real-world applications, NMF cannot discover the intrinsic geometrical and discriminating structure of the data space. Therefore, to preserve as much of the significant structure of the high-dimensional data as possible, we propose to minimize the Kullback-Leibler divergence (Xie et al. 2011) between the joint probability distribution in the high-dimensional space and a heavy-tailed joint probability distribution in the low-dimensional space:

$$\begin{aligned} C= \lambda KL(P\Vert Q). \end{aligned}$$
(4)

In Eq. (4), P is the joint probability distribution in the high-dimensional space, with entries \(p_{ij}\), and Q is the joint probability distribution in the low-dimensional space, with entries \(q_{ij}\); \(\lambda \) controls the smoothness of the new representation. The probability \(p_{ij}\) measures the similarity between data points \(\mathbf {x}_{i}\) and \(\mathbf {x}_{j}\), where \(\mathbf {x}_{j}\) is picked in proportion to its probability density under a Gaussian centered at \(\mathbf {x}_{i}\). Since only pairwise similarities between distinct points need to be modeled, we set \(p_{ii}\) and \(q_{ii}\) to zero. Meanwhile, it holds that \(p_{ij} = p_{ji}\) and \(q_{ij} = q_{ji}\) for \(\forall i,j\). The pairwise similarities in the high-dimensional space \(p_{ij}\) are defined as:

$$\begin{aligned} p_{ij}= \frac{\exp \left( -\Vert \mathbf {x}_i -\mathbf {x}_j\Vert ^2/ 2\sigma _i^2\right) }{\sum _{k \ne l} \exp \left( -\Vert \mathbf {x}_k - \mathbf {x}_l\Vert ^2/2\sigma _k^2\right) }, \end{aligned}$$
(5)

where \(\sigma _{i}\) is the bandwidth of the Gaussian distribution centered on data point \(\mathbf {x}_{i}\). Each data point \(\mathbf {x}_{i}\) thus makes a significant contribution to the cost function. In the low-dimensional map, using a heavy-tailed probability distribution, the joint probabilities \(q_{ij}\) can be defined as:

$$\begin{aligned} q_{ij}= \frac{\left( 1+\Vert \mathbf {v}_i-\mathbf {v}_j\Vert ^2\right) ^{-1}}{\sum _{k \ne l} \left( 1+ \Vert \mathbf {v}_k - \mathbf {v}_l \Vert ^2\right) ^{-1}}. \end{aligned}$$
(6)

This definition corresponds to an infinite mixture of Gaussians, for which evaluating the density of a point is much faster than for a single Gaussian, since it involves no exponential. This representation also makes the map invariant to changes in scale for embedded points that are far apart. Thus, a cost function based on the Kullback-Leibler divergence can effectively measure how faithfully \(q_{ij}\) models \(p_{ij}\); it is given by

$$\begin{aligned} \begin{aligned} G= KL(P\Vert Q) =\sum _{i}\sum _{j}p_{ij}\log {p_{ij}}-p_{ij}\log {q_{ij}}. \end{aligned} \end{aligned}$$
(7)

For simplicity and to make the derivation clearer, we define two auxiliary variables \(d_{ij}\) and Z as follows:

$$\begin{aligned} d_{ij}=\Vert \mathbf {v}_i-\mathbf {v}_j\Vert ~\text {and}~ Z={\sum _{k \ne l} \left( 1+ d_{kl}^2\right) ^{-1}}. \end{aligned}$$
(8)

Therefore, the gradient of function G with respect to \(\mathbf {v}_i\) can be given by

$$\begin{aligned} \frac{\partial G}{\partial \mathbf {v}_i} = 2\sum _{j=1}^N \frac{\partial G}{\partial {d}_{ij}} \left( \mathbf {v}_i-\mathbf {v}_j\right) . \end{aligned}$$
(9)

Then \(\frac{\partial G}{\partial {d}_{ij}}\) can be calculated by Kullback-Leibler divergence in Eq. (7):

$$\begin{aligned} \frac{\partial G}{\partial {d}_{ij}}=-\sum _{k \ne l}p_{kl}\left( \frac{1}{q_{kl}Z}\frac{\partial \left( \left( 1+ d_{kl}^2\right) ^{-1}\right) }{\partial {d}_{ij}}-\frac{1}{Z}\frac{\partial Z}{\partial {d}_{ij}} \right) .\nonumber \\ \end{aligned}$$
(10)

Since \(\frac{\partial ((1+ d_{kl}^2)^{-1})}{\partial {d}_{ij}}\) is nonzero if and only if \(k=i\) and \(l=j\), and \(\sum _{k \ne l}p_{kl}=1\), the gradient function can be expressed as

$$\begin{aligned} \frac{\partial G}{\partial {d}_{ij}}=2 \left( p_{ij}-q_{ij}\right) \left( 1+d_{ij}^2\right) ^{-1}. \end{aligned}$$
(11)

Eq. (11) can be substituted into Eq. (9). Therefore, the gradient of the Kullback-Leibler divergence between P and Q is

$$\begin{aligned} \frac{\partial G}{\partial \mathbf {v}_i} = 4 \sum _{j=1}^N (p_{ij}-q_{ij}) (\mathbf {v}_i - \mathbf {v}_j) \left( 1 + \Vert \mathbf {v}_i-\mathbf {v}_j\Vert ^2\right) ^{-1}.\nonumber \\ \end{aligned}$$
(12)

Therefore, by combining the data structure preserving term in Eq. (4) with the NMF technique, we obtain the following new objective function:

$$\begin{aligned} O_f=\Vert X-UV\Vert ^2+\lambda KL(P\Vert Q), \end{aligned}$$
(13)

where \(V\in \{0,1\}^{D\times N}\), \(X,U,V\geqslant 0\), \(U\in {\mathbb {R}}^{M\times D}\), \(X\in {\mathbb {R}}^{M\times N}\), and \(\lambda \) controls the smoothness of the new representation.

In most circumstances, the low-dimensional representation obtained from NMF alone is not effective and meaningful enough for realistic applications. Thus, we introduce \(\lambda KL(P\Vert Q)\) to preserve the structure of the original data, which leads to better results in information retrieval.
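To make Eqs. (5), (6) and (12) concrete, the following numpy sketch (our illustration, not the authors' code) evaluates P, Q and the gradient of the KL term for a modest N; the per-point bandwidths sigma are assumed to be given, and the \(O(N^2)\) memory cost is addressed later by the batch-based scheme of Sect. 4.

```python
import numpy as np

def joint_probabilities(X, sigma):
    """P of Eq. (5); X is (M, N) with one sample per column, sigma is (N,)."""
    sq = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)   # squared pairwise distances
    P = np.exp(-sq / (2.0 * sigma[:, None] ** 2))
    np.fill_diagonal(P, 0.0)                                     # p_ii = 0
    return P / P.sum()

def low_dim_probabilities(V):
    """Q of Eq. (6): heavy-tailed similarities of the columns of V."""
    sq = np.sum((V[:, :, None] - V[:, None, :]) ** 2, axis=0)
    Q = 1.0 / (1.0 + sq)
    np.fill_diagonal(Q, 0.0)                                     # q_ii = 0
    return Q / Q.sum()

def kl_gradient(V, P, Q):
    """Gradient of KL(P||Q) w.r.t. each column v_i of V, Eq. (12)."""
    sq = np.sum((V[:, :, None] - V[:, None, :]) ** 2, axis=0)
    W = (P - Q) / (1.0 + sq)                 # (p_ij - q_ij)(1 + d_ij^2)^{-1}
    diff = V[:, :, None] - V[:, None, :]     # (D, N, N): column i minus column j
    return 4.0 * np.einsum('ij,dij->di', W, diff)
```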

3.2 Relaxation and Optimization

Since the discreteness condition \(V\in \{0,1\}^{D\times N}\) in Eq. (13) cannot be handled directly in the optimization procedure, motivated by Weiss et al. (2009), we first relax \(V\in \{0,1\}^{D\times N}\) to the real-valued range \(V\in {\mathbb {R}}^{D\times N}\). Then the Lagrangian of our problem is:

$$\begin{aligned} {\mathcal {L}}= & {} \Vert X-UV\Vert ^2+\lambda KL(P\Vert Q) + tr \left( {\varPhi }U^T\right) \nonumber \\&+\, tr ({\varPsi }V^T), \end{aligned}$$
(14)

where \({\varPhi }\) and \({\varPsi }\) are two Lagrange multiplier matrices. Since the gradient of \(C = \lambda G\) is:

$$\begin{aligned} \frac{\partial C}{\partial \mathbf {v}_i} = 4\lambda \sum _{j=1}^N \left( p_{ij}-q_{ij}\right) \left( \mathbf {v}_i - \mathbf {v}_j\right) \left( 1+\Vert \mathbf {v}_i-\mathbf {v}_j\Vert ^2\right) ^{-1},\nonumber \\ \end{aligned}$$
(15)

we set the gradients of \({\mathcal {L}}\) to zero to minimize \(O_f\):

$$\begin{aligned} \frac{\partial {\mathcal {L}}}{\partial V}= & {} 2\left( -U^TX + U^TUV\right) + \frac{\partial C}{\partial \mathbf {v}_i} + {\varPsi }= \mathbf {0}, \end{aligned}$$
(16)
$$\begin{aligned} \frac{\partial {\mathcal {L}}}{\partial U}= & {} 2\left( -XV^T + UVV^T\right) + {\varPhi }= \mathbf {0}, \end{aligned}$$
(17)

In addition, we also have the KKT conditions \({\varPhi }_{ij} U_{ij} = 0\) and \({\varPsi }_{ij} V_{ij} = 0, \forall i,j\). Multiplying both sides of Eqs. (16) and (17) element-wise by \(V_{ij}\) and \(U_{ij}\), respectively, we obtain

$$\begin{aligned} \left( 2 \left( -U^TX + U^TUV\right) +\frac{\partial C}{\partial \mathbf {v}_i}\right) _{ij} V_{ij}= & {} 0, \end{aligned}$$
(18)
$$\begin{aligned} 2\left( -XV^T + UVV^T\right) _{ij} U_{ij}= & {} 0. \end{aligned}$$
(19)

Note that

$$\begin{aligned} \begin{aligned} \left( \frac{\partial C}{\partial \mathbf {v}_j}\right) _i&= \left( 4\lambda \sum _{k=1}^N \frac{p_{jk} \mathbf {v}_j - q_{jk}\mathbf {v}_j - p_{jk} \mathbf {v}_k + q_{jk}\mathbf {v}_k}{1+\Vert \mathbf {v}_j-\mathbf {v}_k\Vert ^2}\right) _i \\&= 4\lambda \sum _{k=1}^N \frac{p_{jk} V_{ij} - q_{jk} V_{ij} - p_{jk} V_{ik} + q_{jk} V_{ik}}{1+\Vert \mathbf {v}_j-\mathbf {v}_k\Vert ^2}. \end{aligned} \end{aligned}$$

Therefore, we have the following update rules for any \(i,j\):

$$\begin{aligned} V_{ij}\leftarrow & {} \frac{\left( U^TX\right) _{ij} + 2 \lambda \sum \limits _{k=1}^N \frac{p_{jk} V_{ik} + q_{jk} V_{ij}}{1+\Vert \mathbf {v}_j-\mathbf {v}_k\Vert ^2}}{\left( U^TUV\right) _{ij} + 2 \lambda \sum \limits _{k=1}^N \frac{p_{jk} V_{ij} + q_{jk} V_{ik}}{1+\Vert \mathbf {v}_j-\mathbf {v}_k\Vert ^2}} V_{ij}, \end{aligned}$$
(20)
$$\begin{aligned} U_{ij}\leftarrow & {} \frac{\left( X V^T\right) _{ij}}{\left( UVV^T\right) _{ij}} U_{ij}. \end{aligned}$$
(21)

The update rules guarantee that all elements of U and V remain nonnegative. It has been proved in Lee and Seung (2000) that the objective function is monotonically non-increasing after each update of U or V. The proof of convergence for U and V is similar to those in Zheng et al. (2011) and Cai et al. (2011).
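To illustrate how Eqs. (20)–(21) could be implemented, a vectorized numpy sketch is given below (a sketch under our own conventions: P is assumed to be precomputed as in Eq. (5), U and V are float arrays, and eps is an added stability constant).

```python
import numpy as np

def lsph_updates(X, U, V, P, lam, n_iter=100, eps=1e-10):
    """Multiplicative updates of Eqs. (20)-(21) for the LSPH objective (13).

    X : (M, N) nonnegative data, P : (N, N) similarities of Eq. (5),
    U : (M, D), V : (D, N) nonnegative factors, lam : weight of the KL term.
    """
    for _ in range(n_iter):
        # Heavy-tailed similarities of the current columns of V (Eq. 6).
        sq = np.sum((V[:, :, None] - V[:, None, :]) ** 2, axis=0)
        W = 1.0 / (1.0 + sq)
        Q = W.copy()
        np.fill_diagonal(Q, 0.0)
        Q = Q / Q.sum()
        PW, QW = P * W, Q * W
        # Eq. (20): structure-preserving update of V.
        num = U.T @ X + 2 * lam * (V @ PW.T + V * QW.sum(axis=1)[None, :])
        den = U.T @ U @ V + 2 * lam * (V * PW.sum(axis=1)[None, :] + V @ QW.T)
        V = V * num / (den + eps)
        # Eq. (21): plain NMF update of U.
        U = U * (X @ V.T) / (U @ V @ V.T + eps)
    return U, V
```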

Once the above algorithm has converged, we can obtain the real-valued low-dimensional representation through a linear projection matrix. Since our algorithm is based on general NMF rather than Projective NMF (PNMF) (Yuan and Oja 2005; Guan et al. 2013), a direct projection for data embedding does not exist. Thus, in this paper, inspired by Cai et al. (2007), we use linear regression to compute the projection matrix. In particular, we make the projection orthogonal by solving the Orthogonal Procrustes problem (Schönemann 1966):

$$\begin{aligned} \min _{{\mathcal {P}}} \Vert {\mathcal {P}} X - V\Vert , ~\text {s.t.}~ {{\mathcal {P}} {\mathcal {P}}^T = I} \end{aligned}$$
(22)

where \({\mathcal {P}} \in {\mathbb {R}}^{D \times M}\) is the row-orthonormal projection. The optimal solution can be obtained by the following procedure: 1. use the singular value decomposition to factorize the matrix \(X V^T = A {\varSigma }B^T\); 2. calculate \({\mathcal {P}} = B{\varOmega }A^T\), where \({\varOmega }=[I,\mathbf{0 }]\in {\mathbb {R}}^{D\times M}\) is a connection matrix and \({\mathbf{0 }}\) is an all-zero matrix. Given a datum \(\mathbf {x} \in {\mathbb {R}}^{M \times 1}\), its low-dimensional representation is \(\mathbf {v} = {\mathcal {P}} \mathbf {x}\). According to Zhang et al. (2015), there are three advantages to using an orthogonal projection: first, it preserves the Euclidean distance between two points; second, it distributes the variance more evenly across the dimensions; third, it learns maximally uncorrelated dimensions, which leads to more compact representations.
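A short sketch of this two-step solution (our illustration; with the thin SVD \(X V^T = A_{thin} {\varSigma } B^T\), the product \(B{\varOmega }A^T\) reduces to \(B A_{thin}^T\)):

```python
import numpy as np

def procrustes_projection(X, V):
    """Row-orthonormal projection P minimizing ||P X - V|| (Eq. 22).

    X : (M, N) training data, V : (D, N) learned low-dimensional codes.
    Returns P : (D, M) with P P^T = I.
    """
    # Thin SVD of X V^T (an M x D matrix); P = B Omega A^T = B A_thin^T.
    A_thin, _, Bt = np.linalg.svd(X @ V.T, full_matrices=False)
    return Bt.T @ A_thin.T

# Embedding a new sample x of shape (M,): v = procrustes_projection(X, V) @ x
```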

3.3 Hash Function Generation

The low-dimensional representations \(V\in {\mathbb {R}}^{D\times N}\) and the bases \(U\in {\mathbb {R}}^{M\times D}\), where \(D \ll M\), can be obtained from Eqs. (20) and (21), respectively. We then need to convert the low-dimensional real-valued representations \(V=[\mathbf{v }_{1},\cdots ,\mathbf{v }_{N}]\) into binary codes via thresholding: if the d-th element of \(\mathbf{v }_{n}\) is larger than a specified threshold, it is represented as 1; otherwise it is 0, where \(d=1,\cdots ,D\) and \(n=1,\cdots ,N\).

In addition, a well-designed semantic hashing scheme should also be entropy maximizing to ensure its efficiency (Baluja and Covell 2008). From information theory, a source alphabet reaches maximal entropy when it has a uniform probability distribution. Specifically, if the entropy of the codes over the corpus is small, the documents are mapped to a small number of codes (hash bins). In this paper, the threshold for the elements of \(\mathbf {v}_n\) is set to the median value of \(\mathbf {v}_n\), which satisfies entropy maximization: half of the bits are 1 and the other half are 0. In this way, the real-valued code can be converted into a binary code (Yu et al. 2014).

However, the above procedure only yields the binary codes of the data in the training set; for a new sample, it does not directly provide a hash function. To solve this “out-of-sample” problem, inspired by the “self-taught” binary coding scheme (Zhang et al. 2010), we use logistic regression (Hosmer and Lemeshow 2004), which can be treated as a probabilistic statistical classification model, to compute the hash codes of unseen test data. Specifically, we learn a square projection matrix via logistic regression, which can be regarded as a rotation of V. This kind of transformation makes the codes more balanced (Gong et al. 2013; Liu et al. 2012) and leads to better performance than directly binarizing V with the median value computed from the training data. To make this more convincing, we also report the performance difference in a later section. Before deriving the logistic regression cost function, we denote the binary codes as \({\hat{V}}=[\hat{\mathbf{v }}_{1},\cdots ,\hat{\mathbf{v }}_{N}]\), where \(\hat{\mathbf{v }}_{n}\in \{0,1\}^{D}\) and \(n=1,\cdots ,N\). The training set can therefore be regarded as \(\{(\mathbf{v }_{1}, \hat{\mathbf{v }}_{1}), (\mathbf{v }_{2}, \hat{\mathbf{v }}_{2}), \cdots , (\mathbf{v }_{N}, \hat{\mathbf{v }}_{N})\}\). The vector-valued regression function based on the corresponding regression matrix \({\varTheta }\in {\mathbb {R}}^{D\times D}\) can be represented as

$$\begin{aligned} h_{{\varTheta }}\left( \mathbf{v }_{n}\right) = \left( \frac{1}{1+e^{- \left( {\varTheta }^{T}\mathbf{v }_{n}\right) _i}}\right) ^T_{i = 1, \cdots , D}. \end{aligned}$$
(23)

Therefore, with the maximum log-likelihood criterion for the Bernoulli-distributed data, our cost function for the corresponding regression matrix can be defined as:

$$\begin{aligned} \begin{aligned} J ({\varTheta }) =&-\frac{1}{N} \Big ( \sum _{n=1}^N \Big (\hat{\mathbf{v }}_{n}^T \mathbf{log }(h_{{\varTheta }}(\mathbf{v }_{n}))\\&+ (\mathbf{1 }-\hat{\mathbf{v }}_{n})^T \mathbf{log }(\mathbf{1 }-h_{{\varTheta }}(\mathbf{v }_{n}))\Big ) + \delta \Vert {\varTheta }\Vert ^{2} \Big ), \end{aligned} \end{aligned}$$
(24)

where \(\mathbf{log }(\cdot )\) is the element-wise logarithm function and \(\mathbf{1 }\) is a \(D \times 1\) all-ones vector. We use \(\delta \Vert {\varTheta }\Vert ^{2}\) as the regularization term in logistic regression to avoid overfitting.

To find the matrix \({\varTheta }\) that minimizes \(J ({\varTheta })\), we use gradient descent and repeatedly update each parameter with a learning rate \(\alpha \). The updating equation is as follows:

$$\begin{aligned} {{\varTheta }}^{(t+1)} = {{\varTheta }}^{(t)} - \frac{\alpha }{N}\sum _{i=1}^N \left( h_{{\varTheta }^{(t)}} \left( \mathbf{v }_{i}\right) -\hat{\mathbf{v }}_{i}\right) \mathbf{v }_{i}^T - \frac{\alpha \delta }{N}{{\varTheta }}^{(t)}.\nonumber \\ \end{aligned}$$
(25)

The update stops when the squared norm of the difference between \({{\varTheta }}^{(t+1)}\) and \({{\varTheta }}^{(t)}\), i.e., \(||{{\varTheta }}^{(t+1)}-{{\varTheta }}^{(t)}||^2\), falls below a small threshold, yielding the regression matrix \({\varTheta }\). For a new test sample \(X_{new} \in {\mathbb {R}}^{M \times 1}\), its low-dimensional representation is \(V_{new} = {\mathcal {P}} X_{new}\). Since each entry of \(h_{{\varTheta }}\) is a sigmoid function, the hash code of a new sample \(X_{new} \in {\mathbb {R}}^{M \times 1}\) can be represented as:

$$\begin{aligned} {\hat{V}}_{new}=\lfloor h_{\varTheta }({\mathcal {P}} X_{new})\rceil , \end{aligned}$$
(26)

where \(\lfloor \cdot \rceil \) denotes the nearest integer function applied to each entry of \(h_{{\varTheta }}\). Specifically, since the output of the logistic regression, i.e., \(h_{\varTheta }({\mathcal {P}} X_{new})\), indicates the probability of “1” for each entry, \(\lfloor \cdot \rceil \) is equivalent to binarizing each bit at a probability threshold of 0.5: if the probability of a bit from \(h_{\varTheta }({\mathcal {P}} X_{new})\) is larger than 0.5, it is set to 1, otherwise to 0. For example, through Eq. (26), the vector \(h_{\varTheta }({\mathcal {P}} X_{new}) =[0.17, 0.37, 0.42, 0.79, 0.03, 0.92, \cdots ]\) is converted to \([0, 0, 0, 1, 0, 1, \cdots ]\). We can thus obtain the Latent Structure Preserving Hashing codes for both training and test data. The procedure of LSPH is summarized in Algorithm 1.
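The regression stage of Eqs. (23)–(26) could be sketched as follows (our illustration; the stopping tolerance tol and the zero initialization of \({\varTheta }\) are assumptions):

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def train_hash_regressor(V, V_hat, alpha=0.05, delta=0.35, tol=1e-6, max_iter=1000):
    """Fit the regression matrix Theta of Eq. (23) with the updates of Eq. (25).

    V     : (D, N) real-valued training representations (columns are samples).
    V_hat : (D, N) their binary codes from median thresholding.
    """
    D, N = V.shape
    Theta = np.zeros((D, D))
    for _ in range(max_iter):
        H = sigmoid(Theta.T @ V)                              # h_Theta(v_n) per column
        grad = (H - V_hat) @ V.T / N + (delta / N) * Theta    # gradient of Eq. (24)
        Theta_new = Theta - alpha * grad                      # Eq. (25)
        converged = np.sum((Theta_new - Theta) ** 2) < tol
        Theta = Theta_new
        if converged:
            break
    return Theta

def hash_code(x_new, P, Theta):
    """Eq. (26): binary code of a new sample x_new of shape (M,)."""
    v = P @ x_new                                             # low-dimensional embedding
    return (sigmoid(Theta.T @ v) > 0.5).astype(int)           # round each probability
```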

Algorithm 1 Latent Structure Preserving Hashing (LSPH)

4 Multi-Layer LSPH Extension

To better tackle retrieval tasks with more complex data distributions, in this section we introduce the multi-layer LSPH (ML-LSPH). ML-LSPH aims to generate more informative high-level representations than single-layer LSPH for data with complex distributions. Once the ML-LSPH representation is computed, the final hashing functions are obtained through multivariable logistic regression, the same as for LSPH above.

Given a data matrix \(X \in {\mathbb {R}}^{M \times N}\), inspired by recent deep learning algorithms and multi-layer NMF ([1]; Trigeorgis et al. 2014), we can extract latent data attributes by incorporating our LSPH algorithm into a multi-layer structure, as illustrated in Fig. 2. Similar to the learning of the representation matrix V above, a matrix sequence \(V_1, \cdots , V_n\) can be obtained by solving the following optimization problems (a layer-wise sketch is given after the definitions below):

$$\begin{aligned} \begin{aligned}&\min \Vert X - U_1 V_1\Vert ^2 + \lambda KL \left( P||Q^{(1)}\right) \\&\min \Vert V_1 - U_2 V_2\Vert ^2 + \lambda KL \left( P ||Q^{(2)}\right) \\&\qquad \qquad \vdots \\&\min \Vert V_{n-1} - U_n V_n\Vert ^2 + \lambda KL\left( P ||Q^{(n)}\right) , \end{aligned} \end{aligned}$$

where \(U_i \in {\mathbb {R}}^{D_{i-1}\times D_i}, V_i \in {\mathbb {R}}^{D_i \times N}\), \(i=1,\cdots ,n\), \(D_0 = M\), P is the distribution of X and \(Q^{(i)}\) is the distribution of \(V_i\).
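One way to read the cascade above is as a layer-wise pre-training pass, sketched below (our illustration; it reuses the nmf() helper from the sketch in Sect. 2 and omits the per-layer KL terms of the full ML-LSPH objective for brevity). The floated Eqs. (27)–(30) displayed next are the derivative expressions and multiplicative update rules referred to later in this section.

```python
def pretrain_layers(X, layer_dims, n_iter=200):
    """Layer-wise pre-training: factorize X, then each V_{i-1} in turn.

    layer_dims = [D_1, ..., D_n]; returns [U_1, ..., U_n] and [V_1, ..., V_n].
    Reuses the nmf() helper defined in the earlier sketch.
    """
    Us, Vs = [], []
    V_prev = X
    for D_i in layer_dims:
        U_i, V_i = nmf(V_prev, D_i, n_iter=n_iter)   # V_{i-1} ~ U_i V_i
        Us.append(U_i)
        Vs.append(V_i)
        V_prev = V_i
    return Us, Vs
```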

$$\begin{aligned}&\frac{\partial G_j}{\partial U_i} = U_{i-1}^T \cdots U_{j+1}^T \nonumber \\&\quad \left( \frac{\partial KL\left( P||Q^{(j)}\right) }{\partial V_j} \odot g' \left( U_{j+1} V_{j+1}\right) \odot \cdots \odot g'\left( U_i V_i\right) \right) V_i^T \end{aligned}$$
(27)
$$\begin{aligned}&\frac{\partial G_j}{\partial V_i} = U_i^T U_{i-1}^T \cdots U_{j+1}^T \nonumber \\&\quad \left( \frac{\partial KL\left( P||Q^{(j)}\right) }{\partial V_j} \odot g'\left( U_{j+1} V_{j+1}\right) \odot \cdots \odot g'(U_i V_i)\right) \end{aligned}$$
(28)
$$\begin{aligned}&(U_i)_{\mu \nu } \leftarrow (U_i)_{\mu \nu } \nonumber \\&\quad \left( \frac{\left( R_i V_i^T\right) _{\mu \nu } + \lambda \left( \sum _{j=1}^{i} \Big (M_{i-1}^{(j)} \odot g'(U_i V_i) \Big ) V_i^T\right) _{\mu \nu }}{\left( N_i V_i^T\right) _{\mu \nu } + \lambda \left( \sum _{j=1}^{i} \Big (S_{i-1}^{(j)} \odot g'(U_i V_i) \Big ) V_i^T\right) _{\mu \nu }}\right) ^{\gamma } \end{aligned}$$
(29)
$$\begin{aligned}&(V_i)_{\mu \nu } \leftarrow (V_i)_{\mu \nu } \nonumber \\&\left( \frac{\left( U_i^T R_i\right) _{\mu \nu } + \lambda \left( \sum _{j=1}^{i} U_i^T \Big (M_{i-1}^{(j)} \odot g'(U_i V_i) \Big )\right) _{\mu \nu }}{\left( U_i^T N_i\right) _{\mu \nu } + \lambda \left( \sum _{j=1}^{i} U_i^T \Big (S_{i-1}^{(j)} \odot g'(U_i V_i) \Big )\right) _{\mu \nu }}\right) ^{\gamma } \end{aligned}$$
(30)

In this way, \(V_i, i=1, \cdots , n\), are the hidden factors of each layer. By introducing the nonlinear function \(g(\cdot )\) into the network, these hidden factors are generated by the following rules:

$$\begin{aligned} V_i= g \left( U_{i+1} V_{i+1}\right) ,~ i=n-1, \cdots , 0. \end{aligned}$$
(31)

Then our task is to minimize the following objective function:

$$\begin{aligned} F= & {} \Vert X - g\left( U_1 g \left( U_2 \cdots g \left( U_n V_n \right) \right) \right) \Vert ^2 \nonumber \\&+\, \lambda \sum _{i=1}^n KL \left( P||Q^{(i)}\right) . \end{aligned}$$
(32)
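For clarity, the reconstruction term of Eq. (32) can be evaluated by propagating from the top layer downwards, as in this small sketch (our illustration; it uses the convention, adopted later in this section, of an identity activation on the first layer and a sigmoid elsewhere):

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def reconstruction_loss(X, Us, Vn):
    """Reconstruction term of Eq. (32): ||X - g(U_1 g(U_2 ... g(U_n V_n)))||^2.

    Us : list [U_1, ..., U_n] of layer bases, Vn : top-layer representation V_n.
    """
    H = Vn
    for depth, U in enumerate(reversed(Us)):   # from layer n down to layer 1 (Eq. 31)
        H = U @ H
        if depth < len(Us) - 1:                # sigmoid for layers n..2, identity for layer 1
            H = sigmoid(H)
    return np.sum((X - H) ** 2)
```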

Let us denote \(G_i = KL(P||Q^{(i)})\) and let \(\odot \) denote the Hadamard (element-wise) product. Taking the derivatives of F with respect to \(U_i\) and \(V_i\), we have:

$$\begin{aligned} \frac{\partial F}{\partial U_i}= & {} \left( N_i - R_i\right) V_i^T + \lambda \sum _{j=1}^{i} \frac{\partial G_j}{\partial U_i} \end{aligned}$$
(33)
$$\begin{aligned} \frac{\partial F}{\partial V_i}= & {} U_i^T \left( N_i - R_i\right) + \lambda \sum _{j=1}^{i} \frac{\partial G_j}{\partial V_i} \end{aligned}$$
(34)

where matrices \(N_i, R_i \in {\mathbb {R}}^{D_{i-1} \times N}\) are calculated by the following rules:

$$\begin{aligned} R_{i+1}= & {} \left( U_i^T R_i\right) \odot g'\left( U_{i+1} V_{i+1}\right) \\ N_{i+1}= & {} \left( U_i^T N_i\right) \odot g'\left( U_{i+1} V_{i+1}\right) \end{aligned}$$

for \(i=1, \cdots , n-1\), with the initialization:

$$\begin{aligned} R_1= & {} X \odot g' \left( U_1 V_1\right) \\ N_1= & {} \left( U_1 V_1\right) \odot g'\left( U_1 V_1\right) . \end{aligned}$$

Besides, the derivatives of \(G_j\) with respect to \(U_i\) and \(V_i\) are calculated by Eqs. (27) and (28). Following the derivation in Sect. 3, we have the derivative

$$\begin{aligned}&\left( \frac{\partial KL(P||Q^{(j)})}{\partial V_j}\right) _{\mu \nu } \\&\quad = 4 \sum _{k=1}^N \frac{p_{\nu k} (V_j)_{\mu \nu } - q^{(j)}_{\nu k} (V_j)_{\mu \nu } - p_{\nu k} (V_j)_{\mu k} + q^{(j)}_{\nu k} (V_j)_{\mu k}}{1+\Vert \mathbf {v}^j_\nu -\mathbf {v}^j_k\Vert ^2}, \end{aligned}$$

where \(\mathbf {v}^j_k\) is the k-th column of \(V_j\), \(k=1,\cdots ,N\) and \(j=1,\cdots ,n\). To ensure that every element of \(U_i\) and \(V_i\) remains nonnegative, we split the above derivative into its positive and negative parts:

$$\begin{aligned} \frac{\partial KL\left( P||Q^{(j)}\right) }{\partial V_j} = A_j - B_j, \end{aligned}$$
(35)

where

$$\begin{aligned} \left( A_j\right) _{\mu \nu }= & {} 4 \sum _{k=1}^N \frac{p_{\nu k} (V_j)_{\mu \nu } + q^{(j)}_{\nu k} (V_j)_{\mu k}}{1+\Vert \mathbf {v}^j_\nu -\mathbf {v}^j_k\Vert ^2}, \end{aligned}$$
(36)
$$\begin{aligned} \left( B_j\right) _{\mu \nu }= & {} 4 \sum _{k=1}^N \frac{ q^{(j)}_{\nu k} (V_j)_{\mu \nu } + p_{\nu k} (V_j)_{\mu k} }{1+\Vert \mathbf {v}^j_\nu -\mathbf {v}^j_k\Vert ^2}. \end{aligned}$$
(37)

Then we can define two matrix sequences \(S_l\) and \(M_l\) as follows:

$$\begin{aligned} S_{l+1}^{(j)}= & {} U_{l+1}^T \Big (S_l^{(j)} \odot g'(U_{l+1} V_{l+1})\Big ), \end{aligned}$$
(38)
$$\begin{aligned} M_{l+1}^{(j)}= & {} U_{l+1}^T \Big (M_l^{(j)} \odot g'(U_{l+1} V_{l+1})\Big ), \end{aligned}$$
(39)

where \(l=j, \cdots , i-2\), \(S_j^{(j)} = A_j\) and \(M_j^{(j)} = B_j\). In this way, the derivatives of \(G_j\) with respect to \(U_i\) and \(V_i\), i.e., Eqs. (27) and (28), will be:

$$\begin{aligned} \frac{\partial G_j}{\partial U_i}= & {} \Big ((S_{i-1}^{(j)} - M_{i-1}^{(j)}) \odot g'(U_i V_i) \Big ) V_i^T, \end{aligned}$$
(40)
$$\begin{aligned} \frac{\partial G_j}{\partial V_i}= & {} U_i^T \Big ((S_{i-1}^{(j)} - M_{i-1}^{(j)}) \odot g'(U_i V_i) \Big ). \end{aligned}$$
(41)

Substituting the above equations into Eqs. (33) and (34), we obtain:

$$\begin{aligned} \begin{aligned} \frac{\partial F}{\partial U_i} =&(N_i - R_i)V_i^T \\&+ \lambda \sum _{j=1}^{i} \Big ((S_{i-1}^{(j)} - M_{i-1}^{(j)}) \odot g'(U_i V_i) \Big ) V_i^T,\\ \frac{\partial F}{\partial V_i} =&U_i^T(N_i - R_i) \\&+ \lambda \sum _{j=1}^{i} U_i^T \Big ((S_{i-1}^{(j)} - M_{i-1}^{(j)}) \odot g'(U_i V_i) \Big ). \end{aligned} \end{aligned}$$

Finally, similar to the procedure in Sect. 3, the update rules for multi-layer LSPH (ML-LSPH) are given in Eqs. (29) and (30), where \(0<\gamma <1\) is the learning rate and \(i=1,\cdots , n\). The convergence property of the above iteration is similar to that in [1]. Besides, to better understand ML-LSPH, we unify LSPH and ML-LSPH under the same framework. Thus, in our implementation, the function \(g(\cdot )\) applied to \(U_1\) and \(V_1\) is the identity function \(f(x)=x\), while the function \(g(\cdot )\) for \(U_i\) and \(V_i, i=2, \cdots , n\) is the nonlinear sigmoid function \(f(x)=\frac{1}{1+e^{-x}}\). In this way, when the number of layers is \(n=1\), ML-LSPH shrinks to the ordinary single-layer LSPH. It is noteworthy that we could theoretically formulate ML-LSPH with an arbitrary number of layers according to the above algorithms. However, for realistic applications with complex data distributions, the number of layers is always kept below 3, since as the number of layers increases, the accumulated reconstruction error causes the proposed model not to converge (Trigeorgis et al. 2014).

Fig. 2

Illustration of multi-layer LSPH (ML-LSPH)

The hash code generation phase is similar to that of LSPH. In particular, having acquired the low-dimensional representation \(V_n\) of the n-th layer, we first solve the Orthogonal Procrustes problem \(\min _{\mathcal {P} \mathcal {P}^T = I}\Vert \mathcal {P} X - V_n\Vert \) to obtain the orthogonal projection \({\mathcal {P}}\). The optimal solution is obtained by the following procedure: 1. use the singular value decomposition to factorize the matrix \(X V_n^T = A {\varSigma }B^T\); 2. calculate \({\mathcal {P}} = B{\varOmega }A^T\), where \({\varOmega }=[I,\mathbf{0 }]\in {\mathbb {R}}^{D\times M}\) is a connection matrix and \({\mathbf{0 }}\) is an all-zero matrix. For a new test sample \(\mathbf {x}_{new} \in {\mathbb {R}}^{M \times 1}\), the low-dimensional representation in the n-th layer is \(\mathbf {v}_n^{new} = {\mathcal {P}} \mathbf {x}_{new}\) and the binary codes are calculated by \(\hat{\mathbf {v}}_n^{new} = \lfloor h_{\varTheta }({\mathcal {P}} \mathbf {x}_{new})\rceil \), where \({\varTheta }\) is obtained by the same multivariable regression scheme. The procedure of ML-LSPH is summarized in Algorithm 2.

Algorithm 2 Multi-Layer Latent Structure Preserving Hashing (ML-LSPH)

Batch-Based Learning Scheme As the number of layers increases, the computational cost inevitably increases as well in the multi-layer network architecture of ML-LSPH. To effectively reduce the computational complexity on large-scale data, we adopt a random batch-based learning strategy (RBLS) in the iterative optimization of ML-LSPH. The complexity of each layer’s NMF is \(O (NMD)\) as mentioned above, which is linear in N and therefore not very demanding for large-scale data processing. The real bottleneck of the optimization procedure is the calculation of the KL divergence for each layer, specifically of the similarity matrices P and \(Q^{(m)}\), whose complexity is \(O (N^2D)\). Therefore, in our implementation, we adopt RBLS to reduce the complexity of computing P and \(Q^{(m)}\) in ML-LSPH. In detail, at each step of updating P and \(Q^{(m)}\), we randomly select a small subset of the whole training set. We then only compute the pairwise similarities within this subset, and the remaining entries of P and \(Q^{(m)}\) are set to zero:

$$\begin{aligned} p_{ij}= & {} \left\{ \begin{array}{l} \frac{\exp (-\Vert \mathbf {x}_i -\mathbf {x}_j\Vert ^2/ 2\sigma _i^2)}{\sum _{k \ne l} \exp (-\Vert \mathbf {x}_k - \mathbf {x}_l\Vert ^2/2\sigma _k^2)},~ \text {if}~ \mathbf {x}_i,\mathbf {x}_j\in \text {batch} \\ 0,~ \text {otherwise} \end{array} \right. ,\\ q_{ij}^{m}= & {} \left\{ \begin{array}{l} \frac{(1+\Vert \mathbf {v}_{i}^{m}-\mathbf {v}_{j}^{m}\Vert ^2)^{-1}}{\sum _{k \ne l} (1+ \Vert \mathbf {v}_{k}^{m} - \mathbf {v}_{l}^{m} \Vert ^2)^{-1}},~ \text {if}~ \mathbf {v}_{i}^{m},\mathbf {v}_{j}^{m}\in \text {batch} \\ 0,~ \text {otherwise} \end{array} \right. , \end{aligned}$$

where m indicates the layer index. The proposed RBLS is also illustrated in Fig. 3. If the size of the small subset is \(\ell \), the complexity of RBLS is reduced from \(O (N^2D)\) to \(O (\ell ^2D)\). Usually, \(\ell \) can be set to \(\ell =N/100\). It is noteworthy that only the computation of P and \(Q^{(m)}\) is replaced by the above RBLS trick; the other parts of the ML-LSPH algorithm remain the same as described before. In this way, our multi-layer LSPH becomes scalable to large-scale data.
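A sketch of how the batch-restricted P could be formed (our illustration; batch_size and the per-point bandwidths sigma are assumptions, and the analogous construction applies to \(Q^{(m)}\) using the columns of \(V_m\)):

```python
import numpy as np

def batch_similarities(X, batch_size, sigma, rng=None):
    """Random batch-based P (RBLS): nonzero only inside a random subset of columns.

    X : (M, N) data matrix; sigma : (N,) per-point Gaussian bandwidths.
    A dense (N, N) matrix is returned only for clarity; in practice just the
    ell x ell batch block needs to be stored.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[1]
    idx = rng.choice(N, size=batch_size, replace=False)
    Xb = X[:, idx]
    sq = np.sum((Xb[:, :, None] - Xb[:, None, :]) ** 2, axis=0)  # distances within the batch
    Pb = np.exp(-sq / (2.0 * sigma[idx][:, None] ** 2))
    np.fill_diagonal(Pb, 0.0)
    Pb = Pb / Pb.sum()
    P = np.zeros((N, N))
    P[np.ix_(idx, idx)] = Pb       # all entries outside the batch stay zero, as in Fig. 3
    return P
```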

Fig. 3

Illustration of the \(P \in {\mathbb {R}}^{N\times N}\) and \(Q^{(j)} \in {\mathbb {R}}^{N\times N}\) matrices composition. The white blocks indicate zeros and dark-colored blocks indicate the similarity computed via the randomly selected subset

5 Computational Complexity Analysis

In this section, we discuss the computational complexity of LSPH and ML-LSPH. The computational complexity of LSPH consists of three parts. The first part is the NMF computation, whose complexity is \(O (NMD)\) (Li et al. 2014), where N is the size of the dataset and M and D are the dimensionalities of the high-dimensional and low-dimensional data, respectively. The second part is the computation of the cost function in Eq. (7) within the objective function, which has complexity \(O (N^2D)\). The last part is the logistic regression procedure, whose complexity is \(O (ND^2)\). Therefore, the total computational complexity of LSPH is \(O (tNMD+N^2D+tND^2)\), where t is the number of iterations.

Clearly, single-layer ML-LSPH is exactly LSPH. The computational complexity of ML-LSPH with multiple layers consists of several NMF optimizations, the computation of the matrices P and \(Q^{(j)}\), and the logistic regression procedure. From the above discussion, the computational complexity of ML-LSPH is \(O(tN M \sum _{i=1}^n D_i + \ell ^2 \sum _{i=1}^n D_i + t N D^2)\), where \(\ell \) is the batch size.

6 Experiments and Results

In this section, we systematically evaluate the proposed LSPH and multi-layer LSPH (ML-LSPH) on three large-scale datasets. The relevant experimental results and data visualizations are included in the rest of this section. All experiments are conducted using MATLAB 2014a on a workstation with a 12-core 3.2 GHz CPU and 120 GB of RAM running Linux.

6.1 Evaluation on LSPH

In this subsection, we first apply the proposed single-layer LSPH algorithm to large-scale similarity search tasks. Two realistic datasets are used to evaluate all methods: SIFT 1M, which contains one million 128-dim local SIFT feature vectors (Lowe 2004), and GIST 1M, which contains one million 960-dim global GIST feature vectors (Oliva and Torralba 2001). Both datasets are publicly available.

For both SIFT 1M and GIST 1M, we randomly select 10K samples as queries and use the remainder to form the gallery database. To construct the training set, 200,000 samples are randomly selected from the gallery database for all compared methods. Additionally, we randomly choose another 50,000 samples as a cross-validation set for parameter tuning. In the querying phase, the returned points are regarded as true neighbors if they lie within the top 2 percentile of points closest to a query, for both datasets. Since Hamming lookup tables are fast with hash codes, we use them for our retrieval tasks. We evaluate the retrieval results in terms of the Mean Average Precision (MAP) and the precision-recall curves. Additionally, we report the training time and the test time (the average search time per query) for all methods.

6.1.1 The Selected Methods and Settings

In this experiment, we compare LSPH with 10 selected popular hashing methods: LSH (Gionis et al. 1999), BSSC (Shakhnarovich 2005), RBM (Salakhutdinov and Hinton 2007), SpH (Weiss et al. 2009), STH (Zhang et al. 2010), AGH (Liu et al. 2011), ITQ (Gong et al. 2013), KLSH (Kulis and Grauman 2009), PCAH (Wang et al. 2012) and CH (Lin et al. 2013). Among these methods, BSSC obtains the weights and thresholds of every hash function through the labeled-pair scheme in the boosting framework. RBM is trained with 100-100 hidden layers without fine-tuning. For KLSH, 500 training samples and the RBF kernel are used to compute the empirical kernel map, in which the scalar parameter \(\sigma \) is always set to an appropriate value for each dataset. For ITQ, the number of iterations is set to 50. In two-layer AGH, we set the number of anchor points k to 200 and the number of nearest anchors s in sparse coding to 50. CH uses the same anchor-based sparse coding setting as AGH. All 10 methods are evaluated at different code lengths, i.e., 16, 32, 48, 64, 80 and 96. Under the same experimental setting, all the parameters used in the compared methods are strictly chosen according to their original papers.

In the experiments on our LSPH method, all the data are first normalized into the range [0, 1], since the nonnegativity constraint is required in our framework. We also use the validation set to tune our hyper-parameters. In particular, for each dataset, we select one value from \(\{0.01, 0.02, \cdots , 0.10\}\) as the optimal learning rate \(\alpha \) of the multivariable logistic regression through 10-fold cross-validation on the validation set. The regularization parameter \(\lambda \) is chosen from \(\{10^{-2}, 10^{-1}, 1, 10^1, 10^2, 10^3\}\) as the value that yields the best performance via 10-fold cross-validation on the validation set. The regularization parameter \(\delta \) in the hash function generation is fixed at \(\delta = 0.35\). We also limit the maximum number of iterations in the LSPH learning phase to 1000.

6.1.2 Results Comparison

We show the MAP curves of the compared methods on the SIFT 1M and GIST 1M datasets in Fig. 4. As a general tendency, the accuracies on the GIST 1M dataset are lower than those on the SIFT 1M dataset, since the features in GIST 1M have higher dimensionality and larger variations than those in SIFT 1M. Looking at Fig. 4 in detail, ITQ always achieves a higher Mean Average Precision (MAP), which increases consistently with the code length on both datasets. The MAP of CH also proves competitive, although a little lower than ITQ. Interestingly, on both the SIFT 1M and GIST 1M datasets, the MAP of SpH and STH always rises and then falls as the code length increases. Due to the use of random projections, LSH and KLSH have a low MAP when the code length is short. Moreover, the accuracy of PCAH always decreases as the code length increases. Our method, LSPH, achieves the highest performance among all compared methods on both datasets. The proposed LSPH algorithm can automatically exploit the latent structure of the original data and simultaneously preserve the consistency of the distributions of the original data and the reduced representations. These properties allow LSPH to achieve better performance in large-scale visual retrieval tasks. Fig. 5 shows the precision-recall curves for a code length of 48 bits on both the SIFT 1M and GIST 1M datasets, with the top 2 percentile nearest neighbors as the ground truth. From these figures, we can further observe that, on both datasets, LSPH achieves clearly better performance than the other hashing methods in terms of Mean Average Precision (MAP) and Area Under the Curve (AUC). Additionally, to further illustrate the necessity of logistic regression binarization rather than direct median-value binarization, as mentioned in Sect. 3.3, we carry out comparison experiments on both the SIFT 1M and GIST 1M datasets. As shown in Table 1, logistic regression binarization achieves consistently higher MAP across all code lengths than the direct median-value binarization scheme.

Fig. 4

The Mean Average Precision of the compared algorithms on the SIFT 1M and GIST 1M datasets (Color figure online)

Fig. 5

The precision-recall curves of the compared algorithms on the SIFT 1M and GIST 1M datasets for the code of 48 bits (Color figure online)

Table 1 The comparison of MAP between using median values and Logistic regression to generate codes with different numbers of bits

Meanwhile, the training and test times of all methods are listed in Tables 2 and 3. Considering the training time, the random projection based algorithms are relatively more efficient, especially LSH, whereas RBM takes the most training time, since it is based on a time-consuming deep learning method. Our method, LSPH, is significantly more efficient than STH, BSSC and RBM, but slightly slower than ITQ, AGH and SpH. It is noteworthy that once the optimal hashing functions of our method are obtained from the training phase, they are fixed and directly applied to new data. In addition, with the rapid development of silicon technologies, future computers will be much faster and even the training will become less of a problem. In the test phase, LSH is again the most efficient method, since only a simple matrix multiplication and a thresholding are needed to obtain the binary codes. AGH and SpH always take more time in the test phase, while our LSPH has efficiency competitive with STH. Overall, it can be concluded that LSPH is an effective and relatively efficient method for large-scale retrieval tasks.

Table 2 The comparison of MAP, training time and test time of 32 bits and 48 bits of all the compared algorithms on the SIFT 1M dataset
Table 3 The comparison of MAP, training time and test time of 32 bits and 48 bits of all the compared algorithms on the GIST 1M dataset

6.2 Evaluation on ML-LSPH

In this subsection, the multi-layer LSPH is evaluated on the TinyImage dataset, which is a subset containing 500,000 images from 80 Million Tiny Images (Torralba et al. 2008a, b). Some example images from the TinyImage dataset are shown in Fig. 6. We take 1K images as queries by random selection and use the remainder to form the gallery database. Considering the computational cost of multi-layer networks, in this experiment only 100,000 randomly selected samples from the gallery database form the training set. Similar to the LSPH experiments, another 50,000 samples are randomly chosen as a cross-validation set for parameter tuning. For the image search tasks, each image is described by a 512-dimensional GIST descriptor (Oliva and Torralba 2001), and we then learn to hash these descriptors with all compared methods. In the querying phase, a returned point is regarded as a neighbor if it lies among the top-ranked 500 points for the TinyImage dataset. We evaluate the retrieval results through Hamming distance ranking and report the Mean Average Precision (MAP) and the precision-recall curves obtained by changing the number of top-ranked points. Additionally, we report a parameter sensitivity analysis and visualize some retrieved images of the compared methods on this dataset.

Fig. 6

Some example images from the TinyImage dataset (Color figure online)

To avoid confusion, in this experiment we only compare with LSH, PCAH, ITQ, AGH, RBM and SpH, which showed distinctive performance in the previous comparison. Besides, we also add a further hashing technique, SKLSH, in this experiment. Note that RBM here has three hidden layers (100-100-100) without fine-tuning. All of the above methods are evaluated at six different code lengths (32, 48, 64, 80, 96, 128). Under the same experimental setting, all the parameters used in the compared methods are strictly chosen according to their original papers.

For our ML-LSPH, the zero-one normalization of the data and the hyper-parameter selection follow the same scheme as in LSPH. Besides, to better reach a local minimum of the loss in ML-LSPH, the learning rate \(\gamma \) of the iterative optimization needs to be set; in this experiment, we fix \(\gamma =0.5\) for ML-LSPH. More importantly, for the reason mentioned above that an increasing number of layers causes the accumulated reconstruction error to prevent the proposed model from converging (Trigeorgis et al. 2014), we evaluate ML-LSPH with \(n=2\) layers, the same setting as in Trigeorgis et al. (2014). We further set the dimension of the middle layer (i.e., \(V_1\)) to 256 on this dataset.

6.2.1 Results Comparison

Fig. 7 shows the MAP curves of all compared algorithms on the TinyImage dataset. Our ML-LSPH algorithm achieves slightly lower MAP than ITQ when the code length is less than 48 bits, but consistently outperforms all other compared methods at every code length. RBM with 3 layers also produces competitive search accuracies on this dataset. Different from the other hashing techniques, the performance of PCAH decreases as the code length increases; a similar phenomenon appeared in the previous evaluation on the SIFT 1M and GIST 1M datasets. The performance of AGH, SpH and LSH is consistent with that in the previous experiments. Besides, Fig. 8 shows a series of precision-recall curves for different code lengths on the TinyImage dataset, with the 500 nearest neighbors as the ground truth. Comparing the Area Under the Curve (AUC), our ML-LSPH achieves clearly better performance than the other methods for relatively long codes (\(\hbox {code length } \ge 48 \hbox { bits}\)), and its performance decreases slightly when short hash codes are adopted. Moreover, in Table 4 and Fig. 9, we also compare ML-LSPH and LSPH in terms of MAP and AUC on all three datasets. ML-LSPH achieves consistently better results than LSPH, since the large intra-class variations in TinyImage cause complex and noisy data distributions, which are more difficult for LSPH to handle. Besides, Fig. 10 illustrates the convergence of the proposed LSPH and ML-LSPH on TinyImage with a code length of 64. For LSPH, the loss \(\Vert X - U_1V_1\Vert ^2\) drops sharply as the number of iterations increases. For ML-LSPH, in contrast, the loss \(\Vert X - g(U_1 g(U_2 \cdots g(U_n V_n)))\Vert ^2\) first climbs during the first 50 iterations and then decreases. With the batch-based learning scheme, the total losses of both LSPH and ML-LSPH decrease steadily with little fluctuation.

Fig. 7

The top-500 Mean Average Precision of the compared algorithms on the TinyImage dataset (Color figure online)

Fig. 8

Comparison of precision recall curves with different bits on the TinyImage dataset. Ground truth is defined by top-500 Euclidean neighbors (Color figure online)

Fig. 9

Comparison of precision recall curves using ordinary LSPH (one layer) and ML-LSPH (two layers) with different bits on the TinyImage dataset (Color figure online)

Fig. 10

Illustration of convergence on the TinyImage dataset with the code length 64 (\(\lambda =10\)). For ordinary LSPH (one layer): a \(loss1=\Vert X - U_1V_1\Vert ^2\) versus number of iterations. b \(loss2=KL(P||Q)\) versus number of iterations. c \(Total loss=\Vert X - U_1V_1\Vert ^2 + \lambda KL(P||Q)\) versus number of iterations. For ML-LSPH: d \(loss1=\Vert X - g(U_1 g(U_2 \cdots g(U_n V_n)))\Vert ^2\) versus number of iterations. e \(loss2=\sum _{i=1}^n KL(P||Q^{(i)})\) versus number of iterations. f \(Total loss=\Vert X - g(U_1 g(U_2 \cdots g(U_n V_n)))\Vert ^2 + \lambda \sum _{i=1}^n KL(P||Q^{(i)})\) versus number of iterations. Zoom in for better viewing

Table 4 The comparison of MAP between using LSPH and ML-LSPH with different numbers of bits on all three datasets

Additionally, in Fig. 11, we compare the retrieval performance of ML-LSPH with respect to the regularization parameter \(\lambda \) at different code lengths via cross-validation. When tuning \(\lambda \) from 0.01 to 1000 with a scale factor of 10, the MAP curves of ML-LSPH appear relatively stable and insensitive to \(\lambda \). For code lengths of 64 and 80 bits, the best performance occurs at \(\lambda =10\), whereas for the remaining code lengths ML-LSPH achieves the highest retrieval accuracies with \(\lambda =100\). Furthermore, to illustrate the effectiveness of the data distribution preserving (KL divergence) regularization term in ML-LSPH, we also compare against the variant without the KL divergence term in Table 5. The results indicate that combining the multi-layer NMF network with the data distribution preserving term consistently yields better performance. Finally, the top-ranked retrieval results of the compared methods on some typical queries are shown in Fig. 12. It can be observed that the images retrieved by ML-LSPH have more semantic consistency with the query images.

Fig. 11

Parameter sensitivity analysis of the regularization parameter \(\lambda \) with different bits on the TinyImage dataset (Color figure online)

Fig. 12

The top 25 retrieved images for queries (plane, bird, car, horse, ship and truck) with 96 bits using ML-RBM, LSH, SpH, PCAH, SKLSH, ITQ, AGH and our ML-LSPH (from a to h). Best viewed in color (Color figure online)

Table 5 Results comparison (MAP) of ML-LSPH with/without KL regularization term \(\sum _{i=1}^n KL(P||Q^{(i)})\)

7 Conclusion and Future Work

In this paper, we have proposed the Latent Structure Preserving Hashing (LSPH) algorithm, which finds a well-structured low-dimensional data representation through Nonnegative Matrix Factorization (NMF) with a probabilistic structure preserving regularization term, and then applies multivariable logistic regression to generate the final hash codes. To better tackle data with complex and noisy distributions, a multi-layer LSPH (ML-LSPH) extension has also been developed. Extensive experiments on three large-scale datasets have demonstrated that our algorithms lead to competitive hashing performance for large-scale retrieval tasks.

As mentioned in the introduction, the hash codes learned by both LSPH and ML-LSPH can be regarded as independent latent attributes due to the properties of NMF. In future work, these learned data-driven attributes will be further explored with zero-shot learning for unseen image classification, retrieval and annotation tasks.