
1 Introduction

The automatic extraction of topics has become very important in recent years since topics provide a meaningful way to organize, browse and represent large-scale collections of documents. Among the most successful approaches to topic discovery are directed topic models such as Latent Dirichlet Allocation (LDA) [1] and Hierarchical Dirichlet Processes (HDP) [15], which are directed graphical models with latent topic variables. More recently, undirected graphical models have also been applied to topic modeling (e.g., Boltzmann Machines [12, 13] and Neural Autoregressive Distribution Estimators [9]). The topics generated by both directed and undirected models have been shown to underlie the thematic structure of a text corpus. These topics are defined as distributions over the terms of a vocabulary, and documents in turn as distributions over topics. Traditionally, inference in topic models has not scaled well to large corpora; however, more efficient strategies have been proposed to overcome this problem (e.g., Online LDA [8] and stochastic variational inference [10]). Undirected topic models can also be trained efficiently using approximate strategies such as Contrastive Divergence [7].

In this work, we explore the mining of topics based on term co-occurrence. The underlying intuition is that terms consistently co-occurring in the same documents are likely to belong to the same topic. The resulting topics correspond to ordered subsets of the vocabulary rather than distributions over such a vocabulary. Since finding co-occurring terms is a combinatorial problem that lies in a large search space, we propose Sampled Weighted Min-Hashing (SWMH), an extended version of Sampled Min-Hashing (SMH) [6]. SMH partitions the vocabulary into sets of highly co-occurring terms by applying Min-Hashing [2] to the inverted file entries of the corpus. The basic idea of Min-Hashing is to generate random partitions of the space so that sets with high Jaccard similarity are more likely to lie in the same partition cell.

One limitation of SMH is that the generated random partitions are drawn from uniform distributions. This setting is not ideal for information retrieval applications, where weighting has a positive impact on the quality of the retrieved documents [3, 14]. For this reason, we extend SMH by allowing weights in the mining process, which effectively extends the uniform distribution to a distribution based on weights. We demonstrate the validity and scalability of the proposed approach by mining topics in the NIPS, 20 Newsgroups, Reuters and Wikipedia corpora, which range from small (thousands of documents) to large scale (millions of documents). Table 1 presents some examples of mined topics and their sizes. Interestingly, SWMH can mine meaningful topics of different levels of granularity.

Table 1. SWMH topic examples.

The remainder of the paper is organized as follows. Section 2 reviews the Min-Hashing scheme for pairwise set similarity search. The proposed approach for topic mining by SWMH is described in Sect. 3. Section 4 reports the experimental evaluation of SWMH as well as a comparison against Online LDA. Finally, Sect. 5 concludes the paper with some discussion and future work.

2 Min-Hashing for Pairwise Similarity Search

Min-Hashing is a randomized algorithm for efficient pairwise set similarity search (see Algorithm 1). The basic idea is to define MinHash functions h with the property that the probability of any two sets \(A_1, A_2\) having the same MinHash value is equal to their Jaccard Similarity, i.e.,

$$\begin{aligned} P[h(A_1) = h(A_2)] = \frac{\mid A_1 \cap A_2 \mid }{\mid A_1 \cup A_2 \mid } \in [0, 1]. \end{aligned}$$
(1)

Each MinHash function h is realized by generating a random permutation \(\pi \) of all the elements and assigning the first element of a set in the permutation as its MinHash value. The rationale behind Min-Hashing is that similar sets will have a high probability of taking the same MinHash value whereas dissimilar sets will have a low probability. To cope with random fluctuations, multiple MinHash values are computed for each set from independent random permutations. Remarkably, it has been shown that the proportion of identical MinHash values between two sets is an unbiased estimator of their Jaccard similarity [2].
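As an illustration of these properties, the following minimal sketch (in Python, not part of the original implementation) computes MinHash signatures from explicit random permutations and uses them to estimate the Jaccard similarity of two toy sets; the universe size, number of hash functions and example sets are illustrative choices.

import random

def minhash_signature(s, permutations):
    # The MinHash value of a set under a permutation is its element with the
    # smallest rank, i.e., the first element of the set that appears when the
    # universe is traversed in the permuted order.
    return [min(s, key=perm.__getitem__) for perm in permutations]

def estimate_jaccard(a, b, n_hashes=200, universe_size=1000, seed=0):
    rng = random.Random(seed)
    permutations = []
    for _ in range(n_hashes):
        ranks = list(range(universe_size))
        rng.shuffle(ranks)          # ranks[e] = position of element e in the permutation
        permutations.append(ranks)
    sig_a = minhash_signature(a, permutations)
    sig_b = minhash_signature(b, permutations)
    # The proportion of identical MinHash values estimates the Jaccard
    # similarity of Eq. 1.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / n_hashes

A1 = set(range(0, 60))     # |A1 ∩ A2| = 40, |A1 ∪ A2| = 80
A2 = set(range(20, 80))    # exact Jaccard similarity = 0.5
print(estimate_jaccard(A1, A2))   # prints a value close to 0.5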

Taking into account the above properties, in Min-Hashing similar sets are retrieved by grouping l tuples \(g_1, \ldots , g_l\) of r different MinHash values as follows

$$\begin{aligned} \begin{array}{l} g_1(A_1) = (h_1(A_1), h_2(A_1), \ldots , h_r(A_1))\\ g_2(A_1) = (h_{r+1}(A_1), h_{r+2}(A_1), \ldots , h_{2\cdot r}(A_1))\\ \cdots \\ g_l(A_1) = (h_{(l-1)\cdot r+1}(A_1), h_{(l-1)\cdot r+2}(A_1), \ldots , h_{l\cdot r}(A_1)) \end{array}, \end{aligned}$$

where \(h_j(A_1)\) is the j-th MinHash value. Thus, l different hash tables are constructed and two sets \(A_1, A_2\) are stored in the same hash bucket on the k-th hash table if \(g_k(A_1) = g_k(A_2), k = 1, \ldots , l\). Because similar sets are expected to agree in several MinHash values, they will be stored in the same hash bucket with high probability. In contrast, dissimilar sets will seldom have the same MinHash value and therefore the probability that they have an identical tuple will be low. More precisely, the probability that two sets \(A_1,A_2\) agree in the r MinHash values of a given tuple \(g_k\) is \(P[g_k(A_1) = g_k(A_2)] = sim(A_1, A_2)^r\). Therefore, the probability that two sets \(A_1, A_2\) have at least one identical tuple is \(P_{collision}[A_1, A_2] = 1-(1-sim(A_1, A_2)^r)^l\).
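A minimal sketch of this tuple scheme is given below; signatures are assumed to be a dictionary mapping set identifiers to lists of l·r MinHash values (for instance, produced as in the previous sketch), and all names are illustrative.

from collections import defaultdict

def build_hash_tables(signatures, r, l):
    # One hash table per tuple g_k; each set is stored in the bucket keyed by
    # its k-th tuple of r consecutive MinHash values.
    tables = [defaultdict(list) for _ in range(l)]
    for set_id, sig in signatures.items():
        for k in range(l):
            g_k = tuple(sig[k * r:(k + 1) * r])
            tables[k][g_k].append(set_id)
    return tables

def candidate_pairs(tables):
    # Sets stored in the same bucket of at least one table are candidate
    # similar pairs; two sets collide with probability 1 - (1 - sim^r)^l.
    pairs = set()
    for table in tables:
        for bucket in table.values():
            for i in range(len(bucket)):
                for j in range(i + 1, len(bucket)):
                    pairs.add((bucket[i], bucket[j]))
    return pairs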

The original Min-Hashing scheme was extended by Chum et al. [5] to weighted set similarity, defined as

$$\begin{aligned} sim_{hist}(H_1, H_2) = \frac{\sum _i w_i \min (H_1^i, H_2^i)}{\sum _i w_i \max (H_1^i, H_2^i)} \in [0, 1], \end{aligned}$$
(2)

where \(H_1^i, H_2^i\) are the frequencies of the i-th element in the histograms \(H_1\) and \(H_2\), respectively, and \(w_i\) is the weight of the element. In this scheme, instead of generating random permutations drawn from a uniform distribution, the permutations are drawn from a distribution based on element weights. This extension allows the use of popular document representations based on weighting schemes such as tf-idf and has been applied to image retrieval [5] and clustering [4].
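One standard way to realize such weight-biased permutations is sketched below (an illustration, not necessarily the exact construction of [5]): each element receives a pseudo-random key that is exponentially distributed with rate equal to its weight and shared across all sets, and the MinHash value of a set is its element with the smallest key; integer histogram frequencies can be folded in by replicating elements.

import math
import random

def weighted_minhash(elements, weights, hash_index, seed=0):
    # Element e is assigned the key -ln(u_e) / w_e, where u_e is a fixed
    # pseudo-random draw shared by all sets for this hash function. Elements
    # with larger weights tend to receive smaller keys, so they are more
    # likely to come first in the implicit permutation.
    best, best_key = None, float("inf")
    for e in elements:
        u = 1.0 - random.Random(f"{seed}-{hash_index}-{e}").random()  # u in (0, 1]
        key = -math.log(u) / weights[e]
        if key < best_key:
            best, best_key = e, key
    return best

With this construction, two sets take the same MinHash value with probability equal to the total weight of their shared elements divided by the total weight of their union, i.e., the weighted similarity above restricted to binary histograms.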

Algorithm 1. Min-Hashing for pairwise similarity search.

3 Sampled Min-Hashing for Topic Mining

Min-Hashing has been used in document and image retrieval and classification, where documents and images are represented as bags of words. Recently, it was also successfully applied to retrieving co-occurring terms by hashing the inverted file lists instead of the documents [5, 6]. In particular, Fuentes-Pineda et al. [6] proposed Sampled Min-Hashing (SMH), a simple strategy based on Min-Hashing to discover objects from large-scale image collections. In the following, we briefly describe SMH using the notation of terms, topics and documents, although it can be generalized to any type of dyadic data. The underlying idea of SMH is to mine groups of terms with high Jaccard Co-occurrence Coefficient (JCC), i.e.,

$$\begin{aligned} JCC(T_1, \ldots , T_k) = \frac{\vert T_1 \cap T_2 \cap \cdots \cap T_k \vert }{\vert T_1 \cup T_2 \cup \cdots \cup T_k \vert }, \end{aligned}$$
(3)

where the numerator corresponds to the number of documents in which the terms \(T_1, \ldots , T_k\) co-occur and the denominator is the number of documents that contain at least one of the k terms. Thus, Eq. 1 can be extended to multiple co-occurring terms as

$$\begin{aligned} P[h(T_1) = h(T_2) \ldots = h(T_k)] = JCC(T_1, \ldots , T_k). \end{aligned}$$
(4)

From Eqs. 3 and 4, it is clear that the probability that all terms \(T_1, \ldots , T_k\) have the same MinHash value depends on how correlated their occurrences are: the more correlated they are, the higher the probability of taking the same MinHash value. This implies that terms consistently co-occurring in many documents will have a high probability of taking the same MinHash value.

In the same way as in pairwise Min-Hashing, l tuples of r MinHash values are computed, and groups of terms with an identical tuple become a co-occurring term set. By choosing r and l properly, the probability that a group of k terms has an identical tuple approximates a unit step function such that

$$\begin{aligned} P_{collision}[T_1, \ldots , T_k] \approx {\left\{ \begin{array}{ll} 1 &{} \text{ if } JCC(T_1, \ldots , T_k) \ge s* \\ 0 &{} \text{ if } JCC(T_1, \ldots , T_k) < s* \end{array}\right. }. \end{aligned}$$

Here, the selection of r and l is a trade-off between precision and recall. Given \(s*\) and r, we can determine l by setting \(P_{collision}[T_1, \ldots ,T_k]\) to 0.5, which gives

$$\begin{aligned} l = \frac{\log (0.5)}{\log (1 - s*^r)}. \end{aligned}$$
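For instance, the following lines compute l from \(s*\) and r; with \(s*=0.10\) they yield the 693 and 6931 tables used for r = 3 and r = 4 in the experiments of Sect. 4.

import math

def num_tables(s_star, r):
    # l = log(0.5) / log(1 - s*^r), rounded to the nearest integer
    return round(math.log(0.5) / math.log(1.0 - s_star ** r))

print(num_tables(0.10, 3))   # 693
print(num_tables(0.10, 4))   # 6931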

In SMH, each hash table can be seen as a random partitioning of the vocabulary into disjoint groups of highly co-occurring terms, as illustrated in Fig. 1. Different partitions are generated, and groups of discriminative and stable terms belonging to the same topic are expected to lie in overlapping cells across partitions. Therefore, we cluster co-occurring term sets that share many terms in an agglomerative manner. We measure the proportion of terms shared between two co-occurring term sets \(C_1\) and \(C_2\) by their overlap coefficient, namely

Fig. 1. Partitioning of the vocabulary by Min-Hashing.

$$\begin{aligned} ovr(C_1, C_2) = \frac{\mid C_1 \cap C_2 \mid }{\min (\mid C_1 \mid , \mid C_2\mid )} \in [0, 1]. \end{aligned}$$

Since a pair of co-occurring term sets with high Jaccard similarity will also have a large overlap coefficient, finding pairs of co-occurring term sets can be sped up by using Min-Hashing, thus avoiding the overhead of computing the overlap coefficient between all pairs of co-occurring term sets.

The clustering stage merges chains of co-occurring term sets with high overlap coefficient into the same topic. As a result, co-occurring term sets associated with the same topic can belong to the same cluster even if they do not share terms with one another, as long as they are members of the same chain. In general, the generated clusters have the property that for any co-occurring term set, there exists at least one co-occurring term set in the same cluster with which it has an overlap coefficient greater than a given threshold \(\epsilon \).
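The following sketch illustrates this chain-merging step; for brevity it compares all pairs of co-occurring term sets directly instead of finding candidate pairs with Min-Hashing as described above, and it takes a topic to be the union of the term sets in a chain, which is an illustrative choice.

def overlap(c1, c2):
    # Overlap coefficient between two co-occurring term sets (Python sets).
    return len(c1 & c2) / min(len(c1), len(c2))

def cluster_term_sets(term_sets, epsilon):
    # Union-find structure: term sets linked by an overlap coefficient greater
    # than epsilon end up in the same chain (cluster).
    parent = list(range(len(term_sets)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(term_sets)):
        for j in range(i + 1, len(term_sets)):
            if overlap(term_sets[i], term_sets[j]) > epsilon:
                parent[find(i)] = find(j)    # merge the two chains

    clusters = {}
    for i, ts in enumerate(term_sets):
        clusters.setdefault(find(i), set()).update(ts)
    return list(clusters.values())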

We explore the use of SMH to mine topics from documents but we judge term co-occurrence by the Weighted Co-occurrence Coefficient (WCC), defined as

$$\begin{aligned} WCC (T_1, \ldots , T_k) = \frac{\sum _i w_i \min {(T_1^i, \cdots , T_k^i)}}{\sum _i w_i \max {(T_1^i, \cdots , T_k^i )}} \in [0, 1], \end{aligned}$$
(5)

where \(T_1^i, \cdots , T_k^i\) are the frequencies with which the terms \(T_1, \ldots , T_k\) occur in the i-th document and the weight \(w_i\) is given by the inverse of the size of the i-th document. We exploit the extended Min-Hashing scheme by Chum et al. [5] to efficiently find such co-occurring terms. We call this topic mining strategy Sampled Weighted Min-Hashing (SWMH) and summarize it in Algorithm 2.

Algorithm 2. Sampled Weighted Min-Hashing for topic mining.
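As an illustration of Eq. 5, the following sketch computes the WCC from inverted-file entries; it assumes that each term is represented as a dictionary from document identifiers to its frequency in that document and that doc_sizes gives the size of each document (so that \(w_i\) is its inverse), a data layout chosen only for illustration.

def wcc(term_entries, doc_sizes):
    # term_entries: one dict per term, mapping document id -> term frequency.
    docs = set().union(*(entry.keys() for entry in term_entries))
    num = den = 0.0
    for d in docs:
        freqs = [entry.get(d, 0) for entry in term_entries]
        w = 1.0 / doc_sizes[d]            # weight: inverse of the document size
        num += w * min(freqs)
        den += w * max(freqs)
    return num / den if den else 0.0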

4 Experimental Results

In this section, we evaluate different aspects of the mined topics. First, we present a comparison between the topics mined by SWMH and SMH. Second, we evaluate the scalability of the proposed approach. Third, we use the mined topics to perform document classification. Finally, we compare SWMH topics with Online LDA topics.

The corpora used in our experiments were NIPS, 20 Newsgroups, Reuters and Wikipedia. NIPS is a small collection of articles (3,649 documents), 20 Newsgroups is a larger collection of mail newsgroups (34,891 documents), Reuters is a medium-size collection of news (137,589 documents) and Wikipedia is a large-scale collection of encyclopedia articles (1,265,756 documents).

All the experiments presented in this work were performed on an Intel(R) Xeon(R) 2.66 GHz workstation with 8 processors and 8 GB of memory. However, we would like to point out that the current version of the code is not parallelized, so we did not take advantage of the multiple processors.

4.1 Comparison Between SMH and SWMH

For these experiments, we used the NIPS and Reuters corpora and different values of the parameters \(s*\) and r, which define the number of MinHash tables. We set the similarity threshold (\(s*\)) to 0.15, 0.13 and 0.10 and the tuple size (r) to 3 and 4. These parameters yielded the following numbers of tables: 205, 315, 693, 1369, 2427 and 6931. Figure 2 shows the effect of weighting on the number of mined topics. First, notice the breaking point in both figures when passing from 1369 to 2427 tables. This effect corresponds to resetting \(s*\) to 0.10 when changing r from 3 to 4. Lower values of \(s*\) are stricter and therefore fewer topics are mined. Figure 2 also shows that the number of mined topics is significantly reduced by SWMH, since colliding terms not only need to appear in similar documents but also in similar proportions. The effect of using SWMH is also noticeable in the number of terms that compose a topic. The maximum reduction reached in NIPS was \(73\,\%\), while in Reuters it was \(45\,\%\).

Fig. 2. Number of mined topics for SMH and SWMH in the (a) NIPS and (b) Reuters corpora.

4.2 Scalability Evaluation

To test the scalability of SWMH, we measured the time and memory required to mine topics in the Reuters corpus while increasing the number of documents to be analyzed. In particular, we performed 10 experiments with SWMH, each increasing the number of documents by 10 %. Figure 3 illustrates the time taken to mine topics as we increase the number of documents and as we increase an index of complexity given by a combination of the size of the vocabulary and the average number of times a term appears in a document. As can be noticed, in both cases the time grows almost linearly and is in the thousands of seconds.

The mining times for the corpora were: NIPS, 43 s; 20 Newsgroups, \(70\,\mathrm{s}\); Reuters, \(4,446\,\mathrm{s}\); and Wikipedia, \(45,834\,\mathrm{s}\). These times contrast with the time required by Online LDA to model 100 topics: NIPS, \(60\,\mathrm{s}\); 20 Newsgroups, \(154\,\mathrm{s}\); and Reuters, \(25,997\,\mathrm{s}\). Additionally, we set Online LDA to model 400 topics with the Reuters corpus, which took 3 days. Memory figures follow a similar behavior to the time figures. The maximum memory usage was: NIPS, \(141\,\mathrm{MB}\); 20 Newsgroups, \(164\,\mathrm{MB}\); Reuters, \(530\,\mathrm{MB}\); and Wikipedia, \(1,500\,\mathrm{MB}\).

Fig. 3. Time scalability for the Reuters corpus.

Table 2. Document classification for 20 Newsgroups corpus.

4.3 Document Classification

In this evaluation we used the mined topics to create a document representation based on the similarity between topics and documents. This representation was used to train an SVM classifier to predict the class of each document. In particular, we focused on the 20 Newsgroups corpus for this experiment. We used the typical setting of this corpus for document classification (\(60\,\%\) training, \(40\,\%\) testing). Table 2 shows the performance for different variants of topics mined by SWMH and for Online LDA topics. The results illustrate that the number of topics is relevant for the task: Online LDA with 400 topics performs better than with 100 topics. A similar behavior can be noticed for SWMH; however, the parameter r has an effect on the content of the topics and therefore on the performance.
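A hedged sketch of this setup is shown below: each document is represented by its similarity to every mined topic and a linear SVM is trained on these features. The use of cosine similarity between bag-of-words vectors of documents and topics is an assumption made only for illustration.

import numpy as np
from sklearn.svm import LinearSVC

def topic_features(doc_vectors, topic_vectors):
    # Cosine similarity of each document (rows of doc_vectors) against each
    # topic (rows of topic_vectors); both are matrices over the same vocabulary.
    d = doc_vectors / (np.linalg.norm(doc_vectors, axis=1, keepdims=True) + 1e-12)
    t = topic_vectors / (np.linalg.norm(topic_vectors, axis=1, keepdims=True) + 1e-12)
    return d @ t.T

# Usage (train_docs, topics and y_train are placeholders):
# clf = LinearSVC().fit(topic_features(train_docs, topics), y_train)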

4.4 Comparison Between Mined and Modeled Topics

In this evaluation we compare the quality of the topics mined by SWMH against Online LDA topics for the 20 Newsgroups and Reuters corpora. For this we measure topic coherence, which is defined as

$$ C(t) = \sum \limits _{m=2}^{M} \sum \limits _{l=1}^{m-1} \log \frac{D(v_m, v_l)}{D(v_l)}, $$

where \(D(v_l)\) is the document frequency of the term \(v_l\) and \(D(v_m, v_l)\) is the co-document frequency of the terms \(v_m\) and \(v_l\) [11]. This metric depends on the first M elements of the topics; for our evaluations we fixed M to 10. However, we remark that the comparison is not direct since SWMH and Online LDA topics are different in nature: SWMH topics are subsets of the vocabulary with uniform distributions, while Online LDA topics are distributions over the complete vocabulary. In addition, Online LDA generates a fixed number of topics, which is in the hundreds, while SWMH produces thousands of topics. For the comparison we chose the n best mined topics by ranking them using an ad hoc metric involving the co-occurrence of the first element of the topic. For the purpose of the evaluation we limited SWMH to the 500 best-ranked topics. Figure 4 shows the coherence for each corpus. In general, we can see a difference in the shape and quality of the coherence box plots. However, we notice that SWMH produces a considerable number of outliers, which calls for further research on the ranking of the mined topics and its relation with coherence.
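For reference, the coherence above can be computed as in the following sketch, where doc_term_sets is the list of term sets of the corpus documents and topic_terms lists the topic terms in order; the small constant added to the numerator to avoid log(0), and the assumption that every topic term occurs in at least one document, are illustrative choices rather than part of the definition.

import math

def coherence(topic_terms, doc_term_sets, M=10, eps=1e-12):
    top = topic_terms[:M]
    def D(*terms):
        # Number of documents containing all the given terms.
        return sum(all(t in doc for t in terms) for doc in doc_term_sets)
    score = 0.0
    for m in range(1, len(top)):          # m = 2, ..., M in the 1-indexed formula
        for l in range(m):                # l = 1, ..., m-1
            score += math.log((D(top[m], top[l]) + eps) / D(top[l]))
    return score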

Fig. 4. Coherence of topics mined by SWMH vs. Online LDA topics in the (a) 20 Newsgroups and (b) Reuters corpora.

5 Discussion and Future Work

In this work we presented a large-scale approach to automatically mine topics in a given corpus based on Sampled Weighted Min-Hashing. The mined topics consist of subsets of highly correlated terms from the vocabulary. The proposed approach is able to mine topics in corpora that range from thousands of documents (approx. 1 min) to millions of documents (approx. 7 h), including topics similar to the ones produced by Online LDA. We found that the mined topics can be used to represent documents for classification. We also showed that the complexity of the proposed approach grows linearly with the number of documents. Interestingly, some of the topics mined by SWMH are related to the structure of the documents (e.g., in NIPS the words in the first topic correspond to parts of an article) and others to specific groups (e.g., team sports in 20 Newsgroups and Reuters, or the Transformers universe in Wikipedia). These examples suggest that SWMH is able to generate topics at different levels of granularity.

Further work has to be done to make sense of overly specific topics or to filter them out. In this direction, we found that weighting the terms has the effect of discarding several irrelevant topics and producing more compact ones. Another alternative is to restrict the vocabulary to the most frequent terms, as done by other approaches. Other interesting directions for future work include exploring other weighting schemes, finding a better representation of documents from the mined topics, and parallelizing SWMH.