# Hierarchical Dirichlet scaling process


## Abstract

We present the *hierarchical Dirichlet scaling process* (HDSP), a Bayesian nonparametric mixed membership model. The HDSP generalizes the hierarchical Dirichlet process to model the correlation structure between metadata in the corpus and mixture components. We construct the HDSP based on the normalized gamma representation of the Dirichlet process, and this construction allows incorporating a scaling function that controls the membership probabilities of the mixture components. We develop two scaling methods to demonstrate that different modeling assumptions can be expressed in the HDSP. We also derive the corresponding approximate posterior inference algorithms using variational Bayes. Through experiments on datasets of newswire, medical journal articles, conference proceedings, and product reviews, we show that the HDSP results in a better predictive performance than labeled LDA, partially labeled LDA, and author topic model and a better negative review classification performance than the supervised topic model and SVM.

## Keywords

Topic modeling · Dirichlet process · Hierarchical Dirichlet process

## 1 Introduction

The hierarchical Dirichlet process (HDP) is an important nonparametric Bayesian prior for mixed membership models, and the HDP topic model is useful for a wide variety of tasks involving unstructured text (Teh et al. 2006). To extend the HDP topic model, there has been active research in dependent random probability measures as priors for modeling the underlying association between the latent semantic structure and covariates, such as time stamps and spatial coordinates (Ahmed and Xing 2010; Ren et al. 2011).

A large body of this research is rooted in the dependent Dirichlet process (DDP) (MacEachern 1999), where the probabilistic random measure is defined as a function of covariates. Most DDP approaches rely on a generalization of Sethuraman’s stick breaking representation of the DP (Sethuraman 1991), incorporating the time difference between two or more data points, the spatial difference among observed data, or the ordering of the data points into the predictor dependent stick breaking process (Duan et al. 2007; Dunson and Park 2008; Griffin and Steel 2006). Some of these priors can be integrated into the hierarchical construction of the DP (Srebro and Roweis 2005), resulting in topic models where temporally or spatially proximate data are more likely to be clustered. These existing DP approaches, however, cannot be easily extended to model the underlying topics of a document collection. One reason is that the extension requires developing a new tractable inference algorithm for models with intractable posterior distributions.

We suggest the *hierarchical Dirichlet scaling process* (HDSP) as a new way of modeling a corpus with various types of covariates such as categories, authors, and numerical ratings. The HDSP models the relationship between topics and covariates by generating dependent random measures in a hierarchy, where the first level is a Dirichlet process, and the second level is a *Dirichlet scaling process* (DSP). The first level DP is constructed in the traditional way of a stick breaking process, and the second level DSP with a normalized gamma process. With the normalized gamma process, each topic proportion of a document is independently drawn from a gamma distribution and then normalized. Unlike the stick breaking process, the normalized gamma process keeps the same order of the atoms as the first level measure, which allows the topic proportions in the random measure to be controlled. The DSP then uses that controllability to guide the topic proportions of a document by replacing the rate parameter of the gamma distribution with a scaling function that defines the correlation structure between topics and labels. The choice of the scaling function reflects the characteristics of the corpus. We show two scaling functions, the first one for a corpus with categorical labels, and the second for a corpus with both categorical and numerical labels.

The HDSP models the topic proportions of a document as a dependent variable of observable side information. This modeling approach differs from the traditional definition of a generative process where the observable variables are generated from a latent variable or parameter. For example, Zhu et al. (2009) and Mcauliffe and Blei (2007) propose generative processes where the observable labels are generated from a topic proportion of a document. However, a more natural model of the human writing process is to decide what to write about (e.g., categories) before writing the content of a document. This same approach is also successfully demonstrated in Mimno and McCallum (2012).

The outline of this paper is as follows. In Sect. 2, we describe related work and position our work within the topic modeling literature. In Sect. 3, we describe the gamma process construction of the HDP and how scale parameters are used to develop the HDSP with two different scaling functions. In Sect. 4, we derive a variational inference for the latent variables. In Sect. 5, we verify our approach on a synthetic dataset and demonstrate the improved predictive power on real world corpora. In Sect. 6, we discuss our conclusions and possible directions for future work.

## 2 Related work

For model construction, the model most closely related to HDSP is the discrete infinite logistic normal (DILN) model (Paisley et al. 2012) in which the correlations among topics are modeled through the normalized gamma construction. DILN allocates a latent location for each topic in the first level, and then draws the second level random measures from the normalized gamma construction of the DP. Those random measures are then scaled by an exponentiated Gaussian process defined on the latent locations. DILN is a nonparametric counterpart of the correlated topic model (Blei and Lafferty 2007) in which the logistic normal prior is used to model the correlations between topics. The HDSP is also constructed through the normalized gamma distribution with an informative scaling parameter, but our goal in HDSP is to model the correlations between topics and labels. The doubly correlated nonparametric topic model (DCNT) proposed by Kim and Sudderth (2011) also takes documents’ metadata into account to model the correlation among topics and metadata. Unlike the HDSP, the DCNT is constructed through a logistic stick-breaking process (Ren et al. 2011) which is originally proposed for modeling contiguous and spatially localized segments.

The Dirichlet-multinomial regression topic model (DMR-TM) (Mimno and McCallum 2012) also models the label dependent topic proportions of documents, but it is a parametric model. The DMR-TM places a log-linear prior on the parameter of the Dirichlet distribution to incorporate arbitrary types of observed labels. The DMR-TM takes the “upstream” approach in which the latent variable or latent topics are conditionally generated from the observed label information. The author-topic model (Rosen-Zvi et al. 2004) also takes the same approach, but it is a specialized model for authors of documents. Unlike the “downstream” generative approach used in the supervised topic model (Mcauliffe and Blei 2007), the maximum margin topic model (Zhu et al. 2009), and the relational topic model (Chang and Blei 2009), the upstream approach does not require specifying the probability distribution over all possible values of observed labels.

The HDSP is a new way of constructing a dependent random measure in a hierarchy. In the field of Bayesian nonparametrics, the introduction of the DDP (MacEachern 1999) has led to increased attention to constructing dependent random measures. Most such approaches develop priors to allow covariate dependent variation in the atoms of the random measure (Gelfand et al. 2005; Rao and Teh 2009) or in the weights of atoms (Griffin and Steel 2006; Duan et al. 2007; Dunson and Park 2008). These priors replace the first level of the HDP to incorporate a document-specific covariate for generating a dependent topic proportion. The HDSP allows covariate dependent variation in the weights of atoms, where the variation is controlled by the scaling function that defines the correlation between atoms and labels. A proper definition of the scaling function gives the flexibility to model various types of labels.

Several topic models for labeled documents use the credit attribution approach where each observed word token is assigned to one of the observed labels. Labeled LDA (L-LDA) allocates one dimension of the topic simplex per label and generates words from only the topics that correspond to the labels in each document (Ramage et al. 2009). An extension of this model, partially labeled LDA (PLDA), adds more flexibility by allocating a pre-defined number of topics per label and including a background label to handle documents with no labels (Ramage et al. 2011). The Dirichlet process with mixed random measures (DP-MRM) is a nonparametric topic model which generates an unbounded number of topics per label but still excludes topics from labels that are not observed in the document (Kim et al. 2012).

## 3 Hierarchical Dirichlet scaling process

In this section, we describe the hierarchical Dirichlet scaling process (HDSP). First we review the HDP with an alternative construction that uses the normalized gamma process for the second level DP. We then present the HDSP, where the second level DP is replaced by the Dirichlet scaling process (DSP). Finally, we describe two scaling functions for the DSP to incorporate categorical and numerical labels.

### 3.1 The normalized gamma process construction of HDP

The HDP consists of two levels of the DP where the random measure drawn from the upper level DP is the base distribution of the lower level DP. The formal definition of the hierarchical representation is as follows:

$$G_0 \sim \text {DP}(\alpha , H), \qquad G_m \sim \text {DP}(\beta , G_0),$$

where *H* is a base distribution, \(\alpha \) and \(\beta \) are concentration parameters for each level respectively, and index *m* represents multiple draws from the second level DP. For the mixed membership model, \(\mathrm {x}_{mn}\), observation *n* in group *m*, can be drawn from

$$\phi _{mn} \sim G_m, \qquad \mathrm {x}_{mn} \sim f(\mathrm {x}_{mn}|\phi _{mn}).$$

For topic modeling, *H* is usually a Dirichlet distribution over the vocabulary, so the atoms of the first level random measure \(G_0\) are an infinite set of topics drawn from *H*. The second level random measure \(G_m\) is distributed based on the first level random measure \(G_0\), so the second level shares the same set of topics, the atoms of the first level random measure.

In the normalized gamma process construction of the second level DP, the weight of atom *k* is drawn as \(\pi _{mk} \sim \text {Gamma}(\beta p_k, 1)\) and then normalized, where \(p_k\) is the weight of the *k*th stick of the first level. This construction keeps the order of the atoms, so atom *k* of the second level measure always corresponds to the *k*th stick of the first level. Therefore, during inference, the model does not need to keep track of which second level atoms correspond to which first level atoms. Furthermore, by placing a proper random variable on the rate parameter of the gamma distribution, the model can infer the correlations among the topics (Paisley et al. 2012) through the Gaussian process (Rasmussen and Williams 2005).
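As a toy illustration of this two-level construction, the following sketch simulates the first level stick weights and a second level measure under a finite truncation. The function names and the truncation level are ours, not part of the model definition:

```python
import random

def stick_breaking(alpha, T):
    """First level stick weights p_k = V_k * prod_{k'<k}(1 - V_k'),
    truncated at level T by setting V_T = 1."""
    p, remaining = [], 1.0
    for k in range(T):
        v = 1.0 if k == T - 1 else random.betavariate(1.0, alpha)
        p.append(v * remaining)
        remaining *= 1.0 - v
    return p

def normalized_gamma_measure(beta, p):
    """Second level weights: pi_k ~ Gamma(beta * p_k, 1), then normalize.
    Atom k of the second level corresponds to atom k of the first level."""
    pi = [random.gammavariate(beta * pk, 1.0) for pk in p]
    total = sum(pi)
    return [w / total for w in pi]

random.seed(0)
p = stick_breaking(alpha=1.0, T=10)
g_m = normalized_gamma_measure(beta=5.0, p=p)
assert abs(sum(g_m) - 1.0) < 1e-9  # a valid probability measure
```

Because the gamma draws are indexed by the first level sticks, the ordering of atoms is preserved, which is the property the DSP exploits below.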

### 3.2 Hierarchical Dirichlet scaling process

In the HDSP, the first level random measure \(G_0\) is drawn from a DP with concentration parameter \(\alpha \) and base distribution *H*. The second level random measure \(G_m\) for document *m* is drawn from the DSP parameterized by the concentration parameter \(\beta \), base distribution \(G_0\), the observed labels \(r_{m}\) of the document, and a *scaling function* \(s(\cdot )\) with *scaling parameter* *w*:

$$G_0 \sim \text {DP}(\alpha , H), \qquad G_m \sim \text {DSP}(\beta , G_0, s_w(r_m)).$$

Each atom of the first level measure carries a topic together with a scaling parameter *w*. Specifically, the base distribution *H* is \(\text {Dir}(\eta ) \otimes L_w\) where \(\eta \) is the parameter of the word-topic distribution, and \(L_w\) is a prior distribution for the scaling parameter *w*. The form of the resulting random measure is

$$G_0 = \sum _{k=1}^{\infty } p_k \delta _{(\phi _k, w_k)},$$

where, for stick *k*, \(p_k = V_k \prod _{k'=1}^{k'<k}(1-V_{k'})\) and \(\{\phi _k, w_k\}\) is the atom of stick *k*. At the second level construction, \(w_k\) becomes the parameter that guides the proportion of topic *k* for each document.

The DSP generates the second level measure in two steps. First, draw an unscaled measure \(G_m'\) from the normalized gamma process for document *m*. Second, scale the weights of the atoms based on a scaling function parameterized by \(w_k\) and the observed labels. Let \(r_{mj}\) be the value of observed label *j* in document *m*, then \(G_m'\) is scaled as follows:

$$G_m = \sum _{k=1}^{\infty } \frac{s_{w_k}(r_m)\, \pi _{mk}}{\sum _{l=1}^{\infty } s_{w_l}(r_m)\, \pi _{ml}}\, \delta _{\phi _k}, \qquad \pi _{mk} \sim \text {Gamma}(\beta p_k, 1).$$

The weight of each atom *k* is scaled by the scaling weight \(s_{w_k}(r_{m})\), and therefore, the topic proportions of a document are proportional to the scaling weights of the observed labels. The scaling function should be carefully chosen to reflect the underlying relationship between topics and labels. We show two concrete examples of scaling functions in Sect. 3.3.

Equivalently, the weight of topic *k* is drawn from a gamma distribution with parameter \(\beta p_k\), and then scaled by the scaling weight \(s_{w_k}(r_m)\). Since \(y' = ky\) for \(y \sim \) Gamma(*a*, 1) is equal to \(y' \sim \) Gamma\((a, k^{-1})\), the scaled weight follows \(\text {Gamma}(\beta p_k, s_{w_k}(r_m)^{-1})\). Finally, the *n*th observation in the *m*th group is drawn as follows:

$$\phi _{mn} \sim G_m, \qquad x_{mn} \sim f(x_{mn}|\phi _{mn}),$$

where *f* is a data distribution parameterized by \(\phi _k\). For topic modeling, \(G_m\) and \(x_{mn}\) correspond to document *m* and word *n* in document *m*, respectively.
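As a toy illustration of this generative process, the following sketch draws one document's topic proportions and words. The scaling weights \(s_{w_k}(r_m)\) are passed in directly as a list, and all function and variable names are our own illustration:

```python
import random

def draw_document(beta, p, topics, scale_weights, n_words):
    """Sketch of the DSP generative step for one document:
    pi_k ~ Gamma(beta * p_k, 1), scaled by s_{w_k}(r_m) (given here
    directly as scale_weights[k]) and normalized; then for each word
    z_n ~ Discrete(pi) and x_n ~ Discrete(phi_{z_n})."""
    pi = [s * random.gammavariate(beta * pk, 1.0)
          for pk, s in zip(p, scale_weights)]
    total = sum(pi)
    pi = [v / total for v in pi]
    words = []
    for _ in range(n_words):
        z = random.choices(range(len(pi)), weights=pi)[0]
        words.append(random.choices(range(len(topics[z])),
                                    weights=topics[z])[0])
    return pi, words

random.seed(1)
topics = [[0.5, 0.5], [0.9, 0.1], [0.1, 0.9]]   # toy word distributions
pi_m, x_m = draw_document(beta=10.0, p=[0.5, 0.3, 0.2], topics=topics,
                          scale_weights=[4.0, 1.0, 0.25], n_words=20)
assert abs(sum(pi_m) - 1.0) < 1e-9
```

Topics with larger scaling weights tend to receive larger proportions, which is exactly how the observed labels guide a document's topics.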

### 3.3 Scaling functions

Now we propose two scaling functions to express the correlation between topics and labels of documents. A scaling method is properly defined by two factors: (1) a proper prior over the scaling parameter \(w_k\), and (2) a plausible scaling function between the topic specific scaling parameter \(w_k\) and the observed labels \(r_m\) of a document.

*Scaling function 1* We design the first scaling function to model categorical side information such as authors, tags, and categories. For a corpus with *J* unique labels, \(w_k\) is a *J*-dimensional parameter where each dimension matches a corresponding label. We define the scaling function as the product of the scaling parameters that correspond to the observed labels:

$$s_{w_k}(r_m) = \prod _{j=1}^{J} w_{kj}^{\,r_{mj}},$$

where \(r_{mj}\) is one if label *j* is observed in document *m* and zero otherwise, and \(w_{kj}\) is a scaling parameter of topic *k* for label *j*. We place an inverse gamma prior over the weight variable \(w_{kj}\).

With this scaling function, the proportion of topic *k* for document *m* is scaled as follows:

$$\pi _{mk} \sim \text {Gamma}\Big (\beta p_k, \prod _{j=1}^{J} w_{kj}^{-r_{mj}}\Big ),$$

and the topic proportions are obtained by normalizing \(\pi _{mk}\) over all topics for document *m*:

$$\bar{\pi }_{mk} = \frac{\pi _{mk}}{\sum _{l} \pi _{ml}}.$$
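The first scaling function reduces to a simple product over the observed labels. A minimal sketch, with illustrative names of our own:

```python
def scaling_fn1(w_k, r_m):
    """s_{w_k}(r_m) = prod_j w_kj^{r_mj}, with r_mj in {0, 1}.
    Only labels observed in the document contribute a factor."""
    s = 1.0
    for w_kj, r_mj in zip(w_k, r_m):
        s *= w_kj ** r_mj
    return s

# topic k with scaling parameters for J = 3 labels
w_k = [2.0, 0.5, 1.0]
assert scaling_fn1(w_k, [1, 0, 0]) == 2.0   # only label 1 observed
assert scaling_fn1(w_k, [1, 1, 0]) == 1.0   # labels 1 and 2: 2.0 * 0.5
```

A document observing several labels thus favors topics whose scaling parameters are large for *all* of those labels, matching the modeling assumption discussed below.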

*Scaling function 2* The above scaling function models categorical side information, but many datasets, such as product reviews, have numerical ratings as well as categorical information. We propose a second scaling function that can model both numerical and categorical information. Again, let \(w_k\) be a *J*-dimensional scaling parameter where each dimension matches a corresponding label. The second scaling function is defined as follows:

$$s_{w_k}(r_m) = \exp \Big (-\sum _{j=1}^{J} w_{kj} r_{mj}\Big ),$$

where \(w_{kj}\) is the scaling parameter of label *j* for topic *k*, and \(r_{mj}\) is the observed value of label *j* of document *m*. We place a normal prior over the scaling parameter \(w_{k}\). The scaling function is inverse log-linear in the weighted sum of the document's labels. Unlike the previous scaling function, which only considers whether a label is observed in a document, this scaling function incorporates the value of the observed label. With this scaling function, the proportion of topic *k* for document *m* is scaled as follows:

$$\pi _{mk} \sim \text {Gamma}\Big (\beta p_k, \exp \Big (\sum _{j=1}^{J} w_{kj} r_{mj}\Big )\Big ).$$
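The second scaling function is an inverse log-linear in the weighted label values. A minimal sketch, with illustrative names and weights of our own:

```python
import math

def scaling_fn2(w_k, r_m):
    """s_{w_k}(r_m) = exp(-sum_j w_kj * r_mj): the scale shrinks
    exponentially as the weighted sum of label values grows."""
    return math.exp(-sum(w * r for w, r in zip(w_k, r_m)))

w_k = [0.5, -1.0]                      # e.g. [rating weight, category weight]
low  = scaling_fn2(w_k, [1.0, 1.0])    # one-star review in this category
high = scaling_fn2(w_k, [5.0, 1.0])    # five-star review in this category
assert high < low   # larger weighted sum -> exponentially smaller scale
```

Unlike the first scaling function, two documents with the same observed labels but different label values (e.g., different ratings) receive different scaling weights.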

The choice of scaling function reflects the modeler’s perspective with respect to the underlying relationship between topics and labels. The first scaling function scales each topic by the product of the scaling parameters of the observed labels. This reflects the modeler’s assumption that a document with a set of observed labels is likely to exhibit topics that have high correlation with all of the observed labels. With the second scaling function, the scaling weight changes exponentially as the value of label changes. This reflects the modeler’s assumption that two documents with the same set of observed labels but with different values are likely to exhibit different topics.

### 3.4 HDSP as a dependent Dirichlet process

The HDSP can be viewed in the context of the dependent Dirichlet process. In the DDP-DP approach, the first level random measure is a function of the covariates, and the dependent random measure for document *m* is constructed as follows:

$$G_0(r_m) \sim \text {DDP}(\alpha , H), \qquad G_m \sim \text {DP}(\beta , G_0(r_m)).$$

Similarly, the HDSP constructs a dependent random measure with covariates. However, unlike the DDP-DP approach, \(G_0\) is no longer a function of covariates. The HDSP defines a single global random measure \(G_0\) and then scales \(G_0\) based on the covariates with the scaling function. With a proper, but relatively simple, scaling function that reflects the correlation between covariates and topics, the HDSP models any structures or types of covariates, whereas the DDP requires a complex dependent process for different types of covariates (Griffin and Steel 2006).

## 4 Variational inference for HDSP

Posterior inference for Bayesian nonparametric models is challenging because computing the posterior over an infinite dimensional space is intractable. Approximation algorithms, such as marginalized MCMC (Escobar and West 1995; Teh et al. 2006) and variational inference (Blei and Jordan 2006; Teh et al. 2008), have been developed for Bayesian nonparametric mixture models. We develop a mean field variational inference (Jordan et al. 1999; Wainwright and Jordan 2008) algorithm for approximate posterior inference of the HDSP topic model. The objective of variational inference is to minimize the KL divergence between a distribution over the hidden variables and the true posterior, which is equivalent to maximizing the lower bound of the marginal log likelihood of the observed data.

In this section, we first derive the inference algorithm for the first scaling function with a fully factorized variational family. Variational inference algorithms can be easily modularized with the fully factorized variational family, and the variation in a model only affects the update rules for the modified parts of the model. Therefore, for the second scaling function, we only need to update the part of the inference algorithm related to the new scaling function.

### 4.1 Variational inference for the first scaling function

To approximate the posterior, we use a fully factorized variational distribution and truncate the corpus level stick proportions at truncation level *T* by letting \(V_T = 1\). Thus the model still keeps its infinite dimensionality while allowing the approximation to be carried out under the bounded variational distributions.

where *H*(*q*) is the entropy of the variational distribution. By taking the derivative of this lower bound, we derive the following coordinate ascent algorithm.

*Document-level updates* At the document level, we update the variational distributions for the topic assignment \(z_{mn}\) and the document level stick proportion \(\pi _{mk}\). The update for \(q(z_{mn}|\gamma _{mn})\) depends on \(r_{mj}\), which is one if the *j*th label is observed in the *m*th document, and zero otherwise.

*Corpus-level updates* At the corpus level, we update the variational distribution for the scaling parameter \(w_{kj}\), corpus level stick length \(V_k\) and word topic distribution \(\eta _{ki}\).

where *i* is a word index, and \(\mathbf {1}\) is an indicator function (Blei et al. 2003). The optimal forms of the variational distributions *q* follow from these coordinate ascent updates.

### 4.2 Variational inference for the second scaling function

For the second scaling function, we only need to re-derive the variational updates related to the scaling function *s*: for each label *j* and topic *k*, we iteratively update \(w_{kj}\) until convergence.

There may be alternative scaling functions suited to the characteristics of the dataset at hand. Introducing a new scaling function requires a new inference algorithm, and this can be cumbersome. Recently, several approaches have been proposed to bypass the complex derivation of variational updates (Kingma and Welling 2014; Ranganath et al. 2014; Tran et al. 2016). Most of these approaches rely on re-parameterization tricks and stochastic updates with random samples from variational distributions. Although these methods are unbiased estimators of the variational parameters, they sometimes suffer from the high variance of the samples, especially when they are applied to the whole ELBO (Ranganath et al. 2014). We suggest inferring the scaling-irrelevant parameters with the provided variational updates and the scaling-relevant parameters with these black-box techniques, to reduce the possible high variance of these approaches.

## 5 Experiments

In this section, we describe how the HDSP performs with real and synthetic data. We fit the HDSP topic model with three different types of data and compare the results with several comparison models. First, we test the model with synthetic data to verify the approximate inference. Second, we train the model with categorical data whose label information is represented by binary values. Third, we train the model with mixed-type of data whose label information has both numerical and categorical values.

### 5.1 Synthetic data

There is no naturally-occurring dataset with the observable weights between topics and labels, so we synthesize data based on the model assumptions to verify our model and the approximate inference. First, we check the difference between the original topics and the inferred topics via simple visualization. Then, we focus on the differences between the inferred and synthetic weights. For all experiments with synthetic data, the datasets are generated by following the model assumptions with the first scaling function, and the posterior inferences are done with the first scaling function. We set the truncation level *T* at twice the number of topics. We terminate the variational inference when the fractional change of the lower bound falls below \(10^{-3}\), and we average all results over 10 individual runs with different initializations.

Figure 2 shows the results of the HDP and the HDSP on the synthetic dataset. Figure 2b, c are the heat maps of topics inferred from each model. We match the inferred topics to the original topics using KL divergence between the two sets of topic distributions. There are no significant differences between the inferred topics of HDSP and HDP. In addition to the topics, HDSP infers the scaling parameters between topics and labels, which are shown in Fig. 2e. The results show that the relative differences between original scaling parameters are preserved in the inferred parameters through the variational inference.

With the second experiment, we show that the inferred parameters preserve the relative differences between labels and topics in the dataset. For this experiment, we generate 1,000 documents with ten randomly drawn topics from Dirichlet(0.1) with a vocabulary size of 20. To generate the weights between topics and labels, we randomly place the topics and labels in a three dimensional euclidean space, and use the distance between a topic and a label as a scaling parameter. Let \(\theta _k \in \mathbb {R}^3\) be the location of topic *k* and \(\theta _j \in \mathbb {R}^3\) be the location of label *j*. We use \(|\theta _k - \theta _j|_2\) as an inverse scaling parameter \(w_{kj}^{-1}\) between topic *k* and label *j*, so the scaling weight increases as the distance between a topic and a label decreases. The locations of topics and labels are uniformly drawn from a three dimensional euclidean space with total volume \(x^3\); we vary the value of *x* from 1 to 20 for each experiment.
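The distance-based weight generation described above can be sketched as follows; the function name and parameterization are our own illustration of the procedure:

```python
import math
import random

def synthetic_scaling_weights(n_topics, n_labels, x):
    """Place topics and labels uniformly in [0, x]^3 and set
    w_kj = 1 / ||theta_k - theta_j||_2 (inverse distance), so the
    scaling weight grows as a topic and a label get closer."""
    rand_loc = lambda: [random.uniform(0.0, x) for _ in range(3)]
    topic_locs = [rand_loc() for _ in range(n_topics)]
    label_locs = [rand_loc() for _ in range(n_labels)]
    return [[1.0 / math.dist(t, l) for l in label_locs]
            for t in topic_locs]

random.seed(2)
w = synthetic_scaling_weights(n_topics=10, n_labels=5, x=5.0)
assert len(w) == 10 and len(w[0]) == 5
```

Varying `x` stretches the space: larger `x` spreads topics and labels apart, producing smaller and more dispersed scaling weights.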

### 5.2 Categorical data

We use three corpora with categorical labels: RCV (newswire from Reuters), OHSUMED (a subset of the Medline journal articles), and NIPS (proceedings of the NIPS conference). For RCV and OHSUMED, we use the multi-category information of documents as labels, and for NIPS, we use the authors of papers as labels. The average number of labels per article is 3.2 for RCV, 5.2 for OHSUMED, and 2.4 for NIPS. Table 1 contains the details of the datasets.

Table 1. Datasets used for the experiments in Sect. 5.2

| Dataset | Docs | Vocab | Labels | Labels/doc | Docs/label |
|---|---|---|---|---|---|
| RCV | 23,149 | 9911 | 117 | 3.2 | 729.7 |
| OHSUMED | 7505 | 7056 | 52 | 5.2 | 722.0 |
| NIPS | 2484 | 14,036 | 2865 | 2.4 | 1.6 |

#### 5.2.1 Experimental settings

For the HDSP and wHDSP, we set the truncation level *T* to 200. We terminate variational inference when the fractional change of the lower bound falls below \(10^{-3}\), and we optimize all hyperparameters during inference except \(\eta \). For L-LDA and PLDA, we implement the collapsed Gibbs sampling algorithm. For each model, we run 5000 iterations, discarding the first 3000 as burn-in and then using the samples thereafter with gaps of 100 iterations. For PLDA, we set the number of topics for each label to two and five (PLDA-2, PLDA-5). For the ATM, we set the number of topics to 50, 100, and 150. We try five different values for the topic Dirichlet parameter \(\eta \): \(\eta = 0.1, 0.25, 0.5, 0.75, 1.0\). Finally, all results are averaged over 20 runs with different random initializations. We do not report the standard errors because they are small enough to ignore.

#### 5.2.2 Evaluation metric

The goal of our model is to construct the dependent random probability measure given multiple labels. Therefore, our interest is to see the increments of predictive performance when the label information is given.

We measure the per-word held-out perplexity given the label information,

$$\text {perplexity} = \exp \Big (-\frac{\log p(\mathbf {x}'|\mathbf {r}')}{N}\Big ),$$

where \(\mathbf {x}'\) are the *N* words of a held-out document, \(\mathbf {r}'\) are the labels of the held-out document, \(z_n'\) is the latent topic of word *n*, and \(\pi _k'\) is the *k*th topic proportion of the held-out document. Since the integral over \(\pi '\) is intractable, we approximate the probability

#### 5.2.3 Experimental results

Figure 5 shows the predictive performance of our model against the comparison models. For the OHSUMED and RCV corpora, both HDSP and wHDSP outperform all others. Among these models, L-LDA restricts the modeling flexibility the most; the PLDA relaxes that restriction by adding an additional latent label and allowing multiple topics per label. HDSP and wHDSP further increase the modeling flexibility by allowing all topics to be generated from each label. This is reflected in the results of predictive performance of the three models; L-LDA shows the worst performance, then PLDA, and HDSP and wHDSP show the lowest perplexity. For the NIPS data, we compare HDSP and wHDSP to ATM, and again, HDSP and wHDSP show the lowest perplexity.

#### 5.2.4 Modeling data with missing labels

We also test our model with partially labeled data which have not been previously covered in topic modeling. Many real-world data fall into this category where some of the data are labeled, others are incompletely labeled, and the rest are unlabeled. For this experiment, we randomly remove existing labels from the RCV and OHSUMED corpora. To remove observed labels in the training corpus, we use Bernoulli trials with varying parameters to analyze how the proportion of observed labels affects the heldout predictive performance of the model.
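The Bernoulli label-removal procedure can be sketched as follows, assuming labels are stored as per-document lists (the helper name is ours):

```python
import random

def drop_labels(label_sets, keep_prob):
    """Remove each observed label independently with probability
    1 - keep_prob, simulating a partially labeled corpus."""
    return [[l for l in labels if random.random() < keep_prob]
            for labels in label_sets]

random.seed(3)
corpus_labels = [["econ", "trade"], ["health"], ["econ", "sport", "health"]]
partial = drop_labels(corpus_labels, keep_prob=0.5)
# every surviving label set is a subset of the original
assert all(set(p) <= set(c) for p, c in zip(partial, corpus_labels))
```

Varying `keep_prob` controls the proportion of observed labels, which is the quantity swept in this experiment.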

#### 5.2.5 Modeling data with single category

### 5.3 Mixed-type data

In this section, we present the performance of the second scaling function with a corpus of product reviews which has real-valued ratings and category information.

Each review is associated with a label vector *r* whose values denote the observations of the labels. For each review, we set the dimension of *r* to eight, in which the first dimension is the numerical rating of the review, and the remaining seven dimensions match the seven product categories. We set the value of each dimension to one if the review belongs to the corresponding category, and zero otherwise.

Table 2. The number of reviews for each rating and category in the Amazon dataset

| Rating | # reviews | Percentage |
|---|---|---|
| Total | 24,259 | 100 |
| 5-star | 12,382 | 52 |
| 4-star | 5040 | 20 |
| 3-star | 1905 | 8 |
| 2-star | 1723 | 7 |
| 1-star | 3209 | 13 |

| Category | # reviews |
|---|---|
| Canister vacuum | 3535 |
| Digital SLR | 4189 |
| Laptop | 4252 |
| MP3 | 3659 |
| Air conditioner | 568 |
| Space heater | 3859 |
| Coffee machine | 4197 |

To evaluate the performance of wHDSP, we classify the ratings of the reviews based on a trained model. We use 90% of the corpus to train the models and the remaining 10% to test them. To classify the rating of each review in the test set, we compute the perplexity of the given review with ratings varying from one to five, and choose the rating that yields the lowest perplexity. Generally, computing the perplexity of a held-out document requires complex approximation schemes (Wallach et al. 2009), but we compute the perplexity based on the expected topic distribution given the category and rating information, which requires a finite number of computations.
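The classification rule described above can be sketched as follows; `perplexity_fn` is a hypothetical stand-in for the trained model's perplexity computation, not an interface from the paper:

```python
def predict_rating(perplexity_fn, review, ratings=(1, 2, 3, 4, 5)):
    """Pick the rating whose assumed label vector yields the lowest
    held-out perplexity for the review."""
    return min(ratings, key=lambda r: perplexity_fn(review, r))

# toy stand-in: pretend the model is least "surprised" by rating 4
fake_perplexity = lambda review, r: (r - 4) ** 2 + 100.0
assert predict_rating(fake_perplexity, review=["great", "battery"]) == 4
```

This turns the generative model into a five-way classifier without training a separate discriminative component.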

We compare the wHDSP with the supervised LDA (SLDA), LDA + SVM, as well as classifiers Naive Bayes, SVM, and decision trees (CART). For the LDA + SVM approach, we first train the LDA model and then use the inferred topic proportion and categories as features of the SVM. For the SLDA model, the category information cannot be used because the model is designed to learn and predict the single response variable. For both models, we set the number of topics to 50, 100, and 200.

In many applications, classifying negative feedback of users is more important than classifying positive feedback. From the negative feedback, companies can identify possible problems of their products and services and use the information to design their next product or improve their services. In most online reviews, however, the proportion of negative feedback is smaller than the proportion of positive feedback. For example, in the Amazon data, about 51% of reviews are rated as five-star, and 72% rated as four or five. A classifier trained by such skewed data is likely to be biased toward the majority class.

Table 3. F1 of wHDSP and the other models for the Amazon review corpus

| Model | Rating 1 | Rating 2 | Rating 3 | Rating 4 | Rating 5 |
|---|---|---|---|---|---|
| wHDSP | 0.600 | | | | 0.687 |
| wHDSP-no-cate | 0.428 | 0.087 | 0.099 | 0.061 | 0.658 |
| LDA50 + SVM | 0.392 | 0.036 | 0.038 | 0.134 | 0.684 |
| LDA100 + SVM | 0.454 | 0.078 | 0.073 | 0.265 | 0.678 |
| LDA200 + SVM | 0.508 | 0.032 | 0.100 | 0.284 | 0.681 |
| SLDA50 | 0.603 | 0.000 | 0.021 | 0.140 | |
| SLDA100 | 0.606 | 0.000 | 0.021 | 0.067 | 0.740 |
| SLDA200 | 0.580 | 0.015 | 0.011 | 0.140 | 0.727 |
| SVM | 0.403 | 0.000 | 0.000 | 0.007 | 0.716 |
| NaiveBayes | | 0.028 | 0.085 | 0.469 | 0.652 |
| DecisionTree | 0.457 | 0.088 | 0.154 | 0.355 | 0.628 |

Table 4. Macro and micro F1 of the wHDSP and the other models

| Model (5 ratings) | Macro F1 | Micro F1 |
|---|---|---|
| wHDSP | | 0.522 |
| wHDSP* | 0.267 | 0.474 |
| LDA50 + SVM | 0.257 | 0.518 |
| LDA100 + SVM | 0.310 | 0.520 |
| LDA200 + SVM | 0.321 | 0.527 |
| LDA200 + SVM* | 0.309 | 0.533 |
| SLDA50 | 0.301 | 0.584 |
| SLDA100 | 0.287 | |
| SLDA200 | 0.294 | 0.577 |
| SVM | 0.225 | 0.560 |
| NaiveBayes | 0.374 | 0.545 |
| DecisionTree | 0.336 | 0.477 |

We perform the rating prediction task with and without the category information of reviews to see the effect of using both the category and rating information in the wHDSP and LDA + SVM approaches. The results denoted by wHDSP* in Table 4 and Fig. 10b show the performance of rating prediction with the wHDSP trained without category information. wHDSP* performs worse than wHDSP, which indicates that the model, without category information, cannot distinguish review ratings that depend on topical context. The LDA + SVM without categories achieves 0.309 macro F1 and 0.533 micro F1, which are comparable to the LDA + SVM with the category information. Unlike the wHDSP, the decision boundaries of the SVM are not improved by the additional category information. This result suggests that, for learning decision boundaries between ratings over different categories, incorporating the category information while training topics is more effective than using the topics and the category information independently.

## 6 Discussions

We have presented the hierarchical Dirichlet scaling process (HDSP), a Bayesian nonparametric prior for a mixed membership model that lets us analyze underlying semantics together with observable side information. The combination of the stick breaking process with the normalized gamma process in the HDSP is a more controllable construction of the hierarchical Dirichlet process because each atom of the second level measure inherits from the first level measure in order. The HDSP also gains flexibility and the capability of modeling side information through the scaling functions that plug into the rate parameter of the gamma distribution. The choice of the scaling function is the most important part of the model in terms of establishing a link between topics and observed labels. We developed two scaling functions, but the choice of scaling function depends on the modeler's intention. For example, the well known link functions from the generalized linear model can be used as scaling functions, or several scaling functions can be combined. We showed that the application of the HDSP to topic modeling correctly recovers the topics and topic-label weights of synthetic data. Experiments with real datasets show that the first scaling function is better suited for partially labeled data, and the second scaling function is better suited for a dataset with both numerical and categorical labels.

The hierarchical Dirichlet scaling process opens up a number of interesting research questions that should be addressed in future work. First, in the two scaling functions we proposed to model the correlation structure between topics and side information, we simply defined the relationship between topic *k* and label *j* through the scaling parameter \(w_{kj}\). However, this approach does not capture correlations within the topics and within the labels. Taking inspiration from previous work that models correlations among topics (Blei and Lafferty 2007; Mimno et al. 2007; Paisley et al. 2012), we could define a scaling function with a prior over the topics and labels to capture these more complex relationships. Second, our posterior inference algorithm based on mean-field variational inference has been tested on tens of thousands of documents, but modern data analysis requires inference over massive and/or streaming data. For fast and efficient posterior inference, we can apply parallel or distributed algorithms based on stochastic updates (Hoffman et al. 2013; Ahn et al. 2014). Furthermore, we fix the number of labels before training; for streaming data, we need a way to model an unbounded number of labels.
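The stochastic update mentioned above follows the scheme of Hoffman et al. (2013): at step *t*, an intermediate estimate of a global variational parameter is computed from a sampled minibatch and blended into the current value with a decaying step size. A minimal sketch, with hypothetical names (`lam`, `lam_hat`) standing in for any global variational parameter of the model:

```python
def svi_step(lam, lam_hat, t, tau=1.0, kappa=0.6):
    """One stochastic variational update of a global parameter.

    lam     : current global variational parameter (list of floats)
    lam_hat : minibatch-based intermediate estimate, same shape
    t       : iteration counter (1-based)
    tau     : delay, down-weights early iterations
    kappa   : forgetting rate; kappa in (0.5, 1] satisfies the
              Robbins-Monro step-size conditions for convergence.
    """
    rho = (t + tau) ** (-kappa)  # decaying step size
    return [(1 - rho) * l + rho * lh for l, lh in zip(lam, lam_hat)]
```

Because each step touches only a minibatch, the same update rule extends naturally to parallel and distributed variants such as the distributed stochastic gradient MCMC of Ahn et al. (2014).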

## Acknowledgements

This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. B0101-15-0307, Basic Software Research in Human-level Lifelong Machine Learning (Machine Learning Center)).

## References

- Ahmed, A., & Xing, E. P. (2010). Timeline: A dynamic hierarchical Dirichlet process model for recovering birth/death and evolution of topics in text stream. In *Proceedings of the 26th conference on uncertainty in artificial intelligence (UAI)* (pp. 20–29).
- Ahn, S., Shahbaba, B., & Welling, M. (2014). Distributed stochastic gradient MCMC. In *Proceedings of the 31st international conference on machine learning (ICML)*.
- Bishop, C. M., & Nasrabadi, N. M. (2006). *Pattern recognition and machine learning* (Vol. 1). New York: Springer.
- Blei, D. M., & Jordan, M. I. (2006). Variational inference for Dirichlet process mixtures. *Bayesian Analysis*, *1*(1), 121–144.
- Blei, D. M., & Lafferty, J. D. (2007). A correlated topic model of science. *The Annals of Applied Statistics*, 17–35.
- Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. *The Journal of Machine Learning Research*, 993–1022.
- Chang, J., & Blei, D. M. (2009). Relational topic models for document networks. In *International conference on artificial intelligence and statistics* (pp. 81–88).
- Duan, J. A., Guindani, M., & Gelfand, A. E. (2007). Generalized spatial Dirichlet process models. *Biometrika*, *94*(4), 809–825.
- Dunson, D. B., & Park, J. H. (2008). Kernel stick-breaking processes. *Biometrika*, *95*(2), 307–323.
- Escobar, M. D., & West, M. (1995). Bayesian density estimation and inference using mixtures. *Journal of the American Statistical Association*, *90*, 577–588.
- Gelfand, A. E., Kottas, A., & MacEachern, S. N. (2005). Bayesian nonparametric spatial modeling with Dirichlet process mixing. *Journal of the American Statistical Association*, *100*(471), 1021–1035.
- Griffin, J. E., & Steel, M. J. (2006). Order-based dependent Dirichlet processes. *Journal of the American Statistical Association*, *101*(473), 179–194.
- Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. *The Journal of Machine Learning Research*, *14*(1), 1303–1347.
- Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1999). An introduction to variational methods for graphical models. *Machine Learning*, *37*(2), 183–233.
- Kim, D., Kim, S., & Oh, A. (2012). Dirichlet process with mixed random measures: A nonparametric topic model for labeled data. In *Proceedings of the 29th international conference on machine learning (ICML)*.
- Kim, D. I., & Sudderth, E. B. (2011). The doubly correlated nonparametric topic model. *Advances in Neural Information Processing Systems*, 1980–1988.
- Kingma, D. P., & Welling, M. (2014). Auto-encoding variational Bayes. In *International conference on learning representations (ICLR)*.
- Kruskal, J. B. (1964). Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. *Psychometrika*, *29*(1), 1–27.
- Liang, P., Petrov, S., Jordan, M. I., & Klein, D. (2007). The infinite PCFG using hierarchical Dirichlet processes. In *Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL)* (pp. 688–697).
- MacEachern, S. N. (1999). Dependent nonparametric processes. In *ASA proceedings of the section on Bayesian statistical science* (pp. 50–55).
- Mcauliffe, J. D., & Blei, D. M. (2007). Supervised topic models. *Advances in Neural Information Processing Systems*.
- Mimno, D., & McCallum, A. (2012). Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. arXiv:1206.3278.
- Mimno, D., Li, W., & McCallum, A. (2007). Mixtures of hierarchical topics with pachinko allocation. In *Proceedings of the 24th international conference on machine learning (ICML)* (pp. 633–640). ACM.
- Paisley, J., Wang, C., & Blei, D. M. (2012). The discrete infinite logistic normal distribution. *Bayesian Analysis*, *7*(4), 997–1034.
- Ramage, D., Hall, D., Nallapati, R., & Manning, C. D. (2009). Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In *Proceedings of the 2009 conference on empirical methods in natural language processing* (Vol. 1, pp. 248–256). Association for Computational Linguistics.
- Ramage, D., Manning, C. D., & Dumais, S. (2011). Partially labeled topic models for interpretable text mining. In *Proceedings of the 17th ACM international conference on knowledge discovery and data mining (KDD)* (pp. 457–465). New York, NY.
- Ranganath, R., Gerrish, S., & Blei, D. (2014). Black box variational inference. In *Proceedings of the seventeenth international conference on artificial intelligence and statistics (AISTATS)* (pp. 814–822).
- Rao, V., & Teh, Y. W. (2009). Spatial normalized gamma processes. *Advances in Neural Information Processing Systems*, 1554–1562.
- Rasmussen, C. E., & Williams, C. K. I. (2005). *Gaussian processes for machine learning (adaptive computation and machine learning)*. Cambridge: The MIT Press.
- Ren, L., Du, L., Carin, L., & Dunson, D. (2011). Logistic stick-breaking process. *The Journal of Machine Learning Research*, *12*, 203–239.
- Rosen-Zvi, M., Griffiths, T., Steyvers, M., & Smyth, P. (2004). The author-topic model for authors and documents. In *UAI*.
- Sethuraman, J. (1991). A constructive definition of Dirichlet priors. *Statistica Sinica*, *4*, 639–650.
- Srebro, N., & Roweis, S. (2005). Time-varying topic models using dependent Dirichlet processes. UTML, TR# 2005-3.
- Teh, Y. W., Jordan, M. I., Beal, M. J., & Blei, D. M. (2006). Hierarchical Dirichlet processes. *Journal of the American Statistical Association*.
- Teh, Y. W., Kurihara, K., & Welling, M. (2008). Collapsed variational inference for HDP. In *NIPS 20*.
- Tran, D., Ranganath, R., & Blei, D. M. (2016). Variational Gaussian process. In *International conference on learning representations (ICLR)*.
- Wainwright, M. J., & Jordan, M. I. (2008). Graphical models, exponential families, and variational inference. *Foundations and Trends in Machine Learning*, *1*(1–2), 1–305.
- Wallach, H. M., Murray, I., Salakhutdinov, R., & Mimno, D. (2009). Evaluation methods for topic models. In *Proceedings of the 26th international conference on machine learning*.
- Zhu, J., Ahmed, A., & Xing, E. P. (2009). MedLDA: Maximum margin supervised topic models for regression and classification. In *Proceedings of the 26th annual international conference on machine learning* (pp. 1257–1264). ACM.

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.