Contextual Sentiment Neural Network for Document Sentiment Analysis

Although deep neural networks perform excellently in text sentiment analysis, their real-world applications are occasionally limited owing to their black-box property. In this study, we propose a novel neural network model called the contextual sentiment neural network (CSNN), which can explain the process behind its sentiment analysis predictions in a way that humans find natural and agreeable and can grasp a summary of the contents. The CSNN has the following interpretable layers: the word-level original sentiment layer, the word-level sentiment shift layer, the word-level global importance layer, the word-level contextual sentiment layer, and the concept-level contextual sentiment layer. Using these layers, the network can explain the process behind its document-level sentiment analysis results in a human-like way. Realizing the interpretability of each layer is a crucial problem in developing the CSNN because the general back-propagation method cannot achieve such interpretability. To realize this interpretability, we propose a novel learning strategy called initialization and propagation (IP) learning. Using real textual datasets, we experimentally demonstrate that the proposed IP learning is effective for improving the interpretability of each layer in the CSNN. We then experimentally demonstrate that the CSNN has both high predictability and high explanation ability.


Motivation and Purpose
Massive web documents such as micro-blogs and customer reviews are useful for public opinion sensing and trend analysis. The sentiment analysis approach (i.e., automatically predicting whether a review is overall positive or negative) has been commonly used in this area. Deep neural networks (DNNs) are among the best-performing machine learning methods [1]. However, DNNs are often avoided in cases where explanations are required because these networks are generally considered black boxes. Thus, developing a highly predictable neural network (NN) model that can explain its prediction process in a human-like way is a critical problem. In developing such an NN model, we should consider how humans usually judge the positive or negative polarity of each review. As described in previous linguistic research [2][3][4], humans are known to judge the document-level polarity of each review by extracting four types of word-level scores in the following order.
1. Word-level original sentiment score: the original positive or negative sentiment of each term.
2. Word-level sentiment shift score: whether the sentiment of each term is shifted (e.g., by negation) in its context.
3. Word-level global importance score: how important each term is for deciding the overall polarity of the review.
4. Word-level contextual sentiment score: the sentiment of each term in the review after considering the sentiment shift and global importance.
In addition, as described in previous text visualization research [4], the following concept-level contextual sentiment score is important for readers to grasp a summary of the review content.
5. Concept-level contextual sentiment score: the concept-level positive or negative sentiment of each review, where a concept means a set of similar terms.
Therefore, neural network models that can (1) analyze document-level sentiment with high predictability and (2) explain the prediction results using the above five types of sentiments, as shown in Fig. 1, should be in great demand in industry. However, a method for developing such NNs is yet to be established. Many studies have addressed the black-box property of NNs [4,[6][7][8][9][10][11][12][13][14]; however, it is hard to say that these previous works realize interpretability in a form that humans find natural and agreeable, because these previous studies alone cannot describe the above five types of scores. For example, interpretable NNs with attention mechanisms [6,7] can describe the global importance of each term in a review; however, they cannot describe the other three types of word-level sentiment scores. Interpretable NNs that include word-level original sentiment scores (i.e., original sentiment interpretable NNs) [4,8,9] can describe the word-level original sentiment scores; however, they cannot describe the word-level global and local contextual sentiment scores. As for other approaches, methods for interpreting NNs can describe the word-level global sentiment scores [10][11][12][13][14]; however, they cannot describe the other scores.

Approach
To solve this problem, we propose a novel NN model called contextual sentiment neural network (CSNN) and a novel learning strategy called initialization and propagation (IP) learning.

CSNN
The CSNN has the following five interpretable layers: the word-level original sentiment layer (WOSL), the sentiment shift layer (SSL), the global importance layer (GIL), the word-level contextual sentiment layer (WCSL), and the concept-level contextual sentiment layer (CCSL). The WOSL and WCSL represent the word-level original and contextual sentiment of each term in a review, respectively. The SSL indicates whether the sentiment of each term in a review is shifted or not, and the GIL indicates the globally important points in a review. The WOSL is represented in a word sentiment dictionary manner. The SSL and GIL are represented using long short-term memory (LSTM) cells [15] and an attention mechanism [16,17], respectively. The values of the WCSL are obtained by multiplying the values of the WOSL, SSL, and GIL. The values of the CCSL are obtained from the WCSL and the K-means clustering results over the word embeddings, following the strategy in [4].
Therefore, using the WOSL, SSL, GIL, and WCSL, the CSNN can explain the process of the sentiment analysis prediction in a form that humans find natural.

IP Learning
In developing this CSNN, realizing the interpretability of the WOSL, SSL, GIL, and WCSL is a crucial problem. Generally, sentiment analysis models are developed using the back-propagation method with gradient values for the loss between the predicted document-level sentiment and the positive or negative tag of each review; however, when such a general back-propagation method is used, each layer does not come to represent the corresponding sentiment. Thus, to realize the interpretability of the layers in the CSNN, we propose a novel learning strategy called initialization and propagation (IP) learning. IP learning includes two specific strategies called Init and Update. Update is a regularization strategy for the final weight matrix, which is expected to improve the interpretability of the WCSL. Init is a strategy for initializing the WOSL using a small word sentiment dictionary composed of a few hundred word-level original sentiment scores, which is expected to improve the interpretability of the WOSL and GIL. Using both Update and Init, the interpretability of the SSL is also expected to improve. IP learning requires only reviews, their sentiment tags, and a small word sentiment dictionary. It does not require any sentiment shift information or syntactic text analysis. This is a valuable point of our approach because it allows developing a CSNN even for minor languages or non-grammatical documents.
We experimentally evaluated the performance of the proposed approach using real textual datasets. We first demonstrated that IP learning is useful for realizing the interpretability of each layer in the CSNN. We then demonstrated that the CSNN developed with IP learning has both high predictability and high explanation ability.

Contribution
The contributions of this paper are as follows:
• We proposed a novel NN architecture called CSNN that can explain its sentiment analysis process in a form that humans find natural and agreeable.
• To realize the interpretability of the CSNN, we proposed a novel learning strategy called IP learning.
• We experimentally demonstrated the high interpretability and high predictability of the proposed CSNN.
The remainder of this paper is structured as follows. In Sect. 2, the CSNN architecture and IP learning are explained in detail. Section 3 presents a pre-experimental evaluation of the effect of the proposed IP learning. Section 4 presents the experiments and results. Section 5 presents the related works. In Sect. 6, the conclusion and directions for future work are discussed.

CSNN
This section introduces the proposed CSNN. A CSNN, as described in Sect. 2.1, can be developed through IP learning (described later in this section) using a training dataset {(Q_i, d_i)}_{i=1}^N and a small word sentiment dictionary, where N is the training data size, Q_i is a comment, and d_i is its sentiment tag (1 is positive and 0 is negative).

Structure of CSNN
This section introduces the CSNN structure. The CSNN includes the following layers: WOSL, SSL, GIL, WCSL, and CCSL. It outputs the document-level sentiment.
Notation. Before explaining the construction of the CSNN model, we define several symbols. Let {w_i}_{i=1}^v represent the terms that appear in the text corpus of a dataset, and let v be the vocabulary size. We define the vocabulary index of word w_i as I(w_i); therefore, I(w_i) = i. Let w^em_i ∈ ℝ^e be the embedding representation of word w_i, and let W^em ∈ ℝ^{v×e} be the embedding matrix, where e is the dimension of the word-level embeddings. For each i, ‖w^em_i‖_2 = 1 is satisfied. W^em is a constant obtained by applying the skip-gram method [18] to the text corpus of the training dataset. The WOSL holds a parameter W^p ∈ ℝ^v that represents the original sentiment scores of words; its i-th element w^p_i corresponds to the original sentiment score of the word w_i.
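To make the role of the WOSL concrete, the following minimal sketch treats it as a lookup of the parameter vector W^p by vocabulary index, as defined above. The vocabulary and scores are made-up illustrative values, not taken from the paper.

```python
# Minimal sketch of the WOSL as a lookup into the parameter vector W_p.
# The vocabulary and sentiment scores below are illustrative only.
vocab = ["good", "bad", "market", "not"]
I = {w: i for i, w in enumerate(vocab)}   # vocabulary index: I(w_i) = i
W_p = [0.9, -0.8, 0.0, 0.0]               # word-level original sentiment scores

def original_sentiment(word):
    """Return p_t = w_p[I(w_t)], the original sentiment score of a term."""
    return W_p[I[word]]

print(original_sentiment("good"))  # 0.9
```

In the actual model, W_p is a learned parameter initialized by Init (Sect. on IP learning) rather than a fixed table.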

SSL
First, this layer converts the terms {w_t}_{t=1}^n in comment Q into their word-level embeddings {e_t}_{t=1}^n using W^em, and converts these into context representations {h^→_t}_{t=1}^n and {h^←_t}_{t=1}^n using forward and backward long short-term memories, LSTM^→ and LSTM^← [15]. The context representations are then converted into right- and left-oriented sentiment shift representations s^→_t and s^←_t, where v_right, v_left ∈ ℝ^e are parameters. Here, s^→_t and s^←_t denote whether the sentiment of w_t is shifted by the left-side and right-side terms of w_t, respectively. Finally, this layer converts {s^→_t}_{t=1}^n and {s^←_t}_{t=1}^n into word-level sentiment shift scores {s_t}_{t=1}^n, where s_t denotes whether the sentiment of w_t is shifted (s_t < 0) or not (s_t ≥ 0).
The overall structure of this SSL is shown in Fig. 3.
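The exact SSL equations are not reproduced in this excerpt, so the following sketch only illustrates the final combination step under stated assumptions: each directional shift representation is assumed (hypothetically) to be a tanh of a dot product between the parameter vector and the corresponding LSTM hidden state, and the final score multiplies the two directions so that a negative product is read as "shifted".

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def shift_score(h_fwd, h_bwd, v_right, v_left):
    # Hypothetical form of the two directional shift representations:
    s_right = math.tanh(dot(v_right, h_fwd))  # right-oriented shift rep.
    s_left = math.tanh(dot(v_left, h_bwd))    # left-oriented shift rep.
    # Combined word-level shift score: s_t < 0 is read as "shifted".
    return s_right * s_left

# Toy hidden states and parameters for one term (in the actual model,
# h_fwd and h_bwd come from the forward and backward LSTMs):
s_t = shift_score([0.5, -1.0], [0.3, 0.8], [1.0, 0.2], [-0.6, 0.4])
print("shifted" if s_t < 0 else "not shifted")
```

The tanh keeps each directional representation in (-1, 1), which matches the (-1, 1) range stated for the SSL later in the paper.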

GIL
This layer represents the word-level global importance representations {a_t}_{t=1}^n using a revised self-attention mechanism [16,17].
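The revised self-attention mechanism itself is not reproduced in this excerpt; as a generic illustration of the attention-style normalization such a layer typically performs, the sketch below turns raw per-term importance scores into non-negative weights that sum to 1 via a softmax. The scoring function producing the raw scores is an assumption here.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

raw = [2.0, 0.1, -1.0]   # toy raw importance scores for three terms
alpha = softmax(raw)     # global importance weights {a_t}
print(alpha)
```

The resulting weights are non-negative, consistent with the [0, ∞) range stated for the GIL later in the paper.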

WCSL
Using the WOSL, SSL, and GIL, this layer represents the word-level contextual sentiment representations {g_t}_{t=1}^n, obtained by multiplying the corresponding values of the WOSL, SSL, and GIL.
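The multiplicative combination described above (and in the architecture overview) can be sketched directly; the per-term values below are toy numbers, not model outputs.

```python
# WCSL sketch: g_t = p_t * s_t * a_t, combining the original sentiment (WOSL),
# the sentiment shift score (SSL), and the global importance weight (GIL).
p = [0.9, -0.8, 0.0]   # WOSL: original sentiment per term (toy values)
s = [1.0, -0.7, 1.0]   # SSL: shift score (negative = shifted)
a = [0.6, 0.3, 0.1]    # GIL: global importance weights

g = [p_t * s_t * a_t for p_t, s_t, a_t in zip(p, s, a)]
print(g)  # word-level contextual sentiment per term
```

Note how the second term, an originally negative word whose sentiment is shifted (s_t < 0), receives a positive contextual sentiment, which is exactly the behavior the SSL is meant to capture.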

CCSL
The cluster weights are calculated using a spherical k-means method [19], where the cluster number is K. Here, the k-th element of b_t represents the cluster weight of word w_t for cluster k. Therefore, from the values in the CCSL, we can grasp the concept-level contextual sentiment scores.
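A plausible reading of this aggregation is that each concept's score sums the word-level contextual sentiments weighted by cluster membership; the sketch below illustrates that reading with made-up cluster weights (in the actual model they come from spherical k-means over the word embeddings).

```python
# CCSL sketch: concept-level score c_k = sum_t b[t][k] * g[t], where b[t][k]
# is the cluster weight of term t for concept k. All values are toy numbers.
K = 2
g = [0.54, 0.17, -0.30]                    # toy WCSL values for three terms
b = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]   # toy cluster weights per term

concept = [sum(b[t][k] * g[t] for t in range(len(g))) for k in range(K)]
print(concept)  # concept-level contextual sentiment per cluster
```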

Key Idea in IP learning
In developing the CSNN, realizing the interpretability of the WOSL and SSL is especially difficult. Through learning with L and Update (defined later), the WCSL learns to represent the corresponding sentiments. However, this learning strategy alone cannot realize the interpretability of the WOSL and SSL: when the polarity of c_t is accurately negative, two cases are possible, (1) p_t > 0 and s_t < 0, or (2) p_t < 0 and s_t > 0, and the accurate case cannot be chosen automatically in general learning. We assume that this problem can be solved by initially limiting the polarity of p_t to the accurate case for a few words, because this limitation leads to the accurate choice between the above two cases and thereby to learning s_t in the appropriate case. At first, the effect of this limitation works only for the limited words; however, through learning, this effect is assumed to propagate to other non-limited terms whose meanings are similar to any of the limited words. To realize this idea, we utilize Init (defined later) in IP learning.

Initialization and Propagation (IP) Learning
This section describes the learning strategy of the CSNN.
The overall process is described in Algorithm 1, where w^O_{i,j} is the (i, j) element of W^O, and L is the cross entropy between a and d. IP learning utilizes two specific techniques called Update and Init. Update is a strategy for improving the interpretability of the WCSL. Init is a strategy for improving the interpretability of the WOSL and GIL. Using both Update and Init, the interpretability of the SSL is also expected to improve (as theoretically analyzed in Appendix A in the supplementary material).

Update
First, W^O is updated according to processes 6-7 in Algorithm 1. This makes the WCSL represent the corresponding sentiment scores (Proposition A.3 in Appendix A) without violating the learning process after sufficient iterations (Proposition A.7 in Appendix A).

Init
Then, W^p is initialized as in process 2 of Algorithm 1, where PS(w_i) is the sentiment score of word w_i given by the word sentiment dictionary, and S_d is the set of words from the dictionary. Init makes the WOSL and SSL represent the corresponding scores under the condition that Update is utilized. Through this IP learning, for every word sufficiently similar to any of the words in S_d, the WOSL, SSL, GIL, and WCSL learn to represent the corresponding scores, as theoretically analyzed in Appendix A. After learning, the CSNN can explain its prediction results using these layers.
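The Init step above can be sketched as a seeded initialization of W^p: entries for the few hundred dictionary words take their dictionary scores, while the remaining entries start at a neutral value (zero initialization for non-dictionary words is an assumption here, as Algorithm 1 is not reproduced in this excerpt).

```python
# Init sketch: seed W_p from a small word sentiment dictionary S_d.
# Vocabulary and dictionary contents are toy examples.
vocab = ["good", "bad", "market", "surge"]
PS = {"good": 1.0, "bad": -1.0}   # small seed dictionary PS(w_i) over S_d

# Dictionary words get their seed score; others start neutral (assumed 0.0).
W_p = [PS.get(w, 0.0) for w in vocab]
print(W_p)  # [1.0, -1.0, 0.0, 0.0]
```

During subsequent training, the scores of non-seeded words such as "surge" would be learned, with the seeded polarities propagating to similar words as described above.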

Pre-experimental Evaluation for IP Learning
This section experimentally tests the explanation ability and predictability of the CSNN and investigates the effect of IP learning on the interpretability of the layers in the CSNN.

Text Corpus
We used the following four textual corpora, including reviews and their sentiment tags, for this evaluation. They were used for developing the CSNN.
(a) EcoRevs I and II. These datasets are composed of comments on current (I) and future (II) economic trends and their positive or negative sentiment tags.¹
(b) Yahoo review. This dataset is composed of comments on stocks and their long (positive) or short (negative) attitude tags, extracted from financial micro-blogs.²
(c) Sentiment 140. This dataset contains tweets and their positive or negative sentiment tags.³
EcoRevs and Yahoo review are Japanese datasets, and Sentiment 140 is an English dataset. We used them to verify whether the CSNN can be used irrespective of language or domain. We divided each dataset into training, validation, and test datasets, as presented in Table 1.

Annotated Dataset
For this evaluation, we prepared the Economy, Yahoo, and message annotated datasets. The Economy annotated dataset has 2200 reviews (1100 positive and 1100 negative) from the test dataset of EcoRevs I. The Yahoo annotated dataset has 1520 reviews (760 positive and 760 negative) from the test dataset of Yahoo reviews. The message annotated dataset has 10258 reviews obtained from the test datasets of the SemEval tasks [20,21]. In these datasets, some of the terms in the reviews have word-level contextual sentiment tags and word-level sentiment shift tags. Word-level contextual sentiment tags indicate whether the word-level contextual sentiments of terms are positive or negative, as shown in the following examples.
(1) In total, we are in a bull(+) market.
(2) In total, we are in a bull(0) market.
(3) Products in this shop are too expensive(1).
Moreover, in the message annotated dataset, some phrases in the reviews have positive or negative tags for contextual sentiments (phrase-level sentiment tags), as in the examples above. In addition, a gold global importance point (0: not important; 1: important) is assigned to each term of the reviews included in the Economy and Yahoo annotated datasets. This gold global importance point indicates whether each term in a review is important (1) or not (0) for deciding the overall positive or negative polarity of the review.
These tags were used to evaluate the explanation ability of the CSNN. We used the Economy, Yahoo, and message annotated datasets when developing CSNNs with EcoRevs, Yahoo reviews, and Sentiment 140, respectively. We employed only the tags of terms that were not used in Init and appeared in the training dataset, and only the tags of phrases that include at least one term appearing in the training dataset. Table 2 summarizes the numbers of tags used. See the supplementary material for details.

CSNN Development Setting
We developed the CSNN using each training and validation dataset in the following settings.

Setting in Init.
Init used part of a Japanese financial word sentiment dictionary (JFWS dict) developed by six financial professionals, and the Vader word sentiment dictionary (Vader dict) [5]. These dictionaries contain words and their sentiment scores. After excluding words with zero sentiment scores from the JFWS dict and words with absolute sentiment scores of less than 1.0 from the Vader dict, we extracted the 200 most frequent words in each training dataset from these dictionaries and used their sentiment scores in Init. To analyze the results when Init uses fewer words, we evaluated CSNNs developed with only 50, 100, or 200 words: CSNN (50), CSNN (100), and CSNN (200).
Other settings. We calculated the word embedding matrix W^em using the skip-gram method (window size = 5) [18] with each textual dataset. We set the dimensions of the hidden and embedding vectors to 200, the number of epochs to 50 with early stopping, K to [100, 500, 1000], t_c to 1/K, and the mini-batch size to 64. We used stratified sampling [22] to handle imbalanced data, the Adam optimizer [23], and the dropout method [24] (rate = 0.5) for the BiRNNs and CSNNs. We determined the hyper-parameters using the validation data. We used the mean score of five trials for the evaluations in this paper.

Evaluation Metrics in Explanation ability
Evaluation Metric. We evaluated the explanation ability of the CSNN based on the validity of the WOSL, SSL, GIL, and WCSL, in the following way.

Validity of WOSL
We evaluated the validity of the WOSL based on how accurately the polarities of word w_i and w^p_i agree, using the economic, Yahoo, and LEX word polarity lists.⁴ These lists include words and their positive or negative polarities. The economic and Yahoo word-polarity lists include Japanese economic terms, and the LEX word-polarity list includes English terms. When we used EcoRevs I or II, Yahoo reviews, or Sentiment 140 in training, we utilized the economic, Yahoo, or LEX word polarity list, respectively. Moreover, we used only those terms that appeared in the training dataset but were not used in Init. Table 1 summarizes the number of words used in evaluating the CSNN developed with each dataset.

Validity of SSL
Using the sentiment shift tags in the annotated datasets, we evaluated the validity of the SSL based on whether the sentiment shift tag of w_t and the polarity of s_t (shifted: s_t < 0; non-shifted: s_t ≥ 0) agree.

Validity of GIL
Using the gold word-level global importance points in the annotated datasets, we evaluated the validity of the GIL based on whether the values of the GIL {a_t}_{t=1}^n and the gold word-level global importance points were correlated. We used the Pearson correlation coefficient for this evaluation.

Validity of WCSL
Using the word-level or phrase-level contextual sentiment tags in the annotated datasets, we evaluated the validity of the WCSL with regard to whether the values of the WCSL could accurately assign the word- or phrase-level contextual sentiments, that is, whether g_t was accurately positive (negative) when the contextual word-level sentiment of w_t was positive (negative), and whether the polarity of the summed scores of the terms in each phrase accurately represented its sentiment. As the evaluation basis, we used the average of the macro F1 score for shifted terms and that for non-shifted terms. We used this score to test whether each method could accurately handle both shifted and non-shifted terms.
In the above, the validity of the WOSL, SSL, and WCSL is evaluated using the F1 score because the range of values of the WOSL and WCSL is (−∞, ∞) and that of the SSL is (−1, 1). In contrast, the range of values of the GIL is [0, ∞). Thus, we evaluated the validity of the GIL using the Pearson correlation.
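The evaluation basis described above can be sketched as follows: the macro F1 over the two polarity classes is computed separately for shifted and non-shifted terms and the two scores are then averaged. The labels below are toy data, not results from the paper.

```python
# Macro F1 averaged over shifted and non-shifted terms (toy labels).
def f1(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def macro_f1(y_true, y_pred):
    # Macro average over the positive (1) and negative (0) classes.
    return (f1(y_true, y_pred, 1) + f1(y_true, y_pred, 0)) / 2

shifted_score = macro_f1([1, 0, 1], [1, 0, 0])       # shifted terms
nonshifted_score = macro_f1([1, 1, 0], [1, 1, 0])    # non-shifted terms
print((shifted_score + nonshifted_score) / 2)        # final evaluation score
```

Averaging the two groups prevents a method that ignores sentiment shifts from scoring well on the (usually more numerous) non-shifted terms alone.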
Baselines. To evaluate the effect of IP learning, we compared the results of the CSNNs developed with IP learning with those of the following baseline models: CSNN_Base, CSNN_NoInit, and CSNN_NoUp. The structures of these baseline models are the same as that of the CSNN; they differ only in whether Init and Update are used in learning. In addition, we compared each layer with existing methods as follows.
(1) WOSL: this evaluation compared the CSNN with other word-level original sentiment assignment methods, namely PMI [25], the logistic fixed weight model (LFW) [8], the sentiment-oriented NN (SONN) [9], and the gradient interpretable neural network (GINN) [4].
(2) SSL: this evaluation compared the CSNN with a baseline method and NegRNN. In the baseline, we predicted w_t as "shifted" if the sentiment of d predicted by the RNN and the sentiment tag of w_t assigned by PMI differed, and as "not shifted" otherwise. In NegRNN, we used an RNN that predicts polarity shifts [26], developed with polarity-shifting training data created by the weighted frequency odds method [27].
(3) GIL: this evaluation compared the CSNN with other word-level importance assignment methods using RNNs with attention mechanisms: the word attention network (ATT) [28], the hierarchical attention network (HN-ATT) [28], the sentiment and negation neural network (SNNN) [29], and lexicon-based supervised attention (LBSA) [6]. SNNN and LBSA are set up so that the attention weights of terms with strong word-level original sentiment are strengthened. We used the attention score of each model as the importance score.
(4) WCSL: this evaluation compared the CSNN with other word-level sentiment assignment methods: PMI, LFW, SONN, GINN, Grad + a bidirectional LSTM model (RNN) [12], LRP + RNN [30], and IntGrad + RNN [11].

Evaluation Metrics in Predictability
Evaluation Metric. We evaluated the predictability of the CSNN based on whether it can predict the sentiment tags of the reviews in each test dataset. Comparison Method. We compared the CSNN with the following methods: logistic regression (LR), LFW [8], SONN [9], GINN [4], a bi-LSTM based RNN (RNN), a convolutional NN (CNN) [1], ATT [28], HN-ATT [28], SNNN [29], and LBSA [6]. We used the macro F1 score as the evaluation basis. Among these methods, LR is a linear representation model. LFW, SONN, and GINN are original sentiment interpretable NNs. ATT, HN-ATT, SNNN, and LBSA are NNs with attention mechanisms; in particular, SNNN and LBSA are set up so that the attention weights of terms with strong word-level original sentiment are strengthened.

Explanation ability and Predictability
Tables 3, 4, 5 and 6 summarize the results for explanation ability, indicating that the proposed CSNN outperformed the other methods in most cases. Table 7 summarizes the results for predictability, indicating that HN-ATT had greater predictability than the proposed CSNNs in most cases; however, CSNN (200) had greater predictability than LR and some deep NNs, such as CNN and SNNN, and predictability equivalent to that of ATT and LBSA. These results demonstrate that the proposed CSNN has both high explanation ability and high predictability.

Discussion
We then discuss the performance of the CSNN in detail.

Predictability
The reason behind the good performance of HN-ATT in the predictability evaluation may lie in whether sentence-level importance is considered. HN-ATT considers sentence-level importance, whereas the CSNN does not. Therefore, the performance of the CSNN might be improved by adding a sentence-level importance attention mechanism. Additionally, it should be noted that the performance of the CSNN was better than that of the others on the Yahoo dataset. This is possibly because sentiment shift expressions in the Yahoo dataset are more varied and complex than those in EcoRevs. The CSNN directly represents the word-level sentiment score and its sentiment shift; thus, the CSNN can address the sentiment shift expressions in the Yahoo dataset.

Effect of IP Learning
It should be noted that the interpretability of the CSNN was achieved even when only fifty terms were used in Init, and there was no significant difference among the Init settings. These results indicate that the minimum number of words required for learning is less than fifty and that our algorithm is sufficiently practical.

Sentiment Shift Detection Performance in Yahoo Dataset
Sentiment shift expressions in the Yahoo dataset are more varied and complex than those in EcoRevs. We consider this to be the reason for the better performance of the CSNN on this dataset: the CSNN directly represents the word-level sentiment score and its sentiment shift, and thus it can address the sentiment shift expressions in the Yahoo dataset.

Text-Visualization Example
This section introduces some examples of text visualization produced by the CSNN. Figures 4 and 5 show text-visualization examples for reviews from the Yahoo review and Sentiment 140 datasets. Users can trace the CSNN's prediction process through this type of text visualization. In addition, based on the values of the right- and left-oriented sentiment shift representations, we can interpret the sentiment shift processes in the CSNN. Figure 5 shows such examples: we can interpret that "uru (bearish)" is shifted by its right-side terms and that the term "aoru (manipulate)" caused the sentiment shift, because in the right-oriented sentiment shift representations the terms to the left side of "aoru (manipulate)" become blue. In the same manner, we can interpret that "great" is shifted by "not" (right-oriented shift layer) in Fig. 4.

Related Work
There are many studies addressing the black-box property of deep NNs. As useful techniques for explaining the prediction results of NNs, methods for interpreting prediction models have been proposed [10-13, 31, 32]. These methods calculate the gradient score of each input feature in the prediction and visualize the important features. The LRP method is one of the state-of-the-art methods. Interpretable NNs [4, 6-9, 28, 29] are also useful in this respect. In this context, several methods developed neural networks including a layer that represents the word-level original score [4,8,9]. Other methods developed neural networks including a layer that represents the word-level global context using the attention mechanism [6,7,28,29]. However, these previous methods do not satisfy our purpose because none of them alone can represent all five types of scores, namely, the word-level original sentiment score, word-level sentiment shift score, word-level global importance score, word-level contextual sentiment score, and concept-level contextual sentiment score. In contrast, the proposed CSNN can explain its prediction results using these five types of scores. Many existing studies have explored sentiment shift detection [2,3,26,33,34]. However, because most of these methods require specific knowledge of sentiment shifts, we cannot always use them in the real world. Unlike these methods, the CSNN can detect sentiment shifts without any specific knowledge of sentiment shifts. Although a method for detecting sentiment shifts without specific knowledge was developed in a previous study [27], the CSNN outperformed this method in detecting sentiment shifts. Other studies dealt with assigning original sentiment scores to words using the sentiment tags of documents [8,9,25,35]. The proposed CSNN outperformed them as well.

Conclusion
A novel NN architecture called CSNN that can explain its prediction process was proposed. To realize the explainability of the CSNN, we proposed a novel learning strategy called IP learning. We experimentally demonstrated the effectiveness of IP learning for improving the explainability of the CSNN. Using real textual datasets, we then experimentally demonstrated that the CSNN had higher predictability than some DNNs and that the explanations provided by the CSNN were sufficiently valid. In the future, we will apply the CSNN to documents from other domains and languages. Dataset, code, and the supplementary material are available.⁵