
Reviews Left and Right: The Link Between Reviewers’ Political Ideology and Online Review Language

Abstract

Online reviews, i.e., evaluations of products and services posted on websites, are ubiquitous. Prior research has observed substantial variance in the language of such online reviews and linked it to downstream consequences like perceived helpfulness. However, the understanding of why the language of reviews varies is limited. This is problematic because such variation might have vital implications for the design of IT systems and user interactions. To improve the understanding of online review language, the paper proposes that consumers’ personality, as reflected in their political ideology, is a predictor of such online review language. Specifically, it is hypothesized that reviewers’ political ideology, measured as the degree of conservatism on a liberal–conservative spectrum, is negatively related to review depth (the number of words and the number of arguments in a review), cognitively complex language in reviews, diversity of arguments, and positive valence in language. Support for these hypotheses is obtained through the analysis of a unique dataset that links a sample of online reviews to reviewers’ political ideology as inferred from their online news consumption recorded in clickstream data.

Introduction

Online consumer reviews are a regular feature on most consumer websites such as Amazon or Yelp and have attracted much attention in the information systems community in recent years (e.g., Li et al. 2019). In particular, research has highlighted that certain properties of reviews determine their effects on review helpfulness, purchase intention, and product sales. In this regard, apart from the effects of review ratings (e.g., Chevalier and Mayzlin 2006; Clemons et al. 2006), a number of studies are concerned with review language, i.e., length (e.g., Pan and Zhang 2011; Schindler and Bickart 2012), content (e.g., Willemsen et al. 2011; Yin et al. 2014), and linguistic style (e.g., Li et al. 2019; Liu et al. 2008), which are arguably at least as important for review quality and effectiveness as purely numerical ratings (Archak et al. 2011; Pavlou and Dimoka 2006).

Despite the prominence of research on the effects of review language, however, little is known about why reviewers vary in the ways they use language, build arguments, and expend effort on their reviews. Particularly, while some nascent research has emerged in this field (e.g., Hu et al. 2008; Willemsen et al. 2011), only one study (Safi and Yu 2017) has so far specifically illuminated the influence of reviewer personality on the characteristics of online consumer review language. This is surprising given that personality has long been considered an important factor to explain differences in e-commerce behavior (e.g., Gefen 2000) and information systems use in general (Zmud 1979). It therefore appears reasonable to expect that personality characteristics might help explain why people vary in the way they compose reviews.

Our paper aims to establish a novel link between reviewer personality and online reviews. Specifically, we draw on the concept of political ideology, i.e., individuals’ leanings on a continuum between liberal and conservative. Political ideology is a particularly intriguing concept because strong evidence exists that it reflects various stable, underlying personality characteristics (see Jost et al. 2003, 2009 for reviews). In addition, political ideology contains an explicitly motivational component and thus “helps to explain why people do what they do” (Jost 2006, p. 653). As a result, the implications of ideology have often been studied in research related to information systems (Flaxman et al. 2016; Gentzkow and Shapiro 2011), e.g., with regard to its consequences for IT investment (Pang 2016), technology adoption (e.g., Chen 2010; Smith 2013), user behavior on social networking sites (Yang et al. 2017), and engagement in online piracy (Graf-Vlachy et al. 2017).

We introduce political ideology to online review research because we expect several of the associated personality characteristics and motivations to predict differences in review language. Building on previous research, we theorize that individuals’ pro-social behavior and altruism (e.g., Zettler and Hilbig 2010; Van Lange et al. 2012), cognitive complexity (e.g., Van Hiel and Mervielde 2003; Jost et al. 2003), and sensitivity to negative stimuli (e.g., Hibbing et al. 2014; Joel et al. 2013) are related to the way reviews are composed. We then link these personality characteristics associated with political ideology to three of the most-studied properties of review language which have been suggested to have a pivotal impact on review helpfulness and sales, namely review depth (e.g., Mudambi and Schuff 2010; Schindler and Bickart 2012), multifacetedness (e.g., Ghose and Ipeirotis 2006, 2011; Willemsen et al. 2011), and valence (e.g., Cao et al. 2011; Yin et al. 2014). Overall, our research thus addresses the following research question: How is reviewers’ political ideology related to the language they use when composing online reviews?

We view technology – specifically websites using online reviews – as a socially “embedded system” (Orlikowski and Iacono 2001, p. 126) and aim to contribute to research on how “different user groups [engage] with that technology” (2001, p. 127). To the best of our knowledge, our study is the first to show that the differences in review language described in extant literature are associated with differences in personality of the reviewers, as reflected in their political ideology. By adding the additional factor of reviewers’ political ideology, our study goes beyond prior research, which was limited to situational antecedents such as experience or expertise (e.g., Liu et al. 2008; Smith et al. 2005), and we reach a more granular understanding of the determinants of review language. In addition, we provide evidence for the potential of political ideology as an important construct in information systems research at large. In particular, we highlight that the political ideology of system users is closely related to how they engage with information technology, which has critical implications for the design of IT systems and user interactions.

Related Literature

Online Consumer Reviews

Online consumer reviews constitute a critical element of electronic word of mouth (Lis 2013; Lis and Neßler 2014) and are a regular feature of most consumer websites, especially in e-commerce. Mudambi and Schuff (2010) defined them as “peer-generated product evaluations posted on company or third party websites” (p. 186). Including reviews on websites allows customers to build stronger social rapport with the website (Kumar and Benbasat 2006) and to reduce transaction risk and search effort (Dabholkar 2006). Firms, in turn, use reviews as a feedback mechanism for product development and quality control (Dellarocas 2003).

As reviews play such a prominent role in decision-making processes, scholars have devoted much attention to understanding how reviews differ from one another and which consequences ensue, for instance, regarding product sales and the perceived helpfulness of reviews. On a general level and perhaps unsurprisingly, research suggests that review ratings are directly related to sales (e.g., Chevalier and Mayzlin 2006; Zhu and Zhang 2010).

On a more specific level, regarding predictors of review helpfulness, studies have focused on the length, content, and stylistic features of reviews. For instance, longer reviews are generally evaluated more positively than shorter ones (Mudambi and Schuff 2010; Pan and Zhang 2011). Content-wise, readers perceive reviews as more helpful if they contain a mixture of objective product information and subjective evaluative statements (Ghose and Ipeirotis 2006, 2011) and exhibit a high diversity of arguments, i.e., both positive and negative arguments (Willemsen et al. 2011). Perceived helpfulness has also been shown to be driven by linguistic style (Zhang and Varadarajan 2006) such as a lower level of sentence complexity or fewer grammatical errors (Ghose and Ipeirotis 2011; Liu et al. 2008; Schindler and Bickart 2012). Table A1 in the Online Appendix (available online via http://link.springer.com) provides a more extensive overview of additional research regarding the effects of review properties on further variables like purchase intention and sales.

While scholars have extensively studied the consequences of review properties, they have paid much less attention to potential antecedents. In particular, factors pertaining to reviewers’ personality are almost completely unexplored. An extensive and systematic literature review unearthed only three relevant articles: Picazo-Vela et al. (2010) found that conscientiousness and neuroticism correlate with an individual’s intention to provide reviews in the first place. However, these authors did not study review language. Similarly, Helm et al. (2013) provide empirical evidence that introversion is related to the posting of product ratings, but do not study review language. The only article explicitly relating reviewer personality to review language is Safi and Yu’s (2017) work, which links reviewers’ innate innovativeness to various properties of product reviews, e.g., expressed concern with cost or degrees of uncertainty and optimism.

Political Ideology

Political ideology, i.e., the deeply rooted values, beliefs, and preferences that people hold about ideal goals for society and their beliefs about how to achieve them, has been studied extensively in political science and related fields (Jost 2006). Usually, it is conceptualized as a liberal–conservative continuum which captures the most relevant interpersonal differences regarding political ideology (Jost et al. 2009). Specifically, researchers frequently denote a given individual’s position on the spectrum as their degree of conservatism (Jost et al. 2003).

A core tenet of political ideology research is that differences in ideology are grounded in differences in underlying personality traits and motivations (Jost 2006; Jost et al. 2003, 2009). Thus, individuals’ political ideologies are the reflection of stable personality characteristics rather than of merely situational circumstances (Alford et al. 2005; Block and Block 2006).

The two most important types of motives underlying political ideology are epistemic and existential (Jost et al. 2003, 2009). Epistemic motives include elements of how humans deal with uncertainty, ambiguity, and complexity, how strongly they need to order and structure information and how mentally rigid they are. For instance, a more conservative ideology is positively correlated with intolerance of ambiguity (e.g., Budner 1962; Sidanius 1978), lower openness to experience (e.g., Van Hiel and Mervielde 2004), and stronger individualistic and less altruistic tendencies (e.g., Van Lange et al. 2012; Zettler and Hilbig 2010).

Existential motives relate to how individuals perceive and cope with threats to the current societal system as well as their position within it. Research has shown that, for example, more conservative worldviews result from greater responsiveness to negative stimuli (e.g., Hibbing et al. 2014; Joel et al. 2013), greater fear of threat and loss (e.g., Jost et al. 2007), as well as more anger and aggression (e.g., Altemeyer 1998; Tomkins 1995).

Scholars have shown that political ideology directly impacts not only everyday human behavior beyond the immediately political sphere (e.g., Carney et al. 2008; Jost et al. 2008), but also people’s behavior related to information technology. One especially substantial line of inquiry explores the impact of online platforms on ideological segregation (Flaxman et al. 2016; Gentzkow and Shapiro 2011; Himelboim et al. 2013). For instance, users’ political ideology was found to meaningfully impact their “unfriending” of other users of opposing political ideology on social networking sites (Yang et al. 2017). Another line of inquiry relates to the effect of ideology on technology investment and adoption (Baxter and Marcella 2012; Chen 2010; Smith 2013). Researchers have, for example, studied how political ideology is related to IT investments (Pang 2016), the adoption of e-participatory government (García-Sánchez et al. 2011), and the adoption of e-voting systems (Choi and Kim 2012). In addition, prior research has studied various other consequences of information system users’ political ideologies. One study, for example, links Internet users’ political ideology to online piracy (Graf-Vlachy et al. 2017), and another study found that the ideology of Twitter users is related to the valence of the content they posted (Himelboim et al. 2016). Jointly, these studies suggest that various behavioral differences can be traced to the inherent differences in personality which are the underlying drivers of political ideology. Below, we elaborate on how political ideology and its associated personality characteristics may impact a so-far disregarded element of online behavior: the composition of online reviews.

Linking Political Ideology and Online Reviews

Altruism and Review Depth

While the benefits of online reviews are apparent and have been widely discussed, one could argue that the benefits of posting a review for the reviewer are limited compared to its costs. Benefits generally associated with online information sharing such as social status enhancement (Lee and Ma 2012; Lu and Hsiao 2007; Wasko and Faraj 2005) or reciprocity (Chiu et al. 2006) are potentially less pronounced in the context of online reviews because reviews are often anonymous and lack direct one-to-one interactions (Wasko and Faraj 2005). On the cost side, however, reviewers must allocate attention, time, and effort to composing reviews (Hew and Hara 2007; Sun et al. 2014).

For prospective customers, the amount and quality of information are important factors to consider when evaluating the benefits of a review. Mudambi and Schuff (2010) and Pan and Zhang (2011), for example, found that the longer the online review – i.e., the more words it contains – the more helpful and beneficial it is to prospective customers. Likewise, Willemsen et al. (2011) have shown that the larger the number of arguments included, i.e., the greater the argument density of a review, the more useful it is to prospective customers. Thus, while the benefits for the customer tend to increase with the number of words and the number of arguments in a review, so do the costs for the reviewer. This raises the question of what kind of person is willing to write longer reviews or such with a greater number of arguments.

We build on research which suggests that altruism affects people’s inclination to share information (Hew and Hara 2007). More generally, altruistic individuals are willing to “pay a personal cost to provide benefits to others in general, regardless of the identity of the beneficiaries” (Fowler and Kam 2007). Hence, we believe that the more altruistic an individual, the more likely it is that he or she puts a great deal of effort into composing an online review.

Notably, a host of research on political ideology suggests that such self-sacrificial tendencies are associated with a non-conservative ideology (Farwell and Weiner 2000; Van Lange et al. 2012). The psychological underpinnings of this phenomenon are preferences for equality. Whereas more conservative individuals are more likely to accept inequality as natural, less conservative individuals tend to favor greater equality (Jost et al. 2003; Van Lange et al. 2012). As Bobbio (1996, p. 40) put it: “The left favours greater equality, while the right sees society as inevitably hierarchical.” From these assumptions, it follows that less conservative individuals would be more inclined to extensively share goods or knowledge, i.e., behave altruistically, whereas more conservative individuals might not feel a need to increase equality by sharing. We therefore conclude that since more conservative individuals tend to be less altruistic, they will be less motivated to expend effort on composing a review, and thus, will submit reviews that are shorter and contain fewer arguments.

H1a

Greater reviewer conservatism is associated with a lower number of words in reviews.

H1b

Greater reviewer conservatism is associated with a lower number of arguments in reviews.

Cognitive Complexity and Review Multifacetedness

Consumers consult online reviews during the decision-making process to reduce the information asymmetry between the seller and themselves (Hu et al. 2008; Kumar and Benbasat 2006; Mudambi and Schuff 2010). In this pursuit, review multifacetedness, i.e., the degree to which multiple perspectives are considered in the review, has been shown to be important. Reviews that present both positive and negative information signal that the reviewer is independent and truthful (Crowley and Hoyer 1994) and are therefore perceived as more helpful than reviews that are one-sided (Willemsen et al. 2011).

While this aspect of balanced argumentation is relatively novel in online review research, it has been the subject of a major research stream for political ideology scholars in the form of cognitive complexity and related constructs. Cognitive complexity captures the sophistication and balance with which individuals process information (e.g., Suedfeld and Rank 1976; Van Hiel and Mervielde 2003). As such, an individual exhibiting low cognitive complexity is characterized by “rigid evaluations of stimuli [and] the rejection of dissonant information” (Suedfeld and Rank 1976). An individual with high cognitive complexity, in contrast, will interpret information in a flexible fashion, combine and integrate stimuli, as well as consider multiple viewpoints.

More conservative individuals have been found to exhibit a greater need for closure (Chirumbolo 2002; Chirumbolo et al. 2004) and a greater tendency for uncertainty and ambiguity avoidance (Jost et al. 2007), largely because they have a greater propensity to interpret ambiguous situations as threatening (Hibbing et al. 2014). Such tendencies, in turn, lead to a more rigid, black-and-white view of the world and to potentially premature closure, i.e., possibly ending data gathering before all information is known and thus forming opinions and making decisions without incorporating all available information (Furnham and Ribchester 1995).

Consequently, despite some recent suggestions that the reality of political ideology and cognitive complexity might be slightly more intricate (Conway et al. 2016), the overwhelming current scholarly consensus is that individuals who are more conservative tend to display lower cognitive complexity (Jost et al. 2003). Since cognition and communication are hard to separate (Slatcher et al. 2007), cognitive complexity is also reflected in language use. Multiple studies have examined cognitive complexity in oral and written communication and have found support for this rigidity-of-the-right hypothesis (e.g., Tetlock 1983; Tetlock et al. 1984).

We therefore hypothesize that reviewers’ conservatism will be linked to the degree of cognitive complexity in the language they use in reviews.

H2a

Greater reviewer conservatism is associated with less cognitively complex language used in reviews.

Furthermore, we hypothesize that more conservative individuals, who likely process information in a less complex way, will formulate online reviews that are less balanced concerning positive and negative arguments.

H2b

Greater reviewer conservatism is associated with lower argument diversity in reviews.

Finally, we expect argument diversity to be the result of thinking and expressing oneself in a more cognitively complex fashion. Correspondingly, the less balanced information processing of more conservative individuals, which makes them prone to using less complex language, may result in a reduced propensity to provide balanced argumentation. We thus hypothesize that cognitively complex language will act as a mediator for the effect of political ideology on argument diversity.

H2c

Cognitively complex language mediates the effect of political ideology on argument diversity.

Sensitivity to Negative Stimuli and Review Language Valence

The valence of an online review plays a major role in how it is received by a prospective customer. An individual is more likely to purchase a product if he or she reads a positive review compared to a negative review (e.g., Clemons et al. 2006). Furthermore, research suggests that negative reviews are perceived as more helpful than positive ones (Cao et al. 2011).

Various researchers have noted that more conservative individuals exhibit a heightened sensitivity to negative stimuli. Mendez, for example, describes a neurological “conservative-complex” (2017, p. 92) involving brain structures that are particularly responsive to negativity per se, threat, and disgust, and give rise to a propensity for avoidance (as opposed to approach). Jost and Amodio (2012) additionally review behavioral evidence and find similar results. Most prominently perhaps, Hibbing et al. (2014) review an extensive body of literature and find that conservative individuals tend to allocate more attention to negative stimuli and experience stronger reactions – both psychologically and physiologically – to those stimuli. For example, conservative individuals tend to pay more attention to negatively valenced language than more liberal individuals do (Carraro et al. 2011). Greater conservatism is also related to experiencing greater emotional reactions to negative personal outcomes (Joel et al. 2013). In fact, Hibbing et al. (2014) suggest that such negativity bias is the fundamental distinction between more and less conservative individuals. While other authors tend to take slightly different positions or argue for narrower or wider boundary conditions, most agree that there is some relationship between individuals’ political ideology and their reaction to negative stimuli (also see the extensive peer commentary in the same issue as Hibbing et al.’s article).

We expect the particular sensitivity to negative stimuli displayed by more conservative individuals to translate into the valence of their communication. Empirical evidence shows, e.g., that there is a parallel between a stronger negativity bias and more pronounced linguistic use of negatively valenced emotive intensifiers (e.g., “terribly”) across different cultures (Jing-Schmidt 2007). Similarly, prior research on political ideology and the valence of social media messages shows that conservatism is linked to less positive language (Himelboim et al. 2016). Thus, we hypothesize that more conservative individuals make use of language that is overall less positively valenced in their reviews, independent of review rating.

H3

Greater reviewer conservatism is associated with less positively valenced language used in reviews.

Figure 1 summarizes our research model.

Fig. 1 Research model

Methodology

Data Sample

To test our hypotheses, we rely on two data sources: clickstream data and manually collected customer reviews. We use the clickstream data to measure the political ideology of the individuals in the sample, i.e., our main independent variable. We use the online customer reviews written by individuals in our sample to measure our dependent variables.

The clickstream data we use is derived from a panel maintained by Comscore, a ratings service. Our initial dataset comprises 17,097 individuals from 9933 households in the US. Their home computer Internet activity was tracked from March until August 2014. After removing individuals who did not provide all demographic information or did not meet the criteria for the measurement of political ideology (see next section), our clickstream sample comprises 3873 individuals from 3361 households.

Clickstream data has several advantages over traditional data sources such as surveys. First, as we track actual behavior of the subjects, we can at least partially avoid self-report biases such as the consistency motif, social desirability, or priming effects (Podsakoff et al. 2003). Second, as clickstream data collection is rather unobtrusive, we likely capture genuine behavior (Bucklin and Sismeiro 2009). Third, we can minimize temporal behavioral biases through a longitudinal data collection over a period of 6 months.

The online reviews we analyze were written by individuals in our sample on Amazon.com, Tripadvisor.com, and Yelp.com. We chose these websites because their URLs allowed us to identify when an online review was being composed and because they are popular enough in our dataset (ranked 7, 249, and 507 by page views, respectively) to provide a sufficiently large sample. Furthermore, reviews from Amazon.com and Tripadvisor.com have been used in previous studies (Chevalier and Mayzlin 2006; Mudambi and Schuff 2010; Willemsen et al. 2011; Wu 2013). The reviews were extracted in a three-step process. First, we identified URLs in our data that indicated the posting of an online review on any of the three platforms. Second, as these URLs identify the reviewed product or service but not the reviewer, we manually identified the respective user accounts using the information we have on the reviewed product or service and the review date, as well as demographic data on the user such as age, gender, and location. When we could not unambiguously identify the reviewer or when a review had not actually been posted, we discarded the data. Third, we extracted the most recent reviews the user had submitted (up to 10 reviews). Our final sample consists of 245 reviews containing 23,459 words, written by 37 reviewers. Some reviewers wrote only a single review, and the highest number of reviews per reviewer was 20. The median number of reviews per reviewer was 8, while the mean was 6.62. Three reviewers had written reviews on more than one platform.

Although the sample may appear small, our analyses are likely sufficiently powerful to detect relevant effects. We used G*Power 3.1.9.4 (Faul et al. 2007) to perform a power analysis for a multiple OLS regression, assuming a “large” (Cohen 1992) true effect of f2 = 0.35. Setting alpha to 0.05, desired power to 0.80, and the total number of predictors to 10 (the maximum used in any of our models), a sample of 26 observations is shown to be sufficient to test a single predictor, i.e., the political ideology of a reviewer. The number of independent reviewers in our study is 37, clearly exceeding this threshold.
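The power analysis above can be reproduced without G*Power. The sketch below uses SciPy's noncentral F distribution and follows G*Power's convention for the noncentrality parameter (λ = f² · N); it is an illustrative re-computation, not the authors' exact procedure.

```python
# Sketch: minimal sample size for testing one predictor in an OLS
# regression with 10 total predictors, f^2 = 0.35, alpha = .05, power = .80.
from scipy.stats import f, ncf

def required_n(f2=0.35, alpha=0.05, target_power=0.80,
               n_predictors=10, n_tested=1):
    """Smallest n whose power for an F test of `n_tested` predictors
    reaches `target_power`; returns (n, achieved power)."""
    n = n_predictors + 2  # smallest n with positive denominator df
    while True:
        df1 = n_tested
        df2 = n - n_predictors - 1
        nc = f2 * n                          # G*Power: lambda = f^2 * N
        f_crit = f.ppf(1 - alpha, df1, df2)  # critical value under H0
        power = 1 - ncf.cdf(f_crit, df1, df2, nc)
        if power >= target_power:
            return n, power
        n += 1

n_req, power = required_n()
```

With these inputs the required sample size lands in the mid-twenties, comfortably below the 37 independent reviewers in the study.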

Measuring Political Ideology

We measure political ideology using a behavioral approach, employing data on news media consumption to infer political ideology. This is possible since empirical evidence suggests that the political preferences of news media outlets and their audience are very similar (Chiang 2010; Gentzkow and Shapiro 2010; Gentzkow et al. 2014; Iyengar and Hahn 2009; Stroud 2008). Flaxman et al. (2016) estimate the political slant of news outlets by assigning a “conservative share” to the top 100 online news outlets based on the fraction of readership that voted for the Republican candidate in the 2012 US presidential election (see Table A2 in the Online Appendix). We can thus perform an unobtrusive measurement based on actual human behavior, namely online news consumption, avoiding many of the biases that plague self-report measures (Podsakoff et al. 2003).

To approximate the political ideology of the individuals in our sample, we calculate the average conservative share of online news outlets they visited in the entire 6-month period weighted with the page views each outlet accounts for. The following formula depicts the calculation of political ideology for a given individual i, with w being an index over the news outlets:

$$Political\,Ideology_{i} = \frac{\sum_{w = 1}^{100} conservative\,share_{w} \cdot pageviews_{iw}}{\sum_{w = 1}^{100} pageviews_{iw}}$$

Consequently, our measure of political ideology captures conservatism on a scale from 0 to 1, with increasing values indicating greater conservatism. To ensure reliability of our measure, we only include individuals who regularly consumed online news and thus, similar to Flaxman et al. (2016), we limit our sample to individuals with on average at least four monthly page views on these news outlets.
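The weighted average defined by the formula above can be sketched as follows; the outlet slants and page-view counts in the example are illustrative, not taken from the paper's data.

```python
# Sketch: page-view-weighted mean conservative share for one individual.
def political_ideology(conservative_share, pageviews):
    """0 = most liberal, 1 = most conservative. `conservative_share`
    maps outlet -> slant; `pageviews` maps outlet -> this person's views."""
    total = sum(pageviews.values())
    if total == 0:
        return None  # no tracked news consumption for this individual
    return sum(conservative_share[w] * pv
               for w, pv in pageviews.items()) / total

# Hypothetical example: a mostly liberal outlet read heavily,
# a conservative outlet read lightly.
share = {"outletA": 0.30, "outletB": 0.70}
views = {"outletA": 30, "outletB": 10}
score = political_ideology(share, views)  # (0.3*30 + 0.7*10) / 40 = 0.4
```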

We scrutinized our political ideology measure by comparing our distribution to the one found in the sample of Flaxman et al. (2016), as well as to the voting records of the 2012 presidential election. Both comparisons strengthen our confidence in the validity of our measure. First, while Flaxman et al. (2016) find that 66% of users have a political ideology score between 0.41 and 0.54, we find 65% of our entire clickstream sample in that range. Additionally, the ideological distance between two randomly selected individuals in their sample is 0.11 compared to 0.12 in our entire sample. Second, similar to the voting records, we find that less conservative individuals have a stronger representation in young age groups as well as in metropolitan areas (New York Times 2012; Roper Center 2012).

Measuring Review Length and Number of Arguments

We measure review length as the simple count of words in the review. This approach was, for example, used by Mudambi and Schuff (2010). We measure the number of arguments in an online review as the sum of all positive and negative indirect statements. To this end, we manually coded all indirect valenced statements (e.g., “The pictures this camera takes are amazing”). Similar to Willemsen et al. (2011) we only consider indirect valenced statements as arguments, ignoring direct valenced statements (e.g., “This camera is amazing”). Two raters coded all reviews independently. Cohen’s kappa was 0.86, indicating very good intercoder reliability (Landis and Koch 1977).
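Intercoder reliability of the kind reported above can be computed with Cohen's kappa, which corrects raw agreement for chance agreement. A minimal sketch with toy codings (not the paper's actual data):

```python
# Sketch: Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two equal-length lists of
    categorical codes."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # observed agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # expected agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of six statements as positive/negative arguments
r1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
r2 = ["pos", "pos", "neg", "pos", "pos", "neg"]
kappa = cohens_kappa(r1, r2)  # 5/6 raw agreement, kappa = 2/3
```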

Measuring Cognitively Complex Language

We measure cognitive complexity in review language with a linguistic measure developed by Pennebaker and King (1999) using the word count dictionaries from LIWC (Pennebaker et al. 2001). The measure has been frequently used to measure cognitive complexity (e.g., Abe 2011; Slatcher et al. 2007) as it captures the degree to which an individual differentiates and weighs multiple perspectives. When doing so, individuals use more exclusive words (e.g., “but”, “if”), tentative words (e.g., “almost”, “perhaps”), negations (e.g., “can’t”, “wouldn’t”), and discrepancies (e.g., “must”, “ought”), and fewer inclusive words (e.g., “with”, “and”). We first count the words belonging to the LIWC categories “exclusive”, “tentative”, “negations”, “discrepancies”, and “inclusion” used in online reviews. In line with Slatcher et al. (2007) we then compute cognitive complexity using the z-scores (variables transformed to have a mean of zero and a standard deviation of one) of the categories according to the following formula:

$$Cognitively\,complex\,language = zExcl + zTentat + zNegate + zDiscrep - zIncl$$

In our sample, the reliability of the measure indicated by Cronbach’s alpha is 0.61, which is above the reliability of the cognitive complexity measure (0.52) in the sample of Slatcher et al. (2007). Furthermore, it is above the threshold of 0.60, which indicates acceptable reliability (Hair et al. 2009).
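The z-score combination in the formula above can be sketched as follows. The inputs are assumed to be per-review LIWC category percentages (the toy numbers below are illustrative, not LIWC output), standardized across the sample as in Slatcher et al. (2007).

```python
# Sketch: cognitive complexity per review as
# z(excl) + z(tentat) + z(negate) + z(discrep) - z(incl).
import statistics

def zscores(values):
    """Standardize a list to mean 0 and (population) SD 1."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def cognitive_complexity(excl, tentat, negate, discrep, incl):
    """Each argument: one LIWC category percentage per review.
    Returns one complexity score per review."""
    zs = [zscores(c) for c in (excl, tentat, negate, discrep, incl)]
    return [ze + zt + zn + zd - zi
            for ze, zt, zn, zd, zi in zip(*zs)]

# Two hypothetical reviews: the second uses more exclusive/tentative
# language and fewer inclusive words, so it scores higher.
scores = cognitive_complexity([1, 3], [2, 4], [0, 2], [0, 2], [5, 3])
```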

Measuring Argument Diversity

We measure argument diversity by calculating the proportion of positive (p) and negative indirect statements (n) in an online review (see Sect. 4.3), according to the following formula:

$$Argument\, diversity = \left\{ \begin{array}{ll} \frac{n}{p} \quad if\,p > n\\ 1\quad if\,p = n \\ \frac{p}{n}\quad if\,n > p \end{array}\right.$$

Following Willemsen et al. (2011), we measure argument diversity on a scale from 0 (low diversity) to 1 (high diversity). Note that low diversity results when either positive or negative indirect statements strongly dominate a given review.
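The piecewise definition above reduces to the ratio of the minority to the majority argument type. A minimal sketch (the function name is ours):

```python
def argument_diversity(p, n):
    """Ratio of the minority to the majority argument type.
    Returns 1.0 for a perfectly balanced review and values near 0
    when one side strongly dominates."""
    if p == n:
        return 1.0
    return min(p, n) / max(p, n)

# Hypothetical reviews: (positive, negative) indirect statements
balanced = argument_diversity(3, 3)   # 1.0, maximally diverse
skewed   = argument_diversity(4, 1)   # 0.25
one_sided = argument_diversity(0, 5)  # 0.0, no diversity at all
```

A purely positive or purely negative review thus scores 0, regardless of how many arguments it contains.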

Measuring Language Valence

We measure language valence with the Janis–Fadner coefficient of imbalance (Janis and Fadner 1943), which is frequently used by scholars in the context of content analysis (e.g., Pollock and Rindova 2003). Specifically, we employ the following formula:

$$Language\,valence = \left\{ \begin{array}{ll} \frac{p^{2} - pn}{\left( total\,words \right)^{2}} & if\,p > n\\ 0 & if\,p = n \\ \frac{pn - n^{2}}{\left( total\,words \right)^{2}} & if\,n > p \end{array}\right.$$

As we aim to capture the emotional tenor of the language our subjects use, we consider each individual word as our recording unit. To classify words as conveying positive (p in the formula) or negative (n) emotions, we use the categories “positive emotions” (e.g., “beautiful”, “sharing”) and “negative emotions” (e.g., “awkward”, “nasty”) from the LIWC dictionary. As is evident, our measure is positive when positive words dominate over negative words and negative when the reverse is true. Note that the extremity of language valence is influenced not only by the ratio of positive to negative words but also by all other words, as these still count towards the total words in the formula.
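A sketch of the coefficient as defined above, with `p` and `n` denoting counts of positive- and negative-emotion words (the example values are invented for illustration):

```python
def janis_fadner(p, n, total_words):
    """Janis-Fadner coefficient of imbalance.
    p: positive-emotion words, n: negative-emotion words,
    total_words: all words in the review."""
    if p > n:
        return (p * p - p * n) / total_words ** 2
    if n > p:
        return (p * n - n * n) / total_words ** 2
    return 0.0

# Hypothetical 20-word review with 4 positive and 1 negative word
score = janis_fadner(4, 1, 20)  # (16 - 4) / 400 = 0.03
# The same emotion words diluted in a 40-word review yield a
# smaller coefficient, illustrating the role of total words.
diluted = janis_fadner(4, 1, 40)
```

The coefficient is symmetric: swapping the positive and negative counts flips its sign.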

Control Variables

All regressions control for age, gender, and annual household income, which is a valid predictor of socio-economic status and education (Chiou-Wei and Inman 2008). Household income was measured on an ordinal scale from 1 through 13, indicating household income brackets from below 15,000 US$ to above 250,000 US$. We further controlled for Internet usage, measured in brackets coded 1 through 3, indicating less than 5 h, between 5 and 16 h, and more than 16 h of Internet usage per week, respectively. These data were based on user self-reports. In addition, we control for the source website of the review using dummy variables (one for Amazon, one for Yelp; TripAdvisor is the reference category). We also include review ratings, i.e., the numerical star rating (ranging from 1 to 5) indicating the satisfaction of the reviewer with the product or service (Mudambi and Schuff 2010), in our model, as online reviews on e-commerce sites are overwhelmingly positive (Chevalier and Mayzlin 2006). Indeed, we find that in our sample, the average rating of the reviews is 4.2 out of 5 stars. Furthermore, for H3, controlling for the review rating is paramount to isolating the effect of valenced language use from the reviewer’s satisfaction with the reviewed product or service. Lastly, we control for the review word count in the models for H2a–c. We do not do so for H3, as the word count is already included in the language valence measure, or for H1a–b, as the word count serves as the measure for the dependent variable.

Estimation Approach and Results

Table 1 contains summary statistics and pairwise correlations for all variables used in our analyses. To test for multicollinearity, we calculated the mean variance inflation factors, which, at values between 1.70 (Model 9) and 1.81 (Model 4) for the saturated models, are well below the suggested threshold of 10.0 (Hair et al. 2009; Kutner et al. 2004).
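As an aside on mechanics: with only two predictors, the variance inflation factor reduces to 1/(1 − r²), where r is their pairwise correlation. The sketch below shows this illustrative two-predictor special case with invented data (our models contain more covariates, for which each VIF is computed from a full auxiliary regression):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

def vif_two_predictors(x1, x2):
    """VIF_j = 1 / (1 - R_j^2); with two predictors, R^2 is
    simply the squared pairwise correlation."""
    r = pearson_r(x1, x2)
    return 1 / (1 - r ** 2)

# Moderately correlated hypothetical predictors
vif = vif_two_predictors([1, 2, 3, 4, 5], [2, 1, 4, 3, 6])
```

Even fairly strong correlation between two predictors yields a VIF well below the conventional threshold of 10.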

Table 1 Descriptives and correlations (n = 245)

To test H1a, H1b, and H2a, we use panel random effects regression models to accommodate the panel structure of our dataset. To test H2b and H2c, we employ a pooled fractional probit model, as argument diversity, the dependent variable, is a fractional outcome variable (Baum 2008; Papke and Wooldridge 2008). To test H3, we use a pooled Tobit model, as language valence, the dependent variable, is a censored variable (Wooldridge 2001). To account for the fact that our observations are not independent but are nested in reviewers, we clustered standard errors at the level of the individual reviewer in all models. The results for all models are presented in Table 2. Models 1, 3, 5, 7, and 10 are control models for H1a, H1b, H2a, H2b, and H3, respectively.

Table 2 Regression results

We find marginal support for H1a in Model 2 and support for H1b in Model 4. As anticipated, the results in Model 2 show that the word count, i.e., the review length, is higher for reviews submitted by less conservative reviewers (p < 0.10). Model 4 supports our hypothesis that less conservative reviewers also make use of more arguments in their online reviews (p < 0.05). On average, reviews by less conservative reviewers (political ideology score < 0.5) contain 97 words and 3 arguments, while reviews by more conservative reviewers (political ideology score > 0.5) contain only 71 words and 2 arguments.

Model 6 provides support for H2a: we find a negative and significant (p < 0.01) coefficient for political ideology, suggesting that the more conservative a reviewer, the less cognitively complex the language in their online reviews. Similarly, we find support for H2b in Model 8, albeit with a somewhat less significant coefficient (p < 0.05) for political ideology. Contrary to our expectations, we do not find a mediating effect of cognitively complex language on the effect of political ideology on argument diversity. As depicted in Model 9, when cognitively complex language is added to Model 8, political ideology still has a significant effect on the dependent variable (p < 0.05). A possible explanation could be that the greater argument diversity exhibited by less conservative reviewers results not only from greater cognitive complexity but also directly from greater ambiguity tolerance (Jost et al. 2003).

Finally, the results of Model 11 lend support to H3, as political ideology has a negative and significant (p < 0.05) coefficient. This suggests that more conservative individuals tend to use language with a less positive valence in online reviews.

We ran several robustness checks to assess whether our results hold under alternative estimators. Using OLS models with clustered standard errors provided consistent results. We further obtained very similar results when we re-ran our analyses using multilevel models. While both approaches neglect the fact that our dependent variables are not always continuous, the consistent results further increase our confidence in the reported models.

Discussion

Contribution

Our research contributes to theory and practice in several ways. We explain relevant differences in how individuals write reviews based on differences in their personality, as reflected in political ideology. Prior research has only examined non-personality-induced differences such as expertise, experience, and social connections; or, where it has studied personality, it has done so not in relation to review language but rather in relation to attitudes such as intentions to provide a review. Given that the impact of differences in online reviews on sales and helpfulness has been a major topic in information systems research in past years (e.g., Forman et al. 2008), the lack of scholarly attention to explanatory variables of such differences is surprising. Our research addresses this gap and specifically contributes to research on online reviews (e.g., Goes et al. 2014) and user content generation (e.g., Baeza-Yates and Saez-Trumper 2015) by highlighting how the personality of the reviewer is predictive of the way he or she uses language, builds arguments, and expends effort on a review.

Furthermore, we contribute to theory on information systems research more generally by highlighting the role that users’ political ideology can play in the complex webs in which technology is embedded (Orlikowski and Iacono 2001). Specifically, we show that the construct of political ideology, as the result of stable underlying personality traits and motivational structures (Jost et al. 2003, 2009), is related to everyday human behavior not only in politics and the offline world (e.g., Carney et al. 2008; Jost et al. 2008), but also in online settings beyond mere news media consumption (e.g., Flaxman et al. 2016; Gentzkow and Shapiro 2011). We therefore offer an additional user-specific factor that researchers of technology adoption and use (Graf-Vlachy et al. 2018; Venkatesh et al. 2012) might wish to consider in future work. Our study highlights how the construct of political ideology – which may initially appear far removed from systems design – is relevant for creators of information systems who need to anticipate how different users may engage with a system. Overall, our evidence reaffirms and broadens the argument of Carney et al. (2008) that “the political divide extends far beyond overtly ideological opinions to much subtler and more banal personal interests, tastes, preferences, and interaction styles” (p. 835). In fact, it also applies to information systems use.

Practical Implications

As online reviews have become an integral success factor for online retailers (e.g., Dellarocas 2003; Kumar and Benbasat 2006), such firms rely heavily on their customers to provide helpful reviews. Our findings suggest that reviewer personality influences reviews’ depth and multifacetedness, both of which have previously been linked to review helpfulness (e.g., Mudambi and Schuff 2010; Willemsen et al. 2011) and which, in turn, affect purchase intention (Coursaris et al. 2018). As online retailers may be able to infer customers’ political ideology, they can use this information productively. While they may not have access to a customer’s clickstream across the web, retailers could, for example, obtain ideology information directly by way of survey or infer it from past purchasing behavior (potentially using models trained on survey data from a subset of users) or social media posts (e.g., Preoţiuc-Pietro et al. 2017) – as far as this is ethically and legally permissible.

They can then use information on would-be reviewers’ ideology to possibly improve the data quality (Tilly et al. 2017) of their reviews by subtly nudging (Weinmann et al. 2016) and supporting them in various ways, e.g., by providing personalized input methods, guidance, and incentives. For instance, as more conservative individuals gravitate towards lower argument diversity, firms could provide such reviewers not only with a free text field but additionally with a structured review template, in which reviewers can provide positive and negative feedback. Specifically, the template could allow reviewers to select from pre-populated input fields, displaying frequently used elements of feedback (Lukyanenko et al. 2014). Less conservative reviewers might only be offered a standard free text input field. Appropriately tuned autocomplete features or personalized defaults (Goldstein et al. 2008) might also be helpful in improving review quality. Further, instructions could be tailored to reviewers’ political ideology in that less conservative reviewers might be reminded to include a clear “buy or don’t buy” recommendation (to offset their tendency towards balance in their arguments) and more conservative reviewers to write a certain number of words (to counter their tendency towards writing shorter reviews). Nudges could also include customized appeals to motivate users to write a review in the first place. For example, building on the notion that people tend to assume that others are like them (Marks and Miller 1987), instructions for less conservative individuals might include appeals to write a review to help others make better decisions (leveraging their greater altruism) and try out new products or services (appealing to their greater openness to experience; Jost et al. 2003). 
Conversely, instructions to more conservative individuals might highlight the ability of online reviews to prevent poor decisions by other customers (appealing to their greater fear of loss and prevention focus; Jost et al. 2003) and discipline providers of poor quality and service (appealing to their greater support of punitive measures; Sargent 2004). Finally, to increase review depth, more conservative individuals could be incentivized to write longer reviews by rewarding them symbolically, for example by giving visual feedback about whether their review is considered sufficiently long to be helpful, or materially, for example with coupons, if their reviews exceed a specified length.

Naturally, these suggestions are contingent on the specific objectives of the online retailer and the precise consequences of review depth and multifacetedness on the achievement of these objectives. Since prior research has, for example, found conflicting evidence on the consequences of review valence on review helpfulness (Cao et al. 2011; Pan and Zhang 2011; Wu 2013), we not only refrain from making suggestions regarding review valence but also caution the reader that the effects of other properties of review language like review depth and multifacetedness might very well be context-dependent.

Limitations and Future Research

Like any empirical study, ours has limitations. Most critically, we base our measure of political ideology on a relatively novel methodology by Flaxman et al. (2016). While there is substantial evidence that the measure’s foundation, i.e., people’s preference for news outlets that conform to their own political ideology, is solid in the aggregate, no attempt has been made to validate the measure on an individual level. In particular, further validation of this measure would be welcome to ensure that it has tolerable measurement error and that it is valid across the entire liberal–conservative spectrum, even when the underlying news consumption includes coverage that is not overtly political. We therefore propose further research that links clickstream data with detailed self-reports of political ideology. This is particularly important since, although our measure avoids common problems of self-reports, it is subject to other potential biases such as reactivity or technical difficulties and errors in data collection (Jürgens et al. 2019).

Additional opportunities for future research emerge from our findings. For instance, since we established a link between political ideology and review language, and prior literature had linked review language to review helpfulness, purchase intention, and sales (e.g., Coursaris et al. 2018; Ghose and Ipeirotis 2006, 2011; Mudambi and Schuff 2010; Yin et al. 2014), we wonder if review language might mediate the relationship between reviewers’ political ideology and such substantive outcomes. Further, it would be interesting to study if the relationship between political ideology and review language is conditional on other personality characteristics like the Big Five personality traits. For example, it is conceivable that conscientiousness may dampen the relationship specifically for review depth, as relatively conscientious individuals may be likely to write thorough and balanced reviews irrespective of their political ideology.

Moving beyond the subject of online reviews, our study more broadly raises the question of how political ideology influences users’ creation of and interaction with user-generated content. For example, contributions to open source software projects or efforts like Wikipedia might be influenced by motives reflected in individuals’ political ideologies, such as altruism (Wagner and Prasarnphanich 2007). Also, there is clearly an opportunity for further general research on how system designers can leverage users’ political ideology to improve the quality of user-generated content (Lukyanenko et al. 2014; Tilly et al. 2017).

Conclusion

Overall, this paper contributes by introducing personality, as reflected in political ideology, as a predictor of online review language. We primarily view our study as a first step towards a deeper, more nuanced understanding of how reviews are created. However, it also suggests that political ideology might be an important construct for research on online behavior on a broader level and even for information systems research in general. Specifically, we theoretically and empirically underscore that the political ideology of system users is closely linked to how they engage with information technology – a notion that has potentially vital implications for the design of IT systems and user interactions. We thus encourage scholars to not only empirically further validate our findings, but also explore additional potential effects of personality in the context of online reviews, online behavior, and beyond.

Notes

  1. We searched 28 journals, including all contained in the Senior Scholars’ Basket of Journals, the recommended journals of several AIS Special Interest Groups (Cognitive Research, Decision Support and Analytics, Enterprise Systems, Human–Computer Interaction, Systems Analysis and Design), as well as Business & Information Systems Engineering. Where possible, we searched using Web of Science employing the keywords (review OR reviews) AND personality. We searched AIS Transactions on Human–Computer Interaction and Communications of the AIS through their respective websites using the keywords review* AND personality. As the Communications of the AIS included a very large number of articles concerning peer review, we added the following exclusions: NOT peer reviewer NOT review process NOT peer review. We screened titles and abstracts of all 170 hits to identify the relevant articles.

References

  • Abe JAA (2011) Changes in Alan Greenspan’s language use across the economic cycle: a text analysis of his testimonies and speeches. J Lang Soc Psychol 30(2):212–223

  • Alford JR, Funk CL, Hibbing JR (2005) Are political orientations genetically transmitted? Am Polit Sci Rev 99(2):153–167

  • Altemeyer B (1998) The other “authoritarian personality”. Adv Exp Soc Psychol 30:47–92

  • Archak N, Ghose A, Ipeirotis PG (2011) Deriving the pricing power of product features by mining consumer reviews. Manag Sci 57(8):1485–1509

  • Baeza-Yates R, Saez-Trumper D (2015) Wisdom of the crowd or wisdom of a few? An analysis of users’ content generation. In: Proceedings of the 26th ACM conference on hypertext & social media, pp 69–74

  • Baum CF (2008) Stata tip 63: modeling proportions. Stata J 8(2):299–303

  • Baxter G, Marcella R (2012) Does Scotland ‘like’ this? Social media use by political parties and candidates in Scotland during the 2010 UK general election campaign. Libri 62(2):109–124

  • Block J, Block JH (2006) Nursery school personality and political orientation two decades later. J Res Pers 40(5):734–749

  • Bobbio N (1996) Left and right. Polity Press, Cambridge

  • Bucklin RE, Sismeiro C (2009) Click here for Internet insight: advances in clickstream data analysis in marketing. J Interact Mark 23(1):35–48

  • Budner S (1962) Intolerance of ambiguity as a personality variable. J Pers 30(1):29–50

  • Cao Q, Duan W, Gan Q (2011) Exploring determinants of voting for the “helpfulness” of online user reviews: a text mining approach. Decis Support Syst 50(2):511–521

  • Carney DR, Jost JT, Gosling SD, Potter J (2008) The secret lives of liberals and conservatives: personality profiles, interaction styles, and the things they leave behind. Political Psychol 29(6):807–840

  • Carraro L, Castelli L, Macchiella C (2011) The automatic conservative: ideology-based attentional asymmetries in the processing of valenced information. PLoS ONE 6(11):e26456

  • Chen P (2010) Adoption and use of digital media in election campaigns: Australia, Canada and New Zealand compared. Public Commun Rev 1:3–26

  • Chevalier JA, Mayzlin D (2006) The effect of word of mouth on sales: online book reviews. J Mark Res 43(3):345–354

  • Chiang C-F (2010) Political differentiation in newspaper markets. Working paper

  • Chiou-Wei SZ, Inman JJ (2008) Do shoppers like electronic coupons? A panel data analysis. J Retail 84(3):297–307

  • Chirumbolo A (2002) The relationship between need for cognitive closure and political orientation: the mediating role of authoritarianism. Pers Individ Differ 32(4):603–610

  • Chirumbolo A, Areni A, Sensales G (2004) Need for cognitive closure and politics: voting, political attitudes and attributional style. Int J Psychol 39(4):245–253

  • Chiu C-M, Hsu M-H, Wang ETG (2006) Understanding knowledge sharing in virtual communities: an integration of social capital and social cognitive theories. Decis Support Syst 42(3):1872–1888

  • Choi SO, Kim BC (2012) Voter intention to use e-voting technologies: security, technology acceptance, election type, and political ideology. J Inf Technol Politics 9(4):433–452

  • Clemons E, Gao G, Hitt L (2006) When online reviews meet hyperdifferentiation: a study of the craft beer industry. J Manag Inf Syst 23(2):149–171

  • Cohen J (1992) A power primer. Psychol Bull 112(1):155–159

  • Conway LG III, Gornick LJ, Houck SC, Anderson C, Stockert J, Sessoms D, McCue K (2016) Are conservatives really more simple-minded than liberals? The domain specificity of complex thinking. Polit Psychol 37(6):777–798

  • Coursaris CK, Van Osch W, Albini A (2018) Antecedents and consequents of information usefulness in user-generated online reviews: a multi-group moderation analysis of review valence. AIS Trans Hum-Comput Interact 10(1):1–25

  • Crowley AE, Hoyer WD (1994) An integrative framework for understanding two-sided persuasion. J Consum Res 20(4):561–574

  • Dabholkar PA (2006) Factors influencing consumer choice of a ‘rating web site’: an experimental investigation of an online interactive decision aid. J Mark Theory Pract 14(4):259–273

  • Dellarocas C (2003) The digitization of word of mouth: promise and challenges of online feedback mechanisms. Manag Sci 49(10):1407–1424

  • Farwell L, Weiner B (2000) Bleeding hearts and the heartless: popular perceptions of liberal and conservative ideologies. Pers Soc Psychol Bull 26(7):845–852

  • Faul F, Erdfelder E, Lang A-G, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39:175–191

  • Flaxman S, Goel S, Rao JM (2016) Filter bubbles, echo chambers, and online news consumption. Public Opin Q 80(S1):298–320

  • Forman C, Ghose A, Wiesenfeld B (2008) Examining the relationship between reviews and sales: the role of reviewer identity disclosure in electronic markets. Inf Syst Res 19(3):291–313

  • Fowler JH, Kam CD (2007) Beyond the self: social identity, altruism, and political participation. J Politics 69(3):813–827

  • Furnham A, Ribchester T (1995) Tolerance of ambiguity: a review of the concept, its measurement and applications. Curr Psychol 14(3):179–199

  • García-Sánchez IM, Rodríguez-Domínguez L, Gallego-Álvarez I (2011) The relationship between political factors and the development of e-participatory government. Inf Soc 27(4):233–251

  • Gefen D (2000) E-commerce: the role of familiarity and trust. Omega 28(6):725–737

  • Gentzkow M, Shapiro JM (2010) What drives media slant? Evidence from U.S. daily newspapers. Econometrica 78(1):35–71

  • Gentzkow M, Shapiro JM (2011) Ideological segregation online and offline. Q J Econ 126(4):1799–1839

  • Gentzkow M, Shapiro JM, Sinkinson M (2014) Competition and ideological diversity: historical evidence from US newspapers. Am Econ Rev 104(10):3073–3114

  • Ghose A, Ipeirotis PG (2006) Designing ranking systems for consumer reviews: the impact of review subjectivity on product sales and review quality. In: Proceedings of the 16th annual workshop on information technology and systems, pp 303–310

  • Ghose A, Ipeirotis PG (2011) Estimating the helpfulness and economic impact of product reviews: mining text and reviewer characteristics. IEEE Trans Knowl Data Eng 23(10):1498–1512

  • Goes PB, Lin M, Au Yeung CM (2014) “Popularity effect” in user-generated content: evidence from online product reviews. Inf Syst Res 25(2):222–238

  • Goldstein DG, Johnson EJ, Herrmann A, Heitmann M (2008) Nudge your customers toward better choices. Harv Bus Rev 86(12):99–105

  • Graf-Vlachy L, Goyal T, Ouardi Y, König A (2017) Political ideology as a predictor of online media piracy. In: Proceedings of the 25th European conference on information systems (ECIS)

  • Graf-Vlachy L, Buhtz K, König A (2018) Social influence in technology adoption: taking stock and moving forward. Manag Rev Q 68(1):37–76

  • Hair J, Black W, Babin B, Anderson R (2009) Multivariate data analysis, 7th edn. Pearson Prentice Hall, Upper Saddle River

  • Helm R, Möller M, Mauroner O, Conrad D (2013) The effects of a lack of social recognition on online communication behavior. Comput Hum Behav 29(3):1065–1077

  • Hew K, Hara N (2007) Knowledge sharing in online environments: a qualitative case study. J Am Soc Inf Sci Technol 58(14):2310–2324

  • Hibbing JR, Smith KB, Alford JR (2014) Differences in negativity bias underlie variations in political ideology. Behav Brain Sci 37(3):297–307

  • Himelboim I, McCreery S, Smith M (2013) Birds of a feather tweet together: integrating network and content analyses to examine cross-ideology exposure on Twitter. J Comput Med Commun 18:40–60

  • Himelboim I, Sweetser KD, Tinkham SF, Cameron K, Danelo M, West K (2016) Valence-based homophily on Twitter: network analysis of emotions and political talk in the 2012 presidential election. New Media Soc 18(7):1382–1400

  • Hu N, Liu L, Zhang JJ (2008) Do online reviews affect product sales? The role of reviewer characteristics and temporal effects. Inf Technol Manag 9(3):201–214

  • Iyengar S, Hahn KS (2009) Red media, blue media: evidence of ideological selectivity in media use. J Commun 59(1):19–39

  • Janis IL, Fadner RH (1943) A coefficient of imbalance for content analysis. Psychometrika 8(2):105–119

  • Jing-Schmidt Z (2007) Negativity bias in language: a cognitive-affective model of emotive intensifiers. Cogn Linguist 18(3):417–443

  • Joel S, Burton CM, Plaks E (2013) Conservatives anticipate and experience stronger emotional reactions to negative outcomes. J Pers 82(1):32–43

  • Jost JT (2006) The end of the end of ideology. Am Psychol 61(7):651–670

  • Jost JT, Amodio DM (2012) Political ideology as motivated social cognition: behavioral and neuroscientific evidence. Motiv Emot 36(1):55–64

  • Jost JT, Glaser J, Kruglanski AW, Sulloway FJ (2003) Political conservatism as motivated social cognition. Psychol Bull 129(3):339–375

  • Jost JT, Napier JL, Thorisdottir H, Gosling SD, Palfai TP, Ostafin B (2007) Are needs to manage uncertainty and threat associated with political conservatism or ideological extremity? Pers Soc Psychol Bull 33(7):989–1007

  • Jost JT, Nosek BA, Gosling SD (2008) Ideology: its resurgence in social, personality, and political psychology. Perspect Psychol Sci 3(2):126–136

  • Jost JT, Federico CM, Napier JL (2009) Political ideology: its structure, functions, and elective affinities. Annu Rev Psychol 60:307–337

  • Jürgens P, Stark B, Magin M (2019) Two half-truths make a whole? On bias in self-reports and tracking data. Soc Sci Comput Rev. https://doi.org/10.1177/0894439319831643

  • Kumar N, Benbasat I (2006) The influence of recommendations and consumer reviews on evaluations of websites. Inf Syst Res 17(4):425–439

  • Kutner MH, Nachtsheim C, Neter J (2004) Applied linear regression models, 4th edn. McGraw-Hill/Irwin, Chicago

  • Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33(1):159–174

  • Lee CS, Ma L (2012) News sharing in social media: the effect of gratifications and prior experience. Comput Hum Behav 28(2):331–339

  • Li ST, Pham TT, Chuang HC (2019) Do reviewers’ words affect predicting their helpfulness ratings? Locating helpful reviewers by linguistics styles. Inf Manag 56(1):28–38

  • Lis B (2013) In eWOM we trust. Bus Inf Syst Eng 5(3):129–140

  • Lis B, Neßler C (2014) Electronic word of mouth. Bus Inf Syst Eng 6(1):63–65

  • Liu Y, Huang X, An A, Yu X (2008) Modeling and predicting the helpfulness of online reviews. In: IEEE international conference on data mining, pp 443–452

  • Lu H-P, Hsiao K-L (2007) Understanding intention to continuously share information on weblogs. Int Res 17(4):345–361

  • Lukyanenko R, Parsons J, Wiersma YF (2014) The IQ of the crowd: understanding and improving information quality in structured user-generated content. Inf Syst Res 25(4):669–689

  • Marks G, Miller N (1987) Ten years of research on the false-consensus effect: an empirical and theoretical review. Psychol Bull 102(1):72–90

  • Mendez MF (2017) A neurology of the conservative–liberal dimension of political ideology. J Neuropsychiatry Clin Neurosci 29(2):86–94

  • Mudambi SM, Schuff D (2010) What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Q 34(1):185–200

  • New York Times (2012) President exit polls. http://elections.nytimes.com/2012/results/president/exit-polls. Accessed 07 Aug 2015

  • Orlikowski WJ, Iacono CS (2001) Research commentary: desperately seeking the “IT” in IT research—a call to theorizing the IT artifact. Inf Syst Res 12(2):121–134

  • Pan Y, Zhang JQ (2011) Born unequal: a study of the helpfulness of user-generated product reviews. J Retail 87(4):598–612

  • Pang MS (2016) Politics and information technology investments in the US Federal government in 2003–2016. Inf Syst Res 28(1):33–45

  • Papke LE, Wooldridge JM (2008) Panel data methods for fractional response variables with an application to test pass rates. J Econ 145(1–2):121–133

  • Pavlou PA, Dimoka A (2006) The nature and role of feedback text comments in online marketplaces: implications for trust building, price premiums, and seller differentiation. Inf Syst Res 17(4):392–414

  • Pennebaker JW, King LA (1999) Linguistic styles: language use as an individual difference. J Pers Soc Psychol 77(6):1296–1312

  • Pennebaker JW, Francis ME, Booth RJ (2001) Linguistic inquiry and word count: LIWC 2001. Lawrence Erlbaum Associates, Mahwah

  • Picazo-Vela S, Chou SY, Melcher AJ, Pearson JM (2010) Why provide an online review? An extended theory of planned behavior and the role of Big-Five personality traits. Comput Hum Behav 26(4):685–696

  • Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP (2003) Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 88(5):879–903

  • Pollock TG, Rindova VP (2003) Media legitimation effects in the market for initial public offerings. Acad Manag J 46(5):631–642

  • Preoţiuc-Pietro D, Liu Y, Hopkins D, Ungar L (2017) Beyond binary labels: political ideology prediction of Twitter users. In: Proceedings of the 55th annual meeting of the association for computational linguistics, vol 1: Long Papers, pp 729–740

  • Roper Center (2012) How groups voted in 2012. http://www.ropercenter.uconn.edu/polls/us-elections/how-groups-voted/how-groups-voted-2012/. Accessed 07 Aug 2015

  • Safi R, Yu Y (2017) Online product review as an indicator of users’ degree of innovativeness and product adoption time: a longitudinal analysis of text reviews. Eur J Inf Syst 26(4):414–431

  • Sargent MJ (2004) Less thought, more punishment: need for cognition predicts support for punitive responses to crime. Pers Soc Psychol Bull 30(11):1485–1493

  • Schindler RM, Bickart B (2012) Perceived helpfulness of online consumer reviews: the role of message content and style. J Consum Behav 11:234–243

    Article  Google Scholar 

  • Sidanius JIM (1978) lntolerance of ambiguity and socio-politico ideology: a multidimensional analysis. Eur J Soc Psychol 8:215–235

    Article  Google Scholar 

  • Slatcher RB, Chung CK, Pennebaker JW, Stone LD (2007) Winning words: individual differences in linguistic style among U.S. presidential and vice presidential candidates. J Res Pers 41(1):63–75

    Article  Google Scholar 

  • Smith A (2013) Civic engagement in the digital age. pew research center. https://www.pewinternet.org/2013/04/25/civic-engagement-in-the-digital-age/. Accessed 29 July 2019

  • Smith D, Menon S, Sivakumar K (2005) Online peer and editorial recommendations, trust, and choice in virtual markets. J Interact Mark 19(3):15–37

    Article  Google Scholar 

  • Stroud NJ (2008) Media use and political predispositions: revisiting the concept of selective exposure. Polit Behav 30(3):341–366

    Article  Google Scholar 

  • Suedfeld P, Rank AD (1976) Revolutionary leaders: long-term success as a function of changes in conceptual complexity. J Pers Soc Psychol 34(2):169–178

    Article  Google Scholar 

  • Sun Y, Fang Y, Lim KH (2014) Understanding knowledge contributors’ satisfaction in transactional virtual communities: a cost-benefit trade-off perspective. Inf Manag 51(4):441–450

    Article  Google Scholar 

  • Tetlock PE (1983) Cognitive style and political ideology. J Pers Soc Psychol 45(1):118–126

    Article  Google Scholar 

  • Tetlock PE, Hannum KA, Micheletti PM (1984) Stability and change in the complexity of senatorial debate: testing the cognitive versus rhetorical style hypotheses. J Pers Soc Psychol 46(5):979–990

    Article  Google Scholar 

  • Tilly R, Posegga O, Fischbach K, Schoder D (2017) Towards a conceptualization of data and information quality in social information systems. Bus Inf Syst Eng 59(1):3–21

    Article  Google Scholar 

  • Tomkins SS (1995) Ideology and affect. In: Demos EV (ed) Exploring affect: the selected writings of Silvan S. Tomkins. University of Cambridge Press, New York, pp 109–167

    Chapter  Google Scholar 

  • Van Hiel A, Mervielde I (2003) The measurement of cognitive complexity and its relationship with political extremism. Polit Psychol 24(4):781–801

    Article  Google Scholar 

  • Van Hiel A, Mervielde I (2004) Openness to experience and boundaries in the mind: relationships with cultural and economic conservative beliefs. J Pers 72(4):659–686

    Article  Google Scholar 

  • Van Lange PAM, Bekkers R, Chirumbolo A, Leone L (2012) Are conservatives less likely to be prosocial than liberals? From games to ideology, political preferences and voting. Eur J Pers 26(5):461–473

    Article  Google Scholar 

  • Venkatesh V, Thong JY, Xu X (2012) Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q 36(1):157–178

    Article  Google Scholar 

  • Wagner C, Prasarnphanich P (2007) Innovating collaborative content creation: the role of altruism and wiki technology. In: 2007 40th Annual Hawaii international conference on system sciences, pp 18–28

  • Wasko M, Faraj S (2005) Why should I share? Examining social capital and knowledge contribution in electronic networks of practice. MIS Q 29(1):35–57

    Article  Google Scholar 

  • Weinmann M, Schneider C, vom Brocke J (2016) Digital nudging. Bus Inf Syst Eng 58(6):433–436

    Article  Google Scholar 

  • Willemsen LM, Neijens PC, Bronner F, De Ridder JA (2011) “Highly recommended!” The content characteristics and perceived usefulness of online consumer reviews. J Comput-Mediat Commun 17(1):19–38

    Article  Google Scholar 

  • Wooldridge JM (2001) Econometric analysis of cross section and panel data. The MIT Press, Cambridge

    Google Scholar 

  • Wu PF (2013) In search of negativity bias: an empirical study of perceived helpfulness of online reviews. Psychol Mark 30(11):971–984

    Article  Google Scholar 

  • Yang J, Barnidge M, Rojas H (2017) The politics of “unfriending”: user filtration in response to political disagreement on social media. Comput Hum Behav 70:22–29

    Article  Google Scholar 

  • Yin D, Bond S, Zhang H (2014) Anxious or angry? Effects of discrete emotions on the perceived helpfulness of online reviews. MIS Q 38(2):539–560

    Article  Google Scholar 

  • Zettler I, Hilbig BE (2010) Attitudes of the selfless: explaining political orientation with altruism. Pers Individ Differ 48(3):338–342

    Article  Google Scholar 

  • Zhang Z, Varadarajan B (2006) Utility scoring of product reviews. In: Proceedings of the 15th ACM international conference on information and knowledge management, pp 51–57

  • Zhu F, Zhang X (2010) Impact of online consumer reviews on sales: the moderating role of product and consumer characteristics. J Mark 74(2):133–148

    Article  Google Scholar 

  • Zmud RW (1979) Individual differences and MIS success: a review of the empirical literature. Manag Sci 25(10):966–979

    Article  Google Scholar 

Download references

Acknowledgement

Open Access funding provided by Projekt DEAL. This project benefitted from a collaboration with McKinsey & Company's Global iConsumer Research Initiative.

Author information

Corresponding author

Correspondence to Lorenz Graf-Vlachy.

Additional information

Accepted after three revisions by Jens Dibbern.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 550 kb)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Graf-Vlachy, L., Goyal, T., Ouardi, Y. et al. Reviews Left and Right: The Link Between Reviewers’ Political Ideology and Online Review Language. Bus Inf Syst Eng 63, 403–417 (2021). https://doi.org/10.1007/s12599-020-00652-1

Keywords

  • Online consumer reviews
  • Political ideology
  • Review language
  • Reviewer personality