
Sentiment deviations in responses to movie trailers across social media platforms

Abstract

Social media listening has become an integral part of many companies' marketing strategies. Using a unique dataset of social media comments on 413 movie trailers, we document systematic differences in the sentiments expressed on Facebook and YouTube. First, Facebook comments are less likely than YouTube comments to express sentiments. Second, when sentiments are expressed, Facebook comments tend to be more positive than those on YouTube. Third, on both platforms, comments are more likely to express sentiments after a movie’s release than before it. Furthermore, the sentiment gap between Facebook and YouTube diminishes after a movie’s release. We propose a behavioral explanation for our findings based on network structure and social desirability bias and test our hypothesis with an experiment. Finally, we demonstrate that cross-platform sentiment divergence is significantly associated with box office revenue.



Notes

  1. The same algorithm is applied to extract sentiments from both Facebook and YouTube comments, thereby making the sentiments comparable across these two social media platforms. We do not have access to the text of user comments on movie trailers.

  2. In collaboration with the social media listening company, we conducted a test to assess the accuracy of the sentiment extraction algorithm. Specifically, we used the automated algorithm to extract sentiments from a random sample of 1,500 comments from Twitter, Facebook, and YouTube, mostly related to media and entertainment. In parallel, we asked two independent human raters to categorize these comments by the sentiment expressed into one of the “top eight” sentiments (plus an “other” category). We then computed the inter-rater agreement between the two human raters and compared it with the inter-rater agreement between each human rater and the automated algorithm. The inter-rater agreement between the human raters was 70.1%, while the average inter-rater agreement between the human raters and the automated system was 71.9% (73.9% vs. rater A and 70.0% vs. rater B). The difference between 70.1% and 71.9% is statistically non-significant (p = .20), which supports the claim that the automated tagging system is reliable. Furthermore, the social media company indicated that it frequently conducts tests to assess its sentiment extraction accuracy and that the inter-rater agreement (vs. human raters) ranges from 70% to more than 85% for movie-related comments. This is generally in line with reported industry standards of around 80–85% (Jiang et al., 2011; Kennedy & Inkpen, 2006). (A sketch of this agreement computation appears after these notes.)

  3. For each paired t-test reported herein, we also performed the same comparison using the non-parametric Wilcoxon signed-rank test as a robustness check. All results remain substantively unchanged. (See the second sketch after these notes.)

  4. As the movies in our sample have different release dates, we do not conduct a single-intervention study; therefore, our results do not rely on the parallel-trends assumption (Angrist & Pischke, 2009). In addition, we estimate a regression that includes movie fixed effects (see Appendix Table 3), and the key results remain substantively unchanged. (See the third sketch after these notes.)

  5. https://en.wikipedia.org/wiki/History_of_Facebook.
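The following minimal sketch illustrates the kind of agreement check described in note 2. It is not the code actually used: the label vectors are simulated, and the use of Cohen's kappa and a two-proportion z-test are assumptions, since the note does not specify the company's algorithm or the exact significance test behind p = .20.

```python
# Hypothetical illustration of the inter-rater agreement check in note 2.
# Labels are simulated; in the actual test they come from two human raters and
# the automated tagger classifying 1,500 comments into nine categories.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(0)
n, categories = 1500, 9          # "top eight" sentiments plus an "other" bucket

rater_a = rng.integers(0, categories, n)
rater_b = np.where(rng.random(n) < 0.70, rater_a, rng.integers(0, categories, n))
machine = np.where(rng.random(n) < 0.72, rater_a, rng.integers(0, categories, n))

def percent_agreement(x, y):
    """Share of comments on which two label vectors coincide."""
    return float(np.mean(x == y))

human_human = percent_agreement(rater_a, rater_b)
human_machine = (percent_agreement(rater_a, machine)
                 + percent_agreement(rater_b, machine)) / 2
print(f"human-human agreement:   {human_human:.3f}")
print(f"human-machine agreement: {human_machine:.3f}")
print(f"kappa, rater A vs. B:    {cohen_kappa_score(rater_a, rater_b):.3f}")

# One possible test of the difference in agreement rates (an assumption; the
# note does not state which test produced p = .20).
counts = [round(n * human_human), round(n * human_machine)]
_, p_value = proportions_ztest(counts, [n, n])
print(f"two-proportion z-test: p = {p_value:.2f}")
```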
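A minimal sketch of the robustness check in note 3, assuming hypothetical per-movie sentiment measures; the variable names and simulated values are illustrative only, not the paper's data.

```python
# Hypothetical illustration of note 3: each paired t-test is re-run as a
# Wilcoxon signed-rank test on the same per-movie pairs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_movies = 413
facebook = rng.normal(0.60, 0.15, n_movies)            # e.g., share of positive comments per movie
youtube = facebook - rng.normal(0.05, 0.10, n_movies)  # simulated YouTube counterpart

t_stat, t_p = stats.ttest_rel(facebook, youtube)   # parametric paired comparison
w_stat, w_p = stats.wilcoxon(facebook, youtube)    # non-parametric counterpart
print(f"paired t-test:        t = {t_stat:.2f}, p = {t_p:.3g}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.3g}")
```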
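A minimal sketch of a movie fixed-effects regression of the kind referred to in note 4, using simulated data. The variable names (sentiment, facebook, post_release, movie_id) and the movie-clustered standard errors are assumptions, not necessarily the specification reported in Appendix Table 3.

```python
# Hypothetical illustration of note 4: a sentiment measure regressed on a
# platform indicator, a post-release indicator, and their interaction, with
# movie fixed effects absorbed via C(movie_id) dummies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_obs = 5000
df = pd.DataFrame({
    "movie_id": rng.integers(0, 413, n_obs),
    "facebook": rng.integers(0, 2, n_obs),
    "post_release": rng.integers(0, 2, n_obs),
})
df["sentiment"] = (0.10 * df["facebook"]
                   - 0.05 * df["facebook"] * df["post_release"]
                   + rng.normal(0, 1, n_obs))

model = smf.ols("sentiment ~ facebook * post_release + C(movie_id)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["movie_id"]}  # clustering by movie is an assumption
)
print(model.params[["facebook", "post_release", "facebook:post_release"]])
```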

References

  • Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist’s companion. Princeton University Press.


  • Aron, A., Aron, E. N., & Smollan, D. (1992). Inclusion of other in the self scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63(4), 596.


  • Balducci, B., & Marinova, D. (2018). Unstructured data in marketing. Journal of the Academy of Marketing Science, 46(4), 557–590.


  • Bogaert, M., Ballings, M., Van den Poel, D., & Oztekin, A. (2021). Box office sales and social media: A cross-platform comparison of predictive ability and mechanisms. Decision Support Systems, 147, 113517.


  • Edwards, A. L. (1957). The social desirability variable in personality assessment and research. Dryden Press.


  • Holtgraves, T. (2004). Social desirability and self-reports: Testing models of socially desirable responding. Personality and Social Psychology Bulletin, 30(2), 161–172.


  • Jiang, L., Yu, M., Zhou, M., Liu, X., & Zhao, T. (2011). Target-dependent Twitter sentiment classification. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, 151–160.

  • Joseph, S. (2019). Inside Kellogg’s social-driven strategy to launch new products. Digiday. https://digiday.com/marketing/inside-kelloggs-social-driven-strategy-to-launch-new-products/. Accessed 28 May 2020.

  • Kennedy, A., & Inkpen, D. (2006). Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2), 110–125. https://doi.org/10.1111/j.1467-8640.2006.00277.x


  • Lamberton, C., & Stephen, A. T. (2016). A thematic exploration of digital, social media, and mobile marketing: Research evolution from 2000 to 2015 and an agenda for future inquiry. Journal of Marketing, 80(6), 146–172.


  • McLachlan, S. (2022). 23 YouTube statistics that matter to marketers in 2020. Hootsuite Social Media Management. https://blog.hootsuite.com/youtube-stats-marketers/. Accessed 7 Oct 2022.

  • Moe, W. W., & Schweidel, D. A. (2012). Online product opinions: Incidence, evaluation, and evolution. Marketing Science, 31(3), 372–386.


  • Moe, W. W., & Schweidel, D. A. (2017). Opportunities for innovation in social media analytics. Journal of Product Innovation Management, 34(5), 697–702.


  • Neff, J. (2012). P&G learns digital lessons—and that twitter success brings spambots. Advertising Age. https://adage.com/article/digital/p-g-learns-digital-lessons-signal-event-cincinnati/233202. Accessed 28 May 2020.

  • Netzer, O., Feldman, R., Goldenberg, J., & Fresko, M. (2012). Mine your own business: Market-structure surveillance through text mining. Marketing Science, 31(3), 521–543.


  • Schweidel, D. A., & Moe, W. W. (2014). Listening in on social media: A joint model of sentiment and venue format choice. Journal of Marketing Research, 51(4), 387–402.


  • Spottswood, E. L., & Hancock, J. T. (2016). The positivity bias and prosocial deception on Facebook. Computers in Human Behavior, 65, 252–259. https://doi.org/10.1016/j.chb.2016.08.019


  • Tirunillai, S., & Tellis, G. J. (2014). Mining marketing meaning from online chatter: Strategic brand analysis of big data using latent dirichlet allocation. Journal of Marketing Research, 51(4), 463–479.


  • Villarroel Ordenes, F., Ludwig, S., De Ruyter, K., Grewal, D., & Wetzels, M. (2017). Unveiling what is written in the stars: Analyzing explicit, implicit, and discourse patterns of sentiment in social media. Journal of Consumer Research, 43(6), 875–894.



Acknowledgements

The authors thank the Editor-in-Chief and two anonymous reviewers for very helpful comments and suggestions that greatly improved the paper.

Author information


Corresponding author

Correspondence to Ye Hu.

Ethics declarations

Sam Hui served as a paid consultant for the company that provided the social media listening data. This research did not receive funding from the data provider.

Ye Hu and Ming Chen certify that they have no relevant financial interests to disclose.

All authors certify that they have no relevant non-financial interests to disclose.

Study 2 (the Qualtrics experiment) was funded by the Department of Marketing at the University of North Carolina, Charlotte.

Study 2 was approved by the Institutional Review Board at the University of North Carolina, Charlotte.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 23 KB)

Appendix


Table 3 Robustness check of the sentiment analysis using a fixed-effects regression
Table 4 Correlation matrix of variables

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hu, Y., Chen, M. & Hui, S. Sentiment deviations in responses to movie trailers across social media platforms. Mark Lett (2022). https://doi.org/10.1007/s11002-022-09656-1


Keywords

  • Social media listening
  • Sentiment analysis
  • Movie trailers