
Distributed Computing in Social Media Analytics

  • Matthew Riemer
Chapter
Part of the Scalable Computing and Communications book series (SCC)

Abstract

The rise of social media has produced some of the largest cultural shifts of the twenty-first century so far, with billions of people across the world actively using social media today. This abrupt societal transition has dramatically increased the extent to which a person’s social footprint is documented online in the public domain. While privacy is a key feature of some social media sites, such as Snapchat and Facebook, on sites like Twitter comments are intentionally made public for the world to see. The ever-growing number of intentionally public interactions creates new opportunities for organizations to better understand consumers and how they feel about specific issues or products. In this chapter we discuss social polling and influencer analytics, two of the most popular use cases for social media analytics. We also highlight an emerging trend across multiple industries in which organizations use aggregate social polling as input to demand forecasting solutions. Data for social analytics is largely unstructured and the social graph is massive, so the choice of analytics techniques can have an enormous impact on the quality of results and on the return on investment for businesses that undertake analytics initiatives. We therefore cover the relative merits of a variety of popular analytics techniques from industry and academia, and address best practices for these use cases. A minimal illustration of the aggregate social polling signal mentioned above follows the abstract.
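
To make the notion of an aggregate social polling signal concrete, the following minimal Python sketch (the data and names are hypothetical illustrations, not the chapter's implementation) averages sentiment-scored posts per day and product; in practice, a time series built this way could be supplied as an additional input to a demand forecasting model.

    from collections import defaultdict
    from datetime import date

    # Hypothetical sentiment-scored posts: (day, product, sentiment score in [-1, 1]).
    posts = [
        (date(2017, 6, 1), "widget", 0.8),
        (date(2017, 6, 1), "widget", -0.2),
        (date(2017, 6, 2), "widget", 0.5),
    ]

    def aggregate_social_poll(scored_posts):
        """Average sentiment per (day, product), weighting each post equally."""
        totals = defaultdict(lambda: [0.0, 0])  # (day, product) -> [sum, count]
        for day, product, score in scored_posts:
            bucket = totals[(day, product)]
            bucket[0] += score
            bucket[1] += 1
        return {key: total / count for key, (total, count) in totals.items()}

    print(aggregate_social_poll(posts))

At the scale of a real social platform, the same per-day aggregation would typically be distributed (for example as a map-reduce style group-by), but the aggregate produced is the same kind of daily polling signal.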

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. AI Foundations Lab, IBM T.J. Watson Research Center, New York, USA
