Like at First Sight: Understanding User Engagement with the World of Microvideos

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10539)

Abstract

Several content-driven platforms have adopted the ‘micro video’ format, a new form of short video constrained in duration, typically at most 5–10 seconds long. Micro videos are usually viewed through mobile apps and are presented to viewers as a long list of videos that can be scrolled through. How should micro video creators capture viewers’ attention within such a short span? Does the quality of the content matter? Or do social effects predominate, giving content from users with large numbers of followers a greater chance of becoming popular? To the extent that quality matters, which aspect of the video – aesthetics or affect – is critical to ensuring user engagement?

We examine these questions using a snapshot of nearly all (over 120,000) videos uploaded to globally accessible channels on the micro video platform Vine over an 8 week period. We find that although social factors do affect engagement, content quality becomes equally important at the top end of the engagement scale. Furthermore, using the temporal aspects of video, we verify that viewing decisions are made quickly and that first impressions matter: the first seconds of a video are typically of higher quality than the rest and have a large effect on overall user engagement. We verify these data-driven insights with a user study of 115 respondents, confirming that users tend to engage with micro videos based on “first sight”, and that users see this format as a more immediate and less professional medium than traditional user-generated video (e.g., YouTube) or user-generated images (e.g., Flickr).
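To make the temporal analysis concrete, the following is a minimal sketch, not the paper's actual pipeline, of how one might compare frame quality in the first second of a clip against the remainder. It assumes OpenCV is available, uses Laplacian variance as a crude stand-in for the paper's quality features, and the file name is hypothetical.

```python
# Illustrative sketch only: compares a simple per-frame quality proxy
# (variance of the Laplacian, a standard sharpness measure) over the
# first second of a micro video against the rest of the clip.
import cv2
import numpy as np


def frame_sharpness(frame):
    """Variance of the Laplacian as a crude focus/quality proxy."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def first_second_vs_rest(path):
    """Return (mean sharpness in first second, mean sharpness afterwards)."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(frame_sharpness(frame))
    cap.release()

    cut = int(round(fps))  # number of frames in the first second
    first, rest = scores[:cut], scores[cut:]
    first_mean = float(np.mean(first)) if first else float("nan")
    rest_mean = float(np.mean(rest)) if rest else float("nan")
    return first_mean, rest_mean


if __name__ == "__main__":
    # "example_vine_clip.mp4" is a placeholder; any short clip would do.
    first, rest = first_second_vs_rest("example_vine_clip.mp4")
    print(f"first-second sharpness: {first:.1f}, rest of clip: {rest:.1f}")
```

Repeating this over a large collection of clips and aggregating the two means would give a simple, reproducible check of whether opening frames tend to be sharper than later ones.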


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. King's College London, London, UK
  2. Bell Labs, Cambridge, UK
