Applications

  • Erik Cambria
  • Amir Hussain

Chapter

Part of the SpringerBriefs in Cognitive Computation book series (BRIEFSCC, volume 2)

Abstract

The amount of data available on the Web is growing exponentially. These data, however, are mainly in an unstructured format and, hence, neither machine-processable nor machine-interpretable. What is called collective intelligence today is actually just collected intelligence: the value of user contributions lies simply in their being gathered together and aggregated into community- or domain-specific sites. True collective intelligence can emerge only if the data collected from all those people are aggregated and recombined to create new knowledge and new ways of learning that individual humans cannot achieve by themselves.

Keywords

Affective Valence · Affective Information · Natural Language Text · Personal Picture · Minimum Volume Ellipsoid
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© The Author(s) 2012

Authors and Affiliations

  1. Media Laboratory, Massachusetts Institute of Technology, Cambridge, USA
  2. Department of Computing Science, University of Stirling, Stirling, UK