Intangible Cultural Heritage and New Technologies: Challenges and Opportunities for Cultural Preservation and Development

  • Marilena Alivizatou-Barakou
  • Alexandros Kitsikidis
  • Filareti Tsalakanidou
  • Kosmas Dimitropoulos
  • Giannis Chantas
  • Spiros Nikolopoulos
  • Samer Al Kork
  • Bruce Denby
  • Lise Buchman
  • Martine Adda-Decker
  • Claire Pillot-Loiseau
  • Joëlle Tilmanne
  • S. Dupont
  • Benjamin Picart
  • Francesca Pozzi
  • Michela Ott
  • Yilmaz Erdal
  • Vasileios Charisis
  • Stelios Hadjidimitriou
  • Leontios Hadjileontiadis
  • Marius Cotescu
  • Christina Volioti
  • Athanasios Manitsaris
  • Sotiris Manitsaris
  • Nikos Grammalidis (email author)
Chapter

Abstract

Intangible cultural heritage (ICH) is a relatively recent term coined to describe living cultural expressions and practices that communities recognise as distinct aspects of their identity. The safeguarding of ICH has become a topic of international concern, primarily through the work of the United Nations Educational, Scientific and Cultural Organization (UNESCO). However, little research has been done on the role of new technologies in the preservation and transmission of intangible heritage. This chapter examines resources, projects and technologies that provide access to ICH and identifies gaps and constraints. It draws on research conducted within the scope of the collaborative research project i-Treasures. In doing so, it surveys the state of the art in technologies that could be employed for the access, capture and analysis of ICH, in order to highlight how specific new technologies can contribute to its transmission and safeguarding.

Keywords

Intangible cultural heritage · ICT · Safeguarding · Transmission · Semantic analysis · 3D visualisation · Game-like educational applications


Notes

Acknowledgements

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7-ICT-2011-9) under grant agreement no FP7-ICT-600676 ‘i-Treasures: Intangible Treasures—Capturing the Intangible Cultural Heritage and Learning the Rare Know-How of Living Human Treasures’.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Marilena Alivizatou-Barakou (1)
  • Alexandros Kitsikidis (2)
  • Filareti Tsalakanidou (2)
  • Kosmas Dimitropoulos (2)
  • Giannis Chantas (2)
  • Spiros Nikolopoulos (2)
  • Samer Al Kork (2)
  • Bruce Denby (2)
  • Lise Buchman (2)
  • Martine Adda-Decker (2)
  • Claire Pillot-Loiseau (2)
  • Joëlle Tilmanne (2)
  • S. Dupont (2)
  • Benjamin Picart (2)
  • Francesca Pozzi (2)
  • Michela Ott (2)
  • Yilmaz Erdal (2)
  • Vasileios Charisis (2)
  • Stelios Hadjidimitriou (2)
  • Leontios Hadjileontiadis (2)
  • Marius Cotescu (2)
  • Christina Volioti (2)
  • Athanasios Manitsaris (2)
  • Sotiris Manitsaris (2)
  • Nikos Grammalidis (2) (email author)

  1. UCL Institute of Archaeology, London, UK
  2. Information Technologies Institute, CERTH, Thessaloniki, Greece
