The role of respiration audio in multimodal analysis of movement qualities. Vincenzo Lussu, Radoslaw Niewiadomski, Antonio Camurri. Original Paper, Open Access, 11 April 2019, Pages 1–15
Multi-modal facial expression feature based on deep-neural networks. Wei Wei, Qingxuan Jia, Ming Chu. Original Paper, 17 July 2019, Pages 17–23
Gaze-based interactions in the cockpit of the future: a survey. David Rudi, Peter Kiefer, Martin Raubal. Original Paper, 19 July 2019, Pages 25–48
Elderly users’ acceptance of mHealth user interface (UI) design-based culture: the moderator role of age. Ahmed Alsswey, Hosam Al-Samarraie. Original Paper, 20 July 2019, Pages 49–59
Are older people any different from younger people in the way they want to interact with robots? Scenario based survey. Mriganka Biswas, Marta Romeo, Ray B. Jones. Original Paper, 24 July 2019, Pages 61–72
Analysis of conversational listening skills toward agent-based social skills training. Hiroki Tanaka, Hidemi Iwasaka, Satoshi Nakamura. Original Paper, 16 October 2019, Pages 73–82
Comparison of spatial and temporal interaction techniques for 3D audio trajectory authoring. Justin D. Mathew, Stéphane Huot, Brian F. G. Katz. Original Paper, 20 November 2019, Pages 83–100
Interactive gaze and finger controlled HUD for cars. Gowdham Prabhakar, Aparna Ramakrishnan, Pradipta Biswas. Original Paper, 23 November 2019, Pages 101–121
A comparative assessment of Wi-Fi and acoustic signal-based HCI methods on the practicality. Hayoung Jeong, Taeho Kang, Jong Kim. Survey, 20 November 2019, Pages 123–137