
“Hands Free”: Adapting the Task–Technology-Fit Model and Smart Data to Validate End-User Acceptance of the Voice Activated Medical Tracking Application (VAMTA) in the United States Military

  • James A. Rodger
  • James A. George
Chapter

Abstract

Our extensive work on validating user acceptance of the Voice Activated Medical Tracking Application (VAMTA) in the military medical environment was conducted in two phases. First, we developed a valid instrument for obtaining user evaluations of VAMTA by conducting a pilot study (2004) of the voice-activated application with medical end-users aboard U.S. Navy ships, using this phase to establish face validity. Second, we conducted an in-depth study (2009) to measure the adaptation of users to a voice-activated medical tracking system in preventive healthcare in the U.S. Navy. In the latter, we adapted a task–technology fit (TTF) model (drawn from a smart data strategy) to VAMTA, demonstrating that the perceptions of end-users can be measured and that an evaluation of the system from a conceptual viewpoint can be adequately documented. We report on both the pilot and the in-depth study in this chapter.
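Instrument validation of the kind described above is conventionally accompanied by an internal-consistency reliability check such as Cronbach's alpha. The following is only an illustrative sketch of that standard computation, not the chapter's actual analysis code; the function name and the sample data are hypothetical.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a survey scale.

    items: one list per survey item, each holding that item's
    responses across all respondents (equal lengths).
    """
    k = len(items)
    # Per-respondent total scale scores.
    totals = [sum(resp) for resp in zip(*items)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    sum_item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical 5-point Likert responses for a three-item scale:
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Perfectly correlated items, as in the toy data here, yield an alpha of 1.0; real survey scales are usually judged acceptable above roughly 0.7.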

The survey results from the in-depth study were analyzed with the Statistical Package for the Social Sciences (SPSS) to determine whether TTF, along with individual characteristics, has an impact on user evaluations of VAMTA. In conducting this in-depth study, we modified the original TTF model to allow adequate domain coverage of patient-care applications.
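The SPSS analysis itself is not reproduced here. As a minimal sketch of the kind of model such an analysis fits, assuming a single aggregate TTF score per respondent (a simplification of the multi-construct model), an ordinary-least-squares line relating TTF scores to user evaluations can be computed directly:

```python
def ols_fit(x, y):
    """Simple one-predictor OLS: returns (slope, intercept) of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # slope = covariance(x, y) / variance(x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: aggregate TTF scores and user-evaluation scores.
slope, intercept = ols_fit([1, 2, 3, 4], [2, 4, 6, 8])
```

A positive, significant slope in such a regression is what would indicate that better task–technology fit is associated with more favorable user evaluations; the actual study used SPSS and a richer set of predictors.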

This study provides the underpinnings for a subsequent, higher-level study of nationwide medical personnel. Follow-on studies will investigate performance and user perceptions of VAMTA under actual medical field conditions.

Keywords

Voice-activated medical tracking system · Task–technology fit (TTF) model · Smart data strategy · Medical encounter · Military medical environment · Shipboard environmental survey


Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Department of Management Information Systems and Decision Sciences, Eberly College of Business & Information Technology, Indiana University of Pennsylvania, Indiana, USA
