Some Experiments in Evaluating ASR Systems Applied to Multimedia Retrieval
This paper describes a set of tests performed on different types of voice/audio input using three commercial speech recognition tools. Three multimedia retrieval scenarios are considered: a question answering system, automatic transcription of audio extracted from video files, and a real-time captioning system used in the classroom for deaf students. A software tool, RET (Recognition Evaluation Tool), has been developed to evaluate the output of the commercial ASR systems.
Keywords: Automatic Speech Recognition (ASR), evaluation measurements, audio transcription, voice interaction
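The kind of evaluation the abstract describes usually rests on word error rate (WER): the minimum number of word insertions, deletions, and substitutions needed to turn the recognizer's output into the reference transcript, divided by the reference length. The following is a minimal illustrative sketch of that computation, not the RET tool itself:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the cat sat on the mat", "the cat sit on mat")` gives 2/6 ≈ 0.33 (one substitution, one deletion). NIST's SCTK toolkit, cited below, performs this scoring with word alignment and per-error-type reporting.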
- 1. Altova, Altova DiffDog, http://www.altova.com/products/diffdog/diff_merge_tool.html (viewed July 2010)
- 2. DiffDoc, Softinterface Inc., http://www.softinterface.com/MD/Document-Comparison-Software.htm (viewed July 2010)
- 3. IBM ViaVoice, http://www-01.ibm.com/software/pervasive/embedded_viavoice (viewed July 2010)
- 4. IDM Computer Solutions, Inc., UltraCompare, http://www.ultraedit.com/loc/es/ultracompare_es.html (viewed June 2010)
- 5. Iglesias, A., Moreno, L., Revuelta, P., Jimenez, J.: APEINTA: a Spanish educational project aiming for inclusive education In and Out of classroom. In: 14th ACM-SIGCSE Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE 2009), Paris, July 3-8, vol. 8 (2009)
- 6. Media Mining Indexer, Sail Labs Technology, http://www.sail-technology.com/products/commercial-products/media-mining-indexer.html (viewed June 2010)
- 7. NIST, Speech recognition scoring toolkit (SCTK) version 1.2c (2000), http://www.nist.gov/speech/tools (viewed May 2010)
- 8. Nuance Dragon NaturallySpeaking, http://www.nuance.com/naturallyspeaking/products/whatsnew10-1.asp (viewed March 2010)
- 9. WinMerge, http://winmerge.org/ (viewed June 2009)