Universal Translators: Now You’re Speaking My Language

Abstract

A Star Trek universal translator converts spoken language into the language of the user. Current technologies can recognize written or spoken language and can convert speech to text or text to speech. Using neural networks and machine learning, these inputs can be interpreted and translated into other known languages. A true universal translator, however, must learn and translate a language it has never encountered, which requires that all languages share common characteristics that can be identified and understood. Linguists are currently using computer programs, cultural studies, and even electroencephalograms to determine just how universal languages actually are. If such commonalities can be found, then technology might well be able to produce a true universal translator.
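One way computer programs probe such cross-language commonalities is by comparing simple statistical properties of symbol sequences, such as their entropy. The sketch below is only an illustration of that idea (not any published method): it computes the unigram entropy of a character sequence, a quantity that tends to fall in a characteristic range for genuine linguistic text.

```python
import math
from collections import Counter

def unigram_entropy(text):
    """Shannon entropy (bits per symbol) of the character distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Two equally likely symbols carry exactly one bit per symbol.
print(unigram_entropy("aaaaabbbbb"))  # → 1.0

# A richer symbol inventory, as in natural-language text, yields higher entropy.
print(round(unigram_entropy("the quick brown fox jumps over the lazy dog"), 2))
```

Real studies of this kind work with far larger corpora and higher-order statistics, but the principle is the same: if an unknown symbol system shows the statistical signature shared by known languages, that is evidence it encodes language.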

Keywords

Speech Recognition · Augmented Reality · Word Order · Optical Character Recognition · Speaker Recognition


Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

Indianapolis, USA
