Background

Bernard Scott
Chapter
Part of the Machine Translation: Technologies and Applications book series (MATRA, volume 2)

Abstract

This Chapter describes the exceptional circumstances that brought the Logos Model MT system into existence in 1969, and details the difficulties that confronted this pioneering development effort. Chief among these difficulties was the lack of proven models to guide the design and development of a workable MT system, leading Logos developers to turn for inspiration to assumptions about the processes at work in human translation. The Logos Model is contrasted in broad terms with statistical translation models, with which it shares certain resemblances, and the eventual Logos Model translation process is then briefly described. The Chapter goes on to review the basic assumptions about human translation processes that shaped the Logos Model and that accounted for its early successes in the nascent MT world, and concludes with reflections on the nature and origin of language and grammar, all of which had a bearing on Logos Model design, development and performance. The advent of neural net MT is noted, and the promise of this new development is briefly characterized.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Bernard Scott
  1. Tarpon Springs, USA