Universal Access in the Information Society, Volume 4, Issue 3, pp 223–236

Symbol design: a user-centered method to design pen-based interfaces and extend the functionality of pointer input devices

  • Margrit Betke
  • Oleg Gusyatin
  • Mikhail Urinson


A method called “SymbolDesign” is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices, such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device, and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily designed and easily used computer input mechanism for people without physical limitations, and, with some modifications, has the potential to become a computer access tool for people with severe paralysis.
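The recognition pipeline summarized above (a single-stroke trace mapped to a command by a classifier) can be sketched in broad strokes. The preprocessing below, arc-length resampling to a fixed number of points followed by translation and scale normalization, is a standard technique in stroke recognition and is an assumption here, since the abstract does not specify the article's exact feature extraction; the function names are illustrative only:

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) samples) to n points evenly
    spaced along its arc length, so every stroke yields a feature
    vector of the same dimensionality for the classifier."""
    # Cumulative arc length at each input sample.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    if total == 0:
        return [points[0]] * n
    out = []
    step = total / (n - 1)
    j = 0
    for i in range(n):
        target = i * step
        # Advance to the input segment containing the target arc length.
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j]
        t = 0.0 if seg == 0 else (target - dists[j]) / seg
        # Linear interpolation within that segment.
        x = points[j][0] + t * (points[j + 1][0] - points[j][0])
        y = points[j][1] + t * (points[j + 1][1] - points[j][1])
        out.append((x, y))
    return out

def normalize(points):
    """Translate the stroke's centroid to the origin and scale so the
    largest coordinate magnitude is 1, making the features invariant
    to where and how large the symbol was drawn."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    shifted = [(x - cx, y - cy) for x, y in points]
    span = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / span, y / span) for x, y in shifted]

# Example: a pointer trace of arbitrary length and scale becomes a
# fixed-length, position- and scale-invariant feature vector that a
# classifier (e.g., a neural network) can map to a command.
stroke = [(0, 0), (3, 1), (10, 4), (20, 8)]
features = normalize(resample(stroke, n=8))
```

A dynamically sized classifier, as described in the abstract, would then be trained on such feature vectors, growing its architecture until the user's chosen symbol set is separated reliably.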


Keywords: Universal access · Assistive technology · Universal interfaces · User interfaces · Camera interfaces · Pen-based interfaces · Video-based human-computer interfaces · Dynamic neural networks



We wish to thank John J. Magee, Rick Hoydt, Robyn Pancholi, James Gips, the students of the Boston University Video Game Creators Consortium, and the anonymous reviewers for their assistance. The work was supported by the National Science Foundation with grants IIS-0093367, IIS-0308213, IIS-0329009, and EIA-0202067.



Copyright information

© Springer-Verlag 2005

Authors and Affiliations

  1. Department of Computer Science, Boston University, Boston, USA
