Comparison of Feature Reduction Methods in the Text Recognition Task
In this paper a two-level model of handwritten word recognition is considered. On the first level, consecutive letters are recognized by the same classifier using preprocessed data from the optical device, while on the second level we try to recognize the whole word. From another point of view, the first level can be treated as a feature reduction level. Accordingly, two different methods of feature reduction for a handwritten word recognition algorithm are described. On the lower level, different methods well known in the literature are taken into account (for example, the multilayer perceptron and the k-NN algorithm). The classification results from the first level serve as features for the second level, and two cases are considered. The first consists in taking into account only the crisp classification result from the first level, while in the second approach we take into account the support vector of the decision at this level. On the second level, in order to improve word recognition accuracy, a Probabilistic Character-Level Language Model was applied for both methods. In this model, the assumption of first-order Markov dependence in the sequence of characters was made. Moreover, we comment on the possibility of using the Markov model in the forward and backward directions. For both methods of feature reduction, the appropriate word recognition algorithms are presented. In order to find the best solution, the Viterbi algorithm is used. A number of experiments were carried out to test the properties of the proposed methods of feature reduction. The experimental results are presented and discussed at the end of the paper.
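The second-level decoding described above can be sketched as Viterbi search over a first-order (bigram) character Markov model, where the per-position observation scores come from the first-level classifier's support vectors. The sketch below is a minimal illustration, not the authors' implementation; the function name `viterbi_word` and the array layout are assumptions. In the crisp-result variant, the support matrix would simply be one-hot.

```python
import numpy as np

def viterbi_word(support, log_trans, log_init):
    """Decode the most probable character sequence for one word.

    support:   (T, K) array; support[t, k] is the first-level classifier's
               support for character class k at letter position t.
    log_trans: (K, K) array of log bigram probabilities log P(c_t | c_{t-1}).
    log_init:  (K,) array of log initial-character probabilities.
    Returns a list of T character-class indices.
    """
    T, K = support.shape
    log_obs = np.log(support + 1e-12)        # guard against log(0)
    delta = log_init + log_obs[0]            # best log-score ending in each class
    psi = np.zeros((T, K), dtype=int)        # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # scores[prev, curr]
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    # backtrack from the best final class
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

With uniform transition and initial probabilities the decoder reduces to a per-position argmax over the support vectors; a non-uniform bigram model is what lets the language model override weak or ambiguous letter classifications. Running the model in the backward direction, as the paper mentions, amounts to reversing the character sequence and using the reversed bigram statistics.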