Fully-Pipelining Hardware Implementation of Neural Network for Text-Based Images Retrieval

  • Dongwuk Kyoung
  • Keechul Jung
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3973)


Many hardware implementations cannot execute software MLP applications whose weights are floating-point data, because hardware designs of MLPs usually adopt fixed-point arithmetic for high speed and small area. Fixed-point designs, however, have two important drawbacks: low accuracy and low flexibility. We therefore propose a fully-pipelined MLP architecture using floating-point arithmetic to solve both problems. Our design method can further improve processing speed by optimizing the number of hidden nodes handled in each iteration of the repeated processing. We apply the architecture to a software MLP-based text detector in which the network is evaluated 1,722,120 times to detect text in a 1152×1546 image. Our preliminary result shows that the fully-pipelined architecture is about eleven times faster than the software application.
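For context, below is a minimal software reference model in C of one MLP evaluation of the kind the abstract describes: a single hidden layer, floating-point (IEEE 754 single-precision) weights, and sigmoid activations. The struct name, field names, and memory layout are illustrative assumptions, not taken from the paper; in the fully-pipelined hardware design, the two layer loops would become pipeline stages rather than run sequentially as here.

```c
#include <math.h>
#include <stddef.h>

/* Illustrative software reference model of one MLP evaluation,
 * assuming one hidden layer and float weights. Names and layout
 * are assumptions for this sketch, not taken from the paper. */
typedef struct {
    size_t n_in, n_hid, n_out;
    const float *w_hid;  /* n_hid x n_in hidden-layer weights, row-major */
    const float *b_hid;  /* n_hid hidden-layer biases */
    const float *w_out;  /* n_out x n_hid output-layer weights, row-major */
    const float *b_out;  /* n_out output-layer biases */
} Mlp;

static float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); }

/* Forward pass: in a fully-pipelined implementation the two layer
 * loops below map onto hardware pipeline stages; here they run
 * sequentially as a plain software model. */
void mlp_forward(const Mlp *m, const float *in, float *hid, float *out)
{
    for (size_t j = 0; j < m->n_hid; j++) {
        float acc = m->b_hid[j];
        for (size_t i = 0; i < m->n_in; i++)
            acc += m->w_hid[j * m->n_in + i] * in[i];
        hid[j] = sigmoid(acc);
    }
    for (size_t k = 0; k < m->n_out; k++) {
        float acc = m->b_out[k];
        for (size_t j = 0; j < m->n_hid; j++)
            acc += m->w_out[k * m->n_hid + j] * hid[j];
        out[k] = sigmoid(acc);
    }
}
```

As a plausibility check on the quoted evaluation count: if one assumes a 23×23 sliding window moved one pixel at a time (an assumption; the excerpt does not state the window size), a 1152×1546 image yields (1152−23+1)×(1546−23+1) = 1130×1524 = 1,722,120 window positions, matching the reported number of MLP evaluations.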


Keywords: Hidden Layer, Output Layer, Hidden Node, Sigmoid Function, Hardware Implementation




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Dongwuk Kyoung ¹
  • Keechul Jung ¹

  1. HCI Lab., College of Information Science, Soongsil University, Seoul, South Korea
