Abstract
The vision processing chain is usually divided into three consecutive steps (González and Woods 2002): (i) low-level tasks, where both inputs and outputs are images; (ii) medium-level tasks, where inputs are images but outputs are attributes extracted from those inputs; and (iii) high-level tasks, which perform the cognitive functions associated with vision from the results of the low- and medium-level tasks. The most remarkable features of low-level tasks are, as we saw in Chap. 2, their regularity and potential parallelism: all pixels are processed identically, and the result of the processing over each pixel is usually independent of the results of the computations over the rest.
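The distinction between the first two levels can be sketched in a few lines of code. This is an illustrative example only, not taken from the chapter: the image, threshold value, and function names are hypothetical. Thresholding stands in for a low-level task (image in, image out, each pixel independent and hence fully parallelizable), and computing the fraction of bright pixels stands in for a medium-level task (image in, attribute out).

```python
# Hypothetical 4x4 grayscale image (pixel values 0-255).
image = [
    [10, 200, 30, 220],
    [50, 60, 240, 80],
    [90, 100, 110, 250],
    [130, 140, 150, 160],
]

def threshold(img, t=128):
    # Low-level task: input and output are both images.
    # Each output pixel depends only on the corresponding input
    # pixel, so every pixel could be processed in parallel --
    # the regularity and parallelism noted in the abstract.
    return [[255 if p > t else 0 for p in row] for row in img]

def bright_fraction(img, t=128):
    # Medium-level task: input is an image, output is an
    # attribute (a single scalar) extracted from it.
    pixels = [p for row in img for p in row]
    return sum(p > t for p in pixels) / len(pixels)

binary = threshold(image)
ratio = bright_fraction(image)
```

A focal-plane array such as the one this chapter describes exploits exactly this per-pixel independence by computing at the sensor itself rather than in a downstream processor.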
Copyright information
© 2012 Springer Science+Business Media New York
Cite this chapter
Fernández-Berni, J., Carmona-Galán, R., Rodríguez-Vázquez, Á. (2012). FLIP-Q: A QCIF Resolution Focal-Plane Array for Low-Power Image Processing. In: Low-Power Smart Imagers for Vision-Enabled Sensor Networks. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-2392-8_5
DOI: https://doi.org/10.1007/978-1-4614-2392-8_5
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-2391-1
Online ISBN: 978-1-4614-2392-8
eBook Packages: Engineering, Engineering (R0)