An Abstraction for Correspondence Search Using Task-Based Controls

  • Gregor Miller
  • Sidney Fels
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9009)


The correspondence problem (finding matching regions in images) is a fundamental task in computer vision. While the concept is simple, feature detectors and descriptors have grown more complex as they deliver more efficient and higher-quality correspondences. This complexity is a barrier to developers and system designers who wish to use correspondence techniques within their applications. We have designed a novel abstraction layer that uses a task-based description (covering the conditions of the problem) to let users communicate their requirements for the correspondence search. The description is built mainly on the idea of variances, which capture how sets of images vary in blur, intensity, angle, and so on. Our framework interprets the description and selects, from a set of algorithms, those that satisfy it. Our proof-of-concept implementation demonstrates the link between the description set by the user and the result returned. The abstraction is also at a high enough level to hide implementation and device details, allowing simple use of hardware acceleration.
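The selection mechanism described above can be sketched in a few lines. The sketch below is purely illustrative: the names (`VarianceLevel`, `select_algorithms`), the variance axes, and the per-algorithm tolerance values are assumptions for the sake of the example, not the actual OpenVL interface or measured capabilities.

```python
from enum import Enum

class VarianceLevel(Enum):
    """How strongly an image set varies along one axis of the description."""
    NONE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Each candidate algorithm advertises the maximum variance it tolerates
# along each axis (hypothetical values, for illustration only).
CAPABILITIES = {
    "SIFT": {"scale": VarianceLevel.HIGH, "rotation": VarianceLevel.HIGH,
             "blur": VarianceLevel.MEDIUM, "intensity": VarianceLevel.MEDIUM},
    "SURF": {"scale": VarianceLevel.HIGH, "rotation": VarianceLevel.HIGH,
             "blur": VarianceLevel.LOW, "intensity": VarianceLevel.MEDIUM},
    "ORB":  {"scale": VarianceLevel.MEDIUM, "rotation": VarianceLevel.HIGH,
             "blur": VarianceLevel.LOW, "intensity": VarianceLevel.LOW},
}

def select_algorithms(description):
    """Return the algorithms whose tolerances cover every requested variance."""
    return [name for name, caps in CAPABILITIES.items()
            if all(caps.get(axis, VarianceLevel.NONE).value >= level.value
                   for axis, level in description.items())]

# The user states how their image set varies; the framework picks algorithms.
task = {"scale": VarianceLevel.HIGH, "blur": VarianceLevel.MEDIUM}
print(select_algorithms(task))  # only SIFT satisfies both constraints
```

The point of the abstraction is that the user never names SIFT, SURF, or ORB: they state the conditions of the problem, and the framework maps those conditions onto whichever implementations (possibly hardware-accelerated) satisfy them.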


Keywords: Computer vision · Feature descriptor · Correspondence problem · Image width · Blur kernel



We would like to gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Graphics, Animation and New Media Network of Centres of Excellence (GRAND NCE).



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Human Communication Technologies Laboratory, University of British Columbia, Vancouver, Canada
