Coded Aperture Flow

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8753)


Real cameras have a limited depth of field. The resulting defocus blur is a valuable cue for estimating the depth structure of a scene. Using coded apertures, depth can be estimated from a single frame. For optical flow estimation between frames, however, the depth-dependent degradation can introduce errors. These errors are most prominent when objects move relative to the focal plane of the camera. We incorporate coded aperture defocus blur into optical flow estimation and allow for piecewise smooth 3D motion of objects. With coded aperture flow, we can establish dense correspondences between pixels in succeeding coded aperture frames. We compare several approaches to compute accurate correspondences for coded aperture images showing objects with arbitrary 3D motion.
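The key property the abstract relies on is that defocus blur is depth-dependent: a point's blur kernel is a scaled copy of the aperture mask, and its size grows with the point's distance from the focal plane. A minimal sketch of this relationship, using a simplified thin-lens scaling law and a toy 3×3 coded mask (the function names, the pixel-unit aperture parameter, and the nearest-neighbour mask rescaling are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def blur_radius(depth, focal_depth, aperture_px=3.0):
    """Simplified thin-lens model: the blur-circle radius (in pixels)
    grows with the relative distance from the focal plane and vanishes
    exactly at the focal plane."""
    return aperture_px * abs(1.0 - focal_depth / depth)

def coded_kernel(mask, radius):
    """Scale a binary coded-aperture mask to the blur radius via
    nearest-neighbour resampling; normalize so brightness is preserved."""
    size = max(1, int(round(2 * radius)) | 1)  # force odd kernel size
    idx = np.linspace(0, mask.shape[0] - 1e-9, size).astype(int)
    k = mask[np.ix_(idx, idx)].astype(float)
    return k / k.sum()

# Toy coded aperture (a real design would be optimized, cf. the paper).
mask = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]])

r_near  = blur_radius(1.0, focal_depth=2.0)   # in front of focal plane
r_focal = blur_radius(2.0, focal_depth=2.0)   # on the focal plane -> 0
k_near  = coded_kernel(mask, r_near)
k_focal = coded_kernel(mask, r_focal)         # degenerates to identity
```

An object moving toward or away from the camera therefore changes its blur kernel between frames, which is exactly why plain brightness constancy fails and why the blur model must enter the flow data term.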


Keywords: Coded aperture · Optical flow



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Institut für Informatik und angewandte Mathematik, Universität Bern, Bern, Switzerland
