
Pattern Recognition and Image Analysis, Volume 28, Issue 2, pp 261–272

A Fast Fourier based Feature Descriptor and a Cascade Nearest Neighbour Search with an Efficient Matching Pipeline for Mosaicing of Microscopy Images

  • A. Hast
  • V. A. Sablina
  • I.-M. Sintorn
  • G. Kylberg
Applied Problems

Abstract

Automatic mosaicing is an important image processing application, and we propose several improvements and simplifications to the image registration pipeline used in microscopy to automatically construct large images of whole specimen samples from a series of images. First, we propose a feature descriptor based on the amplitudes of a few elements of the Fourier transform, which is fast to compute and can be used in any image matching and registration application where scale and rotation invariance is not required. Second, we propose a cascade matching approach that considerably reduces the time for the nearest neighbour search, making it almost independent of the feature vector length. Moreover, several further improvements are proposed that speed up the whole matching process: faster interest point detection, a regular sampling strategy, and a deterministic false positive removal procedure that also finds the transformation. All steps of the improved pipeline are explained, and the results of comparative experiments are presented.
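To make the two central ideas concrete, the following fragment is a minimal Python sketch, not the authors' implementation: a patch descriptor built from the magnitudes of a few low-frequency Fourier coefficients, and a two-stage ("cascade") nearest neighbour search that ranks candidates on a short prefix of the descriptor before computing full-length distances for only the best few. The function names, the choice of coefficients, the normalisation, the prefix length, and the number of retained candidates are all illustrative assumptions, not details taken from the paper.

    import numpy as np

    def fourier_descriptor(patch, n_coeffs=16):
        """Describe a square grayscale patch by the magnitudes of a few
        low-frequency 2-D DFT coefficients; the magnitudes are insensitive
        to small shifts of the patch content, but not to scale or rotation."""
        spectrum = np.fft.fft2(patch.astype(float))
        mags = np.abs(spectrum)[:5, :5].ravel()  # small low-frequency block (25 values)
        mags = mags[1:n_coeffs + 1]              # drop the DC term, keep n_coeffs values
        norm = np.linalg.norm(mags)
        return mags / norm if norm > 0 else mags

    def cascade_nn(query, descriptors, prefix_len=4, keep=10):
        """Two-stage nearest neighbour search: rank all candidates by the
        distance on a short descriptor prefix, then compute the full-length
        distance only for the 'keep' best candidates."""
        d_prefix = np.linalg.norm(descriptors[:, :prefix_len] - query[:prefix_len], axis=1)
        candidates = np.argsort(d_prefix)[:keep]
        d_full = np.linalg.norm(descriptors[candidates] - query, axis=1)
        return candidates[np.argmin(d_full)]

    # Hypothetical usage: match one query patch against N reference patches.
    # refs = np.stack([fourier_descriptor(p) for p in reference_patches])
    # best_index = cascade_nn(fourier_descriptor(query_patch), refs)

Because the first stage touches only a few elements of each descriptor and the full-length distance is evaluated for a small, fixed number of survivors, the cost of such a search grows only weakly with the descriptor length, which is the effect the cascade approach aims for.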

Keywords

Image mosaicing, matching pipeline, feature descriptor, microscopy images, nearest neighbour search, Fourier transform, interest points



Copyright information

© Pleiades Publishing, Ltd. 2018

Authors and Affiliations

  • A. Hast (1)
  • V. A. Sablina (2)
  • I.-M. Sintorn (1, 3)
  • G. Kylberg (3)
  1. Department of Information Technology, Uppsala University, Uppsala, Sweden
  2. Department of Electronic Computers, Ryazan State Radio Engineering University, Ryazan, Russia
  3. Vironova AB, Stockholm, Sweden
