Accurate Extraction of Reciprocal Space Information from Transmission Electron Microscopy Images
As the study of complex systems has become dominant in physics, the link between computational and physical science has become ever more important. In particular, with the rising popularity of imaging techniques in physics, the development and application of cutting-edge computer vision techniques has become vital. Here we present novel image analysis methods which can be used to extract the positions of features in diffraction patterns (reciprocal space) with unprecedented accuracy.
The first contribution is a method for calculating the nonlinear response of photographic film from the noise in the image, enabling the extraction of accurate intensity information. This allows high-resolution (but nonlinear) film to be used in place of low-resolution (but linear) CCD cameras. The second contribution is a method for accurately localising very faint features in diffraction patterns by modelling the features and fitting them with the expectation-maximization algorithm applied directly to the image. The accuracy of this technique has been verified on synthetic data.
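The noise-based response estimation can be pictured through the mean–variance relation of a detector: for an ideal linear sensor dominated by shot noise, the local variance grows linearly with the local mean, so curvature in the measured relation reveals a nonlinear response. The following is an illustrative sketch of estimating that relation from a single image by tiling it into patches; the function name, patch size, and estimator are our assumptions, not the paper's actual method:

```python
import numpy as np

def mean_variance_curve(image, patch=8):
    """Estimate the local mean-variance relation of a detector by
    tiling a (roughly uniform within each tile) image into patches.
    Illustrative sketch only; the paper's estimator may differ."""
    h, w = image.shape
    means, variances = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].astype(float)
            means.append(p.mean())
            variances.append(p.var(ddof=1))
    means = np.asarray(means)
    variances = np.asarray(variances)
    order = np.argsort(means)
    return means[order], variances[order]
```

For a shot-noise-limited linear detector the returned points lie near the line variance = gain x mean; systematic departure from a straight line would indicate a nonlinear response to be inverted.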
These methods have been applied to transmission electron microscopy data, where they have already enabled discoveries that would have been impossible with previously available techniques.
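The spot-localisation idea described above can be sketched as fitting a generative model by expectation-maximization directly on pixel intensities: each pixel's intensity is treated as a count of events drawn from a mixture of a spot component and a uniform background. A minimal sketch, assuming a single isotropic 2D Gaussian spot (the function name, initialisation, and single-spot restriction are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def em_fit_spot(image, n_iter=50):
    """Fit one isotropic 2D Gaussian spot plus a uniform background to
    an image by EM, treating pixel intensities as event counts.
    Returns (centre [x, y], variance, fraction of intensity in spot)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    weights = image.ravel().astype(float)

    # Initial guesses (illustrative assumptions)
    mu = np.array([w / 2.0, h / 2.0])
    var = (min(h, w) / 4.0) ** 2
    frac = 0.5                      # fraction of intensity in the spot
    uniform = 1.0 / (h * w)         # background density over the image

    for _ in range(n_iter):
        # E-step: responsibility of the spot component for each pixel
        d2 = ((pts - mu) ** 2).sum(axis=1)
        g = np.exp(-0.5 * d2 / var) / (2.0 * np.pi * var)
        num = frac * g
        resp = num / (num + (1.0 - frac) * uniform)

        # M-step: intensity-weighted parameter updates
        rw = resp * weights
        total = rw.sum()
        mu = (pts * rw[:, None]).sum(axis=0) / total
        var = (rw * ((pts - mu) ** 2).sum(axis=1)).sum() / (2.0 * total)
        frac = total / weights.sum()
    return mu, var, frac
```

Because the background is modelled explicitly, the fitted centre is not biased toward bright surroundings the way a plain centre-of-mass estimate would be, which is what allows sub-pixel localisation of very faint spots.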
Keywords: Noise Intensity · Shot Noise · Parent Lattice · Spot Intensity · Transmission Electron Microscopy Data