
Optimal Measurement of Visual Motion Across Spatial and Temporal Scales

Chapter in: Computer Vision in Control Systems-1

Part of the book series: Intelligent Systems Reference Library (ISRL, volume 73)


Abstract

Sensory systems use limited resources to mediate the perception of a great variety of objects and events. Here a normative framework is presented for exploring how the problem of efficient allocation of resources can be solved in visual perception. Starting with a basic property of every measurement, captured by Gabor’s uncertainty relation about the location and frequency content of signals, prescriptions are developed for optimal allocation of sensors for reliable perception of visual motion. This study reveals that a large-scale characteristic of human vision (the spatiotemporal contrast sensitivity function) is similar to the optimal prescription, and it suggests that some previously puzzling phenomena of visual sensitivity, adaptation, and perceptual organization have simple principled explanations.


Notes

  1. For brevity, here “frequency content” will sometimes be shortened to “content.”

  2. Different criteria of measurement and sensor shapes correspond to different magnitudes of \( C_{x} \).

  3. Here the sensors are characterized by intervals, following the standard notion that biological motion sensors are maximally activated when the stimulus travels some distance \( \varDelta s \) over some temporal interval \( \varDelta t \) [17].

References

  1. Marr D (1982) Vision: a computational investigation into the human representation and processing of visual information. W. H. Freeman, San Francisco
  2. Gabor D (1946) Theory of communication. J Inst Electr Eng Part III 93:429–457
  3. Marcelja S (1980) Mathematical description of the responses of simple cortical cells. J Opt Soc Am 70:1297–1300
  4. MacKay DM (1981) Strife over visual cortical function. Nature 289:117–118
  5. Daugman JG (1985) Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J Opt Soc Am A 2(7):1160–1169
  6. Glezer VD, Gauzel’man VE, Iakovlev VV (1986) Principle of uncertainty in vision. Neirofiziologiia [Neurophysiology] 18(3):307–312
  7. Field DJ (1987) Relations between the statistics of natural images and the response properties of cortical cells. J Opt Soc Am A 4:2379–2394
  8. Jones JP, Palmer LA (1987) An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. J Neurophysiol 58:1233–1258
  9. Simoncelli EP, Olshausen BA (2001) Natural image statistics and neural representation. Annu Rev Neurosci 24:1193–1216
  10. Saremi S, Sejnowski TJ, Sharpee TO (2013) Double-Gabor filters are independent components of small translation-invariant image patches. Neural Comput 25(4):922–939
  11. Gabor D (1952) Lectures on communication theory. Technical report 238, MIT Research Laboratory of Electronics, Cambridge, MA, USA
  12. Resnikoff HL (1989) The illusion of reality. Springer, New York
  13. MacLennan B (1994) Gabor representations of spatiotemporal visual images. Technical report, University of Tennessee, Knoxville, TN, USA
  14. Gepshtein S, Tyukin I (2006) Why do moving things look as they do? Vision (J Vis Soc Jpn) 18(Suppl):64
  15. von Neumann J (1928) Zur Theorie der Gesellschaftsspiele [On the theory of games of strategy]. Mathematische Annalen 100:295–320. English translation in [57]
  16. Luce RD, Raiffa H (1957) Games and decisions. John Wiley, New York
  17. Watson AB, Ahumada AJ (1985) Model of human visual-motion sensing. J Opt Soc Am A 2(2):322–341
  18. Gepshtein S (2010) Two psychologies of perception and the prospect of their synthesis. Philos Psychol 23:217–281
  19. Gepshtein S, Tyukin I, Kubovy M (2007) The economics of motion perception and invariants of visual sensitivity. J Vis 7(8):8, 1–18
  20. Gepshtein S, Kubovy M (2007) The lawful perception of apparent motion. J Vis 7(8):9, 1–15
  21. Gepshtein S, Tyukin I, Kubovy M (2011) A failure of the proximity principle in the perception of motion. Humana Mente 17:21–34
  22. Korte A (1915) Kinematoskopische Untersuchungen [Kinematoscopic investigations]. Zeitschrift für Psychologie 72:194–296
  23. Burt P, Sperling G (1981) Time, distance, and feature tradeoffs in visual apparent motion. Psychol Rev 88:171–195
  24. Wertheimer M (1923) Untersuchungen zur Lehre von der Gestalt, II [Investigations in Gestalt theory, II]. Psychologische Forschung 4:301–350
  25. Kubovy M, Holcombe AO, Wagemans J (1998) On the lawfulness of grouping by proximity. Cogn Psychol 35:71–98
  26. Koffka K (1935/1963) Principles of Gestalt psychology. Harcourt, Brace and World, New York
  27. Nakayama K (1985) Biological image motion processing: a review. Vis Res 25(5):625–660
  28. Weiss Y, Simoncelli EP, Adelson EH (2002) Motion illusions as optimal percepts. Nat Neurosci 5(6):598–604
  29. Longuet-Higgins HC, Prazdny K (1981) The interpretation of a moving retinal image. Proc R Soc Lond B Biol Sci 208(1173):385–397
  30. Landy MS, Maloney LT, Johnston EB, Young M (1995) Measurement and modeling of depth cue combination: in defense of weak fusion. Vis Res 35:389–412
  31. Kelly DH (1979) Motion and vision. II. Stabilized spatio-temporal threshold surface. J Opt Soc Am 69(10):1340–1349
  32. Kelly DH (1994) Eye movements and contrast sensitivity. In: Kelly DH (ed) Visual science and engineering (models and applications). Marcel Dekker, New York, pp 93–114
  33. Gorban A, Pokidysheva L, Smirnova E, Tyukina T (2011) Law of the minimum paradoxes. Bull Math Biol 73(9):2013–2044
  34. van Doorn AJ, Koenderink JJ (1982) Temporal properties of the visual detectability of moving spatial white noise. Exp Brain Res 45:179–188
  35. van Doorn AJ, Koenderink JJ (1982) Spatial properties of the visual detectability of moving spatial white noise. Exp Brain Res 45:189–195
  36. Laddis P, Lesmes LA, Gepshtein S, Albright TD (2011) Efficient measurement of spatiotemporal contrast sensitivity in human and monkey. In: 41st annual meeting of the Society for Neuroscience, program no. 577.20
  37. Gepshtein S, Lesmes LA, Albright TD (2013) Sensory adaptation as optimal resource allocation. Proc Natl Acad Sci USA 110(11):4368–4373
  38. Lesmes LA, Gepshtein S, Lu Z-L, Albright TD (2009) Rapid estimation of the spatiotemporal contrast sensitivity surface. J Vis 9(8):696
  39. Sakitt B, Barlow HB (1982) A model for the economical encoding of the visual image in cerebral cortex. Biol Cybern 43:97–108
  40. Laughlin SB (1989) The role of sensory adaptation in the retina. J Exp Biol 146(1):39–62
  41. Wainwright MJ (1999) Visual adaptation as optimal information transmission. Vis Res 39:3960–3974
  42. Laughlin SB, Sejnowski TJ (2003) Communication in neuronal networks. Science 301(5641):1870–1874
  43. Clifford CWG, Wenderoth P (1999) Adaptation to temporal modulation can enhance differential speed sensitivity. Vis Res 39:4324–4332
  44. Krekelberg B, van Wezel RJA, Albright TD (2006) Adaptation in macaque MT reduces perceived speed and improves speed discrimination. J Neurophysiol 95:255–270
  45. Gepshtein S (2009) Closing the gap between ideal and real behavior: scientific versus engineering approaches to normativity. Philos Psychol 22:61–75
  46. Yeshurun Y, Carrasco M (1998) Attention improves or impairs visual performance by enhancing spatial resolution. Nature 396:72–75
  47. Yeshurun Y, Carrasco M (2000) The locus of attentional effects in texture segmentation. Nat Neurosci 3(6):622–627
  48. Jurica P, Gepshtein S, Tyukin I, Prokhorov D, van Leeuwen C (2007) Unsupervised adaptive optimization of motion-sensitive systems guided by measurement uncertainty. In: International conference on intelligent sensors, sensor networks and information (ISSNIP 2007), Melbourne, pp 179–184
  49. Jurica P, Gepshtein S, Tyukin I, van Leeuwen C (2013) Sensory optimization by stochastic tuning. Psychol Rev 120(4):798–816
  50. Hebb DO (1949) The organization of behavior. John Wiley, New York
  51. Bienenstock EL, Cooper LN, Munro PW (1982) Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci 2:32–48
  52. Paulsen O, Sejnowski TJ (2000) Natural patterns of activity and long-term synaptic plasticity. Curr Opin Neurobiol 10(2):172–180
  53. Bi G, Poo M (2001) Synaptic modification by correlated activity: Hebb’s postulate revisited. Annu Rev Neurosci 24:139–166
  54. Gardiner CW (1996) Handbook of stochastic methods: for physics, chemistry and the natural sciences. Springer, New York
  55. Vergassola M, Villermaux E, Shraiman BI (2007) ‘Infotaxis’ as a strategy for searching without gradients. Nature 445:406–409
  56. Gepshtein S, Jurica P, Tyukin I, van Leeuwen C, Albright TD (2010) Optimal sensory adaptation without prior representation of the environment. In: 40th annual meeting of the Society for Neuroscience, program no. 731.7
  57. von Neumann J (1963) Theory of games, astrophysics, hydrodynamics and meteorology. In: Taub AH (ed) Collected works, vol VI. Pergamon Press, New York
  58. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423, 623–656
  59. Jaynes ET (1957) Information theory and statistical mechanics. Phys Rev 106:620–630
  60. Gorban A (2013) Maxallent: maximizers of all entropies and uncertainty of uncertainty. Comput Math Appl 65(10):1438–1456
  61. Cover TM, Thomas JA (2006) Elements of information theory. John Wiley, New York


Acknowledgments

This work was supported by the European Regional Development Fund, National Institutes of Health Grant EY018613, and Office of Naval Research Multidisciplinary University Initiative Grant N00014-10-1-0072.

Author information

Correspondence to Sergei Gepshtein.

Appendices

7.1.1 Appendix 1. Additivity of Uncertainty

For simplicity, the following derivations concern stimuli that can be modeled by integrable functions \( I:{\mathbb{R}} \to {\mathbb{R}} \) of one variable \( x \). Generalizations to functions of more than one variable are straightforward. Consider two quantities:

  • Stimulus location on \( x \), where \( x \) can be space or time, and the “location” indicates respectively “where” or “when” the stimulus has occurred.

  • Stimulus content on \( f_{x} \), where \( f_{x} \) can be spatial or temporal frequency of stimulus modulation.

Suppose a sensory system is equipped with many measuring devices (“sensors”), each used to estimate both stimulus location and frequency content from “image” (or “input”) \( I(x) \). Assume that the outcome of measurement is a random variable with probability density function \( p(x,f) \). Let

$$ \begin{array}{*{20}l} {p_{x} (x)} \hfill & { = \int p(x,f)df,} \hfill \\ {p_{f} (f)} \hfill & { = \int p(x,f)dx} \hfill \\ \end{array} $$
(7.12)

be the marginal probability densities of \( p(x,f) \) on dimensions \( x \) and \( f_{x} \) (abbreviated as \( f \)).

It is sometimes assumed that sensory systems “know” \( p(x,f) \), which is not true in general. Typically, one can only know (or guess) some properties of \( p(x,f) \), such as its mean and variance. A conservative strategy reduces the chance of gross error under this incomplete information: find the minimum of the function of maximal uncertainty, i.e., use a minimax approach [15, 16].

The minimax approach is implemented in two steps. The first step is to find the \( p_{x} (x) \) and \( p_{f} (f) \) for which measurement uncertainty is maximal. (Uncertainty is characterized conservatively, in terms of variance alone [2].) The second step is to find the condition(s) at which the function of maximal uncertainty takes its smallest value: the minimax point(s).

Maximal uncertainty is evaluated using the well-established definition of entropy [58] (cf. [59, 60]):

$$ H(X,F) = - \int p(x,f){\kern 1pt} \,\log \,p(x,f)\,dx{\kern 1pt} df. $$

According to the independence bound on entropy (Theorem 2.6.6 in [61]),

$$ H(X,F) \le H(X) + H(F) = H^{ * } (X,F), $$
(7.13)

where

$$ \begin{array}{*{20}l} {H(X) = } \hfill & { - \int p_{x} (x)\,\log p_{x} (x)\,dx,} \hfill \\ {H(F) = } \hfill & { - \int p_{f} (f)\,\log p_{f} (f)\,df.} \hfill \\ \end{array} $$

Therefore, the uncertainty of measurement cannot exceed

$$ \begin{array}{*{20}l} {H^{ * } (X,F) = } \hfill & { - \int p_{x} (x)\,\log p_{x} (x)\,dx} \hfill \\ {} \hfill & { - \int p_{f} (f)\,\log p_{f} (f)\,df.} \hfill \\ \end{array} $$
(7.14)

Eq. 7.14 is the “envelope” of maximal measurement uncertainty: a “worst-case” estimate.
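The independence bound of Eq. 7.13 is easy to verify numerically. Below is a minimal sketch (an illustration, not part of the chapter) that discretizes a correlated bivariate Gaussian as a stand-in for \( p(x,f) \); the correlation value and the grid are arbitrary choices:

```python
import numpy as np

# Discretize a correlated bivariate Gaussian as a stand-in for p(x, f).
rho = 0.6                                  # arbitrary correlation (assumption)
x = np.linspace(-5.0, 5.0, 201)
f = np.linspace(-5.0, 5.0, 201)
X, F = np.meshgrid(x, f, indexing="ij")    # axis 0 is x, axis 1 is f
p = np.exp(-(X**2 - 2*rho*X*F + F**2) / (2*(1 - rho**2)))
p /= p.sum()                               # normalize to a discrete distribution

def entropy(q):
    """Discrete Shannon entropy, ignoring empty cells."""
    q = q[q > 0]
    return float(-np.sum(q * np.log(q)))

p_x = p.sum(axis=1)                        # marginal over f
p_f = p.sum(axis=0)                        # marginal over x

H_joint = entropy(p)
H_star = entropy(p_x) + entropy(p_f)       # the "envelope" H*(X, F) of Eq. 7.13
assert H_joint <= H_star                   # independence bound on entropy
```

The gap \( H^{*}(X,F) - H(X,F) \) is the mutual information between location and frequency content; it vanishes only when the two estimates are independent.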

By the Boltzmann theorem on maximum-entropy probability distributions [61], the maximal entropy of probability densities with fixed means and variances is attained when the densities are Gaussian:

$$ \begin{array}{*{20}l} {p_{x} (x)} \hfill & { = \frac{1}{{\sigma_{x} \sqrt {2\pi } }}e^{{ - x^{2} /2\sigma_{x}^{2} }} ,} \hfill \\ {p_{f} (f)} \hfill & { = \frac{1}{{\sigma_{f} \sqrt {2\pi } }}e^{{ - f^{2} /2\sigma_{f}^{2} }} ,} \hfill \\ \end{array} $$

where \( \sigma_{x} \) and \( \sigma_{f} \) are the standard deviations. Then maximal entropy is

$$ H = \sigma_{x}^{2} + \sigma_{f}^{2} . $$
(7.15)

That is, when \( p(x,f) \) is unknown, and all one knows about the marginal distributions \( p_{x} (x) \) and \( p_{f} (f) \) is their means and variances, the maximal uncertainty of measurement is the sum of the variances of the estimates of \( x \) and \( f \). The next minimax step is to find the conditions of measurement at which this sum of variances is smallest.
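That second step can be illustrated with a small numerical sketch (an illustration, not the chapter's derivation). Suppose Gabor's relation is modeled as a hard trade-off \( \sigma_{x} \sigma_{f} = C \) for some constant \( C \) (the value below is arbitrary); minimizing the maximal uncertainty \( \sigma_{x}^{2} + \sigma_{f}^{2} \) of Eq. 7.15 then yields \( \sigma_{x} = \sigma_{f} = \sqrt{C} \), by the arithmetic-geometric mean inequality:

```python
import numpy as np

C = 0.25                                   # illustrative uncertainty bound (assumption)
sigma_x = np.linspace(0.05, 5.0, 20000)    # scan over candidate sensor scales
sigma_f = C / sigma_x                      # hard Gabor-style trade-off: sigma_x * sigma_f = C

H_max = sigma_x**2 + sigma_f**2            # maximal uncertainty, Eq. 7.15
i = int(np.argmin(H_max))                  # the minimax point

# AM-GM: sigma_x^2 + sigma_f^2 >= 2*sigma_x*sigma_f = 2*C,
# with equality exactly at sigma_x = sigma_f = sqrt(C).
assert abs(sigma_x[i] - np.sqrt(C)) < 1e-2
assert abs(H_max[i] - 2*C) < 1e-6
```

In this toy setting the minimax prescription is a balanced allocation: neither location nor frequency content is measured at the expense of the other.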

7.1.2 Appendix 2. Improving Resolution by Multiple Sampling

How does an increased allocation of resources to a specific condition of measurement affect the (spatial or temporal) resolution at that condition? Consider set \( \varPsi \) of sampling functions

$$ \psi (s\sigma + \delta ),\;\sigma \in {\mathbb{R}},\;\sigma > 0,\;\delta \in {\mathbb{R}}, $$

where \( \sigma \) is a scaling parameter and \( \delta \) is a translation parameter. The following argument shows that, for a broad class of functions \( \psi ( \cdot ) \), any function from a sufficiently broad class that includes \( \psi (s\sigma + \delta ) \) can be represented by a weighted sum of translated replicas of \( \psi (s) \).

Let \( \psi^{ * } (s) \) be a continuous function that can be expressed as a sum of a converging series of harmonic functions:

$$ \psi^{ * } (s) = \sum\limits_{i} a_{i} \,\cos (\omega_{i} s) + b_{i} \,\sin (\omega_{i} s). $$

For example, Gaussian sampling functions of arbitrary widths can be expressed as a sum of \( \cos ( \cdot ) \) and \( \sin ( \cdot ) \). Let us show that, if \( |\psi (s)| \) is Riemann-integrable, i.e., if

$$ \int\limits_{ - \infty }^{\infty } {\left| {\psi (s)} \right|{\kern 1pt} ds} < \infty $$

and its Fourier transform \( \widehat{\psi } \) does not vanish for all \( \omega \in {\mathbb{R}} \): \( \widehat{\psi }(\omega ) \ne 0 \) (i.e., its spectrum has no “holes”), then the following expansion of \( \psi^{ * } \) is possible:

$$ \psi^{ * } (s) = \sum\limits_{i} c_{i} \psi (s + d_{i} ) + \varepsilon (s), $$
(7.16)

where \( \varepsilon (s) \) is a residual that can be arbitrarily small. This goal is attained by proving identities

$$ \begin{array}{*{20}l} {\cos (\omega_{0} s)} \hfill & { = \sum\limits_{i} c_{i,1} \psi (s + d_{i,1} ) + \varepsilon_{1} (s),} \hfill \\ {\sin (\omega_{0} s)} \hfill & { = \sum\limits_{i} c_{i,2} \psi (s + d_{i,2} ) + \varepsilon_{2} (s),} \hfill \\ \end{array} $$
(7.17)

where \( c_{i,1} \), \( c_{i,2} \) and \( d_{i,1} \), \( d_{i,2} \) are real numbers, while \( \varepsilon_{1} (s) \) and \( \varepsilon_{2} (s) \) are arbitrarily small residuals.

First, write the Fourier transform of \( \psi (s) \) as

$$ \widehat{\psi }(\omega ) = \int\limits_{ - \infty }^{\infty } {\psi (s)e^{ - i\omega s} ds} $$

and multiply both sides of the above expression by \( e^{{i\omega_{0} \upsilon }} \):

$$ e^{{i\omega_{0} \upsilon }} \widehat{\psi }(\omega ) = e^{{i\omega_{0} \upsilon }} \int\limits_{ - \infty }^{\infty } {\psi (s)e^{ - i\omega s} ds = } \int\limits_{ - \infty }^{\infty } {\psi (s)e^{{ - i(\omega s - \omega_{0} \upsilon )}} ds.} $$
(7.18)

Change the integration variable:

$$ x = \omega s - \omega_{0} \upsilon \Rightarrow dx = \omega ds,\;s = \frac{{x + \omega_{0} \upsilon }}{\omega }, $$

such that Eq. 7.18 transforms into

$$ e^{{i\omega_{0} \upsilon }} \widehat{\psi }(\omega ) = \frac{1}{\omega }\int\limits_{ - \infty }^{\infty } {\psi \left( {\frac{{x + \omega_{0} \upsilon }}{\omega }} \right)e^{ - ix} dx.} $$

Notice that \( \widehat{\psi }(\omega ) = a(\omega ) + ib(\omega ) \). Hence

$$ e^{{i\omega_{0} \upsilon }} \widehat{\psi }(\omega ) = e^{{i\omega_{0} \upsilon }} (a(\omega ) + ib(\omega )) = (\cos (\omega_{0} \upsilon ) + i\,{ \sin }(\omega_{0} \upsilon ))(a(\omega ) + ib(\omega )) $$

and

$$ \begin{array}{*{20}l} {e^{{i\omega_{0} \upsilon }} \widehat{\psi }(\omega )} \hfill & { = (\cos (\omega_{0} \upsilon )a(\omega ) - \sin (\omega_{0} \upsilon )b(\omega )) + i(\cos (\omega_{0} \upsilon )b(\omega ) + \sin (\omega_{0} \upsilon )a(\omega )).} \hfill \\ \end{array} $$

Since \( \widehat{\psi }(\omega ) \ne 0 \) is assumed for all \( \omega \in {\mathbb{R}} \), then \( a(\omega ) + ib(\omega ) \ne 0 \). In other words, either \( a(\omega ) \ne 0 \) or \( b(\omega ) \ne 0 \) should hold. For example, suppose that \( a(\omega ) \ne 0. \) Then

$$ Re\left( {e^{{i\omega_{0} \upsilon }} \widehat{\psi }(\omega )} \right) + \frac{b(\omega )}{a(\omega )}Im\left( {e^{{i\omega_{0} \upsilon }} \widehat{\psi }(\omega )} \right) = \cos (\omega_{0} \upsilon )\left( {\frac{{a^{2} (\omega ) + b^{2} (\omega )}}{a(\omega )}} \right). $$

Therefore,

$$ \begin{array}{*{20}l} {\cos (\omega_{0} \upsilon )} \hfill & { = \left( {\frac{a(\omega )}{{a^{2} (\omega ) + b^{2} (\omega )}}} \right)Re\left( {\frac{1}{\omega }\int\limits_{ - \infty }^{\infty } {\psi \left( {\frac{{x + \omega_{0} \upsilon }}{\omega }} \right)e^{ - ix} dx} } \right)} \hfill \\ {} \hfill & { + \left( {\frac{b(\omega )}{{a^{2} (\omega ) + b^{2} (\omega )}}} \right)Im\left( {\frac{1}{\omega }\int\limits_{ - \infty }^{\infty } {\psi \left( {\frac{{x + \omega_{0} \upsilon }}{\omega }} \right)e^{ - ix} dx} } \right).} \hfill \\ \end{array} $$
(7.19)

Because function \( \psi (s) \) is Riemann-integrable, the integrals in Eq. 7.19 can be approximated as

$$ Re\left( {\frac{1}{\omega }\int\limits_{ - \infty }^{\infty } {\psi \left( {\frac{{x + \omega_{0} \upsilon }}{\omega }} \right)e^{ - ix} dx} } \right) = \frac{\varDelta }{\omega }\sum\limits_{k = 1}^{N} \psi \left( {\frac{{x_{k} + \omega_{0} \upsilon }}{\omega }} \right)\cos (x_{k} ) + \frac{{\bar{\varepsilon }_{1} (\upsilon ,N)}}{\omega }, $$
(7.20)
$$ Im\left( {\frac{1}{\omega }\int\limits_{ - \infty }^{\infty } {\psi \left( {\frac{{x + \omega_{0} \upsilon }}{\omega }} \right)e^{ - ix} dx} } \right) = \frac{\varDelta }{\omega }\sum\limits_{p = 1}^{N} \psi \left( {\frac{{x_{p} + \omega_{0} \upsilon }}{\omega }} \right)\sin (x_{p} ) + \frac{{\bar{\varepsilon }_{2} (\upsilon ,N)}}{\omega }, $$
(7.21)

where \( x_{k} \) and \( x_{p} \) are some elements of \( {\mathbb{R}} \).

From Eqs. 7.19–7.21 it follows that

$$ \cos (\omega_{0} \upsilon ) = \sum\limits_{j = 1}^{2N} c_{j,1} \psi \left( {\frac{{\omega_{0} \upsilon }}{\omega } + d_{j,1} } \right) + \varepsilon_{1} (\upsilon ,N). $$

Given that \( \widehat{\psi }(\omega ) \ne 0 \) for all \( \omega \) and letting \( \omega = \omega_{0} \), it follows that

$$ \cos (\omega_{0} \upsilon ) = \sum\limits_{j = 1}^{2N} c_{j,1} \psi \left( {\upsilon + d_{j,1} } \right) + \varepsilon_{1} (\upsilon ,N), $$
(7.22)

where

$$ \varepsilon_{1} (\upsilon ,N) = \frac{{\bar{\varepsilon }_{1} (\upsilon ,N)}}{{\omega_{0} }}\frac{{a(\omega_{0} )}}{{a^{2} (\omega_{0} ) + b^{2} (\omega_{0} )}} + \frac{{\bar{\varepsilon }_{2} (\upsilon ,N)}}{{\omega_{0} }}\frac{{b(\omega_{0} )}}{{a^{2} (\omega_{0} ) + b^{2} (\omega_{0} )}}. $$
(7.23)

An analogue of Eq. 7.22 for \( \sin (\omega_{0} \upsilon ) \) follows from \( \sin (\omega_{0} \upsilon ) = \cos (\omega_{0} \upsilon + \pi /2) \). This completes the proof of Eq. 7.17 and hence of Eq. 7.16.
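The expansion of Eq. 7.22 can be checked numerically. The sketch below (an illustration, not part of the chapter) takes \( \psi(s) = e^{-s^{2}/2} \), for which \( \widehat{\psi}(\omega) = \sqrt{2\pi}\,e^{-\omega^{2}/2} > 0 \) everywhere and \( b(\omega) = 0 \) because \( \psi \) is even; the grid parameters are arbitrary:

```python
import numpy as np

def cos_from_shifted_psi(upsilon, omega0=1.0, delta=0.01, L=20.0):
    """Approximate cos(omega0 * upsilon) by a Riemann-sum combination of
    translated copies of psi(s) = exp(-s**2 / 2), following Eqs. 7.19-7.22.
    Since psi is even, b(omega) = 0 and psi_hat(omega) = sqrt(2*pi)*exp(-omega**2/2)."""
    a = np.sqrt(2.0 * np.pi) * np.exp(-omega0**2 / 2.0)   # a(omega_0)
    x_k = np.arange(-L, L, delta)                          # Riemann grid over x
    # psi((x_k + omega0*upsilon)/omega0) = psi(upsilon + x_k/omega0):
    # a translated replica of psi, shifted by d_k = x_k/omega0.
    psi = np.exp(-((x_k + omega0 * upsilon) / omega0)**2 / 2.0)
    return float((delta / omega0) * np.sum(psi * np.cos(x_k)) / a)

# The weighted sum of shifted replicas reproduces the harmonic to high accuracy.
assert abs(cos_from_shifted_psi(0.7) - np.cos(0.7)) < 1e-6
```

Each term \( \psi((x_{k} + \omega_{0}\upsilon)/\omega_{0}) = \psi(\upsilon + x_{k}/\omega_{0}) \) is a translated replica of the sampling function, and the weights \( (\varDelta/\omega_{0})\cos(x_{k})/a(\omega_{0}) \) play the role of the coefficients \( c_{j,1} \) in Eq. 7.22.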


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Gepshtein, S., Tyukin, I. (2015). Optimal Measurement of Visual Motion Across Spatial and Temporal Scales. In: Favorskaya, M., Jain, L. (eds) Computer Vision in Control Systems-1. Intelligent Systems Reference Library, vol 73. Springer, Cham. https://doi.org/10.1007/978-3-319-10653-3_7

  • DOI: https://doi.org/10.1007/978-3-319-10653-3_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-10652-6

  • Online ISBN: 978-3-319-10653-3

  • eBook Packages: Engineering (R0)
