Point Processes

Chapter in Analysis of Neural Data

Part of the book series: Springer Series in Statistics (SSS)

Abstract

At the beginning of this book, in Example 1.1 (p. 3), we described the activity of a neuron recorded from the supplementary eye field. Interpreting Fig. 1.1, we said that, toward the end of each trial, the neuron fired more rapidly under one experimental condition than under the other.


Notes

  1. Description of this phenomenon began with the work of Edgar Adrian and Keffer Hartline and their colleagues (e.g., Adrian and Zotterman 1926; Hartline and Graham 1932).

  2. The small deviation of the curve from the diagonal in the lower left-hand corner of the P–P plot is probably due to inaccuracy of measurement for very short inter-event intervals.

  3. A more explicit notation would be \(f_{S_{1},\ldots ,S_{N(T)},N(T)}(S_1=s_{1},\ldots ,S_{N(T)}=s_{n},N(T)=n)\) (see p. 577), which makes explicit the randomness due to \(N(T)\).

  4. The limit of the sum over \(S^c\) is the same as the limit of the sum over \(S \cup S^c\) because \(S\) has \(n\) elements for all sufficiently small values of \(\Delta t\), so that \(\lim \sum _S \Delta t \lambda _i=0.\)
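
Written out, the decomposition behind this note is

$$ \lim _{\Delta t\rightarrow 0}\sum _{S\cup S^c}\Delta t\,\lambda _i = \lim _{\Delta t\rightarrow 0}\sum _{S}\Delta t\,\lambda _i + \lim _{\Delta t\rightarrow 0}\sum _{S^c}\Delta t\,\lambda _i = 0 + \lim _{\Delta t\rightarrow 0}\sum _{S^c}\Delta t\,\lambda _i , $$

where the first term vanishes because the sum over \(S\) has a fixed number \(n\) of terms, each of order \(\Delta t\).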

  5. A more general version of this result is often called Blackwell’s Theorem.

  6. Because the history \(H_t=(S_1,S_2,\ldots ,S_{N(t-)})\) is itself a point process, it is stochastic and, therefore, the conditional intensity is stochastic. The definition (19.18) includes two separable steps: first, we define the conditional intensity

    $$ \lambda (t|s_1,\ldots ,s_n)=\mathop {\lim }\limits _{\Delta t\rightarrow 0} \frac{P(\Delta N_{(t, t+\Delta t]} =1|N(t-)=n,S_1=s_1,\ldots ,S_n=s_n)}{\Delta t} $$

    for every possible vector \((s_1,\ldots ,s_n)\) making up the history \(H_t\), and then we replace the specific values \(N(t-)=n\) and \((S_1=s_1,\ldots ,S_n=s_n)\) with their stochastic counterparts written as \(H_t=(S_1,S_2,\ldots ,S_{N(t-)})\).
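
The probability statement in this definition suggests a simple discrete-time simulation scheme: in each small bin of width \(\Delta t\), a spike occurs with probability approximately \(\lambda (t|H_t)\Delta t\). The sketch below uses this Bernoulli approximation; the history-dependent intensity function is a hypothetical illustration, not one from the book.

```python
import random

def simulate_point_process(cond_intensity, T, dt=0.001, seed=0):
    """Bernoulli approximation on (0, T]: in each bin of width dt a spike
    occurs with probability cond_intensity(t, spikes) * dt, where
    cond_intensity plays the role of lambda(t | H_t)."""
    rng = random.Random(seed)
    spikes = []
    t = 0.0
    while t < T:
        if rng.random() < cond_intensity(t, spikes) * dt:
            spikes.append(t)
        t += dt
    return spikes

# Hypothetical history-dependent intensity: a 40 spikes/s baseline,
# suppressed linearly for ~20 ms after each spike (a crude refractory effect).
def example_intensity(t, spikes):
    base = 40.0
    if not spikes:
        return base
    u = t - spikes[-1]  # time since the most recent spike
    return base * min(1.0, u / 0.020)

spikes = simulate_point_process(example_intensity, T=5.0)
```

Because the intensity depends on the spike history through `spikes[-1]`, the simulated process is stochastic in exactly the sense of this note: the intensity realized on any run depends on the random spike times produced so far.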

  7. General theory justifying the interchange of limit and expectation applies here.

  8. The terminology is intended to signify that the history dependence is limited to the previous spike time. A discrete-time stochastic process is a Markov process if the probability that the process will be in a particular state at time \(t\) depends only on the state of the process at time \(t-1\).

  9. Because integrate-and-fire neurons reset to a baseline subthreshold voltage after firing, they necessarily follow Eq. (19.30). Further discussion of IMI models and their relationship to integrate-and-fire neurons is given in Koyama and Kass (2008).

  10. The functions \(g_0(t)\) and \(g_1(u)\) are defined only up to a multiplicative constant. That is, for any nonzero number \(c\), if we multiply \(g_0(t)\) by \(c\) and divide \(g_1(u)\) by \(c\), we do not change the result. Some arbitrary choice of scaling must therefore be introduced. In Fig. 19.7 the constant was chosen so that \(g_0(t)\) was equal to the Poisson process intensity at time \(t=50\) ms after the appearance of the visual cue.
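
This identifiability point can be verified directly: the product \(g_0(t)\,g_1(u)\) is unchanged under the rescaling. Both factor functions below are hypothetical, chosen only to illustrate the invariance.

```python
import math

def g0(t):
    return 10.0 + 0.1 * t            # hypothetical trial-time modulation

def g1(u):
    return 1.0 - math.exp(-u / 5.0)  # hypothetical recovery after a spike

c = 3.7            # any nonzero constant
t, u = 50.0, 12.0
lam_original = g0(t) * g1(u)
lam_rescaled = (c * g0(t)) * (g1(u) / c)  # multiply g0 by c, divide g1 by c
```

Since `lam_original` and `lam_rescaled` agree for every `t`, `u`, and nonzero `c`, no amount of data can pin down the individual factors, which is why an arbitrary scaling convention must be fixed before plotting them, as in Fig. 19.7.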

  11. Extending the argument slightly to include the interval \((s_N,T)\), it may also be shown that \(Z_1,\ldots ,Z_{N(T)}\) follow a homogeneous Poisson process with intensity \(\lambda =1\).
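
A numerical sketch of this time-rescaling property: integrating the intensity between consecutive spike times yields rescaled intervals whose mean should be close to 1. The constant-rate test process and midpoint integration below are illustrative choices, not the book's code.

```python
import random

def rescaled_intervals(spikes, intensity, n_steps=100):
    """z_k = integral of intensity over (s_{k-1}, s_k], by midpoint rule."""
    zs = []
    prev = 0.0
    for s in spikes:
        h = (s - prev) / n_steps
        z = sum(intensity(prev + (i + 0.5) * h) for i in range(n_steps)) * h
        zs.append(z)
        prev = s
    return zs

# Sanity check with a homogeneous Poisson process of rate 5 on (0, 200):
# each rescaled interval is 5 times an Exponential(5) interval, i.e.
# Exponential(1), so the sample mean should be near 1.
rng = random.Random(1)
spikes, t = [], rng.expovariate(5.0)
while t < 200.0:
    spikes.append(t)
    t += rng.expovariate(5.0)

zs = rescaled_intervals(spikes, lambda t: 5.0)
mean_z = sum(zs) / len(zs)
```

In practice this is the basis of the goodness-of-fit checks built on time rescaling: if a fitted intensity is adequate, the rescaled intervals should look like a unit-rate Poisson sample.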

  12. These may be combined by writing the covariance function, often called the complete covariance function, as \(\kappa (0)\delta (h)+\kappa (h)\), where \(\delta (h)\) is the Dirac delta function, which is infinite at \(h=0\) and zero for all other values of \(h\).

Author information

Correspondence to Robert E. Kass.


Copyright information

© 2014 Springer Science+Business Media New York

About this chapter

Cite this chapter

Kass, R.E., Eden, U.T., Brown, E.N. (2014). Point Processes. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_19
