Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology
Abstract
The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the center of many research initiatives, such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex and costly. As a result, inverse methods that utilize the firing activity of neurons in order to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activities of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify the functional connections from firing activities, the scalability of the proposed techniques poses a major challenge in applying them to large-scale datasets of recorded firing activities. In exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activities. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, where we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of Leaky Integrate-and-Fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm using artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
Keywords
Functional connectivity · Synaptic connectivity · Neural signal processing · Neural network

1 Introduction
Reconstructing the connectivity of neuronal networks has been a major challenge for the past decade. Currently, the only reliable way to map the underlying synaptic connectivity of neuronal networks is by using invasive procedures, which are prohibitively complex and time-consuming: it took more than 10 expert-years to map the whole connectome of C. elegans, comprising only 302 neurons and 7283 synaptic connections (Watts and Strogatz 1998). Similarly, a 10 expert-year effort was required to capture the connectome of the fruit fly medulla columns, with only 379 traced neurons and 8637 synapses (Plaza et al. 2014). To map the whole brain of a fruit fly, with around 10,000 neurons, we would have to spend around 4700 expert-years (Plaza et al. 2014; Chiang et al. 2011). Following the same approach and using the current technology, it is estimated that it would take around 14 billion person-years to completely map the human brain's connectome (Plaza et al. 2014). Although there is an increasing effort to automate some parts of the invasive procedures, such approaches remain impractical even for mid-sized networks. Furthermore, the current invasive techniques cannot be applied to live specimens.
In contrast, inverse methods that focus on mapping the functional connectivity from the activity of the neurons have received more attention in recent years. These approaches are non-invasive (or minimally invasive), so they can be applied to live specimens, and they require much less time and labor to identify the functional network. Furthermore, rapid advances in recording technologies have made it possible to simultaneously monitor the activities of tens (Perin et al. 2011) to hundreds of neurons (Buzsáki 2004; Grewe et al. 2010). Upcoming technologies will significantly improve the accuracy and scale of recording neurons' activities. It is worth mentioning that there has also been significant progress in simultaneously recording and stimulating a set of neurons (Khodagholy et al. 2014; Herrera and Adamantidis 2015; Bertotti et al. 2014). These advancements provide an abundance of data for which computationally efficient and accurate inverse algorithms would be welcome.
In this paper, we focus on the inverse problem. Our main goal is to design efficient and scalable algorithms that result in good approximations of the underlying synaptic graph. In other words, we are interested in algorithms whose inferred functional network is a close match to the underlying synaptic connectivity for a group of Leaky Integrate-and-Fire (LIF) neurons (Gerstner and Kistler 2002).
To this end, we apply a technique, usually known as the kernel method in the machine learning literature, to map the nonlinear inference problem to a linear equivalent in the kernel space. Then, we formulate the network inference problem as an instance of a constrained optimization problem where the objective function has a simple form and the constraints are all linear. As a result, we develop an algorithm that easily scales to large datasets of recorded neural activities. Moreover, we mathematically analyze this mapping and derive the conditions under which our proposed algorithm successfully identifies the type of the underlying synaptic connections (e.g., excitatory vs. inhibitory) in the limit of large available data.
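To make the linearization concrete, consider the standard LIF membrane dynamics: filtering the (binary) spike trains of the presynaptic neurons with an exponential decay kernel yields a feature matrix in which the membrane potential is a linear function of the synaptic weights. The sketch below is illustrative only; the kernel shape, the time constant `tau`, and the function name `kernel_features` are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def kernel_features(spikes, tau=20.0, dt=1.0):
    """Filter binary spike trains with an exponential decay kernel.

    spikes: (T, n) binary matrix of presynaptic firing activity.
    Returns K of shape (T, n): after this mapping, the membrane
    potential of the postsynaptic neuron is the *linear* map K @ g.
    """
    T, n = spikes.shape
    K = np.zeros((T, n))
    decay = np.exp(-dt / tau)
    for t in range(1, T):
        # leaky integration: the previous state decays, new spikes add on
        K[t] = decay * K[t - 1] + spikes[t - 1]
    return K

# toy example: 2 presynaptic neurons, 5 time steps
spikes = np.array([[1, 0], [0, 1], [0, 0], [1, 0], [0, 0]], dtype=float)
K = kernel_features(spikes, tau=2.0, dt=1.0)
g = np.array([1.0, -0.5])   # hypothetical synaptic weights
v = K @ g                   # membrane potential, linear in g
```

Thresholding `v` then reproduces the nonlinear firing decision, while the inference problem over `g` remains linear in the kernel features.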
We also show that the proposed technique is equally applicable to networks of both deterministic and stochastic neurons that follow the widely used LIF model. We support our theoretical findings with an exhaustive set of simulations in which we validate the performance of our algorithm with respect to the ground-truth networks (in artificially generated spiking data, where the ground truth is available). We also report the results of our algorithm applied to a dataset of firing activities recorded from hippocampal areas in rats (Mizuseki et al. 2013). We find that our results are well in line with previous findings (Mizuseki et al. 2009).
2 Related work
Solving inverse problems and trying to reverse engineer neural circuits have long been among the main research topics in neuroscience. On the single-neuron scale, characterizing a neuron's response and predicting its output spikes based on the input stimuli has been highly explored, and methods based on white-noise analysis have been used extensively with remarkable results (see Pillow and Simoncelli (2003) for a recent example). Methods based on the integrate-and-fire model of neurons have also been used extensively to infer mathematical models of neural circuits from pre- and postsynaptic data. Lazar and Slutskiy (2014) is a nice example of such approaches, where the Hodgkin-Huxley model is used to identify the neural circuit. Nevertheless, this approach requires that the pre- and postsynaptic measurements of the target neuron be available.
Moving on to identifying the network connectivity, the cross-correlogram is perhaps the most widely used method to identify (functional) connections between pairs of neurons or regions (Brown et al. 2004). However, approaches based on the cross-correlogram usually fall short of identifying causal relations or the effective connectivity of neurons. It is very well established in statistics that the existence of correlation between two events is neither a necessary nor a sufficient condition for inferring causality. This is why statistical hypothesis tests such as the Granger causality measure were proposed as alternatives in order to overcome the drawbacks of the cross-correlogram (e.g., Kim et al. (2011)).
Another recent line of work has primarily focused on inference methods that are tailored to the LIF model of neurons. In particular, Van Bussel et al. (2011) convert the nonlinear firing behavior of LIF neurons into a set of linear equations, which can be solved given a sufficient number of recorded samples. While efficient, this algorithm is highly sensitive to the accuracy of spike times and relies on knowledge of model parameters (e.g., synaptic propagation delays) that are difficult to obtain. Memmesheimer et al. (2014) and Baldassi et al. (2007) proposed inference algorithms based on the Perceptron learning rule. Furthermore, Memmesheimer et al. (2014) proved that, under accurate estimates of spike times, it is possible to identify a simple n-to-1 feedforward network. They also proposed a heuristic extension that works with finite precision in recorded spike times. Nevertheless, their model does not take (random) synaptic delays into account. Moreover, having extra postsynaptic neurons, even in a simple feedforward scenario, can have a dramatic effect on the performance of the inference algorithm when the structure of the graph (i.e., here being feedforward) is not known a priori. In Monasson and Cocco (2011), two Bayesian approaches are proposed to find the connections in a network of LIF neurons. Nevertheless, the proposed approaches account neither for (random) synaptic delays nor for the effect of hidden neurons. Furthermore, the algorithm depends heavily on the accuracy of the recorded spike times.
A more complex and accurate family of approaches relies on Generalized Linear Models (GLMs) (Paninski 2004). These methods consider the collective activity of the neural group and focus on finding the best functional network that can explain the activity. GLMs were recently used to reconstruct a real physiological circuit from recorded neural data (Gerhard et al. 2013) as well as the functional connectivity of the ganglion cells in the retina (Pillow et al. 2008). Approaches based on GLMs are generally accurate (i.e., they identify the correct set of connections in the underlying graph) provided that the neural model used to generate the spike data exactly matches the one used in the GLM (Ramirez and Paninski 2014). Extending these methods to exploit a prior distribution on the neural connections results in effective Bayesian models that are especially powerful in the face of limited data. In particular, Stevenson et al. (2009) proposed a Maximum a Posteriori (MAP) estimate to infer the neural connections and reported highly accurate results in the limited-data regime, at the expense of very high computational costs. Bayesian approaches have also been used to identify connections directly from calcium-imaging data (Mishchenko et al. 2011).
In light of the aforementioned advantages, GLM-based techniques are among the favorite state-of-the-art approaches. Nevertheless, they are not without limits. The first, and probably most important, drawback is scalability, which makes handling large datasets, both in terms of the number of neurons and the duration of recorded firing activity, difficult. Recently, however, several approximations have been suggested to resolve this issue (Ramirez and Paninski 2014; Zaytsev et al. 2015). Nevertheless, these approximations work only for a particular choice of nonlinearity (Zaytsev et al. 2015), and, similar to GLM-based techniques, convergence is only guaranteed when the model for the neurons and that of the GLM closely match each other. Soudry et al. (2015) have proposed an approach that covers a wider set of nonlinearities to overcome this issue to some extent. However, random synaptic delays are not addressed and no guarantees are provided on the performance of the proposed method.
Close to GLM-based methods are recent approaches that model the stochastic firing rates by a Hawkes point process. In contrast to a (homogeneous) Poisson process, which assumes that events occur independently of one another, in a Hawkes process past events can increase (e.g., excitation of neurons) or decrease (e.g., inhibition of neurons) the probability of future events. Based on this parametric assumption, Moore and Davenport (2015) identified the connections in a medium-sized network while assuming that the traffic is generated according to a Hawkes process. Similarly, Hall and Willett (2016) aimed at inferring the connections as well as predicting the firing rates of neurons based on their past firing activity through an online learning algorithm. Nevertheless, both methods rely heavily on the assumption that the traffic is generated according to a Hawkes process, whereas we make no such statistical assumptions. Moreover, neither of the approaches takes the effect of hidden neurons into account or evaluates the performance of its algorithm in scenarios where inhibitory connections are present.
Compared to the aforementioned approaches, the simplicity of our proposed method offers three main advantages:

1. Scalability: from a practical point of view, it allows better scalability (in contrast to previous work), i.e., it requires less memory and can scale with the limited resources available (see Fig. 1).

2. Performance guarantees: from a theoretical point of view, its performance guarantees hold for a larger family of neurons and nonlinearities.

3. Hidden neurons: the simplicity of the approach also enables us to derive sufficient conditions under which the estimated functional network returned by our algorithm is not affected by the existence of hidden neurons and matches the underlying synaptic graph in the limit of large data.
Finally, we should mention that the consistency problem, even for an n-to-1 feedforward network, is NP-hard. In other words, determining whether or not there exists a set of delays and weights with which we can fully match the set of input firing patterns to the output is very difficult (Maass and Schmitt 1999). Although this result does not necessarily imply that finding such a configuration is impossible (under the right set of conditions), it shows that finding provable "positive learning results" for the case of spiking neurons is quite challenging.
3 Model formulation and problem statement
We formally introduce the neural models and the network structures considered throughout the paper. We also formally state the network inference problem.
Neurons’ model
Network model
As for the network structure, we do not assume any specific topology on the neural graph. However, as is the case in many neural networks, we require a network that is balanced in terms of excitatory and inhibitory connections. This requirement ensures that the excitatory and inhibitory populations act in such a way that the average activity stays below a threshold.
In that regard, we usually pick 80% of the connections to be excitatory and 20% to be inhibitory. For the numerical results, we set the weight of every excitatory connection to +1 mV and that of every inhibitory connection to −δ mV, where δ = n_{exc}/n_{inh}, and n_{exc} and n_{inh} are the numbers of excitatory and inhibitory connections in the network. Also, in accordance with biological data and following Dale's principle (Dale 1935), we fix the type of each neuron to be either excitatory or inhibitory, which means that all outgoing connections of a presynaptic neuron have the same sign.^{2}
We also assume that neural connections have intrinsic delays, which represent the time it takes for information to propagate through the axons and synapses. The delay of each link is assumed to be a random number in the interval (0, d_{max}], where d_{max} > 0 is the maximum delay. Once assigned, the delays remain fixed.
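A minimal sketch of generating such a synthetic network under the assumptions above (80/20 excitatory/inhibitory split, weights +1 mV and −δ mV with δ = n_{exc}/n_{inh}, Dale's principle, random delays in (0, d_{max}]); the connection probability `p` and the function name are illustrative choices, not taken from the paper:

```python
import numpy as np

def balanced_network(n=100, p=0.1, frac_exc=0.8, d_max=2.0, seed=0):
    """Random balanced network obeying Dale's principle: the first
    80% of neurons are excitatory (outgoing weights +1 mV), the rest
    inhibitory (outgoing weights -delta mV), with delta = n_exc/n_inh
    so that excitation and inhibition balance on average. Each existing
    link gets a random delay drawn uniformly from (0, d_max]."""
    rng = np.random.default_rng(seed)
    n_e = int(frac_exc * n)
    adj = rng.random((n, n)) < p        # adj[i, j]: link from i to j
    np.fill_diagonal(adj, False)        # no self-connections
    n_exc = adj[:n_e].sum()             # number of excitatory links
    n_inh = adj[n_e:].sum()             # number of inhibitory links
    delta = n_exc / max(n_inh, 1)       # balance condition
    W = np.zeros((n, n))
    W[:n_e][adj[:n_e]] = 1.0
    W[n_e:][adj[n_e:]] = -delta
    D = np.where(adj, rng.uniform(0.0, d_max, (n, n)), 0.0)
    return W, D

W, D = balanced_network()
```

By construction, the total excitatory and inhibitory weights cancel, and each row of `W` is single-signed, as Dale's principle requires.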
Problem statement
4 The inference algorithms
We propose an iterative inference algorithm to identify the functional connections in a network of neurons based on their firing activity. To better explain the algorithm, we first study the simpler case of deterministic LIF neurons, as described in Eq. (1). In a later section, we then show how the proposed algorithm naturally extends to deal with stochastic neurons as well.
Basically, knowing the matrix \(\hat {Y}\) and the firing activity of the neurons, we look for the smallest vector w (in ℓ_{1}-norm) that satisfies a set of constraints.
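The flavor of this optimization can be sketched as an ℓ1-penalized feasibility problem: find a sparse w whose sign pattern satisfies the linear constraints. The proximal-subgradient solver below is a generic stand-in, not the paper's Dual NeuInf algorithm; the penalty `lam`, the step size, and the synthetic data are arbitrary illustrative choices.

```python
import numpy as np

def infer_weights(K, y, lam=0.01, step=0.05, iters=2000):
    """Minimize hinge loss on the sign constraints y_t * (K_t . w) >= 1
    plus an l1 penalty lam * ||w||_1, by proximal subgradient descent
    (gradient step on the hinge term, soft-thresholding for the l1 term)."""
    T, n = K.shape
    w = np.zeros(n)
    for _ in range(iters):
        active = y * (K @ w) < 1                     # violated constraints
        grad = -(K[active] * y[active, None]).sum(axis=0) / T
        w -= step * grad
        # proximal step for the l1 term (soft-thresholding)
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

# synthetic check: recover the sign pattern of a sparse ground truth
rng = np.random.default_rng(1)
g = np.array([1.0, -1.0, 0.0, 0.0])   # true sparse weights
K = rng.random((500, 4))
y = np.sign(K @ g + 1e-9)             # firing side of each constraint
w = infer_weights(K, y)
```

On this toy data, the recovered `w` carries the correct signs on the first two coordinates while the void coordinates stay near zero, which is exactly the type/sign information the analysis below targets.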
Also note that we used a different kernel matrix K^{′}, which may or may not be the same as the true kernel matrix K, depending on our prior knowledge about the underlying neural model.
We will show that in scenarios where the original problem is feasible, i.e., when Kg > 0, as long as K^{′} and K are close (in some precise algebraic sense) then by solving Problem I, given by Eq. (7), we will be able to find the type/sign and the location of nonzero entries in g, the vector of the underlying synaptic neural connections.
4.1 Centralized inference algorithms
To solve Problem II, given by Eq. (8), we propose two different online approaches, with an emphasis on scalability and the ability to cope with limited memory.
4.2 Scalable inference algorithms
In practical situations, the matrix of observed neural activity can be very large, especially due to the large number of recorded samples, T. For instance, one of the datasets that we used to evaluate the performance of the proposed algorithm has around T = 7,000,000 rows and n = 1000 columns, i.e., a matrix of size 7,000,000 × 1000. In some cases the dataset will be even larger. Fitting such large matrices into memory (RAM) is usually difficult due to the limited amount of available resources. Therefore, it is desirable to design an algorithm that can cope with memory limits. Furthermore, from a computational point of view, it is also important to have an algorithm that can break the problem into smaller subproblems, solve those subproblems in parallel, and merge the results such that the overall solution is near-optimal.
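One simple realization of this split-and-merge pattern is one-shot averaging: partition the rows into blocks that fit in memory, solve each block independently (the per-block solves can run in parallel), and average the resulting weight vectors. This mirrors the spirit of communication-efficient distributed solvers such as Jaggi et al. (2014), but it is a hedged sketch, not the authors' exact Parallel Dual NeuInf; the local solver and all parameters are illustrative.

```python
import numpy as np

def solve_block(K, y, lam=0.01, step=0.05, iters=1000):
    """Local solver (hinge loss + l1 penalty) on one block of rows."""
    w = np.zeros(K.shape[1])
    for _ in range(iters):
        active = y * (K @ w) < 1
        grad = -(K[active] * y[active, None]).sum(axis=0) / len(y)
        w -= step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

def averaged_solution(K, y, n_blocks=4):
    """Split the T x n activity matrix row-wise into blocks that fit in
    memory, solve each block independently (in parallel in practice),
    and merge by averaging the per-block weight vectors."""
    blocks = zip(np.array_split(K, n_blocks), np.array_split(y, n_blocks))
    return np.mean([solve_block(Kb, yb) for Kb, yb in blocks], axis=0)

rng = np.random.default_rng(2)
g = np.array([1.0, -1.0, 0.0, 0.0])   # ground-truth sparse weights
K = rng.random((2000, 4))
y = np.sign(K @ g + 1e-9)
w = averaged_solution(K, y)
```

Averaging preserves the sign pattern of the per-block solutions, so connection types survive the merge even though individual weight magnitudes are only approximate.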
5 Theoretical analysis
In this section, we analyze the performance of the proposed algorithms and identify conditions under which the returned functional graph closely approximates the underlying synaptic connections. We should again remark that our focus in this paper is on identifying the existence and type of connections, and not the corresponding weights.
We start by proving the desired results for a neural network consisting of deterministic (and noisy) LIF neurons, specified by Eq. (1). We also assume that there is incoming traffic from some unobserved (also called hidden) neurons. We establish sufficient conditions on both the statistical properties of the noise and the inference kernel (denoted by K^{′}) such that the type of each connection in the functional graph identified by the algorithms introduced in the previous section matches the type of the corresponding neural connection in the underlying synaptic graph. We then extend our results to show that the same algorithm can be applied to the more realistic scenario of stochastic LIF neurons.
5.1 Network of deterministic noisy LIF neurons with hidden traffic
For the network of deterministic noisy LIF neurons, we first show that, as long as the noise term satisfies certain statistical properties, the algorithm yields the desired result. We then investigate the conditions under which the net effect of incoming traffic from a set of hidden neurons can be modeled by a noise term with the specified statistical properties, which means that the algorithm will be successful in identifying the connection types even in the presence of unobserved traffic.
 (A1)
Having enough firing data: the observed neurons fire at a rate linear in T, i.e., neuron i fires α_{i}T spikes in the interval [0,T], with α_{i} ≥ α_{min} > 0.
 (A2)
Zero-mean noise: the noise in the membrane potential {v(t)} is a zero-mean^{4} random variable, and its samples are uncorrelated if they are more than Δ_{v} time slots apart.
Lemma 1
(The proof is given in Appendix A.1)
 (A3)
The traffic of any two hidden/outside neurons i and j is independent.
 (A4)
The incoming weights from the hidden/outside neurons form a balanced random network (similar to the incoming traffic from "visible" neurons), i.e., \(\mathbb {E}\{g_{i}^{\prime }\} = 0.\)
 (A5)
The outgoing traffic of neuron i at times t and t + Δ is uncorrelated for sufficiently large Δ.
Note that a direct consequence of the above assumptions is that if we sample the neurons at time instances that are sufficiently far apart, the noise terms are uncorrelated. This fact is useful in practice for designing better algorithms.
Now, the following lemma shows that, given the above assumptions, we can rewrite Hg^{′} as a zero-mean colored noise with vanishing correlation.
Lemma 2
Given assumptions A2-A5 above, the random variable v(t) = H_{t}g^{′} forms a colored random process with vanishing correlation.
(The proof is given in Appendix A.2)
Combining the results of Lemmas 1 and 2, we can prove the convergence of the algorithm for the case of deterministic noisy LIF neurons with incoming hidden traffic. This is formally stated in the next theorem.
Theorem 1
(The proof is given in Appendix A.3)
5.1.1 Network of stochastic LIF neurons
In the previous section, we proved that, under certain assumptions, our proposed algorithm is guaranteed to identify the type of connections in the limit of large data for deterministic LIF neurons (with hidden incoming traffic as well). In this section, we show that the same can be proven for stochastic LIF neurons if we slightly modify the proposed algorithm. The main idea is to show that solving the problem for stochastic neurons results in the same solution as solving the problem for deterministic neurons, defined in Problem II. Therefore, we can solve Problem II for the stochastic case as well to identify the connections.
 (B1)
The function f_{ s }(⋅) is an increasing function of its argument.
 (B2)
The firing pattern of the postsynaptic neuron has a vanishing correlation, i.e., if two samples are more than Δ time slots apart, they become conditionally independent. More precisely, if t^{′}− t > Δ, then
$$\begin{array}{@{}rcl@{}} \Pr\{y(t),y(t^{\prime})\mid K^{\prime},w\} &=& \Pr\{y(t)\mid K^{\prime}(0,t),w\}\\ &\times&\Pr\{y(t^{\prime})\mid K^{\prime}(t,t^{\prime}),w\}, \end{array} $$
where K^{′}(t_{1},t_{2}) is the subset of samples in the interval [t_{1},t_{2}].
Remark 1
For the special case of f_{ s } being the sigmoid function, we have \(K^{\prime \prime }_{t} = K^{\prime }_{t}\). This is the form that has been considered in the context of GLMs (Paninski 2004).
To this end, we pick a loss function L(⋅) that is:

- Decreasing, i.e., L(x) ≤ L(y) if x ≥ y.

- Such that the inequality log(f_{s}(x)) ≤ −L(x) holds.
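For the sigmoid nonlinearity of Remark 1, the choice L(x) = log(1 + e^{−x}) = −log f_{s}(x) satisfies both properties (the second with equality). A quick numerical check, assuming this particular pairing of loss and nonlinearity:

```python
import numpy as np

f_s = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid nonlinearity
L = lambda x: np.log1p(np.exp(-x))         # candidate loss: -log(f_s)

x = np.linspace(-10.0, 10.0, 1001)
# property 1: L is decreasing
decreasing = bool(np.all(np.diff(L(x)) <= 0))
# property 2: log(f_s(x)) <= -L(x); it holds with equality for this choice
bound_holds = bool(np.all(np.log(f_s(x)) <= -L(x) + 1e-12))
```

Here `np.log1p` is used for numerical stability when e^{−x} is small; other nonlinearities would require a separate upper-bounding loss rather than an exact match.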
Theorem 2
Under assumptions B1-B2 above, and with a proper choice of the loss function, the problem given by Eq. (19) and Problem II are equivalent, in the sense that the solution w^{∗} of Problem II is also the maximizer of Eq. (19).
(The proof is given in Appendix A.4)
This equivalence has significant consequences. First, we can efficiently find the ML estimator for problem (13). Second, it also suggests that the convergence results for our deterministic algorithm (discussed earlier in this section) also apply to the stochastic family of neurons.
6 Experiments
In this section, we validate the performance of the proposed algorithm via numerical experiments on both artificially generated data and data recorded from real neurons. For the former, we used the dataset generated by Zaytsev et al. (2015).^{5} Testing on artificially generated data has the advantage of providing access to the underlying synaptic connectivity (ground truth), which allows benchmarking the performance of the proposed algorithm. We also applied the inference algorithm to a dataset of real recordings from multiple hippocampal areas in rats (Mizuseki et al. 2013; Mizuseki et al. 2009).^{6}
6.1 Results on simulated data
The dataset of artificially generated spikes contains the firing activity of 1000 LIF neurons, with a fixed firing threshold of 20 mV and random (and unknown) synaptic propagation delays of up to 2 ms (Zaytsev et al. 2015). The network topology is recurrent and randomly generated.
1. Spike prediction accuracy: we verify the ability of the algorithm to predict the output firing activity of the postsynaptic neuron (i.e., by solving Problem I), given the inferred connection weights and the firing activity of its neighbors (when Problem I is feasible).

2. Average quality: we take the average over all the returned weights for excitatory, inhibitory and void connections when solving Problem II. In the ideal case, these three averages should be well-separated and the returned weights should be concentrated around their means (i.e., the variance should tend to zero).

3. Precision and recall: we then transform the analog association matrix returned by the algorithm into a ternary adjacency matrix of the graph. We measure how accurately the algorithm has identified excitatory, inhibitory and void connections by calculating the precision and recall for each connection type.
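The third criterion can be computed directly from the ternary matrices. The sketch below is illustrative; in particular, the symmetric threshold rule in `ternarize` is an arbitrary choice for mapping the analog association matrix to {+1, −1, 0}.

```python
import numpy as np

def ternarize(W, thr):
    """Map an analog association matrix to a ternary adjacency matrix."""
    return (W > thr).astype(int) - (W < -thr).astype(int)

def precision_recall(true_adj, est_adj):
    """Per-class precision/recall for ternary matrices with entries
    +1 (excitatory), -1 (inhibitory) and 0 (void)."""
    scores = {}
    for label, name in [(1, 'excitatory'), (-1, 'inhibitory'), (0, 'void')]:
        pred, actual = est_adj == label, true_adj == label
        tp = np.sum(pred & actual)                 # true positives
        scores[name] = (tp / max(pred.sum(), 1),   # precision
                        tp / max(actual.sum(), 1)) # recall
    return scores

true_adj = np.array([[0, 1, -1], [1, 0, 0], [0, -1, 0]])
est_adj = ternarize(np.array([[0.1, 0.9, -0.2], [0.8, 0.0, 0.1],
                              [-0.1, -0.7, 0.2]]), thr=0.5)
scores = precision_recall(true_adj, est_adj)
```

In this toy example the excitatory links are recovered perfectly, while one missed inhibitory link lowers the inhibitory recall to 0.5.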
6.2 Results on real data
- One can calculate the "net" outgoing weight for each neuron and, if it is higher/lower than a threshold, call the neuron excitatory/inhibitory.

- Alternatively, one can count the number of positive and negative "peaks" among the outgoing weights and classify the neuron as excitatory/inhibitory if these two numbers are significantly different from each other.
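Both heuristics are easy to state in code. In the hedged sketch below, the thresholds (a cutoff `z` on the net weight, and `z` standard deviations for counting a "peak") are arbitrary illustrative choices, not values taken from the paper:

```python
import numpy as np

def classify_neurons(W, z=1.0):
    """Classify neurons as excitatory (+1), inhibitory (-1) or
    undecided (0) from their rows of outgoing weights."""
    # heuristic 1: "net" outgoing weight against a threshold
    net = W.sum(axis=1)
    by_net = np.sign(net) * (np.abs(net) > z)
    # heuristic 2: compare counts of significant positive/negative peaks
    thr = z * W.std()
    pos = (W > thr).sum(axis=1)
    neg = (W < -thr).sum(axis=1)
    by_peaks = np.sign(pos - neg)
    return by_net, by_peaks

# toy outgoing-weight matrix: rows are presynaptic neurons
W = np.array([[0.0, 2.0, 3.0],    # clearly excitatory
              [0.0, -2.0, -3.0],  # clearly inhibitory
              [0.1, -0.1, 0.0]])  # undecided
by_net, by_peaks = classify_neurons(W, z=1.0)
```

On clean data the two heuristics agree; they differ mainly when a neuron has a few strong outgoing weights of mixed sign.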
7 Conclusion and future work
In this paper, we introduced a novel approach to identify neural connectivity from the observed firing activity of neurons. The proposed approach is based on a reformulation of the LIF neuron model that facilitates theoretical analysis and allows scalable implementations.
We theoretically proved the accuracy of our algorithm and derived the conditions under which the inferred functional connectivity matches that of the underlying synaptic network. We also showed that our algorithm is capable of dealing with both deterministic and stochastic LIF neurons through the same framework. Finally, using numerical analysis, we showed that the proposed algorithm successfully identifies the synaptic connections in a dataset of simulated spiking activity (in order to benchmark against the ground truth) and is capable of dealing with datasets of real recordings, yielding meaningful interpretations.
Different variations of the inference algorithm were proposed in this paper, each more suitable for certain practical scenarios. In particular, for a centralized solution (when the data is not too big for a single machine), Dual NeuInf is a suitable algorithm as it does not require tuning a step size/learning rate. However, when the dataset is large, Parallel Dual NeuInf can be used to obtain the results more rapidly.
As for future directions, there are several major challenges that seem deeply intriguing. The first one concerns the existence of hidden neurons. In this paper, we showed that as long as the incoming traffic from hidden neurons satisfies certain statistical conditions, we are capable of finding the connectivity of the observed part of the network. Nevertheless, the more interesting challenge would be to (partially) identify the connectivity between the observed and hidden parts of the network. The second challenge involves considering more realistic models of neurons. In this paper, we considered LIF neurons with a fixed firing threshold. In reality, though, the firing threshold is adaptive, and neurons need more accurate models to describe their behavior (especially the inhibitory ones). Taking these dynamical aspects into account is certainly part of our future work.
Footnotes
 1.
We sincerely thank Dr. Yury Zaytsev, Prof. Abigail Morrison and Dr. Moritz Deger for making their data and code publicly available.
 2.
Note that Dale's law was only applied in generating the firing data during the numerical analysis; it was not considered in designing the inference algorithm to identify the functional graph, as it makes parallelization of the algorithm very difficult. Also, previous work suggests that the effect of Dale's law on the performance of inference algorithms is usually marginal.
 3.
Both these terms can be easily integrated into the weight vector by appending a separate entry to the vector g and the kernel matrix K. We focus on the case where 𝜃 = h_{0} = 0 for simplicity.
 4.
Even if the noise has a nonzero mean, it might be possible to compensate for that by adjusting the firing threshold in such a way that the mean of the noise remains zero.
 5.
We sincerely thank Dr. Yury Zaytsev, Prof. Abigail Morrison and Dr. Moritz Deger for making their data and code publicly available.
 6.
We sincerely thank Prof. Kenji Mizuseki, Prof. Anton Sirota, Prof. Eva Pastalkova and Prof. György Buzsáki for making the dataset publicly available.
 7.
Note that we have excluded the case where \(\hat {y}_{c}^{\top } B_{c} F_{i}^{\top } F^{\prime }_{i} E\hat {y}_{c}= 0\) because it happens only if \({\sum }_{t}\hat {y}_{c}(t) = 0\), i.e., if we have the same number of firing instances as instances of inactivity. However, in real neurons the latter event is much more frequent. Therefore, as the amount of data increases, the probability of having \({\sum }_{t}\hat {y}_{c}(t) = 0\) tends to zero.
 8.
The choice of filter is made for convenience of presentation only; we can extend the proof to other decaying potentials as well.
Notes
Acknowledgements
The authors would like to thank the Swiss National Science Foundation for supporting this work (grant No. 20FP1151073) and DARPA YFA (D16AP00046). We also thank Dr. Hesam Setareh, Dr. Mohammad Javad Faraji and Prof. Wulfram Gerstner for their kind and helpful discussions on neural models and the proposed approach. We deeply thank Dr. Yury Zaytsev, Prof. Abigail Morrison and Dr. Moritz Deger for generously sharing the code and data for their approach on inverse connectivity inference (Zaytsev et al. 2015), as well as Prof. Kenji Mizuseki, Prof. Anton Sirota, Dr. Eva Pastalkova and Prof. György Buzsáki for generously sharing their recorded data from hippocampal areas in rats (Mizuseki et al. 2013; Mizuseki et al. 2009). We would also like to deeply thank the reviewers, whose comments and suggestions have helped us significantly improve this manuscript.
Compliance with Ethical Standards
Conflict of Interest
The authors declare that they have no conflict of interest.
References
Baldassi, C., Braunstein, A., Brunel, N., Zecchina, R. (2007). Efficient supervised learning in networks with binary synapses. BMC Neuroscience, 8(Suppl 2), S13.
Bertotti, G., Velychko, D., Dodel, N., Keil, S., Wolansky, D., et al. (2014). A CMOS-based sensor array for in-vitro neural tissue interfacing with 4225 recording sites and 1024 stimulation sites. In 2014 IEEE Biomedical Circuits and Systems Conference (BioCAS) (pp. 304–307). IEEE.
Brown, E.N., Kass, R.E., Mitra, P.P. (2004). Multiple neural spike train data analysis: state-of-the-art and future challenges. Nature Neuroscience, 7(5), 456–461.
Buzsáki, G. (2004). Large-scale recording of neuronal ensembles. Nature Neuroscience, 7(5), 446–451.
Chiang, A.S., Lin, C.Y., Chuang, C.C., Chang, H.M., Hsieh, C.H., Yeh, C.W., Shih, C.T., Wu, J.J., Wang, G.T., Chen, Y.C., et al. (2011). Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Current Biology, 21(1), 1–11.
Dale, H. (1935). Pharmacology and nerve-endings. Journal of the Royal Society of Medicine, 28(3), 319–332.
Gerhard, F., Kispersky, T., Gutierrez, G.J., Marder, E., Kramer, M., Eden, U. (2013). Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone. PLoS Computational Biology, 9(7), e1003138.
Gerstner, W., & Kistler, W.M. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge: Cambridge University Press.
Goldstein, T., Studer, C., Baraniuk, R. (2014). A field guide to forward-backward splitting with a FASTA implementation. arXiv:1411.3406.
Grewe, B.F., Langer, D., Kasper, H., Kampa, B.M., Helmchen, F. (2010). High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision. Nature Methods, 7(5), 399–405.
Hall, E.C., & Willett, R.M. (2016). Tracking dynamic point processes on networks. IEEE Transactions on Information Theory, 62(7), 4327–4346.
Herrera, C.G., & Adamantidis, A.R. (2015). An integrated microprobe for the brain. Nature Biotechnology, 33(3), 259–260.
Jaggi, M., Smith, V., Takác, M., Terhorst, J., Krishnan, S., Hofmann, T., Jordan, M.I. (2014). Communication-efficient distributed dual coordinate ascent. In Advances in Neural Information Processing Systems (pp. 3068–3076).
Karbasi, A., Salavati, A.H., Vetterli, M. (2015). Learning network structures from firing patterns. In International Conference on Acoustics, Speech and Signal Processing.
Khodagholy, D., Gelinas, J.N., Thesen, T., Doyle, W., Devinsky, O., Malliaras, G.G., Buzsáki, G. (2014). NeuroGrid: recording action potentials from the surface of the brain. Nature Neuroscience.
Kim, S., Putrino, D., Ghosh, S., Brown, E.N. (2011). A Granger causality measure for point process models of ensemble neural spiking activity. PLoS Computational Biology, 7(3), e1001110.
Lazar, A.A., & Slutskiy, Y.B. (2014). Functional identification of spike-processing neural circuits. Neural Computation, 26(2), 264–305.
Maass, W., & Schmitt, M. (1999). On the complexity of learning from spiking neurons with temporal coding. Information and Computation, 153(1), 26–46.
Memmesheimer, R.M., Rubin, R., Ölveczky, B.P., Sompolinsky, H. (2014). Learning precisely timed spikes. Neuron, 82(4), 925–938.
Mishchenko, Y., Vogelstein, J.T., Paninski, L., et al. (2011). A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data. Annals of Applied Statistics, 5(2B), 1229–1261.
Mizuseki, K., Sirota, A., Pastalkova, E., Buzsáki, G. (2009). Theta oscillations provide temporal windows for local circuit computation in the entorhinal-hippocampal loop. Neuron, 64(2), 267–280.
 Mizuseki, K., Sirota, A., Pastalkova, E., Diba, K., Buzski, G. (2013). Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks. CRCNS org.Google Scholar
 Monasson, R., & Cocco, S. (2011). Fast inference of interactions in assemblies of stochastic integrateandfire neurons from spike recordings. Journal of Computational Neuroscience, 31(2), 199–227.CrossRefPubMedGoogle Scholar
 Moore, M., & Davenport, M. (2015). Learning network structure via hawkes processes. In Proceedings of the Work Signal Processing with Adaptive Sparse Structured Representations (SPARS).Google Scholar
 Paninski, L. (2004). Maximum likelihood estimation of cascade pointprocess neural encoding models. Network: Computation in Neural Systems, 15(4), 243–262.CrossRefGoogle Scholar
 Perin, R., Berger, T.K., Markram, H. (2011). A synaptic organizing principle for cortical neuronal groups. Proceedings of the National Academy of Sciences, 108(13), 5419–5424.CrossRefGoogle Scholar
 Pillow, J.W., & Simoncelli, E.P. (2003). Biases in white noise analysis due to nonpoisson spike generation. Neurocomputing, 52, 109–115.CrossRefGoogle Scholar
 Pillow, J.W., Shlens, J., Paninski, L., Sher, A., Litke, A.M., Chichilnisky, E., Simoncelli, E.P. (2008). Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995–999.CrossRefPubMedPubMedCentralGoogle Scholar
 Plaza, S.M., Scheffer, L.K., Chklovskii, D.B. (2014). Toward largescale connectome reconstructions. Current Opinion in Neurobiology, 25, 201–210.CrossRefPubMedGoogle Scholar
 Ramirez, A.D., & Paninski, L. (2014). Fast inference in generalized linear models via expected loglikelihoods. Journal of Computational Neuroscience, 36(2), 215–234.CrossRefPubMedGoogle Scholar
 Schur, J. (1911). Bemerkungen zur theorie der beschrankten bilinearformen mit unendlich vielen veranderlichen. Journal fur die Reine und Angewandte Mathematik, 140, 1–28.Google Scholar
 Soudry, D., Keshri, S., Stinson, P., Oh, M H, Iyengar, G., Paninski, L. (2015). Efficient shotgun inference of neural connectivity from highly subsampled activity data. PLoS Computational Biology, 11(10), e1004,464.CrossRefGoogle Scholar
 Stevenson, I.H., Rebesco, J.M., Hatsopoulos, N.G., Haga, Z., Miller, L.E., Körding, K.P. (2009). Bayesian inference of functional connectivity and network structure from spikes. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 17(3), 203–213.CrossRefPubMedGoogle Scholar
 Van Bussel, F., Kriener, B., Timme, M. (2011). Inferring synaptic connectivity from spatiotemporal spike patterns. Frontiers in computational neuroscience 5.Google Scholar
 Watts, D.J., & Strogatz, S.H. (1998). Collective dynamics of smallworld networks. Nature, 393(6684), 440–442.CrossRefPubMedGoogle Scholar
 Wright, S.J., Nowak, R.D., Figueiredo, M.A. (2009). Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7), 2479–2493.CrossRefGoogle Scholar
Zaytsev, Y., Morrison, A., Deger, M. (2015). Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity. Journal of Computational Neuroscience, 39(1), 77–103. https://doi.org/10.1007/s10827-015-0565-5.
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.