
Associative Pattern Recognition Through Macro-molecular Self-Assembly

Journal of Statistical Physics

Abstract

We show that macro-molecular self-assembly can recognize and classify high-dimensional patterns in the concentrations of N distinct molecular species. As in associative neural networks, recognition here exploits dynamical attractors to identify and reconstruct partially corrupted patterns. Traditional parameters of pattern recognition theory, such as sparsity, fidelity, and capacity, are related to physical parameters, such as nucleation barriers, interaction range, and non-equilibrium assembly forces. Notably, we find that self-assembly bears greater similarity to continuous attractor neural networks, such as the place cell networks that store spatial memories, than to discrete memory networks. This relationship suggests that the features and trade-offs seen here are not tied to details of self-assembly or neural network models but are instead intrinsic to associative pattern recognition carried out through short-ranged interactions.





References

  1. Graves, A., Mohamed, A.R., Hinton, G.: Speech recognition with deep recurrent neural networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645–6649 (2013)

  2. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., Red Hook (2012)


  3. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79(8), 2554–2558 (1982)

  4. Purvis, J.E., Lahav, G.: Encoding and decoding cellular information through signaling dynamics. Cell 152(5), 945–956 (2013)


  5. Levine, J.H., Lin, Y., Elowitz, M.B.: Functional roles of pulsing in genetic circuits. Science 342(6163), 1193–1200 (2013)


  6. Brubaker, S.W., Bonham, K.S., Zanoni, I., Kagan, J.C.: Innate immune pattern recognition: a cell biological perspective. Annu. Rev. Immunol. 33, 257–290 (2015)


  7. Murugan, A., Zeravcic, Z., Brenner, M.P., Leibler, S.: Multifarious assembly mixtures: systems allowing retrieval of diverse stored structures. Proc. Natl. Acad. Sci. USA 112(1), 54–59 (2015)


  8. Amit, D., Gutfreund, H., Sompolinsky, H.: Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett. 55(14), 1530–1533 (1985)


  9. Hertz, J., Krogh, A., Palmer, R.: Introduction to the Theory of Neural Computation. Basic Books, New York (1991)


  10. Amit, D.J., Gutfreund, H., Sompolinsky, H.: Spin-glass models of neural networks. Phys. Rev. A 32(2), 1007 (1985)


  11. MacKay, D.J.C.: Information Theory, Inference and Learning Algorithms. Cambridge University Press, Cambridge (2003)


  12. Burak, Y., Fiete, I.R.: Fundamental limits on persistent activity in networks of noisy neurons. Proc. Natl. Acad. Sci. USA 109(43), 17645–17650 (2012)


  13. Chaudhuri, R., Fiete, I.: Computational principles of memory. Nat. Neurosci. 19(3), 394–403 (2016)


  14. Seung, H.S.: Learning continuous attractors in recurrent networks. NIPS 97, 654–660 (1997)


  15. Wu, S., Hamaguchi, K., Amari, S.I.: Dynamics and computation of continuous attractors. Neural Comput. 20(4), 994–1025 (2008)


  16. Monasson, R., Rosay, S.: Crosstalk and transitions between multiple spatial maps in an attractor neural network model of the hippocampus: phase diagram. Phys. Rev. E 87(6), 062813 (2013)


  17. Monasson, R., Rosay, S.: Crosstalk and transitions between multiple spatial maps in an attractor neural network model of the hippocampus: collective motion of the activity. Phys. Rev. E 89(3), 032803 (2014)


  18. Battaglia, F., Treves, A.: Attractor neural networks storing multiple space representations: a model for hippocampal place fields. Phys. Rev. E 58(6), 7738–7753 (1998)


  19. Seung, H.S., Lee, D.D., Reis, B.Y., Tank, D.W.: Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron 26(1), 259–271 (2000)


  20. Hopfield, J.J.: Neurodynamics of mental exploration. Proc. Natl. Acad. Sci. USA 107(4), 1648–1653 (2010)


  21. Hopfield, J.J.: Understanding emergent dynamics: using a collective activity coordinate of a neural network to recognize time-varying patterns. Neural Comput. 27(10), 2011–2038 (2015)


  22. Fink, T., Ball, R.: How many conformations can a protein remember? Phys. Rev. Lett. 87(19), 198103 (2001)


  23. Barish, R.D., Schulman, R., Rothemund, P.W.K., Winfree, E.: An information-bearing seed for nucleating algorithmic self-assembly. Proc. Natl. Acad. Sci. USA 106(15), 6054–6059 (2009)


  24. Friedrichs, M.S., Wolynes, P.G.: Toward protein tertiary structure recognition by means of associative memory hamiltonians. Science 246(4928), 371 (1989)


  25. Sasai, M., Wolynes, P.G.: Molecular theory of associative memory hamiltonian models of protein folding. Phys. Rev. Lett. 65(21), 2740 (1990)


  26. Sasai, M., Wolynes, P.G.: Unified theory of collapse, folding, and glass transitions in associative-memory hamiltonian models of proteins. Phys. Rev. A 46(12), 7979 (1992)


  27. Bohr, H.G., Wolynes, P.G.: Initial events of protein folding from an information-processing viewpoint. Phys. Rev. A 46(8), 5242 (1992)


  28. Schafer, N.P., Kim, B.L., Zheng, W., Wolynes, P.G.: Learning to fold proteins using energy landscape theory. Isr. J. Chem. 54(8–9), 1311–1337 (2014)


  29. Ke, Y., Ong, L.L., Shih, W.M., Yin, P.: Three-dimensional structures self-assembled from DNA bricks. Science 338(6111), 1177–1183 (2012)


  30. Wei, B., Dai, M., Yin, P.: Complex shapes self-assembled from single-stranded DNA tiles. Nature 485(7400), 623–626 (2012)


  31. Colgin, L.L., Leutgeb, S., Jezek, K., Leutgeb, J.K., Moser, E.I., McNaughton, B.L., Moser, M.-B.: Attractor-map versus autoassociation based attractor dynamics in the hippocampal network. J. Neurophysiol. 104(1), 35–50 (2010)


  32. Jezek, K., Henriksen, E.J., Treves, A., Moser, E.I., Moser, M.-B.: Theta-paced flickering between place-cell maps in the hippocampus. Nature 478(7368), 246–249 (2011)


  33. Wills, T.J., Lever, C., Cacucci, F., Burgess, N., O’Keefe, J.: Attractor dynamics in the hippocampal representation of the local environment. Science 308(5723), 873–876 (2005)


  34. Kubie, J.L., Muller, R.U.: Multiple representations in the hippocampus. Hippocampus 1(3), 240–242 (1991)


  35. Curto, C., Itskov, V.: Cell groups reveal structure of stimulus space. PLoS Comput. Biol. 4(10), e1000205 (2008)


  36. Pfeiffer, B.E., Foster, D.J.: Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497(7447), 74–79 (2013)


  37. Ponulak, F., Hopfield, J.J.: Rapid, parallel path planning by propagating wavefronts of spiking neural activity. Front. Comput. Neurosci. 7, 98 (2013)


  38. Wu, S., Amari, S.-I.: Computing with continuous attractors: stability and online aspects. Neural Comput. 17(10), 2215–2239 (2005)


  39. Jezek, K., Henriksen, E.J., Treves, A., Moser, E.I., Moser, M.-B.: Theta-paced flickering between place-cell maps in the hippocampus. Nature 478(7368), 246–249 (2011)


  40. Hedges, L.O., Mannige, R.V., Whitelam, S.: Growth of equilibrium structures built from a large number of distinct component types. Soft Matter 10(34), 6404–6416 (2014)


  41. Murugan, A., Zou, J., Brenner, M.P.: Undesired usage and the robust self-assembly of heterogeneous structures. Nat. Commun. 6, 6203 (2015)


  42. Jacobs, W.M., Frenkel, D.: Predicting phase behavior in multicomponent mixtures. J. Chem. Phys. 139, 024108 (2013)


  43. Jacobs, W.M., Reinhardt, A., Frenkel, D.: Communication: theoretical prediction of free-energy landscapes for complex self-assembly. J. Chem. Phys. 142(2), 021101 (2015)


  44. Haxton, T.K., Whitelam, S.: Do hierarchical structures assemble best via hierarchical pathways? Soft Matter 9(29), 6851–6861 (2013)


  45. Whitelam, S., Schulman, R., Hedges, L.: Self-assembly of multicomponent structures in and out of equilibrium. Phys. Rev. Lett. 109(26), 265506 (2012)


  46. Levy, E.D., Pereira-Leal, J.B., Chothia, C., Teichmann, S.A.: 3D complex: a structural classification of protein complexes. PLoS Comput. Biol. 2(11), e155 (2006)


  47. Koyama, S.: Storage capacity of two-dimensional neural networks. Phys. Rev. E 65(1), 016124 (2001)


  48. Derrida, B., Gardner, E., Zippelius, A.: An exactly solvable asymmetric neural network model. EPL 4(2), 167 (1987)


  49. Nishimori, H., Whyte, W., Sherrington, D.: Finite-dimensional neural networks storing structured patterns. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top. 51(4), 3628–3642 (1995)


  50. Lang, A.H., Li, H., Collins, J.J., Mehta, P.: Epigenetic landscapes explain partially reprogrammed cells and identify key reprogramming genes. PLoS Comput. Biol. 10(8), e1003734 (2014)



Acknowledgements

We thank Michael Brenner, Nicolas Brunel, John Hopfield, David Huse, Stanislas Leibler, Pankaj Mehta, Remi Monasson, Sidney Nagel, Sophie Rosay, Zorana Zeravcic and James Zou for discussions. DJS was partially supported by NIH Grant No. K25 GM098875-02.

Author information

Correspondence to Arvind Murugan.

Appendices

Appendix 1: Blurring Mask

We use a blurring mask to easily visualize how closely a concentration pattern \(\vec {c}_a\) matches a memory \(m_\alpha \). At each pixel (i.e., position) x of a 2-d memory \(m_\alpha \), we consider all species in a square box of area q centered at that pixel and compute the overlap \(\chi (x)\) as the fraction of those species with high concentration. At position x, we then display the gray scale value \((1-\chi (x))\, g_{random} + \chi (x)\, g_{correct}\), where \(g_{correct}\) is the correct gray scale value and \(g_{random}\) is a randomly chosen gray scale value. That is, if every species around x has a high concentration, then \(\chi (x) = 1\) and we display the correct pixel; if all species have low concentrations, then \(\chi (x) = 0\) and we display a random gray scale value. Thus, the displayed image is a measure of the contiguity of high-concentration species in pattern \(\vec {c}_a\), in the spatial arrangement corresponding to memory \(m_\alpha \).
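
To make the display rule concrete, here is a minimal Python sketch of the mask. The window parametrization (a half-width s instead of the paper's box of area q), the array names, and the parameter values are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def blurring_mask(high, g_correct, s=1):
    """Blurring-mask display rule (sketch). high[x, y] is True when the
    species at pixel (x, y) has a high concentration; g_correct holds the
    memory's grayscale image. s is a hypothetical window half-width (the
    paper parametrizes the window by its area q instead)."""
    H, W = high.shape
    out = np.empty((H, W))
    for x in range(H):
        for y in range(W):
            # chi(x): fraction of high-concentration species in the window
            win = high[max(0, x - s):x + s + 1, max(0, y - s):y + s + 1]
            chi = win.mean()
            g_random = rng.random()  # randomly chosen grayscale value
            out[x, y] = (1 - chi) * g_random + chi * g_correct[x, y]
    return out

# Usage: a perfectly coherent pattern reproduces the memory exactly.
memory = rng.random((16, 16))
print(np.allclose(blurring_mask(np.ones((16, 16), bool), memory), memory))
```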

In the paper, we use \(\chi \) to compute the rate of nucleation within that region. Hence q must be larger than the critical nucleation seed at concentrations \(c_{low}\) and \(c_{high}\). But q must also be small enough that the computed \(\chi (x)\) is representative of the whole region of size q. For the images presented here, \(q = 8\) was found to satisfy all of these conditions.

With these conditions, the blurring mask provides an immediate visual predictor of nucleation since the blurriness determines the nucleation rate through an exponential function (see Eq. 4). If a memory appears coherent over a region, the spontaneous nucleation rate is high in that region and that memory will be rapidly self-assembled. Blurry images have greatly suppressed nucleation rates.

Appendix 2: Gillespie Simulation for Self-Assembly

In our model, particles have distinct binding sites with specific interactions. In the 1D model, each particle has two distinct binding sites, left and right, while in the 2D model, each particle has four distinct sites. In our simulations, particles cannot “turn around”; i.e., incoming particles in Fig. 7 always present their left binding site.

We use a Gillespie algorithm to simulate the kinetics shown in Fig. 7 and Fig. 3; the Gillespie algorithm provides an exact way of measuring physical time in discrete-time simulations. At each time step, we consider two kinds of processes: (1) all possible additions of a molecule of any species i to the boundary of the growing structure, and (2) all possible removals of molecules at the boundary of the growing structure.

We compute the free energy difference associated with each of these outcomes. For example, if species i is added to the boundary of the structure, the free energy changes by

$$\begin{aligned} \Delta F = - n E_0 + \mu _{i} \end{aligned}$$
(12)

where \(\mu _i\) is the chemical potential of species i and n is the number of bonds made by species i with its neighbors in the structure (which is, in turn, determined by the interaction matrix \(J^{tot}_{ij}\) described in the main text).

We assume that the kinetic rate associated with such a process is \(k = e^{-\frac{\Delta F}{2}}\). Removal rates are calculated with the same formula but with the appropriate sign reversals. One of these reactions is randomly chosen and implemented with probability proportional to the corresponding kinetic rate. Physical time is incremented by \(t \rightarrow t + \frac{1}{\sum _a k^+_a + \sum _x k^-_x}\), in accordance with Gillespie’s prescription.
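
As a concrete illustration of one such simulation loop, the following self-contained Python sketch grows a 1-d structure at a single tip. The three candidate moves and all parameter values (E0, q, the two chemical potentials) are illustrative stand-ins, not the settings used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative toy parameters (not the paper's simulation settings)
E0, q = 1.0, 2                 # bond energy and interaction range
mu_high, mu_low = 1.5, 4.0     # chemical potentials (high conc. = low mu)

def move_rates(length):
    """Candidate moves at the right tip: add a correct species (q bonds),
    add a wrong species (q - 1 bonds), or remove the tip molecule.
    Rates follow k = exp(-dF/2) with dF from Eq. 12."""
    rates = {
        "add_correct": np.exp(-(-q * E0 + mu_high) / 2.0),
        "add_wrong":   np.exp(-(-(q - 1) * E0 + mu_low) / 2.0),
    }
    if length > 1:  # the seed monomer is never allowed to disappear
        # sign-reversed dF; simplified: treats the tip as correctly bound
        rates["remove_tip"] = np.exp(-(q * E0 - mu_high) / 2.0)
    return rates

t, length = 0.0, 1             # seed with a single monomer
for _ in range(10000):
    rates = move_rates(length)
    names, k = list(rates), np.array(list(rates.values()))
    total = k.sum()
    choice = rng.choice(names, p=k / total)   # pick a move ∝ its rate
    length += -1 if choice == "remove_tip" else +1
    t += 1.0 / total                          # Gillespie time increment
print(f"final length {length} at physical time t = {t:.1f}")
```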

The real times associated with the final frames of the simulations in Fig. 3 are \(\vec {c_1}\): 221221; \(\vec {c_2}\): 168449; \(\vec {c_3}\): 141202; \(\vec {c_4}\): 355121; \(\vec {c_5}\): 182573; \(\vec {c_6}\): 389395 (all times in arbitrary units). These times measure the nucleation time in the single simulation run from which Fig. 3 was made; growth is much faster than nucleation in our parameter regime.

In this Gillespie simulation, we forbid nucleation at multiple points in the box. Instead, we seed the initial structure with a single random tile and do not let it disappear. This choice does not affect the results and is necessary since our simulations do not include diffusion; seeding with a single monomer amounts to following the fate of a generic monomer in the solution. Further, repeated simulation runs give the same results for a given initial overlap.

Finally, Eq. 12 can be written in a more intuitive form by defining \(f \equiv q E_0 - \mu > 0\), the “force” driving self-assembly forward (where q is the range of interactions in the 1-d model). If a species binds with energy \(E_0\) to only \(n = q - w\) of the q molecules in the growing tip in Fig. 7, then Eq. 12 reduces to,

$$\begin{aligned} \Delta F = -f + w E_0 \end{aligned}$$
(13)

Thus at low driving forces \(f \approx 0\), growth happens near equilibrium, since even a molecule making the maximal number q of strong bonds (and thus with \(w=0\)) binds almost reversibly. At high force \(f > 0\) (high concentrations), even species that fail to make \(w > 0\) of the q strong bonds with the tip can bind. We connect Eq. 13 with the corresponding equations for neural networks below.
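
For a purely illustrative numerical check of Eq. 13: take \(q = 8\), \(E_0 = 1\) and \(\mu = 6\), so that \(f = qE_0 - \mu = 2\). A species missing one strong bond (\(w = 1\)) then binds favorably, \(\Delta F = -f + wE_0 = -1 < 0\), whereas near equilibrium (\(f \approx 0\), i.e., \(\mu \approx qE_0\)) the same species would be rejected, since \(\Delta F = wE_0 = 1 > 0\).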

Appendix 3: Neural Network Simulation

We perform kinetic Monte Carlo simulations on place cell networks using paired spin flips; at each discrete time step, we randomly pick an ‘on’ neuron b and an ‘off’ neuron a and attempt to flip their states. The free energy change associated with such a move is,

$$\begin{aligned} \Delta F = -\sum _{c \in clump, c\ne a,b} (J_{ac} - J_{cb}) \end{aligned}$$
(14)

We accept such a move with probability \(e^{-\Delta F}\).

In our simulations, we drive the clump [20, 37] by modifying \(J_{ij} \rightarrow (1-f/q)J_{ij} + (f/q) A_{ij}\), where \(A_{ij}\) is an anti-symmetric component with \(A_{ij} = -A_{ji}\) and \(|A_{ij}| = |J_{ij}|\); this applies a driving force on the clump from left to right in each environment in Fig. 6.
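
The sketch below (in Python, with illustrative parameters and a 1-d ring of place cells rather than the 2-d environments of the paper) implements the paired spin-flip dynamics of Eq. 14 together with this driven coupling; since the driven matrix is asymmetric, the index order in Eq. 14 matters and is preserved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative ring of N place cells with range-q symmetric couplings
N, q, f = 60, 6, 1.0
J = np.zeros((N, N))
A = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        right, left = (j - i) % N, (i - j) % N
        if 0 < right <= q // 2 or 0 < left <= q // 2:
            J[i, j] = 1.0
            # rightward bonds get +, leftward get -, so A_ij = -A_ji
            A[i, j] = 1.0 if 0 < right <= q // 2 else -1.0

J_driven = (1 - f / q) * J + (f / q) * A   # driven coupling matrix

def delta_F(state, a, b, Jm):
    """Eq. 14: free-energy change for turning ON neuron a, OFF neuron b."""
    clump = np.flatnonzero(state)
    clump = clump[clump != b]
    return -np.sum(Jm[a, clump] - Jm[clump, b])

state = np.zeros(N, dtype=bool)
state[:q] = True                           # initial clump of q 'on' neurons
for _ in range(20000):
    a = rng.choice(np.flatnonzero(~state)) # an 'off' neuron
    b = rng.choice(np.flatnonzero(state))  # an 'on' neuron
    if rng.random() < np.exp(-delta_F(state, a, b, J_driven)):
        state[a], state[b] = True, False   # accept with prob. exp(-dF)
print("clump now at neurons", np.flatnonzero(state))
```

With \(f > 0\), accepted moves preferentially turn off the left edge of the clump and turn on neurons at its right edge, so the clump drifts rightward around the ring, mirroring tip growth in the self-assembly picture.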

We can build some intuition for the driving force f, and connect with self-assembly, by re-writing \(\Delta F\) for some low energy moves. The left-most neuron in a clump (e.g., neuron 4 in Fig. 7) is the easiest to turn off, with a cost \(J_{off} = q (E_0 - f/q) = q E_0 - f\). Turning on a neuron a to the right of the clump yields an energy \(J_{on} = (q - w)E_0\), where \(q-w\) is the number of strong rightward connections neuron a makes with neurons in the clump. Hence the free energy change in moving the clump, by turning off the left-most neuron (4 in Fig. 7) and turning on neuron a, can be written as,

$$\begin{aligned} \Delta F_{Clump} = -f + w E_0 \end{aligned}$$
(15)

Such a move is most favorable if \(w=0\), i.e., if the new neuron a is fully connected to the clump neurons, so that the clump moves without breaking up (e.g., neuron 8 in Fig. 7).

Comparing Eqs. 15 and 13, we see that low energy transformations of the clump are in correspondence with transformations of the growing structure.


Cite this article

Zhong, W., Schwab, D.J. & Murugan, A. Associative Pattern Recognition Through Macro-molecular Self-Assembly. J Stat Phys 167, 806–826 (2017). https://doi.org/10.1007/s10955-017-1774-2
