Systems, Environments, and Soliton Rate Equations: Toward Realistic Modeling

In order to solve a system of nonlinear rate equations one can try to use soliton methods. The procedure involves three steps: (1) find a 'Lax representation' in which all the kinetic variables are combined into a single matrix $\rho$ and all the kinetic constants are encoded in a matrix H; (2) find a Darboux-Bäcklund dressing transformation for the Lax representation $i\dot{\rho} = [H, f(\rho)]$, where f models a time-dependent environment; (3) find a class of seed solutions $\rho = \rho[0]$ that lead, via a nontrivial chain of dressings $\rho[0] \rightarrow \rho[1] \rightarrow \rho[2] \rightarrow \dots$, to new solutions that are difficult to find by other methods.
The last step is not trivial, since a non-soliton method has to be employed to find an appropriate initial $\rho[0]$. Procedures that lead to a correct $\rho[0]$ have been discussed in the literature only for a limited class of H and f. Here, we develop a formalism that works for practically any H and any explicitly time-dependent f. As a result, we are able to find exact solutions of a system of equations describing an arbitrary number of species interacting through (auto)catalytic feedbacks, with general time-dependent parameters characterizing the nonlinearity. Explicit examples involve up to 42 interacting species.


Introduction
The idea that formally 'quantum' structures may have interdisciplinary applications in population dynamics is not new, and can be traced back at least to the works of Jørgensen (Jørgensen 1990, 1995; Jørgensen et al. 2007) and, implicitly, Ulanowicz (Ulanowicz 1997, 1999, 2009) in ecological modeling. Jørgensen makes an explicit reference to uncertainty principles, whereas Ulanowicz stresses the role of propensity theory (Popper 1982). The fact that propensity is naturally related to quantum probability was intuitively felt by Popper himself, but a more recent analysis (Ballentine 2016) makes it very clear that quantum probabilities are in fact propensities. A theory that involves propensities and uncertainty relations cannot be formally very different from quantum mechanics, at least from a certain point of view.
That such a possibility exists is not so surprising if one takes into account that the so-called quantum structures occur in many areas of science, and are in fact ubiquitous (Aerts 1986; Khrennikov 2010). If one adds that ecological, chemical or biological models necessarily involve autocatalysis, one is almost inevitably led to the formalism of nonlinear density-matrix equations (Aerts et al. 2013). The latter observation may be regarded as an outline of a whole research program for a generalized population dynamics. The present paper presents some results of this program, generalizing and extending the scope of (Aerts et al. 2013). The aim is to go beyond the proof-of-principle and toy models discussed in the literature so far, and to develop a formalism that is flexible enough for real-life modeling.
Nonlinear density-matrix (von Neumann) equations discussed in (Aerts et al. 2013) belong to a broad class of soliton systems. The equations were originally discovered in the context of fundamental physics (Czachor 1993, 1997) as a candidate theory for a putative nonlinear generalization of quantum mechanics. It was only later understood (Leble and Czachor 1998) that the system of differential equations one arrives at is in fact a soliton one.
A practical definition of a soliton system is one where soliton techniques apply. Since soliton systems are known to possess a kind of universality, it is quite typical that a soliton equation discovered in one domain of science finds applications in other, completely unrelated fields. The classic example is the so-called nonlinear Schroedinger equation, whose applications range from waves on the deep ocean (Zakharov 1968) to optical solitons (Kibler et al. 2010). It was not so surprising that the same happened with soliton von Neumann equations, which turned out to be equivalent to systems of coupled ordinary differential equations similar to rate equations occurring in biological and chemical modeling, while with a slight change of interpretation their dynamics could be related to replicating two-strand quantum systems, formal analogues of DNA (Aerts and Czachor 2007), or to n-level atoms in quantum optics. In yet another reformulation, von Neumann soliton equations were found to contain as particular cases various known or new lattice systems (Cieśliński et al. 2003).
The greatest advantage of soliton systems is the possibility of solving them exactly. The method employed here, originally introduced for a simple quadratic nonlinearity in (Leble and Czachor 1998), belongs to the class of Darboux-Bäcklund dressing transformations (Doktorov and Leble 2007). The essence of the technique lies in finding a transformation which maps one solution $\rho(t)$ of a given equation into another solution $\rho[1](t)$ of the same equation. Not all systems of differential equations possess this property, but those that do belong to a soliton class. Once we have found the transformation, it remains to find a 'seed' solution $\rho = \rho[0]$ which will allow us to start the chain of transformations $\rho[0] \rightarrow \rho[1] \rightarrow \rho[2] \rightarrow \dots$. A difficulty is that this initial step may be highly nontrivial.
There are two problems. The first one is obvious: $\rho[0]$ has to be found by other means. Sometimes it is easy to find a seed solution. For example, the celebrated Korteweg-de Vries soliton equation has a trivial zero solution which is nevertheless nontrivial enough to start the chain of transformations, leading to a solitary wave already after a single step (Matveev and Salle 1991). In the von Neumann case $\rho[0] = 0$ implies $\rho[1] = 0$, a fact illustrating the second difficulty. Namely, an appropriate theorem guarantees that a solution $\rho$ will generate a solution $\rho[1]$. However, the theorem does not guarantee that we will be happy with $\rho[1]$. Examples are known where, after tedious calculations, one arrives at $\rho[1](t) = \rho(t)$, or $\rho[1](t) = \rho(t - t_0)$. The art of soliton modeling is to find appropriate seed solutions by means of non-soliton methods.
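For KdV the triviality of the seed can be made explicit. In one common sign convention ($u_t - 6uu_x + u_{xxx} = 0$), a single Darboux step applied to the zero seed $u[0] = 0$, with the eigenfunction $\psi = \cosh\big(\kappa(x - 4\kappa^2 t)\big)$ of the zero-potential Lax operator, yields

```latex
u[1](x,t) \;=\; u[0] - 2\,\partial_x^2 \ln\psi
         \;=\; -2\kappa^2\,\operatorname{sech}^2\!\big(\kappa(x - 4\kappa^2 t)\big),
```

a solitary wave of amplitude $2\kappa^2$ moving with velocity $4\kappa^2$ (Matveev and Salle 1991).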
Let us illustrate the point with the simplest yet highly nontrivial example of a soliton von Neumann equation,
$$i\dot{\rho} = [H, \rho^2], \qquad (1)$$
solved for the first time by soliton methods in (Leble and Czachor 1998). Here $\dot{\rho} = d\rho/dt$ and $[A, B] = AB - BA$ is the commutator. H is an operator which, from the point of view of rate equations, encodes the values of possible kinetic constants (Aerts et al. 2013). So, we have to begin with a solution $\rho$ of (1), and then we know, by the theorem, that some $\rho[1]$ will again satisfy (1). But how to find an initial $\rho$ if we do not know how to solve (1)? A first try is a $\rho$ which satisfies $\rho^2 = \rho$, so that $\rho(t) = e^{-iHt}\rho(0)e^{iHt}$ is a solution of (1): the equation then becomes mathematically identical to the linear von Neumann equation from quantum mechanics, so we know everything about it. However, it turns out that in such a case $\rho[1]^2 = \rho[1]$ as well, and the dynamics of the new solution is effectively as linear as that of $\rho$. One of the tricks that give the right seed solution, invented in (Leble and Czachor 1998), is to find a $\rho$ satisfying $\rho^2 = a\rho + \Delta$, where a is a number and $\Delta$ is an operator commuting with H. The equation for $\rho$ is then effectively linear and can be easily solved. Still, an appropriate choice of a and $\Delta$ guarantees that $\rho[1]$ is qualitatively different from $\rho$, and some new purely nonlinear effects occur. Note that $\Delta = F(H)$, for some function F, so the possibilities of finding an appropriate $\Delta$ may crucially depend on the properties of H. The solutions $\rho[1]$ found in (Leble and Czachor 1998) exhibited a new type of soliton phenomenon, termed self-scattering. The theorem on dressing transformations was further generalized in (Ustinov et al. 2001) to the general equation $i\dot{\rho} = [H, f(\rho)]$, where f is basically arbitrary, and explicit solutions were found. Subsequent works showed that self-scattering may look similar to the opening of a double spiral, replication (Aerts and Czachor 2007), or morphogenesis (Aerts et al. 2003). Self-scattering also leads to a specific form of pattern formation, as shown in Fig. 1.
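A minimal numerical check of this trick, with $\Delta$ chosen proportional to the identity (the simplest instance of $\Delta = F(H)$); all matrices below are hypothetical examples, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Arbitrary Hamiltonian with unevenly spaced spectrum (hypothetical values)
H = np.diag([0.3, 1.1, 2.9])

# Seed rho(0) with two-point spectrum {1.5, -0.5}: then rho^2 = a*rho + d*I,
# where lambda^2 - a*lambda - d = 0, i.e. a = 1.0, d = 0.75
a, d = 1.0, 0.75
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(M)                          # random unitary
rho0 = U @ np.diag([1.5, -0.5, -0.5]) @ U.conj().T

comm = lambda X, Y: X @ Y - Y @ X

# The algebraic identity behind the trick: [H, rho^2] = a*[H, rho]
assert np.allclose(comm(H, rho0 @ rho0), a * comm(H, rho0))

# Hence rho(t) = e^{-iaHt} rho(0) e^{iaHt} solves i rho' = [H, rho^2]:
# the nonlinear equation reduces to an effectively linear one.
t = 0.7
rho_t = expm(-1j * a * H * t) @ rho0 @ expm(1j * a * H * t)
assert np.allclose(comm(H, rho_t @ rho_t), a * comm(H, rho_t))
print("seed with rho^2 = a*rho + d*I evolves linearly")
```

As the text notes, such a seed is uninteresting by itself; its value lies in the nontrivial $\rho[1]$ it generates.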
The nonlinearity is here quadratic, $f(\rho) = \rho^2$, and H is a quantum harmonic-oscillator Hamiltonian. Formation of the pattern from Fig. 1 is described, in space-time variables, by
$$i\dot{\rho}(t,y,y') = \left(-\partial_y^2 + \partial_{y'}^2 + y^2 - y'^2\right)\int dz\,\rho(t,y,z)\,\rho(t,z,y'). \qquad (6)$$
The pattern itself is obtained through a contour plot of $A(t,y) = \rho(t,y,y)$. Such a two-dimensional solution is sometimes termed a Harzian. The example of (6) shows that soliton von Neumann equations, when written in space-time variables, are mathematically similar to infinite-dimensional reaction-diffusion systems (Murray 1977; Fife 1979; Cross and Hohenberg 1993; Aerts et al. 2003), whose general form is $\dot{X} = \hat{D}X + f(X)$, where $\hat{D} = A\nabla^2$, A is an (in general complex) matrix, and X, f(X) are vectors. Particular cases are the Swift-Hohenberg, $\lambda$-$\omega$, and Ginzburg-Landau models (Ginzburg and Landau 1950; Howard and Kopell 1977; Swift and Hohenberg 1977; Stewartson and Stuart 1971).
A transition between the reaction-diffusion and rate-equation forms of the soliton system is obtained if one appropriately chooses the basis in the space of solutions. In the harmonic-oscillator case the basis is given by Hermite polynomials times a Gaussian, which are eigenfunctions of H. The pattern from Fig. 1 is obtained if at t = 0 one starts with $\rho$ written as a combination of such Hermite-polynomial eigenfunctions (in the variable y).
The choice of a harmonic-oscillator H is here not accidental, and is related to the structure of the spectrum of H. Namely, it is known that the harmonic oscillator is an example of a system whose energy levels are equally spaced: $h_{n+1} - h_n$ is the same for all n. Many different Hamiltonians share this property. However, when one translates the von Neumann equation into a set of coupled rate equations, one finds that the 'energy levels' effectively play the role of kinetic constants k, determining the dynamics of a kinetic (biological, chemical, ecological...) process.
Still, it is very unlikely that real-life modeling will encounter a case where the kinetic constants possess this type of regularity. On the contrary, it is typical that one will deal with basically arbitrary values of k. Is it a problem? Yes, and a serious one: practically all the explicit procedures for finding a seed $\rho[0]$ that one can find in the literature are based on an H whose spectrum is equally spaced, or at least contains an equally spaced subset of eigenvalues (my unpublished preprint (Kuna 2004) seems the only exception). So, the first goal of the present paper is to describe a method that works for every H whose spectrum is discrete, and thus with no restrictions whatsoever on the possible values of admissible kinetic constants (although still not for the most general form of nonlinearity, see the last section).
The second goal is to allow for arbitrary, explicitly time-dependent functions in (5), $i\dot{\rho} = [H, f_t(\rho)]$. The formulas we will discuss will be valid for any choice of the coefficient functions $f_j(t)$. In practical examples, however, $\rho(t)$ is an $n\times n$ Hermitian matrix (corresponding to an ecosystem with up to $n^2$ populations existing at time t). For a finite n the infinite sum in (8) can be replaced by a finite sum, no matter which f we select (see "Appendix"). So, practically, our formulas are valid for nonlinearities described by polynomials of any order, with arbitrary time-dependent coefficients. This is the second element extending our results beyond all that has been known on soliton von Neumann equations so far. The number of independent populations encoded in an $n\times n$ matrix can be reduced by imposing constraints such as $\mathrm{Tr}\,\rho = 1$ or the like.

Now, before we describe all the necessary technicalities and examples, let us clarify one more point. It is known that ecosystems such as plankton may involve tens or hundreds of species (Hutchinson 1961; Huisman and Weissing 1999; Wilson and Abrams 2005; Allesina and Levine 2011). There is practically no non-soliton technique that would allow us to find an exact solution for a system consisting of, say, 100 species coupled by various catalytic and autocatalytic feedback loops. Here, we will give examples of explicit solutions $\rho[1](t)$ corresponding to 30 or 42 populations, interacting via (auto)catalytic feedbacks, in environments that change in time. But in order to achieve this, we have to start with an appropriate seed solution $\rho(t)$. The way we proceed is the following. We first write the Hamiltonian H in a block-diagonal form (this is always possible since H is Hermitian), where the blocks $H^{(j)}$ are of dimension 2 or 3. We then choose the seed solution $\rho$ in a block-diagonal form, and solve the equation in question. The solution $\rho(t)$ is typically not a very impressive one, as it involves an effectively linear coupling between pairs or triples of species. But recall that in the Korteweg-de Vries case the seed solution is just zero. What we are interested in is the solution $\rho[1]$ it generates, and perhaps also $\rho[2]$, $\rho[3]$, etc., since the procedure, once successfully started, can be iterated an arbitrary number of times. In our case, the seed solution is chosen in such a way that the dressed $\rho[1]$ loses the block-diagonal structure: any of its $2\times 2$, $3\times 3$, $2\times 3$, or $3\times 2$ blocks can be a nonzero matrix. In standard terminology $\rho[1]$ is a single-soliton solution ($\rho[N]$ would be an N-soliton one). The interaction of all the species of a plankton-type community is described here through a soliton effect.
We have to make sure that all $\rho[1]^{(kl)} \neq 0$, which is a nontrivial task described in detail in the following sections.
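The finite-sum reduction invoked above follows from the Cayley-Hamilton theorem: any function of an $n\times n$ Hermitian matrix equals a polynomial of degree at most $n-1$ in that matrix. A quick numerical illustration (the matrix and the function below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = (M + M.conj().T) / 2            # Hermitian; generically n distinct eigenvalues

w, V = np.linalg.eigh(rho)
f = np.exp                            # any scalar function would do

# Spectral calculus: f(rho) = V f(w) V^dagger
f_rho = (V * f(w)) @ V.conj().T

# Cayley-Hamilton reduction: f(rho) is a polynomial of degree < n in rho;
# its coefficients solve the Vandermonde system sum_j c_j w_i^j = f(w_i)
c = np.linalg.solve(np.vander(w, increasing=True), f(w))
poly = sum(cj * np.linalg.matrix_power(rho, j) for j, cj in enumerate(c))

assert np.allclose(f_rho, poly)
print("f(rho) reduced to a degree-%d polynomial in rho" % (n - 1))
```

If f has time-dependent coefficients, the same reduction holds at each instant, with the polynomial coefficients inheriting the time dependence.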

States
A generic state of an ecosystem occurs only once in its history. This is basically why states represent propensities (tendencies) and not probabilities defined by the frequency approach.
(As opposed to quantum mechanics, properties of an ecosystem cannot be tested on a great number of identically prepared ecosystems.) A state is described by an operator (or just a matrix) $\rho$ satisfying: (a) Hermiticity, $\rho^{\dagger} = \rho$; (b) positivity, $\rho > 0$; and (c) normalization, $\mathrm{Tr}\,\rho = 1$. A Hermitian operator is positive if it does not have negative eigenvalues. Any set of Hermitian positive operators $P_k$ ('propositions') satisfying the resolution of unity $\sum_k P_k = I$ (I denotes the identity operator) defines a set of propensities by the formula $p_k = \mathrm{Tr}(\rho P_k)$ (Ballentine 2016). We say that two propensities $p_k = \mathrm{Tr}(\rho P_k)$ and $p_l = \mathrm{Tr}(\rho P_l)$ are complementary if the commutator $[P_k, P_l] = P_k P_l - P_l P_k = Q_{kl}$ is nonzero. The propensities are then complementary in the sense that their standard deviations, $(\Delta p)^2 = \mathrm{Tr}(\rho P^2) - \big(\mathrm{Tr}(\rho P)\big)^2$, satisfy the uncertainty relation $\Delta p_k\,\Delta p_l \geq \tfrac{1}{2}\big|\mathrm{Tr}(\rho Q_{kl})\big|$. The model is non-Kolmogorovian, i.e. it does not satisfy all the axioms of probability formalized in (Kolmogorov 1956). One can say that complementary propensities are associated with different contexts. A set of propensities $p_k$ is maximal if its corresponding set of propositions $P_k$ sums to I. Complementary propensities belong to different maximal sets. Propensities belonging to different maximal sets do not have to sum to 1. It is sometimes useful to consider models where $\mathrm{Tr}\,\rho \neq 1$. The propensities are then replaced by more general kinetic variables, while Hermiticity and positivity of $\rho$ guarantee that the associated variables are nonnegative at any moment of time.
A single nonlinear density-matrix equation can be treated as a very compact form of a set of nonlinear rate equations involving variables belonging to several maximal sets. In order to construct the nonnegative variables occurring in the rate equations one selects an orthonormal basis $\{|n\rangle;\ \langle n|m\rangle = \delta_{nm}\}$ in the linear space which forms the domain of $\rho$. Then three families of propositions are constructed: the complete set of projectors on basis vectors, $\{P_n = |n\rangle\langle n|\}$, supplemented by two sets of projectors, $\{P_{jk} = |jk\rangle\langle jk|\}$ and $\{P'_{jk} = |jk'\rangle\langle jk'|\}$, constructed from linear combinations of the basis vectors for $j \neq k$. Decomposing matrix elements of $\rho$ into real and imaginary parts, $\rho_{nm} = \langle n|\rho|m\rangle = x_{nm} + i y_{nm}$, we obtain three families of nonnegative variables: $p_n = \rho_{nn}$, $p_{jk} = \mathrm{Tr}(\rho P_{jk})$, and $p'_{jk} = \mathrm{Tr}(\rho P'_{jk})$. It is interesting that replicator equations, a typical tool in evolutionary game theory (Smith 1982; Hofbauer and Sigmund 1998; Friedman and Sinervo 2016), can be written in a nonlinear density-matrix von Neumann form, but with probabilities interpretable as $p_n$ (Gafiychuk and Prykarpatsky 2004) ($p_{jk}$ and $p'_{jk}$ have no clear interpretation: $x_{jk} = \sqrt{p_j p_k}$, $y_{jk} = 0$). If $\rho$ is a positive but non-normalized solution ($\rho > 0$, $\mathrm{Tr}\,\rho \neq 1$), then the variables are nonnegative, but cannot be treated as probabilities or propensities. Of particular interest is the case of a dynamics which does not preserve positivity of $\rho(t)$ for all t. In such a case, starting with a positive initial condition $\rho(0) > 0$, we will find dynamical variables that are nonnegative only for certain finite amounts of time. This type of solution could be interpreted as a system where certain species disappear or appear after some time. Unfortunately, this very interesting possibility is beyond the scope of the present paper.
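The propensity formalism of this section can be illustrated by a minimal two-dimensional sketch; the state and the two maximal sets below are hypothetical examples:

```python
import numpy as np

# A 2x2 state: Hermitian, positive, normalized (hypothetical numbers)
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])
assert np.allclose(rho, rho.conj().T)
assert np.all(np.linalg.eigvalsh(rho) > 0) and np.isclose(np.trace(rho).real, 1.0)

def propensities(rho, projectors):
    return [np.trace(rho @ P).real for P in projectors]

# Maximal set 1: projectors on the basis vectors
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
# Maximal set 2: projectors on linear combinations (a different context)
plus  = np.array([[0.5, 0.5], [0.5, 0.5]])
minus = np.array([[0.5, -0.5], [-0.5, 0.5]])

for Ps in ([P0, P1], [plus, minus]):
    assert np.isclose(sum(propensities(rho, Ps)), 1.0)  # each maximal set sums to 1

# Complementarity: propositions from different maximal sets do not commute
Q = P0 @ plus - plus @ P0
print("complementary:", np.linalg.norm(Q) > 0)
```

Each maximal set sums to 1 on its own, while propensities taken across the two sets do not, which is the non-Kolmogorovian feature discussed above.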

Hierarchic Organization of Environments
Environments are organized hierarchically (Allen and Starr 1982). Subsystems are coupled to environments non-symmetrically (Aerts et al. 2013). The mathematical model is built on the basis of a hierarchically coupled set of rate equations. The system described by $\rho_1$ plays the role of the environment for the remaining subsystems; the one described by $\rho_2$ is the environment for $\rho_3$, $\rho_4$, and so on. The collection of rate equations has to be solved in a hierarchical way. One begins with $\rho_1$, since the associated differential equation is closed. Once one finds a given solution $\rho_1(t) = r(t)$, one switches to the next level, where r(t) enters as a time-dependent coefficient. At each level of the hierarchy (perhaps with the exception of $\rho_1$) one has to solve a system of coupled nonlinear rate equations with time-dependent coefficients. The formalism described in the present paper assumes that these time-dependent coefficients are arbitrary.
All the examples discussed below can be understood as corresponding to an n-th level of the hierarchy.
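A toy sketch of this hierarchical solution strategy, with hypothetical scalar (logistic-type) rates in place of the matrix equations: the closed level-1 equation is solved first, and its solution r(t) enters level 2 only as a time-dependent coefficient.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Level 1 (the environment) is closed and is solved first
def level1(t, n):
    return n * (1.0 - n)                      # hypothetical logistic rate

sol1 = solve_ivp(level1, (0, 10), [0.1], dense_output=True, rtol=1e-8)
r = lambda t: sol1.sol(t)[0]                  # r(t): the found level-1 solution

# Level 2 sees level 1 only through the time-dependent coefficient r(t)
def level2(t, n):
    return n * (r(t) - n)

sol2 = solve_ivp(level2, (0, 10), [0.5], rtol=1e-8)
print(f"r(10) = {r(10.0):.3f}, level-2 population at t = 10: {sol2.y[0, -1]:.3f}")
```

The same pattern iterates down the hierarchy: each level is a system with time-dependent coefficients supplied by the levels above it.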

Dressing Transformation from a Zakharov-Shabat Problem
We shall consider a pair of Zakharov-Shabat (ZS) problems (Matveev and Salle 1991),
$$z|\varphi\rangle = (\rho - \mu H)|\varphi\rangle, \qquad (18)$$
$$i|\dot{\varphi}\rangle = \frac{1}{\mu}\,A|\varphi\rangle, \qquad (19)$$
where z is a t-independent complex number. Differentiating (18) over t and multiplying (19) by $-iz$ we obtain two equations with identical left-hand sides. Compatibility of the right-hand sides is equivalent to
$$i\dot{\rho} = [H, A], \qquad (20)$$
$$[A, \rho] = 0. \qquad (21)$$
The second condition means that A is an arbitrary function of $\rho$, i.e. $A = f(\rho)$. Therefore the first condition is a nonlinear equation with respect to $\rho$. This is our nonlinear von Neumann equation
$$i\dot{\rho} = [H, f(\rho)]. \qquad (22)$$
We say that (18)-(19) is a Lax pair for (22), whereas (22) is a Lax representation of a system of rate equations. This Lax pair was found in (Ustinov et al. 2001) and later generalized to yet more complicated von Neumann type equations in (Ustinov and Czachor 2002; Cieśliński et al. 2003).
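A characteristic consequence of the Lax representation (22) is that the flow is isospectral: since $[f(\rho), \rho] = 0$, all traces $\mathrm{Tr}\,\rho^m$, and hence the spectrum of $\rho(t)$, are conserved. This can be verified numerically (H, the initial condition, and f below are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
H = np.diag([0.0, 1.3, 2.1])                   # unevenly spaced spectrum is fine
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho0 = (M + M.conj().T) / 2                    # arbitrary Hermitian initial condition

f = lambda r: r @ r                            # quadratic nonlinearity f(rho) = rho^2

def rhs(t, y):
    r = y.reshape(3, 3)
    F = f(r)
    return (-1j * (H @ F - F @ H)).ravel()     # i rho' = [H, f(rho)]

sol = solve_ivp(rhs, (0, 2.0), rho0.ravel(), rtol=1e-10, atol=1e-10)
rho_T = sol.y[:, -1].reshape(3, 3)

# Isospectrality: the eigenvalues of rho(t) are constants of motion
assert np.allclose(np.linalg.eigvalsh(rho_T), np.linalg.eigvalsh(rho0), atol=1e-6)
print("spectrum of rho(t) is a constant of motion")
```

Isospectrality is exactly why the eigenvalues of the seed survive the dressing chain and constrain the admissible kinetic variables.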
All the examples discussed below are based on the following theorem, which is a particular case of the general result discussed, for example, in (Cieśliński et al. 2003).

Theorem 1 Assume $|\varphi\rangle$ is a solution of (18) and (19), and $\langle\chi|$ is a solution of the conjugate pair of ZS problems, with the spectral parameter $\mu$ replaced by $\nu$. In this case, if $\rho$ and A fulfill (20) and (21), then the dressed $\rho[1]$ and A[1] fulfill (20) and (21) as well.

The notion of compatibility used before (20) is rather misleading, because it means that (20)-(21) is a sufficient condition for the existence of solutions of the ZS problems. It is a part of the language we encounter in this field. The relation between (20)-(21) and the ZS problems is given by Theorem 1. A more precise explanation of this theorem, as well as of the dressing transformation, is given in the "Appendix". Setting $\nu = \bar{\mu}$ one obtains a unitary T, so that the dressing transformation preserves Hermiticity and positivity of $\rho[1]$.

Constructing Seed Solutions
This is the core part of the work. We will show how to construct seed solutions for all H, any time-dependent f, and arbitrary numbers of interacting species. In order to eliminate typing errors, all the explicit examples discussed later on in the paper were cross-checked in Wolfram Mathematica.

Building Seed Solutions with 2 × 2 Blocks
Assume the Hamiltonian has a discrete spectrum, $H = \sum_n h_n Q_n$, where $Q_n$ and $h_n$ are the spectral projectors and eigenvalues of H, respectively. We start from an even-dimensional case. Now group all spectral projectors in pairs related to different eigenvalues $h_k$ and $h_{k'}$, generating projectors onto mutually orthogonal two-dimensional subspaces. For simplicity of notation we write $H^{(k)}$ for the resulting $2\times 2$ block, with eigenvalues $h^{(k)}_1$ and $h^{(k)}_2$.

Here, a notational digression. Whenever we perform explicit matrix calculations, we implicitly identify operators restricted to the k-th subspace with $2\times 2$ matrices. The same notation applies to $\rho^{(k)}$, $|\varphi^{(k)}\rangle$, $|\chi^{(k)}\rangle$, etc. So, although, say, $|\varphi^{(1)}\rangle$ and $|\varphi^{(2)}\rangle$ are, for calculational purposes, treated as 2-dimensional column matrices, their sum $|\varphi^{(1)}\rangle + |\varphi^{(2)}\rangle$ is understood as a 4-dimensional column matrix. A mathematical purist would therefore rather write the sum as a direct sum. The convention we employ will not lead to ambiguities.

In a two-dimensional space, all functions of the operator $\rho^{(k)}$ (which is not proportional to the identity operator, so it has two different eigenvalues $\lambda^{(k)}_1$ and $\lambda^{(k)}_2$) are equal to a linear function of $\rho^{(k)}$, by the Cayley-Hamilton theorem (for details see "Appendix"): $f(\rho^{(k)}) = a^{(k)} I + b^{(k)}\rho^{(k)}$. Note that although the eigenvalues $\lambda^{(k)}_j$ are time independent, the coefficients $a^{(k)}$, $b^{(k)}$ do depend on time if f itself is time dependent. Therefore any such seed solution is generated from $\rho^{(k)}(0)$ by an effectively linear evolution. Let us stress again that $f(\rho^{(k)}(0))$ is time-dependent because f is a polynomial in $\rho^{(k)}(0)$, but with coefficients which depend on time. It is convenient to rewrite the Lax pair on the subspace $\mathcal{H}^{(k)} = Q^{(k)}\mathcal{H}$; restricted to this subspace, the second ZS problem turns into the evolution equation (27), and the first one into the eigenvalue problem (28).
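The two-dimensional Cayley-Hamilton reduction $f(\rho^{(k)}) = a^{(k)} I + b^{(k)}\rho^{(k)}$ can be checked directly, with $b = \big(f(\lambda_1) - f(\lambda_2)\big)/(\lambda_1 - \lambda_2)$ and $a = f(\lambda_1) - b\lambda_1$; the block and the scalar function below are arbitrary examples:

```python
import numpy as np

# A 2x2 Hermitian block with distinct eigenvalues (hypothetical numbers)
rho = np.array([[0.8, 0.3 - 0.2j],
                [0.3 + 0.2j, 0.2]])
l1, l2 = np.linalg.eigvalsh(rho)

f = lambda x: np.sin(x) * np.exp(x)       # any scalar function

# In dimension two, f(rho) = a*I + b*rho with:
b = (f(l1) - f(l2)) / (l1 - l2)
a = f(l1) - b * l1

w, V = np.linalg.eigh(rho)
f_rho = (V * f(w)) @ V.conj().T           # f(rho) by spectral calculus

assert np.allclose(f_rho, a * np.eye(2) + b * rho)
print("f(rho) is a linear function of rho in dimension two")
```

With a time-dependent f the same formulas hold at each instant, so a(t) and b(t) simply inherit the time dependence of the coefficients of f.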
Solving (27) and using the relation between $|\varphi^{(k)}\rangle$ and $|\chi^{(k)}\rangle$, we obtain the explicit time dependence of the eigenvectors. It is extremely important to realize that z and $\mu$ are the same for all the subspaces indexed by k. This degeneracy of the eigenvalue problem was one of the crucial tricks that led in (Leble and Czachor 1998) to the discovery of self-scattering solutions. The same is true here, but the necessity of maintaining the degeneracy is one of the difficulties one encounters when trying to employ the method in arbitrary dimensions, and for arbitrary H.
Since $\mu$ and z are independent of k, any linear combination $|\varphi(t)\rangle = \sum_k c^{(k)}|\varphi^{(k)}(t)\rangle$ fulfils the ZS problems and can therefore be used to build the projector P from the dressing transformation, where we sum over all the blocks. U simultaneously generates the evolution of the seed solution. The self-scattering phenomenon comes from the coefficients $c_{kl}(t)$. The essence of finding a nontrivial dressing transformation lies in guaranteeing that the $c_{kl}(t)$ depend on time. Otherwise, the dynamics would be as linear as that of the seed solution.
To construct the dressing transformation it is enough to find two-dimensional operators $\rho^{(k)}$ which fulfill (28) for fixed H, $\mu$, and z.
The matrix representation of $\rho^{(k)}$ in the basis of eigenvectors of H can be written down explicitly. The eigenvalue problem (28) implies two conditions, in which we write $\mu = \alpha + i\beta$ and $z = x + iy$. Note that $\alpha$, $\beta$, x, y are independent of k. The two conditions fix two parameters of the $2\times 2$ matrix $\rho^{(k)}$ as functions of $H^{(k)}$, $\mu$, z and $\lambda^{(k)}_2$. The fourth parameter, $w^{(k)}$, remains undetermined. Condition (32) implies that $(h^{(k)}_1 + y)(h^{(k)}_2 + y)$ has to be negative. This means that $-y$ lies between $h^{(k)}_1$ and $h^{(k)}_2$, for any k. It is easy to check that for any H we can also set all the parameters in such a way that $\rho$ will be positive and normalized.
For such a $\rho^{(k)}(0)$ the eigenvector from (28) is given explicitly in terms of the block parameters. Now, we can use an equivalent form of the dressing transformation (Leble and Czachor 1998; Kuna et al. 1999), expressed through a commutator with H, and obtain (36). The latter formula shows another condition one has to guarantee in order to make $\rho[1](t)$ qualitatively different from $\rho(t)$: the commutator at the right-hand side of (36) must be nonzero. Representing the off-diagonal blocks as $\rho[1]^{(kl)} = 2i\,c_{kl}(t)$ times a fixed matrix factor, we see that all these blocks can consist of nonzero elements, leading to a coupling between all the pairs of species in a given population. Note that the construction does not depend on the number of species.

Example: 3 × 3 $\rho[1](t)$ Constructed from a Single 2 × 2 Seed Block
We are now in a position to build a wide variety of solutions to a large class of problems. Of course, the formulas describing the general case are quite complicated, so let us simplify the discussion by setting $w^{(k)} = 0$ and $h^{(k)}_2 + y < 0$; we then obtain a simple $2\times 2$ block which is Hermitian but not positive. We can use this block for constructing the transformation in an even-dimensional case. Let us skip the index k. We can make the seed solution odd-dimensional by adding the 1-dimensional block $\rho^{(2)} = 0$, implying $y = -h_3$, because (28) becomes a numerical relation between $\mu$ and z for $\alpha = x = 0$. We assume $\beta > 0$, therefore $h_1 > h_3 > h_2$. Under all these assumptions our seed solution becomes a Hermitian matrix with a single nonzero $2\times 2$ block. To make our example yet more explicit we use $f(\rho) = F(t)\rho^2$, for which our $\rho$ is a constant solution because $[H, \rho^2] = 0$. The dressing transformation then produces a full $3\times 3$ solution $\rho[1](t)$.

We know how to find a traceless solution $\theta$ of the problem with $f(\theta) = F(t)\theta^2$, and it is easy to verify that a $\rho(t)$ built from such a $\theta(t)$ and the identity solves $i\dot{\rho} = [H, g(\rho)]$, provided $\theta$ is a solution of $i\dot{\theta} = [H, \lambda_2\theta^2]$. For our concrete example g and H are fixed, and we know that the eigenvalues of $\rho$ are $1/3 + T$, $1/3 - T$ and $1/3$. Because we are interested in positive matrices we select the parameters so that $T < 1/3$. The coefficients $\lambda_1$ and $\lambda_2$ can be computed from the Cayley-Hamilton reduction of g, and then substituted back. As a result, we obtain a density matrix which solves $i\dot{\rho} = [H, g(\rho)]$ for arbitrary H and g. The general form of solution given in the previous section is much more complex and flexible from the point of view of modeling, but our simplified example is easier for further analysis.
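A numerical cross-check of this construction (the eigenvalues of H and the nonlinearity g below are arbitrary choices, and the seed is taken in the antidiagonal-block form consistent with the assumptions above): integrating $i\dot{\rho} = [H, g(\rho)]$ from $\rho(0) = \tfrac{1}{3}I + \theta(0)$ should leave the diagonal and $|\rho_{12}|$ frozen, the seed dynamics reducing to a phase rotation.

```python
import numpy as np
from scipy.integrate import solve_ivp

H = np.diag([0.0, 1.1, 2.7])               # hypothetical eigenvalues
T = 0.2                                    # T < 1/3 keeps rho positive
rho0 = np.eye(3) / 3 + np.array([[0, T, 0],
                                 [T, 0, 0],
                                 [0, 0, 0]], dtype=complex)

def g(r):                                  # an arbitrary nonlinearity
    w, V = np.linalg.eigh(r)
    return (V * (np.sin(w) + w**3)) @ V.conj().T

def rhs(t, y):
    r = y.reshape(3, 3)
    G = g(r)
    return (-1j * (H @ G - G @ H)).ravel()

sol = solve_ivp(rhs, (0, 3.0), rho0.ravel(), rtol=1e-10, atol=1e-10)
rho_T = sol.y[:, -1].reshape(3, 3)

# Effectively linear seed dynamics: diagonal and |rho_12| are frozen,
# only the phase of the off-diagonal element rotates.
assert np.allclose(np.diag(rho_T), np.diag(rho0), atol=1e-6)
assert np.isclose(abs(rho_T[0, 1]), T, atol=1e-6)
print("seed evolves by a pure phase rotation of rho_12")
```

As stressed in the text, the interest of such a seed lies entirely in the genuinely nonlinear $\rho[1]$ it generates under dressing.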
Let us finally write down the associated set of rate equations. One can replace this system of six equations for catalytic processes by four equations for $p_{12}$, $p_{13}$, $p'_{12}$, $p'_{13}$, involving auto-catalytic terms and new time-dependent coefficients. The form of the rate equations explains why the eigenvalues of H effectively encode the kinetic constants of the dynamics, here $k_{ij} = h_i - h_j$.
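The identification $k_{ij} = h_i - h_j$ is easy to see already in the effectively linear seed dynamics: under $\rho(t) = e^{-iHt}\rho(0)e^{iHt}$ every off-diagonal element rotates at the corresponding eigenvalue difference. A short check (H and $\rho(0)$ are arbitrary examples):

```python
import numpy as np
from scipy.linalg import expm

h = np.array([0.0, 0.7, 1.9])                  # arbitrary uneven eigenvalues
H = np.diag(h)
rho0 = np.full((3, 3), 1 / 3, dtype=complex)   # a pure state: rho0 @ rho0 = rho0

t = 1.234
rho_t = expm(-1j * H * t) @ rho0 @ expm(1j * H * t)

# Each off-diagonal element rotates at the eigenvalue difference k_ij = h_i - h_j
for i in range(3):
    for j in range(3):
        assert np.isclose(rho_t[i, j],
                          rho0[i, j] * np.exp(-1j * (h[i] - h[j]) * t))
print("kinetic constants are the differences k_ij = h_i - h_j")
```

Designing a dynamics with prescribed kinetic constants therefore amounts to choosing an H with appropriate eigenvalue differences.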

Fig. 2 Dynamics of the four kinetic variables $p_{12}$, $p_{13}$, $p'_{12}$, $p'_{13}$ (Color figure online). Fig. 3 $g_2(\rho) = \omega(t)\sin(\omega(t)\rho)$ for $\omega(t) = t$; the remaining parameters are the same as in Fig. 2.

After integration, let us see what kind of dynamics one gets for the simplest, linear time dependence, $\omega(t) = t$. The four kinetic variables are shown in Fig. 3.

Dynamics of 30 Interacting Species: Seed with Three 2 × 2 Blocks
In order to obtain a more complicated solution, but still one of a reasonably compact form, we put $w^{(k)} = 0$ in the general formula (35). The two-dimensional blocks then take the form (38)-(41), with eigenvalues determined by $\alpha$, $\beta$, x, y and $H^{(k)}$. Recall that $\alpha$, $\beta$, x, y are k-independent. Now take positive x, $\beta$, and $h^{(k)}_i$. Then $\mathrm{Tr}\,\rho^{(k)} = 2x + \beta(h^{(k)}_1 + h^{(k)}_2) > 0$. Because in this case y < 0, positivity can be arranged as well. The solution we produce from this seed is a little less complicated than the general one; for $k \neq l$ the blocks $\rho[1]^{(kl)}$ are expressed through explicit functions $u^{(k)}(t)$ and $v^{(k)}(t)$. Now take a seed consisting of three two-dimensional blocks.
$\rho[1]$ is $6\times 6$, and consists of nine nonzero $2\times 2$ blocks $\rho[1]^{(kl)}$, $k, l = 1, 2, 3$. A practical piece of advice for combining the blocks is to keep in mind that the denominators in $c_{kl}$ are the same for all k, l, and involve a sum over k from k = 1 to k = 3. For example, the block indexed by 1, 2 contains the term $|c^{(3)}|^2 e^{2u^{(3)}(t)}$, even though the block itself is indexed by 1, 2. Secondly, the contribution from $c_{kl}$ enters all the matrix elements of $\rho[1]$, with the exception of the main diagonal. This is why all the species described by $\rho[1]$ interact with one another. In order to obtain a density matrix we select the coefficients in such a way that $\rho[1]$ is normalized, $\mathrm{Tr}\,\rho[1] = 1$.

$f(\rho) = F(t)\rho^2$
Let us again assume that the coupling with the environment is given by some explicit time-dependent function, say $f(\rho) = F(t)\rho^2$. Then the functions $u^{(k)}(t)$ and $v^{(k)}(t)$ can be written explicitly for each block $\rho^{(k)}$.
The solution is normalized by $\mathrm{Tr}\,\rho = 1$. Note that the sum of all the 30 probabilities exceeds 1. This shows that the probabilities correspond to several maximal sets, simultaneously involving different contexts for different collections of populations. The associated set of rate equations involves 36 populations, but the clarity of presentation would not increase by writing down all the 30 time-dependent functions (the 'diagonal' populations $p_1, \dots, p_6$ are constant if one works in the basis of eigenvectors of H). Yet, in order to get some flavor of the solution, let us show at least some of them. The dynamics of all the 30 populations is illustrated in Fig. 4 for a simple 'seasonal' change of the coupling of the species with their environment, $F(t) = \sin t$. Fig. 5 shows what happens if one takes a non-periodic $F(t) = \sinh t$. Of course, the solution is valid for any F(t).

Building Seed Solutions with 3 × 3 Blocks
In one of our previous examples the seed solution for a $3\times 3$ $\rho$ was constructed by means of a $2\times 2$ block, trivially extended to $3\times 3$ by adding a column and a row of zeros. Here we explicitly construct a less trivial $3\times 3$ block. Of crucial importance is again the degeneracy condition, which means that the seed blocks $\rho^{(k)}$ have to correspond to the same $\mu$ and z for all k. Otherwise one would not be able to combine the $\rho^{(k)}$ from different blocks, acting in different invariant subspaces of H.
Let us start with a Hermitian $3\times 3$ matrix $\rho$. The equation $i\dot{\rho} = [H, \rho^2]$ is then equivalent to a dynamical system formally similar to Euler's equations from the classical mechanics of a rigid body (the 'Euler-Arnold top' (Arnold 1989)). Writing the dynamics in terms of the moduli and phases of the matrix elements, and of their real and imaginary parts, we can find integrals of motion of (48), other than the diagonal elements of $\rho$, using the fact that four of the equations have right-hand sides proportional to the same time-dependent function. These integrals are not linearly independent. Further integrals of motion can be found from the characteristic polynomial: from the time invariance of the spectrum of solutions of (48) we obtain an additional constant of motion. Now we are in a position to verify which solutions can be used in the dressing transformation. The first equation (18) of the Lax pair is an eigenvalue problem for the operator $\rho - \mu H$. We expect this operator to have at least one eigenvalue which is constant in time. Again, we should examine the characteristic polynomial. Its coefficients are constant, as one can easily verify, and the last one can be expressed in terms of the invariants. Therefore, any solution of (48) in dimension three has all its eigenvalues constant. This means that the evolution of the eigenvectors is given by some invertible $V(t)$, i.e. $\rho(t) = V(t)\rho(0)V(t)^{-1}$.

To construct an explicit dressing transformation we need a relatively simple solution for the seed $\rho$. Assume that $\rho^2$ is affine in $\rho$, with the constant term commuting with H: $\rho^2 = a\rho + G$, $[G, H] = 0$. Then the equation for $\rho$, with $f(\rho) = \lambda_2\rho^2 + \lambda_1\rho + \lambda_0 I$, becomes linear and can be solved explicitly. From its solution (50) we find eigenvalues of the form $g_j = p h_j + z$, for some parameters p and z. We now use the fact that G commutes with H to make the ZS spectral problem static (exactly as in the two-dimensional case).
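The constancy of the spectrum claimed here can be confirmed numerically: along $i\dot{\rho} = [H, \rho^2]$ the ZS operator $\rho - \mu H$ evolves by a similarity transformation, so its eigenvalues are conserved for any complex $\mu$ (all inputs below are arbitrary examples):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
H = np.diag([0.0, 0.9, 2.4])
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho0 = (M + M.conj().T) / 2                    # arbitrary Hermitian seed

def rhs(t, y):
    r = y.reshape(3, 3)
    r2 = r @ r
    return (-1j * (H @ r2 - r2 @ H)).ravel()   # i rho' = [H, rho^2]

sol = solve_ivp(rhs, (0, 1.5), rho0.ravel(), rtol=1e-10, atol=1e-10)
rho_T = sol.y[:, -1].reshape(3, 3)

mu = 0.4 + 0.2j                                # arbitrary spectral parameter
spec = lambda r: np.sort_complex(np.linalg.eigvals(r - mu * H))

# The whole spectrum of rho - mu*H is a constant of motion, for any mu
assert np.allclose(spec(rho_T), spec(rho0), atol=1e-6)
print("eigenvalues of rho - mu*H are constant in time")
```

This is the property that makes the eigenvalue z of (18) a legitimate, time-independent spectral parameter for the dressing.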
We define |φ̃⟩ = U†|φ⟩ = e^{i∫₀ᵗ G dτ}|φ⟩, so that the first ZS equation becomes a static eigenvalue problem (for the operators evaluated at t = 0). Following the strategy from the two-dimensional case we check that the generator of the dynamics of |φ̃⟩ commutes with the operator of the first ZS problem. The latter means that each of the two operators is a function of the other. Since, by the Cayley–Hamilton theorem, any function of a 3 × 3 matrix is equivalent to a second-order polynomial, we can compare coefficients on both sides. A very simple evolution of |φ̃⟩ is an immediate consequence of this relation. In order to get the explicit evolution of |φ̃⟩ we first calculate the time-dependent scalar factor, and then use the explicit form of the operator G to write down the explicit solution of the ZS Lax pair. To sum up this stage: we have found both a seed ρ, which satisfies the von Neumann equation, and the vector |φ(t)⟩, which solves the Lax pair. To make sure that ρ[1] is a density matrix we have to impose additional restrictions on the parameters of the seed solution: by Sylvester's criterion, positivity holds provided three inequalities, one for each leading principal minor, are satisfied. Now consider the case (56), where on the right-hand side the dimensions of the blocks have been indicated. The 2-dimensional blocks are constructed by means of (38)–(41), with w(2) = w(3) = 0. Note the sum from 1 to 3 in the denominator of c_kl. The 3-dimensional block on the diagonal is given by (58), where several auxiliary functions have been introduced.
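The positivity restrictions can be checked mechanically via Sylvester's criterion. The sketch below tests a Hermitian matrix through its leading principal minors; the matrices are hypothetical stand-ins, not the actual ρ[1].

```python
import numpy as np

def leading_minors(M):
    # determinants of the upper-left k x k blocks, k = 1..n
    return [np.linalg.det(M[:k, :k]).real for k in range(1, M.shape[0] + 1)]

def is_positive_definite(M):
    # Sylvester's criterion: a Hermitian M is positive definite
    # iff all leading principal minors are positive
    return all(d > 0 for d in leading_minors(M))

# hypothetical 3x3 Hermitian matrices
good = np.array([[2.0, 1.0, 0.0],
                 [1.0, 2.0, 1.0],
                 [0.0, 1.0, 2.0]])
bad = good - 2.5 * np.eye(3)   # shifted so one minor turns negative

assert is_positive_definite(good)
assert not is_positive_definite(bad)
```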
The two 3 × 2 blocks read as follows, where σ(i) denotes the odd permutation of i ∈ {1, 2}, i.e. σ(1) = 2 and σ(2) = 1. The remaining two 2 × 3 blocks are the Hermitian conjugates of the 3 × 2 ones, so that ρ[1] is Hermitian. Figure 6 shows the dynamics of all 42 time-dependent populations associated with the 7 × 7 solution ρ[1]. The diagonal elements are time-independent, so we do not plot them.
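The block layout of the 7 × 7 solution, with the 2 × 3 blocks entering as Hermitian conjugates of the 3 × 2 ones, can be mimicked as follows. All blocks here are random placeholders rather than the formulas (38)–(41) and (58), and the vanishing off-diagonal 2 × 2 blocks are a simplifying assumption of the sketch; the point is only that Hermiticity of the full matrix follows from the block construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def herm(n):
    # random n x n Hermitian placeholder block
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

# hypothetical blocks mimicking the 2 + 2 + 3 structure of rho[1]
B22a, B22b, B33 = herm(2), herm(2), herm(3)
C32a = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
C32b = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
Z22 = np.zeros((2, 2), dtype=complex)

# 2 x 3 blocks are Hermitian conjugates of the 3 x 2 ones
rho1 = np.block([
    [B22a, Z22,  C32a.conj().T],
    [Z22,  B22b, C32b.conj().T],
    [C32a, C32b, B33],
])

assert rho1.shape == (7, 7)
assert np.allclose(rho1, rho1.conj().T)   # Hermitian by construction
```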

Further Perspectives
We have developed a method of generating appropriate seed solutions for the soliton system iρ̇(t) = [H, f_t(ρ(t))]. Here H is Hermitian and its spectrum possesses a discrete part, an assumption valid for a large class of H. Differences of eigenvalues of H, k_jk = h_j − h_k, enter the rate equations as kinetic constants. So, given the k_jk, one can design the dynamics by adjusting H. The nonlinearity f_t is essentially arbitrary as well, although in practical computations one replaces f_t by a polynomial with time-dependent coefficients. From the point of view of realistic modeling the formalism is much more flexible than anything discussed in the literature so far.
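Since only differences of eigenvalues of H enter as kinetic constants, any prescribed set of adjacent rates can be realized by accumulating them into a diagonal H. A minimal sketch, with hypothetical target rates:

```python
import numpy as np

def design_H(adjacent_rates):
    # Given desired adjacent kinetic constants k_{j+1,j} = h_{j+1} - h_j,
    # build a diagonal H realizing them. We set h_1 = 0 by convention;
    # only differences matter, so the overall spectral shift is free.
    h = np.concatenate([[0.0], np.cumsum(adjacent_rates)])
    return np.diag(h)

H = design_H([1.0, 0.5, 2.0])        # hypothetical target rates
h = np.diag(H)
k = h[:, None] - h[None, :]          # full matrix of k_{jk} = h_j - h_k

# the subdiagonal of k reproduces the prescribed adjacent rates
assert np.allclose(np.diag(k, -1), [1.0, 0.5, 2.0])
```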
Still, is it flexible enough? Probably not. Our equation is just the tip of the iceberg. Almost unexplored is the general family of soliton von Neumann equations iρ̇(t) = [H_t(ρ(t)), ρ(t)] discussed in (Cieśliński et al. 2003). Here H(ρ) is a 'non-Abelian' function, a kind of polynomial whose coefficients are not numbers but operators. The relevant dressing transformations have been constructed (Cieśliński et al. 2003), but virtually nothing is known about the structure of seed solutions.
For time-independent f the system described by the von Neumann equation is conservative. A time-dependent f_t makes the system open: dissipative or driven. Dissipation can also be introduced through a time-independent term, as in the simple example discussed in (Aerts et al. 2013), where a real parameter plays the role of a birth or mortality rate. More generally, one can consider a Hermitian, ρ-dependent operator in place of H, but little is known about the soliton integrability of equations involving a less trivial operator of this kind. Yet another possibility of introducing dissipation without explicitly time-dependent parameters is to consider solutions analytically continued either in t (a 'complex time') or in other parameters of the system. A complex time is known to turn the Schrödinger equation into a diffusion equation, so a similar trick can be performed in the von Neumann case (Aerts and Czachor 2007). But since Hermiticity of ρ will in general be lost, a new model of probability has to be employed.

In the ZS problem, z is a complex parameter and |φ⟩ a vector in Dirac notation (a Dirac 'ket'; in practical calculations typically represented by a one-column matrix). Additionally, in order to introduce a dressing transformation, we need a conjugate equation with an independent parameter; its solution ⟨χ| is a dual vector (a Dirac 'bra'; typically represented by a one-row matrix). The main tool of dressing transformations is the projection operator P constructed from |φ⟩ and ⟨χ|. From (63), (64) one proves that P satisfies a nonlinear equation. Now we take a third ZS problem with yet another parameter. We assume that our dressing transformation can be constructed by linear operators S and T of the indicated form. It is easy to see that the first two conditions imply the third. They also give relations between the A operators: a change of a parameter just multiplies the operator by a number, so each of the three operators is a multiple of a single operator A, with factors X(⋅), Y(⋅) and Z(⋅) depending on the respective parameters. From the second condition we find the required form. Following (Ustinov et al.
2001) we set X(⋅) = 1, Y(⋅) = 1 and Z(⋅) = 1, which finally gives the explicit form of the dressing. The dressing transformation has thus been found. One should be aware that various generalizations of the above procedure are known, involving greater numbers of parameters, operator-valued solutions of ZS-type problems, time-dependent parameters, etc. (Matveev and Salle 1991; Cieśliński 1995, 2002; Doktorov and Leble 2007). Of particular interest is the generalization proposed in (Cieśliński 2002), since it seems to be applicable to soliton von Neumann systems with dissipation, a problem which was only briefly mentioned in (Aerts et al. 2013) and which is also beyond the scope of the present paper.
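The projection operator at the heart of the dressing can be illustrated numerically. The sketch assumes the standard rank-one form P = |φ⟩⟨χ| / ⟨χ|φ⟩, built from a ket solving one ZS problem and a bra solving the conjugate one; here both vectors are random placeholders, and the only properties checked are idempotency and rank one.

```python
import numpy as np

rng = np.random.default_rng(2)
# placeholder components of the ket |phi> and the bra <chi|
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
chi_bra = rng.normal(size=3) + 1j * rng.normal(size=3)

# assumed standard rank-one form: P = |phi><chi| / <chi|phi>
P = np.outer(phi, chi_bra) / (chi_bra @ phi)

assert np.allclose(P @ P, P)              # idempotent: P^2 = P
assert np.linalg.matrix_rank(P) == 1      # rank-one projector
```

Note that P is in general non-Hermitian (an oblique projector), since |φ⟩ and ⟨χ| are independent solutions rather than conjugates of one another.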