Systems, environments, and soliton rate equations (II): Toward realistic modeling

In order to solve a system of nonlinear rate equations one can try to use soliton methods. The procedure involves three steps: (1) find a `Lax representation' in which all the kinetic variables are combined into a single matrix $\rho$ and all the kinetic constants are encoded in a matrix $H$; (2) find a Darboux–Bäcklund dressing transformation for the Lax representation of $i\dot \rho=[H,f(\rho)]$, where $f$ models a time-dependent environment; (3) find a class of seed solutions $\rho=\rho[0]$ that lead, via a nontrivial chain of dressings $\rho[0]\to \rho[1]\to \rho[2]\to\dots$, to new solutions that are difficult to find by other methods. The latter step is not trivial, since a non-soliton method has to be employed to find an appropriate initial $\rho[0]$. Procedures that lead to a correct $\rho[0]$ have been discussed in the literature only for a limited class of $H$ and $f$. Here, we develop a formalism that works for practically any $H$ and any explicitly time-dependent $f$. As a result, we are able to find exact solutions of a system of equations describing an arbitrary number of species interacting through (auto)catalytic feedbacks, with general time-dependent parameters characterizing the nonlinearity. Explicit examples involve up to 42 interacting species.


Introduction
The idea that formally 'quantum' structures may have applications in ecology is not new, and can be traced back at least to the works of Jørgensen (Jørgensen, 1990, 1995; Jørgensen et al., 2007) and (implicitly) Ulanowicz (Ulanowicz, 1997, 1999, 2009). Jørgensen makes an explicit reference to uncertainty principles, whereas Ulanowicz stresses the role of propensity theory (Popper, 1982). The fact that propensity is naturally related to quantum probability was intuitively felt by Popper himself, but a more recent analysis (Ballentine, 2016) makes it very clear that quantum probabilities are in fact propensities.
A theory that involves propensities and uncertainty relations cannot be formally very different from quantum mechanics, at least from a certain point of view. That such a possibility exists is not so surprising if one takes into account that the so-called quantum structures occur in many areas of science, and are in fact ubiquitous (Aerts, 1986;Khrennikov, 2010). If one adds that ecological models necessarily involve autocatalysis, one is almost inevitably led to the formalism of nonlinear density matrix equations (Aerts et al., 2013).
The latter observation may be regarded as an outline of a whole research program for mathematical ecology. The present paper presents some results of this program, generalizing and extending the scope of (Aerts et al., 2013).
The aim is to go beyond proof-of-principle and toy models, discussed in the literature so far, and develop a formalism that is flexible enough for real-life modeling.
Nonlinear density-matrix (von Neumann) equations discussed in (Aerts et al., 2013) belong to a broad class of soliton systems. The equations were originally discovered in the context of fundamental physics (Czachor, 1993, 1997) as a candidate theory for a putative nonlinear generalization of quantum mechanics. It was only later understood (Leble & Czachor, 1998) that the system of differential equations one arrives at is in fact a soliton one.
A practical definition of a soliton system is one to which soliton techniques apply. Since soliton systems are known to possess a kind of universality, it is quite typical that a soliton equation discovered in one domain of science finds applications in other, completely unrelated fields. The classic example is the so-called nonlinear Schroedinger equation, whose applications range from waves on the deep ocean (Zakharov, 1968) to optical solitons (Kibler et al., 2010). It was not so surprising that the same happened with soliton von Neumann equations, which turned out to be equivalent to systems of coupled ordinary differential equations similar to rate equations occurring in biological and chemical modeling, while with a slight change of interpretation their dynamics could be related to replicating two-strand quantum systems, formal analogues of DNA (Aerts & Czachor, 2007), or to n-level atoms in quantum optics. In yet another reformulation, von Neumann soliton equations were found to contain as particular cases various known or new lattice systems (Cieśliński, Czachor & Ustinov, 2003).
The greatest advantage of soliton systems is the possibility of solving them exactly. The method, employed here and originally introduced for a simple quadratic nonlinearity in (Leble & Czachor, 1998), belongs to the class of Darboux–Bäcklund dressing transformations (Doktorov & Leble, 2007). The essence of the technique lies in finding a transformation which maps one solution ρ(t) of a given equation into another solution ρ[1](t) of the same equation. Not all systems of differential equations possess this property, but those that do belong to a soliton class. Once we have found the transformation it remains to find a 'seed' solution ρ = ρ[0] which will allow us to start the chain of transformations: ρ[0] → ρ[1] → ρ[2] → . . . . A difficulty is that this initial step may be highly nontrivial.
There are two problems. The first one is obvious: ρ has to be found by other means. Sometimes it is easy to find a seed solution. For example, the celebrated Korteweg-de Vries soliton equation has a trivial zero solution which is nevertheless nontrivial enough to start the chain of transformations, leading to a solitary wave already after a single step (Matveev & Salle, 1991).
In the von Neumann case ρ = 0 implies ρ[1] = 0, a fact illustrating the second difficulty. Namely, an appropriate theorem guarantees that a solution ρ will generate a solution ρ[1]. However, the theorem does not guarantee that we will be happy with ρ[1]. Examples are known where, after tedious calculations, one arrives at ρ[1](t) = ρ(t), or ρ[1](t) = ρ(t − t_0). The art of soliton modeling is to find appropriate seed solutions by means of non-soliton methods.
Let us illustrate the point with the simplest yet highly nontrivial example of a soliton von Neumann equation,

iρ̇ = [H, ρ²],    (1)

solved for the first time by soliton methods in (Leble & Czachor, 1998). Here ρ̇ = dρ/dt and [A, B] = AB − BA is the commutator. H is an operator which, from the point of view of rate equations, encodes the values of the possible kinetic constants (Aerts et al., 2013). So, we have to begin with a solution ρ of (1), and then we know, by the theorem, that some ρ[1] will again satisfy (1). But how to find an initial ρ if we do not know how to solve (1)? A first try is a ρ which satisfies ρ² = ρ, so that ρ(t) = e^{−iHt} ρ(0) e^{iHt} is a solution of (1); unfortunately, the resulting dynamics of ρ[1] is then effectively as linear as that of ρ. One of the tricks that give the right seed solution, invented in (Leble & Czachor, 1998), is to find a ρ satisfying ρ² = aρ + ∆, where a is a number and ∆ is an operator commuting with H. The solutions found in (Leble & Czachor, 1998) exhibited a new type of soliton phenomenon, termed self-scattering. The theorem on dressing transformations was further generalized in (Ustinov et al., 2001) to the general equation iρ̇ = [H, f(ρ)], where f was basically arbitrary, and explicit solutions were found. Subsequent works showed that self-scattering may look similar to the opening of a double spiral, replication (Aerts & Czachor, 2007), or morphogenesis (Aerts et al., 2003). Self-scattering also leads to a specific form of pattern formation, as shown in Fig. 1. The nonlinearity is here quadratic, f(ρ) = ρ², and H is a quantum harmonic oscillator Hamiltonian. Formation of the pattern from Fig. 1 is described by

iρ̇(t, y, y′) = (−∂²_y + ∂²_{y′} + y² − y′²) ∫ dz ρ(t, y, z) ρ(t, z, y′).
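The ρ² = aρ + ∆ trick can be checked directly in a toy example. By the Cayley–Hamilton theorem, any Hermitian 2 × 2 block of trace a satisfies B² = aB − (det B)I, so a block-diagonal seed built from blocks of equal trace satisfies ρ² = aρ + ∆ with ∆ commuting with any block-diagonal H; such a seed evolves by the unitary e^{−iaHt} while still solving the nonlinear equation (1). A minimal numerical sketch (the 4 × 4 matrices below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm, block_diag

# Seed trick of Leble & Czachor (1998): a Hermitian seed with
#   rho^2 = a*rho + Delta,  [Delta, H] = 0,
# evolves "linearly", rho(t) = exp(-i a H t) rho(0) exp(+i a H t),
# and nevertheless solves the nonlinear equation i d(rho)/dt = [H, rho^2].

a = 1.0
# two Hermitian 2x2 blocks with equal trace a; each block B satisfies
# B^2 = (tr B) B - (det B) I, hence rho^2 = a*rho + Delta
B1 = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
B2 = np.array([[0.6, 0.15j], [-0.15j, 0.4]])
rho0 = block_diag(B1, B2)
H = np.diag([1.0, 2.5, 4.0, 7.3])      # arbitrary, non-equally-spaced spectrum
Delta = rho0 @ rho0 - a * rho0          # scalar on each block => commutes with H

def rho(t):
    U = expm(-1j * a * H * t)
    return U @ rho0 @ U.conj().T

def residual(t, dt=1e-6):
    # finite-difference check of  i d(rho)/dt - [H, rho^2] = 0
    drho = (rho(t + dt) - rho(t - dt)) / (2 * dt)
    r = rho(t)
    return 1j * drho - (H @ r @ r - r @ r @ H)

print(np.linalg.norm(Delta @ H - H @ Delta))   # ~0: Delta commutes with H
print(np.linalg.norm(residual(0.8)))           # ~0: rho(t) solves the equation
```

The same check fails for two blocks of unequal trace, which is exactly why the condition ρ² = aρ + ∆ is the nontrivial ingredient.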
The pattern itself is obtained through a contour plot of A(t, y) = ρ(t, y, y).
A transition between the reaction-diffusion and rate-equation forms of the soliton system is obtained if one appropriately chooses the basis in the space of solutions. In the harmonic oscillator case the basis is given by Hermite polynomials times a Gaussian, which are eigenfunctions of H. The pattern from Fig. 1 is obtained if at t = 0 one starts with ρ written as a combination of such Hermite-polynomial eigenfunctions (in the variable y).
The choice of a harmonic-oscillator H is here not accidental, and is related to the structure of the spectrum of H. Namely, the harmonic oscillator is the classic example of a system whose energy levels are equally spaced. Still, it is very unlikely that real-life modeling will encounter a case where the kinetic constants possess this type of regularity. On the contrary, one will typically deal with basically arbitrary kinetic constants. Is this a problem?
Yes, and a serious one: practically all the explicit procedures for finding a seed ρ available in the literature are based on an H whose spectrum is equally spaced, or at least contains an equally spaced subset of eigenvalues (my unpublished preprint (Kuna, 2004) seems to be the only exception). So, the first goal of the present paper is to describe a method that works for all H with discrete spectrum, and thus with no restrictions whatsoever on the possible values of the admissible kinetic constants (although still not for the most general form of nonlinearity; see the last section).
The second goal is to allow for arbitrary, explicitly time-dependent functions f_j(t) in (5). The formulas we will discuss will be valid for any choice of f_j(t).
In practical examples, however, ρ(t) is an n × n Hermitian matrix (corresponding to an ecosystem with up to n² populations existing at time t). For a finite n the infinite sum in (8) reduces to a polynomial of finite order. The way we proceed is the following. We first write the Hamiltonian H in a diagonal form (this is always possible since H is Hermitian) and partition it into blocks H^(j) of dimension 2 or 3. We then choose the seed solution in a block-diagonal form, so that any 2 × 2, 3 × 3, 2 × 3, or 3 × 2 block of the dressed solution contains nonzero matrices. In standard terminology ρ[1] is a single-soliton solution (ρ[N] would be an N-soliton one). The interaction of all the species of a plankton-type community is described here through a soliton effect. We have to make sure that all the blocks ρ[1]^(kl) ≠ 0, which is a nontrivial task described in detail in the following sections.

States
A generic state of an ecosystem occurs only once in its history. This is basically why states represent propensities (tendencies) and not probabilities defined by the frequency approach. (As opposed to quantum mechanics, properties of an ecosystem cannot be tested on a great number of identically prepared ecosystems.) A state is described by an operator (or just a matrix) ρ satisfying: (a) Hermiticity, ρ† = ρ; (b) positivity, ρ > 0; and (c) normalization, Tr ρ = 1. A Hermitian operator is positive if it does not have negative eigenvalues. Any set of Hermitian positive operators P_k ('propositions') satisfying the resolution of unity Σ_k P_k = I (I denotes the identity operator) defines a set of propensities by the formula p_k = Tr(P_k ρ) (Ballentine, 2016). We say that two propensities p_k = Tr(P_k ρ) and p_l = Tr(P_l ρ) are complementary if the commutator [P_k, P_l] = P_k P_l − P_l P_k = Q_kl is nonzero. Complementary propensities have standard deviations, (∆p)² = Tr(P²ρ) − (Tr(Pρ))², satisfying the uncertainty relation ∆p_k ∆p_l ≥ |Tr(Q_kl ρ)|/2. The model is non-Kolmogorovian, i.e. it does not satisfy all the axioms of probability formalized in (Kolmogorov, 1956). One can say that complementary propensities are associated with different contexts. A set of propensities p_k is maximal if its corresponding set of propositions P_k sums to I. Complementary propensities belong to different maximal sets. Propensities belonging to different maximal sets do not have to sum to 1. It is sometimes useful to consider models where Tr ρ ≠ 1. The propensities are then replaced by more general kinetic variables, while Hermiticity and positivity of ρ guarantee that the associated variables are nonnegative at any moment of time.
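The state axioms and the uncertainty relation can be illustrated numerically. The 2 × 2 state and the propositions below are toy choices (eigenprojectors of the Pauli matrices σ_z and σ_x), not taken from the paper; the bound ∆p_k ∆p_l ≥ |Tr(Q_kl ρ)|/2 is the standard Robertson-type inequality for Hermitian operators:

```python
import numpy as np

# A Hermitian, positive, trace-one state; propensities p_k = Tr(P_k rho)
# for a maximal set sum to 1; complementary propositions P, P' with
# Q = [P, P'] != 0 obey  Delta p * Delta p' >= |Tr(Q rho)| / 2,
# where (Delta p)^2 = Tr(P^2 rho) - (Tr(P rho))^2.

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])

# one maximal set: eigenprojectors of sigma_z
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
p = [np.trace(Pk @ rho).real for Pk in (P0, P1)]

# a proposition from a different maximal set: eigenprojector of sigma_x
Px = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
Q = P0 @ Px - Px @ P0          # nonzero commutator => complementarity

def variance(P):
    return (np.trace(P @ P @ rho) - np.trace(P @ rho) ** 2).real

lhs = np.sqrt(variance(P0)) * np.sqrt(variance(Px))
rhs = 0.5 * abs(np.trace(Q @ rho))
print(sum(p), lhs >= rhs)
```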
A single nonlinear density-matrix equation can be treated as a very compact form of a set of nonlinear rate equations involving variables belonging to several maximal sets. In order to construct the nonnegative variables occurring in the rate equations one selects appropriate propositions, both diagonal (j = k) and off-diagonal (j ≠ k). Decomposing matrix elements of ρ into real and imaginary parts, ρ_nm = ⟨n|ρ|m⟩ = x_nm + i y_nm, we obtain three families of nonnegative variables: p_n, p_jk, and p̄_jk. It is interesting that replicator equations, a typical tool in evolutionary game theory (Smith, 1982; Hofbauer & Sigmund, 1998; Friedman & Sinervo, 2016), can be written in a nonlinear density-matrix von Neumann form, but with only the p_n interpretable as probabilities (Gafiychuk & Prykarpatsky, 2004) (p_jk and p̄_jk then have no clear interpretation: x_jk = √(p_j p_k), y_jk = 0). If ρ is a positive but non-normalized solution (ρ > 0, Tr ρ ≠ 1) then the variables are nonnegative, but cannot be treated as probabilities or propensities. Of particular interest is the case of a dynamics which does not preserve positivity of ρ(t) for all t. In such a case, starting with a positive initial condition ρ(0) > 0, we will find dynamical variables that are nonnegative only for certain finite amounts of time. This type of solution could be interpreted as a system where certain species disappear or appear after some time. Unfortunately, this very interesting possibility is beyond the scope of the present paper.
The mathematical model is built on the basis of a hierarchically coupled set of rate equations for the density matrices ρ_1, ρ_2, ρ_3, . . .
The system described by ρ 1 plays a role of environment for the remaining subsystems. The one described by ρ 2 is the environment for ρ 3 , ρ 4 , and so on. The collection of rate equations has to be solved in a hierarchical way.
One begins with ρ_1, since the associated differential equation is closed. Once a given ρ_1(t) = r(t) has been found, one switches to the equation for ρ_2, in which r(t) enters as a time-dependent coefficient. At each level of the hierarchy (perhaps with the exception of the ρ_1 level), one has to solve a system of coupled nonlinear rate equations with time-dependent coefficients. The formalism described in the present paper allows these time-dependent coefficients to be arbitrary. All the examples discussed below can be understood as corresponding to an n-th level of the hierarchy.
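The hierarchical scheme can be sketched numerically. In the fragment below the quadratic nonlinearity and the specific coupling F(t) = Tr(ρ_1(t)²) are illustrative assumptions, not the paper's model; what matters is only the structure: solve the closed level first, then feed it into the next level as a time-dependent coefficient.

```python
import numpy as np
from scipy.integrate import solve_ivp

# level 1 obeys the closed equation  i d(rho1)/dt = [H1, rho1^2];
# level 2 sees level 1 only through a time-dependent coefficient:
#   i d(rho2)/dt = [H2, F(t) rho2^2],  F(t) = Tr(rho1(t)^2)  (toy coupling).

def comm(A, B):
    return A @ B - B @ A

def make_rhs(H, F):
    n = H.shape[0]
    def rhs(t, y):
        rho = y.reshape(n, n)
        return (-1j * F(t) * comm(H, rho @ rho)).ravel()
    return rhs

H1 = np.diag([1.0, 3.0])
H2 = np.diag([0.5, 2.0, 4.0])
rho1_0 = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
rho2_0 = np.array([[0.5, 0.1, 0.0],
                   [0.1, 0.3, 0.1],
                   [0.0, 0.1, 0.2]], dtype=complex)

# level 1: closed equation, solved first (dense output for later sampling)
sol1 = solve_ivp(make_rhs(H1, lambda t: 1.0), (0, 5), rho1_0.ravel(),
                 dense_output=True, rtol=1e-10, atol=1e-12)

def F(t):   # the environment seen by level 2 is a functional of rho1(t)
    r1 = sol1.sol(t).reshape(2, 2)
    return np.trace(r1 @ r1).real

# level 2: same shape of equation, now with a time-dependent coefficient
sol2 = solve_ivp(make_rhs(H2, F), (0, 5), rho2_0.ravel(),
                 rtol=1e-10, atol=1e-12)
rho2_T = sol2.y[:, -1].reshape(3, 3)
print(np.trace(rho2_T).real)   # trace is conserved at every level
```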

Dressing transformation from a Zakharov-Shabat problem
We shall consider a Zakharov–Shabat (ZS) problem (Matveev & Salle, 1991) with linear operators A_µ and B acting on some Hilbert space H; µ is a complex parameter and |φ_µ⟩ a vector in Dirac notation (a Dirac 'ket', in practical calculations typically represented by a one-column matrix). Additionally, in order to introduce a dressing transformation, we need a conjugate equation with an independent parameter ν; ⟨ψ_ν| is a dual vector (a Dirac 'bra', typically represented by a one-row matrix).
The main tool of dressing transformations is the operator P = |φ_µ⟩⟨ψ_ν| / ⟨ψ_ν|φ_µ⟩. From (18), (19) we prove that P satisfies a nonlinear equation. Now we take a third ZS problem with yet another parameter λ. We assume that our dressing transformation can be constructed by linear operators S and T of the form S = I + aP, T = I + bP. Since P² = P, one finds S⁻¹ = I + âP for â = −a/(1+a), and analogously b̂ = −b/(1+b). We demand that the third ZS problem be mapped into a problem of the same form, and check how this restricts the form of S and T, which leads to three conditions; the third reads â a A_λ + a A_µ + â A_ν = b̂ b A_λ.
It is easy to see that the first two conditions imply the third. They also give a relation between A_µ and A_ν: a change of the parameter just multiplies the operator by a number, implying that A_µ, A_ν and A_λ are multiples of a single operator A, with coefficients given by three functions of the parameters. From the second condition we then determine these coefficients, and, following (Ustinov et al., 2001), we fix the remaining freedom. The dressing transformation has thus been found. One should be aware that various generalizations of the above procedure are known, including greater numbers of parameters, operator solutions of ZS-type problems, time-dependent parameters, etc. (Matveev & Salle, 1991; Cieśliński, 1995, 2002; Doktorov & Leble, 2007). Of particular interest is the generalization proposed in (Cieśliński, 2002), since it seems to be applicable to soliton von Neumann systems with dissipation, a problem which was only briefly mentioned in (Aerts et al., 2013), and which is also beyond the scope of the present paper.
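The algebra behind S and its closed-form inverse is elementary and easy to verify numerically; the vectors below are random toy data:

```python
import numpy as np

# For any |phi>, <psi| with <psi|phi> != 0, the operator
#   P = |phi><psi| / <psi|phi>
# is a (generally non-Hermitian) projector, P^2 = P, and consequently
#   (I + a P)^(-1) = I + a_hat P,  a_hat = -a / (1 + a),
# which is the inversion formula used for S = I + a P above.

rng = np.random.default_rng(0)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)   # Dirac ket |phi>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)   # components of the bra <psi|
P = np.outer(phi, psi) / (psi @ phi)

a = 0.7 + 0.2j
S = np.eye(4) + a * P
a_hat = -a / (1 + a)
S_inv = np.eye(4) + a_hat * P

print(np.linalg.norm(P @ P - P))              # ~0: P is a projector
print(np.linalg.norm(S @ S_inv - np.eye(4)))  # ~0: closed-form inverse
```

The identity follows from (I + aP)(I + âP) = I + (a + â + aâ)P and a + â + aâ = 0.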

Lax pair
Consider the pair of ZS-type problems

((1/µ)ρ + H)|φ_µ⟩ = z_µ |φ_µ⟩,    (46)
−i (d/dt)|φ_µ⟩ = (1/µ) A |φ_µ⟩,    (47)

where z_µ is a t-independent complex number. Differentiating (46) over t and multiplying (47) by −i z_µ, we obtain two equations with identical left-hand sides. Compatibility of the right-hand sides is equivalent to

iρ̇ = [H, A],    (48)
[ρ, A] = 0.     (49)

The second condition means that A is an arbitrary function of ρ, i.e. A = f(ρ). Therefore the first condition is a nonlinear equation with respect to ρ.
This is our nonlinear von Neumann equation, iρ̇ = [H, f(ρ)] (50). We say that (46)–(47) is a Lax pair for (50), whereas (50) is a Lax representation of a system of rate equations. This Lax pair was found in (Ustinov et al., 2001) and later generalized to yet more complicated von Neumann-type equations in (Ustinov & Czachor, 2002; Cieśliński, Czachor & Ustinov, 2003). Now, let us apply the same dressing transformation to both equations of the Lax pair (46)–(47). Because we use two linear ZS problems of the same shape, they can be transformed by the same dressing transformation, and the compatibility conditions for the transformed Lax pair are the same.
Just replace ρ and A by ρ[1] and A[1] = f(ρ[1]). In this way the dressing transformation gives a connection between solutions of equation (48), expressed by Theorem 1. Assume |φ_µ⟩ is a solution of (46) and (47), and ⟨ψ_ν| is a solution of the conjugate ZS problems. In this case, if ρ and A fulfill (48) and (49), then ρ[1] and A[1] fulfill (48) and (49) as well.
To prove the theorem we employ (22) for both ZS problems; inserting (54) into the result completes the proof. All the examples discussed below are based on Theorem 1. Setting ν = µ one obtains a unitary T, so that the dressing transformation preserves Hermiticity and positivity of ρ[1].
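The Lax property can also be probed numerically: along any solution of iρ̇ = [H, ρ²], the spectrum of the Lax operator (1/µ)ρ + H must stay constant in time (z_µ in (46) is t-independent). A toy 3 × 3 check, with matrices and µ chosen arbitrarily:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate i d(rho)/dt = [H, rho^2] and compare the spectrum of
# L_mu = (1/mu) rho + H at the initial and final times.

H = np.diag([0.7, 2.1, 5.3])
rho0 = np.array([[0.5, 0.2 - 0.1j, 0.1],
                 [0.2 + 0.1j, 0.3, 0.05j],
                 [0.1, -0.05j, 0.2]])
mu = 0.4 + 0.3j

def rhs(t, y):
    r = y.reshape(3, 3)
    r2 = r @ r
    return (-1j * (H @ r2 - r2 @ H)).ravel()

sol = solve_ivp(rhs, (0, 4), rho0.ravel(), rtol=1e-11, atol=1e-13,
                t_eval=[0.0, 4.0])

def lax_spectrum(r):
    return np.sort_complex(np.linalg.eigvals(r / mu + H))

z0 = lax_spectrum(sol.y[:, 0].reshape(3, 3))
zT = lax_spectrum(sol.y[:, -1].reshape(3, 3))
print(np.max(abs(z0 - zT)))   # ~0: isospectral evolution
```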

Constructing seed solutions
This is the core part of the work. We will show how to construct seed solutions for all H, any time-dependent f , and arbitrary numbers of interacting species. In order to eliminate typing errors, all the explicit examples discussed later on in the paper were cross-checked in Wolfram Mathematica.

Building seed solutions with 2 × 2 blocks
Assume the Hamiltonian has a discrete spectrum, H = Σ_n h_n Q_n, where Q_n and h_n are the spectral projectors and the eigenvalues of H, respectively. We start from an even-dimensional case: the spectral projectors are grouped into pairs Q_{k_1}, Q_{k_2}, each pair spanning a two-dimensional subspace with projector R^(k) = Q_{k_1} + Q_{k_2}. Here, a notational digression. Whenever we perform explicit matrix calculations, we implicitly identify operators restricted to such a subspace with 2 × 2 matrices. The same notation applies to ρ^(k), |φ^(k)⟩, |ϕ^(k)⟩, etc. So, although, say, |ϕ^(1)⟩ and |ϕ^(2)⟩ are, for calculational purposes, treated as 2-dimensional column matrices, their sum |ϕ^(1)⟩ + |ϕ^(2)⟩ is understood as a 4-dimensional column matrix. A mathematical purist would therefore rather write the sum as a direct sum. The convention we employ will not lead to ambiguities.
In a two-dimensional space, any function of the operator ρ^(k) (which is not proportional to the identity, so it has two different eigenvalues) is an affine function of ρ^(k); when f depends explicitly on time, the coefficients of this affine function are time dependent. Therefore, any such solution evolves in a correspondingly simple way. Let us stress again that f(ρ^(k)(0)) is time dependent because f is a polynomial in ρ^(k)(0), but with coefficients which depend on time. For this case it is therefore convenient to rewrite the Lax pair on the subspace H^(k) = R^(k)H.
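The statement that any function of a 2 × 2 operator with two distinct eigenvalues is affine in that operator follows from the Cayley–Hamilton theorem; a quick numerical illustration for f = exp (toy Hermitian matrix):

```python
import numpy as np
from scipy.linalg import expm

# For a 2x2 rho with distinct eigenvalues w0 != w1, any function satisfies
#   f(rho) = c1 * rho + c0 * I,
# with c1, c0 obtained by interpolating f through the spectrum.

rho = np.array([[0.8, 0.3 - 0.2j], [0.3 + 0.2j, 0.1]])
w = np.linalg.eigvalsh(rho)              # two distinct eigenvalues

f = np.exp
c1 = (f(w[1]) - f(w[0])) / (w[1] - w[0])
c0 = f(w[0]) - c1 * w[0]

lhs = expm(rho)                          # "true" matrix function
rhs = c1 * rho + c0 * np.eye(2)
print(np.linalg.norm(lhs - rhs))         # ~0
```

If f carries explicitly time-dependent coefficients, c0 and c1 inherit that time dependence, which is exactly the mechanism used in the text.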

First we use |φ^(k)⟩ = U†|ϕ^(k)⟩ to make the restricted ZS problem static: the Lax pair on H^(k) then turns into a time-independent eigenvalue problem for the seed at t = 0. Taking this into account, we obtain the eigenvalue problem (60). It is extremely important to realize that z_µ and µ are the same for all the subspaces indexed by k. This degeneracy of the eigenvalue problem was one of the crucial tricks that led in (Leble & Czachor, 1998) to the discovery of self-scattering solutions. The same happens here, but the necessity of maintaining the degeneracy is one of the difficulties one encounters when trying to employ the method in arbitrary dimensions, and for arbitrary H.
Since µ and z_µ are independent of k, any linear combination of the vectors |φ^(k)⟩ solves the same ZS problem and can therefore be used to build the projector P from the dressing transformation; we sum over all the blocks. A single unitary U simultaneously generates the evolution of all the blocks, and ρ[1](t) is nontrivial precisely in the presence of nontrivial c_kl(t). The self-scattering phenomenon comes from the c_kl(t). The essence of finding a nontrivial dressing transformation is in guaranteeing that the c_kl(t) depend on time. Otherwise, the dynamics would be as linear as that of the seed solution.
To construct the dressing transformation it is enough to find two-dimensional operators which fulfill (60) for fixed H, µ and z µ .
The matrix representation of ρ in the basis of eigenvectors of H is block diagonal, with 2 × 2 blocks ρ^(k). Eigenvalue problem (60) implies two conditions, in which µ = α + iβ and µz_µ = x + iy; note that α, β, x, y are independent of k. These two conditions fix two parameters of the 2 × 2 matrix ρ^(k) as functions of H^(k), µ, z_µ and the remaining matrix elements; the fourth parameter γ^(k) remains undetermined, for any k. It is easy to check that for any H we can also set all the parameters in such a way that ρ will be positive and normalized.
For such a ρ^(k)(0) the eigenvector from (60) can be written down explicitly. Now, we can use an equivalent form of the dressing transformation (Leble & Czachor, 1998; Kuna, Czachor & Leble, 1999), ρ[1] = ρ + 2iβ[P, H]. The latter formula shows another condition one has to guarantee in order to make ρ[1](t) qualitatively different from ρ(t): the commutator on the right-hand side of (68) must be nonzero.
Representing ρ[1] in the basis of eigenvectors of H, we get 2 × 2 blocks whose explicit form involves c_kl given by (63), σ(i) — an odd permutation of i ∈ {1, 2} — and s^(k) = ((w^(k))² + 1) |βh^(k)_1 + y| |βh^(k)_2 + y|. All these blocks can consist of nonzero elements, leading to a coupling between all the pairs of species in a given population. Note that the construction does not depend on the number of species.
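Two structural features of the formula ρ[1] = ρ + 2iβ[P, H] are easy to confirm numerically: for a Hermitian projector P and real β the correction is Hermitian and traceless, so the dressing preserves Tr ρ and Hermiticity. (That ρ[1] actually solves the equation is the content of Theorem 1 and is not re-checked here; the data below are toy values.)

```python
import numpy as np

# rho[1] = rho + 2i*beta*[P, H]: the correction 2i*beta*[P,H] is
# traceless (trace of a commutator) and Hermitian (P, H Hermitian,
# beta real), hence Tr rho[1] = Tr rho and rho[1]^dagger = rho[1].

rng = np.random.default_rng(1)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
P = np.outer(phi, phi.conj()) / (phi.conj() @ phi)   # Hermitian projector
H = np.diag([1.0, 2.2, 3.7, 6.1])
rho = np.eye(4) / 4.0
beta = 0.8

correction = 2j * beta * (P @ H - H @ P)
rho1 = rho + correction

print(abs(np.trace(correction)))              # ~0: trace preserved
print(np.linalg.norm(rho1 - rho1.conj().T))   # ~0: Hermiticity preserved
```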

Example: 3 × 3 ρ[1](t) constructed from a single 2 × 2 seed block
We are now in a position to build a wide variety of solutions to a large class of problems. Of course, the formulas describing the general case are quite complicated, so let us simplify the discussion by setting µ = iβ, µz_µ = iy (thus α = x = 0). Now βh^(k)_1 + y > 0 and βh^(k)_2 + y < 0, and we obtain a simple 2 × 2 block which is Hermitian but not positive. We can use this block for constructing the transformation in an even-dimensional case. Let us skip the index k.
We can make the seed solution odd-dimensional by adding the 1-dimensional block ρ^(2) = 0, implying y = −βh_3. We assume β > 0, therefore h_1 > h_3 > h_2. Under all these assumptions our seed solution becomes a traceless 3 × 3 matrix. To make our example yet more explicit we take f(ρ) = F(t)ρ², for which our ρ is a constant solution because [H, ρ²] = 0. The dressing transformation then produces a time-dependent ρ[1](t) expressed through ψ(t) = ∫₀ᵗ F(s) ds and χ(t) = |a_1|² e^{2ψ(t)} + |a_2|². Now we use this simple traceless and non-positive 3 × 3 matrix to construct a density matrix (i.e. a positive solution with Tr ρ = 1). Let us first see what happens if we just add to ρ a normalized multiple of I: σ = ρ + (1/3)I, Tr ρ = 0, Tr σ = 1. The equation satisfied by σ is iσ̇ = [H, g(σ)], where we used again the theorem from the Appendix, describing f and g as polynomials of a matrix. So, if we want to find a solution of iσ̇ = [H, g(σ)] for a density matrix, we should find a traceless solution of iρ̇ = [H, θ_2ρ² + (θ_1 + (2/3)θ_2)ρ]. We know how to find a traceless solution of the problem with f(ρ) = F(t)ρ². For our concrete example g and H are fixed, and we know the eigenvalues of σ explicitly. Because we are interested in positive matrices, we select the parameters so that T < 1/3. The coefficients θ_1 and θ_2 can then be computed, and, as a result, we obtain the density matrix which solves iρ̇ = [H, g(ρ)] for arbitrary H and g, expressed through ∫₀ᵗ θ_2(s) ds. The general form of the solution given in the previous section is much more complex and flexible from the point of view of modeling, but our simplified example is easier for further analysis.
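The key property of the seed used here, [H, ρ²] = 0, is easy to verify: a traceless Hermitian 2 × 2 block extended by a zero row and column has a diagonal square, and therefore commutes with any diagonal H. A toy check (the numbers are illustrative, not the paper's values):

```python
import numpy as np

# Traceless Hermitian 2x2 block, extended to 3x3 by a zero row and column.
# Its square is diagonal, so it commutes with any diagonal H and the seed
# is a constant solution of  i d(rho)/dt = [H, F(t) rho^2]  for any F(t).

p, a = 0.4, 0.3 + 0.2j
rho = np.array([[p, a, 0],
                [np.conj(a), -p, 0],
                [0, 0, 0]])
H = np.diag([2.0, -1.0, 0.5])

rho2 = rho @ rho
print(np.linalg.norm(rho2 - np.diag(np.diag(rho2))))   # ~0: rho^2 diagonal
print(np.linalg.norm(H @ rho2 - rho2 @ H))             # ~0: [H, rho^2] = 0
```

It is only the dressing that turns this static seed into a genuinely time-dependent ρ[1](t), through the integrals of F.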
Let us finally write the associated set of rate equations. The resulting system can be integrated explicitly; p_12, p_13, p̄_12, p̄_13 are plotted in Fig. 2.

Dynamics of 30 interacting species: Seed ρ with three 2 × 2 blocks
In order to obtain a more complicated solution, but still of a reasonably compact form, we put w^(k) = 0 in the general formula (67). The two-dimensional blocks then simplify, and their eigenvalues can be computed explicitly. Recall that α, β, x, y are k-independent. Now take positive x, α, and h^(k). The solution we produce from this seed is a little less complicated than the general one, and the blocks ρ[1]^(kl) take an explicit closed form.
Explicit forms of u^(k)(t) and v^(k)(t) are obtained by integration. Now take a seed ρ consisting of three two-dimensional blocks. Then ρ[1] is 6 × 6 and consists of nine nonzero 2 × 2 blocks ρ[1]^(kl), k, l = 1, 2, 3. A practical piece of advice for combining the blocks: keep in mind that the denominators in c_kl are the same for all k, l, and involve the sum over k from k = 1 to k = 3.
For example, ρ[1]^(12) contains |ϑ^(3)|² e^{2u^(3)(t)}, even though the block itself is indexed by 1, 2. Secondly, the contribution from c_kl enters all the matrix elements of ρ[1], with the exception of the main diagonal. This is why all the species described by ρ[1] interact with one another.
In order to obtain a density matrix we select α in such a way that ρ[1] is positive and normalized. Let us again assume that the coupling with the environment is given by some explicit time-dependent function, say f(ρ) = F(t)ρ². The associated set of rate equations involves 36 populations, but the clarity of presentation would not be increased by writing down all the 30 time-dependent functions (the 'diagonal' populations p_1, . . . , p_6 are constant if one works in the basis of eigenvectors of H). Yet, in order to get some flavor of the solution, let us show at least some of them. The dynamics of all the 30 populations is illustrated in Fig. 4 for a simple 'seasonal' change of the coupling of the species with their environment, F(t) = sin ωt. Fig. 5 shows what happens if one takes a non-periodic F(t) = sinh ωt.
Of course, the solution is valid for any F (t).
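The statement that the 'diagonal' populations stay constant while all the off-diagonal variables evolve can be checked by direct integration for the seasonal coupling F(t) = sin ωt. The 6 × 6 initial state below is a random toy density matrix, not the paper's ρ[1]:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate i d(rho)/dt = [H, sin(omega*t) rho^2] in the eigenbasis of a
# diagonal H: the diagonal populations p_1,...,p_6 are exactly conserved,
# while the off-diagonal variables are driven by the seasonal coupling.

n, omega = 6, 2.0
H = np.diag([0.5, 1.3, 2.1, 3.9, 5.2, 7.0])
rng = np.random.default_rng(3)
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho0 = X @ X.conj().T
rho0 /= np.trace(rho0).real          # positive, trace one

def rhs(t, y):
    r = y.reshape(n, n)
    r2 = r @ r
    return (-1j * np.sin(omega * t) * (H @ r2 - r2 @ H)).ravel()

sol = solve_ivp(rhs, (0, 2.0), rho0.ravel(), rtol=1e-10, atol=1e-12)
rhoT = sol.y[:, -1].reshape(n, n)
print(np.max(abs(np.diag(rhoT) - np.diag(rho0))))   # ~0: p_n constant
print(np.max(abs(rhoT - rho0)))                      # > 0: off-diagonals move
```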

Building seed solutions with 3 × 3 blocks
In one of our previous examples the seed solution for a 3 × 3 ρ was constructed by means of a 2 × 2 block, trivially extended to 3 × 3 by adding a column and a row consisting of zeros. Here we explicitly construct a less trivial 3 × 3 block. Of crucial importance is again the degeneracy condition, which means that the seed blocks ρ^(k) have to correspond to the same µ and z_µ for all k. Otherwise one would not be able to combine ρ^(k) from different blocks, acting in different invariant subspaces of H.
Let us start with a Hermitian 3 × 3 matrix with diagonal elements ρ_1, ρ_2, ρ_3 and off-diagonal elements a, b, c. The von Neumann equation is then equivalent to a dynamical system formally similar to Euler's equations from the classical mechanics of a rigid body (the 'Euler–Arnold top' (Arnold, 1989)). One can write down the dynamics of the moduli and phases of the matrix elements, as well as of their real and imaginary parts. We can find integrals of motion of (80), other than the diagonal elements of ρ, using the fact that four of the equations have right-hand sides proportional to the same time-dependent function. These integrals are not all linearly independent. Further integrals of motion can be found from the characteristic polynomial, whose constant term reads ρ_1ρ_2ρ_3 + 2Re(abc) − |a|²ρ_3 − |b|²ρ_2 − |c|²ρ_1.
From the time invariance of the spectrum of solutions of (80) we find one more constant of motion. Now we are in a position to verify which solutions can be used in the dressing transformation. The first equation (46) of the Lax pair is an eigenvalue problem for the operator (1/µ)ρ + H. We expect the operator to have at least one eigenvalue which is constant in time. Again, we should examine the characteristic polynomial. Its coefficients are constant, as one can easily verify; the last one can be expressed in terms of the invariants. Therefore, any solution of (80) in dimension three has all its eigenvalues constant. This means that the evolution of the eigenvectors is given by some time-dependent unitary transformation. To construct an explicit dressing transformation we need to find a relatively simple solution for the seed ρ. Assume that ρ² is affine in ρ, with the constant term commuting with H. Then the von Neumann equation becomes linear, generated by an operator G commuting with H, and its solution reads ρ(t) = e^{−i∫₀ᵗ G dτ} ρ(0) e^{i∫₀ᵗ G dτ}.
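The constants of motion listed above (the diagonal elements and the spectral invariants) can be confirmed numerically for a toy 3 × 3 solution of iρ̇ = [H, ρ²] with diagonal H:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Along i d(rho)/dt = [H, rho^2] with diagonal H, the diagonal entries
# rho_1, rho_2, rho_3 are conserved, and so are Tr(rho^2) and det(rho)
# (time invariance of the spectrum). Toy initial data.

H = np.diag([1.0, 2.7, 4.4])
rho0 = np.array([[0.5, 0.2 - 0.1j, 0.1 + 0.05j],
                 [0.2 + 0.1j, 0.3, 0.1],
                 [0.1 - 0.05j, 0.1, 0.2]])

def rhs(t, y):
    r = y.reshape(3, 3)
    r2 = r @ r
    return (-1j * (H @ r2 - r2 @ H)).ravel()

sol = solve_ivp(rhs, (0, 3), rho0.ravel(), rtol=1e-11, atol=1e-13)
rhoT = sol.y[:, -1].reshape(3, 3)

print(np.max(abs(np.diag(rhoT) - np.diag(rho0))))          # ~0
print(abs(np.trace(rhoT @ rhoT) - np.trace(rho0 @ rho0)))  # ~0
print(abs(np.linalg.det(rhoT) - np.linalg.det(rho0)))      # ~0
```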
We now use the fact that G commutes with H to make the ZS spectral problem static (exactly as in the two-dimensional case). We define |φ⟩ = U†|ϕ⟩ = e^{i∫₀ᵗ G dτ}|ϕ⟩, so that the first ZS problem becomes an eigenvalue problem for ρ at t = 0. Following the strategy from the two-dimensional case, we check that the generator of the dynamics of |φ⟩ commutes with the operator of the first ZS problem. The latter means that both operators are functions of one another.
Since any function of a 3 × 3 matrix is equivalent to a second-order polynomial, we can compare the coefficients on both sides. This fixes the parameters: p = α and |a| = |β|√(−g_1 g_2)/2. The corresponding eigenvector can then be written down explicitly. The index k reminds us that, from the point of view of applications, we consider a solution associated with a k-th block; the only parameter that is block-independent is here β (the imaginary part of µ). In order to get the explicit evolution of |ϕ⟩ we first calculate the time-dependent factor, and finally use the form of the operator G = (θ_1 + θ_2(2z + |β|√(−g_1 g_2)/2))H + θ_2 αH² to get the explicit solution of the ZS Lax pair. To sum up this stage, we have found both a seed ρ, which satisfies the von Neumann equation, and the vector |ϕ(t)⟩ that solves the Lax pair. To make sure that ρ[1] will be a density matrix we have to impose additional restrictions on the parameters of the seed solution. By Sylvester's positivity criterion: z + αh_1 > 0, (z + αh_1)(z + αh_2) > |a|², and (z + αh_1)(z + αh_2)(z + αh_3) + 2|a|³ − |a|²(3z + α(h_1 + h_2 + h_3)) > 0.
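Sylvester's criterion — a Hermitian matrix is positive definite iff all of its leading principal minors are positive — is easy to automate when screening seed parameters. The values of z, α, |a|, h_i below are toy choices, and the off-diagonal pattern (all entries equal to a) is the one that reproduces the three inequalities quoted above:

```python
import numpy as np

# Leading-principal-minor test; for a 3x3 matrix with diagonal z + alpha*h_i
# and all off-diagonal entries equal to a (real), the third minor equals
# (z+ah1)(z+ah2)(z+ah3) + 2|a|^3 - |a|^2 (3z + alpha(h1+h2+h3)).

def leading_minors(M):
    return [np.linalg.det(M[:k, :k]).real for k in range(1, M.shape[0] + 1)]

def is_positive_definite(M):
    return all(m > 0 for m in leading_minors(M))

z, alpha, a = 1.0, 0.2, 0.3
h = np.array([0.5, 1.5, 3.0])
seed = np.full((3, 3), a)
np.fill_diagonal(seed, z + alpha * h)

minors = leading_minors(seed)
print(minors, is_positive_definite(seed))
```

An eigenvalue computation gives an independent cross-check of the criterion.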

Now consider the case of a seed composed of one 3 × 3 block, with eigenvalues h_1, h_2, h_3, and two 2 × 2 blocks. The dressed solution has a block form in which the dimensions of the blocks are indicated at the right-hand side. The 2-dimensional blocks are constructed by means of (70)–(73), with w^(2) = w^(3) = 0. Note the sum from 1 to 3 in the denominator of c_kl. The 3-dimensional block on the diagonal, the auxiliary functions introduced in (90), and the two 3 × 2 blocks can all be written down explicitly. The remaining two 2 × 3 blocks are the Hermitian conjugates of the 3 × 2 ones, so that ρ[1] is Hermitian.

Further perspectives
We have developed a method of generating appropriate seed solutions for iρ̇ = [H, f(ρ)] with essentially arbitrary H and explicitly time-dependent f. Still, is it flexible enough? Probably not. Our equation is just the tip of the iceberg. Almost unexplored is the general family of soliton von Neumann equations iρ̇(t) = [H(ρ(t)), ρ(t)] discussed in (Cieśliński, Czachor & Ustinov, 2003). Here H(ρ) is a 'non-Abelian' function, a kind of polynomial whose coefficients are not numbers but operators. The relevant dressing transformations have been constructed (Cieśliński, Czachor & Ustinov, 2003), but virtually nothing is known about the structure of seed solutions.
For a time-independent f the system described by the von Neumann equation is conservative. A time-dependent f_t makes the system open: dissipative or driven. Dissipation can also be introduced through a time-independent part, as in the simple example discussed in (Aerts et al., 2013), where ξ ∈ ℝ is a birth or mortality rate. More generally, one can consider equations augmented by a term built from a Hermitian, ρ-dependent operator Ξ(ρ) = Ξ(ρ)†, but little is known about the soliton integrability of equations involving a less trivial Ξ(ρ).
Yet another possibility of introducing dissipation without explicitly timedependent parameters is to consider solutions analytically continued either in t (a 'complex time'), or in other parameters of the system. A complex time is known to turn the Schroedinger equation into a diffusion equation, so a similar trick can be performed in the von Neumann case (Aerts & Czachor, 2007). But since Hermiticity of ρ will be in general lost, a new model of probability has to be employed.

Acknowledgment
I'm indebted to Marek Czachor for a critical reading of the entire manuscript.

Special thanks to Diederik Aerts for various help during my stay in Centrum
Leo Apostel in Brussels, where a part of this work was performed.