1 Introduction

Splines and smoothing splines have a long history of application in many fields. The basic history is outlined in Egerstedt and Martin [3] and in Wahba [19]; see also [20] for an introduction to the use of smoothing splines in statistics. In this paper we return to the problem of monotone smoothing splines, which was previously studied in [3, 4, 8, 9]. A classical application is to determine the average growth curve of a population of juveniles. Suppose we have a population of perhaps 30 children whose heights are measured every 6 months from age 2 to 20. It is easy to fit a smoothing spline to the data set, but there is no guarantee of monotonicity. Since people do not get shorter and then regrow, the ordinary smoothing spline is not appropriate for this and other similar situations; instead, the appropriate tool is the monotone smoothing spline. Charles, Sun and Martin [2] used monotone smoothing splines in calculating distribution functions. There a problem can arise: the spline may become greater than 1 at some point, and by monotonicity it can never decrease afterwards, which violates the requirement that a cumulative distribution function only takes values between 0 and 1; this problem was not addressed in [2]. In this paper, we solve this problem by imposing the condition that the spline is bounded above by some number \(x_{\max }\) (which would be 1 in the case of cumulative distribution functions). We also present an application of monotone smoothing splines in mathematical biology, where sigmoidally shaped functions commonly occur.

The main difficulty of the problem is the monotonicity constraint \(\dot{x}\ge 0\), which has to hold at every point \(t\) in the interval under consideration. This infinite dimensional constraint is handled by a vector space version of the Karush–Kuhn–Tucker theorem, and using this, it is possible to reduce the original infinite dimensional problem to a finite dimensional problem which can be solved numerically.

The paper is organized as follows: In Sect. 2, we formulate the curve fitting problem as a constrained calculus of variations problem, and in Sect. 3, we show existence and uniqueness of the minimizer of this minimization problem, and hence existence and uniqueness of the monotone smoothing spline. In Sect. 4, we formulate the Karush–Kuhn–Tucker conditions for this problem, and use these to prove a key lemma saying that the second derivative \(\ddot{x}\) of the optimal curve is essentially piecewise linear. In Sect. 5, we use the key lemma from Sect. 4 to reformulate the infinite dimensional problem of Sect. 2 as a finite dimensional but nonlinear problem. This was essentially done previously in [3], except that we provide more details in the proof. We also introduce a new branch and bound type algorithm for computing the optimal curve. Sections 6 and 7 contain applications and examples which show how the method can be used. In Sect. 6, we reconstruct sigmoidally shaped curves arising in an intracellular signalling model, while in Sect. 7, we apply the method to the reconstruction of cumulative distribution functions from data, in particular for a cumulative distribution function arising in the cell cycle, describing the time certain cells remain in a particular phase of the cell cycle.

2 The Problem

Let \(T>0\), and let \(0=t_{0}< t_{1}<\cdots <t_{m}=T\). Consider a data set \(\{(t_{i},\alpha _{i})\}\) with \(\alpha _{i}\in \mathbb{R}\), with associated weights \(w_{i}>0\), \(i=1,\dots , m\).

Let \(x_{\max }>0\), and consider the following optimization problem for functions defined on an interval \([0,T]\):

$$ \min \Biggl(\frac{1}{2} \int _{0}^{T} \ddot{x}(t)^{2}\, dt + \frac{1}{2}\sum_{i=1}^{m} w_{i}\bigl(x(t_{i})-\alpha _{i} \bigr)^{2} \Biggr), $$
(1)

subject to

$$ \left \{ \begin{aligned} x &\in H^{2}\bigl((0,T)\bigr), \\ x(0)&=0, \\ \dot{x}&\ge 0 \quad \text{on }t\in (0,T), \\ x(T) &\le x_{\max }, \end{aligned} \right . $$
(2)

where \(H^{2}((0,T))\) is the Sobolev space of twice weakly differentiable functions on \((0,T)\). The condition \(x(0)=0\) is included because in many applications it is clear from the modelling that the curve must satisfy this condition; this happens for example in the intracellular signalling model that we discuss in Sect. 6. The condition \(x(T)\le x_{\max }\) arises from the requirement \(0\le x(t)\le x_{\max }\): since the curve is monotonically increasing, it suffices to impose the upper bound at the endpoint \(t=T\).

As \((0,T)\) is a bounded interval, we can use \(\|x\|:= (\int _{0}^{T} \ddot{x}(t)^{2}\, dt )^{1/2}\) as a norm on \(H^{2}((0,T))\). The rest of the paper is devoted to solving this problem, and to applications of the developed method.
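For concreteness, the following minimal Python sketch (the function name and the finite-difference discretization are illustrative choices of ours, not part of the method itself) evaluates a discretized version of the objective (1) for a candidate curve sampled on a uniform grid:

```python
import numpy as np

def objective(x, t, t_data, alpha, w):
    """Discretized version of (1): 0.5 * int_0^T xddot^2 dt
    + 0.5 * sum_i w_i (x(t_i) - alpha_i)^2, for x sampled on the uniform grid t."""
    dt = t[1] - t[0]
    xdd = np.diff(x, 2) / dt**2              # second differences approximate xddot
    smoothness = 0.5 * np.sum(xdd**2) * dt   # quadrature of the smoothness term
    misfit = 0.5 * np.sum(w * (np.interp(t_data, t, x) - alpha)**2)
    return smoothness + misfit
```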

3 Existence and Uniqueness of a Minimizer

Theorem 1

Let \(X= \{x\in H^{2}((0,T));\; x(0)=0, \dot{x}\ge 0, x(T)\le x_{\max }\}\), and assume that \(m\ge 1\). There exists a unique \(x_{*}\in X\) which solves the minimization problem

$$ \min_{x\in X} \Biggl(\frac{1}{2} \int _{0}^{T} \ddot{x}(t)^{2}\, dt + \frac{1}{2}\sum_{i=1}^{m} w_{i}\bigl(x(t_{i})-\alpha _{i} \bigr)^{2} \Biggr). $$

Proof

Let \(f:X\to \mathbb{R}\) be defined by

$$ f(x) = \frac{1}{2} \int _{0}^{T} \ddot{x}(t)^{2}\, dt + \frac{1}{2} \sum_{i=1}^{m} w_{i}\bigl(x(t_{i})-\alpha _{i} \bigr)^{2}. $$
(3)

We will use the direct method in the calculus of variations, which says that if a functional is coercive on \(H^{2}((0,T))\) and weakly lower semicontinuous on a weakly closed set, then a minimizer exists (see e.g. [17], p. 4). It is easy to see that the set \(X\) is convex and closed in \(H^{2}((0,T))\) and hence it is weakly closed by Mazur’s lemma (see e.g. Theorem 3.13 of [12]).

We will first check that the first term of \(f\) is weakly lower semicontinuous and coercive on \(H^{2}((0,T))\). Indeed, it is weakly lower semicontinuous since

$$ 0\le \int _{0}^{T} \bigl(\ddot{x}_{j}(t)- \ddot{x}(t)\bigr)^{2}\, dt = \int _{0}^{T} \ddot{x}_{j}(t)^{2} \, dt -2 \int _{0}^{T} \ddot{x}_{j}(t) \ddot{x}(t) \, dt + \int _{0}^{T} \ddot{x}(t)^{2}\, dt, $$

and so if \(x_{j}\rightharpoonup x\) (i.e. \(x_{j}\) converges weakly to \(x\) in \(H^{2}((0,T))\)), then

$$ \begin{aligned} 0&\le \liminf_{j\to \infty } \biggl( \int _{0}^{T} \ddot{x}_{j}(t)^{2} \, dt -2 \int _{0}^{T} \ddot{x}_{j}(t) \ddot{x}(t)\, dt + \int _{0}^{T} \ddot{x}(t)^{2}\, dt \biggr) \\ &= \liminf_{j\to \infty } \biggl( \int _{0}^{T} \ddot{x}_{j}(t)^{2} \, dt - \int _{0}^{T} \ddot{x}(t)^{2}\, dt \biggr), \end{aligned} $$

i.e.

$$ \int _{0}^{T} \ddot{x}(t)^{2}\, dt \le \liminf_{j\to \infty } \int _{0}^{T} \ddot{x}_{j}(t)^{2} \, dt, $$

which shows that \(x\mapsto \int _{0}^{T} \ddot{x}(t)^{2}\, dt\) is weakly lower semicontinuous on \(H^{2}((0,T))\). Coerciveness of the first term on \(H^{2}((0,T))\) is obvious since it is the square of the norm on \(H^{2}((0,T))\).

Weak lower semicontinuity of the second term of (1) follows since \(H^{2}((0,T))\) is compactly embedded in \(C^{1}([0,T])\subset C([0,T])\). Indeed, if \(x_{j}\rightharpoonup x\) in \(H^{2}((0,T))\), then \(x_{j}\) is bounded in \(H^{2}((0,T))\), and since the embedding of \(H^{2}((0,T))\) into \(C([0,T])\) is compact, \(x_{j}\) has a subsequence \(x_{j_{l}}\) which converges (to \(x\), by uniqueness of the weak limit) in \(C([0,T])\). Finally, note that for each \(i=1,\dots ,m\),

$$ \bigl|x_{j}(t_{i})-x(t_{i})\bigr|\le \max _{t\in [0,T]} \bigl|x_{j}(t)-x(t)\bigr| \to 0 $$

if \(x_{j}\rightharpoonup x\) in \(H^{2}((0,T))\), and so the sum in (1) is weakly continuous.

As \(f\) is a sum of two weakly lower semicontinuous functions, it is clear that \(f\) is weakly lower semicontinuous on \(X\).

Next, we prove that \(f\) is coercive on \(X\). As \(\int _{0}^{T} \ddot{x}(t)^{2}\, dt\) is the square of the norm chosen on \(H^{2}((0,T))\), we see that

$$ f(x) \ge \frac{1}{2} \int _{0}^{T} \ddot{x}(t)^{2}\, dt \to \infty $$

as \(\|x\|\to \infty \).

To show uniqueness, we will show that \(f\) is strictly convex. For this, we will use that \(u\mapsto \int _{0}^{T} u(t)^{2}\, dt\) and \(x\mapsto (x-\alpha _{1})^{2}\) are strictly convex on \(L^{2}([0,T])\) and on ℝ, respectively. Let \(f_{1}(x):=\int _{0}^{T} \ddot{x}(t)^{2}\, dt\), \(f_{2}(x)=w_{1}(x(t_{1})-\alpha _{1})^{2}\), and \(f_{3}(x)=\sum_{j=2}^{m} w_{j} (x(t_{j})-\alpha _{j})^{2}\), so that \(f=f_{1}+f_{2}+f_{3}\) on \(X\). It is clear that \(f_{1}\), \(f_{2}\) and \(f_{3}\) are convex (but not strictly convex) functions on \(X\). To show that \(f\) is strictly convex, we need to prove that if

$$ f\bigl(\lambda x_{1} + (1-\lambda ) x_{2}\bigr) = \lambda f(x_{1}) + (1- \lambda ) f(x_{2}) $$
(4)

for some \(\lambda \in (0,1)\) and \(x_{1}\), \(x_{2}\in X\), then \(x_{1}=x_{2}\).

To prove this, we assume that (4) holds. Since \(f_{i}\) are convex for \(i=1,\dots ,3\) and \(f=f_{1}+f_{2}+f_{3}\), Eq. (4) holds also when \(f\) is replaced by \(f_{i}\), \(i=1,\dots ,3\). Since \(u\mapsto \int _{0}^{T} u(t)^{2}\, dt\) is strictly convex, the equality (4) for \(f_{1}\) implies that \(\ddot{x}_{1}= \ddot{x}_{2}\). Then, by integration and using that \(x_{1}(0)=x_{2}(0)=0\), it follows that there exists a constant \(A\in \mathbb{R}\) such that \(x_{1}(t)=x_{2}(t) +At\).

On the other hand, since \(x\mapsto (x-\alpha _{1})^{2}\) is strictly convex and \(w_{1}>0\), equality (4) for \(f_{2}\) implies that \(x_{1}(t_{1})=x_{2}(t_{1})\). Combining this with \(x_{1}=x_{2}+At\), we obtain \(A=0\) (since \(t_{1}>0\) and \(x_{1}(0)=x_{2}(0)\)). We have proved that \(x_{1}(t)=x_{2}(t)\) for all \(t\in [0,T]\), and hence \(x_{1}=x_{2}\) in \(X\). This concludes the proof that \(f\) is strictly convex on \(X\), and from this it also follows that the minimizer is unique. □

4 The Karush–Kuhn–Tucker Conditions

The constrained optimization problem will be solved with a vector space version of the KKT method, cf. Theorem 1 p. 249 of [6].

Let \(X:=\{x\in H^{2}((0,T));\; x(0)=0\}\), and let \(Z:=C([0,T])\times \mathbb{R}\), with a norm defined by

$$ \bigl\| (u,v_{0})\bigr\| _{Z} = \bigl(\|u\|_{C([0,T])}^{2} + |v_{0}|^{2} \bigr)^{1/2}. $$

We define \(G:X\to Z\) by

$$ G(x) := \bigl(-\dot{x}, x(T)-x_{\max } \bigr). $$

We note that since \(x\in H^{2}((0,T))\), it follows that \(\dot{x}\in H^{1}((0,T))\subset C([0,T])\), and so it is clear that \(G:X\to Z\).

It is straightforward to check that \(G\) is Fréchet differentiable, and its derivative is

$$ G'(x) (h) = \bigl(-\dot{h}, h(T) \bigr). $$

By the Riesz representation theorem (see e.g. pp. 113–115 of [6], and pp. 146–150 of [18]), the dual space of \(C([0,T])\) is identified with the normalized space of functions of bounded variation on \([0,T]\), denoted by \(\mathit{NBV}([0,T])\), consisting of the functions \(\nu \) of bounded variation on \([0,T]\) which are right-continuous and satisfy \(\nu (0)=0\). Under this identification, each bounded linear functional \(\phi \) on \(C([0,T])\) can be expressed as the Riemann–Stieltjes integral

$$ \phi (u) = \int _{0}^{T} u(t) \,d\nu (t), $$

and the norm of \(\phi \) is the total variation of \(\nu \) on \([0,T]\), denoted by \(\|\nu \|_{\mathit{NBV}([0,T])}\).

We denote the dual space of \(Z\) by \(Z^{*}\). By the above result, \(Z^{*}\) is identified with \(\mathit{NBV}([0,T])\times \mathbb{R}\), and the norm of an element \((\nu ,\mu )\in Z^{*}\) is given by

$$ \bigl\| (\nu ,\mu )\bigr\| _{Z^{*}} = \bigl( \|\nu \|_{\mathit{NBV}([0,T])}^{2} + \mu ^{2} \bigr)^{1/2}. $$

The positive cone in \(Z\) is

$$ P:=\bigl\{ (w,\alpha )\in Z;\; w\ge 0 \text{ and }\alpha \ge 0\bigr\} . $$

It is clear that \(P\) has a nonempty interior. The corresponding positive cone \(P^{*}\) in \(Z^{*}\) is

$$ P^{*}:=\bigl\{ (\nu ,\mu )\in \mathit{NBV}\bigl([0,T]\bigr)\times \mathbb{R};\; \nu \text{ is nondecreasing and } \mu \ge 0\bigr\} . $$

We will derive the KKT conditions for the optimization problem of Eq. (1). In order to do this, we first show that all points satisfying the inequality \(G(x)\le 0\) (i.e. \(G(x)\in -P\)) are regular points for this inequality (cf. [6], p. 248).

Lemma 1

Every \(x\in X\) with \(G(x)\le 0\) is a regular point of the inequality \(G(x)\le 0\).

Proof

Let \(x\in X\) be such that \(G(x)\in -P\). We need to show that there exists an \(h\in X\) such that \(G(x)+G'(x)h\) is an interior point of \(-P\), i.e. that \(h\in H^{2}((0,T))\) satisfies

$$ \begin{aligned} -\dot{x} -\dot{h}< 0, \\ x(T)+h(T)-x_{\max }< 0. \end{aligned} $$

There are clearly many choices for \(h\), for example

$$ h(t) = \frac{x_{\max }}{2T} t - x(t). $$

With this choice, we have \(h\in X\) and

$$ \begin{aligned} -\dot{x}(t) - \dot{h}(t) &= -\frac{x_{\max }}{2T} < 0, \\ x(T)+h(T)-x_{\max } &= - \frac{x_{\max }}{2} < 0, \end{aligned} $$

i.e. \(G(x)+G'(x)h\) is an interior point of \(-P\). □

The functional \(f:X\to \mathbb{R}\) defined by (3) is Fréchet differentiable, and its derivative is given by

$$ f'(x) (h) = \int _{0}^{T} \ddot{x}(t) {\ddot{h}}(t)\, dt + \sum_{i=1}^{m} w_{i} \bigl(x(t_{i})-\alpha _{i}\bigr) h(t_{i}). $$

Let \(x_{*}\in X\) be the minimizer of \(f\) subject to \(G(x)\in -P\). By the KKT Theorem (see [6], p. 249), there exists a \(z_{*}\in Z^{*}\) with \(z_{*}\ge 0\) (i.e. \(z_{*}\in P^{*}\)) such that the Lagrangian

$$ f(x) + \bigl\langle G(x),z_{*}\bigr\rangle $$

is stationary at \(x_{*}\), and that \(\langle G(x_{*}),z_{*}\rangle =0\).

An explicit statement of the KKT conditions implies the following result, which will be used in the next section for constructing a numerical algorithm for the solution:

Lemma 2

Let \(x_{*}\in X\) be the minimizer of the minimization problem (1)–(2), and let \(u_{*}:=\ddot{x}_{*}\). Then the following holds:

  1. \(u_{*}\) is affine on each subinterval of \([t_{i-1},t_{i})\) where \(\dot{x}_{*}>0\),

  2. \(u_{*}(0)=u_{*}(T)=0\),

  3. if \(\dot{x}_{*}=0\) on an interval, then \(u_{*}=0\) on this interval (and hence it is an affine function also there).

Remark 1

We cannot conclude directly from Lemma 2 that \(u_{*}\) is piecewise linear, since we cannot yet rule out that there is an increasing sequence of points \(s_{j}\to s_{0}\) such that \(\dot{x}_{*}(s_{j})=0\) while \(\dot{x}_{*}(t)>0\) for \(t\in (s_{j-1},s_{j})\) (and \(u_{*}\) is affine on each of the intervals \((s_{j-1},s_{j})\)). We will see in Lemma 3 that this does not happen for the optimal curve, and \(u_{*}\) is in fact piecewise linear.

Proof

By the KKT conditions [6], p. 249, there exists a \(\nu _{*}\in \mathit{NBV}([0,T])\) and a \(\mu _{*}\in \mathbb{R}\) such that

$$ \begin{aligned} \int _{0}^{T} \ddot{x}_{*}(t) { \ddot{h}}(t)\, dt + \sum_{i=1}^{m} w_{i} \bigl(x_{*}(t_{i})-\alpha _{i}\bigr) h(t_{i}) - \int _{0}^{T} \dot{h}(t)\, d\nu _{*}(t) + \mu _{*} h(T) = 0 \end{aligned} $$
(5)

for all \(h\in X\). Furthermore,

$$ - \int _{0}^{T} \dot{x}_{*}(t)\, d\nu _{*}(t) + \mu _{*} \bigl(x_{*}(T)-x_{ \max } \bigr) = 0 $$
(6)

where \(\nu _{*}\) is nondecreasing and \(\mu _{*}\ge 0\). Equation (6) is the complementary slackness condition, and together with the constraint \(G(x_{*})\le 0\), it implies that \(\nu _{*}(t)\) is constant for \(t\) such that \(\dot{x}_{*}(t)> 0\), and that \(\mu _{*}=0\) if \(x_{*}(T)-x_{\max }< 0\).

The Riemann-Stieltjes integral in (5) may be integrated by parts, and doing so and noting that \(d{\dot{h}}(t)=\ddot{h} \,dt\) (since \(\dot{h}\in H^{1}((0,T))\) and hence absolutely continuous), we obtain after collecting the two integral terms

$$ \begin{aligned} \int _{0}^{T} \bigl(u_{*}(t)+\nu _{*}(t)\bigr) \ddot{h}(t)\, dt &+ \sum_{i=1}^{m} w_{i} \bigl(x_{*}(t_{i}) - \alpha _{i} \bigr) h(t_{i}) - \nu _{*}(T) \dot{h}(T) + \mu _{*} h(T) = 0, \end{aligned} $$
(7)

for all \(h\in X\). Choosing \(h(t)=0\) on all except one of the subintervals \((t_{i-1},t_{i})\), \(i=1,\dots ,m\), we conclude that for each \(i=1,\dots ,m\),

$$ \int _{t_{i-1}}^{t_{i}}\bigl(u_{*}(t)+\nu _{*}(t)\bigr)\ddot{h}(t)\, dt= 0 $$

for all \(h\in C_{0}^{\infty }([t_{i-1},t_{i}])\). Hence there exist \(\beta _{i}\), \(\gamma _{i}\), \(i=1,\dots ,m\) such that \(u_{*}(t) + \nu _{*}(t) = \beta _{i}+\gamma _{i} t\) on \((t_{i-1},t_{i})\). We may assume (by choosing a representative for the function \(u_{*}\in L^{2}((0,T))\)), that \(u_{*}(t)+\nu _{*}(t)=\beta _{i}+\gamma _{i} t\) on the half open interval \([t_{i-1},t_{i})\) for \(i=1,\dots ,m\). Hence \(u_{*}\) is a right continuous function of bounded variation.

Now with a general \(h\in X\), the integral term of (7) can be rewritten using integration by parts as

$$ \begin{aligned} &\sum_{i=1}^{m} \int _{t_{i-1}}^{t_{i}}(\beta _{i} + \gamma _{i} t) \ddot{h}(t)\, dt \\ &\quad = \sum_{i=1}^{m} \bigl((\beta _{i} + \gamma _{i} t_{i}) \dot{h}(t_{i}) - (\beta _{i}+\gamma _{i} t_{i-1}) \dot{h}(t_{i-1}) - \gamma _{i} \bigl(h(t_{i})-h(t_{i-1})\bigr) \bigr) \\ &\quad = \sum_{i=1}^{m-1} \bigl(\bigl(\beta _{i}-\beta _{i+1} + (\gamma _{i} - \gamma _{i+1}) t_{i}\bigr)\dot{h}(t_{i})-(\gamma _{i}-\gamma _{i+1})h(t_{i}) \bigr) \\ &\qquad -\beta _{1} \dot{h}(0) + (\beta _{m}+\gamma _{m}T)\dot{h}(T)- \gamma _{m} h(T). \end{aligned} $$

By choosing \(h\) appropriately (i.e. exactly one of \(h(t_{i})\) and \(\dot{h}(t_{i})\) not equal to zero), we conclude that

$$ \begin{aligned} \beta _{i}-\beta _{i+1}+(\gamma _{i}-\gamma _{i+1})t_{i} &= 0, \quad \text{for }i=1,\dots ,m-1, \\ -(\gamma _{i}-\gamma _{i+1})+ w_{i} \bigl(x_{*}(t_{i})-\alpha _{i}\bigr)&=0, \quad \text{for }i=1,\dots ,m-1, \\ \beta _{1}&=0, \\ \beta _{m} + \gamma _{m} T - \nu _{*}(T) &= 0, \\ -\gamma _{m} + \mu _{*}+w_{m} \bigl(x_{*}(T)-\alpha _{m}\bigr)&=0, \end{aligned} $$
(8)

where we have also used that \(h(0)=0\). In particular, since \(\nu _{*}(0)=0\) and \(\beta _{1}=0\), it follows that \(u_{*}(0)=0\), and the fourth equation of (8) similarly gives \(u_{*}(T)=\beta _{m}+\gamma _{m}T-\nu _{*}(T)=0\). Hence \(u_{*}\in \mathit{NBV}([0,T])\). Note that the first equation of (8) implies that \(u_{*}+\nu _{*}\) is continuous at the spline knots \(t_{i}\), \(i=1,\dots ,m\), and the second equation implies that the derivative of \(u_{*}+\nu _{*}\) has jumps of size \(-w_{i}(x_{*}(t_{i})-\alpha _{i})\) at \(t_{i}\), \(i=1,\dots ,m-1\).

Recall that \(\nu _{*}\) is (locally) constant for \(t\) such that \(\dot{x}_{*}(t)>0\). Let us examine what happens for a point \(s_{0}\) such that \(\dot{x}_{*}(s_{0})=0\). If \(s_{0}\) is an isolated zero of \(\dot{x}_{*}\), then \(\nu _{*}\) may have a jump discontinuity at \(s_{0}\) (where it is right continuous). If \(\dot{x}_{*}=0\) on an interval around \(s_{0}\), then clearly also \(u_{*}=\ddot{x}_{*}=0\) on this interval. In particular \(u_{*}\) is piecewise linear in this interval. □

5 An Algorithm for Computing the Optimal Solution

Using Lemma 2, we will now reformulate the optimization problem as a finite dimensional problem which we can solve numerically. This is essentially the approach of Sect. 7.3 of [3], and their method has been adapted to the extra constraint \(x(T)\le x_{\max }\). Instead of using dynamic programming as in [3], we give an outline of a branch and bound algorithm for finding an optimal solution to the problem. The variables of this new problem are \(x_{1},\dots ,x_{m},v_{1},\dots ,v_{m}\), which are the (unknown) values of the function \(x(t)\) and its derivative \(\dot{x}(t)\) at the spline knots.

Assuming initially that the values \(x_{1},\dots ,x_{m}\) and \(v_{1},\dots ,v_{m}\) are known, we will use the method of [3] to determine a curve with minimal cost under the constraint that it passes through the points \((t_{1},x_{1}),\dots ,(t_{m},x_{m})\) with derivatives at these points equal to \(v_{1},\dots ,v_{m}\), respectively. This will give us a new cost function depending on the variables \(x_{1},\dots ,x_{m}\) and \(v_{1},\dots ,v_{m}\). Minimizing this function is equivalent to the original infinite dimensional problem.

Now we focus our attention on one interval \([t_{i},t_{i+1}]\), and rename it \([t_{0},t_{F}]\). Without loss of generality, we assume that \(t_{0}=0\). The corresponding values of \(x\) at the endpoints 0 and \(t_{F}\) are denoted by \(x_{0}\) and \(x_{F}\), respectively. We assume without loss of generality that \(x_{0}=0\). The values of the derivatives at the endpoints are denoted by \(\dot{x}_{0}\) and \(\dot{x}_{F}\). The following lemma is essentially given in [3], but here we provide the full proof with more details, taking care to exclude the pathological case described in the remark after Lemma 2. Note also that a typo in formula (7.27) of [3] has been corrected (\(\dot{x}_{i}^{2}\) instead of \(\dot{x}_{i}\)).

Lemma 3

[3] Suppose that \(\dot{x}_{0}, \dot{x}_{F}\ge 0\), \(x_{F}\ge 0\) and \(t_{F}>0\). Then the optimal control \(u\) which minimizes \(\int _{0}^{t_{F}} u^{2}\, dt\) subject to the constraints \(\ddot{x}(t)=u(t)\) for \(t\in (0,t_{F})\), \(x(0)=0\), \(x(t_{F})=x_{F}\), \(\dot{x}(0)=\dot{x}_{0}\), \(\dot{x}(t_{F})=\dot{x}_{F}\) and \(\dot{x}(t)\ge 0\) for \(t\in (0,t_{F})\) is given by

$$ u(t)= \biggl(\frac{6(\dot{x}_{0}+\dot{x}_{F})}{t_{F}^{2}}-12 \frac{x_{F}}{t_{F}^{3}} \biggr)t + \frac{6 x_{F}}{t_{F}^{2}}- \frac{4\dot{x}_{0}}{t_{F}}-\frac{2\dot{x}_{F}}{t_{F}} $$

if \(x_{F}\ge \frac{t_{F}}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{ \dot{x}_{0} \dot{x}_{F}} )\), and

$$ u(t) = \textstyle\begin{cases} \frac{2 (\dot{x}_{0}^{3/2}+ \dot{x}_{F}^{3/2} )^{2}}{9 x_{F}^{2}} (t - \frac{3 x_{F} \dot{x}_{0}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} ) &\textit{if }0\le t< \frac{3x_{F} \dot{x}_{0}^{1/2}}{\dot{x}_{0}^{3/2}+\dot{x}_{F}^{3/2}}, \\ 0 &\textit{if } \frac{3x_{F} \dot{x}_{0}^{1/2}}{\dot{x}_{0}^{3/2}+\dot{x}_{F}^{3/2}}\le t \le t_{F}- \frac{3 x_{F}\dot{x}_{F}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}}, \\ \frac{2 (\dot{x}_{0}^{3/2}+ \dot{x}_{F}^{3/2} )^{2}}{9 x_{F}^{2}} (t-t_{F} + \frac{3 x_{F} \dot{x}_{F}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} ) &\textit{if } t_{F}- \frac{3 x_{F}\dot{x}_{F}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}}< t \le t_{F} \end{cases} $$

if \(x_{F} < \frac{t_{F}}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{\dot{x}_{0} \dot{x}_{F}} )\). The contribution to the cost in the two cases is

$$ \int _{0}^{t_{F}} u(t)^{2}\, dt = \textstyle\begin{cases} 4 \frac{(\dot{x}_{0}^{2} + \dot{x}_{F}^{2}) t_{F}^{2}-3 x_{F} (\dot{x}_{0} + \dot{x}_{F})t_{F} + 3 x_{F}^{2} + \dot{x}_{0} \dot{x}_{F} t_{F}^{2}}{t_{F}^{3}} & \textit{if }x_{F}\ge \frac{t_{F}}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{\dot{x}_{0} \dot{x}_{F}} ), \\ \frac{4}{9 x_{F}} (\dot{x}_{0}^{3/2}+\dot{x}_{F}^{3/2} )^{2} &\textit{otherwise.} \end{cases} $$

The corresponding spline function \(x\) is given by

$$ x(t) = \dot{x}_{0} t + \biggl(\frac{3 x_{F}}{t_{F}^{2}} - \frac{2\dot{x}_{0} + \dot{x}_{F}}{t_{F}} \biggr)t^{2} + \biggl( \frac{\dot{x}_{0}+\dot{x}_{F}}{t_{F}^{2}} - \frac{2 x_{F}}{t_{F}^{3}} \biggr) t^{3} $$

if \(x_{F}\ge \frac{t_{F}}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{ \dot{x}_{0} \dot{x}_{F}} )\), and

$$ x(t) = \textstyle\begin{cases} \frac{x_{F} \dot{x}_{0}^{3/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} + \frac{(\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2})^{2}}{27 x_{F}^{2}} (t - \frac{3 x_{F} \dot{x}_{0}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} )^{3} & \textit{ if }0\le t < \frac{3 x_{F} \dot{x}_{0}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} \\ \frac{x_{F} \dot{x}_{0}^{3/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} & \textit{if } \frac{3 x_{F} \dot{x}_{0}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} \le t\le t_{F}- \frac{3 x_{F}\dot{x}_{F}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}}, \\ \frac{x_{F} \dot{x}_{0}^{3/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} + \frac{ (\dot{x}_{0}^{3/2}+ \dot{x}_{F}^{3/2} )^{2}}{27 x_{F}^{2}} (t-t_{F} + \frac{3 x_{F} \dot{x}_{F}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}} )^{3} &\textit{if } t_{F}- \frac{3 x_{F}\dot{x}_{F}^{1/2}}{\dot{x}_{0}^{3/2} + \dot{x}_{F}^{3/2}}< t \le t_{F} \end{cases} $$
(9)

if \(x_{F} < \frac{t_{F}}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{\dot{x}_{0} \dot{x}_{F}} )\).

Proof

The optimal control \(u\) which minimizes \(\int _{0}^{t_{F}} u^{2}\, dt\) subject to the constraints \(\ddot{x}=u\), \(x(0)=0\), \(x(t_{F})=x_{F}\), \(\dot{x}(0)=\dot{x}_{0}\), \(\dot{x}(t_{F})=\dot{x}_{F}\) (that is all the constraints except the monotonicity constraint \(\dot{x}\ge 0\)), is an affine function \(u(t)=Ct+D\) where \(C\) and \(D\) are chosen so that the constraints are satisfied. This gives the expression

$$ u(t) = \biggl(\frac{6(\dot{x}_{0}+\dot{x}_{F})}{t_{F}^{2}}-12 \frac{x_{F}}{t_{F}^{3}} \biggr)t + \frac{6 x_{F}}{t_{F}^{2}}- \frac{4\dot{x}_{0}}{t_{F}}-\frac{2\dot{x}_{F}}{t_{F}} $$

for all choices of \(x_{F}\), \(\dot{x}_{0}\), \(\dot{x}_{F}\) and \(t_{F}\). By integration and using the remaining constraints, the corresponding curve \(x(t)\) on the interval \((0,t_{F})\) is given by

$$ x(t) = \biggl(\frac{\dot{x}_{0}+\dot{x}_{F}}{t_{F}^{2}} - \frac{2 x_{F}}{t_{F}^{3}} \biggr) t^{3} + \biggl( \frac{3 x_{F}}{t_{F}^{2}} - \frac{2\dot{x}_{0} + \dot{x}_{F}}{t_{F}} \biggr)t^{2} + \dot{x}_{0} t. $$

Clearly, in the cases when the monotonicity constraint \(\dot{x}\ge 0\) is satisfied, this curve is optimal also for the original problem. We claim that \(\dot{x}\ge 0\) on \((0,t_{F})\) if and only if \(\frac{x_{F}}{t_{F}}\ge \frac{1}{3} (\dot{x}_{0} +\dot{x}_{F}- \sqrt{\dot{x}_{0} \dot{x}_{F}} )\). To prove this, note that the quadratic function \(\dot{x}\) is given by

$$ \begin{aligned} \dot{x}(t) &= 3 \biggl(\frac{\dot{x}_{0}+\dot{x}_{F}}{t_{F}^{2}} - \frac{2 x_{F}}{t_{F}^{3}} \biggr)t^{2} +2 \biggl( \frac{3 x_{F}}{t_{F}^{2}}- \frac{2 \dot{x}_{0}+\dot{x}_{F}}{t_{F}} \biggr) t + \dot{x}_{0} \\ &=\frac{1}{2} C t^{2} + Dt + \dot{x}_{0}. \end{aligned} $$

We note that \(\dot{x}(0)=\dot{x}_{0}\ge 0\) and \(\dot{x}(t_{F})= \dot{x}_{F}\ge 0\) by the assumptions, and so \(\dot{x}(t)\ge 0\) for all \(t\in (0,t_{F})\) if and only if the value of \(\dot{x}\) at an interior minimizer is nonnegative.

We study the cases \(C\ge 0\) and \(C<0\) separately. If \(C<0\), then \(\dot{x}\) is concave, so its minimum over \([0,t_{F}]\) is attained at one of the endpoints, where it is nonnegative; hence the monotonicity condition is always satisfied. We also note that in this case the inequality

$$ \frac{x_{F}}{t_{F}}>\frac{1}{2} (\dot{x}_{0}+ \dot{x}_{F} ) \ge \frac{1}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{\dot{x}_{0} \dot{x}_{F}} ) $$

holds automatically.

If \(C\ge 0\), i.e. if \(\frac{x_{F}}{t_{F}}\le \frac{1}{2} (\dot{x}_{0} + \dot{x}_{F} )\), then we need to determine when the minimizer of \(\dot{x}\) belongs to the interval \((0,t_{F})\) and, in that case, when the minimum value is nonnegative. The minimizer belongs to the interval if and only if \(-\frac{D}{2}\in [0, \frac{t_{F} C}{2}]\), i.e. if

$$ 0\le \frac{2 \dot{x}_{0} + \dot{x}_{F}}{t_{F}} - \frac{3 x_{F}}{t_{F}^{2}} \le 3 t_{F} \biggl( \frac{\dot{x}_{0} + \dot{x}_{F}}{t_{F}^{2}} - \frac{2 x_{F}}{t_{F}^{3}} \biggr), $$

which holds if and only if

$$ \frac{x_{F}}{t_{F}}\le \frac{1}{3} \bigl( \dot{x}_{0} + \dot{x}_{F} + \min (\dot{x}_{0}, \dot{x}_{F}) \bigr). $$
(10)

The minimum value \(\dot{x}_{0}-\frac{D^{2}}{2C}\) is nonnegative if and only if \(\frac{\dot{x}_{0} C}{2}- (\frac{D}{2} )^{2}\ge 0\), i.e.

$$ \begin{aligned} 0&\le 3\dot{x}_{0} \biggl( \frac{\dot{x}_{0} + \dot{x}_{F}}{t_{F}^{2}} - \frac{2 x_{F}}{t_{F}^{3}} \biggr) - \biggl(\frac{3 x_{F}}{t_{F}^{2}} - \frac{2 \dot{x}_{0} + \dot{x}_{F}}{t_{F}} \biggr)^{2} \\ &= - \biggl(\frac{3 x_{F}}{t_{F}^{2}}- \frac{\dot{x}_{0} + \dot{x}_{F}}{t_{F}} \biggr)^{2} + \frac{\dot{x}_{0} \dot{x}_{F}}{t_{F}^{2}}, \end{aligned} $$

and this inequality holds if and only if

$$ \frac{1}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{\dot{x}_{0} \dot{x}_{F}} ) \le \frac{x_{F}}{t_{F}}\le \frac{1}{3} (\dot{x}_{0} + \dot{x}_{F} + \sqrt{\dot{x}_{0} \dot{x}_{F}} ). $$

We note that the right inequality is always satisfied if the minimizer belongs to the interval \((0,t_{F})\), by (10) and since \(\min (\dot{x}_{0},\dot{x}_{F})\le \sqrt{\dot{x}_{0} \dot{x}_{F}}\). To summarize, we see that irrespective of the sign of \(C\), a necessary and sufficient condition for the monotonicity of \(x\) on \((0,t_{F})\) is that

$$ \frac{x_{F}}{t_{F}}\ge \frac{1}{3} (\dot{x}_{0} + \dot{x}_{F} - \sqrt{\dot{x}_{0} \dot{x}_{F}} ), $$

as required. □
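The formulas of Lemma 3 are straightforward to evaluate numerically. The following is a minimal Python sketch (the function name, the vectorized evaluation and the omission of the degenerate case \(x_{F}=0\) in the second branch are our own illustrative choices) of the optimal segment \(x(t)\):

```python
import numpy as np

def segment_spline(t, x_F, v0, vF, t_F):
    """Optimal segment of Lemma 3 on [0, t_F]: x(0)=0, x(t_F)=x_F,
    xdot(0)=v0, xdot(t_F)=vF, with v0, vF >= 0 and t_F > 0 assumed."""
    t = np.asarray(t, dtype=float)
    if x_F >= t_F / 3.0 * (v0 + vF - np.sqrt(v0 * vF)):
        # unconstrained cubic segment (first case of the lemma)
        a2 = 3.0 * x_F / t_F**2 - (2.0 * v0 + vF) / t_F
        a3 = (v0 + vF) / t_F**2 - 2.0 * x_F / t_F**3
        return v0 * t + a2 * t**2 + a3 * t**3
    # cubic-constant-cubic segment of Eq. (9); requires x_F > 0 here
    S = v0**1.5 + vF**1.5
    plateau = x_F * v0**1.5 / S              # constant value on the middle piece
    t1 = 3.0 * x_F * np.sqrt(v0) / S         # end of the first cubic piece
    t2 = t_F - 3.0 * x_F * np.sqrt(vF) / S   # start of the last cubic piece
    c = S**2 / (27.0 * x_F**2)
    x = np.full_like(t, plateau)
    left, right = t < t1, t > t2
    x[left] = plateau + c * (t[left] - t1)**3
    x[right] = plateau + c * (t[right] - t2)**3
    return x
```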

Let

$$ \begin{aligned} &V(\Delta x,v_{l},v_{r},\Delta t) \\ &\quad := \textstyle\begin{cases} 4 \frac{(v_{l}^{2}+v_{r}^{2}) (\Delta t)^{2}-3 \Delta x (v_{l}+v_{r}) \Delta t+3 (\Delta x)^{2}+v_{l} v_{r} (\Delta t)^{2}}{(\Delta t)^{3}} &\text{if } \Delta x \ge \frac{\Delta t}{3}(v_{l}+v_{r}-\sqrt{v_{l} v_{r}}), \\ \frac{4}{9 \Delta x} (v_{l}^{3/2}+ v_{r}^{3/2} )^{2} &\text{otherwise.} \end{cases}\displaystyle \end{aligned} $$
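A direct transcription of \(V\) reads as follows (Python sketch; the function and argument names are ours):

```python
import numpy as np

def V(dx, vl, vr, dt):
    """Per-subinterval cost V(dx, vl, vr, dt) as defined above, where
    dx = x_{i+1} - x_i, vl = xdot_i, vr = xdot_{i+1}, dt = t_{i+1} - t_i."""
    if dx >= dt / 3.0 * (vl + vr - np.sqrt(vl * vr)):
        return 4.0 * ((vl**2 + vr**2) * dt**2
                      - 3.0 * dx * (vl + vr) * dt
                      + 3.0 * dx**2
                      + vl * vr * dt**2) / dt**3
    return 4.0 / (9.0 * dx) * (vl**1.5 + vr**1.5)**2
```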

In view of Lemma 2 and by considering functions which on each subinterval are of the form given in Lemma 3, we have proved the following:

Theorem 2

The infinite dimensional optimization problem of Sect. 2 is equivalent to the finite dimensional problem

$$ \min \Biggl(\frac{1}{2} \sum _{i=1}^{m-1} V(x_{i+1}-x_{i}, \dot{x}_{i}, \dot{x}_{i+1},t_{i+1}-t_{i}) + \frac{1}{2} \sum_{i=1}^{m} w_{i}(x_{i}- \alpha _{i})^{2} \Biggr) $$
(11)

subject to the constraints

$$ \begin{aligned} -\dot{x}_{i}&\le 0\textit{ for }i=1,\dots ,m, \\ x_{i}-x_{i+1}&\le 0\textit{ for }i=1,\dots ,m-1, \\ x_{m}&\le x_{\max }, \\ x_{1}&=0. \end{aligned} $$

The nonlinear optimization problem of Theorem 2 can be solved numerically, for example with fmincon in Matlab. Unfortunately, this method does not seem to give stable results when there are more than 10 subintervals, and it is hard to analyse due to the piecewise defined objective function.
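For reference, here is a minimal sketch of this direct approach in Python, using scipy.optimize.minimize with SLSQP in place of fmincon (all names, the initial guess and the solver choice are illustrative assumptions of ours, and the same instability for larger problems should be expected):

```python
import numpy as np
from scipy.optimize import minimize

def V(dx, vl, vr, dt):
    # per-subinterval cost, repeated here so that the sketch is self-contained
    if dx >= dt / 3.0 * (vl + vr - np.sqrt(vl * vr)):
        return 4.0 * ((vl**2 + vr**2) * dt**2 - 3.0 * dx * (vl + vr) * dt
                      + 3.0 * dx**2 + vl * vr * dt**2) / dt**3
    return 4.0 / (9.0 * dx) * (vl**1.5 + vr**1.5)**2

def fit_direct(t, alpha, w, x_max):
    """Direct minimization of (11) over z = (x_1,...,x_m, v_1,...,v_m)."""
    t, alpha, w = map(np.asarray, (t, alpha, w))
    m = len(t)

    def cost(z):
        x, v = z[:m], z[m:]
        dx, dt = np.diff(x), np.diff(t)
        seg = sum(V(dx[i], v[i], v[i + 1], dt[i]) for i in range(m - 1))
        return 0.5 * seg + 0.5 * np.sum(w * (x - alpha)**2)

    cons = [
        {"type": "ineq", "fun": lambda z: np.diff(z[:m])},    # x_i <= x_{i+1}
        {"type": "ineq", "fun": lambda z: x_max - z[m - 1]},  # x_m <= x_max
        {"type": "eq",   "fun": lambda z: z[0]},              # x_1 = 0
    ]
    bounds = [(None, None)] * m + [(0.0, None)] * m           # v_i >= 0
    x_end = float(np.clip(alpha[-1], 0.0, x_max))             # monotone initial guess
    z0 = np.concatenate([np.linspace(0.0, x_end, m), np.zeros(m)])
    res = minimize(cost, z0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x[:m], res.x[m:]
```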

To get around this problem, we suggest using a branch-and-bound approach, which is outlined below. We emphasise that the algorithm is guaranteed to terminate, since there are finitely many (at most \(2^{m+1}\), but typically far fewer in practice) subproblems to solve, each of which is convex and can be solved within a fixed time, e.g. with Newton’s method. We suggest using a breadth first search when going through the branches of the tree, and for most problems it is likely that only a few levels of the tree need to be searched. Further investigations will be needed to determine how efficient the algorithm is and the size of the problems that can be solved in practice; these questions will be addressed in a future project. In the current paper (Sects. 6 and 7) we give some examples where the method has been implemented for up to 30 data points with good results.

  1. Start by fitting an ordinary cubic smoothing spline using the data points and with the constraints that \(x(0)=0\) and \(x(T)\le x_{\max }\). If this curve satisfies \(\dot{x}(t)\ge 0\) for every \(t\in (0,T)\), or, equivalently, \(x(t_{i+1})-x(t_{i})\ge \frac{t_{i+1}-t_{i}}{3} (v_{i} + v_{i+1} - \sqrt{v_{i} v_{i+1}} )\) for every \(i=0,\dots ,m-1\) (a small sketch of this check is given after this list), then this must be the optimal curve, and we can stop. Otherwise, the optimal value for this step gives a lower bound for the optimal solution.

  2. If the curve in step 1 was not optimal, we branch the problem into \(m-1\) subproblems, where each subproblem corresponds to an interval for which the spline is given by a piecewise defined curve as in (9), whereas the spline should be an ordinary cubic spline on the other subintervals. A minimization problem is solved using (11), except that the first line in the definition of \(V\) is taken for the subintervals where the curve should be an ordinary spline, and the second line for the subinterval where the spline should be piecewise defined. The optimal value for each of these subproblems gives a lower bound for the optimum of that branch. If the curve is monotone, then we have an optimum for the current branch and don’t need to branch any further. Otherwise, that node has to be branched into further subproblems, each with one more subinterval where we use a piecewise defined spline curve.

  3. We continue branching and bounding. Branches whose lower bound is larger than an optimum already found in another branch can be cut off, and don’t need to be examined further. In the end, we compare the optima of the remaining branches; the smallest one gives the minimum of the full problem.
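As mentioned in step 1, the monotonicity test on each subinterval reduces, by Lemma 3, to a simple inequality; the following Python sketch (function names are ours) performs this check and returns the subintervals that would have to be branched on in step 2:

```python
import numpy as np

def segment_is_monotone(dx, vl, vr, dt):
    # criterion of Lemma 3: the unconstrained cubic on the subinterval
    # is nondecreasing iff dx >= dt/3 * (vl + vr - sqrt(vl*vr))
    return dx >= dt / 3.0 * (vl + vr - np.sqrt(vl * vr))

def violating_intervals(x, v, t):
    """Indices of subintervals where the fitted cubic spline fails the
    monotonicity test, i.e. the candidates for branching in step 2."""
    dx, dt = np.diff(x), np.diff(t)
    return [i for i in range(len(dx))
            if not segment_is_monotone(dx[i], v[i], v[i + 1], dt[i])]
```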

In Sects. 6 and 7, we show how this method can be used to construct graphs of increasing curves relevant in applications from computational biology and for finding cumulative distribution functions.

6 Applications to an Intracellular Signaling Model

As a first application of the algorithm developed in Sect. 5, we suggest curve fitting using monotone smoothing splines as an alternative to the parametric models that are commonly used in the modelling of cellular pathways. This is expected to be particularly useful in cases where the underlying chemistry is not completely understood, but where certain monotonicity trends can be observed in the data. The ODE or stochastic models that are commonly used can be very complicated; see e.g. [10], where the modelling process for this type of model is described. In Sect. 3 of that reference, a relatively simple modelling example of this type, arising in neuroscience, is given, and we describe it briefly here to give the reader some context.

Calmodulin is an abbreviation for calcium-modulated protein, which is an intermediate calcium-binding messenger protein present in all eukaryotic cells. Once bound to a calcium ion, calmodulin acts as part of a calcium signal pathway by modifying its interactions with various target proteins such as kinases or phosphatases. Its importance in neuroscience stems from its crucial involvement in synaptic plasticity.

We consider five data sets with experimental data taken from [1, 11, 15, 16], corresponding to models describing this particular pathway. See [5, 10] for a model using a system of ordinary differential equations stemming from steady state equations for these reactions, and where the same data sets are used. This particular model consists of the elementary species calcium (Ca), calmodulin (CaM), protein phosphatase 2B (PP2B), Ca/CaM-dependent protein kinase II (CaMKII) and protein phosphatase 1 (PP1).

Calmodulin binds up to four Ca ions, and the first data set that we consider describes how many (moles of) calcium ions are bound to CaM per (mole of) CaM, plotted versus the Ca concentration. When the Ca concentration increases, more calcium ions are bound to the proteins, and it is therefore natural to assume that the curve is the graph of an increasing function. Each calmodulin molecule can bind at most four Ca ions, and hence the range of the function is naturally contained in \([0,4]\). When there is no Ca present in the system, there cannot be any binding, and so we require that the curve passes through the origin. The data set is taken from [16], and the data together with the fitted monotone spline is shown in the first subfigure of Fig. 1.

Fig. 1

Reconstruction of the first five plots of Fig. 12.3 of [10] with monotone smoothing splines, using the branch and bound algorithm outlined in Sect. 5. The data for the respective subfigures are taken (in order) from [1, 11, 15, 16]

The binding of Ca ions by calmodulin is a cooperative process. Ca-bound CaM activates PP2B, another protein implicated in molecular processes related to learning, which also plays a role in striatal signaling. Dataset 2, which is taken from [11], describes the number of moles of apo calmodulin (apoCaM, i.e. calmodulin without calcium bound to it) bound per mole of PP2B versus the concentration of apoCaM. Since one PP2B molecule can bind at most one calmodulin molecule, the range of the function is naturally between 0 and 1. As there has to be apoCaM in the system for this type of binding to occur, we require that the point \((0,0)\) is on the curve. The more apoCaM there is in the system, the more likely it is for such a binding to occur, and it is hence natural to assume that the fitted function is increasing.

In the third subfigure, percentage activation of PP2B is plotted versus Ca concentration at two different concentrations of CaM (30 nM for the left curve and 300 nM for the right curve). The data for this subfigure is taken from [16]. Naturally, the range of the function is contained in \([0,100]\), and the data suggests that the binding is more likely to occur for higher concentrations of Ca, and hence it is natural to fit the data with a curve which is the graph of an increasing function. Again, Ca is needed for the activation to occur, and for this reason we require that the curve starts at the origin.

The third protein, CaMKII, is a kinase which is activated by the binding of Ca-CaM. In the fourth subfigure, we consider data from [15], representing the number of moles of Ca bound to CaM per mole of CaM in the presence of the enzyme CaMKII. As in subfigure 1, the curve is expected to be the graph of an increasing function whose range is contained in \([0,4]\) and which passes through the origin.

CaMKII molecules exist as dodecamers, consisting of two hexamer rings. A CaMKII unit that has bound CaM can autophosphorylate when sitting beside an active neighboring unit in the same hexamer ring. The phosphorylated unit can remain active even in the absence of Ca-CaM. In subfigure 5, the data comes from [1], and it describes the percentage of phosphorylated CaMKII (autonomy in CaMKII activity) versus calcium concentration.

The method of the current paper was applied to these data sets in order to fit curves which come close to the data points. The weights \(w_{i}\) were set to 1 for all examples, and the parameter \(\lambda \) was chosen to be 100 for the first two datasets and 1000 for the last three.

The same range was used for the variables as in Fig. 12.3 of [10]. Instead of imposing a new type of condition corresponding to the limit as the independent variable tends to infinity, an additional data point was introduced, which forces the curve to come close to the maximum bound at the right endpoint of the interval. For example, for dataset 1, we demand that the curve comes close to the point \((6,4)\). After doing this, the method of Sect. 5 could be used directly. The plots of Fig. 1 were obtained, and these can be compared to the first five plots in Fig. 12.3 of [10].

7 Applications for Cumulative Distribution Functions and an Example from Cell Cycle Models

A second application of the technique of this paper is the reconstruction of an unknown distribution function from given data points.

Suppose that it is known that a sample comes from a distribution with an absolutely continuous distribution function, but that the exact form of the distribution is unknown. Then we can reconstruct the cumulative distribution function using monotone splines. We first test the method on a sample of data points drawn from a normal distribution.

Using \(x_{\max }=1\), 1000 random points were generated from the normal distribution with mean 0 and standard deviation 1. Then a histogram with 20 bins was created using Matlab’s function histcounts, and the vector \(\alpha \) of data values was created using Matlab’s function cumsum. With \(\lambda =50\), the method of this paper was used, with the minor modification that \(u(0)=0\) is replaced by \(u(t_{0})\ge 0\), to create an approximation of the cumulative distribution function. By differentiating the obtained spline function, an estimate of the density function could also be obtained. The results can be seen in Fig. 2.
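The data preparation for this test is simple; the following sketch shows the analogous steps in Python (np.histogram and np.cumsum play the roles of Matlab’s histcounts and cumsum, and the random seed and the use of the right bin edges as knots are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=1000)   # 1000 N(0,1) points
counts, edges = np.histogram(sample, bins=20)        # histogram with 20 bins
t_knots = edges[1:]                                  # right bin edges as knots t_i
alpha = np.cumsum(counts) / counts.sum()             # empirical CDF values alpha_i
# (t_knots, alpha) can then be fed to the monotone spline fit with x_max = 1.
```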

Fig. 2

Estimate of the cumulative distribution function using monotone splines (left), and comparison of the derived density function with the exact density function of the normal distribution and the histogram (right)

The method is more useful when the data does not come from a standard distribution, and the following is an example of such a situation arising in cell biology. The cell cycle consists of four distinct phases: \(G1\), \(S\), \(G2\) and \(M\). For many types of cells, the time a cell spends in the \(G1\) phase is highly variable, and it is of interest to find the distribution of the time a cell spends in the \(G1\) phase. See for example [7], where such a distribution is used in an age structured cell cycle model. FUCCI is a fluorescence technology that can be used for tracking the time an individual cell spends in the \(G1\) phase [13, 14]. Using movie S1 of the supplementary material of [13], the histogram data for the time that each cell in that movie stays in the \(G1\) phase was obtained. Using \(\lambda =3\), the cumulative distribution function could be estimated; it is shown in Fig. 3.

Fig. 3

Estimate of the cumulative distribution function using monotone splines (left), and comparison of the derived density function with the histogram (right)