On Oscillation Properties of Self-Adjoint Boundary Value Problems of Fourth Order

The connection between the number of internal zeros of nontrivial solutions to fourth-order self-adjoint boundary value problems and the inertia index of these problems is studied. We specify the types of problems for which such a connection can be established. In addition, we specify the types of problems for which a connection between the inertia index and the number of internal zeros of the derivatives of nontrivial solutions can be established. Examples demonstrating the effectiveness of the proposed new approach to the oscillation problem are considered.


1.
In this paper, we present some results and methods that make it possible to keep track of the relationship between the number of internal zeros of the solution to a self-adjoint boundary value problem of fourth order and the (negative) index of inertia of this problem. Classical results of this type for the Sturm-Liouville problem are well known (see, e.g., [1]), but, for a fourth-order problem, the situation is much more complicated.
Consider the boundary value problem (1), (2), where B and C are block diagonal real matrices of order 4 and the symbol B^{-1} denotes the total preimage. Here, we also use the corresponding auxiliary notation. In the study of problem (1), (2), three basic degrees of generality are possible. The first corresponds to the case of , , and ; moreover, the function p is assumed to be positive. Here, Eq. (1) is understood directly, and its solution is sought among the functions from the Sobolev space . Fourth-order problems are still often considered in this formulation.
The second degree of generality corresponds to the case when and ; moreover, the function p is again assumed to be positive. Here, Eq. (1) is understood as formal notation for the equation , whose solutions are sought in the class of functions satisfying the conditions . This treatment is also classical and can be found in [2, Section 15].
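For orientation only, and not as a reproduction of Eq. (1), a representative fourth-order formally self-adjoint equation of the kind treated at this degree of generality, together with a quasi-derivative rewriting of the sort used in the second treatment, can be sketched as:

```latex
% Representative fourth-order self-adjoint equation (assumed form):
(p(x)\,y'')'' - (q(x)\,y')' + r(x)\,y = \lambda\, y, \qquad x \in (0,1),
% with the quasi-derivatives
y^{[0]} = y, \quad y^{[1]} = y', \quad y^{[2]} = p\,y'', \quad
y^{[3]} = (p\,y'')' - q\,y', \quad y^{[4]} = \bigl(y^{[3]}\bigr)' + r\,y,
% so that the equation reads y^{[4]} = \lambda\, y and only the
% quasi-derivatives, not y''' itself, need to be absolutely continuous.
```

Under low regularity of p, q, and r, it is the quasi-derivatives rather than the ordinary derivatives that remain well defined, which is exactly the point of the weaker formulations.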
Finally, the third degree of generality occurs in the situation when the functions are positive [3]. Here, is an arbitrary fixed vector with the property , where the column is defined according to (5). Note that the terms in (7) are well defined, since and for all . It can be shown that this treatment coincides with the above-indicated classical definitions if p, q, and r are sufficiently smooth functions.
In terms of operator theory, what was said above means that problem (1), (2) is associated with an operator that maps every function to the bounded linear functional acting on an arbitrary function according to the rule (8) Here, as before, ; moreover, by condition (4), the action of the operator T does not depend on the choice of the vector ξ. The solutions of problem (1), (2) are exactly the elements of the kernel of the operator T.
Note that the indicated general understanding of fourth-order boundary value problems goes back to the treatments of second-order boundary value problems proposed in [4,5]. In what follows, problem (1), (2) is understood in the most general (third) sense.
It follows from what was said above that all results are valid for narrower classical formulations.
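The (negative) index of inertia mentioned above has a transparent finite-dimensional analogue: for a symmetric matrix, it is the maximal dimension of a subspace on which the quadratic form is negative, i.e., the number of negative eigenvalues. A minimal sketch (the helper `inertia_index` is hypothetical, introduced only for this illustration):

```python
import numpy as np

def inertia_index(A, tol=1e-12):
    """Negative index of inertia of a symmetric matrix: the maximal
    dimension of a subspace on which x^T A x is negative."""
    vals = np.linalg.eigvalsh((A + A.T) / 2)
    return int(np.sum(vals < -tol))

A = np.diag([3.0, -1.0, -2.0, 0.5])
print(inertia_index(A))  # two negative eigenvalues, so the index is 2
```

The operator-theoretic definition used in the paper is the direct analogue of this count for the quadratic form associated with T.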
Hereafter, we assume that the considered spaces are real and that the operator defined by (8) does not vanish identically; in this case, we have (or , respectively). The index of inertia of the operator T is defined as usual (see, e.g., [6]) as the maximum of the dimensions of subspaces satisfying the condition .

2.
In this section, we consider an auxiliary problem corresponding to the case and . For a singular function , the last inequality means that for any nonnegative function on [0, 1]. In this case, it is easy to show that there exists a nondecreasing function obeying the identity .

We distinguish a class of boundary conditions (2) characterized by the following two assumptions.

Assumption A. There are no real numbers , , , and for which .

Assumption B. There are no real numbers , , , and for which .

Theorem 1. Suppose that , , and the corresponding function is not constant in neighborhoods of the points 0 and 1. Assume that boundary conditions (2) satisfy Assumptions A and B. Then the solution space of the problem has dimension at most 1.

Moreover, for any nontrivial solution y, there is no point satisfying the pair of equalities y(x) = y'(x) = 0 and no point satisfying the pair of equalities y'(x) = y''(x) = 0.

3.
One of the basic concepts in the general theory of oscillation is the Kellogg kernel (see, e.g., [7, Chapter IV, Section 3, Theorem 1] or [8]). In the case of positive definite problems of fourth order with separated boundary conditions, it is known [9] that the Green's function of the corresponding operator is a Kellogg kernel if and only if it is positive on the open square (0, 1) × (0, 1). Using this fact and applying general results concerning integral equations with Kellogg kernels [7, Chapter IV, Section 3, Theorems 1, 2], we obtain the following assertion.
Theorem 2. Let the operator T be defined by (8), and let I be the operator of multiplication by a positive generalized function . Suppose that the operator I^{-1} does not increase the number of sign changes for any function and that the following assumptions are valid:
(1) All nonnegative eigenvalues of the pencil are simple.
(2) For any with the property , inside the linear span of the set , there exists a neighborhood U of the function f_m such that, for any function , the number of sign changes for the function I^{-1} y does not exceed its counterpart for .
(3) There exists a subspace of dimension m + 1 having a trivial intersection with the kernel of H such that, for any function from this subspace, the corresponding function I^{-1} y has at most m sign changes.
Then, for any nontrivial function , the number of sign changes for the corresponding function I^{-1} y equals the index of inertia of T.

In the general situation, checking whether the Green's function of a fourth-order differential operator is positive represents a nontrivial task. In many cases, however, it suffices to answer this question when and .

4.
In addition to the trivial case, when the operator I is the identity mapping, the following two situations are typical:
(1) The operator I performs the change of variable , where the function has an everywhere positive derivative. The role of is played by a subspace of the Sobolev space .
(2) The space does not contain nontrivial constant functions, and I^{-1} is the differentiation operator. The role of is played by a subspace of the Sobolev space .
The first of these situations occurs, for example, in the standard procedure of eliminating the second term of the left-hand side of Eq. (1) (see, e.g., [6, 10-12]). As applied to the second of the above-indicated situations, the following result is useful (cf. [12]): the corresponding operator does not increase the number of sign changes for any function.
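Both typical situations can be probed numerically. The following sketch (with assumed test functions, not taken from the paper) checks that composition with an increasing bijection of [0, 1] preserves the number of sign changes, and that, in this example, passing to a primitive does not increase it:

```python
import numpy as np

def sign_changes(v, tol=1e-9):
    """Count strict sign changes, ignoring near-zero samples."""
    s = v[np.abs(v) > tol]
    return int(np.sum(s[:-1] * s[1:] < 0))

x = np.linspace(0.0, 1.0, 2001)
y = np.sin(3 * np.pi * x)              # 2 interior sign changes

# Situation (1): composition with an increasing bijection of [0, 1].
phi = x**2 * (3.0 - 2.0 * x)           # phi' = 6x(1 - x) >= 0
assert sign_changes(np.sin(3 * np.pi * phi)) == sign_changes(y)

# Situation (2): for this example, the primitive oscillates no more
# often than the function itself.
Y = np.cumsum(y) * (x[1] - x[0])       # primitive of y with Y(0) = 0
assert sign_changes(Y) <= sign_changes(y)
print(sign_changes(y), sign_changes(Y))
```

This is only a discrete illustration of the variation-diminishing behavior that the cited result establishes in the proper functional setting.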
5.
As a first example, we consider the spectral problem studied in [13], where the usual conditions were imposed on the coefficients, namely, , , , and . The coefficients c_k and d_k, where k = 0, 1, 2, were also assumed to be nonnegative.
Since the function is nonnegative, the change-of-variable transformation of [0, 1] described in Section 4 leads to the problem with coefficients . Now Theorem 1 implies that all eigenvalues for which the function is positive are simple and that the corresponding eigenfunctions have only simple zeros in the interval . Moreover, Theorems 2 and 3 with imply that the number of these zeros equals the index of inertia of the problem. Under the above-indicated smoothness conditions on the coefficients, these results were obtained in [13]. In addition to the new proof technique, the above theorems make it possible to extend this result to the case of singular coefficients.
As another, more complicated example, we consider the spectral problem studied in [14], assuming that the function is nonnegative, , and ad > 0. Applying the change-of-variable transformation from Section 4, we obtain the problem with coefficients θ > 0 and . In this case, Theorem 1 automatically guarantees that any positive eigenvalue of the problem is simple. Now we set , where δ_1 is the delta function supported at the point 1. If = 0 or , Theorems 2 and 3 immediately imply that the number of zeros of an eigenfunction on the interval (0, 1) equals the index of inertia of the problem.

If , it is possible that the resulting operator is not positive, so the indicated procedure cannot be applied. Accordingly, in this case, we should set . For ω = π, this operator does not increase the number of sign changes for any function, so we can apply Theorem 4. For , the operator should be specified as an integration operator on [0, 1], and then Theorems 5 and 4 can be applied. In this case, the index of inertia of the problem is equal to the number of internal zeros of the eigenfunction derivative . The number of zeros of the eigenfunction itself is either equal to the index of inertia or smaller by 1, depending on the sign of . It should be noted that Theorem 2.2 in [14], which presents results concerning this example, contains inaccuracies. Namely, according to that theorem, in the case , the number of positive eigenvalues for which the number of zeros of the corresponding eigenfunctions differs from the index of inertia cannot exceed 2. Actually, this is not true: for example, the spectral problem has ten such eigenvalues to the left of the point λ = 10^6. This is verified directly by analyzing the behavior of the solution to the boundary value problem depending on the parameter .
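The central claim, that the number of internal zeros of an eigenfunction equals the index of inertia, can be checked on a discrete model beam problem. The hinged conditions y = y'' = 0 below are an assumption of this sketch, not the boundary conditions of [14]:

```python
import numpy as np

def sign_changes(v, tol=1e-9):
    """Count strict sign changes, ignoring near-zero samples."""
    s = v[np.abs(v) > tol]
    return int(np.sum(s[:-1] * s[1:] < 0))

# Discrete model: y'''' = lambda * y on (0, 1) with hinged ends y = y'' = 0,
# realized as the square of the Dirichlet second-difference matrix.
n = 300
h = 1.0 / (n + 1)
T = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
K = T @ T
vals, vecs = np.linalg.eigh(K)
for m in range(5):
    # Index of inertia of K - vals[m]*I = number of eigenvalues below vals[m].
    inertia = int(np.sum(vals < vals[m] - 1e-6))
    assert sign_changes(vecs[:, m]) == inertia
print("zero count equals index of inertia for the first 5 eigenfunctions")
```

For less favorable boundary conditions (such as those arising in the second example above), the two counts can differ, which is exactly what the correction to [14, Theorem 2.2] quantifies.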