
3.2 Evaluation of the quadratic integrals

Motivated by the examples of Section 3.1, we pose the problem of evaluating the quadratic integral performance index

$$ J\begin{bmatrix}v_0\\ \varphi\end{bmatrix}=\int_0^\infty\begin{bmatrix}v(t)\\ z(t-r)\end{bmatrix}^T\begin{bmatrix}P&Q\\ Q^T&R\end{bmatrix}\begin{bmatrix}v(t)\\ z(t-r)\end{bmatrix}dt \tag{3.16} $$

with $P,Q,R\in L(\mathbb{R}^n)$, $P=P^T$, $R=R^T$, $\begin{bmatrix}P&Q\\ Q^T&R\end{bmatrix}\ge0$, over trajectories of the neutral system (3.5). We shall give a solution to this problem employing the results of Section 2.4.
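The nonnegativity assumed for the weighting matrix guarantees that the integrand of (3.16), a quadratic form in $(v(t),z(t-r))$, is nonnegative. A minimal numerical check of this assumption (the matrices below are illustrative, not data from the text) inspects the eigenvalues of the symmetric block matrix:

```python
import numpy as np

# Illustrative weights (assumptions): P = P^T, R = R^T, and the full
# block matrix [[P, Q], [Q^T, R]] must be positive semidefinite.
P = np.array([[2.0, 0.0],
              [0.0, 1.0]])
Q = np.array([[0.5, 0.0],
              [0.0, 0.2]])
R = np.array([[1.0, 0.1],
              [0.1, 1.0]])

W = np.block([[P, Q],
              [Q.T, R]])
print(np.linalg.eigvalsh(W))  # all nonnegative => integrand of (3.16) >= 0
```

Any choice of $P$, $Q$, $R$ passing this eigenvalue test is admissible for (3.16).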

In the state space $H=\mathbb{M}^2=\mathbb{R}^n\times L^2(-r,0;\mathbb{R}^n)$ we can write (3.5) as an abstract initial value problem of the form (2.19) with

$$ \mathcal{A}\begin{bmatrix}v\\ \psi\end{bmatrix}=\begin{bmatrix}A_1v+(A_1A_0+A_2)\psi(-r)\\ \dot\psi\end{bmatrix},\qquad D(\mathcal{A})=\left\{\begin{bmatrix}v\\ \psi\end{bmatrix}\in\mathbb{R}^n\times W^{1,2}(-r,0;\mathbb{R}^n):\ v=\psi(0)-A_0\psi(-r)\right\} \tag{3.17} $$

and with the initial point $x_0=\begin{bmatrix}v_0\\ \varphi\end{bmatrix}$. We shall prove that $\mathcal{A}$ generates a linear $C_0$–semigroup $\{S(t)\}_{t\ge0}$ on $H$,

$$ S(t)\begin{bmatrix}v_0\\ \varphi\end{bmatrix}=\begin{bmatrix}v(t)\\ z_t\end{bmatrix},\qquad t\ge0, $$

where

$$ z_t:[-r,0]\ni\theta\longmapsto z_t(\theta)=z(t+\theta)\in\mathbb{R}^n. $$

To do this the following result will be useful.

Theorem 3.2.1 (Walker). Let $H$ be a real Hilbert space with scalar product $\langle\cdot,\cdot\rangle_H$. Assume that $\mathcal{A}:(D(\mathcal{A})\subset H)\to H$ is a linear operator satisfying the assumptions:

(i) there exists $\mu_0>0$ such that $R(\mu I-\mathcal{A})=H$ for all $\mu>\mu_0$;
(ii) there exist $\omega\in\mathbb{R}$ and an equivalent scalar product $\langle\cdot,\cdot\rangle_e$ on $H$ such that
$$ \langle x,\mathcal{A}x\rangle_e\le\omega\|x\|_e^2\qquad\forall x\in D(\mathcal{A}). $$

Then $\mathcal{A}$ generates a $C_0$–semigroup $\{S(t)\}_{t\ge0}$ on $H$ with the property

$$ \|S(t)x_0\|_e\le e^{\omega t}\|x_0\|_e\qquad\forall x_0\in H,\ t\ge0. $$

Proof. Recall that a scalar product $\langle\cdot,\cdot\rangle_e$ is equivalent to the original scalar product $\langle\cdot,\cdot\rangle_H$ if the norms induced by these scalar products are equivalent, i.e., there exist positive constants $c_1$, $c_2$ such that

$$ c_1\|x\|_H\le\|x\|_e\le c_2\|x\|_H\qquad\forall x\in H. $$

The proof relies on verifying all assumptions of Theorem 2.3.2 as a sufficient condition for generation of the semigroup {S(t)}t0. Details can be found in [84, Theorem 4.2, p. 108]. □

Observe that the operator (3.17) satisfies condition (i) of Theorem 3.2.1 if for sufficiently large $\mu>0$ the equation

$$ \mu\begin{bmatrix}v\\ \psi\end{bmatrix}-\mathcal{A}\begin{bmatrix}v\\ \psi\end{bmatrix}=\begin{bmatrix}\tilde v\\ \tilde\psi\end{bmatrix}\in H $$

has a solution in $D(\mathcal{A})$. Equivalently, we seek a solution of the system

$$ \begin{cases} \mu[\psi(0)-A_0\psi(-r)]-A_1\psi(0)-A_2\psi(-r)=\tilde v\\[2pt] \mu\psi(\theta)-\dot\psi(\theta)=\tilde\psi(\theta) \end{cases} $$

satisfying $\psi\in W^{1,2}(-r,0;\mathbb{R}^n)$. Solving the second equation and substituting the solution into the first one, we obtain a nonhomogeneous algebraic linear equation in $\mathbb{R}^n$,

$$ \mu\left[I-\tfrac1\mu A_1-\tfrac1\mu e^{-\mu r}A_2-e^{-\mu r}A_0\right]\psi(0)=e^{-\mu r}(\mu A_0+A_2)\int_{-r}^0\tilde\psi(\tau)e^{-\mu\tau}d\tau+\tilde v, $$

which has a solution because

$$ \left\|\tfrac1\mu A_1+\tfrac1\mu e^{-\mu r}A_2+e^{-\mu r}A_0\right\|_{L(\mathbb{R}^n)}\le\tfrac1\mu\left[\|A_1\|_{L(\mathbb{R}^n)}+\|A_2\|_{L(\mathbb{R}^n)}+\tfrac1{re}\|A_0\|_{L(\mathbb{R}^n)}\right]\longrightarrow0\quad\text{as }\mu\to\infty. $$

Consequently the operator $\mu I-\mathcal{A}$ is onto for sufficiently large $\mu>0$.
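The surjectivity argument rests only on the norm estimate above. A small numerical illustration (with arbitrarily chosen matrices, purely for this sketch) shows the $O(1/\mu)$ decay that makes $I$ minus the bracketed matrix invertible for large $\mu$:

```python
import numpy as np

# Illustrative matrices for the neutral system (assumptions, not from the text).
A0 = np.array([[0.3, 0.0],
               [0.1, 0.2]])
A1 = np.array([[-1.0, 0.4],
               [0.0, -2.0]])
A2 = np.array([[0.2, 0.1],
               [0.0, 0.3]])
r = 1.0

# Spectral norm of (1/mu)A1 + (1/mu)e^{-mu r}A2 + e^{-mu r}A0 decays like 1/mu,
# so mu[I - (...)] is invertible for all sufficiently large mu.
norms = []
for mu in (1.0, 10.0, 100.0):
    B = (A1 + np.exp(-mu * r) * A2) / mu + np.exp(-mu * r) * A0
    norms.append(np.linalg.norm(B, 2))
print(norms)
```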

To prove that condition (ii) of Theorem 3.2.1 is also fulfilled, we consider an equivalent scalar product on $H$,

$$ \left\langle\begin{bmatrix}v_1\\ \psi_1\end{bmatrix},\begin{bmatrix}v_2\\ \psi_2\end{bmatrix}\right\rangle_e:=v_1^Tv_2+\int_{-r}^0\psi_1^T(\theta)\left[I-\tfrac\theta rA_0^TA_0\right]\psi_2(\theta)d\theta. $$


Then, using $\psi(0)=v+A_0\psi(-r)$,

$$ 2\left\langle\begin{bmatrix}v\\ \psi\end{bmatrix},\mathcal{A}\begin{bmatrix}v\\ \psi\end{bmatrix}\right\rangle_e=v^T(A_1+A_1^T)v+v^T(A_1A_0+A_2)\psi(-r)+\psi^T(-r)(A_1A_0+A_2)^Tv\,+ $$

$$ +\int_{-r}^0\frac{d}{d\theta}\left\{\psi^T(\theta)\left[I-\tfrac\theta rA_0^TA_0\right]\psi(\theta)\right\}d\theta+\tfrac1r\int_{-r}^0\psi^T(\theta)A_0^TA_0\psi(\theta)d\theta= $$

$$ =\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}^T\begin{bmatrix}A_1+A_1^T+I&A_1A_0+A_2+A_0\\ (A_1A_0+A_2+A_0)^T&-I\end{bmatrix}\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}+\tfrac1r\int_{-r}^0\psi^T(\theta)A_0^TA_0\psi(\theta)d\theta\le $$

$$ \le\|v\|_{\mathbb{R}^n}^2\|A_1+A_1^T+I\|_{L(\mathbb{R}^n)}+2\|v\|_{\mathbb{R}^n}\|\psi(-r)\|_{\mathbb{R}^n}\|A_1A_0+A_2+A_0\|_{L(\mathbb{R}^n)}\,- $$

$$ -\|\psi(-r)\|_{\mathbb{R}^n}^2+\tfrac1r\|A_0\|_{L(\mathbb{R}^n)}^2\int_{-r}^0\psi^T(\theta)\left[I-\tfrac\theta rA_0^TA_0\right]\psi(\theta)d\theta\le $$

$$ \le\max\left\{\tfrac1r\|A_0\|_{L(\mathbb{R}^n)}^2,\ \|A_1+A_1^T+I\|_{L(\mathbb{R}^n)}+4\|A_1A_0+A_2+A_0\|_{L(\mathbb{R}^n)}^2\right\}\left\|\begin{bmatrix}v\\ \psi\end{bmatrix}\right\|_e^2, $$

so condition (ii) of Theorem 3.2.1 holds with $\omega$ equal to one half of the maximum above.

The semigroup $\{S(t)\}_{t\ge0}$ is EXS iff

$$ \sigma(A_0)\subset\{\lambda\in\mathbb{C}:\ |\lambda|<1\}, \tag{3.18} $$

i.e., the spectrum of $A_0$ lies in the open unit disc, and all roots of the characteristic quasipolynomial

$$ \lambda\longmapsto\det\left[\lambda I-\lambda e^{-r\lambda}A_0-A_1-e^{-r\lambda}A_2\right] \tag{3.19} $$

have negative real parts; see [30, Lemma 6.2.11, p. 151] for a proof. In what follows we assume that (3.18) holds and that all roots of (3.19) have negative real parts.
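Condition (3.18) is straightforward to verify numerically, and the quasipolynomial (3.19) can at least be evaluated pointwise; locating its rightmost roots requires dedicated methods (argument principle, spectral discretization) and is not attempted in this sketch. All matrices below are illustrative assumptions:

```python
import numpy as np

# Illustrative matrices (assumptions, not from the text).
A0 = np.array([[0.3, 0.2],
               [-0.1, 0.4]])
A1 = np.array([[-2.0, 0.5],
               [0.0, -1.5]])
A2 = np.array([[0.3, 0.0],
               [0.1, 0.2]])
r = 1.0

rho = max(abs(np.linalg.eigvals(A0)))   # spectral radius; (3.18) needs rho < 1
print(rho)

# Pointwise evaluation of the quasipolynomial (3.19).
quasi = lambda lam: np.linalg.det(
    lam * np.eye(2) - lam * np.exp(-r * lam) * A0 - A1 - np.exp(-r * lam) * A2)
print(quasi(0.0))   # at lam = 0 this reduces to det(-(A1 + A2))
```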

A linear observation operator $\mathcal{C}\in L(D_{\mathcal{A}},\mathbb{R}^{2n})$ ($Y=\mathbb{R}^{2n}$),

$$ \mathcal{C}\begin{bmatrix}v\\ \psi\end{bmatrix}=\begin{bmatrix}P&Q\\ Q^T&R\end{bmatrix}^{\frac12}\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}, $$

corresponds to the integrand in (3.16). Since the semigroup $\{S(t)\}_{t\ge0}$ is EXS, say $\|S(t)\|_{L(H)}\le Me^{-\mu t}$ with some $M\ge1$, $\mu>0$, we have

$$ \int_0^\infty\|z(t-r)\|_{\mathbb{R}^n}^2dt=\sum_{k=0}^\infty\int_{kr}^{(k+1)r}\|z(t-r)\|_{\mathbb{R}^n}^2dt=\sum_{k=0}^\infty\int_{-r}^0\|z(kr+\theta)\|_{\mathbb{R}^n}^2d\theta= $$

$$ =\sum_{k=0}^\infty\int_{-r}^0\|z_{kr}(\theta)\|_{\mathbb{R}^n}^2d\theta\le M^2\|x_0\|_H^2\sum_{k=0}^\infty e^{-2\mu kr}=\frac{M^2}{1-e^{-2\mu r}}\|x_0\|_H^2. $$

Employing the Rayleigh inequality we get

$$ \left\|\mathcal{C}S(\cdot)\begin{bmatrix}v_0\\ \varphi\end{bmatrix}\right\|_{L^2(0,\infty;\mathbb{R}^{2n})}^2\le\lambda_{\max}\begin{bmatrix}P&Q\\ Q^T&R\end{bmatrix}\left[\frac1{2\mu}+\frac1{1-e^{-2\mu r}}\right]M^2\|x_0\|_H^2\qquad\forall x_0\in D(\mathcal{A}) $$

and thus (2.21) holds, i.e., 𝒞 is admissible. It follows from Theorems 2.4.1, 2.4.2, and (2.24) that

$$ J(x_0)=\langle x_0,\mathcal{H}x_0\rangle_H\qquad\forall x_0\in H, $$

where $\mathcal{H}$ is the unique bounded self–adjoint nonnegative solution of the Lyapunov operator equation (2.22), which now reduces to

$$ \langle\mathcal{A}x,\mathcal{H}x\rangle_H+\langle x,\mathcal{H}\mathcal{A}x\rangle_H=-\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}^T\begin{bmatrix}P&Q\\ Q^T&R\end{bmatrix}\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}\qquad\forall x=\begin{bmatrix}v\\ \psi\end{bmatrix}\in D(\mathcal{A}). \tag{3.20} $$

A solution of (3.20) will be sought in the form

$$ \mathcal{H}\begin{bmatrix}v\\ \psi\end{bmatrix}=\begin{bmatrix}\alpha v+\int_{-r}^0\beta(\theta)\psi(\theta)d\theta\\[4pt] \beta^T(\cdot)v+\int_{-r}^0\delta(\cdot,\sigma)\psi(\sigma)d\sigma+\gamma\psi\end{bmatrix} \tag{3.21} $$

with $\alpha,\gamma\in L(\mathbb{R}^n)$, $\alpha=\alpha^T$, $\gamma=\gamma^T$,

$$ \delta(\theta,\sigma)=\begin{cases}\Phi(\theta-\sigma),&\theta<\sigma\\ \Phi^T(\sigma-\theta),&\theta>\sigma\end{cases}=\delta^T(\sigma,\theta) \tag{3.22} $$

and $\Phi,\beta\in C([-r,0],L(\mathbb{R}^n))$. The matrix kernel function (3.22) may have a jump discontinuity along the diagonal $\theta=\sigma$ of the square $[-r,0]\times[-r,0]$, or equivalently, $\Phi(0)$ may not be a symmetric matrix.

Taking (3.17) and (3.21) into account in (3.20), and integrating by parts, we get

$$ \left\langle\begin{bmatrix}A_1v+(A_1A_0+A_2)\psi(-r)\\ \dot\psi\end{bmatrix},\begin{bmatrix}\alpha v+\int_{-r}^0\beta(\theta)\psi(\theta)d\theta\\ \beta^T(\cdot)v+\int_{-r}^0\delta(\cdot,\sigma)\psi(\sigma)d\sigma+\gamma\psi\end{bmatrix}\right\rangle_H+ $$

$$ +\left\langle\begin{bmatrix}v\\ \psi\end{bmatrix},\begin{bmatrix}\alpha[A_1v+(A_1A_0+A_2)\psi(-r)]+\int_{-r}^0\beta(\theta)\dot\psi(\theta)d\theta\\ \beta^T(\cdot)[A_1v+(A_1A_0+A_2)\psi(-r)]+\int_{-r}^0\delta(\cdot,\sigma)\dot\psi(\sigma)d\sigma+\gamma\dot\psi\end{bmatrix}\right\rangle_H= $$

$$ =\left[v^TA_1^T+\psi^T(-r)(A_2^T+A_0^TA_1^T)\right]\left[\alpha v+\int_{-r}^0\beta(\theta)\psi(\theta)d\theta\right]+ $$

$$ +\int_{-r}^0\dot\psi^T(\theta)\beta^T(\theta)v\,d\theta+\int_{-r}^0\dot\psi^T(\theta)\int_{-r}^0\delta(\theta,\sigma)\psi(\sigma)d\sigma\,d\theta+\int_{-r}^0\dot\psi^T(\theta)\gamma\psi(\theta)d\theta+ $$

$$ +v^T[\alpha A_1v+\alpha(A_1A_0+A_2)\psi(-r)]+\int_{-r}^0v^T\beta(\theta)\dot\psi(\theta)d\theta+ $$

$$ +\int_{-r}^0\psi^T(\theta)\beta^T(\theta)[A_1v+(A_1A_0+A_2)\psi(-r)]d\theta+\int_{-r}^0\psi^T(\theta)\int_{-r}^0\delta(\theta,\sigma)\dot\psi(\sigma)d\sigma\,d\theta+\int_{-r}^0\psi^T(\theta)\gamma\dot\psi(\theta)d\theta= $$

$$ =v^TA_1^T\alpha v+\psi^T(-r)(A_2^T+A_0^TA_1^T)\alpha v+\int_{-r}^0v^TA_1^T\beta(\theta)\psi(\theta)d\theta+\int_{-r}^0\psi^T(-r)(A_2^T+A_0^TA_1^T)\beta(\theta)\psi(\theta)d\theta+ $$

$$ +\int_{-r}^0\frac{d}{d\theta}\left[\psi^T(\theta)\beta^T(\theta)\right]v\,d\theta-\int_{-r}^0\psi^T(\theta)\frac{d\beta^T(\theta)}{d\theta}v\,d\theta+ $$

$$ +\int_{-r}^0\dot\psi^T(\theta)\int_{-r}^\theta\Phi^T(\sigma-\theta)\psi(\sigma)d\sigma\,d\theta+\int_{-r}^0\dot\psi^T(\theta)\int_\theta^0\Phi(\theta-\sigma)\psi(\sigma)d\sigma\,d\theta+ $$

$$ +v^T\alpha(A_1A_0+A_2)\psi(-r)+\int_{-r}^0\frac{d}{d\theta}\left[\psi^T(\theta)\gamma\psi(\theta)\right]d\theta-\int_{-r}^0v^T\frac{d\beta(\theta)}{d\theta}\psi(\theta)d\theta+ $$

$$ +\int_{-r}^0\psi^T(\theta)\beta^T(\theta)A_1v\,d\theta+\int_{-r}^0v^T\frac{d}{d\theta}\left[\beta(\theta)\psi(\theta)\right]d\theta+v^T\alpha A_1v+ $$

$$ +\int_{-r}^0\psi^T(\theta)\beta^T(\theta)(A_1A_0+A_2)\psi(-r)d\theta+\int_{-r}^0\psi^T(\theta)\int_{-r}^\theta\Phi^T(\sigma-\theta)\dot\psi(\sigma)d\sigma\,d\theta+ $$

$$ +\int_{-r}^0\psi^T(\theta)\int_\theta^0\Phi(\theta-\sigma)\dot\psi(\sigma)d\sigma\,d\theta=v^TA_1^T\alpha v+\psi^T(-r)\left(A_2^T+A_0^TA_1^T\right)\alpha v+ $$

$$ +\int_{-r}^0v^TA_1^T\beta(\theta)\psi(\theta)d\theta+\int_{-r}^0\psi^T(-r)\left(A_2^T+A_0^TA_1^T\right)\beta(\theta)\psi(\theta)d\theta+ $$

$$ +\left[v^T+\psi^T(-r)A_0^T\right]\beta^T(0)v-\psi^T(-r)\beta^T(-r)v-\int_{-r}^0\psi^T(\theta)\frac{d\beta^T(\theta)}{d\theta}v\,d\theta- $$

$$ -\int_{-r}^0\psi^T(\theta)\Phi^T(0)\psi(\theta)d\theta+\int_{-r}^0\psi^T(\theta)\int_{-r}^\theta[\dot\Phi(\sigma-\theta)]^T\psi(\sigma)d\sigma\,d\theta+ $$

$$ +\int_{-r}^0v^T\Phi^T(\theta)\psi(\theta)d\theta+\int_{-r}^0\psi^T(-r)A_0^T\Phi^T(\theta)\psi(\theta)d\theta+\int_{-r}^0\psi^T(\theta)\Phi(0)\psi(\theta)d\theta- $$

$$ -\int_{-r}^0\psi^T(\theta)\int_\theta^0\dot\Phi(\theta-\sigma)\psi(\sigma)d\sigma\,d\theta-\int_{-r}^0\psi^T(-r)\Phi(-r-\theta)\psi(\theta)d\theta+ $$

$$ +\left[v^T+\psi^T(-r)A_0^T\right]\gamma\left[v+A_0\psi(-r)\right]-\psi^T(-r)\gamma\psi(-r)+v^T\alpha A_1v+ $$

$$ +v^T\alpha(A_1A_0+A_2)\psi(-r)+v^T\beta(0)\left[v+A_0\psi(-r)\right]-v^T\beta(-r)\psi(-r)- $$

$$ -\int_{-r}^0v^T\frac{d\beta(\theta)}{d\theta}\psi(\theta)d\theta+\int_{-r}^0\psi^T(\theta)\beta^T(\theta)A_1v\,d\theta+ $$

$$ +\int_{-r}^0\psi^T(\theta)\beta^T(\theta)(A_1A_0+A_2)\psi(-r)d\theta+\int_{-r}^0\psi^T(\theta)\left[\Phi^T(0)\psi(\theta)-\Phi^T(-r-\theta)\psi(-r)\right]d\theta- $$

$$ -\int_{-r}^0\int_{-r}^\theta\psi^T(\theta)[\dot\Phi(\sigma-\theta)]^T\psi(\sigma)d\sigma\,d\theta+\int_{-r}^0\psi^T(\theta)\Phi(\theta)v\,d\theta+ $$

$$ +\int_{-r}^0\psi^T(\theta)\Phi(\theta)A_0\psi(-r)d\theta-\int_{-r}^0\psi^T(\theta)\Phi(0)\psi(\theta)d\theta+\int_{-r}^0\int_\theta^0\psi^T(\theta)\dot\Phi(\theta-\sigma)\psi(\sigma)d\sigma\,d\theta= $$

$$ =v^T\left[A_1^T\alpha+\alpha A_1+\beta^T(0)+\beta(0)+\gamma\right]v+v^T\left[\gamma A_0+\alpha(A_1A_0+A_2)+\beta(0)A_0-\beta(-r)\right]\psi(-r)+ $$

$$ +\psi^T(-r)\left[A_0^T\gamma+(A_2^T+A_0^TA_1^T)\alpha+A_0^T\beta^T(0)-\beta^T(-r)\right]v+\psi^T(-r)\left[A_0^T\gamma A_0-\gamma\right]\psi(-r)+ $$

$$ +\int_{-r}^0v^T\left[A_1^T\beta(\theta)-\frac{d\beta(\theta)}{d\theta}+\Phi^T(\theta)\right]\psi(\theta)d\theta+\int_{-r}^0\psi^T(\theta)\left[-\frac{d\beta^T(\theta)}{d\theta}+\beta^T(\theta)A_1+\Phi(\theta)\right]v\,d\theta+ $$

$$ +\int_{-r}^0\psi^T(-r)\left[(A_2^T+A_0^TA_1^T)\beta(\theta)-\Phi(-r-\theta)+A_0^T\Phi^T(\theta)\right]\psi(\theta)d\theta+\int_{-r}^0\psi^T(\theta)\left[\beta^T(\theta)(A_1A_0+A_2)+\Phi(\theta)A_0-\Phi^T(-r-\theta)\right]\psi(-r)d\theta= $$

$$ =-\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}^T\begin{bmatrix}P&Q\\ Q^T&R\end{bmatrix}\begin{bmatrix}v\\ \psi(-r)\end{bmatrix}\qquad\forall\begin{bmatrix}v\\ \psi\end{bmatrix}\in D(\mathcal{A}). $$

Hence we arrive at a system of equations determining $\alpha$, $\beta$, $\gamma$ and $\delta$:

$$ \begin{cases} A_1^T\alpha+\alpha A_1+\beta^T(0)+\beta(0)+\gamma=-P\\[2pt] \gamma A_0+\alpha(A_1A_0+A_2)+\beta(0)A_0-\beta(-r)=-Q\\[2pt] A_0^T\gamma A_0-\gamma=-R\\[2pt] A_1^T\beta(\theta)-\dfrac{d\beta(\theta)}{d\theta}+\Phi^T(\theta)=0\\[2pt] (A_2^T+A_0^TA_1^T)\beta(\theta)-\Phi(-r-\theta)+A_0^T\Phi^T(\theta)=0 \end{cases} \tag{3.23} $$

By elimination of $\Phi$ we reduce (3.23) to the discrete Lyapunov matrix equation

$$ A_0^T\gamma A_0-\gamma=-R \tag{3.24} $$

and the boundary–value problem

$$ \begin{cases} \dfrac{d}{d\theta}\left[\beta(\theta)+\beta^T(-r-\theta)A_0\right]=A_1^T\beta(\theta)+\beta^T(-r-\theta)A_2\\[4pt] A_1^T\alpha+\alpha A_1+\beta^T(0)+\beta(0)+\gamma=-P\\[2pt] \gamma A_0+\alpha(A_1A_0+A_2)+\beta(0)A_0-\beta(-r)=-Q \end{cases} \tag{3.25} $$

Furthermore, we also get

$$ \Phi(\theta)=\frac{d\beta^T(\theta)}{d\theta}-\beta^T(\theta)A_1=A_2^T\beta(-r-\theta)-A_0^T\frac{d\beta(-r-\theta)}{d\theta}. \tag{3.26} $$
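Equation (3.24) is a standard discrete (Stein) Lyapunov equation, solvable whenever (3.18) holds. A short numerical sketch, using the Kronecker product of matrices and illustrative data (the matrices below are assumptions, not from the text):

```python
import numpy as np

# Illustrative data (assumptions): spectral radius of A0 < 1, R = R^T >= 0.
A0 = np.array([[0.3, 0.1],
               [0.0, -0.4]])
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
n = A0.shape[0]

# (3.24): A0^T g A0 - g = -R. With column-stacking vec(.) one has
# vec(A0^T g A0) = kron(A0^T, A0^T) vec(g), hence a plain linear solve:
vec = lambda M: M.reshape(-1, order="F")
g = np.linalg.solve(np.kron(A0.T, A0.T) - np.eye(n * n), -vec(R))
gamma = g.reshape((n, n), order="F")

residual = np.max(np.abs(A0.T @ gamma @ A0 - gamma + R))
print(residual)   # ~ machine precision
```

The same computation can be delegated to `scipy.linalg.solve_discrete_lyapunov`, which solves $aXa^H-X+q=0$.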

Remark 3.2.1. Castelan and Infante [10], [11] have derived (3.25) in the case $A_0=0$, i.e., for retarded systems, and a much more complicated version of (3.25) for neutral systems, provided that $W^{1,2}(-r,0;\mathbb{R}^n)$ was chosen as the state space.

A special technique has been developed in [12] for the analysis of their version of the problem (3.25). In what follows we adapt that technique to solve (3.25). By substituting

$$ \vartheta(\theta)=\beta^T(-r-\theta),\qquad -r\le\theta\le0, \tag{3.27} $$

one can reduce the first equation of (3.25) to the system

$$ \begin{cases} \dfrac{d}{d\theta}\left[\beta(\theta)+\vartheta(\theta)A_0\right]=A_1^T\beta(\theta)+\vartheta(\theta)A_2\\[4pt] \dfrac{d}{d\theta}\left[A_0^T\beta(\theta)+\vartheta(\theta)\right]=-A_2^T\beta(\theta)-\vartheta(\theta)A_1 \end{cases} \tag{3.28} $$

In turn, (3.28) is equivalent to a linear autonomous system in the space $\mathbb{R}^{n^2}$, which can be seen by applying the Kronecker product of matrices ([57, Section 8.4]). This yields

$$ \frac{d}{d\theta}\begin{bmatrix}I\otimes I&I\otimes A_0^T\\ A_0^T\otimes I&I\otimes I\end{bmatrix}\begin{bmatrix}\operatorname{col}\beta\\ \operatorname{col}\vartheta\end{bmatrix}=\begin{bmatrix}A_1^T\otimes I&I\otimes A_2^T\\ -A_2^T\otimes I&-I\otimes A_1^T\end{bmatrix}\begin{bmatrix}\operatorname{col}\beta\\ \operatorname{col}\vartheta\end{bmatrix}, $$

where $\operatorname{col}\beta$, $\operatorname{col}\vartheta$ denote the $n^2$–dimensional vectors obtained by stacking the rows of the matrices $\beta$ and $\vartheta$, respectively. By the Schur lemma and (3.18) we have

$$ \det\begin{bmatrix}I\otimes I&I\otimes A_0^T\\ A_0^T\otimes I&I\otimes I\end{bmatrix}=\det\left(I\otimes I-A_0^T\otimes A_0^T\right)\ne0, $$

whence

$$ \frac{d}{d\theta}\begin{bmatrix}\operatorname{col}\beta\\ \operatorname{col}\vartheta\end{bmatrix}=\begin{bmatrix}I\otimes I&I\otimes A_0^T\\ A_0^T\otimes I&I\otimes I\end{bmatrix}^{-1}\begin{bmatrix}A_1^T\otimes I&I\otimes A_2^T\\ -A_2^T\otimes I&-I\otimes A_1^T\end{bmatrix}\begin{bmatrix}\operatorname{col}\beta\\ \operatorname{col}\vartheta\end{bmatrix}. $$
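The determinant identity supplied by the Schur lemma is easy to confirm numerically. The sketch below (with a randomly generated $A_0$ rescaled to spectral radius $<1$, a pure assumption for illustration) builds both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Random A0 rescaled so that its spectral radius is < 1, as required by (3.18).
A0 = rng.standard_normal((n, n))
A0 *= 0.5 / max(abs(np.linalg.eigvals(A0)))

I, In2 = np.eye(n), np.eye(n * n)
E = np.block([[In2, np.kron(I, A0.T)],
              [np.kron(A0.T, I), In2]])

# Schur complement: det E = det(I (x) I - (I (x) A0^T)(A0^T (x) I))
#                         = det(I (x) I - A0^T (x) A0^T);
# the eigenvalues of A0^T (x) A0^T are products lam_i * lam_j, all of modulus < 1.
dE = np.linalg.det(E)
dS = np.linalg.det(In2 - np.kron(A0.T, A0.T))
print(dE, dS)
```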

Employing again the Schur lemma and some properties of the Kronecker product we find the characteristic polynomial of the above system,

$$ \det\left[(\lambda I-A_1^T)\otimes(\lambda I+A_1^T)+(A_2^T+\lambda A_0^T)\otimes(A_2^T-\lambda A_0^T)\right]. \tag{3.29} $$

Moreover,

$$ e^{\lambda\theta}\begin{bmatrix}L\\ M\end{bmatrix} \tag{3.30} $$

is an eigensolution of (3.28), where $\lambda$ is a root of (3.29) and the matrices $L,M\in L(\mathbb{R}^n)$ satisfy the system

$$ \begin{cases} \lambda L+\lambda MA_0=A_1^TL+MA_2\\[2pt] \lambda A_0^TL+\lambda M=-A_2^TL-MA_1 \end{cases} \tag{3.31} $$

By multiplying the equations of (3.31) by $(-1)$, transposing and reordering them, one can see that if (3.30) is an eigensolution then $e^{-\lambda\theta}\begin{bmatrix}M^T\\ L^T\end{bmatrix}$ is an eigensolution too.
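As a consistency check, the eigenvalues of the linearized system can be computed directly and substituted into (3.29): each should annihilate the determinant. A numerical sketch with arbitrary illustrative matrices (row-stacking convention for $\operatorname{col}$; all data assumed for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
A1 = 0.5 * rng.standard_normal((n, n))
A2 = 0.5 * rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n))
A0 *= 0.4 / max(abs(np.linalg.eigvals(A0)))    # enforce (3.18)

I, In2 = np.eye(n), np.eye(n * n)
E = np.block([[In2, np.kron(I, A0.T)],
              [np.kron(A0.T, I), In2]])
F = np.block([[np.kron(A1.T, I), np.kron(I, A2.T)],
              [-np.kron(A2.T, I), -np.kron(I, A1.T)]])
eigs = np.linalg.eigvals(np.linalg.solve(E, F))

# Each eigenvalue lam of the linearized system should be a root of (3.29).
char = lambda lam: np.linalg.det(
    np.kron(lam * I - A1.T, lam * I + A1.T)
    + np.kron(A2.T + lam * A0.T, A2.T - lam * A0.T))
resid = max(abs(char(lam)) for lam in eigs)
ref = abs(char(2.0))   # generic reference magnitude away from the roots
print(resid, ref)
```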

Assume from now on that all eigenvalues of (3.28) have linear elementary divisors. Then the corresponding eigenvectors form a basis of $\mathbb{R}^{2n^2}$ and the general solution of (3.28) is

$$ \begin{bmatrix}\beta(\theta)\\ \vartheta(\theta)\end{bmatrix}=\sum_{i=1}^{n^2}\left\{\kappa_ie^{\lambda_i\theta}\begin{bmatrix}L_i\\ M_i\end{bmatrix}+\mu_ie^{-\lambda_i\theta}\begin{bmatrix}M_i^T\\ L_i^T\end{bmatrix}\right\}. $$

It is easy to see that this solution satisfies the functional equation (3.27) if and only if $\mu_i=\kappa_ie^{-\lambda_ir}$, and finally

$$ \beta(\theta)=\sum_{i=1}^{n^2}\kappa_i\left[e^{\lambda_i\theta}L_i+e^{-\lambda_i(r+\theta)}M_i^T\right] \tag{3.32} $$

is a general solution of the first equation of (3.25). Substituting (3.32) into the second and third equations of (3.25) yields

$$ \begin{cases} \gamma+A_1^T\alpha+\alpha A_1+\displaystyle\sum_{i=1}^{n^2}\kappa_i\left[L_i+L_i^T+e^{-\lambda_ir}(M_i+M_i^T)\right]=-P\\[4pt] \gamma A_0+\alpha(A_1A_0+A_2)+\displaystyle\sum_{i=1}^{n^2}\kappa_i\left[e^{-\lambda_ir}(M_i^TA_0-L_i)+(L_iA_0-M_i^T)\right]=-Q. \end{cases} $$

Another application of the Kronecker product of matrices enables us to represent the last equations as

$$ \begin{bmatrix}A_1^T\otimes I+I\otimes A_1^T&\operatorname{col}\left[L_i+L_i^T+e^{-\lambda_ir}(M_i+M_i^T)\right]\\[2pt] I\otimes(A_1A_0+A_2)^T&\operatorname{col}\left[e^{-\lambda_ir}(M_i^TA_0-L_i)+L_iA_0-M_i^T\right]\end{bmatrix}\begin{bmatrix}\operatorname{col}\alpha\\ \kappa_1\\ \kappa_2\\ \vdots\\ \kappa_{n^2}\end{bmatrix}=-\begin{bmatrix}\operatorname{col}(P+\gamma)\\ \operatorname{col}(Q+\gamma A_0)\end{bmatrix} \tag{3.33} $$

where the second block column consists of the $2n^2$–dimensional vectors obtained for $i=1,2,\ldots,n^2$.

The matrix of the system (3.33) is nonsingular. Indeed, if this were not the case, then taking $P=Q=R=0$ (by virtue of (3.24) and (3.18) we also have $\gamma=0$) and making use of formulae (3.33), (3.32), (3.26), (3.24) and (3.21), we could generate matrices $\alpha$, $\beta(\theta)$, $\delta(\theta,\sigma)$, $\gamma$ and thus a nonzero operator $\mathcal{H}$ solving the Lyapunov operator equation (3.20). However, this contradicts the uniqueness of the null solution for $\mathcal{C}=0$, which in turn follows from (3.18), (3.19) and Theorem 2.4.2. Finally, (3.33) has a unique solution, which means that formulae (3.33), (3.32), (3.26), (3.24) and (3.21) determine the matrices $\alpha$, $\beta(\theta)$, $\delta(\theta,\sigma)$, $\gamma$ and thus the operator $\mathcal{H}$, the unique solution of the Lyapunov operator equation (3.20).

The assumption that all eigenvalues of (3.28) have linear elementary divisors is not essential for the validity of the above derivation; (3.32) can be appropriately modified if nonlinear elementary divisors occur.

Let us indicate two possible simplifications of the performance index evaluation which can arise in practical applications. The first is the symmetry of the matrices $\alpha$, $\gamma$, $P$, which makes $\frac{n(n-1)}{2}$ equations of (3.33) redundant. The second is that for a large variety of initial conditions the evaluation of the performance index does not require knowledge of all entries of $\alpha$, $\beta(\theta)$, $\delta(\theta,\sigma)$, $\gamma$ (e.g., for $x_0=\begin{bmatrix}v_0\\ 0\end{bmatrix}$ it suffices to determine the matrix $\alpha$ only).
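For the scalar case $n=1$ the whole procedure collapses to a few lines, and the resulting $\alpha$ can be checked against a direct simulation of the delay system. The sketch below uses illustrative data in the retarded case $A_0=0$ with $x_0=\begin{bmatrix}v_0\\ 0\end{bmatrix}$, $v_0=1$, so that $J=\alpha v_0^2$ and only $\alpha$ is needed:

```python
import numpy as np

# Worked scalar example (n = 1), retarded case a0 = 0 for simplicity:
#   dz/dt = a1 z(t) + a2 z(t - r),  J = integral of v(t)^2  (P=1, Q=0, R=0),
# initial state x0 = [v0; 0] with v0 = 1, for which J = alpha * v0^2.
a0, a1, a2, r = 0.0, -1.0, 0.3, 1.0
P, Q, R = 1.0, 0.0, 0.0

gamma = -R / (a0**2 - 1.0)                    # (3.24): a0^2*g - g = -R
lam = np.sqrt((a1**2 - a2**2) / (1 - a0**2))  # root of (3.29) for n = 1
L = 1.0
M = -(lam * a0 + a2) / (lam + a1)             # from (3.31)

# beta(theta) = kappa*(e^{lam*theta} L + e^{-lam(r+theta)} M) by (3.32);
# the two boundary conditions of (3.25) give a linear system for (alpha, kappa).
b0 = L + np.exp(-lam * r) * M                 # beta(0)/kappa
bmr = np.exp(-lam * r) * L + M                # beta(-r)/kappa
S = np.array([[2.0 * a1, 2.0 * b0],
              [a1 * a0 + a2, b0 * a0 - bmr]])
alpha, kappa = np.linalg.solve(S, [-P - gamma, -Q - gamma * a0])

# Independent check: simulate the delay equation (method of steps, Heun)
# and integrate the performance index directly.
dt, T = 5e-4, 40.0
m = int(round(r / dt))                        # delay measured in steps
z = np.zeros(int(round(T / dt)) + m + 1)      # z on [-r, T], zero prehistory
z[m] = 1.0                                    # z(0) = v0 = 1 since phi = 0
J = 0.0
for k in range(m, len(z) - 1):
    f1 = a1 * z[k] + a2 * z[k - m]
    zp = z[k] + dt * f1                       # Euler predictor
    f2 = a1 * zp + a2 * z[k + 1 - m]
    z[k + 1] = z[k] + 0.5 * dt * (f1 + f2)    # Heun corrector
    J += dt * z[k] ** 2
print(alpha, J)
```

With these data the analytic value $\alpha\approx0.590$ agrees with the simulated integral, illustrating $J(x_0)=\langle x_0,\mathcal{H}x_0\rangle_H$ for this initial state.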

The Kronecker product of matrices can also be applied (see [36]) to derive a frequency–domain method of evaluating the performance index

$$ J\begin{bmatrix}v_0\\ \varphi\end{bmatrix}=\int_0^\infty z^T(t)Q_0z(t)dt. $$