6.2 The lq controller problem

In a Hilbert space H with the scalar product ⟨·,·⟩, we consider the following feedback system:

ẋ(t) = 𝒜x(t) − ℬ𝒢x(t),  t ≥ 0,
x(0) = x₀,
y(t) = 𝒞x(t),   (6.1)

where 𝒜 : (D(𝒜) ⊂ H) → H is the infinitesimal generator of a C₀–semigroup {S(t)}_{t≥0} on H; ℬ ∈ L(U,H) and 𝒞 ∈ L(H,Y), where U, Y are Hilbert spaces with scalar products ⟨·,·⟩_U, ⟨·,·⟩_Y, respectively; x₀ ∈ H is a fixed element of H; and 𝒢 ∈ L(H,U) is an operator parameter describing the linear feedback u = −𝒢x.

Consider also the set

Γ = {𝒢 ∈ L(H,U) : ‖y‖²_{L²(0,∞;Y)} + ‖u‖²_{L²(0,∞;U)} < ∞ for every x₀ ∈ H}.   (6.2)

Definition 6.2.1. The pair (𝒜,ℬ) is called stabilizable if the set

Ω = {𝒢 ∈ L(H,U) : the semigroup generated by 𝒜 − ℬ𝒢 is EXS}   (6.3)

is not empty.

Lemma 6.2.1. Let (𝒜,ℬ) be stabilizable. Then:

(i) Ω is an open set and Ω ⊂ Γ.

(ii) The mapping Ω ∋ 𝒢 ↦ Π(𝒢) ∈ 𝒮 is well defined, where 𝒮 ⊂ L(H) denotes the positive cone of all self–adjoint nonnegative operators and Π(𝒢) is the unique solution of the Lyapunov operator equation

⟨(𝒜 − ℬ𝒢)x₁, Π(𝒢)x₂⟩ + ⟨x₁, Π(𝒢)(𝒜 − ℬ𝒢)x₂⟩ = −⟨𝒞x₁, 𝒞x₂⟩_Y − ⟨𝒢x₁, 𝒢x₂⟩_U   ∀x₁, x₂ ∈ D(𝒜),   (6.4)

for which

⟨x₀, Π(𝒢)x₀⟩ = ∫₀^∞ ( ‖𝒞x(t)‖²_Y + ‖𝒢x(t)‖²_U ) dt.   (6.5)

(iii) For every x₀ ∈ H, the mapping

Ω ∋ 𝒢 ↦ ‖y‖²_{L²(0,∞;Y)} + ‖u‖²_{L²(0,∞;U)} = ⟨x₀, Π(𝒢)x₀⟩ ∈ [0,∞)

is continuous.

Proof. (i) Clearly, Ω ⊂ Γ. If 𝒦 ∈ L(H,U) is such that ‖𝒦‖ is sufficiently small, then by the fundamental perturbation result (see [69, Theorem 1.1, p. 76]) the type of the semigroup generated by 𝒜 − ℬ(𝒢 + 𝒦) is negative provided that the same holds for the semigroup generated by 𝒜 − ℬ𝒢. This establishes (i).

(ii) This follows from Theorem 2.4.1 and Theorem 2.4.2.

(iii) For the proof of (iii) we recall the result from [69, Corollary 1.3, p. 78]:

‖S_{𝒢+𝒦}(t) − S_𝒢(t)‖ ≤ Mφ(t),  t ≥ 0,  φ(t) := e^{(ω + M‖ℬ𝒦‖)t} − e^{ωt},

for some M ≥ 1, where {S_{𝒢+𝒦}(t)}_{t≥0}, {S_𝒢(t)}_{t≥0} are the semigroups generated by 𝒜 − ℬ(𝒢 + 𝒦) and 𝒜 − ℬ𝒢, respectively, and ω is the type of {S_𝒢(t)}_{t≥0}. But, for 𝒢 ∈ Ω and ‖𝒦‖ sufficiently small, the function φ belongs to L²(0,∞), and its L²(0,∞) norm tends to 0 as ‖𝒦‖ tends to 0. Hence, the mapping Ω ∋ 𝒢 ↦ 𝒞S_𝒢(·)x₀ ∈ L²(0,∞;Y) is continuous. Only minor modifications are required to prove that the same holds for the mapping Ω ∋ 𝒢 ↦ 𝒢S_𝒢(·)x₀ ∈ L²(0,∞;U). □
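In finite dimensions the content of Lemma 6.2.1/(ii) can be checked directly. The sketch below (the matrices A, B, C, G, the initial state, and the horizon T are illustrative assumptions, not taken from the text) solves the Lyapunov equation (6.4) for a stabilizing feedback and compares ⟨x₀, Π(𝒢)x₀⟩ with a direct quadrature of the cost integral (6.5).

```python
# Finite-dimensional sketch of (6.4)-(6.5): for a stabilizing feedback G,
# the solution P of (A - BG)^T P + P (A - BG) = -(C^T C + G^T G)
# reproduces the quadratic cost <x0, P x0> = int_0^inf (|Cx|^2 + |Gx|^2) dt.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0., 1.], [0., 0.]])   # illustrative: double integrator
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
G = np.array([[1., 2.]])             # A - BG has the double eigenvalue -1
x0 = np.array([1., 0.])

Acl = A - B @ G                      # closed-loop generator
Q = C.T @ C + G.T @ G                # integrand weight |Cx|^2 + |Gx|^2

# Lyapunov route, cf. (6.4): Acl^T P + P Acl = -Q
P = solve_continuous_lyapunov(Acl.T, -Q)
cost_lyap = float(x0 @ P @ x0)

# Direct quadrature of (6.5) along x(t) = e^{Acl t} x0
h, T = 0.005, 30.0
E = expm(Acl * h)                    # one-step transition matrix
xs, x = [], x0.copy()
for _ in range(int(T / h) + 1):
    xs.append(float(x @ Q @ x))
    x = E @ x
cost_quad = h * (sum(xs) - 0.5 * (xs[0] + xs[-1]))   # trapezoid rule

print(cost_lyap, cost_quad)          # the two values agree closely
```

Note the convention of `solve_continuous_lyapunov(a, q)`: it solves a X + X aᵀ = q, so passing Acl.T and −Q yields the finite-dimensional form of (6.4).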

Definition 6.2.2. The pair (𝒜,𝒞) is called detectable if there exists 𝒬 ∈ L(Y,H) such that the semigroup generated by 𝒜 + 𝒬𝒞 is EXS.

Lemma 6.2.2. Let (𝒜,ℬ) be stabilizable. Assume additionally that the pair (𝒜,𝒞) is detectable. Then:

(i) Ω = Γ.

(ii) The mapping J : L(H,U) → [0,∞],

J(𝒢) = ‖y‖²_{L²(0,∞;Y)} + ‖u‖²_{L²(0,∞;U)} for 𝒢 ∈ Ω,  J(𝒢) = +∞ for 𝒢 ∉ Ω,

is continuous.

Proof. (i) By Lemma 6.2.1/(i) it is sufficient to prove that Γ ⊂ Ω. We take 𝒢 ∈ Γ and represent the first two lines of (6.1) in the form

ẋ(t) = (𝒜 + 𝒬𝒞)x(t) − (𝒬𝒞x(t) + ℬ𝒢x(t)),  x(0) = x₀,

with 𝒬 ∈ L(Y,H) chosen in such a manner that the semigroup {S(t)}_{t≥0} generated by 𝒜 + 𝒬𝒞 is EXS. The existence of 𝒬 is ensured by the detectability of (𝒜,𝒞). Employing the variation–of–constants formula, we get

‖x(t)‖ ≤ ‖S(t)x₀‖ + max{‖𝒬‖, ‖ℬ‖} ∫₀^t ‖S(t − τ)‖ ( ‖𝒞x(τ)‖_Y + ‖𝒢x(τ)‖_U ) dτ.

By the definition of Γ, 𝒞x(·) ∈ L²(0,∞;Y) and 𝒢x(·) ∈ L²(0,∞;U). Hence, from the basic properties of convolution, it follows that x(·) ∈ L²(0,∞;H) for every x₀ ∈ H. The last property is equivalent to the exponential stability of the semigroup generated by 𝒜 − ℬ𝒢 [69, Theorem 4.1, p. 116], and thus 𝒢 ∈ Ω.

(ii) By (i) we have J(𝒢) = ∞ on L(H,U) ∖ Ω (we may assume that L(H,U) ∖ Ω ≠ ∅, as otherwise the result to be proved follows from Lemma 6.2.1/(iii)) and, to show the continuity of J, it suffices to prove that J tends to ∞ at the boundary of Ω from the inside. Take any R > 0 and let {𝒢ₖ}_{k∈ℕ} be a sequence in Ω with 𝒢ₖ → 𝒢 ∉ Ω as k → ∞. We claim that, for almost all k ∈ ℕ, we have J(𝒢ₖ) ≥ R. Observe that the function

[0,∞) ∋ t ↦ ‖y‖²_{L²(0,t;Y)} + ‖u‖²_{L²(0,t;U)} = ∫₀^t ( ‖𝒞x(τ)‖²_Y + ‖𝒢x(τ)‖²_U ) dτ,

where x, y, u denote respectively the state, output, and control functions due to 𝒢, is nondecreasing and, since 𝒢 ∉ Γ by (i), tends to ∞ as t → ∞. Hence there exists T > 0 such that

∫₀^T ( ‖𝒞x(t)‖²_Y + ‖𝒢x(t)‖²_U ) dt = 2R.

The mapping L(H,U) ∋ 𝒢 ↦ ‖y‖²_{L²(0,T;Y)} + ‖u‖²_{L²(0,T;U)} ∈ [0,∞) is continuous. Indeed, from [69, Corollary 1.3, p. 78], we know that

‖S_{𝒢+𝒦}(t) − S_𝒢(t)‖ ≤ Mφ(t),  t ≥ 0,

where {S_{𝒢+𝒦}(t)}_{t≥0}, {S_𝒢(t)}_{t≥0} are the semigroups generated by 𝒜 − ℬ(𝒢 + 𝒦) and 𝒜 − ℬ𝒢 respectively, φ(t) := e^{(ω + M‖ℬ𝒦‖)t} − e^{ωt}, and ω is the type of {S_𝒢(t)}_{t≥0}. But the function φ belongs to L²(0,T), and its L²(0,T) norm tends to 0 as ‖𝒦‖ tends to 0. Hence the mappings

L(H,U) ∋ 𝒢 ↦ 𝒞S_𝒢(·)x₀ ∈ L²(0,T;Y),  L(H,U) ∋ 𝒢 ↦ 𝒢S_𝒢(·)x₀ ∈ L²(0,T;U)

are both continuous.

By the continuity of the mapping L(H,U) ∋ 𝒢 ↦ ‖y‖²_{L²(0,T;Y)} + ‖u‖²_{L²(0,T;U)} just proved, for any ε ∈ (0,R], we get

‖y‖²_{L²(0,T;Y)} + ‖u‖²_{L²(0,T;U)} − ‖yₖ‖²_{L²(0,T;Y)} − ‖uₖ‖²_{L²(0,T;U)} ≤ ε

for almost all k ∈ ℕ, where yₖ and uₖ denote respectively the output and control functions due to 𝒢ₖ. However, this implies that

J(𝒢ₖ) = ‖yₖ‖²_{L²(0,∞;Y)} + ‖uₖ‖²_{L²(0,∞;U)} ≥ ‖yₖ‖²_{L²(0,T;Y)} + ‖uₖ‖²_{L²(0,T;U)} ≥ 2R − ε ≥ R

for almost all k ∈ ℕ, and the proof is complete. □
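The blow-up mechanism behind part (ii) can already be seen in a scalar sketch (an assumed example, not from the text): take A = B = C = 1, the feedback u = −gx, and x₀ = 1. Then Ω = {g > 1}, the scalar Lyapunov equation is solvable in closed form, and the cost grows without bound as g approaches the boundary of Ω.

```python
# Scalar sketch of the blow-up of J near the boundary of Omega
# (assumed example: A = B = C = 1, x0 = 1, feedback u = -g x).
# The closed loop is xdot = (1 - g) x, so Omega = {g > 1}, and
# J(g) = int_0^inf (1 + g^2) e^{2(1-g)t} dt = (1 + g^2) / (2 (g - 1)).
def J(g: float, x0: float = 1.0) -> float:
    assert g > 1.0, "g must stabilize the closed loop"
    # scalar Lyapunov equation: 2 (1 - g) p = -(1 + g^2)
    p = (1.0 + g**2) / (2.0 * (g - 1.0))
    return p * x0 * x0

costs = [J(1.0 + eps) for eps in (1.0, 0.1, 0.01, 0.001)]
print(costs)   # strictly increasing: J blows up as g -> 1+
```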

Now we formulate the parametric optimization problem which consists in finding 𝒢 ∈ Ω such that

⟨x₀, Π(𝒢)x₀⟩ = min_{K∈Ω} ⟨x₀, Π(K)x₀⟩  ∀x₀ ∈ H.   (6.6)

Theorem 6.2.1. If (𝒜,ℬ) is stabilizable and (𝒜,𝒞) is detectable, then the problem (6.6) has a unique solution.

Before starting the proof, let us remark that this is a well–known fundamental result concerning the lq problem (see [91] and [16, Section 4.4]), reformulated above as a parametric optimization problem. However, a new derivation of this result will be given. The main novelty, besides the reformulation, is the simple explicit proof of convergence of the Newton–Kleinman sequence of stabilizing controllers.

Proof. Using (6.4), it is easy to show that, if 𝒢 ∈ Ω, then for each 𝒦 ∈ L(H,U) such that 𝒢 + 𝒦 ∈ Ω, the operator ΔΠ = Π(𝒢 + 𝒦) − Π(𝒢) is the unique bounded self–adjoint operator satisfying the operator equation

⟨(𝒜 − ℬ(𝒢 + 𝒦))x₁, ΔΠx₂⟩ + ⟨x₁, ΔΠ(𝒜 − ℬ(𝒢 + 𝒦))x₂⟩ = ⟨𝒦x₁, [ℬ*Π(𝒢) − 𝒢]x₂⟩_U + ⟨[ℬ*Π(𝒢) − 𝒢]x₁, 𝒦x₂⟩_U − ⟨𝒦x₁, 𝒦x₂⟩_U   ∀x₁, x₂ ∈ D(𝒜).   (6.7)

Now we show that the following implication holds:

𝒢 ∈ Ω ⟹ ℬ*Π(𝒢) ∈ Ω.   (6.8)

Suppose for a moment that, contrary to our statement, one has ℬ*Π(𝒢) ∉ Ω. Since Ω is an open set, there is λ_Ω ∈ (0, 1] such that (see Figure 6.1)

Figure 6.1: An auxiliary diagram for the proof

𝒢_λ := (1 − λ)𝒢 + λℬ*Π(𝒢) ∈ Ω  for λ ∈ [0,λ_Ω),  and  𝒢_{λ_Ω} ∉ Ω.

Consequently, putting 𝒦 = 𝒢_λ − 𝒢 = λ[ℬ*Π(𝒢) − 𝒢], λ ∈ [0,λ_Ω), in (6.7), we come to the conclusion that ΔΠ = Π(𝒢_λ) − Π(𝒢) is the unique bounded, self–adjoint operator satisfying the operator equation

⟨(𝒜 − ℬ𝒢_λ)x₁, ΔΠx₂⟩ + ⟨x₁, ΔΠ(𝒜 − ℬ𝒢_λ)x₂⟩ = (2λ − λ²)⟨[ℬ*Π(𝒢) − 𝒢]x₁, [ℬ*Π(𝒢) − 𝒢]x₂⟩_U

for all x₁, x₂ ∈ D(𝒜) and all λ ∈ [0,λ_Ω). But 2λ − λ² ≥ 0 for λ ∈ [0,λ_Ω), and again by the results of Theorem 2.4.1 and Theorem 2.4.2, ΔΠ ≤ 0 (in the sense of quadratic forms). Hence the function

[0,λ_Ω) ∋ λ ↦ ⟨x₀, Π(𝒢_λ)x₀⟩ = ‖y_λ‖²_{L²(0,∞;Y)} + ‖u_λ‖²_{L²(0,∞;U)}

is bounded from above by ⟨x₀, Π(𝒢)x₀⟩, where y_λ(t) = 𝒞x_λ(t) and u_λ(t) = −𝒢_λx_λ(t), with x_λ denoting the solution of (6.1) with 𝒢 replaced by 𝒢_λ. But, since J is continuous by Lemma 6.2.2/(ii) and J(𝒢_{λ_Ω}) = ∞, this function takes arbitrarily large values in a sufficiently small neighbourhood of λ_Ω. Hence our claim ℬ*Π(𝒢) ∉ Ω leads to a contradiction, and thus (6.8) holds. By (6.8), the sequence {𝒢ₖ}_{k∈ℕ} given by

𝒢_{k+1} = ℬ*Π(𝒢ₖ),   (6.9)

where 𝒢₁ is an arbitrary element of Ω, is well–defined and contained in Ω. Taking 𝒢 = 𝒢ₖ and 𝒦 = 𝒢_{k+1} − 𝒢ₖ = ℬ*Π(𝒢ₖ) − 𝒢ₖ in (6.7), one obtains

⟨(𝒜 − ℬ𝒢_{k+1})x₁, ΔΠx₂⟩ + ⟨x₁, ΔΠ(𝒜 − ℬ𝒢_{k+1})x₂⟩ = ⟨[ℬ*Π(𝒢ₖ) − 𝒢ₖ]x₁, [ℬ*Π(𝒢ₖ) − 𝒢ₖ]x₂⟩_U   ∀x₁, x₂ ∈ D(𝒜),  k ∈ ℕ.

Applying once more the results from Theorem 2.4.1 and Theorem 2.4.2, we get ΔΠ = Π(𝒢_{k+1}) − Π(𝒢ₖ) ≤ 0. Thus the sequence of the terms

⟨x₀, Π(𝒢ₖ)x₀⟩ = ‖yₖ‖²_{L²(0,∞;Y)} + ‖uₖ‖²_{L²(0,∞;U)}

is nonincreasing and bounded from below by 0. Now, by standard arguments [86, Theorem 4.28, p. 79], there exists Π_∞ ∈ L(H), with Π_∞ = Π_∞* ≥ 0, such that Π(𝒢ₖ)x → Π_∞x as k → ∞, for each x ∈ H. Since ℬ* ∈ L(H,U), we have

𝒢_{k+1}x = ℬ*Π(𝒢ₖ)x → ℬ*Π_∞x =: 𝒢_∞x  ∀x ∈ H.   (6.10)

By virtue of Lemma 6.2.2/(ii),

⟨x₀, Π(𝒢ₖ)x₀⟩ = ‖yₖ‖²_{L²(0,∞;Y)} + ‖uₖ‖²_{L²(0,∞;U)} → ‖y_∞‖²_{L²(0,∞;Y)} + ‖u_∞‖²_{L²(0,∞;U)} = ⟨x₀, Π_∞x₀⟩ < ∞.

Hence 𝒢_∞ ∈ Ω. Now we can apply Lemma 6.2.1/(iii) to get

⟨x₀, Π_∞x₀⟩ = ‖y_∞‖²_{L²(0,∞;Y)} + ‖u_∞‖²_{L²(0,∞;U)} = ∫₀^∞ ( ‖𝒞x_∞(t)‖²_Y + ‖𝒢_∞x_∞(t)‖²_U ) dt = ⟨x₀, Π(𝒢_∞)x₀⟩  ∀x₀ ∈ H.

This means that Π_∞ = Π(𝒢_∞) and Π_∞ satisfies (6.4) with 𝒢 = 𝒢_∞, i.e.,

⟨(𝒜 − ℬ𝒢_∞)x₁, Π_∞x₂⟩ + ⟨x₁, Π_∞(𝒜 − ℬ𝒢_∞)x₂⟩ = −⟨𝒞x₁, 𝒞x₂⟩_Y − ⟨𝒢_∞x₁, 𝒢_∞x₂⟩_U   ∀x₁, x₂ ∈ D(𝒜).   (6.11)

Substituting 𝒢 = 𝒢_∞ in (6.7) and using 𝒢_∞ = ℬ*Π(𝒢_∞), for any 𝒦 ∈ L(H,U) such that 𝒢_∞ + 𝒦 ∈ Ω, we get

⟨(𝒜 − ℬ(𝒢_∞ + 𝒦))x₁, ΔΠx₂⟩ + ⟨x₁, ΔΠ(𝒜 − ℬ(𝒢_∞ + 𝒦))x₂⟩ = −⟨𝒦x₁, 𝒦x₂⟩_U   ∀x₁, x₂ ∈ D(𝒜).

Recalling again the results from Theorem 2.4.1 and Theorem 2.4.2, we come to the inequality Π(𝒢_∞ + 𝒦) ≥ Π(𝒢_∞), and thus 𝒢_∞ is a solution of (6.6). Moreover, from (6.11) and Theorem 2.4.4 it follows that Π_∞ is a Hilbert–Schmidt operator (HS–operator) provided that 𝒢_∞ and 𝒞 are finite–rank operators. □
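In finite dimensions the Newton–Kleinman iteration (6.9) is easy to run numerically. The sketch below (the matrices are illustrative assumptions, not from the text) performs the update 𝒢_{k+1} = ℬ*Π(𝒢ₖ) by solving one closed-loop Lyapunov equation per step, and checks that the limit coincides with the stabilizing solution of the associated algebraic Riccati equation.

```python
# Finite-dimensional sketch of the Newton-Kleinman iteration (6.9):
# G_{k+1} = B^T P(G_k), where P(G) solves the closed-loop Lyapunov
# equation (A - BG)^T P + P (A - BG) = -(C^T C + G^T G), cf. (6.4).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0., 1.], [0., 0.]])   # illustrative matrices
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
G = np.array([[1., 2.]])             # G_1: any stabilizing initial feedback

for _ in range(25):
    Acl = A - B @ G
    P = solve_continuous_lyapunov(Acl.T, -(C.T @ C + G.T @ G))
    G = B.T @ P                      # Newton-Kleinman update (6.9)

# The limit solves the algebraic Riccati equation associated with the problem
P_are = solve_continuous_are(A, B, C.T @ C, np.eye(1))
print(np.max(np.abs(P - P_are)))     # small residual: the iteration converged
```

The monotonicity established in the proof (Π(𝒢_{k+1}) ≤ Π(𝒢ₖ)) is what makes any stabilizing 𝒢₁ an admissible starting point here.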

Remark 6.2.1. The infinite–dimensional version of the Kleinman algorithm was used for the first time in [17] to prove that (6.11) has a maximal bounded self–adjoint positive solution (being the limit of the Kleinman sequence), provided that (𝒜,ℬ) is only stabilizable.

It follows from (6.10) and the closed–loop Lyapunov operator equation (6.11) that Π_∞ satisfies also the open–loop Lyapunov operator equation

⟨𝒜x₁, Π_∞x₂⟩ + ⟨x₁, Π_∞𝒜x₂⟩ = −⟨𝒞x₁, 𝒞x₂⟩_Y + ⟨𝒢_∞x₁, 𝒢_∞x₂⟩_U   ∀x₁, x₂ ∈ D(𝒜).   (6.12)
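Since 𝒢_∞ = ℬ*Π_∞ by (6.10), in finite dimensions the open-loop equation (6.12) is exactly the algebraic Riccati equation AᵀP + PA − PBBᵀP + CᵀC = 0. A quick numerical check (with the same illustrative matrices as before, an assumed example):

```python
# Numerical check of the open-loop identity (6.12) in finite dimensions:
# with G = B^T P, the Riccati solution P satisfies
# A^T P + P A = -C^T C + G^T G.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0., 1.], [0., 0.]])   # illustrative matrices
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])

P = solve_continuous_are(A, B, C.T @ C, np.eye(1))
G = B.T @ P                          # the feedback G_inf of (6.10)

lhs = A.T @ P + P @ A                # left-hand side of (6.12)
rhs = -C.T @ C + G.T @ G             # right-hand side of (6.12)
print(np.max(np.abs(lhs - rhs)))     # essentially zero
```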