
6.4.1 Example 1

For a time–delay system governed by the equations

\[
\left.\begin{aligned}
\dot z_1(t) &= -z_1(t) + z_2(t-1), && t \ge 0\\
\dot z_2(t) &= -z_2(t) + u(t), && t \ge 0\\
z_1(0) &= z_{10}\\
z_2(0) &= z_{20}\\
z_2(\theta) &= \varphi_2(\theta), && -1 \le \theta \le 0,\quad \varphi_2 \in L^2(-1,0)
\end{aligned}\right\}
\tag{6.31}
\]

we wish to construct a stabilizing controller, minimizing the performance index

\[
J = \int_0^\infty \bigl[z_1^2(t) + u^2(t)\bigr]\,dt
\tag{6.32}
\]

The investigated system represents a class of degenerate time–delay systems, which are reducible to a finite–dimensional system without retardation. This reduction can be achieved by introducing the new state variables

\[
\xi_1(t) = z_1(t+1), \qquad \xi_2(t) = z_2(t)
\tag{6.33}
\]
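Indeed, by (6.33) and the first equation of (6.31),
\[
\dot\xi_1(t) = \dot z_1(t+1) = -z_1(t+1) + z_2(t) = -\xi_1(t) + \xi_2(t), \qquad t \ge 0,
\]
so the delay disappears from the dynamics.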

As a result the equations (6.31) take the form

\[
\left.\begin{aligned}
\dot\xi_1(t) &= -\xi_1(t) + \xi_2(t), && t \ge 0\\
\dot\xi_2(t) &= -\xi_2(t) + u(t), && t \ge 0\\
\xi_1(0) &= \xi_{10} = z_1(1)\\
\xi_2(0) &= \xi_{20} = z_{20}
\end{aligned}\right\}
\tag{6.34}
\]

and the performance index

\[
J = \int_0^1 z_1^2(t)\,dt + \int_0^\infty \bigl[\xi_1^2(t) + u^2(t)\bigr]\,dt
\tag{6.35}
\]

provided that the restriction of $z_1$ to the interval $[0, 1]$ is the solution of the nonhomogeneous differential equation

\[
\dot z_1(t) = -z_1(t) + \varphi_2(t-1), \qquad 0 \le t \le 1, \qquad z_1(0) = z_{10}
\tag{6.36}
\]

The solution of (6.36) is

\[
z_1(t) = e^{-t}z_{10} + \int_0^t e^{-(t-\tau)}\varphi_2(\tau-1)\,d\tau, \qquad t \in [0, 1]
\]

and thus, substituting $\theta = \tau - 1$ in the integral,

\[
\xi_{10} = z_1(1) = e^{-1}z_{10} + \int_{-1}^{0} e^{\theta}\varphi_2(\theta)\,d\theta
\tag{6.37}
\]

The first integral in (6.35) does not depend on the control $u$, and thus the synthesis of a stabilizing controller reduces to the analysis of the system (6.1) in the Hilbert space $H = \mathbb{R}^2$ with

\[
\mathcal{A}x = Ax, \qquad
A = \begin{bmatrix} -1 & 1\\ 0 & -1 \end{bmatrix}, \qquad
b = \begin{bmatrix} 0\\ 1 \end{bmatrix}, \qquad
c = \begin{bmatrix} 1\\ 0 \end{bmatrix}.
\]

The pair $(A, b)$ is controllable, hence stabilizable, and the pair $(A, c)$ is observable, hence detectable. The problem of constructing a stabilizing controller is thus well–posed.
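This can be checked via the Kalman rank conditions:
\[
\operatorname{rank}\begin{bmatrix} b & Ab \end{bmatrix}
= \operatorname{rank}\begin{bmatrix} 0 & 1\\ 1 & -1 \end{bmatrix} = 2,
\qquad
\operatorname{rank}\begin{bmatrix} c & A^{\mathrm T}c \end{bmatrix}
= \operatorname{rank}\begin{bmatrix} 1 & -1\\ 0 & 1 \end{bmatrix} = 2 .
\]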

An approximate solution can be found with the aid of the iterative process (6.9), which in $\mathbb{R}^n$ agrees with the Newton–Kleinman method for solving the matrix Riccati equation (6.15) – see [29]. Applying the Matlab/Control Toolbox one obtains

\[
\mathcal{H} = \begin{bmatrix} 0.476489 & 0.216845\\ 0.216845 & 0.197368 \end{bmatrix},
\qquad
g = \mathcal{H}b = \begin{bmatrix} 0.216845\\ 0.197368 \end{bmatrix}
\]

and the characteristic polynomial of the optimal closed–loop system

\[
\det\bigl(sI - A + bg^{\mathrm T}\bigr) = s^2 + 2.197368\,s + 1.414213
\]

having the roots $-1.098684 \pm j\,0.455089$.
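For reproducibility, a minimal numerical sketch in Python/SciPy (used here in place of the Matlab Control Toolbox; \texttt{solve\_continuous\_are} plays the role of the Riccati solver):

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

# Data of the reduced, delay-free problem (6.34)-(6.35):
# minimize the integral of xi_1(t)^2 + u(t)^2, i.e. Q = c c^T, R = 1.
A = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([[0.0],
              [1.0]])
c = np.array([[1.0],
              [0.0]])
Q = c @ c.T
R = np.array([[1.0]])

H = solve_continuous_are(A, b, Q, R)   # solution of the matrix Riccati equation
g = (b.T @ H).ravel()                  # optimal gain, u = -g^T xi   (since R = 1)

print(H)                               # [[0.476489 0.216845], [0.216845 0.197368]]
print(g)                               # [0.216845 0.197368]
print(np.linalg.eigvals(A - b @ g[None, :]))   # approx. -1.098684 +/- 0.455089j
\end{verbatim}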

The exact solution will be found by applying the Callier–Winkin lemma. Since $\operatorname{Re}\sigma(A) < 0$ we can take $f = 0$ in (6.16). Now $\mathcal{A} = \tilde{\mathcal{A}}$ ($\tilde{\mathcal{A}}x = \tilde A x$ with $\tilde A \in L(\mathbb{R}^2)$) and we have

\[
\pi(\omega) = 1 + \bigl|c^{\mathrm T}(j\omega I - A)^{-1}b\bigr|^2
= 1 + \frac{1}{(1+\omega^2)^2}
= \frac{\omega^4 + 2\omega^2 + 2}{(1+\omega^2)^2}\,, \qquad \omega \in \mathbb{R}.
\]
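Here the transfer function of the triple $(A, b, c)$ is obtained directly from
\[
(sI - A)^{-1} = \frac{1}{(s+1)^2}\begin{bmatrix} s+1 & 1\\ 0 & s+1 \end{bmatrix},
\qquad
c^{\mathrm T}(sI - A)^{-1}b = \frac{1}{(s+1)^2}\,,
\]
so that $\bigl|c^{\mathrm T}(j\omega I - A)^{-1}b\bigr|^2 = (1+\omega^2)^{-2}$.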

Consequently, (6.24) is a polynomial spectral factorization problem. The factor satisfying (6.25) has the form

\[
\varphi(s) = \frac{s^2 + \sqrt{2 + 2\sqrt 2}\,s + \sqrt 2}{(1+s)^2}
\tag{6.38}
\]

The optimal stabilizing controller can be determined from (6.23)

\[
g = \tilde g = \begin{bmatrix} g_1\\ g_2 \end{bmatrix}, \qquad
g_1 = 1 + \sqrt 2 - \sqrt{2 + 2\sqrt 2}\,, \qquad
g_2 = -2 + \sqrt{2 + 2\sqrt 2}\,,
\]

which agrees with numerical calculations. The optimal controller produces the steering

\[
u(t) = -g_1\xi_1(t) - g_2\xi_2(t) = -g_1 z_1(t+1) - g_2 z_2(t)
\tag{6.39}
\]
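The values of $g_1$, $g_2$ follow by matching coefficients in (6.23), which, as in (6.46) below, here takes the form $\langle g, (sI - A)^{-1}b\rangle = \varphi(s) - 1$:
\[
\bigl\langle g, (sI - A)^{-1}b \bigr\rangle = \frac{g_2\,s + g_1 + g_2}{(s+1)^2}\,,
\qquad
\varphi(s) - 1 = \frac{\bigl(\sqrt{2 + 2\sqrt 2} - 2\bigr)s + \sqrt 2 - 1}{(1+s)^2}\,,
\]
whence $g_2 = \sqrt{2+2\sqrt 2} - 2$ and $g_1 + g_2 = \sqrt 2 - 1$.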

The diagram illustrating the synthesis of the optimal control law (6.39) is depicted in Figure 6.2. The initial-conditions resolver recovers $\xi_{10}$ from $z_{10}$ and $\varphi_2$ using (6.37).



Figure 6.2: Synthesis of the optimal control law

Replacing $t$ by $t + \theta + 1$, $-1 \le \theta \le 0$, in the first equation of (6.31) (which is legitimate since $t + \theta + 1 \ge 0$ and this equation holds for all $t \ge 0$) yields

\[
\dot z_1(t + \theta + 1) = -z_1(t + \theta + 1) + z_2(t + \theta)\,.
\]

Multiplying both sides by $e^{\theta}$ and integrating with respect to $\theta$ over $[-1, 0]$ we get

\[
z_1(t+1) = \frac{1}{e}\,z_1(t) + \int_{-1}^{0} e^{\theta} z_2(t+\theta)\,d\theta\,.
\]
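In detail, after multiplication by $e^{\theta}$ the equation becomes a total derivative,
\[
\frac{d}{d\theta}\Bigl[e^{\theta} z_1(t+\theta+1)\Bigr]
= e^{\theta}\dot z_1(t+\theta+1) + e^{\theta} z_1(t+\theta+1)
= e^{\theta} z_2(t+\theta)\,,
\]
and integration over $\theta \in [-1, 0]$ produces $z_1(t+1) - e^{-1}z_1(t)$ on the left-hand side.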

An equivalent representation of the control is obtained by substituting this expression into (6.39):

\[
u(t) = -\frac{g_1}{e}\,z_1(t) - g_2 z_2(t) - g_1\int_{-1}^{0} e^{\theta} z_2(t+\theta)\,d\theta
\tag{6.40}
\]

The representation (6.40) removes any doubts concerning the realizability of the control $u$ expressed in the form (6.39), where the current value of $u$ is determined by a future value of $z_1$.
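As an illustration of this realizability, a minimal simulation sketch of the feedback (6.40): forward-Euler integration of (6.31) with a rectangle-rule quadrature for the distributed term. The step size, horizon and initial data ($z_1(0) = 1$, $\varphi_2 \equiv 0$) are illustrative assumptions, not taken from the text.

\begin{verbatim}
import numpy as np

# Optimal gains from (6.38)-(6.39)
g1 = 1 + np.sqrt(2) - np.sqrt(2 + 2 * np.sqrt(2))   # ~ 0.216845
g2 = -2 + np.sqrt(2 + 2 * np.sqrt(2))                # ~ 0.197368

dt = 1e-3
n_delay = int(round(1.0 / dt))            # samples covering theta in [-1, 0)
theta = -1.0 + dt * np.arange(n_delay)
weights = np.exp(theta) * dt              # rectangle rule for the integral in (6.40)

z1, z2 = 1.0, 0.0                         # illustrative initial state
z2_hist = np.zeros(n_delay)               # z2(t + theta), theta in [-1, 0); phi2 = 0 assumed

for _ in range(int(10.0 / dt)):           # simulate 10 time units
    u = -g1 / np.e * z1 - g2 * z2 - g1 * weights @ z2_hist   # control law (6.40)
    z1_next = z1 + dt * (-z1 + z2_hist[0])    # z2(t - 1) is the oldest stored sample
    z2_next = z2 + dt * (-z2 + u)
    z2_hist = np.append(z2_hist[1:], z2)      # shift the history window forward by dt
    z1, z2 = z1_next, z2_next

print(z1, z2)    # both decay towards zero under the stabilizing feedback
\end{verbatim}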

In the Hilbert space $\mathbb{M}^2 = \mathbb{R}^2 \times L^2(-1, 0; \mathbb{R}^2)$, (6.31) can be written as the abstract equation (6.1) with

\[
x(t) = \begin{bmatrix} z_1(t)\\ z_2(t)\\ \psi_1(t)\\ \psi_2(t) \end{bmatrix} \in \mathbb{M}^2,
\qquad
b = \begin{bmatrix} 0\\ 1\\ 0\\ 0 \end{bmatrix},
\qquad
c = \begin{bmatrix} 1\\ 0\\ 0\\ 0 \end{bmatrix},
\]
\[
\mathcal{A}\begin{bmatrix} z_1\\ z_2\\ \psi_1\\ \psi_2 \end{bmatrix}
= \begin{bmatrix} -z_1 + \psi_2(-1)\\ -z_2\\ \psi_1'\\ \psi_2' \end{bmatrix},
\qquad
D(\mathcal{A}) = \left\{ \begin{bmatrix} z_1\\ z_2\\ \psi_1\\ \psi_2 \end{bmatrix} \in \mathbb{M}^2 :\
\psi_1, \psi_2 \in W^{1,2}(-1, 0), \;
\begin{bmatrix} \psi_1(0)\\ \psi_2(0) \end{bmatrix} = \begin{bmatrix} z_1\\ z_2 \end{bmatrix} \right\}
\tag{6.41}
\]

The performance index is then described in the form appearing in (6.2). It is well known, see [30, p. 139], that a necessary and sufficient condition for the semigroup generated on $\mathbb{M}^2 = \mathbb{R}^n \times L^2(-r, 0; \mathbb{R}^n)$ by

\[
\mathcal{A}\begin{bmatrix} z\\ \psi \end{bmatrix}
= \begin{bmatrix} Az + B\psi(-r)\\ \psi' \end{bmatrix},
\qquad
D(\mathcal{A}) = \left\{ x = \begin{bmatrix} z\\ \psi \end{bmatrix} \in \mathbb{M}^2 :\
\psi \in W^{1,2}(-r, 0; \mathbb{R}^n), \; \psi(0) = z \right\}
\tag{6.42}
\]

to be exponentially stable (EXS) is that all zeros of the characteristic quasipolynomial $s \mapsto \det\bigl[sI - A - e^{-sr}B\bigr]$, which is an entire function, have negative real parts. In our example,

\[
n = 2, \qquad r = 1, \qquad
A = \begin{bmatrix} -1 & 0\\ 0 & -1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix},
\]

and thus $\det\bigl[sI - A - e^{-sr}B\bigr] = (s+1)^2$. Hence the semigroup generated by the operator $\mathcal{A}$ defined in (6.41) is EXS. The pair $(\mathcal{A}, b)$ is then clearly stabilizable and $(\mathcal{A}, c)$ detectable; one may also apply the general results of [83]. The problem of optimal stabilizing controller synthesis therefore has a solution. To find it, we put $f = 0 \in H$ in (6.16), getting $\mathcal{A} = \tilde{\mathcal{A}}$. Solving the equation $(sI - \mathcal{A})x = b$, $s \ne -1$, one obtains

\[
(sI - \mathcal{A})^{-1}b = \frac{1}{(s+1)^2}
\begin{bmatrix} e^{-s}\\ s+1\\ e^{s(\theta-1)}\\ (s+1)e^{s\theta} \end{bmatrix},
\qquad
\bigl\langle (sI - \mathcal{A})^{-1}b, c \bigr\rangle = \frac{e^{-s}}{(s+1)^2}\,.
\]
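This resolvent formula follows by solving $(sI - \mathcal{A})x = b$ componentwise: writing $x = (z_1, z_2, \psi_1, \psi_2)$,
\[
(s+1)z_2 = 1, \qquad
s\psi_i(\theta) - \psi_i'(\theta) = 0, \ \ \psi_i(0) = z_i
\ \Longrightarrow\ \psi_i(\theta) = z_i e^{s\theta},
\]
\[
(s+1)z_1 = \psi_2(-1) = \frac{e^{-s}}{s+1}
\ \Longrightarrow\ z_1 = \frac{e^{-s}}{(s+1)^2}\,, \qquad
\psi_1(\theta) = \frac{e^{s(\theta-1)}}{(s+1)^2}\,.
\]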

Now, the solution of the factorization problem (6.24) is again expressed in the form (6.38); indeed, $|e^{-j\omega}| = 1$, so $\pi(\omega)$ is the same function as before. The identity (6.23) uniquely determines an optimal controller, provided that $(\mathcal{A}, b)$ is approximately controllable. We have

\[
\operatorname{rank}\bigl[\, sI - A - e^{-sr}B \;\; b \,\bigr] = n \qquad \forall s \in \mathbb{C}
\tag{6.43}
\]
\[
\operatorname{rank}\,\bigl[\, B \;\; b \,\bigr] = n
\tag{6.44}
\]

and approximate controllability easily follows from this criterion due to Triggiani and Manitius – see [75, p. 133].
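For the present data the criterion is verified directly:
\[
\bigl[\, sI - A - e^{-s}B \;\; b \,\bigr]
= \begin{bmatrix} s+1 & -e^{-s} & 0\\ 0 & s+1 & 1 \end{bmatrix},
\qquad
\bigl[\, B \;\; b \,\bigr]
= \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix},
\]
and both matrices have rank $2$ for every $s \in \mathbb{C}$ (at $s = -1$ the rank of the first is ensured by its second and third columns).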

By virtue of the Riesz representation theorem we may seek the optimal controller in the vector form

\[
g = \tilde g = \begin{bmatrix} \alpha_1\\ \alpha_2\\ \beta_1(\cdot)\\ \beta_2(\cdot) \end{bmatrix} \in H
\tag{6.45}
\]

Hence, the identity (6.23) takes the form

\[
\bigl\langle \tilde g, (sI - \mathcal{A})^{-1}b \bigr\rangle
= \left\langle
\frac{1}{(s+1)^2}
\begin{bmatrix} e^{-s}\\ s+1\\ e^{s(\theta-1)}\\ (s+1)e^{s\theta} \end{bmatrix},
\begin{bmatrix} \alpha_1\\ \alpha_2\\ \beta_1\\ \beta_2 \end{bmatrix}
\right\rangle
= \varphi(s) - 1
= \frac{\bigl(\sqrt{2+2\sqrt 2} - 2\bigr)s + \sqrt 2 - 1}{(1+s)^2}\,, \qquad s \ne -1\,.
\]

In fact this is an identity for entire functions,

\[
\alpha_1 e^{-s} + \alpha_2(s+1)
+ \int_{-1}^{0}\beta_1(\theta)\,e^{s(\theta-1)}\,d\theta
+ (s+1)\int_{-1}^{0}\beta_2(\theta)\,e^{s\theta}\,d\theta
= \bigl(\sqrt{2+2\sqrt 2} - 2\bigr)s + \sqrt 2 - 1 \qquad \forall s
\tag{6.46}
\]

The first integral in (6.46) is an entire function of $s$ whose growth exponent is distinct from the exponents of the other terms (according to the Paley–Wiener theorem its support is located outside the interval $[-1, 0]$). Thus we should take $\beta_1 = 0$ in $L^2(-1, 0)$. Another justification is also possible. Namely, by (6.21) and the Paley–Wiener theorem we have

\[
\mathcal{R}(\mathcal{A}, c) = \bigl[N_O(c, \mathcal{A})\bigr]^{\perp}
= \bigl\{ x \in \mathbb{M}^2 :\ \bigl\langle (sI - \mathcal{A})^{-1}x, c \bigr\rangle = 0 \ \ \forall s \in \rho(\mathcal{A}) \bigr\}^{\perp} =
\]
\[
= \left\{ x = \begin{bmatrix} z\\ \psi \end{bmatrix} \in \mathbb{M}^2 :\
(s+1)z_1 + e^{-s}z_2 + (s+1)\int_{-1}^{0} e^{-s(\theta+1)}\psi_2(\theta)\,d\theta = 0 \ \ \forall s \right\}^{\perp}
= \left\{ x \in \mathbb{M}^2 :\ x = \begin{bmatrix} z_1\\ z_2\\ 0\\ \psi_2 \end{bmatrix} \right\}.
\]

The restriction of the system to its invariant subspace $\mathcal{R}(\mathcal{A}, c)$ arises by setting the first function component to zero. Since $g \in \mathcal{R}(\mathcal{A}, c)$, this component does not participate in the optimal control law.

Now, assume for the moment that $\beta_2 \in W^{1,2}(-1, 0)$. Integrating by parts and comparing coefficients of the terms with the same growth exponents in (6.46), we obtain

\[
\alpha_2 = \sqrt{2 + 2\sqrt 2} - 2
\tag{6.47}
\]

and the two–point boundary value problem

\[
\left.\begin{aligned}
\beta_2'(\theta) &= \beta_2(\theta)\\
\beta_2(-1) &= \alpha_1\\
\beta_2(0) &= \sqrt 2 + 1 - \sqrt{2 + 2\sqrt 2}
\end{aligned}\right\}
\tag{6.48}
\]
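In more detail, with $\beta_1 = 0$ integration by parts turns the left-hand side of (6.46) into
\[
\alpha_2\,s + \bigl[\alpha_1 - \beta_2(-1)\bigr]e^{-s}
+ \alpha_2 + \beta_2(0)
+ \int_{-1}^{0}\bigl[\beta_2(\theta) - \beta_2'(\theta)\bigr]e^{s\theta}\,d\theta\,,
\]
and comparison with the right-hand side forces the coefficient of $s$ to equal $\sqrt{2+2\sqrt 2} - 2$, the $e^{-s}$ term and the integral term to vanish, and $\alpha_2 + \beta_2(0) = \sqrt 2 - 1$, which is exactly (6.47) and (6.48).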

Problem (6.48) has a unique solution

\[
\alpha_1 = \frac{\sqrt 2 + 1 - \sqrt{2 + 2\sqrt 2}}{e}\,, \qquad
\beta_2(\theta) = \bigl(\sqrt 2 + 1 - \sqrt{2 + 2\sqrt 2}\bigr)e^{\theta}, \qquad \theta \in [-1, 0]
\tag{6.49}
\]

Taking (6.47), (6.49) and (6.45) into account in the formula for the optimal control $u(t) = -\langle g, x(t)\rangle$ we again obtain (6.40). The uniqueness of the optimal controller justifies, a posteriori, the assumption that $\beta_2$ is absolutely continuous.