Motivated by the examples of Section 3.1, we pose the problem of evaluating the quadratic
integral performance index
(3.16)
with , , , , , over trajectories of the neutral system (3.5). We shall give a solution to this problem employing the results of Section 2.4.
In the state space
we can write (3.5) as an abstract initial value problem appearing in (2.19) with
(3.17)
and with and initial point .
We shall prove that
generates a linear –semigroup
on
,
where
To do this, the following result will be useful.
Theorem 3.2.1 (Walker). Let be a
real Hilbert space with scalar product .
Assume that
is a linear operator satisfying the assumptions:
(i)
there exists
such that
for all ,
(ii)
there exist and an
equivalent scalar product
in
such that
Then generates a
–semigroup
on
with
the property
Proof. Recall that a scalar product is equivalent to the original scalar product if the norms induced by these scalar products are equivalent, i.e., there exist positive constants , such that
The proof relies on verifying all assumptions of Theorem 2.3.2 as a sufficient condition for generation of
the semigroup .
Details can be found in [84, Theorem 4.2, p. 108]. □
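In finite dimensions the equivalence of scalar products invoked in the proof can be checked directly: any scalar product induced by a symmetric positive definite matrix is equivalent to the Euclidean one, with constants given by the extreme eigenvalues. A minimal numerical sketch (the matrix W and the dimension are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Finite-dimensional analogue of an equivalent scalar product:
# <x, y>_W = x^T W y with W symmetric positive definite.  The induced
# norms satisfy  sqrt(lmin)*||x|| <= ||x||_W <= sqrt(lmax)*||x||,
# where lmin, lmax are the extreme eigenvalues of W.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
W = M @ M.T + 4 * np.eye(4)          # symmetric positive definite by construction

lmin, lmax = np.linalg.eigvalsh(W)[[0, -1]]   # eigenvalues in ascending order
x = rng.standard_normal(4)
norm_W = np.sqrt(x @ W @ x)          # norm induced by <.,.>_W
norm_2 = np.linalg.norm(x)           # original Euclidean norm

assert np.sqrt(lmin) * norm_2 <= norm_W <= np.sqrt(lmax) * norm_2
```

The constants sqrt(lmin), sqrt(lmax) play the role of the positive constants in the definition above.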
Observe that the operator (3.17) satisfies condition (i) of Theorem 3.2.1 if for sufficiently
large
the equation
has a solution in .
Equivalently, we seek a solution of the system
satisfying .
Solving the second equation and substituting the solution into the first
equation, we obtain a nonhomogeneous linear algebraic equation in
,
which has a solution because
Consequently the operator
is onto for sufficiently large .
To prove that condition (ii) of Theorem 3.2.1 is also fulfilled, we consider an equivalent scalar product in ,
Then
The semigroup
is EXS iff
(3.18)
i.e., the spectrum of
lies in the open unit disc and all roots of the characteristic quasipolynomial
(3.19)
have negative real parts, see [30, Lemma 6.2.11, p. 151] for a proof. In what follows, we
assume that (3.18) and (3.19) hold.
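Condition (3.18) requires the spectrum of the difference-operator matrix to lie in the open unit disc. The following sketch checks this numerically; the matrix D below is a hypothetical placeholder, since the actual matrix appearing in (3.18) is not reproduced here (checking that all roots of the quasipolynomial (3.19) lie in the open left half-plane is a separate, harder task):

```python
import numpy as np

# Placeholder difference-operator matrix (hypothetical values).
D = np.array([[0.3, 0.1],
              [0.0, 0.2]])

# Condition (3.18): spectral radius strictly less than 1.
spectral_radius = max(abs(np.linalg.eigvals(D)))
print(spectral_radius < 1.0)   # True: spectrum lies in the open unit disc
```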
A linear observation operator
(),
corresponds to the integrand in (3.16). Since the semigroup
is
EXS we have
Employing the Rayleigh inequality we get
and thus (2.21) holds, i.e.,
is admissible. It follows from Theorems 2.4.1, 2.4.2, and (2.24) that
where
is a unique bounded self–adjoint nonnegative solution to the Lyapunov operator equation (2.22)
which reduces now to
and .
The matrix kernel function (3.22) may have a discontinuity along the diagonal
of the
square , or
equivalently,
may not be a symmetric matrix.
Taking (3.17) and (3.21) into account in (3.20), and integrating by parts we get
Hence we arrive at a system of equations determining
,
,
and
,
(3.23)
By elimination of
we reduce (3.23) to the discrete Lyapunov matrix equation
(3.24)
and the boundary–value problem
(3.25)
Furthermore, we get also
(3.26)
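Equation (3.24) is a standard discrete Lyapunov matrix equation and can be solved numerically. Since the concrete coefficient matrices of (3.24) are not reproduced here, the sketch below uses illustrative placeholders A1 and Q and solves the generic equation A1ᵀ P A1 − P = −Q, which has a unique solution whenever the spectral radius of A1 is less than one, cf. (3.18):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative placeholders (not the matrices of the text).
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])          # spectral radius 0.5 < 1
Q = np.eye(2)

# SciPy solves  a X a^H - X + q = 0;  passing a = A1^T solves
# A1^T P A1 - P + Q = 0, i.e. the discrete Lyapunov equation above.
P = solve_discrete_lyapunov(A1.T, Q)

residual = A1.T @ P @ A1 - P + Q
print(np.allclose(residual, 0.0))     # True
```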
Remark 3.2.1. Castelan and Infante [10], [11] have derived (3.25) in the case
, i.e.,
for retarded systems and a much more complicated version of (3.25) for neutral systems provided
that
was chosen as the state space.
A special technique has been developed in [12] for the analysis of their version of the
problem (3.25). In what follows we adapt that technique to solve (3.25). By substituting
(3.27)
one can reduce the first equation of (3.25) to the system
(3.28)
In turn, (3.28) is equivalent to a linear autonomous system in the space , which can be seen by applying the Kronecker product of matrices ([57, Section 8.4]). This yields
where ,
denote
–dimensional
vectors composed of the rows of the matrices
and
,
respectively. By the Schur lemma and (3.18) we have
Hence
Employing again the Schur lemma and some properties of the Kronecker product we find the
characteristic polynomial of the above system,
(3.29)
Thus
(3.30)
is an eigensolution of (3.28) where
is a root of (3.29), and matrices ,
satisfy
the system
(3.31)
By multiplying the equations of (3.31) by
,
transposing and reordering them, one can see that if (3.30) is an eigensolution then
is an
eigensolution too.
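The passage to a linear autonomous system above rests on the row-stacking vectorisation and the Kronecker product identity vec_r(A X B) = (A ⊗ Bᵀ) vec_r(X), where vec_r stacks the rows of a matrix into one long vector. A quick numerical check of this identity, with arbitrary illustrative matrices:

```python
import numpy as np

# Row-stacking vectorisation turns the matrix equation A X B = C into an
# ordinary linear system:  vec_r(A X B) = (A kron B^T) vec_r(X).
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
X = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 3))

lhs = (A @ X @ B).flatten()            # row-major (C-order) vec of A X B
rhs = np.kron(A, B.T) @ X.flatten()    # (A kron B^T) applied to vec_r(X)
print(np.allclose(lhs, rhs))           # True
```

For the column-stacking convention the same identity reads vec(A X B) = (Bᵀ ⊗ A) vec(X).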
Assume from now on that all eigenvalues of (3.28) have linear elementary divisors. Then the corresponding eigenvectors form a basis in
and
the general solution of (3.28) is
It is easy to see that this solution satisfies the functional equation (3.27) if and only if
and
finally
(3.32)
is a general solution of the first equation of (3.25). Substituting (3.32) into the second and third equations of (3.25) yields
A further application of the Kronecker product of matrices enables us to represent the last
equations as
(3.33)
The matrix of the system (3.33) is nonsingular. Indeed, if this is not the case, then taking
(by virtue of (3.24) and
(3.18) we also have )
and making use of formulae (3.33), (3.32), (3.26), (3.24) and (3.21) we can generate matrices
,
,
,
and thus a
nonzero operator
being a solution to the Lyapunov operator equation (3.20). However, this contradicts the uniqueness of the
null solution for
which, in turn, follows from (3.18), (3.19) and Theorem 2.4.2. Finally, (3.33) has a unique
solution, which means that the formulae (3.33), (3.32), (3.26), (3.24) and (3.21) determine the matrices
,
,
,
and thus an
operator
being the unique solution of the Lyapunov operator equation (3.20).
The assumption that all eigenvalues of (3.28) are simple is not essential for the validity of the above derivation, and (3.32) can be appropriately modified if there are nonlinear elementary divisors.
Let us indicate two possible simplifications of the performance index evaluation
which can arise in practical applications. The first is symmetry of matrices
,
,
which means that (3.33) contains
redundant equations. The second is that for a large variety of initial conditions the
evaluation of the performance index does not require knowledge of all entries of
,
,
,
(e.g. for
it suffices to determine
only the matrix ).
The Kronecker product of matrices can also be applied (see [36]) to derive the frequency–domain method of evaluating the performance index