
Abstract The results of this paper generalize the formula for the entropy of a transfer function to time-varying systems. This is done through the use of some results on spectral factorizations due to Arveson and properties of the W-transform, which generalizes the usual λ-transform to time-varying systems. Using the formula defined, it is shown that for linear fractional transformations like those that arise in time-varying \(H_\infty\) control, there exists a unique, bounded contraction which maximizes the entropy. This generalizes known results in the time-invariant case. Possible extensions are discussed, along with state-space formulae.


This work was supported in part by the National Science Foundation, under contract ECS-9309387


Time-varying entropy

Figure 1: Closed-loop system (plant \(G\) with inputs \(w, u\) and outputs \(z, y\); controller \(K\) in feedback from \(y\) to \(u\)).

1 Introduction

Since the pioneering work of Zames [25], there has been much interest in finding stabilizing controllers which ensure that the \(H_\infty\) norm of a closed-loop transfer function is below a given number \(\gamma > 0\). In particular, consider the system depicted in Figure 1 and suppose that the open-loop system is given by
\[
\begin{bmatrix} z \\ y \end{bmatrix} = G \begin{bmatrix} w \\ u \end{bmatrix}
= \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}
\begin{bmatrix} w \\ u \end{bmatrix},
\]

where the signals \(w\), \(z\), \(y\) and \(u\) are the external disturbance, output error, measured output, and control input, respectively. The goal of \(H_\infty\) control theory is to find a control law \(u = Ky\) such that the closed-loop system, denoted

\[
F_\ell(G, K) = G_{11} + G_{12}K(I - G_{22}K)^{-1}G_{21},
\]
has \(H_\infty\) norm less than \(\gamma\), assuming such a controller exists. While early developments relied on transfer function and operator methods, a recent emphasis, based on the work of Glover and co-workers [5, 10], has been to approach the \(H_\infty\) control problem using state-space methods. Glover et al. have shown that the existence of stabilizing controllers achieving the required norm bound is equivalent to the existence of positive semi-definite, stabilizing solutions to two indefinite algebraic Riccati equations (AREs). Because of this connection with AREs, these results are reminiscent of earlier work on Linear Quadratic Gaussian (LQG) control. A complete characterization of all controllers achieving the closed-loop \(H_\infty\) norm bound is given in [10]. Assuming that a controller exists such that \(\|F_\ell(G,K)\|_\infty < \gamma\), it can be shown that the set of all such controllers can also be parameterized by a linear fractional transformation of a specific controller \(K_a\) and a stable contraction \(Q\):

\[
K = F_\ell(K_a, Q), \qquad \|Q\|_\infty < 1.
\]

Given this parameterization, it is easy to see that all possible closed-loop transfer functions satisfying \(\|F_\ell(G,K)\|_\infty < \gamma\) are given by:

\[
F_\ell(G, F_\ell(K_a, Q)) = F_\ell(J, Q) =: H(Q)
\]

for some \(J\). To choose among this “ball” of solutions, it has been proposed that the controller be chosen so as to maximize the following “entropy” integral; see [2, 6, 11, 9]:

\[
I_d(H(Q); \gamma; z_0) := -\frac{\gamma^2}{2\pi}\int_{-\pi}^{\pi}
\ln\left|\det\left(I - \gamma^{-2}H(e^{i\omega})^*H(e^{i\omega})\right)\right|
\,\frac{1 - |z_0|^2}{|1 - \bar{z}_0 e^{i\omega}|^2}\, d\omega. \tag{1.1}
\]

The benefits of using controllers which maximize this entropy integral are outlined in the monograph [18], which treats the continuous-time case, and in [14, 13] for the discrete-time case. As shown in [18], these controllers can be thought of as lying between \(H_\infty\) optimal controllers and LQG optimal controllers. Specifically, \(I_d\) exhibits some norm-like properties; it is monotonically decreasing with respect to \(\gamma\); and it bounds the LQG cost of the closed-loop system. It has the added property that controllers which maximize the entropy are also optimal with respect to the risk-sensitive control problem of stochastic control theory [23]. Owing to the similarities between \(H_\infty\) control and classical LQG control which were highlighted in [5], many straightforward extensions have now appeared. In this paper we are particularly interested in controllers for time-varying systems, as have been considered in [17, 16, 20]. While the controllers achieving a norm bound can also be written as a linear fractional transformation of an operator \(K_a\) and a stable contractive operator \(Q\), it is not clear how to choose among the possible controllers, since the entropy integral (1.1) is given in terms of transfer functions and is therefore not amenable to time-varying systems.

In this paper we give a generalization of the entropy integral for discrete-time, time-varying systems. This generalization is based on the W-transform, introduced by Alpay et al. [1] in the context of interpolation problems for non-stationary processes. As in the stationary case, the entropy defined here for control systems will be related to the entropy used in [12] in the context of interpolants for band extension problems. In the study of linear time-invariant controllers, the entropy evaluated at \(z_0 = 0\) is of particular importance. In this case, it can be shown that the integral (1.1) is an upper bound for the \(H_2\) norm of the transfer function. For our time-varying systems, our entropy definition will deal only with the analogous evaluation at the origin. We will outline the difficulties that arise in generalizing this definition.

The rest of the paper is organized as follows. We begin by introducing the W-transform in Section 2 and giving some of its properties. In Section 3, the definition of the entropy for time-varying systems is given. In Section 4 we show that, for systems that can be expressed as linear fractional transformations of linear, causal, contractive operators, the entropy defined has a maximum. Some possible extensions of the theory are discussed in Section 5. Finally, in Section 6 we give some conclusions.

2 Preliminaries

In this section we introduce the notation and present some preliminary results concerning operators and the W-transform that will be needed in the rest of the paper. Most of these results are taken from [1]. Note that the presentation in [1] assumes that causal operators are represented as upper triangular matrices. In our presentation, we will use the more common representation of causal operators as lower triangular operators.

2.1 Notation

Let \(x = \{x(k) : k \in \mathbb{Z}\}\) denote a sequence of vectors \(x(k) \in \mathbb{C}^n\). The set of all such sequences is denoted \(S^n\). The subset of \(S^n\) of square-summable sequences is \(\ell_2^n\). The space \(\ell_2^n\) is a Hilbert space with inner product
\[
\langle x, y\rangle := \sum_{k=-\infty}^{\infty} x(k)^* y(k)
\]

and induced norm \(\|x\| := \langle x, x\rangle^{1/2}\). Let \(G\) represent a linear operator from \(\ell_2^m\) to \(\ell_2^p\). Then \(G\) has a natural representation as a doubly-infinite matrix \(\{G(i,j)\}\), \(i,j \in \mathbb{Z}\), \(G(i,j) \in \mathbb{C}^{p\times m}\). We will denote the operation \(y = Gu\) as follows:
\[
\begin{bmatrix} \vdots \\ y(-1) \\ \boxed{y(0)} \\ y(1) \\ \vdots \end{bmatrix}
=
\begin{bmatrix}
\ddots & & & & \\
\cdots & G(-1,-1) & G(-1,0) & G(-1,1) & \cdots \\
\cdots & G(0,-1) & \boxed{G(0,0)} & G(0,1) & \cdots \\
\cdots & G(1,-1) & G(1,0) & G(1,1) & \cdots \\
 & & & & \ddots
\end{bmatrix}
\begin{bmatrix} \vdots \\ u(-1) \\ \boxed{u(0)} \\ u(1) \\ \vdots \end{bmatrix}
\]
The box around the elements in the vectors (resp. matrix) denotes the element with index \(0\) (resp. \((0,0)\)).

Let \(\mathcal{X}^{p\times m}\) denote the space of bounded linear operators from the space \(\ell_2^m\) to \(\ell_2^p\). The subspace of \(\mathcal{X}^{p\times m}\) consisting of causal (resp. diagonal) operators is denoted \(\mathcal{L}^{p\times m}\) (resp. \(\mathcal{D}^{p\times m}\)). We will usually drop the superscripts on these spaces; the Hilbert spaces on which the operators act should be clear from the context. We write \(\mathcal{X}^{-1}\) to mean the space of operators whose inverses are in \(\mathcal{X}\). Similar expressions are used for \(\mathcal{L}\) and \(\mathcal{D}\). For operators in \(\mathcal{X}\), the following two facts will be useful. Proofs may be found in [1]:

Lemma 1 For an operator \(X \in \mathcal{X}\), the elements \(X(i,j)\) satisfy
\[
\|X(i,j)\| \le \|X\| \qquad (\forall i, j).
\]

Lemma 2 If \(D \in \mathcal{D}\) is a diagonal operator, then
\[
\|D\| = \sup_i \|D(i)\|.
\]

In the sequel, the forward shift operator will play a prominent rôle. This is the operator \(Z \in \mathcal{X}^{m\times m}\) given by
\[
Z \begin{bmatrix} \vdots \\ y(-1) \\ \boxed{y(0)} \\ y(1) \\ \vdots \end{bmatrix}
= \begin{bmatrix} \vdots \\ y(0) \\ \boxed{y(1)} \\ y(2) \\ \vdots \end{bmatrix}.
\]
This operator has a matrix representation
\[
Z = \{Z(i,j)\}, \qquad Z(i,j) = \begin{cases} I_m, & j - i = 1; \\ 0, & \text{otherwise.} \end{cases}
\]
One other useful operator is the projection operator \(P_k \in \mathcal{D}^{m\times m}\), which has a matrix representation \(P_k = \{P(i,i)\}\), where
\[
P(i,i) = \begin{cases} I_m, & \text{if } i < k; \\ 0, & \text{otherwise.} \end{cases}
\]

2.2 The W-transform

In order to consider interpolation problems for non-Toeplitz operators, Alpay et al. introduced a generalization of the usual Fourier transform on sequences, known as the W-transform. In this section we provide an introduction to this transform as well as some of its properties. Details may be found in [1].

Given an operator \(G \in \mathcal{X}^{p\times m}\), \(G = \{G(i,j)\}\), we define the set of diagonal operators
\[
G_{[k]} = \operatorname{diag}\{H(i)\} \in \mathcal{D}, \qquad H(i) = G(i, i-k), \quad i \in \mathbb{Z},
\]
corresponding to the subdiagonal elements of \(G\). From Lemmas 1 and 2,
\[
\|G_{[k]}\| = \sup_i \|G(i, i-k)\| \le \|G\|.
\]
An operator \(G \in \mathcal{L}\) has a unique representation as a series in terms of the \(G_{[k]}\) as follows:
\[
G = \sum_{k=0}^{\infty} G_{[k]}(Z^*)^k,
\]
where the sum converges weakly.

Let \(W \in \mathcal{D}\). We denote by \(\rho(W)\) the spectral radius of \(W\). It is well known that
\[
\rho(W) := \max\{|\lambda| : \lambda \in \sigma(W)\} = \lim_{n\to\infty} \|W^n\|^{1/n}.
\]
Finally, we define
\[
\ell(W) := \rho(WZ).
\]

We are now ready to introduce the W-transform.

Definition 3 Let \(G \in \mathcal{L}\) and \(W \in \mathcal{D}\), with \(\ell(W) < 1\). We define
\[
\hat{G}(W) := \sum_{k=0}^{\infty} G_{[k]}(Z^*)^k(ZW)^k. \tag{2.1}
\]

This series will converge provided \(\ell(W) < 1\). The transform (2.1) acts like the λ-transform¹ of a sequence in \(\mathbb{C}^n\). Note that in this case, the transform is taken of a sequence \(\{G_{[k]}\}\) of operators in \(\mathcal{D}\). In terms of the doubly-infinite matrix representation \(G := \{G(i,j)\}\), the W-transform can be written as
\[
\hat{G}(W) = \operatorname{diag}\left\{\sum_{k=0}^{\infty} G(i, i-k)\prod_{l=1+i-k}^{i} W(l)\right\}_{i\in\mathbb{Z}}, \tag{2.2}
\]
where \(W := \operatorname{diag}\{W(i)\}\) and empty products in (2.2) are taken to represent the identity matrix. In order to illustrate some of the properties of the W-transform, we provide some examples.

Example 4 (Time-Invariant Systems) Suppose that \(G \in \mathcal{L}\) represents a time-invariant operator. Then \(G\) has a characterization as a block Toeplitz matrix \(G = \{G(i, i-k)\}\) with \(G(i, i-k) = g_k\), and thus \(G_{[k]} = g_k I\), \(k \in \mathbb{Z}_+\), where \(I\) is the identity operator in \(\mathcal{D}\). We wish to evaluate the transform at \(W = \lambda I\), where \(\lambda \in \mathbb{C}\), \(|\lambda| < 1\). Then:
\[
\hat{G}(W) = \sum_{k=0}^{\infty} G_{[k]}(Z^*)^k(\lambda Z)^k = \sum_{k=0}^{\infty} \lambda^k G_{[k]} = G(\lambda)\,I,
\]
where \(G(\lambda)\) is the usual λ-transform of the sequence \(\{g_k\}\).
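The identity in Example 4 can be checked numerically on a finite truncation. The following sketch (the truncation size, the Markov parameters \(g_k\) and the point λ are arbitrary illustrative choices, not values from the paper) builds a lower-triangular Toeplitz matrix and evaluates the diagonal entries of \(\hat{G}(\lambda I)\) via formula (2.2); away from the truncation boundary they coincide with the scalar λ-transform \(G(\lambda)\):

```python
import numpy as np

g = [1.0, 0.5, 0.25]   # Markov parameters g_k (illustrative values)
lam, n = 0.3, 10       # evaluation point |lam| < 1, truncation size

# lower-triangular Toeplitz truncation: G(i, i-k) = g_k
G = np.zeros((n, n))
for k, gk in enumerate(g):
    G += gk * np.eye(n, k=-k)

# formula (2.2) at W = lam*I: the i-th diagonal entry of G-hat
Ghat = np.array([sum(G[i, i - k] * lam**k for k in range(i + 1))
                 for i in range(n)])

# the usual lambda-transform of {g_k}
G_lam = sum(gk * lam**k for k, gk in enumerate(g))

# rows that see all three Markov parameters reproduce G(lam) exactly
assert np.allclose(Ghat[len(g) - 1:], G_lam)
```

The first few diagonal entries differ only because the truncation cuts off the subdiagonals; in the doubly-infinite setting every entry equals \(G(\lambda)\).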

Example 5 (Frozen-time systems) Consider a general causal time-varying operator \(G \in \mathcal{L}\), but evaluate the transform, as in the previous example, at \(W = \lambda I\). This gives
\[
\hat{G}(W) = \operatorname{diag}\{\ldots, G_{-1}(\lambda), \boxed{G_0(\lambda)}, G_1(\lambda), \ldots\},
\]
where
\[
G_i(\lambda) = \sum_{k=0}^{\infty} G(i, i-k)\lambda^k, \qquad i \in \mathbb{Z},
\]
is the λ-transform of the frozen-time system at time \(i\). These frozen-time systems have received considerable interest recently in the study of slowly time-varying systems [21, 22, 26].

2.3 W-transform of a state-space system

Consider the system
\[
G := \begin{cases} x(k+1) = A(k)x(k) + B(k)u(k), & k \in \mathbb{Z}_+ \\ y(k) = C(k)x(k) + D(k)u(k). \end{cases} \tag{2.3}
\]
Define the following operator in \(\mathcal{D}\):
\[
A := \operatorname{diag}(\ldots, 0, 0, \boxed{A(0)}, A(1), A(2), \ldots),
\]
with similar representations for \(B\), \(C\) and \(D\). Let \(x\), \(y\) and \(u\) represent the elements of \(S^n\), \(S^p\) and \(S^m\) corresponding to the sequences \(x(k)\), \(y(k)\) and \(u(k)\). We can express the state-space equations (2.3) as:
\[
Zx = Ax + Bu, \qquad y = Cx + Du. \tag{2.4}
\]

¹The λ-transform is just the \(Z\)-transform with \(z^{-1}\) replaced by λ.

The operator mapping \(u\) to \(y\) is the \(\mathcal{L}\) operator
\[
G := C(Z - A)^{-1}B + D = C\sum_{k=0}^{\infty}(Z^*A)^k Z^* B + D
=: \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]. \tag{2.5}
\]
The series in (2.5) converges provided that \(\rho(Z^*A) < 1\). It can be shown that this condition is equivalent to uniform exponential stability of the autonomous system in (2.3), [15, 19]. For this linear time-varying system, we wish to calculate \(\hat{G}(W)\) for \(W = \operatorname{diag}\{W(i)\}\). First of all,
\[
G_{[k]} := \begin{cases} D, & k = 0 \\ CZ^*(AZ^*)^{k-1}BZ^k, & k > 0 \\ 0, & k < 0. \end{cases}
\]
Thus
\[
\hat{G}(W) = D + \sum_{k=1}^{\infty} CZ^*(AZ^*)^{k-1}B(ZW)^k
= \operatorname{diag}\left\{D(i) + \sum_{k=1}^{\infty} C(i)\,\Phi_A(i, i-k+1)\,B(i-k)\,\Phi_W(i-k, i)\right\}.
\]
Here, \(\Phi\) represents the transition matrix of the sequences \(A\) and \(W\); that is:
\[
\Phi_A(i,j) = \begin{cases} I, & \text{for } i = j \\ A(i-1)A(i-2)\cdots A(j), & \text{for } i > j \\ A(i+1)A(i+2)\cdots A(j), & \text{for } i < j. \end{cases}
\]

2.4 Properties of the W-transform

In this section we outline some properties of the W-transform.

Lemma 6 Let \(G \in \mathcal{X}\) and \(D, W \in \mathcal{D}\) with \(\ell(W) < 1\); then
\[
\widehat{DG}(W) = D\hat{G}(W).
\]

Proof. Using the identity
\[
(DG)_{[k]} = DG_{[k]},
\]
the proof is straightforward.

For the next property we need a special operator. Let \(D \in \mathcal{D}\). We define
\[
D^{(k)} := (Z^*)^k D Z^k \in \mathcal{D}.
\]
This has the effect of moving the elements of the diagonal \(D\) “down” \(k\) steps.

Lemma 7 Let \(G \in \mathcal{X}\), \(D \in \mathcal{D} \cap \mathcal{D}^{-1}\), and \(W \in \mathcal{D}\) with \(\ell(W) < 1\); then
\[
\widehat{GD}(W) = \hat{G}\!\left(D^{(1)}WD^{-1}\right)D.
\]

Proof. First, note that
\[
G_{[k]}(Z^*)^k D = [GD]_{[k]}(Z^*)^k. \tag{2.6}
\]
It follows that
\[
\hat{G}\!\left(D^{(1)}WD^{-1}\right)D
= \sum_{k=0}^{\infty} G_{[k]}(Z^*)^k\left(ZD^{(1)}WD^{-1}\right)^k D
= \sum_{k=0}^{\infty} G_{[k]}(Z^*)^k D(ZW)^k
= \sum_{k=0}^{\infty} [GD]_{[k]}(Z^*)^k(ZW)^k,
\]

as required.

The following corollary is straightforward.

Corollary 8 Let \(G \in \mathcal{X}\) and \(D \in \mathcal{D}\); then
\[
\widehat{GD}(0) = \hat{G}(0)D.
\]

The following result, and, more importantly, its corollary, will be crucial to the results that follow.

Lemma 9 Let \(G, H \in \mathcal{X}\), and \(W \in \mathcal{D}\) with \(\ell(W) < 1\); then
\[
\widehat{GH}(W) = \widehat{G\hat{H}(W)}(W).
\]

Proof: See [1, Lemma 3.7].

Finally, the following corollary combines the results of Lemma 9 and Corollary 8.

Corollary 10 Let \(G, H \in \mathcal{X}\); then
\[
\widehat{GH}(0) = \hat{G}(0)\hat{H}(0).
\]

2.5 Spectral Factorizations

For our definition of entropy, we require spectral factorizations of operators. The following lemma, due to Arveson, guarantees the existence of a spectral factor for positive self-adjoint operators.

Lemma 11 ([3]) Suppose that \(G \in \mathcal{X} \cap \mathcal{X}^{-1}\) is a positive, self-adjoint operator. There exist operators \(A, B \in \mathcal{L} \cap \mathcal{L}^{-1}\) such that
\[
G = A^*A = B^*B.
\]
Moreover, \(A = DB\), where \(D \in \mathcal{D} \cap \mathcal{D}^{-1}\) and \(D^*D = I\).

2.5.1 State-Space Formulae

We are interested in computing spectral factorizations for operators of the form \(I - G^*G\), where \(\|G\| < 1\) and \(G\) is given by (2.4). The following result provides a state-space equation for a particular spectral factorization:

Theorem 12 ([19]) Suppose that \(G\) is given by (2.4) with \(\rho(Z^*A) < 1\). The following statements are equivalent:

1. \(I - G^*G > 0\).

2. There exists a uniformly bounded solution \(X = X^* \ge 0\) to the operator algebraic Riccati equation
\[
X = A^*ZXZ^*A + C^*C + (A^*ZXZ^*B + C^*D)\,V^{-1}\,(B^*ZXZ^*A + D^*C) \tag{2.7}
\]
with \(V := I - D^*D - B^*ZXZ^*B > 0\) and \(A_F := A + BV^{-1}(B^*ZXZ^*A + D^*C)\) exponentially asymptotically stable.

3. The operator \(I - G^*G\) has a spectral factorization, i.e.
\[
I - G^*G = M^*M \tag{2.8}
\]
with
\[
M = \left[\begin{array}{c|c} A & B \\ \hline -V^{-1/2}(B^*ZXZ^*A + D^*C) & V^{1/2} \end{array}\right].
\]

3 Time-Varying Entropy

3.1 Entropy Operator

In this section we present our definition of the entropy for a linear time-varying operator \(G\). Suppose that \(G \in \mathcal{X}\) has operator norm \(\|G\| < \gamma\). It follows that the self-adjoint operator \(I - \gamma^{-2}G^*G\) is positive. By Lemma 11, it has a spectral factor \(M\). Using this spectral factor we begin by defining an entropy operator:

Definition 13 Suppose that \(G \in \mathcal{X}\) and \(\|G\| < \gamma\). Let \(M \in \mathcal{L} \cap \mathcal{L}^{-1}\) be a spectral factor of the positive operator \(I - \gamma^{-2}G^*G\), and let \(W \in \mathcal{D}\) be a diagonal operator with \(\ell(W) < 1\). We define:
\[
E(G;\gamma) := \hat{M}(0)^*\hat{M}(0). \tag{3.1}
\]

Since spectral factors are not unique, in order for the expression in (3.1) to make sense, we must show that it does not depend on the particular spectral factor chosen. Suppose that
\[
I - \gamma^{-2}G^*G = M^*M = N^*N,
\]
where, from Lemma 11, we know that \(M = DN\) for some \(D \in \mathcal{D}\) such that \(D^*D = I\). Let
\[
E_1(G;\gamma) := \hat{M}(0)^*\hat{M}(0), \qquad E_2(G;\gamma) := \hat{N}(0)^*\hat{N}(0).
\]
Now, from Lemma 6,
\[
\hat{M}(W) = \widehat{DN}(W) = D\hat{N}(W).
\]
Thus,
\[
E_1(G;\gamma) = \hat{M}(0)^*\hat{M}(0) = \hat{N}(0)^*D^*D\hat{N}(0) = \hat{N}(0)^*\hat{N}(0) = E_2(G;\gamma).
\]

The entropy operator (3.1) has many of the properties that the integral (1.1) exhibits for time-invariant systems. In the next lemma we outline some of these properties.

Lemma 14 With the notation of Definition 13 we have

(i) \(E(G;\gamma) \ge 0\).

(ii) \(E(G;\gamma) \le I\), with equality iff \(G \equiv 0\).

(iii) If \(U \in \mathcal{X}\), \(V \in \mathcal{L}\) with \(U^*U = I\) and \(V^*V = I\), then
\[
E(UGV;\gamma) = \hat{V}(0)^*E(G;\gamma)\hat{V}(0).
\]

Proof: Property (i) is straightforward. For property (ii), note from Lemmas 1 and 2 that, since \(\hat{M}(0) = M_{[0]}\),
\[
\|\hat{M}(0)\| = \sup_i \|M(i,i)\| \le \|M\|,
\]
but
\[
I - \hat{M}(0)^*\hat{M}(0) \ge I - M^*M = \gamma^{-2}G^*G \ge 0,
\]
and hence \(E(G;\gamma) \le I\).

To show (iii), first note that
\[
\|UGV\| \le \|U\|\,\|G\|\,\|V\| = \|G\| < \gamma.
\]
Thus
\[
I - \gamma^{-2}(UGV)^*(UGV) = V^*\left(I - \gamma^{-2}G^*G\right)V = (MV)^*(MV).
\]
Since \(M\) and \(V \in \mathcal{L}\), the product \(MV \in \mathcal{L}\). Thus \(MV\) is a spectral factor for \(I - \gamma^{-2}(UGV)^*(UGV)\). Using Corollary 10, we have
\[
\widehat{MV}(0) = \hat{M}(0)\hat{V}(0),
\]
which proves property (iii).

3.2 State-space formulae

For systems defined by the state-space recursion (2.3), we can give a state-space formula for the entropy. For notational simplicity, we assume that \(\gamma = 1\). It follows from Theorem 12 that a spectral factor \(M\) for \(I - G^*G\) is given by
\[
M = \left[\begin{array}{c|c} A & B \\ \hline -V^{-1/2}(B^*ZXZ^*A + D^*C) & V^{1/2} \end{array}\right].
\]
The diagonal component of this spectral factor is \(V^{1/2}\), and thus
\[
E(G;1) = V = I - D^*D - B^*ZXZ^*B.
\]

In the study of \(H_\infty\) control theory, we find that controllers, and thus closed-loop systems, can often be written as linear fractional transformations of an inner transfer function \(P\) and a stable contraction \(Q\). In the next section we show that, as in the case of time-invariant systems, the entropy operator can be used to choose among a set of controllers.
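For a scalar, time-invariant instance, the state-space entropy formula can be sanity-checked numerically (the plant below is a hypothetical example, not one from the paper): iterate the Riccati equation to a fixed point and compare \(\ln V\) with the circle average of \(\ln(1 - |G(e^{i\omega})|^2)\), which by the Poisson integral formula equals \(\ln|M(0)|^2 = \ln V\) for an outer factor \(M\):

```python
import numpy as np

# hypothetical scalar stable plant with ||G||_inf < 1
a, b, c, d = 0.5, 1.0, 0.3, 0.2

# fixed-point iteration of the (time-invariant) Riccati equation (2.7)
X = 0.0
for _ in range(200):
    V = 1 - d**2 - b**2 * X
    X = a * X * a + c**2 + (a * X * b + c * d)**2 / V
V = 1 - d**2 - b**2 * X

# G(lam) = d + c*lam*b/(1 - a*lam) evaluated on the unit circle
w = np.linspace(-np.pi, np.pi, 100_000, endpoint=False)
lam = np.exp(1j * w)
Gw = d + c * lam * b / (1 - a * lam)

# Poisson: the circle average of ln(1 - |G|^2) equals ln V
mean_log = np.mean(np.log(1 - np.abs(Gw) ** 2))
assert abs(mean_log - np.log(V)) < 1e-6

# for these numbers the fixed point is exact: X = 0.15, V = 0.81
assert abs(X - 0.15) < 1e-9 and abs(V - 0.81) < 1e-9
```

The agreement reflects that the diagonal term \(V^{1/2}\) of the spectral factor carries exactly the information that the entropy integral extracts at \(z_0 = 0\).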

4 Maximizing the Entropy Operator

For linear time-invariant systems, the set of closed-loop systems can be characterized in the form of a linear fractional transformation of an inner transfer function \(P\) and a stable contraction \(Q\). For time-varying systems, a similar characterization of stabilizing controllers exists [20]. In the time-invariant case, the integral (1.1) can be used to select among the possible closed-loop systems. It is well known that the controller which maximizes the entropy integral (1.1) for \(z_0 = 0\) is the central controller; that is, the controller with contraction \(Q = 0\). This controller coincides with the optimal risk-sensitive controller of stochastic control. Note that the central controller is equivalent to the choice \(Q_{\max} = [P_{22}(z_0)]^*\) for entropy evaluated at \(z_0 = 0\).

Figure 2: Closed-loop system (inner operator \(P\) with inputs \(w, u\) and outputs \(z, y\); contraction \(Q\) in feedback from \(y\) to \(u\)).

In this section we will show that the entropy operator \(E(G;\gamma)\) plays the corresponding rôle in time-varying optimization problems. Before doing so, we must show that a linear fractional transformation of the corresponding operators is well posed. In order to do this, we require a time-varying version of Redheffer’s Lemma:

Lemma 15 Suppose that, in Figure 2,
\[
P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix},
\]
with \(P_{11}, P_{12}, P_{22} \in \mathcal{L}\) and \(P_{21} \in \mathcal{L} \cap \mathcal{L}^{-1}\), is an isometry that admits a doubly coprime factorization. Furthermore, assume that \(Q\) is a causal (not necessarily bounded) operator also admitting a doubly coprime factorization. The following two statements are equivalent.

(i) The system is internally stable and well posed with \(\|F_\ell(P, Q)\| < 1\).

(ii) \(Q \in \mathcal{L}\) and \(\|Q\| < 1\).

Suppose that all closed-loop systems formation

has a maximum.

H

PQ

= F` ( ; ) denotes the set of all closed-loop systems, where Proposition 16 Suppose that is as in Lemma 15, and is a causal, bounded contractive operator. Then

P

Q

(a)

E(H ; ) is maximized by the unique choice Q = Qmax := P22 (0): b

12

Time-varying entropy

(b) The maximum value of the entropy is given by

E(F`(P ; Qmax); ) = P21 (0) I ? P22(0)P22 (0) ?1 P21(0): Proof: Since Q is a contractive operator, the bounded, Hermitian operator I ? Q Q has a spectral factorization. Denote the spectral factor by L. Now, I ? ?2H H = I ? F`(P ; Q)F`(P ; Q) = P21 (I ? QP22 )?1 (I ? QQ) (I ? P22Q)?1 P21 = L (I ? P22Q)?1 P21 L (I ? P22Q)?1 P21 =: N N : By assumption, P21 2 L \ L?1 . Moreover, L 2 L \ L?1 , since L is a spectral factor. Finally, it is shown in the proof of Lemma 15 that (I ? P22Q) 2 L \ L?1 . Thus N 2 L \ L?1 , and it is clearly a spectral factor of the operator I ? ?2 H H .

b

b

i

h

b

b

i

h

Using Corollary 10, we can evaluate

L (I ? P22Q)?1 P21 ^(0) = L^ (0) (I ? P22Q)?1 ^(0)P21(0): Writing (I ? P22Q)?1 as a series and again using Corollary 10 we see that 1 (P22Q)k ^(0) (I ? P22Q)?1 ^(0) = i

h

b

h

"

i

#

X

k=0

1

=

k=0

k

P22(0)Q(0)

X

b

b

(4.1)

I ? P22(0)Q(0) ?1 : The sum in (4.1) converges, since kP22 (0)Q(0)k kP22Qk < 1. It follows that2 E(H ; ) = P21 (0) I ? P22(0)Q(0) ? E(Q; 1) I ? P22(0)Q(0) ?1 P21(0): Note that since kP22 (0)k kP22k < 1, the operators I ? P22(0)P22 (0); I ? P22 (0)P22(0)

=

b

b

b

b

b

b

b

b

b

b

b

b

b

b

b

admit unique positive-definite hermitian square roots; see [7, p. 88]. We proceed now as in [2], and define the following Julia 22 operator (see [24, p.148]) acting on the Hilbert space `2 `2 :

D

X :=

"

2

:= 2

We use the notation

X11 X12 X21 X22 ?P22(0) I ? P22(0)P22 (0) #

6 4

b

b

I ? P22 (0)Pb22(0) 1=2

b

1=2

b

P22(0) b

3 7 5

:

A? = (A )? . 1

13

Time-varying entropy

X is an isometry and thus, by Lemma 15, the linear fractional T := F` (X ; Q) (0). is well defined. Moreover, note that T = 0 () Q = P22 ^ 21 (0) = X21 . Proceeding as above, we can evaluate: Since X21 2 D, then X It is straightforward to check that transformation

b

E(T ; ) = X21 I ? P22(0)Q(0) ? E(Q; 1) I ? P22(0)Q(0) ?1 X21:

b

b

b

b

(4.2)

Comparing (4.2) and (4.2), we see that

E(Q; 1) = I ? P22(0)Q(0) P21?(0)E(H ; )P21?1(0) I ? P22(0)Q(0) ? = I ? P22(0)Q(0) X21 E(H ; )X21?1 I ? P22(0)Q(0) :

b

b

b

b

b

b

b

b

b

b

Thus

E(H ; ) = P21 (0)X21?E(T ; )X21?1P21(0): (4.3) Since X21 and P21 (0) are both independent of Q, the maximum in (4.3) is obtained whenever E(T ; ) is maximized. By property (ii) of Lemma 14, we know that this achieved uniquely for T 0 ) Q = P22(0), from which the result of part (a) follows. Part (b), follows immediately b

b

b

b

upon substitution.

5 Extensions

In this section we outline some possible extensions of the entropy operator defined here, and some difficulties that arise with each.

5.1 Finite Time Horizon Systems

The entropy defined in this paper, despite having many desirable properties, differs significantly from the usual entropy in that the expression in (3.1) defines the entropy to be an operator and not a real number as in (1.1). For operators associated with finite time horizon systems, we may define an entropy number analogous to that of (3.1); this is now done. Let \(M = \{M(i,j)\}\) represent a bounded, causal operator. Define the operator

\[
M_N := P_N M(I - P_0).
\]
With respect to the direct sum decomposition \(\ell_2 = P_0\ell_2 \oplus (P_N - P_0)\ell_2 \oplus (I - P_N)\ell_2\), this operator has a natural partition as
\[
M_N = \begin{bmatrix} 0 & 0 & 0 \\ 0 & H_N & 0 \\ 0 & 0 & 0 \end{bmatrix},
\]
where
\[
H_N = \{M(i,j)\}, \qquad 0 \le i, j \le N-1,
\]
and \(H_N\) is zero elsewhere.

Note that if \(M = \sum_{k=0}^{\infty} M_{[k]}(Z^*)^k\), then
\[
M_N = \sum_{k=0}^{\infty} P_N M_{[k]}(Z^*)^k(I - P_0)
= \sum_{k=0}^{\infty} P_N M_{[k]}\left[(Z^*)^k(I - P_0)Z^k\right](Z^*)^k
= \sum_{k=0}^{N-1} P_N M_{[k]}(I - P_k)(Z^*)^k \tag{5.1}
\]
\[
=: \sum_{k=0}^{N-1} M_{[k]}^N(Z^*)^k,
\]
where, in (5.1), we have used the identity \((Z^*)^kP_0Z^k = P_k\), together with the fact that \(P_NM_{[k]}(I - P_k) = 0\) for \(k \ge N\). We can evaluate
\[
\hat{M}_N(0) = M_{[0]}^N = P_NM_{[0]}(I - P_0)
=: \begin{bmatrix} 0 & 0 & 0 \\ 0 & H_{[0]}^N & 0 \\ 0 & 0 & 0 \end{bmatrix}.
\]

Definition 17 Suppose that \(G \in \mathcal{X}\) and \(\|G\| < \gamma\), and let \(M\) be a spectral factor of the positive operator \(I - \gamma^{-2}G^*G\). Consider the finite rank operator \(M_N\) defined as above, as well as its non-zero component \(H_N\). We define:
\[
E_N(G;\gamma) := -\frac{\gamma^2}{N}\ln\det\left(H_{[0]}^N\right)^*H_{[0]}^N. \tag{5.2}
\]

In the next result we show that, for time-invariant systems, the entropy \(E_N\) coincides with the integral (1.1).

Proposition 18 For time-invariant operators, \(G = \{G(i,i-k)\} \in \mathcal{L}\) with \(G(i,i-k) = g_k\), and \(\gamma \in \mathbb{R}\) such that \(\|G\| < \gamma\), the following holds:
\[
E_N(G;\gamma) = I_d(G;\gamma;0),
\]
where \(G(\lambda)\) is the λ-transform of the sequence \(\{g_k\}\).

Proof: We first evaluate \(I_d(G;\gamma;0)\). Suppose that
\[
G(\lambda) = \sum_{k=0}^{\infty} g_k\lambda^k \qquad\text{and}\qquad M(\lambda) = \sum_{k=0}^{\infty} m_k\lambda^k,
\]
where \(G(\lambda)\) and \(M(\lambda)\) are the λ-transforms of the sequences \(\{g_k\}\) and \(\{m_k\}\), \(k \ge 0\), respectively, and that
\[
I - \gamma^{-2}G^*(\lambda)G(\lambda) = M^*(\lambda)M(\lambda)
\]
is a spectral factorization. It follows that
\[
I_d(G;\gamma;0) = -\frac{\gamma^2}{2\pi}\int_{-\pi}^{\pi}\ln\left|\det M(e^{i\omega})\right|^2 d\omega
= -2\gamma^2\ln\left|\det(M(0))\right| = -2\gamma^2\ln\left|\det(m_0)\right|,
\]
where in the second equality we have used the Poisson integral formula. We now evaluate \(E_N(G;\gamma)\). Recall from Example 4 that for Toeplitz operators, \(M_{[k]} = m_kI\). Thus
\[
\hat{M}_N(0) = P_NM_{[0]}(I - P_0) = m_0P_N(I - P_0)
\]
and
\[
H_{[0]}^N = m_0I_N,
\]
where \(I_N\) is a block \(N \times N\) identity matrix. Thus
\[
E_N(G;\gamma) = -\frac{\gamma^2}{N}\ln\det\left(H_{[0]}^N\right)^*H_{[0]}^N = -2\gamma^2\ln\left|\det(m_0)\right|,
\]

which completes the proof.
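The Poisson-integral step used above can be checked numerically for a simple minimum-phase factor (the coefficients below are hypothetical): the circle average of \(\ln|M(e^{i\omega})|^2\) equals \(2\ln|m_0|\).

```python
import numpy as np

# M(lam) = m0 + m1*lam, minimum phase since |m1/m0| < 1 (hypothetical values)
m0, m1 = 2.0, 0.5

w = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
Mw = m0 + m1 * np.exp(1j * w)          # M on the unit circle

# Poisson integral formula: (1/2pi) * int ln|M|^2 dw = ln|M(0)|^2
mean_log = np.mean(np.log(np.abs(Mw) ** 2))
assert abs(mean_log - 2 * np.log(m0)) < 1e-8
```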

5.2 General W

While the entropy definition in (1.1) allows one to define an entropy with respect to any \(z_0 \in \{z : |z| < 1\}\), the point of greatest interest is that with \(z_0 = 0\). It is to this particular entropy that our operator definition corresponds. Nevertheless, it would be desirable to generalize Definition 13 to more general “points” corresponding to operators \(W \ne 0\). The form of \(E\) in (3.1) suggests that we could define:
\[
E(G; W; \gamma) := \hat{M}(W)^*\hat{M}(W).
\]

While this formula satisfies properties (i) and (ii) of Lemma 14, it does not satisfy property (iii). More importantly, it does not seem to satisfy the same maximization property of Proposition 16 and for this reason, is of limited use. The proof of Proposition 16 breaks down since it relies heavily on Corollary 10, which does not hold for general .

W

16

Time-varying entropy

5.3 Continuous-time systems

In the case of continuous-time, linear time-invariant systems, there exists an entropy integral analogous to that of (1.1); see [18]. For time-varying systems, however, it is well known that the input-output operators analogous to \(G\) exist in continuous resolution spaces. In general, positive, invertible Hermitian operators in these spaces do not have spectral factorizations; see [4, Theorem 14.2]. Since the definition of the entropy for discrete-time systems given here depends crucially on the existence of these factorizations, it is not clear how to generalize it to continuous-time systems.

6 Conclusions

State-space methods have by now become prevalent in the theory of \(H_\infty\) control. Apart from being advantageous in terms of the numerical computations required, they have also allowed straightforward extensions of the theory to other settings, including time-varying systems. Until now, however, it has not been possible to extend the definition of the entropy of a system to this setting, since it relied heavily on the transfer function of the system. This paper has given this extension in terms of non-Toeplitz operators. The entropy defined here, while being an operator rather than a real number, has many of the same properties as that used in the time-invariant case. Moreover, for time-varying systems with state-space realizations, state-space formulae for this entropy have been provided.

A Appendix

In this appendix we prove our time-varying version of Redheffer’s lemma.

A.1 Proof of Lemma 15

(i)⇒(ii): For our proof, we modify the proof for the time-invariant case found in [5, Lemma 15]. Since \(P\) is an isometry, \(\|P_{22}\| \le 1\). This, together with the fact that \(Q\) is a contraction, implies that \(\|P_{22}Q\| < 1\). Thus, the series
\[
\sum_{k=0}^{\infty}(P_{22}Q)^k
\]
converges in \(\mathcal{L}\) and is equal to \((I - P_{22}Q)^{-1}\). This implies that \(Q\) stabilizes \(P_{22}\). By the coprimeness assumption on \(P\) and \(Q\) and a time-varying version of Lemma 4.2.1 in [8], it follows that \(Q\) internally stabilizes \(P\). Now, to show that \(F_\ell(P,Q)\) is a contraction, we use the fact that \(P\) is an isometry and a little algebra to show that
\[
F_\ell(P,Q)^*F_\ell(P,Q) = I - P_{21}^*(I - QP_{22})^{-*}(I - Q^*Q)(I - P_{22}Q)^{-1}P_{21} \le I, \tag{A.1}
\]
where in (A.1) we have used the fact that \(Q\) is a contraction.

(ii)⇒(i): To show the converse, we first prove that \(Q\) is a bounded operator. Recall that \(Q\) has a right coprime factorization \(Q = ND^{-1}\), where \(N, D \in \mathcal{L}\). Note that
\[
Q \in \mathcal{L} \iff D \in \mathcal{L} \cap \mathcal{L}^{-1};
\]
see [7, page 182]. From the internal stability assumption, we know that
\[
Q(I - P_{22}Q)^{-1} \in \mathcal{L} \;\Longrightarrow\; N(D - P_{22}N)^{-1} \in \mathcal{L}.
\]
Now, since \(N\) and \(D\) are right coprime, it follows that \(N\) and \(D - P_{22}N\) are also right coprime. To see this, suppose that \(\tilde{X}, \tilde{Y} \in \mathcal{L}\) are such that \(\tilde{X}N + \tilde{Y}D = I\). Then
\[
XN + Y(D - P_{22}N) = I
\]
with \(X = \tilde{X} + \tilde{Y}P_{22} \in \mathcal{L}\) and \(Y = \tilde{Y} \in \mathcal{L}\), which proves coprimeness. It follows, again from [7, page 182], that
\[
N(D - P_{22}N)^{-1} \in \mathcal{L} \;\Longrightarrow\; (D - P_{22}N)^{-1} \in \mathcal{L} \;\Longrightarrow\; D^{-1}(I - P_{22}Q)^{-1} \in \mathcal{L} \;\Longrightarrow\; D^{-1} \in \mathcal{L},
\]
where the last implication comes from the fact that \((I - P_{22}Q)^{-1} \in \mathcal{L} \cap \mathcal{L}^{-1}\). Thus \(Q \in \mathcal{L}\).

We now show that \(Q\) is a contraction. Assume otherwise; then there exists a signal \(y \in \ell_2\) such that \(u = Qy \in \ell_2\) and \(\|u\| \ge \|y\|\). Let \(w = P_{21}^{-1}(I - P_{22}Q)y\). This is in \(\ell_2\), since \(P_{21} \in \mathcal{L} \cap \mathcal{L}^{-1}\). Moreover, from the isometry condition we know that
\[
\|z\|^2 + \|y\|^2 = \|w\|^2 + \|u\|^2 \tag{A.2}
\]
\[
\ge \|w\|^2 + \|y\|^2. \tag{A.3}
\]
Thus \(\|z\|^2 \ge \|w\|^2\), which contradicts the assumption that \(F_\ell(P,Q)\) is a contraction.

References

[1] D. Alpay, P. Dewilde, and H. Dym, Lossless inverse scattering and reproducing kernels for upper triangular operators, Operator Theory: Advances and Applications, vol. OT 47, Birkhäuser, Basel, 1990, pp. 61–135.

[2] D.Z. Arov and M.G. Kreĭn, Problem of search of the minimum entropy in indeterminate extension problems, Functional Analysis and its Applications 15 (1981), 123–126.

[3] W. Arveson, Interpolation in nest algebras, J. Functional Analysis 20 (1975), 208–233.

[4] K.R. Davidson, Nest algebras, Pitman Research Notes in Mathematics, vol. 191, Longman Scientific & Technical, Harlow, UK, 1988.

[5] J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis, State-space solutions to standard \(H_2\) and \(H_\infty\) control problems, IEEE Trans. Automatic Control AC-34 (1989), no. 8, 831–847.

[6] H. Dym and I. Gohberg, A maximum entropy principle for contractive interpolants, J. Functional Analysis (1986), 83–125.

[7] A. Feintuch and R. Saeks, System theory: A Hilbert space approach, Academic Press, New York, 1982.

[8] B.A. Francis, A course in \(H_\infty\) control theory, Lecture Notes in Control and Information Sciences, vol. 88, Springer-Verlag, New York, 1987.

[9] K. Glover, Relations between \(H_\infty\) and risk sensitive controllers, 8th Intl. Conf. Analysis & Optimization of Systems, INRIA, 1988.

[10] K. Glover and J. Doyle, State-space formulae for all stabilizing controllers that satisfy an \(H_\infty\) norm bound and relations to risk sensitivity, Systems & Control Letters 11 (1988), no. 2, 167–172.

[11] K. Glover and D. Mustafa, Derivation of the maximum entropy \(H_\infty\)-controller and a state space formula for its entropy, Intl. J. Control 50 (1989), no. 3, 899–916.

[12] I. Gohberg, M.A. Kaashoek, and H.J. Woerdeman, A maximum entropy principle in the general framework of the band method, J. Functional Analysis 95 (1991), no. 2, 231–254.

[13] P.A. Iglesias and D. Mustafa, A separation principle for discrete time controllers satisfying a minimum entropy criterion, IEEE Trans. Automatic Control AC-38 (1993), no. 10, 1525–1530.

[14] P.A. Iglesias, D. Mustafa, and K. Glover, Discrete-time \(H_\infty\) controllers satisfying a minimum entropy criterion, Systems & Control Letters 14 (1990), no. 4, 275–286.

[15] E.W. Kamen, P.P. Khargonekar, and K.R. Poolla, A transfer-function approach to linear time-varying discrete-time systems, SIAM J. Control Optimization 23 (1985), no. 4, 550–565.

[16] P.P. Khargonekar, R. Ravi, and K.M. Nagpal, \(H_\infty\) control of linear time-varying systems: A state-space approach, SIAM J. Control Optimization 29 (1991), no. 6, 1394–1413.

[17] D.J.N. Limebeer, M. Green, and D. Walker, Discrete time \(H_\infty\) control, IEEE Conf. Decision Control (Tampa Bay, FL), IEEE, December 1989, pp. 392–396.

[18] D. Mustafa and K. Glover, Minimum entropy \(H_\infty\) control, Lecture Notes in Control and Information Sciences, vol. 146, Springer-Verlag, Heidelberg, FRG, 1990.

[19] M.A. Peters and P.A. Iglesias, On the induced norms of discrete-time and sampled-data time-varying systems, IEEE Conf. Decision Control, February 1994, submitted.

[20] M. Verhaegen and A.-J. Van der Veen, The bounded real lemma for discrete-time varying systems with application to robust output feedback, IEEE Conf. Decision Control (San Antonio, TX), December 1993, pp. 45–50.

[21] L.Y. Wang and G. Zames, Local-global double algebras for slow \(H_\infty\) adaptation: Part II — Optimization of stable plants, IEEE Trans. Automatic Control AC-36 (1991), no. 2, 143–151.

[22] L.Y. Wang and G. Zames, Local-global double algebras for slow \(H_\infty\) adaptation: The case of \(\ell_2\) disturbances, IMA J. Mathematical Control and Information 8 (1991), no. 3, 287–319.

[23] P. Whittle, Risk-sensitive optimal control, John Wiley and Sons, New York, 1990.

[24] N. Young, An introduction to Hilbert space, Cambridge University Press, Cambridge, UK, 1988.

[25] G. Zames, Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses, IEEE Trans. Automatic Control AC-26 (1981), no. 2, 301–320.

[26] G. Zames and L.Y. Wang, Local-global double algebras for slow \(H_\infty\) adaptation: Part I — Inversion and stability, IEEE Trans. Automatic Control AC-36 (1991), 130–142.
