

Institut Montefiore, B28, Université de Liège, B4000 Liège Sart-Tilman, Belgium. email: [email protected]

Abstract


A gain scheduling based on a one-parameter family of Lyapunov functions is presented for the control of linear systems with aﬃne constraints. The tuning of the parameter in the control law is assumed to result from a trade-oﬀ between the size of the state-space domain where the constraints are satisﬁed and the closed-loop performance. A target controller is chosen for local performance in this family. The proposed online scheduling is aimed at reaching the target controller in the fastest possible way, while guaranteeing satisfaction of the constraints along closed-loop solutions.

1 Introduction

This paper addresses the feedback control design of linear systems

ẋ = Ax + Bu,  x ∈ ℝ^n, u ∈ ℝ,   (1.1)

subject to p affine constraints

Lx + Mu ≤ N   (1.2)

with L ∈ ℝ^(p×n), M ∈ ℝ^p, N ∈ ℝ^p. Even though our primary concern will be the control of linear systems under input magnitude constraints (as in [4, 5, 8]) or magnitude and rate constraints ([3, 7]), the proposed method is, in principle, more general. The realistic assumption that we adopt as a starting point is that the constraints (1.2) are not active locally, that is, any smooth stabilizing feedback satisfies the constraints in a neighborhood of the origin x = 0.
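Concretely, an input magnitude bound |u| ≤ umax is one instance of (1.2), with L = 0, M = (1, −1)^T and N = (umax, umax)^T. A minimal sketch of this encoding (illustrative values only, not from the paper):

```python
import numpy as np

# Encode |u| <= u_max as L x + M u <= N, i.e. p = 2 affine constraints (1.2)
u_max = 1.0
L = np.zeros((2, 2))          # these constraints do not involve the state
M = np.array([1.0, -1.0])     # u <= u_max and -u <= u_max
N = np.array([u_max, u_max])

def satisfied(x, u):
    """Check the affine constraints L x + M u <= N pointwise."""
    return bool(np.all(L @ x + M * u <= N))

print(satisfied(np.array([5.0, -3.0]), 0.9))   # True: |0.9| <= 1
print(satisfied(np.array([0.0, 0.0]), -1.2))   # False: |-1.2| > 1
```

Rate bounds enter the same form once the input is treated as an extra state, as done in Section 5.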

Assumption 1 If A + BK is a Hurwitz matrix, then Lx + MKx ≤ N is satisfied for all x in a neighborhood of the origin.

A stronger assumption adopted throughout the paper is that the constraints (1.2) can be satisfied with linear controllers in large regions of the state space, at the expense of degraded performance. For instance, this is the well-known situation encountered when low-gain designs are used to address magnitude and rate limitations on the actuators. Choosing an initial controller that ensures a sufficiently large region of attraction and a final controller that ensures good local performance, our objective is to design a scheduling ensuring the best possible transition between these two extreme controllers along the closed-loop solutions. To this end, we will assume that our "initial" and "final" controllers belong to a one-parameter family of linear controllers

Ju = −K(λ)x   (1.3)

which, for each λ ∈ (0, 1], ensures the decrease of a quadratic Lyapunov function

V(x, λ) = x^T P(λ)x   (1.4)

along the solutions of (1.1) in the absence of the constraints (1.2) (with K(λ) and P(λ) > 0 continuously differentiable). The role of the parameter J in (1.3) will be explained in Section 2. The construction of the one-parameter family (1.3) and (1.4) is not addressed in general in this paper, but an efficient procedure is proposed in [5] based on a parameterized Riccati inequality. More specific choices will be discussed in the applications section. As a convention, the value λ = 1 will correspond to the target controller that yields good local performance. However, we will be interested in dealing with initial conditions x0 that initially force a smaller value of λ in order to satisfy the constraints (1.2). Our gain-scheduling design will show how to adapt the parameter λ along the closed-loop solutions so as to guarantee the fastest possible transition to the target controller while satisfying the constraints and ensuring convergence to the origin.

This approach was first proposed in [5] in the restricted framework of magnitude constraints, with a Riccati-based family of Lyapunov functions (1.4) and controllers u = −B^T P(λ)x. A different family of controllers was recently proposed in [1], aimed at the online invariance of the condition B^T P(λ)x = 0. The present paper generalizes the results of [5] and [1] in a unified framework. The generalization of magnitude input constraints to arbitrary affine constraints (1.2) is not merely notational. It is exemplified in the present paper by a new and successful design scheme for the control of linear systems under both magnitude and rate limitations on the control variable.

The paper is organized as follows: Section 2 discusses two different choices of controllers for a fixed family of Lyapunov functions. Section 3 describes the gain-scheduling algorithm itself. Section 4 addresses the particular case of input magnitude constraints, summarizing the results of [5] and [1] in the unified framework of this paper.
Section 5 addresses the control of linear systems with input magnitude and rate constraints, and different solutions are illustrated on the double integrator system.

2 Controller gain

The parameter J is introduced in (1.3) to distinguish between explicit control laws u = −K(λ)x (J = 1) and implicit control laws specified by the invariance condition K(λ)x = 0 (J = 0). The reason why this distinction is rather important for the proposed gain-scheduling is now briefly explained. By assumption, any fixed controller (1.3) will satisfy the constraints (1.2) in a neighborhood of the origin. A Lyapunov estimate of this region is given by the minimum level set V̄(λ) where one of the constraints becomes active.

The online requirement V(x, λ) ≤ V̄(λ) will guarantee closed-loop convergence to the origin, but it is the main source of conservatism in the adaptation of λ. In the case J = 1, V̄(λ) is computed as

V̄(λ) = min_{x ∈ ℝ^n} x^T P(λ)x  s.t.  ∃i : (L_i − M_i K(λ))x = N_i.   (2.5)

In the implicit case J = 0, we assume the relative degree one condition K(λ)B ≠ 0, so that, when λ is fixed, the invariance condition K(λ)x = 0 is ensured by the control law u = −K(λ)Ax / (K(λ)B). V̄(λ) is then computed as

V̄(λ) = min_{x ∈ ℝ^n} x^T P(λ)x  s.t.  ∃i : (L_i − M_i K(λ)A/(K(λ)B))x = N_i,  K(λ)x = 0.   (2.6)

Both in the explicit and implicit cases, V̄(λ) results from the minimization of a quadratic function under affine constraints. Presumably, the additional equality constraint in (2.6) will result in a larger value V̄(λ), thereby reducing the conservatism of the Lyapunov estimate. The specification of the controller through the implicit relation K(λ)x = 0 may of course pose a problem for the initialization of the control scheme. If no λ0 exists such that K(λ0)x0 = 0, an initial phase of the control algorithm is necessary to bring the solution into an admissible region of the state space. This part of the algorithm is somewhat decoupled from the gain-scheduling problem addressed here and will not be further discussed in the present paper. It is discussed in [1] in analogy with a sliding mode control approach, where K(λ)x = 0 would be the sliding surface and a "reaching mode" is necessary for initial conditions that do not belong to the sliding surface. In the sequel, the gain-scheduling will be called "explicit" in the case J = 1 and "implicit" in the case J = 0.
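Both (2.5) and (2.6) minimize a quadratic under affine constraints, and for a single active boundary c^T x = n the minimum of x^T P x has the closed form n²/(c^T P⁻¹ c); V̄(λ) is then the smallest such value over the p constraints. A numerical sketch under these assumptions (illustrative data; the extra equality K(λ)x = 0 of the implicit case is omitted here):

```python
import numpy as np

def min_level_set(P, C, N):
    """Smallest level set of x^T P x touching one of the affine
    boundaries C[i] @ x = N[i]; uses the closed form
    min_x x^T P x s.t. c^T x = n  ==  n**2 / (c^T P^{-1} c)."""
    Pinv = np.linalg.inv(P)
    return min(n**2 / (c @ Pinv @ c) for c, n in zip(C, N))

# Example: P = I with boundaries x1 = 2 and x2 = 3 -> Vbar = min(4, 9) = 4
Vbar = min_level_set(np.eye(2), np.array([[1.0, 0.0], [0.0, 1.0]]),
                     np.array([2.0, 3.0]))
print(Vbar)  # -> 4.0
```

In practice this computation is repeated over a grid of λ values to tabulate V̄(λ).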

3 Gain-scheduling

Consider the feasibility region Γ determined by

Γ = {(x, λ) ∈ ℝ^n × (0, 1] : x^T P(λ)x ≤ V̄(λ)}.

By definition of V̄(λ), the fixed-parameter controller Ju + K(λ0)x = 0 yields closed-loop convergence without constraint violation for any initial condition x0 such that (x0, λ0) ∈ Γ. Our gain-scheduling algorithm will determine an adaptation rule λ̇ ≥ 0 and the accompanying control law such as to maximize λ̇ along the closed-loop solutions and satisfy the constraints (1.2), while ensuring the closed-loop invariance of Γ. Invariance of Γ will imply that the adaptation can be stopped at any time, the convergence of x(t) to the origin being then guaranteed by the preceding argument.

In the explicit case u = −K(λ)x, invariance of the feasible region Γ guarantees that the constraints are satisfied along the closed-loop solutions because the definition of V̄(λ) implies that Lx − MK(λ)x ≤ N when (x, λ) ∈ Γ. Invariance of Γ and satisfaction of the constraints is then guaranteed by the feedback rule

λ(x(t)) = max{η ∈ (0, 1] : V(x, η) ≤ V̄(η)}   (3.7)

or through the adaptation rule

max λ̇  s.t.  d/dt (V(x, λ) − V̄(λ)) ≤ 0 if V(x, λ) = V̄(λ).   (3.8)

The feedback rule (3.7) was proposed by Megretski [5] in the particular case of input magnitude constraints, while (3.8) will be used for comparison with the implicit gain-scheduling developed below. Rewriting the differential constraint in (3.8) as

(∂V/∂x) ẋ + (∂V/∂λ − dV̄/dλ) λ̇ ≤ 0,

we see that, for an initial condition (x0, λ0) satisfying V(x0, λ0) = V̄(λ0), the adaptation rule is uniquely determined as

λ̇ = −(∂V/∂λ − dV̄/dλ)^(−1) (∂V/∂x) ẋ   (3.9)

under the monotonicity assumption

∂V(x, λ)/∂λ − dV̄(λ)/dλ > 0.   (3.10)

Assumption (3.10) guarantees a continuous evolution of λ(t), in which case the feedback rule (3.7) is just the integral form of the adaptation rule (3.8) and expresses that the closed-loop solution (x(t), λ(t)) will stay on the boundary of Γ until the target λf = 1 is reached. It is worthwhile noting that, even in the absence of the monotonicity condition (3.10), both the feedback rule (3.7) and the adaptation rule (3.8) guarantee a monotonic evolution of λ(t). This is because λ̇ = 0 is a feasible solution of (3.8) at any point of Γ.

The feedback rule (3.7) is no longer valid in the case of an implicit gain-scheduling J = 0. The control law enforcing the invariance condition K(λ)x = 0 is given by

u = (1 / (K(λ)B)) (−K(λ)Ax − (∂K/∂λ) x λ̇).   (3.11)

Because of the additional term (∂K/∂λ) x λ̇ in (3.11), the constraints are no longer guaranteed to be satisfied when (x, λ) ∈ Γ. To ensure closed-loop invariance of Γ and satisfaction of the constraints, the adaptation rule λ̇ must now be determined as the solution of the pointwise maximization

max λ̇  s.t.
  d/dt (V(x, λ) − V̄(λ)) ≤ 0 if V(x, λ) = V̄(λ)
  Lx + (M / (K(λ)B)) (−K(λ)Ax − (∂K/∂λ) x λ̇) ≤ N   (3.12)
  λ̇ ≤ λ̇max.

It must again be emphasized that the solution λ̇ = 0 is feasible at any point of Γ, which ensures that the solution of (3.12) is non-negative. The bound λ̇max (> 0) is arbitrary, but prevents jumps in the evolution of λ(t) and guarantees that the control (3.11) is well-defined.

Under normal circumstances, the gain-schedulings will allow the parameter λ to converge in finite time to the target λf = 1, eventually leading to closed-loop convergence of the solution x(t) with the fixed controller Ju + K(1)x = 0. Alternatively, it may happen that λ never reaches 1, but converges to some λ̄ ≤ 1. The next theorem guarantees closed-loop convergence of x(t) to the origin in all cases.

Theorem 1 Consider a family of Lyapunov functions

V(x, λ) = x^T P(λ)x,  λ ∈ (0, 1],  P(λ) > 0,

whose time derivative, with λ fixed, along the solutions of the linear system ẋ = Ax + Bu is rendered negative definite by the explicit control law u = −K(λ)x or through the invariance condition K(λ)x = 0. Then, the feedback rule (3.7) with

u = −K(λ)x   (3.13)

guarantees finite-time convergence of λ(t) to 1 and convergence of x(t) to the origin for any solution with initial condition (x0, λ0) satisfying V(x0, λ0) ≤ V̄(λ0). Likewise, the adaptation rule (3.12) with

u = (1 / (K(λ)B)) (−K(λ)Ax − (∂K/∂λ) x λ̇)   (3.14)

guarantees convergence to the origin of x(t) for any initial condition (x0, λ0) satisfying V(x0, λ0) ≤ V̄(λ0) and K(λ0)x0 = 0.

Proof. See [2].
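The feedback rule (3.7) can be evaluated numerically by scanning η downward from 1 and keeping the first feasible value. The sketch below uses a toy scalar family; V and V̄ are placeholders for illustration, not the paper's functions:

```python
import numpy as np

def schedule_lambda(x, V, Vbar, n_grid=2000):
    """Feedback rule (3.7): lambda(x) = max{eta in (0,1] : V(x,eta) <= Vbar(eta)}.
    Scans a decreasing grid; raises if x lies outside the feasibility region."""
    for eta in np.linspace(1.0, 1e-3, n_grid):
        if V(x, eta) <= Vbar(eta):
            return float(eta)
    raise ValueError("x lies outside the feasibility region Gamma")

# Toy scalar family: V(x, l) = l * x^2 with a constant bound Vbar = 1,
# so lambda(x) = min(1, 1/x^2) up to the grid resolution.
lam = schedule_lambda(1.2, lambda x, l: l * x * x, lambda l: 1.0)
```

Under the monotonicity assumption (3.10) the feasible set of η is an interval, so the first feasible grid point found from above is the maximizer up to grid resolution.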

4 Input magnitude constraint

The construction of a one-parameter family of Lyapunov functions is classical for linear systems ẋ = Ax + Bu subject to the input constraint

|u| ≤ umax.

Assuming null controllability of the pair (A, B), several authors have proposed the quadratic family V(x, λ) = x^T P(λ)x generated from the Riccati equation

P(λ)A + A^T P(λ) − P(λ)BB^T P(λ) = −Q(λ),   (4.15)

with λ ∈ (0, ∞), Q(λ) > 0 and dQ/dλ > 0 (see for instance [4, 8]).

The choice u = −B^T P(λ)x corresponds to an explicit specification of the control law (J = 1) and leads to the gain-scheduling algorithm proposed by Megretski in [5]. The solution obtained in [5] is

V̄(λ) = umax² / (B^T P(λ)B)

and λ(x) = max{η ∈ (0, 1] : V(x, η) ≤ V̄(η)}, which indeed corresponds to the solution of (3.9) thanks to the monotonicity conditions ∂P/∂λ > 0 and ∂V̄/∂λ < 0.

The implicit specification of the control law through the invariance condition B^T P(λ)x = 0 (J = 0) leads to the gain-scheduling proposed in our earlier work [1]. The maximal admissible level set for a given λ > 0 is shown to be

V̄(λ) = umax² (B^T PB)³ / [(B^T PAP⁻¹A^T PB)(B^T PB) − (B^T PAB)²].

For initial conditions that cannot satisfy B^T P(λ)x0 = 0 for some λ, a reaching phase is first implemented by the control u = −sign(B^T P(λ0)x) with λ0 small and fixed, which forces the convergence of the solution in finite time to a region of the state space where the constraints B^T P(λ0)x = 0 and V(x, λ0) ≤ V̄(λ0) are satisfied. Then the control u and the adaptation rule are directly obtained from the pointwise optimization problem (3.12). Comparisons of the two gain-scheduled algorithms on the double and triple integrators suggest that the implicit gain-scheduling usually results in faster convergence of the closed-loop solutions [1].

5 Input magnitude and rate constraints

Adding the rate constraint

|u̇| ≤ u̇max

to the input constraint considered in the previous section, we need to construct a one-parameter family of Lyapunov functions for the extended state-space model

ẋ = Ax + Bu
u̇ = v.

A simple choice for the Lyapunov function, relying on the construction of the previous section, is the "backstepping" augmentation [6]

V((x, u), λ) = x^T P(λ)x + (u + B^T P(λ)x)².

A family of controllers

Jv = −K(λ)(x, u)

must be constructed such that the time derivative (with λ fixed)

V̇ = x^T PAx + x^T A^T Px + 2 x^T PBu + 2 (u + B^T Px)(v + B^T PAx + B^T PBu)
  = −x^T Qx − x^T PBB^T Px + 2 (u + B^T Px)(v + B^T PAx + B^T PBu + B^T Px)

is rendered negative. This is accomplished with the explicit controller (J = 1)

v = −K(λ)(x, u) = −(B^T PAx + B^T PBu + B^T Px + k(u + B^T Px))   (5.16)

with k > 0, or through the implicit specification (J = 0)

u + B^T P(λ)x = 0,   (5.17)

which corresponds to K(λ)(x, u) = 0 for the limit case k = +∞.

For the explicit gain-scheduling based on (5.16), one obtains

λ(x(t)) = max{η ∈ (0, 1] : V(x, η) ≤ V̄(η)}

where V̄(λ) = min(V̄1(λ), V̄2(λ)), with V̄1 corresponding to the minimal level set inside which the magnitude constraint is satisfied, and V̄2 corresponding to the minimal level set inside which the rate constraint is satisfied:

V̄1(λ) = min_x V((x, umax), λ)
V̄2(λ) = min_{(x,u)} V((x, u), λ)  s.t.  K(λ)(x, u) = u̇max.

The control v and the adaptation rule λ̇ are then obtained from the pointwise optimization (3.12).

The implicit gain-scheduling (5.17) requires no "reaching phase" if one assumes that the initial control variable u can be freely initialized at the value −B^T P(λ)x. For λ > 0 fixed, invariance of the manifold u = −B^T P(λ)x then imposes

v = −B^T P(λ)Ax − B^T P(λ)Bu = −B^T P(λ)(A − BB^T P(λ))x = G^T(λ)x

so that V̄1 and V̄2 are now replaced by

V̄1(λ) = min_{(x,u)} x^T P(λ)x + (u + B^T P(λ)x)²  s.t.  u + B^T P(λ)x = 0, u = umax,

which reduces to

V̄1(λ) = min_x x^T P(λ)x  s.t.  B^T P(λ)x = umax

and, similarly,

V̄2(λ) = min_x x^T P(λ)x  s.t.  G^T(λ)x = u̇max,

whose solutions are

V̄1(λ) = umax² / (B^T P(λ)B)  and  V̄2(λ) = u̇max² / (G^T(λ)P⁻¹(λ)G(λ)).

The application of these adaptation schemes is now illustrated on a simple example.

Example On Figure 2, we compare the efficiency of different algorithms for the control of the double integrator with control rate and amplitude constraints:

ẋ1 = x2
ẋ2 = u
|u| ≤ 1, |u̇| ≤ 1

with the extension u̇ = v. Solving the Riccati equation (4.15) with

Q(λ) = [λ⁴ 0; 0 λ²]

results in the Lyapunov matrix

P(λ) = [√3 λ³ λ²; λ² √3 λ]

and the family of low-gain controls is then

u = −B^T P(λ)x = −λ²x1 − √3 λ x2,  λ > 0,

which is a typical low-gain control for second-order systems. We arbitrarily consider that the target behavior of the closed-loop system is for λ = 1. In this case, the explicit controller (5.16) (with J = 1 and k = 1) is

v = −λ²(x2 + 2x1) − √3 λ(u + 2x2) − u

and the implicit controller specification (J = 0 and k = +∞) yields

u + λ²x1 + √3 λ x2 = 0.

V̄(λ) is calculated and drawn on Figure 1 both for the explicit and the implicit case. We see that, for all λ between 0 and 1, V̄ is larger for the implicit scheduling. This explains why we expect faster convergence with the implicit gain-scheduling.

This expectation is confirmed by the simulations; three controllers are compared on Figure 2:

(i) a fixed low-gain explicit controller (dotted line curves);

(ii) a gain-scheduling based on the explicit controller (dash-dotted line curves);
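For the double integrator data of this example, the closed-form P(λ) can be checked numerically against the Riccati equation (4.15), together with the low-gain control and the level-set value V̄1(λ) = umax²/(B^T P(λ)B); a sketch:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])

def P_mat(lam):
    """P(lam) solving the Riccati equation (4.15) with Q(lam) = diag(lam^4, lam^2)."""
    s3 = np.sqrt(3.0)
    return np.array([[s3 * lam**3, lam**2],
                     [lam**2,      s3 * lam]])

def riccati_residual(lam):
    """P A + A^T P - P B B^T P + Q; identically zero for P_mat above."""
    P, Q = P_mat(lam), np.diag([lam**4, lam**2])
    return P @ A + A.T @ P - P @ B @ B.T @ P + Q

def low_gain_u(x, lam):
    """Low-gain law u = -B^T P(lam) x = -lam^2 x1 - sqrt(3) lam x2."""
    return -(B.T @ P_mat(lam) @ x).item()

def Vbar1(lam, u_max=1.0):
    """Magnitude-constraint level set u_max^2 / (B^T P(lam) B)."""
    return u_max**2 / (B.T @ P_mat(lam) @ B).item()

print(abs(riccati_residual(0.7)).max())   # ~0 up to rounding
```

The residual vanishes for every λ > 0, confirming the closed-form Lyapunov matrix of the example.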

[Figure 2 panels: (t, x1), (t, x2), (t, u), (t, v), (t, V(x, λ)), (t, λ)]
Figure 1: Evolution of V¯ (λ) for the explicit (dash-dotted line) and the implicit schedulings (solid line)

(iii) a gain-scheduling based on the implicit controller (solid line curves).

For the initial condition x0 = (−10, 0), we choose u0 = −B^T P(λ0)x0 = 0.57 (with λ0 the initial value of λ for the implicit algorithm), which ensures that the implicit scheduling is initialized on the manifold u + B^T P(λ)x = 0. As expected, we see that the initial λ is larger for the implicit gain-scheduling, and that it increases faster. During the whole simulation, V̄ = V̄2 in the explicit scheduling, which means that the rate constraint is the limiting value. In the implicit scheduling, V̄ = V̄1 until λ = 0.81, and V̄ = V̄2 afterwards; that is, umax is first the limiting value, and then u̇max. This transition is especially visible in the change of slope at λ = 0.81 on Figure 1 and in the first discontinuity in the v graph in Figure 2. The second discontinuity is due to the interruption of the adaptation, which eliminates the B^T (∂P/∂λ) x λ̇ term in the expression of v. Figure 2 shows that the implicit scheduling allows for a higher peak for x2, which accelerates the convergence of x1 to the origin.
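Controller (i), the fixed low-gain design, is easy to reproduce qualitatively with a forward-Euler simulation; λ0 = 0.3 below is an illustrative choice, not necessarily the value used for the figures:

```python
import numpy as np

def simulate_low_gain(x0, lam=0.3, dt=1e-3, T=40.0):
    """Euler simulation of x1' = x2, x2' = u with the fixed low-gain law
    u = -lam^2 x1 - sqrt(3) lam x2; returns the trajectory and peak |u|."""
    x = np.array(x0, dtype=float)
    traj, u_peak = [x.copy()], 0.0
    for _ in range(int(T / dt)):
        u = -lam**2 * x[0] - np.sqrt(3.0) * lam * x[1]
        u_peak = max(u_peak, abs(u))
        x = x + dt * np.array([x[1], u])
        traj.append(x.copy())
    return np.array(traj), u_peak

traj, u_peak = simulate_low_gain([-10.0, 0.0])
# With lam = 0.3 and x0 = (-10, 0), u(0) = 0.9, so |u| <= 1 is respected
```

The state converges, but slowly; the gain-scheduled controllers (ii) and (iii) recover performance by letting λ grow online toward 1.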

6 Conclusion

In this paper, we have presented a scheduling method that allows for the satisfaction of both stability and performance specifications for the control of linear systems subject to affine constraints. The Lyapunov-based scheduling provides online interpolation between an initial controller, chosen from stability specifications, and a target controller, chosen for local performance. For a given family of Lyapunov functions, two schedulings have been compared: an explicit gain-scheduling based on the control law u = −K(λ)x and an implicit gain-scheduling based on the invariance condition K(λ)x = 0. The algorithms have been illustrated in the case of input magnitude and rate constraints.


Figure 2: Solution of the controlled double integrator with amplitude and rate constraint on the control variable (umax = 1 and vmax = 1). The application of a low-gain control law without gain-scheduling (dotted line) is compared to a control law with explicit (dash-dotted line) and implicit gain-scheduling (solid line)

References

[1] F. Grognard, R. Sepulchre, G. Bastin, "Improving the performance of low-gain designs for bounded control of linear systems", Proceedings of MTNS 2000, Perpignan.

[2] F. Grognard, R. Sepulchre, G. Bastin, "Control of linear systems with affine constraints: a gain-scheduling approach", CESAME Internal Report 2000-23, Université catholique de Louvain, Belgium.

[3] Z. Lin, "Semi-global stabilization of linear systems with position and rate-limited actuators", Systems & Control Letters, vol. 30, no. 1, pp. 1-11, 1997.

[4] Z. Lin, A.A. Stoorvogel, A. Saberi, "Output regulation for linear systems subject to input saturation", Automatica, vol. 32, no. 1, pp. 29-47, 1996.

[5] A. Megretski, "L2 BIBO output feedback stabilization with saturated control", Proceedings of the 13th IFAC World Congress, San Francisco, 1996, vol. D, pp. 435-440.

[6] R. Sepulchre, M. Janković, P.V. Kokotović, Constructive Nonlinear Control, Springer-Verlag, 1996.

[7] A. Stoorvogel, A. Saberi, "Output regulation of linear plants with actuators subject to amplitude and rate constraints", Int. Journal of Robust and Nonlinear Control, vol. 9, no. 10, pp. 631-657, 1999.

[8] A. Teel, "Semiglobal stabilizability of linear null controllable systems with input nonlinearities", IEEE Trans. on Automatic Control, vol. 40, no. 1, pp. 96-100, 1995.
