
Discrete Control Systems

TAEYOUNG LEE(1), MELVIN LEOK(2), HARRIS MCCLAMROCH(1)
(1) Department of Aerospace Engineering, University of Michigan, Ann Arbor, USA
(2) Department of Mathematics, Purdue University, West Lafayette, USA


Article Outline

Glossary
Definition of the Subject
Introduction
Discrete Lagrangian and Hamiltonian Mechanics
Optimal Control of Discrete Lagrangian and Hamiltonian Systems
Controlled Lagrangian Method for Discrete Lagrangian Systems
Future Directions
Acknowledgments
Bibliography

Glossary

Discrete variational mechanics: A formulation of mechanics in discrete time that is based on a discrete analogue of Hamilton's principle, which states that the system takes a trajectory for which the action integral is stationary.

Geometric integrator: A numerical method for obtaining numerical solutions of differential equations that preserves geometric properties of the continuous flow, such as symplecticity, momentum preservation, and the structure of the configuration space.

Lie group: A differentiable manifold with a group structure where the composition is differentiable. The corresponding Lie algebra is the tangent space to the Lie group based at the identity element.

Symplectic: A map is said to be symplectic if, given any initial volume in phase space, the sum of the signed projected volumes onto each position-momentum subspace is invariant under the map. One consequence of symplecticity is that the map is volume-preserving as well.

Definition of the Subject

Discrete control systems, as considered here, refer to the control theory of discrete-time Lagrangian or Hamiltonian systems. These discrete-time models are based on a discrete variational principle, and are part of the broader field of geometric integration. Geometric integrators are numerical integration methods that preserve geometric properties of continuous systems, such as conservation of the symplectic form, momentum, and energy. They also guarantee that the discrete flow remains on the manifold on which the continuous system evolves, an important property in the case of rigid-body dynamics.

In nonlinear control, one typically relies on differential geometric and dynamical systems techniques to prove properties such as stability, controllability, and optimality. More generally, the geometric structure of such systems plays a critical role in the nonlinear analysis of the corresponding control problems. Despite the critical role of geometry and mechanics in the analysis of nonlinear control systems, nonlinear control algorithms have typically been implemented using numerical schemes that ignore the underlying geometry. The field of discrete control systems aims to address this deficiency by restricting the approximation to the choice of a discrete-time model, and by developing an associated control theory that does not introduce any additional approximation. In particular, this involves the construction of a control theory for discrete-time models based on geometric integrators that yields numerical implementations of nonlinear and geometric control algorithms that preserve the crucial underlying geometric structure.

Introduction

The dynamics of Lagrangian and Hamiltonian systems have unique geometric properties: the Hamiltonian flow is symplectic, the total energy is conserved in the absence of non-conservative forces, and the momentum maps associated with symmetries of the system are preserved. Many interesting dynamics evolve on a non-Euclidean space. For example, the configuration space of a spherical pendulum is the two-sphere, and the configuration space of rigid body attitude dynamics has a Lie group structure, namely the special orthogonal group. These geometric features determine the qualitative behavior of the system, and serve as a basis for theoretical study.

Geometric numerical integrators are numerical integration algorithms that preserve structures of the continuous dynamics such as invariants, symplecticity, and the configuration manifold (see [14]). The exact geometric properties of the discrete flow not only generate improved qualitative behavior, but also provide accurate and efficient numerical techniques.
In this article, we view a geometric integrator as an intrinsically discrete dynamical system, instead of concentrating on the numerical approximation of a continuous trajectory. Numerical integration methods that preserve the symplecticity of a Hamiltonian system have been studied (see [28,36]). The coefficients of a Runge-Kutta method can be carefully chosen to satisfy a symplecticity criterion and order conditions so as to obtain a symplectic Runge-Kutta method. However, it can be difficult to construct such integrators, and it is not guaranteed that other invariants of the system, such as a momentum map, are preserved. Alternatively, variational integrators are constructed by discretizing Hamilton's principle, rather than discretizing the continuous Euler-Lagrange equation (see [31,34]). The resulting integrators have the desirable property that they are symplectic and momentum preserving, and they exhibit good energy behavior for exponentially long times (see [2]). Lie group methods are numerical integrators that preserve the Lie group structure of the configuration space (see [18]). Recently, these two approaches have been unified to obtain Lie group variational integrators that preserve the geometric properties of the dynamics as well as the Lie group structure of the configuration space without the use of local charts, reprojection, or constraints (see [26,29,32]).

Optimal control problems involve finding a control input such that a certain optimality objective is achieved under prescribed constraints. An optimal control problem that minimizes a performance index is described by a set of differential equations, which can be derived using Pontryagin's maximum principle. Discrete optimal control problems involve finding a control input for a discrete dynamic system such that an optimality objective is achieved with prescribed constraints. Optimality conditions are derived from the discrete equations of motion, and are described by a set of discrete equations. This approach is in contrast to traditional techniques, where a discretization appears only at the last stage in order to solve the optimality conditions numerically. Discrete mechanics and optimal control approaches determine optimal control inputs and trajectories more accurately with less computational load (see [19]). Combined with an indirect optimization technique, they are substantially more efficient (see [17,23,25]).

The geometric approach to mechanics can provide a theoretical basis for innovative control methodologies in geometric control theory. For example, these techniques allow the attitude of satellites to be controlled using changes in their shape, as opposed to chemical propulsion.
While the geometric structure of mechanical systems plays a critical role in the construction of geometric control algorithms, these algorithms have typically been implemented using numerical schemes that ignore the underlying geometry. By applying geometric control algorithms to discrete mechanics models that preserve geometric properties, we obtain an exact numerical implementation of the geometric control theory. In particular, the method of controlled Lagrangian systems is based on the idea of adopting a feedback control to realize a modification of either the potential energy or the kinetic energy, referred to as potential shaping or kinetic shaping, respectively. These ideas are applied to construct a real-time digital feedback controller that stabilizes the inverted equilibrium of the cart-pendulum (see [10,11]).

In this article, we will survey discrete Lagrangian and Hamiltonian mechanics, and their applications to optimal control and feedback control theory.

Discrete Lagrangian and Hamiltonian Mechanics

Mechanics studies the dynamics of physical bodies acting under forces and potential fields. In Lagrangian mechanics, the trajectory of the object is derived by finding the path that extremizes the integral of a Lagrangian over time, called the action integral. In many classical problems, the Lagrangian is chosen as the difference between kinetic energy and potential energy. The Legendre transformation provides an alternative description of mechanical systems, referred to as Hamiltonian mechanics.

Discrete Lagrangian and Hamiltonian mechanics has been developed by reformulating the theorems and the procedures of Lagrangian and Hamiltonian mechanics in a discrete time setting (see, for example, [31]). Therefore, discrete mechanics has a parallel structure with the mechanics described in continuous time, as summarized in Fig. 1 for Lagrangian mechanics. In this section, we describe discrete Lagrangian mechanics in more detail, and we derive discrete Euler-Lagrange equations for several mechanical systems.

Consider a mechanical system on a configuration space Q, which is the space of possible positions. The Lagrangian depends on the position and velocity, which are elements of the tangent bundle to Q, denoted by TQ. Let L : TQ \to \mathbb{R} be the Lagrangian of the system. The discrete Lagrangian L_d : Q \times Q \to \mathbb{R} is an approximation to the exact discrete Lagrangian,

    L_d^{exact}(q_0, q_1) = \int_0^h L(q_{01}(t), \dot{q}_{01}(t)) \, dt ,    (1)

where q_{01}(0) = q_0, q_{01}(h) = q_1, and q_{01}(t) satisfies the Euler-Lagrange equation in the time interval (0, h). A discrete action sum \mathcal{G}_d : Q^{N+1} \to \mathbb{R}, analogous to the action integral, is given by

    \mathcal{G}_d(q_0, q_1, \ldots, q_N) = \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}) .    (2)

The discrete Hamilton's principle states that \delta\mathcal{G}_d = 0 for any \delta q_k, which yields the discrete Euler-Lagrange (DEL) equation,

    D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0 .    (3)
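In a computational setting, the DEL equation (3) can be solved step by step as a root-finding problem in q_{k+1}. The following sketch, whose helper names and test Lagrangian are our own (not from the text), approximates the slot derivatives D_1, D_2 by central differences and applies a scalar Newton iteration; the illustrative discrete Lagrangian is a midpoint-rule approximation for L(q, \dot{q}) = \dot{q}^2/2 + cos(q), a pendulum in angle coordinates.

```python
import math

def del_step(Ld, q_prev, q_curr, eps=1e-5, tol=1e-8):
    """Solve the DEL equation (3), D2 Ld(q_prev, q_curr) + D1 Ld(q_curr, q_next) = 0,
    for q_next by Newton's method; D1, D2 are central-difference slot derivatives."""
    def D1(a, b):
        return (Ld(a + eps, b) - Ld(a - eps, b)) / (2 * eps)
    def D2(a, b):
        return (Ld(a, b + eps) - Ld(a, b - eps)) / (2 * eps)
    c = D2(q_prev, q_curr)            # fixed during the solve
    q_next = 2 * q_curr - q_prev      # initial guess: constant velocity
    for _ in range(50):
        r = c + D1(q_curr, q_next)
        if abs(r) < tol:
            break
        dr = (D1(q_curr, q_next + eps) - D1(q_curr, q_next - eps)) / (2 * eps)
        q_next -= r / dr
    return q_next

# illustration (hypothetical example, not from the text): midpoint discrete
# Lagrangian for L(q, qdot) = qdot^2/2 + cos(q)
h = 0.05
Ld = lambda a, b: h * (0.5 * ((b - a) / h) ** 2 + math.cos(0.5 * (a + b)))
qs = [0.5, 0.5]                       # released from rest at q = 0.5
for _ in range(500):
    qs.append(del_step(Ld, qs[-2], qs[-1]))
```

As expected of a variational integrator, the computed swing stays bounded near the initial amplitude over many periods instead of decaying or growing.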


Discrete Control Systems, Figure 1 Procedures to derive continuous and discrete equations of motion

This yields a discrete Lagrangian flow map (q_{k-1}, q_k) \mapsto (q_k, q_{k+1}). The discrete Legendre transformation, which from a pair of positions (q_0, q_1) gives a position-momentum pair (q_0, p_0) = (q_0, -D_1 L_d(q_0, q_1)), provides a discrete Hamiltonian flow map in terms of momenta. The discrete equations of motion, referred to as variational integrators, inherit the geometric properties of the continuous system.

Many interesting Lagrangian and Hamiltonian systems, such as rigid bodies, evolve on a Lie group. Lie group variational integrators preserve the nonlinear structure of the Lie group configuration space as well as the geometric properties of the continuous dynamics (see [29,32]). The basic idea for all Lie group methods is to express the update map for the group elements in terms of the group operation,

    g_1 = g_0 f_0 ,    (4)

where g_0, g_1 \in G are configuration variables in a Lie group G, and f_0 \in G is the discrete update represented by a right group operation on g_0. Since the group element is updated by a group operation, the group structure is preserved automatically, without the need for parametrizations, constraints, or re-projection. In the Lie group variational integrator, the expression for the flow map is obtained from the discrete variational principle on a Lie group, following the same procedure presented in Fig. 1. But the infinitesimal variation of a Lie group element must be carefully expressed to respect the structure of the Lie group. For example, it can be expressed in terms of the exponential map as

    \delta g = \frac{d}{d\epsilon}\Big|_{\epsilon=0} g \exp(\epsilon\eta) = g \eta ,

for a Lie algebra element \eta \in \mathfrak{g}. This approach has been applied to the rotation group SO(3) and to the special Euclidean group SE(3) for the dynamics of rigid bodies (see [24,26,27]). The generalization to an arbitrary Lie group gives the generalized discrete Euler-Poincare (DEP) equation,

    T_e^* L_{f_0} \cdot D_2 L_d(g_0, f_0) - \mathrm{Ad}^*_{f_1^{-1}} \big( T_e^* L_{f_1} \cdot D_2 L_d(g_1, f_1) \big) + T_e^* L_{g_1} \cdot D_1 L_d(g_1, f_1) = 0 ,    (5)

for a discrete Lagrangian on a Lie group, L_d : G \times G \to \mathbb{R}. Here L_f : G \to G denotes the left translation map given by L_f g = fg for f, g \in G, T_g L_f : T_g G \to T_{fg} G is the tangent map for the left translation, and \mathrm{Ad}_g : \mathfrak{g} \to \mathfrak{g} is the adjoint map. A dual map is denoted by a superscript * (see [30] for detailed definitions). We illustrate the properties of discrete mechanics using several mechanical systems, namely a mass-spring system, a planar pendulum, a spherical pendulum, and a rigid body.
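A minimal numerical illustration of the update (4), under our own choice of group and generator: if each update f_k is itself a group element (here a rotation on SO(3) generated by Rodrigues' formula), then the product g_{k+1} = g_k f_k remains on the group to machine precision with no re-projection, even after thousands of steps.

```python
import math, random

def hat(v):
    """so(3) hat map: hat(v) w = v x w."""
    return [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def expm_so3(v):
    """Rodrigues' formula: the exponential of hat(v) is an exact rotation matrix."""
    t = math.sqrt(sum(x * x for x in v))
    if t < 1e-12:
        return [[float(i == j) for j in range(3)] for i in range(3)]
    K = hat([x / t for x in v])
    KK = matmul(K, K)
    return [[float(i == j) + math.sin(t) * K[i][j] + (1.0 - math.cos(t)) * KK[i][j]
             for j in range(3)] for i in range(3)]

# update g_{k+1} = g_k f_k, eq. (4): each factor is a rotation, so the
# product never leaves SO(3)
random.seed(0)
g = [[float(i == j) for j in range(3)] for i in range(3)]
for _ in range(5000):
    g = matmul(g, expm_so3([0.01 * random.gauss(0, 1) for _ in range(3)]))

gT = [[g[j][i] for j in range(3)] for i in range(3)]
defect = max(abs(matmul(gT, g)[i][j] - float(i == j)) for i in range(3) for j in range(3))
det = (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
     - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
     + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
```

The orthogonality defect of g^T g and the deviation of det g from 1 remain at the level of floating-point roundoff; a chart-based update would require re-projection to achieve this.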


Example 1 (Mass-spring System) Consider a mass-spring system, defined by a rigid body that moves along a straight frictionless slot and is attached to a linear spring.

Continuous equation of motion: The configuration space is Q = \mathbb{R}, and the Lagrangian L : \mathbb{R} \times \mathbb{R} \to \mathbb{R} is given by

    L(q, \dot{q}) = \frac{1}{2} m \dot{q}^2 - \frac{1}{2} \kappa q^2 ,    (6)

where q \in \mathbb{R} is the displacement of the body measured from the point where the spring exerts no force. The mass of the body and the spring constant are denoted by m, \kappa \in \mathbb{R}, respectively. The Euler-Lagrange equation yields the continuous equation of motion,

    m \ddot{q} + \kappa q = 0 .    (7)

Discrete equation of motion: Let h > 0 be a discrete time step, and let a subscript k denote the kth discrete variable at t = kh. The discrete Lagrangian L_d : \mathbb{R} \times \mathbb{R} \to \mathbb{R} is an approximation of the integral of the continuous Lagrangian (6) along the solution of (7) over a time step. Here, we choose the following discrete Lagrangian:

    L_d(q_k, q_{k+1}) = h L\!\left( \frac{q_k + q_{k+1}}{2}, \frac{q_{k+1} - q_k}{h} \right) = \frac{1}{2h} m (q_{k+1} - q_k)^2 - \frac{h\kappa}{8} (q_k + q_{k+1})^2 .    (8)

Direct application of the discrete Euler-Lagrange equation to this discrete Lagrangian yields the discrete equations of motion. We develop the discrete equation of motion using the discrete Hamilton's principle to illustrate the principles more explicitly. Let \mathcal{G}_d : \mathbb{R}^{N+1} \to \mathbb{R} be the discrete action sum defined as \mathcal{G}_d = \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}), which approximates the action integral. The infinitesimal variation of the action sum can be written as

    \delta\mathcal{G}_d = \sum_{k=0}^{N-1} \delta q_{k+1} \left[ \frac{m}{h}(q_{k+1} - q_k) - \frac{h\kappa}{4}(q_k + q_{k+1}) \right] + \delta q_k \left[ -\frac{m}{h}(q_{k+1} - q_k) - \frac{h\kappa}{4}(q_k + q_{k+1}) \right] .

Since \delta q_0 = \delta q_N = 0, the summation index can be rewritten to give

    \delta\mathcal{G}_d = -\sum_{k=1}^{N-1} \delta q_k \left\{ \frac{m}{h}(q_{k+1} - 2q_k + q_{k-1}) + \frac{h\kappa}{4}(q_{k+1} + 2q_k + q_{k-1}) \right\} .

From the discrete Hamilton's principle, \delta\mathcal{G}_d = 0 for any \delta q_k. Thus, the discrete equation of motion is given by

    \frac{m}{h}(q_{k+1} - 2q_k + q_{k-1}) + \frac{h\kappa}{4}(q_{k+1} + 2q_k + q_{k-1}) = 0 .    (9)

For a given (q_{k-1}, q_k), we solve the above equation to obtain q_{k+1}. This yields a discrete flow map (q_{k-1}, q_k) \mapsto (q_k, q_{k+1}), and this process is repeated. The discrete Legendre transformation provides the discrete equation of motion in terms of the velocity as

    \left( 1 + \frac{\kappa h^2}{4m} \right) q_{k+1} = h \dot{q}_k + \left( 1 - \frac{\kappa h^2}{4m} \right) q_k ,    (10)

    \dot{q}_{k+1} = \dot{q}_k - \frac{\kappa h}{2m} q_k - \frac{\kappa h}{2m} q_{k+1} .    (11)
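The explicit update (10)-(11) is straightforward to implement. The sketch below (variable names are our own) reproduces the setting of the numerical example in the text, m = 1 kg, \kappa = 1 kg/s^2, q_0 = \sqrt{2} m, h = 0.035, and monitors the total energy over the 200 s simulation.

```python
import math

def step(q, qdot, h, m=1.0, kappa=1.0):
    """One step of the variational integrator (10)-(11) for the
    mass-spring system; kappa is the spring constant."""
    a = kappa * h * h / (4.0 * m)
    q_next = (h * qdot + (1.0 - a) * q) / (1.0 + a)              # eq. (10)
    qdot_next = qdot - (kappa * h / (2.0 * m)) * (q + q_next)    # eq. (11)
    return q_next, qdot_next

def energy(q, qdot, m=1.0, kappa=1.0):
    return 0.5 * m * qdot ** 2 + 0.5 * kappa * q ** 2

h = 0.035
q, qdot = math.sqrt(2.0), 0.0        # total energy E = 1 Nm
E0 = energy(q, qdot)
drift = 0.0
for _ in range(int(200 / h)):        # 200 s of simulation
    q, qdot = step(q, qdot, h)
    drift = max(drift, abs(energy(q, qdot) - E0))
```

For this linear system the scheme preserves the total energy to roundoff, consistent with the mean variation of order 10^{-13} Nm reported in the text.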

For a given (q_k, \dot{q}_k), we compute q_{k+1} and \dot{q}_{k+1} by (10) and (11), respectively. This yields a discrete flow map (q_k, \dot{q}_k) \mapsto (q_{k+1}, \dot{q}_{k+1}). It can be shown that this variational integrator has second-order accuracy, which follows from the fact that the discrete action sum is a second-order approximation of the action integral.

Numerical example: We compare the computational properties of the discrete equations of motion given by (10) and (11) with a 4(5)th order variable step size Runge-Kutta method. We choose m = 1 kg and \kappa = 1 kg/s^2, so that the natural frequency is 1 rad/s. The initial conditions are q_0 = \sqrt{2} m, \dot{q}_0 = 0, and the total energy is E = 1 Nm. The simulation time is 200 sec, and the step size h = 0.035 of the discrete equations of motion is chosen such that the CPU times are the same for both methods. Figure 2a shows the computed total energy. The variational integrator preserves the total energy well; the mean variation is 2.7327 x 10^{-13} Nm. But there is a notable dissipation of the computed total energy for the Runge-Kutta method.

Discrete Control Systems, Figure 2 Computed total energy (RK45: blue, dotted, VI: red, solid)

Example 2 (Planar Pendulum) A planar pendulum is a mass particle connected to a frictionless, one degree-of-freedom pivot by a rigid massless link under a uniform gravitational potential. The configuration space is the one-sphere S^1 = \{ q \in \mathbb{R}^2 \mid \|q\| = 1 \}. While it is common to parametrize the one-sphere by an angle, we develop parameter-free equations of motion in the special orthogonal group SO(2), which is the group of 2 x 2 orthogonal matrices with determinant 1, i.e. SO(2) = \{ R \in \mathbb{R}^{2\times 2} \mid R^T R = I_{2\times 2}, \det[R] = 1 \}. SO(2) is diffeomorphic to the one-sphere. It is also possible to develop global equations of motion on the one-sphere directly, as shown in the next example, but here we focus on the special orthogonal group in order to illustrate the key steps to develop a Lie group variational integrator.

We first exploit the basic structure of the Lie group SO(2). Define a hat map \hat{\cdot}, which maps a scalar \Omega to a 2 x 2 skew-symmetric matrix \hat{\Omega} as

    \hat{\Omega} = \begin{bmatrix} 0 & -\Omega \\ \Omega & 0 \end{bmatrix} .

The set of 2 x 2 skew-symmetric matrices forms the Lie algebra so(2). Using the hat map, we identify so(2) with \mathbb{R}. An inner product on so(2) can be induced from the inner product on \mathbb{R} as \langle \hat{\Omega}_1, \hat{\Omega}_2 \rangle = \frac{1}{2} \mathrm{tr}[\hat{\Omega}_1^T \hat{\Omega}_2] = \Omega_1 \Omega_2 for any \Omega_1, \Omega_2 \in \mathbb{R}. The matrix exponential is a local diffeomorphism from so(2) to SO(2), given by

    \exp\hat{\Omega} = \begin{bmatrix} \cos\Omega & -\sin\Omega \\ \sin\Omega & \cos\Omega \end{bmatrix} .

The kinematics equation for R \in SO(2) can be written in terms of a Lie algebra element as

    \dot{R} = R \hat{\Omega} .    (12)

Continuous equations of motion: The Lagrangian for a planar pendulum, L : SO(2) \times so(2) \to \mathbb{R}, can be written as

    L(R, \hat{\Omega}) = \frac{1}{2} m l^2 \Omega^2 + m g l \, e_2^T R e_2 = \frac{1}{2} m l^2 \langle \hat{\Omega}, \hat{\Omega} \rangle + m g l \, e_2^T R e_2 ,    (13)

where the constant g \in \mathbb{R} is the gravitational acceleration. The mass and the length of the pendulum are denoted by m, l \in \mathbb{R}, respectively. The second expression is used to define a discrete Lagrangian later. We choose the bases of the inertial frame and the body-fixed frame such that the unit vector along the gravity direction in the inertial frame, and the unit vector along the pendulum axis in the body-fixed frame, are represented by the same vector e_2 = [0, 1] \in \mathbb{R}^2. Thus, for example, the hanging attitude is represented by R = I_{2\times 2}. Here, the rotation matrix R \in SO(2) represents the linear transformation from a representation of a vector in the body-fixed frame to the inertial frame.

Since the special orthogonal group is not a linear vector space, the expression for the variation should be carefully chosen. The infinitesimal variation of a rotation matrix R \in SO(2) can be written in terms of its Lie algebra element as

    \delta R = \frac{d}{d\epsilon}\Big|_{\epsilon=0} R \exp(\epsilon\hat{\eta}) = R \hat{\eta} ,    (14)

where \eta \in \mathbb{R}, so that \hat{\eta} \in so(2). The infinitesimal variation of the angular velocity is induced from this expression and (12) as

    \delta\hat{\Omega} = \delta R^T \dot{R} + R^T \delta\dot{R} = (R\hat{\eta})^T \dot{R} + R^T (\dot{R}\hat{\eta} + R\dot{\hat{\eta}}) = -\hat{\eta}\hat{\Omega} + \hat{\Omega}\hat{\eta} + \dot{\hat{\eta}} = \dot{\hat{\eta}} ,    (15)

where we used the equality of mixed partials to compute \delta\dot{R} as d/dt(\delta R). Define the action integral to be \mathcal{G} = \int_0^T L(R, \hat{\Omega}) \, dt. The infinitesimal variation of the action integral is obtained by using (14) and (15). Hamilton's principle then yields the following continuous equations of motion:

    \dot{\Omega} + \frac{g}{l} e_2^T R e_1 = 0 ,    (16)

    \dot{R} = R \hat{\Omega} .    (17)


If we parametrize the rotation matrix as

    R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} ,

these equations are equivalent to

    \ddot{\theta} + \frac{g}{l} \sin\theta = 0 .    (18)

Discrete equations of motion: We develop a Lie group variational integrator on SO(2). Similar to (4), define F_k \in SO(2) such that

    R_{k+1} = R_k F_k .    (19)

Thus, F_k = R_k^T R_{k+1} represents the relative update between two integration steps. If we find F_k \in SO(2), the orthogonal structure is preserved through (19), since the product of orthogonal matrices is also orthogonal. This is a key idea of Lie group variational integrators.

Define the discrete Lagrangian L_d : SO(2) \times SO(2) \to \mathbb{R} to be

    L_d(R_k, F_k) = \frac{1}{2h} m l^2 \langle F_k - I_{2\times 2}, F_k - I_{2\times 2} \rangle + \frac{h}{2} m g l \, e_2^T R_k e_2 + \frac{h}{2} m g l \, e_2^T R_{k+1} e_2
                  = \frac{1}{2h} m l^2 \, \mathrm{tr}[I_{2\times 2} - F_k] + \frac{h}{2} m g l \, e_2^T R_k e_2 + \frac{h}{2} m g l \, e_2^T R_{k+1} e_2 ,    (20)

which is obtained by the approximation h\hat{\Omega}_k \simeq R_k^T (R_{k+1} - R_k) = F_k - I_{2\times 2}, applied to the continuous Lagrangian given by (13). As in the continuous time case, the expressions for the infinitesimal variations should be carefully chosen. The infinitesimal variation of a rotation matrix is the same as (14), namely

    \delta R_k = R_k \hat{\eta}_k ,    (21)

for \eta_k \in \mathbb{R}, and the constrained variation of F_k is obtained from (19) as

    \delta F_k = \delta R_k^T R_{k+1} + R_k^T \delta R_{k+1} = -\hat{\eta}_k F_k + F_k \hat{\eta}_{k+1} = F_k (\hat{\eta}_{k+1} - F_k^T \hat{\eta}_k F_k) = F_k (\hat{\eta}_{k+1} - \hat{\eta}_k) ,    (22)

where we use the fact that F^T \hat{\eta} F = \hat{\eta} for any F \in SO(2) and \hat{\eta} \in so(2).

Define an action sum \mathcal{G}_d : SO(2)^{N+1} \to \mathbb{R} as \mathcal{G}_d = \sum_{k=0}^{N-1} L_d(R_k, F_k). Using (21) and (22), the variation of the action sum is written as

    \delta\mathcal{G}_d = \sum_{k=1}^{N-1} \left\langle \frac{1}{2h} m l^2 \left( F_{k-1} - F_{k-1}^T - F_k + F_k^T \right) - h m g l \, \widehat{e_2^T R_k e_1}, \; \hat{\eta}_k \right\rangle .

From the discrete Hamilton's principle, \delta\mathcal{G}_d = 0 for any \hat{\eta}_k. Thus, we obtain the Lie group variational integrator on SO(2) as

    F_k - F_k^T - F_{k+1} + F_{k+1}^T - \frac{2 h^2 g}{l} \widehat{e_2^T R_{k+1} e_1} = 0 ,    (23)

    R_{k+1} = R_k F_k .    (24)

For a given (R_k, F_k) and R_{k+1} = R_k F_k, (23) is solved to find F_{k+1}. This yields a discrete map (R_k, F_k) \mapsto (R_{k+1}, F_{k+1}). If we parametrize the rotation matrices R_k and F_k with angles \theta_k and \Delta\theta_k, and if we assume that |\Delta\theta_k| \ll 1, these equations are equivalent to

    \frac{1}{h} (\theta_{k+1} - 2\theta_k + \theta_{k-1}) + \frac{h g}{l} \sin\theta_k = 0 .

The discrete version of the Legendre transformation provides the discrete Hamiltonian map as follows:

    F_k - F_k^T = 2 h \hat{\Omega}_k - \frac{h^2 g}{l} \widehat{e_2^T R_k e_1} ,    (25)

    R_{k+1} = R_k F_k ,    (26)

    \Omega_{k+1} = \Omega_k - \frac{h g}{2l} e_2^T R_k e_1 - \frac{h g}{2l} e_2^T R_{k+1} e_1 .    (27)
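Since F_k \in SO(2) is a rotation by some angle \Delta\theta_k with F_k - F_k^T = \widehat{2\sin\Delta\theta_k}, (25) can be solved explicitly with an arcsine. The sketch below (variable names are our own) implements the map (25)-(27) in the setting of the numerical example and monitors the orthogonality of R_k and the total energy E = \frac{1}{2} m l^2 \Omega^2 - m g l \, e_2^T R e_2.

```python
import math

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def mm2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def step(R, Om, h, g=9.81, l=9.81):
    """Discrete Hamiltonian map (25)-(27) on SO(2); since
    F - F^T = hat(2 sin dth) for F = rot(dth), (25) gives dth explicitly."""
    s_k = R[1][0]                                              # e2^T R e1
    dth = math.asin(h * Om - (h * h * g / (2 * l)) * s_k)      # eq. (25)
    R_next = mm2(R, rot(dth))                                  # eq. (26)
    Om_next = Om - (h * g / (2 * l)) * (s_k + R_next[1][0])    # eq. (27)
    return R_next, Om_next

m, l, g, h = 1.0, 9.81, 9.81, 0.03
R, Om = rot(math.pi / 2), 0.0        # theta_0 = pi/2, so E = 0
orth_err, E_err = 0.0, 0.0
for _ in range(int(100 / h)):
    R, Om = step(R, Om, h)
    RtR = mm2([[R[0][0], R[1][0]], [R[0][1], R[1][1]]], R)     # R^T R
    orth_err = max(orth_err, max(abs(RtR[i][j] - (1.0 if i == j else 0.0))
                                 for i in range(2) for j in range(2)))
    E = 0.5 * m * l * l * Om * Om - m * g * l * R[1][1]        # e2^T R e2 = R[1][1]
    E_err = max(E_err, abs(E))
```

The rotation matrix stays orthogonal to roundoff because each update is a product of rotations, and the energy error stays bounded rather than drifting.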

For a given (R_k, \Omega_k), we solve (25) to obtain F_k. Using this, (R_{k+1}, \Omega_{k+1}) is obtained from (26) and (27). This yields a discrete flow map (R_k, \Omega_k) \mapsto (R_{k+1}, \Omega_{k+1}).

Numerical example: We compare the computational properties of the discrete equations of motion given by (25)-(27) with a 4(5)th order variable step size Runge-Kutta method. We choose m = 1 kg, l = 9.81 m. The initial conditions are \theta_0 = \pi/2 rad, \Omega_0 = 0, and the total energy is E = 0 Nm. The simulation time is 1000 sec, and the step size h = 0.03 of the discrete equations of motion is chosen such that the CPU times are identical. Figure 2b shows the computed total energy for both methods. The variational integrator preserves the total energy well. There is no drift in the computed total energy, and the mean variation is 1.0835 x 10^{-2} Nm. But there is a notable dissipation of the computed total energy for the Runge-


Kutta method. Note that the computed total energy would further decrease as the simulation time increases.

Example 3 (Spherical Pendulum) A spherical pendulum is a mass particle connected to a frictionless, two degree-of-freedom pivot by a rigid massless link. The mass particle acts under a uniform gravitational potential. The configuration space is the two-sphere S^2 = \{ q \in \mathbb{R}^3 \mid \|q\| = 1 \}. It is common to parametrize the two-sphere by two angles, but this description of the spherical pendulum has a singularity. Any trajectory near the singularity encounters numerical ill-conditioning. Furthermore, this leads to complicated expressions involving trigonometric functions. Here we develop equations of motion for a spherical pendulum using the global structure of the two-sphere without parametrization.

In the previous example, we developed equations of motion for a planar pendulum using the fact that the one-sphere S^1 is diffeomorphic to the special orthogonal group SO(2). But the two-sphere is not diffeomorphic to a Lie group. Instead, there exists a natural Lie group action on the two-sphere, namely that of the three-dimensional special orthogonal group SO(3), the group of 3 x 3 orthogonal matrices with determinant 1, i.e. SO(3) = \{ R \in \mathbb{R}^{3\times 3} \mid R^T R = I_{3\times 3}, \det[R] = 1 \}. The special orthogonal group SO(3) acts on the two-sphere transitively: for any q_1, q_2 \in S^2, there exists an R \in SO(3) such that q_2 = R q_1. Thus, the discrete update for the two-sphere can be expressed in terms of a rotation matrix, as in (19). This is the key idea in developing discrete equations of motion for a spherical pendulum.

Continuous equations of motion: Let q \in S^2 be the unit vector from the pivot point to the point mass. The Lagrangian for a spherical pendulum can be written as

    L(q, \dot{q}) = \frac{1}{2} m l^2 \, \dot{q} \cdot \dot{q} + m g l \, e_3 \cdot q ,    (28)

where the gravity direction is assumed to be e_3 = [0, 0, 1] \in \mathbb{R}^3. The mass and the length of the pendulum are denoted by m, l \in \mathbb{R}, respectively. The infinitesimal variation of the unit vector q can be written in terms of the vector cross product as

    \delta q = \xi \times q ,    (29)

where \xi \in \mathbb{R}^3 is constrained to be orthogonal to the unit vector, i.e. \xi \cdot q = 0. Using this expression for the infinitesimal variation, Hamilton's principle yields the following continuous equation of motion:

    \ddot{q} + (\dot{q} \cdot \dot{q}) q + \frac{g}{l} \, q \times (q \times e_3) = 0 .    (30)

Since \dot{q} = \omega \times q for some angular velocity \omega \in \mathbb{R}^3 satisfying \omega \cdot q = 0, this can also be equivalently written as

    \dot{\omega} - \frac{g}{l} \, q \times e_3 = 0 ,    (31)

    \dot{q} = \omega \times q .    (32)

These are global equations of motion for a spherical pendulum; they are much simpler than the equations expressed in terms of angles, and they have no singularity.

Discrete equations of motion: We develop a variational integrator for the spherical pendulum defined on S^2. Since the special orthogonal group acts on the two-sphere transitively, we can define the discrete update map for the unit vector as

    q_{k+1} = F_k q_k    (33)

for F_k \in SO(3). The rotation matrix F_k is not uniquely defined by this condition, since \exp(\theta\hat{q}_k), which corresponds to a rotation about the q_k direction, fixes the vector q_k. Consequently, if F_k satisfies (33), then F_k \exp(\theta\hat{q}_k) does as well. We avoid this ambiguity by requiring that F_k does not rotate about q_k, which can be achieved by letting F_k = \exp(\hat{f}_k), where f_k \cdot q_k = 0. Define a discrete Lagrangian L_d : S^2 \times S^2 \to \mathbb{R} to be

    L_d(q_k, q_{k+1}) = \frac{1}{2h} m l^2 (q_{k+1} - q_k) \cdot (q_{k+1} - q_k) + \frac{h}{2} m g l \, e_3 \cdot q_k + \frac{h}{2} m g l \, e_3 \cdot q_{k+1} .

The variation of q_k is the same as (29), namely

    \delta q_k = \xi_k \times q_k    (34)

for \xi_k \in \mathbb{R}^3 with the constraint \xi_k \cdot q_k = 0. Using this discrete Lagrangian and the expression for the variation, the discrete Hamilton's principle yields the following discrete equations of motion for a spherical pendulum:

    q_{k+1} = \left( h\omega_k + \frac{h^2 g}{2l} \, q_k \times e_3 \right) \times q_k + \left( 1 - \left\| h\omega_k + \frac{h^2 g}{2l} \, q_k \times e_3 \right\|^2 \right)^{1/2} q_k ,    (35)

    \omega_{k+1} = \omega_k + \frac{h g}{2l} \, q_k \times e_3 + \frac{h g}{2l} \, q_{k+1} \times e_3 .    (36)
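The update (35)-(36) preserves the unit length of q_k exactly, because the two terms of (35) are orthogonal whenever the constraint q_k \cdot \omega_k = 0 holds. The following sketch (variable names and simulation horizon are our own) implements the map and checks this numerically.

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def step(q, w, h, g=9.81, l=9.81):
    """Variational integrator (35)-(36) on the two-sphere S^2."""
    qxe3 = cross(q, [0.0, 0.0, 1.0])
    a = [h * w[i] + (h * h * g / (2 * l)) * qxe3[i] for i in range(3)]
    axq = cross(a, q)
    r = math.sqrt(max(0.0, 1.0 - sum(x * x for x in a)))
    q_next = [axq[i] + r * q[i] for i in range(3)]             # eq. (35)
    qnxe3 = cross(q_next, [0.0, 0.0, 1.0])
    w_next = [w[i] + (h * g / (2 * l)) * (qxe3[i] + qnxe3[i]) for i in range(3)]  # eq. (36)
    return q_next, w_next

h = 0.05
q = [math.sqrt(3) / 2, 0.0, -0.5]
w = [0.1 * math.sqrt(3), 0.0, 0.3]    # satisfies q . w = 0
len_err = 0.0
for _ in range(int(200 / h)):
    q, w = step(q, w, h)
    len_err = max(len_err, abs(math.sqrt(sum(x * x for x in q)) - 1.0))
```

Besides the unit length, the third component of \omega is conserved to roundoff, since (q \times e_3) \cdot e_3 = 0 in (36).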

Since an explicit solution for F_k \in SO(3) can be obtained in this case, the rotation matrix F_k does not appear in the equations of motion. This variational integrator on S^2 exactly preserves the unit length of q_k, the constraint


Discrete Control Systems, Figure 3 Numerical simulation of a spherical pendulum (RK45: blue, dotted, VI: red, solid)

q_k \cdot \omega_k = 0, and the third component of the angular velocity, \omega_k \cdot e_3, which is conserved since gravity exerts no moment along the gravity direction e_3.

Numerical example: We compare the computational properties of the discrete equations of motion given by (35) and (36) with a 4(5)th order variable step size Runge-Kutta method for (31) and (32). We choose m = 1 kg, l = 9.81 m. The initial conditions are q_0 = [\sqrt{3}/2, 0, -1/2], \omega_0 = 0.1 \times [\sqrt{3}, 0, 3] rad/sec, and the total energy is E = 0.44 Nm. The simulation time is 200 sec, and the step size h = 0.05 of the discrete equations of motion is chosen such that the CPU times are identical. Figure 3 shows the computed total energy and the unit length errors. The variational integrator preserves the total energy and the structure of the two-sphere well. The mean total energy deviation is 1.5460 x 10^{-4} Nm, and the mean unit length error is 3.2476 x 10^{-15}. But there is a notable dissipation of the computed total energy for the Runge-Kutta method. The Runge-Kutta method also fails to preserve the structure of the two-sphere; its mean unit length error is 1.0164 x 10^{-2}.

Example 4 (Rigid Body in a Potential Field) Consider a rigid body under a potential field that depends on the position and the attitude of the body. The configuration space is the special Euclidean group, which is a semidirect product of the special orthogonal group and Euclidean space, i.e. SE(3) = SO(3) \ltimes \mathbb{R}^3.

Continuous equations of motion: The equations of motion for a rigid body can be developed either from Hamilton's principle (see [26]), in a similar way as in Example 2, or directly from the generalized discrete Euler-Poincare equation given in (5). Here, we summarize the results. Let m \in \mathbb{R} and J \in \mathbb{R}^{3\times 3} be the mass and the moment of inertia matrix of the rigid body. For (R, x) \in SE(3), the linear

transformation from the body-ﬁxed frame to the inertial frame is denoted by the rotation matrix R 2 SO(3), and the position of the mass center in the inertial frame is denoted by a vector x 2 R3 . The vectors ˝; v 2 R3 are the angular velocity in the body-ﬁxed frame, and the translational velocity in the inertial frame, respectively. Suppose that the rigid body acts under a conﬁguration-dependent potential U : SE(3) ! R. The continuous equations of motion for the rigid body can be written as R˙ D R˝ˆ ;

(37)

x˙ D v ;

(38)

J ˝˙ C ˝ J˝ D M ;

(39)

@U ; @x

(40)

m˙v D

where the hat map ˆ is an isomorphism from R3 to 33 skew-symmetric matrices so(3), deﬁned such that xˆ y D x y for any x; y 2 R3 . The moment due to the potential M 2 R3 is obtained by the following relationship: T ˆ D @U R RT @U : M @R @R

(41)

The matrix @U/@R 2 R33 is deﬁned such that [@U/@R] i j [email protected]/@[R] i j for i; j 2 f1; 2; 3g, where the i, jth element of a matrix is denoted by [] i j . Discrete equations of motion: The corresponding discrete equations of motion are given by

b

h J ˝k C

h2 ˆ M k D F k Jd Jd F kT ; 2

R kC1 D R k F k ;

(42) (43)

Discrete Control Systems

h2 @U k ; 2m @x k h h D F kT J˝ k C F kT M k C M kC1 ; 2 2 h Uk h @U kC1 D mv k ; 2 @x k 2 @x kC1

x kC1 D x k C hv k

(44)

J˝ kC1

(45)

mv kC1

(46)

where $J_d \in \mathbb{R}^{3\times3}$ is a non-standard moment of inertia matrix defined as $J_d = \frac{1}{2}\mathrm{tr}[J]\, I_{3\times3} - J$. For a given $(R_k, x_k, \Omega_k, v_k)$, we solve the implicit equation (42) to find $F_k \in SO(3)$. Then, the configuration at the next step, $(R_{k+1}, x_{k+1})$, is obtained by (43) and (44), and the moment and force $M_{k+1}$, $\partial U_{k+1}/\partial x_{k+1}$ can be computed. The velocities $\Omega_{k+1}, v_{k+1}$ are obtained from (45) and (46). This defines a discrete flow map, $(R_k, x_k, \Omega_k, v_k) \mapsto (R_{k+1}, x_{k+1}, \Omega_{k+1}, v_{k+1})$, and this process can be repeated. This Lie group variational integrator on $SE(3)$ can be generalized to multiple rigid bodies acting under their mutual gravitational potential (see [26]).

Numerical example: We compare the computational properties of the discrete equations of motion given by (42)–(46) with a 4(5)th order variable step size Runge–Kutta method for (37)–(40). In addition, we compute the attitude dynamics using quaternions on the unit three-sphere $S^3$: the attitude kinematics equation (37) is rewritten in terms of quaternions, and the corresponding equations are integrated by the same Runge–Kutta method. We choose a dumbbell spacecraft, that is, two spheres connected by a rigid massless rod, acting under a central gravitational potential. The resulting system is a restricted full two body problem. The dumbbell spacecraft model has an analytic expression for the gravitational potential, resulting in a nontrivial coupling between the attitude dynamics and the orbital dynamics. As shown in Fig. 4a, the initial conditions are chosen such that the resulting motion is a near-circular orbit


combined with a rotational motion. Figures 4b and 4c show the computed total energy and the orthogonality errors of the rotation matrix. The Lie group variational integrator preserves the total energy and the Lie group structure of $SO(3)$: the mean total energy deviation is $2.5983 \times 10^{-4}$, and the mean orthogonality error is $1.8553 \times 10^{-13}$. In contrast, there is a notable dissipation of the computed total energy, and a notable orthogonality error, for the Runge–Kutta method. The mean orthogonality errors for the Runge–Kutta method are 0.0287 using the kinematics equation with rotation matrices, and 0.0753 using the kinematics equation with quaternions. Thus, the attitude of the rigid body is not accurately computed by the Runge–Kutta methods. It is interesting that the Runge–Kutta method with quaternions, which is generally assumed to have better computational properties than the kinematics equation with rotation matrices, has the larger total energy error and orthogonality error. Since the unit length of the quaternion vector is not preserved in the numerical computations, orthogonality errors arise when the quaternion is converted to a rotation matrix. This suggests that it is critical to preserve the structure of $SO(3)$ in order to study the global characteristics of rigid body dynamics.

The importance of simultaneously preserving the symplectic structure and the Lie group structure of the configuration space in rigid body dynamics can be observed numerically. Lie group variational integrators, which preserve both of these properties, have been compared to methods that preserve only one, or neither, of these properties (see [27]); the Lie group variational integrator exhibits greater numerical accuracy and efficiency.
Due to these computational advantages, the Lie group variational integrator has been used to study the dynamics of the binary near-Earth asteroid (66391) 1999 KW4 in joint work between the University of Michigan and the Jet Propulsion Laboratory, NASA (see [37]).
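Implementing this discrete flow map is mostly mechanical once the implicit equation (42) can be solved for $F_k$. The Python sketch below does this with a Newton iteration on the rotation vector $f$ (writing $F_k = \exp(\hat{f})$) and a finite-difference Jacobian; the solver choice, the function names, and the uniform-gravity potential used in the sanity check are illustrative assumptions, not the implementation used in the references.

```python
import numpy as np

def hat(a):
    # hat map: R^3 -> 3x3 skew-symmetric matrices, hat(a) b = a x b
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def vee(A):
    # vee map: inverse of the hat map (A assumed skew-symmetric)
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def expm_so3(f):
    # Rodrigues' formula for the exponential map so(3) -> SO(3)
    t = np.linalg.norm(f)
    if t < 1e-12:
        return np.eye(3) + hat(f)
    K = hat(f / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def solve_F(a, Jd, tol=1e-14, maxiter=50):
    # Solve hat(a) = F Jd - Jd F^T for F in SO(3), equation (42),
    # via Newton iteration on f with F = exp(hat(f)) and a
    # finite-difference Jacobian.
    f = np.zeros(3)
    def res(f):
        F = expm_so3(f)
        return vee(F @ Jd - Jd @ F.T) - a
    for _ in range(maxiter):
        r = res(f)
        if np.linalg.norm(r) < tol:
            break
        Jac = np.empty((3, 3))
        for i in range(3):
            df = np.zeros(3)
            df[i] = 1e-7
            Jac[:, i] = (res(f + df) - r) / 1e-7
        f = f - np.linalg.solve(Jac, r)
    return expm_so3(f)

def lgvi_step(R, x, Om, v, h, m, J, Jd, dUdx, M):
    # One step of (42)-(46); dUdx(R, x) and M(R, x) supply the force
    # and moment of a user-chosen potential U (assumed inputs).
    F = solve_F(h * (J @ Om) + 0.5 * h**2 * M(R, x), Jd)      # (42)
    R1 = R @ F                                                # (43)
    x1 = x + h * v - (h**2 / (2 * m)) * dUdx(R, x)            # (44)
    Om1 = np.linalg.solve(J, F.T @ (J @ Om)
                          + 0.5 * h * (F.T @ M(R, x) + M(R1, x1)))  # (45)
    v1 = v - (h / (2 * m)) * (dUdx(R, x) + dUdx(R1, x1))      # (46)
    return R1, x1, Om1, v1
```

Iterating `lgvi_step` for a rigid body in a uniform gravity field (so that $M = 0$) keeps $R_k^T R_k$ at the identity to round-off and shows only a bounded total energy oscillation, mirroring the behavior reported for the dumbbell example.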

Discrete Control Systems, Figure 4 Numerical simulation of a dumbbell rigid body (LGVI: red, solid, RK45 with rotation matrices: blue, dash-dotted, RK45 with quaternions: black, dotted)


Optimal Control of Discrete Lagrangian and Hamiltonian Systems

Optimal control problems involve finding a control input such that a certain optimality objective is achieved under prescribed constraints. An optimal control problem that minimizes a performance index is described by a set of differential equations, which can be derived using Pontryagin's maximum principle. The equations of motion for the system are adjoined as constraints using Lagrange multipliers, and necessary conditions for optimality are obtained by the calculus of variations. The solution of the corresponding two point boundary value problem provides the optimal control input. Alternatively, a sub-optimal control law can be obtained by approximating the control input history with finitely many data points.

Discrete optimal control problems involve finding a control input for a given system described by discrete Lagrangian and Hamiltonian mechanics. The control inputs are parametrized by their values at each discrete time step, and the discrete equations of motion are derived from the discrete Lagrange–d'Alembert principle [21],

$\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}) = -\sum_{k=0}^{N-1} \left[ F_d^-(q_k, q_{k+1}) \cdot \delta q_k + F_d^+(q_k, q_{k+1}) \cdot \delta q_{k+1} \right] ,$

which modifies the discrete Hamilton's principle by taking into account the virtual work of the external forces. Discrete optimal control is in contrast to traditional techniques such as collocation, wherein the continuous equations of motion are imposed as constraints at a set of collocation points; here, instead, the discrete dynamics induces constraints on the configuration at each discrete timestep. Any optimal control algorithm can be applied to the discrete Lagrangian or Hamiltonian system. In an indirect method, the discrete optimal control problem can be considered as a multiple stage variational problem: the discrete equations of motion are derived from the discrete variational principle, the corresponding variational integrators are imposed as dynamic constraints to be satisfied by using Lagrange multipliers, and necessary conditions for optimality, expressed as discrete equations for the multipliers, are obtained from a variational principle. In a direct method, the control inputs can be optimized by using parameter optimization tools such as sequential quadratic programming. In either case, the discrete optimal control problem is characterized by discretizing the optimal control problem from the problem formulation stage.
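As a concrete one-dimensional instance of this principle, take $L(q, \dot{q}) = \frac{1}{2} m \dot{q}^2 - V(q)$, the midpoint discrete Lagrangian $L_d(q_k, q_{k+1}) = h\,L((q_k + q_{k+1})/2,\ (q_{k+1} - q_k)/h)$, and discrete forces $F_d^\pm = \frac{h}{2} u_k$. The resulting forced discrete Euler–Lagrange equation is implicit in $q_{k+1}$; the Python sketch below solves it by fixed-point iteration (these discretization choices and function names are illustrative assumptions, not taken from the references):

```python
def forced_del_step(q_prev, q_cur, h, m, dV, u_prev, u_cur, iters=100):
    # Solve the forced discrete Euler-Lagrange equation
    #   m (q_next - 2 q_cur + q_prev)/h
    #     = -(h/2) [V'((q_cur+q_next)/2) + V'((q_prev+q_cur)/2)]
    #       + (h/2) (u_cur + u_prev)
    # for q_next by fixed-point iteration (converges for small h).
    q_next = 2.0 * q_cur - q_prev  # initial guess: free motion
    for _ in range(iters):
        rhs = (-0.5 * h * (dV(0.5 * (q_cur + q_next))
                           + dV(0.5 * (q_prev + q_cur)))
               + 0.5 * h * (u_cur + u_prev))
        q_new = 2.0 * q_cur - q_prev + (h / m) * rhs
        if abs(q_new - q_next) < 1e-14:
            return q_new
        q_next = q_new
    return q_next
```

With $u_k = 0$ and a harmonic potential this reduces to an unforced variational integrator, whose computed energy oscillates in a bounded band instead of drifting.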

This method has substantial computational advantages when used to find an optimal control law. As discussed in the previous section, the discrete dynamics are more faithful to the continuous equations of motion, and consequently more accurate solutions to the optimal control problem are obtained. External control inputs break the Lagrangian and Hamiltonian system structure; for example, the total energy is not conserved for a controlled mechanical system. But the computational superiority of discrete mechanics still holds for controlled systems: it has been shown that the discrete dynamics are more reliable even for controlled systems, as they compute the energy dissipation rate of controlled systems more accurately (see [31]). This feature is extremely important, for example, in computing accurate optimal trajectories for long term spacecraft attitude maneuvers using low energy control inputs.

The discrete dynamics not only provide an accurate optimal control input, but also enable us to find it efficiently. For the indirect optimal control approach, optimal solutions are usually sensitive to small variations of the multipliers. This causes difficulties, such as numerical ill-conditioning, when solving the necessary conditions for optimality expressed as a two point boundary value problem. Sensitivity derivatives along the discrete necessary conditions do not have the numerical dissipation introduced by conventional numerical integration schemes; thus, they are numerically more robust, and the necessary conditions can be solved computationally efficiently. For the direct optimal control approach, optimal control inputs can be obtained by using a larger discrete step size, which requires less computational load. We illustrate the basic properties of discrete optimal control using optimal control problems for the spherical pendulum and the rigid body model presented in the previous section.
Discrete Control Systems, Figure 5 Optimal control of a spherical pendulum

Example 5 (Optimal Control of a Spherical Pendulum) We study an optimal control problem for the spherical pendulum described in Example 3. We assume that an external control moment $u \in \mathbb{R}^3$ acts on the pendulum. Control inputs are parametrized by their values at each time step, and the discrete equations of motion are modified to include the effects of the external control inputs by using the discrete Lagrange–d'Alembert principle. Since the component of the control moment that is parallel to the direction along the pendulum has no effect, we parametrize the control input as $u_k = q_k \times w_k$ for $w_k \in \mathbb{R}^3$. The objective is to transfer the pendulum from a given initial configuration $(q_0, \omega_0)$ to a desired configuration $(q_d, \omega_d)$ during a fixed maneuver time $N$, while minimizing the square of the weighted $l_2$ norm of the control moments:

$\min_{w_k} J = \sum_{k=0}^{N} \frac{h}{2}\, u_k^T u_k = \sum_{k=0}^{N} \frac{h}{2}\, (q_k \times w_k)^T (q_k \times w_k) \,.$
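In miniature, the structure of such problems (controls parametrized at discrete time steps, discrete dynamics, a terminal condition, and a quadratic control cost) can be conveyed with a toy computation. The Python sketch below uses a controlled double integrator, a quadratic penalty for the terminal condition, and plain finite-difference gradient descent; the model, weights, and optimizer are illustrative assumptions, much simpler than the SQP-based approach described in the text:

```python
import numpy as np

def rollout(u, h):
    # Discrete double-integrator dynamics q_{k+1} = 2 q_k - q_{k-1} + h^2 u_k,
    # started at rest: q_0 = q_1 = 0.
    q = [0.0, 0.0]
    for uk in u:
        q.append(2.0 * q[-1] - q[-2] + h * h * uk)
    return q

def cost(u, h, w, q_target):
    # Control effort plus a quadratic penalty enforcing the terminal
    # boundary condition (position q_target, zero terminal velocity).
    q = rollout(u, h)
    v_end = (q[-1] - q[-2]) / h
    return (0.5 * h * float(np.dot(u, u))
            + w * ((q[-1] - q_target) ** 2 + v_end ** 2))

def dmoc_toy(N=10, h=0.1, w=100.0, lr=0.05, iters=3000, q_target=1.0):
    # Optimize the N-1 control parameters by finite-difference
    # gradient descent on the penalized cost.
    u = np.zeros(N - 1)
    eps = 1e-6
    for _ in range(iters):
        c0 = cost(u, h, w, q_target)
        g = np.empty_like(u)
        for i in range(len(u)):
            up = u.copy()
            up[i] += eps
            g[i] = (cost(up, h, w, q_target) - c0) / eps
        u -= lr * g
    return u
```

For a rest-to-rest transfer to $q = 1$, the optimized controls reduce the penalized cost by orders of magnitude relative to the uncontrolled rollout, which is the qualitative behavior the direct approach relies on.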

We solve this optimal control problem by using a direct numerical optimization method. The terminal boundary condition is imposed as an equality constraint, and the $3(N+1)$ control input parameters $\{w_k\}_{k=0}^{N}$ are numerically optimized using sequential quadratic programming. This method is referred to as the DMOC (Discrete Mechanics and Optimal Control) approach (see [19]). Figure 5 shows an optimal solution transferring the spherical pendulum from the hanging configuration $(q_0, \omega_0) = (e_3, 0_{3\times1})$ to the inverted configuration $(q_d, \omega_d) = (-e_3, 0_{3\times1})$ in 1 second. The time step size is $h = 0.033$. Experiments have shown that the DMOC approach can compute optimal solutions using larger step sizes than typical Runge–Kutta methods, and consequently it requires less computational load. In this case, using a second-order accurate Runge–Kutta method, the same optimization code fails, giving error messages of inaccurate and singular gradient computations. It is presumed that the unit length errors of the Runge–Kutta method, shown in Example 3, cause numerical instabilities in the finite-difference gradient computations required by the sequential quadratic programming algorithm.

Example 6 (Optimal Control of a Rigid Body in a Potential Field) We study an optimal control problem for a rigid body, using the dumbbell spacecraft model described in Example 4 (see [25] for details). We assume that an external control force $u^f \in \mathbb{R}^3$ and control moment $u^m \in \mathbb{R}^3$ act on the dumbbell spacecraft. Control inputs are parametrized by their values at each time step, and the Lie group variational integrators are modified to include the effects of

the external control inputs by using the discrete Lagrange–d'Alembert principle. The objective is to transfer the dumbbell from a given initial configuration $(R_0, x_0, \Omega_0, v_0)$ to a desired configuration $(R_d, x_d, \Omega_d, v_d)$ during a fixed maneuver time $N$, while minimizing the square of the $l_2$ norm of the control inputs:

$\min_{u_{k+1}} J = \sum_{k=0}^{N-1} \frac{h}{2}\, (u^f_{k+1})^T W_f\, u^f_{k+1} + \frac{h}{2}\, (u^m_{k+1})^T W_m\, u^m_{k+1} \,,$

where $W_f, W_m \in \mathbb{R}^{3\times3}$ are symmetric positive definite matrices. Here we use a modified version of the discrete equations of motion with first order accuracy, as it yields a compact form for the necessary conditions.

Necessary conditions for optimality: We solve this optimal control problem by using an indirect optimization method, where necessary conditions for optimality are derived using variational arguments, and a solution of the corresponding two-point boundary value problem provides the optimal control. This approach is common in the optimal control literature; here, the optimal control problem is discretized at the problem formulation level using the Lie group variational integrator presented in Sect. "Discrete Lagrangian and Hamiltonian Mechanics". The augmented performance index is

$J_a = \sum_{k=0}^{N-1} \frac{h}{2}\, (u^f_{k+1})^T W_f\, u^f_{k+1} + \frac{h}{2}\, (u^m_{k+1})^T W_m\, u^m_{k+1}$
$\qquad + \lambda_k^{1,T} \left\{ -x_{k+1} + x_k + h v_k \right\}$
$\qquad + \lambda_k^{2,T} \left\{ -m v_{k+1} + m v_k - h \dfrac{\partial U_{k+1}}{\partial x_{k+1}} + h u^f_{k+1} \right\}$
$\qquad + \lambda_k^{3,T}\, \mathrm{logm}(F_k^T R_k^T R_{k+1})^\vee$
$\qquad + \lambda_k^{4,T} \left\{ -J\Omega_{k+1} + F_k^T J\Omega_k + h \left( M_{k+1} + u^m_{k+1} \right) \right\} \,,$


where $\lambda_k^1, \lambda_k^2, \lambda_k^3, \lambda_k^4 \in \mathbb{R}^3$ are Lagrange multipliers. The matrix logarithm is denoted by $\mathrm{logm} : SO(3) \to so(3)$, and the vee map $^\vee : so(3) \to \mathbb{R}^3$ is the inverse of the hat map introduced in Example 4. The logarithm form of (43) is used, and the constraint (42) is considered implicitly using constrained variations. Using expressions similar to the variations of the rotation matrix and the angular velocity given in (14) and (15), the infinitesimal variation can be written as

$\delta J_a = \sum_{k=1}^{N-1} h\, \delta u_k^{f,T} \left\{ W_f u_k^f + \lambda_{k-1}^2 \right\} + h\, \delta u_k^{m,T} \left\{ W_m u_k^m + \lambda_{k-1}^4 \right\} + z_k^T \left\{ -\lambda_{k-1} + A_k^T \lambda_k \right\} \,,$

where $\lambda_k = [\lambda_k^1; \lambda_k^2; \lambda_k^3; \lambda_k^4] \in \mathbb{R}^{12}$, and $z_k \in \mathbb{R}^{12}$ represents the infinitesimal variation of $(R_k, x_k, \Omega_k, v_k)$, given by $z_k = [\mathrm{logm}(R_k^T \delta R_k)^\vee; \delta x_k; \delta \Omega_k; \delta v_k]$. The matrix $A_k \in \mathbb{R}^{12\times12}$ is defined in terms of $(R_k, x_k, \Omega_k, v_k)$. Thus, necessary conditions for optimality are given by

$u^f_{k+1} = -W_f^{-1} \lambda_k^2 \,,$  (47)

$u^m_{k+1} = -W_m^{-1} \lambda_k^4 \,,$  (48)

$\lambda_k = A_{k+1}^T \lambda_{k+1} \,,$  (49)

together with the discrete equations of motion and the boundary conditions.

Computational approach: The necessary conditions for optimality are expressed as a two-point boundary value problem: find the optimal discrete flow, multipliers, and control inputs that satisfy the equations of motion, the optimality conditions, the multiplier equations, and the boundary conditions simultaneously. We use a neighboring extremal method (see [12]). A nominal solution satisfying all of the necessary conditions except the boundary conditions is chosen, and the unspecified initial multiplier is updated by successive linearization so as to satisfy the specified terminal boundary conditions in the limit. This is also referred to as a shooting method. The main advantage of the neighboring extremal method is that the number of iteration variables is small. The difficulty is that the extremal solutions are sensitive to small changes in the unspecified initial multiplier values; the nonlinearities also make it hard to construct an accurate estimate of the sensitivity, which can result in numerical ill-conditioning. It is therefore important to compute the sensitivities accurately when applying the neighboring extremal method. Here, the optimality conditions (47) and (48) are substituted into the equations of motion and the multiplier equations, which are then linearized to obtain sensitivity derivatives of an optimal solution with respect to the boundary conditions. Using this sensitivity, an initial guess of the unspecified initial conditions is iterated so as to satisfy the specified terminal conditions in the limit. Any type of Newton iteration can be applied; we use a line search with backtracking, referred to as Newton–Armijo iteration (see [22]).

Discrete Control Systems, Figure 6 Optimal control of a rigid body

Figure 6 shows optimized maneuvers, where a dumbbell spacecraft on a reference circular orbit is transferred to another circular orbit with a different orbital radius and inclination angle. Figure 6a shows the violation of the terminal boundary condition versus the number of iterations, on a logarithmic scale. Red circles denote outer iterations of the Newton–Armijo method, at which the sensitivity derivatives are computed. The error in satisfaction of the terminal boundary condition converges quickly to machine precision once the solution is close to the local minimum, at around the 20th iteration.
These convergence results are consistent with the quadratic convergence rates expected of Newton methods with accurately computed gradients.
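The logic of this computational approach, propagating a discrete flow from guessed initial data and correcting the guess with Newton iterations safeguarded by an Armijo backtracking line search, can be illustrated on a scalar toy problem: choosing the initial velocity of a variationally discretized pendulum so that a prescribed terminal angle is reached. Everything in the sketch (model, step size, tolerances) is an illustrative assumption, unrelated to the SE(3) maneuver above:

```python
import math

def propagate(theta0, v0, h, n):
    # Variational (Stormer-Verlet) discretization of theta'' = -sin(theta):
    #   theta_{k+1} = 2 theta_k - theta_{k-1} - h^2 sin(theta_k)
    prev = theta0
    cur = theta0 + h * v0
    for _ in range(n - 1):
        prev, cur = cur, 2.0 * cur - prev - h * h * math.sin(cur)
    return cur

def shoot(target, h=0.01, n=100, v0=0.0, tol=1e-10, maxiter=50):
    # Newton iteration on the scalar residual r(v0) = theta_N(v0) - target,
    # with a finite-difference derivative and Armijo-style backtracking.
    for _ in range(maxiter):
        r = propagate(0.0, v0, h, n) - target
        if abs(r) < tol:
            break
        eps = 1e-7
        dr = (propagate(0.0, v0 + eps, h, n) - target - r) / eps
        step = -r / dr
        t = 1.0
        # backtrack until sufficient decrease of the residual
        while abs(propagate(0.0, v0 + t * step, h, n) - target) > (1 - 0.5 * t) * abs(r):
            t *= 0.5
            if t < 1e-4:
                break
        v0 += t * step
    return v0
```

On this smooth scalar problem the full Newton step is accepted almost immediately and the residual drops to near machine precision in a handful of iterations, the same qualitative convergence pattern described for Figure 6a.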


The neighboring extremal method, also referred to as the shooting method, is numerically efficient in the sense that the number of optimization parameters is minimized. But, in general, this approach may be prone to numerical ill-conditioning (see [3]): a small change in the initial multiplier can cause highly nonlinear behavior of the terminal attitude and angular momentum, it is difficult to compute the gradient for the Newton iterations accurately, and the numerical error may fail to converge. However, the numerical examples presented in this article show excellent numerical convergence properties. The dynamics of a rigid body arise from Hamiltonian mechanics, which has neutral stability, and its adjoint system is also neutrally stable. The proposed Lie group variational integrator and the discrete multiplier equations, obtained from variations expressed in the Lie algebra, preserve this neutral stability property numerically, and therefore the sensitivity derivatives are computed accurately.

Controlled Lagrangian Method for Discrete Lagrangian Systems

The method of controlled Lagrangians is a procedure for constructing feedback controllers for the stabilization of relative equilibria. It relies on choosing a parametrized family of controlled Lagrangians whose corresponding Euler–Lagrange flows are equivalent to the closed loop behavior of a Lagrangian system with external control forces. The condition that these two systems are equivalent results in matching conditions. Since the controlled system can now be viewed as a Lagrangian system with a modified Lagrangian, the global stability of the controlled system can be determined directly using Lyapunov stability analysis. This approach originated in Bloch et al. [5] and was then developed in Auckly et al. [1]; Bloch et al. [6,7,8,9]; and Hamberg [15,16].
A similar approach for Hamiltonian controlled systems was introduced and further studied in the work of Blankenstein, Ortega, van der Schaft, Maschke, Spong, and their collaborators (see, for example, [33,35] and related references). The two methods were shown to be equivalent in Chang et al. [13], and a nonholonomic version was developed in Zenkov et al. [40,41] and Bloch [4]. Since the method of controlled Lagrangians relies on relating the closed-loop dynamics of a controlled system to the Euler–Lagrange flow associated with a modified Lagrangian, it is natural to discretize this approach through the use of variational integrators. In Bloch et al. [10,11], a discrete theory of controlled Lagrangians was developed for variational integrators, and applied to the feedback stabilization of the unstable inverted equilibrium of the pendulum on a cart.


The pendulum on a cart is an example of an underactuated control problem, with two degrees of freedom given by the pendulum angle and the cart position. Only the cart position is directly actuated, so the stabilization of the pendulum has to be achieved indirectly through the coupling between the pendulum and the cart. The controlled Lagrangian is obtained by modifying the kinetic energy term, a process referred to as kinetic shaping. Similarly, it is possible to modify the potential energy term using potential shaping. Since the pendulum on a cart model involves both a planar pendulum and a cart that translates in one dimension, the configuration space is a cylinder, $S^1 \times \mathbb{R}$.

Continuous kinetic shaping: The Lagrangian has the form kinetic minus potential energy,

$L(q, \dot{q}) = \frac{1}{2} \left[ \alpha \dot{\theta}^2 + 2\beta(\theta) \dot{\theta} \dot{s} + \gamma \dot{s}^2 \right] - V(q) \,,$  (50)

and the corresponding controlled Euler–Lagrange dynamics is

$\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{\theta}} - \dfrac{\partial L}{\partial \theta} = 0 \,,$  (51)

$\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{s}} = u \,,$  (52)

where $u$ is the control input. Since the potential energy is translation invariant, i.e., $V(q) = V(\theta)$, the relative equilibria $\theta = \theta_e$, $\dot{s} = \mathrm{const}$ are unstable and given by non-degenerate critical points of $V(\theta)$. To stabilize the relative equilibria with respect to $\theta$, kinetic shaping is used. The controlled Lagrangian in this case is defined by

$L_{\tau,\sigma}(q, \dot{q}) = L(\theta, \dot{\theta}, \dot{s} + \tau(\theta)\dot{\theta}) + \frac{1}{2} \sigma \left( \tau(\theta)\dot{\theta} \right)^2 \,,$  (53)

where $\tau(\theta) = \kappa \beta(\theta)$. This velocity shift corresponds to a new choice of the horizontal space (see [8] for details). The dynamics is just the Euler–Lagrange dynamics for the controlled Lagrangian (53),

$\dfrac{d}{dt} \dfrac{\partial L_{\tau,\sigma}}{\partial \dot{\theta}} - \dfrac{\partial L_{\tau,\sigma}}{\partial \theta} = 0 \,,$  (54)

$\dfrac{d}{dt} \dfrac{\partial L_{\tau,\sigma}}{\partial \dot{s}} = 0 \,.$  (55)

The Lagrangian (53) satisfies the simplified matching conditions of Bloch et al. [9] when the kinetic energy metric coefficient $\gamma$ in (50) is constant.

Setting $u = -\dfrac{d}{dt}\left( \gamma \tau(\theta) \dot{\theta} \right)$ defines the control input, makes Eqs. (52) and (55) identical, and results in controlled momentum conservation by dynamics (51)


and (52). Setting $\sigma = -1/\kappa$ makes Eqs. (51) and (54) identical when restricted to a level set of the controlled momentum.

Discrete kinetic shaping: Here, we adopt the following notation:

$q_{k+1/2} = \dfrac{q_k + q_{k+1}}{2} \,, \qquad \Delta q_k = q_{k+1} - q_k \,, \qquad q_k = (\theta_k, s_k) \,.$

Then, a second-order accurate discrete Lagrangian is given by

$L_d(q_k, q_{k+1}) = h\, L(q_{k+1/2}, \Delta q_k / h) \,.$

The discrete dynamics is governed by the equations

$\dfrac{\partial L_d(q_k, q_{k+1})}{\partial \theta_k} + \dfrac{\partial L_d(q_{k-1}, q_k)}{\partial \theta_k} = 0 \,,$  (56)

$\dfrac{\partial L_d(q_k, q_{k+1})}{\partial s_k} + \dfrac{\partial L_d(q_{k-1}, q_k)}{\partial s_k} = u_k \,,$  (57)

where $u_k$ is the control input. Similarly, the discrete controlled Lagrangian is

$L^{\tau,\sigma}_d(q_k, q_{k+1}) = h\, L^{\tau,\sigma}(q_{k+1/2}, \Delta q_k / h) \,,$

with discrete dynamics given by

$\dfrac{\partial L^{\tau,\sigma}_d(q_k, q_{k+1})}{\partial \theta_k} + \dfrac{\partial L^{\tau,\sigma}_d(q_{k-1}, q_k)}{\partial \theta_k} = 0 \,,$  (58)

$\dfrac{\partial L^{\tau,\sigma}_d(q_k, q_{k+1})}{\partial s_k} + \dfrac{\partial L^{\tau,\sigma}_d(q_{k-1}, q_k)}{\partial s_k} = 0 \,.$  (59)

Equation (59) is equivalent to the discrete controlled momentum conservation $\tilde{p}_k = \mu$, where

$\tilde{p}_k = \dfrac{\partial L^{\tau,\sigma}_d(q_k, q_{k+1})}{\partial s_{k+1}} = \dfrac{(1 + \gamma\kappa)\, \beta(\theta_{k+1/2})\, \Delta\theta_k + \gamma\, \Delta s_k}{h} \,.$

Setting

$u_k = \gamma\, \dfrac{\tau(\theta_{k+1/2})\, \Delta\theta_k - \tau(\theta_{k-1/2})\, \Delta\theta_{k-1}}{h}$

makes Eqs. (57) and (59) identical, and allows one to represent the discrete momentum equation (57) as the discrete momentum conservation law $p_k = p$.

The condition that (56)–(57) are equivalent to (58)–(59) yields the discrete matching conditions. The dynamics determined by Eqs. (56)–(57) restricted to the momentum level $p_k = p$ is equivalent to the dynamics of Eqs. (58)–(59) restricted to the momentum level $\tilde{p}_k = \mu$ if and only if the matching conditions

$\sigma = -\dfrac{1}{\kappa} \,, \qquad \mu = \dfrac{p}{1 + \gamma\kappa}$

hold.

Numerical example: Simulating the behavior of the discrete controlled Lagrangian system involves viewing Eqs. (58)–(59) as an implicit update map $(q_{k-2}, q_{k-1}) \mapsto (q_{k-1}, q_k)$. This presupposes that the initial conditions are given in the form $(q_0, q_1)$; however, it is generally preferable to specify the initial conditions as $(q_0, \dot{q}_0)$. This is achieved by solving the boundary condition

$\dfrac{\partial L}{\partial \dot{q}}(q_0, \dot{q}_0) + D_1 L_d(q_0, q_1) + F_d^-(q_0, q_1) = 0$

for $q_1$. Once the initial conditions are expressed in the form $(q_0, q_1)$, the discrete evolution can be obtained using the implicit update map. We first consider the case of kinetic shaping on a level surface, when $\kappa$ is twice the critical value, and without dissipation. Here, $h = 0.05$ sec, $m = 0.14$ kg, $M = 0.44$ kg, and $l = 0.215$ m. As shown in Fig. 7, the dynamics is stabilized, but since there is no dissipation, the oscillations are sustained. The $s$ dynamics exhibits both a drift and oscillations, as potential shaping is necessary to stabilize the translational dynamics.

Discrete Control Systems, Figure 7 Discrete controlled dynamics with kinetic shaping and without dissipation. The discrete controlled system stabilizes the motion about the equilibrium, but the s dynamics is not stabilized; since there is no dissipation, the oscillations are sustained

Future Directions

Discrete Receding Horizon Optimal Control: The existing work on discrete optimal control has been primarily focused on constructing the optimal trajectory in an open loop sense. In practice, model uncertainty and actuation errors necessitate the use of feedback control, and it would be interesting to extend the existing work on optimal control of discrete systems to the feedback setting by adopting a receding horizon approach.

Discrete State Estimation: In feedback control, one typically assumes complete knowledge of the state of the system, an assumption that is often unrealistic in practice. The general problem of state estimation in the context of discrete mechanics would rely on good numerical methods for quantifying the propagation of uncertainty by solving the Liouville equation, which describes the evolution of a phase space distribution function advected by

a prescribed vector field. In the setting of Hamiltonian systems, the Liouville equation can be solved by the method of characteristics (Scheeres et al. [38]). This implies that a collocational approach (Xiu [39]), combined with Lie group variational integrators and interpolation based on noncommutative harmonic analysis on Lie groups, could yield an efficient means of propagating uncertainty, and serve as the basis of a discrete state estimation algorithm.

Forced Symplectic-Energy-Momentum Variational Integrators: One of the motivations for studying the control of Lagrangian systems using the method of controlled Lagrangians is that the method provides a natural candidate Lyapunov function for studying the global stability properties of the controlled system. In the discrete theory, this approach is complicated by the fact that the energy of a discrete Lagrangian system is not exactly conserved, but rather oscillates in a bounded fashion. This can be addressed by considering the symplectic-energy-momentum [20] analogue of the discrete Lagrange–d'Alembert principle,

$\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}, h_k) = -\sum_{k=0}^{N-1} \left[ F_d^-(q_k, q_{k+1}, h_k) \cdot \delta q_k + F_d^+(q_k, q_{k+1}, h_k) \cdot \delta q_{k+1} \right] ,$

where the timestep $h_k$ is allowed to vary, and is chosen to satisfy the variational principle. The variations in $h_k$

yield an Euler–Lagrange equation that reduces to the conservation of discrete energy in the absence of external forces. By developing a theory of controlled Lagrangians around a geometric integrator based on the symplectic-energy-momentum version of the Lagrange–d'Alembert principle, one would potentially be able to use Lyapunov techniques to study the global stability of the resulting numerical control algorithms.

Acknowledgments

TL and ML have been supported in part by NSF Grants DMS-0504747 and DMS-0726263. TL and NHM have been supported in part by NSF Grants ECS-0244977 and CMS-0555797.

Bibliography

Primary Literature

1. Auckly D, Kapitanski L, White W (2000) Control of nonlinear underactuated systems. Commun Pure Appl Math 53:354–369
2. Benettin G, Giorgilli A (1994) On the Hamiltonian interpolation of near to the identity symplectic mappings with application to symplectic integration algorithms. J Stat Phys 74:1117–1143
3. Betts JT (2001) Practical Methods for Optimal Control Using Nonlinear Programming. SIAM, Philadelphia
4. Bloch AM (2003) Nonholonomic Mechanics and Control. Interdisciplinary Applied Mathematics, vol 24. Springer, New York


5. Bloch AM, Leonard N, Marsden JE (1997) Matching and stabilization using controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Hyatt Regency San Diego, San Diego, CA, 10–12 December 1997, pp 2356–2361
6. Bloch AM, Leonard N, Marsden JE (1998) Matching and stabilization by the method of controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Hyatt Regency Westshore, Tampa, FL, 16–18 December 1998, pp 1446–1451
7. Bloch AM, Leonard N, Marsden JE (1999) Potential shaping and the method of controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Crowne Plaza Hotel and Resort, Phoenix, AZ, 7–10 December 1999, pp 1652–1657
8. Bloch AM, Leonard NE, Marsden JE (2000) Controlled Lagrangians and the stabilization of mechanical systems I: The first matching theorem. IEEE Trans Autom Contr 45:2253–2270
9. Bloch AM, Chang DE, Leonard NE, Marsden JE (2001) Controlled Lagrangians and the stabilization of mechanical systems II: Potential shaping. IEEE Trans Autom Contr 46:1556–1571
10. Bloch AM, Leok M, Marsden JE, Zenkov DV (2005) Controlled Lagrangians and stabilization of the discrete cart-pendulum system. In: Proceedings of the IEEE Conference on Decision and Control. Melia Seville, Seville, Spain, 12–15 December 2005, pp 6579–6584
11. Bloch AM, Leok M, Marsden JE, Zenkov DV (2006) Controlled Lagrangians and potential shaping for stabilization of discrete mechanical systems. In: Proceedings of the IEEE Conference on Decision and Control. Manchester Grand Hyatt, San Diego, CA, 13–15 December 2006, pp 3333–3338
12. Bryson AE, Ho Y (1975) Applied Optimal Control. Hemisphere, Washington, DC
13. Chang D-E, Bloch AM, Leonard NE, Marsden JE, Woolsey C (2002) The equivalence of controlled Lagrangian and controlled Hamiltonian systems. Control Optim Calc Var (special issue dedicated to JL Lions) 8:393–422
14. Hairer E, Lubich C, Wanner G (2006) Geometric Numerical Integration, 2nd edn. Springer Series in Computational Mathematics, vol 31. Springer, Berlin
15. Hamberg J (1999) General matching conditions in the theory of controlled Lagrangians. In: Proceedings of the IEEE Conference on Decision and Control. Crowne Plaza Hotel and Resort, Phoenix, AZ, 7–10 December 1999, pp 2519–2523
16. Hamberg J (2000) Controlled Lagrangians, symmetries and conditions for strong matching. In: Lagrangian and Hamiltonian Methods for Nonlinear Control. Elsevier, Oxford
17. Hussein II, Leok M, Sanyal AK, Bloch AM (2006) A discrete variational integrator for optimal control problems in SO(3). In: Proceedings of the IEEE Conference on Decision and Control. Manchester Grand Hyatt, San Diego, CA, 13–15 December 2006, pp 6636–6641
18. Iserles A, Munthe-Kaas H, Nørsett SP, Zanna A (2000) Lie-group methods. In: Acta Numerica, vol 9. Cambridge University Press, Cambridge, pp 215–365
19. Junge O, Marsden JE, Ober-Blöbaum S (2005) Discrete mechanics and optimal control. In: IFAC Congress, Prague, 3–8 July 2005
20. Kane C, Marsden JE, Ortiz M (1999) Symplectic-energy-momentum preserving variational integrators. J Math Phys 40(7):3353–3371
21. Kane C, Marsden JE, Ortiz M, West M (2000) Variational integrators and the Newmark algorithm for conservative and dissipative mechanical systems. Int J Numer Meth Eng 49(10):1295–1325
22. Kelley CT (1995) Iterative Methods for Linear and Nonlinear Equations. SIAM, Philadelphia
23. Lee T, Leok M, McClamroch NH (2005) Attitude maneuvers of a rigid spacecraft in a circular orbit. In: Proceedings of the American Control Conference. Portland Hilton, Portland, OR, 8–10 June 2005, pp 1742–1747
24. Lee T, Leok M, McClamroch NH (2005) A Lie group variational integrator for the attitude dynamics of a rigid body with applications to the 3D pendulum. In: Proceedings of the IEEE Conference on Control Applications. Toronto, Canada, 28–31 August 2005, pp 962–967
25. Lee T, Leok M, McClamroch NH (2006) Optimal control of a rigid body using geometrically exact computations on SE(3). In: Proceedings of the IEEE Conference on Decision and Control. Manchester Grand Hyatt, San Diego, CA, 13–15 December 2006, pp 2710–2715
26. Lee T, Leok M, McClamroch NH (2007) Lie group variational integrators for the full body problem. Comput Method Appl Mech Eng 196:2907–2924
27. Lee T, Leok M, McClamroch NH (2007) Lie group variational integrators for the full body problem in orbital mechanics. Celest Mech Dyn Astron 98(2):121–144
28. Leimkuhler B, Reich S (2004) Simulating Hamiltonian Dynamics. Cambridge Monographs on Applied and Computational Mathematics, vol 14. Cambridge University Press, Cambridge
29. Leok M (2004) Foundations of Computational Geometric Mechanics. PhD thesis, California Institute of Technology
30. Marsden JE, Ratiu TS (1999) Introduction to Mechanics and Symmetry, 2nd edn. Texts in Applied Mathematics, vol 17. Springer, New York
31. Marsden JE, West M (2001) Discrete mechanics and variational integrators. In: Acta Numerica, vol 10. Cambridge University Press, Cambridge, pp 317–514
32. Marsden JE, Pekarsky S, Shkoller S (1999) Discrete Euler–Poincaré and Lie–Poisson equations. Nonlinearity 12(6):1647–1662
33. Maschke B, Ortega R, van der Schaft A (2001) Energy-based Lyapunov functions for forced Hamiltonian systems with dissipation. IEEE Trans Autom Contr 45:1498–1502
34. Moser J, Veselov AP (1991) Discrete versions of some classical integrable systems and factorization of matrix polynomials. Commun Math Phys 139:217–243
35. Ortega R, Spong MW, Gómez-Estern F, Blankenstein G (2002) Stabilization of a class of underactuated mechanical systems via interconnection and damping assignment. IEEE Trans Autom Contr 47:1218–1233
36. Sanz-Serna JM (1992) Symplectic integrators for Hamiltonian problems: an overview. In: Acta Numerica, vol 1. Cambridge University Press, Cambridge, pp 243–286
37. Scheeres DJ, Fahnestock EG, Ostro SJ, Margot JL, Benner LAM, Broschart SB, Bellerose J, Giorgini JD, Nolan MC, Magri C, Pravec P, Scheirich P, Rose R, Jurgens RF, De Jong EM, Suzuki S (2006) Dynamical configuration of binary near-Earth asteroid (66391) 1999 KW4. Science 314:1280–1283
38. Scheeres DJ, Hsiao F-Y, Park RS, Villac BF, Maruskin JM (2006) Fundamental limits on spacecraft orbit uncertainty and distribution propagation. J Astronaut Sci 54:505–523


39. Xiu D (2007) Efficient collocational approach for parametric uncertainty analysis. Comm Comput Phys 2:293–309
40. Zenkov DV, Bloch AM, Leonard NE, Marsden JE (2000) Matching and stabilization of low-dimensional nonholonomic systems. In: Proceedings of the IEEE Conference on Decision and Control, Sydney, NSW, Australia, 12–15 December 2000, pp 1289–1295
41. Zenkov DV, Bloch AM, Leonard NE, Marsden JE (2002) Flat nonholonomic matching. In: Proceedings of the American Control Conference, Anchorage, AK, 8–10 May 2002, pp 2812–2817

Books and Reviews
Bloch AM (2003) Nonholonomic Mechanics and Control. Interdisciplinary Applied Mathematics, vol 24. Springer, New York
Bullo F, Lewis AD (2005) Geometric Control of Mechanical Systems. Texts in Applied Mathematics, vol 49. Springer, New York
Hairer E, Lubich C, Wanner G (2006) Geometric Numerical Integration, 2nd edn. Springer Series in Computational Mathematics, vol 31. Springer, Berlin
Iserles A, Munthe-Kaas H, Nørsett SP, Zanna A (2000) Lie-group methods. In: Acta Numerica, vol 9. Cambridge University Press, Cambridge, pp 215–365
Leimkuhler B, Reich S (2004) Simulating Hamiltonian Dynamics. Cambridge Monographs on Applied and Computational Mathematics, vol 14. Cambridge University Press, Cambridge
Marsden JE, Ratiu TS (1999) Introduction to Mechanics and Symmetry, 2nd edn. Texts in Applied Mathematics, vol 17. Springer, New York
Marsden JE, West M (2001) Discrete mechanics and variational integrators. In: Acta Numerica, vol 10. Cambridge University Press, Cambridge, pp 317–514
Sanz-Serna JM (1992) Symplectic integrators for Hamiltonian problems: an overview. In: Acta Numerica, vol 1. Cambridge University Press, Cambridge, pp 243–286

Disordered Elastic Media
THIERRY GIAMARCHI
DPMC-MaNEP, University of Geneva, Geneva, Switzerland

Article Outline
Glossary
Definition of the Subject
Introduction
Static Properties of Disordered Elastic Media
Pinning and Dynamics
Future Directions
Bibliography

Glossary
Pinning An action exerted by impurities on an object. The object has a preferential position in space, and will


move in response to an external force only if this force is large enough.
Scaling The property that two quantities vary as a power law of each other.
Random manifold A single elastic structure (a line or a sheet) embedded in a random environment.
Bragg glass A periodic elastic structure embedded in a weakly disordered environment; nearly as ordered as a perfect solid, yet exhibiting some characteristics normally associated with glasses.
Creep The very slow response, at finite temperature, of a pinned structure to an external force.

Definition of the Subject
Many seemingly different systems with extremely different microscopic physics, ranging from magnets to superconductors, share the same essential ingredients and can be described under the unifying concept of disordered elastic media. In all these systems, an internal elastic structure, such as an interface between regions of opposite magnetization in a magnetic material, is subject to the effects of disorder present in the material. An especially interesting feature of these systems is that the disordered elastic structures can be set in motion by applying an external force (e.g., a magnetic field sets a magnetic interface in motion), and this motion is drastically affected by the presence of disorder. Which properties result from this competition between elasticity and disorder is a complicated question that constitutes the essence of the physics of disordered elastic media. The resulting physics exhibits characteristics similar to those of glasses, and poses extremely challenging fundamental questions about the static and dynamic properties of these systems. Understanding these properties is not only important from a fundamental point of view, but also has strong practical applications.
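The notion of scaling defined in the glossary can be made concrete with a toy model: a free random walk is the simplest caricature of an elastic line (a "random manifold" with thermal fluctuations but no disorder), and its width grows with its length as a power law, W(L) ~ L^ζ, with roughness exponent ζ = 1/2. The sketch below is an illustration only, not code from this article; the function name `interface_width` and the sample counts are arbitrary choices.

```python
import random

def interface_width(L, samples=200):
    """Mean RMS width of a +/-1 random-walk 'interface' of length L."""
    total = 0.0
    for _ in range(samples):
        heights, pos = [], 0.0
        for _ in range(L):
            pos += random.choice((-1.0, 1.0))  # one step of the walk
            heights.append(pos)
        mean = sum(heights) / L
        # RMS fluctuation of the line around its average position
        total += (sum((h - mean) ** 2 for h in heights) / L) ** 0.5
    return total / samples

random.seed(0)
w16, w256 = interface_width(16), interface_width(256)
# For zeta = 1/2, multiplying L by 16 should multiply the width
# by roughly 16**0.5 = 4.
ratio = w256 / w16
```

Running this gives a ratio close to 4, consistent with ζ = 1/2; disorder changes the exponent but not the power-law form, which is why scaling is the natural language for these systems.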
Indeed, the ability to write an interface between two regions of magnetization or polarization, and the speed of writing and the stability of such regions, condition in particular our ability to store information in such systems, for example as recordings on a magnetic hard drive. The physics of disordered elastic media thus directly conditions how these systems can be used in practical applications.

Introduction
Understanding the statics and dynamics of elastic systems in a random environment is a long-standing problem with important applications for a host of experimental systems. Such problems can be split into two broad categories: (a)
