
Artificial Intelligence Research

2016, Vol. 5, No. 2

ORIGINAL RESEARCH

Adaptive control for uncertain discrete-time systems with unknown disturbance based on RNN

Huimin Cui∗1,2, Jin Guo1, Jianxin Feng3, Tingfeng Wang1

1 State Key Laboratory of Laser Interaction with Matter, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China
2 University of Chinese Academy of Sciences, Beijing, China
3 Nanjing University of Aeronautics and Astronautics, Nanjing, China

Received: March 13, 2016   Accepted: May 12, 2016   Online Published: July 25, 2016
DOI: 10.5430/air.v5n2p102   URL: http://dx.doi.org/10.5430/air.v5n2p102

ABSTRACT
A new robust adaptive control algorithm is developed for a class of uncertain discrete-time SISO systems. Unlike the systems investigated in the existing literature, the discrete-time system considered here contains both uncertain smooth nonlinear functions and an unknown disturbance. Based on the idea of neural network (NN) approximation, a novel recurrent neural network (RNN) is first proposed and used to approximate a backstepping control law after the original system is transformed into a predictor form. According to the Lyapunov stability theorem, a new online tuning law for the RNN parameters is obtained. Meanwhile, in order to achieve satisfactory robust tracking performance, a novel controller is constructed by virtue of the approximation error of the RNN. It is proved that all the concerned signals are uniformly ultimately bounded, and that a very small tracking error can be obtained through appropriate selection of the control parameters. Finally, a simulation example demonstrates the validity of the newly proposed control algorithm for the investigated systems.

Key Words: Adaptive control, Backstepping control, Discrete-time nonlinear systems, Recurrent neural networks, External disturbance

∗ Correspondence: Huimin Cui; Email: [email protected]; Address: State Key Laboratory of Laser Interaction with Matter, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China, and University of Chinese Academy of Sciences, Beijing 100049, China.

ISSN 1927-6974   E-ISSN 1927-6982

1. INTRODUCTION
During the past few decades, neural networks (NNs) have received widespread attention, particularly in the identification and control of dynamic systems, by virtue of their excellent universal approximation ability.[1–12] For example, Sarma et al.[4] propose an ANN-based approach to recognize the initial phonemes of spoken words of the Assamese language, and an RBF neural network is employed as an auxiliary method to weaken the impact of nonlinearity and uncertainty on a nonlinear system.[5] In terms of structure, neural networks can be categorized into two types: the feed-forward neural network (FNN)[1, 2, 10, 13, 14] and the recurrent neural network (RNN).[6, 9, 11, 15–18, 20] An FNN can represent only static mappings, and its approximation performance is easily influenced by the training data because its weight-update scheme does not depend on internal network information. An RNN, by contrast, can memorize past knowledge by virtue of its delayed feedback loops, and can therefore handle time-varying inputs through its superior temporal processing. In practical applications, this characteristic allows an RNN to adapt accordingly when control conditions change suddenly. Consequently, an RNN can achieve better control performance than an FNN when the system contains unmodeled dynamics, which is the main reason RNNs have been widely used in the control field. For example, Miao et al.[9] propose a recurrent neural network control method for the case in which no information about the system dynamics is available, and Lin et al.[11] combine a robust adaptive backstepping control (RABC) algorithm with a recurrent wavelet neural network to control the target system. The simulation results in those papers show that control methods combined with recurrent neural networks achieve good control performance. An extensive literature survey shows that, among the various types of NN, the radial basis function neural network (RBFNN) can approximate any function to an arbitrarily small error range,[24, 25] yet no discrete RNN of this kind has been reported so far. In this study, for the purpose of enhancing the mapping capability of the RBFNN, we add delayed feedback links to the original network, forming a recurrent RBFNN. We then use the recurrent RBFNN as the main controller for discrete nonlinear systems, exploiting the proposed network's dynamic characteristics and relatively simple structure. Compared with previous research, this study proposes a more general RNN to deal with the nonlinearity and uncertainties of a more general class of control systems.

Compared with continuous-time descriptions, discrete-time descriptions are more faithful when depicting practical problems in systems, and adaptive NN control is already well developed for nonlinear continuous-time systems.[9, 10, 16–20, 27, 28] However, the difference of a Lyapunov function in discrete time[13] does not possess the linearity property enjoyed by the derivative of a Lyapunov function in continuous time.
Consequently, adaptive NN control schemes suitable for continuous-time systems may not be applied to discrete-time systems directly. Over the years, many researchers have devoted themselves to adaptive NN control for discrete-time systems, and the research in this area has advanced significantly.[13, 14, 31–34] Generally, the adaptive NN control scheme for nonlinear uncertain discrete-time systems is based on the backstepping technique and Lyapunov stability theory.[20–23] For example, in Ref.[26] an NN control algorithm combined with the backstepping technique is provided for a class of strict-feedback systems. The controllers mentioned above achieve a bounded tracking error by means of neural networks and backstepping techniques; however, they suffer from a drawback of complexity.


One source of complexity is the computational expansion[27] that results from the repeated differentiation of certain nonlinear functions; furthermore, the designed controller becomes more complex as the system order grows. This problem of growing complexity has been solved through the introduction of the dynamic surface control (DSC) technique,[28, 29] which utilizes first-order filters of the synthetic inputs at each intermediate step and has recently been widely used in the adaptive control literature, e.g., Ref.[30] The other source of design complexity is the use of multiple approximators, as in the examples mentioned above. To solve this problem, in Ref.[10] the author utilizes only a single NN to mimic the lumped unknown function, which effectively avoids the use of multiple approximators and reduces the computational burden.

On the basis of the above observations, this study further considers alleviating the complexity and lightening the computational burden of discrete-time controller design. In this paper, a novel controller is constructed based on a single recurrent RBFNN. To facilitate the subsequent use of the backstepping technique, the original system is first transformed into an equivalent m-step-ahead predictor. Then all the unknown functions are passed down, and only the ideal backstepping control law at the last step is approximated by the proposed NN. Thus, the controller in this paper is much simplified and its computational burden is much lightened. In addition, a robust adaptive controller based on the approximation error is constructed in order to achieve satisfactory tracking performance. The stability analysis shows that all the signals are bounded, and that an arbitrarily small state tracking error can be obtained through an appropriate choice of the control parameters.
Finally, a simulation example is given that verifies the effectiveness of the newly proposed controller.

The structure of the paper is as follows. In Section 2, the control problem to be investigated and preliminaries such as the architecture of the RNN are presented. Section 3 describes the control design procedure for the considered class of systems on the basis of NN approximation; all the closed-loop signals are also rigorously proved to be bounded through the constructed Lyapunov function in that section. In Section 4, a simulation example demonstrates the usefulness of the developed theory. Finally, Section 5 concludes the paper.

Notation. ‖ · ‖ denotes the Euclidean norm of vectors and the induced norm of matrices. A := B means that A is defined as B. (·)T represents the transpose of a vector. λmax(·) denotes the largest eigenvalue of a square matrix.


2. SYSTEM DESCRIPTION AND PRELIMINARIES

2.1 System description
The investigated systems are described as follows:

ξj(ι + 1) = ξj+1(ι) + fj(ξ̄j(ι)), j = 1, 2, . . . , m − 1   (1)
ξm(ι + 1) = fm(ξ̄m(ι)) + gm(ξ̄m(ι))u(ι) + d(ι)   (2)
y(ι) = ξ1(ι)   (3)

where ξ̄j(ι) = [ξ1(ι), ξ2(ι), . . . , ξj(ι)]T ∈ Rj (j = 1, 2, . . . , m) is the vector of system state variables, which is assumed to be available for measurement; u(ι) ∈ R and y(ι) ∈ R are the system input and output, respectively; fj(ξ̄j(ι)) (j = 1, 2, . . . , m) and gm(ξ̄m(ι)) are unknown smooth nonlinear functions; and d(ι) is the external disturbance, satisfying |d(ι)| ≤ dmax, where dmax is a known constant.

Our control goal is to design a controller that ensures the output tracks the known reference signal yd(ι) and guarantees that all the closed-loop signals remain uniformly ultimately bounded. To obtain our main results, we introduce the following assumptions:

Assumption 1 Ωy := {y | y = ξ1}.

Assumption 2 gm(ξ̄m(ι)) > 0, and there exist constants gmin > 0 and gmax > 0 such that gmin < |gm(ξ̄m(ι))| < gmax.

Definition 1 The solution of (1)-(3) is semi-globally uniformly ultimately bounded (SGUUB) if, for any compact subset Ω of Rn and all ξ̄m(k0) ∈ Ω, there exist an ε > 0 and a number N(ε, ξ̄m(k0)) such that ‖ξ̄m(k)‖ < ε for all k ≥ k0 + N.

2.2 Architecture of RNN
Many well-developed approaches can be utilized to emulate unknown nonlinear functions. However, among the frequently used methods, only the NN is known to be capable of approximating any unknown nonlinear function to an arbitrarily small error range, which is why it is so often used for the identification and control of nonlinear systems. In addition, the RBFNN has a simple structure, among other advantages. Thus, in order to enhance the approximation ability of the NN, a recurrent RBFNN using the radial basis function as its basic activation function is proposed and depicted in Figure 1, where τ−1 denotes a time delay.

Figure 1. Structure of the three-layer recurrent radial basis function neural network

The recurrent RBFNN comprises three layers: an input layer, a hidden layer, and an output layer. The detailed description of the structure is given below.

(1) Layer 1 (input layer): The input and output of the first layer are respectively

netj¹(χ) = ϕj(χ)   (4)
oj¹(χ) = θj¹(netj¹(χ)) = netj¹(χ), j = 1, 2, . . . , m1   (5)

where ϕj represents the input to the jth node of the input layer; χ denotes the number of iterations; θj¹ is the activation function of the jth input node, which is set to be unity; and m1 denotes the number of input nodes.

(2) Layer 2 (hidden layer): A recurrent loop is added to each node of this layer:

neti²(χ) = γi oi²(χ − 1) + Σj=1..m1 ϑji ϕj(χ)   (6)
oi²(χ) = θi²(neti²(χ)) = exp(−‖neti²(χ) − ci‖²/b²), i = 1, 2, . . . , m2   (7)

where γi is the recurrent weight of the ith node; θi² is the activation function of the ith node in this layer (for convenience, θi²(·) is denoted θ(·) in the following); ϑji is the connective weight, which is set to 1 here; ci represents the center of the basis function of the ith node; b > 0 is the width of the basis function; and m2 denotes the number of nodes in the hidden layer.

(3) Layer 3 (output layer): Each output node computes the summation of all incoming signals:

netκ³(χ) = Σi=1..m2 ωiκ oi²(χ)   (8)
oκ³(χ) = θκ³(netκ³(χ)) = netκ³(χ), κ = 1, 2, . . . , m3   (9)

where ωiκ is the connective weight between the ith node in the hidden layer and the κth node in the output layer; θκ³ is the activation function of the κth node in the output layer, which is set to be unity; oκ³ is the κth output of the output layer; and m3 denotes the number of output nodes. Moreover, we denote

γ = (γ1, γ2, . . . , γm2)T   (10)
ω = (ω11, ω21, . . . , ωm2 1, . . . , ω1m3, ω2m3, . . . , ωm2 m3)T   (11)
ϱ = (γT, ωT)T   (12)

Then, the final output is

o³ = Ψ(ϕ, γ, ω) = Ψ(ϕ|ϱ)   (13)

where ϕ = (ϕ1, ϕ2, . . . , ϕm1) are the inputs of the RNN and Ψ = (Ψ1, Ψ2, . . . , Ψm3) are the outputs of the RNN. A smooth function F(ϕ): Rm1 → Rm3 can be expressed as F(ϕ) = F̂(ϕ|ϱ∗) + ε, where ε is the functional reconstruction error. The ideal weights are defined as ϱ∗ = argminϱ {supϕ ‖F(ϕ) − F̂(ϕ|ϱ)‖}, and ϱ̂ denotes the estimate of ϱ∗.

Assumption 3 On a compact set Ωϕ ⊂ Rm, the ideal RNN weight vector ϱ∗ satisfies ‖ϱ∗‖ ≤ ϱm, where ϱm is a positive constant.

Lemma 1 Consider ϕ as the input vector. The properties λmax[θ(ϕ(κ))θT(ϕ(κ))] < 1 and θT(ϕ(κ))θ(ϕ(κ)) < n will be used in the stability proof below.

3. SINGLE NEURAL NETWORK APPROXIMATION BASED ADAPTIVE ROBUST CONTROL DESIGN
We are often confronted with the problem of a causality contradiction when constructing a backstepping controller for the systems under consideration. However, when the original system (1)-(3) is transformed into an ahead-predictor form,[28] this problem is naturally avoided. Following the transformation process in Ref.,[28] the initial strict-feedback form (1)-(3) can be transformed into

ξ1(ι + m) = ξ2(ι + m − 1) + F1(ξ̄m(ι))
. . .
ξm−1(ι + 2) = ξm(ι + 1) + Fm−1(ξ̄m(ι))
ξm(ι + 1) = Fm(ξ̄m(ι)) + Gm(ξ̄m(ι))u(ι) + d(ι)   (14)

where Fj(ξ̄m(ι)) and Gm(ξ̄m(ι)) depend on fj(·) (j = 1, 2, . . . , m) and gm(·), respectively. We should be aware that the functions Fj(ξ̄m(ι)) (j = 1, 2, . . . , m) become highly nonlinear: as j decreases, Fj(ξ̄m(ι)) becomes more entangled and complex, because Fm−1(ξ̄m(ι)) is obtained through a one-step substitution while F1(ξ̄m(ι)) is obtained through an (m − 1)-step substitution. Fortunately, an RNN can approximate an unknown smooth function to an arbitrarily small error tolerance, as discussed in Section 2. Thus, employing the RNN as the main controller is a good choice when there is no knowledge of the exact structures of Fj(ξ̄m(ι)) (j = 1, 2, . . . , m) and Gm(ξ̄m(ι)). The construction procedure of the proposed controller is presented as follows.

Firstly, for convenience of analysis and discussion, let Fj(ι) = Fj(ξ̄m(ι)) (j = 1, 2, . . . , m) and Gm(ι) = Gm(ξ̄m(ι)).

Step 1: From equations (1)-(3) and (14), we obtain

e1(ι + m) = ξ1(ι + m) − yd(ι + m) = ξ2(ι + m − 1) + F1(ι) − yd(ι + m)   (15)

If we consider ξ2(ι + m − 1) as a virtual control for the concerned system and δ2(ι + m − 1) as the ideal intermediate function, the following error variable can be introduced:

e2(ι + m − 1) = ξ2(ι + m − 1) − δ2(ι + m − 1)   (16)

It is obvious that e1(ι + m) = 0 if we choose

δ2(ι + m − 1) = −F1(ι) + yd(ι + m)   (17)

Substituting (17) into (16) leads to

ξ2(ι + m − 1) = e2(ι + m − 1) + δ2(ι + m − 1) = e2(ι + m − 1) − F1(ι) + yd(ι + m)   (18)

Substituting (18) into (15), we obtain e1(ι + m) = e2(ι + m − 1).

Step 2: Let e2(ι) = ξ2(ι) − δ2(ι); then its (m − 1)th difference is

e2(ι + m − 1) = ξ2(ι + m − 1) − δ2(ι + m − 1)
= ξ2(ι + m − 1) + F1(ι) − yd(ι + m)
= ξ3(ι + m − 2) + F2(ι) + F1(ι) − yd(ι + m)
= ξ3(ι + m − 2) + F2∗(ι) − yd(ι + m)   (19)

where F2∗(ι) = F1(ι) + F2(ι), and ξ3(ι + m − 2) is likewise considered as a virtual control for the investigated system.


Through the introduction of the error variable

e3(ι + m − 2) = ξ3(ι + m − 2) − δ3(ι + m − 2)   (20)

and the choice of

δ3(ι + m − 2) = −F2∗(ι) + yd(ι + m)   (21)

it can easily be obtained that e2(ι + m − 1) = 0. Substituting (21) into (20) leads to

ξ3(ι + m − 2) = e3(ι + m − 2) + δ3(ι + m − 2) = e3(ι + m − 2) − F2∗(ι) + yd(ι + m)   (22)

From (20), (21) and (22), e2(ι + m − 1) = e3(ι + m − 2) can be obtained.

Step j: As in Steps 1 and 2, for ej(ι) = ξj(ι) − δj(ι), its (m − j + 1)th difference is

ej(ι + m − j + 1) = ξj(ι + m − j + 1) − δj(ι + m − j + 1)
= ξj+1(ι + m − j) + Fj(ι) + Fj−1∗(ι) − yd(ι + m)
= ξj+1(ι + m − j) + Fj∗(ι) − yd(ι + m)   (23)

where Fj∗(ι) = Fj(ι) + Fj−1∗(ι) is an unknown function. As in the previous design steps, ξj+1(ι + m − j) is also considered as a virtual control for the investigated system. Through the introduction of the error variable

ej+1(ι + m − j) = ξj+1(ι + m − j) − δj+1(ι + m − j)   (24)

and the selection of

δj+1(ι + m − j) = −Fj∗(ι) + yd(ι + m)   (25)

it is apparent that ej(ι + m − j + 1) = 0. Substituting (25) into (24) leads to

ξj+1(ι + m − j) = ej+1(ι + m − j) − Fj∗(ι) + yd(ι + m)   (26)

Substituting (26) into (23), the error (23) is rewritten as

ej(ι + m − j + 1) = ej+1(ι + m − j)   (27)

Step m: For em(ι) = ξm(ι) − δm(ι), its first difference is

em(ι + 1) = ξm(ι + 1) − δm(ι + 1)
= Fm(ξ̄m(ι)) + Gm(ξ̄m(ι))u(ι) + d(ι) + Fm−1∗(ι) − yd(ι + m)
= Fm(ι) + Fm−1∗(ι) + Gm(ι)u(ι) + d(ι) − yd(ι + m)
= Fm∗(ι) + Gm(ι)u(ι) + d(ι) − yd(ι + m)   (28)

where Fm∗(ι) = Fm(ι) + Fm−1∗(ι) and Gm(ι) are unknown functions. Apparently, em(ι + 1) = 0 if the expected control law u∗(ι) is chosen as

u(ι) = u∗(ι) = (−Fm∗(ι) − d(ι) + yd(ι + m)) / Gm(ι)   (29)

Since Fm∗(ι) and Gm(ι) are unknown, we utilize the recurrent RBFNN to emulate u∗(ι):

u∗(ι) = Ψ(ξ̄m(ι)|ϱ∗) + ε(ξ̄m(ι))   (30)

where ξ̄m(ι) = ϕ(ι). In this paper the recurrent weights are taken to be constant, so (30) can be rewritten as

u∗(ι) = Ψ(ϕ(ι)|ϱ∗) + ε(ϕ(ι)) = ω∗T θ(ϕ(ι)) + ε(ϕ(ι))   (31)

where θ denotes the vector of hidden-layer activations θj².

Assumption 4 On the compact set Ωϕ, ε(ϕ(ι)) satisfies ‖ε(ϕ(ι))‖ ≤ ς, where ς > 0 is an unknown constant.

Let ω̂(ι) be the estimate of ω∗ and ς̂(ι) be the estimate of ς. The actual control law is chosen as

u(ι) = ω̂T(ι)θ(ϕ(ι)) + ς̂(ι)   (32)

The updating algorithms are

ω̂(ι + 1) = ω̂(ι) − λθ(ϕ(ι))em(ι + 1)
ς̂(ι + 1) = ς̂(ι) − ζ(em(ι + 1) + µς̂(ι))   (33)

Substituting (32) into (28), the error (28) can be transformed into

em(ι + 1) = Gm(ι)(ω̂T(ι)θ(ϕ(ι)) + ς̂(ι)) + Fm∗(ι) − yd(ι + m) + d(ι)   (34)

Combining with (31), (34) is equal to

em(ι + 1) = Gm(ι)(ω̂T(ι)θ(ϕ(ι)) + ς̂(ι)) + Fm∗(ι) − yd(ι + m) + d(ι) + Gm(ι)u∗(ι) − Gm(ι)(ω∗T θ(ϕ(ι)) + ε(ϕ(ι)))   (35)

Substituting (29) into (35) leads to

em(ι + 1) = Gm(ι)(ω̂T(ι)θ(ϕ(ι)) + ς̂(ι)) − Gm(ι)(ω∗T θ(ϕ(ι)) + ε(ϕ(ι)))
= Gm(ι)(ω̃T(ι)θ(ϕ(ι)) + ς̂(ι) − ε(ϕ(ι)))   (36)

where ω̃(ι) = ω̂(ι) − ω∗.
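Before turning to the stability analysis, the design assembled so far — the recurrent hidden layer (6)-(7) supplying θ(ϕ(ι)), the control law (32), and the update laws (33) — can be sketched in code. This is an illustrative sketch only: the basis-function centers, the aggregation of the inputs, and all numerical values are assumptions, not taken from the paper.

```python
import numpy as np

class RecurrentRBFController:
    """Adaptive controller u(i) = w_hat . theta(phi(i)) + sigma_hat(i), eq. (32),
    with a recurrent Gaussian hidden layer, eqs. (6)-(7), and update laws (33)."""

    def __init__(self, m2=9, lam=0.01, zeta=0.01, mu=0.001, b=1.0):
        self.w = 1.5 * np.ones(m2)            # omega_hat(0), feedforward weights
        self.sigma_hat = 0.0                  # sigma_hat(0), robust compensation term
        self.gamma = 1.8 * np.ones(m2)        # gamma_i, recurrent weights (held constant)
        self.c = np.linspace(-2.0, 2.0, m2)   # basis centers c_i (assumed)
        self.b = b                            # basis width b > 0
        self.o2_prev = np.zeros(m2)           # delayed hidden output o2(chi - 1)
        self.lam, self.zeta, self.mu = lam, zeta, mu
        self.theta_last = np.zeros(m2)

    def hidden(self, phi):
        # Layer 2, eqs. (6)-(7): net2_i = gamma_i * o2_i(chi-1) + sum_j phi_j (theta_ji = 1)
        net2 = self.gamma * self.o2_prev + np.sum(phi)
        o2 = np.exp(-((net2 - self.c) ** 2) / self.b ** 2)
        self.o2_prev = o2                     # store for the next iteration
        return o2

    def control(self, phi):
        # Control law (32)
        self.theta_last = self.hidden(np.asarray(phi, dtype=float))
        return float(self.w @ self.theta_last + self.sigma_hat)

    def update(self, e_next):
        # Update laws (33), applied once e_m(i + 1) has been measured
        self.w = self.w - self.lam * self.theta_last * e_next
        self.sigma_hat = self.sigma_hat - self.zeta * (e_next + self.mu * self.sigma_hat)

ctrl = RecurrentRBFController()
u0 = ctrl.control([1.0, 0.0])   # control at step i
ctrl.update(0.3)                # adapt after observing e_m(i + 1)
```

Calling `control` at step ι and `update` with the error measured at step ι + 1 mirrors the one-step delay built into (33).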

Theorem 1 Consider the nonlinear system (1)-(3) under the control law (32) and the adaptation laws (33). Then, for any bounded initial conditions, i.e., ξ̄m(0) initialized in Ω, all the closed-loop system signals remain SGUUB, and a small tracking error tolerance can be achieved through appropriate selection of the control parameters.

Proof: The Lyapunov function is chosen as

V = Σj=1..m ej²(ι) + ω̃T(ι)λ−1ω̃(ι) + ζ−1ς̃²(ι)   (37)

From equation (36), we know that

ω̃T(ι)θ(ϕ(ι)) = em(ι + 1)/Gm(ι) − ς̂(ι) + ε(ϕ(ι))   (38)

The first difference of (37) is given by

∆V = Σj=1..m [ej²(ι + 1) − ej²(ι)] + ω̃T(ι + 1)λ−1ω̃(ι + 1) − ω̃T(ι)λ−1ω̃(ι) + ζ−1ς̃²(ι + 1) − ζ−1ς̃²(ι)   (39)

According to (27), we have

e1(ι + 1) = e2(ι), e2(ι + 1) = e3(ι), . . . , ej(ι + 1) = ej+1(ι), j = 1, 2, . . . , m − 1   (40)

Then, the difference along (33) and (36) is as follows:

∆V = em²(ι + 1) − e1²(ι) + ω̃T(ι + 1)λ−1ω̃(ι + 1) − ω̃T(ι)λ−1ω̃(ι) + ζ−1ς̃²(ι + 1) − ζ−1ς̃²(ι)
= em²(ι + 1) − e1²(ι) − 2ω̃T(ι)θ(ϕ(ι))em(ι + 1) + θT(ϕ(ι))λθ(ϕ(ι))em²(ι + 1) − 2ς̃(ι)(em(ι + 1) + µς̂(ι)) + ζ(em(ι + 1) + µς̂(ι))²
= em²(ι + 1) − e1²(ι) − (2/Gm(ι))em²(ι + 1) + 2ς̂(ι)em(ι + 1) − 2ε(ϕ(ι))em(ι + 1) + θT(ϕ(ι))λθ(ϕ(ι))em²(ι + 1) + ζem²(ι + 1) − 2ς̃(ι)em(ι + 1) − 2µς̃(ι)ς̂(ι) + 2ζµς̂(ι)em(ι + 1) + ζµ²ς̂²(ι)
= (ζ + 1 − 2/Gm(ι))em²(ι + 1) − e1²(ι) − 2ε(ϕ(ι))em(ι + 1) + 2ςem(ι + 1) + θT(ϕ(ι))λθ(ϕ(ι))em²(ι + 1) − 2µς̃(ι)ς̂(ι) + 2ζµς̂(ι)em(ι + 1) + ζµ²ς̂²(ι)   (41)

Using the facts that

θT(ϕ(ι))θ(ϕ(ι)) < n, θT(ϕ(ι))λθ(ϕ(ι)) ≤ τn;
−2ε(ϕ(ι))em(ι + 1) ≤ τem²(ι + 1) + 2/τ;
2ςem(ι + 1) ≤ τem²(ι + 1) + ς²/τ;
2ζµς̂(ι)em(ι + 1) ≤ ζem²(ι + 1) + ζµ²ς̂²(ι);
2ς̃(ι)ς̂(ι) = ς̃²(ι) + ς̂²(ι) − ς²   (42)

we obtain

∆V ≤ (ζ + 1 − 2/Gm(ι))em²(ι + 1) − e1²(ι) + τem²(ι + 1) + 2/τ + τem²(ι + 1) + ς²/τ + τnem²(ι + 1) − µς̃²(ι) − µς̂²(ι) + µς² + ζem²(ι + 1) + 2ζµ²ς̂²(ι)
≤ (2ζ + 2τ + τn + 1 − 2/gmax)em²(ι + 1) − e1²(ι) + (1/τ + 2µ)ς² + (2ζµ² − µ)ς̂²(ι) − µς̃²(ι) + 2/τ   (43)

where ρ = (1/τ + 2µ)ς² + 2/τ is the nonnegative lumped constant in the bound above. The design parameters are chosen as

0 < µ < 1/(2ζ), ζ < 1/gmax − τ(n/2 + 1) − 1/2   (44)

Then, as long as the error em(ι) is larger than √ρ, we have ∆V ≤ 0. This implies that V(ι) is bounded for all ι ≥ 0, which indicates that em(ι) is bounded and satisfies em(ι) ∈ Ωe := {e ∈ R : |e| ≤ √ρ}. Consequently, the theorem is proved.

4. SIMULATION EXAMPLE
To illustrate the effectiveness of the proposed approach, the following example is considered:

ξ1(ι + 1) = ξ2(ι) + f1(ξ̄1(ι))
ξ2(ι + 1) = f2(ξ̄2(ι)) + g2(ξ̄2(ι))u(ι) + d(ι)
y(ι) = ξ1(ι)   (45)

where f1(ξ̄1(ι)), f2(ξ̄2(ι)) and g2(ξ̄2(ι)) are unknown smooth functions. According to (32), the controller is chosen as

u(ι) = ω̂T(ι)θ(ϕ(ι)) + ς̂(ι)   (46)

For the purpose of simulation, the unknown system functions are assumed to be

f1(ξ̄1(ι)) = 1.1ξ1²(ι)/(1 + ξ1²(ι))
f2(ξ̄2(ι)) = ξ1(ι)/(1 + ξ1²(ι) + ξ2²(ι))
g2(ξ̄2(ι)) = 1
d(ι) = 0.1 sin(ιπ/10)   (47)

At the beginning, let ξ(0) = [1 0]T. The node numbers of the input layer, the hidden layer and the output layer are m1 = 2, m2 = 9 and m3 = 1, respectively. The initial weights of the RNN are ω̂(0) = 1.5 × ones(L, 1) and γ(0) = 1.8 × ones(L, 1); that is, the feedforward weights are initialized to 1.5 and the recurrent weights are initialized to 1.8. The controller parameters chosen for the simulation are λ = 0.01, ζ = 0.01 and µ = 0.001. The reference signal is yd(ι) = (1/2) sin(ιπ/20) + (1/2) sin(ιπ/10).

Figure 2. The tracking performance of the system

Figure 3. The tracking error of the system

Simulation results obtained in this situation are presented in Figures 2-5.
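A minimal closed-loop run of example (45) with the functions (47) can be sketched as follows. For brevity the controller here uses a non-recurrent Gaussian feature vector in place of the full RNN; the basis centers, the feature construction, and the simplified error indexing are assumptions, while λ, ζ, µ and the initial state follow the setup above.

```python
import numpy as np

# Minimal closed-loop simulation of example (45) with the functions (47).
# The non-recurrent features and the centers below are assumptions.
lam, zeta, mu = 0.01, 0.01, 0.001
m2 = 9
w = 1.5 * np.ones(m2)                      # omega_hat(0)
sigma_hat = 0.0                            # sigma_hat(0)
centers = np.linspace(-2.0, 2.0, m2)

def theta(x1, x2):
    # Gaussian features of the measured state (stand-in for theta(phi))
    return np.exp(-((x1 + x2 - centers) ** 2))

x1, x2 = 1.0, 0.0                          # xi(0) = [1 0]^T
outputs = []
for k in range(200):
    yd = 0.5 * np.sin(k * np.pi / 20) + 0.5 * np.sin(k * np.pi / 10)
    th = theta(x1, x2)
    u = w @ th + sigma_hat                 # control law (32)/(46)
    f1 = 1.1 * x1**2 / (1.0 + x1**2)
    f2 = x1 / (1.0 + x1**2 + x2**2)
    d = 0.1 * np.sin(k * np.pi / 10)
    x1, x2 = x2 + f1, f2 + u + d           # plant (45), with g2 = 1
    e = x1 - yd                            # output tracking error (index simplified)
    w = w - lam * th * e                   # update laws (33)
    sigma_hat = sigma_hat - zeta * (e + mu * sigma_hat)
    outputs.append(x1)
```

Plotting `outputs` against the reference would reproduce the qualitative shape of the tracking figures; the quantitative precision reported in the paper depends on the full recurrent network, which this sketch omits.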


The proposed control approach is utilized to track the reference signal yd(ι) for the system (45) with the functions (47). Figures 2 and 3 demonstrate that the final tracking performance is favorable, with high tracking precision. Figure 4 gives the control input signal; it can be observed that the control input is not smooth but remains bounded. Figure 5 depicts the learning behavior of the RNN weights. Together, the four figures show that all the concerned signals are bounded.

Figure 4. The control input of the system

Figure 5. The trajectory of the RNN weights

5. CONCLUSIONS
Through combining the advantages of the backstepping technique and NNs, a novel control approach for a class of uncertain systems with unknown external disturbance is proposed in this study. In order to enhance the mapping ability of NNs, feedback and delay loops are added to the original RBFNN. Then, following the system transformation and in combination with the backstepping technique, a single recurrent radial basis function neural network is utilized to emulate the lumped nonlinear functions. Through this method, the structure of the controller is simplified considerably and the computational burden is alleviated drastically. A further adaptive controller based on the approximation error is also constructed in this article so as to weaken the negative impact of the approximation error on the investigated system. Stability analysis shows that all the closed-loop system signals are ensured to be uniformly ultimately bounded, and an arbitrarily small tracking error can be obtained through appropriate parameter selection. Finally, a simulation example demonstrates that the proposed controller is feasible and effective.

REFERENCES
[1] Polycarpou M. Stable adaptive neural control scheme for nonlinear systems. Automatic Control. 1996; 41: 447-451. http://dx.doi.org/10.1109/9.486648
[2] Zhang T, Ge S, Hang C. Adaptive neural network control for strict-feedback nonlinear systems using backstepping design. Automatica. 2000; 36: 1835-1846. http://dx.doi.org/10.1016/S0005-1098(00)00116-3
[3] Ho D, Zhang P, Xu J. Fuzzy wavelet networks for function learning. IEEE Transactions on Neural Networks. 2001; 9(1): 200-211.
[4] Sarma M, Sarma K. An ANN based approach to recognize initial phonemes of spoken words of Assamese language. Applied Soft Computing. 2013; 13(5): 2281-2291. http://dx.doi.org/10.1016/j.asoc.2013.01.004
[5] Miao B, Li T. A novel neural network-based adaptive control for a class of uncertain nonlinear systems in strict-feedback form. Nonlinear Dynamics. 2015; 79: 1005-1013. http://dx.doi.org/10.1007/s11071-014-1717-2
[6] Hsu C, Cheng K. Recurrent fuzzy-neural approach for nonlinear control using dynamic structure learning scheme. Neurocomputing. 2008; 71: 3447-3459. http://dx.doi.org/10.1016/j.neucom.2007.10.014
[7] Fu Z, Xie W, Luo W. Robust on-line nonlinear systems identification using multilayer dynamic neural networks with two-time scales. Neurocomputing. 2013; 113(3): 16-26.
[8] Peng J, Dubay R. Identification and adaptive neural network control of a DC motor system with dead-zone characteristics. ISA Transactions. 2011; 50(4): 588-598. PMid:21788017. http://dx.doi.org/10.1016/j.isatra.2011.06.005
[9] Miao Z, Wang Y, Yang Y. Robust tracking control of uncertain dynamic nonholonomic systems using recurrent neural networks. Neurocomputing. 2014; 142: 216-227. http://dx.doi.org/10.1016/j.neucom.2014.03.061
[10] Sun G, Wang D, Li T, et al. Single neural network approximation based adaptive control for a class of uncertain strict-feedback nonlinear systems. Nonlinear Dynamics. 2013; 72: 175-184. http://dx.doi.org/10.1007/s11071-012-0701-y
[11] Lin C, Hsueh C, Chen C. Robust adaptive backstepping control for a class of nonlinear systems using recurrent wavelet neural network. Optimal Control Applications and Methods. 2014; 142: 372-382.
[12] Hartman E, Keeler J, Kowalski J. Layered neural networks with Gaussian hidden units as universal approximation. Neural Computation. 1990; 2(2): 210-215. http://dx.doi.org/10.1162/neco.1990.2.2.210
[13] Huang Z, Yang Q, Luo X. Adaptive output-feedback control of a class of discrete-time nonlinear systems. Proceedings of American Control Conference. 1993; 1359-1364.
[14] Ge S, Li G, Lee T. Adaptive NN control for a class of strict-feedback discrete-time nonlinear systems. Automatica. 2003; 39: 807-819. http://dx.doi.org/10.1016/S0005-1098(03)00032-3
[15] Hsu C. Adaptive recurrent neural network control using a structure adaptation algorithm. Neural Computing and Applications. 2009; 18: 115-125. http://dx.doi.org/10.1007/s00521-007-0164-0
[16] Lin F, Wai R. Robust recurrent fuzzy neural network control for linear synchronous motor drive system. Neurocomputing. 2003; 50: 365-390. http://dx.doi.org/10.1016/S0925-2312(02)00572-6
[17] Lin F, Wai R, Chou W, et al. Adaptive backstepping control using recurrent neural network for linear induction motor drive. IEEE Transactions on Industrial Electronics. 2002; 49(1): 134-146. http://dx.doi.org/10.1109/41.982257
[18] Lin F, Shieh H, Shieh P, et al. An adaptive recurrent-neural-network motion controller for X-Y table in CNC machine. IEEE Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics. 2006; 36(2): 286-299. PMid:16602590. http://dx.doi.org/10.1109/TSMCB.2005.856719
[19] Hsu C, Cheng K. Recurrent fuzzy-neural approach for nonlinear control using dynamic structure learning scheme. Neurocomputing. 2008; 71: 3447-3459. http://dx.doi.org/10.1016/j.neucom.2007.10.014
[20] Zhang T, Ge S, Hang C. Adaptive neural network control for strict feedback nonlinear systems using backstepping design. Automatica. 2000; 36(12): 1835-1846. http://dx.doi.org/10.1016/S0005-1098(00)00116-3
[21] Zhang T, Ge S, Hang C. Adaptive neural network control for a class of low-triangular-structured nonlinear systems. IEEE Transactions on Neural Networks. 2006; 17(2): 509-514. PMid:16566476. http://dx.doi.org/10.1109/TNN.2005.863403
[22] Choi J, Farrell J. Adaptive observer backstepping control using neural networks. IEEE Transactions on Neural Networks. 2001; 12(5): 1103-1112. PMid:18249937. http://dx.doi.org/10.1109/72.950139
[23] Wang G. Adaptive NN control of uncertain nonlinear pure-feedback systems. Automatica. 2002; 38(4): 671-682. http://dx.doi.org/10.1016/S0005-1098(01)00254-0
[24] Park J, Sandberg L. Universal approximation using radial-basis-function networks. Neural Computation. 1991; 3(2): 246-257. http://dx.doi.org/10.1162/neco.1991.3.2.246
[25] Hartman E, Keeler J, Kowalski J. Robust and adaptive backstepping control for nonlinear systems using RBF neural networks. IEEE Transactions on Neural Networks. 2004; 15: 693-701. PMid:15384556. http://dx.doi.org/10.1109/TNN.2004.826215
[26] Wen G, Liu Y, Chen C. Direct adaptive robust NN control for a class of discrete-time nonlinear strict-feedback SISO systems. Neural Computing and Applications. 2012; 21: 1423-1431. http://dx.doi.org/10.1007/s00521-011-0596-4
[27] Zhang C, Yang F, Wu D. Adaptive neural network tracking control for a class of nonlinear systems. International Journal of Systems Science. 2010; 2(2): 143-158.
[28] Yip P, Hedrick J. Adaptive dynamic surface control: a simplified algorithm for adaptive backstepping control of nonlinear systems. International Journal of Control. 1998; 71(5): 959-979. http://dx.doi.org/10.1080/002071798221650
[29] Wang D, Huang J. Neural network based adaptive dynamic surface control for nonlinear systems in strict-feedback form. IEEE Transactions on Neural Networks. 2005; 16(1): 195-202. PMid:15732399. http://dx.doi.org/10.1109/TNN.2004.839354
[30] Li T, Wang D, Feng G, et al. A DSC approach to robust adaptive NN tracking control for strict-feedback nonlinear systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics. 2010; 40(3): 915-927. PMid:19887321. http://dx.doi.org/10.1109/TSMCB.2009.2033563
[31] Ge S, Lee T, Li G, et al. Adaptive NN control for a class of discrete-time nonlinear systems. International Journal of Control. 2003; 76(4): 334-354. http://dx.doi.org/10.1080/0020717031000073063
[32] Ge S, Zhang J, Lee T. Adaptive neural networks control for a class of MIMO nonlinear systems with disturbances in discrete-time. IEEE Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics. 2004; 34(4): 1630-1645. http://dx.doi.org/10.1109/TSMCB.2004.826827
[33] Yang C, Ge S, Xiang C, et al. Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach. IEEE Transactions on Neural Networks. 2008; 19(11): 1873-1886. PMid:18990642. http://dx.doi.org/10.1109/TNN.2008.2003290
[34] Zhang J, Ge S, Lee T. Output feedback control of a class of discrete MIMO nonlinear systems with triangular form inputs. IEEE Transactions on Neural Networks. 2005; 16(6): 1491-1503. PMid:16342490. http://dx.doi.org/10.1109/TNN.2005.852242