LESSONS OF EASTER ISLAND
March 17th, 2016
Consider the linear time-varying equation
$$\dot e(t) = A\,u^*(t)\,e(t), \qquad (H.1)$$
with
$$0 < u_{\min} \le u^*(t) \le u_{\max} \quad \forall\, t \ge 0 \qquad (H.2)$$
and such that $A$ is a stability matrix with $\mu$ a negative number greater than the largest real part of its eigenvalues. Use the Gronwall-Bellman inequality (Rugh 1996) to conclude that
$$\|e(t)\| \le \|e(0)\|\exp\left\{\mu\int_0^t u^*(\sigma)\,d\sigma\right\}. \qquad (H.3)$$
Considering (H.2), this implies
$$\|e(t)\| \le \|e(0)\|\, e^{\mu u_{\min} t}. \qquad (H.4)$$
Since $\mu < 0$, this establishes asymptotic stability of (H.1).
□ 
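The argument can be checked numerically on a scalar instance of (H.1). Everything concrete below (the choice $A = -1$, the particular $u^*(t)$, and $\mu = -0.9$) is an assumption of this sketch, not from the text:

```python
import math

# Numerical illustration of the bound (H.4) on a scalar instance of (H.1).
# Assumptions: A = -1 (a stability "matrix" of order 1), u*(t) oscillating
# in [umin, umax], and mu = -0.9, a negative number greater than the
# eigenvalue -1 of A.
A = -1.0
umin, umax = 0.5, 2.0
mu = -0.9

def ustar(t):
    """Admissible input satisfying 0 < umin <= u*(t) <= umax."""
    return umin + (umax - umin) * abs(math.sin(t))

def simulate(e0, horizon=10.0, dt=1e-4):
    """Euler integration of e'(t) = A u*(t) e(t); returns samples (t, |e(t)|)."""
    e, t, samples = e0, 0.0, []
    while t <= horizon:
        samples.append((t, abs(e)))
        e += dt * A * ustar(t) * e
        t += dt
    return samples

e0 = 3.0
# The exponential bound of (H.4): |e(t)| <= |e(0)| exp(mu * umin * t).
bound_ok = all(
    abs_e <= abs(e0) * math.exp(mu * umin * t) + 1e-9
    for t, abs_e in simulate(e0)
)
print(bound_ok)
```

Since $u^*(t) \ge u_{\min}$ and $\mu u_{\min} > -u_{\min}$ here, the simulated trajectory stays below the bound for every sample.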
Goodwin GC, Sin KS (1984) Adaptive filtering prediction and control. Prentice-Hall, New York
Ibragimov NH (1999) Elementary Lie group analysis and ordinary differential equations. Wiley, New York
Mosca E, Zappa G, Lemos JM (1989) Robustness of multipredictor adap...
The tracking dynamics are described by the following system of differential equations
$$\dot x_n = 0$$
$$\dot x_{n-1} = aR\left(1 - \frac{x_{n-1} - x_{n-2}}{x_n - x_{n-1}}\right)$$
$$\dot x_{n-2} = aR\left(1 - \frac{x_{n-2} - x_{n-3}}{x_n - x_{n-1}}\right) \qquad (G.1)$$
$$\vdots$$
$$\dot x_1 = aR\left(1 - \frac{x_1}{x_n - x_{n-1}}\right)$$
The first equation is already linear and provides an eigenvalue at the origin. Linearizing the other $n-1$ equations around the equilibrium point defined by
$$\bar x_i = \frac{i}{n}\,r, \qquad i = 1, \ldots, n$$
yields the $(n-1)\times(n-1)$ Jacobian matrix
$$J = \begin{bmatrix} -2n & n & 0 & 0 & \cdots & 0 \\ -n & -n & n & 0 & \cdots & 0 \\ -n & 0 & -n & n & \cdots & 0 \\ \vdots & & & \ddots & & \vdots \\ -n & 0 & \cdots & & 0 & -n \end{bmatrix} \qquad (G.2)$$
In order to compute the eigenvalues of this Jacobian matrix, start by observing that it can be written as
$$J = n(-I + A) \qquad (G.3)$$
where $I$ is the identity of order $n-1$ and $A$ $[(n-1)\times(n-1)]$ is the matrix
$$A = \begin{bmatrix} -1 & 1 & 0 & \cdots & 0 \\ -1 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -1 & 0 & \cdots & 0 & 1 \\ -1 & 0 & \cdots & 0 & 0 \end{bmatrix}$$
The characteristic polynomial of $J$ is obtained from
$$\det(sI - J) = \det\big(sI + nI - nA\big)\ldots$$
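A minimal numerical sketch of the factorisation (G.3) for a concrete $n$. The sign pattern of $J$ used here is an assumption recovered from the linearization (first column $-n$ throughout, an extra $-n$ on the diagonal, $+n$ on the superdiagonal, $-2n$ in the $(1,1)$ entry):

```python
import cmath

# Build J for a concrete n and check the factorisation J = n(-I + A) of (G.3).
n = 6
m = n - 1  # order of the Jacobian

J = [[0.0] * m for _ in range(m)]
for i in range(m):
    J[i][0] += -n          # coupling through the x_n - x_{n-1} denominator
    J[i][i] += -n          # diagonal term
    if i + 1 < m:
        J[i][i + 1] = n    # coupling to the neighbouring state

# A with first column -1 and superdiagonal +1, so that J = n(-I + A).
Amat = [[0.0] * m for _ in range(m)]
for i in range(m):
    Amat[i][0] += -1.0
    if i + 1 < m:
        Amat[i][i + 1] += 1.0

factorisation_ok = all(
    abs(J[i][j] - n * (-(1.0 if i == j else 0.0) + Amat[i][j])) < 1e-12
    for i in range(m) for j in range(m)
)

# With this A, each n-th root of unity lam != 1 satisfies A v = lam v with
# v_i = (lam**i - 1)/(lam - 1), so J = n(-I + A) has eigenvalues n(lam - 1),
# all with negative real part; the eigenvalue at the origin of the full
# system comes from the first (linear) equation.
lam = cmath.exp(2j * cmath.pi / n)
v = [(lam ** i - 1) / (lam - 1) for i in range(1, m + 1)]
Av = [sum(Amat[i][j] * v[j] for j in range(m)) for i in range(m)]
eig_ok = all(abs(Av[i] - lam * v[i]) < 1e-9 for i in range(m))
print(factorisation_ok, eig_ok)
```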
F.1 Proof of the WARTIC-i/o Control Law
Hereafter we prove Eq. (5.16), which yields a closed-form expression for the WARTIC-i/o control law. For that sake, consider the predictive model (5.15) and assume that $u$ is constant and equal to $u(k)$ over the prediction horizon, yielding
$$T_0(k+i) = a\,u(k)\sum_{j=1}^{i} R(k-1+j) + a\sum_{p=1}^{n-i} R(k-p)\,u(k-p) + \beta\,T_{in}(k+i-n). \qquad (F.1)$$
Assume now that the future values of radiation at time $k+1$ up to time $k+T$ (that are unknown at time $k$) are equal to $R(k)$. Equation (F.1) becomes
$$T_0(k+i) = a\,u(k)\,R(k)\,i + a\sum_{p=1}^{n-i} R(k-p)\,u(k-p) + \beta\,T_{in}(k+i-n)\ldots$$
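A small sketch of the two prediction formulas: `predict_f1` follows the form of (F.1), and `predict_f2` the simplification obtained when the unknown future radiation is frozen at $R(k)$. The coefficient values, the signal sequences, and the summation limits used are assumptions of this sketch:

```python
# Hypothetical model coefficients (not from the text).
a, beta = 0.01, 0.3

def predict_f1(i, k, u, R, Tin, n):
    """T0(k+i) = a u(k) sum_{j=1..i} R(k-1+j) + a sum_{p=1..n-i} R(k-p)u(k-p) + beta Tin(k+i-n)."""
    future = a * u[k] * sum(R[k - 1 + j] for j in range(1, i + 1))
    past = a * sum(R[k - p] * u[k - p] for p in range(1, n - i + 1))
    return future + past + beta * Tin[k + i - n]

def predict_f2(i, k, u, R, Tin, n):
    """Same prediction with the unknown future radiation frozen at R(k)."""
    future = a * u[k] * R[k] * i
    past = a * sum(R[k - p] * u[k - p] for p in range(1, n - i + 1))
    return future + past + beta * Tin[k + i - n]

# If the future radiation really equals R(k), the two predictions coincide.
k, n, i = 10, 6, 3
u = [0.5 + 0.01 * j for j in range(20)]
Tin = [20.0 + 0.1 * j for j in range(20)]
R = [800.0 + 5.0 * j for j in range(20)]
for j in range(1, i):
    R[k + j] = R[k]          # freeze future radiation at R(k)
match = abs(predict_f1(i, k, u, R, Tin, n) - predict_f2(i, k, u, R, Tin, n)) < 1e-9
print(match)
```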
E.1 Deduction of Equation (3.137)
Since, by the predictive model for the future control samples,
$$u(k+i-1) = \mu_{i-1}\,u(k) + \varphi_{i-1}^T s(k), \qquad (E.1)$$
we have
$$E\{u^2(k+i-1)\} = E\left\{\left(\mu_{i-1}u(k) + \varphi_{i-1}^T s(k)\right)^2\right\}$$
$$= E\left\{\mu_{i-1}^2 u^2(k) + 2u(k)\,\mu_{i-1}\varphi_{i-1}^T s(k) + s^T(k)\,\varphi_{i-1}\varphi_{i-1}^T s(k)\right\}$$
$$= E\{\mu_{i-1}^2\}\,u^2(k) + 2u(k)\,E\{\mu_{i-1}\varphi_{i-1}^T\}\,s(k) + s^T(k)\,E\{\varphi_{i-1}\varphi_{i-1}^T\}\,s(k)$$
$$= \left[\sigma_{\mu,i-1}^2 + \bar\mu_{i-1}^2\right]u^2(k) + 2u(k)\,\bar\mu_{i-1}\bar\varphi_{i-1}^T\, s(k) + s^T(k)\,E\{\varphi_{i-1}\varphi_{i-1}^T\}\,s(k), \qquad (E.2)$$
the minimization of the cost function according to Eq. (3.133), performed by equating $\frac{\partial J}{\partial u(k)}$ to zero, leads to
$$u(k) = -\frac{\sum_{i=1}^{T}\left(\theta_i\psi_i^T + \rho\,\mu_{i-1}\varphi_{i-1}^T\right)}{\sum_{i=1}^{T}\left(\theta_i^2 + \rho\,\mu_{i-1}^2\right)}\; s(k).$$
Since the data vector, $z(k) = [u(k)\ \ s^T(k)]^T$, used to estimate the predictive models' parameters is common to all models (actually, the vector used is $z(k-T)$, so the $T$-steps-ahead predictor can use the last output available to perform the estimation an...
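A numerical sketch of a control law of the MUSMAR form $u(k) = -F\,s(k)$, with the gain $F$ built from predictive-model parameter estimates; all numbers below are hypothetical, not from the text:

```python
# Hypothetical estimates for T = 2 predictors and a pseudostate of dimension 2.
rho = 0.1
theta = [0.8, 0.5]                    # theta_1, theta_2
psi = [[0.2, -0.1], [0.1, 0.3]]       # psi_1, psi_2
mu = [1.0, 0.9]                       # mu_0, mu_1
phi = [[0.0, 0.0], [0.05, -0.2]]      # phi_0, phi_1

# Gain F = sum_i (theta_i psi_i + rho mu_{i-1} phi_{i-1}) / sum_i (theta_i^2 + rho mu_{i-1}^2).
num = [sum(theta[i] * psi[i][j] + rho * mu[i] * phi[i][j] for i in range(2))
       for j in range(2)]
den = sum(theta[i] ** 2 + rho * mu[i] ** 2 for i in range(2))
F = [c / den for c in num]

def control(s):
    """u(k) = -F s(k)."""
    return -sum(F[j] * s[j] for j in range(2))

print(round(den, 6))
```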
In Mosca et al. (1989), Propositions 1 and 2 of Chap. 3, which characterize the possible convergence points of MUSMAR and the direction it yields for the progression of the controller gains, are proved using the ODE method for studying the convergence of stochastic algorithms. In this appendix we use a different method that has the advantage of not relying on the ODE method and is thus more self-contained.
According to the control strategy used,
$$y(t+j) \approx \theta_j(F_{k-1})\,u(t) + H_j(F_{k-1}, F_k)\,s(t) \qquad (D.1)$$
and
$$u(t+j-1) \approx \mu_{j-1}(F_k)\,u(t) + G_{j-1}(F_{k-1}, F_k)\,s(t), \qquad (D.2)$$
where
$$H_j(F_{k-1}, F_k) = \psi_j(F_{k-1}) + \theta_j(F_{k-1})\,F_k \qquad (D.3)$$
and
$$G_{j-1}(F_{k-1}, F_k) = \varphi_{j-1}(F_{k-1})\,F_k. \qquad (D.4)$$
Let $F_k$ be computed according to (3.104, 3.105)...
In this appendix we explain how the predictive models used by MUSMAR, (3.62), are obtained from the ARX model (3.9). As explained in Sect. 3.2.4, the MUSMAR adaptive control algorithm restricts the future control samples (with respect to the present discrete time, denoted $k$), from time $k+1$ up to $k+T-1$, to be given by a constant feedback of the pseudostate, leaving $u(k)$ free. In order to see how the predictive model (3.47) is modified by this assumption, start by observing that the pseudostate $s(k)$ defined in (3.95) satisfies the dynamic state equation
$$s(k+1) = \Phi_s\, s(k) + \Gamma_s\, u(k) + e_s\, e(k), \qquad (C.1)$$
in which
$$e_s^T = [1\ 0\ \cdots\ 0], \qquad (C.2)$$
$$\Phi_s = \begin{bmatrix} p^T \\ I_{n-1} \quad 0_{(n-1)\times(m+1)} \\ 0_{1\times(n+m)} \\ 0_{(m-1)\times n} \quad I_{m-1} \quad 0_{(m-1)\times 1} \end{bmatrix}, \qquad (C.3)$$
$$\Gamma_s = [b_0\ 0\ \cdots\ 0\ 1\ 0\ \cdots\ 0]^T, \qquad (C.4)$$
and
$$p^T = [-a_1 \cdots -a_n\ \ b_1 \cdots b_m]. \qquad (C.5)$$
The matrix entries ...
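A sketch assembling $\Phi_s$ and $\Gamma_s$ of (C.1)-(C.5) for a concrete ARX model and checking the recursion against the ARX difference equation. The pseudostate ordering $s(k) = [y(k) \ldots y(k-n+1),\ u(k-1) \ldots u(k-m)]^T$ and all coefficient values are assumptions of this sketch:

```python
# Hypothetical ARX orders and coefficients.
n, m = 3, 2
a_par = [0.5, -0.2, 0.1]     # a_1..a_n
b_par = [0.4, 0.3, -0.1]     # b_0..b_m
dim = n + m

p = [-ai for ai in a_par] + b_par[1:]     # p^T = [-a_1..-a_n, b_1..b_m]
Phi = [[0.0] * dim for _ in range(dim)]
Phi[0] = list(p)                          # first row: p^T
for i in range(1, n):
    Phi[i][i - 1] = 1.0                   # shift past outputs
for i in range(1, m):
    Phi[n + i][n + i - 1] = 1.0           # shift past inputs
Gamma = [0.0] * dim
Gamma[0] = b_par[0]                       # b_0
Gamma[n] = 1.0                            # row that stores u(k)

def step(s, uk):
    """s(k+1) = Phi_s s(k) + Gamma_s u(k)  (noise e(k) = 0 in this sketch)."""
    return [sum(Phi[r][c] * s[c] for c in range(dim)) + Gamma[r] * uk
            for r in range(dim)]

# Cross-check: the first component of s must follow the ARX recursion
# y(k+1) = -sum_i a_i y(k+1-i) + sum_j b_j u(k-j).
u_seq = [1.0, -0.5, 0.25, 0.8, -0.3, 0.6]
s = [0.0] * dim
y_state = []
for uk in u_seq:
    s = step(s, uk)
    y_state.append(s[0])

y_hist, u_hist, y_direct = [0.0] * n, [0.0] * m, []
for uk in u_seq:
    y_next = (-sum(a_par[i] * y_hist[i] for i in range(n))
              + b_par[0] * uk
              + sum(b_par[j] * u_hist[j - 1] for j in range(1, m + 1)))
    y_direct.append(y_next)
    y_hist = [y_next] + y_hist[:-1]
    u_hist = [uk] + u_hist[:-1]

consistent = max(abs(x - y) for x, y in zip(y_state, y_direct)) < 1e-12
print(consistent)
```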
For $A$, $B$, $C$ and $D$ matrices of convenient dimensions such that the indicated inversions exist, it holds that
$$(A + BCD)^{-1} = A^{-1} - A^{-1}B\left[DA^{-1}B + C^{-1}\right]^{-1}DA^{-1}. \qquad (B.1)$$
Proof. Right multiply the right-hand side of (B.1) by $A + BCD$ to get
$$\left(A^{-1} - A^{-1}B\left[DA^{-1}B + C^{-1}\right]^{-1}DA^{-1}\right)(A + BCD)$$
$$= I - A^{-1}B\left[DA^{-1}B + C^{-1}\right]^{-1}D + A^{-1}BCD - A^{-1}B\left[DA^{-1}B + C^{-1}\right]^{-1}DA^{-1}BCD$$
$$= I + A^{-1}B\left[DA^{-1}B + C^{-1}\right]^{-1}\left\{\left[DA^{-1}B + C^{-1}\right]CD - D - DA^{-1}BCD\right\} = I.$$
Now, left multiply the right-hand side of (B.1) by $A + BCD$ to get
$$(A + BCD)\left(A^{-1} - A^{-1}B\left[DA^{-1}B + C^{-1}\right]^{-1}DA^{-1}\right)\ldots$$
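A numerical check of the identity (B.1) on $2\times 2$ matrices; the particular matrices are arbitrary choices (made so that all the indicated inverses exist), not from the text:

```python
# Minimal 2x2 matrix helpers (pure Python, no external dependencies).
def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def msub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def minv(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, 0.0], [2.0, 1.0]]
C = [[1.0, 0.5], [0.0, 2.0]]
D = [[0.5, 1.0], [1.0, 0.0]]

lhs = minv(madd(A, mmul(mmul(B, C), D)))            # (A + BCD)^{-1}
Ai = minv(A)
inner = minv(madd(mmul(mmul(D, Ai), B), minv(C)))   # [D A^{-1} B + C^{-1}]^{-1}
rhs = msub(Ai, mmul(mmul(mmul(mmul(Ai, B), inner), D), Ai))
identity_ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
                  for i in range(2) for j in range(2))
print(identity_ok)
```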
These quantities are related by the matrix regression model written for all the available data
$$z = \Phi_{LS}\,\vartheta + v. \qquad (B.5)$$
With this notation the least squares functional (3.72) can be written
$$J_{LS}(\vartheta) = \frac{1}{2}\left(z - \Phi_{LS}\vartheta\right)^T M_{LS}\left(z - \Phi_{LS}\vartheta\right)\ldots$$
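A sketch of the weighted least squares estimate that minimises a functional of the form (3.72), i.e. $\hat\vartheta = (\Phi^T M \Phi)^{-1}\Phi^T M z$. The data, weights, and dimensions are assumptions of this sketch; $z$ is generated exactly from `theta_true`, so the minimiser must recover it:

```python
# Hypothetical regression problem: 4 data points, 2 parameters, diagonal M_LS.
theta_true = [1.5, -2.0]
Phi = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]   # regressor matrix
w = [1.0, 2.0, 0.5, 1.0]                                 # diagonal of M_LS
z = [sum(Phi[r][j] * theta_true[j] for j in range(2)) for r in range(4)]

# Normal equations (Phi^T M Phi) theta = Phi^T M z, solved for 2 parameters.
G = [[sum(w[r] * Phi[r][i] * Phi[r][j] for r in range(4)) for j in range(2)]
     for i in range(2)]
h = [sum(w[r] * Phi[r][i] * z[r] for r in range(4)) for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
theta_hat = [(G[1][1] * h[0] - G[0][1] * h[1]) / det,
             (G[0][0] * h[1] - G[1][0] * h[0]) / det]
recovered = all(abs(theta_hat[j] - theta_true[j]) < 1e-9 for j in range(2))
print(recovered)
```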
This appendix addresses the issue of solving the PDE (2.5).
Start by considering the homogeneous equation
to which the following ODE relating the independent variables t and x is associated
(A.2)
The first integral of (A.2) is a relation of the form
$$\varphi(x, t) = C$$
for $C$ an arbitrary constant, satisfied by any solution $x = x(t)$ of (A.2), where the function $\varphi$ is not identically constant for all the values of $x$ and $t$. In other words, the function $\varphi$ is constant along each solution of (A.2), with the constant $C$ depending on the solution. Since in the case of equation (A.1) there are 2 independent variables, there is only one functionally independent integral Ibragimov (1999). By integrating (A...
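As an illustration of the first-integral construction (the advection equation below is an assumed example, not the particular PDE (2.5)):

```latex
% Example: for the homogeneous advection equation
%   f_t + c f_x = 0,
% the associated characteristic ODE and its first integral are
\frac{dx}{dt} = c
\qquad\Longrightarrow\qquad
\varphi(x,t) = x - ct = C .
% Since \varphi is constant along each characteristic, any differentiable
% function F yields a solution f(x,t) = F(x - ct) of the PDE.
```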