MUSMAR as a Newton Minimization Algorithm

In Mosca et al. (1989), Propositions 1 and 2 of Chap. 3, which characterize the possible convergence points of MUSMAR and the direction it yields for the progression of the controller gains, are proved using the ODE method for studying the convergence of stochastic algorithms. In this appendix we use a different method that has the advantage of not relying on the ODE method and is therefore more general.

D.1 Proof of Proposition 1

According to the control strategy used, $u(t) = F_k s(t) + \eta(t)$, with $\eta(t)$ uncorrelated with $s(t)$. Hence

$y(t+j) \approx \theta_j(F_{k-1})\,\eta(t) + H_j(F_{k-1}, F_k)\,s(t)$ (D.1)

and

$u(t+j-1) \approx \mu_{j-1}(F_{k-1})\,\eta(t) + G_{j-1}(F_{k-1}, F_k)\,s(t),$ (D.2)

where

$H_j(F_{k-1}, F_k) = \psi_j(F_{k-1}) + \theta_j(F_{k-1}) F_k$ (D.3)

and

$G_{j-1}(F_{k-1}, F_k) = \phi_{j-1}(F_{k-1}) + \mu_{j-1}(F_{k-1}) F_k.$ (D.4)

Let $F_k$ be computed according to (3.104, 3.105). By adding and subtracting $\theta_j^2(F_{k-1})F_{k-1}$ and $\mu_j^2(F_{k-1})F_{k-1}$, this becomes

$F_k = -\dfrac{1}{\alpha(F_{k-1})}\left[\sum_{j=1}^{T}\theta_j(F_{k-1})H_j(F_{k-1},F_{k-1}) + \rho\sum_{j=1}^{T-1}\mu_j(F_{k-1})G_j(F_{k-1},F_{k-1}) - \left(\sum_{j=1}^{T}\theta_j^2(F_{k-1}) + \rho\sum_{j=1}^{T-1}\mu_j^2(F_{k-1})\right)F_{k-1}\right].$ (D.5)

If $\rho F_{k-1}/\alpha(F_{k-1})$ is added and subtracted, (D.5) becomes

$F_k = F_{k-1} - \dfrac{1}{\alpha(F_{k-1})}\left[\sum_{j=1}^{T}\theta_j(F_{k-1})H_j(F_{k-1},F_{k-1}) + \rho\sum_{j=1}^{T-1}\mu_j(F_{k-1})G_j(F_{k-1},F_{k-1}) + \rho F_{k-1}\right].$ (D.6)

The gradient $\nabla J_T(F)$ of the receding-horizon cost $J_T(t)$ with respect to the controller gains is given by

$\nabla J_T(F_{k-1}) = 2\sum_{j=1}^{T}\left(\theta_j(F_{k-1})\,E[y(t+j)s(t)] + \rho\,\mu_{j-1}(F_{k-1})\,E[u(t+j-1)s(t)]\right)$ (D.7)

or, equivalently, by

$\nabla J_T(F_{k-1}) = 2R_s\left[\sum_{j=1}^{T}\theta_j(F_{k-1})H_j(F_{k-1},F_k) + \rho\sum_{j=1}^{T-1}\mu_j(F_{k-1})G_j(F_{k-1},F_k) + \rho F_k\right].$ (D.8)

Here, (D.3, D.4) and (3.105) are used together with

$E[y(t+j)s(t)] = R_s H_j(F_{k-1}, F_k)$ (D.9)

and

$E[u(t+j)s(t)] = R_s G_j(F_{k-1}, F_k).$ (D.10)

Since $\mu_0 = 1$ and $\phi_0 = 0$, so that $G_0(F_{k-1}, F_k) = F_k$, the conclusion follows from (D.8) and (D.6).
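The algebraic manipulation leading from the least-squares form of the gain update to the Newton-like form (D.6) can be checked numerically. The sketch below is an illustration for a hypothetical scalar case: the predictor coefficients $\theta_j$, $\psi_j$, $\mu_j$, $\phi_j$ are random stand-ins, and $\alpha$ is assumed to have the usual MUSMAR form $\alpha(F) = \sum_j \theta_j^2 + \rho\,(1 + \sum_j \mu_j^2)$; if (3.105) defines $\alpha$ differently, the check must be adapted accordingly.

```python
# Numerical check: the MUSMAR least-squares gain update equals the
# Newton-like form (D.6) after the add-and-subtract steps (D.5)-(D.6).
# All quantities are hypothetical scalar stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 5
rho = 0.3
theta = rng.normal(size=T)      # theta_j(F_{k-1}), j = 1..T
psi = rng.normal(size=T)        # psi_j(F_{k-1})
mu = rng.normal(size=T - 1)     # mu_j(F_{k-1}), j = 1..T-1
phi = rng.normal(size=T - 1)    # phi_j(F_{k-1})
F_prev = 0.7                    # F_{k-1}

# Assumed form of alpha, cf. (3.105).
alpha = theta @ theta + rho * (1.0 + mu @ mu)

# Least-squares form of the update (cf. (3.104)-(3.105)).
F_ls = -(theta @ psi + rho * (mu @ phi)) / alpha

# Newton-like form (D.6), with H_j, G_j from (D.3)-(D.4)
# evaluated at (F_{k-1}, F_{k-1}).
H = psi + theta * F_prev
G = phi + mu * F_prev
F_newton = F_prev - (theta @ H + rho * (mu @ G) + rho * F_prev) / alpha

assert np.isclose(F_ls, F_newton)
```

The identity holds exactly for any coefficient values, since adding and subtracting the terms $\theta_j^2 F_{k-1}$, $\mu_j^2 F_{k-1}$ and $\rho F_{k-1}$ only regroups the expression.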

D.2 Proof of Proposition 2

Let iterations (3.110) be initialized in a neighborhood of a local minimum $F^*$ of $J_\infty$, small enough that $J_\infty$ has a single isolated minimum in it. In this region, define the candidate Lyapunov function

$V(F) = J_\infty(F) - J_\infty(F^*).$ (D.11)

This function is continuous and differentiable with respect to $F$ and vanishes at $F = F^*$, where it has a local minimum. Furthermore, it is strictly decreasing along the trajectories of (3.110). Indeed,

$V(F_k) - V(F_{k-1}) = J_\infty(F_k) - J_\infty(F_{k-1})$

$= (\nabla J_\infty(F_{k-1}))' (F_k - F_{k-1}) + o(F_k - F^*)$

$= -\dfrac{1}{2\alpha(F_{k-1})} (\nabla J_\infty(F_{k-1}))' R_s^{-1}(F_{k-1}) \nabla J_\infty(F_{k-1}) + o(F_k - F^*).$ (D.12)

Hence, since it can be proved that $R_s > 0$ (Mosca et al. 1989), in a sufficiently small neighborhood of $F^*$,

$V(F_k) - V(F_{k-1}) < 0$ (D.13)

and by Lyapunov's Direct Method the sequence $F_k$ converges to $F^*$.

In an analogous way, around a local maximum $F_{\max}$ one can define the function

$W(F) = -J_\infty(F) + J_\infty(F_{\max}).$ (D.14)

This function is continuous and differentiable and has a local minimum at $F_{\max}$, but now, in a neighborhood of the maximum, it may be proved by arguments similar to those above that (3.110) yields

$W(F_k) - W(F_{k-1}) > 0.$ (D.15)

This inequality allows one to conclude that $F_{\max}$ is unstable in the sense of Lyapunov. Hence, recursion (3.110) does not yield a sequence of gains that converges to a maximum.
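Both behaviors can be illustrated numerically. The sketch below is a toy example only: it iterates a step of the form $F_k = F_{k-1} - \frac{1}{2\alpha} R_s^{-1} \nabla J(F_{k-1})$ on a hypothetical scalar quadratic cost, checking that $V$ decreases toward a minimum (D.13) and that $W$ increases near a maximum (D.15). The values of $R_s$, $\alpha$, $F^*$, $F_{\max}$ and the cost itself are illustrative stand-ins, not the actual $J_\infty$ of Chap. 3.

```python
# Toy illustration of (D.11)-(D.15): Lyapunov descent toward a local
# minimum and instability of a local maximum under a Newton-like step.
# All quantities below are hypothetical scalar stand-ins.

Rs = 2.0       # stand-in for the (positive) covariance Rs
alpha = 1.5    # stand-in for alpha(F) > 0
F_star = 0.4   # minimizer of the toy cost

def J(F):
    # Toy quadratic playing the role of J_infty near a minimum.
    return (F - F_star) ** 2

def grad_J(F):
    return 2.0 * (F - F_star)

def step(F, grad):
    # One iteration of the Newton-like recursion (cf. (3.110)).
    return F - grad(F) / (2.0 * alpha * Rs)

# (D.13): V(F_k) - V(F_{k-1}) < 0 along the iterates.
F = 1.0
V_prev = J(F) - J(F_star)
for _ in range(50):
    F = step(F, grad_J)
    V = J(F) - J(F_star)
    assert V < V_prev          # strict Lyapunov decrease
    V_prev = V
assert abs(F - F_star) < 1e-3  # convergence to the minimum

# (D.15): around a local maximum F_max of the toy cost, W increases,
# so the iterates are pushed away from F_max.
F_max = -0.5

def J_cap(F):
    # Toy cost with a local maximum at F_max.
    return -(F - F_max) ** 2

def grad_J_cap(F):
    return -2.0 * (F - F_max)

F = F_max + 1e-3               # start near, but not at, the maximum
W_prev = -J_cap(F) + J_cap(F_max)
for _ in range(10):
    F = step(F, grad_J_cap)
    W = -J_cap(F) + J_cap(F_max)
    assert W > W_prev          # W strictly increases: instability
    W_prev = W
```

Each step contracts $F - F^*$ by a constant factor in the first loop and expands $F - F_{\max}$ in the second, which is the scalar analogue of the descent and instability arguments above.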
