
numerical optimisation Cheat Sheet (DRAFT)

Oral exam notes on numerical optimisation.

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Extended Rosenbrock Results

Now concerning the extended Rosenbrock function, the methods showed good behaviour and very fast convergence. The number of iterations is very small compared to the chained Rosenbrock function, and the final gradient norm is very low: of order 10^-12 for the modified Newton method with analytical derivatives, while it is of order 10^-9 for the same method with finite differences. The objective value at the solution found with the exact analytical gradient and Hessian is also closer to 0 than the one found with finite differences, although the centred finite differences converged faster. As expected, the gradient norm converged quadratically to 0 in both cases, since we are using a Newton method. Concerning the rate of convergence, we can say it is also quadratic: for the modified Newton method with analytical derivatives, for instance, the rate is low in the first few iterations and increases in the last ones, which explains the significant decrease in the gradient norm and the objective function.
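
The comparison above can be reproduced along the following lines. This is only a minimal sketch, not the code used for these experiments: the extended Rosenbrock definition, the step sizes h, and the Cholesky-based Hessian correction (adding tau * I until positive definiteness) are assumptions based on the standard formulation of the modified Newton method.

import numpy as np

def ext_rosenbrock(x):
    # Extended Rosenbrock: independent pairs (x[i], x[i+1]) for even i
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(0, len(x), 2))

def grad_analytical(x):
    # Exact gradient of the extended Rosenbrock function
    g = np.zeros_like(x)
    for i in range(0, len(x), 2):
        g[i] = -400.0 * x[i] * (x[i + 1] - x[i] ** 2) - 2.0 * (1.0 - x[i])
        g[i + 1] = 200.0 * (x[i + 1] - x[i] ** 2)
    return g

def grad_centred(f, x, h=1e-6):
    # Centred finite differences: (f(x + h e_i) - f(x - h e_i)) / (2 h)
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def hess_fd(grad, x, h=1e-5):
    # Hessian columns by centred differences of the gradient, then symmetrised
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (grad(x + e) - grad(x - e)) / (2.0 * h)
    return 0.5 * (H + H.T)

def modified_newton(x, grad, hess, tol=1e-12, max_iter=200):
    # Newton iteration with a Cholesky-based correction: add tau * I to the
    # Hessian until it is positive definite, then solve for the step.
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        tau, I = 0.0, np.eye(len(x))
        while True:
            try:
                np.linalg.cholesky(H + tau * I)
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-3)
        x = x + np.linalg.solve(H + tau * I, -g)
    return x

# Example run from the classical starting point (-1.2, 1, -1.2, 1, ...):
x0 = np.tile([-1.2, 1.0], 5)
x_star = modified_newton(x0, grad_analytical, lambda z: hess_fd(grad_analytical, z))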
 

Problem 3 Results

For this problem, the modified Newton method with finite differences behaved better than the modified Newton method with the analytical formulas. The first method reaches convergence in around 500 iterations, while the centred-difference method reached convergence in 23 iterations; however, the second method took around 10 times the time of the first, due to the approximations computed at each iteration. The resulting gradient norm is approximately of the same order for both, while the final objective function is lower for the second method than for the first. A strange behaviour is encountered in the first few iterations of the first method, manifested in an increase of the gradient norm and the objective function; only after around 70 iterations does the minimisation start. We could not figure out the reason for this, since we did not manage to obtain the exact local minimum, nor to establish whether there is one or several. The rate of convergence for the finite differences is almost quadratic when observing the mean over iterations, while the rate of convergence of the first method showed a strange behaviour. This can be due to surrogating the true error by the difference between x_{k+1} and x_k instead of the difference between x_k and the exact local minimum, as illustrated in the sketch below.
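
To make the last point concrete, here is a minimal sketch of how such a rate estimate can be computed, assuming xs is the sequence of iterates produced by one run; the surrogate error ||x_{k+1} - x_k|| replaces the unknown true error ||x_k - x*||, which is precisely the substitution that can distort the observed rate.

import numpy as np

def empirical_order(xs):
    # Surrogate errors e_k = ||x_{k+1} - x_k|| (the exact minimiser is unknown)
    e = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    # Three-term estimate p_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1});
    # values close to 2 suggest quadratic convergence.
    orders = []
    for k in range(1, len(e) - 1):
        if e[k - 1] > 0 and e[k] > 0 and e[k + 1] > 0 and e[k] != e[k - 1]:
            orders.append(np.log(e[k + 1] / e[k]) / np.log(e[k] / e[k - 1]))
    return orders

# Mean over iterations, as used in the discussion above:
# rate = np.mean(empirical_order(iterates))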