Oral exam on numerical optimisation.
This is a draft cheat sheet. It is a work in progress and is not finished yet.
Extended Rosenbrock Results
Now concerning the extended Rosenbrock function, the methods showed good behaviour and very fast convergence. The number of iterations is very small compared to the chained Rosenbrock function, and the final gradient norm is very low: of order 10^-12 for the Modified Newton method with analytical derivatives, while it is of order 10^-9 for the Modified Newton method with finite differences. Also, the objective function value found with the exact analytical gradient and Hessian is closer to 0 than the one found with finite differences, although the centred finite differences variant converged faster. As expected, the gradient norm converged quadratically to 0 in both cases, since we are using Newton's method. Concerning the rate of convergence, we can say it is also quadratic: for the Modified Newton method with analytical derivatives, for instance, the estimated rate is low in the first few iterations and increases in the last ones, which explains the significant final decrease in the gradient norm and the objective function.
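Since this comparison hinges on the Modified Newton method with analytical versus centred finite-difference derivatives, here is a minimal runnable sketch of that setup on the extended Rosenbrock function. The function form, the step sizes h, the Hessian shift strategy and the Armijo constants are all illustrative assumptions, not the exact choices used in the experiments.

```python
import numpy as np

def ext_rosenbrock(x):
    """Extended Rosenbrock (assumed form, even dimension n):
    f(x) = sum_{i=1}^{n/2} 100*(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2"""
    odd, even = x[0::2], x[1::2]
    return np.sum(100.0 * (even - odd**2) ** 2 + (1.0 - odd) ** 2)

def grad_cfd(f, x, h=1e-6):
    """Centred finite-difference gradient: g_i ~ (f(x+h e_i) - f(x-h e_i)) / (2h)."""
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def hess_cfd(f, x, h=1e-4):
    """Centred finite-difference Hessian, symmetrised afterwards."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return 0.5 * (H + H.T)

def modified_newton(f, x0, tol=1e-8, max_iter=500):
    """Modified Newton: shift the Hessian by tau*I until Cholesky succeeds
    (i.e. it is positive definite), then take an Armijo backtracking step."""
    x = x0.astype(float)
    history = [x.copy()]
    for _ in range(max_iter):
        g = grad_cfd(f, x)
        if np.linalg.norm(g) < tol:
            break
        H = hess_cfd(f, x)
        tau = 0.0
        while True:
            try:
                np.linalg.cholesky(H + tau * np.eye(x.size))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-3)  # increase the shift until PD
        p = np.linalg.solve(H + tau * np.eye(x.size), -g)  # descent direction
        alpha = 1.0
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5  # Armijo backtracking
        x = x + alpha * p
        history.append(x.copy())
    return x, history

x_star, hist = modified_newton(ext_rosenbrock, np.tile([-1.2, 1.0], 5))
print(len(hist) - 1, ext_rosenbrock(x_star))
```

Swapping grad_cfd/hess_cfd for the analytical gradient and Hessian gives the other variant of the comparison.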
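To support the quadratic-convergence claim numerically, one can estimate the order p from three consecutive gradient norms, since order-p convergence means ||g_{k+1}|| ~ C*||g_k||^p. A small sketch; the input would be the gradient-norm history of a run like the one above.

```python
import numpy as np

def empirical_order(norms):
    """Estimate the convergence order p from consecutive residual norms,
    using p_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1}); p ~ 2 means quadratic."""
    e = np.asarray(norms, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Example with an artificially quadratic sequence e_{k+1} = e_k^2:
print(empirical_order([1e-1, 1e-2, 1e-4, 1e-8]))  # -> [2. 2.]
```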
Problem 3 Results
For this problem, the Modified Newton method with finite differences behaved better than the Modified Newton method with analytical formulas. We can see that the analytical method reaches convergence in around 500 iterations while the centred finite differences method reaches convergence in 23 iterations; however, the finite differences method took around 10 times the time of the analytical one, due to the approximations made at each iteration. The resulting gradient norms are of approximately the same order, while the final objective function value is smaller for the finite differences method than for the analytical one. A strange behaviour is encountered in the first few iterations of the analytical method, manifested in an increase of the gradient norm and of the objective function; only after around 70 iterations does the minimisation start. We could not figure out the reason for this, since we did not manage to obtain the exact local minimum, nor to determine whether there is one or several. The rate of convergence of the finite differences method is almost quadratic when observing the mean over iterations, while the analytical method showed a strange behaviour of the rate of convergence. This can be due to the substitution of the true error by the difference between x_{k+1} and x_k, instead of the difference between x_k and the exact local minimum.
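As the paragraph notes, with the minimiser x* unknown the rate is estimated from the step lengths ||x_{k+1} - x_k|| instead of the true errors ||x_k - x*||, which can distort the estimates. A sketch of that surrogate estimator; the history argument is a hypothetical list of iterates, e.g. from the modified_newton sketch above.

```python
import numpy as np

def surrogate_order(history):
    """Estimate the order p with e_k = ||x_{k+1} - x_k|| in place of the
    unknown true error ||x_k - x*||; line-search damping and finite-
    difference noise can make these per-iteration estimates erratic."""
    X = np.asarray(history, dtype=float)
    e = np.linalg.norm(np.diff(X, axis=0), axis=1)  # step lengths
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Usage (hypothetical): x_opt, hist = modified_newton(f, x0)
#                       print(np.mean(surrogate_order(hist)))
```

Averaging these estimates over the iterations, as done in the report, smooths out some of the erratic behaviour but cannot fully compensate for the surrogate error.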