Exercise 2: Nonlinear Optimization and Newton-type Methods
Prof. Dr. Moritz Diehl, Florian Messerer, Andrea Zanelli, Dimitris Kouzoupis

In this exercise we will start using solvers for nonlinear and nonconvex optimization problems and we will implement a simple Newton-type algorithm for unconstrained problems.
1. The Rosenbrock problem. Consider the following unconstrained optimization problem:
min_{x,y} f(x, y) := (1 − x)^2 + 100 (y − x^2)^2. (1)
Such a problem is commonly referred to as the Rosenbrock problem. Have a look at the script provided with this exercise, which formulates (1) using CasADi and solves it with IPOPT, a solver for nonlinear nonconvex optimization problems. In this exercise we will implement a simple Newton-type algorithm that can be used to solve such a problem.
(a) Compute on paper the gradient of f and its Hessian.
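For cross-checking the pen-and-paper computation, differentiating f in (1) termwise gives:

```latex
\nabla f(x,y) =
\begin{pmatrix}
-2(1-x) - 400\,x\,(y - x^2) \\
200\,(y - x^2)
\end{pmatrix},
\qquad
\nabla^2 f(x,y) =
\begin{pmatrix}
2 - 400\,(y - x^2) + 800\,x^2 & -400\,x \\
-400\,x & 200
\end{pmatrix}.
```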
(b) Implement two MATLAB functions that take as input arguments x and y and return ∇f(x,y) and ∇2f(x,y) respectively.
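The exercise asks for MATLAB functions; as a language-neutral cross-check, the same formulas can be sketched in Python (the function names here are placeholders, not part of the exercise):

```python
def rosenbrock_grad(x, y):
    # Gradient of f(x, y) = (1 - x)^2 + 100 (y - x^2)^2
    return [-2.0 * (1.0 - x) - 400.0 * x * (y - x * x),
            200.0 * (y - x * x)]

def rosenbrock_hess(x, y):
    # Exact Hessian of the same function, as a 2x2 nested list
    return [[2.0 - 400.0 * (y - x * x) + 800.0 * x * x, -400.0 * x],
            [-400.0 * x, 200.0]]
```

At the minimizer (1, 1) the gradient vanishes, which is a quick sanity check for an implementation.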
(c) To simplify notation we introduce w = (x,y). We now want to numerically solve the optimization problem by finding a point w∗ at which ∇f(w∗) = 0. Implement the following Newton-type method:
w_{k+1} = w_k − M_k^{-1} ∇f(w_k), (2)
where M_k ≈ ∇²f(w_k) is an approximation of the exact Hessian. Test your implementation with two different Hessian approximations: i) M_k = ρ I_2, with I_2 ∈ R^{2×2} the identity matrix, for different values of ρ ∈ R_{++}, and ii) M_k = ∇²f(w_k). Initialize the iterates at w_0 = (1, 1.1)^T and run the algorithm for 1000 iterations. Plot the iterates in the x-y plane. When using the fixed Hessian approximation, does the algorithm converge for ρ = 100? And for ρ = 500?
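The iteration (2) with both Hessian choices can be sketched as follows (Python for illustration; the MATLAB version is analogous, and all helper names are placeholders). The 2×2 linear solve is written out via Cramer's rule to keep the sketch dependency-free:

```python
# Newton-type iteration (2): w_{k+1} = w_k - M_k^{-1} grad f(w_k)

def grad_f(w):
    x, y = w
    return [-2.0 * (1.0 - x) - 400.0 * x * (y - x * x),
            200.0 * (y - x * x)]

def hess_f(w):
    x, y = w
    return [[2.0 - 400.0 * (y - x * x) + 800.0 * x * x, -400.0 * x],
            [-400.0 * x, 200.0]]

def solve2(M, b):
    # Solve the 2x2 system M d = b via Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def newton_type(w0, M_fun, iters):
    w = list(w0)
    history = [tuple(w)]                 # iterates, for the x-y plot
    for _ in range(iters):
        g = grad_f(w)
        d = solve2(M_fun(w), [-g[0], -g[1]])
        w = [w[0] + d[0], w[1] + d[1]]
        history.append(tuple(w))
    return w, history

# ii) exact Hessian: M_k = hess_f(w_k)
w_exact, _ = newton_type([1.0, 1.1], hess_f, 20)

# i) fixed approximation M_k = rho * I_2 (a gradient step of size 1/rho);
#    rerun with rho = 100 and rho = 500 and inspect the iterates
rho = 500.0
w_fixed, _ = newton_type([1.0, 1.1], lambda w: [[rho, 0.0], [0.0, rho]], 1000)
```

With the exact Hessian the iterates converge rapidly to (1, 1) from the given starting point; plotting `history` for the fixed approximation shows how the behavior depends on ρ.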
(d) Use now CasADi to compute the gradient and Hessian of f and use it in your implementation of the Newton method.
Hint: once you have created a CasADi expression, you can compute its Jacobian and Hessian by calling the CasADi functions jacobian and hessian:
x = MX.sym('x', 2, 1);
expr = sin(x(1))*x(2);
j_expr = jacobian(expr, x);
J = Function('J', {x}, {j_expr});
2. A simple dynamic optimization problem. Consider the problem of finding the optimal way of throwing two balls from different locations such that their distance after a fixed time T is minimized. The dynamics of each ball can be modeled by the following differential equations:
ṗ_iy = v_iy,  ṗ_iz = v_iz,
v̇_iy = −d_i (v_iy − w) ||(v_iy − w, v_iz)||,
v̇_iz = −d_i v_iz ||(v_iy − w, v_iz)|| − g,  i = 1, 2,

where p_iy and p_iz represent the y and z coordinates of the i-th ball, and v_iy and v_iz the components of its velocity. The two balls are subject to drag forces with drag coefficients d1 and d2, a side wind w, and the gravitational acceleration g. In order to achieve the desired goal, we formulate the following optimization problem:
min_{v1(0), v2(0)} ||p1(T) − p2(T)||_2^2 (3a)
s.t. p1z(T) ≥ 0, p2z(T) ≥ 0, (3b)
||v1(0)||_2 ≤ v̄, ||v2(0)||_2 ≤ v̄. (3c)
(a) A template MATLAB function that takes the initial velocities of the balls as input and returns their final positions at time T is provided with this exercise. This function can be used with both numerical and CasADi symbolic inputs. Complete the provided template and use it to generate a CasADi expression for p(T). Use N = 100 equidistant intermediate steps and T = 0.5 s. Set d1 = 0.1 m^−1, d2 = 0.5 m^−1 and w = 2 m/s.
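A language-neutral sketch of what such a simulator computes is given below (Python rather than the provided MATLAB template; the drag model — quadratic drag acting on the wind-relative velocity — and the function name are assumptions here, so check them against the template before relying on them):

```python
import math

# Simulate one ball from t = 0 to t = T with N fixed RK4 steps.
# Assumed dynamics (verify against the provided template):
#   pdot = v,  vdot = -d * ||v - (w, 0)|| * (v - (w, 0)) - (0, g)

def ball_final_position(p0, v0, d, w, T=0.5, N=100, g=9.81):
    def rhs(s):
        py, pz, vy, vz = s
        ry, rz = vy - w, vz              # velocity relative to the side wind
        speed = math.hypot(ry, rz)
        return [vy, vz, -d * speed * ry, -d * speed * rz - g]

    s = [p0[0], p0[1], v0[0], v0[1]]
    h = T / N
    for _ in range(N):
        k1 = rhs(s)
        k2 = rhs([s[i] + 0.5 * h * k1[i] for i in range(4)])
        k3 = rhs([s[i] + 0.5 * h * k2[i] for i in range(4)])
        k4 = rhs([s[i] + h * k3[i] for i in range(4)])
        s = [s[i] + (h / 6.0) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(4)]
    return s[0], s[1]
```

Setting d = 0 and w = 0 recovers the drag-free ballistic arc, which gives a simple analytic check of the integrator.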
(b) Using CasADi, formulate the described dynamic optimization problem (3) and solve it using IPOPT. Fix v̄ = 15 m/s and p1(0) = [0, 0]^T, p2(0) = [10, 0]^T. Once you have solved the optimization problem, simulate the system for the optimal initial velocities and plot the resulting trajectories in space.
Hint: you can have a look at the constrained Rosenbrock example provided with this exercise to learn how to formulate constrained problems in CasADi.
(c) [Bonus] Consider the case where there is no drag (d1 = d2 = 0 m^−1). What kind of optimization problem does (3) become?
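A quick way to start reasoning about the drag-free case: with d = 0 the velocity dynamics reduce to constant gravitational acceleration, so the final position has a closed form that is affine in the initial velocity (a sketch under that assumption; the function name is a placeholder):

```python
# Without drag, vdot = (0, -g), hence p(T) = p(0) + v(0) * T - (0, g*T^2/2):
# p(T) is affine in the decision variables v1(0), v2(0), so the objective
# ||p1(T) - p2(T)||^2 becomes a convex quadratic in them.
def final_position_no_drag(p0, v0, T=0.5, g=9.81):
    return (p0[0] + v0[0] * T,
            p0[1] + v0[1] * T - 0.5 * g * T * T)
```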
(d) [Bonus] Change (3) into an unconstrained problem by removing (3b) and (3c). Set ||v1(0)||_2 = v̄ and ||v2(0)||_2 = v̄ and reformulate (3) such that the angles α1 := arccos(v1y(0)/||v1(0)||) and α2 := arccos(−v2y(0)/||v2(0)||) are the only decision variables. In this way a two-dimensional dynamic optimization problem is obtained. Use the Newton-type method implemented at point 1.b to solve this problem.
