Question 1 (6 points)
The negative of a convex function is known to be a concave function, and the negative of a quasiconvex function is known to be a quasiconcave function. For each of the following functions, determine whether it is convex, concave, quasiconvex, or quasiconcave.
(a) (1 point) The function f : R → R given by
f(x) = e^x − 1.
(b) (1 point) The function given by
f(x) = x_1 x_2.
(c) (1 point) The function given by
.
(d) (1 point) The function given by
.
(e) (1 point) The function f : R × R+ → R given by
.
(f) (1 point) The function given by
where 0 ≤ α ≤ 1.
Question 2 (7 points)
Let f : R^n → R. Prove the following theorems.
(a) (1 point) If f is continuously differentiable in an open neighbourhood of a local minimiser x∗, then the gradient ∇f(x∗) = 0.
(b) (2 points) If the Hessian H of f is continuous in an open neighbourhood of a local minimiser x∗, then the gradient ∇f(x∗) = 0 and the Hessian H at x = x∗ is positive semi-definite.
(c) (2 points) If f is differentiable and convex, then x∗ is a global minimiser if and only if
∇f(x∗) = 0.
(d) (2 points) If f is twice differentiable, then f is convex if and only if the Hessian of f is positive semi-definite for all x ∈ R^n.
Question 3 (2 points)
Let f_0, f_1, …, f_n : R → R be continuous functions. Consider the problem of approximating f_0 as a linear combination of f_1, …, f_n. For α ∈ R^n, we say that
f = α_1 f_1 + ··· + α_n f_n
approximates f_0 with tolerance ε > 0 over the interval [0, T] if
|f(t) − f_0(t)| ≤ ε for 0 ≤ t ≤ T.
For a fixed tolerance ε > 0, we define the approximation width W(α) as the largest T such that f approximates f_0 over the interval [0, T], that is,
W(α) = sup{ T : |α_1 f_1(t) + ··· + α_n f_n(t) − f_0(t)| ≤ ε for 0 ≤ t ≤ T }.
Show that W is quasiconcave.
Question 4 (2 points)
Choose a suitable numerical method to find the global minimum of the function
correct to 3 decimal places. Explain your choice.
Question 5 (8 points)
Suppose f is unimodal and differentiable. Consider the following algorithm, which explicitly uses f′, for the case where the minimum point x∗ is known to lie in [a_0, b_0]. The essential idea is to approximate the function f on the interval [a_{k−1}, b_{k−1}] with a cubic polynomial of the form
P(x) = α_{k−1}(x − a_{k−1})^3 + β_{k−1}(x − a_{k−1})^2 + γ_{k−1}(x − a_{k−1}) + ρ_{k−1}
which has the same value and derivative as f at the endpoints a_{k−1} and b_{k−1}. The minimum point c_{k−1} of this cubic is then used to decide how to shrink the interval [a_{k−1}, b_{k−1}] to [a_k, b_k]; a minimal MATLAB sketch of one such step is given after this paragraph.
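To make the interval update concrete, here is a minimal MATLAB sketch of one step of the scheme just described. It assumes f and its derivative are supplied as function handles f and df; the coefficient formulas are derived here directly from the four interpolation conditions and are not necessarily in the form the assignment expects in part (c).

% One step of the cubic-interpolation bracketing scheme described above.
% f and df (the derivative f') are assumed to be function handles; the
% coefficients are obtained from P(a)=f(a), P(b)=f(b), P'(a)=f'(a),
% P'(b)=f'(b) and may not match the form expected in part (c).
function [a, b] = cubic_step(f, df, a, b)
    h   = b - a;
    rho = f(a);                         % P(a)  = f(a)
    gam = df(a);                        % P'(a) = f'(a)
    D   = f(b) - rho - gam*h;           % = alp*h^3 + bet*h^2
    E   = df(b) - gam;                  % = 3*alp*h^2 + 2*bet*h
    alp = (E*h - 2*D)/h^3;
    bet = (3*D - E*h)/h^2;

    % Minimiser of P: the root s (measured from a) of
    % P'(s) = 3*alp*s^2 + 2*bet*s + gam = 0 at which P'' = 6*alp*s + 2*bet > 0.
    s = roots([3*alp, 2*bet, gam]);
    s = real(s(abs(imag(s)) < 1e-12));  % keep real stationary points
    s = s(6*alp*s + 2*bet > 0);         % keep the local minimiser
    c = a + s(1);

    if df(c) > 0        % minimum of f lies to the left of c
        b = c;
    else                % minimum of f lies to the right of c
        a = c;
    end
end

Repeating cubic_step until b − a is smaller than the required tolerance gives the full bracketing loop needed, for example, in part (d).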
(a) (1 point) Discuss why, if f′(c_{k−1}) > 0, we should set a_k = a_{k−1} and b_k = c_{k−1}, and otherwise set a_k = c_{k−1} and b_k = b_{k−1}.
(b) (1 point) Show that
(c) (1 point) Find formulas for α_{k−1}, β_{k−1}, γ_{k−1} and ρ_{k−1} in terms of a_{k−1} and b_{k−1}.
(d) (1 point) Use this method to find the minimum of
on the interval [−2.4,−1.6] correct to 5 decimal places.
(e) (2 points) Compare this method with Newton’s method in terms of assumptions about f and convergence rate.
(f) (2 points) Compare this method with Golden Section Search in terms of assumptions about f and convergence rate.
Question 6 (2 points)
Consider the following function
Produce an informative Matlab plot of g(y) = min_x f(x, y), which gives the minimum of f(x, y) over x as y changes.
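Since the definition of f(x, y) is not reproduced above, the following MATLAB sketch only illustrates the structure of such a plot: for each y on a grid, minimise over x with fminbnd and plot the resulting values. The handle f, the x-bracket and the y-range below are placeholders, not the data from the question.

% Sketch of a plot of g(y) = min_x f(x, y).
f   = @(x, y) (x - y).^2 + sin(x);   % placeholder; use the f from the question
xlo = -10;  xhi = 10;                % assumed bracket for the inner minimisation
yv  = linspace(-5, 5, 200);          % assumed range of y

gv = zeros(size(yv));
for k = 1:numel(yv)
    [~, gv(k)] = fminbnd(@(x) f(x, yv(k)), xlo, xhi);   % g(y) = min_x f(x, y)
end

plot(yv, gv, 'LineWidth', 1.5);
xlabel('y'); ylabel('g(y) = min_x f(x, y)');
title('Minimum of f(x, y) over x as y varies');
grid on;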
Question 7 (3 points)
The length, L, of the longest ladder that can pass around the corner of two corridors depends on the angle α shown in the figure below.
Produce a Matlab plot of L versus α ranging from 45° to 135° by first solving a minimisation problem using numerical methods that we have discussed so far.
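The figure is not reproduced here, so the geometry has to be assumed. A common setup has two corridors of widths w1 and w2 meeting at an interior angle α; a ladder touching the inner corner at angle θ to one wall then has length w1/sin θ + w2/sin(α − θ), and the longest ladder that fits is the minimum of this length over θ. Under that assumption, with placeholder widths w1 = w2 = 1, a MATLAB sketch is:

% L(alpha) for the ladder problem under the assumed corridor geometry.
w1 = 1;  w2 = 1;                          % placeholder corridor widths

alphas = deg2rad(45:135);                 % corner angle from 45 to 135 degrees
L = zeros(size(alphas));
for k = 1:numel(alphas)
    a   = alphas(k);
    len = @(theta) w1./sin(theta) + w2./sin(a - theta);
    % Longest ladder = minimum ladder length over the approach angle theta.
    [~, L(k)] = fminbnd(len, 1e-3, a - 1e-3);
end

plot(rad2deg(alphas), L, 'LineWidth', 1.5);
xlabel('\alpha (degrees)'); ylabel('L(\alpha)');
title('Longest ladder vs corridor angle \alpha');
grid on;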
Question 8 (0 points)
A popular global optimization algorithm for difficult functions, especially those with many local minima, is the method of simulated annealing. It requires neither derivatives nor an initial guess that is sufficiently close to the minimum. Suppose f : R^n → R has a global minimum at x∗ and the k-th iterate x_k has been computed. The method iterates by the following scheme (a minimal MATLAB sketch of one iteration is given after the list):
1. Generating a number of random points u_1, u_2, …, u_m in a large neighbourhood of x_k.
2. Computing f(u_1), f(u_2), …, f(u_m).
3. Finding the index j such that f(u_j) = min{f(u_1), f(u_2), …, f(u_m)}.
4. Assigning x_{k+1} = u_j if f(x_k) > f(u_j). Otherwise assigning a probability p_i, defined in terms of a positive real α, to u_i for each i = 1, …, m, then making a random choice among u_1, u_2, …, u_m according to the probabilities p_i and assigning this randomly chosen u_ℓ to be x_{k+1}.
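Here is a minimal MATLAB sketch of one iteration of the scheme above, for f : R^n → R. The formula for p_i is not reproduced in the text, so a Boltzmann-style weight exp(−α(f(u_i) − f(x_k))) is used purely as an assumed stand-in, and the neighbourhood radius r is a parameter the caller must choose.

% One iteration of the simulated-annealing scheme described above.
% xk is a 1-by-n row vector; r (neighbourhood radius) and the Boltzmann-style
% probabilities are assumptions, since the probability formula is not given here.
function xnext = sa_step(f, xk, m, r, alpha)
    n  = numel(xk);
    U  = xk + r*(2*rand(m, n) - 1);      % m random points in a box around xk
    fu = zeros(m, 1);
    for i = 1:m
        fu(i) = f(U(i, :));              % evaluate f at each candidate
    end
    [fmin, j] = min(fu);

    if f(xk) > fmin
        xnext = U(j, :);                 % step 4: accept the best candidate
    else
        p   = exp(-alpha*(fu - f(xk)));  % assumed Boltzmann-style weights
        p   = p / sum(p);
        ell = find(rand <= cumsum(p), 1);    % pick index ell with probability p(ell)
        xnext = U(ell, :);
    end
end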
With some minor modifications, this can be used for a function Q : X → R, where X is any set. For example, in the traveling salesman problem, X is the set of all permutations of a set of integers. Consider the Euclidean traveling salesman problem (ETSP): given a set of points in R^2 representing positions of cities on a map, we wish to visit each city exactly once while minimizing the total distance traveled.
(a) (1 point (bonus)) Implement simulated annealing to solve the ETSP with different values of α.
(b) (1 point (bonus)) Propose another optimization method to solve ETSP. Analyze how its efficiency would compare to that of simulated annealing.