Description
In lectures we saw that we can write a periodic function f in the form

f(x) = ∑_{n=−N}^{N} f̂ₙ e^{inx}

where f̂ₙ := (1/2π) ∫₀^{2π} f(x) e^{−inx} dx if the period is 2π.
The FFTW library uses a different convention, taking the period to be 1. Suppose we sample a function at N + 1 uniformly spaced points in [0, 1], obtaining values f₀, f₁, …, f_N, where fₙ := f(nh) with h := 1/N. By using the Fourier series we are implicitly assuming that f is periodic, i.e. f₀ = f_N. Then
f(x) = ∑_{n=−N/2}^{N/2−1} f̂ₙ e^{2πinx}

where the Fourier coefficients are

f̂ₙ := ∫₀¹ f(x) e^{−2πinx} dx
This is the convention we will use in this problem.
1. Discretize the integral for the Fourier series coefficients using the trapezoidal rule with N + 1 points. Using the assumption that f₀ = f_N, reduce this to a sum over fᵢ for i = 0, …, N − 1.
Implement this as a function fourier_coefficients_trapezoidal(f::Vector), where f is the vector of samples fₙ for n = 0, …, N − 1. Your function should return the coefficients f̂ₖ for k = −N/2, …, N/2 − 1 as a vector.
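A minimal sketch of one possible implementation (the output ordering follows the convention k = −N/2, …, N/2 − 1 stated above):

```julia
# Trapezoidal approximation of f̂ₖ = ∫₀¹ f(x) e^{-2πikx} dx.
# Because f₀ = f_N, the two endpoint half-weights of the trapezoidal rule
# combine into one full weight, leaving a plain sum over f₀, …, f_{N-1}.
function fourier_coefficients_trapezoidal(f::Vector)
    N = length(f)                       # samples fₙ = f(n/N), n = 0, …, N-1
    h = 1 / N
    ks = -N÷2 : N÷2 - 1                 # coefficient indices
    return [h * sum(f[n+1] * exp(-2π * im * k * n * h) for n in 0:N-1)
            for k in ks]
end
```

For example, sampling cos(2πx) should give coefficients ≈ 1/2 at k = ±1 and ≈ 0 elsewhere.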
2. Consider the following three functions:
1. The sawtooth function
𝑓(𝑥) = mod(𝑥, 1)
(The function mod(𝑥, 1) returns just the fractional part of a number.)
2. The “W” wave function
g(x) = ⎧ x            if 0 ≤ x < 0.5
       ⎨ 1 − x        if 0.5 ≤ x < 1
       ⎩ g(mod(x, 1)) otherwise
3. The function ℎ(𝑥) = exp(cos(2𝜋𝑥))
Plot them.
Given the properties of these functions, how would you expect their Fourier coefficients to decay?
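For reference, the three functions can be coded directly; the names sawtooth, wwave and smoothfun below are my own (the problem statement just calls them f, g and h):

```julia
# Sketch: the three periodic test functions. mod(x, 1) maps any real x
# into [0, 1), which makes all three periodic with period 1.
sawtooth(x) = mod(x, 1)                 # f: jump discontinuity at integers

function wwave(x)                       # g: continuous, kinks at 0 and 1/2
    y = mod(x, 1)
    return y < 0.5 ? y : 1 - y
end

smoothfun(x) = exp(cos(2π * x))         # h: infinitely differentiable
```

Smoothness controls the decay of the Fourier coefficients: a jump discontinuity gives O(1/n) decay, a kink O(1/n²), and a smooth periodic function faster-than-polynomial decay.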
3. Take N = 100. Calculate f̂, ĝ and ĥ using your fourier_coefficients_trapezoidal function. Plot the magnitude of the coefficients as a function of n, only for n ≥ 0. Do you see the expected behavior?
4. How accurately does our function fourier_coefficients_trapezoidal(f::Vector) calculate the Fourier coefficients? Use the following analytic solutions to calculate and plot the respective errors:
f̂₀ = 1/2 and f̂ₙ = i/(2πn) for n ≠ 0

ĝ₀ = 1/4; ĝₙ = −1/(π²n²) for n odd; and ĝₙ = 0 for n even

ĥₙ = Iₙ(1)

where Iₙ(1) is the modified Bessel function of the first kind of order n evaluated at 1, calculated in Julia as besseli(n, 1) using the SpecialFunctions.jl package.
Calculate the error (using norm) as the number of points used changes between 𝑁 = 10 and 𝑁 = 1000.
5. Write a function reconstruct_fourier_series(f̂s::Vector, xs::Vector) which reconstructs f(x) from the Fourier series coefficients.
Make a plot of ||f(xt) − reconstruct_fourier_series(f̂s, xt)||/N as the number of coefficients used N is varied from 4 to 200, for each of the functions.
Sample the reconstructed Fourier series at the points xt = 0:0.001:1. What do you see? Can you explain why?
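One possible sketch, assuming f̂s is ordered [f̂_{−N/2}, …, f̂_{N/2−1}] as in part 1 (the real part is taken because for real f the imaginary parts only cancel up to roundoff):

```julia
# Evaluate the truncated Fourier series f(x) ≈ ∑ₙ f̂ₙ e^{2πinx} at each x in xs.
function reconstruct_fourier_series(f̂s::Vector, xs::Vector)
    N = length(f̂s)
    ns = -N÷2 : N÷2 - 1                # same ordering as the coefficients
    return [real(sum(f̂s[i] * exp(2π * im * ns[i] * x) for i in eachindex(f̂s)))
            for x in xs]
end
```

As a check, the coefficients of cos(2πx) with N = 4 are [0, 1/2, 0, 1/2] (modes n = −2, …, 1), which should reconstruct cos exactly.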
6. Make an interactive visualization that plots the following on the same axes:
(i) the points used to calculate the Fourier coefficients in fourier_coefficients_trapezoidal;
(ii) the reconstructed function from the Fourier coefficients, found using reconstruct_fourier_series; and
(iii) the true function as the number of points is varied from 𝑁 = 10 ∶ 2 ∶ 250.
Does this help explain the results in [1.6]? In particular, what do you see for the sawtooth function? This is known as the Gibbs phenomenon, which occurs when a function is discontinuous.
7. What is the operation count for your naive fourier_coefficients_trapezoidal function? In general this will not behave well as N grows very large. The FFT, however, can calculate this in O(N log N) operations. The FFT implemented in FFTW.jl calculates
f̂ₙ = ∑_{k=0}^{N−1} fₖ e^{−2πink/N}

for n = 0, …, N − 1, which should be related to your discretization in [1.1].
Note that the n's are different here. But since e^{−2πi(n+lN)k/N} = e^{−2πilk} e^{−2πink/N} = e^{−2πink/N}, we have the relationship f̂_{N/2+i} = f̂_{−N/2+i}.
The FFT algorithm therefore outputs
f̂ = [f̂₀, f̂₁, …, f̂_{N/2−1}, f̂_{−N/2}, f̂_{−N/2+1}, …, f̂_{−1}]
The FFTW package defines a function fftshift that shifts this vector to the form

f̂ = [f̂_{−N/2}, f̂_{−N/2+1}, …, f̂_{−1}, f̂₀, f̂₁, …, f̂_{N/2−1}]
8. Implement a function fast_fourier_coefficients that outputs the same results as fourier_coefficients_trapezoidal but using the FFT from FFTW.jl.
Check that the output is similar to before.
Time your two functions for N = 2¹⁰. Is one faster than the other?
How large can you take N such that fast_fourier_coefficients runs for under 1 second? What about for fourier_coefficients_trapezoidal?
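A sketch of the FFT-based version, using fft and fftshift from FFTW.jl as described above:

```julia
using FFTW

# fft(f) computes ∑ₖ fₖ e^{-2πink/N} for n = 0, …, N-1; dividing by N supplies
# the factor h = 1/N from the trapezoidal rule, and fftshift reorders the
# result to [f̂_{-N/2}, …, f̂_{N/2-1}], matching fourier_coefficients_trapezoidal.
fast_fourier_coefficients(f::Vector) = fftshift(fft(f)) / length(f)
```

The two functions should agree to roundoff, but the FFT version scales as O(N log N) rather than O(N²), which is what makes very large N feasible.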
Exercise 2: Solving an ODE with a spectral method
In this problem we will solve the boundary-value problem
𝑓″ = 𝑔
with boundary conditions f(0) = f(1) = 0 by using a spectral method, i.e. by expanding in suitable basis functions:

f(x) = ∑_{n=1}^{∞} bₙ sin(nπx),

which is the same as the Fourier series for an odd function.
In practice we have to truncate the summation as

f(x) = ∑_{n=1}^{N} bₙ sin(nπx)

The coefficients are given by

bₙ = 2 ∫₀¹ f(x) sin(nπx) dx
Similarly to the DFT, we discretize this using the rectangle rule to get the Discrete Sine Transform (DST):

bₙ = (2/N) ∑_{k=1}^{N−1} fₖ sin(nkπ/N)
This sum is implemented in Julia using the FFTW.jl library (which you will need to install) as follows:

dst(x) = FFTW.r2r(x, FFTW.RODFT00) / length(x)

Its inverse is given by

idst(x) = FFTW.r2r(x, FFTW.RODFT00)
[r2r stands for “real to real”, meaning that the transform maps a real vector to a real vector. RODFT00 is a symbol that selects one particular type of transform.]
1. Assume that there is a solution of the form f(x) = ∑_{n=1}^{N} f̂ₙ sin(nπx). Substitute this into the ODE f″(x) = g(x) to show that f̂ₙ = −ĝₙ/(π²n²), where ĝₙ are the sine coefficients of g.
2. We can therefore solve the ODE for f by first calculating ĝ using the DST, then calculating f̂ₙ, and finally inverting using the iDST.
Write a function spectral_solver(b) that does this to solve the ODE, where b is the discretized version of g(x). Solve the ODE with 𝑔(𝑥) = sin(2𝜋𝑥). Plot the result. The right-hand side is given by
h = 1 / N
b = sin.(2π * (h:h:1-h))
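A sketch of the whole pipeline. Note that DST normalization conventions vary between sources; the constants below are chosen so that idst inverts dst exactly (with M interior points, applying FFTW.RODFT00 twice multiplies a vector by 2(M + 1)), so they may differ from the snippet above by a constant factor:

```julia
using FFTW

# With M = length(x) interior points (N = M + 1 subintervals):
dst(x)  = FFTW.r2r(x, FFTW.RODFT00) / (length(x) + 1)  # bₙ = (2/N) ∑ fₖ sin(nkπ/N)
idst(x) = FFTW.r2r(x, FFTW.RODFT00) / 2                # f(xₖ) = ∑ bₙ sin(nπk/N)

# Solve f″ = g with f(0) = f(1) = 0: substituting the sine series into the
# ODE turns d²/dx² into multiplication by -(πn)², mode by mode.
function spectral_solver(b::Vector)
    ĝ = dst(b)
    f̂ = [-ĝ[n] / (π * n)^2 for n in eachindex(ĝ)]
    return idst(f̂)
end
```

For g(x) = sin(2πx) the exact solution is f(x) = −sin(2πx)/(4π²), which gives an easy correctness check.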
3. Calculate the error as a function of 𝑁 and plot it. What rate of convergence do you see?
4. Generate the error plots again, now for the right-hand side g(x) = exp(sin(2πx)) − 1. Use the solution from your spectral solver with N = 2¹³ as the true solution. Calculate the error for N = 2ⁿ for n = 1, …, 12. Some care is needed to make sure you use the correct points from the “true” solution for comparison.
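One way to handle the point matching (a sketch; the helper name subsample_error is my own): the coarse grid with N subintervals uses interior points j/N, and since N divides N_true, the coarse point j/N coincides with fine-grid index j·(N_true/N).

```julia
# Compare a coarse solution (N subintervals, N-1 interior points) against a
# finer "true" solution (N_true subintervals) on the coinciding grid points.
function subsample_error(coarse::Vector, fine::Vector)
    N      = length(coarse) + 1
    N_true = length(fine) + 1
    stride = N_true ÷ N        # coarse point j/N = fine point (j·stride)/N_true
    return maximum(abs.(coarse .- fine[stride:stride:end]))
end
```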
Exercise 3: Finding roots in a different way
In class we defined the Chebyshev expansion of a function as
f_N(x) = ∑_{n=0}^{N} cₙ Tₙ(x)
which is an 𝑁th-degree polynomial. The Chebyshev polynomials are defined as 𝑇𝑛(𝑥) = cos(𝑛 arccos(𝑥)).
In general for a smooth function the Chebyshev series converges rapidly. We therefore expect that the roots of 𝑓𝑁(𝑥) should be close to the roots of 𝑓(𝑥), provided that 𝑓𝑁 is indeed a good approximation to 𝑓. We have already seen that we can find all the roots of a polynomial with various methods.
The Chebyshev polynomials satisfy the following recurrence relation:
𝑇𝑛+1(𝑥) = 2𝑥 𝑇𝑛(𝑥) − 𝑇𝑛−1(𝑥)
We will use this to find the companion matrix for 𝑓𝑁(𝑥), from which we can find the roots of 𝑓𝑁(𝑥).
1. Consider the polynomial x Tₙ(x). This is a degree n + 1 polynomial and hence can be re-expanded in Chebyshev polynomials.
Consider the vector of Chebyshev polynomials

T(x) = [T₀(x), T₁(x), …, T_{N−1}(x)]ᵀ

Now we can write x T(x) = M T(x) + k_N ê_N T_N(x), where M is an N × N matrix. We need the k_N term to account for the fact that x T_{N−1} is a degree-N polynomial. Here, ê_N is the standard basis vector with zeros everywhere except in the Nth component.
Use the recurrence relation to find the form of M and k_N.
2. Verify what you found in [3.1] numerically when N = 10. Build the vector T(x) for x a random number in [−1, 1]. Compute M and check that it gives x T(x).
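A sketch of the check in [3.2], assuming my worked answer to [3.1] (verify it yourself): the recurrence gives x T₀ = T₁ and x Tₙ = (Tₙ₋₁ + Tₙ₊₁)/2, so M is tridiagonal-like and k_N = 1/2.

```julia
using LinearAlgebra

# M encodes x·Tₙ in the basis T₀, …, T_{N-1}; the leftover (1/2)·T_N from
# x·T_{N-1} is carried by the k_N term.
function chebyshev_M(N)
    M = zeros(N, N)
    M[1, 2] = 1.0                    # x T₀ = T₁
    for n in 2:N-1
        M[n, n-1] = M[n, n+1] = 0.5  # x Tₙ₋₁ = (Tₙ₋₂ + Tₙ)/2
    end
    M[N, N-1] = 0.5                  # x T_{N-1} = (T_{N-2} + T_N)/2
    return M
end

N  = 10
x  = 2rand() - 1                          # random point in [-1, 1]
T  = [cos(n * acos(x)) for n in 0:N-1]    # the vector T(x)
TN = cos(N * acos(x))                     # T_N(x)
residual = norm(x * T - (chebyshev_M(N) * T + 0.5 * TN * [zeros(N-1); 1]))
```

The residual should be at machine-precision level.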
3. This looks almost like an eigenvalue problem, except for the T_N term. To remove this we can add and subtract C f_N(x) ê_N from the right-hand side. Writing f_N(x) = c_N T_N(x) + ∑_{n=0}^{N−1} cₙ Tₙ(x), what value of C should you choose so that

x T(x) = M T(x) + C f_N(x) ê_N − C ê_N ∑_{n=0}^{N−1} cₙ Tₙ(x)   (1)
       = M̃ T(x) + C f_N(x) ê_N   (2)
What is the new matrix 𝑀̃ ?
4. This becomes an eigenvalue problem when 𝑥 is a root of 𝑓𝑁(𝑥). Therefore, the eigenvalues of 𝑀̃ are the roots of 𝑓𝑁.
Write a function buildM(c::Vector) that constructs the matrix M̃ from the coefficients of the Chebyshev expansion. Use this to write a chebyshev_roots(c) function that finds the roots of the polynomial defined by the Chebyshev coefficients c. Finally, write a function fN(x, c) that evaluates the series expansion f_N(x) defined by the vector c.
5. We can calculate Chebyshev coefficients using the dct functions in FFTW.
We will use the Chebyshev points 𝑥𝑛 ∶= cos(𝑛𝜋/𝑁) for 𝑛 = 0 ∶ 𝑁. You can then calculate the Chebyshev coefficients using the following code:
chebyshev_points(N) = cos.(π*(0:1:N)/N)
function chebyshev_coefficients(x)
    N = length(x)
    c = FFTW.r2r(x, FFTW.REDFT00) / (N - 1)
    c[1] /= 2
    c[N] /= 2
    return c
end
6. Consider the polynomial f(x) = x (x − 1/2)² (x² − 1/9) (x + 1/4).
Using 10 Chebyshev points, calculate the Chebyshev coefficients and then calculate the roots using chebyshev_roots. What do you see? What about multiplicities?
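For reference, a sketch of [3.4] under my assumed answer to [3.3], namely C = 1/(2c_N), so that M̃ is M with cₙ/(2c_N) subtracted along its last row (assumes degree N ≥ 2):

```julia
using LinearAlgebra

# c = [c₀, c₁, …, c_N]: Chebyshev coefficients of f_N (degree N ≥ 2).
function buildM(c::Vector)
    N = length(c) - 1
    M = zeros(N, N)
    M[1, 2] = 1.0
    for n in 2:N-1
        M[n, n-1] = M[n, n+1] = 0.5
    end
    M[N, N-1] = 0.5
    M[N, :] .-= c[1:N] ./ (2c[N+1])   # absorb the T_N term: C = 1/(2c_N)
    return M
end

# The eigenvalues of M̃ are the roots of f_N.
chebyshev_roots(c) = eigvals(buildM(c))

# Evaluate f_N(x) = ∑ₙ cₙ Tₙ(x) via Tₙ(x) = cos(n·acos x), valid for |x| ≤ 1.
fN(x, c) = sum(c[n+1] * cos(n * acos(x)) for n in 0:length(c)-1)
```

As a quick check, c = [0, 0, 1] represents T₂(x) = 2x² − 1, whose roots are ±1/√2.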
7. Plot 𝑓(𝑥) and scatter plot the roots you find on top.
8. Now consider solving the problem exp(cos(4πx)) = 1. Using N = 100 points, calculate the Chebyshev coefficients for g(x) = exp(cos(4πx)) − 1. Do they decay quickly? Use these to calculate the roots of g(x). Plot g(x) and scatter the roots you find on top. Do you find all the roots?
(Hint: you will find 100 eigenvalues. Only plot those that are real and lie between -1 and 1.)
9. Make an interactive visualization as N is varied between N = 4 and N = 150. Plot g(x), the Chebyshev approximation to g(x) using N coefficients, and the roots you find on the same axes. Comment on what you see.
At what value of 𝑁 do you find all the roots? Remember to plot only those roots that are real and between -1 and 1.
Exercise 4: Gram–Schmidt for polynomials
In lectures we discussed treating the set of polynomials {1, x, x², x³, …} as the basis of a vector space with the inner product

(f, g)_w = ∫_a^b f(x) g(x) w(x) dx
We can therefore carry out Gram–Schmidt orthogonalization on these polynomials to generate a family of orthogonal polynomials.
We will implement this using the Polynomials.jl package. Integrals can be performed using the polyint(f, a, b) function to integrate the polynomial f from a to b. Here, f, g and w should all be Polynomials.
1. Write a function gram_schmidt(vs::Vector, ip) which accepts a vector of “vectors” in the vector space and the inner product on the vector space. For standard vectors this would be dot(v1, v2). The function should implement the Gram–Schmidt algorithm and return a vector of the resulting orthonormal basis elements.
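A sketch of classical Gram–Schmidt written against an arbitrary inner product ip; the “vectors” only need +, −, scalar multiplication, and ip (this is the classical variant, not the numerically more stable modified one):

```julia
# Orthonormalize vs with respect to the inner product ip.
function gram_schmidt(vs::Vector, ip)
    qs = []                            # untyped, so the element type may change
    for v in vs
        u = v
        for q in qs
            u = u - ip(q, u) * q       # subtract the component along q
        end
        push!(qs, u / sqrt(ip(u, u)))  # normalize
    end
    return qs
end
```

With ip = dot on random vectors, the Gram matrix of the output should be the identity, which is exactly the check asked for in [4.2].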
2. Test your function for standard vectors vs = [rand(10) for i = 1:10] using ip = dot. To check everything went according to plan, form the matrix D with entries Dᵢⱼ = qᵢ ⋅ qⱼ. If everything worked, this should be the identity matrix.
3. For polynomials we define the inner product
(f, g)_w := ∫_a^b f(x) g(x) w(x) dx

For Legendre polynomials we have a = −1, b = 1, and w(x) = 1. Using the Polynomials.jl package, implement a function legendre_inner_product(f, g). Use this to orthogonalize the vector of monomials up to order 7.
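A sketch for [4.3]. One API caveat: polyint(f, a, b) is the older Polynomials.jl name; in recent versions the definite integral is integrate(f, a, b), which is what this sketch assumes:

```julia
using Polynomials

# Legendre inner product: a = -1, b = 1, weight w(x) = 1.
legendre_inner_product(f, g) = integrate(f * g, -1, 1)

# Monomials 1, x, …, x⁷ (Float64 coefficients, since normalization divides).
monomials = [Polynomial([zeros(k); 1.0]) for k in 0:7]

# Orthogonalize with the gram_schmidt function from [4.1] (assumed in scope):
# legendre_basis = gram_schmidt(monomials, legendre_inner_product)
```

Up to normalization, the results should be proportional to the classical Legendre polynomials P₀ = 1, P₁ = x, P₂ = (3x² − 1)/2, ….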
4. Using the functions you found in [4.3] and your inner-product function, find the Legendre polynomial expansion coefficients for the function f(x) = x (x − 1/2)² (x² − 1/9) (x + 1/4).
Plot the reconstructed polynomial using the first n coefficients, for n = 1, …, 7, together with the true function. What do you see? How good are the Legendre polynomials at approximating the function?