Trefethen & Bau & MATLAB & Julia, Lecture 8: Gram-Schmidt

This lecture is about the modified Gram-Schmidt method and flop counting. The notebooks are here.


Almost as an afterthought, I decided to add a demonstration of the timing of Gram-Schmidt compared to its asymptotic flop count of about 2mn²; with m fixed, the elapsed time should grow like n².

n_ = collect(50:50:500);
time_ = zeros(size(n_));
for k = 1:length(n_)
    n = n_[k];
    A = rand(1200,n);
    Q = zeros(1200,n);  R = zeros(n,n);

    tic();    # tic/toc were removed after Julia 0.6; use @elapsed in newer versions
    R[1,1] = norm(A[:,1]);
    Q[:,1] = A[:,1]/R[1,1];
    for j = 2:n
        R[1:j-1,j] = Q[:,1:j-1]'*A[:,j];
        v = A[:,j] - Q[:,1:j-1]*R[1:j-1,j];
        R[j,j] = norm(v);
        Q[:,j] = v/R[j,j];
    end
    time_[k] = toc();
end

using PyPlot
loglog(n_,time_,"-o",n_,(n_/500).^2,"--")
xlabel("n"), ylabel("elapsed time")

I noticed that while the timings were similar, Julia lagged MATLAB just a bit. I decided this would be a great chance for me to see Julia’s prowess with speedy loops firsthand.

Compare the vectorized and unvectorized Julia versions here:
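Since the embedded comparison may not render here, this is a minimal sketch of what the fully unvectorized version looks like; the function name and the details of the loop nesting are my own, not necessarily the exact code from the notebook.

```julia
using LinearAlgebra   # for norm and I (needed in Julia >= 0.7)

# Loop-based (unvectorized) classical Gram-Schmidt: the matrix-vector
# products in the vectorized version are replaced by explicit loops.
function gs_loops(A)
    m, n = size(A)
    Q = zeros(m, n)
    R = zeros(n, n)
    for j = 1:n
        v = A[:, j]              # slicing allocates a fresh copy each pass
        for i = 1:j-1
            s = 0.0
            for k = 1:m          # s = Q[:,i]' * A[:,j]
                s += Q[k, i] * A[k, j]
            end
            R[i, j] = s
            for k = 1:m          # v = v - R[i,j] * Q[:,i]
                v[k] -= s * Q[k, i]
            end
        end
        R[j, j] = norm(v)
        Q[:, j] = v / R[j, j]
    end
    return Q, R
end
```

The per-pass allocations come from slices like A[:, j], which copy by default; that is exactly the sort of thing that shows up in the memory-allocation figures below.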

Look at the last line: it’s allocating 1.4 GB of memory to make the nested-loop version happen! I thought perhaps I should use copy to create v on each pass, but that change didn’t help. I even tried writing my own loop for computing the dot product, to no avail.
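For the record, a hand-written dot product of the sort I mean would look something like this (the name mydot is mine, and this is a sketch rather than the exact code I tried):

```julia
# Hypothetical hand-rolled dot product, semantically the same as x' * y
# for two vectors; @inbounds skips bounds checking inside the loop.
function mydot(x, y)
    s = 0.0
    @inbounds for k = 1:length(x)
        s += x[k] * y[k]
    end
    return s
end
```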

It did help a little to replace the line in which v is updated with 