For a few years, I’ve been a fan of clickers (aka personal response systems) for large lecture sections. Clickers are a simple, scalable way to incorporate a bit of active learning into the classroom. They can’t work miracles, but they do let me reward attendance, rouse the students once in a while, and give all of us good feedback about how well the latest concepts are sinking in. I like the accountability: if you got a question wrong when 80% of the class got it right, that’s on you; but if only 20% of the class got it right, that’s on me.
Part V of T&B is on dense methods for eigenvalue and singular value problems. For my course, this is the part of the text that I condense most severely. In part that’s due to the need to cover unconstrained nonlinear solving and optimization stuff later on. But I also find that this is the least compelling part of the text for my purposes.
It’s heavily weighted toward the hermitian case. That’s the cleanest situation, so I see the rationale.
Three in one this time: Lecture 20, on Gaussian elimination / LU factorization; Lecture 21, on row pivoting; and Lecture 23, on Cholesky factorization. I mostly skipped Lecture 22, on the curious case of the stability of pivoted LU, but its main example is dropped into the end of my coverage of pivoting.
The Julia surprises are, not surprisingly, coming less frequently. In Lecture 20 I had some fun with rational representations.
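The appeal of rationals here is that Gaussian elimination can be carried out exactly, with no rounding at all. A minimal sketch of the idea, in current Julia syntax (a hypothetical example, not the notebook code):

```julia
# Hypothetical example (not the notebook code): LU factorization over the
# rationals, so elimination is exact and L*U reproduces A with no rounding.
using LinearAlgebra

A = Rational{Int}[2 1 1; 4 3 3; 8 7 9]
n = size(A, 1)
L = Matrix{Rational{Int}}(I, n, n)   # unit lower triangular
U = copy(A)
for k in 1:n-1, i in k+1:n
    L[i,k] = U[i,k] / U[k,k]         # exact rational multiplier
    U[i,:] .-= L[i,k] .* U[k,:]
end
L * U == A                           # exact equality, since nothing was rounded
```

The same loop in floating point would only give `L*U ≈ A`; over `Rational{Int}` the comparison is exact.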
Here are the notebooks in MATLAB and Julia.
The new wrinkle in these codes is extended precision. In MATLAB you need to have the Symbolic Math toolbox to do this in the form of vpa. In Julia, you have to use version 0.5 or (presumably) later, which had a surprising side effect I’ll get to below.
The reason for extended precision is that this lecture presents experiments on the accuracy of different algorithms for linear least squares problems.
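The flavor of the comparison is solving one least squares problem two ways and judging both against an extended-precision reference. Here is a sketch in current Julia syntax, with the problem sizes taken from the lecture but otherwise not the notebook code:

```julia
# Sketch (not the notebook code): fit a degree-14 polynomial to exp(sin(4t))
# on [0,1] by QR and by the normal equations, then compare both against a
# BigFloat (256-bit) reference solution.
using LinearAlgebra

m, n = 100, 15
t = (0:m-1) / (m - 1)
A = [tk^j for tk in t, j in 0:n-1]   # Vandermonde matrix: very ill-conditioned
b = exp.(sin.(4 .* t))

x_qr  = A \ b                        # backslash uses Householder QR: backward stable
x_ne  = (A'A) \ (A'b)                # normal equations: squares the condition number
x_big = big.(A) \ big.(b)            # extended-precision reference

err_qr = norm(x_qr - x_big) / norm(x_big)
err_ne = norm(x_ne - x_big) / norm(x_big)
```

With κ(A) around 10^10, QR loses roughly ten digits but keeps the rest, while the normal equations, working with κ² ≈ 10^20, lose essentially everything.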
I’ve run into trouble managing gists with lots of files in them, so I’m back to doing one per lecture. Here are Lecture 12 and Lecture 13.
We’ve entered Part 3 of the book, which is on conditioning and stability matters. The lectures in this part are heavily theoretical and often abstract, so I find a little occasional computer time helps to clear the cobwebs.
Right off the top, in reproducing Figure 12.
This week’s notebooks (MATLAB and Julia; all lectures are now collected together for each language) are about least squares polynomial fitting.
The computational parts are almost identical, except for how polynomials are represented. In MATLAB, a vector of coefficients is interpreted as a polynomial in the context of particular functions, such as polyval. The major pain is that the convention is for the coefficients to be ordered from high degree to low, which is almost always the opposite of what you really want.
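In Julia I just build the Vandermonde matrix myself, which lets the coefficients run from the constant term upward. A minimal sketch with made-up data (current syntax, not the notebook code):

```julia
# Least-squares polynomial fit with coefficients ordered low degree to high,
# the opposite of MATLAB's polyval convention. Hypothetical example data.
using LinearAlgebra

t = 0:0.05:1
y = cos.(t)
V = [tk^j for tk in t, j in 0:3]       # columns: 1, t, t^2, t^3
c = V \ y                              # backslash does the least-squares solve
p(x) = sum(c[j+1] * x^j for j in 0:3)  # c[1] is the constant term
```

Since `c[1]` multiplies the constant column, evaluating the fit never requires mentally reversing the coefficient vector.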
This lecture is about the modified Gram-Schmidt method and flop counting. The notebooks are here.
Almost as an afterthought, I decided to add a demonstration of the timing of Gram-Schmidt compared to the asymptotic flop count. Both MATLAB and Julia got very close to the trend as n got into the hundreds, using vectorized code:
```julia
n_ = collect(50:50:500);
time_ = zeros(size(n_));
for k = 1:length(n_)
    n = n_[k];
    A = rand(1200,n);
    Q = zeros(1200,n);
    R = zeros(600,600);
    tic();
    R[1,1] = norm(A[:,1]);
    Q[:,1] = A[:,1]/R[1,1];
    for j = 2:n
        R[1:j-1,j] = Q[:,1:j-1]'*A[:,j];
        v = A[:,j] - Q[:,1:j-1]*R[1:j-1,j];
        R[j,j] = norm(v);
        Q[:,j] = v/R[j,j];
    end
    time_[k] = toc();
end

using PyPlot
loglog(n_,time_,"-o", n_,(n_/500).^2*time_[end],"--")   # O(n^2) trend for fixed m
```
Here are the Jupyter notebooks for Lecture 6 and Lecture 7. (I finally noticed that a Gist can hold more than one notebook…duh.)
Not much happened in Lecture 6, but I got gobsmacked in Lecture 7. It happened when I tried to convert this boring MATLAB code for backward substitution.
```matlab
A = magic(9);
b = (1:9)';
[Q,R] = qr(A);
z = Q'*b;
x(9,1) = z(9)/R(9,9);
for i = 8:-1:1
    x(i) = (z(i) - R(i,i+1:9)*x(i+1:9)) / R(i,i);
end
```

Here is what I first tried in Julia.
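For reference, a direct translation in current Julia syntax might look like the following sketch (not the post’s code; `magic(9)` has no built-in Julia equivalent, so a stand-in matrix is used):

```julia
# Sketch of a Julia translation of the MATLAB backward substitution above.
# Unlike MATLAB, a Julia array won't grow on assignment, so x is allocated
# up front. The matrix is a made-up stand-in for magic(9).
using LinearAlgebra

A = [i == j ? 4.0 : 1.0/(i + j) for i in 1:9, j in 1:9]
b = collect(1.0:9.0)
F = qr(A)
Q, R = Matrix(F.Q), F.R
z = Q' * b
x = zeros(9)
x[9] = z[9] / R[9,9]
for i in 8:-1:1
    x[i] = (z[i] - R[i, i+1:9]' * x[i+1:9]) / R[i,i]
end
```

Note that `R[i, i+1:9]` is a vector in Julia (slices drop the singleton dimension), so the inner product needs the adjoint `'`.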
Notebooks are viewable for MATLAB and Julia.
This is one of my favorite demos. It illustrates low-rank approximation by the SVD to show patterns in voting behavior for the U.S. Congress. With no a priori models, project onto two singular vectors and pow! Meaning and insight jump out.
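The projection step itself is tiny. Here is a toy stand-in with synthetic ±1 “votes” (hypothetical data, not the voteview.com matrix or the notebook code):

```julia
# Toy stand-in for the voting demo: project each row of a made-up ±1
# vote matrix onto the first two right singular vectors.
using LinearAlgebra

A = rand([-1.0, 1.0], 40, 12)      # 40 legislators x 12 votes, synthetic
U, σ, V = svd(A)
coords = A * V[:, 1:2]             # 2-D coordinates for each legislator

# Eckart-Young: truncating the SVD gives the best rank-2 approximation,
# with spectral-norm error equal to σ[3].
A2 = U[:, 1:2] * Diagonal(σ[1:2]) * V[:, 1:2]'
```

In the real demo, scatter-plotting the rows of `coords` is what makes the party structure pop out.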
I took one shortcut. I have a MATLAB script that reads the raw voting data from voteview.com and converts it to a matrix.
The notebooks: MATLAB and Julia.
Today is about some little conveniences/quirks in Julia. Starting here:
```julia
t = linspace(0,2*pi,300);
x1,x2 = (cos(t),sin(t));
```

The second line assigns to two variables simultaneously. It’s totally unnecessary here, but it helps to emphasize how the quantities are related.
Next we have
```julia
U,σ,V = svd(A)
```

I’m unreasonably happy about having Greek letters as variable names. Just type in ‘\sigma’ and hit tab, and voilà! It’s a reminder of how, in the U.