I’ve run into trouble managing gists with lots of files in them, so I’m back to doing one per lecture. Here are Lecture 12 and Lecture 13.
We’ve entered Part 3 of the book, which covers conditioning and stability. The lectures in this part are heavily theoretical and often abstract, so I find that a little occasional computer time helps to clear the cobwebs.
Right off the top, in reproducing Figure 12.

This week’s notebooks (MATLAB and Julia; now all lectures are together for each language) are about least squares polynomial fitting.
The computational parts are almost identical, except for how polynomials are represented. In MATLAB, a vector of coefficients is interpreted as a polynomial in the context of particular functions, such as polyval. The major pain is that the convention is for the coefficients to be ordered from high degree to low, which is almost always the opposite of what you really want.
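As it happens, NumPy’s polyval follows the same high-to-low convention, so the quirk is easy to illustrate in a language-neutral Python sketch (the coefficients below are my own example, not from the lecture):

```python
import numpy as np

# NumPy's polyval orders coefficients from high degree to low, like MATLAB's.
# Example coefficients (mine): p represents x^2 + 2x + 3.
p = [1, 2, 3]
print(np.polyval(p, 2))        # 1*2**2 + 2*2 + 3 = 11

# Reversing gives the low-to-high ordering, where p_low[k] multiplies x**k:
p_low = p[::-1]
print(sum(c * 2**k for k, c in enumerate(p_low)))   # 11 again
```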

For today’s notebooks I got caught on a problem I anticipated in theory but failed to spot in practice for longer than I would like to admit.
First let me mention how interesting this lecture is to me personally. The title of the lecture is “MATLAB”, and it details three numerical experiments. The first of these uses the QR factorization of a discretization of monomials to approximate the Legendre polynomials.
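A minimal sketch of that first experiment, written here in Python/NumPy for neutrality (the grid size and polynomial degree are my choices, not necessarily the book’s):

```python
import numpy as np

# Discretize the monomials 1, x, x^2, x^3 on [-1, 1], then orthogonalize
# the columns with a reduced QR factorization.
m, n = 257, 4
x = np.linspace(-1, 1, m)
A = np.column_stack([x**j for j in range(n)])   # columns are 1, x, x^2, x^3
Q, R = np.linalg.qr(A)                          # reduced QR: Q is m-by-n

# The columns of Q are orthonormal on the grid...
print(np.allclose(Q.T @ Q, np.eye(n)))          # True

# ...and, rescaled to equal 1 at x = 1, they approximate the Legendre
# polynomials; e.g. the third column is close to P_2(x) = (3x^2 - 1)/2.
err = np.max(np.abs(Q[:, 2]/Q[-1, 2] - (3*x**2 - 1)/2))
print(err < 0.05)                               # True
```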

This lecture is about the modified Gram-Schmidt method and flop counting. The notebooks are here.
I’m lost.
Almost as an afterthought I decided to add a demonstration of the timing of Gram-Schmidt compared to the asymptotic flop count. Both MATLAB and Julia got very close to the trend as n got into the hundreds, using vectorized code:
n_ = collect(50:50:500)
time_ = zeros(size(n_))
for k = 1:length(n_)
    n = n_[k]
    A = rand(1200,n)
    Q = zeros(1200,n)
    R = zeros(n,n)
    tic()
    R[1,1] = norm(A[:,1])
    Q[:,1] = A[:,1]/R[1,1]
    for j = 2:n
        R[1:j-1,j] = Q[:,1:j-1]'*A[:,j]
        v = A[:,j] - Q[:,1:j-1]*R[1:j-1,j]
        R[j,j] = norm(v)
        Q[:,j] = v/R[j,j]
    end
    time_[k] = toc()
end

using PyPlot
loglog(n_, time_, "-o", n_, (n_/500).^2*time_[end], "--")   # n^2 trend (O(mn^2) flops, m fixed), anchored to the last data point

Here are the Jupyter notebooks for Lecture 6 and Lecture 7. (I finally noticed that a Gist can hold more than one notebook…duh.)
Not much happened in Lecture 6, but I got gobsmacked in Lecture 7. It happened when I tried to convert this boring MATLAB code for backward substitution.
A = magic(9);
b = (1:9)';
[Q,R] = qr(A);
z = Q'*b;
x(9,1) = z(9)/R(9,9);
for i = 8:-1:1
    x(i) = (z(i) - R(i,i+1:9)*x(i+1:9)) / R(i,i);
end

Here is what I first tried in Julia.
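As a language-neutral sketch of what that loop computes, here is back substitution in Python/NumPy; the function name and the random test matrix are my own (NumPy has no magic):

```python
import numpy as np

# Solve R x = z for upper-triangular R by back substitution,
# working from the last row up.
def back_substitute(R, z):
    n = len(z)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # i = n-1 down to 0
        x[i] = (z[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

np.random.seed(0)
A = np.random.rand(9, 9)
b = np.arange(1.0, 10.0)
Q, R = np.linalg.qr(A)
x = back_substitute(R, Q.T @ b)
print(np.allclose(A @ x, b))                     # True
```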

Notebooks are viewable for MATLAB and Julia.
This is one of my favorite demos. It illustrates low-rank approximation by the SVD to show patterns in voting behavior for the U.S. Congress. With no a priori models, project onto two singular vectors and pow! Meaning and insight jump out.
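Here is a hypothetical miniature of the idea in Python/NumPy, with synthetic ±1 “votes” standing in for the real voteview.com data (the planted two-party structure is my own construction):

```python
import numpy as np

# Stand-in voting matrix: rows = legislators, columns = votes, entries +/-1.
# Two hidden "parties" are planted, then corrupted with noise.
rng = np.random.default_rng(1)
party = np.repeat([1.0, -1.0], 50)
votes = np.sign(np.outer(party, rng.standard_normal(200))
                + 0.5*rng.standard_normal((100, 200)))

U, s, Vt = np.linalg.svd(votes, full_matrices=False)
coords = U[:, :2] * s[:2]     # project each legislator onto 2 singular vectors

# With no model supplied, the first coordinate separates the two blocs:
print(coords[:50, 0].mean() * coords[50:, 0].mean() < 0)   # True
```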
I took one shortcut. I have a MATLAB script that reads the raw voting data from voteview.com and converts it to a matrix.

The notebooks: MATLAB and Julia.
Today is about some little conveniences/quirks in Julia. Starting here:
t = linspace(0,2*pi,300);
x1,x2 = (cos(t),sin(t));

The second line assigns to two variables simultaneously. It’s totally unnecessary here, but it helps to emphasize how the quantities are related.
Next we have
U,σ,V = svd(A)

I’m unreasonably happy about having Greek letters as variable names. Just type in ‘\sigma’ and hit tab, and voilà! It’s a reminder of how, in the U.

Here are the MATLAB and Julia notebooks.
The big issue this time around was graphics. This topic dramatically illustrates the advantages on both sides of the commercial/open source fence. On the MATLAB side, it’s perfectly clear what you should do. There are many options that have been well constructed, and it’s all under a relatively consistent umbrella. There are things to learn and options to choose, but it’s clear what functions you will be using to make, say, a scatter plot, and a lot of similarity across commands.

Here are the MATLAB and Julia notebooks.
Two things stood out this time. First, consider the following snippet.
u = [ 4; -1; 2+2im ]
v = [ -1; 1im; 1 ]
println("dot(u,v) gives ", dot(u,v))
println("u'*v gives ", u'*v)

The result is

dot(u,v) gives -2 - 3im
u'*v gives Complex{Int64}[-2 - 3im]

Unlike in MATLAB, a scalar is not the same thing as a 1-by-1 matrix. This has consequences. The code (u'*v)*eye(3) throws a dimension mismatch error, while the equivalent with dot is fine.
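NumPy draws a similar distinction once the vectors are made into true columns; np.vdot, like Julia’s dot, conjugates its first argument and returns a scalar (my example, mirroring the one above):

```python
import numpy as np

u = np.array([4, -1, 2 + 2j])
v = np.array([-1, 1j, 1])

# np.vdot conjugates its first argument and returns a true scalar:
print(np.vdot(u, v))                      # (-2-3j)

# With explicit column vectors, u'*v is a 1-by-1 matrix, not a scalar:
uc, vc = u.reshape(-1, 1), v.reshape(-1, 1)
print((uc.conj().T @ vc).shape)           # (1, 1)
```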

Have a look at the MATLAB and Julia versions of the notebooks for this lecture.
The first disappointment in Julia comes right at the start: no magic command in Julia! Why not? I love demonstrating with magic square matrices:
They are instantly familiar or at least understandable to any level of mathematician. They have integer entries and thus display compactly. You can demonstrate sum, transpose, and diag naturally. And getting the “antidiagonal” sum is a nice exercise.
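NumPy lacks a magic command too, but a hard-coded 3-by-3 magic square (the standard Lo Shu square; the code is my own sketch) is enough to run through those demonstrations:

```python
import numpy as np

# The classic 3x3 magic square: every row, column, and diagonal sums to 15.
M = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])

print(M.sum(axis=0))                 # column sums: [15 15 15]
print(M.T.sum(axis=0))               # row sums via transpose: [15 15 15]
print(np.diag(M).sum())              # main diagonal: 15
# The "antidiagonal" exercise: flip left-right, then take the diagonal.
print(np.diag(np.fliplr(M)).sum())   # 15
```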
