I’m going to wrap up the long-paused MATLAB versus Julia comparison on Trefethen & Bau by chugging through all the lectures on iterative methods in one post.
I’m back to using gists; I’m not thrilled with any of the mechanisms for sharing this stuff.
Lecture 32 (sparse matrices and simple iterations)
Lecture 33 (Arnoldi iteration)
Lecture 34 (Arnoldi eigenvalues)

These are remarkable mainly in that they have such striking similarity in both languages.
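To give a flavor of what those lectures contain, here is a minimal sketch of the Arnoldi iteration in current Julia syntax (the function name and the assumption of real `Float64` data are mine, not from the notebooks): build an orthonormal basis `Q` of the Krylov subspace span{b, Ab, A²b, …} together with the (n+1)-by-n Hessenberg matrix `H` satisfying `A*Q[:,1:n] == Q*H`.

```julia
using LinearAlgebra

# Arnoldi iteration (T&B Lecture 33), assuming Float64 inputs.
# Returns Q (m-by-(n+1), orthonormal columns) and H ((n+1)-by-n Hessenberg)
# with A*Q[:,1:n] == Q*H.
function arnoldi(A, b, n)
    m = length(b)
    Q = zeros(m, n+1)
    H = zeros(n+1, n)
    Q[:,1] = b / norm(b)
    for j in 1:n
        v = A * Q[:,j]
        for i in 1:j                      # orthogonalize against earlier basis vectors
            H[i,j] = dot(Q[:,i], v)
            v -= H[i,j] * Q[:,i]
        end
        H[j+1,j] = norm(v)                # breakdown (H[j+1,j] == 0) ignored in this sketch
        Q[:,j+1] = v / H[j+1,j]
    end
    return Q, H
end
```

The double loop is essentially identical character-for-character in MATLAB, which is the similarity the notebooks illustrate.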

Part V of T&B is on dense methods for eigenvalue and singular value problems. For my course, this is the part of the text that I condense most severely. In part that’s due to the need to cover unconstrained nonlinear solving and optimization stuff later on. But I also find that this is the least compelling part of the text for my purposes.
It’s heavily weighted toward the Hermitian case. That’s the cleanest situation, so I see the rationale.

Here are the notebooks in MATLAB and Julia.
The new wrinkle in these codes is extended precision. In MATLAB you need the Symbolic Math Toolbox to do this, in the form of vpa. In Julia, you have to use version 0.5 or (presumably) later, which had a surprising side effect I’ll get to below.
The reason for extended precision is that this lecture presents experiments on the accuracy of different algorithms for linear least squares problems.
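In Julia the extended precision comes from the built-in BigFloat type (MPFR arithmetic, 256 significand bits by default). A minimal sketch of the kind of experiment involved — the Vandermonde setup here is my own illustration, not necessarily the lecture’s — uses a BigFloat solve as the reference answer for a double-precision least-squares solve:

```julia
using LinearAlgebra

# Compare a Float64 least-squares solve against the same solve carried out
# in 256-bit BigFloat arithmetic. big.(A) promotes an array elementwise.
m, n = 100, 12
t = range(0, 1, length=m)
A = [ t[i]^(j-1) for i in 1:m, j in 1:n ]   # ill-conditioned Vandermonde matrix
b = exp.(sin.(4 .* t))
x_double = A \ b                            # Float64 solve (Householder QR)
x_big    = big.(A) \ big.(b)                # same problem in extended precision
err = norm(x_double - Float64.(x_big)) / norm(x_big)
```

Because the matrix is ill-conditioned, `err` is many orders of magnitude larger than machine epsilon even though the QR-based solve is backward stable — which is exactly the sort of distinction these accuracy experiments are after.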

This lecture is about the modified Gram-Schmidt method and flop counting. The notebooks are here.
I’m lost.
Almost as an afterthought, I decided to add a demonstration of the timing of Gram-Schmidt compared to the asymptotic flop count. Both MATLAB and Julia got very close to the trend as n got into the hundreds, using vectorized code:
```julia
n_ = collect(50:50:500)
time_ = zeros(size(n_))
for k = 1:length(n_)
    n = n_[k]
    A = rand(1200,n)
    Q = zeros(1200,n)
    R = zeros(600,600)
    tic()
    R[1,1] = norm(A[:,1])
    Q[:,1] = A[:,1]/R[1,1]
    for j = 2:n
        R[1:j-1,j] = Q[:,1:j-1]'*A[:,j]
        v = A[:,j] - Q[:,1:j-1]*R[1:j-1,j]
        R[j,j] = norm(v)
        Q[:,j] = v/R[j,j]
    end
    time_[k] = toc()
end

using PyPlot
loglog(n_, time_, "-o", n_, (n_/500).^2*time_[end], "--")  # O(n^2) trend for fixed m
```
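For reference, the code above is the classical variant (each column of A is projected against all earlier q’s at once). The lecture’s modified Gram-Schmidt differs only in when the projections are subtracted, which matters a great deal for stability. A sketch in current Julia syntax (the function name is mine):

```julia
using LinearAlgebra

# Modified Gram-Schmidt QR: as soon as q_j is formed, its component is
# removed from every remaining column, rather than projecting each column
# against all previous q's at the end. Same flop count, better stability.
function mgs(A)
    m, n = size(A)
    V = copy(float(A))       # working copy; columns get progressively orthogonalized
    Q = zeros(m, n)
    R = zeros(n, n)
    for j in 1:n
        R[j,j] = norm(V[:,j])
        Q[:,j] = V[:,j] / R[j,j]
        for k in j+1:n
            R[j,k] = dot(Q[:,j], V[:,k])
            V[:,k] -= R[j,k] * Q[:,j]
        end
    end
    return Q, R
end
```

Both variants cost about 2mn² flops, which is why the timing trend above is the same either way.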

The notebooks: MATLAB and Julia.
Today is about some little conveniences/quirks in Julia. Starting here:
```julia
t = linspace(0,2*pi,300)
x1,x2 = (cos(t),sin(t))
```

The second line assigns to two variables simultaneously. It’s totally unnecessary here, but it helps to emphasize how the quantities are related.
Next we have
```julia
U,σ,V = svd(A)
```

I’m unreasonably happy about having Greek letters as variable names. Just type in ‘\sigma’ and hit tab, and voilà! It’s a reminder of how, in the U.

Here are the MATLAB and julia notebooks.
The big issue this time around was graphics. This topic dramatically illustrates the advantages on both sides of the commercial/open-source fence. On the MATLAB side, it’s perfectly clear what you should do. There are many options that have been well constructed, and it’s all under a relatively consistent umbrella. There are things to learn and options to choose, but it’s clear which functions you’ll use to make, say, a scatter plot, and there’s a lot of similarity across commands.

Here are the matlab and julia notebooks.
Two things stood out this time. First, consider the following snippet.
```julia
u = [ 4; -1; 2+2im ]
v = [ -1; 1im; 1 ]
println("dot(u,v) gives ", dot(u,v))
println("u'*v gives ", u'*v)
```

The result is

```
dot(u,v) gives -2 - 3im
u'*v gives Complex{Int64}[-2 - 3im]
```

Unlike in MATLAB, a scalar is not the same thing as a 1-by-1 matrix. This has consequences.
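(In later Julia versions, `u'*v` for two vectors does return a true scalar, so the output above is specific to the 0.x era. But the underlying scalar-versus-1-element-array distinction persists whenever a genuine matrix is involved; a small sketch, with the `reshape` construction being my own illustration:)

```julia
using LinearAlgebra

u = [ 4; -1; 2+2im ]
v = [ -1; 1im; 1 ]
s = dot(u, v)               # a true scalar; dot conjugates its first argument
M = reshape(conj(u), 1, 3)  # an honest 1-by-3 matrix
w = M * v                   # matrix*vector: a 1-element Vector, not a scalar
s + 1                       # fine: ordinary scalar arithmetic
w[1] + 1                    # must index first; plain w + 1 is an error
```

So code ported from MATLAB, where a 1-by-1 result silently acts like a number, can need explicit indexing (or `only(w)`) in Julia.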

This semester I’m teaching MATH 612, which is numerical linear and nonlinear algebra for grad students. Linear algebra dominates the course, and for that I’m following the now-classic textbook by Trefethen & Bau. This book has real meaning to me because I learned the subject from Nick Trefethen at Cornell, just a year or two before the book was written. That’s when numerical analysis became an appealing subject to me.

Something fun for Friday?
My older son binge-watched Futurama on Netflix a few months ago. This was one of the funniest shows of at least recent TV history. Especially if you like nerdy, cultural-reference, rapid-fire style humor like a real Gen-Xer.
It’s also probably the first and only time in television history that a new mathematical theorem was proved for, and first presented in, an episode of a series. The whole run of the show had numerous mathematical references.

I just received a copy of SIAM News on a dead tree. It features a piece by Chris Johnson and Hans de Sterck about “Data Science: What Is It and How Is It Taught?” As usual in these articles, I find the specifics more interesting than the generalities of a panel discussion. I really liked this bit about the new program in Computational Modeling and Data Analytics at Virginia Tech:
