Glossary

adjacency matrix (§ 7.1)
Matrix whose nonzero entries show the links between nodes in a graph.

adjoint (§ 2.2)
Conjugate transpose of a complex matrix.

advection equation (§ 12.1)
Archetypical PDE of hyperbolic type, representing transport phenomena.

algorithm (§ 1.3)
List of instructions for transforming data into a result.

Arnoldi iteration (§ 8.4)
Stable algorithm for finding orthonormal bases of nested Krylov subspaces.

asymptotic (§ 2.5)
Relationship indicating that two functions have the same leading behavior in some limit.

backward error (§ 1.4)
Change to the input of a problem required to produce the result found by an inexact algorithm.

backward substitution (§ 2.3)
Systematic method for solving a linear system with an upper triangular matrix.

bandwidth (§ 2.9)
The number of diagonals around the main diagonal that have nonzero elements.

barycentric formula (§ 9.2)
Computationally useful expression for the interpolating polynomial as a ratio of rational terms.
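In one common form, with nodes \(t_k\), data values \(y_k\), and weights \(w_k = \prod_{j\neq k}(t_k-t_j)^{-1}\), the interpolating polynomial can be evaluated as
\[ p(x) = \frac{\displaystyle\sum_{k=0}^{n} \frac{w_k\, y_k}{x-t_k}}{\displaystyle\sum_{k=0}^{n} \frac{w_k}{x-t_k}}. \]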

big-O (§ 2.5)
Relationship indicating that one function is bounded above by a multiple of another in some limit.
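In symbols, \(f(x) = O(g(x))\) as \(x\to a\) means there is a constant \(C\) such that \(|f(x)| \le C\,|g(x)|\) for all \(x\) sufficiently close to \(a\).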

boundary-value problem (§ 10.1)
A differential equation together with partial information about the solution given at multiple points on the boundary of the domain.

cardinal function (§ 5.1)
Interpolating function that is 1 at one node and 0 at all the others.

Cholesky factorization (§ 2.9)
Symmetrized version of LU factorization for SPD matrices.

collocation (§ 10.4)
Solution of a differential equation by imposing it approximately at a set of nodes.

condition number (§ 1.2)
Ratio of the size of change in the output of a function to the size of change in the input that produced it.
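For evaluation of a differentiable scalar function \(f\) at a point \(x\), for instance, this ratio of relative changes limits to
\[ \kappa_f(x) = \left| \frac{x\, f'(x)}{f(x)} \right|. \]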

cubic spline (§ 5.3)
Piecewise cubic function with two globally continuous derivatives, most often used for interpolation or approximation.

diagonalizable (§ 7.2)
Matrix that admits an eigenvalue decomposition. Also known as nondefective.

differentiation matrix (§ 10.3)
Matrix mapping a vector of function values to a vector of approximate derivative values.

Dirichlet condition (§ 10.1)
Boundary condition specifying the value of the solution.

dominant eigenvalue (§ 8.2)
Eigenvalue with the largest modulus (absolute value, in the real case).

double precision (§ 1.1)
Typical standard in floating-point representation, using 64 bits to achieve about 16 decimal significant digits of precision.

eigenvalue (§ 7.2)
Scalar \(\lambda\) such that \(\mathbf{A}\mathbf{x} = \lambda \mathbf{x}\) for a square matrix \(\mathbf{A}\) and nonzero vector \(\mathbf{x}\).

eigenvalue decomposition (EVD) (§ 7.2)
Expression of a square matrix as the product of eigenvector and diagonal eigenvalue matrices.

eigenvector (§ 7.2)
Vector for which the action of a matrix is effectively one-dimensional.

Euler’s method (§ 6.2)
Prototype of all IVP solution methods, obtained by assuming constant derivatives for the solution over short time intervals.
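For the IVP \(u' = f(t,u)\) with step size \(h\), one step of the method reads
\[ u_{k+1} = u_k + h\, f(t_k, u_k). \]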

evolutionary PDE (§ 11.1)
A partial differential equation in which one of the independent variables is time or a close analog.

extrapolation (§ 5.6)
Use of multiple discretization values to cancel out leading terms in an error expansion.

finite difference (§ 5.4)
Linear combination of function values that approximates the value of a derivative of the function at a point.
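A familiar example is the centered difference
\[ f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}, \]
which is second-order accurate in \(h\).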

finite element method (FEM) (§ 10.6)
Use of piecewise integration to pose a linear system of equations for the approximate solution of a boundary-value problem.

fixed point iteration (§ 4.2)
Repeated application of a function in hopes of converging to a fixed point.
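That is, given a function \(g\) and a starting value \(x_1\), one computes \(x_{k+1} = g(x_k)\) for \(k=1,2,\ldots\), hoping that \(x_k \to p\) with \(g(p) = p\).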

fixed point problem (§ 4.2)
Finding an input value of a given function at which the input and output values are equal; equivalent to rootfinding.

floating-point numbers (§ 1.1)
A finite set that substitutes for the real numbers in machine calculations. Denoted by \(\mathbb{F}\).

flops (§ 2.5)
Arithmetic operations on floating-point numbers, often counted as a proxy for computer runtime.

forward substitution (§ 2.3)
Systematic method for solving a linear system with a lower triangular matrix.

Frobenius norm (§ 2.7)
Matrix norm computed by applying the vector 2-norm to a vector interpretation of the matrix.

Gauss–Newton method (§ 4.7)
Generalization of Newton’s method for nonlinear least squares.

Gaussian elimination (§ 2.4)
Use of row operations to transform a linear system to an equivalent one in triangular form.

generating polynomials (§ 6.6)
A pair of polynomials whose coefficients match those of a multistep method for IVPs.

global error (§ 6.2)
Error made by an IVP method over the entire time interval of the solution.

GMRES (§ 8.5)
Iterative solution of a linear system through stable least-squares solutions on nested Krylov subspaces.

graph (§ 7.1)
Representation of a network as a set of nodes and edges.

hat functions (§ 5.2)
Cardinal functions for piecewise linear interpolation.

heat equation (§ 11.1)
Archetypical parabolic PDE that describes diffusion.

hermitian (§ 7.4)
Combination of transpose and elementwise complex conjugation. Also describes a matrix that equals its own hermitian.

hermitian positive definite (HPD) (§ 7.4)
Matrix that is hermitian with strictly positive eigenvalues; complex variant of symmetric positive definite.

identity matrix (§ 2.2)
Matrix with ones on the diagonal and zeros elsewhere, acting as the multiplicative identity.

ill-conditioned (§ 1.2)
Exhibiting a large condition number, indicating high sensitivity of a result to changes in the data.

implicit (§ 6.6)
Formula that defines a quantity only indirectly, e.g., as the solution of a nonlinear equation.

induced matrix norm (§ 2.7)
Norm computed using the interpretation of a matrix as a linear operator.

initial-value problem (IVP) (§ 6.1, § 6.3)
An ordinary differential equation (possibly vector-valued) together with an initial condition.

inner product (§ 9.4)
Scalar or dot product of a pair of vectors, or its extension to a pair of functions.

interpolation (§ 2.1, § 5.1)
Construction of a function that passes through a given set of data points.

inverse iteration (§ 8.3)
Subtraction of a shift followed by matrix inversion, used in power iteration to transform the eigenvalue closest to a target value into a dominant one.

Jacobian matrix (§ 4.5)
Matrix of first partial derivatives that defines the linearization of a vector-valued function.

Kronecker product (§ 13.3)
Alternative type of matrix multiplication useful for problems on a tensor-product domain.

Krylov subspace (§ 8.4)
Vector space generated by powers of a square matrix that is often useful for reducing the dimension of large problems.

Lagrange formula (§ 9.1)
Theoretically useful expression for an interpolating polynomial.

Lanczos iteration (§ 8.6)
Specialization of the Arnoldi iteration to the case of a hermitian (or real symmetric) matrix.

Laplace equation (§ 13.3)
Archetypical elliptic PDE describing a steady state.

linear convergence (§ 4.2)
Sequence in which the difference between sequence value and limit asymptotically decreases by a constant factor at each term, making a straight line on a log-linear graph.
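Formally, a sequence \(x_k \to x^*\) converges linearly if
\[ \lim_{k\to\infty} \frac{|x_{k+1} - x^*|}{|x_k - x^*|} = \sigma \]
for some constant \(0 < \sigma < 1\).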

linear least-squares problem (§ 3.1)
Minimization of the 2-norm of the residual for an overdetermined linear system.

local truncation error (§ 6.2, § 6.6)
Discretization error made in one time step of an IVP solution method.

LU factorization (§ 2.4)
Factorization of a square matrix into the product of a unit lower triangular matrix and an upper triangular matrix.

machine epsilon (§ 1.1)
Distance from 1 to the next-largest floating-point number. Also called unit roundoff or machine precision, though the usages are not consistent across different references.
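In IEEE 754 double precision, for example, \(\epsilon_{\text{mach}} = 2^{-52} \approx 2.2\times 10^{-16}\).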

matrix condition number (§ 2.8)
Norm of the matrix times the norm of its inverse, equivalent to the condition number for solving a linear system.

method of lines (§ 11.2)
Solution technique for partial differential equations in which each independent variable is discretized separately.

multistep (§ 6.6)
Formula using information over more than a single time step to advance the solution.

Neumann condition (§ 10.1)
Boundary condition specifying the derivative of the solution.

Newton’s method (§ 4.3)
Rootfinding iteration that uses the linearization of the given function in order to define the next root approximation.
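For a scalar function \(f\), the iteration is
\[ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}. \]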

nodes (§ 5.1)
Values of the independent variable where an interpolant’s values are prescribed.

nonlinear least-squares problem (§ 4.7)
Minimization of the 2-norm of the residual of a function that depends nonlinearly on a vector.

norm (§ 2.7)
Means of defining the magnitude of a vector or matrix.

normal (§ 7.2)
Matrix that has a unitary eigenvalue decomposition.

normal equations (§ 3.2)
Square linear system equivalent to the linear least-squares problem.
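For the problem \(\min_{\mathbf{x}} \|\mathbf{b} - \mathbf{A}\mathbf{x}\|_2\), the normal equations are
\[ \mathbf{A}^T \mathbf{A}\, \mathbf{x} = \mathbf{A}^T \mathbf{b}. \]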

numerical integration (§ 5.6)
Estimation of a definite integral by combining values of the integrand, rather than by finding an antiderivative.

one-step IVP method (§ 6.2)
IVP solver that uses information from just one time level to advance to the next.

ONC matrix (§ 3.3)
Matrix whose columns are orthonormal vectors.

order of accuracy (§ 5.5, § 5.6, § 6.2, § 6.6)
Leading power of the truncation error as a function of a discretization size parameter.

orthogonal vectors (§ 3.3)
Nonzero vectors that have an inner product of zero.

orthogonal matrix (§ 3.3)
Square ONC matrix, i.e., matrix whose transpose is its inverse.

orthogonal polynomials (§ 9.4)
Family of polynomials whose distinct members have an integral inner product equal to zero, as with Legendre and Chebyshev polynomials.

orthonormal vectors (§ 3.3)
Vectors that are both mutually orthogonal and all of unit 2-norm.

outer product (§ 2.4)
Multiplication of two vectors resulting in a rank-1 matrix.

overdetermined (§ 3.1)
Characterized by having more constraints than available degrees of freedom.

piecewise linear (§ 5.2)
Function that is linear between each consecutive pair of nodes, but whose slope may jump at the nodes.

PLU factorization (§ 2.6)
LU factorization with row pivoting.

power iteration (§ 8.2)
Repeated application of a matrix to a vector, followed by normalization, resulting in convergence to an eigenvector for the dominant eigenvalue.
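In its simplest form, starting from a vector \(\mathbf{x}_1\), one computes
\[ \mathbf{x}_{k+1} = \frac{\mathbf{A}\mathbf{x}_k}{\|\mathbf{A}\mathbf{x}_k\|}, \qquad k = 1, 2, \ldots. \]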

preconditioning (§ 8.8)
Use of an approximate inverse to improve the convergence rate of Krylov iterations for a linear system.

pseudoinverse (§ 3.2)
Rectangular matrix that maps data to solution in the linear least-squares problem, generalizing the matrix inverse.

QR factorization (§ 3.3)
Representation of a matrix as the product of an orthogonal and an upper triangular matrix.

quadratic convergence (§ 4.3)
Sequence in which the difference between sequence value and limit asymptotically decreases by a constant times the square of the preceding difference.
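Formally, \(|x_{k+1} - x^*| \le C\, |x_k - x^*|^2\) asymptotically for some constant \(C\), which roughly doubles the number of accurate digits at each term.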

quasi-Newton methods (§ 4.6)
Rootfinding methods that overcome the issues of Jacobian computation and lack of global convergence in Newton’s method.

quasimatrix (§ 9.4)
Collection of functions (such as orthogonal polynomials) that have algebraic parallels to columns of a matrix.

Rayleigh quotient (§ 7.4)
Function of vectors that equals an eigenvalue when given its eigenvector as input.
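For a matrix \(\mathbf{A}\) it is defined by
\[ R_{\mathbf{A}}(\mathbf{x}) = \frac{\mathbf{x}^* \mathbf{A}\, \mathbf{x}}{\mathbf{x}^* \mathbf{x}}, \]
so that \(R_{\mathbf{A}}(\mathbf{x}) = \lambda\) whenever \(\mathbf{A}\mathbf{x} = \lambda\mathbf{x}\).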

reduced QR factorization
See thin QR.

reduced SVD
See thin SVD.

residual (§ 2.8, § 4.1)
For a linear system, the difference between \(\mathbf{b}\) and \(\mathbf{A}\tilde{\mathbf{x}}\) for a computed solution approximation \(\tilde{\mathbf{x}}\). More generally, the actual value of a quantity that is made zero by an exact solution.

restarting (§ 8.5)
Technique used in GMRES to prevent the work per iteration and overall storage from growing uncontrollably.

rootfinding problem (§ 4.1)
Finding an input value of a given function that makes the function equal to zero.

row pivoting (§ 2.6)
Reordering rows during LU factorization to ensure that the factorization exists and can be computed stably.

Runge phenomenon (§ 9.3)
Manifestation of the instability of polynomial interpolation at equally spaced nodes as degree increases.

Runge–Kutta (§ 6.4)
One-step method for IVPs that evaluates the derivative of the solution more than once to advance a single step.

secant method (§ 4.4)
Scalar quasi-Newton method that uses a secant line rather than a tangent line to define a root estimate.

shooting (§ 10.2)
Unstable technique for solving a boundary-value problem in which unknown initial values are sought by a rootfinding algorithm.

simple root (§ 4.1)
Root of a function at which the derivative of the function is nonzero.

singular value decomposition (SVD) (§ 7.3)
Expression of a matrix as a product of two orthogonal/unitary matrices and a nonnegative diagonal matrix.

sparse (§ 2.9, § 8.1)
Describing a matrix that has mostly zero elements for structural reasons.

spectral convergence (§ 9.3)
Exponentially rapid decrease in error as the number of interpolation nodes increases, e.g., as observed in Chebyshev polynomial and trigonometric interpolation.

stability region (§ 11.3)
Region of the complex plane describing when numerical solution of a linear IVP is bounded as \(t\to\infty\).

step size (§ 6.2)
Increment in time between successive solution values in a numerical IVP solver.

stiff (§ 6.7, § 11.4)
Describes an IVP in which stability is a greater restriction than accuracy for many solution methods, usually favoring the use of an implicit time stepping method.

subtractive cancellation (§ 1.2)
Growth in relative error that occurs when two numbers are added/subtracted to get a result that is much smaller in magnitude than the operands; also called loss of significance or cancellation error.

superlinear convergence (§ 4.4)
Sequence for which the convergence is asymptotically faster than any linear rate.

symmetric matrix (§ 2.2)
Square matrix that is equal to its transpose.

symmetric positive definite (SPD) matrix (§ 2.9)
Matrix that is symmetric and positive definite, thereby permitting a Cholesky factorization. Correspondingly called hermitian positive definite in the complex case.

tensor-product domain (§ 13.1)
A domain that can be parameterized using variables that lie in a logical rectangle or cuboid; i.e., each variable independently varies in an interval.

thin QR factorization (§ 3.3)
Variant of the QR factorization that discards information not needed to fully represent the original matrix.

thin SVD (§ 7.3)
Variant of the singular value decomposition that discards information not needed to fully represent the original matrix.

trapezoid formula (§ 5.6, § 6.6)
Numerical integration method resulting from integration of a piecewise linear interpolant.
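On equally spaced nodes \(t_k = a + kh\) with \(h = (b-a)/n\), it takes the form
\[ \int_a^b f(x)\, dx \approx h\left[ \tfrac{1}{2}f(t_0) + f(t_1) + \cdots + f(t_{n-1}) + \tfrac{1}{2}f(t_n) \right]. \]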

triangular matrix (§ 2.3)
Matrix that is all zero either above (for lower triangular) or below (for upper triangular) the main diagonal.

tridiagonal matrix (§ 2.9)
Matrix with nonzeros only on the main diagonal and the adjacent two diagonals.

trigonometric interpolation (§ 9.5)
Interpolation of a periodic function by a linear combination of real or complex trigonometric functions.

truncation error (§ 5.5, § 5.6)
Difference between an exact value and an approximation, such as one that truncates an infinite series.

unit triangular matrix (§ 2.4)
Triangular matrix that has a 1 in every position on the main diagonal.

unit vector (§ 2.7)
A vector whose norm equals 1.

unitary (§ 7.2)
Square matrix with complex-valued entries whose columns are orthonormal.

unstable (§ 1.4)
Allowing perturbations of the data to have much larger effects on the results than can be explained by the problem’s condition number.

upper Hessenberg (§ 8.4)
Describing a matrix that has nonzeros only in the upper triangle and first subdiagonal.

Vandermonde matrix (§ 2.1)
Matrix whose columns are formed from elementwise powers of a given vector, important for polynomial interpolation and approximation of data.
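For nodes \(t_1,\ldots,t_n\), for example,
\[ \mathbf{V} = \begin{bmatrix} 1 & t_1 & \cdots & t_1^{n-1} \\ 1 & t_2 & \cdots & t_2^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & t_n & \cdots & t_n^{n-1} \end{bmatrix}, \]
and solving \(\mathbf{V}\mathbf{c} = \mathbf{y}\) gives the coefficients of the polynomial of degree \(n-1\) interpolating the data \((t_k, y_k)\).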

weights (§ 5.4, § 5.6, § 9.6)
Coefficients in a linear combination of function values in a finite-difference or integration method.

zero-stable (§ 6.8)
Boundedness property of multistep methods that is required for convergence.