8.9. Next steps

The iterative solution of large linear systems is a vast and difficult subject. A broad yet detailed introduction, including classical topics not covered in this chapter such as the Jacobi and Gauss–Seidel methods, is [Saa03]. A more focused introduction to Krylov methods is given in [vandVorst03].
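To give a flavor of the classical methods just mentioned, here is a minimal sketch of Jacobi iteration, assuming Python with NumPy (the function name `jacobi` and the small test system are illustrative choices, not code from the references). Each sweep solves the \(i\)th equation for the \(i\)th unknown, using the previous sweep's values for all the others.

```python
import numpy as np

def jacobi(A, b, x0, maxiter=100, tol=1e-8):
    """Jacobi iteration: each sweep solves equation i for unknown i,
    using the previous sweep's values for all other unknowns."""
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal part of A
    x = x0.copy()
    for _ in range(maxiter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# A small diagonally dominant system, for which Jacobi is guaranteed to converge
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b, np.zeros(3)))
```

Gauss–Seidel differs only in using the newest available values within a sweep, which for many model problems converges roughly twice as fast as Jacobi; see [Saa03] for the theory.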

The conjugate gradient method was originally conceived as a direct method: in exact arithmetic, it finds the solution of an \(n\times n\) system in at most \(n\) steps. In floating-point arithmetic, however, this finite-termination property fails. As the method progresses, the search directions gradually lose their mutual conjugacy and become nearly linearly dependent, which spoils the exact-termination argument. The method was not successful until it came to be viewed as an iterative method that could be stopped once a reasonable approximation was reached. It was discovered independently by Hestenes and Stiefel, who joined forces to publish a widely cited paper [HS52] as part of an early research program in computing run by what was then the (US) National Bureau of Standards (now the National Institute of Standards and Technology). It took until the 1970s for the method to catch on as a computational tool [GOLeary89]. The interested reader can visit the SIAM History Project at http://history.siam.org/hestenes.htm for an article by Hestenes recounting the discovery (reprinted from Nash [Nas90]).
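The loss of finite termination is easy to observe numerically. The following sketch, assuming Python with NumPy, runs exactly \(n\) conjugate gradient steps on an ill-conditioned symmetric positive definite system; the bare-bones `cg` routine and the synthetic test matrix are illustrative constructions, not code from the sources above. In exact arithmetic the residual would be zero after \(n\) steps, but in floating point it typically remains far larger.

```python
import numpy as np

def cg(A, b, nsteps):
    """Bare-bones conjugate gradient, run for a fixed number of steps."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(nsteps):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 50
# SPD test matrix with eigenvalues spread over ten orders of magnitude
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, 10, n)) @ Q.T
A = (A + A.T) / 2                    # enforce exact symmetry
b = A @ np.ones(n)                   # so the exact solution is all ones

x = cg(A, b, n)                      # n steps: exact answer in perfect arithmetic
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # typically far from machine epsilon
```

The worse the conditioning, the more rapidly the computed search directions lose conjugacy, and the further the \(n\)-step result falls from the exact answer.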

For those not experienced with it, preconditioning can seem like something of an art, since the approach that works best often depends strongly on the application. Summaries of some approaches can be found in Quarteroni et al. [QSS07] and Trefethen and Bau [TI97].
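Even the simplest preconditioner of all, diagonal (Jacobi) scaling, illustrates the basic mechanics. Below is a sketch of preconditioned conjugate gradient, again assuming Python with NumPy; the routine `pcg`, the helper `Minv_apply`, and the test matrix are illustrative assumptions, not constructions from the references. On a matrix whose rows have wildly different scales, diagonal scaling alone can cut the iteration count by orders of magnitude.

```python
import numpy as np

def pcg(A, b, Minv_apply, maxiter=1000, tol=1e-8):
    """Preconditioned conjugate gradient; Minv_apply(r) applies M^{-1} to r.
    Returns the approximate solution and the number of iterations used."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv_apply(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = Minv_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

rng = np.random.default_rng(1)
n = 300
# SPD matrix: strongly varying diagonal plus a small symmetric perturbation
d = np.logspace(0, 4, n)
A = np.diag(d) + 1e-3 * rng.standard_normal((n, n))
A = (A + A.T) / 2
b = rng.standard_normal(n)

dA = np.diag(A)
_, k_plain = pcg(A, b, lambda r: r)        # no preconditioning (M = I)
_, k_jacobi = pcg(A, b, lambda r: r / dA)  # Jacobi (diagonal) preconditioner
print(k_plain, k_jacobi)                   # diagonal scaling needs far fewer steps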