Streaming Media

Document Type

Video

Publication Date

4-5-2021

Keywords

Iterative methods, linear systems, fixed-point, semi-iterative, Krylov subspace, preconditioning, conjugate gradient, parallel, kernel matrices

Abstract

Iterative methods for the solution of linear systems of equations, such as stationary, semi-iterative, and Krylov subspace methods, are classical methods taught in numerical analysis courses, but adapting these methods to run efficiently at large scale on high-performance computers is challenging and a constantly evolving topic. Preconditioners, which are necessary to aid the convergence of iterative methods, come in many forms, from algebraic to physics-based; they are regularly being developed for linear systems arising from different classes of problems and are likewise evolving with high-performance computers. This lecture will cover the background and some recent developments on iterative methods and preconditioning in the context of high-performance parallel computers. Topics include asynchronous iterative methods that avoid the potentially high cost of synchronizing very large numbers of computational threads, parallel sparse approximate inverse preconditioners, parallel incomplete factorization preconditioners and sparse triangular solvers, and preconditioning with hierarchical rank-structured matrices for kernel matrix equations.
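
As a concrete illustration of the class of methods the abstract refers to, the sketch below shows a preconditioned conjugate gradient iteration with a simple Jacobi (diagonal) preconditioner in plain NumPy. The test matrix, tolerance, and preconditioner choice are illustrative assumptions only; they are not the specific parallel algorithms or software presented in the lecture.

```python
# Minimal sketch of preconditioned conjugate gradient (PCG) with a
# Jacobi (diagonal) preconditioner. Illustrative only: the parallel,
# asynchronous, and hierarchical preconditioners discussed in the
# lecture are considerably more involved.
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive definite A.
    M_inv(r) applies the preconditioner inverse to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    z = M_inv(r)               # preconditioned residual
    p = z.copy()               # initial search direction
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz     # coefficient for the new search direction
        p = z + beta * p
        rz = rz_new
    return x, max_iter

if __name__ == "__main__":
    # Small SPD test problem: 1-D Laplacian (tridiagonal matrix).
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    d = np.diag(A)
    x, iters = pcg(A, b, M_inv=lambda r: r / d)
    print(iters, np.linalg.norm(A @ x - b))
```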

Comments

The captions accompanying these videos were generated automatically by Kaltura software, which may not accurately transcribe scientific, medical, and technical terms.
