Iterative Methods for Sparse Linear Systems, Second Edition gives an in-depth, up-to-date view of practical algorithms for solving large-scale linear systems of equations. These equations can number in the millions and are sparse in the sense that each involves only a small number of unknowns. The methods described are iterative, i.e., they provide sequences of approximations that will converge to the solution. This new edition includes a wide range of the best methods available today. The author has added a new chapter on multigrid techniques and has updated material throughout the text, particularly the chapters on sparse matrices, Krylov subspace methods, preconditioning techniques, and parallel preconditioners. Material on older topics has been removed or shortened, numerous exercises have been added, and many typographical errors have been corrected. The updated and expanded bibliography now includes more recent works emphasizing new and important research topics in this field. This book can be used to teach graduate-level courses on iterative methods for linear systems. Engineers and mathematicians will find its contents easily accessible, and practitioners and educators will value it as a helpful resource. The preface includes syllabi that can be used for either a semester- or quarter-length course in both mathematics and computer science.
For our country's mathematical enterprise to develop further, mathematicians must remain indifferent to fame and fortune and work even harder. At the same time, we must also create a more favorable external environment for the development of mathematics. This chiefly means strengthening support for and investment in mathematics, so that mathematicians enjoy better working and living conditions; it also includes improving and strengthening mathematical publishing.
On the publishing side, besides publishing our own results well and quickly, introducing advanced publications from abroad is undoubtedly important and indispensable. By photo-reprinting a selection of the fine new books they have published, Science Press enables mathematicians throughout our country to purchase them at a lower price, and in particular gives mathematicians working in remote regions ready access to them; this is undoubtedly of great benefit to mathematical research and teaching in our country.
This time Science Press has purchased the rights and reprinted, in a single batch, 23 mathematics books published by Springer-Verlag. This is a worthwhile undertaking and one that deserves to be continued. Roughly classified, the 23 books comprise 5 in pure mathematics, 6 in applied mathematics, and 12 in computational mathematics, some of them interdisciplinary in nature. All are quite recent: 16 were published after 2000, and the rest after 1990. They allow readers to reach the frontiers of various areas of mathematics quickly. For example, the three pure-mathematics volumes on number theory, algebra, and topology are all installments of an encyclopedia of mathematics written by leading mathematicians in their fields, and are very helpful to researchers seeking both the frontier and an overview of those areas. In keeping with the character of each discipline, the pure-mathematics titles emphasize the "classics," while the applied and computational titles emphasize the "frontier." Most of the authors are internationally renowned mathematicians; for example, the author of Topology, Novikov, is a member of the Russian Academy of Sciences and a recipient of the Fields Medal and the Wolf Prize in Mathematics. The works of such eminent mathematicians will undoubtedly provide excellent guidance to our researchers.
Of course, 23 books can cover only part of mathematics, so this work should continue. Going further, some good books with a broad readership should also be translated into Chinese so as to reach an even larger audience. In short, I warmly support Science Press's initiative of photo-reprinting selected Springer mathematics titles, and I hope this work achieves even greater success.
Contents
Preface to the Second Edition xiii
Preface to the First Edition xvii
1 Background in Linear Algebra 1
1.1 Matrices
1.2 Square Matrices and Eigenvalues 2
1.3 Types of Matrices 4
1.4 Vector Inner Products and Norms 5
1.5 Matrix Norms 7
1.6 Subspaces, Range, and Kernel 9
1.7 Orthogonal Vectors and Subspaces 10
1.8 Canonical Forms of Matrices 14
1.8.1 Reduction to the Diagonal Form 15
1.8.2 The Jordan Canonical Form 15
1.8.3 The Schur Canonical Form 16
1.8.4 Application to Powers of Matrices 18
1.9 Normal and Hermitian Matrices 20
1.9.1 Normal Matrices 20
1.9.2 Hermitian Matrices 23
1.10 Nonnegative Matrices, M-Matrices 25
1.11 Positive Definite Matrices 29
1.12 Projection Operators 32
1.12.1 Range and Null Space of a Projector 32
1.12.2 Matrix Representations 34
1.12.3 Orthogonal and Oblique Projectors 34
1.12.4 Properties of Orthogonal Projectors 36
1.13 Basic Concepts in Linear Systems 37
1.13.1 Existence of a Solution 37
1.13.2 Perturbation Analysis 38
Exercises 39
Notes and References 43
2 Discretization of Partial Differential Equations 45
2.1 Partial Differential Equations 45
2.1.1 Elliptic Operators 46
2.1.2 The Convection Diffusion Equation 48
2.2 Finite Difference Methods 48
2.2.1 Basic Approximations 48
2.2.2 Difference Schemes for the Laplacian Operator 50
2.2.3 Finite Differences for One-Dimensional Problems 51
2.2.4 Upwind Schemes 52
2.2.5 Finite Differences for Two-Dimensional Problems 54
2.2.6 Fast Poisson Solvers 55
2.3 The Finite Element Method 60
2.4 Mesh Generation and Refinement 66
2.5 The Finite Volume Method 68
Exercises 71
Notes and References 72
3 Sparse Matrices 73
3.1 Introduction 73
3.2 Graph Representations 75
3.2.1 Graphs and Adjacency Graphs 75
3.2.2 Graphs of PDE Matrices 76
3.3 Permutations and Reorderings 77
3.3.1 Basic Concepts 77
3.3.2 Relations with the Adjacency Graph 79
3.3.3 Common Reorderings 81
3.3.4 Irreducibility 89
3.4 Storage Schemes 89
3.5 Basic Sparse Matrix Operations 92
3.6 Sparse Direct Solution Methods 93
3.6.1 MD Ordering 93
3.6.2 ND Ordering 94
3.7 Test Problems 94
Exercises 97
Notes and References 101
4 Basic Iterative Methods 103
4.1 Jacobi, Gauss-Seidel, and Successive Overrelaxation 103
4.1.1 Block Relaxation Schemes 106
4.1.2 Iteration Matrices and Preconditioning 110
4.2 Convergence 111
4.2.1 General Convergence Result 112
4.2.2 Regular Splittings 115
4.2.3 Diagonally Dominant Matrices 116
4.2.4 Symmetric Positive Definite Matrices 119
4.2.5 Property A and Consistent Orderings 119
4.3 Alternating Direction Methods 124
Exercises 126
Notes and References 128
5 Projection Methods 129
5.1 Basic Definitions and Algorithms 129
5.1.1 General Projection Methods 130
5.1.2 Matrix Representation 131
5.2 General Theory 132
5.2.1 Two Optimality Results 133
5.2.2 Interpretation in Terms of Projectors 134
5.2.3 General Error Bound 135
5.3 One-Dimensional Projection Processes 137
5.3.1 Steepest Descent 138
5.3.2 MR Iteration 140
5.3.3 Residual Norm Steepest Descent 142
5.4 Additive and Multiplicative Processes 143
Exercises 145
Notes and References 149
6 Krylov Subspace Methods, Part I 151
6.1 Introduction 151
6.2 Krylov Subspaces 152
6.3 Arnoldi's Method 153
6.3.1 The Basic Algorithm 154
6.3.2 Practical Implementations 156
6.4 Arnoldi's Method for Linear Systems 159
6.4.1 Variation 1: Restarted FOM 160
6.4.2 Variation 2: IOM and DIOM 161
6.5 Generalized Minimal Residual Method 164
6.5.1 The Basic GMRES Algorithm 164
6.5.2 The Householder Version 165
6.5.3 Practical Implementation Issues 167
6.5.4 Breakdown of GMRES 171
6.5.5 Variation 1: Restarting 171
6.5.6 Variation 2: Truncated GMRES Versions 172
6.5.7 Relations Between FOM and GMRES 177
6.5.8 Residual Smoothing 181
6.5.9 GMRES for Complex Systems 184
6.6 The Symmetric Lanczos Algorithm 185
6.6.1 The Algorithm 185
6.6.2 Relation to Orthogonal Polynomials 186
6.7 The Conjugate Gradient Algorithm 187
6.7.1 Derivation and Theory 187
6.7.2 Alternative Formulations 191
6.7.3 Eigenvalue Estimates from the CG Coefficients 192
6.8 The Conjugate Residual Method 194
6.9 Generalized Conjugate Residual, ORTHOMIN, and ORTHODIR 194
6.10 The Faber-Manteuffel Theorem 196
6.11 Convergence Analysis 198
6.11.1 Real Chebyshev Polynomials 199
6.11.2 Complex Chebyshev Polynomials 200
6.11.3 Convergence of the CG Algorithm 203
6.11.4 Convergence of GMRES 205
6.12 Block Krylov Methods 208
Exercises 212
Notes and References 215
7 Krylov Subspace Methods, Part II 217
7.1 Lanczos Biorthogonalization 217
7.1.1 The Algorithm 217
7.1.2 Practical Implementations 220
7.2 The Lanczos Algorithm for Linear Systems 221
7.3 The Biconjugate Gradient and Quasi-Minimal Residual Algorithms 222
7.3.1 The BCG Algorithm 222
7.3.2 QMR Algorithm 224
7.4 Transpose-Free Variants 228
7.4.1 CGS 229
7.4.2 BICGSTAB 231
7.4.3 TFQMR 234
Exercises 241
Notes and References 243
8 Methods Related to the Normal Equations 245
8.1 The Normal Equations 245
8.2 Row Projection Methods 247
8.2.1 Gauss-Seidel on the Normal Equations 247
8.2.2 Cimmino's Method 249
8.3 Conjugate Gradient and Normal Equations 251
8.3.1 CGNR 252
8.3.2 CGNE 253
8.4 Saddle-Point Problems 254
Exercises 257
Notes and References 259
9 Preconditioned Iterations 261
9.1 Introduction 261
9.2 Preconditioned Conjugate Gradient 262
9.2.1 Preserving Symmetry 262
9.2.2 Efficient Implementations 265
9.3 Preconditioned Generalized Minimal Residual 267
9.3.1 Left-Preconditioned GMRES 268
9.3.2 Right-Preconditioned GMRES 269
9.3.3 Split Preconditioning 270
9.3.4 Comparison of Right and Left Preconditioning 271
9.4 Flexible Variants 272
9.4.1 FGMRES 273
9.4.2 DQGMRES 276
9.5 Preconditioned Conjugate Gradient for the Normal Equations 276
9.6 The Concus, Golub, and Widlund Algorithm 278
Exercises 279
Notes and References 280
10 Preconditioning Techniques 283
10.1 Introduction 283
10.2 Jacobi, Successive Overrelaxation, and Symmetric Successive Overrelaxation Preconditioners 284
10.3 Incomplete LU Factorization Preconditioners 287
10.3.1 ILU Factorizations 288
10.3.2 Zero Fill-in ILU (ILU(0)) 293
10.3.3 Level of Fill and ILU(p) 296
10.3.4 Matrices with Regular Structure 300
10.3.5 Modified ILU (MILU) 305
10.4 Threshold Strategies and Incomplete LU with Threshold 306
10.4.1 The ILUT Approach 307
10.4.2 Analysis 308
10.4.3 Implementation Details 310
10.4.4 The ILUTP Approach 312
10.4.5 The ILUS Approach 314
10.4.6 The ILUC Approach 316
10.5 Approximate Inverse Preconditioners 320
10.5.1 Approximating the Inverse of a Sparse Matrix 321
10.5.2 Global Iteration 321
10.5.3 Column-Oriented Algorithms 323
10.5.4 Theoretical Considerations 324
10.5.5 Convergence of Self-Preconditioned MR 326
10.5.6 AINVs via Bordering 329
10.5.7 Factored Inverses via Orthogonalization: AINV 331
10.5.8 Improving a Preconditioner 333
10.6 Reordering for Incomplete LU 333
10.6.1 Symmetric Permutations 333
10.6.2 Nonsymmetric Reorderings 335
10.7 Block Preconditioners 337
10.7.1 Block Tridiagonal Matrices 337
10.7.2 General Matrices 339
10.8 Preconditioners for the Normal Equations 339
10.8.1 Jacobi, SOR, and Variants 340
10.8.2 IC(0) for the Normal Equations 340
10.8.3 Incomplete Gram-Schmidt and ILQ 342
Exercises 345
Notes and References 349
11 Parallel Implementations 353
11.1 Introduction 353
11.2 Forms of Parallelism 354
11.2.1 Multiple Functional Units 354
11.2.2 Pipelining 354
11.2.3 Vector Processors 355
11.2.4 Multiprocessing and Distributed Computing 355
11.3 Types of Parallel Architectures 355
11.3.1 Shared Memory Computers 356
11.3.2 Distributed Memory Architectures 357
11.4 Types of Operations 359
11.5 Matrix-by-Vector Products 361
11.5.1 The CSR and CSC Formats 362
11.5.2 Matvecs in the Diagonal Format 364
11.5.3 The Ellpack-Itpack Format 364
11.5.4 The JAD Format 365
11.5.5 The Case of Distributed Sparse Matrices 366
11.6 Standard Preconditioning Operations 369
11.6.1 Parallelism in Forward Sweeps 369
11.6.2 Level Scheduling: The Case of Five-Point Matrices 370
11.6.3 Level Scheduling for Irregular Graphs 370
Exercises 373
Notes and References 375
12 Parallel Preconditioners 377
12.1 Introduction 377
12.2 Block Jacobi Preconditioners 378
12.3 Polynomial Preconditioners 379
12.3.1 Neumann Polynomials 380
12.3.2 Chebyshev Polynomials 381
12.3.3 Least-Squares Polynomials 383
12.3.4 The Nonsymmetric Case 386
12.4 Multicoloring 389
12.4.1 Red-Black Ordering 389
12.4.2 Solution of Red-Black Systems 390
12.4.3 Multicoloring for General Sparse Matrices 391
12.5 Multi-Elimination Incomplete LU 392
12.5.1 Multi-Elimination 393
12.5.2 ILUM 394
12.6 Distributed Incomplete LU and Symmetric Successive Overrelaxation 396
12.7 Other Techniques 399
12.7.1 AINVs 399
12.7.2 EBE Techniques 399
12.7.3 Parallel Row Projection Preconditioners 401
Exercises 402
Notes and References 404
13 Multigrid Methods 407
13.1 Introduction 407
13.2 Matrices and Spectra of Model Problems 408
13.2.1 The Richardson Iteration 412
13.2.2 Weighted Jacobi Iteration 414
13.2.3 Gauss-Seidel Iteration 416
13.3 Intergrid Operations 419
13.3.1 Prolongation 419
13.3.2 Restriction 421
13.4 Standard Multigrid Techniques 422
13.4.1 Coarse Problems and Smoothers 423
13.4.2 Two-Grid Cycles 424
13.4.3 V-Cycles and W-Cycles 426
13.4.4 FMG 429
13.5 Analysis of the Two-Grid Cycle 433
13.5.1 Two Important Subspaces 433
13.5.2 Convergence Analysis 435
13.6 Algebraic Multigrid 437
13.6.1 Smoothness in AMG 438
13.6.2 Interpolation in AMG 439
13.6.3 Defining Coarse Spaces in AMG 442
13.6.4 AMG via Multilevel ILU 442
13.7 Multigrid versus Krylov Methods 445
Exercises 446
Notes and References 449
14 Domain Decomposition Methods 451
14.1 Introduction 451
14.1.1 Notation 452
14.1.2 Types of Partitionings 453
14.1.3 Types of Techniques 453
14.2 Direct Solution and the Schur Complement 456
14.2.1 Block Gaussian Elimination 456
14.2.2 Properties of the Schur Complement 457
14.2.3 Schur Complement for Vertex-Based Partitionings 458
14.2.4 Schur Complement for Finite Element Partitionings 460
14.2.5 Schur Complement for the Model Problem 463
14.3 Schwarz Alternating Procedures 465
14.3.1 Multiplicative Schwarz Procedure 465
14.3.2 Multiplicative Schwarz Preconditioning 470
14.3.3 Additive Schwarz Procedure 472
14.3.4 Convergence 473
14.4 Schur Complement Approach 477
14.4.1 Induced Preconditioners 478
14.4.2 Probing 480
14.4.3 Preconditioning Vertex-Based Schur Complements 480
14.5 Full Matrix Methods 481
14.6 Graph Partitioning 483
14.6.1 Basic Definitions 484
14.6.2 Geometric Approach 484
14.6.3 Spectral Techniques 486
14.6.4 Graph Theory Techniques 487
Exercises 491
Notes and References 492
Bibliography 495
Index 517