One of the fundamental problems in science and engineering is the solution of linear systems of equations. Such systems arise frequently in scientific computing, for example from finite difference or finite element approximations to partial differential equations, as intermediate steps in the solution of nonlinear problems, or as subproblems in linear and nonlinear programming. Even when the coefficient matrix is nonsingular, so that the problem is trivial from a mathematical point of view, the actual computation of the solution may become very complicated, since it is carried out in finite precision and each arithmetic operation takes a finite amount of time.

From a practical point of view, for linear systems of small size the standard approach is to use direct methods, such as Gaussian elimination, which obtain the solution from a factorization of the coefficient matrix. However, the dimension of the problems to be solved has grown steadily, driven by the need for more accuracy and by the greater amounts of information provided by developments in many sciences; systems arising from three-dimensional partial differential equations are a typical example. For such problems, direct approaches become prohibitive both in storage requirements and in computing time, and the only practical alternative is to use iterative algorithms. Fortunately, the resulting systems usually have some special structure, of which sparsity is the most common. The field of iterative methods has seen an explosion of activity, spurred by extraordinary technological advances in science and engineering; the past six decades have been particularly rich in new developments, providing a valuable toolbox of algorithms for solving the large problems that arise in scientific and industrial computational models.
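As a minimal sketch of the direct approach described above, the following toy routine solves a small system by Gaussian elimination with partial pivoting: the coefficient matrix is reduced to upper-triangular form (a factorization in disguise) and the solution is recovered by back substitution. The function name and the sample data are illustrative only, not taken from any particular library.

```python
import numpy as np

def solve_direct(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    A toy illustration of a direct method: for an n x n system the cost
    grows like O(n^3) in time and O(n^2) in storage, which is exactly
    what becomes prohibitive for large problems.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination with partial pivoting for numerical stability.
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))  # row of largest pivot
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Example: a small, well-conditioned random system (illustrative data).
rng = np.random.default_rng(0)
A_demo = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)
b_demo = rng.standard_normal(5)
x_demo = solve_direct(A_demo, b_demo)
```

The whole matrix must be stored and updated, even where it was originally zero; this fill-in is one reason direct methods lose their appeal on large sparse problems.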
Especially attractive are iterative methods that involve the coefficient matrix only in the form of matrix-vector products with the matrix itself or with its Hermitian transpose. Such schemes naturally exploit the special structure of large sparse linear systems, and they are also well suited to the solution of certain large dense systems for which matrix-vector products can be computed cheaply. As a result, research has focused on Krylov subspace methods, which can be viewed as variants of projection methods. The search for efficient Krylov methods for linear systems with a general coefficient matrix has been dominated by two different approaches, both of which are generalizations of the conjugate gradient method. The first approach removes the requirement of short recurrences; the most popular member of this family is the generalized minimal residual method. The second approach generalizes the conjugate gradient method while retaining short recurrences; the archetype of this latter class is the biconjugate gradient method.
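To make concrete how a Krylov method touches the matrix only through matrix-vector products, the sketch below implements the conjugate gradient method for a symmetric positive definite system, with the matrix supplied as a callable `matvec` and never formed explicitly. The function names and the finite-difference example are assumptions for illustration, not part of the text.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient for Ax = b with A symmetric positive definite.

    The matrix enters only through the callable `matvec`, so the same
    code serves sparse, structured, or matrix-free operators alike.
    A minimal sketch with short (three-term) recurrences.
    """
    x = np.zeros_like(b, dtype=float)
    r = b - matvec(x)          # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x

# Example: the tridiagonal 1D Laplacian from a finite-difference
# discretization, applied matrix-free (only O(n) storage is needed).
n = 100

def laplacian_matvec(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = cg(laplacian_matvec, b)
```

Note that the solver never sees the matrix entries: only the action of the operator on a vector is required, which is precisely the property that makes these methods attractive for large sparse or structured systems.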