In mathematics, and in particular linear algebra, the Moore–Penrose inverse $A^+$ of a matrix $A$ is the most widely known generalization of the inverse matrix.[1][2][3][4] It was independently described by E. H. Moore[5] in 1920, Arne Bjerhammar[6] in 1951, and Roger Penrose[7] in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.

A common use of the pseudoinverse is to compute a "best fit" (least squares) solution to a system of linear equations that lacks a unique solution (see below under § Applications). Another use is to find the minimum (Euclidean) norm solution to a system of linear equations with multiple solutions. The pseudoinverse facilitates the statement and proof of results in linear algebra.
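As a concrete illustration, here is a minimal sketch using NumPy with a made-up overdetermined system (the matrix and right-hand side are arbitrary examples): multiplying by the pseudoinverse gives the same result as a dedicated least-squares solver.

```python
import numpy as np

# Hypothetical overdetermined system A x = b with no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([6.0, 0.0, 0.0])

# x = A^+ b is a least-squares solution; when the system has many
# solutions, it is the one of minimum Euclidean norm.
x = np.linalg.pinv(A) @ b

# Same result as the dedicated least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_lstsq))  # True
```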

The pseudoinverse is defined and unique for all matrices whose entries are real or complex numbers. It can be computed using the singular value decomposition.
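The following sketch shows the SVD-based construction in NumPy; the helper name `pinv_via_svd` and the tolerance used to discard tiny singular values are illustrative choices, not a library API. Writing $A = U \Sigma V^*$, the pseudoinverse is $A^+ = V \Sigma^+ U^*$, where $\Sigma^+$ inverts the nonzero singular values.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Illustrative SVD-based pseudoinverse: A = U diag(s) V*, so
    A^+ = V diag(s^+) U*, with s^+ inverting only the nonzero singular values."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    # Treat singular values below a relative tolerance as zero.
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

A = np.random.default_rng(0).normal(size=(4, 3))
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))  # True
```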

Notation

The following conventions are adopted in the discussion below.

- $\mathbb{K}$ denotes one of the fields of real or complex numbers, $\mathbb{R}$ or $\mathbb{C}$. The vector space of $m \times n$ matrices over $\mathbb{K}$ is denoted by $\mathbb{K}^{m \times n}$.
- For $A \in \mathbb{K}^{m \times n}$, $A^*$ denotes the conjugate transpose (Hermitian transpose) of $A$; if $\mathbb{K} = \mathbb{R}$, then $A^* = A^{\mathsf{T}}$.

Definition

For $A \in \mathbb{K}^{m \times n}$, a pseudoinverse of $A$ is defined as a matrix $A^+ \in \mathbb{K}^{n \times m}$ satisfying all of the following four criteria, known as the Moore–Penrose conditions:[7][8]

1. $A A^+ A = A$ ($A A^+$ need not be the general identity matrix, but it maps all column vectors of $A$ to themselves);
2. $A^+ A A^+ = A^+$ ($A^+$ acts like a [weak inverse](https://en.wikipedia.org/wiki/Weak_inverse));
3. $(A A^+)^* = A A^+$ ($A A^+$ is [Hermitian](https://en.wikipedia.org/wiki/Hermitian_matrix));
4. $(A^+ A)^* = A^+ A$ ($A^+ A$ is also Hermitian).
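
These conditions can be checked numerically. The short sketch below (assuming NumPy and an arbitrarily chosen random matrix) verifies all four for the pseudoinverse returned by `np.linalg.pinv`:

```python
import numpy as np

# Numerically check the four Moore–Penrose conditions for a random matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))
A_plus = np.linalg.pinv(A)

print(np.allclose(A @ A_plus @ A, A))                   # 1. A A+ A = A
print(np.allclose(A_plus @ A @ A_plus, A_plus))         # 2. A+ A A+ = A+
print(np.allclose((A @ A_plus).conj().T, A @ A_plus))   # 3. (A A+)* = A A+
print(np.allclose((A_plus @ A).conj().T, A_plus @ A))   # 4. (A+ A)* = A+ A
```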