Matrix multiplication computes each entry c_ij as the dot product of row i of A with column j of B. The number of columns of A must equal the number of rows of B.
| Operation | Condition | Result |
|---|---|---|
| A ± B | A, B ∈ ℝ^{m×n} (same size) | ℝ^{m×n} |
| λA | λ ∈ ℝ, A ∈ ℝ^{m×n} | ℝ^{m×n} |
| AB | A ∈ ℝ^{m×n}, B ∈ ℝ^{n×p} (cols(A) = rows(B)) | ℝ^{m×p} |
| AᵀB | A, B ∈ ℝ^{m×n} (rows(A) = rows(B)) | ℝ^{n×n} |
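The dimension rules above can be checked directly in NumPy (the shapes 2×3 and 3×4 here are purely illustrative):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # A ∈ R^{2×3}
B = np.arange(12).reshape(3, 4)   # B ∈ R^{3×4}

C = A @ B                         # cols(A) = rows(B) = 3, so AB is defined
print(C.shape)                    # (2, 4): result lives in R^{2×4}

# Entry c_ij is the dot product of row i of A with column j of B.
c_01 = A[0, :] @ B[:, 1]
print(c_01 == C[0, 1])            # True
```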
A triangular matrix has zeros either above or below the main diagonal. Its determinant is simply the product of the diagonal entries, and its eigenvalues are exactly the diagonal entries.
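Both facts are easy to verify numerically; the upper triangular matrix below is just an example:

```python
import numpy as np

# Upper triangular: zeros below the main diagonal.
T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 6.0]])

det = np.linalg.det(T)        # product of diagonal entries: 2 * 3 * 6 = 36
eigs = np.linalg.eigvals(T)   # eigenvalues are exactly the diagonal entries

print(np.isclose(det, 36.0))                              # True
print(np.allclose(np.sort(np.real(eigs)), [2.0, 3.0, 6.0]))  # True
```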
The transpose swaps rows and columns: row i becomes column i. A matrix is symmetric if it equals its transpose (A = Aᵀ), and antisymmetric (skew-symmetric) if Aᵀ = −A.
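A small illustration of both definitions: any square matrix splits into a symmetric plus a skew-symmetric part (the matrix A below is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

S = (A + A.T) / 2   # symmetric part: S equals its transpose
K = (A - A.T) / 2   # skew-symmetric part: K^T = -K

print(np.array_equal(S, S.T))     # True
print(np.array_equal(K.T, -K))    # True
print(np.array_equal(S + K, A))   # True: the two parts recover A
```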
The inverse A⁻¹ "undoes" the transformation of A. Multiplying A by its inverse gives the identity matrix. Only square matrices with non-zero determinant have an inverse.
To compute A⁻¹, write [A | I] and apply Gauss-Jordan (row reduce to RREF). When the left side becomes I, the right side is A⁻¹.
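A minimal sketch of this procedure (with partial pivoting added for numerical stability; the function name and test matrix are illustrative):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: swap in the largest entry of this column.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                     # scale pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]    # clear the rest of the column
    return M[:, n:]                               # right half is now A^-1

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(np.allclose(inverse_gauss_jordan(A) @ A, np.eye(2)))   # True
```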
An orthogonal matrix Q preserves lengths and angles. Its inverse is simply its transpose: QᵀQ = I. Geometrically, Q is either a rotation (det = +1) or a reflection (det = −1).
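The standard 2D rotation matrix makes all three properties concrete (the angle and vector below are arbitrary):

```python
import numpy as np

theta = np.pi / 3
# A 2D rotation matrix is orthogonal with determinant +1.
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q^-1 = Q^T

v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # True: length preserved
print(np.isclose(np.linalg.det(Q), 1.0))                     # True: a rotation
```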
A symmetric matrix S equals its transpose. Its eigenvalues are always real, and eigenvectors belonging to different eigenvalues are orthogonal. It can be diagonalized by an orthogonal matrix Q: QᵀSQ = D.
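NumPy's `eigh` is the eigensolver for symmetric matrices and returns exactly this decomposition (the matrix S here is just an example):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric: S == S.T

# eigh returns real eigenvalues and an orthogonal matrix Q
# whose columns are orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(S)
D = np.diag(eigvals)

print(np.allclose(Q.T @ S @ Q, D))      # True: Q^T S Q = D
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q is orthogonal
```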
The LR decomposition (also called LU) factors A into a lower triangular matrix L and an upper triangular matrix R. The elimination factors go into L. This makes solving Ax = b efficient: first solve Ly = b (forward), then Rx = y (backward).
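A bare-bones sketch of Doolittle elimination and the two-step solve (no pivoting, so it assumes nonzero pivots; function names and the test system are illustrative):

```python
import numpy as np

def lu_decompose(A):
    """Factor A = L R with L unit lower triangular, R upper triangular."""
    n = A.shape[0]
    L, R = np.eye(n), A.astype(float).copy()
    for col in range(n):
        for row in range(col + 1, n):
            factor = R[row, col] / R[col, col]   # elimination factor...
            L[row, col] = factor                 # ...goes into L
            R[row] -= factor * R[col]            # zero out below the pivot
    return L, R

def lu_solve(L, R, b):
    # Both systems are triangular, so each solve is cheap.
    y = np.linalg.solve(L, b)   # forward:  L y = b
    x = np.linalg.solve(R, y)   # backward: R x = y
    return x

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])
b = np.array([3.0, 9.0])
L, R = lu_decompose(A)
x = lu_solve(L, R, b)
print(np.allclose(L @ R, A))   # True: the factorization reproduces A
print(np.allclose(A @ x, b))   # True: x solves the original system
```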