Suppose you have a matrix \(A \in \mathbb{R}^{n \times n}\) and you want to invert it. I've already explained how to invert a matrix (English explanation), but I didn't provide any code or a runtime analysis.
C++ Code
#include <iostream>
#include <cmath>
#include <vector>
using namespace std;
void print(vector< vector<double> > A) {
    int n = A.size();
    for (int i=0; i<n; i++) {
        for (int j=0; j<2*n; j++) {
            cout << A[i][j] << "\t";
            if (j == n-1) {
                cout << "| ";
            }
        }
        cout << "\n";
    }
    cout << endl;
}
void calculateInverse(vector< vector<double> >& A) {
    int n = A.size();
    for (int i=0; i<n; i++) {
        // Search for maximum (in absolute value) in this column
        double maxEl = abs(A[i][i]);
        int maxRow = i;
        for (int k=i+1; k<n; k++) {
            if (abs(A[k][i]) > maxEl) {
                maxEl = abs(A[k][i]);
                maxRow = k;
            }
        }
        // Swap maximum row with current row (column by column)
        for (int k=i; k<2*n; k++) {
            double tmp = A[maxRow][k];
            A[maxRow][k] = A[i][k];
            A[i][k] = tmp;
        }
        // Make all rows below this one 0 in current column
        for (int k=i+1; k<n; k++) {
            double c = -A[k][i]/A[i][i];
            for (int j=i; j<2*n; j++) {
                if (i==j) {
                    A[k][j] = 0;
                } else {
                    A[k][j] += c * A[i][j];
                }
            }
        }
    }
    // Solve equation Ax=b for an upper triangular matrix A
    for (int i=n-1; i>=0; i--) {
        for (int k=n; k<2*n; k++) {
            A[i][k] /= A[i][i];
        }
        // this is not necessary, but the output looks nicer:
        A[i][i] = 1;
        for (int rowModify=i-1; rowModify>=0; rowModify--) {
            for (int columnModify=n; columnModify<2*n; columnModify++) {
                A[rowModify][columnModify] -= A[i][columnModify]
                                              * A[rowModify][i];
            }
            // this is not necessary, but the output looks nicer:
            A[rowModify][i] = 0;
        }
    }
}
int main() {
    int n;
    cin >> n;
    vector<double> line(2*n, 0);
    vector< vector<double> > A(n, line);
    // Read input data
    for (int i=0; i<n; i++) {
        for (int j=0; j<n; j++) {
            cin >> A[i][j];
        }
    }
    for (int i=0; i<n; i++) {
        A[i][n+i] = 1;
    }
    // Print input
    print(A);
    // Calculate solution
    calculateInverse(A);
    // Print result
    cout << "Result:" << endl;
    print(A);
}
This code is VERY similar to the code of Gaussian elimination. In fact, I've only changed all occurrences of n+1 to 2*n. I also had to change the back-substitution step, as all operations now act on the right half of the augmented matrix instead of a single vector.
Time complexity
Elimination phase (the loops that zero out the entries below the diagonal):
\(\displaystyle Operations = \sum_{i=0}^{n-1} \left (\sum_{k=i+1}^{n-1} (\sum_{j=i}^{2n} 1) \right ) = \frac{5}{6} n^3 - \frac{5}{6} n\) (see Wolfram|Alpha)
Back substitution (the loops that normalize each row and clear the entries above the diagonal):
\(\displaystyle Operations = \sum_{i=0}^{n-1} \left (\sum_{k=0}^{i-1} (\sum_{j=n}^{2n} 1) \right ) = \frac{1}{2} n^3 - \frac{1}{2} n\) (see Wolfram|Alpha)
So we need about \(\frac{4}{3} n^3 + \mathcal{O}(n^2)\) operations to invert a matrix with Gaussian elimination.
Space complexity
My algorithm stores the augmented matrix \([A \mid I]\), i.e. \(2n^2\) doubles, so it needs \(\mathcal{O}(n^2)\) space.
Inverting an upper triangular matrix
Suppose you have an upper triangular matrix \(A \in \mathbb{R}^{n \times n}\) that you would like to invert. It could look like this: \(\displaystyle A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & a_{nn} \end{pmatrix}\)
How could we improve the algorithm from above to speed it up?
Well, we don't need the pivoting and elimination phase any more, as those loops only serve to bring \(A\) to an upper triangular form. But we still need the back-substitution loops. So we can improve the algorithm to a complexity of \(\frac{1}{2} n^3 + \mathcal{O}(n^2)\).