Solving Linear Systems in C

Linear solver implementation in C. Solves systems defined with CSR matrices using the Jacobi method and Successive Over-Relaxation.

# TL;DR

Focus: Algorithms, Numerical Methods, Software Development
Stack: C
Features:
  • Numerical solving of linear systems with the Jacobi method and Successive Over-Relaxation
  • Input and output of CSR matrices
  • Preconditioners prepare matrices for numerical solving
Links:

This was an implementation of the Jacobi method and Successive Over-Relaxation (SOR) for solving linear systems, written in C. This program accepts a file in the Matrix Market format containing a compressed, sparse matrix, then uses a selected method and specified number of iterations to solve the system.

# computer go brrr-

I think the heading describes this pretty succinctly. Whereas a heuristic tries to solve a problem with an approximation that's just good enough, numerical methods are all about crafting an algorithm and running it on the beefiest hardware you have for as long as you can bear it. It's nowhere near as elegant as heuristics, but it works in a lot of situations.

# But Why Matrices?

Because solving linear systems is important for a multitude of things I want to look into as I continue exploring the world of code. Probably the best example I can give is the use of eigenvectors and eigenvalues in graphics applications (geometric transformations), Markov chains, calculating stresses, and even in facial recognition (with eigenfaces).

(i won't even lie, these are kinda creepy)

# Code and Features

There were two methods I wanted to explore in particular, those being the Jacobi method and Successive over-relaxation. I chose these two methods in particular because they're both methods of relaxing (solving for) the system, but the ways they go about it allowed me to make a very interesting parallel between linear system solvers and control theory.

## Jacobi Method

The Jacobi method, as far as I understand, is the most basic way to solve some linear systems (the "some" is addressed later). Starting with a more formal definition, we have some system:

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &= b_1 \\ &\;\;\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n &= b_n \end{aligned}$$

This system can be defined in terms of matrices as $A\mathbf{x} = \mathbf{b}$, where $A$ and $\mathbf{b}$ are known to us, and we're trying to solve for $\mathbf{x}$.

Using our initial guess $\mathbf{x}^{(0)}$, we can calculate the $k$-th iteration of the solution, denoted as $\mathbf{x}^{(k)}$, and we can iteratively compute the next individual elements of this solution vector via the formula

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \right)$$

Admittedly, this just looks like gibberish to the average person. In simpler terms, this method is a feedback system, wherein we make our next guess based on our previous guess. An easy example of such a system is a thermostat, where the temperature of the house is used to adjust the furnace or A/C to bring the interior temperature closer to the set point.

That example actually segues perfectly into my parallel between this and control theory: specifically, the PID controllers I wrote while I was still on my robotics team. If you think about it, the proportional term in such a controller works similarly to this method of system relaxation. We have some target we're trying to reach, we have some initial guess or value to get us there, and we're calculating our next value to reduce the error between us and the target. Reducing the residual when we relax a linear system is very similar to reducing the error in a control loop.

## Successive Over-Relaxation

Successive over-relaxation takes the Jacobi method a step further (and subsequently, our analogy a step further) by introducing current values in our calculations. With the same system we defined above, we can iteratively calculate each element in the solution vector via the formula

$$x_i^{(k+1)} = (1 - \omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right)$$

Again, this looks like hieroglyphics, but there are a few things to note with this particular formula:

  • We've decomposed the matrix $A$ into lower and upper triangular parts (the sums over $j < i$ and $j > i$). This isn't a fundamental change, but it allows us to more efficiently calculate element values by using forward substitution for some of the calculations.*
  • We've introduced a new value, the relaxation factor $\omega$, in this formula because...
  • We now include both previous and current values when calculating the next iteration of our solution. $\omega$ dictates the ratio between the two when we compute our new iteration.

* Forward substitution is the process of using the already-computed value $x_1$ in the next equation to solve for $x_2$, and so on and so forth down the rows.

This changes our method of relaxing (solving) linear systems because we've gone from a pure feedback system to a system with feedback and feedforward elements. This helps us primarily in two ways:

  • Successive over-relaxation can take larger steps towards the unique solution (when one exists). This means SOR can typically converge to a solution faster than the Jacobi method can.
  • The relaxation factor $\omega$ allows SOR to explore the solution space more efficiently, damping oscillations in the iterates. This addresses a key issue with the Jacobi method: when $A$ is not strictly diagonally dominant, Jacobi is not guaranteed to converge, whereas SOR with $0 < \omega < 2$ is guaranteed to converge for any symmetric positive-definite $A$.

## Preconditioners

Using a different method is not the only way of increasing your chances of convergence. Since we know that the Jacobi method is guaranteed to converge given a strictly diagonally dominant matrix (for reasons that are beyond me, currently), we could try manipulating the matrix using preconditioners to try and create a system that minimizes oscillations and creates a structure more favourable to convergence.

Typically, a preconditioner is some matrix that is multiplied with $A$ to transform it in some way. I opted for a kinda messed up version of a permutation-matrix-based preconditioner: I check for strict diagonal dominance, and if the coefficient matrix doesn't pass the test, I swap rows around until it does. My implementation didn't actually use a permutation matrix and was instead kinda poorly implemented with a bunch of for-loops, but it surprisingly improved the performance of the Jacobi method.

# Reflection

I think the biggest lesson from this project came when I was hit with sheer disappointment upon discovering that my code was abysmally slow. For relatively small systems, the code works just fine, but it scales horribly for some of the larger Matrix Market matrices you can find online.

I profiled my code to see what was slowing everything down, and the issue was in my implementation of the matrix-vector multiplication operation. Its time complexity was far worse than the $O(\mathrm{nnz})$ a proper CSR matrix-vector product should achieve, and it would never scale to the kind of systems I wanted to compute.

The lesson here is that going into things without a plan doesn't always work. Had I gone into this with some kind of plan and an understanding of how I would reduce the time complexity of my code, maybe I could've produced a more functional version of this program. Obviously, that doesn't mean I should develop every little script I write using Agile or a waterfall model, but at the very least, I should go into larger projects with some kind of plan of action.