This week we build upon last week's Sparse Matrix lab with the Multiplying Sparse Matrices lab. As the name implies, this week's lab is about multiplying two sparse matrices. As always, feel free to leave a comment related to this week's lab, and let's get right into it.
The Lab
Continuing on from the Sparse Matrices lab, we create two new classes that inherit from Matrix and SparseMatrix, called MatrixWithMult and SparseMatrixWithMult. In each of these classes, we define a multiply() method that takes in two matrices. In MatrixWithMult::multiply(), we implement the slow, naive solution, while in SparseMatrixWithMult::multiply(), we implement a more optimized solution.
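To make the layout concrete, here is a rough skeleton of how the classes might relate. The names Matrix, SparseMatrix, and multiply() come from the lab itself, but the signatures and members shown here are my own simplifications and may not match the actual spec:

```cpp
// Hypothetical skeleton only; the real lab classes are templated and
// carry more state than shown here.
class Matrix { /* dense 2-D storage, get()/set(), ... */ };
class SparseMatrix { /* array of linked lists, get()/set(), ... */ };

class MatrixWithMult : public Matrix {
public:
    // Slow, naive dense multiplication.
    void multiply(const Matrix& a, const Matrix& b);
};

class SparseMatrixWithMult : public SparseMatrix {
public:
    // Optimized multiplication that exploits sparsity.
    void multiply(const SparseMatrix& a, const SparseMatrix& b);
};
```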
The Implementation
In SparseMatrixWithMult::multiply(), we need a solution that takes advantage of what we know about matrix multiplication and about the sparse matrices themselves. Starting with the sparse matrices, we want to avoid SparseMatrix::get(), because every call has to search through the given SparseMatrix before it can return a value. Since our SparseMatrix class stores its data internally as an array of linked lists, we can simply retrieve the list for a given row and traverse it instead of calling get().
However, traversing a list is not enough to quickly multiply sparse matrices. This is where we start exploiting what we know about matrix multiplication. Each entry of the product is essentially the dot product of a row in one matrix and a column in the other. A dot product is formed by summing the products of corresponding terms, which means that if either term in a pair is 0, that product is 0. With this in mind, we can make our algorithm faster by skipping rows that are entirely default values and skipping multiplication operations that we know will result in 0.
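As a point of reference, here is a minimal sketch of the naive dense approach, using a plain vector-of-vectors rather than the lab's Matrix class (the name naiveMultiply and the double element type are my own assumptions):

```cpp
#include <cstddef>
#include <vector>

using Dense = std::vector<std::vector<double>>;

// Each result(r, c) is the dot product of row r of a and column c of b;
// every pair of terms is multiplied, even when one of them is 0.
Dense naiveMultiply(const Dense& a, const Dense& b) {
    std::size_t rows = a.size();
    std::size_t inner = b.size();          // a's columns == b's rows
    std::size_t cols = b[0].size();
    Dense result(rows, std::vector<double>(cols, 0.0));
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t k = 0; k < inner; ++k)
                result[r][c] += a[r][k] * b[k][c];
    return result;
}
```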
A summary of the optimizations is as follows (a sketch putting them together appears after this list):
- Traverse the linked list for a given row instead of using the get() method.
- Skip rows that are entirely empty (meaning every value in them is the default, 0).
- Skip multiplication when the value in a given column is 0.
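Putting all three ideas together, here is a minimal sketch of the optimized multiply, assuming the sparse matrix is stored as one linked list of (column, value) cells per row that holds only non-default entries. The names Cell, Sparse, and sparseMultiply are my own and will differ from the lab's classes:

```cpp
#include <algorithm>
#include <cstddef>
#include <list>
#include <vector>

struct Cell {
    std::size_t col;
    double value;                          // always non-default (non-zero)
};

using Sparse = std::vector<std::list<Cell>>;  // one linked list per row

Sparse sparseMultiply(const Sparse& a, const Sparse& b, std::size_t bCols) {
    Sparse result(a.size());
    std::vector<double> rowAccum(bCols);
    for (std::size_t r = 0; r < a.size(); ++r) {
        if (a[r].empty())
            continue;                      // skip rows that are entirely 0
        std::fill(rowAccum.begin(), rowAccum.end(), 0.0);
        // Walk row r's list directly instead of calling get().
        for (const Cell& ac : a[r]) {
            // ac.value is non-zero, and b[ac.col] holds only non-zero
            // cells, so we never multiply by a term we know is 0.
            for (const Cell& bc : b[ac.col])
                rowAccum[bc.col] += ac.value * bc.value;
        }
        for (std::size_t c = 0; c < bCols; ++c)
            if (rowAccum[c] != 0.0)
                result[r].push_back({c, rowAccum[c]});
    }
    return result;
}
```

Because row r of the product only depends on the rows of b whose indices appear in row r of a, every multiplication performed here involves two non-default values; the zero products are never computed at all.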
Lab Questions
Q: Compare your Matrix's multiply() and SparseMatrix's multiply() methods. Is there a relation between the number of elements and the time taken to multiply()?
A: If we multiply two 500 x 500 matrices, Matrix's multiply() will take longer since it processes every element in both matrices. SparseMatrix's multiply(), however, will be faster since it ignores elements that hold the default value (often 0).
Q: What is the effect of the EDD (effective data density) on the time taken to multiply two sparse matrices? What about the non-sparse matrices?
A: As the number of non-default values approaches the total number of values, multiplying two sparse matrices will likely end up taking about as long as multiplying two non-sparse matrices: the more non-default values a sparse matrix has, the more closely SparseMatrixWithMult::multiply() behaves like Matrix::multiply().
Q: Does it make sense to keep the lists sorted by column number?
A: It is not necessary to keep the lists sorted by column number if you are simply iterating through them within multiply(), since the dot product is the sum of the products of corresponding terms, meaning you can multiply the pairs of terms in any order.
Q: Does it matter if the lists have duplicate cells (i.e. same column number)?
A: I would not say that having duplicate cells changes much when multiplying sparse matrices. A duplicate cell goes through the same process as any other cell, with its product added into the running sum like any other term, so nothing special happens.
Q: There is significant scope for further optimization. Discuss such ideas.
A: Besides the optimizations I have already mentioned, there is one algorithm I read about (Strassen's algorithm) that splits each matrix into smaller blocks and performs the multiplication on those blocks using fewer multiplication operations overall.
Conclusion
Overall, the multiplying sparse matrices lab is all about limiting the number of times we need to access a sparse matrix and exploiting the rules of matrix multiplication. There are algorithms out there that multiply matrices faster than the one I explained here, but they rely on advanced ideas beyond the scope of this lab. Thank you for reading, and feel free to leave a comment.