I am not certain there is one correct answer to this question, but here goes. While many numerical problems can be stated in linear algebra form, my limited experience suggests that simple operations carry a performance overhead in Math.NET compared to equivalent operations written against raw arrays.
As a test case, I wrote code to compute the distance between a vector and the closest vector in a list, in three versions: operating on arrays, operating on dense vectors, and operating on dense vectors with the MKL provider. The array version ran about 4x faster than the vector version, and 3x faster than the MKL provider.
The downside is that I had to write the distance computation by hand instead of leveraging the built-in Norm function. The upside is that it's much faster. Note: I didn't post the code, but I'd be happy to if needed; I might also be using Math.NET improperly.
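In spirit, the array version amounts to a nearest-neighbour scan with a hand-rolled squared distance. This is not the actual benchmark code (which wasn't posted); it is a sketch of the approach, shown here in Java for illustration, with method names of my own choosing:

```java
// Sketch of the raw-array version: scan a list of candidate vectors and
// return the Euclidean distance from `target` to the closest one.
// (Illustrative port to Java; the original benchmark was .NET code.)
public class NearestDistance {

    // Hand-rolled squared Euclidean distance: one pass, no intermediate
    // vector allocation. Assumes a and b have the same length.
    static double squaredDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return sum;
    }

    // Compare squared distances and take the square root only once,
    // at the end, for the winning candidate.
    static double nearestDistance(double[][] candidates, double[] target) {
        double best = Double.POSITIVE_INFINITY;
        for (double[] c : candidates) {
            double d = squaredDistance(c, target);
            if (d < best) best = d;
        }
        return Math.sqrt(best);
    }

    public static void main(String[] args) {
        double[][] candidates = { {0, 0}, {3, 4}, {1, 1} };
        double[] target = {3, 3};
        System.out.println(nearestDistance(candidates, target)); // prints 1.0
    }
}
```

Note the two micro-optimizations that a generic library call makes harder: no temporary difference vector is allocated, and the square root is deferred until after the minimum is found.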
So my question is as follows: using higher-level abstractions seems to come at a performance cost. Is that generally the case, or are there situations (sparse matrices, for instance) where Math.NET would be expected to outperform hand-written operations on arrays?
If that is the case, I would expect the linear algebra part of Math.NET to be most useful for "real" algebra involving matrices - avoiding the re-implementation of more complex calculations/algorithms, and potentially improving code readability - while for simple vector-by-vector operations it might be a better idea to work on raw arrays.
Any light on when it's a good idea to use the library vs. when you should roll your own would be appreciated!
Disclaimer: I'm maintaining Math.NET Numerics.
The primary value a toolkit like Math.NET Numerics tries to offer is developer productivity, especially for those without a PhD in the subject, who would otherwise struggle or waste a lot of time implementing these sometimes quite involved algorithms themselves, possibly badly - instead of spending that time on their actual problem.
Then, there is a good chance that the functionality you need has already been used by others before you. Some of them may have already discovered and reported issues and contributed their improvements back. More users help improve code quality and robustness. Unfortunately, this also brings us to the major drawback: it tends to make the code more general, which often makes it less efficient than a highly specialized implementation that does exactly what you need.
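To make that generality cost concrete: a library-style distance typically composes general building blocks - subtract two vectors, then take the norm of the result - which allocates an intermediate vector and passes over the data twice, whereas a specialized implementation fuses both steps into one loop. A hedged Java sketch of the contrast (illustrative only, not Math.NET's actual internals):

```java
// Contrast between a "general" composed distance and a specialized
// fused one. Both compute the same Euclidean distance; the general
// form pays for an intermediate allocation and an extra data pass.
public class DistanceStyles {

    // General style: build the difference vector, then take its norm,
    // the way composable library primitives naturally stack up.
    static double generalDistance(double[] a, double[] b) {
        double[] diff = new double[a.length];      // intermediate allocation
        for (int i = 0; i < a.length; i++) {
            diff[i] = a[i] - b[i];
        }
        double sum = 0.0;
        for (double v : diff) {                    // second pass over the data
            sum += v * v;
        }
        return Math.sqrt(sum);
    }

    // Specialized style: one pass, no allocation, same result.
    static double fusedDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```

The two methods are numerically equivalent; the difference only shows up in allocation pressure and memory traffic, which is exactly where benchmarks like the one in the question tend to diverge.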
This is all along the lines of Cody Gray's comment: use it if it works and is fast enough; otherwise, either help fix it and make it work (and fast), choose another toolkit that works, or implement exactly what you need yourself. Luckily for Math.NET Numerics there are some more options, see below.
As such, I agree with your conclusion: if you don't actually need any complicated operations and don't work with very large data, but performance is important, there's nothing wrong with using arrays or another data structure directly (especially in F#, where I personally would consider raw native data structures more often than in C#). Of course, this comes at the cost of losing some convenience, and at the risk that once you start needing more operations after all, you may end up re-implementing the toolkit. In the end it also depends on how critical this code is to your project, and whether you can spend the resources and time to maintain your own math code.
Nevertheless, in my own experience it's often an advantage to own the code (so you can make changes, effective immediately) and to keep it simple and focused (so it does exactly what you need it to do and only that).