
Compute determinant of binary NxN matrix by cuda

I'm looking for a good approach to compute the determinant of a binary NxN matrix.

So far I found this: https://github.com/OrangeOwlSolutions/Linear-Algebra/blob/master/DETERMINANT/determinant.cu, but that implementation targets general (floating-point) matrices, while I only need to work with integers. Also, cuBLAS and cuSOLVER only operate on floating-point matrices, not integer ones.

asked Nov 21 '25 at 21:11 by dk1111

1 Answer

According to this reference, there is a known upper bound on the determinant of a square (0,1)-matrix of order N.

For N=36, the upper bound on the determinant is given as 1200757082375992968, which requires 61 integer bits for exact representation. Given that the GPU only has native integer types up to 64 bits wide, there is no way this could be done for N=64 in integer arithmetic without some sort of big-integer implementation (if one existed), and that would be a pure software implementation and guaranteed to be very slow.
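The bit-count claim above is easy to verify directly (the bound 1200757082375992968 is the figure quoted from the reference):

```python
# Upper bound on |det| for a 36x36 (0,1)-matrix, as quoted above.
BOUND_36 = 1200757082375992968

# Bits needed to represent this value exactly as an unsigned integer.
bits = BOUND_36.bit_length()
print(bits)  # 61
```

Since 61 bits already exceeds the 53-bit mantissa of a float64, even the N=36 bound cannot be represented exactly in double precision, let alone in larger orders.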

Therefore, the only feasible implementation would be performed in double precision floating point, and one of the existing double precision linear algebra libraries would be the optimal (and only) feasible solution on a GPU.
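As a CPU-side sketch of that double-precision approach (NumPy's `slogdet` uses an LAPACK LU factorization, the same scheme a cuSOLVER `getrf`-based GPU implementation would use; the function name `det_binary` is my own, not from any library):

```python
import numpy as np

def det_binary(A):
    """Determinant of a (0,1)-matrix via double-precision LU,
    rounded back to the nearest integer.  Exact only while the
    true determinant stays below ~2^53 (the float64 mantissa)."""
    A = np.asarray(A, dtype=np.float64)
    sign, logabsdet = np.linalg.slogdet(A)  # LU factorization under the hood
    if sign == 0:
        return 0
    return int(round(sign * np.exp(logabsdet)))

# Example: a random 36x36 binary matrix (typical determinants are far
# below the worst-case bound, so the rounding is usually safe here).
rng = np.random.default_rng(42)
A = rng.integers(0, 2, size=(36, 36))
print(det_binary(A))
```

A GPU version would follow the same outline with `cusolverDnDgetrf`: take the product of the diagonal of U and flip the sign once per row swap in the pivot array.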

answered Nov 24 '25 at 10:11 by talonmies

