
Efficient way to take determinant of an n! x n! matrix in Maple

I have a large matrix, n! x n!, for which I need to take the determinant. For each permutation of n, I associate

  • a vector of length 2n (this is easy computationally)
  • a polynomial in 2n variables (a product of linear factors, computed recursively on n)

The matrix is the evaluation matrix of the polynomials at the vectors (thought of as points): the (sigma, tau) entry of the matrix (rows and columns indexed by permutations) is the polynomial for sigma evaluated at the vector for tau.

Example: For n=3, if the ith polynomial is (x1 - 4)(x3 - 5)(x4 - 4)(x6 - 1) and the jth point is (2,2,1,3,5,2), then the (i,j)th entry of the matrix will be (2 - 4)(1 - 5)(3 - 4)(2 - 1) = -8. Here n=3, so the points are in R^(3!) = R^6 and the polynomials have 3!=6 variables.
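
In Maple, a single entry like this is computed by substitution; here is a minimal sketch of the n=3 example above (the variable names x1, ..., x6 are just for illustration; my actual code builds the names differently):

p  := (x1 - 4)*(x3 - 5)*(x4 - 4)*(x6 - 1):
pt := [2, 2, 1, 3, 5, 2]:                        # the j-th point (x1, ..., x6)
# substitute xk = pt[k] into the i-th polynomial; this returns -8
entry := eval(p, [seq(x||k = pt[k], k = 1 .. 6)]);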

My goal is to determine whether or not the matrix is nonsingular.


My approach right now is this:

  • the function point takes a permutation and outputs a vector
  • the function poly takes a permutation and outputs a polynomial
  • the function nextPerm gives the next permutation in lexicographic order

The abridged pseudocode version of my code is this:

B := [];                      # the polynomials, one per permutation
P := [];                      # the evaluation points, one per permutation
w := [seq(i, i = 1 .. n)];    # start from the identity permutation [1, 2, ..., n]
while w <> NULL do            # nextPerm returns NULL after the last permutation
  B := [op(B), poly(w)];
  P := [op(P), point(w)];
  w := nextPerm(w);
end do;

# BUILD THE MATRIX IN MAPLE
# (point(w) is assumed to return substitution equations, e.g. [x1 = 2, x2 = 2, ...])
M := Matrix(n!, n!, (i, j) -> eval(B[i], P[j]));

# COMPUTE THE DETERMINANT IN MAPLE
det := LinearAlgebra[Determinant](M);

# TELL ME IF IT'S NONSINGULAR
if det = 0 then return false;
else return true;
end if;

I'm working in Maple and using the built-in function LinearAlgebra[Determinant], but everything else is a custom-built function that uses low-level Maple functions (e.g. seq, convert, and cat).

My problem is that this takes too long: with patience I can get up to n=7, but n=8 takes days. Ideally, I want to be able to get to n=10.

Does anyone have an idea for how I could improve the time? I'm open to working in a different language, e.g. Matlab or C, but would prefer to find a way to speed this up within Maple.

I realize this might be hard to answer without all the gory details, but the code for each function, e.g. point and poly, is already optimized, so the real question here is whether there is a faster way to take the determinant, e.g. by building the matrix on the fly, or something like that.


UPDATE: Here are two ideas that I've toyed with that don't work:

  1. I can store the polynomials in a vector of length n! (since they take a while to compute, I don't want to redo that if I can help it), compute the points on the fly, and plug these values into the permutation formula for the determinant:

    det(M) = sum over all permutations pi of {1, ..., N} of sgn(pi) * M[1, pi(1)] * M[2, pi(2)] * ... * M[N, pi(N)]

    The problem here is that this is O(N!) in the size N = n! of the matrix, so for my case this will be O((n!)!). When n=10, n! = 3,628,800, so (n!)! = 3,628,800!, which is way too big to even consider.

  2. Compute the determinant using an LU decomposition. Luckily, the main diagonal of my matrix is nonzero, so this is feasible. Since this is O(N^3) in the size N = n! of the matrix, that becomes O((n!)^3), which is much closer to doable. The problem, though, is that it requires me to store the whole matrix, which puts serious strain on memory (rough sizes below), never mind the run time. So this doesn't work either, at least not without a bit more cleverness. Any ideas?
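
For scale, treating the matrix as N x N with N = n! (this is just arithmetic on the numbers above):

    n = 8:  N = 8!  = 40,320     ->  N^2 ≈ 1.6 * 10^9 entries to store,   N^3 ≈ 6.6 * 10^13 operations
    n = 10: N = 10! = 3,628,800  ->  N^2 ≈ 1.3 * 10^13 entries to store,  N^3 ≈ 4.8 * 10^19 operations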

asked Feb 21 '13 by Daniel


1 Answer

It isn't clear to me whether your problem is space or time; obviously the two trade off against each other. If you only wish to know whether or not the determinant is zero, then you should definitely go with LU decomposition. The reason is that if A = LU with L lower triangular and U upper triangular, then

det(A) = det(L) det(U) = l_11 * ... * l_nn * u_11 * ... * u_nn

so you only need to determine if any of the main diagonal entries of L or U is 0.
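
In particular, with Doolittle's normalization l_ii = 1 (used below), this collapses to

det(A) = u_11 * u_22 * ... * u_nn

so only the diagonal entries of U need to be checked.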

To simplify further, use Doolittle's algorithm, in which l_ii = 1. If at any point the algorithm breaks down (you hit a zero pivot u_kk), you can stop: either the matrix is singular, or it needs a row exchange at that step (and if no row below has a nonzero candidate pivot in that column, it is certainly singular). Here's the gist:

for k := 1, 2, ..., n do {
  for j := k, k+1, ..., n do {
    u_kj := a_kj - sum_{s=1...k-1} l_ks * u_sj;
  }
  l_kk := 1;
  for i := k+1, k+2, ..., n do {
    l_ik := (a_ik - sum_{s=1...k-1} l_is * u_sk) / u_kk;
  }
}

The key is that you can compute the kth row of U and the kth column of L at the same time, using only the rows of U and columns of L already produced in earlier steps. This lets you interleave the two computations and store only what you actually need: since you can compute the entries a_ij on demand, you never have to hold A itself in memory, only the partially built factors, and L and U together fit in a single n x n array (the unit diagonal of L need not be stored). The algorithm takes O(n^3) time in the matrix dimension. You might be able to find a few more tricks, but that depends on your space/time trade-off.
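
Here is a rough Maple sketch of that loop (just a sketch, untested against your data; the names isNonsingularLU and Aij are mine, where Aij is whatever procedure returns the (i, j) entry on demand, e.g. (i, j) -> eval(B[i], P[j]) in your setup):

isNonsingularLU := proc(Aij, N)
  # Doolittle LU without pivoting, generating matrix entries on demand.
  # Returns false as soon as a zero pivot appears, true if the
  # factorization completes with every pivot nonzero.
  local U, L, k, i, j, s, ukk;
  U := table();   # U[k, j] for j >= k, filled row by row
  L := table();   # L[i, k] for i > k, filled column by column (L[k, k] = 1 implicitly)
  for k to N do
    for j from k to N do
      U[k, j] := Aij(k, j) - add(L[k, s]*U[s, j], s = 1 .. k-1);
    end do;
    ukk := U[k, k];
    if ukk = 0 then
      return false;   # zero pivot: stop here (no row exchanges attempted)
    end if;
    for i from k+1 to N do
      L[i, k] := (Aij(i, k) - add(L[i, s]*U[s, k], s = 1 .. k-1))/ukk;
    end do;
  end do;
  return true;
end proc:

You would call it as isNonsingularLU((i, j) -> eval(B[i], P[j]), n!). Since your entries appear to be exact integers, the test ukk = 0 is exact rather than a floating-point tolerance question; the cost is then the O(N^3) exact arithmetic plus whatever it costs to generate each entry.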

answered Sep 18 '22 by PengOne