I use an algorithm that relies on division.
According to https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations, division has one of the following time complexities:
O(n log n log log n)
O(n log n 2^(O(log* n)))
O(n**2)
O(M(n))
So far I have been using this algorithm in Python, but I need to describe it independently of the platform. Which one of those time complexity definitions is the correct one today for Python (or similar language) users?
In practice, any basic arithmetic operation (addition, subtraction, multiplication, division, and exponentiation) on fixed-width machine numbers is usually treated as a constant-time operation.
If each bitwise operation took linear time, then the addition of two N-bit numbers would be O(N^2), where N is the number of bits. But note that a bitwise operation handles a single bit at a time, so it takes O(1) time. Therefore, it is practical to treat addition as an O(N) operation.
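As a rough sketch of why that is (my own illustration, not from any particular source): adding two N-bit numbers stored as bit arrays touches every bit position exactly once, with O(1) work per position.

#include <stddef.h>

/* Add two N-bit numbers given as bit arrays (least significant bit first).
   sum must have room for N+1 bits. Each loop iteration is a handful of
   O(1) single-bit operations, so the whole addition is O(N). */
void add_bits(const int *a, const int *b, int *sum, size_t N)
{
    int carry = 0;
    for (size_t i = 0; i < N; i++) {
        int s = a[i] + b[i] + carry;   /* each term is 0 or 1, so s is 0..3 */
        sum[i] = s & 1;                /* result bit */
        carry  = s >> 1;               /* carry into the next position */
    }
    sum[N] = carry;                    /* possible final carry bit */
}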
For multi-precision numbers the picture is similar: the asymptotically best known multiplication algorithms run in O(N log N), while the usual schoolbook algorithm used in practice is O(N^2).
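For comparison, a minimal sketch of the schoolbook method on the same bit-array representation (again my own illustration): the two nested loops over N bits are exactly where the O(N^2) comes from, and the faster algorithms exist to avoid them.

#include <stddef.h>

/* Schoolbook multiplication of two N-bit numbers (bit arrays, LSB first).
   prod must hold 2*N bits and be zero-initialized.
   Two nested loops over N bits -> O(N^2) single-bit operations. */
void mul_bits(const int *a, const int *b, int *prod, size_t N)
{
    for (size_t i = 0; i < N; i++) {
        if (!a[i]) continue;                 /* skip zero rows */
        int carry = 0;
        for (size_t j = 0; j < N; j++) {     /* add b shifted left by i */
            int s = prod[i + j] + b[j] + carry;
            prod[i + j] = s & 1;
            carry = s >> 1;
        }
        for (size_t k = i + N; carry; k++) { /* propagate the remaining carry */
            int s = prod[k] + carry;
            prod[k] = s & 1;
            carry = s >> 1;
        }
    }
}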
When we divide A by B using long division, we get a quotient Q and a remainder R such that A = B*Q + R with 0 <= R < B.
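For illustration, here is a sketch of binary long division on unsigned machine words (my own example, not taken from the linked page); it builds Q and R bit by bit, from the most significant bit down.

#include <assert.h>

/* Binary long division: compute q = a / b and r = a % b, with b != 0.
   One iteration per bit of a, each iteration O(1) on machine words,
   so O(N) iterations for N-bit words. */
void long_div(unsigned a, unsigned b, unsigned *q, unsigned *r)
{
    assert(b != 0);
    *q = 0;
    *r = 0;
    for (int i = 8 * (int)sizeof(unsigned) - 1; i >= 0; i--) {
        *r = (*r << 1) | ((a >> i) & 1u);  /* bring down the next bit of a */
        if (*r >= b) {                     /* does b "go into" the partial remainder? */
            *r -= b;
            *q |= 1u << i;
        }
    }
    /* Invariant afterwards: a == b * (*q) + (*r) and *r < b. */
}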
As mentioned, if you use the ALU or FPU on basic (fixed-width) variable types, you can assume the division complexity is O(1), because the instruction overhead is usually comparable with the actual runtime of the division itself. If the HW platform does not support division (some older MCUs, for example), then it must be computed in software (not as a single instruction) and this no longer applies.
Also, if you have arbitrary-precision variables (bignums), then the actual bit or digit width starts to matter and you are no longer in O(1) territory.
In that case, see #2 below.
#2: Most division algorithms use multiplication as a core function.
The complexity of division is then determined by the algorithm used and the components it is built from. For example, if you have basic variables but compute the division in software (no HW divider support), then the operations used are still O(1),
but the division built from them is not.
Let us take division by repeated subtraction as an example:
unsigned a=...,b=...,d,r;              // a = dividend, b = divisor (b > 0)
for (d=0,r=a;r>=b;) { r-=b; d++; }     // keep subtracting b while it still fits
// d = a/b   (quotient)
// r = a%b   (remainder)
If n is the bit width of the result, then this is O(2^n) for basic variables, because the loop can execute up to ~2^n subtractions. But if the variables are arbitrary precision, then the operations used are no longer O(1): the subtraction, comparison and increment are each O(n), so the division complexity becomes O(n*(2^n)) without any change in the algorithm or code. So you always have to know which complexity you are talking about:
basic (fixed-width) variables: O(2^n)
arbitrary-precision variables: O(n*(2^n))
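To make the bignum case concrete, here is a rough sketch (my own illustration) of why a single subtraction already costs O(n) on arbitrary-precision operands: the borrow has to be carried across every limb of the number, and that factor multiplies every iteration of the repeated-subtraction loop.

#include <stddef.h>
#include <stdint.h>

/* Bignum stored as an array of 32-bit limbs, least significant limb first.
   Subtract b from a in place (assumes a >= b and both have `limbs` limbs).
   The borrow must be propagated over every limb -> O(n) per subtraction,
   where n is the bit (or limb) width of the operands. */
void big_sub(uint32_t *a, const uint32_t *b, size_t limbs)
{
    uint64_t borrow = 0;
    for (size_t i = 0; i < limbs; i++) {
        uint64_t d = (uint64_t)a[i] - b[i] - borrow;
        a[i]   = (uint32_t)d;          /* low 32 bits of the difference */
        borrow = (d >> 32) & 1;        /* 1 if we had to borrow */
    }
}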
Repeated subtraction is not used in practice because it is painfully slow. Instead, more advanced methods are used. Most division algorithms use multiplication as a core function, so Schönhage–Strassen and Karatsuba multiplication are relevant for division algorithms as well.
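As a hedged sketch of the multiplication-based idea (a toy example of mine in plain doubles, not a bignum implementation): Newton–Raphson computes 1/b using only multiplication and subtraction, which is why a full-precision division ends up costing O(M(n)), i.e. a constant number of n-bit multiplications.

#include <stdio.h>

/* Newton-Raphson reciprocal: x_{k+1} = x_k * (2 - b * x_k) converges
   quadratically to 1/b. Real bignum dividers run the same iteration in
   fixed point, so a division costs a few multiplications -> O(M(n)).
   This toy version uses doubles purely for illustration. */
double reciprocal(double b)
{
    double x = 0.5;                    /* crude initial guess; converges for b roughly in (0, 4) */
    for (int i = 0; i < 20; i++)
        x = x * (2.0 - b * x);         /* only multiply and subtract, no division */
    return x;
}

int main(void)
{
    double a = 355.0, b = 1.13;
    printf("a/b ~ %f\n", a * reciprocal(b));   /* ~ 314.159 */
    return 0;
}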
Now, how do you determine the complexity of a custom division routine?
Take the base complexity of your algorithm and multiply it by the slowest complexity among its core functions. If the core functions are not called in every iteration, this can become very tricky ... Do not forget to use the same meaning of n
while combining/comparing complexities!
If you do not have access to the source code of the algorithm used, then you can try to measure division times over a big set of numbers covering a wide enough range of n
and estimate the complexity from the graph of measured times ... This is not reliable, because many things can skew the results, like multithreading, scheduler granularity, an unknown n, etc.
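If you do want to try the measurement route anyway, a rough sketch could look like the following (this assumes the GMP library is available and uses its mpz_tdiv_q division; compile with -lgmp). Plot the printed times against n and compare the shape of the curve with the candidate complexities.

#include <gmp.h>
#include <stdio.h>
#include <time.h>

/* Time GMP's integer division for growing operand sizes n (in bits).
   This only gives a crude empirical curve; scheduling, caches and the
   small-operand cutoffs inside GMP can easily distort it. */
int main(void)
{
    gmp_randstate_t rng;
    gmp_randinit_default(rng);

    mpz_t a, b, q;
    mpz_inits(a, b, q, NULL);

    for (unsigned long n = 1ul << 12; n <= (1ul << 20); n <<= 1) {
        mpz_urandomb(a, rng, n);        /* n-bit dividend */
        mpz_urandomb(b, rng, n / 2);    /* (n/2)-bit divisor, nonzero with overwhelming probability */

        clock_t t0 = clock();
        for (int i = 0; i < 100; i++)
            mpz_tdiv_q(q, a, b);        /* quotient of a / b */
        clock_t t1 = clock();

        printf("n=%lu bits: %.3f ms per division\n",
               n, 1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC / 100.0);
    }

    mpz_clears(a, b, q, NULL);
    gmp_randclear(rng);
    return 0;
}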