Does a computer take more time to multiply, divide, subtract, or add two big numbers than two smaller numbers?

We (people) take more time to multiply, add, divide and subtract two large numbers than two small numbers.

Does a computer take more time to multiply 5 * 2 than, say, 51234 * 987654, or is the operation done in the same amount of time?

What about two numbers that are larger than the word size of the processor (e.g. two 128-bit numbers)?

I saw the article https://en.wikipedia.org/wiki/Multiplication_algorithm
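
For illustration, here is a rough sketch of the "schoolbook" method that article describes for numbers wider than the word size: split each 128-bit number into two 64-bit limbs and accumulate the partial products with carries. This is only a sketch, and it assumes the non-standard GCC/Clang __uint128_t extension to hold the 64x64-bit partial products.

    /* Sketch: "schoolbook" multiplication of two 128-bit numbers, each
     * stored as two 64-bit limbs, into a 256-bit result. Relies on the
     * GCC/Clang __uint128_t extension for the partial products. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t lo, hi; } u128;   /* value = lo + hi * 2^64 */

    /* out[0..3] = a * b, least-significant limb first */
    static void mul128(uint64_t out[4], u128 a, u128 b) {
        uint64_t al[2] = { a.lo, a.hi };
        uint64_t bl[2] = { b.lo, b.hi };
        for (int k = 0; k < 4; k++) out[k] = 0;

        for (int i = 0; i < 2; i++) {
            uint64_t carry = 0;
            for (int j = 0; j < 2; j++) {
                /* full partial product plus what is already in this limb */
                __uint128_t t = (__uint128_t)al[i] * bl[j] + out[i + j] + carry;
                out[i + j] = (uint64_t)t;          /* keep the low 64 bits   */
                carry      = (uint64_t)(t >> 64);  /* carry the high 64 bits */
            }
            out[i + 2] += carry;
        }
    }

    int main(void) {
        u128 a = { 51234, 0 }, b = { 987654, 0 };
        uint64_t r[4];
        mul128(r, a, b);
        printf("%llu\n", (unsigned long long)r[0]);  /* 50601465036 */
        return 0;
    }

The point of the sketch: four partial multiplies plus carry handling, versus a single machine instruction for word-sized operands, which is exactly why numbers wider than the word size take longer.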

asked Mar 08 '16 by kapil

People also ask

Is division slower than multiplication on a computer?

It used to be that both multiplication and division were slow operations. Nowadays multiplication is fairly fast (though still slightly slower than addition/subtraction), but division remains slower than the others.

Does a computer perform subtraction?

At their lowest level, computers cannot subtract, multiply, or divide. Neither can calculators. The world's largest and fastest supercomputer can only add; that's it.
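
As a small illustration of that point (a sketch, not part of the quoted answer): subtraction can be expressed as addition of the two's complement, which is effectively how hardware does it.

    /* Subtraction via addition: a - b == a + (~b + 1) modulo 2^32,
     * i.e. addition of the two's complement of b. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t subtract_by_adding(uint32_t a, uint32_t b) {
        return a + (~b + 1u);   /* invert the bits of b, add one, then add */
    }

    int main(void) {
        printf("%u\n", subtract_by_adding(100u, 42u));  /* prints 58 */
        return 0;
    }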

How do computers multiply and divide?

A method of computer multiplication and division is proposed which uses binary logarithms. The logarithm of a binary number may be determined approximately from the number itself by simple shifting and counting. A simple add or subtract and shift operation is all that is required to multiply or divide.
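
The quoted passage describes a logarithm-based scheme; the simpler shift-and-add idea it builds on can be sketched like this (an illustration, not the cited method itself):

    /* Shift-and-add multiplication: inspect the multiplier bit by bit;
     * for every set bit, add the correspondingly shifted multiplicand. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t shift_add_mul(uint64_t a, uint64_t b) {
        uint64_t product = 0;
        while (b != 0) {
            if (b & 1)          /* lowest bit of b set: add shifted a */
                product += a;
            a <<= 1;            /* next partial product is twice as big */
            b >>= 1;            /* consume one bit of the multiplier */
        }
        return product;
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)shift_add_mul(5, 2));  /* 10 */
        return 0;
    }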

How fast can computers add?

Most CPUs run at 1 to 4 GHz, which means 1 to 4 clock cycles take somewhere around a nanosecond (0.000 000 001 seconds). An addition takes 4 or 5 cycles, so two numbers are added in roughly a nanosecond.
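
A rough way to check that ballpark yourself, sketched below with the caveat that loop overhead and compiler optimization make the figure only approximate (clock_gettime is POSIX):

    /* Time a long run of additions and divide by the count. The volatile
     * accumulator keeps the compiler from deleting the loop, but it also
     * forces a load/store per iteration, so this is an upper bound. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const uint64_t N = 100000000ull;   /* 1e8 additions */
        volatile uint64_t sum = 0;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (uint64_t i = 0; i < N; i++)
            sum += i;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("~%.2f ns per addition\n", secs / N * 1e9);
        return 0;
    }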


2 Answers

Assuming you're interested mainly in the x86 family: there was a time when the time a multiplication took depended on the operands. That was in the previous century - processors up to and including the 80486. From the Pentium on, imul has always taken a number of cycles independent of its input.

Division has always been, and still is, an operation that takes a number of cycles that depends on the inputs; IIRC it depends mainly on the magnitude of the result.

Floating-point instructions can often take a variable amount of time if you also consider "weird" inputs such as denormals.
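
For the curious, here is a tiny sketch (an illustration, not from the answer) of how to produce one of those "weird" inputs: dividing DBL_MIN further yields a subnormal (denormal) double, which on many CPUs takes a slower microcode-assisted path in arithmetic.

    /* Produce and classify a subnormal double (C99: fpclassify). */
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double d = DBL_MIN / 4.0;     /* below DBL_MIN: subnormal */
        printf("DBL_MIN = %g, d = %g\n", DBL_MIN, d);
        printf("subnormal: %s\n",
               fpclassify(d) == FP_SUBNORMAL ? "yes" : "no");
        return 0;
    }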

answered Sep 19 '22 by harold

It depends on the input type. For primitives that are natively supported by the CPU, such as multiplication of 64-bit numbers on a 64-bit CPU: no, these are atomic operations that always take exactly the same amount of time. For non-primitive data types, like Java's BigInteger or comparable library classes in other languages: yes, these aren't atomic operations anymore, so the time required differs depending on the size of the operands.

Multiplication of primitives always takes the same amount of time for a simple reason: the operation is hard-wired without any conditional execution and always works through all 64 bits on a 64-bit CPU, no matter whether the input is only 5 bits long or uses all 64 bits. The same applies to architectures with any other word size.
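
To contrast the two cases, here is a sketch (it assumes the GMP library is available; link with -lgmp): the native multiply is one fixed-latency instruction, while mpz_mul works limb by limb, so its cost grows with the size of the operands.

    /* Native fixed-width multiply vs. arbitrary-precision multiply. */
    #include <stdint.h>
    #include <stdio.h>
    #include <gmp.h>

    int main(void) {
        uint64_t a = 5, b = 2;
        printf("%llu\n", (unsigned long long)(a * b)); /* one mul, fixed cost */

        mpz_t x, y, z;
        mpz_init_set_str(x, "51234", 10);
        mpz_init_set_str(y, "987654", 10);
        mpz_init(z);
        mpz_mul(z, x, y);        /* running time grows with operand length */
        gmp_printf("%Zd\n", z);  /* 50601465036 */
        mpz_clears(x, y, z, NULL);
        return 0;
    }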

EDIT:
As @nwellnhof pointed out, some instructions actually do contain branching, for example floating-point arithmetic. These instructions are usually implemented in microcode and thus can't be counted as atomic instructions in the narrower sense.

answered Sep 19 '22 by Paul