We know that modern processors can execute instructions such as sine and cosine directly, since they have opcodes for them. My question is: how many cycles do these instructions normally take? Do they take constant time, or does the time depend on the input parameters?
The times vary with the processor model, typically ranging from tens of CPU cycles to a hundred or more.
(The times consumed by many instructions vary depending on circumstances, because instructions use a variety of resources in the processor [dispatcher, execution units, rename registers, and more], so how long an instruction delays other work depends on what else is going on in the processor. For example, if some code is doing almost entirely load and store instructions, then a very occasional sine instruction might not slow its execution at all. However, instructions that take tens of CPU cycles are usually dominated by their times in the execution unit, which is the part that does the actual numerical calculation.)
The execution times may vary depending on input parameters. Large arguments to trigonometric functions must be reduced modulo 2π, which is a complicated problem by itself.
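As a rough illustration, here is a minimal C sketch that times the sin() library function (a library call, not a bare hardware instruction) for a small argument and for a very large one that forces expensive argument reduction. The absolute numbers depend entirely on the compiler, the math library, and the CPU; the point is only that cost can vary with the input.

```c
#include <math.h>
#include <stdio.h>
#include <time.h>

/* Returns total nanoseconds spent evaluating sin() near x for
   'iterations' passes. Illustrative only. */
static double time_sin(double x, int iterations)
{
    struct timespec start, end;
    volatile double sink = 0.0;  /* volatile keeps the loop from being optimized away */

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < iterations; i++)
        sink += sin(x + i * 1e-9);  /* perturb the input slightly each pass */
    clock_gettime(CLOCK_MONOTONIC, &end);

    (void) sink;
    return (end.tv_sec - start.tv_sec) * 1e9
         + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    int n = 1000000;
    printf("small argument (0.5):  %.0f ns total\n", time_sin(0.5, n));
    printf("large argument (1e18): %.0f ns total\n", time_sin(1e18, n));
    return 0;
}
```

Compile with something like `cc -O2 bench.c -lm`. On many implementations the large-argument case is measurably slower, because the library must reduce the argument modulo 2π before the core polynomial evaluation.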
In the Mac OS X math library, we generally write our own implementations, often in assembly language, for various reasons that may include speed, conformance to standards, suitability for the application binary interface, and other features.
If you are just curious, then “tens to hundreds of processor cycles” may be a good enough answer, especially without specifying a particular processor model. Essentially, the time is long enough that you should not use these operations without good reason. (E.g., I have seen code that obtains π as 4·arctan(1). Do not do that.)
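To make that last point concrete, here is a small C sketch contrasting the two approaches. It assumes a POSIX-style `<math.h>` that defines `M_PI`; that macro is not required by ISO C, so a fallback is supplied.

```c
#include <math.h>
#include <stdio.h>

/* M_PI is provided by <math.h> on POSIX systems but is not part of
   ISO C, so supply a fallback definition. */
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    /* Avoid: recomputes a known constant with a slow library call. */
    double bad_pi = 4.0 * atan(1.0);

    /* Prefer: a compile-time constant, which costs nothing at run time. */
    double good_pi = M_PI;

    printf("%.17g\n%.17g\n", bad_pi, good_pi);
    return 0;
}
```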
If you have other reasons for asking, you should explain, so that answers can be focused.