
How to calculate MIPS for an algorithm for ARM processor

I have been asked recently to produce the MIPS (millions of instructions per second) figure for an algorithm we have developed. The algorithm is exposed by a set of C-style functions. We have exercised the code on a Dell Axim to benchmark the performance under different inputs.

This question came from our hardware vendor, but I am mostly a high-level software developer, so I am not sure how to respond to the request. Maybe someone with a similar HW/SW background can help...

  1. Since our algorithm is not real-time, I don't think we need to quantify it as MIPS. Is it possible to simply quote the total number of assembly instructions?

  2. If 1 is true, how do you do this (i.e., how do you measure the number of assembly instructions), either in general or specifically for ARM/XScale?

  3. Can 2 be performed on a WM device or via the Device Emulator provided in VS2005?

  4. Can 3 be automated?

Thanks a lot for your help. Charles


Thanks for all your help. I think S.Lott hit the nail on the head. And as a follow-up, I now have more questions.

5. Any suggestions on how to go about measuring MIPS? I heard someone suggest running our algorithm and comparing it against the Dhrystone/Whetstone benchmarks to calculate MIPS.

6. Since the algorithm does not need to run in real time, is MIPS really a useful measure? (e.g. factorial(N)) What are other ways to quantify the processing requirements? (I have already measured the runtime performance, but it was not a satisfactory answer.)

7. Finally, I assume MIPS is a crude estimate and would be dependent on the compiler, optimization settings, etc.?

asked Mar 24 '09 by Charles


People also ask

What is the MIPS rate for each processor?

Alternatively, divide the number of cycles per second (the clock rate) by the number of cycles per instruction (CPI), then divide by 1 million to get MIPS. For instance, a computer with a 600 MHz clock and a CPI of 3: 600,000,000 / 3 = 200,000,000 instructions per second; 200,000,000 / 1,000,000 = 200 MIPS.

How do you measure Dmips?

One common representation of the Dhrystone benchmark score is DMIPS (Dhrystone MIPS). It is obtained by dividing the Dhrystone score (Dhrystones per second) by 1757, the score of the VAX 11/780, nominally a 1 MIPS machine.


1 Answer

I'll bet that your hardware vendor is asking how many MIPS you need.

As in "Do you need a 1,000 MIPS processor or a 2,000 MIPS processor?"

Which gets translated by management into "How many MIPS?"

Hardware offers MIPS. Software consumes MIPS.

You have two degrees of freedom.

  • The processor's inherent MIPS offering.

  • The number of seconds during which you consume that many MIPS.

If the processor doesn't have enough MIPS, your algorithm will be "slow".

If the processor has enough MIPS, your algorithm will be "fast".

I put "fast" and "slow" in quotes because you need to have a performance requirement to determine "fast enough to meet the performance requirement" or "too slow to meet the performance requirement."

On a 2,000 MIPS processor, you might take an acceptable 2 seconds. But on a 1,000 MIPS processor this explodes to an unacceptable 4 seconds.


How many MIPS do you need?

  1. Get the official MIPS for your processor. See http://en.wikipedia.org/wiki/Instructions_per_second

  2. Run your algorithm on some data.

  3. Measure the exact run time. Average a bunch of samples to reduce uncertainty.

  4. Report. 3 seconds on a 750 MIPS processor is -- well -- 3 seconds at 750 MIPS. MIPS is a rate. Time is time. Distance is the product of rate * time. 3 seconds at 750 MIPS is 750*3 million instructions.

Remember Rate (in Instructions per second) * Time (in seconds) gives you Instructions.

Don't say that it's 3*750 MIPS. It isn't; it's 2250 Million Instructions.

answered Sep 23 '22 by S.Lott