
What is FLOPS in the field of deep learning?

What is FLOPS in the field of deep learning? Why don't we just use the term FLO?

We use the term FLOPS to measure the number of operations needed by a frozen deep learning network.

According to Wikipedia, FLOPS = floating point operations per second. When we benchmark computing hardware, time is part of the measurement. But when measuring a deep learning network, how should I understand this concept of time? Shouldn't we just use the term FLO (floating point operations)?

Why do people use the term FLOPS? Is there something I'm missing?

==== attachment ====

The frozen deep learning networks I mentioned are just software; this is not about hardware. In the field of deep learning, people use the term FLOPS to measure how many operations are needed to run a network model. In this case, in my opinion, we should use the term FLO. I thought people were confusing the two terms, and I want to know whether others think the same or whether I'm wrong.
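To illustrate the software sense of the term: the forward pass of a fully connected layer with n inputs and m outputs costs roughly 2·n·m floating point operations (one multiply and one add per weight), and that count has nothing to do with time. A minimal sketch, with layer sizes invented for illustration:

```python
# Rough FLOPs count for the forward pass of a small multilayer perceptron.
# Each fully connected layer with n inputs and m outputs performs about
# n * m multiplications and n * m additions: roughly 2 * n * m FLOPs.

def dense_flops(n_in, n_out):
    """FLOPs for one dense-layer forward pass (a multiply-add counts as 2)."""
    return 2 * n_in * n_out

# Hypothetical network: 784 -> 128 -> 10 (e.g. a tiny MNIST classifier).
layer_sizes = [784, 128, 10]
total = sum(dense_flops(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"Total forward-pass cost: {total} FLOPs")  # an amount, not a rate
```

Note that the result is a pure count; "per second" never enters the calculation.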

Please look at these cases:

how to calculate a net's FLOPs in CNN

https://iq.opengenus.org/floating-point-operations-per-second-flops-of-machine-learning-models/

ladofa, asked Oct 22 '19

People also ask

What are FLOPs in deep learning?

FLOPS refers to the number of floating point operations that a computing entity can perform in one second; it is used to quantify the performance of hardware. FLOPs simply means the total number of floating point operations required for a single forward pass.

Is higher FLOPs better?

To be specific, FLOPS means floating point operations per second, and fps means frames per second. In terms of comparison: (1) FLOPs, the lower the better; (2) number of parameters, the lower the better; (3) fps, the higher the better; (4) latency, the lower the better.

What are FLOPs in computer?

In computers, FLOPS are floating-point operations per second. Floating-point is, according to IBM, "a method of encoding real numbers within the limits of finite precision available on computers." Using floating-point encoding, extremely long numbers can be handled relatively easily.




3 Answers

Confusingly, both FLOPs (floating point operations) and FLOPS (floating point operations per second) are used in reference to machine learning. FLOPs are often used to describe how many operations are required to run a single instance of a given model, like VGG19. This is the usage of FLOPs in both of the links you posted, though unfortunately the opengenus link mistakenly uses 'floating point operations per second' to refer to FLOPs.

You will see FLOPS used to describe the computing power of hardware such as GPUs, which is useful when thinking about how powerful a given piece of hardware is, or conversely, how long it may take to train a model on that hardware.

Sometimes people write FLOPS when they mean FLOPs. It is usually clear from the context which one they mean.
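To make the model-cost sense of FLOPs concrete, here is a sketch of the standard count for a single convolutional layer: output positions × kernel size × input channels, times 2 for the multiply-add. The layer dimensions below are assumptions chosen to resemble the first layer of a VGG-style network, not figures from the question:

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k):
    """Approximate FLOPs for one Conv2D forward pass.

    Each of the h_out * w_out * c_out output values is a dot product
    over k * k * c_in inputs; a multiply-add is counted as 2 FLOPs.
    """
    return 2 * h_out * w_out * c_out * k * k * c_in

# Example: 64 filters of size 3x3 over a 224x224 RGB image with
# 'same' padding (dimensions assumed for illustration).
flops = conv2d_flops(h_out=224, w_out=224, c_in=3, c_out=64, k=3)
print(f"{flops / 1e6:.1f} MFLOPs")
```

Tools that report a model's FLOPs essentially sum this kind of per-layer count over the whole network.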

vladievlad, answered Oct 16 '22


I'm not sure my answer is 100% correct, but this is what I understand:

  • FLOPS = Floating point operations per second

  • FLOPs = Floating point operations

FLOPS is a unit of speed. FLOPs is a unit of amount.
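The amount/speed distinction matters in practice because dividing an amount (FLOPs) by a rate (FLOPS) gives a rough lower bound on run time. A back-of-the-envelope sketch, where both the model cost and the hardware rate are illustrative assumptions rather than measured values:

```python
# Dividing FLOPs (an amount) by FLOPS (a rate) gives an idealized time.
model_flops = 19.6e9        # assumed cost of one forward pass (~VGG19 scale)
gpu_flops_per_sec = 14e12   # assumed GPU peak of ~14 TFLOPS

ideal_seconds = model_flops / gpu_flops_per_sec
print(f"Idealized inference time: {ideal_seconds * 1e3:.2f} ms")
# Real latency will be higher: memory traffic, kernel launches, and
# imperfect utilization keep hardware well below its peak FLOPS.
```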

pakman24, answered Oct 17 '22


What is FLOPS in the field of deep learning? Why don't we just use the term FLO?

FLOPS (floating point operations per second) means the same thing in most fields: it's the (theoretical) maximum number of floating point operations per second that the hardware might (if you're extremely lucky) be capable of.

We don't use FLO because FLO would always be infinity: given an infinite amount of time, hardware is capable of doing an infinite number of floating point operations.

Note that one "floating point operation" is one multiplication, one division, one addition, etc. Typically (for modern CPUs) FLOPS is calculated from repeated use of a "fused multiply then add" instruction, so that one instruction counts as 2 floating point operations. When combined with SIMD, a single instruction (doing 8 "multiply and add" operations in parallel) might count as 16 floating point operations. Of course this is a calculated theoretical value, so you ignore things like memory accesses, branches, IRQs, etc. This is why "theoretical FLOPS" is almost never achievable in practice.
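The "calculated theoretical value" described above is just a product of a few hardware parameters. A sketch with invented numbers (the core count, clock, and SIMD width below are hypothetical, loosely modeled on an AVX2-class CPU):

```python
# Theoretical peak FLOPS = cores * clock * SIMD lanes * ops per instruction.
cores = 8          # hypothetical CPU core count
clock_hz = 3.0e9   # 3.0 GHz clock (assumed)
simd_lanes = 8     # e.g. AVX2: 8 single-precision floats per register
ops_per_fma = 2    # one fused multiply-add counts as 2 floating point ops

peak_flops = cores * clock_hz * simd_lanes * ops_per_fma
print(f"Theoretical peak: {peak_flops / 1e9:.0f} GFLOPS")
# Memory stalls, branches, and non-FMA work mean real code only ever
# reaches a fraction of this figure.
```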

Why do people use the term FLOPS? Is there something I'm missing?

Primarily it's used to describe how powerful hardware is for marketing purposes (e.g. "Our new CPU is capable of 5 GFLOPS!").

Brendan, answered Oct 16 '22