
What is the big deal about Big-O notation in computer science?

How would Big-O notation help in my day-to-day C# programming? Is it just an academic exercise?

asked Jan 03 '10 by dotnet-practitioner


3 Answers

Big-O tells you the complexity of an algorithm in terms of the size of its input. This is essential if you want to know how algorithms will scale. If you're designing a big website with a lot of users, the time it takes to handle those requests matters. If you have lots of data to store in a structure, you need to know how to do that efficiently if you're going to write something that doesn't take a million years to run.

It's not that Big-O notation itself will help you. It's that if you understand Big-O notation, you understand how an algorithm's cost grows with the size of its input (usually stated for the worst case). Essentially, Big-O gives you a high-level sense of which algorithms are fast, which are slow, and what the tradeoffs are. I don't see how you can understand the performance implications of anything in, say, the .NET collections library if you don't understand this.
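To make that concrete, here is a small illustrative C# sketch (an editor's example, not part of the original answer; the data and names are made up). It relies only on the documented complexities of the .NET collections: `List<T>.Contains` scans the whole list, O(n), while `HashSet<T>.Contains` is an O(1) hash lookup on average.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class CollectionLookupDemo
{
    static void Main()
    {
        // 100,000 "blocked" addresses (synthetic data for illustration).
        var blocked = Enumerable.Range(0, 100_000)
                                .Select(i => $"user{i}@example.com")
                                .ToArray();

        var blockedList = new List<string>(blocked);    // Contains: O(n) per call
        var blockedSet  = new HashSet<string>(blocked); // Contains: O(1) on average

        string[] incoming = { "user42@example.com", "nobody@example.com" };

        foreach (var address in incoming)
        {
            // Checking m addresses against n blocked entries:
            //   List<T>    -> roughly n * m string comparisons
            //   HashSet<T> -> roughly m hash lookups
            Console.WriteLine(
                $"{address}: list={blockedList.Contains(address)}, set={blockedSet.Contains(address)}");
        }
    }
}
```

Both collections give the same answer; knowing the Big-O of `Contains` is what tells you, before you profile anything, which one will keep up as the blocked list grows.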

I won't go into more detail here, since this question has been asked many times, but suffice it to say that this is something you should understand. Here's a fairly highly voted previous Big-O question to get you started.

Todd Gamblin


Big O notation allows you to analyze algorithms in terms of overall efficiency and scalability. It abstracts away constant-factor differences in efficiency, which can vary across platforms, languages, and operating systems, to focus on the inherent efficiency of the algorithm and how it varies with the size of the input.
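A rough sketch of what "abstracting away constant factors" means in practice (the step counts below are hypothetical, chosen only for illustration): an algorithm that spends 50 steps per item is still O(n), and past a certain input size it beats an algorithm whose total work grows as n², no matter how cheap each quadratic step is on a given platform.

```csharp
using System;

class ConstantFactorDemo
{
    static void Main()
    {
        // Hypothetical step counts for two algorithms:
        //   A: 50 * n  -> large constant factor, but linear: O(n)
        //   B: n * n   -> no constant factor, but quadratic: O(n^2)
        foreach (var n in new[] { 10, 100, 1_000, 10_000 })
        {
            long stepsA = 50L * n;
            long stepsB = (long)n * n;
            Console.WriteLine($"n = {n,6}:  A (linear) = {stepsA,12:N0}   B (quadratic) = {stepsB,12:N0}");
        }
        // For small inputs B looks cheaper; beyond n = 50 the quadratic term
        // dominates regardless of language, OS, or hardware, which is exactly
        // the platform-independent behaviour Big-O describes.
    }
}
```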

Larry Watanabe


Reading the other answers, I (seriously) think that big-O is underestimated.

As coders who make money from coding, we need to know what big-O is and why we need it.

Let me explain what I think: Big-O notation describes the efficiency/performance of your work. You have to know how fast your code runs as the inputs get bigger, because in real life you can't know the exact number of inputs in advance. Furthermore, you can't compare two different algorithmic approaches without an asymptotic notation, so if you want to choose the better one, you compare them with big-O and see which one fits your situation. Both may be inefficient, but you will know which one is better.
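For a day-to-day C# example of that kind of comparison (an illustrative sketch, not from the original answer): building a large string with `+=` is O(n²), because strings are immutable and every append re-copies everything accumulated so far, while `StringBuilder` stays O(n). Both approaches produce the same output; big-O is what tells you in advance which one scales.

```csharp
using System;
using System.Text;

class StringBuildingComparison
{
    // Approach 1: string is immutable, so each += allocates a new string and
    // copies everything built so far. Total copying ~ 1 + 2 + ... + n => O(n^2).
    static string JoinWithConcat(string[] parts)
    {
        var result = "";
        foreach (var part in parts)
            result += part;
        return result;
    }

    // Approach 2: StringBuilder appends into a growable buffer => O(n) overall.
    static string JoinWithBuilder(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (var part in parts)
            sb.Append(part);
        return sb.ToString();
    }

    static void Main()
    {
        var small = new[] { "a", "b", "c" };
        Console.WriteLine(JoinWithConcat(small)); // "abc" - fine at this size

        var parts = new string[100_000];
        for (int i = 0; i < parts.Length; i++)
            parts[i] = "x";

        // Same result either way; the asymptotic difference is what tells you,
        // before profiling, that only the StringBuilder version will hold up
        // as the input grows.
        Console.WriteLine(JoinWithBuilder(parts).Length); // 100000
    }
}
```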

rahmivolkan