 

Analysis of algorithms

Tags:

algorithm

I am reading algorithm analysis topic. Here is the text snippet from the book

When n doubles, the running time goes up by a factor of 2 for linear programs, 4 for quadratic programs, and 8 for cubic programs. Programs that run in logarithmic time take only an additive constant longer when n doubles, and programs that run in O(n log n) take slightly more than twice as long to run under the same circumstances.

These increases can be hard to spot if the lower-order terms have relatively large coefficients and n is not large enough.
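For concreteness, here is a small sketch (not from the book) that tabulates the growth factor f(2n)/f(n) for each complexity class mentioned in the quoted passage:

```python
# Illustrative sketch (not from the book): tabulate the growth factor f(2n)/f(n)
# for each complexity class mentioned in the quoted passage.
import math

def growth_factor(f, n):
    """How much longer a program with cost function f takes when n doubles."""
    return f(2 * n) / f(n)

n = 1_000_000
for name, f in [
    ("log n",   lambda m: math.log2(m)),
    ("n",       lambda m: m),
    ("n log n", lambda m: m * math.log2(m)),
    ("n^2",     lambda m: m ** 2),
    ("n^3",     lambda m: m ** 3),
]:
    print(f"{name:8} doubling factor: {growth_factor(f, n):.2f}")

# Output: log n grows by only ~1.05x (an additive constant, +1, on top of ~20),
# n by exactly 2x, n log n by ~2.1x ("slightly more than twice"), n^2 by 4x, n^3 by 8x.
```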

My question is: what does the author mean by lower-order terms having relatively large coefficients? Can anyone explain with an example?

Thanks!

asked Aug 18 '11 by venkysmarty

People also ask

What is meant by analysis of algorithms?

In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them.

What are the 3 algorithm analysis techniques?

In Sections 1.3 through 1.6, we explore three important techniques of algorithm design—divide-and-conquer, dynamic programming, and greedy heuristics.

Why do we analyse algorithms?

Before you implement any algorithm as a program, it is better to find out which of the candidate algorithms is good in terms of time and memory. It is best to analyze every algorithm in terms of time, which tells you which one executes faster, and memory, which tells you which one uses less space.

What is the framework for analysis of algorithms?

The most essential component of the efficiency analysis framework is time complexity. A valid algorithm executes in a finite amount of time. The time complexity of an algorithm refers to how long it takes the algorithm to solve a given problem. In algorithm analysis, time complexity is a very useful measure.


2 Answers

Suppose your algorithm actually executes n^2 + 1000 n computations when run on n elements. Now for n = 1 you need 1001 computations, and for n = 2 you need 2004. The difference from linear growth is tiny, and you can hardly spot the quadratic contribution!

Asymptotically, however, your algorithm takes O(n^2) steps, so asymptotically (as n gets large) doubling the input size quadruples your runtime. But for our small value, doubling from 1 to 2 did not quadruple the runtime! The lower-order term is n, and its coefficient (1000) is large compared to the coefficient of the leading-order term n^2 (which is 1).

This shows how the asymptotic complexity does not say anything about particular, especially small values. It's merely a limiting statement about the behaviour as n gets large.
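A minimal sketch of that example, using the cost function f(n) = n^2 + 1000n from the answer:

```python
# Cost function from the answer: a quadratic with a large lower-order coefficient.
def cost(n):
    return n ** 2 + 1000 * n

for n in [1, 2, 10, 1_000, 100_000]:
    ratio = cost(2 * n) / cost(n)
    print(f"n = {n:>7}: cost(n) = {cost(n):>14}, doubling multiplies cost by {ratio:.2f}")

# For small n the ratio is close to 2, because the 1000*n term dominates and the
# growth looks linear; only for large n does it approach the asymptotic factor of 4.
```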

answered Nov 16 '22 by Kerrek SB


When using O notation, you specify the largest term of the function that bounds your performance. For example, if the performance were always bounded by f = c3*n^3 + c2*n^2 + c1*n + c0, you would say that it is O(n^3). The author is saying that when n is small, the coefficients may have a larger impact on the performance than n does. For example, if c2 were very large and c3 very small, the performance may appear to be O(n^2) until n grows large enough to dominate the coefficients, if you only go by the relative performance for specific small instances of n.
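A small sketch with hypothetical coefficients (my own choice, not from the answer): c3 is tiny and c2 is large, so the quadratic term dominates until n gets past c2/c3:

```python
# Hypothetical coefficients picked to illustrate the point: c3 is tiny, c2 is large.
c3, c2, c1, c0 = 0.001, 1000.0, 1.0, 1.0

def cost(n):
    return c3 * n**3 + c2 * n**2 + c1 * n + c0

for n in [10, 1_000, 100_000, 1_000_000, 100_000_000]:
    ratio = cost(2 * n) / cost(n)
    print(f"n = {n:>11,}: doubling multiplies cost by {ratio:.2f}")

# The ratio sits near 4 (quadratic-looking) while c2*n^2 dominates, and only climbs
# toward 8 once n grows well past the crossover point c2/c3 = 1,000,000.
```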

answered Nov 16 '22 by tvanfosson