


Dev Articles Community Forums
> Programming
> C/C++ Help

General: Please explain the time complexity of Insertion Sort.








June 2nd, 2008, 02:09 PM

Registered User


Join Date: Apr 2008
Posts: 3
Time spent in forums: 32 m 56 sec
Reputation Power: 0


General: Please explain the time complexity of Insertion Sort.
If insertion sort can sort 1,000 values in 10 seconds, how long will it take to sort 1,000,000 values?
I know insertion sort's time complexity is O(n^2), but I don't know how to apply it.
My guess is n^2 = 10, so n = 3.16227766, but what do I do with that?

June 2nd, 2008, 02:41 PM


Contributing Abuser


Join Date: Apr 2007
Location: Starkville, MS
Posts: 336
Time spent in forums: 5 Days 15 h 51 m 40 sec
Reputation Power: 11


Big O notation is a tool for analyzing complexity and represents an asymptotic upper bound. It is not to be taken literally: if something is O(n^2), you can't just plug in numbers and solve for n to get the exact number of "seconds" it's going to take, because big O is more an estimate than anything. Additionally, it is usually quoted for the worst-case scenario; if the items you're sorting are in a best- or average-case order, it will take considerably less time.
The reason I put "seconds" in quotes is that the function counts the number of steps it will take, not seconds. A step can take a variable amount of time depending on the data types being used and various other factors.
As far as application goes, you would compare it to other algorithms. If you have two sorting algorithms, one that's O(n^2) and one that's O(n^3), then you'll probably want to use the first one, since it will take less time for large inputs. Whereas if you have two algorithms that are both O(n^2), there isn't a major difference between the two (unless you know how often the best case occurs for each).
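To make the best/worst case difference concrete, here's a sketch of insertion sort with a comparison counter bolted on (the counter is just for illustration, not part of the usual algorithm):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Insertion sort that also counts element comparisons, so the
// gap between best case and worst case is visible.
std::size_t insertion_sort(std::vector<int>& a) {
    std::size_t comparisons = 0;
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        // Walk left while the element to the left is larger than key.
        while (j > 0 && (++comparisons, a[j - 1] > key)) {
            a[j] = a[j - 1];  // shift the larger element one slot right
            --j;
        }
        a[j] = key;
    }
    return comparisons;
}
```

On an already-sorted array of n elements this does n - 1 comparisons (the best case, linear); on a reverse-sorted array it does n(n-1)/2 (the worst case, which is where the O(n^2) comes from).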
__________________

June 2nd, 2008, 02:50 PM


Command Line Warrior


Join Date: Sep 2005
Posts: 1,021
Time spent in forums: 2 Weeks 8 h 12 m 36 sec
Reputation Power: 13


O(N^2) means that the time increases quadratically. N is the size of your problem instance. It translates to: if N increases by a factor x, then the time increases by a factor x^2. In this case the problem size is increasing by a factor of 1000. You might hope (if it were linear) that the time would increase by a factor of 1000 as well, i.e., to 10,000 seconds. But the time increases by x^2, so it increases by a factor of 1000 × 1000. Which means insertion-sorting 1,000,000 values would take about 10,000,000 seconds.
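The scaling arithmetic above can be written out as a tiny helper (the function name is made up, purely to show the calculation):

```cpp
#include <cassert>

// Hypothetical scaling estimate: if an O(n^2) sort takes t1 seconds
// on n1 items, predict the time for n2 items by scaling with (n2/n1)^2.
double quadratic_estimate(double t1, double n1, double n2) {
    double factor = n2 / n1;      // problem size grew by this factor
    return t1 * factor * factor;  // time grows by the factor squared
}
```

Plugging in the thread's numbers: 10 seconds for 1,000 values predicts 10 × 1000^2 = 10,000,000 seconds for 1,000,000 values. Keep in mind this ignores the constant factors and lower-order terms that big O hides.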
__________________
There is no such thing as C/C++, you either program C or C++

June 2nd, 2008, 04:40 PM


Command Line Warrior


Join Date: Sep 2005
Posts: 1,021
Time spent in forums: 2 Weeks 8 h 12 m 36 sec
Reputation Power: 13


Ehm yes, and it is all approximate, like Bobidybob said. Big O notation is not an exact thing. For instance, an algorithm that takes 4*N^2 time and one that takes 2*N + 1/2*N^2 are both O(N^2). It describes asymptotic behaviour, as Bobidybob already said, so it 'only' works for large N, but that is usually what you are interested in. (For lists under, say, 10 elements, bubble sort can actually be the fastest.)
Big O notation does stand for an asymptotic upper bound, but it does not necessarily stand for the worst case; you have to say which case you are talking about. Quicksort, for instance, has O(N log N) time complexity on average, but there are inputs on which it performs far worse: in the worst case it takes Θ(N^2) (Θ denotes a bound that is both upper and lower).
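As a sketch of that worst case, here's a naive quicksort (last element as pivot) with a comparison counter; already-sorted input makes every partition maximally unbalanced, so it performs exactly N(N-1)/2 comparisons:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Naive quicksort (Lomuto partition, last element as pivot) that
// returns the number of comparisons, to illustrate the Θ(N^2)
// worst case on sorted input.
std::size_t quicksort(std::vector<int>& a, std::ptrdiff_t lo, std::ptrdiff_t hi) {
    if (lo >= hi) return 0;
    int pivot = a[hi];
    std::ptrdiff_t i = lo;
    std::size_t comparisons = 0;
    for (std::ptrdiff_t j = lo; j < hi; ++j) {
        ++comparisons;
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    }
    std::swap(a[i], a[hi]);  // put the pivot in its final place
    return comparisons + quicksort(a, lo, i - 1) + quicksort(a, i + 1, hi);
}
```

On sorted input the pivot is always the maximum, so each recursive call only shrinks the problem by one; randomized or median-of-three pivot selection is the usual fix.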

June 3rd, 2008, 10:40 AM


Contributing User


Join Date: Dec 2007
Posts: 1,177
Time spent in forums: 1 Week 1 Day 21 h 27 m 36 sec
Reputation Power: 11


There is a constant, usually different for each function, which says how much each 'step' costs. This is a rather important point, and a nice kick in the side to those who place too much importance on the O().
Consider:
Function 1: O(N^3)
Each step takes 1 ms. (Keep in mind this is a lot.)
Implementing this will take half an hour. Perhaps it's already implemented.
Function 2: O(N^2)
Each step takes 10 ms. (Even larger.)
Implementing this will take almost a day.
Which one do you go with?
Let me answer with a question: what are typical values of N?
If we find it's usually 10 (this value should be measured by running the application as it is supposed to be used), then both run for exactly one second.
Now, if we find N never exceeds 13 in practice, we get about 2.2 seconds with #1 and about 1.7 seconds with #2. If this is done only once per run (and assuming the program isn't run over and over and over), is that half-second saving worth a day of extra work?
Of course, a huge N changes things. Just remember that your guesses about how big N is tend to be wrong.
I think that rounds out the explanation nicely.
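The hypothetical numbers in that post can be checked with two throwaway cost functions (names and constants made up to match the example; times in milliseconds):

```cpp
#include <cassert>

// Function 1: O(N^3) at 1 ms per step.
double time1_ms(double n) { return 1.0 * n * n * n; }

// Function 2: O(N^2) at 10 ms per step.
double time2_ms(double n) { return 10.0 * n * n; }
```

At N = 10 both cost exactly 1000 ms; at N = 13 they cost 2197 ms and 1690 ms. The constant factor, not just the exponent, decides which one wins at small N.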

June 3rd, 2008, 11:22 AM

Registered User


Join Date: Apr 2008
Posts: 3
Time spent in forums: 32 m 56 sec
Reputation Power: 0


Thank you very much MaHuJa

