
Algorithms - Lesson 5 - Different Types of Sorting

Introduction to Sorting


SORTING


What exactly does sorting mean?


Sorting means arranging data in a certain order. Many algorithms have been designed to serve this purpose, and sorting is considered the de facto introductory problem for the study of algorithms. Let us see where exactly we can see sorting implemented.


Most of you have googled for various things. Have you ever noticed the results that appear immediately in the drop-down box of the Google search bar as soon as you hit a key on your keyboard? That's sorting: Google sorts the various results on its servers for you. So, formally, what we have is:







 

INPUT:  a sequence of n numbers (a1, a2, a3, ..., an)
OUTPUT: a permutation (a1', a2', ..., an') of the input such that a1' ≤ a2' ≤ a3' ≤ ... ≤ an'
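
In C, for example, this problem statement maps directly onto the standard library's qsort from <stdlib.h>: you hand it the sequence and a comparison function, and it rearranges the array in place into sorted order. A minimal sketch (the example numbers are chosen here just for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Comparison callback for qsort: returns negative, zero, or positive
 * depending on whether *a is smaller than, equal to, or larger than *b. */
static int compare_ints(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    int a[] = {27, 19, 33, 15, 4};            /* INPUT: a sequence of n numbers */
    size_t n = sizeof(a) / sizeof(a[0]);

    qsort(a, n, sizeof(a[0]), compare_ints);  /* OUTPUT: the sorted permutation */

    for (size_t i = 0; i < n; i++)
        printf("%d ", a[i]);                  /* prints: 4 15 19 27 33 */
    printf("\n");
    return 0;
}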
 

There are two types of sorting: internal and external sorting.

•    INTERNAL SORTING: If all the data to be sorted can be accommodated in main memory at one time, internal sorting methods are used. Various such methods are discussed later in these notes.
 

•    EXTERNAL SORTING: When the data to be sorted is so large that it cannot all fit in main memory at once, and parts of it have to be brought in from auxiliary storage (hard disk, flash drive, etc.) during the sort, external sorting methods are used.

Insertion Sort


INSERTION SORT


Different steps involved in the insertion sort are as follows:

Step 1: The very first element a[1] of the array is already sorted on its own. So from the start we have two subsections of the array, as shown below:

A:  a[1]  |  a[2]  | ....... |  a[k]  |  a[k+1]  | ....... |  a[n-1]  |  a[n]
    LEFT  | <--------------------------- RIGHT --------------------------->

The LEFT portion (just a[1]) is sorted, while the RIGHT one (elements a[2] .. a[n]) is unsorted.

Step 2: Take an element (usually the first one) from the RIGHT / unsorted portion and INSERT it into the sorted LEFT portion at its proper position, shifting elements of the LEFT portion if needed. In this way the LEFT / SORTED portion grows by one element while the RIGHT / UNSORTED portion shrinks by one.

Step 3: Repeat STEP 2 until the LEFT portion covers the entire array.


 

Let's take an example to illustrate insertion sort. We need to sort (27, 19, 33, 15, 4).

 

Iteration        Num1   Num2   Num3   Num4   Num5
Iteration 1      27     19     33     15     4
Iteration 2      19     27     33     15     4
Iteration 3      19     27     33     15     4
Iteration 4      15     19     27     33     4
Final (sorted)   4      15     19     27     33

This is the iteration-wise presentation of the sort.

The highlighted numbers represent the elements under consideration in the respective iteration, i.e. the elements being compared.

Pseudo code

INSERTION_SORT(A)
                  Start
1.    for m = 2 to length[A] do
2.        lock = A[m]
3.        // insert A[m] into the already-sorted sequence A[1 .. m-1]
4.        n ← m - 1
5.        while n > 0 and A[n] > lock do
6.            A[n+1] = A[n]
7.            n = n - 1
8.        A[n+1] = lock
                      End
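
A minimal C version of this pseudocode might look like the following (0-based indexing; the function name and example array are just illustrative):

#include <stdio.h>

/* Sort arr[0..n-1] in ascending order by insertion sort. */
void insertion_sort(int arr[], int n)
{
    for (int m = 1; m < n; m++) {          /* arr[0..m-1] is already sorted    */
        int lock = arr[m];                 /* element to be inserted           */
        int i = m - 1;
        while (i >= 0 && arr[i] > lock) {  /* shift larger elements one right  */
            arr[i + 1] = arr[i];
            i--;
        }
        arr[i + 1] = lock;                 /* drop the element into its place  */
    }
}

int main(void)
{
    int a[] = {27, 19, 33, 15, 4};         /* the example list from the notes  */
    int n = sizeof(a) / sizeof(a[0]);

    insertion_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);               /* prints: 4 15 19 27 33 */
    printf("\n");
    return 0;
}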

 

COMPLEXITY

CASE          BEST CASE    AVERAGE CASE    WORST CASE
COMPLEXITY    n-1          O(n²)           O(n²)
ADVANTAGES:
1.    It is the method most frequently used by human beings in real life, e.g. ordering a deck of cards.
2.    It is efficient for small arrays.
3.    Its best-case running time is linear in n, as compared to SELECTION and BUBBLE sort, which are quadratic in nature.
4.    It uses no extra memory; it sorts in place.

Selection Sort


SELECTION SORT


This is one of the simplest sorting methods. To sort data in ascending order, the 1st element is compared to every other element, and whenever it is found to be greater than the compared element, the two are interchanged. So after the first iteration the smallest element is placed at the 1st position. The same procedure is then repeated for the 2nd element, and so on for all the elements.

Let's take an example to sort a given list of numbers (27, 19, 33, 15, 4).


A few points to note before going through this iterative presentation of selection sort:
The elements marked in yellow in any row are the ones being compared at that instant. The elements marked in green are the ones that were already in their sorted positions when the iteration started, or that end up sorted by the time that iteration finishes.

 First Iteration

 

Num1   Num2   Num3   Num4   Num5
27     19     33     15     4
19     27     33     15     4
19     27     33     15     4
15     27     33     19     4
4      27     33     19     15

Second Iteration

Num1   Num2   Num3   Num4   Num5
4      27     33     19     15
4      27     33     19     15
4      19     33     27     15
4      15     33     27     19

Third Iteration

Num1   Num2   Num3   Num4   Num5
4      15     33     27     19
4      15     27     33     19
4      15     19     33     27


Fourth Iteration

Num1   Num2   Num3   Num4   Num5
4      15     19     33     27
4      15     19     27     33

This step-by-step, iteration-wise presentation is the best way to understand the topic.

Pseudo Code

START
SET l = size of array
    FOR (i = 0; i < l; i++)
        FOR (j = i + 1; j < l; j++)
            IF (A[i] > A[j])
                Swap(A[i], A[j])
            END IF
        END FOR
    END FOR
END
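
A corresponding C sketch of this selection-sort variant (exactly the compare-and-swap scheme described above; names are illustrative):

#include <stdio.h>

/* Sort arr[0..n-1] in ascending order using the selection-sort variant
 * described above: compare position i with every later element and swap
 * whenever a smaller element is found.                                   */
void selection_sort(int arr[], int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (arr[i] > arr[j]) {         /* smaller element found later on */
                int tmp = arr[i];          /* swap it into position i        */
                arr[i] = arr[j];
                arr[j] = tmp;
            }
        }
    }
}

int main(void)
{
    int a[] = {27, 19, 33, 15, 4};
    int n = sizeof(a) / sizeof(a[0]);

    selection_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);               /* prints: 4 15 19 27 33 */
    printf("\n");
    return 0;
}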

 

COMPLEXITY

CASE          BEST CASE    AVERAGE CASE    WORST CASE
COMPLEXITY    O(n²)        O(n²)           O(n²)

 

These are fairly general pseudocodes, which help a lot in developing actual working C/C++ code. For the exact code you can refer to this link - click here

Quick Sort

Quick sort

Quick sort was developed by Tony Hoare and published in 1962. In this algorithm the hard work is splitting the array into subsets (partitioning), so that combining the sorted pieces at the end is trivial.

 


1. Choose a pivot element from the array (it can be any element; the first or last element is commonly chosen).
2. Split the array into three sub-arrays containing the items less than the pivot, the pivot itself, and the items bigger than the pivot.
3. Recursively call the quick sort function on the first and last sub-arrays.


 
This is exactly what we do:

We put the pivot element in its correct position and then divide the array into two sub-arrays on either side of that pivot. Each newly generated sub-array gets its own new pivot element (the choice of pivot is entirely up to the user). Through the recursive mechanism these sub-arrays are processed in the same way: their pivot elements are put in their correct positions and the sub-arrays are divided further. This process continues until each sub-array is down to a single element, which is by itself already in its correct position.

Let me present a pictorial example to explain quick sort.

We have to sort a given list (11,10,8,12,4,3,9).

The different steps of the quick sort algorithm, which uses the divide-and-conquer technique, are as follows:

[Figure quicksort1.jpg: step-by-step partitioning of the list (11, 10, 8, 12, 4, 3, 9) until every sub-array has a single element]

At the last stage we can see that the given list is sorted.

Here's a more formal specification of the quick sort algorithm. The separate Partition subroutine takes the bounds of the (sub-)array as input and returns the post-partition split position as output.


 


Quicksort(A, p, r):
{
    if p < r:
        q = Partition(A, p, r)   // call the partition function, which returns the split (bifurcation) point
        Quicksort(A, p, q)       // sort the first sub-array
        Quicksort(A, q+1, r)     // sort the second sub-array
}

Partition(A, p, r):
{
    x = A[p]          // take the first element to be the pivot element
    i = p - 1
    j = r + 1

    while 1:
        // move j leftwards until it finds an element <= pivot
        j = j - 1
        while A[j] > x:
            j = j - 1

        // move i rightwards until it finds an element >= pivot
        i = i + 1
        while A[i] < x:
            i = i + 1

        if i < j:
            swap(A[i], A[j])
        else:
            return j      // the split (bifurcation) point
}

 



In the above code (Quicksort) we pass in the array containing the numbers to be sorted, together with the bounds p and r of the portion to sort. The Partition function returns the position about which that array/sub-array is to be partitioned further; this returned value is then used in the recursive calls made by Quicksort.
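
For concreteness, here is a minimal C sketch of the same scheme, using the Hoare-style partition above with the first element as the pivot (0-based indexing; function names are illustrative):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Hoare-style partition with the first element A[p] as the pivot.
 * Returns the split point j such that A[p..j] and A[j+1..r] can be
 * sorted independently.                                             */
static int partition(int A[], int p, int r)
{
    int x = A[p];                        /* first element taken as the pivot */
    int i = p - 1;
    int j = r + 1;

    while (1) {
        do { j--; } while (A[j] > x);    /* scan from the right   */
        do { i++; } while (A[i] < x);    /* scan from the left    */
        if (i < j)
            swap(&A[i], &A[j]);
        else
            return j;                    /* the bifurcation point */
    }
}

/* Recursively sort A[p..r]. */
void quicksort(int A[], int p, int r)
{
    if (p < r) {
        int q = partition(A, p, r);
        quicksort(A, p, q);              /* first sub-array  */
        quicksort(A, q + 1, r);          /* second sub-array */
    }
}

int main(void)
{
    int a[] = {11, 10, 8, 12, 4, 3, 9};  /* the example list from above */
    int n = sizeof(a) / sizeof(a[0]);

    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);             /* prints: 3 4 8 9 10 11 12 */
    printf("\n");
    return 0;
}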





 

Let's see the complexity expression for quick sort.

CASE          BEST CASE     AVERAGE CASE    WORST CASE
COMPLEXITY    O(n log n)    O(n log n)      O(n²)
The worst case occurs when the provided list is already sorted (with the first element always chosen as the pivot). Think about why for some time; if you are unable to work it out, then click here
In order to remove this problem with quick sort, a variant called Randomized Quick Sort is used. The only difference is that the pivot element is chosen randomly, rather than always picking the element at the same position of the array as the pivot.

Let's see the complexity expression for Randomized Quick Sort.

CASE          BEST CASE     AVERAGE CASE    WORST CASE (EXPECTED)
COMPLEXITY    O(n log n)    O(n log n)      O(n log n)

That is the advantage of Randomized Quick Sort: its expected running time is O(n log n) irrespective of the input order, because no particular input can consistently force the worst-case behaviour. (An unlucky run of random pivots can still cost O(n²), but this is extremely improbable.)
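
A minimal sketch of how the randomization is typically added, assuming the partition() and quicksort() routines from the previous sketch are available in the same file: pick a random index in [p, r], swap that element into the pivot position, and partition exactly as before.

#include <stdlib.h>    /* rand(); seed with srand() once in main if desired */

/* Randomized partition: swap a randomly chosen element into the pivot
 * position A[p], then partition exactly as before.                     */
static int randomized_partition(int A[], int p, int r)
{
    int k = p + rand() % (r - p + 1);   /* random index in [p, r]           */
    int tmp = A[p];
    A[p] = A[k];
    A[k] = tmp;
    return partition(A, p, r);          /* partition() from the sketch above */
}

void randomized_quicksort(int A[], int p, int r)
{
    if (p < r) {
        int q = randomized_partition(A, p, r);
        randomized_quicksort(A, p, q);
        randomized_quicksort(A, q + 1, r);
    }
}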

Heap Sort


HEAP SORT
 




 

Heap sort is built from a few different procedures, which are presented here in detail. I hope you will enjoy it.

First of all, MAX-HEAP is the key procedure for manipulating max-heaps (heaps in which every parent node is at least as large as both of its child nodes). Its inputs are an array A and an index i into the array. When MAX-HEAP is called, it is assumed that the binary trees rooted at LEFT(i) and RIGHT(i) are already max-heaps, but that A[i] may be smaller than its children, thus violating the max-heap property. The job of MAX-HEAP is to let the value at A[i] "float down" in the max-heap so that the sub-tree rooted at index i becomes a max-heap.
 





MAX-HEAP(A, i)

1.    l ← LEFT(i)
2.    r ← RIGHT(i)
3.    if l ≤ heap-size[A] and A[l] > A[i]
4.        then largest ← l
5.        else largest ← i
6.    if r ≤ heap-size[A] and A[r] > A[largest]
7.        then largest ← r
8.    if largest ≠ i
9.        then exchange A[i] ↔ A[largest]
10.            MAX-HEAP(A, largest)



At each step, the largest of the elements A[i], A[LEFT(i)] and A[RIGHT(i)] is determined and its index is stored in largest. If A[i] is the largest, then the sub-tree rooted at node i is already a max-heap and the procedure terminates. Otherwise, one of the two children holds the largest element, and A[i] is swapped with A[largest], which makes node i and its children satisfy the max-heap property. The node indexed by largest, however, now has the original value A[i], and thus the sub-tree rooted at largest may violate the max-heap property, so MAX-HEAP must be called recursively on that sub-tree.

We can use the procedure MAX-HEAP in a bottom-up manner to convert an array A[1...n], where n = length[A], into a max-heap. The elements in the sub-array A[(⌊n/2⌋+1)...n] are all leaves of the tree, and so each of them is a 1-element heap to begin with. The procedure BUILD-MAX-HEAP goes through the remaining nodes of the tree and runs MAX-HEAP on each one.
  

BUILD-MAX-HEAP(A)

1.    heap-size[A] ← length[A]
2.    for i ← ⌊length[A]/2⌋ downto 1
3.        do MAX-HEAP(A, i)



The heap-sort algorithm starts by using BUILD-MAX-HEAP to build a max-heap on the input array A[1...n], where n = length[A]. Since the maximum element of the array is stored at the root A[1], it can be put into its correct final position by exchanging it with A[n]. If we now "discard" node n from the heap (by decrementing heap-size[A]), we observe that A[1...n-1] can easily be made into a max-heap: the children of the root remain max-heaps, but the new root element may violate the max-heap property. All that is needed to restore the max-heap property is one call to MAX-HEAP(A, 1), which leaves a max-heap in A[1...n-1]. The heap-sort algorithm then repeats this process for the max-heap of size n-1 down to a heap of size 2.
 
HEAPSORT(A)

1.    BUILD-MAX-HEAP(A)
2.    for i ← length[A] downto 2
3.        do exchange A[1] ↔ A[i]
4.           heap-size[A] ← heap-size[A] - 1
5.           MAX-HEAP(A, 1)

FUNCTION      MAX-HEAP    BUILD-MAX-HEAP    HEAPSORT
COMPLEXITY    O(log n)    O(n)              O(n log n)
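
Putting the three procedures together, here is a compact C sketch of heap sort. It uses 0-based indexing, so LEFT(i) = 2i+1 and RIGHT(i) = 2i+2 rather than the 1-based indices of the pseudocode; function names are illustrative.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* MAX-HEAP ("heapify"): let A[i] float down so that the sub-tree rooted
 * at i becomes a max-heap. heapsize is the number of valid heap elements. */
static void max_heapify(int A[], int heapsize, int i)
{
    int l = 2 * i + 1;                    /* left child  (0-based) */
    int r = 2 * i + 2;                    /* right child (0-based) */
    int largest = i;

    if (l < heapsize && A[l] > A[largest]) largest = l;
    if (r < heapsize && A[r] > A[largest]) largest = r;
    if (largest != i) {
        swap(&A[i], &A[largest]);
        max_heapify(A, heapsize, largest);
    }
}

/* BUILD-MAX-HEAP: nodes n/2 .. n-1 are leaves and already 1-element heaps. */
static void build_max_heap(int A[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(A, n, i);
}

/* HEAPSORT: repeatedly move the root (current maximum) to the end. */
void heap_sort(int A[], int n)
{
    build_max_heap(A, n);
    for (int i = n - 1; i >= 1; i--) {
        swap(&A[0], &A[i]);               /* largest element goes to its final place */
        max_heapify(A, i, 0);             /* restore the max-heap on A[0..i-1]       */
    }
}

int main(void)
{
    int a[] = {27, 19, 33, 15, 4};
    int n = sizeof(a) / sizeof(a[0]);

    heap_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);              /* prints: 4 15 19 27 33 */
    printf("\n");
    return 0;
}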




 

So here we complete the heap-sort algorithm.

For the complete C code you can refer to this link, which was uploaded by another member.

Sorting Presentation

This presentation includes a brief summary about the sorts discussed in this lesson.

Open or Download This File:

Download