Explaining Proof of Quick-Sort's O(n log n) Expected Running Time

In summary, the conversation is about understanding the proof that the expected running time of quick-sort is O(n log n). The quick-sort algorithm is stated, the expected time required to sort a sequence of n elements is denoted T(n), and a recurrence relation for T(n) is derived. The goal is to show that T(n) ≤ kn log n for n ≥ 2, and the conversation ends with a question about how to complete the induction, possibly using strong induction.
  • #1
evinda
Hello! (Wave)

View attachment 4422

View attachment 4423

I am trying to understand the proof that the expected running time of quick-sort is $O(n \log n)$.

Could you maybe explain it to me?
 

Attachments

  • worst.PNG
  • proof1.PNG
  • #2
Suppose that we use the following algorithm:

Code:
Quicksort(S):
1.  if S contains at most one element then return S
    else
2.       choose an element a randomly from S;
3.       let S1, S2, and S3 be the sequences of elements in S less than, equal to, and greater than a, respectively;
4.       return Quicksort(S1) followed by S2 followed by Quicksort(S3)
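For reference, a direct Python transcription of this pseudocode might look like the following (a minimal sketch; the function name and the use of list comprehensions are illustrative choices, not part of the original algorithm):

```python
import random

def quicksort(s):
    # Line 1: a sequence with at most one element is already sorted.
    if len(s) <= 1:
        return s
    # Line 2: choose the pivot a uniformly at random from S.
    a = random.choice(s)
    # Line 3: split S into elements less than, equal to, and greater than a.
    s1 = [x for x in s if x < a]
    s2 = [x for x in s if x == a]
    s3 = [x for x in s if x > a]
    # Line 4: recurse on S1 and S3; S2 is already in its final position.
    return quicksort(s1) + s2 + quicksort(s3)
```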
I have tried the following:

Let [tex]T(n)[/tex] be the expected time required by the Quicksort algorithm in order to sort a sequence of [tex]n[/tex] elements. We have that [tex]T(0)=T(1)=b[/tex] for a constant [tex]b[/tex].

We suppose that the element [tex]a[/tex] is the [tex]i[/tex]-th smallest of the [tex]n[/tex] elements of the sequence [tex]S[/tex].
Then the two recursive calls of Quicksort have expected times [tex]T(i-1)[/tex] and [tex]T(n-i)[/tex], respectively.
Since lines 2-3 require [tex]O(n)[/tex] time and line 1 requires [tex]O(1)[/tex] time, we have the following recurrence relation:

[tex]T(n) \leq cn+ \frac{1}{n} \sum_{i=1}^n \left[ T(i-1)+T(n-i)\right], n \geq 2[/tex]

[tex]T(n) \leq cn+ \frac{1}{n} \sum_{i=1}^n \left[ T(i-1)+T(n-i)\right] = cn+ \frac{2}{n} \sum_{i=1}^{n} T(i-1) = cn+ \frac{2}{n} \sum_{i=0}^{n-1} T(i)[/tex]

since the sums [tex]\sum_{i=1}^n T(i-1)[/tex] and [tex]\sum_{i=1}^n T(n-i)[/tex] run over exactly the same terms. Hence

[tex]T(n) \leq cn+ \frac{2}{n} \sum_{i=0}^{n-1} T(i) \quad (1)[/tex]

We will show that for [tex]n \geq 2[/tex], [tex]T(n) \leq k n \log n[/tex], for the positive constant [tex]k=2(c+b)[/tex].

For [tex]n=2[/tex]: From the relation (1) we have [tex]T(2) \leq 2c+ \frac{2}{2}\left(T(0)+T(1)\right) = 2c+2b = 2(c+b) = k \leq k \cdot 2 \log 2 \checkmark[/tex] (taking logs base 2)

We suppose that [tex]T(n) \leq k n \log n[/tex] for some [tex]n \geq 2[/tex].

We will show that [tex]T(n+1) \leq k (n+1) \log (n+1)[/tex] .

From the relation (1) we have

[tex]T(n+1) \leq c(n+1)+ \frac{2}{n+1} \sum_{i=0}^{n} T(i) \leq c(n+1)+ \frac{2}{n+1} (n+1) T(n)= c(n+1)+2T(n) \leq c(n+1)+2 k n \log n \leq (n+1) (c+2k \log n)[/tex]

(where each of the [tex]n+1[/tex] summands was bounded by [tex]T(n)[/tex], assuming [tex]T[/tex] is nondecreasing). How could we continue? Or should we use strong induction?
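Before finishing the induction, one way to sanity-check the claimed bound is to tabulate the recurrence numerically. The sketch below (with [tex]b[/tex] and [tex]c[/tex] arbitrarily set to 1, and a function name of my own) takes (1) with equality and watches the ratio [tex]T(n)/(n \ln n)[/tex], which should stay bounded if [tex]T(n) = O(n \log n)[/tex]:

```python
import math

def tabulate_T(n_max, b=1.0, c=1.0):
    # T(0) = T(1) = b; T(n) = c*n + (2/n) * sum_{i=0}^{n-1} T(i) for n >= 2.
    T = [b, b]
    prefix = T[0] + T[1]  # running value of sum_{i=0}^{n-1} T(i)
    for n in range(2, n_max + 1):
        T.append(c * n + 2.0 * prefix / n)
        prefix += T[-1]
    return T

T = tabulate_T(1000)
# Ratios T(n) / (n ln n); bounded (settling near 2c) if T(n) = O(n log n).
ratios = [T[n] / (n * math.log(n)) for n in range(2, 1001)]
```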
 

FAQ: Explaining Proof of Quick-Sort's O(n log n) Expected Running Time

What is the proof of Quick-Sort's O(n log n) expected running time?

The proof of Quick-Sort's O(n log n) expected running time is a mathematical analysis showing that the average time complexity of the Quick-Sort algorithm is O(n log n): as the size n of the input array grows, the expected time to sort it grows proportionally to n log n.

How does Quick-Sort achieve O(n log n) expected running time?

Quick-Sort achieves O(n log n) expected running time through a divide-and-conquer approach: it picks a pivot, partitions the input into smaller subarrays, and recursively sorts them. With a randomly chosen pivot the partitions are reasonably balanced in expectation, so the recursion has expected depth O(log n), and each level performs O(n) comparisons, giving O(n log n) overall.
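As a rough empirical illustration of that claim, one can count element-vs-pivot comparisons in a randomized version and compare the total against n log n (a sketch with illustrative names; the fact that the expected count is about 2 n ln n is a known result for randomized quicksort):

```python
import math
import random

def quicksort_count(s):
    # Returns (sorted sequence, comparison count), charging one
    # comparison per element at each partition step (len(s) per call).
    if len(s) <= 1:
        return s, 0
    a = random.choice(s)
    s1 = [x for x in s if x < a]
    s2 = [x for x in s if x == a]
    s3 = [x for x in s if x > a]
    left, c1 = quicksort_count(s1)
    right, c3 = quicksort_count(s3)
    return left + s2 + right, len(s) + c1 + c3

random.seed(0)
n = 2000
data = random.sample(range(10 * n), n)
out, comps = quicksort_count(data)
# comps is typically close to 2 * n * ln(n), i.e. Theta(n log n).
```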

What is the significance of O(n log n) expected running time for Quick-Sort?

O(n log n) expected running time is significant because it matches the Ω(n log n) lower bound for comparison-based sorting: no comparison sort can do asymptotically better. This makes Quick-Sort one of the most efficient sorting algorithms available, and it is widely used in practice.

Is Quick-Sort's O(n log n) expected running time always guaranteed?

No, Quick-Sort's O(n log n) expected running time is not always guaranteed. The worst-case time complexity of Quick-Sort is O(n^2), which occurs when the chosen pivot is repeatedly the smallest or largest element of the current subarray. With random pivot selection this is extremely unlikely, which is why the average-case O(n log n) bound is the relevant one in practice.
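To see the worst case concretely, here is a toy variant that always pivots on the first element (the function name and counting convention are illustrative): on an already-sorted input every partition is maximally unbalanced, and the comparison count is exactly n(n-1)/2.

```python
def quicksort_first_count(s):
    # Quicksort with a fixed first-element pivot, counting the
    # len(s) - 1 comparisons made against the pivot at each step.
    if len(s) <= 1:
        return s, 0
    a = s[0]
    s1 = [x for x in s[1:] if x < a]
    s3 = [x for x in s[1:] if x >= a]
    left, c1 = quicksort_first_count(s1)
    right, c3 = quicksort_first_count(s3)
    return left + [a] + right, (len(s) - 1) + c1 + c3

n = 300
out, comps_sorted = quicksort_first_count(list(range(n)))
# Sorted input: subproblem sizes shrink by only 1 per level, so the
# total is (n-1) + (n-2) + ... + 1 = n * (n - 1) / 2 comparisons.
```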

Are there any limitations to Quick-Sort's O(n log n) expected running time?

Yes. With a deterministic pivot rule (for example, always choosing the first element), an already-sorted or nearly-sorted input triggers the O(n^2) worst case; randomized pivot selection avoids this with high probability. In addition, the recursion requires extra stack space (O(log n) expected, O(n) in the worst case), which may be a limitation for very large inputs.
