Solving Complexity Issues: Merging Algorithms

In summary: the thread asks whether splitting an array into k consecutive subarrays, Heap Sorting each subarray, and then merging pairs of subarrays improves Merge Sort's asymptotic running time, and whether the resulting algorithm is stable and in-place.
  • #1
rover13
Homework Statement
Suppose you want to improve Merge Sort by first applying Heap Sort to a number of consecutive subarrays. Given an array A, your algorithm subdivides A into subarrays ##A_1, A_2, \dots, A_k##, where k is some power of 2, and applies Heap Sort on each subarray ##A_i## alone. The algorithm proceeds by merging pairs of consecutive subarrays until the array is sorted. For example, if k = 4, you first apply Heap Sort to sort each ##A_i##, then you merge ##A_1## with ##A_2## and ##A_3## with ##A_4##, and then you apply the merge function once more to get the sorted array. (a) Does the proposed algorithm improve the asymptotic running time of Merge Sort when k = 2? How about the case k = log n (or a power of 2 that is closest to log n)? Justify. (b) Is the proposed algorithm stable? Is it in-place? Prove your answers.
Relevant Equations
?
So I have this problem to solve. I was thinking that for k = 2 the threshold value is too low to make the algorithm more efficient, but I'm not sure about the case where k is log n or more. If 2 sorting algorithms have the same complexity, won’t merging them be pointless in terms of running time? For (b), I think the merged algorithm would be stable but not in-place.
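To make sure I'm reading the setup correctly, here is a rough Python sketch of how I understand the proposed algorithm (the names heap_sort, merge and hybrid_sort are mine, not from the problem):

```python
import heapq

def heap_sort(a):
    """Heap Sort: build a heap in O(m), then pop the minimum m times (O(m log m))."""
    heap = list(a)
    heapq.heapify(heap)
    for i in range(len(a)):
        a[i] = heapq.heappop(heap)

def merge(left, right):
    """Standard two-way merge of two sorted lists (the Merge Sort merge step)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal keys in their original order
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def hybrid_sort(a, k):
    """Split a into k consecutive subarrays, Heap Sort each, then merge pairs
    of consecutive runs until a single sorted array remains."""
    n = len(a)
    size = (n + k - 1) // k              # about n/k elements per subarray
    runs = [a[i:i + size] for i in range(0, n, size)]
    for run in runs:
        heap_sort(run)
    while len(runs) > 1:                 # log2(k) rounds of pairwise merging
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

(Here heapq is just a stand-in for a hand-written binary heap, and hybrid_sort returns a new list rather than sorting in place, so this sketch doesn't settle part (b) by itself.)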
 
  • #2
rover13 said:
Homework Statement: (a) ... Justify. (b) ... Prove your answers.
Your answers are just guesses, not the justifications and proofs that are asked for. You need to write things down like "let merge sort have asymptotic run time ## M(n) = c n \log n ## for some constant ##c##" and work from there.

rover13 said:
If 2 sorting algorithms have the same complexity, won’t merging them be pointless in terms of running time?
Not if you can improve ## c n \log n ## to, say, ## \frac12 c n \log n ##, which would be (asymptotically) twice as fast.
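For example, assume (these constants are mine, not part of the problem) that Heap Sort on ##m## elements costs about ##c_h m \log m## and that one merge pass over ##m## elements costs about ##c_m m##. Then the bookkeeping for the proposed algorithm can be organized roughly as
$$T(n) \approx \underbrace{k \cdot c_h \frac{n}{k}\log\frac{n}{k}}_{\text{Heap Sort on each of the } k \text{ pieces}} \;+\; \underbrace{c_m\, n \log_2 k}_{\log_2 k \text{ rounds of pairwise merging}} \;=\; c_h\, n\log\frac{n}{k} + c_m\, n\log_2 k.$$
Comparing that expression against ## c n \log n ## for plain Merge Sort, first with ##k = 2## and then with ##k \approx \log n##, is the kind of justification part (a) is asking for.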
 
  • #3
Analyze the complexity of heap sorting the n/k elements in each of the k buckets, and then things get icky. How does merge sort perform under those conditions? Does it have better asymptotic behavior?
 
  • #4
valenumr said:
Analyze the complexity of heap sorting the n/k elements in each of the k buckets, and then things get icky. How does merge sort perform under those conditions? Does it have better asymptotic behavior?
What exactly do you mean by "things get icky"? Some aspects of the problem, e.g. how best to determine the optimal input size on which to partition, introduce some intricacies, but isn't that part of why the problem is included in a CS course of study?
 
  • #5
sysprog said:
What exactly do you mean by "things get icky"? Some aspects of the problem, e.g. how best to determine the optimal input size on which to partition, introduce some intricacies, but isn't that part of why the problem is included in a CS course of study?
Gosh, it has been a while since I thought about this problem. I'll try to recollect, but I think the first step (heap sorting the k approximately equal-sized subarrays) is pretty easy to analyze. It gets messy when you think about how that helps merge sort. I think you can order the subarrays by the top node of each heap, pop and re-heap, order the subarrays again, and then wash, rinse, and repeat. It's an interesting problem, but I never managed to work through it on paper; it was messy. Since merge sort is not in-place, the approach seems feasible, but I never got a final answer for the bucket ratio that would give an improvement.
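If I'm remembering the idea right, it amounts to a k-way merge that keeps the current front element of each sorted run in one small heap. A rough Python sketch (the name kway_merge is mine, and heapq stands in for a hand-rolled heap):

```python
import heapq

def kway_merge(runs):
    """Merge k sorted lists by keeping each list's current front element in one heap.

    Every element passes through the heap exactly once, and each push/pop costs
    O(log k), so merging n elements spread over k runs costs O(n log k).
    """
    heap = []
    for run_id, run in enumerate(runs):
        if run:
            # store (value, run_id, index); run_id breaks ties so tuples always compare
            heapq.heappush(heap, (run[0], run_id, 0))
    out = []
    while heap:
        value, run_id, idx = heapq.heappop(heap)
        out.append(value)
        if idx + 1 < len(runs[run_id]):
            heapq.heappush(heap, (runs[run_id][idx + 1], run_id, idx + 1))
    return out

# e.g. kway_merge([[1, 4, 9], [2, 3], [0, 8]]) -> [0, 1, 2, 3, 4, 8, 9]
```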
 
  • #6
@valenumr, I'm pretty sure that I'm not out of line in supposing that in this context such terms as "icky" and "messy" are more than a little bit lacking in precision.
 
  • #7
sysprog said:
@valenumr, I'm pretty sure that I'm not out of line in supposing that in this context such terms as "icky" and "messy" are more than a little bit lacking in precision.
Fair point. I tried applying the master theorem to my approach to the problem, but I guess what I mean by such terms is that there was much more writing than I had expected...
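For what it's worth, if I set it up the standard way, plain Merge Sort is the textbook case 2 of the master theorem:
$$T(n) = 2\,T\!\left(\frac{n}{2}\right) + \Theta(n), \qquad a = 2,\; b = 2,\; f(n) = \Theta(n) = \Theta\!\left(n^{\log_b a}\right) \;\Longrightarrow\; T(n) = \Theta(n \log n).$$
The hybrid in this problem only changes where that recursion bottoms out (at subarrays of size n/k, which are handed to Heap Sort), which is probably why the write-up ends up being about constants and lower-order terms rather than a cleaner closed form.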
 
  • #8
sysprog said:
@valenumr, I'm pretty sure that I'm not out of line in supposing that in this context such terms as "icky" and "messy" are more than a little bit lacking in precision.
And to be fair, you could call my approach the opposite of "elegant"; I never got to anything more elegant than that.
 

FAQ: Solving Complexity Issues: Merging Algorithms

What is the purpose of merging algorithms?

Merging algorithms are used to combine two or more algorithms into a single, more efficient algorithm. This can help to solve complex problems that may not be easily solved by a single algorithm alone.

How do merging algorithms work?

Merging algorithms typically involve breaking down the problem into smaller, more manageable parts and then combining the solutions from each algorithm to create a final solution. This can involve using techniques such as divide and conquer or dynamic programming.
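As a concrete (if minimal) illustration of the divide-and-conquer pattern, here is a sketch in Python; the standard library's heapq.merge is used for the combine step:

```python
import heapq

def merge_sort(a):
    """Divide-and-conquer sketch: split the input, sort each half recursively,
    then combine the two sorted halves with a standard two-way merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    return list(heapq.merge(left, right))  # library routine that merges sorted iterables
```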

What are the benefits of using merging algorithms?

Merging algorithms can lead to improved efficiency, as the combined algorithm may be able to solve the problem in fewer steps or with less computational resources. It can also help to solve more complex problems that may not be solvable with a single algorithm.

Are there any drawbacks to using merging algorithms?

One potential drawback of merging algorithms is that they may be more complex and difficult to implement compared to a single algorithm. It may also be challenging to determine the best way to combine the different algorithms for optimal results.

Can merging algorithms be applied to any type of problem?

Merging algorithms can be applied to a wide range of problems, but they may not be suitable for every problem. It is important to carefully consider the problem at hand and determine if merging algorithms would be a beneficial approach.
