Efficient Merge Sort for External Storage Optimization

  • Thread starter rcgldr
  • Tags: Sorting
In summary: radix sort can be faster than quicksort, but comparison-based sorts can be faster than radix sorts when the data is sparse; the tradeoffs also differ between sorting in RAM and sorting on disk.
  • #1
rcgldr
Moderator note: This was split from https://www.physicsforums.com/showthread.php?p=1409789#post1409789

Perhaps this response should go into its own thread regarding sorting ...

a merge sort is the quickest
Better than quicksort?
yes.
comparison sort
I think you mean sorts that use very little storage overhead. A merge sort typically doubles the required storage space.
I'd imagine a radix sort ought to beat any comparison-based sort.
Radix sort is a type of merge sort. A good merge sort will handle variable size groups, which are created in the initial "scan" by forming groups out of any increasing or optionally decreasing sequence of elements. Groups with decreasing sequences are "reversed" back to increasing sequences when written to external storage, or during the first actual merge pass. Each merge pass merges pairs of groups (or triplets for a radix 3 sort, quadruplets for a radix 4 sort, ...), reducing the number of groups to 1/2 (or 1/3 or 1/4, ...) and increasing the size of the groups. The final pass occurs when the output will be a single group, usually written in the desired format to external storage if required.

A merge sort is ideal for external storage, since multiple elements can be read or written with a single I/O. With 4 external storage devices, a radix 2 merge sort involves all sequential I/O, and can be done on tape drives (as was done on early systems). 3 is the bare minimum for a tape sort, but it doubles the number of passes as the output tape has to be re-read to split it back up to the other two tape drives for each pass.

Optimizations for external storage include taking advantage of hardware that can handle scattered data with a single I/O operation, meaning that memory is randomly accessed during a single write operation (no point in doing this for reads). In spite of the obviousness of this, since this is how any bus mastering I/O operation with mapped memory has to be done (the logically sequential mapped segments are physically scattered), some Japanese company managed to get a patent on this, but I don't know if the patent is still valid.
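
As a rough illustration of the group-forming scan described above, here is a minimal in-memory sketch (integer records, no external-storage buffering; the function name make_groups is just for illustration):

Code:
#include <algorithm>
#include <cstddef>
#include <vector>

// Scan the input once, forming "groups" (runs) out of any increasing or
// decreasing sequence of elements; decreasing runs are reversed so every
// group ends up ascending.  Returns the length of each group, in order.
std::vector<std::size_t> make_groups(std::vector<int>& a)
{
    std::vector<std::size_t> group_sizes;
    std::size_t i = 0;
    while (i < a.size()) {
        std::size_t start = i++;
        if (i < a.size()) {
            if (a[i - 1] <= a[i]) {          // increasing run: extend it
                while (i < a.size() && a[i - 1] <= a[i]) ++i;
            } else {                         // decreasing run: extend, then reverse
                while (i < a.size() && a[i - 1] > a[i]) ++i;
                std::reverse(a.begin() + start, a.begin() + i);
            }
        }
        group_sizes.push_back(i - start);
    }
    return group_sizes;  // later merge passes halve the number of groups
}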
 
Last edited by a moderator:
  • #2
Hurkyl said:
Better than quicksort?

Anyways, I'd imagine a radix sort ought to beat any comparison-based sort.

But really, if you only have a few thousand integers to sort, and only need to do it once, then even a bubble sort should be good enough!

It's several million to sort, if I do it that way, and the quicksort works just fine. Radix sort can be worse than mergesort and the rest when the numbers are sparse, as they are here -- I wouldn't even try it on this dataset.
 
  • #3
Jeff and Hurkyl:

Merge sorts are pretty good, I'll grant that, but at the moment sorting is < 20% of the total runtime if I set up the main part of the program to use a binary search (sparse array, essentially) and 0% of the runtime if I use a bit array directly. A different thread to discuss this in general terms would be interesting, and I'd love to participate, but whatever benefit I would see from changing from quicksort on data that is almost optimal for quicksort to some kind of mergesort would be minimal, at best a 5% speedup, and could easily increase runtime instead.
 
  • #4
By comparison-based sort, I mean a sort routine that's based upon using < to determine the relative order of elements.

Radix sort doesn't use <; it uses the leading digits of an element to determine in which (approximate) quantile it belongs, and then sorts the quantiles with some method.

Radix (and quick) sort work in exactly the opposite fashion from merge sort; they split the data into (approximate) quantiles, then sort the quantiles by some method. Merge sort uses some method to sort segments of the data, and then merges the segments together.

Radix sort is asymptotically faster than any comparison based sort, assuming that you have data representable as "digits", and the leading ones are sufficiently close to uniformly distributed [1]. In the optimal setup, where you can use just one pass to split the data into base-case-sized quantiles, the running time is actually linear in the number of elements.



e.g. if you had RAM for a little over 2^30 records, but needed to sort 2^40 records on disk, you could perform one pass through the data, using the leading 10 bits to split the data into 2^30 segments (or more!), then sort each segment in RAM. The time requirement is

. Two passes through the data on disk
. 2^10 base case sorts
. One look at the leading digits of 2^40 elements

Whereas if you used a merge sort, and merged 2^10 streams together at once, the running time is
. Two passes through the data on disk
. 2^10 base case sorts
. 2^40 pushes and pops into a 2^10-long priority queue


Your choice of how to sort 2^30 records in RAM probably affects the overall running time -- I'm sure I've seen actual experimental results that demonstrate that quick sort requires about half as many compares as merge sort, and takes about half as much time... and I'm pretty sure that radix sort beats quick sort for uniformly distributed [1] data.


[1]: A slight variation to the radix sort is able to handle data with other known distributions.
 
  • #5
It's also worth remembering that O(any power of N) is only valid for large N.
For small N, a simple algorithm can be faster than an asymptotically faster one because the setup time is less - in real life N is usually small.
 
  • #6
Hurkyl said:
By comparison-based sort, I mean a sort routine that's based upon using < to determine the relative order of elements. Radix sort doesn't use <; it uses the leading digits of an element to determine in which (approximate) quantile it belongs, and then sorts the quantiles with some method.

e.g. if you had RAM for a little over 2^30 records, but needed to sort 2^40 records on disk, you could perform one pass through the data, using the leading 10 bits to split the data into 2^30 segments (or more!), then sort each segment in RAM. The time requirement is

. Two passes through the data on disk
. 2^10 base case sorts
. One look at the leading digits of 2^40 elements

Whereas if you used a merge sort, and merged 2^10 streams together at once, the running time is
. Two passes through the data on disk
. 2^10 base case sorts
. 2^40 pushes and pops into a 2^10-long priority queue
Ok, I was thinking of a radix X sort as a merge base X sort; it's an old terminology for merge sort that apparently is no longer used. A radix 2 sort used 4 storage devices, a radix 3 sort used 6 devices, a radix 4 sort used 8 devices.

I don't get how 10 bits of data leads to splitting the data up into 2^30 segments. I would assume that 10 bits would let you split up the data into 2^10 segments.

I'm not sure where the pushes and pops come from. With a merge sort all you need is a list of record counts, each count representing a group, an index into the list, and a total record count or group count to know when the end of the list is reached. A merge sort processes the list sequentially. It's the sequential nature of a merge sort that makes it so fast.

A merge sort can take advantage of the ability to read or write many records with a single access to a storage device. If there are 4 devices it's an entirely sequential I/O process.

update - A radix sort could also read or write many records with a single I/O, but somewhere during the initial or second pass, there will be more random accessing, to write all the "bins" during the initial pass, or if written sequentially, to read the scattered clusters of "bins" during the second pass.

For example, an Intel host bus adapter on a PC for SATA or PATA (IDE) hard drives can transfer 2MB of data with a single read or write command (that's 512 + 1 memory map segment descriptors, each segment being 4096 bytes long). If the record size is 128 bytes, then that's 16,384 records per read or write.

With the sort example you gave, the radix method will need 1024 "bins" that records have to be written to, based on the leading 10 bits. If the distribution is near random, then most of the time only a single record will be written per access.

update - Start off with 1024 bins in memory, each holds 16384 records, using 2MB, for a total of 2GB of memory. As each bin gets filled, it's written to one of 1024 files on the hard drive, or alternatively, all bins are written sequentially, and an array of bin numbers and counts is built, which will require random access across the disk to read the data for each "bin" on the second pass.

update - Assuming "bins" are written sequentially (should be faster), then each bin will require the disk to scan across the entire written file assuming even distribution, with random accesses skipping some of the data. That's 1024 passes over the written "bin" file to gather "bins". With the merge sort, it will take only 10 sequential passes of the data to sort it, so the merge sort will be faster.

Regarding comparison sorts, a quicksort takes up to C x N^2 operations, where C is some constant, and N is the number of records, while a merge sort takes D x N x log2 N operations (or logX where X is the number of groups merged at a time), where D is probably a larger constant than C, but for reasonably large N, the merge sort is going to be quicker.

For a memory sort, if knowledge about the data allows it to be quickly split up into "bins" like a radix sort, then that will be faster than a merge sort. However, with hard drives the penalty of access time makes the radix sort slower than a merge sort.
 
Last edited:
  • #7
example merge sort algorithm

128 byte records, 1GB of ram used for record data, and the remainder of ram used for program, variables, and buffering.

Initial pass: Create "free pool" link lists of 2MB buffers, one for input, two for output. Thread 1 gets a buffer from the input free pool, reads 16,384 records, and adds the buffer to thread 2's input queue. Thread 2 dequeues the input buffers, moves the data into the 1GB record buffer, returns the input buffers to the input free pool, and creates a list of pointers to each record, until 8,388,608 records are read, filling the 1GB record buffer. The list of pointers is then sorted. Thread 2 then gets buffers from the "even" output free pool, uses the sorted pointer list to fill the buffers with data, and alternately adds the buffers to thread 3's input queue; once the output group count is reached, it switches to thread 4's queue. If the environment supports scatter data writes, then just a list of pointers is sent for each output group and no data is moved. Thread 2 finally adds the group count of 8,388,608 records to the group count list. Thread 3 gets an output buffer, writes 16,384 records to the "even" output file, and returns the buffer to its free pool. Thread 4 gets an output buffer, writes 16,384 records to the "odd" output file, and returns the buffer to its free pool.

Merge pass: Create "free pool" link lists of 2MB buffers, two for input, one for output. Thread 1 gets a buffer from the "even input" pool, reads 16,384 records from the "even input" file, and adds the buffer to the even input queue list. Thread 2 gets a buffer from the "odd input" pool, reads 16,384 records from the "odd input" file, and adds the buffer to the odd input queue list. Thread 3 gets one even and one odd buffer from the input queues, merges groups using the list of group counts while doing the merge, creating a new list of group counts that is half the size of the input group count list, and sends buffers to thread 4. Thread 4 gets buffers and writes to the "even output" file until the output group count (double the input group count) is reached, and then it switches to writing to the "odd output" file. (The last group may be even or odd depending on the total number of records to be sorted.)

The final merge pass occurs when there are only two groups remaining. It's the same as a normal merge pass, except any record formatting is done by the output program, and the output goes to a single file.
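
For reference, the core of each merge pass, merging one pair of already-sorted groups, looks roughly like this in-memory sketch; the group lengths come from the list of group counts, and once one group is exhausted the remainder of the other is copied without further compares (the threaded, double-buffered file I/O described above is omitted):

Code:
#include <cstddef>

// Merge one pair of sorted groups (runs) from src arrays a and b into dst.
// len_a and len_b come from the list of group counts kept by the sort.
static void merge_pair(const int* a, std::size_t len_a,
                       const int* b, std::size_t len_b,
                       int* dst)
{
    std::size_t i = 0, j = 0, k = 0;
    while (i < len_a && j < len_b)
        dst[k++] = (b[j] < a[i]) ? b[j++] : a[i++];   // stable: ties come from a
    while (i < len_a) dst[k++] = a[i++];              // copy remainder, no compares
    while (j < len_b) dst[k++] = b[j++];
}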
 
Last edited:
  • #8
The runtime of quick sort is C N log2 N on almost any dataset -- the counterexamples are few and far between. You will essentially never see a large dataset where median-of-3 quick sort performs quadratically, unless you specifically crafted it to have that behavior. And you can get around this by randomly selecting the partition: then, the initial ordering of the dataset is entirely irrelevant, so you get an expected C N log2 N running time on everything.

If you are really worried about worst-case behavior, but want a deterministic comparison-based algorithm, then introsort is the winner: it normally uses quicksort, but if it detects pathological behavior, it switches over to one of the guaranteed N log N algorithms. (Merge sort, probably) In this way, it usually enjoys the extra speed of quick sort, but on bad data it very quickly switches over to the slower, but guaranteed, speed of merge sort.


Merge sort is not D N logX N time when merging X streams at once -- you forget that when merging X streams, you continually have to decide which of your X things is smallest before writing, and that's an extra factor of log2 X (at the very least), so the running time is still (assuming the D doesn't change)
D N logX N log2 X = D N log2 N

That's why I was talking about a priority queue -- if you're merging 1000 streams, you need a data structure to hold the 1000 elements under consideration, and each time you want to write a new element into the merged list, you have to determine which of those 1000 is smallest, remove it, and then insert its successor into the set of elements under consideration: i.e. you need a priority queue. A binary heap is the fastest structure I know for this task, and it requires something like log2 X time per operation.
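
As a rough sketch of that priority-queue merge (in-memory runs only; refilling the streams from disk is omitted, and the names are illustrative):

Code:
#include <cstddef>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Merge X sorted runs using a binary heap (std::priority_queue).
// Each pop/push costs roughly log2(X), so merging N elements from
// X streams does about N*log2(X) heap work.
std::vector<int> kway_merge(const std::vector<std::vector<int>>& runs)
{
    using Item = std::pair<int, std::size_t>;         // (value, run index)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;

    std::vector<std::size_t> pos(runs.size(), 0);     // next unread position per run
    for (std::size_t r = 0; r < runs.size(); ++r)
        if (!runs[r].empty())
            heap.push({runs[r][0], r});

    std::vector<int> out;
    while (!heap.empty()) {
        auto [value, r] = heap.top();                 // smallest of the X candidates
        heap.pop();
        out.push_back(value);
        if (++pos[r] < runs[r].size())                // insert its successor
            heap.push({runs[r][pos[r]], r});
    }
    return out;
}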



I guess that you can do a radix sort either way: e.g. start by sorting on the least significant digit instead of the most significant, but I haven't thought too much about it. It seems to me that starting with the most significant digit gives the clearest algorithm.



example radix sort

Same problem parameters.

Split passes
(For pass k, assume that the top 9(k-1) bits of each record are fixed, and the next 9 bits are roughly uniformly distributed)

Allocate 512 buckets of 2 megabytes each.
Allocate a few small overflow buffers.
Allocate an input buffer.

Thread 1 reads 16k records at a time, using the 9 bits to split the data into buckets. Whenever a bucket fills, thread 2 writes that data to the corresponding file on disk. (And thread 1's writes to that bucket are stored in an overflow buffer until the write is complete.)

Recursively perform split passes until you have 0.5 GB-sized files.

Now, each file fits into half of memory, so we iterate through them, sorting with our favorite out-of-place sorting method and appending them to our output file.
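
A rough sketch of one split pass, assuming fixed 8-byte keys and hypothetical bucket file names; the 2 MB per-bucket buffering, overflow buffers, and threading described above are left out:

Code:
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// One MSD split pass: route each record to one of 512 bucket files based
// on 9 bits of its key, starting at bit position 'shift' (55 for the top
// 9 bits of a 64-bit key).  Error checks and buffering are omitted.
void split_pass(std::FILE* in, int shift, const std::string& prefix)
{
    std::vector<std::FILE*> bucket(512);
    for (int b = 0; b < 512; ++b)
        bucket[b] = std::fopen((prefix + std::to_string(b)).c_str(), "wb");

    std::uint64_t key;
    while (std::fread(&key, sizeof key, 1, in) == 1) {
        int b = static_cast<int>((key >> shift) & 0x1FF);   // 9-bit bucket index
        std::fwrite(&key, sizeof key, 1, bucket[b]);
    }
    for (std::FILE* f : bucket)
        std::fclose(f);
    // Recurse on any bucket file still too large to fit in memory, then
    // sort each small bucket in RAM and append it to the output file.
}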
 
Last edited:
  • #9
Hurkyl said:
The runtime of quick sort is C N log2 N on almost any dataset
I corrected this to "up to N^2". I've benchmarked quick sorts versus a merge sort, and the merge sort is normally faster. Quicksort sorts pointers in place, while merge sort uses the equivalent of a "semi-sparse" array for the pointers: it uses double the memory for pointers, holding two lists, an input list and an output list, for each pass of the merge. If I have the time, I'll update my old 16 bit benchmark code and post links to it here.

I use the "sparse factor" to compare sort times. A quick sort has a sparse factor of 1, it sorts a list of pointers in place. A merge sort has a sparse factor of 2, it uses twice the space (for the list of pointers). A radix X sort has a sparse factor of X, it uses X times amount of space (for pointers). It's logical to assume that the higher the sparse factor, the faster the sort. So a radix 3 or higher sort should be faster than a merge sort which should be faster than a quicksort. A radix 2 sort should be the same as a merge sort, since it just replaces one merge sort pass with bin split pass.

Merge sort is not D N logX N time when merging X streams at once -- you forget that when merging X streams, you continually have to decide which of your X things is smallest before writing, and that's an extra factor of log2 X (at the very least), so the running time is still (assuming the D doesn't change)
D N logX N log2 X = D N log2 N
True for a memory sort, but when using multiple storage devices, what counts is the number of passes made on the data. 4 storage devices require log2 M passes, while 8 devices require log4 M, where M is the number of groups created during the initial pass. 8 devices will be twice as fast; in your example, 4 devices will require 10 passes, but 8 devices would only require 5 passes.

That's why I was talking about a priority queue -- if you're merging 1000 streams
But a merge sort just merges two streams, outputting a single stream alternating between two output files, for each pass. In your example the merge sort will take 10 passes, but it's all sequential I/O, which disk drives are good at. Read ahead and write behind caching on hard drives will handle multiple sequential I/O requests with just a single random access hit. Assuming uniform distribution, the radix sort will rarely find a significant number of 2MB clusters to be adjacent and in order when randomly accessing the clusters. The radix sort is dealing with well over 512 files, and the random access overhead is significant. With a single hard drive, for each merge pass, the merge sort reads 1/2GB clusters from two files, and writes 1GB clusters to two other files, with an overhead of 3 random access hits per 1GB of data merged. With 2 hard drives, a merge sort would have one input and one output file on each hard drive. The input files would be read in parallel, and output files would be written alternately, with an overhead of 2 random access hits per 1GB of data merged. With 4 hard drives, a merge sort is all sequential I/O, one random access hit per pass (in your example that's just 10 random access hits).
 
Last edited:
  • #10
Okay, I could believe that the random access could be slow -- I was trying to minimize the number of 2MB transfers. (Of course, I could always just split into the same number of buckets as things I can write at a time; I think that would be the same number of sequential disk accesses, but less CPU time)

I know I've compared the quick, merge, and heap sorts before for my CS classes, with quick sort being the winner by a factor of 2, and heap sort slightly ahead of merge sort. I don't remember the parameters though, nor did I know as much as I do now about memory optimization. (And, of course, this was about 9 years ago too)
 
Last edited:
  • #11
If I have the time, I'll update my old 16 bit benchmark code and post links to it here.
Link to zip of source, programs, and test source file:

http://jeffareid.net/misc/sort.zip

The test source file is just a text file with 262,144 records; each record contains 8 ASCII digits of "e" followed by CR/LF, which should give a random, uniform distribution.

To test the programs extract the zip to a temp directory, open a dos console window and enter the commands:

qsort src.txt dstq.txt

msort0 src.txt dstm.txt

On my system, the fastest times I see are 93.75 milliseconds for qsort and 62.5 milliseconds for msort0. Sometimes it takes an additional 10 ms; not sure what's causing this.
 
Last edited by a moderator:
  • #12
Well, I imagine the fact you used the C library quick sort is the reason your merge sort was faster: qsort must be entirely generic, which means the record size cannot be known at compile time, and the comparator must always be invoked through a function pointer. On the other hand, your merge sort is aware you are manipulating an array of pointers, and it inlines the invocation of CmprRecs.

If you want to use library routines rather than rolling your own, the C++ versions in the STL (sort and stable_sort, which in the GNU implementation invoke introsort and an iterative merge sort, I think) are templates, and can benefit from compiler optimization.
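
A small illustration of the point, using hypothetical 8-byte keys: qsort only sees void pointers, a runtime element size, and a function pointer, while std::sort is a template whose comparator can be inlined:

Code:
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

// C qsort: element size and comparator are runtime parameters, so every
// comparison goes through a function pointer.
static int cmp_u64(const void* a, const void* b)
{
    const std::uint64_t x = *static_cast<const std::uint64_t*>(a);
    const std::uint64_t y = *static_cast<const std::uint64_t*>(b);
    return (x < y) ? -1 : (x > y) ? 1 : 0;
}

void sort_both_ways(std::vector<std::uint64_t>& v)
{
    std::qsort(v.data(), v.size(), sizeof(std::uint64_t), cmp_u64);

    // C++ std::sort: the comparator is part of the template instantiation,
    // so the comparison below can be inlined and specialized for 8-byte
    // elements at compile time.
    std::sort(v.begin(), v.end(),
              [](std::uint64_t a, std::uint64_t b) { return a < b; });
}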
 
Last edited:
  • #13
Hurkyl said:
Well, I imagine the fact you used the C library quick sort is the reason your merge sort was faster: qsort must be entirely generic, which means the record size cannot be known at compile time, and the comparator must always be invoked through a function pointer. On the other hand, your merge sort is aware you are manipulating an array of pointers, and it inlines the invocation of CmprRecs.
I didn't realize the compiler would inline CmprRecs; I was trying to avoid it by using a pointer to function to make the comparison fair. You also have a point that qsort doesn't know at compile time that each "record" is a 4 byte pointer, so it has to do an extra level of indirection on every record access.

However, with my previous runs years ago in 16 bit mode, I compared an assembler based heap sort with merge sort, and the merge was still quicker. I'll have to see if I can get permission to post the source for the heap sort.

I updated http://jeffareid.net/misc/sort.zip , using the user process time, but the resolution is still based on a 64hz ticker.
 
Last edited by a moderator:
  • #14
Jeff Reid said:
I didn't realize the compiler would inline CmprRecs, was trying to avoid it by using a pointer to function to make the comparason fair.
It's possible you've managed to, but I wouldn't be sure unless I actually did some testing. (Or inspected the assembly) e.g. call the function directly and see if it accelerates the mergesort.

Incidentally, the C library qsort function isn't required to actually be a quick sort. While doing Google searches for my posts, I discovered that some implementations use a merge sort instead! (Or, maybe just for some cases)
 
  • #15
I added a counter to the CmprRecs function, to eliminate the issue of the generic overhead in the qsort. Msort0 has fewer compares than qsort. One reason for this is that when a pair of groups is merged, and the end of one group is reached, the remainder of the other group is copied without any compares.
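
For reference, a counting comparator can look roughly like this (a sketch; the actual CmprRecs signature and record layout in the benchmark may differ):

Code:
#include <cstring>

// Global compare counter, incremented inside the comparator so every sort
// routine under test is charged once per record comparison.
static unsigned long g_compares = 0;

// Hypothetical comparator for fixed 10-byte text records accessed through
// pointers; the real comparator might compare only the 8 digit bytes.
static int CmprRecs(const void* a, const void* b)
{
    ++g_compares;
    return std::memcmp(*static_cast<const char* const*>(a),
                       *static_cast<const char* const*>(b), 10);
}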

Regarding the Visual Studio 2005 implementation of qsort, I can't be sure of what it's doing, except I'm pretty sure that it's a sort in place (that it doesn't allocate memory to create a second list of element pointers required for a merge sort).

I'm using GetProcessTimes(), actual user time of the process, it's based on a 64hz == 15.625ms ticker.

msort0 uses fixed sized groups, oriented for random data files
msort1 uses variable sized groups, oriented for files with some sequential (ascending or descending) data.
qsort is microsoft library qsort
hsort is a heap sort.

Link to zip of source, executables, test files and result.txt
http://jeffareid.net/misc/sort.zip

result.txt - ticks are multiples of .1 microseconds, eg 781250 = 78.1250 milliseconds.
The last 2 rows show how the algorithms handle a file that is already sorted; msort1 will be the fastest here.

Code:
             msort0   msort1    qsort    hsort

# ticks      625000   781250   937500  1562500   262144 10 byte records (e)
# compares  4386555  4514936  5583563  8627661

# ticks     4218750  4375000  6093750 14843750  1048576 10 byte records (pi)
# compares 19644759 20157907 24587372 38705534

# ticks     1250000   156250  1718750  4843750  1048576 10 byte records (srt)
# compares 10485760  1048575 20701938 39648064
 
Last edited by a moderator:
  • #16
Updated previous post to include two more sorts and another file.
 
  • #17
It will take a bit to do the comparison properly, but I hacked something together quickly. I generated 2^20 random __int64's (MSVC's 64-bit integer type) with rand, then I do the following:

Copy the data into a scratch buffer.
Invoke a library routine to sort.

Actually, I do this 100 times, to allow for more accurate timing. I used C++'s sort and stable_sort, and I used C's qsort. The results are:

24308746 comparisons 0.16 seconds for sort
24805310 comparisons 0.28 seconds for stable_sort
20360941 comparisons 0.32 seconds for qsort


The implementations are those provided in the MSVC 2005 express download, sort is implemented as an introsort (quicksort, but switch to heapsort if the recursion is bad), and stable_sort is implemented as a merge sort.

My comparator was a function that simply invokes the operator<. (The times were 0.20, 0.28, 0.35 when I was counting the number of compares)
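
For reference, the shape of that timing loop, as a rough sketch: generate the data once, then copy it into a scratch buffer and sort the copy on every trial, so each run sees the same input (the <chrono> timer here is just a stand-in for whatever timing call was actually used):

Code:
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main()
{
    const std::size_t n = 1 << 20;                 // 2^20 records
    std::vector<std::int64_t> data(n), scratch(n);
    for (auto& x : data)                           // limited-entropy fill, as with rand()
        x = std::rand();

    auto t0 = std::chrono::steady_clock::now();
    for (int trial = 0; trial < 100; ++trial) {    // 100 repetitions for usable timing
        scratch = data;                            // copy into the scratch buffer
        std::sort(scratch.begin(), scratch.end()); // the routine under test
    }
    auto t1 = std::chrono::steady_clock::now();
    std::printf("%.3f seconds total\n",
                std::chrono::duration<double>(t1 - t0).count());
    return 0;
}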



If the data is already sorted, my results are

15933385 comparisons 0.06 seconds for sort
09895936 comparisons 0.14 seconds for stable_sort
15735219 comparisons 0.12 seconds for qsort
 
Last edited:
  • #18
update - I see you updated your results, so I'm updating this post.

2^20 = 1048576 10 byte records (each record has 8 digits of pi, cr, lf)

24587372 compares with VS2005 qsort vs 20360941 for your qsort / data.
20157907 compares with merge sort 1.
19644759 compares with merge sort 0.

sorted file:

20701938 compares with VS2005 qsort vs 15735219 for your qsort / data.
10485760 compares with merge sort 0.
1048575 compares with merge sort 1 (minimum).
 
Last edited:
  • #19
My cut-paste missed where I was resetting my counter. I'm updating my results; it's curious that MSVC's qsort uses fewer compares than its other two sorting routines! It makes sense, I suppose; because it's forced to call the comparator through a function pointer, it's worth doing some extra work to reduce the number of comparisons.
 
  • #20
Hurkyl said:
I generated 2^20 random __int64's (MSVC's 64-bit integer type) with rand.
I looked up rand(), and it's limited to a max value of 32767. How did you expand this to 64 bits?
 
  • #21
Really? I thought it gave numbers up to 2^31; that's not 64 bits, but I would expect that to be enough entropy for useful results. I'm spoiled by having lrand48 available; I almost never use rand. :frown:

Okay, I've redone it with 64 bits of entropy. (I grab bits 4-11 of rand() 8 times, for 64 bits in all):

33318225 comps, 0.23 secs, sort
24804933 comps, 0.28 secs, stable_sort
24660782 comps, 0.43 secs, qsort

So more comparisons for sort, but still better performance on this data. Eventually I'll get around to handcrafting the sorts, so that I know exactly what's going on with each one I'm toying with.
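
For reference, the bit assembly described above as a small sketch: take bits 4..11 of each of 8 calls to rand() and pack them into one 64-bit value:

Code:
#include <cstdint>
#include <cstdlib>

// Build one 64-bit value from eight calls to rand(), using 8 bits
// (bits 4..11) of each call.
std::uint64_t rand64()
{
    std::uint64_t value = 0;
    for (int i = 0; i < 8; ++i)
        value = (value << 8) |
                ((static_cast<std::uint64_t>(std::rand()) >> 4) & 0xFF);
    return value;
}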
 
Last edited:
  • #22
rand_s() will return values up to 0xFFFFFFFF, but the seed doesn't appear to be settable (it's random based on the description).

Converted programs to sort __int64 values, without using pointers, to end up similar to the C++ sort() vector environment. Source code is easier to follow without the pointer stuff. Still using ((rand()>>4) & 0xff) for index = 0 to 7 to create 8 byte records.

http://jeffareid.net/misc/sort5.zip

update - added sort and stable_sort to the test. Used more accurate timer.

33143625 compares, .115 sec sort
19645509 compares, .122 sec merge sort.
24811454 compares, .123 sec stable_sort
33143625 compares, .259 sec sort with ptr to function for compare
24543137 compares, .279 sec quick sort (microsoft qsort()).
24811454 compares, .280 sec stable_sort with ptr to function for compare

Anyway, I was surprised that sort() was a bit faster and stable_sort() about the same as msort(), at least with this test. So maybe merge is just faster for large hard drive based sorts. msort1() will be the fastest if there's a significant amount of sequential (ascending or descending) data in a list to be sorted.
 
Last edited by a moderator:
  • #23
Updated text file sort results with more accurate timer.

Text file sorts and test files.

http://jeffareid.net/misc/sort.zip

msort0 - merge sort, fixed sized groups.
msort1 - merge sort, variable sized groups.
qsort - microsoft qsort()
hsort - heap sort

compares is number of record compares.
ms is number of milliseconds it took to do the sort.

Code:
compares     ms   program    262144 records (8 digits  e, cr, lf)

 4386555     71   msort0
 4514936     73   msort1
 5583563     96   qsort
 8627661    153   hsort

compares     ms   program   1048576 records (8 digits pi, cr, lf)

19644759    400   msort0
20157907    410   msort1
24587372    569   qsort
38705534   1385   hsort

compares     ms   program   1048576 records (8 digits sorted, cr, lf)

 1048575     13   msort1
10485760    131   msort0
20701938    207   qsort
39648064    457   hsort
 
Last edited by a moderator:
  • #24
I think the key word in there is "Microsoft qsort". MS are notorious for re-inventing things for business reasons. Visual C++ 6.0 was nowhere near ANSI compliant for example. You may not be testing what you think you are testing. IMO.

Their versions of a lot of library routines are not as advertised. For a fair test, IMO, go here and use this function:

http://en.literateprograms.org/Quicksort_%28C%29
and download quicksort. You seem to be in C++, here is a C++-ified version:
http://en.literateprograms.org/Quicksort_%28C_Plus_Plus%29
 
  • #25
jim mcnamara said:
I think the key word in there is "Microsoft qsort".
Also the Microsoft C++ sort() and stable_sort() functions.
C++
Most of my stuff is in C. I only used C++ because it's the only way to get access to the sort() and stable_sort() functions since they're tied into the "vector" template in C++. Nice to see that the audience for this thread just increased by 50% with a 3rd person.
 
Last edited:
  • #26
Hurkyl -

Why are function pointers inefficient - other than that the function cannot be inlined?
I"m guessing your definition of efficient does not include generalizable...

Jeff - I looked but did not see a quicksort routine. Could you show me where yours is, please? I'm too dumb to find it.

Plus, you might want to try your test suite of goodies on a data set that is very nearly perfectly ordered, rather than in rand() or PRNG order. Hurkyl had a very good point about certain unusual conditions affecting some sort algorithms.
 
  • #27
jim mcnamara said:
Hurkyl -

Why are function pointers inefficient - other than that the function cannot be inlined?
Making comparisons is (usually) a significant portion of the runtime of a sorting routine. If the comparison is a very cheap operation, then calling it through a function pointer is several times slower than calling it directly.

Therefore, in such a circumstance, the function pointer is time inefficient.

I"m guessing your definition of efficient does not include generalizable...
Huh? Where did that come from?
 
  • #28
I updated http://jeffareid.net/misc/sort5.zip to include the en.literateprograms.org quicksort() program. I also corrected a problem where I had the compare function for sort() and stable_sort() reversed, which caused a slight change in the numbers, but no change in order in the list below. Msort5, with the fewest # of compares, was least affected by using a ptr to function for compare. quicksort() would be affected the most, but I didn't make a non ptr to function version of it.

Code:
compares     ms   program   1048576 8 bytes records (rand())

33315450    115   Microsoft sort(), SORT5.CPP
19645509    122   Jeff Reid msort5(), MSORT5.C
24806742    125   Microsoft stable_sort(), SORT5.CPP
19645509    168   Jeff Reid msort5(), function ptr, MSORT5.C
24806742    260   Microsoft stable_sort(), function ptr, SORT5.CPP
24543137    280   Microsoft qsort(), function ptr, QSORT5.C
33315450    281   Microsoft sort(), function ptr, SORT5.CPP
36094115    340   en.literateprograms.org quicksort(),function ptr, QSORT6.C

jim mcnamara said:
Why are function pointers inefficient - other than that the function cannot be inlined?
It's making a significant difference in run time; for example, from .125 sec to .260 sec in the case of Microsoft's C++ vector stable_sort(), but who knows what internal optimizations are done when sort() can use its own internally defined compare. My guess is that the calls mess up the pipeline, which is deep on a modern Pentium (but they shouldn't affect the cache).
Jeff - I looked but did not see a quicksort routine.

It's qsort.c in this zip file (using Microsoft's qsort()).
http://jeffareid.net/misc/sort.zip

It's qsort5.c in this zip file (using Microsoft's qsort()), and qsort6.c (using en.literateprograms.org quicksort()).
http://jeffareid.net/misc/sort5.zip

You might want to try your test suite of goodies on a data set that is very nearly perfectly ordered.
I already did this, with a perfectly ordered file (the pi file already sorted). Msort1() looks for any sequential sequences (ascending or descending), and its time dropped from .410 sec to .013 sec, with a bare minimum of compares (N-1). All the sorts ran faster on a pre-sorted file, but none of them had the gain of Msort1. In its process of looking for sequential sequences, msort1() places all records into a single group during "MakeGroups", so there is nothing to do in "MergeGroups".

A bit of personal history. A scan of the first page of a merge sort program I wrote in Fortran IV for an HP 2100 mini computer system, back in 1974: ssrt3.jpg. Note the reference to "core" as opposed to ram (this was 1974). ... and yes, I am a bit of a pack rat, but selective in what I save.
 
Last edited by a moderator:
  • #29
I made 64 bit versions of msort0 and qsort:

http://jeffareid.net/misc/sort64.zip

Relative timings are the same as before: msort() falls between sort() and stable_sort() when no pointer to a compare function is used, and is faster than both when a pointer to a compare function is used. Since msort() is using fixed sized groups, I eliminated the array of group counts in MSORT64.C. The array of group counts is only needed for variable sized groups, which is what msort1() from http://jeffareid.net/misc/sort.zip used.


Code:
compares     ms   program   1048576 8 bytes records (rand())

33315450     95   Microsoft sort(), SORT64.CPP
19645509    108   Jeff Reid msort64(), MSORT64.C
24806742    110   Microsoft stable_sort(), SORT64.CPP
19645509    133   Jeff Reid msort64(), function ptr, MSORT64.C
33315450    172   Microsoft sort(), function ptr, SORT64.CPP
24806742    184   Microsoft stable_sort(), function ptr, SORT64.CPP
 
Last edited by a moderator:
  • #30
Apparently no one cares anymore about this sorting thread. I should have used a few "goto"'s in my code and mentioned it, and this thread would have doubled in size.
 
  • #31
Hehe.

I still care, but I've had lots to do since I got back home, so I haven't had time to really work on this.

It doesn't help that my desktop doesn't like visual C++ express. :frown:
 
  • #32
What's the issue you're having?

I created 2 more programs that should compile with Studio Express (there's no windows.h in express, so I avoided any functions based on this). The source files will build in 32 bit or 64 bit mode with no changes (or even conditionals).

http://jeffareid.net/misc/sortcpp.zip

As mentioned before, the Microsoft routines are slow when using pointers to functions, and I'm not sure why. The sort routines are not libraries, but are actual code in <algorithm> that get compiled along with source code that includes it, so things like size of elements are optimized at compile time. Since it's just included source code, then the pointer to function versions (which are actually separate pieces of source code) should benefit from any compiler optimization that user generated code does.

The overhead of the pointer to function routines would mean that the Microsoft sorts would be a poor choice if what is being sorted is a list of pointers to records, which would be the normal method if record size was significant. So it appears that the Microsoft sort routines are only good for sorting arrays of reasonably small elements.
 
Last edited by a moderator:
  • #33
What optimizers do and don't do can be pretty weird. Last time I was playing with the speed of sort routines, I was playing with comparators, and I discovered that gcc wouldn't inline a nearly trivial function object unless I explicitly declared it inline. :frown: This was frustrating, because the STL class less<T> and the like did not declare their operator() method as inline, and thus were performing poorly. :frown:
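
For reference, the kind of comparator in question, as a minimal sketch (a function object defined in the class body is already implicitly inline for linkage purposes; the explicit keyword is the hint that was reportedly needed here to get the call actually inlined):

Code:
// A comparator written as a function object; with the definition visible
// and marked inline, the compiler has every chance to inline the
// comparison inside std::sort.
struct LessInt64 {
    inline bool operator()(long long a, long long b) const { return a < b; }
};

// usage: std::sort(v.begin(), v.end(), LessInt64());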
 
  • #34
Updated the code to use fopen and fwrite, and to have the CMPFUNC option again; it works with Studio Express (will have to test 64 bit mode later). update - confirmed the same source works just fine in 64 bit mode.

sortv.zip

In 32 bit mode, sort() and msort() take about the same time (on 4 million __int64's). In 64 bit mode sort() is about 14% faster. If a ptr to function for compare is used, the Microsoft routines slow down quite a bit; for one thing the ptr to function is passed as a parameter, so nested calls add to the overhead, and there are about 60% more compares with sort() versus msort(). If data size is reasonably large and pointers to the data are sorted instead of the data (in memory; this is a terrible way to do this on a disk), then the penalty of the ptr to function makes both Microsoft routines much slower.

As mentioned before, if the data isn't truly random and has a significant amount of pre-ordered data, then the msort1() from http://jeffareid.net/misc/sort.zip will be the fastest sort.
 
Last edited by a moderator:
  • #35
I redid the text file sort routines, so that the Microsoft vector routines could be used. The sequence is to read in a text file, convert records terminated by CR LF into p-strings, generate a set of pointers to each record, sort via the pointers, then output a file using the sorted pointers.

In this case, Microsoft's quicksort routine, sort() was 50% slower than their merge sort routine, stable_sort(), which was about 15% slower than my merge sort. A pointer to function is required since the routine compares records given 2 pointers.

Included in this zip is sortp.cpp (Microsoft sort or stable_sort), msort.cpp (my sort), and a test source file, src.txt (1,000,000 records, 8 bytes of pi digits, CR, LF). This time I enabled the variable group size define, which speeds up the merge sort if there's any significant ordering of data. With the random test file, the time is about the same as with the variable group size define disabled, with about 2% more comparisons, but with some groups created larger than 2, and none smaller than 2 (except for the last record in a file with an odd number of records).
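
For reference, a rough sketch of the overall pointer-based text sort (simplified: no p-string conversion, no buffered I/O, std::sort standing in for the routine under test, and the 10-byte record layout taken from the test file described above):

Code:
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 3) return 1;

    // Read the whole text file; records are fixed 10-byte lines (8 digits + CR + LF).
    std::FILE* in = std::fopen(argv[1], "rb");
    if (!in) return 1;
    std::fseek(in, 0, SEEK_END);
    long size = std::ftell(in);
    std::fseek(in, 0, SEEK_SET);
    std::vector<char> buf(static_cast<std::size_t>(size));
    if (std::fread(buf.data(), 1, buf.size(), in) != buf.size()) return 1;
    std::fclose(in);

    // Build a pointer to each record, then sort the pointers, not the records.
    const std::size_t reclen = 10;
    std::vector<const char*> recs;
    for (std::size_t off = 0; off + reclen <= buf.size(); off += reclen)
        recs.push_back(buf.data() + off);
    std::sort(recs.begin(), recs.end(),
              [](const char* a, const char* b) { return std::memcmp(a, b, 8) < 0; });

    // Write the records back out in sorted order via the pointer list.
    std::FILE* out = std::fopen(argv[2], "wb");
    if (!out) return 1;
    for (const char* r : recs)
        std::fwrite(r, 1, reclen, out);
    std::fclose(out);
    return 0;
}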

sortp.zip

The challenge now is to see if someone can make a quicksort based algorithm run faster than my merge sort for this case of a text file sort via pointers to records.
 
Last edited: