The question(s) here can be asked about many problems (not just sorting).
Let's consider the standard case of sorting an array of size ##n## containing numbers. A naive algorithm would take time proportional to ##n^2##. For example, the naive algorithm could proceed with two loops: in the outer loop we select each element of the array in turn, and in the inner loop we assign a "rank" to that element via a counter (the rank corresponds exactly to where we want to place that element in the sorted array). If we adopt the convention of starting the array at index ##1##, then the smallest element of the array gets rank ##1## (its counter is initialised to ##1## and never incremented), while the largest element gets rank ##n## (its counter is initialised to ##1## and then incremented once for each of the other ##n-1## elements, since they are all smaller).
For example, here is the pseudo-code for doing that:
Code:
for (i := 1 to n) {
    rank := 1                      // rank of A[i] = 1 + (number of elements smaller than A[i])
    for (j := 1 to n) {
        if (A[i] > A[j])
            rank := rank + 1
    }
    B[rank] := A[i]                // A[i] goes directly to its position in the sorted array
}
Just a couple of points. It is assumed that the array ##B## has been initialised appropriately (according to the conventions of the specific programming language) and is of size ##n##, and of course the array ##A## is given as input. Further, I have assumed all the numbers in ##A## to be distinct. It is not difficult to see that the above segment can be modified for the more general case (where repetition is allowed), but I just wanted to keep it simple.
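For concreteness, here is one possible way of handling repeated values: a minimal sketch in C (using 0-based indexing, so ranks run from ##0## to ##n-1##; the function name rank_sort_dup is just an illustrative choice). Each element's rank counts the strictly smaller elements plus any equal elements that appear earlier in the array, so equal elements keep their relative order and every element still gets a distinct rank.
C:
#include <stdio.h>

/* Rank-based sort that allows repeated values (0-based indexing).
   rank of A[i] = (number of elements strictly smaller than A[i])
                + (number of elements equal to A[i] that appear before index i),
   so every element receives a distinct rank in 0..n-1. */
void rank_sort_dup(const int A[], int B[], int n)
{
    for (int i = 0; i < n; i++) {
        int rank = 0;
        for (int j = 0; j < n; j++) {
            if (A[j] < A[i] || (A[j] == A[i] && j < i))
                rank = rank + 1;
        }
        B[rank] = A[i];   /* place A[i] at its final sorted position */
    }
}

int main(void)
{
    int A[] = {3, 1, 3, 2};
    int B[4];
    rank_sort_dup(A, B, 4);
    for (int i = 0; i < 4; i++)
        printf("%d ", B[i]);   /* prints: 1 2 3 3 */
    printf("\n");
    return 0;
}
The running time is of course still proportional to ##n^2##; the change only removes the distinctness assumption.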
Now here are the questions:

(1) A lot of the time, when we are given a typical problem (at least among the ones that seem to be of more interest in CS), there seems to be a clear worst-case time associated with the naive or exhaustive algorithm. For example, in the case of sorting it seems to be ##n^2##. A time of ##n^2## is bad, but it does seem like one would really have to make a deliberate effort to do any worse.
The question is: can notions such as "naive algorithm" (or "exhausting all possibilities" for a given problem) be made rigorous in reasonably general terms (that is, at least for some classes of problems)?
(2) Can we prove that sorting can never be done in time proportional to ##n##? Can we prove that it can never be done better than ##n \cdot f(n)##, where ##f## is presumably some function that grows very slowly?
From the little I know, proving these kinds of bounds is generally quite difficult. But since these are very well-studied problems, perhaps more is known?