#1
K29
Twice I have had to measure how an algorithm's running time depends on input size. The first time I was working in MATLAB (which is based on C), timing matrix operations on the identity matrix in standard storage versus sparse storage as I grew the matrix size.
More recently I have been measuring the worst-case running time of bisection (binary) search and linear search for increasing array input sizes.
In all the above experiments I get the expected results: for linear search the running time increases linearly with array size, and for bisection search the graph looks like log(n). For matrix multiplication under standard storage the operation time grew very rapidly (dense multiplication is O(n^3), so polynomially rather than truly exponentially), and under sparse storage something like log(n) was observed.
HOWEVER, in all the above experiments the input size was increased automatically with a loop, and the program was left for a few hours to generate all the data. For example, for the linear search the loop grew the input by 50,000 elements each iteration and always searched for the element in the last position.
The first call of the searching function/matrix operation always takes much longer than the rest. Only thereafter do I observe the expected trend. In the case of linear search it is always the first two calls of the function.
In the case of the MATLAB matrix operations, the code was most definitely correct; the test simply consisted of timing A*B between
Code:
tic
Code:
toc
In the case of the search algorithms I have verified that I coded them correctly. To time them I am using
Code:
System.nanoTime()
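For concreteness, the timing harness presumably looks something like the following minimal Java sketch. The class, method, and variable names are my own illustration, not the original code; the 50,000-element step and the worst-case target in the last position are as described above.

```java
import java.util.ArrayList;
import java.util.List;

public class TimingSketch {
    // Linear scan using only get(), which is O(1) on an ArrayList.
    static int linearSearch(List<Integer> list, int target) {
        for (int i = 0; i < list.size(); i++) {
            if (list.get(i) == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // Grow the input by 50,000 elements each iteration.
        for (int n = 50_000; n <= 250_000; n += 50_000) {
            List<Integer> data = new ArrayList<>(n);
            for (int i = 0; i < n; i++) data.add(i);
            int target = n - 1;                 // worst case: element in the last position
            long start = System.nanoTime();
            int index = linearSearch(data, target);
            long elapsedNs = System.nanoTime() - start;
            System.out.println(n + "\t" + elapsedNs + "\t(found at " + index + ")");
        }
    }
}
```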
Why do the first one or two runs take so long, with the expected trend only appearing thereafter?
I am not running any other software at the same time, nor doing anything other than letting the algorithm run. I do not have a screensaver; Ubuntu just switches the screen off after a while. My machine is an Intel Celeron at 2.13 GHz with 2 GB of DDR2 RAM.
For the Java bisection search and linear search I am using an ArrayList data structure. Note that the get() operation on an ArrayList is O(1), and it is the only operation I perform on the list during the linear search.
Also note that the input sizes seem to be sufficiently large: it takes a good 15 minutes to sort a list of this size into ascending order before running and timing the bisection search.
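For reference, a minimal sketch of what the sort-then-bisect setup might look like, assuming Collections.sort and a standard iterative binary search (this is my illustration, not the original code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BisectionSketch {
    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 100_000; i > 0; i--) data.add(i);      // unsorted input
        Collections.sort(data);                             // sort ascending first (the slow step)
        System.out.println(bisectionSearch(data, 100_000)); // prints 99999 (largest element)
    }

    // Classic iterative binary search; O(log n) comparisons on a sorted list.
    static int bisectionSearch(List<Integer> list, int target) {
        int lo = 0, hi = list.size() - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // unsigned shift avoids overflow for large indices
            int value = list.get(mid);
            if (value == target) return mid;
            if (value < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }
}
```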
I have attached an image of the linear search results. You can see that after the first two loop iterations the timings start following the O(n) trend (worst-case scenario).
Attachments: [image of linear search timing results]