Algorithm analysis problem turned into a math problem

In summary: if the operation counts follow a polynomial in n, you can find its degree by taking successive differences of the terms. The k-th differences become constant exactly when the polynomial has degree k, and that degree is all you need for the big-O bound.
  • #1
r0bHadz
Homework Statement
I was analyzing an algorithm for my data structures/algorithms class and came upon the following numbers: for n = 1, 2, 3, 4, 5, 6, ... the counts are 1, 6, 18, 40, 75, 126, ... respectively.
Relevant Equations
##(n^3+n^2)/2##
The equation listed is the closed form. I arrived at it in a roundabout way.

$$\begin{array}{|c|c|c|}
\hline n & \text{count} & \text{factored} \\
\hline 1 & 1 & 1\cdot 1 \\
\hline 2 & 6 & 2\cdot 3 \\
\hline 3 & 18 & 3\cdot 6 \\
\hline 4 & 40 & 4\cdot 10 \\
\hline 5 & 75 & 5\cdot 15 \\
\hline 6 & 126 & 6\cdot 21 \\
\hline
\end{array}$$

Focusing on the third column, I can see that as we go down the rows, the left factor of each product is n, and the right factor can also be expressed in closed form. So I have n times something.

I looked at row 2, column 3 and thought: hmm, 3 = (2·3)/2, 6 = (3·4)/2, 10 = (4·5)/2, and I noticed the pattern ##n(n+1)/2##. Multiplying that by the n gives ##(n^3+n^2)/2##, so the answer is that the algorithm is ##O(n^3)##.
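A quick sanity check in Python (just a sketch; it assumes the six counts above are exact) confirms that closed form reproduces every value:

```python
# Check the guessed closed form (n^3 + n^2) / 2 against the observed counts.
counts = [1, 6, 18, 40, 75, 126]          # observed values for n = 1..6

for n, observed in enumerate(counts, start=1):
    predicted = (n**3 + n**2) // 2        # same as n * n(n+1)/2
    assert predicted == observed, (n, predicted, observed)

print("closed form matches all six counts")
```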

The thing is, sure, I got the right answer, but I hate this process. You basically have to get lucky for your brain to spot the pattern. There isn't a sure way to do it, and if you can't find the pattern under pressure on a test, you're stuck. There is no step 1, step 2 recipe that gets the exact answer for an arbitrary pattern, the way there is for, say, converting ##ax^2 + bx + c## into vertex form ##y = a(x-h)^2 + k##.

My question is: is there a cookie-cutter method for finding any pattern? Is there a more efficient way to do these kinds of problems? Not going to lie, they are kind of fun to do, but at the same time they are exhausting because there isn't a cookie-cutter way of just getting the answer.
 
  • #2
The thing is, for big-O notation you don't need the exact functional form. What you're trying to do is bound it. It helps to know how some functions grow. For instance log (to any base) grows slower than linear, so if you have log terms and linear terms, the linear terms dominate and it's ##O(n)##.

Larger exponents grow faster than smaller exponents. If you can tell that one part of an algorithm grows proportional to ##n^2## and another proportional to ##n^3## it doesn't matter what the coefficients are. The ##n^3## term dominates and this algorithm is ##O(n^3)##.

In real life, if I had that set of run times, I'd plot them. It's obviously growing faster than linear, so if I suspected a power law ##T = a n^b##, i.e. ##\log T = \log a + b \log n##, I'd plot ##\log T## vs ##\log n## and see what slope ##b## the points approach for larger ##n## (they should settle toward a line in the log-log plot). I don't care what ##a## is, and I don't care if there are lower-order terms.
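Here's a rough Python sketch of that check, treating the counts from post #1 as if they were measured run times; the local log-log slope climbs toward 3:

```python
import math

n_values = [1, 2, 3, 4, 5, 6]
times    = [1, 6, 18, 40, 75, 126]        # pretend these are run times T(n)

# Slope between consecutive points in log-log space estimates b in T = a * n^b.
for (n1, t1), (n2, t2) in zip(zip(n_values, times), zip(n_values[1:], times[1:])):
    b = (math.log(t2) - math.log(t1)) / (math.log(n2) - math.log(n1))
    print(f"n = {n1}..{n2}: local exponent estimate b = {b:.2f}")
```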

In a test situation I'd be surprised if you were given a problem like that. I'd think you would be more likely to analyze the growth on paper. If you can tell an operation is run ##n^2## times from the code, then you don't care what the runtime of one operation is. The ##n^2## is the complexity.
 
  • #3
Having said all that, there is a simple way you can tell if it's polynomial in ##n##. (This came up in connection with a totally different question I answered elsewhere today). Take the differences of successive terms.

You have these numbers for successive ##n##: 1, 6, 18, 40, 75, 126
Take the differences: 5, 12, 22, 35, 51
Take the differences of those: 7, 10, 13, 16
Take the differences of those: 3, 3, 3
You end up with a constant on the 3rd difference. So the original numbers follow a 3rd-degree polynomial. This algorithm is ##O(n^3)##.

I could use the difference information to find out exactly what the polynomial is. But the point is that I don't care if I just want to know the complexity.
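In code the whole test is a few lines. A minimal Python sketch, assuming the terms are exact integer counts rather than noisy timings:

```python
def polynomial_degree(values):
    """Return k if the k-th differences of `values` are constant, else None."""
    row = list(values)
    for k in range(len(values)):
        if len(set(row)) == 1:            # k-th differences are all equal
            return k
        row = [b - a for a, b in zip(row, row[1:])]
    return None                           # never went constant: probably not polynomial

print(polynomial_degree([1, 6, 18, 40, 75, 126]))   # 3, so the counts are O(n^3)
```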
 
  • #4
Here's a way that works for any series in which the n-th term is a polynomial in n.

Construct a difference table as follows:

First row is the series of numbers s(1), ..., s(n).

For every row after that, calculate each entry as the number above and to its right minus the number directly above it. Each row will end one place earlier than the row above. The k-th row is called the series of (k-1)th differences: the second row is the first differences, the third row is the second differences, and so on.
Keep going until you get a row that is all zeros or you reach about ten rows. The number of rows needed to reach a zero row is two more than the degree of the polynomial, and you are unlikely to have been given a series based on a polynomial of degree greater than eight. If you don't reach a zero row by then, it's probably not polynomial. It could be exponential, trig, combinatoric or something else.

Let d be the number of rows, including the final row of zeroes, minus 2. That is the degree of your polynomial.
The leading coefficient of your polynomial is the constant value in the penultimate row, divided by d!

To get the other coefficients, you work backwards up the table. See if you can figure out how. But as @RPinPA points out, if you only want the big-O measure, the degree of the polynomial tells you all you need to know.
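For the curious, here is a Python sketch of that table built from the numbers in post #1. One caveat: rather than working backwards up the table by hand, it rebuilds the polynomial with Newton's forward-difference formula, which is a standard (but different) way of reading off the same first column:

```python
from math import comb, factorial

values = [1, 6, 18, 40, 75, 126]              # s(1), ..., s(6)

# Build the difference table row by row until a row is all zeros.
table = [values]
while len(table[-1]) > 1 and any(x != 0 for x in table[-1]):
    prev = table[-1]
    table.append([b - a for a, b in zip(prev, prev[1:])])

degree = len(table) - 2                       # the zero row appears two past the degree
leading = table[-2][0] / factorial(degree)    # constant difference / degree!
print(degree, leading)                        # 3 0.5

# Newton's forward-difference formula: s(n) = sum_j (j-th difference at n=1) * C(n-1, j)
def s(n):
    return sum(row[0] * comb(n - 1, j) for j, row in enumerate(table[:degree + 1]))

print([s(n) for n in range(1, 7)])            # [1, 6, 18, 40, 75, 126]
```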
 
  • #5
As already noted, in algorithm analysis we're interested in the growth of a function - which relates the length of an algorithm's input to the number of steps it takes or (more rarely) to the number of storage locations it uses, so we're looking for bounds. It's good to find a closed form but this is not always viable or in general, easy.

Additionally to what has been already said for polynomial cases, if you have a recurrence relation for your algorithm, you can take a look at the small summary and problems ##1## and ##2## in February's Math Challenge. Also, it may be of help to take a look at my Intro to Data Structures for programming tutorial.
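As a toy illustration of the recurrence route (a hypothetical example, not one of the Math Challenge problems): unrolling ##T(n) = T(n-1) + n## with ##T(1) = 1## gives ##1 + 2 + \dots + n = n(n+1)/2##, so an algorithm with that recurrence is ##O(n^2)##.

```python
def T(n):
    """Hypothetical recurrence T(n) = T(n-1) + n with T(1) = 1."""
    return 1 if n == 1 else T(n - 1) + n

print([T(n) for n in range(1, 7)])              # [1, 3, 6, 10, 15, 21]
print([n * (n + 1) // 2 for n in range(1, 7)])  # the same triangular numbers
```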
 
  • #6
QuantumQuest said:
As already noted, in algorithm analysis we're interested in the growth of a function - which relates the length of an algorithm's input to the number of steps it takes or (more rarely) to the number of storage locations it uses, so we're looking for bounds. It's good to find a closed form but this is not always viable or in general, easy.

Additionally to what has been already said for polynomial cases, if you have a recurrence relation for your algorithm, you can take a look at the small summary and problems ##1## and ##2## in February's Math Challenge. Also, it may be of help to take a look at my Intro to Data Structures for programming tutorial.

That is one amazing resource man. I am only able to skim through it right now but I can already tell I'm going to be coming back to that a LOT! Much love for that man
 
  • #7
Perhaps it's worth noting that the right-hand factors in column 3 are triangular numbers, so you could write the general term as: $$n\sum_{k=1}^{n} k$$
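Written out, that is just the triangular-number formula, which recovers the closed form from post #1:
$$n\sum_{k=1}^{n} k = n\cdot\frac{n(n+1)}{2} = \frac{n^3+n^2}{2}$$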
 

FAQ: Algorithm analysis problem turned into a math problem

What is an algorithm analysis problem?

An algorithm analysis problem is a problem that involves analyzing the efficiency and performance of an algorithm. This includes determining the time and space complexity of an algorithm, as well as identifying any potential bottlenecks or areas for improvement.

How is an algorithm analysis problem turned into a math problem?

An algorithm analysis problem can be turned into a math problem by breaking down the algorithm into its individual steps and representing them using mathematical equations. This allows for a more precise and quantitative analysis of the algorithm's performance.
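For instance (a hypothetical snippet, not from the thread above), counting the iterations of a nested loop turns the analysis into evaluating a sum:

```python
def count_steps(n):
    """The inner statement runs 1 + 2 + ... + n = n(n+1)/2 times, so this is O(n^2)."""
    steps = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            steps += 1                    # the basic operation being counted
    return steps

print(count_steps(6), 6 * 7 // 2)         # both print 21
```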

Why is it important to turn an algorithm analysis problem into a math problem?

Turning an algorithm analysis problem into a math problem allows for a more rigorous and objective analysis of the algorithm's performance. It also allows for easier comparison between different algorithms and can help identify areas for optimization.

What are some common mathematical concepts used in algorithm analysis?

Some common mathematical concepts used in algorithm analysis include asymptotic notation (such as Big O notation), recurrence relations, and probability theory. These concepts help quantify the time and space complexity of an algorithm and can aid in identifying the most efficient solution.

Can algorithm analysis be applied to real-world problems?

Yes, algorithm analysis can be applied to real-world problems in various fields such as computer science, engineering, and data analysis. By analyzing the efficiency of algorithms, we can improve the performance of various systems and processes, leading to more efficient and effective solutions.
