Big O Notation Proofs: O, Ω, and Θ

In summary: this thread reviews the formal definitions of O, Ω, and Θ notation and works through three short proofs, with feedback on how the constants c and n0 should be chosen in each one.
  • #1
needhelp83
O-notation
– O(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
– g(n) is an upper bound of f(n); it may not be tight

Ω-notation
– Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
– g(n) is a lower bound of f(n); it may not be tight

Θ-notation
– Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such
that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }
– We write f(n) = Θ(g(n)) instead of f(n) ∈ Θ(g(n))
– g(n) is an asymptotically tight bound for f(n)
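
As a quick aside (not part of the original notes), the three definitions can be spot-checked numerically for a candidate choice of constants. A passing check over a finite range is only evidence for the choice, not a proof. A minimal Python sketch, assuming f and g are ordinary functions of an integer n:

[code]
# Finite spot-checks of the O, Omega, and Theta definitions for candidate
# constants c (or c1, c2) and n0. Passing up to n_max is not a proof, but a
# failure shows the chosen constants cannot work.

def holds_O(f, g, c, n0, n_max=10_000):
    """Check 0 <= f(n) <= c*g(n) for n0 <= n <= n_max."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))

def holds_Omega(f, g, c, n0, n_max=10_000):
    """Check 0 <= c*g(n) <= f(n) for n0 <= n <= n_max."""
    return all(0 <= c * g(n) <= f(n) for n in range(n0, n_max + 1))

def holds_Theta(f, g, c1, c2, n0, n_max=10_000):
    """Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for n0 <= n <= n_max."""
    return holds_Omega(f, g, c1, n0, n_max) and holds_O(f, g, c2, n0, n_max)
[/code]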

1) [tex]n^{2} + 2n^{3} = O(n^{3}) [/tex]
[tex]f(n) = 2n^{3}, \ g(n) = n^{3} [/tex]
if [tex]f(n) \in O(g(n))[/tex]
[tex]0 \leq f(n) \leq cg(n)[/tex]
[tex]0 \leq 2n^{3} \leq c*n^{3} for \ n \geq 0[/tex]
[tex]any \ c \geq 2: \ \ f(n) \leq cg(n) \ for \ n \geq 0[/tex]
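
A numeric spot-check of this first claim (an illustrative sketch only; the constants c = 3 and n0 = 1 are one workable choice when f(n) is taken to be the whole left-hand side n^2 + 2n^3):

[code]
# Check 0 <= n^2 + 2n^3 <= 3*n^3 over a finite range (not a proof).
f = lambda n: n**2 + 2 * n**3
g = lambda n: n**3
c, n0 = 3, 1
print(all(0 <= f(n) <= c * g(n) for n in range(n0, 10_001)))  # True
[/code]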

2)[tex]2n + n^{2} \neq \Omega (n^{3}) [/tex]
[tex]f(n) = n^{2}, \ g(n) = n^{3}[/tex]
[tex]0 \leq cg(n) \leq f(n)[/tex]
[tex]0 \leq c*n^{3} \leq n^{2} \ for \ n \geq 0[/tex]
[tex]any \ c \leq \frac{1}{n} \ for \ n \geq 0 [/tex]
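
For this second claim, a short numeric sketch (illustrative, not part of the original argument) makes the contradiction visible: the largest constant that could work at a given n is (2n + n^2)/n^3, and that value shrinks toward 0, so no single positive constant c can satisfy c*n^3 <= 2n + n^2 for all large n:

[code]
# The best possible c at each n is (2n + n^2) / n^3 = 2/n^2 + 1/n,
# which tends to 0 as n grows, so no fixed positive c works.
for n in (1, 10, 100, 1000):
    print(n, (2 * n + n**2) / n**3)
# 1 3.0
# 10 0.12
# 100 0.0102
# 1000 0.001002
[/code]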

3) [tex]ln \ n = \Theta (lg \ n) [/tex]
f(n) = Θ(g(n))
c1 lg(n) ≤ ln(n) ≤ c2 lg(n)

(1) ln(n) ≤ c2 lg(n)
Take c2 = 1:
ln(n) ≤ lg(n)
At n = 1: ln(1) = 0 and lg(1) = 0 (since e^0 = 1 and 2^0 = 1), so 0 ≤ 0
Claim: holds for all n ≥ 1

(2) c1 lg(n) ≤ ln(n)
Take c1 = ½:
½ lg(n) ≤ ln(n)
At n = 1: ½(0) ≤ 0
Claim: holds for all n ≥ 1
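
A numeric spot-check of the sandwich for this third claim (illustrative only; the constants c1 = 1/2, c2 = 1 and n0 = 1 are assumed, and a finite check is not a proof):

[code]
import math

# By change of base, ln(n) = ln(2) * lg(n) with ln(2) ~ 0.693, so
# 0.5 * lg(n) <= ln(n) <= 1 * lg(n) should hold for every n >= 1.
c1, c2 = 0.5, 1.0
ok = all(c1 * math.log2(n) <= math.log(n) <= c2 * math.log2(n)
         for n in range(1, 10_001))
print(ok, math.log(2))  # True 0.6931471805599453
[/code]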

I am a beginner and looking for feedback. Are there any mistakes I have made or changes that need to be made when proving these?
 
  • #2
Your proofs are on the right track. It is always a good idea to double-check the constants you choose against the definitions, and a brief explanation of each step would make the arguments easier to follow.
 
  • #3


I would like to commend your efforts in understanding and explaining the concepts of Big O, Ω, and Θ notations. Your proofs are clear and well-organized, and you have correctly identified the key components in each notation.

However, there are a few points that I would like to clarify and provide feedback on:

1) In your first proof, the claim is that n^2 + 2n^3 = O(n^3), so f(n) should be the entire left-hand side, f(n) = n^2 + 2n^3, rather than just the dominant term 2n^3. The bound still goes through: since n^2 ≤ n^3 for all n ≥ 1, we have n^2 + 2n^3 ≤ 3n^3, so the definition is satisfied with c = 3 and n0 = 1 (and any larger c also works). Note that c = 2 does not work for the full f(n), because n^2 + 2n^3 > 2n^3 for every n ≥ 1. The key point is that c and n0 must be fixed positive constants, stated explicitly.

2) In your second proof, f(n) should again be the whole left-hand side, f(n) = 2n + n^2. More importantly, "c ≤ 1/n" is not a place to stop, because c must be a fixed positive constant that does not depend on n; one more step turns this into a contradiction. Suppose there were positive constants c and n0 with c·n^3 ≤ 2n + n^2 for all n ≥ n0. Dividing by n^3 gives c ≤ 2/n^2 + 1/n, and the right-hand side tends to 0 as n grows, so the inequality eventually fails for any fixed positive c. Hence no such constants exist, and 2n + n^2 ≠ Ω(n^3).

3) In your third proof, the choices c1 = 1/2 and c2 = 1 are perfectly valid positive constants. The gap is in the justification: checking the inequalities only at n = 1 does not establish them for all n ≥ n0. The cleanest fix is the change-of-base identity shown below: ln n and lg n differ only by the constant factor ln 2 ≈ 0.693, so c1·lg n ≤ ln n ≤ c2·lg n holds for all n ≥ 1 whenever 0 < c1 ≤ ln 2 ≤ c2. In particular, c1 = 1/2 and c2 = 1 work with n0 = 1.
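
For reference, here is the change-of-base identity that the third point relies on (a standard logarithm fact, written out here for completeness rather than quoted from the original post):

[tex]\ln n = \frac{\lg n}{\lg e} = (\ln 2) \, \lg n, \qquad \ln 2 \approx 0.693[/tex]

so c1·lg n ≤ ln n ≤ c2·lg n holds for every n ≥ 1 whenever 0 < c1 ≤ ln 2 ≤ c2.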
 

FAQ: Big O Notation Proofs: O, Ω, and Θ

What is Big O notation and why is it important?

Big O notation is a mathematical notation used to describe the asymptotic behavior of a function. It is commonly used in computer science to analyze the efficiency of algorithms and their performance as the input size grows. It is important because it allows us to compare algorithms and determine which one is more efficient for a given problem.

What is the difference between O, Ω, and Θ in Big O notation?

O, Ω, and Θ are all symbols used to describe bounds on how a function grows. O(g(n)) is an upper bound, Ω(g(n)) is a lower bound, and Θ(g(n)) is a tight bound (both an upper and a lower bound, up to constant factors). These bounds are often conflated with best-, worst-, and average-case analysis, but the two ideas are independent: an O, Ω, or Θ bound can be stated for the best case, the worst case, or the average case of an algorithm.
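
As a concrete illustration (an example added here for clarity, not taken from the FAQ itself): take f(n) = 3n^2 + n. Since

[tex]3n^{2} \leq 3n^{2} + n \leq 4n^{2} \ for \ all \ n \geq 1[/tex]

f(n) is Ω(n^2) with c = 3, O(n^2) with c = 4, and therefore Θ(n^2) with c1 = 3, c2 = 4, and n0 = 1. It is also O(n^3), but that is a looser, non-tight upper bound.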

How do you prove the Big O, Ω, and Θ complexities of an algorithm?

To prove a Big O, Ω, or Θ bound for an algorithm, first express its running time (or the count of its basic operations) as a function of the input size by analyzing its code. Then compare that function to the candidate bound g(n) using the definitions: for O(g(n)), exhibit positive constants c and n0 such that the running time is at most c·g(n) for all n ≥ n0; for Ω(g(n)), exhibit constants such that it is at least c·g(n); and for Θ(g(n)), exhibit both an upper-bound constant and a lower-bound constant.
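
A small illustrative sketch in Python (the double loop is a hypothetical example, not code from the FAQ): counting the basic operations of a simple nested loop and comparing the count against c·n^2 shows the kind of inequality the proof must establish; the proof itself is the inequality argument with explicit constants, not the measurement.

[code]
def double_loop_ops(n):
    """Count how many times the innermost statement runs for input size n."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # the basic operation being counted
    return ops

# Compare the count against c * n^2 with c = 1 and n0 = 1 (a finite check only).
c, n0 = 1, 1
print(all(double_loop_ops(n) <= c * n**2 for n in range(n0, 200)))  # True
[/code]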

Can an algorithm have more than one Big O, Ω, or Θ complexity?

Yes. First, bounds are not unique: any function that is O(n) is also O(n^2), which is why we usually quote the tightest bound we can prove. Second, an algorithm's best-case, worst-case, and average-case running times are different functions and can have different Θ bounds, as in the linear-search example below.
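
For example (an illustrative sketch with a hypothetical linear_search function, not code from the FAQ): linear search performs one comparison in its best case and n comparisons in its worst case, so its best case is Θ(1) while its worst case is Θ(n).

[code]
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, x in enumerate(items):
        if x == target:   # one comparison per element examined
            return i
    return -1

data = list(range(1000))
print(linear_search(data, 0))    # best case: 1 comparison, Theta(1)
print(linear_search(data, -1))   # worst case: 1000 comparisons, Theta(n)
[/code]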

How does the choice of data structures and algorithms affect the Big O, Ω, and Θ complexities of a program?

The choice of data structures and algorithms can greatly affect the Big O, Ω, and Θ complexities of a program. Different data structures have different time and space complexities, which can impact the overall complexity of the program. Similarly, the choice of algorithms can greatly impact the efficiency of the program and, therefore, its complexities. It is important to carefully consider the data structures and algorithms used in a program to ensure optimal performance.
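
A small timing sketch (illustrative only; the absolute numbers will vary by machine) of how the data-structure choice changes the cost of the same operation: membership testing on a Python list scans the list (O(n)), while a set does an average-case O(1) hash lookup.

[code]
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Time 100 membership tests for an element near the end of the range.
print(timeit.timeit(lambda: (n - 1) in as_list, number=100))  # linear scan
print(timeit.timeit(lambda: (n - 1) in as_set, number=100))   # hash lookup
[/code]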
