Asymptotic notation informal definitions

In summary: the thread asks whether the definitions of the O and Ω notations (and likewise o and ω) could be swapped, so that the defining inequality scales f(x) down instead of scaling g(x) up. Under the proposed "new" definitions, the old Ω would coincide with the new O and the old O with the new Ω; the question is whether some restriction prevents these exchanged definitions from being valid.
  • #1
colt
I was wondering if the O notation definition could be exchanged with the Ω notation and o could be exchanged with the ω notation.

I ask this because of this:
2n² = O(n²) means that 2n² <= c*n², which is true for c = 3 and n >= 1, for example.

Instead, it would be like this:

c*2n² <= n², which would be true for c = 1/3 and n >= 1, for example

and in the Ω case, 2n² = Ω(n²) means that c*2n² <= n², which is true for c = 1/3 and n >= 1, for example. Instead, it could be defined so that it means 2n² <= c*n², which would be true for c = 3 and n >= 1, for example. Basically, this changes the meaning of these notations.
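Both versions of the bound in the example above can be checked numerically. A minimal Python sketch (the helper names f and g are illustrative, not from the thread):

```python
# f(n) = 2n^2 compared against g(n) = n^2 in both forms of the bound.
def f(n):
    return 2 * n**2

def g(n):
    return n**2

# Standard form: f(n) <= c * g(n) with c = 3, for all n >= 1.
assert all(f(n) <= 3 * g(n) for n in range(1, 1000))
# Proposed "scaled-f" form: c * f(n) <= g(n) with c = 1/3, for all n >= 1.
assert all((1 / 3) * f(n) <= g(n) for n in range(1, 1000))
```

Both assertions pass for the same pair of functions, which is the point of the question: each form can express the same relationship with a different constant.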

The same would apply to the o and ω notations.

And in the case of n² - 2n = Θ(n²), instead of c1*n² <= n² - 2n <= c2*n², could it be c1*(n² - 2n) <= n² <= c2*(n² - 2n), adjusting the coefficients as needed?
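The Θ question can be sanity-checked the same way. In the sketch below the constants c1, c2 and the cutoffs are my own choices, not from the thread:

```python
def h(n):
    return n**2 - 2 * n  # the function being bounded

def g(n):
    return n**2          # the comparison function

# Standard Theta form: c1*g(n) <= h(n) <= c2*g(n);
# c1 = 1/4 and c2 = 1 work for all n >= 3.
assert all(0.25 * g(n) <= h(n) <= 1 * g(n) for n in range(3, 1000))
# Rearranged form with the coefficients moved onto h: c1*h(n) <= g(n) <= c2*h(n);
# c1 = 1 and c2 = 2 work for all n >= 4.
assert all(1 * h(n) <= g(n) <= 2 * h(n) for n in range(4, 1000))
```

As the question suggests, the same pair of functions satisfies both arrangements once the coefficients (and the cutoff) are adjusted.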

Is all of this valid, or is there some restriction that forces these definitions to be the way they are?
 
  • #2
I don't think it's possible to tell what you are talking about. Could you please try to explain your question again?
 
  • #3
I will try, starting with only the O notation then.

for a function f(n) to be O(g(n)), it is necessary that f(n) <= c * g(n) for some constant c and all n >= n0, right?

for f(n) = 2n² and g(n) = n², the inequality holds if you take c = 3 and n >= 1, for instance.

However, if the definition of O notation were c * f(n) <= g(n) for all n >= n0, then c could be 1/3 with n >= 1 and the inequality would still hold.

Instead of making g(n) bigger, the idea is to make f(n) smaller.

But if you pay attention to the Ω notation, you will have noticed that it is c * f(n) <= g(n) for n >= n0, which is exactly the "new" definition of O. So Ω would have to change so as not to be equal to O, and it would become f(n) <= c * g(n), which is the "old" definition of O.

So what I am doing here is basically swapping their definitions.

The original Ω is equal to the new O notation.
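The swap in the posts above is really just a change of constant: for c > 0, c*f <= g holds exactly when f <= (1/c)*g. A small sketch verifying this with exact rational arithmetic (the helper names are mine, not from the thread):

```python
from fractions import Fraction

def old_form(f_val, g_val, c):
    # "old" O-style inequality: f <= c * g
    return f_val <= c * g_val

def new_form(f_val, g_val, c):
    # "new" (proposed) inequality: c * f <= g
    return c * f_val <= g_val

# For any positive c, the two forms agree once c is replaced by 1/c.
c = Fraction(3, 7)  # an arbitrary positive constant
for a in range(0, 30):
    for b in range(0, 30):
        f_val, g_val = Fraction(a), Fraction(b)
        assert new_form(f_val, g_val, c) == old_form(f_val, g_val, 1 / c)
```

With exact rationals the assertion always holds, since the two forms are algebraically equivalent for any positive constant.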
 

FAQ: Asymptotic notation informal definitions

What is the purpose of using asymptotic notation in informal definitions?

Asymptotic notation is used to describe the time or space complexity of an algorithm or function. It provides a way to analyze and compare the efficiency of different algorithms without having to consider specific implementation details. It is particularly useful for predicting how an algorithm will perform as the input size grows.

What does the big-O notation represent in asymptotic notation?

The big-O notation, written for example as O(n²), gives an upper bound on the time or space complexity of an algorithm, and is often used to describe the worst case. It signifies that, up to a constant factor, the algorithm's cost will not exceed this bound as the input size increases.

How does the big-Omega notation differ from big-O notation?

The big-Omega notation, written for example as Ω(n²), gives a lower bound on the time or space complexity of an algorithm, and is often used to describe the best case. It signifies that, up to a constant factor, the algorithm's cost will not fall below this bound as the input size increases.
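The upper and lower bounds described in the two answers above can be illustrated on a concrete cost function. T(n) = 3n² + 5n below is a made-up operation count, not taken from the FAQ:

```python
def T(n):
    # hypothetical number of basic operations performed on input size n
    return 3 * n**2 + 5 * n

# T(n) = O(n^2): T(n) <= 8 * n^2 for all n >= 1 (upper bound).
assert all(T(n) <= 8 * n**2 for n in range(1, 1000))
# T(n) = Omega(n^2): T(n) >= 3 * n^2 for all n >= 1 (lower bound).
assert all(T(n) >= 3 * n**2 for n in range(1, 1000))
```

Since the same function satisfies both bounds with the same g(n) = n², this T(n) is also Θ(n²).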

Can asymptotic notation be used for algorithms with varying input sizes?

Yes, asymptotic notation can be used for algorithms with varying input sizes. It provides a general idea of how the algorithm will perform for any input size, as long as the input size is sufficiently large.

Are there any other types of asymptotic notation besides big-O, big-Omega, and big-Theta?

Yes, there are other types of asymptotic notation such as little-o and little-omega. Little-o notation represents an upper bound that is not tight: the algorithm's complexity grows strictly slower than this bound as the input size grows. Little-omega notation represents a lower bound that is not tight: the algorithm's complexity grows strictly faster than this bound as the input size grows.
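The "not tight" behaviour of little-o can be seen numerically: if f(n) = o(g(n)), the ratio f(n)/g(n) shrinks toward zero. The functions f(n) = 2n and g(n) = n² here are example choices of mine, not from the FAQ:

```python
# f(n) = 2n is o(n^2): the ratio f(n)/g(n) tends to 0 as n grows.
ratios = [2 * n / n**2 for n in (10, 100, 1000, 10000)]
assert ratios == [0.2, 0.02, 0.002, 0.0002]
# Each ratio is strictly smaller than the previous one.
assert all(ratios[i] > ratios[i + 1] for i in range(len(ratios) - 1))
```

For little-omega the situation is mirrored: the ratio f(n)/g(n) grows without bound instead of shrinking to zero.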
