What happens to the coefficient in O(t(n)b^{t(n)})?

In summary, the conversation discusses a proof in "Introduction to the Theory of Computation" by Sipser which shows that every nondeterministic single-tape Turing machine with running time t(n) (where t(n) ≥ n) has an equivalent deterministic single-tape Turing machine with running time 2^{O(t(n))}. The proof concludes that O(t(n)b^{t(n)}) = 2^{O(t(n))}, but it is not entirely clear why the coefficient t(n) of the original expression disappears. The conversation explores breaking the expression into two parts and clarifies that ½O(t(n)) is equivalent to O(t(n)). Overall, the conversation aims to understand why the polynomial factor is absorbed into the exponent.
  • #1
gnome
In Sipser, "Introduction to the Theory of Computation", a proof that "every t(n) time nondeterministic single-tape Turing machine (where t(n) ≥ n) has an equivalent [itex]2^{O(t(n))}[/itex] time deterministic single-tape Turing machine" shows that the running time of the simulating deterministic machine is [itex]O(t(n)b^{t(n)})[/itex] and then concludes with the statement:

[tex]O(t(n)b^{t(n)}) = 2^{O(t(n))}[/tex]

which is not entirely clear to me.

b in this case is just a constant representing the maximum branching factor of the NTM.

I guess that [itex]2^{O(t(n))}[/itex] is equivalent to [itex]b^{t(n)}[/itex] since b is a constant and therefore [itex]log_2{b}[/itex] is a constant so we get

[tex]b^{t(n)} = 2^{(log_2 b)* t(n)} = 2^{O(t(n))}[/tex]

Correct?
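As a quick numerical sanity check of that identity (with illustrative values of b and t, not values from the proof), one can verify [itex]b^t = 2^{(\log_2 b)t}[/itex] in Python:

```python
import math

# Numerical sanity check (illustrative values): b^t == 2^((log2 b) * t),
# so a constant base b just rescales the exponent by the constant log2(b).
for b in (2, 3, 5):
    for t in (4, 10, 25):
        assert math.isclose(b ** t, 2 ** (math.log2(b) * t), rel_tol=1e-9)
print("b^t == 2^((log2 b) * t) for all sampled values")
```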

But, then what happened to the other O(t(n)) (the coefficient of the original expression)? That was not a constant term. It might be [itex]n[/itex], or [itex]n\log n[/itex], or [itex]n^2[/itex], or whatever, but certainly not a constant. So why did it just disappear? Is it just that the exponential term so dominates that the t(n) coefficient is dropped as insignificant?
 
  • #2
It didn't just disappear -- you have to break the RHS into two parts, each part asymptotically bigger than one of the factors of the LHS.
 
  • #3
I don't understand your meaning. Where are the two parts in [itex]2^{O(t(n))}[/itex]?

Or to put it another way: for argument's sake, suppose that every branch of the original NTM has a length of at most [itex]n^2[/itex], so t(n) = [itex]n^2[/itex] in the original NTM. And further, suppose that the maximum branching factor was just 2. Then we would have the total number of nodes [itex] = O(2^{n^2})[/itex] and the time for traveling from the root to a node = [itex]O({n^2})[/itex]. Therefore, the running time of the equivalent deterministic machine = [itex]O(n^22^{n^2})[/itex]. Can this be simplified to just [itex]2^{O(n^2)}[/itex]? Why?
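For what it's worth, a small numeric illustration (sample values of n only) that the polynomial factor fits inside one extra exponential factor, i.e. [itex]n^2 \cdot 2^{n^2} \le 2^{2n^2}[/itex]:

```python
# Illustrative check: the polynomial factor n^2 is absorbed by
# doubling the constant in the exponent, since
# n^2 * 2^(n^2) <= 2^(n^2) * 2^(n^2) = 2^(2*n^2) whenever n^2 <= 2^(n^2).
for n in range(1, 10):
    assert n ** 2 <= 2 ** (n ** 2)                     # t(n) <= 2^t(n)
    assert n ** 2 * 2 ** (n ** 2) <= 2 ** (2 * n ** 2)
print("n^2 * 2^(n^2) stays within 2^O(n^2) for all sampled n")
```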
 
  • #4
You can almost always break things into two parts if you're creative! How about writing it as the square of [itex]2^{\frac{1}{2}O(t(n))}[/itex]?
 
  • #5
But what is the point of that? (And anyway, ½O(t(n)) is the same as O(t(n)), isn't it?)
 
  • #6
Because you can prove [itex]2^{O(t(n))}[/itex] is asymptotically at least as large as t(n), and equal to [itex]O(b^{t(n)})[/itex]; therefore [itex]2^{O(t(n))} \cdot 2^{O(t(n))} = 2^{O(t(n))}[/itex] must be asymptotically at least as large as [itex]O(t(n)\, b^{t(n)})[/itex].

But what is the point of that? (And anyway, ½O(t(n)) is the same as O(t(n)), isn't it?)

Yep! But writing it that way makes it clear how I'm breaking it into two parts.
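Spelled out, the two-part bound (my wording of the argument above) is: since [itex]t(n) \le 2^{t(n)}[/itex] for all [itex]t(n) \ge 1[/itex], and [itex]b^{t(n)} = 2^{(\log_2 b)\,t(n)}[/itex],

[tex]t(n)\, b^{t(n)} \;\le\; 2^{t(n)} \cdot 2^{(\log_2 b)\,t(n)} \;=\; 2^{(1+\log_2 b)\,t(n)} \;=\; 2^{O(t(n))}[/tex]

where [itex]1+\log_2 b[/itex] is a constant, absorbed by the O in the exponent.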
 
  • #7
OK, I see your point.

Thanks, Hurkyl.
 

Related to: What happens to the coefficient in O(t(n)b^{t(n)})?

1. What is time complexity and why is it important?

Time complexity is a measure of the amount of time it takes for an algorithm to run as the input size increases. It is important because it allows us to analyze and compare the efficiency of different algorithms, helping us to choose the most efficient solution for a given problem.

2. How is time complexity measured?

Time complexity is typically expressed as a function of the input size, denoted n. Big O notation, such as O(n), describes how the number of operations grows as n increases; by convention it represents the worst-case scenario, providing an upper bound for the algorithm's running time.

3. What is the difference between best-case, worst-case, and average-case time complexity?

Best-case time complexity refers to the minimum amount of time an algorithm takes to run for a given input. Worst-case time complexity is the maximum amount of time an algorithm takes to run for a given input. Average-case time complexity is the average amount of time an algorithm takes to run for all possible inputs of a given size.
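As a hypothetical illustration of these three cases, linear search behaves differently depending on where (or whether) the target appears:

```python
def linear_search(xs, target):
    """Return the index of target in xs, or -1 if absent.

    Best case O(1): target is the first element.
    Worst case O(n): target is last or absent.
    Average case O(n): roughly n/2 comparisons for a present target.
    """
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1
```

For example, `linear_search([3, 1, 4], 3)` hits the best case and returns 0 after a single comparison, while searching for a missing value forces the O(n) worst case.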

4. What are some common time complexities and how do they differ?

Some common time complexities include O(1) (constant time), O(log n) (logarithmic time), O(n) (linear time), O(n^2) (quadratic time), and O(2^n) (exponential time). These complexities differ in how quickly the algorithm's running time grows as the input size increases: of those listed, O(1) grows slowest and O(2^n) grows fastest.
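A small sketch (hypothetical unit operation counts) showing how these classes pull apart as n grows:

```python
import math

# Illustrative operation counts for common complexity classes
# at a few input sizes (hypothetical unit costs).
classes = {
    "O(1)":     lambda n: 1,
    "O(log n)": lambda n: int(math.log2(n)),
    "O(n)":     lambda n: n,
    "O(n^2)":   lambda n: n ** 2,
    "O(2^n)":   lambda n: 2 ** n,
}
for name, f in classes.items():
    print(f"{name:9} {[f(n) for n in (8, 16, 32)]}")
```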

5. Can time complexity be improved?

Yes, time complexity can be improved by optimizing the algorithm's design and implementation. This can include using more efficient data structures and algorithms, reducing the number of operations, and avoiding unnecessary iterations. Additionally, choosing the most appropriate algorithm for a given problem can significantly improve time complexity.
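As a hypothetical example of such an optimization, checking whether any pair in a list sums to a target can be improved from O(n^2) to O(n) by replacing the inner loop with a set lookup:

```python
def has_pair_sum_quadratic(xs, s):
    # O(n^2): examine every pair of elements.
    n = len(xs)
    return any(xs[i] + xs[j] == s
               for i in range(n) for j in range(i + 1, n))

def has_pair_sum_linear(xs, s):
    # O(n): a set of previously seen values replaces the inner loop,
    # trading O(n) extra memory for time.
    seen = set()
    for x in xs:
        if s - x in seen:
            return True
        seen.add(x)
    return False
```

Both functions return the same answers; the linear version makes a single pass because set membership tests take constant time on average.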
