- #1
CGandC
- Homework Statement
- For the algorithm below, explain whether the time and space complexities are Linear, Polynomial or Exponential ( find a tight upper-bound ) in ##k##. Assume the size of the rules set is constant.
- Relevant Equations
- Big-Oh notation, context-free languages
Here's the code I've written for generating strings of length ##k## given a CNF ( Chomsky Normal Form ) grammar. The code is a recursive variant of the CYK algorithm and uses memoization.
Python:
def generate_language(rule_dict, start_var, k):
    mem = dict()
    return generate_language_rec(rule_dict, start_var, k, mem)


def generate_language_rec(rule_dict, var, k, mem):
    if (var, k) in mem:
        return mem[(var, k)]
    s = set()
    if k == 0:
        s.add("")
        mem[(var, k)] = s
        return s
    if k == 1:
        # rules of length 1 are terminal rules, e.g. "a"
        for x in rule_dict[var]:
            if len(x) == 1:
                s.update(x[0])
        mem[(var, k)] = s
        return s
    for var_rule in rule_dict[var]:
        if len(var_rule) == 2:
            X, Y = var_rule[0], var_rule[1]
            for j in range(1, k):
                s1 = generate_language_rec(rule_dict, X, j, mem)
                s2 = generate_language_rec(rule_dict, Y, k - j, mem)
                # concatenate the strings of s1 and s2 in both orders:
                # strings from s1 followed by strings from s2, and vice versa
                s.update({str1 + str2 for str1 in s1 for str2 in s2}.union(
                         {str2 + str1 for str1 in s1 for str2 in s2}))
    mem[(var, k)] = s
    return s
The algorithm works fine, here's an input example:
Python:
rule_dict = {"S": {"AB", "BC"}, "A": {"BA", "a"}, "B": {"CC", "b"}, "C": {"AB", "a"}}
res = generate_language(rule_dict, "S", 5)
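As a quick sanity check on the example ( this snippet is only illustrative and assumes res from the call above ), every generated string should have length exactly 5, since a rule of the form ##A \to XY## splits ##k## into ##j## and ##k-j##, and the base cases produce single characters or the empty string:
Python:
assert all(len(w) == 5 for w in res)   # every generated string has the requested length
print(len(res), sorted(res)[:5])       # number of strings generated and a small sample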
Next, I was asked to analyze both the time-complexity of the algorithm and its space-complexity ( excluding the arguments of the function from the space-complexity ), under the assumption that the rule set ( rule_dict ) of the grammar is of constant size. I was asked to explain whether the complexities are Linear, Polynomial or Exponential ( find a tight upper-bound ) in ##k##.
My answer was that both the time and space complexities are polynomial ( though I couldn't justify the space-complexity ). Here's my explanation:
For Time-complexity:
For each recursive call with input (var, k), we pass over all rules whose derivation leads to two variables X and Y.
For variable X we check whether it can be used to produce a set of strings of lengths ##1, 2, \ldots, k-1##.
For variable Y we check whether it can be used to produce a set of strings of lengths ##k-1, k-2, \ldots, 1##.
Note that because of the memoization, each recursive call with input (var, k) results in ##O(k)## recursive calls.
Hence,
For variable X, we'll have ##O(k)## recursive calls in total.
For variable Y, we'll have ##O(k)## recursive calls in total.
Since we assumed the size of the set of rules is constant, the total time-complexity of the algorithm is ##O(k^2)##,
so the time-complexity is polynomial.
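To sanity-check the counting argument above, here is a minimal sketch ( my own addition, assuming generate_language and generate_language_rec from the post are defined in the same module ): it rebinds generate_language_rec to a counting wrapper, so the recursive calls made inside the original function are counted as well, and prints the number of calls for a few values of ##k##. Note it only counts function entries, not the work spent building the string sets.
Python:
# Hypothetical instrumentation, not part of the original code.
call_count = 0
_original_rec = generate_language_rec  # keep a reference to the real function

def _counting_rec(rule_dict, var, k, mem):
    # Wrapper that counts every entry and then delegates to the original function.
    global call_count
    call_count += 1
    return _original_rec(rule_dict, var, k, mem)

# The recursion inside generate_language_rec looks the name up at call time,
# so rebinding the module-level name routes the inner recursive calls
# through the wrapper too.
generate_language_rec = _counting_rec

rule_dict = {"S": {"AB", "BC"}, "A": {"BA", "a"}, "B": {"CC", "b"}, "C": {"AB", "a"}}
for k in (4, 6, 8, 10):
    call_count = 0
    generate_language(rule_dict, "S", k)
    print(k, call_count)  # counts function entries only, not the set-building work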
For Space-complexity:
I couldn't figure it out. Maybe the space-complexity is also ##O(k^2)##? I couldn't explain it, but I think the answer lies in these lines:
Python:
s1 = generate_language_rec(rule_dict, X, j, mem)
s2 = generate_language_rec(rule_dict, Y, k - j, mem)
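For intuition about the space side, a minimal sketch ( again my own addition, assuming the functions from the post are defined in the same module ) is to run the recursion with an explicit memo dictionary and then measure what it actually stores for a few values of ##k##:
Python:
# Hypothetical measurement, not part of the original code.
rule_dict = {"S": {"AB", "BC"}, "A": {"BA", "a"}, "B": {"CC", "b"}, "C": {"AB", "a"}}

for k in (4, 6, 8, 10):
    mem = {}
    generate_language_rec(rule_dict, "S", k, mem)
    num_entries = len(mem)                                    # memoized (variable, length) pairs
    num_strings = sum(len(s) for s in mem.values())           # strings stored across all entries
    num_chars = sum(len(w) for s in mem.values() for w in s)  # total characters held in memory
    print(k, num_entries, num_strings, num_chars)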
Questions:
What do you think about the time-complexity of the above algorithm? Am I correct in my analysis that it is polynomial?
How can I proceed to analyze the space-complexity? I feel lost.
Thanks in advance for any help!