I'm learning Python from Think Python by Allen Downey, and I'm stuck on Exercise 6 here. I wrote a solution to it, and at first glance it seemed to be an improvement over the answer given here. But upon running both, I found that my solution took a whole day (~22 hours) to compute the answer, while the author's solution took only a couple of seconds. Could anyone tell me how the author's solution is so fast, when it iterates over a dictionary containing 113,812 words and applies a recursive function to each one to compute a result?
My solution:
known_red = {'sprite': 6, 'a': 1, 'i': 1, '': 0} # Global dict of known reducible words, with their lengths as values

def compute_children(word):
    """Returns a list of all valid words that can be constructed from the word by removing one letter"""
    from dict_exercises import words_dict
    wdict = words_dict() # Builds a dictionary containing all valid English words as keys
    wdict['i'] = 'i'
    wdict['a'] = 'a'
    wdict[''] = ''

    res = []
    for i in range(len(word)):
        child = word[:i] + word[i+1:]
        if child in wdict:
            res.append(child)
    return res
def is_reducible(word):
    """Returns true if a word is reducible to ''. Recursively, a word is reducible if any of its children are reducible"""
    if word in known_red:
        return True
    children = compute_children(word)
    for child in children:
        if is_reducible(child):
            known_red[word] = len(word)
            return True
    return False
def longest_reducible():
    """Finds the longest reducible word in the dictionary"""
    from dict_exercises import words_dict
    wdict = words_dict()
    reducibles = []
    for word in wdict:
        if 'i' in word or 'a' in word: # A word can only be reducible if it reduces to 'i' or 'a', the only one-letter words possible
            if word not in known_red and is_reducible(word):
                known_red[word] = len(word)
    for word, length in known_red.items():
        reducibles.append((length, word))
    reducibles.sort(reverse=True)
    return reducibles[0][1]
The culprit is this line in compute_children:

wdict = words_dict() #Builds a dictionary containing all valid English words...
Presumably, this takes a while.
However, you regenerate this same, unchanging dictionary many times for every word you try to reduce. What a waste! If you build this dictionary once and then reuse it for every word you try to reduce, as you already do with known_red, the computation time should be greatly reduced.
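For example, you could hoist the expensive build to module level so it runs exactly once. A minimal sketch, assuming words_dict() from dict_exercises behaves as it does in your code:

from dict_exercises import words_dict

wdict = words_dict() # Build the big word dictionary exactly once, at import time
wdict['i'] = 'i'
wdict['a'] = 'a'
wdict[''] = ''

def compute_children(word):
    """Returns a list of all valid words that can be constructed by removing one letter"""
    res = []
    for i in range(len(word)):
        child = word[:i] + word[i+1:]
        if child in wdict: # Look up in the shared, prebuilt dictionary
            res.append(child)
    return res

With this change, each call to compute_children does only len(word) cheap dictionary lookups, and the expensive words_dict() build happens once per run instead of once per recursive call. (Passing wdict in as a parameter would work just as well; the point is simply to build it once.)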