I have a list of words, for example:
words = ['one','two','three four','five','six seven']
I am trying to create a new list where each item is just one word, so I would have:
words = ['one','two','three','four','five','six','seven']
Would the best approach be to join the entire list into a string and then tokenize that string? Something like this:
word_string = ' '.join(words)
tokenize_list = nltk.word_tokenize(word_string)
Or is there a better option?
words = ['one','two','three four','five','six seven']
With a loop:
words_result = []
for item in words:
    for word in item.split():
        words_result.append(word)
Or as a list comprehension:
words = [word for item in words for word in item.split()]
You can join using a space separator and then split again:
In [22]:
words = ['one','two','three four','five','six seven']
' '.join(words).split()
Out[22]:
['one', 'two', 'three', 'four', 'five', 'six', 'seven']
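Another option, as a sketch: flatten the per-item splits with `itertools.chain.from_iterable`, which skips building the intermediate joined string entirely:

```python
from itertools import chain

words = ['one', 'two', 'three four', 'five', 'six seven']

# Split each item on whitespace, then flatten the resulting lists
flattened = list(chain.from_iterable(item.split() for item in words))
print(flattened)  # ['one', 'two', 'three', 'four', 'five', 'six', 'seven']
```

For a small list the `' '.join(words).split()` one-liner is just as good; `chain.from_iterable` mainly helps when the items are long and you want to avoid the temporary joined string.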