I'm becoming acquainted with python and am creating problems in order to help myself learn the ins and outs of the language. My next problem comes as follows:
I have copied and pasted a huge slew of text from the internet, but the copy and paste added several newlines that break up the huge string. I wish to programmatically remove all of these and turn the string back into one giant blob of characters. This is obviously a job for regex (I think), and parsing through the file and removing every instance of the newline character sounds like it would work, but it isn't going over all that well for me.
Is there an easy way to go about this? It seems rather simple.
The canonical way to strip end-of-line (EOL) characters is to use the string rstrip() method, removing any trailing \r or \n. Here are examples for Mac, Windows, and Unix EOL characters. Using '\r\n' as the parameter to rstrip means that it will strip out any trailing combination of '\r' and '\n'.
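A minimal sketch of those three cases (the sample strings are mine, just to illustrate each convention):

# old Mac, Windows, and Unix end-of-line conventions on otherwise identical lines
mac_line = 'some text\r'
windows_line = 'some text\r\n'
unix_line = 'some text\n'

# rstrip('\r\n') removes any trailing run of '\r' and/or '\n',
# so the same call handles all three conventions
for line in (mac_line, windows_line, unix_line):
    print(repr(line.rstrip('\r\n')))   # 'some text' in every case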
Python has a built-in module called re, which can be used to work with regular expressions.
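For example, a quick sketch of removing the newlines with re.sub (the filename is just a placeholder):

import re

with open('thefile.txt') as f:
    text = f.read()

# replace every newline with the empty string
clean = re.sub(r'\n', '', text)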
The two main alternatives: read everything in as a single string and remove newlines:
clean = open('thefile.txt').read().replace('\n', '')
or, read line by line, removing the newline that ends each line, and join it up again:
clean = ''.join(l[:-1] for l in open('thefile.txt'))
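(One caveat, my own note rather than part of the original comparison: l[:-1] chops the last character of every line even if the final line has no trailing newline; l.rstrip('\n') avoids that.)

clean = ''.join(l.rstrip('\n') for l in open('thefile.txt'))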
The former alternative is probably faster, but, as always, I strongly recommend you MEASURE speed (e.g., use python -mtimeit) in cases of your specific interest, rather than just assuming you know how performance will be. REs are probably slower, but, again: don't guess, MEASURE!
So here are some numbers for a specific text file on my laptop:
$ python -mtimeit -s"import re" "re.sub('\n','',open('AV1611Bible.txt').read())"
10 loops, best of 3: 53.9 msec per loop
$ python -mtimeit "''.join(l[:-1] for l in open('AV1611Bible.txt'))"
10 loops, best of 3: 51.3 msec per loop
$ python -mtimeit "open('AV1611Bible.txt').read().replace('\n', '')"
10 loops, best of 3: 35.1 msec per loop
The file is a version of the KJ Bible, downloaded and unzipped from here (I do think it's important to run such measurements on one easily fetched file, so others can easily reproduce them!).
Of course, a few milliseconds more or less on a file of 4.3 MB, 34,000 lines, may not matter much to you one way or another; but as the fastest approach is also the simplest one (far from an unusual occurrence, especially in Python;-), I think that's a pretty good recommendation.
I wouldn't use a regex for simply replacing newlines - I'd use str.replace(). Here's a complete script:
with open('input.txt') as f:
    contents = f.read()

# remove every newline character
new_contents = contents.replace('\n', '')

with open('output.txt', 'w') as f:
    f.write(new_contents)
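If the pasted text happens to use Windows-style '\r\n' line endings, a hedged variant (not part of the original answer) is to let splitlines() handle every EOL convention and join the pieces back up:

with open('input.txt') as f:
    contents = f.read()

# splitlines() splits on '\n', '\r\n', and '\r' (and other Unicode line
# boundaries), so joining the pieces removes every kind of line break
new_contents = ''.join(contents.splitlines())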