I've written a pure-Python module for Python 3.0/3.1 which I'd also like to make compatible with 2.x (probably just 2.6/2.7) in order to make it available to the widest possible audience.
The module is concerned with reading and writing a set of related file formats, so the differences between the 2.x and 3.x versions would be slight (e.g. io.BytesIO instead of StringIO.StringIO), but not all of them are easily handled via try/except blocks; setting a metaclass, for example, is not.
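To illustrate, the easy cases can be feature-detected with a try/except import; a minimal sketch (the fallback branch only matters on very old 2.x releases that lack the io module):

```python
# Feature-detect the right bytes buffer class. io.BytesIO exists on
# both 2.6+ and 3.x; the fallback illustrates the general pattern.
try:
    from io import BytesIO
except ImportError:          # pre-2.6, where the io module is missing
    from cStringIO import StringIO as BytesIO

buf = BytesIO()
buf.write(b"file header")
print(buf.getvalue())
```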
What's the correct way to handle this? Two nearly-identical codebases which must be kept in sync or one codebase sprinkled with feature detection? A single, clean codebase plus 2to3 or 3to2?
Write your code entirely against 2.x, targeting the most recent version in the 2.x series (at this point, probably 2.7). Run it through 2to3, and if the generated code doesn't pass all of its unit tests, fix the 2.x version until the generated 3.x version works.
Eventually, when you want to drop 2.x support, you can take the generated 3.x version and start modifying it directly. Until then, only modify the 2.x version.
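The conversion step can also be driven from Python via the lib2to3 package that powers the 2to3 command line tool. A sketch, with the caveat that lib2to3 was bundled with CPython only up to 3.12 and removed in 3.13:

```python
# Sketch: drive the converter behind the 2to3 command programmatically,
# via lib2to3 (bundled with CPython up to 3.12, removed in 3.13).
try:
    from lib2to3.refactor import RefactoringTool, get_fixers_from_package
except ImportError:
    RefactoringTool = None  # running on Python >= 3.13

if RefactoringTool is not None:
    # Load every standard fixer, as the 2to3 command does by default.
    tool = RefactoringTool(get_fixers_from_package('lib2to3.fixes'))
    py2_source = "print 'hello'\n"
    tree = tool.refactor_string(py2_source, '<example>')
    print(str(tree))  # the converted, Python 3 source
```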
This is the workflow highly recommended by the people who worked on 2to3. Unfortunately, I don't remember offhand the link to the document I gleaned all of this from.
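As for the metaclass case the question raises: one widely used portable idiom (the trick later packaged as six.with_metaclass) is to call the metaclass directly to build a base class, which sidesteps both the 2.x `__metaclass__` attribute and the 3.x `metaclass=` keyword. A sketch with a hypothetical registry metaclass:

```python
class Meta(type):
    """Example metaclass that records the names of classes it creates."""
    registry = []
    def __new__(mcls, name, bases, ns):
        cls = super(Meta, mcls).__new__(mcls, name, bases, ns)
        Meta.registry.append(name)
        return cls

# Calling the metaclass directly builds a base class, and this line
# works unchanged on both Python 2 and Python 3.
Base = Meta('Base', (object,), {})

class Reader(Base):
    pass

print(Meta.registry)  # ['Base', 'Reader']
```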