I want to specify a sequence of large integers (with many zeros) like:
a = [1e13, 1e14, 1e19, ...]
My intuition is to use scientific notation, but in Python that produces a float, not an integer. Is there an easy way in Python to write these integer literals without writing out all the zeros? Making sure the number of zeros is correct is a nightmare.
I believe I can cast the floats back to integers using int(), but I just wonder if there is a better way?
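A quick check (assuming standard IEEE-754 doubles, as in CPython) suggests the cast can silently go wrong once the float is no longer exact:

print(int(1e19) == 10**19)  # True  -- 1e19 is still an exact double
print(int(1e23) == 10**23)  # False -- 1e23 is not exactly representable
print(int(1e23))            # 99999999999999991611392, not 10**23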
Integer literals are numbers that have neither a decimal point nor an exponent part. They can be written as decimal integer literals or hexadecimal integer literals, among other bases. Integer literals frequently have prefixes indicating the base, and less frequently suffixes indicating the type. For example, in C++ 0x10ULL denotes the value 16 (because hexadecimal) as an unsigned long long integer. Common prefixes include 0x or 0X for hexadecimal (base 16), 0o (Python) or a leading 0 (C) for octal (base 8), and 0b for binary (base 2).
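A quick illustration of these base-prefixed forms in Python (note Python uses 0o for octal and has no type suffixes like C++'s ULL):

print(0x10)     # 16 -- hexadecimal
print(0o20)     # 16 -- octal
print(0b10000)  # 16 -- binary
print(0x10 == 0o20 == 0b10000)  # True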
For future viewers: since Python 3.6, PEP 515 (underscores in numeric literals) is part of the language. So you can write
a = 1_000_000_000_000
for better code readability.
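Applied to the list from the question, a sketch (the concrete values are assumed from the 1e13/1e14/1e19 exponents shown there):

a = [
    10_000_000_000_000,          # 1e13
    100_000_000_000_000,         # 1e14
    10_000_000_000_000_000_000,  # 1e19
]
print(all(isinstance(x, int) for x in a))  # True -- exact ints, no floats
print(a == [10**13, 10**14, 10**19])       # True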