I have a large dataset with more than 500,000 date and time stamps that look like this:
date time
2017-06-25 00:31:53.993
2017-06-25 00:32:31.224
2017-06-25 00:33:11.223
2017-06-25 00:33:53.876
2017-06-25 00:34:31.219
2017-06-25 00:35:12.634
How do I round these timestamps off to the nearest second?
My code looks like this:
import datetime
import pandas as pd

readcsv = pd.read_csv(filename)
# Parse the string columns into datetime.date and datetime.time objects.
readcsv['date'] = pd.to_datetime(readcsv['date']).dt.date
readcsv['time'] = pd.to_datetime(readcsv['time']).dt.time
log_date = readcsv.date
log_time = readcsv.time
# Combine each date/time pair into a single datetime.datetime object.
timestamp = [datetime.datetime.combine(log_date[i], log_time[i]) for i in range(len(log_date))]
So now I have combined the dates and times into a list of datetime.datetime
objects that looks like this:
datetime.datetime(2017, 6, 25, 0, 31, 53, 993000)
datetime.datetime(2017, 6, 25, 0, 32, 31, 224000)
datetime.datetime(2017, 6, 25, 0, 33, 11, 223000)
datetime.datetime(2017, 6, 25, 0, 33, 53, 876000)
datetime.datetime(2017, 6, 25, 0, 34, 31, 219000)
datetime.datetime(2017, 6, 25, 0, 35, 12, 634000)
Where do I go from here?
The df.timestamp.dt.round('1s') call doesn't seem to work. Also, when I tried splitting the strings with .split() and rounding manually, I ran into trouble whenever the rounded seconds or minutes exceeded 59.
Many thanks
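One thing to note: the .dt accessor only exists on a pandas Series of dtype datetime64, so df.timestamp.dt.round('1s') will fail if timestamp is a plain Python list. A minimal sketch of the vectorized route, assuming the date and time column names from the sample above:

import pandas as pd

df = pd.read_csv(filename)
# Concatenate the two string columns, parse them in one vectorized call,
# and round the result to the nearest second.
df['timestamp'] = pd.to_datetime(df['date'] + ' ' + df['time']).dt.round('1s')

This skips the per-row combine() loop entirely, which matters at 500,000 rows.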
If you would rather stay in the standard library, Python's built-in math module provides ceil() and floor(), which round a number up and down to the nearest integer respectively. Applied to a Unix timestamp, floor() can round a datetime to the nearest second, as in the sketch below.
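A minimal sketch of that idea (the function name round_via_floor is mine, and it assumes naive datetimes interpreted in local time, since the round trip through timestamp()/fromtimestamp() relies on that):

import math
import datetime as dt

def round_via_floor(obj: dt.datetime) -> dt.datetime:
    # Adding 0.5 seconds and flooring rounds half up to the nearest second.
    # Assumes a naive datetime interpreted in local time.
    return dt.datetime.fromtimestamp(math.floor(obj.timestamp() + 0.5))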
Without any extra packages, a datetime object can be rounded to the nearest second with the following simple function:
import datetime as dt

def round_seconds(obj: dt.datetime) -> dt.datetime:
    # 500,000 microseconds or more rounds up to the next second.
    if obj.microsecond >= 500_000:
        obj += dt.timedelta(seconds=1)
    # Drop the fractional part.
    return obj.replace(microsecond=0)
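Applied to the timestamp list built in the question (a brief usage example, reusing the question's variable name):

rounded = [round_seconds(ts) for ts in timestamp]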