Suppose I have this data:
data = {'site': ['ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY'],
'usage_date': ['2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-08-25', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01', '2019-09-01'],
'item_id': ['COR30013', 'PAC10463', 'COR30018', 'PAC10958', 'PAC11188', 'PAC20467', 'COR20275', 'PAC20702', 'COR30020', 'PAC10137', 'PAC10445', 'COR30029', 'COR30025', 'PAC10457', 'COR10746', 'PAC11136', 'COR10346', 'PAC11050', 'PAC11132', 'PAC11135', 'PAC10964', 'COR10439', 'PAC11131', 'COR10695', 'PAC11128', 'COR10433', 'COR10432', 'PAC11051', 'PAC10137', 'COR10695', 'COR30029', 'COR10346', 'COR10432', 'COR10746', 'COR10439', 'COR10433', 'COR20275', 'COR30020', 'COR30018', 'PAC11135', 'PAC10964', 'PAC11136', 'PAC10445', 'PAC11050', 'PAC11132', 'PAC20467', 'PAC11188', 'PAC10463', 'PAC20702', 'PAC10457', 'PAC10958', 'PAC11051', 'PAC11128', 'PAC11131'],
'start_count':[400.0, 96000.0, 315.0, 45000.0, 2739.0, 2232.0, 2800.0, 283500.0, 280.0, 200000.0, 96000.0, 481.0, 600.0, 18000.0, 400.0, 5500.0, 1200.0, 5850.0, 5500.0, 5500.0, 36000.0, 600.0, 5500.0, 550.0, 300.0, 4800.0, 1800.0, 1800.0, 108000.0, 500.0, 481.0, 1200.0, 1800.0, 400.0, 600.0, 3300.0, 2800.0, 455.0, 315.0, 5500.0, 36000.0, 5500.0, 96000.0, 5400.0, 5500.0, 2232.0, 2739.0, 96000.0, 283500.0, 18000.0, 72000.0, 1800.0, 300.0, 5500.0],
'received_total': [0.0, 0.0, 0.0, 0.0, 3168.0, 0.0, 0.0, 0.0, 280.0, 0.0, 0.0, 0.0, 0.0, 0.0, 400.0, 0.0, 1800.0, 0.0, 0.0, 0.0, 0.0, 400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3600.0, 0.0, 0.0, 0.0, 1800.0, 2400.0, 400.0, 400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1800.0, 0.0, 0.0, 3168.0, 0.0, 0.0, 0.0, 45000.0, 3600.0, 0.0, 0.0],
'end_count': [240.0, 84000.0, 280.0, 27000.0, 3432.0, 2160.0, 2000.0, 90000.0, 455.0, 108000.0, 96000.0, 437.0, 500.0, 9000.0, 600.0, 5500.0, 1950.0, 4950.0, 5500.0, 5500.0, 36000.0, 600.0, 5500.0, 550.0, 270.0, 3300.0, 1200.0, 4200.0, 192000.0, 450.0, 350.0, 1890.0, 3600.0, 600.0, 525.0, 2835.0, 1600.0, 420.0, 187.0, 5500.0, 36000.0, 5500.0, 96000.0, 6750.0, 5500.0, 1992.0, 1881.0, 84000.0, 58500.0, 9000.0, 85500.0, 3300.0, 252.0, 5500.0]}
import pandas as pd

df_sample = pd.DataFrame(data=data)
For each item_id, we need to check whether the current (9/1/2019) end_count is greater than the previous (8/25/2019) end_count while the current received_total is 0, which indicates a bad count.
I have this code, which works:
def check_end_count(df):
    l = []
    # assumes rows within each group are ordered with the previous date first
    for loc, df_loc in df.groupby(['site', 'item_id']):
        try:
            ending_count_previous = df_loc['end_count'].iloc[0]
            ending_count_current = df_loc['end_count'].iloc[1]
            received_total_current = df_loc['received_total'].iloc[1]
            # a rising end_count with nothing received means a bad count
            if ending_count_current > ending_count_previous and received_total_current == 0:
                l.append("Ending count discrepancy")
                l.append("Ending count discrepancy")
            else:
                l.append("Good Row")
                l.append("Good Row")
        except IndexError:  # the item only appears on one of the two dates
            l.append("Nothing to compare")
    df['ending_count_check'] = l
    return df
df_sample = check_end_count(df_sample)
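On the sample data, the only flagged item should be PAC10137, whose end_count rises from 108000.0 to 192000.0 while its received_total stays 0:
# sanity check: show the rows the function flagged
print(df_sample[df_sample['ending_count_check'] == 'Ending count discrepancy'])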
But it's not very Pythonic. Also, in my case I have to check across a series of dates, for which I have this list of date pairs:
print(sliding_window_dates[:3])
[array(['2019-08-25', '2019-09-01'], dtype=object),
array(['2019-09-01', '2019-09-08'], dtype=object),
array(['2019-09-08', '2019-09-15'], dtype=object)]
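(For reference, pairs like these are just consecutive entries of the sorted unique dates, so a list in this shape can be built with something like the sketch below, where df_test is the larger dataframe used next.)
import numpy as np

# consecutive (previous, current) date pairs over the sorted unique dates
dates = sorted(df_test['usage_date'].unique())
sliding_window_dates = [np.array(pair, dtype=object) for pair in zip(dates, dates[1:])]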
So what I am trying to do is the following on the larger dataframe:
df_list = []
for date1, date2 in sliding_window_dates:
    df_check = df_test[(df_test['usage_date'] == date1) | (df_test['usage_date'] == date2)]
    for loc, df_loc in df_check.groupby(['sort_center', 'item_id']):
        df_list.append(check_end_count(df_loc))
But again I am doing this with two for loops, so I assume there must be a better way. Any suggestions are appreciated.
Whenever I see a problem that requires comparison across dates with particular properties, I immediately think "what is the correct dataframe index?". In this case, using a good index and some restructuring makes the problem pretty easy.
I did
indexed = df_sample.set_index(["site", "item_id", "usage_date"]).unstack("usage_date")
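This reshapes the data to one row per (site, item_id) and a two-level column index keyed by (measure, usage_date), which you can confirm with:
# confirm the reshaped structure
print(list(indexed.index.names))    # ['site', 'item_id']
print(list(indexed.columns.names))  # [None, 'usage_date']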
Then, with
current = '2019-09-01'
previous = '2019-08-25'
We can word the condition almost 1-to-1 with the problem statement:
if the current ... end_count is greater than the previous ... end_count and we have a current received_total of 0 ... there is a bad count.
bad_rows = (
    (indexed[("end_count", current)] > indexed[("end_count", previous)])
    & (indexed[("received_total", current)] == 0)
)
indexed[bad_rows]
This gives:
             start_count            received_total            end_count
usage_date    2019-08-25 2019-09-01     2019-08-25 2019-09-01 2019-08-25 2019-09-01
site item_id
ACY  PAC10137   200000.0   108000.0            0.0        0.0   108000.0   192000.0
Now, for the multi-date case, you can do this:
from itertools import pairwise  # Python 3.10+

for previous, current in pairwise(sorted(indexed.columns.levels[1])):
    indexed[("bad", current)] = (
        (indexed[("end_count", current)] > indexed[("end_count", previous)])
        & (indexed[("received_total", current)] == 0)
    )
To get back a dataframe in your original long form (but with a new bad column), you can just stack the dates back into the index:
df_with_bad_row_cols = indexed.stack("usage_date").reset_index()
Note that the earliest date has no previous date to compare against, so its bad entries come out as NaN.
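From there, pulling out just the flagged rows is a plain boolean filter (the fillna handles those NaN entries; this assumes the default stack behavior, where partially missing rows are kept):
# keep only rows flagged as bad, treating the earliest date's NaN as "not bad"
flagged = df_with_bad_row_cols[df_with_bad_row_cols["bad"].fillna(False).astype(bool)]
print(flagged[["site", "item_id", "usage_date", "end_count", "received_total"]])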