I have the following function:
def calculate(blob, count_per_data):
    return geometric_mean([score_per_count[count_per_data[data]]
                           for data in combinations(blob)])
The problem with my code is that if data is not found in count_per_data, I get an exception. Instead, I want count_per_data["unknown"] to evaluate to 0, i.e. "unknown"'s count is 0.
In turn, the key 0 does exist in score_per_count, and it maps to a non-zero value. In other words, the score associated with a count of 0 is not itself 0.
How would you recommend I fix the above code to achieve my goal?
If you want to make sure that data exists in count_per_data and that the corresponding count exists in score_per_count, you can use the list comprehension as a filter, like this:
return geometric_mean([score_per_count[count_per_data[data]] for data in combinations(blob) if data in count_per_data and count_per_data[data] in score_per_count])
A more readable version:
return geometric_mean([score_per_count[count_per_data[data]]
for data in combinations(blob)
if data in count_per_data and count_per_data[data] in score_per_count])
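Here is a runnable sketch of the filtering approach, using made-up dictionaries and a plain list of items in place of combinations(blob) (geometric_mean comes from the standard statistics module; your actual helpers may differ):

```python
from statistics import geometric_mean

# Hypothetical example data, not from the original question.
count_per_data = {"a": 1, "b": 2}          # "unknown" is deliberately absent
score_per_count = {1: 1.0, 2: 4.0}         # note: no entry for count 0

def calculate(items):
    # Items whose key (or whose count) has no mapping are silently skipped.
    return geometric_mean([score_per_count[count_per_data[data]]
                           for data in items
                           if data in count_per_data
                           and count_per_data[data] in score_per_count])

print(calculate(["a", "b", "unknown"]))    # "unknown" is filtered out
```

Note that filtering drops the unknown item entirely, which changes the number of factors in the geometric mean; that is different from mapping unknowns to a count of 0.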
But if you want to use a default value when a key is not found in a dictionary, then you can use dict.get. Quoting from the dict.get docs:
get(key[, default])
Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.
You can use it like this:
count_per_data.get(data, 0)
If data is not found in count_per_data, 0 will be used instead.
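Putting it together, a runnable sketch with dict.get and made-up data (again substituting statistics.geometric_mean and a plain list for your geometric_mean and combinations(blob)):

```python
from statistics import geometric_mean

# Hypothetical example data, not from the original question.
count_per_data = {"a": 1, "b": 2}          # "c" is deliberately missing
score_per_count = {0: 0.5, 1: 1.0, 2: 4.0} # a count of 0 scores 0.5, not 0

def calculate(items):
    # Missing keys fall back to a count of 0, which score_per_count
    # maps to a non-zero score, as the question requires.
    return geometric_mean(score_per_count[count_per_data.get(data, 0)]
                          for data in items)

print(calculate(["a", "b", "c"]))
```

Unlike the filtering version, every item still contributes a factor to the mean; unknown items just contribute the score for a count of 0.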