According to authors in 1, 2, and 3, Recall is the percentage of relevant items selected out of all the relevant items in the repository, while Precision is the percentage of relevant items out of those items selected by the query.
Therefore, assuming user U gets a top-k recommended list of items, the formulas would be something like:

Recall = (Relevant_Items_Recommended in top-k) / (Total_Relevant_Items)
Precision = (Relevant_Items_Recommended in top-k) / (k_Items_Recommended)
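The two formulas above can be sketched in a few lines of Python. The function name and the example sets are made up for illustration; only the two ratios come from the definitions above.

```python
# Sketch of Precision and Recall for one user's top-k list.
# `recommended_k` and `relevant` are hypothetical example collections.

def precision_recall_at_k(recommended_k, relevant):
    """Return (precision, recall) for a top-k recommendation list."""
    hits = len(set(recommended_k) & set(relevant))
    precision = hits / len(recommended_k)  # hits out of the k recommended
    recall = hits / len(relevant)          # hits out of all relevant items
    return precision, recall

# Example: k = 5 recommendations, 4 relevant items in the repository.
p, r = precision_recall_at_k(["a", "b", "c", "d", "e"], ["b", "d", "x", "y"])
print(p, r)  # 2 hits: precision = 2/5 = 0.4, recall = 2/4 = 0.5
```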
Up to that point everything is clear, but I do not understand the difference between these and recall rate@k. What would the formula to compute recall rate@k look like?
Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but fails to take into account the positions of the relevant documents among the top k.
[Figure: F1-score when precision = 1.0 and recall varies from 0.01 to 1.0.]
If you want to maximize recall, set the threshold below 0.5, e.g., somewhere around 0.2. With a threshold of 0.2, for example, a score of 0.3 is classified as an apple and a score of 0.1 is not. This increases the recall of the system. To favor precision instead, set the threshold to a much higher value, such as 0.6 or 0.7.
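The threshold trade-off can be demonstrated with a small sketch. The scores, labels, and function name here are invented for illustration; the point is only that lowering the threshold raises recall while raising it favors precision.

```python
# Sketch: how the decision threshold trades recall against precision.
# `scores` and `labels` are made-up classifier outputs.

def precision_recall(scores, labels, threshold):
    predicted = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(predicted, labels))
    fp = sum(p and not y for p, y in zip(predicted, labels))
    fn = sum((not p) and y for p, y in zip(predicted, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.8, 0.65, 0.4, 0.3, 0.1]
labels = [True, True, False, True, False, False]

# Low threshold: catches every positive (recall = 1.0) at lower precision.
print(precision_recall(scores, labels, 0.2))  # (0.6, 1.0)
# High threshold: only confident predictions (precision = 1.0), misses one.
print(precision_recall(scores, labels, 0.7))  # (1.0, 0.666...)
```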
Recall@k means you count the relevant documents among the top-k and divide it by the total number of relevant documents in the repository.
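That definition translates directly into code. The function name and ranked list below are hypothetical; the computation is exactly the count-and-divide described above.

```python
# Sketch of Recall@k as stated above, using hypothetical data.

def recall_at_k(ranked_results, relevant, k):
    """Relevant documents among the top-k, divided by the total
    number of relevant documents in the repository."""
    top_k = ranked_results[:k]
    return len(set(top_k) & set(relevant)) / len(relevant)

ranked = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d2", "d5", "d9", "d10"}
print(recall_at_k(ranked, relevant, 3))  # 1 of 4 relevant in top-3 -> 0.25
```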
Finally, I received an explanation from Prof. Yuri Malheiros (paper 1). Although recall rate@k, as cited in the papers mentioned in the question, seemed to be the normal recall metric applied to a top-k list, they are not the same. This metric is also used in paper 2 and paper 3.
The recall rate@k is a percentage that depends on the tests performed, i.e., on the number of recommendations, where each recommendation is a list of items, some correct and some not. Suppose we made 50 different recommendations; call that number R (regardless of the number of items in each recommendation). To calculate the recall rate, look at each of the 50 recommendations: whenever at least one recommended item in a list is correct, increment a counter, call it N. The recall rate@k is then simply N/R.
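The N/R procedure above can be sketched as follows. The function name and the example lists are hypothetical; the logic is exactly the count-the-lists-with-at-least-one-hit rule just described.

```python
# Sketch of recall rate@k as described above: the fraction of
# recommendations (each a top-k list) that contain at least one
# correct item. All data here is made up for illustration.

def recall_rate(recommendations, relevant_sets):
    """recommendations: R top-k lists; relevant_sets: the correct
    items for each recommendation. Returns N / R."""
    R = len(recommendations)
    N = sum(
        1 for rec, relevant in zip(recommendations, relevant_sets)
        if set(rec) & set(relevant)  # at least one recommended item is correct
    )
    return N / R

recs = [["a", "b"], ["c", "d"], ["e", "f"]]
truth = [{"b"}, {"x"}, {"e", "q"}]
print(recall_rate(recs, truth))  # 2 of the 3 lists contain a hit -> 2/3
```

Note the contrast with plain recall@k: a recommendation with three correct items counts exactly the same as one with a single correct item, since each list contributes at most 1 to N.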