I remember that one of the strong points of LightFM is that the model does not suffer from the cold-start problem, for both users and items: lightfm original paper
However, I still don't understand how to use LightFM to address the cold-start problem. I trained my model on user-item interaction data. As I understand it, I can only make predictions for profile_ids that already exist in my data set.
def predict(self, user_ids, item_ids, item_features=None,
            user_features=None, num_threads=1):
    """
    Compute the recommendation score for user-item pairs.

    Arguments
    ---------
    user_ids: integer or np.int32 array of shape [n_pairs,]
        single user id or an array containing the user ids for the
        user-item pairs for which a prediction is to be computed
    item_ids: np.int32 array of shape [n_pairs,]
        an array containing the item ids for the user-item pairs for which
        a prediction is to be computed.
    user_features: np.float32 csr_matrix of shape [n_users, n_user_features], optional
        Each row contains that user's weights over features.
    item_features: np.float32 csr_matrix of shape [n_items, n_item_features], optional
        Each row contains that item's weights over features.
    num_threads: int, optional
        Number of parallel computation threads to use. Should
        not be higher than the number of physical cores.

    Returns
    -------
    np.float32 array of shape [n_pairs,]
        Numpy array containing the recommendation scores for pairs defined
        by the inputs.
    """

    self._check_initialized()

    if not isinstance(user_ids, np.ndarray):
        user_ids = np.repeat(np.int32(user_ids), len(item_ids))

    assert len(user_ids) == len(item_ids)

    if user_ids.dtype != np.int32:
        user_ids = user_ids.astype(np.int32)
    if item_ids.dtype != np.int32:
        item_ids = item_ids.astype(np.int32)

    n_users = user_ids.max() + 1
    n_items = item_ids.max() + 1

    (user_features,
     item_features) = self._construct_feature_matrices(n_users,
                                                        n_items,
                                                        user_features,
                                                        item_features)

    lightfm_data = self._get_lightfm_data()

    predictions = np.empty(len(user_ids), dtype=np.float64)

    predict_lightfm(CSRMatrix(item_features),
                    CSRMatrix(user_features),
                    user_ids,
                    item_ids,
                    predictions,
                    lightfm_data,
                    num_threads)

    return predictions
Any suggestions or pointers to help my understanding would really be appreciated. Thank you!
LightFM, like any other recommender algorithm, cannot make predictions about entirely new users if it is not given additional information about those users. The trick when trying to make recommendations for new users is to describe them in terms of the features that the algorithm has seen during training.
This is probably best explained using an example. Suppose you have users with IDs between 0 and 10 in your training set, and you want to make predictions for a new user, ID 11. If all you had was the ID of the new user, the algorithm would not be able to make predictions: after all, it knows nothing about what the preferences of user 11 are. Suppose, however, that you have some features to describe the users: maybe during the sign-up process every user chooses a number of interests they have (horror movies or romantic comedies, for example). If these features are present during training, the algorithm can learn what preferences are, on average, associated with these characteristics, and will be able to produce recommendations for any new users who can be described using the same characteristics. In this example, you would be able to make predictions for user 11 if you could supply the preferences they chose during the sign-up process.
In the LightFM implementation, all of these features will be encoded in the feature matrices, probably in the form of one-hot encoding. When making recommendations for user 11, you would construct a new feature matrix for that user: as long as that feature matrix contains only the features present during training, you will be able to make predictions.
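To make this concrete, here is a minimal, self-contained sketch (not from the original question) of the pure feature-based setup. The interaction data, the three feature names, and all toy values are made up for illustration, and the model is trained without per-user indicator columns, so the new user is represented entirely by features seen during training.

import numpy as np
from scipy.sparse import csr_matrix
from lightfm import LightFM

n_users, n_items = 10, 5
rng = np.random.default_rng(42)

# Toy implicit-feedback interactions for training users 0-9 over items 0-4.
interactions = csr_matrix(
    (rng.random((n_users, n_items)) > 0.7).astype(np.float32)
)

# Three hypothetical user features ('horror', 'romcom', 'documentary'),
# one row of feature weights per training user.
user_features = csr_matrix(
    (rng.random((n_users, 3)) > 0.5).astype(np.float32)
)

model = LightFM(loss="warp", no_components=16)
model.fit(interactions, user_features=user_features, epochs=10)

# A brand-new user is described only by the same three features
# (here: likes horror and documentaries). Their single feature row is
# row 0 of the matrix we pass to predict, so we refer to them as user id 0.
new_user_features = csr_matrix(
    np.array([[1.0, 0.0, 1.0]], dtype=np.float32)
)

scores = model.predict(
    user_ids=0,
    item_ids=np.arange(n_items),
    user_features=new_user_features,
)
print(np.argsort(-scores))  # items ranked best-first for the new user

Because the model never saw an indicator column for this user, the scores are driven entirely by the embeddings it learned for the 'horror' and 'documentary' features during training.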
Note that it is normally useful to have a feature that corresponds only to a single user: an 'Is user 0' feature, an 'Is user 1' feature, and so on. For new users, however, such a feature is useless, as there is no information from training that the model could use to learn an embedding for it.
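For completeness, here is a small sketch of what that hybrid feature layout could look like; the ordering (indicator block first, metadata columns after) and the feature names are assumptions for illustration, not a format the library requires.

import numpy as np
from scipy.sparse import identity, hstack, csr_matrix

n_users, n_meta_features = 10, 3
rng = np.random.default_rng(0)

# Metadata features ('horror', 'romcom', 'documentary') for the 10 training users.
meta = csr_matrix(
    (rng.random((n_users, n_meta_features)) > 0.5).astype(np.float32)
)

# Training-time layout: [per-user indicator block | metadata block],
# shape (10, 10 + 3). Column i of the indicator block is the 'Is user i' feature.
train_user_features = hstack(
    [identity(n_users, dtype=np.float32, format="csr"), meta],
    format="csr",
)

# A new (11th) user has no trained indicator embedding, so their indicator
# block stays all zeros; only the metadata columns carry information.
new_user_row = hstack(
    [csr_matrix((1, n_users), dtype=np.float32),                   # empty indicator part
     csr_matrix(np.array([[1.0, 0.0, 1.0]], dtype=np.float32))],   # metadata weights
    format="csr",
)

# new_user_row can then be passed as user_features (with user_ids=0) to
# predict on a model that was fitted with train_user_features.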