
ALS model - predicted full_u * v^t * v ratings are very high

I'm predicting ratings in between processes that batch train the model. I'm using the approach outlined here: ALS model - how to generate full_u * v^t * v?

! rm -rf ml-1m.zip ml-1m
! wget --quiet http://files.grouplens.org/datasets/movielens/ml-1m.zip
! unzip ml-1m.zip
! mv ml-1m/ratings.dat .

from pyspark.mllib.recommendation import Rating

# Parse MovieLens lines of the form user::movie::rating::timestamp.
ratingsRDD = sc.textFile('ratings.dat') \
               .map(lambda l: l.split("::")) \
               .map(lambda p: Rating(
                                  user = int(p[0]),
                                  product = int(p[1]),
                                  rating = float(p[2]))).cache()

from pyspark.mllib.recommendation import ALS

rank = 50          # number of latent factors
numIterations = 20
lambdaParam = 0.1  # regularization
model = ALS.train(ratingsRDD, rank, numIterations, lambdaParam)

Then extract the product features ...

import numpy as np

# Collect the product-factor matrix, keeping keys and rows in matching order.
pf = model.productFeatures()

pf_vals = pf.sortByKey().values().collect()
pf_keys = pf.sortByKey().keys().collect()

# One row per product, `rank` columns.
Vt = np.matrix(np.asarray(pf_vals))

# Dense rating vector for the new user, indexed by position in pf_keys.
full_u = np.zeros(len(pf_keys))

def set_rating(pf_keys, full_u, key, val):
    # Set the new user's rating for product `key`, ignoring unknown ids.
    try:
        idx = pf_keys.index(key)
        full_u.itemset(idx, val)
    except ValueError:
        pass

set_rating(pf_keys, full_u, 260, 9)   # Star Wars (1977)
set_rating(pf_keys, full_u, 1,   8)   # Toy Story (1995)
set_rating(pf_keys, full_u, 16,  7)   # Casino (1995)
set_rating(pf_keys, full_u, 25,  8)   # Leaving Las Vegas (1995)
set_rating(pf_keys, full_u, 32,  9)   # Twelve Monkeys (a.k.a. 12 Monkeys) (1995)
set_rating(pf_keys, full_u, 335, 4)   # Flintstones, The (1994)
set_rating(pf_keys, full_u, 379, 3)   # Timecop (1994)
set_rating(pf_keys, full_u, 296, 7)   # Pulp Fiction (1994)
set_rating(pf_keys, full_u, 858, 10)  # Godfather, The (1972)
set_rating(pf_keys, full_u, 50,  8)   # Usual Suspects, The (1995)

# Project the new user's dense ratings into latent space and back out:
# (1 x n) * (n x rank) * (rank x n) -> one score per product.
recommendations = full_u*Vt*Vt.T

top_ten_ratings = list(np.sort(recommendations)[:,-10:].flat)

print("predicted rating value", top_ten_ratings)

top_ten_recommended_product_ids = np.where(recommendations >= np.sort(recommendations)[:,-10:].min())[1]
top_ten_recommended_product_ids = list(np.array(top_ten_recommended_product_ids))

print("predict rating prod_id", top_ten_recommended_product_ids)

However the predicted ratings seem way too high:

('predicted rating value', [313.67320347694897, 315.30874327316576, 317.1563289268388, 317.45475214423948, 318.19788673744563, 319.93044594688428, 323.92448427140653, 324.12553531632761, 325.41052886977582, 327.12199687047649])
('predict rating prod_id', [49, 287, 309, 558, 744, 802, 1839, 2117, 2698, 3111])

This appears to be incorrect. Any tips appreciated.

asked Jan 10 '17 by Chris Snow


1 Answer

I think the approach mentioned would work if you only care about the ranking of the movies. If you want actual rating values, though, something seems to be off in terms of dimension/scaling.

The idea here is to guess the latent representation of your new user. Normally, for a user i already in the factorization, you have their latent representation u_i (the i-th row of model.userFeatures()), and you get their rating for a given movie j using model.predict, which basically multiplies u_i by the latent representation of the product, v_j. You can get all of that user's predicted ratings at once by multiplying with the whole v: u_i*v.
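
A minimal sketch of that existing-user case (user_id below is illustrative, not from the question; it assumes the model and pf_vals from above are in scope):

import numpy as np

user_id = 1
# Latent row for an existing user; userFeatures() is an RDD of (id, features).
u_i = np.array(model.userFeatures().lookup(user_id)[0])  # shape: (rank,)
V = np.asarray(pf_vals)                                  # shape: (num_products, rank)

one_score = float(u_i.dot(V[0]))  # comparable to model.predict(user_id, pf_keys[0])
all_scores = u_i.dot(V.T)         # one predicted rating per product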

For a new user, you have to guess their latent representation u_new from full_u_new. Basically you want 50 coefficients that represent your new user's affinity towards each of the latent product factors. For simplicity, and because it was enough for my implicit-feedback use case, I simply used the dot product, basically projecting the new user onto the product latent factors: full_u_new*V^t gives you 50 coefficients, coefficient i being how much your new user looks like product latent factor i; this works especially well with implicit feedback. So the dot product will give you that, but it won't be scaled, and that explains the high scores you are seeing. To get usable scores you need a more accurately scaled u_new; I think you could get that using cosine similarity, like they did here: https://github.com/apache/incubator-predictionio/blob/release/0.10.0/examples/scala-parallel-recommendation/custom-query/src/main/scala/ALSAlgorithm.scala
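
Here is one rough sketch of that cosine idea (my own reading, not the PredictionIO code): scale each projection coefficient by the norms involved, so the coefficients stay bounded instead of growing with the number of rated movies. It assumes full_u and Vt from the question are in scope.

import numpy as np

V = np.asarray(Vt)                            # shape: (num_products, rank)

u_norm = np.linalg.norm(full_u)
col_norms = np.linalg.norm(V, axis=0)         # one norm per latent factor
col_norms[col_norms == 0] = 1.0               # guard against degenerate factors

# Cosine between the rating vector and each factor column, bounded in [-1, 1].
u_new = full_u.dot(V) / (u_norm * col_norms)
scores = u_new.dot(V.T)                       # one better-scaled score per product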

The approach mentioned by @ScottEdwards2000 in the comments is interesting too, but rather different. You could indeed look for the most similar user(s) in your training set; if there is more than one, you could average their representations. I don't think it would do too badly, but it is a really different approach, and you need the full rating matrix to find the most similar user(s). Getting one close user should definitely solve the scaling problem. If you manage to make both approaches work, you could compare the results, as in the sketch below.
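
For concreteness, a rough sketch of that nearest-user idea; R and user_ids are hypothetical (a dense num_users x num_products rating matrix aligned with pf_keys, plus the matching list of user ids), which is exactly the full-rating-matrix cost mentioned above:

import numpy as np

def most_similar_user(full_u, R):
    # Cosine similarity between the new user's ratings and every training user.
    norms = np.linalg.norm(R, axis=1) * np.linalg.norm(full_u)
    norms[norms == 0] = 1.0
    sims = R.dot(full_u) / norms
    return int(np.argmax(sims))

# best = most_similar_user(full_u, R)
# u_new = np.array(model.userFeatures().lookup(user_ids[best])[0])
# scores = u_new.dot(np.asarray(Vt).T)   # now on the learned rating scale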

answered Sep 20 '22 by yoh.lej