The formula is y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1), where x is the known value, y is the unknown value, (x1, y1) is the known point just below x, and (x2, y2) is the known point just above x.
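The formula above translates directly into a small helper; here is a minimal sketch (the function name `lerp` is just an illustrative choice):

```python
def lerp(x, x1, y1, x2, y2):
    """Linearly interpolate the unknown y at a known x,
    given the surrounding points (x1, y1) and (x2, y2)."""
    return y1 + ((x - x1) / (x2 - x1)) * (y2 - y1)

# Example: x = 5 sits halfway between (0, 10) and (10, 20).
print(lerp(5, 0, 10, 10, 20))  # -> 15.0
```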
Interpolate Points is designed to work with data that changes slowly and smoothly over the landscape, such as temperature and pollution levels. It is not appropriate for data such as population or median income that changes very abruptly over short distances.
In data science and mathematics, interpolation means estimating a function's value at a point from its known values at other points in a sequence. The function may be written as f(x), with known values at x0 through xn.
A somewhat unconventional way to get what you desire is to build a global ranking of all your products using your 10k draws.
Use each draw as a source of binary contests between the 10 products, and sum the results of these contests over all draws.
This will give you a final "leaderboard" for your 10 products. From this you have relative utility across all consumers, or you can assign each product an absolute value based on its number of wins (and optionally the "strength" of the alternative in each contest).
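The contest-and-sum procedure can be sketched as follows. The draws here are simulated stand-ins (your real matrix of 10k preference draws would take their place), and the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for your posterior draws: 10k draws of utilities for
# 10 products (rows = draws, columns = products).
n_draws, n_products = 10_000, 10
utilities = rng.normal(size=(n_draws, n_products))

# For each draw, run every binary contest: product i beats product j
# in a draw when its utility is higher there. Summing over all draws
# gives a total win count per product.
wins = np.zeros(n_products, dtype=int)
for i in range(n_products):
    for j in range(n_products):
        if i != j:
            wins[i] += int((utilities[:, i] > utilities[:, j]).sum())

# The leaderboard: product indices sorted by total wins, best first.
leaderboard = np.argsort(wins)[::-1]
print(wins)
print(leaderboard)
```

The win counts themselves serve as the "absolute value" mentioned above; the sort order gives the relative ranking.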
When you want to test a new product with a different attribute profile, find its sparsest representation as a weighted vector sum of the other sample products; you can then run the contests again with the win probabilities weighted by the contribution weights of the component attribute vectors.
The advantage of this is that simulating the contests is efficient, and the global ranking, combined with representing new products as sparse vector sums of existing ones, leaves plenty of room for interpretation of the results, which is useful when you are considering strategies to beat a competitor's product attributes.
To find a sparse (descriptive) representation of your new product y, solve Ax = y, where A is the matrix whose columns are the attribute vectors of your existing products, y is the attribute vector of the new product, and x is the vector of contribution weights from your existing products. You want to minimize the number of non-zero entries in x. See Donoho's work on the fast homotopy method (similar in spirit to a power iteration) for solving the l0/l1 minimization quickly to find sparse representations.
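A minimal sketch of the idea, using greedy orthogonal matching pursuit as a simple stand-in for the faster homotopy method cited above (the data here is synthetic and the names are illustrative):

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Approximate y as a sparse combination of the columns of A,
    using at most k nonzero weights (orthogonal matching pursuit)."""
    m, n = A.shape
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit least squares on the chosen columns only.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Toy check: a new product built from two existing products should be
# recovered with two nonzero weights.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 5))       # 5 existing products, 30 attributes each
y = 0.6 * A[:, 1] + 0.4 * A[:, 4]  # new product = weighted mix of two
x = omp(A, y, k=2)
print(np.flatnonzero(x))           # indices of the contributing products
```

The recovered weights in x are then the contribution weights used to reweight the contests for the new product.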
When you have this (or a weighted average of sparse representations) you can reason usefully about the performance of your new product based on the model set up by your existing preference draws.
The advantage of a sparse representation is that it stays interpretable; moreover, the more products you have, the more likely a new product is to be sparsely representable by them. So you can scale to big matrices and still get useful results with a fast algorithm.