
An Evaluation Metric for Content Providing Models, Recommendation Systems, and Online Campaigns

  • Kourosh Modarresi
  • Jamie Diner
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11537)

Abstract

Creating an optimal digital experience for users requires providing them with desirable content and delivering that content at the optimal time, as the user's experience and interaction take place. Multiple metrics and variables may determine the success of a "user digital experience". These metrics may include accuracy, computational cost, and other variables. Many of these variables may contradict one another (as explained later in this submission), and their importance may depend on the specific application the digital experience optimization is pursuing. To deal with this intertwined, possibly contradictory, and confusing set of metrics, this work introduces a generalized index entailing all possible metrics and variables that may be significant in defining a successful "digital experience design model". Besides its generalizability (it may include any metric the marketers or scientists consider important), this new index allows marketers and scientists to assign different weights to the corresponding metrics, since the significance of a specific metric may depend on the specific application. The index is very flexible and can be adjusted as the objective of "user digital experience optimization" changes.
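As a hedged illustration of the generalized index described above (the paper does not give a concrete formula here, so the function name, metric names, and numbers below are hypothetical), one natural form is a weighted combination of normalized metric scores, where the weights encode each metric's importance for the application at hand:

```python
# Hypothetical sketch of a generalized evaluation index: a weighted
# combination of metric scores in [0, 1], with weights chosen by the
# marketer or scientist to reflect each metric's importance.
def generalized_index(metrics, weights):
    """Combine per-metric scores into a single weighted index.

    `metrics` and `weights` are dicts keyed by metric name. Weights are
    normalized by their sum, so the index also lies in [0, 1].
    """
    total = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in metrics) / total

# Example: accuracy is weighted more heavily than computational speed.
scores = {"accuracy": 0.9, "speed": 0.6}    # higher is better for both
weights = {"accuracy": 0.8, "speed": 0.2}
print(round(generalized_index(scores, weights), 2))  # 0.84
```

Changing the weights (e.g., favoring speed for a latency-sensitive application) changes the index without changing the underlying metrics, which is the flexibility the abstract emphasizes.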

Here, we use "recommendation" as equivalent to "content providing" throughout the submission. One well-known use of recommender systems is providing content such as products, ads, goods, network connections, services, and so on. Recommender systems also have wide and broad applications beyond this, and, in general, many problems and applications in AI and machine learning can easily be converted to an equivalent recommender-system formulation. This feature increases the significance of recommender systems as an important application of AI and machine learning.

The introduction of the internet has brought a new dimension to the ways businesses sell their products and interact with their customers. The ubiquity of the web, and consequently of web applications, is soaring, and as a result much of commerce and customer experience takes place online. Many companies offer their products exclusively or predominantly online. At the same time, many present and potential customers spend much of their time online, so businesses try to use efficient models to interact with online users and engage them in various desired initiatives. This interaction with online users is crucial for businesses that hope to see desired outcomes such as purchases, conversions of any type, page views, longer time spent on the business's pages, and so on.

Recommendation systems are among the main tools for achieving these outcomes. The basic idea of a recommender system is to estimate the probability of a desired action by a specific user. Knowing this probability, one can decide which initiatives to take to maximize the desirable outcomes of the online user's actions. Such initiatives could include promotional offers (sending coupons, cash, …) or communication with the customer through all available media, such as mail, email, online ads, etc. The main goal of a recommendation or targeting model is to increase outcomes such as "conversion rate", "length of stay on the site", "number of views", and so on. Many other direct or indirect metrics are influenced by recommender systems. Examples include increased sales of products that were not the direct goal of the recommendations, a greater chance of the customer returning to the site, increased brand awareness, and a better chance of retargeting the same user at a later time.
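The decision step described above (choosing an initiative once the action probability is known) can be sketched as picking the option with the highest expected net value. This is a minimal illustration, not the authors' model; the initiative names, probabilities, values, and costs are invented for the example:

```python
# Hypothetical sketch: choose the initiative that maximizes
# expected net value = P(desired action | initiative) * value of
# the action, minus the initiative's cost. All numbers illustrative.
def best_initiative(initiatives):
    """Return the name of the initiative with the highest expected net value."""
    return max(
        initiatives,
        key=lambda name: (
            initiatives[name]["p_action"] * initiatives[name]["value"]
            - initiatives[name]["cost"]
        ),
    )

options = {
    "coupon": {"p_action": 0.10, "value": 50.0, "cost": 2.0},
    "email":  {"p_action": 0.04, "value": 50.0, "cost": 0.1},
    "ad":     {"p_action": 0.06, "value": 50.0, "cost": 1.0},
}
print(best_initiative(options))  # coupon
```

In practice the probabilities would come from the recommender model itself, and the "value" would encode whichever outcome (conversion, page views, time on site) the campaign is optimizing.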

Keywords

Recommendation systems · Machine learning · Artificial intelligence


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Adobe Inc., San Jose, USA
