A Study of Evaluation Metrics for Recommender Algorithms

Jennifer Redpath, CM Shapcott, SI McClean, Liming Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

There are inherent problems with evaluating the accuracy of recommender systems. Commonly-used metrics for recommender systems depend on the number of recommendations produced and the number of hidden items withheld, making it difficult to directly compare one system with another. In this paper we compare recommender algorithms using two datasets: the standard MovieLens dataset and an e-commerce dataset with implicit ratings based on browsing behaviour. We introduce a measure that aids in the comparison and show how to compare results with baseline predictions based on random recommendation selections.
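For illustration only (not taken from the paper): a minimal sketch of how a top-N accuracy metric such as precision@N, which depends on the number of recommendations produced and the number of withheld (hidden) items, can be compared against a baseline of randomly selected recommendations. All names, item identifiers, and values below are hypothetical.

```python
import random


def precision_at_n(recommended, hidden, n):
    """Fraction of the top-n recommendations that appear in the withheld (hidden) set."""
    top_n = recommended[:n]
    hits = sum(1 for item in top_n if item in hidden)
    return hits / n


def random_baseline(candidates, hidden, n, trials=1000, seed=0):
    """Expected precision@n when n items are drawn uniformly at random from the candidates."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        picks = rng.sample(candidates, n)
        total += sum(1 for item in picks if item in hidden) / n
    return total / trials


# Toy setup: 100 candidate items, 10 of which are withheld as hidden test items.
catalogue = list(range(100))
hidden = set(range(10))
recommended = [0, 1, 2, 50, 51, 52, 53, 54, 55, 56]  # a hypothetical ranked recommendation list

n = 5
print(f"precision@{n} (recommender):     {precision_at_n(recommended, hidden, n):.3f}")
print(f"precision@{n} (random baseline): {random_baseline(catalogue, hidden, n):.3f}")
```

Because both the recommender's score and the random baseline are computed for the same N and the same hidden set, the gap between them gives a comparison that is less sensitive to those experimental choices than the raw metric alone.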
Original language: English
Title of host publication: Unknown Host Publication
Publisher: AICS
Number of pages: 10
Publication status: Published (in print/issue) - 25 Aug 2008
Event: The 19th Irish Conference on Artificial Intelligence and Cognitive Science
Duration: 25 Aug 2008 → …

Conference

Conference: The 19th Irish Conference on Artificial Intelligence and Cognitive Science
Period: 25/08/08 → …

