ACM RecSys CrowdRec 2015 Workshop

Crowdsourcing and Human Computation for Recommender Systems

Call for Papers


CrowdRec 2015 will be held at the 9th ACM Conference on Recommender Systems in Vienna
http://recsys.acm.org/recsys15

Submission site:
https://easychair.org/conferences/?conf=crowdrec2015

Human computation is the application of human intelligence to solve problems that computers cannot yet solve. Crowdsourcing scales up the power of human intelligence, by calling on a large number of human contributors, referred to as the Crowd.

Recently, many areas of research have awakened to the potential of techniques that gather input from human contributors. The opportunities are particularly promising for recommender systems, whose reliance on vast quantities of expressed human preference, e.g., ratings, already qualifies them as a crowd-driven technology. By focusing so heavily on preference data alone, however, today's recommender systems stop short of actively integrating the full potential of human intelligence.

The purpose of the CrowdRec workshop is to provide a forum for exchange and discussion on how human intelligence and crowd techniques can be used to improve recommender systems.

A wide range of possibilities exists for effectively collecting intelligent input from humans and for incentivizing the Crowd to make specific contributions. Collection of input can occur in social communities, via large online crowdsourcing platforms such as Mechanical Turk, or by way of a variety of applications that use principles of gamification to engage users. Crowdmembers can directly contribute information (such as comments and reviews), can validate information (such as tags or descriptions), or can provide feedback on recommender system design or performance. At present, however, the Crowd remains notoriously difficult to exploit effectively. The challenge arises from the complexity of user and crowdmember communities. Such groups constitute dynamic systems that are highly sensitive to changes in the form and the parameterization of their activities. A thorough understanding of how best to present tasks to the Crowd, and how to make use of intelligent input, will be crucial if recommender systems are to benefit from crowdsourcing and human computation.

The CrowdRec Workshop encourages contributions focusing on new approaches, new concepts, new methodologies and new applications that combine human computation/crowdsourcing with conventional recommender systems. Topics include, but are not limited to, the following:

Human Contributions beyond the User-Item Matrix
• Applications and interfaces for collecting annotations,
• Games With A Purpose (GWAP) or other annotation-as-by-product designs,
• Effective learning from crowd-annotated or crowd-augmented datasets,
• Mining social media to support recommendation,
• Conversational recommender systems,
• Wisdom of the Crowd for decision support.

Designing and Evaluating Recommenders using Crowd Techniques
• Recommender evaluation metrics and studies,
• Crowd-based user studies,
• Human intelligence for personalization support,
• User modeling and profiling.

Methodologies for Human Intelligence in Recommender Systems
• Identifying expertise and managing reputation,
• Engaging crowdmembers and ensuring quality,
• Tools and platforms to support crowd-enhanced recommender systems,
• Inherent biases, limitations and trade-offs of crowd-powered approaches,
• Empirical and case studies of crowd-enhanced recommendation,
• Ethical, cultural and policy issues related to crowd recommendation.