Scaling Human Effort in Idea Screening and Content Evaluation - HEC Paris - École des hautes études commerciales de Paris
Preprint, Working Paper, Year: 2020

Scaling Human Effort in Idea Screening and Content Evaluation

Pavel Kireyev
  • Role: Author
Artem Timoshenko
  • Role: Author
Cathy Yang
  • Role: Author

Abstract

Brands and advertisers often tap into the crowd to generate ideas for new products and ad creatives by hosting ideation contests. Content evaluators then winnow thousands of submitted ideas before a separate stakeholder, such as a manager or client, decides on a small subset to pursue. We demonstrate the information value of data generated by content evaluators in past contests and propose a proof-of-concept machine learning approach to efficiently surface the best submissions in new contests with less human effort. The approach combines ratings by different evaluators based on their correlation with the past stakeholder choices, controlling for submission characteristics and textual content features. Using field data from a crowdsourcing platform, we demonstrate that the approach improves performance by identifying nonlinear transformations and efficiently reweighting evaluator ratings. Implementing the proposed approach can affect the optimal assignment of internal experts to ideation contests. Two evaluators whose votes were a priori equally correlated with sponsor choices may provide substantially different incremental information to improve the model-based idea ranking. We provide additional support for our findings using simulations based on a product design survey.
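The abstract describes combining evaluator ratings, nonlinearly transformed and reweighted against past stakeholder choices, to rank new submissions. A minimal sketch of that idea, using synthetic data and a plain logistic regression with quadratic rating features (the featurization, data-generating process, and all names here are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def featurize(R):
    # Raw evaluator ratings plus squared terms: a simple stand-in
    # for the learned nonlinear transformations of ratings.
    return np.column_stack([R, R ** 2])

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain gradient-descent logistic regression; the fitted weights
    # act as the data-driven reweighting of evaluators.
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def score(w, R):
    Xb = np.column_stack([np.ones(len(R)), featurize(R)])
    return Xb @ w

# Synthetic "past contest": two evaluators rate 500 submissions;
# the stakeholder favors evaluator 0 with diminishing returns.
R_past = rng.uniform(0, 5, size=(500, 2))
true_quality = 2 * R_past[:, 0] - 0.3 * R_past[:, 0] ** 2 + 0.5 * R_past[:, 1]
y_past = (true_quality + rng.normal(0, 0.5, 500)
          > np.median(true_quality)).astype(float)

w = fit_logistic(featurize(R_past), y_past)

# New contest: rank 100 submissions and surface the top 10,
# so the stakeholder reviews a small subset rather than all entries.
R_new = rng.uniform(0, 5, size=(100, 2))
ranking = np.argsort(-score(w, R_new))
top10 = ranking[:10]
```

The sketch captures the core mechanic: ratings from evaluators are informative only in combination, and two evaluators with equal marginal correlation to past choices can receive very different weights once their overlap is accounted for.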
No file deposited

Dates and versions

hal-02953039, version 1 (29-09-2020)

Identifiers

Cite

Pavel Kireyev, Artem Timoshenko, Cathy Yang. Scaling Human Effort in Idea Screening and Content Evaluation. 2020. ⟨hal-02953039⟩

Collections

HEC
17 Views
0 Downloads
