
Scaling Human Effort in Idea Screening and Content Evaluation

Abstract: Brands and advertisers often tap into the crowd to generate ideas for new products and ad creatives by hosting ideation contests. Content evaluators then winnow thousands of submitted ideas before a separate stakeholder, such as a manager or client, decides on a small subset to pursue. We demonstrate the information value of data generated by content evaluators in past contests and propose a proof-of-concept machine learning approach to efficiently surface the best submissions in new contests with less human effort. The approach combines ratings by different evaluators based on their correlation with past stakeholder choices, controlling for submission characteristics and textual content features. Using field data from a crowdsourcing platform, we demonstrate that the approach improves performance by identifying nonlinear transformations of evaluator ratings and efficiently reweighting them. Implementing the proposed approach can affect the optimal assignment of internal experts to ideation contests: two evaluators whose votes were a priori equally correlated with sponsor choices may provide substantially different incremental information to improve the model-based idea ranking. We provide additional support for our findings using simulations based on a product design survey.
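To make the pipeline described in the abstract concrete, here is a minimal Python sketch. It is not the authors' implementation: the choice of a gradient-boosted classifier as the nonlinear model, the feature construction, and all names and data (`ratings`, `sub_chars`, `text_feats`, the synthetic labels) are hypothetical illustrations of the general idea of learning from past stakeholder choices and ranking new submissions.

```python
# Hypothetical sketch of the idea-ranking approach described in the
# abstract: learn how past stakeholder choices relate to evaluator
# ratings, submission characteristics, and text features, then use the
# fitted model to rank submissions in a new contest. All data below is
# synthetic and all names are made up for illustration.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# --- Hypothetical training data from past contests ---------------------
n_past = 500          # past submissions
n_evaluators = 3      # evaluators who rated each submission
ratings = rng.integers(1, 6, size=(n_past, n_evaluators))   # 1-5 scale
sub_chars = rng.normal(size=(n_past, 4))                    # e.g., length, timing
text_feats = rng.normal(size=(n_past, 10))                  # e.g., text embedding dims

X_past = np.hstack([ratings, sub_chars, text_feats])
# Binary label: 1 if the stakeholder shortlisted the submission
# (synthetic stand-in for observed past choices).
chosen = (ratings.mean(axis=1) +
          rng.normal(scale=1.0, size=n_past) > 3.5).astype(int)

# A tree-based model can pick up nonlinear transformations of each
# evaluator's rating and implicitly reweight evaluators by how much
# incremental information they carry about stakeholder choices.
model = GradientBoostingClassifier(random_state=0).fit(X_past, chosen)

# --- Rank submissions in a new contest ---------------------------------
n_new = 100
X_new = np.hstack([
    rng.integers(1, 6, size=(n_new, n_evaluators)),
    rng.normal(size=(n_new, 4)),
    rng.normal(size=(n_new, 10)),
])
scores = model.predict_proba(X_new)[:, 1]
top_k = np.argsort(scores)[::-1][:10]   # surface the 10 most promising ideas
print("Top submissions:", top_k)
```

A boosted ensemble is used here only because it captures nonlinear rating transformations and evaluator interactions without hand-specifying them; the abstract does not commit to a particular model class, and the paper's actual model, features, and data differ.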
Document type: Preprints, Working Papers, ...

https://hal-hec.archives-ouvertes.fr/hal-02953039
Contributor: Antoine Haldemann
Submitted on: Tuesday, September 29, 2020 - 6:04:21 PM
Last modification on: Wednesday, October 14, 2020 - 4:13:50 AM

Identifiers

HAL Id: hal-02953039


Citation

Pavel Kireyev, Artem Timoshenko, Cathy Yang. Scaling Human Effort in Idea Screening and Content Evaluation. 2020. ⟨hal-02953039⟩
