Our study aims to better understand attention screening mechanisms (‘screeners’) on crowdworking platforms and their consequences for worker behavior. Crowdworking platforms are popular with researchers conducting online experiments, but they come with pitfalls that have attracted increasing scientific scrutiny, such as low-quality contributions caused by crowdworkers’ limited attention spans or uncontrolled work environments. While screeners can be built into tasks to check for attention, and crowdworking platforms are implementing reputation management systems that rate workers based on the quality of their contributions, two main questions arise: 1) which screener types are best suited to ensure high attentiveness, and 2) how does a possible interaction between screeners and workers’ eagerness to maintain their reputation affect online research? To address both questions, we propose a two-stage experimental design that compares different screener types with respect to their ability to guarantee high attentiveness and their potential interaction with reputation systems.