Abstract
The increased use of AI in hiring raises serious questions of fairness and candidate trust. A recent survey found that 68% of technology workers do not trust AI-based hiring and 80% prefer a human approach (TechRepublic, 2025). Firms including Google, Cisco, and McKinsey have adopted on-site interviews in response to concerns about unfairness and inhumanity (Wall Street Journal, 2025; Axios, 2025). These developments underscore a growing “trust deficit” in AI hiring. This study examines how organizations can design AI-augmented recruitment processes to create or rebuild trust with job applicants. Drawing on trust theory, organizational justice theory, and self-determination theory, we posit that trust is not a static entity but can be cultivated through transparency, fairness, and candidate voice. Furthermore, psychological mechanisms including perceived control, dignity, and calming effects may illuminate candidates' experiences. Theoretically, the study contributes to the trust literature on AI-supported recruitment through its attention to trust repair and complementary mechanisms. Practically, it offers guidance for organizations seeking to balance execution with legitimacy when rolling out AI tools for recruitment.
Recommended Citation
Wu, Shishi, "Trust Repair in AI-Based Recruitment" (2025). NEAIS 2025 Proceedings. 30.
https://aisel.aisnet.org/neais2025/30
Abstract Only