Developers of Artificial Intelligence (AI)-based systems are increasingly urged to assume accountability for their development decisions, that is, the degree to which they must justify underlying algorithms and their outcomes on demand. In doing so, AI developers often have to weigh how much accountability they attribute to themselves against how much accountability they perceive others attribute to them, creating an intrapersonal perceptual accountability (in)congruence with unknown consequences for their job satisfaction. Building on perceptual congruence research and the algorithmic accountability literature, we conducted an online survey of 87 AI developers about their experiences in AI-based systems development projects. Our results show that the lower the incongruence between self-attributed and other-attributed accountability, the higher the job satisfaction of AI developers. Moreover, we find that AI developers’ role ambiguity mediates this effect. Our study contributes to a more nuanced understanding of AI developers’ perceived accountability, with essential insights for defining job roles and understanding AI developers.