We investigate the usability of human-like agent-based interfaces. In an experiment we manipulate the capabilities and the “human-likeness” of a travel advisory agent. We show that users of the more human-like agent form an anthropomorphic use image of the system: they act as if the system were human and try to exploit typical human-like capabilities. Unfortunately, this severely reduces the usability of an agent that looks human but lacks human-like capabilities (the overestimation effect). We also show that the use image users form of agent-based systems is inherently integrated (as opposed to the compositional use image they form of conventional GUIs): cues provided by the system do not instill user responses in a one-to-one manner, but are instead integrated into a single use image. Consequently, users try to exploit capabilities that were never signaled by the system in the first place, further exacerbating the overestimation effect.