Conversational agents (CAs) take many forms and are used across a variety of information systems. An abundance of prior research has focused on evaluating the traits that make CAs effective. Most studies assume, however, that increasing an agent's anthropomorphism will improve its performance; in a sensitive information disclosure task, that may not always be the case. We draw on self-disclosure, social desirability, and social presence theories to predict how different modes of conversational agent affect information disclosure. In this paper, we propose a laboratory experiment to compare how the mode of a CA (text-based chatbot or voice-based smart speaker), paired with either a high or low level of conversational relevance, affects the disclosure of personally sensitive information. Beyond identifying influences on disclosure, we aim to unpack the mechanisms through which CA design shapes disclosure.