Artificial Intelligence is rapidly changing how we search for, curate, and receive information, with tools like ChatGPT set to revolutionise this process. However, ChatGPT and similar tools rely on uncurated information drawn from the open internet and user prompts, and are known to hallucinate, generating false or inappropriate answers. If acted on, these answers have the potential to cause harm. Being able to distinguish quality information is therefore important. To address this, the present study examines how people evaluate and weight dimensions of information quality when assessing the usefulness of medical information retrieved by tools like ChatGPT for making decisions about their health. The results show that accuracy and unbiasedness increase decision-making utility, while datedness (lack of currency) lowers it, though not enough to deter beliefs about its utility. Both decision utility and trustworthiness had strong positive effects on intention to use, whereas ease of use unexpectedly had a strong negative effect, and functional utility was not significant.