The ethical use of artificial intelligence and data science is a rapidly evolving topic of discussion among individuals, organizations, and society. In such discussions, more attention has been paid to moral rules and regulations than to these stakeholders’ moral character development. This study examines how individuals deploy their moral decision-making skills under conditions of uncertainty: which virtues are most important, and which least important, in their decision to develop trust in artificial intelligence-based emerging technologies in the presence of personal information privacy threats? Using Q-methodology, concourse theory, and virtue ethics, four viewpoints (i.e., virtue-based decision-making structures) for developing trust in emerging technologies are extracted from a group of 39 participants. The findings of this study are of interest to philosophers, ethicists, and other stakeholders who work in the areas of moral decision-making under uncertainty, artificial intelligence, and data ethics.