Paper Number
3254
Paper Type
Complete
Description
This paper adopts a theoretical framework to study the impact of fine-tuning on AI-powered platforms. We assume that users are heterogeneous both vertically, in the quality of their feedback, and horizontally, in their preferences for the model’s outputs. We find that fine-tuning either polarizes or centralizes the distribution of the fine-tuned model’s outputs. Fine-tuning can have negative impacts on both users and the platform: in particular, fine-tuning benefits the platform only if users’ transaction cost is relatively large and the cost of fine-tuning is relatively small. The platform always strategically lowers the initial algorithm quality to save on initial training costs. Users engage more actively in co-creation with AI when their feedback is of high quality. Contrary to prevailing optimism, we uncover that the fine-tuned model can harm users even when most of them provide high-quality feedback. Our findings provide insightful implications for AI-powered platforms, users, and policymakers.
Recommended Citation
Zou, Dongchen and Feng, Juan, "Popularity versus Diversity: The Impact of Fine-tuning Leveraging User Feedback in AI-powered Platforms" (2024). ICIS 2024 Proceedings. 26.
https://aisel.aisnet.org/icis2024/humtechinter/humtechinter/26
Popularity versus Diversity: The Impact of Fine-tuning Leveraging User Feedback in AI-powered Platforms
Comments
09-HTI