Abstract
Recommendation systems are essential for delivering personalized content across e-commerce and streaming services. However, traditional methods often fail in cold-start scenarios where new items lack prior interactions. Recent advances in large language models (LLMs) offer a promising alternative. In this paper, we adopt the retrieve-and-recommend framework and propose to fine-tune the LLM jointly on warm- and cold-start next-item recommendation tasks, thus mitigating the need for separate models for the two item types. We computationally compare zero-shot prompting, in-context learning, and fine-tuning using the same LLM backbone, and benchmark them against strong PLM-based baselines. Our findings provide practical insights into the trade-offs between accuracy and computational cost of these methods for next-item recommendation. To enhance reproducibility, we release the source code at https://github.com/HayaHalimeh/LLMs-For-Next-Item-Recommendation.git.
Paper Type
Full Paper
DOI
10.62036/ISD.2025.68
LLMs For Warm and Cold Next-Item Recommendation: A Comparative Study across Zero-Shot Prompting, In-Context Learning and Fine-Tuning

Recommended Citation
Halimeh, H., Freese, F. & Müller, O. (2025). LLMs For Warm and Cold Next-Item Recommendation: A Comparative Study across Zero-Shot Prompting, In-Context Learning and Fine-Tuning. In I. Luković, S. Bjeladinović, B. Delibašić, D. Barać, N. Iivari, E. Insfran, M. Lang, H. Linger, & C. Schneider (Eds.), Empowering the Interdisciplinary Role of ISD in Addressing Contemporary Issues in Digital Transformation: How Data Science and Generative AI Contributes to ISD (ISD2025 Proceedings). Belgrade, Serbia: University of Gdańsk, Department of Business Informatics & University of Belgrade, Faculty of Organizational Sciences. ISBN: 978-83-972632-1-5. https://doi.org/10.62036/ISD.2025.68