Fine-Tuning: How Model Family Diversification Shapes LLM Adoption

Paper Number

ICIS2025-2777

Paper Type

Complete

Abstract

The rapid growth of open-source LLMs has given rise to vast “model families” on Hugging Face, each comprising an anchor model plus its fine-tuned, distilled, and quantized variants, yet we know little about how family structure drives adoption. Drawing on the DeLone & McLean information systems success (ISS) framework, we analyze 63,767 models across 14,581 families, measuring pre-trained model quality (likes, academic linkage), sibling competition (family size), horizontal variety (task-domain fine-tunes), and vertical variety (size/efficiency variants). We find that strong quality signals from a pre-trained anchor model boost downloads of its fine-tuned variants, that large family size cannibalizes per-model adoption, and that both horizontal and vertical variety raise total downloads with an inverted-U effect, suggesting an optimal level of model family diversification. These relational and nonlinear dynamics offer new insights for theory and practice, helping us understand nuanced model adoption behavior in open-source NLP ecosystems.
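
For readers who want to probe these dynamics themselves, below is a minimal sketch (not the authors' pipeline) of how one might pull per-model metadata from the Hugging Face Hub and test for an inverted-U relationship between family variety and total downloads. The `huggingface_hub` and `statsmodels` calls are standard; the family grouping via `base_model` tags, the 2,000-model sample, and the use of distinct pipeline tags as a "variety" proxy are simplifying assumptions for illustration, not the paper's actual measurement procedure.

```python
# Sketch: gather model metadata from the Hugging Face Hub, group variants into
# families by their base_model tag (a heuristic, not the paper's rule), and fit
# a quadratic regression to probe an inverted-U variety effect.
import math

import pandas as pd
import statsmodels.formula.api as smf
from huggingface_hub import HfApi

api = HfApi()

# Pull a sample of models with full metadata (downloads, likes, tags).
models = api.list_models(full=True, sort="downloads", limit=2000)

rows = []
for m in models:
    # Heuristic family key: the repo id declared in a "base_model:..." tag,
    # falling back to the model's own id for anchor models.
    base = next(
        (t.split(":")[-1] for t in (m.tags or []) if t.startswith("base_model:")),
        m.id,
    )
    rows.append(
        {
            "model_id": m.id,
            "family": base,
            "downloads": m.downloads or 0,
            "likes": m.likes or 0,
            "task": m.pipeline_tag,
        }
    )

df = pd.DataFrame(rows)

# Family-level variables: total downloads, family size, and a crude
# "horizontal variety" proxy (number of distinct task domains in the family).
fam = df.groupby("family").agg(
    total_downloads=("downloads", "sum"),
    family_size=("model_id", "count"),
    variety=("task", "nunique"),
)
fam["log_downloads"] = fam["total_downloads"].apply(math.log1p)

# A positive variety coefficient paired with a negative variety**2 coefficient
# would indicate an interior optimum, consistent with an inverted-U pattern.
fit = smf.ols("log_downloads ~ variety + I(variety**2) + family_size", data=fam).fit()
print(fit.summary())
```

Whether the coefficient signs match the paper's inverted-U finding will depend on the sample and the variety measure; the paper's 63,767-model dataset and construct definitions are far richer than this sketch.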

Comments

14-Implementation
