Abstract

Large language models (LLMs) exhibit impressive fluency yet routinely hallucinate facts, ignore evolving context, and deflect responsibility. This research essay develops a socio‑technical design framework that re‑orients LLM applications toward dialogical, community‑centred sense‑making. Grounded in Winograd & Flores’s concepts of structural coupling and consensus domains, and informed by constructivist learning theory, the framework specifies five design requirements and three mutually reinforcing artefacts: (1) an Interaction Protocol that encodes role and commitment markers, (2) a Dynamic Context Memory grounded in community vocabulary, and (3) a Reflexive Alignment Loop in which human feedback continuously shapes the model’s epistemic stance. A prototype Innovation‑Sprint Assistant, implemented with the GPT‑4 API plus the three artefacts, demonstrates feasibility. A walk‑through evaluation with eight domain experts shows a 31 percent reduction in hallucinations and a 0.6 SD increase in trust calibration relative to a vanilla chatbot. We discuss how structurally coupled LLMs differ from retrieval‑oriented assistants, outline the ethical implications of epistemic imprinting, and propose a multi‑community research agenda to test generalisability.
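
To make the Interaction Protocol concrete, the sketch below illustrates one way a conversational turn might carry role, intent, and commitment markers. The field names and example values are illustrative assumptions for exposition, not the prototype’s actual schema.

    # Illustrative sketch of an Interaction Protocol turn record.
    # Field names are assumptions, not the paper's exact schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProtocolTurn:
        speaker: str      # e.g. "facilitator", "assistant", "participant"
        role: str         # conversational role the speaker is enacting
        intent: str       # e.g. "propose", "question", "commit", "correct"
        commitment: str   # what the speaker is accountable for after this turn
        content: str      # the utterance itself
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    # Example: the assistant proposes a reference and marks it as unverified,
    # so readers can see what it is (and is not) committing to.
    turn = ProtocolTurn(
        speaker="assistant",
        role="knowledge-broker",
        intent="propose",
        commitment="source not yet verified by the team",
        content="Prior sprints used the 'jobs-to-be-done' framing; see backlog item 12.",
    )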

Practitioner Relevance Statement

When ChatGPT‑style tools are dropped into project teams, participants soon ask: “Where did that answer come from, and who owns it?” Our framework shows how to wrap an off‑the‑shelf LLM so that every conversational turn carries role, intent, and commitment metadata; team vocabulary persists across sessions; and users can confirm or correct the model’s stance before its output is acted upon. A Streamlit proof‑of‑concept required only prompt templates, a small graph database, and a simple feedback dialog; no model retraining was needed. Early adopters in innovation workshops reported fewer hallucinated references and clearer responsibility chains.
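
The following Python sketch indicates, under assumed names, what such a wrapper can look like: a prompt template that injects persisted team vocabulary and the protocol’s role/intent/commitment instruction, followed by a confirm‑or‑correct step before the answer is used. The helpers (load_vocabulary, call_llm, reflexive_check) are placeholders rather than the prototype’s actual interfaces, and the model call is stubbed out.

    # Minimal sketch of the wrapping described above. Helper names are
    # placeholders; in the prototype the vocabulary would live in a small
    # graph database and call_llm would delegate to a chat-completion API.

    def load_vocabulary(team_id: str) -> dict[str, str]:
        """Stand-in for the Dynamic Context Memory lookup."""
        return {"sprint": "a five-day, time-boxed design cycle as this team uses the term"}

    def build_prompt(team_id: str, user_turn: str) -> str:
        """Prompt template grounding the model in team vocabulary and protocol markers."""
        glossary = "\n".join(
            f"- {term}: {meaning}"
            for term, meaning in load_vocabulary(team_id).items()
        )
        return (
            "You are an innovation-sprint assistant. Use the team's own vocabulary:\n"
            f"{glossary}\n"
            "In every answer, state your role, your intent, and what you do or do not "
            "commit to (for example, whether a cited source has been verified).\n\n"
            f"Participant turn: {user_turn}"
        )

    def call_llm(prompt: str) -> str:
        """Stubbed model call; a real wrapper would invoke an off-the-shelf chat API here."""
        return "Role: knowledge-broker. Intent: propose. Commitment: source unverified."

    def reflexive_check(answer: str, user_verdict: str) -> str:
        """Simple feedback step: the user confirms the stance or supplies a correction."""
        return answer if user_verdict.strip().lower() == "confirm" else user_verdict

    # Usage: the confirmed or corrected turn, not the raw output, is acted upon.
    draft = call_llm(build_prompt("team-42", "Which framing did we use last sprint?"))
    accepted = reflexive_check(draft, "confirm")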
