Abstract

Generative AI (GenAI) promises to augment home care for older adults, yet it raises unresolved questions about safety, accountability, and responsibility when automated decisions cause harm. Guided by stakeholder theory, institutional trust theory, and principal-agent theory, we develop a model examining how ownership (public vs. private) and governance strength (strong vs. weak oversight cues) shape perceived accountability, which in turn increases perceived benefits and reduces perceived risks to drive ethical judgment. A 2×2 between-subjects experiment (N=300; older adults and care providers) using a mid-fidelity Wizard-of-Oz eldercare assistant, presented as a human-supervised LLM agent, will test the model with validated measures and manipulation checks. Findings will inform regulators, providers, and developers on ownership and governance choices that align eldercare GenAI with efficacy, equity, and ethical integrity.