Abstract
We employ the open-domain BERT (Bidirectional Encoder Representations from Transformers) model to extract terms specific to the healthcare domain from medical error narratives. Narratives randomly extracted from the Patient Safety Network website were used to fine-tune and test our model in this pilot study. Eight named entities were identified in these text documents: patient, age, provider, organization, disease, symptom, treatment, and system/computer program. Preliminary evaluation results show an average classification accuracy of 88%, demonstrating the potential of the generic BERT model for extracting domain-specific entities from documents.
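As a rough illustration of the setup described above, the sketch below fine-tunes an open-domain BERT checkpoint for token classification over the eight entity types named in the abstract. The specific checkpoint (bert-base-uncased), library (Hugging Face transformers), BIO label scheme, and example sentence are assumptions for illustration only, not the authors' actual configuration or data.

```python
# Hypothetical sketch: token-classification head on open-domain BERT for the
# eight entity types in the abstract. Checkpoint, library, and label scheme
# are assumptions, not the paper's reported setup.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

ENTITY_TYPES = ["PATIENT", "AGE", "PROVIDER", "ORGANIZATION",
                "DISEASE", "SYMPTOM", "TREATMENT", "SYSTEM"]
# BIO tagging: B-/I- tags per entity type plus the outside tag "O".
LABELS = ["O"] + [f"{p}-{t}" for t in ENTITY_TYPES for p in ("B", "I")]
id2label = dict(enumerate(LABELS))
label2id = {label: i for i, label in id2label.items()}

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    id2label=id2label,
    label2id=label2id,
)

# Illustrative narrative fragment (not from the study's dataset).
words = ["The", "nurse", "administered", "insulin", "to", "the",
         "65-year-old", "patient", "."]
enc = tokenizer(words, is_split_into_words=True,
                return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**enc).logits              # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# Map sub-word predictions back to whole words (first sub-token per word).
word_ids = enc.word_ids(batch_index=0)
seen = set()
for idx, wid in enumerate(word_ids):
    if wid is None or wid in seen:
        continue
    seen.add(wid)
    print(f"{words[wid]:>12}  {id2label[pred_ids[idx]]}")
```

Before fine-tuning, the classification head is randomly initialized, so the printed tags are meaningless; training on labeled narratives (for example with the transformers Trainer and a token-aligned label tensor) is what adapts the generic model to the medical-error domain.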
Recommended Citation
Xu, Jennifer; Hao, Haijing; Sun, Patrick; and Zhang, Andrew, "Named Entity Recognition in Medical Error Narratives using BERT" (2021). NEAIS 2021 Proceedings. 5.
https://aisel.aisnet.org/neais2021/5