Abstract:
A method for building knowledge graphs based on the GPT model was proposed, using electronic medical records as the corpus. Through prompt design, GPT was guided to accomplish objectives such as structure design, knowledge extraction, relationship limitation, and format conversion. This approach facilitates tasks such as ontology construction and knowledge management, ultimately integrating the results into a medical process knowledge graph. The results indicated that: (1) prompts can guide GPT to understand the essence of the task and automatically construct the ontology model, although issues with accuracy and consistency remain; (2) GPT achieved an F1 score of 0.847 on the named entity recognition task, comparable to current mainstream deep learning models; (3) GPT has advantages in synonym recognition, acronym replacement, and hidden relationship inference. Additionally, the efficiency of this GPT-based method was compared with that of traditional knowledge graph construction methods, providing valuable insights into building knowledge graphs in the context of large language models.