Risks of large language models misalignment: multi-stakeholder obligations and governance
Understanding and governing the risks of large language models (LLMs) is critical to their development. This study provides a systematic overview of LLM risks along two dimensions: risk scope and risk severity. The results show that these risks stem from eight areas and comprise 23 types. Drawing on multi-stakeholder theory, the paper then proposes a model of the antecedents, objectives and dimensions of LLM regulation, systematically reviews the current literature on the risk governance of AI systems and LLMs, distills six major themes, and analyses representative literature in each direction. The study presents, for the first time, a framework of evolving theoretical perspectives on LLM risk governance research. Theoretically, it systematically maps out the current evolution of risk governance theory for AI systems and large language models; practically, it proposes a framework of regulatory mechanisms and practice methods for the risk governance of large language models. This work is valuable for theoretical and practical research on LLM risk governance.
| Field | Value |
|---|---|
| Item Type | Chapter |
| Copyright holders | © 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. |
| Departments | LSE |
| DOI | 10.1007/978-981-95-0880-8_22 |
| Date Deposited | 08 Jan 2026 |
| URI | https://researchonline.lse.ac.uk/id/eprint/130892 |