Risks of large language models misalignment: multi-stakeholder obligations and governance

Wu, L., Zou, W., Li, J., & Qi, J. (2025). Risks of large language models misalignment: multi-stakeholder obligations and governance. In Meng, X., Wang, L., Chen, H., Chen, H., Xu, S., & Zhan, X. (Eds.), Big Data and Social Computing - 10th China National Conference, BDSC 2025, Proceedings (pp. 265-277). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-95-0880-8_22

Understanding and governing the risks of large language models (LLMs) is critical to their development. This study provides a systematic overview of LLM risks along two dimensions: risk scope and risk severity. The results show that these risks stem from eight areas and comprise 23 types. Building on multi-stakeholder theory, the paper further proposes a model of the antecedents, objectives, and dimensions of LLM regulation; it then systematically reviews the current literature on the risk governance of AI systems and LLMs, distills six major themes, and analyzes representative work in each direction. The study presents, for the first time, a framework for the evolving theoretical perspectives in LLM risk governance research. In terms of theory, it systematically traces the current evolution of risk governance theory for AI systems and large language models. In terms of practice, it proposes a framework of regulatory mechanisms and practical methods for LLM risk governance. This work is valuable for both theoretical and practical research on risk governance of LLMs.

