Page 146 - 2025, Vol. 56, No. 1
References:
[1] DENG Shaohui. Research on CFD simulation and prediction of dam foundation grouting[D]. Tianjin: Tianjin University, 2018.
[2] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J/OL]. ArXiv, (2019-05-24)[2023-12-20]. https://arxiv.org/abs/1810.04805.
[3] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[J/OL]. OpenAI Blog, (2018-06-11)[2023-11-11]. https://www.mikecaptain.com/resources/pdf/GPT-1.pdf.
[4] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J/OL]. OpenAI Blog, (2019-02-14)[2023-12-11]. https://insightcivic.s3.us-east-1.amazonaws.com/language-models.pdf.
[5] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[6] SINGHAL K, AZIZI S, TU T, et al. Large language models encode clinical knowledge[J]. Nature, 2023, 620: 172-180.
[7] CHOWDHERY A, NARANG S, DEVLIN J, et al. PaLM: Scaling language modeling with pathways[J]. Journal of Machine Learning Research, 2023, 24(240): 1-113.
[8] WEI J, BOSMA M, ZHAO V Y, et al. Finetuned language models are zero-shot learners[J/OL]. ArXiv, (2022-02-08)[2023-12-20]. https://arxiv.org/abs/2109.01652.
[9] LIU Q, SONG J K, HUANG Z G, et al. Langchain-chatchat[CP/OL]. GitHub Repository, (2023-04-15)[2023-12-11]. https://github.com/chatchat-space/Langchain-Chatchat.
[10] QIN Sizhong, ZHENG Zhe, GU Yi, et al. Application tests and discussion of large language models in building engineering[J]. Industrial Construction, 2023, 53(9): 162-169.
[11] UDDIN S M J, ALBERT A, OVID A, et al. Leveraging ChatGPT to aid construction hazard recognition and support safety education and training[J]. Sustainability, 2023, 15(9): 7121.
[12] YANG Yangrui, ZHU Yaping, LIU Xuemei, et al. Intelligent analysis and extraction of emergency rescue entities and relations from hydraulic engineering texts[J]. Journal of Hydraulic Engineering, 2023, 54(7): 818-828.
[13] YANG Yangrui, ZHU Yaping, CHEN Sisi, et al. Application of AI chains integrating swarm intelligence strategies to knowledge reasoning for dam flood control and emergency rescue[J]. Journal of Hydraulic Engineering, 2023, 54(9): 1122-1132.
[14] FENG Jun, LYU Zhipeng, FAN Zhendong, et al. A large language model-assisted design method for flood control operation rule labels[J]. Journal of Hydraulic Engineering, 2024, 55(8): 920-930.
[15] HU E J, SHEN Y, WALLIS P, et al. LoRA: Low-rank adaptation of large language models[J/OL]. ArXiv, (2021-10-16)[2023-04-11]. https://arxiv.org/abs/2106.09685.
[16] JI S, PAN S, CAMBRIA E, et al. A survey on knowledge graphs: Representation, acquisition, and applications[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 33(2): 494-514.
[17] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: Open and efficient foundation language models[J/OL]. ArXiv, (2023-02-27)[2024-02-01]. https://doi.org/10.48550/arXiv.2302.13971.
[18] XU C, SUN Q, ZHENG K, et al. WizardLM: Empowering large language models to follow complex instructions[J/OL]. ArXiv, (2023-06-10)[2024-02-02]. https://arxiv.org/abs/2304.12244.
[19] WANG Y, ZHONG W, LI L, et al. Aligning large language models with human: A survey[J/OL]. ArXiv, (2023-07-24)[2024-02-01]. https://arxiv.org/abs/2307.12966.
[20] China Electricity Council. Technical specification for cement grouting construction of hydraulic structures: DL/T 5148—2021[S]. Beijing: National Energy Administration, 2021.
[21] China Electricity Council. Technical specification for cement grouting construction of hydraulic structures: DL/T 5148—2012[S]. Beijing: National Energy Administration, 2012.
[22] YANG Fei, SONG Jixing, WANG Yichun, et al. An anomalous file detection method for fragmented spatio-temporal information bases based on OCR technology[J]. Journal of Wuhan University of Technology (Information & Management Engineering), 2023, 45(6): 967-971.
[23] WANG Y, KORDI Y, MISHRA S, et al. Self-instruct: Aligning language models with self-generated instructions