Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles

Published: 2024-02-23

Aik Rui Tan, Shingo Urata, Samuel Goldman, Johannes C. B. Dietschreit & Rafael Gómez-Bombarelli

npj Computational Materials 9: 225 (2023)

Editorial Summary

Neural network potentials: which uncertainty quantification method performs best?

Over the last decade, neural networks (NNs) have increasingly been deployed to study complex materials systems. NN interatomic potentials (NNIPs) have been widely used for applications in reactive processes, protein design, solids, solid-liquid interfaces, coarse-graining, and more. Nevertheless, NNIPs remain susceptible to making poor predictions in extrapolative regimes. To maximize NNIP robustness and avoid distribution shift, the training data should contain representative samples from the same ensemble that the simulation will visit. However, since high-quality ab initio calculations are computationally expensive, quantifying model uncertainty and active learning are key to training robust NNIPs. In this work, Aik Rui Tan et al. from the Department of Materials Science and Engineering, Massachusetts Institute of Technology, examined multiple uncertainty quantification (UQ) schemes for improving the robustness of NNIPs through active learning. In particular, the authors used mean-variance estimation (MVE), deep evidential regression, and Gaussian mixture models (GMMs), evaluating their ability to rank uncertainties with multiple metrics across three different data sets. In general, ensemble-based methods consistently perform well in ranking uncertainties outside of the training data domain and provide the most robust NNIPs. MVE was shown to perform well mostly in identifying data points within the training domain that correspond to high errors. Deep evidential regression offers less accurate epistemic uncertainty prediction, while GMMs are more accurate and more lightweight than deep evidential regression and MVE. The lack of a one-size-fits-all solution, the high cost of the more robust ensemble-based methods, and the disappointing performance of elsewhere-promising evidential approaches confirm that UQ in NNIPs is an ongoing challenge for method development in AI for science.
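The ensemble-based baseline discussed above can be illustrated with a minimal sketch: train several models on resampled data and use their disagreement as the epistemic-uncertainty estimate. The setup below is hypothetical (toy 1-D polynomial "potentials" in place of the authors' NNIPs) and only shows the qualitative behavior the summary describes, namely that ensemble disagreement grows outside the training domain.

```python
# Minimal sketch of ensemble-based uncertainty quantification, assuming a toy
# 1-D regression task: K polynomial models fitted on bootstrap resamples stand
# in for an ensemble of NNIPs. This is NOT the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Training data: x in [-1, 1], a smooth "potential energy" curve with noise.
x_train = rng.uniform(-1.0, 1.0, 200)
y_train = np.sin(3.0 * x_train) + 0.05 * rng.normal(size=x_train.size)

def fit_ensemble(x, y, n_models=10, degree=5):
    """Fit an ensemble of polynomial models on bootstrap resamples."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, x.size, x.size)  # bootstrap resample
        models.append(np.polyfit(x[idx], y[idx], degree))
    return models

def predict_with_uncertainty(models, x):
    """Return ensemble mean and disagreement (std) as the uncertainty."""
    preds = np.stack([np.polyval(c, x) for c in models])
    return preds.mean(axis=0), preds.std(axis=0)

models = fit_ensemble(x_train, y_train)
_, sigma_in = predict_with_uncertainty(models, np.array([0.0]))   # in-domain
_, sigma_out = predict_with_uncertainty(models, np.array([2.5]))  # extrapolative
print(sigma_in[0] < sigma_out[0])  # disagreement is larger outside the domain
```

In an active-learning loop like the one the paper studies, points with the largest ensemble disagreement would be selected for new ab initio labels; the single-model schemes (MVE, evidential regression, GMM-based UQ) try to obtain a comparable uncertainty signal without paying for K models.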
