[Baijia Lecture Hall] No. 186: A New Approach to Probabilistic Rounding Error Analysis
Published: 2019-04-19
Title: A New Approach to Probabilistic Rounding Error Analysis
Speaker: Nick Higham
Time: Wednesday, April 24, 2019, 15:30–16:30
Venue: Room 302, Graduate Student Building, Zhongguancun Campus
Organizers: Graduate School; School of Mathematics
Registration: Log in to the Beijing Institute of Technology WeChat enterprise account, go to "Second Classroom" → "Course Registration", and select "[Baijia Lecture Hall] No. 186: A New Approach to Probabilistic Rounding Error Analysis".
【About the Speaker】
Nick Higham obtained his Ph.D. in numerical analysis from the University of Manchester in 1985. He is a professor at the University of Manchester and a world-renowned expert in numerical analysis: Fellow of the Royal Society (2007), Fellow of SIAM (2009), and Member of Academia Europaea (2016). He served as President of SIAM in 2017–2018. He is best known for his work on the accuracy and stability of numerical algorithms. His honours include the Alston S. Householder Award, the 1988 Leslie Fox Prize, a 1999 Junior Whitehead Prize, a Royal Society Wolfson Research Merit Award, and the 2008 Fröhlich Prize. He is the author of Functions of Matrices: Theory and Computation, Accuracy and Stability of Numerical Algorithms, Handbook of Writing for the Mathematical Sciences, and MATLAB Guide, among other books.
【Lecture Information】
Abstract:
Traditional rounding error analysis in numerical linear algebra leads to backward error bounds involving the constant $\gamma_n = nu/(1-nu)$, for a problem size $n$ and unit roundoff $u$. In the light of large-scale and possibly low-precision computations, such bounds can struggle to provide any useful information. We develop a new probabilistic rounding error analysis that can be applied to a wide range of algorithms. By using a concentration inequality and making probabilistic assumptions about the rounding errors, we show that in several core linear algebra computations $\gamma_n$ can be replaced by a relaxed constant $\widetilde{\gamma}_n$ proportional to $\sqrt{n\log n}\,u$, with a probability bounded below by a quantity independent of $n$. The new constant $\widetilde{\gamma}_n$ grows much more slowly with $n$ than $\gamma_n$. Our results have three key features: they are backward error bounds; they are exact, not first order; and they are valid for any $n$, unlike results obtained by applying the central limit theorem, which apply only as $n \to \infty$. We provide numerical experiments that show that for both random and real-life matrices the bounds can be much smaller than the standard deterministic bounds and can have the correct asymptotic growth with $n$. We also identify two special situations in which the assumptions underlying the analysis are not valid and the bounds do not hold. Our analysis provides, for the first time, a rigorous foundation for the rule of thumb that "one can take the square root of an error constant because of statistical effects in rounding error propagation".
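The square-root rule of thumb is easy to observe in a small experiment. The sketch below (our own illustration, not one of the paper's experiments; the function name and the safety factors are illustrative choices) computes an inner product by recursive summation in single precision and compares the measured relative error against the deterministic constant $\gamma_n$ and the $\sqrt{n\log n}\,u$ quantity from the abstract:

```python
import numpy as np

def simulated_backward_error(n, seed=0):
    """Inner product of two random vectors via recursive single-precision summation.

    Returns (measured relative error, gamma_n, sqrt(n*log n)*u), where
    gamma_n = n*u/(1 - n*u) is the classical deterministic constant and
    sqrt(n*log n)*u mirrors the relaxed probabilistic constant (up to a
    modest multiplicative factor) discussed in the abstract.
    """
    rng = np.random.default_rng(seed)
    u = float(np.finfo(np.float32).eps) / 2  # unit roundoff for single precision
    a = rng.random(n).astype(np.float32)
    b = rng.random(n).astype(np.float32)

    # Reference value computed in double precision.
    exact = float(np.dot(a.astype(np.float64), b.astype(np.float64)))

    # Recursive (left-to-right) summation in single precision: one product
    # and one addition rounding error per term, so the worst case grows like n*u.
    s = np.float32(0.0)
    for x, y in zip(a, b):
        s = np.float32(s + x * y)

    denom = float(np.dot(np.abs(a).astype(np.float64), np.abs(b).astype(np.float64)))
    err = abs(float(s) - exact) / denom
    gamma_n = n * u / (1 - n * u)
    relaxed = float(np.sqrt(n * np.log(n)) * u)
    return err, gamma_n, relaxed

if __name__ == "__main__":
    for n in (10_000, 100_000):
        err, g, r = simulated_backward_error(n)
        print(f"n={n:>7}  error={err:.2e}  gamma_n={g:.2e}  sqrt(n log n)*u={r:.2e}")
```

In typical runs the measured error sits far below $\gamma_n$ and tracks the $\sqrt{n\log n}\,u$ quantity, which is the statistical cancellation of rounding errors that the talk's analysis makes rigorous. Note that the abstract also warns of special situations where the probabilistic assumptions fail, so the smaller quantity is a high-probability bound, not a guarantee.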