A GPU-Based Halpern Peaceman-Rachford (HPR) Method for Solving Convex Programming

2025.10.30


Event Information

Title: A GPU-Based Halpern Peaceman-Rachford (HPR) Method for Solving Convex Programming

Speaker: Prof. Xinyuan Zhao (Beijing University of Technology)

Time: Friday, October 31, 2025, 13:30

Place: Room F309, Main Campus

Inviter: Prof. Zi Xu

Host: Department of Mathematics, School of Science

Abstract:

We study the acceleration of a preconditioned alternating direction method of multipliers (pADMM), where the proximal terms are convex quadratic functions, for solving linearly constrained convex optimization problems. Our approach begins by reformulating pADMM as a proximal point method (PPM) with a positive semidefinite preconditioner, which may be degenerate due to the lack of strong convexity in the original proximal terms. We then accelerate pADMM by accelerating the resulting degenerate PPM (dPPM). In particular, we first develop an accelerated dPPM by incorporating Halpern iteration, achieving non-asymptotic convergence guarantees. Building upon this result, we further design an accelerated pADMM that enjoys non-asymptotic and nonergodic convergence rates under practical stopping criteria, including the Karush-Kuhn-Tucker (KKT) residual and the primal objective gap. Extensive GPU-based experiments on large-scale linear programming and convex composite quadratic programming benchmarks demonstrate the substantial advantages of our Halpern Peaceman-Rachford (HPR) method (a special case of the Halpern-accelerated pADMM applied to the dual formulation) over state-of-the-art solvers, including the award-winning PDLP, as well as PDQP, SCS, CuClarabel, and Gurobi, in delivering high-accuracy solutions.
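For readers unfamiliar with the Halpern scheme that drives the acceleration above, the following is a minimal NumPy sketch, not the speaker's GPU solver. It applies the standard Halpern iteration x_{k+1} = (1/(k+2)) x_0 + (1 - 1/(k+2)) T(x_k) to a toy nonexpansive operator chosen purely for illustration; the operator, step counts, and variable names are all assumptions, not taken from the talk.

```python
import numpy as np

def halpern(T, x0, iters=2000):
    """Halpern iteration for a nonexpansive operator T.

    Update: x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k),
    with anchor weights lam_k = 1/(k+2). The fixed-point residual
    ||x_k - T(x_k)|| is known to decay at a non-asymptotic O(1/k) rate.
    """
    x = x0.copy()
    for k in range(iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Toy nonexpansive map (an assumption for illustration): average the
# identity with a plane rotation; its unique fixed point is the origin.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: 0.5 * (x + R @ x)

x = halpern(T, np.array([1.0, 1.0]))
print(np.linalg.norm(x - T(x)))  # small fixed-point residual
```

In the talk's setting, T would be the (possibly degenerate) PPM operator induced by pADMM, and the same anchoring to x_0 is what yields the nonergodic KKT-residual rate stated in the abstract.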
