
Scaling up linear programming with PDLP


Linear programming (LP) problems are among the most foundational problems in computer science and operations research. With extensive applications across numerous sectors of the global economy, such as manufacturing, networking, and other fields, LP has been the cornerstone of mathematical programming and has significantly influenced the development of today's sophisticated modeling and algorithmic frameworks for data-driven decision making. If there is something to optimize, there is a good chance LP is involved.

Since the late 1940s, LP solving has evolved significantly, with Dantzig's simplex method and various interior-point methods being the most prevalent techniques. Today's advanced commercial LP solvers employ these methods but face challenges in scaling to very large instances because of their computational demands. In response to this limitation, first-order methods (FOMs) have gained traction for large-scale LP problems.

With the above in mind, we introduce our solver PDLP (Primal-dual hybrid gradient enhanced for LP), a new FOM-based LP solver that significantly scales up our LP solving capabilities. By relying on matrix-vector multiplication rather than matrix factorization, PDLP requires less memory and is more compatible with modern computational technologies like GPUs and distributed systems, offering a scalable alternative that mitigates the memory and computational inefficiencies of traditional LP methods. PDLP is open-sourced in Google's OR-Tools. This project has been in development since 2018 [1, 2, 3], and we are proud to announce that it was co-awarded the prestigious Beale–Orchard-Hays Prize at the International Symposium on Mathematical Programming in July 2024. This accolade is one of the highest honors in the field of computational optimization, awarded every three years by the Mathematical Optimization Society.
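To illustrate the matrix-vector-only idea, here is a minimal sketch of the plain primal-dual hybrid gradient (PDHG) iteration on a standard-form LP, min cᵀx subject to Ax = b, x ≥ 0. This is not PDLP itself, which layers restarts, presolve, adaptive step sizes, and other enhancements on top of PDHG; the tiny example problem, function name, and step-size choice below are illustrative assumptions. Note that every iteration touches A only through products A@x and A.T@y, with no factorization, which is what makes the approach amenable to GPUs and distributed systems.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=10000):
    """Plain PDHG sketch for: min c@x  s.t.  A@x = b, x >= 0.

    Each iteration costs only two matrix-vector products (A@x and A.T@y);
    no factorization of A is ever computed or stored.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # Step sizes chosen so that tau * sigma * ||A||_2^2 < 1, a standard
    # sufficient condition for PDHG convergence.
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    x_avg = np.zeros(n)
    for k in range(iters):
        # Primal gradient step, then projection onto the constraint x >= 0.
        x_new = np.maximum(0.0, x - tau * (c - A.T @ y))
        # Dual ascent step using the extrapolated primal point 2*x_new - x.
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
        # Running (ergodic) average, for which O(1/k) convergence is known.
        x_avg += (x - x_avg) / (k + 1)
    return x_avg

# Toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = pdhg_lp(c, A, b)
print(x, float(c @ x))
```

On this toy instance the averaged iterates approach the optimal vertex (1, 0) with objective value 1; production solvers like PDLP add restarting and step-size heuristics precisely because plain PDHG of this kind can converge slowly on harder instances.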
