
The Three-Weight Algorithm: Enhancing ADMM for Large-Scale Distributed Optimization
Nate Derbinsky · José Bento · Jonathan S Yedidia

Fri Dec 06 07:00 PM -- 11:59 PM (PST) @ Tahoe B, Harrah’s Special Events Center 2nd Floor
Event URL: http://www.youtube.com/watch?v=j46eG-ZQHA8

We demonstrate the power of the Three-Weight Algorithm (TWA), a new distributed optimization method inspired by the alternating direction method of multipliers (ADMM), a decomposition-coordination method dating back to the 1970s. ADMM has received much attention in the convex-optimization community because (1) it is guaranteed to converge to the global optimum of convex problems and (2) it parallelizes naturally, as recently emphasized in an extended review (Boyd et al. 2011). Our Three-Weight Algorithm (Derbinsky et al. 2013) is based on a message-passing version of ADMM; it retains ADMM's convergence properties for convex problems and parallelizes just as naturally, while greatly improving scaling on non-convex problems. For example, TWA easily finds world-record solutions to large-scale packing problems on a laptop computer and solves large trajectory-planning instances for dozens of swarm robots in minutes (cf. Bento et al. 2013 at NIPS 2013). Our demonstration shows how to solve optimization problems by (a) decomposing the problem into many small, non-convex local cost functions and (b) incorporating top-down sources of problem information, such as human guidance. TWA suffers no disadvantage relative to ADMM on convex problems, but can reduce time-to-solution by many orders of magnitude on many large-scale non-convex problems.
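The decomposition described above can be illustrated with a toy consensus-ADMM sketch: each term of a separable cost becomes a small local problem, and the local copies are coordinated through a shared consensus variable. Everything below (the quadratic toy problem, function names, and the use of equal message weights) is an illustrative assumption, not the authors' implementation; TWA itself additionally tags each message with a weight from {zero, standard, infinite}, which is only noted in comments here.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=200):
    """Minimize sum_i (x - a_i)^2 by splitting it into one local cost
    per term and coordinating the local copies through a consensus
    variable z (plain scaled-form consensus ADMM).

    TWA generalizes this scheme: each message would carry a weight
    from {0, rho, infinity}, so "no-opinion" and "certain" messages
    can be expressed. Here all weights are equal (standard ADMM).
    """
    a = np.asarray(a, dtype=float)
    n = len(a)
    x = np.zeros(n)   # local copies of the variable, one per cost term
    u = np.zeros(n)   # scaled dual variables (the "messages")
    z = 0.0           # shared consensus variable
    for _ in range(iters):
        # Local step: each agent minimizes its own cost plus the
        # quadratic coupling penalty; for (x - a_i)^2 this has the
        # closed form below.
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # Consensus step: equal-weight average of the agents' views
        # (TWA would replace this with a per-message weighted average).
        z = np.mean(x + u)
        # Dual step: messages accumulate the remaining disagreement.
        u = u + x - z
    return z

print(consensus_admm([1.0, 2.0, 6.0]))
```

For this toy cost the consensus variable converges to the mean of the `a_i`, the global minimizer of the summed quadratics; the point of the sketch is only the structure (local step, consensus step, dual step) that the message-passing view of ADMM makes explicit.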

Author Information

Nate Derbinsky (Disney Research)
José Bento (Boston College)
Jonathan S Yedidia (Disney Research)
