Publications
- Chenxin Ma and Martin Takáč, “Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing,” submitted to the Workshop on Distributed Machine Learning at the 2017 AAAI Conference
- Chenxin Ma, Jakub Konečný, Virginia Smith, Martin Takáč, Martin Jaggi, and Peter Richtárik, “Distributed Optimization with Arbitrary Local Solvers,” in minor revision at Optimization Methods and Software, 2015
- Virginia Smith, Simon Forte, Chenxin Ma, Martin Takáč, Michael Jordan, and Martin Jaggi, “A General Framework for Communication-Efficient Distributed Optimization,” under review at the Journal of Machine Learning Research, 2016
- Chenxin Ma, Martin Takáč, and Rachael Tappenden, “Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption,” to appear in the Journal of Machine Learning Research, 2016
- Chenxin Ma, Martin Jaggi, Peter Richtárik, Martin Takáč, and Michael Jordan, “Adding vs. Averaging in Distributed Primal-Dual Optimization,” International Conference on Machine Learning (ICML), 2015
Presentations
- “Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption,” SIAM Conference on Optimization, Vancouver, Canada, May 2017
- “Acceleration of a Communication-Efficient Distributed Dual Block Descent Algorithm,” poster presentation at the 2016 INFORMS Annual Meeting, Nashville, TN, Nov. 14, 2016
- “Acceleration of a Communication-Efficient Distributed Dual Block Descent Algorithm,” 21st Modeling and Optimization: Theory and Applications (MOPTA), Bethlehem, PA, Aug. 18, 2016
- “Fast Coordinate Descent Methods on Linear Support Vector Machines,” SAS Interns Expo, Aug. 3, 2016
- “Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing,” poster presentation at the 10th Annual Machine Learning Symposium, New York, NY, Mar. 4, 2016
- “Partitioning Data on Features or Samples in Distributed Optimization?” poster presentation at a workshop of the Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, Dec. 11, 2015
- “Communication-Efficient Distributed Dual Coordinate Ascent and Its Applications in Machine Learning,” 2015 INFORMS Annual Meeting, Philadelphia, PA, Nov. 2, 2015
- “Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption,” 20th Modeling and Optimization: Theory and Applications (MOPTA), Bethlehem, PA, Jul. 23, 2015