In practice, convex optimization tasks often appear in a form that depends on an additional parameter, such as a regularization parameter in many machine learning applications. Our continuity approach makes it possible to obtain solutions with an approximation guarantee over the entire range of the (regularization) parameter for such problems. The result is a piecewise constant solution path, often called the regularization path in the machine learning community. Applications include support vector machines, multiple kernel learning, nuclear norm regularized problems, matrix completion, and various other parameterized optimization problems.
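To illustrate the piecewise constant path idea, here is a minimal sketch on ridge regression, where the exact minimizer is cheap to compute: we sweep the regularization parameter and keep the current solution until its suboptimality exceeds a tolerance ε, recomputing only then. (This is an illustrative toy, not the paper's algorithm: the actual method certifies ε-approximation via a duality gap without re-solving at every grid point, and chooses the parameter steps adaptively; the problem, data, and grid here are all made up for demonstration.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)

def solve_ridge(lam):
    # exact minimizer of ||Ax - b||^2 + lam * ||x||^2 (normal equations)
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def obj(x, lam):
    r = A @ x - b
    return r @ r + lam * (x @ x)

eps = 1e-2                       # approximation tolerance along the path
lams = np.logspace(-3, 2, 500)   # parameter range, swept on a fixed grid

# piecewise constant path: list of (lambda where we recomputed, solution)
x = solve_ridge(lams[0])
path = [(lams[0], x)]
for lam in lams[1:]:
    # keep the current solution while it remains eps-suboptimal;
    # (here we check suboptimality by re-solving, purely for illustration)
    if obj(x, lam) - obj(solve_ridge(lam), lam) > eps:
        x = solve_ridge(lam)
        path.append((lam, x))
```

By construction, every retained solution is ε-optimal at every grid point of the parameter range, yet the number of distinct solutions (pieces of the path) is far smaller than the number of grid points.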
This is joint work with Joachim Giesen and Sören Laue. A conference version of this approach for vector problems appeared as Approximating Parameterized Convex Optimization Problems [DOI] at ESA 2010.
Update [December 2011]:
The application of this approach to matrix problems, such as semidefinite programs and nuclear norm regularized problems (e.g. matrix completion), is explained in our AISTATS 2012 paper Regularization Paths with Guarantees for Convex Semidefinite Optimization. Here is the poster.
An extended discussion of the approach is given in my thesis, where the applications to vector and matrix problems are explained in Chapters 6 and 7, respectively.
Update [November 2011]:
A journal version of the ESA paper for the vector case has been accepted at the ACM Transactions on Algorithms, and is scheduled to appear soon.
Update [March 2011]:
Below are the slides of my seminar talk on this topic, generalizing the approach to matrix problems and semidefinite optimization (e.g. regularization paths for matrix factorizations).