Estimation in generalized linear models (GLMs) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions but often lead to severe shrinkage. This paper instead explores penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and regularization penalties, and avoid the drawback of shrinkage. To optimize distance-penalized objectives, we make use of the majorization-minimization principle. The resulting algorithms are amenable to acceleration and come with global convergence guarantees. Applications to shape constraints, sparse regression, and rank-restricted matrix regression on synthetic and real data showcase the strong empirical performance of distance penalization, even under non-convex constraints.
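To make the idea concrete, here is a minimal sketch of a majorization-minimization (MM) iteration for a squared distance-to-set penalty, specialized to least squares with a sparsity constraint set. This is an illustrative toy, not the paper's exact procedure: the function names are invented, the penalty weight `rho` is held fixed (in practice it is typically increased over the iterations), and the GLM is simplified to ordinary least squares so the majorizer has a closed-form minimizer.

```python
import numpy as np

def project_sparse(beta, k):
    """Project onto the (non-convex) set of k-sparse vectors:
    keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-k:]
    out[idx] = beta[idx]
    return out

def proximal_distance_sparse_ls(X, y, k, rho=1.0, n_iter=200):
    """Minimize 0.5*||y - X b||^2 + (rho/2) * dist(b, S_k)^2,
    where S_k is the set of k-sparse vectors.

    MM step: at the current iterate b_m, the squared distance is
    majorized by ||b - P(b_m)||^2, so each update solves the ridge-like
    linear system (X^T X + rho I) b = X^T y + rho * P(b_m).
    """
    n, p = X.shape
    beta = np.zeros(p)
    A = X.T @ X + rho * np.eye(p)
    Xty = X.T @ y
    for _ in range(n_iter):
        anchor = project_sparse(beta, k)               # projection of current iterate
        beta = np.linalg.solve(A, Xty + rho * anchor)  # minimize the majorizer
    return project_sparse(beta, k)                     # return a feasible point
```

Because the anchor point `P(b_m)` is recomputed each iteration, the surrogate touches the objective at the current iterate and the MM descent property holds even though the sparsity set is non-convex; no shrinkage is applied to the surviving coefficients beyond the ridge-like term, which vanishes as `rho` grows.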
Author Information
Jason Xu (NSF Postdoctoral Fellow UCLA)
Eric Chi (North Carolina State University)
Kenneth Lange (UCLA)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Generalized Linear Model Regression under Distance-to-set Penalties
  Wed. Dec 6th 02:30 -- 06:30 AM, Room Pacific Ballroom #38
More from the Same Authors
- 2020 Poster: Simple and Scalable Sparse k-means Clustering via Feature Ranking
  Zhiyue Zhang · Kenneth Lange · Jason Xu
- 2020 Spotlight: Simple and Scalable Sparse k-means Clustering via Feature Ranking
  Zhiyue Zhang · Kenneth Lange · Jason Xu