Document Type

Dissertation

Degree

Doctor of Philosophy (PhD)

Major/Program

Computer Science

First Advisor's Name

Sundaraja Sitharama Iyengar

First Advisor's Committee Title

Committee Chair

Second Advisor's Name

Xudong He

Second Advisor's Committee Title

Committee Member

Third Advisor's Name

Jason Liu

Third Advisor's Committee Title

Committee Member

Fourth Advisor's Name

Leonardo Bobadilla

Fourth Advisor's Committee Title

Committee Member

Fifth Advisor's Name

Zhenmin Chen

Fifth Advisor's Committee Title

Committee Member

Keywords

artificial intelligence, robotics, numerical analysis, computation, operational research, ordinary differential equations, theory, algorithms

Date of Defense

11-1-2018

Abstract

This dissertation addresses the machine learning research topics outlined below. We develop theory for traditional first-order algorithms from convex optimization and provide new insights into the nonconvex objective functions that arise in machine learning. Based on this analysis, we design new algorithms that overcome the difficulty of nonconvex objectives and accelerate convergence to the desired result. In this thesis, we answer two questions: (1) How should the step size be chosen for gradient descent with random initialization? (2) Can current convex optimization algorithms be accelerated and extended to nonconvex objectives? As an application, we apply these optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries.
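As a minimal illustration of the setting in question (1), the sketch below runs plain gradient descent from a random initialization with a constant step size on a toy nonconvex objective. The objective, step size, and iteration count are illustrative choices for this sketch only, not the ones analyzed in the dissertation.

    import numpy as np

    def gradient_descent(grad, x0, step_size, num_iters):
        """Plain gradient descent: x_{k+1} = x_k - step_size * grad(x_k)."""
        x = x0.copy()
        for _ in range(num_iters):
            x = x - step_size * grad(x)
        return x

    # Toy nonconvex objective f(x) = (||x||^2 - 1)^2 with gradient 4 (||x||^2 - 1) x.
    def grad_f(x):
        return 4.0 * (x @ x - 1.0) * x

    rng = np.random.default_rng(0)
    x0 = rng.standard_normal(5)          # random initialization
    x = gradient_descent(grad_f, x0, step_size=0.01, num_iters=500)
    print(x @ x)                         # approaches 1, i.e. a global minimizer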

Gradient-based optimization methods have been increasingly modeled and interpreted through ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods: Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this work, we derive high-resolution ODEs as more accurate surrogates for these two methods, as well as for Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into properties of their discrete counterparts. As a first application of this framework, we identify the effect of a term, referred to as the gradient correction, that is present in NAG-SC but not in the heavy-ball method, shedding insight into why the former achieves acceleration while the latter does not. Moreover, in this high-resolution ODE framework, NAG-C is shown to minimize the squared gradient norm at an inverse cubic rate, which is the sharpest rate known for NAG-C. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the same accelerated convergence rate as NAG-C for minimizing convex functions.
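To make the distinction concrete, the following sketch states the two strongly convex methods and their high-resolution ODE surrogates in the usual parameterization with step size s and strong-convexity parameter mu; the constants follow the standard forms of these methods and are given here for illustration rather than quoted from the dissertation.

    \begin{align*}
    \text{heavy-ball:} \quad & x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - s\,\nabla f(x_k), \\
    \text{NAG-SC:} \quad & y_{k+1} = x_k - s\,\nabla f(x_k), \qquad
      x_{k+1} = y_{k+1} + \beta\,(y_{k+1} - y_k),
    \end{align*}
    with momentum coefficient \(\beta = (1-\sqrt{\mu s})/(1+\sqrt{\mu s})\).
    Their high-resolution ODE surrogates take the form
    \begin{align*}
    \text{heavy-ball:} \quad & \ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t)
      + (1+\sqrt{\mu s})\,\nabla f(X(t)) = 0, \\
    \text{NAG-SC:} \quad & \ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t)
      + \sqrt{s}\,\nabla^2 f(X(t))\,\dot{X}(t)
      + (1+\sqrt{\mu s})\,\nabla f(X(t)) = 0.
    \end{align*}

The additional term \(\sqrt{s}\,\nabla^2 f(X)\,\dot{X}\) is the gradient correction: it appears in the NAG-SC surrogate but not in the heavy-ball surrogate, which is precisely the structural difference that low-resolution ODEs cannot capture.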

Identifier

FIDC007030

ORCID

https://orcid.org/0000-0003-2378-5333

Rights Statement

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).