### NonOpt (Nonlinear/nonconvex/nonsmooth Optimizer)

NonOpt is an open-source package written in C++ for solving nonlinear, potentially nonconvex and nonsmooth, optimization problems. For more information, please see the NonOpt homepage.

### Prototype Software

The following is a list of prototype software written by me for various research projects.

- Nonconvex, Nonsmooth Optimization
**SLQP-GS: Sequential Linear or Quadratic Programming with Gradient Sampling**

A prototype code for nonconvex, nonsmooth constrained optimization. The search direction computation is performed by minimizing a local linear or quadratic model of the objective subject to a linearization of the constraints. Gradients for each problem function are sampled to make the search direction computation effective in nonsmooth regions. The user has the option of choosing between SLP-GS and SQP-GS modes, and can tune various input parameters for each application. Code for a sample problem is provided in order to illustrate how other problems can be formulated and solved with the code. Note that this is only a prototype implementation. Please e-mail me if you use the code or with any bug reports, comments, or suggestions — they would be greatly appreciated! Code written by Frank E. Curtis with contributions from Tim Mitchell.

- Please refer to “A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization”
- Previous version(s): SLQP-GS 1.2 (zip), SLQP-GS 1.1 (tgz), SLQP-GS 1.0 (tgz)
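To illustrate the gradient-sampling idea underlying SLQP-GS, the following sketch (unconstrained here for brevity, and not the package's actual implementation) samples gradients near the current iterate and takes the search direction to be the negative of an approximate minimum-norm element of their convex hull. The objective, sampling box, and Frank-Wolfe solver for the min-norm subproblem are all toy choices for illustration.

```python
import numpy as np

def grad_f(x):
    # subgradient of the toy objective f(x) = |x1| + 0.5 * x2^2,
    # which is nonsmooth along the axis x1 = 0
    g1 = np.sign(x[0]) if x[0] != 0.0 else 0.0
    return np.array([g1, x[1]])

def gs_direction(x, radius=0.1, num_samples=20, fw_iters=400, seed=0):
    """Gradient-sampling search direction (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # gradients at the iterate and at random points in a box around it
    samples = x + radius * rng.uniform(-1.0, 1.0, size=(num_samples, 2))
    G = np.vstack([grad_f(x)] + [grad_f(p) for p in samples])
    # approximate min-norm element of conv{sampled gradients} by
    # Frank-Wolfe on the simplex: min 0.5*||G^T lam||^2, lam >= 0, sum = 1
    lam = np.full(len(G), 1.0 / len(G))
    for k in range(fw_iters):
        grad_lam = G @ (G.T @ lam)       # gradient w.r.t. lam
        vertex = np.argmin(grad_lam)     # best simplex vertex
        direction = -lam
        direction[vertex] += 1.0
        lam = lam + (2.0 / (k + 2.0)) * direction
    return -(G.T @ lam)                  # negated min-norm element
```

Away from the kink the sampled gradients agree and the direction is an ordinary descent direction; near the kink the convex hull contains gradients of both signs, so the min-norm element (and hence the direction) shrinks, signaling approximate stationarity.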

- Nonlinear Optimization Algorithms for Potentially Infeasible Problems
**PIPAL: Penalty-Interior-Point Algorithm**

A prototype code for smooth constrained continuous optimization. The optimization problem is reformulated as a penalty-interior-point subproblem (with only equality constraints) to which a Newton method is applied. The slack variables are effectively eliminated during the solution process. A line search is employed to ensure convergence from remote starting points. The penalty and interior-point parameters are updated using an adaptive strategy (a conservative strategy is also implemented as an alternative) in order to achieve rapid convergence to an optimal solution or, if no feasible solution can be found, an infeasible stationary point. The code accepts AMPL input. Please e-mail me with any bug reports, comments, or suggestions — they would be greatly appreciated!

- Please refer to “A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization”
- Previous version(s): PIPAL 1.1 (zip), PIPAL 1.0 (tgz)
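As a minimal sketch of the penalty/barrier-plus-Newton idea (not PIPAL's actual subproblem, which also handles equality constraints and infeasibility detection), the following applies safeguarded Newton steps to a log-barrier function for a one-dimensional toy problem while the barrier parameter is driven to zero. All problem data and parameter values are illustrative assumptions.

```python
import numpy as np

def barrier_newton(mu_init=1.0, rho=1.0, shrink=0.5, outer=30, inner=50):
    """Minimize phi(x) = rho * f(x) - mu * log(c(x)) for decreasing mu,
    for the toy problem  min f(x) = x^2  s.t.  c(x) = x - 1 >= 0.
    (A log-barrier sketch only; PIPAL's penalty-interior-point
    subproblem is more general.)"""
    x = 2.0                                        # strictly feasible start
    mu = mu_init
    for _ in range(outer):
        for _ in range(inner):
            c = x - 1.0
            grad = 2.0 * rho * x - mu / c          # phi'(x)
            hess = 2.0 * rho + mu / c ** 2         # phi''(x)
            dx = -grad / hess                      # Newton step
            while x + dx - 1.0 <= 0.0:             # stay strictly feasible
                dx *= 0.5
            x += dx
            if abs(grad) < 1e-10:
                break
        mu *= shrink                               # drive barrier param to 0
    return x
```

As mu shrinks, the barrier minimizers trace a path converging to the constrained solution x = 1.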

- Quasi-Newton Methods for Optimization
**AggQN: Aggregated Quasi-Newton**

A prototype code for updating quasi-Newton (inverse) Hessian approximations for use in continuous optimization algorithms. The strategy is a limited-memory quasi-Newton approach with displacement aggregation.

- Please refer to “Limited-Memory BFGS with Displacement Aggregation”
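For context, the standard limited-memory BFGS machinery that such approaches build on can be sketched with the classical two-loop recursion below; the displacement-aggregation mechanism itself (which modifies the stored pair list) is not shown. The pair data in the usage example are arbitrary.

```python
import numpy as np

def lbfgs_two_loop(vec, pairs, gamma=1.0):
    """Apply the L-BFGS inverse-Hessian approximation to a vector via
    the standard two-loop recursion. `pairs` is a list of (s, y)
    displacement / gradient-change pairs, oldest first, each with
    s @ y > 0. (AggQN augments this kind of scheme with displacement
    aggregation, which is omitted here.)"""
    q = vec.copy()
    alphas = []
    for s, y in reversed(pairs):                   # newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    r = gamma * q                                  # initial matrix H0 = gamma*I
    for (s, y), a in zip(pairs, reversed(alphas)): # oldest pair first
        b = (y @ r) / (y @ s)
        r += (a - b) * s
    return r
```

A useful sanity check is the secant condition: applying the approximation to the most recent gradient change must reproduce the most recent displacement exactly.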

- Stochastic Algorithms for Nonconvex Constrained Optimization
**StochasticSQP: Stochastic Sequential Quadratic Optimization**

A prototype code for solving constrained continuous optimization problems using stochastic sequential quadratic optimization techniques. The algorithm presumes that constraint values and derivatives can be computed explicitly, but that only stochastic estimates of the objective gradient are available. The user has the option of choosing between multiple algorithm variants. Note that this is only a prototype implementation. Please e-mail me if you use the code or with any bug reports, comments, or suggestions — they would be greatly appreciated! Code written by Frank E. Curtis with contributions from Baoyu Zhou.

- Please refer to “Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization”
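The core step computation in this setting can be sketched as a single SQP step: solve the Newton-KKT system using exact constraint information but a (possibly noisy) objective gradient estimate. The toy problem and matrices below are illustrative assumptions, not StochasticSQP's interface; the full algorithm wraps such steps in adaptive merit-parameter and step-size rules.

```python
import numpy as np

def sqp_step(x, g_est, H, jac, c_val):
    """One SQP step: solve the KKT system
        [ H  A^T ] [ d ]   [ -g_est ]
        [ A   0  ] [ y ] = [ -c     ]
    where g_est may be a stochastic estimate of the objective gradient,
    while the constraint value c and Jacobian A are computed exactly."""
    A = np.atleast_2d(jac)
    n, m = len(x), A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-np.asarray(g_est), -np.atleast_1d(c_val)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]            # step d, multiplier estimate y

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0; solution (0.5, 0.5).
x = np.array([1.0, 0.0])
H = 2.0 * np.eye(2)                    # exact Hessian of the Lagrangian
A = np.array([1.0, 1.0])               # constraint Jacobian
d, y = sqp_step(x, 2.0 * x, H, A, x.sum() - 1.0)
```

With the exact gradient and exact Hessian, one step from any point solves this quadratic problem; with a noisy estimate the step is only accurate on average, which is what the stochastic analysis addresses.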

- Stochastic Algorithm for Nonconvex Unconstrained Optimization
**SCBFGS: Self-Correcting BFGS Algorithm for Stochastic Optimization**

A prototype code for stochastic optimization. The code allows various algorithmic options, each trying to exploit the self-correcting properties of BFGS-type updating. A sample logistic regression minimization problem is provided in order to illustrate how other problems can be formulated and solved with the code. Note that this is only a prototype implementation. Please e-mail me if you use the code or with any bug reports, comments, or suggestions — they would be greatly appreciated!

- Please refer to “A Self-Correcting Variable-Metric Algorithm for Stochastic Optimization”
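A flavor of why safeguarding BFGS-type updates matters in the stochastic setting can be given with Powell-style damping, a standard safeguard that is in the spirit of, but not identical to, the self-correcting mechanism studied for SCBFGS: the gradient difference is blended with the current curvature so that the update always preserves positive definiteness, even for noisy or negative-curvature pairs.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """BFGS update of a Hessian approximation B with Powell damping:
    y is blended toward B s so that the curvature condition s^T r > 0
    always holds, keeping the updated matrix positive definite.
    (A classical safeguard shown for illustration, not SCBFGS's
    exact mechanism.)"""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= theta_min * sBs:
        theta = 1.0                              # ordinary BFGS update
    else:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs           # damped gradient difference
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)
```

Even when fed a pair with negative curvature (s @ y < 0), for which the raw BFGS formula would destroy positive definiteness, the damped update returns a symmetric positive definite matrix.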

- Nonconvex Smooth Optimization
**TRACE: Trust-Region Algorithm with Contractions and Expansions**

A prototype code for solving smooth unconstrained optimization problems using a trust-region algorithm with contractions and expansions. The algorithm possesses optimal worst-case complexity for nonconvex smooth optimization. Note that this is only a prototype implementation. Please e-mail me if you use the code or with any bug reports, comments, or suggestions — they would be greatly appreciated! Code written by Frank E. Curtis with contributions from Mohammadreza Samadi.

- Please refer to “A Trust Region Algorithm with a Worst-Case Iteration Complexity….”
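For readers unfamiliar with the trust-region template, the following sketch runs the classical loop on a toy quadratic, contracting the radius after poor steps and expanding it after good boundary steps. TRACE's actual contraction and expansion rules, which yield the optimal complexity bound, are more sophisticated than this textbook version; the Cauchy-point step and all constants here are illustrative choices.

```python
import numpy as np

def trust_region(x, H, max_iter=200, delta=1.0, tol=1e-8):
    """Basic trust-region loop for f(x) = 0.5 x^T H x using the Cauchy
    point, with radius updates based on the ratio of actual to predicted
    reduction. (The classical template only, not TRACE's mechanism.)"""
    f = lambda z: 0.5 * z @ H @ z
    for _ in range(max_iter):
        g = H @ x
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        gHg = g @ H @ g
        # Cauchy step: minimize the model along -g inside the region
        alpha = min(delta / gnorm, gnorm ** 2 / gHg) if gHg > 0 else delta / gnorm
        d = -alpha * g
        pred = -(g @ d + 0.5 * d @ H @ d)          # predicted reduction
        ratio = (f(x) - f(x + d)) / pred
        if ratio < 0.25:
            delta *= 0.5                           # contraction
        else:
            x = x + d                              # accept the step
            if ratio > 0.75 and np.isclose(np.linalg.norm(d), delta):
                delta *= 2.0                       # expansion
    return x
```

On a quadratic the model is exact, so every step is accepted and the radius only expands; the loop reduces to steepest descent with exact line search and drives the gradient to zero.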

### Other Software

The following lists software written by others (in some cases with contributions from me) related to projects in which I have been involved.

- Large-Scale Nonlinear Optimization
- Our inexact interior-point algorithm for large-scale nonlinear optimization with inexact step computations has been implemented in **Ipopt** and can be used along with the iterative linear system solver in **Pardiso**. Please use the Ipopt option **inexact_algorithm yes** and e-mail me with any questions, bug reports, comments, or suggestions — they would be greatly appreciated!

- Please refer to “A Note on…”.

- Nonconvex, Nonsmooth Optimization
**GRANSO: GRadient-based Algorithm for Nonsmooth Optimization**

- Please refer to “A BFGS-SQP Method…”.

- Primal-Dual Active-Set Methods for Convex Quadratic Optimization

**pypdas**

Python software for a primal-dual active-set method for solving general convex quadratic optimization problems; written by Zheng Han.

**ipdas**

Python software for a primal-dual active-set method with inexact subproblem solves for solving certain convex quadratic optimization problems, typically arising in optimal control; written by Zheng Han.
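To give a sense of the primal-dual active-set idea, here is a minimal iteration for the special case of a bound-constrained QP (the packages above treat more general convex QPs). Each pass guesses the active set from the current primal-dual pair, sets the active primal variables and inactive multipliers to zero, solves for the rest, and repeats until the guess stabilizes. The test matrix below is an arbitrary positive definite example.

```python
import numpy as np

def pdas(Q, b, max_iter=50):
    """Primal-dual active-set iteration for
        min 0.5 x^T Q x - b^T x  s.t.  x >= 0,
    with multipliers z = Q x - b. (A sketch for bound constraints only.)"""
    n = len(b)
    active = np.zeros(n, dtype=bool)   # initial guess: no active bounds
    x = np.zeros(n)
    z = np.zeros(n)
    for _ in range(max_iter):
        x = np.zeros(n)
        z = np.zeros(n)
        inact = ~active
        # x_A = 0 and z_I = 0; solve for the free variables, then recover
        # the multipliers on the active set from z = Q x - b
        x[inact] = np.linalg.solve(Q[np.ix_(inact, inact)], b[inact])
        z[active] = Q[np.ix_(active, inact)] @ x[inact] - b[active]
        new_active = (x - z) < 0       # next active-set estimate
        if np.array_equal(new_active, active):
            break                      # partition stable: KKT point found
        active = new_active
    return x, z
```

At termination the pair (x, z) satisfies the KKT conditions: primal and dual feasibility plus complementarity.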

- Optimization Benchmarking

**betaRMP: beta-Relative Minimization Profiles**

A benchmarking visualization tool for creating plots that concisely compare optimization methods evaluated on large heterogeneous sets of test problems; written by Tim Mitchell.

**PSARNOT: (Pseudo)Spectral Abscissa|Radius Nonsmooth Optimization Test**

A test set for evaluating methods for nonsmooth optimization; written by Tim Mitchell.