1. Stochastic Trust Region Methods: Trust region and line search methods are two different families of algorithms in nonlinear optimization. Stochastic gradient descent is the stochastic version of a line search method, in the sense that the full true gradient is replaced by a stochastic gradient. My interest is in finding the stochastic version of the trust region method, in which the trust region sub-problem is stochastic. Notice that stochastic gradient descent can be rewritten in the trust region framework. I want to determine under what assumptions the stochastic trust region algorithm converges to a stationary point, and what the convergence rate is in that case. The function class I am interested in is convex functions with Lipschitz continuous gradients.
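As a minimal sketch of the rewriting mentioned above: if the trust region model is purely linear in the stochastic gradient g, the sub-problem min { gᵀs : ‖s‖ ≤ Δ } has the closed-form solution s = −Δ g/‖g‖, so choosing the radius Δ = η‖g‖ recovers exactly the SGD step −ηg. The numbers below are illustrative, not from the source.

```python
import numpy as np

def tr_subproblem_linear(g, delta):
    """Minimize the linear model m(s) = g^T s over the ball ||s|| <= delta.

    The minimizer lies on the boundary, in the direction opposite to g.
    """
    norm_g = np.linalg.norm(g)
    if norm_g == 0.0:
        return np.zeros_like(g)
    return -delta * g / norm_g

# Hypothetical stochastic gradient estimate and step size.
g = np.array([3.0, -4.0])
eta = 0.1

# With radius delta = eta * ||g||, the trust region step equals -eta * g,
# i.e. the (stochastic) gradient descent step.
s_tr = tr_subproblem_linear(g, eta * np.linalg.norm(g))
s_sgd = -eta * g
print(np.allclose(s_tr, s_sgd))  # the two steps coincide
```

This only shows the correspondence for the linear model; a genuine stochastic trust region method would use a quadratic model and an acceptance/radius-update rule, which is where the convergence analysis becomes nontrivial.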
2. Classification Problem in High Frequency Trading: This problem can be approached with both supervised and unsupervised learning. In the supervised setting, the work that needs to be done is essentially feature selection: we have data labeled with the classes buy and sell, together with a large number of candidate features, and the task is to select the features that improve the ‘winning chance’. The unsupervised setting is more complicated: we have no labeled data, but we want to discover under what circumstances we should buy and under what circumstances we should sell. Recently, I noticed that reinforcement learning techniques may be very helpful for this problem.
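The supervised feature-selection step above can be sketched with a univariate filter such as scikit-learn's SelectKBest. The data here are synthetic and hypothetical (labels 1 = buy, 0 = sell, with only the first two features carrying signal); a real pipeline would use actual trade features and likely a more sophisticated selector.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n, p = 500, 20

# Hypothetical feature matrix: p candidate features for n trading decisions.
X = rng.normal(size=(n, p))

# Synthetic buy/sell labels: only features 0 and 1 are informative.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

# Keep the 5 features with the highest ANOVA F-score against the label.
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
selected = np.flatnonzero(selector.get_support())
print(selected)  # the informative features 0 and 1 should appear here
```

A univariate filter is only a first pass: it scores each feature independently, so interactions between features (common in order-book data) would call for wrapper or embedded methods such as L1-regularized models.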