In my current role at UTD, my work centers on projects at the intersection of stochastic linear and nonlinear programming, power grids and energy systems, renewable energy, and EV infrastructure planning. Previously, my Ph.D. research focused on operations research, game theory, and algorithm development for ill-posed and large-scale optimization problems. During my first postdoctoral position, I explored theoretical aspects of machine learning and federated learning through the lens of operations research.
Research Interests
Operations research (Stochastic linear programming, Nonlinear programming, Large-scale optimization)
Power grids and energy systems
Renewable energy and EV infrastructure planning
Alternative fuels for all-electric ships (Nuclear, Hydrogen, Ammonia)
Machine learning and reinforcement learning
Game theory
Research Projects
Post-disaster Grid Restoration Considering Coupled Transportation-Power Networks (ongoing)
EV Infrastructure Planning: Integrating Demand Estimation, Coverage Optimization, and Grid Load Management (ongoing)
Optimal Design of All-Electric Ships for Maritime Decarbonization Under Voyage Uncertainties (ongoing)
Optimal Operations of Nuclear-based Integrated Energy Systems (ongoing)
Convergence Rates for Optimization Problems with Variational Inequality Constraints (completed)
We consider a class of optimization problems with Cartesian variational inequality (CVI) constraints. We introduce a unifying problem formulation that can address optimization problems with complicated constraints, such as (1) equilibrium constraints, (2) complementarity constraints, or (3) an inner-level large-scale optimization problem. The main application arises from the notion of efficiency of equilibria in multi-agent networks, for example, communication networks and power systems. We develop a first-order method, an averaging randomized block iteratively regularized gradient scheme, and derive non-asymptotic rate statements for suitably defined suboptimality and infeasibility error metrics. To our knowledge, this is the first work to provide rate guarantees for this class of problems. We carry out numerical experiments on computing the best Nash equilibrium in a networked Cournot competition model.
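To make the structure of the scheme concrete, below is a minimal Python sketch of the kind of update involved, run on a toy instance rather than the Cournot model from the paper: at each iteration a coordinate block is sampled uniformly, an iteratively regularized map (the CVI map plus a vanishing multiple of the secondary-objective gradient) is evaluated, only the sampled block is updated, and a weighted average of the iterates is maintained. The data, stepsize, and regularization decays are illustrative assumptions, not the tuned sequences analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient inner problem: min_x 0.5*||A x - b||^2 has infinitely many
# solutions (x1 + x2 = 2, x3 = 1); the secondary metric f(x) = 0.5*||x||^2
# singles out the minimum-norm solution [1, 1, 1].
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([2.0, 1.0])

F = lambda x: A.T @ (A @ x - b)   # CVI map (here: gradient of the inner objective)
grad_f = lambda x: x              # gradient of the secondary metric

n = 3
x = np.zeros(n)
x_avg, weight_sum = np.zeros(n), 0.0

for k in range(1, 200_001):
    gamma = 0.5 / k**0.75         # stepsize sequence (illustrative)
    eta = 1.0 / k**0.25           # regularization parameter, eta -> 0
    i = rng.integers(n)           # uniformly sampled block (coordinate)
    g = F(x) + eta * grad_f(x)    # iteratively regularized map
    x[i] -= gamma * g[i]          # update only the sampled block
    weight_sum += gamma           # gamma-weighted averaging of the iterates
    x_avg += (gamma / weight_sum) * (x - x_avg)

print("averaged iterate:", np.round(x_avg, 2))
# should approach the minimum-norm inner solution [1, 1, 1]
# (slowly, for these simple parameter choices)
```

Because the regularization parameter vanishes more slowly than the stepsize, the averaged iterate is steered toward the inner solution that is best with respect to the secondary metric.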
Distributed Optimization for Problems with Variational Inequality Constraints (completed)
We consider a class of constrained multi-agent optimization problems in which the goal is to cooperatively minimize the sum of agent-specific objective functions. The challenge is that information about the objective functions and the constraints (and, implicitly, the underlying dataset) is available only locally to each agent. To address this challenge, we provide a unifying problem formulation that covers distributed optimization problems with (1) complementarity constraints, (2) local nonlinear inequality and linear equality constraints, or (3) coupling nonlinear equality constraints. We develop an iteratively regularized incremental gradient method in which the agents communicate over a cycle graph at every iteration. We derive non-asymptotic agent-wise convergence rates for the suboptimality and the infeasibility of the variational inequality constraints. A distinctive feature of this work is the introduction of agent-wise averaging, which allows us to establish agent-wise convergence rate statements. We also provide preliminary numerical experiments for computing the best equilibrium in a transportation network problem and compare the proposed scheme with existing incremental gradient methods on constrained finite-sum problems.
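As a rough illustration of the agent-wise structure, the sketch below runs an iteratively regularized incremental pass over three agents placed on a cycle: each agent applies its own piece of the constraint map plus a vanishing multiple of its local objective gradient, and keeps a running average of the iterates it produces. The data and parameter decays are toy assumptions chosen so the target point is easy to verify; this is not the transportation network experiment from the paper.

```python
import numpy as np

# Three agents on a cycle. Each agent privately holds one row (a_i, b_i) of a
# consistent, rank-deficient inner problem (its solution set is the line
# x1 + x2 = 2) and a local objective f_i(x) = 0.5*||x - d_i||^2. The point on
# the line minimizing sum_i f_i is [1, 1].
a = [np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([-1.0, -1.0])]
b = [2.0, 4.0, -2.0]
d = [np.array([1.0, 1.0]), np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
m = 3

x = np.zeros(2)
x_avg = [np.zeros(2) for _ in range(m)]   # agent-wise averaged iterates
w = 0.0

for k in range(1, 50_001):
    gamma = 0.1 / k**0.75        # stepsize sequence (illustrative)
    eta = 1.0 / k**0.25          # regularization parameter, eta -> 0
    w += gamma
    for i in range(m):           # one pass of the iterate around the cycle graph
        F_i = a[i] * (a[i] @ x - b[i])      # agent i's piece of the constraint (VI) map
        grad_fi = x - d[i]                  # gradient of agent i's local objective
        x = x - gamma * (F_i + eta * grad_fi)       # regularized incremental step
        x_avg[i] += (gamma / w) * (x - x_avg[i])    # averaging kept by agent i

print("agent-wise averages:", [np.round(v, 2) for v in x_avg])
# each agent's average should approach [1, 1] as k grows
```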
Incremental Gradient Method for Large-scale Distributed Nonlinearly Constrained Optimization (completed)
We consider a finite-sum problem over a hard-to-project constraint set. Among the well-known avenues for addressing finite-sum problems is the class of incremental gradient (IG) methods, in which a single component function is selected at each iteration in a cyclic or randomized manner. An important observation is that when the problem is constrained, existing IG schemes (including projected IG, proximal IAG, and SAGA) require a projection onto the feasible set at every iteration. Consequently, their performance suffers when the problem includes (1) nonlinear constraints or (2) a large number of linear constraints. We develop an averaged iteratively regularized incremental gradient scheme that does not involve any hard-to-project computation. Under mild assumptions, we derive non-asymptotic rates of convergence for both suboptimality and infeasibility metrics. Numerically, we show that the proposed scheme outperforms standard projected IG methods on distributed soft-margin support vector machine problems. A small illustrative sketch of the projection-free update appears after the presentation link below.
The presentation for ACC 2021 is available at https://youtu.be/qrYmnRaEZLk
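The sketch below illustrates the projection-free idea on a toy constrained finite-sum problem: the hard constraint (a nonlinear inequality here) is encoded through a smooth infeasibility measure, and each cyclic incremental step combines the gradient of that measure with a vanishing multiple of a component objective's gradient, so no projection onto the feasible set is ever computed. The penalty, constraint, data, and parameter decays are illustrative assumptions, not the SVM setup from the paper.

```python
import numpy as np

# Toy finite-sum problem: minimize sum_i 0.5*||x - d_i||^2 subject to the
# nonlinear constraint ||x||^2 <= 1, which we never project onto. Instead,
# infeasibility is measured by g(x) = 0.25*max(0, ||x||^2 - 1)^2, whose
# gradient is max(0, ||x||^2 - 1) * x. The constrained minimizer is
# d_bar/||d_bar|| (about [0.71, 0.71] for this data).
d = [np.array([3.0, 0.0]), np.array([0.0, 3.0]), np.array([3.0, 3.0])]
m = len(d)

grad_g = lambda x: max(0.0, x @ x - 1.0) * x   # gradient of the infeasibility measure
x = np.zeros(2)

for k in range(1, 100_001):
    gamma = 0.1 / k**0.75        # stepsize sequence (illustrative)
    eta = 1.0 / k**0.25          # regularization parameter, eta -> 0
    for i in range(m):           # cyclic selection of a single component function
        grad_fi = x - d[i]       # gradient of the i-th component objective
        # regularized incremental step: infeasibility term plus a vanishing
        # multiple of the component gradient; no projection is ever computed
        x = x - gamma * (grad_g(x) + eta * grad_fi)

print("final iterate:", np.round(x, 2))
# approaches the constrained minimizer (about [0.71, 0.71]) as eta vanishes,
# slowly for these simple parameter choices
```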
Randomized Block Coordinate Iterative Regularized Subgradient Method for High-dimensional Ill-posed Convex Optimization (completed)
Motivated by ill-posed optimization problems arising in image processing, we consider a bilevel optimization model in which we seek, among the optimal solutions of the inner-level problem, a solution that minimizes a secondary metric. The minimal norm gradient, sequential averaging, and iterative regularization schemes are among the well-known approaches to this class of problems; to the best of our knowledge, none of them addresses nondifferentiability of the objective function together with high dimensionality of the solution space. To address these shortcomings, we develop a randomized block coordinate iterative regularized subgradient scheme. Under a uniform selection of the blocks and a careful choice of the stepsize and regularization parameter sequences, we establish convergence and derive a rate statement in terms of the expected objective value of the inner-level problem. We demonstrate the performance of the proposed scheme on ill-posed problems arising in image processing.
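A minimal sketch of the scheme's structure is given below, with a small toy nonsmooth, ill-posed inner problem standing in for an image processing instance: at each iteration a coordinate block is sampled uniformly, and a regularized subgradient step (a subgradient of the inner objective plus a vanishing multiple of the gradient of the secondary metric) is applied to that block only. The data and parameter sequences are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed, nonsmooth inner problem: minimize ||A x - b||_1, which has
# infinitely many minimizers because A is rank-deficient (x1 + 2*x2 = 3,
# x3 = 1). The secondary metric 0.5*||x||^2 singles out the minimum-norm
# minimizer, about [0.6, 1.2, 1.0].
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([3.0, 1.0])

subgrad_inner = lambda x: A.T @ np.sign(A @ x - b)   # a subgradient of ||Ax - b||_1
grad_outer = lambda x: x                             # gradient of the secondary metric

n = 3
x = np.zeros(n)

for k in range(1, 100_001):
    gamma = 1.0 / k**0.75        # stepsize sequence (illustrative)
    eta = 1.0 / k**0.25          # regularization parameter, eta -> 0
    i = rng.integers(n)          # uniformly selected block (coordinate)
    g = subgrad_inner(x) + eta * grad_outer(x)   # regularized subgradient
    x[i] -= gamma * g[i]         # step on the sampled block only

print("final iterate:", np.round(x, 2))  # should be near [0.6, 1.2, 1.0]
```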