Research

Our research spans the fields of signal processing and optimization for machine learning, focusing on theoretically justified methods that are also practical.

Low Pass Graph Signal Processing and Graph Learning

A common task in data science is to learn a graph representation from real-life data, which is then utilized for downstream decision making. Our research focuses on building graph learning tools based on graph signal processing (GSP) inspired data models with "low pass" filtering. In other words, the purported data generative model consists of a graph filter that retains only the low-frequency content, effectively implying the common notion of "smooth graph signals". Such low pass GSP models are prevalent in the dynamics of social networks, financial networks, etc. Low pass graph signals entail two important properties: (i) smoothness and (ii) low rankness. Our works show that the spectral properties of such low pass graph signals can be leveraged for graph topology learning from nodal observations. Recently, we have applied these techniques to (a) detection of low pass graph signals, (b) multiplex graph learning, (c) graph topology learning from partial observations, etc.

Selected Publications:
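To illustrate the two properties above, here is a minimal sketch (all names, graph sizes, and parameter values are illustrative, not from our papers): white excitation passed through an ideal low pass graph filter yields signals that are both smooth, i.e., have small Laplacian quadratic form, and low rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random graph on n nodes.
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, no self loops
L = np.diag(A.sum(axis=1)) - A                # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of L, sorted by graph frequency lam.
lam, V = np.linalg.eigh(L)

# Ideal low pass graph filter: keep only the K lowest graph frequencies.
K = 3
h = (np.arange(n) < K).astype(float)          # low pass frequency response
H = V @ np.diag(h) @ V.T

# Observed signals: white excitation filtered by H.
X = H @ rng.standard_normal((n, 100))

# (ii) Low rankness: the signals span only the K lowest-frequency modes.
rank = np.linalg.matrix_rank(X)               # equals K = 3

# (i) Smoothness: Dirichlet energy per unit signal energy stays below the
# K-th graph frequency, since X lies in the span of the low-frequency modes.
smoothness = np.trace(X.T @ L @ X) / np.trace(X.T @ X)
```

The rank of the observations equals the filter's cutoff K regardless of the number of samples, which is the spectral pattern our graph learning methods exploit.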
Check out the publications for more details, as well as the Slides for a plenary talk (YouTube) I gave at the GSP Workshop 2023.

Decision-dependent Stochastic Approximation Algorithms

Stochastic approximation (SA) is the workhorse behind many online algorithms relying on streaming data, and it has found applications in reinforcement learning and statistical learning. Our works consider a setting in which the streaming data is not i.i.d., but is instead correlated and decision dependent. Our research hinges on the analysis of SA schemes whose drift term depends on a decision-dependent Markov chain and whose mean field is not necessarily a gradient map, leading to asymptotic bias in the one-step updates. The framework extends to other problems: (a) policy optimization, where the decision, i.e., the policy, affects future state generation; (b) performative prediction, where the decision vector shapes the distribution of future predictions and samples; (c) bi-level optimization, where a leader optimizes a decision using samples or gradient information supplied by a follower solving a lower-level optimization problem.

Selected Publications:
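A minimal sketch of the decision-dependent setting, in the spirit of performative prediction (the loss, distribution map, and step sizes below are illustrative toy choices, not taken from our papers): each sample is drawn from a distribution that shifts with the currently deployed decision, so the SA iterates settle at a performatively stable point rather than the static minimizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: minimize the loss (theta - z)**2 / 2, where each sample z is
# drawn from the decision-dependent distribution D(theta) = N(mu + eps*theta, 1).
mu, eps = 1.0, 0.5
theta = 0.0

for t in range(1, 20001):
    z = mu + eps * theta + rng.standard_normal()  # sample shifts with theta
    step = 1.0 / t ** 0.7                          # diminishing step size
    theta -= step * (theta - z)                    # one-step SA update

# The mean field of the drift is (1 - eps) * theta - mu, whose root is the
# performatively stable point mu / (1 - eps) = 2.0; the iterates converge
# there, not to the static minimizer mu = 1.0.
```

The gap between 2.0 and 1.0 is precisely the bias induced by the decision-dependent sampling that a naive i.i.d. analysis would miss.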
Check out the publications for more details, as well as some Slides from a talk I gave at URJC, Spain.

Optimization for Large-scale Machine Learning and Signal Processing

We study algorithms for large-scale machine learning and information processing to handle the challenges of 'big data'. We focus on two interconnected aspects: first, we study algorithms that run on systems of interconnected agents, so that the computation burden can be distributed evenly across the network; second, we study algorithms that feed on stochastic (e.g., streaming or dynamical) data. In both cases, we aim to provide rigorous performance analysis for convex and non-convex optimization models. Recently, we have considered communication-efficient designs that work "robustly" on physical networks with non-ideal communication architectures.

Selected Publications:
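A minimal sketch of the first aspect, using decentralized gradient descent as a representative method (the ring network, quadratic local losses, mixing weights, and step-size rule are illustrative choices, not our specific algorithms): each agent alternates between averaging with its neighbors and taking a local gradient step, so the computation is spread evenly across the network.

```python
import numpy as np

# Ring network of n agents; agent i holds the local loss (x - b[i])**2 / 2,
# so the global minimizer of the sum is mean(b).
n = 5
b = np.arange(1.0, n + 1.0)

# Doubly stochastic mixing matrix: each agent averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                    # each agent keeps its own copy of the decision
for t in range(1, 5001):
    grad = x - b                   # local gradients, computable in parallel
    x = W @ x - (0.5 / t) * grad   # mix with neighbors, then local descent

# With diminishing steps, all agents reach approximate consensus on the
# global minimizer mean(b) = 3.0, using only neighbor-to-neighbor messages.
```

Only the mixing step `W @ x` requires communication, which is why the choice of mixing matrix, and its robustness to non-ideal links, is central to the communication-efficient designs mentioned above.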
Check out the publications for more details.