Next Talk

The OWSP will be hosting the online version of the STATOS Workshop 2022, to be held in Belgrade on August 28, 2022, in memory of Prof. Alex Gershman. The workshop will feature talks from world-class experts in wireless communications, distributed/federated learning, and learning-communication co-design, including

TBA 
  • Date/Time: August 28, 2022 / 1:00pm to 5:30pm (your local time)

  • Meeting Link: (please subscribe to our mailing list)

The talks at the workshop will be streamed directly from Belgrade via Zoom. Please see the STATOS Workshop 2022 page for the detailed schedule.

Upcoming Talks (all dates/times are in Paris time)

Click on the [+] button below to see details about each talk. Click here if you would like to import the seminar information into iCal/Google Calendar (select Google Calendar if you prefer an automatically updated schedule on your calendar).

 
  • Title: TBA

  • Abstract: TBA.

Past Talks

You can find the recorded seminars on the Youtube channel or the Bilibili channel. Click on the [+] button below to see details about each talk.

  • Spring/Summer 2022

 
  • Title: Physics-Assisted Deep Learning for Radar Imaging and Classification Problems

  • Abstract: This talk applies deep learning to radar imaging and classification problems. Solving wave imaging problems using machine learning (ML) has attracted researchers' interest in recent years. However, most existing works directly adopt ML as a black box. In fact, researchers have gained, over several decades, much insightful domain knowledge about wave physics, and some of these physical laws have well-known mathematical properties (even analytical formulas) that need not be learnt by training on large amounts of data. This talk demonstrates that it is of paramount importance to address how to profitably combine ML with the available knowledge of the underlying wave physics. Besides imaging, another important application of wave sensing is target classification. This talk proposes a high-accuracy and efficient ML-based classification method for frequency-modulated continuous-wave (FMCW) radar. Instead of directly choosing measured data as the input of the neural network, we start from the first principles of wave physics to design a low-dimensional input. The proposed classifier is applied to an automotive radar system, where road targets are classified into five categories: pedestrian, bike, sedan, truck/bus, and other static objects. The proposed physics-assisted classifier is tested on real-world data obtained from 77-GHz FMCW radars and proves competitive with state-of-the-art methods in automotive radar applications. (A minimal sketch of a physics-based radar feature follows this entry.)

  • Speaker Homepage
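
As context for the "physics-assisted" input design mentioned above, here is a minimal, self-contained sketch of a classic physics-based radar feature: the range-Doppler map of a simulated FMCW beat signal, computed with a 2-D FFT. The radar parameters and the feature itself are illustrative assumptions, not necessarily those used in the talk.

```python
# Physics-based FMCW feature sketch: a range-Doppler map via 2-D FFT of the
# dechirped beat signal (illustrative parameters, single point target).
import numpy as np

c, fc, B, Tc = 3e8, 77e9, 300e6, 50e-6   # speed of light, carrier, sweep BW, chirp time
Ns, Nc = 256, 64                          # samples per chirp, chirps per frame
fs = Ns / Tc                              # fast-time sampling rate
R, v = 30.0, 5.0                          # target range (m) and radial speed (m/s)

n = np.arange(Ns) / fs                    # fast time within one chirp
m = np.arange(Nc)[:, None] * Tc           # slow time across chirps
f_beat = 2 * B * R / (c * Tc)             # range-induced beat frequency
f_dopp = 2 * v * fc / c                   # Doppler shift
beat = np.exp(1j * 2 * np.pi * (f_beat * n + f_dopp * (m + n)))

rd_map = np.abs(np.fft.fftshift(np.fft.fft2(beat), axes=0))  # Doppler x range
_, rng_bin = np.unravel_index(rd_map.argmax(), rd_map.shape)
print(rng_bin * c / (2 * B))              # range estimate, ~30 m
```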

 
  • Title: Model Based Deep Learning: Applications to Imaging and Communications

  • Abstract: Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., a lack of interpretability and the need for very large training sets. On the other hand, signal processing and communications have traditionally relied on classical statistical modeling techniques that utilize mathematical formulations representing the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies, and may lead to poor performance when real systems display complex or dynamic behavior. Here we introduce various approaches to model-based learning which merge parametric models with optimization tools, leading to efficient, interpretable networks trained from reasonably sized training sets. We will consider examples of such model-based deep networks applied to image deblurring, image separation, super-resolution in ultrasound and microscopy, and efficient communication systems, and finally we will see how model-based methods can also be used for efficient diagnosis of COVID-19 using X-ray and ultrasound.

  • Speaker Homepage

  • Video link (Youtube)

 
  • Title: Function Space Models in Deep Learning

  • Abstract: Neural network architectures are mostly derived based on heuristics, intuitions, and experimental trial and error. The choice of architecture defines a class of parametric models, but precisely what kinds of functions these models represent is difficult to characterize. In approximation theory and nonparametric statistics, the term "model class" refers to a well-defined subset of functions in a Hilbert or Banach space, typically a ball defined by the norm associated with the space. However, classical function spaces (e.g., Hölder, Sobolev, Besov) fail to capture important characteristics of neural networks. Our main result unifies these two modeling viewpoints: deep neural networks are exact solutions to nonparametric estimation problems in a novel type of "mixed variation" function space. These spaces, characterized by notions of total variation in the Radon (transform) domain, include multivariate functions that are very smooth in all but a small number of directions. Spatial inhomogeneity of this sort leads to a fundamental gap between the performance of neural networks and linear methods (which include kernel methods), explaining why neural networks can outperform classical methods for high-dimensional function estimation. Our theory provides new insights into the practices of "weight decay," "overparameterization," and adding linear connections and layers to network architectures. It yields a deeper understanding of the role of sparsity and of (avoiding) the curse of dimensionality. And lastly, the theory leads to new and improved neural network architectures and regularization methods. This talk is based on joint work with Rahul Parhi.

  • Speaker Homepage

  • Video link (Youtube)

 
  • Title: Study on Physics Embedded Deep Learning Techniques for Electromagnetic Imaging

  • Abstract: In recent years, research in deep learning techniques has attracted much attention. With the help of big data technology, massively parallel computing, and robust optimization algorithms, deep learning has greatly improved the performance of many applications in speech and image research. As deep learning techniques develop, improvements in learning capacity may allow machines to "learn" from a large amount of data and "master" the physical laws under certain controlled boundary conditions. In electromagnetic engineering, physical laws set major guidelines for research and development. They capture the nature of the physical world and are universal across various scenarios. Incorporating physical principles into the deep learning framework will significantly improve the learning capacity and generalization ability of deep neural networks, hence increasing the accuracy and reliability of deep learning techniques in modeling electromagnetic phenomena. In this work, we study several techniques for embedding physical simulation into deep learning to model electromagnetic wave propagation. With the help of both physical simulation and deep learning, we can improve both the accuracy and the computational efficiency of electromagnetic data inversion. In the long run, hybridizing fundamental physical principles with "knowledge" from big data could unleash numerous possibilities in electromagnetic engineering that used to be impossible due to the limits of available data and computational power. This will help electromagnetic technologies become more automatic, more accurate, and more reliable.

  • Speaker Homepage

  • Video link (Youtube)

 
  • Title: Model Based Machine Learning Via Algorithm Unrolling Techniques: Foundations, Algorithms, and Applications

  • Abstract: Recent years have witnessed a surge of interest in model-based deep learning methods to tackle various inverse problems arising in signal processing, image processing, and machine learning. A popular approach — algorithm unfolding or unrolling — relies on the conversion of an iterative solver for the inverse problem into a neural network structure. The different iterations of the iterative algorithm correspond to different layers of the neural network, with layer parameters corresponding to solver parameters. However, instead of being fixed, these layer parameters are optimised in a data-driven manner using learning algorithms such as stochastic gradient descent. These approaches therefore combine model-based techniques with data-driven ones, capturing domain knowledge via the iterative algorithm that is used to construct the neural network structure. These techniques have also been shown to deliver better performance than standard, purely data-driven deep learning methods in a number of signal and image processing challenges. This talk will overview recent advances in the foundations and applications of algorithm unfolding techniques. Concretely, it will introduce theoretical advances in the area of algorithm unfolding, including learning-based guarantees; it will also introduce algorithmic advances in algorithm unfolding; and, finally, it will showcase a portfolio of emerging signal & image processing challenges that benefit from algorithm unfolding approaches. (A minimal unrolling sketch follows this entry.)

  • Speaker Homepage

  • Video link (Youtube)
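
To make the unrolling recipe concrete, here is a minimal sketch of ISTA for sparse recovery written as a K-layer network. The weights W1, W2 and the threshold are fixed by the model below; in a learned variant (LISTA-style) they would be trained per layer. This is a sketch of the general recipe under these assumptions, not a reproduction of the talk's methods.

```python
# Algorithm unrolling sketch: ISTA for min_x 0.5*||y - Ax||^2 + lam*||x||_1,
# viewed as a K-layer network. In LISTA, W1, W2, theta become learnable.
import numpy as np

def soft_threshold(z, theta):
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def unrolled_ista(y, A, lam=0.1, K=10):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    W1 = A.T / L                            # "input" weight (learnable in LISTA)
    W2 = np.eye(A.shape[1]) - A.T @ A / L   # "recurrent" weight (learnable)
    theta = lam / L                         # per-layer threshold (learnable)
    x = np.zeros(A.shape[1])
    for _ in range(K):                      # each iteration = one network layer
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100)) / np.sqrt(30)
x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -0.5, 2.0]
y = A @ x_true + 0.01 * rng.normal(size=30)
print(unrolled_ista(y, A, K=50)[[5, 40, 77]])  # approx. the three active coefficients
```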

  • Fall 2021

 
  • Title: Graph Neural Networks

  • Abstract: Graphs are generic models of signal structure that can help learning in several practical problems. To learn from graph data, we need scalable architectures that can be trained on moderate dataset sizes and that can be implemented in a distributed manner. Drawing from graph signal processing, we define graph convolutions and use them to introduce graph neural networks (GNNs). We prove that GNNs are permutation equivariant and stable to perturbations of the graph, properties that explain their scalability and transferability. These results help to understand the advantages of GNNs over linear graph filters. Introducing the problem of learning decentralized controllers, we discuss how GNNs naturally leverage the partial information structure inherent to distributed systems in order to learn useful, efficient controllers. Using flocking as an illustrative example, we show that GNNs not only successfully learn distributed actions that coordinate the team, but also transfer and scale to larger teams. (A minimal graph-convolution sketch follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube)
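
As a concrete companion to the graph-convolution definition, here is a minimal sketch of a polynomial graph filter and a single GNN layer, together with a numerical check of permutation equivariance; the toy graph and filter taps are illustrative assumptions.

```python
# Graph convolution sketch: a filter y = sum_k h_k S^k x with shift operator S,
# and a GNN layer built from such filters plus a pointwise nonlinearity.
import numpy as np

def graph_filter(S, x, h):
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx              # shift the signal one more hop
    return y

def gnn_layer(S, X, H):
    # X: (n_nodes, f_in); H: list of (f_in, f_out) taps, one per filter order
    Z = sum(np.linalg.matrix_power(S, k) @ X @ Hk for k, Hk in enumerate(H))
    return np.maximum(Z, 0.0)      # ReLU

# Toy check of permutation equivariance: relabeling nodes permutes the output.
S = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
X = np.arange(6, dtype=float).reshape(3, 2)
H = [np.ones((2, 2)), 0.5 * np.ones((2, 2))]
P = np.eye(3)[[2, 0, 1]]           # a permutation matrix
out1 = P @ gnn_layer(S, X, H)
out2 = gnn_layer(P @ S @ P.T, P @ X, H)
print(np.allclose(out1, out2))     # True
```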

 
  • Title: Rebuilding the Theoretical Foundations of Communications and Computing

  • Abstract: We are arriving at the end of an era that has guided ICT for the last century. Remarkably, many of the engineering breakthroughs in communications (the famous "G" era) and computing (the famous "Moore's" era) were based on quite old fundamentals. Indeed, the Nyquist sampling theorem dates back to 1924, Shannon's law to 1948, and the von Neumann architecture to 1946. Today, we are desperately lacking guidance for new engineering solutions as we have approached those limits, and the whole industry needs to take its share of responsibility by re-investing massively in the fundamentals to revive a new century of engineering progress. In this talk, we will revisit the assumptions made a century ago and provide a research roadmap showcasing the fundamental role of mathematics and physics in unlocking the theoretical barriers.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: Waveform Design for Integrated Sensing and Communications

  • Abstract: Integrated radio frequency sensing and communications systems operate in shared, and often congested or even contested, spectrum with the goal of providing both reliable communication and radar capabilities. Radars have a multitude of tasks and performance criteria that are often handled in a simplistic manner; we intend to shed more light on radar tasks and objectives. We consider scenarios where sensing and communications systems cooperate or are co-designed for mutual benefit. Multiuser systems that share dynamic spectrum among different radio systems performing communications or sensing are of particular interest. In multiuser systems, interference is the main factor limiting performance. We will present methods for both radar-centric and communication-centric waveform designs and for interference management in cooperative and co-designed systems. The design examples include multicarrier waveforms, beamformers, and precoding and decoding techniques based on interference alignment.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: Intelligent Reflecting Surfaces for Free-space Optical Communications

  • Abstract: Intelligent reflecting surfaces (IRSs) have the potential to transform both wireless radio frequency (RF) and free-space optics (FSO) communication channels into smart, reconfigurable propagation environments. While the RF case has been heavily researched in recent years, optical IRSs have received much less attention despite their potential to mitigate the inhibiting line-of-sight constraint of FSO systems. FSO systems are able to offer the high-data-rate, secure, and cost-efficient communication links required for applications such as wireless front- and backhauling for 5G and 6G communication networks. Despite the substantial advancement of FSO systems over the past decades, the requirement of a line-of-sight connection between transmitter and receiver remains a key limiting factor for their deployment. In this presentation, we give an overview of existing optical IRS technologies, compare optical with RF IRSs, develop models for the end-to-end FSO channel, and present IRS designs for multi-link FSO systems. Finally, promising directions for future research on IRS-assisted FSO systems are provided.

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides

 
  • Title: Unlimited Sampling from Theory to Practice

  • Abstract: Shannon's sampling theorem is one of the cornerstone topics that is well understood and explored, both mathematically and algorithmically. That said, practical realizations of this theorem still suffer from a severe bottleneck due to the fundamental assumption that the samples can span an arbitrary range of amplitudes. In practice, the theorem is realized using so-called analog-to-digital converters (ADCs), which clip or saturate whenever the signal amplitude exceeds the maximum recordable ADC voltage, thus leading to significant information loss. In contrast, the Unlimited Sampling Framework, an alternative paradigm for sensing and recovery recently developed by the speaker jointly with Bhandari and Raskar, is based on the observation that when a signal is mapped to an appropriate bounded interval via a modulo operation before entering the ADC, the saturation problem no longer exists, but one rather encounters a different type of information loss due to the modulo operation. Such an alternative setup can be implemented, for example, via so-called folding or self-reset ADCs, as proposed in various contexts in the circuit design literature. The key task that one needs to accomplish in order to cope with this new type of information loss is to recover a bandlimited signal from its modulo samples. In this talk we will review different approaches to this problem, with a particular focus on a Fourier-domain approach that is robust to non-idealities in the circuit implementation, as we observe them in experiments with a hardware prototype that we constructed for this purpose. This is joint work with Ayush Bhandari and Thomas Poskitt, Imperial College London. (A minimal folding-and-recovery sketch follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube)
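
Here is a minimal sketch of the folding-and-recovery idea: a centered modulo models a self-reset ADC, and first differences of the folded samples recover those of the original signal under sufficient oversampling. This illustrates the basic principle only, not the talk's Fourier-domain method.

```python
# Unlimited sampling sketch: fold samples into [-lam, lam) with a centered
# modulo, recover first differences of the original signal (valid when the
# signal is oversampled enough that |y[n+1]-y[n]| < lam), then re-integrate.
import numpy as np

def cmod(x, lam):                  # centered modulo onto [-lam, lam)
    return (x + lam) % (2 * lam) - lam

lam = 1.0
t = np.linspace(0, 1, 400)
y = 4.0 * np.sin(2 * np.pi * 3 * t)   # amplitude far above the ADC range
y_fold = cmod(y, lam)                  # what a modulo ("self-reset") ADC records

dy = cmod(np.diff(y_fold), lam)        # equals np.diff(y) under oversampling
y_rec = np.concatenate([[y_fold[0]], y_fold[0] + np.cumsum(dy)])
print(np.max(np.abs(y_rec - y)))       # ~0 (up to the folding offset of y[0])
```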

 
  • Title: Deep Probabilistic Regression

  • Abstract: While deep learning-based classification is generally addressed using standardized approaches, this is not the case for regression problems. There are currently several different approaches to regression, and there is still room for innovation. We have developed a general deep regression method with a clear probabilistic interpretation. The basic building block in our construction is an energy-based model of the conditional output density p(y|x), where we use a deep neural network to predict the unnormalized density from input-output pairs (x, y). Such a construction is also commonly referred to as an implicit representation. The resulting learning problem is challenging, and we offer some insights on how to deal with it. We show good performance on several computer vision regression tasks, system identification problems, and 3D object detection using laser data. (A minimal prediction sketch follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube)
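
To illustrate what prediction with an energy-based regression model looks like, here is a minimal sketch with a hand-made score function standing in for a trained network f_theta(x, y); the talk's models use learned deep networks and gradient-based refinement rather than the grid search below.

```python
# Energy-based regression sketch: a scorer f(x, y) defines p(y|x) ∝ exp(f(x, y));
# a point prediction maximizes the score over y (here by simple grid search).
import numpy as np

def f(x, y):                        # stand-in for a trained scorer f_theta(x, y)
    return -0.5 * (y - np.sin(x)) ** 2 / 0.1

def predict(x, y_grid):
    scores = f(x, y_grid)
    p = np.exp(scores - scores.max())
    p /= p.sum()                    # normalized over the grid: p(y|x)
    return y_grid[np.argmax(p)], p

y_grid = np.linspace(-2, 2, 401)
y_hat, p = predict(0.5, y_grid)
print(y_hat, np.sin(0.5))           # grid maximizer ≈ sin(0.5)
```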

 
  • Title: The quest for future connected environments: the potential of RadioWeaves technology

  • Abstract: Future wireless networks should support a diversity of new applications. This will require very low response latency, support for applications with edge intelligence, location awareness, and a very large number of simultaneous devices. Moreover, sustainable connectivity for IoT nodes operating on ultra-constrained energy budgets is desired. 'RadioWeaves', operating in the 'golden' below-6 GHz frequencies, has great potential to enable the envisioned applications. This technology establishes a novel infrastructure based on distributing interconnected communication and computation resources in the environment, and on operating this weave in a dynamic way. Technological challenges to be resolved include initial access protocols, dynamic resource management, and synchronization. In this webinar, the technical requirements for future wireless networks will first be explained. Next, RadioWeaves will be introduced as a promising technology for creating hyper-connected intelligent environments. Promising approaches for progressing this technology, as well as the open R&D problems (with, most probably, new ones still to be discovered), will be presented.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: RadioUNets: Next Generation Radio Maps using Deep Learning

  • Abstract: In wireless communication, radio maps quantify the loss of signal strength between a transmitter and all spatial locations in a 3D environment due to large-scale effects. In this talk, we will introduce a novel deep-learning-based approach, coined RadioUNet, which can be seen as a learned physical simulation that estimates the radio map at all spatial locations given the geometry of a city map. The generated pathloss estimates are very close to the simulations, but are much faster to compute for real-time applications. As one such application, we will discuss the problem of localization in a cellular network in a dense urban scenario. Our proposed method, called LocUNet, which builds on RadioUNet, can extract a very accurate localization of the user and enjoys high robustness to inaccuracies in the estimated radio maps. Finally, we provide various numerical experiments that show the superiority of our conceptual approach. This is joint work with Ron Levie, Çagkan Yapar, and Giuseppe Caire.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: Line-of-sight MIMO: An Old Theory Up To New Tricks

  • Abstract: We are in the midst of a tidal transformation in the conditions in which wireless systems operate, with a determined push towards much higher frequencies (today mmWave, tomorrow sub-terahertz), shrinking transmission ranges, and much denser antenna arrays. This is stretching, even breaking, time-honored modelling assumptions such as that of planar wavefronts over the span of each individual array. And once the local curvature of those wavefronts is revealed, a new opportunity arises for spatial multiplexing without any need for scattering or multipath components, relying only on the line-of-sight propagation that tends to dominate at those high frequencies and over short ranges. This presentation dwells on the physical underpinnings of this phenomenon, on how it can be harnessed for communication purposes, and on its potential implications for future systems. (A worked spacing example follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube)
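
As a concrete illustration of LoS multiplexing from wavefront curvature, the following worked example states the classical orthogonality condition for two broadside-aligned uniform linear arrays; the numbers are illustrative assumptions, not taken from the talk.

```latex
% Classical result for two broadside-aligned N-element ULAs with element
% spacings d_t and d_r, at range D and wavelength \lambda: the LoS channel
% matrix is orthogonal (full rank, N parallel streams) when
\[
  d_t \, d_r \;=\; \frac{\lambda D}{N}.
\]
% Illustrative numbers: f = 100\,\mathrm{GHz} (\lambda = 3\,\mathrm{mm}),
% D = 50\,\mathrm{m}, N = 4 give d_t = d_r = \sqrt{\lambda D / N}
% \approx 0.19\,\mathrm{m}, so sub-20\,cm arrays already support
% N parallel LoS streams at short range.
```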

 
  • Title: Beyond Transmitting Bits: Semantic and Goal-Oriented Communications

  • Abstract: Traditional digital communication systems are designed to convert a noisy channel into a reliable bit-pipe; they are ignorant of the origin of the bits or how they are eventually used at the receiver. However, with the advances in machine learning (ML) technologies and their widespread adoption, it is expected that, in the near future, most communications will take place among machines, where massive amounts of data are available at the transmitter, and the goal is often not to transmit this data to the receiver, but to enable it to make the right inference or take the right action. On the other hand, ML algorithms to achieve these goals are designed either for centralised implementation at powerful cloud servers, or assume finite-rate but delay- and error-free communication links. In this talk, I will show that this conventional approach of separating communication system design from ML algorithm design can be highly suboptimal for emerging edge intelligence applications, and that an end-to-end design taking into account the "semantics" of the underlying data and the final "goal" at the receiver becomes essential. I will provide more concrete definitions of these concepts, and give striking examples of how semantic and goal-oriented design can push the boundaries of edge intelligence for future communication systems.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: Channel Noise as Monte Carlo Sampling: Efficient Bayesian Distributed Learning in Wireless Systems

  • Abstract: Conventional frequentist learning, as assumed by existing federated learning protocols, is limited in its ability to quantify uncertainty, incorporate prior knowledge, guide active learning, and enable continual model updates. Bayesian learning provides a principled approach to address all these limitations, at the cost of an increase in computational complexity. A standard approach to implementing Bayesian learning is Monte Carlo (MC) sampling, whereby the learner generates samples (approximately) drawn from the posterior distribution to enable Gibbs or ensemble predictors. Focusing on wireless distributed Bayesian learning, this talk introduces the idea of channel-driven MC sampling: rather than treating channel noise as a nuisance to be mitigated, channel-driven sampling utilizes channel noise as an integral part of the MC sampling process. Two specific settings are studied: a wireless data center system encompassing a central server and multiple distributed workers, and a federated system with an edge access point and distributed agents. For the first setting, the talk investigates for the first time the design of distributed one-shot, or "embarrassingly parallel", Bayesian learning protocols via consensus Monte Carlo (CMC), while for the second we consider Langevin MC sampling schemes based on multiple communication rounds. In both cases, uncoded transmission is introduced not only as a means to implement "over-the-air" computing, but also as a way to enable channel-driven sampling. Simulation results demonstrate that, if properly accounted for, channel noise can indeed contribute to MC sampling and does not necessarily decrease the accuracy level. (A minimal Langevin sketch follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube)
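
A minimal toy sketch of the Langevin idea underlying channel-driven sampling: the injected Gaussian noise of unadjusted Langevin dynamics could, in principle, be supplied by an uncoded wireless link of matching variance. The 1-D Gaussian target and all parameters are illustrative assumptions, not the talk's protocol.

```python
# Langevin MC sketch: the Gaussian noise term of the sampler plays the role
# that AWGN of an uncoded link could play in channel-driven sampling.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 2.0, 0.5                        # target posterior N(mu, sigma2)
grad_logp = lambda th: -(th - mu) / sigma2   # score of the target

eta, T = 0.01, 20000
theta, samples = 0.0, []
for t in range(T):
    channel_noise = rng.normal(scale=np.sqrt(2 * eta))  # models the link's AWGN
    theta = theta + eta * grad_logp(theta) + channel_noise
    if t > T // 2:                            # discard burn-in
        samples.append(theta)
print(np.mean(samples), np.var(samples))      # ≈ mu, ≈ sigma2
```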

 
  • Title: Low-Resolution to the Rescue of All-Digital Massive MIMO

  • Abstract: Massive multiple-input multiple-output (MIMO) will be a core technology of future millimeter-wave (mmWave) and terahertz (THz) wireless communication systems. The idea of massive MIMO is to equip the basestation with hundreds of antenna elements in order to serve tens of users in the same time-frequency resource. While this technology enables high spectral efficiency via fine-grained beamforming, naïve implementations of all-digital basestation architectures with conventional data converters for each antenna element would result in excessively high system costs, power consumption, and interconnect data rates. This fact is further aggravated at mmWave/THz frequencies due to the extremely large bandwidths available for communication. In this talk, we demonstrate that reliable wideband communication is practically feasible with all-digital basestation architectures when combining low-resolution data converters with low-resolution baseband processing and sophisticated signal processing algorithms.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: Reconfigurable Intelligent Surfaces for Wireless Communications

  • Abstract: A Reconfigurable Intelligent Surface (RIS) is a planar structure that is engineered to have properties enabling the dynamic control of electromagnetic waves. In wireless communications and networks, RISs are an emerging technology for realizing programmable and reconfigurable wireless propagation environments through nearly passive and tunable signal transformations. RIS-assisted programmable wireless environments are a multidisciplinary research endeavor. This presentation aims to report the latest research advances on modeling, analyzing, and optimizing RISs for wireless communications, with a focus on electromagnetically consistent models, analytical frameworks, and optimization algorithms.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: New Sparse Sampling Methods: Time-based sampling and sampling along trajectories

  • Abstract: Traditional signal processing is based on the idea that an analogue waveform should be converted into digital form by recording its amplitude at specific time instants. Nearly all data acquisition, processing, and communication methods have progressed by relying on this fundamental sampling paradigm. Interestingly, we know that the brain operates differently and represents signals using networks of spiking neurons, where the timing of the spikes encodes the signal's information. This form of processing by spikes is more efficient and is inspiring a new generation of event-based audio-visual sensing and processing architectures. In the first part of this talk, we investigate time encoding as an alternative method to classical sampling, and address the problem of reconstructing classes of sparse non-bandlimited signals from time-based samples. We consider a sampling mechanism based on first filtering the input before obtaining the timing information using a time encoding machine. Leveraging specific properties of these filters, we derive sufficient conditions and propose novel algorithms for perfect reconstruction of classes of sparse signals. In the second part of the talk, we consider physical fields induced by a finite number of instantaneous diffusion sources, which we sample using a mobile sensor along unknown trajectories composed of multiple linear segments. We address the problem of estimating the sources as well as the trajectory of the mobile sensor, and validate our approach on real thermal data. We finally conclude by highlighting further avenues for research in the emerging areas of event-based sensing and sampling along trajectories. (A minimal time-encoding sketch follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube)
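
Here is a minimal sketch of an integrate-and-fire time encoding machine, the kind of amplitude-to-timing conversion discussed in the first part of the talk; the parameters and the discretized integrator are illustrative assumptions.

```python
# Time encoding machine (TEM) sketch, integrate-and-fire type: output only
# the instants at which the running integral of the biased input crosses a
# threshold, trading amplitude information for timing information.
import numpy as np

def integrate_and_fire(x, dt, bias, kappa, delta):
    # spike when (1/kappa) * integral of (x + bias) reaches delta, then reset
    spikes, acc = [], 0.0
    for n, xn in enumerate(x):
        acc += (xn + bias) * dt / kappa
        if acc >= delta:
            spikes.append(n * dt)
            acc -= delta
    return np.array(spikes)

dt = 1e-4
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * 3 * t)
tk = integrate_and_fire(x, dt, bias=2.0, kappa=1.0, delta=0.05)
# Consecutive spike times encode local averages of x:
#   integral of (x + bias) over [t_k, t_{k+1}] = kappa * delta.
print(len(tk), np.diff(tk)[:5])
```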

  • Summer 2021

 
  • Title: Deep Analog-to-Digital Compression with Applications to Automotive Radar and Massive MIMO

  • Abstract: The famous Shannon-Nyquist theorem has become a landmark in analog-to-digital conversion and the development of digital signal processing algorithms. However, in many modern applications, signal bandwidths have increased tremendously, while acquisition capabilities have not scaled sufficiently fast. Furthermore, the resulting high-rate digital data requires storage, communication, and processing at very high rates, which is computationally expensive and requires large amounts of power. In this talk we consider a general framework for sub-Nyquist sampling and processing in space, time, and frequency, which allows us to dramatically reduce the number of antennas, sampling rates, number of bits, and band occupancy in a variety of applications. It also allows for the development of efficient joint radar-communication systems. Our framework relies on exploiting signal structure, quantization, and the processing task, in both standard processing and in deep learning networks. We consider applications of these ideas to a variety of problems in wireless communications, efficient massive MIMO systems, automotive radar, and ultrasound imaging, and show several demos of real-time sub-Nyquist prototypes, including a wireless ultrasound probe, sub-Nyquist automotive radar, cognitive radio and radar, dual radar-communication systems, analog precoding, sparse antenna arrays, and a deep Viterbi decoder. (This is a joint seminar with the IEEE ComSoc ISAC-ETI Seminars.)

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: A Wideband Dual Function Radar Communication System with Sparse Array and OFDM Waveforms

  • Abstract: We will present our recent work on dual-function radar communication (DFRC) systems, in particular a new MIMO radar with a sparse transmit array that transmits wideband OFDM waveforms. The system assigns most subcarriers to antennas in a shared fashion, thus efficiently exploiting the available communication bandwidth, and a small set of subcarriers to active antennas in an exclusive fashion (private subcarriers). A novel target estimation approach will be presented to overcome the coupling of target parameters introduced by subcarrier sharing. The system is endowed with beamforming capability via waveform precoding and antenna selection. The precoding and antenna selection matrices are optimally co-designed to meet a joint sensing-communication performance objective. The use of shared subcarriers enables a high communication rate, while the sparse transmit array keeps system hardware cost low. The sensing problem is formulated by taking frequency-selective fading into account, and a method is proposed to estimate the channel coefficients during the sensing process. (This is a joint seminar with the IEEE ComSoc ISAC-ETI Seminars.)

  • Speaker Homepage

 
  • Title: The 6G Radio Access Opportunity – Joint Communications & Sensing

  • Abstract: Over the generations of cellular, services have gone from pure voice delivery to today's integrated wireless Internet. With 5G and its ultra-reliable low-latency communication (URLLC) mode, cellular-based remote control of real and virtual objects becomes feasible for the first time: the Tactile Internet. This, however, focuses primarily on business applications. The vision for 6G is that personal mobile robotic helpers controlled by the infrastructure will become widespread. Robotic systems, even when connected to the infrastructure, need to know their surroundings through sensing. As relying on cameras alone is not an option, and lidar has its limits as well, radio sensing will become increasingly important. This seemingly creates a competition for new spectrum between sensing and communications! Here we want to shed light on integrating both into one 6G radio access network through joint communication and sensing. Some theoretical as well as experimental results will be given. As a consequence, sensing can become another service delivered by a cellular communications system. Joining up terminal- and infrastructure-based sensing, and acknowledging the power of networking, will create new opportunities. This can include empowering a "level-2" car to be driven at "level-5" with the help of 6G, but will also open completely new opportunities for your home/fitness/gaming/assistance robotic helpers! Taking this idea one step further, one can start thinking of developing a "GearboxPHY" approach for 6G: as we need to incorporate low- and high-rate data communications, sensing including positioning, as well as post-Shannon communications, would it not make sense to address this with an air interface that "switches gears" automatically to deliver the service needed at minimal energy cost?

  • Speaker Homepage

  • Fall 2020

 
  • Title: Adaptive Diffusions for Scalable and Robust Learning over Graphs

  • Abstract: Diffusion-based classifiers, such as those relying on Personalized PageRank and the heat kernel, enjoy remarkable classification accuracy at modest computational requirements. Their performance, however, depends on the extent to which the chosen diffusion captures a typically unknown label propagation mechanism that can be specific to the underlying graph, and potentially different for each class. This talk will introduce a disciplined, data-efficient approach to learning class-specific diffusion functions adapted to the underlying network topology. The novel learning approach leverages the notion of "landing probabilities" of class-specific random walks, which can be computed efficiently, thereby ensuring scalability to large graphs. Furthermore, a robust version of the classifier becomes available for graph-aware learning even in noisy environments. Classification tests on real networks will demonstrate that adapting the diffusion function to the given graph and observed labels markedly improves performance over fixed diffusions, reaching, and many times surpassing, the classification accuracy of computationally heavier state-of-the-art competing methods that rely on node embeddings and deep neural networks. (A minimal landing-probability sketch follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides
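
To make the "landing probabilities" notion concrete, here is a minimal sketch that computes K-step random-walk landing probabilities on a toy graph and combines them with Personalized PageRank's geometric weights; the talk's method learns class-specific weights from data rather than fixing them as below.

```python
# Diffusion-based classification sketch: landing probabilities of a random
# walk started at labeled seeds, combined with fixed (PPR) mixture weights.
import numpy as np

def landing_probabilities(A, seed_mask, K=10):
    P = A / A.sum(axis=0, keepdims=True)     # column-stochastic random walk
    x = seed_mask / seed_mask.sum()          # start from labeled seed nodes
    out = []
    for _ in range(K):
        x = P @ x                             # one more random-walk step
        out.append(x.copy())
    return np.stack(out)                      # shape (K, n_nodes)

def diffuse(landings, theta):
    return theta @ landings                   # per-node class score

A = np.array([[0,1,1,0,0],[1,0,1,0,0],[1,1,0,1,0],[0,0,1,0,1],[0,0,0,1,0]], float)
lp = landing_probabilities(A, np.array([1., 0, 0, 0, 0]))
alpha = 0.85                                  # PPR = geometric mixture weights
ppr_weights = (1 - alpha) * alpha ** np.arange(10)
print(diffuse(lp, ppr_weights))
```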

 
  • Title: High-dimensional Regression and Dictionary Learning: Some Recent Advances for Tensor Data

  • Abstract: Data in many modern signal processing, machine learning, and statistics problems tend to have tensor (aka, multidimensional / multiway array) structure. While traditional approaches to processing of such data involve 'flattening' of data samples into vectors, it has long been realized that explicit exploitation of tensor structure of data can lead to improved performance. Recent years, in particular, have witnessed a flurry of research activity centered around development of computational algorithms for improved processing of tensor data. Despite the effectiveness of such algorithms, an explicit theoretical characterization of the benefits of exploitation of tensor structure remains unknown in the high-dimensional setting for several problems. In this talk, we focus on two such high-dimensional problems for tensor data, namely, high-dimensional regression and high-dimensional dictionary learning. The basic assumption in this talk for both these problems is that the dimensionality of data far exceeds the number of available data samples, so much so that existing approaches to regression (e.g., sparse regression) and dictionary learning (e.g., K-SVD) may not result in meaningful results. Under this high-dimensional setting, we discuss algorithms capable of exploiting certain low-dimensional structures underlying tensor data for effective regression and dictionary learning. In addition, we present sample complexity results for both high-dimensional problems that highlight the usefulness of the latent tensor structures being exploited by the presented algorithms in relation to existing works in the literature.

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides

 
  • Title: Understanding Trade-offs in Super-resolution Imaging with Spatiotemporal Measurements

  • Abstract: The main goal of super-resolution is to extract finer details (typically high-frequency components) of a signal from low-resolution (or low-frequency) measurements collected by an imaging system. The system's point spread function (PSF) poses fundamental limitations (for example, due to diffraction) on the achievable spatial resolution. However, recovering lost high-frequency components that are not retained in the measurements is still possible, thanks to the exploitation of suitable priors on the image of interest. In recent times, the topic of super-resolution has gained significant attention due to key results suggesting that certain convex algorithms (based on TV-norm and atomic-norm minimization) can provably localize point sources in the presence of noise, provided the source locations obey a certain separation condition. This separation is reminiscent of the classical Rayleigh resolution limit, and can potentially defeat the original purpose of super-resolving beyond such limits. In this talk, we will take a closer look at the problem of noisy super-resolution in passive incoherent imaging from spatiotemporal measurements, which features in applications ranging from radar source localization to correlation microscopy. We will investigate whether it is indeed possible to relax the separation condition by leveraging certain trade-offs between spatial measurements (which are tightly connected to the sensing geometry and the PSF) and temporal samples. We will explore fundamental bounds and zoom into their behavior in the low-SNR regime, which will bring out the very important role played by the sensing geometry. We will also revisit modern analyses of classic subspace-based super-resolution algorithms, and establish that localization of sources beyond the aforementioned separation condition is indeed possible through a combination of suitable spatial sampling techniques and parameter estimation algorithms that are more effective in high-dimensional, sample-starved regimes.

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides

 
  • Title: Tensor Decompositions for Multi-aspect Graph Analytics And Beyond

  • Abstract: Tensors and tensor decompositions have been very popular and effective tools for analyzing multi-aspect data in a wide variety of fields, ranging from psychology to chemometrics, and from signal processing to data mining and machine learning. In this talk, we will demonstrate the effectiveness of tensor decompositions in modeling and mining multi-aspect graphs, focusing on unsupervised and semi-supervised community detection, and on tracking communities over tensor streams in the presence of concept drift. Finally, we conclude with very recent results that demonstrate the effectiveness of tensor methods in alleviating state-of-the-art adversarial attacks on deep neural networks.

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides

 
  • Title: Nonparametric Multivariate Density Estimation: A Low-Rank Characteristic Function Approach

  • Abstract: Effective nonparametric density estimation is a key challenge in high-dimensional multivariate data analysis. In this talk, we propose a novel approach that builds upon tensor factorization tools. Any multivariate density can be represented by its characteristic function via the Fourier transform. If the sought density is compactly supported, then its characteristic function can be approximated, within controllable error, by a finite tensor of leading Fourier coefficients, whose size depends on the smoothness of the underlying density. This tensor can be naturally estimated from observed realizations of the random vector of interest via sample averaging. To circumvent the curse of dimensionality, we introduce a low-rank model of this characteristic tensor, which significantly improves the density estimate, especially for high-dimensional data and/or in the sample-starved regime. By virtue of the uniqueness of low-rank tensor decomposition, under certain conditions, our method enables learning the true data-generating distribution. We demonstrate the very promising performance of the proposed method using several measured datasets. (A sketch of the sample-averaging step follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides
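
A minimal sketch of the sample-averaging step referenced above: estimating a truncated multivariate characteristic function (a tensor of leading Fourier coefficients) from data. The low-rank tensor model, which is the heart of the method, is not reproduced here.

```python
# Characteristic-function estimation sketch:
#   Phi(k) = E[exp(j 2*pi k^T x)] ≈ (1/M) sum_m exp(j 2*pi k^T x_m)
# for integer frequency vectors k, assuming (approximately) compact support.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) * 0.1 + 0.5        # samples roughly in [0,1]^2

K = 3                                              # keep |k_i| <= K coefficients
freqs = list(product(range(-K, K + 1), repeat=2))
Phi = np.array([np.exp(1j * 2 * np.pi * X @ np.array(k, float)).mean()
                for k in freqs]).reshape(2 * K + 1, 2 * K + 1)
print(Phi.shape, abs(Phi[K, K]))                   # (7, 7), Phi(0) = 1
```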

 
  • Title: Bringing Statistical Thinking into Distributed Optimization: Vignettes from Statistical Inference over Networks

  • Abstract: There is growing interest in solving large-scale statistical machine learning problems over decentralized networks, where data are distributed across the nodes of the network and no centralized coordination is present (we term these systems "meshed" networks). Modern massive datasets create a fundamental problem at the intersection of the computational and statistical sciences: how to provide guarantees on the quality of statistical inference given bounds on computational resources, such as time and communication efforts. While statistical-computational tradeoffs have been largely explored in the centralized setting, our understanding over meshed networks is limited: (i) distributed schemes designed for, and performing well in, the classical low-dimensional regime can break down in the high-dimensional case; and (ii) existing convergence studies may fail to predict algorithmic behaviors, with some in fact contradicted by experiments. This is mainly because the majority of distributed algorithms over meshed networks have been designed and studied only from the optimization perspective, lacking the statistical dimension. Through some vignettes from low- and high-dimensional statistical inference, this talk goes over some designs and new analyses aimed at bringing statistical thinking into distributed optimization.

  • Speaker Homepage

  • Video Link (Youtube)

 
  • Title: Distributed Algorithms for Optimization in Networks

  • Abstract: We will overview distributed optimization algorithms, starting with the basic underlying idea illustrated on a prototype problem in machine learning. In particular, we will focus on a convex minimization problem where the objective function is given as the sum of convex functions, each of which is known by one agent in a network. The agents communicate over the network with the task of jointly determining a minimizer of the sum of their objective functions. The communication network can vary over time, which is modeled through a sequence of graphs over a static set of nodes (representing the agents in the system). In this setting, distributed first-order methods that make use of an agreement protocol, a mechanism replacing the role of a coordinator, will be discussed. We will discuss some refinements of the basic method and conclude with more recent developments of fast methods that can match the performance of centralized methods. (A minimal sketch of the basic method follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Lecture Slides
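
Here is a minimal sketch of the basic distributed first-order method: each agent averages its neighbors' iterates through a doubly stochastic weight matrix (the agreement protocol) and takes a local gradient step. The ring network and quadratic objectives are illustrative assumptions.

```python
# Distributed gradient descent sketch:
#   x_i <- sum_j W_ij x_j - alpha * grad f_i(x_i)
# Toy problem: f_i(x) = 0.5*(x - b_i)^2, so the network minimizer is mean(b).
import numpy as np

b = np.array([1.0, 2.0, 3.0, 10.0])
grad = lambda x, i: x - b[i]

# Doubly stochastic mixing weights for a 4-agent ring (agreement protocol).
W = np.array([[1/3, 1/3, 0, 1/3],
              [1/3, 1/3, 1/3, 0],
              [0, 1/3, 1/3, 1/3],
              [1/3, 0, 1/3, 1/3]])

x, alpha = np.zeros(4), 0.05
for k in range(2000):
    x = W @ x - alpha * np.array([grad(x[i], i) for i in range(4)])
print(x, "target:", b.mean())   # with a constant step, iterates cluster near 4.0
```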

  • Summer 2020

 
  • Title: Computational approaches for guiding rational vaccine design: Case studies in HCV, HIV, and COVID-19 (DataSci)

  • Abstract: This talk will describe how computational modelling and high-dimensional statistics can aid the rational design of vaccines. Approaches familiar in signal processing and physics will be introduced and applied to genetic sequence data of viruses measured from infected individuals. These approaches will be used to build computational models that inform how viral function/structure is mediated by correlated sets of genetic mutations, and to simulate viral evolutionary dynamics in individuals who present specific immune responses. When combined with experimental and clinical data, the talk will describe how the models may be used to identify new vaccine candidates for the hepatitis C virus (HCV) and for HIV. Recent progress on the use of sequence analysis to guide vaccine design for COVID-19 will also be discussed.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Tencent), Lecture Slides

 
  • Title: Modeling and learning social influence from opinion dynamics under attack (DistSP-Opt)

  • Abstract: Opinion dynamics models aim to capture the phenomenon of social learning through public discourse. While a functioning society should converge towards common answers, reality is often characterized by division and polarization. This talk reviews the key models that capture social learning and its vulnerabilities. In particular, we review models that explain the effects of bounded confidence and of social pressure from zealots (i.e., fake news sources), and show how very simple models can explain the trends observed when social learning is subject to these phenomena. Analyzing their influence exposes the trust different agents place in each other, and we introduce new learning algorithms that can estimate how agents influence one another.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides

 
  • Title: Cell Detection by Functional Inverse Diffusion and Nonnegative Group Sparsity (DataSci)

  • Abstract: On August 28, 2018, a Stockholm-based biotech company launched a new product: the Mabtech IRIS, a next-generation FluoroSpot and ELISpot reader. The reader is a machine designed to analyze a type of biomedical image-based assay that is commonly used in immunology to study cell responses. A contemporary use case involves the development of vaccines for SARS-CoV-2. At the heart of this machine is a positivity-constrained, group-sparsity-regularized least-squares optimization problem, solved with large-scale optimization methods.

    The presentation will outline the problem of analyzing FluoroSpot assays from a signal processing and optimization perspective and explain the methods we designed to solve it. The problem essentially amounts to counting, localizing, and quantifying heterogeneous diffuse spots in an image. The solution involves components such as: the development of a tractable linear model from the physical properties that govern the reaction-diffusion-adsorption-desorption process in the assay; the formulation of an inverse problem in function spaces and its discretized approximation; the role of group sparsity in finding a plausible solution to an otherwise ill-posed problem; and how to efficiently solve the resulting 40-million-variable optimization problem on a GPU. (A small-scale sketch of this optimization pattern follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides
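
As a small-scale stand-in for the optimization pattern described above, here is a sketch of nonnegativity-constrained, group-sparsity-regularized least squares solved by proximal gradient; the toy sizes and row-wise groups are illustrative assumptions, far from the 40-million-variable functional formulation in the talk.

```python
# Proximal-gradient sketch for
#   min_{X >= 0} 0.5*||Y - A X||_F^2 + lam * sum_g ||X_g||_2,
# with groups = rows of X (a toy stand-in for the spot model in the talk).
import numpy as np

def prox_nonneg_group(X, t):
    X = np.maximum(X, 0.0)                        # project onto X >= 0
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale                               # group soft-threshold per row

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
X_true = np.zeros((20, 8)); X_true[[3, 11]] = rng.uniform(0.5, 1.5, size=(2, 8))
Y = A @ X_true + 0.01 * rng.normal(size=(40, 8))

lam, L = 0.5, np.linalg.norm(A, 2) ** 2            # step 1/L from the Lipschitz constant
X = np.zeros_like(X_true)
for _ in range(500):
    X = prox_nonneg_group(X - (A.T @ (A @ X - Y)) / L, lam / L)
print(np.nonzero(np.linalg.norm(X, axis=1) > 1e-6)[0])   # active groups ~ {3, 11}
```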

 
  • Title: Signal Processing and Optimization in UAV Communication and Trajectory Design (ML-Com)

  • Abstract: Unmanned aerial vehicles (UAVs), or drones, have found numerous applications in wireless communication, as either aerial user terminals or mobile access points (APs). Compared to conventional terrestrial wireless systems, UAV communications face new challenges due to the UAVs' high altitude above the ground and great flexibility of movement, bringing several crucial issues: how to exploit line-of-sight (LoS)-dominant UAV-ground channels while mitigating the resulting strong interference; how to meet distinct UAV communication requirements on critical control messages versus high-rate payload data; how to cater for the stringent constraints imposed by the size, weight, and power (SWAP) limitations of UAVs; and how to leverage the new degree of freedom of controlling the UAV trajectory for communication performance enhancement. In this talk, we will provide an overview of the above challenges and practical issues in UAV communications, their state-of-the-art solutions (with an emphasis on the promising signal processing and optimization techniques used in them), as well as important directions for future research.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides

 
  • Title: Inferring Networks and Network Properties from Graph Dynamic Processes (DataSci)

  • Abstract: We address the problem of identifying structural features of an undirected graph from the observation of signals defined on its nodes. Fundamentally, the unknown graph encodes direct relationships between signal elements, which we aim to recover from observable indirect relationships generated by a diffusion process on the graph. Our approach leverages concepts from convex optimization and stationarity of graph signals in order to identify the graph shift operator (a matrix representation of the graph) given only its eigenvectors. These spectral templates can be obtained, e.g., from the sample covariance of independent graph signals diffused on the sought network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network with certain desired properties, such as sparsity. To that end, we develop efficient inference algorithms stemming from provably tight convex relaxations of natural non-convex criteria. For scenarios where the number of samples is insufficient for exact graph recovery, we show that coarser graph features (such as communities or centrality values) can still be correctly inferred. (A minimal sketch of this inference step follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides
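
A minimal sketch (using cvxpy) of the inference step described above: given the eigenvectors of the graph shift as spectral templates, search over eigenvalues for the sparsest consistent shift. The toy graph and the particular normalization constraint are illustrative assumptions.

```python
# Topology inference from spectral templates: fix the eigenbasis V, optimize
# over eigenvalues for a sparse, nonnegative, hollow shift operator.
import cvxpy as cp
import numpy as np

# Ground truth: a small adjacency matrix; its eigenvectors play the role of
# templates estimated in practice from the covariance of diffused signals.
A = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)
_, V = np.linalg.eigh(A)

lam = cp.Variable(4)
S = V @ cp.diag(lam) @ V.T               # shift constrained to share V's eigenbasis
prob = cp.Problem(
    cp.Minimize(cp.sum(cp.abs(S))),      # convex surrogate for sparsity
    [cp.diag(S) == 0,                    # hollow (adjacency-like)
     S >= 0,                             # nonnegative weights
     cp.sum(S[0, :]) >= 1])              # rule out the trivial zero solution
prob.solve()
print(np.round(S.value, 2))              # recovers A up to scale (here 0.5*A)
```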

 
  • Title: A programmable wireless world with reconfigurable intelligent surfaces (ML-Com)

  • Abstract: Wireless connectivity is becoming as essential as electricity in our modern world. Although we would like to deliver wireless services everywhere, the underlying physics makes it hard: the signal power vanishes very quickly with propagation distance and is absorbed or scattered off various objects. Even when we have a "strong" signal, only one part in a million is being received; thus, there is large room for improvement! What if we could tune the propagation environment to our needs? This is the main goal of reconfigurable intelligent surfaces, a beyond-5G concept currently hyped by the communication research community. The idea is to support the transmission from a source to a destination by deploying specially designed surfaces that can reconfigure how incident signal waves are reflected. Ideally, the surface takes the signal energy that reaches it and retransmits it focused on the receiver. This opens a new design dimension: we can not only optimize the transmitter and receiver but also control the channel by real-time programming. In this talk, I will explain the fundamentals of this new technology by building up a basic system model and demonstrating its properties. I will then discuss the prospects of the technology, including potential use cases and the main signal processing issues, and debunk three myths that are currently spreading among researchers.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides

 
  • Title: Machine Learning for Massive MIMO Communications (ML-Com)

  • Abstract: This talk provides two examples in which machine learning can significantly improve the design of wireless communication systems. In the first part of the talk, we show that deep neural networks (DNNs) can be used for efficient and distributed channel estimation, quantization, feedback, and downlink multiuser precoding for a frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) system in which a base station (BS) serves multiple mobile users, each with a rate-limited feedback link to the BS. The key observation here is that the multiuser channel estimation and feedback problem can be thought of as a distributed source coding problem -- in contrast to the conventional approach where the channel state information (CSI) is independently quantized at each user. We show that a DNN architecture implementing distributed source coding -- mapping the received pilots directly into finite feedback bits at the user side, then mapping the feedback bits from all the users directly into the precoding matrix at the BS -- can significantly improve the overall performance. In the second part of the talk, we propose an autoencoder-based symbol-level precoding scheme for a time-division duplex (TDD) massive MIMO system with 1-bit digital-to-analog converters. The goal here is to design downlink transmission schemes that are robust to imperfect CSI. Toward this end, we leverage the concept of the autoencoder, wherein the end-to-end system is modeled by a DNN, and the constellation and the precoding scheme are jointly designed so that the overall system is robust to channel uncertainty.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides

 
  • Title: Learning Higher-Order Interactions with Graph Volterra Models (DataSci)

  • Abstract: Complex network processes are known to be driven not only by pairwise interactions but also by the interactions of small groups of tightly connected nodes, sometimes called higher-order interactions. Identifying these higher-order interactions thus becomes paramount to gaining insight into the nature of such processes. While predicting pairwise nodal interactions (links) from network data is a well-studied problem, the identification of higher-order interactions (higher-order links) is not yet fully understood. In this talk, we review several approaches that have been proposed for addressing this task and examine their respective limitations. Furthermore, cross-fertilizing ideas from Volterra series and linear structural equation models, we introduce a principled method that can capture higher-order interactions among nodes: the so-called graph Volterra model. The proposed approach identifies higher-order interactions among nodes via the respective graph Volterra kernels. To motivate the adoption of our new model, we demonstrate its performance for higher-order link prediction using real data from social networks and smart grids. (A schematic form of the model follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides
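
To fix ideas, here is a schematic second-order graph Volterra model consistent with the description above; the exact parameterization and identification procedure in the talk may differ.

```latex
% Second-order graph Volterra model (schematic): the signal at node i is
% explained by linear (link) terms and bilinear (higher-order link) terms,
\[
  y_i \;=\; \sum_{j} h^{(1)}_{ij}\, x_j
        \;+\; \sum_{j}\sum_{k} h^{(2)}_{ijk}\, x_j x_k \;+\; \varepsilon_i,
\]
% where the supports of the kernels h^{(1)} (pairwise links) and h^{(2)}
% (e.g., triangles) are what the method identifies from data.
```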

 
  • Title: Communication efficient distributed learning (DistSP-Opt)

  • Abstract: Scalable and efficient distributed learning is one of the main driving forces behind the recent rapid advancement of machine learning and artificial intelligence. One prominent feature of this topic is that recent progress has been made by researchers in two communities: (1) the systems community, including database, data management, and distributed systems, and (2) the machine learning and mathematical optimization community. The interaction and knowledge sharing between these two communities has led to the rapid development of new distributed learning systems and theory. This talk will provide a brief introduction to some recently developed distributed learning techniques, namely lossy communication compression (e.g., quantization and sparsification), asynchronous communication, and decentralized communication. The goal of this presentation is to let a general audience understand the principles of communication-efficient distributed learning algorithms and systems. (A sketch of one compression technique follows this entry.)

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides
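
As an example of one technique named above, here is a minimal sketch of top-k gradient sparsification with error feedback, a common lossy-compression pattern in communication-efficient distributed learning; the sizes and the error-feedback variant are illustrative assumptions.

```python
# Lossy communication compression sketch: transmit only the k largest-
# magnitude gradient coordinates; keep the dropped mass in a local
# error-feedback buffer so it is re-injected at the next round.
import numpy as np

def topk_compress(g, k):
    idx = np.argpartition(np.abs(g), -k)[-k:]   # indices of k largest entries
    sparse = np.zeros_like(g)
    sparse[idx] = g[idx]
    return sparse

def step_with_error_feedback(g, memory, k):
    corrected = g + memory                      # add back previously dropped mass
    msg = topk_compress(corrected, k)           # what actually goes on the wire
    return msg, corrected - msg                 # new residual memory

rng = np.random.default_rng(0)
memory = np.zeros(1000)
for _ in range(5):
    grad = rng.normal(size=1000)
    msg, memory = step_with_error_feedback(grad, memory, k=50)
    print(f"sent {np.count_nonzero(msg)} / 1000 coords,"
          f" residual norm {np.linalg.norm(memory):.2f}")
```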

 
  • Title: Learning to team play (ML-Com)

  • Abstract: Cooperation is an essential function in a wide array of network scenarios, including wireless, robotics, and beyond. In decentralized networks, cooperation (or team play) must be achieved by agents despite the lack of common information regarding the global state of the network. Cooperation in the presence of state-information uncertainties is a highly challenging problem for which, in most cases, no simple optimization-based robust solution exists. In this talk, we describe a machine learning approach to this problem. We introduce so-called Team Deep Learning Networks (Team-DNNs), where agents learn to coordinate with each other under uncertainty. We apply the approach to wireless optimization problems and emphasize power control as a possible use case. We show how devices can learn to message each other relevant information and take appropriate transmission decisions. Finally, team DNNs are extended to include the principle of mixture of experts (MoE), which enables the team DNNs to divide the problem space into different regions of state uncertainty and obtain optimized behavior in each one.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides

 
  • Title: Decentralized stochastic non-convex optimization (Dist-SP)

  • Abstract: In many emerging applications, it is of paramount interest to learn hidden parameters from the data collected at individual units. For example, self-driving cars may use onboard cameras to identify pedestrians, highway lanes, or traffic signs in various light and weather conditions. Problems such as these can be framed as classification, regression, or risk minimization, at the heart of which lies stochastic optimization. When the underlying datasets are large and further contain private information, it is not typically feasible to collect and process the entire data at a central location to solve the corresponding optimization problems. Decentralized methods thus are preferable as they benefit from local (short-range) communication and are able to tackle data imperfections both in space (geographical diversity) and in time (noise in measurements). In this talk, I will present our recent work that develops a novel algorithmic framework to address various aspects of decentralized stochastic optimization for strongly convex and non-convex problems in both online and batch data scenarios. I will quantify the performance of the underlying algorithms and describe the regimes of practical interest where the convergence rates are near-optimal. Moreover, I will characterize certain desirable attributes of such methods in the context of linear speedup and network-independent convergence rates. I will conclude by demonstrating the key aspects of the proposed methods with the help of numerical experiments.

  • Speaker Homepage

  • Video Link (Youtube), Video Link (Bilibili), Lecture Slides

How it works / Subscribe

Each talk will last 45 minutes, followed by a 10-15 minute Q&A. All talks will be recorded and uploaded to Youtube for later viewing (subject to the speaker's consent). We will announce the next speaker on this website. By subscribing to the mailing list, you will receive the access information (link to the Zoom room and the password) the day before (approximately 12 hours ahead of) each talk.

You may subscribe to our mailing list through:

You will receive a confirmation email after successfully subscribing to our list. Please contact us if you do not receive a confirmation email within 48 hours.

Other One World Seminar Series

Organizers and Contact

The virtual seminar series of this season is organized by Marius Pesavento (TU Darmstadt), Christos Masouros (UC London), Fan Liu (SUSTech), Maokun Li (Tsinghua University), Tianhao Huang (Tsinghua University), and Yonina Eldar (Weizmann Institute of Science), with advisory board members Wing-kin (Ken) Ma (CUHK-HK), Hoi-to Wai (CUHK-HK), and Tsung-hui Chang (CUHK-SZ). The Fall 2020 series was organized by Xiao Fu (Oregon State) and Yanning Shen (UC Irvine). The Summer 2020 series was organized by Wing-kin (Ken) Ma (CUHK-HK), Hoi-to Wai (CUHK-HK), and Tsung-hui Chang (CUHK-SZ). The series is also supported in part by the IEEE Signal Processing Society. For inquiries, please write to this address.