Optimization for Signal Processing, Communications and Learning

In recent decades, optimization theory has gained increasing popularity in signal processing, communications, and control. Mathematical optimization provides systematic techniques for finding the best solution among a set of feasible points, as measured by an objective function and subject to application-specific side constraints. The tremendous success of convex optimization in these areas of research stems, on the one hand, from the observation that many fundamental problems in signal processing and communications can be modeled, analyzed, and solved using convex optimization theory. On the other hand, its widespread adoption across almost all research branches in these fields is closely tied to important advances in analytic tools and software, in particular numerical interior-point solvers such as SeDuMi, Gurobi, Mosek, and CPLEX, as well as generic modeling frameworks such as CVX.

Although the theory of convex optimization is well developed, the advent of big data, massively parallel hardware, and distributed network architectures, as well as extreme real-time processing requirements, has posed new challenges for optimization algorithms. For example, in many recent big data applications such as massive MIMO, genetic and social network analysis, and machine learning, the underlying optimization problems often exhibit convexity. However, for these applications the classic interior-point methods are not suitable, as they do not scale well with the problem dimension. In this case, customized parallel and iterative algorithms that better exploit the underlying problem structure can lead to tremendous gains in terms of computational complexity and convergence speed.
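As a minimal illustrative sketch of such a structure-exploiting iterative method (not taken from the project itself), the proximal-gradient algorithm ISTA solves the convex LASSO problem using only matrix-vector products per iteration, which is what allows it to scale to dimensions where interior-point solvers, which factor large matrices in every iteration, become impractical. All names and parameters below are illustrative choices.

```python
import numpy as np

def ista(A, b, lam, num_iter=200):
    """Proximal-gradient (ISTA) iterations for the LASSO problem
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Each iteration needs only two matrix-vector products, so the cost
    per step is O(m*n) rather than the O(n^3)-type factorizations of
    interior-point methods."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    for _ in range(num_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth least-squares part
        z = x - grad / L                   # gradient descent step
        # proximal step: soft-thresholding handles the nonsmooth l1 term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

The closed-form soft-thresholding step is the structural shortcut: the nonsmooth l1 term is handled exactly, without the generic barrier machinery of an interior-point solver.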

In practice, there also exist classes of applications in signal processing that cannot be closely approximated by convex optimization problems. Hence, traditional interior-point methods based on Lagrangian duality are not immediately applicable. Moreover, conventional general-purpose gradient methods for nonconvex problems generally exhibit slow convergence. In these cases, carefully designed iterative descent direction methods using, for example, successive convex approximation techniques can be attractive alternatives. They offer various advantages over general-purpose solvers, such as high scalability and real-time processing capability due to fast convergence, highly parallel or distributed implementation, and efficient data access.
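The successive convex approximation idea can be sketched as follows; this is a generic textbook-style instance (iteratively reweighted l1 minimization for a nonconvex log penalty), not the specific algorithms developed in the project, and all parameter names are illustrative. Each outer iteration replaces the concave log penalty by its linearization at the current iterate, yielding a convex weighted-l1 surrogate that is then (approximately) minimized by proximal-gradient steps.

```python
import numpy as np

def sca_log_penalty(A, b, lam=0.1, eps=0.1, outer=10, inner=50):
    """Successive convex approximation for the nonconvex problem
        min_x 0.5 * ||A x - b||^2 + lam * sum_i log(1 + |x_i| / eps).
    The concave penalty is majorized at the current iterate x by its
    linearization, which gives a convex weighted-l1 subproblem; solving
    the surrogates successively produces a descent sequence."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant for the inner steps
    for _ in range(outer):
        # weights come from differentiating the log penalty at the current x:
        # d/dt [lam * log(1 + t/eps)] = lam / (eps + t) for t = |x_i|
        w = lam / (eps + np.abs(x))
        for _ in range(inner):             # approximately solve the convex surrogate
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)
    return x
```

The key design choice is that every subproblem is convex and cheap to solve, so the overall method inherits the scalability of the inner solver while still handling the nonconvex objective.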


DFG Project EXPRESS "Exploiting structure in compressed sensing using side constraints: From analysis to system design – Funding phase II" as part of the DFG Priority Programme (SPP) "Compressed Sensing in Information Processing" (CoSIP)


Further Reading

Liu, Tianyi; Tillmann, Andreas M.; Yang, Yang; Eldar, Yonina C.; Pesavento, Marius: Extended Successive Convex Approximation for Phase Retrieval With Dictionary Learning. In: IEEE Transactions on Signal Processing 2022, 70, ISSN: 1941-0476, doi:10.1109/TSP.2022.3233253, [Article]

Yang, Yang; Pesavento, Marius; Luo, Zhi-Quan; Ottersten, Björn: Inexact Block Coordinate Descent Algorithms for Nonsmooth Nonconvex Optimization. In: IEEE Transactions on Signal Processing 2020, 68, ISSN: 1941-0476, doi:10.1109/TSP.2019.2959240, [Article]

Yang, Yang; Pesavento, Marius; Chatzinotas, Symeon; Ottersten, Björn: Energy Efficiency Optimization in MIMO Interference Channels: A Successive Pseudoconvex Approximation Approach. In: IEEE Transactions on Signal Processing 2019, 67, ISSN: 1941-0476, doi:10.1109/TSP.2019.2923141, [Article]

Yang, Yang; Pesavento, Marius; Chatzinotas, Symeon; Ottersten, Björn: Successive Convex Approximation Algorithms for Sparse Signal Estimation With Nonconvex Regularizations. In: IEEE Journal of Selected Topics in Signal Processing 2018, 12, ISSN: 1941-0484, doi:10.1109/JSTSP.2018.2877584, [Article]

Yang, Yang; Pesavento, Marius: A Unified Successive Pseudoconvex Approximation Framework. In: IEEE Transactions on Signal Processing 2017, 65, ISSN: 1941-0476, doi:10.1109/TSP.2017.2684748, [Article]