Abstracts

A Personal Journey: From Signals and Systems to Graphical Models

Alan Willsky

This talk outlines a meandering line of research, begun in 1988, that started with some signal processing and control theorists trying to make statistical sense of the emerging field of wavelet analysis. To the speaker's surprise, the work moved into areas that certainly take advantage of his S&S/CT background but evolved into topics in a variety of fields, involving efficient numerical methods for large-scale data assimilation and mapping and, eventually, a rapprochement with graphical models and machine learning.

The talk will touch on our early work on multiresolution tree models (motivated by wavelets but only indirectly relevant to them) and on the way control theorists think of inference and approximate modeling/stochastic realization, with at least one application that echoes the way a machine learning researcher (but not a mathematical physicist!) might build a model. I'll provide (finally) a real rapprochement with wavelets and then turn to approximate inference on loopy graphs. The first approach builds on an idea used in the solution of PDEs, namely nested dissection, but with some machine learning twists and some control-theoretic stability issues (showing how control might have a few things to offer inference algorithm designers). I will then turn to a topic again related to the way numerical linear algebraists think about solving sparse systems of equations; in the context of graphical models, this leads to the idea of walk-sum analysis, a surprisingly useful (and, at least I think, intuitive) idea. Walk-sum analysis allows us to say some fairly strong things about a variety of iterative algorithms (generally known as Richardson, Jacobi, or Gauss-Seidel iterations), including adaptive algorithms that guide the iterations for fast convergence. Walk-sum analysis is also key to another approach with linear-algebraic interpretations, involving so-called feedback vertex sets.

I will also touch on an alarmingly accurate method for computing variances in graphical models that uses a low-rank approximation to the identity matrix(!), and then return to multiresolution models, now looking at models motivated by two quite different classes of numerical algorithms: multigrid algorithms and multipole algorithms. These motivate two quite different classes of models, the latter of which requires the introduction of what we refer to as conjugate graphs. If I have any time and energy left, I will comment on some prospective topics.
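A minimal sketch of the walk-sum idea, with notation that is ours rather than the abstract's: normalize the information matrix of a Gaussian graphical model as J = I - R. If the model is walk-summable, i.e. the spectral radius of |R| is below one, then

    J^{-1} = \sum_{k \ge 0} R^k,        (J^{-1})_{st} = \sum_{w : s \to t} \phi(w),

where the second sum runs over all walks w from node s to node t in the graph and \phi(w) is the product of the edge weights (entries of R) along w. Richardson, Jacobi, and Gauss-Seidel iterations can then be read as schemes that accumulate particular subsets of these walks, which is what makes convergence and accuracy statements about them possible.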



Efficient space-variant blind deconvolution

Stefan Harmeling

Blur in photos due to camera shake, blur in astronomical image sequences due to atmospheric turbulence, and blur in magnetic resonance imaging sequences due to object motion are examples of blur that cannot be adequately described as a space-invariant convolution, because the blur varies across the image. In this talk, we present a class of linear transformations that are expressive enough for space-variant blurs, yet at the same time especially designed for efficient matrix-vector multiplications. Successful results on the above-mentioned examples demonstrate the practical significance of our approach.
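As a rough illustration of how such an operator can be realized efficiently (a sketch under our own assumptions, not necessarily the construction used in the talk): cover the image with overlapping patches, blur each patch with its own small kernel via the FFT, and blend the blurred patches with smooth windows. In NumPy/SciPy:

import numpy as np
from scipy.signal import fftconvolve

def space_variant_blur(image, kernels):
    """Apply a space-variant blur as a blend of per-patch convolutions.

    Illustrative sketch only.  `kernels` has shape (rows, cols, kh, kw):
    one small blur kernel per overlapping patch.  The operator is linear
    in `image`, and each matrix-vector product costs one small FFT
    convolution per patch.
    """
    H, W = image.shape
    rows, cols = kernels.shape[:2]
    ph = int(np.ceil(2 * H / (rows + 1)))          # patch height (about 50% overlap)
    pw = int(np.ceil(2 * W / (cols + 1)))          # patch width
    ys = np.linspace(0, H - ph, rows).astype(int)  # top-left corners of patches
    xs = np.linspace(0, W - pw, cols).astype(int)

    # Smooth window (outer product of Hann windows); a tiny constant keeps
    # the accumulated weight strictly positive at the image borders.
    win = np.outer(np.hanning(ph), np.hanning(pw)) + 1e-8
    out = np.zeros((H, W))
    weight = np.zeros((H, W))

    for i, y0 in enumerate(ys):
        for j, x0 in enumerate(xs):
            patch = image[y0:y0 + ph, x0:x0 + pw]
            blurred = fftconvolve(patch, kernels[i, j], mode="same")
            out[y0:y0 + ph, x0:x0 + pw] += win * blurred
            weight[y0:y0 + ph, x0:x0 + pw] += win

    # Pointwise normalisation: overlapping patches are blended as a convex
    # combination, so equal kernels reduce to an ordinary convolution
    # (up to patch-boundary effects).
    return out / weight

# Example: a 256x256 image blurred with kernels that vary over a 3x3 grid.
rng = np.random.default_rng(0)
image = rng.random((256, 256))
kernels = rng.random((3, 3, 7, 7))
kernels /= kernels.sum(axis=(2, 3), keepdims=True)   # normalise each kernel
blurry = space_variant_blur(image, kernels)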




Hierarchical preconditioners for image processing applications

Rick Szeliski

TBA




Linear Algebra and Machine Learning of Large Informatics Graphs

Michael Mahoney

Very large informatics graphs such as large social and information networks typically have properties that render many popular machine learning and data analysis tools largely inappropriate. While this is problematic for these applications, it also suggests that these graphs may be useful as a test case for the development of new algorithmic tools that may then be applicable much more generally. Many of the popular machine learning and data analysis tools rely on linear algebra, and they are typically used by calling traditional numerical linear algebra code as a black box. After briefly reviewing some of the structural properties of large social and information networks that are responsible for the inapplicability of traditional linear algebra and machine learning tools, I will describe several examples of "new linear algebra" and "new machine learning" that arise from the analysis of such informatics graphs. These new directions involve looking "inside" the black box, and they place very different demands on the linear algebra than are traditionally placed by numerical and scientific computing or by small-scale machine learning applications.




Robust PCA and Collaborative Filtering: Rejecting Outliers, Identifying Manipulators

Constantine Caramanis

Principal Component Analysis is one of the most widely used techniques for dimensionality reduction. Nevertheless, it is plagued by sensitivity to outliers, and finding robust analogs, particularly for high-dimensional data, is critical. We discuss the challenges posed by the high-dimensional setting, where the dimensionality is of the same order as, or greater than, the number of samples. We detail why existing techniques fail -- indeed, no previously known algorithm offers provable guarantees in the presence of any constant fraction of outliers -- and then present two very different algorithms for High Dimensional Robust PCA.
Our first algorithm achieves a breakdown point of 50% -- the best possible for any algorithm, and a stark improvement over the previous best-known result of 0%. Our second algorithm is based on ideas from convex optimization and, in addition to recovering the principal components, is also able to identify the corrupted points. We extend this to the partially observed setting, generalizing matrix completion results to handle corrupted rows or columns.
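As a rough illustration of the convex-optimization route (a sketch under our own assumptions; the function name, the trade-off parameter lam, and the thresholds below are ours, not from the talk): decompose the data matrix into a low-rank part plus a column-sparse part, and read the suspected outliers off the columns of the sparse part. Using NumPy and the cvxpy modeling library:

import numpy as np
import cvxpy as cp

def robust_pca_convex(M, lam):
    """Split M into low-rank L plus column-sparse C by convex optimization.

    Minimizes  ||L||_*  +  lam * sum_j ||C[:, j]||_2   subject to  L + C == M.
    Columns of C with non-negligible norm flag suspected outliers; the
    principal components are recovered from L.  Illustrative sketch only;
    lam is a tuning parameter.
    """
    L = cp.Variable(M.shape)
    C = cp.Variable(M.shape)
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.norm(C, 2, axis=0)))
    cp.Problem(objective, [L + C == M]).solve()
    col_norms = np.linalg.norm(C.value, axis=0)
    flagged = col_norms > 1e-3 * col_norms.max()
    return L.value, C.value, flagged

# Example: 200 points in 50 dimensions, rank 3, with 10 grossly corrupted columns.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 200))
M[:, :10] = 5.0 * rng.standard_normal((50, 10))
L_hat, C_hat, flagged = robust_pca_convex(M, lam=0.5)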




Algebraic Multigrid Methods with Applications to Data Analysis

Dan Kushnir

Multigrid solvers for linear problems (structured or unstructured) are based on iterating between two processes: relaxation (smoothing), which cheaply damps the high-frequency components of the error on the current grid, and coarse-grid correction, in which the remaining smooth error is transferred to, and reduced on, a coarser version of the problem.

The most basic multigrid solvers are the geometric solvers (GMG), in which the problem is defined on a geometric grid. Of more interest, in the context of machine learning, are the Algebraic Multigrid (AMG) solvers designed for unstructured problems. In this tutorial I will introduce the AMG technique and its use for solving numerical problems in the context of data analysis. In particular, I will describe several efficient AMG eigensolvers that can be used within spectral graph methods such as clustering, image segmentation, and dimensionality reduction. It turns out that the Algebraic Multigrid coarsening procedure itself can yield efficient and accurate algorithms for data clustering and image segmentation, without any explicit solution of the related equations.
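To make the two processes above concrete, here is a minimal two-grid cycle in NumPy (an illustrative sketch; real AMG builds the interpolation matrix P algebraically from the system matrix rather than taking it as given):

import numpy as np

def two_grid_cycle(A, b, x, P, n_smooth=3, omega=2.0 / 3.0):
    """One two-grid cycle: relaxation, coarse-grid correction, relaxation.

    A: system matrix, b: right-hand side, x: current iterate,
    P: prolongation (coarse-to-fine interpolation) matrix.
    """
    D = np.diag(np.diag(A))
    # 1) Relaxation: weighted-Jacobi sweeps damp the high-frequency error.
    for _ in range(n_smooth):
        x = x + omega * np.linalg.solve(D, b - A @ x)
    # 2) Coarse-grid correction: restrict the residual, solve the small
    #    coarse problem, and interpolate the correction back to the fine grid.
    R = P.T                     # Galerkin choice: restriction = P^T
    A_c = R @ A @ P             # coarse operator
    e_c = np.linalg.solve(A_c, R @ (b - A @ x))
    x = x + P @ e_c
    # Post-relaxation.
    for _ in range(n_smooth):
        x = x + omega * np.linalg.solve(D, b - A @ x)
    return x

# Example: 1D Laplacian on n fine points, linear interpolation from n//2 coarse points.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, n // 2))
for c in range(n // 2):
    f = 2 * c + 1               # fine index of coarse point c
    P[f, c] = 1.0
    P[f - 1, c] += 0.5
    if f + 1 < n:
        P[f + 1, c] += 0.5
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid_cycle(A, b, x, P)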