### NUMML 2010

NIPS 2010 Workshop, to be held Dec. 11, 2010, in Whistler, Canada.

#### Call for Participation

We invite high-quality submissions for presentation as posters at the workshop. The poster session will be organized along the lines of the poster session at the main NIPS conference, and there will likely be poster spotlights. Authors are encouraged to use the poster session as a means of obtaining valuable feedback from the experts present at the workshop.

Submissions should be in the form of an extended abstract, a paper (limited to 8 pages), or a poster. Work must be original and not published or under submission elsewhere (a possible exception is publication at venues unknown to machine learning researchers; please state such details with your submission). Authors should make an effort to motivate why the work fits the goals of the workshop (see below) and why it should be of interest to the audience. Merely resubmitting a submission rejected at the main conference, without adding such motivation, is strongly discouraged.

**Submission link**

We welcome contributions on the following subtopics (although we do not limit ourselves to these):

- Current challenges:
  - Large to extremely large-scale numerical algorithms
  - Eigenvector computations with huge graphs
  - Randomized algorithms for low-rank matrix approximations
  - Parallel and distributed algorithms
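To give a concrete flavor of randomized low-rank approximation, here is a minimal NumPy sketch of a randomized SVD in the Halko/Martinsson/Tropp style (our own illustrative example; function and variable names are ours):

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, seed=0):
    """Sketch of a randomized SVD: project A onto a random subspace,
    then take an exact SVD of the much smaller projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + n_oversample, min(m, n))
    # Sample the range of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, k))
    # A few power iterations sharpen the decay of the spectrum.
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)          # orthonormal basis for the sampled range
    B = Q.T @ A                     # small (k x n) projection of A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]

# On an exactly rank-2 matrix the approximation error is essentially zero.
A = np.outer(np.arange(1.0, 101.0), np.ones(50)) + np.outer(np.ones(100), np.arange(50.0))
U, s, Vt = randomized_svd(A, rank=2)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The appeal for large-scale ML is that the dominant cost is a handful of matrix-matrix products with A, which parallelize and stream well.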
- Some more classic topics, but increasingly relevant for ML applications:
  - Solving large linear systems (e.g., for linear models, Gaussian MRF mean computations, and nonlinear optimization methods such as trust-region Newton-Raphson or iteratively reweighted least squares)
  - Iterative solvers
  - Preconditioning, use of model/problem structure
  - Multi-grid / multi-level methods
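As a small illustration of iterative solvers with preconditioning, a sketch using SciPy's conjugate gradient on the classic 1-D Laplacian model problem, with a Jacobi (diagonal) preconditioner supplied as a `LinearOperator` (the example and its setup are ours; on this constant-diagonal matrix the Jacobi preconditioner is only illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# 1-D Laplacian: tridiagonal, symmetric positive definite.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply M^{-1} v = v / diag(A), never forming M.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x, info = cg(A, b, M=M)           # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x)
```

The matrix-free `LinearOperator` interface is what makes such solvers attractive in ML: only matrix-vector products with A and the preconditioner are needed, so model structure can be exploited directly.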
- Numerical linear algebra packages relevant to ML:
  - LAPACK, BLAS, GotoBLAS, MKL, UMFPACK, ...
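These packages are typically reached from ML code through wrappers; for instance, SciPy's `cho_factor`/`cho_solve` wrap LAPACK's Cholesky routines. A minimal sketch (the SPD matrix here is our own toy example, in the spirit of a regularized Gram matrix):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve  # wraps LAPACK potrf/potrs

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
A = X.T @ X + 5.0 * np.eye(5)      # symmetric positive definite
b = rng.standard_normal(5)

# Factor once, then solve cheaply (reusable for many right-hand sides).
c, low = cho_factor(A)
x = cho_solve((c, low), b)
err = np.linalg.norm(A @ x - b)
```

Reusing the factorization across right-hand sides is the standard pattern in, e.g., Gaussian process regression and ridge-type estimators.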
- Eigenvector approximation (e.g., for linear models (covariance estimation), spectral clustering and graph Laplacian methods, PCA, scalable graph analysis (social networks), matrix completion (Netflix)):
  - Lanczos algorithm and specialized variants
  - Randomized alternatives to Lanczos
  - Highly parallelizable methods
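To illustrate the Lanczos-for-spectral-clustering use case, a small sketch using SciPy's ARPACK-backed `eigsh` in shift-invert mode to extract the bottom of a graph Laplacian's spectrum; the sign pattern of the Fiedler vector recovers the two planted clusters (the toy graph and all names are ours):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # Lanczos-type solver via ARPACK

# Toy graph: two 15-node cliques joined by a single bridge edge.
n = 30
A = np.zeros((n, n))
A[:15, :15] = 1.0
A[15:, 15:] = 1.0
np.fill_diagonal(A, 0.0)
A[14, 15] = A[15, 14] = 1.0
L = sp.csc_matrix(np.diag(A.sum(axis=1)) - A)  # combinatorial Laplacian

# Smallest eigenpairs via shift-invert near zero; the eigenvector for the
# second-smallest eigenvalue (Fiedler vector) splits the clusters by sign.
vals, vecs = eigsh(L, k=2, sigma=-0.01, which="LM")
fiedler = vecs[:, np.argsort(vals)[1]]
labels = (fiedler > 0).astype(int)
```

For huge graphs this is exactly the regime the workshop targets: only sparse matrix-vector products with L are required per Lanczos step.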
- Exploiting matrix/model structure, fast matrix-vector multiplication:
  - Matrix decompositions/approximations
  - Multi-pole methods
  - FFT-based multiplication
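A standard instance of FFT-based multiplication: a circulant matrix is diagonalized by the DFT, so its matrix-vector product reduces to an O(n log n) circular convolution. A self-contained sketch checked against the dense O(n^2) product (example entirely ours):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x in O(n log n):
    C @ x is the circular convolution of c and x, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Verify against the explicitly formed circulant matrix.
rng = np.random.default_rng(0)
n = 128
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.stack([np.roll(c, k) for k in range(n)], axis=1)  # C[i, j] = c[(i-j) % n]
err = np.linalg.norm(circulant_matvec(c, x) - C @ x)
```

The same idea underlies fast products with Toeplitz matrices (after circulant embedding), which arise, e.g., in stationary kernel computations on regular grids.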
- Matrix factorizations, low-rank updates:
  - Cholesky updates/downdates
  - Factorizations for Gaussian process/kernel methods
- Parallel numerical computation for ML
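As an example of the Cholesky update topic, a sketch of the classical O(n^2) rank-1 update: given the factor of A, it produces the factor of A + v v^T without refactorizing from scratch (implementation and names are ours, following the standard rotation-based scheme):

```python
import numpy as np

def chol_update(L, v):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of A + v v^T in O(n^2) using rotation-style column updates."""
    L = L.copy()
    v = v.astype(float).copy()
    n = v.size
    for k in range(n):
        r = np.hypot(L[k, k], v[k])
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
            v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
    return L

# Check against a full refactorization.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A @ A.T + 6.0 * np.eye(6)      # symmetric positive definite
v = rng.standard_normal(6)
L1 = chol_update(np.linalg.cholesky(A), v)
err = np.linalg.norm(L1 @ L1.T - (A + np.outer(v, v)))
```

Such updates (and the corresponding downdates) are the workhorse behind efficient online Gaussian process and kernel methods, where the kernel matrix grows or shrinks one data point at a time.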