Wednesday, March 17, 2010

Fixed-Budget Kernel RLS at ICASSP 2010

My paper titled "Fixed-Budget Kernel Recursive Least-Squares" will be presented this afternoon at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2010). It was written in collaboration with Weifeng Liu and Jose C. Principe of the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida. I couldn't be there to present the paper myself, but luckily my former supervisor Ignacio Santamaría is attending and will be giving the talk. If you are interested in machine learning or online kernel methods, be sure to attend. It's the first paper of the "Learning Theory and Models I" session (MLSP-L2.1), held on Wednesday, March 17, 13:30 - 15:30.

The basic idea behind this learning algorithm is fairly simple. First of all, it is an RLS method in kernel feature space, which means it can learn a nonlinear regression in an online manner. Kernel RLS methods usually face memory and computational problems, since their solutions require inverting a large kernel matrix that incorporates information about all processed data. As new data patterns (x,y) arrive (remember this is an online setting), this matrix grows and computing the regression output becomes more costly at each step. The typical way to avoid these memory and computational problems is to restrict the stored data patterns to a minimal set that still approximates the solution sufficiently well: each time a newly arrived pattern is found to be sufficiently interesting, it is added to a "dictionary".
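To see why the growth is a problem, here is a minimal Python sketch (not the paper's Matlab code) of naive online kernel regression with a novelty-based dictionary. The kernel choice, the novelty threshold, and the full refit at each step are all illustrative assumptions; a true recursive solution updates the inverse incrementally, but the memory growth is the same.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class GrowingKernelRLS:
    """Naive online kernel regression with a growing dictionary.

    Every sample that is not too close to an already stored pattern is
    added, so memory and per-step cost keep increasing -- the problem
    that sparsification (and, in the paper, a fixed budget) addresses.
    """

    def __init__(self, sigma=1.0, lam=1e-2, novelty=0.95):
        self.sigma, self.lam, self.novelty = sigma, lam, novelty
        self.X = np.empty((0, 1))   # dictionary inputs (1-D inputs here)
        self.y = np.empty(0)        # stored targets
        self.alpha = np.empty(0)    # dual coefficients

    def predict(self, x):
        if self.X.shape[0] == 0:
            return 0.0
        k = gauss_kernel(self.X, np.atleast_2d(x), self.sigma).ravel()
        return float(k @ self.alpha)

    def update(self, x, y):
        x = np.atleast_2d(x)
        if self.X.shape[0] > 0:
            k = gauss_kernel(self.X, x, self.sigma).ravel()
            if k.max() > self.novelty:   # too close to a stored pattern
                return
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)
        # Refit on the whole (ever-growing) dictionary.
        K = gauss_kernel(self.X, self.X, self.sigma)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.y)),
                                     self.y)
```

Feeding it a stream of (x, sinc(x)) pairs, the dictionary keeps growing until the novelty test saturates the input region, which is exactly the behavior a fixed budget is meant to cap.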

In contrast to these techniques, which let the dictionary expand (albeit slowly), the technique in this paper fixes the size of the dictionary. As a consequence, apart from adding patterns to the dictionary, we also have to prune them. While the pruning itself involves only simple algebra, the criterion for deciding which data to prune is more interesting. We chose a relevance-based criterion in this paper, motivated by the good results it yielded: in each time step we calculate how relevant each stored pattern is to the regression, and then prune the least relevant one. (Details and references are in the paper itself.) As a result, after a number of iterations the algorithm obtains a compact set of patterns that summarizes the observed data quite well.
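The add-then-prune cycle can be sketched in a few lines of Python. The relevance proxy below (coefficient magnitude scaled by the diagonal of the regularized inverse kernel matrix) is one common measure of the error incurred by removing a pattern; it is my illustrative stand-in, not necessarily the exact criterion derived in the paper, and the brute-force refits stand in for the paper's recursive updates.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class FixedBudgetSketch:
    """Fixed-budget kernel regressor sketch: when the dictionary exceeds
    `budget`, the least relevant stored pattern is pruned."""

    def __init__(self, budget=20, sigma=0.5, lam=1e-2):
        self.budget, self.sigma, self.lam = budget, sigma, lam
        self.X = np.empty((0, 1))
        self.y = np.empty(0)
        self.alpha = np.empty(0)

    def predict(self, x):
        if self.X.shape[0] == 0:
            return 0.0
        k = gauss_kernel(self.X, np.atleast_2d(x), self.sigma).ravel()
        return float(k @ self.alpha)

    def update(self, x, y):
        self.X = np.vstack([self.X, np.atleast_2d(x)])
        self.y = np.append(self.y, y)
        K = gauss_kernel(self.X, self.X, self.sigma)
        Kinv = np.linalg.inv(K + self.lam * np.eye(len(self.y)))
        self.alpha = Kinv @ self.y
        if len(self.y) > self.budget:
            # Relevance proxy: how much the fit suffers if pattern i goes.
            relevance = np.abs(self.alpha) / np.diag(Kinv)
            keep = np.ones(len(self.y), dtype=bool)
            keep[int(np.argmin(relevance))] = False   # drop least relevant
            self.X, self.y = self.X[keep], self.y[keep]
            K = gauss_kernel(self.X, self.X, self.sigma)
            self.alpha = np.linalg.solve(
                K + self.lam * np.eye(len(self.y)), self.y)
```

However the relevance is measured, the key property is visible here: the dictionary never exceeds the budget, so memory and per-step cost stay bounded no matter how long the data stream runs.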

I particularly like the interpretation of this algorithm as a simple learning method that deals with memory restrictions. Every time it receives a new data pattern, it must actively decide which pattern to "forget" in order to maintain its memory size. So it's a way of "optimal forgetting". And as the results in the paper point out, it is also quite efficient at it.

Figures: The figure above is one of the slides. The figure below illustrates the "label update" procedure that can be introduced to give the algorithm tracking capability: in a time-varying environment the regression surface may change, as indicated by the new data pattern (xn,yn), and the proposed procedure updates the stored labels in the vicinity of this point to reflect those changes.
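A plausible form of such a label update can be written in a few lines. This is only a guess at the shape of the rule (nudge each stored label toward the change observed at the new sample, weighted by kernel proximity); the function name, the step size mu, and the exact update are my illustrative assumptions, not taken from the paper.

```python
import numpy as np

def label_update(X, y, x_new, y_new, f_new, sigma=1.0, mu=0.5):
    """Hypothetical label-update step for tracking a time-varying
    regression surface.

    X      : (m, d) stored dictionary inputs
    y      : (m,)   stored labels
    x_new  : (d,)   newly arrived input
    y_new  : float  newly arrived label
    f_new  : float  current prediction at x_new
    """
    # Kernel proximity of each stored input to the new point.
    w = np.exp(-((X - x_new) ** 2).sum(axis=-1) / (2.0 * sigma ** 2))
    # Shift nearby labels by a fraction of the observed prediction error.
    return y + mu * w * (y_new - f_new)
```

Labels far from x_new receive an exponentially small correction, so only the local neighborhood of the regression surface is revised, matching the "vicinity" behavior described above.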
But enough about this paper. My friends and colleagues are also presenting theirs, so if you are at ICASSP be sure to check them out:
  • Il Park and Jose Principe, "Quantification of Inter-trial Non-stationarity in Spike Trains from Periodically Stimulated Neural Cultures" (Multivariate and Multimodal Analysis of Brain Signals, Tuesday, March 16, 13:50 - 14:10).
  • Sohan Seth and Jose C. Principe, "A Conditional Distribution Function Based Approach to Design Nonparametric Tests of Independence and Conditional Independence" (Learning Theory and Models III, Wednesday, March 17, 16:00 - 18:00).
  • David Ramirez, Javier Via, Ignacio Santamaria, Roberto Lopez-Valcarce, and Louis L. Scharf, "Multiantenna Spectrum Sensing: Detection of Spatial Correlation Among Time-Series with Unknown Spectra" (SPCOM-L1: Spectrum Sensing for Cognitive Radio I, Tuesday, March 16, 14:10 - 14:30).
  • Javier Vía, David Ramírez, Ignacio Santamaría, and Luis Vielva, "Widely and Semi-Widely Linear Processing of Quaternion Vectors" (Detection and Estimation Techniques - II, Thursday, March 18, 13:30 - 15:30).
  • Luis Vielva, Javier Vía, Jesús Gutiérrez, Óscar González, Jesús Ibáñez, and Ignacio Santamaría, "Building a Web Platform for Learning Advanced Digital Communications Using a MIMO Testbed" (Signal Processing Education, Tuesday, March 16, 14:30 - 14:50).
  • Abhishek Singh and Jose Principe, "Kernel Width Adaptation in Information Theoretic Cost Functions" (Learning Theory and Models III, Wednesday, March 17, 16:00 - 18:00).
  • Abhishek Singh and Jose Principe, "A Closed Form Recursive Solution for Maximum Correntropy Training" (Learning Theory and Models III, Wednesday, March 17, 16:00 - 18:00).
  • Abhishek Singh, Tejaswi Tamminedi, Guy Yosiphon, Anurag Ganguli, and Jacob Yadegar, "Hidden Markov Models for Modeling Blood Pressure Data to Predict Acute Hypotension" (Bioinformatics and Biomedical Signal Processing, Tuesday, March 16, 16:00 - 18:00).
Thanks to Memming for the blog post idea!

PS: The paper will shortly be available on my academic web page [update: it can now be downloaded from my publications page], and I also plan to release the Matlab code either there or here in the near future. Be sure to ask me about it if you can't find it.


Memming said...

Finite resource learning problems are very important indeed. They force one to compress as much as possible into the existing storage. I like the simple yet elegant idea of this paper.

Sangorrin said...

nice stuff!!