Title: Regularized Multi-Task Learning
1. Regularized Multi-Task Learning
- Theodoros Evgeniou
- Massimiliano Pontil
- KDD 2004
2. Multi-Task Learning
- Learn many related tasks simultaneously
- Tasks have commonality
- Each task has its own specialty
- Why multi-task learning (my understanding)
  - Obtain a better estimate of the common part of the tasks
  - Acquire a better understanding of the problems and their underlying model
- Application
  - Financial forecasting models
3. SVM: A Brief Introduction
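As a reminder of the building block used below, the standard soft-margin linear SVM can be written as follows (a standard formulation, stated here in notation consistent with the multi-task primal on slide 6, not reproduced from the slide itself):

```latex
\min_{w,\;\xi_i}\;\; \sum_{i=1}^{m} \xi_i + \lambda \|w\|^2
\quad \text{s.t.} \quad y_i\, w \cdot x_i \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1, \dots, m.
```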
4. Regularized Multi-Task Learning
- Based on the SVM classifier
- Learn T tasks together; each task corresponds to an SVM classifier with weight vector wt = w0 + vt (see the sketch after this list)
- w0 carries information about the commonality shared by the tasks
- vt carries information about the specialty of task t
- The trade-off between commonality and specialty is controlled by regularization when solving the problem
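A minimal sketch of the decomposition the bullets refer to, in the paper's notation (w_t is the full weight vector used for task t):

```latex
w_t \;=\; w_0 + v_t, \qquad t = 1, \dots, T,
```

where w_0 is shared by all tasks and v_t is specific to task t; the primal on slide 6 penalizes the two parts with separate regularization weights.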
5. Notation and Setup
- All data for the tasks come from the same space X × Y (X ⊂ R^d, Y ⊂ R)
- T = 1 corresponds to the single-task problem
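In this setup each task t comes with its own sample of m labelled examples, which can be sketched as:

```latex
\{(x_{1t}, y_{1t}), \dots, (x_{mt}, y_{mt})\} \subset X \times Y, \qquad t = 1, \dots, T,
```

with y_{it} ∈ {−1, +1} for the classification (SVM) case considered here.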
6. Problem Formulation (Primal)
- If the weight on the vt penalty (λ1) is large, the vt are forced to be small, which emphasizes the common part w0
- If the weight on the w0 penalty (λ2) is large, w0 is forced to be small, which emphasizes the task-specific parts vt
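A sketch of the primal these annotations point at, following the formulation in the paper (λ1/T weights the task-specific penalties, λ2 the common one):

```latex
\min_{w_0,\, v_t,\, \xi_{it}} \;\;
\sum_{t=1}^{T} \sum_{i=1}^{m} \xi_{it}
\;+\; \frac{\lambda_1}{T} \sum_{t=1}^{T} \|v_t\|^2
\;+\; \lambda_2 \|w_0\|^2
\quad \text{s.t.} \quad
y_{it}\,(w_0 + v_t) \cdot x_{it} \ge 1 - \xi_{it}, \;\; \xi_{it} \ge 0.
```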
7. Reformulate the Problem (Primal)
- Proved by inspecting the Lagrangian function of problem (2)
- w0 comes out as a weighted mean of the wt (a quick sketch of why follows below)
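One quick way to see the weighted-mean claim (a first-order-condition sketch, not the paper's Lagrangian argument): write w_t = w_0 + v_t, hold the w_t fixed, and minimize the regularization terms over w_0 alone:

```latex
\min_{w_0} \;\frac{\lambda_1}{T} \sum_{t=1}^{T} \|w_t - w_0\|^2 + \lambda_2 \|w_0\|^2
\;\;\Longrightarrow\;\;
w_0 = \frac{\lambda_1}{\lambda_1 + \lambda_2}\cdot\frac{1}{T} \sum_{t=1}^{T} w_t ,
```

i.e. a shrunken average of the task weight vectors, with the amount of shrinkage governed by the ratio of λ1 to λ2.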
8. The Dual Formulation
- Using a feature map, the problem is reduced to a single-task problem
- The same discriminant function form for each task
- The same optimization objective
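A sketch of the reduction: each example x from task t is treated as a pair (x, t), and the dual then only touches the data through a kernel between such pairs. Writing μ for the ratio that trades off the common and task-specific penalties (a function of λ1, λ2, and T in the paper; treated here as a single symbol), the induced linear multi-task kernel has the form:

```latex
K\big((x, t), (z, s)\big) \;=\; \Big(\frac{1}{\mu} + \delta_{ts}\Big)\, x \cdot z ,
\qquad \delta_{ts} = 1 \text{ if } t = s,\; 0 \text{ otherwise,}
```

so the shared part contributes to every pair of examples and the task-specific part only to pairs from the same task.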
9. The Dual Formulation (cont.)
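Since the dual only needs such a kernel, a standard SVM solver with a precomputed Gram matrix can be used. Below is a minimal, hypothetical Python sketch (not from the paper): the function multitask_gram, the toy data, and treating μ as a plain hyperparameter are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def multitask_gram(X, tasks, X2=None, tasks2=None, mu=1.0):
    """Gram matrix K[(x, t), (z, s)] = (1/mu + 1[t == s]) * <x, z>.

    mu is treated as a tunable hyperparameter here; in the paper it is tied to
    the regularization parameters lambda_1, lambda_2 and the number of tasks T.
    """
    if X2 is None:
        X2, tasks2 = X, tasks
    lin = X @ X2.T                                 # shared linear part <x, z>
    same_task = tasks[:, None] == tasks2[None, :]  # indicator 1[t == s]
    return (1.0 / mu + same_task) * lin

# Toy data: T tasks, m examples each, sharing a common decision direction (illustrative only).
rng = np.random.default_rng(0)
T, m, d = 3, 40, 5
X = rng.normal(size=(T * m, d))
tasks = np.repeat(np.arange(T), m)
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=T * m))

# One SVM over all (x, task) pairs with the precomputed multi-task kernel.
K = multitask_gram(X, tasks, mu=1.0)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)

# Prediction needs the cross Gram matrix between new pairs and the training pairs.
X_new = rng.normal(size=(5, d))
tasks_new = np.zeros(5, dtype=int)                 # e.g. new points assigned to task 0
K_new = multitask_gram(X_new, tasks_new, X, tasks, mu=1.0)
print(clf.predict(K_new))
```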
10. Training Time
- Training a standard SVM costs O((number of training examples)^3), i.e. O(m^3) for m examples
- With T tasks, each having m training examples:
  - Training each task individually: O(T·m^3)
  - Training all tasks together: O(T^3·m^3), since the joint problem has T·m examples and (T·m)^3 = T^3·m^3
11. Experiment on Synthetic Data
12. Experiment on Real Data