Kernel-Based Machine Learning and Multivariate Modelling

Credits: 6
Type: Specialization complementary (Data Science)
Requirements: This subject has no requirements
Department: EIO; CS

Weekly hours

Theory: 3
Problems: 0
Laboratory: 0
Guided learning: 0.2
Autonomous learning: 6

Objectives

  1. Understand the foundations of kernel-based learning methods
    Related competences: CG3, CEC1, CEC3, CTR6
  2. Get acquainted with specific kernel-based methods, such as the Support Vector Machine
    Related competences: CG3, CTR4
  3. Know methods for kernelizing existing statistical or machine learning algorithms
    Related competences: CTR6
  4. Know the theoretical foundations of kernel functions and kernel methods
    Related competences: CG3
  5. Know the structure of the main unsupervised learning problems
    Related competences: CG3, CEC1, CTR4, CTR6
  6. Learn different methods for dimensionality reduction when the standard assumptions of classical multivariate analysis are not fulfilled
    Related competences: CG3, CEC1, CEC3, CTR4, CTR6
  7. Learn how to combine dimensionality reduction techniques with prediction algorithms
    Related competences: CG3, CEC1, CEC3, CTR4, CTR6

Contents

  1. Introduction to Kernel-Based Learning
    This topic introduces the student to the foundations of kernel-based learning, focusing on kernel linear regression (see the first sketch after this list).
  2. The Support Vector Machine (SVM)
    This topic develops the Support Vector Machine (SVM) for classification, regression and novelty detection (see the second sketch after this list).
  3. Kernels: properties & design
    This topic defines kernel functions, their properties and their construction, and introduces specific kernels for different data types, such as real vectors, categorical information, feature subsets, strings, probability distributions and graphs (see the third sketch after this list).
  4. Kernelizing ML algorithms
    This topic reviews different techniques for kernelizing existing algorithms.
  5. Theoretical underpinnings
    This topic reviews the basic theoretical underpinnings of kernel-based methods, focusing on statistical learning theory.
  6. Introduction to unsupervised learning
    Unsupervised versus supervised learning. Main problems in unsupervised learning (density estimation, dimensionality reduction, latent variables, clustering).
  7. Nonlinear dimensionality reduction
    a. Principal curves.
    b. Local Multidimensional Scaling.
    c. ISOMAP.
    d. t-distributed Stochastic Neighbor Embedding (t-SNE).
    e. Applications: (i) visualization of high- or infinite-dimensional data; (ii) exploratory analysis of functional data in Demography. (A baseline MDS sketch follows this list.)
  8. Dimensionality reduction with sparsity
    a. Matrix decompositions, approximations, and completion.
    b. Sparse Principal Components and Canonical Correlation.
    c. Applications: (i) recommender systems; (ii) estimating causal effects.
  9. Prediction after dimensionality reduction
    a. Reduced-rank regression and canonical correlation.
    b. Principal Component Regression (see the last sketch after this list).
    c. Distance-based regression.
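First sketch (topic 1): a minimal base-R implementation of kernel ridge regression, a regularized form of kernel linear regression. The RBF kernel and the sigma and lambda values are illustrative choices, not course-prescribed ones.

  # Kernel ridge regression with an RBF kernel, from scratch in base R
  rbf_kernel <- function(X1, X2, sigma = 1) {
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 <- outer(rowSums(X1^2), rowSums(X2^2), "+") - 2 * X1 %*% t(X2)
    exp(-d2 / (2 * sigma^2))
  }

  set.seed(1)
  X <- matrix(seq(-3, 3, length.out = 50), ncol = 1)
  y <- sin(2 * X[, 1]) + rnorm(50, sd = 0.2)

  # Dual solution: alpha = (K + lambda I)^{-1} y
  lambda <- 0.1
  K <- rbf_kernel(X, X)
  alpha <- solve(K + lambda * diag(nrow(K)), y)

  # Predictions are kernel expansions: f(x) = sum_i alpha_i k(x, x_i)
  Xnew <- matrix(seq(-3, 3, length.out = 200), ncol = 1)
  yhat <- rbf_kernel(Xnew, X) %*% alpha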
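Second sketch (topic 2): SVM classification using the kernlab package, one common choice for kernel methods in R; the syllabus only states that the software is primarily R, so the package and parameters are assumptions for illustration.

  library(kernlab)

  # Binary classification for illustration: versicolor vs. virginica
  d <- droplevels(subset(iris, Species != "setosa"))

  fit <- ksvm(Species ~ ., data = d, type = "C-svc",
              kernel = "rbfdot", C = 1, cross = 5)
  fit                     # prints training error and 5-fold cross-validation error
  predict(fit, head(d))   # predicted classes

Regression and novelty detection use the same function with type = "eps-svr" and type = "one-svc", respectively.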
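Third sketch (topic 3): kernel design relies on closure properties, for instance that the sum and the elementwise (Schur) product of two valid kernels are again valid kernels. A quick numerical check in base R, on random data, with a small tolerance to absorb floating-point round-off:

  set.seed(2)
  X <- matrix(rnorm(40), nrow = 20)

  K_lin <- X %*% t(X)                  # linear kernel
  K_rbf <- exp(-as.matrix(dist(X))^2)  # RBF kernel with gamma = 1

  # A symmetric matrix is a valid kernel iff it is positive semidefinite
  is_psd <- function(K, tol = 1e-8) {
    all(eigen(K, symmetric = TRUE, only.values = TRUE)$values > -tol)
  }

  is_psd(K_lin + K_rbf)  # TRUE: sum of kernels
  is_psd(K_lin * K_rbf)  # TRUE: elementwise product of kernels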
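Baseline MDS sketch (topic 7): classical metric multidimensional scaling in base R, the linear method that Local MDS, ISOMAP and t-SNE refine; ISOMAP, for instance, replaces the Euclidean distances below with graph geodesic distances. The iris data is an illustrative choice.

  # 2-D embedding of the iris measurements from pairwise distances
  D <- dist(scale(iris[, 1:4]))
  emb <- cmdscale(D, k = 2)
  plot(emb, col = iris$Species, xlab = "Dimension 1", ylab = "Dimension 2")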
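Last sketch (topic 9b): Principal Component Regression in base R, projecting the predictors onto the first q principal components and regressing the response on the scores. Here q = 2 and the mtcars data are illustrative choices.

  q <- 2
  X <- as.matrix(mtcars[, -1])   # predictors
  y <- mtcars$mpg                # response

  pca <- prcomp(X, scale. = TRUE)
  Z <- pca$x[, 1:q]              # component scores
  pcr <- lm(y ~ Z)               # ordinary least squares on the reduced representation
  summary(pcr)$r.squared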

Activities

Introduction to Kernel-Based Learning


Objectives: 1
Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

The SVM for classification, regression and novelty detection


Objectives: 2
Theory: 3h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Kernels: properties & design


Objectives: 1 3
Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Practice class (I): the SVM


Objectives: 1 2
Theory: 3h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Kernelizing ML algorithms


Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Practice class (II): kernel design & other KBL methods


Objectives: 3 4
Theory: 3h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Theoretical underpinnings


Objectives: 1 4
Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Introduction to unsupervised learning



Theory: 3h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Nonlinear dimensionality reduction 1



Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Nonlinear dimensionality reduction 2



Theory: 3h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Dimensionality reduction with sparsity 1



Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Dimensionality reduction with sparsity 2



Theory: 3h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Prediction after dimensionality reduction 1



Theory: 4h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 6h

Prediction after dimensionality reduction 2



Theory: 2h, Problems: 0h, Laboratory: 0h, Guided learning: 0.2h, Autonomous learning: 6h

Evaluation quiz


Objectives: 1 2 3 4 5 6 7
Week: 15 (outside class hours)
Theory: 0h, Problems: 0h, Laboratory: 0h, Guided learning: 0h, Autonomous learning: 12h

Teaching methodology

Learning combines theoretical explanations with their application to practical exercises and real cases. The lectures develop the necessary scientific knowledge, including its application to problem solving; these problems constitute the students' practical work on the subject and are carried out as autonomous learning. The software used is primarily R.

Evaluation methodology

The course evaluation is based on the marks obtained in the practical work delivered during the semester, together with the mark obtained in the final written test.

Each practical assignment leads to a written report, which is evaluated by the teachers; the resulting mark is denoted P.

The exam takes place at the end of the semester and assesses the assimilation of the basic concepts of the whole subject; the resulting mark is denoted T.

The final mark will be obtained as:

60% × P + 40% × T
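
For example, hypothetical marks P = 8.0 and T = 6.5 would yield 0.6 × 8.0 + 0.4 × 6.5 = 7.4.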
