Kernelized Bayesian transfer learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Citations (Scopus)

Abstract

Transfer learning considers related but distinct tasks defined on heterogeneous domains and tries to transfer knowledge between these tasks to improve generalization performance. It is particularly useful when we do not have a sufficient amount of labeled training data in some tasks, where such data may be very costly, laborious, or even infeasible to obtain. Instead, learning the tasks jointly enables us to effectively increase the amount of labeled training data. In this paper, we formulate a kernelized Bayesian transfer learning framework that is a principled combination of kernel-based dimensionality reduction models with task-specific projection matrices, used to find a shared subspace, and a coupled classification model for all of the tasks in this subspace. Our two main contributions are: (i) two novel probabilistic models for binary and multiclass classification, and (ii) very efficient variational approximation procedures for these models. We illustrate the generalization performance of our algorithms on two different applications. In computer vision experiments, our method outperforms the state-of-the-art algorithms on nine out of 12 benchmark supervised domain adaptation experiments defined on two object recognition data sets. In cancer biology experiments, we use our algorithm to predict the mutation status of important cancer genes from gene expression profiles using two distinct cancer populations, namely, patient-derived primary tumor data and in-vitro-derived cancer cell line data. We show that we can increase our generalization performance on primary tumors by using cell lines as an auxiliary data source.
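The abstract's core construction — task-specific projection matrices mapping each task's kernel representation into one shared subspace, with a single coupled classifier on top — can be sketched roughly as follows. This is a conceptual NumPy illustration with random placeholder projections and synthetic data (the shapes, kernel choice, and variable names are assumptions), not the paper's actual variational inference procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tasks with heterogeneous feature spaces (different dimensionalities).
X1 = rng.normal(size=(40, 10))   # task 1: 40 samples, 10 features
X2 = rng.normal(size=(30, 25))   # task 2: 30 samples, 25 features
R = 2                            # dimensionality of the shared subspace

def rbf_kernel(X, gamma=0.1):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

K1, K2 = rbf_kernel(X1), rbf_kernel(X2)

# Task-specific projection matrices carry each task's kernel
# representation into the same R-dimensional shared subspace.
A1 = rng.normal(size=(K1.shape[0], R))
A2 = rng.normal(size=(K2.shape[0], R))
H1 = K1 @ A1      # (40, R): task-1 samples in the shared subspace
H2 = K2 @ A2      # (30, R): task-2 samples in the shared subspace

# One coupled binary classifier (weights w, bias b) serves all tasks.
w, b = rng.normal(size=R), 0.0
pred1 = np.sign(H1 @ w + b)   # labels in {-1, +1} for task 1
pred2 = np.sign(H2 @ w + b)   # labels in {-1, +1} for task 2
```

In the paper, the projection matrices and the classifier are given priors and inferred jointly by variational approximation; here they are random placeholders that only demonstrate how heterogeneous domains end up sharing one decision function.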

Original language: English (US)
Title of host publication: Proceedings of the National Conference on Artificial Intelligence
Publisher: AI Access Foundation
Pages: 1831-1839
Number of pages: 9
Volume: 3
ISBN (Print): 9781577356790
State: Published - 2014
Event: 28th AAAI Conference on Artificial Intelligence (AAAI 2014), 26th Innovative Applications of Artificial Intelligence Conference (IAAI 2014), and the 5th Symposium on Educational Advances in Artificial Intelligence (EAAI 2014) - Quebec City, Canada
Duration: Jul 27 2014 - Jul 31 2014



ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Gonen, M., & Margolin, A. (2014). Kernelized Bayesian transfer learning. In Proceedings of the National Conference on Artificial Intelligence (Vol. 3, pp. 1831-1839). AI Access Foundation.

@inproceedings{b4f9aebb065d43b69aa53e6f3a16cdcf,
title = "Kernelized Bayesian transfer learning",
author = "Mehmet Gonen and Adam Margolin",
year = "2014",
language = "English (US)",
isbn = "9781577356790",
volume = "3",
pages = "1831--1839",
booktitle = "Proceedings of the National Conference on Artificial Intelligence",
publisher = "AI Access Foundation",

}

TY - GEN

T1 - Kernelized Bayesian transfer learning

AU - Gonen, Mehmet

AU - Margolin, Adam

PY - 2014

Y1 - 2014


UR - http://www.scopus.com/inward/record.url?scp=84908154456&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84908154456&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84908154456

SN - 9781577356790

VL - 3

SP - 1831

EP - 1839

BT - Proceedings of the National Conference on Artificial Intelligence

PB - AI Access Foundation

ER -