Experimental design under the Bradley-Terry model

Yuan Guo, Peng Tian, Jayashree Kalpathy-Cramer, Susan Ostmo, John Campbell, Michael Chiang, Deniz Erdogmuş, Jennifer Dy, Stratis Ioannidis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Labels generated by human experts via comparisons exhibit smaller variance compared to traditional sample labels. Collecting comparison labels is challenging over large datasets, as the number of comparisons grows quadratically with the dataset size. We study the following experimental design problem: given a budget of expert comparisons, and a set of existing sample labels, we determine the comparison labels to collect that lead to the highest classification improvement. We study several experimental design objectives motivated by the Bradley-Terry model. The resulting optimization problems amount to maximizing submodular functions. We experimentally evaluate the performance of these methods over synthetic and real-life datasets.
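To make the abstract's setup concrete: under the Bradley-Terry model, the probability that sample i wins a pairwise comparison against sample j is s_i / (s_i + s_j) for positive quality scores, and a budgeted selection of comparisons that maximizes a monotone submodular objective admits the standard greedy algorithm. The sketch below is not the paper's method; the scores, the toy coverage objective, and all function names are illustrative assumptions.

```python
import itertools

def bt_prob(s_i, s_j):
    # Bradley-Terry: probability that item i beats item j,
    # given positive quality scores s_i, s_j.
    return s_i / (s_i + s_j)

def greedy_select(pairs, budget, gain):
    # Standard greedy for maximizing a monotone submodular set
    # function under a cardinality (budget) constraint: repeatedly
    # add the pair with the largest marginal gain.
    chosen, remaining = [], list(pairs)
    for _ in range(budget):
        best = max(remaining, key=lambda p: gain(chosen + [p]) - gain(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy monotone submodular objective: number of distinct samples
# touched by the selected comparisons.
def coverage(chosen):
    return len({sample for pair in chosen for sample in pair})

scores = {0: 1.0, 1: 2.0, 2: 4.0, 3: 0.5}        # hypothetical BT scores
pairs = list(itertools.combinations(scores, 2))  # all candidate comparisons
picked = greedy_select(pairs, budget=2, gain=coverage)
# With a budget of 2, the greedy picks two disjoint pairs,
# covering all four samples.
```

The greedy rule enjoys the classic (1 - 1/e) approximation guarantee for monotone submodular maximization, which is what makes submodularity of the design objectives useful in practice.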

Original language: English (US)
Title of host publication: Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI 2018
Editors: Jerome Lang
Publisher: International Joint Conferences on Artificial Intelligence
Pages: 2198-2204
Number of pages: 7
Volume: 2018-July
ISBN (Electronic): 9780999241127
State: Published - Jan 1 2018
Event: 27th International Joint Conference on Artificial Intelligence, IJCAI 2018 - Stockholm, Sweden
Duration: Jul 13 2018 - Jul 19 2018



ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Guo, Y., Tian, P., Kalpathy-Cramer, J., Ostmo, S., Campbell, J., Chiang, M., ... Ioannidis, S. (2018). Experimental design under the Bradley-Terry model. In J. Lang (Ed.), Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI 2018 (Vol. 2018-July, pp. 2198-2204). International Joint Conferences on Artificial Intelligence.

@inproceedings{97065832d9864f569c527a992aa0d9db,
title = "Experimental design under the Bradley-Terry model",
abstract = "Labels generated by human experts via comparisons exhibit smaller variance compared to traditional sample labels. Collecting comparison labels is challenging over large datasets, as the number of comparisons grows quadratically with the dataset size. We study the following experimental design problem: given a budget of expert comparisons, and a set of existing sample labels, we determine the comparison labels to collect that lead to the highest classification improvement. We study several experimental design objectives motivated by the Bradley-Terry model. The resulting optimization problems amount to maximizing submodular functions. We experimentally evaluate the performance of these methods over synthetic and real-life datasets.",
author = "Yuan Guo and Peng Tian and Jayashree Kalpathy-Cramer and Susan Ostmo and John Campbell and Michael Chiang and {Erdogmuş}, Deniz and Jennifer Dy and Stratis Ioannidis",
year = "2018",
month = "1",
day = "1",
language = "English (US)",
volume = "2018-July",
pages = "2198--2204",
editor = "Jerome Lang",
booktitle = "Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI 2018",
publisher = "International Joint Conferences on Artificial Intelligence",

}
