Abstract
We propose an architecture called siamese autoencoders for extracting and switching pre-determined styles of speech signals while retaining the content. We apply this architecture to a voice conversion task in which we define the content to be the linguistic message and the style to be the speaker's voice. We assume two or more data streams with the same content but unique styles. The architecture is composed of two or more separate but shared-weight autoencoders that are joined by loss functions at the hidden layers. A hidden vector is composed of style and content sub-vectors, and the loss functions constrain the encodings to decompose style and content. We can select an intended target speaker either by supplying the associated style vector, or by extracting a new style vector from a new utterance using a proposed style extraction algorithm. We focus on in-training speakers but perform some initial experiments for out-of-training speakers as well. We propose and study several types of loss functions. The experimental results show that the proposed many-to-many model is able to convert voices successfully; however, its performance does not surpass that of the state-of-the-art one-to-one model.
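As the abstract describes, each autoencoder maps its input to a hidden vector that is the concatenation of a style sub-vector and a content sub-vector, and the encoder/decoder weights are shared across streams. Style switching then amounts to swapping style sub-vectors between two parallel streams before decoding. The following is a minimal numpy sketch of that mechanic only, with untrained random weights and hypothetical dimensions; the paper's actual loss functions, training procedure, and speech features are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_STYLE, D_CONTENT = 8, 2, 4   # hypothetical dimensions
D_HID = D_STYLE + D_CONTENT

# Weights shared by both autoencoders (the "siamese" constraint).
W_enc = rng.standard_normal((D_IN, D_HID)) * 0.1
W_dec = rng.standard_normal((D_HID, D_IN)) * 0.1

def encode(x):
    """Map input to a hidden vector split as [style | content]."""
    h = np.tanh(x @ W_enc)
    return h[:D_STYLE], h[D_STYLE:]

def decode(style, content):
    """Reconstruct from a (style, content) pair."""
    return np.tanh(np.concatenate([style, content]) @ W_dec)

# Two parallel utterances: same content, spoken in two styles.
x_a = rng.standard_normal(D_IN)
x_b = rng.standard_normal(D_IN)

s_a, c_a = encode(x_a)
s_b, c_b = encode(x_b)

# Style switching: keep stream A's content, impose stream B's style.
y_converted = decode(s_b, c_a)
```

In the trained model, loss terms at the hidden layer would push the style sub-vectors of same-speaker frames together and force the content sub-vectors of parallel frames to match, so that the swap above yields a converted voice; here the swap is purely structural.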
Original language | English (US) |
---|---|
Pages (from-to) | 1293-1297 |
Number of pages | 5 |
Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
Volume | 2017-August |
DOIs | 10.21437/Interspeech.2017-1434 |
State | Published - Jan 1 2017 |
Keywords
- Siamese autoencoders
- Style extraction
- Style switching
- Voice conversion
ASJC Scopus subject areas
- Language and Linguistics
- Human-Computer Interaction
- Signal Processing
- Software
- Modeling and Simulation
Cite this
Siamese autoencoders for speech style extraction and switching applied to voice identification and conversion. / Mohammadi, Seyed Hamidreza; Kain, Alexander.
In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, Vol. 2017-August, 01.01.2017, p. 1293-1297. Research output: Contribution to journal › Article
TY - JOUR
T1 - Siamese autoencoders for speech style extraction and switching applied to voice identification and conversion
AU - Mohammadi, Seyed Hamidreza
AU - Kain, Alexander
PY - 2017/1/1
Y1 - 2017/1/1
AB - We propose an architecture called siamese autoencoders for extracting and switching pre-determined styles of speech signals while retaining the content. We apply this architecture to a voice conversion task in which we define the content to be the linguistic message and the style to be the speaker's voice. We assume two or more data streams with the same content but unique styles. The architecture is composed of two or more separate but shared-weight autoencoders that are joined by loss functions at the hidden layers. A hidden vector is composed of style and content sub-vectors, and the loss functions constrain the encodings to decompose style and content. We can select an intended target speaker either by supplying the associated style vector, or by extracting a new style vector from a new utterance using a proposed style extraction algorithm. We focus on in-training speakers but perform some initial experiments for out-of-training speakers as well. We propose and study several types of loss functions. The experimental results show that the proposed many-to-many model is able to convert voices successfully; however, its performance does not surpass that of the state-of-the-art one-to-one model.
KW - Siamese autoencoders
KW - Style extraction
KW - Style switching
KW - Voice conversion
UR - http://www.scopus.com/inward/record.url?scp=85039165729&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85039165729&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2017-1434
DO - 10.21437/Interspeech.2017-1434
M3 - Article
AN - SCOPUS:85039165729
VL - 2017-August
SP - 1293
EP - 1297
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SN - 2308-457X
ER -