Towards Unsupervised Speech Recognition and Synthesis with Quantized Speech Representation Learning

Paper: arXiv
Authors: Alexander H. Liu (co-first), Tao Tu (co-first), Hung-yi Lee, Lin-shan Lee
Abstract: In this paper, we propose a Sequential Representation Quantization AutoEncoder (SeqRQ-AE) that learns from primarily unpaired audio data and produces sequences of representations very close to the phoneme sequences of speech utterances. This is achieved by proper temporal segmentation, which makes the representations phoneme-synchronized, and proper phonetic clustering, which keeps the total number of distinct representations close to the number of phonemes. The mapping between the distinct representations and phonemes is learned from a small amount of annotated paired data. Preliminary experiments on LJSpeech demonstrated that the learned representations for vowels occupy relative locations in the latent space that closely parallel those in the IPA vowel chart defined by linguistics experts. With less than 20 minutes of annotated speech, our method outperformed existing methods on phoneme recognition and was able to synthesize intelligible speech that surpasses our baseline model.
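The two core operations described in the abstract, nearest-neighbor quantization against a small codebook and temporal segmentation by merging repeated codes, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: in SeqRQ-AE the codebook is learned jointly with the encoder, while here a fixed toy codebook and the function name `quantize_sequence` are assumptions for illustration only.

```python
import numpy as np

def quantize_sequence(frames, codebook):
    """Assign each frame vector to its nearest codebook entry (L2 distance),
    then collapse consecutive repeats into one phoneme-like token.

    frames:   (T, D) array of frame-level latent representations
    codebook: (V, D) array, one row per discrete code
              (V is kept close to the number of phonemes)
    Returns the collapsed sequence of code indices.
    """
    # (T, V) pairwise squared distances between frames and codes
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d.argmin(axis=1)                 # nearest code per frame
    # temporal segmentation: merge runs of identical codes
    keep = np.r_[True, codes[1:] != codes[:-1]]
    return codes[keep]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 2))           # toy codebook: 4 "phonemes"
# five frames hovering near codes 1, 1, 3, 3, 0
frames = codebook[[1, 1, 3, 3, 0]] + 0.01 * rng.normal(size=(5, 2))
print(quantize_sequence(frames, codebook))   # → [1 3 0]
```

Collapsing repeated codes is what makes the output phoneme-synchronized: several consecutive frames belonging to one phoneme map to a single discrete token.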

High-quality Audio Samples

This section presents examples of synthesized speech produced by the proposed SeqRQ-AE. All utterances were drawn from the test set.
Note that some changes were made (w.r.t. the setting in the paper) to obtain higher synthesized speech quality.
| Transcript | 20 Minutes Paired Data | 10 Minutes Paired Data | Ground Truth |
| --- | --- | --- | --- |
| “in being comparatively modern.” | (audio) | (audio) | (audio) |
| “including a few of major importance.” | (audio) | (audio) | (audio) |
| “Here once more the almost invariable modern practice is in opposition to a natural sense of proportion.” | (audio) | (audio) | (audio) |
| “It is, moreover, true that all living forms are but series of modifications and extensions of one single plan of structure.” | (audio) | (audio) | (audio) |
| “The miserable wretch during the whole of this time was struggling with the most determined violence, to the great horror of the spectators.” | (audio) | (audio) | (audio) |

Original Audio Samples

This section presents the examples that were evaluated in the MOS test. Detailed information on the experimental setting can be found in Section 3 of our paper.

| Transcript | Ours (20min Paired Data) | Ours w/o codebook (20min Paired Data) | Speech Chain w/o text cycle (20min Paired Data) | Ours (10min Paired Data) | Supervised (23hr Paired Data) |
| --- | --- | --- | --- | --- | --- |
| “in being comparatively modern.” | (audio) | (audio) | (audio) | (audio) | (audio) |
| “including a few of major importance.” | (audio) | (audio) | (audio) | (audio) | (audio) |
| “Here once more the almost invariable modern practice is in opposition to a natural sense of proportion.” | (audio) | (audio) | (audio) | (audio) | (audio) |
| “It is, moreover, true that all living forms are but series of modifications and extensions of one single plan of structure.” | (audio) | (audio) | (audio) | (audio) | (audio) |
| “The miserable wretch during the whole of this time was struggling with the most determined violence, to the great horror of the spectators.” | (audio) | (audio) | (audio) | (audio) | (audio) |