wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

[Submitted on 20 Jun 2020 (v1), last revised 22 Oct 2020 (this version, v3)]

Abstract: We show for the first time that learning powerful representations from speech
audio alone followed by fine-tuning on transcribed speech can outperform the
best semi-supervised methods while being conceptually simpler. wav2vec 2.0
masks the speech input in the latent space and solves a contrastive task
defined over a quantization of the latent representations which are jointly
learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER
on the clean/other test sets. When lowering the amount of labeled data to one
hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour
subset while using 100 times less labeled data. Using just ten minutes of
labeled data and pre-training on 53k hours of unlabeled data still achieves
4.8/8.2 WER. This demonstrates the feasibility of speech recognition with
limited amounts of labeled data.
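The abstract describes the pre-training objective: mask spans of the latent speech representations and train the model to pick the true quantized latent for each masked step from among distractors via a contrastive (InfoNCE-style) loss over cosine similarities. The following numpy sketch illustrates that objective for a single masked time step; it is not the paper's implementation, and the temperature, distractor count, and function names here are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(context, positive, distractors, temperature=0.1):
    """InfoNCE-style contrastive loss for one masked time step.

    context:     context-network output c_t at a masked position
    positive:    the true quantized latent q_t for that position
    distractors: quantized latents sampled from other masked positions
    (temperature 0.1 is an illustrative choice, not the paper's value)
    """
    candidates = [positive] + list(distractors)
    logits = np.array([cosine_sim(context, q) / temperature for q in candidates])
    # Softmax over positive + distractors; loss is -log p(positive).
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[0]

rng = np.random.default_rng(0)
c = rng.normal(size=16)

# Easy case: the positive matches the context exactly, distractors are random.
loss_easy = contrastive_loss(c, positive=c,
                             distractors=[rng.normal(size=16) for _ in range(10)])

# Hard case: the positive is random while every distractor matches the context.
loss_hard = contrastive_loss(c, positive=rng.normal(size=16),
                             distractors=[c] * 10)
```

Minimizing this loss pushes the context representation toward the true quantized latent and away from the distractors; in the paper the quantizer is learned jointly and a diversity term encourages use of the full codebook.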

Submission history

From: Michael Auli

[v1]
Sat, 20 Jun 2020 02:35:02 UTC (301 KB)

[v2]
Tue, 22 Sep 2020 04:26:03 UTC (301 KB)

[v3]
Thu, 22 Oct 2020 06:09:10 UTC (301 KB)
