News

2025-12-03
Request the SAP corpus via the official website (link) by submitting the DUA (link) and a one-page proposal to email.
Note: Approval typically takes ~2–4 weeks. Upon approval, you will immediately receive access to the SAP Research Release, which contains most of the same waveforms as the official competition release, but in a different data format. The official competition release will be made available on 2026-03-01 to teams approved before that date, or immediately upon approval thereafter.
2025-12-03
Team registration is now open through link.
2025-12-03
SAPC2 Challenge website launched!

Introduction

Welcome to the Speech Accessibility Project Challenge 2 (SAPC2).

SAPC2 builds on the success of the Interspeech 2025 Speech Accessibility Project Challenge (Challenge API), which demonstrated significant progress in dysarthric speech recognition — reducing Word Error Rate (WER) from the Whisper-large-v2 baseline of 17.82% to 8.11%. This new edition introduces a larger, more diverse, and etiology-balanced corpus, further promoting fairness, robustness, and inclusivity in impaired-speech ASR. The challenge invites the research community to push the state of the art, develop innovative modeling techniques, and set new standards for accessible speech technology.

Challenge Tracks

The challenge features two complementary tracks:

  1. Unconstrained ASR Track: Participants may use models of any size or architecture, aiming to advance the state of the art in dysarthric speech recognition.
  2. Efficiency-Constrained ASR Track: Submitted systems must meet strict limits on model size and inference time, promoting lightweight and deployable solutions for real-world use.
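
For teams preparing for the Efficiency-Constrained ASR Track, the sketch below shows one way to sanity-check a candidate system against size and latency budgets. The official limits and measurement protocol are not specified here; the thresholds, model interface, and helper names in this example are placeholders, not the challenge rules.

```python
# Hypothetical efficiency check -- thresholds below are placeholders,
# NOT the official SAPC2 constraints.
import time

import torch


def count_parameters(model: torch.nn.Module) -> int:
    """Total number of trainable parameters in the model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


def real_time_factor(model: torch.nn.Module, waveform: torch.Tensor,
                     sample_rate: int = 16000) -> float:
    """Wall-clock decoding time divided by audio duration (lower is faster)."""
    audio_seconds = waveform.shape[-1] / sample_rate
    start = time.perf_counter()
    with torch.no_grad():
        model(waveform)  # replace with your system's actual decode call
    elapsed = time.perf_counter() - start
    return elapsed / audio_seconds


# Placeholder budgets for illustration only.
MAX_PARAMS = 100_000_000  # e.g. 100M parameters
MAX_RTF = 1.0             # e.g. decode at least as fast as real time
```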

Evaluation Metrics

We evaluate system performance using transcripts normalized with a fully-formatted normalizer adapted from the HuggingFace ASR leaderboard. Two metrics are used to assess transcription accuracy:

Both metrics are clipped to 100% at the utterance level. Scores are computed against two references (one with and one without disfluencies), and the lower error is selected for each utterance.
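
The official normalizer and scoring scripts will be distributed with the challenge materials. As a rough illustration of the per-utterance scoring described above, the sketch below uses a simplified stand-in normalizer (not the official one) and the jiwer package to compute WER against both references, take the lower error, and clip it at 100%.

```python
# Minimal sketch of the per-utterance scoring logic, assuming WER as the metric.
# The normalizer here is a simplified stand-in for the official one.
import re

import jiwer  # pip install jiwer


def normalize(text: str) -> str:
    """Stand-in text normalizer: lowercase and strip punctuation."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)
    return " ".join(text.split())


def utterance_wer(hypothesis: str,
                  ref_with_disfluencies: str,
                  ref_without_disfluencies: str) -> float:
    """WER against both references; keep the lower error, clipped at 100%."""
    hyp = normalize(hypothesis)
    errors = [
        jiwer.wer(normalize(ref), hyp)
        for ref in (ref_with_disfluencies, ref_without_disfluencies)
    ]
    return min(1.0, min(errors))  # clip each utterance's error at 100%
```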

Prizes

To be announced soon!

Acknowledgements

The Speech Accessibility Project is funded by a grant from the AI Accessibility Coalition. Computational resources for the challenge are provided by the National Center for Supercomputing Applications.