
Feat: Add normalize option to CER and WER metrics for normalized score calculation #667


Open · wants to merge 2 commits into main

Conversation

@skyil7 commented Mar 24, 2025

This pull request adds a `normalize` option to the `compute()` function of both the CER and WER metrics. When set to `True`, the metrics calculate and return normalized scores.

This addresses the feature request raised in issue #161 back in 2022, which has remained unaddressed. With this change, users can compute CER and WER scores bounded between 0 and 100%, as requested in that issue.

The normalized CER is calculated as:

CER_normalized = (Insertions + Substitutions + Deletions) / (Insertions + Substitutions + Deletions + Correct Characters)

The normalized WER is calculated similarly, at the word level.
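As a minimal sketch of the formula above (illustrative only, not the PR's actual code; `edit_ops`, `normalized_cer`, and `normalized_wer` are hypothetical names), one can count insertions, substitutions, deletions, and correct tokens from a Levenshtein alignment and plug them into the definition:

```python
def edit_ops(ref, hyp):
    """Count (insertions, substitutions, deletions, hits) in a minimal
    Levenshtein alignment of the reference and hypothesis token lists."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = minimal edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match / substitution
    # Backtrack through the table to count each operation type.
    i, j = n, m
    ins = sub = dele = hits = 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and ref[i - 1] == hyp[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            hits += 1; i -= 1; j -= 1
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            sub += 1; i -= 1; j -= 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            ins += 1; j -= 1
        else:
            dele += 1; i -= 1
    return ins, sub, dele, hits

def normalized_cer(reference: str, hypothesis: str) -> float:
    """(I + S + D) / (I + S + D + C) at the character level."""
    ins, sub, dele, hits = edit_ops(list(reference), list(hypothesis))
    errors = ins + sub + dele
    return errors / (errors + hits) if (errors + hits) else 0.0

def normalized_wer(reference: str, hypothesis: str) -> float:
    """Same formula at the word level."""
    ins, sub, dele, hits = edit_ops(reference.split(), hypothesis.split())
    errors = ins + sub + dele
    return errors / (errors + hits) if (errors + hits) else 0.0
```

Unlike the standard WER, whose `(I + S + D) / N` ratio can exceed 1 when the hypothesis has many insertions, this denominator includes the insertions, so the score is always in `[0, 1]`.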

@skyil7 (Author) commented Aug 14, 2025

Hello @lhoestq,

I hope you're doing well.

I'm writing to gently follow up on this PR. It's a small, straightforward change that introduces normalized versions of WER and CER for ASR evaluation.

The goal is to provide a metric that is more robust to outliers, which can heavily skew the standard scores. Although the implementation is minimal, I believe it offers significant value to researchers.

Since the change is quite small, I hope it will be quick to review. Please let me know if you have any feedback.

Thank you!
