Commit 80ea3419 authored by Caroline Chen, committed by Facebook GitHub Bot

Remove RNNTL unused vars (#2142)

Summary:
Remove unnecessary RNNT Loss variables and comments, as indicated in the review comments on https://github.com/pytorch/audio/issues/1479.
(Will follow up on the `workspace` comments separately, depending on complexity.)

Pull Request resolved: https://github.com/pytorch/audio/pull/2142

Reviewed By: mthrok

Differential Revision: D33433764

Pulled By: carolineechen

fbshipit-source-id: be0ecb77dabd63d733f0d33ff258eae32305eeaf
parent 0a072f9a
@@ -444,7 +444,6 @@ void ComputeAlphas(
         TensorView<CAST_DTYPE>({maxT, maxU}, alphas + b * maxT * maxU));
   }
   std::vector<CAST_DTYPE> scores(B << 1);
-  //#pragma omp parallel for
   for (int i = 0; i < B; ++i) { // use max 2 * B threads.
     ComputeAlphaOneSequence<DTYPE>(
@@ -481,9 +480,8 @@ void ComputeBetas(
         TensorView<CAST_DTYPE>({maxT, maxU}, betas + b * maxT * maxU));
   }
   std::vector<CAST_DTYPE> scores(B << 1);
-  //#pragma omp parallel for
-  for (int i = 0; i < B; ++i) { // use max 2 * B threads.
+  for (int i = 0; i < B; ++i) {
     ComputeBetaOneSequence<DTYPE>(
         options,
         /*logProbs=*/seqlogProbs[i],
......
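
For reference, the deleted `//#pragma omp parallel for` lines were a disabled hint for running the per-sequence loops above in parallel. Below is a minimal OpenMP sketch of that pattern, using hypothetical stand-in names (`ProcessBatch`, `ProcessOneSequence`) rather than the actual ComputeAlphaOneSequence / ComputeBetaOneSequence code; the only assumption is that each batch element can be processed independently.

// Sketch only: the loop shape mirrors the diff above; the body is placeholder work.
#include <vector>

void ProcessOneSequence(int i, std::vector<float>& scores) {
  scores[i] = static_cast<float>(i); // stand-in for per-sequence alpha/beta computation
}

void ProcessBatch(int B) {
  std::vector<float> scores(B);
#pragma omp parallel for
  for (int i = 0; i < B; ++i) { // iterations are independent, so they can run on separate threads
    ProcessOneSequence(i, scores);
  }
}

Compiled without -fopenmp the pragma is simply ignored and the loop runs serially, which matches the behavior of the torchaudio code both before and after this change, since the pragma there was commented out.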