"...git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "b2724d7b4ce07b615bfe6c7b6ce23df04249f9c3"
Unverified Commit 6f8e367a authored by Will Rice, committed by GitHub

Fix Padded Batch Error 12282 (#12487)

This fixes the padded batch [issue](https://github.com/huggingface/transformers/issues/12282). The error was caused by the attention mask's maximum sequence length not matching the padded sequence length of the hidden_states. `np.allclose` now passes with a 1e-2 absolute tolerance.

This change fixes #12282.
parent 7fae5350
@@ -1213,7 +1213,10 @@ class TFWav2Vec2MainLayer(tf.keras.layers.Layer):
         if inputs["attention_mask"] is not None:
             # compute real output lengths according to convolution formula
             output_lengths = self._get_feat_extract_output_lengths(tf.reduce_sum(inputs["attention_mask"], -1))
-            attention_mask = tf.sequence_mask(output_lengths, dtype=hidden_states.dtype)
+            attention_mask = tf.sequence_mask(
+                output_lengths, maxlen=shape_list(hidden_states)[1], dtype=hidden_states.dtype
+            )
 
         hidden_states = self.feature_projection(hidden_states, training=inputs["training"])
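For illustration only, here is a minimal sketch (not part of the commit) of the shape mismatch the added `maxlen` argument resolves; the lengths and padded size below are made-up values, not taken from the issue.

```python
import tensorflow as tf

# Hypothetical per-example valid lengths reported by the feature extractor
# for a padded batch, while the padded hidden_states have sequence length 10.
output_lengths = tf.constant([6, 8])
padded_seq_len = 10  # stands in for shape_list(hidden_states)[1]

# Without maxlen, the mask width defaults to max(output_lengths) == 8,
# which does not match the padded hidden_states length.
mask_too_short = tf.sequence_mask(output_lengths, dtype=tf.float32)
print(mask_too_short.shape)  # (2, 8)

# Passing maxlen pins the mask to the padded sequence length, as in the fix.
mask_fixed = tf.sequence_mask(output_lengths, maxlen=padded_seq_len, dtype=tf.float32)
print(mask_fixed.shape)  # (2, 10)
```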