Unverified commit 0512bfe7, authored by Ambesh Shekhar, committed by GitHub

Custom errors and BatchSizeError (#13184)

* Adding custom errors and BatchSizeError for GPT2

* Adding custom errors and BatchSizeError for GPT2

* Changing Exception to BaseException

* Exception

* Adding args to Custom Exception

* Adding args to Custom Exception

* Changing from BaseException to Exception

* Changing Conditional loop syntax

* Adding Copyright info

* Handling check_code_quality

* Handling check_code_quality pt2

* Handling check_code_quality pt3

* Handling check_code_quality pt4

* Handling check_code_quality pt5

* Handling check_code_quality pt6

* Handling check_code_quality pt6

* Using black for check_code_quality

* sorting import style

* Changing

* Changing

* verified through style_doc.py

* verified through style_doc.py

* applying isort

* Removing indentation

* Changing

* Changing

* Changing

* Used ValueError

* Using ValueError

* Reformatted Style doc

* Using style doc on modeling_gpt2.py

* Adding indentation

* Changing
parent cf574476
@@ -695,7 +695,8 @@ class GPT2Model(GPT2PreTrainedModel):
         # GPT2Attention mask.
         if attention_mask is not None:
-            assert batch_size > 0, "batch_size has to be defined and > 0"
+            if batch_size <= 0:
+                raise ValueError("batch_size has to be defined and > 0")
             attention_mask = attention_mask.view(batch_size, -1)
             # We create a 3D attention mask from a 2D tensor mask.
             # Sizes are [batch_size, 1, 1, to_seq_length]
...
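For context (not part of the commit itself), here is a minimal sketch of why the switch from `assert` to an explicit `ValueError` matters: assert statements are stripped when Python runs with the `-O` flag, while an explicit check always raises. The `check_batch_size` helper below is hypothetical and only illustrates the pattern used in the diff above.

```python
def check_batch_size(batch_size: int) -> None:
    # Hypothetical helper mirroring the check added in GPT2Model.forward:
    # an explicit ValueError survives `python -O`, whereas `assert` does not.
    if batch_size <= 0:
        raise ValueError("batch_size has to be defined and > 0")


# Usage sketch: an invalid batch size now raises ValueError, not AssertionError.
try:
    check_batch_size(0)
except ValueError as err:
    print(err)  # -> batch_size has to be defined and > 0
```

Callers that previously caught `AssertionError` around this path would need to catch `ValueError` instead, which is the more conventional exception type for invalid argument values.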