chenpangpang / transformers / Commits / 75a63198

Commit 75a63198 (unverified), authored Jun 27, 2024 by Arthur and committed via GitHub on Jun 27, 2024.
Parent: 727eea4a

Fix post gemma merge (#31660)

* nit
* toctree issue
* protect gemma2 tests as well
* sdpa supported
Changes: 3 changed files with 10 additions and 5 deletions (+10, -5)

* docs/source/en/_toctree.yml (+2, -0)
* docs/source/en/perf_infer_gpu_one.md (+2, -0)
* tests/models/gemma2/test_modeling_gemma2.py (+6, -5)
docs/source/en/_toctree.yml (view file @ 75a63198)

@@ -382,6 +382,8 @@
       title: Fuyu
     - local: model_doc/gemma
       title: Gemma
+    - local: model_doc/gemma2
+      title: Gemma2
     - local: model_doc/openai-gpt
       title: GPT
     - local: model_doc/gpt_neo
docs/source/en/perf_infer_gpu_one.md (view file @ 75a63198)

@@ -43,6 +43,7 @@ FlashAttention-2 is currently supported for the following architectures:
 * [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
 * [DistilBert](https://huggingface.co/docs/transformers/model_doc/distilbert#transformers.DistilBertModel)
 * [Gemma](https://huggingface.co/docs/transformers/model_doc/gemma#transformers.GemmaModel)
+* [Gemma2](https://huggingface.co/docs/transformers/model_doc/gemma2#transformers.Gemma2Model)
 * [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
 * [GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode#transformers.GPTBigCodeModel)
 * [GPTNeo](https://huggingface.co/docs/transformers/model_doc/gpt_neo#transformers.GPTNeoModel)
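With this line added, Gemma2 appears in the FlashAttention-2 support list. As a minimal sketch of how that support would be used (not part of this commit; the checkpoint name, dtype, and device are assumptions for illustration):

# Minimal sketch: enabling FlashAttention-2 for Gemma2.
# Assumes the flash-attn package is installed and a CUDA GPU is available;
# the checkpoint "google/gemma-2-9b" is an assumed example, not from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # FlashAttention-2 needs fp16 or bf16
    attn_implementation="flash_attention_2",  # the option this docs list refers to
).to("cuda")

inputs = tokenizer("The Gemma 2 release", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))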
@@ -202,6 +203,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
 * [Dpr](https://huggingface.co/docs/transformers/model_doc/dpr#transformers.DprReader)
 * [Falcon](https://huggingface.co/docs/transformers/model_doc/falcon#transformers.FalconModel)
 * [Gemma](https://huggingface.co/docs/transformers/model_doc/gemma#transformers.GemmaModel)
+* [Gemma2](https://huggingface.co/docs/transformers/model_doc/gemma2#transformers.Gemma2Model)
 * [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
 * [GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode#transformers.GPTBigCodeModel)
 * [GPTNeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox#transformers.GPTNeoXModel)
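Unlike FlashAttention-2, the SDPA path needs no extra package. A minimal sketch of selecting it for Gemma2 (again, the checkpoint name is an assumption):

# Minimal sketch: selecting the PyTorch SDPA attention backend for Gemma2.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b",         # assumed example checkpoint
    attn_implementation="sdpa",  # uses torch.nn.functional.scaled_dot_product_attention
)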
tests/models/gemma2/test_modeling_gemma2.py (view file @ 75a63198)

@@ -41,11 +41,12 @@ if is_torch_available():
 class Gemma2ModelTester(GemmaModelTester):
-    config_class = Gemma2Config
-    model_class = Gemma2Model
-    for_causal_lm_class = Gemma2ForCausalLM
-    for_sequence_class = Gemma2ForSequenceClassification
-    for_token_class = Gemma2ForTokenClassification
+    if is_torch_available():
+        config_class = Gemma2Config
+        model_class = Gemma2Model
+        for_causal_lm_class = Gemma2ForCausalLM
+        for_sequence_class = Gemma2ForSequenceClassification
+        for_token_class = Gemma2ForTokenClassification

 @require_torch
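For context, a minimal sketch of the guard pattern this change applies (the tester and test class names here are hypothetical, not from the commit): test modules are collected even on torch-less environments, and class bodies run at import time, so torch-backed names must only be referenced behind is_torch_available() or inside a @require_torch-protected test.

# Minimal sketch of the is_torch_available() guard pattern (illustrative only).
from unittest import TestCase

from transformers import is_torch_available
from transformers.testing_utils import require_torch

if is_torch_available():
    # Guarded so importing this test module does not fail without torch.
    from transformers import Gemma2Config, Gemma2ForCausalLM, Gemma2Model


class MyModelTester:  # hypothetical helper class
    # Class-level attributes referencing torch-backed symbols also need the
    # guard, because the class body executes when the module is imported.
    if is_torch_available():
        config_class = Gemma2Config
        model_class = Gemma2Model
        for_causal_lm_class = Gemma2ForCausalLM


@require_torch  # skips the whole test case when torch is unavailable
class MyModelTest(TestCase):  # hypothetical test class
    def test_config_class(self):
        self.assertIs(MyModelTester.config_class, Gemma2Config)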