"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "656c27c3a3345d0d2cf31c16f780b573c3dea09a"
Unverified commit 121641ca, authored by Francisco Kurucz and committed by GitHub

Fix paths to AI Sweden Models reference and model loading (#28423)

Fix URLs to the AI Sweden Models reference and model loading
parent bc72b4e2
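
For context, loading a checkpoint under the corrected organization name looks like this. This is a minimal sketch based on the usage example in the diff below; it needs `transformers`, `torch`, and network access to the Hugging Face Hub:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The GPT-SW3 checkpoints live under the "AI-Sweden-Models" organization on
# the Hugging Face Hub; the old "AI-Sweden/..." paths in the docs were stale.
tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
model = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")

# Same Swedish prompt as the documentation example fixed in this commit.
input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0]))
```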
@@ -30,15 +30,15 @@ in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has
 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a
 causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
 
-This model was contributed by [AI Sweden](https://huggingface.co/AI-Sweden).
+This model was contributed by [AI Sweden Models](https://huggingface.co/AI-Sweden-Models).
 
 ## Usage example
 
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
 
->>> tokenizer = AutoTokenizer.from_pretrained("AI-Sweden/gpt-sw3-356m")
->>> model = AutoModelForCausalLM.from_pretrained("AI-Sweden/gpt-sw3-356m")
+>>> tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
+>>> model = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
 
 >>> input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]
...
@@ -21,20 +21,24 @@ VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
 
 PRETRAINED_VOCAB_FILES_MAP = {
     "vocab_file": {
-        "AI-Sweden/gpt-sw3-126m": "https://huggingface.co/AI-Sweden/gpt-sw3-126m/resolve/main/spiece.model",
-        "AI-Sweden/gpt-sw3-350m": "https://huggingface.co/AI-Sweden/gpt-sw3-350m/resolve/main/spiece.model",
-        "AI-Sweden/gpt-sw3-1.6b": "https://huggingface.co/AI-Sweden/gpt-sw3-1.6b/resolve/main/spiece.model",
-        "AI-Sweden/gpt-sw3-6.7b": "https://huggingface.co/AI-Sweden/gpt-sw3-6.7b/resolve/main/spiece.model",
-        "AI-Sweden/gpt-sw3-20b": "https://huggingface.co/AI-Sweden/gpt-sw3-20b/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-126m": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-356m": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-1.3b": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-6.7b": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-6.7b-v2": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-20b": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/resolve/main/spiece.model",
+        "AI-Sweden-Models/gpt-sw3-40b": "https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/resolve/main/spiece.model",
     }
 }
 
 PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
-    "AI-Sweden/gpt-sw3-126m": 2048,
-    "AI-Sweden/gpt-sw3-350m": 2048,
-    "AI-Sweden/gpt-sw3-1.6b": 2048,
-    "AI-Sweden/gpt-sw3-6.7b": 2048,
-    "AI-Sweden/gpt-sw3-20b": 2048,
+    "AI-Sweden-Models/gpt-sw3-126m": 2048,
+    "AI-Sweden-Models/gpt-sw3-356m": 2048,
+    "AI-Sweden-Models/gpt-sw3-1.3b": 2048,
+    "AI-Sweden-Models/gpt-sw3-6.7b": 2048,
+    "AI-Sweden-Models/gpt-sw3-6.7b-v2": 2048,
+    "AI-Sweden-Models/gpt-sw3-20b": 2048,
+    "AI-Sweden-Models/gpt-sw3-40b": 2048,
 }
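
To illustrate how a vocab-files map like the one edited above is consumed, here is a small hypothetical lookup helper. `vocab_url` is not part of the `transformers` API; the real resolution happens inside the library's hub utilities:

```python
# Hypothetical helper showing how a repo id is resolved to the URL of its
# SentencePiece vocab file via the map above (illustrative only).
PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "AI-Sweden-Models/gpt-sw3-126m": (
            "https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/resolve/main/spiece.model"
        ),
    }
}


def vocab_url(repo_id: str, file_key: str = "vocab_file") -> str:
    """Return the download URL registered for `repo_id`; raise KeyError if unknown."""
    return PRETRAINED_VOCAB_FILES_MAP[file_key][repo_id]


print(vocab_url("AI-Sweden-Models/gpt-sw3-126m"))
```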
@@ -49,7 +53,7 @@ class GPTSw3Tokenizer(PreTrainedTokenizer):
     ```python
     >>> from transformers import GPTSw3Tokenizer
 
-    >>> tokenizer = GPTSw3Tokenizer.from_pretrained("AI-Sweden/gpt-sw3-126m")
+    >>> tokenizer = GPTSw3Tokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-126m")
     >>> tokenizer("Svenska är kul!")["input_ids"]
     [1814, 377, 3617, 63504]
     ```
...
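
As a quick check that the corrected repo id resolves, the docstring example above can be run as a plain script. This is a sketch; the expected token ids come straight from the doctest in the diff, and running it requires downloading the tokenizer from the Hub:

```python
from transformers import GPTSw3Tokenizer

# Repo id and expected ids are taken from the docstring fixed in this commit.
tokenizer = GPTSw3Tokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-126m")
ids = tokenizer("Svenska är kul!")["input_ids"]
assert ids == [1814, 377, 3617, 63504]
print(tokenizer.decode(ids))  # should print the original text back
```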