"git@developer.sourcefind.cn:sugon_wxj/megatron-lm.git" did not exist on "78a3dc323f9da3c4f02bcbcafc7d4b06d99ed26c"
Unverified commit 985bba90, authored by Chengxi Guo, committed by GitHub

fix doc bug (#8082)


Signed-off-by: mymusise <mymusise1@gmail.com>
parent 08f534d2
@@ -651,7 +651,7 @@ TF_TOKEN_CLASSIFICATION_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
     >>> input_ids = inputs["input_ids"]
@@ -669,7 +669,7 @@ TF_QUESTION_ANSWERING_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
     >>> input_dict = tokenizer(question, text, return_tensors='tf')
@@ -688,7 +688,7 @@ TF_SEQUENCE_CLASSIFICATION_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
     >>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1))  # Batch size 1
@@ -705,7 +705,7 @@ TF_MASKED_LM_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> inputs = tokenizer("The capital of France is {mask}.", return_tensors="tf")
     >>> inputs["labels"] = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
@@ -722,7 +722,7 @@ TF_BASE_MODEL_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
     >>> outputs = model(inputs)
@@ -737,7 +737,7 @@ TF_MULTIPLE_CHOICE_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
     >>> choice0 = "It is eaten with a fork and a knife."
@@ -758,7 +758,7 @@ TF_CAUSAL_LM_SAMPLE = r"""
     >>> import tensorflow as tf
     >>> tokenizer = {tokenizer_class}.from_pretrained('{checkpoint}')
-    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True))
+    >>> model = {model_class}.from_pretrained('{checkpoint}', return_dict=True)
     >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
     >>> outputs = model(inputs)
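Every hunk above makes the same one-character fix: dropping a stray closing parenthesis from the `from_pretrained` call in the doc samples. A minimal sketch of why the fix matters (the model and checkpoint names here are illustrative placeholders, not taken from the diff): the old line is not valid Python at all, so any doctest containing it would fail before the model ever loaded.

```python
# Stray ")" at the end, as in the pre-fix doc samples:
old = "model = AutoModel.from_pretrained('bert-base-uncased', return_dict=True))"
# Balanced parentheses, as after the fix:
new = "model = AutoModel.from_pretrained('bert-base-uncased', return_dict=True)"

def parses(src: str) -> bool:
    """Return True if `src` compiles as Python source, False on SyntaxError."""
    try:
        compile(src, "<sample>", "exec")
        return True
    except SyntaxError:
        return False

print(parses(old))  # False: unmatched ")" is a SyntaxError
print(parses(new))  # True: the fixed sample compiles
```

Compiling (rather than executing) is enough to demonstrate the bug, since the error is syntactic and would surface before any import or network access.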