Unverified Commit 21741e8c authored by Yih-Dar, committed by GitHub

Update `test_batched_inference_image_captioning_conditioned` (#23391)



* fix

* fix

* fix test + add more docs

---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
parent d765717c
@@ -25,6 +25,8 @@ Tips:

 Pix2Struct has been fine tuned on a variety of tasks and datasets, ranging from image captioning, visual question answering (VQA) over different inputs (books, charts, science diagrams), captioning UI components etc. The full list can be found in Table 1 of the paper.
 We therefore advise you to use these models for the tasks they have been fine tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine tuned on the natural images captioning dataset and so on.
+If you want to use the model to perform conditional text captioning, make sure to use the processor with `add_special_tokens=False`.
 This model was contributed by [ybelkada](https://huggingface.co/ybelkada).
 The original code can be found [here](https://github.com/google-research/pix2struct).
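As a rough illustration of the documentation tip added above, here is a minimal conditional captioning sketch. The checkpoint name and image URL are assumptions for the example and are not part of this commit; the processor and generation calls mirror those used in the test below.

```python
import requests
from PIL import Image

from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Assumed image-captioning checkpoint and example image; substitute your own.
checkpoint = "google/pix2struct-textcaps-base"
processor = Pix2StructProcessor.from_pretrained(checkpoint)
model = Pix2StructForConditionalGeneration.from_pretrained(checkpoint)

url = "https://www.ilankelman.org/stopsigns/australia.jpg"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

# For conditional text captioning, pass the text prefix with add_special_tokens=False
# so the prompt is treated as the start of the caption to be continued.
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)

predictions = model.generate(**inputs)
print(processor.decode(predictions[0], skip_special_tokens=True))
```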
@@ -749,17 +749,20 @@ class Pix2StructIntegrationTest(unittest.TestCase):
         texts = ["A picture of", "An photography of"]

         # image only
-        inputs = processor(images=[image_1, image_2], text=texts, return_tensors="pt").to(torch_device)
+        inputs = processor(images=[image_1, image_2], text=texts, return_tensors="pt", add_special_tokens=False).to(
+            torch_device
+        )

         predictions = model.generate(**inputs)

         self.assertEqual(
-            processor.decode(predictions[0], skip_special_tokens=True), "A picture of a stop sign that says yes."
+            processor.decode(predictions[0], skip_special_tokens=True),
+            "A picture of a stop sign with a red stop sign on it.",
         )

         self.assertEqual(
             processor.decode(predictions[1], skip_special_tokens=True),
-            "An photography of the Temple Bar and a few other places.",
+            "An photography of the Temple Bar and the Temple Bar.",
         )

     def test_vqa_model(self):