text-generation-inference / integration-tests / images / cow_beach.png
Pali gemma modeling (#1895) · 40213c95
drbh authored May 16, 2024

This PR adds PaliGemma modeling code.

Blog post: https://huggingface.co/blog/paligemma
Transformers PR: https://github.com/huggingface/transformers/pull/30814
Install the latest changes and run with:

```bash
# get the weights
# text-generation-server download-weights gv-hf/PaliGemma-base-224px-hf

# run TGI
text-generation-launcher --model-id gv-hf/PaliGemma-base-224px-hf
```
    
    
A basic example sending various requests:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:3000")

images = [
    "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/cow_beach_1.png",
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png",
]

prompts = [
    "What animal is in this image?",
    "Name three colors in this image.",
    "What are 10 colors in this image?",
    "Where is the cow standing?",
    "answer en Where is the cow standing?",
    "Is there a bird in the image?",
    "Is there a cow in the image?",
    "Is there a rabbit in the image?",
    "how many birds are in the image?",
    "how many rabbits are in the image?",
]

for img in images:
    print(f"\nImage: {img.split('/')[-1]}")
    for prompt in prompts:
        # TGI embeds the image in the prompt via markdown-image syntax
        inputs = f"![]({img}){prompt}\n"
        generated_output = client.text_generation(
            inputs, max_new_tokens=30, do_sample=False, stream=False
        )
        print(f"{prompt}\n{generated_output}")
```
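The `![](<url>)` prefix is how the example above inlines an image into a text prompt for TGI. As a minimal sketch of that convention (the helper name here is hypothetical, not part of TGI or huggingface_hub), the prompt construction can be factored out:

```python
def build_multimodal_prompt(image_url: str, question: str) -> str:
    # Hypothetical helper: inline the image with markdown-image
    # syntax, then append the text question, as in the example above.
    return f"![]({image_url}){question}\n"


prompt = build_multimodal_prompt(
    "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/cow_beach_1.png",
    "What animal is in this image?",
)
print(prompt)
```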
    
---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
cow_beach.png 65.7 KB
