1. 19 Jul, 2024 1 commit
    • Improve the handling of quantized weights (#2250) · ba291dad
      Daniël de Kok authored
      * Improve the handling of quantized weights
      
      Handling of quantized weights was split between two mechanisms:
      
      - For quantized checkpoints, we used the new weight loader
        infrastructure.
      - For quantization while loading (EETQ, FP8, bitsandbytes) we
        instead relied on conditionals in `get_linear`.
      
      Weight loaders support context managers to selectively load
      particular layers with different weight loaders, which is useful
      for models like Idefics2 AWQ, where the text model is quantized
      but the vision and connector models are not. However, the context
      manager would be overridden by `get_linear`, which string-checks
      `quantizer`. Also, the context manager would not work with
      EETQ, FP8, and bitsandbytes.
      
      This change migrates all quantizers to the weight loader infrastructure.
      This has several benefits:
      
      - We can use context managers with all quantizers.
      - All the implementation details move down to the quantizer layers;
        `get_linear` does not need to know how to handle quantized linear
        layers.
      - All quantizer weights are strongly typed; we don't pass around
        raw tensors.
      - We don't have to pass around the `quantizer` string everywhere.
      
      * Exclude non-MLP layers when using FP8 quantization with Llama
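      
      To make the loader/context-manager idea concrete, here is a minimal
      sketch of the pattern described above; the class and method names
      (`DefaultWeightsLoader`, `AWQWeightsLoader`, `use_loader`,
      `get_weights_col`) are illustrative stand-ins, not the exact TGI API.
      
      ```python
      from contextlib import contextmanager
      
      class DefaultWeightsLoader:
          def get_weights_col(self, weights, prefix):
              # Unquantized path: just return the raw tensor.
              return weights.get_tensor(f"{prefix}.weight")
      
      class AWQWeightsLoader:
          def get_weights_col(self, weights, prefix):
              # Quantized path: return a strongly typed bundle instead of a
              # raw tensor; `get_linear` only ever sees the typed object.
              return {
                  "qweight": weights.get_tensor(f"{prefix}.qweight"),
                  "scales": weights.get_tensor(f"{prefix}.scales"),
              }
      
      class Weights:
          def __init__(self, tensors, loader):
              self._tensors = tensors
              self.loader = loader
      
          def get_tensor(self, name):
              return self._tensors[name]
      
          def get_weights_col(self, prefix):
              # All quantizer-specific handling is delegated to the loader.
              return self.loader.get_weights_col(self, prefix)
      
          @contextmanager
          def use_loader(self, loader):
              """Temporarily switch loaders, e.g. to load the unquantized
              vision and connector models of an AWQ-quantized Idefics2
              checkpoint."""
              previous, self.loader = self.loader, loader
              try:
                  yield
              finally:
                  self.loader = previous
      ```
      
      With every quantizer behind the same interface, such a context manager
      works uniformly for checkpoint-quantized formats and for
      quantize-on-load schemes such as EETQ, FP8, and bitsandbytes.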
      ba291dad
  2. 12 Jul, 2024 2 commits
  3. 11 Jul, 2024 1 commit
  4. 09 Jul, 2024 1 commit
    • Move quantized weight handling out of the `Weights` class (#2194) · 8511669c
      Daniël de Kok authored
      Quantized weights were loaded in the `Weights` class, but this was
      getting quite unwieldy: every higher-level method for loading weights
      was a long conditional covering all the different quantizers.
      
      This change moves loading of quantized weights out of the `Weights`
      class. This is done by defining a simple `WeightsLoader` interface
      that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
      and `MarlinWeightsLoader`. These implementations are in the quantizers'
      respective modules. The `Weights` class provides the low-level load
      operations (such as loading tensors or sharded tensors), but delegates
      loads that need quantizer-specific weight processing to a loader. The
      loaders still use the low-level functionality provided by `Weights`.
      
      I initially tried making a hierarchy where a class like `GPTQWeights`
      would inherit from `Weights`. But that approach is not very flexible
      (e.g. it does not work well with the new weight storage mock used in
      tests), and the implicit indirection made the code harder to follow.
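      
      As a rough illustration of the interface described above (not the
      exact signatures used in TGI, and the Marlin tensor names below are
      assumptions), a `WeightsLoader` can be sketched as a small abstract
      class whose quantizer-specific implementations build on the low-level
      loads that `Weights` still provides:
      
      ```python
      from abc import ABC, abstractmethod
      
      class WeightsLoader(ABC):
          """One implementation per quantizer (Exl2WeightsLoader,
          GPTQWeightsLoader, MarlinWeightsLoader); method name illustrative."""
      
          @abstractmethod
          def get_weights_col(self, weights, prefix: str):
              ...
      
      class MarlinWeightsLoader(WeightsLoader):
          def get_weights_col(self, weights, prefix):
              # Quantizer-specific processing lives in the quantizer's own
              # module, built on the low-level loads `Weights` still offers.
              return {
                  "B": weights.get_tensor(f"{prefix}.B"),
                  "s": weights.get_tensor(f"{prefix}.s"),
              }
      ```
      
      `Weights.get_weights_col(prefix)` would then simply delegate to
      `self.loader.get_weights_col(self, prefix)`; composition rather than
      inheritance also keeps the weight storage mock usable in tests.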
      8511669c
  5. 05 Jul, 2024 1 commit
  6. 02 Jul, 2024 2 commits
  7. 01 Jul, 2024 3 commits
    • [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) · 4327210e
      Nicolas Patry authored
      * Using flash decoding
      
      Conditional flashdecoding.
      
      Fix max_q.
      
      Working kvcache
      
      Working version with flash decoding.
      
      Make it work for mistral.
      
      Fix after rebase.
      
      Less intrusive.
      
      Revert changes in modeling.
      
      Speedup flashdecoding.
      
      Hack to make other models work.
      
      Fixing non flash decoding llama path.
      
      Router logic knows about page size.
      
      Missing 2 models.
      
      Missing cohere.
      
      Fixing cohere flash decoding.
      
      Revamped all this architecture.
      
      Fix cohere.
      
      Fixing falcon.
      
      Enabling custom block size schedule.
      
      Update router/src/infer.rs
      
      Not sending preallocated output.
      
      * Making it work on non flash decoding.
      
      * Fix Cohere.
      
      * Fix non decoding paths.
      
      * Rebased.
      
      * No need for cache_manager anymore.
      
      * Update?
      
      * "ipex" -> "cpu"
      
      * These do not belong.
      
      * Factoring cu_seqlen_qk for better abstracting over every model.
      
      * Fixing non flash tests/imports.
      
      * Changing return everywhere.
      
      * Update mistral past.
      
      * Fixing Mi{s,x}tral (non-functional in Flash Decoding mode though).
      
      * Fixup mistral clamping (had issues with cuda graphs).
      
      * No need to recreate anything actually.
      4327210e
    • refine get xpu free memory/enable Qwen2/gemma2/gemma/phi in intel platform (#2132) · 5da4cfab
      Wang, Yi authored
      
      
      * refine get xpu free memory
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      * enable qwen2 in xpu
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      * enable gemma/gemma2/phi in intel platform
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      ---------
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      5da4cfab
    • Use GPTQ-Marlin for supported GPTQ configurations (#2111) · 2ce80194
      Daniël de Kok authored
      GPTQ-Marlin is currently the best-performing kernel for GPTQ models. So
      let's use it by default if the kernels are installed, the GPU supports
      it, and the kernels support the configuration.
      
      For models generated by `text-generation-server quantize`, use
      `sym=False`. This subcommand has used asymmetric quantization since
      the beginning, and incorrectly reporting such a model as symmetric
      would make it use GPTQ-Marlin (which does not support asymmetric
      quantization).
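      
      Sketched below is the kind of decision rule this implies; the helper
      name is illustrative and the compute-capability threshold of 8.0
      (Ampere or newer) is an assumption, not a statement about the real
      TGI check.
      
      ```python
      def choose_gptq_loader(marlin_kernels_installed: bool,
                             compute_capability: tuple,
                             bits: int, groupsize: int, sym: bool) -> str:
          """Prefer GPTQ-Marlin when the kernels are present, the GPU
          supports them, and the checkpoint configuration is supported;
          otherwise fall back to the regular GPTQ path."""
          config_ok = (
              bits in (4, 8)
              and groupsize in (-1, 32, 64, 128)
              and sym  # GPTQ-Marlin does not support asymmetric quantization
          )
          if marlin_kernels_installed and compute_capability >= (8, 0) and config_ok:
              return "gptq-marlin"
          return "gptq"
      
      # A checkpoint produced by `text-generation-server quantize` now
      # reports sym=False, so it falls through to the plain GPTQ kernels:
      assert choose_gptq_loader(True, (9, 0), 4, 128, sym=False) == "gptq"
      ```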
      2ce80194
  8. 25 Jun, 2024 5 commits
    • Add support for Marlin 2:4 sparsity (#2102) · f1f98e36
      Daniël de Kok authored
      This change adds support for 2:4 sparsity when using Marlin
      quantization. The 2:4 kernel is used when:
      
      * the quantizer is `marlin`, and
      * the quantizer checkpoint format is `marlin_24`.
      
      Fixes #2098.
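      
      A minimal sketch of that dispatch rule (the return strings are
      placeholders, not the actual TGI layer types):
      
      ```python
      def select_marlin_kernel(quantize: str, checkpoint_format: str | None) -> str:
          if quantize == "marlin" and checkpoint_format == "marlin_24":
              return "Marlin 2:4 sparse kernel"
          if quantize == "marlin":
              return "dense Marlin kernel"
          raise ValueError(f"not a Marlin quantizer: {quantize!r}")
      
      assert select_marlin_kernel("marlin", "marlin_24") == "Marlin 2:4 sparse kernel"
      ```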
      f1f98e36
    • Support AWQ quantization with bias (#2117) · 14980df2
      Daniël de Kok authored
      When the AWQ quantizer was used with a layer that uses a bias,
      the bias tensor was not correctly passed/used. Instead, the
      value `true`/`1.0` was added to the linear transformation.
      
      Correctly pass through the bias when it is not `None`.
      
      Fixes #2106.
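      
      For illustration, a sketch of the corrected behaviour; the class and
      argument names are stand-ins for the real AWQ linear layer, not the
      actual implementation.
      
      ```python
      import torch
      
      class AWQLinear(torch.nn.Module):
          def __init__(self, qweight, qzeros, scales, bias):
              super().__init__()
              self.qweight, self.qzeros, self.scales = qweight, qzeros, scales
              # Before the fix a truthy placeholder ended up here and was
              # added as 1.0 to the output; now the actual bias tensor
              # (or None) is kept.
              self.bias = bias
      
      def get_awq_linear(weight, bias):
          # `bias` may be None; previously a bare True/1.0 leaked in here.
          return AWQLinear(weight["qweight"], weight["qzeros"], weight["scales"], bias)
      ```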
      14980df2
    • Enable multiple LoRa adapters (#2010) · 04e1af94
      drbh authored
      
      
      * feat: first draft load multiple lora
      
      * feat: load weights within layer and refactor lora pass
      
      * fix: refactor and reduce lora math
      
      * feat: baseline impl single request multi lora support
      
      * feat: prefer lorax implementation and port loading logic
      
      * fix: prefer adapter_data and refactors
      
      * feat: prefer lorax's custom punica kernels and add mlp loras
      
      * fix: adjust batch for bgmv
      
      * fix: adjust adapter_segments logic when in batch
      
      * fix: refactor and move changes to v3 proto
      
      * fix: pass model_id for all flash causal lms
      
      * fix: pass model_id for all causal and seq2seq lms
      
      * fix: add model_id to model test
      
      * feat: add lora support to mistral and refactors
      
      * feat: prefer model id in request
      
      * fix: include rust code for adapter id
      
      * feat: bump launcher and add new lora docs
      
      * feat: support base model generation and refactors
      
      * fix: rename doc to retry ci build
      
      * feat: support if vlm models
      
      * fix: add adapter_data param and avoid missing layers
      
      * fix: add adapter_data param to phi and neox
      
      * fix: update all models forwards to include adapter_data
      
      * fix: add model_id to IdeficsCausalLM
      
      * Update lora.md
      
      Fixed a typo
      
      * Update lora.md
      
      Fixing spam image
      
      * fix: add lora kernel to dockerfile, support running without kernels and refactors
      
      * fix: avoid dockerfile conflict
      
      * fix: refactors and adjust flash llama lora logic
      
      * fix: skip llama test due to CI issue (temp)
      
      * fix: skip llama test CI (temp) 2
      
      * fix: revert skips and prefer updated ci token for tests
      
      * fix: refactors and helpful comments
      
      * fix: add noop in TensorParallelAdapterRowLinear too
      
      * fix: refactor and move shard_lora_weights logic
      
      * fix: exit early if no adapter_data
      
      ---------
      Co-authored-by: Derek <datavistics@gmail.com>
      04e1af94
    • Removing IPEX_AVAIL. (#2115) · 9e2fdf57
      Nicolas Patry authored
      * Removing IPEX_AVAIL.
      
      Chose to unify CPU and XPU under `ipex`. Most of the code is identical
      except for a few spots.
      
      The biggest differences are the kv-cache layout and the flash_xxx.py
      files. Since those files should soon be removed and factored away, we
      should not need them.
      
      * Forgot a few places.
      
      * Unrelated change.
      
      * Fixing HF_TOKEN.
      
      * HF_TOKEN
      9e2fdf57
    • Cpu tgi (#1936) · b64c70c9
      Wang, Yi authored
      
      
      * add CPU tgi support
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      * ipex distributed ops support
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      ---------
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      Co-authored-by: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
      b64c70c9
  9. 21 Jun, 2024 1 commit
  10. 14 Jun, 2024 1 commit
    • Add support for GPTQ Marlin (#2052) · 093a27c5
      Daniël de Kok authored
      Add support for GPTQ Marlin kernels
      
      GPTQ Marlin extends the Marlin kernels to support common GPTQ
      configurations:
      
      - bits: 4 or 8
      - groupsize: -1, 32, 64, or 128
      - desc_act: true/false
      
      Using the GPTQ Marlin kernels requires repacking the parameters in the
      Marlin quantizer format.
      
      The kernels were contributed by Neural Magic to vLLM. We vendor them
      here for convenience.
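      
      The supported combinations listed above, written out as a small
      validation helper; this is illustrative only, the real check sits next
      to the vendored kernels and the actual repacking entry point is not
      shown.
      
      ```python
      from dataclasses import dataclass
      
      @dataclass
      class GPTQParams:
          bits: int        # 4 or 8
          groupsize: int   # -1, 32, 64, or 128
          desc_act: bool   # accepted either way
      
      def gptq_marlin_compatible(p: GPTQParams) -> bool:
          return p.bits in (4, 8) and p.groupsize in (-1, 32, 64, 128)
      
      # Loading such a checkpoint additionally repacks the GPTQ-packed
      # qweight into the Marlin tile layout once, at load time, so the
      # Marlin matmul kernel can be used for every forward pass afterwards.
      assert gptq_marlin_compatible(GPTQParams(bits=4, groupsize=128, desc_act=True))
      ```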
      093a27c5
  11. 12 Jun, 2024 1 commit
  12. 10 Jun, 2024 2 commits
    • Add Phi-3 medium support (#2039) · 85dfc392
      Daniël de Kok authored
      Add support for Phi-3-medium
      
      The main difference between the medium and mini models is that medium
      uses grouped query attention with a packed QKV matrix. This change adds
      support for GQA with packed matrices to `Weights.get_weights_col_packed`
      and uses it for Phi-3. This also allows us to remove the custom
      implementation of GQA from dbrx attention loading.
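      
      A sketch of what splitting a packed QKV matrix with grouped-query
      attention involves; the layout assumed here (q, then k, then v along
      the output dimension) and the helper name are assumptions for this
      example, not necessarily the exact behaviour of
      `Weights.get_weights_col_packed`.
      
      ```python
      import torch
      
      def split_packed_qkv(packed: torch.Tensor, num_heads: int,
                           num_kv_heads: int, head_dim: int):
          """Split a fused QKV projection into q, k and v when the number of
          key/value heads is smaller than the number of query heads (GQA)."""
          q_size = num_heads * head_dim
          kv_size = num_kv_heads * head_dim
          return torch.split(packed, [q_size, kv_size, kv_size], dim=0)
      
      # Phi-3-medium-like head counts (40 query heads, 10 kv heads,
      # head_dim 128); the input dimension is shrunk to keep the example small.
      packed = torch.empty((40 + 2 * 10) * 128, 8)
      q, k, v = split_packed_qkv(packed, num_heads=40, num_kv_heads=10, head_dim=128)
      assert q.shape[0] == 5120 and k.shape[0] == 1280 and v.shape[0] == 1280
      ```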
      85dfc392
    • ROCm and sliding windows fixes (#2033) · 9b3674d9
      fxmarty authored
      * update vllm commit & fix models using sliding window
      
      * update
      
      * update commit
      
      * fix bug where tunableop is bound to cuda graph even when cuda graphs are disabled
      
      * enable tunableop by default
      
      * fix sliding window
      
      * address review
      
      * dead code
      
      * precise comment
      
      * is it flaky?
      9b3674d9
  13. 06 Jun, 2024 1 commit
    • Add support for Marlin-quantized models · 4594e6fa
      Daniël de Kok authored
      This change adds support for Marlin-quantized models. Marlin is an
      FP16xINT4 matmul kernel, which provides good speedups when decoding
      batches of 16-32 tokens. It supports 4-bit quantized models with
      symmetric quantization and group size -1 or 128.
      
      Tested with:
      
      - Llama 2
      - Llama 3
      - Phi 3
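      
      The stated constraints, as a one-line check (illustrative only):
      
      ```python
      def marlin_supports(bits: int, groupsize: int, sym: bool) -> bool:
          # Original (dense) Marlin: 4-bit, symmetric, group size -1 or 128.
          return bits == 4 and groupsize in (-1, 128) and sym
      ```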
      4594e6fa
  14. 05 Jun, 2024 3 commits
  15. 03 Jun, 2024 2 commits
    • Hotfix GPTQ. · 9a59ebce
      Nicolas Patry authored
      9a59ebce
    • Fixing GPTQ imports. (#1994) · 9add5d0a
      Nicolas Patry authored
      9add5d0a
  16. 31 May, 2024 2 commits
    • Fixing exl2 scratch buffer. (#1990) · 5ab4cef6
      Nicolas Patry authored
      5ab4cef6
    • Purely refactors paged/attention into `layers/attention` and makes hardware... · 06edde94
      Nicolas Patry authored
      Purely refactors paged/attention into `layers/attention` and makes hardware differences more obvious with one file per hardware. (#1986)
      
      06edde94
  17. 30 May, 2024 1 commit
    • Add support for exl2 quantization · 36dd1601
      Daniël de Kok authored
      Mostly straightforward; changes to existing code:
      
      * Wrap quantizer parameters in a small wrapper to avoid passing
        around untyped tuples and needing to repack them as a dict.
      * Move scratch space computation to warmup, because we need the
        maximum input sequence length to avoid allocating huge
        scratch buffers that OOM.
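      
      Both points can be sketched roughly as follows; the field names, the
      sizing rule, and `ScratchSpace` are illustrative, not the actual exl2
      implementation.
      
      ```python
      from dataclasses import dataclass
      import torch
      
      @dataclass
      class Exl2Weight:
          """Typed wrapper around exl2 parameters instead of an untyped tuple."""
          q_weight: torch.Tensor
          q_scale: torch.Tensor
          q_groups: torch.Tensor
      
      class ScratchSpace:
          """Scratch buffer allocated at warmup, once the maximum input
          sequence length is known, rather than a worst-case allocation at
          load time."""
      
          def __init__(self):
              self.buffer = None
      
          def allocate(self, max_input_length: int, hidden_size: int,
                       dtype=torch.float16, device="cpu"):
              self.buffer = torch.empty(
                  (max_input_length, hidden_size), dtype=dtype, device=device
              )
      ```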
      36dd1601
  18. 23 May, 2024 1 commit
    • reenable xpu for tgi (#1939) · f41d644a
      Wang, Yi authored
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      f41d644a
  19. 22 May, 2024 1 commit
    • fix: use path inside of speculator config (#1935) · efb73fcb
      drbh authored
      This PR accesses the path on the speculator similarly to
      `MLPSpeculatorHead.load` and `MedusaHeadV1.load`.
      
      
      These changes resolve this error locally when loading a `MedusaHeadV2`:
      ```
      TypeError: expected str, bytes or os.PathLike object, not dict
      ```
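      
      A minimal sketch of the class of bug fixed here; the `"path"` key and
      the helper function are illustrative, not the exact TGI config layout.
      
      ```python
      import os
      
      def speculator_path(config: dict) -> str:
          speculator = config["speculator"]
          # Before the fix, the whole dict was handed to a path-expecting API,
          # producing the TypeError above; the fix reads the path stored inside.
          return os.fspath(speculator["path"])
      
      assert speculator_path({"speculator": {"path": "/repo/medusa"}}) == "/repo/medusa"
      ```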
      efb73fcb
  20. 17 May, 2024 1 commit
    • MI300 compatibility (#1764) · 232e8d52
      fxmarty authored
      Adds support for AMD Instinct MI300 in TGI.
      
      Most changes are:
      * Support PyTorch TunableOp to pick the GEMM/GEMV kernels for decoding
      https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/cuda/tunable.
      TunableOp is disabled by default, and can be enabled with
      `PYTORCH_TUNABLEOP_ENABLED=1`.
      * Update ROCm dockerfile to PyTorch 2.3 (actually patched with changes
      from https://github.com/pytorch/pytorch/pull/124362)
      * Support SILU & Linear custom kernels contributed by AMD
      * Update vLLM paged attention to https://github.com/fxmarty/rocm-vllm/,
      branching out of a much more recent commit
      https://github.com/ROCm/vllm/commit/3489ce7936c5de588916ae3047c44c23c0b0c308
      
      
      * Support FA2 Triton kernel as recommended by AMD. Can be used by
      specifying `ROCM_USE_FLASH_ATTN_V2_TRITON=1`.
      * Update dockerfile to ROCm 6.1
      
      By default, TunableOp tuning results are saved in `/data` (e.g.
      `/data/tunableop_meta-llama-Llama-2-70b-chat-hf_tp1_rank0.csv`) in
      order to avoid having to rerun the tuning on each `docker run`.
      
      Example:
      ```
      Validator,PT_VERSION,2.3.0
      Validator,ROCM_VERSION,6.1.0.0-82-5fabb4c
      Validator,HIPBLASLT_VERSION,0.7.0-1549b021
      Validator,GCN_ARCH_NAME,gfx942:sramecc+:xnack-
      Validator,ROCBLAS_VERSION,4.1.0-cefa4a9b-dirty
      GemmTunableOp_Half_TN,tn_8192_7_28672,Gemm_Rocblas_45475,0.132098
      GemmTunableOp_Half_TN,tn_10240_4_8192,Gemm_Rocblas_45546,0.0484431
      GemmTunableOp_Half_TN,tn_32000_6_8192,Default,0.149546
      GemmTunableOp_Half_TN,tn_32000_3_8192,Gemm_Rocblas_45520,0.147119
      GemmTunableOp_Half_TN,tn_8192_3_28672,Gemm_Rocblas_45475,0.132645
      GemmTunableOp_Half_TN,tn_10240_3_8192,Gemm_Rocblas_45546,0.0482971
      GemmTunableOp_Half_TN,tn_57344_5_8192,Gemm_Rocblas_45520,0.255694
      GemmTunableOp_Half_TN,tn_10240_7_8192,Gemm_Rocblas_45517,0.0482522
      GemmTunableOp_Half_TN,tn_8192_3_8192,Gemm_Rocblas_45546,0.0444671
      GemmTunableOp_Half_TN,tn_8192_5_8192,Gemm_Rocblas_45546,0.0445834
      GemmTunableOp_Half_TN,tn_57344_7_8192,Gemm_Rocblas_45520,0.25622
      GemmTunableOp_Half_TN,tn_8192_2_28672,Gemm_Rocblas_45475,0.132122
      GemmTunableOp_Half_TN,tn_8192_4_8192,Gemm_Rocblas_45517,0.0453191
      GemmTunableOp_Half_TN,tn_10240_5_8192,Gemm_Rocblas_45517,0.0482514
      GemmTunableOp_Half_TN,tn_8192_5_28672,Gemm_Rocblas_45542,0.133914
      GemmTunableOp_Half_TN,tn_8192_2_8192,Gemm_Rocblas_45517,0.0446516
      GemmTunableOp_Half_TN,tn_8192_1_28672,Gemm_Hipblaslt_TN_10814,0.131953
      GemmTunableOp_Half_TN,tn_10240_2_8192,Gemm_Rocblas_45546,0.0481043
      GemmTunableOp_Half_TN,tn_32000_4_8192,Gemm_Rocblas_45520,0.147497
      GemmTunableOp_Half_TN,tn_8192_6_28672,Gemm_Rocblas_45529,0.134895
      GemmTunableOp_Half_TN,tn_57344_2_8192,Gemm_Rocblas_45520,0.254716
      GemmTunableOp_Half_TN,tn_57344_4_8192,Gemm_Rocblas_45520,0.255731
      GemmTunableOp_Half_TN,tn_10240_6_8192,Gemm_Rocblas_45517,0.0484816
      GemmTunableOp_Half_TN,tn_57344_3_8192,Gemm_Rocblas_45520,0.254701
      GemmTunableOp_Half_TN,tn_8192_4_28672,Gemm_Rocblas_45475,0.132159
      GemmTunableOp_Half_TN,tn_32000_2_8192,Default,0.147524
      GemmTunableOp_Half_TN,tn_32000_5_8192,Default,0.147074
      GemmTunableOp_Half_TN,tn_8192_6_8192,Gemm_Rocblas_45546,0.0454045
      GemmTunableOp_Half_TN,tn_57344_6_8192,Gemm_Rocblas_45520,0.255582
      GemmTunableOp_Half_TN,tn_32000_7_8192,Default,0.146705
      GemmTunableOp_Half_TN,tn_8192_7_8192,Gemm_Rocblas_45546,0.0445489
      ```
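      
      The knobs mentioned above are plain environment variables; a hedged
      sketch of setting them from Python is below. In practice they are
      usually passed to `docker run -e ...`, and mounting a persistent
      volume at `/data` keeps the tuning results across runs.
      
      ```python
      import os
      
      # Both are disabled by default; in practice they are exported before
      # the server process starts, e.g. via `docker run -e ...`.
      os.environ.setdefault("PYTORCH_TUNABLEOP_ENABLED", "1")
      os.environ.setdefault("ROCM_USE_FLASH_ATTN_V2_TRITON", "1")
      ```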
      
      ---------
      Co-authored-by: Mohit Sharma <mohit21sharma.ms@gmail.com>
      232e8d52
  21. 16 May, 2024 1 commit
  22. 15 May, 2024 1 commit
  23. 14 May, 2024 1 commit
    • MLPSpeculator. (#1865) · e3d76564
      Nicolas Patry authored
      
      ---------
      Co-authored-by: Joshua Rosenkranz <joshua.rosenkranz@gmail.com>
      e3d76564
  24. 13 May, 2024 1 commit
    • Refactor layers. (#1866) · fd89d9df
      Nicolas Patry authored
      fd89d9df