30 Dec, 2022 2 commits
    • fix(router): Include special tokens when tokenizing (#14) · 3efa5bbb
      Nick Hill authored
      There's currently a discrepancy in the tokenization between the router
      and the Python server code: the latter includes special tokens but the
      former does not.
      
      This results in a token count mismatch for seq2seq models such as mt0
      where the tokenizer emits an EOS token at the end.
      
      This in turn results in some unexpected/incorrect output, in particular
      when batch concatenation is involved, because the python code uses the
      input length passed from the router for each row.
      
      As far as I can tell, it is better to include this token in the encoder
      `input_ids`, so I guess it's best to just adjust on the router side.
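
The mismatch is easy to reproduce with a Hugging Face tokenizer in Python. A minimal sketch, assuming the `bigscience/mt0-small` checkpoint as a stand-in for the mt0 family mentioned in the commit (the checkpoint name and the prompt are illustrative, not taken from the commit):

```python
from transformers import AutoTokenizer

# mt0 uses a T5-style tokenizer that appends an EOS (</s>) token
# when special tokens are enabled.
tok = AutoTokenizer.from_pretrained("bigscience/mt0-small")

text = "Translate to French: Hello, world!"
with_special = tok(text, add_special_tokens=True)["input_ids"]
without_special = tok(text, add_special_tokens=False)["input_ids"]

# The counts differ by one (the trailing EOS). If the router reports
# the shorter count while the Python server tokenizes with special
# tokens, each row's recorded input length is off by one.
print(len(with_special), len(without_special))
```

Counting the EOS on the router side keeps both components in agreement about the length of the encoder `input_ids`.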
    • fix(server): Check for device type correctly when determining initial padding (#16) · 686cc667
      Nick Hill authored
      AFAIK there is no torch device type called "gpu".
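
A minimal sketch of the corrected check, assuming PyTorch; the `pad_on_gpu` flag is a hypothetical stand-in for the server's padding decision, not the actual variable name:

```python
import torch

# torch.device exposes its kind via `.type`: "cpu", "cuda", "mps", ...
# There is no "gpu" type, so a comparison against "gpu" never matches
# and the GPU padding branch would silently be skipped.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical padding decision, for illustration only.
pad_on_gpu = device.type == "cuda"

print(device.type, pad_on_gpu)
```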