  1. 30 Jan, 2023 1 commit
  2. 26 Jan, 2023 2 commits
  3. 24 Jan, 2023 1 commit
  4. 23 Jan, 2023 3 commits
  5. 20 Jan, 2023 2 commits
  6. 17 Jan, 2023 2 commits
  7. 05 Jan, 2023 1 commit
  8. 03 Jan, 2023 2 commits
  9. 30 Dec, 2022 2 commits
    • fix(router): Include special tokens when tokenizing (#14) · 3efa5bbb
      Nick Hill authored
      There's currently a discrepancy in the tokenization between the router
      and the Python server code: the latter includes special tokens, but the
      former does not.
      
      This results in a token count mismatch for seq2seq models such as mt0,
      where the tokenizer appends an EOS token at the end of the input.
      
      This in turn produces unexpected or incorrect output, in particular when
      batch concatenation is involved, because the Python code uses the input
      length passed from the router for each row.
      
      As far as I can tell, it is better to include this token in the encoder
      `input_ids`, so it seems best to adjust the router side to match (see
      the tokenizer sketch after these commit entries).
    • fix(server): Check for device type correctly when determining initial padding (#16) · 686cc667
      Nick Hill authored
      As far as I know, there is no torch device type called "gpu"; CUDA
      devices report the type "cuda" (see the device-check sketch after these
      commit entries).
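      A minimal sketch of the special-token mismatch described in the
      fix(router) commit above, assuming a Hugging Face tokenizer;
      bigscience/mt0-small is used here only as an illustrative mt0
      checkpoint, not necessarily the model in question. On the router side,
      the fix presumably amounts to enabling the add-special-tokens flag when
      encoding, so that both sides report the same length.

          # Sketch only: show how add_special_tokens changes the token count
          # reported by a T5-style seq2seq tokenizer such as mt0's.
          from transformers import AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-small")

          text = "Translate to German: Hello, world!"

          with_special = tokenizer(text, add_special_tokens=True)["input_ids"]
          without_special = tokenizer(text, add_special_tokens=False)["input_ids"]

          # The tokenizer appends the EOS token </s> only when special tokens
          # are requested, so the two lengths differ by one. If the router
          # counts tokens without it while the server counts with it, the
          # lengths passed from the router no longer match the rows the
          # server builds, which is most visible when batches are concatenated.
          print(len(with_special), len(without_special))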
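      A minimal sketch of the device-type check referred to in the fix(server)
      commit above: torch device types are strings such as "cpu" and "cuda",
      so a comparison against "gpu" can never match on a CUDA device and any
      padding logic keyed on it would be skipped.

          # Sketch only: compare the broken and the corrected device checks.
          import torch

          device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

          # Broken check: device.type is never "gpu", so this is always False.
          on_gpu_wrong = device.type == "gpu"

          # Corrected check: CUDA devices report the type "cuda".
          on_gpu = device.type == "cuda"

          print(device.type, on_gpu_wrong, on_gpu)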
  10. 16 Dec, 2022 2 commits
  11. 15 Dec, 2022 1 commit
  12. 12 Dec, 2022 1 commit
  13. 08 Dec, 2022 2 commits
  14. 05 Dec, 2022 1 commit
  15. 01 Dec, 2022 1 commit
  16. 14 Nov, 2022 4 commits
  17. 09 Nov, 2022 1 commit
  18. 08 Nov, 2022 1 commit
  19. 07 Nov, 2022 1 commit
  20. 04 Nov, 2022 3 commits
  21. 03 Nov, 2022 1 commit
  22. 02 Nov, 2022 1 commit
  23. 28 Oct, 2022 1 commit
  24. 27 Oct, 2022 1 commit
  25. 22 Oct, 2022 2 commits