1. 23 Mar, 2023 2 commits
  2. 22 Mar, 2023 7 commits
    • Fix PipelineTests skip conditions (#22320) · 8b05ace0
      Yih-Dar authored
      
      
      * check what tests fail
      
      * Skip failing tests
      
      * Skip failing tests
      
      * Skip failing tests
      
      * Skip failing tests
      
      * clean up
      
      * clean up
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      8b05ace0
    • Chunkable token classification pipeline (#21771) · d62e7d88
      Luc CAILLIAU authored
      
      
      * Chunkable classification pipeline 
      
      The TokenClassificationPipeline can now process sequences longer than 512 tokens, regardless of the framework, model, or tokenizer: pass process_all=True and, optionally, a stride. Behavior is unchanged when these optional parameters are omitted. For parts that overlap when stride > 0, we keep only the maximum score for each overlapped token across all chunks containing it.
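      The max-score rule described above can be sketched in isolation (a hypothetical illustration using a simplified `(token_index, score, label)` representation, not the pipeline's actual internals):

      ```python
      # Hypothetical sketch of the chunk-merging rule: for every token that
      # appears in several overlapping chunks, keep the prediction with the
      # highest score. Names and data layout are illustrative only.

      def aggregate_overlapping_chunks(chunks):
          """Merge per-chunk (token_index, score, label) triples, keeping for
          each token the prediction with the highest score across chunks."""
          best = {}
          for chunk in chunks:
              for token_index, score, label in chunk:
                  if token_index not in best or score > best[token_index][0]:
                      best[token_index] = (score, label)
          # Return predictions ordered by position in the original sequence.
          return [(i, s, l) for i, (s, l) in sorted(best.items())]

      # Two chunks produced with stride > 0: tokens 2 and 3 overlap.
      chunk_a = [(0, 0.99, "O"), (1, 0.80, "B-PER"), (2, 0.55, "I-PER"), (3, 0.40, "O")]
      chunk_b = [(2, 0.70, "I-PER"), (3, 0.90, "O"), (4, 0.95, "B-LOC")]
      merged = aggregate_overlapping_chunks([chunk_a, chunk_b])
      ```

      Here the overlapped tokens 2 and 3 each take the higher-scoring prediction from whichever chunk scored them more confidently.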
      
      * Update token_classification.py (×12)
      
      * update with latest black format
      
      * update black format
      
      * Update token_classification.py
      
      * Update token_classification.py
      
      * format correction
      
      * Update token_classification.py (×4)
      
      * Update comments
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      
      * Update token_classification.py
      
      Correct spaces, remove process_all and keep only stride. If stride is provided, the pipeline is applied to the whole text.
      
      * Update token_classification.py (×8)
      
      * Update chunk aggregation
      
      Update the chunk aggregation strategy based on entities aggregation.
      
      * Update token_classification.py (×11)
      
      * Update token_classification.py
      
      Remove unnecessary pop from outputs dict
      
      * Update token_classification.py (×7)
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * add chunking tests
      
      * correct formating
      
      * correct formatting
      
      * correct model id for test chunking
      
      * update scores with nested simplify
      
      * Update test_pipelines_token_classification.py
      
      * Update test_pipelines_token_classification.py
      
      * update model to a tiny one
      
      * Update test_pipelines_token_classification.py
      
      * Adding smaller test for chunking.
      
      * Fixup
      
      * Update token_classification.py
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/pipelines/token_classification.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      ---------
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      d62e7d88
    • Add Pix2Struct (#21400) · 0f68a7f4
      Younes Belkada authored
      
      
      * v1 all keys match
      
      * clean up
      
      * forward pass ok
      
      * add correct image transform
      
      * generate works, logits matching
      
      * clean up
      
      * more refactor
      
      * revert
      
      * revert
      
      * clean up
      
      * clean ups
      
      * clean up
      
      * refactor
      
      * refactor
      
      * fix doc
      
      * fix tokenizer test
      
      * fix toctree
      
      * revert toctree
      
      * oops
      
      * few fixes
      
      * replace to `pixel_embeds`
      
      * make fixup
      
      * test processing & feat extractor
      
      * fix some tests
      
      * more fixes
      
      * make fixup
      
      * clean up
      
      * more clean up
      
      * add a single slow test
      
      * fix test
      
      * make fixup
      
      * fix
      
      * fix authors
      
      * fix toctree
      
      * update docs
      
      * add docstring
      
      * revert change
      
      * Update src/transformers/models/pix2struct/__init__.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix tokenizer
      
      * fix processor test
      
      * fix test
      
      * make fixup
      
      * refactor
      
      * fix config
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * format
      
      * fix
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * make fixup
      
      * add docstring
      
      * fix issues
      
      * fix
      
      * fix
      
      * fix
      
      * add slow test
      
      * fix
      
      * fix
      
      * fix batched issue
      
      * fix training issues
      
      * fix ci test
      
      * fix slow test
      
      * fix conversion script
      
      * remove unneeded classes
      
      * fix slow test
      
      * fix require backends
      
      * fix masked fill
      
      * revert
      
      * fix softmax
      
      * add large models support
      
      * fix conditional generation
      
      * few fixes
      
      * add instructions
      
      * rm unneeded file
      
      * Update src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py
      
      * fix ci test
      
      * fix ci test really
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * fix nit
      
      * fix nits
      
      * fix image processors nits
      
      * docstring
      
      * clean up
      
      * fix nit
      
      * fix tests
      
      * docstring nit
      
      * fix reshape
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fix nit
      
      * fix repetition
      
      * refactor processor
      
      * make patch size consistent
      
      * refactor forward
      
      * fix docstring
      
      * fix max_patches issue
      
      * update docstring
      
      * update docstring
      
      * fix copied from
      
      * add skip reasons
      
      * few fixes
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * format
      
      * fix doctests
      
      * refactor and fix
      
      * fix doc build issue
      
      * fix processor test
      
      * small fix conversion script
      
      * replace correct weights
      
      * make fixup
      
      * fix some issues
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * revert config and fixes
      
      * Update src/transformers/models/pix2struct/image_processing_pix2struct.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * more details
      
      * fixes
      
      * fix processor
      
      * fix processor test
      
      * fix
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * fix processor
      
      * Update src/transformers/models/pix2struct/modeling_pix2struct.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * add copied
      
      * make fixup
      
      * fix copies
      
      * update docstring
      
      * refactor
      
      * fix docstring
      
      * fix conversion script
      
      * fix vqa issue
      
      * replace to `flattened_patches`
      
      * nit
      
      * fix numpy issue
      
      * fix image processors
      
      * add batched vqa support
      
      * fix vqa conversion
      
      * make fixup
      
      * fix conversion script
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * add correct docstring
      
      * update docstring
      
      * fix module level + channel dim
      
      * use `make_list_of_images`
      
      * refactor
      
      * correct docstring
      
      * fix authors
      
      * remove `data_format`
      
      * add header text test
      
      * Apply suggestions from code review
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * make fixup
      
      * add checkpoints
      
      ---------
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      0f68a7f4
    • Beef up Llama tests (#22314) · fd3eb3e3
      Joao Gante authored
      * tmp commit
      
      * beef up llama tests
      fd3eb3e3
    • Generate: Export TF generate with a TF tokenizer (#22310) · 12febc20
      Joao Gante authored
      * Export TF generate with a TF tokenizer
      
      * remove unused lines
      12febc20
    • Fixed bug to calculate correct xpath_sub_list in MarkupLMTokenizer (#22302) · 48bef3a7
      silentghoul-spec authored
      
      
      Fixed a bug in MarkupLMTokenizer so that xpath_sub_list is calculated correctly; previously xpath_sub_list was identical to xpath_tags_list.
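      The distinction can be illustrated with a small sketch (a hypothetical helper, not MarkupLM's actual implementation): each xpath unit contributes a tag name to the tags list and a numeric subscript to the subs list, whereas the bug made the second list a copy of the first.

      ```python
      # Hypothetical illustration: split an xpath like "/html/body/div[2]" into
      # a tag list and a parallel subscript list. Names are illustrative only.

      def split_xpath(xpath):
          tags, subs = [], []
          for unit in xpath.split("/"):
              if not unit:
                  continue  # skip empty piece before the leading "/"
              if "[" in unit:
                  tag, _, rest = unit.partition("[")
                  tags.append(tag)
                  subs.append(int(rest.rstrip("]")))
              else:
                  tags.append(unit)
                  subs.append(0)  # default subscript when none is given
          return tags, subs

      tags, subs = split_xpath("/html/body/div[2]/span[5]")
      ```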
      Co-authored-by: dusejat <dusejat@amazon.com>
      48bef3a7
    • Add MaskedImageModelingOutput (#22212) · 0558914d
      Alara Dirik authored
      * Add MaskedImageModelingOutput
      0558914d
  3. 21 Mar, 2023 2 commits
  4. 17 Mar, 2023 1 commit
  5. 16 Mar, 2023 3 commits
    • 🔥py38 + torch 2 🔥🔥🔥🚀 (#22204) · 5110e574
      Yih-Dar authored
      
      
      * py38 + torch 2
      
      * increment cache versions
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      5110e574
    • LLaMA Implementation (#21955) · 0041be5b
      Jason Phang authored
      
      
      * LLaMA
      
      * sharding and docs
      
      * tweak
      
      * black
      
      * inits
      
      * ruff
      
      * LLAMA_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * init
      
      * no checkpoint
      
      * docs
      
      * ruff
      
      * type_vocab_size
      
      * tokenizer fixes
      
      * tokenizer fixes
      
      * Update tokenization_llama.py
      
      * Update tokenization_llama.py
      
      * Update configuration_llama.py
      
      * Update modeling_llama.py
      
      * tokenizer add_bos by default
      
      * licenses
      
      * remove decoder
      
      * norms and mlp
      
      * rope overhaul
      
      * tweaks
      
      * black
      
      * mention OPT implementation
      
      * off-by-one naming
      
      * typo
      
      * fix
      
      * tokenization fix and slicing bug
      
      * padding config
      
      * cleanup
      
      * black
      
      * update tests
      
      * undo typo
      
      * fix vocab caching logic
      
      * ruff
      
      * docbuilder
      
      * attn fix from BlackSamorez
      
      * initial feedback
      
      * typo
      
      * docs
      
      * llama case
      
      * llama case
      
      * load checkpoint docs
      
      * comment about tokenizer
      
      * tokenizer defaults
      
      * clear past_key_values if use_cache=False
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      * last tweaks
      
      ---------
      Co-authored-by: Stella Biderman <stellabiderman@gmail.com>
      0041be5b
    • Yih-Dar · 52a57f7c
  6. 15 Mar, 2023 3 commits
  7. 14 Mar, 2023 4 commits
  8. 13 Mar, 2023 4 commits
  9. 10 Mar, 2023 3 commits
  10. 09 Mar, 2023 3 commits
  11. 08 Mar, 2023 3 commits
  12. 07 Mar, 2023 5 commits
    • Update tiny model creation script and some other files (#22006) · b338414e
      Yih-Dar authored
      
      
      * Update 1
      
      * Update 2
      
      * Update 3
      
      * Update 4
      
      * Update 5
      
      * Update 6
      
      * Update 7
      
      * Update 8
      
      * Update 9
      
      * Update 10
      
      ---------
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      b338414e
    • [Time-Series] informer model (#21099) · 8abe4930
      Eli Simhayev authored
      * added informer to gitignore
      
      * WIP informer2020
      
      * added checking that instantiate works
      
      * added config using gluonTS by kashif
      
      * WIP config
      
      * adding InformerConfig. need to remove FeatureEmbedder
      
      * done InformerConfig, but need to change the names
      
      * Done informer model init. working on enc-dec
      
      * added things to address, after reading again enc-dec in the paper
      
      * done modeling - checking initialization work
      
      * moved enc-dec init to InformerEncoder/Decoder init
      
      * added 'init_std' to config, now model init works!
      
      * WIP conversion script, and added code sources
      
      * WIP conversion script: loading original informer pth works
      
      * WIP conversion script: change defaults in the config
      
      * WIP conversion script: supporting Informer input embedding
      
      * WIP conversion script: added parameters for the informer embed
      
      * WIP conversion script: change dim_feedforward=2048
      
      * WIP conversion script: remove unused args for loading checkpoint
      
      * just cleaning up
      
      * DataEmbedding removed, after thinking with Kashif
      
      * working on forward pass
      
      * WIP forward pass: trying to establish working batch for forward pass
      
      * cleaning and finalizing
      
      * adding HF names and docs
      
      * init after cleaning works
      
      * WIP in tests
      
      * added docs for the informer specific args
      
      * fix style
      
      * undo change
      
      * cleaning informer, now need to work only enc-dec
      
      * initial enc-dec classes
      
      * added encoder and decoder
      
      * added todo
      
      * add todos for conv_layers
      
      * added decoder docs from vanilla
      
      * added encoder docs from vanilla
      
      * remove encoder decoder from the original informer
      
      * removed AttentionLayer from the original paper
      
      * removed TriangularCausalMask, same as decoder_attention_mask
      
      * initial sparse attention
      
      * use conv_layers
      
      * fixed test_config test
      
      * fix parenthesis when iterating zip(layers, conv_layers)
      
      * error found in prob attention, added sizes as comments
      
      * fix sizes
      
      * added proposal for q_reduce indexing, and remove unused
      
      * WIP ProbMask, and changed factor=2 for testing
      
      * remove unused libs for this PR for creating the env
      
      * fix checking the attn_weights.size() after bmm
      
      * Q_reduce: changed from torch.gather to simple slicing
      
      * WIP calculate final attn_output
      
      * finish adding v_aggregated, attn_output ready
      
      * changed tgt_len to u in attention_mask, need to fix the size error
      
      * comment attention_mask for encoder, and fix if cond for v_agg
      
      * added ProbMask support (wip), removed old original code
      
      * finished ProbMask 😃
      
      
      
      * Revert "remove unused libs for this PR for creating the env"
      
      This reverts commit 11a081e09e92771e51a5d2758d53a9afb59547f0.
      
      * fixes
      
      * make style
      
      * fix initial tests
      
      * fix more tests
      
      * dry
      
      * make style
      
      * remove unused files
      
      * style
      
      * added integration tests
      
      * fix num_static_real_features
      
      * fix header
      
      * remove unused function
      
      * fix example
      
      * fix docs
      
      * Update src/transformers/models/informer/configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/modeling_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/informer/configuration_informer.py (×4)
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixes for reviewer
      
      * use prediction_length from model
      
      * fix style
      
      * fixed informer.mdx
      
      * added to index
      
      * updated readme
      
      * undo
      
      * make fix-copies
      
      * typo
      
      * fix copy
      
      * added Informer to toctree
      
      * in order
      
      * fixed comments
      
      * remove unneeded new lines in docs
      
      * make static real and cat optional
      
      * fix use of distil conv layers
      
      * fixed integration test
      
      * added checkpoint for convlayer
      
      * make fix-copies
      
      * updated from time series model
      
      * make fix-copies
      
      * copy decoder
      
      * fix unit tests
      
      * updated scaling config
      
      * fix integration tests
      
      * IGNORE_NON_TESTED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * IGNORE_NON_AUTO_CONFIGURED
      
      * updated check configs
      
      * fix formatting
      
      * undo change from time series
      
      * prediction_length should not be None
      
      * align with the blog: prettify ProbSparse and change attention_factor to sampling_factor
      
      * make style
      
      * make fix-copies
      
      * niels CR: update contributed by
      
      * niels CR: update configuration_informer.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: update kashif -> huggingface
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * niels CR: `sampling_factor` only relevant when `attention_type`=prob
      
      * make style
      
      * fixed U_part: added multiplication by `L_Q`
      
      * fixed bug: remove `is not None` from `if config.distil`
      
      * fixed test: `decoder_seq_length` to `encoder_seq_length` in cross_attentions check
      
      * fix integration tests
      
      * updated model hub
      
      * do not shift as in training
      
      * undo
      
      * fix make-copies
      
      * make fix-copies
      
      * added `if prediction_length is None`
      
      * changed `ProbSparseAttention` to `InformerProbSparseAttention`
      
      * changed `V_sum` -> `v_mean_dim_time`
      
      * changed `ConvLayer` to `InformerConvLayer` and fixed `super()`
      
      * TimeSeriesTransformer->Informer in decoder's Copied from
      
      * more descriptive in ProbSparse
      
      * make style
      
      * fix copied from
      
      * Revert "added `if prediction_length is None`"
      
      This reverts commit b4cbddfa05e3bd739b79569cd3c3b89e316f2451.
      
      * fixed indent
      
      * use InformerSinusoidalPositionalEmbedding
      
      * make fix-style
      
      * fix from #21860
      
      * fix name
      
      * make fix-copies
      
      * use time series utils
      
      * fix dec num_heads
      
      * docstring
      
      * added time series util doc
      
      * _import_structure
      
      * formatting
      
      * changes from review
      
      * make style
      
      * fix docs
      
      * fix doc
      
      * removed NegativeLogLikelihood
      
      ---------
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      8abe4930
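      The ProbSparse attention these commits refer to scores each query by the sparsity measurement M(q, K) = max(qKᵀ/√d) − mean(qKᵀ/√d) and keeps only the top u = sampling_factor · ⌈ln L_Q⌉ queries. A rough numpy sketch of that selection step (illustrative names and shapes, not the actual modeling code; the real layer also subsamples keys when computing M):

      ```python
      import numpy as np

      def probsparse_select(queries, keys, sampling_factor=5):
          """Return the indices of the 'active' queries under ProbSparse-style
          selection. queries: (L_Q, d), keys: (L_K, d)."""
          L_Q, d = queries.shape
          scores = queries @ keys.T / np.sqrt(d)      # (L_Q, L_K)
          # Sparsity measurement M(q, K): max score minus mean score per query.
          M = scores.max(axis=-1) - scores.mean(axis=-1)
          u = min(L_Q, int(sampling_factor * np.ceil(np.log(L_Q))))
          top = np.argsort(M)[-u:]                    # top-u "active" queries
          return top, M

      rng = np.random.default_rng(0)
      top, M = probsparse_select(rng.normal(size=(64, 8)), rng.normal(size=(64, 8)))
      ```

      With L_Q = 64 and sampling_factor = 5, only 5 · ⌈ln 64⌉ = 25 of the 64 queries attend densely; the rest can fall back to a mean of values, which is where the memory savings come from.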
    • [DETR and friends] Remove is_timm_available (#21814) · dde718e7
      NielsRogge authored
      
      
      * First draft
      
      * Fix to_dict
      
      * Improve conversion script
      
      * Update config
      
      * Remove timm dependency
      
      * Fix dummies
      
      * Fix typo, add integration test
      
      * Upload 101 model as well
      
      * Remove timm dummies
      
      * Fix style
      
      ---------
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      dde718e7
    • [TF] Fix creating a PR while pushing in TF framework (#21968) · 2156662d
      Arthur authored
      * add create pr arg
      
      * style
      
      * add test
      
      * fixup
      
      * update test
      
      * last nit fix typo
      
      * add `is_pt_tf_cross_test` marker for the tests
      2156662d
    • [Whisper] Add model for audio classification (#21754) · 7c393181
      Sanchit Gandhi authored
      * [Whisper] Add model for audio classification
      
      * make fix-copies
      
      * add to docs
      
      * add docstring
      
      * empty returns
      
      * add code example
      
      * switch to fleurs
      
      * stick everything on one line
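      Schematically, the Whisper audio-classification head added here projects the encoder's hidden states, mean-pools over time, and applies a classifier. A numpy sketch with assumed shapes and random weights (not the actual module):

      ```python
      import numpy as np

      def classify_audio(encoder_states, w_proj, w_cls):
          """Toy stand-in for an audio-classification head over encoder output.
          encoder_states: (time, d_model) hidden states for one clip."""
          projected = encoder_states @ w_proj    # (time, proj_size) projection
          pooled = projected.mean(axis=0)        # average over the time axis
          return pooled @ w_cls                  # (num_labels,) logits

      # Illustrative sizes: 1500 frames, d_model=384, proj=256, 8 labels.
      rng = np.random.default_rng(0)
      logits = classify_audio(rng.normal(size=(1500, 384)),
                              rng.normal(size=(384, 256)),
                              rng.normal(size=(256, 8)))
      ```

      The highest-logit index would be the predicted class (e.g. a language in the FLEURS setup mentioned above).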
      7c393181